\begin{document} \begin{abstract} We give a complete classification of smooth quotients of abelian varieties by finite groups that fix the origin. In the particular case where the action of the group $G$ on the tangent space at the origin of the abelian variety $A$ is irreducible, we prove that $A$ is isomorphic to the self-product of an elliptic curve and $A/G\simeq \bb P^n$. In the general case, assuming $\dim(A^G)=0$, we prove that $A/G$ is isomorphic to a direct product of projective spaces.\\ \noindent\textbf{MSC codes:} 14L30, 14K99. \end{abstract} \title{Smooth quotients of abelian varieties by finite groups} \section{Introduction} Quotients of abelian varieties by finite groups have appeared in many dif\-fer\-ent contexts and topics of research. For example, in \cite{KL} Koll\'ar and Larsen study groups acting on simple abelian varieties in dimension greater than or equal to 4, and prove that the quotient has canonical singularities and Kodaira dimension 0. This is done in the context of studying quotients of Calabi-Yau varieties by finite groups. In \cite{IL}, Im and Larsen study the existence of rational curves lying on quotients of abelian varieties by finite groups, and they find a condition on the group that implies that rational curves actually exist on the quotient. Along another line, in \cite{Yoshi} Yoshihara initiates the study of Galois em\-bed\-dings of varieties, where he asks when a projective variety embedded into projective space admits a finite linear projection that is a Galois morphism. In particular, the existence of a Galois embedding implies that the variety has a finite group of automorphisms such that the quotient variety is isomorphic to projective space. Yoshihara finishes the paper by analyzing the case of abelian surfaces. 
In \cite{Auff}, the first author generalizes Yoshihara's results to arbitrary dimension, and proves that if the quotient of an abelian variety by a finite group is projective space, then the abelian variety is isogenous to the self-product of an elliptic curve. As a matter of fact, when there is an action of an irreducible finite subgroup of $\mathrm{GL}(T_0(A))$ with Schur index 1 on an abelian variety $A$, then $A$ is isogenous to the self-product of an elliptic curve, as was proven in \cite{PopovZ}. These results are in some sense opposite to the work done by Koll\'ar and Larsen in \cite{KL}. These examples show that quotients of abelian varieties by finite groups have indeed garnered attention in varied contexts within algebraic geometry. On the other hand, group actions on abelian varieties over ${\mathbb{C}}$ lead to the study of finite-dimensional complex representations via their universal covering space, and vice versa. In this sense, a classic article by Looijenga relates root systems and self-products of elliptic curves in \cite{Looijenga}. There is also work by Popov \cite{Popov} and Tokunaga-Yoshida \cite{TY} on complex crystallographic reflection groups, which are extensions $\Gamma$ of a finite complex reflection group $G$ by a $G$-stable lattice $\Lambda$ in ${\mathbb{C}}^n$. In \cite{TY} the authors study the corresponding quotient ${\mathbb{C}}^n/\Gamma$ for $n=2$ and in \cite{BS}, Bernstein and Schwarzman do the same in arbitrary dimension for complex crystallographic groups of Coxeter type. Note that such quotients correspond to the quotient of the abelian variety $A={\mathbb{C}}^n/\Lambda$ by $G$. 
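In formulas: since $\Lambda$ is normal in $\Gamma$ with $\Gamma/\Lambda\cong G$, the quotient may be taken in two steps,
\[{\mathbb{C}}^n/\Gamma\cong\left({\mathbb{C}}^n/\Lambda\right)/(\Gamma/\Lambda)=A/G.\]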
However, for a given finite complex reflection group $G$, \emph{not every $G$-stable lattice comes from a complex crystallographic reflection group} and hence the study of smooth quotients of abelian varieties remains an open question.\\ The purpose of this paper is to give a full classification of smooth quotients of abelian varieties by finite groups in the particular case in which the group fixes the origin. Our main theorem states the following: \begin{theorem}\label{classification} Let $A$ be an abelian variety of dimension $n\geq 3$, and let $G$ be a (non-trivial) finite group of automorphisms of $A$ that fix the origin. Then the following conditions are equivalent: \begin{itemize} \item[(1)] $A/G$ is smooth and the action of $G$ on $T_0A$ is irreducible. \item[(2)] $A/G$ is smooth of Picard number 1. \item[(3)] $A/G\cong\mathbb{P}^n$. \item[(4)] There exists an elliptic curve $E$ such that $A\cong E^n$ and $(A,G)$ satisfies exactly one of the following: \begin{enumerate}[label=(\alph*)] \item $G\cong C^n\rtimes S_n$ where $C$ is a non-trivial (cyclic) subgroup of automorphisms of $E$ that fix the origin; here the action of $C^n$ is coordinatewise and $S_n$ permutes the coordinates.\label{ex1} \item $G\cong S_{n+1}$ and acts on $$A\cong\{(x_1,\ldots,x_{n+1})\in E^{n+1}:x_1+\cdots+x_{n+1}=0\}$$ by permutations.\label{ex2} \end{enumerate} \end{itemize} \end{theorem} The two cases found in item $(4)$ of the above theorem were studied in detail in \cite{Auff}, where it was proven that both examples give projective space as quotients. This gives the proof of $(4)\Rightarrow(3)$. Our theorem shows that these are the only cases that give smooth quotients in dimension $n\geq 3$. Throughout the paper we will refer to these two examples as Example \ref{ex1} and Example \ref{ex2}, respectively. Note that the case of dimension $n=1$ is obvious: every pair $(A,G)$ gives $\bb P^1$ as a quotient. 
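For instance, for $G=\{\pm 1\}$ acting on an elliptic curve $A$, the quotient map $A\to A/G$ has degree 2 and is branched exactly at the four 2-torsion points, so the Riemann-Hurwitz formula gives
\[2g(A)-2=2\big(2g(A/G)-2\big)+4,\]
whence $g(A/G)=0$ and $A/G\cong\bb P^1$ (classically realized by the Weierstrass $\wp$-function); the cyclic groups of order $3$, $4$ and $6$ are handled in the same way.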
For $n=2$, according to Yoshihara (cf.~\cite{Yoshi}), this classification was already done by Tokunaga and Yoshida in \cite{TY}. The latter classifies 2-dimensional complex crystallographic reflection groups. How\-ever, as stated above, these do not cover all possible $G$-stable lattices and hence not all possible group actions on abelian surfaces. The classification in this case was thus incomplete, but was recently achieved by P.~Quezada and the authors in \cite{Pablo}. The outcome is that, in the irreducible case, there is only one example different from Examples \ref{ex1} and \ref{ex2} giving a smooth quotient: it is the pair $(A,G)$ with $A=E^2$ for $E={\mathbb{C}}/{\mathbb{Z}}[i]$ and $G$ is the order 16 subgroup of $\mathrm{GL}_2({\mathbb{Z}}[i])$ generated by: \[\left\{\begin{pmatrix} -1 & 1+i \\ 0 & 1\end{pmatrix}\right.,\, \begin{pmatrix} -i & i-1 \\ 0 & i\end{pmatrix},\, \left.\begin{pmatrix} -1 & 0 \\ i-1 & 1\end{pmatrix} \right\},\] acting on $A$ in the obvious way.\\ An interesting corollary, which was a first motivation for writing this paper, is the following: \begin{corollary} If $G$ is a finite group that acts on an abelian variety $A$ such that the elements of $G$ fix the origin and $A/G\cong\mathbb{P}^n$, then $A$ is isomorphic to the self-product of an elliptic curve. \end{corollary} The general case is quickly reduced to the irreducible case. \begin{theorem}[Cf.~Theorem \ref{thm red irred}]\label{thm red irred intro} Let $G$ be a group that acts by algebraic homomorphisms on an abelian variety $A$ such that $A/G$ is smooth. Assume that $\dim(A^G)=0$. Then $G=\prod_{i=1}^r G_i$, $A=\prod_{i=1}^r A_i$ and each pair $(A_i,G_i)$ satisfies the equivalent conditions from Theorem \ref{classification} above. 
\end{theorem} When $A^G$ has positive dimension, the situation does not necessarily split, but we can still describe the quotient $A/G$ as a fibration over an abelian variety with smooth fibers that are isomorphic to the quotients in Theorem \ref{thm red irred intro}. Actually, we prove in the general case that $A/G$ is smooth if and only if $P_G /G$ is smooth, where $P_G $ is the complementary abelian subvariety of the connected component of $A^G$ that contains 0, cf.~Proposition \ref{And smooth}. The notation $P_G $ comes from the fact that in the case that $A$ is the Jacobian of a curve $X$ and $G$ is a group of automorphisms of $X$, $P_G $ is the Prym variety associated to the morphism $X\to X/G$.\\ As an application of our main theorems, we expect to give in a subsequent paper a classification of quotients of principally polarized abelian varieties by groups preserving the divisor class of the polarization. This will be applicable to the specific case of Jacobian varieties with group action coming from an action on the corresponding curve. As a final application, B.~Lim pointed out to us that our classification would be a key ingredient in solving a conjecture by Polishchuk and Van den Bergh (cf.~\cite[Conj.~A]{PVdB}) on semiorthogonal decompositions of categories of equivariant coherent sheaves in the case of abelian varieties.\\ The structure of this paper is as follows: In Section \ref{sec group actions}, we cover some basic properties of abelian varieties with a finite group action and smooth quotient. In particular, we prove in Section \ref{generalities} the implication $(2)\Rightarrow(1)$ from Theorem \ref{classification}, while Section \ref{sect isogeny} is dedicated to the study of $G$-equivariant isogenies in this context, which are used in the sequel. In Section \ref{sec red irred} we prove Theorem \ref{thm red irred intro} and we briefly look at the most general case in which $A^G$ may have positive dimension. 
Section \ref{reflectiongroup} is dedicated to the proof of $(1)\Rightarrow(4)$ (note that $(3)\Rightarrow(2)$ is evident and $(4)\Rightarrow(3)$ was established in \cite{Auff}, so this concludes the proof of Theorem \ref{classification}). This is the heart of the article and therefore its longest and most technical part. Here we use Shephard-Todd's classification of irreducible complex reflection groups in order to study them case by case. The case of the symmetric group $S_{n}$ is studied in Section \ref{sec Sn} and the infinite family of groups $G(m,p,n)$ for $m\geq 2$ is studied in Section \ref{sec Gmpn}. Finally, Section \ref{sec sporadic} is dedicated to the remaining sporadic cases.\\ \noindent\textit{Acknowledgements:} We would like to thank Anita Rojas and Giancarlo Urz\'ua for interesting discussions and Antonio Behn for help with the computer program SageMath. \section{Groups acting on abelian varieties with smooth quotient}\label{sec group actions} \subsection{Generalities}\label{generalities} Let $A$ be an abelian variety of dimension $n$ and let $G$ be a group of automorphisms of $A$ that fix the origin, such that the quotient variety $A/G$ is smooth. By the Chevalley-Shephard-Todd Theorem, the stabilizer in $G$ of each point in $A$ must be generated by pseudoreflections; that is, elements that fix a divisor pointwise, such that the divisor passes through the point. In particular, $G$ is generated by pseudoreflections and $G$ acts on the tangent space at the origin $T_0(A)$ (this is the analytic representation). In this context, a pseudoreflection is an element that fixes a hyperplane pointwise. We will often abuse notation and regard $G$ as acting on either $A$ or $T_0(A)$; it will be clear from the context which action we are considering. In what follows, let $\mathcal{L}$ be a fixed $G$-invariant polarization on $A$ (take the pullback of an ample class on $A/G$, for example). 
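These notions can be illustrated on the order 16 example from the introduction. The following sketch (plain Python, used only as a sanity check and not in any proof; the arithmetic is exact since all Gaussian-integer entries stay small) generates the group from the three displayed matrices, confirms its order, and checks that it is generated by its pseudoreflections, detected as the elements $\sigma\neq 1$ with $\det(1-\sigma)=0$:

```python
def mul(a, b):
    """Product of 2x2 matrices stored as ((a11, a12), (a21, a22))."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def generate(gens):
    """Closure of gens under multiplication; a group, as all gens have finite order."""
    group, frontier = {I2}, [I2]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = mul(g, h)
            if gh not in group:
                group.add(gh)
                frontier.append(gh)
    return group

def is_pseudoreflection(g):
    """A 2x2 matrix sigma != 1 fixes a line iff det(1 - sigma) = 0."""
    det = (1 - g[0][0]) * (1 - g[1][1]) - g[0][1] * g[1][0]
    return g != I2 and det == 0

I2 = ((1, 0), (0, 1))
gens = [((-1, 1 + 1j), (0, 1)),      # the three matrices displayed
        ((-1j, -1 + 1j), (0, 1j)),   # in the introduction
        ((-1, 0), (-1 + 1j, 1))]

G = generate(gens)
reflections = [g for g in G if is_pseudoreflection(g)]
assert len(G) == 16                  # the order claimed in the text
assert generate(reflections) == G    # G is generated by its pseudoreflections
```

Here $\det(1-\sigma)=0$ detects a fixed line because the matrices are $2\times 2$; in higher dimension one would instead ask that $1-\sigma$ have rank 1.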
For $\sigma$ a pseudoreflection in $G$ of order $r$, define \begin{align*} D_\sigma&:=\mathrm{im}(1+\sigma+\cdots+\sigma^{r-1}),\\ E_\sigma&:=\mathrm{im}(1-\sigma). \end{align*} These are both abelian subvarieties of $A$. \begin{lemma}\label{lemma Esigma Dsigma} We have the following: \begin{itemize} \item[1.] $D_\sigma$ is the connected component of $\fix(\sigma):=\ker(1-\sigma)$ that contains 0 and $E_\sigma$ is the complementary abelian subvariety of $D_\sigma$ with respect to $\mathcal{L}$. In particular, $D_\sigma$ is a divisor and $E_\sigma$ is an elliptic curve. \item[2.] $\sigma$ acts on $E_\sigma$ and hence $r\in\{2,3,4,6\}$. \item[3.] For $a\not\equiv 0\pmod r$, $E_{\sigma^a}=E_{\sigma}$ and $D_{\sigma^a}=D_{\sigma}$. \item[4.] $D_\sigma\cap E_\sigma$ consists of $2$-torsion points for $r=2,4$, of $3$-torsion points for $r=3$ and $D_\sigma\cap E_\sigma=0$ for $r=6$. \end{itemize} \end{lemma} \begin{proof} Since \[(1+\sigma+\cdots+\sigma^{r-1})(1-\sigma)=(1-\sigma)(1+\sigma+\cdots+\sigma^{r-1})=1-\sigma^r=0,\] we see that $D_\sigma\subset\ker(1-\sigma)$ and $E_\sigma\subset\ker(1+\sigma+\cdots+\sigma^{r-1})$. If $x\in\ker(1-\sigma)$, then $$rx=x+\sigma(x)+\cdots+\sigma^{r-1}(x)=(1+\sigma+\cdots+\sigma^{r-1})(x)\in D_\sigma,$$ and so after possibly adding an $r$-torsion point to $x$ we obtain that it lies in $D_\sigma$. Therefore both spaces are of the same dimension and, since $D_\sigma$ is irreducible, we get that it corresponds to the connected component containing 0. To show that $E_\sigma$ is the complementary abelian subvariety of $D_\sigma$, let $H$ be the first Chern class of $\mathcal{L}$, seen as a Hermitian form $H$ on $T_0(A)=\mathbb{C}^n$. Then, since $\sigma$ preserves the numerical class of $\mathcal{L}$, we have that $\sigma^tH=H\sigma^{-1}$. 
Hence $$\left(\sum_{i=0}^{r-1}\sigma^i\right)^tH(I_n-\sigma)=H\left(\sum_{i=0}^{r-1}\sigma^{-i}\right)(I_n-\sigma)=0.$$ This shows that the vector subspaces of $T_0(A)$ induced by $D_\sigma$ and $E_\sigma$ are orthogonal with respect to $H$; i.e.~they are complementary abelian subvarieties. This proves 1. Since $\sigma$ and $(1-\sigma)$ clearly commute, we see that $\sigma(E_\sigma)=E_\sigma$ by definition. This implies immediately that $r\in\{2,3,4,6\}$. This proves 2. For the third assertion, we know that both $D_\sigma$ and $D_{\sigma^a}$ are irreducible divisors. But clearly $\ker(1-\sigma^a)\supset\ker(1-\sigma)$ and hence $D_\sigma=D_{\sigma^a}$. Complementarity implies then that $E_\sigma=E_{\sigma^a}$. Finally, note that since $D_\sigma\subset\ker(1-\sigma)$ and $E_\sigma\subset\ker(1+\sigma+\cdots+\sigma^{r-1})$, for every $x\in D_\sigma\cap E_\sigma$ we have (using $\sigma(x)=x$, as $x\in D_\sigma$) \[rx=x-\sigma(x)+x+\sigma(x)+\cdots+\sigma^{r-1}(x)=(1-\sigma)(x)+(1+\sigma+\cdots+\sigma^{r-1})(x)=0.\] This proves that $D_\sigma\cap E_\sigma$ consists of $r$-torsion points. Using the third assertion for $a=2,3$ we prove 4. \end{proof} We are now in a position to prove that $(2)\Rightarrow(1)$ in Theorem \ref{classification}; the proof goes along the lines of \cite[Rem.~2.1]{Auff}. \begin{proposition} Let $G$ be a finite group acting on an abelian variety $A$ via algebraic homomorphisms. Assume that $A/G$ is smooth and the Picard number of $A/G$ is 1. Then the analytic representation of $G$ is irreducible. \end{proposition} \begin{proof} Assume that $A/G$ is of Picard number 1. We will first show that $G$ does not leave a non-trivial abelian subvariety invariant. 
Indeed, let $X\subseteq A$ be an abelian subvariety on which $G$ acts, and let $N_X\in\mbox{End}(A)$ be its norm endomorphism with respect to some fixed $G$-invariant polarization $\mathcal{L}$ ($N_X$ on tangent spaces is just the orthogonal projection onto the linear subspace that defines $X$ with respect to the first Chern class of $\mathcal{L}$). Now \[N_X^*\mathcal{L}\in\mbox{NS}(A)_\mathbb{Q}^G\cong\mbox{NS}(A/G)_\mathbb{Q}\cong\mathbb{Q},\] where the subscript $\mathbb{Q}$ indicates that we extended scalars to $\bb Q$. Since $\mathcal{L}\in\mbox{NS}(A)_\mathbb{Q}^G$, we have that $N_X^*\mathcal{L}$ is a rational multiple of $\mathcal{L}$ and therefore the self-intersection number $(N_X^*\mathcal{L})^n$ is nonzero. However, by \cite[Prop.~3.1]{ALR}, if $X$ is non-trivial then this number must be zero. Therefore $X$ must be trivial. Now let $W$ be a $G$-stable linear subspace of $T_0(A)$, and let $\sigma\in G$ be a pseudoreflection. Since the image of $1-\sigma$ is an elliptic curve on $A$ induced, say, by a linear subspace $\langle z_0\rangle\leq T_0(A)$, we have that for every $z\in W$, $(1-\sigma)(z)=\lambda_z z_0$ for some $\lambda_z\in\mathbb{C}$. If $\lambda_z\neq 0$ for some $z\in W$, then $z_0\in W$. Now, since the translates of $z_0$ by $G$ all lie in $W$ and $\sum_{\tau\in G}\tau(E_\sigma)=A$ by the previous discussion, we have that $W=T_0(A)$. Assume now that $\lambda_z=0$ for every $z\in W$ and every pseudoreflection $\sigma\in G$. In particular, $W$ is fixed by every $\sigma$, and since these generate the group, we have that $W$ is fixed pointwise by $G$. Now since $G$ does not fix pointwise any non-trivial abelian subvariety of $A$, we have that $$\bigcap_{\tau\in G}\ker(1-\tau)\subseteq A$$ is finite and so its preimage in $T_0(A)$ is discrete. However $W$ is contained in this preimage, and so it must be trivial. 
\end{proof} \subsection{$G$-equivariant isogenies}\label{sect isogeny} We will consider now a new abelian variety $B$ equipped with a $G$-equivariant isogeny to $A$, which we will call a $G$-isogeny from now on. Let $\Lambda_A$ denote the lattice in ${\mathbb{C}}^n$ such that $A={\mathbb{C}}^n/\Lambda_A$. Let $\Lambda_B\subseteq\Lambda_A$ be a $G$-invariant sublattice, and let $B:={\mathbb{C}}^n/\Lambda_B$ be the induced abelian variety, along with the $G$-isogeny \[\pi:B\to A,\] whose analytic representation is the identity. Note that this implies that $\sigma\in G$ is a pseudoreflection of $B$ if and only if it is a pseudoreflection of $A$. We may then consider the subvarieties $E_\sigma,D_\sigma\subset A$ defined as above, which we will denote by $E_{\sigma,A} $ and $D_{\sigma,A}$. Now, we can do the same thing for $B$ and hence we obtain subvarieties $E_{\sigma,B},D_{\sigma,B}\subset B$. Note that, by definition, $\pi$ sends $E_{\sigma,B}$ to $E_{\sigma,A}$ and $D_{\sigma,B}$ to $D_{\sigma,A}$.\\ Define $\Delta:=\ker(\pi)$. Since $\pi$ is $G$-equivariant, $G$ acts on $\Delta$ and hence we may consider the group $\Delta\rtimes G$. This group acts on $B$ in the obvious way: $\Delta$ acts by translations and $G$ by automorphisms. In particular, we see that the quotient $B/(\Delta\rtimes G)$ is isomorphic to $A/G$. Our goal is to restrict the structure of $B/G$ and $\Delta$ as much as we can and to prove that the latter must be trivial in several cases. Fix then a pseudoreflection $\sigma\in G$ of order $r$ and consider the subvarieties $E_{\sigma,A},D_{\sigma,A}\subset A$ and $E_{\sigma,B},D_{\sigma,B}\subset B$. Define moreover $F_{\sigma,A}=E_{\sigma,A}\cap D_{\sigma,A}$ and $F_{\sigma,B}$ similarly. Then the isogeny $\pi:B\to A$ sends $F_{\sigma,B}$ to $F_{\sigma,A}$. \begin{lemma} Assume that the map $E_{\sigma,B}\to E_{\sigma,A}$ is injective and that the map $F_{\sigma,B}\to F_{\sigma,A}$ is surjective. Then $\Delta\subset D_{\sigma,B}$. 
\end{lemma} \begin{proof} Since $E_{\sigma,B}$ and $D_{\sigma,B}$ generate $B$, every $z\in\Delta=\ker(\pi)$ can be written as $z=x+y$ with $x\in E_{\sigma,B}$ and $y\in D_{\sigma,B}$. Then $\pi(z)=0$ implies $\pi(x)=-\pi(y)\in F_{\sigma,A}$. But since $F_{\sigma,B}\to F_{\sigma,A}$ is surjective and $E_{\sigma,B}\to E_{\sigma,A}$ is injective, we have that $x\in E_{\sigma,B}\cap\pi^{-1}(F_{\sigma,A})=F_{\sigma,B}$. Thus $x\in D_{\sigma, B}$ and hence $z=x+y\in D_{\sigma,B}$, so that $\Delta\subset D_{\sigma,B}$. \end{proof} Since all conjugates of a pseudoreflection are pseudoreflections and everything is $G$-equivariant, we immediately get the following result. \begin{proposition}\label{prop Delta and D} Let $\sigma\in G$ be a pseudoreflection and assume that the map $E_{\sigma,B}\to E_{\sigma,A}$ is injective and that the map $F_{\sigma,B}\to F_{\sigma,A}$ is surjective. Then the subgroup $\Delta=\ker(\pi)$ is contained in $D_{\tau\sigma\tau^{-1},B}$ for \emph{every} $\tau\in G$.\qed \end{proposition} We conclude this section by studying pseudoreflections in $\Delta\rtimes G$. \begin{lemma}\label{main lemma} Let $\sigma\in\Delta\rtimes G$ be a pseudoreflection. Then $\sigma=(t,\tau)$ with $\tau\in G$ a pseudoreflection and $t\in\Delta\cap E_{\tau,B}$. \end{lemma} \begin{proof} Let $t\in\Delta$ and $\tau\in G$ be such that $\sigma=(t,\tau)\in\Delta\rtimes G$. This element acts on $B$ sending $x$ to $\tau(x)+t$. By definition, $\sigma$ must fix a divisor pointwise, that is, there is a subvariety $C\subset B$ of codimension 1 such that $x=\tau(x)+t$ for all $x\in C$, or equivalently, $x\in (1-\tau)^{-1}(t)$. But since $1-\tau\in\End(B)$, we see that $C$ is a translate of $\ker(1-\tau)$, which is a divisor if and only if $\tau$ is a pseudoreflection and $t\in (1-\tau)(B)=E_{\tau,B}$. \end{proof} \subsection{Reduction to irreducible representations}\label{sec red irred} Let $G$ be a group that acts by algebraic homomorphisms on an abelian variety $A$ such that $A/G$ is smooth. 
In particular, the analytic representation of $G$ on $T_0(A)$ is a finite complex reflection group. It is well known (cf.~for instance \cite{ST} or \cite[\S1.4]{Popov}) that $G\cong G_1\times\cdots\times G_r$ and $T_0(A)=W_0\oplus W_1\oplus\cdots\oplus W_r$ where \begin{itemize} \item $W_i$ is an irreducible complex representation of $G_i$ that makes $G_i$ an irreducible finite complex reflection group for $i>0$; \item $G_j$ acts trivially on $W_i$ for $i\neq j$. \end{itemize} In particular, $W_0=T_0(A)^G$. \begin{lemma}\label{lemma desc Ai} The subspace $W_i$ induces a $G$-stable abelian subvariety $A_i$ of $A$ such that $G_j$ acts trivially on $A_i$ for $i\neq j$. Moreover, $A_i/G=A_i/G_i$ is smooth. \end{lemma} \begin{proof} Since $W_0=T_0(A)^G$, $A_0$ is the neutral connected component of $A^G$ and $A_0/G=A_0$. Assume now $i>0$, let $\sigma\in G_i$ be a pseudoreflection and let $L$ be the linear subspace of $T_0(A)$ that induces $E_\sigma$. It is clear that $L\subseteq W_i$, since $L=(1-\sigma)(T_0(A))$. Since the representation of $G_i$ on $W_i$ is irreducible, we have that $$W_i=\sum_{\tau\in G}(\tau(L)).$$ Therefore, $W_i$ is the tangent space of the abelian subvariety $A_i=\sum_{\tau\in G}\tau(E_\sigma)$. It is clear that $A_i$ is $G$-stable and $G_j$ acts trivially on $A_i$ for $i\neq j$, so that $A_i/G=A_i/G_i$. Moreover, since $\mathrm{Stab}_{G_i}(x)=\mathrm{Stab}_G(x)\cap G_i$ for $x\in A_i$ and every pseudoreflection in $G$ belongs to some $G_j$, it is easy to see that $\mathrm{Stab}_{G_i}(x)$ is generated by pseudoreflections in $G_i$ whenever $\mathrm{Stab}_G(x)$ is generated by pseudoreflections in $G$. This is the case by the Chevalley-Shephard-Todd Theorem because $A/G$ is smooth and therefore $A_i/G_i$ is smooth. \end{proof} We can now prove that, whenever $A_0$ is trivial, it is enough to understand the case when the action of $G$ on $T_0(A)$ is irreducible. 
\begin{theorem}\label{thm red irred} Let $G$ be a group that acts by algebraic homomorphisms on an abelian variety $A$ such that $A/G$ is smooth. Assume that $\dim(A^G)=0$. Then $A$ is the direct product of the $A_i$, defined as above. In particular, \[A/G\cong A_1/G_1\times\cdots\times A_r/G_r.\] \end{theorem} We will need the following small result on irreducible finite complex reflection groups: \begin{lemma}\label{lem surj} Let $G$ be a finite complex reflection group acting irreducibly on ${\mathbb{C}}^n$. Then there exists $\tau\in G$ such that $(1-\tau)$ is surjective. \end{lemma} \begin{proof} This amounts to finding an element $\tau\in G$ such that 1 is not an eigenvalue of $\tau$. Now this follows directly from \cite[Thm.~5.4]{ST}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm red irred}] Consider the subvarieties $A_i\subset A$ from Lemma \ref{lemma desc Ai} for $i\geq 1$ ($A_0$ is trivial by the hypothesis on $A^G$). Then there is a natural $G$-isogeny \[B:=A_1\times\cdots\times A_r\to A,\] given by the sum in $A$. In particular, the kernel of this isogeny is \[\Delta:=\left\{(a_1,\ldots,a_r)\in A_1\times\cdots\times A_r\mid \sum_{i=1}^r a_i=0\right\}.\] We claim that $\Delta$ is fixed pointwise by $G$. Indeed, since $a_i\in A_i$, we know that $G_j$ acts trivially on it for $j\neq i$; but since $a_i=-\sum_{j\neq i}a_j\in \sum_{j\neq i}A_j$, we also know that $G_i$ acts trivially on it (since it acts trivially on every $A_j$ for $j\neq i$). We see then that $G$ acts trivially on every coordinate of every element of $\Delta$, which proves the claim. Thus, $\Delta\times G$ acts on $B$ and hence $A/G$ is isomorphic to $B/(\Delta\times G)$, i.e. \[A/G\cong [(A_1/G_1)\times\cdots\times(A_r/G_r)]/\Delta.\] All we need to prove now is that $\Delta$ has to be trivial. 
Assume then that this is not the case and note that the action of $(a_1,\ldots,a_r)\in\Delta$ on $X:=(A_1/G_1)\times\cdots\times(A_r/G_r)$ corresponds coordinatewise to the action of $a_i$ on $A_i/G_i$ (which is well defined since $a_i$ is $G_i$-invariant and thus its action commutes with that of $G_i$). Now, the action of $a_i$ on $A_i/G_i$ always has a fixed point $p_i$. Indeed, by Lemma \ref{lem surj} we know that there exists $\tau\in G_i$ such that $(1-\tau)$ is surjective. Thus, there exists $x_i\in A_i$ such that $x_i-\tau(x_i)=a_i$, which implies that the image $p_i$ of $x_i$ in $A_i/G_i$ is fixed by $a_i$. We see then that $(p_1,\ldots,p_r)\in X$ is a point that is fixed by $(a_1,\ldots,a_r)$ and thus the action of $\Delta$ on $X$ is not free. It is also a non-trivial action since the image of $0\in B$ in $X$ is clearly moved by $\Delta$. Since $A/G=X/\Delta$ is smooth, the Chevalley-Shephard-Todd Theorem tells us then that every stabilizer of this action has to be generated by pseudoreflections. Now this is impossible since, for every non-trivial $(a_1,\ldots,a_r)\in\Delta$, its fixed locus in $X$ corresponds to the product of the fixed loci of the $a_i$ in each $A_i/G_i$. We see then that if any element in $\Delta$ is a pseudoreflection, it must fix all but one $A_i/G_i$ (otherwise the fixed locus would not be a divisor), which amounts to $a_i=0$ for all but one $i$, and this is impossible since $\sum_{i=1}^ra_i=0$. This proves that $\Delta$ is trivial. \end{proof} Let us consider now the ``degenerate'' case in which $\dim(A^G)>0$. \begin{proposition}\label{And smooth} Let $G$ be a group that acts by algebraic homomorphisms on an abelian variety $A$. Let $A_0$ be the connected component of $A^G$ containing 0 and let $P_G $ be its complementary abelian subvariety with respect to a $G$-invariant polarization. Then there exists a fibration $A/G\to A_0/(A_0\cap P_G )$ with fibers isomorphic to $P_G /G$. Moreover, $A/G$ is smooth if and only if $P_G /G$ is smooth. 
\end{proposition} \begin{proof} Consider as in the last proof the natural $G$-isogeny $A_0\times P_G \to A$ and denote its kernel by $\Delta$. This can be rewritten as \[A\cong A_0\stackrel{\Delta}{\times}P_G .\] Now, the same argument from the proof above shows that $\Delta$ is fixed pointwise by $G$. In particular, the actions of $G$ and $\Delta$ on $P_G $ commute and it is easy to see then that \[A/G\cong A_0\stackrel{\Delta}{\times} (P_G /G).\] Recalling that $\Delta\cong A_0\cap P_G $, we may thus see $A/G$ as a fibration over the abelian variety $A_0/(A_0\cap P_G )$ with fibers isomorphic to $P_G /G$. Finally note that, since the action of $\Delta$ on $A_0$ is free, the quotient $A/G=(A_0\times P_G /G)/\Delta$ is smooth whenever $P_G /G$ is. On the other hand, by the same argument we used for $A_i/G_i$, $P_G /G$ is smooth if $A/G$ is. \end{proof} Note that this fibration is non-trivial in general, as shown by the following example: Let $E$ be an elliptic curve and let $e\in E[2]$. Define $B=E\times E$ and let $G=\{\pm 1\}$ act on the second factor. Note in particular that $(e,e)$ is $G$-invariant. Put then $A=B/\langle (e,e)\rangle$ and denote by $\pi:B\to A$ the projection. We have that $A_0=\pi(E\times\{0\})$, $P_G =\pi(\{0\}\times E)$, $B=A_0\times P_G $ and $\Delta=\langle (e,e)\rangle$. We see then that \[A/G\cong B/(\Delta\times G)\cong (B/G)/\Delta\cong (E\times\bb P^1)/\Delta,\] where, up to a base change in $\bb P^1$, $\Delta$ acts on $E\times\bb P^1$ by sending $(x,y)$ to $(x+e,-y)$. Looking at the first coordinate, we see then that the action is free and thus defines by \'etale descent a non-trivial $\bb P^1$-bundle over the elliptic curve $E/\langle e\rangle$. 
\section{Quotients by irreducible finite complex reflection groups}\label{reflectiongroup} Given the results from the last section, we will now concentrate on group actions on abelian varieties that satisfy the following condition (which is condition (1) from Theorem \ref{classification}): \begin{equation}\label{eq cond star} A/G\text{ is smooth and the analytic representation of }G\text{ is irreducible.}\tag{$\star$} \end{equation} If the pair $(A,G)$ satisfies \eqref{eq cond star}, we see that the analytic representation makes $G$ an irreducible finite complex reflection group, in the sense of Shephard-Todd \cite{ST}. These groups were completely classified by Shephard and Todd in \cite{ST}, where they discovered that any finite irreducible complex reflection group is either a group $G(m,p,n)$ depending on $m,p,n\in\mathbb{Z}_{>0}$ where $p\mid m$ and $n\geq 1$, or is one of 34 sporadic cases. The group $G(m,p,n)$ consists of the semidirect product $H\rtimes S_n$ of the abelian group \begin{equation}\label{eq def H} H=H(m,p,n)=\{(\zeta_m^{a_1},\ldots,\zeta_m^{a_n})\mid a_1+\cdots+a_n\equiv0\pmod p\}\subset \mu_m^n \end{equation} with the symmetric group $S_n$, where $\zeta_m$ is a primitive $m$-th root of unity and $S_n$ acts on $H$ by permuting the coordinates in the obvious way. For $m=p=1$, $G(1,1,n)$ is just the symmetric group on $n$ letters, and acts irreducibly on an $(n-1)$-dimensional complex vector space. For $m>1$, $G(m,p,n)$ acts irreducibly on an $n$-dimensional complex vector space. The purpose of this section is to describe which of these actions actually appear on abelian varieties of dimension $n\geq 3$ such that \eqref{eq cond star} is satisfied. In the following subsections we will analyze each case of the Shephard-Todd classification. In particular, in this section we prove $(1)\Rightarrow(4)$ of Theorem \ref{classification}. 
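For orientation on sizes: since $p\mid m$, the congruence defining $H$ in \eqref{eq def H} cuts the $m^n$ exponent vectors down by a factor of $p$, so $|G(m,p,n)|=m^n\,n!/p$. A quick enumeration confirming this (a sketch in Python; the names `H` and `order_G` are ours):

```python
from itertools import product
from math import factorial

def H(m, p, n):
    """Exponent vectors (a_1,...,a_n), a_i in Z/m, with a_1+...+a_n = 0 mod p."""
    return [a for a in product(range(m), repeat=n) if sum(a) % p == 0]

def order_G(m, p, n):
    """|G(m,p,n)| = |H(m,p,n)| * n!  (the semidirect product with S_n)."""
    return len(H(m, p, n)) * factorial(n)

# The congruence cuts the m^n exponent vectors down by a factor of p:
assert all(len(H(m, p, n)) == m**n // p
           for m in (2, 3, 4, 6) for p in (1, 2, 3, 6) for n in (2, 3)
           if m % p == 0)
assert order_G(1, 1, 4) == 24    # G(1,1,n) is just the symmetric group S_n
assert order_G(4, 2, 3) == 192   # = 4^3 * 3! / 2
```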
\subsection{The case $m=p=1$: the standard representation of $S_{n+1}$}\label{sec Sn} Let $G(1,1,n+1)=S_{n+1}$ act on an abelian variety $A$ of dimension $n\geq 2$ in such a way that its action on $T_0(A)$ is the standard one. Let $\sigma=(1\, 2)$ and $E=E_{\sigma}$ be induced by a line $L_\sigma\subseteq T_0(A)$, and define the lattice \[\Lambda_B:=\sum_{\tau\in S_{n+1}}\tau(L_\sigma\cap\Lambda_A).\] This gives us a $G$-invariant sublattice of $\Lambda_A$, and we therefore get a $G$-equivariant isogeny $\pi:B\to A$ with kernel $\Delta$. Applying this construction to Example \ref{ex2}, we see that it gives the whole lattice and hence corresponds to Example \ref{ex2} itself. We can thus see $B$ as \[B=\{(x_1,\ldots,x_{n+1})\in E^{n+1}\mid x_1+\cdots+x_{n+1}=0\}\] and $S_{n+1}$ acts coordinatewise in the natural way. Using the notations from Section \ref{sect isogeny}, we see by inspection that $F_{\sigma,B}=E_{\sigma,B}[2]\cong E[2]$, hence the map $\pi:F_{\sigma,B}\to F_{\sigma,A}$ is surjective since by Lemma \ref{lemma Esigma Dsigma} we have $F_{\sigma,A}\subset E_{\sigma,A}[2]\cong E[2]$. Moreover, the induced map $E_{\sigma,B}\to E_{\sigma,A}$ is injective by construction. Thus, by Proposition \ref{prop Delta and D}, we have that $\Delta$ is contained in the fixed locus of all the conjugates of $\sigma$. In other words, $\Delta$ consists of elements of the form $(x,\ldots,x)\in E^{n+1}$ such that $(n+1)x=0$. Note that this implies that the direct product $\Delta\times G$ acts on $B$. \begin{proposition}\label{prop Sn} Let $n\geq 2$. If $S_{n+1}$ acts on $A$ in such a way that its analytic representation is the standard representation and $(A,S_{n+1})$ satisfies \eqref{eq cond star}, then $A\cong E^n$ and $S_{n+1}$ acts as in Example \ref{ex2}. \end{proposition} \begin{proof} Let $\pi:B\to A$ be the $G$-isogeny defined above. We have to prove then that $\Delta=\{0\}$. 
Let $\bar t=(t,\ldots,t)\in\Delta$ be a non-trivial element and let $\tau\in G$ be an element such that $(1-\tau)$ is surjective (such an element exists by Lemma \ref{lem surj}). Then there exists an element $z\in B$ such that $z-\tau(z)=\bar t$ and thus the stabilizer of $z$ contains the element $(\bar t,\tau)\in\Delta\times G$. Note now that $\Delta\cap E_{\sigma,B}=\{0\}$ for every pseudoreflection $\sigma\in G$. Thus, by Lemma \ref{main lemma}, the only pseudoreflections in $\Delta\times G$ are the transpositions in $G=S_{n+1}$, and so $\mathrm{Stab}_{\Delta\times G}(z)$ cannot be generated by pseudoreflections. Therefore if $\Delta\neq 0$, $A/G$ is not smooth by the Chevalley-Shephard-Todd Theorem, which contradicts condition \eqref{eq cond star}. \end{proof} \subsection{The case of $G(m,p,n)$, $m\geq 2$, $n\geq 3$}\label{sec Gmpn} Now we will study when $G=G(m,p,n)$ acts on an abelian variety $A$ of dimension $n$ for $m\geq2$. \emph{We assume here that $n\geq 3$} (recall that the case of dimension 2 was already dealt with in \cite{Pablo}). Recall that $G=H\rtimes S_n$, where $H\subset \mu_m^n$ is defined in \eqref{eq def H} and it acts coordinatewise on ${\mathbb{C}}^n=T_0(A)$, while $S_n$ permutes the variables in the obvious way. \begin{rem} In what follows, we will try as much as we can to prove results on $G$ without splitting into subcases depending on the value of $p$. Hence, in the following arguments we will only consider elements in $G(m,m,n)\subset G(m,p,n)$, even if in some cases a simpler argument can be found for certain values of $p$. We will also keep all arguments (with one exception) involving at most three coordinates, so that they are all valid for $n\geq 3$. \end{rem} Let $E_i$ be the image of $\mathbb{C}e_i$ in $A$ via the exponential map. We claim that it corresponds to an elliptic curve. Indeed, consider the element $\tau=(1,\zeta_m,\zeta_m^{-1},1,\ldots,1)\in H$ and denote $\rho=1+\tau+\cdots+\tau^{m-1}$. 
Then a direct computation shows that, for $\sigma=(1\, 2)\in S_n\subset G$, $\mathrm{im}(\rho(1-\sigma))={\mathbb{C}} e_1$. This tells us that $E_1=\rho(1-\sigma)(A)$ and hence it corresponds to an elliptic curve. This allows us to prove the following. \begin{lemma}\label{lem m leq 6} Assume that $G$ acts on $A$ as above. Then $m\in\{2,3,4,6\}$ and, if $m\geq 3$, then the curves $E_i\subset A$ have non-trivial automorphisms. \end{lemma} \begin{proof} Consider the curve $E_1\subset A$ defined as above. We see then that the element $(\zeta_m,\zeta_m^{-1},1,\ldots,1)\in H$ induces an automorphism of order $m$ of $E_1$. Therefore $m\in\{2,3,4,6\}$ and, if $m\geq 3$, then $E_1$ has non-trivial automorphisms. The other $E_i$ are obtained from $E_1$ via the action of $S_n$ and hence are isomorphic to it. \end{proof} Now, let $\Lambda_A$ be a lattice for $A$ in $\mathbb{C}^n$. Then $\mathbb{C}e_i\cap\Lambda_A$ corresponds to the lattice of $E_i$ in ${\mathbb{C}}={\mathbb{C}} e_i$. We can thus define the $G$-stable sublattice of $\Lambda_A$ $$\Lambda_B:=\bigoplus_{i=1}^n(\mathbb{C}e_i\cap\Lambda_A).$$ As in Section \ref{sect isogeny}, this defines a $G$-isogeny $\pi:B\to A$. Moreover, we see that $B\cong E_1\times\cdots\times E_n\cong E^n$ and that $\pi|_{E_i}$ is an injection. As in the previous section, let $\Delta$ be the kernel of $\pi$. We will study the different possible quotients $A/G$ by studying the possible quotients $B/(\Delta\rtimes G)$ and thus by studying the possible $\Delta$'s. Let us start with the case of a trivial $\Delta$: \begin{proposition}\label{prop Gmpn p not 1} Let $G=G(m,p,n)$ with $n\geq 2$ act on $B=E^n$ as above. Then the quotient $B/G$ is smooth if and only if $p=1$. \end{proposition} \begin{proof} By Lemma \ref{lem m leq 6}, we know that $m\in\{2,3,4,6\}$. Thus, if $p=1$, the action of $G$ on $B$ is by construction the same as in Example \ref{ex1}, which tells us that $B/G\cong\bb P^n$ and hence it is smooth.\\ Assume now that $p\geq 2$. 
By Lemma \ref{lem m leq 6}, we also know that if $m\neq 2$ then $E$ has non-trivial automorphisms given by multiplication by $\zeta_m$. In particular, $E$ is a very specific curve in each of these cases and it is easy to see that: \begin{itemize} \item if $m=3,6$, then there exists a non-trivial $t\in E[3]$ such that $\zeta_6 t=-t$; \item if $m=4$, then there exists a non-trivial $t\in E[2]$ such that $\zeta_4 t=t$. \end{itemize} Consider one such element $t\in E$ unless $(m,p)\in\{(2,2),(6,2)\}$, in which case take any non-trivial element $t\in E[2]$. Let $(x_3,\ldots,x_{n})\in E^{n-2}$ be a general element. Then, if $\bar x=(t,0,x_3,\ldots,x_{n})\in B=E^n$, we immediately see that an element in $\mathrm{Stab}_G(\bar x)$ must be in $H\subset G$ since the coordinates cannot be permuted, even after applying automorphisms to some coordinates via $H$. A direct computation then tells us that $\mathrm{Stab}_G(\bar x)$ is equal to the (abelian) subgroup of $H\subset G$ given in each case by the following table:\\ \begin{center} \begin{tabular}{ c | l } $(m,p)$ & Generators of $\mbox{Stab}_G(\bar x)$\\\hline (2,2) & $(-1,-1,1,\ldots,1)$\\ (3,3) & $(\zeta_3,\zeta_3^{-1},1,\ldots,1)$\\ (4,2) & $(\zeta_4,\zeta_4,1,\ldots,1)$, $(-1,1,1,\ldots,1)$, $(1,-1,1,\ldots,1)$\\ (4,4) & $(\zeta_4,\zeta_4^{-1},1,\ldots,1)$\\ (6,2) & $(-1,-1,1,\ldots,1)$, $(1,\zeta_3,1,\ldots,1)$\\ (6,3) & $(\zeta_3,\zeta_3^{-1},1,\ldots,1)$, $(1,-1,1,\ldots,1)$\\ (6,6) & $(\zeta_3,\zeta_3^{-1},1,\ldots,1)$ \end{tabular} \end{center} However, we observe that in all cases the first element is not a pseudoreflection, since its fixed locus is of codimension 2. Moreover, the only pseudoreflections in $\mathrm{Stab}_G(\bar x)$ are the other given generators (and their powers) and hence they cannot generate the first one. Therefore, by the Chevalley-Shephard-Todd Theorem, the quotient $B/G$ is not smooth. \end{proof} Let us now consider the case of a non-trivial kernel $\Delta$.
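Before moving on, note that the stabilizer table in the proof above can be verified by brute force. The following sketch is only an illustration (not the authors' code): it checks the row $(m,p)=(4,2)$ for $n=3$, modelling $E$ as ${\mathbb{C}}/({\mathbb{Z}}+i{\mathbb{Z}})$ (forced up to isomorphism when $m=4$), taking for $t$ the $\zeta_4$-invariant $2$-torsion point $(1+i)/2$, and using a $7$-torsion point as a stand-in for the ``general'' coordinate.

```python
from fractions import Fraction
from itertools import permutations, product

# Points of E = C/(Z + iZ) as pairs (u, v) in (Q/Z)^2; multiplication by
# zeta_4 = i sends (u, v) to (-v, u).
def act_zeta(a, pt):
    u, v = pt
    for _ in range(a % 4):
        u, v = (-v) % 1, u
    return (u % 1, v % 1)

t = (Fraction(1, 2), Fraction(1, 2))    # zeta_4-invariant 2-torsion point (1+i)/2
zero = (Fraction(0, 1), Fraction(0, 1))
x = (Fraction(1, 7), Fraction(2, 7))    # 7-torsion stand-in for a general point
xbar = (t, zero, x)

m, p, n = 4, 2, 3
# G(m,p,n) as pairs (permutation, phases) with the sum of phases divisible by p.
G = [(s, a) for s in permutations(range(n))
     for a in product(range(m), repeat=n) if sum(a) % p == 0]
assert len(G) == (m ** n // p) * 6      # |G(4,2,3)| = 192

def act(g, pts):                        # (g.x)_i = zeta^{a_i} x_{s^{-1}(i)}
    s, a = g
    return tuple(act_zeta(a[i], pts[s.index(i)]) for i in range(n))

stab = [g for g in G if act(g, xbar) == xbar]
# The table's prediction: trivial permutation, third phase 0, first two
# phases of even sum -- the subgroup generated by (zeta_4, zeta_4, 1),
# (-1, 1, 1) and (1, -1, 1), of order 8.
expected = {((0, 1, 2), (a1, a2, 0))
            for a1 in range(4) for a2 in range(4) if (a1 + a2) % 2 == 0}
assert set(stab) == expected
print(len(stab))
```

The same loop, run with the other pairs $(m,p)$ and the corresponding curve models and torsion points, can in principle reproduce the remaining rows of the table.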
We start with an application of Proposition \ref{prop Delta and D}. \begin{lemma}\label{lemma Delta diagonal or hyper} If $\Delta$ is non-trivial, then $m\neq 6$ and, if we define the following types of elements in $\Delta$: \begin{itemize} \item Diagonal: $(t,\ldots,t)$ with $t\in E$; \item Hyperplanar: $(t,-t,0,\ldots,0)$ with $t\in E$; \end{itemize} then $\Delta$ contains a non-trivial hyperplanar element \emph{unless} it consists purely of diagonal elements. Moreover, the coordinates of every hyperplanar element are invariant by $\zeta_m$, so in particular these elements are 2-torsion if $m=2,4$ and 3-torsion if $m=3$. \end{lemma} \begin{proof} Let $\sigma=(1\,2)$ and note that $(t,-t,0,\ldots,0)\in E_{\sigma,B}$. Then $\Delta$ containing no non-trivial hyperplanar element amounts to $E_{\sigma,B}\to E_{\sigma,A}$ being an isomorphism. By inspection, we see that $F_{\sigma,B}=E_{\sigma,B}[2]$ and we can thus apply Proposition \ref{prop Delta and D}, which tells us that elements in $\Delta$ are invariant by \emph{every} transposition, hence diagonal. Assume now that $\Delta$ contains a hyperplanar element $\bar t=(t,-t,0,\ldots,0)$. Then, since $\Delta$ is $G$-stable, we have that, for $\rho_1=(\zeta_m,1,\zeta_m^{-1},1,\ldots,1)\in H$, \[(1-\rho_1)(\bar t)=((1-\zeta_m)t,0,\ldots,0)\in\Delta.\] But, by construction, there are no elements of the form $(x,0,\ldots,0)$ in $\Delta$. We deduce then that $t$ is $\zeta_m$-invariant. The assertion on the torsion of its coordinates follows immediately. Assume finally that $m=6$ and let $\bar t=(t_1,\ldots,t_n)\in\Delta$. Define $\sigma_i=(1\, i)\in S_n\subset G$ and $\rho_2=(\zeta_6^{-1},\zeta_6,1,\ldots,1)\in H\subset G$. Then \[[(1-\rho_2)(1-\rho_1)\sigma_i](\bar t)=(t_i,0,\ldots,0)\in\Delta,\] which implies as above that $t_i=0$ and thus $\Delta=\{0\}$. \end{proof} Let us now study pseudoreflections in $\Delta\rtimes G$.
Define the elements \begin{align*} \rho &:=(\zeta_m,\zeta_m^{-1},1,\ldots,1)\in H\subset G;\\ \sigma &:=(1\, 2)\in S_n\subset G;\\ \tau &:=(\zeta_m^p,1,\ldots,1)\in H\subset G. \end{align*} Then there are two types of pseudoreflections in $G$:\label{pseudoref in Delta G} \begin{itemize} \item[(I)] conjugates of $\rho^a\sigma$ for $0\leq a<\frac mp$; \item[(II)] conjugates of powers of $\tau$ (these do not exist if $m=p$); \end{itemize} and the corresponding elliptic curves in $B$ are respectively: \begin{align*} E_{\rho^a\sigma}&=\{(x,-\zeta_m^ax,0,\ldots,0)\mid x\in E\};\\ E_{\tau}&=\{(x,0,0,\ldots,0)\mid x\in E\}. \end{align*} Note now that elements of the form $(x,0,\ldots,0)$ are not in $\Delta$ by construction of the isogeny $\pi:B\to A$. Using Lemmas \ref{main lemma} and \ref{lemma Delta diagonal or hyper}, we see then that pseudoreflections in $\Delta\rtimes G$ that are not in $G$ must be of the form \begin{itemize} \item[(III)] conjugates of $(\bar t,\rho^a\sigma)\in\Delta\rtimes G$ for $0\leq a<p$; \end{itemize} where $\bar t=(t,-t,0,\ldots,0)\in\Delta$ and $t$ is $\zeta_m$-invariant.\\ With these considerations, we can restrict further the structure of $\Delta$. For instance, diagonal elements in $\Delta$ are bound to cause problems since they do not belong to any elliptic curve $E_\upsilon$ for a pseudoreflection $\upsilon\in G$. Thus, they cannot give rise to new pseudoreflections in $\Delta\rtimes G$ unless they are generated by hyperplanar elements. This is explained by the following proposition. \begin{proposition}[$\Delta$ is not diagonal]\label{proposition Delta not diagonal} Assume that there exists $s\in E$ such that $(s,\ldots,s)\in \Delta$ but $(s,-s,0,\ldots,0)\not\in\Delta$. Then $A/G$ is not smooth. \end{proposition} In particular, we see that $\Delta$ has to contain at least one hyperplanar element. \begin{proof} Since $A/G\cong B/(\Delta\rtimes G)$, we will work with this last quotient using the Chevalley-Shephard-Todd Theorem.
Let $\bar s\in\Delta$ denote the diagonal element in the statement of the proposition. We will prove first that an element of the form $(\bar s,\upsilon)$ cannot be generated by pseudoreflections in $\Delta\rtimes G$. Indeed, the only pseudoreflections that are not in $G$ are those of type (III), so that if $(\bar s,\upsilon)$ were generated by pseudoreflections, we would be able to write \begin{equation}\label{eq diagonal delta} \bar s=\sum_{i=1}^\ell \upsilon_i(\bar t_i), \end{equation} with $\bar t_i=(t_i,-t_i,0,\ldots,0)\in\Delta$ a hyperplanar element and $\upsilon_i\in G$. In particular, $\bar s$ would be contained in the sub-$G$-module of $\Delta$ generated by the $\bar t_i$. But since $t_i$ is $\zeta_m$-invariant, the only way in which $G$ acts on the $\bar t_i$ is by permuting their coordinates. Thus, by looking at the first coordinate in equation \eqref{eq diagonal delta}, we get that $s$ is a linear combination of the $t_i$, which implies immediately that $(s,-s,0,\ldots,0)$ is a linear combination of the $\bar t_i$ and hence is in $\Delta$, contradicting our hypothesis. Having proved this, it then suffices to exhibit an element $\bar x\in B$ such that its stabilizer in $\Delta\rtimes G$ contains an element of the form $(\bar s,\upsilon)$. In other words, we need $\upsilon\in G$ and $\bar x\in B$ such that $\upsilon(\bar x)+\bar s=\bar x$, and this is a direct consequence of Lemma \ref{lem surj}. \end{proof} Denote by $E_0$ the subgroup of $\zeta_m$-invariant elements of $E$. Then $E_0$ is equal to $E[2]\cong ({\mathbb{Z}}/2{\mathbb{Z}})^2$ if $m=2$, isomorphic to ${\mathbb{Z}}/3{\mathbb{Z}}$ if $m=3$, and isomorphic to ${\mathbb{Z}}/2{\mathbb{Z}}$ if $m=4$.
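These values of $|E_0|$ can be checked directly; the following computation is a standard fact on elliptic curves with complex multiplication and is recorded here only for the reader's convenience. Writing $E={\mathbb{C}}/\Lambda_E$ with $\zeta_m\Lambda_E=\Lambda_E$ (which is possible by Lemma \ref{lem m leq 6}), the $\zeta_m$-invariant points form the kernel of the isogeny $1-\zeta_m$, so that \[E_0=\ker(1-\zeta_m)\cong (1-\zeta_m)^{-1}\Lambda_E/\Lambda_E,\qquad |E_0|=\deg(1-\zeta_m)=|1-\zeta_m|^2,\] which gives $|E_0|=4$, $3$ and $2$ for $m=2$, $3$ and $4$ respectively, in accordance with the description above.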
Now that we know that diagonal elements in $\Delta$ only appear if generated by hyperplanar elements, Lemma \ref{lemma Delta diagonal or hyper} tells us that $\Delta$ is contained in $E_0^n=B^H$, and more precisely in the ``hyperplane'' \[\left\{(x_1,\ldots,x_n)\in E_0^n\mid \sum_{i=1}^nx_i=0\right\}\subset E_0^n\subset B.\] Indeed, for $m=3,4$ the mere presence of a hyperplanar element implies by $G$-stability that $\Delta$ is actually the whole ``hyperplane'', and thus the presence of any additional element in $\Delta$ would imply the existence of elements of the form $(x,0,\ldots,0)$, which is forbidden by construction. A similar argument using Proposition \ref{proposition Delta not diagonal} works for $m=2$. In this last case, one hyperplanar element does not suffice to generate the whole hyperplanar subgroup of $E_0^n$ since $E_0=E[2]$ needs two generators. We now prove that such an ``incomplete'' hyperplanar $\Delta$ does not work either. \begin{proposition}\label{proposition Delta not special} Assume that $m=2$, $\Delta\neq\{0\}$ and there exists a hyperplanar 2-torsion element that is not in $\Delta$. Then $A/G$ is not smooth. \end{proposition} \begin{proof} As before, we can use the Chevalley-Shephard-Todd Theorem on the quotient $B/(\Delta\rtimes G)\cong A/G$. By the previous proposition, we may assume that $\Delta$ has a non-trivial element $\bar t=(t,t,0,\ldots,0)$ with $t\in E[2]$. But since $E[2]\cong ({\mathbb{Z}}/2{\mathbb{Z}})^2$, we easily see from the hypothesis that there are no elements of the form $(s,s,0,\ldots,0)$ in $\Delta$ for $s\neq 0,t$. Fix such an element $s\in E[2]$, $s\neq 0,t$. Let $t_1\in E[4]$ be such that $2t_1=t$ and let $t_2=t_1+s\in E[4]$. Let $(x_3,\ldots,x_n)\in E^{n-2}$ be a general element and consider the element $\bar x=(t_1,t_2,x_3,\ldots,x_n)\in B$. Recalling the notation given on page \pageref{pseudoref in Delta G}, it is easy to see that $(\bar t,\rho)\in\Delta\rtimes G$ fixes $\bar x$.
Since $t_1\neq\pm t_2$, it is also easy to see that no element in $G$ fixes $\bar x$, so that pseudoreflections fixing $\bar x$ can only be of type (III), that is, either $(\bar t,\sigma)$ or $(\bar t,\rho\sigma)$. But again, since $t_1\neq\pm t_2$, we see that neither of these fixes $\bar x$. Thus, $\mathrm{Stab}_{\Delta\rtimes G}(\bar x)$ is not generated by pseudoreflections and hence $B/(\Delta\rtimes G)$ cannot be smooth by the Chevalley-Shephard-Todd Theorem. \end{proof} Thus, we are reduced to the ``full'' hyperplanar case, that is, \begin{equation}\label{eq hyper delta} \Delta:=\left\{(x_1,\ldots,x_n)\in E_0^n\mid \sum_{i=1}^nx_i=0\right\}\subset E_0^n\subset B. \end{equation} We then prove the following: \begin{proposition}[$\Delta$ is not hyperplanar]\label{proposition Delta not hyperplanar} Assume that $\Delta$ is as in \eqref{eq hyper delta}. Then $A/G$ is not smooth \textbf{except} if $G=G(2,2,3)$. \end{proposition} \begin{proof} As always, it will suffice to give an element $\bar x\in B=E^n$ such that its stabilizer in $\Delta\rtimes G$ is not generated by pseudoreflections. The idea, as in the last proof, is to exhibit an element whose coordinates $x_i$ are ``different enough'' so that it is clear that elements in $S_n\subset G$ cannot appear in $\mathrm{Stab}_{\Delta\rtimes G}(\bar x)$, even after being composed with elements of $\Delta\times H\subset \Delta\rtimes G$. This amounts to ensuring that different coordinates do not belong to the same $(E_0\times\mu_m)$-orbit (this is how $\Delta\times H$ acts on coordinates). Then the stabilizer must be contained in $\Delta\times H$ and hence it is easy to exhibit examples that are not generated by pseudoreflections. Consider then the following element $\bar x\in B$: \begin{itemize} \item If $G=G(2,p,n)$ and $n\geq 4$, then $\bar x=(0,a',b',c',x_5,\ldots,x_n)$.\\ Here, $(x_5,\ldots,x_n)\in E^{n-4}$ is a general element and $2a'=a$, $2b'=b$, $2c'=c$, where $E[2]=\{0,a,b,c\}$.
\item If $G=G(2,1,3)$, then $\bar x=(a',b',c')$, where $a',b',c'$ are as above. \item If $G=G(3,p,n)$, then $\bar x=(0,d,2d,x_4,\ldots,x_n)$.\\ Here $(x_4,\ldots,x_n)\in E^{n-3}$ is a general element and $d\in E[3]$ is not $\zeta_3$-invariant. \item If $G=G(4,p,n)$, then $\bar x=(0,d,e',x_4,\ldots,x_n)$.\\ Here, $(x_4,\ldots,x_n)\in E^{n-3}$ is a general element, $d$ and $e=2e'$ are in $E[2]$, $d$ is not $\zeta_4$-invariant and $e$ is $\zeta_4$-invariant. \end{itemize} The fact that these coordinates are in different $(E_0\times\mu_m)$-orbits is seen as follows. In the first two cases, multiplication by 2 kills the actions of $E_0$ and $\mu_2$ on 4-torsion elements, and the coordinates are still all different. In the third case, the action of $\zeta_3$ on $d$ is by translation by a $\zeta_3$-invariant element (say, $e$), so $E_0$ and $\mu_3$ act in the same way on $d$. A direct computation then tells us that $0$, $d$ and $2d$ are in different $(E_0\times\mu_3)$-orbits. In the fourth case, all coordinates have different torsion. Thus, $\mathrm{Stab}_{\Delta\rtimes G}(\bar x)\subset\Delta\times H$, as explained above.
Easy direct computations in $\Delta\times H$ then tell us that the stabilizer of $\bar x$ is given in each case by: \begin{center} \begin{tabular}{p{.2\textwidth}|p{.65\textwidth}} $(m,p,n)$ & Generators of $\mathrm{Stab}_{\Delta\rtimes G}(\bar x)\subset \Delta\times H$ \\ \hline $(2,p,n)$, $n\geq 4$ & $((0,a,b,c,0,\ldots,0),(-1,-1,-1,-1,1,\ldots,1))$ \\ & $(\bar 0,(-1,1,\ldots,1))$ (exists only if $p=1$)\\ \hline $(2,1,3)$ & $((a,b,c),(-1,-1,-1))$\\ \hline $(3,p,n)$ & $((0,2e,e,0,\ldots,0),(\zeta_3,\zeta_3,\zeta_3,1,\ldots,1))$\\ & $(\bar 0,(\zeta_3,1,\ldots,1))$ (exists only if $p=1$)\\ \hline $(4,p,n)$ & $((0,e,e,0,\ldots,0),(\zeta_4,\zeta_4,-1,1,\ldots,1))$\\ & $(\bar 0,(-1,1,\ldots,1))$ (exists only if $p\leq 2$)\\ & $(\bar 0,(\zeta_4,1,\ldots,1))$ (exists only if $p=1$) \end{tabular} \end{center} In every case, the first element is clearly not a pseudoreflection and it cannot be generated by the others, which proves the proposition. \end{proof} The statement of the last proposition hints that the quotient $A/G$ is indeed smooth for $G=G(2,2,3)$. This is actually the case, since it is well-known that $G(2,2,3)$ is isomorphic, as a complex reflection group, to $S_4$ and was therefore already considered in the previous section. The proof of $(1)\Rightarrow(4)$ is now complete. \subsection{Sporadic groups}\label{sec sporadic} We now deal with complex reflection groups that are not of the type $G(m,p,n)$. As we recalled before, these are 34 sporadic groups with given actions on ${\mathbb{C}}^n$ where $n$ varies from 2 to 8. Let $G$ be such a sporadic group. Recall that having an abelian variety $A$ with an action of $G$ by automorphisms gives us in particular a linear action of $G$ on $T_0(A)\cong{\mathbb{C}}^n$ that preserves the lattice $\Lambda=\Lambda_A$. We thus need some classification of $G$-invariant lattices up to equivalence.
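In practice, finding candidate ``bad'' points among the torsion points of $A={\mathbb{C}}^n/\Lambda$ reduces, element by element, to rational linear algebra: the points fixed by $g\in G$ are the solutions of $(g-I)v\in\Lambda$, which, whenever $g-I$ is invertible, form the finite group $(g-I)^{-1}\Lambda/\Lambda$ of order $|\det(g-I)|$. The following sketch is only an illustration of this principle on the toy case of multiplication by $i$ on $E={\mathbb{C}}/{\mathbb{Z}}[i]$ (it is not the SageMath code used for the actual verifications, which is discussed in the remark after the proof of Proposition \ref{prop sporadic}):

```python
from fractions import Fraction
from itertools import product

# Multiplication by i on E = C/Z[i], written in the lattice basis (1, i).
g = [[0, -1],
     [1, 0]]

# M = g - I over the rationals.
M = [[Fraction(g[i][j]) - (i == j) for j in range(2)] for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det != 0          # g fixes no non-zero vector, so g - I is invertible

# Inverse of the 2x2 matrix M.
A = [[ M[1][1] / det, -M[0][1] / det],
     [-M[1][0] / det,  M[0][0] / det]]

# Fixed points of g on the torus R^2/Z^2 form the finite group
# (g - I)^{-1} Z^2 / Z^2, of order |det(g - I)|; enumerate it from the
# columns of the inverse taken modulo Z^2.
D = int(abs(det))
cols = [tuple(A[i][j] % 1 for i in range(2)) for j in range(2)]
fixed = set()
for k0, k1 in product(range(D), repeat=2):
    fixed.add(tuple((k0 * cols[0][i] + k1 * cols[1][i]) % 1 for i in range(2)))

# The two fixed points of multiplication by i on E: 0 and (1+i)/2.
print(sorted(fixed))
```

Running the same computation for each element of a sporadic group (in a ${\mathbb{Z}}$-basis of the lattice under consideration) produces the torsion points whose stabilizers are then tested against the Chevalley-Shephard-Todd criterion.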
A great part of this work was done by Popov in \cite{Popov}, where he studied infinite complex reflection groups, in particular \emph{crystallographic} complex reflection groups, which turn out to be extensions of a finite complex reflection group $G$ by some lattice $\Lambda$ in ${\mathbb{C}}^n$, where the action of $G$ on ${\mathbb{C}}^n$ is the one given by Shephard-Todd. In order to deal with sporadic groups, we then use some of Popov's results, which we briefly recall here.\\ First of all, we need the notion of \emph{root lattice}. Given a finite (irreducible) complex reflection group $G$, we can consider the directions in which the pseudoreflections act. With these one can define an actual (irreducible) root system, which in turn is useful for classifying these groups (cf.~\cite[\S1]{Popov}). Here, we only care about the lines generated by these roots, that is, the eigenspaces of eigenvalue $\neq 1$ for some pseudoreflection $\sigma\in G$, which Popov calls \emph{root lines}. If we consider a $G$-invariant lattice $\Lambda\subset{\mathbb{C}}^n$, then the sublattices $\Lambda\cap L$ for $L$ a root line generate a $G$-invariant sublattice $\Lambda^0$ of $\Lambda$ called the \emph{root lattice} of $\Lambda$. Note that this is precisely how we constructed the $G$-equivariant isogeny $B\to A$ for $G=G(1,1,n+1)=S_{n+1}$. We then have the following result, cf.~\cite[\S2.6]{Popov}: \begin{theorem}[Popov] The only sporadic groups $G$ in the list of Shephard-Todd that admit a $G$-invariant lattice are the numbers 4, 5, 8, 12, 24--26, 28, 29, 31--37. Their corresponding root lattices are classified up to equivalence by the table in \cite[\S2.6, pp.~37--44]{Popov}.
\end{theorem} Note that Popov's notion of equivalence of $G$-invariant lattices induces isomorphisms between the corresponding abelian varieties with $G$-action, so that we only need to study abelian varieties $A={\mathbb{C}}^n/\Lambda$ for lattices $\Lambda$ whose root lattice $\Lambda^0$ appears in Popov's table. Let us then recall another result that will be useful to classify lattices that are not root lattices, cf.~\cite[\S\S 4.2--4.4]{Popov}. Consider the endomorphism of ${\mathbb{C}}^n$ defined as $S:=n\cdot I_n-\sum_{i=1}^n R_i$, where $R_i$ denotes the $i$-th pseudoreflection of a fixed generating set of pseudoreflections of $G$. \begin{theorem}[Popov] Let $\Lambda$ be a $G$-invariant lattice in ${\mathbb{C}}^n$ and let $\Lambda^0$ be its root lattice. Then $\Lambda^0\subset\Lambda\subset S^{-1}\Lambda^0$. In particular, if $|\det(S)|=1$, then every $G$-invariant lattice is a root lattice. \end{theorem} It then remains to verify explicitly, for each lattice $\Lambda^0$ in Popov's list and for each $G$-invariant lattice between $\Lambda^0$ and $S^{-1}\Lambda^0$, whether the quotient of the corresponding abelian variety by $G$ is smooth or not. As it turns out, the quotient is never smooth, which we summarize in the following proposition: \begin{proposition}\label{prop sporadic} Let $G$ be a sporadic group from the Shephard-Todd list. If $G$ acts on an abelian variety $A$ in such a way that its action on $T_0(A)$ is an irreducible representation, then $A/G$ is not smooth. \end{proposition} \begin{proof} For every such pair $(A,G)$, we consider the associated pair $(\Lambda,G)$, where $A={\mathbb{C}}^n/\Lambda$. Tables \ref{table detS=1} and \ref{table detS neq 1} give, for every such pair, a point $x_0\in A$ such that its stabilizer is not generated by pseudoreflections. The result then follows from the Chevalley-Shephard-Todd theorem.\\ We start with the groups $G$ such that $|\det(S)|=1$, so that we only need to verify Popov's explicit lattices.
For these, Table \ref{table detS=1} gives: \begin{itemize} \item The group $G$ (by giving its number in Shephard-Todd's list). \item Popov's name for the group $\Lambda^0\rtimes G$. \item A rational linear combination $v_0$ of the ${\mathbb{Z}}$-basis $\{e_1,\ldots,e_{2n}\}$ of $\Lambda=\Lambda^0$. \item The order of the stabilizer $S_0=\mathrm{Stab}_G(x_0)$ of the image $x_0$ of $v_0$ in the abelian variety $A={\mathbb{C}}^n/\Lambda$. \item The order of the subgroup $P_0$ of $S_0$ that is generated by pseudoreflections. \end{itemize} We refer to \cite[\S2.6, pp.~37--44]{Popov} for the explicit ${\mathbb{Z}}$-basis. In each case, the first $n$ elements of the basis are Popov's $e_1,\ldots,e_n$ and the $(n+i)$-th element is $\tau_i e_i$ for some explicit $\tau_i\in{\mathbb{C}}$. \begin{table}[h!] \[ \begin{array}{|c|c|c|c|c|}\hline \# G & \Lambda^0\rtimes G & v_0\in\Lambda^0\otimes_{\mathbb{Z}} {\mathbb{Q}} & |S_0| & |P_0| \\[.3em] \hline \hline 5 & [K_{5}] & (\frac13,\frac13,\frac13,0) & 3 & 1 \\[.3em] \hline 8 & [K_{8}] & (\frac13,\frac13,\frac13,0) & 3 & 1 \\[.3em] \hline 12 & [K_{12}] & (0,0,0,\frac12) & 16 & 8 \\[.3em] \hline 24 & [K_{24}] & (\frac14, -\frac14, -\frac14, \frac12, \frac14, -\frac14) & 4 & 1 \\[.3em] \hline 26 & [K_{26}]_1 & (\frac12, \frac12, \frac12, 0, \frac12, \frac12) & 36 & 18 \\[.3em] \hline 26 & [K_{26}]_2 & (0, 0, -\frac13, 0, 0, \frac13) & 72 & 24 \\[.3em] \hline 28 & [F_{4}]_1^\alpha & (\frac12, \frac12, \frac12, \frac12, \frac12, 0, 0, 0) & 12 & 6 \\[.3em] \hline 28 & [F_{4}]_2^\beta & (0,\frac12,\frac12,0,0,0,\frac12,0) & 16 & 8 \\[.3em] \hline 28 & [F_{4}]_3^\gamma & (0,\frac12,0,0,0,0,\frac12,0) & 16 & 8 \\[.3em] \hline 29 & [K_{29}] & (\frac12, 0, 0, 0, \frac12, 0, 0, 0) & 768 & 384 \\[.3em] \hline 31 & [K_{31}] & (\frac12, \frac12, 0, 0, 0, 0, 0, 0) & 384 & 192 \\[.3em] \hline 32 & [K_{32}] & (\frac12, 0, 0, 0, 0, 0, 0, 0) & 1296 & 648 \\[.3em] \hline 34 & [K_{34}] & (\frac13, 0, 0, 0, 0, 0, -\frac13, 0, 0, 0, 0, 0) & 155520 & 
51840 \\[.3em] \hline 37 & [E_8]^\alpha & (\frac12, \frac12, \frac12, \frac12, \frac12, \frac12, \frac12, 0, \frac12, \frac12, 0, 0, 0, 0, 0, \frac12) & 103680 & 51840 \\[.3em] \hline \end{array} \] \caption{Examples of non-smooth points in $A/G$ for sporadic groups $G$ such that $|\det(S)|=1$.}\label{table detS=1} \end{table} We now consider those groups $G$ in Popov's table for which $|\det(S)|\neq 1$, so that we need to check for new lattices aside from Popov's. These correspond to the numbers 4, 25, 33, 35 and 36 in Shephard-Todd's list. Since we always have $\Lambda^0\subset\Lambda\subset S^{-1}\Lambda^0$ and $[S^{-1}\Lambda^0:\Lambda^0]=|\det(S)|^2$, we see that there are finitely many other lattices to look at. Actually, in all five cases we get that the action of $G$ on the quotient $S^{-1}\Lambda^0/\Lambda^0$ is trivial, so that every lattice in between is a $G$-invariant lattice and needs to be considered. We then keep the notation as above (in particular, Popov's ${\mathbb{Z}}$-basis is given by $\{e_1,\ldots,e_{2n}\}$) and we go case by case:\\ In case 4, a ${\mathbb{Z}}$-basis for $S^{-1}\Lambda^0$ is given by $\{d_1,d_2,e_3,e_4\}$, where $d_1=\frac12 e_1+\frac12 e_2+\frac12 e_3$ and $d_2=\frac12 e_1+\frac12 e_4$. In particular, we see that the quotient $S^{-1}\Lambda^0/\Lambda^0$ is a Klein group and thus, apart from $S^{-1}\Lambda^0$, we have 3 new lattices to consider: $\Lambda_1:=\langle d_1,\Lambda^0\rangle$, $\Lambda_2:=\langle d_2,\Lambda^0\rangle$, $\Lambda_3:=\langle d_1+d_2,\Lambda^0\rangle$. In case 25, a ${\mathbb{Z}}$-basis for $S^{-1}\Lambda^0$ is given by $\{d_1,e_2,\ldots,e_6\}$, where $d_1=\frac13 e_1+\frac13 e_3+\frac23 e_4+\frac23 e_6$. Since the index is 3, this is the only new lattice that needs to be checked. In case 33, a ${\mathbb{Z}}$-basis for $S^{-1}\Lambda^0$ is given by $\{d_1,e_2,\ldots,e_5,d_6,e_7,\ldots,e_{10}\}$, where $d_1=\frac12 e_1+\frac12 e_3+\frac12 e_5$ and $d_6=\frac12 e_6+\frac12 e_8+\frac12 e_{10}$.
In particular, we see that the quotient $S^{-1}\Lambda^0/\Lambda^0$ is a Klein group and thus, apart from $S^{-1}\Lambda^0$, we have 3 new lattices to consider: $\Lambda_1:=\langle d_1,\Lambda^0\rangle$, $\Lambda_2:=\langle d_6,\Lambda^0\rangle$, $\Lambda_3:=\langle d_1+d_6,\Lambda^0\rangle$. In case 35, a ${\mathbb{Z}}$-basis for $S^{-1}\Lambda^0$ is given by $\{d_1,e_2,\ldots,e_6,d_7,e_8,\ldots,e_{12}\}$, where $d_1=\frac13 e_1-\frac13 e_3+\frac13 e_5-\frac13 e_6$ and $d_7=\frac13 e_7-\frac13 e_{9}+\frac13 e_{11}-\frac13 e_{12}$. In particular, we see that the quotient $S^{-1}\Lambda^0/\Lambda^0$ is isomorphic to $({\mathbb{Z}}/3{\mathbb{Z}})^2$ and thus, apart from $S^{-1}\Lambda^0$, we have 4 new lattices to consider: $\Lambda_1:=\langle d_1,\Lambda^0\rangle$, $\Lambda_2:=\langle d_7,\Lambda^0\rangle$, $\Lambda_3:=\langle d_1+d_7,\Lambda^0\rangle$, $\Lambda_4:=\langle d_1+2d_7,\Lambda^0\rangle$. In case 36, a ${\mathbb{Z}}$-basis for $S^{-1}\Lambda^0$ is given by $\{e_1,d_2,e_3,\ldots,e_8,d_9,e_{10},\ldots,e_{14}\}$, where $d_2=\frac12 e_2+\frac12 e_5+\frac12 e_7$ and $d_9=\frac12 e_9+\frac12 e_{12}+\frac12 e_{14}$. In particular, we see that the quotient $S^{-1}\Lambda^0/\Lambda^0$ is a Klein group and thus, apart from $S^{-1}\Lambda^0$, we have 3 new lattices to consider: $\Lambda_1:=\langle d_2,\Lambda^0\rangle$, $\Lambda_2:=\langle d_9,\Lambda^0\rangle$, $\Lambda_3:=\langle d_2+d_9,\Lambda^0\rangle$.\\ Table \ref{table detS neq 1} then gives, for every pair $(A,G)$ with $A={\mathbb{C}}^n/\Lambda$: \begin{itemize} \item The group $G$ (by giving its number in Shephard-Todd's list). \item The corresponding lattice $\Lambda$ (as we named them above). \item A rational linear combination $v_0$ of the corresponding ${\mathbb{Z}}$-basis (as given above). \item The order of the stabilizer $S_0=\mathrm{Stab}_G(x_0)$ of the image $x_0$ of $v_0$ in the abelian variety $A$. \item The order of the subgroup $P_0$ of $S_0$ that is generated by pseudoreflections.
\end{itemize} This concludes the proof of Proposition \ref{prop sporadic}. \end{proof} \begin{rem} For each lattice $\Lambda$ and each ``bad'' element $x_0$ analyzed above, we computed the stabilizer $S_0$ and its subgroup $P_0$ by brute force using basic SageMath algorithms (we thank once again Antonio Behn for his enormous help in optimizing our first algorithms). Since these are really basic, readers can certainly write their own (and probably in a more efficient manner than ours!). However, for those who would like to look at our code, it is presented in an appendix to a previous version of this article\\ (cf.~\texttt{arxiv.org/abs/1801.00028v2}). The main idea for finding these elements was to check the stabilizers (and the pseudoreflections therein) of small torsion elements chosen via the following principle: for every element $g$ of the matrix group $G$, we decomposed $\mathbb{Z}^{2n}$ as $\ker(g-I_{2n})\oplus \ker(g-I_{2n})^\perp$, where the $\perp$ is taken with respect to a $G$-invariant Hermitian form $H$ on $\mathbb{C}^n$. By restricting $g$ to $\ker(g-I_{2n})^\perp$, we obtain an integer-valued matrix $\tilde{g}$ such that $\tilde{g}-I$ is invertible over $\mathbb{Q}$. The columns of $(\tilde{g}-I)^{-1}$ that contain rational, non-integer numbers therefore correspond to fixed points of $g$ in $A$ that do not come from the eigenspace of $g$ associated to the eigenvalue 1. These were the vectors whose stabilizers we calculated and analyzed. \end{rem} \begin{table}[h!]
\[ \begin{array}{|c|c|c|c|c|}\hline \# G & \Lambda & v_0\in\Lambda\otimes_{\mathbb{Z}} {\mathbb{Q}} & |S_0| & |P_0| \\[.3em] \hline \hline 4 & \Lambda^0 & (\frac12, 0, 0, 0) & 2 & 1 \\[.3em] \hline 4 & \Lambda_1 & (0, \frac12, 0, 0) & 4 & 1 \\[.3em] \hline 4 & \Lambda_2 & (0, 0, \frac12, \frac12) & 4 & 1 \\[.3em] \hline 4 & \Lambda_3 & (0, \frac12, \frac12, 0) & 6 & 3 \\[.3em] \hline 4 & S^{-1}\Lambda^0 & (0, 0, 0, \frac12) & 8 & 1 \\[.3em] \hline\hline 25 & \Lambda^0 & (0, -\frac13, 0, 0, \frac13, -\frac13) & 3 & 1 \\[.3em] \hline 25 & S^{-1}\Lambda^0 & (0, 0, 0, \frac13, 0, \frac13) & 72 & 24 \\[.3em] \hline\hline 33 & \Lambda^0 & (0, \frac12, 0, \frac12, \frac12, \frac12, 0, \frac12, 0, \frac12) & 108 & 54 \\[.3em] \hline 33 & \Lambda_1 & (\frac12, 0, 0, 0, 0, 0, 0, 0, 0, 0) & 1296 & 648 \\[.3em] \hline 33 & \Lambda_2 & (0, 0, 0, 0, 0, \frac12, 0, 0, 0, 0) & 1296 & 648 \\[.3em] \hline 33 & \Lambda_3 & (\frac12, 0, 0, 0, 0, \frac12, 0, 0, 0, 0) & 240 & 120 \\[.3em] \hline 33 & S^{-1}\Lambda^0 & (\frac12, 0, 0, 0, 0, \frac12, 0, 0, 0, 0) & 1296 & 648 \\[.3em] \hline\hline 35 & \Lambda^0 & (0, \frac12, 0, \frac12, \frac12, \frac12, 0, \frac12, 0, 0, 0, 0) & 72 & 36 \\[.3em] \hline 35 & \Lambda_1 & (0, \frac13, \frac13, 0, \frac13, 0, 0, 0, 0, 0, 0, 0) & 648 & 216 \\[.3em] \hline 35 & \Lambda_2 & (0, 0, 0, 0, 0, 0, 0, \frac13, \frac13, 0, \frac13, 0) & 648 & 216 \\[.3em] \hline 35 & \Lambda_3 & (0, \frac13, \frac13, 0, \frac13, 0, 0, \frac13, \frac13, 0, \frac13, 0) & 648 & 216 \\[.3em] \hline 35 & \Lambda_4 & (0, \frac13, \frac13, 0, \frac13, 0, 0, -\frac13, -\frac13, 0, -\frac13, 0) & 648 & 216 \\[.3em] \hline 35 & S^{-1}\Lambda^0 & (0, \frac13, \frac13, 0, \frac13, 0, 0, \frac13, \frac13, 0, \frac13, 0) & 648 & 216 \\[.3em] \hline\hline 36 & \Lambda^0 & (\frac12, \frac12, \frac12, \frac12, \frac12, \frac12, \frac12, \frac12, \frac12, 0, 0, 0, 0, \frac12) & 1440 & 720 \\[.3em] \hline 36 & \Lambda_1 & (0, \frac12, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) & 103680 & 
51840 \\[.3em] \hline 36 & \Lambda_2 & (0, 0, 0, 0, 0, 0, 0, 0, \frac12, 0, 0, 0, 0, 0) & 103680 & 51840 \\[.3em] \hline 36 & \Lambda_3 & (0, \frac12, 0, 0, 0, 0, 0, 0, \frac12, 0, 0, 0, 0, 0) & 3840 & 1920 \\[.3em] \hline 36 & S^{-1}\Lambda^0 & (0, \frac12, 0, 0, 0, 0, 0, 0, \frac12, 0, 0, 0, 0, 0) & 103680 & 51840 \\[.3em] \hline \end{array} \] \caption{Non-smooth points in $A/G$ for sporadic groups $G$ such that $|\det(S)|\neq 1$.}\label{table detS neq 1} \end{table} \end{document}
\mathcal{M}athbf{b}egin{document} \title[level set]{Recovering an homogeneous polynomial from moments of its level set} \mathcal{M}athbf{a}uthor{Jean B. Lasserre} \mathcal{M}athbf{b}egin{abstract} Let $\mathcal{M}athbf{K}:=\{\mathcal{M}athbf{x}: g(\mathcal{M}athbf{x})\leq 1\}$ be the compact sub-level set of some homogeneous polynomial $g$. Assume that the only knowledge about $\mathcal{M}athbf{K}$ is the degree of $g$ as well as the moments of the Lebesgue measure on $\mathcal{M}athbf{K}$ up to order $2d$. Then the vector of coefficients of $g$ is solution of a simple linear system whose associated matrix is nonsingular. In other words, the moments up to order $2d$ of the Lebesgue measure on $\mathcal{M}athbf{K}$ encode all information on the homogeneous polynomial $g$ that defines $\mathcal{M}athbf{K}$ (in fact, only moments of order $d$ and $2d$ are needed). \mathcal{M}athbf{h}box{\mathcal{M}athcal{R}m e}nd{abstract} \keywords{homogeneous polynomials; sublevel sets; moments; inverse problem from moments} \mathcal{M}aketitle \mathcal{M}athcal{S}ection{Introduction} The inverse problem of reconstructing a geometrical object $\mathcal{M}athbf{K}\mathcal{M}athcal{S}ubset\mathcal{M}athbb{R}^n$ from the only knowledge of moments of some measure $\mathcal{M}u$ whose support is $\mathcal{M}athbf{K}$ is a fundamental problem in both applied and pure mathematics with important applications in e.g. computer tomography, inverse potentials, signal processing, and statistics and probability, to cite a few. In computer tomography, for instance, the X-ray images of an object can be used to estimate the moments of the underlying mass distribution, from which one seeks to recover the shape of the object that appears on some given images. In gravimetry applications, the measurements of the gravitational field can be converted into information concerning the moments, from which one seeks to recover the shape of the source of the anomaly. 
Of course, {\it exact} reconstruction of objects $\mathbf{K}\subset\mathbb{R}^n$ is in general impossible unless $\mathbf{K}$ has very specific properties. For instance, if $\mathbf{K}$ is a convex polytope, then exact recovery of all its vertices has been shown to be possible via a variant of what is known as Prony's method. Only a rough bound on the number of vertices is required, and relatively few moments suffice for exact recovery. For more details the interested reader is referred to the recent contribution of Gravin et al. \cite{gravin} and the references therein. On the other hand, Cuyt et al. \cite{cuyt} have shown that {\it approximate} recovery of a general $n$-dimensional shape is possible by using an interesting property of multi-dimensional Pad\'e approximants, analogous to the Fourier slice theorem for the Radon transform.

\subsection*{Contribution}

From the previous contributions and their references, it is clear that exact recovery of an $n$-dimensional shape is a difficult problem that can be solved only in a few cases, and so identifying such cases is of theoretical and practical interest. The goal of this paper is to identify one such case: we show that exact recovery is possible when $\mathbf{K}\subset\mathbb{R}^n$ is the (compact) sub-level set $\{\mathbf{x}\in\mathbb{R}^n\,:\,g(\mathbf{x})\leq 1\}$ associated with an homogeneous polynomial $g$. By exact recovery we mean recovery of {\it all} coefficients of the polynomial $g$. In fact, exact recovery is not only possible but straightforward, as it suffices to solve a linear system with a nonsingular matrix. Moreover, only moments of order $d$ and $2d$ of the Lebesgue measure on $\mathbf{K}$ are needed.
As already mentioned, exact recovery is possible only if $\mathbf{K}$ has very specific properties, and indeed, crucial to the proof is a property of level sets associated with homogeneous polynomials (which in fact also holds for level sets of positively homogeneous nonnegative functions).

\section{Main result}

\subsection{Notation and definitions}

Let $\mathbb{R}[\mathbf{x}]$ be the ring of polynomials in the variables $\mathbf{x}=(x_1,\ldots,x_n)$ and let $\mathbb{R}[\mathbf{x}]_d$ be the vector space of polynomials of degree at most $d$, whose dimension is $s(d):={n+d\choose n}$. For every $d\in\mathbb{N}$, let $\mathbb{N}^n_d:=\{\alpha\in\mathbb{N}^n:\vert\alpha\vert\,(=\sum_i\alpha_i)=d\}$, and let $\mathbf{v}_d(\mathbf{x})=(\mathbf{x}^\alpha)$, $\alpha\in\mathbb{N}^n$, $\vert\alpha\vert\leq d$, be the vector of monomials of the canonical basis $(\mathbf{x}^\alpha)$ of $\mathbb{R}[\mathbf{x}]_{d}$. Denote by $\mathcal{S}_k$ the space of $k\times k$ real symmetric matrices, with scalar product $\langle \mathbf{B},\mathbf{C}\rangle={\rm trace}\,(\mathbf{B}\mathbf{C})$; the notation $\mathbf{B}\succeq0$ (resp. $\mathbf{B}\succ0$) stands for $\mathbf{B}$ positive semidefinite (resp. positive definite).
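The dimension counts above are easy to verify computationally. The following small sketch (not part of the paper; plain Python standard library) enumerates the canonical monomial basis and checks that $\dim\mathbb{R}[\mathbf{x}]_d={n+d\choose n}$ and that $\mathbb{N}^n_d$ has cardinality ${n+d-1\choose n-1}$:

```python
from itertools import product
from math import comb

def basis(n, d):
    """Exponents alpha in N^n with |alpha| <= d: the canonical basis of R[x]_d."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]

def basis_exact(n, d):
    """Exponents alpha with |alpha| = d, i.e. the index set N^n_d."""
    return [a for a in product(range(d + 1), repeat=n) if sum(a) == d]

n, d = 3, 4
assert len(basis(n, d)) == comb(n + d, n)                # s(d) = C(n+d, n)
assert len(basis_exact(n, d)) == comb(n + d - 1, n - 1)  # |N^n_d|
```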
A polynomial $f\in\mathbb{R}[\mathbf{x}]_d$ is written
\[\mathbf{x}\mapsto f(\mathbf{x})\,=\,\sum_{\alpha\in\mathbb{N}^n}f_\alpha\,\mathbf{x}^\alpha,\]
for some vector of coefficients $\mathbf{f}=(f_\alpha)\in\mathbb{R}^{s(d)}$. A real-valued polynomial $g:\mathbb{R}^n\to\mathbb{R}$ is homogeneous of degree $d$ ($d\in\mathbb{N}$) if $g(\lambda\mathbf{x})=\lambda^dg(\mathbf{x})$ for all $\lambda\in\mathbb{R}$ and all $\mathbf{x}\in\mathbb{R}^n$. Given $g\in\mathbb{R}[\mathbf{x}]$, denote by $G\subset\mathbb{R}^n$ the sub-level set $\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$. If $g$ is homogeneous, then $G$ is compact only if $g$ is nonnegative on $\mathbb{R}^n$ (and so $d$ is even). Indeed, suppose that $g(\mathbf{x}_0)<0$ for some $\mathbf{x}_0\in\mathbb{R}^n$; then by homogeneity $g(\lambda \mathbf{x}_0)<0$ for all $\lambda>0$, and so $G$ contains a half-line and cannot be compact.

\subsection{Main result}

The main result is based on the following result of independent interest, valid for positively homogeneous functions (and not only homogeneous polynomials). A function $f:\mathbb{R}^n\to\mathbb{R}$ is positively homogeneous of degree $d\in\mathbb{R}$ if $f(\lambda\mathbf{x})=\lambda^df(\mathbf{x})$ for all $\lambda>0$ and all $\mathbf{x}\in\mathbb{R}^n$.

\begin{lemma}
\label{lemma0}
Let $f:\mathbb{R}^n\to\mathbb{R}$ be a measurable, positively homogeneous and nonnegative function of degree $0<d\in\mathbb{R}$, with bounded level set $\{\mathbf{x}\,:\,f(\mathbf{x})\leq 1\}$. Then, for every $k\in\mathbb{N}$ and $\alpha\in\mathbb{N}^n$:
\begin{equation}
\label{lem1-1}
\int_{\{\mathbf{x}\,:\,f(\mathbf{x})\leq 1\}}\mathbf{x}^\alpha\,f(\mathbf{x})^k\,d\mathbf{x}\,=\,\frac{n+\vert\alpha\vert}{n+kd+\vert\alpha\vert}\,\int_{\{\mathbf{x}\,:\,f(\mathbf{x})\leq 1\}}\mathbf{x}^\alpha\,d\mathbf{x}.
\end{equation}
\end{lemma}

\begin{proof}
To prove (\ref{lem1-1}) we use an argument already used in Morosov and Shakirov \cite{morosov1,morosov2}. With $\alpha\in\mathbb{N}^n$, let $\tilde{\alpha}:=(\alpha_2,\ldots,\alpha_n)\in\mathbb{N}^{n-1}$ and define $\mathbf{z}:=(z_2,\ldots,z_n)$. Let $\phi:\mathbb{R}_+\to\mathbb{R}$ be measurable and consider the integral $\int_{\mathbb{R}^n}\phi(f(\mathbf{x}))\,\mathbf{x}^\alpha\,d\mathbf{x}$. Using the change of variable $x_1=t$ and $x_i=tz_i$ for all $i=2,\ldots,n$, and invoking homogeneity, one obtains:
\begin{eqnarray*}
\int_{\mathbb{R}^n}\phi(f(\mathbf{x}))\,\mathbf{x}^\alpha\,d\mathbf{x}&=&\int_{\mathbb{R}^n}\phi(t^df(1,z_2,\ldots,z_n))\,t^{n+\vert\alpha\vert-1}\,\mathbf{z}^{\tilde{\alpha}}\,d(t,\mathbf{z})\\
&=&d^{-1}\left(\int_0^\infty u^{(n+\vert\alpha\vert)/d-1}\phi(u)\,du\right)\times A_\alpha,\\
\mbox{with }A_\alpha&=&\int_{\mathbb{R}^{n-1}}\mathbf{z}^{\tilde{\alpha}}\,f(1,\mathbf{z})^{-(n+\vert\alpha\vert)/d}\,d\mathbf{z}.
\end{eqnarray*}
Hence the choices $t\mapsto \phi(t):={\rm I}_{[0,1]}(t)$ and $t\mapsto \phi(t):=t^k\,{\rm I}_{[0,1]}(t)$ yield
\begin{eqnarray*}
d\int_{\{\mathbf{x}\,:\,f(\mathbf{x})\leq 1\}}\mathbf{x}^\alpha\, d\mathbf{x}&=&A_\alpha\int_0^1u^{(n+\vert\alpha\vert)/d-1}\,du\,=\,\frac{A_\alpha\, d}{n+\vert\alpha\vert},\\
d\int_{\{\mathbf{x}\,:\,f(\mathbf{x})\leq 1\}}f(\mathbf{x})^k\,\mathbf{x}^\alpha\, d\mathbf{x}&=&A_\alpha\int_0^1u^{(n+kd+\vert\alpha\vert)/d-1}\,du\,=\,\frac{A_\alpha\, d}{n+kd+\vert\alpha\vert},
\end{eqnarray*}
respectively, and so (\ref{lem1-1}) follows.
\end{proof}

With $g\in\mathbb{R}[\mathbf{x}]_{d}$ an homogeneous polynomial of degree $d$, consider now the matrix $\mathbf{M}_d(\boldsymbol{\lambda})$ of moments of order $2d$ associated with the Lebesgue measure on $G=\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$; that is, $\mathbf{M}_d(\boldsymbol{\lambda})$ is the real square matrix with rows and columns indexed by the monomials $\mathbf{x}^\alpha$, $\alpha\in\mathbb{N}^n_d$, and with entries
\begin{equation}
\label{moment-2d}
\mathbf{M}_d(\boldsymbol{\lambda})[\alpha,\beta]\,=\,\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}}\mathbf{x}^{\alpha+\beta}\,d\mathbf{x}\,=:\,\lambda_{\alpha+\beta},\qquad\forall\,\alpha,\beta\in\mathbb{N}^n_d.
\end{equation}
So all entries of $\mathbf{M}_d(\boldsymbol{\lambda})$ are moments of order $2d$. Our main result is as follows:

\begin{thm}
\label{thmain}
Let $g\in\mathbb{R}[\mathbf{x}]_{d}$ be homogeneous of degree $d$, with unknown vector of coefficients $\mathbf{g}=(g_\alpha)$, $\alpha\in\mathbb{N}^n_d$, and with compact level set $G=\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$. Assume that one knows the moments $\boldsymbol{\lambda}=(\lambda_\alpha)$ of the Lebesgue measure on $G$ for every $\alpha\in\mathbb{N}^n$ with $\vert\alpha\vert=d$ or $\vert\alpha\vert=2d$. Then:
\begin{equation}
\label{thmain-1}
\mathbf{g}\,=\,\frac{n+d}{n+2d}\:\mathbf{M}_d(\boldsymbol{\lambda})^{-1}\,\boldsymbol{\lambda}^{(d)},
\end{equation}
where $\boldsymbol{\lambda}^{(d)}=(\lambda_\alpha)$, $\alpha\in\mathbb{N}^n_d$, is the vector of all moments of order $d$.
\end{thm}

\begin{proof}
Use (\ref{lem1-1}) with $k=1$ and $\vert\alpha\vert=d$ to obtain
\[\sum_{\beta\in\mathbb{N}^n:\,\vert\beta\vert=d}g_\beta\,\lambda_{\alpha+\beta}\,=\,\frac{n+d}{n+2d}\,\lambda_\alpha,\qquad\forall\,\vert\alpha\vert=d,\]
or, in matrix form,
\[\mathbf{M}_d(\boldsymbol{\lambda})\,\mathbf{g}\,=\,\frac{n+d}{n+2d}\,\boldsymbol{\lambda}^{(d)},\]
from which the desired result follows if $\mathbf{M}_d(\boldsymbol{\lambda})$ is nonsingular. But this follows from the fact that $G$ has nonempty interior: for every $0\neq\mathbf{p}=(p_\alpha)$, $\alpha\in\mathbb{N}^n_d$, one has $\mathbf{p}^T\mathbf{M}_d(\boldsymbol{\lambda})\,\mathbf{p}=\int_G\big(\sum_{\vert\alpha\vert=d}p_\alpha\,\mathbf{x}^\alpha\big)^2\,d\mathbf{x}>0$, because a nonzero polynomial cannot vanish on a set of positive Lebesgue measure; hence $\mathbf{M}_d(\boldsymbol{\lambda})\succ0$.
\end{proof}

There are alternative ways of obtaining $\mathbf{g}$ from the moments $\boldsymbol{\lambda}$.
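As a numerical sanity check of the theorem above (not part of the paper; a Python sketch assuming the numpy library), one can recover the quadratic form $g(\mathbf{x})=2x_1^2+3x_2^2$ from the exact moments of its elliptical sub-level set. These moments follow from the classical closed-form moments of the unit disk under the change of variables $x_1\to x_1/\sqrt{2}$, $x_2\to x_2/\sqrt{3}$:

```python
import numpy as np
from math import gamma

def disk_moment(a, b):
    """Moment of x^a y^b over the unit disk: zero for odd a or b, else a Gamma ratio."""
    if a % 2 or b % 2:
        return 0.0
    j, k = a // 2, b // 2
    return gamma(j + 0.5) * gamma(k + 0.5) / gamma(j + k + 2)

def moment(a, b, p=2.0, q=3.0):
    """Moments of Lebesgue measure on G = {p*x^2 + q*y^2 <= 1} (x -> x/sqrt(p), y -> y/sqrt(q))."""
    return disk_moment(a, b) / (p ** ((a + 1) / 2) * q ** ((b + 1) / 2))

n, d = 2, 2
index_set = [(2, 0), (1, 1), (0, 2)]  # N^2_2: indexes rows and columns of M_d
M = np.array([[moment(a1 + b1, a2 + b2) for (b1, b2) in index_set]
              for (a1, a2) in index_set])                   # order-2d moments
lam_d = np.array([moment(a, b) for (a, b) in index_set])    # order-d moments
g = (n + d) / (n + 2 * d) * np.linalg.solve(M, lam_d)       # the recovery formula
# g is (2, 0, 3) up to roundoff: the coefficients (g_20, g_11, g_02) of 2x^2 + 3y^2
```

Here the moments are exact, so the recovery is exact up to floating-point roundoff; with estimated moments one would solve the same system, e.g. in a least-squares sense.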
It suffices to apply (\ref{lem1-1}) to a family $\mathcal{F}$ of multi-indices $\alpha\in\mathbb{N}^n$ whose cardinality $\vert\mathcal{F}\vert$ matches the dimension ${n+d-1\choose d}$ of the vector $\mathbf{g}$. For instance, with $n=2$ and $d=2$ ($g$ is a quadratic form with vector of coefficients $\mathbf{g}=(g_{20},g_{11},g_{02})$), $\mathbf{g}$ can also be obtained by
\[\mathbf{g}\,=\,\left[\begin{array}{ccc}
\lambda_{20}&\lambda_{11}&\lambda_{02}\\
\lambda_{30}&\lambda_{21}&\lambda_{12}\\
\lambda_{21}&\lambda_{12}&\lambda_{03}\end{array}\right]^{-1}\left[\begin{array}{c}\frac{n}{n+2}\lambda_{00}\\ \frac{n+1}{n+3}\lambda_{10}\\ \frac{n+1}{n+3}\lambda_{01}\end{array}\right],\]
provided that the above inverse matrix exists.
Similarly, with $n=2$ and $d=4$ ($g$ is a quartic form with vector of coefficients $\mathbf{g}=(g_{40},g_{31},g_{22},g_{13},g_{04})$), $\mathbf{g}$ can also be obtained by
\[\mathbf{g}\,=\,\left[\begin{array}{ccccc}
\lambda_{40}&\lambda_{31}&\lambda_{22}&\lambda_{13}&\lambda_{04}\\
\lambda_{50}&\lambda_{41}&\lambda_{32}&\lambda_{23}&\lambda_{14}\\
\lambda_{41}&\lambda_{32}&\lambda_{23}&\lambda_{14}&\lambda_{05}\\
\lambda_{60}&\lambda_{51}&\lambda_{42}&\lambda_{33}&\lambda_{24}\\
\lambda_{42}&\lambda_{33}&\lambda_{24}&\lambda_{15}&\lambda_{06}
\end{array}\right]^{-1}\left[\begin{array}{c}\frac{n}{n+4}\lambda_{00}\\ \frac{n+1}{n+5}\lambda_{10}\\ \frac{n+1}{n+5}\lambda_{01}\\ \frac{n+2}{n+6}\lambda_{20}\\ \frac{n+2}{n+6}\lambda_{02}\end{array}\right],\]
provided that the above inverse matrix exists.

\begin{thebibliography}{9}
\bibitem{cuyt} Cuyt, A., Golub, G., Milanfar, P., Verdonk, B.: Multidimensional integral inversion, with applications in shape reconstruction. SIAM J. Sci. Comput. 27(3), 1058--1070 (2005)
\bibitem{gravin} Gravin, N., Lasserre, J., Pasechnik, D.V., Robins, S.: The inverse moment problem for convex polytopes. Discrete Comput. Geom., to appear. DOI 10.1007/s00454-012-9426-4
\bibitem{morosov1} Morosov, A., Shakirov, S.: New and old results in resultant theory. {\tt arXiv:0911.5278v1} (2009)
\bibitem{morosov2} Morosov, A., Shakirov, S.: Introduction to integral discriminants. J. High Energy Phys. 12 (2009)
\end{thebibliography}

\subsection{Integration on sub-level sets and non-Gaussian integrals}

As a consequence of Lemma \ref{lemma0} we obtain the following result.

\begin{thm}
\label{thmain}
Let $g,h$ be positively homogeneous functions (PHFs) of degree $0\neq d\in\mathbb{R}$ and $p\in\mathbb{R}$, respectively, and such that $g\in C_d$. Then, for every $y\in[0,\infty)$:
\begin{equation}
\label{b1}
{\rm vol}(\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\})\,=\,\frac{y^{n/d}}{\Gamma(1+n/d)}\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x},
\end{equation}
and
\begin{equation}
\label{b2}
\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}h\,d\mathbf{x}\,=\,\frac{y^{(n+p)/d}}{\Gamma(1+(n+p)/d)}\int_{\mathbb{R}^n}h\,\exp(-g)\,d\mathbf{x},
\end{equation}
whenever $\int_{\mathbb{R}^n}\vert h\vert\,\exp(-g)\,d\mathbf{x}$ (or, equivalently, $\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}}\vert h\vert\,d\mathbf{x}$) is finite. And (\ref{b2}) also holds if $h$ is replaced with $\vert h\vert$, or with $\max[0,h]$, or with $\max[0,-h]$.
\end{thm}

\begin{proof}
Write $h=h^+-h^-$ with $h^+(\mathbf{x}):=\max[0,h(\mathbf{x})]$ and $h^-(\mathbf{x}):=\max[0,-h(\mathbf{x})]$ for all $\mathbf{x}$. Observe that $\vert h\vert$, $h^+$ and $h^-$ are nonnegative and positively homogeneous of degree $p$, with $\vert h\vert=h^++h^-$ and $h=h^+-h^-$. With $A$ as in (\ref{aux77}), using Lemma \ref{lemma0} with $\phi(t)=\exp(-t)$ yields
\begin{eqnarray*}
\int_{\mathbb{R}^n}\vert h\vert\,\exp(-g)\,d\mathbf{x}&=&\frac{\Gamma(1+(n+p)/d)}{n+p}\cdot A(g,\vert h\vert),
\end{eqnarray*}
where all integrals are finite since the right-hand side is finite. Similarly, using Lemma \ref{lemma0} with $\phi(t)={\rm I}_{[0,1]}(t)$ yields
\[\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq1\}}\vert h\vert\,d\mathbf{x}\,=\,\frac{1}{n+p}\cdot A(g,\vert h\vert),\]
and therefore
\[\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}}\vert h\vert\,d\mathbf{x}\,=\,\frac{1}{\Gamma(1+(n+p)/d)}\int_{\mathbb{R}^n}\vert h\vert\,\exp(-g)\,d\mathbf{x}.\]
Next, by homogeneity,
\[\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}\vert h\vert\,d\mathbf{x}\,=\,y^{(n+p)/d}\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}}\vert h\vert\,d\mathbf{x},\]
and so
\begin{equation}
\label{aux66}
\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}\vert h\vert\,d\mathbf{x}\,=\,\frac{y^{(n+p)/d}}{\Gamma(1+(n+p)/d)}\int_{\mathbb{R}^n}\vert h\vert\,\exp(-g)\,d\mathbf{x}.
\end{equation}
Obviously, by the same arguments, (\ref{aux66}) also holds if we replace $\vert h\vert$ with $h^+$ or $h^-$, and finiteness follows because
\[0\,\leq\,\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}} h^+\,d\mathbf{x}\,\leq\,\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}\vert h\vert\,d\mathbf{x},\]
and similarly for $h^-$. Hence (\ref{aux66}) with $h$ in lieu of $\vert h\vert$ (i.e., (\ref{b2})) follows by additivity, since $h=h^+-h^-$. Finally, from what precedes, finiteness of $\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}}\vert h\vert\,d\mathbf{x}$ is equivalent to finiteness of $\int_{\mathbb{R}^n}\vert h\vert\,\exp(-g)\,d\mathbf{x}$.
\end{proof}

When $y=1$, $g$ is a positive definite form with $d\in\mathbb{N}$, and $h$ is the constant function $1$, (\ref{b1}) was already proved in Morosov and Shakirov \cite[(75), p.~39]{morosov1}. In particular, when $g$ is the quadratic form $\mathbf{x}\mapsto g_2(\mathbf{x}):=\frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x}$ for some real symmetric positive definite matrix $\mathbf{Q}$, one retrieves that the volume of the ellipsoid $\xi(y):=\{\mathbf{x}\,:\,g_2(\mathbf{x})\leq y\}$ is simply related to the determinant of $\mathbf{Q}$ by the formula
\begin{equation}
\label{det}
{\rm vol}(\xi(y))\,=\,\frac{y^{n/2}}{\Gamma(1+n/2)}\,\underbrace{\displaystyle\int_{\mathbb{R}^n}\exp\left(-\frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x}\right)d\mathbf{x}}_{=(2\pi)^{n/2}/\sqrt{\det\mathbf{Q}}}.
\end{equation}
So Theorem \ref{thmain} states that the volume of the sub-level set is simply related to the integral of
$\exp(-g(\mathbf{x}))$ over the entire domain $\mathbb{R}^n$, which happens to be simply related to the determinant of $\mathbf{Q}$ when $g$ is the quadratic form $\frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x}$. One goal of the theory of integral discriminants is precisely to express $\int\exp(-g)$ in terms of invariants of $g$ when $g$ is a form; see, e.g., Morosov and Shakirov \cite{morosov1,morosov2} and Shakirov \cite{shakirov}.

\subsection*{Intersection of sub-level sets}

Suppose that, with $h$ positively homogeneous of degree $p$, one wishes to compute the integral $\int_{\Omega}h(\mathbf{x})\,d\mathbf{x}$, where
\[\Omega:=\{\mathbf{x}\in\mathbb{R}^n\,:\,g_k(\mathbf{x})\leq z_k,\quad k=1,\ldots,m\},\]
for $m$ PHFs $g_1,\ldots,g_m$ of degree $0\neq d\in\mathbb{R}$ and some (strictly) positive vector $\mathbf{z}\in\mathbb{R}^m$. Equivalently,
\[\Omega=\{\mathbf{x}\in\mathbb{R}^n\,:\,\tilde{g}_k(\mathbf{x};\mathbf{z})\leq 1,\quad k=1,\ldots,m\},\]
with the functions $\mathbf{x}\mapsto \tilde{g}_k(\mathbf{x};\mathbf{z}):=g_k(z_k^{-1/d}\mathbf{x})$, $k=1,\ldots,m$, which are also PHFs of degree $d$. Hence, with no loss of generality, one may restrict attention to sets of the form
\begin{equation}
\label{newset}
\Omega(y)\,:=\,\{\mathbf{x}\,:\,g_k(\mathbf{x})\leq y,\quad k=1,\ldots,m\}
\end{equation}
for some positive scalar $y\in\mathbb{R}$ and PHFs $g_1,\ldots,g_m$ of the same degree $d\in\mathbb{R}$.
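Returning to the ellipsoid volume formula displayed above, it is easy to test numerically. The following sketch (not part of the paper; plain Python) compares, for a diagonal positive definite $\mathbf{Q}$, the quantity $y^{n/2}(2\pi)^{n/2}/(\Gamma(1+n/2)\sqrt{\det\mathbf{Q}})$ with the exact volume of the ellipsoid $\{\mathbf{x}\,:\,\frac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x}\leq y\}$, whose semi-axes are $\sqrt{2y/q_i}$:

```python
from math import gamma, pi, sqrt, prod

def vol_from_gaussian(q, y):
    """y^{n/2}/Gamma(1+n/2) * (2*pi)^{n/2}/sqrt(det Q), for Q = diag(q)."""
    n = len(q)
    return y ** (n / 2) / gamma(1 + n / 2) * (2 * pi) ** (n / 2) / sqrt(prod(q))

def vol_exact(q, y):
    """Exact volume of the ellipsoid {x : (1/2) x^T diag(q) x <= y}."""
    n = len(q)
    unit_ball = pi ** (n / 2) / gamma(1 + n / 2)  # volume of the unit n-ball
    return unit_ball * prod(sqrt(2 * y / qi) for qi in q)

for q in [(1.0, 1.0), (2.0, 5.0), (1.0, 2.0, 3.0)]:
    for y in (0.5, 1.0, 4.0):
        assert abs(vol_from_gaussian(q, y) - vol_exact(q, y)) < 1e-12
```

For $\mathbf{Q}$ the identity and $y=1/2$, both sides give the volume $\pi$ of the unit disk.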
Notice that $\mathbf{x}\mapsto \psi(\mathbf{x}):=\max[g_1(\mathbf{x}),\ldots,g_m(\mathbf{x})]$ is again a PHF of degree $d$.

\begin{corollary}
\label{coro4}
Let $h$ be a PHF of degree $p\in\mathbb{R}$, let $g_1,\ldots,g_m$ be PHFs of degree $0\neq d\in\mathbb{R}$, assume that the set $\{\mathbf{x}\,:\,g_k(\mathbf{x})\leq 1,\ k=1,\ldots,m\}$ is bounded, and for every $y>0$ let $\Omega(y)$ be as in (\ref{newset}). Then:
\begin{equation}
\label{coro4-2}
{\rm vol}(\Omega(y))\,=\,\frac{y^{n/d}}{\Gamma(1+n/d)}\int_{\mathbb{R}^n}\exp(-\psi)\,d\mathbf{x},
\end{equation}
and
\begin{equation}
\label{coro4-1}
\int_{\Omega(y)}h(\mathbf{x})\,d\mathbf{x}\,=\,\frac{y^{(n+p)/d}}{\Gamma(1+(n+p)/d)}\int_{\mathbb{R}^n}h\,\exp(-\psi)\,d\mathbf{x},
\end{equation}
whenever $\int_{\mathbb{R}^n}\vert h\vert\,\exp(-\psi)\,d\mathbf{x}$ is finite.
\end{corollary}

\begin{proof}
Notice that $\mathbf{x}\mapsto \psi(\mathbf{x}):=\max[g_1(\mathbf{x}),\ldots,g_m(\mathbf{x})]$ is also a PHF of degree $0\neq d\in\mathbb{R}$, and $\Omega(y)=\{\mathbf{x}\,:\,\psi(\mathbf{x})\leq y\}$; so, if $\Omega(y)$ is bounded, then $\psi\in C_d$. Hence (\ref{coro4-2})-(\ref{coro4-1}) are just (\ref{b1})-(\ref{b2}) with $\psi$ in lieu of $g$.
\end{proof}

\subsection{An alternative proof with a duality interpretation}

Next we present an alternative proof of Theorem \ref{thmain} that uses Laplace-transform techniques and provides an interpretation of the result in an appropriate {\it duality} framework. Suppose that $g,h\in C_d$, and let $I_{g,h}$ be the function $y\mapsto I_{g,h}(y):=\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}h\,d\mathbf{x}$. Since $g$ is nonnegative, the function $I_{g,h}$ vanishes on $(-\infty,0]$. Its Laplace transform $\mathcal{L}_{I_{g,h}}:\mathbb{C}\to\mathbb{C}$ is the function
\[\lambda\mapsto \mathcal{L}_{I_{g,h}}(\lambda):=\int_0^\infty\exp(-\lambda y)\,I_{g,h}(y)\,dy,\]
and observe that
\begin{eqnarray*}
\mathcal{L}_{I_{g,h}}(\lambda)&=&\int_0^\infty\exp(-\lambda y)\left(\displaystyle\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}}h\,d\mathbf{x}\right)dy\\
&=&\displaystyle\int_{\mathbb{R}^n}h(\mathbf{x})\left(\int_{g(\mathbf{x})}^\infty \exp(-\lambda y)\,dy\right)d\mathbf{x}\quad\mbox{[by Fubini's theorem]}\\
&=&\frac{1}{\lambda}\displaystyle\int_{\mathbb{R}^n}h(\mathbf{x})\exp(-\lambda g(\mathbf{x}))\,d\mathbf{x}\\
&=&\frac{\lambda^{-p/d}}{\lambda}\displaystyle\int_{\mathbb{R}^n}h(\lambda^{1/d}\mathbf{x})\exp(-g(\lambda^{1/d}\mathbf{x}))\,d\mathbf{x}\quad\mbox{[by homogeneity]}\\
&=&\frac{1}{\lambda^{1+(n+p)/d}}\displaystyle\int_{\mathbb{R}^n}h(\mathbf{z})\exp(-g(\mathbf{z}))\,d\mathbf{z}\quad\mbox{[by $\mathbf{z}=\lambda^{1/d}\mathbf{x}$]}\\
&=&\frac{\displaystyle\int_{\mathbb{R}^n}h(\mathbf{z})\exp(-g(\mathbf{z}))\,d\mathbf{z}}{\Gamma(1+(n+p)/d)}\:\mathcal{L}_{y^{(n+p)/d}}(\lambda).
\end{eqnarray*}
And so, by uniqueness of the Laplace transform,
\[I_{g,h}(y)\,=\,\frac{y^{(n+p)/d}}{\Gamma(1+(n+p)/d)}\displaystyle\int_{\mathbb{R}^n}h(\mathbf{x})\exp(-g(\mathbf{x}))\,d\mathbf{x},\]
which is the desired result. So the above expression of $I_{g,h}(y)$ is obtained by ``inverting'' the Laplace transform $\mathcal{L}_{I_{g,h}}$ at the point $y$, which in fact, as we next see, amounts to solving a ``dual'' problem. For analogy purposes, consider the optimization problem
\[\rho_{g,h}(y)\,=\,\sup_\mathbf{x}\,\{h(\mathbf{x})\,:\,g(\mathbf{x})\leq y\},\]
where $y$ is fixed, $h$ and $-g$ are concave, and $h$ is nonnegative.
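The scaling that drives the dual computation below is that, for $h$ positively homogeneous of degree $p$ and $g$ of degree $d$, one has $\rho_{g,h}(y)=y^{p/d}\,\rho_{g,h}(1)$. A small numerical check (not in the paper; plain Python) with the nonnegative PHF $h(\mathbf{x})=\vert x_1\vert+\vert x_2\vert$ ($p=1$) and $g(\mathbf{x})=x_1^2+x_2^2$ ($d=2$), for which $\rho_{g,h}(y)=\sqrt{2y}$:

```python
from math import sqrt, cos, sin, pi

def h(x1, x2):            # nonnegative PHF of degree p = 1
    return abs(x1) + abs(x2)

def rho(y, samples=100_000):
    """sup{ h(x) : x1^2 + x2^2 <= y }: h grows radially, so sample the circle of radius sqrt(y)."""
    r = sqrt(y)
    return max(h(r * cos(t), r * sin(t))
               for t in (2 * pi * k / samples for k in range(samples)))

for y in (0.25, 1.0, 9.0):
    assert abs(rho(y) - sqrt(2 * y)) < 1e-6           # closed form: rho(y) = sqrt(2y)
    assert abs(rho(y) - y ** 0.5 * rho(1.0)) < 1e-6   # rho(y) = y^{p/d} * rho(1)
```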
Equivalently, $\rho_{g,h}(y)=\exp(\theta_{g,h}(y))$, where $y\mapsto \theta_{g,h}(y)$ is the {\it optimal value function} of the optimization problem
\[\mathbf{P}_y:\quad \theta_{g,h}(y)\,=\,\sup_\mathbf{x}\,\{\ln h(\mathbf{x})\,:\,g(\mathbf{x})\leq y\}.\]
Associated with $\mathbf{P}_y$ is the {\it dual} problem $\mathbf{P}^*_y:\ \inf_\lambda\,\{G(\lambda)\,:\,\lambda\geq0\}$, where $G:\mathbb{R}_+\to\mathbb{R}$ is the function $\lambda\mapsto G(\lambda):=\sup_{\mathbf{x}}\,\{\ln h(\mathbf{x})+\lambda(y-g(\mathbf{x}))\}$. Observe that:
\begin{eqnarray*}
\lambda\mapsto G(\lambda)&=&\lambda y+\sup_{\mathbf{x}}\,\{\ln h(\mathbf{x})-\lambda g(\mathbf{x})\}\\
&=&\lambda y+\sup_{\mathbf{z}}\,\{\ln(\lambda^{-p/d}h(\mathbf{z}))-g(\mathbf{z})\}\quad\mbox{[via $\mathbf{z}=\lambda^{1/d}\mathbf{x}$]}\\
&=&\lambda y-\frac{p}{d}\ln\lambda+\sup_{\mathbf{z}}\,\{\ln h(\mathbf{z})-g(\mathbf{z})\}.
\end{eqnarray*}
And so the dual problem $\mathbf{P}^*_y$ reads
\begin{eqnarray*}
\mathbf{P}^*_y:\quad\gamma&=&\sup_{\mathbf{z}}\,\{\ln h(\mathbf{z})-g(\mathbf{z})\}+\inf_{\lambda\geq0}\,\left\{\lambda y-\frac{p}{d}\ln\lambda\right\}\\
&=&\ln y^{p/d}+\ln\left(\sup_{\mathbf{z}}\,\{h(\mathbf{z})\exp(-g(\mathbf{z}))\}\right)+\frac{p}{d}\left(1-\ln\frac{p}{d}\right).
\end{eqnarray*}
In particular, if $h$ is log-concave and $g$ is convex, then by a standard argument of convex optimization, $\gamma=\theta_{g,h}(y)\,(=\ln\rho_{g,h}(y))$. And therefore
\[\rho_{g,h}(y)\,=\,\exp(\theta_{g,h}(y))\,=\,y^{p/d}\,\frac{\exp(p/d)}{(p/d)^{p/d}}\,\sup_\mathbf{x}\,\{h(\mathbf{x})\exp(-g(\mathbf{x}))\},\]
to be compared with
\[I_{g,h}(y)\,=\,y^{(n+p)/d}\,\frac{1}{\Gamma(1+(n+p)/d)}\,\displaystyle\int_{\mathbb{R}^n}h(\mathbf{x})\exp(-g(\mathbf{x}))\,d\mathbf{x}.\]
Alternatively, the Legendre-Fenchel transform of $\theta_{g,h}$ (in its concave version) is the function
\begin{eqnarray*}
\theta_{g,h}^*(\lambda)&=&\inf_y\,\{\lambda y-\theta_{g,h}(y)\}\\
&=&\inf_y\,\{\lambda y-\sup_\mathbf{x}\,\{\ln h(\mathbf{x})\,:\,g(\mathbf{x})\leq y\}\}\\
&=&\inf_\mathbf{x}\,\{\lambda g(\mathbf{x})-\ln h(\mathbf{x})\}\,=\,-\ln\left(\sup_\mathbf{x}\,\{h(\mathbf{x})\,\exp(-\lambda g(\mathbf{x}))\}\right),
\end{eqnarray*}
and so, when $h$ is log-concave and $g$ is convex, $\theta_{g,h}(y)=(\theta^*_{g,h})^*(y)$, so that
\begin{eqnarray*}
\theta_{g,h}(y)&=&\inf_\lambda\,\{\lambda y-\theta_{g,h}^*(\lambda)\}\\
&=&\inf_\lambda\,\left\{\lambda y+\ln\left(\sup_\mathbf{x}\,\{h(\mathbf{x})\,\exp(-\lambda g(\mathbf{x}))\}\right)\right\}\\
&=&\ln y^{p/d}+\ln\left(\sup_{\mathbf{z}}\,\{h(\mathbf{z})\exp(-g(\mathbf{z}))\}\right)+\frac{p}{d}\left(1-\ln\frac{p}{d}\right).
\end{eqnarray*}
In summary, to the Laplace-transform step
\begin{equation}
\label{laplace}
\mathcal{L}_{I_{g,h}}(\lambda)\,=\,\int_0^\infty \exp(-\lambda y)\,I_{g,h}(y)\,dy
\end{equation}
in the usual $(\cdot,+)$-algebra corresponds the Legendre-Fenchel transform
\begin{eqnarray*}
\theta_{g,h}^*(\lambda)\,=\,\inf_y\,\{\lambda y-\theta_{g,h}(y)\}&=&-\sup_y\,\{-\lambda y+\theta_{g,h}(y)\}\\
&=&-\ln\,\sup_y\,\{\exp(-\lambda y)\,\rho_{g,h}(y)\},
\end{eqnarray*}
where, to emphasize the analogy, the latter term can be written
\[\ln\left(``\!\int"\,\exp(-\lambda y)\,\rho_{g,h}(y)\right),\]
i.e., the ``$\sup$'' operator is integration ``$\int$'' in the $(\max,+)$-algebra. And with this convention,
\[\exp(\theta_{g,h}^*(\lambda))\,\simeq\,``\!\int"\,\exp(-\lambda y)\,\rho_{g,h}(y)\]
(compare with (\ref{laplace})). Similarly, the Laplace inverse-transform step
\begin{equation}
\label{laplaceinv}
I_{g,h}(y)\,=\,\int_{\omega-i\infty}^{\omega+i\infty}\exp(\lambda y)\,\mathcal{L}_{I_{g,h}}(\lambda)\,d\lambda
\end{equation}
(where $\omega\in\mathbb{C}$ is on the right of all singularities of $\mathcal{L}_{I_{g,h}}$), which yields
\[I_{g,h}(y)\,=\,\frac{y^{(n+p)/d}}{\Gamma(1+(n+p)/d)}\int_{\mathbb{R}^n}h\,\exp(-g)\,d\mathbf{x},\]
is solving the ``dual'' and corresponds to the Legendre-Fenchel transform (which is involutive) applied to $\theta_{g,h}^*$:
\[\exp(\theta_{g,h}(y))\,=\,\exp\left(\inf_\lambda\,\{\lambda y-\theta_{g,h}^*(\lambda)\}\right)\,\simeq\,-``\!\int"\,\exp(-\lambda y)\,\exp(\theta_{g,h}^*(\lambda))\,d\lambda\]
(compare with (\ref{laplaceinv})), and which yields
\[\exp(\theta_{g,h}(y))\,=\,y^{p/d}\,\frac{\exp(p/d)}{(p/d)^{p/d}}\:``\!\int"\,h(\mathbf{x})\exp(-g(\mathbf{x})).\]
A rigorous
analysis of the links between integration and linear optimization on a polytope has already been carried out in \cite{lasserrebook}.

\subsection{Approximating non Gaussian integrals}

As we have already mentioned, in some cases the non Gaussian integral can be computed explicitly in terms of certain algebraic invariants of $g$; see e.g. Morosov and Shakirov \cite{morosov1} and Shakirov \cite{shakirov}. But so far there is no general formula, and therefore an alternative is to seek a numerical scheme for its evaluation or, at least, its approximation. For this purpose, we next show that Theorem \ref{thmain} is helpful, as it provides a means to compute any moment of the measure $d\mu=\exp(-g)\,d\mathbf{x}$ on $\mathbb{R}^n$ by computing the same moment of the Lebesgue measure on the sublevel set $\{\mathbf{x}: g(\mathbf{x})\leq 1\}$. Indeed, for every $\alpha\in\mathbb{N}^n$, letting $\mathbf{x}\mapsto h(\mathbf{x}):=\mathbf{x}^\alpha$ in Theorem \ref{thmain} yields
\begin{equation}
\label{moment}
\int_{\mathbb{R}^n} \mathbf{x}^\alpha\exp(-g(\mathbf{x}))\,d\mathbf{x}\,=\,\Gamma(1+(n+\vert\alpha\vert)/d)\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq1\}}\mathbf{x}^\alpha\,d\mathbf{x},
\end{equation}
where $\vert\alpha\vert=\sum_i\alpha_i$. Have we made any progress with this equivalence? The answer is yes.
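Identity (\ref{moment}) is also easy to confirm numerically. The short script below does so for $n=1$, $g(x)=x^4$ (so $d=4$) and $\alpha=2$; the quadrature routine and the truncation of the integration domain are ad hoc numerical choices, not part of the development above.

```python
import math

def simpson(f, a, b, m=100000):
    # composite Simpson rule with an even number m of subintervals
    step = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * step)
    return s * step / 3

n, d, alpha = 1, 4, 2                 # g(x) = x^4, h(x) = x^alpha
# left-hand side of (moment); the tails beyond |x| = 10 are negligible
lhs = simpson(lambda x: x**alpha * math.exp(-x**4), -10.0, 10.0)
# right-hand side: Gamma(1 + (n+|alpha|)/d) times the same moment of
# Lebesgue measure on the sublevel set {x : x^4 <= 1} = [-1, 1]
rhs = math.gamma(1 + (n + alpha) / d) * simpson(lambda x: x**alpha, -1.0, 1.0)
print(lhs, rhs)                       # both equal Gamma(3/4)/2 = 0.6127...
```

Both sides reduce to $\Gamma(3/4)/2$ in this example, so the agreement also checks the closed form.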
If $g$ is a (not necessarily homogeneous) polynomial, it turns out that every moment of the Lebesgue measure on the sublevel set $\{\mathbf{x}\,:\,g(\mathbf{x})\leq1\}$ can be approximated as closely as desired by solving a hierarchy of semidefinite programs\footnote{A semidefinite program is a finite-dimensional convex conic optimization problem that, up to arbitrary (fixed) precision, can be solved efficiently, i.e., in time polynomial in the input size of the problem. For more details the interested reader is referred to e.g. \cite{wolko}.}, as described in Henrion et al. \cite{sirev}. In fact, for every fixed $\alpha\in\mathbb{N}^n$, the moment
\[z_\alpha\,:=\,\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq1\}}\mathbf{x}^\alpha\,d\mathbf{x}\]
can be approximated to arbitrary precision $\epsilon>0$, fixed in advance, by solving two sequences of semidefinite programs: one provides a monotone non increasing sequence of upper bounds $u_k$, $k\in\mathbb{N}$, while the other provides a monotone non decreasing sequence of lower bounds $\ell_k$, $k\in\mathbb{N}$. The procedure stops whenever $u_k-\ell_k<\epsilon$, in which case one may set
\[z_\alpha\approx \tilde{z}_\alpha\,:=\,(u_k+\ell_k)/2.\]
This requires solving two sequences of semidefinite programs for each $\alpha\in\mathbb{N}^n$.
In fact, if one is willing to relax the monotonicity property of the upper and lower bounds $\{u_k,\ell_k\}$, it is enough to solve a single sequence of semidefinite programs, e.g., the one defined to approximate the mass $z_0$. Then if $d\in\mathbb{N}$ and $\epsilon >0$ are fixed and $k$ is large enough, not only is $\vert u_k-z_0\vert<\epsilon$, but from the solution of the semidefinite program at step $k$ one also obtains scalars $\tilde{z}_\alpha$ such that $\vert \tilde{z}_\alpha-z_\alpha\vert<\epsilon$ for all $\alpha\in\mathbb{N}^n$ with $\vert\alpha\vert <d$. However, in contrast to the case of upper and lower bounds, there is no simple stopping criterion that guarantees the $\epsilon$-approximation. For more details, the interested reader is referred to Henrion et al. \cite{sirev}.
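To give the flavor of the bracketing-and-stopping idea in the simplest possible setting, the toy script below brackets the mass $z_0={\rm vol}(\{\mathbf{x}:g(\mathbf{x})\leq1\})$ for $g(x_1,x_2)=x_1^2+x_2^2$ (so $z_0=\pi$) between inner and outer grid counts, refining the grid until the gap closes. This is emphatically not the semidefinite hierarchy of \cite{sirev} (the grid, the bounding box and the tolerance are ad hoc choices), but the pair of bounds and the stopping rule $u_k-\ell_k<\epsilon$ play the same role.

```python
# Toy bracketing of z_0 = vol{g <= 1} for g(x1,x2) = x1^2 + x2^2 (so z_0 = pi).
# NOT the semidefinite hierarchy: just inner/outer grid counts sharing the
# stopping rule u_k - l_k < eps.
def bracket_mass(eps=0.2, box=1.5):
    g = lambda x, y: x * x + y * y
    k = 8
    while True:
        cell = 2 * box / k
        lower = upper = 0.0
        for i in range(k):
            for j in range(k):
                x0, x1 = -box + i * cell, -box + (i + 1) * cell
                y0, y1 = -box + j * cell, -box + (j + 1) * cell
                # g is convex, so its maximum over the cell sits at a corner
                gmax = max(g(x, y) for x in (x0, x1) for y in (y0, y1))
                # minimum of this radial g over the cell: evaluate at the
                # point of the cell closest to the origin
                nx = 0.0 if x0 <= 0.0 <= x1 else min(abs(x0), abs(x1))
                ny = 0.0 if y0 <= 0.0 <= y1 else min(abs(y0), abs(y1))
                if gmax <= 1.0:
                    lower += cell * cell      # cell entirely inside {g <= 1}
                if g(nx, ny) <= 1.0:
                    upper += cell * cell      # cell meets {g <= 1}
        if upper - lower < eps:               # stopping rule u_k - l_k < eps
            return lower, upper
        k *= 2

lo, up = bracket_mass()
print(lo, up)                                 # brackets pi = 3.14159...
```

For this particular $g$ the two bounds are rigorous: the inner test uses convexity of $g$ (its maximum over a cell is attained at a corner), and the outer test uses that $g$ is radial. For a general $g$ one needs certified bounds such as those produced by the semidefinite hierarchy.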
And therefore, once the $\tilde{z}_\alpha$ have been computed, using Theorem \ref{thmain} we obtain:
\[\left\vert \int_{\mathbb{R}^n} \mathbf{x}^\alpha\exp(-g(\mathbf{x}))\,d\mathbf{x}\,-\,\tilde{z}_\alpha\,\Gamma(1+(n+\vert\alpha\vert)/d) \right\vert\,<\,\epsilon\,\Gamma(1+(n+\vert\alpha\vert)/d),\quad\forall\alpha\in\mathbb{N}^n_d,\]
which provides an explicit approximation guarantee for the non Gaussian integral $\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}$ (and more generally for the integral $\int_{\mathbb{R}^n} h\exp(-g)\,d\mathbf{x}$ whenever $h$ is any polynomial).

\subsection{Sensitivity analysis and convexity}

Recall that when $d\in\mathbb{N}$, $\mathbf{P}[\mathbf{x}]_{d}\subset C_d$ is the convex cone of nonnegative homogeneous polynomials of degree $d$ whose sublevel set $\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$ is compact. Formula (\ref{b2}) of Theorem \ref{thmain} allows us to provide insights into the function $f_h:C_d\to\mathbb{R}$ defined by:
\begin{equation}
\label{def-f}
g\mapsto f_h(g)\,:=\,\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}} h(\mathbf{x})\,d\mathbf{x},\qquad g\in\,C_d,
\end{equation}
where $h$ is a PHF.
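As a quick sanity check of definition (\ref{def-f}), the script below evaluates $f_h(\lambda g)$ for $n=1$, $g(x)=x^2$, $h(x)=x^2$, and confirms the scaling $f_h(\lambda g)=\lambda^{-(n+p)/d}f_h(g)$ established in Corollary \ref{coro-convex} below; the midpoint quadrature is an ad hoc numerical choice.

```python
import math

def f_h(lam, m=200001):
    # f_h(lam * g) = integral of h(x) = x^2 over {x : lam * x^2 <= 1} = [-r, r],
    # computed with a composite midpoint rule
    r = 1.0 / math.sqrt(lam)
    step = 2.0 * r / m
    return sum((-r + (i + 0.5) * step) ** 2 * step for i in range(m))

n, p, d = 1, 2, 2                     # h is a PHF of degree p = 2, g of degree d = 2
lam = 3.0
print(f_h(lam), lam ** (-(n + p) / d) * f_h(1.0))   # both equal 2/(3*3^(3/2)) = 0.1283...
```

Here $f_h(1.0)=\int_{-1}^{1}x^2\,dx=2/3$, and scaling by $\lambda^{-(n+p)/d}=\lambda^{-3/2}$ matches the direct computation.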
This is of interest, in particular, when one wishes to see how $f_h$ changes as some coefficient of $g\in \mathbf{P}[\mathbf{x}]_d$ varies. Notice that the restriction of $f_h$ to $\mathbf{P}[\mathbf{x}]_d$ may be seen as a function $f_h:\mathbb{R}^{\ell(d)}\to\mathbb{R}$ of the coefficient vector of the polynomial $g\in\mathbf{P}[\mathbf{x}]_d$, where $\ell(d)={n+d-1\choose d}$. Before proceeding further we need the following result. Recall that the support ${\rm supp}\,\mu$ of a Borel measure $\mu$ on $\mathbb{R}^n$ is the smallest closed set $A$ such that $\mu(\mathbb{R}^n\setminus A)=0$. Let $C^0_d:=\{ g\in C_d: \mbox{ $g$ is continuous}\}\subset C_d$.

\begin{lemma}
\label{lemma-aux2}
Let $\mu$ be a non trivial $\sigma$-finite Borel measure on $\mathbb{R}^n$ and let $\Theta_\mu\subset\mathbb{R}[\mathbf{x}]_d$ be the convex cone of polynomials $g$ of degree at most $d$ such that $\int\exp(-g)\,d\mu <\infty$. Then:

{\rm (a)} With $d\in\mathbb{R}$, the function $f:\,C_d\to\mathbb{R}$, $g\mapsto f(g):=\int\exp(-g)\,d\mu$, is convex (and strictly convex on $C^0_d$ if ${\rm supp}\,\mu=\mathbb{R}^n$).
{\rm (b)} If $\mu(O)>0$ for some open set $O\subset\mathbb{R}^n$ and if $d\in\mathbb{N}$, then $f$ is strictly convex and twice differentiable on ${\rm int}(\Theta_\mu)$, with:
\begin{eqnarray}
\label{aux2-1}
\frac{\partial f(g)}{\partial g_\alpha}&=&-\int \mathbf{x}^\alpha\,\exp(-g)\,d\mu,\qquad\forall\alpha,\,\:\vert\alpha\vert\leq d.\\
\label{aux2-2}
\frac{\partial^2 f(g)}{\partial g_\alpha\partial g_\beta}&=&\int \mathbf{x}^{\alpha+\beta}\,\exp(-g)\,d\mu,\qquad\forall\alpha,\beta,\,\:\vert\alpha\vert,\:\vert\beta\vert\leq d.
\end{eqnarray}
\end{lemma}

\begin{proof}
(a) Observe that $f$ is nonnegative. Let $\alpha\in [0,1]$ and let $g,q\in C_d$.
To prove $f(\alpha g+(1-\alpha) q)\leq\alpha f(g)+(1-\alpha)f(q)$, we need only consider the case $f(g),f(q)<+\infty$, for which we have
\[f(\alpha g+(1-\alpha)q)\,=\,\int \exp(-\alpha g-(1-\alpha)q)\,d\mu.\]
By convexity of $u\mapsto \exp(-u)$,
\begin{eqnarray*}
f(\alpha g+(1-\alpha)q)&\leq&\displaystyle\int [\,\alpha\exp(-g)+(1-\alpha)\exp(-q)\,]\,d\mu\\
&=&\alpha f(g)+(1-\alpha)f(q),
\end{eqnarray*}
and so $f$ is convex. Now, in view of the strict convexity of $u\mapsto \exp(-u)$, equality may occur only if $g(\mathbf{x})=q(\mathbf{x})$, $\mu$-almost everywhere. If $g,q\in C^0_d$, the set $\Delta:=\{\mathbf{x} :g(\mathbf{x})-q(\mathbf{x})=0\}$ is closed, and so if $\Delta\neq\mathbb{R}^n$ then $\mu(\Delta)<\mu(\mathbb{R}^n)$ because ${\rm supp}\,\mu=\mathbb{R}^n$. Therefore equality occurs only if $g=q$, so that $f$ is strictly convex on $C^0_d$.\\
(b) Next, if $d\in\mathbb{N}$ and $g\in{\rm int}(\Theta_\mu)$, write $g$ in the canonical basis as $g(\mathbf{x})=\sum_\alpha g_\alpha\mathbf{x}^\alpha$.
For every $\alpha\in\mathbb{N}^n_d$, let $e_\alpha=(e_\alpha(\beta))\in\mathbb{R}^{s(d)}$ be such that $e_\alpha(\beta)=\delta_{\beta=\alpha}$ (with $\delta$ being the Kronecker symbol). Then for every $t>0$,
\[\frac{f(g+te_\alpha)-f(g)}{t}\,=\,\int \exp(-g)\,\left(\underbrace{\frac{\exp(-t\mathbf{x}^\alpha)-1}{t}}_{\psi(t,\mathbf{x})}\right)\,d\mu(\mathbf{x}).\]
Notice that for every $\mathbf{x}$, by convexity of the function $t\mapsto \exp(-t\mathbf{x}^\alpha)$,
\[\lim_{t\downarrow0}\psi(t,\mathbf{x})\,=\,\inf_{t>0}\psi(t,\mathbf{x})\,=\,\left.\frac{d}{dt}\exp(-t\mathbf{x}^\alpha)\right\vert_{t=0}\,=\,-\mathbf{x}^\alpha,\]
because for every $\mathbf{x}$, the function $t\mapsto \psi(t,\mathbf{x})$ is nondecreasing; see e.g. Rockafellar \cite[Theorem 23.1]{rockafellar}.
Hence, the one-sided directional derivative $f'(g;e_\alpha)$ in the direction $e_\alpha$ satisfies
\begin{eqnarray*}
f'(g;e_\alpha)&=&\lim_{t\downarrow 0}\frac{f(g+te_\alpha)-f(g)}{t}\,=\, \lim_{t\downarrow 0}\int \exp(-g)\,\psi(t,\mathbf{x})\,d\mu(\mathbf{x})\\
&=&\int \exp(-g)\,\lim_{t\downarrow 0}\psi(t,\mathbf{x})\,d\mu(\mathbf{x})\,=\,-\int\mathbf{x}^\alpha \exp(-g)\,d\mu(\mathbf{x}),
\end{eqnarray*}
where the third equality follows from the Extended Monotone Convergence Theorem \cite[1.6.7]{ash}. Indeed, for all $t<t_0$ with $t_0$ sufficiently small, the function $\psi(t,\cdot)$ is bounded above by $\psi(t_0,\cdot)$ and $\int \exp(-g)\psi(t_0,\mathbf{x})\,d\mu<\infty$.
Similarly, for every $t>0$,
\[\frac{f(g-te_\alpha)-f(g)}{t}\,=\, \int\exp(-g)\,\underbrace{\frac{\exp(t\mathbf{x}^\alpha)-1}{t}}_{\xi(t,\mathbf{x})}\,d\mu(\mathbf{x}),\]
and by convexity of the function $t\mapsto \exp(t\mathbf{x}^\alpha)$,
\[\lim_{t\downarrow0}\xi(t,\mathbf{x})\,=\,\inf_{t>0}\xi(t,\mathbf{x})\,=\,\left.\frac{d}{dt}\exp(t\mathbf{x}^\alpha)\right\vert_{t=0}\,=\,\mathbf{x}^\alpha.\]
Therefore, with exactly the same arguments as before,
\begin{eqnarray*}
f'(g;-e_\alpha)&=&\lim_{t\downarrow 0}\frac{f(g-te_\alpha)-f(g)}{t}\\
&=&\int\mathbf{x}^\alpha\exp(-g)\,d\mu(\mathbf{x})=-f'(g;e_\alpha),
\end{eqnarray*}
and so
\[\frac{\partial f(g)}{\partial g_\alpha}\,=\,-\int_{\mathbb{R}^n}\mathbf{x}^\alpha\,\exp(-g)\,d\mu(\mathbf{x})\]
for every $\alpha$ with $\vert\alpha\vert\leq d$, which yields (\ref{aux2-1}). Similar arguments can be used for the Hessian $\nabla^2f(g)$, which yields (\ref{aux2-2}).
So the Hessian $\nabla^2f(g)$ is the matrix $\mathbf{M}_d\in\mathcal{S}_{\ell(d)}$ whose rows and columns are indexed in the set $\Gamma_d:=\{\alpha\in\mathbb{N}^n:\vert\alpha\vert=d\}$ and with entries
\[\mathbf{M}_d(\alpha,\beta)\,=\,\int_{\mathbb{R}^n}\mathbf{x}^{\alpha+\beta}\,\underbrace{\exp(-g)\,d\mu}_{d\nu},\qquad \alpha,\beta\in\Gamma_d,\]
i.e., $\mathbf{M}_d$ is the matrix of $2d$-moments of the finite Borel measure $\nu$. Let $\mathbf{h}\in\mathbb{R}^{\ell(d)}$ be the coefficient vector of an arbitrary non trivial homogeneous polynomial $h\in\mathbb{R}[\mathbf{x}]_d$. Then
\[\langle\mathbf{h},\mathbf{M}_d\mathbf{h}\rangle\:\left(\,=\,\int_{\mathbb{R}^n}h(\mathbf{x})^2\,d\nu(\mathbf{x})\right)\,>\,0\]
because $\mu(O)>0$ (hence $\nu(O)>0$) on some open set $O\subset\mathbb{R}^n$. Therefore $\nabla^2f(g)\succ0$, which in turn implies that $f$ is strictly convex on ${\rm int}(\Theta_\mu)$.
\end{proof}

\begin{corollary}
\label{coro-convex}
Let $h$ be a PHF of degree $p\in\mathbb{R}$ and, with $0\neq d\in\mathbb{R}$, consider the function $f_h: C_d\to\mathbb{R}$ defined by:
\begin{equation}
\label{coro1-0}
g\,\mapsto\quad f_h(g)\,:=\,\displaystyle\int_{\{\mathbf{x}:g(\mathbf{x})\leq 1\}}h(\mathbf{x})\,d\mathbf{x},\qquad \forall g\in C_d.
\end{equation}
The function $f_h$ is a PHF of degree $-(n+p)/d$, and convex whenever $h$ is nonnegative (and strictly convex if $h>0$ on $\mathbb{R}^n\setminus\{0\}$). In addition, if $d\in\mathbb{N}$ and $h$ is continuous with $\int \vert h\vert \exp(-g)\,d\mathbf{x}<\infty$, then:\\
{\rm (a)} $f_h$ is twice differentiable on ${\rm int}(\mathbf{P}[\mathbf{x}]_d)$, and for every $\alpha,\beta\in\mathbb{N}^n$ with $\vert\alpha\vert=\vert\beta\vert=d$:
\begin{eqnarray}
\label{coro1-1}
\frac{\partial f_h(g)}{\partial g_\alpha}&=&\frac{-1}{\Gamma(1+(n+p)/d)} \displaystyle\int_{\mathbb{R}^n} \mathbf{x}^\alpha \,h(\mathbf{x})\exp(-g(\mathbf{x}))\,d\mathbf{x}\\
\label{coro1-2}
&=&\frac{-\Gamma(2+(n+p)/d)}{\Gamma(1+(n+p)/d)}\displaystyle\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}} \mathbf{x}^\alpha \,h(\mathbf{x})\,d\mathbf{x}.\\
\label{coro1-3}
\frac{\partial^2 f_h(g)}{\partial g_\alpha\partial g_\beta}&=& \frac{1}{\Gamma(1+(n+p)/d)}\displaystyle\int_{\mathbb{R}^n} \mathbf{x}^{\alpha+\beta} \,h(\mathbf{x})\exp(-g(\mathbf{x}))\,d\mathbf{x}.
\end{eqnarray}
{\rm (b)} If $h$ is non trivial and nonnegative, then $f_h$ is strictly convex on ${\rm int}(\mathbf{P}[\mathbf{x}]_d)$, and its Hessian $\nabla^2f_h(g)$ is the matrix of $2d$-moments of the measure
\[\nu(B)=\frac{1}{\Gamma(1+(n+p)/d)}\displaystyle\int_{B} h\,\exp(-g)\,d\mathbf{x},\qquad B\in\mathcal{B}.\]
\end{corollary}

\begin{proof}
With $\lambda>0$ and $g\in C_d$,
\[f_h(\lambda g)=\int_{\{\mathbf{x}:\lambda g(\mathbf{x})\leq1\}}h\,d\mathbf{x}=\int_{\{\mathbf{x}:g(\lambda^{1/d}\mathbf{x})\leq1\}}h\,d\mathbf{x},\]
and so, with the change of variable $\mathbf{z}=\lambda^{1/d}\mathbf{x}$, one obtains $f_h(\lambda g)=\lambda^{-(n+p)/d}f_h(g)$, i.e., $f_h$ is a PHF of degree $-(n+p)/d$. Next, first consider the case where $h$ is nonnegative, and let $\mu$ be the $\sigma$-finite measure defined by $\mu(B):=\displaystyle\int_Bh(\mathbf{x})\,d\mathbf{x}$ for every Borel set $B$ of $\mathbb{R}^n$.
By Theorem \ref{thmain}, whenever $g\in C_d$ and $f_h(g)$ is finite,
\[f_h(g)\,=\,\frac{1}{\Gamma(1+(n+p)/d)}\displaystyle\int_{\mathbb{R}^n}\exp(-g)\,d\mu.\]
Hence by Lemma \ref{lemma-aux2}, $f_h$ is convex (and strictly convex if $h>0$ on $\mathbb{R}^n\setminus\{0\}$). In addition, if $d\in\mathbb{N}$ and $h$ is continuous, $f_{h}(g)<\infty$ whenever $g\in{\rm int}(\mathbf{P}[\mathbf{x}]_d)$ (and so $g\in{\rm int}(\Theta_\mu)$). Moreover, $h$ being non trivial, nonnegative and continuous, $h>0$ on some open set $O$, and so $\mu(O)>0$. Therefore, by Lemma \ref{lemma-aux2}(b), $f_h$ is twice differentiable and strictly convex on ${\rm int}(\mathbf{P}[\mathbf{x}]_d)$, and (\ref{aux2-1})-(\ref{aux2-2}) yield (\ref{coro1-1}) and (\ref{coro1-3}), while (\ref{coro1-2}) follows from (\ref{coro1-1}) and Theorem \ref{thmain}. That $\nabla^2f_h(g)$ is the matrix of $2d$-moments of the measure $d\nu=h\exp(-g)\,d\mathbf{x}$ follows from (\ref{coro1-3}). This proves (b). To prove (a) when $h$ is not nonnegative, write $h=h^+-h^-$ with $h^+:=\max[0,h]$ and $h^-:=\max[0,-h]$. Both $h^+$ and $h^-$ are continuous nonnegative PHFs of degree $p$. Moreover, $f_h=f_{h^+}-f_{h^-}$, and so applying (b) to $f_{h^+}$ and $f_{h^-}$ yields (a) by additivity.
\end{proof}

\begin{remark}
\label{remark-PHF}
(a) Notice that proving convexity of $f_h$ directly from its definition (\ref{coro1-0}) is not obvious at all, whereas it becomes much easier when using Theorem \ref{thmain}.
(b) In Lemma \ref{lemma-aux2}, differentiability of $f$ on the convex cone $C_d$ should now be understood in the sense of G\^ateaux-differentiability, not explored here.
\end{remark}

We end with the following relatively surprising results which, even though they are again particular cases of Lemma \ref{lemma0}, deserve special mention.

\begin{lemma}
\label{lemma4}
Let $y\geq0$ be fixed, let $h$ be a PHF of degree $p\in\mathbb{R}$, and let $\xi,\psi:\mathbb{R}_+\to\mathbb{R}$ be measurable functions such that
\[\displaystyle\int_{\{t\,:\,\psi(t^d)\leq y\}}t^{n+p-1}\xi(t^d)\,dt\,<\,+\infty.\]
Let $f_h:C_d\to\mathbb{R}$, $0\neq d\in\mathbb{R}$, be the function:
\[g\mapsto f_h(g)\,:=\,\int_{\{\mathbf{x}\,:\,\psi(g(\mathbf{x}))\leq y\}}\xi(g(\mathbf{x}))\,h(\mathbf{x})\,d\mathbf{x},\qquad g\in C_d.\]
Then whenever $\int \vert h\vert \exp(-g)\,d\mathbf{x}<+\infty$, $f_h(g)$ is finite, and
\begin{equation}
\label{lemma4-1}
f_h(g)\,=\,\underbrace{\frac{d\,\displaystyle\int_{\{t\,:\,\psi(t^d)\leq y\}}t^{n+p-1}\xi(t^d)\,dt}{\Gamma((n+p)/d)}}_{{\rm cte}(y,p,d,\xi,\psi)}\cdot \int_{\mathbb{R}^n}h\exp(-g)\,d\mathbf{x},
\end{equation}
where the constant depends only on $\xi,\psi,d,p,y$, and neither on $g$ nor on $h$. Therefore $f_h$ is convex whenever $h$ is nonnegative (and strictly convex on $C^0_d$ whenever $h>0$ on $\mathbb{R}^n\setminus\{0\}$). In particular,
\begin{eqnarray}
\label{coroeuler-1}
\int_{\mathbb{R}^n} gh\,\exp(-g)\,d\mathbf{x}&=&\frac{n+p}{d}\int_{\mathbb{R}^n}h\,\exp(-g)\,d\mathbf{x}\\
\label{coroeuler-2}
\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}} gh\,d\mathbf{x}&=&\frac{d}{(n+p+d)\,\Gamma((n+p)/d)}\int_{\mathbb{R}^n}h\,\exp(-g)\,d\mathbf{x}\\
\label{lemma4-2}
\frac{\displaystyle\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}} \exp(-g)\,d\mathbf{x}}{\displaystyle\int_{\mathbb{R}^n}\,\exp(-g)\,d\mathbf{x}}&=&\frac{\displaystyle\int_0^y\,\exp(-z)z^{n/d-1}\,dz}{\Gamma(n/d)}\\
\label{lemma4-3}
\int_{\{\mathbf{x}\,:\,g(\mathbf{x})\leq y\}} \exp(g)\,d\mathbf{x}&=&\frac{\displaystyle\int_{\mathbb{R}^n}\,\exp(-g)\,d\mathbf{x}}{\Gamma(n/d)}\displaystyle\int_0^y\,\exp(z)z^{n/d-1}\,dz.
\end{eqnarray}
\end{lemma}

\begin{proof}
We first assume that $h$ is nonnegative and $\int h\exp(-g)\,d\mathbf{x}<+\infty$, so that (\ref{lemma4-1}) follows from Lemma \ref{lemma0} with $t\mapsto \phi(t):=\xi(t)I_{[0,y]}(\psi(t))$, which yields $f_h(g)=c\cdot A(g,h)$, with $A(\cdot,\cdot)$ as in (\ref{A}) and
\[c\,=\,\int_{\{t\,:\,\psi(t^d)\leq y\}}t^{n+p-1}\xi(t^d)\,dt;\]
the result then follows by recalling that with $\phi(t)=\exp(-t)$ one had already obtained in (\ref{AA})
\[A(g,h)\,=\,\frac{d}{\Gamma((n+p)/d)}\,\int_{\mathbb{R}^n}h\,\exp(-g)\,d\mathbf{x}\:(\,<\,+\infty).\]
When $h$ is not nonnegative, writing $h=h^+-h^-$, where both $h^+$ and $h^-$ are also PHFs of degree $p$, the result follows by additivity, since $\int \vert h\vert \exp(-g)\,d\mathbf{x}<+\infty$ only if both $\int h^+\exp(-g)\,d\mathbf{x}$ and $\int h^-\exp(-g)\,d\mathbf{x}$ are finite.
Finally, (\ref{coroeuler-1})--(\ref{lemma4-3}) are special cases of (\ref{lemma4-1}) with the respective choices $t\mapsto\psi(t):=0$, $\xi(t):=\exp(-t)$; then $t\mapsto\psi(t):=t$, $\xi(t):=t$; and finally $t\mapsto\psi(t):=t$ with $\xi(t):=\exp(-t)$ and with $\xi(t):=\exp(t)$. At last, when $h$ is nonnegative, convexity and strict convexity follow from Lemma \ref{lemma-aux2}(a) with $d\mu=h\,d\mathbf{x}$ (and so ${\rm supp}\,\mu=\mathbb{R}^n$ if $h>0$).
\end{proof}
So Lemma \ref{lemma4} shows that $f$ is convex provided that $h$ is nonnegative, no matter how the functions $\xi$ and $\psi$ behave! Next, if $\mu$ is the non-Gaussian measure $d\mu=\exp(-g)\,d\mathbf{x}$ on $\mathbb{R}^n$, then (\ref{lemma4-2}) shows how fast $\mu(\{\mathbf{x}:g(\mathbf{x})\leq y\})$ converges to the non-Gaussian integral $\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}$ as $y\to\infty$: it converges exactly as fast as the one-dimensional integral $\int_0^y t^{n/d-1}\exp(-t)\,dt$ converges to the Gamma function $\Gamma(n/d)$.
\subsection{Polarity}
We now investigate the {\it polar} set $G^\circ$ of the sublevel set $G:=\{\mathbf{x}\,:\,g(\mathbf{x})\leq1\}$, assumed to be compact, when $g$ is a proper closed convex PHF.
In this case $G$ is a convex body and $g$ is in fact a {\it gauge}. The polar $C^\circ$ of a set $C\subset\mathbb{R}^n$ is the convex set defined by
\[C^\circ\,=\,\{\mathbf{x}\in\mathbb{R}^n\::\: \sigma_C(\mathbf{x})\,\leq\,1\}\quad\mbox{with }\sigma_C(\mathbf{x}):=\sup_{\mathbf{y}}\,\{\langle\mathbf{x},\mathbf{y}\rangle\,:\,\mathbf{y}\in C\},\]
and $(C^\circ)^\circ$ is the smallest convex balanced set that contains $C$. Recall that the Legendre-Fenchel {\it conjugate} $f^*:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty,-\infty\}$ of $f:\mathbb{R}^n\to\mathbb{R}$ is defined by
\[f^*(\mathbf{u})\,:=\,\sup_{\mathbf{x}}\:\{\langle\mathbf{u},\mathbf{x}\rangle -f(\mathbf{x})\}\qquad \mathbf{u}\in\mathbb{R}^n.\]
The conjugate $g^*$ of a PHF $g$ of degree $d\in\mathbb{R}$ is itself a PHF of degree $q$ with $\frac{1}{d}+\frac{1}{q}=1$ (where $d$ need not be positive); see e.g. Lasserre \cite{jca}.
\begin{proposition}
\label{th-polar}
Let $g$ be a closed proper convex PHF of degree $1<d\in\mathbb{R}$, and let $G:=\{\mathbf{x}\,:\,g(\mathbf{x})\leq1/d\}$. Then with $\frac{1}{d}+\frac{1}{q}=1$,
\begin{eqnarray}
\label{th-polar-1}
G^\circ&=&\{\mathbf{x}\in\mathbb{R}^n\::\: g^*(\mathbf{x})\,\leq\,1/q\},
\end{eqnarray}
where $g^*$ is the Legendre-Fenchel conjugate of $g$.
In other words, $G^\circ$ is a sublevel set of the PHF $g^*$ of degree $q$. Moreover, if $G$ is bounded then
\begin{equation}
\label{th-polar-3}
{\rm vol}\,(G^\circ)\,=\,\frac{1}{q^{n/q}\,\Gamma(1+n/q)}\int_{\mathbb{R}^n}\exp(-g^*)\,d\mathbf{x}.
\end{equation}
\end{proposition}
\begin{proof}
(\ref{th-polar-1}) is from Rockafellar \cite[Corollary 15.3.2]{rockafellar}, and (\ref{th-polar-3}) follows from Theorem \ref{thmain} applied to the PHF $g^*$ and with $y=1/q$.
\end{proof}
\begin{example}
\label{ex1}
Let $x\mapsto g(x):=\vert x\vert^3$. Hence $d=3$, $q=3/2$, $G:=\{x:g(x)\leq 1\}=[-1,1]$, $\sigma_G(x)=\vert x\vert$, and $g^*(x)=\frac{2}{3\sqrt{3}}\vert x\vert^{3/2}$. One retrieves that $G^\circ=G=[-1,1]=\{x:g^*(x)\leq (d-1)/d^q\}$.
\end{example}
\begin{example}
\label{ex2}
Let $x\mapsto g(x):=\vert x\vert$, so that $G=[-1,1]$. As $g^*(x)=0$ if $x\in [-1,1]$ and $+\infty$ otherwise (a PHF of degree $0$), one may check that indeed $G^\circ=[-1,1]=G$ and (\ref{th-polar-1}) holds, even though $d=1$ and $g$ is not strictly convex.
\end{example}
\begin{example}
\label{ex3}
Let $\mathbf{x}\mapsto g(\mathbf{x}):=x_1^4+x_2^4$ and $G=\{\mathbf{x}:\,x_1^4+x_2^4\leq 1/4\}$. Then $g^*(\mathbf{x})=3\,(x_1^{4/3}+x_2^{4/3})/4^{4/3}$ (a PHF of degree $4/3$) and $G^\circ=\{\mathbf{x}:\,x_1^{4/3}+x_2^{4/3}\leq 4^{1/3}\}$.
\end{example}
\subsection{A variational property of homogeneous polynomials}
We end this section with an intriguing variational property of homogeneous polynomials that are sums of squares. Let $\mathbf{v}_d(\mathbf{x})$ be the vector of all monomials $(\mathbf{x}^\alpha)$ of degree $d$ and let $g\in\mathbb{R}[\mathbf{x}]_{2d}$ be homogeneous and a sum of squares, that is,
\begin{equation}
\label{newdef}
g(\mathbf{x})\,=\,\frac{1}{2}\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}),\qquad \mathbf{x}\in\mathbb{R}^n,
\end{equation}
for some real symmetric $\ell(d)\times \ell(d)$ matrix $\mathbf{\Sigma}$ which is positive definite (denoted $\mathbf{\Sigma}\succ0$).
If $d=1$ it is well known that
\[\int_{\mathbb{R}^n} \exp(-g)\,d\mathbf{x}\,=\,\frac{(2\pi)^{n/2}}{\sqrt{\det\mathbf{\Sigma}}},\]
and
\[\int_{\mathbb{R}^n} \mathbf{v}_d(\mathbf{x})\,\mathbf{v}_d(\mathbf{x})^T\exp(-g)\,d\mathbf{x}\,=\,\frac{(2\pi)^{n/2}}{\sqrt{\det\mathbf{\Sigma}}}\,\mathbf{\Sigma}^{-1},\]
that is, $\mathbf{\Sigma}^{-1}$ is the covariance matrix associated with the Gaussian probability density $(2\pi)^{-n/2}\sqrt{\det \mathbf{\Sigma}}\,\exp(-g)$. When $d>1$ the non-Gaussian integral can still be expressed as a (possibly complicated) combination of several algebraic invariants of $g$, but in general not in terms of the single algebraic invariant $\det\mathbf{\Sigma}$. It turns out that $\det\mathbf{\Sigma}$ and $\int \exp(-\frac{1}{2}\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}$ are still related in the following Gaussian-like manner.
Let $\mathcal{S}_{\ell(d)}^{++}\subset\mathcal{S}_{\ell(d)}$ be the convex cone of $\ell(d)\times \ell(d)$ positive definite matrices, and let $\theta_d: \mathcal{S}_{\ell(d)}^{++}\to \mathbb{R}$ be defined by
\begin{equation}
\label{def-f2}
\theta_d(\mathbf{\Sigma})\,:=\,(\det\mathbf{\Sigma})^k\displaystyle\int_{\mathbb{R}^n} \exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x},\qquad\mathbf{\Sigma}\in\mathcal{S}_{\ell(d)}^{++},
\end{equation}
where $k=n/(2d\,\ell(d))$, and let
\[\mathbf{M}_d(\mathbf{\Sigma})\,:=\,\frac{\displaystyle\int_{\mathbb{R}^n}\mathbf{v}_d(\mathbf{x})\,\mathbf{v}_d(\mathbf{x})^T\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}} {\displaystyle\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}}\]
be the matrix of moments of order $d$ associated with the non-Gaussian probability measure
\begin{equation}
\label{defmu}
\mu(B)\,:=\,\frac{\displaystyle\int_B\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}} {\displaystyle\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}},\qquad B\in\mathcal{B}.
\end{equation}
Observe that $\theta_d$ is nonnegative and positively homogeneous of degree $0$; therefore $\theta_d$ is constant along every ray $\{\lambda\mathbf{\Sigma}\,:\,\lambda>0\}$. In particular, if $d=1$ then $k=1/2$, $\mu$ is a Gaussian probability measure, $\mathbf{M}_1(\mathbf{\Sigma})$ is the associated covariance matrix $\mathbf{\Sigma}^{-1}$, and $\theta_d(\mathbf{\Sigma})$ is constant. In fact:
\begin{lemma}
\label{lemma1}
Let $\theta_d$ be the function defined in (\ref{def-f2}).
Then $\langle \mathbf{M}_d(\mathbf{\Sigma}),\mathbf{\Sigma}\rangle=\ell(d)$ for all $\mathbf{\Sigma}$ in the domain of $\theta_d$, and $\nabla \theta_d(\mathbf{\Sigma})=0$ if
\begin{equation}
\label{lemma1-1}
\mathbf{M}_d(\mathbf{\Sigma})\,=\,\mathbf{\Sigma}^{-1}.
\end{equation}
\end{lemma}
\begin{proof}
Let $g(\mathbf{x})=k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x})$, so that
\begin{eqnarray*}
\langle \mathbf{M}_d(\mathbf{\Sigma}),\mathbf{\Sigma}\rangle\int_{\mathbb{R}^n} \exp(-g)\,d\mathbf{x}&=&\left\langle \int_{\mathbb{R}^n} \mathbf{v}_d(\mathbf{x})\,\mathbf{v}_d(\mathbf{x})^T\exp(-g)\,d\mathbf{x},\mathbf{\Sigma}\right\rangle\\
&=&k^{-1}\int_{\mathbb{R}^n}k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x})\,\exp(-g)\,d\mathbf{x}\\
&=&k^{-1}\int_{\mathbb{R}^n}g\,\exp(-g)\,d\mathbf{x}\\
&=&\frac{k^{-1}n}{2d}\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x} \quad\mbox{[by (\ref{coroeuler-1})]},
\end{eqnarray*}
which yields the desired result $\langle \mathbf{M}_d(\mathbf{\Sigma}),\mathbf{\Sigma}\rangle=k^{-1}n/(2d)=\ell(d)$. Next, write the gradient $\nabla\theta_d(\mathbf{\Sigma})$ in the form $A_1+A_2$ with
\begin{eqnarray*}
A_1&=&\nabla((\det\mathbf{\Sigma})^k)\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}\\
&=&k(\det\mathbf{\Sigma})^{k-1}\,\mathbf{\Sigma}^\mathbb{A}\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}\\
&=&k\,\frac{\mathbf{\Sigma}^\mathbb{A}}{\det\mathbf{\Sigma}}\,(\det\mathbf{\Sigma})^k\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}\\
&=&k\,\mathbf{\Sigma}^{-1}\,\theta_d(\mathbf{\Sigma}),
\end{eqnarray*}
where $\mathbf{\Sigma}^\mathbb{A}$ is the {\it adjugate} of $\mathbf{\Sigma}$ (see e.g. \cite[p. 411]{bernstein}), and
\begin{eqnarray*}
A_2&=&(\det\mathbf{\Sigma})^k\:\nabla\left(\int_{\mathbb{R}^n}\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}\right)\\
&=&-k(\det\mathbf{\Sigma})^k\int_{\mathbb{R}^n}\mathbf{v}_d(\mathbf{x})\,\mathbf{v}_d(\mathbf{x})^T\exp(-k\,\mathbf{v}_d(\mathbf{x})^T\mathbf{\Sigma}\,\mathbf{v}_d(\mathbf{x}))\,d\mathbf{x}\\
&=&-k\,\mathbf{M}_d(\mathbf{\Sigma})\,\theta_d(\mathbf{\Sigma}).
\end{eqnarray*}
This yields $A_1+A_2=k\,\theta_d(\mathbf{\Sigma})\,(\mathbf{\Sigma}^{-1}-\mathbf{M}_d(\mathbf{\Sigma}))$, and so $\nabla \theta_d(\mathbf{\Sigma})=0$ if $\mathbf{M}_d(\mathbf{\Sigma})=\mathbf{\Sigma}^{-1}$.
\end{proof}
Lemma \ref{lemma1} states that for every critical point $\mathbf{\Sigma}$ of the function $\theta_d$, or equivalently for every critical SOS homogeneous polynomial $g$ (assuming that at least one such critical point exists), the associated non-Gaussian measure $d\mu =\exp(-g)\,d\mathbf{x}$ (rescaled to a probability measure) has the Gaussian-like property that $\mathbf{\Sigma}^{-1}$ is the ``$d$-covariance'' matrix of $\mu$!
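As a sanity check, the identity $\langle \mathbf{M}_d(\mathbf{\Sigma}),\mathbf{\Sigma}\rangle=\ell(d)$ of Lemma \ref{lemma1} can be verified numerically in the simplest nontrivial case $n=1$, $d=2$ (so $\mathbf{v}_d(x)=x^2$, $\ell(d)=1$ and $k=1/4$); a minimal sketch in Python, where the integration window $[-20,20]$ and the midpoint-rule grid are ad hoc truncation choices:

```python
import math

def quad(f, a, b, n=200000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# n = 1 variable, monomial degree d = 2: v_d(x) = x^2, ell(d) = 1,
# k = n/(2*d*ell(d)) = 1/4, so g(x) = k*Sigma*x^4 for a scalar Sigma > 0
for Sigma in (0.5, 2.0):
    g = lambda x: 0.25 * Sigma * x**4
    I0 = quad(lambda x: math.exp(-g(x)), -20.0, 20.0)
    I4 = quad(lambda x: x**4 * math.exp(-g(x)), -20.0, 20.0)
    M = I4 / I0                            # the "d-covariance" M_d(Sigma)
    assert abs(M * Sigma - 1.0) < 1e-4     # <M_d(Sigma), Sigma> = ell(d) = 1
    assert abs(M - 1.0 / Sigma) < 1e-4     # here every Sigma is critical: M_d = Sigma^{-1}
```

Since $\ell(d)=1$ in this toy case, every scalar $\mathbf{\Sigma}$ satisfies (\ref{lemma1-1}), consistent with $\theta_d$ being constant along rays.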
\section{Sublevel sets of minimum volume containing a compact set}
If $\mathbf{K}\subset\mathbb{R}^n$ is a convex body, computing the ellipsoid of minimum volume that contains $\mathbf{K}$ is a classical problem, whose optimal solution is the {\it L\"owner-John} ellipsoid; see e.g. Barvinok \cite[p. 209]{barvinok}. In this section we consider the following generalization:
{\em $\mathbf{P}$: Find a homogeneous polynomial $g$ of degree $2d$ whose sublevel set $G:=\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$ contains $\mathbf{K}$ and has minimum volume among all sublevel sets with this inclusion property.}
With $\mathbf{K}\subset\mathbb{R}^n$, let $C_{2d}(\mathbf{K})\subset\mathbb{R}[\mathbf{x}]_{2d}$ be the convex cone of polynomials of degree at most $2d$ that are nonnegative on $\mathbf{K}$. Recall that $\mathbf{P}[\mathbf{x}]_{2d}\subset\mathbb{R}[\mathbf{x}]_{2d}$ is the convex cone of homogeneous polynomials of degree $2d$ with compact sublevel set $\{\mathbf{x}:g(\mathbf{x})\leq 1\}$.
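The volume formula of Theorem \ref{thmain} that makes $\mathbf{P}$ tractable can be checked numerically in one variable: for $g(x)=x^4$ ($n=1$, degree $2d=4$) the sublevel set is $[-1,1]$, of volume $2$. A small sketch, where the truncation of the integral to $[-10,10]$ and the quadrature rule are ad hoc choices:

```python
import math

def quad(f, a, b, n=200000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# g(x) = x^4 (n = 1, degree 2d = 4): {x : g(x) <= 1} = [-1, 1] has volume 2,
# and the formula gives vol = Gamma(1 + n/2d)^{-1} * Integral exp(-g) dx
lhs = 2.0
rhs = quad(lambda x: math.exp(-x**4), -10.0, 10.0) / math.gamma(1.25)
assert abs(lhs - rhs) < 1e-6
```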
We next show that problem $\mathbf{P}$ is a convex optimization problem:
\begin{proposition}
\label{min-vol-lemma}
The minimum volume of a sublevel set $\{\mathbf{x}:g(\mathbf{x})\leq1\}$, $g\in\mathbf{P}[\mathbf{x}]_{2d}$, that contains $\mathbf{K}\subset\mathbb{R}^n$ is $\rho/\Gamma(1+n/2d)$, where $\rho$ is the optimal value of the finite-dimensional convex optimization problem
\begin{equation}
\label{minvolume}
\mathcal{P}:\qquad \rho=\displaystyle\inf_{g\in\mathbf{P}[\mathbf{x}]_{2d}}\:\left\{\,\int_{\mathbb{R}^n} \exp(-g)\,d\mathbf{x}\::\: 1-g\,\in\,C_{2d}(\mathbf{K})\,\right\}.
\end{equation}
\end{proposition}
\begin{proof}
From Theorem \ref{thmain},
\[{\rm vol}(\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\})\,=\,\frac{1}{\Gamma(1+n/2d)}\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}.\]
Moreover, the sublevel set $\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$ contains $\mathbf{K}$ if and only if $1-g\in C_{2d}(\mathbf{K})$, and so $\rho/\Gamma(1+n/2d)$ in (\ref{minvolume}) is the minimum of the volumes of all sublevel sets $\{\mathbf{x}\,:\,g(\mathbf{x})\leq 1\}$, $g\in\mathbf{P}[\mathbf{x}]_{2d}$, that contain $\mathbf{K}$.
Now since $g\mapsto \int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}$ is strictly convex (see Corollary \ref{coro-convex}(b)) and $C_{2d}(\mathbf{K})$ is a convex cone, problem $\mathcal{P}$ is a finite-dimensional convex optimization problem.
\end{proof}
We also have the following characterization of an optimal solution of $\mathcal{P}$, when one exists. Let $M(\mathbb{R}^n)$ be the convex cone of finite Borel measures on $\mathbb{R}^n$, and let $C_{2d}(\mathbf{K})^*$ be the dual cone of $C_{2d}(\mathbf{K})$.
\begin{thm}
\label{vol-suff-cond}
Let $\mathbf{K}\subset\mathbb{R}^n$ be compact and consider the convex optimization problem $\mathcal{P}$ in (\ref{minvolume}).
{\rm (a)} Suppose that $g^*\in\mathbf{P}[\mathbf{x}]_{2d}$ is an optimal solution of $\mathcal{P}$.
Then there exists $\mathbf{y}^*\in C_{2d}(\mathbf{K})^*$ such that
\begin{equation}
\label{kkt-suff}
\int_{\mathbb{R}^n}\mathbf{x}^\alpha\exp(-g^*)\,d\mathbf{x}\,=\,y^*_\alpha,\quad\forall\,\vert\alpha\vert=2d;\qquad \langle 1-g^*,\mathbf{y}^*\rangle=0.
\end{equation}
If $\mathbf{y}^*$ comes from a measure $\mu^*\in M(\mathbf{K})$, then $\mu^*$ is supported on the real variety $V:=\{\mathbf{x}\in\mathbf{K}: g^*(\mathbf{x})=1\}$ and, in fact, $\mu^*$ can be substituted with another measure $\nu^*\in M(V)$ supported on at most ${n+2d-1\choose 2d}+1$ points of $V$.
{\rm (b)} Conversely, if $g^*\in\mathbf{P}[\mathbf{x}]_{2d}$ and $\mathbf{y}^*\in C_{2d}(\mathbf{K})^*$ satisfy (\ref{kkt-suff}), then $g^*$ is an optimal solution of $\mathcal{P}$.
\end{thm}
\begin{proof}
(a) We may and will consider $g^*$ as an element of $\mathbb{R}[\mathbf{x}]_{2d}$ with $g^*_\beta=0$ whenever $\vert\beta\vert<2d$. As $\mathbf{K}$ is compact, there exists $\theta\in\mathbf{P}[\mathbf{x}]_{2d}$ such that $1-\theta\in{\rm int}\,C_{2d}(\mathbf{K})$, i.e., Slater's condition holds for $\mathcal{P}$. Indeed, choose $\theta:=M^{-1}\Vert\mathbf{x}\Vert^{2d}$ with $M>0$ sufficiently large so that $1-\theta>0$ on $\mathbf{K}$.
Hence, with $\Vert g\Vert_1$ denoting the $\ell_1$-norm of the coefficient vector of $g$ (in $\mathbb{R}[\mathbf{x}]_{2d}$), there exists $\epsilon>0$ such that for every $h\in B(\theta,\epsilon):=\{h\in\mathbb{R}[\mathbf{x}]_{2d}:\Vert \theta-h\Vert_1<\epsilon\}$, one has $1-h>0$ on $\mathbf{K}$. Therefore, the optimal solution $g^*$ satisfies the KKT optimality conditions, which read
\begin{equation}
\label{aux60}
\int_{\mathbb{R}^n}\mathbf{x}^\alpha\,\exp(-g^*)\,d\mathbf{x}=y^*_\alpha,\quad\forall\,\vert\alpha\vert=2d;\qquad \langle 1-g^*,\mathbf{y}^*\rangle =0,
\end{equation}
for some $\mathbf{y}^*=(y^*_\alpha)$, $\alpha\in\mathbb{N}^n_{2d}$, an element of the dual cone $C_{2d}(\mathbf{K})^*\subset\mathbb{R}^{s(2d)}$ of $C_{2d}(\mathbf{K})$. Finally, if $\mathbf{y}^*$ comes from a measure $\mu^*$ on $\mathbf{K}$, then $\int_\mathbf{K} (1-g^*)\,d\mu^*=0$ and $1-g^*\geq0$ on $\mathbf{K}$ imply that $\mu^*$ is supported on $V$, and the last statement follows from Anastassiou \cite[Theorem 2.1.1, p. 39]{anastassiou} applied to the ${n+2d-1\choose 2d}$ equality constraints of (\ref{kkt-suff}).
(b) As Slater's condition holds for $\mathcal{P}$, the KKT optimality conditions (\ref{kkt-suff}) are sufficient to ensure that $g^*$ is an optimal solution of $\mathcal{P}$.
\end{proof}
Theorem \ref{vol-suff-cond} states that when (\ref{kkt-suff}) holds and $\mathbf{y}^*$ comes from a measure on $\mathbf{K}$, there is an optimal solution $g^*$ of $\mathcal{P}$ such that $g^*(\mathbf{x})=1$ on (at most) ${n+2d-1\choose 2d}+1$ points of $\mathbf{K}$. When $d=1$, we retrieve a well-known property of the L\"owner-John ellipsoid.\\
Even though it is convex and finite-dimensional, $\mathcal{P}$ is by no means easy to solve because there is no simple and computationally tractable description of the convex cone $C_{2d}(\mathbf{K})$. However, there are cases where one may provide a sequence of inner or outer approximations that converge to $C_{2d}(\mathbf{K})$. One such case is when one knows all moments of a finite Borel measure whose support is exactly $\mathbf{K}$; another is when $\mathbf{K}=\{\mathbf{x}\,:\,g_k(\mathbf{x})\geq 0,\:k=1,\ldots,m\}$ for some polynomials $(g_k)\subset\mathbb{R}[\mathbf{x}]$, i.e., $\mathbf{K}$ is a compact basic semi-algebraic set.
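Before turning to tractable approximations, a toy instance of $\mathcal{P}$ can be solved by brute force. Take $n=1$, $2d=2$ and (our illustrative choice) $\mathbf{K}=[-1,2]$: feasibility of $g(x)=c\,x^2$ means $1-c\,x^2\geq0$ on $\mathbf{K}$, i.e. $c\leq 1/4$, and the objective $\int\exp(-c\,x^2)\,dx=\sqrt{\pi/c}$ is decreasing in $c$, so the optimal sublevel set is $[-2,2]$, of volume $4$, touching $\mathbf{K}$ at the single contact point $x=2$, as Theorem \ref{vol-suff-cond} predicts. A sketch (the grids over $c$ and $\mathbf{K}$ are ad hoc discretizations):

```python
import math

# Toy instance of problem P: n = 1, 2d = 2, K = [-1, 2].
# Feasibility of g(x) = c*x^2 means 1 - c*x^2 >= 0 on K, i.e. c <= 1/4, and
# vol({g <= 1}) = Gamma(1 + 1/2)^{-1} * Integral exp(-c x^2) dx = 2/sqrt(c).
K = [k / 100.0 for k in range(-100, 201)]   # grid on K = [-1, 2]
best = None
for i in range(1, 2000):
    c = i / 1000.0
    if all(1.0 - c * x * x >= 0.0 for x in K):          # 1 - g nonnegative on K
        vol = math.sqrt(math.pi / c) / math.gamma(1.5)  # = 2/sqrt(c), decreasing in c
        if best is None or vol < best[1]:
            best = (c, vol)
c_star, vol_star = best
assert abs(c_star - 0.25) < 1e-12   # the largest feasible c wins
assert abs(vol_star - 4.0) < 1e-9   # optimal sublevel set [-2, 2] has volume 4
assert abs(c_star * 2.0**2 - 1.0) < 1e-12   # contact point: g*(2) = 1
```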
\subsection{Lower bounds via inner approximations}
Suppose that one knows all moments $\mathbf{z}=(z_\alpha)$, $\alpha\in\mathbb{N}^n$, of a finite Borel measure $\mu$ on $\mathbf{K}$, i.e.,
\[z_\alpha\,=\,\int_\mathbf{K} \mathbf{x}^\alpha\,d\mu(\mathbf{x}),\qquad\forall\alpha\in\mathbb{N}^n,\]
whose support is exactly $\mathbf{K}$. For every $k\in\mathbb{N}$ and $p\in\mathbb{R}[\mathbf{x}]$, let $\mathbf{M}_k(p,\mathbf{z})$ be the localizing matrix with respect to the polynomial $p$ and the moment sequence $\mathbf{z}$; that is, $\mathbf{M}_k(p,\mathbf{z})$ is the $s(k)\times s(k)$ real symmetric matrix with rows and columns indexed in the canonical basis $(\mathbf{x}^\alpha)$, $\alpha\in\mathbb{N}^n_k$, of $\mathbb{R}[\mathbf{x}]_k$, and with entries
\begin{eqnarray*}
\mathbf{M}_k(p,\mathbf{z})(\alpha,\beta)&=&\int_\mathbf{K} p(\mathbf{x})\,\mathbf{x}^{\alpha+\beta}\,d\mu(\mathbf{x}),\qquad\forall\alpha,\beta\in\mathbb{N}^n_k\\
&=&\sum_\gamma p_\gamma\,z_{\alpha+\beta+\gamma}
\end{eqnarray*}
(where $p(\mathbf{x})=\sum_\gamma p_\gamma\mathbf{x}^\gamma$). We recall the following result.
\begin{lemma}(\cite[Theorem 3.2]{lasserresiopt})
\label{newlooklemma}
Let $\mathbf{K}\subset\mathbb{R}^n$ be compact and let $\mu$ be a finite Borel measure with support $\mathbf{K}$ and with moments $\mathbf{z}=(z_\alpha)$, $\alpha\in\mathbb{N}^n$. Then $p\in\mathbb{R}[\mathbf{x}]$ is nonnegative on $\mathbf{K}$ if and only if $\mathbf{M}_k(p,\mathbf{z})\succeq0$ for every $k=0,1,\ldots$
\end{lemma}
In view of Lemma \ref{newlooklemma}, a natural idea is to relax the ``difficult'' constraint $1-g\in C_{2d}(\mathbf{K})$ in (\ref{minvolume}) to $\mathbf{M}_k(1-g,\mathbf{z})\succeq0$ for fixed $k$, and then let $k\to\infty$. Indeed, for every fixed $k$ the latter constraint is much easier to handle, as it defines a spectrahedron\footnote{A spectrahedron is a convex set that can be formed by intersecting the cone of positive semidefinite matrices $\mathcal{S}_n$ with a linear affine subspace.} $\Delta_k\subset\mathbb{R}^{\ell(2d)}$ in the coefficients of the homogeneous polynomial $g$. One thus obtains a hierarchy of convex relaxations of $\mathcal{P}$ by minimizing the (strictly) convex function $g\mapsto \int\exp(-g)\,d\mathbf{x}$ on the spectrahedra $\Delta_k$, $k\in\mathbb{N}$, which yields a monotone nondecreasing sequence of {\it lower bounds} $\rho_k\leq\rho$, $k\in\mathbb{N}$, on $\rho$.
Of course the larger $k$ (i.e., the larger the size of the localizing matrix $\mathbf{M}_k(1-g,\mathbf{z})$), the better the lower bound $\rho_k$ (but also the harder the problem).
\begin{thm}
\label{th-inner}
Let $\mathbf{K}\subset\mathbb{R}^n$ be compact with nonempty interior and consider the finite-dimensional convex optimization problem
\begin{equation}
\label{minvolume-d}
\mathcal{P}_k:\qquad \rho_k=\displaystyle\inf_{g\in\mathbf{P}[\mathbf{x}]_{2d}}\:\left\{\,\int_{\mathbb{R}^n} \exp(-g)\,d\mathbf{x}\::\: \mathbf{M}_k(1-g,\mathbf{z})\succeq0\,\right\}.
\end{equation}
The sequence $(\rho_k)$, $k\in\mathbb{N}$, is monotone nondecreasing with $\rho_k\leq\rho$, and:
{\rm (a)} If $\mathcal{P}_k$ has an optimal solution $g^*_k\in\mathbf{P}[\mathbf{x}]_{2d}$, then there exists an SOS polynomial $\sigma^*_k\in\Sigma[\mathbf{x}]_k$ such that
\begin{eqnarray}
\label{dualpb1}
\int_{\mathbb{R}^n}\mathbf{x}^\alpha\,\exp(-g^*_k)\,d\mathbf{x}&=&\int_\mathbf{K}\mathbf{x}^\alpha\,\sigma^*_k\,d\mu,\qquad \forall\,\vert\alpha\vert=2d,\\
\label{dualpb2}
\int_\mathbf{K}(1-g^*_k)\,\sigma^*_k\,d\mu&=&0,
\end{eqnarray}
and
$\rho_k=\frac{2d}{n}\displaystyle\int_\mathbf{K}\sigma^*_k\,d\mu$.\\
{\rm (b)} Conversely, if (\ref{dualpb1})-(\ref{dualpb2}) hold for some $g^*_k\in\mathbf{P}[\mathbf{x}]_{2d}$ and some $\sigma^*_k\in\Sigma[\mathbf{x}]_k$, then $g^*_k$ is an optimal solution of $\mathcal{P}_k$.\\
{\rm (c)} If, in addition,
\[\sup_k\,\sup_{\vert\alpha\vert=2d}\:\vert g^*_{k\alpha}\vert\,<\,M\]
for some $M>0$, then $\rho_k\to\rho$ and $\mathcal{P}$ has an optimal solution $g^*$.
\end{thm}
For a proof see \S\ref{proof-inner}. So problem $\mathcal{P}_k$ amounts to minimizing a strictly convex function on a spectrahedron of $\mathbb{R}^{\ell(2d)}$. For instance, one may use {\it interior point} methods and minimize the standard log-barrier function
\[g\mapsto \phi_\nu(g)\,:=\,\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}-\frac{1}{\nu}\log{\rm det}(\mathbf{M}_k(1-g,\mathbf{z})),\]
with parameter $\nu$, and let $\nu\to\infty$. For more details on log-barrier methods, the reader is referred to e.g. Wright \cite{wright}.
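To make the localizing-matrix constraint concrete, take $n=1$, $\mathbf{K}=[-1,1]$ with $\mu$ the Lebesgue measure (so $z_j=2/(j+1)$ for even $j$ and $z_j=0$ otherwise), $g(x)=c\,x^2$ and $k=1$; these choices are ours, for illustration. The $2\times2$ matrix $\mathbf{M}_1(1-g,\mathbf{z})$ turns out to be PSD for every $c\leq 5/3$, whereas $1-c\,x^2$ is nonnegative on $\mathbf{K}$ only for $c\leq1$: at fixed $k$ the spectrahedron $\Delta_k$ strictly contains the feasible set of $\mathcal{P}$, consistent with $\rho_k$ being only a lower bound. An exact-arithmetic sketch:

```python
from fractions import Fraction

def z(j):
    # moments of Lebesgue measure on K = [-1, 1]
    return Fraction(2, j + 1) if j % 2 == 0 else Fraction(0)

def M1(c):
    # localizing matrix M_1(1 - c*x^2, z), rows/cols indexed by the basis {1, x}
    return [[z(a + b) - c * z(a + b + 2) for b in (0, 1)] for a in (0, 1)]

def is_psd_2x2(m):
    return m[0][0] >= 0 and m[1][1] >= 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] >= 0

# c = 1: 1 - x^2 is nonnegative on K, so M_1 must be PSD (Lemma newlooklemma)
assert is_psd_2x2(M1(Fraction(1)))
# c = 3/2: 1 - (3/2)x^2 is negative near x = 1, yet M_1 is still PSD:
# at fixed k the constraint M_k(1-g, z) >= 0 only relaxes 1 - g in C_2d(K)
assert is_psd_2x2(M1(Fraction(3, 2)))
# c = 2: now the (x, x) entry 2/3 - 4/5 is negative and PSD-ness fails
assert not is_psd_2x2(M1(Fraction(2)))
```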
\subsection{Upper bounds via outer approximations}
Let $\mathbf{K}\subset\mathbb{R}^n$ be the compact basic semi-algebraic set defined by
\[\mathbf{K}\,=\,\{\mathbf{x}\::\: u_j(\mathbf{x})\geq0,\quad j=1,\ldots,m\},\]
for some polynomials $(u_j)\subset\mathbb{R}[\mathbf{x}]$. As $\mathbf{K}$ is compact, with no loss of generality we may and will suppose that $u_1(\mathbf{x})=M-\Vert\mathbf{x}\Vert^2$ for some sufficiently large $M$. Let
\[Q^*\,=\,\Big\{\,\sum_{j=0}^m\sigma_j u_j\::\:\sigma_j\in\Sigma[\mathbf{x}],\:j=0,\ldots,m\,\Big\}\]
be the quadratic module of $\mathbb{R}[\mathbf{x}]$ generated by the $u_j$'s (with $u_0=1$). $Q^*$ is Archimedean because the quadratic polynomial $\mathbf{x}\mapsto M-\Vert\mathbf{x}\Vert^2$ belongs to $Q^*$. If one defines
\[Q^*_k\,:=\,\Big\{\,\sum_{j=0}^m\sigma_j u_j\::\:\sigma_j\in\Sigma[\mathbf{x}];\quad{\rm deg}\,\sigma_j u_j\leq 2k,\:j=0,\ldots,m\,\Big\},\qquad k\in\mathbb{N},\]
(a subset of $Q^*$ with a degree bound on the SOS weights $\sigma_j$), then this time one may replace the ``difficult'' constraint $1-g\in C_{2d}(\mathbf{K})$ by the stronger constraint $1-g\in Q^*_k$ for fixed $k\in\mathbb{N}$.
The convex cone $Q^*_k$ is the dual cone of the closed pointed convex cone
\begin{equation}
\label{cone-Q_k}
Q_k:=\{\mathbf{y}\in\mathbb{R}^{s(2d)}\::\:\mathbf{M}_k(\mathbf{y})\succeq0,\quad\mathbf{M}_{k-v_j}(u_j,\,\mathbf{y})\succeq0,\:j=1,\ldots,m\},
\end{equation}
where $\mathbf{M}_k(\mathbf{y})$ (resp.\ $\mathbf{M}_{k-v_j}(u_j,\,\mathbf{y})$) is the moment matrix associated with the sequence $\mathbf{y}$ (resp.\ the localizing matrix associated with $\mathbf{y}$ and the polynomial $u_j$). Hence $Q^*_k$ has a nonempty interior; see e.g. Rockafellar \cite{rockafellar}. Then solving the problem
\begin{equation}
\label{minvolume-outer}
\mathcal{P}'_k:\quad\rho'_k=\displaystyle\inf_{g\in\mathbf{P}[\mathbf{x}]_{2d}}\:\left\{\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}\::\: 1-g\in Q^*_k\right\},\qquad k\in\mathbb{N},
\end{equation}
$k\geq d$, now provides a monotone sequence of {\it upper bounds} $\rho'_k\geq\rho$, $k\in\mathbb{N}$.
\begin{thm}
\label{th-outer}
Let $\mathbf{K}\subset\mathbb{R}^n$ be compact with nonempty interior and consider the finite-dimensional convex optimization problem (\ref{minvolume-outer}), $k\geq d$.

{\rm (a)} The sequence $(\rho'_k)$, $d\leq k\in\mathbb{N}$, is monotone nonincreasing with $\rho'_k\to\rho$ as $k\to\infty$.
In addition, assume that there exists $g_0\in\mathbf{P}[\mathbf{x}]_{2d}$ such that $1-g_0\in{\rm int}\,Q^*_{k_0}$ for some $k_0\geq d$. Then:

{\rm (b)} If $k\geq k_0$ and $\mathcal{P}'_k$ has an optimal solution $g^*_k\in\mathbf{P}[\mathbf{x}]_{2d}$, there exists a vector $\mathbf{y}^k\in Q_k$ such that:
\begin{equation}
\label{th-outer-1}
0\,=\,\langle 1-g^*_k,\mathbf{y}^k\rangle;\qquad y^k_\alpha\,=\,\int_{\mathbb{R}^n}\mathbf{x}^\alpha\,\exp(-g^*_k)\,d\mathbf{x},\qquad\forall\,\vert\alpha\vert=2d.
\end{equation}
\indent{\rm (c)} Conversely, if $k\geq k_0$ and $(\mathbf{y}^k,1-g^*_k)\in Q_k\times Q^*_k$ satisfy (\ref{th-outer-1}), then $g^*_k$ is an optimal solution of $\mathcal{P}'_k$.

{\rm (d)} If $\mathcal{P}'_k$ has an optimal solution $g^*_k\in\mathbf{P}[\mathbf{x}]_{2d}$ for every $k\geq k_0$ and
\[\sup_k\,\sup_{\vert\alpha\vert=2d}\:\vert g^*_{k\alpha}\vert\,<\,N,\]
for some $N>0$, then every accumulation point $g^*$ of the sequence $(g^*_k)$, $k\in\mathbb{N}$, is an optimal solution of $\mathcal{P}$.
\end{thm}
For a proof see \S\ref{proof-outer}.
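For intuition, the membership conditions defining the cone $Q_k$ can be checked by hand on a toy example (ours, not from the text): $n=1$, $\mathbf{K}=[-1,1]=\{x : u_1(x)\geq0\}$ with $u_1(x)=1-x^2$ (so $v_1=1$), and $\mathbf{y}$ the moment sequence of the Lebesgue measure on $\mathbf{K}$; both the moment and the localizing matrix come out positive definite, as they must for a measure whose support has nonempty interior:

```python
from fractions import Fraction

# Toy membership check for the cone Q_k (our own example): n = 1,
# K = [-1, 1] = {x : u_1(x) >= 0} with u_1(x) = 1 - x^2, and y the
# moment sequence y_a = int_{-1}^{1} x^a dx of the Lebesgue measure.
y = [Fraction(2, a + 1) if a % 2 == 0 else Fraction(0) for a in range(8)]

def moment_matrix(k):
    """M_k(y): entry (i, j) is y_{i+j}."""
    return [[y[i + j] for j in range(k + 1)] for i in range(k + 1)]

def localizing_matrix(k):
    """M_k(u_1, y) for u_1(x) = 1 - x^2: entry (i, j) is y_{i+j} - y_{i+j+2}."""
    return [[y[i + j] - y[i + j + 2] for j in range(k + 1)]
            for i in range(k + 1)]

def leading_minors(M):
    """Exact determinants of all leading principal submatrices
    (Gaussian elimination on rational copies)."""
    n = len(M)
    dets = []
    for m in range(1, n + 1):
        A = [row[:m] for row in M[:m]]
        det = Fraction(1)
        for p in range(m):
            piv = next((r for r in range(p, m) if A[r][p] != 0), None)
            if piv is None:
                det = Fraction(0)
                break
            if piv != p:
                A[p], A[piv] = A[piv], A[p]
                det = -det
            det *= A[p][p]
            for r in range(p + 1, m):
                f = A[r][p] / A[p][p]
                for c in range(p, m):
                    A[r][c] -= f * A[p][c]
        dets.append(det)
    return dets

# All leading principal minors are > 0 (Sylvester's criterion), which
# certifies M_2(y) > 0 and M_1(u_1, y) > 0, i.e. y belongs to Q_2.
print(leading_minors(moment_matrix(2)))
print(leading_minors(localizing_matrix(1)))
```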
\section{Proofs}
\subsection{Proof of Theorem \ref{th-inner}}
\label{proof-inner}
\begin{proof}
(a) That the sequence $(\rho_k)$ is monotone nondecreasing is straightforward since the constraints of $\mathcal{P}_k$ become more and more restrictive as $k$ increases. Next, as $\mathbf{K}$ is compact there exist $M,\delta>0$ such that $M-\Vert\mathbf{x}\Vert^{2d}>\delta$ for all $\mathbf{x}\in\mathbf{K}$. With $g_0\in\Sigma[\mathbf{x}]_d$ being the polynomial $\mathbf{x}\mapsto M^{-1}\Vert\mathbf{x}\Vert^{2d}$, one has $1-g_0(\mathbf{x})>\delta/M$ for all $\mathbf{x}\in\mathbf{K}$ and so $\mathbf{M}_k(1-g_0,\,\mathbf{z})\succ0$. Indeed, for every $0\neq f\in\mathbb{R}[\mathbf{x}]_k$,
\[\langle\mathbf{f},\mathbf{M}_k(1-g_0,\,\mathbf{z})\,\mathbf{f}\rangle\,=\,\int_\mathbf{K} f^2(1-g_0)\,d\mu\,\geq\,\frac{\delta}{M}\,\int_\mathbf{K} f^2\,d\mu\,>\,0,\]
where the last inequality holds because $\mathbf{K}$ has nonempty interior and ${\rm supp}\,\mu=\mathbf{K}$. Observe also that $g_0\in\mathbf{P}[\mathbf{x}]_{2d}$ and so $g_0$ is a strictly feasible solution of $\mathcal{P}_k$; that is, Slater's condition\footnote{For a convex optimization problem $\inf_\mathbf{x}\{f(\mathbf{x})\,:\,h_k(\mathbf{x})\geq0,\:k=1,\ldots,m\}$, Slater's condition holds if there exists $\mathbf{x}_0$ such that $h_k(\mathbf{x}_0)>0$ for all $k$.
In this case the Karush--Kuhn--Tucker (KKT) optimality conditions are necessary and sufficient.} holds for $\mathcal{P}_k$. So the KKT optimality conditions at a point $g^*_k$ are necessary and sufficient for $g^*_k$ to be a (global) minimizer of $\mathcal{P}_k$. Therefore, there exists $0\preceq\Delta\in\mathcal{S}_{s(k)}$ such that:
\begin{equation}
\label{aaux1}
-\int_{\mathbb{R}^n}\mathbf{x}^\alpha\,\exp(-g^*_k)\,d\mathbf{x}+\langle\Delta,\mathbf{M}_k(\mathbf{x}^\alpha,\,\mathbf{z})\rangle\,=\,0,\qquad\forall\,\vert\alpha\vert=2d,
\end{equation}
and
\begin{equation}
\label{aux2}
\langle\mathbf{M}_k(1-g^*_k,\,\mathbf{z}),\Delta\rangle\,=\,0,
\end{equation}
where one has written $\mathbf{M}_k(g^*_k,\,\mathbf{z})=\sum_\alpha g^*_{k\alpha}\,\mathbf{M}_k(\mathbf{x}^\alpha,\,\mathbf{z})$ (with $\mathbf{M}_k(\mathbf{x}^\alpha,\,\mathbf{z})$ being the localizing matrix with respect to the polynomial $\mathbf{x}^\alpha$ and the moment sequence $\mathbf{z}$).
From its spectral decomposition, $\Delta=\sum_j q_jq_j^T$ for some vectors $(q_j)$, which yields
\[\langle\Delta,\mathbf{M}_k(\mathbf{x}^\alpha,\,\mathbf{z})\rangle\,=\,\int_\mathbf{K}\sigma^*_k(\mathbf{x})\,\mathbf{x}^\alpha\,d\mu,\qquad\forall\,\vert\alpha\vert=2d,\]
where $\sigma^*_k=\sum_j(q_j^T\mathbf{v}_k(\mathbf{x}))^2\in\Sigma[\mathbf{x}]_k$ is an SOS polynomial. And so (\ref{aaux1}) yields (\ref{dualpb1}), and the complementarity condition (\ref{aux2}) yields (\ref{dualpb2}). Next, multiplying both sides of (\ref{aaux1}) by $g^*_{k\alpha}$ and summing up yields:
\begin{eqnarray*}
\frac{n}{2d}\int_{\mathbb{R}^n}\exp(-g^*_k)\,d\mathbf{x}&=&\int_{\mathbb{R}^n}g^*_k\,\exp(-g^*_k)\,d\mathbf{x}\quad\mbox{[by (\ref{coroeuler-1})]}\\
&=&\langle\Delta,\mathbf{M}_k(g^*_k,\,\mathbf{z})\rangle\\
&=&\int_\mathbf{K}\sigma^*_k\,g^*_k\,d\mu\\
&=&\int_\mathbf{K}\sigma^*_k\,d\mu\quad\mbox{[by (\ref{aux2})]},
\end{eqnarray*}
which gives $\rho_k=\frac{2d}{n}\int_\mathbf{K}\sigma^*_k\,d\mu$.

(b) The converse holds because under Slater's condition, the KKT optimality conditions are also sufficient for $g^*_k$ to be a minimizer.
(c) Write $g^*_k(\mathbf{x})=\sum_{\vert\alpha\vert=2d}g^*_{k\alpha}\,\mathbf{x}^\alpha$. As $\sup_k\sup_{\vert\alpha\vert=2d}\vert g^*_{k\alpha}\vert<M$, there exist a subsequence $(k_i)$, $i\in\mathbb{N}$, and a vector $g^*\in\mathbb{R}^{\ell(2d)}$ such that for every $\alpha\in\mathbb{N}^n$ with $\vert\alpha\vert=2d$, $g^*_{k_i\alpha}\to g^*_\alpha$ as $i\to\infty$. Notice that $\mathbf{x}\mapsto g^*(\mathbf{x})=\limsup_{i\to\infty}g^*_{k_i}(\mathbf{x})=\liminf_{i\to\infty}g^*_{k_i}(\mathbf{x})$. Next, as $\rho\geq\rho_k$ for every $k$, and $\exp(-g^*_k)\geq0$, by Fatou's Lemma (see e.g. Ash \cite{ash})
\begin{eqnarray*}
\rho\,\geq\,\liminf_{i\to\infty}\rho_{k_i}&=&\liminf_{i\to\infty}\int_{\mathbb{R}^n}\exp(-g^*_{k_i})\,d\mathbf{x}\\
&\geq&\int_{\mathbb{R}^n}\liminf_{i\to\infty}\exp(-g^*_{k_i})\,d\mathbf{x}\\
&=&\int_{\mathbb{R}^n}\exp(-\limsup_{i\to\infty}g^*_{k_i})\,d\mathbf{x}\,=\,\int_{\mathbb{R}^n}\exp(-g^*)\,d\mathbf{x}.
\end{eqnarray*}
On the other hand, observe that for every $k\in\mathbb{N}$, $\mathbf{M}_k(1-g^*_k,\,\mathbf{z})\succeq0$ implies $\mathbf{M}_j(1-g^*_k,\,\mathbf{z})\succeq0$ for all $j\leq k$. Hence, let $j$ be fixed and arbitrary, so that $\mathbf{M}_j(1-g^*_k,\,\mathbf{z})\succeq0$ for all sufficiently large $k$. For every $f\in\mathbb{R}[\mathbf{x}]_j$, $f^2g^*_k$ is uniformly bounded on $\mathbf{K}$ and so by Fatou's Lemma
\begin{eqnarray*}
0\,\leq\,\limsup_{i\to\infty}\int_\mathbf{K} f^2\,(1-g^*_{k_i})\,d\mu&=&-\liminf_{i\to\infty}\int_\mathbf{K} f^2\,(g^*_{k_i}-1)\,d\mu\\
&\leq&\int_\mathbf{K}-\liminf_{i\to\infty}f^2\,(g^*_{k_i}-1)\,d\mu\\
&=&\int_\mathbf{K} f^2\,(1-g^*)\,d\mu.
\end{eqnarray*}
As $f\in\mathbb{R}[\mathbf{x}]_j$ was arbitrary, $\mathbf{M}_j(1-g^*,\,\mathbf{z})\succeq0$; and as $j$ was arbitrary, $\mathbf{M}_j(1-g^*,\,\mathbf{z})\succeq0$ for all $j=0,1,\ldots$ But by Lemma \ref{newlooklemma}, this implies $1-g^*\geq0$ on $\mathbf{K}$, and so $g^*$ is a feasible solution for $\mathcal{P}$. Combining this with $\rho\geq\int_{\mathbb{R}^n}\exp(-g^*)\,d\mathbf{x}$ yields that $g^*$ is an optimal solution of $\mathcal{P}$.
\end{proof}
\subsection{Proof of Theorem \ref{th-outer}}
\label{proof-outer}
\begin{proof}
(a) The sequence $(\rho'_k)$ is monotone nonincreasing because $Q^*_k\subset Q^*_{k+1}$ for every $k\in\mathbb{N}$. Next, let $\epsilon>0$ be fixed, and let $g\in\mathbf{P}[\mathbf{x}]_{2d}$ be such that $\rho\leq\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}\leq\rho+\epsilon$. Observe that $\tilde{g}:=(1+\epsilon)^{-1}g\in\mathbf{P}[\mathbf{x}]_{2d}$ and, by homogeneity of $g$,
\begin{eqnarray*}
\int_{\mathbb{R}^n}\exp(-\tilde{g})\,d\mathbf{x}&=&\int_{\mathbb{R}^n}\exp(-(1+\epsilon)^{-1}g)\,d\mathbf{x}\\
&=&(1+\epsilon)^{n/2d}\int_{\mathbb{R}^n}\exp(-g)\,d\mathbf{x}\,\leq\,(1+\epsilon)^{n/2d}(\rho+\epsilon),
\end{eqnarray*}
and $1-\tilde{g}=\frac{1+\epsilon-g}{1+\epsilon}>0$ on $\mathbf{K}$. Therefore, as $Q^*$ is Archimedean, by Putinar's Positivstellensatz \cite{putinar}, $1-\tilde{g}\in Q^*$, that is, $1-\tilde{g}\in Q^*_{k_1}$ for some $k_1$.
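The scaling step in part (a) uses only the homogeneity of $g$: for $g$ homogeneous of degree $2d$, the change of variables $\mathbf{x}=(1+\epsilon)^{1/2d}\mathbf{u}$ gives $\int\exp(-(1+\epsilon)^{-1}g)\,d\mathbf{x}=(1+\epsilon)^{n/2d}\int\exp(-g)\,d\mathbf{x}$. A quick numerical check (our own toy case: $n=1$, $g(x)=x^4$, so $2d=4$ and the exponent is $1/4$):

```python
import math

# Check of the homogeneity identity used in the proof: for g homogeneous
# of degree 2d, the substitution x = (1+eps)^(1/(2d)) * u gives
#   int exp(-g/(1+eps)) dx = (1+eps)^(n/(2d)) * int exp(-g) dx.
# Toy case (our own choice): n = 1, g(x) = x^4, i.e. 2d = 4.

def integral(f, a=-10.0, b=10.0, m=20000):
    """Composite Simpson rule on [a, b] (m even); the integrand decays so
    fast that the truncation to [a, b] is negligible."""
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

eps = 0.3
lhs = integral(lambda x: math.exp(-x**4 / (1.0 + eps)))
rhs = (1.0 + eps) ** 0.25 * integral(lambda x: math.exp(-x**4))
print(lhs, rhs)
```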
Hence $\tilde{g}$ is a feasible solution of $\mathcal{P}'_{k_1}$, which implies $\rho\leq\rho'_{k_1}\leq(1+\epsilon)^{n/2d}(\rho+\epsilon)$. As the sequence $(\rho'_k)$ is monotone and $\epsilon>0$ was arbitrary, we obtain the desired convergence $\rho'_k\to\rho$.

(b) $1-g_0\in{\rm int}\,Q^*_k$ for all $k\geq k_0$ because $1-g_0\in{\rm int}\,Q^*_{k_0}$ and $Q^*_k\supset Q^*_{k_0}$. Hence Slater's condition holds for $\mathcal{P}'_k$ whenever $k\geq k_0$. Therefore, if $g^*_k$ is an optimal solution of $\mathcal{P}'_k$, there exists $\mathbf{y}^k\in Q_k$ such that the KKT optimality conditions hold, which yields (\ref{th-outer-1}).

(c) This follows from the fact that under Slater's condition, the KKT optimality conditions are sufficient for $g^*_k$ to be an optimal solution of $\mathcal{P}'_k$.

(d) If $\sup_k\,\sup_{\vert\alpha\vert=2d}\:\vert g^*_{k\alpha}\vert<N$, let $g^*$ be an accumulation point of the sequence $(g^*_k)$, i.e., the limit of some subsequence $(g^*_{k_i})$, $i\in\mathbb{N}$, such that for every $\alpha$ with $\vert\alpha\vert=2d$, $g^*_{k_i\alpha}\to g^*_\alpha$ as $i\to\infty$. Let $g^*\in\mathbb{R}[\mathbf{x}]_{2d}$ be the homogeneous polynomial with coefficients $g^*_\alpha$, $\vert\alpha\vert=2d$.
For every $\mathbf{x}\in\mathbf{K}$, $1-g^*_{k_i}(\mathbf{x})\geq0$ because $1-g^*_{k_i}\in Q^*_{k_i}$ for every $i$. Therefore, with $\mathbf{x}\in\mathbf{K}$ fixed,
\[0\,\leq\,1-\lim_{i\to\infty}g^*_{k_i}(\mathbf{x})\,=\,1-g^*(\mathbf{x}),\]
and so, as $\mathbf{x}\in\mathbf{K}$ is arbitrary, $1-g^*\in C_{2d}(\mathbf{K})$, which implies that $g^*$ is a feasible solution of $\mathcal{P}$. Next, from (a) and by Fatou's Lemma,
\begin{eqnarray*}
\rho\,=\,\lim_{i\to\infty}\rho'_{k_i}\,=\,\liminf_{i\to\infty}\rho'_{k_i}&=&\liminf_{i\to\infty}\int_{\mathbb{R}^n}\exp(-g^*_{k_i})\,d\mathbf{x}\\
&\geq&\int_{\mathbb{R}^n}\liminf_{i\to\infty}\exp(-g^*_{k_i})\,d\mathbf{x}\\
&=&\int_{\mathbb{R}^n}\exp(-g^*)\,d\mathbf{x},
\end{eqnarray*}
and so, as $g^*$ is feasible for $\mathcal{P}$, it is an optimal solution of $\mathcal{P}$.
\end{proof}
\begin{thebibliography}{las}
\bibitem{anastassiou} G.A. Anastassiou. {\em Moments in Probability and Approximation Theory}, Longman Scientific \& Technical, UK, 1993.
\bibitem{ash} R.B. Ash. {\em Real Analysis and Probability}, Academic Press, Boston, 1972.
\bibitem{jbhu} D. Az\'e, J.-B. Hiriart-Urruty. {\em Analyse Variationnelle et Optimisation}, C\'epadu\`es \'Editions, 2010.
\bibitem{barvinok} A. Barvinok.
{\em A Course in Convexity}, American Mathematical Society, Providence, RI, 2002.
\bibitem{bernstein} D.S. Bernstein. {\em Matrix Mathematics}, Princeton University Press, Princeton, 2005.
\bibitem{dunford} N. Dunford, J.T. Schwartz. {\em Linear Operators, Part I: General Theory}, Interscience Publishers, New York, 1958.
\bibitem{fujii} K. Fujii. Beyond the Gaussian, SIGMA {\bf 7} (2011).
\bibitem{sirev} D. Henrion, J.B. Lasserre, C. Savorgnan. Approximate volume and integration of basic semi-algebraic sets, SIAM Review {\bf 51} (2009), pp. 722--743.
\bibitem{jbhu-las} J.B. Lasserre, J.-B. Hiriart-Urruty. Mathematical properties of optimization problems defined by positively homogeneous functions, J. Optim. Theory Appl. {\bf 112} (2002), pp. 31--52.
\bibitem{lasserrebook} J.B. Lasserre. {\em Linear and Integer Programming vs Linear Integration and Counting}, Springer, New York, 2009.
\bibitem{pams} J.B. Lasserre. Integration and homogeneous functions, Proc. Amer. Math. Soc. {\bf 127} (1999), pp. 813--818.
\bibitem{laplace} J.B. Lasserre, E.S. Zeron. Solving a class of multivariable integration problems via Laplace techniques, Appl. Math. (Warsaw) {\bf 28}, pp. 391--405.
\bibitem{jca} J.B. Lasserre. Homogeneous functions and conjugacy, J. Convex Anal. {\bf 5} (1998), pp. 397--403.
\bibitem{lasserresiopt} J.B. Lasserre. A new look at nonnegativity on closed sets and polynomial optimization, SIAM J. Optim. {\bf 21} (2011), pp. 864--885.
\bibitem{morosov1} A. Morozov, S. Shakirov.
New and old results in resultant theory, {\tt arXiv:0911.5278v1}, 2009.
\bibitem{morosov2} A. Morozov, S. Shakirov. Introduction to integral discriminants, J. High Energy Phys. {\bf 12} (2009).
\bibitem{muller} C. M\"uller, A. Feuer, G. Goodwin. Derivative of an integral over a convex polytope, Appl. Math. Letters {\bf 24} (2011), pp. 1120--1123.
\bibitem{putinar} M. Putinar. Positive polynomials on compact semi-algebraic sets, Indiana Univ. Math. J. {\bf 42} (1993), pp. 969--984.
\bibitem{rockafellar} R.T. Rockafellar. {\em Convex Analysis}, Princeton University Press, Princeton, NJ, 1970.
\bibitem{shakirov} S.R. Shakirov. Nonperturbative approach to finite-dimensional non-Gaussian integrals, Theor. Math. Phys. {\bf 163} (2010), pp. 804--812.
\bibitem{stoya} A.V. Stoyanovsky. On integral of exponent of a homogeneous polynomial, {\tt arXiv:1103.0514v4}, 2011.
\bibitem{xu} Zhiqiang Xu. Multivariate splines and polytopes, J. Approx. Theory {\bf 163} (2011), pp. 377--387.
\bibitem{wolko} H. Wolkowicz, R. Saigal, L. Vandenberghe (Eds.). {\em Handbook of Semidefinite Programming}, Kluwer Academic Publishers, Norwell, 2000.
\bibitem{wright} S. Wright. On the convergence of the Newton/log-barrier method, Math. Program. Ser. A {\bf 90} (2001), pp. 71--100.
\end{thebibliography}
\end{document}
\begin{document} \title{A new class of generalized Genocchi polynomials} \author{N. I. Mahmudov\\Eastern Mediterranean University\\Gazimagusa, TRNC, Mersin 10, Turkey \\Email: [email protected]} \maketitle \begin{abstract} The main purpose of this paper is to introduce and investigate a new class of generalized Genocchi polynomials based on the $q$-integers. The $q$-analogues of well-known formulas are derived. The $q$-analogue of the Srivastava--Pint\'{e}r addition theorem is obtained. \end{abstract} \section{Introduction} Throughout this paper, we always make use of the following notation: $\mathbb{N}$ denotes the set of natural numbers, $\mathbb{N}_{0}$ denotes the set of nonnegative integers, $\mathbb{R}$ denotes the set of real numbers, and $\mathbb{C}$ denotes the set of complex numbers. The $q$-shifted factorial is defined by
\[
\left( a;q\right) _{0}=1,\ \ \ \left( a;q\right) _{n}= {\displaystyle\prod\limits_{j=0}^{n-1}} \left( 1-q^{j}a\right) ,\ \ \ n\in\mathbb{N},\ \ \ \left( a;q\right) _{\infty}= {\displaystyle\prod\limits_{j=0}^{\infty}} \left( 1-q^{j}a\right) ,\ \ \ \left\vert q\right\vert <1,\ \ a\in \mathbb{C}.
\]
The $q$-number and the $q$-factorial are defined by
\[
\left[ a\right] _{q}=\frac{1-q^{a}}{1-q}\ \ \ \left( q\neq1\right) ;\ \ \ \left[ 0\right] _{q}!=1;\ \ \ \left[ n\right] _{q}!=\left[ 1\right] _{q}\left[ 2\right] _{q}\cdots\left[ n\right] _{q},\ \ \ n\in \mathbb{N},\ \ a\in\mathbb{C},
\]
respectively. The $q$-binomial coefficient is defined by
\[
\left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}=\frac{\left( q;q\right) _{n}}{\left( q;q\right) _{n-k}\left( q;q\right) _{k}}.
\]
The $q$-analogue of the function $\left( x+y\right) ^{n}$ is defined by
\[
\left( x+y\right) _{q}^{n}:= {\displaystyle\sum\limits_{k=0}^{n}} \left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}q^{\frac{1}{2}k\left( k-1\right) }x^{n-k}y^{k},\ \ \ n\in \mathbb{N}_{0}.
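These definitions translate directly into code. The following sketch (ours, in exact rational arithmetic) implements $[n]_q$, $[n]_q!$ and the $q$-binomial coefficients, and checks two standard facts not stated above: Gauss's product form $(x+y)_q^n=\prod_{j=0}^{n-1}(x+q^jy)$ and a $q$-Pascal rule for the $q$-binomial coefficients:

```python
from fractions import Fraction

def q_int(n, q):
    """[n]_q = (1 - q^n)/(1 - q) (= n when q = 1)."""
    return Fraction(n) if q == 1 else (1 - q**n) / (1 - q)

def q_factorial(n, q):
    """[n]_q! = [1]_q [2]_q ... [n]_q, with [0]_q! = 1."""
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= q_int(j, q)
    return out

def q_binomial(n, k, q):
    """[n choose k]_q = (q;q)_n / ((q;q)_{n-k} (q;q)_k), computed here as
    [n]_q! / ([n-k]_q! [k]_q!), which is the same quantity."""
    return q_factorial(n, q) / (q_factorial(n - k, q) * q_factorial(k, q))

def q_power_sum(x, y, n, q):
    """(x + y)_q^n = sum_k [n choose k]_q q^{k(k-1)/2} x^{n-k} y^k."""
    return sum(q_binomial(n, k, q) * q**(k * (k - 1) // 2)
               * x**(n - k) * y**k for k in range(n + 1))

q, x, y = Fraction(1, 2), Fraction(3), Fraction(5)
for n in range(8):
    # Gauss's identity (a known closed form, not stated in the text):
    # (x + y)_q^n = prod_{j=0}^{n-1} (x + q^j y).
    prod = Fraction(1)
    for j in range(n):
        prod *= x + q**j * y
    assert q_power_sum(x, y, n, q) == prod
    # q-Pascal rule: [n k]_q = q^k [n-1 k]_q + [n-1 k-1]_q.
    for k in range(1, n):
        assert q_binomial(n, k, q) == (q**k * q_binomial(n - 1, k, q)
                                       + q_binomial(n - 1, k - 1, q))
print("q-binomial identities verified")
```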
\]
In the standard approach to the $q$-calculus, two exponential functions are used:
\begin{align*}
e_{q}\left( z\right) & =\sum_{n=0}^{\infty}\frac{z^{n}}{\left[ n\right] _{q}!}=\prod_{k=0}^{\infty}\frac{1}{\left( 1-\left( 1-q\right) q^{k}z\right) },\ \ \ 0<\left\vert q\right\vert <1,\ \left\vert z\right\vert <\frac{1}{\left\vert 1-q\right\vert },\\
E_{q}\left( z\right) & =\sum_{n=0}^{\infty}\frac{q^{\frac{1}{2}n\left( n-1\right) }z^{n}}{\left[ n\right] _{q}!}=\prod_{k=0}^{\infty}\left( 1+\left( 1-q\right) q^{k}z\right) ,\ \ \ 0<\left\vert q\right\vert <1,\ z\in\mathbb{C}.
\end{align*}
From this form we easily see that $e_{q}\left( z\right) E_{q}\left( -z\right) =1$. Moreover,
\[
D_{q}e_{q}\left( z\right) =e_{q}\left( z\right) ,\ \ \ \ D_{q}E_{q}\left( z\right) =E_{q}\left( qz\right) ,
\]
where the $q$-derivative $D_{q}$ is defined by
\[
D_{q}f\left( z\right) :=\frac{f\left( qz\right) -f\left( z\right) }{qz-z}.
\]
Carlitz introduced the $q$-Bernoulli numbers and polynomials in \cite{carlitz}. Srivastava and Pint\'{e}r proved some relations and theorems between the Bernoulli polynomials and Euler polynomials in \cite{sri1}; they also gave some generalizations of these polynomials. In \cite{kim1}--\cite{kim9}, Kim et al. investigated some properties of the $q$-Euler polynomials and Genocchi polynomials and gave some recurrence relations. In \cite{cenkci}, Cenkci et al. gave the $q$-extension of Genocchi numbers in a different manner. In \cite{kim5}, Kim gave a new concept for the $q$-Genocchi numbers and polynomials. In \cite{simsek}, Simsek et al. investigated the $q$-Genocchi zeta function and $l$-function by using generating functions and Mellin transformation.
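The two identities displayed above can be verified coefficientwise on truncated series (our own sketch; on coefficients, $D_q$ maps the coefficient of $z^{n+1}$ to $[n+1]_q$ times itself at $z^n$):

```python
from fractions import Fraction

# Truncated series checks for the q-exponentials (our own sketch):
# e_q(z) E_q(-z) = 1, D_q e_q = e_q, and D_q E_q(z) = E_q(qz).
N = 12
q = Fraction(1, 2)

def q_int(n):
    return (1 - q**n) / (1 - q)

def q_fact(n):
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= q_int(j)
    return out

# Ordinary coefficients of z^n:
e_q = [1 / q_fact(n) for n in range(N)]                        # e_q(z)
E_q = [q**(n * (n - 1) // 2) / q_fact(n) for n in range(N)]    # E_q(z)

def series_mul(a, b):
    """Cauchy product, truncated at order N."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

# e_q(z) * E_q(-z) = 1: every coefficient of the product vanishes except
# the constant term.
E_q_neg = [c * (-1)**n for n, c in enumerate(E_q)]
prod = series_mul(e_q, E_q_neg)
assert prod[0] == 1 and all(c == 0 for c in prod[1:])

# D_q e_q = e_q: D_q sends z^n to [n]_q z^{n-1}, so on coefficients
# (D_q f)_n = [n+1]_q f_{n+1}.
assert all(q_int(n + 1) * e_q[n + 1] == e_q[n] for n in range(N - 1))

# D_q E_q(z) = E_q(qz): coefficientwise, [n+1]_q E_{n+1} = q^n E_n.
assert all(q_int(n + 1) * E_q[n + 1] == q**n * E_q[n] for n in range(N - 1))
print("q-exponential identities verified up to order", N - 1)
```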
\begin{definition} The $q$-Bernoulli numbers $\mathfrak{B}_{n,q}^{\left( \alpha\right) }$ and polynomials $\mathfrak{B}_{n,q}^{\left( \alpha\right) }\left( x,y\right) $ in $x,y$ of order $\alpha$ are defined by means of the generating functions:
\begin{align*}
\left( \frac{t}{e_{q}\left( t\right) -1}\right) ^{\alpha} & =\sum_{n=0}^{\infty}\mathfrak{B}_{n,q}^{\left( \alpha\right) }\frac{t^{n}}{\left[ n\right] _{q}!},\ \ \ \left\vert t\right\vert <2\pi,\\
\left( \frac{t}{e_{q}\left( t\right) -1}\right) ^{\alpha}e_{q}\left( tx\right) E_{q}\left( ty\right) & =\sum_{n=0}^{\infty}\mathfrak{B}_{n,q}^{\left( \alpha\right) }\left( x,y\right) \frac{t^{n}}{\left[ n\right] _{q}!},\ \ \ \left\vert t\right\vert <2\pi.
\end{align*}
\end{definition}
\begin{definition} The $q$-Genocchi numbers $\mathfrak{G}_{n,q}^{\left( \alpha\right) }$ and polynomials $\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) $ in $x,y$ of order $\alpha$ are defined by means of the generating functions:
\begin{align*}
\left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha} & =\sum_{n=0}^{\infty}\mathfrak{G}_{n,q}^{\left( \alpha\right) }\frac{t^{n}}{\left[ n\right] _{q}!},\ \ \ \left\vert t\right\vert <\pi,\\
\left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha}e_{q}\left( tx\right) E_{q}\left( ty\right) & =\sum_{n=0}^{\infty}\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) \frac{t^{n}}{\left[ n\right] _{q}!},\ \ \ \left\vert t\right\vert <\pi.
\end{align*}
\end{definition}
It is obvious that
\begin{align*}
\mathfrak{B}_{n,q}^{\left( \alpha\right) } & =\mathfrak{B}_{n,q}^{\left( \alpha\right) }\left( 0,0\right) ,\ \ \ \lim_{q\rightarrow1^{-}}\mathfrak{B}_{n,q}^{\left( \alpha\right) }\left( x,y\right) =B_{n}^{\left( \alpha\right) }\left( x+y\right) ,\ \ \ \lim_{q\rightarrow1^{-}}\mathfrak{B}_{n,q}^{\left( \alpha\right) }=B_{n}^{\left( \alpha\right) },\\
\mathfrak{G}_{n,q}^{\left( \alpha\right) } & =\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( 0,0\right) ,\ \ \ \lim_{q\rightarrow1^{-}}\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) =G_{n}^{\left( \alpha\right) }\left( x+y\right) ,\ \ \ \lim_{q\rightarrow1^{-}}\mathfrak{G}_{n,q}^{\left( \alpha\right) }=G_{n}^{\left( \alpha\right) }.
\end{align*}
Here $B_{n}^{\left( \alpha\right) }\left( x\right) $ and $G_{n}^{\left( \alpha\right) }\left( x\right) $ denote the classical Bernoulli and Genocchi polynomials of order $\alpha$, defined by
\[
\left( \frac{t}{e^{t}-1}\right) ^{\alpha}e^{tx}=\sum_{n=0}^{\infty}B_{n}^{\left( \alpha\right) }\left( x\right) \frac{t^{n}}{n!}\ \ \ \ \ \text{and}\ \ \ \ \left( \frac{2t}{e^{t}+1}\right) ^{\alpha}e^{tx}=\sum_{n=0}^{\infty}G_{n}^{\left( \alpha\right) }\left( x\right) \frac{t^{n}}{n!}.
\]
The aim of the present paper is to obtain some results for the $q$-Genocchi polynomials. The $q$-analogues of well-known results, for example those of Srivastava and Pint\'{e}r \cite{pinter}, Cheon \cite{cheon}, etc., can be derived from these $q$-identities. Formulas involving the $q$-Stirling numbers of the second kind, $q$-Bernoulli polynomials and $q$-Bernstein polynomials are also given, and some special cases are considered. The following elementary properties of the $q$-Genocchi polynomials $\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) $ of order $\alpha$ are readily derived from the definitions; we choose to omit the details involved. \textbf{Property 1.
}\emph{Special values of the }$q$\emph{-Genocchi polynomials of order }$\alpha$\emph{:}
\[
\mathfrak{G}_{n,q}^{\left( 0\right) }\left( x,0\right) =x^{n},\ \ \ \mathfrak{G}_{n,q}^{\left( 0\right) }\left( 0,y\right) =q^{\frac{1}{2}n\left( n-1\right) }y^{n}.
\]
\textbf{Property 2.}\emph{ Summation formulas for the }$q$\emph{-Genocchi polynomials of order }$\alpha$\emph{:}
\begin{align*}
\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( x+y\right) _{q}^{n-k},\ \ \ \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) = {\displaystyle\sum\limits_{k=0}^{n}} \left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}\mathfrak{G}_{n-k,q}^{\left( \alpha-1\right) }\mathfrak{G}_{k,q}\left( x,y\right) ,\\
\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) & =\sum_{k=0}^{n}\left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}q^{\left( n-k\right) \left( n-k-1\right) /2}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( x,0\right) y^{n-k}=\sum_{k=0}^{n}\left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( 0,y\right) x^{n-k},\\
\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,0\right) & =\sum_{k=0}^{n}\left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}\mathfrak{G}_{k,q}^{\left( \alpha\right) }x^{n-k},\ \ \ \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( 0,y\right) =\sum_{k=0}^{n}\left[
\begin{array}
[c]{c}
n\\
k
\end{array}
\right] _{q}q^{\left( n-k\right) \left( n-k-1\right) /2}\mathfrak{G}_{k,q}^{\left( \alpha\right) }y^{n-k}.
\end{align*} \textbf{Property 3.}\emph{ Difference equations:} \begin{align*} \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( 1,y\right) +\mathfrak{G} _{n,q}^{\left( \alpha\right) }\left( 0,y\right) & =2\left[ n\right] _{q}\mathfrak{G}_{n-1,q}^{\left( \alpha-1\right) }\left( 0,y\right) ,\\ \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,0\right) +\mathfrak{G} _{n,q}^{\left( \alpha\right) }\left( x,-1\right) & =2\left[ n\right] _{q}\mathfrak{G}_{n-1,q}^{\left( \alpha-1\right) }\left( x,-1\right) . \end{align*} \textbf{Property 4. }\emph{Differential relations:} \[ D_{q,x}\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) =\left[ n\right] _{q}\mathfrak{G}_{n-1,q}^{\left( \alpha\right) }\left( x,y\right) ,\ \ \ D_{q,y}\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) =\left[ n\right] _{q}\ \mathfrak{G}_{n-1,q}^{\left( \alpha\right) }\left( x,qy\right) . \] \textbf{Property 5. }\emph{Addition theorem of the argument:} \[ \mathfrak{G}_{n,q}^{\left( \alpha+\beta\right) }\left( x,y\right) = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\mathfrak{G}_{n-k,q}^{\left( \alpha\right) }\left( x,0\right) \mathfrak{G}_{k,q}^{\left( \beta\right) }\left( 0,y\right) . \] \textbf{Property 6. }\emph{Recurrence relationship:} \[ \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( \frac{1}{m},y\right) +\sum_{k=0}^{n}\left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{n-k}\mathfrak{G} _{k,q}^{\left( \alpha\right) }\left( 0,y\right) =2\left[ n\right] _{q}\sum_{k=0}^{n-1}\left[ \begin{array} [c]{c} n-1\\ k \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{n-1-k}\mathfrak{G} _{k,q}^{\left( \alpha-1\right) }\left( 0,y\right) .
\] \section{Explicit relationship between the $q$-Genocchi and the $q$-Bernoulli polynomials} In this section we prove an interesting relationship between the $q$-Genocchi polynomials $\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) $ of order $\alpha$ and the $q$-Bernoulli polynomials. Some $q$-analogues of known results will be given, together with new formulas and some of their special cases. \begin{theorem} \label{S-P1}For $n\in\mathbb{N}_{0}$ and $m\in\mathbb{N}$, the following relationship \begin{align*} \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k-1}\left[ k+1\right] _{q}}\left[ 2\left[ k+1\right] _{q} {\displaystyle\sum\limits_{j=0}^{k}} \left[ \begin{array} [c]{c} k\\ j \end{array} \right] _{q}\frac{1}{m^{k-j}}\mathfrak{G}_{j,q}^{\left( \alpha-1\right) }\left( x,-1\right) \right. \\ & -\left. {\displaystyle\sum\limits_{j=0}^{k+1}} \left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}\frac{1}{m^{k+1-j}}\mathfrak{G}_{j,q}^{\left( \alpha\right) }\left( x,-1\right) -\mathfrak{G}_{k+1,q}^{\left( \alpha\right) }\left( x,0\right) \right] \mathfrak{B}_{n-k,q}\left( 0,my\right) \end{align*} holds true between the $q$-Genocchi and the $q$-Bernoulli polynomials.
\end{theorem} \begin{proof} Using the following identity \[ \left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha}e_{q}\left( tx\right) E_{q}\left( ty\right) =\left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha}e_{q}\left( tx\right) \cdot\frac{e_{q}\left( \frac {t}{m}\right) -1}{t}\cdot\frac{t}{e_{q}\left( \frac{t}{m}\right) -1}\cdot E_{q}\left( \frac{t}{m}my\right) \] we have \begin{align*} {\displaystyle\sum\limits_{n=0}^{\infty}} \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) \frac{t^{n} }{\left[ n\right] _{q}!} & =\frac{m}{t} {\displaystyle\sum\limits_{n=0}^{\infty}} \left( {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k}}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( x,0\right) -\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,0\right) \right) \frac{t^{n}}{\left[ n\right] _{q}!} {\displaystyle\sum\limits_{n=0}^{\infty}} \mathfrak{B}_{n,q}\left( 0,my\right) \frac{t^{n}}{m^{n}\left[ n\right] _{q}!}\\ & = {\displaystyle\sum\limits_{n=1}^{\infty}} \left( {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-1-k}}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( x,0\right) -m\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,0\right) \right) \frac{t^{n-1}}{\left[ n\right] _{q}!} {\displaystyle\sum\limits_{n=0}^{\infty}} \mathfrak{B}_{n,q}\left( 0,my\right) \frac{t^{n}}{m^{n}\left[ n\right] _{q}!}\\ & = {\displaystyle\sum\limits_{n=0}^{\infty}} \left( {\displaystyle\sum\limits_{k=0}^{n+1}} \left[ \begin{array} [c]{c} n+1\\ k \end{array} \right] _{q}m^{k}\mathfrak{G}_{k,q}^{\left( \alpha\right) }\left( x,0\right) -m^{n+1}\mathfrak{G}_{n+1,q}^{\left( \alpha\right) }\left( x,0\right) \right) \frac{t^{n}}{m^{n}\left[ n+1\right] _{q}!} {\displaystyle\sum\limits_{n=0}^{\infty}} \mathfrak{B}_{n,q}\left( 0,my\right) \frac{t^{n}}{m^{n}\left[ n\right] _{q}!}\\ & = {\displaystyle\sum\limits_{n=0}^{\infty}} 
{\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n}\left[ k+1\right] _{q}}\left( {\displaystyle\sum\limits_{j=0}^{k+1}} \left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}m^{j}\mathfrak{G}_{j,q}^{\left( \alpha\right) }\left( x,0\right) -m^{k+1}\mathfrak{G}_{k+1,q}^{\left( \alpha\right) }\left( x,0\right) \right) \mathfrak{B}_{n-k,q}\left( 0,my\right) \frac{t^{n} }{\left[ n\right] _{q}!}. \end{align*} It remains to use Property 6. \end{proof} Since $\mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) $ is not symmetric with respect to $x$ and $y$, we can prove a different form of the above theorem. It should be stressed that Theorems \ref{S-P1} and \ref{S-P11} coincide in the limiting case when $q\rightarrow1^{-}.$ \begin{theorem} \label{S-P11}For $n\in\mathbb{N}_{0}$ and $m\in\mathbb{N}$, the following relationship \begin{align*} \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k-1}\left[ k+1\right] _{q}}\left[ 2\left[ k+1\right] _{q}\sum_{j=0}^{k}\left[ \begin{array} [c]{c} k\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k-j}\mathfrak{G} _{j,q}^{\left( \alpha-1\right) }\left( 0,y\right) \right. \\ & -\left. \sum_{j=0}^{k+1}\left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k+1-j}\mathfrak{G} _{j,q}^{\left( \alpha\right) }\left( 0,y\right) -\mathfrak{G} _{k+1,q}^{\left( \alpha\right) }\left( 0,y\right) \right] \mathfrak{B}_{n-k,q}\left( mx,0\right) \end{align*} holds true between the $q$-Genocchi and the $q$-Bernoulli polynomials.
\end{theorem} \begin{proof} The proof is based on the following identity \[ \left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha}e_{q}\left( tx\right) E_{q}\left( ty\right) =\left( \frac{2t}{e_{q}\left( t\right) +1}\right) ^{\alpha}E_{q}\left( ty\right) \cdot\frac{e_{q}\left( \frac {t}{m}\right) -1}{t}\cdot\frac{t}{e_{q}\left( \frac{t}{m}\right) -1}\cdot e_{q}\left( \frac{t}{m}mx\right) . \] \end{proof} Next we discuss some special cases of Theorems \ref{S-P1} and \ref{S-P11}. By noting that \[ \mathfrak{G}_{j,q}^{\left( 0\right) }\left( 0,y\right) =q^{\frac{1} {2}j\left( j-1\right) }y^{j},\ \ \ \ \ \mathfrak{G}_{j,q}^{\left( 0\right) }\left( x,-1\right) =\left( x-1\right) _{q}^{j} \] we deduce from Theorems \ref{S-P1} and \ref{S-P11} Corollary \ref{C:1} below. \begin{corollary} \label{C:1}For $n\in\mathbb{N}_{0}$, $m\in\mathbb{N}$ the following relationship \begin{align*} \mathfrak{G}_{n,q}\left( x,y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k-1}\left[ k+1\right] _{q}}\left[ 2\left[ k+1\right] _{q}\sum_{j=0}^{k}\left[ \begin{array} [c]{c} k\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k-j}q^{\frac{1}{2}j\left( j-1\right) }y^{j}\right. \\ & -\left. \sum_{j=0}^{k+1}\left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k+1-j}\mathfrak{G} _{j,q}\left( 0,y\right) -\mathfrak{G}_{k+1,q}\left( 0,y\right) \right] \mathfrak{B}_{n-k,q}\left( mx,0\right) , \end{align*} \begin{align*} \mathfrak{G}_{n,q}\left( x,y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k-1}\left[ k+1\right] _{q}}\left[ 2\left[ k+1\right] _{q} {\displaystyle\sum\limits_{j=0}^{k}} \left[ \begin{array} [c]{c} k\\ j \end{array} \right] _{q}\frac{1}{m^{k-j}}\left( x-1\right) _{q}^{j}\right. \\ & -\left. 
{\displaystyle\sum\limits_{j=0}^{k+1}} \left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}\frac{1}{m^{k+1-j}}\mathfrak{G}_{j,q}\left( x,-1\right) -\mathfrak{G}_{k+1,q}\left( x,0\right) \right] \mathfrak{B}_{n-k,q}\left( 0,my\right) \end{align*} holds true between the $q$-Genocchi and the $q$-Bernoulli polynomials. \end{corollary} \begin{corollary} \label{C:2} For $n\in\mathbb{N}_{0}$, $m\in\mathbb{N}$, the following relationship holds true. \begin{align} G_{n}\left( x+y\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left( \begin{array} [c]{c} n\\ k \end{array} \right) \frac{2}{k+1}\left( \left( k+1\right) y^{k}-G_{k+1}\left( y\right) \right) B_{n-k}\left( x\right) ,\label{g1}\\ G_{n}\left( x+y\right) & =\sum_{k=0}^{n}\left( \begin{array} [c]{c} n\\ k \end{array} \right) \frac{1}{m^{n-k-1}\left( k+1\right) }\left[ 2\left( k+1\right) \left( y+\frac{1}{m}-1\right) ^{k}-G_{k+1}\left( y+\frac{1}{m}-1\right) -G_{k+1}\left( y\right) \right] B_{n-k}\left( mx\right) \label{g2} \end{align} between the classical Genocchi polynomials and the classical Bernoulli polynomials. \end{corollary} Note that the formula (\ref{g2}) is new for the classical polynomials. In terms of the $q$-Genocchi numbers $\mathfrak{G}_{k,q}^{\left( \alpha\right) }$, by setting $y=0$ in Theorem \ref{S-P11}, we obtain the following explicit relationship between the $q$-Genocchi polynomials $\mathfrak{G}_{k,q}^{\left( \alpha\right) }$ of order $\alpha$ and the $q$-Bernoulli polynomials. \begin{corollary} The following relationship holds true: \begin{align*} \mathfrak{G}_{n,q}^{\left( \alpha\right) }\left( x,0\right) & = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{1}{m^{n-k-1}\left[ k+1\right] _{q}}\left[ 2\left[ k+1\right] _{q}\sum_{j=0}^{k}\left[ \begin{array} [c]{c} k\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k-j}\mathfrak{G} _{j,q}^{\left( \alpha-1\right) }\right. \\ & -\left.
\sum_{j=0}^{k+1}\left[ \begin{array} [c]{c} k+1\\ j \end{array} \right] _{q}\left( \frac{1}{m}-1\right) _{q}^{k+1-j}\mathfrak{G} _{j,q}^{\left( \alpha\right) }-\mathfrak{G}_{k+1,q}^{\left( \alpha\right) }\right] \mathfrak{B}_{n-k,q}\left( mx,0\right) . \end{align*} \end{corollary} \begin{corollary} \label{C:3}For $n\in\mathbb{N}_{0}$ the following relationship holds true. \[ \mathfrak{G}_{n,q}\left( x,y\right) = {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{2}{\left[ k+1\right] _{q}}\left[ \left[ k+1\right] _{q}q^{\frac{1}{2}k\left( k-1\right) }y^{k}-\mathfrak{G}_{k+1,q}\left( 0,y\right) \right] \mathfrak{B}_{n-k,q}\left( x,0\right) . \] \end{corollary} \begin{corollary} \label{C:4}For $n\in\mathbb{N}_{0}$ the following relationship holds true. \begin{align*} \mathfrak{G}_{n,q}\left( x,0\right) & =2\mathfrak{B}_{n,q}\left( x,0\right) - {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{2}{\left[ k+1\right] _{q}}\mathfrak{G}_{k+1,q} \mathfrak{B}_{n-k,q}\left( x,0\right) ,\\ \mathfrak{G}_{n,q} & =2\mathfrak{B}_{n,q}- {\displaystyle\sum\limits_{k=0}^{n}} \left[ \begin{array} [c]{c} n\\ k \end{array} \right] _{q}\frac{2}{\left[ k+1\right] _{q}}\mathfrak{G}_{k+1,q} \mathfrak{B}_{n-k,q}. \end{align*} \end{corollary} \end{document}
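The classical limit (\ref{g1}) can be sanity-checked by computer. The sketch below (plain Python with exact rational arithmetic; an illustration, not part of the paper's argument) builds the classical Bernoulli and Genocchi polynomials from their Appell expansions and the relation $G_n=2(1-2^n)B_n$, and verifies $G_n(x+y)=\sum_{k=0}^{n}\binom{n}{k}\frac{2}{k+1}\left(\left(k+1\right)y^{k}-G_{k+1}(y)\right)B_{n-k}(x)$ at random rational points for small $n$:

```python
from fractions import Fraction
from math import comb
import random

def bernoulli_numbers(N):
    # B_0..B_N via the recurrence sum_{k=0}^{n-1} C(n+1,k) B_k = -(n+1) B_n
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

N = 8
Bnum = bernoulli_numbers(N + 1)
Gnum = [2 * (1 - 2**n) * Bnum[n] for n in range(N + 2)]  # Genocchi numbers

def B(n, x):
    # Bernoulli polynomial, Appell expansion over the Bernoulli numbers
    return sum(comb(n, k) * Bnum[k] * x**(n - k) for k in range(n + 1))

def G(n, x):
    # Genocchi polynomial, Appell expansion over the Genocchi numbers
    return sum(comb(n, k) * Gnum[k] * x**(n - k) for k in range(n + 1))

# check the identity exactly at random rational points, for n = 0..N
random.seed(1)
for n in range(N + 1):
    x = Fraction(random.randint(-9, 9), random.randint(1, 9))
    y = Fraction(random.randint(-9, 9), random.randint(1, 9))
    lhs = G(n, x + y)
    rhs = sum(Fraction(2 * comb(n, k), k + 1)
              * ((k + 1) * y**k - G(k + 1, y)) * B(n - k, x)
              for k in range(n + 1))
    assert lhs == rhs
```

Since both sides are polynomials of bounded degree, exact agreement at random rational points is strong (though not conclusive) evidence; a symbolic expansion gives the full check.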
\begin{document} \title[Braid groups and mapping class groups]{Braid groups, mapping class groups and their homology with twisted coefficients} \author{Andrea Bianchi} \address{Mathematics Institute, University of Bonn, Endenicher Allee 60, Bonn, Germany } \email{[email protected]} \date{\today} \keywords{Mapping class group, framed mapping class group, braid group, twisted coefficients, symplectic coefficients.} \subjclass[2010]{20F36, 55N25, 55R20, 55R35, 55R37, 55R40, 55R80.} \begin{abstract} We consider the Birman-Hilden inclusion $\varphi\colon\mathfrak{Br}_{2g+1}\to\Gamma_{g,1}$ of the braid group into the mapping class group of an orientable surface with boundary, and prove that $\varphi$ is stably trivial in homology with twisted coefficients in the symplectic representation $H_1(\Sigma_{g,1})$ of the mapping class group; this generalises a result of Song and Tillmann regarding homology with constant coefficients. Furthermore we show that the stable homology of the braid group with coefficients in $\varphi^*(H_1(\Sigma_{g,1}))$ has only $4$-torsion. \end{abstract} \maketitle \section{Introduction} Braid groups have a strong connection with mapping class groups of surfaces. On the one hand the braid group $\mathfrak{Br}_n$ on $n$ strands is itself a mapping class group, namely the one associated to the surface $\Sigma_{0,1}^{n}$ of genus $0$ with one (parametrised) boundary component and $n$ (permutable) punctures. On the other hand Birman and Hilden show in \cite{BH2} that the group $\mathfrak{Br}_{2g+1}$ can be identified with the hyperelliptic mapping class group: this is a certain subgroup of the mapping class group $\Gamma_{g,1}$ of an orientable surface of genus $g$ with one parametrised boundary component (see subsection \ref{subsec:hyp}). It is natural to study the behaviour in homology of the Birman-Hilden inclusion $\varphi\colon\mathfrak{Br}_{2g+1}\to\Gamma_{g,1}$.
Song and Tillmann \cite{SongTillmann}, and later Segal and Tillmann \cite{SegalTillmann}, show that the map $\varphi_*$ is stably trivial in homology with constant coefficients. More precisely: \begin{thm} \label{thm:ST} For any abelian group $A$ the map \[ \varphi_*\colon H_k(\mathfrak{Br}_{2g+1};A)\to H_k(\Gamma_{g,1};A) \] is trivial for $k\leq \frac 23 g -\frac 23$. \end{thm} The range $k\leq \frac 23 g -\frac 23$ is the best known stable range for the homology with constant coefficients of the mapping class group (see section \ref{sec:preliminaries} for the precise statement of Harer's stability theorem). Both proofs of theorem \ref{thm:ST} use certain analogues of the maps $\varphi$, namely the maps \[ \varphi^{even}\colon \mathfrak{Br}_{2g}\to\Gamma_{g-1,2} \] from a certain braid group on an even number of strands to a mapping class group of a surface with two boundary components. The maps $\varphi^{even}$ can be put together to form a braided monoidal functor $\coprod_{g\geq 1}\mathfrak{Br}_{2g}\to\coprod_{g\geq 1}\Gamma_{g-1,2}$. Passing to classifying spaces of categories and then taking the group completion, one shows that the stable map $\mathfrak{Br}_{\infty}\to\Gamma_{\infty,2}$ behaves in homology as the restriction on $0$-th components of a certain $\Omega^2$-map between the following $\Omega^2$-spaces. The first $\Omega^2$-space is $\Omega^2S^2$; its $0$-th connected component is the $\Omega^2$-space $\Omega^2 S^3$, which is the free $\Omega^2$-space over $S^1$. The second $\Omega^2$-space is \[ \Omega B\pa{\coprod_{g\geq 0}B\Gamma_{g,2}}\simeq \mathbb{Z}\times \pa{B\Gamma_{\infty}}^+, \] and in particular it has simply connected components, since $H_1(\Gamma_{\infty,2})=0$. Here $\pa{B\Gamma_{\infty}}^+$ denotes the Quillen plus construction applied to the classifying space of the group $\Gamma_{\infty}$, which is the colimit of the groups $\Gamma_{g,1}$ for increasing $g$ along the inclusions $\alpha$ (see subsection \ref{subsec:mcg}). 
The map $\varphi^{even}\colon\Omega^2 S^2\to \mathbb{Z}\times \pa{B\Gamma_{\infty}}^+$ is nullhomotopic on $\Omega^2S^3$, because its restriction to $S^1\subset\Omega^2 S^3$ is nullhomotopic. In particular the induced map in homology $\varphi^{even}_*$ is trivial in degree $*>0$. In \cite{BoT:Embeddings} B\"{o}digheimer and Tillmann generalise this argument to other families of embeddings of braid groups into mapping class groups. Our aim is to prove an analogue of theorem \ref{thm:ST} for homology with symplectic twisted coefficients. \begin{thm} \label{thm:STtwisted} Consider the symplectic representation $\mathcal{H}:=H_1(\Sigma_{g,1})$ of the mapping class group $\Gamma_{g,1}$, and its pull-back $\varphi^*\mathcal{H}$, which is a representation of $\mathfrak{Br}_{2g+1}$. The induced map in homology with twisted coefficients \[ \varphi_*\colon H_k(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})\to H_k(\Gamma_{g,1};\mathcal{H}) \] is trivial for $k\leq \frac 23 g -\frac 23 -1$. \end{thm} Our proof is more elementary and only relies on a weak version of Harer's stability theorem: in particular we will not need to stabilise with respect to the number of strands or the genus. We also obtain a result concerning the homology $H_*(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})$ on its own: \begin{thm} \label{thm:fourtorsion} The homology $H_*(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})$ is $4$-torsion, i.e. every element vanishes when multiplied by $4$. \end{thm} The homology $H_*(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})$ arises in a natural way as a direct summand of $H_*(\varphi^*\mathcal{S}_{g,1})$. Here $\mathcal{S}_{g,1}$ denotes the total space of the tautological $\Sigma_{g,1}$-bundle $\mathcal{S}_{g,1}\to B\Gamma_{g,1}$ over the classifying space of the mapping class group, and $\varphi^*\mathcal{S}_{g,1}$ is its pull-back on the braid group, or, as we have seen, on the hyperelliptic mapping class group.
This follows from the fact that every $\Sigma_{g,1}$-bundle has a section \emph{at the boundary} (see section \ref{sec:preliminaries}). This article contains the main results of my Master Thesis \cite{Bianchi}. Recently Callegaro and Salvetti (\cite{CallegaroSalvetti}) have computed explicitly the homology $H_*(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})$, showing that it in fact has only $2$-torsion; in another work \cite{CallegaroSalvetti2} the same authors have studied the analogous problem for totally ramified $d$-fold branched coverings of the disc. Their results are partially based on results of my Master Thesis, which are discussed in this article. I would like to thank Ulrike Tillmann, Mario Salvetti and Filippo Callegaro for their supervision, their help and their encouragement during the preparation of my Master Thesis, and Carl-Friedrich Bödigheimer for helpful discussions and detailed comments on a first draft of this article. \section{Preliminaries} \label{sec:preliminaries} In this section we recall some classical facts about braid groups and mapping class groups. \subsection{Braid groups} Let $\mathbb{D}=\set{z \,|\,\abs{z}<1}\subset\mathbb{C}$ be the open unit disc, and let \[ F_n(\mathbb{D})=\set{(z_1,\dots,z_n)\in\mathbb{D}^n\;|\;z_i\neq z_j \;\forall i\neq j} \] be the \emph{ordered configuration space} of $n$ points in $\mathbb{D}$. There is a natural, free action of $\mathfrak{S}_n$ on $F_n(\mathbb{D})$, which permutes the labels of a configuration. The quotient space is denoted by $C_n(\mathbb{D})$ and is called the \emph{unordered configuration space} of $n$ points in $\mathbb{D}$. Artin's braid group $\mathfrak{Br}_n$ is defined as the fundamental group $\pi_1(C_n(\mathbb{D}))$; recall that $C_n(\mathbb{D})$ is an aspherical space (see \cite{FadellNeuwirth}), and hence a classifying space for $\mathfrak{Br}_n$.
The braid group $\mathfrak{Br}_n$ has a presentation (see \cite{Artin}) with generators $\sigma_1,\dots,\sigma_{n-1}$ and relations: \begin{itemize} \item $\sigma_i\sigma_j=\sigma_j\sigma_i$ for $|i-j|\geq 2$; \item $\sigma_i\sigma_j\sigma_i=\sigma_j\sigma_i\sigma_j$ for $|i-j|=1$. \end{itemize} The space $C_n(\mathbb{D})$ has a natural structure of complex manifold, with local coordinates $(z_1,\dots,z_n)$, the positions of the points in the configuration. To stress that these local coordinates do not have a preferred order, we will also write $\set{z_1,\dots,z_n}$. \subsection{Mapping class groups} \label{subsec:mcg} Let $\Sigma_{g,m}$ be a smooth, oriented, compact surface of genus $g$ with $m\geq 1$ \emph{parametrised} boundary components. We will be mainly interested in the case $m=1$, but we will need also the case $m=2$ to present our results. A parametrisation of the boundary is a diffeomorphism $\partial\Sigma_{g,m}\cong \set{1,\dots,m}\times\mathbb{S}^1$, where $\mathbb{S}^1\subset\mathbb{C}$ is the unit circle; this diffeomorphism should induce on each boundary component the same orientation as the one induced by the (oriented) surface $\Sigma_{g,m}$ on the boundary. We choose as basepoint for $\Sigma_{g,m}$ the point $*\in\partial\Sigma_{g,m}$ corresponding to $(1,1)\in\set{1,\dots,m}\times\mathbb{S}^1$. We consider the group $\mathbb{D}iff_{g,m}$ of diffeomorphisms $f\colon\Sigma_{g,m}\to\Sigma_{g,m}$ for which there exists a collar neighborhood $U\subseteq \Sigma_{g,m}$ of the boundary $\partial\Sigma_{g,m}$ such that $f|_U$ is the identity. This is a topological group with the Whitney $C^{\infty}$-topology. Note that a diffeomorphism that fixes a neighborhood of the boundary (in particular an open set of $\Sigma_{g,m}$) must be orientation-preserving.
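The braid relations recalled above persist in every quotient of $\mathfrak{Br}_n$, in particular in the symmetric group $\mathfrak{S}_n$, where $\sigma_i$ maps to the transposition $(i\;\,i{+}1)$. The following minimal Python sketch (an illustration only) spot-checks both relations for these transpositions:

```python
def transposition(n, i):
    # image of sigma_i in the symmetric group: swap strands i and i+1 (1-based)
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    # composition of permutations: (p * q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

n = 7  # number of strands, e.g. n = 2g+1 with g = 3
s = {i: transposition(n, i) for i in range(1, n)}
for i in range(1, n):
    for j in range(1, n):
        if abs(i - j) >= 2:   # distant generators commute
            assert compose(s[i], s[j]) == compose(s[j], s[i])
        elif abs(i - j) == 1:  # braid relation for adjacent generators
            assert compose(s[i], compose(s[j], s[i])) == compose(s[j], compose(s[i], s[j]))
```

Of course the transpositions additionally satisfy $\sigma_i^2=1$, so this only checks relations, not the full structure of $\mathfrak{Br}_n$.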
A result by Earle and Schatz (\cite{EarleSchatz}) ensures that $\mathbb{D}iff_{g,m}$ has contractible connected components, so the tautological map \[ \mathbb{D}iff_{g,m}\to\pi_0(\mathbb{D}iff_{g,m}) \] is a homotopy equivalence. The second term $\pi_0(\mathbb{D}iff_{g,m})$ is the discrete group of connected components of $\mathbb{D}iff_{g,m}$: it is called the mapping class group of $\Sigma_{g,m}$ and it is denoted by $\Gamma_{g,m}$. By taking classifying spaces we obtain a homotopy equivalence $B\mathbb{D}iff_{g,m}\simeq B\Gamma_{g,m}$. The tautological action of $\mathbb{D}iff_{g,m}$ on $\Sigma_{g,m}$ yields, through the Borel construction, the map \[ E\mathbb{D}iff_{g,m}\times_{\mathbb{D}iff_{g,m}}\Sigma_{g,m}\to B\mathbb{D}iff_{g,m}=E\mathbb{D}iff_{g,m}/\mathbb{D}iff_{g,m}. \] This map is a fiber bundle with fiber $\Sigma_{g,m}$. The pullback bundle along the inverse homotopy equivalence $B\Gamma_{g,m}\to B\mathbb{D}iff_{g,m}$ is denoted by $p\colon \mathcal{S}_{g,m}\to B\Gamma_{g,m}$; we have a pull-back square \[ \begin{tikzcd} \mathcal{S}_{g,m} \ar[r,"\simeq"] \ar[d,"p"] & E\mathbb{D}iff_{g,m}\times_{\mathbb{D}iff_{g,m}}\Sigma_{g,m}\ar[d] \\ B\Gamma_{g,m}\ar[r,"\simeq"] & B\mathbb{D}iff_{g,m}. \end{tikzcd} \] The fiber of $p$ is a surface diffeomorphic to $\Sigma_{g,m}$; boundaries of fibers are moreover equipped with a parametrisation, so that the subspace $\partial\mathcal{S}_{g,m}$ given by the union of all boundaries of fibers is canonically homeomorphic to $B\Gamma_{g,m}\times \pa{\set{1,\dots,m}\times\mathbb{S}^1}$; each boundary component of each fiber of $p$ inherits the same orientation from the oriented fiber ($\simeq\Sigma_{g,m}$) to which it belongs and from its identification with $\mathbb{S}^1$ along the aforementioned canonical homeomorphism.
The bundle $p$ is universal among bundles with all these properties: if $X$ is a paracompact space and $\tilde p\colon\tilde S\to X$ is a $\Sigma_{g,m}$-bundle over $X$ with a given homeomorphism identifying the subspace $\partial \tilde S$ of boundaries of fibers with $X\times\pa{\set{1,\dots,m}\times\mathbb{S}^1}$, such that each boundary component of each fiber of $\tilde p$ inherits the same orientation from the fiber to which it belongs and from the aforementioned homeomorphism, then there is up to homotopy a unique classifying map $\psi\colon X\to B\Gamma_{g,m}$ such that $\tilde p\simeq \psi^*p$ as bundles with parametrised boundaries of fibers. The bundle $p$ admits a global section \emph{at the boundary} $s_0\colon B\Gamma_{g,m}\to \mathcal{S}_{g,m}$, obtained by choosing the basepoint of each fiber (i.e. the point corresponding to $(1,1)\in\set{1,\dots,m}\times\mathbb{S}^1$ under the parametrisation). By abuse of notation, we will also see $B\Gamma_{g,m}=s_0(B\Gamma_{g,m})$ as a subspace of $\mathcal{S}_{g,m}$. Fibers of $p$ are smooth surfaces, and we can assemble together their tangent bundles to get a vector bundle $\bar p^v\colon \mathcal{V}_{g,m}\to \mathcal{S}_{g,m}$ with fiber $\mathbb{R}^2$, called the \emph{vertical tangent bundle}. Choosing a Riemannian metric on $\bar p^v$ and considering on each vector space its unit circle, we can also define the \emph{unit vertical tangent bundle} $p^v\colon \mathcal{UV}_{g,m}\to \mathcal{S}_{g,m}$, with fiber $\mathbb{S}^1$. We can define a section of $p^v$ over the subspace $\partial \mathcal{S}_{g,m}\simeq B\Gamma_{g,m}\times\pa{\set{1,\dots,m}\times\mathbb{S}^1}$: we assign to each point on the boundary of some fiber of $p$ the unit vector which is tangent to that fiber, is orthogonal to the boundary of that fiber and points outwards.
We will actually only need the restriction of this section to $B\Gamma_{g,m}=s_0(B\Gamma_{g,m})\subset\partial \mathcal{S}_{g,m}$: we call it $s_0^v\colon B\Gamma_{g,m}=s_0(B\Gamma_{g,m})\to \mathcal{UV}_{g,m}$, and again by abuse of notation we see $B\Gamma_{g,m}$ as a subspace of $\mathcal{UV}_{g,m}$. See the following diagram \[ \begin{tikzcd}[column sep=6em,row sep=3em] s_0^v(B\Gamma_{g,m}) \ar[d, equal]\ar[r,hook] & \mathcal{UV}_{g,m}\ar[d,"p^v"]\\ s_0(B\Gamma_{g,m})\ar[r,hook]\ar[dr,equal] & \mathcal{S}_{g,m}\ar[d,"p"]\\ & B\Gamma_{g,m} \end{tikzcd} \] The previous constructions are natural with respect to pullbacks: if $\tilde p\colon\tilde S\to X$ is a $\Sigma_{g,m}$-bundle over a paracompact space $X$, we have a section \emph{at the boundary} $\tilde s_0\colon X\to \tilde S$, a unit vertical tangent bundle $\tilde p^v\colon \tilde{\mathcal{UV}}\to \tilde S$ and a \emph{pointing outward} section $\tilde s^v_0\colon X=\tilde s_0(X)\to \tilde{\mathcal{UV}}$. We now restrict to the cases $m=1,2$ and construct a map $\beta\colon\Gamma_{g,1}\to\Gamma_{g,2}$. First we decompose $\Sigma_{g,2}$ as the union of $\Sigma_{g,1}$ and a pair of pants $\Sigma_{0,3}$ along a boundary component. Each diffeomorphism of $\Sigma_{g,1}$ fixing a collar neighborhood of $\partial\Sigma_{g,1}$ extends to a diffeomorphism of $\Sigma_{g,2}$, by prescribing the identity map on $\Sigma_{0,3}$: we obtain a homomorphism $\bar\beta\colon\mathbb{D}iff_{g,1}\to\mathbb{D}iff_{g,2}$, and the homomorphism $\beta$ is $\pi_0(\bar\beta)$. See figure \ref{fig:glue}. \begin{figure} \caption{Glueing surfaces in different ways yields homomorphisms $\alpha$,$\beta$ and $\gamma$ between mapping class groups.} \label{fig:glue} \end{figure} Conversely, we can construct a map $\gamma\colon\Gamma_{g,2}\to \Gamma_{g,1}$ as follows.
First, we decompose $\Sigma_{g,1}$ as the union of $\Sigma_{g,2}$ and a disc $\Sigma_{0,1}$ along a boundary component. Each diffeomorphism of $\Sigma_{g,2}$ fixing a collar neighborhood of $\partial\Sigma_{g,2}$ extends to $\Sigma_{g,1}$, by prescribing the identity map on $\Sigma_{0,1}$: we obtain a homomorphism $\bar\gamma\colon\mathbb{D}iff_{g,2}\to\mathbb{D}iff_{g,1}$, and the homomorphism $\gamma$ is $\pi_0(\bar\gamma)$. The composition $\gamma\circ\beta\colon\Gamma_{g,1}\to\Gamma_{g,1}$ is essentially the identity: we are glueing a cylinder $\Sigma_{0,2}\simeq\mathbb{S}^1\times[0,1]$ to $\partial\Sigma_{g,1}$ to obtain a surface that is again diffeomorphic to $\Sigma_{g,1}$; moreover there is a preferred isotopy class of diffeomorphisms $\Sigma_{g,1}\to \Sigma_{g,1}\cup_{\mathbb{S}^1}\mathbb{S}^1\times[0,1]$, represented by the evaluation at time 1 of any extension of the tautological isotopy $\partial\Sigma_{g,1}\times [0,1]\overset{\simeq}{\to} \mathbb{S}^1\times[0,1]\subset\Sigma_{g,1}\cup_{\mathbb{S}^1}\mathbb{S}^1\times[0,1]$, starting from the inclusion $\Sigma_{g,1}\subset\Sigma_{g,1}\cup_{\mathbb{S}^1}\mathbb{S}^1\times[0,1]$. We can thus identify the mapping class group of $\Sigma_{g,1}$ and the mapping class group of $\Sigma_{g,1}\cup_{\mathbb{S}^1} \mathbb{S}^1\times[0,1]$, and under this identification the map $\gamma\circ\beta$ is the identity of $\Gamma_{g,1}$. Finally, consider the following morphism of groups $\alpha\colon\Gamma_{g,2}\to\Gamma_{g+1,1}$: this time we obtain $\Sigma_{g+1,1}$ glueing $\Sigma_{g,2}$ and a pair of pants $\Sigma_{0,3}$ along two boundary components. Again we get first a homomorphism $\mathbb{D}iff_{g,2}\to\mathbb{D}iff_{g+1,1}$ and then a homomorphism $\alpha$ between the corresponding mapping class groups.
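The genus and boundary bookkeeping of the gluing maps $\alpha$, $\beta$, $\gamma$ can be double-checked with Euler characteristics: $\chi(\Sigma_{g,m})=2-2g-m$, and $\chi$ is additive when gluing along circles (since $\chi(\mathbb{S}^1)=0$). A small sketch, purely illustrative:

```python
def chi(g, m):
    # Euler characteristic of a genus-g surface with m boundary circles
    return 2 - 2 * g - m

g = 5
# beta: glue a pair of pants Sigma_{0,3} to Sigma_{g,1} along 1 circle -> Sigma_{g,2}
assert chi(g, 1) + chi(0, 3) == chi(g, 2)
# gamma: glue a disc Sigma_{0,1} to Sigma_{g,2} along 1 circle -> Sigma_{g,1}
assert chi(g, 2) + chi(0, 1) == chi(g, 1)
# alpha: glue a pair of pants to Sigma_{g,2} along 2 circles -> Sigma_{g+1,1}
assert chi(g, 2) + chi(0, 3) == chi(g + 1, 1)
```

Additivity of $\chi$ only constrains the result; that the glued surfaces really are $\Sigma_{g,2}$, $\Sigma_{g,1}$ and $\Sigma_{g+1,1}$ follows from the classification of surfaces with boundary.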
We will state Harer's stability theorem in a form that suffices for our purposes (see \cite{Harer} for the original theorem and \cite{Boldsen, ORW:resolutions_homstab} for the improved stability ranges). \begin{thm}[Harer] \label{thm:Harer} Let $A$ be an abelian group. The maps $\alpha,\beta,\gamma$ described above induce isomorphisms in homology in a certain range: \[ \alpha_*\colon H_k(\Gamma_{g,2};A)\cong H_k(\Gamma_{g+1,1};A)\quad \mbox{for }k\leq \frac 23 g-\frac 23; \] \[ \beta_*\colon H_k(\Gamma_{g,1};A)\cong H_k(\Gamma_{g,2};A)\quad \mbox{for }k\leq \frac 23 g; \] \[ \gamma_*\colon H_k(\Gamma_{g,2};A)\cong H_k(\Gamma_{g,1};A)\quad \mbox{for }k\leq \frac 23 g. \] \end{thm} Theorem \ref{thm:ST} relies on the full statement of theorem \ref{thm:Harer}, but in the proof of theorem \ref{thm:STtwisted} we will only need homological stability for the maps $\beta$ and $\gamma$: these are the stabilisation maps that change the number of boundary components but not the genus. We will also need the following classical result (see \cite{FarbMargalit}, propositions 3.19 and 4.6). \begin{thm} The space $\mathcal{UV}_{g,1}$ is a classifying space for $\Gamma_{g,2}$, i.e. it is homotopy equivalent to $B\Gamma_{g,2}$. The map $s_0^v\circ s_0\colon B\Gamma_{g,1}\to\mathcal{UV}_{g,1}$ induces the map $\beta$ on fundamental groups. The map $p\circ p^v\colon \mathcal{UV}_{g,1}\to B\Gamma_{g,1}$ induces the map $\gamma$ on fundamental groups. \end{thm} \subsection{Hyperelliptic mapping class groups} \label{subsec:hyp} Fix a diffeomorphism $J$ of $\Sigma_{g,1}$ with the following properties: \begin{itemize} \item $J^2$ is the identity of $\Sigma_{g,1}$; \item $J$ acts on $\partial\Sigma_{g,1}\cong\mathbb{S}^1$ as the rotation by an angle $\pi$; \item $J$ has exactly $2g+1$ fixed points in the interior of $\Sigma_{g,1}$.
\end{itemize} The quotient $\Sigma_{g,1}/J$ is a disc and the map $\Sigma_{g,1}\to\Sigma_{g,1}/J$ is a $2$-fold branched covering map with $2g+1$ branching points. We say that $J$ is a \emph{hyperelliptic involution} of $\Sigma_{g,1}$. Consider the group $\mathbb{D}iff_{g,1}^{ext}$ of diffeomorphisms $f\colon\Sigma_{g,1}\to\Sigma_{g,1}$ that preserve the orientation and restrict on a neighborhood of $\partial\Sigma_{g,1}$ either to the identity, or to $J$. We have a short exact sequence of topological groups \[ \begin{tikzcd} 1\ar[r] & \mathbb{D}iff_{g,1} \ar[r] & \mathbb{D}iff_{g,1}^{ext}\ar[r] & \mathbb{Z}_2 \ar[r] &1. \end{tikzcd} \] There is a section $\mathbb{Z}_2\to\mathbb{D}iff_{g,1}^{ext}$ given by $J$. Taking connected components we obtain a split short exact sequence \[ \begin{tikzcd} 1\ar[r] & \Gamma_{g,1} \ar[r] & \Gamma_{g,1}^{ext}\ar[r] & \mathbb{Z}_2 \ar[r] &1, \end{tikzcd} \] where $\Gamma_{g,1}^{ext}=\pi_0\pa{\mathbb{D}iff_{g,1}^{ext}}$ is called the \emph{extended mapping class group}. The \emph{extended hyperelliptic mapping class group} $\triangle_{g,1}^{ext}$ is the centralizer in $\Gamma_{g,1}^{ext}$ of the mapping class of $J$: \[ \triangle_{g,1}^{ext}=Z([J]). \] The \emph{hyperelliptic mapping class group} $\triangle_{g,1}$ is the intersection in $\Gamma_{g,1}^{ext}$ between $\Gamma_{g,1}$ and $\triangle_{g,1}^{ext}$. We have an isomorphism \[ \triangle^{ext}_{g,1}\simeq \triangle_{g,1}\times\left<J\right>. \] \section{Definition of the map $\varphi$ and a general construction} \label{sec:defphi} We consider on $\Sigma_{g,1}$ a chain of $2g$ simple closed curves $c_1,\dots,c_{2g}$, such that $c_i\cap c_j=\emptyset$ for $\abs{i-j}\geq 2$, whereas $c_i$ and $c_j$ intersect transversely in one point if $\abs{i-j}=1$. Note that a tubular neighborhood of the union of these curves is itself diffeomorphic to $\Sigma_{g,1}$. See figure \ref{fig:birman}.
\begin{figure} \caption{A chain of $2g$ simple closed curves on $\Sigma_{g,1}$} \label{fig:birman} \end{figure} Denote by $D_i\in\Gamma_{g,1}$ the Dehn twist about the curve $c_i$; then it is a classical result (see \cite{FarbMargalit}, Fact 3.9 and Proposition 3.11) that $D_iD_j=D_jD_i$ in $\Gamma_{g,1}$ for $\abs{i-j}\geq 2$, and $D_iD_jD_i=D_jD_iD_j$ for $\abs{i-j}=1$. Therefore there is an induced morphism of groups \[ \varphi\colon\mathfrak{Br}_{2g+1}\to\Gamma_{g,1} \] defined by mapping the generator $\sigma_i\in\mathfrak{Br}_{2g+1}$ to the Dehn twist $D_i\in\Gamma_{g,1}$. This map is called the Birman-Hilden inclusion: it is indeed injective and its image is the hyperelliptic mapping class group (see \cite{BH1,BH2}). From now on let $n:=2g+1$; in particular $n$ is odd. We now give a geometric description of the $\Sigma_{g,1}$-bundle $\varphi^*\mathcal{S}_{g,1}$ over $C_n(\mathbb{D})\simeq B\mathfrak{Br}_n$. Consider, in the complex manifold $C_n(\mathbb{D})\times\overline{\mathbb{D}}\times\mathbb{C}$, the subspace \[ \mathcal{V}_n=\set{\pa{\set{z_1,\dots,z_n},x,y} \;|\; y^2=\prod_{i=1}^n(x-z_i)}. \] Here $\overline{\mathbb{D}}$ is the closed unit disc in $\mathbb{C}$. First we show that $\mathcal{V}_n$ is a smooth manifold with boundary: indeed it is the zero locus in $C_n(\mathbb{D})\times\overline{\mathbb{D}}\times\mathbb{C}$ of the function $f(\set{z_i},x,y)=y^2-\prod_i(x-z_i)$, whose partial derivatives with respect to $x$ and $y$ are \[ \frac {df}{dx}(\set{z_1,\dots,z_n},x,y)=-\sum_{i=1}^n\prod_{j\neq i}(x-z_j), \] \[ \frac {df}{dy}(\set{z_1,\dots,z_n},x,y)=2y. \] If $\frac {df}{dy}$ vanishes, then $y=0$; if moreover $f$ vanishes, then $x=z_i$ for exactly one value of $i$. Then all the summands but exactly one in the sum for $\frac {df}{dx}$ vanish, and therefore $\frac {df}{dx}\neq 0$. We have thus shown that $\mathcal{V}_n$ is a smooth manifold, as $df$ never vanishes on $\mathcal{V}_n$.
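The argument above can be checked numerically: at the only candidate points, where $y=0$ and $x=z_i$, the sum defining $\frac{df}{dx}$ collapses to $-\prod_{j\neq i}(z_i-z_j)\neq 0$. A minimal sketch with a randomly sampled configuration (the sample and the tolerance are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7                                   # n = 2g+1 with g = 3
z = rng.uniform(-0.5, 0.5, n) + 1j * rng.uniform(-0.5, 0.5, n)

def df_dx(x):
    # -sum_i prod_{j != i} (x - z_j)
    return -sum(np.prod([x - z[j] for j in range(n) if j != i])
                for i in range(n))

# On V_n, df/dy = 2y vanishes only if y = 0, which forces x = z_i.
# At each such point df/dx collapses to -prod_{j != i}(z_i - z_j) != 0.
for i in range(n):
    collapsed = -np.prod([z[i] - z[j] for j in range(n) if j != i])
    assert np.isclose(df_dx(z[i]), collapsed)
    assert abs(df_dx(z[i])) > 1e-12     # df does not vanish on V_n
```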
Moreover at least one of the $x$- and $y$-components of $df$ is non-zero at each point of $\mathcal{V}_n$, therefore the natural projection \[ \pi\colon\mathcal{V}_n\to C_n(\mathbb{D}) \] is a submersion; in particular its fibers are smooth. Note also that $\mathcal{V}_n$ is transverse to $C_n(\mathbb{D})\times\partial\overline{\mathbb{D}}\times\mathbb{C}$: if $|x|=1$ then $x\neq z_i$ for all $i$ and we can rewrite \[ \frac {df}{dx}(\set{z_1,\dots,z_n},x,y)=-\pa{\prod_{i=1}^n(x-z_i)}\pa{\sum_{i=1}^n\frac{1}{x-z_i}}\neq 0, \] where the sum is non-zero because it has a non-trivial component in the direction of $\frac 1x$: if we consider the summands as vectors in $\mathbb{R}^2$ with the usual scalar product, then each summand has a positive scalar product with the vector $\frac 1x$. The fibers of $\pi$ are smooth manifolds of complex dimension 1, i.e. Riemann surfaces. Each fiber is a double covering of $\overline{\mathbb{D}}$, branched over $n$ points: the covering map is the projection to the $x$-coordinate. Thus the Euler characteristic of the fiber is $2\cdot\chi(\overline{\mathbb{D}})-n=2-n=1-2g$. The boundary of the fiber over any $q=\set{z_1,\dots,z_n}\in C_n(\mathbb{D})$ is $\set{(x,y)\;|\;\abs x=1, y^2=\prod_{i=1}^n(x-z_i)}$ and is a \emph{connected} double covering of $\mathbb{S}^1$: a section of this covering would be a continuous choice, for $x\in\mathbb{S}^1$, of a square root $y=\sqrt{\prod_{i=1}^n(x-z_i)}$, which does not exist since $n$ is odd. Therefore the fiber of $\pi$ is diffeomorphic to $\Sigma_{g,1}$. We want to parametrise the boundary component of each fiber. For any $q=\set{z_1,\dots,z_n}$ we can consider the equation $y^2=\prod_{i=1}^n(1-tz_i)$, for $t$ ranging in $[0,1]$. If $t=1$ the two solutions for $y$ give rise to two points $p_1,p_2\in\partial \pi^{-1}(q)$, putting $x=1$; if $t=0$ the two values of $y$ are $\pm 1$.
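The connectedness of the boundary double cover can be tested numerically: continuing the square root $\sqrt{\prod_i(x-z_i)}$ once around $|x|=1$ multiplies it by $e^{\pi in}=-1$ when $n$ is odd, so the two sheets are exchanged. A sketch with arbitrary sample data (our own choice of configuration and resolution):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                            # odd, n = 2g+1 with g = 2
z = 0.4 * (rng.random(n) + 1j * rng.random(n))   # branch points inside the disc

theta = np.linspace(0.0, 2 * np.pi, 20000)
x = np.exp(1j * theta)                           # a loop around the boundary circle
P = np.prod(x[:, None] - z[None, :], axis=1)

# Choose the argument of P continuously along the loop, then halve it
# to obtain a continuous branch of the square root.
arg = np.unwrap(np.angle(P))
winding = (arg[-1] - arg[0]) / (2 * np.pi)       # one full turn per branch point
y = np.sqrt(np.abs(P)) * np.exp(1j * arg / 2)

assert round(winding) == n
assert np.isclose(y[-1], -y[0])   # the sheets are swapped: the cover is connected
```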
As $\prod_{i=1}^n(1-tz_i)\neq 0$ for all $t$, the two values of $y$ are always different and vary continuously while $t$ ranges from $0$ to $1$. This gives a bijection between the sets $\set{p_1,p_2}$ and $\set{\pm 1}$. Assume that $p_1$ corresponds to $+1$; then we parametrise $\partial\pi^{-1}(q)$ with the unique continuous choice of a square root $\sqrt{x}$ taking the value $+1$ on $p_1$. This construction is continuous in $q\in C_n(\mathbb{D})$. We have therefore constructed a $\Sigma_{g,1}$-bundle over $C_n(\mathbb{D})$, and this yields a classifying map $C_n(\mathbb{D})\to B\Gamma_{g,1}$ which in turn gives a map $\mathfrak{Br}_n\to\Gamma_{g,1}$ between fundamental groups: the induced map is precisely $\varphi$ (see also \cite{SegalTillmann}). The last construction admits a slight generalisation, which we briefly discuss here. Let $B$ be a topological space and let $\psi\colon B\to \mathbb{C} [x,y]$ be a continuous function from $B$ to the space of polynomials in two variables. Assume the following: \begin{itemize} \item there exist two relatively prime positive integers $m,n$ such that for every $b\in B$ the polynomial $\psi(b)(x,y)$ has the form $\pm x^n\pm y^m +$ lower order terms, all of order $<n$ in $x$ and $<m$ in $y$; \item for every $b\in B$ there is no point $(x,y)\in\mathbb{C}^2$ where all the following functions vanish: $\psi(b)(x,y)$, $\frac d{dx}\psi(b)(x,y)$ and $\frac d{dy}\psi(b)(x,y)$; \item for every $b\in B$ there is no point $(x,y)\in\mathbb{C}^2$ with $\abs{x}\geq 1$ where both $\psi(b)(x,y)$ and $\frac d{dx}\psi(b)(x,y)$ vanish. \end{itemize} Then we can consider in $B\times \overline{\mathbb{D}}\times\mathbb{C}$ the zero locus of $\psi$, that is, the set \[ \mathcal{V}=\set{(b,x,y)\,|\,\psi(b)(x,y)=0}. \] There is a natural projection of $\mathcal{V}$ onto $B$, and each fiber is a smooth surface of genus $\frac{(m-1)(n-1)}2$ with one boundary component.
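For the model case $\psi(b)(x,y)=y^m-\prod_{i=1}^n(x-z_i)$ the genus can be recovered from Riemann--Hurwitz: the fiber is an $m$-fold cover of the disc, totally ramified over the $n$ roots, so $\chi=m-n(m-1)$, and a surface with one boundary component has $\chi=1-2g$. A quick arithmetic check of the resulting identity (the loop bounds are arbitrary):

```python
from math import gcd

for m in range(2, 8):
    for n in range(2, 12):
        if gcd(m, n) != 1:
            continue  # coprimality guarantees a single boundary component
        # Riemann-Hurwitz for an m-fold cover of the disc,
        # totally ramified over n interior points:
        chi = m * 1 - n * (m - 1)      # chi(disc) = 1
        g = (1 - chi) // 2             # one boundary component: chi = 1 - 2g
        assert g == (m - 1) * (n - 1) // 2
```

Note that $(m-1)(n-1)$ is always even here: since $m$ and $n$ are coprime they are not both even, so one of the factors is even.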
This boundary component is an $m$-fold covering of $\mathbb{S}^1$, by projecting to $x$, and it can be parametrised with the parameter $\sqrt[m]{x}$ starting from a point $(1,y_0)$ obtained again by shrinking to zero all lower order terms of the polynomial (and considering $1$ as the preferred $m$-th root of unity). The construction of $\mathcal{V}_n$ is a special case of this construction, in which $B=C_n(\mathbb{D})$ and $\psi(\set{z_1,\dots,z_n})=y^2-\prod_{i=1}^n(x-z_i)$. Another example is given, again for $B=C_n(\mathbb{D})$ and for a fixed $d\geq 3$, by the assignment $\psi(\set{z_1,\dots,z_n})=y^d-\prod_{i=1}^n(x-z_i)$: one gets the universal family of \emph{superelliptic curves} of degree $d$. A superelliptic curve of degree $d$ is a $d$-fold covering of $\overline{\mathbb{D}}$ branched over $n$ points; its group of deck transformations is cyclic of order $d$ (in particular it acts transitively on fibers); the fiber over each branching point consists of only one point, and all branching points have the same total holonomy, computed with respect to any regular point. This family was studied by Callegaro and Salvetti in \cite{CallegaroSalvetti2}. \section{Unit vertical vector fields} Our next aim is to construct on the $\Sigma_{g,1}$-bundle $\mathcal{V}_n\to C_n(\mathbb{D})$ a unit vertical vector field, i.e. a section of the $\mathbb{S}^1$-bundle $\varphi^*\mathcal{UV}_{g,1}\to\mathcal{V}_n=\varphi^*\mathcal{S}_{g,1}$. To do so consider on the entire manifold $C_n(\mathbb{D})\times\overline{\mathbb{D}}\times\mathbb{C}$ the holomorphic vector field \[ \vec v(\set{z_1,\dots,z_n},x,y)=\frac {df}{dy}\cdot\frac{\partial}{\partial x} - \frac {df}{dx}\cdot\frac{\partial}{\partial y}= 2y\cdot \frac{\partial}{\partial x}+\pa{\sum_{i=1}^n\prod_{j\neq i}(x-z_j)}\cdot\frac{\partial}{\partial y}.
\] Then $\vec v$ does not vanish on $\mathcal{V}_n$: we have already seen that at each point of $\mathcal{V}_n$ at least one of the $x$- and $y$-partial derivatives of $y^2-\prod_{i=1}^n(x-z_i)$ does not vanish. Moreover $\vec v$ is tangent to $\mathcal{V}_n$, since it annihilates $df$; and $\vec v$ is vertical, as it is a linear combination of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$. Therefore, up to the canonical identification between the holomorphic tangent bundle and the real tangent bundle and up to renormalisation, we have found a unit vertical vector field on $\mathcal{V}_n$, i.e. a section of the $\mathbb{S}^1$-bundle $\varphi^*\mathcal{UV}_{g,1}\to\varphi^*\mathcal{S}_{g,1}$. We already have a unit vertical vector field $\varphi^*s_0^v$ on the subspace $C_n(\mathbb{D})=\varphi^*s_0(C_n(\mathbb{D}))\subset \varphi^*\mathcal{S}_{g,1}$, and the ratio between the two (in the sense of the ratio between sections of a principal $\mathbb{S}^1$-bundle) is given by a map $\theta\colon C_n(\mathbb{D})\to \mathbb{S}^1$; if we replace $\vec v$ by $\theta\cdot \vec v$, then our global vertical vector field extends the canonical one over $\varphi^*s_0(C_n(\mathbb{D}))$. The same construction works in the generalised framework introduced at the end of section \ref{sec:defphi}: this time $\vec v$ is given on each fiber by the formula \[ \vec v(b,x,y)=\frac {d\psi(b)}{dy}(x,y)\cdot \frac{\partial}{\partial x} - \frac {d\psi(b)}{dx}(x,y)\cdot \frac{\partial}{\partial y} \] and again we can modify it so as to agree with the canonical vector field over the section at the boundary. We now sketch an alternative proof of the existence of a unit vertical vector field on $\mathcal{V}_n\to C_n(\mathbb{D})$ extending the canonical one over the section at the boundary.
Let $\vec{\mathfrak{v}}$ be a vector field on $\Sigma_{g,1}$ as in figure \ref{fig:field}: it is orthogonal to the curve $c_1$, parallel to $c_2$, again orthogonal to $c_3$ and so on; moreover if $*\in\Sigma_{g,1}$ denotes the basepoint, then $\vec{\mathfrak{v}}(*)$ is exactly the unit tangent vector at $*$ that is orthogonal to $\partial\Sigma_{g,1}$ and points outwards. Let $\mathbb{V}$ be the space of all vector fields $\vec w$ on $\Sigma_{g,1}$ that satisfy $\vec w(*)=\vec{\mathfrak{v}}(*)$ and that have no zeroes on $\Sigma_{g,1}$ (we say briefly that they are \emph{non-vanishing}). Then $\mathbb{V}\simeq \mathrm{Map}_*(\Sigma_{g,1};\mathbb{S}^1)$ is a disjoint union of infinitely many contractible components. The group $\mathrm{Diff}_{g,1}$ acts on $\mathbb{V}$ through differentials of diffeomorphisms: this action is well-defined thanks to the hypothesis that diffeomorphisms in $\mathrm{Diff}_{g,1}$ restrict to the identity on a neighborhood of the boundary, so that in particular their differential fixes the vector $\vec{\mathfrak{v}}(*)$. There is an induced action of the mapping class group $\Gamma_{g,1}$ on $\pi_0(\mathbb{V})$, and the \emph{framed mapping class group} associated to $\vec{\mathfrak{v}}$ is by definition the stabiliser of $[\vec{\mathfrak{v}}]\in \pi_0(\mathbb{V})$, which is a subgroup $\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}})\hookrightarrow\Gamma_{g,1}$; we denote this inclusion of groups by $\mathfrak{i}$. Consider $\mathfrak{i}^*\mathcal{S}_{g,1}\to B\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}})$, the pull-back of the universal surface bundle along the map $\mathfrak{i}\colon B\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}})\to B\Gamma_{g,1}$.
Using that the connected components of $\mathbb{V}$ are contractible one can construct a unit vertical vector field $\mathfrak{v}$ on $\mathfrak{i}^*\mathcal{S}_{g,1}$ that restricts to $\mathfrak{i}^*s_0^v$ on the section at the boundary $\mathfrak{i}^*s_0(B\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}}))$. The key remark is now that the image of the Birman-Hilden inclusion $\varphi\colon\mathfrak{Br}_{2g+1}\to\Gamma_{g,1}$ lies inside $\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}})$: indeed the vector field $\vec{\mathfrak{v}}$ is preserved by the differential of all Dehn twists about the curves $c_i$, up to isotopy through vector fields in $\mathbb{V}$. Therefore the map $\varphi\colon C_n(\mathbb{D})\to B\Gamma_{g,1}$ factors through $B\Gamma_{g,1}^{fr}(\vec{\mathfrak{v}})$, and we can now pull back the unit vertical vector field $\mathfrak{v}$ over $\mathfrak{i}^*\mathcal{S}_{g,1}$ to a unit vertical vector field $\vec v$ over $\varphi^*\mathcal{S}_{g,1}$ with all the desired properties. \begin{figure} \caption{The vector field $\vec{\mathfrak{v}}$} \label{fig:field} \end{figure} \section{Stable vanishing of $\varphi_*$} Our proof of theorem \ref{thm:STtwisted} consists of two steps. In the first step we reformulate the problem: we replace the map \[ \varphi_*\colon H_k(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})\to H_k(\Gamma_{g,1};\mathcal{H}) \] with the map \[ \varphi_*\colon H_{k+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_{2g+1})\to H_{k+1}(\mathcal{S}_{g,1},B\Gamma_{g,1}). \] Recall that $B\mathfrak{Br}_{2g+1}$ can be seen as a subspace of $\varphi^*\mathcal{S}_{g,1}$ through the section at the boundary. The second map involves only homology with constant coefficients, although the spaces are now more complicated. In the second step we factor the above map through the homology groups \[ H_{k+1}(\mathcal{UV}_{g,1},B\Gamma_{g,1}), \] which are trivial in the stable range. Thus the map $\varphi_*$ is trivial in homology in the stable range.
The strategy of the proof is summarized in the following diagram: \[ \begin{tikzcd}[column sep=2em,row sep=3em] H_k(\mathfrak{Br}_{2g+1};\varphi^*\mathcal{H})\ar[rr,"\varphi_*"]\ar[d,dashed,"\cong"] & & H_k(\Gamma_{g,1};\mathcal{H})\ar[d,dashed,"\cong"]\\ H_{k+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_{2g+1})\ar[rr,"\varphi_*"]\ar[dr,dashed] & & H_{k+1}(\mathcal{S}_{g,1},B\Gamma_{g,1})\\ & H_{k+1}(\mathcal{UV}_{g,1},B\Gamma_{g,1})=0.\ar[ur,"p^v_*"] & \end{tikzcd} \] \subsection{The reformulation of the problem.} The bundle $\mathcal{S}_{g,1}\to B\Gamma_{g,1}$, together with the global section $s_0$, can be seen as a pair of bundles $(\mathcal{S}_{g,1},B\Gamma_{g,1})\to B\Gamma_{g,1}$ with fiber the pair $(\Sigma_{g,1},*)$. There is an associated Serre spectral sequence whose second page contains the homology groups \[ E^2_{p,q}=H_p(B\Gamma_{g,1};H_q(\Sigma_{g,1},*)) \] and whose limit is the homology of the pair $(\mathcal{S}_{g,1},B\Gamma_{g,1})$. Note that the homology group $H_q(\Sigma_{g,1},*)$ is non-trivial only for $q=1$, in which case it is exactly the symplectic representation $\mathcal{H}$ of $\Gamma_{g,1}$. So the second page of the spectral sequence has only one non-vanishing row and therefore coincides with its limit, i.e. \[ H_{p+1}(\mathcal{S}_{g,1},B\Gamma_{g,1})=H_p(B\Gamma_{g,1};\mathcal{H}). \] The whole construction is natural with respect to pullbacks. Let again $n=2g+1$. The natural map $\varphi\colon(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to (\mathcal{S}_{g,1},B\Gamma_{g,1})$ is a map of pairs of bundles, i.e. it covers the map $\varphi\colon B\mathfrak{Br}_n\to B\Gamma_{g,1}$.
The fiber of the pair of bundles $(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to B\mathfrak{Br}_n$ is still the pair $(\Sigma_{g,1}, *)$, so its homology is concentrated in degree one and the corresponding spectral sequence gives again an isomorphism \[ H_{p+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)=H_p(B\mathfrak{Br}_n;\varphi^*\mathcal{H}). \] The induced map between the second pages of the spectral sequences is the map \[ \varphi_*\colon H_k(\mathfrak{Br}_n;\varphi^*\mathcal{H})\to H_k(\Gamma_{g,1};\mathcal{H}) \] appearing in theorem \ref{thm:STtwisted}; the induced map on the limit is the map \[ \varphi_*\colon H_{k+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to H_{k+1}(\mathcal{S}_{g,1},B\Gamma_{g,1}). \] Hence we can study the latter map, thus reducing ourselves to understanding the behaviour of the map of pairs $\varphi\colon (\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to (\mathcal{S}_{g,1},B\Gamma_{g,1})$ in homology with constant coefficients. \subsection{The factorisation through $H_{k+1}(\mathcal{UV}_{g,1},B\Gamma_{g,1})$.} Recall that there is a unit vertical vector field $\vec v$ on $\mathcal{V}_n=\varphi^*\mathcal{S}_{g,1}$ extending the canonical vector field $\varphi^*s_0^v$ on the subspace $B\mathfrak{Br}_n\subset \varphi^*\mathcal{S}_{g,1}$. This means that in the following diagram \[ \begin{tikzcd}[column sep=6em,row sep=3em] \pa{\varphi^*\mathcal{UV}_{g,1},B\mathfrak{Br}_n} \ar[r,"\varphi"]\ar[d,"\varphi^*p^v"] & \pa{\mathcal{UV}_{g,1}, B\Gamma_{g,1}} \ar[d,"p^v"]\\ \pa{\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n} \ar[r,"\varphi"] \ar[ur,dashed] & \pa{\mathcal{S}_{g,1},B\Gamma_{g,1}} \end{tikzcd} \] there is a dashed diagonal arrow lifting the bottom horizontal map, so that the lower right triangle commutes. In particular the map \[ \varphi_*\colon H_{k+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to H_{k+1}(\mathcal{S}_{g,1},B\Gamma_{g,1}) \] factors through the homology group $H_{k+1}\pa{\mathcal{UV}_{g,1}, B\Gamma_{g,1}}$.
Since by theorem \ref{thm:Harer} the inclusion $s_0^v\circ s_0\colon B\Gamma_{g,1}\to \mathcal{UV}_{g,1}$ induces isomorphisms in homology in degrees $\leq \frac 23 g$, we deduce that $H_{k+1}\pa{\mathcal{UV}_{g,1}, B\Gamma_{g,1}}=0$ for $k+1\leq \frac 23 g$, and therefore for $k\leq \frac 23 g -1$ the map \[ \varphi_*\colon H_{k+1}(\varphi^*\mathcal{S}_{g,1},B\mathfrak{Br}_n)\to H_{k+1}(\mathcal{S}_{g,1},B\Gamma_{g,1}) \] is the zero map. This completes the proof of theorem \ref{thm:STtwisted}. The result can be generalised to the case in which we construct a $\Sigma_{g,1}$-bundle over a space $B$ through a map $\psi\colon B\to\mathbb{C}[x,y]$ as in section \ref{sec:defphi}. We obtain a map $\Psi\colon B\to B\Gamma_{g,1}$ that induces the trivial map \[ \Psi_*\colon H_k(B;\Psi^*\mathcal{H})\to H_k(B\Gamma_{g,1};\mathcal{H}) \] in homology in degree $k\leq \frac 23 g-1$. The proof is the same. \section{Torsion property of $H_*(\mathfrak{Br}_n;\varphi^*\mathcal{H})$.} In this section we prove theorem \ref{thm:fourtorsion}. Using the isomorphism \[ H_k(\mathfrak{Br}_n;\varphi^*\mathcal{H})\simeq H_{k+1}(\mathcal{V}_n,C_n(\mathbb{D})) \] we want to prove that the second group is $4$-torsion. On the complex manifold $\mathcal{V}_n$ we consider the holomorphic function $y$. We call $\mathcal{Z}_n\subset\mathcal{V}_n$ the zero locus of $y$: $\mathcal{Z}_n$ is a smooth complex submanifold. Indeed the vector field already considered, \[ \vec v=2y\cdot \frac{\partial}{\partial x}+\pa{\sum_{i=1}^n\prod_{j\neq i}(x-z_j)}\cdot\frac{\partial}{\partial y}, \] is non-zero on the whole $\mathcal{V}_n$; on $\mathcal{Z}_n$ its $x$-component $2y$ vanishes, so its $y$-component must be non-zero there; this witnesses the non-vanishing of $dy|_{\mathcal{V}_n}$ on $\mathcal{Z}_n$. As $\mathcal{Z}_n$ is the smooth zero-locus of a holomorphic function on the complex manifold $\mathcal{V}_n$, the normal bundle of $\mathcal{Z}_n$ in $\mathcal{V}_n$ must be trivial.
The space $\mathcal{Z}_n$ is homeomorphic to the space \[ C_{n-1,1}(\mathbb{D})=\set{(\set{z_1,\dots, z_{n-1}},x)\in C_{n-1}(\mathbb{D})\times \mathbb{D}\;|\; x\neq z_i\;\forall 1\leq i\leq n-1}, \] which we call the \emph{configuration space} of $n-1$ \emph{black} points and one \emph{white} point in the disc. Indeed if $y=0$, then the equation $y^2=\prod_{i=1}^n(x-z_i)$ defining $\mathcal{V}_n$ tells us that $x$ must coincide with one, and exactly one, of the numbers $z_i$; hence a point of $\mathcal{Z}_n$ is exactly an unordered configuration of $n$ points in $\mathbb{D}$, one of which is special (we say it is \emph{white}) because it coincides with $x$. We call $\mathcal{T}_n$ the (open) complement of $\mathcal{Z}_n$ in $\mathcal{V}_n$. We call $N(\mathcal{Z}_n)$ a small, closed tubular neighborhood of $\mathcal{Z}_n$ in $\mathcal{V}_n$. Since the normal bundle of $\mathcal{Z}_n$ in $\mathcal{V}_n$ is trivial, we have $N(\mathcal{Z}_n)\cong \mathcal{Z}_n\times\overline{\mathbb{D}}$, and $N(\mathcal{Z}_n)\cap\mathcal{T}_n\simeq \partial N(\mathcal{Z}_n)\cong\mathcal{Z}_n\times\mathbb{S}^1$. By construction the copy of $C_n(\mathbb{D})$ contained in $\mathcal{V}_n$, i.e. the image of the section $\varphi^*s_0$, is contained in $\mathcal{T}_n\setminus N(\mathcal{Z}_n)$. We have a Mayer-Vietoris sequence \[ \dots \rightarrow H_k\pa{ N(\mathcal{Z}_n)\cap\mathcal{T}_n} \rightarrow H_k\pa{\mathcal{Z}_n}\oplus H_k\pa{\mathcal{T}_n,C_n(\mathbb{D})} \rightarrow H_k\pa{\mathcal{V}_n,C_n(\mathbb{D})} \rightarrow \dots \] from which we derive the following lemma.
\begin{lem} \label{lemma:MVsuTZ} There is a long exact sequence \[ \begin{array}{c} \dots \rightarrow H_k\pa{C_{n-1,1}(\mathbb{D})}\oplus H_{k-1}\pa{C_{n-1,1}(\mathbb{D})}\otimes H_1(\mathbb{S}^1) \overset{\iota}{\rightarrow}\\ \overset{\iota}{\rightarrow} H_k\pa{C_{n-1,1}(\mathbb{D})}\oplus H_k\pa{\mathcal{T}_n,C_n(\mathbb{D})} \rightarrow H_k\pa{\mathcal{V}_n,C_n(\mathbb{D})} \rightarrow \dots \end{array} \] \end{lem} Our goal is to extract information about the homology of $\mathcal{V}_n$ from the other homology groups and from the behaviour of the maps in the previous sequence. In particular we need some results about the space $\mathcal{T}_n$. There is a double covering map $\mathcal{S}q\colon\mathcal{T}_n\to C_{n,1}(\overline{\mathbb{D}})$, where \[ C_{n,1}(\overline{\mathbb{D}})=\set{(\set{z_1,\dots, z_n},x)\in C_n(\mathbb{D})\times \overline{\mathbb{D}}\;|\; x\neq z_i\;\forall 1\leq i\leq n}. \] The map $\mathcal{S}q$ is given by forgetting the value of $y$ and interpreting $x$ as the \emph{white}, distinguished point. We have introduced $C_{n,1}(\overline{\mathbb{D}})$ because in $\mathcal{T}_n$ it may happen that $x\in\mathbb{S}^1$, whereas the numbers $z_i$ always lie in the interior of the unit disc; nevertheless the inclusion $C_{n,1}(\mathbb{D})\subset C_{n,1}(\overline{\mathbb{D}})$ is a homotopy equivalence. The 2-fold covering $\mathcal{S}q\colon \mathcal{T}_n\rightarrow C_{n,1}(\overline{\mathbb{D}})$ has a nontrivial deck transformation $\varepsilon\colon \mathcal{T}_n\rightarrow \mathcal{T}_n$, which corresponds to changing the sign of $y$. \begin{lem} \label{lemma:varepsilon=id} The map $\varepsilon$ is homotopic to the identity of $\mathcal{T}_n$. \end{lem} \begin{proof} First we define a homotopy $H_{\varepsilon}\colon C_{n,1}(\overline{\mathbb{D}})\times[0,1]\rightarrow C_{n,1}(\overline{\mathbb{D}})$.
For $p\in C_{n,1}(\overline{\mathbb{D}})$ and $t\in [0,1]$ we set $H_{\varepsilon}(p,t)= e^{2\pi it}\cdot p$: that is, at time $t$ we rotate the configuration $p$ by an angle $2\pi t$ counterclockwise. Thus $H_\varepsilon$ is a homotopy from the identity of $C_{n,1}(\overline{\mathbb{D}})$ to, again, the identity of $C_{n,1}(\overline{\mathbb{D}})$. We lift this homotopy to a homotopy $\tilde H_{\varepsilon}\colon \mathcal{T}_n\times [0,1]\rightarrow \mathcal{T}_n$, starting from the identity of $\mathcal{T}_n$ at time $t=0$. At time $t=1$ any point $p\in \mathcal{T}_n$ is mapped to a point $p'$ lying over the same point of $C_{n,1}(\overline{\mathbb{D}})$, i.e. $\mathcal{S}q(p)=\mathcal{S}q(p')$. During the homotopy $\tilde H_{\varepsilon}$ the complex number $y$ associated to $p$ is multiplied by $e^{\pi int}$ at time $t$, since its square is multiplied by $e^{2\pi int}$. So at time $t=1$, the value of $y$ has been multiplied by $e^{\pi in}=-1$, i.e. $p'=\varepsilon(p)$: here we have used that $n$ is odd. \end{proof} We get the following corollary in homology: \begin{cor} \label{cor:twoproperties} The map $\mathcal{S}q_*\colon H_*(\mathcal{T}_n)\rightarrow H_*\pa{C_{n,1}(\overline{\mathbb{D}})}$ has the following properties: \begin{itemize} \item every element in the kernel of $\mathcal{S}q_*$ has order 2 in $H_*\pa{\mathcal{T}_n}$; \item every element of the form $2c$ with $c\in H_*\pa{C_{n,1}(\overline{\mathbb{D}})}$ is in the image of $\mathcal{S}q_*$. \end{itemize} The same two properties hold for the transfer homomorphism $\mathcal{S}q^!\colon H_*(C_{n,1}(\overline{\mathbb{D}}))\rightarrow H_*(\mathcal{T}_n)$. \end{cor} \begin{proof} We know that $\varepsilon_*$ is the identity map on $H_*\pa{\mathcal{T}_n}$, by lemma \ref{lemma:varepsilon=id}; then $\mathcal{S}q_*\circ\mathcal{S}q^!$ is multiplication by 2, and $\mathcal{S}q^!\circ\mathcal{S}q_*$ is the sum of the identity and $\varepsilon_*$, so it is also multiplication by 2. The result follows immediately.
\end{proof} There is a copy of $C_n(\mathbb{D})$ embedded in $C_{n,1}(\overline{\mathbb{D}})$, given by selecting $1\in\mathbb{S}^1$ as white point: this is exactly the image under $\mathcal{S}q$ of the copy of $C_n(\mathbb{D})$ embedded in $\mathcal{T}_n$ along $\varphi^*s_0$. We get a diagram of split short exact sequences \[ \begin{tikzcd}[column sep=3em, row sep=3em] H_k(C_n(\mathbb{D}))\ar[r,"s"] \ar[d, equal] & H_k(\mathcal{T}_n)\ar[r] \ar[d, "\mathcal{S}q_*"] & H_k(\mathcal{T}_n,C_n(\mathbb{D})) \ar[d, "\mathcal{S}q_*"] \\ H_k(C_n(\mathbb{D}))\ar[r,"s"] & H_k(C_{n,1}(\overline{\mathbb{D}}))\ar[r] & H_k(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D})). \end{tikzcd} \] The splitting is due to the fact that both $\mathcal{T}_n$ and $C_{n,1}(\overline{\mathbb{D}})$ retract onto $C_n(\mathbb{D})$: the retraction is given by forgetting all data but the positions of the $z_i$'s, so the horizontal maps $s$ in the left square admit retractions, which are moreover compatible with the vertical maps. Therefore the properties listed in corollary \ref{cor:twoproperties} hold also for the map $\mathcal{S}q_*\colon H_*(\mathcal{T}_n,C_n(\mathbb{D}))\to H_*(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D}))$. Let $\mu\colon C_{n-1,1}(\mathbb{D})\times\mathbb{S}^1\to C_{n,1}(\mathbb{D})$ be the following map: \[ \mu\pa{\pa{\set{z_1,\dots,z_{n-1}},x},\theta}=\pa{\set{z_1,\dots,z_{n-1},x+\delta\theta},x}, \] where \[ \delta=\delta(\set{z_1,\dots,z_{n-1}},x)=\frac 12\min\pa{\set{1-\abs{x}}\cup \set{\abs{z_i-x}\,|\,1\leq i\leq n-1}}>0. \] In words, $\mu$ transforms a configuration of one white point $x$ and $n-1$ black points $z_1,\dots, z_{n-1}$ into a configuration with one more black point, by adding a new black point near $x$, in the direction of $\theta$.
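The factor $\frac 12$ in the definition of $\delta$ guarantees that $\mu$ is well defined: the new black point stays in the open disc and remains distinct from the white point and from the other black points. A numerical sanity check with arbitrary sample data (the configuration below is our own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
z = 0.6 * (rng.random(n - 1) + 1j * rng.random(n - 1))   # n-1 black points
x = 0.1 + 0.1j                                           # the white point

delta = 0.5 * min([1 - abs(x)] + [abs(zi - x) for zi in z])
assert delta > 0

for t in np.linspace(0.0, 2 * np.pi, 50):
    new_black = x + delta * np.exp(1j * t)
    assert abs(new_black) < 1                    # the new point stays in the open disc
    assert all(abs(new_black - zi) >= 0.99 * delta for zi in z)  # away from the z_i
    assert np.isclose(abs(new_black - x), delta)                 # distinct from x
```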
If we regard $\mathbb{S}^1$ as a homotopy equivalent replacement of $C_{1,1}$, then $\mu$ is up to homotopy a special case of the multiplication $\mu\colon C_{h,1}\times C_{k,1}\to C_{h+k,1}$ making $\coprod_{k\geq 0} C_{k,1}$ into an $H$-space; we will not need this general construction, which was first described in \cite{Vershinin}. We also recall the following result, which can be found in \cite{Vassiliev}. \begin{lem} \label{lem:Vassiliev} Let $\nu$ be the composition \[ H_{k-1}\pa{C_{n-1,1}(\mathbb{D})}\otimes H_1(\mathbb{S}^1)\subset H_k(C_{n-1,1}(\mathbb{D})\times\mathbb{S}^1) \overset{\mu_*}{\rightarrow} H_k(C_{n,1}(\mathbb{D})) \simeq H_k(C_{n,1}(\overline{\mathbb{D}})). \] Then $\nu$ is an isomorphism of $H_{k-1}\pa{C_{n-1,1}(\mathbb{D})}\otimes H_1(\mathbb{S}^1)$ with the kernel of the retraction $H_k(C_{n,1}(\overline{\mathbb{D}}))\to H_k(C_n(\mathbb{D}))$; this kernel is also isomorphic to the group $H_k(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D}))$. \end{lem} The following lemma analyses the behaviour of the map $\iota$ appearing in the Mayer-Vietoris sequence of lemma \ref{lemma:MVsuTZ}. \begin{lem} \label{lemma:behaviourofiota} Let $\iota$ be the map in the Mayer-Vietoris sequence of lemma \ref{lemma:MVsuTZ}. We consider the restriction of $\iota$ to the two summands of its domain, and its projection to the two summands of its codomain: \begin{itemize} \item $\iota$ induces an isomorphism $H_k(C_{n-1,1}(\mathbb{D}))\rightarrow H_k(C_{n-1,1}(\mathbb{D}))$; \item $\iota$ induces the zero map $H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)\rightarrow H_k(C_{n-1,1}(\mathbb{D}))$; \item $\iota$ induces the following map $H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)\rightarrow H_k(\mathcal{T}_n,C_n(\mathbb{D}))$ \[ H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1) \overset{\nu}{\rightarrow} H_k(C_{n,1}(\overline{\mathbb{D}})) \overset{\mathcal{S}q^!}{\rightarrow} H_k(\mathcal{T}_n) \rightarrow H_k(\mathcal{T}_n,C_n(\mathbb{D})).
\] \end{itemize} \end{lem} \begin{proof} The first two points of the statement come from the behaviour of the map $\iota\colon H_k(C_{n-1,1}(\mathbb{D})\times \mathbb{S}^1)\rightarrow H_k(C_{n-1,1}(\mathbb{D})\times \overline{\mathbb{D}})$ on K\"unneth summands. For the third point, recall that $C_{n-1,1}(\mathbb{D})\times \mathbb{S}^1$ represents $\partial N(\mathcal{Z}_n)$, where $N(\mathcal{Z}_n)\cong\mathcal{Z}_n\times\overline{\mathbb{D}}$ is a tubular neighborhood of $\mathcal{Z}_n\simeq C_{n-1,1}(\mathbb{D})$ in $\mathcal{V}_n$. Note that the map $\mathcal{S}q\colon \mathcal{T}_n\rightarrow C_{n,1}(\overline{\mathbb{D}})$ extends to a map (which is no longer a covering) $\mathcal{S}q\colon \mathcal{V}_n\rightarrow C_n(\mathbb{D})\times \overline{\mathbb{D}}$: this map again consists of forgetting $y$. Let $\mathcal{Z}_n'\subset C_n(\mathbb{D})\times\overline{\mathbb{D}}$ be the subspace where the white point (in $\mathbb{D}$) coincides with one of the $n$ black points; then again $\mathcal{Z}_n'\simeq C_{n-1,1}(\mathbb{D})$ and $\mathcal{Z}_n'$ has a small, closed tubular neighborhood $N(\mathcal{Z}_n')\simeq \mathcal{Z}_n'\times\overline{\mathbb{D}}\subset C_n(\mathbb{D})\times\overline{\mathbb{D}}$. We can choose $N(\mathcal{Z}_n)$ to be $\mathcal{S}q^{-1}(N(\mathcal{Z}_n'))\subset\mathcal{V}_n$; the map $\mathcal{S}q\colon N(\mathcal{Z}_n)\to N(\mathcal{Z}_n')$ is a 2-fold branched covering, branched exactly over $\mathcal{Z}_n'$, which is homeomorphically covered by $\mathcal{Z}_n$. The restriction $\mathcal{S}q\colon \partial N(\mathcal{Z}_n)\to\partial N(\mathcal{Z}_n')$ is a genuine 2-fold covering, and agrees with the projections of these boundaries of tubular neighborhoods on $\mathcal{Z}_n$ and $\mathcal{Z}_n'$ respectively (the projection of $N(\mathcal{Z}_n)$ onto $\mathcal{Z}_n$ can be chosen to be the lift of the projection of $N(\mathcal{Z}_n')$ onto $\mathcal{Z}_n'\cong\mathcal{Z}_n$).
So we have a commutative diagram \[ \begin{tikzcd}[column sep=3em, row sep=3em] \partial N(\mathcal{Z}_n)\simeq \mathcal{Z}_n\times\mathbb{S}^1 \ar[r,"\pi_N"] \ar[d,"\mathcal{S}q"] & \mathcal{Z}_n\cong C_{n-1,1}(\mathbb{D})\ar[d,equal]\\ \partial N(\mathcal{Z}_n')\simeq \mathcal{Z}_n'\times \mathbb{S}^1 \ar[r,"\pi_{N'}"] & \mathcal{Z}_n'\cong C_{n-1,1}(\mathbb{D}) \end{tikzcd} \] In particular the composition $\pi_{N'}\circ\mathcal{S}q$ is equal, up to identifying both $\mathcal{Z}_n$ and $\mathcal{Z}_n'$ with $C_{n-1,1}(\mathbb{D})$, to the map $\pi_{N}$, and in homology we can express the Gysin map $\pi_{N}^!$ as $\mathcal{S}q^!\circ\pi_{N'}^!$. We now observe that $\pi_N^!\colon H_{k-1}(\mathcal{Z}_n)\to H_k(\partial N(\mathcal{Z}_n))$ is exactly the inclusion of the summand $H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)\subset H_k(\partial N(\mathcal{Z}_n))$. The map $\iota$ is the composition of this inclusion with the map $H_k(\partial N(\mathcal{Z}_n))\to H_k(\mathcal{T}_n)$ induced by $\partial N(\mathcal{Z}_n)\subset\mathcal{T}_n$, and then the natural map $H_k(\mathcal{T}_n)\to H_k(\mathcal{T}_n,C_n(\mathbb{D}))$. On the other hand $\pi_{N'}^!\colon H_{k-1}(\mathcal{Z}_n')\to H_k(\partial N(\mathcal{Z}_n'))$ is the inclusion \[ H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)\subset H_k(\partial N(\mathcal{Z}_n'))\simeq H_k(C_{n-1,1}(\mathbb{D})\times\mathbb{S}^1); \] and the map $\nu\colon H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)\to H_k(C_{n,1}(\mathbb{D}))$ is exactly this inclusion, followed by the map $\mu_*\colon H_k(\partial N(\mathcal{Z}_n'))\to H_k(C_{n,1}(\overline{\mathbb{D}}))$. \end{proof} We are now ready to prove theorem \ref{thm:fourtorsion}.
We pick any class $a\in H_k(\mathcal{V}_n,C_n(\mathbb{D}))$, and map it to $H_{k-1}(\partial N(\mathcal{Z}_n))$ along the long exact sequence of lemma \ref{lemma:MVsuTZ}; we get some class $b+c$, where $b\in H_{k-1}(C_{n-1,1}(\mathbb{D}))$ and $c\in H_{k-2}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)$. Then $\iota(b+c)$ must be zero, hence its first component, lying in $H_{k-1}(C_{n-1,1}(\mathbb{D}))$, must be zero; therefore $b=0$ by the first two points of lemma \ref{lemma:behaviourofiota}. Similarly $\iota(c)=0$, so also $\mathcal{S}q_*\circ\iota(c)=0\in H_{k-1}(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D}))$; by the third point of lemma \ref{lemma:behaviourofiota} this is equal to the image of $c$ under the map \[ \begin{array}{c} H_{k-2}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1) \overset{\nu}{\rightarrow} H_{k-1}(C_{n,1}(\overline{\mathbb{D}})) \rightarrow\\ \rightarrow H_{k-1}(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D})) \overset{\cdot 2}{\rightarrow} H_{k-1}(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D})). \end{array} \] As the composition of the first two maps is an isomorphism (see lemma \ref{lem:Vassiliev}) and multiplication by $2$ commutes with any map of abelian groups, we have $2c=0$. Therefore $2a$ is in the kernel of the map $H_k(\mathcal{V}_n,C_n(\mathbb{D}))\rightarrow H_{k-1}(\partial N(\mathcal{Z}_n))$, so it is in the image of the map $H_k(C_{n-1,1}(\mathbb{D}))\oplus H_k(\mathcal{T}_n,C_n(\mathbb{D}))\rightarrow H_k(\mathcal{V}_n,C_n(\mathbb{D}))$. Let $d+e\mapsto 2a$, where $d\in H_k(C_{n-1,1}(\mathbb{D}))$ and $e\in H_k(\mathcal{T}_n,C_n(\mathbb{D}))$: we now want to show that $2d+2e$ is in the image of $\iota$. Since $\iota(d+0)=d+h$ for some $h\in H_k(\mathcal{T}_n,C_n(\mathbb{D}))$, we have $\iota(2d+0)=2d+2h$, so it suffices to find $i\in H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1)$ such that $\iota(i)=2e-2h$.
As $2e-2h=2(e-h)$ is twice an element in $H_k(\mathcal{T}_n,C_n(\mathbb{D}))\subset H_k(\mathcal{T}_n)$, by corollary \ref{cor:twoproperties} there is an element $j+i\in H_k(C_{n,1}(\overline{\mathbb{D}}))$ such that $\mathcal{S}q^!(j+i)=2e-2h$, for some $j\in H_k(C_n(\mathbb{D}))$ and some $i\in H_{k-1}(C_{n-1,1}(\mathbb{D}))\otimes H_1(\mathbb{S}^1) \simeq H_k(C_{n,1}(\overline{\mathbb{D}}),C_n(\mathbb{D}))$ (using again lemma \ref{lem:Vassiliev}). We now observe that the composition \[ H_k(C_n(\mathbb{D})) \rightarrow H_k(C_{n,1}(\overline{\mathbb{D}})) \overset{\mathcal{S}q^!}{\rightarrow} H_k(\mathcal{T}_n) \] is equal to the composition \[ H_k(C_n(\mathbb{D})) \overset{\cdot 2}{\rightarrow} H_k(C_n(\mathbb{D})) \subset H_k(\mathcal{T}_n), \] and in particular its image lies in the summand $H_k(C_n(\mathbb{D})) \subset H_k(\mathcal{T}_n)$. Indeed the covering $\mathcal{S}q$ is the trivial covering over $C_n(\mathbb{D})\subset C_{n,1}(\overline{\mathbb{D}})$, with sections $\varphi^*s_0\colon C_n(\mathbb{D})\rightarrow \mathcal{T}_n$ and $\varepsilon\circ \varphi^*s_0\colon C_n(\mathbb{D})\rightarrow \mathcal{T}_n$, and these sections are homotopic as maps $C_n(\mathbb{D})\rightarrow \mathcal{T}_n$ by lemma \ref{lemma:varepsilon=id}. Therefore we must have $\mathcal{S}q^{!}(j)=0$ and we may assume $j=0$. It follows that $\iota(i)=2e-2h$, so the class $2d+2e$ is in the image of $\iota$ and must therefore also be in the kernel of the map $H_k(C_{n-1,1}(\mathbb{D}))\oplus H_k(\mathcal{T}_n,C_n(\mathbb{D}))\rightarrow H_k(\mathcal{V}_n,C_n(\mathbb{D}))$: this exactly means that $4a=0\in H_k(\mathcal{V}_n,C_n(\mathbb{D}))$, and theorem \ref{thm:fourtorsion} now follows from the isomorphism $H_k(\mathcal{V}_n,C_n(\mathbb{D}))\simeq H_{k-1}(\mathfrak{Br}_n;\varphi^*\mathcal{H})$. {} \end{document}
\begin{document} \title{Online Sparse Subspace Clustering} \begin{abstract} This paper focuses on the sparse subspace clustering problem, and develops an online algorithmic solution to cluster data points on-the-fly, without revisiting the whole dataset. The strategy involves an online solution of a sparse representation (SR) problem to build a (sparse) dictionary of similarities where points in the same subspace are considered ``similar,'' followed by a spectral clustering based on the obtained similarity matrix. When the SR cost is strongly convex, the online solution converges to within a neighborhood of the optimal time-varying batch solution. A dynamic regret analysis is performed when the SR cost is not strongly convex. \end{abstract} \begin{keywords} Subspace clustering; sparse representation; time-varying optimization; online algorithms. \end{keywords} \section{Introduction and Problem Setup} \label{sec:intro} Modern data processing tasks aim to extract information from datasets or signals on graphs -- examples include identification of trends or patterns, learning of dynamics and data structures, or methods for comprehensive awareness of the underlying networks or systems generating the data~\cite{Becker18SPMag,Slavakis14SPMag}. In this domain, the present paper focuses on data processing methods for streams of (possibly high-dimensional) data, with particular emphasis on a setting where underlying computational constraints require one to process data on-the-fly, and with limited access to stored data. One prominent task is clustering, which partitions data points based on well-defined metrics modeling similarities (or distances) among data points, or capturing underlying data structures. For example, spectral clustering groups data points based on minimizing cuts of a similarity graph \cite{Lux}. In particular, subspace clustering builds a similarity graph where points in the same subspace are considered ``similar''. 
It does this by finding either a low-rank representation (LRR) \cite{LRR,Shen} or sparse representation (SSC) \cite{Vidal,Becker} of the data such that the data points are represented as linear combinations of other data points in the same subspace. In this paper, we will consider an \emph{online} SSC methodology, where underlying computational considerations prevent one from solving pertinent optimization problems associated with a given set of data~\cite{Vidal} before a new datum arrives. To outline the problem concretely, consider $N$-dimensional data points $\{x_{t}, t \in \mathbb{N}\}$ sequentially arriving at times $ t\cdot h$, with $h > 0$ a given interval\footnote{\emph{Notation}: hereafter, $(\cdot)^T$ denotes transposition. For a given vector $x \in \mathbb{R}^N$ or matrix $X \in \mathbb{R}^{N\times M}$, $\|x\|$ and $\|X\|$ refer to a generic norm, and $\vertiii{X}$ the spectral norm. If $X_{ij}$ is the $(i,j)$ entry of $X$, then $\|X\|_F^2=\sum_{i,j} X_{ij}^2$ and $\|X\|_1 = \sum_{i,j} |X_{ij}|$. The composition of two operators is denoted by $\circ$. Write $f(n)=\mathcal{O}(g(n))$ to denote that for sufficiently large $n$, $\exists c>0$ such that $|f(n)| \le c |g(n)|$. }. Assume that the data points lie in (or in the neighbourhood of) $S$ \emph{subspaces} $\{\mathcal S_i\}_{i=1}^S \subset \mathbb R^N$, with $\text{dim}(\mathcal S_i)=d_i$ for each $i$. This paper studies the problem of repeatedly applying SSC to all observed data up to a given time, $\{x_{j}, j \le t\}$ with $\bar{X}_t = [x_{1}, \ldots, x_t]$. 
SSC is a two-step approach \cite{Vidal}: first, step [S1], based on the self-expressiveness property, a sparse representation (SR) problem is solved to identify the (sparse) coefficients $\{\bar{c}_j\}_{j = 1}^t$ so that $x_j = \bar{X}_t \bar{c}_j$ for all $j = 1, \ldots, t$; that is, data points are represented as linear combinations of data points in the same subspace (and we force the $j^\text{th}$ component of $\bar{c}_j$ to be $0$ to exclude the trivial solution). Second, step [S2], apply spectral clustering based on a similarity matrix $W := |\bar{C}_t| + |\bar{C}_t^T|$, where $\bar{C}_t := [\bar{c}_1, \ldots, \bar{c}_t]$ and $|\cdot|$ is taken entry-wise. However, this approach has two drawbacks in the streaming setting: \begin{enumerate}[wide, labelwidth=!, labelindent=6pt] \setlength\itemsep{.3em} \setlength\parskip{0pt} \setlength\parsep{0pt} \item[(d1)] The dimensions of $\bar{X}_t$ and $\bar{C}_t$ grow with time, thus increasing the complexity of the associated SR and spectral clustering tasks; and, \item[(d2)] Due to underlying computational complexity considerations, steps [S1] and [S2] might not be executed to completion within a time interval $h$ (i.e., before a new datum arrives). \end{enumerate} Given (d1)-(d2), we address the problem of developing online algorithmic solutions to carry out steps [S1]-[S2] at each time $t$, based on a \emph{given computational budget}. The first step towards this goal involves the processing of data points using a ``sliding window'' $X_t:=[x_{t-T+1}, \ldots, x_t]\in \mathbb R ^{N\times T}$ of length $T$ (with $T$ determined by the computational budget, as explained later in the paper). 
Step [S1] is ideally carried out at time $t$ by solving the following SR problem: \begin{align} \label{eq:srt} C_t^* \in \argmin_{C\in\mathbb R^{T \times T}}& \hspace{.3cm} F_t(C)\equiv\|C\|_1+\frac{\lambda_e}{2}\|X_t-X_tC\|_{\text{F}}^2 \tag{SR$_t$}\\ \text{s.t.}& \hspace{.3cm} \text{diag}(C)=0 \notag \end{align} with $\lambda_e > 0$ a given tuning parameter. If $\lambda_e$ is too small, in particular if $\lambda_e\le \|\textrm{vec}(X_t^T X_t)\|_{\infty}^{-1}$, then the minimizer $C_t^*$ may have all-zero columns, which is not informative; hence we always choose $\lambda_e$ sufficiently large. Solving (SR$_t$) \emph{to convergence} within a time interval $h$ might not be possible for a given computational budget, especially for streams of high-dimensional vectors over a large window $T$. Section~\ref{sec:sparse} will address the design of \emph{online} algorithms for the case where only one algorithmic step can be performed before a new datum $x_t$ arrives (the case of multiple steps follows easily). With the minimizer (or approximate solution) $C_t^*$, one can compute the matrix $W_t=|C_t^*|+|C_t^*|^T$. Interpreting $W_t$ as the ``similarity'' matrix of a graph, one can compute the graph Laplacian $L_t = D_t - W_t$, where the degree matrix $D_t$ is a diagonal matrix obtained by summing the rows of $W_t$. The graph Laplacian is then normalized in one of two possible ways: as the symmetric graph Laplacian, $L_{sym,t}=D_t^{-1/2}L_tD_t^{-1/2}$, or as the random walk graph Laplacian, $L_{rw,t} = D_t^{-1} L_t$. We then compute the $S$ trailing eigenvectors of the normalized graph Laplacian and, viewing these eigenvectors as columns, we cluster their rows in $\mathbb R ^S$ into $S$ clusters using the k-means algorithm; see, e.g., \cite{Lux} for details on spectral clustering. This clustering is then applied to the original data points. Section~\ref{sec:spectral} will elaborate further on this step. 
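For concreteness, the spectral steps just described can be sketched as follows (an illustrative numpy sketch, not the paper's code; function and variable names are ours, and the final k-means step on the embedding rows is omitted; node degrees are assumed positive):

```python
import numpy as np

def spectral_embedding(C, S):
    """Sketch of step [S2]: build W = |C| + |C|^T, form the symmetric
    normalized Laplacian, and return its S trailing eigenvectors
    (their rows are then clustered with k-means)."""
    W = np.abs(C) + np.abs(C).T            # similarity matrix W_t
    d = W.sum(axis=1)                      # node degrees (assumed > 0)
    L = np.diag(d) - W                     # unnormalized Laplacian L_t
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt    # L_{sym,t}
    vals, vecs = np.linalg.eigh(L_sym)     # eigenvalues in ascending order
    return vecs[:, :S]                     # S trailing eigenvectors as columns
```

On a toy coefficient matrix with two disconnected pairs of points, the embedding rows of points in the same component coincide, which is what k-means then exploits.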
\section{Online Sparse Representation} \label{sec:sparse} The proximal gradient descent algorithm and its accelerated version~\cite{Beck} have rigorous convergence guarantees and can be applied to problems of the form \eqref{eq:srt}, which we now detail. Let $f_t (C) = \frac{\lambda_e}{2}\|X_t-X_tC\|^2_F$ for brevity, and notice that $\nabla f_t(C)=\lambda_eX_t^T(X_tC-X_t)$, so $\nabla f_t$ is Lipschitz continuous with constant $M_t = \lambda_e\vertiii{X_t^TX_t}$. Let $n$ be the iteration index of the algorithm, and let $\gamma<\frac{2}{M_t}$. Then the (batch) proximal gradient descent algorithm, used to solve~\eqref{eq:srt} at time $t$, involves the following iterations for $n = 1, 2, \dots$ until convergence: \begin{equation} \label{eq:prox_batch} C_{n+1}=\text{prox}_{\gamma\|\cdot\|_1,\text{diag}(\cdot)=0}\circ(I-\gamma\nabla f_t)(C_n) \end{equation} where $\text{prox}_{g,\mathcal{X}}(z)=\text{argmin}_{x\in \mathcal{X}}g(x)+\frac{1}{2}\|x-z\|^2$ is the proximal operator defined over a closed convex set $\mathcal{X}$ and for a function $g$. Convergence of~\eqref{eq:prox_batch} to a minimizer $C_t^*$ is shown in, e.g.,~\cite{Combettes}. Furthermore, $F_t(C_n)-F_t(C_t^*)\rightarrow 0$ by the continuity of $F_t$; see Theorem 10.21 in \cite{Beck} for the rate. Consider now the case where only one iteration~\eqref{eq:prox_batch} can be performed per time interval $h$ (see Remark~\ref{rmk:severalIterations} for the extension to multiple steps). Then, an \emph{online} implementation of the proximal gradient descent algorithm involves the sequential execution of the following step at each time $t$: \begin{equation} \label{eq:prox_online} C_{t+1}=\underbrace{\text{prox}_{\gamma_t\|\cdot\|_1,\text{diag}(\cdot)=0}}_{P_t}\circ\underbrace{(I-\gamma_t\nabla f_t)}_{G_t}(C_t) \end{equation} where the coefficient $\gamma_t$ is selected so that $\gamma_t<\frac{2}{M_t}$. The difference between \eqref{eq:prox_batch} and \eqref{eq:prox_online} is that $f_t$ changes per iteration in the latter. 
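A single step of the iteration in~\eqref{eq:prox_online} admits a simple closed form, since the proximal operator separates into soft-thresholding followed by zeroing the diagonal. The following is a minimal numpy sketch of one such step (our own illustrative code, with hypothetical names, not the paper's implementation):

```python
import numpy as np

def online_prox_step(C, X, lam_e, gamma):
    """One online proximal-gradient step: gradient step on
    f_t(C) = (lam_e/2)||X - X C||_F^2 (the operator G_t), then the prox
    of gamma*||.||_1 restricted to {diag(C) = 0} (the operator P_t)."""
    grad = lam_e * X.T @ (X @ C - X)                      # gradient of f_t at C
    Z = C - gamma * grad                                  # gradient step G_t
    Z = np.sign(Z) * np.maximum(np.abs(Z) - gamma, 0.0)   # soft-thresholding
    np.fill_diagonal(Z, 0.0)                              # enforce diag(C) = 0
    return Z
```

With $X_t$ held fixed and $\gamma = 1/M_t$, repeating this step reproduces the batch iteration~\eqref{eq:prox_batch}, and the objective $F_t$ is nonincreasing along the iterates.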
The goal is to demonstrate that the online algorithm~\eqref{eq:prox_online} can \emph{track} the sequence of optimizers $\{C_t^*\}$. In the following, the performance of the online algorithm~\eqref{eq:prox_online} is investigated in two cases: \noindent i) The cost of~\eqref{eq:srt} is \emph{strongly convex} for each $t$. In this case, we will derive bounds for $\|C_t-C_t^*\|$, where $C_t^*$ is the \textit{unique} minimizer of~\eqref{eq:srt} for each $t$. \noindent ii) The cost of~\eqref{eq:srt} is \emph{not} strongly convex. In this case, we will derive dynamic regret bounds. Before proceeding, to capture the variability of the clustering solutions, define $\sigma_t = \|C_{t+1}^*-C_t^*\|_F$~\cite{Simonetto}. As expected, it will be shown that high variability leads to poor tracking performance. The following assumption is then introduced. \begin{assumption} The matrix $X_t^TX_t$ is positive definite for each time $t$ (e.g., $N>T$ and $X_t$ is full rank). Let $m_t>0$ be $\lambda_e$ times the smallest eigenvalue of $X_t^TX_t$. \end{assumption} By inspection, the $m_t$ of Assumption 1 is the strong convexity constant of $f_t$. Based on this assumption, the first result is stated next, where $\|\cdot\|$ is taken to be the Frobenius norm. \begin{theorem} Under Assumption 1, \begin{equation} \forall t \ge 1, \|C_t-C_t^*\|\leq \Tilde{L}_{t-1}\left(\|C_0-C_0^*\|+\sum_{\tau=0}^{t-1} \frac{\sigma_{\tau}}{\Tilde{L}_{\tau}}\right) \end{equation} where $L_t=\max\{|1-\gamma_tm_t|,|1-\gamma_tM_t|\}$, $\Tilde{L}_t=\prod_{\tau=0}^t L_{\tau}$. \end{theorem} \begin{proof} Define $G_t$ and $P_t$ as in \eqref{eq:prox_online}, so $C_{t+1}=(P_t\circ G_t) C_t$. Since $f_t$ is quadratic, $\nabla f_t(C)-\nabla f_t(C')=\lambda_e X_t^TX_t(C-C')$, so for any $C,C'\in\mathbb R^{T\times T}$, \begin{align*} \|G&_t(C)-G_t(C')\|\\ &= \|(I-\gamma_t \lambda_e X_t^TX_t)(C-C')\|\\ &\leq \vertiii{I-\gamma_t \lambda_e X_t^TX_t}\,\|C-C'\|\\ &\leq L_t\|C-C'\|, \end{align*} where the last inequality holds because the eigenvalues of $\lambda_e X_t^TX_t$ lie in $[m_t,M_t]$. Also, by optimality, $C_t^*$ is a fixed point, so $C_t^* = P_tG_tC_t^*$. 
Therefore, one has that $\|C_{t+1}-C_t^*\| = \|P_tG_tC_t-P_tG_tC_t^*\| \leq \|G_tC_t-G_tC_t^*\| \leq L_t\|C_t-C_t^*\|$, where the second inequality comes from the nonexpansiveness of the prox operator~\cite{Combettes}. Finally, we get \begin{align*} \|C_{t+1}-C_{t+1}^*\| &\leq \|C_{t+1}-C_t^*\|+\|C_{t+1}^*-C_t^*\|\\ &\leq L_t\|C_t-C_t^*\|+\sigma_t \end{align*} and we apply this inequality recursively. \end{proof} \begin{corollary} Define $\hat{L}_t = \underset{\tau=0,...,t}{\max}L_{\tau}$ and $\hat{\sigma}_t = \underset{\tau=0,...,t}{\max}\sigma_{\tau}$. If Assumption 1 holds, then, for each $t$: \begin{align} \|C_t-C_t^*\|\leq \left(\hat{L}_{t-1}\right)^t\|C_0-C_0^*\|+\frac{\hat{\sigma}_t}{1-\hat{L}_{t-1}}. \end{align} If $m_\tau \ge m$ and $M_\tau\le M$ and $\gamma_\tau$ is chosen $\gamma_\tau = 2/(m_\tau+M_\tau)$ for all $\tau = 0, \ldots, t$, then $\hat{L}_t\le(M-m)/(M+m) < 1$. \end{corollary} \begin{proof} Follows from Thm. 1 and the geometric series. \end{proof} These results closely follow the analysis of \cite{Simonetto,DallAnese,BoydPrimer}, applying it to SSC. \begin{remark} Under Assumption 1, it can be shown that $F_t(C_t)-F_t(C_t^*)\leq\frac{M_t}{2}\|C_t-C_t^*\|^2$ \cite[Thm. 10.29]{Beck}. \end{remark} In the following, we consider the case where the cost of~\eqref{eq:srt} is \emph{not} strongly convex. It is clear that contractive arguments cannot be utilized in this case since~\eqref{eq:prox_online} is no longer a strongly monotone operator. Again, we use $\|\cdot\|$ for the Frobenius norm, and $\|g\|_\infty=\sup_{x}\,|g(x)|$. \begin{theorem} Let $\hat{C}_t^*=\frac{1}{t}\sum_{\tau=0}^{t-1}C_{\tau}^*$, $\rho_t(\tau)=\|\hat{C}_t^*-C_{\tau}^*\|$, $\delta_t=\|f_{t+1}-f_t\|_\infty$, and $M=\max_t M_t$. Set $\gamma_t=\frac{1}{M}$. 
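As a quick numerical sanity check (our own construction, not from the paper), the following verifies the contraction factor of Corollary 1: because $f_t$ is a least-squares cost, the gradient-step operator $G_t$ acts on differences as the linear map $I-\gamma\lambda_e X^TX$, and with $\gamma = 2/(m+M)$ its Frobenius-norm contraction factor is at most $(M-m)/(M+m)<1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, lam_e = 12, 6, 1.0                 # N > T so X^T X is positive definite
X = rng.standard_normal((N, T))
H = lam_e * X.T @ X                      # curvature matrix of f_t
eigs = np.linalg.eigvalsh(H)
m, M = eigs[0], eigs[-1]                 # strong convexity / smoothness constants
gamma = 2.0 / (m + M)
L = (M - m) / (M + m)                    # contraction factor of Corollary 1

def G(C):
    """Gradient-step operator G_t for fixed data X."""
    return C - gamma * lam_e * X.T @ (X @ C - X)

C1 = rng.standard_normal((T, T))
C2 = rng.standard_normal((T, T))
ratio = np.linalg.norm(G(C1) - G(C2), 'fro') / np.linalg.norm(C1 - C2, 'fro')
# ratio is at most L < 1, i.e., G_t contracts in the Frobenius norm
```

Composing with the nonexpansive prox does not increase this factor, which is exactly the mechanism behind Theorem 1.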
Then \begin{align*} &\frac{1}{t}\sum_{\tau=0}^{t-1}\left(F_{\tau}(C_{\tau})-F_{\tau}(C_{\tau}^*)\right)\leq \frac{M}{2t}\|C_0-\hat{C}_t^*\|^2+\frac{1}{t}\Biggl(F_0(C_0)\\ &\hspace{1cm}-F_{t-1}(C_t)+\sum_{{\tau}=0}^{t-2}\delta_{\tau}+\sum_{{\tau}=0}^{t-1}\rho_t({\tau})(T+\frac{M}{2}\rho_t({\tau}))\Biggr) \end{align*} \end{theorem} \begin{proof} From the descent lemma \cite[Thm. 10.16]{Beck}, we have \begin{align*} \frac{1}{t}\sum_{{\tau}=0}^{t-1}\left(F_{\tau}(C_{{\tau}+1})-F_{\tau}(\hat{C}_t^*)\right)\leq \frac{M}{2t}\|C_0-\hat{C}_t^*\|^2 \end{align*} Using the Lipschitz continuity of $\nabla f_t$, $\partial\|\cdot\|_1\in[-1,1]^{T^2}$, and the Cauchy-Schwarz inequality, we get \begin{align*} \frac{1}{t}\sum_{{\tau}=0}^{t-1}\left(F_{\tau}(\hat{C}_t^*)-F_{\tau}(C_{\tau}^*)\right)\leq \frac{1}{t}\sum_{{\tau}=0}^{t-1}\rho_t({\tau})(T+\frac{M}{2}\rho_t({\tau})) \end{align*} And, rearranging, we find \begin{align*} &\frac{1}{t}\sum_{\tau=0}^{t-1}\left(F_{\tau}(C_{\tau})-F_{\tau}(C_{{\tau}+1})\right) = \frac{1}{t}\Bigl(F_0(C_0)-F_{t-1}(C_t)\\ &\hspace{3cm}+\sum_{{\tau}=0}^{t-2}\left(f_{{\tau}+1}(C_{{\tau}+1})-f_{\tau}(C_{{\tau}+1})\right)\Bigr)\\ &\hspace{1cm}\leq \frac{1}{t}\left(F_0(C_0)-F_{t-1}(C_t)+\sum_{{\tau}=0}^{t-2}\delta_{\tau}\right) \end{align*} Adding these together gives us the result. \end{proof} \begin{corollary} Define $\hat{\rho}_t=\underset{\tau=0,...,t-1}{\max}\rho_t(\tau)$ and\\ $\hat{\delta}_t=\underset{\tau=0,...,t-2}{\max}\delta_{\tau}$. Then \begin{align*} &\frac{1}{t}\sum_{{\tau}=0}^{t-1}\left(F_{\tau}(C_{\tau})-F_{\tau}(C_{\tau}^*)\right)\leq\frac{1}{t}\biggl(\frac{M}{2}\|C_0-\hat{C}_t^*\|^2\\ &+F_0(C_0)-F_{t-1}(C_t)\biggr)+\frac{M\hat{\rho}_t^2}{2}+T\hat{\rho}_t+\hat{\delta}_t. \end{align*} \end{corollary} \noindent The term $\hat{C}_t^*$ serves as a center point of all the $C^*_\tau$ and is the most meaningful ``best overall'' point. 
The corollary says that if $\hat{\delta}_t$ and $\hat{\rho}_t$ are well-behaved (i.e., the function changes slowly) then, on average, $F_t(C_t)$ tracks within a constant term of $F_t(C_t^*)$. \begin{remark} If $(C_t^*)$ is bounded, then $\hat{\rho}_t$ converges and $\hat{C}_t^*$ has a convergent subsequence. In that case, we can replace them with their limits in the bound in Corollary 2. The bound is only meaningful, though, when the $\delta_t$'s are finite. One way to make them finite is to impose boundedness with respect to the infinity norm as another constraint in~\eqref{eq:srt}. This particular constraint can be incorporated into our closed-form proximal projection \cite[Prop.\ 24.47]{Combettes}. \end{remark} \begin{remark} If $f_t$ is not strongly convex (which it will not be if $T>N$), then we can add a Tikhonov term to make it strongly convex. For example, we could add $\frac{\lambda_r}{2}\|C\|^2$. While this will provide the stronger result of Theorem 1, it incurs an error in the minimizer. That is, $\|C_{r,t}-C_t^*\|\leq\|C_{r,t}-C_{r,t}^*\|+\|C_{r,t}^*-C_t^*\|$, where $C_{r,t}$ and $C_{r,t}^*$ are the regularized tracking sequence and regularized minimizer sequence, respectively. \end{remark} \begin{remark} \label{rmk:severalIterations} If we take more than one iteration per time step, then we can modify $(f_t)$ accordingly in order to use the results in this paper. For example, with 2 iterations per time step, define $\Tilde{f}_t=f_{\floor*{t/2}}$. Alternatively, we can modify the theorems. For example, if we take $n_t$ iterations at time $t$, then we just have to redefine $\Tilde{L}_t=\prod_{\tau=0}^t L_{\tau}^{n_{\tau}}$ in Theorem 1. \end{remark} Note that we did not consider accelerating our algorithm. In the non-strongly convex case, accelerated methods rely on global structure, not just local descent. Because of this, it is not obvious that adapting an accelerated method to the online setting would lead to better tracking. 
However, this is a further research direction that we are currently exploring. \section{Spectral Clustering} \label{sec:spectral} The main factor in determining how many iterations to take in step [S1] is the ratio between the costs of steps [S1] and [S2]. There are papers that explore online spectral clustering \cite{onlinespec}, but the results require relatively small changes in the graph. There are no such guarantees here. For example, when the oldest data point is thrown out and a new one added, all connections to the previous data point are thrown out as well, new connections have to be made for all of the points that were previously connected to the old data point, and connections have to be made for the new data point. While the end result of the spectral clustering may barely change, the change in the graph is catastrophic. Thus, we do a full batch spectral clustering operation, and leave a more general framework for online spectral clustering as a future research direction. Note that the proximal operator in~\eqref{eq:prox_online} simplifies to $\text{proj}_{\text{diag}(\cdot)=0}\circ \text{prox}_{\gamma_t\|\cdot\|_1}$. The closed-form expression for the latter proximal operator is soft-thresholding: $\text{prox}_{\gamma_t\|\cdot\|_1}C = \text{sign}(C)(|C|-\gamma_t)_+$ component-wise, where $(a)_+=\max(a,0)$. Thus, the cost of each iteration of step [S1] is dominated by the gradient descent sub-step, and so the total cost of each step is $\mathcal{O}(NT^2)$ operations. If, instead of applying the proximal gradient operator just once to $C_t$, we apply it $n_t$ times, then the step at time $t$ will cost $\mathcal{O}(n_tNT^2)$ operations. The cost of step [S2] depends on what method we use to compute the specified eigenvectors. An upper bound on the cost is $\mathcal{O}(T^3)$, which can be achieved by classical dense algorithms and gives \emph{all} $T$ eigenpairs. 
To find $S \ll T$ eigenvalues approximately, the power method or Lanczos iterative methods can be used \cite{NMMC}. First consider that $L_{rw}v=\lambda v$ iff $v-D^{-1}Wv=\lambda v$ iff $D^{-1}Wv=(1-\lambda)v$. Also, by the Gershgorin disc theorem, the eigenvalues of $L_{rw}$ are in $[0,2]$ and the eigenvalues of $D^{-1}W$ are in $[-1,1]$. Thus, for $D^{-1}W$, we want the eigenvectors corresponding to the eigenvalues near $1$. The leading eigenpairs of $D^{-1}W$ should correspond to the trailing eigenpairs of $L_{rw}$ as long as the trailing eigenvalues of $L_{rw}$ are closer to $0$ than the leading eigenvalues are to $2$. If this is not the case, then we have to use the power method to compute \textit{more} than $S$ eigenpairs, so that we can take the eigenvectors corresponding to the $S$ largest \textit{positive} eigenvalues. The power method costs $\mathcal{O}(nnz\cdot niter\cdot S)$ in the ideal case where we do not have to compute extra eigenpairs. Here, $nnz$ denotes the number of nonzero elements and $niter$ denotes the number of iterations to reach convergence. There are efficient methods for computing the trailing eigenpairs of $L_{rw}$ directly. In particular, \cite{3methods} found that the Jacobi-Davidson method was superior to the Lanczos method, in terms of computation time, for spectral clustering. Both methods cost $\mathcal{O}(nnz\cdot niter)$, but $niter$ (roughly of the order of $S$) can vary. The number of nonzeros, in the ideal case, can be estimated from the subspace dimensions $d_i$. For a given subspace, the minimum number of points needed to represent another point, provided it does not lie in the span of a strict subset of those points, is exactly the dimension of the subspace. Thus, we can say that $nnz(C^*)\geq \mathcal{O}(\sum_i d_i T_i^t)$, where $T_i^t$ is the number of data points in $\mathcal S_i$ at time $t$. The same can be said of $W$, $L$, and $L_{rw}$. 
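The power-method step just described can be sketched as follows (an illustrative numpy sketch of our own, not the paper's code). Note that $D^{-1}W$ is row-stochastic, so its leading eigenvalue is exactly $1$, corresponding to the smallest eigenvalue $0$ of $L_{rw}=I-D^{-1}W$:

```python
import numpy as np

def power_method(A, num_iters=200, seed=0):
    """Power-method sketch for the leading eigenpair of A (e.g., A = D^{-1} W).
    Assumes the leading eigenvalue strictly dominates in magnitude."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)           # normalize to avoid over/underflow
    lam = v @ (A @ v)                    # Rayleigh quotient estimate
    return lam, v
```

On a triangle graph ($W$ the all-ones matrix minus its diagonal), $D^{-1}W$ has eigenvalues $\{1,-1/2,-1/2\}$, so the iteration recovers the eigenvalue $1$ and the constant eigenvector.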
If each subspace has the same dimension, $d$, and the same number of points, $T/S$, then $nnz(C^*) \ge \mathcal{O}(dT)$ (and recall $d\le N$). This means that in the best case, [S2] costs $\mathcal{O}(dTS)$, while a single step of [S1] costs $\mathcal{O}(NT^2)$; this suggests choosing $T \approx dS/N$ to balance the costs of the two steps; if $T$ is smaller than this, multiple steps to solve [S1] can be taken. \section{Numerical Results} \label{sec:numerical} We performed tests on both synthetic data and the Yale Face Database \cite{Yale}. The synthetic data was composed of $S=10$ 5-dimensional subspaces in $\mathbb R^{50}$ ($d=5$), each with 50 points. Noise was added to make the simulation more realistic. The sliding window had a capacity of $T=400$ data points and we took 100 time steps. The Yale Face Database has $S=38$ subspaces, and we let the sliding window have a capacity of $T=500$ points and took 200 time steps. The points in the Yale Face Database are in $\mathbb R^{2016}$. For both datasets, we took 50 iterations of the optimization algorithm per time step (cf.\ Remark~\ref{rmk:severalIterations}). For the synthetic data, $T>N$, so the cost function is not strongly convex. On the other hand, for the Yale Face Database, $T<N$, so the cost function \textit{is} strongly convex. The numerical results show that for both datasets, though, the objective value trajectory converges to a region above the minimum trajectory. This can be seen in Figure 1. \begin{figure} \caption{Objective value of tracking sequence and actual time-varying minimum.} \label{fig:res} \end{figure} The objective error appears to determine whether the clustering error converges. Figure 2 shows the clustering error. \begin{figure} \caption{Clustering error of both the tracking sequence and minimizer sequence.} \label{fig:clust} \end{figure} For both datasets, the clustering error of the tracking algorithm decreases. 
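A synthetic setup of the kind described above (random $d$-dimensional subspaces of $\mathbb R^N$ with additive noise) can be generated as follows; this is a minimal sketch under our own conventions, with hypothetical parameter names, not the exact generator used in the experiments:

```python
import numpy as np

def synthetic_subspace_data(N=50, d=5, S=10, points_per_sub=50,
                            noise=0.01, seed=0):
    """Draw points from S random d-dimensional subspaces of R^N,
    plus i.i.d. Gaussian noise; returns the data matrix and labels."""
    rng = np.random.default_rng(seed)
    X, labels = [], []
    for i in range(S):
        U, _ = np.linalg.qr(rng.standard_normal((N, d)))    # orthonormal basis
        pts = U @ rng.standard_normal((d, points_per_sub))  # points in subspace
        X.append(pts + noise * rng.standard_normal((N, points_per_sub)))
        labels += [i] * points_per_sub
    return np.hstack(X), np.array(labels)
```

With the noise level set to zero, each block of columns has rank exactly $d$, matching the idealized subspace model.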
However, decreasing the number of iterations per time step too much causes the clustering error to no longer decrease. This suggests that the clustering error decreases only while the objective error remains below some threshold. Finally, for the synthetic dataset, step [S2] took 50 times as long as step [S1]. For the Yale Face dataset, step [S2] took 3 times as long as step [S1]. While we took a sufficient number of iterations for the clustering error of our algorithm to be small, in an actual system the dynamics would dictate the number of iterations: the spectral clustering time would be subtracted from the length of a time step, and the remaining time divided by the cost of one iteration of step [S1] to obtain the iteration budget. \nocite{Sleijpen} \nocite{NystSpec1} \nocite{NystSpec2} \nocite{sketched} \nocite{Nesterov} \end{document}
\begin{document} \TITLE{2-Approximation Algorithms for Perishable Inventory Control When FIFO Is an Optimal Issuing Policy} \ARTICLEAUTHORS{ \AUTHOR{Can Zhang} \AFF{H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, 30332, \EMAIL{[email protected]}} \AUTHOR{Turgay Ayer} \AFF{H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, 30332, \EMAIL{[email protected]}} \AUTHOR{Chelsea C. White III} \AFF{H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, 30332, \EMAIL{[email protected]}} } \ABSTRACT{ We consider a periodic-review, fixed-lifetime perishable inventory control problem where demand is a general stochastic process. The optimal solution for this problem is intractable due to the ``curse of dimensionality''. In this paper, we first present a computationally efficient algorithm that we call the \textit{marginal-cost dual-balancing policy} for the perishable inventory control problem. We then prove that a myopic policy under the so-called marginal-cost accounting scheme provides a lower bound on the optimal ordering quantity. By combining the specific lower bound we derive and any upper bound on the optimal ordering quantity with the marginal-cost dual-balancing policy, we present a more general class of algorithms that we call the \textit{truncated-balancing policy}. We prove that when first-in-first-out (FIFO) is an optimal issuing policy, both of our proposed algorithms admit a worst-case performance guarantee of two, i.e., the expected total cost of our policy is at most twice that of an optimal ordering policy. We further present sufficient conditions that ensure the optimality of the FIFO issuing policy. 
Finally, we conduct numerical analyses based on real data and show that both of our algorithms perform much better than the worst-case performance guarantee, and the truncated-balancing policy has a significant performance improvement over the balancing policy. } \KEYWORDS{perishable inventory; nonstationary correlated demand; approximation algorithms; optimality of FIFO issuing policy} \maketitle \section{Introduction.} \label{sec:intro} Perishable products are very common in practice. Typical examples include medical products such as blood and certain pharmaceuticals, and food products such as refrigerated meat and many dairy products. Unlike nonperishable products that can wait in inventory until they are used to satisfy demand, perishable products must be used within a short period of time, and will become outdated otherwise. Outdating can result in a significant amount of waste and financial loss. For example, the number of platelets outdated in 2011 in the U.S. was approximately 321,000 units, which accounted for 12.8\% of all processed units (\citet{us2011}). Similarly, the total annual unsaleable costs in the food, beverage, health and beauty industries in the U.S. were estimated at \$15 billion, and about 17\% of these costs (over \$2.5 billion) were caused by outdating (\citet{grocery2008joint}). These facts underline the critical need for efficient inventory management policies for perishable products. Our study is specifically motivated by a platelet inventory control problem faced by a local acute-care hospital, Hospital Alpha (name blinded). In Hospital Alpha, the demand for platelets mainly comes from cardiac surgeries, which account for more than 85\% of its platelet transfusions. In this case, the uncertainty of demand stems from two sources: 1) the number of surgeries performed per day, and 2) the amount of platelets needed per surgery. Such a compound structure of demand is common for many blood products. 
As such, the compound Poisson distribution, where a random (Poisson-distributed) number of patients arrives in every time period and each patient consumes a random amount of blood products, has been widely assumed for modeling demand in the blood supply chain literature (e.g., \citet{gregor1982evaluation, kopach2003models, katsaliaki2008cost}). However, while simply assuming random arrivals is reasonable for some cases such as trauma patients, forecast information on the number of arrivals is often available for many other cases, especially for scheduled operations such as cardiac surgeries. In particular, most of those surgeries are scheduled days or even weeks in advance, thus the number of surgeries scheduled for each day is gradually revealed as the day approaches. Although the compound structure of demand is widely considered in the blood inventory management literature, to our knowledge, the dynamically evolving forecast information on the number of arrivals is not formally captured. Motivated by the platelet inventory control problem with evolving forecast information, in this paper, we study a periodic-review, fixed-lifetime perishable inventory control problem under a general demand process, which can be nonstationary, correlated, and dynamically evolving over time. {Similar to many other perishable inventory focused studies, we consider the first-in-first-out (FIFO) issuing policy, i.e., older products are issued first to meet demand, which has been shown to perform very well in many perishable inventory systems ({e.g., \citet{fries1975optimal2,pierskalla1972optimal}})}. 
Our contributions in this paper are as follows: i) {We first present a new approximation algorithm that we call the \textit{marginal-cost dual-balancing policy} for the perishable inventory control problem, and prove that whenever replacing old products with new ones in inventory does not increase the expected total cost, our algorithm has a worst-case performance guarantee of two, i.e., the expected total cost of our policy is at most twice that of an optimal ordering policy.} ii) {In many perishable inventory systems, the major concern is outdating; and clearly, replacing old products with new ones reduces the chance of products being outdated. {Therefore, the condition that replacing old products with new ones does not increase the expected total cost is very intuitive; however, it is not obvious when this is guaranteed to hold {theoretically}.} In that regard, we find that this condition coincides with the optimality of the FIFO issuing policy: FIFO being an optimal issuing policy implies that younger products are preferable to hold in inventory over older ones, thus replacing old products with new ones will not increase the expected total cost; and vice versa. Further, given that directly checking the optimality of a FIFO issuing policy may be difficult, we extend the existing findings in the literature on the optimality of the FIFO issuing policy and provide a \textit{necessary and sufficient} condition and several easy-to-check sufficient conditions that ensure FIFO to be an optimal issuing policy.} {iii) {By ``truncating'' the marginal-cost dual-balancing policy, we also present a more general class of algorithms that we call the \textit{truncated-balancing policy}. In particular, we first prove that a myopic policy under the so-called marginal-cost accounting scheme provides a lower bound on the optimal ordering quantity. 
Then, we construct the truncated-balancing policy by truncating the marginal-cost dual-balancing policy using the \textit{specific} lower bound we derive and any upper bound on the optimal ordering quantity. We prove that when FIFO is an optimal issuing policy, the truncated-balancing policy also admits a worst-case performance guarantee of two.} iv) {We further compare our marginal-cost dual-balancing policy with base-stock policies, which are widely studied in the perishable inventory literature. Given that FIFO issuing policy is always optimal under base-stock policies, we show that the expected total cost of the marginal-cost dual-balancing policy is always at most twice that of an optimal base-stock policy.} v) {Lastly, using real data for the platelet inventory control problem from Hospital Alpha, we conduct extensive computational analyses and show that a) our proposed approximation algorithms perform significantly better than an ``optimal'' policy that does not consider the evolving forecast information, b) the computational performance of our proposed algorithms is substantially better than the theoretical worst-case performance guarantee of two, and c) the truncated-balancing policy has a significant performance improvement over the marginal-cost dual-balancing policy as well as other relevant policies proposed in the literature.} In the literature, many papers have studied the periodic-review, fixed-lifetime perishable inventory control problem (see reviews by \citet{nahmias1982perishable,nahmias2011perishable} and \citet{karaesmen2011managing}). The general multi-period lifetime perishable inventory control problem was first studied independently by \citet{fries1975optimal} and \citet{nahmias1975optimal}, who both formulated the problem as a dynamic program (DP) with a state space comprised of inventory levels of different ages. 
However, the structure of an optimal policy is complicated and finding optimal policies using standard dynamic programming is computationally intractable due to the well-known ``curse of dimensionality''. Therefore, later efforts are mainly focused on heuristic policies. Among the developed heuristic policies, the base-stock policy, under which the total inventory is replenished up to the same level at each period, is particularly popular due to its simplicity and near-optimal numerical performance (e.g., \citet{nahmias1976myopic, cohen1976analysis, chazan1977markovian, nandakumar1993near, cooper2001pathwise, li2009note, chen2014coordinating, zhang2016nonparametric}). Other heuristic policies such as modified base-stock policy (e.g., \citet{broekmeulen2009heuristic}), constant order policy (e.g., \citet{brodheim1975evaluation, deniz2010managing}), and higher-order approximation (\citet{nahmias1977higher}) are also proposed and studied. However, due to the complexity of the perishable inventory control problem, most of these studies assume that demand over time is independently and identically distributed (i.i.d.), and none of the proposed heuristics has a theoretical performance guarantee. More recently, there is a stream of work focusing on approximation algorithms for stochastic inventory systems under general demand processes. The pioneering work by \citet{levi2007approximation} studies a stochastic inventory control problem for nonperishable products. They show that the proposed dual-balancing policy, which balances the costs of under-ordering and over-ordering under a marginal-cost accounting scheme, has a worst-case performance guarantee of two. 
This idea has been later extended to many other settings to consider lost sales (\citet{levi20082}), setup costs and capacity constraints (\citet{levi2008approximation, levi2013approximation, shi2014approximation}), remanufacturing (\citet{tao2014approximation}), and perishable products (\citet{chao2016approximation,chao2015approximation}). {Among these papers that study approximation algorithms in inventory management, \citet{chao2015approximation}, which also considers a perishable inventory control problem with no set-up cost, is the most relevant to ours. In particular, \citet{chao2015approximation} present a proportional-balancing policy and a dual-balancing policy for perishable inventory systems under FIFO issuing policy, and they prove that 1) the proportional-balancing policy has a performance guarantee between two and three for the general case, and 2) the dual-balancing policy has a performance guarantee of two when demand is independent and stochastically non-decreasing over time. While both our study and \citet{chao2015approximation} focus on developing approximation algorithms for perishable inventory systems, our analysis and results are different in the following aspects: i) We present new approximation algorithms for perishable inventory systems (marginal-cost dual balancing policy and truncated-balancing policy) that are different from the ones presented in \citet{chao2015approximation}.} ii) {We tighten the worst-case performance guarantee to exactly two for cases where FIFO is an optimal issuing policy, and show that the condition presented in Chao et al. to ensure a performance guarantee of two (i.e., demand is independent and stochastically non-decreasing over time) is a special case of ours. 
{Further, we identify several intuitive sufficient conditions that ensure the optimality of FIFO issuing policy, and we present examples where the worst-case performance guarantee of our proposed algorithms is strictly tighter than that presented in \citet{chao2015approximation}} (please see details in $\S$\ref{sec:fifo} and Examples \ref{ex:1} and \ref{ex:2}).} iii) {We further consider truncating the marginal-cost dual-balancing policy using a specific lower bound we derive on the optimal ordering quantity. We remark that this is an important contribution because, unlike the existing results in the nonperishable inventory literature, truncation in the perishable inventory case imposes several new methodological challenges and hence is non-trivial (see also the next paragraph for more details). We show that the truncated-balancing policy also admits a performance guarantee of two, and using real data from a local hospital, we show that it numerically performs much better than the marginal-cost dual-balancing policy and the policies presented in \citet{chao2015approximation}.} iv) {Methodologically, while Chao et al. build their analysis based on algebraic arguments, our analysis is based on two new ideas that we call the \textit{imaginary operation policy} and the \textit{dynamic unit-matching scheme}, respectively (see discussion also in the following paragraph). In particular, the existing worst-case analyses for the nonperishable cases are based on a (static) one-to-one matching between units under two different policies. However, as stated in Chao et al., ``the perishability of products destroys this matching mechanism''. To overcome this challenge, Chao et al. turned to an innovative algebraic approach. 
On the other hand, the new ideas we propose {(i.e., the imaginary operation policy and the dynamic unit-matching scheme)}, a) significantly simplify the comparison of two different policies and allow us to stay on the track {of} unit matching; b) enable us to reach to a very insightful new result for perishable inventory systems: The worst-case performance guarantee can be tightened to exactly two whenever replacing old products with new ones does not increase the expected total cost (or equivalently when FIFO is an optimal issuing policy); and c) allow us to capture not only perishability but also truncation, which is not considered in Chao et al., but as we show, it significantly improves the computational performance. We believe our ideas are valuable beyond this study and can also be applied to facilitate the analysis for other perishable inventory systems.}} The idea of truncating a dual-balancing policy with bounds on optimal ordering quantities is first proposed by \citet{hurley2007new}. However, unlike the dual-balancing policy that has been extended to many other settings, the truncated-balancing policy needs a more sophisticated analysis and has only been shown to have a worst-case performance guarantee for the nonperishable backlogging case (\citet{hurley2007new, levi2008approximation}). In this study, we consider a truncated-balancing policy in the perishable inventory setting, where challenges arise from both perishability and the complexity caused by truncation. In particular, first, the analyses for all the nonperishable cases are based on a (static) one-to-one matching between units under two different policies, which rely on the fact that all inventory units will be eventually used to satisfy demand. However, {this fails to be true for the perishable inventory case, where units can simply outdate without satisfying any demand}. 
Second, the existing analysis for the truncated-balancing policy in the nonperishable case relies on the base-stock structure of an optimal ordering policy. This also fails to be true in the perishable inventory case, where, due to perishability, the structure of an optimal policy is complicated and depends on the entire inventory vector. To overcome the above challenges, we introduce two new ideas: 1) a bridging policy that we call the \textit{imaginary operation policy}, under which old products can be replaced with new ones for free so that the inventory vectors under two different policies can be easily compared, and 2) a \textit{dynamic unit-matching scheme}, under which units can be matched and re-matched at different periods so that the shortcomings of the existing static matching approach can be addressed. As we discussed earlier, these two new ideas together enable us to overcome the challenges arising from both perishability and truncation, and lead to an insightful new result.

The remainder of this paper is organized as follows. In $\S$\ref{sec:problem}, we present a formal model formulation. In $\S$\ref{sec:accounting}, we present a marginal-cost dual-balancing policy for the perishable inventory control problem. In $\S$\ref{sec:analysis}, we prove that when FIFO is an optimal issuing policy, our algorithm has a worst-case performance guarantee of two, i.e., the expected total cost of our policy is at most twice that of an optimal ordering policy. We further compare our policy with base-stock policies and show that the expected total cost of our policy is always at most twice that of an optimal base-stock policy. In $\S$\ref{sec:truncated}, we first show that a myopic policy under the marginal-cost accounting scheme provides a lower bound on the optimal ordering quantity; we then present a truncated-balancing policy that also admits a worst-case performance guarantee of two when FIFO is an optimal issuing policy.
In $\S$\ref{sec:fifo}, we present a necessary and sufficient condition and several easy-to-check sufficient conditions that ensure the optimality of the FIFO issuing policy. Finally, we present computational results based on a platelet inventory control problem in $\S$\ref{sec:numerical}, and draw conclusions in $\S$\ref{sec:conclusion}.

\section{Model Formulation.} \label{sec:problem}

We study a periodic-review, fixed-lifetime perishable inventory control problem under a general stochastic demand process.

\textbf{Notation}: We consider a product lifetime of $K$ periods and a planning horizon of $T$ periods. Demands over the planning horizon are denoted as $D_1,...,D_T$, which are exogenous random variables with finite means, and can be nonstationary, correlated, and dynamically evolving. As a convention, we generally use capital letters to denote random variables and lowercase letters to denote their realizations (the product lifetime $K$ and the planning horizon $T$ are exceptions). At the beginning of each period $t$, there is an information set denoted as $f_t$, which contains the realization of demands $(d_1,...,d_{t-1})$ and possibly some other forecast information available at period $t$, denoted as $(u_1,...,u_t)$. That is, the information set $f_t$ is a specific realization of the random vector $F_t=(D_1,...,D_{t-1},U_1,...,U_t)$. Further, we assume that the conditional joint distribution of future demands $(D_t,...,D_T)$ is known for given $f_t$. Additional notation that describes system states and decision variables is defined as follows:

$X_{k,t}$: the inventory level of age $k$ at the beginning of period $t$, $k=1,...,K-1$, $t=1,...,T$.

$\textbf{X}_t$: the inventory vector at the beginning of period $t$, i.e., $\textbf{X}_t=(X_{1,t},...,X_{K-1,t})$, $t=1,...,T$.

$Q_t$: the ordering quantity at period $t$, $t=1,...,T$.
$Y_t$: the total inventory level after ordering and before demand realization at period $t$, i.e., $Y_t=\sum\limits_{k=1}^{K-1}X_{k,t}+Q_t$, $t=1,...,T$.

\textbf{System Dynamics}: We define the sequence of events as follows: 1) At the beginning of each period $t=1,...,T$, the $K-1$ dimensional inventory vector $\textbf{X}_t$ and the information set $F_t$ are observed, based on which $Q_t$ products of age 0 are ordered; 2) the products ordered arrive instantly with a zero lead time; 3) the random demand $D_t$ then occurs during the period, inventory is issued to satisfy demand based on the FIFO rule, and unmet demand is lost (since we assume a zero lead time, our results hold equally well for the backlogging case); and 4) at the end of each period, all products in inventory age by 1, and products reaching age $K$ are disposed of. Let $X_{0,t}=Q_t$. Then, the inventory vector is updated as follows: $$X_{k,t+1}=\bigg(X_{k-1,t}-\bigg(D_t-\sum\limits_{m=k}^{K-1}X_{m,t}\bigg)^+\bigg)^+, \quad k=1,...,K-1; \ t=1,...,T.$$ Without loss of generality, we assume that the system starts empty (i.e., with zero initial inventory); however, all of our results can be extended to consider arbitrary initial inventory levels.

\textbf{Cost Structure}: At each period, we consider an ordering cost $\hat{c}$ for each unit of product ordered at that period, a shortage penalty $\hat{p}$ for each unit of stock-out, a holding cost $\hat{h}$ for each unit of excess inventory after demand realization, and an outdating cost $\hat{w}$ for each unit of product that is outdated at the end of that period. To eliminate trivial situations, we assume $\hat{p}-\hat{c}\geq0$. We allow a negative outdating cost (i.e., a positive salvage value) as long as $\hat{w}+\beta\hat{c}\geq0$, where $\beta$ denotes the discount factor.
We also consider a salvage value for each unit of product left in inventory at the end of the planning horizon, and for simplicity we assume it is equal to the ordering cost $\hat{c}$ (our results can be easily extended to consider any salvage value $\hat{v}$ as long as $\hat{w}+\beta\hat{v}\geq0$). \textbf{Optimality Criterion}: At each period $t$, given the inventory vector $\textbf{x}_t$ and the information set $f_t$, an ordering \textit{decision rule} is a function from the set of all possible $(\textbf{x}_t,f_t)$ to the set of all possible $q_t$; and an ordering \textit{policy} is a collection of ordering decision rules at all periods. Let $\pi$ denote any given ordering policy. Then, the total cost under policy $\pi$ over the planning horizon is: \begin{equation*} \hat{\mathscr{C}}(\pi)= \sum\limits_{t=1}^T\beta^{t-1}\big(\hat{c}Q_t^{\pi}+\hat{p}(D_t-Y_t^{\pi})^++\hat{h}(Y_t^{\pi}-D_t)^++\hat{w}(X_{K-1,t}^{\pi}-D_t)^+\big)-\beta^{T}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,T+1}^{\pi}, \end{equation*} where $\textbf{X}_t^{\pi}$ and $Q_t^{\pi}$ denote the inventory vector and the ordering quantity at period $t$ under policy $\pi$, respectively. Then, our problem is to find an optimal ordering policy $OPT$ such that $OPT\in \arg\min\limits_{\pi}\mathrm{E}[\hat{\mathscr{C}}(\pi)]$. \section{Marginal-Cost Dual-Balancing Policy.} \label{sec:accounting} In this section, we first introduce a cost transformation to eliminate ordering cost in $\S$\ref{sec:cost}. We then present a marginal-cost accounting scheme for the perishable inventory setting in $\S$\ref{sec:marginal}. Finally, we present our algorithm in $\S$\ref{sec:algorithm}. Unless presented in the main text, the proofs of all analytical results are included in the Appendix. \subsection{Cost Transformation.} \label{sec:cost} To apply the marginal-cost accounting scheme which we present in $\S$\ref{sec:marginal}, we first need to construct an equivalent problem with a zero ordering cost. 
Define the cost parameters for the transformed problem as: $c=0$, $p=\hat{p}-\hat{c}$, $h=\hat{h}+(1-\beta)\hat{c}$, and $w=\hat{w}+\beta \hat{c}$. Since we assume $\hat{p}-\hat{c}\geq0$ and $\hat{w}+\beta\hat{c}\geq0$, all the transformed cost parameters are nonnegative. Then, for a given policy $\pi$, the total cost of the transformed problem is: \begin{equation*} \mathscr{C}(\pi)= \sum\limits_{t=1}^T\beta^{t-1}\big(p(D_t-Y_t^{\pi})^++h(Y_t^{\pi}-D_t)^++w(X_{K-1,t}^{\pi}-D_t)^+\big). \end{equation*} In the following lemma, we show that the difference between the total costs of the original and transformed problems is independent of the policy $\pi$, which implies that the two problems are equivalent in the sense that they have the same set of optimal ordering policies. \begin{lemma} \label{lem:c1c2} For any policy $\pi$, $\hat{\mathscr{C}}(\pi)-\mathscr{C}(\pi)=\sum\limits_{t=1}^T\beta^{t-1}\hat{c}D_t$, with probability one. \end{lemma}

\subsection{Marginal-Cost Accounting Scheme.} \label{sec:marginal}

Unlike traditional cost accounting, which assigns to each period all costs that occur in that period, the marginal-cost accounting scheme, introduced by \citet{levi2007approximation}, assigns to each period all costs that are caused by the decision made in that period. For example, a unit ordered at period $t$ may stay in the system for multiple periods, so holding costs may be charged for this unit over multiple periods; under the marginal-cost accounting scheme, all these holding costs are assigned to period $t$. We now present the marginal-cost accounting scheme for the perishable inventory setting.

\textbf{Marginal Shortage Penalty}: Since inventory can be replenished with a zero lead time, the marginal shortage penalty at each period is simply defined as the shortage penalty that occurs at that period. For $t=1,...,T$, given $\textbf{x}_t$, $f_t$ and $q_t$, let $P_t(\textbf{x}_t,f_t,q_t)$ denote the expected marginal shortage penalty at period $t$.
Then, we have: $$P_t(\textbf{x}_t,f_t,q_t)\mathrel{\mathop:}=\beta^{t-1}p\mathrm{E}[(D_t-y_t)^+|f_t].$$ \textbf{Marginal Holding Cost}: For $t=1,...,T$, given $\textbf{x}_t$, $f_t$ and $q_t$, let $H_t(\textbf{x}_t,f_t,q_t)$ denote the expected marginal holding cost at period $t$, which is defined as the sum of all expected holding costs charged for units ordered at period $t$. In the perishable inventory setting, since units in inventory may become outdated without satisfying any demand, the future holding costs charged for $q_t$ depend on the entire inventory vector $\textbf{x}_t$. Thus, similar to \citet{nahmias1975optimal}, we let $A_{0,t}=0$, and for $k=1,...,K-1$, let $A_{k,t}$ be the total demand over periods $t, ...,t+k-1$ that cannot be satisfied by the inventory of $(x_{K-k,t},...,x_{K-1,t})$, i.e., the inventory that would have been outdated by the end of period $t+k-1$. Then: $$A_{k,t}=(A_{k-1,t}+D_{t+k-1}-x_{K-k,t})^+, k=1,...,K-1.$$ Thus, for $k=0,...,K-1$, $(A_{k,t}+D_{t+k}-\sum\limits_{m=1}^{K-k-1}x_{m,t})^+$ represents the total demand over periods $t, ...,t+k$ that cannot be satisfied by the inventory of $\textbf{x}_t$, and $(q_t-(A_{k,t}+D_{t+k}-\sum\limits_{m=1}^{K-k-1}x_{m,t})^+)^+$ represents the amount of $q_t$ left in inventory at the end of period $t+k$. Then, we have: $$H_t(\textbf{x}_t,f_t,q_t)\mathrel{\mathop:} =\sum\limits_{k=0}^{K-1}\beta^{t+k-1}h\mathrm{E}\bigg[(q_t-(A_{k,t}+D_{t+k}-\sum\limits_{m=1}^{K-k-1}x_{m,t})^+)^+\bigg|f_t\bigg],$$ where the sum over $k$ is defined up to $T-t$ when $t+K-1\geq T$. \textbf{Marginal Outdating Cost}: For $t=1,...,T$, given $\textbf{x}_t$, $f_t$ and $q_t$, let $W_t(\textbf{x}_t,f_t,q_t)$ denote the expected marginal outdating cost at period $t$, which is defined as the sum of all expected outdating costs charged for units ordered at period $t$, i.e., the expected outdating costs that occur at period $t+K-1$. 
Note that units ordered at periods $T-K+2,...,T$ will not be outdated within the planning horizon; thus, we simply define $W_t(\textbf{x}_t,f_t,q_t)=0$ for $t=T-K+2,...,T$. For $t\leq T-K+1$, $(q_t-A_{K-1,t}-D_{t+K-1})^+$ represents the amount of $q_t$ that will be outdated at the end of period $t+K-1$. Then, we have: $$W_t(\textbf{x}_t,f_t,q_t)\mathrel{\mathop:}=\beta^{t+K-1}w\mathrm{E}[(q_t-A_{K-1,t}-D_{t+K-1})^+|f_t].$$ For a given policy $\pi$, let $P_t^{\pi}$, $H_t^{\pi}$ and $W_t^{\pi}$ denote the corresponding marginal shortage penalty, holding and outdating costs at period $t$, respectively. Under a given policy $\pi$, $\textbf{x}^{\pi}_t$ and $q_t^{\pi}$ are both known for given $f_t$. Then, $\mathrm{E}[P_t^{\pi}|f_t]=P_t(\textbf{x}_t^{\pi},f_t,q_t^{\pi})$, $\mathrm{E}[H_t^{\pi}|f_t]=H_t(\textbf{x}_t^{\pi},f_t,q_t^{\pi})$, and $\mathrm{E}[W_t^{\pi}|f_t]=W_t(\textbf{x}_t^{\pi},f_t,q_t^{\pi})$. Since the system starts from zero inventory, we have $\mathscr{C}(\pi)=\sum\limits_{t=1}^T(P_t^{\pi}+H_t^{\pi}+W_t^{\pi}).$

\subsection{Algorithm.} \label{sec:algorithm}

We now present our first algorithm, based on the marginal-cost accounting scheme presented above. Clearly, the expected marginal shortage penalty $P_t(\textbf{x}_t,f_t,q_t)$ arises from under-ordering, while the expected marginal holding and outdating costs $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$ arise from over-ordering. Therefore, we define the \textit{marginal-cost dual-balancing policy} (denoted as $B$) as the policy that balances the expected marginal shortage penalty against the sum of the expected marginal holding and outdating costs.
More specifically, at each period $t$, given $\textbf{x}_t$ and $f_t$, the marginal-cost dual-balancing ordering quantity $q_t^B$ (for simplicity, we also call it the balancing ordering quantity in the following text) is defined as the solution to the following equation: \begin{equation} \label{eqn:balancing} P_t(\textbf{x}_t,f_t,q_t)=H_t(\textbf{x}_t,f_t,q_t)+W_t(\textbf{x}_t,f_t,q_t). \end{equation} Note that the existence of the balancing ordering quantity $q_t^B$ is guaranteed, because at any period $t$, given $\textbf{x}_t$ and $f_t$, $P_t(\textbf{x}_t,f_t,q_t)$ is non-increasing in $q_t$; when $q_t=0$, $P_t(\textbf{x}_t,f_t,q_t)$ is nonnegative, and when $q_t$ goes to infinity, $P_t(\textbf{x}_t,f_t,q_t)$ goes to zero (since demand has a finite mean). In contrast, $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$ are non-decreasing in $q_t$; when $q_t=0$, $H_t(\textbf{x}_t,f_t,q_t)=W_t(\textbf{x}_t,f_t,q_t)=0$, and when $q_t$ goes to infinity, both $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$ go to infinity. Therefore, $q_t^B$ is guaranteed to exist when we allow fractional ordering quantities. The algorithm can be easily extended to consider discrete ordering quantities following a similar argument as in \citet{levi2007approximation}. We also remark that our marginal-cost dual-balancing policy is different from the dual-balancing policy defined in \citet{chao2015approximation}. In particular, while our marginal-cost dual-balancing policy balances the marginal shortage penalty against the sum of the marginal holding and outdating costs, the dual-balancing policy in \citet{chao2015approximation} balances the marginal shortage penalty against the marginal outdating cost plus the holding cost that occurs at period $t$, i.e., the marginal holding cost $H_t(\textbf{x}_t,f_t,q_t)$ in Equation (\ref{eqn:balancing}) is replaced by $\beta^{t-1}h\mathrm{E}[(y_t-D_t)^+|f_t]$ in \citet{chao2015approximation}. 
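The balancing computation above can be sketched numerically. The following is a minimal sketch (hypothetical cost parameters and a user-supplied demand-path sampler; the end-of-horizon truncation of $H_t$ is ignored for brevity): the expected marginal costs are estimated by sample averaging over demand paths using the $A_{k,t}$ recursion, and $q_t^B$ is found by bisection, which is valid because $P_t$ is non-increasing in $q_t$ while $H_t+W_t$ is non-decreasing from 0.

```python
def marginal_costs(x, q, d_path, p, h, w, beta, t):
    """P, H, W contributions for one realized demand path d_path = (d_t, ..., d_{t+K-1});
    x = [x_1, ..., x_{K-1}] is the age-indexed inventory vector at period t."""
    K = len(x) + 1
    y = sum(x) + q
    P = beta ** (t - 1) * p * max(d_path[0] - y, 0.0)
    H, A = 0.0, 0.0                        # A_{0,t} = 0
    for k in range(K):                     # A_{k,t} = (A_{k-1,t} + d_{t+k-1} - x_{K-k,t})^+
        unmet = max(A + d_path[k] - sum(x[: K - k - 1]), 0.0)
        H += beta ** (t + k - 1) * h * max(q - unmet, 0.0)
        if k < K - 1:
            A = max(A + d_path[k] - x[K - k - 2], 0.0)
    W = beta ** (t + K - 1) * w * max(q - A - d_path[K - 1], 0.0)
    return P, H, W

def balancing_quantity(x, sample_path, p, h, w, beta, t, n=2000, q_hi=1e4, tol=1e-6):
    """Bisection for P_t(q) = H_t(q) + W_t(q): the left side is non-increasing in q
    and the right side is non-decreasing from 0, so a crossing exists."""
    paths = [sample_path() for _ in range(n)]    # fixed sample (sample-average approximation)
    def gap(q):                                  # estimate of E[P] - E[H + W]
        tot = 0.0
        for d in paths:
            P, H, W = marginal_costs(x, q, d, p, h, w, beta, t)
            tot += P - (H + W)
        return tot / n
    lo, hi = 0.0, q_hi
    if gap(lo) <= 0.0:                           # already balanced (or over) at q = 0
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For instance, with $K=3$, zero starting inventory, undiscounted costs $p=4$, $h=1$, $w=3$, and a deterministic demand of one unit per period, the estimated gap changes sign at $q_t^B=1$.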
\section{Worst-Case Analysis.} \label{sec:analysis}

In this section, we first build a bridging policy in $\S$\ref{sec:im}. Then, in $\S$\ref{sec:matching}, we construct a new unit-matching scheme that (dynamically) matches units under two different policies in one-to-one correspondence. Based on these results, we show in $\S$\ref{sec:worst} that when FIFO is an optimal issuing policy, our marginal-cost dual-balancing policy has a worst-case performance guarantee of two, i.e., the expected total cost of our policy is at most twice that of an optimal ordering policy. Finally, in $\S$\ref{sec:base}, we compare our policy with an optimal base-stock policy, and show that the expected total cost of our policy is always at most twice that of an optimal base-stock policy.

\subsection{A Bridging Policy: Imaginary Operation Policy.} \label{sec:im}

By Lemma \ref{lem:c1c2}, we know that $\hat{\mathscr{C}}(\pi)-\mathscr{C}(\pi)$ is nonnegative and independent of the policy $\pi$. Therefore, to show that the expected total cost of the marginal-cost dual-balancing policy is at most twice that of an optimal ordering policy (i.e., $\mathrm{E}[\hat{\mathscr{C}}(B)]\leq 2\mathrm{E}[\hat{\mathscr{C}}(OPT)]$), it is sufficient to show that $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(OPT)].$ However, due to the partially ordered nature of multi-dimensional inventory vectors, it is difficult to directly compare the costs under policies $B$ and $OPT$. Therefore, we next propose a bridging policy that we call the \textit{imaginary operation policy} (denoted as $IM$), which allows us to properly modify the inventory vectors so that the inventory vectors under two different policies become completely ordered and the respective costs can be easily compared.
We then show $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]$ (Lemma \ref{lem:optim}) and $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$ (Lemma \ref{lem:imb}), respectively, which leads to our main result $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(OPT)]$ (Theorem \ref{optb}). Policy $IM$ is constructed as follows: At each period $t$, given $\textbf{x}_t$ and $f_t$, let the system under policy $IM$ follow an optimal ordering policy.\footnote{Throughout the paper, by following an optimal ordering policy we mean implementing an optimal decision rule at each period given the system state, rather than copying the ordering quantity from the system under policy $OPT$.} What differentiates policies $IM$ and $OPT$ is that under policy $IM$, at each period after ordering and before demand realization, products in the inventory vector can be ``moved'' from older positions to the position of age 0, i.e., old products can be replaced with new ones for free. Note that since the inventory vectors under policies $IM$ and $OPT$ may differ at each period, the actual ordering quantities under the two policies can also differ. At each period $t$, let $y_t^{B}$ and $y_t^{IM}$ be the total inventory levels after ordering under policies $B$ and $IM$, respectively (note that once the rules of movements under policy $IM$ for periods $1,...,t-1$ are defined, $y_t^{IM}$ is well-defined). Then, we partition the set of decision epochs $\{1,...,T\}$ into the following two subsets: $$\mathscr{T}_P=\{t: y_t^B\geq y_t^{IM}\}, \quad \mathscr{T}_H=\{t: y_t^B< y_t^{IM}\}.$$ The main objective of constructing policy $IM$ is to bound the total shortage penalty of policy $B$ at each period $t\in \mathscr{T}_P$ and the total holding and outdating costs of policy $B$ charged for the units ordered at each period $t\in \mathscr{T}_H$.
Since we have $y_t^B\geq y_t^{IM}$ for all $t\in \mathscr{T}_P$, the total shortage penalty of policy $B$ at $t\in \mathscr{T}_P$ can be easily bounded. Therefore, unit movements are only needed at $t\in \mathscr{T}_H$. The rules of movements are defined as follows, and an illustrative example is provided at the end of this subsection. Let $\mathscr{T}_H=\{\tau_1,...,\tau_n\}$, where $\tau_1<...<\tau_n$. At the beginning of period $\tau_1$ after ordering (and before demand realization), we simply move all units in the inventory vector under policy $IM$ to age 0. At $\tau_2$, units ordered at $\tau_1$ become of age $\tau_2-\tau_1$. Since all units are moved to age 0 at $\tau_1$, the inventory under policy $IM$ is consumed (used to satisfy demand or outdated) no faster than that under policy $B$. Also, we have $y_{\tau_1}^{IM}> y_{\tau_1}^{B}$ at $\tau_1$. Then at $\tau_2$, the total inventory of age greater than or equal to $\tau_2-\tau_1$ under policy $IM$ is no less than that under policy $B$, i.e., $\sum\limits_{k=\tau_2-\tau_1}^{K-1}x_{k,\tau_2}^{IM}\geq \sum\limits_{k=\tau_2-\tau_1}^{K-1}x_{k,\tau_2}^{B}$. Therefore, at the beginning of period $\tau_2$ after ordering, we first move all units of age strictly less than $\tau_2-\tau_1$ under policy $IM$ to age 0, so that only ages 0 and $\tau_2-\tau_1$ carry positive inventory under policy $IM$. We then move some units of age $\tau_2-\tau_1$ under policy $IM$ to age 0 such that $\sum\limits_{k=\tau_2-\tau_1}^{K-1}x_{k,\tau_2}^{IM}= \sum\limits_{k=\tau_2-\tau_1}^{K-1}x_{k,\tau_2}^{B}$. Similarly, for any $i\geq 2$, at the beginning of period $\tau_i$ after ordering, we first move all units of age strictly less than $\tau_i-\tau_{i-1}$ under policy $IM$ to age 0, and then for each $j=1,...,i-1$, move some units of age $\tau_i-\tau_j$ under policy $IM$ to age 0 such that after all the movements, we have: (i) only ages $0, \tau_i-\tau_{i-1},...,\tau_i-\tau_1$ carry positive inventory under policy $IM$; (ii) for $j=1,...,i-1$, the total inventory of age greater than or equal to $\tau_i-\tau_j$ is the same under policies $IM$ and $B$, i.e., \begin{equation} \label{eqn:move1} \sum\limits_{k=\tau_i-\tau_j}^{K-1}x_{k,\tau_i}^{IM}=\sum\limits_{k=\tau_i-\tau_j}^{K-1}x_{k,\tau_i}^{B}, \quad j=1,...,i-1. \end{equation} Based on the rules of movements defined above, we are guaranteed that for all $\tau_i\in\mathscr{T}_H$, after the movements of units at $\tau_i$, the inventory vector under policy $IM$ is ``younger'' than that under policy $B$, i.e., for $k=1,...,K-1$, policy $IM$ has no more inventory of age greater than or equal to $k$ than policy $B$. \begin{lemma} \label{lem:im} $\forall \tau_i\in\mathscr{T}_H$, after the movements of units at $\tau_i$, we have: \begin{equation} \sum\limits_{m=k}^{K-1}x_{m,\tau_i}^{IM}\leq \sum\limits_{m=k}^{K-1}x_{m,\tau_i}^{B}, \quad k=1,...,K-1. \label{Eq. ine} \end{equation} \end{lemma} \begin{figure} \caption{An illustrative example to show the imaginary operation policy ($IM$)} \label{figure:Figure1} \end{figure} An illustrative example describing the rules of movements is presented in Figure \ref{figure:Figure1}. In this example, we have a product lifetime of $K=3$ periods and a planning horizon of $T=4$ periods. Consider a given sample path where $d_1=d_2=d_3=0$ and $d_4=2$.
At the beginning of period $t=1$, assume $q_1^B=2$ and $q_1^{IM}=1$. Then, $y_1^{B}=2>1=y_1^{IM}$, thus $t=1\in \mathscr{T}_P$ and no movements are performed at this period. At the beginning of period $t=2$, assume $q_2^B=1$ and $q_2^{IM}=3$. Then, $y_2^{B}=3<4=y_2^{IM}$, thus $t=2=\tau_1\in \mathscr{T}_H$, and we move all units under policy $IM$ to age 0. At the beginning of period $t=3$, assume $q_3^{B}=q_3^{IM}=2$. Then, $y_3^{B}=5<6=y_3^{IM}$, thus $t=3=\tau_2\in \mathscr{T}_{H}$. The unit ordered at $\tau_1$ under policy $B$ is now of age $\tau_2-\tau_1=1$; therefore, we move one unit of age 1 under policy $IM$ to age 0 such that the amounts of units of age greater than or equal to 1 under policies $B$ and $IM$ are equal. At the beginning of period $t=4$, assume $q_4^B=3$ and $q_4^{IM}=0$. Then, $y_4^{B}=6=y_4^{IM}$, thus $t=4\in \mathscr{T}_P$ and no movements are performed at this period.

\subsection{A Dynamic Unit-Matching Scheme.} \label{sec:matching}

Based on the imaginary operation policy $(IM)$ constructed above, we now introduce a new unit-matching scheme that matches inventory units under policies $B$ and $IM$, which plays a key role in the comparison of $\mathscr{C}(B)$ and $\mathscr{C}(IM)$. In particular, our objective is to match the units ordered at each period $t\in\mathscr{T}_H$ under policy $B$ to units under policy $IM$ in one-to-one correspondence, such that a matched unit under policy $B$ stays in inventory no longer than the corresponding unit under policy $IM$. This way, the total holding and outdating costs charged for the units ordered at $t\in\mathscr{T}_H$ under policy $B$ can be bounded by the total holding and outdating costs under policy $IM$. The idea of examining inventory and demand at a unit level was first proposed by \citet{muharremoglu2008single}, and was first applied to prove a worst-case performance guarantee by \citet{levi2007approximation}, where units under two policies are matched in one-to-one correspondence.
Similar arguments are also used in the subsequent studies on approximation algorithms for nonperishable inventory systems (\citet{levi20082, levi2008approximation, levi2013approximation, shi2014approximation, tao2014approximation}). However, in these studies, the matching of inventory units is static in the sense that once a pair of units under two policies is matched at some period, the matching is permanent. This approach relies on the assumption that all units ordered will eventually be used to satisfy demand, and that a pair of units, once matched, will be used to satisfy the same unit of demand. However, this fails to be true in the perishable inventory setting, where units in inventory may simply outdate without satisfying any demand. To address this complication, we introduce a new matching scheme that we call the \textit{dynamic unit-matching scheme}, under which a unit ordered at $t\in\mathscr{T}_H$ under policy $B$ can be matched and then re-matched to a new unit under policy $IM$. The matching rules are defined as follows, and an illustrative example is provided at the end of this subsection. Recall that $\mathscr{T}_H=\{\tau_1,...,\tau_n\}$, where $\tau_1<...<\tau_n$. At the beginning of period $\tau_1$, after the movements of units under policy $IM$ (based on the rules described in $\S$\ref{sec:im}), we assign indices from 1 to $y_{\tau_1}^{B}$ to units under policy $B$, and indices from 1 to $y_{\tau_1}^{IM}$ to units under policy $IM$,\footnote{For continuous demands and ordering quantities, indices are defined continuously from 0 to $y_{\tau_1}^{B}$ and $y_{\tau_1}^{IM}$.} where $y_{\tau_1}^B< y_{\tau_1}^{IM}$. Older units are assigned smaller indices, and units of the same age are sorted in an arbitrary sequence and assigned indices accordingly. Then, we \textit{temporarily} match each unit ordered at $\tau_1$ under policy $B$ to the unit with the same index under policy $IM$.
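As a concrete reading of the indexing step above, the following is a minimal sketch (our own illustration; the function name and list representation are ours, not from the paper) of how indices $1,2,\dots$ would be assigned to units from oldest to youngest, given an age-indexed inventory vector:

```python
def assign_indices(x):
    """Return the age of the unit holding each index 1..y, oldest first.

    x[k] is the number of on-hand units of age k (k = 0, ..., K-1);
    position i-1 of the returned list is the age of the unit with index i,
    so older units receive smaller indices.
    """
    ages = []
    for k in range(len(x) - 1, -1, -1):  # scan from the oldest age down to age 0
        ages.extend([k] * x[k])
    return ages
```

For instance, with $K=3$ and inventory $(2,0,1)$ (two units of age 0, one of age 2), the age-2 unit receives index 1 and the two age-0 units receive indices 2 and 3; temporary matching then pairs equal indices across the two policies.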
Clearly, each pair of temporarily matched units has the same age (age 0). Then, at the beginning of period $\tau_2$, consider the following three cases. First, if a temporarily matched unit under policy $B$ has been used to satisfy demand, there must exist a unit under policy $IM$ that is used to satisfy the same unit of demand. This is because $y_{\tau_1}^B< y_{\tau_1}^{IM}$ and the inventory under policy $IM$ is consumed no faster than that under policy $B$ (due to Inequality (\ref{Eq. ine})). Therefore, for a temporarily matched unit under policy $B$ that has been used to satisfy demand, we re-match it to the unit under policy $IM$ that is used to satisfy the same unit of demand, and we set this matching to be \textit{permanent}. Second, if a temporarily matched unit under policy $B$ has been outdated, its last temporarily matched unit under policy $IM$ must also have been outdated (since they have the same age). We set this matching to be permanent as well. Third, for units that are still in inventory at the beginning of period $\tau_2$, we re-define the indices and re-match them to new units under policy $IM$. In particular, we assign indices from 1 to $y_{\tau_2}^{B}$ to units under policy $B$ and indices from 1 to $y_{\tau_2}^{IM}$ to units under policy $IM$. Then, we re-match (still temporarily) all previously temporarily matched units (now of age $\tau_2-\tau_1$) and all units ordered at $\tau_2$ (of age 0) under policy $B$ to the units with the same indices under policy $IM$. Since, after the movements of units at $\tau_2$, there is positive inventory only at ages 0 and $\tau_2-\tau_1$ under policy $IM$ and Equation (\ref{eqn:move1}) holds, each pair of temporarily matched units must have the same age (either 0 or $\tau_2-\tau_1$). Continuing in this manner, all units ordered at $t\in\mathscr{T}_H$ under policy $B$ are ultimately permanently matched to units under policy $IM$ once they are used to satisfy demand or are outdated.
For units that are still in inventory at the end of the planning horizon, their last temporarily matched units under policy $IM$ must also be in inventory. This is because after the movements at the last period in $\mathscr{T}_H$, the inventory vector under policy $IM$ is ``younger'' than that under policy $B$ (i.e., Inequality (\ref{Eq. ine})), which ensures that a unit under policy $IM$ is consumed no earlier than the unit with the same index under policy $B$. We then set these matchings to be permanent as well. Note that by construction, there are no overlaps in the permanent matchings. Further, as we will show in Lemma \ref{lem:imb}, any matched unit under policy $B$ stays in inventory no longer than its permanently matched unit under policy $IM$. \begin{figure} \caption{An illustrative example of the dynamic unit-matching scheme} \label{figure:Figure2} \end{figure} Next, we illustrate the dynamic unit-matching scheme in Figure \ref{figure:Figure2} based on the example presented in $\S$\ref{sec:im}. At the beginning of period $t=1\in \mathscr{T}_P$, no matching is defined. At the beginning of period $t=2=\tau_1$, after the movements under policy $IM$, units under policy $B$ are assigned indices from 1 to 3, units under policy $IM$ are assigned indices from 1 to 4, and unit 3 under policy $B$ (ordered at $\tau_1$) is temporarily matched to unit 3 under policy $IM$; units 1-2 are not matched since they were not ordered at periods in $\mathscr{T}_H$. At the beginning of period $t=3=\tau_2$, no permanent matching is defined since unit 3 under policy $B$ is still in inventory. After the movements under policy $IM$, units under policy $B$ are assigned indices from 1 to 5, units under policy $IM$ are assigned indices from 1 to 6, and units 3-5 under policy $B$ (ordered at either $\tau_1$ or $\tau_2$) are temporarily matched to units 3-5 under policy $IM$, respectively.
At the beginning of period $t=4\in \mathscr{T}_P$, no new temporary matching is defined; also, since units 3-5 are all in inventory, no permanent matching is defined. At the end of the horizon (i.e., the end of period 4), units 3-4 under policy $B$ are permanently matched to units 1-2 under policy $IM$ since they are used to satisfy the same units of demand. Unit 5 under policy $B$ is still in inventory at the end of the horizon, so it is permanently matched to its last temporarily matched unit, i.e., unit 5 under policy $IM$. \subsection{Worst-Case Performance Guarantee.} \label{sec:worst} We now prove the worst-case performance guarantee for policy $B$. As discussed before, we use policy $IM$ as a bridging policy: we show that the expected total cost of policy $IM$ is no more than that of policy $OPT$, and that the expected total cost of policy $B$ is at most twice that of policy $IM$. \textbf{Comparison of Policies $IM$ and $OPT$}: Recall that under policy $IM$, an optimal ordering policy is implemented and old products can be replaced with new ones for free. Therefore, to show that the expected total cost of policy $IM$ is no more than that of policy $OPT$, it suffices to show that replacing old products with new ones does not increase the expected total cost. This is intuitive and expected in most perishable inventory problems, because a major concern for perishable inventory systems is outdating, and replacing old products with new ones clearly reduces the chance of products being outdated. {On the other hand, while it is intuitive that replacing old products with new ones does not increase the expected total cost, it is not obvious when this is guaranteed to hold.
{In that regard, it turns out that this condition coincides with the optimality of the FIFO issuing policy: FIFO being an optimal issuing policy implies that younger products are preferable to older ones in inventory, so replacing old products with new ones will not increase the expected total cost; and vice versa. We next formally describe the optimality of an issuing policy, followed by the statement of our assumption. At each period $t$, given the inventory vector $\textbf{x}_t$, the information set $f_t$ and the ordering quantity $q_t$, let $v_{k,t}$ be the amount of units of age $k$ used to meet demand at period $t$, $k=0,...,K-1$; let $x_{0,t}=q_t$, and $y_t=\sum\limits_{k=1}^{K-1}x_{k,t}+q_t$. Then, $v_{k,t}\leq x_{k,t}, k=0,...,K-1$, and $\sum\limits_{k=0}^{K-1}v_{k,t}=\min\{y_t, d_t\}$. An issuing \textit{decision rule} is a function from the set of all possible $(\textbf{x}_t,f_t,q_t,d_t)$ to the set of all possible $\textbf{v}_t=(v_{0,t},...,v_{K-1,t})$; an issuing \textit{policy} is a collection of issuing decision rules at all periods. The FIFO issuing policy is such that at each period $t$, $v_{k,t}=\min\{x_{k,t},(d_t-\sum\limits_{m=k+1}^{K-1}x_{m,t})^+\}, k=0,...,K-1$. Given an initial inventory level and an ordering policy, an issuing policy is said to be optimal if it minimizes the expected total cost among all issuing policies. Let $\Phi_t$ be the cumulative distribution function (c.d.f.) of the demand at period $t$ conditioned on $f_t$. Define the inverse of the c.d.f. as $\Phi_t^{-1}(z):= \inf\{x: \Phi_t(x)\geq z\}$, and define the critical fractile $\bar{y}_t:=\Phi_t^{-1}(\frac{p}{p+h})$.
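The FIFO issuing rule just defined can be sketched directly from the formula (a minimal illustration under our own array conventions; this is not code from the paper):

```python
def fifo_issue(x, d):
    """FIFO issuing: v_k = min(x_k, (d - sum_{m>k} x_m)^+) for ages k = 0..K-1.

    x[k] is the inventory of age k after ordering (x[0] = q_t). Older units
    (larger k) are issued first, so age k serves only the demand left over
    after all strictly older inventory is exhausted.
    """
    K = len(x)
    v = [0] * K
    for k in range(K):
        older = sum(x[m] for m in range(k + 1, K))  # total inventory older than age k
        v[k] = min(x[k], max(d - older, 0))
    return v
```

For example, with inventory $(3,2,1)$ by age and demand $d_t=2$, the rule issues the age-2 unit and one age-1 unit; the total issued equals $\min\{y_t, d_t\}=2$, as required by the feasibility condition above.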
To compare policies $IM$ and $OPT$, we state our assumption as follows: \begin{assumption} \label{assump:1} For any period $t$, if $y_t\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$ holds at $t$ and an optimal ordering policy is implemented at $t+1,...,T$, then FIFO is an optimal issuing policy, i.e., FIFO minimizes the expected total cost over periods $t,...,T$ among all issuing policies. \end{assumption} Intuitively, this assumption says that if the total inventory level after ordering is at or below a threshold at some period, and an optimal ordering policy is implemented in the following periods, then the FIFO issuing policy minimizes the expected total cost from that period to the end of the planning horizon. Note that a stronger sense of optimality of an issuing policy requires the issuing policy to be optimal for any initial inventory level and under an arbitrary sequence of ordering quantities (\citet{pierskalla1972optimal}). However, we only require FIFO to be optimal for small initial inventory levels and under an optimal ordering policy, which is a much weaker assumption. We acknowledge that in practice, directly checking whether Assumption \ref{assump:1} holds may be difficult. To date, FIFO has been shown to be optimal under i.i.d. demand (\citet{fries1975optimal2}), which is widely assumed in the perishable inventory literature, or under zero holding cost (\citet{pierskalla1972optimal}). In $\S$\ref{sec:fifo}, we extend these existing findings and present a necessary and sufficient condition, as well as three easy-to-check sufficient conditions, that ensure the optimality of the FIFO issuing policy. In particular, we show that the condition presented in \citet{chao2015approximation} to ensure a performance guarantee of two (i.e., demand is independent and stochastically non-decreasing over time) is a special case of ours, and we further provide conditions and examples where FIFO is optimal and our performance guarantee is strictly tighter.
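The threshold in Assumption \ref{assump:1} is a running maximum of the critical fractiles $\bar{y}_\tau=\Phi_\tau^{-1}(\frac{p}{p+h})$. For a discrete demand distribution, the generalized inverse $\Phi_t^{-1}(z)=\inf\{x:\Phi_t(x)\geq z\}$ can be evaluated as follows (a sketch with our own function names, for illustration only):

```python
def inv_cdf(values, probs, z):
    """Generalized inverse c.d.f.: inf{x : Phi(x) >= z} for a discrete distribution."""
    cum = 0.0
    for x, p in sorted(zip(values, probs)):  # scan support in increasing order
        cum += p
        if cum >= z - 1e-12:  # tolerance guards against floating-point round-off
            return x
    return max(values)

def critical_fractile(values, probs, p, h):
    """Critical fractile y_bar = Phi^{-1}(p / (p + h))."""
    return inv_cdf(values, probs, p / (p + h))
```

For instance, for demand uniform on $\{0,1,2,3\}$ with $p=3$ and $h=1$, the fractile is $p/(p+h)=0.75$ and the resulting $\bar{y}_t$ is 2.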
With Assumption \ref{assump:1}, we now present a structural property of the optimal cost-to-go function, which is a key result for comparing policies $IM$ and $OPT$. At each period $t$, given $\textbf{x}_t$ and $f_t$, let $C_t(\textbf{x}_t,f_t)$ be the optimal cost-to-go, and let $y_t=\sum\limits_{k=1}^{K-1}x_{k,t}+q_t$. Then, we have:
\begin{align*}
C_t(\textbf{x}_t, f_t)=\min_{q_t\geq 0}\big\{&p\mathrm{E}[(D_t-y_t)^+|f_t]+h\mathrm{E}[(y_t-D_t)^+|f_t]+w\mathrm{E}[(x_{K-1,t}-D_t)^+|f_t]\\
&+\beta \mathrm{E}[C_{t+1}(\textbf{X}_{t+1},F_{t+1})]\big\}.
\end{align*}
For $k=1,...,K-1$, in the continuous case, let $C_t^{(k)}(\textbf{x}_t,f_t)$ denote the partial derivative of $C_t(\textbf{x}_t,f_t)$ with respect to $x_{k,t}$ (differentiability can be established following arguments similar to those in \citet{fries1975optimal2}); in the discrete case, let $C_t^{(k)}(\textbf{x}_t,f_t)$ denote the increment of $C_t(\textbf{x}_t,f_t)$ caused by a unit increase in $x_{k,t}$. Then, we have the following result. \begin{lemma} \label{lem:ij} Under Assumption \ref{assump:1}, for $t=1,...,T$, (i) $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\geq 0, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$; (ii) $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, 1\leq i<j\leq K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. \end{lemma} Lemma \ref{lem:ij} implies that when FIFO is an optimal issuing policy (i.e., under Assumption \ref{assump:1}), the optimal cost-to-go is non-decreasing in the inventory levels; moreover, if the total inventory level is small, the increment of the discounted optimal cost-to-go caused by a unit increase in an older age class is higher than that caused by a unit increase in a younger one, and both are bounded by the unit outdating cost.
Then, since units are only moved from older to younger positions under policy $IM$ (it is not difficult to show that $\sum\limits_{k=1}^{K-1}x_{k,t+1}^{IM}\leq (\max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t})^+$ holds for all $t$), it is intuitive that the expected total cost of policy $IM$ is no more than that of policy $OPT$, which we formally prove in the next lemma. \begin{lemma} \label{lem:optim} Under Assumption \ref{assump:1}, $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]$. \end{lemma} \textbf{Comparison of Policies $B$ and $IM$}: With Lemma \ref{lem:optim}, to establish the worst-case performance guarantee of policy $B$, all that remains to show is $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$. To do so, we first provide a key lemma, where $P_t^{\pi}, H_t^{\pi}$ and $W_t^{\pi}$ denote the marginal shortage penalty, holding and outdating costs at period $t$ under policy $\pi$, respectively. \begin{lemma} \label{lem:imb1} With probability one, (i) $\sum\limits_{t\in \mathscr{T}_P}P_t^{B}\leq \sum\limits_{t=1}^TP_t^{IM}$; (ii) $\sum\limits_{t\in \mathscr{T}_H}H_t^{B}\leq \sum\limits_{t=1}^TH_t^{IM}$; (iii) $\sum\limits_{t\in \mathscr{T}_H}W_t^{B}\leq \sum\limits_{t=1}^TW_t^{IM}.$ \end{lemma} \proof{Proof.} (i) For any given sample path, after the movements of units under policy $IM$, we have $y_t^B\geq y_t^{IM}, \forall t\in \mathscr{T}_P$. Then, $P_t^B\leq P_t^{IM}, \forall t\in \mathscr{T}_P$. Therefore, $\sum\limits_{t\in \mathscr{T}_P}P_t^{B}\leq \sum\limits_{t\in \mathscr{T}_P}P_t^{IM}\leq \sum\limits_{t=1}^TP_t^{IM}$ with probability one.
(ii) To show that the total holding cost charged for units ordered at $t\in\mathscr{T}_H$ under policy $B$ is no more than the total holding cost under policy $IM$, it is sufficient to show that under the dynamic unit-matching scheme described in $\S$\ref{sec:matching}, all matched units under policy $B$ (i.e., all units ordered at $t\in\mathscr{T}_H$) stay in inventory no longer than their permanently matched units under policy $IM$. Units ordered at $t\in\mathscr{T}_H$ under policy $B$ can either (1) be used to satisfy demand, or (2) be outdated or still in inventory at the end of the planning horizon. First, recall that after the movements at each period $\tau_i\in \mathscr{T}_H$, units under policy $B$ are assigned indices from 1 to $y_{\tau_i}^{B}$ and units under policy $IM$ are assigned indices from 1 to $y_{\tau_i}^{IM}$, from oldest to youngest. Since the inventory vector under policy $IM$ is ``younger'' than that under policy $B$ (i.e., Inequality (\ref{Eq. ine})), each unit under policy $IM$ is consumed no earlier than the unit with the same index under policy $B$. Therefore, for a matched unit under policy $B$ that is used to satisfy demand (call it $u_1$), its last temporarily matched unit under policy $IM$, which has the same index and age as $u_1$, is consumed no earlier than $u_1$. Then, given the FIFO issuing policy, the permanently matched unit of $u_1$, which is used to satisfy the same unit of demand as $u_1$, must be no younger than $u_1$. Second, by the dynamic unit-matching scheme, for a matched unit under policy $B$ that is outdated or still in inventory at the end of the planning horizon (call it $u_2$), its permanently matched unit under policy $IM$ is defined as its last temporarily matched unit. Since all temporarily matched pairs of units have the same age, and further considering possible movements from older to younger positions for units under policy $IM$, $u_2$ stays in inventory no longer than its permanently matched unit.
Since the permanent matchings are defined in a one-to-one correspondence and the above argument holds for any given sample path, we have $\sum\limits_{t\in \mathscr{T}_H}H_t^{B}\leq \sum\limits_{t=1}^TH_t^{IM}$ with probability one. (iii) By the dynamic unit-matching scheme, for a matched unit under policy $B$ that is outdated, the permanently matched unit under policy $IM$ must be outdated at the same period. Since the permanent matchings are defined in a one-to-one correspondence and the above argument holds for any given sample path, we have $\sum\limits_{t\in \mathscr{T}_H}W_t^{B}\leq \sum\limits_{t=1}^TW_t^{IM}$ with probability one. \Halmos \endproof With the above result, it is now easy to reach the following conclusion. \begin{lemma} \label{lem:imb} $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$. \end{lemma} \proof{Proof.} Let $\mathbbm{1}(t\in \mathscr{T}_P)$ and $\mathbbm{1}(t \in \mathscr{T}_H)$ be two indicator functions. Then, $\mathbbm{1}(t\in \mathscr{T}_P)+\mathbbm{1}(t \in \mathscr{T}_H)=1$ with probability one, and we have the following result.
\begin{align*} \mathrm{E}[\mathscr{C}(B)] =&\sum\limits_{t=1}^T\mathrm{E}[\mathrm{E}[(P_t^{B}+H_t^{B}+W_t^{B})|F_t]]\\ =&\sum\limits_{t=1}^T\mathrm{E}[\mathrm{E}[(P_t^{B}+H_t^{B}+W_t^{B})(\mathbbm{1}(t\in \mathscr{T}_P)+\mathbbm{1}(t\in \mathscr{T}_H))|F_t]]\\ =&\sum\limits_{t=1}^T\mathrm{E}[\mathrm{E}[2P_t^B\mathbbm{1}(t\in \mathscr{T}_P)+2(W_t^B+H_t^B)\mathbbm{1}(t\in \mathscr{T}_H)|F_t]]\\ =&\mathrm{E}\bigg[\sum\limits_{t\in \mathscr{T}_P}2P_t^{B}+\sum\limits_{t\in \mathscr{T}_H}2(H_t^{B}+W_t^{B})\bigg]\\ \leq &\mathrm{E}\bigg[2\sum\limits_{t=1}^TP_t^{IM}+2\sum\limits_{t=1}^T(H_t^{IM}+W_t^{IM})\bigg]\\ =&2\mathrm{E}[\mathscr{C}(IM)], \end{align*} where the third equality follows from the definition of the marginal-cost dual-balancing policy and the fact that $\mathbbm{1}(t\in \mathscr{T}_P)$ and $\mathbbm{1}(t \in \mathscr{T}_H)$ are deterministic for given $f_t$, and the inequality follows from Lemma \ref{lem:imb1}. \Halmos \endproof Based on the above results, we now state our main theorem. \begin{theorem} \label{optb} Under Assumption \ref{assump:1}, the marginal-cost dual-balancing policy has a worst-case performance guarantee of two. That is, the expected total cost of the marginal-cost dual-balancing policy is at most twice that of an optimal ordering policy, i.e., $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(OPT)]$. \end{theorem} \proof{Proof.} Since $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]$ from Lemma \ref{lem:optim} and $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$ from Lemma \ref{lem:imb}, we have $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)] \leq 2\mathrm{E}[\mathscr{C}(OPT)]$, which completes the proof.
\Halmos \endproof \subsection{Comparison with an Optimal Base-Stock Policy.} \label{sec:base} In the previous subsection, we compared our policy with an optimal ordering policy, and showed that the expected total cost of our policy is at most twice that of an optimal ordering policy when FIFO is an optimal issuing policy (i.e., under Assumption \ref{assump:1}). In this subsection, we switch the benchmark and compare our policy with an optimal base-stock policy, showing that the expected total cost of our policy is always at most twice that of an optimal base-stock policy. This result is noteworthy because, as discussed in $\S$\ref{sec:intro}, among the heuristic policies developed for perishable inventory systems, the base-stock policy, under which the total inventory is replenished up to the same level at each period, is particularly popular due to its simple implementation and competitive numerical performance (e.g., \citet{nahmias1976myopic, cohen1976analysis, chazan1977markovian, nandakumar1993near, cooper2001pathwise, li2009note, chen2014coordinating, zhang2016nonparametric}). However, the computation of an optimal base-stock policy involves evaluating the expected outdating cost for each given base-stock level, which is again intractable due to the large state space. Although many heuristic approaches have been developed to compute ``good'' base-stock levels, none of them admits a theoretical performance guarantee. In the following theorem, we compare our marginal-cost dual-balancing policy with an optimal base-stock policy (denoted as $BA$), and show that the expected total cost of our policy (although not itself a base-stock policy) is at most twice that of an optimal base-stock policy. We remark that since FIFO is always optimal under base-stock policies (\citet{chazan1977markovian}), Theorem \ref{baseb} follows without any assumption.
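For concreteness, the base-stock (order-up-to) ordering rule referenced above is simply the following (a sketch; the variable names are ours):

```python
def base_stock_order(x, S):
    """Order-up-to-S: raise the total inventory across all ages up to level S.

    x[k] is the carried-over inventory of age k; the order quantity is the
    shortfall of total inventory relative to S (zero if already above S).
    """
    return max(S - sum(x), 0)
```

Whenever the carried-over total does not exceed $S$, the post-ordering inventory level equals $S$, which is what makes the policy so simple to implement.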
\begin{lemma}[Chazan and Gal (1977)] \label{lem:base} Under base-stock policies, the cumulative amount of outdating under the FIFO issuing policy is smaller than that under any other issuing policy with probability one. \end{lemma} \begin{theorem} \label{baseb} The expected total cost of the marginal-cost dual-balancing policy is at most twice that of an optimal base-stock policy, i.e., $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(BA)]$. \end{theorem} The proof of Theorem \ref{baseb} is similar to that of Theorem \ref{optb}, except that policy $IM$ is now constructed based on $BA$ instead of $OPT$. Since FIFO is always optimal under base-stock policies (Lemma \ref{lem:base}), we can easily show that $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(BA)]$ (in fact, we have $\mathscr{C}(IM)\leq \mathscr{C}(BA)$ with probability one). Given this result and $\mathrm{E}[\mathscr{C}(B)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$ from Lemma \ref{lem:imb}, the result in Theorem \ref{baseb} immediately follows. Also note that the result in Theorem \ref{baseb} can be easily extended to cases where the base-stock levels at different periods are different but non-decreasing over time. \section{Truncated-Balancing Policy.} \label{sec:truncated} In this section, we first prove that a myopic policy under the marginal-cost accounting scheme provides a lower bound on the optimal ordering quantity. Then, by combining this \textit{specific} lower bound and any upper bound on the optimal ordering quantity with the marginal-cost dual-balancing policy, we present a more general class of algorithms that we call the truncated-balancing policy. We later show that while both the marginal-cost dual-balancing policy and the truncated-balancing policy have the same worst-case performance guarantee of two, the latter performs much better in our computational studies (see $\S$\ref{sec:numerical}).
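Mechanically, the truncation described above amounts to projecting the balancing quantity onto the interval between the lower and upper bounds; a one-line sketch (our own notation; the formal piecewise definition is given below):

```python
def truncated_balancing_qty(q_B, q_L, q_U):
    """Truncated-balancing order: clamp the balancing quantity q_B to [q_L, q_U].

    Assumes q_L <= q_U; returns q_B when it already lies in the interval,
    and the violated bound otherwise.
    """
    return min(max(q_B, q_L), q_U)
```

With the trivial bounds $q_t^L=0$ and $q_t^U=\infty$, this reduces to the balancing quantity itself.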
In the following proposition, we show that under Assumption \ref{assump:1}, the minimizer of the total marginal costs provides a lower bound on the optimal ordering quantity. A similar result has been developed in \citet{levi2007approximation} for the nonperishable backlogging case (clearly, Assumption \ref{assump:1} is not needed for the nonperishable case), where the total inventory level is known to be a sufficient statistic for the system state. However, generalizing this result to the lost sales case, where a pipeline inventory vector is needed to describe the system state, remains an open problem. In that regard, our problem is similar to the lost sales case, since we also need an inventory vector to describe the system state, and the analysis for the nonperishable backlogging case is not applicable to our case. \begin{proposition} \label{prop:lower} At any period $t$, given $\textbf{x}_t$ and $f_t$, let $q_t^{OPT}$ be an optimal ordering quantity, and $q_t^{L}$ be the smallest quantity that minimizes $P_t(\textbf{x}_t,f_t,q_t)+H_t(\textbf{x}_t,f_t,q_t)+W_t(\textbf{x}_t,f_t,q_t)$. Then, under Assumption \ref{assump:1}, $q_t^{L}\leq q_t^{OPT}$. \end{proposition} \begin{remark} The reason why a myopic ordering quantity under the marginal-cost accounting scheme provides a lower bound on the optimal ordering quantity is that, under the marginal-cost accounting scheme, the optimal cost-to-go function is non-increasing in the inventory levels of all ages (see the proof in the Appendix); thus, ordering more units can decrease the optimal cost-to-go of the next period, and the minimizer of the single-period cost, which ignores the future benefit of ordering more, will tend to order less than the optimal quantity. We remark that while the monotonicity result for the optimal cost-to-go function can be easily established for the nonperishable inventory case, it can in fact be violated for the perishable inventory case in general.
However, as we show in the proof of Proposition \ref{prop:lower}, the optimality of the FIFO issuing policy is a sufficient condition for establishing the monotonicity result, which we believe is a new and important contribution to the literature. \end{remark} Based on the above result, we now define the \textit{truncated-balancing policy} (denoted as $TB$) as follows. At each period $t$, given $\textbf{x}_t$ and $f_t$, let $q_t^B$ be the balancing ordering quantity defined by Equation (\ref{eqn:balancing}), let $q_t^L$ be the lower bound on the optimal ordering quantity defined in Proposition \ref{prop:lower} (or any \textit{looser} lower bound), and let $q_t^U$ be any upper bound on the optimal ordering quantity. Then, the truncated-balancing ordering quantity $q_t^{TB}$ is defined as: $$q_t^{TB}= \left\{ \begin{aligned} &q_t^B,\quad \text{if}~q_t^L\leq q_t^B\leq q_t^U,\\ &q_t^L,\quad \text{if}~q_t^B<q_t^L,\\ &q_t^U,\quad \text{if}~q_t^B>q_t^U. \end{aligned} \right. $$ Note that for trivial lower and upper bounds (i.e., $q_t^L=0, q_t^U=\infty$), the truncated-balancing policy reduces to the marginal-cost dual-balancing policy. In the following theorem, we show that the truncated-balancing policy, as a more general class of algorithms, also admits a worst-case performance guarantee of two. \begin{theorem} \label{opttb} Under Assumption \ref{assump:1}, the truncated-balancing policy has a worst-case performance guarantee of two. That is, the expected total cost of the truncated-balancing policy is at most twice that of an optimal ordering policy, i.e., $\mathrm{E}[\mathscr{C}(TB)]\leq 2\mathrm{E}[\mathscr{C}(OPT)]$. \end{theorem} \begin{remark} {We remark that policy $TB$ is not guaranteed to perform at least as well as policy $B$ (thus the proof of Theorem \ref{opttb} is nontrivial).
While it may appear that $q_t^{TB}$ is at least as good as $q_t^B$, this holds only if an optimal policy is implemented in the following periods.} \end{remark} \begin{remark}{We also remark that unlike the nonperishable inventory case, where the lower and upper bounds in the definition of policy $TB$ can be replaced with any (tighter) ones, in our case the lower bound $q_t^L$, as a minimizer of the single-period marginal cost, is special and cannot be tightened. To see why this is the case, let $y_t^{TB}$ and $y_t^{IM}$ be the total inventory levels after ordering at period $t$ under policies $TB$ and $IM$, respectively; also, given $\textbf{x}_t^{TB}$ and $f_t$, let $y_t^B$ denote the total inventory level after ordering if the balancing ordering quantity $q_t^B$ is ordered. Then, given that $q_t^B<q_t^{TB}=q_t^L$, it is possible that $y_t^B\leq y_t^{IM}<y_t^{TB}$ (while for the nonperishable case, since a base-stock policy is optimal, given that $q_t^B<q_t^{TB}=q_t^L$, we always have $y_t^B< y_t^{TB}\leq y_t^{OPT}$). In this case, since $y_t^{TB}>y_t^{IM}$, it is not possible to match all the $q_t^{TB}$ units to units under policy $IM$. Therefore, we instead match only the first $q_t^B$ units ordered under policy $TB$ to units under policy $IM$, and then show that the total marginal cost at period $t$ for ordering $q_t^{TB}$ is no more than that for ordering $q_t^B$, which is guaranteed to hold only if $q_t^{TB}=q_t^L$ is a minimizer of the single-period marginal cost (see more details in the proof of Theorem \ref{opttb} in the Appendix).} \end{remark} \section{Sufficient Conditions for Optimality of FIFO Issuing Policy.} \label{sec:fifo} In $\S$\ref{sec:analysis}-\ref{sec:truncated}, we have shown that our proposed algorithms have a worst-case performance guarantee of two when FIFO is an optimal issuing policy (i.e., under Assumption \ref{assump:1}).
In this section, we provide a necessary and sufficient condition, together with several easy-to-check sufficient conditions, that ensure the optimality of the FIFO issuing policy; this extends the existing literature and provides insights into the key trade-offs of different issuing policies. In Lemma \ref{lem:ij}, we presented a necessary condition for the optimality of the FIFO issuing policy. In the following proposition, we show that this condition is also sufficient (in fact, we only need part of that condition), which leads to a \textit{necessary and sufficient} condition for the optimality of the FIFO issuing policy. \begin{proposition} \label{prop:fifo} Assumption \ref{assump:1} holds if and only if for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}.$ \end{proposition} Proposition \ref{prop:fifo} says that FIFO is optimal if and only if the increment of the discounted optimal cost-to-go caused by a unit increase in the inventory of any age is bounded by the unit outdating cost. This provides an overall insight into the key trade-off in ensuring the optimality of the FIFO issuing policy, and provides intuition for finding sufficient conditions for Assumption \ref{assump:1} to hold. In particular, a unit increase in the on-hand inventory can potentially increase both holding and outdating costs but decrease the shortage penalty for future periods. Consider a case where demand in future periods is sufficiently large, so that the decrease in shortage penalty offsets the increase in holding cost. In this case, the increment of the total cost is bounded by the unit outdating cost, i.e., the condition in Proposition \ref{prop:fifo} holds.
Also, consider another case where the unit holding cost is sufficiently small, so that the total discounted holding and outdating costs for future periods are bounded by the unit outdating cost regardless of the demand distribution. In this case, the condition in Proposition \ref{prop:fifo} also holds. Based on these interpretations of Proposition \ref{prop:fifo}, we now provide several easy-to-check sufficient conditions that ensure the optimality of the FIFO issuing policy. In particular, in Propositions \ref{prop:demand} and \ref{prop:cost}, we formalize the two intuitions discussed above and show that Assumption \ref{assump:1} holds either when future demand is sufficiently large or when the holding cost is sufficiently small. Next, in Proposition \ref{prop:mix}, we present a sufficient condition for the optimality of the FIFO issuing policy that involves both demand and holding cost; however, the respective conditions are much weaker than those that involve only demand (Proposition \ref{prop:demand}) or only holding cost (Proposition \ref{prop:cost}). We remark that our results also extend the existing findings on the optimality of the FIFO issuing policy (\citet{fries1975optimal2, pierskalla1972optimal}), which we describe in further detail below. In the following proposition, we first present a sufficient condition on the demand distribution that ensures the optimality of the FIFO issuing policy. \begin{proposition} \label{prop:demand} Assumption \ref{assump:1} holds if $\forall f_{T+1}, \bar{y}_t=\Phi_{t}^{-1}(\frac{p}{p+h})$ is non-decreasing in $t$. \end{proposition} As discussed above, the intuition for this condition is that when future demand is large, increasing the on-hand inventory level will not increase the sum of the shortage penalty and holding cost, and thus the condition in Proposition \ref{prop:fifo} holds, which implies the optimality of the FIFO issuing policy. Clearly, the condition in Proposition \ref{prop:demand} is an extension of i.i.d.
demand considered in \citet{fries1975optimal2}, and also includes the independent and stochastically non-decreasing demand (in the sense of first-order dominance)\footnote{Note that the exact condition \citet{chao2015approximation} needs in their proof is that $S_t$ is non-decreasing in $t$, where $S_t$ is the solution in $y$ of the equation $\mathrm{E}[(y-D_t)^+|f_t]=\mathrm{E}[(D_t-y)^+|f_t]$.} presented in \citet{chao2015approximation} as a special case. Next, we present a sufficient condition regarding the holding cost that ensures the optimality of FIFO issuing policy, which extends the zero holding cost considered in \citet{pierskalla1972optimal}. \begin{proposition} \label{prop:cost} Assumption \ref{assump:1} holds if $h\leq \frac{1-\beta}{\beta}w$. \end{proposition} This result is also intuitive because when $h\leq \frac{1-\beta}{\beta}w$, the total discounted holding cost for an arbitrary number of future periods, $(\beta+\beta^2+...)h=\beta h/(1-\beta)$, is bounded by the unit outdating cost $w$. Note that Proposition \ref{prop:cost} identifies an important class of problems where the performance guarantee of our algorithms is strictly tighter than that presented in \citet{chao2015approximation}. Recall the cost transformation $h=\hat{h}+(1-\beta)\hat{c}, w=\hat{w}+\beta \hat{c}$. Then it is straightforward to check that $h\leq \frac{1-\beta}{\beta}w$ if and only if $\hat{h}\leq \frac{1-\beta}{\beta}\hat{w}$. Thus, Proposition \ref{prop:cost} implies that as long as the original holding cost $\hat{h}$ is sufficiently small, our algorithms have a performance guarantee of two. Under general demand, the performance guarantee presented in \citet{chao2015approximation} is $2+\frac{(K-2)h}{Kh+w}$.
For general $K>2$ and any $\beta\in(0,1)$, for their performance guarantee $2+\frac{(K-2)h}{Kh+w}=2+\frac{(K-2)(\hat{h}+(1-\beta)\hat{c})}{K(\hat{h}+(1-\beta)\hat{c})+\hat{w}+\beta \hat{c}}$ to be equal to 2, the transformed holding cost $h$ needs to be zero, which implies that both the original ordering cost $\hat{c}$ and the holding cost $\hat{h}$ need to be zero. On the other hand, we only assume the original holding cost $\hat{h}$ to be small and allow an arbitrarily large ordering cost $\hat{c}$ (note that conducting the cost transformation to eliminate $\hat{c}$ will end up with positive $h$). Therefore, {as we illustrate in the following example,} the performance guarantee of our algorithms can be strictly tighter, especially for problems with positive ordering cost and small or zero holding cost, which represent a large class of perishable (especially blood) inventory problems (\citet{haijema2007blood,zhou2011inventory}). \begin{example} \label{ex:1} Consider an instance with product lifetime $K=5$, and original cost parameters $\hat{c}=1, \hat{h}=\hat{w}=0$. Demand is a general stochastic process. Then for discount factor values $\beta=0.9,0.95,0.99$, the performance guarantees presented in \citet{chao2015approximation} are 2.214, 2.125, 2.029, respectively, while the performance guarantee of our algorithms is exactly two in all three cases. \end{example} Next, inspired by Propositions \ref{prop:demand} and \ref{prop:cost}, in the following proposition, we present a sufficient condition for the optimality of FIFO issuing policy that involves both demand and holding cost. We further show that Proposition \ref{prop:mix} is in fact much more general and provides a unified framework that extends both Propositions \ref{prop:demand} and \ref{prop:cost}. \begin{proposition} \label{prop:mix} Assumption \ref{assump:1} holds if $\forall f_{T+1}, h\leq \frac{1-\gamma}{\gamma}p+\frac{1-\beta \gamma}{\beta \gamma}w$, where $\gamma=\max\limits_{1< s\leq t\leq T}\Phi_t(\bar{y}_s)$.
\end{proposition} Clearly, $\gamma$ provides an upper bound on the probability that there will be excess inventory after demand realization at each period. The intuition behind Proposition \ref{prop:mix} is that by increasing a unit of in-hand inventory and keeping the ordering quantity unchanged, the increase in the total expected holding and outdating costs is at most $\gamma(h+w)$, while the decrease in the expected shortage penalty is at least $(1-\gamma)p$. Then, the increment in the expected total cost from increasing a unit of in-hand inventory is bounded by $\gamma(h+w)-(1-\gamma)p$; requiring this to be less than or equal to $w/\beta$ (in order to satisfy the condition in Proposition \ref{prop:fifo}) and rearranging terms, we obtain the condition in Proposition \ref{prop:mix}. We believe this is a novel result that provides key insights into the main trade-offs of FIFO issuing policy. In particular, Proposition \ref{prop:mix} says that FIFO is optimal when the demand over time does not ``drop'' significantly (so that $\gamma$ is not too large) and the holding cost is moderately small. Moreover, there is a clear trade-off between demand and holding cost: The smaller the holding cost, the fewer requirements we need on the demand, and vice versa. This result is powerful and in fact provides a unified framework that extends both Propositions \ref{prop:demand} and \ref{prop:cost}:} On one hand, suppose $\forall f_{T+1}$, the critical fractile $\bar{y}_t=\Phi_{t}^{-1}(\frac{p}{p+h})$ is non-decreasing in $t$; then $\gamma=\Phi_t(\bar{y}_t)=\frac{p}{p+h}$ and thus $h(= \frac{1-\gamma}{\gamma}p)\leq \frac{1-\gamma}{\gamma}p+\frac{1-\beta \gamma}{\beta \gamma}w$ is automatically satisfied, i.e., Proposition \ref{prop:mix} reduces to Proposition \ref{prop:demand}.
On the other hand, suppose we do not impose any restriction on demand; then the worst case is $\gamma=1$ and we thus need $h\leq \frac{1-\beta}{\beta}w$, i.e., Proposition \ref{prop:mix} reduces to Proposition \ref{prop:cost}. Furthermore, the condition in Proposition \ref{prop:mix} also leads to a broader class of problems where the performance guarantee of our algorithms is strictly tighter than that in \citet{chao2015approximation}, {as we illustrate in the following example}. \begin{example} \label{ex:2} Consider an instance with product lifetime $K=5$, cost parameters $p=w=5,h=1$, and discount factor $\beta=1$ (any $\beta\leq 1$ would work). Let the demand at each period be independent and assume demands at periods $1,3,5,...$ are exponentially distributed with mean 5, i.e., exp($\frac{1}{5}$), and demands at periods $2,4,6,...$ are exp($\frac{1}{6}$) (note that our result does not rely on this specific pattern of fluctuation; the demand can be, for example, exp($\frac{1}{5}$) on weekdays and exp($\frac{1}{6}$) on weekends). It is easy to check that the critical fractile for exp($\frac{1}{5}$) is $\Phi_{1}^{-1}(\frac{p}{p+h})\approx\Phi_{1}^{-1}(0.833)\approx8.9$ and the critical fractile for exp($\frac{1}{6}$) is $\Phi_{2}^{-1}(\frac{p}{p+h})\approx\Phi_{2}^{-1}(0.833)\approx10.7$. Then, $\gamma=\max\limits_{1< s\leq t\leq T}\Phi_t(\bar{y}_s)\approx\Phi_{1}(10.7)\approx0.882$, and $h=1< \frac{1-\gamma}{\gamma}p+\frac{1-\beta \gamma}{\beta \gamma}w\approx1.338$. Thus, the condition in Proposition \ref{prop:mix} is clearly satisfied and the performance guarantee of our algorithms is two. On the other hand, the performance guarantee presented in \citet{chao2015approximation} is $2+\frac{(K-2)h}{Kh+w}= 2.3$. \end{example} We remark that besides the sufficient conditions we presented above, there could potentially be many other situations where FIFO issuing policy is optimal (i.e., the condition in Proposition \ref{prop:fifo} holds).
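The numbers in Examples \ref{ex:1} and \ref{ex:2} are easy to reproduce. The following short Python sketch is our own sanity check, not part of the formal development; it uses the standard closed forms for exponential quantiles and tail probabilities, and rounds intermediate quantities slightly differently than the text does.

```python
import math

# Example 1: K = 5, original costs c_hat = 1, h_hat = w_hat = 0.
# Transformed costs: h = (1 - beta) * c_hat, w = beta * c_hat.
K, c_hat = 5, 1.0
guarantees = []
for beta in (0.9, 0.95, 0.99):
    h = (1 - beta) * c_hat
    w = beta * c_hat
    guarantees.append(round(2 + (K - 2) * h / (K * h + w), 3))
print(guarantees)  # [2.214, 2.125, 2.029]

# Example 2: p = w = 5, h = 1, beta = 1, alternating exp(1/5) and exp(1/6) demand.
p, w, h, beta = 5.0, 5.0, 1.0, 1.0
fractile = p / (p + h)                     # critical fractile 5/6 ~ 0.833
ybar_odd = -5.0 * math.log(1 - fractile)   # quantile of exp(1/5): ~8.9
ybar_even = -6.0 * math.log(1 - fractile)  # quantile of exp(1/6): ~10.7
gamma = 1 - math.exp(-ybar_even / 5.0)     # Phi_1 at the larger fractile: ~0.883
bound = (1 - gamma) / gamma * p + (1 - beta * gamma) / (beta * gamma) * w
# bound ~ 1.32 without rounding gamma (the text's 1.338 uses gamma ~ 0.882);
# either way h = 1 satisfies the condition of Proposition prop:mix.
print(h < bound)  # True
```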
It is worth noting that under all those situations, our algorithms would admit a worst-case performance guarantee of two. \section{Computational Experiments.} \label{sec:numerical} We start with a discussion on the computation of the balancing ordering quantity in $\S$\ref{sec:computation}. Then, in $\S$\ref{sec:compound}, we test the performances of our policies on the Hospital Alpha platelet inventory control problem using real data. \subsection{Computation of Balancing Ordering Quantity.} \label{sec:computation} At each period $t$, given $\textbf{x}_t$ and $f_t$, the marginal shortage penalty $P_t(\textbf{x}_t,f_t,q_t)$ is non-increasing in $q_t$, while the marginal holding and outdating costs $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$ are non-decreasing in $q_t$. Thus the balancing ordering quantity $q_t^B$ defined in Equation (\ref{eqn:balancing}) can be computed using a simple binary search. However, to do so, we first need to efficiently compute the expected marginal costs for each given $q_t$. Given the distribution of $D_t$, the computation of $P_t(\textbf{x}_t,f_t,q_t)$ is straightforward. Thus, in this subsection, we focus on the computation of $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$. Similar to existing studies on balancing policies, for general demands, the expected marginal holding and outdating costs $H_t(\textbf{x}_t,f_t,q_t)$ and $W_t(\textbf{x}_t,f_t,q_t)$ can be computed using methods such as Monte Carlo simulation. However, if demand over time is independent and integer-valued (we consider integer-valued quantities in our computational experiments), we can further obtain closed-form expressions for the expected marginal costs as follows. Recall that $A_{0,t}=0$, and for $k=1,...,K-1$, $A_{k,t}$ denotes the total demand over periods $t, ...,t+k-1$ that cannot be satisfied by the inventory of $(x_{K-k,t},...,x_{K-1,t})$.
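To make the binary-search step concrete, here is a minimal sketch (our illustration, not the paper's implementation). The monotone stand-in cost functions are hypothetical placeholders; for integer order quantities, exact equality of the two expected marginal costs may not be attainable, so we return the smallest $q$ at which the non-decreasing marginal holding-plus-outdating cost reaches the non-increasing marginal shortage penalty.

```python
def balancing_quantity(P, HW, q_max):
    """Binary search for the smallest integer q in [0, q_max] with HW(q) >= P(q),
    where P is non-increasing and HW is non-decreasing in q; this crossing point
    plays the role of the balancing ordering quantity for integer orders."""
    lo, hi = 0, q_max
    while lo < hi:
        mid = (lo + hi) // 2
        if HW(mid) >= P(mid):
            hi = mid          # crossing point is at mid or earlier
        else:
            lo = mid + 1      # still below the crossing point
    return lo

# Hypothetical monotone stand-ins for the expected marginal costs:
P = lambda q: max(50 - q, 0)  # expected marginal shortage penalty, non-increasing
HW = lambda q: 2 * q          # expected marginal holding + outdating, non-decreasing
print(balancing_quantity(P, HW, 100))  # 17
```

Each evaluation of `HW` would in practice call the closed-form expressions developed below (or a Monte Carlo estimate for general demand), so the search costs $O(\log q_{\max})$ such evaluations.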
Similar to \citet{nahmias1975optimal}, for given $(x_{K-k+1,t},...,x_{K-1,t})$ and $f_t$, define: $$R_{k,t}(x_{K-k,t})=\mathrm{P}(A_{k-1,t}+D_{t+k-1}< x_{K-k,t}|f_t), k=1,...,K,$$ which denotes the conditional probability that there will be outdates at the end of period $t+k-1$. Then, $\forall u\geq 0$, we have $R_{1,t}(u)=\mathrm{P}(D_{t}< u|f_t)$, and for $k=2,...,K$: \begin{align*} R_{k,t}(u)&=\mathrm{P}(A_{k-1,t}+D_{t+k-1}< u|f_t)\\ &=\sum\limits_{v=1}^{u}\mathrm{P}(A_{k-1,t}< v|f_t)\mathrm{P}(D_{t+k-1}=u-v|f_t)\\ &=\sum\limits_{v=1}^{u}\mathrm{P}((A_{k-2,t}+D_{t+k-2}-x_{K-k+1,t})^+< v|f_t)\mathrm{P}(D_{t+k-1}=u-v|f_t)\\ &=\sum\limits_{v=1}^{u}\mathrm{P}(A_{k-2,t}+D_{t+k-2}< v+x_{K-k+1,t}|f_t)\mathrm{P}(D_{t+k-1}=u-v|f_t)\\ &=\sum\limits_{v=1}^{u}R_{k-1,t}(v+x_{K-k+1,t})\mathrm{P}(D_{t+k-1}=u-v|f_t). \end{align*} Therefore, the probabilities $R_{k,t}$ can be computed efficiently by recursion. In this case, at each period $t$, given $\textbf{x}_t$, $f_t$ and $q_t$, the expected marginal holding and outdating costs can be computed as: \begin{align*} H_t(\textbf{x}_t,f_t,q_t)&=\sum\limits_{k=0}^{K-1}h_{t+k}\mathrm{E}\bigg[(q_t-(A_{k,t}+D_{t+k}-\sum\limits_{m=1}^{K-k-1}x_{m,t})^+)^+\bigg|f_t\bigg]\\ &=\sum\limits_{k=0}^{K-1}h_{t+k}\sum\limits_{u=1}^{q_t}\mathrm{P}\bigg((A_{k,t}+D_{t+k}-\sum\limits_{m=1}^{K-k-1}x_{m,t})^+< u\bigg|f_t\bigg)\\ &=\sum\limits_{k=0}^{K-1}h_{t+k}\sum\limits_{u=1}^{q_t}\mathrm{P}\bigg(A_{k,t}+D_{t+k}< u+\sum\limits_{m=1}^{K-k-1}x_{m,t}\bigg|f_t\bigg)\\ &=\sum\limits_{k=0}^{K-1}h_{t+k}\sum\limits_{u=1}^{q_t}R_{k+1,t}\bigg(u+\sum\limits_{m=1}^{K-k-1}x_{m,t}\bigg), \end{align*} and \begin{align*} W_t(\textbf{x}_t,f_t,q_t)&=w_{t+K-1}\mathrm{E}[(q_t-A_{K-1,t}-D_{t+K-1})^+|f_t]\\ &=w_{t+K-1}\sum\limits_{u=1}^{q_t}\mathrm{P}(A_{K-1,t}+D_{t+K-1}< u|f_t)\\ &=w_{t+K-1}\sum\limits_{u=1}^{q_t}R_{K,t}(u).
\end{align*} \subsection{Experiments for the Platelet Inventory Control Problem.} \label{sec:compound} We now consider the platelet inventory control problem at Hospital Alpha, as described in $\S$\ref{sec:intro}. At Hospital Alpha, 1) platelets are ordered on a daily basis, and an order placed at the end of the previous day will arrive in the morning of the next day; 2) as demand arises, older products are typically issued first to reduce outdates; and 3) unmet demand is satisfied by emergency deliveries. Therefore, our assumptions of zero lead time, FIFO issuing policy, and lost sales are applicable in this setting. Platelets have a short lifetime of $K=3$ days, and we consider a planning horizon of 4 weeks (i.e., $T=28$ days). As discussed in $\S$\ref{sec:intro}, we focus on the main source of demand for platelets: cardiac surgeries, and we model daily demand for platelets by a compound Poisson distribution (\citet{gregor1982evaluation, kopach2003models, katsaliaki2008cost}). Similar to two recent studies by \citet{haijema2007blood} and \citet{zhou2011inventory}, we assume that demand over time is independently distributed, but the distribution on different days may not be identical. Based on the cardiac surgery records from January to April in 2014, we identify a significant weekly periodicity, and estimate the average number of surgeries from Monday to Sunday as 2.6, 5.5, 1.9, 3.2, 3.7, 0.1, and 0, respectively. We assume the amount of platelets needed per surgery is stationary; based on the platelet transfusion records, we estimate it as a geometric distribution with mean 0.32. Cardiac surgeries are usually scheduled days or even weeks in advance; therefore, forecast information on the number of surgeries per day is typically available. We consider a forecast horizon equal to the product lifetime $K=3$ days, and assume that the forecast is perfect.
That is, the number of surgeries at day $t+2$ becomes known at the beginning of day $t$, and will not change in the following days. In this case, at any day $t$, given $f_t$, each of the demands $D_t,...,D_{t+K-1}$ is a sum of a deterministic number of i.i.d. geometric random variables (i.e., a negative binomial distribution, instead of a compound Poisson distribution). Based on our interactions with the blood bank manager at Hospital Alpha, we estimate the unit outdating cost $w$ to be equal to the purchase cost $\$500$. On the other hand, the shortage penalty for blood inventory problems could include the cost of emergency shipment from other blood banks and/or the penalty of postponing the surgeries, which is usually high and often estimated as 2-10 times the purchase cost (\citet{haijema2007blood}). Therefore, we consider three different shortage penalties $p=\$1000,\$2500,\$5000$. Also, we consider a zero holding cost $h=0$ and no discount, i.e., $\beta=1$. We first benchmark the performances of our policies against the optimal policy solved by dynamic programming. The state of the dynamic program consists of a $K-1$ dimensional vector of inventory levels of ages $1,...,K-1$, and a $K$ dimensional vector of forecasts on the number of surgeries at days $t,...,t+K-1$. Although the problem size we face here is not too large, it still takes more than 50 hours to compute the optimal policy on a standard 2.6GHz PC, whereas the ordering quantities under our policies can be computed on the fly in an online fashion. On the other hand, while the compound Poisson distribution is widely considered in the blood inventory literature (e.g., \citet{gregor1982evaluation, kopach2003models, katsaliaki2008cost}), none of these studies has considered forecast information on the number of patients per period. A natural question is how much we lose by ignoring this information.
Therefore, we also compare the performances of our policies with the ``optimal'' policy that does not make use of the forecast information (i.e., it simply treats the demand at each day as a compound Poisson distribution, and thus the state of this dynamic program simply consists of a $K-1$ dimensional vector of inventory levels of ages $1,...,K-1$). We use $B, TB, OPT$ and $OPT_{wof}$ to denote the marginal-cost dual-balancing policy,\footnote{Since only two cost components $p$ and $w$ are considered here, our marginal-cost dual-balancing policy is the same as both the proportional-balancing policy ($PB$) and the dual-balancing policy $(DB)$ proposed in \citet{chao2015approximation}.} the truncated-balancing policy, the optimal policy, and the ``optimal'' policy without forecast information, respectively. For each policy, we generate $10,000$ random scenarios, and use a sample average to estimate the expected total cost. Let $\bar{\mathscr{C}}(\pi)$ and $\bar{\mathscr{C}}(OPT)$ denote the estimated total costs under policies $\pi$ and $OPT$, respectively. We define the performance error of policy $\pi$ as: $$error(\pi)\mathrel{\mathop:}=\frac{\bar{\mathscr{C}}(\pi)-\bar{\mathscr{C}}(OPT)}{\bar{\mathscr{C}}(OPT)}\times 100\%.$$ We can also characterize the value of forecast information in this setting by assessing the performance improvements of our policies over policy $OPT_{wof}$. Let $\bar{\mathscr{C}}(\pi)$ and $\bar{\mathscr{C}}(OPT_{wof})$ be the estimated total costs under policies $\pi$ and $OPT_{wof}$, respectively.
We define the performance improvement of policy $\pi$ as: $$impr(\pi)\mathrel{\mathop:}=\frac{\bar{\mathscr{C}}(OPT_{wof})-\bar{\mathscr{C}}(\pi)}{\bar{\mathscr{C}}(OPT_{wof})}\times 100\%.$$ \begin{table}[h] \caption{Performance summary of each policy for the platelet inventory control problem $(w=\$ 500)$} \begin{center} \begin{tabular}{l l r r r r} \hline Policy & & \multicolumn{1}{c}{~~~~~B} & \multicolumn{1}{c}{~~~~~TB} & \multicolumn{1}{c}{~~~~OPT} & \multicolumn{1}{c}{~OPT$_{wof}$} \\ \hline $p=\$ 1000$~~~ & $\bar{\mathscr{C}}$ \hspace{7.0mm} (\$)& ~~~~6813 & ~~~~6684 & ~~~~6174 & ~~7262 \\ & $error$ \hspace{0.2mm} (\%)& 10.4 & 8.3 & 0 & 17.6 \\ & $impr$ \hspace{1.0mm} (\%)& 6.2 & 8.0 & 15.0 & 0 \\ \hline $p=\$ 2500$ & $\bar{\mathscr{C}}$ \hspace{7.0mm} (\$)& ~~~~10059 & ~~~~9666 & ~~~~8943 & ~~10532 \\ & $error$ \hspace{0.2mm} (\%)& 12.5 & 8.1 & 0 & 17.8 \\ & $impr$ \hspace{1.0mm} (\%)& 4.5 & 8.2 & 15.1 & 0 \\ \hline $p=\$ 5000$ & $\bar{\mathscr{C}}$ \hspace{7.0mm} (\$)& ~~~~12689 & ~~~~11918 & ~~~~10990 & ~~12999 \\ & $error$ \hspace{0.2mm} (\%)& 15.5 & 8.4 & 0 & 18.3 \\ & $impr$ \hspace{1.0mm} (\%)& 2.4 & 8.3 & 15.5 & 0 \\ \hline \end{tabular} \end{center} \label{table:Table2} \end{table} The estimated total cost, performance error, and performance improvement of each policy are reported in Table \ref{table:Table2}. We first observe that both our marginal-cost dual-balancing policy ($B$) and truncated-balancing policy ($TB$) perform significantly better than the theoretical worst-case performance guarantee of two (i.e., error of $100\%$). Further, policy $TB$ has a significant performance improvement over policy $B$, especially when the ratio of unit shortage penalty over unit outdating cost $p/w$ gets large. 
In particular, we observe from the experiments that when the ratio $p/w$ gets larger, both the optimal ordering quantity and the ordering quantity under policy $B$ get larger; however, the ordering quantity under policy $B$ grows more slowly than the optimal ordering quantity. In this case, the truncation by the lower bound helps correct the under-ordering of policy $B$ and brings the ordering quantity up to a more reasonable level. Meanwhile, we also observe that the ``optimal'' policy that ignores the forecast information ($OPT_{wof}$) performs poorly, with a performance error of more than 17\% in all three instances, and our policy $TB$ has a substantial performance improvement (more than 8\%) over policy $OPT_{wof}$. Therefore, the value of the forecast information is significant, and implementing an inventory control policy that takes into account such information has a high potential to achieve a better performance in practice. \section{Conclusions.} \label{sec:conclusion} In this paper, we consider a fixed-lifetime perishable inventory control problem assuming demand is a general stochastic process that can be nonstationary, correlated, and dynamically evolving. In theory, an optimal ordering policy for this problem can be computed using standard dynamic programming; however, it becomes computationally intractable for realistic-size problems due to the high dimension of the state space. We first present a computationally efficient algorithm that we call the marginal-cost dual-balancing policy. We then prove that under the marginal-cost accounting scheme, the minimizer of the single-period cost provides a lower bound on the optimal ordering quantity; by combining the specific lower bound we derive and any upper bound on the optimal ordering quantity with the marginal-cost dual-balancing policy, we present a more general class of algorithms that we call the truncated-balancing policy.
We prove that when FIFO is an optimal issuing policy, both of our policies have a worst-case performance guarantee of two, i.e., the expected total cost of our policies is at most twice that of an optimal policy. We further provide a necessary and sufficient condition and several easy-to-check sufficient conditions that ensure the optimality of FIFO issuing policy. We also compare our marginal-cost dual-balancing policy with an optimal base-stock policy, and show that the expected total cost of our policy is always at most twice that of an optimal base-stock policy. Finally, we conduct numerical experiments based on a platelet inventory control problem using real data and show that a) our policies perform significantly better than the theoretical worst-case performance guarantee, and b) the truncated-balancing policy significantly outperforms the marginal-cost dual-balancing policy, which illustrates that the lower bound we derive is effective and helps improve the performance of the marginal-cost dual-balancing policy. Our worst-case analysis is built on two novel ideas, the imaginary operation policy and the dynamic unit-matching scheme. In particular, we show that when FIFO is an optimal issuing policy, moving units from older to younger positions in the inventory vector can only decrease the expected total cost. This is very intuitive and helps significantly simplify the analysis by allowing us to properly modify the inventory vectors and effectively match units under two different policies. We believe these ideas are valuable beyond this study and can also be applied to facilitate the analysis of other perishable inventory problems. \section*{Acknowledgments.} The authors thank Prof. Cong Shi and the anonymous referees for their valuable suggestions on improving the quality and the presentation of the paper.
\begin{APPENDIX}{~} \small \textbf{Proof of Lemma \ref{lem:c1c2}.} From the system dynamics, we have $\sum\limits_{k=1}^{K-1}X_{k,t+1}^{\pi}=(Y_t^{\pi}-D_t)^+-(X_{K-1,t}^{\pi}-D_t)^+$, where $(Y_t^{\pi}-D_t)^+$ is the amount of inventory after demand realization at period $t$, and $(X_{K-1,t}^{\pi}-D_t)^+$ is the amount of outdates at period $t$. Then we have: \begin{align*} \hat{\mathscr{C}}(\pi)-\mathscr{C}(\pi)=&\sum\limits_{t=1}^{T}\beta^{t-1}\hat{c}\bigg(Q_t^{\pi}+(D_t-Y_t^{\pi})^+-(1-\beta)(Y_t^{\pi}-D_t)^+-\beta (X_{K-1,t}^{\pi}-D_t)^+\bigg)-\beta^{T}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,T+1}^{\pi}\\ =&\sum\limits_{t=1}^{T}\beta^{t-1}\hat{c}\bigg(Q_t^{\pi}+(D_t-Y_t^{\pi})^+-(Y_t^{\pi}-D_t)^+\bigg)+\sum\limits_{t=1}^{T}\beta^{t}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,t+1}^{\pi}-\beta^{T}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,T+1}^{\pi}\\ =&\sum\limits_{t=1}^{T}\beta^{t-1}\hat{c}(D_t-\sum\limits_{k=1}^{K-1}X_{k,t}^{\pi})+\sum\limits_{t=1}^{T-1}\beta^{t}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,t+1}^{\pi}\\ =&\sum\limits_{t=1}^{T}\beta^{t-1}\hat{c}(D_t-\sum\limits_{k=1}^{K-1}X_{k,t}^{\pi})+\sum\limits_{t=2}^{T}\beta^{t-1}\hat{c}\sum\limits_{k=1}^{K-1}X_{k,t}^{\pi}\\ =&\sum\limits_{t=1}^T\beta^{t-1}\hat{c}D_t, \end{align*} where the second equality follows from the fact that $\sum\limits_{k=1}^{K-1}X_{k,t+1}^{\pi}=(Y_t^{\pi}-D_t)^+-(X_{K-1,t}^{\pi}-D_t)^+$ as explained above, and the third equality comes from the fact that $(D_t-Y_t^{\pi})^+-(Y_t^{\pi}-D_t)^+=D_t-Y_t^{\pi}$. \Halmos \textbf{Proof of Lemma \ref{lem:im}.} Without loss of generality, assume that $\tau_i-\tau_1\leq K-1$ (otherwise, we can start from the largest $\tau_i-\tau_j$ that is less than or equal to $K-1$). By construction of policy $IM$, we have $\sum\limits_{k=\tau_i-\tau_1}^{K-1}x_{k,\tau_i}^{IM}=\sum\limits_{k=\tau_i-\tau_1}^{K-1}x_{k,\tau_i}^{B}$ and $x_{k,\tau_i}^{IM}=0, k=\tau_i-\tau_1+1,...,K-1$. Therefore, Inequality \ref{Eq. ine} holds for $k=\tau_i-\tau_1,...,K-1$. 
Then, for $j=2,...,i-1$, by construction of policy $IM$, we have $\sum\limits_{k=\tau_i-\tau_{j-1}}^{K-1}x_{k,\tau_i}^{IM}=\sum\limits_{k=\tau_i-\tau_{j-1}}^{K-1}x_{k,\tau_i}^{B}$, $\sum\limits_{k=\tau_i-\tau_j}^{K-1}x_{k,\tau_i}^{IM}=\sum\limits_{k=\tau_i-\tau_j}^{K-1}x_{k,\tau_i}^{B}$ and $x_{k,\tau_i}^{IM}=0, k=\tau_i-\tau_j+1,...,\tau_i-\tau_{j-1}-1$. Therefore, Inequality \ref{Eq. ine} holds for $k=\tau_i-\tau_j,...,\tau_i-\tau_{j-1}-1$. Finally, by construction of policy $IM$, we have $\sum\limits_{k=\tau_i-\tau_{i-1}}^{K-1}x_{k,\tau_i}^{IM}=\sum\limits_{k=\tau_i-\tau_{i-1}}^{K-1}x_{k,\tau_i}^{B}$ and $x_{k,\tau_i}^{IM}=0, k=1,...,\tau_i-\tau_{i-1}-1$. Therefore, Inequality \ref{Eq. ine} holds for $k=1,...,\tau_i-\tau_{i-1}-1$, which completes the proof. \Halmos \textbf{Proof of Lemma \ref{lem:ij}.} We first show that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Suppose for some $t+1$, we have $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})> w/\beta$ for some $k=1,...,K-1$ and some $\textbf{x}_{t+1}$ and $f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_t$. At period $t$, let $\textbf{x}_t$ and $q_t$ be such that $x_{k-1,t}=x_{k,t+1}+\epsilon, x_{m-1,t}=x_{m,t+1}, m=1,...,k-1,k+1,...,K-1$, and $x_{K-1,t}=d_t$, where $x_{0,t}=q_t$ and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+q_t=\sum\limits_{k=1}^{K-1}x_{k,t+1}+d_t+\epsilon\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Then, FIFO issuing policy will issue $d_t$ units of age $K-1$. Consider another issuing policy $\gamma$ which issues $d_t-\epsilon$ units of age $K-1$ and $\epsilon$ units of age $k-1$.
Then, there will be $\epsilon$ more units of outdates under issuing policy $\gamma$ and $\epsilon$ more inventory of age $k$ at the beginning of period $t+1$ under FIFO issuing policy. By assumption, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})> w/\beta$; thus $\gamma$ is strictly better than FIFO, which is a contradiction. We next show that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\geq 0, k=1,...,K-1, \forall \textbf{x}_{t+1},f_{t+1}$, and $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}), 1\leq i<j\leq K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. The claim is clearly true for $t=T$ since $C_{T+1}(\textbf{x}_{T+1},f_{T+1})=0, \forall \textbf{x}_{T+1},f_{T+1}$. Assume the claim is true for $t+1$. We now show that it is also true for $t$, i.e., $C_{t}^{(k)}(\textbf{x}_{t},f_{t})\geq 0, k=1,...,K-1, \forall \textbf{x}_{t},f_{t}$, and $C_{t}^{(i)}(\textbf{x}_{t},f_{t})\leq C_{t}^{(j)}(\textbf{x}_{t},f_{t}), 1\leq i<j\leq K-1, \forall \textbf{x}_{t},f_{t}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}-d_{t-1}$. We start with $C_{t}^{(k)}(\textbf{x}_{t},f_{t})\geq 0, k=1,...,K-1, \forall \textbf{x}_{t},f_{t}$. Consider the following two cases. First, suppose at period $t$ we have $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon$ and $x'_{m,t}=x_{m,t}, m=1,...,k-1,k+1,...,K-1$, i.e., System 2 has $\epsilon$ more units of age $k$, and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$.
Let System 2 follow an optimal ordering policy, and let System 1 order $\epsilon$ more units than System 2 and follow an optimal ordering policy afterward. Then, it is sufficient to show that System 1 has no more expected total cost than System 2. Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ for Systems 1 and 2, respectively. Assume that there are $\xi\leq \epsilon$ more units of outdates in System 2 than in System 1 at period $t$. Then we have $\sum\limits_{k=1}^{K-1}x_{k,t+1}=\sum\limits_{k=1}^{K-1}x'_{k,t+1}+\xi\leq (\max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_t)^+$, and $\sum\limits_{k=m}^{K-1}x_{k,t+1}\leq\sum\limits_{k=m}^{K-1}x'_{k,t+1}, m=2,...,K-1$. By induction assumption, we have $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, 1\leq i<j\leq K-1,$ $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_t$. Therefore, System 1 has no more expected total cost than System 2. Second, suppose at period $t$ we have $\sum\limits_{k=1}^{K-1}x_{k,t}\geq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon$ and $x'_{m,t}=x_{m,t}, m=1,...,k-1,k+1,...,K-1$, i.e., System 2 has $\epsilon$ more units of age $k$, and $\epsilon$ is any positive number. Let both Systems 1 and 2 follow an optimal ordering policy. Since $\sum\limits_{k=1}^{K-1}x_{k,t}\geq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$, clearly, the ordering quantities in both systems are zero at period $t$. Let $y_t$ and $y'_t$ be the total inventory levels after ordering in Systems 1 and 2, respectively. Then, $\max\limits_{\tau=1,...,t}\bar{y}_{\tau}\leq y_t\leq y'_t$. Thus the expected cost at period $t$ in System 1 is no more than that in System 2.
Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ for Systems 1 and 2, respectively. Then we have $x_{k,t+1}\leq x'_{k,t+1},k=1,...,K-1$. By induction assumption, we have $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\geq 0,k=1,...,K-1, \forall\textbf{x}_{t+1},f_{t+1}$. Therefore, System 1 has no more expected total cost than System 2. Now it remains to show $C_{t}^{(i)}(\textbf{x}_{t},f_{t})\leq C_{t}^{(j)}(\textbf{x}_{t},f_{t}), 1\leq i<j\leq K-1$, $\forall \textbf{x}_{t},f_{t}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}-d_{t-1}$. Given $\textbf{x}_t,f_t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}-d_{t-1}$, consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}'_t$ and System 2 starts from $\textbf{x}''_t$, where $x'_{i,t}=x_{i,t}+\epsilon, x'_{k,t}=x_{k,t}, k\neq i$, and $x''_{j,t}=x_{j,t}+\epsilon, x''_{k,t}=x_{k,t}, k\neq j, 1\leq i<j\leq K-1$, i.e., System 1 starts with $\epsilon$ more units of age $i$ and System 2 with $\epsilon$ more units of age $j$, where $i<j$. Let $\epsilon$ be positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}$. Let System 2 follow an optimal ordering policy, and let System 1 order the same amount as System 2 and follow an optimal policy afterward. Then, it is sufficient to show that System 1 has no more expected total cost than System 2. Let $\textbf{x}'_{t+1}$ and $\textbf{x}''_{t+1}$ be the inventory vectors at period $t+1$ in Systems 1 and 2, respectively. Then, we have $x'_{k,t+1}=x''_{k,t+1}, k=1,...,i, x'_{i+1,t+1}\geq x''_{i+1,t+1}$, and $x'_{k,t+1}\leq x''_{k,t+1}, k=i+2,...,K-1$.
Since $\bar{y}_t$ minimizes the expected cost at period $t$ and by induction assumption, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\geq 0, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$, the optimal order-up-to level at each period $t$ is at most $\bar{y}_t$ (because ordering more than $\bar{y}_t$ will increase both the cost at $t$ and the cost-to-go at $t+1$). Assume that there are $\xi\leq \epsilon$ more units of outdates in System 2 than in System 1 at period $t$. Then, $\sum\limits_{k=1}^{K-1}x'_{k,t+1}=\sum\limits_{k=1}^{K-1}x''_{k,t+1}+\xi\leq (\max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t})^+$. By induction assumption, we have $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}) \leq w/\beta, 1\leq i<j\leq K-1, \forall\textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Therefore, System 1 has no more expected total cost than System 2, which completes the proof. \Halmos \textbf{Proof of Lemma \ref{lem:optim}.} First, since $\bar{y}_t$ minimizes the expected cost at period $t$ and by Lemma \ref{lem:ij}, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\geq 0, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$, the optimal order-up-to level at each period $t$ is at most $\bar{y}_t$ (because ordering more than $\bar{y}_t$ will increase both the cost at $t$ and the cost-to-go at $t+1$). Therefore, given that we start from zero inventory and an optimal ordering policy is followed at each period under policy $IM$, we have $\sum\limits_{k=1}^{K-1}x_{k,t+1}^{IM}\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}, t=1,...,T$.
For the case where we start from a high inventory level, by construction, the ordering quantity under policy $IM$ will always be zero until period $t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}^{IM}\leq \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}$ (and by forward induction the inequality will continue to hold at all of the following periods), before which no movements of units will be performed since the inventory level under policy $IM$ will be no more than that under policy $B$. Therefore, we always have $\sum\limits_{k=1}^{K-1}x_{k,t+1}^{IM}\leq (\max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t})^+$ at $t+1$ if units are moved at $t$. We now prove Lemma \ref{lem:optim} in a recursive manner. Recall that for each given sample path, $\mathscr{T}_H=\{\tau_1,...,\tau_n\}$. Consider a variation of policy $IM$, call it $IM_1$; under $IM_1$, the movements of units are only performed at $\tau_1$, and an optimal ordering policy is followed and no movements are performed at the following periods. Then, to show $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]$, it is sufficient to show $\mathrm{E}[\mathscr{C}(IM_1)]\leq \mathrm{E}[\mathscr{C}(OPT)]$; for if this is true, then by a similar argument, the movements at future periods can only further decrease the total cost. Consider any realization of $\tau_1$. Clearly, the total costs under policies $IM_1$ and $OPT$ are the same for all periods $1,...,\tau_1-1$. Without loss of generality, further assume that at $\tau_1$, we have only moved $\epsilon$ units of age $k$ to age zero, $k=1,...,K-1$. Then, after the movements, there are $\epsilon$ more units of age zero but $\epsilon$ fewer units of age $k$ under policy $IM_1$ than under policy $OPT$. Consider the following two cases. First, suppose the amounts of outdates at $\tau_1$ under policies $IM_1$ and $OPT$ are the same. 
Then, the total costs at $\tau_1$ under the two policies are the same, and the total inventory levels at $\tau_1+1$ under the two policies are also the same, but the inventory vector under policy $IM_1$ is ``younger'', i.e., $\sum\limits_{k=1}^{K-1}x_{k,\tau_1+1}^{IM}=\sum\limits_{k=1}^{K-1}x_{k,\tau_1+1}^{OPT}$, and $\sum\limits_{k=m}^{K-1}x_{k,\tau_1+1}^{IM}\leq \sum\limits_{k=m}^{K-1}x_{k,\tau_1+1}^{OPT}, m=2,...,K-1$. By Lemma \ref{lem:ij}, we have $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}), 1\leq i<j\leq K-1, \forall\textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Therefore, policy $IM_1$ has no more expected total cost than policy $OPT$. Second, suppose there are $\xi\leq \epsilon$ more units of outdates at $\tau_1$ under policy $OPT$ than under policy $IM_1$ (this is only possible when we have moved units of age $K-1$ to age zero under policy $IM_1$). Then, we have $\sum\limits_{k=1}^{K-1}x_{k,\tau_1+1}^{IM}=\sum\limits_{k=1}^{K-1}x_{k,\tau_1+1}^{OPT}+\xi$, and $\sum\limits_{k=m}^{K-1}x_{k,\tau_1+1}^{IM}\leq \sum\limits_{k=m}^{K-1}x_{k,\tau_1+1}^{OPT}, m=2,...,K-1$. By Lemma \ref{lem:ij}, we have $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}) \leq w/\beta, 1\leq i<j\leq K-1, \forall\textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Therefore, policy $IM_1$ has no more expected cost than policy $OPT$, which completes the proof. \Halmos \textbf{Proof of Theorem \ref{baseb}.} We prove the theorem in a similar way to the proof of Theorem \ref{optb}, except that now policy $IM$ is constructed based on $BA$ instead of $OPT$. In this case, policy $IM$ also follows a base-stock policy and orders up to the same base-stock level as policy $BA$. Recall that for each given sample path, $\mathscr{T}_H=\{\tau_1,...,\tau_n\}$. 
Consider a variation of policy $IM$, call it $IM_1$; under $IM_1$, the movements of units are only performed at $\tau_1$ and no movements are performed at the following periods. Then, to show $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(BA)]$, it is sufficient to show $\mathrm{E}[\mathscr{C}(IM_1)]\leq \mathrm{E}[\mathscr{C}(BA)]$; for if this is true, then by a similar argument, the movements at future periods can only further decrease the total cost. Since both policies $IM_1$ and $BA$ follow the same base-stock policy, the total shortage penalty and holding cost under policies $IM_1$ and $BA$ are exactly the same at each period. Consider any realization of $\tau_1$. Clearly, the outdating costs under policies $IM_1$ and $BA$ are the same for all periods $1,...,\tau_1-1$. Further, since units are only moved from older to younger positions at $\tau_1$ under policy $IM_1$, Lemma \ref{lem:base} implies that with probability one, the total outdating cost under policy $IM_1$ is no more than that under policy $BA$. Therefore, we have $\mathscr{C}(IM_1)\leq \mathscr{C}(BA)$ with probability one. The rest of the proof proceeds in the same way as that of Theorem \ref{optb}. \Halmos \textbf{Proof of Proposition \ref{prop:lower}.} We start by providing a structural property of the optimal cost-to-go function under the marginal-cost accounting scheme. For $t=1,...,T$, given $\textbf{x}_t$ and $f_t$, let $\tilde{C}_t(\textbf{x}_t,f_t)$ denote the optimal cost-to-go function at period $t$ under the marginal-cost accounting scheme, and as in the paper, let $\Gamma_t(\textbf{x}_t,f_t,q_t)=P_t(\textbf{x}_t,f_t,q_t)+H_t(\textbf{x}_t,f_t,q_t)+W_t(\textbf{x}_t,f_t,q_t)$. 
Then, the optimality equation under the marginal-cost accounting scheme is: $$\tilde{C}_t(\textbf{x}_t,f_t)=\min\limits_{q_t\geq 0}\bigg\{\Gamma_t(\textbf{x}_t,f_t,q_t)+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1},F_{t+1})|f_t]\bigg\}.$$ For $k=1,...,K-1$, for the continuous case, let $\tilde{C}_t^{(k)}(\textbf{x}_t,f_t)$ denote the partial derivative of $\tilde{C}_t(\textbf{x}_t,f_t)$ with respect to $x_{k,t}$; for the discrete case, let $\tilde{C}_t^{(k)}(\textbf{x}_t,f_t)$ denote the increment of $\tilde{C}_t(\textbf{x}_t,f_t)$ caused by a unit increase of $x_{k,t}$. Then, we have the following result. \begin{lemma} \label{lem:lower} Under Assumption \ref{assump:1}, for $t=1,...,T$, $\tilde{C}_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq 0, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$. \end{lemma} \textbf{Proof.} The claim is clearly true for $t=T$ since $\tilde{C}_{T+1}(\textbf{x}_{T+1},f_{T+1})=0, \forall \textbf{x}_{T+1}, f_{T+1}$. Assume that the claim is true for $t+1$. We now show that it is also true for $t$. Consider the following two cases. First, suppose at period $t$ we have $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon$ and $x'_{m,t}=x_{m,t}, m=1,...,k-1,k+1,...,K-1$, i.e., System 2 has $\epsilon$ more units of age $k$, and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Let System 1 follow an optimal ordering policy. To define the ordering policy in System 2, let $t_0\in(t,t+K-1]$ be the period such that at all $t,...,t_0-1$, there are still some products in System 2 that were ordered prior to period $t$, while by the beginning of period $t_0$, all of those products are either used to satisfy demand or outdated. 
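As an aside, the optimality equation $\tilde{C}_t(\textbf{x}_t,f_t)=\min_{q_t\geq 0}\{\Gamma_t(\textbf{x}_t,f_t,q_t)+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1},F_{t+1})|f_t]\}$ can be solved by backward induction on small instances. The sketch below does so for a toy case with lifetime $K=2$ (so the state reduces to the scalar amount of age-1 stock), i.i.d. uniform demand, and a discretized order grid; the cost parameters and demand distribution are illustrative assumptions, not the paper's data.

```python
# Toy backward induction for C_t(x) = min_q { Gamma_t(x, q) + E[C_{t+1}(X_{t+1})] }.
# Lifetime K = 2: the state is x = units of age 1.  All numbers below are
# illustrative assumptions, not taken from the paper.
P_COST, H_COST, W_COST = 4.0, 1.0, 2.0   # shortage, holding, outdating costs
DEMANDS = [0, 1, 2]                       # demand uniform on {0, 1, 2}
GRID = range(0, 5)                        # discretized inventory/order levels
T = 3

def one_period(x, q, d):
    """FIFO with K = 2: demand hits age-1 stock first, then the fresh order."""
    shortage = max(d - x - q, 0)
    outdated = max(x - d, 0)                  # unsold age-1 units perish
    fresh_left = max(q - max(d - x, 0), 0)    # fresh leftovers become age 1
    cost = P_COST * shortage + H_COST * fresh_left + W_COST * outdated
    return cost, fresh_left

C = {(T + 1, x): 0.0 for x in GRID}           # terminal condition C_{T+1} = 0
policy = {}
for t in range(T, 0, -1):                     # backward in time
    for x in GRID:
        best = None
        for q in GRID:
            exp_cost = 0.0
            for d in DEMANDS:
                stage_cost, x_next = one_period(x, q, d)
                exp_cost += (stage_cost + C[(t + 1, x_next)]) / len(DEMANDS)
            if best is None or exp_cost < best[0]:
                best = (exp_cost, q)
        C[(t, x)], policy[(t, x)] = best
```

On this instance the result is easy to check by hand: at the last period with zero starting stock, ordering two units balances expected shortage and holding costs.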
Then, we define the ordering policy in System 2 as follows: for each period $t,...,t_0-1$, let System 2 order up to the same level as System 1 (order nothing if this is not feasible), and let System 2 follow an optimal ordering policy afterward. Then, to prove the lemma, it is sufficient to show that the expected total cost under the marginal-cost accounting scheme in System 2 is no more than that in System 1. By definition of $t_0$, no units ordered at periods $\geq t$ will be outdated by the beginning of period $t_0$. Then, the total cost under the marginal-cost accounting scheme in each system comprises the following three parts: i) the shortage penalties that occur at periods $t,...,t_0-1$, ii) the holding costs that occur at periods $t,...,t_0-1$ charged for units ordered at periods $\geq t$, and iii) the total costs (shortage penalties, holding and outdating costs) that occur at periods $\geq t_0$. i) Consider the shortage penalties that occur at periods $t,...,t_0-1$. By definition of the ordering policy under System 2, after ordering, there is at least the same amount of inventory in System 2 as in System 1 at each period $t,...,t_0-1$. Therefore, the total shortage penalty at periods $t,...,t_0-1$ in System 2 is no more than that in System 1. ii) Consider the holding costs that occur at periods $t,...,t_0-1$ charged for units ordered at periods $\geq t$. Since System 2 started with more inventory, it is possible that for all periods $t,...,t_0-1$, the initial inventory level in System 2 is higher than the total inventory level in System 1 after ordering, so that the ordering quantity in System 2 is zero for all $t,...,t_0-1$. In this case, System 2 would be empty at the beginning of period $t_0$ and there is nothing to prove. Otherwise, let $s_0\in[t,t_0)$ be the first period such that the ordering quantity in System 2 is strictly positive. 
Since System 2 started with more inventory than System 1, the amount of outdates in System 2 is at least as much as that in System 1 at each period $t,...,t_0-1$. Therefore, by construction, at all periods $s_0+1,...,t_0-1$, the ordering quantity in System 2 is at least as much as that in System 1, and the total inventory levels after ordering in the two systems are the same. Let $\textbf{x}_{t_0}$ and $\textbf{x}'_{t_0}$ be the inventory vectors at period $t_0$ in Systems 1 and 2, respectively. Then, by construction, we have $\sum\limits_{m=k}^{K-1}x'_{m,t_0}\leq \sum\limits_{m=k}^{K-1}x_{m,t_0}, k=1,...,K-1$. Therefore, the holding cost that occurs at periods $t,...,t_0-1$ charged for units ordered at periods $\geq t$ in System 2 is no more than that in System 1. iii) Consider the total costs that occur at periods $\geq t_0$. At the beginning of period $t_0$, we know that $\sum\limits_{m=k}^{K-1}x'_{m,t_0}\leq \sum\limits_{m=k}^{K-1}x_{m,t_0}, k=1,...,K-1$. By Lemma \ref{lem:ij}, we have $0\leq C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}), 1\leq i<j\leq K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Therefore, the total cost that occurs at periods $\geq t_0$ in System 2 is no more than that in System 1. Second, suppose at period $t$ we have $\sum\limits_{k=1}^{K-1}x_{k,t}\geq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon$ and $x'_{m,t}=x_{m,t}, m=1,...,k-1,k+1,...,K-1$, i.e., System 2 has $\epsilon$ more units of age $k$, and $\epsilon$ is any positive number. Let both Systems 1 and 2 follow an optimal ordering policy. 
Since $\sum\limits_{k=1}^{K-1}x_{k,t}\geq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$, clearly, the ordering quantities in both systems are zero at period $t$. Let $y_t$ and $y'_t$ be the total inventory levels after ordering in Systems 1 and 2, respectively. Then, $\max\limits_{\tau=1,...,t}\bar{y}_{\tau}\leq y_t\leq y'_t$. Thus, the expected marginal shortage penalty at period $t$ in System 2 is no more than that in System 1; and there is no marginal holding or outdating cost in either system. Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ for Systems 1 and 2, respectively. Then we have $x_{k,t+1}\leq x'_{k,t+1},k=1,...,K-1$. By the induction assumption, we have $\tilde{C}_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq 0,k=1,...,K-1, \forall\textbf{x}_{t+1},f_{t+1}$. Therefore, under the marginal-cost accounting scheme, System 2 has no more expected total cost than System 1. \Halmos With the above result, we now prove the proposition by contradiction. Suppose for some period $t$, given $\textbf{x}_t$ and $f_t$, we have $q_t^{L}> q_t^{OPT}$. Consider a policy $L$, under which $q_t^{L}$ units are ordered at period $t$ and an optimal ordering policy is applied at the following periods. Then, the expected cost-to-go at period $t$ of policy $L$ is $\Gamma_t(\textbf{x}_t,f_t,q_t^{L})+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1}^{L},F_{t+1})|f_t]$, where $\textbf{X}_{t+1}^{L}=\textbf{X}_{t+1}(\textbf{x}_t,q_t^{L},D_t)$. On the other hand, the expected cost-to-go at period $t$ of policy $OPT$ is $\Gamma_t(\textbf{x}_t,f_t,q_t^{OPT})+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1}^{OPT},F_{t+1})|f_t]$, where $\textbf{X}_{t+1}^{OPT}=\textbf{X}_{t+1}(\textbf{x}_t,q_t^{OPT},D_t)$. Since $q_t^{L}> q_t^{OPT}$, by definition of $q_t^{L}$, we have $\Gamma_t(\textbf{x}_t,f_t,q_t^{L})<\Gamma_t(\textbf{x}_t,f_t,q_t^{OPT})$. Further, we have $X_{k,t+1}^{L}\geq_{p} X_{k,t+1}^{OPT}, k=1,...,K-1$ for any realization of $D_{t}$. 
Therefore, by Lemma \ref{lem:lower}, we have $\tilde{C}_{t+1}(\textbf{X}_{t+1}^{L},F_{t+1})\leq \tilde{C}_{t+1}(\textbf{X}_{t+1}^{OPT},F_{t+1})$ with probability one. Then: $$\Gamma_t(\textbf{x}_t,f_t,q_t^{L})+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1}^{L},F_{t+1})|f_t]<\Gamma_t(\textbf{x}_t,f_t,q_t^{OPT})+\mathrm{E}[\tilde{C}_{t+1}(\textbf{X}_{t+1}^{OPT},F_{t+1})|f_t],$$ i.e., policy $OPT$ is not optimal for periods $t,...,T$, which is a contradiction. \Halmos \textbf{Proof of Theorem \ref{opttb}.} We prove the theorem in a similar way to the proof of Theorem \ref{optb}. The main difference lies in the construction of policy $IM$. In particular, now policy $IM$ is constructed as follows: At each period $t$, given $\textbf{x}_t$ and $f_t$, let the system under policy $IM$ follow an optimal ordering policy. What differentiates policies $IM$ and $OPT$ is that under policy $IM$, at each period after ordering and before demand realization, 1) products in the inventory vector can be ``moved'' from older positions to the position of age 0; and 2) products of age 0 can be intentionally disposed of. At each period $t$, let $y_t^{TB}$ and $y_t^{IM}$ be the total inventory levels after ordering under policies $TB$ and $IM$, respectively. Also, given $\textbf{x}_t^{TB}$ and $f_t$, let $y_t^B$ denote the total inventory level after ordering if the balancing ordering quantity $q_t^B$ is ordered. 
Then, we partition the set of decision epochs $\{1,...,T\}$ into the following four subsets: $$\mathscr{T}_P=\{t: y_t^B\geq y_t^{IM}\}, \mathscr{T}_H=\{t: y_t^B< y_t^{IM}, y_t^{TB}= y_t^{B}\},$$ $$\mathscr{T}_{LH}=\{t:y_t^B< y_t^{IM}, y_t^{TB}>y_t^B\}, \mathscr{T}_{UH}=\{t: y_t^B< y_t^{IM}, y_t^{TB}<y_t^B\}.$$ The main objective of constructing policy $IM$ is to bound the total shortage penalty of policy $TB$ at each $t\in \mathscr{T}_P\cup \mathscr{T}_{UH}$ and the total holding and outdating cost of policy $TB$ charged for the first $q_t^B$ units ordered at each $t\in \mathscr{T}_H\cup \mathscr{T}_{LH}$. In particular, units under policy $IM$ can be moved for $t\in \mathscr{T}_H\cup \mathscr{T}_{LH}\cup \mathscr{T}_{UH}=\{\tau_1,...,\tau_n\}$. The rules of movements are defined in a similar way as before such that after the movements at each $\tau_i$, we have: (i) Under policy $IM$, there is positive inventory only of age 0 and of the ages $\tau_i-\tau_{j}$, for all $j=1,...,i-1$ such that $\tau_j\in \mathscr{T}_{H}\cup \mathscr{T}_{LH}$. (ii) For $j=1,...,i-1$ and $\tau_j\in \mathscr{T}_{H}\cup \mathscr{T}_{LH}$, $\sum\limits_{k=\tau_i-\tau_j}^{K-1}x_{k,\tau_i}^{IM}=x_{\tau_i-\tau_j,\tau_i}^{B}+\sum\limits_{k=\tau_i-\tau_j+1}^{K-1}x_{k,\tau_i}^{TB}$, where $x_{\tau_i-\tau_j,\tau_i}^{B}$ denotes the inventory of age $\tau_i-\tau_j$ at period $\tau_i$ under policy $TB$ if $q_{\tau_j}^B$ instead of $q_{\tau_j}^{TB}$ units are ordered at $\tau_j$. Note that property (ii) is equivalent to Equation (\ref{eqn:move1}) for $\tau_j\in \mathscr{T}_{H}$ since in that case, we have $q_{\tau_j}^{TB}=q_{\tau_j}^B$. In addition to movements, we also allow disposals of units at periods in $\mathscr{T}_{UH}$. For $t\in \mathscr{T}_{UH}$, we have $y_{t}^{TB}<y_{t}^B<y_{t}^{IM}$. After the movements of units, there must be at least $y_{t}^{IM}-y_{t}^{TB}$ units of age 0 under policy $IM$. 
Then, we dispose of $y_{t}^{IM}-y_{t}^{TB}$ units of age 0 under policy $IM$ so that after the disposal, we have $y_{t}^{IM}=y_{t}^{TB}$, and neither of the two properties above resulting from the movements of units is violated. Then, as before, to show $\mathrm{E}[\mathscr{C}(TB)]\leq 2\mathrm{E}[\mathscr{C}(OPT)]$, it is sufficient to show $\mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]$ and $\mathrm{E}[\mathscr{C}(TB)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$. We have shown in Lemma \ref{lem:optim} that under Assumption \ref{assump:1}, moving units from older to younger positions can only decrease the expected total cost. We now show that disposing of units during periods in $\mathscr{T}_{UH}$ can also only decrease the expected total cost. For $t\in \mathscr{T}_{UH}$, since $y_t^{TB}<y_t^B$, by definition of policy $TB$, $y_t^{TB}$ provides an upper bound on the optimal order-up-to level for given $\textbf{x}_t^{TB}$ and $f_t$. Also, as before, the inventory vector under policy $IM$ is ``younger'' than that under policy $TB$ after the movements (i.e., for $k=1,...,K-1$, policy $IM$ has no more units of age greater than or equal to $k$). Then it is not difficult to show that the optimal order-up-to level for given $\textbf{x}_t^{IM}$ and $f_t$ is at most $y_t^{TB}$. Therefore, the disposal of inventory from $y_t^{IM}$ to $y_t^{TB}$ will only decrease the expected total cost. Then we have: \begin{equation} \label{appendix:imopt} \mathrm{E}[\mathscr{C}(IM)]\leq \mathrm{E}[\mathscr{C}(OPT)]. \end{equation} We next show $\mathrm{E}[\mathscr{C}(TB)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$, which together with Inequality (\ref{appendix:imopt}) leads to our conclusion. By construction of policy $IM$, after the movements and disposals, we have $y_t^B\geq y_t^{IM}, \forall t\in \mathscr{T}_{P}\cup \mathscr{T}_{UH}$. Then clearly: \begin{equation} \label{appendix:p} \sum\limits_{t\in \mathscr{T}_P\cup \mathscr{T}_{UH}}P_t^{B}\leq \sum\limits_{t=1}^TP_t^{IM}. 
\end{equation} Then, define the dynamic unit-matching scheme in the same way as before, such that the first $q_t^B$ units ordered at each $t\in \mathscr{T}_H\cup \mathscr{T}_{LH}$ under policy $TB$ are matched to units under policy $IM$ in one-to-one correspondence, and a matched unit under policy $TB$ stays in inventory no longer than the corresponding unit under policy $IM$. Then, we have: \begin{equation} \label{appendix:hlh} \sum\limits_{t\in \mathscr{T}_H\cup\mathscr{T}_{LH}}H_t^{B}\leq \sum\limits_{t=1}^TH_t^{IM}, \sum\limits_{t\in \mathscr{T}_H\cup\mathscr{T}_{LH}}W_t^{B}\leq \sum\limits_{t=1}^TW_t^{IM}. \end{equation} Finally, recall that $\Gamma_t(\textbf{x}_t,f_t,q_t)=P_t(\textbf{x}_t,f_t,q_t)+H_t(\textbf{x}_t,f_t,q_t)+W_t(\textbf{x}_t,f_t,q_t)$. Consider the following three cases. First, suppose $q_t^{TB}=q_t^B$. Then clearly, $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{TB})=\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{B})$. Second, suppose $q_t^{TB}>q_t^B$. Then we have $q_t^{TB}=q_t^L>q_t^B$. Given $\textbf{x}_t$ and $f_t$, it is straightforward to check that $\Gamma_t(\textbf{x}_t,f_t,q_t)$ is convex in $q_t$. Further, since $q_t^{TB}=q_t^L$ minimizes $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t)$, we must have $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{TB})\leq \Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{B})$ (this is why the lower bound $q_t^L$ in the definition of policy $TB$ cannot be replaced by a tighter one). Last, suppose $q_t^B>q_t^U$. Then we have $q_t^{TB}=q_t^U$. Since $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t)$ is convex in $q_t$, $q_t^L$ minimizes $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t)$, and $q_t^B>q_t^{TB}\geq q_t^{L}$, we also have $\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{TB})\leq \Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{B})$. By definition, for any given $f_t$, $\mathrm{E}[P_t^{TB}+H_t^{TB}+W_t^{TB}|f_t]=\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{TB})$, $\mathrm{E}[P_t^{B}+H_t^{B}+W_t^{B}|f_t]=\Gamma_t(\textbf{x}_t^{TB},f_t,q_t^{B})$. 
Therefore: \begin{equation} \label{appendix:tbb} \mathrm{E}[P_t^{TB}+H_t^{TB}+W_t^{TB}|f_t]\leq \mathrm{E}[P_t^{B}+H_t^{B}+W_t^{B}|f_t]. \end{equation} With Inequalities (\ref{appendix:p})--(\ref{appendix:tbb}), the remaining steps to show $\mathrm{E}[\mathscr{C}(TB)]\leq 2\mathrm{E}[\mathscr{C}(IM)]$ are the same as before, which completes the proof. \Halmos \textbf{Proof of Proposition \ref{prop:fifo}.} Due to Lemma \ref{lem:ij}, it remains to prove the ``if'' part of the proposition, i.e., if for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$, then Assumption \ref{assump:1} holds. First, since $C_{T+1}^{(k)}(\textbf{x}_{T+1},f_{T+1})=0\leq w/\beta, k=1,...,K-1, \forall \textbf{x}_{T+1},f_{T+1}$, issuing products of age $K-1$ at $T$ clearly results in less cost than issuing younger products and letting the oldest products outdate. Further, how we issue products of age less than $K-1$ at $T$ does not affect the total cost. Therefore, Assumption \ref{assump:1} clearly holds for $T$. Assume that Assumption \ref{assump:1} holds for $t+1$, i.e., starting from period $t+1$, given that $\sum\limits_{k=1}^{K-1}x_{k,t+1}+q_{t+1}\leq \max\limits_{\tau=1,...,t+1}\bar{y}_{\tau}$ at $t+1$ and an optimal ordering policy is implemented at $t+2,...,T$, FIFO is an optimal issuing policy. We now show it also holds for $t$. Starting from period $t$, given $\textbf{x}_t$ and $q_t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}+q_t\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$, we must have $\sum\limits_{k=1}^{K-1}x_{k,t+1}\leq (\max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_t)^+$. Thus, under an optimal ordering policy, we have $\sum\limits_{k=1}^{K-1}x_{k,t+1}+q_{t+1}\leq \max\limits_{\tau=1,...,t+1}\bar{y}_{\tau}$. Then, by the induction assumption, FIFO is optimal for $t+1,...,T$. It remains to show that FIFO is also optimal at period $t$. 
Clearly, issuing products of age $K-1$ at period $t$ results in less total cost than issuing younger products and letting the oldest products outdate, because $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$. Thus, an optimal issuing policy will issue as many oldest products as possible at period $t$. Let $\gamma$ be such an issuing policy. Then, the costs that occur at period $t$ by following FIFO and $\gamma$ are exactly the same. Further, let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ by following FIFO and $\gamma$, respectively. Then, we have $\sum\limits_{k=1}^{K-1}x_{k,t+1}=\sum\limits_{k=1}^{K-1}x'_{k,t+1}$ and $\sum\limits_{k=m}^{K-1}x_{k,t+1}\leq \sum\limits_{k=m}^{K-1}x'_{k,t+1}, m=2,...,K-1$. From the proof of Lemma \ref{lem:ij}, we know that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$ implies that for $t=1,...,T$, $C_{t+1}^{(i)}(\textbf{x}_{t+1},f_{t+1})\leq C_{t+1}^{(j)}(\textbf{x}_{t+1},f_{t+1}), 1\leq i<j\leq K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. Therefore, FIFO is also optimal at period $t$, which completes the proof. \Halmos \textbf{Proof of Proposition \ref{prop:demand}.} Since $\bar{y}_t$ is non-decreasing in $t$, we have $\max\limits_{\tau=1,...,t}\bar{y}_{\tau}=\bar{y}_{t}$. Due to Proposition \ref{prop:fifo}, to show Assumption \ref{assump:1} holds, it is sufficient to show that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \bar{y}_{t}-d_{t}$. The claim is clearly true for $t=T$ since $C_{T+1}(\textbf{x}_{T+1},f_{T+1})=0, \forall \textbf{x}_{T+1}, f_{T+1}$. Assume that the claim is true for $t+1,...,T+1$. 
We now show that it is also true for $t$. At period $t$, given $\textbf{x}_t$ and $f_t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \bar{y}_{t-1}-d_{t-1}$, consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon, x'_{m,t}=x_{m,t}, m\neq k, k=1,...,K-1$, i.e., System 2 starts with $\epsilon$ more units of age $k$, and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \bar{y}_{t-1}$. Let System 1 follow an optimal ordering policy, and let System 2 order up to the same level as System 1 at period $t$ (order nothing if this is not feasible) and follow an optimal ordering policy afterward. Then, it is sufficient to show that the expected total cost in System 2 is at most $w\epsilon /\beta$ more than that in System 1. Let $y_t$ and $y'_t$ be the total inventory levels after ordering in Systems 1 and 2, respectively. Then, by construction, we have $y_t\leq y'_t\leq \bar{y}_t$. Therefore, the total shortage penalty and holding cost at period $t$ in System 2 is no more than that in System 1 (since, by definition, $\bar{y}_t$ minimizes the total shortage penalty and holding cost at period $t$, and this sum is convex in the ordering quantity). Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ in Systems 1 and 2, respectively. Then, by construction, we have $x_{1,t+1}\geq x'_{1,t+1}, x_{k,t+1}\leq x'_{k,t+1},k=2,...,K-1$. Assume that there are $\xi\leq \epsilon$ more units of outdates in System 2 than in System 1 at period $t$. Then $\sum\limits_{k=2}^{K-1}x'_{k,t+1}-\sum\limits_{k=2}^{K-1}x_{k,t+1}=\epsilon-\xi$. 
Since $0\leq C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=2,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \bar{y}_{t}-d_{t}$, the expected total cost in System 2 is at most $w\epsilon\leq w\epsilon/\beta$ more than that in System 1, which completes the proof. \Halmos \textbf{Proof of Proposition \ref{prop:cost}.} Due to Proposition \ref{prop:fifo}, to show Assumption \ref{assump:1} holds, it is sufficient to show that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. The claim is clearly true for $t=T$ since $C_{T+1}(\textbf{x}_{T+1},f_{T+1})=0, \forall \textbf{x}_{T+1}, f_{T+1}$. Assume that the claim is true for $t+1,...,T+1$. We now show that it is also true for $t$. At period $t$, given $\textbf{x}_t$ and $f_t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}-d_{t-1}$, consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon, x'_{m,t}=x_{m,t}, m\neq k, k=1,...,K-1$, i.e., System 2 starts with $\epsilon$ more units of age $k$, and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}$. Let System 1 follow an optimal ordering policy, and let System 2 order up to the same level as System 1 at period $t$ (order nothing if this is not feasible) and follow an optimal ordering policy afterward. Then, it is sufficient to show that the expected total cost in System 2 is at most $w\epsilon /\beta$ more than that in System 1. Let $y_t$ and $y'_t$ be the total inventory levels after ordering in Systems 1 and 2, respectively. 
Then, by construction, we have $y_t\leq y'_t\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Assume that $y'_t-y_t=\eta\leq \epsilon$. Then, there will be at most $\gamma h\eta$ more expected holding cost in System 2 than in System 1 at period $t$. Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ in Systems 1 and 2, respectively. Then, by construction, we have $x_{1,t+1}\geq x'_{1,t+1}, x_{k,t+1}\leq x'_{k,t+1},k=2,...,K-1$. Assume that there are $\xi\leq \epsilon$ more units of outdates in System 2 than in System 1 at period $t$. Then $\sum\limits_{k=2}^{K-1}x'_{k,t+1}-\sum\limits_{k=2}^{K-1}x_{k,t+1}=\epsilon-\xi$. Since $0\leq C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=2,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \bar{y}_{t}-d_{t}$, and $h\leq \frac{1-\beta}{\beta}w$, the expected total cost in System 2 is at most $\gamma h\eta+w\epsilon\leq w\epsilon/\beta$ more than that in System 1, which completes the proof. \Halmos \textbf{Proof of Proposition \ref{prop:mix}.} Due to Proposition \ref{prop:fifo}, to show Assumption \ref{assump:1} holds, it is sufficient to show that for $t=1,...,T$, $C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=1,...,K-1$, $\forall\textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \max\limits_{\tau=1,...,t}\bar{y}_{\tau}-d_{t}$. The claim is clearly true for $t=T$ since $C_{T+1}(\textbf{x}_{T+1},f_{T+1})=0, \forall \textbf{x}_{T+1}, f_{T+1}$. Assume that the claim is true for $t+1,...,T+1$. We now show that it is also true for $t$. 
At period $t$, given $\textbf{x}_t$ and $f_t$ such that $\sum\limits_{k=1}^{K-1}x_{k,t}< \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}-d_{t-1}$, consider the following two systems (both following FIFO issuing policy): System 1 starts from $\textbf{x}_t$ and System 2 starts from $\textbf{x}'_t$, where $x'_{k,t}=x_{k,t}+\epsilon, x'_{m,t}=x_{m,t}, m\neq k, k=1,...,K-1$, i.e., System 2 starts with $\epsilon$ more units of age $k$, and $\epsilon$ is positive but sufficiently small such that $\sum\limits_{k=1}^{K-1}x_{k,t}+\epsilon\leq \max\limits_{\tau=1,...,t-1}\bar{y}_{\tau}$. Let System 1 follow an optimal ordering policy, and let System 2 order up to the same level as System 1 at period $t$ (order nothing if this is not feasible) and follow an optimal ordering policy afterward. Then, it is sufficient to show that the expected total cost in System 2 is at most $w\epsilon /\beta$ more than that in System 1. Let $y_t$ and $y'_t$ be the total inventory levels after ordering in Systems 1 and 2, respectively. Then, by construction, we have $y_t\leq y'_t\leq \max\limits_{\tau=1,...,t}\bar{y}_{\tau}$. Assume that $y'_t-y_t=\eta\leq \epsilon$. The probability that there will be excess inventory after demand realization at period $t$ in either system is upper bounded by $\Phi_t(\max\limits_{\tau=1,...,t}\bar{y}_{\tau})\leq \max\limits_{1<s\leq t\leq T}\Phi_t(\bar{y}_s)=\gamma$. Then, there will be at most $\gamma h\eta$ more expected holding cost and at least $(1-\gamma)p\eta$ less expected shortage penalty in System 2 than in System 1 at period $t$. Let $\textbf{x}_{t+1}$ and $\textbf{x}'_{t+1}$ be the inventory vectors at period $t+1$ in Systems 1 and 2, respectively. Then, by construction, we have $x_{1,t+1}\geq x'_{1,t+1}, x_{k,t+1}\leq x'_{k,t+1},k=2,...,K-1$. Assume that there are $\xi\leq \epsilon$ more units of outdates in System 2 than in System 1 at period $t$. Then $\sum\limits_{k=2}^{K-1}x'_{k,t+1}-\sum\limits_{k=2}^{K-1}x_{k,t+1}=\epsilon-\xi$. 
Since $0\leq C_{t+1}^{(k)}(\textbf{x}_{t+1},f_{t+1})\leq w/\beta, k=2,...,K-1$, $\forall \textbf{x}_{t+1},f_{t+1}$ such that $\sum\limits_{k=1}^{K-1}x_{k,t+1}< \bar{y}_{t}-d_{t}$, and $h\leq \frac{1-\gamma}{\gamma}p+\frac{1-\beta \gamma}{\beta \gamma}w$, the expected total cost in System 2 is at most $\gamma h\eta-(1-\gamma)p\eta+w\epsilon\leq w\epsilon/\beta$ more than that in System 1, which completes the proof. \Halmos \end{APPENDIX} \end{document}
\begin{document} \title{Optimal transfer protocol by incremental layer defrosting} \blfootnote{$*$To whom correspondence may be addressed. Email: \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]}.} \begin{abstract} Transfer learning is a powerful tool enabling model training with limited amounts of data. This technique is particularly useful in real-world problems where data availability is often a serious limitation. The simplest transfer learning protocol is based on ``freezing'' the feature-extractor layers of a network pre-trained on a data-rich source task, and then adapting only the last layers to a data-poor target task. This workflow is based on the assumption that the feature maps of the pre-trained model are qualitatively similar to the ones that would have been learned with enough data on the target task. In this work, we show that this protocol is often sub-optimal, and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen. In particular, we make use of a controlled framework to identify the optimal transfer depth, which turns out to depend non-trivially on the amount of available training data and on the degree of source-target task correlation. We then characterize transfer optimality by analyzing the internal representations of two networks trained from scratch on the source and the target task through multiple established similarity measures. \end{abstract} \section{Introduction} \label{Introduction} Machine learning models show a remarkable capacity to extrapolate rules and predict the behavior of complex systems, but this ability often comes at the cost of training with large amounts of data.
In a variety of domains -- such as medical applications -- collecting data is a slow and costly process, or the variability across data is so large that huge datasets would be needed to achieve satisfying generalization performance \cite{beam2018big,rajkomar2019machine}. This makes data efficiency a necessary condition. Transfer learning emerged as a solution to this problem \cite{thrun2012learning,shin2016deep,raghu2019transfusion}. By combining a data-poor \textit{target task} with a data-rich related \textit{source task}, it is possible to learn useful representations from the source and then adapt them to the target in a fine-tuning phase, effectively improving the downstream performance. Indeed, fine-tuning an already learned, meaningful representation to the target task allows one to work with dramatically smaller dataset sizes while keeping the same level of accuracy. Unfortunately, deep neural networks tend to be sensitive even to tiny distribution shifts \cite{gama2014survey}. Therefore, performing transfer learning starting from the right initialization and with the right protocol is crucial for success. One of the major risks is incurring overfitting \cite{geirhos2020shortcut} due to the limited size of the target dataset. Common strategies to avoid it include damping the learning rate in the fine-tuning stage \cite{bengio2012deep}, which forces the network to change its weights only by a small amount. Another popular and simple transfer learning strategy is to keep most of the network frozen \cite{yosinski2014transferable}, training only some of the model parameters on the target task. Transferring the layers directly from another network allows, in principle, filtering out irrelevant information present in the input of both the source and the target data sets, effectively reducing the dimensionality of the problem and increasing data efficiency.
But \emph{how should one choose the layers that should be kept frozen?} Ideally, one should be able to identify, in the source network, a layer in which the representation is general enough to be meaningful also for the target task, but not too general, as otherwise training it to a specialized task might be hindered by the scarcity of data in the target set. \begin{figure*} \caption{\textbf{Diverse effectiveness of freezing protocols.} \label{fig:intro}} \end{figure*} Given the relevance of transfer learning in real-world applications, it is important to define simple protocols that can be applied without too much fine-tuning or domain knowledge. In this work, we propose an approach capable of identifying the layer in which the representation is optimal for a given transfer task. In particular, we suggest estimating the generalization accuracy achieved on the target task as more and more layers are retrained on the target task, while the remaining layers are kept frozen to the feature maps extracted on the source task. This procedure allows building what we call a \textit{layer-wise defrosting profile}. Four examples of such profiles are shown in Fig.~\ref{fig:intro}. We see that, depending on the transfer scenario, the optimal accuracy can be achieved by either keeping almost all the layers frozen, or by re-optimizing all the layers, or by freezing only part of the layers (panels (a), (d) and (b-c), respectively). As we will see, the position of the maximum of the defrosting profiles strongly depends on the nature of the transfer task and on the amount of available data (cf. (a) and (d)). Remarkably, in some scenarios, employing a non-vanilla freezing protocol can lead to greater performance improvement than acquiring $5$ times more training data for the target application.
The main results discussed in this work are the following: \begin{itemize} \item We introduce a simple protocol capable of identifying, for a given transfer task, the intermediate layer in which the neural network representation is most suitable for transfer; \item Thanks to a controlled synthetic-dataset framework, we analyze the effect on transfer efficiency of the size of the target task and of the similarity between the source and the target task; \item We show that the large degeneracy of neural network parameter space may obfuscate the similarity and compatibility between two different tasks, and that learning in compliance with both may not hinder performance; \item We show that by monitoring the topological similarity between the representations in networks trained on different tasks it is possible to predict qualitatively whether a transfer protocol between the tasks will work or not. \end{itemize} \subsection{Related Works} Deep neural networks (DNNs) can build surprisingly general representations in a variety of ways \cite{deng2009imagenet, solorio2020review, jaiswal2020survey}. A possible explanation for their effectiveness comes from their ``implicit bias'' towards regularized solutions \cite{neyshabur2014search}. Numerous works provided evidence of a preference for \textit{simple solutions} \cite{rahaman2018spectral, valle2018deep, shah2020pitfalls, abbe2021staircase}, and it was shown that learning complex functions may require training the networks for longer times \cite{goldt2022datadriven,refinetti2022neural}. The layered nature of DNN architectures seems to play a crucial role, as it appears that increasingly complex features are progressively encoded in deeper layers of the network \cite{kornblith2019CKA, abbe2021staircase}. Interestingly, the interaction between input data and labels has been shown to lead to non-trivial and often abrupt changes in the nature of the extracted features throughout the layers \cite{tishby2015deep,doimo2020hierarchical}.
Recent investigations highlighted the importance of intermediate representations \cite{yosinski2014transferable}, but the question of how to select transferable DNN representations \cite{raina2007self,long2015fully} remains largely open, given the complex dependence on the specific source-target distribution shifts and on the data scarcity \cite{lee2022surgical}. Theoretical insight into the phenomenology of transfer learning in DNNs remains limited. Except for some results on training convergence rates \cite{du2020few}, the simplifying assumption of linear activation functions was necessary to obtain exact results on the generalization properties after a transfer, both in the asymptotic \cite{dhifallah2021phase, dar2022double} and in the dynamical \cite{lampinen2018analytic} regimes. A framework for a systematic exploration of the generalization gain as a function of the source-target distribution shift in the non-linear case was developed in \cite{gerace2022probing}, but the analysis was restricted to 2-layer networks. The present work explores the uncharted area between theory and real-world transfer learning applications through a carefully designed framework that allows some degree of control over the data distributions. The main goal is to investigate the interplay between data and architecture, relating distribution shift and sample size on one side to the quality of intermediate representations for a transfer task on the other. \section{The experimental setup}\label{sec:exp_setup} In this section, we briefly describe the setup used for the experiments of the present paper. We refer to Appendix \ref{app:numerical_details} for more details. \paragraph*{Architecture and training protocol} In order to assess the robustness of the defrosting-profile properties with respect to the network topology, we consider two different types of architectures.
The first, and the one mostly employed in our experiments, is a Wide-ResNet-28-4 architecture \cite{zagoruyko2016wide}, organized in $3$ identical groups, each containing $4$ blocks of alternating convolutional, batch-norm, and downsampling layers, plus a final batch-norm layer and a fully connected one. The reason behind this choice is to foster a good degree of compatibility between model representations, motivated by previous studies \cite{kornblith2019CKA}, where the similarity of neural network representations was shown to increase with the width of the layers. We train the Wide-ResNet-28-4 networks for 200 epochs with stochastic gradient descent with momentum $0.9$, batch size $128$, weight decay $5\cdot10^{-4}$, and a cosine annealing scheduler for the learning rate, starting from a value of $0.1$. The second is a ResNet-18 architecture from the PyTorch model zoo, pre-trained on the ImageNet dataset \cite{he2016deep}. \paragraph*{Reproducibility} We provide the code to reproduce our experiments and analysis at \url{https://anonymous.4open.science/r/TL_gradient-DD27/}. \paragraph*{Synthetic datasets}\label{sec:setup_dataset} Fig.~\ref{fig:intro} illustrates that defrosting profiles can show different behaviors depending on the specific transfer learning scenario. In order to perform an in-depth investigation of the factors at play in determining these different transfer learning behaviors, we consider three different types of CIFAR-10 clones. These clones are meant to form a hierarchical family of datasets, approximating the true underlying distribution of CIFAR-10 with increasing fidelity. Following \cite{refinetti2022neural}, the first level of the hierarchy (IsoGM) is obtained by fitting CIFAR-10 with a mixture of isotropic Gaussians, matching the first moments of the CIFAR-10 distribution. At the second level of the hierarchy (GM), the covariances are also learned from CIFAR-10.
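As an illustration, the first level of the hierarchy can be sketched in a few lines of plain Python: for each class, one matches the empirical mean and fits a single isotropic variance, then samples synthetic inputs from the resulting Gaussian. This is a minimal sketch on generic feature vectors rather than actual CIFAR-10 images, and the function names are ours:

```python
import random

def fit_isotropic_gaussian(samples):
    """Match the empirical mean and a single isotropic variance of one class."""
    dim = len(samples[0])
    mean = [sum(x[d] for x in samples) / len(samples) for d in range(dim)]
    var = sum((x[d] - mean[d]) ** 2
              for x in samples for d in range(dim)) / (len(samples) * dim)
    return mean, var

def sample_isogm(mean, var, n, seed=0):
    """Draw n synthetic inputs from the fitted isotropic Gaussian."""
    rng = random.Random(seed)
    std = var ** 0.5
    return [[rng.gauss(m, std) for m in mean] for _ in range(n)]
```

The GM level would replace the single scalar variance with a full covariance matrix estimated per class.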
To go beyond the second moment, we propose to obtain finer approximations of CIFAR-10 images by using a deep autoencoder architecture, consisting of a convolutional encoder network and a deconvolutional decoder, both connected to a bottleneck layer of variable size (we refer to the SI for additional details). Note that, by construction, this setup is devoid of the confounding effect of misalignments between the labeling rules \cite{gerace2022probing,lee2022surgical} of the source and target tasks, since the synthetic datasets share the same label structure, independently of the alterations induced in the input distributions. \section{Results} Basic transfer protocols often prove to be the most effective in real-world applications, since they require very little fine-tuning and allow for straightforward interpretation and comparison of the results. One of the simplest ideas is that of freezing protocols, i.e. constraining some of the layers of the network to the exact values obtained at pre-training and optimizing the remaining layers on the target task. However, even when the analysis is restricted to the family of freezing strategies, in principle one still has to deal with a combinatorial number of possible layer choices \cite{yosinski2014transferable,lee2022surgical}, which makes a simple comparative approach impractical for selecting the best subset of layers to be kept frozen.\\ In this work, we aim to identify the optimal transfer depth, i.e. the layer at which the representation is optimal for a given transfer task. We thus focus on the following type of freezing protocol: we assume the architecture of the network to be left unaltered, with the exception of the last layer, which can be adapted to the target task (a classification with a different number of categories, or a regression). In this protocol, the first $l$ layers are kept frozen, while the final layers of the network are retrained on the new target data.
We call this transfer learning strategy layer-wise defrosting. One of its key advantages is its simplicity: no learning heuristics need to be employed to ensure the retention of useful information obtained in the pre-training phase. It also drastically reduces the computational cost of training, since the dataset can be pre-processed a single time, up to the last frozen layer, and the remaining part of the architecture can then be trained as a separate network receiving the frozen representations as inputs. In the next three paragraphs, we first describe the layer-wise defrosting strategy in more detail, discussing its effectiveness in detecting the optimal number of layers to keep frozen. We then show how the optimal transfer depth depends on both the size of the target training set and the degree of similarity between the source and target data statistics. \begin{figure} \caption{ \textbf{Layer-wise defrosting procedure.} \label{fig:defrosting}} \end{figure} \paragraph*{Layer-wise defrosting} As sketched on the left side of the panels in Fig.~\ref{fig:defrosting}, given a network architecture and a source/target couple, we perform a series of transfer learning cycles. We start from a completely frozen setting, except for the output layer. We then gradually increase the number of layers to be defrosted and re-trained, starting from the very last layer and moving toward the early layers of the network. At each iteration of this procedure, in order to avoid vanishing-gradient scenarios, the defrosted layers are re-initialized to random values (following typical heuristics \cite{he2015delving}) and then trained from scratch on the target training data with the hyper-parameter setting described in the Architecture and training protocol paragraph. On the right side of the panels in Fig.~\ref{fig:defrosting}, we show the accuracy recorded at each iteration of the procedure and, at the bottom, the full \textit{defrosting profile} with the optimal transfer depth highlighted by a red star. While it is common practice to transfer most of the network except for the fully-connected segment (or even just the readout layer), in many scenarios transferring a smaller number of layers may largely improve the downstream accuracy. There are at least two distinct reasons for the deterioration of the performance when a sub-optimal number of layers is kept frozen. On the one hand, the defrosting profile decreases when too many layers are kept frozen. This is natural when the input/output mappings associated with the two tasks are too dissimilar: a trained network does not retain all the information contained in the data points, as it learns to select the features that are meaningful for classification. Some information that is relevant to the target task might be dropped completely by the pre-trained model, given the presence of non-linearities and down-sampling operations throughout the layers. On the other hand, when the target dataset is small, the performance deteriorates if too many layers are defrosted and re-trained on the new data. If the ratio between training samples and tunable parameters becomes too small, over-fitting sets in, as the model is unable to choose a well-generalizing solution among the vast number of zero-training-error models. \paragraph*{Dependency of the optimal transfer depth on the training set size}\label{sec:dataset_size} At a qualitative level, the interplay between data scarcity and transfer learning seems clear: with enough samples in the target dataset, all the relevant features of the data can be extracted directly from the task at hand. With a reduced sample size, learning a good representation from scratch becomes impossible and transfer learning becomes convenient.
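Once a defrosting profile of this kind has been measured, extracting the optimal transfer depth amounts to a simple argmax over the recorded accuracies. A minimal sketch (the function name is ours; ties are broken toward fewer frozen layers):

```python
def optimal_transfer_depth(profile):
    """profile[l] = target test accuracy when the first l layers are kept
    frozen and the remaining ones are re-trained on the target task."""
    # max() returns the first maximizer, i.e. the smallest such depth
    return max(range(len(profile)), key=lambda l: profile[l])
```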
In this section, we quantitatively investigate the transition between these two scenarios by considering two distinct datasets as test cases. The first dataset is the GM CIFAR-10 clone, already described in Sec.~\ref{sec:exp_setup}. The second one is a balanced sub-sample of a publicly available medical dataset for the classification of breast cancer \cite{breast_cancer_1,breast_cancer_2} (we refer to the SI for additional details concerning the Breast Cancer dataset). Given the two datasets, we then consider two different transfer learning experiments. In the first one, we train a Wide-ResNet-28-4 by selecting GM as the source task and CIFAR-10 as the target task. In the second experiment, we instead take a ResNet-18 pre-trained on ImageNet from the PyTorch model zoo and select the Breast Cancer dataset as the target task (this is a common choice in medical applications \cite{khan2019novel,chouhan2020novel}). For both transfer learning experiments, we sub-sample the target data, thus spanning a wide range of data-scarce scenarios. \begin{figure} \caption{ \textbf{Effect of dataset size.} \label{fig:dataset_size}} \end{figure} In Fig.~\ref{fig:dataset_size}, we show the outcome of the experiments involving CIFAR-10 (top) and Breast Cancer (bottom), respectively. The defrosting profiles in the two cases share some common features. Indeed, as expected, the generalization accuracy on the target task generally increases with larger amounts of training data. A non-trivial observation, however, is that the optimum of the defrosting profile shifts backward in the layers as the size of the target training set is increased (darker colors). Instead of simply transitioning from a regime of convenient transfers (with some fixed optimal transfer depth) to one of sub-optimal transfer, we find that the number of transferred layers needs to be adjusted according to the sample size, as early layers maintain a higher degree of compatibility when more data is made available.
Note also that optimally defrosting the pre-trained network can be more effective than employing the vanilla freezing protocol after acquiring more data. For example, in the bottom panel of Fig.~\ref{fig:dataset_size} the optimal transfer with $5\times10^3$ samples achieves a performance comparable to vanilla freezing with $10$ times more data. \paragraph*{Dependency of the optimal transfer depth on source-target similarity} \label{sec:source-target_similarity} \begin{figure*} \caption{ \textbf{Defrosting profile for CIFAR-10 clones.} \label{fig:clones_accuracy}} \end{figure*} We now investigate the role played by the statistical similarity between the input distributions of the source and target tasks in determining the optimal transfer depth. Comparing experiments with no control over the input statistics and with different labelling rules may be insufficient to answer this question, as these factors cannot be easily disentangled. With the hierarchy of generative models described in Section~\ref{sec:setup_dataset}, instead, one can isolate this crucial aspect of transfer learning. In this controlled synthetic framework, we can gradually move from a scenario where only the first moments of the source and target task distributions are matched (IsoGM), to the case in which the first two moments are matched (GM), up to the scenario where the source and target distributions almost perfectly match (L256). Our results are summarised in the layer-wise defrosting profiles in Fig.~\ref{fig:clones_accuracy} at the same training set size. As expected, if the similarity of the two data sources is insufficient (left panel), fully retraining the network and giving up the transfer learning approach is preferable, even in the data-scarce regime. At pre-training, the early layers of the network have adapted to significantly different input distributions and are thus unsuitable for extracting the information needed to solve the target task.
As the similarity between the two tasks is increased (middle and right panel), the optimal cut for the defrosting protocol shifts towards the later layers of the network. In the Appendix, Fig.~\ref{fig:other_clones} complements this picture by showing how the defrosting profiles at different sizes of the training set can significantly change depending on the relatedness between the source and the target task. Indeed, when the source and the target task are poorly statistically correlated, we can observe negative transfer effects even in extreme data-scarcity regimes. On the other hand, when the two tasks are extremely correlated, it is almost always convenient to transfer up to the very last layer, except when one can count on a very large pool of training examples. From this picture, two relevant and related aspects emerge. First of all, these transfer experiments strongly hint at the fact that there is a precise order in which the layers learn different parts of the input distribution. Later layers tend to focus on more complex details and higher-order features of the data, selecting only those that are relevant for solving the task at hand. Early layers, as already pointed out \cite{yosinski2014transferable}, seem to play a more generic image pre-processing role, compatible with multiple downstream classification tasks on similar input distributions. The second aspect is that, depending on the complexity of the source task and its relatedness to the target one, we can observe different scenarios, ranging from always negative to almost always positive transfer. It is then natural to ask whether the effectiveness of transfer learning can be assessed by comparing the feature maps of two networks trained from scratch on the source and target task, respectively. We discuss these two points in the next two sections.
\paragraph*{Compliant Learning} Given that early-layer representations are typically more easily transferable, one wonders whether training on different but related image classification tasks leads to networks with similar early-layer parameters. As already reported in \cite{kornblith2019CKA}, the answer to this question is typically no: if the network is strongly overparametrized, training on different tasks will lead to different solutions, no matter the task relatedness. However, there are transfer scenarios where the early representations of the network can be completely transferred without hindering the accuracy (cf. the first points of the dark red curves in the top plot of Fig.~\ref{fig:dataset_size}). \begin{figure} \caption{\textbf{Attraction between the network parameters.} \label{fig:compliant}} \end{figure} To reconcile these observations, we study the effect of introducing an elastic coupling (i.e. an $L_2$ regularization) of magnitude $\lambda$, which constrains the first $10$ layers of a network trained from scratch on CIFAR-10 around the pre-trained values obtained on the GM clone in the data-abundant regime. We then measure the cosine distance between the learned weights of the two networks as a function of the coupling strength $\lambda$. The outcome is shown in Fig.~\ref{fig:compliant}. As a preliminary observation, note that when the coupling intensity is set to $0$, the learned weight configurations in the target task are very different from those learned on the source dataset. This can be seen from the very large cosine distance $d\sim0.6$ (light blue curve) in the first layer. Therefore, one could conclude that the process of learning from natural images is not sufficiently strict to constrain the value of the parameters and induce a specific role for the first layers. As expected, by increasing the coupling intensity one can force the weights to converge closer to the reference configuration.
What is interesting, however, is that the accuracy (blue curve) is almost unaffected by the strength of the coupling: the network is able to achieve close to optimal accuracy even when the constraint becomes \emph{hard} and the number of trainable parameters in the second phase is effectively reduced. We call this procedure ``compliant learning'', since, even though the target network is forced to stay close to the source pre-trained weights, this constraint does not downgrade its generalization performance. This procedure is connected to soft-parameter sharing in multi-task learning, with the difference that source and target networks are not trained simultaneously on the respective tasks \cite{caruana1997multitask,ruder2017overview,duong2015low,yang2016deep}. The emergence of compliant learning in this experiment, where the source and target datasets share only the first two moments, highlights two main aspects. First, it corroborates the finding that the first layers are mostly responsible for learning the lower-order statistics of the data, while the later layers can focus on higher-order moments, whose relevance is more task-specific. Second, it provides evidence of the great degeneracy of the space of neural network parameters, and of the fact that metrics related to parameter distances may be opaque to meaningful changes in the function represented by the network \cite{kornblith2019CKA}. Indeed, configurations that are apparently very dissimilar can produce equally effective representations. In the next section, we investigate to what extent the geometry of different representations can be informative of the interchangeability of the neural mappings.
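The two ingredients of this experiment, the elastic coupling and the cosine-distance diagnostic, can be sketched on flattened weight vectors as follows (a minimal pure-Python illustration; the function names are ours):

```python
def l2_coupling_penalty(weights, reference, lam):
    """Elastic L2 coupling of strength lam, constraining `weights`
    around the pre-trained `reference` configuration."""
    return lam * sum((w - r) ** 2 for w, r in zip(weights, reference))

def cosine_distance(u, v):
    """1 - cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return 1.0 - dot / (norm_u * norm_v)
```

During training, the penalty is simply added to the task loss for the parameters of the coupled layers; $\lambda=0$ recovers unconstrained training.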
\paragraph*{Representation similarity and transferability}\label{sec:rep_sim} Given that metrics based on geometrical arguments in the parameter space may be blind to structural invariances and symmetries of neural network models, if we want to determine the effectiveness of a given transfer direction, we need to move to the feature-map space and rely on mostly topological observables. \emph{How similar are the representations of the target and source network when trained from scratch on the respective dataset?} To measure this similarity we use the Information Imbalance (II) \cite{Glielmo2022II}. This measure estimates how much of the information content of a given feature map is contained in another one. We estimate the II on the test set of the target task. For each data point $i$ in this set, we find its nearest neighbor in the representation generated by the network trained on the source task. Say the nearest neighbor is data point $j$. We then consider the representation of the same data points in the second network (the one trained on the target task), and count the number of points whose distance from $i$ is smaller than the distance between $i$ and $j$. Denoting this number $n_i$, the II is defined as $\mbox{II}=\frac{2}{N^2}\sum_i n_i$, where $N$ is the number of data points in the test set of the target task. If the representations of the two networks are (approximately) equivalent, one finds $n_i=0$ for almost all points and $\mbox{II}\sim 0$. In general, the more predictive the first representation is of the second, the smaller the II. In the limit in which the first representation does not provide any information on the second, one finds $\mbox{II}=1$ \cite{Glielmo2022II}. Note that, crucially, the II circumvents all the known ambiguities caused by re-scaling, permutations, and symmetries in neural network functions, since this measure relies on distance ranks, which are invariant under such transformations.
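The definition above translates directly into code; a brute-force sketch, quadratic in the number of points (the function name is ours):

```python
def information_imbalance(rep_a, rep_b):
    """II from representation A to representation B, as defined in the text.

    rep_a, rep_b: lists of feature vectors for the same N data points."""
    n = len(rep_a)

    def d2(u, v):  # squared Euclidean distance
        return sum((a - b) ** 2 for a, b in zip(u, v))

    total = 0
    for i in range(n):
        # nearest neighbor j of point i in representation A
        j = min((k for k in range(n) if k != i), key=lambda k: d2(rep_a[i], rep_a[k]))
        # n_i: points strictly closer to i than j is, in representation B
        thresh = d2(rep_b[i], rep_b[j])
        total += sum(1 for k in range(n) if k != i and d2(rep_b[i], rep_b[k]) < thresh)
    return 2.0 * total / n ** 2
```

For identical (or rank-equivalent) representations the nearest neighbor is preserved, every $n_i$ vanishes, and the II is $0$.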
\begin{figure} \caption{\textbf{Role of the topological similarity of representations.} \label{fig:information_imbalance}} \end{figure} The top panel of Fig.~\ref{fig:information_imbalance} shows the information imbalance quantifying the representation similarity across the layers of two networks trained from scratch on CIFAR-10 and on one of its clones, respectively: IsoGM (magenta curve), GM (blue curve), and L256 (green curve). The representations are extracted on the CIFAR-10 dataset. The first thing to notice in this plot is that the II generally increases as a function of depth, but its rate of increase depends on the similarity between the datasets on which the compared networks have been trained. Note that the main II jumps happen in correspondence with the down-sampling layers of the network, where some information that might be relevant for one of the tasks is filtered out in the other; this inevitably induces different local neighborhoods in the representations. The second thing to notice is the layer at which the representations of CIFAR-10 start to consistently deviate from those of its clones. In particular, the topological similarity in the case of L256 is almost perfect until the very last layers, indicating the presence of only small differences in the high-order features of the two data distributions. In the case of GM, instead, we see that the representations are almost equivalent up to the $10$-th layer and become increasingly different thereafter. Finally, the IsoGM case shows that the difference in the statistical distribution of the training data affects the learning process dramatically, inducing visibly dissimilar representations that are incompatible with a good generalization performance. The bottom panel of Fig.~\ref{fig:information_imbalance} shows instead the defrosting profile obtained when considering CIFAR-10 as the target task and one of its clones as the source task (same color legend as the top panel).
As we can see, the generalization performance is in agreement with the predictions of the II. The behavior of the II supports the understanding that the first layers are responsible for general-purpose pre-processing and are only slightly affected by changes in the higher-order statistics of the data distribution. This is compatible with a strong layer-wise hierarchy in the representations extracted by neural networks. This finding complements the picture presented in \cite{goldt2022datadriven}, where the training dynamics is shown to pick up the different moments of the input distribution in a precise temporal order. There are many alternative similarity indices that can be measured in place of the II, each clearly unveiling the connection between the topological similarity of the representations and the transfer learning behavior observed in the layer-wise defrosting profile. In Appendix \ref{app:similarity_measures} we also show the results for the CKA \cite{kornblith2019CKA}, the neighborhood overlap \cite{doimo2020hierarchical}, and the Spearman correlation of local neighborhoods, which provide qualitatively similar information. \paragraph*{Efficient Defrosting Protocol} The vanilla freezing protocol, where only the read-out layers are retrained on the target data, may represent a very cost-effective and easily implementable strategy for non-expert practitioners. However, we have shown that a small variation of this protocol, i.e. considering different depths in the defrosting process, may allow large performance gains in the downstream task. Given that transfer learning is mostly relevant when the target dataset is effectively small, the multiple defrosting and re-training runs should not require high computational costs compared with typical large-scale training of deep networks, and should be compatible with a moderately small computational budget.
On the other hand, our results show that the shape of the layer-wise defrosting profile is generally quite regular and predictable from a few points in the profile. Moreover, we have observed that the main drops in the representation similarities happen after the lossy down-sampling layers, while the performance is generally more stable when an additional layer of a different type is defrosted. These findings suggest that the potential gain of defrosting at the correct layer could be uncovered by sampling very few depths and inferring the position of the maximum from them. In case of a very limited computational budget, we thus suggest sampling a few of these cuts at depths ranging from the next-to-last layer down to just after the first down-sampling operation. By looking at the behavior of these points, one can obtain an informed guess of which cut is best, or of what range of layers should include the optimal cut. \section{Discussion} In the present work, we have investigated the relative effectiveness of a family of basic transfer learning protocols. These protocols differ only by the \emph{transfer depth}, i.e. the number of layers which are kept frozen at pre-trained values, while the final layers are fully optimized on the target task. We have shown that the optimal transfer depth depends non-trivially on the amount of training data and on the similarity between source and target tasks, and that its identification can often be more impactful on downstream performance than acquiring additional training data. Crucially, we have also shown that \emph{this depth can be found explicitly by a simple preliminary analysis,} which we dubbed layer-wise defrosting. For data-poor target tasks this analysis can be performed at a negligible computational cost, and we propose a strategy that can be employed to reduce its cost also for larger target datasets.
We believe that layer-wise defrosting can make a difference in real-world transfer learning applications, as it allows squeezing the best accuracy out of a given architecture in an elementary and intuitive manner. At this time, this type of approach cannot be based on strong theoretical foundations. Recent developments in the theory of neural networks \cite{jacot2018neural, canatar2021spectral} have achieved exact descriptions of neural network learning in different regimes, but there is still a lack of mathematical tools for analyzing the role played by representations in the feature-learning regime relevant for transfer learning. Empirically, we have shown evidence that the topological similarity between representations -- obtained via independent training on source and target tasks -- provides qualitative indications of the suitability of a representation transfer between tasks. However, we were not able to identify a practical rule of thumb for identifying the best transfer protocol without explicit cross-validation on a target test set. In perspective, we plan to develop a training schedule for transfer learning capable of efficiently performing layer-wise defrosting in a fully automatic manner. This could be achieved, for example, by exploiting the property which we called compliant learning, namely the fact that transferable representations are practically insensitive to $L_2$ penalties between the weights. More generally, we believe that understanding -- on both the theoretical and the practical sides -- how to disentangle input similarity from the labeling rule might be an interesting direction for future investigations in this field of research. \appendix \onecolumn \section{Numerical Details} \label{app:numerical_details} In this section we provide extra details concerning both the autoencoder clones and the Breast Cancer medical dataset.
\paragraph*{Autoencoder Clones} We follow \cite{lippe2022uvadlc} to build an encoder with 5 $3\times3$-convolutional layers and GeLU activations, alternating stride-2 and stride-1 filters to reduce the dimensionality and ending with a single dense layer, whose size determines the bottleneck. The decoder has the same structure with the stride-2 convolutional layers replaced by deconvolutional ones. Training is performed over the CIFAR-10 training set in batches of size 256 over 500 epochs using Adam \cite{kingma2014adam}. We used a warmup schedule, ramping the learning rate up to $10^{-3}$ after 100 epochs and then annealing it to $10^{-5}$. At the end of the training, we collect the produced images for each bottleneck size in a new dataset. In the experiments, the so-produced CIFAR-10 autoencoder clones are standardized channel-wise to zero mean and unit standard deviation. \paragraph*{Medical Dataset} Concerning the transfer experiments in the bottom panel of Fig.~\ref{fig:dataset_size}, we consider as a test case a publicly available medical dataset for the classification of breast cancer, consisting of 277,524 $50\times 50$ patches extracted from 162 scans \cite{breast_cancer_1,breast_cancer_2}. The images are labeled as positive or negative according to the presence or absence of invasive ductal carcinoma (IDC) tissue in the patch. Among all the examples, we use $80\%$ of them for the training set and $20\%$ for the test set. To speed up the simulations and to describe more realistic transfer learning scenarios, we further randomly sub-sampled the test set down to $10^4$ images at each simulation run, preserving class balance. In the transfer experiments, the images have been rescaled to $224\times 224$ pixels and then standardized as required in the PyTorch documentation for the pre-trained ResNet-18.
\section{More on CIFAR10 Clones: Defrosting Profile and Information Imbalance} As already pointed out in the main text, the optimal transfer depth strongly depends on the size of the target training set and on the statistical similarity between the source and the target task. Fig.~\ref{fig:other_clones} complements the picture described in Sec.~\ref{sec:dataset_size} and Sec.~\ref{sec:source-target_similarity} by showing the defrosting profile for different CIFAR-10 training set sizes (lighter colors identify smaller training set sizes), when the source task is IsoGM or one of the autoencoder clones at increasingly larger bottleneck sizes, i.e. L4, L32, L256. As we can see, in the transfer direction IsoGM to CIFAR-10, transferring up to the very last layer is never beneficial, no matter the size of the target task. IsoGM and CIFAR-10 indeed share only the first moment of the underlying CIFAR-10 distribution; therefore, negative transfer effects can occur due to the poor statistical similarity between the two datasets. By including higher-order moments of the CIFAR-10 distribution, and refining their approximations through the bottleneck size of the CIFAR-10 autoencoder clones, transfer learning becomes increasingly more effective. Indeed, in the transfer direction L256 to CIFAR-10, we can see that it is always convenient to transfer up to the very last layer, except at very large training set sizes. Almost the same picture occurs when transferring the feature map pre-trained on L32 to CIFAR10. However, similarly to the picture that emerged in the transfer experiment concerning the Breast Cancer dataset, in this case a peak starts appearing at the second-to-last defrosted layer. The optimal transfer depth then shifts to the left in the L4 to CIFAR-10 transfer experiment with $64$ images per class.
This phenomenon is probably due to the smaller degree of similarity between the two datasets, even if it is less pronounced than what is observable in the GM to CIFAR-10 transfer direction. \begin{figure} \caption{\textbf{Effect of dataset size and source-target relatedness.} \label{fig:other_clones}} \end{figure} Fig.~\ref{fig:ii_autoclones} shows the II across layers, quantifying the representation similarity of two networks trained from scratch on CIFAR-10 and on one of the clones among IsoGM, L4, L32 and L256. The representations are extracted on the CIFAR-10 test set. As we can see, the profile of the II reflects the hierarchical nature of the CIFAR-10 clone family. Indeed, the larger the size of the bottleneck, the more similar the representations are, and the deeper into the network the similarity persists. \begin{figure} \caption{\textbf{Representation Similarity.} \label{fig:ii_autoclones}} \end{figure} \section{Similarity measures} \label{app:similarity_measures} In the main manuscript, we have analyzed the similarity among representations relying on the II metric. However, as already pointed out in Sec.~\ref{sec:rep_sim}, there are other similarity measures that can be taken into account and that are equally blind to the re-scaling, permutation and other symmetries of the parametric function implemented by a neural network model. Fig.~\ref{fig:similarity_measures} shows, from left to right, the CKA \cite{kornblith2019CKA}, the Spearman correlation of the first $100$ neighborhoods and the neighborhood overlap among the first $30$ neighbours \cite{doimo2020hierarchical} as a function of the network depth. The representations are extracted once again on the CIFAR-10 test set and correspond to the feature maps of two different networks, one trained from scratch on the CIFAR-10 dataset and the other on one of the CIFAR-10 clones, i.e. IsoGM (magenta curve), GM (blue curve) and L256 (green curve).
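As an illustration of the simplest of these measures, the neighborhood overlap can be sketched as the average fraction of $k$-nearest neighbors shared by two representations. This is a minimal pure-Python sketch under our own naming and toy data; the implementation used for the figures may differ.

```python
# Hedged sketch of the neighborhood overlap: for each point, compare its
# k nearest neighbors in representation A with those in representation B
# and average the fraction of shared indices. Toy vectors are illustrative.

def knn(points, i, k):
    """Indices of the k nearest neighbors of point i (Euclidean)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(points[i], p)), j)
                   for j, p in enumerate(points) if j != i)
    return {j for _, j in dists[:k]}

def neighborhood_overlap(rep_a, rep_b, k):
    """Mean fraction of shared k-NN between representations A and B."""
    n = len(rep_a)
    return sum(len(knn(rep_a, i, k) & knn(rep_b, i, k))
               for i in range(n)) / (n * k)

rep1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
rep2 = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (10.0, 10.0)]  # rescaled copy
print(neighborhood_overlap(rep1, rep2, k=2))  # 1.0: identical neighborhoods
```

Since only neighbor identities enter the score, the measure is invariant under any isotropic rescaling of a representation, as the rescaled copy above illustrates.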
As can be seen, the behaviour of the three metrics is qualitatively similar to the one already observed for the II in Fig.~\ref{fig:information_imbalance} (top panel). Indeed, even in this case the representation similarity is stronger at early layers, providing a further hint of the generality of the feature maps extracted at the bottom of the networks. Moreover, the hierarchical structure of the CIFAR10 clones is reflected in the profiles of the similarity measures, signaling which task is the most promising source task for the CIFAR-10 classification target task. \begin{figure} \caption{\textbf{Other Similarity Measures.} \label{fig:similarity_measures}} \end{figure} \end{document}
\begin{document} \title{Distribution of quantum coherence in multipartite systems} \author{Chandrashekar Radhakrishnan} \affiliation{New York University, 1555 Century Avenue, Pudong, Shanghai 200122, China} \affiliation{NYU-ECNU Institute of Physics at NYU Shanghai, 3663 Zhongshan Road North, Shanghai 200062, China} \author{Manikandan Parthasarathy} \author{Segar Jambulingam} \affiliation{Department of Physics, Ramakrishna Mission Vivekananda College, Mylapore, Chennai 600004, India} \author{Tim Byrnes} \affiliation{New York University, 1555 Century Avenue, Pudong, Shanghai 200122, China} \affiliation{NYU-ECNU Institute of Physics at NYU Shanghai, 3663 Zhongshan Road North, Shanghai 200062, China} \affiliation{Department of Physics, New York University, New York 10003, USA} \affiliation{National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan} \begin{abstract} The distribution of coherence in multipartite systems is examined. We use a new coherence measure with entropic nature and metric properties, based on the quantum Jensen-Shannon divergence. The metric property allows the coherence to be decomposed into various contributions, which arise from local and intrinsic coherences. We find that there are trade-off relations between the various contributions of coherence, as a function of the parameters of the quantum state. In bipartite systems the coherence either resides on individual sites or is distributed among the sites, and these contributions act in a complementary way. In more complex systems, the characteristics of the coherence can display more subtle changes with respect to the parameters of the quantum state. In the case of the $ XXZ $ Heisenberg model, the coherence changes from a monogamous to a polygamous nature. This allows us to define the shareability of coherence, leading to monogamy relations for coherence.
\end{abstract} \pacs{03.65.Ta,03.67.Mn} \date{\today} \maketitle The concept of wave-particle duality introduced the importance of quantum coherence in physical phenomena such as low temperature thermodynamics \cite{vs15}, quantum thermodynamics \cite{ja14,ml15,lo15}, nanoscale physics \cite{ok11}, and biological systems \cite{ps07,er14}, and it is one of the most basic aspects of quantum information science \cite{mn00}. For this reason, understanding quantum coherence has a long history and is of fundamental importance to many fields. In quantum optics \cite{rg63,es63}, the approach has typically been to examine quantities such as phase space distributions and higher-order correlation functions \cite{ms97}. While this method distinguishes between quantum and classical coherence, it does not quantify coherence in a rigorous sense. More recently, a procedure to quantify coherence using methods of quantum information science was developed \cite{tb14,dg14,dpp15,as15}. In the seminal work of Ref. \cite{tb14}, basic quantities such as incoherent states, incoherent operations, and maximally coherent states were defined, and the set of properties a functional should satisfy to be considered a coherence measure was listed. One fundamental task that is desirable is to pinpoint what part of a quantum system is responsible for any coherence that is present. To understand the possibilities, let us consider a two-qubit system as an example. Coherence is a basis-dependent quantity \cite{as15,yy15}, and here the reference incoherent states are chosen as $ |0 \rangle, | 1 \rangle $. We can then consider two types of states which possess coherence, $ ( |0 \rangle - |1 \rangle )( |0 \rangle - |1 \rangle ) $ and $ |0 \rangle |0 \rangle - | 1 \rangle | 1 \rangle $. In the former, the coherence lies on each qubit, while the latter has a kind of collective coherence, i.e. entanglement.
An interesting aspect of this is that the two types of coherence are complementary to each other -- an increase in one type leads to a corresponding decrease in the other. In order to have maximum coherence on a particular qubit, it is optimal to create a superposition on each one, which excludes entanglement. On the other hand, for the Bell state, tracing out one of the qubits leaves a completely mixed (incoherent) state on the other qubit. \begin{figure} \caption{\label{fig1}} \end{figure} This complementary behavior is reminiscent of another quantum feature, monogamy of entanglement, which has attracted a lot of attention recently \cite{vc00,mk04,to06,ga06,pk14}. Monogamy is a concept related to the shareability of entanglement between different constituents in a multipartite system. For example, in a tripartite system, if Alice and Bob have a maximally entangled state then this rules out any entanglement with Charlie. The monogamy relation for three qubits was introduced in Ref. \cite{vc00}, and has also been generalized to multipartite systems \cite{to06}. Both these examples illustrate the trade-off nature of quantum mechanical features, where increasing one imposes restrictions on the other. Another fundamental question which this raises is the relationship between coherence and entanglement \cite{as15,yy15}. The framework outlined in Ref. \cite{tb14} closely followed the format of entanglement quantification developed in \cite{vv97,vvp97,vv98}. While entanglement is clearly a form of coherence, the converse is not necessarily true. In this paper we explore the question of how we can quantify various types of coherence, and examine their trade-off relations within a multipartite system. Understanding the distribution of coherence in a multipartite system leads us to the relations between concepts such as coherence, entanglement, and monogamy.
One of the tools that we will use in this study is a coherence measure which has both entropic and geometric properties. In Ref. \cite{tb14}, two different functionals, one based on the relative entropy and the other based on the $\ell_{1}$-norm, were found to satisfy the necessary properties of a coherence measure. Of these, the former is an entropic measure while the latter is a geometric measure which can be used as a formal distance measure. Any measure ${\mathcal D}$ is considered a formal distance over the set $X$ if $\forall$ $\rho,\sigma \in X$ it satisfies the following properties: (i) ${\mathcal D} (\rho,\sigma) > 0$ for $\rho \neq \sigma$ and ${\mathcal D} (\rho,\rho) = 0$; (ii) ${\mathcal D} (\rho,\sigma) = {\mathcal D}(\sigma,\rho)$ (symmetry). If, in addition, ${\mathcal D}$ satisfies (iii) ${\mathcal D}(\rho,\sigma) + {\mathcal D}(\sigma,\tau) \geq {\mathcal D}(\rho,\tau)$ (the triangle inequality), then $\mathcal{D}$ is a metric for the space $X$. The relative entropy $S(\rho \| \sigma) \equiv \text{Tr}\, \rho (\log \rho - \log \sigma)$ is not a distance since it is asymmetric, and furthermore it is well defined only when the support of $\sigma$ contains that of $\rho$. Towards this end we introduce here an alternative, the quantum version of the Jensen-Shannon divergence (QJSD): \begin{align} \mathcal{J}(\rho,\sigma) = \frac{1}{2} [S(\rho\|(\rho+\sigma)/2) + S(\sigma\|(\rho+\sigma)/2)]. \end{align} The QJSD is known to be a distance measure, to be bounded as $0 \leq \mathcal{J} \leq 1$, and to be well defined irrespective of the nature of the supports of $\rho$ and $\sigma$ \cite{jp09,ap05,pl08}. The QJSD itself does not obey the triangle inequality, but its square root obeys it for all pure states. In the case of mixed states there is no general proof of the triangle inequality, but numerical studies up to five qubits \cite{pl08} strongly indicate its validity.
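For later convenience, note that expanding the two relative entropies with $m \equiv (\rho+\sigma)/2$ puts the QJSD in a purely entropic form:
\begin{align}
\mathcal{J}(\rho,\sigma) &= \frac{1}{2}\left[-S(\rho) - \text{Tr}\, \rho \log m - S(\sigma) - \text{Tr}\, \sigma \log m \right] \nonumber \\
&= S\!\left(\frac{\rho+\sigma}{2}\right) - \frac{S(\rho)}{2} - \frac{S(\sigma)}{2},
\end{align}
since $\text{Tr}\,\rho\log m + \text{Tr}\,\sigma\log m = 2\,\text{Tr}\, m\log m = -2S(m)$, where $S(\rho) = -\text{Tr}\,\rho\log\rho$ is the von Neumann entropy.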
{\it Quantum coherence trade-offs.} The quantum coherence is defined as \cite{tb14} \begin{align} {\mathcal C} (\rho) & \equiv \underset{\sigma \in \mathcal{I}^{(b)}}{\hbox{min}} {\mathcal D} (\rho,\sigma), \label{coherencedef} \end{align} where $ {\mathcal D} $ is a distance measure and $\mathcal{I}^{(b)}$ is the set of incoherent states in a particular basis $ b $. The functional ${\mathcal C}$ is a quantum coherence measure if it obeys the properties \cite{tb14}: (i) ${\mathcal C} (\rho) \geq 0$ and ${\mathcal C}(\rho) = 0$ iff $\rho \in \mathcal{I}^{(b)} $; (ii) ${\mathcal C} (\rho)$ is invariant under unitary transformations; (iii) ${\mathcal C} (\rho)$ is monotonic under an ICPTP (incoherent completely positive and trace preserving) map; (iv) ${\mathcal C} (\rho)$ is monotonic on average under selective incoherent measurements; and (v) ${\mathcal C}(\rho)$ is non-increasing under mixing of quantum states (convexity). Eq. (\ref{coherencedef}) states that the amount of coherence in a given state is the distance to the closest incoherent state. This definition clearly depends on what we deem to be an incoherent state, and is responsible for the basis-dependent nature of $ \mathcal C $. Most generally, one may assume a form for an incoherent state $ \sigma = \sum_{k} p_{k} |b_k \rangle \langle b_k| $, where the $\{|b_k \rangle \}$ are a fixed particular basis choice $ b $ and the $ p_{k} $ are probabilities. Without the constraint of the fixed basis, it is always possible to write $ \sigma = \rho $ by taking the $ | b_k\rangle $ to be eigenvectors of $ \rho $, which immediately gives $ {\mathcal C} = 0 $. In this paper we are interested in how the overall coherence is distributed in a multipartite system.
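As a concrete illustration of the minimization in Eq. (\ref{coherencedef}), for the $\ell_1$-norm measure of Ref. \cite{tb14} the minimum is known in closed form: it is the sum of the absolute values of the off-diagonal elements of $\rho$ in the reference basis. A minimal sketch (the toy states below are ours):

```python
# l1-norm coherence of Baumgratz et al.: the closest incoherent state in
# the reference basis is the dephased (diagonal) state, so C_l1 reduces to
# the sum of the moduli of the off-diagonal entries of rho.

def l1_coherence(rho):
    n = len(rho)
    return sum(abs(rho[i][j]) for i in range(n) for j in range(n) if i != j)

plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+| : maximally coherent qubit
mixed = [[0.5, 0.0], [0.0, 0.5]]  # maximally mixed : incoherent
print(l1_coherence(plus), l1_coherence(mixed))  # 1.0 0.0
```

For the QJSD-based measure used in this paper no such closed form is assumed; the minimization over $\mathcal{I}^{(b)}$ is carried out numerically.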
For this reason it will be most interesting to choose a local basis \begin{equation} \sigma = \sum_{k} p_{k} \tau_{k,1}^{(b)} \otimes \cdots \otimes \tau_{k,N}^{(b)}, \label{separablestates} \end{equation} where $\tau_{k,n}^{(b)}$ is an incoherent state on the subsystem $n$, i.e. $\tau_{k,n}^{(b)} = \sum_{l} p_{l,n} |b_{l,n} \rangle \langle b_{l,n}|$. The set of states that are separable and in a basis $ b $ is called $\mathcal{I}^{(b)}_S$. This gives a natural way to study various coherence contributions within a multipartite system. As discussed above, we can distinguish between coherence that is localized on the subsystems $ n $ and collective coherence which cannot be attributed to particular subsystems (see Fig. \ref{fig1}(a)). To remove the contribution from the subsystems, we may relax the basis constraint $ b $ and minimize over the set of states (\ref{separablestates}). This contribution is independent of the basis choice, and is the coherence which is intrinsic to the system. We thus define the intrinsic coherence \begin{align} {\mathcal C}_I (\rho) \equiv \underset{\sigma_S \in \mathcal{I}_S}{\hbox{min}} {\mathcal D} (\rho,\sigma_S), \label{intrinsiccoherence} \end{align} where $ \mathcal{I}_S $ is the set of states of the form given in (\ref{separablestates}), but not necessarily in the basis $ b $. Thus the only constraint here is the general separable form of the state; the particular basis is not specified. Eq. (\ref{intrinsiccoherence}) is in fact equal to the entanglement, which is reasonable from the point of view that entanglement must contribute to coherence \cite{as15}.
The remaining contribution then originates from coherence that exists on the subsystems, and we can write the local coherence as \begin{align} {\mathcal C}_L (\rho) \equiv {\mathcal D} (\sigma_S^{\min} , \rho^d ), \label{localcoherence} \end{align} where $ \sigma_S^{\min} $ and $ \rho^d $ are the minimizing solutions of (\ref{intrinsiccoherence}) and (\ref{coherencedef}) respectively, and are implicit functions of $ \rho $. We may visualize the two different contributions according to Fig. \ref{fig1}(b). From the metric properties of $ \mathcal{D} $ and the triangle inequality we immediately see that \begin{align} {\mathcal C} \le {\mathcal C}_L + {\mathcal C}_I . \label{localintrinsic} \end{align} For a product state $\sigma_S^{\min}$, the coherence measure is subadditive, which leads to $ {\mathcal C}_L \le \sum_{n} {\mathcal C}_{L,n} $. We thus have \begin{align} {\mathcal C} \le \sum_{n=1}^N {\mathcal C}_{L,n} + {\mathcal C}_I, \label{localintrinsic2} \end{align} where $ {\mathcal C}_{L,n} $ is the coherence on each subsystem $ n $ separately. An illustrative example of the coherence decomposition is given by the ground state of the $N=2$ Ising model described by the Hamiltonian \begin{align} H = \lambda \sigma_{1}^{x} \sigma_{2}^{x} + J (\sigma_{1}^{x} + \sigma_{2}^{x}) + \epsilon \lambda (\sigma_{1}^{z} + \sigma_{2}^{z}) , \end{align} where $J,\lambda$ are coupling parameters and $ \epsilon $ is a small symmetry breaking term. The numerically estimated values of ${\mathcal C}_L$, ${\mathcal C}_I$, and ${\mathcal C}$ are given in Fig. \ref{fig2}(a), where we use the square root of the Jensen-Shannon divergence as our distance measure \begin{align} {\mathcal D} (\rho,\sigma)= \sqrt{\mathcal{J}(\rho,\sigma)} = \sqrt{S\left(\frac{\rho+\sigma}{2}\right) - \frac{S(\rho)}{2} - \frac{S(\sigma)}{2}}, \nonumber \end{align} where $ S( \rho) = - \text{Tr}\, \rho \log \rho $ is the von Neumann entropy.
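The entropic form of the distance measure above is straightforward to evaluate numerically. A minimal pure-Python sketch for $2\times 2$ real density matrices, with eigenvalues from the quadratic formula and base-2 logarithms so that $0 \le \mathcal{J} \le 1$ (the toy states are ours):

```python
import math

# Hedged sketch: evaluate J(rho, sigma) = S((rho+sigma)/2) - S(rho)/2 - S(sigma)/2
# for 2x2 real symmetric density matrices. Base-2 logs keep 0 <= J <= 1.

def eigvals2(m):
    """Eigenvalues of a 2x2 real symmetric matrix via the quadratic formula."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr + disc) / 2, (tr - disc) / 2

def entropy(m):
    """Von Neumann entropy in bits; zero eigenvalues contribute nothing."""
    return -sum(l * math.log2(l) for l in eigvals2(m) if l > 1e-12)

def qjsd(rho, sigma):
    mix = [[(rho[i][j] + sigma[i][j]) / 2 for j in range(2)] for i in range(2)]
    return entropy(mix) - entropy(rho) / 2 - entropy(sigma) / 2

plus = [[0.5, 0.5], [0.5, 0.5]]   # |+><+| : maximally coherent
zero = [[1.0, 0.0], [0.0, 0.0]]   # |0><0| : incoherent
print(qjsd(plus, zero))  # approx 0.60; sqrt of this is the distance D
```

Both test states are pure, so their entropies vanish and $\mathcal{J}$ reduces to the entropy of the equal mixture, whose eigenvalues are $(1 \pm 1/\sqrt{2})/2$.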
Taking the $ \{ |0 \rangle_n, |1 \rangle_n \} $ basis as the reference basis (this will be the case throughout this paper), we see that there is a crossover of the coherence contributions from intrinsic when $ J \ll \lambda $ to local when $ J \gg \lambda $. This is due to the fact that for $ J = 0, \epsilon \rightarrow 0 $ the ground state approaches the Bell state $ | 00 \rangle - | 11 \rangle $, while for $ \lambda = 0 $ the ground state is $ (|0\rangle-|1 \rangle ) (|0\rangle-|1 \rangle ) $, with intermediate $ J/\lambda $ giving an interpolation between these limits. The total coherence is less than the sum of the local and intrinsic contributions, following (\ref{localintrinsic}). \begin{figure} \caption{\label{fig2}} \end{figure} {\it Multipartite coherence.} The bipartite case studied above is the simplest case of more general trade-off relations in multipartite systems. One of the fundamental properties we investigate is the shareability of coherence between subsystems. For example, in a tripartite system $ \rho_{123} $ we may decompose the coherence using (\ref{localintrinsic}) according to \begin{align} \mathcal{C}_{123} \leq \mathcal{C}_1 + \mathcal{C}_2 + \mathcal{C}_3 + \mathcal{C}_{1:2:3} , \label{tripartite1} \end{align} where we have introduced a shorthand for the local coherence on subsystem $ n $ as $ \mathcal{C}_n = \mathcal{C}_{L,n} (\rho_n) $, with $\rho_n $ the reduced density matrix. For product states $\sigma_S^{\min}$, Eq. (\ref{tripartite1}) holds exactly. The intrinsic coherence $ \mathcal{C}_{1:2:3} = \mathcal{C}_I (\rho_{123}) $ is minimized over the set of separable states on the tripartite system. We note that as $ \mathcal{C}_{1:2:3} $ is an intrinsic coherence, it does not contain any coherence located {\it on} the sites, but contains all coherences {\it between} the sites.
We can also decompose a tripartite system in a bipartite fashion, leading to the relation \begin{align} \mathcal{C}_{123} \leq \mathcal{C}_1 + \mathcal{C}_2 + \mathcal{C}_3 + \mathcal{C}_{2:3}+ \mathcal{C}_{1:23}, \label{tripartite2} \end{align} where we first find the intrinsic coherence between 2 and 3, and then estimate the intrinsic coherence between 1 and the bipartite subsystem 23. Similar decompositions can be carried out with respect to the bipartitions 2:13 and 3:12 as well. From (\ref{tripartite1}), (\ref{tripartite2}) and the other possible bipartitions suggested above, we may deduce that \begin{align} \mathcal{C}_{1:2:3} & \simeq \mathcal{C}_{2:3}+ \mathcal{C}_{1:23} \simeq \mathcal{C}_{1:2}+ \mathcal{C}_{12:3} \simeq \mathcal{C}_{1:3}+ \mathcal{C}_{13:2} . \label{tribipartite} \end{align} To illustrate the various contributions, let us first consider the mixed GHZ states defined as $ \rho_{\text{GHZ}} = \frac{1 - \mu}{8} {\hat{\mathbb{1}}} + \mu \; |\text{GHZ} \rangle \langle \text{GHZ} | $ with $ |\text{GHZ} \rangle = \cos \phi |000 \rangle + \sin \phi |111 \rangle $, $\phi \in [0, 2\pi)$, and $0\le \mu \le 1$. The coherence is plotted in Fig. \ref{fig2}(b). For this class of states we find that the various contributions due to one and two sites are always zero: $ \mathcal{C}_{n} = \mathcal{C}_{m:n}=0 $. This means that the only coherence contribution originates from the intrinsic coherence where all three sites are involved. The total coherence is thus identical to the tripartite coherence, $ \mathcal{C} = \mathcal{C}_{1:2:3} $, which is verified numerically. It is also equal to the bipartitioned intrinsic coherence $ \mathcal{C} = \mathcal{C}_{l:mn} $, where $ l,m,n $ run over all permutations of the sites. This verifies the relation (\ref{tribipartite}) for this class of states. In contrast to the GHZ state, where there is only one coherence contribution, the W states have a trade-off relation similar to that seen in the transverse Ising model.
These are defined as $ |\text{W} \rangle = \sin \theta \cos \phi |100 \rangle + \sin \theta \sin \phi |010 \rangle + \cos \theta |001 \rangle $ with $0 \leq \phi < 2\pi$ and $0 \leq \theta \leq \pi$. The GHZ and W states are two classes of states which are inequivalent under local operations and classical communication. From Fig. \ref{fig2}(c) we see that the calculated coherence can be attributed to several contributions. Firstly, the coherence $ \mathcal{C}_{12:3} $ is always constant, as the state for the choice $ \theta = \pi/4 $ can be written as $|\text{W} \rangle = [ (\cos \phi |10 \rangle + \sin \phi |01 \rangle )|0 \rangle + |00 \rangle| 1 \rangle ]/\sqrt{2} $; thus there is always intrinsic coherence between the bipartition of sites 12 and 3. The coherences $ \mathcal{C}_{1:3} $ and $ \mathcal{C}_{2:3} $ show complementary behavior as the system oscillates between a Bell state between sites 13 ($\phi=n \pi$) and 23 ($\phi=(n+1/2)\pi$), with the remaining site being decoupled. There is coherence between sites 1 and 2 between these two extrema, giving $ \mathcal{C}_{1:2} $ with twice the oscillation frequency. The same ideas can be equally applied to more complex multipartite systems. The various coherence contributions can be used to understand the nature of the quantum states in quantum many-body systems. We illustrate this by analyzing the one-dimensional Heisenberg $XXZ$ model, one of the fundamental models in magnetism. The Hamiltonian of this model is \begin{align} H = J \sum_{n} (\sigma^{x}_{n} \sigma^{x}_{n+1} + \sigma^{y}_{n} \sigma^{y}_{n+1} + \Delta \sigma^{z}_{n} \sigma^{z}_{n+1}), \end{align} where $J$ is the nearest neighbor spin coupling and $\Delta$ is the anisotropy parameter. For an antiferromagnetic coupling $ J> 0 $, the system has a phase transition from the ferromagnetic axial regime to the antiferromagnetic planar regime at $\Delta =-1$. Using exact diagonalization techniques we estimate the various types of coherence as shown in Fig.
\ref{fig2}(d). In the ferromagnetic phase with $ \Delta <-1 $, all coherences vanish due to spontaneous symmetry breaking selecting a unique ferromagnetic ground state with all spins aligned in the $ \sigma^{z} $ basis. In the opposite limit $ \Delta \gg 1 $, the ground state is a superposition of N\'{e}el states, due to the two-fold degeneracy of these states: $ (|0101 \dots 01 \rangle + |1010 \dots 10 \rangle)/\sqrt{2} $. The coherence thus approaches the Bell state value $ \mathcal{C} = \mathcal{C}_{1:2 \dots N} \approx 0.56 $, with all other coherence contributions vanishing. Due to the spin-flip symmetry, the coherence on each site is always zero, $ \mathcal{C}_{n} = 0 $, and $ \mathcal{C}_{1:2 \dots N} $ can always be written in a Bell state form, resulting in a constant value. The two-site coherence contributions $ \mathcal{C}_{1:n} $ decrease with distance, as expected, due to the reduced correlations between these sites. Interestingly, at $ \Delta = -1 $ the two-site correlations all converge to the same value, which we attribute to the fact that this is close to the antiferromagnetic-ferromagnetic phase transition, which has the effect of increasing the overall coherence in the system. {\it Monogamy of coherence.} From our coherence decompositions, we arrive naturally at the notion of monogamy of coherence. In a tripartite system, if subsystems 2 and 3 are maximally coherent with respect to each other, this limits the amount of coherence that subsystem 1 can share with 2 and 3. This is immediately evident from Eq. (\ref{tripartite2}), where the coherence is decomposed into these two contributions. If subsystem 3 is coherently connected to both 1 and 2 then the tripartite system is described as polygamous, and otherwise as monogamous. The coherence monogamy relations may be identified from (\ref{tribipartite}), where we observe that the tripartite coherence $ \mathcal{C}_{1:2:3} $ can be decomposed into several bipartite coherences.
The genuine tripartite coherence can be estimated by subtracting the pairwise bipartite terms, giving \begin{align} \mathcal{C}_{1:2:3} - & \mathcal{C}_{1:2} -\mathcal{C}_{2:3} - \mathcal{C}_{1:3} \simeq \mathcal{C}_{1:23} -\mathcal{C}_{1:2} -\mathcal{C}_{1:3} . \end{align} For a multipartite system the monogamy inequality reads $ \mathcal{C}_{1:2\dots N} \geq \sum_{n=2}^{N} \mathcal{C}_{1:n} $. Thus, we define the multipartite monogamy of coherence with respect to a measure as \begin{equation} M = \sum_{n=2}^{N} \mathcal{C}_{1:n} - \mathcal{C}_{1:2\dots N} , \label{q-def} \end{equation} which is monogamous for $ M \le 0 $ due to the multipartite coherence that is present. For $M>0$ it is polygamous, since the dominant coherence is distributed in a pairwise fashion. In Fig. \ref{fig2}(c) we calculate (\ref{q-def}) for the W states. We find that $ M\ge 0 $ for all $\theta,\phi $; hence the state is strictly polygamous. For the GHZ states, as shown in Fig. \ref{fig2}(b), there is only one coherence contribution with $ \mathcal{C}_{1:n} = 0 $, which results in $ M = - \mathcal{C} $, meaning that the state is strictly monogamous. This is as expected, since the GHZ states are tripartite entangled, whereas the $W$ state has a bipartite nature \cite{wd00}. For the Heisenberg spin chain we find both monogamous ($\Delta > 2.9$) and polygamous ($-1 < \Delta < 2.9$) behavior (see Fig. \ref{fig2}(d) inset). In the $\Delta \gg 1$ region the ground state is a N\'{e}el state superposition, and the two-site coherences vanish, $ \mathcal{C}_{1:n} \rightarrow 0 $. The coherence is then entirely due to the $1$:$2\dots N $ bipartition, resulting in a monogamous state. This can be understood to be due to the fact that the N\'{e}el state superposition is essentially the same as a GHZ state up to a redefinition of the state labels.
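The classification implied by Eq. (\ref{q-def}) amounts to a simple sign test on the coherence contributions. A schematic sketch, with purely illustrative values (not computed from a real state):

```python
# Schematic evaluation of the monogamy indicator of Eq. (q-def):
# M = sum_n C_{1:n} - C_{1:2...N}; M <= 0 is monogamous, M > 0 polygamous.
# The numerical values below are assumed for illustration only.

def monogamy(pairwise, collective):
    """pairwise: list of C_{1:n}; collective: C_{1:2...N}."""
    return sum(pairwise) - collective

# GHZ-like case: no pairwise coherence, all of it collective.
print(monogamy([0.0, 0.0], 0.56))  # negative: monogamous
# W-like case: coherence distributed mostly pairwise.
print(monogamy([0.4, 0.4], 0.50))  # positive: polygamous
```

In the actual calculations each $\mathcal{C}_{1:n}$ and $\mathcal{C}_{1:2\dots N}$ is obtained by the constrained QJSD minimizations described above.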
For small $ \Delta $, there is a larger effect from the off-diagonal terms $ \sigma^x_n \sigma^x_{n+1} + \sigma^y_n \sigma^y_{n+1} = 2( \sigma^+_n \sigma^-_{n+1} + \sigma^-_n \sigma^+_{n+1}) $. This term tends to create coherence on nearby sites, which is more characteristic of polygamous behavior. In this way the parameter $ \Delta $ switches the nature of the coherence between monogamy and polygamy by redistributing it between relatively local sites and a genuinely multipartite form. {\it Conclusions.} Multipartite coherence is decomposed into local and intrinsic parts and quantified using an entropic measure with a metric nature. This decomposition into various contributions can be used not only to characterize a given state but also to locate the origin of the coherence. In many cases there is a crossover behavior between the coherences of different origins, which depends upon the type of state examined. In the transverse Ising model, the coherence transitions from local coherence on the sites to a GHZ-type multipartite nature. The coherence decomposition leads to a multipartite monogamy inequality for coherence measures, giving another way of characterizing the nature of coherence in these systems. In the Heisenberg $XXZ$ model the coherence displays a crossover between monogamous and polygamous behavior when the anisotropy parameter is varied. The framework provided in this paper allows for a simple way to understand the nature of an arbitrary quantum state, by characterizing the various coherence contributions, even for relatively complicated states in quantum many-body problems. In addition to providing a framework for decomposing coherence, we believe that the general method is potentially applicable in several contexts.
In the field of quantum simulation and quantum computing it is often of interest to understand what kind of quantum state is generated, either to understand the nature of a many-body system \cite{buluta09} or for the purposes of benchmarking \cite{rb14,tx15}. Finding the distribution of coherence provides a more illuminating way of understanding the nature of a quantum state. One of the contributions that quantum information has made to condensed matter physics is the introduction of entanglement as a quantity that can be used to characterize the state of a system \cite{osborne02}. It is an interesting question whether particular types of coherence could similarly be used to analyze quantum phase transitions. Furthermore, quantum limits to the shareability (i.e. monogamy) of entanglement are known to be related to frustration in many-body systems \cite{ko01,mw04,af07,xm11,sg15}, and to affect the coherence and entanglement structure in the system. This has a direct effect on approaches to efficiently capture the wavefunction of interacting quantum many-body systems, such as matrix product states and their variants \cite{fv06,gv08}. In quantum metrology, a recent development has been the use of local rather than global strategies to gain interferometric advantages \cite{bh07,js15,pk16}, which highlights the resource nature of coherence. An interesting future possibility for the QJSD is that, due to its distance and metric properties and entropic nature, it could contribute to differential-geometry-based approaches to quantum information theory and to the understanding of the geometry of quantum states \cite{ib06}. This work is supported by the Shanghai Research Challenge Fund, New York University Global Seed Grants for Collaborative Research, National Natural Science Foundation of China grant 61571301, and the Thousand Talents Program for Distinguished Young Scholars. \end{document}
\begin{document} \title{Perturbation bounds and degree of imprecision for uniquely convergent imprecise Markov chains} \begin{abstract} The effect of perturbations of parameters for uniquely convergent imprecise Markov chains is studied. We bound the maximal distance between the distributions of the original and perturbed chains, and the maximal degree of imprecision, given the imprecision of the initial distribution. The bounds on the errors and degrees of imprecision are found for the distributions at finite time steps, as well as for the stationary distributions. \noindent {\bfseries Keywords.} imprecise Markov chain, sensitivity of imprecise Markov chain, perturbation of imprecise Markov chains, degree of imprecision, weak coefficient of ergodicity \noindent 2010 Mathematics Subject Classification: 60J10 \end{abstract} \section{Introduction} Markov chains depend on a large number of parameters whose values are often subject to uncertainty. In the long run even small changes in the initial distribution or transition probabilities may cause large deviations. Several approaches to cope with uncertainty and estimate its magnitude have therefore been developed. Perturbation analysis gives estimates for the differences between probability distributions when the processes evolve in time (\cite{Mitrophanov2005, Kartashov1986, Kartashov1986a, rudolf2015perturbation}) or for the stationary distributions (\cite{cho2001comparison, mouhoubi2010new}), based on the differences in parameters (see also \cite{Mitrophanov2003, mitrophanov2005ergodicity} for the continuous time case). In recent decades a variety of models of \emph{imprecise probabilities} \cite{augustin2014introduction} have been developed. They provide means of expressing probabilistic uncertainty in such a way that no reference to particular precise models is needed. The results thus reflect exactly the amount of uncertainty that results from the uncertainty in the inputs.
Uncertainty in the parameters of Markov chains was first addressed with models of this kind by \citet{hart:98}, without formally connecting it to the theory of imprecise probabilities, although applying similar ideas. More recently, \citet{decooman-2008-a} formally linked Markov chains with the theory of upper previsions, while \citet{skulj:09} proposed an approach based on the theory of interval probabilities. Unique convergence of imprecise Markov chains has been further investigated by \citet{hermans2012characterisation}, where ergodicity of upper transition operators is characterized in several ways, and by \citet{2013:skulj-hable}, who investigated the generalization of coefficients of ergodicity. \citet{skulj:13b} also studied the structure of non-uniquely convergent imprecise Markov chains. Although at first glance models of imprecise probabilities and classical perturbation models seem to have the same objective, approached from different angles, this is often not so. In fact they answer essentially different questions. While imprecise probabilities replace classical probabilities with generalized models capable of reflecting uncertainty or imprecision, perturbation models give numerical information on how much uncertainty to expect in the results, given the amount of uncertainty in the inputs. Moreover, it is quite natural to apply perturbation analysis to models of imprecise probabilities. The goal of this paper is thus to apply results from perturbation theory to imprecise Markov chains. There are two main reasons for this. The first is that imprecise models, too, are sensitive to changes of the input parameters. In the case of imprecise Markov chains this means that changes in the bounds of the input models affect the bounds of the distributions at later times, in a similar way as in the case of precise models.
The other, maybe even more important reason is that currently in the theory of imprecise probabilities little attention has been paid to the 'degree of imprecision', that is, the maximal distances between, say, lower and upper bounds of probabilities, at least in comparison with the attention received by the methods for calculating these bounds. We thus also estimate how the 'degree of imprecision' evolves in time for imprecise Markov chains. The paper is structured as follows. In Sc.~\ref{s-imc} we introduce imprecise Markov chains with a special emphasis on the representation of the probability distributions after a number of time steps. In Sc.~\ref{s-mpio} we introduce the metric properties of imprecise operators, which allow us to measure the distances between imprecise probability distributions. By means of these distances we also define the degree of imprecision. We show that, with the exception of the special case of 2-alternating upper probabilities (or 2-monotone lower probabilities), it is hard to find the exact distance between two imprecise probability models. In Sc.~\ref{s-pio} we analyse the effects of perturbations of parameters on the deviations of the perturbed chains from the original ones. With a similar method we also study how the 'degree of imprecision' of the process grows in time. In Sc.~\ref{s-ex} we apply the analysis to the case of contamination models and give a numerical example. \section{Imprecise Markov chains}\label{s-imc} \subsection{Imprecise distributions and upper expectation functionals} Let $\mathcal X$ be a finite set of \emph{states}. A \emph{probability distribution} of some random variable $X$ over $\mathcal X$ is given in terms of an \emph{expectation functional} $E$ on the space of real-valued maps $f$ on $\mathcal X$, which we will denote by $\mathcal L(\mathcal X)$: \begin{equation} E(f) = \sum_{x\in \mathcal X} P(X = x) f(x).
\end{equation} An \emph{imprecise probability distribution} of a random variable $X$ is given in terms of a closed convex set of expectation functionals $\mathcal M$, called a \emph{credal set}. To every imprecise probability distribution a unique \emph{upper expectation functional} \begin{equation} \up E(f) = \sup_{E\in \mathcal M} E(f) \end{equation} can be assigned. Upper expectation functionals are in a one-to-one correspondence with credal sets. Moreover, a functional $\up E$ is an upper expectation functional with respect to a credal set $\mathcal M$ if and only if it satisfies the following properties: \begin{enumerate}[(i)] \item $\min_{x\in\mathcal X} f(x) \le \up E(f)\le \max_{x\in\mathcal X} f(x)$ (boundedness); \item $\up E(f_1+f_2) \le \up E(f_1) + \up E(f_2)$ (subadditivity); \item $\up E(\lambda f) = \lambda\up E(f)$ (non-negative homogeneity); \item $\up E(f + \mu 1_{\mathcal X}) = \up E(f) + \mu$ (constant additivity); \item if $f_1\le f_2$ then $\up E(f_1) \le \up E(f_2)$ (monotonicity), \end{enumerate} where $f, f_1, f_2\in \mathcal L(\mathcal X)$ are arbitrary, $\lambda$ a non-negative real constant, $\mu$ an arbitrary real constant, and $1_{\mathcal X}$ the constant map 1 on $\mathcal X$. An upper expectation functional $\up E$ can be supplemented by a \emph{lower expectation functional} with respect to the same credal set by assigning: \begin{equation} \low E(f) = \min_{E\in\mathcal M} E(f). \end{equation} The following duality relation holds: \begin{equation}\label{eq-up-low-duality} \low E(f) = -\up E(-f) \end{equation} for every $f\in\mathcal L(\mathcal X)$. A lower expectation functional $\low E$ satisfies the same properties (i)--(v) as the upper ones, except for (ii), where subadditivity is replaced by \begin{enumerate}[(ii)'] \item $\low E(f_1+f_2) \ge \low E(f_1) + \low E(f_2)$ (superadditivity). 
\end{enumerate} \subsection{Representations of uncertainty} In the previous section we described upper (and lower) expectation functionals, which uniquely represent convex sets of probability distributions. In principle such functionals possess properties similar to those of precise expectation functionals corresponding to precise probability distributions. However, there is a huge difference when it comes to the ways of specifying a convex set of probabilities compared to specifying a single probability distribution. In the latter case we need a single probability density function or, in the case of finite spaces, a probability mass function. In the case of state spaces of finite Markov chains, we thus need to specify the probability of each state. In contrast, even in the case of finite spaces there are in general infinitely many values needed to specify a convex set of probability measures; in the case of convex polytopes this number is finite, but often large. Every convex polytope can be represented by specifying its extreme points or as an intersection of a set of half spaces separated by hyperplanes. The latter is far more natural and useful in the case of imprecise probabilities. This approach is behind many particular models, such as \emph{lower} and \emph{upper probabilities}, \emph{probability intervals} or \emph{coherent lower} and \emph{upper previsions} \citep{augustin2014introduction, miranda:07, troffaes2014lower}. The most general of those models are lower and upper previsions, which generalize all the other models, including lower and upper expectation functionals. Although most researchers of imprecise probabilities list properties for lower previsions, in the theory of imprecise Markov chains upper previsions are more often used. In general, an upper prevision $\up P\colon \mathcal K\to \mathbb{R}$ is a map on some set $\mathcal K$ -- not necessarily a vector space -- of measurable maps $\mathcal X\to \mathbb{R}$.
What is important for our present model is the following equivalent definition of coherence for finite probability spaces. \begin{defn} Let $\mathcal K\subseteq \mathcal L(\mathcal X)$ be a set of real-valued maps on a finite set $\mathcal X$ and $\up P\colon \mathcal K\to \mathbb{R}$ a mapping such that there exists a closed and convex set $\mathcal M$ of (precise/linear) expectation functionals with $\up P(f) = \max_{P\in \mathcal M}P(f)$ for every $f\in\mathcal K$. Then $\up P$ is a coherent upper prevision on $\mathcal K$. \end{defn} Note that any coherent upper prevision $\up P$ allows a canonical extension to the entire $\mathcal L(\mathcal X)$ by defining \begin{equation} \up E(f)=\max_{P\in \mathcal M}P(f). \end{equation} This upper expectation functional is called the \emph{natural extension} of $\up P$, and is clearly itself a coherent upper prevision on $\mathcal L(\mathcal X)$. Upper expectation functionals thus form a subclass of coherent upper previsions. Another subclass of imprecise probability models consists of \emph{lower} and \emph{upper probabilities}. A lower and upper probability $\low P$ and $\up P$, respectively, are defined as a pair of real-valued maps on a class $\mathcal A$ of subsets of $\mathcal X$. For every $A\in\mathcal A$, its probability is assumed to lie within $[\low P(A), \up P(A)]$. They too allow the formation of a credal set $\mathcal M$, whose members are exactly those expectation functionals $E$ that satisfy the conditions $E(1_A) \ge \low P(A)$ and $E(1_A)\le \up P(A)$ for every $A\in \mathcal A$. If the bounds $\low P(A)$ and $\up P(A)$ are reachable by the members of $\mathcal M$, then the lower/upper probabilities are said to be \emph{coherent}. When the lower and upper probabilities are defined on the set of elementary events $A= \{x\}$, where $x\in\mathcal X$, we are talking about \emph{probability intervals (PRI)}, which, if coherent, also form a subclass of coherent upper previsions.
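For the probability-interval model just described, the upper expectation over the corresponding credal set can be evaluated without general-purpose optimization: maximizing $\sum_x p(x)f(x)$ over the box $\low p\le p\le \up p$ intersected with the simplex is solved by a greedy allocation of the free probability mass. A minimal sketch (the state space, interval bounds and functions are illustrative; we assume the intervals are proper, i.e. $\sum_x \low p(x)\le 1\le \sum_x \up p(x)$):

```python
def upper_expectation(f, low, up):
    """Maximize sum(p*f) over p with low <= p <= up and sum(p) == 1.

    Greedy: start at the lower bounds and push the remaining
    probability mass towards the states with the largest f-values.
    """
    p = list(low)
    rest = 1.0 - sum(low)                 # mass still to be distributed
    for i in sorted(range(len(f)), key=lambda i: -f[i]):
        add = min(rest, up[i] - low[i])
        p[i] += add
        rest -= add
    return sum(pi * fi for pi, fi in zip(p, f))

def lower_expectation(f, low, up):
    # Conjugacy: lowE(f) = -upE(-f)
    return -upper_expectation([-x for x in f], low, up)

low = [0.2, 0.3, 0.1]
up  = [0.5, 0.6, 0.4]
print(upper_expectation([1, 0, 0], low, up))    # 0.5
print(upper_expectation([1, 0.5, 0], low, up))  # 0.7
```

The lower expectation follows from the duality $\low E(f)=-\up E(-f)$ stated earlier; for the indicator of a single state the greedy value reduces to the familiar bound $\min\{\up p(x),\,1-\sum_{y\ne x}\low p(y)\}$.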
When an upper probability $\up P$ satisfies the following inequality: \begin{equation} \up P(A \cup B) + \up P(A\cap B) \le \up P(A) + \up P(B), \end{equation} we call it 2-\emph{alternating}. At the same time, the corresponding lower probability $\low P$, which satisfies the conjugacy relation $\low P(A) = 1-\up P(A^c)$, is 2-\emph{monotone}: \begin{equation} \low P(A \cup B) + \low P(A\cap B) \ge \low P(A) + \low P(B). \end{equation} If an imprecise probability model is given in the form of probability intervals $[\low p(x), \up p(x)]$ for all elements $x\in\mathcal X$, we can first extend the bounds to all subsets of $\mathcal X$ by \begin{equation} \low P(A) = \max\left\{\sum_{x\in A} \underline p(x), 1-\sum_{x\in A^c} \overline p(x) \right\}, \text{ for every } A\subseteq \mathcal X. \end{equation} It can be shown that in this case $\low P$ is 2-monotone (see e.g. \cite{decampos:94}). \subsection{Calculating the natural extension as a linear programming problem} Given a (coherent)\footnote{Coherence is in fact not important for calculating the natural extension.} upper prevision $\up P$ on a set $\mathcal K$, its natural extension to $\mathcal L(\mathcal X)$ can be calculated as a linear programming problem as follows: \begin{quote} Maximize \begin{align} E(f) & = \sum_{x\in\mathcal X} p(x)f(x) \intertext{subject to} \sum_{x\in\mathcal X} p(x) & = 1 \\ \sum_{x\in\mathcal X} p(x)h(x) & \le \up P(h) \end{align} for every $h\in \mathcal K$.
\end{quote} Although there exist efficient algorithms for solving linear programming problems, in the case of 2-monotone lower or 2-alternating upper probabilities the expectation bounds can be calculated even more efficiently by the use of the \emph{Choquet integral} (see e.g.~\cite{den:97}): \begin{align} \up E(f) & = \min f + \int_{\min f}^{\max f} \up P(f\ge x)\, dx \intertext{and} \low E(f) & = \min f + \int_{\min f}^{\max f} \low P(f\ge x)\, dx. \end{align} \subsection{Imprecise transition operators} An \emph{(imprecise) Markov chain} $\{ X_n \}_{n\in \mathbb{N}_0}$ with the set of states $\mathcal X$ is specified by an initial (imprecise) distribution and an (imprecise) transition operator. A \emph{transition operator} assigns a probability distribution of $X_{n+1}$ conditional on $(X_n = x)$. As before, we represent the conditional distribution with a conditional expectation functional $T(\cdot|x)$, mapping a real-valued map $f$ on the set of states to $T(f|x)$. A transition operator $T\colon \mathcal L(\mathcal X)\to \mathcal L(\mathcal X)$ then maps $f\mapsto Tf$ such that \begin{equation} Tf(x) = T(f|x). \end{equation} That is, the value of $Tf(x)$ equals the conditional expectation of $f$ at time $n+1$ if the chain is in $x$ at time $n$. Replacing precise expectations $T(\cdot|x)$ with the imprecise ones given in terms of upper expectation functionals $\up T(\cdot|x)$, we obtain an imprecise transition operator $\up T\colon \mathcal L(\mathcal X)\to \mathcal L(\mathcal X)$ defined with \begin{equation} \up Tf(x) = \up T(f|x). \end{equation} To every conditional upper expectation functional $\up T(\cdot | x)$ a credal set can be assigned, and therefore a set of transition operators $\mathcal T$ can be formed so that $\up Tf = \max_{T\in \mathcal T}Tf$.
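The Choquet-integral evaluation described above has a simple discrete form on a finite state space: sort the values of $f$ in decreasing order and sum the increments weighted by the upper probabilities of the level sets. A sketch for the upper probability extended from probability intervals, which is 2-alternating (the numerical bounds are illustrative):

```python
def upper_prob(A, low, up):
    """Upper probability of a set A of state indices, extended from
    probability intervals (conjugate of the 2-monotone lower bound)."""
    Ac = [i for i in range(len(low)) if i not in A]
    return min(sum(up[i] for i in A), 1.0 - sum(low[i] for i in Ac))

def choquet_upper(f, low, up):
    """Discrete Choquet integral of f w.r.t. the upper probability.

    The final increment f_min * upper_prob(X) = f_min reproduces the
    "min f" offset of the integral formula, so any real f is allowed.
    """
    order = sorted(range(len(f)), key=lambda i: -f[i])   # f descending
    total, level = 0.0, []
    for k, i in enumerate(order):
        level.append(i)                   # current level set {f >= f[i]}
        nxt = f[order[k + 1]] if k + 1 < len(order) else 0.0
        total += (f[i] - nxt) * upper_prob(level, low, up)
    return total

low = [0.2, 0.3, 0.1]
up  = [0.5, 0.6, 0.4]
print(choquet_upper([1, 0.5, 0], low, up))  # 0.7
```

The same value is obtained by solving the linear program of the previous subsection directly, as guaranteed for 2-alternating upper probabilities.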
Given an upper expectation functional $\up E_0$ corresponding to the distribution of $X_0$ and an upper transition operator $\up T$, we obtain \begin{equation} \up E_n (f) = \up E_0(\up T^nf), \end{equation} where $\up E_n$ is the upper expectation functional corresponding to the distribution of $X_n$. \begin{ex}\label{ex-1} Let an imprecise transition operator be given in terms of a lower and an upper transition matrix: \begin{align} \low M = \begin{bmatrix} 0.33 & 0.33 & 0 \\ 0.33 & 0.17 & 0.25 \\ 0 & 0.5 & 0.42 \end{bmatrix} \qquad \text{and} \qquad \up M = \begin{bmatrix} 0.67 & 0.67 & 0 \\ 0.58 & 0.42 & 0.5 \\ 0 & 0.58 & 0.5 \end{bmatrix}. \end{align} The upper transition operator is then \begin{equation}\label{ex-1-lp-T} \up Tf = \max_{\substack{\low M \le M \le \up M\\ M1_\mathcal X = 1_\mathcal X}} Mf. \end{equation} Thus, $\up Tf$ is the maximum of $Mf$ over all row stochastic matrices that lie between $\low M$ and $\up M$. Since each row can be maximized separately, the componentwise maximal vector does exist. Further let \begin{equation} \low P_0 = (0.33, 0.25, 0.25) \qquad \text{and} \qquad \up P_0 = (0.38, 0.38, 0.42) \end{equation} be the bounds for the initial probability mass functions, whence the initial upper expectation functional is defined with \begin{equation}\label{ex-1-lp-E} \up E_0(f) = \max_{\substack{\low P_0 \le P \le \up P_0\\ P(1_\mathcal X) = 1}} P(f). \end{equation} Both $\up T$ and $\up E_0$ are operators whose values are obtained as solutions of linear programming problems. As we explained in the previous sections, we would hardly expect $\up E_n$, that is, the upper expectation functional for $X_n$, to be easily expressed in terms of a single linear programming problem, but rather as a sequence of problems.
Thus, to obtain the value of $\up E_n(f)$ for some $f\in\mathcal L(\mathcal X)$, we would first find $\up Tf$ as a solution of the linear program \eqref{ex-1-lp-T} and then use it in the next instance of the linear program, with the objective function replaced by the expectation of $\up Tf$, to obtain $\up T^2f$, until finally we would maximize $\up E_0(\up T^nf)$ as a linear program of the form \eqref{ex-1-lp-E}. Practically this means that even though $\up E_0$ is the natural extension of a simple probability interval, the linear programs for $\up E_n$ are in general much more complex. A similar situation occurs with an $n$-step transition operator $\up T^n$. Nevertheless, we might still be interested in the lower and upper bounds for probabilities of events of the form $(X_n = x)$. We can, for instance, find the upper probability $\up P(X_n = x)$ as $\up E_0\up T^n 1_{\{x\}}$, understanding of course that the imprecise probability model for the distribution of $X_n$ is no longer a simple probability interval. The above leads us to the idea that even if the imprecise transition operators $\up T^n$ cannot be expressed in a simple form comparable to matrices, we might still want to provide the lower and upper transition matrices containing the information on the conditional probabilities $\low P(X_{m+n} = y|X_m = x)$ and $\up P(X_{m+n} = y|X_m = x)$, again bearing in mind that this is not exhaustive information on the imprecise transition model. The upper probability can, for instance, be calculated by finding $\up T^n1_{\{y\}}(x)$.
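The sequence of optimization problems just described can be sketched for this example. For interval bounds, each row-wise linear program in \eqref{ex-1-lp-T} has a closed-form greedy solution, so the iteration reduces to repeated row maximizations (a sketch, not necessarily the computation used for the numbers below):

```python
def maximize_row(f, low_row, up_row):
    """max of sum(m*f) over low_row <= m <= up_row with sum(m) == 1."""
    m = list(low_row)
    rest = 1.0 - sum(low_row)
    for i in sorted(range(len(f)), key=lambda i: -f[i]):
        add = min(rest, up_row[i] - low_row[i])
        m[i] += add
        rest -= add
    return sum(mi * fi for mi, fi in zip(m, f))

def upper_T(f, lowM, upM):
    """(upT f)(x): each row is maximized separately."""
    return [maximize_row(f, lo, hi) for lo, hi in zip(lowM, upM)]

lowM = [[0.33, 0.33, 0.0], [0.33, 0.17, 0.25], [0.0, 0.5, 0.42]]
upM  = [[0.67, 0.67, 0.0], [0.58, 0.42, 0.5], [0.0, 0.58, 0.5]]
lowP0, upP0 = [0.33, 0.25, 0.25], [0.38, 0.38, 0.42]

# Upper probabilities of the events (X_3 = y): upE_0(upT^3 1_{y})
upP3 = []
for y in range(3):
    f = [1.0 if i == y else 0.0 for i in range(3)]
    for _ in range(3):
        f = upper_T(f, lowM, upM)
    upP3.append(maximize_row(f, lowP0, upP0))
print(upP3)
```

A one-step sanity check: for an indicator $1_{\{y\}}$ the greedy value in row $x$ equals $\min\{\up M_{xy},\,1-\sum_{z\ne y}\low M_{xz}\}$.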
In our case, the lower and upper probability mass vectors for $X_3$ are \begin{equation} \low P_3 = (0.1966, 0.2672, 0.1513) \qquad \text{and} \qquad \up P_3 = (0.5293, 0.5799, 0.3903), \end{equation} and the lower and upper 3-step transition probabilities are \begin{align} \low M_3 = \begin{bmatrix} 0.2195 & 0.2500 & 0.1040 \\ 0.2195 & 0.2583 & 0.1533 \\ 0.1650 & 0.3067 & 0.2205 \end{bmatrix} \qquad \text{and} \qquad \up M_3 = \begin{bmatrix} 0.5898 & 0.5992 & 0.3350 \\ 0.5383 & 0.5730 & 0.4175 \\ 0.4239 & 0.5609 & 0.4175 \end{bmatrix} \end{align} \end{ex} \section{Metric properties of imprecise operators}\label{s-mpio} \subsection{Distances between upper operators} Let $\mathcal L_1 = \{ f\in\mathcal L(\mathcal X) \colon 0 \le f(x) \le 1\, \forall x\in \mathcal X \}$. In \cite{2013:skulj-hable} the following distance between two upper expectation functionals $\up E$ and $\up E'$ is defined: \begin{equation}\label{eq-distance-functionals} d(\up E, \up E') = \max_{f\in \mathcal L_1} |\up E(f) - \up E'(f)|. \end{equation} When restricted to precise expectation functionals, the above distance coincides with the \emph{total variation distance} for probability measures: \begin{equation} d(P, Q) = \max_{A\in \mathcal F}|P(A)-Q(A)|, \end{equation} for two probability measures on an algebra $\mathcal F$. For real-valued maps on $\mathcal X$ we use the \emph{Chebyshev distance} \begin{equation} d(f, g) = \max_{x\in\mathcal X} |f(x)-g(x)|, \end{equation} which we also extend to upper transition operators $\up T$ and $\up T'$: \begin{equation} d(\up T, \up T') = \max_x d(\up T(\cdot |x), \up T'(\cdot |x)) = \max_{f\in \mathcal L_1}d(\up Tf, \up T'f) . \end{equation} Subadditivity of the upper expectation functionals implies that \begin{equation}\label{eq-norm1-ue} |\up E (f) - \up E(g)| \le |\up E (f-g)|\vee |\up E (g-f)| \le d(f, g) . 
\end{equation} Hence, \begin{equation} d(\up Tf, \up Tg) = \max_{x\in\mathcal X} |\up T(f|x) - \up T(g|x)| \le d(f, g) \end{equation} for every upper transition operator $\up T$. Similarly, we have that \begin{equation}\label{eq-dist-ET} d(\up E\, \up T, \up E\, \up T') = \max_{f\in\mathcal L_1} |\up E( \up T f) - \up E( \up T' f) | \le \max_{f\in\mathcal L_1} d( \up T f, \up T' f ) = d(\up T, \up T') . \end{equation} It is easy to see that the distances defined using the corresponding lower expectation functionals and transition operators are the same as those where the upper expectations are used. In practical situations the upper expectations are usually the natural extensions of some coherent upper previsions defined on some subset $\mathcal K\subset\mathcal L(\mathcal X)$. It would therefore be very useful if the differences between the values of two upper previsions on the elements of $\mathcal K$ gave some information on the distances between their natural extensions. In general, however, this does not seem to be possible. For an illustration we give the following simple example. \begin{ex} Let $\mathcal X$ be a set of 3 states, say $x, y, z$, and $\mathcal K = \{ f_1 = (0, 1, 0), f_2 = (0.1, 1, 0) \}.$ Then let two lower/upper previsions be given with the values: \begin{align*} \low P_1(f_1) & = 0.3 & \low P_2(f_1) &= 0.3 \\ \up P_1(f_2) & = 0.305 & \up P_2(f_2) &= 0.306 \end{align*} Now let $\up E_1$ and $\up E_2$ be the corresponding natural extensions and $h = (1, 0.5, 0)$. Then we have that \[ \up E_1(h) = 0.2 \qquad\text{and}\qquad \up E_2(h) = 0.21. \] Thus, although the maximal distance between $\up P_1$ and $\up P_2$ on $\mathcal K$ is only $0.001$, the distance $d(\up E_1, \up E_2)$ is at least $0.01$, which is 10 times larger.
\end{ex} Unlike the general case, in the case of 2-monotone lower probabilities the following holds (\cite{2013:skulj-hable}, Proposition~22): \begin{prop}\label{prop-dist-2-alt} Let $\low P_1$ and $\low P_2$ be 2-monotone lower probabilities and $\low E_1$ and $\low E_2$ the corresponding lower expectation functionals. Then \begin{equation} d(\low E_1, \low E_2) = \max_{A\subseteq \mathcal X} |\low P_1(A) - \low P_2(A)|. \end{equation} \end{prop} The fact that $\low P_1(A)-\low P_2(A) = (1-\up P_1(A^c)) - (1-\up P_2(A^c)) = \up P_2(A^c)-\up P_1(A^c)$ and $d(\up E_1, \up E_2) = d(\low E_1, \low E_2)$ implies that the choice of either upper or lower functionals does not make any difference. \subsection{Distances between upper and lower operators}\label{ss-dulo} Let $\mathcal M_1$ and $\mathcal M_2$ be two credal sets with the corresponding lower and upper expectation functionals denoted by $\up E_1, \low E_1$ and $\up E_2, \low E_2$ respectively. It has been shown in \cite{2013:skulj-hable} that the maximal distance between the elements of two credal sets can be expressed in terms of the distance between the corresponding expectation functionals: \begin{align} \up d(\mathcal M_1, \mathcal M_2) & := \max_{E_1\in \mathcal M_1, E_2\in \mathcal M_2} d(E_1, E_2) \\ & = \max_{f\in \mathcal L_1}\max \{ \up E_1(f)-\low E_2(f), \up E_2(f) - \low E_1(f) \} \intertext{Now, since $1-f\in \mathcal L_1$ iff $f\in \mathcal L_1$ and using $\up E_i (1-f) = 1-\low E_i(f)$, the above simplifies into:} & = \max_{f\in \mathcal L_1} \up E_1(f)-\low E_2(f). \end{align} It follows that \begin{equation} \up d(\mathcal M, \mathcal M) = \max_{f\in \mathcal L_1} \up E(f)-\low E(f), \end{equation} which could be regarded as a measure of imprecision of a credal set. The above equalities justify the following definition of a distance between upper and lower expectation functionals. 
\begin{equation} d(\up E_1, \low E_2) = \max_{f\in \mathcal L_1} \up E_1(f)-\low E_2(f), \end{equation} and \begin{equation} d(\up T, \low T) = \max_{x\in\mathcal X}d(\up T(\cdot | x), \low T(\cdot|x)). \end{equation} The following proposition holds: \begin{prop} Let $\up E_1$ be an upper and $\low E_2$ a lower expectation functional. Then \begin{equation} \max_{f\in \mathcal L_1}\up E_1(f)-\low E_2(f) = \max_{A\subseteq \mathcal X} \up E_1(1_A) - \low E_2(1_A), \end{equation} where $1_A$ denotes the indicator function of the set $A$. This implies that \begin{equation} d(\up E_1, \low E_2) = \max_{A\subseteq \mathcal X}\up E_1(1_A)-\low E_2(1_A). \end{equation} \end{prop} \begin{proof} Let $\mathcal M_1$ and $\mathcal M_2$ be the credal sets corresponding to $\up E_1$ and $\low E_2$. We have that \begin{align} d(\up E_1, \low E_2) & = \max_{E_1\in \mathcal M_1, E_2\in \mathcal M_2} d(E_1, E_2) \\ & = \max_{E_1\in \mathcal M_1, E_2\in \mathcal M_2} \max_{f\in\mathcal L_1}E_1(f) - E_2(f). \end{align} For every (precise) expectation functional $E_i$ there exists some probability mass function $p_i$ so that $E_i(f) = \sum_{x\in\mathcal X} p_i(x)f(x)$. Now let $A=\{ x\colon p_1(x) \ge p_2(x) \}$ and let $F=1_A$. For every $f\in\mathcal L_1$ we have that \begin{align} E_1(f) - E_2(f) & = \sum_{x\in\mathcal X} (p_1(x)-p_2(x))f(x) \\ & \le \sum_{x\in\mathcal X} (p_1(x)-p_2(x))F(x) \\ & = E_1(F) - E_2(F). \end{align} Thus the difference $E_1(f)-E_2(f)$ is always maximized by an indicator function. Hence, \begin{align} d(\up E_1, \low E_2) & = \max_{E_1\in \mathcal M_1, E_2\in \mathcal M_2} \max_{A\subseteq\mathcal X}E_1(1_A) - E_2(1_A) \\ & = \max_{A\subseteq\mathcal X}\{ \max_{E_1\in \mathcal M_1} E_1(1_A) - \min_{E_2\in \mathcal M_2} E_2(1_A)\} \\ & = \max_{A\subseteq\mathcal X} \up E_1(1_A) - \low E_2(1_A). \end{align} \end{proof} \subsection{Coefficients of ergodicity} \emph{Coefficients of ergodicity} measure the rate of convergence of Markov chains.
Given a metric $d$ on the set of probability distributions, a coefficient of ergodicity is a real-valued map $\tau\colon T\mapsto \tau(T)$ with the property that \[ d(pT, qT) \le \tau(T)d(p, q), \] where $p$ and $q$ are arbitrary probability mass functions. In the general form coefficients of ergodicity were defined by \citet{seneta1979coefficients}, while in the form where the total variation distance is used, the coefficient was introduced by \citet{dobrushin:56}. Given a stochastic matrix $P$, the value of $\tau(P)$, assuming the total variation distance, equals the maximal distance between the rows of $P$: \begin{equation} \tau (P) = \max_{i, j}d(P_i, P_j), \end{equation} where $P_i$ and $P_j$ are the $i$-th and $j$-th rows of $P$ respectively. In operator notation we would write \begin{equation} \tau(T) = \max_{x, y}d(T(\cdot|x), T(\cdot|y)). \end{equation} The general definition clearly implies that \begin{equation} d(pT^n, qT^n) \le \tau(T)^n d(p, q). \end{equation} Coefficients of ergodicity are also called \emph{contraction coefficients}. Since transition operators are always non-expanding, which means that $\tau(T)\le 1$, the case of interest is usually when $\tau(T)$ is strictly less than 1. In that case $\tau(T)^n$ tends to 0 as $n$ approaches infinity, which means that the distance $d(pT^n, qT^n)$ approaches 0. This means that the distance between the probability distributions of the random variables $X_n$ of the corresponding Markov chains is diminishing, or equivalently, that the distributions converge to a unique limit distribution. Often, despite $\tau(T)=1$, the value of $\tau(T^r)$ might be strictly less than 1, which is sufficient to guarantee unique convergence. In fact, a chain is uniquely convergent exactly if $\tau(T^r)<1$ for some positive integer $r$. Coefficients of ergodicity have been generalized to the case of imprecise Markov chains too.
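For a precise stochastic matrix, the coefficient $\tau$ and its contraction property can be checked directly (a minimal sketch; the matrix is illustrative):

```python
def tv(p, q):
    """Total variation distance max_A |P(A) - Q(A)| = (1/2) * L1 norm."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def tau(P):
    """Dobrushin coefficient: maximal TV distance between rows of P."""
    n = len(P)
    return max(tv(P[i], P[j]) for i in range(n) for j in range(n))

def step(p, P):
    """One Markov step: p -> pP."""
    return [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P[0]))]

P = [[0.9, 0.1], [0.2, 0.8]]
p, q = [1.0, 0.0], [0.0, 1.0]
print(tau(P))                       # 0.7
print(tv(step(p, P), step(q, P)))   # bounded by tau(P) * tv(p, q)
```

Here the two point masses attain the bound $d(pP,qP)\le\tau(P)\,d(p,q)$ with equality, which is consistent with $\tau$ being the smallest constant with this property.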
The first, the so-called \emph{uniform coefficient of ergodicity}, was introduced by Hartfiel~\cite{hart:98} as \begin{equation} \tau(\mathcal T) = \max_{T\in\mathcal T}\tau(T), \end{equation} where $\tau$ is the coefficient of ergodicity based on the total variation distance. If $\tau(\mathcal T)<1$, this implies unique convergence of the corresponding Markov chains in the sense that every subset of the chains converges uniquely. This implies that the upper and lower expectations converge too, but in order to ensure unique convergence in the sense of expectation bounds a weaker condition suffices. The \emph{weak coefficient of ergodicity} for imprecise Markov chains was defined in \cite{2013:skulj-hable} as \begin{equation} \rho(\up T) = \max_{x, y\in\mathcal X} d(\up T(\cdot|x), \up T(\cdot| y)). \end{equation} That is, it is equal to the maximal distance between the row upper expectation functionals. The following properties hold: \begin{enumerate}[{(i)}] \item $\rho(\up T\, \up S)\le \rho(\up T)\rho(\up S)$ for arbitrary upper transition operators $\up T$ and $\up S$; \item $d(\up E_1 \, \up T, \up E_2 \, \up T) \le d(\up E_1, \up E_2 )\rho (\up T)$ for arbitrary upper expectation functionals $\up E_1, \up E_2$ and an arbitrary transition operator $\up T$. \end{enumerate} The following theorem holds. \begin{thm}[\cite{2013:skulj-hable}~Theorem~21] Let $\up T$ be an imprecise transition operator corresponding to a Markov chain $\{X_n \}_{n\in \mathbb{N}}$. Then the chain converges uniquely if and only if $\rho(\up T^r)<1$ for some integer $r>0$. \end{thm} If $\up T$ is an upper transition operator such that $\up T(\cdot |x)$ is the natural extension of some 2-alternating upper probability, then it follows from Proposition~\ref{prop-dist-2-alt} that \begin{equation} \rho(\up T) = \max_{A\subseteq \mathcal X}\max_{x, y\in\mathcal X} |\up T(A|x)-\up T(A|y)|. \end{equation} The coefficient of ergodicity can be applied to a pair of upper and lower expectation functionals as follows.
\begin{prop}\label{pr-ergodicity-lu} Let $\up E_1$ be an upper and $\low E_2$ a lower expectation functional, and let $\up T$ be an upper transition operator. Then: \begin{equation} d(\up E_1 \up T, \low E_2 \up T) \le d(\up E_1 , \low E_2 )\rho(\up T). \end{equation} \end{prop} \begin{proof} Denote by $\mathcal M_1$ and $\mathcal M_2$ the credal sets corresponding to $\up E_1$ and $\low E_2$ respectively. Then we have that: \begin{align} d(\up E_1 \up T, \low E_2 \up T) & = \max_{f\in \mathcal L_1} \up E_1(\up Tf)-\low E_2(\up Tf) \\ & = \max_{f\in \mathcal L_1} \max_{E_1\in\mathcal M_1, E_2\in \mathcal M_2}E_1(\up Tf)-E_2(\up Tf) \\ & = \max_{E_1\in\mathcal M_1, E_2\in \mathcal M_2}\max_{f\in \mathcal L_1} E_1(\up Tf)-E_2(\up Tf) \\ & \le \max_{E_1\in\mathcal M_1, E_2\in \mathcal M_2}d(E_1, E_2)\rho(\up T) \\ &=d(\up E_1 , \low E_2 )\rho(\up T). \end{align} \end{proof} \section{Perturbations of imprecise Markov chains}\label{s-pio} \subsection{Distances between imprecise distributions of perturbed Markov chains}\label{ss-dbid} Suppose we have two imprecise Markov chains given by initial expectation functionals $\up E_0, \up E_0'$ and upper transition operators $\up T, \up T'$. The $n$-th step upper expectation functionals are then $\up E_n = \up E_0\,\up T^n$ and $\up E'_n =\up E_0'\,\up T'^n$ respectively. Our goal is to find bounds on the distance between $\up E_n$ and $\up E'_n$ when the distances $d(\up E_0, \up E_0')$ and $d(\up T, \up T')$ are known. We will also assume that both chains are uniquely convergent, with weak coefficients of ergodicity $\rho_n = \rho(\up T^n)$ and $\rho'_n = \rho(\up T'^n)$, so that $\lim_{n\to \infty}\rho_n = 0$ and $\lim_{n\to \infty}\rho'_n = 0$. The latter conditions are clearly necessary and sufficient for unique convergence. Moreover, we will give bounds on the distance between the limit distributions $\up E_\infty$ and $\up E'_\infty$.
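The setting just described can be sketched numerically in the precise special case, where upper expectation functionals reduce to ordinary distributions, $\rho_n$ reduces to the total variation coefficient $\tau(T^n)$, and the perturbation bound reduces to the classical one of \citet{Mitrophanov2005}. All matrices and initial laws below are arbitrary illustrative choices, not data from the paper.

```python
# Hedged sketch (precise case): check E_n <= E_0 tau(P^n) + D * sum_{i<n} tau(P^i),
# where D is the maximal TV distance between corresponding rows of P and P'.
import numpy as np

def tv(p, q):
    return 0.5 * np.abs(p - q).sum()

def tau(P):
    return max(tv(P[i], P[j]) for i in range(len(P)) for j in range(len(P)))

P  = np.array([[0.60, 0.30, 0.10],
               [0.20, 0.60, 0.20],
               [0.10, 0.30, 0.60]])
Pp = np.array([[0.55, 0.35, 0.10],
               [0.25, 0.55, 0.20],
               [0.10, 0.25, 0.65]])          # perturbed transition matrix
p0  = np.array([1.0, 0.0, 0.0])
p0p = np.array([0.9, 0.1, 0.0])              # perturbed initial distribution

E0 = tv(p0, p0p)                             # initial distance
D  = max(tv(P[x], Pp[x]) for x in range(3))  # distance between the operators
ok = True
for n in range(1, 8):
    En    = tv(p0  @ np.linalg.matrix_power(P,  n),
               p0p @ np.linalg.matrix_power(Pp, n))
    bound = E0 * tau(np.linalg.matrix_power(P, n)) \
            + D * sum(tau(np.linalg.matrix_power(P, i)) for i in range(n))
    ok = ok and En <= bound + 1e-12
assert ok
```

The $i=0$ term of the sum is $\tau(P^0)=\tau(I)=1$, so the bound never beats the trivial estimate $E_0+D$ at $n=1$; its value comes from the geometric decay of $\tau(P^i)$ for uniquely convergent chains.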
To do so, we follow a derivation similar to the one given for precise (but not necessarily finite-state) Markov chains by \citet{Mitrophanov2005}. We will make use of the following proposition: \begin{prop}\label{pr-diff-sum} The following equality holds for a pair of imprecise Markov chains and every $n\in \mathbb{N}_0$: \begin{equation} \up E_0 \up T^n - \up E'_0 \up T'^n = (\up E_0 \up T^n - \up E'_0 \up T^n) + \sum_{i=0}^{n-1} (\up E'_0 \up T'^i \up T\, \up T^{n-i-1} - \up E'_0 \up T'^i \up T' \up T^{n-i-1}), \end{equation} and therefore, \begin{equation} d(\up E_0 \up T^n, \up E'_0 \up T'^n) \le d(\up E_0 \up T^n, \up E'_0 \up T^n) + \sum_{i=0}^{n-1} d(\up E'_0 \up T'^i \up T\, \up T^{n-i-1}, \up E'_0 \up T'^i \up T' \up T^{n-i-1}). \end{equation} \end{prop} \begin{thm}\label{thm-diff-err} Denote $E_n = d(\up E_n, \up E'_n)$ and $D = d(\up T, \up T')$. The following inequality holds: \begin{equation} E_n \le E_0 \rho_n + D\sum_{i=0}^{n-1}\rho_{i}. \end{equation} \end{thm} \begin{proof} Proposition~\ref{pr-diff-sum} implies that \begin{align} E_n & \le d(\up E_0 \up T^n, \up E'_0 \up T^n) + \sum_{i=0}^{n-1}d(\up E'_0 \up T'^i \up T\, \up T^{n-i-1}, \up E'_0 \up T'^i \up T' \up T^{n-i-1}) \\ \intertext{denote $\up E'_{i+1} = \up E'_0\up T'^{i+1}$ and $\up E^*_{i+1} = \up E'_0\up T'^{i}\up T$; then} & \le E_0\rho_n + \sum_{i=0}^{n-1}d(\up E^*_{i+1} \up T^{n-i-1}, \up E'_{i+1} \up T^{n-i-1}) \\ & \le E_0\rho_n + \sum_{i=0}^{n-1}d(\up E^*_{i+1} , \up E'_{i+1})\rho_{n-i-1} \\ & = E_0\rho_n + \sum_{i=0}^{n-1}d(\up E'_{i}\up T , \up E'_{i}\up T')\rho_{n-i-1} \\ \intertext{by \eqref{eq-dist-ET}} & \le E_0\rho_n + \sum_{i=0}^{n-1}d(\up T , \up T')\rho_{n-i-1} \\ & = E_0\rho_n + D\sum_{i=0}^{n-1}\rho_{n-i-1} \\ & = E_0\rho_n + D\sum_{i=0}^{n-1}\rho_{i} . \end{align} \end{proof} \begin{cor}\label{cor-op-err} Denote $D_n = d(\up T^n, \up T'^n )$ (that is, $D_1 = D$). The following inequality then holds: \begin{equation} D_n \le D_1\sum_{i=0}^{n-1} \rho_i.
\end{equation} \end{cor} \begin{proof} We have: \begin{align} d(\up T^n, \up T'^n) & = \max_{f\in\mathcal L_1}d(\up T^nf, \up T'^nf) \\ & = \max_{f\in\mathcal L_1}\max_{x\in \mathcal X}d(\up T(\up T^{n-1}f|x), \up T'(\up T'^{n-1}f|x)) \\ & = \max_{f\in\mathcal L_1}\max_{x\in \mathcal X}d(\up T(\cdot|x)\up T^{n-1}f, \up T'(\cdot|x)\up T'^{n-1}f) \\ & = \max_{x\in \mathcal X}d(\up T(\cdot|x)\up T^{n-1}, \up T'(\cdot|x)\up T'^{n-1}) \\ \intertext{by Theorem~\ref{thm-diff-err}} & \le \max_{x\in \mathcal X}d(\up T(\cdot|x), \up T'(\cdot|x)) \rho_{n-1} + D_1\sum_{i=0}^{n-2}\rho_i \\ & = D_1 \rho_{n-1} + D_1\sum_{i=0}^{n-2}\rho_i = D_1\sum_{i=0}^{n-1}\rho_i. \end{align} \end{proof} \begin{lem}\label{lem-sums} Let $\up T$ be an upper transition operator such that $\rho(\up T^r)=\rho_r =: \rho < 1$ and let $n = kr+m$, where $m<r$. Then \begin{equation} \sum_{i=0}^{n-1}\rho_{i} \le r\frac{1-\rho^k}{1-\rho} + m\rho^k \end{equation} and \begin{equation} \sum_{i=0}^{\infty}\rho_{i} \le \frac{r}{1-\rho}. \end{equation} \end{lem} \begin{proof} Clearly $\{\rho_n\}$ is a non-increasing sequence. Moreover, it follows directly from the definitions and monotonicity that \begin{equation}\label{en-lem-sums-1} \rho_i \le \rho^{\left[\frac ir \right]}, \end{equation} where $[\cdot]$ denotes the integer part. The required inequalities are now obtained by summing the left- and right-hand sides of \eqref{en-lem-sums-1}. \end{proof} Let $E_\infty$ denote the distance between the limit distributions $\up E_\infty$ and $\up E'_\infty$. The following corollary is a direct consequence of the above results.
\begin{cor}\label{cor-diff-err} Using the notation of Theorem~\ref{thm-diff-err} and Lemma~\ref{lem-sums}, we have the following inequalities: \begin{align}\label{eq-lem-sums-1} E_n & \le E_0 \rho^{k} + D\left(r\frac{1-\rho^k}{1-\rho} + m\rho^k \right) \\ E_\infty & \le \frac{Dr}{1-\rho} \label{eq-lem-sums-2} \\ \intertext{and} D_n & \le D_1 \left(r\frac{1-\rho^k}{1-\rho} + m\rho^k\right). \end{align} \end{cor} \begin{rem} Notice that $D_\infty = E_\infty$ for every uniquely convergent Markov chain. \end{rem} \subsection{Degree of imprecision} Let $\{ X_n\}_{n\in\mathbb{N}}$ be an imprecise Markov chain and let $\up E_n$ denote the upper expectation functionals corresponding to the imprecise distributions of $X_n$. As a measure of the degree of imprecision we suggested in Sc.~\ref{ss-dulo} the quantity $I_n = d(\up E_n, \low E_n)$. Our goal in this section is to find bounds on $I_n$ given the initial imprecision $I_0$ and the imprecision of the transition operator, given by $\hat I=d(\up T, \low T)$. Similarly to Proposition~\ref{pr-diff-sum}, we have the following. \begin{prop}\label{pr-sum-lu} Let $\up E_0$ and $\low E_0$ be an upper and a lower expectation functional, and $\up T$ and $\low T$ an upper and a lower transition operator. Then we have that \begin{equation} \up E_0 \up T^n - \low E_0 \low T^n = \up E_0 \up T^n - \low E_0 \up T^n + \sum_{i=0}^{n-1} (\low E_0 \low T^i \up T \,\up T^{n-i-1} - \low E_0 \low T^i \low T \up T^{n-i-1}), \end{equation} and therefore \begin{equation} d(\up E_0 \up T^n, \low E_0 \low T^n) \le d(\up E_0 \up T^n, \low E_0 \up T^n) + \sum_{i=0}^{n-1} d (\low E_0 \low T^i \up T \,\up T^{n-i-1} , \low E_0 \low T^i \low T \up T^{n-i-1}). \end{equation} \end{prop} \begin{thm}\label{thm-diff-impr} Let $I_n = d(\up E_n, \low E_n)$, $\hat I=d(\up T, \low T)$ and $\rho_n = \rho(\up T^n)$. The following inequality holds: \begin{equation} I_n \le I_0 \rho_n + \hat I\sum_{i=0}^{n-1}\rho_{i}.
\end{equation} \end{thm} \begin{proof} It follows directly from Proposition~\ref{pr-ergodicity-lu} that \begin{equation} d(\up E_0 \up T^n, \low E_0 \up T^n) \le I_0\rho_n. \end{equation} Further, for every $0\le i \le n-1$ we have \begin{align} d(\low E_0 \low T^i \up T\, \up T^{n-i-1}, \low E_0 \low T^i \low T \up T^{n-i-1}) &= \max_{f\in \mathcal L_1}| \low E_0[\low T^i \up T\,\up T^{n-i-1} f] - \low E_0[\low T^i \low T\up T^{n-i-1} f] | \\ \intertext{which holds by definition; writing $\low E_i = \low E_0 \low T^i$, we have} & = \max_{f\in \mathcal L_1}| \low E_i[\up T\,\up T^{n-i-1} f] - \low E_i[\low T\up T^{n-i-1} f] | \\ \intertext{by \eqref{eq-norm1-ue} } & = \max_{f\in \mathcal L_1} d(\up T\,\up T^{n-i-1} f, \low T\up T^{n-i-1} f) \\ & = \max_{f\in \mathcal L_1}\max_{x\in\mathcal X} d(\up T(\up T^{n-i-1} f|x), \low T(\up T^{n-i-1} f|x)) \\ & = \max_{x\in\mathcal X} d(\up T(\cdot |x)\up T^{n-i-1}, \low T(\cdot |x)\up T^{n-i-1}) \\ \intertext{by Proposition~\ref{pr-ergodicity-lu} } & \le \max_{x\in\mathcal X} d(\up T(\cdot |x), \low T(\cdot |x))\rho_{n-i-1} \\ & = \hat I\rho_{n-i-1}. \end{align} Now the required inequality follows directly by combining the above inequalities. \end{proof} The following corollaries now follow immediately, by reasoning similar to that used for the distances in Sc.~\ref{ss-dbid}. \begin{cor} Let $\up T$ be an upper transition operator and denote $\hat I_n = d(\up T^n, \low T^n)$. The following inequality holds: \[ \hat I_n \le \hat I_1\sum_{i=0}^{n-1}\rho_i. \] \end{cor} \begin{cor} Using the notation of Theorem~\ref{thm-diff-impr} and Lemma~\ref{lem-sums}, we have the following inequalities: \begin{align}\label{eq-lem-sums-3} I_n & \le I_0 \rho^{k} + \hat I\left(r\frac{1-\rho^k}{1-\rho} + m\rho^k \right) \\ I_\infty & \le \frac{\hat Ir}{1-\rho} \\ \intertext{and} \hat I_n & \le \hat I_1 \left(r\frac{1-\rho^k}{1-\rho} + m\rho^k\right).
\end{align} \end{cor} \begin{rem} It is again clear that $\hat I_\infty = I_\infty$ for every uniquely convergent imprecise Markov chain. \end{rem} \section{Examples}\label{s-ex} \subsection{Contamination models} Let $\up E$ be an upper expectation functional and $\varepsilon>0$. We consider the \emph{$\varepsilon$-contaminated} upper expectation functional \begin{equation} \up E_\varepsilon (f) = (1-\varepsilon)\up E (f) + \varepsilon f_{\mathrm{max}}, \end{equation} where $f_{\mathrm{max}} = \max_{x\in\mathcal X} f(x) =: \up V(f)$. Note that $\up V$ is the upper expectation functional whose credal set consists of all expectation functionals on $\mathcal L(\mathcal X)$; it is called the \emph{vacuous upper prevision}. The upper transition operator $\up T_V(f) = f_{\mathrm{max}} 1_\mathcal X$ is called the \emph{vacuous upper transition operator}. Being a convex combination of $\up E$ and the vacuous upper prevision $\up V$, the functional $\up E_\varepsilon$ is itself an upper prevision. Similarly, we define the $\varepsilon$-contaminated upper transition operator by \begin{equation} \up T_\varepsilon f = (1-\varepsilon)\up Tf + \varepsilon 1_\mathcal X f_{\mathrm{max}} = (1-\varepsilon)\up Tf + \varepsilon \up T_Vf. \end{equation} Let $\rho = \rho(\up T)$. We can then explicitly find the coefficients of ergodicity for the contaminated model. \begin{prop}\label{prop-eps-cont} Let $\up E$ be an upper expectation functional, $\up T$ an upper transition operator and $\up E_\varepsilon$ and $\up T_\varepsilon$ the corresponding $\varepsilon$-contaminated models.
The following equalities hold: \begin{enumerate}[{(i)}] \item $d(\up E, \up E_\varepsilon) = \varepsilon d(\up E, \up V)$, where $\up V$ is the vacuous upper prevision; \item $d(\up T, \up T_\varepsilon) = \varepsilon d(\up T, \up T_V)$, where $\up T_V$ is the vacuous upper transition operator; \item $d(\up E'_\varepsilon, \up E_\varepsilon) = (1-\varepsilon)d(\up E', \up E)$; \item $d(\up T_\varepsilon, \up T'_\varepsilon) = (1-\varepsilon)d(\up T, \up T')$; \item $\rho(\up T_\varepsilon) = (1-\varepsilon)\rho(\up T)$; \item $d(\up E_\varepsilon, \low E_\varepsilon) = (1-\varepsilon)d(\up E, \low E) + \varepsilon$; \item $\hat I(\up T_\varepsilon) = (1-\varepsilon)\hat I(\up T) + \varepsilon$. \end{enumerate} \end{prop} \begin{proof} (i) follows directly from \begin{align} d(\up E, \up E_\varepsilon) & = \max_{f\in \mathcal L_1} | \up E(f) - (1-\varepsilon)\up E(f) - \varepsilon \up V(f) | \\ & = \max_{f\in \mathcal L_1} | \varepsilon(\up E(f) - \up V(f)) | \\ & = \varepsilon d(\up E, \up V), \end{align} and (ii) follows in the same way. To see (iii) we calculate \begin{align} d(\up E_\varepsilon, \up E'_\varepsilon) & = \max_{f\in \mathcal L_1} | (1-\varepsilon)\up E(f) + \varepsilon \up V(f) - (1-\varepsilon)\up E'(f) - \varepsilon \up V(f) | \\ & = \max_{f\in \mathcal L_1} | (1-\varepsilon)(\up E(f) - \up E'(f)) | \\ & = (1-\varepsilon) d(\up E, \up E'). \end{align} (iv) and (v) are direct consequences of (iii) and the definitions. To see (vi), note that $\low V(f) = f_{\mathrm{min}}$ and therefore $\low E_\varepsilon(f) = (1-\varepsilon)\low E(f) + \varepsilon \low V(f)$.
For every $f$ we then have \begin{align} \up E_\varepsilon(f) - \low E_\varepsilon(f) & = (1-\varepsilon) (\up E(f) - \low E(f)) + \varepsilon (f_{\mathrm{max}} - f_{\mathrm{min}}). \end{align} Now suppose the maximal difference in the above expression is attained at some $f\in \mathcal L_1$ and denote $\tilde f = \dfrac{f-f_{\mathrm{min}}}{f_{\mathrm{max}}-f_{\mathrm{min}}}$, which belongs to $\mathcal L_1$ as well. It is directly verified that $\up E(\tilde f)-\low E(\tilde f) = \dfrac{1}{f_{\mathrm{max}}-f_{\mathrm{min}}}(\up E(f)-\low E(f))$. Thus $f_{\mathrm{max}}-f_{\mathrm{min}} = 1$ must hold, by the maximality of $f$, and (vi) easily follows. (vii) is then a simple consequence of (vi). \end{proof} \begin{thm} Let $\up E$ and $\up T$ be an upper expectation functional and an upper transition operator respectively, and let $\up E_\varepsilon$ and $\up T_\varepsilon$ be the corresponding $\varepsilon$-contaminated operators. Denote $E_n = d(\up E_{\varepsilon n}, \up E_n)$, where $\up E_{\varepsilon n} = \up E_{\varepsilon}\up T_\varepsilon^n$ and $\up E_{n} = \up E\,\up T^n$, and let $\Delta_1 = d(\up E, \up V)$, $\Delta_2 = d(\up T, \up T_V)$, $\hat I = \hat I(\up T)$, $\rho = \rho(\up T)$; denote the imprecision of the contaminated chain by $I'_n = d(\up E_{\varepsilon n}, \low E_{\varepsilon n})$. Then we have $E_0 = \varepsilon \Delta_1$, $D=\varepsilon \Delta_2$ and $\hat I(\up T_\varepsilon) = (1-\varepsilon)\hat I(\up T) + \varepsilon$. The following inequalities hold: \begin{align} E_n & \le \varepsilon \Delta_1\rho^n(1-\varepsilon)^n + \varepsilon \Delta_2 \frac{1-\rho^n (1-\varepsilon)^n}{1-\rho(1-\varepsilon)}; \\ E_\infty & \le \frac{\varepsilon \Delta_2}{1-\rho(1-\varepsilon)}; \\ I'_n & \le ((1-\varepsilon)I_0+\varepsilon)(1-\varepsilon)^n\rho^n + ((1-\varepsilon)\hat I + \varepsilon)\frac{1-\rho^n(1-\varepsilon)^n}{1-\rho (1-\varepsilon)}; \\ I'_\infty & \le \frac{(1-\varepsilon)\hat I + \varepsilon}{1-\rho(1-\varepsilon)}.
\end{align} \end{thm} \begin{proof} This is a direct consequence of Theorems~\ref{thm-diff-err} and \ref{thm-diff-impr}, Proposition~\ref{prop-eps-cont} and Corollary~\ref{cor-diff-err}. \end{proof} \subsection{Numerical example} We again consider a Markov chain with the initial lower and upper probabilities and the transition probabilities from Example~\ref{ex-1}. We compare it with a perturbed chain whose lower and upper transition matrices are \begin{align} \low M' = \begin{bmatrix} 0.32 & 0.36 & 0 \\ 0.36 & 0.19 & 0.24 \\ 0 & 0.5 & 0.4 \end{bmatrix} \qquad \text{and} \qquad \up M' = \begin{bmatrix} 0.64 & 0.68 & 0 \\ 0.57 & 0.38 & 0.45 \\ 0.04 & 0.56 & 0.46 \end{bmatrix}, \end{align} and whose initial probability bounds are \begin{equation} \low P'_0 = (0.32, 0.21, 0.28) \qquad \text{and} \qquad \up P'_0 = (0.42, 0.38, 0.42). \end{equation} The coefficients of ergodicity are $\rho(\up T) = 0.67$ and $\rho(\up T') = 0.60$; the distance between the initial imprecise probability models is $d(\up E_0, \up E'_0) = 0.0248$, and the distance between the transition operators is $d(\up T, \up T') = 0.05$. The theoretical upper bounds on the distances $d(\up E_n, \up E'_n)$ can be obtained using Theorem~\ref{thm-diff-err}, with $E_0 = 0.0248$, $D=0.05$ and $\rho_n=\rho(\up T')^n = 0.60^n$. For comparison, we have calculated the lower and upper transition probability matrices and the distances based on these estimates. The results are shown in Figure~\ref{fig-distances-fnals}. The actual distances may be larger, because the expectation functionals are not fully described by the probability interval models (PRI). \begin{figure} \caption{Distances between $\up E_n$ and $\up E'_n$ based on PRI estimates and their theoretical upper bounds.} \label{fig-distances-fnals} \end{figure} In Figure~\ref{fig-distances-ops} we show the distances between the operators $\up T^n$ and $\up T'^n$, together with their upper bounds calculated using Corollary~\ref{cor-op-err}.
\begin{figure} \caption{Distances between $\up T^n$ and $\up T'^n$ based on PRI estimates and their theoretical upper bounds.} \label{fig-distances-ops} \end{figure} \section{Conclusions and further work} We have studied how perturbations of the initial imprecise probability distributions and of the transition operators of imprecise Markov chains affect the distributions of the chain at further steps. The results show that the stability of the distributions depends on the weak coefficient of ergodicity, which is consistent with the known results for precise Markov chains \cite{Mitrophanov2005}. By the same means we give bounds on the degree of imprecision, depending on the imprecision of the initial distribution and of the transition operator. Our goal for the future is to extend these results to related models, such as continuous-time imprecise Markov chains, hidden Markov models and semi-Markov models. \end{document}
\begin{document} \noindent\makebox[60mm][l]{\tt {\large Version:~\today}} \noindent{ \begin{center} \LARGE\bf Hitting Time Distribution for Skip-Free Markov Chains: A Simple Proof \end{center} } \noindent{ \begin{center}Wenming Hong\footnote{School of Mathematical Sciences \& Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China. Email: [email protected]} \ \ \ Ke Zhou\footnote{ School of Mathematical Sciences \& Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China. Email:[email protected]} \end{center} } \begin{center} \begin{minipage}[c]{12cm} \begin{center}\textbf{Abstract}\end{center} A well-known theorem for an irreducible skip-free chain with absorbing state $d$ is that, under some conditions, the hitting (absorbing) time of state $d$ starting from state $0$ is distributed as the sum of $d$ independent geometric (or exponential) random variables. The purpose of this paper is to present a direct and simple proof of the theorem for both discrete and continuous time skip-free Markov chains. Our proof calculates directly the generating functions (or Laplace transforms) of the hitting times by an iteration method.\\ \mbox{}\textbf{Keywords:}\quad skip-free, random walk, birth and death chain, absorbing time, hitting time, eigenvalues, recurrence equation.\\ \mbox{}\textbf{Mathematics Subject Classification (2010)}: 60E10, 60J10, 60J27, 60J35. \end{minipage} \end{center} \section{ Introduction \label{s1}} The skip-free Markov chain on $\mathbb{Z^{+}}$ is a process for which upward jumps may only be of unit size, while there is no restriction on downward jumps. Suppose the chain starts at $0$ and that $d$ is an absorbing state. An interesting property of the chain is that the hitting time of state $d$ is distributed as a sum of $d$ independent geometric (or exponential) random variables. Many authors have given different proofs of this result.
For the birth and death chain, this well-known result can be traced back to Karlin and McGregor \cite{KM} and Keilson \cite{Keillog}, \cite{Keil}. Kent and Longford \cite{Kent} proved the result for the discrete time version (the nearest-neighbor random walk), although they did not state it in the usual form (Section 2, \cite{Kent}). Fill \cite{F1} gave the first stochastic proof, for both the nearest-neighbor random walk and the birth and death chain, via the duality established in \cite{DF}. Diaconis and Miclo \cite{DM} presented another probabilistic proof for the birth and death chain. Very recently, Gong, Mao and Zhang \cite{Mao} gave a similar result in the case where the state space is $\mathbb{Z^{+}}$; they used the well-established result to determine all the eigenvalues, that is, the spectrum of the generator. For the skip-free chain, Brown and Shao \cite{BS} first proved the result in the continuous time situation. Using duality, Fill \cite{F2} gave a stochastic proof for both the discrete and continuous time cases. The purpose of this paper is to present a direct and simple proof of the theorem for both discrete and continuous time skip-free Markov chains. Our proof calculates directly the generating functions (or Laplace transforms) of the hitting times by an iteration method. \begin{theorem}For the discrete-time skip-free random walk:\\ Consider an irreducible skip-free random walk with transition probability $P$ on $\{0,1,\cdots, d\}$ started at $0$, and suppose $d$ is an absorbing state. Then the hitting time of state $d$ has the generating function \begin{equation*} \varphi_{d}(s)=\prod_{i=0}^{d-1} \left[\frac{(1-\lambda_i)s}{1-\lambda_i s} \right], \end{equation*} where $\lambda_0, \cdots, \lambda_{d-1}$ are the $d$ non-unit eigenvalues of $P$.
In particular, if all of the eigenvalues are real and nonnegative, then the hitting time is distributed as the sum of $d$ independent geometric random variables with parameters $1-\lambda_i$.\\ \end{theorem} \begin{theorem} For the skip-free birth and death chain:\\ Consider an irreducible skip-free birth and death chain with generator $Q$ on $\{0,1,\cdots, d\}$ started at $0$, and suppose $d$ is an absorbing state. Then the hitting time of state $d$ has the Laplace transform \begin{equation*} \varphi_{d}(s)=\prod_{i=0}^{d-1}\frac{\lambda_i}{\lambda_i+s}, \end{equation*} where $\lambda_i$ are the $d$ non-zero eigenvalues of $-Q$. In particular, if all of the eigenvalues are real and nonnegative, then the hitting time is distributed as the sum of $d$ independent exponential random variables with parameters $\lambda_i$. \end{theorem} \section{ Proof of Theorem 1.1\label{s2}} Define the transition probability matrix $P$ as \begin{equation*} P=\left( \begin{array}{ccccccc} r_0 & p_0 & \\ q_{1,0} & r_1 & p_1 & \\ \ddots & \ddots & \ddots & & \ddots & \ddots & \\ q_{d-1,0}& q_{d-1,1}& q_{d-1,2}& & \cdots & r_{d-1} & p_{d-1} \\ & & & & & & 1\\ \end{array} \right)_{(d+1)\times(d+1)}, \end{equation*} and for $0\leq n\leq d-1$, let $P_{n}$ denote the submatrix formed by the first $n+1$ rows and first $n+1$ columns of $P$. Let $\tau_{i,i+j}$ be the hitting time of state $i+j$ starting from $i$. By the Markov property, we have \begin{equation} \label{2.1}\tau_{i, i+j}=\tau_{i,i+1}+\tau_{i+1,i+2}+\cdots+\tau_{i+j-1, i+j}. \end{equation} If $f_{i,i+1}(s)$ is the generating function of $\tau_{i,i+1}$, \begin{equation*}f_{i,i+1}(s)=\mathbb{E}s^{\tau_{i,i+1}}~~~\text{for}~ 0\leq i\leq d-1, \end{equation*} then (\ref{2.1}) says that \begin{equation*} f_{i,i+j}(s)=f_{i,i+1}(s)\, f_{i+1,i+2}(s)\cdots f_{i+j-1,i+j}(s),~~\text{for}~ 1\leq j\leq d-i.
\end{equation*} Let \begin{equation*}g_{0,0}(s)=1,~~~g_{i,i+j}(s)=\frac{p_{i}p_{i+1}\cdots p_{i+j-1}}{f_{i,i+j}(s)}s^{j},~~\text{for}~ 1\leq j\leq d-i.\end{equation*} \begin{lemma}Define $I_n$ as the $(n+1)\times(n+1)$ identity matrix. We have \begin{equation}\label{lm} g_{0,n+1}(s)=\text{det}(I_n-sP_{n}),~~~\text{for}~ 0\leq n\leq d-1. \end{equation} \end{lemma} \begin{proof} We first establish a key recurrence. By decomposing the first step, the generating function of $\tau_{n,n+1}$ satisfies \begin{equation} \label{key} \begin{split} f_{n,n+1}(s)&=r_{n}sf_{n,n+1}(s)+p_{n}s+q_{n,n-1}sf_{n-1,n+1}(s)+q_{n,n-2}sf_{n-2,n+1}(s)+\\ &~~~~~\cdots+q_{n,0}sf_{0,n+1}(s). \end{split} \end{equation} Recalling the definition of $g_{i,i+j}(s)$ and substituting it into the formula above, we obtain \begin{equation}\label{f 2.5}\begin{split}g_{0,n+1}(s)&=(1-r_{n}s)g_{0,n}(s)-q_{n,n-1}s^{2}p_{n-1}g_{0,n-1}(s)-q_{n,n-2}s^{3}p_{n-2}p_{n-1}g_{0,n-2}(s)-\\ &~~~~~\cdots-q_{n,0}s^{n+1}p_{0}p_{1}\cdots p_{n-1}g_{0,0}(s). \end{split}\end{equation} Let $A_{i,j}$ denote the cofactor of the entry in position $(i+1,j+1)$ of $I_n-sP_{n}$. Expanding along the bottom row of the matrix, we obtain \begin{equation}\label{f 2.6}\text{det}(I_n-sP_{n})=(1-r_{n}s)A_{n,n}-q_{n,n-1}sA_{n,n-1}-q_{n,n-2}sA_{n,n-2}-\cdots-q_{n,0}sA_{n,0}.\end{equation} A direct calculation gives \begin{equation*}A_{n,n}=\text{det}(I_{n-1}-sP_{n-1}),~~A_{n,0}=p_{0}p_{1}\cdots p_{n-1}s^{n},\end{equation*} and for $1\leq i< n$, \begin{equation*} A_{n,n-i}=p_{n-i}p_{n-i+1}\cdots p_{n-1}s^{i}\text{det}(I_{n-i-1}-sP_{n-i-1}).\end{equation*} Now we prove the lemma by induction. First, $g_{0,1}(s)=\text{det}(I_0-sP_{0})=1-r_{0}s$. Assuming that (\ref{lm}) holds for all $n< k$, we calculate $g_{0,k+1}(s)$.
By (\ref{f 2.5}), \begin{equation*}\begin{split}g_{0,k+1}(s)&=(1-r_{k}s)\text{det}(I_{k-1}-sP_{k-1})-q_{k,k-1}s^{2}p_{k-1}\text{det}(I_{k-2}-sP_{k-2})-\\ &~~~~~q_{k,k-2}s^{3}p_{k-2}p_{k-1}\text{det}(I_{k-3}-sP_{k-3})-\cdots-q_{k,0}s^{k+1}p_{0}p_{1}\cdots p_{k-1}. \end{split}\end{equation*} Formula (\ref{f 2.6}) tells us that $g_{0,k+1}(s)=\text{det}(I_k-sP_{k})$. The proof is complete. \end{proof} \noindent{\it Proof of Theorem 1.1 }~~Denote by $\varphi_{d}(s)$ the generating function of $\tau_{0,d}$; then $\varphi_{d}(s)=f_{0,d}(s)$. By (\ref{2.1}) and (\ref{lm}), we have \begin{equation*}\begin{split}\varphi_{d}(s)&=f_{0,1}(s)f_{1,2}(s)\cdots f_{d-1,d}(s)\\ &=\frac{p_{0}p_{1}\cdots p_{d-1}s^{d}}{g_{0,d}(s)}=\frac{p_{0}p_{1}\cdots p_{d-1}s^{d}}{\text{det}(I_{d-1}-sP_{d-1})}.\end{split}\end{equation*} It is easy to prove that $1$ is the unique unit eigenvalue of $P$, and that $\lambda_i$, $i=0,1,\cdots, d-1$, are the $d$ non-unit eigenvalues. So \begin{equation*}\text{det}(I_{d-1}-sP_{d-1})=(1-\lambda_{0}s)(1-\lambda_{1}s)\cdots(1-\lambda_{d-1}s).\end{equation*} By Lemma 2.1 and the definition of $g_{0,d}(s)$, \begin{equation*}\begin{split} p_{0}p_{1}\cdots p_{d-1}&=s^{-d}f_{0,d}(s)\text{det}(I_{d-1}-sP_{d-1})\\ &=s^{-d}f_{0,d}(s)(1-\lambda_{0}s)(1-\lambda_{1}s)\cdots(1-\lambda_{d-1}s).\end{split}\end{equation*} Let $s=1$; because $f_{0,d}$ is a generating function, $f_{0,d}(1)=1$.
Then \begin{equation*} p_{0}p_{1}\cdots p_{d-1}=(1-\lambda_{0})(1-\lambda_{1})\cdots(1-\lambda_{d-1}).\end{equation*} As a consequence, we have \begin{equation*}\begin{split} \varphi_{d}(s)&=\frac{(1-\lambda_{0})(1-\lambda_{1})\cdots(1-\lambda_{d-1})s^{d}}{(1-\lambda_{0}s)(1-\lambda_{1}s)\cdots(1-\lambda_{d-1}s)}\\ &=\prod_{i=0}^{d-1} \left[\frac{(1-\lambda_i)s}{1-\lambda_i s} \right].\end{split}\end{equation*} $\Box$ \section{ Proof of Theorem 1.2\label{s3}} Denote the generator $Q$ of the skip-free Markov chain by \begin{equation*} Q=\left( \begin{array}{ccccccc} -\gamma_0 & \alpha_0 & \\ \beta_{1,0} & -\gamma_1 & \alpha_1 & \\ \ddots & \ddots & \ddots & & \ddots & \ddots & \\ \beta_{d-1,0}& \beta_{d-1,1}& \beta_{d-1,2}& & \cdots & -\gamma_{d-1} & \alpha_{d-1} \\ & & & & & & 0\\ \end{array} \right)_{(d+1)\times(d+1)}, \end{equation*} and for $0\leq n\leq d-1$, let $Q_{n}$ denote the submatrix formed by the first $n+1$ rows and first $n+1$ columns of $Q$. Let $\tau_{i,i+j}$ be the hitting time of state $i+j$ starting from $i$. The idea of the proof is similar to that of Theorem 1.1, so we only give a brief description here. It is well known that a skip-free chain on a finite state space has a simple structure: started at $i$, the process stays there for an $\mbox{Exponential}~(\gamma_i)$ time, then jumps to $i+1$ with probability $\frac{\alpha_i}{\gamma_i}$, and to $i-k$ with probability $\frac{\beta_{i,i-k}}{\gamma_i}~(1\leq k\leq i).$ Let $\widetilde{f}_{i,i+j}(s)$ be the Laplace transform of $\tau_{i,i+j}$, \begin{equation*}\widetilde{f}_{i,i+j}(s)=\mathbb{E}e^{-s\tau_{i,i+j}}. \end{equation*} Recall that if a random variable $\xi\sim\mbox{Exponential}~(\theta)$, then \begin{equation*}\mathbb{E}e^{-s\xi}=\frac{\theta}{\theta+s}.
\end{equation*} By decomposing the trajectory at the first jump, \begin{equation*} \label{keykey}\begin{split} \widetilde{f}_{n,n+1}(s)&=\frac{\gamma_n}{\gamma_n+s}\frac{\alpha_n}{\gamma_n} +\frac{\gamma_n}{\gamma_n+s}\frac{\beta_{n,n-1}}{\gamma_n}\widetilde{f}_{n-1,n+1}(s)+\frac{\gamma_n}{\gamma_n+s}\frac{\beta_{n,n-2}}{\gamma_n}\widetilde{f}_{n-2,n+1}(s)+\\ &~~~~~\cdots+\frac{\gamma_n}{\gamma_n+s}\frac{\beta_{n,0}}{\gamma_n}\widetilde{f}_{0,n+1}(s)\\ &=\frac{\alpha_n}{\gamma_n+s}+\frac{\beta_{n,n-1}}{\gamma_n+s}\widetilde{f}_{n-1,n+1}(s)+\frac{\beta_{n,n-2}}{\gamma_n+s}\widetilde{f}_{n-2,n+1}(s)+\cdots+\frac{\beta_{n,0}}{\gamma_n+s}\widetilde{f}_{0,n+1}(s).\end{split}\end{equation*} Define \begin{equation*}\widetilde{g}_{0,0}(s)=1,~~~\widetilde{g}_{i,i+j}(s)=\frac{\alpha_{i}\alpha_{i+1}\cdots \alpha_{i+j-1}}{\widetilde{f}_{i,i+j}(s)},~~\text{for}~ 1\leq j\leq d-i.\end{equation*} The following lemma can be proved by the same method as Lemma 2.1; we omit the details. \begin{lemma} \begin{equation*} \widetilde{g}_{0,n+1}(s)=\text{det}(sI_{n}-Q_{n}),~~~\text{ for}~ 0\leq n\leq d-1.\end{equation*} \end{lemma} Now we can calculate the Laplace transform of $\tau_{0,d}$. Recall that $\lambda_0, \dots, \lambda_{d-1}$ are the $d$ non-zero eigenvalues of $-Q$, so that \begin{equation*}\text{det}(sI_{d-1}-Q_{d-1})=(s+\lambda_{0})(s+\lambda_{1})\cdots(s+\lambda_{d-1}).\end{equation*} Taking $s=0$ in the lemma and using $\widetilde{f}_{0,d}(0)=1$, we have \begin{equation*}\alpha_{0}\alpha_{1}\cdots \alpha_{d-1}= \lambda_{0}\lambda_{1}\cdots\lambda_{d-1}.\end{equation*} So \begin{equation*}\begin{split}\varphi_{d}(s)&=\widetilde{f}_{0,1}(s)\widetilde{f}_{1,2}(s)\cdots \widetilde{f}_{d-1,d}(s)\\&=\frac{\alpha_{0}\alpha_{1}\cdots \alpha_{d-1}}{\text{det}(sI_{d-1}-Q_{d-1})}=\prod_{i=0}^{d-1}\frac{\lambda_i}{\lambda_i+s}, \end{split}\end{equation*} which completes the proof. $\Box$ \end{document}
\begin{document} \title[Characteristic polynomials and Gaussian cubature] {Generalized characteristic polynomials and Gaussian cubature rules} \author{Yuan Xu} \address{Department of Mathematics\\ University of Oregon\\ Eugene, Oregon 97403-1222.}\email{[email protected]} \date{\today} \thanks{The work was supported in part by NSF Grant DMS-1106113} \keywords{Generalized characteristic polynomials, orthogonal polynomials, Toeplitz matrix, Gaussian cubature rule} \subjclass[2000]{33C45, 33C50, 42C10} \begin{abstract} For a family of near banded Toeplitz matrices, generalized characteristic polynomials are shown to be orthogonal polynomials of two variables, which include the Chebyshev polynomials of the second kind on the deltoid as a special case. These orthogonal polynomials possess the maximal number of real common zeros, which generate a family of Gaussian cubature rules in two variables. \end{abstract} \maketitle \section{Introduction} \setcounter{equation}{0} Characteristic polynomials for non-square matrices are defined and studied in \cite{AS}; they can be viewed as a multidimensional generalization of the usual characteristic polynomials. We describe a connection between these polynomials and multivariate orthogonal polynomials, which leads to a family of orthogonal polynomials in two variables that has the maximal number of real distinct common zeros. The latter serve as nodes of a new family of Gaussian cubature rules. We start with the definition of generalized characteristic polynomials. For $m, n \in {\mathbb N}$, let ${\mathcal M}(m, n)$ denote the space of complex-valued matrices of size $m \times n$ and let ${\mathcal M}(n) = {\mathcal M}(n,n)$. For $A \in {\mathcal M}(n)$, the eigenvalues of $A$ are the zeros of the characteristic polynomial $\det (x {\mathcal I}_n - A)$, where ${\mathcal I}_n$ denotes the identity matrix in ${\mathcal M}(n)$.
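As a quick computational illustration of this classical definition (not part of the paper), a computer algebra system confirms that the eigenvalues of a small matrix are exactly the zeros of $\det(x{\mathcal I}_n - A)$; the matrix below is an arbitrary example.

```python
# Illustrative check: eigenvalues = roots of the characteristic polynomial.
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1], [1, 2]])
charpoly = (x * sp.eye(2) - A).det().expand()  # x**2 - 4*x + 3
roots = sp.solve(charpoly, x)
assert sorted(roots) == [1, 3]
assert sorted(A.eigenvals().keys()) == [1, 3]
```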
The notion of the characteristic polynomial has been extended to several variables in \cite{AS, SS}. \iffalse In two variables, for example, let $A \in {\mathcal M}(n,n+1)$ and consider the $n \times (n+1)$ matrix $$ A(z_0, z_1) = z_0 \left[I_n \,\, 0\right] + z_1 \left[0 \,\, I_n \right] - A. $$ For $0 \le k \le n+1$, let $A_{\widehat k}(z_0,z_1)$ denote the $n\times n$ matrix formed by $A(z_0,z_1)$ minus its $k$-th column. The generalized characteristic polynomials of $A$ are defined by $$ P_k(z_0,z_1) = \det A_{\widehat k} (z_0,z_1), \qquad 0 \le k \le n. $$ \fi For $s = 0, 1,\ldots, n$ define the $s$-unit matrix $$ {\mathcal I}_s := (\delta_{s+i-j}) \in {\mathcal M}(m,m+n). $$ As an example, for $n=1$, ${\mathcal I}_0 = [{\mathcal I}_m \, 0]$ and ${\mathcal I}_1 = [0\,\, {\mathcal I}_m]$. Let $A$ be a matrix in ${\mathcal M}(m,m+n)$. Define the $m \times (m+n)$ matrix $$ A(z_0,\ldots, z_n) := A + z_0 {\mathcal I}_0 + \ldots + z_n {\mathcal I}_n. $$ For $I = \{i_1,\ldots, i_m\}$, where $1 \le i_1 < \cdots < i_m \le m +n$, denote by $A_I (z_0,\ldots, z_n)$ the submatrix of $A(z_0, \ldots, z_n)$ formed by its $m$ columns indexed by $I$. The generalized characteristic polynomials of $A$ are then defined by \begin{equation}\label{eq:P_I} P_I (z_0,\ldots, z_n) := \det A_I(z_0, \ldots, z_n). \end{equation} Let $|I|$ denote the cardinality of $I$. It is easy to see that the total degree of $P_I(z_0,\ldots,z_n)$ is $|I|$. Furthermore, for $m \in {\mathbb N}_0$, there are $\binom{m+n}{m}$ polynomials $P_I(z_0,\ldots,z_n)$ with $|I| = m$ and these polynomials are linearly independent. Moreover, the following proposition was proved in \cite{AS}.
\begin{prop} \label{prop:basic}
The set of common zeros of all $P_I(z_0,\ldots,z_n)$ with $|I| = m$ is a finite subset of ${\mathbb C}^{n+1}$ of cardinality $\binom{m+n}{n+1}$, counting multiplicities.
\end{prop}

The set of common zeros of $\binom{m+n}{n+1}$ randomly selected polynomials can be empty. The significance of the above scheme is that it gives a simple construction that guarantees a maximal finite set of common zeros.

We are interested in the connection between these characteristic polynomials and orthogonal polynomials of several variables. For this purpose, we consider, for example, an infinite-dimensional matrix ${\mathcal A}$ and define its $m$-th characteristic polynomials in terms of its main $m \times (m+n)$ submatrix (in the upper left corner). In the case of one variable, characteristic polynomials satisfy a three-term relation if ${\mathcal A}$ is tridiagonal with positive off-diagonal elements, which implies that they are orthogonal polynomials by Favard's theorem. For several variables, it was pointed out in \cite[Example 8]{AS} that if ${\mathcal A}$ is a Toeplitz matrix ${\mathcal A} = (c_{i-j})$ with $c_{-1} = c_{d}=1$ and all other $c_i=0$, then the generalized characteristic polynomials are, up to a change of variables, the Chebyshev polynomials of the second kind associated with the root system of type ${\mathcal A}_d$ (\cite{Be, K74}). These Chebyshev polynomials are orthogonal with respect to a real-valued weight function $w$ on a compact domain $\Omega \subset {\mathbb R}^d$ (both $w$ and $\Omega$ are explicitly known) and they have been extensively studied (\cite{Be, DX, LX} and the references therein). Together with Proposition \ref{prop:basic}, this suggests a way to find orthogonal polynomials that have the maximal number of common zeros, which is related to the existence of Gaussian cubature rules.
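As a simple illustration (this example is not in \cite{AS}), take $m=2$, $n=1$ and $A = 0 \in {\mathcal M}(2,3)$. Then
$$
A(z_0,z_1) = \left[\begin{matrix} z_0 & z_1 & 0 \\ 0 & z_0 & z_1 \end{matrix}\right],
$$
so that $P_{\{1,2\}} = z_0^2$, $P_{\{1,3\}} = z_0 z_1$ and $P_{\{2,3\}} = z_1^2$. The only common zero is the origin of ${\mathbb C}^2$, counted with multiplicity $3 = \binom{m+n}{n+1}$, in accordance with Proposition \ref{prop:basic}.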
The latter is important for numerical integration and for a number of other problems in orthogonal polynomials of several variables. Let $\Pi_m^d$ denote the space of real-valued polynomials of (total) degree at most $m$ in $d$ variables. For a given integral $\int_{{\mathbb R}^d} f(x) d\mu$, a cubature rule of degree $M$ in ${\mathbb R}^d$ with $N$ nodes is a finite sum of function evaluations such that
$$
\int_{{\mathbb R}^d} f(x) d\mu = \sum_{k=1}^N \lambda_k f(x_k), \qquad x_k \in {\mathbb R}^d, \quad \lambda_k \ne 0,
$$
for all $f \in \Pi_M^d$, where the $x_k$ are called nodes and the $\lambda_k$ are called weights. It is known that the number of nodes, $N$, satisfies
$$
N \ge \dim \Pi_{m-1}^d = \binom{m+d-1}{d}, \qquad \hbox{$M = 2 m-1$ or $2m-2$}.
$$
A cubature rule of degree $M= 2m-1$ with $N$ attaining the above lower bound is called a Gaussian cubature rule. For $d =1$, a Gaussian quadrature rule of degree $2m-1$ always exists and its nodes are the zeros of the orthogonal polynomial of degree $m$ with respect to $d\mu$. For $d > 1$, it is known that a Gaussian cubature rule of degree $2m-1$ exists if and only if the corresponding orthogonal polynomials of degree $m$ have $\binom{m+d-1}{d}$ real, distinct common zeros (\cite{DX,My,St}). As a consequence, Gaussian cubature rules rarely exist. In fact, at the moment, only two families of integrals are known for which Gaussian cubature rules of all orders exist (\cite{BSX,LX}); one of them is generated by the Chebyshev polynomials of the second kind that are related to the generalized characteristic polynomials. The purpose of this paper is to explore possible connections between characteristic polynomials and orthogonal polynomials, especially connections that will lead to new Gaussian cubature rules.
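For orientation, we recall the classical one-variable case in the simplest instance $m=2$: for $d\mu = dx/2$ on $[-1,1]$, the orthogonal polynomial of degree $2$ is the Legendre polynomial $P_2(x) = (3x^2-1)/2$, whose zeros are $x = \pm 1/\sqrt{3}$, and
$$
\frac12 \int_{-1}^1 f(x)\, dx = \frac12 f\big(-1/\sqrt{3}\big) + \frac12 f\big(1/\sqrt{3}\big), \qquad f \in \Pi_3^1,
$$
is the Gaussian quadrature rule of degree $3 = 2m-1$ with $N = 2 = \dim \Pi_1^1$ nodes.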
In order to establish that a family of generalized characteristic polynomials is orthogonal, we shall show that the polynomials satisfy a three-term relation, which implies orthogonality by Favard's theorem, provided that the coefficients of the three-term relation satisfy certain conditions. Unlike the case of one variable, the three-term relation in several variables is given in vector form and its coefficients are matrices, which are more difficult to compute explicitly. For this reason, we will primarily be working with polynomials of two variables. Our main result shows that there is a one-parameter family of perturbations of the Chebyshev polynomials of the second kind that are generalized characteristic polynomials, whose common zeros are all real and distinct and generate Gaussian cubature rules.

In the next section, we provide background properties of orthogonal polynomials in several variables. Since we consider integrals in real variables for cubature rules, while generalized characteristic polynomials are in complex variables, we need to work with polynomials in conjugate complex variables, which requires us to restate several results on orthogonal polynomials accordingly. In Section 3, we discuss generalized characteristic polynomials and two conjectures in \cite{AS} in greater detail, and we explore the impact of the restriction to conjugate complex variables on these polynomials. The new orthogonal polynomials and Gaussian cubature rules are studied in Section 4.

\section{Orthogonal polynomials and their common zeros}
\setcounter{equation}{0}

Let $\Pi_n^d$ denote the space of real-valued polynomials of degree at most $n$ in $d$ variables, as before. Let $d\mu$ be a real-valued measure defined on a domain $\Omega \subset {\mathbb R}^d$. We usually consider orthogonal polynomials with respect to the inner product
$$
\langle f, g\rangle_\mu := \int_{\Omega} f(x) g(x) d\mu(x).
$$
In some cases, we need to define orthogonal polynomials in terms of a linear functional, which is somewhat more general. Let ${\mathcal L}$ be a linear functional whose moments are all finite. A polynomial $P \in \Pi_n^d$ is called an orthogonal polynomial with respect to ${\mathcal L}$ if ${\mathcal L} (P Q) =0$ for all polynomials $Q \in \Pi_{n-1}^d$. When ${\mathcal L}$ is defined by ${\mathcal L} f = \int_\Omega f(x) d\mu$, this is the same as orthogonality with respect to $\langle f, g\rangle_\mu$. We will need the notions of ${\mathcal L}$ being positive definite and quasi-definite; see \cite[Chapt.~3]{DX} for the definitions. For our purpose, quasi-definiteness allows us to use the Gram--Schmidt process to generate a complete sequence of orthogonal polynomials $\{P_{\alpha}^n: |{\alpha}| =n, \, {\alpha} \in {\mathbb N}_0^d, \, n=0,1,2,\ldots \}$, and positive definiteness allows us to further normalize the basis to an orthonormal one, that is, ${\mathcal L} (P_{\alpha}^n P_{\beta}^m ) = \delta_{{\alpha},{\beta}} \delta_{m,n}$ for all ${\alpha},{\beta} \in {\mathbb N}_0^d$ and $m, n \in {\mathbb N}_0$. For $d =1$, positive definiteness of ${\mathcal L}$ means that ${\mathcal L} f = \int f d \mu$ for a nonnegative Borel measure $d\mu$. For $d >1$, however, a further restriction on ${\mathcal L}$ is needed for this to hold.

Assume that orthogonal polynomials with respect to a linear functional ${\mathcal L}$ exist. For $n =0,1,\ldots,$ let ${\mathcal V}_n^d$ be the space of orthogonal polynomials of degree exactly $n$. It is known that $\dim {\mathcal V}_n^d = \binom{n+d-1}{n}$ and a basis of ${\mathcal V}_n^d$ can be conveniently indexed by the multi-indices in $\{{\alpha} \in {\mathbb N}_0^d: |{\alpha}| =n\}$ with respect to a fixed order, say the lexicographical order.
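For example, for $d = 2$ and $n = 2$ we have $\dim {\mathcal V}_2^2 = \binom{3}{2} = 3$, and a basis of ${\mathcal V}_2^2$ can be indexed, in lexicographical order, by the multi-indices $(2,0)$, $(1,1)$ and $(0,2)$.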
Let ${\mathbb P}_n :=\{P_{\alpha}^n: |{\alpha}| =n\}$ be a basis of ${\mathcal V}_n^d$, where $n$ denotes the total degree of the polynomials. With respect to the fixed order, we can regard ${\mathbb P}_n$ as a column vector. The orthogonal polynomials then satisfy a three-term relation, which takes the form
\begin{equation} \label{eq:real3-term}
x_i {\mathbb P}_n(x) = A_{n,i} {\mathbb P}_{n+1}(x) + B_{n,i} {\mathbb P}_n(x) + C_{n,i} {\mathbb P}_{n-1}(x), \quad 1\le i \le d,
\end{equation}
in vector notation, where the coefficients $A_{n,i}$, $B_{n,i}$ and $C_{n,i}$ are real matrices of appropriate dimensions. These relations, together with a full rank assumption on $A_{n,i}$, in fact characterize the orthogonality. Furthermore, the polynomials in ${\mathbb P}_n$ have $\dim \Pi_{n-1}^d$ common zeros if and only if
\begin{equation} \label{eq:real-zeros}
A_{n-1,i} A_{n-1,j}^\mathsf{t} = A_{n-1,j} A_{n-1,i}^\mathsf{t}, \qquad 1 \le i, j \le d.
\end{equation}
For further results in this direction, see \cite{DX}.

As mentioned in the introduction, we need to consider orthogonal polynomials in complex conjugate variables. Let $\Pi_n^d({\mathbb C})$ denote the space of polynomials of total degree $n$, with complex coefficients, in the conjugated complex variables $z_1,\ldots, z_d$, where $z_j = \overline{z_{d+1-j}}$. If $d$ is odd, this requires $z_{\frac{d+1}{2}} \in {\mathbb R}$. The relations between the real and the complex variables are
$$
x_k = \frac12 (z_k + z_{d+1-k}), \quad x_{d+1-k} = \frac1{2i} (z_k - z_{d+1-k}), \qquad 1 \le k \le \lfloor \tfrac{d}{2} \rfloor,
$$
and, if $d$ is odd, $x_{\frac{d+1}{2}} = z_{\frac{d+1}{2}}$. Let $d\mu$ be a real-valued measure defined on a domain $\Omega \subset {\mathbb R}^d$.
We define the orthogonality in terms of the inner product
$$
\langle f, g \rangle_\mu^{\mathbb C} := \int_{\Omega} f(z_1, z_2,\ldots, \bar z_2, \bar z_1) \overline {g(z_1, z_2,\ldots, \bar z_2, \bar z_1)} d \mu
$$
or its linear functional analogue. Let ${\mathcal V}_n^d({\mathbb C})$ denote the space of orthogonal polynomials of degree $n$ in $\Pi_n^d({\mathbb C})$ with respect to $\langle \cdot,\cdot\rangle_\mu^{\mathbb C}$. Since $d\mu$ is real-valued, it can be shown that ${\mathcal V}_n^d$ and ${\mathcal V}_n^d({\mathbb C})$ coincide.
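In the case $d=2$, which is our main concern below, we write $z := z_1 = x_1 + i x_2$, so that $z_2 = \bar z = x_1 - i x_2$, and the inner product takes the form
$$
\langle f, g \rangle_\mu^{\mathbb C} = \int_{\Omega} f(z, \bar z)\, \overline{g(z,\bar z)}\, d\mu.
$$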
We can establish a precise correspondence between a basis in ${\mathcal V}_n^d$ and a basis $\{P_{\alpha}^{\mathbb C}: |{\alpha}| =n\}$ of ${\mathcal V}_n^d({\mathbb C})$ that satisfies the relation
\begin{equation} \label{eq:J-relationOP}
P_{\alpha} (z_0,z_1, \ldots, \bar z_1, \bar z_0) = \overline {P_{\alpha}( \bar{z}_0, \bar z_1, \ldots, z_1, z_0)}.
\end{equation}
For orthogonal polynomials in conjugate complex variables, the three-term relation and the various properties that rely on it take different forms. To state these results, we restrict ourselves to two variables, since this avoids complicated notation and our examples are given mostly in two variables.

For $d =2$, a basis of ${\mathcal V}_n^2$ contains $n+1$ elements, which can be conveniently denoted by $P_{k,n}(x,y)$, $0 \le k \le n$, whereas a basis for ${\mathcal V}_n^2({\mathbb C})$ can be written as $P_{k,n}^{\mathbb C}(z,\bar z)$. Let ${\mathbb P}_n^{\mathbb C} := \{P_{0,n}^{\mathbb C}, \ldots, P_{n,n}^{\mathbb C}\}$, which we again regard as a column vector. The space ${\mathcal V}_n^2({\mathbb C})$ has many different bases, among which we can choose one that satisfies the relation (\cite{X13})
\begin{equation} \label{eq:PPconjugate}
\overline {{\mathbb P}_n^{\mathbb C}(z, \bar z)} = J_{n+1} {\mathbb P}_n^{\mathbb C}(z, \bar z), \qquad J_n := \left[ \begin{matrix} \bigcirc & & 1 \\ & \iddots & \\ 1 & & \bigcirc \end{matrix} \right],
\end{equation}
where $J_n$ is of size $n\times n$; this is \eqref{eq:J-relationOP} for two complex variables.
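For instance, for $n = 1$ the relation \eqref{eq:PPconjugate} reads
$$
\left[\begin{matrix} \overline{P_{0,1}^{\mathbb C}(z,\bar z)} \\ \overline{P_{1,1}^{\mathbb C}(z,\bar z)} \end{matrix}\right] = \left[\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right] \left[\begin{matrix} P_{0,1}^{\mathbb C}(z,\bar z) \\ P_{1,1}^{\mathbb C}(z,\bar z) \end{matrix}\right],
$$
that is, $\overline{P_{0,1}^{\mathbb C}} = P_{1,1}^{\mathbb C}$: the conjugate of each basis element is again a basis element, read in reverse order.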
Furthermore, setting $z = x+iy$ and $\bar z = x - iy$, this basis and a basis for ${\mathcal V}_n^2$ are related by
\begin{align} \label{eq:PvsQ}
\begin{split}
P_{k,n}(x,y) & := \frac{1}{\sqrt{2}} \left[P^{\mathbb C}_{k,n} (z, \bar z) + P^{\mathbb C}_{n-k,n} (z, \bar z) \right], \quad 0 \le k \le \frac n 2, \\
P_{k,n}(x,y) & := \frac{1}{\sqrt{2}\, i } \left[P^{\mathbb C}_{k,n} (z, \bar z) - P^{\mathbb C}_{n-k,n} (z, \bar z) \right], \quad \frac n 2 < k \le n.
\end{split}
\end{align}
In the following we normalize our linear functionals so that ${\mathcal L} 1 =1$. We also define ${\mathbb P}_{0}^{\mathbb C}(z,\bar z ) =1$ and ${\mathbb P}^{\mathbb C}_{-1}(z,\bar z) =0$.

In order to state the three-term relation for orthogonal polynomials in conjugate complex variables, we need one more definition. Let ${\mathcal M}^{\mathbb C}(n,m)$ denote the set of complex matrices of size $n \times m$. For a matrix $M \in {\mathcal M}^{\mathbb C}(n, m)$, we define a matrix $M^{\vee}$ of the same size by
$$
M^{\vee} := J_n \overline{M} J_m;
$$
in other words, $M^{\vee}$ is obtained from $M$ by conjugating its entries and reversing the order of both its rows and its columns. The following theorem gives the three-term relation and Favard's theorem for polynomials in conjugate complex variables.

\begin{thm} \label{thm:FavardC}
Let $\{{\mathbb P}^{\mathbb C}_n\}_{n=0}^\infty = \{P^{\mathbb C}_{k,n}: 0 \le k \le n, \, n\in {\mathbb N}_0\}$, ${\mathbb P}_0^{\mathbb C}(z,\bar z) =1$, be an arbitrary sequence in $\Pi^2({\mathbb C})$. Then the following statements are equivalent.
\begin{enumerate}[\quad \rm (1)]
\item There exists a quasi-definite linear functional ${\mathcal L}$ on $\Pi^2({\mathbb C})$ which makes $\{{\mathbb P}^{\mathbb C}_n\}_{n=0}^\infty$ an orthogonal basis of $\Pi^2({\mathbb C})$.
\item For $n \ge 0$, there exist matrices $\alpha_{n}: (n+1)\times (n+2)$, $\beta_{n}: (n+1)\times (n+1)$ and $\gamma_{n-1}: (n+1)\times n$ such that
\begin{align}\label{3-termC}
z {\mathbb P}_n^{\mathbb C} (z,\bar z)& = \alpha_n {\mathbb P}^{\mathbb C}_{n+1} (z,\bar z)+ \beta_n {\mathbb P}^{\mathbb C}_n (z,\bar z) + \gamma_{n-1} {\mathbb P}^{\mathbb C}_{n-1} (z,\bar z),
\end{align}
and the matrices in the relation satisfy the rank conditions
\begin{align*}
\operatorname{rank} (\alpha_n + \alpha_n^{\vee}) & = \operatorname{rank} (\alpha_n - \alpha_n^{\vee}) = n+1 \quad \hbox{and} \quad \operatorname{rank} \left [ \begin{matrix} \alpha_n \\ \alpha_{n}^{\vee} \end{matrix} \right]=n+2, \\
\operatorname{rank} (\gamma_{n-1} + \gamma_{n-1}^{\vee}) & = \operatorname{rank} (\gamma_{n-1} - \gamma_{n-1}^{\vee}) = n \quad \hbox{and} \quad \operatorname{rank} \left [ \begin{matrix} \gamma_{n-1} \\ \gamma_{n-1}^{\vee} \end{matrix} \right]=n+1.
\end{align*}
\end{enumerate}
\end{thm}

If ${\mathbb P}_n^{\mathbb C}$ is orthogonal, then the matrices $\alpha_n$ and $\gamma_{n-1}$ are related by
\begin{equation}\label{gamma-alpha}
\gamma_{n-1} H_{n-1} = J_{n+1} (\alpha_{n-1} H_n)^\mathsf{t} J_n,
\end{equation}
where $H_n = {\mathcal L} \big({\mathbb P}^{\mathbb C}_n ( {\mathbb P}^{\mathbb C}_n)^* \big )$. Using the fact that $\overline{{\mathbb P}_n^{\mathbb C}} = J_{n+1} {\mathbb P}_n^{\mathbb C}$, it then follows from \eqref{3-termC} that we also have
\begin{align}\label{3-termC2}
\bar z {\mathbb P}_n^{\mathbb C} (z,\bar z)& = \alpha_n^{\vee} {\mathbb P}^{\mathbb C}_{n+1} (z,\bar z)+ \beta_n^{\vee} {\mathbb P}^{\mathbb C}_n (z,\bar z) + \gamma_{n-1}^{\vee} {\mathbb P}^{\mathbb C}_{n-1} (z,\bar z).
\end{align}
One can compare \eqref{3-termC} and \eqref{3-termC2} with \eqref{eq:real3-term}. These results were formulated recently in \cite{X13}. They can be deduced from \eqref{eq:PvsQ} and the corresponding results in real variables, as can the results stated below.

\begin{thm} \label{cor:ONP3term}
The equivalence in Theorem \ref{thm:FavardC} remains true if, in {\rm (1)}, ${\mathcal L}$ is positive definite and ${\mathbb P}_n^{\mathbb C}$ is orthonormal, and, in {\rm (2)}, it is assumed in addition that $\gamma_{n-1} = (\alpha_{n-1}^*)^{\vee}$. Furthermore, in this case, there is a real-valued positive measure $d\mu$ with compact support in ${\mathbb R}^2$ such that ${\mathcal L} f = \int f d \mu$ if
\begin{align}\label{eq:norm-cpt}
\max_{n \ge 0} \|\alpha_n\| < \infty \quad \hbox{and}\quad \max_{n \ge 0} \|\beta_n\| < \infty,
\end{align}
where $\|\cdot\|$ is a fixed matrix norm.
\end{thm}

The common zeros of ${\mathbb P}_n^{\mathbb C}$ can be characterized in terms of the coefficient matrices of the three-term relation. A common zero of ${\mathbb P}_n^{\mathbb C}$ is called simple if at least one partial derivative of ${\mathbb P}_n^{\mathbb C}$ does not vanish at it.

\begin{thm} \label{thm:zeros}
Assume that ${\mathbb P}_n^{\mathbb C}$ is an orthogonal basis of ${\mathcal V}_n^2({\mathbb C})$. Then
\begin{enumerate} [\quad \rm (1)]
\item ${\mathbb P}^{\mathbb C}_n$ has $\dim \Pi_{n-1}^2$ common zeros if and only if
\begin{equation}\label{max-zero-cond}
\alpha_{n-1} \gamma_{n-1}^{\vee} = \alpha_{n-1}^{\vee} \gamma_{n-1}.
\end{equation}
\item If, in addition, $\gamma_{n-1} = (\alpha_{n-1}^*)^{\vee}$, then all zeros of ${\mathbb P}_n^{\mathbb C}$ are real.
\item All common zeros of ${\mathbb P}_n^{\mathbb C}$ are simple.
\end{enumerate}
\end{thm}

For orthonormal polynomials, this result is stated in \cite{X13}; the above statement (without orthonormality) can be deduced from the results for orthogonal polynomials of real variables. We observe that, by Theorem \ref{cor:ONP3term}, $\gamma_{n-1} = (\alpha_{n-1}^*)^{\vee}$ holds if ${\mathcal L}$ is positive definite, which holds if $H_n$ is positive definite for all $n \ge 0$. The condition \eqref{max-zero-cond} is equivalent to \eqref{eq:real-zeros} and it does not usually hold.

Although the space of orthogonal polynomials of real variables and that of conjugate complex variables are the same, it is sometimes more convenient to work with conjugate complex variables. This is illustrated by the example below.

\noindent {\bf Example 2.4.} \label{exam:cheby} {\it Chebyshev polynomials on the deltoid}. These polynomials are orthogonal with respect to
\begin{equation}\label{eq:cheby-weight}
w_{\alpha}(z) := \left [-3(x^2+y^2 + 1)^2 + 8 (x^3 - 3 xy^2) +4\right]^{\alpha}, \quad \alpha = \pm \frac12,
\end{equation}
on the deltoid, the region bounded by Steiner's hypocycloid, that is, the curve
$$
x + i y = (2 e^{ i \theta} + e^{- 2 i \theta})/3, \qquad 0 \le \theta \le 2 \pi.
$$
The three-cusped region is depicted in Figure \ref{figure:region}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.4]{region.pdf}
\caption{Region bounded by Steiner's hypocycloid} \label{figure:region}
\end{center}
\end{figure}

These polynomials were first studied by Koornwinder in \cite{K74}, and they are related to the symmetric and antisymmetric sums of exponentials on a regular hexagonal domain \cite{LSX}. Let $U_k^n \in \Pi_n^2({\mathbb C})$ be the Chebyshev polynomials of the second kind defined by the three-term recurrence relation
\begin{align} \label{recurT}
U _k^{n+1} (z,\bar z) = 3 z U_{k}^n(z,\bar z) - U_{k+1}^n(z,\bar z) - U_{k-1}^{n-1}(z,\bar z)
\end{align}
for $ 0 \le k \le n$ and $n\ge 1$, where $U_{-1}^n(z,\bar z ) := 0$ and $ U_{n+1}^{n}(z,\bar z ) :=0$, and the initial conditions
\begin{align*}
U_0^0(z,\bar z)=1, \quad U_0^1(z,\bar z)= 3z, \quad U_1^1(z,\bar z)= 3 \bar z.
\end{align*}
Then $U_k^n(z,\bar z)$, $0 \le k \le n$, are mutually orthogonal with respect to $w_{\frac12}$. \qed

It is known (\cite{LSX}) that the polynomials $U_k^n$, $0 \le k \le n$, possess $\dim \Pi_{n-1}^2$ real common zeros, so that $w_{\frac12}$ admits a Gaussian cubature rule of degree $2n-1$ for all $n$.

\section{Characteristic polynomials and orthogonal polynomials}
\setcounter{equation}{0}

In this section we discuss the generalized characteristic polynomials defined in the introduction in the case that the matrix $A$ is a banded Toeplitz matrix. In the case of one variable, it is well known that the characteristic polynomial of a tridiagonal matrix with positive off-diagonal elements is an orthogonal polynomial with respect to some positive definite linear functional. For generalized characteristic polynomials to be orthogonal, we need more restrictive conditions on the matrix $A$: it has to be, more or less, a banded Toeplitz matrix.
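As a quick one-variable illustration of this classical fact, let ${\mathcal A}$ be the tridiagonal matrix with $0$ on the main diagonal and $1/2$ on both off-diagonals. The characteristic polynomials $p_m(x) = \det(x{\mathcal I}_m - A_m)$ of its $m\times m$ principal submatrices satisfy $p_0(x) = 1$, $p_1(x) = x$ and
$$
p_m(x) = x\, p_{m-1}(x) - \tfrac14\, p_{m-2}(x),
$$
so that $p_m(x) = 2^{-m} U_m(x)$, where $U_m$ denotes the one-variable Chebyshev polynomial of the second kind; these are orthogonal on $[-1,1]$ with respect to $\sqrt{1-x^2}\,dx$.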
Recall that, for a matrix $A \in {\mathcal M}(m,m+n)$ and $I = \{i_1,\ldots,i_m\}$ with $1 \le i_1< \cdots < i_m \le m+n$, the generalized characteristic polynomial $P_I(z_0,\ldots,z_n)$ is defined by \eqref{eq:P_I}, and it is a polynomial of degree $m$. If ${\mathcal A}$ is an infinite matrix, we define these polynomials by taking $A$ to be the main $m \times (m+n)$ submatrix in the upper left corner of ${\mathcal A}$. We need the following definition from \cite{Lee}.

\begin{defn}
A matrix $A \in {\mathcal M}(m,n)$ is called centrohermitian if
$$
A = J_m \overline{A} J_n.
$$
\end{defn}

If $A = (a_{i,j})$ is a centrohermitian matrix in ${\mathcal M}(m, m+n)$, then $a_{i,j} = \overline{a_{m+1-i, m+n+1-j}}$ for $1 \le i \le m$ and $1 \le j \le m+n$. If $m+n= 2 \ell$, then
$$
A = \left[ \begin{matrix} a_{1,1} & \cdots & a_{1,\ell} & \overline{a_{m,\ell}} & \cdots & \overline{a_{m,1}} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,\ell} & \overline{a_{1,\ell}} & \cdots & \overline{a_{1,1}} \end{matrix} \right],
$$
and if $m+n = 2\ell +1$, then
$$
A = \left[ \begin{matrix} a_{1,1} & \cdots & a_{1,\ell} & a_{1,\ell+1}& \overline{a_{m,\ell}} & \cdots & \overline{a_{m,1}} \\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & \cdots & a_{m,\ell} & \overline{a_{1,\ell+1}}& \overline{a_{1,\ell}} & \cdots & \overline{a_{1,1}} \end{matrix} \right].
$$
The following conjecture was made in \cite{AS}.

\begin{conj}
If $A \in {\mathcal M}(m, m+n)$ is Toeplitz and centrohermitian, then each common zero $(z_0,\ldots,z_n)$ of $\{P_I:|I| = m\}$ satisfies $z_j = \overline{z_{n-j}}$, $0\le j \le n$.
\end{conj}

The original conjecture was stated in terms of a more complicated notion, called multihermitian (see the arXiv version of \cite{AS}), which was shown by the present author to be equivalent to being centrohermitian.
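For example, for $m=2$ and $n=1$, a Toeplitz matrix $(t_{i-j}) \in {\mathcal M}(2,3)$ is centrohermitian if and only if $t_1 = \bar t_0$ and $t_2 = \bar t_{-1}$, that is,
$$
A = \left[\begin{matrix} t_0 & \bar t_0 & \bar t_{-1} \\ t_{-1} & t_0 & \bar t_0 \end{matrix}\right];
$$
this is exactly the pattern $(c, z, \bar z, \bar c)$ of the matrices $A_{m,m+1}^c(z,\bar z)$ considered later in this section.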
\begin{prop} \label{prop:P_Iconjugate}
Let $A$ be a centrohermitian matrix in ${\mathcal M}(m,m+n)$. Then the polynomials $P_I$ in \eqref{eq:P_I} satisfy
$$
P_{\overline{I}}(z_0,\ldots, z_n) = \overline {P_I( \bar{z}_n,\ldots, \bar{z}_0)},
$$
where $\overline{I} := m+n+1 - I = \{m+n+1-i_m, \ldots, m+n+1-i_1\}$.
\end{prop}

\begin{proof}
Directly from the centrohermitian property of the matrix $A$, it follows that
$$
\overline{A(z_0,\ldots, z_n)} = J_m A(\bar{z}_n, \ldots, \bar{z}_0) J_{m+n}.
$$
Since multiplying by $J_{m+n}$ on the right reverses the order of the columns, we see that
$$
\overline{A_I(z_0,\ldots, z_n)} = J_m A_{\overline{I}}(\bar{z}_n, \ldots, \bar{z}_0) J_{m},
$$
from which the stated result for $P_I$ follows immediately upon taking determinants, since the two factors of $J_m$ contribute $(\det J_m)^2 = 1$.
\end{proof}

As a corollary of Proposition \ref{prop:P_Iconjugate}, we see that the polynomials $P_I$ associated with the matrix $A$ in the above proposition satisfy
$$
\overline{P_I(z_0,z_1,\ldots, \overline{z}_1, \overline{z}_0)} = P_{\overline{I}}(z_0,z_1,\ldots, \overline{z}_1, \overline{z}_0).
$$
In particular, in the case of two variables ($n=1$), we can write $P_I$, $|I| = m$, as $P_{k,m}^{\mathbb C}$ for $0 \le k \le m$, where $I = \{1,\ldots,m+1\} \setminus \{m+1-k\}$. Then the above relation coincides with \eqref{eq:PPconjugate}. In view of this relation, we reformulate Conjecture 3.2 as follows:

\begin{conj}
If $A \in {\mathcal M}(m, m+n)$ is Toeplitz and centrohermitian, then the common zeros $(z_0,\ldots,z_n)$ of $\{P_I(z_0,z_1,\ldots,\overline{z}_1,\overline{z}_0):|I| = m\}$ are all real.
\end{conj}

Our interest in real common zeros stems from Gaussian cubature rules, for which we need the characteristic polynomials to be orthogonal.
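In the simplest case $m = n = 1$, both Proposition \ref{prop:P_Iconjugate} and the conjectures can be verified by hand: a centrohermitian $A \in {\mathcal M}(1,2)$ has the form $A = [a \ \ \bar a]$, so that $A(z_0,z_1) = [a + z_0 \ \ \bar a + z_1]$, $P_{\{1\}} = a + z_0$ and $P_{\{2\}} = \bar a + z_1$. Indeed, $P_{\{2\}}(z_0,z_1) = \overline{P_{\{1\}}(\bar z_1, \bar z_0)}$ with $\overline{\{1\}} = \{2\}$, and the unique common zero $(z_0,z_1) = (-a, -\bar a)$ satisfies $z_0 = \bar z_1$.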
Let us first consider the example of the multivariate Chebyshev polynomials associated with the group ${\mathcal A}_d$. These Chebyshev polynomials are orthogonal and have been extensively studied in the literature; see \cite{Be, LX} and the references therein. The three-term relations that they satisfy are explicitly given in \cite{LX}, where it is also shown that these polynomials have the maximal number of common zeros, which serve as nodes for Gaussian cubature rules. In the case of $d =2$, the three-term relations are precisely those appearing in Example 2.4.

It was pointed out in \cite[Example 8]{AS} that when ${\mathcal A}$ is the special Toeplitz matrix ${\mathcal A} = (c_{i-j})$ with $c_{-1} = c_{n+1}=1$ and all other $c_j =0$, the generalized characteristic polynomials are the multivariate Chebyshev polynomials associated with the group ${\mathcal A}_{n+1}$. This was stated in \cite{AS} without proof. In fact, the statement holds for a dilation of the characteristic polynomials, and one way to prove it is to verify that the characteristic polynomials satisfy the same three-term relations as the multivariate Chebyshev polynomials. In the following we carry out this proof in the case of two variables, which will also be useful in the next section.
We consider, instead of $c =1$, more generally the matrix $A_{m,m+1}^c(z,\bar z)$ defined by
$$
A_{m,m+1}^c(z, \bar z) :=\left[ \begin{matrix} z & \bar{z} & \bar c & & & \bigcirc \\ c & z & \bar z &\bar c & & \\ & \ddots & \ddots & \ddots & \ddots & \\ & & c & z & \bar z & \bar{c} \\ \bigcirc & & & c & z & \bar{z} \end{matrix} \right]
$$
and denote its generalized characteristic polynomials $P_I$ more conveniently by
$$
P_k^m (z,\overline{z}), \quad 0 \le k \le m, \quad \hbox{and} \quad {\mathbb P}_m = (P_0^m, P_1^m, \ldots, P_m^m)^\mathsf{t},
$$
where $P_k^m$ is the determinant of the matrix formed by $A_{m,m+1}^c(z, \bar z)$ minus its $(m+1-k)$-th column. It is easy to see that $P_k^m (z, \bar z)$ is monic, in the sense that its highest order monomial is $z^{m-k}\bar z^k$.

\begin{prop}\label{prop:a=0}
The polynomials defined above satisfy the three-term relation
\begin{equation}\label{3term-P}
z {\mathbb P}_m (z, \overline{z})= [I_{m+1} \,\,0] {\mathbb P}_{m+1} (z, \overline{z})+ \beta_m {\mathbb P}_m(z, \overline{z}) + \gamma_m {\mathbb P}_{m-1}(z, \overline{z}), \quad m \ge 0,
\end{equation}
where
$$
\beta_m = \left[\begin{matrix} 0 & c & & \bigcirc \\ & \ddots & \ddots & \\ & & 0 & c \\ \bigcirc & & & 0 \end{matrix} \right] \quad \hbox{and} \quad \gamma_m = \left[\begin{matrix} 0 & \cdots & 0 \\ |c|^2 & & \bigcirc \\ & \ddots & \\ \bigcirc & & |c|^2 \end{matrix} \right].
$$
\end{prop}

\begin{proof}
For $0 \le k \le m$, define the $k \times k$ matrices
\small{
$$
A_k^c(z,\bar z):= \left[ \begin{matrix} z & \bar{z} & \bar c & & &\bigcirc \\ c & z & \bar z &\bar c & &\\ & \ddots & \ddots & \ddots & \ddots & \\ & & c & z & \bar z & \bar c \\ & & & c & z & \bar z \\ \bigcirc & & & & c & z \end{matrix} \right], \quad B_k^c(z,\bar z) :=\left[ \begin{matrix} \bar{z} & \bar c & & & & \bigcirc \\ z & \bar z &\bar c & & & \\ c & z & \bar z &\bar c & & \\ & \ddots & \ddots & \ddots & \ddots & \\ & & c & z & \bar z & \bar{c} \\ \bigcirc & & & c & z & \bar{z} \end{matrix} \right].
$$
}
It follows directly from the definition that
\begin{align*}
P_0^m(z,\bar z) & = \det A_m^c(z, \bar z), \quad P_m^m(z,\bar z) = \det B_m^c(z,\bar z), \\
P_k^m (z, \bar z) & = \det \left [ \begin{array}{c|c} A_{m-k}^c(z,\bar z) & \begin{matrix} & \bigcirc\\ \bar c \quad & \end{matrix} \\ \hline \begin{matrix} & \qquad c \\ \bigcirc & \end{matrix} & B_k^c(z, \bar z) \end{array} \right ], \qquad 1 \le k \le m-1.
\end{align*}
Now, expanding the determinant of $P_0^m$ along the last row shows immediately that
$$
P_0^m (z, \bar z) = z P_0^{m-1} (z, \bar z) - c P_1^{m-1} (z, \bar z).
$$
Expanding the determinant along the first row for $P_1^m$ and along the last row for $P_2^{m-1}$ leads to
$$
P_1^m = z P_1^{m-1} - c \bar z P_1^{m-2} + c |c|^2 P_1^{m-3}, \quad P_2^{m-1} = \bar z P_1^{m-2} - \bar c z P_0^{m-3}.
$$
Combining these identities gives
$$
P_1^m (z, \bar z) = z P_1^{m-1} (z, \bar z) - c P_2^{m-1} (z, \bar z) - |c|^2 P_0^{m-2}(z, \bar z).
$$ The same process can be used to derive the expansion of $P_k^m(z,{\beta}ar z)$, we omit the details. \end{proof} {\beta}egin{cor} {{\langle}mbda}abel{cor:3.6} Let $c = {\beta}ar a^3/ |a|^2$ and $U_k^m(z,{\beta}ar z) = a^{- m+k} {\beta}ar a^{-k} P_k^m (3 a z, 3 {\beta}ar a {\beta}ar z)$, $0 {{\langle}mbda}e k {{\langle}mbda}e m$. Then the polynomials $U_k^m (a z, {\beta}ar a {\beta}ar z)$ are precisely the Chebyshev polynomials of the second kind defined in Example 2.4. \end{cor} {\beta}egin{proof} Rewriting the three--term relation \eqref{3term-P} in terms of $U_k^m$, it is easy to see that $U_k^m$ satisfy the three--term relation \eqref{recurT} and $U_0^1(z,{\beta}ar z) = 3 z$ and $U_1^1(z,{\beta}ar z) = 3 {\beta}ar z$. Since the three--term relation uniquely determines the system of polynomials, $U_k^m$ coincides with those defined in Example 2.4. \end{proof} In particular, when $c= a =1$, the characteristic polynomials $P_k^m (z,{\beta}ar z) = U_k^n(z/3,{\beta}ar z/3)$, a dilation of the Chebyshev polynomials of the second kind associated with the group ${\mathcal A}_2$. The above example gives a Toeplitz matrix ${\mathcal A}$ for which the generalized characteristic polynomials are orthogonal. More generally, it was conjectured in \cite[Conjecture 20]{AS} that if ${\mathcal A}$ without its first row is a Toeplitz matrix, then the characteristic polynomials are orthogonal. 
The precise statement is the following: {\beta}egin{conj}{{\langle}mbda}abel{conj:2} Given $n > 0$, a banded matrix ${\mathcal A}$ has a weak orthogonality property in $n$ variables if it is of the form {\beta}egin{equation*} {\mathcal A} = {{\langle}mbda}eft[ {\beta}egin{matrix} a_0 & a_1 & a_2 & \cdots & a_{n+1} & 0 & 0 & 0 & \cdots \\ d_{-1} & d_0 & d_1 & \cdots & d_n & d_{n+1} & 0 & 0 & \cdots \\ 0 & d_{-1} & d_0 & d_1 & \cdots & d_n & d_{n+1} & 0 & \cdots \\ \vdots & {\delta}dots & {\delta}dots & {\delta}dots & {\delta}dots & \cdots & {\delta}dots & {\delta}dots & \cdots \end{matrix} \right]. \end{equation*} \end{conj} The weak orthogonality in the conjecture was defined in \cite{AS} by requiring that the family of the generalized characteristic polynomials $\{P_I(z_0,{{\langle}mbda}dots, z_n): |I| = m, \, m \in {\mathbb N}_0\}$ satisfy the $n$--dimensional analogue of the three--term relations \eqref{eq:real3-term} with complex coefficients. However, what we are interested in is orthogonal polynomials in conjugate complex variables that satisfy \eqref{eq:J-relationOP} or \eqref{eq:PPconjugate} for two variables, for which the three--term relations are of the form \eqref{3-termC} or its high dimensional analogue. In this setting, the Conjecture \ref{conj:2} does not hold. For example, in two variables ($n=1$), it is not difficult to see, by working with small $m$, that the condition \eqref{eq:PPconjugate} will force ${\mathcal A}$ in the conjecture to be Toeplitz with $d_2 = {\beta}ar d_{-1}$ and $d_1= {\beta}ar d_0$. The above discussion raises the question that, besides the characteristic polynomials associated with $A_{m,m+1}^c$, are there other systems of characteristics polynomials that are also orthogonal polynomials. It turns out that there exists at least a one--parameter family of perturbations of the matrix $A_{m,m+1}^c$ that does, as we shall see in the next section. 
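Before moving on, the determinantal definition of $P_k^m$ and the three-term relation of Proposition \ref{prop:a=0} are easy to check numerically. The following sketch is an illustration only, not part of the text; it uses 0-based indexing, so $P_k^m$ is obtained by deleting column $m-k$ of the $m\times(m+1)$ matrix, and the function names are ours:

```python
import numpy as np

def A_banded(m, z, c):
    """The m x (m+1) matrix A_{m,m+1}^c(z, zbar) with bands c, z, zbar, cbar."""
    M = np.zeros((m, m + 1), dtype=complex)
    for i in range(m):
        if i >= 1:
            M[i, i - 1] = c
        M[i, i] = z
        M[i, i + 1] = np.conj(z)
        if i + 2 <= m:
            M[i, i + 2] = np.conj(c)
    return M

def P_vec(m, z, c):
    """P_m = (P_0^m, ..., P_m^m)^t, where P_k^m deletes the (m-k)-th column."""
    M = A_banded(m, z, c)
    return np.array([np.linalg.det(np.delete(M, m - k, axis=1))
                     for k in range(m + 1)])

def check_three_term(m, z, c):
    """Verify z * P_m = [I 0] P_{m+1} + beta_m P_m + gamma_m P_{m-1}."""
    beta = np.diag(np.full(m, c), k=1)            # (m+1) x (m+1), superdiagonal c
    gamma = np.vstack([np.zeros((1, m)),          # (m+1) x m: zero first row,
                       abs(c) ** 2 * np.eye(m)])  # then |c|^2 on the diagonal
    lhs = z * P_vec(m, z, c)
    rhs = (P_vec(m + 1, z, c)[: m + 1]
           + beta @ P_vec(m, z, c)
           + gamma @ P_vec(m - 1, z, c))
    return np.allclose(lhs, rhs)

print(check_three_term(4, 0.3 + 0.8j, 0.7 - 0.2j))  # True
```

The check passes at any point $(z,\bar z)$ and for any $c$, in agreement with the proposition; altering the truncation $[I_m\,\,0]$ or the entries of $\beta_m$, $\gamma_m$ makes it fail.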
\section{Polynomials associated with a family of centrohermitian matrices}
\setcounter{equation}{0}

In this section we consider a family of centrohermitian matrices, a one-parameter family of perturbations of the matrix $A_{m,m+1}^c$, and show that the associated characteristic polynomials are orthogonal with respect to a positive Borel measure for a certain range of the parameters, which establishes the existence of the Gaussian cubature rule for the integral against this measure.

For complex numbers $a$ and $c$, we consider the matrix
$$
A_{m,m+1}^{a,c}(z, \bar z) :=\left[ \begin{matrix} z & \bar{z} & \bar a & & & \bigcirc \\ c & z & \bar z &\bar c & & \\ & \ddots & \ddots & \ddots & \ddots & \\ & & c & z & \bar z & \bar{c} \\ \bigcirc & & & a & z & \bar{z} \end{matrix} \right],
$$
and denote by $Q_k^m$ the determinant of the matrix formed by $A_{m,m+1}^{a,c}(z, \bar z)$ minus its $(m-k)$-th column. When $a =c$, the matrix reduces to the one considered in the previous section. It is again easy to see that $Q_k^m (z, \bar z)$ is monic with leading term $z^{m-k} \bar z^k$. We shall show below that these polynomials also satisfy three-term relations, but whether their common zeros are all real depends on the range of the parameters $a$ and $c$.

To deduce the three-term relation for $Q_k^m$, we first express them in terms of the ${\mathbb P}_m$ of Proposition \ref{prop:a=0}.

\begin{lem}\label{lem:a=!0}
Let $P_k^m$, $0 \le k \le m$, be the orthogonal polynomials in Proposition \ref{prop:a=0} and define $P_{-1}^m(z,\bar z):=0$. Then
\begin{align*}
Q_0^m & = P_0^m -(a-c) P_1^{m-1} + c^2 (\bar a - \bar c) P_0^{m-3} - c^2|a-c|^2 P_1^{m-4}, \\
Q_1^m & = P_1^m - \bar c (a-c)P_0^{m-2} + c^2 (\bar a - \bar c) P_1^{m-3} - c |c|^2|a-c|^2 P_0^{m-5}, \\
Q_k^m & = P_k^m +|c|^2(a-c) P_{k-3}^{m-3} + c^2 (\bar a - \bar c) P_k^{m-3} + |c|^4|a-c|^2 P_{k-3}^{m-6}, \quad 2 \le k \le m-2, \\
Q_{m-1}^m & = P_{m-1}^m - c (\bar a - \bar c) P_{m-2}^{m-2} + \bar c^2 (a-c)P_{m-4}^{m-3} - \bar c |c|^2|a-c|^2 P_{m-5}^{m-5}, \\
Q_m^m & = P_m^m - (\bar a- \bar c) P_1^{m-1} + \bar c^2 (a - c) P_{m-3}^{m-3} - \bar c^2 |a-c|^2 P_{m-5}^{m-4}.
\end{align*}
\end{lem}

\begin{proof}
For $0 \le k \le m-1$, we define $k \times k$ matrices $A_k^{a,c}(z,\bar z)$ and $B_k^{a,c}(z,\bar z)$ as in the proof of Proposition \ref{prop:a=0}, where the $(1,3)$ entry of $A_k^{a,c}(z,\bar z)$ is $\bar a$ and the $(k-2,k)$ entry of $B_k^{a,c}(z,\bar z)$ is $a$; these matrices do not contain $a$ or $\bar a$ if $k =1$ or $2$. We then have
\begin{align*}
Q_k^m (z, \bar z) = \det \left [ \begin{array}{c|c} A_{m-k}^{a,c}(z,\bar z) & \begin{matrix} & \bigcirc\\ \bar c \quad & \end{matrix} \\ \hline \begin{matrix} & \qquad c \\ \bigcirc & \end{matrix} & B_k^{a,c}(z, \bar z) \end{array} \right ], \qquad 2 \le k \le m-2.
\end{align*}
Writing the first row of the matrix for $Q_k^m$ as a sum of two rows, one the same row with $a$ replaced by $c$ and the other $(0,0, \bar a - \bar c, 0, \ldots, 0)$, it follows that
\begin{align*}
Q_k^m (z,\bar z) =& \det \left [ \begin{array}{c|c} A_{m-k}^{c,c}(z,\bar z) & \begin{matrix} & \bigcirc\\ \bar c \quad & \end{matrix} \\ \hline \begin{matrix} & \qquad c \\ \bigcirc & \end{matrix} & B_k^{a,c}(z, \bar z) \end{array} \right ] \\
& + c^2 (\bar a - \bar c) \det \left [ \begin{array}{c|c} A_{m-3-k}^{c,c}(z,\bar z) & \begin{matrix} & \bigcirc\\ \bar c \quad & \end{matrix} \\ \hline \begin{matrix} & \qquad c \\ \bigcirc & \end{matrix} & B_k^{a,c}(z, \bar z) \end{array} \right ].
\end{align*}
Applying the same procedure to the last row of the two matrices on the right-hand side, the desired formula for $Q_k^m$ follows. The remaining cases $k =0,1$ and $k = m-1, m$ can be handled similarly.
\end{proof}

\begin{prop}\label{prop:a=!0}
The polynomials $Q_k^m$ satisfy the three-term relation
\begin{equation} \label{eq:3termQac}
z {\mathbb Q}_m (z, \overline{z})= [I_m \,\,0]\, {\mathbb Q}_{m+1} (z, \overline{z})+ \beta_m {\mathbb Q}_m(z, \overline{z}) + \gamma_m {\mathbb Q}_{m-1}(z, \overline{z}), \quad m \ge 0,
\end{equation}
where
$$
\beta_m = \left[\begin{matrix} 0 & c & & \bigcirc \\ & \ddots & \ddots & \\ & & 0 & c \\ \bigcirc & \bar c - \bar a & 0 &0 \end{matrix} \right] \quad \hbox{and} \quad
\gamma_m = \left[\begin{matrix} 0 & 0 & c(c-a) & \\ |c|^2 & & & \bigcirc \\ & |c|^2 && \\ & & \ddots & \\ \bigcirc & & & |c|^2 \end{matrix} \right].
$$
In particular, the polynomials $Q_k^m$ are orthogonal polynomials with respect to a quasi-definite linear functional and ${\mathbb Q}_m$ has $\dim \Pi_{m-1}^2$ simple common zeros.
\end{prop}

\begin{proof}
To prove the three-term relation, we first write $Q_k^m$ in terms of the $P_j^n$ as in Lemma \ref{lem:a=!0}, then apply the three-term relation \eqref{3term-P} of the $P_j^n$ from the previous section to derive an expansion of $z Q_k^m(z,\bar z)$ in terms of the $P_j^m$, and, finally, rewrite the latter as an expansion in the $Q_j^m$ by applying Lemma \ref{lem:a=!0} again. The first two steps are immediate; the third step is also straightforward in the case $2 \le k \le m-2$ and only slightly more complicated in the remaining cases $k =0,1$ and $k = m-1, m$. We use as an example the case $k=0$, which is the one that needs the most attention. By Lemma \ref{lem:a=!0} and \eqref{3term-P}, it is easy to see that
\begin{align*}
z Q_0^m = & \ P_0^{m+1} + c P_1^m - (a-c) \left(P_1^{m} + c P_2^{m-1} + |c|^2 P_0^{m-2} \right) \\
& + c^2(\bar{a} - \bar{c}) \left(P_0^{m-2} + cP_1^{m-3}\right ) - c^2|a-c|^2 \left(P_1^{m-3} + c P_2^{m-4} + |c|^2 P_0^{m-5}\right).
\end{align*}
Using the formulas in Lemma \ref{lem:a=!0}, in particular $Q_2^{m-1} = P_2^{m-1} + c^2(\bar{a}-\bar{c}) P_2^{m-4}$, we can write the right-hand side of the above identity in terms of the $Q_j^m$. This gives
$$
z Q_0^m = Q_0^{m+1} + c Q_1^m + c(c-a) Q_2^{m-1},
$$
which is precisely the first component of the matrix identity \eqref{eq:3termQac}.

It is straightforward to see that the rank conditions in Theorem \ref{thm:FavardC} are satisfied for $\alpha_m$ and $\gamma_m$, so that the $Q_k^m$ are orthogonal polynomials with respect to a quasi-definite linear functional. From the explicit expressions $\alpha_m = [I\, \, 0]$ and $\gamma_m$, it follows readily that \eqref{max-zero-cond} holds. Consequently, ${\mathbb Q}_m$ has $\dim \Pi_{m-1}^2$ simple common zeros.
\end{proof}

Since the $Q_k^m$ are polynomials in $z$ and $\bar z$, the fact that they have $\dim \Pi_{m-1}^2$ common zeros does not follow from Proposition \ref{prop:basic}, which is stated for polynomials of independent complex variables. For Gaussian cubature rules we need, in addition, that the common zeros of the $Q_k^m$ are all real. This holds, however, only for restricted values of $a$ and $c$.

\begin{thm}
If $a$ and $c$ are nonzero complex numbers such that $c(c-a) \in {\mathbb R}$ and $|c| \ge 2 |c-a|$, then the polynomials $Q_k^m(z,\bar z)$, $0 \le k \le m$, have $\dim \Pi_{m-1}^2$ real, simple common zeros.
\end{thm}

\begin{proof}
By the discussion right after Theorem \ref{cor:ONP3term}, it is sufficient to prove that the matrices $H_n = {\mathcal L}({\mathbb Q}_n {\mathbb Q}_n^*)$ are positive definite for all $n \ge 0$. Because $\overline{{\mathbb Q}_n} = J_{n+1} {\mathbb Q}_n$, the matrix $H_n$ is centrohermitian; that is, $J_{n+1}H_n J_{n+1} = \overline{H_n}$. Using this fact and taking the complex conjugate of \eqref{gamma-alpha}, it is easy to see that $\overline{\alpha_{n-1}}\,J_{n+1} H_n J_{n+1} = H_{n-1}^\mathsf{t} (\gamma_{n-1}^\vee)^\mathsf{t}$. Consequently, since $\alpha_{n-1} = [I_n\,\,0]$ and $J_n \alpha_{n-1}J_{n+1} = [0\,\, I_n]$, we conclude that the second identity below holds,
\begin{equation} \label{eq:HnHn-1}
[I_n \, \, 0] H_n = J_n H_{n-1}^\mathsf{t} \gamma_{n-1}^\mathsf{t} J_{n+1} \quad\hbox{and}\quad [0 \,\, I_n] H_n = J_n H_{n-1}^\mathsf{t} (\gamma_{n-1}^\vee)^\mathsf{t} J_{n+1},
\end{equation}
where the first one follows directly from \eqref{gamma-alpha}. These two identities can be used to determine $H_n$ inductively. We can normalize the linear functional ${\mathcal L}$ so that $H_0 =1$.
Using the explicit formulas for $\gamma_n$, it is easy to see that $H_1 = |a|^2 I_2$ and $H_2 = |a|^2 \alpha I_3$, where $\alpha = |c|^2 - |a-c|^2$, which is positive definite since $\alpha >0$ by assumption. The next case is
$$
H_3 = \left [ \begin{matrix} |c|^2 & 0 & 0 & \beta \\ 0 & |c|^2 & 0 & 0 \\ 0 & 0& |c|^2 & 0 \\ \overline{\beta} & 0 & 0 & |c|^2 \end{matrix} \right], \qquad \beta : = c (c-a),
$$
which is positive definite since $\det H_3 = |a|^2 \alpha^2 |c|^6 >0$ if $a \ne 0$ and $c \ne 0$, so that all its principal minors have positive determinant. Using the explicit formula for $\gamma_3$, it then follows from \eqref{eq:HnHn-1} that $H_4$ satisfies
\begin{align*}
[I_4\,\, 0] H_4 & = \alpha |a|^2 |c|^2 \left [ \begin{matrix} |c|^2 & 0 & 0 & \overline{\beta} & 0 \\ 0 & |c|^2 & 0 & 0 & \beta \\ 0 & 0& |c|^2 & 0 & 0 \\ \beta & 0 & 0 & |c|^2 & 0\end{matrix} \right], \\
[0\,\, I_4] H_4 & = \alpha |a|^2 |c|^2 \left [ \begin{matrix} 0 & |c|^2 & 0 & 0& \overline{\beta} \\ 0& 0 & |c|^2 & 0 & 0 \\ \overline{\beta} & 0 & 0& |c|^2 & 0 \\ 0 & \beta & 0 & 0 &|c|^2 \end{matrix} \right],
\end{align*}
which implies that $\beta$ is necessarily a real number and
$$
H_4 = \alpha |a|^2 |c|^2 \left [ \begin{matrix} |c|^2 & 0 & 0 & \beta & 0 \\ 0 & |c|^2 & 0 & 0 & \beta \\ 0 & 0& |c|^2 & 0 & 0 \\ \beta & 0 & 0 & |c|^2 & 0\\ 0 & \beta & 0 & 0 &|c|^2\end{matrix} \right].
$$
For $n > 4$, it is easy to conclude by induction and \eqref{eq:HnHn-1} that
$$
H_n = \alpha |a|^2 |c|^{2(n-3)} \left [ \begin{matrix} |c|^2 & 0 & 0 & \beta & & \bigcirc \\ 0 & |c|^2 & 0 & \ddots & \ddots & \\ 0 & 0& \ddots & \ddots & \ddots & \beta \\ \beta & \ddots & \ddots & \ddots & 0 &0 \\ & \ddots & \ddots & 0 & |c|^2 & 0 \\ \bigcirc & & \beta & 0 & 0 &|c|^2\end{matrix} \right].
$$
By assumption, $|c|^2 \ge 2 |\beta| = 2 |c| |c-a|$, which implies that $H_n$ is diagonally dominant with its first and last rows strictly diagonally dominant. Consequently, $H_n$ is positive definite.
\end{proof}

Numerical computation indicates that the condition $|c| \ge 2 |c-a|$ is sharp for the positive definiteness of $H_n$ for all $n \in {\mathbb N}$, and hence sharp for the polynomials in ${\mathbb Q}_n$ to be orthogonal with respect to a positive measure.

\begin{cor}\label{cor:Gaussian}
If $a$ and $c$ satisfy the assumptions of the theorem, then there is a finite positive Borel measure $d \mu_{a,c}$ with compact support in ${\mathbb R}^2$ with respect to which the polynomials $Q_k^m$ are orthogonal. Furthermore, for the integral against $d\mu_{a,c}$, the Gaussian cubature rule of degree $2m-1$ exists for all $m \in {\mathbb N}$.
\end{cor}

\begin{proof}
By the explicit formula of the three-term relations, it is easy to see that \eqref{eq:norm-cpt} holds, which implies the existence of $d\mu$ by Theorem \ref{cor:ONP3term}.
\end{proof}

As mentioned before, only two families of integrals for which Gaussian cubature rules exist are known in the literature. One of them is the integral over the deltoid with respect to $w_{1/2}$ in \eqref{eq:cheby-weight}, which corresponds to the case $a = c$ in the corollary.
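As an aside, the positive definiteness of the matrices $H_n$ in the proof of the theorem, and the numerically observed sharpness of the condition $|c| \ge 2|c-a|$, are easy to probe. In the following sketch (illustrative only, with our own function names) we drop the positive scalar factor $\alpha|a|^2|c|^{2(n-3)}$, so that $H_n$ becomes $|c|^2 I + \beta S_n$, where $S_n$ is the $0$-$1$ symmetric matrix with ones on the third sub- and superdiagonals, and read definiteness off the smallest eigenvalue:

```python
import numpy as np

def H_core(n, a, c):
    """H_n up to its positive scalar prefactor: |c|^2 * I + beta * S,
    with beta = c(c-a) (assumed real) and S carrying ones on the
    third sub- and superdiagonals."""
    beta = (c * (c - a)).real
    S = np.diag(np.ones(n + 1 - 3), k=3)
    S = S + S.T
    return abs(c) ** 2 * np.eye(n + 1) + beta * S

# |c| >= 2|c-a| (here c = 1, a = 3/2): positive definite for every n tested
print(all(np.linalg.eigvalsh(H_core(n, 1.5, 1.0)).min() > 0
          for n in range(3, 15)))

# a = 5/2 violates the condition and H_n loses positive definiteness
print(np.linalg.eigvalsh(H_core(12, 2.5, 1.0)).min() < 0)
```

Both checks print `True`, matching the diagonal-dominance argument in the proof and the failure of the common zeros to stay real for $a = 5/2$ reported below.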
Our result in the above corollary shows that Gaussian cubature rules exist for a family of measures that includes $w_{1/2}$ as a special case, corresponding to $a = c =1$ with $(x,y)$ dilated by $3$ as shown in Corollary \ref{cor:3.6}. We do not know, however, an explicit formula for the measure when $a \ne c$.

To get some sense of the situation, let us depict the common zeros of the orthogonal polynomials when $c=1$ and $a$ is a parameter, with $(x,y)$ dilated by $3$. By Corollary \ref{cor:Gaussian}, the common zeros generate a Gaussian cubature rule if $1/2 \le a \le 3/2$. In Figure \ref{figure:nodes}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.45]{a=half} \quad \includegraphics[scale=0.45]{a=1points} \\
\includegraphics[scale=0.45]{a=3half} \quad \includegraphics[scale=0.45]{a=2points}
\caption{Clockwise from the upper left corner: nodes for $m = 8$ with $a = 1/2$, $a=1$, $a=3/2$ and $a =2$} \label{figure:nodes}
\end{center}
\end{figure}
we depict the common zeros of the orthogonal polynomials of degree $8$ for $a=1/2, 1, 3/2, 2$, respectively. The case $a = 1$ corresponds to the Gaussian cubature rule for $w_{1/2}$. The cases $a = 1/2$ and $a=3/2$ correspond to the boundary cases for which the existence of a Gaussian cubature rule is guaranteed by Corollary \ref{cor:Gaussian}. The figures show that the corresponding measures in these two cases are likely supported on the same region, and that the points are distributed more densely toward the boundary as $a$ increases, which indicates that the corresponding measures may behave like $w_{a/2} (x,y)\,dx\, dy$, where $w_{\alpha}$ is defined in \eqref{eq:cheby-weight}. However, the coefficients of the three-term relation of the Chebyshev polynomials of the first kind (\cite{LSX}), which correspond to $w_{-1/2}$ and do not admit a Gaussian cubature rule, are of a different form from those in \eqref{eq:3termQac}. This seems to indicate that the measures are not exactly $w_{a/2}$. For the case $a=2$, outside the range of Corollary \ref{cor:Gaussian}, the figure shows that the points appear to cluster together. Further tests show that the common zeros are no longer all real, nor all inside the region, for larger $a$, say $a = 5/2$.

\iffalse
This example shows that the polynomials for a centrohermitian matrix do not all have real common zeros. Hence, Conjecture 2 does not extend to centrohermitian matrices. In its place we state the following conjecture:
\begin{conj}
If $A$ is a centrohermitian matrix of size $n \times (n+m)$, then the set of common zeros of all $P_I(z_0,\ldots,z_n)$ with $|I| = m$ is a finite subset of ${\mathbb C}^{n+1}$ of cardinality $\binom{m+n}{n+1}$ counting multiplicities.
\end{conj}
This connection is not a consequence of Proposition \ref{prop:basic}, which requires that the variables are independent complex variables.
\fi

\bigskip\noindent
{\bf Acknowledgement.} The author thanks Professor Boris Shapiro for helpful discussions, and the referee and the editor for their helpful comments and suggestions.

\begin{thebibliography}{99}

\bibitem{AS} P. Alexandersson and B. Shapiro, Around multivariate Schmidt-Spitzer theorem, \textit{Lin. Alg. Appl.} \textbf{446} (2014), 356--368.

\bibitem{Be} R. J. Beerends, Chebyshev polynomials in several variables and the radial part of the Laplace-Beltrami operator, \textit{Trans. Amer. Math. Soc.} \textbf{328} (1991), 779--814.

\bibitem{BSX} H. Berens, H. Schmid and Y. Xu, Multivariate Gaussian cubature formula, \textit{Arch. Math.} \textbf{64} (1995), 26--32.

\bibitem{DX} C. F. Dunkl and Y. Xu, \textit{Orthogonal polynomials of several variables}, 2nd edition, Cambridge Univ. Press, 2014.

\bibitem{K74} T. H. Koornwinder, Orthogonal polynomials in two variables which are eigenfunctions of two algebraically independent partial differential operators, I, II, \textit{Proc. Kon. Akad. v. Wet., Amsterdam} \textbf{36} (1974), 48--66.

\bibitem{Lee} A. Lee, Centrohermitian and skew-centrohermitian matrices, \textit{Linear Algebra Appl.} \textbf{29} (1980), 205--210.

\bibitem{LSX} H. Li, J. Sun and Y. Xu, Discrete Fourier analysis, cubature and interpolation on a hexagon and a triangle, \textit{SIAM J. Numer. Anal.} \textbf{46} (2008), 1653--1681.

\bibitem{LX} H. Li and Y. Xu, Discrete Fourier analysis on fundamental domain of $A_d$ lattice and on simplex in $d$-variables, \textit{J. Fourier Anal. Appl.} \textbf{16} (2010), 383--433.

\bibitem{My} I. P. Mysovskikh, \textit{Interpolatory cubature formulas}, Nauka, Moscow, 1981.

\bibitem{SS} B. Shapiro and M. Shapiro, On eigenvalues of rectangular matrices, \textit{Proc. Steklov Math. Inst.} \textbf{267} (2009), 248--255.

\bibitem{St} A. Stroud, \textit{Approximate calculation of multiple integrals}, Prentice-Hall, Englewood Cliffs, NJ, 1971.

\bibitem{X13} Y. Xu, Complex versus real orthogonal polynomials of two variables, \textit{Integral Transform \& Special Funct.} \textbf{26} (2015), 134--151.

\end{thebibliography}

\end{document}
\begin{document}

\newtheorem{lemma}{Lemma}[section]
\newtheorem{theorem}[lemma]{Theorem}
\newtheorem{corollary}[lemma]{Corollary}
\newtheorem{proposition}[lemma]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[lemma]{Definition}
\newtheorem{remark}[lemma]{Remark}
\newtheorem{exam}[lemma]{Example}
\numberwithin{equation}{section}

\begin{center}
{\large\bf Notes on the Chernoff Product Formula} \\[10pt]
{\large Valentin A. Zagrebnov} \\[5mm]
Aix-Marseille Universit\'{e}, CNRS, Centrale Marseille\\
Institut de Math\'{e}matiques de Marseille (UMR 7373)\\
Centre de Math\'{e}matiques et Informatique - Technop\^{o}le Ch\^{a}teau-Gombert\\
39, rue F. Joliot Curie, 13453 Marseille Cedex 13, France \\
e-mail {[email protected]}
\end{center}

{\sc Abstract}. We revise the strongly convergent Chernoff product formula and extend it, in a Hilbert space, to convergence in the operator-norm topology. The main results deal with the self-adjoint Chernoff product formula. The nonself-adjoint case concerns quasi-sectorial contractions.

\tableofcontents

\let\thefootnote\relax\footnotetext{
\begin{tabular}{@{}l}
{\em Mathematics Subject Classification}. 47D05, 47A55, 81Q15, 47B65.\\
{\em Keywords}. Strongly continuous semigroups, semigroup approximations, Chernoff product formula,\\
quasi-sectorial contractions, Trotter-Kato product formulae.
\end{tabular}}

\section{The Chernoff product formula}\label{sec:3.1}

In the paper \cite{Che68}, Chernoff proved the following theorem:

\begin{proposition}\label{prop:3.0.0}
Let $F(t)$ be a strongly continuous function from $[0,\infty )$ to the linear contractions on a Banach space $\mathfrak{X}$ such that $F(0)=\mathds{1}$. Suppose that the closure $C$ of the strong derivative $F'(+0)$ is the generator of a contraction semigroup. Then $F(t/n)^{n}$, $n \in {\mathbb N}$, converges for $t\geq 0$, as $n \rightarrow \infty$, to the contraction $C_0$-semigroup $\{e^{t\,C}\}_{t\geq 0}$ in the strong operator topology.
\end{proposition}

Note that, by the conditions of Proposition \ref{prop:3.0.0}, the operator $C(\tau) := (F(\tau) - \mathds{1})/\tau$ is dissipative for $\tau > 0$, and $F'(+0)$ is the derivative $\lim_{\tau\rightarrow +0} C(\tau)$ in the strong resolvent sense. We also recall another version of this proposition; see \cite{Che74} (Theorem 1.1).

\begin{proposition}\label{prop:3.1.0}
Let $t \mapsto F(t)$ be a measurable operator-valued function from $[0,\infty)$ to the linear contractions on a Banach space $\mathfrak{X}$ such that $F(0)=\mathds{1}$. If
\begin{equation}\label{eq:0.1}
\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to+0}(\mathds{1} - C(\tau))^{-1} = (\mathds{1} - C)^{-1} \,
\end{equation}
in the strong operator topology, then
\begin{equation}\label{eq:0.2}
\mbox{\rm s-}\hspace{-2pt} \lim_{n\to+\infty}F(t/n)^n = e^{t\,C}\, ,
\end{equation}
for all $t\geq 0$, uniformly on bounded $t$-intervals.
\end{proposition}

\begin{proof}
The proof needs two ingredients. The first is the \textit{Trotter-Neveu-Kato} theorem (\cite{Kat80}, Ch.~IX, Theorem 2.16): the strong convergence in (\ref{eq:0.1}) yields for the contractions $\{e^{t\, C(\tau)}\}_{t\geq 0}$
\begin{equation}\label{eq:0.3}
\lim_{\tau\to+0}e^{t\, C(\tau)} u = e^{t\,C} u \, ,
\end{equation}
for all $u \in \mathfrak{X}$, locally uniformly on closed intervals ${\mathcal I} \subset {\mathbb R}^+_{0}$. Now, for any $t > 0$ and $u \in \mathfrak{X}$ we define $\{u_n := [\mathds{1} - C(t/n)/\sqrt{n}]^{-1} u \}_{n\geq1}$. Then by (\ref{eq:0.3})
\begin{equation}\label{eq:0.4}
\lim_{n \rightarrow \infty } u_n := \lim_{n \rightarrow \infty}\sqrt{n} \int_{0}^{\infty} ds \ e^{- \sqrt{n}\, s}\ e^{s\, C(t/n)}\, u = u .
\end{equation}
The second ingredient comes from the so-called $\sqrt{n}$-Lemma (\cite{Che68}, Lemma 2). It yields the estimate
\begin{equation}\label{eq:0.5}
\|e^{t \, C(t/n)}\, w - F(t/n)^n \, w\| \leq \sqrt{n} \ \|(F(t/n) - \mathds{1})\, w\| \, ,
\end{equation}
for any $w\in \mathfrak{X}$. Then (\ref{eq:0.4}) and (\ref{eq:0.5}) imply
\begin{eqnarray*}
&& \lim_{n \rightarrow \infty }\|e^{t \, C(t/n)}\, u - F(t/n)^n \, u\| = \lim_{n \rightarrow \infty }\|e^{t \, C(t/n)}\, u_n - F(t/n)^n \, u_n \| \\
&& \leq \lim_{n \rightarrow \infty } t/\sqrt{n} \ \|C(t/n)\, [\mathds{1} - C(t/n)/\sqrt{n}]^{-1}\, u\| = \lim_{n \rightarrow \infty } t\, \|u_n - u\| = 0\,.
\end{eqnarray*}
Hence, by (\ref{eq:0.3}) together with the estimate
\begin{equation*}
\|e^{t\,C} u - F(t/n)^n u\|\leq \|e^{t\,C} u - e^{t\, C(t/n)} u\| + \|e^{t \, C(t/n)}\, u - F(t/n)^n \, u\| \, ,
\end{equation*}
for any $u\in \mathfrak{X}$, we obtain (\ref{eq:0.2}).
\end{proof}

\begin{remark}\label{rem:3.2.0}
(a) The equivalence of condition (\ref{eq:0.1}) to the existence of the strong derivative $F'(+0)$ is part of the \textit{Trotter-Neveu-Kato} theorem; see \cite{EN00}, Ch.~III, Sects.~4.8 and 4.9.\\
(b) For an analysis of the optimality, generalisation and improvement of the $\sqrt{n}$-Lemma, see \cite{Zag17}.
\end{remark}

\begin{definition}\label{def:3.3.0}
Equation (\ref{eq:0.2}) is called the \textit{Chernoff product formula}, or the \textit{Chernoff approximation formula}, in the strong operator topology for the contraction $C_0$-semigroup $\{e^{t\,C}\}_{t\geq 0}$.
\end{definition}

The aim of the present Notes is \textit{lifting} the strongly convergent Chernoff product formula to a formula convergent in the operator-norm topology, whereas the majority of results concern only strong convergence; see, e.g., the detailed review \cite{But19}.
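In finite dimensions the mechanism behind the Chernoff product formula can be watched directly. The following sketch is purely illustrative and not from the Notes: it takes the resolvent family $F(t) = (\mathds{1} + tH)^{-1}$ for a positive semidefinite matrix $H$, a self-adjoint contraction family with $F(0)=\mathds{1}$ and $F'(+0) = -H$, and observes the convergence of $F(t/n)^n$ to $e^{-tH}$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
H = B @ B.T                       # positive semidefinite; generator of e^{-tH}

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return (Q * np.exp(w)) @ Q.T

def chernoff(t, n):
    """F(t/n)^n with F(t) = (I + tH)^{-1}, a self-adjoint contraction family."""
    F = np.linalg.inv(np.eye(5) + (t / n) * H)
    return np.linalg.matrix_power(F, n)

t = 1.0
exact = expm_sym(-t * H)
errs = [np.linalg.norm(chernoff(t, n) - exact, 2) for n in (10, 100, 1000)]
print(errs[0] > errs[1] > errs[2])   # errors decrease as n grows
```

In norm the error decays like $O(1/n)$ for this choice of $F$, consistent with the rate estimates for the self-adjoint case discussed in Section \ref{sec:3.3}.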
Our main results are focused on the analysis of the \textit{operator-norm} convergence of the Chernoff product formula in a Hilbert space $\mathfrak{H}$; see Section \ref{sec:3.2}. In Section \ref{sec:3.3} we establish estimates of the operator-norm \textit{rate} of convergence for the \textit{self-adjoint} Chernoff product formula. The case of the Chernoff product formula for \textit{nonself-adjoint quasi-sectorial} contractions is the subject of Section \ref{sec:3.4}. The Trotter-Kato product formula is a direct application of the Chernoff product formula; see Section \ref{sec:3.5}.

We conclude this section with the proof of our first \textit{Note}: for the strongly convergent Chernoff product formula, \textit{self-adjointness} allows one to relax condition (\ref{eq:0.1}) to weak operator convergence. To this aim we recall a \textit{lifting} topology assertion in $\mathfrak{H}$:

\begin{proposition}\label{lem:3.1.1+1}
Let $\{u_n\}_{n\geq1}$ be a weakly convergent sequence of vectors, $\mbox{\rm w-}\hspace{-2pt} \lim_{n\to\infty} u_n = u$, in a Hilbert space $\mathfrak{H}$. If, in addition, $\lim_{n\to\infty} \|u_n\| = \|u\|$, then $\lim_{n\to\infty} \|u_n - u\| =0$.
\end{proposition}

This assertion implies the following statement.

\begin{lemma}\label{lem:3.1.3}
Let $S: {\mathbb R}^+ \rightarrow {\mathcal L}(\mathfrak{H})$ be a measurable family of non-negative self-adjoint operators and let $H$ be a non-negative self-adjoint operator. If the weak operator limit
\begin{equation}\label{eq:3.1.11}
\mbox{\rm w-}\hspace{-2pt} \lim_{\tau\to+0}(\lambda \mathds{1} + S(\tau))^{-1} = (\lambda \mathds{1} +H)^{-1}
\end{equation}
holds for each $\lambda > 0$, then it also holds in the strong operator topology{\rm{:}}
\begin{equation}\label{eq:3.1.12}
\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to+0}(\lambda \mathds{1} + S(\tau))^{-1} = (\lambda \mathds{1} +H)^{-1}\, .
\end{equation}
\end{lemma}

\begin{proof}
From \eqref{eq:3.1.11} we get
\begin{equation}\label{eq:3.1.13}
\lim_{\tau\to+0}\|(\lambda \mathds{1} + S(\tau))^{-1/2}u\|^2 = \|(\lambda \mathds{1} +H)^{-1/2}u\|^2 , \quad u \in \mathfrak{H},
\end{equation}
for $\lambda \ge 1$. Since
\begin{equation*}
(\mathds{1} +S(\tau))^{-1/2} = \frac{1}{\pi}\int^\infty_0 d\eta \ \frac{1}{\sqrt{\eta}}\ (\eta \, \mathds{1} + \mathds{1} + S(\tau))^{-1}\, ,
\end{equation*}
\eqref{eq:3.1.11} also yields $\mbox{\rm w-}\hspace{-2pt} \lim_{\tau\to +0}(\mathds{1} + S(\tau))^{-1/2} = (\mathds{1} +H)^{-1/2}$. This, together with \eqref{eq:3.1.13} and the lifting Proposition \ref{lem:3.1.1+1}, implies $\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to +0}(\mathds{1} + S(\tau))^{-1/2} = (\mathds{1} +H)^{-1/2}$. Then the strong convergence of the product of operators yields \eqref{eq:3.1.12}.
\end{proof}

By Lemma \ref{lem:3.1.3}, the conditions of Proposition \ref{prop:3.1.0} for the family of operators
\begin{equation}\label{eq:3.1.1}
S(\tau) := \frac{\mathds{1}- F(\tau)}{\tau}\, , \quad \tau > 0 \, ,
\end{equation}
can be reformulated in a Hilbert space $\mathfrak{H}$. Then we get a stronger assertion:

\begin{theorem}\label{cor:3.1.4}
Let $F: {\mathbb R}^+ \longrightarrow {\mathcal L}(\mathfrak{H})$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$ and let $H\geq 0$ be a self-adjoint operator in $\mathfrak{H}$. Then
\begin{equation}\label{eq:3.1.7}
\mbox{\rm s-}\hspace{-2pt} \lim_{n\to+\infty}F(t/n)^n = e^{-tH}\, ,
\end{equation}
if and only if for each $\lambda > 0$ the condition
\begin{equation}\label{eq:3.1.15}
\mbox{\rm w-}\hspace{-2pt} \lim_{\tau\to +0} (\lambda \mathds{1} + S(\tau))^{-1} = (\lambda \mathds{1} +H)^{-1}
\end{equation}
is satisfied.
\end{theorem}

\begin{proof}
If \eqref{eq:3.1.15} is valid, then applying Lemma \ref{lem:3.1.3} we verify \eqref{eq:3.1.12}. Using Proposition \ref{prop:3.1.0} in a Hilbert space $\mathfrak{H}$ for $C(\tau) = - S(\tau)$, $C = -H$ and \eqref{eq:3.1.12} for $\lambda =1$, we prove the self-adjoint Chernoff product formula \eqref{eq:3.1.7}.

To prove the converse we use the representation
\begin{equation}\label{eq:3.1.150}
e^{-tS(t/n)} - e^{-tH} = F(t/n)^n - e^{-tH} + e^{-tS(t/n)} - F(t/n)^n \, .
\end{equation}
Then $\lim_{n\to+\infty}\|(F(t/n)^n - e^{-tH})u\| = 0$ by condition (\ref{eq:3.1.7}), whereas by the spectral functional calculus for the self-adjoint operator $F(\tau)$ we obtain the \textit{operator-norm} estimate
\begin{equation}\label{eq:3.1.151}
\|F(t/n)^n - e^{-n(\mathds{1}- F(t/n))}\| = \left\|\int_{[0,1]} dE_{F(t/n)}(\lambda) \left(\lambda^n -e^{-n(1-\lambda)}\right)\right\| \leq \frac{1}{n} \ .
\end{equation}
Therefore, (\ref{eq:3.1.150}) yields for $\tau = t/n$ the limit
\begin{equation}\label{eq:3.1.3}
\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to+0}e^{-tS(\tau)} = e^{-tH} \, ,
\end{equation}
for $t\in {\mathbb R}^+$. Note that by the self-adjoint \textit{Trotter-Neveu-Kato} convergence theorem the limit (\ref{eq:3.1.3}) is equivalent to
\begin{equation}\label{eq:3.1.2}
\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to+0}(\mathds{1} + S(\tau))^{-1} = (\mathds{1} +H)^{-1} \, .
\end{equation}
Since (\ref{eq:3.1.2}) is, in turn, equivalent to
\begin{equation*}
\mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to +0}(\lambda \mathds{1} +S(\tau))^{-1} = (\lambda \mathds{1} +H)^{-1}\, ,
\end{equation*}
for $\lambda > 0$, the latter yields the weak limit \eqref{eq:3.1.15}.
\end{proof}

\begin{remark}\label{rem:3.2.1}
(a) We note that due to the self-adjointness of the family $\{F(t)\}_{t\geq0}$ the estimate (\ref{eq:3.1.151}) is stronger than the $\sqrt{n}$-estimate (\ref{eq:0.5}) for general contractions.\\
(b) By the definition of $\{F(t)\}_{t\geq 0}$ and by condition (\ref{eq:3.1.15}), the $C_0$-semigroup property of $\{e^{-tH}\}_{t\geq 0}$ ensures that the strong limits $\mbox{\rm s-}\hspace{-2pt} \lim_{t\to +0}$ of the left- and right-hand sides of \eqref{eq:3.1.7} are well-defined and coincide with $\mathds{1}$. Similarly, since by (\ref{eq:3.1.3}) the limits in $\lim_{t\to+0} \mbox{\rm s-}\hspace{-2pt} \lim_{\tau\to+0}e^{-tS(\tau)}$ commute, one gets that $t \in {\mathbb R}^+_{0}$.
\end{remark}

\section{Lifting the Chernoff product formula to the operator-norm topology}\label{sec:3.2}

The first result about lifting the Chernoff product formula to the operator-norm topology is due to \cite{NZ99a}, Theorem 2.2. Our next \textit{Note} concerns an improvement of this result by its extension to any closed $t$-interval ${\mathcal I} \subset {\mathbb R}^{+}_{0}$.

Similarly to the case of the strong operator topology, our strategy involves estimating the two ingredients (\ref{eq:0.3}) and (\ref{eq:0.5}) of Proposition \ref{prop:3.1.0}, but now in the \textit{operator-norm} topology. Since the second ingredient has the estimate (\ref{eq:3.1.151}), it remains to find conditions for \textit{lifting} the limit (\ref{eq:3.1.3}) in the Trotter-Neveu-Kato convergence theorem to operator-norm convergence. We proceed with the following lemma.

\begin{lemma}\label{lem:3.2.1}
Let $K$ and $L$ be non-negative self-adjoint operators in a Hilbert space $\mathfrak{H}$. Then
\begin{equation}\label{eq:3.2.1}
\|e^{-K} - e^{-L}\| \le c \ \|(\mathds{1} +K)^{-1} - (\mathds{1} +L)^{-1}\|
\end{equation}
with a constant $c > 0$ independent of the operators $K$ and $L$.
\end{lemma}

By the Riesz-Dunford functional calculus one obtains the representation
\begin{equation}\label{eq:3.2.1-1}
e^{-K} - e^{-L} = \frac{1}{2\pi i}\int_{\Gamma} dz \; e^{z}\left((z + K)^{-1} - (z + L)^{-1}\right).
\end{equation}
Essentially, the line of reasoning is based on straightforward estimates that use (\ref{eq:3.2.1-1}) for a suitable contour $\Gamma$; the arguments show that the constant $c$ depends only on $\Gamma$. We skip the proof.

By virtue of (\ref{eq:3.1.151}) and Lemma \ref{lem:3.2.1}, the \textit{operator-norm} Trotter-Neveu-Kato theorem requires \textit{lifting} the strong convergence in \eqref{eq:3.1.2} to operator-norm convergence.

\begin{lemma}\label{lem:3.2.2}
Let $F: {\mathbb R}^{+}_{0} \longrightarrow {\mathcal L}(\mathfrak{H})$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let the self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for a non-negative self-adjoint operator $H$ in $\mathfrak{H}$. Then the condition
\begin{equation}\label{eq:3.2.6}
\lim_{\tau\to+0}\|(\mathds{1} +S(\tau))^{-1} - (\mathds{1} +H)^{-1}\| = 0
\end{equation}
is satisfied if and only if
\begin{equation}\label{eq:3.2.7}
\lim_{\tau\to+0}\sup_{t\in{\mathcal I}}\|(\mathds{1} +t S(\tau))^{-1} - (\mathds{1} +t H)^{-1}\| = 0
\end{equation}
for any closed interval ${\mathcal I} \subset {\mathbb R}^+$.
\end{lemma}

\begin{proof}
A straightforward computation shows that
\begin{equation}\label{eq:3.2.71}
\begin{split}
&(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\\
&= t(\mathds{1} +S(\tau))(\mathds{1} +tS(\tau))^{-1}[(\mathds{1} +S(\tau))^{-1} - (\mathds{1} +H)^{-1}](\mathds{1} +H)(\mathds{1} +tH)^{-1}\, .
\varepsilonnd{split} \varepsilonnd{equation} Here we used that if $t > 0$ and $\gt > 0$ then for self-adjoint operator $S(\gt)$ the closure \begin{equation}gin{equation*} \overline{(\mathds{1} + t \, S(\gt))^{-1} (\mathds{1} +S(\gt))}= (\mathds{1} + S(\gt))(\mathds{1} + t \, S(\gt))^{-1} \, . \varepsilonnd{equation*} For these values of arguments $t$ and $\tau$ we obtain estimates: \begin{equation}gin{eqnarray*} &&\|(\mathds{1} +S(\gt))(\mathds{1} + t\, S(\gt))^{-1}\| \le (1 + 2/t)\, , \\ &&\|(\mathds{1} +H)(\mathds{1} + t\, H)^{-1}\| \le (1 + 2/t)\, . \varepsilonnd{eqnarray*} If $\cI$ is a closed interval of $\dR^+$, e.g., $\cI := [a,b]$ for $0 < a < b < \infty$, then by (\ref{eq:3.2.71}) \begin{equation}gin{equation}\label{eq:3.2.8} \|(\mathds{1} +tS(\gt))^{-1} - (\mathds{1} +tH)^{-1}\| \le {b}(1 + 2/a)^{2} \|(\mathds{1} +S(\gt))^{-1} - (\mathds{1} +H)^{-1}\|, \varepsilonnd{equation} for $t \in [a,b]$ and $\gt > 0$. By \varepsilonqref{eq:3.2.6} the estimate \varepsilonqref{eq:3.2.8} yields \varepsilonqref{eq:3.2.7}. The converse is obvious. \varepsilonnd{proof} \begin{theorem}\label{th:3.2.3} Let $F: \dR^{+}_{0} \longrightarrow \cL({\omega}tH)$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \varepsilonmph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for self-adjoint operator $H \geq 0$ in ${\omega}tH$. Then we have \begin{equation}\label{eq:3.2.9} \lim_{n\to \infty}\sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| = 0 \, , \varepsilone for any closed interval $\cI \subset \dR^+$, if and only if the family $\{S(\tau)\}_{\tau > 0}$ satisfies condition \varepsilonqref{eq:3.2.6}. \varepsilont \begin{equation}gin{proof} For $t > 0$ and $n \geq 1$ we get estimate \begin{equation}\label{eq:3.2.10} \|F(t/n)^n - e^{-tH}\| \le\|F(t/n)^n - e^{-tS(t/n)}\| + \|e^{-tS(t/n)} - e^{-tH}\|. 
\varepsilone Then (\ref{eq:3.1.151}) and (\ref{eq:3.2.10}) imply \begin{equation}\label{eq:3.2.11} \|F(t/n)^n - e^{-tH}\| \le \varphirac{1}{n} + \|e^{-t S(t/n)} - e^{-tH}\|, \quad t > 0, \quad n \geq 1 \ . \varepsilone Note that by Lemma \ref{lem:3.2.1} there is a constant $c > 0$ such that for $t > 0$ \begin{equation}\label{eq:3.2.12} \|e^{-tS(t/n)} - e^{-tH}\| \le c \ \|(\mathds{1} +t S(t/n))^{-1} - (\mathds{1} +tH)^{-1}\| . \varepsilone Inserting the estimate \varepsilonqref{eq:3.2.12} into \varepsilonqref{eq:3.2.11} we obtain \begin{equation}\label{eq:3.2.13} \|F(t/n)^n - e^{-tH}\| \le \varphirac{1}{n} + c \, \|(\mathds{1} +tS(t/n))^{-1} - (\mathds{1} +tH)^{-1}\|, \varepsilone Then (\ref{eq:3.2.13}) and Lemma \ref{lem:3.2.2} yield \varepsilonqref{eq:3.2.9}. Conversely, let us assume \varepsilonqref{eq:3.2.9}. For $t > 0$ and $n \geq 1$ we have estimate \begin{equation}gin{equation*} \|e^{-tS(t/n)} - e^{-tH}\| \le \|F(t/n)^n - e^{-tH}\| + \|F(t/n)^n - e^{-tS(t/n)}\| \ , \varepsilonnd{equation*} and by (\ref{eq:3.1.151}) \begin{equation}\label{eq:3.2.14} \|e^{-t S(t/n)} - e^{-tH}\| \le \|F(t/n)^n - e^{-tH}\| + \varphirac{1}{n} \ . \varepsilone Then by assumption \varepsilonqref{eq:3.2.9} for any closed interval $\cI \subset \dR^+$ the estimate \varepsilonqref{eq:3.2.14} yields \begin{equation}gin{equation*} \lim_{n\to\infty}\sup_{t\in\cI}\|e^{-tS(t/n)} - e^{-tH}\| = 0 \, . \varepsilonnd{equation*} Hence, $\lim_{n\to\infty}\|e^{-tS(t/n)} - e^{-tH}\| = 0$ implies $\lim_{\gt\to+0}\|e^{-tS(\gt)} - e^{-tH}\| = 0$ for any $t > 0$. Now, using representation: \begin{equation}gin{equation}\label{eq:3.2.14-2} (\mathds{1} +S(\gt))^{-1} - (\mathds{1} +H)^{-1} = \int^\infty_0 ds \ e^{-s}\ \big(e^{-s \, S(\gt)} - e^{-s\, H}\big) \ , \varepsilonnd{equation} we obtain the estimate \begin{equation}gin{equation}\label{eq:3.2.14-1} \|(\mathds{1} + S(\gt))^{-1} - (\mathds{1} + H)^{-1}\| \le \int^\infty_0 ds \ e^{-s}\ \big\|e^{-s S(\gt)} - e^{-s H}\big\| \, . 
\varepsilonnd{equation} Let $\Phi_{\tau}(s):= e^{-s}\ \big\|e^{-s S(\gt)} - e^{-s H}\big\|$. Since $S(\gt)\geq 0$ and $H \geq 0$, one gets $\Phi_{\tau}(s) \leq 2 \, e^{-s} \in L^{1}(\dR^+)$ and $\lim_{\gt\to+0}\Phi_{\tau}(s) = 0$. Then $\lim_{\gt\to+0}$ in the right-hand side of (\ref{eq:3.2.14-1}) is zero by the Lebesgue {dominated convergence} theorem, that yields \varepsilonqref{eq:3.2.6}. \varepsilonnd{proof} Extension of this statement to \textit{any} bounded interval $\cI \subset \dR^{+}_{0}$ (cf. Remark \ref{rem:3.2.1}(b)) needs a \textit{uniform} operator-norm extension of the Trotter-Neveu-Kato theorem, that we present below. \begin{theorem}\label{th:3.2.5} Let $F: \dR^{+}_{0} \longrightarrow \cL({\omega}tH)$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \varepsilonmph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})\,}, for self-adjoint operator $H \geq 0$ in ${\omega}tH$. Then the convergence \begin{equation}\label{eq:3.2.16} \lim_{\gt\to +0}\sup_{t\in \cI}\|e^{-tS(\tau)} - e^{-tH}\| = 0 \ , \varepsilone holds for any bounded interval $\cI \subset \dR^{+}_{0}$ if and only if the condition \begin{equation}\label{eq:3.2.17} \lim_{\gt\to +0}\sup_{t\in \cI}\|(\mathds{1} +t S(\tau))^{-1} - (\mathds{1} +tH)^{-1}\| = 0 \ , \varepsilone is valid for any bounded interval $\cI \subset \dR^{+}_{0}$. \varepsilont \begin{equation}gin{proof} By conditions of theorem and by Lemma \ref{lem:3.2.1} we obtain from \varepsilonqref{eq:3.2.12} the estimate \begin{equation}gin{equation*} \sup_{t\in\cI}\|e^{-tS(\tau)} - e^{-tH}\| \le c \ \sup_{t\in\cI}\|(\mathds{1} + t S(\tau))^{-1} - (\mathds{1} + t H)^{-1}\| \, , \varepsilonnd{equation*} for $\tau > 0$ and for any bounded interval $\cI \subset \dR^{+}_{0}$. This estimate and condition \varepsilonqref{eq:3.2.17} imply the convergence in \varepsilonqref{eq:3.2.16}. Conversely, assume \varepsilonqref{eq:3.2.16}. 
Note that by representation (\ref{eq:3.2.14-2}) one gets for $t\geq 0 $: \begin{equation}gin{equation*} (\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1} = \int^\infty_0 ds \ e^{-s}\big(e^{-s\,tS(\tau)} - e^{-s\,tH}\big)\, . \varepsilonnd{equation*} This yields the estimate \begin{equation}gin{equation*} \left\|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\right\| \le \int^\infty_0 ds \ e^{-s} \big\|e^{-s\,tS(\tau)} - e^{-s\,tH}\big\| , \varepsilonnd{equation*} for $\tau > 0$ and $t \ge 0$. Now, let $0< \varepsilon < 1$ and let $N_{\varepsilon} := - \ln(\varepsilon/2)$. Then \begin{equation}gin{equation*} \int^\infty_{N_{\varepsilon}} ds \ e^{-s}\left\|e^{-s\,tS(\tau)} - e^{-s\,tH}\right\| \le {\varepsilon}\, , \varepsilonnd{equation*} for $\tau > 0$ and $t \ge 0$. Hence, \begin{equation}gin{equation*} \left\|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\right\| \le \int^{N_{\varepsilon}}_0 ds \ e^{-s}\left\|e^{-stS(\tau)} - e^{-stH}\right\| + {\varepsilon} \ , \varepsilonnd{equation*} that for any bounded interval $\cI \subset \dR^{+}_{0}$ and $\tau> 0$ yields \begin{equation}gin{equation*} \sup_{t\in\cI}\left\|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\right\| \le \sup_{\begin{equation}gin{array}{c} t\in\cI \wedge s\in [0,N_{\varepsilon}]\varepsilona}\left\|e^{-s\,tS(\tau)} - e^{-s\,tH}\right\| + {\varepsilon} \ . \varepsilonnd{equation*} Applying now \varepsilonqref{eq:3.2.16} we obtain \begin{equation}gin{equation*} \lim_{\tau\to +0}\sup_{t\in\cI}\left\|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\right\| \le {\varepsilon} \ , \varepsilonnd{equation*} for any $\varepsilon > 0$. This completes the proof of \varepsilonqref{eq:3.2.17}. \varepsilonnd{proof} Now we are in position to improve Theorem \ref{th:3.2.3}. We relax the restriction to closed intervals $\cI \subset \dR^{+}$ to condition on any bounded interval $\cI \subset \dR^{+}_{0}$. 
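For orientation, we first test the $t$-dependent resolvent condition on the simplest example, the \textit{Euler family} $F(\tau) := (\mathds{1} + \tau H)^{-1}$, for which \eqref{eq:3.1.1} gives the Yosida approximation $S(\tau) = H(\mathds{1} + \tau H)^{-1}$. By the spectral functional calculus,
\begin{equation*}
\|(\mathds{1} + tS(\tau))^{-1} - (\mathds{1} + tH)^{-1}\| \le \sup_{\lambda \ge 0}\frac{t\,\tau\,\lambda^{2}}{(1 + (t+\tau)\lambda)(1 + t\lambda)} \le \frac{\tau}{t}\, ,
\end{equation*}
and for $\tau = t/n$ the middle term is bounded by $1/n$ uniformly in $t \ge 0$. Hence the condition \eqref{eq:3.2.19} of Theorem \ref{th:3.2.6} below is satisfied, and one obtains the operator-norm Euler formula $\lim_{n\to\infty}\sup_{t\in\cI}\|(\mathds{1} + tH/n)^{-n} - e^{-tH}\| = 0$.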
\begin{theorem}\label{th:3.2.6}
Let $F: \dR^{+}_{0} \longrightarrow \cL(\widetilde{\mathfrak{H}})$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let the self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \emph{(\ref{eq:3.1.1})} and \emph{(\ref{eq:3.1.2})\,} for a self-adjoint operator $H \geq 0$ in $\widetilde{\mathfrak{H}}$. Then
\begin{equation}\label{eq:3.2.18}
\lim_{n\to \infty}\sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| = 0 ,
\end{equation}
for any bounded interval $\cI \subset \dR^{+}_{0}$ if and only if
\begin{equation}\label{eq:3.2.19}
\lim_{n\to\infty}\sup_{t\in\cI}\|(\mathds{1} +t S(t/n))^{-1} - (\mathds{1} +tH)^{-1}\| = 0 ,
\end{equation}
is satisfied for any bounded interval $\cI \subset \dR^{+}_{0}$.
\end{theorem}

\begin{proof}
By \eqref{eq:3.2.13} and by assumption \eqref{eq:3.2.19} we obtain the limit \eqref{eq:3.2.18}. Conversely, using \eqref{eq:3.2.14} and assumption \eqref{eq:3.2.18} one gets \eqref{eq:3.2.16} for $\tau = t/n$ and for any bounded interval $\cI \subset \dR^{+}_{0}$. Then application of Theorem \ref{th:3.2.5} yields \eqref{eq:3.2.19}.
\end{proof}

\section{{Chernoff product formula: rate of the operator-norm convergence}}\label{sec:3.3}

Theorem \ref{th:3.2.5} admits further improvements. They allow us to establish estimates for the \textit{rate} of the operator-norm convergence in \eqref{eq:3.2.16} under certain conditions in \eqref{eq:3.2.17}. Our next aim is to estimate the convergence rate in Theorem \ref{th:3.2.6}. Recall that the first result in this direction is due to \cite{IT01} (Lemma 2.1).

\begin{lemma}\label{lem:3.3.1}
Let $F: \dR^{+}_{0} \longrightarrow \cL(\widetilde{\mathfrak{H}})$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let the self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for a self-adjoint operator $H \geq 0$ in $\widetilde{\mathfrak{H}}$.
\begin{itemize}
\item[\;\;\rm (i)] If ${\rho} \in (0,1]$ and there is a constant $M_{\rho} > 0$ such that the estimate
\begin{equation}\label{eq:3.3.1}
\|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\| \le M_{\rho}\left(\frac{\tau}{t}\right)^{\rho} \, ,
\end{equation}
holds for $\tau,t \in (0,1]$ and $0 < \tau \le t$, then there is a constant $c_{\rho} > 0$ such that the estimate
\begin{equation}\label{eq:3.3.2}
\|F(\tau)^{t/\tau} - e^{-tH}\| \le c_{\rho}\left(\frac{\tau}{t}\right)^{\rho} \, ,
\end{equation}
is valid for $\tau,t \in (0,1]$ with $0 < \tau \le t$.
\item[\;\;\rm (ii)] If ${\rho} \in (0,1)$ and there is a constant $c_{\rho}$ such that \eqref{eq:3.3.2} holds, then there is a constant $M_{\rho} > 0$ such that the estimate \eqref{eq:3.3.1} is valid.
\end{itemize}
\end{lemma}

\begin{proof}
(i) By Lemma \ref{lem:3.2.1} there is a constant $c > 0$ such that
\begin{equation}\label{eq:3.3.3}
\|e^{-tS(\tau)} - e^{-tH}\| \le c \, \|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\| \, ,
\end{equation}
for $\tau,t > 0$. Then \eqref{eq:3.3.1}, for $\tau,t \in (0,1]$ with $0 < \tau \le t$, yields, cf. Theorem \ref{th:3.2.5},
\begin{equation*}
\big\|e^{-tS(\tau)} - e^{-tH}\big\| \le c \;M_{\rho}\left(\frac{\tau}{t}\right)^{\rho} \, .
\end{equation*}
By the definition \eqref{eq:3.1.1} and the inequality \eqref{eq:3.1.151},
\begin{equation}\label{eq:3.3.5}
\big\|F(\tau)^{t/\tau} - e^{-tS(\tau)}\big\| \le \frac{\tau}{t} \, .
\end{equation}
Therefore, by the estimate
\begin{equation}\label{eq:3.3.6}
\big\|F(\tau)^{t/\tau} - e^{-tH}\big\| \le \big\|F(\tau)^{t/\tau} - e^{-tS(\tau)}\big\| + \big\|e^{-tS(\tau)} - e^{-tH}\big\| \, ,
\end{equation}
we obtain for $\tau,t \in (0,1]$ with $0 < \tau \le t$
\begin{equation*}
\big\|F(\tau)^{t/\tau} - e^{-tH}\big\| \le \frac{\tau}{t} + c \; M_{\rho}\left(\frac{\tau}{t}\right)^{\rho} \, .
\end{equation*}
Since for ${\rho} \in (0,1]$ one has ${\tau}/{t} \le \left({\tau}/{t}\right)^{\rho}$,
\begin{equation}\label{eq:3.3.8}
\left\|F(\tau)^{t/\tau} - e^{-tH}\right\| \le (1 + c \;M_{\rho})\left(\frac{\tau}{t}\right)^{\rho} \ .
\end{equation}
Setting $c_{\rho} := 1 + c\;M_{\rho} \,$, we prove \eqref{eq:3.3.2} for ${\rho} \in (0,1]$.

(ii) To prove \eqref{eq:3.3.1} we use the identity
\begin{equation*}
(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1} = \sum^\infty_{n=0}\;\int^{n+1}_n \; dx \;e^{-x} \big(e^{-xtS(\tau)} - e^{-xtH}\big)\, ,
\end{equation*}
for $\tau,t > 0$. The substitution $x = y + n$ yields
\begin{eqnarray*}
\lefteqn{ (\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1} =}\\
& & \hspace{1.0cm} \sum^\infty_{n=0}\;e^{-n}\;\int^{1}_0 \; dy \;e^{-y} \big(e^{-(y+n)tS(\tau)} - e^{-(y+n)tH}\big)\, ,
\end{eqnarray*}
which gives the representation
\begin{eqnarray*}
\lefteqn{ (\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1} =}\\
& & \hspace{0.5cm} \sum^\infty_{n=0}\;e^{-n}\left \{ \Big(\sum^{n-1}_{k=0} e^{-ktS(\tau)}\big(e^{-tS(\tau)} - e^{-tH}\big) e^{-(n-k-1)tH}\Big) \int^{1}_0 \; dy \;e^{-y}\;e^{-ytS(\tau)} + \right.\\
& & \hspace{0.5cm} \left. e^{-ntH}\int^{1}_0 \; dy\;e^{-y}\;\big(e^{-ytS(\tau)} - e^{-ytH}\big) \right\}.
\end{eqnarray*}
Hence, we obtain the estimate
\begin{eqnarray}\label{eq:3.3.9}
\lefteqn{ \|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\| \le}\\
& & \hspace{0.5cm} \sum^\infty_{n=0}\;e^{-n}\Big\{n \ \big\|e^{-tS(\tau)} - e^{-tH}\big\| + \int^{1}_0 \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\|\Big\}\, . \nonumber
\end{eqnarray}
Note that assumption \eqref{eq:3.3.2} and the estimate \eqref{eq:3.3.5} yield for $\tau,t \in (0,1]$, with $0 < \tau \le t$,
\begin{equation}\label{eq:3.3.10}
\big\|e^{-tS(\tau)} - e^{-tH}\big\| \le {{(1 + c_{\rho})}} \left(\frac{\tau}{t}\right)^{\rho} .
\end{equation}
To treat the last term in \eqref{eq:3.3.9} we use the decomposition
\begin{eqnarray}\label{eq:3.3.11}
\lefteqn{ \int^{1}_0 \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| = }\\
& & \hspace{0.5cm} \int^1_{\tau/t} \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| + \int^{\tau/t}_0 \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\|. \nonumber
\end{eqnarray}
Since by \eqref{eq:3.3.10}, for $\tau,t,y \in (0,1]$ and $\tau/t \le y$,
\begin{equation*}
\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| \le (1 + c_{\rho}) \left(\frac{\tau}{ty}\right)^{\rho}\, ,
\end{equation*}
we obtain for $0 < \tau \le t$ the estimate
\begin{equation}\label{eq:3.3.12}
\int^{1}_{\tau/t} \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| \le (1 + c_{\rho}) \int^{1}_0 \; dy \; e^{-y}y^{-{\rho}}\left(\frac{\tau}{t}\right)^{\rho} .
\end{equation}
Moreover, for ${\rho} <1 $ one obviously gets
\begin{equation}\label{eq:3.3.13}
\int^{\tau/t}_0 \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| \le \; 2\left(\frac{\tau}{t}\right)^{\rho}\, .
\end{equation}
Taking into account \eqref{eq:3.3.12} and \eqref{eq:3.3.13} we obtain from \eqref{eq:3.3.11}:
\begin{equation}\label{eq:3.3.14}
\int^{1}_0 \; dy \;e^{-y}\;\big\|e^{-ytS(\tau)} - e^{-ytH}\big\| \le \Big[(1 + c_{\rho}) \int^{1}_0 \; dy \; e^{-y}y^{-{\rho}} + 2\Big] \left(\frac{\tau}{t}\right)^{\rho} ,
\end{equation}
for $\tau,t \in (0,1]$, with $0 < \tau \le t$. Finally, by virtue of \eqref{eq:3.3.10} and \eqref{eq:3.3.14} we get for \eqref{eq:3.3.9}
\begin{eqnarray*}
\lefteqn{ \|(\mathds{1} +tS(\tau))^{-1} - (\mathds{1} +tH)^{-1}\| \le}\\
& & \hspace{1.0cm} \sum^\infty_{n=0}\;e^{-n}\Big\{ n\;{{(1 + c_{\rho})}} + (1 + c_{\rho}) \int^{1}_0 \; dy \; e^{-y}y^{-{\rho}} + 2\Big\} \left(\frac{\tau}{t}\right)^{\rho} .
\end{eqnarray*}
Now, setting
\begin{equation*}
M_{\rho} := \sum^\infty_{n=0}\;e^{-n}\Big\{ n\;{{(1 + c_{\rho})}} + (1 + c_{\rho}) \int^{1}_0 \; dy \; e^{-y}y^{-{\rho}} + 2\Big\} ,
\end{equation*}
we obtain the estimate \eqref{eq:3.3.1}.
\end{proof}

\begin{remark}\label{rem:3.3.2}
Lemma \ref{lem:3.3.1}(i) shows that for ${\rho} =1$ condition \eqref{eq:3.3.1} implies \eqref{eq:3.3.2}. The converse, however, remains \textit{unclear}, since Lemma \ref{lem:3.3.1}(ii) does not cover this case.
\end{remark}

The next assertion extends the result of Lemma \ref{lem:3.3.1}(i) to \textit{any} bounded interval $\cI \subset \dR^{+}_{0}$.

\begin{theorem}\label{th:3.3.2}
Let $F: \dR^{+}_{0} \longrightarrow \cL(\widetilde{\mathfrak{H}})$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let the self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for a self-adjoint operator $H \geq 0$ in $\widetilde{\mathfrak{H}}$. If for some ${\rho} \in (0,1]$ there is a constant $M_{\rho} > 0$ such that the estimate \eqref{eq:3.3.1} holds for $\tau,t \in (0,1]$ and $0 < \tau \le t$, then for any bounded interval $\cI \subset \dR^{+}_{0}$ there is a constant $c^\cI_{\rho} > 0$ such that the estimate
\begin{equation}\label{eq:3.3.15}
\sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| \le c^\cI_{\rho} \; \frac{1}{n^{\rho}}\, ,
\end{equation}
holds for $n \geq 1$.
\end{theorem}

\begin{proof}
Let $N \in \dN$ be such that $\cI \subseteq [0,N]$. Then the representation
\begin{equation*}
\begin{split}
F(t/Nn)^{Nn}& - e^{-t\, H}\\
& = \sum^{N-1}_{k=0}e^{-k \ t \, H/N}(F(t/Nn)^n - e^{-t\, H/N})F(t/Nn)^{(N-1-k)n} \, ,
\end{split}
\end{equation*}
for $n \geq 1$, yields the estimate
\begin{equation}\label{eq:3.3.151}
\|F(t/Nn)^{Nn} - e^{-t\, H}\| \le N\|F(t/Nn)^n - e^{-t\, H/N}\| .
\end{equation}
Let $t' := t/N \leq 1$ and $\tau' := t'/n \leq 1$. Then $t \leq N$ and $0 < \tau' \le t' \le 1$. By Lemma \ref{lem:3.3.1}(i) we obtain
\begin{equation*}
\|F(t/Nn)^{n} - e^{-tH/N}\| = \|F(\tau')^{t'/\tau'} - e^{-t'H}\|\le c_{\rho} \left(\frac{\tau'}{t'}\right)^{\rho} ,
\end{equation*}
and by \eqref{eq:3.3.151} the estimate
\begin{equation*}
\|F(t/Nn)^{Nn} - e^{-tH}\| \le c_{\rho} N \left(\frac{\tau'}{t'}\right)^{\rho}.
\end{equation*}
Let $n' := Nn \ge 1$. Then
\begin{equation*}
\|F(t/n')^{n'} - e^{-tH}\| \le c_{\rho} N^{1+{\rho}} \left(\frac{1}{n'}\right)^{\rho}, \quad t \in [0,N].
\end{equation*}
Setting $c^{[0,N]}_{{\rho}} := c_{\rho} N^{1+{\rho}}$, we prove the theorem for $\cI = [0,N]$. Since for any bounded interval $\cI$ one can always find an $N \in \dN$ such that $\cI \subseteq [0,N]$, this completes the proof.
\end{proof}

To extend Theorem \ref{th:3.3.2} to $\cI = \dR^{+}_{0}$ one needs conditions that allow the values of $\tau , t$ to be unbounded.
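Before doing so, we note a simple perturbation example showing that fractional rates in \eqref{eq:3.3.15} do occur. Let $H \ge 0$ and $B \ge 0$ be bounded self-adjoint operators with $\|H\| \le 1/2$ and $\|B\| \le 1/2$, let $\rho \in (0,1)$, and set $F(\tau) := \mathds{1} - \tau H - \tau^{1+\rho} B$ for $\tau \in [0,1]$ (extended, say, by $F(\tau) := F(1)$ for $\tau > 1$). Then $F(\tau)$ is a non-negative self-adjoint contraction with $F(0) = \mathds{1}$, and \eqref{eq:3.1.1} gives $S(\tau) = H + \tau^{\rho} B$ for $\tau \in (0,1]$. The resolvent identity yields
\begin{equation*}
(\mathds{1} + tS(\tau))^{-1} - (\mathds{1} + tH)^{-1} = - t\,\tau^{\rho}\,(\mathds{1} + tS(\tau))^{-1} B\, (\mathds{1} + tH)^{-1}\, ,
\end{equation*}
whence $\|(\mathds{1} + tS(\tau))^{-1} - (\mathds{1} + tH)^{-1}\| \le \|B\|\, t\,\tau^{\rho} = \|B\|\, t^{1+\rho} (\tau/t)^{\rho} \le \|B\| \left({\tau}/{t}\right)^{\rho}$ for $\tau, t \in (0,1]$ with $\tau \le t$. Thus \eqref{eq:3.3.1} holds with $M_{\rho} = \|B\|$, and Theorem \ref{th:3.3.2} gives the rate $O(n^{-\rho})$ in \eqref{eq:3.3.15}. Note also that the bound $\|B\|\, t\,\tau^{\rho}$ grows with $t$, which indicates why the passage to unbounded intervals requires additional assumptions.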
\begin{theorem}\label{th:3.3.3} Let $F: \dR^{+}_{0} \longrightarrow \cL({\omega}tH)$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \varepsilonmph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for self-adjoint operator $H \geq 0$ in ${\omega}tH$. If for some ${\rho} \in (0,1]$ there is a constant $M_{\rho} > 0$ such that the estimate \varepsilonqref{eq:3.3.1} holds for $0 < \gt \le t < \infty$, then there exists a constant $c^{\dR^+}_{\rho} > 0$ such that for $\tau =t/n$ the estimate \begin{equation}\label{eq:3.3.16} \sup_{t\in \dR^{+}_{0}}\|F(t/n)^n - e^{-tH}\| \le c^{\dR^+}_{\rho} \varphirac{1}{n^{\rho}} , \varepsilone holds for $n \ge 1$. \varepsilont \begin{equation}gin{proof} The arguments ensuring that \varepsilonqref{eq:3.3.3} yields estimate \varepsilonqref{eq:3.3.8} go through verbatim if we assume \varepsilonqref{eq:3.3.1} for $0 < \gt \le t < \infty$. Then setting $\gt := t/n$, $n \in \dN$ we deduce from \varepsilonqref{eq:3.3.8} \begin{equation}d \|F(t/n)^n - e^{-tH}\| \le c^{\dR^+}_{\rho} \, \varphirac{1}{n^{\rho}}\, , \quad n \ge 1, \varepsiloned for $t \in \dR^{+}_{0}$, where $c^{\dR^+}_{\rho} := 1 + c \, M_{\rho} \, $. So, this proves (\ref{eq:3.3.16}). \varepsilonnd{proof} \begin{equation}gin{remark}\labelbel{rem:3.3.21} We \textit{Note} that in Theorem \ref{th:3.2.6} we established the self-adjoint operator-norm convergent \textit{extension} of the Chernoff product formula for any bounded interval $\cI \subset \dR^{+}_{0}$ under $t$-\textit{dependent} condition (\ref{eq:3.2.19}), which is necessary and sufficient.\\ We \textit{Note} that Theorem \ref{th:3.3.2} and Theorem \ref{th:3.3.3} prove self-adjoint operator-norm Chernoff product formula in $\dR^{+}_{0}$ with \textit{estimate} of the \textit{rate} of convergence. 
They are also based on $t$-\textit{dependent} {fractional power} condition (\ref{eq:3.3.1}), which is necessary and sufficient for $\rho \in (0,1)$. \varepsilonnd{remark} Our next \textit{Note} is that $t$-dependence in assumption (\ref{eq:3.3.1}) for ${\rho} =1$ can be relaxed. The assertion below extends to $\dR^{+}_{0}$ (with convergence rate) the established in Theorem \ref{th:3.2.3} operator-norm convergence of the self-adjoint Chernoff product formula for bounded interval $\cI \subset \dR^{+}$. It yields an operator-norm version of original Chernoff product formula, see Proposition \ref{prop:3.1.0}. \begin{theorem}\label{th:3.3.4} Let $F: \dR^{+}_{0} \longrightarrow \cL({\omega}tH)$ be a measurable family of non-negative self-adjoint contractions such that $F(0) = \mathds{1}$. Let self-adjoint family $\{S(\tau)\}_{\tau > 0}$ be defined by \varepsilonmph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for self-adjoint operator $H \geq 0$ in ${\omega}tH$. If there is a constant $M_1 > 0$ such that the estimate \begin{equation}\label{eq:3.3.17} \|(\mathds{1} +S(\gt))^{-1} - (\mathds{1} +H)^{-1}\| \le M_1\gt \, , \varepsilone holds for $\gt \in (0,1]$, then for any bounded interval $\cI \subset \dR^{+}_{0}$ there is a constant $c^\cI_1 > 0$ such that the estimate \begin{equation}\label{eq:3.3.18} \sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| \le c^\cI_1 \, \varphirac{1}{n}\, , \varepsilone holds for $n \ge 1$. \varepsilont \begin{equation}gin{proof} By virtue of (\ref{eq:3.2.71}) and \varepsilonqref{eq:3.3.17} we obtain the estimate \begin{equation}\label{eq:3.3.19} \begin{equation}gin{split} \|&(\mathds{1} +tS(\gt))^{-1} - (\mathds{1} +tH)^{-1}\|\\ &\le M_1 \gt \;t \ \|(\mathds{1} +S(\gt))(\mathds{1} +tS(\gt))^{-1}\|\|(\mathds{1}+ H) (\mathds{1} +tH)^{-1}\|\,. 
\varepsilonnd{split} \varepsilone Then \varepsilonqref{eq:3.3.19}, together with estimates $\|(\mathds{1} +S(\gt))(\mathds{1} +tS(\gt))^{-1}\| \le {1}/{t}$ and $\|(\mathds{1}+ H)((\mathds{1} +tH)^{-1}\| \le {1}/{t}$, for self-adjoint $S(\gt)$ and $H$, imply for $0 < \gt \le t \le 1$ and ${\rho} = 1$ the estimate \varepsilonqref{eq:3.3.1} and thus \varepsilonqref{eq:3.3.2}. Finally, applying Theorem \ref{th:3.3.2} we extend the proof to any bounded interval $\cI \subset \dR^{+}_{0}$. \varepsilonnd{proof} Now we extend Theorem \ref{th:3.3.4} for condition (\ref{eq:3.3.17}), to infinite interval $\cI = \dR^{+}_{0}$. To this end, similar to Theorem \ref{th:3.3.3}, one needs additional assumption valid on infinite $t$-intervals. \begin{theorem}\label{th:3.3.5} Let in addition to conditions of Theorem \varepsilonmph{\ref{th:3.3.4}} operator $H \ge \mu \mathds{1}$, $\mu > 0$ and for any $\varepsilon > 0$ there exists a ${\delta}_\varepsilon \in (0,1)$ such that \begin{equation}\label{eq:3.3.20} 0 \le F(\gt) \le (1-{\delta}_\varepsilon)\mathds{1}, \varepsilone is valid for $\gt \ge \varepsilon$. If there is a constant $M_1 > 0$ such that \varepsilonqref{eq:3.3.17} holds for $\gt \in (0,\varepsilon \leq 1)$, then there exists a constant $c^{\dR^+}_1 > 0$ such that \varepsilonqref{eq:3.3.18} is valid for infinite interval $\cI = \dR^{+}_{0}$. \varepsilont \begin{equation}gin{proof} Since \varepsilonqref{eq:3.3.17} implies the resolvent-norm convergence of $\{S(\tau)\}_{\tau > 0}$, when $\tau \rightarrow +0$, and since $H \ge \mu \mathds{1}$, there exists $0 < \mu_{\varepsilon} \le \mu $ such that $S(\gt) \ge \mu_{\varepsilon} \mathds{1}$ for $\gt \in (0,\varepsilon)$, where $\varepsilon$ is sufficiently \textit{small}. 
On the other hand, \varepsilonqref{eq:3.3.19} for self-adjoint $S(\gt)$ and $H$, yields \begin{equation}\label{eq:3.3.21-1} \begin{equation}gin{split} \|&(\mathds{1} +tS(\gt))^{-1} - (\mathds{1} +tH)^{-1}\|= \\ &\le M_1 \, \varphirac{\gt}{t} \, \|(\mathds{1} +S(\gt))(\mathds{1}/t + S(\gt))^{-1}\| \|(\mathds{1}+ H)(\mathds{1}/t +H)^{-1}\| \, , \varepsilonnd{split} \varepsilone for $t > 0$. Since $S(\gt) \ge \mu_{\varepsilon} \mathds{1}$, $\gt \in (0,\varepsilon)$, and $H \ge \mu \mathds{1}$, we obtain estimates \begin{equation}d \begin{equation}gin{split} &\|(\mathds{1} +S(\gt))(\mathds{1}/t +S(\gt))^{-1}\| \le \varphirac{1+\mu_{\varepsilon}}{\mu_{\varepsilon}}\, ,\\ &\|(\mathds{1}+ H)(\mathds{1}/t +H)^{-1}\| \le \varphirac{1+\mu}{\mu} \, . \varepsilonnd{split} \varepsiloned By (\ref{eq:3.3.21-1}) these estimates give \begin{equation}\label{eq:3.3.22} \|(\mathds{1} +tS(\gt))^{-1} - (\mathds{1} +tH)^{-1}\| \le M^{\dR^+}_1 \varphirac{\gt}{t}\, , \varepsilone for $\gt \in (0,\varepsilon)$ and $t > 0$. Here $M^{\dR^+}_1 := M_1 \, (1+\mu_{\varepsilon})(1+\mu)/{\mu_{\varepsilon}}{\mu}$. Note that if $\tau/t \, \leq 1$, then by (\ref{eq:3.3.5}) \begin{equation}d \big\|F(\gt)^{t/\gt} - e^{-tS(\gt)}\big\| \le \varphirac{\gt}{t} \, , \quad 0 < \gt \le t < \infty \, . \varepsiloned Therefore, \varepsilonqref{eq:3.3.3}, \varepsilonqref{eq:3.3.6} and (\ref{eq:3.3.22}), which are valid for $\gt \in (0,\varepsilon \leq 1)$, allow to use the result (\ref{eq:3.3.18}) of Theorem \ref{th:3.3.4} for the case $0 < \gt \le t < \infty$, and $\tau = t/n$, $n \ge 1$. This yields \begin{equation}\label{eq:3.3.23} \| F(t/n)^n - e^{-tH}\| \le \widehat{c}^{\ \dR^+}_1 \varphirac{1}{n} \, , \varepsilone for bounded interval: $t \in [0,\varepsilon n )$, and for $\widehat{c}^{\ \dR^+}_1 := 1 + c \, M^{\dR^+}_1$. Now let $t \geq \varepsilon n$. 
Then by assumption \varepsilonqref{eq:3.3.20} we have \begin{equation}\label{eq:3.3.24} \|F(t/n)^n\| \le (1-{\delta}_\varepsilon)^{n} = e^{n\, \ln(1-{\delta}_\varepsilon)}, \quad t \geq n\varepsilon \,. \varepsilone Note that $H \ge \mu I$ implies $\|e^{-tH}\| \le e^{-n\varepsilon\mu}$ for $t \geq n\varepsilon$. This together with (\ref{eq:3.3.23}) and (\ref{eq:3.3.24}) yield for a small $\varepsilon > 0$, cf. (\ref{eq:3.3.22}), the estimate \begin{equation}d \| F(t/n)^n - e^{-tH}\| \le \widehat{c}^{\ \dR^+}_1 \varphirac{1}{n} + e^{n\, \ln(1-{\delta}_\varepsilon)} + e^{-n\varepsilon\mu} \, , \varepsiloned valid for \textit{any} $t \geq 0$. Since $\widetilde{c}_1 := \sup_{n \ge 1} n(e^{n\, \ln(1-{\delta}_\varepsilon)} + e^{-n\varepsilon\mu}) < \infty$, there exists constant $c^{\dR^+}_1 := \widehat{c}^{\ \dR^+}_1 + \widetilde{c}_1$ such that \varepsilonqref{eq:3.3.18} is valid for $n \ge 1$ and infinite interval $\cI = \dR^{+}_{0}$ \varepsilonnd{proof} \section{{Nonself-adjoint operator-norm Chernoff product formula}}\label{sec:3.4} The results on the \textit{nonself-adjoint} Chernoff product formula in operator-norm topology are more restricted. The most of them concern the \textit{quasi-sectorial} contractions \cite{CZ01}. \begin{equation}gin{definition}\labelbel{def:2.1.2} A {contraction} $F$ on the Hilbert space $\mathfrak{H}$ is called \textit{quasi-sectorial} with semi-angle ${\bf a}lpha\in [0, \pi/2)$ with respect to the vertex at $z=1$, if its numerical range $W(F)\subseteq D_{{\bf a}lpha}$, where closed domain \begin{equation}gin{equation}\labelbel{eq:2.1.1} D_{{\bf a}lpha}:=\{z\in {\mathbb{C}}: |z|\leq \sin {\bf a}lpha\} \cup \{z\in {\mathbb{C}}: |{\bf a}rg (1-z)|\leq {\bf a}lpha \ {\rm{and}}\ |z-1|\leq \cos {\bf a}lpha \} . \varepsilonnd{equation} The limits: ${\bf a}lpha=0$ and ${\bf a}lpha = \pi/2-0$, correspond, respectively, to non-negative self-adjoint contractions and to general contractions. 
\varepsilonnd{definition} A characterisation of {quasi-sectorial} contraction \textit{semigroups} is due to \cite{Zag08}, \cite{ArZ10}. \begin{equation}gin{proposition}\labelbel{prop:6.2.2} $C_0$-semigroup $\{e^{- t \, H}\}_{t \geq 0}$ is, for $t>0$, a family of quasi-sectorial contractions with $W(e^{-t \, H}) \subseteq D_{{\bf a}lpha} \, $, if and only if generator $H$ is an $m$-sectorial operator with $W(H) \subset S_{{\bf a}lpha}$, the open sector with semi-angle ${\bf a}lpha\in[0, \pi/2)$ and vertex at $z=0 \,$. \varepsilonnd{proposition} Note that if operator $F$ is a quasi-sectorial contraction and $W(F)\subseteq D_{{\bf a}lpha}$, then $\mathds{1}- F$ is also $m$-sectorial operator with vertex $z=0$ and semi-angle ${\bf a}lpha$. Using the Riesz-Dunford functional calculus one obtains estimate \begin{equation}gin{equation}\labelbel{eq:2.1.14} \|F^n (\mathds{1}-F)\|\leq \varphirac{K}{n+1} \ , \ n\in{\mathbb{N}} \ . \varepsilonnd{equation} The estimate (\ref{eq:2.1.14}) allows to go beyond the Chernoff $\sqrt{n}$-Lemma (\ref{eq:0.5}) and to establish the $(1/\sqrt[3]{n})$-Theorem \cite{Zag17}. \begin{equation}gin{proposition}\labelbel{th:6.2.2} Let $F$ be a quasi-sectorial contraction on $\mathfrak{H}$ with numerical range $W(F)\subseteq D_{\bf a}lpha$ for ${\bf a}lpha\in [0, \pi/2)$. Then \begin{equation}gin{equation}\labelbel{eq:6.2.5} \left\|F^n - e^{n(F-\mathds{1})}\right\| \leq {M\over n^{1/3}} \ , \ \ n\in{\mathbb{N}}\, , \varepsilonnd{equation} where $M=2K+2$ and $K$ is defined by {\rm{(\ref{eq:2.1.14})}}. \varepsilonnd{proposition} Next we recall nonself-adjoint operator-norm extension of the Trotter-Neveu-Kato convergence theorem for \textit{quasi-sectorial} contraction semigroups \cite{CZ01} (Lemma 4.1): \begin{equation}gin{proposition}\labelbel{th:6.2.21} Let $\{S(\tau)\}_{\tau >0}$ be a family of $m$-sectorial operators with $W(S(\tau))\subseteq S_{\bf a}lpha$ for some ${\bf a}lpha \in [0, \pi/2)$ and for all $\tau>0$. 
Let $H$ be an $m$-sectorial operator with $W(H) \subset S_{\alpha}$. Then the following conditions are equivalent: \begin{eqnarray*} (a) & & \lim_{\tau\rightarrow +0} \left\|(\zeta \mathds{1} + S(\tau))^{-1} - (\zeta \mathds{1} + H)^{-1}\right\| = 0, \ \ \mbox{ for some } \zeta\in S_{\pi-\alpha};\\ (b) & & \lim_{\tau\rightarrow +0} \left\|e^{-t\, S(\tau)} - e^{-t\, H} \right\| = 0, \ \ \mbox{ for $t$ in a subset of \ } \dR^{+} \mbox{ having a limit point}. \end{eqnarray*} \end{proposition} Therefore, the estimate (\ref{eq:6.2.5}) together with Proposition \ref{th:6.2.21} and the inequality \begin{equation}\label{eq:6.2.51} \|F(t/n)^n - e^{-t\, H}\| \leq \|F(t/n)^n - e^{-t S(t/n)}\| + \|e^{-tS(t/n)} - e^{-tH}\| \, , \end{equation} yields the \textit{nonself-adjoint} operator-norm version of the Chernoff product formula for quasi-sectorial contractions (cf. Proposition \ref{prop:3.1.0} in \cite{CZ01}): \begin{proposition}\label{prop:2.1.12} Let $\{F(\tau)\}_{\tau\geq 0}$ be a family of uniformly quasi-sectorial contractions on a Hilbert space $\mathfrak{H}$, that is, $W(F(\tau)) \subseteq D_\alpha$ {\rm{(\ref{eq:2.1.1})}} for all $\tau > 0$. Let the family $\{S(\tau)\}_{\tau > 0}$ be defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})}. If $H$ is an $m$-sectorial operator with $W(H) \subset S_{\alpha}$, then \begin{equation}\label{eq:2.1.17} \lim_{n\rightarrow \infty} \left\|F(t/n)^n -e^{-t\, H}\right\| = 0 \, , \ \ \ \mbox{for} \ \ t> 0 \ , \end{equation} if and only if \begin{equation}\label{eq:2.1.171} \lim_{\tau\rightarrow +0} \left\|(\zeta \mathds{1} + S(\tau))^{-1} - (\zeta \mathds{1} + H)^{-1}\right\| = 0 \, , \ \ \ \mbox{ for some } \zeta\in S_{\pi-\alpha}\, .
\end{equation} \end{proposition} \begin{remark}\label{rem:3.3.3} Since the semigroups $\{e^{- z \, S(\tau)}\}_{z \in S_{\pi/2-\alpha}}$, $\tau > 0$, and $\{e^{- z \, H}\}_{z \in S_{\pi/2-\alpha}}$ are holomorphic in the sector $S_{\pi/2-\alpha}$, the proof of the Chernoff product formula (\ref{eq:2.1.17}) for nonself-adjoint quasi-sectorial contractions is based on the Riesz-Dunford functional calculus. In establishing the operator-norm convergence \textit{without} a rate, this calculus successfully (although \textit{not} completely, since $t > 0$) substitutes for the self-adjointness used in the proofs of Sections \ref{sec:3.2} and \ref{sec:3.3}. \end{remark} We use this calculus to prove the operator-norm Chernoff product formula for quasi-sectorial contractions \textit{with} an estimate of the rate of convergence. Note that this generalises the Chernoff product formulae proven in Theorem \ref{th:3.3.4} (for the self-adjoint case) and in Proposition \ref{prop:2.1.12} (without a rate of convergence). \begin{theorem}\label{th:6.2.22} Let $\{F(\tau)\}_{\tau\geq 0}$ be a family of uniformly quasi-sectorial contractions on a Hilbert space $\mathfrak{H}$ and let $\{S(\tau)\}_{\tau >0}$ be a family of $m$-sectorial operators defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} for an $m$-sectorial operator $H$ with $W(H)\subseteq S_\alpha$.
If there is $L > 0$ such that the estimate \begin{equation}\label{est-res} \left\|(\zeta \mathds{1} + S(\tau))^{-1} - (\zeta \mathds{1} + H)^{-1}\right\| \leq { L \ \tau \over {\rm{dist}}(\zeta,-S_\alpha)}\, , \end{equation} holds for $\zeta \in S_{\pi-\alpha}$, then for any bounded interval $\cI \subset \dR^{+}$ there is a constant $C^\cI > 0$ such that the estimate \begin{equation}\label{est-Ch} \sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| \le C^\cI \, \frac{1}{n^{1/3}} \ , \end{equation} holds for $n \ge 1$. \end{theorem} \begin{proof} We first estimate the last term in inequality (\ref{eq:6.2.51}). Since by (\ref{eq:3.1.1}) and the conditions on $\{F(\tau)\}_{\tau\geq 0}$ the operators $\{S(\tau)\}_{\tau>0}$ are $m$-sectorial with $W(S(\tau))\subseteq S_\alpha$, the Riesz-Dunford formula \begin{equation*} e^{-tS(\tau)} = {1\over 2\pi i} \int_\Gamma d\zeta \ {e^{t\zeta}\over\zeta \mathds{1} + S(\tau)} \, , \end{equation*} defines for $\tau>0$ a family of holomorphic semigroups $\tau \mapsto \{e^{-tS(\tau)}\}_{t\in S_{\pi/2-\alpha}}$. Here $\Gamma\subset S_{\pi-\alpha}$ is a positively-oriented closed (at infinity) contour in $\dC$ around $-S_\alpha$. The same is true for the $m$-sectorial operator $H$ since $W(H)\subseteq S_\alpha$: \begin{equation*} e^{-tH} = {1\over 2\pi i}\int_\Gamma d\zeta \ {e^{t\zeta}\over \zeta \mathds{1} + H}\, . \end{equation*} We define $\Gamma := \overline{\Gamma_\varepsilon} \cup\Gamma_\delta \cup \Gamma_\varepsilon$, where the arc $\Gamma_\delta = \{z\in \dC: \ |z|=\delta>0, \ |\arg z|\leq \pi-\alpha-\varepsilon\}$ (for $0<\varepsilon<\pi/2-\alpha$) and $\Gamma_\varepsilon,\overline{\Gamma_\varepsilon}$ are two conjugate radial rays with $\Gamma_\varepsilon = \{z\in \dC: \ \arg z = \pi-\alpha-\varepsilon,\ |z|\geq \delta\}$.
Then for $t>0$ one gets \begin{equation}\label{esa1} \| e^{-tS(\tau)} - e^{-tH}\| \leq {1\over 2\pi} \int_\Gamma |d\zeta||e^{t\zeta}| \left\|(\zeta \mathds{1}+S(\tau))^{-1} - (\zeta \mathds{1} +H)^{-1}\right\|. \end{equation} Since the operators $\{S(\tau)\}_{\tau>0}$ and $H$ are $m$-sectorial with numerical ranges in the open sector $S_\alpha$, condition (\ref{est-res}) gives \begin{eqnarray}\label{est1} &&\left\|(\zeta \mathds{1}+S(\tau))^{-1} - (\zeta \mathds{1}+H)^{-1}\right\| \leq { L \, \tau \over \delta \, \sin \varepsilon}\, , \ \ \ {\rm{for}} \ \zeta \in \Gamma_\delta \ , \\ &&\left\|(\zeta \mathds{1}+S(\tau))^{-1} - (\zeta \mathds{1}+H)^{-1}\right\| \leq { L \, \tau \over |\zeta|\, \sin \varepsilon}\, , \ \ {\rm{for}} \ \zeta \in \Gamma_\varepsilon \vee \overline{\Gamma_\varepsilon} \ . \label{est2} \end{eqnarray} Then for $t>0$ the estimates (\ref{esa1}), (\ref{est1}) and (\ref{est2}) yield \begin{equation}\label{esa2} \| e^{-tS(\tau)} - e^{-tH}\| \leq \frac{L \tau}{\sin \varepsilon} \, \left\{e^{t \delta } + \frac{e^{- t \delta \,\cos (\alpha + \varepsilon)}}{\pi \, t \, \delta \, \cos (\alpha + \varepsilon)}\right\}\, . \end{equation} Hence, inequality (\ref{esa2}) proves the existence of $N^\cI$ such that for $\tau = t/n$ \begin{equation}\label{esa3} \sup_{t\in\cI}\| e^{-tS(t/n)} - e^{-tH}\| \leq N^\cI \ \frac{1}{n} \, . \end{equation} To estimate the first term on the right-hand side of (\ref{eq:6.2.51}) we use Proposition \ref{th:6.2.2}. Then \begin{equation}\label{esa4} \|F(t/n)^n - e^{-t S(t/n)}\| \leq M \ {1 \over n^{1/3}} \ . \end{equation} The inequalities (\ref{eq:6.2.51}), (\ref{esa3}) and (\ref{esa4}) prove the estimate (\ref{est-Ch}).
\end{proof} \begin{remark}\label{rem:6.2.23} Note that the rate (\ref{est-Ch}) of the operator-norm convergence of the Chernoff product formula for quasi-sectorial contractions is slower than for the self-adjoint case (\ref{eq:3.3.18}). This rate is limited by the \textit{non-optimal} estimate of Proposition \ref{th:6.2.2}. \end{remark} \begin{corollary}\label{cor:6.2.24} Let $\{S(\tau)\}_{\tau >0}$ be a family of $m$-sectorial operators defined by \emph{(\ref{eq:3.1.1}) and (\ref{eq:3.1.2})} with $W(S(\tau))\subseteq S_\alpha$ for some $\alpha \in [0, \pi/2)$. Let $H$ be an $m$-sectorial operator with $W(H)\subseteq S_\alpha$. Then there exist $M' > 0$ and $\delta' > 0$ such that \begin{equation}\label{esa5} \| e^{-tS(\tau)} - e^{-tH}\| \leq \frac{M'}{t} \ e^{t \, \delta'} \, \left\|(\mathds{1}+S(\tau))^{-1} - (\mathds{1}+H)^{-1}\right\| , \end{equation} holds for positive $\tau$ and $t$. \end{corollary} For the extension of Theorem \ref{th:6.2.22} to $\dR^{+}_{0}$ the estimate (\ref{esa5}) suggests a weaker form of the operator-norm Trotter-Neveu-Kato theorem. Note that under a condition \textit{stronger} than (\ref{est-res}) (the $t$-dependent resolvent condition, see (\ref{eq:3.3.1}) for $\rho = 1$) we obtain a new version of Proposition \ref{th:6.2.21}. \begin{theorem}\label{th:6.2.25} Let $\{F(\tau)\}_{\tau\geq 0}$ be a family of uniformly quasi-sectorial contractions on a Hilbert space $\mathfrak{H}$. Let $\{S(\tau)\}_{\tau >0}$ be a family of $m$-sectorial operators defined by \emph{(\ref{eq:3.1.1}), (\ref{eq:3.1.2})} with $W(S(\tau))\subseteq S_\alpha$ for some $\alpha \in [0, \pi/2)$ and for an $m$-sectorial operator $H$ with $W(H)\subseteq S_\alpha$.
Then the estimate \begin{equation}\label{eq:3.2.161} \sup_{t\in \cI} \, \left\|(\zeta \mathds{1} + t S(\tau))^{-1} - (\zeta \mathds{1} + t H)^{-1}\right\| \leq { L^{\cI} \tau \over {\rm{dist}}(\zeta,-S_\alpha)}\, , \ \ \ \ \zeta \in S_{\pi-\alpha}\, , \end{equation} holds for any interval $\cI \subseteq \dR^{+}_{0}$ if and only if the condition \begin{equation}\label{eq:3.2.162} \sup_{t\in \cI} \, \|e^{-tS(\tau)} - e^{-tH}\| \leq K^{\cI} \tau \ , \end{equation} is valid for any interval $\cI \subseteq \dR^{+}_{0}$. \end{theorem} \begin{proof} Necessity of (\ref{eq:3.2.162}): As in the proof of Theorem \ref{th:6.2.22} we use the Riesz-Dunford functional calculus for holomorphic semigroups to obtain the estimate \begin{equation}\label{esa11} \| e^{-tS(\tau)} - e^{-tH}\| \leq {1\over 2\pi} \int_\Gamma |d z||e^{z}| \left\|(z \mathds{1}+t\,S(\tau))^{-1} - (z \mathds{1} +t\,H)^{-1}\right\| \, , \end{equation} for the same contour $\Gamma \subset S_{\pi-\alpha}$. After the change of variable $z = t \, \zeta$, the right-hand side of (\ref{esa11}) takes the same form as (\ref{esa2}), with $\delta$ replaced by $\delta/t$. This yields (\ref{eq:3.2.162}) for any bounded interval $\cI \subset \dR^{+}_{0}$. Sufficiency of (\ref{eq:3.2.162}): First, using the Laplace transform, we estimate \begin{equation}\label{esa21} \left\|(\zeta \mathds{1}+t\,S(\tau))^{-1} - (\zeta \mathds{1} +t\,H)^{-1}\right\| \leq \int^\infty_0 ds \ e^{-s \, \mathrm{Re}\,\zeta} \big\|e^{-s \, t \, S(\tau)} - e^{-s \, t \, H}\big\|\, , \end{equation} in the half-plane $\dC_{+} = S_{\pi/2}$. To this aim we let $\varepsilon \in (0,1)$ and $N_{\varepsilon} := - \ln(\varepsilon /2)$ such that for $\zeta \in S_{\pi/2}$ and $\tau > 0$, $t \ge 0$ \begin{equation*} \int^{\infty}_{N_{\varepsilon}} ds \ e^{- s \, \mathrm{Re}\,\zeta}\left\|e^{-s \, t \, S(\tau)} - e^{- s \, t \, H}\right\| \le {\varepsilon} \, .
\end{equation*} Therefore, one gets \begin{equation*} \left\|(\zeta \mathds{1}+t\,S(\tau))^{-1} - (\zeta \mathds{1} +t\,H)^{-1}\right\| \le \int^{N_{\varepsilon}}_{0} ds \ e^{-s \, \mathrm{Re}\,\zeta}\left\|e^{-s\,t \,S(\tau)} - e^{-s\,t \,H}\right\| + {\varepsilon} \ , \end{equation*} which by condition (\ref{eq:3.2.162}) for any interval $\cI \subseteq \dR^{+}_{0}$ and $\varepsilon \in (0,1)$ yields \begin{eqnarray}\label{esa31} &&\sup_{t\in\cI}\left\|(\zeta \mathds{1} +tS(\tau))^{-1} - (\zeta \mathds{1} +tH)^{-1}\right\| \le \\ && \frac{1}{\mathrm{Re}\,\zeta}\sup_{t\in\cI,\ s\in [0,N_{\varepsilon}]}\left\|e^{-s\,tS(\tau)} - e^{-s\,tH}\right\| + {\varepsilon} \leq \frac{1}{\mathrm{Re}\,\zeta} \ K^{\dR^{+}_{0}} \tau + {\varepsilon} \ . \nonumber \end{eqnarray} Since $\varepsilon$ may be arbitrarily small, we obtain the estimate (\ref{eq:3.2.161}) for any $\zeta \in S_{\pi/2}$. To extend $\zeta$ to the sector $S_{\pi-\alpha}$ we note that the semigroups involved in the estimate (\ref{esa21}) are holomorphic in the sector $S_{\pi/2-\alpha}$. Therefore, the Laplace transform is also valid for integration along the radial rays $\, s \, e^{i \varphi} \in S_{\pi/2-\alpha}\, $. Then the conditions for convergence of the Laplace integrals take the form \begin{equation}\label{esa41} - {\pi}/{2}< \varphi + \arg\zeta < {\pi}/{2} \ \ \ \ \wedge \ \ \ -(\pi/2-\alpha) < \varphi < (\pi/2-\alpha) \, , \end{equation} which yields $\zeta \in S_{\pi-\alpha}$ and makes $\mathrm{Re}\,(e^{i \varphi} \zeta)$ proportional to ${\rm{dist}}(\zeta,-S_\alpha)$.
\end{proof} \begin{corollary}\label{cor:6.2.26} The linear $\tau$-estimate {\rm{(\ref{eq:3.2.162})}} implies \begin{equation}\label{eq:3.2.163} \sup_{t\in \cI} \, \|e^{-tS(t/n)} - e^{-tH}\| \leq K^{\cI}_{1} \frac{1}{n} \ , \end{equation} for any $n \in \mathbb{N}$ and any bounded interval $\cI \subset \dR^{+}_{0}$. Then the same line of reasoning as in {\rm{Theorem \ref{th:6.2.22}}}, under the $t$-dependent resolvent condition {\rm{(\ref{eq:3.2.161})}}, yields the estimate \begin{equation}\label{est-Ch1} \sup_{t\in\cI}\|F(t/n)^n - e^{-tH}\| \le C^\cI_{1} \, \frac{1}{n^{1/3}} \ , \end{equation} for $n \ge 1$ and any bounded interval $\cI \subset \dR^{+}_{0}$. \end{corollary} Note that the extension (\ref{est-Ch1}) of the operator-norm Chernoff product formula for quasi-sectorial contractions to $\dR^{+}_{0}$ inherits the estimate of the rate of convergence established in Proposition \ref{th:6.2.2}.
\section{Comments: Trotter-Kato product formulae}\label{sec:3.5} The first application of the Chernoff product formula (Proposition \ref{prop:3.1.0}) was the proof of the strongly convergent Trotter product formula, see \cite{Che68}: \begin{equation}\label{Trot} \,\mbox{\rm s-}\hspace{-2pt} \lim_{n\to\infty} \big(e^{-tA/n}e^{-tB/n}\big)^n = e^{-tH} \, , \ \ \ t\geq 0. \end{equation} Here $A$ and $B$ are positive self-adjoint operators in a Hilbert space $\mathfrak{H}$ with domains ${\rm dom\,} A$ and ${\rm dom\,} B$ such that ${\rm dom\,} A \cap {\rm dom\,} B$ is a core of the self-adjoint operator $H$. Note that the operator family $\{F(t) := e^{-tA}e^{-tB}\}_{t\geq 0}$ is not self-adjoint.
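For finite-dimensional (matrix) generators, the Trotter product formula above can be checked numerically. The following sketch is purely illustrative and not part of the text: the helper \texttt{sym\_expm}, the random positive semi-definite matrices, and the parameter choices are our own assumptions. It verifies that the operator-norm error of the non-self-adjoint Trotter step decreases as $n$ grows.

```python
import numpy as np

def sym_expm(M):
    # matrix exponential of a symmetric matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(0)
d, t = 6, 1.0
G1, G2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
A, B = G1 @ G1.T, G2 @ G2.T          # positive semi-definite generators
H = A + B

errs = []
for n in (10, 100, 1000):
    # one Trotter step e^{-tA/n} e^{-tB/n}: a product of two self-adjoint
    # contractions, which is itself not self-adjoint
    F = sym_expm(-t * A / n) @ sym_expm(-t * B / n)
    errs.append(np.linalg.norm(np.linalg.matrix_power(F, n) - sym_expm(-t * H), 2))

# the error decays as n grows, consistent with the O(1/n) rate for matrices
assert errs[0] > errs[1] > errs[2]
```

The monotone decay observed here is only a finite-dimensional consistency check; the operator-theoretic content of the formula lies, of course, in the unbounded case.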
Later the operator-norm Chernoff product formula from Sections \ref{sec:3.2} and \ref{sec:3.3} was used to lift (\ref{Trot}) to the operator-norm topology, as well as to extend it from the \textit{exponential} Trotter product formula to the \textit{Trotter-Kato} product formulae for Kato functions $\mathcal{K}$, see \cite{NZ98}. Recall that a real-valued Borel measurable function $f: [0, \infty) \rightarrow [0,1]$ belongs to $\mathcal{K}$ if it satisfies \begin{equation}\label{eq:K-F} 0 \leq f(s) \leq 1, \quad f(0) = 1, \quad f'(+0) = -1 \, . \end{equation} Different conditions on local continuity at $t = +0$ and on global behaviour on $\dR^{+}$ select subclasses of the Kato functions, see Appendix C in \cite{Zag19}. If functions $f,g \in \mathcal{K}$ replace the exponentials in formula (\ref{Trot}), then it is called a {Trotter-Kato} product formula. To apply the full power of the self-adjoint Chernoff product formula (Sections \ref{sec:3.2} and \ref{sec:3.3}) we symmetrise and consider the \textit{self-adjoint} family $\{F(t) := g(tB)^{1/2}f(tA)g(tB)^{1/2}\}_{t\geq 0}$. Let the positive operators $A$ and $B$ be such that the operator $A+B =: H \geq \mu \mathds{1}$ is self-adjoint. If $F(t)$ is sufficiently smooth at $t = +0$ and satisfies (\ref{eq:3.3.20}) (see \cite{IT01}, (1.2)), then (\ref{eq:3.3.17}) holds for $\tau = t/n$ and Theorem \ref{th:3.3.5} proves the operator-norm convergent symmetrised {Trotter-Kato} product formula: \begin{equation}\label{T-K} \|\cdot\|-{\lim_{n\to\infty}} \big(g(tB/n)^{1/2}f(tA/n)g(tB/n)^{1/2}\big)^n = e^{-tH} \, , \end{equation} uniformly on $\dR^{+}_{0}$ with $O(1/n)$ as the rate of convergence, see \cite{IT01}, \cite{ITTZ01}. There it was also shown that this rate is \textit{optimal}.
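The same kind of numerical sanity check applies to the symmetrised Trotter-Kato approximants. The sketch below is again illustrative only: we pick the Kato function $f(x)=g(x)=(1+x)^{-1}$ (which satisfies $f(0)=1$, $f'(+0)=-1$, $0\le f\le 1$), and the helper \texttt{sym\_fun} and the random matrices are our own choices, not from the text.

```python
import numpy as np

def sym_fun(M, f):
    # apply a scalar function f to a symmetric matrix via eigendecomposition
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

rng = np.random.default_rng(1)
d, t = 5, 1.0
G1, G2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
A, B = G1 @ G1.T, G2 @ G2.T          # positive semi-definite generators
H = A + B

f = lambda x: 1.0 / (1.0 + x)        # a Kato function: f(0)=1, f'(+0)=-1

def F(tau):
    # symmetrised step g(tau B)^{1/2} f(tau A) g(tau B)^{1/2} with g = f;
    # this is a self-adjoint contraction for each tau >= 0
    root = sym_fun(tau * B, lambda x: f(x) ** 0.5)
    return root @ sym_fun(tau * A, f) @ root

errs = [np.linalg.norm(np.linalg.matrix_power(F(t / n), n)
                       - sym_fun(t * H, lambda x: np.exp(-x)), 2)
        for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]   # error decays with n
```

For these matrix generators the decay is consistent with the $O(1/n)$ rate quoted above, though the sketch of course proves nothing about the unbounded-operator case.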
To prove convergence of the \textit{nonself-adjoint} Trotter-Kato approximants, for example the simplest one, $\{(f(tA/n)g(tB/n))^n\}_{n \geq 1}$, note that for $n \in \dN$ and $t \geq 0$: \[ (f(tA/n)g(tB/n))^n = f(tA/n)g(tB/n)^{1/2}F(t/n)^{n-1}g(tB/n)^{1/2} \ . \] This representation yields \[ \begin{split} \|(f(tA/n)&g(tB/n))^n - e^{-tH}\| \le \|F(t/n)^{n-1} - e^{-tH}\| \\ &+ 2\|(\mathds{1}- g(tB/n))e^{-tH}\| + \|(\mathds{1}- f(tA/n))e^{-tH}\| \ . \end{split} \] Then by the estimates (\ref{eq:3.3.23}) and (\ref{eq:2.1.14}) we get for $n-1 \geq 1$ \begin{equation}\label{eq:3.4.1} \| F(t/n)^{n-1} - e^{-tH}\| \le ({\widehat{c}}^{\ \dR^+}_1 + K) \ \frac{1}{n} \ , \ \ t\geq 0. \end{equation} On the other hand, since $f,g \in {{\mathcal{K}}}$ and $H = A + B$: \begin{equation}\label{eq:3.4.2} \|(\mathds{1}- f(t A/n))e^{-t H}\|\leq C^A_C \ \gamma[f]\ \frac{1}{n} \quad \mbox{and} \quad \|(\mathds{1}- g(t B/n))e^{-t H}\|\leq C^B_C \ \gamma[g] \ \frac{1}{n} \ , \end{equation} where $\gamma[f] := \sup_{x > 0}(1 - f(x))/{x}$ and similarly for $g$. Inequalities \eqref{eq:3.4.1} and \eqref{eq:3.4.2} yield for some $\Gamma >0$ the estimate \[ \|(f(tA/n)g(tB/n))^n - e^{-tH}\| \le \Gamma \ \frac{1}{n} \ , \] which proves the rate $O(1/n)$ as $n \rightarrow \infty$.
\textit{This paper is dedicated to the memory of Hagen Neidhardt, who passed away on 23 March 2019. I am deeply grateful to Hagen for valuable discussions on the subjects of this and many others of my projects.}
\begin{thebibliography}{ITTZ01} \bibitem[ArZ10]{ArZ10} Yu. Arlinski\u{i} and V. Zagrebnov, \emph{Numerical range and quasi-sectorial contractions}, J. Math. Anal. Appl. \textbf{366} (2010), 33--43. \bibitem[But19]{But19} Ya. A.
Butko, \emph{The method of Chernoff approximation}, arXiv:submit/2697593 [math.FA], 21 May 2019, 1--22. \bibitem[CZ01]{CZ01} V.~Cachia and V.~A. Zagrebnov, \emph{Operator-norm approximation of semigroups by quasi-sectorial contractions}, J. Funct. Anal. \textbf{180} (2001), 176--194. \bibitem[Che68]{Che68} P. R. Chernoff, \emph{Note on product formulas for operator semigroups}, J. Funct. Anal. \textbf{2} (1968), 238--242. \bibitem[Che74]{Che74} P. R. Chernoff, \emph{Product formulas, nonlinear semigroups and addition of unbounded operators}, Memoirs Amer. Math. Soc. \textbf{140} (1974), 1--121. \bibitem[EN00]{EN00} K.-J. Engel and R.~Nagel, \emph{One-parameter Semigroups for Linear Evolution Equations}, Springer-Verlag, Berlin, 2000. \bibitem[IT01]{IT01} T.~Ichinose and H. Tamura, \emph{The norm convergence of the {T}rotter-{K}ato product formula with error bound}, Commun. Math. Phys. \textbf{217} (2001), 489--502. \bibitem[ITTZ01]{ITTZ01} T.~Ichinose, Hideo Tamura, Hiroshi~Tamura, and V.~A. Zagrebnov, \emph{Note on the paper ``The norm convergence of the {T}rotter-{K}ato product formula with error bound'' by {I}chinose and {T}amura}, Commun. Math. Phys. \textbf{221} (2001), 499--510. \bibitem[Kat80]{Kat80} T.~Kato, \emph{Perturbation Theory for Linear Operators}, Springer-Verlag, Berlin, 1980, Corrected Printing of the Second Edition. \bibitem[NZ98]{NZ98} H.~Neidhardt and V.~A. Zagrebnov, \emph{On error estimates for the {T}rotter-{K}ato product formula}, Lett. Math. Phys. \textbf{44} (1998), 169--186. \bibitem[NZ99a]{NZ99a} H.~Neidhardt and V.~A. Zagrebnov, \emph{{T}rotter-{K}ato product formula and operator-norm convergence}, Commun. Math. Phys. \textbf{205} (1999), 129--159. \bibitem[NZ99]{NZ99} H.~Neidhardt and V.~A. Zagrebnov, \emph{{T}rotter-{K}ato product formula and symmetrically normed ideals}, J. Funct. Anal. \textbf{167} (1999), 113--167.
\bibitem[Zag08]{Zag08} V. A. Zagrebnov, \emph{Quasi-sectorial contractions}, J. Funct. Anal. \textbf{254} (2008), 2503--2511. \bibitem[Zag17]{Zag17} V. A. Zagrebnov, \emph{Comments on the Chernoff $\sqrt{n}$-lemma}, Functional Analysis and Operator Theory for Quantum Physics (The Pavel Exner Anniversary Volume), European Mathematical Society, Z\"{u}rich, 2017, pp.~565--573. \bibitem[Zag19]{Zag19} V. A. Zagrebnov, \emph{Gibbs Semigroups}, Operator Theory Series: Advances and Applications, Vol. 273, Birkh\"{a}user--Springer, Basel, 2019. \end{thebibliography} \end{document}
\begin{document} \title{Random Order Set Cover is as Easy as Offline} \begin{abstract} We give a polynomial-time algorithm for \osetcov with a competitive ratio of $O(\log mn)$ when the elements are revealed in random order, essentially matching the best possible offline bound of $O(\log n)$ and circumventing the $\Omega(\log m \log n)$ lower bound known in adversarial order. We also extend the result to solving pure covering IPs when constraints arrive in random order. The algorithm is a multiplicative-weights-based round-and-solve approach we call \textsc{LearnOrCover}. We maintain a coarse fractional solution that is neither feasible nor monotone increasing, but can nevertheless be rounded online to achieve the claimed guarantee (in the random order model). This gives a new offline algorithm for \setcov that performs a single pass through the elements, which may be of independent interest. \end{abstract} \section{Introduction} In the \setcov problem we are given a set system $(U,\mathcal{S})$ (where $U$ is a ground set of size $n$ and $\mathcal{S}$ is a collection of subsets with $|\mathcal{S}| = m$), along with a cost function $c: \mathcal{S}\rightarrow \R^+$. The goal is to select a minimum cost subcollection $\mathcal{S}' \subseteq \mathcal{S}$ such that the union of the sets in $\mathcal{S}'$ is $U$. Many algorithms have been discovered for this problem that achieve an approximation ratio of $\ln n$ (see e.g. \cite{chvatal1979greedy,johnson1974approximation,lovasz1975ratio,williamson2011design}), and this is best possible unless $\P = \NP$ \cite{feige1998threshold,Dinur:2014:AAP:2591796.2591884}. In the \osetcov variant, we impose the additional restriction that the algorithm does not know $U$ initially, nor the contents of each $S \in \mathcal{S}$. Instead, an adversary reveals the elements of $U$ one-by-one in an arbitrary order. 
On the arrival of every element, it is revealed which sets $S \in \mathcal{S}$ contain the element, and the algorithm must immediately pick one such set to cover it. The goal is to minimize the total cost of the sets chosen by the algorithm. In their seminal work, \cite{alon2003online} show that despite the lack of foresight, it is possible to achieve a competitive ratio of $O(\log m \log n)$ for this version\footnote{Throughout this paper we consider the \emph{unknown-instance model} for online set cover (see Chapter 1 of \cite{korman2004use}). The result of \cite{alon2003online} was presented for the \emph{known-instance model}, but extends to the unknown setting as well. This was made explicit in subsequent work \cite{buchbinder2009online}.}. This result has since been shown to be tight unless $\NP \subseteq \BPP$ \cite{korman2004use}, and has also been generalized significantly (e.g. \cite{alon2006general,buchbinder2009online,gupta2014approximating,gupta2020online}). In this paper, we answer the question: \begin{quote} \emph{What is the best competitive ratio possible for \osetcov when the adversary is constrained to reveal the elements in uniformly random order (RO)?} \end{quote} We call this version \roosc. Note that the element set $U$ is still adversarially chosen and unknown, and only the arrival order is random. \subsection{Results} We show that with only this one additional assumption on the element arrival order, there is an efficient algorithm for \roosc with expected competitive ratio matching the best-possible offline approximation guarantee (at least in the regime where $m = \poly(n)$). \begin{theorem} \nameref{alg:gencost} is a polynomial-time randomized algorithm for \roosc achieving expected competitive ratio $O(\log (mn))$. \end{theorem} When run offline, our approach gives a new asymptotically optimal algorithm for \setcov for $m = \poly(n)$, which may be of independent interest.
Indeed, given an estimate for the optimal \emph{value} of the set cover, our algorithm makes a single pass over the elements (considered in random order), updating a fractional solution using a multiplicative-weights framework, and sampling sets as it goes. This simplicity, and the fact that it uses only $\tilde O(m)$ bits of memory, may make the algorithm useful for some low-space streaming applications. (Note that previous formulations of \streamsetcov \cite{saha2009maximum,demaine2014streaming,har2016towards,emek2016semi} only consider cases where \emph{sets} arrive in a stream.) We show next that a suitable generalization of the same algorithm achieves the same competitive ratio for the \emph{RO Covering Integer Program} problem (\roocip) (see \cref{sec:cips} for a formal description). \begin{theorem} \label{thm:intro_cip} \nameref{alg:cips} is a polynomial-time randomized algorithm for \roocip achieving competitive ratio $O(\log (mn))$. \end{theorem} We complement our main theorem with some lower bounds. For instance, we show that the algorithms of \cite{alon2003online,buchbinder2009online} have a performance of $\Theta(\log m \log n)$ even in RO, so a new algorithm is indeed needed. Moreover, we observe an $\Omega(\log n)$ lower bound on \emph{fractional} algorithms for \roosc. This means we cannot pursue a two-phase strategy of maintaining a good monotone fractional solution and then randomly rounding it (as was done in prior works) without losing $\Omega(\log^2 n)$. Interestingly, our algorithm \emph{does} maintain a (non-monotone) fractional solution while rounding it online, but does so in a way that avoids extra losses. We hope that our approach will be useful in other works for online problems in RO settings. (We also give other lower bounds for batched versions of the problem, and for the more general submodular cover problem.) 
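The single-pass structure just described can be sketched in a few lines. This is an illustrative skeleton, not the paper's exact pseudocode: the initialisation, the doubling update, and the sampling rule below are simplified choices of ours, and \texttt{beta} is an assumed guess for the optimal fractional cost.

```python
import random

def learn_or_cover_pass(order, sets, cost, beta, seed=0):
    """One pass over elements arriving in `order` (assumed random).

    sets: dict name -> frozenset of elements; cost: dict name -> float.
    beta: an assumed guess for the optimal fractional cost.
    """
    rng = random.Random(seed)
    names = list(sets)
    x = {S: beta / (cost[S] * len(names)) for S in names}  # coarse initial guess
    bought, covered = set(), set()
    for v in order:
        if v in covered:
            continue
        # round online: sample each set with probability ~ its fractional value
        for S in names:
            if rng.random() < min(1.0, x[S]):
                bought.add(S)
                covered |= sets[S]
        # multiplicative-weights update: boost sets containing v, then
        # renormalise so the fractional solution keeps cost beta
        for S in names:
            if v in sets[S]:
                x[S] *= 2.0
        z = sum(cost[S] * x[S] for S in names)
        x = {S: x[S] * beta / z for S in names}
        # backup: if v is still uncovered, buy a cheapest set containing it
        if v not in covered:
            S = min((T for T in names if v in sets[T]), key=lambda T: cost[T])
            bought.add(S)
            covered |= sets[S]
    return bought

sets = {"a": frozenset({1, 2}), "b": frozenset({2, 3}), "c": frozenset({1, 2, 3})}
cost = {"a": 1.0, "b": 1.0, "c": 1.0}
bought = learn_or_cover_pass([3, 1, 2], sets, cost, beta=1.0)
assert {1, 2, 3} <= set().union(*(sets[S] for S in bought))
```

Covering progress here comes from the sampled sets and the backup purchase, while the multiplicative update does the learning; the actual update rule and the constants used in the analysis differ.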
\subsection{Techniques and Overview} The core contribution of this work is demonstrating that one can exploit randomness in the arrival order to \textit{learn} about the underlying set system. What is more, this learning can be done fast enough (in terms of both sample and computational complexity) to build an $O(\log mn)$-competitive solution, even while committing to covering incoming elements immediately upon arrival. This seems like an idea with applications to other sequential decision-making problems, particularly in the RO setting. We start in \cref{sec:exptime} with an exponential time algorithm for the unit-cost setting. This algorithm maintains a portfolio of all exponentially-many collections of cost $c(\opt)$ that are feasible for the elements observed so far. When an uncovered element arrives, the algorithm takes a random collection from the portfolio, and picks a random set from it covering the element. It then prunes the portfolio to drop collections that did not cover the incoming element. We show that either the expected marginal coverage of the chosen set is large, or the expected number of solutions removed from the portfolio is large. I.e., we either make progress \textit{covering}, or \textit{learning}. We show that within $c(\opt) \log (mn)$ rounds, the portfolio only contains the true optimal feasible solutions, or all unseen elements are covered. (One insight that comes from our result is that a good measure of set quality is the number of times an unseen \textit{and uncovered} element appears in it.) We then give a polynomial-time algorithm in \cref{sec:gen_cost}: while it is quite different from the exponential scheme above, it is also based on this insight, and the intuition that our algorithm should make progress via learning or covering. 
Specifically, we maintain a distribution $\{ x_S \}_{S \in \mathcal{S}}$ on sets: for each arriving uncovered element, we first sample from this distribution $x$, then update $x$ via a multiplicative weights rule. If the element remains uncovered, we buy the cheapest set covering it. For the analysis, we introduce a potential function which simultaneously measures the convergence of this distribution to the optimal fractional solution, and the progress towards covering the universe. Crucially, this progress is measured in expectation over the random order, thereby circumventing lower bounds for the adversarial-order setting \cite{korman2004use}. In \cref{sec:cips} we extend our method to the more general \roocip problem: the intuitions and general proof outlines are similar, but we need to extend the algorithm to handle elements being partially covered. Finally, we present lower bounds in \cref{sec:lbs}. Our information-theoretic lower bounds for \roosc follow from elementary combinatorial arguments. We also show a lower bound for a batched version of \roosc, following the hardness proof of \cite{korman2004use}; we use it in turn to derive lower bounds for \rosubc. \subsection{Other Related Work} There has been much recent interest in algorithms for random order online problems. Starting from the secretary problem, the RO model has been extended to include metric facility location \cite{meyerson2001online}, network design \cite{meyerson2001designing}, and solving packing LPs \cite{AWY14,MR12,kesselheim2014primal,GM16,AgrawalD15,albers2021improved}, load-balancing \cite{GM16,Molinaro-SODA17} and scheduling \cite{albers2020scheduling,AJ21}. See \cite{gupta2020random} for a recent survey. Our work is closely related to \cite{grandoni2008set}, who give an $O(\log mn)$-competitive algorithm in a related stochastic model, where the elements are chosen i.i.d.\ from a \emph{known distribution} over the elements.
\cite{dehghani2018greedy} generalize the result of \cite{grandoni2008set} to the \textit{prophet version}, in which elements are drawn from known but distinct distributions. Our work is a substantial strengthening, since the RO model captures the \emph{unknown} i.i.d.\ setting as a special case. Moreover, the learning is an important and interesting part of our technical contribution. On the flip side, the algorithm of \cite{grandoni2008set} satisfies \textit{universality} (see their work for a definition), which our algorithm does not. We point out that \cite{grandoni2008set} claim (in a note, without proof) that it is not possible to circumvent the $\Omega (\log m \log n)$ lower bound of \cite{korman2004use} even in RO; our results show this claim is incorrect. We discuss this in \cref{sec:ggl}. Regret bounds for online learning are also proven via the KL divergence (see, e.g., \cite{arora2012multiplicative}). However, no reductions are known from our problem to the classic MW setting, and it is unclear how random order would play a role in the analysis: this is necessary to bypass the adversarial order lower bounds. Finally, \cite{korman2004use} gives an algorithm with competitive ratio $k \log (m/k)$---hence total cost $k^2 \log (m/k)$---for unweighted \osetcov where $k = |\opt|$. The algorithm is the same as our exponential time algorithm in \cref{sec:exptime} for the special case of $k=1$; however, we outperform it for non-constant $k$, and generalize it to get polynomial-time algorithms as well. \subsection{Preliminaries} All logarithms in this paper are taken to be base $e$. In the following definitions, let $x,y \in \R^n_+$ be vectors. The standard dot product between $x$ and $y$ is denoted $\langle x, y\rangle = \sum_{i=1}^n x_i y_i$. We use a weighted generalization of KL divergence. 
Given a weight function $c$, define \begin{align}\wKL{x}{y} : = \sum_{i=1}^n c_i \left[x_i \log \left(\frac{x_i}{y_i}\right) -x_i + y_i\right].\label{eq:prelim_kl}\end{align} \section{Warmup: An Exponential Time Algorithm for Unit Costs} \label{sec:exptime} We begin with an exponential-time algorithm which we call \textsc{SimpleLearnOrCover} to demonstrate some core ideas of our result. In what follows we assume that we know a number $k \in [\copt, 2\cdot \copt]$. This assumption is easily removed by a standard guess and double procedure, at the cost of an additional factor of $2$. The algorithm is as follows. Maintain a list $\mathfrak{T} \subseteq \binom{\mathcal{S}}{k}$ of candidate $k$-tuples of sets. When an uncovered element $v$ arrives, choose a $k$-tuple $\mathcal{T} = (T_1, \ldots, T_k)$ uniformly at random from $\mathfrak{T}$, and buy a uniformly random $T$ from $\mathcal{T}$. Also buy an arbitrary set containing $v$. Finally, discard from $\mathfrak{T}$ any $k$-tuples that do not cover $v$. See \cref{sec:appendix2} for pseudocode. \begin{theorem} \label{thm:unit_cost_exp} \nameref{alg:expunit} is a randomized algorithm for unit-cost \roosc with expected cost $O(k \cdot \log (mn))$. \end{theorem} \begin{proof}[Proof of \cref{thm:unit_cost_exp}] Consider any time step $\thist$ in which a random element arrives that is \textit{uncovered} on arrival. Let $U^{\thist}$ be the set of elements that remain uncovered at the end of time step $\thist$. Before the algorithm takes action, there are two cases: \textbf{Case 1:} At least half the tuples in $\mathfrak{T}$ cover at least $|U^\lastt|/2$ of the $|U^\lastt|$ as-of-yet-uncovered elements. In this case we say the algorithm performs a \textbf{Cover Step} in round $t$. \textbf{Case 2:} At least half the tuples in $\mathfrak{T}$ cover strictly less than $|U^\lastt|/2$ of the uncovered elements; we say the algorithm performs a \textbf{Learning Step} in round $t$. 
Define $\mathfrak{N}(c)$ to be the number of uncovered elements remaining after $c$ Cover Steps, and $\mathfrak{M}(\ell)$ to be the value of $|\mathfrak{T}|$ after $\ell$ Learning Steps. After $10 k (\log m + \log n)$ rounds with uncovered arrivals, either at least $10 k \log n$ of them were Cover Steps or at least $10 k \log m$ of them were Learning Steps, so either the number of uncovered elements remaining is at most $\mathfrak{N}(10 k \log n)$ or the number of surviving tuples is at most $\mathfrak{M}(10 k \log m)$. We argue that both $\expectation\expectarg*{\mathfrak{N}(10 k \log n)}$ and $\expectation\expectarg*{\mathfrak{M}(10 k \log m)}$ are at most $1$. \begin{claim} $\expectation\expectarg*{\mathfrak{N}(c+1) | \mathfrak{N}(c) = N} \leq \left(1-\frac{1}{4k}\right) N$. \end{claim} \begin{proof} If round $t$ is a Cover Step, then at least half of the $\mathcal{T} \in \mathfrak{T}$ cover at least half of $U^{\lastt}$, so $\expectation\expectarg*{\left\lvert(\bigcup \mathcal{T}) \cap U^\lastt\right\rvert} \geq \frac{|U^\lastt|}{4}$. Since $T$ is drawn uniformly at random from the $k$ sets in the uniformly random $\mathcal{T}$, we have $\expectation\expectarg*{\left\lvert T \cap U^\lastt\right\rvert} \geq \frac{|U^\lastt|}{4k}$; that is, the purchased set covers an expected $\frac{1}{4k}$-fraction of the remaining uncovered elements, which proves the claim. \end{proof} \begin{claim} $\expectation\expectarg*{\mathfrak{M}(\ell+1) \mid \mathfrak{M}(\ell) = M} \leq \frac{3}{4} M$. \end{claim} \begin{proof} Upon the arrival of $v$ in a Learning Step, at least half the tuples have probability at least $\nicefrac{1}{2}$ of being removed from $\mathfrak{T}$ (since $v$ is uniform among the uncovered elements, and these tuples cover fewer than half of them), so the expected number of tuples removed from $\mathfrak{T}$ is at least $M/4$. \end{proof} To conclude, by induction \begin{align*} \textstyle \expectation\expectarg*{\mathfrak{N}(10 k \log n)} \leq n\,\left(1-\frac{1}{4k}\right)^{10 k\log n} \leq 1 \quad \text{and} \quad \expectation\expectarg*{\mathfrak{M}(10 k \log m)} \leq m^k \, \left(\frac{3}{4}\right)^{10 k\log m} \leq 1.
\end{align*} Note that if there are $N$ remaining uncovered elements and $M$ remaining tuples to choose from, the algorithm will pay at most $\min(k\cdot M,N)$ before all elements are covered. Thus the total expected cost of the algorithm is bounded by \[10k (\log m + \log n) + \expectation\expectarg*{\min(k\cdot \mathfrak{M}(10 k \log m), \mathfrak{N}(10 k \log n))} = O(k \log mn). \qedhere\] \end{proof} Apart from the obvious challenge of modifying \textsc{SimpleLearnOrCover} to run in polynomial time, it is unclear how to generalize it to handle non-unit costs. Still, the intuition from this algorithm will be useful for the next sections. \section{A Polynomial-Time Algorithm for General Costs} \label{sec:gen_cost} We build on our intuition from \cref{sec:exptime} that we can either make progress in covering or in learning about the optimal solution. To get an efficient algorithm, we directly maintain a probability distribution over sets, which we update via a multiplicative weights rule. We use a potential function that simultaneously measures progress towards learning the optimal solution, and towards covering the unseen elements. Before we present the formal details and the pseudocode, here are the main pieces of the algorithm. \begin{enumerate} \item We maintain a fractional vector $x$ which is a (\emph{not necessarily feasible}) guess for the LP solution of cost $\beta$ to the set cover instance. \item Every round $\thist$ in which an uncovered element $v^\thist$ arrives, we \begin{enumerate} \item sample every set $S$ with probability proportional to its current LP value $x_S$, \item increase the value $x_S$ of all sets $S \ni v^\thist$ multiplicatively and renormalize, \item buy a cheapest set to cover $v^\thist$ if it remains uncovered. 
\end{enumerate} \end{enumerate} Formally, by a guess-and-double approach, we assume we know a bound $\beta$ such that $\lpopt \leq \beta \leq 2 \cdot \lpopt$; here $\lpopt$ is the cost of the optimal LP solution to the final unknown instance. Define \begin{gather} \kappa_v := \min \{c_S \mid S \ni v\} \end{gather} as the cost of the cheapest set covering $v$. \begin{algorithm}[H] \caption{\textsc{LearnOrCover}} \label{alg:gencost} \begin{algorithmic}[1] \State Let $m' \leftarrow |\{S : c(S) \leq \beta \}|$. \State Initialize $x_S^{0} \leftarrow \frac{\beta}{c_S \cdot m'} \cdot \mathbbm{1}\{c(S) \leq \beta \}$, and $\mathcal{C}^0 \leftarrow \emptyset$. \label{line:loc_sup} \For{$\thist=1,2\ldots, n$} \State{$v^\thist \leftarrow$ $\thist^{th}$ element in the random order, and let $\mathcal{R}^\thist \leftarrow \emptyset$.} \If {$v^\thist$ not already covered} \label{line:gencost_uncov} \State \textbf{for each}\xspace set $S$, with probability $\min(\kappa_{v^\thist} \cdot x^{\lastt}_S/\beta,1)$ add $\mathcal{R}^\thist \leftarrow \mathcal{R}^\thist \cup \{S\}$. \State Update $\mathcal{C}^\thist \leftarrow \mathcal{C}^{\lastt} \cup \mathcal{R}^\thist$. \If {$\sum_{S \ni v^\thist} x^{\lastt}_S < 1$} \State For every set $S$, update $x^{\thist}_S \leftarrow x^{\lastt}_S \cdot \exp\left\{\mathbbm{1}\{S \ni v^\thist \} \cdot \kappa_{v^{\thist}} / c_S\right\}$. \State Let $Z^{\thist} = \langle c, x^\thist \rangle / \beta$ and normalize $x^{\thist} \leftarrow x^{\thist} / Z^{\thist}$. \label{line:loc_cost_invariant} \Else \State $x^\thist \leftarrow x^{\lastt}$. \label{line:gencost_noupdate} \EndIf \State Let $S_{v^\thist}$ be the cheapest set containing $v^\thist$. Add $\mathcal{C}^\thist \leftarrow \mathcal{C}^\thist \cup \{S_{v^\thist}\}$. 
\label{line:gencost_backup} \EndIf \EndFor \end{algorithmic} \end{algorithm} The algorithm is somewhat simpler for unit costs: the $x_S$ values are multiplied by either $1$ or $e$, and moreover we can sample a single set for $\mathcal{R}^\thist$ (see \cref{sec:appendix2} for pseudocode). Because of the non-uniform set costs, we have to carefully calibrate both the learning and sampling rates. Our algorithm dynamically scales the learning and sampling rates in round $\thist$ depending on $\kappa_{v^\thist}$, the cost of the cheapest set covering $v^\thist$. Intuitively, this ensures that all three of (a) the change in potential, (b) the cost of the sampling, and (c) the cost of the backup set, are at the same scale. Before we begin, observe that \cref{line:loc_cost_invariant} ensures the following invariant: \begin{invariant} \label{inv:loc_cost} For all time steps $\thist$, it holds that $\langle c, x^\thist \rangle = \beta$. \end{invariant} \begin{theorem}[Main Theorem] \label{thm:weighted_cost_poly} \nameref{alg:gencost} is a polynomial-time randomized algorithm for \roosc with expected cost $O(\beta \cdot \log (mn))$. \end{theorem} Let us start by defining notation. Let $x^*$ be the optimal LP solution to the final, unknown set cover instance. Next let $X_v^{\thist} := \sum_{S \ni v} x_S^{\thist}$, the fractional coverage provided to $v$ by $x^{\thist}$. Let $U^{\thist}$ be the elements remaining uncovered at the end of round $t$ (where $U^{0}=U$ is the entire ground set of the set system). Define the quantity \begin{align} \label{eq:gencost_rhodef} \rho^{\thist} := \sum_{u \in U^{\thist}} \kappa_u. \end{align} With this, we are ready to define our potential function, which is the central player in our analysis.
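Before turning to the analysis, the per-round update of \cref{alg:gencost} can be sketched compactly in Python. This is our own illustrative rendering, not the authors' implementation; it covers steps (a) and (b) of the round (sampling and the multiplicative update) and omits the backup-set purchase of \cref{line:gencost_backup}.

```python
import math, random

def learn_or_cover_round(x, cost, sets_containing_v, kappa_v, beta):
    """One round of LearnOrCover for an uncovered arrival v (sketch).

    x: dict set_id -> fractional value, maintained so that <cost, x> = beta;
    sets_containing_v: ids of sets containing v; kappa_v: cost of the
    cheapest set containing v; beta: guess for the LP optimum.
    Returns (sampled set ids, updated x).  Backup-set purchase omitted.
    """
    # (a) sample each set S independently with prob. min(kappa_v * x_S / beta, 1)
    sampled = [S for S in x if random.random() < min(kappa_v * x[S] / beta, 1.0)]
    # (b) multiplicative update + renormalization, only if v is fractionally uncovered
    if sum(x[S] for S in sets_containing_v) < 1:
        for S in sets_containing_v:
            x[S] *= math.exp(kappa_v / cost[S])
        Z = sum(cost[S] * x[S] for S in x) / beta  # restores <cost, x> = beta
        x = {S: xS / Z for S, xS in x.items()}
    return sampled, x

# toy usage: 3 unit-cost sets, beta = 1, the arriving element lies in set 0 only
cost = {0: 1.0, 1: 1.0, 2: 1.0}
x = {S: 1.0 / 3 for S in cost}  # <cost, x> = beta = 1
sampled, x = learn_or_cover_round(x, cost, sets_containing_v=[0],
                                  kappa_v=1.0, beta=1.0)
```

Note how the renormalization step is exactly what enforces \cref{inv:loc_cost} after every update.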
Recalling that $\beta$ is our guess for the value of $\lpopt$, and the definition of KL divergence \eqref{eq:prelim_kl}, define \begin{align} \label{eq:gencost_phi} \boxed{ \Phi(\thist) := C_1 \cdot \wKL{x^*}{x^{\thist}} + C_2 \cdot \beta \cdot \log \left(\frac{\rho^{\thist}}{\beta}\right)} \end{align} where $C_1$ and $C_2$ are constants to be specified later. \begin{lemma}[Initial Potential] \label{fact:gencost_phi0} The initial potential is bounded as $\Phi(0) = O(\beta\cdot \log (mn))$. \end{lemma} We write the proof in general language in order to reuse it for covering IPs in the following section. Recall that sets correspond to columns in the canonical formulation of \setcov as an integer program. \begin{proof} We require the following fact which we prove in \cref{sec:appendix}. \begin{restatable}{fact}{lpsupport} \label{fact:lpsupport} Every pure covering LP of the form $\min_{x \geq 0}\{\langle c, x \rangle : Ax \geq 1\}$ for $c \geq 0$ and $a_{ij} \in [0, 1]$ with optimal value less than $\beta$ has an optimal solution $x^*$ which is supported only on columns $j$ such that $c_j \leq \beta$. \end{restatable} We assume WLOG that $x^*$ is such a solution, and we first bound the KL term of $\Phi(0)$. Since $\text{support}(x^*) \subseteq \text{support}(x^0)$ by \cref{fact:lpsupport}, we have \[\wKL{x^*}{x^{0}} = \sum_{j} c_j \cdot x^*_j \log \left(x^*_j \frac{c_j \cdot m'}{\beta}\right) + \sum_{j} c_j (x^0_j - x^*_j)\leq \beta (\log (m)+1),\] where we used that $\langle c, x^* \rangle \leq \beta$ (so that $x^*_j c_j / \beta \leq 1$ for every $j$) and that $m'\leq m$ is the number of columns with cost at most $\beta$. For the second term, $\beta \log (\rho^{0}/\beta) = \beta \log (\sum_{i \in U^0} \kappa_i /\beta) \leq \beta \log (|U^{0}|) = \beta \log n$, since for all $i$, the cheapest set covering $i$ costs at most $\beta$, and therefore $\kappa_i / \beta \leq 1$. The claim follows so long as $C_1$ and $C_2$ are constants.
\end{proof} The rest of the proof relates the expected decrease of potential $\Phi$ to the algorithm's cost in each round. Define the event $\Upsilon^{\thist} := \{v^\thist \in U^{\lastt}\}$ that the element $v^{\thist}$ is \emph{uncovered} on arrival. Note \cref{line:gencost_uncov} ensures that if event $\Upsilon^{\thist}$ does not hold, the algorithm takes no action and the potential does not change. So we focus on the case that event $\Upsilon^{\thist}$ does occur. We first analyze the change in KL divergence. Recall that $X_v^{\thist} := \sum_{S \ni v} x_S^{\thist}$. \begin{lemma}[Change in KL] \label{lem:weighted_klchange} For rounds $\thist$ in which $\Upsilon^{\thist}$ holds, the expected change in the weighted KL divergence is \[ \expectation\expectargover{v^\thist, \mathcal{R}^{\thist}}*{ \wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq \expectation\expectargover{v \sim U^{\lastt}}{(e-1) \cdot \kappa_v \min(X_v^{\lastt},1) - \kappa_v}. \] \end{lemma} We emphasize that the expected change in relative entropy in the statement above depends only on the randomness of the arriving uncovered element $v^\thist$, not on the randomly chosen sets $\mathcal{R}^{\thist}$. \begin{proof} We break the proof into cases. If $X_{v^{\thist}}^{\lastt} \geq 1$, in \cref{line:gencost_noupdate} we set the vector $x^{\thist} = x^{\lastt}$, so the change in KL divergence is 0. This means that \begin{align} &\expectation\expectargover{v^\thist, \mathcal{R}^{\thist}}*{ \wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, X_{v^{\thist}}^{\lastt} \geq 1} \notag \\ &\leq \expectation\expectargover{v \sim U^{\lastt}}{(e-1) \cdot \kappa_v \min\left(X_v^{\lastt},1\right) - \kappa_v | X_v^{\lastt} \geq 1} \label{eq:wkl_xvge1}\end{align} trivially, since the right-hand side equals $(e-2) \cdot \kappa_v \geq 0$ when $X_v^{\lastt} \geq 1$. Henceforth we focus on the case $X_{v^{\thist}}^{\lastt} < 1$.
Recall that the expected change in relative entropy depends only on the arriving uncovered element $v^{\thist}$. Expanding definitions (the linear terms of \eqref{eq:prelim_kl} cancel, since $\langle c, x^{\thist} \rangle = \langle c, x^{\lastt} \rangle = \beta$ by \cref{inv:loc_cost}), \begin{align} & \expectation\expectargover{v^{\thist}, \mathcal{R}^{\thist}}*{\wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, X_{v^{\thist}}^{\lastt} < 1} \notag \\ &= \expectation\expectargover{v \sim U^{\lastt}}*{\sum_{S} c_S \cdot x^*_S \cdot \log \frac{x^{\lastt}_S}{x_S^{\thist}} | X_v^{\lastt} < 1} \notag \\ &= \expectation\expectargover{v \sim U^{\lastt}}*{\langle c, x^*\rangle \cdot \log Z^{\thist} - \sum_{S \ni v} c_S \cdot x^*_S \cdot \log e^{\kappa_v/c_S} | X_v^{\lastt} < 1} \notag \\ &\leq \expectation\expectargover{v \sim U^{\lastt}}*{ \beta \cdot \log\left(\sum_{S\ni v} \frac{c_S}{\beta} \cdot x_S^{\lastt} \cdot e^{\kappa_v/c_S} + \sum_{S\not \ni v} \frac{c_S}{\beta} \cdot x_S^{\lastt} \right) - \sum_{S \ni v} \kappa_v \cdot x^*_S | X_v^{\lastt} < 1}\label{eq:wkl_copt}, \\ \intertext{where in the last step \eqref{eq:wkl_copt} we expanded the definition of $Z^{\thist}$, and used $\langle c, x^*\rangle \leq \beta$ together with $\log Z^{\thist} \geq 0$, which holds since the update only increases $\langle c, x \rangle$. Since $x^*$ is a feasible fractional set cover, meaning that $\sum_{S \ni v} x^*_S \geq 1$, we can further bound \eqref{eq:wkl_copt} by} &\leq \expectation\expectargover{v \sim U^{\lastt}}*{ \beta \cdot \log\left(\sum_{S\ni v} \frac{c_S}{\beta} \cdot x_S^{\lastt} \cdot e^{\kappa_v/c_S} + \sum_{S\not \ni v} \frac{c_S}{\beta} \cdot x_S^{\lastt} \right) - \kappa_v | X_v^{\lastt} < 1} \notag \\ &\leq \expectation\expectargover{v \sim U^{\lastt}}*{ \beta \cdot \log\left(\sum_S \frac{c_S}{\beta} \cdot x_S^{\lastt} + (e - 1) \cdot \sum_{S\ni v} \frac{\kappa_v}{\beta} \cdot x_S^{\lastt} \right) - \kappa_v | X_v^{\lastt} < 1} \label{eq:wkl_eapx} \\ \intertext{where we use the approximation $e^y \leq 1+(e-1) \cdot y$ for $y \in [0,1]$ (note that $\kappa_v$ is the cost of the cheapest set covering $v$, so for any $S \ni v$ we have $\kappa_v / c_S \leq 1$).
Finally, using \cref{inv:loc_cost}, along with the approximation $\log(1+y) \leq y$, we bound \eqref{eq:wkl_eapx} by} &\leq \expectation\expectargover{v \sim U^{\lastt}}*{(e-1) \cdot \kappa_v \cdot X_v^{\lastt} - \kappa_v | X_v^{\lastt} < 1} \notag \\ &\leq \expectation\expectargover{v \sim U^{\lastt}}{(e-1) \cdot \kappa_v \min(X_v^{\lastt},1) - \kappa_v | X_v^{\lastt} < 1}. \label{eq:wkl_xvl1} \end{align} The lemma statement follows by combining \eqref{eq:wkl_xvge1} and \eqref{eq:wkl_xvl1} using the law of total expectation. \end{proof} Next we bound the expected change in $\log \rho^{\thist}$ provided by the sampling of $\mathcal{R}^{\thist}$ upon the arrival of an uncovered $v^{\thist}$, where each set $S$ is included in $\mathcal{R}^{\thist}$ independently with probability $\min(\kappa_{v^{\thist}} x_S^{\lastt}/\beta, 1)$. Recall that $U^{\thist}$ denotes the elements uncovered at the end of round $\thist$; therefore element $u$ is contained in $U^{\lastt} \setminus U^{\thist}$ if and only if it is newly covered by $\mathcal{R}^{\thist}$ in this round. \begin{lemma}[Change in $\log \rho^{\thist}$] \label{lem:weighted_utchange} For rounds $\thist$ in which $\Upsilon^{\thist}$ holds, the expected change in $\log \rho^{\thist}$ is \[\expectation\expectargover{v^{\thist}, \mathcal{R}^{\thist}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq - \frac{1-e^{-1}}{\beta}\cdot \expectation\expectargover{u \sim U^{\lastt}}*{\kappa_u \cdot \min(X_u^{\lastt}, 1)}.\] \end{lemma} \begin{proof} Recall the definition of $\rho^{\thist}$ from \eqref{eq:gencost_rhodef}. Conditioned on $v^{\thist} = v$ for any fixed element $v$, the expected change in $\log \rho^{\thist}$ depends only on $\mathcal{R}^{\thist}$. Recall $\mathcal{R}^{\thist}$ is formed by sampling each set $S$ independently with probability $\min(\kappa_{v} x_S^{\lastt} / \beta, 1)$.
\begin{align} & \expectation\expectargover{\mathcal{R}^{\thist}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, v^{\thist} = v} \notag \\ &= \expectation\expectargover{\mathcal{R}^{\thist}}*{\log \left(1-\frac{\rho^{\lastt} - \rho^{\thist}}{\rho^{\lastt}} \right) | U^{\lastt}, v^{\thist} = v} \notag \\ &\leq - \frac{1}{\rho^{\lastt}} \cdot \expectation\expectargover{\mathcal{R}^{\thist}}*{\rho^{\lastt} - \rho^{\thist} | U^{\lastt}, v^{\thist} = v} \label{eq:ut_logapx} \\ \intertext{Above, \eqref{eq:ut_logapx} follows from the approximation $\log (1-y) \leq -y$. Expanding the definition of $\rho^{\thist}$ from \eqref{eq:gencost_rhodef}, \eqref{eq:ut_logapx} is bounded by} &= - \frac{1}{\rho^{\lastt}} \cdot \expectation\expectargover{\mathcal{R}^{\thist}}*{\sum_{u \in U^{\lastt}} \kappa_u \cdot \mathbbm{1}\{u \in U^{\lastt}\setminus U^{\thist}\} | U^{\lastt}, v^{\thist} = v} \notag \\ &= - \frac{1}{\rho^{\lastt}} \sum_{u \in U^{\lastt}} \kappa_u \cdot \probarg{u \not \in U^{\thist} \mid u \in U^{\lastt}, v^{\thist} = v}{\mathcal{R}^{\thist}} \notag \\ &\leq - \frac{1-e^{-1}}{\beta} \cdot \kappa_v \cdot \frac{1}{\rho^{\lastt}} \sum_{u \in U^{\lastt}} \kappa_u \cdot \min(X_u^{\lastt}, 1) \label{eq:ut_prob}\\ &= - \frac{1-e^{-1}}{\beta}\cdot \kappa_v \cdot \frac{|U^{\lastt}|}{\rho^{\lastt}} \cdot \expectation\expectargover{u \sim U^{\lastt}}*{\kappa_u \cdot \min(X_u^{\lastt}, 1)} \label{eq:ut_simplify} . 
\end{align} Step \eqref{eq:ut_prob} is due to the fact that each set $S\in \mathcal{R}^{\thist}$ is sampled independently with probability $\min(\kappa_v x^{\lastt}_S /\beta,1)$, so the probability any given element $u \in U^{\lastt}$ is covered is \[ 1 - \prod_{S \ni u} \left(1 - \min\left(\frac{\kappa_v x^{\lastt}_S}{\beta},1\right)\right) \geq 1 - \exp\left\{-\min\left(\frac{\kappa_v}{\beta} X_u^{\lastt},1\right)\right\} \stackrel{(**)}{\geq} (1 - e^{-1}) \cdot \min\left(\frac{\kappa_v}{\beta} X_u^{\lastt},1\right) \] Above, $(**)$ follows from convexity of the exponential. Step \eqref{eq:ut_prob} also uses $\min\left(\frac{\kappa_v}{\beta} X_u^{\lastt},1\right) \geq \frac{\kappa_v}{\beta} \cdot \min\left(X_u^{\lastt},1\right)$, which holds since $\kappa_v / \beta \leq 1$; step \eqref{eq:ut_simplify} rewrites the sum as an expectation over a uniform $u \sim U^{\lastt}$. Taking the expectation of \eqref{eq:ut_simplify} over $v^{\thist} \sim U^{\lastt}$, and using the fact that $\expectation\expectargover{v \sim U^{\lastt}}*{\kappa_v} = \rho^{\lastt} / |U^{\lastt}|$, the expected change in $\log \rho^{\thist}$ becomes \begin{align*} \expectation\expectargover{v^{\thist}, \mathcal{R}^{\thist}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq - \frac{1-e^{-1}}{\beta} \cdot \expectation\expectargover{u \sim U^{\lastt}}*{\kappa_u \cdot \min(X_u^{\lastt}, 1)}, \end{align*} as desired. \qedhere \end{proof} \begin{proof}[Proof of \cref{thm:weighted_cost_poly}] In every round $\thist$ for which $\Upsilon^\thist$ holds, the expected cost of the sampled sets $\mathcal{R}^\thist$ is at most $\kappa_{v^\thist} \cdot \langle c, x^{\lastt}\rangle / \beta = \kappa_{v^\thist}$ (by \cref{inv:loc_cost}). The algorithm pays an additional $\kappa_{v^\thist}$ in \cref{line:gencost_backup}, and hence the total expected cost per round is at most $2\cdot \kappa_{v^\thist}$.
Combining \cref{lem:weighted_klchange,lem:weighted_utchange}, and setting the constants $C_1 = 2$ and $C_2 = 2e$, we have \begin{align*} &\expectation\expectargover{\substack{v^{\thist}, \mathcal{R}^{\thist}}}*{\Phi(\thist) - \Phi(\lastt) | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}} \\ & = \expectation\expectargover{\substack{v^{\thist}, \mathcal{R}^{\thist}}}*{ \begin{array}{ll} &C_1\left(\wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}}\right) \\ + &C_2\cdot \beta \cdot \left(\log \rho^{\thist} - \log \rho^{\lastt}\right) \end{array} | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}} \\ &\leq - \expectation\expectargover{\substack{v^{\thist}, \mathcal{R}^{\thist}}}*{2 \cdot \kappa_{v^{\thist}} | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}}, \end{align*} which offsets the expected cost incurred by the algorithm in this round. Since neither the potential $\Phi$ nor the cost paid by the algorithm change during rounds in which $\Upsilon^\thist$ does not hold, we have the inequality \[\expectation\expectargover{\substack{v^{\thist}, \mathcal{R}^{\thist}}}*{\Phi(\thist) - \Phi(\lastt) + c(\alg(\thist)) - c(\alg(\lastt)) | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}} \leq 0.\] Let $t^*$ be the last time step for which $\Phi(t^*) \geq 0$. By applying \cref{lem:exp_pot_change} and the bound on the starting potential from \cref{fact:gencost_phi0}, we have that $\expectation\expectarg{c(\alg(t^*))} \leq O( \beta \cdot \log (mn))$. It remains to bound the expected cost paid by the algorithm after time $t^*$. Since KL divergence is a nonnegative quantity, $\Phi$ is negative only when $\rho^{\thist} \leq \beta$.
The algorithm pays $O(\kappa_{v^{\thist}})$ in expectation during rounds $t$ where $v^{\thist} \in U^{\lastt}$ and $0$ during rounds where $v^{\thist} \not \in U^{\lastt}$, and hence the expected cost paid by the algorithm after time $t^*$ is at most $\sum_{u \in U^{t^*}} \kappa_u = \rho^{t^*} = O(\beta)$. \end{proof} \section{Covering Integer Programs} \label{sec:cips} We show how to generalize our algorithm from \cref{sec:gen_cost} to solve pure covering IPs when the constraints are revealed in random order, which significantly generalizes \roosc. Formally, the random order covering IP problem (\roocip) is to solve \begin{align} \begin{array}{lll} \min_z &\langle c, z \rangle \label{eq:CIP}\\ \text{s.t.} &A z \geq 1 \\ &z \in \Z_+^m, \end{array} \end{align} when the rows of $A$ are revealed in random order. Furthermore, the solution $z$ can only be incremented monotonically and must always be feasible for the subset of constraints revealed so far. (Note that we do not consider box constraints, namely upper-bound constraints of the form $z_j \leq d_j$.) We may assume without loss of generality that the entries of $A$ are $a_{ij} \in [0,1]$. We describe an algorithm which guarantees that every row is covered to extent $1-\dthrsh$, meaning it outputs a solution $z$ with $Az \geq 1-\dthrsh$ (this relaxation is convenient in the proof for technical reasons). With foresight, we set $\dthrsh = (e-1)^{-1}$. It is straightforward to wrap this algorithm in one that buys $\lceil(1-\dthrsh)^{-1}\rceil=3$ copies of every column and truly satisfies the constraints, which only incurs an additional factor of $3$ in the cost. Once again, by a guess-and-double approach, we assume we know a bound $\beta$ such that $\lpopt \leq \beta \leq 2 \cdot \lpopt$; here $\lpopt$ is the cost of the optimal solution to the LP relaxation of \eqref{eq:CIP}. Let $z^{\thist}$ be the integer solution held by the algorithm at the end of round $t$.
Define $\Delta_i^{\thist} := \max(0, 1 - \langle a_i, z^{\thist}\rangle)$ to be the extent to which $i$ remains uncovered at the end of round $t$. This time we redefine $\kappa_i^{\thist} := \Delta_i^{\lastt} \cdot \min_k c_k / a_{ik}$, the minimum fractional cost of covering the current deficit of row $i$. Finally, for a vector $y$, denote the fractional remainder by $\widetilde y := y - \lfloor y \rfloor$. The algorithm once again maintains a fractional vector $x$ which is a guess for the (potentially infeasible) LP solution of cost $\beta$ to \eqref{eq:CIP}. This time, when the $i^{th}$ row arrives at time $t$ and $\Delta_i^{\lastt} > \dthrsh$ (meaning this row is \emph{not} already covered to extent $1-\dthrsh$), we (a) buy a random number of copies of every column $j$, in expectation proportional to its LP value $x_j$, (b) increase the value $x_j$ multiplicatively and renormalize, and finally (c) buy a minimum-cost cover for row $i$ if necessary. \begin{algorithm}[H] \caption{\textsc{LearnOrCoverCIP}} \label{alg:cips} \begin{algorithmic}[1] \State $m' \leftarrow |\{j: c_j \leq \beta\}|$. \State Initialize $x_j^{0} \leftarrow \frac{\beta}{c_j \cdot m'} \cdot \mathbbm{1}\{c_j \leq \beta\}$, and $z^0 \leftarrow \vec 0$. \For{$t=1,2\ldots, n$} \State{$i \leftarrow$ $\thist^{th}$ constraint in the random order.} \If {$\Delta_i^{\lastt} > \dthrsh$} \label{line:cip_alreadycov} \State Let $y := \kappa^{\thist}_i \cdot x^{\lastt} / \beta $. \textbf{for each}\xspace column $j$, add $z^{\thist}_j \leftarrow z^{\lastt}_j + \lfloor y_j\rfloor + \Ber(\widetilde y_j)$. \label{line:cip_updatez} \If {$ \langle a_i, x^{\lastt}\rangle< \Delta_i^{\lastt}$} \label{line:cip_noxupdate} \State For every $j$, update $x^{\thist}_j \leftarrow x^{\lastt}_j \cdot \exp\left\{\kappa_i^{\thist} \cdot \frac{a_{ij}}{c_j }\right\}$. \State Let $Z^{\thist} = \langle c, x^{\thist} \rangle / \beta$ and normalize $x^{\thist} \leftarrow x^{\thist} / Z^{\thist}$.
\label{line:normalize} \Else \State $x^{\thist} \leftarrow x^{\lastt}$ \EndIf \State Let $k^* = \argmin_k \frac{c_k}{a_{ik}}$. Add $z^{\thist}_{k^*} \leftarrow z^{\thist}_{k^*} + \left\lceil \frac{\Delta_i^{\lastt}}{a_{ik^*}}\right\rceil$. \label{line:cip_backup} \EndIf \EndFor \end{algorithmic} \end{algorithm} Note that once again, \cref{line:normalize} ensures \cref{inv:loc_cost} holds. The main theorem of this section is: \begin{theorem} \label{thm:cip} \nameref{alg:cips} is a polynomial-time randomized algorithm for \roocip which outputs a solution $z$ with expected cost $O(\beta \cdot \log (mn))$ such that $3z$ is feasible. \end{theorem} \cref{thm:intro_cip} follows as a corollary, since given any intermediate solution $z^{\thist}$, we can buy the scaled solution $3z^{\thist}$. We generalize the proof of \Cref{thm:weighted_cost_poly}. Redefine $x^*$ to be the optimal LP solution to the final, unknown instance \eqref{eq:CIP}, and let $U^{\thist} := \{i \mid \Delta_i^{\thist} > \dthrsh\}$ be the set of constraints which are not covered to extent $1-\dthrsh$ at the end of round $t$. With these new versions of $U^{\thist}$ and $\kappa^{\thist}$, the definitions of both $\rho^{\thist}$ and the potential $\Phi$ remain the same as in \eqref{eq:gencost_rhodef} and \eqref{eq:gencost_phi} (except we pick the constants $C_1$ and $C_2$ differently). Once again, we start with a bound on the initial potential. \begin{lemma}[Initial Potential] \label{fact:cip_phi0} The initial potential is bounded as $\Phi(0) = O(\beta\cdot \log (mn))$. \end{lemma} The proof is identical to that of \cref{fact:gencost_phi0}. It remains to relate the expected decrease in $\Phi$ to the algorithm's cost in every round. For convenience, let $X_i^{\thist} := \langle a_i, x^{\thist}\rangle$ be the amount that $x^{\thist}$ fractionally covers $i$. Define $\Upsilon^{\thist}$ to be the event that for constraint $i^{\thist}$ arriving in round $t$ we have $\Delta^{\lastt}_{i^{\thist}} > \dthrsh$.
The check at \cref{line:cip_alreadycov} ensures that if $\Upsilon^{\thist}$ does not hold, then neither the cost paid by the algorithm nor the potential will change. We focus on the case that $\Upsilon^{\thist}$ occurs, and once again start with the KL divergence. \begin{lemma}[Change in KL] \label{lem:ip_klchange} For rounds in which $\Upsilon^{\thist}$ holds, the expected change in weighted KL divergence is \[ \expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{ \wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq \expectation\expectargover{i \sim U^{\lastt}}{(e-1) \cdot \kappa_i^{\thist} \min(X^{\lastt}_i,\Delta_{i}^{\lastt}) - \kappa_i^{\thist}}. \] \end{lemma} \begin{proof} We break the proof into cases. By the check on \cref{line:cip_noxupdate}, if $X_{i^{\thist}}^{\lastt} \geq \Delta_{i^{\thist}}^{\lastt}$, then the vector $x^{\thist}$ is not updated in round $t$, so the change in KL divergence is 0. This means that \begin{align} &\expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{ \wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, X_{i^{\thist}}^{\lastt} \geq \Delta_{i^{\thist}}^{\lastt}} \notag \\ &\leq \expectation\expectargover{i \sim U^{\lastt}}{(e-1) \cdot \kappa_i^{\thist} \min(X^{\lastt}_i,\Delta_i^{\lastt}) - \kappa_i^{\thist} | X_i^{\lastt} \geq \Delta_i^{\lastt}} \label{eq:wKL_xigewi} \end{align} holds trivially, since in this case $\min(X_i^{\lastt},\Delta_i^{\lastt}) = \Delta_i^{\lastt} > \dthrsh = (e-1)^{-1}$ and $\kappa_i^{\thist} \ge 0$. Henceforth we focus on the case $X^{\lastt}_i < \Delta_i^{\lastt}$. The change in relative entropy depends only on the arriving uncovered constraint $i^{\thist}$, not on the randomly chosen columns $\mathcal{R}^{\thist}$. 
Expanding definitions, \begin{align} & \expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{\wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, X^{\lastt}_i < \Delta_i^{\lastt}} \notag \\ &= \expectation\expectargover{i \sim U^{\lastt}}*{\sum_{j} c_j \cdot x^*_j \cdot \log \frac{x^{\lastt}_j}{x^{\thist}_j} | X^{\lastt}_i < \Delta_i^{\lastt}} \notag \\ &= \expectation\expectargover{i \sim U^{\lastt}}*{\sum_{j} c_j \cdot x^*_j \cdot \log Z^{\thist} - \sum_j c_j \cdot x^*_j \cdot \kappa_i^{\thist} \cdot \frac{a_{ij}}{c_j} | X^{\lastt}_i < \Delta_i^{\lastt}} \notag \\ &\leq \expectation\expectargover{i \sim U^{\lastt}}*{ \beta \cdot \log Z^{\thist} - \kappa_i^{\thist} \cdot \sum_j a_{ij} x^*_j | X^{\lastt}_i < \Delta_i^{\lastt} } \label{eq:kl_costofopt} \\ \intertext{where we used that $\langle c, x^* \rangle \leq \beta$. Expanding the definition of $Z^{\thist}$ and applying the fact that $x^*$ is a feasible solution i.e. $\langle a_{i}, x^* \rangle \geq 1$, we continue to bound \eqref{eq:kl_costofopt} as} &\leq \expectation\expectargover{i \sim U^{\lastt}}*{ \beta \cdot \log \left( \frac{1}{\beta} \sum_j c_j x^{\lastt}_j \exp\left( \kappa_i^{\thist} \cdot \frac{a_{ij}}{c_j} \right) \right) - \kappa_i^{\thist} | X^{\lastt}_i < \Delta_i^{\lastt} } \notag \\ &\leq \expectation\expectargover{i \sim U^{\lastt}}*{ \beta \cdot \log \left( 1 + \frac{e-1}{\beta} \cdot \kappa_i^{\thist} \cdot \sum_j a_{ij} x^{\lastt}_j \right) - \kappa_i^{\thist} | X^{\lastt}_i < \Delta_i^{\lastt} } \label{eq:kl_expapprox}. \\ \intertext{\eqref{eq:kl_expapprox} is derived by applying the approximation $e^y \leq 1 + (e-1) y$ for $y \in [0,1]$ and the fact that $\langle c, x^{\lastt}\rangle = \beta$ by \cref{inv:loc_cost}; the exponent lies in $[0,1]$ because by definition $\kappa_i^{\thist} \cdot a_{ij} / c_j = \Delta_i^{\lastt} \cdot (a_{ij} / c_j) \cdot \min_k (c_k / a_{ik}) \leq 1$ since $\Delta_i^{\lastt} \in [0,1]$. 
Finally, using the fact that $\log(1+y) \leq y$, we have that \eqref{eq:kl_expapprox} is at most} &\leq \expectation\expectargover{i \sim U^{\lastt}}*{ (e-1) \cdot \kappa_i^{\thist} \cdot \sum_j a_{ij} x^{\lastt}_j - \kappa_i^{\thist} | X^{\lastt}_i < \Delta_i^{\lastt} } \notag \\ &\leq \expectation\expectargover{i \sim U^{\lastt}}*{ (e-1) \cdot \kappa_i^{\thist} \cdot \min \left(X^{\lastt}_i, \Delta_i^{\lastt}\right) - \kappa_i^{\thist} | X^{\lastt}_i < \Delta_i^{\lastt} } .\label{eq:wkl_xilwi} \end{align} The lemma statement follows by combining \eqref{eq:wKL_xigewi} and \eqref{eq:wkl_xilwi} using the law of total expectation. \end{proof} We move to bounding the expected change in $\log \rho^{\thist}$ provided by updating the solution $z$ on \cref{line:cip_updatez} on the arrival of the random row $i$. Recall that $U^{\thist} = \{i \mid \Delta_i^{\thist} > \dthrsh\}$ is the set of constraints which are not yet covered to extent $1-\dthrsh$ by $z$. We will make use of the following fact, which we prove in \cref{sec:appendix}: \begin{restatable}{fact}{crslem} \label{lem:ip_expectcov} Given probabilities $p_j$ and coefficients $b_j\in [0,1]$, let $W := \sum_j b_j \Ber(p_j)$ be the sum of independent weighted Bernoulli random variables. Let $\Delta\geq \dthrsh = (e-1)^{-1}$ be some constant. Then \[ \expectation\expectarg*{\min\left(W, \Delta \right)} \geq \alpha \cdot \min\left(\expectation\expectarg*{W}, \Delta \right), \] for a fixed constant $\alpha$ independent of the $p_j$ and $b_j$. \end{restatable} We are ready to bound the expected change in $\log \rho^{\thist}$.
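As an aside, \cref{lem:ip_expectcov} can be checked exactly on small instances by enumerating the Bernoulli outcomes. The following Python snippet is our own numerical illustration; the constant $0.5$ tested below is an arbitrary witness, not the optimal $\alpha$.

```python
import itertools, math

def exact_ratio(probs, coeffs, delta):
    """Compute E[min(W, delta)] / min(E[W], delta) exactly, where
    W = sum_j coeffs[j] * Ber(probs[j]) with independent Bernoullis."""
    e_min = 0.0
    for outcome in itertools.product([0, 1], repeat=len(probs)):
        # probability of this joint outcome of the Bernoullis
        pr = math.prod(p if b else 1 - p for p, b in zip(probs, outcome))
        w = sum(c * b for c, b in zip(coeffs, outcome))
        e_min += pr * min(w, delta)
    e_w = sum(p * c for p, c in zip(probs, coeffs))
    return e_min / min(e_w, delta)

delta = 1 / (math.e - 1)  # the threshold used by the algorithm
# on a few small instances the ratio stays bounded below by a constant
ratios = [exact_ratio(probs, coeffs, delta)
          for probs, coeffs in [([0.5, 0.5], [1.0, 1.0]),
                                ([0.9, 0.1, 0.3], [0.5, 1.0, 0.7]),
                                ([1.0], [0.6])]]
```

Such enumeration is of course exponential in the number of Bernoullis and serves only as a sanity check of the statement.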
\begin{lemma}[Change in $\log \rho^{\thist}$] \label{lem:ip_rhochange} For rounds in which $\Upsilon^{\thist}$ holds, the expected change in $\log \rho^{\thist}$ is \[ \expectation\expectargover{\substack{i^{\thist},\mathcal{R}^{\thist}}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq - \frac{\alpha}{\beta}\cdot \expectation\expectargover{i' \sim U^{\lastt}}*{\kappa_{i'}^{\thist} \cdot \min\left(X_{i'}^{\lastt}, \Delta_{i'}^{\lastt}\right)} \] where $\alpha$ is a fixed constant. \end{lemma} \begin{proof} Conditioned on $i^{\thist} = i$, the expected change in $\log \rho^{\thist}$ depends only on $\mathcal{R}^{\thist}$. \begin{align} & \expectation\expectargover{i^{\thist}, \mathcal{R}^{\thist}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}, i^{\thist} = i} \notag \\ &= \expectation\expectargover{\mathcal{R}^{\thist}}*{\log \left(1-\frac{\rho^{\lastt} - \rho^{\thist}}{\rho^{\lastt}} \right) | U^{\lastt}, i^{\thist} = i} \notag \\ &\leq - \frac{1}{\rho^{\lastt}} \expectation\expectargover{\mathcal{R}^{\thist}}*{\rho^{\lastt} - \rho^{\thist} | U^{\lastt}, i^{\thist} = i} \label{eq:ip_ut_logapx}. \\ \intertext{Above, \eqref{eq:ip_ut_logapx} follows from the approximation $\log (1-y) \leq -y$.
Expanding definitions again, and using the fact that $\kappa_{i'}^{\lastt} - \kappa_{i'}^{\thist} = \min_k(c_k / a_{i'k})\cdot (\Delta_{i'}^{\lastt} - \Delta_{i'}^{\thist}) \geq \kappa_{i'}^{\thist} (\Delta_{i'}^{\lastt} - \Delta_{i'}^{\thist})$, we further bound \eqref{eq:ip_ut_logapx} by} &\leq - \frac{1}{\rho^{\lastt}} \expectation\expectargover{\mathcal{R}^{\thist}}*{\sum_{i' \in U^{\lastt}} \kappa_{i'}^{\thist} \cdot (\Delta_{i'}^{\lastt} - \Delta_{i'}^{\thist})} \notag\\ &= - \frac{1}{\rho^{\lastt}} \sum_{i' \in U^{\lastt}} \kappa_{i'}^{\thist} \cdot \expectation\expectargover{\mathcal{R}^{\thist}}{\Delta_{i'}^{\lastt} - \Delta_{i'}^{\thist}} \notag \\ &\leq - \frac{1}{\rho^{\lastt}} \sum_{i' \in U^{\lastt}} \kappa_{i'}^{\thist} \cdot \alpha \cdot \min\left(\frac{\kappa^{\thist}_{i}}{\beta} \cdot X_{i'}^{\lastt}, \: \Delta_{i'}^{\lastt} \right). \label{eq:ip_randcovg} \\ \intertext{To understand this last step \eqref{eq:ip_randcovg}, note that $\Delta_{i'}^{\lastt} - \Delta_{i'}^{\thist} = \min(\sum_j a_{i'j} \lfloor y_j \rfloor + \sum_j a_{i'j} \Ber(\widetilde y_j) ,\: \Delta_{i'}^{\lastt} )$. By the definition of $y$, the first term inside the minimum has expectation $\frac{\kappa_{i}^{\thist}}{\beta} \cdot X^{\lastt}_{i'}$, and since $\Upsilon^{\thist}$ holds we have $\Delta_{i'}^{\lastt} > \dthrsh$. Therefore applying \Cref{lem:ip_expectcov} gives \eqref{eq:ip_randcovg} (where $\alpha$ is the constant given by the lemma). Since $\kappa_i^{\thist} / \beta \leq 1$, we bound \eqref{eq:ip_randcovg} with} &\leq - \frac{\alpha}{\beta} \cdot \kappa_i^{\thist} \cdot \frac{1}{\rho^{\lastt}} \sum_{i' \in U^{\lastt}} \kappa_{i'}^{\thist} \cdot \min\left(X_{i'}^{\lastt}, \Delta_{i'}^{\lastt}\right) \notag \\ &= - \frac{\alpha}{\beta} \cdot \kappa_i^{\thist} \cdot \frac{|U^{\lastt}|}{\rho^{\lastt}} \cdot \expectation\expectargover{i' \sim U^{\lastt}}*{\kappa_{i'}^{\thist} \cdot \min \left(X_{i'}^{\lastt}, \Delta_{i'}^{\lastt}\right)}.
\label{eq:ip_kcrs} \end{align} Taking the expectation of \eqref{eq:ip_kcrs} over $i \sim U^{\lastt}$, and using the fact that $\expectation\expectargover{i \sim U^{\lastt}}*{\kappa_i^{\thist}} = \rho^{\lastt} / |U^{\lastt}|$, the expected change in $\log \rho^{\thist}$ becomes \begin{align*} &\expectation\expectargover{i^{\thist}, \mathcal{R}^{\thist}}*{\log \rho^{\thist} - \log \rho^{\lastt} \mid x^{\lastt}, U^{\lastt}, \Upsilon^{\thist}} \leq - \frac{\alpha}{\beta} \cdot \expectation\expectargover{i' \sim U^{\lastt}}*{\kappa_{i'}^{\thist} \cdot \min\left(X_{i'}^{\lastt}, \Delta_{i'}^{\lastt}\right)}, \end{align*} as desired. \qedhere \end{proof} We may now combine the two previous lemmas as before. \begin{proof}[Proof of \cref{thm:cip}] In the round in which constraint $i$ arrives, the expected cost of sampling is $\frac{\kappa_i^{\thist}}{\beta} \langle c, x^{\lastt}\rangle = \kappa_i^{\thist}$ (by \cref{inv:loc_cost}). The algorithm pays an additional \[ c_{k^*} \left\lceil \frac{\Delta_i^{\lastt}}{a_{ik^*}}\right\rceil = c_{k^*} \left\lceil \frac{\kappa_i^{\thist}}{c_{k^*}}\right\rceil \leq 2 \kappa_i^{\thist}\] in \cref{line:gencost_backup}, where this upper bound holds because $\frac{\kappa_i^{\thist}}{c_{k^*}} = \frac{\Delta_i^{\lastt}}{a_{ik^*}}\geq \nicefrac{1}{2}$, since $\Delta_i^{\lastt} \geq \dthrsh \geq \nicefrac{1}{2}$ and $a_{ik^*} \leq 1$. Hence the total expected cost per round is at most $3 \cdot \kappa_i^{\thist}$.
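The rounding step above rests on the elementary fact that $\lceil x \rceil \leq 2x$ whenever $x \geq \nicefrac{1}{2}$; a quick numeric sweep (illustrative only) confirms it:

```python
import math

# For x >= 1/2 we have ceil(x) <= 2x, and hence
# c * ceil(kappa / c) <= 2 * kappa whenever kappa / c >= 1/2.
for i in range(500, 10_000):
    x = i / 1000.0               # sweep x over [0.5, 10)
    assert math.ceil(x) <= 2 * x
```

Equality is attained at $x = \nicefrac{1}{2}$, so the factor $2$ cannot be improved under this assumption alone.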
Combining \cref{lem:ip_klchange} and \cref{lem:ip_rhochange} and choosing $C_1 = 3$ and $C_2 = 3(e-1) / \alpha$, we have \begin{align*} &\expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{\Phi(t) - \Phi(\lastt) | i^{1}, \ldots, i^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}} \\ & = \expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{ \begin{array}{ll} &C_1 \cdot \left(\wKL{x^*}{x^{\thist}} - \wKL{x^*}{x^{\lastt}}\right) \\ + & C_2 \cdot \beta \cdot \left(\log \rho^{\thist} - \log \rho^{\lastt}\right) \end{array} | i^{1}, \ldots, i^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}} \\ &\leq - \expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{3 \cdot \kappa_{i^{\thist}}^{\thist} | i^{1}, \ldots, i^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}, \Upsilon^{\thist}}, \end{align*} which cancels the expected change in the algorithm's cost. Hence we have the inequality \[ \expectation\expectargover{\substack{i^{\thist}, \mathcal{R}^{\thist}}}*{\Phi(t) - \Phi(\lastt) + c(\alg(t)) - c(\alg(\lastt)) | i^{1}, \ldots, i^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}} \leq 0. \] Let $t^*$ be the last time step for which $\Phi(t^*) \geq 0$. By applying \cref{lem:exp_pot_change} and the bound on the starting potential, $\Phi(0) = O( \beta \cdot \log (mn))$, we have that $\expectation\expectarg{c(\alg(t^*))} \leq O( \beta \cdot \log (mn))$. To bound the expected cost of the algorithm after time $t^*$, note, as before, that the KL divergence is a nonnegative quantity, and $\Phi$ is negative only when $\rho^{\thist} \leq \beta$. The algorithm pays $O(\kappa_i^{\thist})$ in expectation during rounds $t$ where $i \in U^{\lastt}$ and $0$ during rounds where $i \not \in U^{\lastt}$, and hence the expected cost paid by the algorithm after time $t^*$ is at most $\sum_{i \in U^{t^*}} \kappa_i^{t^*} = \rho^{t^*} = O(\beta)$.
\end{proof} \section{Lower Bounds} \label{sec:lbs} We turn to showing lower bounds for \setcov and related problems in the random order model. The lower bounds for \roosc are proven via basic probabilistic and combinatorial arguments. We also show hardness for a batched version of \roosc, which has implications for related problems. \subsection{Lower Bounds for \roosc} \label{sec:ro_lb} We start with information-theoretic lower bounds for the RO setting. \begin{theorem} \label{thm:fraclogLB} The competitive ratio of any randomized fractional or integral algorithm for \roosc is $\Omega(\log n)$. \end{theorem} \begin{proof} Consider the following instance of \roosc in which $m = 2^\ell$ and $n =2^\ell - 1$. Construct the instance recursively in $\ell$ rounds. Define the subcollection of sets $\mathcal{S}_0 = \mathcal{S}$, i.e., initially all the sets. For each round $i$ from $1$ to $\ell$, do: (a) create $2^{\ell-i}$ new elements and add each to every set in $\mathcal{S}_{i-1}$, and (b) choose $\mathcal{S}_{i}$ to be a uniformly random subcollection of $\mathcal{S}_{i-1}$ of size $|\mathcal{S}_{i-1}|/2$. By Yao's principle, it suffices to lower bound the cost of any fixed deterministic algorithm $A$ that maintains a monotone LP solution in random order. Let $x^t$ be the fractional solution of $A$ at the end of round $t$. Assume $A$ is lazy in the sense that in every round and for every coordinate $x_S$ that is incremented in that round, setting that coordinate to $x_S - \epsilon$ is infeasible for all $\epsilon> 0$ with respect to the elements observed up until and including this round. We refer to the elements added in round $i$ as the type $i$ elements. Let $\mathfrak{g}(i)$ be the event that at least one element of that type arrives in random order before any element of type $j$ for $j > i$. Note that $\prob{\mathfrak{g}(i)} = 2^{\ell-i} / (2^{\ell-i+1} - 1)> \nicefrac{1}{2}$ for all $i$.
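For concreteness, the recursive construction can be sketched in a few lines (an illustrative rendering, with sets represented as Python sets):

```python
import random

def build_instance(l, seed=0):
    """Build the recursive lower-bound instance: m = 2**l sets, n = 2**l - 1 elements."""
    rng = random.Random(seed)
    sets = [set() for _ in range(2 ** l)]
    current = list(range(2 ** l))        # indices of the subcollection S_{i-1}
    next_elem = 0
    for i in range(1, l + 1):
        for _ in range(2 ** (l - i)):    # (a) 2^(l-i) fresh elements of type i
            for s in current:
                sets[s].add(next_elem)
            next_elem += 1
        current = rng.sample(current, len(current) // 2)   # (b) keep a random half
    return sets

inst = build_instance(3)
# A set surviving all l rounds contains every element, so the offline optimum is 1.
assert any(s == set(range(7)) for s in inst)
```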
Also define $c(A,i)$ to be the cost paid by the algorithm to cover the first element of type $i$ that arrives in random order (potentially $0$). Conditioned on $\mathfrak{g}(i)$ holding, we claim the expected cost $c(A,i)$ is at least $\nicefrac{1}{2}$. To see this, first suppose that $i$ is the first type for which $\mathfrak{g}(i)$ holds; then this is the first element to arrive, and so $\sum_{S \in \mathcal{S}_0} x^t_S = 0$ beforehand and $\sum_{S \in \mathcal{S}_i} x^t_S = 1$ afterward, and so $c(A,i) = 1 \geq 1/2$. Otherwise, let $k < i$ be the last type for which $\mathfrak{g}(k)$ held. Since $A$ is lazy, at the time $t$ before the first element of type $i$ arrived, we had $\sum_{S \in \mathcal{S}_k} x^t_S = 1$. By the construction of the instance and the fact that $i >k$, the collection $\mathcal{S}_i$ consists of a uniformly random subset of $\mathcal{S}_k$ of size at most $|\mathcal{S}_k|/2$. Deferring the random choice of this collection $\mathcal{S}_i$ until this point, we have that $\expectation\expectarg{\sum_{S \in \mathcal{S}_i} x^t_S} \leq \nicefrac{1}{2}$. Hence $\displaystyle \expectation\expectarg*{c(A,i) | \mathfrak{g}(i)} \geq \nicefrac{1}{2}$. To conclude, the optimal solution consists of the vector that has $1$ on the one coordinate $S$ containing all the elements and $0$ elsewhere, whereas \[ \expectation\expectarg{c(A)} \geq \sum_{i} \prob{\mathfrak{g}(i)} \cdot \expectation\expectarg*{c(A,i) | \mathfrak{g}(i)} > \frac{\ell}{4} = \Omega(\log n). \] Since the optimum is integral, the competitive ratio lower bound also holds for integral algorithms. \end{proof} We emphasize that this set system has a VC dimension of $2$, which rules out improved algorithms for set systems of small VC dimension in this setting. \begin{theorem} The competitive ratio of any randomized algorithm for \roosc is $\Omega\left(\frac{\log m}{\log \log m} \right)$ even when $m \gg n$.
\end{theorem} The following proof uses the construction in Proposition 4.2 of \cite{alon2003online}; they take the product of this construction with another to show a stronger bound for the adversarial order setting. It is a simple observation that the part of the construction we use gives a bound even in random order, but we include the proof for completeness. \begin{proof} Given an integer parameter $r$, let $\mathcal{S} = \binom{[10r^2]}{r}$ be the set of all subsets of $[10r^2]$ of size $r$. The adversary chooses $U$ to be a random subset of $[10r^2]$ of size $r$ and reveals it in random order. By Yao's principle, it again suffices to bound the performance of any deterministic algorithm $A$, and we may again assume that $A$ is lazy. By deferring randomness, the adversary is equivalent to one which randomly selects an element $r$ times without replacement from $[10r^2]$. Since the algorithm selects at most $r$ sets each of size at most $r$, the number of elements of $[10r^2]$ covered by $A$ is at most $r^2$, and hence every element chosen by the adversary has probability at least $4/5$ of being uncovered on arrival. Hence the algorithm selects at least $4r/5$ sets in expectation, whereas $\opt$ consists of the single set covering the $r$ elements of the adversary, so the competitive ratio of $A$ is $\Omega(r)$. The claim follows by noting that $r = \Omega(\log m / \log \log m)$. \end{proof} \subsection{Performance of \cite{buchbinder2009online} in Random Order} \label{sec:bn_lb} In this section we argue that the algorithm of \cite{buchbinder2009online} has a performance of $\Omega(\log m \log n)$ for \roosc in general. One instance demonstrating this bound is the so-called upper triangular instance $\Delta = (U_{\Delta}, \mathcal{S}_\Delta)$ for which $n=m$ and which we now define. Let the sets $\mathcal{S}_{\Delta} = \{S_1, \ldots, S_n \}$ be fixed. Choose a random permutation $\pi \in S_n$.
Then for every $i=1, \ldots, m$, let $S_{\pi(i)} = \{ n-i+1, \ldots, n\}$, since it will be convenient for elements to appear in \cref{fig:permutedtriangular} in descending order. \begin{figure} \caption{Tight instance for \cite{buchbinder2009online}.\label{fig:permutedtriangular}} \end{figure} \begin{claim} The algorithm of \cite{buchbinder2009online} is $\Omega(\log m \log n)$-competitive on the instance $\Delta = (U_\Delta, \mathcal{S}_\Delta)$ in RO. \end{claim} \begin{proof} The final solution $\mathcal{T}$ output by \cite{buchbinder2009online} on this instance is equivalent to that of the following algorithm which waits until the end of the sequence to buy all of its sets. Maintain a (monotone) LP solution $x$ whose coordinates are indexed by sets. Every time an uncovered element $i$ arrives, for all $j\geq i$ increase the weights of all sets $x_{\pi(j)}$ uniformly until $\sum_{j \geq i} x_{\pi(j)} = 1$. Finally, at the end of the element sequence, sample each set with probability $\min(\log n \cdot x_S, 1)$. We claim that this produces a solution of cost $\Omega(\log^2 n)$ in expectation on the instance above when elements are presented in random order, whereas the optimal solution consists of only the one set $S_1$. Let $X_i$ be the event that $i$ arrives before any element $j$ with $j<i$. This corresponds to $i$ appearing in the fewest sets of any element seen thus far; we call such an element \textit{leading}. Let $k(i) = \min\{k \mid X_k, \ k>i \}$ be the most recent leading index before $i$, if one exists. Note that if $X_i$ occurs and $k(i) = k$, then the expected increase in the size of the final solution due to $X_i$ is exactly $ i \cdot \left(\min\left(\frac{\log n}{i}, 1 \right)-\min\left(\frac{\log n}{k}, 1 \right)\right)$. (If $X_i$ occurs but $k(i)$ does not exist, then $i$ is the first leading element and so the expected cost increase in the final solution is $i \cdot \min\left(\frac{\log n}{i}, 1 \right)$.) What is $\prob{X_i, \ k(i) = k}$ for some $i < k$?
It is precisely the probability that the random arrival order induces an order on only the elements $k, k-1, \ldots, i, \ldots, 1$ which puts $k$ first and $i$ second, which is $1/(k(k-1))$. Thus the total expected size of the final solution $\mathcal{T}$ can be bounded by \begin{align*} \expectation\expectarg{\lvert \mathcal{T} \rvert} &= \sum_{i=1}^{n-1} \sum_{k = i + 1}^n \prob{X_i, \ k(i) = k} \cdot i \left(\min\left(\frac{\log n}{i}, 1 \right)-\min\left(\frac{\log n}{k}, 1 \right)\right) \\ &\geq \sum_{i=1}^{n-1} \sum_{k = i+1}^n \frac{1}{k(k-1)} \cdot i \left(\min\left(\frac{\log n}{i}, 1 \right)-\min\left(\frac{\log n}{k}, 1 \right)\right) \\ &\geq \log n \cdot \sum_{i= \log n}^{n-1} \sum_{k =i+ 1}^n \frac{1}{k(k-1)} \cdot i \left(\frac{1}{i} - \frac{1}{k}\right) \\ &\geq \log n \cdot \Omega(\log n - \log \log n) \\ &= \Omega(\log^2 n). \qedhere \end{align*} \end{proof} \subsection{Lower Bounds for Extensions} \label{sec:lb_ext} We study lower bounds for several extensions of \roosc. Our starting point is a lower bound for the batched version of the problem. Here the input is specified by a set system $(U, \mathcal{S})$ as before, along with a partition of $U$ into batches $B_1, B_2, \ldots B_b$. For simplicity, we assume all batches have the same size $s$. The batches are revealed one-by-one in their entirety to the algorithm, in uniform random order. After the arrival of a batch, the algorithm must select sets to buy to cover all the elements of the batch. Using this lower bound, we derive as a corollary a lower bound for the random order version of \subcov defined in \cite{gupta2020online}. It is tempting to use the method of \cref{thm:weighted_cost_poly} to improve their competitive ratio of $O(\log n \log (t \cdot f(N) / \fmin))$ in RO (we refer the reader to \cite{gupta2020online} for the definitions of these parameters). We show that removing a log from the bound is not possible in general. 
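As an aside, the $\Omega(\log^2 n)$ growth of the double sum in the claim of \cref{sec:bn_lb} can be checked numerically (a standalone sketch, not part of any proof):

```python
import math

def triangular_lb(n):
    """Evaluate the double sum lower-bounding E[|T|] from the claim above."""
    logn = math.log(n)
    total = 0.0
    for i in range(1, n):
        for k in range(i + 1, n + 1):
            total += (i / (k * (k - 1))) * (min(logn / i, 1.0) - min(logn / k, 1.0))
    return total

# The sum grows roughly quadratically in log n.
s_small, s_large = triangular_lb(100), triangular_lb(1000)
assert 0 < s_small < s_large
```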
\begin{theorem} \label{lem:batched} The competitive ratio of any polynomial-time randomized algorithm on batched \roosc with $b$ batches of size $s$ is $\Omega(\log b \log s)$ unless $\NP \subseteq \BPP$. \end{theorem} We follow the proof of \cite[Theorem~2.3.4]{korman2004use}, which demonstrates that there is no randomized $o(\log m \log n)$-competitive polynomial-time algorithm for (adversarial order) \osetcov unless $\NP \subseteq \BPP$. We adapt the argument to account for random order. Consider the following product of the upper triangular instance from \cref{sec:bn_lb} together with an arbitrary instance of (offline) \setcov $H$. In particular, take $\Delta = (U_\Delta, \mathcal{S}_\Delta)$ to be the upper triangular instance on $N$ elements, and let $H = (U_H, \mathcal{S}_H)$ denote a set cover instance, where $U_H = [N']$ and $|\mathcal{S}_H| = M'$. Note that $\Delta$ is a random instance, where the randomness is over the choice of label permutation $\pi$. Define product instance $\Delta \times H = (U_{\Delta \times H}, \mathcal{S}_{\Delta \times H})$ by \begin{align*} U_{\Delta \times H} &= \{(i,j) \in U_\Delta \times U_H \} \\ \mathcal{S}_{\Delta \times H} & = \{S_{ij} = S_i \times S_j: (S_i, S_j) \in \mathcal{S}_\Delta \times \mathcal{S}_H \}. \end{align*} Observe that $|U_{\Delta \times H}| = N N'$ and $| \mathcal{S}_{\Delta \times H} | = N M'$. Each copy of $H$ is a batch of this instance, so that batches of elements $B_1, \ldots, B_N$ are given by $B_i = \{(i,j): j \in U_H\}$. Thus the parameters of the batched \roosc instance are $b = N$ and $s = N'$. These batches $B_i$ will arrive in uniformly random order according to some permutation $\sigma \in S_N$. For a randomized algorithm $A$, let \[ C(A(\Delta\times H)) := \expectation\expectargover{\substack{\sigma, \pi \in S_N \\ \mathcal{R}}}{c(A(\Delta \times H))} \] denote the expected cost of $A$ on the instance $\Delta \times H$, where $\mathcal{R}$ denotes the randomness of $A$. 
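The product construction can be rendered concretely as follows (a hypothetical sketch with set systems as Python lists of sets; the tiny example instances are illustrative):

```python
def product_instance(U_delta, S_delta, U_H, S_H):
    """Form the product system (U_Delta x U_H, {S_i x S_j})."""
    U = [(u, v) for u in U_delta for v in U_H]
    S = [{(a, b) for a in S_i for b in S_j} for S_i in S_delta for S_j in S_H]
    return U, S

# Tiny example: N = 2 elements/sets on the left, N' = 2 elements and M' = 3 sets on the right.
U_d, S_d = [1, 2], [{1, 2}, {2}]
U_h, S_h = ["a", "b"], [{"a"}, {"a", "b"}, {"b"}]
U, S = product_instance(U_d, S_d, U_h, S_h)
assert len(U) == 2 * 2 and len(S) == 2 * 3   # |U| = N*N' and |S| = N*M'
```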
We begin by establishing an information-theoretic lower bound, whose proof we defer to \cref{sec:appendix}. \begin{restatable}{lemma}{productLB} \label{lem:productLB} Let $A$ be a randomized algorithm for batched \roosc. Then on the instance $\Delta \times H$, \[ C(A(\Delta \times H)) \geq \frac{1}{2} \cdot |\opt(H)| \cdot \log N. \] \end{restatable} We also require the following result of \cite{raz1997sub}: \begin{lemma} \label{lem:sattosetcover} There exists a polynomial-time reduction from \SAT to \setcov that, given a formula $\psi$, produces a \setcov instance $H^\psi$ with $N'$ elements and $M'$ sets for which \begin{itemize} \item $M' = N'^\alpha$ for some constant $\alpha$, \item if $\psi \in \SAT$ then $\opt(H^\psi) = K(N')$, \item if $\psi \not \in \SAT$ then $\opt(H^\psi) \geq c \cdot \log N' \cdot K(N')$, \end{itemize} for some polynomial-time-computable $K$ and constant $c\in(0,1)$. \end{lemma} We are ready to show the lower bound. \begin{proof}[Proof of \cref{lem:batched}] Suppose that there is some polynomial-time randomized algorithm $A$ with competitive ratio at most $c/4 \cdot \log b \log s$ for batched \roosc (recall that $c$ is the constant given in \cref{lem:sattosetcover} above). Then, we argue, the following is a $\BPP$ algorithm deciding \SAT. Given a formula $\psi$, we reduce it to an instance of batched \roosc: first feed the formula through the reduction in \cref{lem:sattosetcover} to get the instance $H^\psi$, then create the batched \roosc instance defined by $\Delta \times H^{\psi}$. Finally, run $A$ on $\Delta \times H^{\psi}$ a number $W = \poly(n)$ of times, and let $\overline C$ be the empirical average of $c(A(\Delta \times H^{\psi}))$ over these runs. If $\overline C \geq 3c/8 \cdot \log b \log s \cdot K(N')$, output $\psi \not \in \SAT$, else output $\psi \in \SAT$. It suffices to argue that this procedure answers correctly with high probability.
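The amplification step can be sketched generically (hypothetical; `run_once` stands in for one run of $A$ on $\Delta \times H^{\psi}$, rescaled to $[0,1]$, and the threshold for a cost in $[0,1]$):

```python
import random

def decide_by_average(run_once, threshold, trials=1000, seed=0):
    """Average many independent runs of a bounded random cost and compare
    the empirical mean against a threshold (Hoeffding-style amplification)."""
    rng = random.Random(seed)
    avg = sum(run_once(rng) for _ in range(trials)) / trials
    return avg >= threshold          # True iff the empirical cost looks large

# Illustrative: costs concentrated near 0.9 vs. near 0.1, threshold 0.5.
high = decide_by_average(lambda r: 0.9 + 0.1 * (r.random() - 0.5), 0.5)
low = decide_by_average(lambda r: 0.1 + 0.1 * (r.random() - 0.5), 0.5)
assert high is True and low is False
```

In the reduction above, a large empirical average corresponds to declaring $\psi \not\in \SAT$.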
\begin{claim} If $\psi \in \SAT$, then $C(A(\Delta \times H^{\psi})) \leq \frac{c}{4} \cdot \log b \log s \cdot K(N')$. \end{claim} \begin{proof} By assumption, $A$ has the guarantee \begin{align*} C(A(\Delta \times H^\psi)) &\leq \frac{c}{4} \log b \log s \cdot |\opt(\Delta \times H^\psi)| \\ & = \frac{c}{4} \cdot \log b \log s \cdot K(N'). \qedhere \end{align*} \end{proof} \begin{claim} If $\psi \not \in \SAT$, then $C(A(\Delta \times H^{\psi})) \geq \frac{c}{2} \cdot \log b \log s \cdot K(N')$. \end{claim} \begin{proof} By \cref{lem:productLB}, the performance of $A$ is lower bounded as \begin{align*} C(A(\Delta \times H^\psi)) &\geq \frac{1}{2} \log N \cdot |\opt(H^\psi)| \\ &\geq \frac{1}{2} \log N \cdot c \cdot \log (N') \cdot K(N') \\ &\geq \frac{c}{2} \cdot \log b \log s \cdot K(N'). \qedhere \end{align*} \end{proof} To complete the proof, note that $c(A(\Delta \times H^\psi)) \in [0, NM']$. Setting $W = \poly(n)$ sufficiently large, by a Hoeffding bound, the estimate $\overline C$ concentrates to within $\frac{c}{8} \cdot \log b \log s \cdot K(N')$ of $C(A(\Delta \times H^{\psi}))$ with high probability, in which case the procedure above answers correctly. \end{proof} We now use \cref{lem:batched} to derive lower bounds for \rosubc. \begin{corollary} The competitive ratio of any polynomial-time randomized algorithm against \rosubc is $\Omega(\log n \cdot \log (f(N)/\fmin))$ unless $\NP \subseteq \BPP$. \end{corollary} \begin{proof} Batched \roosc is a special case of online \subcov in which $f_i$ is the coverage function of block $i$. In this case the parameter $f(N)/\fmin = s$, so the statement follows by applying \cref{lem:batched} with $b = s = \sqrt{n}$. \end{proof} \section{Conclusion} In this work we introduce \nameref{alg:gencost} as a method for solving \roosc and \roocip with competitive ratio nearly matching the best possible offline bounds. On the other hand we prove nearly tight \emph{information-theoretic} lower bounds in the RO setting.
We also show lower bounds and separations for several generalizations of \roosc. We leave as an interesting open question whether it is possible to extend the technique to covering IPs \emph{with box constraints}. We hope our method finds uses elsewhere in online algorithms for RO settings. In \cref{sec:disussion} we discuss suggestive connections between our technique and other methods in online algorithms, namely projection based algorithms and stochastic gradient descent. \textbf{Acknowledgements:} Roie Levin would like to thank Vidur Joshi for asking what is known about online set cover in random order while on the subway in NYC, as well as David Wajc and Ainesh Bakshi for helpful discussions. {\footnotesize } \appendix \section{Deferred Proofs} \label{sec:appendix} In this paper we use several potential function arguments, and the following simple lemma. \begin{restatable}[Expected Potential Change Lemma]{lemma}{expphichange} \label{lem:exp_pot_change} Let \alg be a randomized algorithm for \roosc and let $\Phi$ be a potential which is a function of the state of the algorithm at time $t$. Let $c(\alg(\thist))$ be the cost paid by the algorithm up to and including time $t$. Let $\mathcal{R}^{\thist}$ and $v^{\thist}$ respectively be the random variables that are the random decisions made by the algorithm in time $t$, and the random element that arrives in time $t$. Suppose that for all rounds $t$ in which the algorithm has not covered the entire ground set at the beginning of the round, the inequality \[\expectation\expectargover{\substack{v^{\thist}, \mathcal{R}^{\thist}}}*{\Phi(\thist) - \Phi(\lastt) + c(\alg(\thist)) - c(\alg(\lastt)) | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}} \leq 0\] holds. Let $t^*$ be the last time step such that $\Phi(t^*) \geq 0$.
Then the expected cost of the algorithm, $c(\alg(t^*))$, can be bounded by \[\expectation\expectarg{c(\alg(t^*))} \leq \Phi(0).\] \end{restatable} \begin{proof} Let $\mathcal{T}$ be the set of rounds for which $\Phi > 0$ at the beginning of the round. Define the stochastic process: \begin{align*} X^{\thist} := \begin{cases} \Phi(\thist) + c(\alg(\thist)) & \text{if } \thist\in \mathcal{T}, \\ X^{\lastt} & \text{otherwise.} \end{cases} \end{align*} By assumption, this is a supermartingale with respect to $((v^{\thist}, \mathcal{R}^{\thist}))_t$; that is, \[ \expectation\expectargover{v^{\thist}, \mathcal{R}^\thist}*{X^{\thist} | v^{1}, \ldots, v^{\lastt}, \mathcal{R}^{1}, \ldots, \mathcal{R}^{\lastt}} \leq X^{\lastt} \] for all $\thist$. By induction we have that for all $t > 0$ \[\expectation\expectargover{\substack{v^{1}, \ldots, v^{\thist} \\ \mathcal{R}^{1}, \ldots, \mathcal{R}^{\thist}}}{X^{\thist}} \leq X^0,\] so in particular, \begin{align*} \expectation\expectarg{\Phi(t^{*})} + \expectation\expectarg*{c(\alg(t^*))} & \leq \Phi(0). \end{align*} The claim follows since the leftmost term is nonnegative by assumption. \end{proof} \lpsupport* \begin{proof} Suppose otherwise and let $x^*$ be an optimal LP solution. Let $j'$ be a coordinate for which $c_{j'} > \beta$ and $x^*_{j'} > 0$. Then define the vector $x'$ by \[x'_j = \begin{cases} 0 & \text{if $j = j'$} \\ \frac{x_j^*}{1-x_{j'}^*} & \text{otherwise}. \end{cases}\] First of all, note that $x^*_{j'} < 1$ since $x^*_{j'} c_{j'} \leq \sum_j c_j x^*_j \leq \beta < c_{j'}$, and so $x' \geq 0$. To see that $x'$ is feasible for each constraint $\langle a_i, x\rangle \geq 1$, note that \begin{align*} \langle a_i, x^*\rangle = (1-x^*_{j'}) \langle a_i, x'\rangle + a_{ij'} x^*_{j'} \geq 1 \quad \text{so} \quad \langle a_i, x'\rangle \geq \frac{1}{1 - x^*_{j'}} (1 - a_{ij'} x^*_{j'}) \geq 1, \end{align*} since $a_{ij'} \in [0,1]$.
Finally, observe that $x'$ costs strictly less than $x^*$, since \begin{align*} \langle c, x' \rangle = \frac{\langle c, x^* \rangle - x^*_{j'} c_{j'} }{1-x^*_{j'}} < \frac{\langle c, x^* \rangle - x^*_{j'} \langle c, x^* \rangle}{1-x^*_{j'}} = \langle c, x^* \rangle. \end{align*} This contradicts the optimality of $x^*$, and so the claim holds. \end{proof} \crslem* \begin{proof} We first consider the case when $\expectation\expectarg*{W} \geq \nicefrac{\Delta}{3}$. By the Paley-Zygmund inequality, noting that $\expectation\expectarg*{W} = \sum_j b_j p_j$ and so $\sigma^2 = \sum_j p_j (1-p_j) b_j^2 \leq \expectation\expectarg*{W}$, we have \[ \prob{W \geq \nicefrac{\Delta}{6}} \geq \prob{W \geq \expectation\expectarg*{W}/2} \geq \frac{1}{4} \cdot \frac{\expectation\expectarg*{W}^2}{\expectation\expectarg*{W}^2 + \sigma^2} \geq \frac{1}{4} \cdot \frac{\expectation\expectarg*{W}}{1 + \expectation\expectarg*{W}} \geq \frac{1}{4} \cdot \frac{\nicefrac{\Delta}{3}}{1 + \nicefrac{\Delta}{3}} \geq \nicefrac{1}{28}, \] since $\Delta \geq \dthrsh \geq \nicefrac{1}{2}$ by assumption. This implies the claim because in this case \[ \expectation\expectarg*{\min\left(W, \Delta \right)} \geq \frac{\Delta}{6} \cdot \prob{W \geq \nicefrac{\Delta}{6}} \geq \frac{\Delta}{168} \geq \frac{1}{168} \cdot \min(\expectation\expectarg*{W}, \Delta). \] Otherwise $\expectation\expectarg*{W} < \nicefrac{\Delta}{3}$. Let $\mathcal{R}$ denote the random subset of $j$ for which $\Ber(p_j) = 1$ in a given realization of $W$, and let $\mathcal{K}$ be the random subset of the $j$ which is output by the $(\nicefrac{1}{3},\nicefrac{1}{3})$-contention resolution scheme for knapsack constraints when given $\mathcal{R}$ as input, as defined in \cite[Lemma 4.15]{chekuri2014submodular}.
The set $\mathcal{K}$ has the properties that (1) (over the randomness in $\mathcal{R}$) every $j$ appears in $\mathcal{K}$ with probability at least $p_j/3$, (2) $\sum_{j \in \mathcal{K}} b_j < \Delta$, and (3) $\mathcal{K} \subseteq \mathcal{R}$. Hence in this case \[ \expectation\expectarg*{\min\left(W, \Delta \right)} \geq \expectation\expectargover{\mathcal{K}}*{\sum_{j \in \mathcal{K}} b_j} \geq \frac{ \expectation\expectarg*{W} }{3}= \frac{1}{3} \cdot \min(\expectation\expectarg*{W}, \Delta). \] Therefore the claim holds in either case with $\alpha = \nicefrac{1}{168}$. \end{proof} \productLB* \begin{proof} By Yao's principle, it suffices to bound the expected performance of any deterministic algorithm $A$ over the randomness of the instance. Here the performance is in expectation over the random choice of instance as well as the random order of batch arrival. The randomness in the input distribution $\Delta \times H$ is over the set labels, given by the random permutation $\pi \in S_N$. For convenience, we instead equivalently imagine the set labels in $\Delta$ are fixed, and that $\pi$ is a random permutation over batch labels, so that the label of batch $i$ is $l_i = \pi(i)$. This means that for a fixed realization of $\pi$, for any two sets $S_{lj}, S_{l'j'} \in \mathcal{S}_{\Delta \times H}$, even before any batches have arrived $A$ can determine whether $l = l'$, but because $\pi \sim S_N$ uniformly at random, $A$ cannot determine which batches $i$ the sets $S_{lj}$ and $S_{l'j'}$ intersect. Without loss of generality, we assume that $A$ is lazy, in that for each batch it only buys sets which provide marginal coverage. The (offline) instance $H$ has some optimal cover $\{S^*_1, \ldots, S^*_k\} := \opt(H)$. Let $\mathcal{S}^* := \{S_{lj} = S_l \times S_j : S_l \in \mathcal{S}_{\Delta}, S_j \in \opt(H) \}$ denote the sets in $\mathcal{S}_{\Delta \times H}$ which project down to sets in $\opt(H)$.
We argue that it suffices to consider a deterministic lazy algorithm $A^*$ which only buys sets in $\mathcal{S}^*$. Let $c(A(\pi, \sigma))$ denote the cost of $A$ on batch label permutation $\pi$, when the batch arrival order is $\sigma \in S_N$. For every feasible $A$ we argue that there is some $A^*$ which buys only sets in $\mathcal{S}^*$, is feasible for every batch $\sigma(i)$ upon arrival, and buys at most as many sets as $A$ for any $\pi \in S_N$, so that \begin{equation} \label{eqn:buyingfromoptisbetter} c(A^*(\pi, \sigma)) \leq c(A(\pi, \sigma)). \end{equation} This will in turn imply that \begin{equation} \expectation\expectargover{\substack{\sigma, \pi \sim S_N}}{c(A^*(\pi,\sigma))} \leq \expectation\expectargover{\substack{\sigma, \pi \sim S_N}}{c(A(\pi, \sigma))}, \end{equation} and so it will suffice to lower bound the performance of any $A^*$ in this restricted class. Given $A$, we will construct an $A^*$ which satisfies \eqref{eqn:buyingfromoptisbetter}. But first, some notation. For each batch arrival order $\sigma(1), \ldots, \sigma(N)$, suppose that in round $i$ upon the arrival of $B_{\sigma(i)}$, $A$ buys the sets $\mathcal{C}_{i} \subseteq \mathcal{S}_{\Delta \times H}$. These $\mathcal{C}_1, \ldots, \mathcal{C}_i$ together cover each $B_{\sigma(i)}$ upon its arrival, and $c(A, \sigma) = \sum_i |\mathcal{C}_i|$. Let $\overline{\mathcal{C}_i} := \{S \in \bigcup_{i' \leq i} \mathcal{C}_{i'}: S \cap B_{\sigma(i)} \neq \emptyset \}$ be the collection of sets which $A$ uses to cover $B_{\sigma(i)}$. We say a set $S_{lj} \in \mathcal{S}_{\Delta \times H}$ is \textit{live} in round $i$ if it intersects with all batches $B_{\sigma(1)}, \ldots, B_{\sigma(i)}$ which have arrived so far. All sets are live to start, and once a set is not-live it will never be live again; note that the liveness of $S_{lj}$ in round $i$ is a property of its label $l$ and the batch order $\sigma$.
In each round $i$ we will \textit{match} each set $S^* \in \mathcal{C}_i^*$ which $A^*$ buys with some set $S \in \bigcup_{i' \leq i} \mathcal{C}_{i'}$. We will maintain that at most one $S^*$ is matched to each $S$, and that a matched pair of sets is never unmatched. Let $A^*$ operate by running $A$ in the background. For each round $i$ with incoming batch $B_{\sigma(i)}$, if $A^*$ already covers $B_{\sigma(i)}$ upon arrival then $A^*$ does nothing. Otherwise for each $j^* \in \opt(H)$ for which $A^*$ has not already bought a copy which is live in round $i$, $A^*$ identifies an unmatched set $S_{lj} \in \overline{\mathcal{C}}_i$ which $A$ is using to cover $B_{\sigma(i)}$, buys the set $S_{lj^*}$ (with the same label), and matches $S_{lj^*}$ with $S_{lj}$. It is immediate that $c(A^*, \sigma) \leq c(A, \sigma)$, since $A^*$ matches every set which it buys to a set which $A$ buys. Therefore we need only show that $A^*$ is feasible in each round $i$; that is, it never runs out of unmatched sets. To see this, first note that projecting $\overline{\mathcal{C}}_i$ onto $H$ gives a feasible cover, and so $|\overline{\mathcal{C}}_i| \geq k$. In particular, this means that $A^*$ succeeds in the first round $i=1$. Next observe that in any round $i>1$, algorithm $A^*$ has bought at most one $S_{lj^*}$ which is live for each given $S_{j^*} \in \opt(H)$. This is because it only buys $S_{lj^*}$ for $j^*$ for which it does not currently have a live copy, and no sets go from not-live to live. Also note that any set which shares a label with some $S_{lj} \in \overline{\mathcal{C}}_i$ is live in round $i$. Therefore any $S_{lj^*}$ matched to $S_{lj} \in \overline{\mathcal{C}}_i$ at the beginning of round $i$ is live, since $A^*$ ensures that matched sets share labels. Let $\Gamma_i$ be the collection of these $S_{lj^*}$, and let $t:= |\Gamma_i|$. Since $A^*$ maintains at most one live set for each $S_{j^*} \in \opt(H)$ at once, the $j^*$ for $S_{lj^*} \in \Gamma_i$ are distinct.
Therefore to cover $B_{\sigma(i)}$, $A^*$ must buy sets $S_{lj^*}$, with live labels $l$, for the $k - t$ remaining $S_{j^*} \in \opt(H)$ not represented in $\Gamma_i$. Fortunately there are $|\overline{\mathcal{C}}_i| - t \geq k - t$ unmatched sets with live labels which $A$ has bought for $A^*$ to choose from, and so $A^*$ never gets stuck. Therefore $A^*$ is feasible for every round $i$, and so \eqref{eqn:buyingfromoptisbetter} holds. We now lower-bound the performance of $A^*$. Since $A^*$ is lazy and $\opt(H)$ is a minimal cover for $H$, for each arriving batch $B_{\sigma(i)}$ the algorithm $A^*$ buys exactly one $S_{lj^*}$ corresponding to each $S_{j^*} \in \opt(H)$ for which it does not already have a live copy. Therefore we can analyze the expected number of copies of $S_{j^*}$ which $A^*$ buys over the randomness of the batch arrival order for each $S_{j^*} \in \opt(H)$ independently. Fix some $S^* = S_{j^*} \in \opt(H)$, and let $C_N$ denote the expected number of copies of $S^*$ which $A^*$ buys, where the expectation is taken over the batch arrival order $\sigma \sim S_N$. In the first round $i=1$, $A^*$ buys some $S_{lj^*}$ with label $l$, and uses this copy until the first batch arrives for which $l$ is no longer live; let $P(l)$ denote the number of batches $B_{\sigma(1)}, \ldots, B_{\sigma(P(l))}$ for which $l$ remains live. Once $l$ is no longer live, $A^*$ must choose another copy $S_{l'j^*}$ for one of the remaining live $l'$. This is a sub-instance of the problem it faced at $i=1$. We will prove that $C_N \geq H_N / 2$ by induction on $N$ (where here $H_N$ denotes the $N^{th}$ harmonic number). By definition we have that $C_1 = 1$, and so this holds for $N=1$. Now assume the claim $C_n \geq H_n / 2$ holds for all integers $n \in [0, N-1]$.
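Before carrying out the induction formally, the recurrence derived next can be iterated numerically as a sanity check (a sketch in exact rational arithmetic; the instance-independent recurrence is taken from the derivation below):

```python
from fractions import Fraction

def H(n):
    """n-th harmonic number, with H_0 = 0."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# C_0 = 0 and C_N = 1 + (1/N) * sum_{n < N} (H_N - H_n) * C_n.
C = [Fraction(0)]
for N in range(1, 40):
    C.append(1 + Fraction(1, N) * sum((H(N) - H(n)) * C[n] for n in range(N)))

assert C[1] == 1                                    # base case
assert all(C[N] >= H(N) / 2 for N in range(1, 40))  # claimed lower bound
```

Exact fractions avoid floating-point noise, so the check confirms the bound $C_N \geq H_N/2$ with no numerical slack for these small $N$.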
Using the observations above, we can express $C_N$ by the following recurrence: \begin{align*} C_N &= 1 + \sum_{j = 1}^N \prob{P(l) = j} \cdot C_{N - j} \\ &= 1 +\frac{1}{N} \sum_{n = 0}^{N-1} (H_N - H_{n}) \cdot C_{n}, \intertext{where we take $H_0 = 0$. By computing $C_{N-1}$ and substituting, we then obtain} C_N &= \frac{1}{N} +\frac{N-1}{N} C_{N-1} + \frac{1}{N^2} \sum_{n = 0}^{N-1} C_n \\ &\geq \frac{1}{N} +\frac{N-1}{2N}\left(H_N - \frac{1}{N} \right) + \frac{1}{2N} (H_N - 1)\\ &\geq \frac{1}{2} H_N, \end{align*} where here we used that $\sum_{n=0}^{N-1} H_n = N (H_N - 1)$. Since $H_N \geq \log N$, we therefore have \[ \expectation\expectargover{\sigma \sim S_N}*{\left\lvert\left\{S_{lj^*} \in \bigcup_i \mathcal{C}^*_i\right\}\right\rvert} = C_N \geq \frac{1}{2} \log N. \] Since this analysis holds for each of the $k$ such sets $S_{j^*} \in \opt(H)$, the claim follows from linearity of expectation. \end{proof} \section{Pseudocode} \label{sec:appendix2} In this section we give pseudocode for secondary algorithms in this paper. \subsection{The Exponential-Time Set Cover Algorithm} First we give the algorithm from \cref{sec:exptime}. \begin{algorithm}[H] \caption{\textsc{SimpleLearnOrCover}} \label{alg:expunit} \begin{algorithmic}[1] \State Initialize $\mathfrak{T}^0 \leftarrow \binom{\mathcal{S}}{k}$ and $\mathcal{C}^0 \leftarrow \emptyset$. \For{$\thist=1,2,\ldots, n$} \State $v^{\thist} \leftarrow$ $\thist^{th}$ element in the random order. \If {$v^{\thist}$ uncovered} \State Choose $\mathcal{T} \sim \mathfrak{T}^{\lastt}$ uniformly at random, and choose $T \sim \mathcal{T}$ uniformly at random. \State Add $\mathcal{C}^{\thist} \leftarrow \mathcal{C}^\lastt \cup \{S, T\}$ for any choice of $S$ containing $v^{\thist}$. \EndIf \State Update $\mathfrak{T}^{\thist} \leftarrow \{ \mathcal{T} \in \mathfrak{T}^{\lastt} : v^{\thist} \in \bigcup \mathcal{T} \}$.
\EndFor \end{algorithmic} \end{algorithm} \subsection{The Set Cover Algorithm for Unit Costs} Next, we give a slightly simplified version of the algorithm from \cref{sec:gen_cost} in the special case of unit costs. We give it here to illustrate the essential simplicity of our algorithm in the unit-weight case. (Much of the complication comes from managing the non-uniform set costs.) \begin{algorithm}[H] \caption{\textsc{UnitCostLearnOrCover}} \label{alg:unitcost} \begin{algorithmic}[1] \State Initialize $x_S^{0} \leftarrow \frac{1}{m}$, and $\mathcal{C}^0 \leftarrow \emptyset$. \For{$\thist=1,2,\ldots, n$} \State $v^{\thist} \leftarrow$ $\thist^{th}$ element in the random order. \If {$v^{\thist}$ not already covered} \State Sample one set $R^{\thist}$ from distribution $x$, update $\mathcal{C}^{\thist} \leftarrow \mathcal{C}^{\lastt} \cup \{R^{\thist}\}$. \If {$\sum_{S \ni v^{\thist}} x^{\lastt}_S < 1$} \State For every set $S \ni v^{\thist}$, update $x^{\thist}_S \leftarrow e \cdot x^{\lastt}_S$. \State Normalize $x^{\thist} \leftarrow x^{\thist} / \|x^{\thist}\|_1$. \Else \State $x^{\thist} \leftarrow x^{\lastt}$. \EndIf \State Let $S_{v^{\thist}}$ be an arbitrary set containing $v^{\thist}$. Add $\mathcal{C}^{\thist} \leftarrow \mathcal{C}^{\thist} \cup \{S_{v^{\thist}}\}$. \EndIf \EndFor \end{algorithmic} \end{algorithm} \section{Discussion of Claim in \cite{grandoni2008set}} \label{sec:ggl} \cite{grandoni2008set} writes in passing that \setcov in ``the random permutation model (and hence any model where elements are drawn from an unknown distribution) [is] as hard as the worst case''. Our algorithm demonstrates that the instance of \cite{korman2004use} witnessing the $\Omega(\log^2 n)$ lower bound in adversarial order can be easily circumvented in random order. Korman's instance is conceptually similar to our hard instance for the batched case from \cref{sec:lb_ext} but with batches shown in order, deterministically.
A natural strategy to adapt the instance to the random order model is to duplicate the elements in the $i^{th}$ batch $C^{b-i}$ times for a constant $C>1$ (where $b$ is the number of batches). This ensures that elements of early batches arrive early with good probability. Indeed, this is the strategy used for the online Steiner Tree problem in random order (see e.g.\ \cite{gupta2020random}). However, for the competitive ratio lower bound of \cite{korman2004use}, which is $\log b \log s$, to be $\Omega(\log^2 n)$, the number of batches $b$ and the number of distinct elements per batch $s$ must be polynomials in the number of elements $n$. In this case the total number of elements after duplication is $n' = O(C^{\poly(n)})$, which degrades the $\Omega(\log b \log s)$ bound to doubly logarithmic in $n'$. \section{Connections to Other Algorithms} \label{sec:disussion} In this section we discuss connections between \nameref{alg:gencost} and other algorithms. Whether these perspectives are useful or merely spurious remains to be seen, but regardless, they provide interesting context. \textbf{Projection interpretation of \cite{buchbinder2019k}.} In \cite{buchbinder2019k} it is shown that the original \osetcov algorithm of \cite{alon2003online,buchbinder2009online} is equivalent to the following. Maintain a fractional solution $x$. On the arrival of every constraint $\langle a_i, x \rangle \geq 1$, update $x$ to be the solution to the convex program \begin{align} \begin{array}{lll} \argmin_{y} & \KL{y}{x} \label{eq:bn_proj}\\ \text{s.t.} &A^{\leq i} y \geq 1 \\ &y \geq 0, \end{array} \end{align} where $A^{\leq i}$ is the matrix of constraints up until time $i$. Finally, perform independent randomized rounding online. The KKT and complementary slackness conditions ensure that the successive fractional solutions obtained this way are monotonically increasing, and in fact match the exponential update rule of \cite{buchbinder2009online}.
Interestingly, we may view \nameref{alg:gencost} as working with the same convex program but \textit{with an additional cost constraint} (recall that $\beta$ is our guess for $\copt$). \begin{align} \begin{array}{lll} \argmin_{y} & \KL{y}{x} \\ \text{s.t.} &A^{\leq i} y \geq 1 \\ &\langle c, y \rangle \leq \beta \\ &y \geq 0. \end{array} \end{align} The extra packing constraint already voids the monotonicity guarantee of \eqref{eq:bn_proj}, which we argue is a barrier for \cite{buchbinder2009online} in \cref{sec:ro_lb,sec:bn_lb}. Furthermore, instead of computing the full projection, we fix the Lagrange multiplier of the most recent constraint $\langle a_i, x \rangle \geq 1$ to be $\kappa_{v_i}$ (the cost of the cheapest set containing this last element), the multipliers of the other elements' constraints to $0$, and we set the multiplier of the cost packing constraint so that $y$ is normalized to cost precisely $\beta$. Hence we only make a partial step towards the projection, and sample from the new fractional solution regardless of whether it was fully feasible. \textbf{Stochastic Gradient Descent (SGD).} Perhaps one reason \roosc is easier than the adversarial order counterpart is that, in the following particular sense, random order grants access to a stochastic gradient. Consider the unit cost setting. Given a fractional solution $x$, define the function \[f(x) := \sum_v \max\left(0, 1- \sum_{S \ni v} x_S\right),\] in other words, the fractional number of elements uncovered by $x$. Clearly we wish to minimize $f$, and if we assume that every update to $x$ coincides with buying a set (as is the case in our algorithm), we wish to do so in the smallest number of steps.
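To make this concrete, here is a toy computation (a sketch on a small invented instance; the set system and fractional solution are hypothetical). It checks, by an exact finite difference, that the coordinate-$S$ (sub)gradient of $f$ at a point where every element's coverage is below $1$ equals $-|S \cap U|$ for the uncovered set $U$, and that averaging the membership indicator of a uniformly random uncovered element equals $-[\nabla f]_S / |U|$:

```python
from fractions import Fraction

# Hypothetical tiny instance: three sets over universe {1, 2, 3, 4}.
sets = {"S1": {1, 2}, "S2": {2, 3}, "S3": {3, 4}}
V = {1, 2, 3, 4}
x = {S: Fraction(1, 3) for S in sets}  # a fractional solution

def f(xx):
    # fractional number of uncovered elements
    return sum(max(Fraction(0), 1 - sum(xx[S] for S in sets if v in sets[S]))
               for v in V)

# uncovered elements: fractional coverage strictly below 1
U = {v for v in V if sum(x[S] for S in sets if v in sets[S]) < 1}

h = Fraction(1, 100)
for S in sets:
    xp = dict(x)
    xp[S] += h
    grad_S = (f(xp) - f(x)) / h            # exact: f is linear near this x
    assert grad_S == -len(sets[S] & U)     # the gradient formula
    avg_chi = Fraction(sum(1 for v in U if v in sets[S]), len(U))
    assert avg_chi == -grad_S / len(U)     # scaled unbiased estimate
```

The finite difference is exact here because all coverages stay below $1$ after the perturbation, so the hinge terms are locally linear.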
The gradient of $f$ evaluated at the coordinate $S$ is \[[\nabla f]_S = - |S \cap U^{\thist}|.\] On the other hand, conditioning on the next arriving element $v$ being uncovered, the random binary vector $\chi^v$ denoting the set membership of $v$ has the property \[\expectation\expectargover{v}*{\chi^v_S} = \frac{|S \cap U^{\thist}|}{|U^{\thist}|},\] meaning $\chi^v$ is a scaled but otherwise unbiased estimate of $-\nabla f$. Since \nameref{alg:gencost} performs updates to the fractional solution using $\chi^v$, it can be thought of as a form of stochastic gradient descent (more precisely, of stochastic mirror descent with an entropy mirror map, since we use a multiplicative weights update scheme; see e.g.\ \cite{bubeck2014convex}). One crucial and interesting difference is that SGD computes a gradient estimate at every point to which it moves, whereas our algorithm is only allowed to query the gradient at the vertex of the hypercube corresponding to the sets bought so far. This analogy with SGD seems harder to argue in the non-unit cost setting, where the number of updates to the solution is no longer a measure of the competitive ratio. \end{document}
\begin{document} \title{Irredundant Sets in Atomic Boolean Algebras\footnote{ 2010 Mathematics Subject Classification: Primary 06E05, 03E35, 03E17. Key Words and Phrases: boolean algebra, irredundant set, reaping number. }} \author{Kenneth Kunen\footnote{University of Wisconsin, Madison, WI 53706, U.S.A., \ \ [email protected]} } \maketitle \begin{abstract} Assuming $\GCH$, we construct an atomic boolean algebra whose pi-weight is strictly less than the least size of a maximal irredundant family. \end{abstract} \section{Introduction} We begin by reviewing some standard notation regarding boolean algebras. Koppelberg \cite{Kop} and Monk \cite{Monk_text, Monk_paper} contain much more information. \begin{notation} In this paper, the three calligraphic letters $\AAA, \BB,\CC$ will always denote boolean algebras; in particular, $\AAA$ will always denote a finite-cofinite algebra. Other calligraphic letters denote subsets of boolean algebras. $b'$ denotes the boolean complement of $b$. $\BB \subseteq \CC$ means that $\BB$ is a sub-algebra of $\CC$, and $\BB \subset \CC$ or $\BB \subsetneqq \CC$ means that $\BB$ is a \emph{proper} sub-algebra of $\CC$. Also, $\stsp(\BB)$ denotes the Stone space of $\BB$. \end{notation} The symbols $\subset$ and $\subsetneqq$ are synonymous, but we shall use $\subset$ when the properness is obvious; e.g., ``let $\BB \subset \PP(\omega)$ be a countable sub-algebra and $\cdots\cdots$''. Some further notation is borrowed either from topology (giving properties of $\stsp(\BB)$) or from forcing (regarding $\BB \backslash \{\zero\}$ as a forcing poset). \begin{definition} If $\SSS \subseteq \BB$, then $\SSS$ is \emph{dense in} $\BB$ iff $\forall b \in \BB \backslash \{\zero\} \; \exists d \in \SSS \backslash \{\zero\} \; [d \le b]$. \end{definition} \noindent In forcing, we would say that $\SSS \backslash \{\zero\}$ is dense in $\BB \backslash \{\zero\}$. 
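The density notion can be illustrated by brute force in a small finite algebra (a toy sketch; the algebra $\PP(4)$ reappears in an example below):

```python
from itertools import combinations

# All 16 elements of the boolean algebra P(4).
B = [frozenset(s) for r in range(5) for s in combinations(range(4), r)]

def is_dense(D):
    # D is dense iff below every nonzero b lies a nonzero d in D
    return all(any(d and d <= b for d in D) for b in B if b)

atoms = [frozenset({i}) for i in range(4)]
assert is_dense(atoms)
# no smaller family is dense: a nonzero d below an atom must be that atom
assert not any(is_dense(D) for D in combinations(B, 3))
```

In particular the four atoms form a smallest dense set, so the pi-weight (defined next) of $\PP(4)$ is $4$.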
\begin{definition} The \emph{pi-weight}, $\pi(\BB)$, is the least size of a dense subset $\SSS \subseteq \BB$. \end{definition} \noindent This is the same as the topological notion $\pi(\stsp(\BB))$. \begin{definition} If $\EE \subseteq \BB$, then $\sa(\EE)$ is the sub-algebra of $\BB$ generated by $\EE$. \end{definition} \noindent Note that $\sa(\emptyset) = \{\zero, \one\}$. The notation $\langle \EE \rangle$ is more common in the literature, but we shall frequently use the angle brackets to denote sequences. If $\EE$ is a set of non-zero vectors in a vector space, then $\EE$ is linearly independent iff no $a \in \EE$ is generated from the other elements of $\EE$; equivalently, iff no non-trivial linear combination from $\EE$ is zero. In boolean algebras, these two notions are not equivalent, and are named, respectively, ``irredundance'' and ``independence'': \begin{definition} $\EE \subseteq \BB$ is \emph{irredundant} iff $a \notin \sa(\EE \backslash \{a\})$ for all $a \in \EE$. \end{definition} \begin{definition} For $a \in \BB$: $a^1 = a'$ and $a^0 = a$. Then, $\EE \subseteq \BB$ is \emph{independent} iff for all $n \in \omega$ and all distinct $a_0, \ldots , a_{n-1} \in \EE$ and all $\epsilon \in \pre{n}{2}$, $\bigwedge_{i < n} a_i^{\epsilon(i)} \ne \zero$. \end{definition} Some remarks that follow easily from the definitions: Every independent set is irredundant. If $\EE$ is a chain and $|\EE| \ge 2$ and $\zero, \one \notin \EE$, then $\EE$ is irredundant but not independent. No irredundant set can contain $\zero$ or $\one$ because $\zero,\one \in \sa(\GG)$ for every $\GG$, even if $\GG = \emptyset$. Irredundance and independence are similar in that they treat an element and its complement equivalently: \begin{lemma} \label{lemma-replace-comp} Fix $\EE \subseteq \BB \backslash \{\zero, \one\}$. If $b, b' \in \EE$, then $\EE$ is neither irredundant nor independent. 
If $b \in \EE$ and $b' \notin \EE$ and $\tilde \EE$ is obtained from $\EE$ by replacing $b$ by $b'$, then $\EE$ is irredundant iff $\tilde \EE$ is irredundant and $\EE$ is independent iff $\tilde \EE$ is independent. \end{lemma} Monk \cite{Monk_paper} defines: \begin{definition} $\Irrmm(\BB)$ is the \emph{m}inimum size of a \emph{m}aximal irredundant subset of $\BB$. \end{definition} The following provides a simple way to prove maximal irredundance: \begin{lemma} \label{lemma-irred-generate} If $\EE \subseteq \BB$ is irredundant and $\sa(\EE) = \BB$, then $\EE$ is maximally irredundant in $\BB$. \end{lemma} The following provides a simple way to refute maximal irredundance. It is attributed to McKenzie in Koppelberg \cite{Kop} (see Proposition 4.23): \begin{lemma} \label{lemma-irred-dense} Assume that $\EE \subseteq \BB$, where $\EE$ is irredundant. Fix $d \in \BB \backslash \sa(\EE)$ such that $\forall a \in \sa(\EE)\, [a \le d \to a = \zero]$. Then $\EE \cup \{d\}$ is irredundant. In particular, if $\EE$ is maximally irredundant in $\BB$, then $\sa(\EE)$ is dense in $\BB$. \end{lemma} \begin{proof} Assume that $\EE \cup \{d\}$ is not irredundant. Then there are distinct $a, a_1, \ldots, a_n \in \EE$ such that $a \in \sa \{ a_1, \ldots, a_n , d \}$. Then, fix $u,w \in \sa \{ a_1, \ldots, a_n \}$ such that $a = (u \wedge d) \vee (w \wedge d')$. Now, $a \wedge d' = w \wedge d'$, so $a \sym w \le d$, so $a \sym w = \zero$. Then $a = w \in \sa \{ a_1, \ldots, a_n \}$, contradicting irredundance of $\EE$. \end{proof} \begin{corollary} When $\BB$ is infinite, $\pi(\BB) \le \Irrmm(\BB) \le |\BB| \le 2^{\pi(\BB)}$. \end{corollary} \begin{proof} For the first $\le$: If $\EE$ is maximally irredundant in $\BB$, then $\sa(\EE)$ must be infinite (since it is dense in $\BB$), so $\pi(\BB) \le |\sa(\EE)| = |\EE|$. \end{proof} Note that Lemma \ref{lemma-irred-dense} does not say that $\EE$ must be dense in $\BB$, and the first $\le$ can fail for finite $\BB$.
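The finite example in the next paragraph is small enough to check by brute force (a sketch; by De Morgan, closing under complement and union generates the subalgebra):

```python
from itertools import combinations

UNIV = frozenset(range(4))

def sa(gens):
    """Subalgebra of P(4) generated by gens (close under complement, union)."""
    alg = {frozenset(), UNIV} | {frozenset(g) for g in gens}
    while True:
        new = {UNIV - a for a in alg} | {a | b for a in alg for b in alg}
        if new <= alg:
            return alg
        alg |= new

a, b = frozenset({0, 1}), frozenset({1, 2})
assert len(sa([a, b])) == 16                  # sa({a, b}) is all of P(4)
assert a not in sa([b]) and b not in sa([a])  # so {a, b} is irredundant
atoms = [frozenset({i}) for i in range(4)]
assert len(sa(atoms)) == 16                   # the four atoms also generate P(4)
```

By Lemma \ref{lemma-irred-generate}, either check of generation plus irredundance certifies maximal irredundance.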
For example, let $\BB = \PP(4)$ be the $16$ element boolean algebra. If $\EE = \{a,b\}$ is an independent set (e.g., $a = \{0,1\}$ and $b = \{1,2\}$), then $\sa(\EE) = \BB$. So, $\EE$ is a maximal irredundant set, showing that $\Irrmm(\BB) = 2$, although $\pi(\BB) = 4$. Also, let $\FF$ be the set of the four atoms (singletons). Then $\sa(\FF) = \BB$. So, $\FF$ is a maximal irredundant set. Since there can be maximal irredundant sets of different sizes in $\BB$, there is no simple notion of ``dimension'' as in vector spaces. This phenomenon can occur in infinite $\BB$ as well: \begin{example} \label{ex-max-irr} Let $\BB = \PP(\kappa)$, where $\kappa$ is any infinite cardinal. Then $\BB$ has a maximal irredundant set of size $2^\kappa$, but $\pi(\BB) = \Irrmm(\BB) = \kappa$. \end{example} \begin{proof} Following Hausdorff \cite{Hau}, let $\EE$ be an independent set of size $2^\kappa$; then $\EE$ is irredundant and is contained in a maximal irredundant set. To prove that $\Irrmm(\BB) = \kappa$: As in \cite{Monk_paper}, let $\FF = \kappa\backslash \{0\}$; that is, $\FF$ is the set of all non-empty proper initial segments of $\kappa$. $\FF$ is a chain, and hence irredundant. To prove maximality, fix $c \in \PP(\kappa) \backslash \sa(\FF)$; we show that $\FF \cup \{c\}$ is not irredundant. By Lemma \ref{lemma-replace-comp}, WLOG $0 \in c$ (otherwise, replace $c$ by $c'$). Let $\delta$ be the least ordinal not in $c$. Then $\delta , \delta + 1 \in \FF$ and $\delta = c \cap (\delta + 1)$, refuting irredundance. \end{proof} In view of examples like this, Monk \cite{Monk_paper} asks (Problem 1): \begin{question} \label{question_Irrmm} Does $\Irrmm(\BB) = \pi(\BB)$ for every infinite $\BB$? \end{question} Assuming $\GCH$, the answer is ``no'': \begin{theorem} \label{thm-pi-irred} If $2^{\aleph_1} = \aleph_2$, then there is an atomic boolean algebra $\BB$ such that $\pi(\BB) = \aleph_1 < \Irrmm(\BB)$.
\end{theorem} We do not know whether the hypothesis ``$2^{\aleph_1} = \aleph_2$'' can be eliminated here, although it can be weakened quite a bit, as we shall see from the proof of Theorem \ref{thm-pi-irred} in Section \ref{sec-proofs}. This weakening (described in Theorem \ref{thm-pi-irred-better}) is expressed in terms of some cardinals, such as $\bbbb_{\omega_1}$, $\dddd_{\omega_1}$, etc., that are obtained by replacing $\omega$ by $\omega_1$ in the definitions of the standard cardinal characteristics of the continuum, such as $\bbbb$, $\dddd$, etc. These cardinals are discussed further in Section \ref{sec-small}, which also uses them to give some lower bounds to the size of a $\BB$ that can possibly satisfy Theorem \ref{thm-pi-irred}. Section~\ref{sec-remarks} contains some preliminary observations on atomic boolean algebras. \section{Remarks on Atomic Boolean Algebras} \label{sec-remarks} We are trying to find an atomic $\BB$ that answers Monk's Question \ref{question_Irrmm} in the negative; that is, such that $\pi(\BB) < \Irrmm(\BB)$. Observe first: \begin{lemma} \label{lemma-atomic} If $\BB$ is infinite and atomic and $\kappa = \pi(\BB)$, then $\kappa$ is the number of atoms, and $\kappa$ is infinite, and $\BB \cong \widetilde \BB$, where $\AAA \subseteq \widetilde \BB \subseteq \PP(\kappa)$, and $\AAA$ is the finite-cofinite algebra on $\kappa$. \end{lemma} So, we need only consider $\BB$ with $\AAA \subseteq \BB \subseteq \PP(\kappa)$. The proof of Example \ref{ex-max-irr} generalizes immediately to: \begin{lemma} \label{lemma-init-seg} Assume that $\AAA \subseteq \BB \subseteq \PP(\kappa)$, where $\kappa$ is any infinite cardinal and $\AAA$ is the finite-cofinite algebra and $\kappa \subseteq \BB$ \textup(i.e., $\BB$ contains all initial segments of $\kappa$\textup). Then $\pi(\BB) = \Irrmm(\BB) = \kappa$. 
\end{lemma} When $\kappa = \omega$, $\BB$ must contain all initial segments, so the two lemmas imply: \begin{lemma} If $\BB$ is atomic and $\pi(\BB) = \aleph_0$ then $\Irrmm(\BB) = \aleph_0$. \end{lemma} So, we shall focus here on obtaining our $\BB$ with $\kappa = \omega_1$. Then, note that in Lemma \ref{lemma-init-seg}, a club of initial segments suffices: \begin{lemma} \label{lemma-contains-club} Assume that $\AAA \subseteq \BB \subseteq \PP(\omega_1)$, where $\AAA$ is the finite-cofinite algebra and $C \subseteq \BB$ for some club $C \subseteq \omega_1$. Then $\pi(\BB) = \Irrmm(\BB) = \aleph_1$. \end{lemma} \begin{proof} Shrinking $C$ and adding in $0$ if necessary, we may assume that $C$, in its increasing enumeration, is $\{\delta_{\omega \cdot \alpha} : \alpha < \omega_1\}$, where $\delta_0 = 0$ and $\delta_{\omega \cdot (\alpha + 1)} \ge \delta_{\omega \cdot \alpha} + \omega$ for each $\alpha$. Then each set-theoretic difference $\delta_{\omega \cdot (\alpha + 1)} \backslash \delta_{\omega \cdot \alpha}$ is countably infinite, so we can enumerate this set as $\{\delta_{\omega \cdot \alpha + \ell} : 0 < \ell < \omega\}$. We now have a 1-1 (but not increasing) enumeration of $\omega_1$ as $\{\delta_\xi : \xi < \omega_1\}$, and, using $\AAA \subseteq \BB$, each initial segment $\{\delta_\xi : \xi < \eta\} \in \BB$. We can now apply Lemma \ref{lemma-init-seg} to the isomorphic copy of $\BB$ obtained via the bijection $\xi \mapsto \delta_\xi$. \end{proof} We shall avoid this issue by constructing a $\BB$ with $\pi(\BB) < \Irrmm(\BB)$ so that $\BB$ contains no countably infinite sets at all; we shall call such $\BB$ \emph{dichotomous}: \begin{definition} $\AAA$ always denotes the finite-cofinite algebra on $\omega_1$. $\BB $ is \emph{dichotomous} iff $\BB$ is a sub-algebra of $\PP(\omega_1)$ and $\AAA \subseteq \BB$ and $\forall b \in \BB\, [b \in \AAA \text{ or } |b| = |\omega_1 \backslash b| = \aleph_1 ]$. 
\end{definition} Note that $\forall b \in \BB\, [b \in \AAA \text{ or } |b| = |\omega_1 \backslash b| = \aleph_1 ]$ is equivalent to $\forall b \in \BB\, [|b| \ne \aleph_0 ]$. To get an easy example of a dichotomous $\BB$ of size $2^{\aleph_1}$: Following Hausdorff \cite{Hau}, let the sets $J_\alpha \subset \omega_1$ for $\alpha < 2^{\aleph_1}$ be independent in the sense that all non-trivial boolean combinations are uncountable (not just non-empty). Then $\BB = \sa(\AAA \cup \{J_\alpha : \alpha < 2^{\aleph_1}\} )$ is dichotomous. However, it is quite possible that $\Irrmm(\BB) = \aleph_1$ because the following lemma may apply. This goes in the opposite direction from Lemma \ref{lemma-contains-club}: \begin{lemma} \label{lemma-nomax-split} Assume that $\AAA \subseteq \BB \subset \PP(\omega_1)$, and assume that $\omega_1 = \bigcup\{S_\xi : \xi < \omega_1\}$, where the $S_\xi$ are disjoint countably infinite sets and \[ \forall b \in \BB \backslash \AAA \; \exists \xi \; [ S_\xi \cap b \ne \emptyset \;\&\; S_\xi \backslash b \ne \emptyset ] \ \ . \tag{$\ast$} \] Then $\Irrmm(\BB) = \aleph_1$. \end{lemma} \begin{proof} List each $S_\xi$ as $\{\sigma_\xi^\ell : \ell \in \omega \}$. Then, let $\EE$ be the set of all $\{\sigma_\xi^0, \ldots, \sigma_\xi^\ell\}$ such that $\xi < \omega_1$ and $\ell < \omega$. Then $\sa(\EE) = \AAA$, so $\EE$ is maximally irredundant in $\AAA$. Also, $\EE$ remains maximal in $\BB$. Proof: fix $b \in \BB \backslash \AAA$ and then fix $\xi$ as in $(\ast)$. WLOG $\sigma_\xi^0 \in b$ (otherwise, swap $b/b'$). Then let $\ell$ be least such that $\sigma_\xi^\ell \notin b$; so, $\{\sigma_\xi^0, \ldots, \sigma_\xi^{\ell-1}\} \subseteq b$. Then $\{\sigma_\xi^0, \ldots, \sigma_\xi^{\ell -1}\} = \{\sigma_\xi^0, \ldots, \sigma_\xi^\ell\} \cap b$, so $\EE \cup \{b\}$ is not irredundant. \end{proof} To see how this lemma might apply to $\BB = \sa(\AAA \cup \{J_\alpha : \alpha < 2^{\aleph_1}\} )$: Start with any partition $\{S_\xi : \xi < \omega_1\}$. 
Choose any $T_\xi$ with $\emptyset \subsetneqq T_\xi \subsetneqq S_\xi$. Then, choose the independent $J_\alpha$ so that each $J_\alpha \cap S_\xi$ is either $T_\xi$ or $\emptyset$. Assuming that $2^{\aleph_1} = \aleph_2$, our $\BB$ satisfying Theorem \ref{thm-pi-irred} will in fact be dichotomous and of the form $\sa(\AAA \cup \{J_\alpha : \alpha < \omega_2\} )$, where the $J_\alpha$ are independent, but the $J_\alpha$ will be chosen inductively, in $\omega_2$ steps, to avoid situations such as the one described in Lemma \ref{lemma-nomax-split}. The next section defines some cardinals below $2^{\aleph_1}$ that will be useful both in describing properties of clubs and in deriving a version of Theorem \ref{thm-pi-irred} that applies in some models of $2^{\aleph_1} > \aleph_2$. \section{Some Small Cardinals} \label{sec-small} We begin with some remarks on club subsets of $\omega_1$. \begin{definition} \label{def-club-part} Given a club $C \subseteq \omega_1$, we define the \emph{associated partition} of $\omega_1$ into $\aleph_1$ non-empty countable sets, which we shall call the \emph{$C$--blocks}, and label them as $S_\xi^C$ \textup(or, just $S_\xi$\textup) for $\xi < \omega_1$. If $0 \in C$, write $C = \{\gamma_\xi : \xi < \omega_1\}$ in increasing enumeration; then $S_\xi = [\gamma_\xi, \gamma_{\xi + 1})$. If $0 \notin C$, let $S_\xi^C = S_\xi^{C \cup \{0\}}$. \end{definition} Note that if we are given sets $S_\xi$ satisfying the hypotheses of Lemma \ref{lemma-nomax-split}, then there is a club $C$ such that each $S_\xi$ meets only one $C$--block. Then, $\{S_\xi^C : \xi < \omega_1\}$ also will satisfy the hypotheses of Lemma \ref{lemma-nomax-split}. For our purposes, ``thinner'' clubs will yield ``better'' partitions. As usual, for subsets of $\omega_1$, $D \subseteq^* C$ means that $D \backslash C$ is countable.
Then observe \begin{lemma} \label{lemma-block-finer} If $D \subseteq^* C \subseteq \omega_1$ and $D,C$ are clubs, then all but countably many $D$--blocks are unions of $C$--blocks. \end{lemma} Given $\aleph_1$ clubs $C_\alpha$, for $\alpha < \omega_1$, there is always a club $D$ such that $D \subseteq^* C_\alpha$ for all $\alpha$. Whether this holds for more than $\aleph_1$ clubs depends on the model of set theory one is in. The basic properties here are controlled by the cardinals $\bbbb_{\omega_1}$ and $\dddd_{\omega_1}$. Cardinal characteristics of the continuum (e.g., $\bbbb$, $\dddd$, etc.) are well-known, and are discussed in set theory texts (e.g., \cite{Je, Kunen3}), and in much more detail in the paper of Blass \cite{Blass2}. In analogy with $\bbbb$ and $\dddd$, we use $\bbbb_{\omega_1}$ to denote the least size of an unbounded family in ${\omega_1}^{\omega_1}$, while $\dddd_{\omega_1}$ denotes the least size of a dominating family. Then $\bbbb_{\omega_1}$ is regular and $\aleph_2 \le \bbbb_{\omega_1} \le \dddd_{\omega_1} \le 2^{\aleph_1}$. Furthermore, statements such as $\bbbb_{\omega_1} = \aleph_2$ and $\bbbb_{\omega_1} = 2^{\aleph_1}$ and $\aleph_2 < \bbbb_{\omega_1} < 2^{\aleph_1}$ are consistent with $\CH$ plus $2^{\aleph_1}$ being arbitrarily large; see \cite{Kunen3} \S V.5 for an exposition of these matters. For our purposes here, it will often be useful to rephrase $\bbbb_{\omega_1}$ and $\dddd_{\omega_1}$ in terms of clubs: \begin{lemma} Let $\CCCC$ be the set of all club subsets of $\omega_1$. Then $\dddd_{\omega_1}$ is the least $\kappa$ such that \textup(a\textup) holds and $\bbbb_{\omega_1}$ is the least $\kappa$ such that \textup(b\textup) holds: \[ \begin{array}{ll} a. & \exists \DDDD \subseteq \CCCC \; [ |\DDDD| = \kappa \;\&\; \forall C \in \CCCC \, \exists D \in \DDDD \, [ D \subseteq^* C ]] \\ b. 
& \exists \BBBB \subseteq \CCCC \; [ |\BBBB| = \kappa \;\&\; \neg \exists C \in \CCCC \, \forall D \in \BBBB \, [ C \subseteq^* D ]] \end{array} \ \ . \] \end{lemma} The following definition relates clubs to the proof of Lemma \ref{lemma-nomax-split}. As before, $\AAA$ always denotes the finite-cofinite algebra on $\omega_1$. \begin{definition} \label{def-induced-by} A club $C \subset \omega_1$ is \emph{nice} iff all $S_\xi = S^C_\xi$ are infinite. If $C$ is nice, then $\EE \subset \AAA$ is \emph{induced by} $C$ iff $\EE$ is obtained from the $S_\xi$ as in the proof of Lemma \ref{lemma-nomax-split}. That is, list each $S_\xi$ as $\{\sigma_\xi^\ell : \ell \in \omega \}$; then, $\EE = \{ \{\sigma_\xi^0, \ldots, \sigma_\xi^\ell\} : \xi < \omega_1 \,\&\, \ell < \omega \}$. \end{definition} Of course, $\EE$ is not uniquely defined from $C$, since $\EE$ depends on a choice of an enumeration of each $S_\xi$. Note that $\EE$ must be maximally irredundant in $\AAA$. Whether $\EE$ remains maximal in some $\BB \supseteq \AAA$ will depend on $\BB$. For dichotomous $\BB$, the hypothesis $(\ast)$ of Lemma \ref{lemma-nomax-split}, when $C$ is nice and $S_\xi = S^C_\xi$, is equivalent to saying that no $b \in \BB$ is \emph{blockish} for $C$: \begin{definition} \label{def-blockish} If $C \subseteq \omega_1$ is a club, then $b \subseteq \omega_1$ is \emph{blockish} for $C$ iff both $b$ and $b'$ are unions of $\aleph_1$ $C$--blocks. \end{definition} We next consider the $\omega_1$ version of the \emph{reaping number} $\rrrr$. This $\rrrr$ is less well-known than $\bbbb$ and $\dddd$, but it is discussed in Blass \cite{Blass2}. \begin{definition} \label{def-reap} If $\RR \subseteq [\omega_1]^{\aleph_1}$, then $T \subseteq \omega_1$ \emph{splits} $\RR$ iff $|X \cap T| = | X \setminus T| = \aleph_1$ for all $X \in \RR$. Then, $ \rrrr_{\omega_1}$ is the least cardinality of an $\RR \subseteq [\omega_1]^{\aleph_1}$ such that no $T \subseteq \omega_1$ splits $\RR$. 
\end{definition} Related to this, one might be tempted to define a \emph{strong reaping number}: \begin{definition} \label{def-strong-reap} If $\RR \subseteq [\omega_1]^{\aleph_1}$, then the nice \textup(Definition \ref{def-induced-by}\textup) club $C$ \emph{strongly splits} $\RR$ iff every set $T$ that is blockish for $C$ splits $\RR$. Then, $ \hat\rrrr_{\omega_1}$ is the least cardinality of an $\RR \subseteq [\omega_1]^{\aleph_1}$ such that no nice club strongly splits $\RR$. \end{definition} Some simple remarks: The nice club $C$ strongly splits $\RR$ iff for each $X \in \RR$, all but countably many $C$--blocks meet $X$. Also, if $C \subseteq^* D$ and $D$ strongly splits $\RR$, then $C$ strongly splits $\RR$. Actually, $\hat\rrrr_{\omega_1} = \bbbb_{\omega_1}$ (although the concept of $\hat\rrrr_{\omega_1}$ will be useful); the cardinals that we have defined are related by the following inequalities: \begin{lemma} \label{lemma-ineq} $\aleph_2 \le \bbbb_{\omega_1} = \hat\rrrr_{\omega_1} \le \rrrr_{\omega_1} \le 2^{\aleph_1}$ and $\aleph_2 \le \bbbb_{\omega_1} = \hat\rrrr_{\omega_1} \le \dddd_{\omega_1} \le 2^{\aleph_1}$. \end{lemma} \begin{proof} For $\bbbb_{\omega_1} \le \hat\rrrr_{\omega_1}$: Fix $\RR$ with $|\RR| < \bbbb_{\omega_1}$; we find a nice club that strongly splits $\RR$. For each $X \in \RR$, choose a nice club $C_X$ such that all $C_X$--blocks meet $X$. Since $|\RR| < \bbbb_{\omega_1}$, there is a nice club $C$ such that $C \subseteq^* C_X$ for each $X \in \RR$; then, by the remarks above, $C$ strongly splits $\RR$. For $\hat\rrrr_{\omega_1} \le \bbbb_{\omega_1}$: Fix $\kappa < \hat\rrrr_{\omega_1} $; we shall show that $\kappa < \bbbb_{\omega_1}$. So, let $C_\alpha$, for $\alpha < \kappa$ be clubs. Then, fix a nice club $D$ that strongly splits $\{C_\alpha: \alpha < \kappa\}$; so, for each $\alpha$, all but countably many $D$--blocks meet $C_\alpha$. But then $\tilde D \subseteq^* C_\alpha$ for each $\alpha$, where $\tilde D$ is the set of limit points of $D$.
\end{proof} It is not clear which of the many independence results involving these cardinals on $\omega$ go through for the $\omega_1$ versions. Of course, all these cardinals are $\aleph_2$ if $2^{\aleph_1} = \aleph_2$. Also, the following is easy by standard forcing arguments: \begin{lemma} \label{lemma-forcing} In $V$, assume that $\GCH$ holds and $\kappa > \aleph_2$ is regular. Then there are cardinal-preserving forcing extensions $V[G]$ satisfying each of the following: 1. $\aleph_2 = \bbbb_{\omega_1} < \dddd_{\omega_1} = \rrrr_{\omega_1} = 2^{\aleph_1} = \kappa$. 2. $\aleph_2 < \bbbb_{\omega_1} = \dddd_{\omega_1} = \rrrr_{\omega_1} = 2^{\aleph_1} = \kappa$. 3. $\aleph_2 = \bbbb_{\omega_1} = \dddd_{\omega_1} < \rrrr_{\omega_1} = 2^{\aleph_1} = \kappa$. \end{lemma} \begin{proof} For (1), use the ``countable Cohen'' forcing $\Fn_{\aleph_1}(\kappa, 2)$. For (2), use a countable support iteration to get $V[G]$ satisfying Baumgartner's Axiom (see \cite{Kunen3}, \S V.5). For (3), just use the standard Cohen forcing $\Fn(\kappa, 2)$. Then in $V[G]$, $\rrrr_{\omega_1} = 2^{\aleph_1} = \kappa$ as in (1), but $\bbbb_{\omega_1} = \dddd_{\omega_1} = \aleph_2$ because ccc forcing doesn't change $\bbbb_{\omega_1}$ or $ \dddd_{\omega_1}$. \end{proof} With this proof, $V[G] \models \CH$ in (1) and (2), but $V[G] \models 2^{\aleph_0} = \kappa$ in (3). We do not know whether $ \rrrr_{\omega_1} \ge \dddd_{\omega_1}$ is a $\ZFC$ theorem. For the standard $\omega$ version, $ \rrrr < \dddd$ holds in the Miller real model (see \cite{Blass2}, \S11.9), but it's not clear how to make that construction work on $\omega_1$. The following theorem will be proved in Section \ref{sec-proofs}: \begin{theorem} \label{thm-pi-irred-better} If $ \rrrr_{\omega_1} \ge \dddd_{\omega_1}$ then there is a dichotomous $\BB$ such that $\AAA \subset \BB \subset \PP(\omega_1)$ and $|\BB| = \dddd_{\omega_1}$ and $\Irrmm(\BB) \ge \bbbb_{\omega_1}$.
\end{theorem} Theorem \ref{thm-pi-irred} is an immediate consequence of this, and if $ \rrrr_{\omega_1} \ge \dddd_{\omega_1}$ should turn out to be a $\ZFC$ theorem, we would have a $\ZFC$ example answering Question \ref{question_Irrmm} in the negative. The following shows that one cannot get $|\BB| < \dddd_{\omega_1}$ in Theorem \ref{thm-pi-irred-better}: \begin{lemma} \label{lemma-b-irr} Assume that $\AAA \subseteq \BB \subset \PP(\omega_1)$ and $\BB$ is dichotomous and $|\BB| < \dddd_{\omega_1}$. Then $\Irrmm(\BB) = \aleph_1$. \end{lemma} \begin{proof} For each $b \in \BB \backslash \AAA$, let $f_b : \omega_1 \to \omega_1$ be such that for all $\xi$: $\xi < f_b(\xi)$ and $b \cap (\xi, f_b(\xi)) \ne \emptyset$ and $b' \cap (\xi, f_b(\xi)) \ne \emptyset$. Now, using $|\BB| < \dddd_{\omega_1}$, fix $g : \omega_1 \to \omega_1$ that is not dominated by the $f_b$; that is, for all $b \in \BB \backslash \AAA$, $Y_b := \{\xi : f_b(\xi) < g(\xi)\}$ is uncountable. Let $C$ be a nice club of closure points of $g$; that is, each $\delta \in C$ is a limit ordinal and $g(\xi) < \delta$ whenever $\xi < \delta$. Now, fix $b \in \BB \backslash \AAA$. Then fix $\xi \in Y_b$ with $\xi > \min(C)$. So, we have $\xi < f_b(\xi) < g(\xi)$. Let $\delta = \min\{\mu \in C : \mu \ge g(\xi)\}$. Then $\xi < \delta \to g(\xi) < \delta$, so $\xi < f_b(\xi) < g(\xi) < \delta$. For each $\gamma < \delta$ such that $\gamma \in C$: $\gamma < g(\xi)$ (by definition of $\delta$), so $\gamma \le \xi$ (since $\xi < \gamma \to g(\xi) < \gamma$). Fixing $\gamma = \sup(C \cap \delta)$, we have $\gamma \le \xi < f_b(\xi) < g(\xi) < \delta$. So, $\xi$ and $f_b(\xi)$ are in the same $C$--block, say $S^C_\eta = [\gamma, \delta)$, so by our choice of $f_b$, $b \cap S^C_\eta \ne \emptyset$ and $b' \cap S^C_\eta \ne \emptyset$. Then $(\ast)$ of Lemma \ref{lemma-nomax-split} holds, so $\Irrmm(\BB) = \aleph_1$.
\end{proof} Assuming $\CH$, we can remove the hypothesis that $\BB$ is dichotomous: \begin{lemma} Assume $\CH$, and let $\BB$ be atomic with $\pi(\BB) = \aleph_1$ and $|\BB| < \dddd_{\omega_1}$. Then $\Irrmm(\BB) = \aleph_1$. \end{lemma} \begin{proof} WLOG, $\AAA \subseteq \AAA^* \subseteq \BB \subset \PP(\omega_1)$, where $\AAA^*$ is the set of all $b \in \BB$ such that $b$ or $b'$ is countable. Then $|\AAA^*| = \aleph_1$ by $\CH$. Apply the proof of Lemma \ref{lemma-b-irr}, but just using $f_b$ for $b \in \BB \backslash \AAA^*$. This yields an $\EE \subset \AAA$ such that $\EE$ is maximally irredundant in $\AAA$ and $\EE \cup \{b\}$ is not irredundant for all $b \in \BB \backslash \AAA^*$. Let $\EE^* \subseteq \AAA^*$ be maximally irredundant in $\AAA^*$ with $\EE^* \supseteq \EE$. Then $\EE^*$ is maximally irredundant in $\BB$ and $|\EE^*| = \aleph_1$. \end{proof} \section{A Very Blockish Boolean Algebra} \label{sec-proofs} Here we shall prove Theorem \ref{thm-pi-irred-better}. Our only use of the assumption $ \rrrr_{\omega_1} \ge \dddd_{\omega_1}$ will be to prove Lemma \ref{lemma-dichot-reap} below. First, a remark on preserving dichotomicity: \begin{lemma} \label{lemma-pres-dic} Assume that $\AAA \subseteq \BB \subset \PP(\omega_1)$, $\BB$ is dichotomous, and $b \subseteq \omega_1$. Then TFAE: 1. $\sa(\BB \cup \{b\})$ is dichotomous. 2. $|u \cap b| \ne \aleph_0$ and $|u \cap b'| \ne \aleph_0$ for all $u \in \BB$. \end{lemma} \begin{proof} $(1) \to (2)$ is immediate from the definition of ``dichotomous''. Conversely, if (1) is false, fix $a = (u \cap b) \cup (w \cap b') \in \sa( \BB \cup \{b\})$ such that $|a| = \aleph_0$, where $u,w \in \BB$. Then at least one of $(u \cap b)$ and $ (w \cap b')$ has size $\aleph_0$, so (2) is false. \end{proof} Note that (2) holds whenever $|b| = \aleph_1$ and $b$ splits $\BB \cap [\omega_1]^{\aleph_1}$. \begin{lemma} \label{lemma-dichot-reap} Assume that $\kappa := \dddd_{\omega_1} \le \rrrr_{\omega_1}$. 
Then there is a $\BB$ such that $\AAA \subset \BB \subset \PP(\omega_1)$, and $\BB$ is dichotomous, and $|\BB| = \kappa$, and \[ \text{ For all clubs $C \subset \omega_1$ there is a $b \in \BB$ such that $b$ is blockish for $C$. } \tag{\dag} \] \end{lemma} \begin{proof} Let $C_\mu \subset \omega_1 $ for $\mu < \kappa$ be nice (Definition \ref{def-induced-by}) clubs such that for every club $C\subset \omega_1 $ there is a $\mu$ with $C_\mu \subseteq C$. Now, build a chain $\langle \BB_\mu : \mu \le \kappa \rangle$, where $\AAA \subseteq \BB_\mu \subseteq \PP(\omega_1)$ and $\mu \le \nu \to \BB_\mu \subseteq \BB_\nu$ and all $\BB_\mu$ are dichotomous and $|\BB_\mu| = \max( |\mu|, \aleph_1)$. Let $\BB_0 = \AAA$, and take unions at limits. Choose $\BB_{\mu+1} \supseteq \BB_\mu$ so that $\BB_{\mu+1} = \sa(\BB_\mu \cup \{J_\mu\})$, where $J_\mu$ is blockish for $C_\mu$. Assuming that this can be done, setting $\BB = \BB_\kappa$ satisfies the lemma. Fix $\mu$; we show that an appropriate $J = J_\mu$ can be chosen: $J$ will be blockish for $C_\mu$ and $|u \cap J| = |u \cap J'| = \aleph_1$ for all infinite (= uncountable) $u \in \BB_\mu$. Then, we can simply apply Lemma \ref{lemma-pres-dic}. Let $S_\xi = S_\xi^{C_\mu}$; these sets are disjoint and countably infinite. For $u \in \BB_\mu \cap [\omega_1]^{\aleph_1}$, let $\hat u = \{\xi < \omega_1 : u \cap S_\xi \ne \emptyset\}$. Then $|\hat u| = \aleph_1$. Since $| \BB_\mu | < \rrrr_{\omega_1}$, fix $T \subset \omega_1$ such that $T$ splits $\{\hat u : u \in \BB_\mu \cap [\omega_1]^{\aleph_1} \}$. Then, let $J = \bigcup \{S_\xi : \xi \in T\}$. Then $|u \cap J| = |u \cap J'| = \aleph_1$ for all $u \in \BB_\mu \cap [\omega_1]^{\aleph_1}$. \end{proof} Lemma \ref{lemma-dich-block} below shows (in $\ZFC$) that any $\BB$ satisfying $(\dag)$ also satisfies Theorem \ref{thm-pi-irred-better} --- that is, $\Irrmm(\BB) \ge \bbbb_{\omega_1}$. 
We remark that $(\dag)$ implies that for all clubs $C$, the $S_\xi^C$ fail to satisfy $(\ast)$ of Lemma \ref{lemma-nomax-split}. But that alone proves nothing, since possibly $\Irrmm(\BB) = \aleph_1$ via some $\EE$ that is not at all related to families induced by clubs (Definition \ref{def-induced-by}). But our argument will in fact show (Lemma \ref{lemma-dichot}) that such families are all that we need to consider. We remark that the $J_\mu$ used in the proof of Lemma \ref{lemma-dichot-reap} are independent in the sense that all non-trivial finite Boolean combinations are uncountable; this is easily proved using the fact that $|u \cap J_\mu| = |u \setminus J_\mu| = \aleph_1$ for all infinite $u \in \sa \{J_\nu : \nu < \mu\}$. But, as remarked above (see end of Section \ref{sec-remarks}), independence alone is not enough to prove $\Irrmm(\BB) > \aleph_1$. \begin{lemma} \label{lemma-block-irred} Assume that $\AAA \subseteq \BB \subset \PP(\omega_1)$ and $\BB$ is dichotomous and $|\BB| < \bbbb_{\omega_1}$. In addition, assume that $\EE \subseteq \BB$ and $\EE$ is irredundant. Then there is a nice club $D$ such that for any $c$ that is blockish for $D$: \[ \begin{array}{l} 1.\; |c \cap b | = |c' \cap b| = \aleph_1 \text{ for all infinite } b \in \BB. \\ 2.\; c \notin \BB. \qquad 3.\; \sa(\BB \cup \{c\}) \text{ is dichotomous. } \qquad 4.\; \EE \cup \{c\} \text{ is irredundant. } \end{array} \] \end{lemma} \begin{proof} Using $|\BB| < \bbbb_{\omega_1} = \hat\rrrr_{\omega_1}$ (Lemma \ref{lemma-ineq}), fix a nice club $C$ such that (1) holds for every $c$ that is blockish for $C$. Then, for such $c$, (2) holds (setting $b = c$) and (3) holds by Lemma \ref{lemma-pres-dic}. Now, we cannot simply let $D = C$, since we have not used $\EE$ yet; for example, it is quite possible that $\EE$ contains some $\{\alpha\}$ and $\{\alpha, \beta\}$ and there is a $c$ that is blockish for $C$ such that $c \cap \{\alpha, \beta\} =\{\alpha\}$, so that $\EE \cup \{c\}$ is not irredundant.
But, our proof will replace $C$ by a thinner club $D$ obtained via a chain of elementary submodels. We recall some standard terminology on elementary submodels, following the exposition in \cite{Kunen3} \S III.8: Fix a suitably large regular $\theta$. Then, a \emph{nice chain of elementary submodels of $H(\theta)$} is a sequence $\langle M_\xi : \xi < \omega_1 \rangle$ such that $M_0 = \emptyset$, and $M_\xi \prec H(\theta)$ for $\xi \ne 0$, and all $M_\xi$ are countable, and $\xi < \eta \to M_\xi \in M_\eta \; \& \; M_\xi \subset M_\eta$, and $M_\eta = \bigcup_{\xi < \eta} M_\xi$ for limit $\eta$. For $x \in \bigcup_\xi M_\xi$, $\hgt(x)$ (the \emph{height} of $x$) denotes the $\xi$ such that $x \in M_{\xi+1} \backslash M_\xi$. Given such a chain, let $\gamma_\xi = M_\xi \cap \omega_1 \in \omega_1$; so $\gamma_0 = 0$. The \emph{associated club} is $D = \{\gamma_\xi : \xi < \omega_1\}$. If $S_\xi = [\gamma_\xi, \gamma_{\xi + 1}) = \{\delta : \hgt(\delta) = \xi\}$, then these $S_\xi$ are precisely the $S_\xi^D$ described in Definition \ref{def-club-part}. We shall use such a chain, with $C \in M_1$. This will ensure that $D \subset C \cup \{0\}$. We also assume that $\EE \in M_1$. Let $c$ be blockish for $D$ (and hence for $C$). Then $c \notin \BB$, so $c \notin \EE$. WLOG, $\EE$ is maximally irredundant in $\BB$; if not, we can replace $\EE$ by some maximally irredundant $\tilde \EE \supset \EE$ such that $\tilde \EE \in M_1$. Before proving irredundance of $\EE \cup \{c\}$, we introduce some notation. For each $\delta \in \omega_1$ and each $e \in \EE$, let $h(\delta, e)$ be the \emph{smallest finite} $r \in \sa(\EE \backslash \{e\})$ such that $\delta \in r$; if there is no such finite $r$, let $h(\delta, e) = \infty$. Maximality of $\EE$ plus Lemma \ref{lemma-irred-dense} implies that $\{\delta\} \in \sa(\EE)$, and hence $\{\delta\} \in \sa(\WW)$ for some finite $\WW \subset \EE$. Then $h(\delta, e) = \{\delta\} \ne \infty$ for all $e \in \EE \backslash \WW$. 
Observe that \[ h(\delta, e) \ne \infty \; \& \; h(\varepsilon, e) \ne \infty \; \rightarrow \; \; h(\delta, e) = h(\varepsilon, e) \text{ or } h(\delta, e) \cap h(\varepsilon, e) = \emptyset \ \ . \tag{$\ast$} \] To prove $(\ast)$, use the definition of $h(\delta, e)$ as ``the \emph{smallest} $r$ $\cdots$'': If $\delta \notin h(\varepsilon, e)$, then $h(\delta, e) \cap h(\varepsilon, e) = \emptyset$ (otherwise one could replace $h(\delta, e) $ by the smaller $h(\delta, e) \setminus h(\varepsilon, e)$\,). If $\delta \in h(\varepsilon, e)$ and $\varepsilon \in h(\delta, e)$, then $h(\delta, e) = h(\varepsilon, e)$ (otherwise one could replace both $h(\delta, e)$ and $h(\varepsilon, e)$ by the smaller $h(\delta, e) \cap h(\varepsilon, e)$\,). These two cases are exhaustive: if $\delta \in h(\varepsilon, e)$ but $\varepsilon \notin h(\delta, e)$, then the first case, with $\delta$ and $\varepsilon$ interchanged, would make $h(\delta, e)$ and $h(\varepsilon, e)$ disjoint, although $\delta$ lies in both. Now, assume that $\EE \cup \{c\}$ is not irredundant. Then, fix $a \in \EE$ such that $a \in \sa((\EE \setminus \{a\})\cup \{c\})$. Then, fix $u,w \in \sa(\EE \backslash \{a\})$ such that $a = (u \cap c ) \cup (w \cap c')$. Then $u \cap w \subseteq a \subseteq u \cup w$. Let $s = ( u \cup w ) \setminus (u \cap w ) = u \sym w $. Then $s \in \sa(\EE \backslash \{a\})$. Note that $s$ is finite. To prove this, use (1) four times, plus the fact that $u,w,a \in \BB$: $(w \backslash a) \cap c' = \emptyset$, so $w \backslash a$ is finite. $(u \backslash a) \cap c = \emptyset$, so $u \backslash a$ is finite. $((w \backslash u) \setminus (w \backslash a)) \cap c = \emptyset$, so $(w \backslash u) \setminus (w \backslash a)$ is finite, so $w \backslash u$ is finite. $((u \backslash w) \setminus (u \backslash a)) \cap c' = \emptyset$, so $(u \backslash w) \setminus (u \backslash a)$ is finite, so $u \backslash w$ is finite. Then, $s = (w \backslash u) \cup (u \backslash w)$ is finite. For $\delta \in s$: since $s$ is a finite element of $\sa(\EE \backslash \{a\})$ containing $\delta$, we have $h(\delta, a) \ne \infty$, and minimality gives $h(\delta, a) \subseteq s$. For $\delta \in s$ and $\xi < \omega_1$, $\delta \in M_\xi \leftrightarrow h(\delta, a) \in M_\xi$ (hence $\hgt( h(\delta, a) ) = \hgt(\delta) $).
Proof: The $\leftarrow$ direction is clear because $\delta \in h(\delta, a)$ and $h(\delta, a)$ is finite. For the $\rightarrow$ direction, there are two cases: If $a \in M_\xi$, use $M_\xi \prec H(\theta)$. If $a \notin M_\xi$, use $\EE \in M_\xi \prec H(\theta)$ to get a finite $\WW \in M_\xi$ such that $\WW \subset \EE$ and $\{\delta\} \in \sa(\WW)$. Then $a \notin \WW$ so $h(\delta, a) = \{\delta\}$. Let $t = \bigcup \{ h(\delta, a) : \delta \in s \cap c \}$. Then $s \cap c \subseteq t \subseteq s$. Also, this is a finite union, so $t \in \sa(\EE \backslash \{a\})$. But actually, $t = s \cap c$: If this fails, then fix $\varepsilon \in t \backslash c$. Then $\varepsilon \in h(\delta, a) $ for some $\delta \in s \cap c$. Applying $(\ast)$, $h(\delta, a) = h(\varepsilon, a)$, and so $\hgt(\delta ) = \hgt( h(\delta, a) ) = \hgt( h(\varepsilon, a) ) = \hgt(\varepsilon)$. But $\delta \in c$ and $\varepsilon \notin c$, so this contradicts the fact that $c$ is blockish. So, $s \cap c \in \sa(\EE \backslash \{a\})$, and hence also $s \cap c' \in \sa(\EE \backslash \{a\})$. But then $a = (u \cap w) \cup (u \cap s \cap c) \cup (w \cap s \cap c') \in \sa(\EE \backslash \{a\})$, contradicting irredundance of $\EE$. \end{proof} For dichotomous $\BB$, we have a dichotomy for $\Irrmm(\BB)$; either $\Irrmm(\BB) = \aleph_1$ or $\Irrmm(\BB) \ge \bbbb_{\omega_1}$: \begin{lemma} \label{lemma-dichot} Assume that $\AAA \subseteq \BB \subset \PP(\omega_1)$ and $\BB$ is dichotomous and $\Irrmm(\BB) < \bbbb_{\omega_1}$. Then $\Irrmm(\BB) = \aleph_1$. Furthermore, there is a nice club $D \subset \omega_1$ such that every $\EE \subset \AAA$ that is induced by $D$ is maximally irredundant in $\BB$, and no $c \in \BB$ is blockish for $D$. \end{lemma} \begin{proof} Fix $\FF \subset \BB$ such that $|\FF| < \bbbb_{\omega_1}$ and $\FF$ is maximally irredundant in $\BB$. Let $\widetilde \BB = \sa(\FF)$. 
Then $\AAA \subseteq \widetilde \BB \subseteq \BB \subset \PP(\omega_1)$ and $| \widetilde \BB | < \bbbb_{\omega_1}$. Apply Lemma \ref{lemma-block-irred} to $\FF$ and $\widetilde \BB$. This produces a nice club $D$ such that for any $c$ that is blockish for $D$: $c \notin \widetilde \BB$ and $\FF \cup \{c\}$ is irredundant. Now, consider any $c \in \BB \backslash \AAA$. By maximality of $\FF$ in $\BB$, $c$ is not blockish for $D$. Since $|c| = |c'| = \aleph_1$, there must be some $\xi$ such that $S^D_\xi \cap c \ne \emptyset$ and $S^D_\xi \cap c' \ne \emptyset$; this is condition $(\ast)$ of Lemma \ref{lemma-nomax-split}. Then the proof of that lemma shows that every $\EE \subset \AAA$ that is induced by $D$ is maximally irredundant in $\BB$. \end{proof} It is important here that $\BB$ be dichotomous. Otherwise, let $\BB = \PP(\omega_1)$. Then $\Irrmm(\BB) = \aleph_1$ (Example \ref{ex-max-irr}), but no $\EE \subseteq \AAA$ is maximally irredundant in $\BB$ (Lemma \ref{lemma-block-irred}, applied with $\BB = \AAA$). Lemma \ref{lemma-dichot} implies immediately: \begin{lemma} \label{lemma-dich-block} Assume that $\AAA \subset \BB \subset \PP(\omega_1)$, and $\BB$ is dichotomous, and for all clubs $C \subset \omega_1$ there is some $b \in \BB$ such that $b$ is blockish for $C$. Then $\Irrmm(\BB) \ge \bbbb_{\omega_1}$. \end{lemma} \begin{proofof}{Theorems \ref{thm-pi-irred-better} and \ref{thm-pi-irred}} For Theorem \ref{thm-pi-irred-better}, apply Lemma \ref{lemma-dich-block} to the $\BB$ obtained in Lemma \ref{lemma-dichot-reap}. Then Theorem \ref{thm-pi-irred} is the special case of Theorem \ref{thm-pi-irred-better} where $2^{\aleph_1} = \aleph_2$. \end{proofof} \end{document}
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{note}{Note} \newtheorem{corollary}{Corollary} \newtheorem{remark}{Remark} \title{A restriction on proper actions on homogeneous spaces of reductive type} \begin{abstract} Let $L$ be a reductive subgroup of a reductive Lie group $G$. Let $G/H$ be a homogeneous space of reductive type. We provide a necessary condition for the properness of the action of $L$ on $G/H$. As an application we give examples of spaces that do not admit standard compact Clifford-Klein forms. \end{abstract} \section{Introduction} Let $L$ be a locally compact topological group acting continuously on a locally compact Hausdorff topological space $M$. This action is called {\it \textbf{proper}} if for every compact subset $C \subset M$ the set $$L(C):=\{ g\in L \ | \ g\cdot C \cap C \neq \emptyset \}$$ is compact. In this paper, our main concern is the following question posed by T. Kobayashi \cite{kob3}: \begin{center} How ``large'' can a subgroup of $G$ be \end{center} \begin{center} if it acts properly on a homogeneous space $G/H$? \textbf{(Q1)} \end{center} We restrict our attention to the case where $M=G/H$ is a homogeneous space of reductive type and always assume that $G$ is a linear connected reductive real Lie group with Lie algebra $\mathfrak{g}.$ Let $H\subset G$ be a closed subgroup of $G$ with finitely many connected components and let $\mathfrak{h}$ be the Lie algebra of $H.$ \begin{definition} The subgroup $H$ is reductive in $G$ if $\mathfrak{h}$ is reductive in $\mathfrak{g},$ that is, there exists a Cartan involution $\theta$ of $\mathfrak{g}$ for which $\theta (\mathfrak{h}) = \mathfrak{h}.$ The space $G/H$ is then called a homogeneous space of reductive type. \label{def1} \end{definition} \noindent Note that if $\mathfrak{h}$ is reductive in $\mathfrak{g}$ then $\mathfrak{h}$ is a reductive Lie algebra.
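The following standard example illustrates Definition \ref{def1}. \begin{example} Let $G=SL(p+q,\mathbb{R}),$ so that $\theta (X) = -X^{T}$ is a Cartan involution of $\mathfrak{g}=\mathfrak{sl}(p+q,\mathbb{R}).$ Writing $I=I_{p,q},$ the condition $X \in \mathfrak{so}(p,q)$ reads $X^{T}I + IX = 0.$ Since $I^{2}$ is the identity, the relation $X^{T}I=-IX$ gives $IX^{T} = -XI,$ hence $$(-X^{T})^{T}I + I(-X^{T}) = -XI - IX^{T} = 0.$$ Thus $\theta (\mathfrak{so}(p,q)) = \mathfrak{so}(p,q),$ so $H=SO(p,q)$ is reductive in $G,$ with maximal compact part $\mathfrak{so}(p,q) \cap \mathfrak{so}(p+q) = \mathfrak{so}(p) \oplus \mathfrak{so}(q).$ \end{example}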
It is natural to ask when a closed subgroup of $G$ acts properly on a space of reductive type $G/H.$ This problem was treated, inter alia, in \cite{ben}, \cite{bt}, \cite{kas}, \cite{kob2}, \cite{kob4}, \cite{kob1}, \cite{kul} and \cite{ok}. In \cite{kob2} one can find a very important criterion for a proper action of a subgroup $L$ reductive in $G.$ To state this criterion we need to introduce some additional notation. Let $\mathfrak{l}$ be the Lie algebra of $L.$ Take a Cartan involution $\theta$ of $\mathfrak{g}.$ We obtain the Cartan decomposition \begin{equation} \mathfrak{g}=\mathfrak{k} + \mathfrak{p}. \label{eq1} \end{equation} Choose a maximal abelian subspace $\mathfrak{a}$ in $\mathfrak{p}.$ The subspace $\mathfrak{a}$ is called the \textbf{\textit{maximally split abelian subspace}} of $\mathfrak{p}$ and $\text{rank}_{\mathbb{R}}(\mathfrak{g}) := \text{dim} (\mathfrak{a})$ is called the \textbf{\textit{real rank}} of $\mathfrak{g}.$ It follows from Definition \ref{def1} that $\mathfrak{h}$ and $\mathfrak{l}$ admit Cartan decompositions $$\mathfrak{h}=\mathfrak{k}_{1} + \mathfrak{p}_{1} \ \text{and} \ \mathfrak{l}=\mathfrak{k}_{2} + \mathfrak{p}_{2},$$ given by Cartan involutions $\theta_{1}, \ \theta_{2}$ of $\mathfrak{g}$ such that $\theta_{1} (\mathfrak{h})= \mathfrak{h}$ and $\theta_{2} (\mathfrak{l})= \mathfrak{l}.$ Let $\mathfrak{a}_{1} \subset \mathfrak{p}_{1}$ and $\mathfrak{a}_{2} \subset \mathfrak{p}_{2}$ be maximally split abelian subspaces of $\mathfrak{p}_{1}$ and $\mathfrak{p}_{2},$ respectively.
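To fix ideas, consider $\mathfrak{g}=\mathfrak{sl}(n,\mathbb{R})$ with the Cartan involution $\theta (X)=-X^{T}.$ Then $\mathfrak{k}=\mathfrak{so}(n),$ $\mathfrak{p}$ is the space of symmetric traceless matrices, and one can take $\mathfrak{a}$ to be the space of traceless diagonal matrices, so that $$\text{rank}_{\mathbb{R}}(\mathfrak{sl}(n,\mathbb{R})) = n-1.$$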
One can show that there exist $a,b \in G$ such that $\mathfrak{a}_{\mathfrak{h}} := \text{\rm Ad}_{a}\mathfrak{a}_{1} \subset \mathfrak{a}$ and $\mathfrak{a}_{\mathfrak{l}} := \text{\rm Ad}_{b}\mathfrak{a}_{2} \subset \mathfrak{a}.$ Denote by $W_{\mathfrak{g}}$ the Weyl group of $\mathfrak{g}.$ In this setting the following holds: \begin{theorem}[Theorem 4.1 in \cite{kob2}] The following three conditions are equivalent: \begin{enumerate} \item $L$ acts on $G/H$ properly. \item $H$ acts on $G/L$ properly. \item For any $w \in W_{\mathfrak{g}},$ $w\cdot \mathfrak{a}_{\mathfrak{l}} \cap \mathfrak{a}_{\mathfrak{h}} =\{ 0 \}.$ \end{enumerate} \label{twkob} \end{theorem} \noindent Note that condition 3 in Theorem \ref{twkob} depends on the embeddings of $L$ and $H$ in $G$ only up to inner automorphisms. Theorem \ref{twkob} provides a partial answer to (Q1). \begin{corollary}[Corollary 4.2 in \cite{kob2}] If $L$ acts properly on $G/H$ then $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) + \text{\rm rank}_{\mathbb{R}}(\mathfrak{h}) \leq \text{\rm rank}_{\mathbb{R}} (\mathfrak{g}).$$ \label{coko} \end{corollary} \noindent Hence the real rank of $L$ is bounded by a constant which depends on $G/H,$ no matter how $H$ and $L$ are embedded in $G.$ In this paper we find a similar, stronger restriction for Lie groups $G,H,L$ by means of a certain tool which we call the a-hyperbolic rank (see Section 2, Definition \ref{dd2} and Table \ref{tab1}).
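For example, for $G/H=SL(2k+1,\mathbb{R})/SO(k-1,k+2)$ (one of the spaces of Corollary \ref{co1} below) we have $\text{rank}_{\mathbb{R}}(\mathfrak{g})=2k$ and $\text{rank}_{\mathbb{R}}(\mathfrak{h})=\min (k-1,k+2)=k-1,$ so Corollary \ref{coko} yields $$\text{rank}_{\mathbb{R}}(\mathfrak{l}) \leq k+1$$ for every $L$ reductive in $G$ acting properly on $G/H.$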
In more detail we prove the following \begin{theorem} If $L$ acts properly on $G/H$ then $$\mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{l}) + \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}}(\mathfrak{h}) \leq \mathop{\mathrm{rank}}\nolimits_{\text{\rm a-hyp}} (\mathfrak{g}).$$ \label{twgl} \end{theorem} Recall that a homogeneous space $G/H$ of reductive type admits a \textbf{\textit{compact Clifford-Klein form}} if there exists a discrete subgroup $\Gamma \subset G$ such that $\Gamma$ acts properly on $G/H$ and $\Gamma \backslash G/H$ is compact. The space $G/H$ admits a \textbf{\textit{standard compact Clifford-Klein form}} in the sense of Kassel-Kobayashi \cite{kako} if there exists a subgroup $L$ reductive in $G$ such that $L$ acts properly on $G/H$ and $L \backslash G/H$ is compact. In the latter case, for any discrete cocompact subgroup $\Gamma ' \subset L,$ the space $\Gamma ' \backslash G/H$ is a compact Clifford-Klein form. Therefore it follows from Borel's theorem (see \cite{bor}) that any homogeneous space of reductive type admitting a standard compact Clifford-Klein form also admits a compact Clifford-Klein form. \newline It is not known if the converse statement holds, but all known reductive homogeneous spaces $G/H$ admitting compact Clifford-Klein forms also admit standard compact Clifford-Klein forms. As a corollary to Theorem \ref{twgl}, we get examples of homogeneous spaces of reductive type without standard compact Clifford-Klein forms. In particular, the following examples do not seem to appear in the existing literature: \begin{corollary} The homogeneous spaces $G/H=SL(2k+1, \mathbb{R})/SO(k-1,k+2)$ \linebreak and $G/H=SL(2k+1, \mathbb{R})/Sp(k-1,\mathbb{R})$ for $k \geq 5$ do not admit standard compact Clifford-Klein forms. \label{co1} \end{corollary} \begin{remark} Let us mention the following results, related to the above corollary. \begin{itemize} \item T.
Kobayashi proved in \cite{kobadm} that $SL(2k,\mathbb{R})/SO(k,k)$ for $k\geq 1$ and $SL(n,\mathbb{R})/Sp(l,\mathbb{R})$ \linebreak for $0<2l \leq n-2$ do not admit compact Clifford-Klein forms. \item Y. Benoist proved in \cite{ben} that $SL(2k+1,\mathbb{R})/SO(k,k+1)$ for $k\geq 1$ does not admit compact Clifford-Klein forms. \item Y. Morita proved recently in \cite{mor} that $SL(p+q,\mathbb{R})/SO(p,q)$ does not admit compact Clifford-Klein forms if $p$ and $q$ are both odd. \end{itemize} Note that these works are devoted to the problem of existence of compact Clifford-Klein forms on a given homogeneous space (not only standard compact Clifford-Klein forms). \end{remark} \section{The a-hyperbolic rank and antipodal hyperbolic orbits} Let $\Sigma_{\mathfrak{g}}$ be a system of restricted roots for $\mathfrak{g}$ with respect to $\mathfrak{a}.$ Choose a system of positive roots $\Sigma^{+}_{\mathfrak{g}}$ for $\Sigma_{\mathfrak{g}}.$ Then a fundamental domain of the action of $W_{\mathfrak{g}}$ on $\mathfrak{a}$ can be defined as $$\mathfrak{a}^{+} := \{ X\in \mathfrak{a} \ | \ \alpha (X) \geq 0 \ \text{\rm for any} \ \alpha \in \Sigma^{+}_{\mathfrak{g}} \}.$$ Note that $$sX+tY \in \mathfrak{a}^{+}$$ for any $s,t \geq 0$ and $X,Y\in \mathfrak{a}^{+}$. Therefore $\mathfrak{a}^{+}$ is a convex cone in the linear space $\mathfrak{a}.$ Let $w_{0} \in W_{\mathfrak{g}}$ be the longest element.
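For example, for $\mathfrak{g}=\mathfrak{sl}(n,\mathbb{R})$ and $\mathfrak{a}$ the traceless diagonal matrices one can choose $\Sigma^{+}_{\mathfrak{g}}$ so that $$\mathfrak{a}^{+} = \{ \text{\rm diag}(x_{1},...,x_{n}) \in \mathfrak{a} \ | \ x_{1} \geq x_{2} \geq ... \geq x_{n} \}.$$ The Weyl group $W_{\mathfrak{g}} \cong S_{n}$ acts by permutations of the diagonal entries, and the longest element $w_{0}$ reverses their order; consequently $-w_{0}$ maps $\text{\rm diag}(x_{1},...,x_{n})$ to $\text{\rm diag}(-x_{n},...,-x_{1}),$ and its fixed vectors are exactly those with $x_{i}=-x_{n+1-i}.$ This subspace has dimension $\lfloor n/2 \rfloor,$ which explains the first two rows of Table \ref{tab1}.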
One can show that $$-w_{0}: \mathfrak{a} \rightarrow \mathfrak{a}, \ \ X \mapsto -(w_{0} \cdot X)$$ is an involutive automorphism of $\mathfrak{a}$ preserving $\mathfrak{a}^{+}.$ Let $\mathfrak{b} \subset \mathfrak{a}$ be the subspace of all fixed points of $-w_{0}$ and put $$\mathfrak{b}^{+} := \mathfrak{b} \cap \mathfrak{a}^{+}.$$ Thus $\mathfrak{b}^{+}$ is a convex cone in $\mathfrak{a}.$ We also have $\mathfrak{b} = \text{Span} (\mathfrak{b}^{+}).$ \begin{definition} The dimension of $\mathfrak{b}$ is called the a-hyperbolic rank of $\mathfrak{g}$ and is denoted by $$\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g}).$$ \label{dd2} \end{definition} \noindent The a-hyperbolic ranks of the simple real Lie algebras can be deduced from Table \ref{tab1}. \begin{center} \begin{table}[h] \centering {\footnotesize \begin{tabular}{| c | c | c |} \hline \multicolumn{3}{|c|}{ \textbf{\textit{A-HYPERBOLIC RANK}}} \\ \hline $\mathfrak{g}$ & $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g})$ & $\text{rank}_{\mathbb{R}} (\mathfrak{g})$ \\ \hline $\mathfrak{sl}(2k,\mathbb{R})$ & $k$ & $2k-1$ \\ {\scriptsize $k\geq 2$} & & \\ \hline $\mathfrak{sl}(2k+1,\mathbb{R})$ & $k$ & $2k$ \\ {\scriptsize $k\geq 1$} & & \\ \hline $\mathfrak{su}^{\ast}(4k)$ & $k$ & $2k-1$ \\ {\scriptsize $k\geq 2$} & & \\ \hline $\mathfrak{su}^{\ast}(4k+2)$ & $k$ & $2k$ \\ {\scriptsize $k\geq 1$} & & \\ \hline $\mathfrak{so}(2k+1,2k+1)$ & $2k$ & $2k+1$ \\ {\scriptsize $k\geq 2$} & & \\ \hline $\mathfrak{e}_{6}^{\text{I}}$ & 4 & 6 \\ \hline $\mathfrak{e}_{6}^{\text{IV}}$ & 1 & 2 \\ \hline \end{tabular} } \caption{ This table contains all real forms of simple Lie algebras $\mathfrak{g}^{\mathbb{C}}$ for which $\text{rank}_{\mathbb{R}}(\mathfrak{g}) \neq \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{g}).$ The notation is close to \cite{ov2}, Table 9, pages 312-316.
} \label{tab1} \end{table} \end{center} \noindent A method of calculation of the a-hyperbolic rank of a simple Lie algebra can be found in \cite{bt}. The a-hyperbolic rank of a semisimple Lie algebra equals the sum of the a-hyperbolic ranks of all its simple parts. For a reductive Lie algebra $\mathfrak{g}$ we put $$\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g}) := \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}([\mathfrak{g},\mathfrak{g}]).$$ There is a close relation between $\mathfrak{b}^{+}$ and the set of antipodal hyperbolic orbits in $\mathfrak{g}.$ We say that an element $X \in \mathfrak{g}$ is {\it \textbf{hyperbolic}} if $X$ is semisimple (that is, $\mathrm{ad}_{X}$ is diagonalizable) and all eigenvalues of $\mathrm{ad}_{X}$ are real. \begin{definition} An adjoint orbit $O_{X}:=\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is said to be hyperbolic if $X$ (and therefore every element of $O_{X}$) is hyperbolic. An adjoint orbit $O_{Y}$ is antipodal if $-Y\in O_{Y}$ (and therefore for \linebreak every $Z\in O_{Y},$ $-Z\in O_{Y}$). \end{definition} \begin{lemma}[cf. Fact 5.1 and Lemma 5.3 in \cite{ok}] There is a bijective correspondence between antipodal hyperbolic orbits $O_{X}$ in $\mathfrak{g}$ and elements $Y \in \mathfrak{b}^{+}.$ This correspondence is given by $$\mathfrak{b}^{+}\ni Y \mapsto O_{Y}.$$ Furthermore, for every hyperbolic orbit $O_{X}$ in $\mathfrak{g}$ the set $O_{X} \cap \mathfrak{a}$ is a single $W_{\mathfrak{g}}$-orbit in $\mathfrak{a}$. \label{lma} \end{lemma} \section{The main result} We need two basic facts from linear algebra. \begin{lemma} Let $V_{1},V_{2}$ be vector subspaces of a real linear space $V$ of finite dimension.
Then $$\text{\rm dim} (V_{1}+V_{2})= \text{\rm dim} (V_{1}) + \text{\rm dim} (V_{2}) - \text{\rm dim} (V_{1}\cap V_{2}).$$ \label{lma1} \end{lemma} \begin{lemma} Let $V_{1},...,V_{n}$ be a collection of vector subspaces of a real linear space $V$ of finite dimension and let $A^{+} \subset V$ be a convex cone. Assume that $$A^{+} \subset \bigcup_{k=1}^{n} V_{k}.$$ Then there exists an index $k$ such that $A^{+} \subset V_{k}.$ \label{lma2} \end{lemma} We also need the following technical lemma. Choose a subalgebra $\mathfrak{h}$ reductive in $\mathfrak{g}$ which corresponds to a Lie group $H \subset G.$ Let $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}}$ be the convex cone constructed according to the procedure described in the previous section (for $[\mathfrak{h},\mathfrak{h}]$). \begin{lemma} Let $X\in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+}.$ The orbit $O_{X}:=\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{g}.$ \label{lma3} \end{lemma} \begin{proof} By Lemma \ref{lma} the vector $X$ defines an antipodal hyperbolic orbit in $\mathfrak{h}.$ Therefore we can find $h \in H \subset G$ such that $\text{\rm Ad}_{h}(X) = - X$. Since the maximally split abelian subspace $\mathfrak{a} \subset \mathfrak{g}$ consists of vectors for which $\mathrm{ad}$ is diagonalizable with real eigenvalues, and $$X \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}} \subset \mathfrak{a},$$ the vector $X$ is hyperbolic in $\mathfrak{g}.$ It follows that $\mathop{\mathrm{Ad}}\nolimits (G)(X)$ is a hyperbolic orbit in $\mathfrak{g}$ \linebreak and $-X \in \mathop{\mathrm{Ad}}\nolimits (G)(X).$ \end{proof} Now we are ready to give a proof of Theorem \ref{twgl}.
\begin{proof} Assume that $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{l}) + \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{h}) > \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}} (\mathfrak{g})$ and let $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+},$ $\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+},$ $\mathfrak{b}^{+}$ be appropriate convex cones. If $X \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+}$ then $O_{X}^{H} := \mathop{\mathrm{Ad}}\nolimits (H)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{h}.$ By Lemma \ref{lma3} the orbit $O_{X}^{G} := \mathop{\mathrm{Ad}}\nolimits (G)(X)$ is an antipodal hyperbolic orbit in $\mathfrak{g}.$ By Lemma \ref{lma} there exists $Y \in \mathfrak{b}^{+}$ such that $$O_{X}^{G} = O_{Y}^{G} = \mathop{\mathrm{Ad}}\nolimits (G)(Y).$$ Since $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset \mathfrak{a}_{\mathfrak{h}} \subset \mathfrak{a}$ and $\mathfrak{b}^{+} \subset \mathfrak{a},$ it follows (according to Lemma \ref{lma}) that $X=w_{1} \cdot Y$ for a certain $w_{1} \in W_{\mathfrak{g}}.$ Therefore $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset W_{\mathfrak{g}} \cdot \mathfrak{b}^{+} = \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}.$$ Analogously $$\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}^{+} \subset \bigcup_{w\in W_{\mathfrak{g}}} w \cdot \mathfrak{b}.$$ By Lemma \ref{lma2} there exist $w_{\mathfrak{h}},w_{\mathfrak{l}} \in W_{\mathfrak{g}}$ such that $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+} \subset w_{\mathfrak{h}}^{-1} \cdot \mathfrak{b} \ \ \text{\rm and} \ \ \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+} \subset w_{\mathfrak{l}}^{-1} \cdot \mathfrak{b},$$ because $W_{\mathfrak{g}}$ acts on $\mathfrak{a}$ by linear transformations.
Therefore $$\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \subset w_{\mathfrak{h}}^{-1} \cdot \mathfrak{b} \ \ \text{\rm and} \ \ \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} \subset w_{\mathfrak{l}}^{-1} \cdot \mathfrak{b}$$ where $\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} := \text{Span}(\mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}^{+})$ and $\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} := \text{Span} (\mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}^{+}).$ We obtain $$w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \subset \mathfrak{b} , \ w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]} \subset \mathfrak{b}.$$ By the assumption and Lemma \ref{lma1} $$\text{\rm dim}(w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \cap w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}) >0.$$ Choose $0 \neq Z \in w_{\mathfrak{h}} \cdot \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]} \cap w_{\mathfrak{l}} \cdot \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}.$ Then $$w_{\mathfrak{l}} \cdot X_{\mathfrak{l}}=Z= w_{\mathfrak{h}} \cdot X_{\mathfrak{h}} \ \text{\rm for some} \ X_{\mathfrak{h}} \in \mathfrak{b}_{[\mathfrak{h},\mathfrak{h}]}\backslash \{ 0 \} \ \text{\rm and} \ X_{\mathfrak{l}} \in \mathfrak{b}_{[\mathfrak{l},\mathfrak{l}]}\backslash \{ 0 \}. $$ Taking $w_{2} := w_{\mathfrak{h}}^{-1}w_{\mathfrak{l}} \in W_{\mathfrak{g}},$ we have $X_{\mathfrak{h}}= w_{2} \cdot X_{\mathfrak{l}}$ and $X_{\mathfrak{h}} \in \mathfrak{a}_{\mathfrak{h}}, \ X_{\mathfrak{l}} \in \mathfrak{a}_{\mathfrak{l}}.$ Thus $0 \neq X_{\mathfrak{h}} \in w_{2} \cdot \mathfrak{a}_{\mathfrak{l}} \cap \mathfrak{a}_{\mathfrak{h}}.$ The assertion follows from Theorem \ref{twkob}. \end{proof} We can proceed to a proof of Corollary \ref{co1}.
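For the spaces in Corollary \ref{co1} the ranks involved can be read off from Table \ref{tab1}: $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{sl}(2k+1,\mathbb{R}))=k,$ while $\mathfrak{so}(k-1,k+2)$ and $\mathfrak{sp}(k-1,\mathbb{R})$ do not occur in Table \ref{tab1}, so for both of them the a-hyperbolic rank equals the real rank, namely $k-1.$ Hence Theorem \ref{twgl} gives $$\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{l}) \leq 1$$ for any $L$ reductive in $G$ acting properly on $G/H.$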
For a reductive Lie group $D$ with Lie algebra $\mathfrak{d}$ and Cartan decomposition $$\mathfrak{d} = \mathfrak{k}_{\mathfrak{d}} + \mathfrak{p}_{\mathfrak{d}}$$ we define $d(D) := \text{dim} (\mathfrak{p}_{\mathfrak{d}}).$ We will need the following results. \begin{theorem}[Theorem 4.7 in \cite{kob2}] Let $L$ be a subgroup reductive in $G$ acting properly on $G/H.$ The space $L \backslash G /H$ is compact if and only if $$d(L)+d(H)=d(G).$$ \label{twkk} \end{theorem} \begin{theorem}[\cite{yo}] If $J \subset G$ is a semisimple subgroup then it is reductive in $G.$ \label{twy} \end{theorem} \begin{proposition} Let $L \subset G$ be a semisimple Lie group acting properly on \linebreak $G/H=SL(2k+1, \mathbb{R})/SO(k-1,k+2)$ or $G/H=SL(2k+1, \mathbb{R})/Sp(k-1,\mathbb{R}).$ Then $$\text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$$ \label{p1} \end{proposition} \begin{proof} Since $\mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{g}) = 1+ \mathop{\mathrm{rank}}\nolimits_{\text{a-hyp}}(\mathfrak{h}),$ it follows from Table \ref{tab1} and Theorem \ref{twgl} that if $L$ is simple then $\text{rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$ On the other hand, if $L$ is semisimple then each (non-compact) simple part of $\mathfrak{l}$ adds at least $1$ to the a-hyperbolic rank of $\mathfrak{l}.$ Thus we also have $\text{rank}_{\mathbb{R}}(\mathfrak{l}) \leq 2.$ \end{proof} \textit{Proof of Corollary \ref{co1}.} Assume now that $L$ is reductive in $G.$ Since the Lie algebra $\mathfrak{l}$ is reductive, we have $$\mathfrak{l} = \mathfrak{c}_{\mathfrak{l}} + [\mathfrak{l},\mathfrak{l}],$$ where $\mathfrak{c}_{\mathfrak{l}}$ denotes the center of $\mathfrak{l}.$ It follows from Corollary \ref{coko} that \begin{equation} \text{\rm rank}_{\mathbb{R}}(\mathfrak{l}) \leq k+1, \label{eq2} \end{equation} and by Proposition \ref{p1} we have $\text{rank}_{\mathbb{R}}([\mathfrak{l},\mathfrak{l}]) \leq 2.$ Note that $$d(G)-d(H)\geq k^{2} +2k +2.$$ We will show that if $L$ acts
properly on $G/H$ and $k\geq 5$ then \begin{equation} d(L) < k^{2} + 2k +2. \label{eq4} \end{equation} Let $[\mathfrak{l},\mathfrak{l}] = \mathfrak{k}_{0} + \mathfrak{p}_{0}$ be a Cartan decomposition. From (\ref{eq2}) \begin{equation} d(L) \leq \text{\rm dim} (\mathfrak{c}_{\mathfrak{l}}) + \text{\rm dim} (\mathfrak{p}_{0}) \leq k+1 + \text{\rm dim} (\mathfrak{p}_{0}). \label{eq7} \end{equation} Also, if $\text{rank}_{\mathbb{R}}([\mathfrak{l},\mathfrak{l}]) =2$ then it follows from Table \ref{tab1} that the (only) non-compact simple part of $[\mathfrak{l},\mathfrak{l}]$ is isomorphic to $\mathfrak{sl}(3,\mathbb{R}),$ $\mathfrak{su}^{\ast}(6),$ $\mathfrak{e}_{6}^{\text{IV}}$ or $\mathfrak{sl}(3,\mathbb{C})$ (treated as a simple real Lie algebra). In this case \begin{equation} \text{\rm dim} (\mathfrak{p}_{0}) < 27. \label{eq5} \end{equation} Now assume that $\text{rank}_{\mathbb{R}} ([\mathfrak{l},\mathfrak{l}])=1$ and let $\mathfrak{s} \subset [\mathfrak{l},\mathfrak{l}]$ be the (only) simple part of non-compact type. We have \begin{equation} \text{\rm rank}_{\mathbb{R}} (\mathfrak{s}) =1. \label{eq3} \end{equation} It follows from Theorem \ref{twy} that $\mathfrak{s}$ is reductive in $\mathfrak{g}.$ Therefore $\mathfrak{s}$ admits a Cartan decomposition $$\mathfrak{s} = \mathfrak{k}_{\mathfrak{s}} + \mathfrak{p}_{\mathfrak{s}}$$ compatible with $\mathfrak{g}= \mathfrak{k} + \mathfrak{p},$ that is, $\mathfrak{k}_{\mathfrak{s}} \subset \mathfrak{k}.$ We also have $\text{dim}(\mathfrak{p}_{\mathfrak{s}})=\text{dim}(\mathfrak{p}_{0}).$ Since $\mathfrak{k} = \mathfrak{so}(2k+1)$ we obtain $$\text{\rm rank} (\mathfrak{k}_{\mathfrak{s}}) \leq \text{\rm rank} (\mathfrak{k}) = k.$$ Using the above condition together with (\ref{eq3}) we can check (by a case-by-case study of simple Lie algebras) that \begin{equation} \text{\rm dim} (\mathfrak{p}_{\mathfrak{s}}) < 4k.
\label{eq6} \end{equation} Now (\ref{eq7}), (\ref{eq5}) and (\ref{eq6}) imply that $$d(L) < 5k+1$$ for $k \geq 6,$ and $d(L)<33$ for $k=5.$ Thus we have shown (\ref{eq4}). The assertion follows from Theorem \ref{twkk}.

Maciej Boche\'nski, Department of Mathematics and Computer Science, University of Warmia and Mazury, S\l\/oneczna 54, 10-710, Olsztyn, Poland. email: [email protected]

Marek Ogryzek, Department of Geodesy and Land Management, University of Warmia and Mazury, Prawoche\'nskiego 15, 10-720, Olsztyn, Poland. email: [email protected]

\end{document}
\begin{document} \title[Fractional Jumps]{Full Orbit Sequences in Affine Spaces via Fractional Jumps and Pseudorandom Number Generation} \author{Federico Amadio Guidi} \address{Mathematical Institute, University of Oxford, Oxford, UK} \email{[email protected]} \author{Sofia Lindqvist} \address{Mathematical Institute, University of Oxford, Oxford, UK} \email{[email protected]} \author[Giacomo Micheli]{Giacomo Micheli$^*$} \thanks{$^*$ Corresponding author.} \address{Mathematical Institute, University of Oxford, Oxford, UK} \email{[email protected]} \maketitle \begin{abstract} Let $n$ be a positive integer. In this paper we provide a general theory to produce full orbit sequences in the affine $n$-dimensional space over a finite field. For $n=1$ our construction covers the case of the Inversive Congruential Generators (ICG). In addition, for $n>1$ we show that the sequences produced using our construction are easier to compute than ICG sequences. Furthermore, we prove that they have the same discrepancy bounds as the ones constructed using the ICG. \end{abstract} {\footnotesize\noindent\textbf{Keywords}: full orbit sequences, pseudorandom number generators, inversive congruential generators, discrepancy.}\\ {\footnotesize\noindent\textbf{MSC2010 subject classification}: 11B37, 15B33, 11T06, 11K38, 11K45, 11T23, 65C10, 37P25.} \section{Introduction} In recent years there has been great interest in the construction of discrete dynamical systems with given properties (see for example \cite{bib:eich93,bib:EHHW98,bib:HBM17,bib:ost10,bib:OPS10,bib:OS10degree,bib:OS10length}) both for applications (see for example \cite{bib:BW05,bib:chou95,bib:eich91,bib:eich92,bib:GPOS14, bib:NS02, bib:NS03, bib:TW06,bib:winterhof10}) and for the purely mathematical interest that these objects have (see for example \cite{bib:eich91,bib:EMG09,bib:ferraguti2016existence,bib:FMS16,bib:FMS17,bib:GSW03}).
This paper deals with the problem of finding discrete dynamical systems which can be new candidates for pseudorandom number generation. Let us denote the set of natural numbers by $\mathbb{N}$. Given a finite set $S$, a sequence $\{a_m\}_{m\in \mathbb{N}}$ of elements in $S$ is said to have \emph{full orbit} if for any $s\in S$ there exists $m\in \mathbb{N}$ such that $a_m=s$. Let $q$ be a prime power, $\mathbb{F}_q$ be the finite field of cardinality $q$, and $n$ be a positive integer. In this paper we produce maps $\psi:\mathbb{F}_q^n \rightarrow \mathbb{F}_q^n$ such that \begin{itemize} \item the sequences $\{\psi^m(0)\}_{m\in \mathbb{N}}$ have full orbit (whenever this property is verified, we say that the map $\psi$ is \emph{transitive}), \item the sequences constructed from $\psi$ have nice discrepancy bounds, analogous to those constructed from an Inversive Congruential Generator (ICG), \item they are very inexpensive to iterate: if $n>1$ they are asymptotically less expensive than an ICG for the same bitrate. \end{itemize} In addition, such maps can be described using quotients of degree one polynomials. From a purely theoretical point of view related to the full orbit property, one of the reasons why such constructions are interesting is that one cannot build transitive affine maps (i.e. of the form $x\mapsto Ax+b$, with $A$ an invertible $n\times n$ matrix and $b$ an $n$-dimensional vector) unless either $n=1$ and $q$ is prime, or $n=2$ and $q=2$ (see Theorem \ref{affine_transitivity_theorem}). For $n=1$ our construction covers the well-studied case of the ICG, for which we obtain easy proofs of classical facts (see for example Remark \ref{remarkICGfullorbit}). In fact, we fit the theory of full orbit sequences in a much wider context, where tools from projective geometry can be used to establish properties of the sequences produced with our method (see for example Proposition \ref{theorem_uniformity}). Let us now summarise the results of the paper. 
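As a concrete aside, the full orbit property of the one-dimensional maps above can be checked by brute force. The following Python sketch is illustrative only: the prime $p=11$ and the search over parameter pairs are our choices, not taken from the paper. It iterates the ICG-style map $\psi(x) = a x^{-1} + b$ (with $\psi(0) = b$) and tests whether the orbit of $0$ covers all of $\mathbb{F}_p$:

```python
# Brute-force check of the full orbit property for an ICG-style map on F_p.
# The prime p = 11 is an illustrative choice, not taken from the paper.
p = 11

def psi(x, a, b):
    # psi(x) = a * x^{-1} + b for x != 0, and psi(0) = b;
    # pow(x, p - 2, p) is the inverse of x modulo the prime p (Fermat)
    return b if x == 0 else (a * pow(x, p - 2, p) + b) % p

def has_full_orbit(a, b):
    x, seen = 0, set()
    for _ in range(p):        # psi is a bijection, so p steps suffice
        seen.add(x)
        x = psi(x, a, b)
    return len(seen) == p     # the orbit of 0 visits every element of F_p

full_orbit_pairs = [(a, b) for a in range(1, p) for b in range(p)
                    if has_full_orbit(a, b)]
print(len(full_orbit_pairs) > 0)   # True: some parameters give a full orbit
print(has_full_orbit(1, 0))        # False: x -> 1/x fixes 0, orbit is {0}
```

That some pairs must succeed is consistent with the existence of primitive quadratic polynomials over $\mathbb{F}_{11}$, which give transitive projective maps in the sense developed below.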
The main tool we use to construct full orbit sequences is the notion of fractional jump of projective maps, which is described in Section \ref{affine_jumps}. With such a notion we are able to produce maps in the affine space which can be guaranteed to be transitive when they are fractional jumps of transitive projective maps. In Section \ref{transitivity_projective} we characterise transitive projective maps using the notion of projective primitivity for polynomials (see Definition \ref{projectively_primitive_polynomial}). In Section \ref{uniformity} we show that whenever our sequences come from the iterations of transitive projective automorphisms, they behave quite uniformly with respect to proper projective subspaces (i.e. not many consecutive elements in the sequence can lie in a proper subspace of the projective space). This fact (and in particular Proposition \ref{theorem_uniformity}) will allow us in Section \ref{explicit} to give an explicit description of the fractional jump of a transitive projective map, finally leading to the new explicit constructions of full orbit sequences promised earlier. In turn, such a description and the theory developed in Section \ref{transitivity_projective} allow us to prove the discrepancy bounds of Theorem \ref{thm:discrepancy} in Section \ref{discrepancy}. In Section \ref{computation} we show the computational advantage of our approach compared to the classical ICG one. Finally, we include some conclusions which summarise the results of the paper. \subsection*{Notation} Let us denote the set of natural numbers by $\mathbb{N}$, and the ring of integers by $\mathbb{Z}$. For a commutative ring with unity $R$, let us denote by $R^*$ the group of invertible elements of $R$. We denote by $\mathbb{F}_q$ the finite field of cardinality $q$, which will be fixed throughout the paper, and by $\overline{\mathbb{F}}_q$ an algebraic closure of $\mathbb{F}_q$.
Given an integer $n \geq 1$, we often denote the $n$-dimensional affine space $\mathbb{F}_q^n$ by $\mathbb{A}^n$. The $n$-dimensional projective space over the finite field $\mathbb{F}_q$ is denoted by $\mathbb{P}^n$. Also, we denote by $\mathbb{G}\mathrm{r} (d, n)$ the set of $d$-dimensional projective subspaces of $\mathbb{P}^n$. We denote by $\mathbb{F}_q[x_1,...,x_n]$ the ring of polynomials in $n$ variables with coefficients in $\mathbb{F}_q$. For a polynomial $a \in \mathbb{F}_q[x_1,...,x_n]$ we denote by $\deg a$ its total degree, which we will simply call its degree. Also, for $b\in\mathbb{F}_q[x_1,\dots,x_n]$ we let $V(b)$ denote the set of points $x\in \mathbb{A}^n$ such that $b(x)=0$. We denote by $\mathrm{GL}_n (\mathbb{F}_q)$ the general linear group over the field $\mathbb{F}_q$, i.e. the group of $n \times n$ invertible matrices with entries in $\mathbb{F}_q$, and by $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$ the group of automorphisms of $\mathbb{P}^{n}$. Recall that $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$ can be identified with the quotient group $\mathrm{GL}_{n+1} (\mathbb{F}_q) / \mathbb{F}_q^*\mathrm{Id}$, where $\mathbb{F}_q^*\mathrm{Id}$ is just the subgroup of nonzero scalar multiples of the identity matrix $\mathrm{Id}$. Given a matrix $M \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$, we denote by $[M]$ its class in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$. Let $X$ be either $\mathbb{A}^n$ or $\mathbb{P}^n$. We will say that a map $f : X \rightarrow X$ is \emph{transitive}, or equivalently that it \emph{acts transitively on $X$}, if for any $x, y \in X$ there exists an integer $i \geq 0$ such that $y = f^i (x)$. Equivalently, $f$ is transitive if and only if for any $x \in X$ the sequence $\{ f^m(x)\}_{m \in \mathbb{N}}$ has full orbit, that is $\{ f^m (x) \, : \, m \in \mathbb{N} \} = X$. 
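The definition of transitivity just given can be tested directly on small examples. Here is a minimal Python sketch, with illustrative sample maps on $\mathbb{Z}/N\mathbb{Z}$ of our own choosing (not from the paper), implementing the criterion that $f$ is transitive if and only if the forward orbit of every point is all of $X$:

```python
# Direct implementation of the transitivity definition on the finite set
# X = {0, ..., N-1}: f is transitive iff the forward orbit of every point
# is all of X.  N iterations suffice, since an orbit on N points is
# eventually periodic with preperiod + period at most N.
def is_transitive(f, N):
    for x in range(N):
        y, orbit = x, set()
        for _ in range(N):
            orbit.add(y)
            y = f(y)
        if len(orbit) != N:
            return False
    return True

N = 10
print(is_transitive(lambda x: (x + 1) % N, N))  # True: translation cycles through Z/N
print(is_transitive(lambda x: (2 * x) % N, N))  # False: the orbit of 0 is just {0}
```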
A map $f:\mathbb{A}^n\rightarrow \mathbb{A}^n$ is said to be affine if there exist $A\in \mathrm{GL}_n(\mathbb{F}_q)$ and $b\in \mathbb{F}_q^n$ such that $f(x)=Ax+b$ for any $x\in \mathbb{A}^n$. Let $G$ be a group acting on a set $S$. The orbit of an element $s\in S$ will be denoted by $\mathcal O(s)$. For any element $g\in G$, let us denote by $o(g)$ the order of $g$ in $G$. We write $f\ll g$ or $f=O(g)$ to mean that for some positive constant $C$ it holds that $|f|\le Cg$. The notation $f\ll_\delta g$ or $f=O_\delta(g)$ means the same, but now the constant $C$ may depend on the parameter $\delta$. For any real vector $\mathbf{h}=(h_1,\dots, h_n)$, we write $\|\mathbf{h}\|_{\infty}= \max\{|h_j|\, : \, j\in \{1,\dots, n\}\}$. Finally, for any prime $p$ and any $z \in \mathbb{Z}$ we write $e_p (z) = \exp (2 \pi i z / p)$. \section{Fractional jumps} \label{affine_jumps} Fix the standard projective coordinates $X_0, \ldots, X_n$ on $\mathbb{P}^n$, and the canonical decomposition \begin{equation} \label{decomposition} \mathbb{P}^n = U \cup H, \end{equation} where \begin{align*} U &= \set{[X_0: \ldots: X_n] \in \mathbb{P}^n \, : \, X_n \neq 0}, \\ H &= \set{[X_0: \ldots: X_n] \in \mathbb{P}^n \, : \, X_n = 0}. \end{align*} There is a natural isomorphism of the affine $n$-dimensional space onto $U$ given by \begin{equation} \label{pi_definition} \pi : \mathbb{A}^n \xrightarrow{\sim} U, \quad (x_1, \ldots, x_n) \mapsto [x_1: \ldots: x_n: 1]. \end{equation} Now let $\Psi$ be an automorphism of $\mathbb{P}^n$. We give the following definitions: \begin{definition} For $P \in U$, the \emph{fractional jump index of $\Psi$ at $P$} is \begin{equation*} \mathfrak{J}_P = \min \set{k \geq 1 \, : \, \Psi^k (P) \in U}. \end{equation*} \end{definition} \begin{remark} The fractional jump index $\mathfrak{J}_P$ is always finite, as it is bounded by the order of $\Psi$ in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$.
\end{remark} \begin{definition} The \emph{fractional jump of $\Psi$} is the map \begin{equation*} \psi : \mathbb{A}^n \rightarrow \mathbb{A}^n, \quad x \mapsto \pi^{-1} \Psi^{\mathfrak{J}_{\pi(x)}} \pi (x). \end{equation*} \end{definition} Roughly speaking, the purpose of defining this new map is to avoid the points which are mapped outside $U$ via $\Psi$. This is done simply by iterating $\Psi$ until the image of $\pi (x)$ ends up again in $U$. In this definition, $\pi$ is simply used to obtain the final map defined over $\mathbb{A}^n$ instead of $U$. A priori, one of the issues here is that a global description of the map might be difficult to compute, as in principle it depends on each of the $x\in \mathbb{A}^n$. It is interesting to see that this does not happen in the case in which $\Psi$ is transitive on $\mathbb{P}^n$: in fact, we will show in Section \ref{explicit} that there always exists a set of indices $I$, a disjoint covering $\set{U_i}_{i \in I}$ of $\mathbb{A}^n$, and a family $\set{f^{(i)}}_{i \in I}$ of rational maps of degree $1$ on $\mathbb{A}^n$ such that \begin{enumerate} \item[i)] $|I| \leq n+1$, \item[ii)] $f^{(i)}$ is well-defined on $U_i$ for every $i \in I$, \item[iii)] $\psi (x) = f^{(i)} (x)$ if $x \in U_i$. \end{enumerate} That is, $\psi$ can be written as a multivariate linear fractional transformation on each $U_i$. In addition, for any fixed $i\in \{1,\dots, n+1\}$, all the denominators of the components of $f^{(i)}$ will be equal. \begin{example} \label{inversive} Let $n=1$. For $a\in \mathbb{F}_q^*$ and $b\in \mathbb{F}_q$, setting \begin{equation*} \Psi ([X_0: X_1]) = [b X_0 + a X_1: X_0] \end{equation*} we get the case of the inversive congruential generator. In fact, the fractional jump index of $\Psi$ is given by \begin{equation*} \mathfrak{J}_P = \begin{cases} 1, & \text{if } P \neq [0: 1], \\ 2, & \text{if } P = [0: 1], \end{cases} \end{equation*} and $\Psi^2 ([0: 1]) = [b: 1]$.
Therefore, the fractional jump $\psi$ of $\Psi$ is defined on the covering $\set{U_1, U_2}$, where $U_1 = \mathbb{A}^1 \setminus \set{0}$ and $U_2 = \set{0}$, by \begin{equation*} \psi (x) = \begin{cases} \frac{a}{x} + b, & \text{if } x \neq 0, \\ b, & \text{if } x = 0. \end{cases} \end{equation*} The inversive sequence is then given by $\set{\psi^m (0)}_{m \in \mathbb{N}}$, which has full orbit under suitable assumptions on $a$ and $b$ (see for example \cite[Lemma FN]{bib:chou95}). \end{example} \begin{remark} \label{remark_transitivity_affine_jump} Let $\Psi$ be an automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. It is immediate to see that if $\Psi$ acts transitively on $\mathbb{P}^n$ then $\psi$ acts transitively on $\mathbb{A}^n$. \end{remark} For the case $n = 1$, the next proposition shows that transitivity of $\Psi$ and of its fractional jump $\psi$ are actually equivalent, under the additional assumption that $\Psi$ sends a point of $U$ to a point of $H$ (which is equivalent to asking that the induced map on $\mathbb{A}^1$ is not affine). \begin{proposition} \label{transitivity_affine_jump_P1} Let $\Psi$ be an automorphism of $\mathbb{P}^1$ and let $\psi$ be its fractional jump. Assume that $\Psi$ sends a point of $U$ to the point at infinity. Then, $\Psi$ acts transitively on $\mathbb{P}^1$ if and only if $\psi$ acts transitively on $\mathbb{A}^1$. \end{proposition} \begin{proof} As already stated in Remark \ref{remark_transitivity_affine_jump}, if $\Psi$ is transitive on $\mathbb{P}^1$ then $\psi$ is obviously transitive on $\mathbb{A}^1$. Conversely, assume that $\psi$ is transitive on $\mathbb{A}^1$. Consider the decomposition $\mathbb{P}^1 = U \cup H$ of $\mathbb{P}^1$ as in \eqref{decomposition}. Since $n = 1$, we have $H = \set{P_0}$, for $P_0 = [1 : 0]$.
Since there exists $P_1 \in U$ such that $\Psi (P_1) = P_0$, we have that $\Psi^2 (P_1) = \Psi (P_0) \in U$, as otherwise the point $P_0$ would have two preimages under $\Psi$, which is not possible as $\Psi$ is an automorphism, and so in particular a bijection. We have to prove that given $P, Q \in \mathbb{P}^1$ there exists an integer $i\geq 0$ such that $Q = \Psi^i (P)$. Assume that $P$ and $Q$ are distinct, as otherwise we can simply set $i = 0$. We distinguish two cases: either $P, Q \in U$, or one of the two, say $P$, is equal to $P_0$ and $Q \in U$. In the first case, the claim follows by transitivity of $\psi$. In the second case, we reduce to the previous case by considering $\Psi (P_0), Q \in U$. \end{proof} One can actually prove that affine transformations of $\mathbb{A}^n$ are never transitive, unless restrictive conditions on $q$ and $n$ apply. The result that follows will not be used in the rest of the paper, but it provides additional motivation for the study of fractional jumps of projective maps; for completeness, we include its proof. \begin{theorem} \label{affine_transitivity_theorem} There is no transitive affine transformation of $\mathbb{A}^n$ unless $n = 1$ and $q$ is prime, or $q = 2$ and $n = 2$, with explicit examples in both cases. \end{theorem} \begin{proof} For convenience of notation, in this proof we will identify the points of $\mathbb{A}^n$ with column vectors in $\mathbb{F}_q^n$. Let us first deal with the pathological cases. For $n=1$ it is trivial to observe that $x\mapsto x+1$ has full orbit if and only if $q$ is prime. For $n=2$ and $q=2$, we get by direct check that the map \begin{equation*} \varphi \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \in \mathbb{F}_2^2, \end{equation*} has full orbit.
Let $\varphi$ be an affine transformation of the $n$-dimensional affine space over $\mathbb{F}_q$. Then, by definition there exist $A \in \mathrm{GL}_n (\mathbb{F}_q)$ and $b \in \mathbb{F}_q^n$ such that \begin{equation*} \varphi (x) = A x + b, \quad x \in \mathbb{F}_q^n. \end{equation*} Assume by contradiction that $\varphi$ is transitive, so that the order $o (\varphi)$ of $\varphi$ is $q^n$. Denote by $p$ the characteristic of $\mathbb{F}_q$. We first prove that the order $o (A)$ of $A$ in $\mathrm{GL}_n (\mathbb{F}_q)$ is $q^n / p$. Then we will show how this leads to a contradiction. Let $j$ be the smallest integer such that \begin{equation*} \varphi^j (x) = x + c, \quad \text{for all }x \in \mathbb{F}_q^n, \end{equation*} for some $c \in \mathbb{F}_q^n$. As \begin{equation} \label{explicit_affine} \varphi^j (x) = A^j x + \sum_{i = 0}^{j-1} A^i b, \quad x \in \mathbb{F}_q^n, \end{equation} we get $o(A) = j$. If $c = 0$, then $o(\varphi) = j = o (A) \leq q^n - 1$, so that $\varphi$ cannot be transitive. We then have $c \neq 0$. By \eqref{explicit_affine}, we get $\varphi^{j p} = \mathrm{Id}$, therefore $o(\varphi)\mid jp$. We now prove that $o(\varphi) = jp$. Write $o(\varphi) = j s + r$, with $0 \leq r < j$. Then, we have \begin{align*} \varphi^{j s + r} (x) &= \varphi^r (x) + s c \\ &= A^r x + v, \quad x \in \mathbb{F}_q^n, \end{align*} for a suitable $v \in \mathbb{F}_q^n$. Since $\varphi^{j s + r} = \mathrm{Id}$, we get that $A^r x + v = x$ for all $x \in \mathbb{F}_q^n$, and so we must have $r = 0$ and $v=0$. It follows that $\varphi^{j s} (x) = x + s c = x$ for all $x \in \mathbb{F}_q^n$, which gives $p \mid s$, as $c \neq 0$, so that we get $p \leq s$, and then $o(\varphi) = j s \geq jp$. Therefore we conclude that $o (\varphi) = jp$. As $\varphi$ is assumed to be transitive, we have that $j p = q^n$, and so $o (A) = j = q^n / p$.
Essentially, what we have proved up to now is that, if such a transitive affine map $\varphi(x)=Ax+b$ exists, then it must have the property that $o (A) = q^n / p$. Let $\mu_A (T) \in \mathbb{F}_q [T]$ be the minimal polynomial of $A$. By the fact that $o (A) = q^n / p$ we get \begin{equation*} \mu_A (T) \mid T^{q^n / p} - 1 = (T-1)^{q^n / p}. \end{equation*} Then, $\mu_A (T) = (T - 1)^d$, for some $d \leq n$, as the degree of the minimal polynomial is less than or equal to the degree of the characteristic polynomial by Cayley-Hamilton. From basic ring theory, one gets that the order of $A$ in $\mathrm{GL}_n (\mathbb{F}_q)$ is equal to the order of the class $\overline{T}$ of $T$ in the unit group $(\mathbb{F}_q[T] / (\mu_A (T)))^* = (\mathbb{F}_q[T] / ((T - 1)^d))^*$ of the quotient ring. Let us now assume $q^n / p^2 \geq n$. In this case we have \begin{equation*} \overline{T}^{q^n / p^2} = (\overline{T}-1)^{q^n / p^2} + 1 = 1, \end{equation*} as $q^n / p^2 \geq n \geq d$. Therefore, $o (A)=o(\overline T)\leq q^n/p^2 < q^n / p$, from which the contradiction follows. Therefore we can restrict to the case $q^n / p^2 < n$. It is easy to see that this inequality forces $q=p$: in fact if $q=p^k$ and $k\geq 2$, then $q^n/p^2=p^{kn-2}\geq p^{2n-2}\geq 4^{n-1}\geq n$. Therefore, the only remaining cases correspond to the solutions of $p^{n-2}<n$, which consist only of the following: $n=3$ and $p=2$, or $n=1$ and $p$ any prime, or $n=2$ and $p$ any prime. For $n=3$ and $p=2$ an exhaustive computation shows that there is no transitive affine map. Also, we already know that in the case $n=1$ and $p$ any prime we have such a transitive map, as this is one of the pathological cases. For the case $n=2$ we argue as follows. Let \[\varphi(x)=Ax+b\] be such a transitive affine map. Clearly $A\in \mathrm{GL}_2(\mathbb{F}_p)$ must be different from the identity matrix, as otherwise $\varphi$ cannot have full orbit. So the minimal polynomial of $A$ is different from $T-1$.
On the other hand, the minimal polynomial of $A$ must divide $(T-1)^d$. Since $n=2$, we have that $d=2$. In $\mathrm{GL}_2(\mathbb{F}_p)$ having minimal polynomial $(T-1)^2$ forces a matrix to be conjugate to a single Jordan block of size $2$ with eigenvalue $1$, hence there exists $C\in \mathrm{GL}_2(\mathbb{F}_p)$ such that \[CAC^{-1}= \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \] Let us now consider again the map $\varphi$. Clearly, $\varphi$ is transitive if and only if the map $\widetilde \varphi=C\varphi C^{-1}$ is. For any $x\in \mathbb{F}_p^2$ we have that $\widetilde \varphi(x)=C(AC^{-1}x+b)$. Therefore the map $\widetilde \varphi$ can be written as \[\widetilde \varphi\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} r \\ s \end{pmatrix}\] for some $r,s\in \mathbb{F}_p$. We will now prove that $\widetilde \varphi^p\begin{pmatrix} r \\ s\end{pmatrix}=\begin{pmatrix} r \\ s \end{pmatrix}$, so that $\varphi$ cannot be transitive, as the orbit starting from $c:=\begin{pmatrix} r \\ s\end{pmatrix}$ visits at most $p$ points. \begin{align*} \widetilde \varphi^p\begin{pmatrix} r \\ s\end{pmatrix}&=\sum^{p}_{i=0} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^ic \\ &= c+\sum^p_{i=1}\begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} r \\ s \end{pmatrix} \\ &= c+\sum^p_{i=1}\begin{pmatrix} r+is \\ s \end{pmatrix}=c+ \begin{pmatrix} s\sum^p_{i=1}i \\ 0 \end{pmatrix}. \end{align*} But the sum $\sum^p_{i=1}i$ is nonzero modulo $p$ if and only if $p= 2$. Therefore, such a transitive map could exist only for $p=n=2$. Since we have already provided an example of such a transitive map, the proof of the theorem is complete. \end{proof} \section{Transitive actions via projective primitivity} \label{transitivity_projective} In this section we characterise transitive projective automorphisms.
\begin{definition} \label{projectively_primitive_polynomial} A polynomial $\chi(T) \in \mathbb{F}_q [T]$ of degree $m$ is said to be \emph{projectively primitive} if the two following conditions are satisfied: \begin{enumerate} \item[i)] $\chi(T)$ is irreducible over $\mathbb{F}_q$, \item[ii)] for any root $\alpha$ of $\chi(T)$ in $\mathbb{F}_{q^m} \cong \mathbb{F}_q[T] / (\chi(T))$ the class $\overline{\alpha}$ of $\alpha$ in the quotient group $G = \mathbb{F}_{q^m}^* / \mathbb{F}_q^*$ generates $G$. \end{enumerate} \end{definition} \begin{remark} Note that if a polynomial $\chi(T)\in \mathbb{F}_q[T]$ of degree $m$ is primitive, i.e. it is irreducible and any of its roots in $\mathbb{F}_{q^m} \cong \mathbb{F}_q[T] / (\chi(T))$ generates the multiplicative group $\mathbb{F}_{q^m}^*$, then it is obviously projectively primitive. The class of projectively primitive polynomials is in general larger than the class of primitive polynomials: for example, take the polynomial $\chi(T)=T^3+T+1\in \mathbb{F}_5[T]$. One can check that this polynomial is irreducible but is not primitive. In fact, let $\alpha$ be a root of $\chi(T)$. Since $\alpha^{62}=1$ in $\mathbb{F}_5[T] / (\chi(T))\cong \mathbb{F}_{5^3}$, we have $o(\alpha)\neq 5^3-1=124$, and therefore $\chi(T)$ is not primitive. On the other hand, $G=\mathbb{F}_{5^3}^* / \mathbb{F}_5^*$ has prime cardinality $|G|=(5^3-1)/(5-1)=31$ and $\overline{\alpha}\neq 1$ in $G$. It follows immediately that $\overline \alpha$ has to be a generator of $G$. \end{remark} \begin{remark} Let $M, M' \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$ be such that $[M] = [M']$ in $\mathrm{PGL}_{n+1} (\mathbb{F}_q)$, and let $\chi_M (T), \chi_{M'} (T) \in \mathbb{F}_q [T]$ be their characteristic polynomials. It is immediate to see that $\chi_M (T)$ is projectively primitive if and only if $\chi_{M'} (T)$ is projectively primitive.
\end{remark} We are now ready to give a full characterisation of transitive projective automorphisms on $\mathbb{P}^n$. \begin{theorem} \label{transitivity_characterisation} Let $\Psi$ be an automorphism of $\mathbb{P}^n$. Write $\Psi = [M] \in \mathrm{PGL}_{n+1} (\mathbb{F}_q)$ for some $M \in \mathrm{GL}_{n+1} (\mathbb{F}_q)$. Then, $\Psi$ acts transitively on $\mathbb{P}^n$ if and only if the characteristic polynomial $\chi_M (T) \in \mathbb{F}_q [T]$ of $M$ is projectively primitive. \end{theorem} \begin{proof} For simplicity of notation, set $N = |\mathbb{P}^n| = (q^{n+1}-1) / (q-1)$. Assume that $\chi_M (T)$ is projectively primitive. We prove that for any $P \in \mathbb{P}^n$ we have that $\Psi^k (P) \neq P$ for $k \in \set{1, \ldots, N -1}$. Suppose by contradiction that there exists $P_0 \in \mathbb{P}^n$ such that $\Psi^k (P_0) = P_0$ for some $k \in \set{1, \ldots, N-1}$. Let $v_0 \in \mathbb{F}_q^{n+1} \setminus \set{0}$ be a representative of $P_0$. Then, there exists $\lambda \in \mathbb{F}_q^*$ such that \begin{equation*} M^k v_0 = \lambda v_0. \end{equation*} This means that $v_0$ is an eigenvector for the eigenvalue $\lambda$ of $M^k$, which implies that $\lambda = \alpha^k$ for some root $\alpha$ of $\chi_M (T)$ in $\mathbb{F}_{q^{n+1}}$. But now, the class $\overline{\alpha}^k$ of $\alpha^k$ in $G = \mathbb{F}_{q^{n+1}}^* / \mathbb{F}_q^*$ is $\overline{\lambda} = \overline{1}$, contradicting the hypothesis that $\overline{\alpha}$ generates $G$. Conversely, assume that $\Psi$ is transitive, so that for any $P \in \mathbb{P}^n$ we have that $\Psi^k (P) \neq P$ for $k \in \set{1, \ldots, N -1}$. Let $\alpha$ be a root of $\chi_M(T)$ in its splitting field and $h$ be a positive integer such that $\mathbb{F}_{q^h}\cong\mathbb{F}_q(\alpha)$. Clearly $1\leq h\leq n+1$.
We also have that $\alpha \neq 0$ as $\det M \neq 0$, and $\alpha \notin \mathbb{F}_q^*$ as otherwise $M v_0 = \alpha v_0$ for some eigenvector $v_0 \in \mathbb{F}_q^{n+1} \setminus \set{0}$ for the eigenvalue $\alpha$, so that $\Psi (P_0) = P_0$ for $P_0$ the class of $v_0$ in $\mathbb{P}^n$, in contradiction with the fact that $\Psi$ is transitive. Let $d$ be the order of the class $\overline{\alpha}$ of $\alpha$ in $\mathbb{F}_{q^h}^* / \mathbb{F}_q^*$. Then, there exists $\lambda \in \mathbb{F}_q^*$ such that $\alpha^d = \lambda$. Now, $\alpha^d$ is an eigenvalue of $M^d$, and so $M^d v_1 = \lambda v_1$ for some eigenvector $v_1\in \mathbb{F}_q^{n+1} \setminus \set{0}$ for the eigenvalue $\alpha^d$. Thus, $\Psi^d (P_1) = P_1$ for $P_1$ the class of $v_1$ in $\mathbb{P}^n$, and so $d=N$ by the transitivity of $\Psi$. Therefore we have $(q^{n+1}-1)/(q-1)=N=d\leq (q^{h}-1)/(q-1)$. Now, since $h\leq n+1$, this forces $h = n+1$, so that $\chi_M$ is irreducible, which together with $d=N$ gives projective primitivity for $\chi_M$, as we wanted. \end{proof} \begin{remark}\label{remarkICGfullorbit} When $n = 1$, our approach gives immediately the criterion to get maximal period for inversive congruential generators, see for example \cite[Lemma FN]{bib:chou95}. To see this, set $\Psi$ and $\psi$ as in Example \ref{inversive}, so that $\set{\psi^m (0)}_{m \in \mathbb{N}}$ is an inversive sequence. By Proposition \ref{transitivity_affine_jump_P1}, transitivity of $\Psi$ and $\psi$ are equivalent. If $\chi (T) = T^2 - a T - b$ is irreducible, then $\psi$ acts transitively on $\mathbb{A}^1$ if and only if the class $\overline{\alpha}$ of a root $\alpha$ of $\chi(T)$ in $G = \mathbb{F}_{q^{2}}^* / \mathbb{F}_q^*$ generates $G$, which is itself equivalent to the fact that $\alpha^{q-1}$ has order $q+1$ in $\mathbb{F}_{q^2}^*$ (which is in fact the condition given in \cite[Lemma FN]{bib:chou95}). 
\end{remark} \section{Subspace Uniformity} \label{uniformity} In this section we show that sequences associated to iterations of transitive projective maps behave ``uniformly'' with respect to subspaces, i.e. not too many consecutive points can lie in the same projective subspace of $\mathbb{P}^n$. More precisely, we have the following: \begin{proposition} \label{theorem_uniformity} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$. For any $P \in \mathbb{P}^n$ and any $d \in \set{1, \ldots, n-1}$ there is no $W \in \mathbb{G}\mathrm{r} (d, n)$ such that $\Psi^i (P) \in W$ for all $i \in \set{0, \ldots, d+1}$. \end{proposition} \begin{proof} Suppose by contradiction that there exist a projective subspace $W$ of dimension $d$ and a point $P\in \mathbb{P}^n$ such that $\Psi^i (P) \in W$ for all $i\in \set{0, \ldots, d+1}$. Let $W'$ be the subspace of $\mathbb{F}_q^{n+1}$ whose projectivisation is $W$, and let $v \in \mathbb{F}_q^{n+1} \setminus \set{0}$ be a representative for $P$. Let also $M\in \mathrm{GL}_{n+1}(\mathbb{F}_q)$ be a representative for $\Psi$. Consider now the smallest integer $h$ such that $M^h v$ is linearly dependent on $\{M^i v \, : \, i\in \{0,\dots, h-1\}\}$ over $\mathbb{F}_q$. Since $M^iv$ is contained in $W'$ for any $i\in \{0,\dots, d+1\}$, and $W'$ has dimension $d+1$, we have that $h \leq d+1$. Therefore, $M^hv$ can be rewritten in terms of lower powers of $M$ applied to $v$, which in turn forces the span of $\{ M^iv \, : \, i\in \{0,\dots, h-1\}\}$ over $\mathbb{F}_q$ to be an invariant space for $M$. It follows that the characteristic polynomial of $M$ has a non-trivial factor of degree $h \leq d+1$. Since $d+1 \leq n$, this factor is proper, contradicting Theorem \ref{transitivity_characterisation}, by which the characteristic polynomial of $M$ has to be irreducible.
\end{proof} \begin{remark} This result is optimal with respect to $d$, as for any set $S$ of $d+2$ points of $\mathbb{P}^n$ there always exists a $W\in \mathbb{G}\mathrm{r} (d+1, n)$ containing $S$. \end{remark} Fix the canonical decomposition $\mathbb{P}^n = U \cup H$ as in \eqref{decomposition}. \begin{corollary} \label{bound_affine_jump_index} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$. For any $P \in U$ the fractional jump index $\mathfrak{J}_P$ of $\Psi$ at $P$ is bounded by $n+1$. \end{corollary} \begin{proof} Assume by contradiction $\mathfrak{J}_P \geq n+2$. Then, setting $P' = \Psi (P)$ we get $\Psi^i (P') \in H$ for all $i \in \set{0, \ldots, n}$. But we have $H \in \mathbb{G}\mathrm{r} (n-1, n)$, and so this violates Proposition \ref{theorem_uniformity}. \end{proof} \section{Explicit description of fractional jumps} \label{explicit} Let $\Psi$ be an automorphism of $\mathbb{P}^n$. In this section we will give an explicit description of the fractional jump $\psi$ of $\Psi$. First of all, fix homogeneous coordinates $X_0, \ldots, X_n$ on $\mathbb{P}^n$, fix the canonical decomposition $\mathbb{P}^n = U \cup H$ as in \eqref{decomposition} and the map $\pi$ as in \eqref{pi_definition}, and write $\Psi\in \mathrm{PGL}_{n+1} (\mathbb{F}_q)$ as \begin{equation*} \Psi = [F_0: \ldots: F_n], \end{equation*} where each $F_j$ is an homogeneous polynomial of degree $1$ in $\mathbb{F}_q [X_0, \ldots, X_n]$. Fix now affine coordinates $x_1, \ldots, x_n$ on $\mathbb{A}^n$, and for each $j \in \set{1, \ldots, n}$ set \begin{equation} \label{rational_functions} f_j (x_1, \ldots, x_n) = \frac{F_{j-1} (x_1, \ldots, x_n, 1)}{F_n (x_1, \ldots, x_n, 1)}. \end{equation} Let $K = \mathbb{F}_q (x_1, \ldots, x_n)$ be the field of rational functions on $\mathbb{A}^n$. Then, \eqref{rational_functions} defines elements $f_j \in K$ for $j \in \set{1, \ldots, n}$, and $f_\Psi = (f_1, \ldots, f_n) \in K^n$. 
In turn this process defines a map \begin{equation} \label{map_pgl} \imath : \mathrm{PGL}_{n+1} (\mathbb{F}_q) \rightarrow K^n, \quad \Psi \mapsto f_\Psi. \end{equation} It is easy to see that this map is well defined and that for any element $f=(f_1,\dots,f_n)$ in the image of $\imath$ all the denominators of the $f_j$'s are equal. It also holds that $\imath(\Psi\circ \Phi)=\imath(\Psi)\circ \imath(\Phi)$, where the composition in $K^n$ is defined in the obvious way, i.e. by plugging the components of $\imath (\Phi)$ into the variables of $\imath(\Psi)$. Let us go back to $f_\Psi = (f_1, \ldots, f_n) \in K^n$ for a fixed automorphism $\Psi$. For any $i \geq 1$, let us define $f^{(i)} = \imath(\Psi^{i})$. For each $f^{(i)}$ and for each $j\in \{1,\dots,n\}$, write the $j$-th component of $f^{(i)}$ as \begin{equation*} f^{(i)}_j = \frac{a^{(i)}_j}{b^{(i)}_j}, \quad \text{for } a^{(i)}_j, b^{(i)}_j \in \mathbb{F}_q [x_1, \ldots, x_n]. \end{equation*} As we already observed, for fixed $i\geq 1$ all the $b^{(i)}_j$'s are equal, so we can set $b^{(i)}=b^{(i)}_1$. Define now \begin{align*} V_0 &= \mathbb{A}^n, \\ V_i &= \bigcap_{k = 1}^i V(b^{(k)}), \quad \text{for } i \geq 1. \end{align*} These sets will be the main ingredient in the definition of the covering mentioned in Section \ref{affine_jumps}. The following result characterises the $V_i$'s in terms of the position of the first iterates of $\Psi$. \begin{lemma} \label{characterisation_vanishing_loci} Let $x \in \mathbb{A}^n$, and $P = \pi (x) \in U$. Then, $x \in V_i$ if and only if $\Psi^k (P) \in H$ for $k \in \set{1, \ldots, i}$. \end{lemma} \begin{proof} By definition, $x \in V_i$ if and only if $x \in V(b^{(k)})$ for every $k \in \set{1, \ldots, i}$, which means $b^{(k)} (x) = 0$ for every $k \in \set{1, \ldots, i}$. Now, $b^{(k)} (x) = 0$ if and only if the last component of $\Psi^k (P)$ is zero, which is equivalent to the condition $\Psi^k (P) \in H$.
\end{proof} \begin{definition} Define the \emph{absolute fractional jump index $\mathfrak{J}$ of $\Psi$} to be the quantity \begin{equation*} \mathfrak{J} = \max \set{\mathfrak{J}_P \, : \, P \in U}. \end{equation*} \end{definition} When $\Psi$ is transitive, Corollary \ref{bound_affine_jump_index} ensures that $\mathfrak{J} \leq n+1$. We will now show that the absolute jump index equals the number of non-empty $V_i$'s. \begin{proposition} \label{absolute_jump_index} We have that \begin{equation*} \min \set{i \in \mathbb{N} \, : \, V_i = \emptyset} = \mathfrak{J}. \end{equation*} \end{proposition} \begin{proof} Set $i_0 = \min \set{i \in \mathbb{N} \, : \, V_i = \emptyset}$. In order to show that $i_0 \leq \mathfrak{J}$, it is enough to prove that $V_{\mathfrak{J}} = \emptyset$. Assume that there exists $x \in V_{\mathfrak{J}}$. Then, if $P = \pi (x)$, we have by Lemma \ref{characterisation_vanishing_loci} that $\Psi^j (P) \in H$ for $j \in \set{1, \ldots, \mathfrak{J}}$, and so the jump index $\mathfrak{J}_P$ must be strictly greater than $\mathfrak{J}$, a contradiction. Conversely, in order to show that $\mathfrak J \leq i_0$, it is enough to prove that $V_{\mathfrak J -1}\neq \emptyset $. To do so, take $P_0 \in U$ for which $\mathfrak{J}_{P_0} = \mathfrak{J}$. Then $\Psi^k(P_0)\in H$ for any $k\in \{1,\dots, \mathfrak{J}-1\}$. Let $x_0 = \pi^{-1} (P_0)$. Then, by Lemma \ref{characterisation_vanishing_loci} we have $x_0 \in V_{\mathfrak{J}-1}$. \end{proof} Define now \begin{equation*} U_i = V_{i-1} \setminus V_i, \quad \text{for } i \in \set{1, \ldots, \mathfrak{J}}. \end{equation*} Thus, for $I = \set{1, \ldots, \mathfrak{J}}$, the family $\set{U_i}_{i \in I}$ is a disjoint covering of $\mathbb{A}^n$ and each $f^{(i)}$ is a rational map of degree $1$ on $\mathbb{A}^n$. Also, we observe that by construction $f^{(i)}$ is well defined on $U_i$, so that the fractional jump is defined as \begin{equation*} \psi (x) = f^{(i)} (x), \quad \text{if } x \in U_i.
\end{equation*} To clarify this construction, we now give an explicit description of a fractional jump over $\mathbb{A}^2$. \begin{example} Let $q = 101$ and $n = 2$. Consider the automorphism of $\mathbb{P}^2$ defined by \begin{align*} \Psi([X_0: X_1: X_2]) &= [F_0 : F_1 : F_2] \\ &= [X_0 + 2 X_2: 3 X_1+4 X_2: 4X_0 + 2 X_1 + 3 X_2]. \end{align*} Notice that \begin{equation*} M = \begin{pmatrix} 1 & 0 & 2 \\ 0 & 3 & 4 \\ 4 & 2 & 3 \end{pmatrix} \end{equation*} is a representative of $\Psi$ in $\mathrm{GL}_3 (\mathbb{F}_{101})$. The characteristic polynomial $\chi_M (T) \in \mathbb{F}_{101} [T]$ of $M$ is given by \begin{equation*} \chi_M (T) = T^3 - 7 T^2 - T + 23, \end{equation*} which is irreducible over $\mathbb{F}_{101}$. Now, as \begin{align*} \frac{q^{n+1}-1}{q-1} &= \frac{101^3-1}{101-1} \\ &= 10303 \end{align*} is prime, any irreducible polynomial of degree $3$ in $\mathbb{F}_{101}[T]$ is projectively primitive. By Theorem \ref{transitivity_characterisation} we have that $\Psi$ is transitive on $\mathbb{P}^2$. Since $n=2$ and $\Psi$ is transitive, by Proposition \ref{absolute_jump_index} and the definition of the $U_i$'s we know that the fractional jump of $\Psi$ will be defined using at most $U_1,U_2,U_3$. As in \eqref{rational_functions}, we consider the rational functions \begin{align*} f_1(x_1, x_2) &= \frac{x_1+2}{4x_1+2 x_2 +3}, \\ f_2(x_1, x_2) &= \frac{3x_2+4}{4x_1+2 x_2 +3} \end{align*} in $\mathbb{F}_{101} (x_1, x_2)$, and set $f = (f_1, f_2) \in \mathbb{F}_{101}(x_1, x_2)^2$. Given the definition of $f$, we have \begin{align*} V_1 &= V(4x_1+2 x_2 +3), \\ U_1 &= \mathbb{A}^2 \setminus V_1.
\end{align*} Let now $f^{(1)} = f$ and $f^{(2)} = f \circ f = (f_1^{(2)}, f_2^{(2)}) \in \mathbb{F}_{101}(x_1, x_2)^2$, where \begin{align*} f_1^{(2)}(x_1, x_2) &= f_1 (f_1 (x_1, x_2), f_2 (x_1, x_2)) \\ &= \frac{9x_1 + 4x_2 + 8}{16 x_1 + 12 x_2 +25}, \\ f_2^{(2)}(x_1, x_2) &= f_2 (f_1 (x_1, x_2), f_2 (x_1, x_2)) \\ &= \frac{16 x_1 + 17 x_2 + 24}{16 x_1 + 12 x_2 +25}. \end{align*} Define \begin{align*} V_2 &= V_1 \cap V(16 x_1 + 12 x_2 +25) = \set{(64, 22)}, \\ U_2 &= V_1 \setminus V_2. \end{align*} Finally, let $f^{(3)} = f \circ f \circ f = (f_1^{(3)}, f_2^{(3)}) \in \mathbb{F}_{101}(x_1, x_2)^2$, where \begin{align*} f_1^{(3)}(x_1, x_2) &= f_1 (f_1^{(2)} (x_1, x_2), f_2^{(2)} (x_1, x_2)) \\ &= \frac{41 x_1 + 28 x_2 - 43}{15 x_1 - 15 x_2 - 47}, \\ f_2^{(3)}(x_1, x_2) &= f_2 (f_1^{(2)} (x_1, x_2), f_2^{(2)} (x_1, x_2)) \\ &= \frac{11 x_1 - 2 x_2 - 30}{15 x_1 - 15 x_2 - 47}, \end{align*} and $U_3 =V_2= \set{(64, 22)}$, since $V_3=V_2 \cap V(15x_1-15x_2-47)=\emptyset$. By construction, $\mathbb{A}^2 = U_1 \cup U_2 \cup U_3$, and therefore we are ready to describe the fractional jump $\psi$ of $\Psi$ as \begin{equation*} \psi (x_1, x_2) = \begin{cases} f^{(1)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_1, \\ f^{(2)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_2, \\ f^{(3)} (x_1, x_2), & \text{if }(x_1, x_2) \in U_3. \end{cases} \end{equation*} Notice that $f^{(3)} (64, 22) = (63, 78)$, and so $\psi (x_1, x_2) = (63, 78)$ if $(x_1, x_2) \in U_3 = \set{(64, 22)}$. \end{example} \section{The discrepancy of fractional jump sequences}\label{discrepancy} In the context of pseudorandom number generation, it is of interest to say something about the distribution of a sequence. A statistic that is of particular interest is the discrepancy of a sequence, of which we recall the definition below. The goal of this section is to show that for sequences generated by fractional jumps one can prove the same discrepancy bounds as for the sequences generated by the ICG. 
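Before estimating discrepancies, the Example of Section \ref{explicit} can be verified by direct computation; the following sketch (ours, not part of the paper) implements the fractional jump straight from the matrix $M$, confirms the value at the special point $(64, 22)$, and checks the full orbit property.

```python
# Numerical verification (ours) of the explicit example over F_101:
# the fractional jump is computed by applying M to a lift repeatedly
# until the last coordinate is nonzero again.
q = 101
M = [[1, 0, 2], [0, 3, 4], [4, 2, 3]]

def psi(x):
    """Fractional jump of Psi = [M] at the affine point x."""
    v = [x[0], x[1], 1]                    # the lift [x_1 : x_2 : 1]
    while True:
        v = [sum(M[i][j] * v[j] for j in range(3)) % q for i in range(3)]
        if v[2] != 0:                      # back in the affine chart U
            inv = pow(v[2], q - 2, q)
            return (v[0] * inv % q, v[1] * inv % q)

# the unique point of U_3 from the example: psi there is f^(3)
special = psi((64, 22))

# full orbit: iterating from (0, 0) should visit all q^2 points of A^2
x, orbit = (0, 0), set()
while x not in orbit:
    orbit.add(x)
    x = psi(x)
orbit_size = len(orbit)
```

The computation confirms $\psi(64, 22) = (63, 78)$ and an orbit of length $101^2 = 10201$, as predicted by the transitivity of $\Psi$.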
For simplicity, we let $q=p$ be prime. We assume the set $ \mathbb{F}_p\cong \mathbb{Z}/p\mathbb{Z}$ to be represented by $\{0,1,\dots, p-1\}\subseteq \mathbb{Z}$ as in \cite{shparlinski10}. For $x\in \mathbb{F}_p$ we then write $\frac{x}{p}$ for the corresponding element in $\frac{1}{p}\mathbb{Z}\subseteq \mathbb{R}$. For a sequence \begin{equation*} \Gamma = \{(\gamma_{m,0},\dots,\gamma_{m,s-1})\}_{m=0}^{N-1} \end{equation*} of $N$ points in $[0,1)^s$, for $s\in \mathbb{N}$, the \emph{discrepancy} of $\Gamma$ is defined by \begin{equation*} D_\Gamma = \sup_{B\subseteq [0,1)^s} \bigg|\frac{T_\Gamma(B)}{N}-|B|\bigg|, \end{equation*} where the supremum is taken over boxes $B$ of the form \begin{equation*} B = [\alpha_1,\beta_1)\times \dots \times [\alpha_s,\beta_s) \subseteq [0,1)^s, \end{equation*} and $T_\Gamma(B)$ denotes the number of points of $\Gamma$ which lie inside $B$. For a sequence $\{u_m\}_{m \in \mathbb{N}}$ of points in $\mathbb{F}_p$ the main interest lies in bounding the discrepancy of the sequence \[ \Big(\frac{u_{m}}{p},\frac{u_{m+1}}{p},\dots,\frac{u_{m+s-1}}{p}\Big)_{m=0}^{N-1}\] for $s\ge 1$. In the case of a sequence generated by an ICG such a bound was given in \cite{shparlinski10}. The goal of this section is to extend the results in \cite{shparlinski10} to give discrepancy bounds for full orbit sequences generated by fractional jumps also in the case where the dimension $n$ satisfies $n>1$. Given a fractional jump $\psi:\mathbb{F}_p^n\to \mathbb{F}_p^n$ and an initial value $x\in \mathbb{F}_p^n$ we define the sequence $\{ \mathbf{u}_m (x) \}_{m \in \mathbb{N}}$ of points in $\mathbb{F}_p^n$ by setting $\mathbf{u}_0(x) = x$ and \begin{equation*} \mathbf{u}_m(x) = \psi^{m}(x), \quad \text{for } m \geq 1. 
\end{equation*} We also define the \emph{snake sequence} $\{ v_m (x) \}_{m \ge 1}$ of points in $\mathbb{F}_p$ by setting \[ (v_{kn+1}(x),v_{kn+2}(x),\dots,v_{(k+1)n}(x)) = \mathbf{u}_{k}(x), \quad \text{for } k \in \mathbb{N}.\] Let $D_{s,\psi}(N;x)$ denote the discrepancy of the sequence \[ \Big(\frac{v_{m+1}}{p},\frac{v_{m+2}}{p},\dots,\frac{v_{m+s}}{p}\Big)_{m=0}^{N-1}\] and let $D^n_{s,\psi}(N;x)$ denote the discrepancy of the sequence \[ \Big(\frac{\mathbf{u}_m}{p},\frac{\mathbf{u}_{m+1}}{p},\dots,\frac{\mathbf{u}_{m+s-1}}{p}\Big)_{m=0}^{N-1}.\] Note that in the first case the individual points of the sequence lie in $[0,1)^s$, while in the second case the points of the sequence lie in $[0,1)^{ns}$. Our main result for the discrepancy $D_{s,\psi}(N;x)$ is a direct generalization of \cite[Theorem 4]{shparlinski10}, which deals with the discrepancy of a sequence generated by an ICG. We also provide the analogous bounds for the $n$-dimensional discrepancy $D_{s,\psi}^n(N;x)$. \begin{theorem}\label{thm:discrepancy} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then for any integer $s\ge 1$ and any real $\Delta >0$, for all but $O(\Delta p^n)$ initial values $x\in \mathbb{F}_p^n$ it holds that \[ D_{s,\psi}(N;x) \ll_{s,n} (\Delta^{-2/3}N^{-1/3}+ p^{-1/4}\Delta^{-1})(\log N)^s \log p\] and \[ D_{s,\psi}^n(N;x) \ll_{s,n} (\Delta^{-2/3}N^{-1/3}+ p^{-1/4}\Delta^{-1})(\log N)^{sn} \log p\] for all $N$ with $1\le N\le p^n$. \end{theorem} The proof of Theorem \ref{thm:discrepancy} follows the same lines as the proof of \cite[Theorem 4]{shparlinski10}, but with Lemma \ref{lem:2nd-moment} below extending \cite[Lemma 1]{shparlinski10} to $n>1$. In the proofs we will make use of the Koksma--Sz\"{u}sz inequality as well as the Bombieri--Weil bound.
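To make the definition concrete, the following sketch (ours, for illustration; not used in the proofs) computes, for $s = 1$, the star discrepancy, i.e.\ the variant of $D_\Gamma$ in which the boxes are anchored at the origin; in dimension one it bounds $D_\Gamma$ from below and differs from it by at most a factor of $2$, and it admits a classical closed formula for sorted points.

```python
# Exact star discrepancy in dimension 1 (an illustration, not from the
# paper): for sorted points x_(1) <= ... <= x_(N) in [0, 1) one has
#   D*_N = 1/(2N) + max_i | x_(i) - (2i - 1)/(2N) |.
def star_discrepancy_1d(points):
    """Star discrepancy (boxes [0, t)) of a finite point set in [0, 1)."""
    xs = sorted(points)
    N = len(xs)
    return 1 / (2 * N) + max(abs(x - (2 * i + 1) / (2 * N))
                             for i, x in enumerate(xs))

# the centred points (2i - 1)/(2N) attain the minimum value 1/(2N)
N = 50
best = star_discrepancy_1d([(2 * i + 1) / (2 * N) for i in range(N)])
```

For instance, a single point at $0$ has star discrepancy $1$, while the centred points above attain the minimum $1/(2N)$; sequences of low discrepancy are exactly those coming close to this behaviour.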
\begin{theorem}[{\cite[Theorem 1.21]{drmota}}]\label{thm:koksma} For any integer $H\ge 1$, the discrepancy $D_\Gamma$ of the sequence $\Gamma = (\gamma_{m,0},\dots,\gamma_{m,s-1})_{m=0}^{N-1}$ satisfies \[ D_\Gamma \ll \frac{1}{H} + \frac{1}{N}\sum_{0<\| \mathbf{h} \|_\infty \le H} \frac{1}{\rho(\mathbf{h})} \bigg| \sum_{m=0}^{N-1} \exp\Big( 2\pi i \sum_{j=0}^{s-1} h_j \gamma_{m,j} \Big)\bigg|, \] where $\rho(\mathbf{h}) = \prod_{j=0}^{s-1} \max\{|h_j|,1\}$ for $\mathbf{h}=(h_0,\dots, h_{s-1})\in \mathbb{Z}^s$. \end{theorem} \begin{theorem}[{\cite[Theorem 2]{moreno}}]\label{thm:weil} Let $f/g$ be a rational function over $\mathbb{F}_p$ with $\deg(f)>\deg(g)$. Suppose that $f/g$ is not of the form $h^p-h$, where $h$ is a rational function over $\overline{\mathbb{F}}_p$. Then \[ \bigg|\sum_{\substack{x\in \mathbb{F}_p:\\g(x)\ne 0}} e_p\left( \frac{f(x)}{g(x)} \right) \bigg| \le (\deg(f) + v-1)p^{1/2},\] where $v$ is the number of distinct roots of $g$ in $\overline{\mathbb{F}}_p$. \end{theorem} We will also need to use the explicit description of $\psi$ given in Section \ref{explicit} to describe powers of $\psi$, which is done in the next lemma. \begin{lemma}\label{lem:explicit} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then there are polynomials $a^{(i)}_j, b^{(i)} \in \mathbb{F}_p [x_1, \ldots, x_n]$ of degree at most $1$, for $i\in\{1,\dots,p-1\}$ and $j\in\{1,\dots,n\}$, with $b^{(i)}$ not constant, and such that \[ \psi^i_j(x) = \frac{a^{(i)}_j(x)}{b^{(i)}(x)}, \quad \text{for } x\not\in \bigcup_{k=1}^i V(b^{(k)}),\] where $\psi^i_j (x)$ denotes the $j$-th component of $\psi^i (x)$. \end{lemma} \begin{proof} The functions $a_j^{(i)}, b^{(i)}$ are defined as in Section \ref{explicit}.
Indeed, recall that there is a set $U_1$ and there is a rational map \[ f^{(1)}=\Big(\frac{a^{(1)}_1}{b^{(1)}},\dots,\frac{a^{(1)}_n}{b^{(1)}}\Big) \] of degree $1$ such that \[ \psi(x) = f^{(1)}(x),\quad \text{for } x\in U_1 = \mathbb{F}_p^n\setminus V(b^{(1)}).\] For $i\in \{1,\dots,p-1\}$ and $j\in\{1,\dots,n\}$, define the maps $a_j^{(i)},b^{(i)}$ by iterating the map $f^{(1)}$, that is \[ f^{(i)} = (f^{(1)})^i = \left(\frac{a^{(i)}_1}{b^{(i)}},\dots,\frac{a^{(i)}_n}{b^{(i)}}\right),\quad i\in\{1,\dots,p-1\}.\] Let us notice that in Section \ref{explicit} the function $f^{(i)}$ was used to describe the map $\psi$ on the set $U_i$. In this section we are instead using $f^{(i)}$ to describe the $i$-th iterate of $\psi$ on the set $U_1\cap \psi^{-1}(U_1) \cap \cdots \cap \psi^{-i+1}(U_1)$. In particular, Section \ref{explicit} made use of $f^{(i)}$ only for $i\in\{1,\dots,\mathfrak{J}\}$, with $\mathfrak{J}\leq n+1$, but here we instead use $f^{(i)}$ on the range $i\in \{1,\dots,p-1\}$. To see that $b^{(i)}$ is not constant for $i \in \set{1, \ldots, p-1}$ we need to show that $f^{(i)} = \imath(\Psi^i)$, where $\imath$ is the map in \eqref{map_pgl}, is not affine. Let $M\in \mathrm{GL}_{n+1}(\mathbb{F}_p)$ be such that $\Psi = [M] \in \mathrm{PGL}_{n+1}(\mathbb{F}_p)$. Suppose by contradiction that $\imath(\Psi^i)$ is affine for some $i \in \set{1, \ldots, p-1}$. Since $M$ has irreducible characteristic polynomial by Theorem \ref{transitivity_characterisation}, we have that $\mathbb{F}_p[M]$ is a field and in turn that $\mathbb{F}_p[M^i]$ is a subfield, so that the minimal polynomial of $M^i$ is irreducible. Now, since $\imath(\Psi^i)$ is assumed to be affine, we have that $\Psi^i(H)=H$, which in turn forces $M^i$ to stabilise a proper nonzero subspace of $\mathbb{F}_p^{n+1}$. This directly implies that the (irreducible) minimal polynomial of $M^i$ cannot be equal to the characteristic polynomial of $M^i$, and therefore it must have degree $d<n+1$.
We know, again by Theorem \ref{transitivity_characterisation}, that $[M]$ is a generator for the quotient group $\mathbb{F}_p[M]^*/ \mathbb{F}_p^*$ as $\Psi$ is transitive, but $[1]=[M^i]^{(p^d-1)/(p-1)}= [M]^{i (p^d-1)/(p-1)}$, so $(p^{n+1}-1)/(p-1)\mid i (p^d-1)/(p-1)$ which forces $i\geq (p^{n+1}-1)/(p^d-1)\geq p$, a contradiction. \end{proof} We are now ready to prove the technical heart of the argument. \begin{lemma}\label{lem:technical} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then for any integers $j_0, s\ge 1$, $d\le (p-1)n-s$ and $\mathbf{h} \in \mathbb{F}_p^s\setminus \{0\}$ it holds that \[ \Big|\sum_{x \in \mathbb{F}_p^n} e_p\Big( \sum_{j=0}^{s-1}h_j(v_{j_0+d+j}(x)-v_{j_0+j}(x))\Big)\Big| \le 3\Big(\frac{s+d}{n}+1\Big)p^{n-1}+4\Big(\frac{s}{n}+1\Big)p^{n-1/2}. \] \end{lemma} \begin{proof} Observe first that the result is trivial for $s \ge p^{1/2}n$, so assume that $s\le p^{1/2}n$. Let $r = \min\{j: h_j\ne 0\}$, $s'=s-r$ and $h_j'=h_{j+r}$ for $j\in\{0,\dots,s'-1\}$. Let $m= \floor{(j_0+r)/n}$. Since $\psi$ is a bijection, we can make the substitution $x'=\psi^m (x)$ and sum over $x'\in\mathbb{F}_p^n$ in place of summing over $x\in \mathbb{F}_p^n$. Then, we get $v_{i}(x')=v_{mn+i}(x) $, and so \begin{align*} \sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s-1} h_j(v_{j_0+d+j}(x)-v_{j_0+j}(x))\Big) &= \sum_{x'\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s'-1} h_{j}'(v_{j_1+d+j}(x')-v_{j_1+j}(x'))\Big)\\ &=\sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s'-1} h_{j}'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\Big), \end{align*} for some $j_1$ with $1\le j_1 \le n$, where in the last equality we have simply relabeled the summation index $x'$ to $x$, which we do for simplicity of notation. Notice that in this way we have that $h_0' \ne 0$, which was the entire point of shifting the sum. As $d\le (p-1)n-s$ we have $j_1+j \le j_1+d+j \le pn-1 < pn$. 
This means that $v_{j_1+d+j} = \psi_k^i(x)$ for some $i< p$ and some $k\in\{1,\dots, n\}$. An analogous statement also holds for $v_{j_1+j}$, i.e. $v_{j_1+j} = \psi_{k'}^{i'}(x)$ for some $i'< p$ and some $k'\in\{1,\dots, n\}$. We can therefore apply Lemma \ref{lem:explicit} to write \[v_{in+j}(x) = \psi_j^i(x) = \frac{a_j^{(i)}(x)}{b^{(i)}(x)},\quad x\not\in \bigcup_{k=1}^i V(b^{(k)}),\] for $i\in\{1,\dots,\floor{\frac{n+d+s-1}{n}}\}$, $j\in \{1,\dots,n\}$, and for $i=0$ we clearly have \[ v_{j}(x) = x_j,\] for $j \in \{1,\dots,n\}$. Since we want to estimate the sum \[\Big|\sum_{x \in \mathbb{F}_p^n} e_p\Big( \sum_{j=0}^{s'-1}h_j'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\Big)\Big|,\] we consider for fixed $\tilde{x} = (x_1,\dots,x_{j_1-1},x_{j_1+1},\dots,x_n)\in \mathbb{F}_p^{n-1}$ the inner sum \[G(x_{j_1};\tilde{x})= \sum_{j=0}^{s'-1} h_j'(v_{j_1+d+j}(x)-v_{j_1+j}(x))\] as a function of the variable $x_{j_1}$. Since we want to apply Theorem \ref{thm:weil} to $G$ for fixed $\tilde x$ (considered as a univariate rational function of $x_{j_1}$), we first need to give a nice description of $G$ outside a certain set. We can do that outside of the set \[E=\bigcup_{i=1}^{\floor{(n+d+s-1)/n}}V(b^{(i)}).\] In fact, one may write \[G(x_{j_1};\tilde{x})= \frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})} = \frac{\tilde{a}(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})} - h_0' x_{j_1} + c(\tilde{x}),\] where $a,\tilde{a},b$ are polynomials and $c(\tilde{x})$ is constant with respect to $x_{j_1}$. In order to apply Theorem \ref{thm:weil} to the sum over $x_{j_1}$, we need to check that the conditions of the theorem are verified apart from a small set $F$ of $\tilde x$'s, whose size we can estimate. To begin with, we check that $\deg(a) = \deg(b)+1$. This follows immediately if either $\deg(\tilde{a})\le \deg(b)$, or if $\deg(\tilde{a})=\deg(b)+1$ and the leading coefficient in $\tilde{a}/b$ does not cancel the term $-h_0'x_{j_1}$.
By considering the possible powers of $\psi$ that can appear in the definition of $G$, we see that \[b(x_{j_1};\tilde{x}) = \prod_{i\in I} b^{(i)}(x),\] with the product taken over a set $I$ of $i$ satisfying \begin{equation} i \in \left[ \frac{j_1}{n}-1,\frac{j_1+s'-1}{n} \right)\cup\left[ \frac{j_1+d}{n}-1,\frac{j_1+d+s'-1}{n} \right) \subseteq [0, p) \label{eq:ugly} \end{equation} and such that the coefficient of $x_{j_1}$ is nonzero in $b^{(i)}$. Since $\tilde{a}/b$ was defined as a linear combination of rational functions of degree $1$, it follows that $\deg(\tilde{a}) \le \deg(b) + 1$. If $\deg(\tilde{a}) = \deg(b)+1$, the coefficient of the highest order term in $a$ is of the form a constant times $\prod_{i\in J} b^{(i)}(x)$ for some set $J$ of $i$ satisfying \eqref{eq:ugly} and such that $b^{(i)}$ does not depend on $x_{j_1}$. Think of this coefficient as a polynomial in $\tilde{x}$. Since there are no more than $2 \Big(\frac{s'}{n}+1 \Big)$ values of $i$ satisfying \eqref{eq:ugly}, this polynomial has degree at most $2 \Big(\frac{s'}{n}+1 \Big)$, and therefore the coefficient is equal to $h_0'$ for at most $2 \Big( \frac{s'}{n}+1 \Big) p^{n-2}$ values of $\tilde{x}$. We can therefore define a set $F \subseteq \mathbb{F}_p^{n-1}$ with \[ |F| \le 2\Big(\frac{s'}{n}+1\Big)p^{n-2},\] and such that $\deg(a) = \deg(b) + 1$~for $\tilde{x} \not \in F$. Finally, for $\tilde{x}\not\in F$ we want to check that $G$ is not of the form $h^p-h$ for some rational function $h$ over $\overline{\mathbb{F}}_p$. Assume therefore that in fact $a/b = h^p-h$ for some rational function $h = h_1/h_2$, where $h_1$ and $h_2$ are coprime. Then ${h_2^pa = h_1^p b - h_1bh_2^{p-1}}$, and so in particular $h_2^p | b$. Note that \[ \deg(b) \le 2(s'/n+1) < p \] since we initially assumed that $s\le p^{1/2}n$, and so $h_2$ must be constant. This gives $a = b(c_1h_1^{p} - c_2h_1)$ for some constants $c_1,c_2$. But then $\deg(a) -\deg(b)$ is a multiple of $p$, contradicting $\deg(a) = \deg(b)+1$.
Combining all of this we may apply Theorem \ref{thm:weil} to the sum over $x_{j_1}$ to conclude that whenever $\tilde{x}\not\in F$ it holds that \[\bigg|\sum_{x_{j_1}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\bigg| \le 4\Big(\frac{s}{n}+1\Big)p^{1/2},\] where the sum is taken over values $x_{j_1}$ where $b\ne 0$. For $\tilde{x}\in F$ we have the trivial bound \[\bigg|\sum_{x_{j_1}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\bigg| \le p.\] Finally, these bounds together with the union bound \[ |E| \le \sum_{i=1}^{\floor{(n+d+s-1)/n}} |V(b^{(i)})| \le \left(\frac{s+d}{n}+1\right)p^{n-1}\] and the triangle inequality give \begin{align*} \Big|\sum_{x\in\mathbb{F}_p^n} e_p(G(x_{j_1};\tilde{x}))\Big| &\le |E| + \Big|\sum_{x\not\in E} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\Big| \\ &\le |E| + p|F| + \Big|\sum_{\tilde{x}\not\in F}\sum_{\substack{x_{j_1}:\\ x\not\in E}} e_p\Big(\frac{a(x_{j_1};\tilde{x})}{b(x_{j_1};\tilde{x})}\Big)\Big|\\ &\le \Big(\frac{s+d}{n}+1\Big)p^{n-1} + 2p\Big(\frac{s}{n}+1\Big)p^{n-2} + 4\Big(\frac{s}{n}+1\Big)p^{1/2}p^{n-1}\\ &\le 3\Big(\frac{s+d}{n}+1\Big)p^{n-1}+4\Big(\frac{s}{n}+1\Big)p^{n-1/2}. \end{align*} \end{proof} We now need an additional ancillary result, which will be used in the proof of the main theorem. \begin{lemma}\label{lem:2nd-moment} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$ and let $\psi$ be its fractional jump. Then for any integers $j_0, s \geq 1$ and $K$ with $1\le K \le p^n$, and any $\mathbf{h}\in \mathbb{F}_p^s\setminus \{0\}$ one has \[ \sum_{x\in \mathbb{F}_p^n} \bigg| \sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 \ll_{s,n} Kp^n + K^2p^{n-1/2}. \] \end{lemma} \begin{proof} We divide into the two cases $K\le p^{1/2}$ and $K> p^{1/2}$.
In the first case we have \begin{align*} \sum_{x\in \mathbb{F}_p^n} \bigg| \sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 = \sum_{x\in\mathbb{F}_p^n} \sum_{m,l=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_j (v_{j_0+j+m}(x)-v_{j_0+j+l}(x))\Big)\\ \le Kp^n + 2\sum_{d=1}^{K-1}\sum_{m=0}^{K-1-d}\bigg|\sum_{x\in\mathbb{F}_p^n} e_p\Big(\sum_{j=0}^{s-1} h_j(v_{j_0+m+j+d}(x)-v_{j_0+m+j}(x))\Big)\bigg| , \end{align*} where we have split into the cases $m=l$ and $m\ne l$. Applying Lemma \ref{lem:technical} to the innermost sum when $d\le (p-1)n-s$, and applying the trivial bound for the $O_{s,n}(1)$ remaining values of $d$ then gives that this is \begin{align*} &\ll_{s,n} K p^n + \sum_{d=1}^{K-1} (K-d)\left(\frac{d}{n}p^{n-1} +(s/n+1)p^{n-1/2}\right) + p^n\\ &\ll_{s,n} Kp^n + K^3p^{n-1} + K^2p^{n-1/2}. \end{align*} As the middle term never dominates for the considered range of $K$, we are done in this case. In the second case, split the sum over $k$ into at most $K/M+1$ intervals of length $M=p^{1/2}$. On each interval we bound the sum as in the first case, and so by Cauchy--Schwarz it follows that \begin{align*} \sum_{x\in \mathbb{F}_p^n} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_jv_{j_0+j+k}(x)\Big)\bigg|^2 &\ll_{s,n} \left(\frac{K^2}{M^2}+1\right)(Mp^n+M^3p^{n-1}+M^2p^{n-1/2})\\ &\ll_{s,n} K^2p^{n-1/2}. \end{align*} \end{proof} We are now ready to prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:discrepancy}] Apply Theorem \ref{thm:koksma} with $H = \floor{N/2}$ to get \begin{equation} D_{s,\psi}(N;x) \ll \frac{1}{N}+\frac{1}{N}\sum_{0<\| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\bigg| \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)\bigg|. \label{eq:bound1} \end{equation} Let $k\geq 1$ be an integer. 
Observe that if $k > N-1$, we have that \[\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \leq 2N \leq 2k,\] if $k \leq N-1$, since the two sums in $m$ overlap in all but $2k$ terms, we have that \begin{align*} &\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \\ & = \bigg|\sum_{m=0}^{k-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big)-\sum_{m=N-k}^{N-1}e_p\Big(\sum_{j=0}^{s-1} h_j v_{m+j+k}(x)\Big)\bigg| \\ &\leq 2k. \end{align*} Therefore, for any integer $K \geq 1$ it holds that \begin{equation*} \begin{split} K\bigg|\sum_{m=0}^{N-1}e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j}(x)\Big) \bigg| &\le \bigg|\sum_{k=0}^{K-1} \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big) \bigg|+\sum_{k=0}^K 2k\\ &\le \sum_{m=0}^{N-1} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big)\bigg| + O(K^2). \end{split} \end{equation*} Combining this with \eqref{eq:bound1}, and noting that $\sum_{0<\| \mathbf{h} \|_\infty \le H} \frac{1}{\rho(\mathbf{h})}\ll (\log H)^s$, then gives \begin{equation} D_{s,\psi}(N;x) \ll \frac{K}{N}(\log N)^s + \frac{1}{N}R(N,K,x) \label{eq:bound4} \end{equation} where \begin{equation} R(N,K,x) = \frac{1}{K}\sum_{0<\| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\sum_{m=0}^{N-1}\bigg|\sum_{k=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_jv_{m+j+k}(x)\Big)\bigg|. \label{eq:bound2} \end{equation} We now average over initial values $x$. By Cauchy--Schwarz one has \[\left( \sum_{x\in\mathbb{F}_p^n}\bigg|\sum_{k=0}^{K-1}e_p\Big(\sum_{j=0}^{s-1}h_jv_{m+j+k}(x)\Big)\bigg| \right)^2 \le p^n \sum_{x\in\mathbb{F}_p^n} \bigg|\sum_{k=0}^{K-1} e_p\Big(\sum_{j=0}^{s-1}h_j v_{m+j+k}(x)\Big)\bigg|^2. \] Inserting this and the bound from Lemma \ref{lem:2nd-moment} into \eqref{eq:bound2} gives \begin{equation} \sum_{x \in \mathbb{F}_p^n} R(N,K,x) \ll_{s,n} Np^n (K^{-1/2}+p^{-1/4})(\log N)^s. 
\label{eq:bound3} \end{equation} Now, let $N_j = 2^j$ and $K_j = \ceil{\Delta^{-2/3}N_j^{2/3}}$ for $j \in \set{ 0,1,\dots,\ceil{\log_2 p^n}}$. Let $\Omega_j\subseteq \mathbb{F}_p^n$ be the set of $x$ for which \[ R(N_j,K_j,x) \ge C_{s,n}\Delta^{-1}N_j(K_j^{-1/2}+p^{-1/4}) (\log N_j)^s \log p^n,\] where $C_{s,n}$ is the implied constant in \eqref{eq:bound3}. By \eqref{eq:bound3} we must have that $|\Omega_j| \le \Delta p^n/\log p^n$. Setting $\Omega = \cup_j \Omega_j$ we then have $|\Omega| \le \Delta p^n$, and for $x \not\in \Omega$ it holds that \begin{equation} R(N_j,K_j,x) \le C_{s,n} \Delta^{-1}N_j(K_j^{-1/2}+p^{-1/4}) (\log N_j)^s \log p^n \label{eq:bound5} \end{equation} for all $j \le \ceil{\log_2 p^n}$. Given $N$ such that $1\le N \le p^n$, take $\nu\in\mathbb{N}$ such that $N_{\nu-1}\le N < N_{\nu}$. By \eqref{eq:bound4} we have \[ D_{s,\psi}(N;x) \ll \frac{K_\nu}{N_\nu}(\log N_\nu)^s + \frac{1}{N_\nu}R(N_\nu,K_\nu,x),\] and so for $x \not\in \Omega$ it holds that \[ D_{s,\psi}(N;x) \ll (\Delta^{-2/3}N^{-1/3} + p^{-1/4}\Delta^{-1})(\log N)^s \log p^n\] by \eqref{eq:bound5}, completing the first bound in the theorem. For $D_{s,\psi}^n(N;x)$ we also apply Theorem \ref{thm:koksma} with $H = \ceil{N/2}$ to get \begin{equation*} D_{s,\psi}^n(N;x) \ll \frac{1}{N}+\frac{1}{N}\sum_{0< \| \mathbf{h} \|_\infty \le N/2} \frac{1}{\rho(\mathbf{h})}\Big| \sum_{m=0}^{N-1} e_p\Big(\sum_{j=0}^{s-1}\mathbf{h}_j\cdot \mathbf{u}_{m+j}(x)\Big)\Big|, \end{equation*} where now $\mathbf{h} = (\mathbf{h}_0,\dots,\mathbf{h}_{s-1})$ and $\mathbf{h}_j = (h_{j,1},\dots,h_{j,n})$ for $j \in \set{0,\dots,s-1}$. Observe that \[ \sum_{j=0}^{s-1} \mathbf{h}_j\cdot \mathbf{u}_{m+j}(x) = \sum_{j=0}^{s-1}\sum_{i=1}^{n} h_{j,i}v_{(m+j)n+i}(x) = \sum_{k=1}^{sn}h_k' v_{mn+k}(x), \] where $h_k' = h_{j,i}$ for $k = nj+i$, with $j\in\set{0,\dots,s-1}$ and $i\in\set{1,\dots,n}$. We may therefore bound all sums exactly as before, with the only difference being that $sn$~replaces $s$.
\end{proof} \section{The computational complexity of fractional jump sequences} \label{computation} Let $\Psi$ be a transitive automorphism of $\mathbb{P}^n$, and let $\psi$ be its fractional jump. We now want to establish the computational complexity of computing the $m$-th term of the sequence $\{ \psi^m (0) \}_{m \in \mathbb{N}}$. In particular, in this section we will show that computing a term of our sequence is less expensive than computing a term of a classical inversive sequence of the same bit size. Fix notations as in Section \ref{explicit}. For simplicity, let us restrict to the case in which $q$ is prime. Let us first deal with the regime in which $q$ is large (which is the regime in which we got the discrepancy bounds in Section \ref{discrepancy}), so that most of the computations will be performed for points in $U_1$. If one chooses $\Psi$ in such a way that the coefficients of the $F_j$'s are small (this is possible for example by taking $\Psi$ as the companion matrix of a projectively primitive polynomial with small coefficients), so that also the coefficients of the $f_j^{(1)}$'s are small, multiplications by such coefficients cost essentially the same as sums. Therefore the computational cost of computing the $m$-th term of the sequence (given the $(m-1)$-th term) is reduced to the cost of computing $n$ multiplications in $\mathbb{F}_q$ and one inversion in $\mathbb{F}_q$ (as all the denominators of the $f_j^{(1)}$'s are equal). Let $M(q)$ (resp. $I(q)$) denote the cost of one multiplication (resp. inversion) in $\mathbb{F}_q$. The total number of bit operations needed to compute a single term of the sequence is then \[C^{\text{new}}(q,n)=n M(q)+ I(q).\] Using the fast Fourier transform for multiplications \cite{bib:SchStr71} and the extended Euclidean algorithm for inversion \cite{bib:schonhage71} one gets \begin{align*} M(q) &= O(\log(q)\log\log(q) \log \log \log (q)), \\ I(q) &= O(M(q)\log\log(q)).
\end{align*} Let us compare this complexity with the complexity of computing the $m$-th term of an inversive sequence of the form $x_{m}=a/x_{m-1}+b$ over $\mathbb{F}_p$. The correct analogue is obtained when $q^n$ has roughly the same bit size as $p$. If one chooses $a,b$ small, one obtains that the complexity of computing $x_m$ is essentially the complexity of computing only one inversion modulo $p$, which is \[C^{\text{old}}(p)=O(\log(p)[\log \log(p)]^2 \log \log \log(p)).\] Now, since $q,n$ are chosen in such a way that $q^n$ has roughly the same bit size as $p$, we can write $C^{\text{old}}(q,n)=C^{\text{old}}(p)$. It is easy to see that up to a positive constant we have \[\frac{C^{\text{new}}(q,n)}{C^{\text{old}}(q,n)}\leq \frac{1}{\log \log q}+\frac{1}{n},\] which goes to zero as $n$ and $q$ grow. It is also interesting to see that with our construction we have the freedom to choose $q$ relatively small and $n$ large (so that again one gets $q^n\sim p$). In this case one can see that ${C^{\text{new}}(q,n)}/{C^{\text{old}}(q,n)}$ goes to zero as $([\log(n)]^2\log \log(n))^{-1}$. If one tries to do something similar with an ICG (i.e.\ reducing the characteristic but keeping the size of the field large), one would still have to compute an inversion in $\mathbb{F}_{q^n}$, which costs $O(n^2)$ $\mathbb{F}_q$-operations, see \cite[Table 2.8]{bib:MVOV96}, while in our case we only need to invert one element and multiply $n$ elements in $\mathbb{F}_q$, which costs $(n+1)$ $\mathbb{F}_q$-operations. \section{Conclusion and further research} Using the theory of projective maps, we provided a general construction for full orbit sequences over $\mathbb A^n$. Our theory generalises the standard construction for inversive congruential generators.
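To make the ICG comparison of Section~\ref{computation} concrete, the following minimal Python sketch iterates the classical inversive map $x \mapsto a x^{-1} + b$ over $\mathbb{F}_p$ (with the usual convention $0^{-1}=0$), using one modular inversion per step. The parameters $p=5$, $a=2$, $b=2$ are our own illustrative choices, not taken from the text; they happen to give a full orbit.

```python
# Minimal sketch of an inversive congruential generator (ICG) over F_p.
# The step map is x -> a * x^{-1} + b mod p, with the convention 0^{-1} = 0.
# The parameters below are illustrative, not taken from the paper.

def icg_orbit(p, a, b, x0=0):
    """Iterate x -> (a * inv(x) + b) mod p, returning the orbit of x0."""
    def inv(x):
        # Fermat inversion: one modular exponentiation (i.e. one inversion) per step.
        return 0 if x == 0 else pow(x, p - 2, p)
    orbit, x = [], x0
    while x not in orbit:
        orbit.append(x)
        x = (a * inv(x) + b) % p
    return orbit

# With p = 5, a = 2, b = 2 the orbit starting at 0 visits every residue,
# i.e. this ICG is full orbit for these parameters.
orbit = icg_orbit(5, 2, 2)
print(orbit)  # [0, 2, 3, 1, 4]
```

Each step costs exactly one inversion, matching $C^{\text{old}}(p)$ above; by contrast, a fractional jump generator outputs an $n$-dimensional point for the same single inversion plus $n$ multiplications.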
Let us summarise the properties of fractional jump sequences obtained in this paper: \begin{itemize} \item We completely characterise the full orbit condition for such sequences (Theorem \ref{transitivity_characterisation}). \item In dimension $1$ they cover the theory of ICG sequences. \item In dimension greater than $1$, they are automatically full orbit whenever $(q^{n+1}-1)/(q-1)$ is a prime number, something that can never occur for ICG sequences, as $2$ always divides $q+1$ when $q$ is odd. \item In any dimension, they enjoy the same discrepancy bound as the ICG, so they appear to be a good source of pseudorandomness, whether one desires a one-dimensional sequence of pseudorandom elements or a stream of $n$-dimensional pseudorandom points (Theorem \ref{thm:discrepancy}). \item They are very inexpensive to compute: for $n>1$ computations are asymptotically quicker than those for an ICG sequence, as described in Section \ref{computation}. The underlying reason is that at each step the ICG generates a $1$-dimensional pseudorandom point using exactly one inversion in $\mathbb{F}_q$, whereas at each step our construction generates an $n$-dimensional point over $\mathbb{F}_q$ (again, using only one inversion). \end{itemize} Some research questions that arise are the following. \begin{enumerate} \item As our bound on the discrepancy holds for any transitive non-affine fractional jump sequence, can one build special fractional jump sequences having strictly better discrepancy bounds than the one for the ICG? \item What happens if one replaces the finite field with a finite ring? Can we extend the fractional jump construction to this case? \item Can the notion of fractional jump be extended to more general objects, such as quasi-projective varieties, and still produce behaviour as competitive as in the projective space setting?
\end{enumerate} \section*{Acknowledgments} The authors would like to thank Violetta Weger for checking the preliminary version of this manuscript. The third author gratefully acknowledges the support of the Swiss National Science Foundation, grant number 171248. \end{document}
\begin{document} \begin{abstract} Let $k$ be an algebraically closed field of prime characteristic $p$. Let $kGe$ be a block of a group algebra of a finite group $G$, with normal defect group $P$ and abelian $p'$ inertial quotient $L$. We show that $kGe$ is a matrix algebra over a quantised version of the group algebra of a semidirect product of $P$ with a certain subgroup of $L$. To do this, we first examine the associated graded algebra, using a Jennings--Quillen style theorem. As an example, we calculate the associated graded of the basic algebra of the non-principal block in the case of a semidirect product of an extraspecial $p$-group $P$ of exponent $p$ and order $p^3$ with a quaternion group of order eight, where the centre acts trivially. In the case $p=3$ we give explicit generators and relations for the basic algebra as a quantised version of $kP$. As a second example, we give explicit generators and relations in the case of a group of shape $2^{1+4}:3^{1+2}$ in characteristic two. \end{abstract} \title{Structure of blocks with normal defect and abelian $p'$ inertial quotient} \section{Introduction} Throughout this paper $p$ is a prime and $k$ is an algebraically closed field of characteristic $p$. The study of blocks with normal defect groups has a long history, starting with the work of Brauer~\cite{Brauer:1956a}, and continuing with Reynolds~\cite{Reynolds:1963a}, Dade~\cite{Dade:1973a}, and K\"ulshammer~\cite{Kulshammer:1985a}. In the case of abelian normal defect, abelian inertial quotient and one simple module, explicit descriptions of the basic algebra were given by Benson and Green~\cite{Benson/Green:2004a}, and Holloway and Kessar~\cite{Holloway/Kessar:2005a}. Dropping the hypothesis of one simple module led to our paper~\cite{Benson/Kessar/Linckelmann:2019a}.
The main structural feature of the basic algebras calculated in these papers is that they appear to be quantised versions of the group algebras of semidirect products of a defect group and a subgroup of the inertial quotient. The purpose of this paper is to generalise the results from \cite{Benson/Kessar/Linckelmann:2019a} to blocks of group algebras over $k$ of finite groups that have a normal defect group $P$ which is no longer necessarily abelian, but still with abelian $p'$ inertial quotient $L$. By a theorem of K\"ulshammer~\cite{Kulshammer:1985a}, any such block is isomorphic to a matrix algebra over a twisted group algebra $k_\alpha(P\rtimes L)$ of the semidirect product $P\rtimes L$, for some $\alpha\in H^2(L, k^\times)$, inflated to $P\rtimes L$. So there is a central $p'$-extension \[ 1 \to Z \to H \to L \to 1 \] and an idempotent $e$ in $kZ$, such that $k_\alpha(P\rtimes L)\cong kGe$, where $G=P\rtimes H$. \begin{theorem}\label{th:qu} With the notation and hypotheses above, let $\tilde{\mathfrak{A}}$ be a basic algebra of the twisted group algebra $k_{\alpha}(P \rtimes L)$. Then $k_{\alpha}(P \rtimes L)$ is a matrix algebra over $\tilde{\mathfrak{A}}$ and $\tilde{\mathfrak{A}}$ has an explicit presentation as a quantised version of the group algebra $k(P\rtimes Z(H)/Z)$. \end{theorem} For the precise presentation and the proof, see Section \ref{se:ungrading}. There are several new ingredients required to extend the results from \cite{Benson/Kessar/Linckelmann:2019a} to nonabelian defect groups. We first consider the associated graded $\mathsf{gr}_*(kGe)=\bigoplus_{n\geqslant 0} J^n(kGe)/J^{n+1}(kGe)$ of $kGe$, briefly reviewed in the next section, and make use of the Jennings--Quillen theorem~\cite{Jennings:1941a,Quillen:1968a} and Semmen \cite{Semmen:2005a}. We show that $\mathsf{gr}_*(kGe)$ is isomorphic to a matrix algebra over a quantised version of the associated graded of the group algebra of the group $P\rtimes (Z(H)/Z)$.
Specialising to the case $\alpha =0 $, we get a presentation of $\mathsf{gr}_*(k (P \rtimes L))$ which we have not seen before in the literature (see Remark~\ref{rk:fAquantised}). The exact relations are stated in Theorem~\ref{th:rels}; see also Theorems~\ref{th:Q/I} and~\ref{th:AxM} and Corollary~\ref{co:basic}. We then show that this may be ungraded to exhibit the basic algebra of $kGe$ as a quantised version of the group algebra of $P\rtimes (Z(H)/Z)$, see Section~\ref{se:ungrading}. In Section~\ref{se:eg}, in order to illustrate the main results, we explicitly calculate the following examples of blocks with a normal extraspecial defect group of order $p^3$ and exponent $p$ having a single isomorphism class of simple modules. \begin{theorem} \label{pcubeexample} Suppose that $p$ is odd. Let $P$ be an extraspecial group of order $p^3$ and exponent $p$, let $H$ be a quaternion group of order $8$ acting on $P$ with $Z(H)$ acting trivially, and with the two generators of $H$ inverting the two generators of $P$. Set $G = P\rtimes H$. The basic algebra of the associated graded $\mathsf{gr}_*(kP)$ of $kP$ is given by generators $x$, $y$, $z$, subject to the relations \[ x^p=0,\quad y^p=0,\quad xy-yx=z,\quad xz-zx=0,\quad yz-zy=0 \] (these imply $z^p=0$) while the basic algebra of the associated graded $\mathsf{gr}_*(kGe)$ of the non-principal block $e$ of $kG$ is given by generators ${\sf x}$, ${\sf y}$, ${\sf z}$, subject to the relations \[ {\sf x}^p=0,\quad {\sf y}^p=0,\quad {\sf x} {\sf y} + {\sf y} {\sf x} = {\sf z}, \quad {\sf x} {\sf z} + {\sf z} {\sf x} = 0, \quad {\sf y} {\sf z} + {\sf z} {\sf y} = 0 \] (these imply ${\sf z}^p=0$). \end{theorem} In the case $p=3$, we can be more precise and explicitly describe the algebra $kP$ and a basic algebra of $kGe$ by `ungrading' the previous Theorem. \begin{theorem}\label{27example} With the notation of the previous theorem, assume that $p=3$. 
The algebra $kP$ is given by generators $\tilde x$, $\tilde y$, $\tilde z$, subject to the relations \[ \tilde x^3=0,\quad \tilde y^3=0,\quad \tilde x\tilde y-\tilde y\tilde x=\tilde z,\quad \tilde x\tilde z-\tilde z\tilde x=\tilde z\tilde y\tilde z, \quad \tilde y\tilde z-\tilde z\tilde y=-\tilde z\tilde x\tilde z \] (these imply $\tilde z^3=0$) while a basic algebra of $kGe$ is given by generators $\tilde {\sf x}$, $\tilde {\sf y}$, $\tilde {\sf z}$, subject to the relations \[ \tilde{\sf x}^3=0, \quad \tilde{\sf y}^3=0,\quad \tilde{\sf x}\tilde{\sf y}+\tilde{\sf y}\tilde{\sf x}=\tilde{\sf z}, \quad \tilde{\sf x} \tilde {\sf z} + \tilde {\sf z} \tilde {\sf x} = -\tilde {\sf z}\tilde {\sf y}\tilde {\sf z}, \quad \tilde {\sf y} \tilde {\sf z} + \tilde {\sf z} \tilde {\sf y}= -\tilde {\sf z} \tilde {\sf x} \tilde {\sf z} \] (these imply $\tilde{\sf z}^3=0$). \end{theorem} In Section \ref{se:char2} we give an example in characteristic two, with $P$ an extraspecial group of order $2^{1+4}$ and $H$ an extraspecial group of order $3^{1+2}$. Finally, the appendix contains some corrections to the calculations in \cite{Benson/Kessar/Linckelmann:2019a}. \noindent {\bf Notation.} The bracket $[-,-]$ is used in three different ways, depending on the context: as commutator $[g,h]=ghg^{-1}h^{-1}$ for elements $g$, $h$ in a multiplicatively written group, as Lie bracket in a Lie algebra, and as additive commutator $[a,b]=ab-ba$ for elements $a$, $b$ in an associative algebra. \noindent {\bf Acknowledgements.} The first author is grateful to City, University of London for its hospitality during the research for this paper, and to Ehud Meir for conversations about the proof of Theorem~\ref{th:27}. The second author acknowledges support from EPSRC grant EP/T004592/1.
\section{The associated graded} The associated graded of a finite-dimensional $k$-algebra $A$ is the graded algebra \[ \mathsf{gr}_*(A)=\bigoplus_{n\geqslant 0} J^n(A)/J^{n+1}(A), \] with the summands $J^n(A)/J^{n+1}(A)$ in degree $n$, where we adopt the convention $J^0(A)=A$. The image in $A/J(A)$ of a block idempotent of $A$ is a block idempotent of $\mathsf{gr}_*(A)$, and this induces a bijection between the blocks of $A$ and the blocks of $\mathsf{gr}_*(A)$. Similarly, the image in $A/J(A)$ of a primitive idempotent of $A$ is a primitive idempotent in $\mathsf{gr}_*(A)$. It follows that $A$ and $\mathsf{gr}_*(A)$ have the same quiver. Let $P$ be a finite $p$-group, $L$ an abelian $p'$-subgroup of $\mathsf{Aut}(P)$, and let $\alpha\in H^2(L, k^\times)$. Since $k$ is algebraically closed, the canonical group homomorphism $Z^2(L,k^\times)\to H^2(L,k^\times)$ splits (see for example Theorem~11.15 of Isaacs~\cite{Isaacs:1976a}). Thus we may represent $\alpha$ by a $2$-cocycle having the same order in $Z^2(L,k^\times)$ as its image in $H^2(L,k^\times)$, still denoted by $\alpha$. Such a choice of $\alpha$ yields a central $p'$-extension \[ 1 \to Z \to H \to L \to 1 \] and a faithful character $\chi : Z \to k^\times$ such that $Z = [H,H]$ and such that for some choice of inverse images $\hat x$ in $H$ of the elements $x\in L$, we have $$\alpha(x,y)=\chi(\hat x\hat y \widehat{xy}^{-1})$$ for all $x$, $y\in L$. Moreover, $|Z|$ is equal to the order of $\alpha$ in $H^2(L,k^\times)$; that is, the subgroup of $k^\times$ generated by the values of $\alpha$ is equal to $\chi(Z)$. Set $G = P \rtimes H$, where $H$ acts on $P$ via the canonical map $H\to L$, so that $Z=C_H(P)\leqslant Z(H)$, and hence $Z\leqslant Z(G)$.
Thus the idempotent \[ e=\frac{1}{|Z|}\sum_{z\in Z}\chi(z^{-1})z \] is a non-principal block of $kG$, and the canonical surjection $G\to P\rtimes L$ with kernel $Z$ induces an algebra isomorphism $$kGe \xrightarrow{\cong} k_\alpha (P\rtimes L), $$ where $\alpha$ is inflated to $P\rtimes L$ via the canonical surjection $P\rtimes L\to L$. We wish to describe $kGe$. This being difficult, we tackle first the associated graded algebra \[ \mathsf{gr}_*(kGe)=\bigoplus_{n\geqslant 0}J^n(kGe)/J^{n+1}(kGe). \] Our goal is to give an explicit presentation of this as a quantum deformation of the corresponding associated graded for the (untwisted) group algebra $k(P \rtimes Z(H)/Z)$. First, we recall the Jennings--Quillen theorem~\cite{Jennings:1941a,Quillen:1968a} for the associated graded of $kP$. Our treatment follows Section~3.14 of~\cite{Benson:1998b}. For $r\geqslant 1$, we have dimension subgroups \[ F_r(P)=\{g\in P \mid g-1 \in J^r(kP)\}. \] Thus $F_1(P)=P$, $F_2(P)=\Phi(P)$, $[F_r(P),F_s(P)]\subseteq F_{r+s}(P)$, and if $g\in F_r(P)$ then $g^p\in F_{pr}(P)$. Furthermore, $F_r(P)$ is the most rapidly descending central series with these properties. Define \[ \mathsf{Jen}_*(P) = \bigoplus_{r\geqslant 1}k\otimes_{\mathbb{F}_p} F_r(P)/F_{r+1}(P). \] Then $\mathsf{Jen}_*(P)$ is a $p$-restricted Lie algebra with Lie bracket induced by taking commutators in $P$ and $p$th power map coming from taking $p$th powers in $P$. As a restricted Lie algebra, $\mathsf{Jen}_*(P)$ is generated by its degree one elements because the subgroups $F_r(P)$ form the most rapidly descending central series with the properties mentioned above. Let $\mathcal{U}\mathsf{Jen}_*(P)$ be the restricted universal enveloping algebra of $\mathsf{Jen}_*(P)$ over $k$. As an associative algebra, $\mathcal{U}\mathsf{Jen}_*(P)$ is generated by its degree one elements.
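As a worked example of these definitions (a computation we record here, anticipating the extraspecial case of Theorem~\ref{pcubeexample}): if $P$ is extraspecial of order $p^3$ and exponent $p$, with $p$ odd, then $F_1(P)=P$, $F_2(P)=\Phi(P)=Z(P)$ has order $p$, and $F_3(P)=1$, since $[P,Z(P)]=1$ and every element of $Z(P)$ has trivial $p$-th power. Hence
\[
\dim_k \mathsf{Jen}_1(P)=2,\qquad \dim_k \mathsf{Jen}_2(P)=1,\qquad \mathsf{Jen}_r(P)=0 \ \text{for } r\geqslant 3,
\]
and Jennings' dimension formula gives the Hilbert series of $\mathsf{gr}_*(kP)$ as
\[
\left(\frac{1-t^p}{1-t}\right)^{2}\cdot\frac{1-t^{2p}}{1-t^{2}},
\]
which evaluates to $|P|=p^3$ at $t=1$, as it must.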
The commutator $[g,h]$ of two elements $g$, $h\in P$ becomes the Lie bracket of the images of $g$, $h$ in $\mathsf{Jen}_*(P)$, and the image of that Lie bracket in $\mathcal{U}\mathsf{Jen}_*(P)$ is in turn equal to the additive commutator of the images of $g$, $h$ in the associative algebra $\mathcal{U}\mathsf{Jen}_*(P)$. The Jennings--Quillen theorem states that there is a $k$-algebra isomorphism \[ \mathcal{U}\mathsf{Jen}_*(P)\to \mathsf{gr}_*(kP) \] which for any $r$ and any $g\in F_r(P)$ sends the image of $g$ in $F_r(P)/F_{r+1}(P)$ to the image of $g-1$ in $\mathsf{gr}_*(kP)$. The group action of $H$ on $P$ induces an action of $H$ on $\mathsf{Jen}_*(P)$ as a restricted Lie algebra, because the Lie bracket in $\mathsf{Jen}_*(P)$ is induced by taking commutators in $P$ and the $p$-power map in $\mathsf{Jen}_*(P)$ is induced by taking $p$-th powers in $P$. The Jennings--Quillen map is equivariant with respect to $H$, and therefore extends to an isomorphism \[ \mathcal{U}\mathsf{Jen}_*(P) \rtimes H \xrightarrow{\ \cong\ } \mathsf{gr}_*(kP)\rtimes H \xrightarrow{\ \cong\ } \mathsf{gr}_*(kG), \] where the second isomorphism uses the fact that $J(kG)=J(kP)kG=kGJ(kP)$ since $H$ is a $p'$-group (cf. \cite[Theorem 4]{Semmen:2005a}). Since we have a canonical bijection between the blocks of $kG$ and the blocks of $\mathsf{gr}_*(kG)$ as described at the beginning of this section, it follows that the blocks of both $kG$ and $\mathsf{gr}_*(kG)$ are the idempotents in $kZ$. \begin{remark}\label{rk:JenkGe} If $e$ is an idempotent in $kH$, then the restriction of the projective module $kGe$ to $P$ is a direct sum of $\dim_k(kHe)$ copies of $kP$. Furthermore, the radical layers of $kGe$ as a $kG$-module are the same as the radical layers as a $kP$-module. So we have \begin{align*} \sum_{i\geqslant 0} \dim_k \bigl(J^i(kGe)/J^{i+1}(kGe)\bigr)\,t^i&=\dim_k(kHe)\cdot \sum_{i\geqslant 0}\dim_k \bigl(J^i(kP)/J^{i+1}(kP)\bigr)\,t^i \\ &=\dim_k(kHe)\cdot\prod_r\left(\frac{1-t^{pr}}{1-t^r}\right)^{\dim_k\mathsf{Jen}_r(P)}. \end{align*} It can also be seen by restriction to $P$ that if $e$ is a central idempotent in $kH$ then the associated graded $\mathsf{gr}_*(kGe)$ of the algebra $kGe$ is generated by its degree zero and degree one elements. \end{remark} \begin{remark} The algebra $\mathcal{U}\mathsf{Jen}_*(P)$ is a finite dimensional cocommutative Hopf algebra, which defines a connected unipotent finite group scheme $\mathcal{P}$ whose group algebra is $k\mathcal{P}\cong \mathcal{U}\mathsf{Jen}_*(P)$. The finite group $H$ acts by automorphisms on $\mathcal{P}$, so we may form the semidirect product $\mathcal{G} = \mathcal{P} \rtimes H$, which is again a finite group scheme. \end{remark} \section{The quantum relations} In this section, we define an algebra $\mathfrak{A}$, which will turn out to be a basic algebra for $\mathsf{gr}_*(kGe)$. The quantum commutation rules for $\mathfrak{A}$ are given in Theorem~\ref{th:rels}, and the fact that $\mathfrak{A}$ is indeed a basic algebra is shown in Corollary~\ref{co:basic}. By \cite[Proposition 3.1]{Benson/Kessar/Linckelmann:2019a} we have a bijection \[ \mathsf{Irr}(Z(H)|\chi) \xrightarrow{\cong} \mathsf{Irr}(H|\chi),\qquad \phi \mapsto \tau_\phi \] between one-dimensional characters of $Z(H)$ lying over $\chi$ and irreducible characters of $H$ lying over $\chi$, such that $\tau_\phi$ lies over $\phi$. The central idempotent corresponding to $\tau_\phi$ is \[ e_\phi=\frac{1}{|Z(H)|}\sum_{h\in Z(H)}\phi(h^{-1})h. \] Then $\displaystyle e = \sum_{\phi\in\mathsf{Irr}(Z(H)|\chi)} e_\phi$, and hence \[ kHe = \prod_{\phi\in\mathsf{Irr}(Z(H)|\chi)} kHe_\phi\ . \] The factors $kHe_\phi$ are matrix algebras, corresponding to $\tau_\phi$, all of the same dimension. An element $\xi$ of $\mathsf{Hom}(H/Z, k^\times)$ induces an algebra automorphism of $kHe$ sending $he$ to $\xi(h)^{-1}he$.
This yields an action of $\mathsf{Hom}(H/Z, k^\times)$ on $kHe$ by algebra automorphisms which in turn induces a permutation action of $\mathsf{Hom}(H/Z, k^\times)$ on the set of factors $kHe_\phi$. The stabiliser of any factor is the subgroup $\mathsf{Irr}(H/Z(H))$ of $\mathsf{Irr}(H/Z)$ and elements of $\mathsf{Irr}(H/Z(H))$ act as inner automorphisms on each factor. Choose $\phi_0\in\mathsf{Irr}(Z(H)|\chi)$, and set $\tau=\tau_{\phi_0}$. For each $\phi \in \mathsf{Irr} (Z(H)| \chi)$, choose a one dimensional representation $\xi_\phi \in \mathsf{Irr}( H/Z)$ inflated to $H$ whose restriction to $Z(H)$ is $\phi\phi_0^{-1}$, and so that $\xi_{\phi_0}=1$. The $\xi_{\phi}$ form a set of coset representatives of $\mathsf{Irr}(H/Z(H))$ in $\mathsf{Irr}( H/Z)$. The algebra automorphism induced by $\xi_{\phi}$ sends $e_{\phi_0}$ to $e_\phi$, hence restricts to an algebra isomorphism $$kHe_{\phi_0} \cong kHe_\phi$$ sending $he_{\phi_0}$ to $\xi_\phi(h)^{-1}he_\phi$. Taking the product over all $\phi$ yields a unital injective algebra homomorphism $$kHe_{\phi_0} \to kHe$$ sending $he_{\phi_0}$ to $\sum_{\phi\in\mathsf{Irr}(Z(H)|\chi)}\ \xi_\phi(h)^{-1}he_\phi$. By the above, this homomorphism depends on the choice of the $\xi_\phi$, but only up to inner automorphisms of $kHe$. We write $\mathfrak{M}$ for the image in $kHe$ of the matrix algebra $kHe_{\phi_0}$ under this algebra homomorphism. This is a unital matrix subalgebra of $kHe$. We have a canonical homomorphism $\rho\colon H \to \mathsf{Hom}(H,k^\times)$ sending $g$ to $\rho(g)\colon h \mapsto \chi([h,g])$. The kernel of this homomorphism is $Z(H)$ and its image is $\mathsf{Hom}(H/Z(H),k^\times)$. For each $\psi \in \mathsf{Irr} (H/Z)$ and each $\phi \in \mathsf{Irr} (Z(H)|\chi)$, we write for simplicity $\phi\psi$ instead of $\phi\,(\psi\vert_{Z(H)})$. Then $\xi_{\phi\psi}\xi_\phi^{-1}\psi^{-1}$ is trivial on $Z(H)$.
So there exists an element $g_{\psi, \phi}\in H$ such that \[ \rho(g_{\psi,\phi})=\xi_{\phi\psi}\xi_\phi^{-1}\psi^{-1}, \] or equivalently, such that $$\chi([h,g_{\psi,\phi}]) = \xi_{\phi\psi}(h)\xi_\phi(h)^{-1}\psi(h)^{-1}$$ for all $h\in H$. We choose such elements $g_{\psi,\phi}$, one for each $\psi$ and $\phi$. Note that these elements are unique up to multiplication by elements in $Z(H)$. For any $\psi, \eta \in \mathsf{Irr}(H/Z)$ and any $\phi \in \mathsf{Irr} (Z(H)| \chi)$, we have $$ \rho (g_{\eta, \phi \psi} g_{\psi, \phi} )= \rho (g_{\eta, \phi \psi} ) \rho(g_{\psi, \phi}) =\xi_{\phi \psi\eta } \xi_{\phi}^{-1} \eta^{-1} \psi^{-1} = \rho(g_{\eta\psi, \phi}). $$ \begin{lemma}\label{glemma} Let $\psi_i, \eta_j \in \mathsf{Irr} (H/Z)$, $\phi_i, \zeta_j \in \mathsf{Irr}(Z(H) |\chi)$, $1\leqslant i \leqslant m$, $1\leqslant j \leqslant n$. Suppose that $\phi_i = \phi_{i-1} \psi_{i-1}$ for all $2 \leqslant i \leqslant m$. Then \begin{enumerate} \item $g_{\psi_m, \phi_m} \ldots g_{\psi_1, \phi_1} = g_{\psi_m \cdots\psi_1, \phi_1} z$ for some $z \in Z(H)$. \item Suppose further that $\zeta_j = \zeta_{j-1} \eta_{j-1}$ for all $2 \leqslant j\leqslant n$, $\phi_1= \zeta_1$ and $\psi_m \ldots \psi_1 = \eta_n \ldots \eta_1$. Then $g_{\psi_m, \phi_m} \ldots g_{\psi_1, \phi_1} = g_{\eta_n, \zeta_n} \ldots g_{\eta_1, \zeta_1} z'$ for some $z' \in Z(H)$. \end{enumerate} \end{lemma} \begin{proof} Since $Z(H) =\mathsf{Ker} (\rho )$, (i) follows by repeated application of the equation displayed above the lemma. Part (ii) follows from (i) applied to both $g_{\psi_m, \phi_m} \ldots g_{\psi_1, \phi_1}$ and $g_{\eta_n, \zeta_n} \ldots g_{\eta_1, \zeta_1}$.
\end{proof} \begin{chunk}\label{chunk:PBW} Since $k$ is algebraically closed, we may choose a $k$-basis $w_1,\dots,w_m$ of $\mathsf{Jen}_*(P)$, where $p^m=|P|$, consisting of homogeneous eigenvectors of the action of $H$. We arrange the indices in such a way that if $i\leqslant j$ then $\deg(w_i)\leqslant \deg(w_j)$. Then for each $w_i$ there is a character $\psi_i$ of $L$, inflated to $H$, such that \[ {^gw_i}=\psi_i(g)w_i \] for $g\in H$. Define structure constants $c_{i,j,k}$ and $d_{i,k}$ for $\mathsf{Jen}_*(P)$ via \[ [w_i,w_j]=\sum_k c_{i,j,k}w_k,\qquad w_i^{[p]}=\sum_k d_{i,k}w_k. \] Here, $[w_i,w_j]$ denotes the Lie bracket and $w_i^{[p]}$ the $p$-restriction map in $\mathsf{Jen}_*(P)$. We have \[ {^g[w_i,w_j]} = [{^gw_i}, {^gw_j}] = \psi_i(g)\psi_j(g)[w_i,w_j], \] \[ {^g(w_i^{[p]})} = (^gw_i)^{[p]} = (\psi_i(g)w_i)^{[p]} = \psi_i(g)^pw_i^{[p]}. \] It follows that if $c_{i,j,k}\ne 0$ then $\psi_i\psi_j=\psi_k$, and if $d_{i,k}\ne 0$ then $\psi_i^p=\psi_k$. By the Poincar\'e--Birkhoff--Witt (PBW) theorem for restricted Lie algebras (Jacobson~\cite{Jacobson:1962a}, page~190), the algebra $\mathcal{U}\mathsf{Jen}_*(P)\cong \mathsf{gr}_*(kP)$ has a basis $\mathcal{B}$ consisting of words $w_{i_1} \dots w_{i_r}$ where $i_1\leqslant \dots\leqslant i_r$, and each index is repeated at most $p-1$ times (so we are writing $w_i^a$ as $w_i \dots w_i$). We follow the convention that the empty word denotes the identity element in degree zero. The element $w_{i_1}\dots w_{i_r}$ is an eigenvector for the conjugation action of $H$, with character $\psi_{i_1}\dots\psi_{i_r}$. In what follows we identify $\mathsf{Jen}_*(P)$ with its image in $\mathcal{U}\mathsf{Jen}_*(P)\rtimes H$. The calculations that follow are similar to those in Section~4 of~\cite{Benson/Kessar/Linckelmann:2019a} (with the corrections described in Section~\ref{se:errata} below).
For any $\phi \in \mathsf{Irr}(Z(H)|\chi)$ and any basis element $w_i$ of $\mathsf{Jen}_*(P)$, with associated linear character $\psi_i$, we write $g_{i, \phi}$ for the element $g_{\psi_i,\phi}$. \end{chunk} \begin{lemma} \label{fALemma} With the notation above, the following equations in $(\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e$ hold for all $h\in H$, all basis elements $w_i$ of $\mathsf{Jen}_*(P)$, the associated linear characters $\psi_i\in\mathsf{Hom}(H,k^\times)$ and all $\phi \in \mathsf{Irr}(Z(H)|\chi)$. \begin{enumerate} \item[{\rm (i)}] $w_i e_\phi = e_{\phi\psi_i} w_i$. \item[{\rm (ii)}] $(g_{i,\phi}w_i)(\xi_\phi(h)^{-1}e_\phi\, h) = (\xi_{\phi\psi_i}(h)^{-1}e_{\phi\psi_i}\, h)(g_{i,\phi}w_i)$. \item[{\rm (iii)}] $g_{i,\phi}w_ie_\phi=e_{\phi\psi_i}g_{i,\phi}w_i$ commutes with $\mathfrak{M}$. \end{enumerate} \end{lemma} \begin{proof} We have $hw_ih^{-1}=\psi_i(h)w_i$, hence $w_i h= \psi_i(h)^{-1} hw_i$. Thus if $h\in Z(H)$, then $\phi(h)^{-1}w_ih = \phi(h)^{-1}\psi_i(h)^{-1}hw_i$. Taking the sum over all $h\in Z(H)$ and dividing by $|Z(H)|$ shows (i). Note that $[g,h]e=\chi([g,h])e$ for all $g$, $h\in H$. Thus $g_{i,\phi}he=\chi([h,g_{i,\phi}])^{-1} hg_{i,\phi}e$. It follows that $$\xi_\phi(h)^{-1} g_{i,\phi}w_ih e_\phi =\xi_\phi(h)^{-1} \psi_i(h)^{-1} g_{i,\phi}h w_i e_\phi= \xi_\phi(h)^{-1}\psi_i(h)^{-1}\chi([h,g_{i,\phi}])^{-1} hg_{i,\phi} e_{\phi\psi_i} w_i\ ,$$ where we have used (i). Note that $e_{\phi\psi_i}$ is central in $kH$. Using the definition of $\rho$, the scalar in the last expression is $\xi_{\phi\psi_i}(h)^{-1}$. This shows (ii). The equality in (iii) is the special case of (ii) applied with $h=1$. For the commutation with $\mathfrak{M}$, we need to check that the elements in the statement commute with expressions of the form $\sum_{\phi'} \xi_{\phi'}(h)^{-1}h e_{\phi'}$. This follows easily using (ii) and the fact that the $e_\phi$ are pairwise orthogonal.
\end{proof} \begin{defn}\label{def:fA} We define ${\sf w}_{i,\phi}=g_{i,\phi}w_ie_\phi$, and let $\mathfrak{A}$ be the subalgebra of $(\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e$ generated by the elements $e_\phi$ and ${\sf w}_{i,\phi}$. \end{defn} By Lemma \ref{fALemma}, the subalgebras $\mathfrak{A}$ and $\mathfrak{M}$ of $(\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e$ commute. \begin{lemma}\label{deg1genlemma} The algebra $\mathfrak{A}$ is generated by the elements $e_\phi$ and ${\sf w}_{i,\phi}$ for those $i$ such that the element $w_i$ of $\mathsf{Jen}_*(P)$ has degree one. \end{lemma} \begin{proof} Since $\mathsf{Jen}_*(P)$ is generated by elements in degree one, there exists a basis ${\mathcal V}$ of $\mathcal{U}\mathsf{Jen}_*(P)$ consisting of a subset of the set of monomials in the degree one $w_i$'s. Let $w_t$ be an arbitrary element of the chosen basis of $\mathsf{Jen}_*(P)$ and write $$ w_t =\sum_{ v\in {\mathcal V}} \alpha_v v. $$ If $u, u' \in \mathcal{U}\mathsf{Jen}_*(P)$ are eigenvectors for the $H$ action corresponding to characters $\psi$ and $\psi'$ respectively, then $uu'$ is an $H$-eigenvector with corresponding character $\psi \psi'$. From this it follows that if a monomial $v= w_{i_m} \ldots w_{i_1}$ in degree one elements $w_{i_j}$ is an element of ${\mathcal V}$ such that $\alpha_v \ne 0$, then $\psi_t = \psi_{i_m} \ldots \psi_{i_1}$, where for each $j$, $1\leqslant j \leqslant m$, $\psi_{i_j} \in\mathsf{Irr}( H/Z)$ is the character of $H$ corresponding to the action on $w_{i_j}$. Let $\zeta \in \mathsf{Irr}( Z(H) | \chi )$ and let $v$ be as above. By Lemma~\ref{glemma}, writing $\psi=\psi_t$, $$ g_{\psi, \zeta } = g_{i_m, \phi_m} \ldots g_{i_1, \phi_1 } z$$ where $z \in Z(H)$, $\phi_1 = \zeta$ and $\phi_j = \phi_{j-1}\psi_{i_{j-1}}$, $2\leqslant j \leqslant m$.
On the other hand, since every $w_{i_j}$ is an eigenvector for the $H$ action, $$ v g_{i_m, \phi_m} \ldots g_{i_1, \phi_1 } =\beta_v w_{i_m}g_{i_m, \phi_m} \ldots w_{i_1} g_{i_1, \phi_1} $$ for some $\beta_v\in k^{\times}$. Since $z e_{\zeta}$ is a non-zero scalar multiple of $e_{\zeta}$, the above equation and Lemma~\ref{fALemma} (iii) give that $$ v g_{\psi, \zeta} e_{\zeta} = q_v {\sf w}_{i_m, \phi_m} \ldots {\sf w}_{i_1, \phi_1 } $$ for some non-zero scalar $q_v$. Since all $w_{i_j}$ are in degree one, and since each $v\in{\mathcal V}$ is an $H$-eigenvector (so that $g_{\psi,\zeta}ve_\zeta$ is a non-zero scalar multiple of $vg_{\psi,\zeta}e_\zeta$), it follows that $$ {\sf w}_{t, \zeta} = g_{\psi, \zeta} w_t e_{\zeta} = \sum_{ v\in {\mathcal V}} \alpha_v\, g_{\psi, \zeta} v e_{\zeta} $$ is a linear combination of monomials in the ${\sf w}_{i, \phi}$ for those $i$ such that $w_i$ has degree one. \end{proof} \begin{defn} We define elements $z_{i,j,\phi}$, $z'_{i,j,k,\phi}$ and $z''_{i,k,\phi}$ in $Z(H)$ as follows. By Lemma \ref{glemma} we have \[ g_{j,\phi\psi_i}g_{i,\phi}=g_{i,\phi\psi_j}g_{j,\phi}z_{i,j,\phi} \] for some $z_{i,j,\phi}\in Z(H)$. If $c_{i,j,k}\ne 0$ then \[ g_{j,\phi\psi_i}g_{i,\phi}=g_{k,\phi}z'_{i,j,k,\phi} \] for some $z'_{i,j,k,\phi}\in Z(H)$. If $d_{i,k}\ne 0$ then \[ g_{i,\phi\psi_i^{p-1}}\dots g_{i,\phi\psi_i}g_{i,\phi}=g_{k,\phi}z''_{i,k,\phi} \] for some $z''_{i,k,\phi}\in Z(H)$. \end{defn} \begin{remark} We have \[ z_{i,j,\phi}e_\phi=\phi(z_{i,j,\phi})e_\phi,\qquad z'_{i,j,k,\phi}e_\phi=\phi(z'_{i,j,k,\phi})e_\phi,\qquad z''_{i,k,\phi}e_\phi=\phi(z''_{i,k,\phi})e_\phi. \] Also, we have $z_{i,j,\phi}z_{j,i,\phi}=1$, and if $c_{i,j,k}\ne 0$ then $z'_{i,j,k,\phi}=z'_{j,i,k,\phi}z_{i,j,\phi}$.
\end{remark} \begin{theorem}\label{th:rels} Defining constants \begin{align*} q_{i,j,\phi}&=\psi_i(g_{j,\phi}z_{i,j,\phi})\psi_j(g_{i,\phi}^{-1}z_{i,j,\phi})\phi(z_{i,j,\phi}), \\ q'_{i,j,k,\phi}&=\psi_j(g_{i,\phi})^{-1}\psi_k(z'_{i,j,k,\phi})\phi(z'_{i,j,k,\phi}), \\ q''_{i,k,\phi}&=\psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\psi_i(g_{i,\phi\psi_i})^{-p+2} \psi_i(g_{i,\phi})^{-p+1}\psi_k(z''_{i,k,\phi})\phi(z''_{i,k,\phi}) , \end{align*} we have \begin{align} {\sf w}_{j,\phi\psi_i}{\sf w}_{i,\phi}-q_{i,j,\phi} {\sf w}_{i,\phi\psi_j}{\sf w}_{j,\phi}&= \sum_kc_{i,j,k}q'_{i,j,k,\phi}{\sf w}_{k,\phi} \label{eq:rels1} \\ {\sf w}_{i,\phi\psi_i^{p-1}}\dots {\sf w}_{i,\phi\psi_i}{\sf w}_{i,\phi}&= \sum_kd_{i,k}\,q''_{i,k,\phi}{\sf w}_{k,\phi}. \label{eq:rels2} \end{align} By changing the choices of $g_{i,\phi}$ by elements of $Z(H)$, we may ensure that $z_{i,j,\phi}\in Z$, and then the formula for the parameters $q_{i,j,\phi}$ simplifies to \[ q_{i,j,\phi}=\psi_i(g_{j,\phi})\psi_j(g_{i,\phi}^{-1})\chi(z_{i,j,\phi}).
\] \end{theorem} \begin{proof} We have \begin{align*} {\sf w}_{j,\phi\psi_i}{\sf w}_{i,\phi}&= (g_{j,\phi\psi_i}w_je_{\phi\psi_i})(g_{i,\phi}w_ie_\phi)\\ &=g_{j,\phi\psi_i}w_jg_{i,\phi}w_ie_\phi \\ &=\psi_j(g_{i,\phi})^{-1}g_{j,\phi\psi_i}g_{i,\phi}w_jw_ie_\phi \\ &=\psi_j(g_{i,\phi})^{-1}g_{j,\phi\psi_i}g_{i,\phi}(w_iw_j+[w_i,w_j])e_\phi\\ &=\psi_j(g_{i,\phi})^{-1}g_{i,\phi\psi_j}g_{j,\phi}z_{i,j,\phi}w_iw_je_\phi \\ &\qquad{}+\psi_j(g_{i,\phi})^{-1}g_{j,\phi\psi_i}g_{i,\phi}[w_i,w_j]e_\phi\\ &=\psi_j(g_{i,\phi})^{-1}\psi_i(z_{i,j,\phi})\psi_j(z_{i,j,\phi})g_{i,\phi\psi_j}g_{j,\phi}w_iw_jz_{i,j,\phi}e_\phi \\ &\qquad{}+\sum_kc_{i,j,k}\psi_j(g_{i,\phi})^{-1}g_{k,\phi}z'_{i,j,k,\phi}w_ke_\phi \\ &=\psi_j(g_{i,\phi})^{-1}\psi_i(z_{i,j,\phi})\psi_j(z_{i,j,\phi})\psi_i(g_{j,\phi})\phi(z_{i,j,\phi})g_{i,\phi\psi_j}w_ig_{j,\phi}w_je_\phi \\ &\qquad{}+\sum_kc_{i,j,k}\psi_j(g_{i,\phi})^{-1}\psi_k(z'_{i,j,k,\phi})g_{k,\phi}w_kz'_{i,j,k,\phi}e_\phi\\ &=\psi_i(g_{j,\phi}z_{i,j,\phi})\psi_j(g^{-1}_{i,\phi}z_{i,j,\phi})\phi(z_{i,j,\phi})(g_{i,\phi\psi_j}w_ie_{\phi\psi_j})(g_{j,\phi}w_je_\phi) \\ &\qquad{}+\sum_kc_{i,j,k}\psi_j(g_{i,\phi})^{-1}\psi_k(z'_{i,j,k,\phi})\phi(z'_{i,j,k,\phi})g_{k,\phi} w_ke_\phi\\ &=q_{i,j,\phi}{\sf w}_{i,\phi\psi_j}{\sf w}_{j,\phi} +\sum_kc_{i,j,k}q'_{i,j,k,\phi}{\sf w}_{k,\phi}. 
\end{align*} Similarly, \begin{align*} {\sf w}_{i,\phi\psi_i^{p-1}}&\dots {\sf w}_{i,\phi\psi_i}{\sf w}_{i,\phi} \\ &= (g_{i,\phi\psi_i^{p-1}}w_ie_{\phi\psi_i^{p-1}})\dots (g_{i,\phi\psi_i}w_ie_{\phi\psi_i})(g_{i,\phi}w_ie_\phi)\\ &=g_{i,\phi\psi_i^{p-1}}w_i\dots g_{i,\phi\psi_i}w_ig_{i,\phi}w_ie_\phi \\ &=\psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\psi_i(g_{i,\phi\psi_i})^{-p+2} \psi_i(g_{i,\phi})^{-p+1}(g_{i,\phi\psi_i^{p-1}}\dots g_{i,\phi\psi_i}g_{i,\phi})w_i^pe_\phi\\ &=\sum_kd_{i,k}\psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\psi_i(g_{i,\phi\psi_i})^{-p+2} \psi_i(g_{i,\phi})^{-p+1}g_{k,\phi}z''_{i,k,\phi}w_ke_\phi\\ &=\sum_kd_{i,k}\psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\psi_i(g_{i,\phi\psi_i})^{-p+2} \psi_i(g_{i,\phi})^{-p+1}\psi_k(z''_{i,k,\phi})g_{k,\phi}w_kz''_{i,k,\phi}e_\phi\\ &=\sum_kd_{i,k}\psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\psi_i(g_{i,\phi\psi_i})^{-p+2} \psi_i(g_{i,\phi})^{-p+1}\psi_k(z''_{i,k,\phi})\phi(z''_{i,k,\phi})g_{k,\phi}w_ke_\phi\\ &=\sum_k d_{i,k}q''_{i,k,\phi}{\sf w}_{k,\phi}. \end{align*} For the final remark, the argument of Lemma~4.12\,(3) of~\cite{Benson/Kessar/Linckelmann:2019a} shows that we may change the choices of $g_{i,\phi}$ by elements of $Z(H)$ to ensure that $z_{i,j,\phi}\in Z$. The characters $\psi_i$ then take the value one on these elements, which gives the stated simplification of the constants. \end{proof} Recall that by Lemma~\ref{deg1genlemma}, $\mathfrak{A}$ is generated by the $e_{\phi}$ and the ${\sf w}_{i, \phi}$ for those $i$ such that the element $w_i$ of $\mathsf{Jen}_*(P)$ has degree one.
\begin{theorem}\label{th:Q/I} The algebra $\mathfrak{A}$ is given as a quiver with relations $kQ/I$, where $Q$ is the quiver with $|Z(H):Z|$ vertices labelled $[\phi]$ corresponding to the idempotents $e_\phi \in kZ(H)$ lying over $\chi$ and directed edges \[ [\phi] \xrightarrow{\quad i\quad }[\phi\psi_i] \] corresponding to the element \[ {\sf w}_{i,\phi}=g_{i,\phi}w_ie_\phi = e_{\phi\psi_i}g_{i,\phi}w_i= e_{\phi\psi_i}g_{i,\phi}w_ie_\phi \] for those $i$ such that the element $w_i$ of $\mathsf{Jen}_*(P)$ has degree one. The relations are those that follow from the structure constant relations of Theorem~\ref{th:rels}, where for each $k$ such that $w_k$ is in degree greater than or equal to $2$, any ${\sf w}_{k,\zeta}$ appearing in Theorem~\ref{th:rels} is replaced by an element in $kQ$ corresponding via Lemma~\ref{deg1genlemma} to an expression for ${\sf w}_{k,\zeta}$ in terms of the ${\sf w}_{i, \phi}$ such that $w_i$ has degree one. There is a PBW style basis $\mathcal{B}'$ for $\mathfrak{A}$ (described below in the proof), consisting of composable monomials in the ${\sf w}_{i,\phi}$, giving $\dim(kQ/I)=\dim(\mathfrak{A})=|Z(H):Z|\cdot|P|$. \end{theorem} \begin{proof} By Lemma~\ref{deg1genlemma}, $\mathfrak{A}$ is generated by the idempotents $e_\phi$ and the elements ${\sf w}_{i,\phi}$. By Lemma~\ref{fALemma} and Theorem~\ref{th:rels} they satisfy the given relations. Thus we have a surjective homomorphism from $kQ/I$ to $\mathfrak{A}$ taking $[\phi]$ to $e_\phi$ and $[\phi] \xrightarrow{\quad i \quad}[\phi\psi_i]$ to ${\sf w}_{i,\phi}$. The relations holding in $kQ/I$ allow us to write every element of $\mathfrak{A}$ as a linear combination of elements of the set $\mathcal{B}'$ consisting of the $e_\phi$ and composable monomials in the ${\sf w}_{i,\phi}$ where the indices $i$ are in order, and each index $i$ is repeated at most $p-1$ times. The number of such monomials (including the $e_\phi$) is $|Z(H):Z|\cdot |P|$.
Replacing $e_\phi$ by $[\phi]$, replacing ${\sf w}_{i, \phi}$ by $[\phi] \xrightarrow{\quad i \quad}[\phi\psi_i]$ for those $i$ such that $w_i$ has degree one, and replacing ${\sf w}_{i, \phi}$ by their chosen lifts in $kQ$ for those $i$ such that $w_i$ has degree greater than or equal to two, we see by the same reasoning that $\dim(kQ/I)$ is at most $|Z(H):Z|\cdot |P|$. If there were a linear relation in $\mathfrak{A}$ between the monomials in $\mathcal{B}'$, then there would be a linear relation between the ones of maximal length, namely length $m(p-1)$. There is one of these for each $\phi$, and they are linearly independent elements of the socle of $kG$ because they are non-zero elements of different projective summands $kGe_\phi$. Thus $\dim(\mathfrak{A})$ is equal to $|Z(H):Z|\cdot |P|$ and $kQ/I \to \mathfrak{A}$ is an isomorphism. \end{proof} \begin{rk}\label{rk:fAquantised} The group algebra of the semidirect product $P\rtimes Z(H)/Z$, with the action given by restricting the action of $H/Z$ on $P$, has only one block. We can perform the computations above for this group, and the results look similar, except that the factors of $\phi(z_{i,j,\phi})$, $\phi(z'_{i,j,k,\phi})$, and $\phi(z''_{i,k,\phi})$ in the definitions of $q_{i,j,\phi}$, $q'_{i,j,k,\phi}$, and $q''_{i,k,\phi}$ are missing in Theorem~\ref{th:Q/I}. So removing these factors, the relations in Theorem~\ref{th:rels} are the relations in $\mathsf{gr}_*(k(P\rtimes Z(H)/Z))\cong \mathcal{U}\mathsf{Jen}_*(P)\rtimes Z(H)/Z$. Thus we can see $\mathfrak{A}$ as a quantum deformation of the algebra $\mathcal{U}\mathsf{Jen}_*(P)\rtimes Z(H)/Z$. Also, we note that in the case that $\alpha=0$, we have $Z=1$, $H=L$ and Theorem~\ref{th:Q/I} provides an explicit presentation of $\mathsf{gr}_*(k(P\rtimes L))$. \end{rk} As in~\cite{Benson/Kessar/Linckelmann:2019a}, we now make use of the following lemma (see Chapter~3, Corollary~4.3 in Bass~\cite{Bass:1967a}).
\begin{lemma}\label{le:Bass} Let $A\leqslant B$ be $k$-algebras with $A$ an Azumaya algebra (that is, a finite-dimensional central separable $k$-algebra). Then the map $A \otimes_k C_B(A) \to B$ is an isomorphism.\qed \end{lemma} \begin{theorem}\label{th:AxM} The multiplication in $(\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e$ induces an isomorphism $$\mathfrak{A}\otimes_k \mathfrak{M} \xrightarrow{\ \cong\ } (\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e .$$ \end{theorem} \begin{proof} The proof is similar to that of Theorem~4.15 of~\cite{Benson/Kessar/Linckelmann:2019a}. Applying Lemma~\ref{le:Bass} with $A=\mathfrak{M}$ and $B$ the subalgebra generated by $\mathfrak{A}$ and $\mathfrak{M}$, we see that the given map is injective. The dimensions are given by $\dim(\mathfrak{A})=|Z(H):Z|\cdot |P|$, $\dim(\mathfrak{M})=|H:Z(H)|$ and $\dim((\mathcal{U}\mathsf{Jen}_*(P)\rtimes H) e)=$ $\dim(kGe)=|G:Z|$, so $\dim((\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e)=\dim(\mathfrak{A})\cdot\dim(\mathfrak{M})$ and the map is an isomorphism. \end{proof} \begin{cor}\label{co:basic} We have \[ \mathsf{gr}_*(kGe)\cong (\mathcal{U}\mathsf{Jen}_*(P)\rtimes H)e \cong \mathsf{Mat}_m(\mathfrak{A}), \] where $m=\sqrt{|H:Z(H)|}$. In particular, $\mathfrak{A}$ is a basic algebra of $\mathsf{gr}_*(kGe)$.\qed \end{cor} \begin{cor} The algebra $\mathfrak{A}$ is generated by its degree zero and degree one elements. \end{cor} \begin{proof} This follows from Corollary~\ref{co:basic} and Remark~\ref{rk:JenkGe}. \end{proof} \section{Ungrading the relations}\label{se:ungrading} We saw in the last section that the relations for the basic algebra of $\mathsf{gr}_*(kGe)$ are a quantised version of the relations for $\mathsf{gr}_*(kP\rtimes(Z(H)/Z))$. In this section, we show that the same holds without taking the associated graded. Since $|H|$ is coprime to $p$, the characteristic of $k$, we can choose invariant complements to $J^{n+1}(kP)$ in $J^n(kP)$ for each $n\geqslant 0$.
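The invariant complements here come from the standard averaging argument: since $|H|$ is invertible in $k$, averaging any linear projection onto an $H$-stable subspace over the group yields an $H$-equivariant projection, and its kernel is an invariant complement. As a toy illustration of our own (not part of the construction above), the following sketch averages a projection for the swap action of $C_2$ on $\mathbb{F}_3^2$:

```python
import numpy as np

# Toy case of the averaging trick: C2 = {1, s} acts on GF(3)^2 by swapping
# coordinates, and |C2| = 2 is invertible mod 3.  W = span((1,1)) is an
# invariant subspace; averaging a projection onto W over the group gives an
# equivariant projection whose kernel is an invariant complement to W.
p = 3
one = np.eye(2, dtype=int)
s = np.array([[0, 1], [1, 0]])        # the swap; s is its own inverse
group = [one, s]

P0 = np.array([[1, 0], [1, 0]])       # some projection with image W
inv_order = pow(2, -1, p)             # 1/|C2| in GF(3), i.e. 2

# Average h P0 h^{-1} over the group (here h^{-1} = h for both elements).
P = (inv_order * sum(h @ P0 @ h for h in group)) % p
```

Here $P$ is the identity on $W$ and commutes with the group action, and its kernel, spanned by $(1,2)$, is the invariant complement; applying the same averaging degreewise to projections of $J^n(kP)$ onto $J^{n+1}(kP)$ produces the complements used above.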
Let $w_1,\dots,w_m$ be the basis of $\mathsf{Jen}_*(P)$ chosen in Section~\ref{chunk:PBW}, and let $\mathcal{B}$ be the resulting PBW basis of $\mathcal{U}\mathsf{Jen}_*(P)\cong \mathsf{gr}_*(kP)$ described there. Regarding $\mathsf{Jen}_*(P)$ as a $k$-linear subspace of $\mathsf{gr}_*(kP)$, this enables us to choose representatives $\tilde w_i$ in $kP$ of the $w_i$ in such a way that \[ g\tilde w_i g^{-1} = \psi_i(g)\tilde w_i. \] Let $\tilde \mathcal{B}$ be the corresponding basis of $kP$ consisting of monomials in the $\tilde w_i$. That is, if $w_{i_1} \dots w_{i_r}$ is an element of $\mathcal{B}$ then the corresponding element of $\tilde \mathcal{B}$ is $\tilde w_{i_1} \dots \tilde w_{i_r}$. An element $\tilde w_{i_1} \ldots \tilde w_{i_r}$ of $\tilde\mathcal{B}$ is an eigenvector for the action of $H$ for the character $\psi_{i_1} \ldots \psi_{i_r}$. When we ungrade a relation of the form $[w_i,w_j]=\sum_kc_{i,j,k}w_k$, we obtain a relation of the form \begin{equation}\label{eq:[xi,xj]} [\tilde w_i,\tilde w_j] = \sum_kc_{i,j,k}\tilde w_k + y_{i,j} \end{equation} in $kP$, where $y_{i,j}$ is a linear combination of elements of $\tilde\mathcal{B}$ in a higher power of the radical than $\deg(w_i)+\deg(w_j)$. Moreover, each basis monomial $\tilde w_{i_1} \ldots \tilde w_{i_r}$ that occurs in $y_{i,j}$ is an eigenvector for the character $\psi_i \psi_j$, and consequently \begin{equation} \label{eq:ijproduct} \psi_{i_1} \ldots \psi_{i_r} =\psi_i \psi_j . \end{equation} Similarly, when we ungrade a relation of the form $w_i^p=\sum_kd_{i,k}w_k$, we obtain a relation of the form \begin{equation}\label{eq:xi^p} \tilde w_i^p=\sum_kd_{i,k}\tilde w_k + y''_{i} \end{equation} in $kP$, where $y''_{i}$ is a linear combination of monomial basis elements in a higher power of the radical than $p\cdot\deg(w_i)$.
Each basis monomial $\tilde w_{i_1}\dots\tilde w_{i_r}$ that occurs in $y''_{i}$ is an eigenvector for the character $\psi_i^p$ and consequently \begin{equation} \label{eq:xi^pproduct} \psi_{i_1} \ldots \psi_{i_r} =\psi_i^p. \end{equation} \begin{defn} As in Definition~\ref{def:fA}, we define $\tilde {\sf w}_{i,\phi}=g_{i,\phi}\tilde w_ie_\phi$. Then $\tilde {\sf w}_{i,\phi}$ commutes with $\mathfrak{M}$. We define $\tilde\mathfrak{A}$ to be the subalgebra of $kGe$ generated by the elements $e_\phi$ and $\tilde {\sf w}_{i,\phi}$. For an element $\tilde w=\tilde w_{i_1} \ldots \tilde w_{i_r}$ of $\tilde\mathcal{B}$ and a character $\phi\in\mathsf{Irr}(Z(H)|\chi)$, set $\tilde {\sf w}_{\phi} = \tilde {\sf w}_{i_1, \phi \psi_{i_2}\cdots \psi_{i_r}}\dots \tilde {\sf w}_{i_{r-1}, \phi \psi_{i_r}} \tilde {\sf w}_{i_r, \phi}$. Denote by $\tilde \mathcal{B}'$ the subset of $\tilde\mathfrak{A}$ consisting of the elements $\tilde {\sf w}_\phi$ for $\tilde w$ in $\tilde\mathcal{B}$ and $\phi\in\mathsf{Irr}(Z(H)|\chi)$. We shall see below in Theorem~\ref{th:grfA} that $\tilde\mathcal{B}'$ is a basis for $\tilde\mathfrak{A}$. \end{defn} \begin{prop} \label{p:ungradedrelations} The elements $\tilde {\sf w}_{i,\phi}$ satisfy the following relations. \begin{align*} \tilde {\sf w}_{j,\phi\psi_i}\tilde {\sf w}_{i,\phi}-q_{i,j,\phi}\tilde {\sf w}_{i,\phi\psi_j}\tilde {\sf w}_{j,\phi} &= \sum_kc_{i,j,k}q'_{i,j,k,\phi}\tilde {\sf w}_{k,\phi} +\psi_j(g_{i,\phi})^{-1}g_{j,\phi\psi_i}g_{i,\phi}y_{i,j}e_\phi. \\ \tilde {\sf w}_{i,\phi\psi_i^{p-1}}\dots \tilde {\sf w}_{i,\phi\psi_i}\tilde {\sf w}_{i,\phi} &= \sum_k d_{i,k}q''_{i,k,\phi}\tilde {\sf w}_{k,\phi} + \psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots\\ &\qquad\qquad\dots\psi_i(g_{i,\phi\psi_i})^{-p+2}\psi_i(g_{i,\phi})^{-p+1} (g_{i,\phi\psi_i^{p-1}}\dots g_{i,\phi\psi_i}g_{i,\phi})y''_ie_\phi.
\end{align*} Moreover, suppose that $y_{i,j} = \sum_{\tilde w \in \tilde\mathcal{B}} c_{i,j,\tilde w} \tilde w$ and $y_i''= \sum _{\tilde w \in \tilde\mathcal{B}} d_{i,\tilde w} \tilde w$. For each $\tilde w \in \tilde\mathcal{B}$, there exist elements $q'_{i,j,\tilde w, \phi}$ and $q''_{i,\tilde w,\phi}$ of $k^{\times}$ such that \begin{align*} g_{j,\phi\psi_i}g_{i,\phi}y_{i,j}e_\phi= \sum_{\tilde w \in \tilde\mathcal{B}} q'_{i,j,\tilde w,\phi} c_{i,j,\tilde w} \tilde {\sf w}_\phi. \\ (g_{i,\phi\psi_i^{p-1}}\dots g_{i,\phi\psi_i}g_{i,\phi})y''_ie_\phi = \sum_{\tilde w \in \tilde\mathcal{B}} q''_{i,\tilde w,\phi} d_{i,\tilde w} \tilde {\sf w}_\phi. \end{align*} \end{prop} \begin{proof} Following through the proof of relation~\eqref{eq:rels1}, we can replace each $w_i$ with $\tilde w_i$ until the sixth line, where we have to use~\eqref{eq:[xi,xj]} for the commutator. At this point, the extra term is \[ \psi_j(g_{i,\phi})^{-1}g_{j,\phi\psi_i}g_{i,\phi}y_{i,j}e_\phi. \] Similarly, following through the proof of relation~\eqref{eq:rels2}, we can replace each $w_i$ with $\tilde w_i$ until the fourth line, where we have to use~\eqref{eq:xi^p}. At this point, the extra term is \begin{equation*} \psi_i(g_{i,\phi\psi_i^{p-2}})^{-1}\dots \psi_i(g_{i,\phi\psi_i})^{-p+2}\psi_i(g_{i,\phi})^{-p+1} (g_{i,\phi\psi_i^{p-1}}\dots g_{i,\phi\psi_i}g_{i,\phi})y''_ie_\phi. \end{equation*} By Lemma~\ref{glemma} and Equation~\eqref{eq:ijproduct}, for each $\tilde w = \tilde w_{i_1} \ldots \tilde w_{i_r} \in \tilde\mathcal{B}$ such that $c_{i,j,\tilde w} \ne 0$, \[ g_{j,\phi\psi_i}g_{i,\phi}= g_{i_1, \phi \psi_{i_2}\cdots \psi_{i_r}} \ldots g_{i_{r-1}, \phi \psi_{i_r}} g_{i_r, \phi} z \] for some $z \in Z(H)$.
The second assertion follows from this by the fact that for any $g\in H$, any $z \in Z(H)$, any $\tilde w_i$, and any $\zeta \in \mathsf{Irr}(Z(H) | \chi)$, the element $g\tilde w_i =\psi_i(g)\tilde w_i g$ is a scalar multiple of $\tilde w_i g$, the element $ze_{\zeta} =\zeta(z) e_\zeta$ is a scalar multiple of $e_\zeta$, and $g\tilde w_i e_{\zeta} = e_{\zeta\psi_i} g \tilde w_i e_\zeta$. The last assertion follows in a similar fashion from Lemma~\ref{glemma} and Equation~\eqref{eq:xi^pproduct}. \end{proof} \begin{theorem}\label{th:grfA} The algebra $\tilde\mathfrak{A}$ is given as a quiver with relations $kQ/\tilde I$, where $Q$ is as in Theorem~\ref{th:Q/I}, but with edges corresponding to the lifts $\tilde {\sf w}_{i,\phi}$ of the ${\sf w}_{i,\phi}$ given there. The relations are those that follow from the structure constant relations of Proposition~\ref{p:ungradedrelations}, together with relations saying that every composite of at least $s$ arrows is zero, where $s$ is the radical length of $kP$. The set $\tilde\mathcal{B}'$ is a PBW style basis of $\tilde\mathfrak{A}$, giving $\dim(\tilde\mathfrak{A})= |Z(H):Z|\cdot|P|$. There is a natural isomorphism $\mathsf{gr}_*(\tilde \mathfrak{A})\cong \mathfrak{A}$, sending each $\tilde {\sf w}_{i,\phi}$ to ${\sf w}_{i,\phi}$. \end{theorem} \begin{proof} It follows from the relations in Proposition~\ref{p:ungradedrelations} that the linear span of $\tilde\mathcal{B}'$ is closed under multiplication modulo a large enough power of the arrow ideal. The zero relations for composites of $s$ arrows then show that this ideal is zero, and therefore that $\tilde\mathcal{B}'$ linearly spans $\tilde\mathfrak{A}$. The image of an element $\tilde {\sf w}_{i,\phi}$ in $\mathsf{gr}_*(kGe)$ is equal to ${\sf w}_{i,\phi}$, which lies in $\mathfrak{A}$.
Since the elements of $\mathcal{B}'$ are linearly independent, it follows that the elements of $\tilde\mathcal{B}'$ are linearly independent, and therefore form a basis for $\tilde\mathfrak{A}$. This therefore induces a natural isomorphism $\mathsf{gr}_*(\tilde\mathfrak{A})\cong \mathfrak{A}$. Since $\mathfrak{A}$ is generated by its degree zero and degree one elements, $\tilde\mathfrak{A}$ has the same quiver, with the lifts of the relations. \end{proof} \begin{theorem}\label{th:tildeAisbasic} The multiplication in $kGe$ induces an isomorphism $\tilde\mathfrak{A}\otimes_k \mathfrak{M} \to kGe$. \end{theorem} \begin{proof} By Theorem~\ref{th:grfA} we have $\dim(\tilde\mathfrak{A})=|Z(H):Z|\cdot|P|$. So this is now proved in the same way as Theorem~\ref{th:AxM}. \end{proof} \begin{cor} We have $kGe\cong \mathsf{Mat}_m(\tilde\mathfrak{A})$, where $m=\sqrt{|H:Z(H)|}$, so that $\tilde \mathfrak{A}$ is the basic algebra of $kGe$.\qed \end{cor} \begin{rk} As in Remark~\ref{rk:fAquantised}, if we perform the computations of this section with the group algebra of the semidirect product $P\rtimes Z(H)/Z$ instead of $kGe$, the results look similar except with different scalars. So we can see $\tilde\mathfrak{A}$ as a quantum deformation of the algebra $k(P\rtimes Z(H)/Z)$. This observation, together with Theorems~\ref{th:grfA} and~\ref{th:tildeAisbasic}, completes the proof of Theorem~\ref{th:qu}. \end{rk} We shall see some explicit examples of the ungrading of the relations in Section~\ref{se:eg}. \section{\texorpdfstring{Example: $P$ extraspecial of order $p^3$ and exponent $p$}{Example: P extraspecial of order p³ and exponent p}} \label{se:eg} Let $k$ have characteristic $p$, an odd prime, and let $P$ be an extraspecial $p$-group of order $p^3$ and exponent $p$, with presentation \[ P = \langle g,h,c \mid g^p=h^p=c^p=1,\ [g,h]=c,\ [g,c]=[h,c]=1\rangle.
\] We denote by $H$ the quaternion group of order $8$, given by the presentation \[ H = \langle s, t\ |\ s^4=1,\ s^2=t^2,\ ts = s^{-1}t\rangle\cong Q_8. \] Set $Z= \langle s^2\rangle$; this is the centre of $H$. We consider the following action of $H$ on $P$, and set $G=P\rtimes H$. \[ g^s = g^{-1},\qquad g^t = g,\qquad h^s = h,\qquad h^t = h^{-1} \] It follows that $c^s=c^t=c^{-1}$, and $Z$ acts trivially on $P$. This action lifts the action of $C_2\times C_2$ on $C_p\times C_p\cong P/\langle c\rangle$, where the nontrivial element of each copy of $C_2$ acts as inversion on the corresponding copy of $C_p$. The group algebra $kG$ has two blocks, namely the principal block with idempotent $e_0 = \frac{1}{2} (1+s^2)$ and the nonprincipal block with idempotent $e = \frac{1}{2} (1-s^2)$ corresponding to the faithful central character $\chi\colon Z \to k^\times$ given by $\chi(s^2)=-1$. We shall be interested in $kGe$. \begin{rk} Let $x=g-1$, $y=h-1$, $z=c-1$ in $kP$. Then \begin{equation}\label{eq:Z} z = (xy-yx)(1+x)^{-1}(1+y)^{-1} \end{equation} and a presentation for $kP$ is given by generators $x$ and $y$, and relations saying that $x^p=0$, $y^p=0$, and the element $z$ defined by~\eqref{eq:Z} is central with $p$th power equal to zero. Note that the element $(1+x)^{-1}(1+y)^{-1}$ is congruent to $1$ modulo $J(kP)$, and so in the associated graded $\mathsf{gr}_*(kP)$ this term in~\eqref{eq:Z} may be ignored. This is used in the proof of Theorem \ref{pcubeexample} that follows. \end{rk} \begin{proof}[Proof of Theorem \ref{pcubeexample}] Denote by $x$, $y$, $z$ the images of $g$, $h$, $c$ in $\mathsf{Jen}_*(P)$, respectively. (These elements are mapped to the images of $g-1$, $h-1$, $c-1$ in $\mathsf{gr}_*(kP)$ under the canonical map $\mathsf{Jen}_*(P)\to \mathsf{gr}_*(kP)$.) The three-dimensional $p$-restricted Lie algebra $\mathsf{Jen}_*(P)$ is spanned by the elements $x$, $y$ in degree one together with $z=[x,y]$ (by the previous Remark) in degree two, satisfying $[x,z]=[y,z]=0$.
The $p$-restriction map is given by $x^{[p]}=y^{[p]}=z^{[p]}=0$. Its $p^3$-dimensional universal enveloping algebra $\mathcal{U}\mathsf{Jen}_*(P)$ is isomorphic to $\mathsf{gr}_*(kP)$. This shows the first part of Theorem \ref{pcubeexample}. The action of $H$ on $\mathsf{Jen}_*(P)$ is given by \[ x^s= - x,\qquad x^t = x,\qquad\qquad y^s= y,\qquad y^t = -y,\qquad\qquad z^s = -z,\qquad z^t = -z. \] The elements $x$, $y$ and $z$ are eigenvectors for $H$ on $\mathsf{Jen}_*(P)$. So we set $w_1=x$, $w_2=y$, $w_3=z$. The characters $\psi_i$ of $H$ satisfying $gw_ig^{-1}=\psi_i(g)w_i$ for $g\in H$ are given as follows. \[ \psi_1(s)=-1,\quad \psi_1(t)=1,\qquad \psi_2(s)=1,\quad \psi_2(t)=-1,\qquad \psi_3(s)=-1,\quad \psi_3(t)=-1. \] Note that the relation $[x,y]=z$ in $\mathsf{Jen}_*(P)$ implies $\psi_1\psi_2=\psi_3$. Denoting as above by $e = \frac{1}{2} (1-s^2)$ the nonprincipal block of $kG$, the block algebra $kGe$ has a unique isomorphism class of simple modules. Indeed, $e$ corresponds to the unique $2$-dimensional simple $kH$-module, and hence the semisimple quotient of $kGe$ is the matrix algebra $\mathfrak{M}=kHe\cong \mathsf{Mat}_2(k)$. Since $Z=Z(H)$, there is only one central character of $Z(H)$ lying above $\chi$, namely $\phi=\chi$, and $\xi_\phi=1$. The map $\rho\colon H/Z(H) \to \mathsf{Hom}(H/Z(H),k^\times)$ takes $s$ to $\psi_2$, $t$ to $\psi_1$ and $st$ to $\psi_3$. Thus $g_{1,\phi}=t$, $g_{2,\phi}=s$ and $g_{3,\phi}=st$; these are only well defined up to multiplication by $Z(H)$. The block algebra $\mathsf{gr}_*(kGe)$ of $\mathsf{gr}_*(kG)$ also has one isomorphism class of simple modules, namely the same $2$-dimensional simple $kH$-module as above, and by Theorem~\ref{th:AxM} and Corollary~\ref{co:basic} we have \[ \mathsf{gr}_*(kGe) \cong \mathfrak{A} \otimes_k \mathfrak{M}\cong \mathsf{Mat}_2(\mathfrak{A}), \] where $\mathfrak{M}=kHe\cong \mathsf{Mat}_2(k)$ and $\mathfrak{A}=(\mathsf{gr}_*(kGe))^H$.
The algebra $\mathfrak{A}$ contains elements $g_{1,\phi}w_1e=txe$, $g_{2,\phi}w_2e=sye$ and $g_{3,\phi}w_3e=stze$. The constants are given by $q_{1,2,\phi}=-1$ and $q'_{1,2,3,\phi}=-1$, so these satisfy the following relation: \[ (txe)(sye)+(sye)(txe) = -tsxye - styxe = st(xy-yx)e=stze \] Similar computations give \[ (txe)(stze) + (stze)(txe) = 0, \qquad (sye)(stze) + (stze)(sye) = 0. \] Writing ${\sf x} = txe$, ${\sf y} = sye$ and ${\sf z} = stze$, we therefore have \[ {\sf x} {\sf y} + {\sf y} {\sf x} = {\sf z}, \quad {\sf x} {\sf z} + {\sf z} {\sf x} = 0, \quad {\sf y} {\sf z} + {\sf z} {\sf y} = 0,\quad {\sf x}^p=0,\quad {\sf y}^p = 0, \quad {\sf z}^p =0. \] This is a presentation for the basic algebra $\mathfrak{A}$ of $\mathsf{gr}_*(kGe)$, with generators ${\sf x}$ and ${\sf y}$, and with ${\sf z}$ defined as ${\sf x} {\sf y} + {\sf y} {\sf x}$. This proves Theorem \ref{pcubeexample}. \end{proof} \begin{rk} The first part of the above proof shows that $\mathsf{Jen}_*(P)$ is isomorphic to the $p$-restricted Lie algebra of $3\times 3$ matrices of the form $\left(\begin{smallmatrix} 0 & * & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{smallmatrix}\right)$. \end{rk} In order to prove Theorem \ref{27example}, ungrading the algebra is our next task. The problem is that the generators $g-1$ and $h-1$ of $kP$ are not well suited to dealing with automorphisms. We have an action of $\mathbb{F}_p^\times \times \mathbb{F}_p^\times$ on $P$ where $(i,j)$ sends $g$ to $g^i$ and $h$ to $h^j$. The commutator $c=[g,h]$ is sent to $c^{ij}$. Set \[ \tilde x=-\sum_{i=1}^{p-1}g^i/i,\qquad \tilde y=-\sum_{j=1}^{p-1}h^j/j. \] \begin{lemma} \label{liftlemma} We have $\tilde x \equiv g-1 \pmod {J^2(kP)}$ and $\tilde y \equiv h-1 \pmod {J^2(kP)}$. \end{lemma} \begin{proof} Since $p$ is odd, $\sum_{i=1}^{p-1} 1/i = \sum_{i=1}^{p-1} i= 0$ in $k$, whence $\tilde x = -\sum_{i=1}^{p-1}(g^i-1)/i$.
Now the assertion for $\tilde x$ follows since $(g^i-1)/i \equiv g-1 \pmod {J^2(kP)}$ for any $i$, $1\leqslant i \leqslant p-1$. The proof for $\tilde y$ is similar. \end{proof} Note that $\tilde x$ is an eigenvector in the $(1,0)$ eigenspace and $\tilde y$ is an eigenvector in the $(0,1)$ eigenspace of $\mathbb{F}_p^\times \times \mathbb{F}_p^\times$. Then we set $\tilde z=[\tilde x,\tilde y]=\tilde x\tilde y-\tilde y\tilde x$, an eigenvector in the $(1,1)$ eigenspace. By Lemma~\ref{liftlemma} and the proof of Theorem~\ref{pcubeexample}, $kP$ has a PBW basis consisting of monomials in the $\tilde x$, $\tilde y$ and $\tilde z$. Moreover, a PBW basis element $\tilde x^i\tilde y^j\tilde z^k$ of $kP$ with $0\leqslant i,j,k < p$ is an eigenvector in the $(i+k,j+k)$ eigenspace, where $i+k$ and $j+k$ are read modulo $p-1$. \begin{lemma}\label{le:Zp} We have $\tilde z^p=0$. \end{lemma} \begin{proof} The element $\tilde z^p$ is an eigenvector in the $(1,1)$ eigenspace. Further, $\tilde z^p$ has image $z^p =0 \in \mathsf{gr}_{2p} (kP)$, hence $\tilde z^p$ is in $J^{2p+1}(kP)$. The PBW basis elements in this eigenspace have $i+k$ and $j+k$ congruent to one modulo $p-1$ and at most $2p-2$, and hence at most $p$, but then $i+j+2k\leqslant 2p$, so the basis element is not in $J^{2p+1}(kP)$. It follows that the $(1,1)$ eigenspace in $J^{2p+1}(kP)$ is zero and so $\tilde z^p=0$. \end{proof} \begin{lemma}\label{le:[X,Z]} The element $[\tilde x,\tilde z]$ is a linear combination of the elements $\tilde z\tilde y^{p-2}\tilde z \in J^{p+2}(kP)$ and $\tilde x^i\tilde y^{2i-1}\tilde x^i\tilde z^{p+1-2i} \in J^{2p+1}(kP)$ with $1\leqslant i \leqslant (p-1)/2$. Similarly, $[\tilde y,\tilde z]$ is a linear combination of the elements $\tilde z\tilde x^{p-2}\tilde z\in J^{p+2}(kP)$ and $\tilde y^i\tilde x^{2i-1}\tilde y^i\tilde z^{p+1-2i} \in J^{2p+1}(kP)$ with $1\leqslant i\leqslant (p-1)/2$.
\end{lemma} \begin{proof} We prove the first statement. The proof of the second is identical, with the roles of $\tilde x$ and $\tilde y$ reversed. The element $[\tilde x,\tilde z]$ has image $[x,z]=0$ in $\mathsf{gr}_3(kP)$, and hence lies in $J^4(kP)$. It is in the $(2,1)$ eigenspace, so we start by identifying the PBW basis elements of $J^4(kP)$ in this eigenspace. These are $\tilde y^{p-2}\tilde z^2$ and $\tilde x\tilde y^{p-1}\tilde z\in J^{p+2}(kP)$ and $\tilde x^{i+1}\tilde y^i\tilde z^{p-i} \in J^{2p+1}(kP)$ with $1\leqslant i \leqslant p-2$. However, we also need to make use of symmetry. Let $\sigma$ be the composition of the automorphism of $kP$ which inverts $g$ and $h$ (and hence fixes $c$) with the anti-automorphism of $kP$ which inverts all elements of $P$. Then $\sigma$ fixes $\tilde x$ and $\tilde y$, reverses multiplication in $kP$, and negates $\tilde z$. The point is that $[\tilde x,\tilde z]=\tilde x^2\tilde y -2\tilde x\tilde y\tilde x+\tilde y\tilde x^2$ is fixed by $\sigma$, whereas $\sigma$ does not fix all elements of the $(2,1)$ eigenspace. With this in mind, we modify the PBW basis of this eigenspace so that the action of $\sigma$ is more transparent. The element $\tilde y^{p-2}\tilde z^2$, for example, is not fixed by $\sigma$, even though it is fixed modulo $J^{p+3}(kP)$. So instead, we use the element $\tilde z\tilde y^{p-2}\tilde z$, which is equivalent to it modulo $J^{p+3}(kP)$, and therefore just as good as part of a PBW basis of $kP$, but is fixed by $\sigma$. Since $\sigma(\tilde x\tilde y^{p-1}\tilde z)\equiv -\tilde x\tilde y^{p-1}\tilde z-\tilde y^{p-2}\tilde z^2\pmod{J^{p+3}(kP)}$, the element $\tilde x\tilde y^{p-1}\tilde z$ is not involved in the expression for $[\tilde x,\tilde z]$. So $[\tilde x,\tilde z]$ is congruent to a multiple of $\tilde z\tilde y^{p-2}\tilde z$ modulo $J^{2p+1}(kP)$.
For the linear span of the elements $\tilde x^{i+1}\tilde y^i\tilde z^{p-i}$, since there are no $(2,1)$ eigenvectors lower in the radical series, reordering the terms in a monomial has the same effect as in $\mathcal{U}\mathsf{Jen}_*(P)$. So we can choose a basis consisting of the elements $\tilde x^i\tilde y^{2i-1}\tilde x^i\tilde z^{p+1-2i}$ ($1\leqslant i \leqslant (p-1)/2$) and the elements $\tilde y^i\tilde x^{2i+1}\tilde y^i\tilde z^{p-2i}$ ($1\leqslant i \leqslant (p-3)/2$). The former are $+1$ eigenvectors of $\sigma$, while the latter are $-1$ eigenvectors. So the expression for $[\tilde x,\tilde z]$ only involves the former. \end{proof} By Lemma~\ref{le:[X,Z]}, we can write \begin{align} [\tilde x,\tilde z]&=a_0 \tilde z\tilde y^{p-2}\tilde z + a_1\tilde x\tilde y\tilde x\tilde z^{p-1} +a_2\tilde x^2\tilde y^3\tilde x^2\tilde z^{p-3} +\cdots +a_{(p-1)/2}\tilde x^{\frac{p-1}{2}}\tilde y^{p-2}\tilde x^{\frac{p-1}{2}}\tilde z^2, \label{eq:XZ} \\ [\tilde y,\tilde z]&=-a_0 \tilde z\tilde x^{p-2}\tilde z -a_1\tilde y\tilde x\tilde y\tilde z^{p-1} -a_2\tilde y^2\tilde x^3\tilde y^2\tilde z^{p-3} -\cdots- a_{(p-1)/2}\tilde y^{\frac{p-1}{2}}\tilde x^{p-2}\tilde y^{\frac{p-1}{2}}\tilde z^2.\label{eq:YZ} \end{align} Here, we have used the symmetry of $kP$ which swaps $\tilde x$ and $\tilde y$, and negates $\tilde z$, to compare the coefficients in~\eqref{eq:XZ} and those in~\eqref{eq:YZ}. \begin{rk} With the aid of the computer algebra system {\sc Magma}~\cite{Bosma/Cannon/Playoust:1997a} we have determined the relation~\eqref{eq:XZ} for small $p$ as follows: \begin{align*} p=3:\qquad [\tilde x,\tilde z]&=\tilde z\tilde y\tilde z, \\ p=5:\qquad [\tilde x,\tilde z]&=\tilde z\tilde y^3\tilde z +2\tilde x\tilde y\tilde x\tilde z^4, \\ p=7:\qquad [\tilde x,\tilde z]&=\tilde z\tilde y^5\tilde z +4\tilde x\tilde y\tilde x\tilde z^6 +2\tilde x^2\tilde y^3\tilde x^2\tilde z^4.
\end{align*} One might surmise that $a_0=1$ and $a_{(p-1)/2}=0$, but we have not proved that. Nor have we spotted the general pattern of the coefficients. \end{rk} \begin{theorem} A presentation for $kP$ is given by generators $\tilde x$, $\tilde y$, $\tilde z$ with the relations~\eqref{eq:XZ} and~\eqref{eq:YZ} together with \[ \tilde x^p=\tilde y^p=\tilde z^p=0,\qquad [\tilde x,\tilde y]=\tilde z, \] and relations saying that all words of length at least $4p-3$ in $\tilde x$ and $\tilde y$ are equal to zero. \end{theorem} \begin{proof} These relations hold in $kP$ by Lemmas~\ref{le:Zp} and~\ref{le:[X,Z]}, and the fact that $J^{4p-3}(kP)=0$. Let $\mathbf{A}$ be the algebra defined by these generators and relations. Then we have a surjective map $\mathbf{A}\to kP$ taking $\tilde x$, $\tilde y$ and $\tilde z$ to the elements with the same names. This induces a map $\mathsf{gr}_*\mathbf{A} \to \mathsf{gr}_* kP$. The relations~\eqref{eq:XZ} and~\eqref{eq:YZ} imply that the images $x$, $y$ and $z$ in $\mathsf{gr}_*\mathbf{A}$ of $\tilde x$, $\tilde y$ and $\tilde z$ in $\mathbf{A}$ satisfy $[x,z]=0$ and $[y,z]=0$. Thus all the relations in $\mathcal{U}\mathsf{Jen}_*(P)$ hold in $\mathsf{gr}_*\mathbf{A}$, and $\mathsf{gr}_*\mathbf{A}\to \mathsf{gr}_* kP$ is an isomorphism. Since the radical of $\mathbf{A}$ is nilpotent, this implies that $\mathbf{A}\to kP$ is an isomorphism. \end{proof} Recall from the proof of Theorem~\ref{pcubeexample} that, setting ${\sf x}=txe$, ${\sf y}=sye$ and ${\sf z}=stze$ in $\mathsf{gr}_*(kGe)$, the algebra $\mathfrak{A}$ generated by ${\sf x}$, ${\sf y}$ and ${\sf z}$ centralises $\mathfrak{M}$ in $\mathcal{U}\mathsf{Jen}_*(P)\rtimes H$. Further, these elements satisfy the relations \[ {\sf x}^p=0,\quad {\sf y}^p=0,\quad {\sf x}{\sf y}+{\sf y}{\sf x}={\sf z},\quad {\sf x}{\sf z}+{\sf z}{\sf x}=0,\quad {\sf y}{\sf z}+{\sf z}{\sf y}=0 \] (and these imply that ${\sf z}^p=0$).
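For $p=3$, the relation displayed in the Magma remark above can be verified directly in the left regular representation of $kP$ over $\mathbb{F}_3$. The following sketch is our own check (the helpers \texttt{mult}, \texttt{L} and \texttt{comm} are ours): it encodes elements of $P$ as normal forms $g^ah^bc^e$, sets $\tilde x=g^{-1}-g$, $\tilde y=h^{-1}-h$ and $\tilde z=[\tilde x,\tilde y]$, and confirms $\tilde x^3=\tilde y^3=\tilde z^3=0$, $[\tilde x,\tilde z]=\tilde z\tilde y\tilde z$ and $[\tilde y,\tilde z]=-\tilde z\tilde x\tilde z$:

```python
import numpy as np

# Left regular representation of kP over GF(3) for the extraspecial group
# P = <g,h,c> of order 27: from [g,h] = c with c central we get gh = hgc, so
# elements have normal form g^a h^b c^e with
#   (a1,b1,e1)*(a2,b2,e2) = (a1+a2, b1+b2, e1+e2 - b1*a2)   (mod 3).
p = 3
n = p ** 3

def mult(u, v):
    a1, b1, e1 = u
    a2, b2, e2 = v
    return ((a1 + a2) % p, (b1 + b2) % p, (e1 + e2 - b1 * a2) % p)

elements = [(a, b, e) for a in range(p) for b in range(p) for e in range(p)]
index = {w: i for i, w in enumerate(elements)}

def L(u):
    """Permutation matrix of left multiplication by the group element u."""
    M = np.zeros((n, n), dtype=int)
    for v in elements:
        M[index[mult(u, v)], index[v]] = 1
    return M

x = (L((p - 1, 0, 0)) - L((1, 0, 0))) % p   # \tilde x = g^{-1} - g
y = (L((0, p - 1, 0)) - L((0, 1, 0))) % p   # \tilde y = h^{-1} - h
z = (x @ y - y @ x) % p                     # \tilde z = [\tilde x, \tilde y]

def comm(A, B):
    return (A @ B - B @ A) % p
```

The same model, with $p=5$ or $p=7$ and $\tilde x=-\sum_i g^i/i$, can be used to re-derive the coefficients listed in the remark above.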
In $kGe$, we set $\tilde {\sf x}=t\tilde xe$, $\tilde {\sf y}=s\tilde ye$ and $\tilde {\sf z}=st\tilde ze$. The algebra $\tilde\mathfrak{A}$ generated by $\tilde{\sf x}$, $\tilde{\sf y}$ and $\tilde{\sf z}$ centralises $\mathfrak{M}$ in $kGe$. These elements satisfy the relations \[ \tilde {\sf x}^p=\tilde {\sf y}^p=\tilde {\sf z}^p=0,\qquad \tilde {\sf x}\tilde {\sf y}+\tilde {\sf y} \tilde {\sf x} = \tilde {\sf z} \] together with the following quantised versions of~\eqref{eq:XZ} and~\eqref{eq:YZ} \begin{align*} \tilde {\sf x}\tilde {\sf z} + \tilde {\sf z} \tilde {\sf x} & =(-1)^{\frac{p-1}{2}}\! \bigl(a_0 \tilde {\sf z}\tilde {\sf y}^{p-2}\tilde {\sf z} - a_1\tilde{\sf x}\tilde{\sf y}\tilde{\sf x}\tilde{\sf z}^{p-1}- a_2\tilde{\sf x}^2\tilde{\sf y}^3\tilde{\sf x}^2\tilde{\sf z}^{p-3} -\cdots- a_{(p-1)/2}\tilde{\sf x}^{\frac{p-1}{2}}\tilde{\sf y}^{p-2}\tilde{\sf x}^{\frac{p-1}{2}}\tilde{\sf z}^2\bigr), \\ \tilde{\sf y} \tilde{\sf z} + \tilde{\sf z} \tilde{\sf y} & =(-1)^{\frac{p-1}{2}}\! \bigl(a_0\tilde{\sf z}\tilde{\sf x}^{p-2}\tilde{\sf z} - a_1\tilde{\sf y}\tilde{\sf x}\tilde{\sf y}\tilde{\sf z}^{p-1} - a_2\tilde{\sf y}^2\tilde{\sf x}^3\tilde{\sf y}^2\tilde{\sf z}^{p-3} -\cdots- a_{(p-1)/2}\tilde{\sf y}^{\frac{p-1}{2}}\tilde{\sf x}^{p-2}\tilde{\sf y}^{\frac{p-1}{2}}\tilde{\sf z}^2\bigr), \end{align*} together with relations saying that all words of length at least $4p-3$ in $\tilde {\sf x}$ and $\tilde {\sf y}$ are equal to zero. Using {\sc Magma}~\cite{Bosma/Cannon/Playoust:1997a}, in the case $p=3$ we have succeeded in finding a short presentation for $kP$ in terms of the generators $\tilde x$ and $\tilde y$. In this case, we have $\tilde x=g^{-1}-g$ and $\tilde y=h^{-1}-h$. Defining $\tilde z=[\tilde x,\tilde y]$, the following relations hold in $kP$. \begin{align} \tilde x^3=&0,\qquad \tilde y^3=0,\qquad [\tilde x,\tilde y]=\tilde z,\qquad [\tilde x,\tilde z]=\tilde z\tilde y\tilde z,\qquad [\tilde y,\tilde z]=-\tilde z\tilde x\tilde z. 
\label{eq:kP} \end{align} It follows from these relations that $\tilde z^3=0$, so it is not necessary to include this in the relations, and hence that the algebra defined by these relations has dimension $27$, and is isomorphic to $kP$. This is the content of the next theorem. Note, however, that the proof is difficult, so for some purposes it is better to adjoin $\tilde z^3=0$ to the above presentation. We restate and prove the first part of Theorem \ref{27example}. \begin{theorem}\label{th:27} Suppose that $p=3$. The generators $\tilde x$, $\tilde y$ and $\tilde z$ and the relations~\eqref{eq:kP} give a presentation for $kP$. \end{theorem} \begin{proof} Since the given elements of $kP$ satisfy these relations, it suffices to prove that the algebra defined by the relations has dimension at most $27$. The crucial point is to prove that $\tilde z^3=0$. It is more convenient to extend the field so that it has a square root of $-1$, which we denote $\mathsf{i}\,$. Then we set $a=\tilde x+\mathsf{i}\,\tilde y$, $b=\tilde x-\mathsf{i}\,\tilde y$, $c=\mathsf{i}\,\tilde z$, and the presentation becomes \[ a^3=[b,c]=cbc,\qquad b^3=-[a,c]=cac, \qquad [a,b]=c, \] and we must show that $c^3=0$. We have \begin{align*} (1+c)a(1-c)&=a,\\ (1-c)b(1+c)&=b, \end{align*} and so \[ (1+c)ab=(1+c)a(1-c)b(1+c)=ab(1+c). \] Therefore $c$ commutes with $ab$ and with $ba$. Next, \[ cab=acb+cacb=abc-acbc+cacb, \] and since $cab=abc$ it follows that $c$ commutes with $acb$. Thus we have \begin{equation}\label{eq:a4b4} cbca=a^4=acbc=cacb=b^4=bcac. \end{equation} Since we are in characteristic three, we also have \[ [a,[a,c]]=-[a,cac]=-[a,c]ac-c[a,a]c-ca[a,c]=cacac+cacac=-cacac \] and so \[ [a^3,c]=[a,[a,[a,c]]]=-[a,cacac]=-[a,c]acac-ca[a,c]ac-caca[a,c]=-3cacacac=0. \] Thus $c$ also commutes with $a^3$: \begin{equation}\label{eq:ca^3} a^3c=ca^3. 
\end{equation} Next, using~\eqref{eq:a4b4} we have \begin{align} c^3&=cabc-cbac=acbc+cacbc-cbca+cbcac=a^4+b^4c-b^4+a^4c\label{eq:c3}\\ &=-a^4c = -b^4c = -cacbc = -cbcac= -ca^4 = -cb^4.\notag \end{align} Using~\eqref{eq:ca^3} and~\eqref{eq:c3}, we have \[ a^4c=ca^4 =a^3ca=a^4c+a^3cac=a^4c+ca^4c=a^4c-c^4 \] and so $c^4=0$. The fact that $c^4=0$ enables us to write \begin{align} ca&=ac+cac=ac+ac^2+cac^2=ac+ac^2+ac^3+cac^3=a(c+c^2+c^3)\label{eq:ca} \\ cb&=bc-cbc=bc-bc^2+cbc^2=bc-bc^2+bc^3-cbc^3=b(c-c^2+c^3). \label{eq:cb} \end{align} We can use these to move copies of $c$ to the end of expressions, at the expense of accumulating higher powers of $c$. Applying this to $a^6=cbc^2bc$, we get $b^2(c^4+{}$higher powers of $c)$, so we get $a^6=0$. Similarly, we get $b^6=0$. Then when we do the same with $(ab)^9$, moving $b$ past $a$ using $ba=ab-c$, we get \[ a^9b^9+a^8b^8cf_1(c)+a^7b^7c^2f_2(c)+a^6b^6c^3f_3(c)+a^5b^5c^4f_4(c) + \dots \] for suitable polynomials $f_i(c)$. Since $a^6=b^6=c^4=0$, every term here is zero, and so $(ab)^9=0$. Now using~\eqref{eq:c3} and the same method, we have \[ c^3=-cacbc=-cab(c-c^2+c^3)c=-abc(c-c^2+c^3)c=-abc^3, \] and so $(1+ab)c^3=0$. Since $ab$ is nilpotent, $(1+ab)$ is invertible, so this implies that $c^3=0$. The original relations together with~\eqref{eq:ca}, \eqref{eq:cb} and $c^3=0$ allow us to rewrite every element as a linear combination of the elements $a^ib^jc^k$ with $0\leqslant i,j,k < 3$, so the algebra has dimension at most $27$, and we are done. \end{proof} \begin{proof}[Proof of Theorem \ref{27example}] The relations for $kP$ are proved in Theorem \ref{th:27}. As above, using $\tilde x=g^{-1}-g$, $\tilde y = h^{-1}-h$, and $\tilde z=[\tilde x,\tilde y]$, to obtain generators for the basic algebra for $kGe$, we set $\tilde {\sf x} = t\tilde xe$ and $\tilde {\sf y} = s\tilde ye$, $\tilde {\sf z} = st\tilde ze$.
These satisfy \[ \tilde {\sf x}^3=0, \qquad \tilde {\sf y}^3=0,\qquad \tilde {\sf x}\tilde {\sf y}+\tilde {\sf y}\tilde {\sf x}=\tilde {\sf z}, \qquad \tilde {\sf x} \tilde {\sf z} + \tilde {\sf z} \tilde {\sf x} = -\tilde {\sf z}\tilde {\sf y}\tilde {\sf z}, \qquad \tilde {\sf y} \tilde {\sf z} + \tilde {\sf z} \tilde {\sf y}= -\tilde {\sf z} \tilde {\sf x} \tilde {\sf z}. \] Furthermore, the algebra defined by these relations again has dimension $27$, and is hence isomorphic to the basic algebra of $kGe$. \end{proof} \begin{rk} The above relations for $kGe$ are a quantised version of the relations for $kP$. These relations imply that $\tilde {\sf z}^3=0$, and adjoining this relation makes the presentation easier to work with if desired. These presentations can be lifted to give integral presentations. The algebra $\mathcal{O} P$ has a presentation with corresponding generators $\hat x$, $\hat y$ and $\hat z$ subject to \[ \hat x^3+3\hat x=0,\qquad \hat y^3+3\hat y=0, \qquad [\hat x,\hat y]=\hat z,\qquad 2[\hat x,\hat z]=-\hat z\hat y\hat z, \qquad 2[\hat y,\hat z]=\hat z\hat x\hat z, \] while $\mathcal{O} Ge$ is generated by $\hat {\sf x}$, $\hat {\sf y}$ and $\hat {\sf z}$ subject to \[ \hat {\sf x}^3=3\hat {\sf x},\qquad \hat {\sf y}^3 =3\hat {\sf y},\qquad \hat {\sf x}\hat {\sf y}+\hat {\sf y}\hat {\sf x} = \hat {\sf z},\qquad 2\hat {\sf x} \hat {\sf z} + 2\hat {\sf z} \hat {\sf x} = \hat {\sf z}\hat {\sf y}\hat {\sf z},\qquad 2\hat {\sf y} \hat {\sf z} + 2\hat {\sf z} \hat {\sf y} = \hat {\sf z} \hat {\sf x} \hat {\sf z}. \] The expressions for $\hat z^3$ in $\mathcal{O} P$ and for $\hat{\sf z}^3$ in $\mathcal{O} Ge$, lifting the fact that they cube to zero modulo three, are ugly even though they follow from the presentations above. 
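Independently of the arguments above, the relations~\eqref{eq:kP} lend themselves to a direct machine check. The following sketch (in Python rather than {\sc Magma}; the encoding of $P$ as the Heisenberg group of unitriangular $3\times 3$ matrices over $\mathbb{F}_3$, and all function names, are our choices) computes $\tilde x=g^{-1}-g$, $\tilde y=h^{-1}-h$ and $\tilde z=[\tilde x,\tilde y]$ in $kP$ and verifies the relations together with the derived relation $\tilde z^3=0$. Since any noncommuting generating pair of $P$ is related to $(g,h)$ by an automorphism, the check should not depend on this choice of model.

```python
from collections import defaultdict

p = 3

# P modelled as the Heisenberg group mod p: a triple (a, b, c) stands for
# the unitriangular matrix I + a*E12 + b*E23 + c*E13 over F_p.
def gmul(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p,
            (u[2] + v[2] + u[0] * v[1]) % p)

def ginv(u):
    a, b, c = u
    return (-a % p, -b % p, (a * b - c) % p)

# Elements of kP (k = F_p) as dicts {group element: coefficient mod p}.
def add(X, Y, s=1):
    Z = defaultdict(int, X)
    for v, cv in Y.items():
        Z[v] = (Z[v] + s * cv) % p
    return {v: c for v, c in Z.items() if c}

def mul(X, Y):
    Z = defaultdict(int)
    for u, cu in X.items():
        for v, cv in Y.items():
            w = gmul(u, v)
            Z[w] = (Z[w] + cu * cv) % p
    return {v: c for v, c in Z.items() if c}

def bracket(X, Y):
    return add(mul(X, Y), mul(Y, X), s=-1)

def neg(X):
    return add({}, X, s=-1)

g, h = (1, 0, 0), (0, 1, 0)
x = {ginv(g): 1, g: p - 1}      # g^{-1} - g
y = {ginv(h): 1, h: p - 1}      # h^{-1} - h
z = bracket(x, y)               # [x, y]

assert mul(mul(x, x), x) == {}                      # x^3 = 0
assert mul(mul(y, y), y) == {}                      # y^3 = 0
assert bracket(x, z) == mul(mul(z, y), z)           # [x, z] = z y z
assert bracket(y, z) == neg(mul(mul(z, x), z))      # [y, z] = -z x z
assert mul(mul(z, z), z) == {}                      # z^3 = 0
print("relations verified in kP for p = 3")
```

The empty dict plays the role of the zero element, so the assertions state exactly the relations~\eqref{eq:kP} and $\tilde z^3=0$.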
\end{rk} \section{\texorpdfstring{Example: $2^{1+4}{:\,}3^{1+2}$ in characteristic two}{Example: 2¹ᐩ⁴:3¹ᐩ² in characteristic two}} \label{se:char2} The examples in the last section were at odd primes for extraspecial groups of order $p^3$. In this section we give an example in characteristic two with an extraspecial group of order $2^5$. Let $P$ be an extraspecial group $2^{1+4}$ which is a central product of two copies of the quaternion group of order eight, and let $H$ be an extraspecial group $3^{1+2}$ of exponent three. We let the centre $Z\cong\mathbb{Z}/3$ of $H$ act trivially on $P$, and the elementary abelian quotient act as the automorphisms of order three on the two quaternion central factors of $P$, and we set $G=P\rtimes H$. Thus the quotient $G/Z\cong SL(2,3)\circ SL(2,3)$ is a central product of two copies of the group $SL(2,3)$ of order $24$. More precisely, we let \begin{align*} P &= \langle g_1,g_2, h_1,h_2, c \mid g_1^2=h_1^2=[g_1,h_1]=g_2^2=h_2^2=[g_2,h_2]=c,\\ &\qquad [g_1,c]=[g_2,c]=[h_1,c]=[h_2,c]= [g_1,g_2]=[g_1,h_2]=[h_1,g_2]=[h_1,h_2]=c^2=1 \rangle, \\ H &= \langle s_1,s_2, t \mid s_1^3=s_2^3=1,\ [s_1,s_2]=t,\ [s_1,t]=[s_2,t]=t^3=1\rangle. \end{align*} Let $H$ act on $P$ with $Z=\langle t\rangle$ acting trivially, and \begin{gather*} s_1g_1s_1^{-1}=h_1, \qquad s_1h_1s_1^{-1}=g_1h_1,\qquad s_1g_2s_1^{-1}=g_2, \qquad s_1h_2s_1^{-1}=h_2,\qquad s_1cs_1^{-1}=c, \\ s_2g_1s_2^{-1}=g_1, \qquad s_2h_1s_2^{-1}=h_1,\qquad s_2g_2s_2^{-1}=h_2, \qquad s_2h_2s_2^{-1}=g_2h_2, \qquad s_2cs_2^{-1}=c. \end{gather*} Let $k$ be a field of characteristic two containing $\mathbb{F}_4=\{0,1,\omega,\bar\omega\}$. A basis of eigenvectors in $\mathsf{gr}_1(kP)$ is given by \[ x_i = \bar\omega(g_i-1)+\omega(h_i-1),\qquad y_i=\omega(g_i-1)+\bar\omega(h_i-1) \qquad (i=1,2). \] These give the following presentation for $\mathsf{gr}_*(kP)$. \[ x_i^2=0,\quad y_i^2=0,\quad [x_1,x_2]=[y_1,y_2]=[x_1,y_2]=[x_2,y_1]=0, \quad [x_1,y_1]=[x_2,y_2]. 
\] A lift of $x_i$ and $y_i$ to eigenvectors complementing $J^2(kP)$ in $J(kP)$ is given by the elements \[ \tilde x_i = \omega g_i + \bar\omega h_i + g_ih_i, \qquad \tilde y_i = \bar\omega g_i + \omega h_i + g_ih_i\qquad (i =1,2). \] The relations lift to \begin{gather*} \tilde x_i^2=\tilde y_i\tilde x_i\tilde y_i, \qquad \tilde y_i^2=\tilde x_i\tilde y_i\tilde x_i,\qquad \tilde x_i^4=0 \qquad (i=1,2),\\ [\tilde x_1,\tilde x_2]=[\tilde y_1,\tilde y_2]= [\tilde x_1,\tilde y_2]=[\tilde x_2,\tilde y_1]=0, \qquad [\tilde x_1,\tilde y_1]+\tilde x_1^3=[\tilde x_2,\tilde y_2]+\tilde x_2^3 \end{gather*} (both sides in the last relation are equal to $(1+c)$). Using the radical filtration, it is not hard to check that these relations define a $k$-algebra of dimension at most $32$, which is therefore isomorphic to $kP$. The action of $H$ on $kP$ with respect to these generators is given by \[ g\tilde x_ig^{-1}=\psi_i(g)^{-1}\tilde x_i,\qquad g\tilde y_ig^{-1}=\psi_i(g)\tilde y_i\qquad (g\in H) \] where $\psi_1(s_1)=\psi_2(s_2)=\omega$, $\psi_1(s_2)=\psi_2(s_1)=1$. Set \[ e_0=1+t+t^2,\qquad e=1+\bar\omega t + \omega t^2, \qquad \bar e = 1 + \omega t +\bar\omega t^2. \] Then $kG$ has three blocks, the principal block $kGe_0$, and two non-principal blocks $kGe$ and $kG\bar e$. We examine the non-principal block $kGe$; the other is similar. We have $ s_1s_2e=\omega s_2s_1e$. We set \[ \tilde {\sf x}_1 = s_2\tilde x_1e,\qquad \tilde {\sf x}_2 = s_1^{-1}\tilde x_2e, \qquad \tilde {\sf y}_1 = s_2^{-1}\tilde y_1e,\qquad \tilde {\sf y}_2 = s_1\tilde y_2e. \] These commute with $\mathfrak{M}=kHe$, and generate the subalgebra $\mathfrak{A}$, so that $kGe\cong \mathsf{Mat}_{3}(\mathfrak{A})$.
They satisfy the relations: \begin{gather*} \tilde {\sf x}_i^2=\tilde {\sf y}_i\tilde {\sf x}_i\tilde {\sf y}_i,\qquad \tilde {\sf y}_i^2=\tilde {\sf x}_i\tilde {\sf y}_i\tilde {\sf x}_i,\qquad \tilde {\sf x}_i^4=0 \qquad (i=1,2), \\ \tilde {\sf x}_1\tilde {\sf x}_2= \bar\omega\tilde {\sf x}_2\tilde {\sf x}_1, \qquad \tilde {\sf x}_1\tilde {\sf y}_2= \omega \tilde {\sf y}_2\tilde {\sf x}_1, \\ \tilde {\sf y}_1\tilde {\sf x}_2= \omega \tilde {\sf x}_2\tilde {\sf y}_1, \qquad\ \tilde {\sf y}_1\tilde {\sf y}_2= \bar \omega\tilde {\sf y}_2\tilde {\sf y}_1, \\ [\tilde {\sf x}_1,\tilde {\sf y}_1]+\tilde {\sf x}_1^3= [\tilde {\sf x}_2,\tilde {\sf y}_2]+\tilde {\sf x}_2^3 \end{gather*} (both sides in the last relation are equal to $(1+c)e$). These are identical to the relations for $kP$ apart from the commutation relations, which have been quantised by the introduction of factors $\omega$ and $\bar\omega$. \section{Appendix: Errata}\label{se:errata} The present paper supersedes most of our previous paper~\cite{Benson/Kessar/Linckelmann:2019a}. In that paper there are a number of minor errors, mostly in the calculations in Section~4, which have been corrected in the present work. We give a list of those errors in \cite{Benson/Kessar/Linckelmann:2019a}. In the statements of Theorem 1.2 and Corollary 1.3, it should read `...quantised version of $k(P\rtimes Z(H)/Z)$' (and not `...of $k(P\rtimes L)$.') In the 3rd line of the proof of Proposition 3.1 insert the word `abelian' between `maximal' and `subgroup' (as is done correctly in line 2 and line 4 of that proof). On page 1441, third line from the bottom, insert \emph{faithful}: \begin{center} ``\dots and a faithful linear character $\chi\colon Z \to k^\times$\dots'' \end{center} On page 1443, in the first line, $\rho(g)\colon h \mapsto \chi([h,g])$. 
The display on the third line should read \[ \bar\rho\colon H/Z(H) \to \mathsf{Hom}(H/Z(H),k^\times) \] On page 1444, line four should begin ``where $\rho(g_{i,\phi})(h)=\chi([h,g_{i,\phi}])$.'' The third line of the proof of Lemma~4.8 should begin with $eg_{i,\phi}h=e\chi([h,g_{i,\phi}])^{-1}hg_{i,\phi}$. The displayed equation on the fourth line of the proof of Lemma~4.8 should read \[ \xi_\phi(h)^{-1}(g_{i,\phi}w_i)(e_\phi\cdot h) = \xi_\phi(h)^{-1}\psi_i(h)^{-1}\chi([h,g_{i,\phi}])^{-1}(e_{\phi\psi_i}h)(g_{i,\phi}w_i). \] In Definition 4.9 and the four lines following, $kHe$ should be $k\tilde Ge$ four times. The dimension of $\mathfrak A$ should be given as $|P|\cdot |Z(H):Z|$ and not $|P|\cdot |H:Z(H)|$. On page 1445, in Lemma~4.12\,(2), in the displayed equation the last $w_i$ should be $w_j$. The scalar $q_{i,j,\phi}$ should equal $\psi_i(g_{j,\phi}z_{i,j,\phi})\psi_j(g^{-1}_{i,\phi}z_{i,j,\phi})\phi(z_{i,j,\phi})$ rather than $\phi(z_{i,j,\phi})$. Similarly, in Lemma~4.12\,(3) the scalar $q_{i,j,\phi}$ should equal $\psi_i(g_{j,\phi})\psi_j(g^{-1}_{i,\phi})\chi(z_{i,j,\phi})$ rather than $\chi(z_{i,j,\phi})$. In the second line of the proof of Lemma~4.12\,(1), $g_{j,\phi\phi_i}$ should be $g_{j,\phi\psi_i}$. The computation that was suppressed in the proof of Lemma~4.12\,(2) uses~(1) and equation~(4.7). It is similar to the computation in Theorem~\ref{th:rels} above, which we have spelled out in detail. There is a missing $Z$ in the third to last line of the proof of Lemma~4.12\,(3), and $g_ji$ should be $g_i$ in the second to last line. On page 1447, in Theorem 4.15 and Corollary 4.16, $kGe$ should be $k\tilde Ge$ five times. \iffalse \section{Appendix. Some {\sc Magma} code} The extraspecial group of order $p^3$ and exponent $p$ is the third in the list of small groups of order $p^3$. The generators $g$ and $h$ are the first and second terms in the power commutator generator list for this group.
So to produce the group algebra and the elements $g$, $h$, and $X=\sum_{i=1}^{p-1}g^i/i$, \ $Y=\sum_{i=1}^{p-1}h^i/i$ we use the following code, where the value of $p$ can be changed if desired. \begin{verbatim} p:=5; k:=GF(p); G:=SmallGroups(p^3)[3]; A:=GroupAlgebra(k,G); g:=A!G.1; h:=A!G.2; X:=g; for i in [2..p-1] do X:=X+g^i/i; end for; Y:=h; for i in [2..p-1] do Y:=Y+h^i/i; end for; \end{verbatim} \pagebreak[3] Then for example we can ask, \begin{verbatim} (X*Y-Y*X)^p eq 0; \end{verbatim} \pagebreak[3] Quotients by powers of the $J(kP)$ are produced as follows. \begin{verbatim} J:=AugmentationIdeal(A); Q,f:=quo<A|J^(p+3)>; QQ,ff:=quo<A|J^(2*p+2)>; \end{verbatim} \pagebreak[3] So to find an expression for $[X,[Y,X]]$, for example, we evaluate \begin{verbatim} f(-X^2*Y+2*X*Y*X-Y*X^2); \end{verbatim} \noindent and compare it with the value of {\tt f} on symmetric words in $2$ copies of $X$ and $p$ copies of $Y$, and on sums of asymmetric ones with their reverse. Once values of {\tt f} agree, which tells us about the image modulo $J^{p+3}$, we can start comparing values of {\tt ff}, which tells us about the image modulo $J^{2p+2}$, and so on. Having found some relations, the presentation of $kP$ in the case $p=3$ was checked using the following code. \begin{verbatim} k:=GF(3); F<X,Y>:=FreeAlgebra(k,2); B:=quo<F|X^3,Y^3,X^2*Y-2*X*Y*X+Y*X^2+X^2*Y*X^2-Y*X*Y*X*Y, Y^2*X-2*Y*X*Y+X*Y^2+Y^2*X*Y^2-X*Y*X*Y*X>; Dimension(B); \end{verbatim} \pagebreak[3] \fi \end{document}
\begin{document} \title{Bounds on mixed state entanglement} \author{Bruno Leggio $^{1}$, Anna Napoli $^{2,4}$, Hiromichi Nakazato $^{5}$ and Antonino Messina $^{3,4}$} \affiliation{$^{1}$ Laboratoire Reproduction et D{\'e}veloppement des Plantes, Univ Lyon, ENS de Lyon, UCB Lyon 1, CNRS, INRA, Inria, Lyon 69342, France\\ $^{2}$ Universit\`a degli Studi di Palermo, Dipartimento di Fisica e Chimica - Emilio Segr\`e, Via Archirafi 36, I-90123 Palermo, Italy\\ $^{3}$ Universit\`a degli Studi di Palermo, Dipartimento di Matematica ed Informatica, Via Archirafi, 34, I-90123 Palermo, Italy \\ $^{4}$ I.N.F.N. Sezione di Catania, Via Santa Sofia 64, I-95123 Catania, Italy\\ $^{5}$ Department of Physics, Waseda University, Tokyo 169-8555, Japan\\ } \begin{abstract} In the general framework of $d\times d$ mixed states, we derive an explicit bound for bipartite NPT entanglement based on the mixedness characterization of the physical system. The result derived is very general, being based only on the assumption of finite dimensionality. In addition it turns out to be of experimental interest since some purity measuring protocols are known. Exploiting the bound in the particular case of thermal entanglement, a way to connect thermodynamic features to monogamy of quantum correlations is suggested, and some recent results on the subject are given a physically clear explanation. \end{abstract} \maketitle \section{Introduction} In dealing with mixed states of a physical system, one has to be careful when speaking about entanglement. The definition of bipartite mixed state entanglement is unique (although problems may arise in dealing with multipartite entanglement \cite{Guhne}), but its quantification relies on several different criteria and it is not yet fully developed: many difficulties arise already in the definition of physically sensible measures \cite{Horodecki,Mintert}.
The main problem affecting most known mixed state entanglement measures is, indeed, the fact that extending a measure from the pure state case to the mixed state case usually requires challenging maximization procedures over all its possible pure state decompositions \cite{Eisert}-\cite{Takayanagi}. Notwithstanding, the investigation of the connection between entanglement and mixedness exhibited by a quantum system is of great interest, e.g. in quantum computation theory \cite{Jozsa,Datta} as well as in quantum teleportation \cite{Paulson}. The threshold of mixedness exhibited by a quantum system compatible with the occurrence of entanglement between two parties of the same system has been analyzed, leading for example to the so-called Kus-Zyczkowski ball of absolutely separable states \cite{Kus}-\cite{Aubrun}. Quite recently, possible links between entanglement and easily measurable observables have been exploited to define experimental protocols aimed at measuring quantum correlations \cite{Brida}-\cite{Johnston}. The use of measurable quantities as entanglement witnesses for a wide class of systems has been known for some time \cite{Toth,Wiesniak}, but an analogous possibility of obtaining entanglement measures is a recent and growing challenge. To the present day, some bounds for entanglement have been measured in terms of correlation functions in spin systems \cite{Cramer} or using quantum quenches \cite{Cardy}. Indeed an experimental measure of entanglement is, generally speaking, out of reach because of the difficulty in addressing local properties of many-particle systems and of the fundamental non-linearity of entanglement quantifiers. For such a reason the best one can do is to provide experimentally accessible bounds on some entanglement quantifiers \cite{Guhne2}. The aim of this paper is to build a bound on the entanglement degree in a general bipartition of a physical system in a mixed state.
We are going to establish an upper bound to the Negativity $N$ \cite{Peres} in terms of the Linear Entropy $S_L$. We are thus studying what is called Negative Partial Transpose (NPT) entanglement. It should however be emphasized that a non-zero Negativity is a sufficient but not necessary condition to detect entanglement, since Positive Partial Transpose (PPT, or bound) entanglement exists across bipartitions of dimensions higher than $2\times 3$, which can not be detected by means of the Negativity criterion \cite{Horodecki2}. Our investigation contributes to the topical debate concerning a link between quantum correlations and mixedness \cite{Wei}. We stress that our result is of experimental interest since the bound on $N$ may easily be evaluated measuring the Linear Entropy. \section{An upper bound to the Negativity in terms of Linear Entropy} Consider a $d$-dimensional system S in a state described by the density matrix $(0\leq p_i \leq 1, \;\;\;\forall i)$ \begin{equation}\label{rho} \rho=\sum_ip_i{\sigma_i} \end{equation} where each $\sigma_i$ represents a pure state, and define a bipartition into two subsystems S$_1$ and S$_2$ with dimensions $d_1$ and $d_2$ respectively ($d = d_1\cdot d_2$). It is common [19] to define Negativity as \begin{equation}\label{N} N=\frac{\|\rho^{T_1}\| -1}{d_m-1}=\frac{\textrm{Tr}{\sqrt{\rho^{T_1}(\rho^{T_1})^{\dag}}-1}}{d_m-1} \end{equation} where $d_m = \min \{ d_1, d_2\}$, $\rho^{T_1}$ is the matrix obtained through a partial transposition with respect to the subsystem $S_1$ and $ \| \cdot \| $ is the trace norm ($ \| O \|\equiv \textrm{Tr}\{ \sqrt{O O^{\dag}}\}$). In what follows we will call $d_M= \max \{ d_1, d_2\}$. By construction, $0\leq N\leq 1$, with $N=1$ for maximally entangled states only. 
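As a concrete illustration of definition (\ref{N}), the Negativity of a low-dimensional state can be computed directly from the spectrum of the partial transpose. The following minimal sketch (Python with NumPy; the helper names are ours) evaluates $N$ for a two-qubit Bell state, which is maximally entangled, and for a pure product state.

```python
import numpy as np

def partial_transpose(rho, d1, d2):
    # Transpose the first tensor factor: reshape to (d1, d2, d1, d2)
    # and swap the two subsystem-1 indices.
    d = d1 * d2
    return rho.reshape(d1, d2, d1, d2).transpose(2, 1, 0, 3).reshape(d, d)

def negativity(rho, d1, d2):
    # N = (||rho^{T_1}||_1 - 1) / (d_m - 1); the trace norm of the
    # Hermitian matrix rho^{T_1} is the sum of |eigenvalues|.
    dm = min(d1, d2)
    eig = np.linalg.eigvalsh(partial_transpose(rho, d1, d2))
    return (np.abs(eig).sum() - 1) / (dm - 1)

# Maximally entangled two-qubit (Bell) state: N = 1.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Pure product state |00><00|: N = 0.
prod = np.zeros(4)
prod[0] = 1.0
rho_prod = np.outer(prod, prod)

print(negativity(rho_bell, 2, 2))   # maximal: N = 1
print(negativity(rho_prod, 2, 2))   # separable: N = 0
```

For the Bell state the partial transpose has eigenvalues $\{1/2,1/2,1/2,-1/2\}$, so its trace norm is $2$ and $N=1$, matching the normalisation above.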
Furthermore, the Linear Entropy $S_L$ in our system is defined as \begin{equation}\label{SL} S_L=\frac{d}{d-1}(1-\textrm{Tr} \rho^2)=\frac{d}{d-1}P_E \end{equation} where $P_E=1-\textrm{Tr} \rho^2=1-\| \rho \|_2^2$ is a measure of mixedness in terms of the Purity $\textrm{Tr} \rho^2$ of the state, $ \| \rho \|_2 $ being the Hilbert-Schmidt norm of $\rho$ ($ \| O \|_2 \equiv \sqrt{\textrm{Tr}\{O O^{\dag}\}}$). By definition, $S_L = 0$ for any pure state while $S_L = 1$ for maximally mixed states. It is easy to see that there exists a link between the trace norm of an operator $O$ in a $d$-dimensional Hilbert space and its Hilbert-Schmidt norm. Such a link can be expressed as \begin{equation}\label{normaO} \| O \|^2=( \sum_{i=1}^d |\lambda_i | )^2 \leq d \sum_{i=1}^d | \lambda_i |^2 = d \| O \|^2_2 \end{equation} where $\lambda_i$ is the $i$-th eigenvalue of $O$ and the so-called Chebyshev sum inequality $\left( \sum_{i=1}^d a_i \right)^2 \leq d \sum_{i=1}^da_i^2$ has been used. Since, in addition, the Hilbert-Schmidt norm is invariant under partial transposition, one readily gets a first explicit link between Negativity and mixedness $P_E$, valid for generic $d$-dimensional systems, in the form of an upper bound, which reads \begin{equation}\label{Q1} N\leq \frac{\sqrt{d}\sqrt{1-P_E}-1}{d_m-1}\equiv Q_1 \end{equation} Equation (\ref{Q1}) provides an upper bound to the Negativity $N$ in terms of $P_E$ and thus, in view of equation (\ref{SL}), in terms of the Linear Entropy. This bound imposes a zero value for $N$ only for a maximally mixed state. It is known \cite{Thirring}, however, that no entanglement can survive in a state whose purity is smaller than or equal to $(d-1)^{-1}$. Also in the case of a pure (or almost pure) state, the bound becomes useless as long as the bipartition is not ``balanced'' (by ``balanced'' we mean a bipartition where $d_m=\sqrt{d}$).
It indeed becomes greater than one (thus being unable to give information about entanglement) for mixedness smaller than $\frac{d-d_m^2}{d}$, which might even approach 1 in some specific cases (recall that, by definition, $d_m\leq \sqrt{d}$). We would, however, expect the bound to become trivial only in the case of pure states ($P_E = 0$). In the following we show that bound (\ref{Q1}) can be strengthened. \section{Strengthening the previous bound} Observe firstly that the rank $r_{\rho}$ of $\rho^{T_1}$ is not greater than $d_m^2$ (equal to $d$) when $\rho$ is pure (maximally mixed). For this reason we write \begin{equation}\label{r(SL)} r(S_L)\equiv \max_{\{\rho : \textrm{Tr} \rho^2=1-\frac{d-1}{d}S_L\}}r_{\rho} \end{equation} By construction, $r(0)=d_m^2$, since any pure state can be written in a Schmidt decomposition consisting of $d_m$ vectors, and $r(1) = d$, because a maximally mixed state is proportional to the identity. Since by definition $\left( \sum_{i=1}^d | \lambda_i |\right)^2=\left( \sum_{i=1}^{r(S_L)}{| \lambda_i |}\right)^2$ holds for any physical system, equation (\ref{Q1}) may be substituted by the following inequality \begin{equation}\label{N_leq} N\leq \frac{\sqrt{r(S_L)}\sqrt{1-P_E}-1}{d_m-1} \end{equation} Note however that there exist at least some physical systems for which the function in (\ref{r(SL)}), due to the maximization procedure involved in its definition, is always equal to $d$ in the range $S_L \in (0, 1]$, showing then a discontinuity at $S_L = 0$ as \begin{equation}\label{SL_to_0} \lim_{S_L\to 0} r(S_L)=d\neq d_m^2=r(0) \end{equation} Since we want our result to hold generally, independently of the particular system analyzed, equation (\ref{N_leq}) cannot improve equation (\ref{Q1}): even for slightly mixed states $(0 < S_L \ll 1)$ we have a priori no information on $r(S_L)$, which might be equal to $d$, tracing equation (\ref{N_leq}) back to (\ref{Q1}).
Despite this, we may correct (\ref{N_leq}) exploiting the expectation that for very low mixedness some of these eigenvalues are much smaller than the other ones. Indeed, all the $r(S_L)$ non-vanishing eigenvalues appearing in (\ref{normaO}) are treated on an equal footing in going from $\| \rho^{T_1} \|$ to $\| \rho^{T_1} \|_2$. To properly take into account the difference between them, go back to equation (\ref{rho}) and define a reference pure state $\sigma_R$ at will among the ones having the largest occupation probability $p_R$. The spectrum of $\sigma_R^{T_1}$ consists of $n_p$ non-zero eigenvalues $\{\mu_{\alpha} ^{(R)} \}$ $\left(\max n_p =d_m^2\right)$ and of $n_m=d-n_p$ zero eigenvalues $\{ \nu_{\beta}^{(R)}\}$. We call the former $\alpha$-class eigenvalues and the latter $\beta$-class eigenvalues, and obviously the latter class does not contribute to $\| \sigma_R^{T_1}\|$. In order to strengthen (\ref{Q1}) we are interested in the spectrum of $\rho^{T_1}$ which, in general, consists of $d$ non-zero eigenvalues. Unfortunately, then, we cannot directly introduce analogous $\alpha$- and $\beta$-classes to identify which eigenvalues contribute to the sum involved in (\ref{normaO}) comparatively much less than the other ones, when the state $\rho$ possesses a low mixedness degree and is thus very close to a pure state. To overcome this difficulty let us consider a parameter-dependent class of density matrices associated to the given $\rho$ \begin{equation}\label{tau} \tau (x)=\sum_i q_i(x)\sigma_i \end{equation} with $x\geq 0$ and some fixed value $x_1>0$, such that, for all $i$, \begin{equation}\label{x_to_x1} \lim_{x\to x_1} q_i(x)=p_i \;\;\;\; \lim_{x\to 0} q_i(x)=\delta_{iR} \end{equation} and such that all $q_i(x)$s are continuous functions of $x$.
Thus $\tau(x)^{T_1}$ continuously connects $\rho^{T_1}$ and $\sigma_R^{T_1}$ and, as a consequence, any $\nu_{\beta}^{(R)}$ is continuously connected to a particular eigenvalue of $\rho^{T_1}$, which will be the corresponding mixed-state $\beta$-class eigenvalue $\nu_{\beta}$. In such a way one can define the function $\nu_{\beta_0}(x)$ as the eigenvalue of $\tau(x)^{T_1}$ having the property \begin{equation}\label{x_to_0} \lim_{x\to 0} \nu_{\beta_0}(x)=\nu_{\beta_0}^{(R)} \end{equation} and so the $\beta$-class eigenvalue for $\rho^{T_1}$ as \begin{equation}\label{nu_beta} \nu_{\beta_0}\equiv \lim_{x\to x_1} \nu_{\beta_0}(x) \end{equation} We emphasize at this point that the results of this paper do not depend on the explicit functional dependence of $\tau(x)$ on $x$, which can be chosen at will provided it satisfies conditions (\ref{x_to_x1}). Indeed, $\tau(x)$ is just a mathematical tool, with (in general) no physical meaning. To save some writing and in view of equation (\ref{normaO}), we put \begin{equation}\label{A} A=\sum_{\alpha}\mu_{\alpha}^2 \;\;\;\; B=\sum_{\beta}\nu_{\beta}^2 \end{equation} and notice that $\textrm{Tr}(\rho^{T_1})^2=\textrm{Tr} \rho^2=A+B$. We can now state (see Appendix for a proof) the following. {\bf Lemma 1} Given a state $\rho$ of a system in a $d$-dimensional Hilbert space, and the associated reference pure state $\sigma_R$, for any set of states $\tau(x)$ satisfying (\ref{tau}) and (\ref{x_to_x1}), there exists a value $\delta\geq x_1$ such that $1-A(x)-B(x)\geq B(x)d$ for any $x \in [0,\delta]$.
This result allows us to find a function $w(S_L)$ such that $w(0) = d_m^2$ and \begin{equation}\label{normarho} \| \rho^{T_1} \|^2\leq w(S_L)=f(\|\rho^{T_1} \|^2_2) \end{equation} Starting from the identity \begin{eqnarray}\label{identity} \| \rho^{T_1} \|^2=(\sum_{\alpha}^{d_m^2}|\mu_{\alpha}|)^2+(\sum_{\beta}^{d-1}|\nu_{\beta}|)^2+ 2\sum_{\alpha}^{d_m^2}|\mu_{\alpha}| \sum_{\beta}^{d-1}|\nu_{\beta}| \end{eqnarray} and applying the Chebyshev sum inequality term by term we obtain \begin{equation}\label{rho_T1} \| \rho^{T_1} \|^2\leq ( d_m\sqrt{A+B}+\sqrt{\frac{d-1}{d}}\sqrt{1-A-B} )^2 \end{equation} where Lemma 1 has been exploited. Expressing equation (\ref{rho_T1}) in terms of Negativity and Purity we finally get \begin{equation}\label{Q2} N\leq \frac{d_m\sqrt{1-P_E}+\sqrt{\frac{d-1}{d}}\sqrt{P_E}-1}{d_m-1}\equiv Q_2 \end{equation} Bound (\ref{Q2}) improves bound (\ref{Q1}) at high purity, that is, $Q_2 < Q_1$ when $S_L$ is small, becoming in general greater than $Q_1$ at low purity. In addition it still suffers the same drawback as $Q_1$, not vanishing when $1-P_E=\frac{1}{d-1}$. In such a case one has to consider the lower bound $\frac{1}{d-1}$ on purity, below which no entanglement survives. In order to take such a bound into account, instead of distinguishing among $\alpha$ and $\beta$ eigenvalues of $\rho^{T_1}$, we can divide them into positive ones $\{ \xi_i\}$ and negative ones $\{\chi_i\}$.
In this way, calling $n_-$ and $(d-n_-)$ the numbers of negative and positive eigenvalues, respectively, and applying the Lagrange multiplier method to the function $\| \rho^{T_1} \|=\sum_i^{d-n_-} \xi_i-\sum_j^{n_-} \chi_j$ subject to the constraints $\sum_i\xi_i^2+\sum_j \chi_j^2=1-P_E$ and $\sum_i\xi_i+\sum_j \chi_j=1$, one finds \begin{equation}\label{norma_rho_leq} \| \rho^{T_1} \|^2\leq\frac{d-2n_-+2\sqrt{n_-(d-n_-)(d(1-P_E)-1)}}{d} \end{equation} Bound (\ref{norma_rho_leq}) can be exploited to show that no entanglement can survive at purity lower than $\frac{1}{d-1}$. Indeed, for entanglement to exist, at least one eigenvalue has to be negative. However, by normalization, it always has to be true that $\| \rho^{T_1} \| \geq 1$, and this implies that, as long as $n_-\geq 1$, the purity $1-P_E$ cannot be smaller than $\frac{1}{d-1}$, as expected. However, in general, the number of negative eigenvalues is not known. In these cases the best one can do is to look for the maximum, with respect to $n_-$, of the right-hand side of (\ref{norma_rho_leq}), leading unfortunately once again to bound (\ref{Q1}) on $N$. However, since $N\leq \Theta (\frac{d-2}{d-1}-P_E)\equiv Q_3$ always holds, where $\Theta (x)$ is the Heaviside step function, we may define $Q=\min\{Q_1, Q_2, Q_3\}$ and state our final and main result as \begin{equation}\label{NQ} N \leq Q \end{equation} valid for every possible bipartition of a quantum system, independently of its (finite) dimension, its detailed structure or its properties. It is worth stressing that computing Negativity quickly becomes a very hard task as the dimension of the Hilbert space grows, while the evaluation of purity can be performed without particular effort. We emphasize in addition that the bound $Q$ in (\ref{NQ}) only depends on purity, and is completely determined once a bipartition of the physical system is fixed and purity is known.
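The inequality (\ref{NQ}) can also be probed numerically. The sketch below (Python with NumPy; the sampling choices and helper names are ours) draws random mixed states from the Hilbert-Schmidt ensemble $\rho=AA^{\dag}/\mathrm{Tr}(AA^{\dag})$, evaluates $N$, $P_E$ and the three bounds, asserts the rigorous Chebyshev-type bound (\ref{Q1}), and reports the largest observed margin $N-Q$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    # Hilbert-Schmidt ensemble: rho = A A^dag / Tr(A A^dag).
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def negativity(rho, d1, d2):
    # Partial transpose of subsystem 1, then N from the trace norm.
    pt = rho.reshape(d1, d2, d1, d2).transpose(2, 1, 0, 3).reshape(d1*d2, d1*d2)
    return (np.abs(np.linalg.eigvalsh(pt)).sum() - 1) / (min(d1, d2) - 1)

worst = -np.inf
for _ in range(200):
    d1, d2 = rng.integers(2, 5, size=2)
    d, dm = d1 * d2, min(d1, d2)
    rho = random_state(d)
    purity = np.trace(rho @ rho).real        # purity = 1 - P_E
    PE = 1 - purity
    N = negativity(rho, d1, d2)
    Q1 = (np.sqrt(d * purity) - 1) / (dm - 1)
    Q2 = (dm * np.sqrt(purity)
          + np.sqrt((d - 1) / d) * np.sqrt(PE) - 1) / (dm - 1)
    Q3 = 1.0 if PE < (d - 2) / (d - 1) else 0.0
    Q = min(Q1, Q2, Q3)
    assert N <= Q1 + 1e-9   # rigorous consequence of the Chebyshev sum inequality
    worst = max(worst, N - Q)
print("largest observed N - Q:", worst)
```

Only $Q_1$ is asserted, since it follows directly from (\ref{normaO}); the printed margin lets one inspect how tight the combined bound $Q$ is on the sampled states.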
This means that an experimental measure of purity allows one to extract information about the maximal degree of bipartite entanglement one can find in the system under scrutiny. Some purity measuring protocols, or at least purity estimations based on experimental data, have been proposed. They are based on statistical analysis of homodyne distributions, obtained measuring radiation field tomograms \cite{Manko}, on the properties of graph states \cite{Wunderlich}, or on the availability of many different copies of the state over which separable measurements are performed \cite{Bagan}. In all the cases where a measure of purity is possible, an experimental estimation of bipartite entanglement is available thanks to (\ref{NQ}), which is then actually experimentally accessible. \section{Crossover between $Q_1$ and $Q_2$ and numerical results} As commented previously, bounds $Q_1$ (Eq. (\ref{Q1})) and $Q_2$ (Eq. (\ref{Q2})) can supply information about bipartite entanglement in two different setups: $Q_1$ is indeed accurate enough for a balanced bipartition (i.e. when $d_m\sim \sqrt{d}$) but fails when $\sqrt{d}\gg d_m$, since it rapidly becomes greater than 1. To solve this problem, we obtained the bound $Q_2$ which, by construction, provides nontrivial information about bipartite entanglement in an unbalanced bipartition $(\sqrt{d}\gg d_m )$, but may not work properly for a balanced one. It is actually a very easy task to show that our new bound $Q_2$ works better than the old one, $Q_1$ (i.e., $Q_1\geq Q_2$), when the purity $P$ is greater than a critical value given in terms of the total Hilbert space dimension and of the subdimensions of the bipartition, i.e. when \begin{equation}\label{Pc} P(\rho)\geq \frac{d-1}{d(\sqrt{d}-d_m)^2+d-1}=P_c \end{equation} Two limiting cases are easily studied directly from Eq.
(\ref{Pc}): for a perfectly balanced bipartition ($\sqrt{d}= d_m$) one gets $P_c=1$ and, since by definition $\frac{1}{d}\leq P(\rho)\leq 1$, in this case the bound $Q_1$ is smaller (and therefore works better) than the bound $Q_2$ for any possible quantum state. In the opposite limit of a strongly unbalanced bipartition, one can roughly approximate $(\sqrt{d}-d_m)^2\sim d$ and, since by definition $d\geq 4$, this leads to $P_c\sim \frac{d-1}{d^2+d-1} < \frac{1}{d}$. Taking into account the natural bounds for the purity of a quantum state, this in turn means that in such a limit $Q_1 > Q_2$ for any quantum state or, in other words, the new bound always works better. This behavior can be clearly seen in Fig. (\ref{Fig1}), where the dependence of $P_c$ on $d$ and $d_m$ is shown together with the natural limiting values of $P(\rho)$. \begin{figure} \caption{(Color online) Purity threshold $P_c$ given in Eq. (\ref{Pc}) as a function of $d$ and $d_m$, together with the natural limiting values of $P(\rho)$.} \label{Fig1} \end{figure} \begin{figure} \caption{(Color online) $\Delta_1$ (light red triangles) and $\Delta_2$ (dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_m = d_M$ randomly chosen in $[2, 10]$. For these perfectly balanced bipartite states $\Delta_1 < \Delta_2$ everywhere in state space. } \label{Fig2} \end{figure} \begin{figure} \caption{(Color online) $\Delta_1$ (light red triangles) and $\Delta_2$ (dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_M = d_m+60$ and $d_m$ randomly chosen in $[2, 10]$. Since the bipartitions are no longer perfectly balanced, there is a much broader mixing of values of $\Delta_1$ and $\Delta_2$. In particular, $\Delta_1$ has a much wider distribution of values, while $\Delta_2$ has a much denser distribution around central values. The inset shows the difference $\Delta_1-\Delta_2$.
} \label{Fig3} \end{figure} \begin{figure} \caption{(Color online) $\Delta_1$ (light red triangles) and $\Delta_2$ (dark blue circles) evaluated for 1000 randomly generated bipartite states, with $d_M = d_m+70$ and $d_m$ randomly chosen in $[2, 5]$. For these strongly unbalanced bipartitions we always detect $\Delta_1>\Delta_2$. The inset shows the difference $\Delta_1-\Delta_2$. } \label{Fig4} \end{figure} To better exemplify this behavior, we report here the results of numerical simulations performed with the aid of the $QI$ package for Mathematica \cite{Mathematica}, by which random quantum states, uniformly distributed according to different metrics, have been generated in different dimensions. On these states we tested the bounds $Q_1$ and $Q_2$. Figures (\ref{Fig2})-(\ref{Fig4}) show the differences $\Delta_i=Q_i-N$ $(i=1,2)$ between the bounds and the Negativity of the state, once a bipartition has been fixed. In particular, in a first run of simulations (Fig. (\ref{Fig2})) we generated $10^3$ perfectly balanced bipartite states (such that $d_m=d_M=\sqrt{d}$), randomly choosing the dimension of the two subsystems for each quantum state within the range $d_M=d_m\in[2,10]$. The results in Fig. (\ref{Fig2}) clearly show that $\Delta_1< \Delta_2$ for all the analyzed states. The second run of simulations was performed with $d_m$ randomly chosen in $[2, 14]$ and $d_M = d_m+60$. In such a case, as can be seen in Fig. (\ref{Fig3}), the difference $\Delta_1-\Delta_2$ has no fixed sign. The two subdimensions are, indeed, such that the critical value of purity $P_c$ in Eq. (\ref{Pc}) is neither extremely close to $\frac{1}{d}$ nor to 1. As can be noticed from the inset of Fig. (\ref{Fig3}), which shows the difference $\Delta_1-\Delta_2$, on average it is still true that $\Delta_1<\Delta_2$. The third set of numerical data, finally, was obtained by generating $10^3$ random states with subdimensions $d_M=d_m+70$ and $d_m$ randomly drawn in $[2,5]$.
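The crossover purity of Eq. (\ref{Pc}) that separates these regimes is cheap to evaluate. A minimal sketch (the function name is ours; the sample dimensions merely mimic the three simulation runs) shows $P_c$ moving from the top of the purity window $[\frac{1}{d},1]$ down towards its lower edge as the bipartition becomes more unbalanced:

```python
import math

def critical_purity(d, d_m):
    # P_c of Eq. (Pc): for purities above it Q2 is the tighter bound, below it Q1 is
    return (d - 1) / (d * (math.sqrt(d) - d_m) ** 2 + d - 1)

# perfectly balanced (d_m = sqrt(d)): P_c = 1, so Q1 wins for every state
print(critical_purity(9, 3))          # -> 1.0
# moderately unbalanced, as in the second run (e.g. d_m = 8, d_M = 68)
print(critical_purity(8 * 68, 8))     # neither close to 1/d nor to 1
# strongly unbalanced, as in the third run (e.g. d_m = 3, d_M = 73)
print(critical_purity(3 * 73, 3))     # close to the lower edge of order 1/d
```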
In this limit the value of $P_c$ is very close to the minimum of the purity, and we therefore expect $Q_2$ to work better than $Q_1$ for almost any state. This is indeed confirmed by the simulations shown in Fig. (\ref{Fig4}), in which $\Delta_2<\Delta_1$. As an example of application of our results, consider a single two-level system interacting with a spin system composed of $n_s$ spins, each of which lives in a $d_s$-dimensional Hilbert space. The total system Hilbert space therefore has dimension $d = 2(d_s)^{n_s}$. Let us suppose the spin system is a chain of 10 spin-$\frac{1}{2}$ particles (a relatively small system, very far from its thermodynamic limit). The total Hilbert-space dimension is then $d = 2^{11}$ and, considering the natural bipartition into the two-level system and the spin chain, one has $d_m = 2$ and $d_M = 2^{10}$. For such a system, the critical value of purity $P_c$ in Eq. (\ref{Pc}) is \begin{equation}\label{Pcnum} P_c=\frac{2^{11}-1}{2^{11}(\sqrt{2^{11}}-2)^2+2^{11}-1}\sim 0.000534 \end{equation} The lowest value of purity for which bipartite entanglement can survive is, as said before, $P_l=\frac{1}{d-1}\sim 0.00049$. Therefore, for all total states having purity $0.000534\leq P(\rho)\leq 1$, bound $Q_2$ in Eq. (\ref{Q2}) works better than $Q_1$. Only for the small fraction of states having an extremely low purity, in the range $[0.00049, 0.000534]$, does bound $Q_1$ give better information than $Q_2$. This shows again that, for unbalanced bipartitions (and even in the case of a relatively small number of individual components of the total system), $Q_2$ works much better than $Q_1$. \section{Application to thermal entanglement} Of particular interest is the application of the results of this paper to the case of thermal entanglement, where both the Linear Entropy and its link to the Negativity acquire a much clearer meaning.
A recent result \cite{Popescu}, indeed, shows how the canonical-ensemble description of thermal equilibrium stems from the existence of quantum correlations between a system and its thermal bath. In view of this, it has been shown that it is possible, with a very small statistical error, to replace the system + bath microcanonical ensemble with a pure state inside the suitable energy shell, still obtaining the appropriate thermal statistics characterizing the Gibbs distribution. In this context, then, the Linear Entropy of the mixed Gibbs state provides a system/bath entanglement measure. Equation (\ref{NQ}) can then be viewed as a monogamy relation, describing the competition between two kinds of quantum correlations: internal ones, measured by the Negativity, and external ones, measured by the Linear Entropy. On the other hand, it is known that some thermodynamic quantities (such as heat capacity or internal energy) can be used as entanglement witnesses \cite{Wiesniak}, and recent works have shown an even closer link between heat capacity and entanglement for particular systems \cite{Leggio1,Leggio2}. The result of this paper suggests that this link might hold very generally. Indeed, in the case of a Gibbs equilibrium state, $P_E$ is given by the expression \begin{equation}\label{PE} P_E=\sum_{i\neq j}\frac{e^{-\beta E_i}e^{-\beta E_j}}{Z^2}=\sum_{i\neq j}P_{E}^{ij} \end{equation} where $E_i$ is the $i$-th energy level of the system and $Z$ is its partition function, $\beta$ being the inverse temperature in units of $k_B$.
Heat capacity in a finite-dimensional system reads \begin{equation}\label{CV} C_V\equiv\beta^2(\langle H^2\rangle-\langle H\rangle^2)=\beta^2\sum_{i\neq j}P_E^{ij}\frac{(E_i-E_j)^2}{2} \end{equation} There is then a clear similarity between $P_E$ and $C_V$ as given by equations (\ref{PE}) and (\ref{CV}), suggesting that a measurement of the latter, together with little knowledge about the energy spectrum of the physical system, might supply significant information on the Linear Entropy of the system and, as a consequence, on its maximal degree of internal bipartite entanglement. This triggers interest in a future detailed analysis of the relation between $P_E$ and $C_V$, which, in turn, might supply an easily measurable entanglement bound as well as highlight how the origin of thermodynamical properties is strongly related to non-classical correlations and monogamy effects. Such a connection, and the usefulness of the bounds derived in the previous sections, can be exemplified with a simple three-qutrit system with a parameter-dependent Hamiltonian \begin{equation}\label{Hl} H_l=\omega J_z+\tau {\bf J}_1\cdot {\bf J}_2+\gamma({\bf J}_1\cdot {\bf J}_2)^2+k{\bf J}_0\cdot({\bf J}_1+{\bf J}_2) \end{equation} where ${\bf J}_i$ is the spin operator of the $i$-th particle, ${\bf J} ={\bf J}_0 +{\bf J}_1 +{\bf J}_2$, and $\omega$, $\tau$, $\gamma$, $k$ are real interaction parameters. This effective Hamiltonian describes a system consisting of two ultracold atoms (spins labeled 1 and 2) in a two-well optical lattice in the Mott insulator phase, where the tunneling term of the usual Bose-Hubbard picture is accounted for as a second-order perturbative term, both coupled with a third atom (labeled 0) via a Heisenberg-like interaction. An external magnetic field is also present, uniformly coupled to the three atoms.
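The pairwise-sum structures of (\ref{PE}) and (\ref{CV}) can be checked directly. The sketch below (our own illustration, with names of our choosing and an arbitrary sample spectrum) builds the Gibbs weights for a finite spectrum and verifies that the pairwise sums reproduce both the linear entropy $1-\mathrm{Tr}\,\rho^2$ and the fluctuation expression $\beta^2(\langle H^2\rangle-\langle H\rangle^2)$:

```python
import math

def gibbs_pair_sums(energies, beta):
    # Linear entropy P_E and heat capacity C_V as pairwise sums over levels,
    # following the pair decompositions quoted in the text
    Z = sum(math.exp(-beta * E) for E in energies)
    p = [math.exp(-beta * E) / Z for E in energies]
    pairs = [(i, j) for i in range(len(p)) for j in range(len(p)) if i != j]
    P_E = sum(p[i] * p[j] for i, j in pairs)
    C_V = beta**2 * sum(p[i] * p[j] * (energies[i] - energies[j])**2 / 2
                        for i, j in pairs)
    return P_E, C_V, p

E, beta = [0.0, 0.7, 1.3, 2.1], 1.5
P_E, C_V, p = gibbs_pair_sums(E, beta)
mean = sum(pi * Ei for pi, Ei in zip(p, E))
var = sum(pi * Ei**2 for pi, Ei in zip(p, E)) - mean**2
print(abs(P_E - (1 - sum(pi**2 for pi in p))))  # ~0: P_E is the linear entropy
print(abs(C_V - beta**2 * var))                 # ~0: C_V matches the fluctuation form
```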
Such a system is a generalization of the one studied in \cite{Leggio2}, where a deep connection between thermal entanglement and heat capacity in parameter space has been shown. Hamiltonian (\ref{Hl}) is analytically diagonalizable, thus allowing us to obtain explicit expressions for the thermodynamic quantities characterizing the Gibbs equilibrium state of the three-atom system, together with the Negativity of the reduced state of the two quadratically coupled spins. \begin{figure} \caption{Negativity of the reduced state of the two ultracold atoms (full line) and heat capacity of the system (dashed line) versus quadratic interaction parameter $\gamma$. The other parameters have been fixed as $k_B T=2$, $\tau=3$, and $k=1$. } \label{Fig5} \end{figure} \begin{figure} \caption{Negativity of the reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus quadratic interaction parameter $\gamma$. The other parameters have been fixed as $k_B T=2$, $\tau=3$, and $k=1$. } \label{Fig6} \end{figure} \begin{figure} \caption{Negativity of the reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus Heisenberg interaction parameter $k$. The other parameters have been fixed as $k_B T=10$, $\tau=3$, and $\gamma=1$. } \label{Fig7} \end{figure} \begin{figure} \caption{Negativity of the reduced state of the two ultracold atoms (full line), bound $Q_1$ (dotted line) and bound $Q_2$ (dashed line) versus temperature $T$ (in units of $k_B$). The interaction parameters have been fixed as $\tau=4$, $k=5$ and $\gamma=1$. } \label{Fig8} \end{figure} The mathematical origin of the connection between heat capacity and Negativity was already discussed in \cite{Leggio2} and is ultimately due to the presence of level crossings in the low-lying energy eigenvalues of the system.
Here we want to show how the existence of the strong connection between purity and Negativity, expressed by bound (\ref{NQ}), can give some hints towards a physical explanation of such an effect, and moreover to exemplify how bound (\ref{NQ}) can often supply important information on the amount of thermal entanglement. Figure (\ref{Fig5}) shows that the connection between thermal entanglement and heat capacity highlighted in \cite{Leggio2} is still present despite the interaction with a third atom. Figures (\ref{Fig6}) and (\ref{Fig7}) show bounds (\ref{Q1}) and (\ref{Q2}), together with the Negativity of the reduced state of the two atoms, versus one of the interaction parameters in the Hamiltonian. Figure (\ref{Fig8}) finally shows the same quantities versus temperature for fixed Hamiltonian parameters. All energies in the plots are expressed in units of $\omega$. It is worth stressing here that in all these plots the bound $Q_3=\Theta(\frac{d-2}{d-1}-P_E)$ is not shown. The reason is that, in order to preserve thermal entanglement, the temperature in our simulations has to be kept at most of the same order of magnitude as the spin-spin interactions, and in such a regime $P_E$ has not yet crossed the threshold $\frac{d-2}{d-1}$, so that $Q_3$ is constantly equal to one. Figures (\ref{Fig6}) and (\ref{Fig8}) clearly show how bound $Q_1$ given in (\ref{Q1}) can become, as discussed, larger than 1. In all these cases (except for a small temperature range in Figure (\ref{Fig8})), however, (\ref{Q2}) is still able to sensibly bound the Negativity. In all the plots shown, and in general every time the bounds (\ref{Q1}) and (\ref{Q2}) are applied to the particular system analyzed here, one always gets useful information about bipartite entanglement in the form, of course, of an upper bound. Such a bound, however, gets very close to zero in some particular cases (see, for example, Figures (\ref{Fig7}) and (\ref{Fig8})), strongly restricting the allowed range of values for the Negativity.
It is then shown that (\ref{NQ}) is able to produce non-trivial results. It is worth noting that in Fig. (\ref{Fig5}) there exist ranges of the parameter $\gamma$ where the Negativity and the heat capacity exhibit simultaneous plateaus. This fact, previously shown and commented upon in reference \cite{Leggio2}, in view of Equation (\ref{NQ}) and of the strong link between heat capacity and the mixedness $P_E$ of a quantum state, legitimates the deduction that in the parameter regions of very low Negativity the heat capacity may be assumed to be almost constant. \section{Conclusions} In this paper we derived a bound on the degree of information storable as bipartite quantum entanglement within an open $d$-dimensional quantum system in terms of its Linear Entropy. Our result is quite general, holding for arbitrary bipartitions of an arbitrary finite-dimensional system. We emphasize that our result is experimentally relevant in view of recently proposed protocols aimed at measuring the purity of the state of a quantum system. Inspired by the seminal paper of Popescu, Short and Winter \cite{Popescu}, our conclusions highlight the interplay between quantum entanglement inside a thermalized system and its physical properties. Our results are of interest not only to quantum information researchers but also to the growing cross-community of theoreticians and experimentalists investigating the subtle underlying link between quantum features and thermodynamics. \section{Appendix} {\bf Proof of Lemma 1.} Let us first notice that both $B(t)$ and $1-A(t)-B(t)$ go to zero when the state $\tau(t)$ is pure. Indeed $A+B=\textrm{Tr}\, \tau^2$ equals the purity, while $B$ by construction vanishes when $\tau$ is pure (see Equations (9)-(13)). Notice further that both $A(t)$ and $B(t)$ are quadratic in $\nu_{\beta}(t)$ and $\mu_{\alpha}(t)$, and hence the statement of Lemma 1 is independent of their sign.
Such a statement can be rewritten as \begin{equation}\label{A1} \frac{1-A-B}{d}-B=l(A, B)\geq 0 \end{equation} The function $l(A,B)$ at the extremal points of its domain (corresponding to a pure and a maximally mixed state) satisfies (\ref{A1}). For a maximally mixed state, calling $n_{\alpha}$ $(n_{\beta})$ the number of $\alpha$- $(\beta$-)class eigenvalues (so that $n_{\alpha}+n_{\beta}=d$), one gets $A=\frac{n_{\alpha}}{d^2}$ and $B=\frac{n_{\beta}}{d^2}$, and thus $l=\frac{n_{\alpha}-1}{d^2}\geq 0$, since $n_{\alpha}\geq 1$. Let us now express $l(A,B)$ as \begin{equation}\label{A2} h(\{\mu_{\alpha}\}, \{\nu_{\beta}\})=\frac{1}{d}\left( 1-\sum_{\alpha}^{n_{\alpha}}\mu_{\alpha}^2- \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2\right)- \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2 \end{equation} We can investigate the interior points using the Lagrange multiplier method, taking into account the trace condition $\sum_{\alpha}^{n_{\alpha}}\mu_{\alpha}^2+ \sum_{\beta}^{n_{\beta}}\nu_{\beta}^2= 1$. This method yields only one stationary point, characterized by values of $\nu_{\beta}$ and $\mu_{\alpha}$ such that the corresponding state is mixed. It is straightforward to check that at this point the function (\ref{A2}) is positive. We then deduce that $l(A,B)\geq 0$, from which Lemma 1 directly follows. Finally, the range $\delta$ of validity of Lemma 1 is given by the requirement $q_R(t)\geq q_{i\neq R}(t)$, such a property being necessary for a sensible definition of the reference pure state $\sigma_R$, which guarantees, in turn, that $B(t)$ vanishes on pure states. \acknowledgments{ A.M. acknowledges the kind hospitality provided by HN at the Physics Department of Waseda University in November 2019. All the authors are grateful to Marek Kus for his constructive and stimulating suggestions and for carefully reading the manuscript. HN was partly supported by the Waseda University Grant for Special Research Projects (Project number: 2019C-256).} \end{document}
\begin{document} \title{Quantized Media with Absorptive Scatterers and Modified Atomic Emission Rates} \author{L.G.~Suttorp and A.J.~van~Wonderen} \address{Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands} \begin{abstract} Modifications in the spontaneous emission rate of an excited atom that are caused by extinction effects in a nearby dielectric medium are analyzed in a quantum-mechanical model, in which the medium consists of spherical scatterers with absorptive properties. Use of the dyadic Green function of the electromagnetic field near a dielectric sphere leads to an expression for the change in the emission rate as a series of multipole contributions, for which analytical formulas are obtained. The results for the modified emission rate as a function of the distance between the excited atom and the dielectric medium show the influence of both absorption and scattering processes. \end{abstract} \maketitle \setlength{\mathindent}{0cm} \renewcommand{\theequation}{\thesection.\arabic{equation}} \section{Introduction}\label{sec1} The emission rate of an excited atom is modified if the electromagnetic properties of its surroundings differ from those of vacuum \cite{P46}. For an atom in front of a dielectric medium filling a half-space, the rate varies with the distance between the atom and the medium \cite{A75}--\cite{KSW01}. Usually, the medium is taken to be homogeneous on the scale of the atomic wavelength, so that its electromagnetic properties are fully described by a susceptibility, which does not vary appreciably on the scale of the wavelength. In general, this susceptibility will be complex, so as to account for absorption and dispersion. For such a configuration modifications of the atomic radiative properties were confirmed experimentally a few years ago \cite{ICLS04}.
If the structure of the medium cannot be neglected, scattering effects play a role as well, so that extinction in such a medium is driven by both absorption and scattering. Extinction by scattering in material media is quite common, owing to the presence of impurities and defects. The interplay between the two types of extinction in atomic decay rates can be investigated in a model in which both of these features occur simultaneously. In a recent paper \cite{SvW10} we have studied the change in the decay rate of an atom in the presence of a medium consisting of non-overlapping spheres that are made of absorptive dielectric material. The spheres are distributed randomly in a half-space, with a uniform average density. In order to describe the absorptive dielectric material of the spheres in a quantum-mechanically consistent way a damped-polariton model has been employed \cite{HB92b}. By introducing an effective susceptibility for the composite medium and after a detailed analysis of surface contributions, we could derive an asymptotic expression for the change in the emission rate at relatively large distances between the atom and the medium. In the present paper we take a somewhat different approach so as to derive an analytic expression for the emission rate that is valid for all distances between the atom and the medium. We shall start from exact expressions for the electromagnetic Green function in the presence of a dielectric sphere of arbitrary radius. As before, the absorptive dielectric material of the spheres will be described by a damped-polariton model. For simplicity we shall assume that the density of the spherical scatterers in the medium is low, so that multiple-scattering effects can be neglected.
\section{Spontaneous emission in the presence of absorbing dielectrics} In the damped-polariton model an absorptive linear dielectric medium is described by a polarization density that is coupled to a bath of harmonic oscillators with a continuous range of frequencies~\cite{HB92b}. The Hamiltonian of the damped-polariton model can be diagonalized exactly, as has been shown both for the case of a uniform dielectric \cite{HB92b} and for a dielectric with arbitrary inhomogeneities \cite{SWo04}. Diagonalization of the Hamiltonian in the general non-homogeneous case yields \begin{equation} H_d=\int d {\bf r}\int_0^{\infty} d \omega\, \hbar\omega \, {\bf C}^{\dagger}({\bf r},\omega)\cdot {\bf C}({\bf r},\omega)\, , \label{2.1} \end{equation} with annihilation operators ${\bf C}({\bf r},\omega)$ and associated creation operators. The electric field can be expressed in terms of these operators as \cite{SWo04}: \begin{equation} {\bf E}({\bf r})=\int d {\bf r}'\int_0^{\infty} d \omega \, \mbox{\sffamily\bfseries{f}}_E({\bf r},{\bf r}',\omega)\cdot{\bf C}({\bf r}',\omega) +{\rm h.c.} \, ,\label{2.2} \end{equation} with a tensorial coefficient: \begin{equation} \mbox{\sffamily\bfseries{f}}_E({\bf r},{\bf r}',\omega)=-i\, \frac{\omega^2}{c^2} \left(\frac{\hbar \, {\rm Im}\,\varepsilon({\bf r}',\omega+i0)}{\pi\varepsilon_0}\right)^{1/2}\, \mbox{\sffamily\bfseries{G}}({\bf r},{\bf r}',\omega+i 0) \, . \label{2.3} \end{equation} Here $\varepsilon$ is the complex local (relative) dielectric constant, which follows from the parameters of the model. 
Furthermore, $\mbox{\sffamily\bfseries{G}}$ is the tensorial Green function, which satisfies the differential equation \begin{eqnarray} && -\nabla\times [\nabla\times \mbox{\sffamily\bfseries{G}} ({\bf r},{\bf r}',\omega+i 0)]\nonumber\\ &&+\frac{\omega^2}{c^2}\, \varepsilon({\bf r},\omega+i 0)\, \mbox{\sffamily\bfseries{G}}({\bf r},{\bf r}',\omega+i 0) =\mbox{\sffamily\bfseries{I}}\, \delta({\bf r}-{\bf r}')\, , \label{2.4} \end{eqnarray} with $\mbox{\sffamily\bfseries{I}}$ the unit tensor. The atomic decay rate in the presence of an absorbing dielectric follows from the inhomogeneous damped-polariton mod\-el in its diagonalized form by employing perturbation theory in leading order \cite{SvW10}. It can be expressed as an integral over a product of the coefficients (\ref{2.3}) and suitable atomic matrix elements: \begin{eqnarray} \Gamma=\frac{2\pi}{\hbar^2 \omega_a^2} \int d {\bf r} \int d {\bf r}' \int d {\bf r}'' \, \langle e|{\bf J}_a({\bf r}')|g\rangle \cdot \mbox{\sffamily\bfseries{f}}_E({\bf r}',{\bf r},\omega_a) \cdot &&\nonumber\\ \cdot\tilde{\mbox{\sffamily\bfseries{f}}}_E^\ast({\bf r}'',{\bf r},\omega_a)\cdot \langle g|{\bf J}_a({\bf r}'')|e\rangle\, , && \label{2.5} \end{eqnarray} with $e$ and $g$ denoting the excited and the ground state of the atom, $\omega_a$ the atomic transition frequency, and the tilde denoting the tensor transpose. Furthermore, ${\bf J}_a({\bf r})$ is the atomic local current density $-\half e\sum_i\{{\bf p}_i/m,\delta({\bf r}-{\bf r}_i)\}$, with ${\bf r}_i\, , \, {\bf p}_i$ the positions and momenta of the electrons and curly brackets denoting the anticommutator. 
In the electric-dipole approximation the atomic decay rate can be expressed in terms of the Green function as: \begin{equation} \Gamma= -\frac{2\omega_a^2}{\varepsilon_0 \hbar c^2} \,\langle e|\bfmu|g\rangle \cdot {\rm Im}\, \mbox{\sffamily\bfseries{G}}({\bf r}_a,{\bf r}_a,\omega_a+i 0)\cdot \langle g|\bfmu|e\rangle \, , \label{2.6} \end{equation} with ${\bf r}_a$ the atomic position and $\bfmu=-e\sum_i({\bf r}_i-{\bf r}_a)$ the atomic electric dipole moment. The above expression for the decay rate of an excited atom in the presence of an inhomogeneous absorptive dielectric can be obtained as well by invoking the fluctuation-dissipation theorem \cite{BHLM96,SKW99}. \section{Green functions}\label{sec2} \setcounter{equation}{0} The Green function in vacuum fulfils the differential equation (\ref{2.4}) with $\varepsilon=1$. It follows from the scalar Green function $G_s({\bf r},{\bf r}',\omega)={\rm exp}(i\omega |{\bf r}-{\bf r}'|/c)/(4\pi|{\bf r}-{\bf r}'|)$ as $\mbox{\sffamily\bfseries{G}}_0({\bf r},{\bf r}',\omega)=-[\mbox{\sffamily\bfseries{I}}+(c^2/\omega^2)\nabla\nabla]G_s({\bf r},{\bf r}',\omega)$. Its explicit form in spherical coordinates is obtained from the expansion of $G_s$ in spherical harmonics and spherical Bessel functions. The ensuing form for the vacuum Green function is \cite{T71,C95} \begin{eqnarray} &&\mbox{\sffamily\bfseries{G}}_0({\bf r},{\bf r}',\omega+i0)=-ik\sum_{\ell=1}^{\infty} \sum_{m=-\ell}^{\ell}\frac{(-1)^m}{\ell(\ell+1)}\nonumber\\ &&\times\left\{\theta(r-r')\left[ {\bf M}_{\ell, m}^{(h)}({\bf r}){\bf M}_{\ell,-m}({\bf r'})+ {\bf N}_{\ell, m}^{(h)}({\bf r}){\bf N}_{\ell,-m}({\bf r'}) \right]\right.\nonumber\\ &&+\left. 
\theta(r'-r)\left[ {\bf M}_{\ell, m}({\bf r}){\bf M}_{\ell,-m}^{(h)}({\bf r'})+ {\bf N}_{\ell, m}({\bf r}){\bf N}_{\ell,-m}^{(h)}({\bf r'}) \right]\right\}\nonumber\\ &&+k^{-2}\, {\bf e}_r{\bf e}_r\, \delta({\bf r}-{\bf r}')\, , \label{3.1} \end{eqnarray} with $k=\omega/c$ the wavenumber, ${\bf e}_r$ a unit vector in the direction of ${\bf r}$ and $\theta(r)$ a step function that equals 1 for positive and 0 for negative argument. The vector harmonics are defined as \begin{eqnarray} &&{\bf M}_{\ell,m}({\bf r})=\nabla\wedge[{\bf r}\psi_{\ell,m}({\bf r})]\, ,\label{3.2}\\ &&{\bf N}_{\ell,m}({\bf r})=k^{-1}\nabla\wedge[\nabla\wedge[{\bf r}\psi_{\ell,m}({\bf r})]]\, ,\label{3.3} \end{eqnarray} where $\psi_{\ell,m}({\bf r})$ stands for $j_\ell(kr)\, Y_{\ell,m}(\theta,\phi)$, with $j_\ell$ spherical Bessel functions and $Y_{\ell,m}$ spherical harmonics. The superscripts $(h)$ in (\ref{3.1}) denote the analogous vector harmonics with spherical Hankel functions $h_\ell^{(1)}$ instead of $j_\ell$. The expression (\ref{3.1}) may be checked by substitution in the differential equation (\ref{2.4}). Differentiation of the step functions yields singular terms, which together with the last term lead to the right-hand side of (\ref{2.4}). The Green function in the presence of a dielectric sphere is the sum of the vacuum Green function and a correction term. For a sphere centered at the origin the latter has the form \cite{T71}-\cite{LKLY94} \begin{eqnarray} &&\mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r}',\omega+i0)= k\sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell} \frac{(-i)^{\ell}(-1)^m}{2\ell+1} \nonumber\\ &&\times\left[ B^e_\ell\, {\bf N}_{\ell,m}^{(h)}({\bf r}){\bf N}_{\ell,-m}^{(h)}({\bf r}')+ B^m_\ell\, {\bf M}_{\ell,m}^{(h)}({\bf r}){\bf M}_{\ell,-m}^{(h)}({\bf r}') \right] \label{3.4} \end{eqnarray} for ${\bf r}$ and ${\bf r}'$ both outside the sphere. 
The electric and magnetic multipole amplitudes read \cite{M08}-\cite{BW99}: \begin{equation} B^p_\ell=i^{\ell+1}\, \frac{2\ell+1}{\ell(\ell+1)}\, \frac{N^p_\ell}{D^p_\ell}\, , \label{3.5} \end{equation} with $p=e,m$. The numerators and denominators are given as \begin{eqnarray} &&N^e_\ell=\varepsilon\, f_\ell(q)\, j_\ell(q')- j_\ell(q)\,f_\ell(q')\, , \nonumber\\ && N^m_\ell=f_\ell(q)\, j_\ell(q')-j_\ell(q)\,f_\ell(q')\, , \nonumber\\ &&D^e_\ell=\varepsilon\, f^{(h)}_\ell(q)\, j_\ell(q')-h^{(1)}_\ell(q)\,f_\ell(q')\, , \nonumber\\ &&D^m_\ell= f^{(h)}_\ell(q)\, j_\ell(q')-h^{(1)}_\ell(q)\,f_\ell(q')\, , \label{3.6} \end{eqnarray} with $f_\ell(q)=(\ell+1)\, j_\ell(q)-q\, j_{\ell+1}(q)$ and $f^{(h)}_\ell(q)=(\ell+1)\, h^{(1)}_\ell(q)-q\, h^{(1)}_{\ell+1}(q)$. The spherical Bessel and Hankel functions depend on $q=k a$ and $q'=\sqrt{\varepsilon}\, q$, with $a$ the radius of the sphere. To determine the change in the atomic decay rate due to the presence of a dielectric sphere one needs the components of the Green function $\mbox{\sffamily\bfseries{G}}_c$ for coinciding arguments. The non-vanishing components follow from (\ref{3.4}) as: \begin{eqnarray} &&{\bf e}_r\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_r = \nonumber\\ && =\frac{1}{4\pi k r^2}\sum_{\ell=1}^\infty (-i)^\ell\, [\ell(\ell+1)]^2\, B^e_\ell \, [h_\ell^{(1)}(kr)]^2 \label{3.7} \end{eqnarray} and \begin{eqnarray} &&{\bf e}_\theta\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_\theta = {\bf e}_\phi\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega+i0)\cdot {\bf e}_\phi = \nonumber\\ && =\frac{1}{8\pi k r^2}\sum_{\ell=1}^\infty (-i)^\ell\, \ell(\ell+1) \left\{ B^e_\ell \, \left[\frac{d}{dr}[rh_\ell^{(1)}(kr)]\right]^2\right.\nonumber\\ &&\left. \rule{3.5cm}{0cm}+ B^m_\ell\, \left[kr\, h_\ell^{(1)}(kr) \right]^2 \right\}\, , \label{3.8} \end{eqnarray} in agreement with \cite{DKW01}. 
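These amplitudes are easily evaluated numerically. The sketch below (our own illustration, restricted to $\ell=1$, with closed-form spherical Bessel and Hankel functions) implements (\ref{3.5})-(\ref{3.6}) and checks the obvious limiting case: for $\varepsilon=1$ the numerators vanish and the sphere does not scatter, whereas an absorptive $\varepsilon$ yields a nonzero electric dipole amplitude $B^e_1$:

```python
import cmath

def j(l, x):
    # spherical Bessel functions j_0..j_2 in closed form (complex-capable)
    s, c = cmath.sin(x), cmath.cos(x)
    return [s / x,
            s / x**2 - c / x,
            (3 / x**3 - 1 / x) * s - 3 * c / x**2][l]

def h(l, x):
    # spherical Hankel functions h^(1)_0..h^(1)_2
    e = cmath.exp(1j * x)
    return [-1j * e / x,
            -(1 / x + 1j / x**2) * e,
            (1j / x - 3 / x**2 - 3j / x**3) * e][l]

def B_e(l, eps, q):
    # electric multipole amplitude of Eqs. (3.5)-(3.6), q = k a, q' = sqrt(eps) q
    qp = cmath.sqrt(eps) * q
    f = lambda x: (l + 1) * j(l, x) - x * j(l + 1, x)
    fh = lambda x: (l + 1) * h(l, x) - x * h(l + 1, x)
    N = eps * f(q) * j(l, qp) - j(l, q) * f(qp)
    D = eps * fh(q) * j(l, qp) - h(l, q) * f(qp)
    return 1j**(l + 1) * (2 * l + 1) / (l * (l + 1)) * N / D

print(abs(B_e(1, 1.0, 0.8)))           # vanishing contrast: no scattering (~0)
print(abs(B_e(1, 2.25 + 0.1j, 0.8)))   # absorptive sphere: nonzero amplitude
```

Higher multipoles only require extending $j_\ell$ and $h^{(1)}_\ell$ through the standard recurrence $f_{\ell+1}(x)=(2\ell+1)f_\ell(x)/x-f_{\ell-1}(x)$.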
Here ${\bf e}_r$, ${\bf e}_\theta$ and ${\bf e}_\phi$ are unit vectors in a spherical coordinate system. \section{Decay near a half-space of absorptive scatterers} \setcounter{equation}{0} We consider a half-space $z<0$ filled with a dilute set of spherical scatterers. The non-overlapping spheres are randomly distributed with a uniform average density. An excited atom is located at ${\bf r}_a=(0,0,z_a)$, with $z_a>a$, so that the minimal distance between the atom and the scatterers is positive. The decay rate is given by the sum of the vacuum decay rate and a correction term. The modified rate depends on the orientation of the dipole-moment transition matrix element. If the dipole moment is oriented perpendicular to the $z$-axis, the vacuum rate is $\Gamma_{0,\perp}=\omega_a^3\, |\langle e|\bfmu_\perp|g\rangle|^2/(3\pi \varepsilon_0 \hbar c^3)$. A similar formula is valid for a dipole moment oriented parallel to the $z$-axis, with $\bfmu_\perp$ replaced by $\bfmu_\parallel$. If multiple-scattering effects are neglected, the correction term in the decay rate is given by the sum of the correction terms due to all spheres. For the perpendicular orientation one finds \begin{eqnarray} &&\Gamma_{c,\perp}= -\frac{2\omega_a^2}{\varepsilon_0 \hbar c^2} \,\sum_i\langle e|\bfmu_\perp|g\rangle \cdot\nonumber\\ &&\cdot {\rm Im}\, \mbox{\sffamily\bfseries{G}}_c({\bf r}_a-{\bf R}_i,{\bf r}_a-{\bf R}_i,\omega_a+i 0)\cdot \langle g|\bfmu_\perp|e\rangle\, , \label{4.1} \end{eqnarray} with $\mbox{\sffamily\bfseries{G}}_c$ given by (\ref{3.4}) and ${\bf R}_i$ the positions of the centers of the spheres.
Choosing the $x$-axis to be parallel to the transition matrix element and averaging over the positions of the spheres we get \begin{eqnarray} &&\langle\Gamma_{c,\perp}\rangle=-\frac{6\pi n c}{\omega_a}\, \Gamma_{0,\perp}\, {\rm Im} \int_{z<0} d{\bf r} \nonumber\\ &&\times \, {\bf e}_x\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r}_a-{\bf r},{\bf r}_a-{\bf r},\omega_a+i0)\cdot{\bf e}_x\, , \label{4.2} \end{eqnarray} with $n$ the uniform density of the spheres and ${\bf e}_x$ a unit vector along the $x$-axis. The volume integral can be written as a triple integral, viz.\ over $|{\bf r}-{\bf r}_a|$, $z$ and an azimuthal angle. Upon carrying out the latter two of these integrals one finds for the volume integral in (\ref{4.2}): \begin{eqnarray} &&\pi \int_{z_a}^\infty dr\left[ \left(\frac{z_a^3}{3r}-z_a r+\frac{2r^2}{3}\right) {\bf e}_r\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_r \right.\nonumber\\ &&\left. +\left(-\frac{z_a^3}{3r}+\frac{r^2}{3}\right) {\bf e}_\theta\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_\theta \right.\nonumber\\ &&\left.+\left(-z_ar+r^2\right) {\bf e}_\phi\cdot \mbox{\sffamily\bfseries{G}}_c({\bf r},{\bf r},\omega_a+i0)\cdot {\bf e}_\phi\right]\, . 
\label{4.3} \end{eqnarray} Insertion of (\ref{3.7}) and (\ref{3.8}) yields \begin{eqnarray} &&\langle\Gamma_{c,\perp}\rangle=-\frac{3\pi n c^3}{4\omega_a^3}\, \Gamma_{0,\perp}\, {\rm Im} \sum_{\ell=1}^{\infty} (-i)^\ell \ell(\ell+1)\nonumber\\ &&\times\left[B^e_{\ell}\, J^e_{\ell,\perp}(\zeta_a)+ B^m_{\ell}\, J^m_{\ell,\perp}(\zeta_a)\right]\, ,\label{4.4} \end{eqnarray} with multipole amplitudes $B_\ell^p$ given by (\ref{3.5})-(\ref{3.6}) with $k=\omega_a/c$, and with the integrals \begin{eqnarray} && J^e_{\ell,\perp}(\zeta)=2\ell(\ell+1)\int_{\zeta}^\infty dt\, \left(\frac{\zeta^3}{3t^3}-\frac{\zeta}{t}+\frac{2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\nonumber\\ &&+\int_{\zeta}^\infty dt\, \left(-\frac{\zeta^3}{3t^3}-\frac{\zeta}{t}+\frac{4}{3}\right) \left[ \frac{d}{dt}\left[th^{(1)}_\ell(t)\right]\right]^2 \label{4.5} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\perp}(\zeta)=\int_{\zeta}^\infty dt\, \left(-\frac{\zeta^3}{3t}-\zeta t+\frac{4t^2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\, , \label{4.6} \end{eqnarray} with $\zeta$ equal to $\zeta_a=(\omega_a+i0) z_a/c$. The derivative of the spherical Hankel function in (\ref{4.5}) can be rewritten in terms of Hankel functions with a different index \cite{AS65}. For large $\zeta$ the asymptotic forms of these integrals are \begin{equation} J^e_{\ell,\perp}(\zeta)\simeq (-1)^{\ell+1}\frac{e^{2i\zeta}}{2\zeta} \, , \quad J^m_{\ell,\perp}(\zeta)\simeq (-1)^{\ell}\frac{e^{2i\zeta}}{2\zeta} \, , \label{4.7} \end{equation} so that (\ref{4.4}) becomes \begin{equation} \langle\Gamma_{c,\perp}\rangle\simeq \frac{3\pi n c^3}{8\omega_a^3\zeta_a}\, \Gamma_{0,\perp}\, {\rm Im} \sum_{\ell=1}^{\infty} i^\ell \ell(\ell+1) \left(B^e_{\ell} - B^m_{\ell}\right)\, e^{2i\zeta_a}\, , \label{4.8} \end{equation} which falls off proportionally to $1/\zeta_a$. Substituting the leading terms of $B^e_1$, $B^e_2$ and $B^m_1$ for small values of $q$ and $\varepsilon-1$ one recovers a result found before \cite{SvW10}. 
Similar expressions may be obtained for the correction to the decay rate of an excited atom with a dipole moment parallel to the $z$-axis. Instead of (\ref{4.2}) one gets a formula with the $zz$-component of $\mbox{\sffamily\bfseries{G}}_c$ . Upon carrying out the integrals one arrives at the analogue of (\ref{4.4}), with the integrals \begin{eqnarray} && J^e_{\ell,\parallel}(\zeta)=2\ell(\ell+1)\int_{\zeta}^\infty dt\, \left(-\frac{2\zeta^3}{3t^3}+\frac{2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2\nonumber\\ &&+\int_{\zeta}^\infty dt\, \left(\frac{2\zeta^3}{3t^3}-\frac{2\zeta}{t}+\frac{4}{3}\right) \left[ \frac{d}{dt}\left[th^{(1)}_\ell(t)\right]\right]^2 \label{4.9} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\parallel}(\zeta)=\int_{\zeta}^\infty dt\, \left(\frac{2\zeta^3}{3t}-2\zeta t+\frac{4t^2}{3}\right) \left[ h^{(1)}_\ell(t)\right]^2 \, . \label{4.10} \end{eqnarray} For large $\zeta$ their asymptotic forms are \begin{equation} J^e_{\ell,\parallel}(\zeta)\simeq (-1)^{\ell+1}\frac{ie^{2i\zeta}}{2\zeta^2} \, , \quad J^m_{\ell,\parallel}(\zeta)\simeq (-1)^{\ell}\frac{ie^{2i\zeta}}{2\zeta^2} \, , \label{4.11} \end{equation} so that the decay rate for large $\zeta_a$ becomes \begin{eqnarray} &&\langle\Gamma_{c,\parallel}\rangle\simeq \frac{3\pi n c^3}{8\omega_a^3\zeta_a^2}\, \Gamma_{0,\parallel}\, {\rm Im} \sum_{\ell=1}^{\infty} i^{\ell+1} \ell(\ell+1) \left(B^e_{\ell} - B^m_{\ell}\right)\, e^{2i\zeta_a}\, .\nonumber\\ &&\mbox{} \label{4.12} \end{eqnarray} In contrast to (\ref{4.8}) the right-hand side is proportional to the inverse square of $\zeta_a$. For general values of $\zeta_a$ we have to evaluate the integrals in (\ref{4.5})-(\ref{4.6}) and (\ref{4.9})-(\ref{4.10}), as will be done in the following section. 
\section{Evaluation of integrals} \setcounter{equation}{0} The integrals $J^p_{\ell,\perp}$ and $J^p_{\ell,\parallel}$ are linear combinations of integrals of the general form \begin{eqnarray} && I_{\ell_1,\ell_2,n}(\zeta)=\int_1^\infty du\, u^{-n}\, h^{(1)}_{\ell_1}(\zeta u)\, h^{(1)}_{\ell_2}(\zeta u)\, , \label{5.1} \end{eqnarray} which is symmetric in $\ell_1,\ell_2$. In fact, upon inspecting (\ref{4.5})-(\ref{4.6}) and (\ref{4.9})-(\ref{4.10}) we find that explicit expressions are needed for the integrals $I_{\ell,\ell,n}(\zeta)$ with $n=-2, -1, 0, 1, 3$ and for $I_{\ell,\ell-1,n}(\zeta)$ for $n=-1,0,2$. With the use of standard identities \cite{AS65} for spherical Hankel functions and by means of a partial integration we may derive several relations connecting these integrals for different values of the parameters: \begin{eqnarray} && I_{\ell_1-1,\ell_2,n}(\zeta)+I_{\ell_1+1,\ell_2,n}(\zeta)= \frac{2\ell_1+1}{\zeta}\, I_{\ell_1,\ell_2,n+1}(\zeta)\, , \label{5.2}\\ &&(n-\ell_1-\ell_2)\,I_{\ell_1-1,\ell_2,n}(\zeta)+ (n+\ell_1-\ell_2+1)\, I_{\ell_1+1,\ell_2,n}(\zeta)\nonumber\\ &&+(2\ell_1+1)\, I_{\ell_1,\ell_2+1,n}(\zeta) =\frac{2\ell_1+1}{\zeta}\,h^{(1)}_{\ell_1}(\zeta)\, h^{(1)}_{\ell_2}(\zeta)\, . \label{5.3} \end{eqnarray} In order to obtain explicit expressions for $I_{\ell_1,\ell_2,n}$ with $n=0,1,3$ we start from a result \cite{AS65} that is valid for $n=0$ and $\ell_1\neq\ell_2$: \begin{eqnarray} &&(\ell_1+\ell_2+1)I_{\ell_1,\ell_2,0}(\zeta)=\nonumber\\ &&=\frac{\zeta}{\ell_1-\ell_2} \left( h^{(1)}_{\ell_1}h^{(1)}_{\ell_2-1}-h^{(1)}_{\ell_1-1}h^{(1)}_{\ell_2}\right) +h^{(1)}_{\ell_1}h^{(1)}_{\ell_2} \, , \label{5.4} \end{eqnarray} as may be checked by differentiation. We omit the argument $\zeta$ of the spherical Hankel functions from now on. To obtain the corresponding expression for $\ell_1=\ell_2$ we put $\ell_1=\ell+1$, $\ell_2=\ell$ and $n=0$ in (\ref{5.3}) and use (\ref{5.4}) in the second term. 
In this way we obtain a recursion relation connecting $I_{\ell,\ell,0}$ for consecutive values of $\ell$. Solving this relation by employing the identity $I_{0,0,0}(\zeta)=-(2i/\zeta)\, E_1(-2i\zeta)+[h^{(1)}_0]^2$ (with $E_1$ the exponential integral \cite{AS65}) as an initial condition, we find \begin{eqnarray} &&I_{\ell,\ell,0}(\zeta)=-\frac{2i}{(2\ell+1)\zeta}E_1(-2i\zeta) \nonumber\\ &&+\frac{2}{2\ell+1}\sum_{k=0}^{\ell}\left[h_k^{(1)}\right]^2- \frac{1}{2\ell+1}\left[h_\ell^{(1)}\right]^2 \label{5.5} \end{eqnarray} for all $\ell\geq 0$. The exponential integral of purely imaginary argument can be expressed in terms of sine and cosine integrals as $E_1(-2i\zeta)=-{\rm Ci}(2\zeta)-i{\rm Si}(2\zeta)+i\pi/2$. With the help of the identities (\ref{5.2}), (\ref{5.4}) and the recursion relations for the spherical Hankel functions one derives expressions for $I_{\ell,\ell,1}$ (with $\ell\geq 1$) and $I_{\ell,\ell,3}$ (with $\ell\geq 2$), in the form of linear combinations of products of spherical Hankel functions: \begin{eqnarray} &&I_{\ell,\ell,1}(\zeta)=\left[-\frac{\zeta^2}{2\ell(\ell+1)}+\frac{1}{2(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&-\frac{\zeta^2}{2\ell(\ell+1)}\left[h_{\ell-1}^{(1)}\right]^2 +\frac{\zeta}{\ell+1}h_\ell^{(1)}h_{\ell-1}^{(1)} \label{5.6} \end{eqnarray} for $\ell\geq 1$, and \begin{eqnarray} &&I_{\ell,\ell,3}(\zeta)=\left[-\frac{\zeta^4}{3(\ell-1)\ell(\ell+1)(\ell+2)} -\frac{\zeta^2}{6(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.+\frac{1}{2(\ell+2)}\right] \left[h_\ell^{(1)}\right]^2 +\left[-\frac{\zeta^4}{3(\ell-1)\ell(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.-\frac{\zeta^2}{6(\ell-1)(\ell+2)}\right]\left[h_{\ell-1}^{(1)}\right]^2 +\left[\frac{2\zeta^3}{3(\ell-1)(\ell+1)(\ell+2)}\right.\nonumber\\ &&\left.+\frac{\zeta}{3(\ell+2)}\right]h_\ell^{(1)}h_{\ell-1}^{(1)} \label{5.7} \end{eqnarray} for $\ell\geq 2$. 
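As a sanity check on the initial condition $I_{0,0,0}(\zeta)=-(2i/\zeta)\,E_1(-2i\zeta)+[h^{(1)}_0]^2$ used above, one may compare it with a direct numerical quadrature of (\ref{5.1}). The short script below is our own illustration (helper names are ours, not from the derivation); a small positive imaginary part of $\zeta$ plays the role of the $+i0$ prescription and makes the integrand decay exponentially.

```python
# Numerical spot check (illustration only) of the identity
#   I_{0,0,0}(zeta) = -(2i/zeta) E_1(-2i zeta) + [h_0^{(1)}(zeta)]^2,
# with h_0^{(1)}(z) = -i exp(iz)/z. A small Im(zeta) > 0 implements the
# +i0 prescription, so the integral converges absolutely.
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def h0(z):
    """Spherical Hankel function of the first kind, order zero."""
    return -1j * np.exp(1j * z) / z

def I000_quad(zeta, cutoff=150.0):
    """I_{0,0,0}(zeta) = int_1^infty [h_0^{(1)}(zeta u)]^2 du, by quadrature."""
    f = lambda u: h0(zeta * u) ** 2
    re, _ = quad(lambda u: f(u).real, 1.0, cutoff, limit=2000)
    im, _ = quad(lambda u: f(u).imag, 1.0, cutoff, limit=2000)
    return re + 1j * im

def I000_closed(zeta):
    """Closed form in terms of the exponential integral E_1."""
    return -2j / zeta * exp1(-2j * zeta) + h0(zeta) ** 2

zeta = 2.0 + 0.2j
print(abs(I000_quad(zeta) - I000_closed(zeta)))  # agrees to quadrature accuracy
```

The same strategy applies to the other initial conditions quoted in this section.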
It turns out that these formulas cannot be used for $I_{0,0,1}$ and $I_{1,1,3}$, since the expressions diverge in these cases. However, these special cases can be obtained straightforwardly by connecting them to the exponential integral of the same argument as in (\ref{5.5}). Expressions for $I_{\ell,\ell,n}$ with $n=-2$ follow by choosing $\ell_1=\ell+1$, $\ell_2=\ell$ and $n=-2$ in (\ref{5.3}). The second term on the left-hand side drops out for these values of the parameters. As a result, a simple recurrence relation for $I_{\ell,\ell,-2}$ is found, which may be solved for all $\ell\geq 0$ by employing the identity $I_{0,0,-2}(\zeta)=-ie^{2i\zeta}/(2\zeta^3)$ as a starting point. One gets for $\ell\geq 0$: \begin{eqnarray} &&I_{\ell,\ell,-2}(\zeta)=-\frac{1}{2} \left[h_\ell^{(1)}\right]^2- \frac{1}{2}\left[h_{\ell+1}^{(1)}\right]^2+\frac{2\ell+1}{2\zeta} h_\ell^{(1)}h_{\ell+1}^{(1)} \, . \label{5.8} \end{eqnarray} Furthermore, by choosing the parameters in (\ref{5.3}) as $n=-2$ and $\ell_1=\ell_2$ equal to either $\ell$ or $\ell+1$, one gets two identities, which may be combined with (\ref{5.2}) so as to obtain a recursion relation for $I_{\ell,\ell,-1}$. Solving that relation with the initial condition $I_{0,0,-1}(\zeta)=-\zeta^{-2}E_1(-2i\zeta)$, we find for all $\ell\geq 0$: \begin{eqnarray} &&I_{\ell,\ell,-1}(\zeta)=-\zeta^{-2}E_1(-2i\zeta)+\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)} \left[h_k^{(1)}\right]^2 \nonumber\\ &&+\frac{1}{2} \left[h_0^{(1)}\right]^2-\frac{1}{2(\ell+1)} \left[h_{\ell}^{(1)}\right]^2\, . \label{5.9} \end{eqnarray} It should be noted that the sum drops out for $\ell=0$. Finally, we need expressions for $I_{\ell,\ell-1,n}$ for $n=-1$ and $n=2$. Once more we use the identity (\ref{5.3}), now for the choice $\ell_1=\ell_2=\ell$ and $n=-1$. 
It yields a recursion relation for $I_{\ell,\ell-1,-1}$ from which we get for $\ell\geq 1$: \begin{eqnarray} && I_{\ell,\ell-1,-1}(\zeta)=-i\zeta^{-2}E_1(-2i\zeta)+\zeta^{-1} \sum_{k=0}^{\ell-1} \left[h_k^{(1)}\right]^2\, . \label{5.10} \end{eqnarray} Turning to the case $n=2$, one derives a result for $I_{\ell,\ell-1,2}$ by repeated use of (\ref{5.2}) in combination with (\ref{5.4}). Once again linear combinations of products of two spherical Hankel functions are found, at least for $\ell\geq 2$: \begin{eqnarray} && I_{\ell,\ell-1,2}(\zeta)=\left[-\frac{\zeta^3}{3(\ell-1)\ell(\ell+1)}-\frac{\zeta}{6(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^3}{3(\ell-1)\ell(\ell+1)}-\frac{\zeta}{6(\ell-1)}\right]\left[h_{\ell-1}^{(1)}\right]^2 \nonumber\\ &&+\left[\frac{2\zeta^2}{3(\ell-1)(\ell+1)}+\frac{1}{3}\right]h_\ell^{(1)}h_{\ell-1}^{(1)}\, . \label{5.11} \end{eqnarray} For $\ell=1$ this expression is singular and cannot be used. From a direct evaluation of the integral for this special case one finds that an exponential integral shows up, as before. \section{Results} \setcounter{equation}{0} The explicit expressions for the basic integrals (\ref{5.1}) that we derived in the previous section can now be employed to determine the corrections to the decay rate. 
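These closed forms are easy to validate numerically. As an illustration (our own sketch, not part of the derivation; helper names are hypothetical), the following Python snippet compares (\ref{5.8}) with a direct quadrature of (\ref{5.1}), generating the spherical Hankel functions by upward recurrence and giving $\zeta$ a small positive imaginary part for the $+i0$ prescription.

```python
# Spot check (illustration only) of the closed form (5.8):
#   I_{l,l,-2} = -[h_l]^2/2 - [h_{l+1}]^2/2 + (2l+1)/(2 zeta) h_l h_{l+1},
# with all Hankel functions evaluated at zeta.
import numpy as np
from scipy.integrate import quad

def h1st(ell, z):
    """h_ell^{(1)}(z) by upward recurrence (stable for the Hankel function,
    which is the dominant solution of the recurrence)."""
    hm = -1j * np.exp(1j * z) / z              # h_0^{(1)}(z)
    h = -(np.exp(1j * z) / z) * (1 + 1j / z)   # h_1^{(1)}(z)
    if ell == 0:
        return hm
    for n in range(1, ell):
        hm, h = h, (2 * n + 1) / z * h - hm    # h_{n+1} = (2n+1)/z h_n - h_{n-1}
    return h

def I_quad(ell, zeta, cutoff=150.0):
    """I_{l,l,-2}(zeta) = int_1^infty u^2 [h_l^{(1)}(zeta u)]^2 du, by quadrature."""
    f = lambda u: u**2 * h1st(ell, zeta * u) ** 2
    re, _ = quad(lambda u: f(u).real, 1.0, cutoff, limit=2000)
    im, _ = quad(lambda u: f(u).imag, 1.0, cutoff, limit=2000)
    return re + 1j * im

def I_closed(ell, zeta):
    """Right-hand side of (5.8)."""
    hl, hl1 = h1st(ell, zeta), h1st(ell + 1, zeta)
    return -0.5 * hl**2 - 0.5 * hl1**2 + (2 * ell + 1) / (2 * zeta) * hl * hl1

zeta = 3.0 + 0.2j
for ell in (0, 1, 2):
    print(ell, abs(I_quad(ell, zeta) - I_closed(ell, zeta)))  # differences are tiny
```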
For a perpendicular orientation of the excited atom the correction (\ref{4.4}) to the decay rate is governed by the integrals (\ref{4.5})-(\ref{4.6}) for which we get upon substitution of the relevant contributions: \begin{eqnarray} &&J^e_{\ell,\perp}(\zeta)=\zeta E_1(-2i\zeta)\nonumber\\ && +\left[-\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell^2-2\ell-3)}{6\ell(\ell+1)} +\frac{\zeta\ell}{2(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell^2+2\ell-3)}{6\ell(\ell+1)} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^4}{3(\ell+1)}+\frac{\zeta^2(2\ell^2+3\ell-2)}{3(\ell+1)} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)}\left[h_k^{(1)}\right]^2 -\frac{1}{2}\zeta^3\left[h_0^{(1)}\right]^2 \label{6.1} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\perp}(\zeta)=\zeta E_1(-2i\zeta)\nonumber\\ && +\left[\frac{\zeta^5}{6\ell(\ell+1)}-\frac{\zeta^3(2\ell+1)}{3(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^5}{6\ell(\ell+1)}-\frac{2\zeta^3}{3} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(2\ell+1)}{3} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{2k(k+1)}\left[h_k^{(1)}\right]^2 -\frac{1}{2}\zeta^3\left[h_0^{(1)}\right]^2\, , \label{6.2} \end{eqnarray} for all $\ell\geq 1$. 
Likewise, the results for a parallel atomic orientation read \begin{eqnarray} &&J^e_{\ell,\parallel}(\zeta)=2\zeta E_1(-2i\zeta)\nonumber\\ && +\left[\frac{\zeta^5}{3\ell(\ell+1)}-\frac{\zeta^3(4\ell^2+2\ell-3)}{3\ell(\ell+1)} +\frac{\zeta\ell}{\ell+1}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[\frac{\zeta^5}{3\ell(\ell+1)}-\frac{\zeta^3(4\ell^2+4\ell-3)}{3\ell(\ell+1)} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{2\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(4\ell^2+6\ell-1)}{3(\ell+1)} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{k(k+1)}\left[h_k^{(1)}\right]^2 -\zeta^3\left[h_0^{(1)}\right]^2 \label{6.3} \end{eqnarray} and \begin{eqnarray} &&J^m_{\ell,\parallel}(\zeta)=2\zeta E_1(-2i\zeta)\nonumber\\ && +\left[-\frac{\zeta^5}{3\ell(\ell+1)}-\frac{2\zeta^3(\ell-1)}{3(\ell+1)}\right] \left[h_\ell^{(1)}\right]^2\nonumber\\ &&+\left[-\frac{\zeta^5}{3\ell(\ell+1)}-\frac{2\zeta^3}{3} \right]\left[h_{\ell-1}^{(1)}\right]^2\nonumber\\ &&+\left[\frac{2\zeta^4}{3(\ell+1)}+\frac{2\zeta^2(2\ell+1)}{3} \right]h_\ell^{(1)}h_{\ell-1}^{(1)}\nonumber\\ &&-\zeta^3\sum_{k=1}^\ell \frac{2k+1}{k(k+1)}\left[h_k^{(1)}\right]^2 -\zeta^3\left[h_0^{(1)}\right]^2\, , \label{6.4} \end{eqnarray} again for all $\ell\geq 1$. As remarked above, the exponential integrals can be expressed in sine and cosine integrals. After insertion of the expressions (\ref{6.1})-(\ref{6.2}) and the multipole amplitudes (\ref{3.5}) into (\ref{4.4}), the average correction to the decay rate for the perpendicular configuration is found in terms of well-known special functions depending on $\zeta_a$, $q$ and $\varepsilon$. It may be plotted as a function of $\zeta_a$, for various choices of $q$ and $\varepsilon$. 
To facilitate comparison with our previous results \cite{ SvW10} we introduce the decay rate correction function $f_\perp(\zeta_a,q,\varepsilon)=-16 \langle\Gamma_{c,\perp}\rangle /(3nv_0\Gamma_{0,\perp})$ with $v_0=4\pi a^3/3$ the volume of the spheres. We shall first consider two special cases that we have treated in \cite{SvW10}: purely scattering spheres and purely absorbing spheres. In the former case we choose the dielectric constant to be real ($\varepsilon=1.5$) and the spherical radius to be finite on the scale of the wavelength ($q=0.5$). For the purely absorbing case with vanishingly small spheres ($q\rightarrow 0$), we take the dielectric constant to be complex with the value $\varepsilon=1.5+i\, 0.5$, as in \cite{SvW10}. Since for small $q$ the multipole amplitudes $B^e_\ell$ and $B^m_\ell$ behave as $q^{2\ell+1}$ and $q^{2\ell+3}$, respectively, only the electric dipole amplitude $B^e_1=iq^3(\varepsilon-1)/(\varepsilon+2)$ contributes to (\ref{4.4}) for the purely absorbing case. In Figs.~\ref{fig1} and \ref{fig2} the curves for $f_\perp(\zeta_a)$ are compared to their asymptotic counterparts for large $\zeta_a$ that follow from (\ref{4.8}). As can be seen from these figures, the asymptotic curves are quite adequate \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic form at large distances (dashed line), for a medium with scattering spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5$).} \label{fig1} \end{figure} \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with absorbing spheres (with $q=0$, $\varepsilon(\omega_a)=1.5+i \, 0.5$).} \label{fig2} \end{figure} already for $\zeta_a\approx 3$. In the asymptotic regime the results given in \cite{SvW10} are corroborated. (It should be noted that the curves given in \cite{SvW10} erroneously represent $-f_\perp$ instead of $f_\perp$.) 
For small distances the behaviour of the atomic decay rate in the two cases differs considerably. In fact, for the purely scattering case of Fig.~\ref{fig1} the decay rate attains a finite value when $\zeta_a$ approaches its minimum value $q$. On the other hand, for the purely absorbing case of Fig.~\ref{fig2} the decay rate correction function is governed by $J^e_{1,\perp}(\zeta)$, which according to (\ref{6.1}) has the asymptotic form $-1/(4\zeta^3)$ for small $\zeta$. Hence, the decay rate correction function diverges as $-(3/2)\, {\rm Im}[(\varepsilon-1)/(\varepsilon+2)] /\zeta_a^3$ for $\zeta_a\rightarrow 0$ in this case. For a more general situation in which both scattering and absorption take place, we choose $q=0.5$ and $\varepsilon=1.5+i\, 0.5$, with results presented in Fig.~\ref{fig3}. For large $\zeta_a$ the decay \begin{figure} \caption{Decay rate correction function $f_\perp(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with scattering and absorbing spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5+i \, 0.5$).} \label{fig3} \end{figure} rate falls off like $\zeta_a^{-1}$, in agreement with (\ref{4.8}). For small distances the decay rate diverges, as in Fig.~\ref{fig2}. In fact, as $\zeta_a \rightarrow q$, the rate is found to be proportional to $(\zeta_a-q)^{-1}$. This follows from (\ref{4.4}), since the series converges increasingly slowly when $\zeta_a$ approaches $q$. Indeed, for large $\ell$ the electric multipole amplitudes $B^e_\ell$ are given by $[i^\ell/(\ell^2[(2\ell-1)!!]^2)]\,[(\varepsilon-1)/(\varepsilon+1)]\, q^{2\ell+1}$, while the integral (\ref{6.1}) takes the form $-[(2\ell-1)!!]^2/(2\zeta_a^{2\ell+1})$. Hence, the electric multipole contribution to the $\ell$-th term in the series of (\ref{4.4}) is $-\half[(\varepsilon-1)/(\varepsilon+1)]\, (q/\zeta_a)^{2\ell+1}$. 
Since the magnetic multipole contributions turn out to be negligible for large $\ell$, the asymptotic form of (\ref{4.4}) for $\zeta_a$ tending to $q$ is proportional to $\sum_{\ell=1}^\infty (q/\zeta_a)^{2\ell+1}\simeq q/[2(\zeta_a-q)]$, so that the asymptotic form of $f_\perp$ reads \begin{equation} f_\perp\simeq -\frac{3}{4q^2(\zeta_a-q)}\, {\rm Im} \left[\frac{\varepsilon-1}{\varepsilon+1}\right] \label{6.5} \end{equation} for $\zeta_a\rightarrow q$. The physical mechanism for the divergence in the decay rates of Figs.~\ref{fig2} and \ref{fig3} for small $\zeta_a$ is the efficient non-radiative energy transfer from the atom to the absorbing spheres that dominates the atomic decay in the near zone. A similar divergent behaviour has been found in a classical treatment of the energy transfer between an excited molecule and a homogeneous absorbing medium filling a halfspace \cite{CPS74}. It should be noted that (\ref{6.5}) loses its meaning when $\zeta_a-q$ becomes so small that the approximations made in deriving it are no longer valid. In particular, perturbation theory in lowest order and the electric-dipole approximation are not adequate to describe the decay for very small distances. Furthermore, the notion of scatterers with a structureless surface gets lost as well in that case. For the parallel configuration we have likewise evaluated $f_\parallel(\zeta_a,q,\varepsilon)=-16 \langle\Gamma_{c,\parallel}\rangle /(3nv_0\Gamma_{0,\parallel})$. The result for the mixed case of both scattering and absorption is given in Fig.~\ref{fig4} for the same choice of the parameters $q$ and $\varepsilon$ as in Fig.~\ref{fig3}. 
\begin{figure} \caption{Decay rate correction function $f_\parallel(\zeta_a)$ (solid line) and its asymptotic forms at small and large distances (dashed lines), for a medium with scattering and absorbing spheres (with $q=0.5$, $\varepsilon(\omega_a)=1.5+i\, 0.5$).} \label{fig4} \end{figure} The figure clearly shows that for the parallel configuration the correction to the atomic decay rate tends to zero faster with increasing $\zeta_a$ than for the perpendicular configuration. This is in accordance with the findings of section 4, where it has been seen that in the asymptotic regime the correction to the atomic decay rate is proportional to the inverse distance in the perpendicular configuration, but to the inverse square of the distance in the parallel configuration. As before, the asymptotic expression is adequate from $\zeta_a\approx 3$ onwards. For small distances, with $\zeta_a\rightarrow q$, the asymptotic form of $f_\parallel$ is twice that of $f_\perp$, as follows by comparing the asymptotic forms of (\ref{6.1}) and (\ref{6.3}) for large $\ell$. In conclusion, we have shown how absorption and scattering processes in a medium may cooperate in modifying the emission rate of an excited atom in its vicinity. The explicit expressions for the decay rate that we have obtained permit a detailed analysis of the behaviour of the emission rate for arbitrary distances between the atom and the medium. As we have seen, the effects of absorption and of scattering are qualitatively different when the atom approaches the medium. \end{document}
\begin{document} \title[$C_1$ conjecture in mixed characteristic]{A pathological case of the $C_1$ conjecture in mixed characteristic} \author[I. Kaur]{Inder Kaur} \address{ Instituto Nacional de Matem\'{a}tica Pura e Aplicada, Estr. Dona Castorina, 110 - Jardim Bot\^{a}nico, Rio de Janeiro - RJ, 22460-320, Brazil } \email{[email protected]} \subjclass[2010]{Primary $14$D$20$, $14$J$60$, $14$G$05$, $14$M$22$ Secondary $14$L$24$, $14$D$22$} \keywords{Moduli spaces, Semistable sheaves, $C_1$ conjecture, Rational points, Rationally connected varieties} \date{\today} \begin{abstract} Let $K$ be a field of characteristic $0$. Fix integers $r,d$ coprime with $r\geq 2$. Let $X_K$ be a smooth, projective, geometrically connected curve of genus $g\geq 2$ defined over $K$. Assume there exists a line bundle $\mc{L}_K$ on $X_K$ of degree $d$. In this article we prove the existence of a stable locally free sheaf on $X_K$ with rank $r$ and determinant $\mc{L}_K$. This trivially proves the $C_1$ conjecture in mixed characteristic for the moduli space of stable locally free sheaves of fixed rank and determinant over a smooth, projective curve. \end{abstract} \maketitle \section{Introduction} A field $L$ is said to be $C_{1}$ if any hypersurface in $\mathbf{P}^{n}_{L}$ of degree $d \leq n$ has a rational point. The Lang-Manin-Koll\'ar conjecture states that a smooth, proper, separably rationally connected variety over a $C_1$ field has a rational point. Let $K$ be the fraction field of a Henselian discrete valuation ring with algebraically closed residue field denoted $k$. By {\cite[Theorem $14$]{L}}, $K$ is a $C_1$ field. Using \cite{JCT}, the conjecture has been understood in the case when $\mr{char}(K) = \mr{char}(k)$. However, little is known in the case of mixed characteristic i.e. $\mr{char}(K) \neq \mr{char}(k)$. 
In this note we prove the conjecture for the moduli space of stable locally free sheaves of fixed rank and determinant on a smooth, projective, geometrically connected curve defined over such a $C_1$ field. \begin{note}\label{notin7} Let $K$ be a field of characteristic $0$. Fix integers $r,d$ coprime with $r\geq 2$. Let $X_K$ be a smooth, projective, geometrically connected curve of genus $g\geq 2$ defined over $K$. Assume there exists a line bundle $\mc{L}_K$ on $X_K$ of degree $d$. \end{note} Let $M^s_{X_{K},\mc{L}_{K}}(r,d)$ be the moduli space of stable locally free sheaves of rank $r$ and determinant $\mc{L}_K$ (see Definition \ref{defimsfd} and Remark \ref{mklcorep}). Denote by $M^s_{X_{\ov{K}},\mc{L}_{\ov{K}}}(r,d)$ the moduli space of stable locally free sheaves of rank $r$ and determinant $\mc{L}_{\ov{K}}:= \mc{L}_K \otimes_{K} \ov{K}$ over the curve $X_{\ov{K}}:= X_K \times_K \msp(\ov{K})$. Since the functor $\mc{M}^{s}_{X_K,\mc{L}_{K}}(r,d)$ is universally corepresented by $M^{s}_{X_K,\mc{L}_{K}}(r,d)$, the moduli space $M^s_{X_{\ov{K}},\mc{L}_{\ov{K}}}(r,d)$ is isomorphic to $M^s_{X_{K},\mc{L}_{K}}(r,d) \times_{K} \msp(\ov{K})$. By \cite{SFV}, $M^s_{X_{\ov{K}},\mc{L}_{\ov{K}}}(r,d)$ is a unirational variety and therefore rationally connected. Hence the moduli space $M^s_{X_{K},\mc{L}_{K}}(r,d)$ is a rationally connected variety. Now suppose that $K$ is the fraction field of a Henselian discrete valuation ring with algebraically closed residue field. The $C_1$ conjecture then predicts that $M^s_{X_{K},\mc{L}_{K}}(r,d)$ has a $K$-rational point. In order to prove this, it suffices to show the existence of a stable locally free sheaf on $X_K$ of rank $r$ and determinant $\mc{L}_K$. The moduli of (semi)stable locally free sheaves of fixed rank and degree over a smooth, projective curve have been studied for decades and there is a plethora of results on the subject. However, for most of these results the curve is defined over an algebraically closed field. 
In fact when the field is not algebraically closed, there may not even exist invertible sheaves of certain degrees (see for example \cite{BI}, \cite{mes}). In this note we prove the following: \begin{thm}[Theorem \ref{evb}, Corollary \ref{c1ex}]\label{thmin3} Keep Notations \ref{notin7}. There exists a stable locally free sheaf on $X_{K}$ of rank $r$ and determinant $\mc{L}_{K}$. In particular, the moduli space of stable locally free sheaves over $X_{K}$ of rank $r$ and determinant $\mc{L}_K$ denoted $M_{X_K,\mc{L}_K}^{s}(r,d)$, has a $K$-rational point. \end{thm} This result holds in much greater generality than $C_1$ fields and is therefore of interest in its own right. We use standard techniques from algebraic geometry to prove this theorem. Since the result holds for any field of characteristic $0$, it does not throw any light on the proof of the $C_1$ conjecture in the general case. Indeed it illustrates that even though at first sight the variety $M_{X_K,\mc{L}_K}^{s}(r,d)$ appears to be a good candidate for testing the conjecture in mixed characteristic, it is in fact a pathological example. \emph{Acknowledgements}: This paper answers the PhD question given to me by my supervisor Prof. H. Esnault. \section{Main result} We prove Theorem \ref{thmin3} stated in the introduction and show how it can be applied to the $C_1$ conjecture. \begin{thm}\label{evb} Keep Notations \ref{notin7}. There exists a geometrically stable locally free sheaf on $X_{K}$ of rank $r$ and determinant $\mc{L}_{K}$. \end{thm} \begin{proof} By \cite[Proposition $8.6.1$]{po} there exists a semistable, locally free sheaf of rank $r$ and degree $d$ on $X_{\ov{K}}$. Since $\mr{Pic}^{0}(X_{\ov{K}})$ is an abelian variety and multiplication by $r$ is an isogeny, one can show that there exists a semistable, locally free sheaf $\mc{E}_{\ov{K}}$ on $X_{\ov{K}}$ of rank $r$ and determinant $\mc{L}_{\ov{K}}$, where $\mc{L}_{\ov{K}}=\mc{L}_K \otimes \mo_{X_{\ov{K}}}$ is the base change of $\mc{L}_K$. 
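In more detail, the isogeny argument runs as follows (a routine verification which we spell out for convenience): if $\mc{E}'$ is any semistable, locally free sheaf of rank $r$ and degree $d$ on $X_{\ov{K}}$, then $\mc{L}_{\ov{K}} \otimes \mr{det}(\mc{E}')^{-1}$ has degree $0$ and therefore lies in $\mr{Pic}^{0}(X_{\ov{K}})$. As multiplication by $r$ is surjective on this abelian variety, there is an invertible sheaf $N$ with $N^{\otimes r} \cong \mc{L}_{\ov{K}} \otimes \mr{det}(\mc{E}')^{-1}$. Twisting by an invertible sheaf shifts the slopes of all subsheaves by the same amount, so $\mc{E}_{\ov{K}}:=\mc{E}' \otimes N$ is again semistable, and \[\mr{det}(\mc{E}_{\ov{K}})=\mr{det}(\mc{E}') \otimes N^{\otimes r} \cong \mc{L}_{\ov{K}}.\]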
Furthermore, there exists an integer $b$ such that $\mc{E}_{\ov{K}} \otimes \mc{K}_{X_{\ov{K}}}^{\otimes b}$ is globally generated, where $\mc{K}_{X_{\ov{K}}}$ is the canonical bundle on $X_{\ov{K}}$. Since $X_{\ov{K}}$ is a curve, this sheaf is a quotient of $r+1$ copies of $\mo_{X_{\ov{K}}}$, i.e., we have the following surjective morphism: \[\bigoplus\limits_{i=1}^{r+1} \mo_{X_{\ov{K}}} \twoheadrightarrow \mc{E}_{\ov{K}} \otimes \mc{K}_{X_{\ov{K}}}^{\otimes b}.\] Since the determinant of $\mc{E}_{\ov{K}}$ is $\mc{L}_{\ov{K}}$, the kernel of this morphism is isomorphic to $\mc{L}_{\ov{K}}^\vee \otimes \mc{K}_{X_{\ov{K}}}^{-rb}$. In other words, $\mc{E}_{\ov{K}}$ is the cokernel of a morphism \[\phi:\mc{L}_{\ov{K}}^\vee \otimes \mc{K}_{X_{\ov{K}}}^{-(r+1)b} \hookrightarrow \bigoplus\limits_{i=1}^{r+1}\mc{K}_{X_{\ov{K}}}^{-b}.\] Since $\mc{K}_{X_{\ov{K}}} \cong \mc{K}_{X_K} \otimes \mo_{X_{\ov{K}}}$, the sheaf $\mc{E}_{\ov{K}}$ corresponds to a $\ov{K}$-point of the affine space \[U:=\mr{Hom}_{X_K}(\mc{L}_{{K}}^\vee \otimes \mc{K}_{X_{{K}}}^{-(r+1)b},\bigoplus\limits_{i=1}^{r+1}\mc{K}_{X_{{K}}}^{-b}).\] As local-freeness and semi-stability are open conditions, there exists an open subscheme $V$ of $U$ parameterizing those homomorphisms whose cokernel is a semistable, locally free sheaf of rank $r$; it is non-empty, since it contains the point corresponding to $\mc{E}_{\ov{K}}$. Over any field of characteristic $0$ (in particular, an infinite field), a nonempty Zariski open subset of an affine space has a rational point. Thus there exists a $K$-point of $V$ corresponding to a homomorphism such that the cokernel $\mc{F}_K$ is a locally free, semistable rank $r$ sheaf with determinant $\mc{L}_K$ on $X_K$. Since the degree of $\mc{L}_K$ is prime to $r$, $\mc{F}_K$ is also stable. This proves the theorem. \end{proof} Now we see an application of the above result. \begin{defi}\label{defimsfd} Keep Notations \ref{notin7}. Denote by $X_{T} := X_K \times_{\msp(K)} T$. 
We define a functor $\mc{M}_{X_K, \mc{L}_K}(r,d)$ as follows: \[\mc{M}_{X_K, \mc{L}_K}(r,d) : \mr{Sch}^{\circ}/K \rightarrow \mr{Sets}\] \noindent such that for a $K$-scheme $T$, \[ \mc{M}_{X_K, \mc{L}_K}(r,d)(T):= \left\{ \begin{array}{l} \mbox{ $S$-equivalence classes of } \mbox{ locally free sheaves } \mc{F} \mbox{ on } X_{T}\\ \mbox{ such that for every geometric point } t \in T, \mc{F}_{t} \mbox{ is a slope} \\ \mbox{ semistable sheaf of rank } r \mbox{ and degree } d \mbox{ on } X_t \mbox{ and for}\\ \mbox{some invertible sheaf } \mc{Q} \mbox{ on } T, \mr{det}(\mc{F}) \simeq \pi^{*}_{X_{K}} \mc{L}_{K}\otimes \pi^{*}_{T}{\mc{Q}} \end{array} \right \} / \sim \] \noindent where $\pi_{X_{K}}: X_{T} \rightarrow X_K$, $\pi_{T}: X_{T} \rightarrow T$ are the first and second projections respectively and $\mc{F} \sim \mc{F}'$ if and only if there exists an invertible sheaf $\mc{L}$ on $T$ such that $\mc{F} \simeq \mc{F}' \otimes \pi^{*}_{T}\mc{L}$. \noindent We denote by $\m^{s}_{X_K,\mc{L}_{K}}(r,d)$ the subfunctor for the stable sheaves. Since $(r,d)$ are coprime and $X_K$ is integral, slope semistable sheaves are stable. Hence $\m^{s}_{X_K,\mc{L}_{K}}(r,d)$ coincides with $\m_{X_K,\mc{L}_{K}}(r,d)$. \end{defi} \begin{rem}\label{mklcorep} Denote by $\m^{s}_{X_K}(r,d)$ the moduli functor of isomorphism classes of stable locally free sheaves of rank $r$ and degree $d$. By \cite[Theorem $0.2$]{LA1} this functor is universally corepresented by a projective $K$-scheme $M^s_{K}(r,d)$. Recall the Picard functor $\mc{P}ic_{X_K}$ and the natural transformation $\m^s_{X_K}(r,d) \rightarrow \mc{P}ic_{X_K}$ which is defined by taking the determinant of the locally free sheaves. This induces the determinant morphism $\mr{det} : M^s_{X_K}(r,d) \rightarrow \mr{Pic}(X_{K})$, where $\mr{Pic}(X_{K})$ is the Picard group scheme of $X_K$. 
Using the property of universal categorical quotients, one can show that $\mc{M}^{s}_{X_K,\mc{L}_{K}}(r,d)$ is universally corepresented by $\mr{det}^{-1}(\mc{L}_K)$, which we denote by $M^{s}_{X_K,\mc{L}_{K}}(r,d)$. For a complete proof see \cite[Proposition $2.3$]{ink3} (replacing $R$ by $K$). Since $r$ and $d$ are coprime in our current setting, $M^{s}_{X_K,\mc{L}_{K}}(r,d)$ is in fact a projective $K$-scheme. \end{rem} \begin{cor}\label{c1ex} Keep Notations \ref{notin7}. The moduli space $M_{X_K,\mc{L}_K}^{s}(r,d)$ has a $K$-rational point. \end{cor} \begin{proof} By Remark \ref{mklcorep}, $M_{X_K,\mc{L}_K}^{s}(r,d)$ corepresents the functor $\m^{s}_{X_K,\mc{L}_{K}}(r,d)$. By Theorem \ref{evb} there exists a stable locally free sheaf on $X_{K}$ of rank $r$ and determinant $\mc{L}_{K}$. Then by the definition of corepresentability, the moduli space $M_{X_K,\mc{L}_K}^{s}(r,d)$ has a $K$-rational point. This proves the corollary. \end{proof} As mentioned in the introduction, the variety $M^s_{X_K,\mc{L}_{K}}(r,d)$ is rationally connected. The above corollary trivially proves the $C_1$ conjecture in mixed characteristic for this variety. \begin{cor} Let $K$ be the fraction field of a Henselian discrete valuation ring with algebraically closed residue field $k$. Assume that the characteristic of $K$ is $0$ and that of $k$ is $p>0$. The $C_1$ conjecture holds for the variety $M^s_{X_K,\mc{L}_{K}}(r,d)$. \end{cor} \begin{proof} By Corollary \ref{c1ex} the variety $M^s_{X_K,\mc{L}_{K}}(r,d)$ has a $K$-rational point. \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Survival functions versus conditional aggregation-based survival functions on~discrete~space} \author{{Basarik Stanislav}\fnref{fn1}}\ead{[email protected]} \author{{Borzová Jana}\fnref{fn1}}\ead{[email protected]} \author{{Halčinová Lenka}\corref{cor1}\fnref{fn1}}\ead{[email protected]}\cortext[cor1]{[email protected]} \address[fn1]{Institute of Mathematics, P.~J. \v{S}af\'arik University in Ko\v sice, Jesenn\'a 5, 040 01 Ko\v{s}ice, Slovakia} \fntext[fn2]{Supported by the grants APVV-16-0337, VEGA 1/0657/22, bilateral call Slovak-Poland grant scheme No. SK-PL-18-0032 and grant scheme VVGS-PF-2021-1782. } \begin{abstract} In this paper we deal with conditional aggregation-based survival functions recently introduced by Boczek et al. (2020). The concept is worth studying because of its possible implementation in real-life situations as well as in mathematical theory. The aim of this paper is the comparison of this new notion with the standard survival function. We state sufficient and necessary conditions under which the generalized and the standard survival function coincide. The main result is the characterization of the family of conditional aggregation operators (on a discrete space) for which these functions coincide. \end{abstract} \begin{keyword} {aggregation, survival function, nonadditive measure, visualization, size } \MSC[2010] 28A12 \end{keyword} \end{frontmatter} \section{Introduction} We continue to study the~novel survival functions introduced in~\cite{BoczekHalcinovaHutnikKaluszka2020} as a~generalization of the size-based level measure developed for use in nonadditive analysis in~\cite{BorzovaHalcinovaSupina2018,Halcinova2017,HalcinovaHutnikKiselakSupina2019}. The~concept appeared initially in time-frequency analysis~\cite{DoThiele2015}.
As the main result, in Theorem~\ref{thm:characterization2} we show that the~generalized survival function is equal to the original notion (for any monotone measure and any input vector) only in a very particular case. The concept of the novel survival function is useful in many real-life situations and in pure theory as well. In fact, the~standard survival function (also known in the literature as the standard level measure~\cite{HalcinovaHutnikKiselakSupina2019}, strict level measure~\cite{BorzovaHalcinovaHutnik2020} or decumulative distribution function~\cite{grabisch2016set}) is the~crucial ingredient of many definitions in mathematical analysis. Many well-known integrals are based on the~survival function, e.g. the~Choquet integral, the~Sugeno integral, the~Shilkret integral, the~seminormed integral~\cite{BorzovaHalcinovaHutnik2020}, universal integrals~\cite{KlementMesiarPap2010}, etc. Also, the convergence of a~sequence of functions in measure is based on the~same concept. Hence a~reasonable generalization of~the~survival function leads to generalizations of all the mentioned concepts. For more on applications of the~generalized survival function, see~\cite{BoczekHalcinovaHutnikKaluszka2020,DoThiele2015}. \bigskip Due to the~number of factors needed in the~definition of the~generalized survival function, it is quite difficult to understand this concept. In order to understand it more deeply, in~the~following we shall focus on the graphical visualization of inputs, see~\cite{BorzovaHalcinovaSupina2021}. Moreover, the graphical representation will help us to formulate the basic results of this paper. In~the~whole paper, we restrict ourselves to discrete settings. We consider the finite basic set $$[n]:=\{1,2,\dots, n\}, \,\, n\geq1$$ and a~monotone measure~$\mu$ on $2^{[n]}$.
If $\mathbf{x}=(x_1,\dots, x_n)$ is a~nonnegative real-valued function on $[n]$, i.e., a~vector, then the \textit{survival function} (or standard survival function) of~the~vector $\mathbf{x}$ with respect to $\mu$, see~\cite{BoczekHalcinovaHutnikKaluszka2020,DuranteSempi2015}, is defined by $$\mu(\{\x>\alpha\}):=\mu\left(\{i\in [n]: x_i>\alpha\}\right), \quad \alpha\in[0,\infty).$$ For a~thorough exposition see the Preliminaries. To avoid a~too abstract setting in the following visual representations, let us consider the~input vector $\mathbf{x}=(2,3,4)$ and the~monotone measure $\mu$ on~$ 2^{[3]}$ defined in~Table~\ref{tabulka_0}. \par \begin{table} \renewcommand*{\arraystretch}{1.5} \begin{center} \begin{tabular}{|>{\centering\arraybackslash}m{1.25cm}|>{\centering\arraybackslash}m{1.5cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1cm}|>{\centering\arraybackslash}m{1.5cm}|} \hline $E$ & $\{1,2,3\}$ & $\{2,3\}$ & $\{1,3\}$ & $\{1,2\}$ & $\{3\}$ & $\{2\}$ & $\{1\}$ & $\emptyset$ \\ \hline $E^c$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu(E^c)$ & $0$ & $0.25$ & $0.25$ & $0.4$ & $0.75$ & $0.75$ & $0.75$ & $1$\\ \hline $\max_{i\in E}x_i$ & $4$ & $4$ & $4$ & $3$ & $4$ & $3$ & $2$ & $0$\\\hline $\sum_{i\in E}x_i$ & $9$ & $7$ & $6$ & $5$ & $4$ & $3$ & $2$ & $0$\\\hline \end{tabular}\caption{Sample measure~$\mu$ and two conditional aggregation operators for the vector $\mathbf{x}=(2,3,4)$}\label{tabulka_0} \end{center} \end{table} \paragraph{\bf{The survival function's visual representation}} We begin with a~nonstandard representation of the standard survival function, as a~stepping stone to its~generalization.
Before, let us introduce the following equivalent representation of the survival function: \begin{align} \begin{split}\label{prepis2} \mu(\{\x>\alpha\})=\mu([n]\setminus \{i\in [n]: x_i\leq \alpha\})&=\min\big\{\mu (E^c):(\forall i\in E)\,\, x_i\leq \alpha,\, E\in 2^{[n]}\big\}\\&=\min\big\{\mu (E^c): \max_{i\in E} x_i\leq \alpha,\, E\in 2^{[n]}\big\}, \end{split} \end{align} where $E^c=[n]\setminus E$, see motivation problem 1 in~\cite{BoczekHalcinovaHutnikKaluszka2020}. Let us start the visualization with the inputs from Table~\ref{tabulka_0}. \begin{figure} \caption{The survival function visualization for $\mathbf{x}=(2,3,4)$}\label{MLM_vizualizacia} \end{figure} Let us depict all maximal values of~$\mathbf{x}$ on $E$, for each set $E\in2^{[3]}$, on the~lower axis in decreasing order, see the left image of Figure~\ref{MLM_vizualizacia}, and the corresponding values of the monotone measure of the complement, i.e. $\mu(E^c)$, on the~upper axis. In the left image of Figure~\ref{MLM_vizualizacia}, a~number on the lower axis is linked with a~number on the~upper one via a~straight line whenever they correspond to the~same set, i.e., $a$ is linked with $b$ if there is $E\in2^{[3]}$ such that $$a=\max\limits_{i\in E}x_i\hspace{0.5cm}\text{ and }\hspace{0.5cm} b=\mu(E^c).$$ Finally, the~value $\mu(\{\x>\alpha\})$ at some $\alpha\in[0,\infty)$ can be read from the left image of Figure~\ref{MLM_vizualizacia} as the~minimal value on the~upper axis which is linked to a~value on the~lower axis smaller than or equal to $\alpha$ (i.e., a~value lying to the right of $\alpha$). Thus, considering the~illustrative example in the left image of Figure~\ref{MLM_vizualizacia}, the~value of the survival function at $2{.}5$ is $0{.}75$. Indeed, there are just two values to the right of $2{.}5$, namely $2$ and $0$. These are linked to $0{.}75$ and $1$, respectively, and $0{.}75$ is the smaller one. The graph of the survival function is in the right image of Figure~\ref{MLM_vizualizacia}.
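The reading-off procedure described above is easy to check mechanically. The following sketch (not part of the cited papers; the measure $\mu$ from Table~\ref{tabulka_0} is encoded as a Python dictionary) compares the direct definition of the survival function with the representation~\eqref{prepis2}:

```python
from itertools import combinations

n = 3
x = {1: 2, 2: 3, 3: 4}                       # the input vector x = (2, 3, 4)
# monotone measure mu from Table 1, keyed by frozenset
mu = {frozenset(): 0.0, frozenset({1}): 0.25, frozenset({2}): 0.25,
      frozenset({3}): 0.4, frozenset({1, 2}): 0.75, frozenset({1, 3}): 0.75,
      frozenset({2, 3}): 0.75, frozenset({1, 2, 3}): 1.0}
full = frozenset(range(1, n + 1))
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(full, r)]

def survival(alpha):
    # direct definition: mu({i in [n] : x_i > alpha})
    return mu[frozenset(i for i in x if x[i] > alpha)]

def survival_min(alpha):
    # representation (2): min{ mu(E^c) : max_{i in E} x_i <= alpha, E in 2^[n] }
    return min(mu[full - E] for E in subsets
               if max((x[i] for i in E), default=0) <= alpha)

print(survival(2.5), survival_min(2.5))      # both equal 0.75, as in Figure 1
```

Both computations agree on every threshold, which is exactly the content of~\eqref{prepis2}.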
\paragraph{\bf{The generalized survival function's visual representation}} In the~modification of the survival function, the~previously described computational procedure stays the same. However, we allow the use of any conditional aggregation operator, not just the maximum operator. The~standard example of conditional aggregation is the sum of the components of~$\mathbf{x}$, see the last line in~Table~\ref{tabulka_0} and the corresponding visualization in Figure~\ref{SLM_vizualizacia}. Applying the~described computational procedure we obtain the~sum-based survival function of the vector $\mathbf{x}$, i.e., the~generalized survival function of the vector $\mathbf{x}$ studied in~\cite{BoczekHalcinovaHutnikKaluszka2020,BorzovaHalcinovaSupina2018,Halcinova2017,HalcinovaHutnikKiselakSupina2019}. The formula linked to this procedure is the following: \begin{eqnarray*} \mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\min\left\{\mu(E^c): \aAi[E]{\mathbf{x}}{\mathrm{sum}}\leq\alpha,\, E\in\mathcal{E}\right\}\end{eqnarray*} \noindent with $\aAi[E]{\mathbf{x}}{\mathrm{sum}}=\sum\limits_{i\in E}x_i$ and $\{\emptyset\}\subseteq\,\mathcal{E}\subseteq 2^{[n]}$ (in the illustrative example $\mathcal{E}=2^{[3]}$). The~corresponding graph is shown in Figure~\ref{SLM_vizualizacia}. Considering a discrete space, the~computation of the~generalized survival function studied in~\cite{BoczekHalcinovaHutnikKaluszka2020,BorzovaHalcinovaSupina2018,Halcinova2017,HalcinovaHutnikKiselakSupina2019} may always be represented via diagrams similar to those in~Figures~\ref{MLM_vizualizacia} and~\ref{SLM_vizualizacia}. \bigskip \begin{figure} \caption{Generalized survival function visualization for $\mathbf{x}=(2,3,4)$}\label{SLM_vizualizacia} \end{figure} Besides providing a better understanding of survival functions, the visual representation may help us to answer the problem of their indistinguishability.
With the introduction of the novel survival function, a natural question arises: When does the generalized survival function coincide with the survival function? The motivation for answering this question is not only to know the relationship between the mentioned concepts for given inputs; it will also help us to compare the corresponding integrals based on them, see~\cite[Definition 5.1, Definition 5.4]{BoczekHalcinovaHutnikKaluszka2020}. In the literature, several families of conditional aggregation operators are known, together with collections $\mathcal{E}$, for which the generalized survival function equals the survival function. In the following we list them: \begin{itemize} \item (cf.~\cite[Corollary 4.15]{HalcinovaHutnikKiselakSupina2019}) $\cA=\cA^{\rm{size}}$ with size~$\mathsf{s}$ being the weighted sum\footnote{ ${\mathsf{s}}_{{\#},p}(\mathbf{x})(E)= \left(\frac{1}{{\#}(E)}\cdot\sum\limits_{i\in E}x_i^p\right)^{\frac{1}{p}} $ \text{for}~$E\neq\emptyset$, ${\mathsf{s}}_{{\#},p}(\mathbf{x})(\emptyset)=0$ and $p>0$.}, $\mathcal{D}$ containing all singletons of~$[n]$ and $\mathcal{E}=2^{[n]}$; \item (cf.~\cite[Example 4.2]{BoczekHalcinovaHutnikKaluszka2020} or~\cite[Section 5]{HalcinovaHutnikKiselakSupina2019}) $\cA=\cA^{\rm{max}}$ with $\mathcal{E}=2^{[n]}$; \item (cf.~\cite[Proposition 4.6]{BoczekHalcinovaHutnikKaluszka2020}) $\cA=\cA^{\mu-\mathrm{ess}}$ with $\mathcal{E}=2^{[n]}$. \end{itemize} Although the first two items appear to be different, in fact, under the above conditions they are equal: $\cA^{\rm{size}}=\cA^{\rm{max}}$. The settings of the above-mentioned examples lead to the survival function regardless of the choice of the monotone measure $\mu$. However, the identity between the generalized survival function and the survival function may happen also for other families of conditional aggregation operators (a FCA for short), but with specific monotone measures, e.g.
$\cA^{\mathrm{sum}}$ with the weakest monotone measure\footnote{$\mu_*\colon 2^{[n]}\to[0,\infty)$ given by $$\mu_*(E)=\begin{cases}\mu([n]), & E=[n], \\ 0, & \textrm{otherwise}.\end{cases}$$} reduces to the survival function for any input vector $\mathbf{x}\in[0,\infty)^{[n]}$ and $\mathcal{E}=2^{[n]}$. In this paper we shall treat the following problems: \noindent\textbf{Problem 1:} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu$ be a monotone measure on $2^{[n]}$, and $\cA$ be a FCA. What are sufficient and necessary conditions on $\mathbf{x}$, $\mu$ and $\cA$ for $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ to hold? \bigskip \noindent\textbf{Problem 2:} Let $\mathbf{x}\in[0,\infty)^{[n]}$, and $\cA$ be a FCA. What are sufficient and necessary conditions on $\mathbf{x}$ and $\cA$ for $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ to hold for any monotone measure $\mu$? \bigskip \noindent\textbf{Problem 3:} Let $\cA$ be a FCA. What are sufficient and necessary conditions on $\cA$ for $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ to hold for any monotone measure $\mu$ and any $\mathbf{x}\in[0,\infty)^{[n]}$? The~paper is organized as follows. We continue with a preliminary section containing the needed definitions and notation. In Section~3 we solve Problem 1, see e.g. Corollary~\ref{LM_SLM_coincide}, Corollary~\ref{corollary_1}, Remark~\ref{remark_i}, Proposition~\ref{summar} and Theorem~\ref{nutna_postacujuca_2}. In Section~4 we provide a quite surprising result, see Theorem~\ref{thm:characterization2}, that characterizes the family of conditional aggregation operators (in the discrete setting) for which the~generalized survival function coincides with the~standard survival function. Thus we answer Problem 3. In Section~4 we also treat Problem~2, see Theorem~\ref{thm:characterization} and Theorem~\ref{thm:characterization_mon}. Many of our results are supported by appropriate examples.
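As a quick sanity check of the $\mu_*$-example above, one can verify numerically that the sum-based generalized survival function computed with the weakest monotone measure coincides with the standard survival function. The test vector below is an arbitrary illustrative choice, and we take $\mu([n])=1$:

```python
from itertools import combinations

x = {1: 1.5, 2: 0.5, 3: 2.0, 4: 0.5}     # arbitrary illustrative test vector
full = frozenset(x)
subsets = [frozenset(c) for r in range(len(x) + 1) for c in combinations(full, r)]

def mu_star(E):
    # the weakest monotone measure: mu_*([n]) = 1 (taking mu([n]) = 1), else 0
    return 1.0 if E == full else 0.0

def survival(alpha):
    # standard survival function of x with respect to mu_*
    return mu_star(frozenset(i for i in x if x[i] > alpha))

def gsf_sum(alpha):
    # sum-based generalized survival function with the collection 2^[n]
    return min(mu_star(full - E) for E in subsets
               if sum(x[i] for i in E) <= alpha)

# with mu_* the two notions coincide for every threshold alpha
assert all(gsf_sum(a) == survival(a) for a in [0, 0.25, 0.5, 0.75, 1, 2, 5])
```

Both functions jump from $\mu([n])$ to $0$ at the smallest component of $\mathbf{x}$, which is why the identity holds for any input vector.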
\section{Background and preliminaries} In order to be as self-contained as possible, we recall in this section the necessary definitions and all basic notation. In the whole paper, we restrict ourselves to discrete settings. As we have already mentioned, we shall consider a finite set $$X=[n]:=\{1,2,\dots, n\},\,\, n\geq 1.$$ We shall denote by $2^{[n]}$ the power set of $[n]$. A~{\it monotone} or \textit{nonadditive measure} on $2^{[n]}$ is a~nondecreasing set function $\mu\colon 2^{[n]}\to {{[0,\infty)}},$ i.e., $\mu(E)\le\mu(F)$ whenever $E\subseteq F$, with $\mu(\emptyset)=0.$ Moreover, we shall suppose $\mu([n])>0$. We shall denote the set of monotone measures on $2^{[n]}$ by $\mathbf{M}$. A monotone measure satisfying the equality $\mu([n])=1$ will be called a \textit{normalized monotone measure} (also known as a capacity in~\cite{Weber1986}). In this paper we shall always work with monotone measures defined on $2^{[n]}$, although in several places the domain of $\mu$ could be smaller. Also, we shall need special properties of $\mu$ on a system $ \mathcal{S}\subseteq 2^{[n]}$. A monotone measure $\mu\in\mathbf{M}$ with the property $\mu(E)\neq\mu(F)$ for any $E,F\in \mathcal{S}\subseteq 2^{[n]}$, $E\neq F$, will be called a \textit{strictly monotone measure} on $\mathcal{S}$. The counting measure will be denoted by ${\#}$. Further, we put $\max\emptyset=0$ and $\sum_{i\in\emptyset}x_i=0$. We shall work with nonnegative real-valued vectors; we shall use the notation $\mathbf{x}=(x_1,\dots,x_n)$, $x_i\in [0,\infty)$, $i=1,2,\dots, n$. The set $[0,\infty)^{[n]}$ is the family of all nonnegative real-valued functions on $[n]$, i.e. vectors. For any $\mathbf{x}=(x_1,\dots,x_n)\in[0,\infty)^{[n]}$ we denote by $(\cdot)$ a~permutation $(\cdot)\colon [n]\to[n]$ such that $x_{(1)}\leq x_{(2)}\leq\dots \leq x_{(n)}$ and $x_{(0)}=0$, $x_{(n+1)}=\infty$ by convention.
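The nondecreasing rearrangement $(\cdot)$ and the sets $E_{(i)}=\{(i),\dots,(n)\}$ used below can be made concrete in a few lines. The helpers are hypothetical, not taken from the cited papers:

```python
def rearrangement(x):
    # indices (1), ..., (n) ordered so that x_(1) <= ... <= x_(n)
    return sorted(range(1, len(x) + 1), key=lambda i: x[i - 1])

def upper_sets(x):
    # E_(i) = {(i), ..., (n)} for i = 1, ..., n, plus E_(n+1) = empty set
    perm = rearrangement(x)
    return [set(perm[i:]) for i in range(len(x))] + [set()]

x = (3, 2, 3, 1)
print(rearrangement(x))   # a valid permutation; ties may be ordered differently
print(upper_sets(x)[0])   # E_(1) = {1, 2, 3, 4}
```

Note that when $\mathbf{x}$ has ties, several permutations qualify; any of them may be returned.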
Let us remark that the permutation $(\cdot)$ need not be unique (this happens if there are some ties in the sample $(x_1,\dots, x_n)$, see~\cite{ChenMesiarLiStupnanova2017}). For a~fixed input vector $\mathbf{x}$ and a~fixed permutation $(\cdot)$ we shall denote by $\Ai{i}$ the set of the form $\Ai{i}=\{(i),\dots,(n)\}$ for any $i\in [n]$ with the convention~$\Ai{n+1}=\emptyset$. By $\mathbf{1}_E$ we shall denote the indicator function of a set $E\subseteq Y$, $Y\subseteq [0,\infty)$, i.e., $\mathbf{1}_E(x)=1$ if $x\in E$ and $\mathbf{1}_E(x)=0$ if $x\notin E$. In particular, $\mathbf{1}_{\emptyset}(x)= 0$ for each $x\in Y$. We shall work with indicator functions with respect to two different sets: $Y=[n]$ when dealing with vectors (i.e. $\mathbf{1}_E$ is the characteristic vector of $E\subseteq [n]$ in $\{0,1\}^{{[n]}}$) and $Y= [0,\infty)$ when dealing with survival functions. In the following we list several definitions (adapted to discrete settings). Firstly, the concept of the \textit{conditional aggregation operator} is presented. Its crucial feature is that the validity of the properties is required only on the conditional set, not on the whole set. The inspiration for its introduction came from the conditional expectation, a fundamental notion of probability theory. Let us also remark that this operator generalizes the aggregation operator introduced earlier by Calvo et al. in~\cite[Definition 1]{CalvoKolesarovaKomornikovaMesiar2002} and it is the crucial ingredient in the definition of the generalized survival function. \begin{definition}\label{def: cao}\rm (cf.
\cite[Definition 3.1]{BoczekHalcinovaHutnikKaluszka2020}) A map $\aA[B]{\cdot}\colon [0,\infty)^{[n]}\to[0,\infty)$ is said to be a~\textit{conditional aggregation operator} with respect to a set $B\in 2^{[n]}\setminus \{\emptyset\}$ if it satisfies the following conditions: \begin{itemize} \item[i)] $\aA[B]{\mathbf{x}}\leq \aA[B]{\mathbf{y}}$ for any $\mathbf{x},\mathbf{y}\in[0,\infty)^{[n]}$ such that $x_i\leq y_i$ for any $i\in B$; \item[ii)] $\aA[B]{\mathbf{1}_{B^c}}=0.$ \end{itemize} \end{definition} Let us compare the settings of the previous definition with those of the original definition, see~\cite[Definition 3.1]{BoczekHalcinovaHutnikKaluszka2020}. We consider the greatest $\sigma$-algebra as the domain of $\mu$, in comparison with the original arbitrary $\sigma$-algebra $\Sigma$. Then all vectors are measurable and this assumption may be omitted from the definition. The measurability of each vector is a desired property, mainly from the application point of view. Because of the property $\aA[B]{\mathbf{x}}=\aA[B]{\mathbf{x}\mathbf{1}_B}$ for any $\mathbf{x}\in[0,\infty)^{[n]}$ with fixed $B\in 2^{[n]}\setminus \{\emptyset\}$, the value $\aA[B]{\mathbf{x}}$ can be interpreted as \say{an aggregated value of $\mathbf{x}$ on $B$}, see~\cite{BoczekHalcinovaHutnikKaluszka2020}. In the following we list several examples of conditional aggregation operators that we shall use in this paper. For further examples and some properties of conditional aggregation operators we recommend~\cite[Section~3]{BoczekHalcinovaHutnikKaluszka2020}. \begin{example}\label{pr aggr} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $B\in 2^{[n]}\setminus\{\emptyset\}$ and $m\in\mathbf{M}$.
\begin{enumerate}[i)] \item $\aAi[B]{\mathbf{x}}{m-\mathrm{ess}}=\mathrm{ess} \sup_m(\mathbf{x}\mathbf{1}_{B})$, where $\mathrm{ess} \sup_{m}(\mathbf{x})=\min\{\alpha\geq 0:\, \{\mathbf{x}>\alpha\}\in\mathcal{N}_{m}\}$.\footnote{A~set $N\in 2^{[n]}$ is said to be a~{\it null set} with respect to a monotone measure $m$ if $m(E\cup N)=m(E)$ for all $E\in 2^{[n]}.$ By $\mathcal{N}_m$ we denote the family of null sets with respect to $m.$} \item $\aA[B]{\mathbf{x}}=\mathrm{J}(\mathbf{x}\mathbf{1}_{B},m)$ (the multiplication of vectors is meant componentwise), where $\mathrm{J}$ is an integral defined in~\cite[Definition 2.2]{BoczekHovanaHutnikKaluszka2020b}. Namely,\begin{itemize} \item[a)]$\aAi[B]{\mathbf{x}}{\mathrm{Ch}_m}=\sum\limits_{i=1}^{n}x_{(i)}\left(m(\Ai{i}{\cap B})-m(\Ai{i+1}{\cap B})\right)$; \item[b)]$\aAi[B]{\mathbf{x}}{\mathrm{Sh}_m}=\max\limits_{i\in[n]}\left\{ x_{(i)}\cdot m(\Ai{i}{\cap B} )\right\}$; \item[c)]$\aAi[B]{\mathbf{x}}{\mathrm{Su}_m}=\max\limits_{i\in[n]}\left\{ \min\{x_{(i)}, m(\Ai{i}{\cap B})\} \right\}$. \end{itemize} \item $\aA[B]{\mathbf{x}}= \frac{\max_{i\in B}(x_i\cdot w_i)}{\max_{i\in B} z_i},$ where $\mathbf{w}\in[0,1]^{[n]}$ is a fixed weight vector and $\mathbf{z}\in(0,1]^{[n]}$ is a fixed vector such that $\max_{i\in [n]}z_i=1$. We note that for $\mathbf{w}=\mathbf{z}=\mathbf{1}_{[n]}$ we get $\aAi[B]{\mathbf{x}}{\mathrm{max}}=\max_{i\in B}x_i$. \item $\aAi[B]{\mathbf{x}}{p-\mathrm{mean}}=\left(\frac{1}{{\#}(B)}\cdot\sum\limits_{i\in B}(x_i)^{p}\right)^{\frac{1}{p}}$ with $p\in(0,\infty)$. For $p=1$ we get the arithmetic mean. \item $\aAi[B]{\mathbf{x}}{\mathrm{size}}=\max\limits_{D\in\mathcal{D}}\mathsf{s}(\mathbf{x}\mathbf{1}_{B})(D)$, with $\mathsf{s}$ being a size, see~\cite{BorzovaHalcinovaSupina2018, Halcinova2017, HalcinovaHutnikKiselakSupina2019}; this is the outer essential supremum of $\mathbf{x}$ over $B$ with respect to a size $\mathsf{s}$ and a collection $\mathcal{D}\subseteq2^{[n]}$.
In particular, for the sum as a size, i.e., $\mathsf{s}_{\mathrm{sum}}(\mathbf{x})(G)=\sum\limits_{i\in G}x_i$ for any $G\in2^{[n]}$, and for $\mathcal{D}$ such that there is a set $C\supseteq B$, $C\in\mathcal{D}$, we get $\aAi[B]{\mathbf{x}}{\mathrm{sum}}=\sum\limits_{i\in B}x_i$. \end{enumerate} \end{example} Observe that the empty set is not included in Definition~\ref{def: cao}. The reason for that is the fact that the empty set does not provide any additional information for aggregation. However, in order to have the concept of the generalized survival function correctly introduced, it is necessary to add the assumption $\aA[\emptyset]{\cdot}=0$, see~\cite[Section 4]{BoczekHalcinovaHutnikKaluszka2020}. From now on, we shall consider only such conditional aggregation operators. Let us remark that all mappings from Example~\ref{pr aggr} with the convention \enquote{$0/0=0$} satisfy this property. In the following we shall provide the definition of the generalized survival function, see~\cite[Definition 4.1.]{BoczekHalcinovaHutnikKaluszka2020}. Let us consider a collection $\mathcal{E}$, $\{\emptyset\}\subseteq\mathcal{E}\subseteq2^{[n]}$, and conditional aggregation operators on sets from $\mathcal{E}$ with $ \aA[\emptyset]{\cdot}=0$. The set of such aggregation operators we shall denote by $$\cA=\{\aA[E]{\cdot}: E\in \mathcal{E}\}\footnote{Since $\aA[E]{\cdot}\colon[0,\infty)^{[n]}\to [0,\infty)$, $\cA$ is a~family of operators parametrized by a~set from~$\mathcal{E}$.}$$ and we shall call it a \textit{family of conditional aggregation operators} (FCA for short).
For example, $\cA^{\mathrm{sum}}=\{\sA^{\mathrm{sum}}(\cdot|E):E\in2^{[n]}\}$, $\cA^{\mathrm{max}}=\{\sA^{\mathrm{max}}(\cdot|E):E\in\{\emptyset, \{1\}, \{2\}, \dots, \{n\}\}\}$, $\widehat{\cA}^{\mathrm{max}}=\{\sA^{\mathrm{max}}(\cdot|E):E\in\{\emptyset\}\}$ or $\cA=\{\sA(\cdot|E):E\in2^{[n]}\}$, $n\geq 2$, where $$\sA(\mathbf{x}|E)=\begin{cases} \sA^{\mathrm{max}}(\mathbf{x}|E), & E\in\{\{1\}, \{2\}, \dots, \{n\}\},\\ 0,&E=\emptyset,\\ \sA^{\mathrm{sum}}(\mathbf{x}|E), &\text{otherwise} \end{cases}$$ for any $\mathbf{x}\in[0,\infty)^{[n]}$. \begin{definition}\label{def: gsf}\rm(cf.~\cite[Definition 4.1.]{BoczekHalcinovaHutnikKaluszka2020}) Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\mathbf{M}$. The \textit{generalized survival function} with respect to $\cA$ is defined as \begin{eqnarray*} \mu_{\cA}(\mathbf{x},\alpha)=\min\left\{\mu(E^c): \aA[E]{\mathbf{x}}\leq\alpha,\, E\in\mathcal{E}\right\}\end{eqnarray*} for any $\alpha\in[0,\infty)$. \end{definition} The presented definition is correct. Indeed, for any $E\in\mathcal{E}$ it holds that $E^c\in2^{[n]}$ is a~measurable set. Moreover, the set $\{E\in\mathcal{E}: \aA[E]{\mathbf{x}}\leq\alpha\}$ is nonempty for all $\alpha\in[0,\infty)$, because $\aA[\emptyset]{\cdot}=0$ by convention and $\emptyset\in\mathcal{E}$. It is immediately seen that for $\mathcal{E}=2^{[n]}$ and $\cA^{\mathrm{max}}$ we get the standard survival function, compare with~\eqref{prepis2}. When necessary, we shall emphasize the collection $\mathcal{E}$ in the notation of the generalized survival function, i.e. we shall use $\cA^{\mathcal{E}}$. In several places in this paper we shall work with a FCA that is \textit{nondecreasing} w.r.t.\ sets, i.e. the map $E\mapsto\aA[E]{\cdot}$ is nondecreasing. Many FCAs satisfy this property, e.g.
$\cA^{m-\mathrm{ess}}=\{\sA^{m-\mathrm{ess}}(\cdot|E):E\in\mathcal{E}\}$, $\cA^{\mathrm{Ch}_m}=\{\sA^{\mathrm{Ch}_m}(\cdot|E):E\in\mathcal{E}\}$, $\cA^{\mathrm{Su}_m}=\{\sA^{\mathrm{Su}_m}(\cdot|E):E\in\mathcal{E}\}$, $\cA^{\mathrm{Sh}_m}=\{\sA^{\mathrm{Sh}_m}(\cdot|E):E\in\mathcal{E}\}$, $\cA^{\mathrm{max}}=\{\sA^{\mathrm{max}}(\cdot|E):E\in\mathcal{E}\}$, see Example~\ref{pr aggr} i), ii), iii). \section{Equality and inequalities of the generalized and standard survival function} In this section we shall treat Problem 1. We provide sufficient and necessary conditions on $\mathbf{x}$, $\mu$ and $\cA$ under which the generalized survival function and the survival function coincide. The key ingredient is the following formula for the standard survival function. In what follows we shall work with the expression of the survival function on a finite set in the form \begin{equation}\label{LM_form}\mu(\{\x>\alpha\})=\sum_{i=0}^{n-1}\mu\left(\Ai{i+1}\right)\cdot\mathbf{1}_{[x_{(i)},x_{(i+1)})}(\alpha)\end{equation} with the permutation $(\cdot)$ such that $0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots \leq x_{(n)}$ and $\Ai{i}=\{(i),\dots,(n)\}$ for $i\in[n]$. However, one can easily see that some summands in the formula~\eqref{LM_form} can be redundant. For example, for vectors with the property $x_{(i)}=x_{(i+1)}$ for some $i\in[n-1]\cup\{0\}$ we have $\mu\left(\Ai{i+1}\right)\cdot\mathbf{1}_{[x_{(i)},x_{(i+1)})}(\alpha)=0$ for any $\alpha\in[0,\infty)$, i.e., this summand does not change the values of the survival function and can be omitted. \bigskip Let us consider an arbitrary (fixed) input vector $\mathbf{x}$ together with a permutation $(\cdot)$ such that $0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots \leq x_{(n)}$.
Let us denote \begin{align}\label{pi}\Psi_{\mathbf{x}}:=\{i\in[n-1]\cup\{0\}: x_{(i)}<x_{(i+1)}\}\cup\{n\}.\end{align} For example, for the input vector $\mathbf{x}=(3,2,3,1)$ and the permutation $(\cdot)$ such that $x_{(0)}=0$, $x_{(1)}=1$, $x_{(2)}=2$, $x_{(3)}=3$, $x_{(4)}=3$, we get $\Psi_{\mathbf{x}}=\{0,1,2,4\}$. The following proposition includes the very basic properties of the system $\Psi_{\mathbf{x}}$ needed for further results. \begin{proposition}\label{vlastnostipi} Let $\mathbf{x}\in[0,\infty)^{[n]}$. \begin{enumerate}[i)] \item $\Psi_{\mathbf{x}}$ is independent of the permutation $(\cdot)$ of $\mathbf{x}$, i.e., $\Psi_{\mathbf{x}}$ contains the same values for any permutation $(\cdot)$ of $\mathbf{x}$ such that $0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots \leq x_{(n)}$. \item For any $i\in[n]$ there exists $k_i\in\Psi_{\mathbf{x}}\setminus\{0\}$ such that $x_i=x_{(k_i)}$, i.e. $\{x_{(k_i)}:k_i\in\Psi_{\mathbf{x}}\setminus\{0\}\}$ contains all distinct values of $\mathbf{x}$. \item $x_{(\min\Psi_{\mathbf{x}})}=0$. \item $\left\{[x_{(k)},x_{(k+1)}): k\in\Psi_{\mathbf{x}}\right\}$ is a decomposition of the interval $[0,\infty)$ into nonempty pairwise disjoint~sets. \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ }\begin{enumerate}[i)] \item Let us consider two different permutations of $\mathbf{x}$ (if they exist) $(\cdot)_1$ and $(\cdot)_2$ with the required property. Let us denote \begin{align*}\Psi_\mathbf{x}&:=\{i\in[n-1]\cup\{0\}: x_{(i)_1}<x_{(i+1)_1}\}\cup\{n\},\\\Phi_{\mathbf{x}}&:=\{i\in[n-1]\cup\{0\}: x_{(i)_2}<x_{(i+1)_2}\}\cup\{n\}. \end{align*} We show that $\Psi_{\mathbf{x}}=\Phi_{\mathbf{x}}$. Indeed, $n\in\Psi_{\mathbf{x}},n\in\Phi_{\mathbf{x}}$. If $i\in\Psi_{\mathbf{x}}\setminus\{n\}$, then $x_{(i)_1}<x_{(i+1)_1}$. Because of the nondecreasing rearrangement of $\mathbf{x}$ with respect to $(\cdot)_1$, $(\cdot)_2$ we get $x_{(i)_2}<x_{(i+1)_2}$, therefore $i\in\Phi_{\mathbf{x}}$ and $\Psi_{\mathbf{x}}\subseteq\Phi_{\mathbf{x}}$.
Analogously, $\Phi_{\mathbf{x}}\subseteq\Psi_{\mathbf{x}}$ holds. \item Since any $i\in[n]$ can be represented via the permutation as $i={(j_i)}$, $j_i\in[n]$, let us set $$k_i=\max\{j_i\in[n]:\, x_i=x_{(j_i)}\}.$$ As for any $k_i<n$ it holds that $x_{(k_i)}<x_{(k_{i}+1)}$, we have $k_i\in\Psi_{\mathbf{x}}\setminus\{0\}$. Moreover, $k_i=n\in\Psi_{\mathbf{x}}$ because of the definition of $\Psi_{\mathbf{x}}$, see~\eqref{pi}. \item It follows immediately from the fact that $\min\Psi_{\mathbf{x}}=\max\{i\in[n]\cup\{0\}:x_{(i)}=x_{(0)}=0\}$. \item It follows from parts ii), iii) and from the definition of the system $\Psi_{\mathbf{x}}$, since $x_{(k)}<x_{(k+1)}$ for any $k\in\Psi_{\mathbf{x}}$ and $x_{(k_1)}\neq x_{(k_2)}$ for any $k_1,k_2\in\Psi_{\mathbf{x}}$. \qed \end{enumerate} Since we have shown that the system $\Psi_{\mathbf{x}}$ is independent of the chosen permutation, henceforward we shall not explicitly mention the permutation in the assumptions of the presented results. The following proposition states that the formula~(\ref{LM_form}) can be rewritten via the system $\Psi_{\mathbf{x}}$ in a simpler form, see part ${\rm{i)}}$. Moreover, in the second part of the proposition we show that for a fixed $\mathbf{x}\in[0,\infty)^{[n]}$ it holds that $\mu_{\cA^{\mathrm{max},\mathcal{E}}}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ with a smaller collection $\mathcal{E}$ than the whole power set $2^{[n]}$ (compare with the known result~\cite[Example 4.2]{BoczekHalcinovaHutnikKaluszka2020} or see \eqref{prepis2}). The collection $\mathcal{E}$ depends on $\mathbf{x}$ (equivalently on $\Psi_{\mathbf{x}}$). \begin{proposition}\label{LM=SLM_max} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\mathbf{M}$. \begin{enumerate}[i)] \item Then \begin{equation}\label{PiLM_form}\mu(\{\x>\alpha\})=\sum_{k\in\Psi_{\mathbf{x}}}\mu\left(\Ai{k+1}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(k+1)})}(\alpha)\end{equation} for any $\alpha\in[0,\infty)$ with the convention $x_{(n+1)}=\infty$.
\item If $\mathcal{E}\supseteq\{\Ai{k+1}^c: k\in\Psi_{\mathbf{x}}\}$, then $$\mu_{\cA^{\mathrm{max}}}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\}).$$ \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ } \begin{enumerate}[i)] \item For $k\in[n-1]\cup\{0\},\, k\notin\Psi_{\mathbf{x}}$, we have $x_{(k)}=x_{(k+1)}$. This leads to the fact that $\mu\left(\Ai{k+1}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(k+1)})}(\alpha)=0$ for any $\alpha\in[0,\infty)$. Using Proposition~\ref{vlastnostipi} iv) we obtain the required assertion. \item According to Proposition~\ref{vlastnostipi} iv), let us divide the interval $[0,\infty)$ into disjoint sets $$[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$$ Let us consider an arbitrary (fixed) $k\in\Psi_{\mathbf{x}}$. Then from the fact that $\Ai{k+1}^c\in\mathcal{E}$ and $\aAi[E_{(k+1)}^c]{\mathbf{x}}{\mathrm{max}}=x_{(k)}$ we have $E_{(k+1)}^c\in\{E:\aA[E]{\mathbf{x}}\leq\alpha\}$ for any $\alpha\in[x_{(k)},x_{(k+1)})$. Therefore we get $$\mu_{\cA^{\mathrm{max},\mathcal{E}}}(\mathbf{x},\alpha)=\min\{\mu(E^c):\aAi[E]{\mathbf{x}}{\mathrm{max}}\leq\alpha,E\in\mathcal{E}\}\leq\mu(E_{(k+1)})=\mu(\{\x>\alpha\})$$ for any $\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality follows from part~$\text{i)}$. On the other hand, as $\mathcal{E}\subseteq2^{[n]}$, from the properties of the minimum we have $$\mu_{\cA^{\mathrm{max},\mathcal{E}}}(\mathbf{x},\alpha)\geq\mu_{\cA^{\mathrm{max},2^{[n]}}}(\mathbf{x},\alpha)=\mu(E_{(k+1)})=\mu(\{\x>\alpha\})$$ for any $\alpha\in[x_{(k)},x_{(k+1)})$. To sum up, $\mu_{\cA^{\mathrm{max},\mathcal{E}}}(\mathbf{x},\alpha)=\mu(E_{(k+1)})=\mu(\{\x>\alpha\})$ for any $\alpha\in[x_{(k)},x_{(k+1)})$.\qed \end{enumerate} \begin{remark} Let us remark that in the whole paper we suppose that $\mu$ is defined on $2^{[n]}$. However, in fact, it is enough to have a smaller $\mathtt{Dom}(\mu)$.
For example, in part ii) of the previous proposition it is enough to consider the domain of $\mu$ being $\{E^c: E\in\mathcal{E}\}$. \end{remark} Let us note that in~(\ref{PiLM_form}) the last summand is always equal to $0$ because $\mu\left(\Ai{n+1}\right)=\mu(\emptyset)=0$. However, it is useful to consider the form of the survival function in~(\ref{PiLM_form}) with the sum over the whole set $\Psi_{\mathbf{x}}$, not $\Psi_{\mathbf{x}}\setminus\{n\}$, because of some technical details in the proofs presented in this paper. \begin{example}\label{LM_collection} Let us take $\cA^{\mathrm{max}}=\{\aAi[E]{\cdot}{\mathrm{max}}: E\in\mathcal{E}\}$ and the normalized monotone measure $\mu$ on $2^{[3]}$ given in the following table: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $E$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu(E)$ & $0$ & $0$ & $0.5$ & $0$ & $0.5$ & $0.5$ & $0.5$ & $1$\\ \hline \end{tabular} \label{table0} \end{center} \vspace*{-12pt} \end{table} \noindent Further, let us take the input vector $\mathbf{x}=(1,2,1)$ with the permutation \mbox{$(1)\!=\!1$, $(2)\!=\!3$, $(3)\!=\!2$}. Then $\Psi_{\mathbf{x}}=\{0,2,3\}$ and the collection guaranteeing the equality between the survival function and the generalized survival function (of the input $\mathbf{x}$) is, according to Proposition~\ref{LM=SLM_max} $\text{ii)}$, e.g.
\begin{align*} \mathcal{D}GSF&=\{\Ai{k+1}^c:\, k\in\Psi_{\mathbf{x}}\}=\{\Ai{1}^c, \Ai{3}^c, \Ai{4}^c\}=\{\emptyset, \{(1), (2)\}, \{(1), (2), (3)\}\}\\&=\{\emptyset, \{1,3\}, \{1,2,3\}\}.\end{align*} Indeed, $$\mu_{\cA^{\mathrm{max}}}(\mathbf{x},\alpha)= 1\cdot\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,2)}(\alpha)=\mu(\{\x>\alpha\}).$$ \end{example} \begin{figure} \caption{The visualization of the survival function $\mu_{\cA^{\mathrm{max}}}(\mathbf{x},\alpha)$} \label{LM_visualization} \end{figure} From the previous result it follows that the standard survival function can be represented by the formula \begin{align}\label{formula} \mu(\{\x>\alpha\})&=\min\big\{\mu (E^c): \aAi[E]{\mathbf{x}}{\mathrm{max}}\leq \alpha,\, E\in \{\Ai{k+1}^c: k\in\Psi_{\mathbf{x}}\}\big\} \end{align} with the system $\Psi_{\mathbf{x}}$ given by the input vector $\mathbf{x}$. This formula is visualized in Figure~\ref{LM_visualization}. Let us remark that since $\aAi[\Ai{k+1}^c]{\mathbf{x}}{\mathrm{max}}=x_{(k)}$, on the upper line we measure the sets $(\Ai{k+1}^c)^c$. The calculation of the (generalized) survival function proceeds as described in the Introduction. Let us remark that the essence of the following results is the pointwise comparison of the generalized survival function with the standard survival function, having in mind the representation~\eqref{formula} together with its visualization, see Figure~\ref{LM_visualization}. It is obvious that the equality of the survival functions (standard and generalized) means that they have to achieve the same values, i.e., $\mu\left(\Ai{k+1}\right)$, $k\in\Psi_{\mathbf{x}}$, on the same corresponding intervals $[x_{(k)},x_{(k+1)})$, $k\in\Psi_{\mathbf{x}}$. Having in mind the formula~\eqref{PiLM_form}, the survival function representation given by~\eqref{formula} and its visualization in Figure~\ref{LM_visualization}, we can formulate the following sufficient conditions.
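The computation in Example~\ref{LM_collection} can also be checked numerically. The following Python sketch encodes $\mu$, the input vector and the collection in plain dictionaries and sets (the encodings are our own illustrative choices, not part of the paper's formal setup), and verifies that the generalized survival function over the collection $\{\Ai{1}^c, \Ai{3}^c, \Ai{4}^c\}$ coincides with the standard survival function:

```python
N = frozenset({1, 2, 3})
# the normalized monotone measure from the table in Example LM_collection
mu = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0.5, frozenset({3}): 0,
      frozenset({1, 2}): 0.5, frozenset({1, 3}): 0.5, frozenset({2, 3}): 0.5, N: 1}
x = {1: 1, 2: 2, 3: 1}          # input vector x = (1, 2, 1)

def agg_max(E):                 # conditional aggregation operator A^max(x | E)
    return max((x[i] for i in E), default=0)

def survival(alpha):            # standard survival function mu({x > alpha})
    return mu[frozenset(i for i in N if x[i] > alpha)]

def gen_survival(alpha, dom):   # generalized survival function: minimum over the collection
    return min(mu[N - E] for E in dom if agg_max(E) <= alpha)

# the collection {E_(1)^c, E_(3)^c, E_(4)^c} = {emptyset, {1,3}, {1,2,3}}
dom = [frozenset(), frozenset({1, 3}), N]
assert all(gen_survival(a, dom) == survival(a) for a in (0, 0.5, 1, 1.5, 2, 3))
```

Both functions take the value $1$ on $[0,1)$, $0.5$ on $[1,2)$ and $0$ afterwards, in agreement with the example.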
While $\mathrm{(C1)}$ guarantees that the generalized survival function does not exceed the standard survival function, $\mathrm{(C2)}$ guarantees the reverse inequality. Let $\cA$ be FCA. \begin{enumerate}[(C1)] \item For any $k\in\Psi_{\mathbf{x}}$ there exists $G_k\in\mathcal{D}GSF$ such that $$ \aA[G_k]{\mathbf{x}}=x_{(k)}\quad\text{and}\quad \mu(G_k^c)=\mu(\Ai{k+1}).$$ \item For any $k\in\Psi_{\mathbf{x}}$ and for any $E\in \mathcal{D}GSF$ it holds: $$\aA[E]{\mathbf{x}}<x_{(k+1)}\Rightarrow\mu(E^c)\geq \mu(\Ai{k+1}).$$ \end{enumerate} \begin{figure} \caption{The visualization of conditions $\mathrm{(C1)}$ and $\mathrm{(C2)}$} \label{obr_dokaz} \end{figure} \noindent The visualization of conditions $\mathrm{(C1)}$, $\mathrm{(C2)}$ via two parallel lines is drawn in Figure~\ref{obr_dokaz}. Let us remark that for $k=n$ condition $\mathrm{(C2)}$ holds trivially. Also, for $k=\min\Psi_{\mathbf{x}}$ condition $\mathrm{(C1)}$ holds trivially with $G_{\min\Psi_{\mathbf{x}}}=\emptyset$. \begin{remark}\label{poznamka_max} In accordance with the above, it is easy to see that for $\cA^{\mathrm{max}}$ with $\mathcal{D}GSF\supseteq\{\Ai{k+1}^c: k\in\Psi_{\mathbf{x}}\}$ the choice $G_k=\Ai{k+1}^c$ works for any $k\in\Psi_{\mathbf{x}}$ regardless of the choice of $\mu$ in $\mathrm{(C1)}$. Of course, for specific classes of monotone measures $\mu$ also other sets $G_k$ can satisfy $\mathrm{(C1)}$. Similarly, the validity of $\mathrm{(C2)}$ is clear. Indeed, if $\aAi[E]{\mathbf{x}}{\mathrm{max}}<x_{(k+1)}$, then we have $E\subseteq \Ai{k+1}^c$, i.e., $E^c\supseteq E_{(k+1)}$. From the monotonicity of $\mu$ we have $\mu(E^c)\geq\mu(E_{(k+1)})$ for any $E\in\mathcal{D}GSF$. \end{remark} Conditions {$\mathrm{(C1)}$} and {$\mathrm{(C2)}$} guarantee inequalities between the survival functions; thus the equality of the survival functions is a consequence. \begin{proposition}\label{LM<SLM} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA.
\begin{enumerate}[i)] \item If $\mathrm{(C1)}$ holds, then $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $\mathrm{(C2)}$ holds if and only if $\mu(\{\x>\alpha\})\leq \mu_{\cA}(\mathbf{x},\alpha)$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ } According to Proposition~\ref{vlastnostipi} iv), let us divide the interval $[0,\infty)$ into disjoint sets $$[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$$ Let us consider an arbitrary (fixed) $k\in\Psi_{\mathbf{x}}$. Let us prove part i). According to {$\mathrm{(C1)}$} there exists a set $G_k\in\mathcal{D}GSF$ such that $\aA[G_k]{\mathbf{x}}=x_{(k)}$ and $\mu(G_k^c)=\mu(E_{(k+1)})$. From the fact that $\mu(E_{(k+1)})=\mu(G_k^c)\in\left\{\mu(E^c):\, \aA[E]{\mathbf{x}}\leq x_{(k)}\right\}$ and since $\mu_{\cA}(\mathbf{x},\alpha)$ is nonincreasing in $\alpha$ (see~\cite[Proposition 4.3 (a)]{BoczekHalcinovaHutnikKaluszka2020}) we have $$\mu_{\cA}(\mathbf{x},\alpha)\leq\mu_{\cA}(\mathbf{x},x_{(k)})\leq\mu(E_{(k+1)})=\mu(\{\x>\alpha\})$$ for any $\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality follows from~(\ref{PiLM_form}). Let us prove part ii). From {$\mathrm{(C2)}$} it follows that for any $E\in\mathcal{D}GSF$, if $\aA[E]{\mathbf{x}}< x_{(k+1)}$, then $\mu(E^c)\geq\mu(E_{(k+1)})$. Therefore, $$\mu_{\cA}(\mathbf{x},\alpha)\geq \mu(E_{(k+1)})=\mu(\{\x>\alpha\})$$ for any $\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality follows from~(\ref{PiLM_form}). It remains to prove the implication $\Leftarrow$. Since $$\mu_{\cA}(\mathbf{x},\alpha)=\min\left\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha<x_{(k+1)}, E\in\mathcal{D}GSF\right\}\geq\mu(\Ai{k+1})=\mu(\{\x>\alpha\})$$ for any $\alpha\in[x_{(k)},x_{(k+1)})$, for any $E\in\mathcal{D}GSF$ it has to hold: if $\aA[E]{\mathbf{x}}<x_{(k+1)}$, then $\mu(E^c)\geq\mu(\Ai{k+1})$.
\qed \bigskip \begin{corollary}\label{LM_SLM_coincide} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. If {$\mathrm{(C1)}$} and {$\mathrm{(C2)}$} are satisfied, then $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{corollary} The application of the previous result is illustrated in the following example. The second example proves that $\mathrm{(C1)}$ and $\mathrm{(C2)}$ are only sufficient, not necessary. \begin{figure} \caption{Generalized survival function and visualization from Example~\ref{SC_example}} \label{LM_eq_SLM_picture} \end{figure} \begin{example}\label{SC_example} Let us consider $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}: E\in2^{[3]}\}$ and the normalized monotone measure $\mu$ on $2^{[3]}$ with the following values: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $E$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu(E)$ & $0$ & $0$ & $0.5$ & $0$ & $0.5$ & $0$ & $0.7$ & $1$\\ \hline $\aAi[E]{\mathbf{x}}{\mathrm{sum}}$ & $0$ & $1$ & $3$ & $1$ & $4$ & $2$ & $4$ & $5$\\\hline \end{tabular} \end{center} \vspace*{-12pt} \end{table} \noindent Further, let us take the input vector $\mathbf{x}=(1,3,1)$ with the permutation $(1)=1$, $(2)=3$, $(3)=2$. Then $x_{(0)}=0$, $x_{(1)}=1$, $x_{(2)}=1$, $x_{(3)}=3$, therefore $\Psi_{\mathbf{x}}=\{0,2,3\}$ and $$\Ai{1}=\{(1), (2), (3)\}=\{1,2,3\},\quad \Ai{3}=\{(3)\}=\{2\},\quad \Ai{4}=\emptyset. $$ We can see that condition $\mathrm{(C1)}$ of Corollary~\ref{LM_SLM_coincide} is satisfied with \begin{center} $G_0=\emptyset$, $G_2=\{3\}$, $G_3=\{2\}$. \end{center} Indeed, $\aAi[G_0]{\mathbf{x}}{\mathrm{sum}}=0=x_{(0)}$ and $\mu(G_0^c)=\mu(E_{(1)})$. Further, $\aAi[G_2]{\mathbf{x}}{\mathrm{sum}}=1=x_{(2)}$ and $\mu(G_2^c)=\mu(\{1,2\})=\mu(E_{(3)})$.
Finally, $\aAi[G_3]{\mathbf{x}}{\mathrm{sum}}=3=x_{(3)}$ and $\mu(G_3^c)=\mu(\{1,3\})=\mu(E_{(4)})$. Condition $\mathrm{(C2)}$ is also satisfied, see the visualization of the generalized survival function via parallel lines in Figure~\ref{LM_eq_SLM_picture}. The discussed survival functions coincide and take the form $$\mu(\{\x>\alpha\})=\mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)+0{.}5\cdot\mathbf{1}_{[1,3)}(\alpha)$$ for $\alpha\in[0,\infty)$. The plot of the (generalized) survival function is given in Figure~\ref{LM_eq_SLM_picture}. \end{example} \begin{example}\label{example} Let us consider $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}: E\in2^{[3]}\}$ and the normalized monotone measure $\mu$ on $2^{[3]}$ with the following values: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $E$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu(E)$ & $0$ & $0$ & $0$ & $0.7$ & $0$ & $0.8$ & $0.7$ & $1$\\ \hline $\aAi[E]{\mathbf{x}}{\mathrm{sum}}$ & $0$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $9$\\\hline \end{tabular} \end{center} \end{table} \noindent Further, let us take the input vector $\mathbf{x}=(2,3,4)$ with the permutation being the identity. Then the survival functions coincide: $$\mu(\{\x>\alpha\})=\mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,2)}(\alpha)+0{.}7\cdot\mathbf{1}_{[2,4)}(\alpha).$$ Here, $G_0=\emptyset$, $G_1=\{1\}$, $G_2=\{2\}$, $G_3=\{3\}$ are the only sets that satisfy the equality $\aAi[G_k]{\mathbf{x}}{\mathrm{sum}}=x_{(k)}$ for $k\in\Psi_{\mathbf{x}}=\{0,1,2,3\}$. However, $$0{.}8=\mu(G_2^c)\neq\mu(\Ai{3})=0{.}7.$$ Thus, the sufficient condition of Corollary~\ref{LM_SLM_coincide} is not a necessary one. \end{example} Let us return to Proposition~\ref{LM<SLM}.
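Before doing so, let us note that the coincidence of the survival functions in Example~\ref{SC_example} can also be verified numerically. The following Python sketch (the dictionary encodings of $\mu$ and $\mathbf{x}$ are our own illustrative choices) takes the collection to be the whole power set $2^{[3]}$ and the sum-based conditional aggregation operator:

```python
from itertools import combinations

N = (1, 2, 3)
def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# the normalized monotone measure from Example SC_example
mu = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0.5, frozenset({3}): 0,
      frozenset({1, 2}): 0.5, frozenset({1, 3}): 0, frozenset({2, 3}): 0.7,
      frozenset({1, 2, 3}): 1}
x = {1: 1, 2: 3, 3: 1}          # input vector x = (1, 3, 1)

def agg_sum(E):                 # conditional aggregation operator A^sum(x | E)
    return sum(x[i] for i in E)

def survival(alpha):            # standard survival function mu({x > alpha})
    return mu[frozenset(i for i in N if x[i] > alpha)]

def gen_survival(alpha):        # generalized survival function, collection = 2^[3]
    full = frozenset(N)
    return min(mu[full - E] for E in powerset(N) if agg_sum(E) <= alpha)

# both survival functions equal 1 on [0,1), 0.5 on [1,3) and 0 afterwards
assert all(gen_survival(a) == survival(a) for a in (0, 0.5, 1, 2, 3, 4))
```

The same script applied to the measure of Example~\ref{example} confirms the coincidence there as well.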
While $\mathrm{(C2)}$ is a necessary and sufficient condition under which the generalized survival function is greater than or equal to the survival function, $\mathrm{(C1)}$ is only sufficient for the reverse inequality. Since this condition seems too strict, let us define conditions $\mathrm{(C3)}$ and $\mathrm{(C4)}$ as follows: \begin{enumerate} \item[(C3)] For any $k\in\Psi_{\mathbf{x}}$ there exists $F_k\in \mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)\leq \mu(\Ai{k+1})$. \item[(C4)] For any $k\in\Psi_{\mathbf{x}}$ there exists $F_k\in \mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)= \mu(\Ai{k+1})$. \end{enumerate} \noindent The visualization of condition $\mathrm{(C3)}$ is drawn in Figure~\ref{obr_dokaz3}. In the following we show that it is exactly $\mathrm{(C3)}$ that improves Proposition~\ref{LM<SLM} ii). As a consequence we also get an improvement of Corollary~\ref{LM_SLM_coincide}: replacing $\mathrm{(C1)}$ with $\mathrm{(C3)}$, we obtain a necessary and sufficient condition for the equality between the survival functions. However, it will turn out that under assumption $\mathrm{(C2)}$, condition $\mathrm{(C3)}$ reduces to $\mathrm{(C4)}$. \begin{figure} \caption{The visualization of condition $\mathrm{(C3)}$} \label{obr_dokaz3} \end{figure} \begin{proposition}\label{LM>SLM} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. $\mathrm{(C3)}$ holds if and only if $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{proposition} \noindent {\bf Proof. \ \ } Let us prove the implication $\Rightarrow$. According to Proposition~\ref{vlastnostipi} iv), let us divide the interval $[0,\infty)$ into disjoint sets $$[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$$ Let us consider an arbitrary (fixed) $k\in\Psi_{\mathbf{x}}$.
Then by the assumptions there is $F_{k}\in \mathcal{D}GSF$ such that $\aA[F_{k}]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_{k}^c)\leq \mu(\Ai{k+1})$. Thus $\mu(F_k^c)\in\{\mu(E^c):\, \aA[E]{\mathbf{x}}\leq \alpha,\, E\in\mathcal{D}GSF \}$ for any $\alpha\in[x_{(k)},x_{(k+1)})$. Hence, \begin{align*} \mu_{\cA}(\mathbf{x},\alpha)=\min\left\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha, E\in\mathcal{D}GSF\right\}\leq\mu(F_k^c)\leq\mu(\Ai{k+1})=\mu(\{\x>\alpha\}) \end{align*} for any $\alpha\in[x_{(k)},x_{(k+1)})$. Let us prove the reverse implication $\Leftarrow$. Let $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. Then from this fact and from~\eqref{PiLM_form} it follows that $$\mu_{\cA}(\mathbf{x},x_{(k)})\leq\mu(\{\mathbf{x}> x_{(k)}\})=\mu(E_{(k+1)})$$ for any $k\in\Psi_{\mathbf{x}}$. As $\mu_{\cA}(\mathbf{x},x_{(k)})=\min\left\{\mu(E^c): \aA[E]{\mathbf{x}}\leq x_{(k)}, E\in\mathcal{D}GSF\right\}$, there exists $F_k\in\mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)\leq\mu(\Ai{k+1})$. \qed \bigskip \begin{corollary}\label{corollary_1} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. \begin{enumerate}[i)] \item If $\mathrm{(C2)}$ holds, then $\mathrm{(C3)}$ is equivalent to $\mathrm{(C4)}$. \item $\mathrm{(C2)}$ and $\mathrm{(C3)}$ hold if and only if $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $\mathrm{(C2)}$ and $\mathrm{(C4)}$ hold if and only if $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{corollary} \noindent {\bf Proof. \ \ } It is enough to prove part i), more precisely, the implication $\mathrm{(C3)}\Rightarrow\mathrm{(C4)}$. Let $\mathrm{(C3)}$ be satisfied; we show that $\mu(F_k^c)= \mu(\Ai{k+1})$ holds for any $k\in\Psi_{\mathbf{x}}$.
Since for any $F_k\in\mathcal{D}GSF$, $k\in\Psi_{\mathbf{x}}$, we have $\aA[F_k]{\mathbf{x}}\leq x_{(k)}<x_{(k+1)}$, from $\mathrm{(C2)}$ we get $\mu(F_k^c)\geq\mu(E_{(k+1)})$. On the other hand, from $\mathrm{(C3)}$ we have $\mu(F_k^c)\leq\mu(E_{(k+1)})$. \qed \begin{remark}\label{remark_i} At the end of this main part let us remark that some of the above-mentioned results are true also without constructing the system $\Psi_{\mathbf{x}}$. Let us denote: \begin{enumerate}[$(\widetilde{\mathrm{C}}1)$] \item For any $i\in[n]\cup\{0\}$ there exists $G_i\in \mathcal{D}GSF$ such that $\aA[G_i]{\mathbf{x}}=x_{(i)}$ and $\mu(G_i^c)=~\mu(\Ai{i+1})$. \item {{For any $i\in[n]\cup\{0\}$ and for any $E\in \mathcal{D}GSF$ it holds:}} $\aA[E]{\mathbf{x}}<x_{(i+1)}\Rightarrow\mu(E^c)\geq \mu(\Ai{i+1}).$ \item For any $i\in[n]\cup\{0\}$ there exists $F_i\in \mathcal{D}GSF$ such that $\aA[F_i]{\mathbf{x}}\leq x_{(i)}$ and $\mu(F_i^c)\leq \mu(\Ai{i+1})$. \end{enumerate} Then Proposition~\ref{LM<SLM} and Corollary~\ref{corollary_1}~(ii) remain true, although the requirements in $(\widetilde{\mathrm{C}}1)$, $(\widetilde{\mathrm{C}}2)$, $(\widetilde{\mathrm{C}}3)$ will be redundant for some $i\in[n]\cup\{0\}$\footnote{They will be redundant for $i\in[n]\cup\{0\}$ such that $x_{(i)}=x_{(i+1)}$, compare with the motivation for introducing the system $\Psi_{\mathbf{x}}$.}. On the other hand, Corollary~\ref{corollary_1}~(i),~(iii) need not hold in general. \bigskip \noindent{Inequalities:} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. \begin{enumerate}[i)] \item If $(\widetilde{\mathrm{C}}1)$ holds, then $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $(\widetilde{\mathrm{C}}2)$ holds if and only if $\mu(\{\x>\alpha\})\leq \mu_{\cA}(\mathbf{x},\alpha)$ for any $\alpha\in[0,\infty)$.
\end{enumerate} \noindent{Sufficient and necessary condition:} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. Then $(\widetilde{\mathrm{C}}2)$ and $(\widetilde{\mathrm{C}}3)$ hold if and only if $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{remark} \subsection{Equality of the generalized survival function and the standard survival function, further results} \vspace*{12pt} In this subsection we provide further results on the indistinguishability of survival functions. Considering the formula of the standard survival function~\eqref{PiLM_form}, one can observe that the same value of the monotone measure may be achieved on several intervals. These intervals can be joined together. Thus we obtain again a shorter formula for the survival function, see Proposition~\ref{LM=SLM_max2} i), which allows us to formulate further results. Let us define the system $\Psi_{\mathbf{x}}^{*}\subseteq\Psi_{\mathbf{x}}$ as follows: \begin{align}\label{pi*}\Psi_{\mathbf{x}}^*:=\{k\in\Psi_{\mathbf{x}}\setminus\{\min\Psi_{\mathbf{x}}\}:\, \mu(\Ai{j+1})>\mu(\Ai{k+1}) \text{ for all } j\in\Psi_{\mathbf{x}},\, j<k\}\cup \{\min\Psi_{\mathbf{x}}\}\end{align} (compare with the definition of the system $\Psi_{\mathbf{x}}$, which is analogous; however, the main condition concerns the components of $\mathbf{x}$ instead of the values of $\mu$). Let us give an example of the calculation of the system $\Psi_{\mathbf{x}}^*$ considering the inputs from Example~\ref{example}. For the given input, $\Psi_{\mathbf{x}}=\{0,1,2,3\}$. Then by the definition of $\Psi_{\mathbf{x}}^*$ we have $\min\Psi_{\mathbf{x}}=0\in\Psi_{\mathbf{x}}^*$. For $k=1,3$ the inequality $\mu(\Ai{j+1})>\mu(\Ai{k+1})$ holds for all $j<k$, $j\in\Psi_{\mathbf{x}}$; however, for $k=2$ we have $\mu(\Ai{2})=\mu(\Ai{3})$. Thus $2\notin\Psi_{\mathbf{x}}^*$. In summary, $\Psi_{\mathbf{x}}^*=\{0,1,3\}$.
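The construction of $\Psi_{\mathbf{x}}$ and $\Psi_{\mathbf{x}}^*$ just illustrated can be sketched in a few lines of Python (the encoding of the values $\mu(E_{(k+1)})$ as a plain dictionary, read off the table of the example above, is our own illustrative choice):

```python
x = [2, 3, 4]                    # input vector from the example above; identity permutation
n = len(x)
xs = [0] + sorted(x)             # xs[k] = x_(k), with the convention x_(0) = 0
mu_E = {0: 1.0, 1: 0.7, 2: 0.7, 3: 0.0}   # mu(E_(k+1)) for k = 0..n, read off the table

def psi(xs, n):
    # k belongs to Psi_x iff x_(k) < x_(k+1); always true for k = n since x_(n+1) = infinity
    return [k for k in range(n + 1) if k == n or xs[k] < xs[k + 1]]

def psi_star(P, mu_of):
    # keep min(P); keep k if mu(E_(j+1)) > mu(E_(k+1)) for every smaller j in P
    out = [P[0]]
    for k in P[1:]:
        if all(mu_of[j] > mu_of[k] for j in P if j < k):
            out.append(k)
    return out

P = psi(xs, n)                   # [0, 1, 2, 3]
assert psi_star(P, mu_E) == [0, 1, 3]
```

The function `psi` also reproduces, e.g., $\Psi_{\mathbf{x}}=\{0,2,3\}$ for the vector $\mathbf{x}=(1,2,1)$ used earlier.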
For the purposes of this subsection, for any $k\in\Psi_{\mathbf{x}}^*$ let us denote \begin{align}\label{l_k} l_k:=\max\{j\in\Psi_{\mathbf{x}}:\, \mu(E_{(j+1)})=\mu(E_{(k+1)})\}. \end{align} \begin{proposition}\label{vlastnoti_pi_s_hviezdickou} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\mathbf{M}$. \begin{enumerate}[i)] \item $x_{(\min\Psi_{\mathbf{x}}^*)}=0$. \item $\left\{[x_{(k)},x_{(l_k+1)}): k\in\Psi_{\mathbf{x}}^*\right\}$ with $l_k$ given by~\eqref{l_k} and with the convention $x_{(n+1)}=\infty$ is a decomposition of the interval $[0,\infty)$ into nonempty pairwise disjoint sets. \item $(\forall k\in\Psi_{\mathbf{x}}^*\setminus\{\min\Psi_{\mathbf{x}}^*\})$ $(\exists r\in\Psi_{\mathbf{x}}^*, r<k)$ $x_{(k)}=x_{(l_{r}+1)}$. Moreover, $\mu(\Ai{k+1})<\mu(\Ai{l_r+1})$. \item If $\mu$ is strictly monotone on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$, then $\Psi_{\mathbf{x}}=\Psi_{\mathbf{x}}^*$. \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ } Part i) follows from Proposition~\ref{vlastnostipi} iii): since $\Psi_{\mathbf{x}}^*\subseteq\Psi_{\mathbf{x}}$ and $\min\Psi_{\mathbf{x}}\in\Psi_{\mathbf{x}}^*$, we have $\min\Psi_{\mathbf{x}}=\min \Psi_{\mathbf{x}}^*$. The proof of ii) follows from Proposition~\ref{vlastnostipi} part iv) and from the fact that each partial interval $[x_{(k)}, x_{(l_k+1)})$, $k\in\Psi_{\mathbf{x}}^*$, can be rewritten as follows: $$[x_{(k)}, x_{(l_k+1)})=\bigcup_{j=k,j\in\Psi_{\mathbf{x}}}^{l_k}[x_{(j)}, x_{(j+1)}).$$ The equality $x_{(k)}=x_{(l_{r}+1)}$ in part iii) follows from ii) with $r=\max\{j\in\Psi_{\mathbf{x}}^*: x_{(j)}<x_{(k)}\}$. Moreover, it holds that $\mu(\Ai{l_r+1})=\mu(\Ai{r+1})>\mu(\Ai{k+1})$, where the first equality holds because of~\eqref{l_k} and the second inequality is true due to $r< k$, $r,k\in\Psi_{\mathbf{x}}^*$. Part~iv) follows from~\eqref{pi*}. \qed \begin{proposition}\label{LM=SLM_max2} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\mathbf{M}$.
\begin{enumerate}[i)] \item It holds that \begin{equation}\label{Pi*LM_form} \mu(\{\x>\alpha\})=\sum_{k\in\Psi_{\mathbf{x}}^*}\mu\left(\Ai{k+1}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(l_k+1)})}(\alpha) \end{equation} for any $\alpha\in[0,\infty)$ with $l_k$ given by~\eqref{l_k} and with the convention $x_{(n+1)}=\infty$. \item If $\mathcal{D}GSF\supseteq\{\Ai{k+1}^c: k\in\Psi_{\mathbf{x}}^*\}$, then $\mu_{\cA^{\mathrm{max}}}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ } Part ii) can be proved analogously to Proposition~\ref{LM=SLM_max} part ii). Part i) follows from the fact that each partial interval $[x_{(k)}, x_{(l_k+1)})$, $k\in\Psi_{\mathbf{x}}^*$, can be rewritten as $[x_{(k)}, x_{(l_k+1)})=\bigcup_{j=k,j\in\Psi_{\mathbf{x}}}^{l_k}[x_{(j)}, x_{(j+1)})$. From the formula~\eqref{PiLM_form} and from the definition of $l_k$ we get $$\mu(\{\x>\alpha\})=\mu(\Ai{j+1})=\mu(\Ai{k+1})$$ for any $\alpha\in[x_{(j)}, x_{(j+1)})$, $j\in\Psi_{\mathbf{x}}$, $k\leq j\leq l_k$. \qed \bigskip All results from the previous subsection will also be true under a slight modification of conditions (C1), (C2), (C3) and (C4) as follows: \begin{enumerate}[(C1$^*$)] \item For any $k\in\Psi_{\mathbf{x}}^*$ there exists $G_k\in\mathcal{D}GSF$ such that $ \aA[G_k]{\mathbf{x}}=x_{(k)}$ and $\mu(G_k^c)=\mu(\Ai{k+1}).$ \item For any $k\in\Psi_{\mathbf{x}}^*$ and for any $E\in \mathcal{D}GSF$ it holds: $\aA[E]{\mathbf{x}}<x_{(l_k+1)}\Rightarrow\mu(E^c)\geq \mu(\Ai{l_k+1}).$ \item For any $k\in\Psi_{\mathbf{x}}^*$ there exists $F_k\in \mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)\leq \mu(\Ai{k+1})$. \item For any $k\in\Psi_{\mathbf{x}}^{*}$ there exists $F_k\in \mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)= \mu(\Ai{k+1})$. \end{enumerate} In the following we summarize all modifications of the results from the main part of this section.
Since the proofs of parts i) -- vii) are based on the same ideas as before, we omit them. The comparison of these results with those obtained in the main part can be found in Remark~\ref{remark*}. \begin{proposition}\label{summar} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. \begin{enumerate}[i)] \item If $\mathrm{(C1^*)}$ holds, then $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $\mathrm{(C2^*)}$ holds if and only if $\mu(\{\x>\alpha\})\leq \mu_{\cA}(\mathbf{x},\alpha)$ for any $\alpha\in[0,\infty)$. \item If {$\mathrm{(C1^*)}$} and {$\mathrm{(C2^*)}$} are satisfied, then $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $\mathrm{(C3^*)}$ holds if and only if $\mu_{\cA}(\mathbf{x},\alpha)\leq\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item {$\mathrm{(C2^*)}$} and {$\mathrm{(C3^*)}$} hold if and only if $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item If {$\mathrm{(C2^*)}$} holds, then {$\mathrm{(C3^*)}$} is equivalent to {$\mathrm{(C4^*)}$}. \item {$\mathrm{(C2^*)}$} and {$\mathrm{(C4^*)}$} hold if and only if $\mu_{\cA}(\mathbf{x},\alpha)= \mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item $\mathrm{(C2)}$ holds if and only if $\mathrm{(C2^*)}$ holds. \end{enumerate} \end{proposition} \noindent {\bf Proof. \ \ } The implication (C2) $\Rightarrow$ (C2$^*$) of part viii) is clear. We prove the reverse implication. Let us consider any set $E\in\mathcal{D}GSF$ such that $\aA[E]{\mathbf{x}}<x_{(k+1)}$ for some $k\in\Psi_{\mathbf{x}}$. Let us define $$j_k=\min\{j\in\Psi_{\mathbf{x}}: \mu(\Ai{j+1})=\mu(\Ai{k+1})\}.$$ It is easy to see that $j_k\in\Psi_{\mathbf{x}}^*$ and $l_{j_k}\geq k\geq j_k$. Moreover, $\mu(\Ai{l_{j_k}+1})=\mu(\Ai{k+1})=\mu(E_{(j_k+1)})$ and $x_{(k+1)}\leq x_{(l_{j_k}+1)}$.
Then from (C2$^*$) we have $\mu(E^c)\geq\mu(E_{(l_{j_k}+1)})=\mu(E_{(k+1)})$. \qed \begin{remark}\label{remark*} In comparison with the results in the main part of this section, the advantage of the previous statements lies in their efficiency for testing the equality or inequality of survival functions. In particular, Proposition~\ref{summar} vii) requires the same properties as Corollary~\ref{corollary_1} iii), however, for a smaller number of sets, $k\in\Psi_{\mathbf{x}}^{*}\subseteq\Psi_{\mathbf{x}}$. On the other hand, the equality (inequality) of the survival functions carries more information than stated in Proposition~\ref{summar}: the results are true for any $k\in\Psi_{\mathbf{x}}$, not only for $k\in\Psi_{\mathbf{x}}^*$. Moreover, the system $\Psi_{\mathbf{x}}$ is also simpler to define. \end{remark} We have seen in the main part of this section that $\mathrm{(C1)}$, $\mathrm{(C2)}$ are not necessary for the equality between survival functions in general, see Corollary~\ref{LM_SLM_coincide}, Example~\ref{example}. We have improved this result by replacing $\mathrm{(C1)}$ with $\mathrm{(C4)}$. Also, Corollary~\ref{LM_SLM_coincide} can be improved as follows. \begin{theorem}\label{nutna_postacujuca_2} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be FCA. Then the following assertions are equivalent: \begin{enumerate}[i)] \item $\mathrm{(C1^*)}$, $\mathrm{(C2^*)}$ are satisfied. \item $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{theorem} \noindent {\bf Proof. \ \ } The implication $\text{i)}\Rightarrow \text{ii)}$ follows from Proposition~\ref{summar} iii). In order to prove the reverse implication, assume that the survival functions are equal. Then (C2$^*$) follows from Proposition~\ref{summar} vii). It is enough to prove (C1$^*$).
From Proposition~\ref{summar} vii) we have: \begin{center} For any $k\in\Psi_{\mathbf{x}}^*$ there exists $F_k\in \mathcal{D}GSF$ such that $\aA[F_k]{\mathbf{x}}\leq x_{(k)}$ and $\mu(F_k^c)= \mu(\Ai{k+1})$. \end{center} We show that $\aA[F_k]{\mathbf{x}}= x_{(k)}$. Indeed, for $k=\min\Psi_{\mathbf{x}}^*$ the result is immediate, since $0\leq\aA[F_{\min\Psi_{\mathbf{x}}^*}]{\mathbf{x}}\leq~ x_{(\min\Psi_{\mathbf{x}}^*)}=0$, where the last equality follows from Proposition~\ref{vlastnoti_pi_s_hviezdickou} i). Let $k>\min\Psi_{\mathbf{x}}^*$, $k\in\Psi_{\mathbf{x}}^*$. By Proposition~\ref{vlastnoti_pi_s_hviezdickou} iii) there exists $r\in\Psi_{\mathbf{x}}^*$, $r<k$, such that $x_{(l_{r}+1)}=x_{(k)}$ and $\mu(F_k^c)= \mu(\Ai{k+1})< \mu(\Ai{l_{r}+1})$. From the contrapositive of (C2$^*$) we get $\aA[F_k]{\mathbf{x}}\geq x_{(l_{r}+1)}=x_{(k)}$. \qed \begin{corollary}\label{nutna_postacujuca} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$ be strictly monotone on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$, and let $\cA$ be FCA. Then the following assertions are equivalent: \begin{enumerate}[i)] \item $\mathrm{(C1)}$, $\mathrm{(C2)}$ are satisfied. \item $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{corollary} \noindent {\bf Proof. \ \ } It follows from Proposition~\ref{vlastnoti_pi_s_hviezdickou} iv) and Theorem~\ref{nutna_postacujuca_2}. \qed A summary of the relationships among some of the conditions, as well as a summary of the sufficient and necessary conditions under which the survival functions coincide or are pointwise comparable with respect to $\leq$, $\geq$, can be found in the Appendix, see Table~\ref{tab:summar}. \section{Equality characterization} The results of the previous section stated conditions, depending on the FCA $\cA$, the input vector $\mathbf{x}$ and the monotone measure $\mu$, under which the equality between survival functions holds.
Of course, when one changes the monotone measure and the other inputs stay the same, the equality can be violated, as the following example shows. \begin{example}\label{ex:main} Let us consider $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}: E\in2^{[3]}\}$, and normalized monotone measures $\mu$, $\nu$ on $2^{[3]}$ with the following values: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $E$ & $\{1,2,3\}$ & $\{2,3\}$ & $\{1,3\}$ & $\{1,2\}$ & $\{3\}$ & $\{2\}$ & $\{1\}$ & $\emptyset$ \\ \hline $E^c$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu(E^c)$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0.5$ & $0.5$ & $1$\\ \hline $\nu(E^c)$ & $0$ & $0$ & $0$ & $0$ & $0.5$ & $0.5$ & $0.5$ & $1$\\ \hline $\aAi[E]{\mathbf{x}}{\mathrm{sum}}$ & $4$ & $3$ & $2$ & $3$ & $1$ & $2$ & $1$ & $0$\\\hline $\aAi[E]{\mathbf{x}}{\mathrm{max}}$ & $2$ & $2$ & $1$ & $2$ & $1$ & $2$ & $1$ & $0$\\\hline \end{tabular} \end{center} \end{table} \noindent Further, let us take the input vector $\mathbf{x}=(1,2,1)$. Then we can see that $$\mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)=\mu(\{\x>\alpha\}),\quad \alpha\in[0,\infty),$$ but $$\nu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)+0{.}5\,\mathbf{1}_{[1,2)}(\alpha)\not=\mathbf{1}_{[0,1)}(\alpha)=\nu(\{\mathbf{x}>\alpha\}),\quad \alpha\in[0,\infty).$$ \end{example} In the following we shall find sufficient and necessary conditions on $\cA$ and $\mathbf{x}$ under which the survival functions are equal for every monotone measure. Thus we answer Problem 2, see Theorem~\ref{thm:characterization}, Theorem~\ref{thm:characterization_mon}. In the second step we characterize the FCA for which the survival functions are equal (for any monotone measure and any input vector), answering Problem 3. \begin{theorem}\label{thm:characterization} Let $\mathbf{x}\in[0,\infty)^{[n]}$, and let $\cA$ be FCA.
Then the following assertions are equivalent: \begin{enumerate}[i)] \item $\mathcal{D}GSF\supseteq\{\Ai{k+1}^c : k\in\Psi_{\mathbf{x}}\}$ and $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E=\Ai{k+1}^c$ with $k\in\Psi_{\mathbf{x}}$, and $\aA[E]{\mathbf{x}}\geq\aAi[E]{\mathbf{x}}{\mathrm{max}}$ otherwise. \item For each $\mu\in\mathbf{M}$ strictly monotone on $\{\Ai{k+1} : k\in\Psi_{\mathbf{x}}\}$ it holds that $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \item For each $\mu\in\mathbf{M}$ it holds that $\mu_{\cA}(\mathbf{x},\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{theorem} \noindent {\bf Proof. \ \ } The implication $\text{i)}\Rightarrow \text{iii)}$ follows easily from Corollary~\ref{LM_SLM_coincide}. Indeed, for any $k\in\Psi_{\mathbf{x}}$ condition (C1) is satisfied with $G_k=\Ai{k+1}^c$. If $\aA[E]{\mathbf{x}}<x_{(k+1)}$, $k\in\Psi_{\mathbf{x}}$ and $E\in\mathcal{D}GSF$, then from the assumptions we have $$\aAi[E]{\mathbf{x}}{\mathrm{max}}\leq\aA[E]{\mathbf{x}}<x_{(k+1)}.$$ Then we get $E\subseteq \Ai{k+1}^c$, i.e., $E^c\supseteq\Ai{k+1}$, and for each monotone measure $\mu$ we have $\mu(E^c)\geq\mu(\Ai{k+1})$. Thus (C2) is also satisfied. The implication $\text{iii)}\Rightarrow \text{ii)}$ is trivial. Let us prove the implication $\text{ii)}\Rightarrow \text{i)}$. Since the assumption holds for any $\mu\colon 2^{[n]}\to[0,\infty)$ strictly monotone on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$, it holds in particular for $\mu$ strictly monotone on the whole of $2^{[n]}$ (not only on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$). From Corollary~\ref{nutna_postacujuca}, (C1) holds. Moreover, since the sets $E_{(k+1)}$ are the only sets with measure equal to $\mu(E_{(k+1)})$, we get $G_{k}=\Ai{k+1}^c$ and $\mathcal{D}GSF\supseteq\{\Ai{k+1}^c : k\in\Psi_{\mathbf{x}}\}$.
So, from (C1) we have $$\aA[\Ai{k+1}^c]{\mathbf{x}}=x_{(k)}=\aAi[\Ai{k+1}^c]{\mathbf{x}}{\mathrm{max}}$$ for any $k\in\Psi_{\mathbf{x}}$. Let us prove the second part of~i). Again, if the equality between the survival functions holds for any $\mu$ strictly monotone on $\{\Ai{k+1} : k\in\Psi_{\mathbf{x}}\}$, then it holds for $\mu\colon 2^{[n]}\to[0,\infty)$ being strictly monotone on the above-mentioned collection with the values: $$\mu(E)=\mu(E_{(k+1)})\,\, \text{for any set}\,\,E\,\,\text{such that}\,\,\aAi[E^c]{\mathbf{x}}{\mathrm{max}}=x_{(k)},\text{ } k\in\Psi_{\mathbf{x}}.\footnote{The given set function is a monotone measure because $\aAi[\emptyset^c]{\mathbf{x}}{\mathrm{max}}=x_{(n)}$, therefore $\mu(E_{(n+1)})=\mu(\emptyset)=0$. Further, let $E_1\subseteq E_2$, i.e., $E_1^c\supseteq E_2^c$. Then from Proposition~\ref{vlastnostipi} ii) there exist $k_1,k_2\in\Psi_{\mathbf{x}}$ such that $\aAi[E_1^c]{\mathbf{x}}{\mathrm{max}}=x_{(k_1)}$ and $\aAi[E_2^c]{\mathbf{x}}{\mathrm{max}}=x_{(k_2)}$. It is easy to see that $k_1\geq k_2$ and $\mu(E_1)=\mu(\Ai{k_1+1})\leq\mu(\Ai{k_2+1})=\mu(E_2)$.}$$ Let $E\in\mathcal{D}GSF$. Then according to Proposition~\ref{vlastnostipi} ii) there exists $k\in\Psi_{\mathbf{x}}\setminus\{0\}$ such that $$\aAi[E]{\mathbf{x}}{\mathrm{max}}=x_{(k)}.$$ Since $\mu$ is strictly monotone on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$, from Proposition~\ref{vlastnoti_pi_s_hviezdickou} iv) we have $\Psi_{\mathbf{x}}=\Psi_{\mathbf{x}}^*$. Further, by Proposition~\ref{vlastnoti_pi_s_hviezdickou} i), if $\aAi[E]{\mathbf{x}}{\mathrm{max}}=x_{(\min\Psi_{\mathbf{x}}^*)}=0$, the result is trivial. Let $k>\min\Psi_{\mathbf{x}}^*$. Then from Proposition~\ref{vlastnoti_pi_s_hviezdickou} iii) there exists $r\in\Psi_{\mathbf{x}}^*$, $r<k$, such that $x_{(l_{r}+1)}=x_{(k)}$ and $\mu(\Ai{k+1})< \mu(\Ai{l_{r}+1})$.
Therefore $\mu(E^c)=\mu(\Ai{k+1})< \mu(\Ai{l_{r}+1})$, and from the contrapositive of (C2$^*$) we have $\aA[E]{\mathbf{x}}\geq x_{(l_{r}+1)}=x_{(k)}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$. \qed \begin{remark} From the previous theorem one can extract another sufficient condition under which the standard and generalized survival functions coincide, namely condition i). Let us remark that this sufficient condition is stricter than $\mathrm{(C1)}$ and $\mathrm{(C2)}$, i.e., if i) is satisfied then $\mathrm{(C1)}$ and $\mathrm{(C2)}$ hold; however, the reverse implication need not be true in general, see Example~\ref{SC_example}. \end{remark} According to the previous result, there are vectors for which the equality between the survival functions (for any $\mu$) does not lead to $\cA^{\mathrm{max}}$. \begin{example}\label{pr_vektor_y} Let us consider $\cA=\{\aA[E]{\cdot}:E\in2^{[3]}\}$ with the conditional aggregation operator from Example~\ref{pr aggr} iii) with $\mathbf{w}=(0.5,0.5,1)$, $\mathbf{z}=(0.5,0.25,1)$. Let us take the input vector $\mathbf{x}=(2,3,4)$.
The values of $\aA[E]{\mathbf{x}}$, $E\in2^{[3]}$, are summarized in the following table: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $E$ & $\{1,2,3\}$ & $\{2,3\}$ & $\{1,3\}$ & $\{1,2\}$ & $\{3\}$ & $\{2\}$ & $\{1\}$ & $\emptyset$\\ \hline $E^c$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\aA[E]{\mathbf{x}}$ & $4$ & $4$ & $4$ & $3$ & $4$ & $6$ & $2$ & $0$\\\hline \end{tabular} \end{center} \end{table} \noindent Then $\Psi_{\mathbf{x}}=\{0,1,2,3\}$ and it holds that \begin{align*} \mu_{\bA}(\x,\alpha)=\mu(\{\x>\alpha\})&=\mu(\{1,2,3\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\{2,3\})\cdot\mathbf{1}_{[2,3)}(\alpha)+\mu(\{3\})\cdot\mathbf{1}_{[3,4)}(\alpha)\\ &=\mu(\Ai{1})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\Ai{2})\cdot\mathbf{1}_{[2,3)}(\alpha)+\mu(\Ai{3})\cdot\mathbf{1}_{[3,4)}(\alpha) \end{align*} for any $\alpha\in[0,\infty)$ and any monotone measure $\mu$. So, we have shown that there is a vector $\mathbf{x}$ and an FCA $\cA\neq\cA^\mathrm{max}$ such that $\mu_{\bA}(\x,\alpha)=\mu(\{\x>\alpha\})$ for any monotone measure $\mu$. Indeed, $6=\aA[\{2\}]{\mathbf{x}}>\aAi[\{2\}]{\mathbf{x}}{\mathrm{max}}=3$ (and $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E\in 2^{[3]}\setminus\{\{2\}\}$). \end{example} Aggregation functions bounded from below by the maximum are called \textit{disjunctive} in the literature, see~\cite{GrabischMarichalMesiarPap2009}. For an FCA $\cA$ nondecreasing w.r.t. sets we get an interesting consequence. \begin{lema}\label{lema_char} Let $\cA=\{\aA[E]{\cdot}:E\in 2^{[n]}\}$ be an FCA nondecreasing w.r.t. sets.
If for $\mathbf{x}\in[0,\infty)^{[n]}$ it holds that $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E=\Ai{k+1}^c$ with $k\in\Psi_{\mathbf{x}}$, and $\aA[E]{\mathbf{x}}\geq\aAi[E]{\mathbf{x}}{\mathrm{max}}$ otherwise, then $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E\in2^{[n]}$.\end{lema} \noindent {\bf Proof. \ \ } Let us consider an arbitrary set $E\in\mathcal{D}GSF$ and denote $\aAi[E]{\mathbf{x}}{\mathrm{max}}:=x_{s}$. Then according to Proposition~\ref{vlastnostipi} there exists $k_s\in\Psi_{\mathbf{x}}$ such that $x_s=x_{(k_s)}$. Then $E\subseteq\Ai{k_s+1}^c$. From the above and from Theorem~\ref{thm:characterization} we have $$x_{(k_s)}=\aAi[\Ai{k_s+1}^c]{\mathbf{x}}{\mathrm{max}}=\aA[\Ai{k_s+1}^c]{\mathbf{x}}\geq\aA[E]{\mathbf{x}}\geq\aAi[E]{\mathbf{x}}{\mathrm{max}}= x_{(k_s)}.$$ \qed \begin{theorem}\label{thm:characterization_mon} Let $\mathbf{x}\in[0,\infty)^{[n]}$, and let $\cA$ be an FCA nondecreasing w.r.t. sets. Then the following assertions are equivalent: \begin{enumerate}[i)] \item $\mathcal{D}GSF\supseteq\{\Ai{k+1}^c : k\in\Psi_{\mathbf{x}}\}$ and $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any set $E\in\mathcal{D}GSF$. \item For each $\mu\in\mathbf{M}$ it holds that $\mu_{\bA}(\x,\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{theorem} \noindent {\bf Proof. \ \ } The implication $\text{i)}\Rightarrow \text{ii)}$ follows from Proposition~\ref{LM=SLM_max} ii). The reverse implication follows from Theorem~\ref{thm:characterization} and Lemma~\ref{lema_char}. \qed \bigskip Let us return to Example~\ref{pr_vektor_y}. We have shown that for the input vector $\mathbf{x}=(2,3,4)$ with the $\cA$ given there, $\mu_{\bA}(\x,\alpha)=\mu(\{\x>\alpha\})$ holds for any $\mu$.
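The step function behind Example~\ref{pr_vektor_y} is easy to check numerically. The following sketch is ours and purely illustrative: the additive monotone measure and its weights are assumptions, not part of the example; it evaluates the standard survival function $\mu(\{\mathbf{x}>\alpha\})$ for $\mathbf{x}=(2,3,4)$ and confirms its breakpoints at $\alpha=2,3,4$.

```python
from itertools import combinations

def survival(mu, x, alpha):
    """Standard survival function: mu applied to the strict level set {x > alpha}."""
    level_set = frozenset(i for i, xi in enumerate(x, start=1) if xi > alpha)
    return mu[level_set]

# hypothetical additive monotone measure on 2^[3]; weights chosen for illustration
weights = {1: 0.2, 2: 0.3, 3: 0.5}
mu = {frozenset(E): sum(weights[i] for i in E)
      for r in range(4) for E in combinations([1, 2, 3], r)}

x = (2, 3, 4)
# constant on [0,2), [2,3), [3,4), and zero for alpha >= 4
assert survival(mu, x, 1.9) == mu[frozenset({1, 2, 3})]
assert survival(mu, x, 2.0) == mu[frozenset({2, 3})]
assert survival(mu, x, 3.0) == mu[frozenset({3})]
assert survival(mu, x, 4.0) == 0
```

Any monotone measure on $2^{[3]}$ can be substituted for the additive one above; the breakpoint structure depends only on $\mathbf{x}$.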
However, for another vector, say $\mathbf{y}=(2,5,4)$, the equality can fail: \begin{align*} \mu_{\bA}(\mathbf{y},\alpha)&=\mu(\{1,2,3\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\{2,3\})\cdot\mathbf{1}_{[2,4)}(\alpha),\\ \mu(\{\mathbf{y}>\alpha\})&=\mu(\{1,2,3\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\{2,3\})\cdot\mathbf{1}_{[2,4)}(\alpha)+\mu(\{2\})\cdot\mathbf{1}_{[4,5)}(\alpha), \end{align*} i.e., $\mu_{\bA}(\mathbf{y},\alpha)=\mu(\{\mathbf{y}>\alpha\})$ does not hold for every $\mu$. In the following we ask for which FCA $\cA$ the equality holds for any $\mu$ and any $\mathbf{x}$; in this way we solve Problem 3. \begin{theorem}\label{thm:characterization2} Let $\cA$ be an FCA. The following assertions are equivalent: \begin{enumerate}[i)] \item $\mathcal{A}=\{\aAi[E]{\cdot}{\max}:\,\, E\in2^{[n]}\}$. \item For each $\mu\in\mathbf{M}$ and each $\mathbf{x}\in[0,\infty)^{[n]}$ it holds that $\mu_{\bA}(\x,\alpha)=\mu(\{\x>\alpha\})$ for any $\alpha\in[0,\infty)$. \end{enumerate} \end{theorem} \noindent {\bf Proof. \ \ } The implication $\text{i)}\Rightarrow \text{ii)}$ is immediate. We prove $\text{ii)}\Rightarrow \text{i)}$. Since the equality holds for any $\mathbf{x}$, according to Theorem~\ref{thm:characterization} we get $$\mathcal{D}GSF=\bigcup_{\mathbf{x}\in[0,\infty)^{[n]}}\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}=2^{[n]} $$ with $\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}:=\{\Ai{k+1}^c: k\in\Psi_{\mathbf{x}}\}.$ Let $\mathbf{x}\in[0,\infty)^{[n]}$ be an arbitrary fixed vector. From Theorem~\ref{thm:characterization} we have $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E\in\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}$ and $\aA[E]{\mathbf{x}}\geq\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E\in2^{[n]}\setminus\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}$.
However, we show that $\aA[E]{\mathbf{x}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}$ for any $E\in2^{[n]}$. Let us consider an arbitrary fixed $E\in2^{[n]}\setminus\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}$ and the vector $$\widehat{\mathbf{x}}=\mathbf{x}\mathbf{1}_E+a\mathbf{1}_{E^c},\,\, a>\max_{i\in E}x_i.$$ The set $[n]$ belongs to $\mathcal{D}GSF^{\Psi_{\mathbf{x}}-\text{chain}}$ by the definition of $\Psi_{\mathbf{x}}$, therefore $\widehat{\mathbf{x}}\neq \mathbf{x}$. Moreover, there exists a permutation $(\cdot)$ such that $0=\widehat{x}_{(0)}\leq\widehat{x}_{(1)}\leq\widehat{x}_{(2)}\leq\dots\leq\widehat{x}_{(\widehat{k})}<\widehat{x}_{(\widehat{k}+1)} =\dots= \widehat{x}_{(n)}=a$ with $\widehat{k}=|E|$. Therefore $\widehat{k}\in\Psi_{\widehat{\mathbf{x}}}$, and $E=\{(1),\dots,(\widehat{k})\}=\Ai{\widehat{k}+1}^c\in\mathcal{D}GSF^{\Psi_{\widehat{\mathbf{x}}}-\text{chain}}$. Finally, from Theorem~\ref{thm:characterization}, and because of the property $\aA[E]{\mathbf{y}}=\aA[E]{\mathbf{y}\mathbf{1}_E}$ for any $\mathbf{y}\in[0,\infty)^{[n]}$, see~\cite{BoczekHalcinovaHutnikKaluszka2020}, we have: \begin{align*}\aA[E]{\mathbf{x}}&=\aA[E]{\mathbf{x}\mathbf{1}_E}=\aA[E]{\widehat{\mathbf{x}}\mathbf{1}_E}=\aA[E]{\widehat{\mathbf{x}}}=\aAi[E]{\widehat{\mathbf{x}}}{\mathrm{max}}=\aAi[E]{\widehat{\mathbf{x}}\mathbf{1}_E}{\mathrm{max}}\\&=\aAi[E]{\mathbf{x}\mathbf{1}_E}{\mathrm{max}}=\aAi[E]{\mathbf{x}}{\mathrm{max}}.\end{align*} \qed \section{Conclusion} In this paper we have solved three problems dealing with the equality between the standard survival function and the generalized survival function based on conditional aggregation operators, introduced originally in~\cite{BoczekHalcinovaHutnikKaluszka2020} (a generalization of concepts from~\cite{DoThiele2015} and~\cite{HalcinovaHutnikKiselakSupina2019}).
We have restricted ourselves to the discrete setting. The most interesting results are Corollary~\ref{LM_SLM_coincide}, Corollary~\ref{corollary_1}, Proposition~\ref{summar} and Theorem~\ref{nutna_postacujuca_2} (solutions of Problem 1), and Theorem~\ref{thm:characterization} and Theorem~\ref{thm:characterization_mon} (solutions of Problem 2). The results were derived from the well-known formula for the standard survival function, with a permutation $(\cdot)$ playing a crucial role. As the main result, we have determined the family of conditional aggregation operators with respect to which the novel survival function is identical to the standard survival function regardless of the monotone measure and the input vector, see Theorem~\ref{thm:characterization2}. In the future we expect to extend our results to integrals introduced with respect to novel survival functions, see~\cite[Definition 5.1]{BoczekHalcinovaHutnikKaluszka2020}. The relationship between the studied survival functions (in the sense of equalities or inequalities) also determines the relationship between the corresponding integrals (based on the standard and generalized survival functions). An interesting question for future work is: is $\cA^{\mathrm{sup}}$ also the only family of conditional aggregation operators that generates the standard survival function in the case of an arbitrary basic set $X$ instead of $[n]$, i.e., is it true that \begin{center} $\mu_{\bA}(f,\alpha)=\mu(\{f>\alpha\})$, $\alpha\in[0,\infty)$, for any $\mu$ and any $f$ if and only if $\cA=\cA^{\mathrm{sup}}$? \end{center} Up to now, no families other than $\cA^{\mathrm{sup}}$ are known to generate a generalized survival function indistinguishable from the standard survival function (for any $\mu$, $f$). We believe that the new results will be beneficial in applications, e.g. in the theory of decision making.
The equality between the survival functions of a given alternative $\mathbf{x}$ means that its overall score with respect to the Choquet integral and the $\cA$-Choquet integral is the same. Moreover, in the context of decision making the question of $(\mu,\cA)$-indistinguishability immediately arises, i.e., under which conditions on $\mu$ and $\cA$ it holds that $\mu_{\bA}(\x,\alpha)=\mu_{\bA}(\mathbf{y},\alpha)$ for $\mathbf{x},\mathbf{y}\in[0,\infty)^{[n]}$. The alternatives $\mathbf{x},\mathbf{y}$ are then $\cA$-Choquet integral indistinguishable, i.e., they achieve the same overall score. \section*{Appendix} In this appendix we summarize all sufficient and necessary conditions for equality or inequality between the survival functions, see Table~\ref{tab:summar}. \bigskip \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \centering \small \begin{tabular}{|c|c|c|l|l|} \hline $(\widetilde{\mathrm{C}}1)$ and $(\widetilde{\mathrm{C}}2)$ & $\Rightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Rem.~\ref{remark_i} \\ \hline $(\widetilde{\mathrm{C}}2)$ and $(\widetilde{\mathrm{C}}3)$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Rem.~\ref{remark_i}\\ \hline $\mathrm{(C1)}$ and $\mathrm{(C2)}$ & $\Rightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Cor.~\ref{LM_SLM_coincide} \\ \hline $\mathrm{(C2)}$ and $\mathrm{(C3)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Cor.~\ref{corollary_1}\\ \hline $\mathrm{(C2)}$ and $\mathrm{(C4)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Cor.~\ref{corollary_1}\\ \hline \multirow{2}{*}{ \shortstack[c]{ $\mathrm{(C1)}$ and $\mathrm{(C2)}$ } } & \multirow{2}{*}{ \shortstack[c]{$\Rightarrow$}} & \multirow{2}{*}{
\shortstack[c]{$\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ }}& & \multirow{2}{*}{ \shortstack[l]{\hspace{-3pt}Cor.~\ref{nutna_postacujuca}}}\\ & & & \multirow{-2}{*}{ \shortstack[l]{$\mu\colon2^{[n]}\to[0,\infty)$ is strictly\\ monotone on $\{\Ai{k+1}: k\in\Psi_{\mathbf{x}}\}$}} & \\ \hline $\mathrm{(C2^*)}$ and $\mathrm{(C3^*)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Prop.~\ref{summar}\\ \hline $\mathrm{(C2^*)}$ and $\mathrm{(C4^*)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Prop.~\ref{summar}\\ \hline $\mathrm{(C1^*)}$ and $\mathrm{(C2^*)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)= \mu(\{\x>\alpha\})$ & & Th.~\ref{nutna_postacujuca_2}\\ \hline $\mathrm{(C1)}$ & $\Rightarrow$ & $\mu_{\bA}(\x,\alpha)\leq \mu(\{\x>\alpha\})$ & & Prop.~\ref{LM<SLM}\\ \hline $\mathrm{(C1^*)}$ & $\Rightarrow$ & $\mu_{\bA}(\x,\alpha)\leq \mu(\{\x>\alpha\})$ & & Prop.~\ref{summar}\\ \hline $\mathrm{(C3)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)\leq \mu(\{\x>\alpha\})$ & & Prop.~\ref{LM>SLM}\\ \hline $\mathrm{(C3^*)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)\leq \mu(\{\x>\alpha\})$ & & Prop.~\ref{summar}\\ \hline $\mathrm{(C2)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)\geq \mu(\{\x>\alpha\})$ & & Prop.~\ref{LM<SLM}\\ \hline $\mathrm{(C2^*)}$ & $\Leftrightarrow$ & $\mu_{\bA}(\x,\alpha)\geq \mu(\{\x>\alpha\})$ & & Prop.~\ref{summar}\\ \hline \end{tabular} \caption{Sufficient and necessary conditions for the pointwise comparison of survival functions} \label{tab:summar} \end{table} From Table~\ref{tab:summar} the following relationships between the conditions (C1), (C2), (C3), (C4) and their $^*$ versions hold.
\begin{corollary}\label{corollary_2} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be an FCA. Then it holds that: \begin{align*} \big(\mathrm{(C1)} \wedge \mathrm{(C2)}\big) & \Rightarrow \big(\mathrm{(C1^*)} \wedge \mathrm{(C2^*)}\big) \Leftrightarrow \big(\mathrm{(C2)} \wedge \mathrm{(C3)}\big) \Leftrightarrow \big(\mathrm{(C2)} \wedge \mathrm{(C4)}\big) \Leftrightarrow \big((\widetilde{\mathrm{C}}2) \wedge (\widetilde{\mathrm{C}}3)\big) \\&\Leftrightarrow \big(\mathrm{(C2^*)} \wedge \mathrm{(C3^*)}\big) \Leftrightarrow \big(\mathrm{(C2^*)} \wedge \mathrm{(C4^*)}\big) \Leftrightarrow \big(\mathrm{(C1^*)} \wedge \mathrm{(C2)}\big). \end{align*} \end{corollary} \begin{corollary}\label{corollary_3} Let $\mathbf{x}\in[0,\infty)^{[n]}$, $\mu\in\bM$, and let $\cA$ be an FCA. If $\mathrm{(C2^*)}$ holds, then $$\mathrm{(C1^*)}\Leftrightarrow\mathrm{(C3^*)}\Leftrightarrow\mathrm{(C4^*)}.$$ \end{corollary} \section*{References} \end{document}
\begin{document} \bstctlcite{IEEEexample:BSTcontrol} \title{Optimal Selection of Small-Scale Hybrid PV-Battery Systems to Maximize Economic Benefit Based on Temporal Load Data} \author{\IEEEauthorblockN{Jeremy Every, Li Li} \thanks{This research is supported by an Australian Government Research Training Program Scholarship} \IEEEauthorblockA{Faculty of Engineering and Information Technology\\ University of Technology Sydney\\ Ultimo 2007, Australia \\ Email: [email protected]} \and \IEEEauthorblockN{David G. Dorrell} \IEEEauthorblockA{College of Agriculture, Engineering and Science\\ University of KwaZulu-Natal\\Durban 4041, South Africa\\ Email: [email protected]}} \maketitle \begin{abstract} Continued advances in PV and battery energy storage technologies have made hybrid PV-battery systems an attractive prospect for residential energy consumers. However, the process of selecting an appropriate system is complicated by the relatively high cost of batteries, a multitude of available retail electricity plans and the removal of PV installation incentive schemes. In this paper, an optimization strategy based on an individual customer's temporal load profile is established to maximize electricity cost savings through optimal selection of PV-battery system size, orientation and retail electricity plan. Quantum-behaved particle swarm optimization is applied as the underlying algorithm given its suitability for problems involving hybrid energy system specification. The optimization strategy is tested using real-world residential consumption data, current system pricing and available retail electricity plans to establish the efficacy of a hybrid PV-battery solution. \end{abstract} \begin{IEEEkeywords} Cost benefit analysis, Photovoltaic systems, Energy storage, Batteries, Particle swarm optimization.
\end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} Energy storage systems for residential applications, particularly lithium-ion based battery systems, have undergone rapid development in the past five years. A range of stationary energy storage devices from manufacturers including Tesla Motors, Enphase, Mercedes-Benz, Samsung and LG have recently been introduced to the market. In Australia, average annual electricity prices have risen 4.5\% over the last 10 years \cite{ABS16}, 2.5\% higher than the inflation rate target \cite{RBA16}. Consequently, the application of battery systems to complement existing rooftop PV systems, or the installation of new hybrid PV-battery solutions, is of particular interest to energy consumers aiming to reduce their net electricity bills. The performance of PV-battery systems in reducing energy costs has been investigated in \cite{Ren16,Parra16,Mulder13,Beck16}, where prescribed PV and battery sizes were tested under various tariffs \cite{Ren16,Parra16}, incentive schemes \cite{Mulder13} and temporal resolutions of energy consumption data \cite{Beck16}. However, existing research has been primarily structured around typical PV-battery systems and consumption profiles, rather than investigating strategies that enable individual customers to optimize a system based on their own circumstances. In this paper an optimization methodology is developed with the aim of assisting potential PV-battery investors in determining the economic efficacy of a hybrid PV-battery system. Models for solar insolation, PV energy output, battery operation, installation cost and maintenance cost are presented to form key components of the underlying optimization problem. In order to maximize the net benefit of a hybrid system, the PV power rating, PV orientation, retail electricity plan, retail battery product and battery operating mode are optimally selected from currently available options.
The objective function is formulated as a net present value (NPV) evaluation of the electricity cost savings that can be achieved through the introduction of a hybrid PV-battery system compared to a known lowest cost retail electricity plan. A form of particle swarm optimization (PSO), known as quantum-behaved PSO (QPSO), is utilized due to its fast convergence speed, natural handling of optimization parameter constraints and simplicity of implementation. The algorithm is tested in an Australian context for three residences in the state of New South Wales using one year of hourly energy data and five years of solar insolation data from the Australian Bureau of Meteorology. Time-of-use (TOU) retail electricity plans from three large Australian retailers as well as current PV and retail battery pricing are applied in order to establish the feasibility of installing a PV-battery system under prevailing market conditions. \section{Model Definition} \subsection{Solar Insolation Model} \label{ss:solar} The total hourly insolation $I_{T}$ on a tilted plane includes contributions from beam insolation, diffuse insolation and a ground reflected component. A multitude of transposition models have been developed, enabling the estimation of insolation on a tilted plane based on horizontal insolation data. Among the available models, the Hay-Davies-Klucher-Reindl (HDKR) model, represented by (\ref{eq:tiltinsol}), has been identified as one of the more accurate \cite{Noorian08,Duffie13}.
\begin{IEEEeqnarray}{Rl} I_{T}=&\left(I_{b}+A_{i}I_{d}\right)R_{b}\nonumber\\ &+\:I_d(1-A_{i})\left(\frac{1+\cos\beta}{2}\right)\left[1+f\sin^{3}\left(\frac{\beta}{2}\right)\right]\nonumber\\ &+\:I\rho_{g}\left(\frac{1-\cos\beta}{2}\right)\label{eq:tiltinsol} \end{IEEEeqnarray} In (\ref{eq:tiltinsol}), $I_{b}$ and $I_{d}$ are the hourly beam and diffuse insolation on a horizontal plane respectively, $A_i=I_b/I_o$, $f=\sqrt{I_b/I}$, $I$ is the global horizontal radiation, $I_o$ is the hourly extraterrestrial insolation incident on a horizontal plane projected from the Earth's surface, $\rho_g$ is the ground reflectance and $R_b$ is the ratio of tilted to horizontal beam radiation. Importantly, $R_b$ is a function of panel tilt $\beta$ and panel azimuth $\gamma$, equations for which are well established in the literature and can be found in \cite{Duffie13}. Consequently, tilt and azimuth must be optimized in order to maximize the incident insolation $I_T$ on a PV array. Finally, it should be noted that $\gamma = 0^\circ$ implies a north-facing surface in this paper. \subsection{Photovoltaic Model} \label{ss:photo} According to Duffie and Beckman \cite{Duffie13}, the output energy of a PV system is defined as: \begin{equation}\label{eq:pvenergy} E_{pv}=A_{c}ZI_{T}\eta_{mpp}\eta_e\zeta_{pv} \end{equation} where $A_c$ is the PV panel area, $Z$ is the number of panels, $\eta_{mpp}$ is the PV panel operating efficiency, $\eta_e$ is the efficiency of the associated balance of plant and $\zeta_{pv}$ is the annual degradation factor of the PV panels. Although not explicitly defined in this paper, $\zeta_{pv}$ is assumed to be linear, in line with the manufacturer's warranty.
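The HDKR transposition (\ref{eq:tiltinsol}) translates directly into code. The sketch below is illustrative only; $R_b$ is supplied by the caller, since its geometric computation from $\beta$ and $\gamma$ follows \cite{Duffie13} and is not reproduced here, and the default ground reflectance is an assumed value.

```python
import math

def hdkr_tilted_insolation(i_glob, i_b, i_d, i_o, r_b, beta_deg, rho_g=0.2):
    """HDKR transposition of eq. (1): hourly insolation on a tilted plane.

    i_glob, i_b, i_d: global, beam and diffuse horizontal insolation;
    i_o: extraterrestrial horizontal insolation; r_b: tilted-to-horizontal
    beam ratio (computed from tilt/azimuth geometry, see Duffie & Beckman);
    rho_g: ground reflectance (0.2 is an assumed default).
    """
    beta = math.radians(beta_deg)
    a_i = i_b / i_o                # anisotropy index A_i = I_b / I_o
    f = math.sqrt(i_b / i_glob)    # horizon-brightening modulator
    beam_circumsolar = (i_b + a_i * i_d) * r_b
    diffuse = (i_d * (1 - a_i) * (1 + math.cos(beta)) / 2
               * (1 + f * math.sin(beta / 2) ** 3))
    ground = i_glob * rho_g * (1 - math.cos(beta)) / 2
    return beam_circumsolar + diffuse + ground

# sanity check: at zero tilt with r_b = 1 the ground term vanishes and the
# model must return the global horizontal value I = I_b + I_d
assert abs(hdkr_tilted_insolation(500, 300, 200, 1000, 1.0, 0.0) - 500) < 1e-9
```

The zero-tilt identity is a useful regression test when implementing any transposition model, since every anisotropic correction must cancel on the horizontal plane.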
The operating efficiency $\eta_{mpp}$ is defined as: \begin{equation}\label{eq:opeff} \eta_{mpp}=\eta_{mpp,STC}+\mu_{mpp}(T_{c}-T_{a}) \end{equation} where $T_a$ and $T_c$ are the ambient and cell temperatures respectively, $\eta_{mpp,STC}$ is the panel efficiency at standard test conditions and $\mu_{mpp}$ (\%/$^{\circ}$C) is the power temperature coefficient. Both $\eta_{mpp,STC}$ and $\mu_{mpp}$ are defined in manufacturer data sheets. The cell temperature $T_c$ is defined as: \begin{equation}\label{eq:celltemp} T_{c}=T_{a}+(T_{NOCT}-20)\cdot\frac{G_{T}}{800}\cdot(1-\eta_{mpp,STC}) \end{equation} where $G_{T}$ is the incident irradiance (assumed to be uniform over an hour period and therefore equal to $I_{T}$) and $T_{NOCT}$ is the cell temperature under nominal operating cell temperature (NOCT) conditions as detailed in manufacturer data sheets. In an Australian context, PV system costs are subsidized through small-scale technology certificates (STCs). The quantity of STCs generated after installation is based on the power rating $P_{pv,rat}$ and a location multiplier $M_{loc}$. In this paper $M_{loc}$ is assumed to be 20.73 (for the east coast of New South Wales) while STCs are assumed to be worth $C_{STC}$ = \$34 \cite{Jacobs16}. Consequently, in Australia the net system cost $S_{pv}$ is defined as: \begin{equation}\label{eq:pvsystemcost} S_{pv}=U_{pv}P_{pv,rat}-M_{loc}P_{pv,rat}C_{STC} \end{equation} where $U_{pv}$ is the average price per watt peak. Based on data provided in \cite{Jacobs16}, March 2016 $U_{pv}$ prices were (in AUD) \$3.20, \$3.00, \$2.55, \$2.35 and \$2.20 for 1 kW, 1.5 kW, 3 kW, 5~kW and 10 kW rated systems respectively. When solving the optimization problem presented in Section~\ref{s:optim}, the price corresponding to the closest system size is used. The PV modules considered in this research were modelled based on 280 W Trina Solar TSM-PC05A polycrystalline modules.
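The net system cost of (\ref{eq:pvsystemcost}) with the quoted March 2016 price bands can be sketched as follows. This is our illustration only; it reads $U_{pv}$ in AUD per watt-peak and $M_{loc}$ as STCs per kW of rated power, consistent with the figures above.

```python
def pv_net_cost(p_rated_kw, m_loc=20.73, c_stc=34.0):
    """Net PV system cost S_pv of eq. (5) in AUD, with the STC subsidy deducted."""
    # March 2016 average prices (AUD per watt-peak) by rated system size (kW)
    u_pv_table = {1.0: 3.20, 1.5: 3.00, 3.0: 2.55, 5.0: 2.35, 10.0: 2.20}
    # use the price band of the closest listed system size
    u_pv = u_pv_table[min(u_pv_table, key=lambda s: abs(s - p_rated_kw))]
    return u_pv * p_rated_kw * 1000 - m_loc * p_rated_kw * c_stc

# a 5 kW system: gross 5000 Wp at $2.35/Wp, less 5 * 20.73 STCs at $34 each
assert abs(pv_net_cost(5) - (11750.0 - 3524.1)) < 1e-6
```

The closest-size lookup mirrors the paper's statement that the price corresponding to the closest listed system size is used during optimization.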
\subsection{Battery Model} \label{ss:battery} The battery models defined in this section are structured based on manufacturer warranties and guarantees to establish the economic benefit the owner can expect over the lifetime of the system. The maximum capacity of a battery decreases over its lifetime due to a number of factors, chief of which is the number of charge/discharge cycles undergone. The degradation rate $\zeta_{batt}$ (kWh/cycle) is defined as: \begin{IEEEeqnarray}{Rl} \zeta_{batt}=(C_{\max 0}-C_{EOL})/Y_{EOL}\label{eq:battdegrad} \end{IEEEeqnarray} where $C_{\max 0}$, $C_{EOL}$ and $Y_{EOL}$ are the initial maximum capacity, end-of-life maximum capacity and cycle life respectively as defined in manufacturer data sheets. The maximum capacity $C_{\max,qdh}$ available at the start of each hour $h$, in day $d$ and billing period $q$, is assumed to be a linear function of the number of operational cycles $Y_{qd(h-1)}$ undergone in the previous hour such that: \begin{IEEEeqnarray}{Rl} C_{\max,qdh}=C_{\max,qd(h-1)}-Y_{qd(h-1)}\zeta_{batt}\label{eq:maxcap} \end{IEEEeqnarray} It should be noted that $Y_{qdh}$ generally represents only a partial cycle for hourly intervals, i.e., only a fraction of the energy throughput of a full discharge/charge cycle. $Y_{qdh}$ is defined as follows: \begin{IEEEeqnarray}{Rl} Y_{qdh}=\frac{E_{bpv,qdh}+E_{bg,qdh}+E_{bd,qdh}}{2DC_{\max,qdh}}\label{eq:cycles} \end{IEEEeqnarray} where $E_{bd,qdh}$, $E_{bpv,qdh}$ and $E_{bg,qdh}$ are the discharge, PV charge and grid charge energy flows respectively and $D$ is the maximum depth of discharge. The available capacity at the start of each hour is a function of the capacity at the start of the previous hour and the total charge/discharge energy that has flowed to/from the battery cells in the previous hour.
The available capacity $C_{qdh}$ is therefore defined as: \begin{IEEEeqnarray}{Rl} C_{qdh}=&C_{qd(h-1)}-E_{bd,qd(h-1)}+E_{bpv,qd(h-1)}\nonumber\\ &{}+E_{bg,qd(h-1)}\label{eq:capacity} \end{IEEEeqnarray} The charge and discharge energy flow terms are defined in (\ref{eq:pvcharge})\textendash(\ref{eq:discharge}). \begin{IEEEeqnarray}{Rl} E_{bpv,qdh}=\max\Bigl\lbrace \min\bigl[&C_{\max,qdh}-C_{qdh},\nonumber\\ &{}(E_{pv,qdh}-E_{load,qdh})(1-F),\nonumber\\ &{}R_{\max}(1-F)\bigr],0\Bigr\rbrace\label{eq:pvcharge} \end{IEEEeqnarray} \begin{IEEEeqnarray}{Rl} E_{bg,qdh}=\max\Bigl\lbrace \min\bigl[&C_{\max,qdh}-C_{qdh},\nonumber\\ &{}R_{\max}(1-F)\bigr](M_3+M_4)I_{op,qdh}\nonumber\\ &{}-E_{bpv,qdh},0\Bigr\rbrace\label{eq:gridcharge} \end{IEEEeqnarray} \begin{IEEEeqnarray}{Rl} E_{bd,qdh}=\max\Bigl\lbrace \min\bigl[&C_{qdh}-C_{\max,qdh}(1-D),\nonumber\\ &{}(E_{load,qdh}-E_{pv,qdh})/(1-F),R_{\max}\bigr]\nonumber\\ &{}\times\bigl[(M_2+M_4)I_{sh,qdh}\nonumber\\ &{}\quad\;\;+I_{pk,qdh}\bigr],0\Bigr\rbrace\label{eq:discharge} \end{IEEEeqnarray} In (\ref{eq:pvcharge})\textendash(\ref{eq:discharge}), $E_{pv,qdh}$, $E_{load,qdh}$ and $R_{\max}$ are the PV generated energy as defined in (\ref{eq:pvenergy}), load energy demand and rated continuous charge/discharge rate respectively. $I_{op,qdh}$, $I_{sh,qdh}$, $I_{pk,qdh}$ and terms of the form $M_x$ are battery operation control variables defined later. It should be noted that the charge energy flow terms $E_{bpv,qdh}$ and $E_{bg,qdh}$ are considered to be the net additional charge to a battery after losses while the discharge energy term $E_{bd,qdh}$ is the total energy discharged from the battery (i.e. usable energy plus losses). The losses have been accounted for through the inclusion of a loss factor $F=(1-\eta_{batt})/2$ where $\eta_{batt}$ is the battery round-trip efficiency. 
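The capacity fade of (\ref{eq:battdegrad})\textendash(\ref{eq:cycles}) and the clamped energy flows of (\ref{eq:pvcharge}) and (\ref{eq:discharge}) can be sketched as follows. This is our illustration with assumed figures (not datasheet values); grid charging, (\ref{eq:gridcharge}), follows the same clamping pattern.

```python
def degradation_rate(c_max0, c_eol, y_eol):
    """Capacity fade per full cycle, eq. (6)."""
    return (c_max0 - c_eol) / y_eol

def partial_cycles(e_bpv, e_bg, e_bd, depth, c_max):
    """Operational (partial) cycles over one hour, eq. (8)."""
    return (e_bpv + e_bg + e_bd) / (2 * depth * c_max)

def pv_charge(c_max, c, e_pv, e_load, r_max, f):
    """E_bpv of eq. (9): PV surplus stored, net of the loss factor F."""
    return max(min(c_max - c, (e_pv - e_load) * (1 - f), r_max * (1 - f)), 0.0)

def discharge(c_max, c, e_pv, e_load, r_max, f, depth, mode, in_shoulder, in_peak):
    """E_bd of eq. (11): gross discharge, gated by tariff period and mode."""
    gate = int(in_peak) + (int(in_shoulder) if mode in (2, 4) else 0)
    return max(min(c - c_max * (1 - depth),
                   (e_load - e_pv) / (1 - f),
                   r_max) * gate, 0.0)

# assumed illustrative figures: 13.5 kWh new, 9.45 kWh at end of life, 3200 cycles
zeta = degradation_rate(13.5, 9.45, 3200)
# a 2.7 kWh discharge from a 13.5 kWh battery at full DoD is 0.1 of a cycle
assert abs(partial_cycles(0.0, 0.0, 2.7, 1.0, 13.5) - 0.1) < 1e-12
# peak hour, mode 1: a 3 kWh deficit is served from the battery plus losses
assert abs(discharge(13.5, 10.0, 0.0, 3.0, 5.0, 0.04, 1.0, 1, False, True)
           - 3.0 / 0.96) < 1e-9
# outside peak, mode 1 never discharges
assert discharge(13.5, 10.0, 0.0, 3.0, 5.0, 0.04, 1.0, 1, True, False) == 0.0
```

Note how the outer `max(..., 0)` and inner `min(...)` reproduce the clamping in the equations: flows never exceed the free capacity, the PV surplus or deficit, or the rated continuous rate.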
The total energy loss due to battery charging and discharging is: \begin{equation}\label{eq:bloss} E_{bloss,qdh}=E_{bpvloss,qdh}+E_{bgloss,qdh}+E_{bdloss,qdh} \end{equation} where $E_{bpvloss,qdh}$, $E_{bgloss,qdh}$ and $E_{bdloss,qdh}$ are the losses during PV charging, grid charging and discharging respectively defined in (\ref{eq:pvloss})\textendash(\ref{eq:dischargeloss}). \begin{IEEEeqnarray}{Rl} E_{bpvloss,qdh}=\max\Bigl\lbrace\min\bigl[ & (C_{\max,qdh}-C_{qdh})/(1-F),\nonumber\\ &{}E_{pv,qdh}-E_{load,qdh},\nonumber\\ &{}R_{\max}\bigr],0\Bigr\rbrace F\label{eq:pvloss} \end{IEEEeqnarray} \begin{IEEEeqnarray}{Rl} E_{bgloss,qdh}=\max\Bigl\lbrace\min\bigl[&(C_{\max,qdh}-C_{qdh})/(1-F),R_{\max}\bigr]\nonumber\\ &{}\times (M_3+M_4)I_{op,qdh}\nonumber\\ &{}-E_{bpv,qdh},0\Bigr\rbrace F\label{eq:gridloss} \end{IEEEeqnarray} \begin{IEEEeqnarray}{Rl} E_{bdloss,qdh}=\max\Bigl\lbrace \min\bigl[&E_{load,qdh}-E_{pv,qdh},R_{\max},\nonumber\\ &{}C_{qdh}-C_{\max,qdh}(1-D)\bigr]\nonumber\\ &{}\times\bigl[(M_2+M_4)I_{sh,qdh}\nonumber\\ &{}\quad\;\;+I_{pk,qdh}\bigr],0\Bigr\rbrace\ F\label{eq:dischargeloss} \end{IEEEeqnarray} In addition to increasing the self-consumption ratio of PV generated energy (by effectively translating PV generated energy to non-generation periods), an energy storage system can also be used to perform energy arbitrage by charging during low cost off-peak hours and discharging during peak periods. In this paper, a review of various battery operating modes is undertaken to determine the most economically efficient mode for each residence assessed. The operating modes considered are defined as follows: \begin{itemize}[\IEEEsetlabelwidth{Mode 4:}] \item[Mode 1:] PV generation shifting. Discharge in peak only. \item[Mode 2:] PV generation shifting. Discharge during shoulder and peak periods. \item[Mode 3:] Energy arbitrage and PV generation shifting. Discharge in peak only. \item[Mode 4:] Energy arbitrage and PV generation shifting. 
Discharge during shoulder and peak periods. \end{itemize} As previously indicated, (\ref{eq:gridcharge}), (\ref{eq:discharge}), (\ref{eq:gridloss}) and (\ref{eq:dischargeloss}) are controlled by the operation mode variables $M_1$, $M_2$, $M_3$ and $M_4$ where: \begin{equation}\label{eq:mode} M_x=\begin{cases} 1 & \text{if in Mode}\;x\\ 0 & \text{otherwise} \end{cases} \end{equation} The variables $I_{op,qdh}$, $I_{sh,qdh}$ and $I_{pk,qdh}$ control battery charge and discharge based on the tariff period within which a particular hour lies and take the form: \begin{equation}\label{eq:offpeak} I_{op,qdh}=\begin{cases} 1 & \text{if}\;\;h\in\lbrace\text{offpeak hours}\rbrace\\ 0 & \text{otherwise} \end{cases} \end{equation} with similar equations for $I_{sh,qdh}$ and $I_{pk,qdh}$ for shoulder and peak hours respectively. The final component of the battery model is the battery cost, defined simply as: \begin{equation}\label{eq:battsystemcost} S_{b}=U_{b}X \end{equation} where $U_b$ is the price per battery and $X$ is the number of battery units installed. Two battery systems were considered in this research \textendash\ the Tesla Motors 13.5 kWh 5 kW Powerwall 2 and the more modular 1.2 kWh 260 W Enphase AC Battery. Values for the battery model parameters defined in this section including $D$, $Y_{life}$, $Y_{EOL}$, $C_{EOL}$ and $\eta_{batt}$ were based on the manufacturer datasheets \cite{TeslaPWDC16} and \cite{Enphase16} for the Tesla and Enphase systems respectively. Based on pricing provided by Tesla, the fully installed cost of the Powerwall 2 is AU\$10,000 \cite{TeslaPW16}, while the cost of the Enphase battery is approximately AU\$2,000 \cite{SQ16}. \subsection{Maintenance Model} \label{ss:maitenance} During the lifetime of a PV-battery system, periodic maintenance as well as battery and inverter replacements are required.
In this paper, the lifespans of PV modules and inverters/batteries are assumed to be 20 years and 10 years respectively. Consequently, with $t$ billing periods per year, inverters/batteries will require replacement after $10t$ billing periods. Furthermore, periodic maintenance is assumed to occur every $5t$ billing periods. The system maintenance costs are therefore defined as: \begin{equation}\label{eq:maintcost} W_q=\begin{cases} 200\;\;\text{if}\;\frac{q-1}{5t}\in\mathbb{Z}^+\text{, }\frac{q-1}{10t}\notin\mathbb{Z}^+\\ 400+\kappa_{inv}U_{inv}S_{pv}+\kappa_bS_{b}\;\;\text{if}\;\frac{q-1}{10t}\in\mathbb{Z}^+\\ 0\;\;\text{otherwise} \end{cases} \end{equation} where $U_{inv}$ is the inverter replacement cost (\$/W$_\text{ac}$) and $\kappa_{inv}$ and $\kappa_b$ are cost reduction factors for the inverter and batteries. Given a current average per unit inverter cost of US\$0.29/W$_\text{ac}$ \cite{NREL15}, the per unit PV inverter costs are assumed to be $U_{inv}=\text{AU}\$0.41$/W$_\text{ac}$ (assuming an AUD/USD exchange rate of 0.7). The costs of inverters and batteries are forecast to reduce significantly over the next 10 years, with reductions of 31\% and 53\% respectively between 2015 and 2025 \cite{CSIRO15}. Consequently, the cost reduction factors in (\ref{eq:maintcost}) are assumed to be $\kappa_{inv}=0.69$ and $\kappa_b=0.47$. \section{Optimization Problem} \label{s:optim} The objective of this research is to maximize the electricity cost savings achieved through optimal selection of a hybrid PV-battery system based on high-resolution smart meter load data and prevailing economic and PV-battery market conditions. Cost savings are quantified through an NPV analysis performed on the difference in electricity costs between a known lowest-cost retail electricity plan and a hybrid PV-battery system combined with other currently available retail electricity plans.
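The piecewise maintenance schedule of (\ref{eq:maintcost}) can be sketched as follows. This is our illustration; the replacement term is taken verbatim from the equation, and quarterly billing ($t=4$) plus the cost figures are assumptions for the example.

```python
def maintenance_cost(q, t, s_pv, s_b, u_inv=0.41, kappa_inv=0.69, kappa_b=0.47):
    """W_q of eq. (17): minor service every 5t periods, replacements every 10t.

    The inverter replacement term multiplies S_pv exactly as written in eq. (17).
    """
    elapsed = q - 1
    if elapsed > 0 and elapsed % (10 * t) == 0:
        return 400 + kappa_inv * u_inv * s_pv + kappa_b * s_b
    if elapsed > 0 and elapsed % (5 * t) == 0:
        return 200
    return 0.0

t = 4  # quarterly billing periods per year (assumed)
assert maintenance_cost(1, t, 8000, 10000) == 0.0    # no cost in the first period
assert maintenance_cost(21, t, 8000, 10000) == 200   # year-5 service visit
assert maintenance_cost(41, t, 8000, 10000) > 400    # year-10 replacement event
```

Checking the first replacement event against a hand calculation (period $q=41$, i.e. $q-1=10t$) is a quick way to confirm the two modular conditions do not overlap.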
As previously indicated in Section~\ref{ss:battery} and \ref{ss:maitenance}, hourly evaluations of the energy flows are conducted for each hour $h$ in day $d$ and billing period $q$. Maximizing the net benefit over all the billing periods $Q$ in the lifetime of the system is the objective of the optimization problem as defined below. \subsection{Problem Definition} \label{ss:probdef} \vspace*{+0.5\baselineskip} \noindent\emph{Given:} \begin{enumerate} \item Latitude and longitude of the location \item Hourly load and insolation profile \item A real annual discount rate of 3.92\% (6\% nominal rate and 2\% inflation) \item Real annual electricity price growth of 2\% \item PV system balance of plant efficiency (90\%) \item System lifespan (20 years) \end{enumerate} \vspace*{+0.5\baselineskip} \noindent\emph{Find:} Tilt angle $\beta$, azimuth angle $\gamma$, number of PV panels $Z$ and number of batteries $X$ \vspace*{+0.5\baselineskip} \noindent\emph{Objective:} \begin{IEEEeqnarray}{Rl} \max_{\beta,\gamma,Z,X}NPV=&\sum_{q=1}^{Q}\frac{\left(C_{base,q}-C_{pvbatt,q}\right)\left( 1+r_{e}\right)^q}{\left(1+r_{d}\right)^{q}}\nonumber\\ &-\:\sum_{q=1}^{Q}\frac{W_{q}}{\left(1+r_{d}\right)^{q}}-\bigl( S_{pv}+S_{b}\bigr)\label{eq:objfunc} \end{IEEEeqnarray} \noindent\emph{Subject to:} \begin{IEEEeqnarray}{rCll} 0 \leq & \beta & \leq 180 &\quad\text{for}\;\beta\in\mathbb{R}\IEEEyesnumber\IEEEyessubnumber\label{eq:tiltcons}\\ -180 < & \gamma & \leq 180 &\quad\text{for}\;\gamma\in\mathbb{R}\IEEEyessubnumber\label{eq:azicons}\\ 0 \leq & Z & \leq Z_{\max} &\quad\text{for}\;Z\in\mathbb{Z}^{+}\IEEEyessubnumber\label{eq:panelcons}\\ 0 \leq & X & \leq X_{\max} &\quad\text{for}\;X\in\mathbb{Z}^{+}\IEEEyessubnumber\label{eq:battcons} \end{IEEEeqnarray} In (\ref{eq:objfunc}), $C_{pvbatt,q}$ and $C_{base,q}$ are the cost of electricity with and without a PV-battery system within the billing period $q$. 
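The discounting structure of (\ref{eq:objfunc}) can be checked with a short sketch that converts the annual rates listed above to effective per-billing-period rates and evaluates the NPV sum. The savings and maintenance streams below are illustrative placeholders, not the paper's data; function names are ours.

```python
# Convert the annual discount and escalation rates from the problem
# definition to effective per-quarter rates, then evaluate the NPV
# structure of (eq:objfunc) on a made-up savings/maintenance stream.

def effective_rate(annual_rate, periods_per_year=4):
    """Effective per-period rate equivalent to a compounded annual rate."""
    return (1 + annual_rate) ** (1 / periods_per_year) - 1

r_d = effective_rate(0.0392)  # real discount rate, per quarter
r_e = effective_rate(0.02)    # electricity price escalation, per quarter

def npv(savings, maintenance, capex, r_d, r_e):
    """NPV of escalated per-period savings minus discounted maintenance
    and the up-front capital cost, mirroring the objective function."""
    total = -capex
    for q, (s, w) in enumerate(zip(savings, maintenance), start=1):
        total += (s * (1 + r_e) ** q - w) / (1 + r_d) ** q
    return total

print(round(100 * r_d, 2), round(100 * r_e, 2))  # per-quarter rates in %
```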
As quarterly billing is assumed in this paper, the real discount rate of 3.92\% and the real annual electricity price growth of 2\% are adjusted to the quarterly effective rates $r_{d}=0.97\%$ and $r_{e}=0.50\%$ respectively. The terms $C_{base,q}$ and $C_{pvbatt,q}$ are defined as: \begin{equation}\label{eq:basecost} C_{base,q}=\sum_{d=1}^{D}\left( \sum_{h=1}^{24}T_{grid0,qdh}E_{load,qdh}+T_{sc0,qd}\right) \end{equation} \begin{IEEEeqnarray}{Rl} C_{pvbatt,q}=\sum_{d=1}^{D}\Bigg\lbrace\sum_{h=1}^{24}&\Big[T_{grid,qdh}\max\left(0,E_{bal,qdh}\right)\nonumber\\ &{}-T_{feed,qdh}\max\left(0,-E_{bal,qdh}\right)\Big]\nonumber\\ &{}+T_{sc,qd}\Bigg\rbrace\label{eq:pvbattcost} \end{IEEEeqnarray} where $T_{grid0,qdh}$ and $T_{grid,qdh}$ are the grid imported electricity tariffs of the base plan and tested plan respectively for the $h^{th}$ hour of day $d$ with $D$ days in the billing period, $T_{sc0,qd}$ and $T_{sc,qd}$ are the daily electricity supply charges for the base plan and tested plan respectively, $T_{feed,qdh}$ is the PV feed-in tariff and $E_{bal,qdh}$ is the net energy flow balance of the terms defined in Section~\ref{ss:battery}, expressed as: \begin{IEEEeqnarray}{Rl} E_{bal,qdh}=&E_{load,qdh}-E_{pv,qdh}-E_{bd,qdh}\nonumber\\ &{}+E_{bpv,qdh}+E_{bg,qdh}+E_{bloss,qdh}\label{eq:energybal} \end{IEEEeqnarray} As the optimization parameters $Z$ and $X$ are limited to integer values (with respective maximums $Z_{\max}$ and $X_{\max}$ determined by the customer's available space restrictions), while $\beta$ and $\gamma$ may take any real value within the domain of the constraints, the problem is classified as a mixed-integer non-linear programming (MINLP) problem. \subsection{Optimization Algorithm} \label{ss:optalg} To solve MINLP problems, metaheuristic programming methods such as PSO have seen increased application in the last decade, with PSO already applied to a number of PV optimization problems \cite{Yadav13}.
PSO simulates the social interaction within bird flocks to achieve a global objective without a governing central controller \cite{Sun12}. A modified version of PSO described by Sun et al. \cite{Sun12}, known as Quantum-behaved PSO (QPSO), uses the principles of quantum mechanics, in particular quantum delta potential wells, to sample around the best positions and eventually find the global best position. The defining algorithm for QPSO is relatively simple and requires fewer parameter adjustments between problems than PSO \cite{Sun12}. Furthermore, QPSO handles parameter constraints naturally, with no specific modifications required to the algorithm, as opposed to PSO. In order to handle the discrete parameters $Z$ and $X$, the hypercube nearest-vertex approach adopted in \cite{Chowdhury13} was utilized, which effectively rounds the position of the candidate particle to the nearest integer prior to evaluating the objective function. For installation simplicity, the tilt and azimuth angles were also treated as discrete in this analysis. The QPSO optimization algorithm was developed and simulated in Matlab version R2015b.
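Although the model was implemented in Matlab, the QPSO update and the nearest-vertex rounding of discrete parameters can be sketched in a few lines. This is a minimal illustration under our own parameter choices (population size, iteration count, contraction-expansion schedule), not the authors' implementation.

```python
import math
import random

def qpso(f, lo, hi, integer_dims=(), n=20, iters=300, seed=1):
    """Minimal QPSO sketch: each coordinate is sampled around a local
    attractor (a mix of personal and global bests) with a spread scaled by
    the distance to the mean best position; discrete dimensions are rounded
    to the nearest integer before evaluation (nearest-vertex approach)."""
    rng = random.Random(seed)
    dim = len(lo)

    def feasible(x):
        y = [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]
        for d in integer_dims:
            y[d] = round(y[d])
        return y

    X = [feasible([rng.uniform(l, h) for l, h in zip(lo, hi)]) for _ in range(n)]
    P = [x[:] for x in X]                 # personal best positions
    g = min(P, key=f)[:]                  # global best position
    for it in range(iters):
        alpha = 1.0 - 0.5 * it / iters    # contraction-expansion coefficient
        mbest = [sum(p[d] for p in P) / n for d in range(dim)]
        for i in range(n):
            for d in range(dim):
                phi, u = rng.random(), rng.random()
                attractor = phi * P[i][d] + (1 - phi) * g[d]
                spread = alpha * abs(mbest[d] - X[i][d]) * math.log(1 / max(u, 1e-12))
                X[i][d] = attractor + spread if rng.random() < 0.5 else attractor - spread
            X[i] = feasible(X[i])
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
        g = min(P, key=f)[:]
    return g

# Toy check on a sphere function with one integer-constrained dimension.
best = qpso(lambda x: sum(v * v for v in x), lo=[-10, -10], hi=[10, 10],
            integer_dims=(1,))
```

In the full problem the four-dimensional particle would hold $(\beta, \gamma, Z, X)$ with the bounds of (\ref{eq:tiltcons})--(\ref{eq:battcons}) and the NPV objective negated for minimization.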
\begin{table*}[!hb] \renewcommand{\arraystretch}{1.3} \caption{Characteristics and Economic Performance of Optimized PV-Battery System For Different Retail Electricity Plans} \label{tab:results} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Customer & Retailer & Battery Size (kWh) & PV Size (kW$_\text{p}$) & Tilt & Azimuth & NPV & MIRR & Payback (Years) \\\hline\hline \multirow{3}{*}{1} & A & 0 & 8.4 & $29^\circ$ & $30^\circ$ & \$10,534 & 7.39\% & 9.3\\\hhline{~*{8}{-}} & \cellcolor[gray]{0.8}B & \cellcolor[gray]{0.8}0 & \cellcolor[gray]{0.8}8.4 & \cellcolor[gray]{0.8}$29^\circ$ & \cellcolor[gray]{0.8}$30^\circ$ & \cellcolor[gray]{0.8}\$11,153 & \cellcolor[gray]{0.8}7.52\% & \cellcolor[gray]{0.8}9.0\\\hhline{~*{8}{-}} & C & 0 & 8.4 & $29^\circ$ & $30^\circ$ & \$10,745 & 7.44\% & 9.0\\\hline\hline \multirow{3}{*}{2} & A & 0 & 4.2 & $31^\circ$ & $26^\circ$ & \$672 & 4.88\% & 18.4\\\hhline{~*{8}{-}} & \cellcolor[gray]{0.8}B & \cellcolor[gray]{0.8}0 & \cellcolor[gray]{0.8}4.2 & \cellcolor[gray]{0.8}$31^\circ$ & \cellcolor[gray]{0.8}$26^\circ$ & \cellcolor[gray]{0.8}\$917 & \cellcolor[gray]{0.8}5.04\% & \cellcolor[gray]{0.8}17.8\\\hhline{~*{8}{-}} & C & 0 & 4.2 & $31^\circ$ & $26^\circ$ & \$732 & 4.92\% & 18.2\\\hline\hline \multirow{3}{*}{3} & A & 0 & 7.56 & $29^\circ$ & $25^\circ$ & \$4,601 & 6.06\% & 13.8\\\hhline{~*{8}{-}} & \cellcolor[gray]{0.8}B & \cellcolor[gray]{0.8}0 & \cellcolor[gray]{0.8}7.56 & \cellcolor[gray]{0.8}$29^\circ$ & \cellcolor[gray]{0.8}$25^\circ$ & \cellcolor[gray]{0.8}\$5,044 & \cellcolor[gray]{0.8}6.19\% & \cellcolor[gray]{0.8}13.6\\\hhline{~*{8}{-}} & C & 0 & 7.56 & $30^\circ$ & $26^\circ$ & \$4,870 & 6.14\% & 13.6\\\hline \end{tabular} \end{table*} \section{Input Data} \label{s:input} Electricity consumption data measured over a one-year period for three arbitrarily selected customers from the Australian Government initiated `Smart Grid, Smart City' project \cite{Arup14} was used in the analysis.
Daily insolation and ambient temperature data over a five-year period for each location were derived from the Australian Bureau of Meteorology Climate Data Online database \cite{BOM16}. The daily data was converted to hourly data using the methodology established in \cite{Every16}. The electricity tariff structures tested in the optimization problem were based on real 2016 TOU rates from three large Australian retailers. \section{Results and Discussion} \label{s:results} Table~\ref{tab:results} shows a summary of the optimized PV-battery systems for each customer under retail electricity plans from three large retailers. For each customer, Retailer B was found to provide the greatest benefit. For Customer 1, the maximum system size of 8.4 kW ($Z_{\max}=30$, 280 W PV panels) was reached, while Customer 3 was also found to potentially benefit from a relatively large PV system of 7.56 kW. In contrast, the optimal PV system for Customer 2 was found to be a far smaller system at 4.2 kW. The optimal tilt and azimuth angles also varied for each customer but were found to lie in the 25--31 degree range. Importantly, however, for the customers assessed, no instance was found whereby an energy storage system would yield an economic benefit higher than a PV-only system based on current battery pricing. To determine the price point at which a hybrid PV-battery system becomes an economically beneficial option for each customer, an NPV sensitivity analysis was undertaken on battery pricing. Referring to Fig.~\ref{fig:npvsensi}, systems consisting of either the Tesla Powerwall 2 or an Enphase AC battery become viable for Customer~1 when unit costs are reduced to 70\% of 2016 pricing. Customers 2 and 3 would first see a benefit from the smaller, more modular Enphase system at the 60-70\% price point but would not see a benefit from a larger Tesla system until pricing reached 30-40\% of current levels.
\begin{figure} \caption{NPV sensitivity to installed battery cost (Retailer B, Mode 2 operation)} \label{fig:npvsensi} \end{figure} \begin{figure} \caption{Number of batteries in PV-battery system for varying installed battery costs (Retailer B, Mode 2 operation)} \label{fig:battnum} \end{figure} Fig.~\ref{fig:battnum} shows the number of batteries that constitute the optimal system as battery prices are decreased. For the more modular Enphase battery system, an increase in battery quantity is observed for each customer as prices decrease. Customer~1, having a relatively high energy demand, would benefit the most from a larger number of Enphase batteries at each price point compared to the other two customers. In contrast, Customer 3 would not benefit from additional batteries until the 30\% price point is reached, after which the customer can take immediate advantage of additional units as prices continue to decrease. However, for the larger Tesla system, a single battery was found to be sufficient for all customers under all battery price scenarios, with one exception being an additional battery for Customer 1 at the 10\% price point. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Economic Performance Under Different Battery Operating Modes (Tesla Batteries, Cost = 10\% of 2016 Prices)} \label{tab:batmode} \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Customer} & \multirow{2}{*}{Op. Mode} & Battery Size & PV Size & \multirow{2}{*}{NPV}\\ & & (kWh) & (kW$_\text{p}$) & \\\hline\hline \multirow{4}{*}{1} & 1 & 27 & 8.4 & \$17,446\\\hhline{~*{4}{-}} & \cellcolor[gray]{0.8}2 & \cellcolor[gray]{0.8}27 & \cellcolor[gray]{0.8}8.4 & \cellcolor[gray]{0.8}\$19,637\\\hhline{~*{4}{-}} & 3 & 27 & 7.56 & \$16,721\\\hhline{~*{4}{-}} & 4 & 27 & 8.4 & \$18,102\\\hline\hline \multirow{4}{*}{2} & 1 & 13.5 & 4.2 & \$3,800\\\hhline{~*{4}{-}} & \cellcolor[gray]{0.8}2 & \cellcolor[gray]{0.8}13.5 & \cellcolor[gray]{0.8}4.2 & \cellcolor[gray]{0.8}\$4,931\\\hhline{~*{4}{-}} & 3 & 13.5 & 4.2 & \$3,269\\\hhline{~*{4}{-}} & 4 & 13.5 & 4.2 & \$4,232\\\hline\hline \multirow{4}{*}{3} & 1 & 13.5 & 4.2 & \$7,313\\\hhline{~*{4}{-}} & \cellcolor[gray]{0.8}2 & \cellcolor[gray]{0.8}13.5 & \cellcolor[gray]{0.8}4.2 & \cellcolor[gray]{0.8}\$8,678\\\hhline{~*{4}{-}} & 3 & 13.5 & 4.2 & \$6,785\\\hhline{~*{4}{-}} & 4 & 13.5 & 4.2 & \$7,652\\\hline \end{tabular} \end{table} Table~\ref{tab:batmode} summarizes the effect of battery operation mode on the NPV for each customer. The table considers a significantly deflated battery pricing scenario of 10\% of 2016 installation costs, whereby energy arbitrage would yield its greatest benefit among the price points considered in this research. In all instances, Mode 2 was found to produce the highest NPV, i.e., the battery operating to maximize self-consumption of PV generated energy in shoulder and peak periods with no energy arbitrage. Consequently, even with significantly deflated battery costs, under current TOU electricity tariffs and with electricity prices continuing to increase at current rates, a battery system engaging in energy arbitrage was not found to provide any additional economic benefit over a battery system used purely for PV generation load shifting. \section{Conclusion} Significant price reductions in PV and battery systems have sparked considerable interest in hybrid PV-battery solutions at the residential level.
However, optimal system selection is critical to ensure the economic viability of such systems for a particular customer's energy requirements. An optimization tool was developed in this paper and applied to three real-world electricity customers. Based on current PV and battery system prices, no battery system was found to be economically viable for the residences assessed; however, optimized PV-only systems were found to yield a net benefit for all customers. A sensitivity analysis was conducted on battery pricing to determine the price point at which a hybrid PV-battery system would yield a net benefit improvement. The results showed that significant price reductions, to 60-70\% of current prices, are required before the tested customers could take advantage of an energy storage system. It was also concluded that customers can generally take advantage of a modular system of smaller batteries earlier than a bulk energy storage system. Additionally, the results indicated that the current size of the Tesla Powerwall 2 battery is large enough for most energy storage needs, even with battery prices at significantly deflated levels. Finally, various battery operating modes were examined to determine the most economically beneficial operation. No instances were found whereby energy arbitrage yielded a greater benefit than purely maximizing PV self-consumption. This observation continued to hold at all battery pricing levels. \end{document}
\begin{document} \begin{abstract} Let $X$ be a K3 surface and $L$ be an ample line bundle on it. In this article we will give an alternative and elementary proof of Lelli Chiesa's Theorem in the case of $r= 2$. More precisely, we will prove that under certain conditions the second co-ordinate of the gonality sequence is constant along the smooth curves in the linear system $|L|$. Using Lelli Chiesa's theorem for $r \ge 3$, we also extend the case $r = 2$ of Lelli Chiesa's Theorem under a weaker hypothesis. \end{abstract} \title{An elementary proof of Lelli Chiesa's theorem on constancy of second coordinate of gonality sequence} \section{Introduction} Given a smooth irreducible projective curve $C$ and an integer $r$, one can associate to $C$ the integer $d_r$, defined as the minimal degree of a line bundle with at least $r+1$ sections. Thus to each curve one can associate a sequence $(d_1, d_2, \dots)$, called the gonality sequence. The first co-ordinate of the gonality sequence is known as the gonality of $C$. Let $X$ be a smooth projective K3 surface over the field of complex numbers and $L$ be a line bundle on $X$. A natural question one can ask is whether the gonality sequence remains constant as $C$ varies in $|L|_s$, where $|L|_s = \{C \in |L| : C \text{ is smooth }\}$. The answer is negative. In fact, Donagi and Morrison pointed out the following easy counterexample showing that even the first co-ordinate is not constant. \textbf{Example}(\cite{3}, 2.2). Let $\pi: X \to \mathbb{P}^2$ be a K3 surface obtained as a double cover of $\mathbb{P}^2$ branched along a smooth sextic curve. Let $L = \pi^*(\mathcal{O}_{\mathbb{P}^2}(3))$. The general curve of $|L|$ is a plane sextic and hence has gonality $5$. On the other hand, $|L|$ contains a subspace of co-dimension $1$ consisting of bielliptic curves, which have gonality $4$.
However, Ciliberto and Pareschi proved that if $L$ is an ample line bundle on a K3 surface $X$, such that $X$ and $L$ are not simultaneously as in Donagi-Morrison's example, then the gonality remains constant along $|L|_s$ \cite{2}. Naturally one could ask about the behavior of the second co-ordinate. Note that in Donagi-Morrison's example, the second co-ordinate (which we will call the planarity of $C$ and denote by $\mathcal{P}(C)$) is constant. Recently Lelli Chiesa \cite{7} proved that if $C$ is an ample curve in $X$ satisfying some extra hypothesis and admitting a complete $g^r_d$ computing the Clifford index of $C$, then every curve in the linear system $|\mathcal{O}_X(C)|$ admits a complete $g^r_d$. However, if the Clifford index is bigger than $2$, then the extra hypothesis is satisfied automatically. Thus if the Clifford index of $C$ is bigger than $2$ and $C$ admits a complete $g^2_d$ computing the Clifford index of $C$, then $\mathcal{P}(C)$ is constant as $C$ varies in the linear system $|\mathcal{O}_X(C)|$. The question of the constancy of $\mathcal{P}(C)$ still remains open if $C$ does not admit a complete $g^2_d$ computing the Clifford index of $C$; for an example, see Section \ref{S2}. In this article we will give an independent proof of the constancy of $\mathcal{P}(C)$ when $C$ admits a complete $g^2_d$ computing the Clifford index, and also in a few cases when it does not. We prove the following Theorem: \begin{Theorem}\label{TH} Let $X$ be a smooth projective $K3$ surface over the field of complex numbers. Let $L$ be an ample line bundle on $X$ such that there is no bielliptic curve in $|L|$.
Then every smooth curve in the linear system $|L|$ carries a $g^2_{d^\prime}$, if one of the following holds:\\ (i) there exists an irreducible smooth curve $C$ in the linear system $|L|$ with a complete $g^r_{d^\prime}$, for some $r$ with $2 \le r \le 3$, which computes the Clifford index of $C$.\\ (ii) $L^2 \ge 16$ and there exists a smooth curve $C \in |L|$ with a complete $g^4_{d^\prime}$ which computes the Clifford index of $C$. In other words, the second co-ordinate of the gonality sequence of smooth curves is constant along the linear system $|L|$. \end{Theorem} {\it Notation:} We work throughout over the field $\mathbb{C}$ of complex numbers. If $X$ is a smooth, projective variety, we denote by $K_X$ the canonical bundle on $X$. For a coherent sheaf $\mathcal{F}$ on $X$, we denote by $H^i(\mathcal{F})$ the $i$-th cohomology group of $\mathcal{F}$ and by $h^i(\mathcal{F})$ its (complex) dimension. If $V$ is a vector bundle on $X$, we denote by $V^*$ the dual of $V$. For a sub-scheme $Z \subset X$, we denote by $\mathcal{I}_Z$ the ideal sheaf of $Z$. A line bundle of degree $d$ is called a complete $g^r_d$ on a smooth projective curve $C$ if it has exactly $r+1$ sections. We denote by $W^r_d(C)$ the subvariety of $\text{Pic}^d(C)$ whose support is the set: \[ \text{Supp}(W^r_d(C)) = \{L \in \text{Pic}^d(C): h^0(C, L) \ge r+1\}. \] If $r=0$ we denote $W^0_d(C)$ simply by $W_d(C)$. \section{Preliminaries}\label{S1} In this section we recall the basic properties of the bundle $E_{C, A}$ of Lazarsfeld \cite{5} and Tyurin \cite{9}, associated to an irreducible smooth curve $C$ in $X$ and a globally generated line bundle $A$, and the basic definitions of the Clifford index and Clifford dimension. Let $X$ be a smooth projective $K3$ surface over the field of complex numbers. Let $C$ be an irreducible smooth curve in $X$ and $A$ be a globally generated line bundle on $C$. Viewing $A$ as a sheaf on $X$, consider the evaluation map \[ H^0(C, A) \otimes \mathcal{O}_X \to A.
\] Let $F_{C, A}$ be its kernel and $E_{C, A} := F_{C, A}^*$. Then $F_{C, A}$ fits in the following exact sequence on $X$: \[ 0 \to F_{C, A} \to H^0(C, A) \otimes \mathcal{O}_X \to A \to 0. \] It is easy to check that $F_{C, A}$ is locally free. Dualizing the above exact sequence one gets \[ 0 \to {H^0(C, A)}^* \otimes \mathcal{O}_X \to E_{C, A} \to \mathcal{O}_C(C) \otimes A^* \to 0. \] Then it is easy to check the following properties: \begin{Lemma}\label{L1} 1. Rank of $E_{C, A} = h^0(C, A)$.\\ 2. $\text{det}(E_{C, A}) = \mathcal{O}_X(C)$.\\ 3. $c_2(E_{C, A}) = \text{deg}(A)$.\\ 4. $h^0(X, {E_{C, A}}^*)= h^1(X, {E_{C, A}}^*) = 0$.\\ 5. $E_{C, A}$ is generated by its global sections off a finite set. \end{Lemma} \subsection{Clifford index} Let $C$ be a smooth irreducible complex projective curve of genus $g \ge 2$. Recall that the Clifford index of a line bundle $A$ on $C$ is the integer \[ \text{Cliff}(A) = \text{deg}(A) - 2r(A), \] where $r(A) = h^0(A) - 1$. The Clifford index of $C$ itself is defined to be \[ \text{Cliff}(C) = \text{min} \{\text{Cliff}(A) \mid h^0(A) \ge 2, h^1(A) \ge 2\}. \] We say that a line bundle $A$ on $C$ contributes to the Clifford index of $C$ if $A$ satisfies the inequalities in the definition of $\text{Cliff}(C)$; it computes the Clifford index of $C$ if in addition $\text{Cliff}(C) = \text{Cliff}(A)$. \begin{Theorem}(M. Green, R. Lazarsfeld \cite{4}) Let $X$ be a complex projective K3 surface, and let $C \subset X$ be a smooth irreducible curve of genus $g \ge 2$. Then \[ \text{Cliff}(C^{\prime}) = \text{Cliff}(C) \] for every smooth curve $C^{\prime} \in |C|$. Furthermore, if $\text{Cliff}(C)$ is strictly less than the generic value $\left[\frac{g-1}{2}\right]$, then there is a line bundle $L$ on $X$ whose restriction to any smooth $C^{\prime} \in |C|$ computes the Clifford index of $C^{\prime}$. \end{Theorem} Given a curve $C$, we define its Clifford dimension as \[ r= \text{min}\{h^0(A) -1 \mid A \text{ computes the Clifford index of } C\}.
\] \begin{Proposition}(Ciliberto, Pareschi \cite{2})\label{PA} Let $C$ be a smooth and irreducible curve of genus $g$ sitting on a K3 surface $X$ as an ample divisor. Then either $C$ is isomorphic to a smooth plane sextic and $X, \mathcal{O}_X(C)$ are as in Donagi-Morrison's example, or the Clifford dimension of $C$ is $1$. \end{Proposition} \section{An example}\label{S2} In this section we will give an example of a curve $C$ in a K3 surface $X$ such that the Clifford index of $C$ is not computed by a $g^2_d$ but by a $g^3_d$. Therefore we cannot use Lelli Chiesa's Theorem to conclude the constancy of the second co-ordinate of the gonality sequence. However, we will see that the second co-ordinate remains constant along $|\mathcal{O}_X(C)|$, which gives an example in support of our Theorem \ref{TH}. {\bf Example}: Let $X$ be the K3 surface given by a smooth quartic hypersurface in $\mathbb{P}^3$. Let $C$ be a quadric hypersurface section. In other words, $C$ is a complete intersection of two hypersurfaces of degree $4$ and $2$ respectively. Clearly $C$ is an ample curve in $X$. Then we have the following facts \cite[p.199, F-2]{1}:\\ $\bullet $ $ W^1_3(C) = \varnothing$\\ $\bullet$ $W^1_4(C) \ne \varnothing$ \\ $\bullet$ $W^3_8(C) \ne \varnothing$\\ $\bullet$ $W^3_8(C)-W_2(C) \subset W^1_6(C) $\\ $\bullet$ $W^2_7(C) = W^3_8(C) - W_1(C)$.\\ Thus the Clifford index of $C$ is $2$. Since $W^2_7(C) = W^3_8(C)-W_1(C)$ and $W^3_8(C) -W_2(C) \subset W^1_6(C)$, we have $W^2_6(C) = \varnothing$. Therefore the Clifford index of $C$ cannot be computed by a $g^2_d$. On the other hand, since $W^3_8(C)$ is non-empty, the Clifford index is computed by a $g^3_d$. It is clear that $\mathcal{P}(C) = 7$ for every smooth curve $C \in |\mathcal{O}_X(C)|$. \section{Structure of $E_{C, A}$} Let $C$ be a smooth irreducible curve in a K3 surface $X$ and $A$ be a line bundle of minimal degree $d$ with $3$ sections. Clearly such a line bundle is globally generated.
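Indeed, the last assertion follows from a one-line minimality argument: if $p$ were a base point of $A$, then
\[
h^0(C, A(-p)) = h^0(C, A) = 3, \qquad \text{deg}(A(-p)) = d - 1 < d,
\]
so $A(-p)$ would be a line bundle of degree smaller than $d$ with $3$ sections, contradicting the minimality of $d$; hence $A$ is base-point free.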
Let $E_{C, A}$ be the vector bundle constructed as in Section \ref{S1}. Then by Lemma \ref{L1} we have \[ \text{rk}(E_{C, A}) = 3, \quad \text{det}(E_{C, A}) = \mathcal{O}_X(C), \quad c_2(E_{C, A}) = d, \quad h^0(X, {E_{C, A}}^*)= h^1(X, {E_{C, A}}^*) = 0, \] and $E_{C, A}$ is globally generated off a finite set.\\ The following Proposition is a slight modification of a result of Donagi-Morrison. \begin{Proposition}\label{SP1} If $E_{C, A}$ is not a simple vector bundle, then we have the following possibilities:\\ (1) There exist a base point free line bundle $N$ and a rank $2$ vector bundle $F$, globally generated off a finite set, such that $E_{C, A} = F \oplus N$.\\ (2) There exist a base point free line bundle $N$, a rank $2$ vector bundle $F$ and a finite set $Z \subset X$ such that $E_{C, A}$ sits in the following exact sequence, \[ 0 \to F \to E_{C, A} \to N \otimes \mathcal{I}_Z \to 0, \] and we have $h^0(F) \ge h^0(N) \ge 2$. \end{Proposition} \begin{proof} If $E_{C, A}$ is not simple, then there is an endomorphism $\varphi: E_{C, A} \to E_{C, A}$ which is not of the form $c \cdot Id$ for some scalar $c$, where $Id$ denotes the identity morphism. Let $x \in X$ be a point. Consider an eigenvalue $c$ of the linear map $\varphi_x: {(E_{C, A})}_x \to {(E_{C, A})}_x$. Then the morphism $\psi := \varphi - c \cdot Id$ is a nonzero morphism which drops rank everywhere. Let $F := \text{Ker}(\psi)$, $N^{\prime} := \text{Im}(\psi)$. If $E_{C, A}$ is decomposable then we are in situation (1). Let us assume $E_{C, A}$ is indecomposable. If the rank of the endomorphism $\psi$ is $2$, then one can easily see that the rank of $\psi^2$ is $1$. Thus without loss of generality we can assume that $\text{rk}(F) = 2$ and we have a short exact sequence of the form \[ 0 \to F \to E_{C, A} \to N^{\prime} \to 0. \] Since $X$ is a surface, any reflexive sheaf over $X$ is locally free. Thus $F$ is locally free.
\\ Note that $N := {N^{\prime}}^{**}$ is a line bundle and $N^{\prime} = N \otimes \mathcal{I}_Z$, for some finite set $Z \subset X$. Thus we have a sequence \[ 0 \to F \to E_{C, A} \to N \otimes \mathcal{I}_Z \to 0. \] Since $E_{C, A}$ is globally generated off a finite set, $N$ is also globally generated off a finite set. Since a line bundle on a K3 surface has no base points outside its fixed component [Corollary 3.2, \cite{8}], it is globally generated. Moreover, since $h^0(X, {(E_{C,A})}^*) = 0$, $N$ is non-trivial. Thus $h^0(N) \ge 2$. If $\psi^2 \ne 0$ then the sequence splits and again we are in situation (1). If $\psi^2 = 0$, then $h^0(F \otimes N^*) > 0$. Therefore, we have $h^0(F) \ge h^0(N) \ge 2$. \end{proof} \begin{Remark}\label{RZ} Note that if we are in the second case, then $h^0(N^* \otimes F) \ne 0$. Thus $F$, and hence $E_{C, A}$, contains a line subbundle $M$ which admits at least $2$ sections. Thus $E_{C, A}$ fits in the following exact sequence, \[ 0 \to M \to E_{C, A} \to F \to 0, \] where $F$ is a torsion free sheaf of rank two generated by its global sections off a finite set, and we have the following exact sequence \[ 0 \to F \to F^{**} \to S \to 0, \] where $F^{**}$ is the double dual of $F$ and $S$ is a coherent sheaf of finite length, in particular supported on a zero-dimensional subscheme $Z$. Also note that $c_2(F)= c_2(F^{**}) + |Z|$, where $|Z|$ denotes the length of $Z$. \end{Remark} \begin{Lemma}\label{L2} If $E$ is a vector bundle, globally generated off a finite set, with $c_1(E)^2 >0$, then $c_2(E) \ge 0$. \end{Lemma} \begin{proof} If $E$ is globally generated off a finite set, then for a general subspace $V \subset H^0(E)$ of dimension $\text{rk}(E)$ we have the following exact sequence \cite[see p.~18]{2} \[ 0 \to V \otimes \mathcal{O}_X \to E \to B \to 0 \] where $B$ is a line bundle on a smooth curve $C \subset X$.
Dualizing the exact sequence we have \[ 0 \to E^* \to V^* \otimes \mathcal{O}_X \to A \to 0 \] where $A = K_C \otimes B^*$. If $\text{deg}(A) < 0$, then $\text{deg}(B) \ge 2g^{\prime}-1$, where $g^{\prime}$ is the genus of $C$, and hence $h^0(B) \ge g^{\prime}$. Thus $h^0(E) \ge g^{\prime}+2$. On the other hand, since $c_1(E)^2 > 0$, we have $h^0(E) \le h^0(c_1(E)) = g^{\prime} +1$ [Proposition 1.5, \cite{4}]. Thus $c_2(E) = \text{deg}(A) \ge 0$. \end{proof} \section{Trigonal curve in K3 surface} In this section we will prove an interesting property of a trigonal curve in a K3 surface. \begin{Theorem}\label{TTA} Let $C$ be a trigonal curve of genus $g \ge 5$ in a K3 surface $X$. Then there exists an irreducible curve $\Delta$ such that ${p}_a(\Delta) = 1$ and $\Delta . C = 3$. \end{Theorem} \begin{proof} Since $C$ is a trigonal curve, its Clifford index is $1$. Note that if a $g^r_d$ computes the Clifford index of $C$, then $d = 2r+1$. On the other hand, for a trigonal curve, if $d_r$ is the minimal degree of a line bundle with at least $r+1$ sections, then we have (see \cite[Remark 4.5(b)]{6}) \begin{equation}\label{EEE} d_r = \begin{cases} 3r, & 1 \le r \le \left[\frac{g-1}{3}\right], \\ r + g - 1 - \left[\frac{g-r-1}{2}\right], & \left[\frac{g-1}{3}\right] < r \le g-1, \\ r + g, & r \ge g. \end{cases} \end{equation} Therefore, if a line bundle of degree $2r+1$ has at least $r+1$ sections, then $2r+1= d_r$. Now from the expression of $d_r$ in \eqref{EEE}, one can conclude that the only possibilities are $r =1$ and $r= g-2$. In other words, the Clifford index of a trigonal curve can be computed only by a pencil $L$ or by $K_C \otimes L^*$. \\ On the other hand, there exists a line bundle $M$ on $X$ such that $M{\mid_C}$ computes the Clifford index \cite{4}. Therefore, $h^0(C, M{\mid_C}) =2$ or $h^0((\mathcal{O}_X(C) \otimes M^*){\mid_C}) =2$. Without loss of generality we assume $h^0(C, M{\mid_C}) =2$.
Since $K_C \otimes (M{\mid_C})^*$ also computes the Clifford index, one can see that $h^0(M \otimes \mathcal{O}(-C)) =0$ and hence $h^0(X, M)= 2$ and $\text{deg}(M{\mid_C})= M.C = 3$. Therefore a general curve $\Delta$ in $|M|$ is irreducible and has arithmetic genus $1$. Also we have $\Delta. C =3$, which proves the Theorem. \end{proof} \section{Main theorem} In this section we prove the main theorem. \\ If $X$ and $L$ are as in Donagi-Morrison's example, then we have seen that the planarity remains constant along the smooth curves in $|L|$. Let us assume $X$ and $L$ are not as in Donagi-Morrison's example. Let $C$ be an irreducible smooth curve in the linear system $|L|$ with a complete $g^r_{d^\prime}$, where $2 \le r \le 4$, which computes the Clifford index of $C$. It is known that the gonality is constant along the smooth curves in the linear system $|L|$ \cite{2}. Let $d$ be the gonality. Also, the Clifford dimension of every curve in the linear system $|L|$ is $1$ \cite{2}. Thus the Clifford index of every curve is $d-2$. \begin{proof}[Proof of Theorem \ref{TH}] {\bf Case I: $r=2$.} Let $C \in |L|$ be a smooth curve. If $C$ is hyperelliptic then the Theorem holds trivially. We assume $C$ is not hyperelliptic. Let $A$ be a complete $g^2_{d^\prime}$ on $C$ computing the Clifford index. Therefore, the degree $d^\prime$ of $A$ is $d+2$, and such a line bundle is necessarily globally generated. \\ Note that $d+2$ is the minimal degree of a line bundle with at least $3$ sections. If the vector bundle $E_{C,A}$ is simple, then we have $h^0(E_{C, A} \otimes {E_{C, A}}^*) =1 $. Thus \begin{equation}\label{Q1} \chi(E_{C, A} \otimes {E_{C, A}}^*) = 2 -h^1(E_{C, A} \otimes {E_{C, A}}^*). \end{equation} On the other hand, by Riemann-Roch, we have \[ \chi(E_{C, A}\otimes {E_{C, A}}^*) = \frac{{c_1(E_{C, A}\otimes {E_{C, A}}^*)}^2}{2} - c_2(E_{C, A} \otimes {E_{C, A}}^*) + \text{rk}(E_{C, A}\otimes {E_{C, A}}^*) \chi(\mathcal{O}_X).
\] Now $c_2(E_{C, A} \otimes {E_{C, A}}^*) = 6 c_2(E_{C, A}) - 2 {c_1(E_{C, A})}^2 $. Thus we have \begin{equation}\label{Q2} \begin{split} \chi(E_{C, A}\otimes {E_{C, A}}^*) &= 18 - 6 c_2(E_{C, A}) +2 {c_1(E_{C, A})}^2 \\ & = 18-6(d+2) +2 (2g-2)\\ & =2-2\rho(g, 2, d+2). \end{split} \end{equation} Comparing \eqref{Q1} and \eqref{Q2} we have $\rho(g, 2, d+2) \ge 0$. Thus $W^2_{d+2}(C)$ is non-empty for every smooth curve $C$ in $|L|$. Hence the second co-ordinate of the gonality sequence is constant. Let us assume $E_{C, A}$ is not simple. Then by Remark \ref{RZ}, we have an exact sequence of the form \begin{equation}\label{Q3} 0 \to M \to E_{C, A} \to F \to 0, \end{equation} where $F$ is a rank $2$ torsion free sheaf, generated by its global sections off a finite set, and $M$ is a line bundle with at least two sections, and $F$ fits in the following exact sequence, \[ 0 \to F \to F^{**} \to S \to 0, \] where $S$ is a coherent sheaf of finite length, in particular supported on a zero-dimensional subscheme $Z$. Let $N:=c_1(F)$. Note that \begin{equation}\label{Q4} c_2(E_{C, A})= d+2= M.N + |Z| + c_2(F^{**}). \end{equation} Since $F$ is globally generated by its sections off a finite set, $F^{**}$ is also globally generated off a finite set. As $F^{**}$ is globally generated by its sections off a finite set, $N$ is globally generated off a finite set, and since on a K3 surface a line bundle has no base points outside its fixed components, $N$ is globally generated.\\ Also, by Lemma \ref{L2}, $c_2(F^{**}) \ge 0$.\\ Claim: $h^1(N) \le 1$.\\ Since $N$ is base point free, $h^1(N) \ne 0$ implies that $N = \mathcal{O}(k\Gamma)$ [Proposition 2.6, \cite{8}], where $\Gamma$ is an elliptic curve and $k$ is an integer $\ge 2$. Also we have $h^1(N) = k-1 \text{ and } h^0(N) = k+1 $. Thus if $h^1(N) > 1$, then $k \ge 3$. Since $c_2(F) \ge 0$, we have $C.2\Gamma < M.N \le d+2$.
But $\mathcal{O}_C(2\Gamma)$ has 3 sections, which is a contradiction to the minimality of $d+2$. In the case when $h^1(N) =1$ we have $N = \mathcal{O}(2\Gamma)$. If $|Z| + c_2(F^{**}) > 0$, then $\text{deg}(N{\mid_C}) < d+2$ and $h^0(N{\mid_C})=3$. Thus we get a contradiction. \\ If $|Z| +c_2(F^{**}) = 0$, then $N{\mid_C}$ has $3$ sections and $\text{deg}(N{\mid_C}) = d+2$ for all $C \in |L|$, which proves our theorem. Let us assume $h^1(N) = 0$.\\ If $h^0(N) =2$, then $N = \mathcal{O}_X(E)$, where $E$ is a smooth elliptic curve. On the other hand, since $F^{**}$ is globally generated off a finite set, by \cite[Proposition 1.5]{4}, $F^{**} = \mathcal{O}_X(\Delta) \oplus \mathcal{O}_X(\Delta)$, where $\Delta$ is a smooth irreducible curve on $X$ which moves in a base-point free pencil. Thus $N = \mathcal{O}_X(2\Delta)$, a contradiction. Let $h^0(N) \ge 3$. Since $h^0(M) \ge 2$ and $h^0(M) \le h^0(M{\mid_C})$, the line bundle $M{\mid_C}$ contributes to the Clifford index. Since $K_C = \mathcal{O}_C(C)$, we have $K_C \otimes {M_{\mid_C}}^* = N_{\mid_C}$. From the exact sequence \[ 0 \to \mathcal{O}(N-C) \to N \to N_{\mid_C} \to 0 \] we have $h^0(N_{\mid_C}) = h^0(N) + h^1(M)$. Also by Riemann-Roch, we have $h^0(N) = \frac{N^2}{2} + 2$. Thus \begin{equation}\label{Q6} \begin{split} \text{Cliff}(M{\mid_C})= \text{Cliff}(K_C \otimes {M{\mid_C}}^*)= \text{Cliff}(N{\mid_C}) &= N.C -2 (h^0(N) + h^1(M)) +2 \\ & = N.C - N^2 -4 -2h^1(M) +2\\ &= M.N -2h^1(M) -2\\ &=d+2 -|Z| -c_2(F^{**}) - 2h^1(M) -2. \end{split} \end{equation} But $d -2 =\text{Cliff}(C) \le \text{Cliff}(M{\mid_C})$, thus we have \[ d-2 \le d - |Z| -c_2(F^{**}) - 2h^1(M) \] \[ \text{or } |Z| + c_2(F^{**}) +2h^1(M) \le 2. \] In particular $c_2(F^{**}) \le 2$.
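The arithmetic in \eqref{Q6} is elementary but easy to slip on. The following short script (an illustrative sanity check, entirely outside the proof, with hypothetical integer ranges for $M.N$, $N^2$ and $h^1(M)$) verifies the identity $\text{Cliff}(N{\mid_C}) = M.N - 2h^1(M) - 2$ used above:

```python
# Sanity check of the Clifford-index arithmetic in (Q6); illustrative only.
# Inputs: MN = M.N, N2 = N^2 (even on a K3 surface), h1M = h^1(M).
# We use N.C = N.(M+N) = M.N + N^2 (since C ~ M + N), Riemann-Roch
# h^0(N) = N^2/2 + 2 (valid here since h^1(N) = 0), and the restriction
# sequence, which gives h^0(N|_C) = h^0(N) + h^1(M).
def cliff_restriction(MN, N2, h1M):
    NC = MN + N2                  # deg(N|_C) = N.C
    h0N = N2 // 2 + 2             # h^0(N) by Riemann-Roch on the K3
    h0NC = h0N + h1M              # h^0(N|_C)
    return NC - 2 * (h0NC - 1)    # Cliff = deg - 2(h^0 - 1)

# The identity Cliff(N|_C) = M.N - 2 h^1(M) - 2 from (Q6):
for MN in range(0, 20):
    for N2 in range(0, 20, 2):    # self-intersections are even on a K3
        for h1M in range(0, 4):
            assert cliff_restriction(MN, N2, h1M) == MN - 2 * h1M - 2
```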
Since $F^{**}$ is globally generated off a finite set, for a general two-dimensional subspace $V$ of $H^0(F^{**})$, we have \begin{equation}\label{Q5} 0 \to V\otimes \mathcal{O}_X \to F^{**} \to B \to 0 \end{equation} where $B$ is a line bundle on a smooth curve $D \in |N|$.\\ Dualizing the above exact sequence we get \[ 0 \to {F^{**}}^* \to V^* \otimes \mathcal{O}_X \to B^{\prime} \to 0 \] where $B^{\prime} = \mathcal{O}_D(D) \otimes B^*$. Now from the long exact sequence of \eqref{Q3}, we have $h^0(F^*)=h^2(F) = 0$. Thus we have $h^0(B^{\prime}) \ge 2$. Also we have $c_2(F^{**}) = \text{deg}(B^{\prime})$. Thus $B^{\prime}$ is a line bundle of degree at most $2$ with at least $2$ sections, and therefore the curve $D$ is hyperelliptic. If $D$ has genus $2$ then $\text{deg}(\mathcal{O}(D){\mid_C})=D.C = D^2 + M.N= 2+M.N$. Since $c_2(F^{**}) =2$ and $|Z| = 0$, it follows from \eqref{Q4} that $M.N = d$, which implies $D.C = d+2.$ Therefore $\mathcal{O}(D){\mid_C}$ will give a complete $g^2_{d+2}$ for all $C \in |L|$. If $D$ has genus bigger than $2$, then the following two cases can occur \cite[Theorem 5.2]{8}:\\ (i) There exists an irreducible elliptic curve $\Delta$ such that $\Delta . D = 2$.\\ (ii) There exists an irreducible hyperelliptic curve $B$ of genus $2$ such that $D \sim 2B$.\\ In case (i), we can further assume the genus of $D$ is bigger than $3$; thus we can decompose $D$ as $\Delta + D^{\prime}$, with $D^{\prime}.\Delta = 2$. Now ${(D- 2 \Delta)}^2 = D^2 - 8$. Thus if $D-2\Delta$ is not effective, then $D^2 = 6$ and hence ${D^{\prime}}^2 = 2$. Therefore the restriction of $\mathcal{O}(D^{\prime})$ on each curve in $|L|$ will give a complete $g^2_{d+2}$. If $D-2\Delta$ is effective then we can decompose $D$ as $D^{''} + 2 \Delta$ and $L = \mathcal{O}(2\Delta + D^{''}) \otimes M$. It is easy to see that ${(D^{''} + c_1(M))}^2 > 0$.
Thus $D^{''}.c_1(M) \ge 2$ \cite[Lemma 3]{8}.\\ On the other hand, \begin{equation} \begin{split} \text{deg}(\mathcal{O}(2\Delta){\mid_C})= 4 + 2\Delta. c_1(M) \le M.N + c_2(F^{**}) +|Z| = d+2 \end{split} \end{equation} Therefore, either $\mathcal{O}(2\Delta){\mid_C}$ gives a $g^2_{d+2}$ for all $C \in |L|$, or $\text{deg}(\mathcal{O}(2\Delta){\mid_C}) < d+2$, a contradiction. In case (ii), consider the line bundle $\mathcal{O}(B)$. Note that since $C$ is neither hyperelliptic nor bi-elliptic, by Mumford's theorem for $g^2_d$ \cite{1}, $W^2_{d+2}(C)$ is non-empty if and only if $d+2 - 6 \ge 0$, that is $d+2 \ge 6$, i.e., $d \ge 4$. \\ Note that $M.N= d$ and $B.C =B.(M+N)= B(2B +M)= 2B^2 + \frac{M.N}{2} \le d+ 2$. Thus either $\mathcal{O}_X(B){\mid_C}$ is a complete $g^2_d$ for all $C \in |L|$ or we will get a contradiction. {\bf Case II: r = 3.} Let $A$ be a line bundle of degree $d^{\prime}$ computing the Clifford index of $C$ with $h^0(A) = 4$. We can assume there is no curve in $|L|$ with a line bundle with $3$ sections computing the Clifford index. Since $d^{\prime}$ computes the Clifford index of $C$ and the Clifford index of $C$ is $d-2$, one has $d^{\prime} = d+4$. In this case every curve in the linear system $|L|$ admits a complete $g^3_{d+4}$ \cite[Theorem 4.1]{7}. For a general point $x \in C$, $A \otimes \mathcal{O}_C(-x)$ admits $3$ sections. Thus $W^2_{d+3}$ is non-empty. If $W^2_{d+2} \ne \emptyset$, then one can get a line bundle computing the Clifford index of $C$ with $3$ sections, a contradiction. Thus $W^2_{d+2} = \emptyset$. This is true for every smooth irreducible curve in $|L|$. Thus the planarity of every curve in the linear system $|L|$ is $d+3$. {\bf Case III: r = 4.} Again let $A$ be a line bundle of degree $d^{\prime}$ computing the Clifford index of $C$ with $h^0(A) = 5$. In this case $d^{\prime} = d + 6$ and, as in the previous case, by \cite[Theorem 4.1]{7}, every curve in the linear system $|L|$ admits a complete $g^4_{d+6}$.
Now, for two general points $x, y \in C$, $A \otimes \mathcal{O}_C(-x-y)$ admits $3$ sections. Thus $W^2_{d+4}(C)$ is non-empty for every smooth irreducible curve $C \in |L|$. If $W^2_{d+3}(C) = \emptyset$ for all $C \in |L|$, then the planarity of every curve is $d+4$ and we are done. Let $C \in |L|$ be such that $W^2_{d+3}(C) \ne \emptyset$ and let $A \in W^2_{d+3}(C)$.\\ Let $E_{C, A}, F^{**}, M, N, Z, D$ be as in Case I. Then from \eqref{Q6}, we have \[ d-2 \le M.N -2h^1(M) -2 = d+3 - |Z| -c_2(F^{**}) -2h^1(M) -2 \] or \[ c_2(F^{**}) + |Z| + 2h^1(M) \le 3. \] If $c_2(F^{**}) \le 2$ then we can conclude the theorem as in Case I. Let $c_2(F^{**}) =3$. Then the line bundle $B$ on $D$ in \eqref{Q5} has degree $3$ and admits $2$ sections. Hence $D$ is a trigonal curve. Since $L^2 \ge 16$ and $h^0(F) \ge h^0(N)$, we have $D^2 \ge 8$. In other words, $D$ has genus at least $5$. Therefore by Theorem \ref{TTA}, there exists an elliptic curve $\Delta$ in $X$ such that $\Delta . D = 3$ and $D$ can be decomposed as $D^{\prime} + \Delta$. Since $D^2 \ge 8$, we have ${D^{\prime}}^2 \ge 0$. If $0 \le {D^{\prime}}^2 \le 2$, then by a similar analysis as in {\bf Case I}, we can conclude the theorem. Thus we can assume that ${D^{\prime}}^2 \ge 4$, that is, $D^2 \ge 10$. If $D^2 \ge 12$, then $D$ can be decomposed as $2 \Delta + D^{\prime}$ and we are done as earlier. \\ Let $D^2 = 10$. Then ${D^{\prime}}^2 = 4$. Therefore, $D^{\prime}$ is either hyperelliptic or trigonal. Thus we have a decomposition of $D$ as $2\Delta + D^{\prime}$, which concludes the theorem. \end{proof} {\it Acknowledgement:} We would like to thank Prof. A. J. Parameswaran for many useful discussions. We also thank Prof. Ciliberto and Prof. P. Newstead for valuable comments and for pointing out the work done in this direction. We also thank Krishanu Dan for a careful reading of the article. \end{document}
\begin{document} \title{Optimal switching for pairs trading rule: \\ a viscosity solutions approach} \author{Minh-Man NGO \\\small John von Neumann (JVN) Institute \\\small Vietnam National University \\\small Ho-Chi-Minh City, \\\small man.ngo at jvn.edu.vn \and Huy\^en PHAM \\\small Laboratoire de Probabilit\'es et \\\small Mod\`eles Al\'eatoires, CNRS UMR 7599 \\\small Universit\'e Paris 7 Diderot, \\\small CREST-ENSAE, \\\small and JVN Institute \\\small pham at math.univ-paris-diderot.fr } \maketitle \begin{abstract} This paper studies the problem of determining the optimal cut-off for pairs trading rules. We consider two correlated assets whose spread is modelled by a mean-reverting process with stochastic volatility, and the optimal pair trading rule is formulated as an optimal switching problem between three regimes: flat position (no holding stocks), long one short the other and short one long the other. A fixed commission cost is charged with each transaction. We use a viscosity solutions approach to prove the existence and the explicit characterization of cut-off points via the resolution of quasi-algebraic equations. We illustrate our results by numerical simulations. \end{abstract} \noindent {\bf Keywords:} pairs trading, optimal switching, mean-reverting process, viscosity solutions. \noindent {\bf MSC Classification:} 60G40, 49L25. \noindent {\bf JEL Classification:} C61, G11. \section{Introduction} Pairs trading consists of taking simultaneously a long position in one of the assets $A$ and $B$, and a short position in the other, in order to eliminate the market beta risk, and be exposed only to relative market movements determined by the spread. A brief history and discussion of pairs trading can be found in Ehrman \cite{ehr06}, Vidyamurthy \cite{vidyamurthy2004pairs} and Elliott, Van der Hoek and Malcom \cite{Elliott}. 
The main aim of this paper is to justify these rules mathematically and to find optimal cutoffs, by means of a stochastic control approach. The pairs trading problem has been studied via the stochastic control approach in recent years. Mudchanatongsuk, Primbs and Wong \cite{mudchanatongsuk2008optimal} consider a self-financing portfolio strategy for pairs trading: they model the log-relationship between a pair of stock prices by an Ornstein-Uhlenbeck process, use this to formulate a portfolio optimization problem, and obtain the optimal solution to this control problem in closed form via the corresponding Hamilton-Jacobi-Bellman (HJB) equation. They only allow positions that are short one stock and long the other, in equal dollar amounts. Tourin and Yan \cite{tourin2013dynamic} study the same problem, but allow strategies with arbitrary amounts in each stock. On the other hand, instead of using self-financing strategies, one can focus on determining the optimal cut-offs, i.e. the boundaries of the trading regions: one should trade whenever the spread lies in these regions. Such a problem is closely related to the optimal buy-sell rule for trading a mean-reverting asset. Zhang and Zhang \cite{zhazha08} study an optimal buy-sell rule, where they model the underlying asset price by an Ornstein-Uhlenbeck process and consider an optimal trading rule determined by two regimes: buy and sell. These regimes are defined by two threshold levels, and a fixed commission cost is charged with each transaction. They use the classical verification approach to find the value function as a solution to the associated HJB equations (quasi-variational inequalities), and the optimal thresholds are obtained by the smooth-fit technique. The same problem is studied in Kong's PhD thesis \cite{kong2010stochastic}, but he considers trading rules with three aspects: buying, selling and shorting.
Song and Zhang \cite{song2013optimal} use the same approach for determining optimal pairs trading thresholds, where they model the difference of the stock prices $A$ and $B$ by an Ornstein-Uhlenbeck process and consider an optimal pairs trading rule determined by two regimes: long $A$ short $B$, and flat position (no stock holding). Leung and Li \cite{leung2013optimal} study the optimal timing to open or close the position subject to transaction costs, and the effect of a stop-loss level, under the Ornstein-Uhlenbeck (OU) model. They directly construct the value functions instead of using the variational inequalities approach, by characterizing them as the smallest concave majorant of the reward function. In this paper, we consider a pairs trading problem as in Song and Zhang \cite{song2013optimal}, but our model setting and resolution method differ. We consider two correlated assets whose spread is modelled by a more general mean-reverting process with stochastic volatility, and the optimal pairs trading rule is based on optimal switching between three regimes: flat position (no stock holding), long one asset and short the other, and vice-versa. A fixed commission cost is charged with each transaction. We use a viscosity solutions approach to solve our optimal switching problem. Actually, by combining the viscosity solutions approach, smooth-fit properties and the uniqueness result for viscosity solutions proved in Pham, Ly Vath and Zhou \cite{phalyvzho09}, we are able to derive directly the structure of the switching regions, and thus the form of our value functions. This contrasts with the classical verification approach, where the structure of the solution should be guessed ad hoc, and one has to check that it indeed satisfies the corresponding HJB equation, which is not trivial in this context of optimal switching with more than two regimes. The paper is organized as follows. We formulate in Section 2 the pairs trading problem as an optimal switching problem with three regimes.
In Section 3, we state the system of variational inequalities satisfied by the value functions in the viscosity sense, and define the pairs trading regimes. In Section 4, we state some useful properties of the switching regions, derive the form of the value functions, and obtain the optimal cutoff points by relying on the smooth-fit properties of the value functions. In Section 5, we illustrate our results by numerical examples. \section{Pairs trading problem} \setcounter{equation}{0} \setcounter{Assumption}{0} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} Let us consider the spread $X$ between two correlated assets, say $A$ and $B$, modelled by a mean-reverting process with boundaries $\ell_-$ $\in$ $\{-\infty,0\}$, and $\ell_+$ $=$ $\infty$: \begin{eqnarray} \label{dynX} dX_t &=& \mu(L- X_t) dt + \sigma(X_t) dW_t, \end{eqnarray} where $W$ is a standard Brownian motion on $(\Omega,\Fc,\F=(\Fc_t)_{t\geq 0},\P)$, $\mu$ $>$ $0$ and $L$ $\geq$ $0$ are constants, and $\sigma$ is a Lipschitz function on $(\ell_-,\ell_+)$ satisfying the nondegeneracy condition $\sigma$ $>$ $0$. The SDE \reff{dynX} then admits a unique strong solution, given an initial condition $X_0$ $=$ $x$ $\in$ $(\ell_-,\ell_+)$, denoted $X^x$. We assume that $\ell_+$ $=$ $\infty$ and $\ell_-$ $=$ $-\infty$ are natural boundaries, and that $\ell_-$ $=$ $0$ is non attainable. The main examples are the Ornstein-Uhlenbeck (OU in short) process and the inhomogeneous geometric Brownian motion (IGBM), studied in detail in the next sections. Suppose that the investor starts with a flat position in both assets. When the spread widens far from the equilibrium point, she naturally opens her trade by buying the underpriced asset, and selling the overpriced one. Next, if the spread narrows, she closes her trades, thus generating a profit.
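To fix ideas, the spread dynamics \reff{dynX} can be simulated by a standard Euler-Maruyama scheme. The following sketch (all parameter values are hypothetical and for illustration only) shows the mean reversion of the spread toward the equilibrium level $L$ in the constant-volatility (OU) case:

```python
import numpy as np

def simulate_spread(x0, mu, L, sigma, T=10.0, n=10_000, seed=0):
    """Euler-Maruyama scheme for dX_t = mu (L - X_t) dt + sigma dW_t."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for k in range(n):
        x[k + 1] = x[k] + mu * (L - x[k]) * dt + sigma * dW[k]
    return x

# Start far from the equilibrium level L = 0: mean reversion pulls X back,
# with stationary standard deviation sigma / sqrt(2 mu) in the OU case.
path = simulate_spread(x0=2.0, mu=5.0, L=0.0, sigma=0.5)
```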
Such trading rules are quite popular in practice among hedge fund managers, with cutoff values determined empirically by descriptive statistics. The main aim of this paper is to justify these rules mathematically and find optimal cutoffs, by means of a stochastic control approach. More precisely, we formulate the pairs trading problem as an optimal switching problem with three regimes. Let $\{-1,0,1\}$ be the set of regimes, where $i$ $=$ $0$ corresponds to a flat position (no stock holding), $i$ $=$ $1$ denotes a long position in the spread corresponding to a purchase of $A$ and a sale of $B$, while $i$ $=$ $-1$ is a short position in $X$ (i.e. sell $A$ and buy $B$). At any time, the investor can decide to open her trade by switching from regime $i$ $=$ $0$ to $i$ $=$ $-1$ (open to sell) or $i$ $=$ $1$ (open to buy). Moreover, when the investor is in a long ($i$ $=$ $1$) or short position ($i$ $=$ $-1$), she can decide to close her position by switching to regime $i$ $=$ $0$. We also assume that it is not possible for the investor to switch directly from regime $i$ $=$ $-1$ to $i$ $=$ $1$, and vice-versa, without first closing her position. The trading strategies of the investor are modelled by a switching control $\alpha$ $=$ $(\tau_n,\iota_n)_{n\geq 0}$, where $(\tau_n)_n$ is a nondecreasing sequence of stopping times representing the trading times, with $\tau_n$ $\rightarrow$ $\infty$ a.s. when $n$ goes to infinity, and $\iota_n$, valued in $\{-1,0,1\}$ and $\Fc_{\tau_n}$-measurable, represents the position regime decided at $\tau_n$ until the next trading time. By abuse of notation, we denote by $\alpha_t$ the value of the regime at any time $t$: \begin{eqnarray*} \alpha_t &=& \iota_0 {\bf 1}_{\{0 \leq t < \tau_0\}} + \sum_{n \geq 0} \iota_n {\bf 1}_{\{\tau_n\leq t< \tau_{n+1}\}}, \;\;\; t \geq 0, \end{eqnarray*} which also represents the inventory value in the spread at any time.
We denote by $g_{ij}(x)$ the trading gain when switching from a position $i$ to $j$, $i,j$ $\in$ $\{-1,0,1\}$, $j$ $\neq$ $i$, for a spread value $x$. The switching gain functions are given by: \begin{eqnarray*} g_{_{01}}(x) \; = \; g_{_{-10}}(x) &=& - (x+ \eps) \\ g_{_{0-1}}(x) \; = \; g_{_{10}}(x) &=& x-\eps, \end{eqnarray*} where $\eps$ $>$ $0$ is a fixed transaction fee paid at each trading time. Notice that we do not consider the functions $g_{_{-11}}$ and $g_{_{1-1}}$ since it is not possible to switch from regime $i$ $=$ $-1$ to $i$ $=$ $1$ and vice-versa. By abuse of notation, we also set $g(x,i,j)$ $=$ $g_{_{ij}}(x)$. Given an initial spread value $X_0$ $=$ $x$, the expected reward over an infinite horizon associated to a switching trading strategy $\alpha$ $=$ $(\tau_n,\iota_n)_{n\geq 0}$ is given by the gain functional: \begin{eqnarray*} J(x,\alpha) &=& \E \Big[ \sum_{n\geq 1} e^{-\rho \tau_n} g(X_{\tau_n}^x,\alpha_{\tau_n^-},\alpha_{\tau_n}) - \lambda \int_0^{\infty} e^{-\rho t} |\alpha_t| dt \Big]. \end{eqnarray*} The first (discrete sum) term corresponds to the (discounted with discount factor $\rho$ $>$ $0$) cumulated gain of the investor by using pairs trading strategies, while the last integral term reduces the inventory risk, by penalizing, with a factor $\lambda$ $\geq$ $0$, the holding of assets during the trading time interval. For $i$ $=$ $0,-1,1$, let $v_i$ denote the value function with initial position $i$ when maximizing the gain functional over switching trading strategies, that is \begin{eqnarray*} v_i(x) &=& \sup_{\alpha\in\Ac_i} J(x,\alpha), \;\;\;\;\; x \in (\ell_-,\infty), \; i =0,-1,1, \end{eqnarray*} where $\Ac_i$ denotes the set of switching controls $\alpha$ $=$ $(\tau_n,\iota_n)_{n\geq 0}$ with initial position $\alpha_{0^-}$ $=$ $i$, i.e. $\tau_0$ $=$ $0$, $\iota_0$ $=$ $i$.
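For a fixed admissible strategy, the gain functional $J(x,\alpha)$ can be estimated by Monte Carlo. The sketch below (a finite-horizon truncation; the two thresholds and all parameter values are hypothetical, and the optimal cut-offs are the object of Section 4) evaluates a naive threshold rule in the OU case with $L=0$: switch $0 \to 1$ when the spread falls below one level, and $1 \to 0$ when it rises above another:

```python
import numpy as np

def estimate_J(x0, x_open, x_close, mu=2.0, sigma=0.5, rho=0.1,
               eps=0.01, lam=0.05, T=20.0, n=4_000, n_paths=50, seed=1):
    """Monte-Carlo estimate of the gain functional for a two-threshold rule:
    open long (regime 1) when X <= x_open, close when X >= x_close."""
    rng = np.random.default_rng(seed)
    dt = T / n
    total = 0.0
    for _ in range(n_paths):
        x, pos, gain = x0, 0, 0.0
        for k in range(n):
            disc = np.exp(-rho * k * dt)
            if pos == 0 and x <= x_open:
                gain += disc * (-(x + eps))      # g_{01}: open to buy
                pos = 1
            elif pos == 1 and x >= x_close:
                gain += disc * (x - eps)         # g_{10}: sell to close
                pos = 0
            gain -= lam * disc * abs(pos) * dt   # inventory penalty
            # OU spread with equilibrium level L = 0
            x += mu * (0.0 - x) * dt + sigma * np.sqrt(dt) * rng.normal()
        total += gain
    return total / n_paths

J_small_fee = estimate_J(x0=0.0, x_open=-0.3, x_close=0.3)
J_large_fee = estimate_J(x0=0.0, x_open=-0.3, x_close=0.3, eps=0.2)
# With identical noise (same seed), the trade times coincide, so a larger
# transaction fee can only lower the estimated gain.
```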
The impossibility of switching directly from regime $i$ $=$ $\pm 1$ to $\mp 1$ is formalized by restricting the strategies of position $i= \pm 1$: if $\alpha \in \Ac_{_1}$ or $\alpha \in \Ac_{_{-1}}$, then $\iota_{_1}$ $=$ $0$, ensuring that the investor has to close her position first before opening a new one. \section{PDE characterization} \setcounter{equation}{0} \setcounter{Assumption}{0} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} Throughout the paper, we denote by $\Lc$ the infinitesimal generator of the diffusion process $X$, i.e. \begin{eqnarray*} \Lc \varphi(x) &=& \mu(L- x) \varphi'(x) + \frac{1}{2} \sigma^2(x) \varphi''(x). \end{eqnarray*} The ordinary differential equation of second order \begin{eqnarray} \label{ode2} \rho \phi - \Lc \phi &=& 0, \end{eqnarray} has two linearly independent positive solutions. These solutions are uniquely determined (up to a multiplicative constant) if we require one of them to be strictly increasing, and the other to be strictly decreasing. We shall denote by $\psi_+$ the increasing solution, and by $\psi_-$ the decreasing solution. They are called fundamental solutions of \reff{ode2}, and any other solution can be expressed as their linear combination. Since $\ell_+$ $=$ $\infty$ is a natural boundary, and $\ell_-$ $\in$ $\{-\infty,0\}$ is either a natural or non attainable boundary, we have: \begin{eqnarray} \label{natural} \psi_+(\infty) \; = \; \psi_-(\ell_-) \; = \; \infty, & & \psi_-(\infty) \; = \; 0. \end{eqnarray} We shall also assume that \begin{eqnarray} \label{limpsix} \lim_{x\rightarrow \ell_-} \frac{x}{\psi_-(x)} \; = \; 0, & & \lim_{x\rightarrow \infty} \frac{x}{\psi_+(x)} \; = \; 0.
\end{eqnarray} \noindent {\bf Canonical examples} \\ Our two basic examples in finance for $X$ satisfying the above assumptions are \begin{itemize} \item Ornstein-Uhlenbeck (OU) process: \begin{eqnarray} \label{OU} dX_t &=& - \mu X_t dt + \sigma dW_t, \end{eqnarray} with $\mu$, $\sigma$ positive constants. In this case, $\ell_+$ $=$ $\infty$, $\ell_-$ $=$ $-\infty$ are natural boundaries, the two fundamental solutions to \reff{ode2} are given by \begin{eqnarray*} \psi_+(x) = \int_0^\infty t^{\frac{\rho}{\mu}-1}\exp\big(-\frac{t^2}{2} + \frac{\sqrt{2\mu}}{\sigma} x t\big) dt, & & \psi_-(x) = \int_0^\infty t^{\frac{\rho}{\mu}-1}\exp\big(-\frac{t^2}{2} - \frac{\sqrt{2\mu}}{\sigma} x t\big) dt, \end{eqnarray*} and it is easily checked that condition \reff{limpsix} is satisfied. \item Inhomogeneous Geometric Brownian Motion (IGBM): \begin{eqnarray} \label{inGBM} dX_t =\mu(L- X_t) dt + \sigma X_t dW_t, \;\;\; X_{_0}>0, \end{eqnarray} where $\mu$, $L$ and $\sigma$ are positive constants. In this case, $\ell_+$ $=$ $\infty$ is a natural boundary, $\ell_-$ $=$ $0$ is a non attainable boundary, and the two fundamental solutions to \reff{ode2} are given by \begin{eqnarray} \label{psi+-} \psi_+(x) = x^{-a}U(a,b,\frac{c}{x}), & & \psi_-(x) = x^{-a}M(a,b,\frac{c}{x}). \end{eqnarray} where \begin{eqnarray} \label{igbmc} a &=& \frac{\sqrt{\sigma^4+4(\mu+2\rho)\sigma^2+4\mu^2}-(2\mu+\sigma^2)}{2\sigma^2} \; > \; 0,\notag \\ b &=& \frac{2\mu}{\sigma^2}+2a+2, \;\;\; c \; = \; \frac{2\mu L}{\sigma^2}, \end{eqnarray} and $M$ and $U$ are the confluent hypergeometric functions of the first and second kind. Moreover, by the asymptotic property of the confluent hypergeometric functions (see \cite{abramowitz1972handbook}), the fundamental solutions $\psi_+$ and $\psi_-$ satisfy condition \reff{limpsix}, and \begin{eqnarray} \label{limpsi2} \psi_+(0^+) \; = \; \frac{1}{c^a}. 
\end{eqnarray} \end{itemize} In this section, we state some general PDE characterization of the value functions by means of the dynamic programming approach. We first state a linear growth property and Lipschitz continuity of the value functions. \begin{Lemma} \label{lemgrowth} There exists some positive constant $r$ (depending on $\sigma$) such that for a discount factor $\rho$ $>$ $r$, the value functions are finite on $\R$. In this case, we have \begin{eqnarray*} 0 & \leq & v_{_0}(x) \; \leq \; C(1+|x|), \;\;\; \forall x \in (\ell_-,\infty), \\ - \frac{\lambda}{\rho} & \leq & v_i(x) \; \leq \; C(1+|x|), \;\;\; \forall x \in (\ell_-,\infty), \; i =1,-1, \end{eqnarray*} and \begin{eqnarray*} |v_{_i}(x)-v_{_i}(y)|\; \leq \; C|x-y|, \;\;\; \forall x,y \in (\ell_-,\infty), \; i =0,1,-1, \end{eqnarray*} for some positive constant $C$. \end{Lemma} {\bf Proof.} The lower bounds for $v_{_0}$ and $v_{i}$ are obtained trivially by considering the strategy of doing nothing. Let us focus on the upper bound. First, by standard arguments using It\^o's formula and Gronwall's lemma, we have the following estimates on the diffusion $X$: there exists some positive constant $r$, depending on the Lipschitz constant of $\sigma$, such that \begin{eqnarray} \E |X_t^x| & \leq & C e^{r t} (1 + |x|), \;\;\; \forall t \geq 0, \label{estimX} \\ \E |X_t^x-X_t^y| & \leq & e^{r t} |x-y| , \;\;\; \forall t \geq 0, \label{estimX2} \end{eqnarray} for some positive constant $C$ depending on $\rho$, $L$ and $\mu$.
Next, for two successive trading times $\tau_n$ and $\sigma_n$ $=$ $\tau_{n+1}$ corresponding to a buy-and-sell or sell-and-buy strategy, we have: \begin{eqnarray} & & \E \Big[ e^{-\rho \tau_n} g(X_{\tau_n}^x,\alpha_{\tau_n^-},\alpha_{\tau_n}) + e^{-\rho \sigma_n} g(X_{\sigma_n}^x,\alpha_{\sigma_n^-},\alpha_{\sigma_n}) \Big] \label{succes} \\ & \leq & \Big| \E\Big[ e^{-\rho\sigma_n} X_{\sigma_n}^x - e^{-\rho\tau_n} X_{\tau_n}^x \Big] \Big| \; \leq \; \E \Big[ \int_{\tau_n}^{\sigma_n} e^{-\rho t} (\mu+\rho) |X_t^x| dt \Big]+ \E \Big[ \int_{\tau_n}^{\sigma_n} e^{-\rho t} \mu L dt \Big], \nonumber \end{eqnarray} where the second inequality follows from It\^o's formula. When the investor starts from the flat position $(i=0)$, at the first trading time she can move to regime $i=1$ or $i=-1$, and at the second trading time she has to go back to regime $i=0$. Therefore, any strategy starting from regime $i=0$ can be expressed as an infinite sequence of couples \textit{buy-and-sell} and \textit{sell-and-buy}; for example, the sequence of regimes $0 \rightarrow 1 \rightarrow 0 \rightarrow -1 \rightarrow 0 \rightarrow -1 \rightarrow 0 \rightarrow 1 \rightarrow 0 \cdots$ corresponds to buy-and-sell, sell-and-buy, sell-and-buy, buy-and-sell, and so on. We deduce from \reff{succes} that for any $\alpha$ $\in$ $\Ac_{_0}$, \begin{eqnarray*} J(x,\alpha) & \leq & \E \Big[ \int_0^\infty e^{-\rho t} (\mu+\rho) |X_t^x| dt \Big]+\frac{\mu L}{\rho}.
\end{eqnarray*} Recall that when the investor starts with a long or short position ($i$ $=$ $\pm 1$), she has to close her position first before opening a new one, so that for $\alpha$ $\in$ $\Ac_{_1}$ or $\alpha$ $\in$ $\Ac_{_{-1}}$, \begin{eqnarray*} J(x,\alpha) & \leq & |x|+\E \Big[ \int_{0}^{\tau_1} e^{-\rho t} (\mu+\rho) |X_t^x| dt \Big]+ \E \Big[ \int_{0}^{\tau_1} e^{-\rho t} \mu L dt \Big] \\ & & \; + \E \Big[ \int_{\tau_2}^{\infty} e^{-\rho t} (\mu+\rho) |X_t^x| dt \Big]+ \E \Big[ \int_{\tau_2}^{\infty} e^{-\rho t} \mu L dt \Big] \\ & \leq & |x|+ \E \Big[ \int_0^\infty e^{-\rho t} (\mu+\rho) |X_t^x| dt \Big]+\frac{\mu L}{\rho}, \end{eqnarray*} which proves the upper bound for $v_{i}$ by using the estimate \reff{estimX}. By the same argument, for two successive trading times $\tau_n$ and $\sigma_n$ $=$ $\tau_{n+1}$ corresponding to a buy-and-sell or sell-and-buy strategy, we have: \begin{eqnarray*} & & \E \Big[ e^{-\rho \tau_n} g(X_{\tau_n}^x,\alpha_{\tau_n^-},\alpha_{\tau_n}) + e^{-\rho \sigma_n} g(X_{\sigma_n}^x,\alpha_{\sigma_n^-},\alpha_{\sigma_n}) \\ & & \;\;\;\;\; - \; e^{-\rho \tau_n} g(X_{\tau_n}^y,\alpha_{\tau_n^-},\alpha_{\tau_n}) - e^{-\rho \sigma_n} g(X_{\sigma_n}^y,\alpha_{\sigma_n^-},\alpha_{\sigma_n}) \Big] \\ & \leq & \Big| \E\Big[ e^{-\rho\sigma_n} X_{\sigma_n}^x - e^{-\rho\tau_n} X_{\tau_n}^x- e^{-\rho\sigma_n} X_{\sigma_n}^y + e^{-\rho\tau_n} X_{\tau_n}^y \Big] \Big|\\ & \leq & \E \Big[ \int_{\tau_n}^{\sigma_n} e^{-\rho t} (\mu+\rho) |X_t^x-X_t^y| dt \Big], \end{eqnarray*} where the second inequality follows from It\^o's formula. We deduce that \begin{eqnarray*} |v_i(x)-v_i(y)| &\leq& \sup_{\alpha\in\Ac_i} |J(x,\alpha)-J(y,\alpha)| \\ & \leq & |x-y| + \E \Big[ \int_0^\infty e^{-\rho t} (\mu+\rho) |X_t^x-X_t^y| dt \Big], \end{eqnarray*} which proves the Lipschitz property for $v_{_i}, \; i=0,1,-1$ by using the estimate \reff{estimX2}.
\ep In the sequel, we fix a discount factor $\rho$ $>$ $r$ so that the value functions $v_i$ are well-defined and finite, and satisfy the linear growth and Lipschitz estimates of Lemma \ref{lemgrowth}. The dynamic programming equations satisfied by the value functions are thus given by a system of variational inequalities: \begin{eqnarray} \min \big[ \rho v_{_0} - \Lc v_{_0} \; , \; v_{_0} - \max \big( v_{_1} + g_{_{01}}, v_{_{-1}} + g_{_{0-1}}\big) \big] &=& 0, \;\;\; \mbox{ on } \; (\ell_-,\infty), \label{pdev0} \\ \min \big[ \rho v_{_1} - \Lc v_{_1} + \lambda \; , \; v_{_1} - v_{_0} - g_{_{10}} \big] &=& 0, \;\;\; \mbox{ on } \; (\ell_-,\infty), \label{pdev1} \\ \min \big[ \rho v_{_{-1}} - \Lc v_{_{-1}} + \lambda \; , \; v_{_{-1}} - v_{_0} - g_{_{-10}} \big] &=& 0, \;\;\; \mbox{ on } \; (\ell_-,\infty). \label{pdev-1} \end{eqnarray} Indeed, the equation for $v_0$ means that in regime $0$, the investor has the choice to stay in the flat position, or to open a long or short position in the spread, while the equation for $v_i$, $i$ $=$ $\pm 1$, means that in the regime $i$ $=$ $\pm 1$, she first has the obligation to close her position, i.e. to switch to regime $0$, before opening a new position. By the same arguments as in \cite{pham2007smooth}, we know that the value functions $v_i, \; i=0,1,-1$, are viscosity solutions to the system (\ref{pdev0})-(\ref{pdev1})-(\ref{pdev-1}), and satisfy the smooth-fit $C^1$ condition.
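As a numerical sanity check (outside the paper's argument), one can verify by finite differences that the increasing fundamental solution $\psi_+$ displayed above for the OU case indeed solves $\rho \phi - \Lc \phi = 0$. The parameter values below are illustrative, chosen with $\rho = \mu$ so that the integrand is smooth at $t = 0$:

```python
import numpy as np

mu, sigma, rho = 1.0, 1.0, 1.0
k = np.sqrt(2.0 * mu) / sigma

def psi_plus(x):
    """psi_+(x) = int_0^inf t^(rho/mu - 1) exp(-t^2/2 + k x t) dt, evaluated
    by the trapezoidal rule; the integrand decays like exp(-t^2/2), so
    truncating at t = 12 is harmless."""
    t = np.linspace(0.0, 12.0, 200_001)
    f = t ** (rho / mu - 1.0) * np.exp(-t * t / 2.0 + k * x * t)
    dt = t[1] - t[0]
    return float(((f[:-1] + f[1:]) * 0.5 * dt).sum())

# For the OU process dX = -mu X dt + sigma dW (L = 0), the generator gives
# rho*phi - L phi = rho*phi + mu*x*phi' - (sigma^2/2) phi''.
x, h = 0.3, 1e-2
phi = psi_plus(x)
pp, pm = psi_plus(x + h), psi_plus(x - h)
d1 = (pp - pm) / (2.0 * h)                 # central first difference
d2 = (pp - 2.0 * phi + pm) / h ** 2        # central second difference
residual = rho * phi + mu * x * d1 - 0.5 * sigma ** 2 * d2
assert abs(residual) < 1e-2 * rho * phi    # residual vanishes up to FD error
```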
Let us introduce the switching regions: \begin{itemize} \item Open-to-trade region from the flat position $i$ $=$ $0$: \begin{eqnarray*} \Sc_{_0} & = & \Big\{ x \in (\ell_-,\infty) : v_{_0}(x) = \max \big( v_{_1} + g_{_{01}}, v_{_{-1}} + g_{_{0-1}}\big)(x) \Big\} \\ &=& \Sc_{_{01}} \cup \Sc_{_{0-1}}, \end{eqnarray*} where $\Sc_{_{01}}$ is the open-to-buy region, and $\Sc_{_{0-1}}$ is the open-to-sell region: \begin{eqnarray*} \Sc_{_{01}} &=& \Big\{ x \in (\ell_-,\infty) : v_{_0}(x) = (v_{_1} + g_{_{01}})(x) \Big\}, \\ \Sc_{_{0-1}} &=& \Big\{ x \in (\ell_-,\infty) : v_{_0}(x) = (v_{_{-1}} + g_{_{0-1}})(x) \Big\}. \end{eqnarray*} \item Sell-to-close region from the long position $i$ $=$ $1$: \begin{eqnarray*} \Sc_{_1} &=& \Big\{ x \in (\ell_-,\infty) : v_{_1}(x) = (v_{_0} + g_{_{10}})(x) \Big\}. \end{eqnarray*} \item Buy-to-close region from the short position $i$ $=$ $-1$: \begin{eqnarray*} \Sc_{_{-1}} &=& \Big\{ x \in (\ell_-,\infty) : v_{_{-1}}(x) = (v_{_0} + g_{_{-10}})(x) \Big\}, \end{eqnarray*} \end{itemize} and the continuation regions, defined as the complement sets of the switching regions: \begin{eqnarray*} \Cc_{_0} &=& (\ell_-,\infty) \setminus\Sc_{_0} \; = \; \Big\{ x \in (\ell_-,\infty) : v_{_0}(x) > \max \big( v_{_1} + g_{_{01}}, v_{_{-1}} + g_{_{0-1}}\big)(x) \Big\}, \\ \Cc_{_1} &=& (\ell_-,\infty) \setminus\Sc_{_1} \; = \; \Big\{ x \in (\ell_-,\infty) : v_{_1}(x) > (v_{_0} + g_{_{10}})(x) \Big\}, \\ \Cc_{_{-1}} &=& (\ell_-,\infty) \setminus\Sc_{_{-1}} \; = \; \Big\{ x \in (\ell_-,\infty) : v_{_{-1}}(x) > (v_{_0} + g_{_{-10}})(x) \Big\}. 
\end{eqnarray*} \section{Solution} \setcounter{equation}{0} \setcounter{Assumption}{0} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} In this section, we focus on the existence and structure of the switching regions, and then use the smooth-fit property and the uniqueness result for viscosity solutions to derive the form of the value functions, from which the optimal cut-off points are obtained by solving the smooth-fit condition equations. \begin{Lemma} \label{leminclu} \begin{eqnarray*} \Sc_{_{01}} \; \subset \; \big(-\infty, \frac{\mu L - \ell_{_0}}{\rho+\mu}\big] \cap (\ell_-,\infty) , & & \Sc_{_{0-1}} \; \subset \; \big[ \frac{\mu L + \ell_{_0}}{\rho+\mu}, \infty\big), \\ \Sc_{_1} \; \subset \; \big[ \frac{\mu L- \ell_{_1}}{\rho+\mu},\infty\big) \cap (\ell_-,\infty), & & \Sc_{_{-1}} \; \subset \; \big(-\infty, \frac{\mu L+ \ell_{_1}}{\rho+\mu}\big] \cap (\ell_-,\infty), \end{eqnarray*} where \begin{eqnarray*} 0 \; < \; \ell_{_0} \; := \; \lambda+\rho\eps, & & \ell_{_1} \; := \; \lambda-\rho\eps \; \in \; (-\ell_{_0},\ell_{_0}). \end{eqnarray*} \end{Lemma} {\bf Proof.} Let $\bar x$ $\in$ $\Sc_{_{01}}$, so that $v_{_0}(\bar x)$ $=$ $(v_{_1} + g_{_{01}})(\bar x)$. By writing that $v_{_0}$ is a viscosity supersolution of $\rho v_{_0}-\Lc v_{_0}$ $\geq$ $0$, we then get \begin{eqnarray} \label{ineg1} \rho (v_{_1} + g_{_{01}})(\bar x) - \Lc (v_{_1} + g_{_{01}})(\bar x) & \geq & 0. \end{eqnarray} Now, since $g_{_{01}}+g_{_{10}}$ $=$ $-2\eps$ $<$ $0$, this implies that $\Sc_{_{01}}$ $\cap$ $\Sc_{_{1}}$ $=$ $\emptyset$, so that $\bar x$ $\in$ $\Cc_{_1}$. Since $v_{_1}$ satisfies the equation $\rho v_{_1} - \Lc v_{_1} + \lambda$ $=$ $0$ on $\Cc_{_1}$, we then have from \reff{ineg1} \begin{eqnarray*} \rho g_{_{01}}(\bar x) - \Lc g_{_{01}}(\bar x) - \lambda & \geq & 0.
\end{eqnarray*} Recalling the expressions of $g_{_{01}}$ and $\Lc$, we thus obtain: $-\rho(\bar x+\eps) - \mu\bar x - \lambda+L\mu$ $\geq$ $0$, which proves the inclusion result for $\Sc_{_{01}}$. Similar arguments show that if $\bar x$ $\in$ $\Sc_{_{0-1}}$ then \begin{eqnarray*} \rho g_{_{0-1}}(\bar x) - \Lc g_{_{0-1}}(\bar x) - \lambda & \geq & 0, \end{eqnarray*} which proves the inclusion result for $\Sc_{_{0-1}}$ after a direct calculation. Similarly, if $\bar x$ $\in$ $\Sc_{_1}$ then $\bar x$ $\in$ $\Sc_{_{0-1}}$ or $\bar x$ $\in$ $\Cc_{_0}$: if $\bar x$ $\in$ $\Sc_{_{0-1}}$, we obviously have the inclusion result for $\Sc_{_1}$. On the other hand, if $\bar x$ $\in$ $\Cc_{_0}$, using the viscosity supersolution property of $v_{_1}$, we have: \begin{eqnarray*} \rho g_{_{10}}(\bar x) - \Lc g_{_{10}}(\bar x) + \lambda & \geq & 0, \end{eqnarray*} which yields the inclusion result for $\Sc_{_1}$. By the same method, we show the inclusion result for $\Sc_{_{-1}}$. \ep We next examine some sufficient conditions under which the switching regions are not empty. \begin{Lemma} \label{lemempty1} (1) The switching regions $\Sc_{_1}$ and $\Sc_{_{0-1}}$ are always not empty. \noindent (2) \begin{itemize} \item[(i)] If $\ell_-$ $=$ $-\infty$, then $\Sc_{_{-1}}$ is not empty. \item[(ii)] If $\ell_-$ $=$ $0$, and $\eps$ $<$ $\frac{\lambda}{\rho}$, then $\Sc_{_{-1}}$ $\neq$ $\emptyset$. \end{itemize} \noindent (3) If $\ell_-$ $=$ $-\infty$, then $\Sc_{_{01}}$ is not empty. \end{Lemma} {\bf Proof.} (1) We argue by contradiction, and first assume that $\Sc_{_1}$ $=$ $\emptyset$. This means that once we are in the long position, it would never be optimal to close our position. In other words, the value function $v_{_1}$ would be equal to $\hat V_{_1}$ given by \begin{eqnarray*} \hat V_{_1}(x) &=& \E \Big[ - \lambda \int_0^{\infty} e^{-\rho t} dt \Big] \; = \; - \frac{\lambda}{\rho}.
\end{eqnarray*} Since $v_{_1}$ $\geq$ $v_{_0}+g_{_{10}}$, this would imply $v_{_0}(x)$ $\leq$ $- \frac{\lambda}{\rho} + \eps - x$, for all $x$ $\in$ $(\ell_-,\infty)$, which obviously contradicts the nonnegativity of the value function $v_{_0}$. Suppose now that $\Sc_{_{0-1}}$ $=$ $\emptyset$. Then, from the inclusion results for $\Sc_{_0}$ in Lemma \ref{leminclu}, the continuation region $\Cc_{_0}$ would contain at least the interval $(\frac{\mu L-\ell_{_0}}{\rho+\mu},\infty)$ $\cap$ $(\ell_-,\infty)$. In other words, we should have: $\rho v_{_0} - \Lc v_{_0}$ $=$ $0$ on $(\frac{\mu L-\ell_{_0}}{\rho+\mu},\infty)$ $\cap$ $(\ell_-,\infty)$, and so $v_{_0}$ should be in the form: \begin{eqnarray*} v_{_0}(x) &=& C_+ \psi_+(x) + C_- \psi_-(x), \;\;\; \forall x > \Big(\frac{\mu L-\ell_{_0}}{\rho+\mu} \Big) \vee \ell_-, \end{eqnarray*} for some constants $C_+$ and $C_-$. In view of the linear growth condition on $v_{_0}$ and condition \reff{limpsix} when $x$ goes to $\infty$, we must have $C_+$ $=$ $0$. On the other hand, since $v_{_0}$ $\geq$ $v_{_{-1}}$ $+$ $g_{_{0-1}}$, and recalling the lower bound on $v_{_{-1}}$ in Lemma \ref{lemgrowth}, this would imply: \begin{eqnarray*} C_{-} \psi_-(x) & \geq & - \frac{\lambda}{\rho} + x-\eps, \;\;\; \forall x > \Big(\frac{\mu L-\ell_{_0}}{\rho+\mu} \Big) \vee \ell_-. \end{eqnarray*} By sending $x$ to $\infty$, and from \reff{natural}, we get a contradiction. \noindent (2) Suppose that $\Sc_{_{-1}}$ $=$ $\emptyset$. Then, a similar argument as in the case $\Sc_{_1}$ $=$ $\emptyset$ would imply that $v_{_0}(x)$ $\leq$ $- \frac{\lambda}{\rho} + \eps + x$, for all $x$ $\in$ $(\ell_-,\infty)$. This immediately leads to a contradiction when $\ell_-$ $=$ $-\infty$ by sending $x$ to $-\infty$. When $\ell_-$ $=$ $0$, and under the condition that $\eps$ $<$ $\frac{\lambda}{\rho}$, we also get a contradiction to the nonnegativity of $v_{_0}$.
\noindent (3) Consider the case when $\ell_-$ $=$ $-\infty$, and let us argue by contradiction by assuming that $\Sc_{_{01}}$ $=$ $\emptyset$. Then, from the inclusion results for $\Sc_{_0}$ in Lemma \ref{leminclu}, the continuation region $\Cc_{_0}$ would contain at least the interval $(-\infty,\frac{\mu L+\ell_{_0}}{\rho+\mu})$. In other words, we should have: $\rho v_{_0} - \Lc v_{_0}$ $=$ $0$ on $(-\infty,\frac{\mu L+\ell_{_0}}{\rho+\mu})$, and so $v_{_0}$ should be in the form: \begin{eqnarray*} v_{_0}(x) &=& C_+ \psi_+(x) + C_- \psi_-(x), \;\;\; \forall x < \frac{\mu L+\ell_{_0}}{\rho+\mu}, \end{eqnarray*} for some constants $C_+$ and $C_-$. In view of the linear growth condition on $v_{_0}$ and condition \reff{limpsix} when $x$ goes to $-\infty$, we must have $C_-$ $=$ $0$. On the other hand, since $v_{_0}$ $\geq$ $v_{_{1}}$ $+$ $g_{_{01}}$, recalling the lower bound on $v_{_{1}}$ in Lemma \ref{lemgrowth}, this would imply: \begin{eqnarray*} C_+ \psi_+(x) & \geq & - \frac{\lambda}{\rho} -(x+ \eps), \;\;\; \forall x < \frac{\mu L+\ell_{_0}}{\rho+\mu}. \end{eqnarray*} By sending $x$ to $-\infty$, and from \reff{natural}, we get a contradiction. \ep \begin{Remark} {\rm Lemma \ref{lemempty1} shows that $\Sc_{_1}$ is not empty. Furthermore, notice that in the case where $\ell_-$ $=$ $0$, $\Sc_{_1}$ can be equal to the whole domain $(0,\infty)$, i.e. it is never optimal to stay in the long position regime. Actually, from Lemma \ref{leminclu}, such an extreme case may occur only if $\mu L - \ell_{_1}$ $\leq$ $0$, in which case we would also get $\mu L - \ell_{_0}$ $<$ $0$, and thus $\Sc_{_{01}}$ $=$ $\emptyset$. In that case, we are reduced to a problem with only two regimes $i$ $=$ $0$ and $i$ $=$ $-1$. \ep } \end{Remark} Lemma \ref{lemempty1} leaves open the question of whether $\Sc_{_{-1}}$ is empty when $\ell_-$ $=$ $0$ and $\eps$ $\geq$ $\frac{\lambda}{\rho}$, and of whether $\Sc_{_{01}}$ is empty or not when $\ell_-$ $=$ $0$.
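Before turning to this question, it is worth noting that the thresholds of Lemma \ref{leminclu} follow from an elementary computation with the affine switching gains. Below is a minimal symbolic check, assuming $g_{_{01}}(x)=-(x+\eps)$ and $\Lc\varphi=\mu(L-x)\varphi'+\frac{1}{2}\sigma^2(x)\varphi''$, consistent with the computation in the proof of Lemma \ref{leminclu}:

```python
import sympy as sp

x, rho, mu, L, lam, eps = sp.symbols('x rho mu L lam eps', positive=True)

# Affine switching gain g_{01}(x) = -(x + eps); its second derivative
# vanishes, so the generator acts as L g_{01} = mu*(L - x)*g_{01}'(x).
g01 = -(x + eps)
Lg01 = mu * (L - x) * sp.diff(g01, x)

# Supersolution inequality rho*g01 - L g01 - lambda >= 0 on S_{01}:
# it is linear in x with slope -(rho + mu), so it holds iff x is below
# the root of the expression.
expr = sp.expand(rho * g01 - Lg01 - lam)
threshold = sp.solve(sp.Eq(expr, 0), x)[0]

# The root coincides with (mu*L - ell_0)/(rho + mu), with ell_0 = lam + rho*eps.
print(sp.simplify(threshold - (mu * L - (lam + rho * eps)) / (rho + mu)))  # 0
```

Running the same computation with $g_{_{10}}$, $g_{_{0-1}}$ and $g_{_{-10}}$ recovers the other three thresholds of the lemma.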
We examine this last issue in the next Lemma and the following remarks. \begin{Lemma} \label{lemempty2} Let $X$ be governed by the inhomogeneous geometric Brownian motion in \reff{inGBM}, and set \begin{eqnarray*} \label{defK} K_{0}(y) & := & (\frac{c}{y})^{-a}\frac{1}{U(a,b,\frac{c}{y})}(y-\eps+\frac{\lambda}{\rho})-(\frac{\lambda}{\rho}+\eps), \;\;\; y > 0, \\ K_{-1}(y) & := & (\frac{c}{y})^{-a}\frac{1}{U(a,b,\frac{c}{y})}(y-\eps-\frac{\lambda}{\rho})+(\frac{\lambda}{\rho}-\eps), \;\;\; y > 0, \end{eqnarray*} where $a$, $b$ and $c$ are defined in \reff{igbmc}. If there exists $y$ $\in$ $(0,\frac{\mu L+\ell_{_0}}{\rho+\mu})$ (resp. $y$ $>$ $0$) such that $K_{0}(y)$ $>$ $0$ (resp. $K_{-1}(y)$ $>$ $0$), then $\Sc_{_{01}}$ (resp. $\Sc_{_{-1}}$) is not empty. \end{Lemma} {\bf Proof.} Suppose that $\Sc_{_{01}}$ $=$ $\emptyset$. Then, from the inclusion results for $\Sc_{_0}$ in Lemma \ref{leminclu}, the continuation region $\Cc_{_0}$ would contain at least the interval $(0,\frac{\mu L+\ell_{_0}}{\rho+\mu})$. In other words, we should have: $\rho v_{_0} - \Lc v_{_0}$ $=$ $0$ on $(0,\frac{\mu L+\ell_{_0}}{\rho+\mu})$, and so $v_{_0}$ should be in the form: \begin{eqnarray*} v_{_0}(x) &=& C_+ \psi_+(x) + C_- \psi_-(x), \;\;\; \forall 0 < x < \frac{\mu L+\ell_{_0}}{\rho+\mu}, \end{eqnarray*} for some constants $C_+$ and $C_-$. From the bounds on $v_0$ in Lemma \ref{lemgrowth}, and \reff{natural}, we must have $C_-$ $=$ $0$. Next, for $0<x\leq y$, let us consider the first passage time $\tau_y^x$ $:=$ $\inf\{t: X^{x}_t=y\}$ of the inhomogeneous geometric Brownian motion. We know from \cite{zhao2009inhomogeneous} that \begin{eqnarray} \label{zhao1} \E_x\big[ e^{-\rho \tau_y^x} \big] &=& \left(\frac{x}{y}\right)^{-a} \frac{U(a,b,\frac{c}{x})}{U(a,b,\frac{c}{y})}=\frac{\psi_+(x)}{\psi_+(y)}.
\end{eqnarray} We denote by $\bar v_{_1}(x;y)$ the gain functional obtained from the strategy consisting in switching from the initial state $x$ and regime $i=1$ to the regime $i=0$ at the first time $X^{x}_t$ hits $y$ ($0<x \leq y$), and then following optimal decisions once in regime $i$ $=$ $0$: \begin{eqnarray*} \bar v_{_1}(x;y) &=& \E [e^{-\rho \tau_y^x}(v_{_0}(y)+y-\eps)-\int_0^{\tau_y^x}\lambda e^{-\rho t}dt ], \;\;\; 0 < x \leq y. \end{eqnarray*} Since $v_{_0}(y)$ $=$ $C_+ \psi_+(y)$, for all $0<y< \frac{\mu L+\ell_{_0}}{\rho+\mu}$, and recalling (\ref{zhao1}), we have: \begin{eqnarray*} \bar v_{_1}(x;y) &=& \E [e^{-\rho \tau_y^x}(C_+ \psi_+(y)+y-\eps)-\int_0^{\tau_y^x}\lambda e^{-\rho t}dt ] \\ &=& \frac{\psi_+(x)}{\psi_+(y)}(C_+ \psi_+(y)+y-\eps+\frac{\lambda}{\rho})-\frac{\lambda}{\rho} \\ &=& v_{_{0}}(x)+\frac{\psi_+(x)}{\psi_+(y)}(y-\eps+\frac{\lambda}{\rho})-\frac{\lambda}{\rho} , \; \; \; \; \; \; \forall 0<x \leq y< \frac{\mu L+\ell_{_0}}{\rho+\mu}. \end{eqnarray*} Now, by definition of $v_{_1}$, we have $v_{_1}(x)$ $\geq$ $\bar v_{_1}(x;y)$, so that: \begin{eqnarray*} \label{cont1} v_{_1}(x) &\geq& v_{_{0}}(x)+\frac{\psi_+(x)}{\psi_+(y)}(y-\eps+\frac{\lambda}{\rho})-\frac{\lambda}{\rho}, \; \; \; \forall 0<x \leq y< \frac{\mu L+\ell_{_0}}{\rho+\mu}. \end{eqnarray*} By sending $x$ to zero, and recalling \reff{psi+-} and \reff{limpsi2}, this yields \begin{eqnarray*} \label{abs1} v_{_1}(0^+) &\geq& v_{_0}(0^+) + K_0(y)+\eps, \; \; \; \forall 0<y< \frac{\mu L+\ell_{_0}}{\rho+\mu}. \end{eqnarray*} Therefore, under the condition that there exists $y$ $\in$ $(0,\frac{\mu L+\ell_{_0}}{\rho+\mu})$ such that $K_0(y)$ $>$ $0$, we would get: \begin{eqnarray*} v_{_1}(0^+) &> & v_{_0}(0^+) +\eps, \end{eqnarray*} which is in contradiction with the fact that we have: $v_{_0}$ $\geq$ $v_{_1} + g_{_{01}}$, and so: $v_{_0}(0^+)$ $\geq$ $v_{_1}(0^+)$ $-$ $\eps$. Suppose now that $\Sc_{_{-1}}$ $=$ $\emptyset$; in this case, $v_{_{-1}}$ $=$ $-\lambda/\rho$.
By the same argument as in the above case, we have \begin{eqnarray*} v_{_0}(x) &\geq& \E[e^{-\rho \tau_y^x}(v_{_{-1}}(y)+y-\eps )] \; = \; \E[e^{-\rho \tau_y^x}(-\frac{\lambda}{\rho}+y-\eps )] \notag \\ &=& \big(-\frac{\lambda}{\rho}+y-\eps \big) \frac{\psi_+(x)}{\psi_+(y)}, \end{eqnarray*} by \reff{zhao1}. By sending $x$ to zero, and recalling \reff{psi+-} and \reff{limpsi2}, we thus have \begin{eqnarray} \label{condst1} v_{_0}(0^+) &\geq& -\frac{\lambda}{\rho} + \eps + K_{-1}(y), \;\;\; y>0. \end{eqnarray} Therefore, under the condition that there exists $y$ $>$ $0$ such that $K_{-1}(y)$ $>$ $0$, we would get: \begin{eqnarray*} v_{_0}(0^+) &> & -\frac{\lambda}{\rho} +\eps, \end{eqnarray*} which is in contradiction with the fact that we have: $v_{_{-1}}$ $\geq$ $v_{_0} + g_{_{-10}}$, and so: $ -\frac{\lambda}{\rho}$ $=$ $v_{_{-1}}(0^+)$ $\geq$ $v_{_0}(0^+)$ $-$ $\eps$. \ep \begin{Remark}\label{remark41} {\rm The above Lemma \ref{lemempty2} gives a sufficient condition in terms of the functions $K_0$ and $K_{-1}$, which ensures that $\Sc_{_{01}}$ and $\Sc_{_{-1}}$ are not empty. Let us discuss when it is satisfied. From the asymptotic property of the confluent hypergeometric functions, we have: $\lim_{z \rightarrow \infty} z^aU(a,b,z)=1$. Then by sending $L$ to infinity (recall that $c=\frac{2\mu L}{\sigma^2}$), and from the expressions of $K_0$ and $K_{-1}$ in Lemma \ref{lemempty2}, we have: \begin{eqnarray*} \lim_{L\rightarrow \infty} K_0(y) \; = \; \lim_{c \rightarrow \infty} K_0(y) &=& y - 2 \eps \; = \; \lim_{L\rightarrow \infty} K_{-1}(y). \end{eqnarray*} This implies that for $L$ large enough, one can choose $2 \eps < y < \frac{\mu L+\ell_{_0}}{\rho+\mu}$ so that $K_0(y)$ $>$ $0$. Notice also that $K_0$ is nondecreasing in $L$, as a consequence of the fact that $\frac{\partial}{\partial z}\big(z^aU(a,b,z)\big)=a(a-b+1)z^{a-1}U(a+1,b,z)<0$. In practice, one can check by numerical methods the condition $K_0(y)$ $>$ $0$ for $0 < y < \frac{\mu L+\ell_{_0}}{\rho+\mu}$.
For example, with $\mu=0.8$, $\sigma=0.5$ , $\rho=0.1$, $\lambda=0.07$, $\varepsilon=0.005$, and $L=3$, we have $\frac{\mu L+\ell_{_0}}{\rho+\mu}= 2.7450$, and $K_0(1)=0.9072>0$. Similarly, for $L$ large enough, one can find $y$ $>$ $2\eps $ such that $K_{-1}(y)$ $>$ $0$ ensuring that $\Sc_{_{-1}}$ is not empty. \ep } \end{Remark} We are now able to describe the complete structure of the switching regions. \begin{Proposition} \label{propcutoff} 1) There exist finite cutoff levels $\bar x_{_{01}}$, $\bar x_{_{0-1}}$, $\bar x_{_1}$, $\bar x_{_{-1}}$ such that \begin{eqnarray*} \Sc_{_1} \; = \; [\bar x_{_1},\infty) \cap (\ell_-,\infty), & & \Sc_{_{0-1}} \; = \; [\bar x_{_{0-1}},\infty), \\ \Sc_{_{-1}} \; = \; (\ell_-, -\bar x_{_{-1}}], & & \Sc_{_{01}} \; = \; (\ell_-, - \bar x_{_{01}}], \end{eqnarray*} and satisfying $\bar x_{_{0-1}}$ $\geq$ $\frac{\mu L+\ell_{_0}}{\rho+\mu}$, $\bar x_{_1}$ $\geq$ $\frac{\mu L -\ell_{_1}}{\rho+\mu}$, $-\bar x_{_{-1}}$ $\leq$ $\frac{\mu L+\ell_{_1}}{\rho+\mu}$, $-\bar x_{_{01}}$ $\leq$ $\frac{\mu L-\ell_{_0}}{\rho+\mu}$. Moreover, $- \bar x_{_{01}}$ $<$ $\bar x_{_1}$, i.e. $\Sc_{_{01}}$ $\cap$ $\Sc_{_1}$ $=$ $\emptyset$ and $ \bar x_{_{0-1}}$ $>$ $-\bar x_{_{-1}}$, i.e. $\Sc_{_{0-1}}$ $\cap$ $\Sc_{_{-1}}$ $=$ $\emptyset$. \noindent 2) We have $\bar x_{_1}$ $\leq$ $\bar x_{_{0-1}}$, and $-\bar x_{_{01}}$ $\leq$ $-\bar x_{_{-1}}$ i.e. the following inclusions hold: \begin{eqnarray*} \Sc_{_{0-1}} \; \subset \; \Sc_{_1}, & & \Sc_{_{01}} \; \subset \; \Sc_{_{-1}}. \end{eqnarray*} \end{Proposition} {\bf Proof.} 1) (i) We focus on the structure of the sets $\Sc_{_{01}}$ and $\Sc_{_{-1}}$, and consider first the case where they are not empty. Let us then set $-\bar x_{_{01}}$ $=$ $\sup\Sc_{_{01}}$, which is finite since $\Sc_{_{01}}$ is not empty, and is included in $(\ell_-,\frac{\mu L-\ell_{_0}}{\rho+\mu}]$ by Lemma \ref{leminclu}. 
Moreover, since $\Sc_{_{0-1}}$ is included in $[\frac{\mu L+\ell_{_0}}{\rho+\mu},\infty)$, it does not intersect with $(\ell_-,-\bar x_{_{01}})$, and so $v_{_0}(x)$ $>$ $(v_{_{-1}}+ g_{_{0-1}})(x)$ for $x$ $<$ $-\bar x_{_{01}}$, i.e. $(\ell_-,-\bar x_{_{01}})$ $\subset$ $\Sc_{_{01}}$ $\cup$ $\Cc_{_0}$. From \reff{pdev0}, we deduce that $v_{_0}$ is a viscosity solution to \begin{eqnarray} \label{pdev0-2} \min \big[ \rho v_{_0} - \Lc v_{_0} \; , \; v_{_0} - v_{_1} - g_{_{01}} \big] &=& 0, \;\;\; \mbox{ on } \; (\ell_-,-\bar x_{_{01}}). \end{eqnarray} Let us now prove that $\Sc_{_{01}}$ $=$ $(\ell_-,-\bar x_{_{01}}]$. To this end, we consider the function $w_{_0}$ $=$ $v_{_1}+g_{_{01}}$ on $(\ell_-,-\bar x_{_{01}}]$. Let us check that $w_{_0}$ is a viscosity supersolution to \begin{eqnarray} \label{viscosurw0} \rho w_{_0} - \Lc w_{_0} & \geq & 0 \;\;\; \mbox{ on } \; (\ell_-,-\bar x_{_{01}}). \end{eqnarray} For this, take some point $\bar x$ $\in$ $(\ell_-,-\bar x_{_{01}})$, and some smooth test function $\varphi$ such that $\bar x$ is a local minimum of $w_{_0}-\varphi$. Then, $\bar x$ is a local minimum of $v_{_1}-(\varphi-g_{_{01}})$ by definition of $w_{_0}$. By writing the viscosity supersolution property of $v_{_1}$ to: $\rho v_{_1} - \Lc v_{_1} + \lambda$ $\geq$ $0$, at $\bar x$ with the test function $\varphi-g_{_{01}}$, we get: \begin{eqnarray*} 0 & \leq & \rho(\varphi-g_{_{01}})(\bar x) - \Lc(\varphi-g_{_{01}})(\bar x) + \lambda \\ &=& \rho \varphi(\bar x) - \Lc \varphi(\bar x) + (\rho+\mu)(\bar x + \frac{\ell_{_0}- \mu L}{\rho+\mu}) \\ & \leq & \rho \varphi(\bar x) - \Lc \varphi(\bar x), \end{eqnarray*} since $\bar x$ $<$ $-\bar x_{_{01}}$ $\leq$ $\frac{\mu L-\ell_{_0}}{\rho+\mu}$. 
This proves the viscosity supersolution property \reff{viscosurw0}, and actually, by recalling that $w_{_0}$ $=$ $v_{_1}+g_{_{01}}$, $w_{_0}$ is a viscosity solution to \begin{eqnarray} \label{w0visco} \min \big[ \rho w_{_0} - \Lc w_{_0} \; , \; w_{_0} - v_{_1} - g_{_{01}} \big] &=& 0, \;\;\; \mbox{ on } \; (\ell_-,-\bar x_{_{01}}). \end{eqnarray} Moreover, since $-\bar x_{_{01}}$ lies in the closed set $\Sc_{_{01}}$, we have $w_{_0}(-\bar x_{_{01}})$ $=$ $(v_{_1}+g_{_{01}})(-\bar x_{_{01}})$ $=$ $v_{_0}(-\bar x_{_{01}})$. By uniqueness of viscosity solutions to \reff{pdev0-2}, we deduce that $v_{_0}$ $=$ $w_{_0}$ on $(\ell_-,-\bar x_{_{01}}]$, i.e. $\Sc_{_{01}}$ $=$ $(\ell_-,-\bar x_{_{01}}]$. In the case where $\Sc_{_{01}}$ is empty, which may arise only when $\ell_-$ $=$ $0$ (recall Lemma \ref{lemempty1}), then it can still be written in the above form $(\ell_-,-\bar x_{_{01}}]$ by choosing $-\bar x_{_{01}}$ $\leq$ $\ell_-$ $\wedge$ $(\frac{\mu L-\ell_{_0}}{\rho+\mu})$. By similar arguments, we show that when $\Sc_{_{-1}}$ is not empty, it should be in the form: $\Sc_{_{-1}}$ $=$ $(\ell_-, -\bar x_{_{-1}}]$, for some $-\bar x_{_{-1}}$ $\leq$ $\frac{\mu L+\ell_{_1}}{\rho+\mu}$, while when it is empty, which may arise only when $\ell_-$ $=$ $0$ (recall Lemma \ref{lemempty1}), it can be written also in this form by choosing $-\bar x_{_{-1}}$ $\leq$ $0$ $\wedge$ $(\frac{\mu L+\ell_{_1}}{\rho+\mu})$. (ii) We derive similarly the structure of $\Sc_{_{0-1}}$ and $\Sc_{_1}$ which are already known to be non empty (recall Lemma \ref{lemempty1}): we set $\bar x_{_{0-1}}$ $=$ $\inf\Sc_{_{0-1}}$, which lies in $[\frac{\mu L+\ell_{_0}}{\rho+\mu},\infty)$ since $\Sc_{_{0-1}}$ is included in $[\frac{\mu L+\ell_{_0}}{\rho+\mu},\infty)$ by Lemma \ref{leminclu}. 
Then, we observe that $v_{_0}$ is a viscosity solution to \begin{eqnarray} \label{pdev0-3} \min \big[ \rho v_{_0} - \Lc v_{_0} \; , \; v_{_0} - v_{_{-1}} - g_{_{0-1}} \big] &=& 0, \;\;\; \mbox{ on } \; (\bar x_{_{0-1}},\infty). \end{eqnarray} By considering the function $\tilde w_{_0}$ $=$ $v_{_{-1}}+g_{_{0-1}}$, we show by the same arguments as in \reff{w0visco} that $\tilde w_{_0}$ is also a viscosity solution to \reff{pdev0-3} with boundary condition $\tilde w_{_0}(\bar x_{_{0-1}})$ $=$ $v_{_0}(\bar x_{_{0-1}})$. We conclude by uniqueness that $\tilde w_{_0}$ $=$ $v_{_0}$ on $[\bar x_{_{0-1}},\infty)$, i.e. $\Sc_{_{0-1}}$ $=$ $[\bar x_{_{0-1}},\infty)$. The same arguments show that $\Sc_{_1}$ is in the form stated in the Proposition. Moreover, from Lemma \ref{leminclu} we have: $\bar x_{_{0-1}}$ $\geq$ $\frac{\mu L+\ell_{_0}}{\rho+\mu}$ $>$ $\frac{\mu L+\ell_{_1}}{\rho+\mu}$ $\geq$ $-\bar x_{_{-1}}$ and $\bar x_{_1}$ $\geq$ $\frac{\mu L-\ell_{_1}}{\rho+\mu}$ $>$ $\frac{\mu L-\ell_{_0}}{\rho+\mu}$ $\geq$ $-\bar x_{_{01}}$. \noindent 2) We only consider the case where $-\bar x_{_{-1}} < \bar x_{_{1}}$, since the inclusion result of this proposition follows immediately from the above forms of the switching regions when $-\bar x_{_{-1}} \geq \bar x_{_{1}}$. Let us introduce the function $U(x)=2v_{_0}(x)- (v_{_1}+v_{_{-1}})(x)$ on $[ -\bar x_{_{-1}},\bar x_{_{1}}]$. On $( -\bar x_{_{-1}},\bar x_{_{1}})$, we see that $v_{_1}$ and $v_{_{-1}}$ are $C^2$, and satisfy: \begin{eqnarray*} \rho v_{_{1}}- \Lc v_{_{1}} +\lambda=0, & & \rho v_{_{-1}}- \Lc v_{_{-1}} +\lambda=0, \end{eqnarray*} which combined with the viscosity supersolution property of $v_{_0}$, gives \begin{eqnarray*} \rho U- \Lc U \; = \; 2(\rho v_{_{0}}- \Lc v_{_{0}}) +2\lambda \; \geq \; 0 \ \ \ \text{on} \ \ \ (- \bar x_{_{-1}},\bar x_{_{1}}).
\end{eqnarray*} At $x=\bar x_{_1}$ we have $v_{_1}(x)=v_{_0}(x)+x-\eps$ and $v_{_0}(x)\geq v_{_{-1}}(x)+x-\eps$, so that $2v_{_0}(x)\geq v_{_1}(x)+v_{_{-1}}(x)$, which means $U(\bar x_{_1})\geq 0$. In the same way, at $x=-\bar x_{_{-1}}$ we also have $2v_{_0}(x)\geq v_{_1}(x)+v_{_{-1}}(x)$, which means $U(-\bar x_{_{-1}})\geq 0$. By the comparison principle, we deduce that \begin{eqnarray*} 2v_{_0}(x)\geq v_{_1}(x)+v_{_{-1}}(x) \ \ \ \text{on} \ \ \ [-\bar x_{_{-1}},\bar x_{_{1}}]. \end{eqnarray*} Let us assume on the contrary that $\bar x_{_1}$ $>$ $\bar x_{_{0-1}}$. We have $v_{_0}(\bar x_{_{0-1}})=v_{_{-1}}(\bar x_{_{0-1}})+\bar x_{_{0-1}}-\eps$ and $v_{_1}(\bar x_{_{0-1}})>v_{_0}(\bar x_{_{0-1}})+\bar x_{_{0-1}}-\eps$, so that $(v_{_{-1}}+v_1)(\bar x_{_{0-1}})>2v_{_0}(\bar x_{_{0-1}})$, leading to a contradiction. By the same argument, it is impossible to have $-\bar x_{_{-1}}$ $<$ $-\bar x_{_{01}}$, which ends the proof. \ep \begin{Remark}\label{remark42} {\rm Consider the situation where $\ell_-$ $=$ $0$. We distinguish the following cases: \begin{itemize} \item[(i)] $\lambda$ $>$ $\rho \eps$. Then, we know from Lemma \ref{lemempty1} that $\Sc_{_{-1}}$ $\neq$ $\emptyset$. Moreover, for $L$ small enough, namely $L$ $\leq$ $\ell_{_0}/\mu$, we see from Proposition \ref{propcutoff} that $-\bar x_{_{01}}$ $\leq$ $0$ and thus $\Sc_{_{01}}$ $=$ $\emptyset$. \\ \item[(ii)] $\lambda$ $\leq$ $\rho \eps$. Then $\ell_{_1}$ $\leq$ $0$, and for $L$ small enough, namely $L$ $\leq$ $-\ell_{_1}/\mu$, we see from Proposition \ref{propcutoff} that $-\bar x_{_{-1}}$ $\leq$ $0$, and thus $\Sc_{_{-1}}$ $=$ $\emptyset$ and $\Sc_{_{01}}$ $=$ $\emptyset$. \end{itemize} \ep } \end{Remark} The next result establishes a symmetry property of the switching regions and value functions.
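Before turning to it, we record a small numerical illustration of the check suggested in Remark \ref{remark41}. The sketch below evaluates $K_0$ with the Tricomi function $U$ via \texttt{scipy.special.hyperu}; the parameters $a$, $b$ are placeholders rather than the actual values of \reff{igbmc} (we take $b=a+1$, for which $z^aU(a,b,z)=1$ identically, so the large-$L$ limit $K_0(y)\to y-2\eps$ of Remark \ref{remark41} is exact):

```python
from scipy.special import hyperu

lam, rho, eps = 0.07, 0.1, 0.005   # lambda, rho, epsilon as in Remark 4.1

def K0(y, a, b, c):
    """K_0 of Lemma lemempty2:
    (c/y)^{-a} / U(a, b, c/y) * (y - eps + lam/rho) - (lam/rho + eps)."""
    z = c / y
    ratio = z ** (-a) / hyperu(a, b, z)
    return ratio * (y - eps + lam / rho) - (lam / rho + eps)

# With b = a + 1 and c large (i.e. L large, since c = 2*mu*L/sigma^2),
# the ratio tends to 1 and K_0(y) approaches y - 2*eps.
y = 1.0
print(K0(y, a=0.5, b=1.5, c=1e6), y - 2 * eps)
```

With the true $(a,b,c)$ from \reff{igbmc}, the same routine can be used to check $K_0(y)>0$ on $(0,\frac{\mu L+\ell_{_0}}{\rho+\mu})$, as in the numerical example of Remark \ref{remark41}.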
\begin{Proposition}\label{ldx} (Symmetry property) In the case $\ell_-$ $=$ $-\infty$, if $\sigma(x)$ is an even function and $L=0$, then $\bar x_{_{0-1}}=\bar x_{_{01}}$, $\bar x_{_{-1}}=\bar x_{_{1}}$ and \begin{eqnarray*} v_{_{-i}}(-x) & = & v_{_i}(x), \;\;\;\;\; x \in \R, \; i \in \{0,-1,1\}. \end{eqnarray*} \end{Proposition} {\bf Proof.} Consider the process $Y^{x}_t=-X^x_t$, which follows the dynamics: \begin{eqnarray*} dY_t &=& -\mu Y_tdt + \sigma(Y_t)d\bar W_t, \end{eqnarray*} where $\bar W=-W$ is still a Brownian motion with respect to the same probability measure and filtration as $W$, and we can see that $Y^{x}_t=X^{-x}_t$. We consider the same optimization problem, but with $Y_t$ in place of $X_t$, and denote \begin{eqnarray*} J^Y(x,\alpha) &=& \E \Big[ \sum_{n\geq 1} e^{-\rho \tau_n} g(Y^x_{\tau_n},\alpha_{\tau_n^-},\alpha_{\tau_n}) - \lambda \int_0^{\infty} e^{-\rho t} |\alpha_t| dt \Big]. \end{eqnarray*} For $i$ $=$ $0,-1,1$, let $v^{_Y}_{_i}$ denote the value function with initial position $i$, obtained by maximizing the gain functional over switching trading strategies, that is \begin{eqnarray*} v^{_Y}_{_i}(x) &=& \sup_{\alpha\in\Ac_i} J^Y(x,\alpha), \;\;\;\;\; x \in \R, \; i =0,-1,1. \end{eqnarray*} For any $\alpha$ $\in$ $\Ac_i$, we see that $g(Y^x_{\tau_n},-\alpha_{\tau_n^-},-\alpha_{\tau_n})$ $=$ $g(X^x_{\tau_n},\alpha_{\tau_n^-},\alpha_{\tau_n})$, and so $J^Y(x,-\alpha)$ $=$ $J(x,\alpha)$. Thus, $v^{_Y}_{_{-i}}(x)$ $\geq$ $J^Y(x,-\alpha)$ $=$ $J(x,\alpha)$, and since $\alpha$ is arbitrary in $\Ac_i$, we get: $v^{_Y}_{_{-i}}(x)$ $\geq$ $v_{_i}(x)$. By the same argument, we have $v_{_i}(x)\geq v^{_Y}_{_{-i}}(x)$, and so $v_{_{-i}}^{_Y}$ $=$ $v_{_i}$, $i \in \{0,-1,1\}$. Moreover, recalling that $Y^{x}_t=X^{-x}_t$, we have: \begin{eqnarray*} v_{_{-i}}(-x)=v_{_{-i}}^{_Y}(x)=v_{_i}(x), \;\;\;\;\; x \in \R, \; i \in \{0,-1,1\}.
\end{eqnarray*} In particular, we have $v_{_{-1}}(-\bar x_{_1})$ $=$ $v_{_1}(\bar x_{_1})=(v_{_0}+g_{_{10}})(\bar x_{_1})=(v_{_0}+g_{_{-10}})(-\bar x_{_1})$, so that $-\bar x_{_1} \in \Sc_{_{-1}}$. Moreover, since $\bar x_{_1}$ $=$ $\inf \Sc_{_1}$, we notice that for all $r>0$, $\bar x_{_1}-r \not\in \Sc_{_{1}}$. Thus, $v_{_{-1}}(-\bar x_{_1}+r)$ $=$ $v_{_1}(\bar x_{_1}-r)$ $>$ $(v_{_0}+g_{_{10}})(\bar x_{_1}-r)$ $=$ $(v_{_0}+g_{_{-10}})(-\bar x_{_1}+r)$, for all $r$ $>$ $0$, which means that $-\bar x_{_1}$ $=$ $\sup \Sc_{_{-1}}$. Recalling that $\sup \Sc_{_{-1}}$ $=$ $-\bar x_{_{-1}}$, this shows that $\bar x_{_1}$ $=$ $\bar x_{_{-1}}$. By the same argument, we have $\bar x_{_{0-1}}=\bar x_{_{01}}$. \ep To sum up the above results, we have the following possible cases for the structure of the switching regions: \begin{itemize} \item[{\bf (1)}] $\ell_-$ $=$ $-\infty$. In this case, the four switching regions $\Sc_{_1}$, $\Sc_{_{-1}}$, $\Sc_{_{01}}$ and $\Sc_{_{0-1}}$ are not empty, and take the form \begin{eqnarray*} \Sc_{_1} \; = \; [\bar x_{_1},\infty), & & \Sc_{_{0-1}} \; = \; [\bar x_{_{0-1}},\infty), \\ \Sc_{_{-1}} \; = \; (-\infty, -\bar x_{_{-1}}], & & \Sc_{_{01}} \; = \; (-\infty, - \bar x_{_{01}}], \end{eqnarray*} and are plotted in Figure \ref{figswitch1}. Moreover, when $L$ $=$ $0$ and $\sigma$ is an even function, $\Sc_{_1}$ $=$ $-\Sc_{_{-1}}$ and $\Sc_{_{01}}$ $=$ $-\Sc_{_{0-1}}$. \item[{\bf (2)}] $\ell_-$ $=$ $0$. In this case, the switching regions $\Sc_{_1}$ and $\Sc_{_{0-1}}$ are not empty, and take the form \begin{eqnarray*} \Sc_{_1} \; = \; [\bar x_{_1},\infty) \cap (0,\infty), & & \Sc_{_{0-1}} \; = \; [\bar x_{_{0-1}},\infty), \end{eqnarray*} for some $\bar x_{_{1}}$ $\in$ $\R$ and $\bar x_{_{0-1}}$ $>$ $0$ by Proposition \ref{propcutoff}. However, $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ may be empty or not.
More precisely, we have the three following possibilities: \begin{itemize} \item[(i)] $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ are not empty, and take the form: \begin{eqnarray*} \Sc_{_{-1}} \; = \; (0, -\bar x_{_{-1}}], & & \Sc_{_{01}} \; = \; (0, - \bar x_{_{01}}], \end{eqnarray*} for some $0$ $<$ $- \bar x_{_{01}}$ $\leq$ $-\bar x_{_{-1}}$ by Proposition \ref{propcutoff}. Such a case arises, for example, when $X$ is the IGBM \reff{inGBM} and $L$ is large enough, as shown in Lemma \ref{lemempty2} and Remark \ref{remark41}. This case is depicted in Figure \ref{figswitch1}. \item[(ii)] $\Sc_{_{-1}}$ is not empty, in the form: $\Sc_{_{-1}}$ $=$ $(0, -\bar x_{_{-1}}]$ for some $\bar x_{_{-1}}$ $<$ $0$ by Proposition \ref{propcutoff}, and $\Sc_{_{01}}$ $=$ $\emptyset$. Such a case arises when $\lambda$ $>$ $\rho\eps$ and $L$ $\leq$ $(\lambda+\rho\eps)/\mu$, see Remark \ref{remark42}(i). This is plotted in Figure \ref{figswitch2}. \item[(iii)] Both $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ are empty. Such a case arises when $\lambda$ $\leq$ $\rho\eps$ and $L$ $\leq$ $(\rho\eps-\lambda)/\mu$, see Remark \ref{remark42}(ii). This is plotted in Figure \ref{figswitch3}. Moreover, notice that in such a case we must have $\lambda$ $\leq$ $\rho\eps$ by Lemma \ref{lemempty1}(2)(ii), and so by Proposition \ref{propcutoff}, $\bar x_{_1}$ $\geq$ $\frac{\mu L - \ell_{_1}}{\rho+\mu}$ $>$ $0$, i.e. $\Sc_{_1}$ $=$ $[\bar x_{_1},\infty)$. \end{itemize} \end{itemize} \begin{figure} \caption{\small \sl Regime switching regions in cases (1) and (2)(i).} \label{figswitch1} \end{figure} \begin{figure} \caption{\small \sl Regime switching regions in case (2)(ii).} \label{figswitch2} \end{figure} \begin{figure} \caption{\small \sl Regime switching regions in case (2)(iii).} \label{figswitch3} \end{figure} The next result provides the explicit solution to the optimal switching problem.
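Before stating it, note that the case analysis above already determines the optimal policy up to the four cutoff values. A minimal sketch of the resulting threshold rule in case (1) follows; the cutoff values are hypothetical placeholders (in practice they solve the smooth-fit equations of the next theorem), chosen to respect the inclusions $\Sc_{_{0-1}}\subset\Sc_{_1}$ and $\Sc_{_{01}}\subset\Sc_{_{-1}}$ of Proposition \ref{propcutoff}:

```python
# Threshold rule implied by Proposition propcutoff, case (1) (l_- = -infty).
# Hypothetical cutoffs, satisfying x1 <= x0m1 and xm1 <= x01
# (equivalently S_{0-1} subset S_1 and S_{01} subset S_{-1}).
CUTOFFS = {'x01': 0.8, 'x0m1': 1.2, 'x1': 0.9, 'xm1': 0.7}

def optimal_switch(i, x, c=CUTOFFS):
    """Return the position after an optimal switch at spread level x from
    position i, or i itself if (i, x) lies in the continuation region."""
    if i == 0:
        if x <= -c['x01']:       # open-to-buy region  S_01  = (-inf, -x01]
            return 1
        if x >= c['x0m1']:       # open-to-sell region S_0-1 = [x0m1, inf)
            return -1
    elif i == 1 and x >= c['x1']:     # sell-to-close region S_1  = [x1, inf)
        return 0
    elif i == -1 and x <= -c['xm1']:  # buy-to-close region S_-1 = (-inf, -xm1]
        return 0
    return i   # continuation region: keep the current position

print(optimal_switch(0, -1.0), optimal_switch(1, 1.0), optimal_switch(-1, 0.0))
# 1 0 -1
```

In case (2), the same rule applies on the restricted domain $(0,\infty)$, with the empty regions of cases (2)(ii)-(iii) handled by dropping the corresponding branches.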
\begin{Theorem} \label{theo1} $\bullet$ {\it Case (1): $\ell_-$ $=$ $-\infty$}. The value functions are given by \begin{eqnarray*} v_{_0}(x) &=& \left\{ \begin{array}{cc} A_{_1} \psi_+(x) - \frac{\lambda}{\rho} + g_{_{01}}(x), & x \leq - \bar x_{_{01}}, \\ A_{_0} \psi_+(x) + B_{_0} \psi_-(x), & - \bar x_{_{01}} < x < \bar x_{_{0-1}}, \\ B_{_{-1}} \psi_-(x) - \frac{\lambda}{\rho} + g_{_{0-1}}(x), & x \geq \bar x_{_{0-1}}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_1}(x) &=& \left\{ \begin{array}{cc} A_{_1} \psi_+(x) - \frac{\lambda}{\rho}, & x < \bar x_{_1}, \\ v_{_0}(x) + g_{_{10}}(x), & x \geq \bar x_{_1}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_{-1}}(x) &=& \left\{ \begin{array}{cc} v_{_0}(x) + g_{_{-10}}(x), & x \leq -\bar x_{_{-1}}, \\ B_{_{-1}} \psi_-(x) - \frac{\lambda}{\rho}, & x > -\bar x_{_{-1}}, \end{array} \right. \end{eqnarray*} and the constants $A_{_0}$, $B_{_0}$, $A_{_1}$, $B_{_{-1}}$, $\bar x_{_{01}}$, $\bar x_{_{0-1}}$, $\bar x_{_1}$, $\bar x_{_{-1}}$ are determined by the smooth-fit conditions: \begin{eqnarray*} A_{_1} \psi_+(- \bar x_{_{01}}) - \frac{\lambda}{\rho} + g_{_{01}}(- \bar x_{_{01}}) &=& A_{_0} \psi_+(- \bar x_{_{01}}) + B_{_0} \psi_-(- \bar x_{_{01}}) \\ A_{_1} \psi_+'(- \bar x_{_{01}}) - 1 &=& A_{_0} \psi_+'(- \bar x_{_{01}}) + B_{_0} \psi_-'(- \bar x_{_{01}}) \\ B_{_{-1}} \psi_-(\bar x_{_{0-1}}) - \frac{\lambda}{\rho} + g_{_{0-1}}(\bar x_{_{0-1}}) &=& A_{_0} \psi_+( \bar x_{_{0-1}}) + B_{_0} \psi_-(\bar x_{_{0-1}}) \\ B_{_{-1}} \psi_-'(\bar x_{_{0-1}}) + 1 &=& A_{_0} \psi_+'( \bar x_{_{0-1}}) + B_{_0} \psi_-'(\bar x_{_{0-1}}) \\ A_{_1} \psi_+(\bar x_{_1}) - \frac{\lambda}{\rho} &=& A_{_0} \psi_+(\bar x_{_1}) + B_{_0} \psi_-(\bar x_{_1}) + g_{_{10}}(\bar x_{_1}) \\ A_{_1} \psi_+'(\bar x_{_1}) &=& A_{_0} \psi_+'(\bar x_{_1}) + B_{_0} \psi_-'( \bar x_{_1}) + 1 \\ B_{_{-1}} \psi_-(-\bar x_{_{-1}}) - \frac{\lambda}{\rho} &=& A_{_0} \psi_+(-\bar x_{_{-1}}) + B_{_0} \psi_-(-\bar x_{_{-1}}) + g_{_{-10}}(-\bar
x_{_{-1}}) \\ B_{_{-1}} \psi_-'(-\bar x_{_{-1}}) &=& A_{_0} \psi_+'(-\bar x_{_{-1}}) + B_{_0} \psi_-'(-\bar x_{_{-1}}) - 1. \end{eqnarray*} \noindent $\bullet$ {\it Case (2)(i): $\ell_-$ $=$ $0$, and both $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ are not empty}. The value functions have the same form as Case (1) with the state space domain $(0,\infty)$. \noindent $\bullet$ {\it Case (2)(ii): $\ell_-$ $=$ $0$, $\Sc_{_{-1}}$ is not empty, and $\Sc_{_{01}}$ $=$ $\emptyset$}. The value functions are given by \begin{eqnarray*} v_{_0}(x) &=& \left\{ \begin{array}{cc} A_{_0} \psi_+(x) , & 0< x < \bar x_{_{0-1}}, \\ B_{_{-1}} \psi_-(x) - \frac{\lambda}{\rho} + g_{_{0-1}}(x), & x \geq \bar x_{_{0-1}}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_{-1}}(x) &=& \left\{ \begin{array}{cc} v_{_0}(x) + g_{_{-10}}(x), & 0< x \leq - \bar x_{_{-1}}, \\ B_{_{-1}} \psi_-(x) - \frac{\lambda}{\rho}, & x > - \bar x_{_{-1}}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_1}(x) &=& \left\{ \begin{array}{cc} A_{_1} \psi_+(x) - \frac{\lambda}{\rho}, & 0< x < \max(\bar x_{_1},0), \\ v_{_0}(x) + g_{_{10}}(x), & x \geq \max(\bar x_{_1},0), \end{array} \right. \end{eqnarray*} and the constants $A_{_0}$, $A_{_1}$, $B_{_{-1}}$, $\bar x_{_{0-1}}$ $>$ $0$, $\bar x_{_1}$, $\bar x_{_{-1}}$ $<$ $0$ are determined by the smooth-fit conditions: \begin{eqnarray*} B_{_{-1}} \psi_-(\bar x_{_{0-1}}) - \frac{\lambda}{\rho} + g_{_{0-1}}(\bar x_{_{0-1}}) &=& A_{_0} \psi_+( \bar x_{_{0-1}}) \\ B_{_{-1}} \psi_-'(\bar x_{_{0-1}}) + 1 &=& A_{_0} \psi_+'( \bar x_{_{0-1}})\\ A_{_1} \psi_+(\bar x_{_1}) - \frac{\lambda}{\rho} &=& A_{_0} \psi_+(\bar x_{_1}) + g_{_{10}}(\bar x_{_1}) \\ A_{_1} \psi_+'(\bar x_{_1}) &=& A_{_0} \psi_+'(\bar x_{_1}) +1 \\ B_{_{-1}} \psi_-(-\bar x_{_{-1}}) - \frac{\lambda}{\rho} &=& A_{_0} \psi_+(-\bar x_{_{-1}}) + g_{_{-10}}(-\bar x_{_{-1}}) \\ B_{_{-1}} \psi_-'(-\bar x_{_{-1}}) &=& A_{_0} \psi_+'(-\bar x_{_{-1}}) - 1. 
\end{eqnarray*} \noindent $\bullet$ {\it Case (2)(iii): $\ell_-$ $=$ $0$, and $\Sc_{_{-1}}$ $=$ $\Sc_{_{01}}$ $=$ $\emptyset$}. The value functions are given by \begin{eqnarray*} v_{_0}(x) &=& \left\{ \begin{array}{cc} A_{_0} \psi_+(x) , & 0< x < \bar x_{_{0-1}}, \\ - \frac{\lambda}{\rho} + g_{_{0-1}}(x), & x \geq \bar x_{_{0-1}}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_1}(x) &=& \left\{ \begin{array}{cc} A_{_1} \psi_+(x) - \frac{\lambda}{\rho}, & x < \bar x_{_1}, \\ v_{_0}(x) + g_{_{10}}(x), & x \geq \bar x_{_1}, \end{array} \right. \end{eqnarray*} \begin{eqnarray*} v_{_{-1}}=-\frac{\lambda}{\rho}, \end{eqnarray*} and the constants $A_{_0}$, $A_{_1}$, $\bar x_{_{0-1}}$ $>$ $0$, $\bar x_{_1}$ $>$ $0$, are determined by the smooth-fit conditions: \begin{eqnarray*} - \frac{\lambda}{\rho} + g_{_{0-1}}(\bar x_{_{0-1}}) &=& A_{_0} \psi_+( \bar x_{_{0-1}}) \\ 1 &=& A_{_0} \psi_+'( \bar x_{_{0-1}})\\ A_{_1} \psi_+(\bar x_{_1}) - \frac{\lambda}{\rho} &=& A_{_0} \psi_+(\bar x_{_1}) + g_{_{10}}(\bar x_{_1}) \\ A_{_1} \psi_+'(\bar x_{_1}) &=& A_{_0} \psi_+'(\bar x_{_1}) +1. \end{eqnarray*} \end{Theorem} {\bf Proof.} We consider only cases (1) and (2)(i), since the other cases are dealt with by similar arguments. We have $\Sc_{_{01}} \; = \; (\ell_-, - \bar x_{_{01}}]$, which means that $v_{_0}=v_{_1}+g_{_{01}}$ on $(\ell_-, - \bar x_{_{01}}]$. Moreover, $v_{_1}$ is solution to $\rho v_{_{1}}- \Lc v_{_{1}} +\lambda=0$ on $(\ell_-,\bar x_{_1})$, which combined with the bound in Lemma \ref{lemgrowth}, shows that $v_{_1}$ should be in the form: $v_{_1}$ $=$ $A_{_1} \psi_+ - \frac{\lambda}{\rho}$ on $(\ell_-,\bar x_{_1})$. Since $- \bar x_{_{01}}<\bar x_{_1}$, we deduce that $v_{_0}$ has the form $A_{_1} \psi_+ - \frac{\lambda}{\rho} + g_{_{01}}$ on $(\ell_-, - \bar x_{_{01}}]$.
In the same way, $v_{_{-1}}$ must be of the form $B_{_{-1}} \psi_- - \frac{\lambda}{\rho}$ on $(-\bar x_{_{-1}},\infty)$, and $v_{_0}$ $=$ $B_{_{-1}} \psi_- - \frac{\lambda}{\rho} + g_{_{0-1}}$ on $[\bar x_{_{0-1}}, \infty)$. Since $v_{_0}$ is a solution to $\rho v_{_{0}}- \Lc v_{_{0}} =0$ on $(- \bar x_{_{01}},\bar x_{_{0-1}})$, it must be of the form $v_{_0}$ $=$ $A_{_0} \psi_+ + B_{_0} \psi_-$ on $(- \bar x_{_{01}},\bar x_{_{0-1}})$. We have $\Sc_{_1} \; = \; [\bar x_{_1},\infty)$, which means that $v_{_1}$ $=$ $v_{_0}+g_{_{10}}$ on $[\bar x_{_1},\infty)$, and $\Sc_{_{-1}} \; = \; (\ell_-, -\bar x_{_{-1}}]$, which means that $v_{_{-1}}=v_{_0}+g_{_{-10}}$ on $(\ell_-, -\bar x_{_{-1}}]$. From Proposition \ref{propcutoff} we know that $\bar x_{_1}$ $\leq$ $\bar x_{_{0-1}}$ and $-\bar x_{_{01}}$ $\leq$ $-\bar x_{_{-1}}$. The smooth-fit property of the value functions then yields the smooth-fit conditions stated above, and the cut-off points are computed by solving these quasi-algebraic equations.
\ep \begin{Remark} {\rm {\bf 1.} In Case (1) and Case(2)(i) of Theorem \ref{theo1}, the smooth-fit conditions system is written as: \begin{eqnarray} \label{sys1} \left[ \begin{array}{cccc} \psi_+(-\bar{x}_{_{01}}) &0 & -\psi_+(-\bar{x}_{_{01}}) & -\psi_-(-\bar{x}_{_{01}})\\ 0 & \psi_-(\bar{x}_{_{0-1}}) & -\psi_+(\bar{x}_{_{0-1}})& -\psi_-(\bar{x}_{_{0-1}})\\ \psi_+(\bar{x}_{_1})& 0 & -\psi_+(\bar{x}_{_1})& -\psi_-(\bar{x}_{_1})\\ 0 & \psi_-(-\bar{x}_{_{-1}}) & -\psi_+(-\bar{x}_{_{-1}})&-\psi_-(-\bar{x}_{_{-1}}) \end{array} \right] \times \left[ \begin{array}{c} A_{_1}\\B_{_{-1}}\\A_{_0}\\B_{_0} \end{array} \right] = \left[ \begin{array}{c} \lambda \rho^{-1}-g_{_{01}}(-\bar{x}_{_{01}})\\ \lambda \rho^{-1}-g_{_{0-1}}(\bar{x}_{_{0-1}})\\ \lambda \rho^{-1}+g_{_{10}}(\bar{x}_{_{1}})\\ \lambda \rho^{-1}+g_{_{-10}}(-\bar{x}_{_{-1}}) \end{array} \right] \notag &\\ \end{eqnarray} and \begin{eqnarray} \label{sys2} \left[ \begin{array}{cccc} \psi_+^{'}(-\bar{x}_{_{01}}) &0 & -\psi_+^{'}(-\bar{x}_{_{01}}) & -\psi_-^{'}(-\bar{x}_{_{01}})\\ 0 & \psi_-^{'}(\bar{x}_{_{0-1}}) & -\psi_+^{'}(\bar{x}_{_{0-1}})& -\psi_-^{'}(\bar{x}_{_{0-1}})\\ \psi_+^{'}(\bar{x}_{_1})& 0 & -\psi_+^{'}(\bar{x}_{_1})& -\psi_-^{'}(\bar{x}_{_1})\\ 0 & \psi_-^{'}(-\bar{x}_{_{-1}}) & -\psi_+^{'}(-\bar{x}_{_{-1}})&-\psi_-^{'}(-\bar{x}_{_{-1}}) \end{array} \right] \times \left[ \begin{array}{c} A_{_1}\\B_{_{-1}}\\A_{_0}\\B_{_0} \end{array} \right] = \left[ \begin{array}{c} 1\\-1\\1\\-1 \end{array} \right]. 
\end{eqnarray} Denote by $M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ and $M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ the matrices: \begin{eqnarray*} M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}}) &=& \left[ \begin{array}{cccc} \psi_+(-\bar{x}_{_{01}}) &0 & 0 & -\psi_-(-\bar{x}_{_{01}})\\ 0 & \psi_-(\bar{x}_{_{0-1}}) & -\psi_+(\bar{x}_{_{0-1}})& 0\\ \psi_+(\bar{x}_{_1})& 0 & 0& -\psi_-(\bar{x}_{_1})\\ 0 & \psi_-(-\bar{x}_{_{-1}}) & -\psi_+(-\bar{x}_{_{-1}})&0 \end{array} \right], \\ M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}}) &=& \left[ \begin{array}{cccc} \psi_+^{'}(-\bar{x}_{_{01}}) &0 & 0 & -\psi_-^{'}(-\bar{x}_{_{01}})\\ 0 & \psi_-^{'}(\bar{x}_{_{0-1}}) & -\psi_+^{'}(\bar{x}_{_{0-1}})& 0\\ \psi_+^{'}(\bar{x}_{_1})& 0 & 0& -\psi_-^{'}(\bar{x}_{_1})\\ 0 & \psi_-^{'}(-\bar{x}_{_{-1}}) & -\psi_+^{'}(-\bar{x}_{_{-1}})&0 \end{array} \right]. \end{eqnarray*} Once $M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ and $M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ are nonsingular, straightforward computations from (\ref{sys1}) and (\ref{sys2}) lead to the following equation satisfied by $\bar x_{_{01}}$, $\bar x_{_{0-1}}$, $\bar x_{_1}$, $\bar x_{_{-1}}$: \begin{eqnarray*} \label{sys} M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})^{-1} \left[ \begin{array}{c} 1\\-1\\1\\-1 \end{array} \right] &=& M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})^{-1} \left[ \begin{array}{c} \lambda \rho^{-1}-g_{_{01}}(-\bar{x}_{_{01}})\\ \lambda \rho^{-1}-g_{_{0-1}}(\bar{x}_{_{0-1}})\\ \lambda \rho^{-1}+g_{_{10}}(\bar{x}_{_{1}})\\ \lambda \rho^{-1}+g_{_{-10}}(-\bar{x}_{_{-1}}) \end{array} \right]. 
\end{eqnarray*} This system can be separated into two independent systems: \begin{eqnarray}\label{quasi1} \left[ \begin{array}{cc} \psi_+^{'}(-\bar{x}_{_{01}}) & -\psi_-^{'}(-\bar{x}_{_{01}})\\ \psi_+^{'}(\bar{x}_{_1})& -\psi_-^{'}(\bar{x}_{_1})\\ \end{array} \right]^{-1} \times \left[ \begin{array}{c} 1\\1 \end{array} \right]=& \notag \\ \left[ \begin{array}{cc} \psi_+(-\bar{x}_{_{01}}) & -\psi_-(-\bar{x}_{_{01}})\\ \psi_+(\bar{x}_{_1}) & -\psi_-(\bar{x}_{_1})\\ \end{array} \right]^{-1} \times \left[ \begin{array}{c} \lambda \rho^{-1}-g_{_{01}}(-\bar{x}_{_{01}})\\ \lambda \rho^{-1}+g_{_{10}}(\bar{x}_{_{1}})\\ \end{array} \right] \end{eqnarray} and \begin{eqnarray} \label{quasi2} \left[ \begin{array}{cc} \psi_-^{'}(\bar{x}_{_{0-1}}) & -\psi_+^{'}(\bar{x}_{_{0-1}})\\ \psi_-^{'}(-\bar{x}_{_{-1}}) & -\psi_+^{'}(-\bar{x}_{_{-1}}) \end{array} \right]^{-1} \times \left[ \begin{array}{c} -1\\-1 \end{array} \right]=& \notag\\ \left[ \begin{array}{cc} \psi_-(\bar{x}_{_{0-1}}) & -\psi_+(\bar{x}_{_{0-1}})\\ \psi_-(-\bar{x}_{_{-1}}) & -\psi_+(-\bar{x}_{_{-1}}) \end{array} \right]^{-1} \times \left[ \begin{array}{c} \lambda \rho^{-1}-g_{_{0-1}}(\bar{x}_{_{0-1}})\\ \lambda \rho^{-1}+g_{_{-10}}(-\bar{x}_{_{-1}}) \end{array} \right]. \end{eqnarray} We then obtain the thresholds $\bar x_{_{01}}$, $\bar x_{_{0-1}}$, $\bar x_{_1}$, $\bar x_{_{-1}}$ by solving the two quasi-algebraic systems (\ref{quasi1}) and (\ref{quasi2}). Notice that for the examples of the OU and IGBM processes, the matrices $M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ and $M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$ are nonsingular, so that their inverses are well-defined. Indeed, we have: $\psi_+^{''}$ $>$ $0$ and $\psi_-^{''}$ $>$ $0$.
This property is trivial for the OU process, while for the IGBM: \begin{eqnarray*} \frac{d^2 \psi_+(x)}{dx^2} &=& \frac{d}{dx}\left( \frac{a}{x^{a+1}}(-U(a+1,b,\frac{c}{x})(a-b+1))\right)\\ &=& \frac{a(a+1)}{x^{a+2}}U(a+2,b,\frac{c}{x})(a-b+1)(a-b+2) >0, \ \ \forall x>0. \end{eqnarray*} \begin{eqnarray*} \frac{d \psi_-(x)}{dx} &=& -\frac{ax^{-a-2}(bxM(a,b,\frac{c}{x})+cM(a+1,b+1,\frac{c}{x}))}{b}, \ \ \forall x>0. \end{eqnarray*} Thus, $\psi_-^{'}$ is strictly increasing since $M(a,b,\frac{c}{x})$ is strictly decreasing, and so $\psi_-^{''}$ $>$ $0$. Moreover, we have: \begin{eqnarray} & & \text{det} \; \big[ M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}}) \big] \\ &=& \left( \psi_-(-\bar{x}_{_{01}})\psi_+(\bar{x}_{_1})- \psi_-(\bar{x}_{_1})\psi_+(-\bar{x}_{_{01}}) \right) \left( \psi_-(\bar{x}_{_{0-1}})\psi_+(-\bar{x}_{_{-1}})- \psi_-(-\bar{x}_{_{-1}})\psi_+(\bar{x}_{_{0-1}})\right). \nonumber \end{eqnarray} Recalling that $- \bar x_{_{01}}$ $<$ $\bar x_{_1}$ and $ \bar x_{_{0-1}}$ $>$ $-\bar x_{_{-1}}$ (see Proposition \ref{propcutoff}), and since $\psi_+$ is a strictly increasing positive function, while $\psi_-$ is a strictly decreasing positive function, we have: $ \psi_-(-\bar{x}_{_{01}})\psi_+(\bar{x}_{_1})- \psi_-(\bar{x}_{_1})\psi_+(-\bar{x}_{_{01}})$ $>$ $0$ and $\psi_-(\bar{x}_{_{0-1}})\psi_+(-\bar{x}_{_{-1}})- \psi_-(-\bar{x}_{_{-1}})\psi_+(\bar{x}_{_{0-1}}) $ $<$ $0$, which implies the nonsingularity of the matrix $M(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$. On the other hand, we have: \begin{eqnarray} & & \text{det} \big[ M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}}) \big] \\ &=& \left( \psi_-^{'}(-\bar{x}_{_{01}})\psi_+^{'}(\bar{x}_{_1})- \psi_-^{'}(\bar{x}_{_1})\psi_+^{'}(-\bar{x}_{_{01}}) \right) \left( \psi_-^{'}(\bar{x}_{_{0-1}})\psi_+^{'}(-\bar{x}_{_{-1}})- \psi_-^{'}(-\bar{x}_{_{-1}})\psi_+^{'}(\bar{x}_{_{0-1}})\right).
\nonumber \end{eqnarray} Since $\psi_+'$ is a strictly increasing positive function and $\psi_-'$ is a strictly increasing function, with $\psi_-'$ $<$ $0$, we get: $ \psi_-^{'}(-\bar{x}_{_{01}})\psi_+^{'}(\bar{x}_{_1})- \psi_-^{'}(\bar{x}_{_1})\psi_+^{'}(-\bar{x}_{_{01}})$ $<$ $0$ and $\psi_-^{'}(\bar{x}_{_{0-1}})\psi_+^{'}(-\bar{x}_{_{-1}})- \psi_-^{'}(-\bar{x}_{_{-1}})\psi_+^{'}(\bar{x}_{_{0-1}}) $ $>$ $0$, which implies the nonsingularity of the matrix $M_x(\bar x_{_{01}},\bar x_{_{0-1}},\bar x_{_1},\bar x_{_{-1}})$. \noindent {\bf 2.} In Case (2)(ii) of Theorem \ref{theo1}, we obtain the thresholds $\bar x_{_{0-1}}$ $>$ $0$, $\bar x_{_{-1}}$ $<$ $0$ from the smooth-fit conditions, which lead to the quasi-algebraic system: \begin{eqnarray} \label{sysx0t1xt12} \left[ \begin{array}{cc} - \psi_-^{'}(\bar{x}_{_{0-1}}) & \psi_+^{'}(\bar{x}_{_{0-1}})\\ -\psi_-^{'}(-\bar{x}_{_{-1}}) & \psi_+^{'}(-\bar{x}_{_{-1}}) \end{array} \right]^{-1} \times \left[ \begin{array}{c} 1\\1 \end{array} \right]=& \notag \\ \left[ \begin{array}{cc} - \psi_-(\bar{x}_{_{0-1}}) & \psi_+(\bar{x}_{_{0-1}})\\ -\psi_-(-\bar{x}_{_{-1}}) & \psi_+(-\bar{x}_{_{-1}}) \end{array} \right]^{-1} \times \left[ \begin{array}{c} -\lambda \rho^{-1}+g_{_{0-1}}(\bar{x}_{_{0-1}})\\ -\lambda \rho^{-1}-g_{_{-10}}(-\bar{x}_{_{-1}}) \end{array} \right]. \end{eqnarray} The nonsingularity of the matrices above is checked as in Cases (1) and (2)(i) for the examples of the OU and IGBM processes. Note that $\bar x_{_{0-1}}$, $\bar x_{_{-1}}$ are independent of $\bar x_{_1}$, which is obtained from the equation: \begin{eqnarray} \label{systx12} \left( -\lambda \rho^{-1}-g_{_{10}}(\bar{x}_{_{1}})\right)\psi_+^{'}(\bar{x}_{_{1}})+\psi_+(\bar{x}_{_{1}})=0. \end{eqnarray} When $\bar x_{_1}$ $\leq$ $0$, this means that $\Sc_{_1}$ $=$ $(0,\infty)$.
\noindent {\bf 3.} In Case (2)(iii) of Theorem \ref{theo1}, the threshold $\bar x_{_1}$ $>$ $0$ is obtained from the equation \reff{systx12}, while the threshold $\bar x_{_{0-1}}$ $>$ $0$ is derived from the smooth-fit condition leading to the quasi-algebraic equation: \begin{eqnarray} \label{systx13} \left( \lambda \rho^{-1}-g_{_{0-1}}(\bar{x}_{_{0-1}})\right)\psi_+^{'}(\bar{x}_{_{0-1}})+\psi_+(\bar{x}_{_{0-1}})=0. \end{eqnarray} } \ep \end{Remark} \section{Numerical examples} \setcounter{equation}{0} \setcounter{Assumption}{0} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} In this section, we consider the OU process and the IGBM as examples. \begin{itemize} \item[{\bf 1.}] We first consider the example of the Ornstein-Uhlenbeck process: \end{itemize} \begin{eqnarray*} dX_t &=& -\mu X_t dt + \sigma dW_t, \end{eqnarray*} with $\mu$, $\sigma$ positive constants. In this case, the two fundamental solutions to \reff{ode2} are given by \begin{eqnarray*} \psi_+(x) = \int_0^\infty t^{\frac{\rho}{\mu}-1}\exp\big(-\frac{t^2}{2} + \frac{\sqrt{2\mu}}{\sigma} x t\big) dt, & & \psi_-(x) = \int_0^\infty t^{\frac{\rho}{\mu}-1}\exp\big(-\frac{t^2}{2} - \frac{\sqrt{2\mu}}{\sigma} x t\big) dt, \end{eqnarray*} and satisfy assumption \reff{limpsix}. We consider a numerical example with the following specifications: $\mu=0.8$, $\sigma=0.5$, $\rho=0.1$, $\lambda=0.07$, $\varepsilon=0.005$, $L=0$. \begin{Remark} \label{remarkL} {\rm We can reduce the case of a nonzero long-run mean $L \neq 0$ of the OU process to the case $L=0$ by taking the process $Y_t=X_t-L$ as the spread process, because in this case $\sigma$ is constant. As a consequence, the cut-off points are simply translated by $L$, as illustrated in Figure \ref{fig:fig2}.
\ep } \end{Remark} We recall some notation:\\ $\Sc_{_{01}} \; = \; (-\infty, - \bar x_{_{01}}]$ is the open-to-buy region,\\ $\Sc_{_{0-1}} \; = \; [\bar x_{_{0-1}},\infty) $ is the open-to-sell region,\\ $\Sc_{_1} \; = \; [\bar x_{_1},\infty)$ is the sell-to-close region from the long position $i$ $=$ $1$,\\ $\Sc_{_{-1}} \; = \; (-\infty,-\bar x_{_{-1}}]$ is the buy-to-close region from the short position $i$ $=$ $-1$. We solve the two systems \reff{quasi1} and \reff{quasi2}, which give \begin{eqnarray*} \bar x_{_{01}}= 0.2094, \; \bar x_{_1}=0.0483, \; \bar x_{_{-1}}= 0.0483, \; \bar x_{_{0-1}}= 0.2094, \end{eqnarray*} and confirm the symmetry property in Proposition \ref{ldx}. \begin{figure} \caption{\small \sl Simulation of trading strategies} \label{fig:fig1} \end{figure} \begin{figure} \caption{\small \sl Value functions} \label{fig:figvalue0} \end{figure} In Figure \ref{fig:figvalue0}, we see the symmetry property of the value functions, as shown in Proposition \ref{ldx}. Moreover, we can see that $v_{_1}$ is a nondecreasing function while $v_{_{-1}}$ is nonincreasing. The next figure shows the dependence of the cut-off points on the parameters. \begin{figure} \caption{\small \sl The dependence of the cut-off points on the parameters} \label{fig:fig2} \end{figure} In Figure \ref{fig:fig2}, $\mu$ measures the speed of mean reversion, and we see that the length of the intervals $\Sc_{_{01}}$, $\Sc_{_{0-1}}$ increases and the length of the intervals $\Sc_{_1}$, $\Sc_{_{-1}}$ decreases as $\mu$ gets bigger. The length of the intervals $\Sc_{_{01}}$, $\Sc_{_{0-1}}$, $\Sc_{_1}$, and $\Sc_{_{-1}}$ decreases as the volatility $\sigma$ gets bigger. $L$ is the long-run mean, to which the process tends to revert, and we see that the cut-off points translate with $L$.
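Solving the systems \reff{quasi1} and \reff{quasi2} requires accurate evaluations of $\psi_+$, $\psi_-$ and their derivatives. As a numerical sanity check (an illustration, not part of the original analysis), the following sketch verifies that the integral representations above indeed solve $\rho \psi - \Lc \psi = 0$, under the assumption that $\Lc$ is the OU generator $\Lc \psi(x) = -\mu x \psi'(x) + \frac{\sigma^2}{2} \psi''(x)$; the derivatives are obtained by differentiating under the integral sign.

```python
import numpy as np
from scipy.integrate import quad

mu, sigma, rho = 0.8, 0.5, 0.1          # parameters of the numerical example
k = np.sqrt(2.0 * mu) / sigma           # coefficient of x*t in the exponent
nu = rho / mu                           # exponent of t in the integrand

def moment(x, p, sign):
    """int_0^inf t^(nu-1+p) * exp(-t^2/2 + sign*k*x*t) dt  (sign=+1 for psi_+, -1 for psi_-)."""
    f = lambda t: t**(nu - 1 + p) * np.exp(-t**2 / 2 + sign * k * x * t)
    # split at t=1: the integrand has an integrable singularity at t=0 when nu < 1
    return quad(f, 0.0, 1.0)[0] + quad(f, 1.0, np.inf)[0]

def ode_residual(x, sign):
    """Residual of rho*psi - (-mu*x*psi' + 0.5*sigma^2*psi'') at x."""
    psi = moment(x, 0, sign)
    dpsi = sign * k * moment(x, 1, sign)      # differentiation under the integral
    ddpsi = k**2 * moment(x, 2, sign)
    return rho * psi + mu * x * dpsi - 0.5 * sigma**2 * ddpsi
```

For the parameter values above, the residuals are at the level of the quadrature error, and the monotonicity of $\psi_+$ (increasing) and $\psi_-$ (decreasing) used throughout can be checked on a grid in the same way.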
We now look at the parameters that do not affect the dynamics of the spread: the length of the intervals $\Sc_{_{01}}$, $\Sc_{_{0-1}}$, $\Sc_{_1}$, and $\Sc_{_{-1}}$ decreases as the transaction fee $\eps$ gets bigger. Finally, the length of the intervals $\Sc_{_{01}}$, $\Sc_{_{0-1}}$ decreases and the length of the intervals $\Sc_{_1}$, $\Sc_{_{-1}}$ increases as the penalty factor $\lambda$ gets larger: a larger penalty lengthens the holding time in the flat position $i=0$ and increases the incentive to return to the flat position from the other positions. \begin{itemize} \item[{\bf 2.}] We now consider the example of the Inhomogeneous Geometric Brownian Motion, which has non-constant volatility; see Zhao \cite{zhao2009inhomogeneous} for more details: \end{itemize} \begin{eqnarray*} dX_t =\mu(L- X_t) dt + \sigma X_t dW_t, \;\;\; X_{_0}>0, \end{eqnarray*} where $\mu$, $L$ and $\sigma$ are positive constants. Recall that in this case, the two fundamental solutions to \reff{ode2} are given by \begin{eqnarray*} \psi_+(x) = x^{-a}U(a,b,\frac{c}{x}), & & \psi_-(x) = x^{-a}M(a,b,\frac{c}{x}), \end{eqnarray*} where \begin{eqnarray*} a &=& \frac{\sqrt{\sigma^4+4(\mu+2\rho)\sigma^2+4\mu^2}-(2\mu+\sigma^2)}{2\sigma^2}>0, \\ b &=& \frac{2\mu}{\sigma^2}+2a+2, \;\;\;\;\; c=\frac{2\mu L}{\sigma^2}, \end{eqnarray*} and $M$ and $U$ are the confluent hypergeometric functions of the first and second kind, respectively. We can easily check that $\psi_-$ is a monotone decreasing function, while \begin{eqnarray*} \frac{d \psi_+(x)}{dx} &=& \frac{a}{x^{a+1}}(-U(a+1,b,\frac{c}{x})(a-b+1))>0, \ \ \forall x>0, \end{eqnarray*} so that $\psi_+$ is a monotone increasing function. Moreover, by the asymptotic properties of the confluent hypergeometric functions (cf. \cite{abramowitz1972handbook}), the fundamental solutions $\psi_+$ and $\psi_-$ satisfy the condition \reff{limpsix}. \noindent $\bullet$ {\it Case (2)(i)}: Both $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ are not empty.
Let us consider a numerical example with the following specifications: $\mu=0.8$, $\sigma=0.5$, $\rho=0.1$, $\lambda=0.07$, $\varepsilon=0.005$, and we set $L=10$. Note that in this case the condition in Lemma \ref{lemempty2} is satisfied. We solve the two systems (\ref{quasi1}) and (\ref{quasi2}), which give \begin{eqnarray*} \bar x_{_{01}}= - 8.2777, \; \bar x_{_1}= 9.3701, \; \bar x_{_{-1}} = - 8.4283, \; \bar x_{_{0-1}}= 9.5336. \end{eqnarray*} \begin{figure} \caption{\small \sl Value functions} \label{fig:figvalue1} \end{figure} In Figure \ref{fig:figvalue1}, we can see that $v_{_1}$ is nondecreasing while $v_{_{-1}}$ is nonincreasing. Moreover, $v_{_1}$ is always larger than $v_{_0}$ and $v_{_{-1}}$. Figure \ref{fig:fig3} shows the dependence of the cut-off points on the parameters (note that the condition in Lemma \ref{lemempty2} is satisfied for all parameters in this figure). \begin{figure} \caption{\small \sl The dependence of the cut-off points on the parameters} \label{fig:fig3} \end{figure} We can make the same comments as in the case of the OU process, except for the dependence with respect to the long-run mean $L$: when $L$ increases, the movement of the cut-off points is no longer a simple translation, due to the non-constant volatility. \noindent $\bullet$ {\it Case (2)(ii)}: $\Sc_{_{01}}$ is empty. Let us consider a numerical example with the following specifications: $\mu=0.8$, $\sigma=0.3$, $\rho=0.1$, $\lambda= 0.35$, $\varepsilon=0.55$, and $L=0.5$. We solve the system (\ref{sysx0t1xt12}) and the equation (\ref{systx12}), which give \begin{eqnarray*} \bar x_{_1}= 0.1187, \; \bar x_{_{-1}} = -0.8349, \; \bar x_{_{0-1}}= 2.7504. \end{eqnarray*} \noindent $\bullet$ {\it Case (2)(iii)}: Both $\Sc_{_{-1}}$ and $\Sc_{_{01}}$ are empty. Let us consider a numerical example with the following specifications: $\mu=0.8$, $\sigma=0.3$, $\rho=0.2$, $\lambda=0.05$, $\varepsilon=0.65$, and $L=0.1$.
The two equations (\ref{systx13}) and (\ref{systx12}) give $$\bar x_{_{1}} = 0.4293, \; \bar x_{_{0-1}}= 0.9560.$$ \end{document}
\begin{document} \preprint{APS/123-QED} \title{Continuous-variable quantum key distribution \\ with non-Gaussian quantum catalysis} \author{Ying Guo} \affiliation{School of Information Science and Engineering, Central South University, Changsha 410083, China} \author{Wei Ye} \affiliation{School of Information Science and Engineering, Central South University, Changsha 410083, China} \author{Hai Zhong} \affiliation{School of Information Science and Engineering, Central South University, Changsha 410083, China} \author{Qin Liao} \email{Corresponding author: [email protected]}\affiliation{School of Information Science and Engineering, Central South University, Changsha 410083, China} \affiliation{School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore} \date{\today} \begin{abstract} The non-Gaussian operation can be used not only to enhance and distill the entanglement between Gaussian entangled states, but also to improve quantum communications. In this paper, we propose a non-Gaussian continuous-variable quantum key distribution (CVQKD) scheme using quantum catalysis (QC), an intriguing non-Gaussian operation that can be implemented with current technologies. We perform quantum catalysis on both ends of the Einstein-Podolsky-Rosen (EPR) pair prepared by the sender, Alice, and find that for single-photon QC-CVQKD, the bilateral symmetrical quantum catalysis (BSQC) performs better than the single-side quantum catalysis (SSQC). Using the technique of the integral within an ordered product (IWOP) of operators, we find that the quantum catalysis operation can improve the entanglement property of Gaussian entangled states by enhancing the success probability of the non-Gaussian operation, leading to the improvement of the QC-CVQKD system.
As a comparison, the QC-CVQKD system involving zero-photon and single-photon quantum catalysis outperforms the previous non-Gaussian CVQKD scheme via photon subtraction in terms of secret key rate, maximal transmission distance and tolerable excess noise. \end{abstract} \maketitle \section{Introduction} Quantum key distribution (QKD) \cite{1,2,3,4,5}, as one of the mature practical applications of quantum information processing, allows two distant legitimate parties (a sender, Alice, and a receiver, Bob) to establish a set of secure keys even in the presence of an untrusted environment controlled by an eavesdropper (Eve), and its unconditional security is guaranteed by the laws of quantum physics, e.g., the uncertainty principle \cite{6} and the no-cloning theorem \cite{7}. In general, QKD includes two main families, namely discrete-variable quantum key distribution (DVQKD) and continuous-variable quantum key distribution (CVQKD) \cite{2,8,9,10,11,12,13,14,15}. In contrast to DVQKD, which is limited by the performance of single-photon detectors, in CVQKD the sender Alice encodes information on the quadratures of the optical field with Gaussian modulation and the receiver Bob decodes the secret information with high-speed, high-efficiency homodyne or heterodyne detection, so that CVQKD has become a center of attention in recent years \cite{16,17}. In addition, since the security of the Gaussian-modulated CVQKD protocols against collective attacks \cite{16,18} and coherent attacks \cite{19,20} has been established and validated experimentally \cite{21,22,23}, the Gaussian-modulated CVQKD protocols hold promise for long-distance communication. Among them, the Grosshans-Grangier 2002 (GG02) protocol \cite{17} performs outstandingly over short distances, but unfortunately falls short of its DVQKD counterpart in terms of secure transmission distance.
To date, many remarkable theoretical and experimental efforts have been devoted to extending the maximal transmission distance of CVQKD systems while keeping a high rate \cite{23,24,25,26,27,28}. By using multidimensional reconciliation protocols in the regime of low signal-to-noise ratio (SNR) \cite{23}, it was demonstrated experimentally that CVQKD over an $80$ km transmission distance can be realized. The reason is that multidimensional reconciliation amounts, in a sense, to designing a reconciliation code that remains highly efficient even at low SNR, which increases the secure distance \cite{28}. Alternatively, discrete modulation protocols such as the four-state protocol \cite{13,27,29,30} and the eight-state protocol \cite{31} were shown to improve the secure distance, as there exist suitable error-correction codes with high efficiency for the discrete modulation values at low SNR. For the eight-state protocol in particular, not only can the secret key rate be improved, but a transmission distance of more than $100$ km can also be achieved \cite{31,32}. From a practical point of view, the maximal transmission distance and the unconditional security of the secret key are usually degraded by environmental noise and dissipation. To address these problems, the methods of source monitoring \cite{33} and the linear optics cloning machine \cite{34} have subsequently been proposed. Thanks to the development of experimental techniques, on the other hand, some quantum operations have been used to improve the performance of CVQKD in terms of secret key rate and tolerable excess noise. For example, a heralded noiseless amplifier \cite{26,27,35} was proposed to improve the maximal transmission distance, roughly by the equivalent of $20\log_{10}g$ dB of losses, resulting from the compensation of the channel losses \cite{27}.
Recently, since the non-Gaussian operation can be used to improve entanglement \cite{36,37,38} and quantum teleportation in CV systems \cite{39,40}, the photon-subtraction operation, which is one of the non-Gaussian operations, has been proposed to improve the secret key rate, the maximal tolerable excess noise and the transmission distance of the CVQKD protocol \cite{11,14,15,29}. In particular, single-photon subtraction in the enhanced CVQKD protocol outperforms the other numbers of subtracted photons. Unfortunately, for a two-mode squeezed vacuum (TMSV) state with variance $V=20$, the success probability of implementing this single-photon subtraction operation is limited to below $0.25$, which may cause more information between Alice and Bob to be lost in the process of extracting the secret key. In order to overcome this limitation, in this paper we propose an improved modulation scheme for CVQKD using another non-Gaussian operation, quantum catalysis (QC) \cite{41}, which can be implemented with current experimental technologies. Attractively, the quantum catalysis operation is a feasible way to enhance the nonclassicality \cite{42} and the entanglement property of Gaussian entangled states \cite{43}, and has thereby become one of the research hotspots in quantum physics. In contrast to the previously studied photon-subtraction operations, no photon is subtracted or added in the quantum catalysis process; nevertheless, quantum catalysis can be applied to facilitate the conversion of the target ensemble, which can effectively prevent the loss of information. Numerical simulation shows that the entanglement and the success probability of implementing quantum catalysis can be improved efficiently. Specifically, the success probability of implementing zero-photon quantum catalysis can be dramatically enhanced when compared with the previous CVQKD with single-photon subtraction.
In addition, we illustrate the performance of QC-CVQKD with different numbers of catalyzed photons, and find that zero-photon and single-photon catalysis perform well when optimized over the transmittance $T$ of Alice's beam splitter (BS). This paper is organized as follows. In Sec.~II, we construct the quantum catalysis operator and detail the process of QC-CVQKD. In Sec.~III, the success probability and the entanglement property of implementing quantum catalysis are analyzed, and the security analysis of the QC-CVQKD system is subsequently discussed. Finally, conclusions are drawn in Sec.~IV. \section{Quantum catalysis-based CVQKD} To make the derivation self-contained, we construct the quantum catalysis operator by using the technique of the integral within an ordered product (IWOP) of operators \cite{44}, and then detail the QC-CVQKD. \subsection{Quantum catalysis operation} \begin{figure} \caption{ (Color online) Schematic diagram of the non-Gaussian operations. (a) The quantum catalysis (QC). An $n$-photon Fock state $\left\vert n\right\rangle $ in auxiliary mode C is split on the asymmetrical beam splitter (BS) with transmittance $T_{2}$.} \label{Fig.1} \end{figure} As shown in Fig.~1(a), an $n$-photon Fock state $\left \vert n\right \rangle $ in auxiliary mode C is injected at one of the input ports of the BS with transmittance $T_{2}$, and simultaneously detected at the corresponding output port of the BS, which constitutes the so-called $n$-photon catalysis. In fact, this quantum catalysis process is often regarded as an equivalent operator $ \hat{O}_{n}$ given by \begin{equation} \hat{O}_{n}\equiv_{C}\left \langle n\right \vert B\left( T_{2}\right) \left \vert n\right \rangle _{C}, \label{1} \end{equation} where $B\left( T_{2}\right) $ is the BS operator with transmittance $T_{2}$.
To obtain the specific expression of the equivalent operator $\hat{O}_{n}$, we employ the normally ordered form of $B\left( T_{2}\right) $ obtained by the IWOP technique and the coherent state representation of the Fock state $\left \vert n\right \rangle $, which are respectively expressed as $B\left( T_{2}\right) =: \exp[(\sqrt{T_{2}}-1)(b^{\dag}b+c^{\dag}c)+\left( c^{\dag }b-cb^{\dagger}\right) \sqrt{1-T_{2}}]:$ and $\left \vert n\right \rangle =1/ \sqrt{n!}\frac{\partial^{n}}{\partial \beta^{n}}\left \Vert \beta \right \rangle |_{\beta=0}$, where the notations :$\cdot$: and $\left \Vert \beta \right \rangle =\exp(\beta c^{\dagger})\left \vert 0\right \rangle $ represent the normal ordering of an operator and an un-normalized coherent state, respectively. As a result, Eq.(\ref{1}) can be written as \begin{equation} \hat{O}_{n}=:L_{n}\left( \frac{1-T_{2}}{T_{2}}b^{\dagger}b\right) :\left( \sqrt{T_{2}}\right) ^{b^{\dagger}b+n}, \label{2} \end{equation} where $L_{n}\left( \cdot \right) $ denotes the Laguerre polynomials (see Refs.~\cite{42,43} for the detailed calculation). By using the generating function of the Laguerre polynomials, i.e., \begin{equation} L_{n}\left( x\right) =\frac{\partial^{n}}{n!\partial \gamma^{n}}\left \{ \frac{e^{\frac{-x\gamma}{1-\gamma}}}{1-\gamma}\right \} _{\gamma=0}, \label{3} \end{equation} and the operator relation $e^{\lambda b^{\dagger}b}=:\exp \left \{ \left( e^{\lambda}-1\right) b^{\dagger}b\right \} :$, Eq.(\ref{2}) can be further rewritten as \begin{equation} \hat{O}_{n}=G_{T_{2}}\left( b^{\dagger}b\right) \left( \sqrt{T_{2}}\right) ^{b^{\dagger}b+n}, \label{4} \end{equation} where \begin{equation} G_{T_{2}}\left( b^{\dagger}b\right) =\frac{\partial^{n}}{n!\partial \gamma^{n}}\left \{ \frac{1}{1-\gamma}\left( \frac{1-\gamma/T_{2}}{1-\gamma } \right) ^{b^{\dagger}b}\right \} _{\gamma=0}. \label{5} \end{equation} From Eq.(\ref{4}), we see that the quantum catalysis operation is indeed a non-Gaussian operation.
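Since the catalysis operator conserves the photon number of mode B, Eq.(\ref{4}) can be checked numerically against the defining matrix element in Eq.(\ref{1}). The following sketch (an illustration under the convention $B(T_{2})=e^{\theta(c^{\dagger}b-cb^{\dagger})}$ with $\cos^{2}\theta=T_{2}$, not code from this paper) builds the BS unitary in a truncated Fock space and compares the diagonal element $\langle n,k|B(T_{2})|n,k\rangle$ with the eigenvalues predicted by Eqs.(\ref{4})-(\ref{5}), which for $n=0$ and $n=1$ reduce to $T_{2}^{k/2}$ and $T_{2}^{(k-1)/2}\left(T_{2}-k(1-T_{2})\right)$, respectively.

```python
import numpy as np
from scipy.linalg import expm

N = 14                                      # Fock cutoff; blocks with n+k < N are exact
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # single-mode annihilation operator
I = np.eye(N)
b = np.kron(a, I)                           # mode B (first tensor factor)
c = np.kron(I, a)                           # auxiliary mode C (second tensor factor)

T2 = 0.7
theta = np.arccos(np.sqrt(T2))              # cos^2(theta) = T2
B = expm(theta * (c.T @ b - c @ b.T))       # beam-splitter unitary (real orthogonal)

def catalysis_eigenvalue(n, k):
    """<n,k|B(T2)|n,k>: action of the n-photon catalysis O_n on the Fock state |k>."""
    i = k * N + n                           # index of |k>_B |n>_C
    return B[i, i]
```

For instance, for $k=n=1$ the element equals $2T_{2}-1$, the familiar two-photon interference amplitude, in agreement with Eq.(\ref{4}).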
Moreover, as shown in Fig.~1(a), for an arbitrary input state $\left\vert \varphi \right\rangle _{in}$ in mode B, the output state $\left\vert \psi \right\rangle _{out}$ can be expressed as $\left\vert \psi \right\rangle _{out}=\hat{O}_{n}\left\vert \varphi \right\rangle _{in}/\sqrt{p}$, where $p$ is the success probability of implementing the $n$-photon catalysis $\hat{O}_{n}$; this expression is useful for deriving the analytical expressions of Alice's output state and of the covariance matrix between Alice and Bob in the following. In addition, different from the $n$-photon-subtraction operation shown in Fig.~1(b), no photon is subtracted or added in the $n$-photon catalysis operation. Such an operation facilitates the transformation between input and output states, thereby effectively preventing useful information from being lost. However, no matter how many photons are catalyzed or subtracted, there is no quantum-catalysis or photon-subtraction effect when $T_{2}=1$. \begin{figure*} \caption{(Color online) Schematic diagram of QC-CVQKD. EPR: two-mode squeezed vacuum state. $\left \vert m\right \rangle $ and $\left \vert n\right \rangle $: $m$-photon and $n$-photon Fock state. PD$_{\text{I}}$.} \label{Fig.2} \end{figure*} \subsection{The CVQKD protocol with quantum catalysis} In what follows, we elaborate on the schematic diagram of the QC-CVQKD protocol, as shown in Fig.~2.
The sender, Alice, generates a TMSV state (also called an Einstein-Podolsky-Rosen (EPR) pair) involving two modes A and B with a modulation variance $V$, obtained by applying the two-mode squeezing operator $S_{2}\left( r\right) =\exp \left \{ r\left( a^{\dagger}b^{\dag}-ab\right) \right \} $ to the two-mode vacuum state $ \left \vert 0,0\right \rangle _{AB}$, i.e., \begin{align} \left\vert TMSV\right\rangle _{AB}& =S_{2}\left( r\right) \left\vert 0,0\right\rangle _{AB} \notag \\ & =\sqrt{1-\lambda ^{2}}\underset{l=0}{\overset{\infty }{\sum }}\lambda ^{l}\left\vert l,l\right\rangle _{AB}, \label{6} \end{align} where $\lambda =\tanh r=\sqrt{\left( V-1\right) /\left( V+1\right) }$ for $ V=2\alpha ^{2}+1$, and $\left\vert l,l\right\rangle _{AB}=$ $\left\vert l\right\rangle _{A}\otimes \left\vert l\right\rangle _{B}$ denotes the two-mode Fock state of modes A and B. After that, she performs $m$-photon and $n$-photon catalysis operations on modes A and B, respectively, yielding the state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$. Note that the additional quantum catalysis operation $\hat{O}_{m}$ inserted in mode A on Alice's side is designed to examine the effect of quantum catalysis on the information shared between Alice and Bob, in comparison with the single-side quantum catalysis $\hat{O}_{n}$ case. According to the aforementioned method of the quantum catalysis operation, in analogy with Eq.(\ref{4}), we obtain the $m$-photon quantum catalysis operation, i.e., \begin{equation} \hat{O}_{m}=G_{T_{1}}\left( a^{\dagger }a\right) \left( \sqrt{T_{1}}\right) ^{a^{\dagger }a+m}, \label{7} \end{equation} with the notation $G_{T_{1}}\left( a^{\dagger }a\right) $ given by \begin{equation} G_{T_{1}}\left( a^{\dagger }a\right) =\frac{\partial ^{m}}{m!\partial \tau ^{m}}\left\{ \frac{1}{1-\tau }\left( \frac{1-\tau /T_{1}}{1-\tau }\right) ^{a^{\dagger }a}\right\} _{\tau =0}.
\label{8} \end{equation} Then the resulting state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ turns out to be \begin{align} \left\vert \psi \right\rangle _{A_{1}B_{1}}& =\frac{\hat{O}_{m}\hat{O}_{n}}{\sqrt{P_{d}}}\left\vert TMSV\right\rangle \notag \\ & =\underset{l=0}{\overset{\infty }{\sum }}\frac{W_{0}}{\sqrt{P_{d}}}\frac{\partial ^{m}}{\partial \tau ^{m}}\frac{\partial ^{n}}{\partial \gamma ^{n}}\frac{W^{l}}{(1-\tau )(1-\gamma )}\left\vert l,l\right\rangle _{AB}, \label{9} \end{align} where $P_{d}$ denotes the success probability of implementing quantum catalysis, which is an important indicator affecting the mutual information in the process of distilling a common secret key between Alice and Bob, and can be calculated as \begin{equation} P_{d}=W_{0}^{2}\Re ^{m,n}\left\{ \frac{\Pi }{1-W_{1}W}\right\} , \label{10} \end{equation} with $\Re ^{m,n},\Pi ,W_{0},W$ and $W_{1}$ defined in Eq.(A2). Detailed calculations of the success probability $P_{d}$ are given in Appendix A. From Eq.(\ref{9}), the state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ becomes a non-Gaussian entangled state. At Alice's station, the quadratures $x_{A}$ and $p_{A}$ are measured via heterodyne detection on one half of the state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$, while the other half of $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ is sent to Bob through an insecure quantum channel, which can be controlled by Eve, with transmission efficiency $T_{c}$ and excess noise $\varepsilon $. After receiving the state, Bob randomly chooses to measure either $x_{B}$ or $p_{B}$ via homodyne detection and informs Alice about the measured observable. Finally, Alice and Bob can share a string of secret keys by data-postprocessing.
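As a quick numerical sanity check of Eq.(\ref{6}), the truncated Schmidt coefficients of the TMSV state can be tested for normalization and for the mean photon number per mode, which should equal $\alpha^{2}=(V-1)/2$. The following is a minimal sketch (the truncation order $N$ and the helper name `tmsv_coeffs` are illustrative, not part of the protocol):

```python
import math

def tmsv_coeffs(alpha, N=200):
    """Schmidt coefficients of |TMSV> in Eq.(6): w_l = sqrt(1-lam^2)*lam^l,
    with lam = sqrt((V-1)/(V+1)) and V = 2*alpha^2 + 1, truncated at N terms."""
    V = 2 * alpha**2 + 1
    lam = math.sqrt((V - 1) / (V + 1))
    return [math.sqrt(1 - lam**2) * lam**l for l in range(N)]

w = tmsv_coeffs(1.0)
norm = sum(c**2 for c in w)                      # should be ~1
mean_n = sum(l * c**2 for l, c in enumerate(w))  # mean photon number per mode
print(norm, mean_n)                              # both ~ 1.0 for alpha = 1
```

The mean photon number reproduces $\lambda^{2}/(1-\lambda^{2})=\alpha^{2}$, confirming the parametrization used above.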
Before deriving the performance of the QC-CVQKD protocol, we demonstrate the entanglement of both the Gaussian entangled state $\left\vert TMSV\right\rangle _{AB}$ and the transformed state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$. As a computable measure of entanglement and an upper bound on the distillable entanglement, the logarithmic negativity is usually used to quantify the degree of entanglement, which is given by \begin{equation} E_{N}=\log _{2}\left\Vert \rho ^{PT}\right\Vert , \label{11} \end{equation} in which $\rho ^{PT}$ is the partial transpose of the density operator $\rho $ with respect to either subsystem, and the symbol $\left\Vert \cdot \right\Vert $ is the trace norm. By using the Schmidt decomposition, if an arbitrary state $\left\vert \Psi \right\rangle $ can be decomposed as $\left\vert \Psi \right\rangle =\sum_{n=0}^{\infty }w_{n}\left\vert n\right\rangle _{A}\left\vert n^{\prime }\right\rangle _{B}$ with positive real numbers $w_{n}$ and orthonormal states $\left\vert n\right\rangle _{A}$ and $\left\vert n^{\prime }\right\rangle _{B}$, its logarithmic negativity can be calculated as \begin{equation} E_{N}=2\log _{2}\left\vert \overset{\infty }{\underset{n=0}{\sum }}w_{n}\right\vert . \label{12} \end{equation} According to Eqs.(\ref{6}) and (\ref{9}), the logarithmic negativity of both the TMSV state \cite{14,15} and the resulting state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ can be, respectively, calculated as \begin{align} & E_{N}\left( \left\vert TMSV\right\rangle _{AB}\right) =-2\log _{2}\left( \sqrt{1+\alpha ^{2}}-\alpha \right) , \notag \\ & E_{N}\left( \left\vert \psi \right\rangle _{A_{1}B_{1}}\right) =2\log _{2}\left\vert \overset{\infty }{\underset{l=0}{\sum }}\frac{W_{0}}{\sqrt{P_{d}}}\frac{\partial ^{m}}{\partial \tau ^{m}}\frac{\partial ^{n}}{\partial \gamma ^{n}}\right.
\notag \\ & \quad \quad \quad \quad \quad \quad \quad \left. \times \frac{W^{l}}{\left( 1-\tau \right) \left( 1-\gamma \right) }\right\vert . \label{13} \end{align} \section{Performance analysis} In this section, we demonstrate the success probability of the quantum catalysis operation, and derive the performance of the QC-CVQKD system in terms of secret key rate and tolerable excess noise. A performance comparison between the QC-CVQKD and the photon-subtracted CVQKD is made to highlight the merits of the QC-based system. Note that for a simple and convenient discussion, we consider two special cases, i.e., the bilateral symmetrical quantum catalysis (BSQC, in which $T_{1}=T_{2}=T$ and $m=n$) and the single-side quantum catalysis (SSQC, in which $T_{1}=1$, $T_{2}=T$, with catalysis number $n$). \subsection{Success probability for quantum catalysis} \begin{figure} \caption{(Color online) The success probability $P_{d}$ as a function of $\protect\alpha $ with different photon-catalysis numbers at $T=0.95$.} \label{Fig.3} \end{figure} The explicit form of the success probability for implementing quantum catalysis operations has been given in Eq.(\ref{10}). In particular, for the zero-photon BSQC ($T_{1}=T_{2}=T$ and $m=n=0$) and SSQC ($T_{1}=1$, $T_{2}=T$ and $n=0$), the success probabilities of implementing such zero-photon quantum catalysis operations are given by $1/[1-\left( T^{2}-1\right) \alpha ^{2}]$ and $1/[1-\left( T-1\right) \alpha ^{2}]$, respectively. Given a high transmittance $T=0.95$, the success probability $P_{d}$ can be plotted as a function of $\alpha $ with several different photon-catalysis numbers such as $m,n\in \{0,1,2\}$. Fig.3 shows that the overall trend is that the success probability $P_{d}$ decreases as $\alpha $ increases. It indicates that for an increased modulation variance $V=2\alpha ^{2}+1$, the success probability $P_{d}$ of implementing quantum catalysis decreases. Meanwhile, the success probabilities decrease with the increased number of catalyzed photons for both SSQC and BSQC.
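The zero-photon closed forms quoted above follow from Eq.(\ref{10}) with $m=n=0$, where no derivatives survive, so that $P_{d}=W_{0}^{2}/(1-W_{1}W)$ evaluated at $\tau=\gamma=0$. A minimal numerical sketch (the function name is illustrative):

```python
import math

def pd_zero_photon(alpha, T1, T2):
    """Success probability of Eq.(10) for m = n = 0: no derivatives survive,
    so P_d = W0^2 / (1 - W1*W) with W = W1 = lam*sqrt(T1*T2), W0^2 = 1 - lam^2."""
    lam2 = (2 * alpha**2) / (2 * alpha**2 + 2)   # lam^2 = (V-1)/(V+1), V = 2*alpha^2 + 1
    return (1 - lam2) / (1 - lam2 * T1 * T2)

# BSQC (T1 = T2 = T) and SSQC (T1 = 1, T2 = T) at T = 0.95, alpha = 3
T, alpha = 0.95, 3.0
p_bsqc = pd_zero_photon(alpha, T, T)    # equals 1/[1 - (T^2 - 1) alpha^2] ~ 0.53
p_ssqc = pd_zero_photon(alpha, 1.0, T)  # equals 1/[1 - (T   - 1) alpha^2] ~ 0.68
print(p_bsqc, p_ssqc)
```

Both values agree with the closed forms in the text, since $(1-\lambda^{2})/(1-\lambda^{2}T_{1}T_{2})=1/[1+(1-T_{1}T_{2})\alpha^{2}]$.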
The above-mentioned phenomenon indicates that the implementation of multiphoton catalysis ($m=n>1$ and $n>1$) may be relatively difficult to achieve, whereas the success probability $P_{d}$ of SSQC is better than that of BSQC when one considers the same photon-catalyzed numbers. For the zero-photon SSQC ($n=0$) and BSQC ($m=n=0$), the success probabilities $P_{d}$ for a given large $\alpha $ $(\alpha =3)$ are approximately $0.68$ and $0.53$, respectively. It is worth noting that for the two-photon BSQC ($m=n=2$), the success probability $P_{d}$ for a given large $\alpha $ $(\alpha =3)$ is below 0.2, which may leak much information in the CVQKD system. Now we consider the effect of entanglement variation on the QC-CVQKD system, which can be evaluated by the logarithmic negativity in Eq.(\ref{13}). For arbitrary photon-catalyzed numbers $m$ and $n$, we can obtain the logarithmic negativity of the state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$. Given a high transmittance $T=0.95$, we plot the logarithmic negativity of both $E_{N}$($\left\vert \psi \right\rangle _{A_{1}B_{1}}$) and $E_{N}$($\left\vert TMSV\right\rangle _{AB}$) as a function of $\alpha $ for different photon-catalyzed numbers, as shown in Fig.4. For the zero-photon and single-photon quantum catalysis, the entanglement property can be improved for $\alpha =3$, which may have an important impact on the correlation strength of mutual information between Alice and Bob. However, for $\alpha =3$, the gap of the enhanced entanglement in BSQC decreases as $m,n\in \{0,1,2\}$ increases. A similar trend occurs for SSQC, and there is no improvement of the entanglement for $n=2$. Although the entanglement for $m=n=2$ can be improved in a large region of $\alpha $, its success probability is severely limited.
These results show that the zero-photon and single-photon quantum catalysis (i.e., $m=n\in \{0,1\}$ and $n\in \{0,1\}$) perform well in terms of the success probability and the entanglement property, compared with the two-photon cases (i.e., $m=n=2$ and $n=2$). On the other hand, for the optimized $T$, we give the optimal logarithmic negativity $E_{N}$ as a function of $\alpha $ for $m=n\in \{0,1\}$ and $n\in \{0,1\}$, as shown in Fig.5. We find that the optimal entanglements of the different zero-photon and single-photon quantum catalysis cases overlap, and the gap of the improved entanglement increases with increasing $\alpha $. To highlight the contribution of the quantum catalysis operation, compared with single-photon subtraction, we illustrate the success probability and the entanglement property in Fig.6. For $T\rightarrow 1$, although the improvement of the entanglement for single-photon subtraction [magenta surface] is better than that for the quantum catalysis operation, the success probability of the former is worse than that of the latter. As a result, the quantum catalysis operation is superior to the single-photon subtraction in terms of the success probability. These results indicate that quantum catalysis, as a novel non-Gaussian operation, can be used to improve the entanglement property of Gaussian entangled states, and has an advantage in success probability over the photon subtraction operation. Consequently, in what follows, we focus on quantum catalysis to enhance the performance of the CVQKD system.
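For the TMSV reference curve in Fig.4, Eq.(\ref{12}) can be evaluated directly from the Schmidt coefficients of Eq.(\ref{6}); the truncated sum reproduces the standard TMSV result $E_{N}=\log _{2}[(1+\lambda )/(1-\lambda )]$. A minimal sketch (the truncation order is illustrative):

```python
import math

def en_tmsv(alpha, N=500):
    """Logarithmic negativity of |TMSV> from Eq.(12), E_N = 2*log2(sum_l w_l),
    using the Schmidt coefficients w_l = sqrt(1-lam^2)*lam^l of Eq.(6)."""
    lam = alpha / math.sqrt(alpha**2 + 1)   # lam = tanh r for V = 2*alpha^2 + 1
    s = sum(math.sqrt(1 - lam**2) * lam**l for l in range(N))
    return 2 * math.log2(s)

# compare the truncated Schmidt sum with the standard closed form
alpha = 1.0
lam = alpha / math.sqrt(alpha**2 + 1)
print(en_tmsv(alpha), math.log2((1 + lam) / (1 - lam)))
```

The two numbers agree to numerical precision, which anchors the TMSV baseline against which the catalyzed state's negativity is compared.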
\begin{figure} \caption{(Color online) The logarithmic negativity $E_{N}$ of both $\left\vert TMSV\right\rangle _{AB}$ and $\left\vert \protect\psi \right\rangle _{A_{1}B_{1}}$ as a function of $\protect\alpha $ with different photon-catalyzed numbers.} \label{Fig.4} \end{figure} \begin{figure} \caption{(Color online) The optimal logarithmic negativity $E_{N}$ as a function of $\protect\alpha $ for $m=n\in \{0,1\}$ and $n\in \{0,1\}$.} \label{Fig.5} \end{figure} \begin{figure} \caption{(Color online) (a) The success probability of implementing quantum catalysis (QC) and single-photon subtraction in $\left( T,\protect\alpha \right) $ space with different photon-catalyzed numbers. (b) The logarithmic negativity for the resulting state $\left\vert \protect\psi \right\rangle _{A_{1}B_{1}}$.} \label{Fig.6} \end{figure} \begin{figure} \caption{(Color online) The asymptotic secret key rates of the QC-CVQKD system [dash-dotted lines and dotted lines] as a function of the transmission distance for bilateral symmetrical quantum catalysis (BSQC) cases ($T_{1}=T_{2}=T$) and single-side quantum catalysis (SSQC) cases ($T_{1}=1$, $T_{2}=T$).} \label{Fig.7} \end{figure} \begin{figure} \caption{(Color online) (a) The secret key rates of the QC-CVQKD system [dash-dotted lines and dotted lines] as a function of the transmission distance for bilateral symmetrical quantum catalysis (BSQC) cases ($T_{1}=T_{2}=T$) and single-side quantum catalysis (SSQC) cases; (b) the case of the optimal $T$ that achieves the maximal secret key rate.} \label{Fig.8} \label{Fig8a} \label{Fig8b} \end{figure} \subsection{Security analysis} To evaluate the performance of the QC-CVQKD system, according to the detailed calculations of the asymptotic secret key rate [see Appendix B], we demonstrate the numerical simulations of the secret key rate and the tolerable excess noise. Fig.7 shows the asymptotic secret key rate $\widetilde{K}_{R}$ as a function of transmission distance for a given transmittance $T=0.95$ and different photon-catalyzed numbers $m=n\in\{0,1,2\}$ and $n\in\{0,1,2\}$. The black solid line denotes the secret key rate of the original protocol, which is exceeded by the QC-CVQKD system with zero-photon and single-photon quantum catalysis within the long-distance range.
To be specific, the proposed system using zero-photon BSQC [blue dash-dotted line] has a longer transmission distance when compared with the zero-photon SSQC case [red dotted line]. For the single-photon QC-CVQKD system, the BSQC [green dash-dotted line] is better than the SSQC case [yellow dotted line] in terms of the maximum transmission distance. The reason may be that for the single-photon BSQC, adding the extra quantum catalysis $\hat{O}_{m}$ before Alice's heterodyne detection can be regarded as the generation of trusted noise controlled by Alice, thereby resulting in the diminution of the Holevo bound $S^{G}\left( B\text{:}E\right) $. However, for the two-photon QC-CVQKD system, the BSQC [cyan dash-dotted line] and SSQC [orange dotted line] are worse than the original one, resulting from the fact that the more photons are catalyzed, the higher the non-Gaussianity is, thereby adding more noise to the covariance matrix \cite{42,43}. In addition, within the short-distance range, the secret key rate of the QC-CVQKD system is worse than that of the original system because of the limited success probability of quantum catalysis. As a result, for a given large transmittance $T=0.95$, the QC-CVQKD system using zero-photon and single-photon quantum catalysis can lengthen the maximal transmission distance, apart from the two-photon QC-CVQKD system. Accordingly, for the optimal choice of $T$, we obtain the maximal secret key rate of the proposed system with zero-photon and single-photon quantum catalysis. In Fig.8, we show the maximal secret key rate as a function of transmission distance for $m=n\in\{0,1\}$ and $n\in\{0,1\}$, compared with the original protocol [black solid line]. Fig.8(b) shows the case of the optimal $T$ that achieves the maximal secret key rate.
We find that, for the long-distance range, the zero-photon and single-photon QC-CVQKD systems in the optimal transmittance range ($0.86\leqslant T\leqslant 1$) perform better than the original system, in terms of both secret key rate and transmission distance. It indicates that quantum catalysis can be used to improve the performance of CVQKD. For the single-photon QC-CVQKD system [green dash-dotted line and yellow dotted line] at long transmission distance, the range of the optimal $T$ is approximately $0.978\leqslant T\leqslant 1$, in which there does exist a high success probability for single-photon quantum catalysis [see Fig.6(a)]. However, for the short-distance range, even for the optimal choice of $T$, the secret key rate of the QC-CVQKD system is similar to that of the original system, because for $T_{1}=T_{2}=1$ of Alice's BS$_{\text{I}}$ and BS$_{\text{II}}$, there is no quantum catalysis effect in the CVQKD system. \begin{figure} \caption{(Color online) The maximal tolerable excess noise of the QC-CVQKD system [dash-dotted lines and dotted lines] as a function of the transmission distance for bilateral symmetrical quantum catalysis (BSQC) cases ($T_{1}=T_{2}=T$) and single-side quantum catalysis (SSQC) cases.} \label{Fig.9} \end{figure} \begin{figure} \caption{(Color online) (a) The maximal secret key rate for $\protect\epsilon =0.01$ and (b) the maximal tolerable excess noise of the QC-CVQKD system [dash-dotted lines and dotted lines] and the CVQKD with the single-photon subtraction [magenta solid line] as a function of the transmission distance for bilateral symmetrical quantum catalysis (BSQC) cases ($T_{1}=T_{2}=T$) and single-side quantum catalysis (SSQC) cases.} \label{Fig.10} \label{Fig10a} \label{Fig10b} \end{figure} Additionally, the other factor that affects the QC-CVQKD system is the tolerable excess noise. In Fig.9, we illustrate the tolerable excess noise as a function of transmission distance for the optimal choice of $T$.
Analogous to Fig.8, in the long-distance range, the QC-CVQKD system with zero-photon and single-photon quantum catalysis exceeds the original system with respect to the maximal tolerable excess noise for remote users. More specifically, the zero-photon QC-CVQKD system [blue dash-dotted line and red dotted line] presents the best performance, since the maximal tolerable excess noise approaches about $0.0292$ at the transmission distance of $300$ km. Besides, at the transmission distance of $300$ km, for the single-photon BSQC (i.e., $m=n=1$) [green dash-dotted line] and SSQC (i.e., $n=1$) [yellow dotted line], the maximal tolerable excess noises can approach about $0.0261$ and $0.0185$, respectively. These results indicate that when the quantum channel is less noisy ($\varepsilon \sim 0.0185$), the zero-photon and single-photon quantum catalysis can be applied to lengthen the maximal transmission distance up to hundreds of kilometers. In addition, from Fig.8(a) and Fig.9 we find that, in the long-distance range, for the single-photon QC-CVQKD schemes, the BSQC case [green dash-dotted line] performs better than the SSQC case [yellow dotted line]. It indicates that for the single-photon QC-CVQKD system, adding the extra quantum catalysis $\hat{O}_{m}$ in mode A may be useful for improving the performance of the CVQKD protocol, compared with the SSQC $\hat{O}_{n}$ in mode B alone. Interestingly, in Ref.\cite{11}, it was pointed out that for the photon-subtraction-involved CVQKD system, the single-photon subtraction operation can usually improve the performance of the related system. Therefore, in order to make comparisons between the QC-CVQKD and the single-photon-subtraction (SS) CVQKD, here we give the schematic diagram of the SS-CVQKD system in Fig.11. We consider the asymptotic secret key rate $K_{asy}$ of reverse reconciliation under collective attack with the assistance of the IWOP technique [detailed calculations are given in Appendix C].
To display the effect of quantum catalysis on the performance of CVQKD, we plot the secret key rate and the tolerable excess noise of the CVQKD system involving quantum catalysis and single-photon subtraction as a function of transmission distance for photon-catalyzed numbers $m=n\in \{0,1\}$ and $n\in \{0,1\}$, as shown in Fig.10(a) and Fig.10(b), respectively. It is found that the performance of the SS-CVQKD system [magenta solid line] in terms of the maximal secret key rate and the maximal tolerable excess noise is outperformed by the QC-CVQKD system in the long-transmission-distance range. The reason may be that the success probability for single-photon subtraction is lower than that for quantum catalysis in the optimal transmittance $T$ range [see Fig.8(b)]. It implies that the former loses more information than the latter in the process of distilling a common secret key. Without loss of generality, we assume that the minimal secret key rate is confined to above $10^{-6}$ bits per pulse. For the single-photon QC-CVQKD system [green dash-dotted line and yellow dotted line], for the optimal choice of the transmittance $T$ of Alice's beamsplitters, the maximal transmission distance is more than $240$ km, whereas for the SS-CVQKD system, the maximal transmission distance is approximately $218$ km, because its success probability is limited to below $0.25$ [magenta surface in Fig.6(a)]. These comparison results show that the QC-CVQKD system using zero-photon and single-photon quantum catalysis performs better than the SS-CVQKD system when optimized over the transmittance $T$. Notably, in Figs.7, 8(a) and 10(a), we also consider the PLOB bound that stands for the fundamental rate-loss scaling (secret key capacity) \cite{45}.
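The PLOB bound referenced here equals, for a pure-loss channel of transmittance $T_{c}$, $-\log _{2}(1-T_{c})$ bits per channel use \cite{45}. A minimal sketch (the 0.2 dB/km fiber-loss model used to map distance to $T_{c}$ is an illustrative assumption):

```python
import math

def plob(Tc):
    """PLOB repeaterless bound: the secret key capacity of a pure-loss
    channel with transmittance Tc, in bits per channel use."""
    return -math.log2(1 - Tc)

# illustrative distances, assuming 0.2 dB/km fiber loss: Tc = 10**(-0.02*L)
for L_km in (20, 50, 100):
    Tc = 10 ** (-0.02 * L_km)
    print(L_km, plob(Tc))
```

The bound decays with distance but remains strictly above any repeaterless point-to-point rate, consistent with the comparison drawn in the text.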
By comparison, it is found that for a given transmittance $T=0.95$, the performance of the QC-CVQKD system using the zero-photon BSQC (i.e., $m=n=0$) is closer to the PLOB bound than that of the original protocol when the transmission distance is larger than $57.6851$ km. For the optimal choice of the transmittance $T$, we can easily see that our proposed QC-CVQKD system involving zero-photon and single-photon quantum catalysis is closer to the PLOB bound than the SS-CVQKD. However, both of them are unable to exceed the PLOB bound at any transmission distance. Therefore, in order to beat the PLOB bound, which is the ultimate limit of repeaterless point-to-point communication, we can design a one-way continuous-variable measurement-device-independent QC-QKD system acting as an active repeater. \section{Conclusion} We have investigated the effect of quantum catalysis on the performance of the CVQKD system by using the IWOP technique. In light of its equivalent operator, quantum catalysis, which is in essence a non-Gaussian operation, can be used to improve the CVQKD system. Different from the traditional TMSV, the entanglement of the resulting state using quantum catalysis can be improved significantly after optimizing the transmittance $T$ of Alice's beamsplitters, and the success probability for quantum catalysis at high transmittance $T$ is better than that of the single-photon subtraction case, especially for the zero-photon quantum catalysis. Taking into account the Gaussian optimality, we derive the lower bound of the asymptotic secret key rate of the QC-CVQKD for reverse reconciliation against the collective attack. Numerical simulations show that compared with the SS-CVQKD system, the QC-CVQKD system has the advantage of lengthening the maximal transmission distance with raised secret key rates. Among all the QC-CVQKD systems, the zero-photon quantum catalysis has the best performance.
While for the QC-CVQKD system using single-photon quantum catalysis, the BSQC performs better than the SSQC due to the fact that adding the extra quantum catalysis $\hat{O}_{m}$ is useful for improving the performance of the CVQKD system. We make a comparison of the CVQKD systems involving quantum catalysis and single-photon subtraction. It is found that the QC-CVQKD system using zero-photon and single-photon quantum catalysis is superior to the single-photon subtraction case in terms of the maximal transmission distance. \begin{acknowledgments} We would like to thank Professor S. Pirandola for his helpful suggestion. This work was supported by the National Natural Science Foundation of China (Grant Nos. 61572529, 61871407). \end{acknowledgments} \begin{appendix} \section{Derivation of the success probability $P_{d}$} In order to derive the analytical expression of the success probability $P_{d}$ shown in Eq.(\ref{10}), we rewrite the state in Eq.(\ref{9}) as the density operator $\rho =\left\vert \psi \right\rangle \left\langle \psi \right\vert $, i.e., \begin{align} \rho _{A_{1}B_{1}}& =\frac{1}{P_{d}}\hat{O}_{m}\hat{O}_{n}\left\vert TMSV\right\rangle \left\langle TMSV\right\vert \hat{O}_{n}^{\dag }\hat{O}_{m}^{\dag } \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re ^{m,n}\Pi \exp [a^{\dag }b^{\dag }W]\left\vert 00\right\rangle \left\langle 00\right\vert \exp [abW_{1}], \tag{A1} \end{align} where we have used the equivalent operators of photon catalysis operations in Eq.(\ref{4}), $e^{\zeta a^{\dag }a}a^{\dag }e^{-\zeta a^{\dag }a}=a^{\dag }e^{\zeta }$, and set \begin{align} & W=\frac{\lambda \left( T_{2}-\gamma \right) \left( T_{1}-\tau \right) }{\sqrt{T_{1}T_{2}}\left( 1-\gamma \right) \left( 1-\tau \right) }, \notag \\ & W_{0}=\frac{\sqrt{T_{1}^{m}T_{2}^{n}\left( 1-\lambda ^{2}\right) }}{n!m!}, \notag \\ & W_{1}=\frac{\lambda \left( T_{2}-\gamma _{1}\right) \left( T_{1}-\tau _{1}\right) }{\sqrt{T_{1}T_{2}}\left( 1-\gamma _{1}\right) \left( 1-\tau _{1}\right)
}, \notag \\ & \Pi =\frac{1}{1-\tau }\frac{1}{1-\gamma }\frac{1}{1-\tau _{1}}\frac{1}{1-\gamma _{1}}, \notag \\ & \Re ^{m,n}=\frac{\partial ^{m}}{\partial \tau ^{m}}\frac{\partial ^{n}}{\partial \gamma ^{n}}\frac{\partial ^{m}}{\partial \tau _{1}^{m}}\frac{\partial ^{n}}{\partial \gamma _{1}^{n}}\left\{ ...\right\} |_{\tau =\gamma =\tau _{1}=\gamma _{1}=0}. \tag{A2} \end{align} Then, according to the completeness relation of the coherent state representation $\int d^{2}z\left\vert z\right\rangle \left\langle z\right\vert /\pi =1$, the integration formula \begin{align} & \int \frac{d^{2}z}{\pi }\exp \left( \zeta \left\vert z\right\vert ^{2}+\xi z+\eta z^{\ast }+fz^{2}+gz^{\ast 2}\right) \notag \\ & =\frac{1}{\sqrt{\zeta ^{2}-4fg}}\exp \left[ \frac{-\zeta \xi \eta +\xi ^{2}g+\eta ^{2}f}{\zeta ^{2}-4fg}\right] , \tag{A3} \end{align} and the normalization of the resulting state Tr($\rho _{A_{1}B_{1}}^{N}$)$=1$, we can obtain the success probability $P_{d}$ given by \begin{align} P_{d}& =W_{0}^{2}\Re ^{m,n}\Pi \left\langle 00\right\vert \exp [abW_{1}]\exp [a^{\dag }b^{\dag }W]\left\vert 00\right\rangle \notag \\ & =W_{0}^{2}\Re ^{m,n}\Pi \int \frac{d^{2}z}{\pi }\int \frac{d^{2}\beta }{\pi }\exp \left[ -\left\vert z\right\vert ^{2}\right. \notag \\ & \left. -\left\vert \beta \right\vert ^{2}+z\beta W_{1}+z^{\ast }\beta ^{\ast }W\right] \notag \\ & =W_{0}^{2}\Re ^{m,n}\left\{ \frac{\Pi }{1-W_{1}W}\right\} . \tag{A4} \end{align} \section{Calculation of asymptotic secret key rate} Here, we present the calculation of the asymptotic secret key rate of the QC-CVQKD system, where Alice performs heterodyne detection and Bob performs homodyne detection. As mentioned above, the state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ belongs to a new kind of non-Gaussian state, thus we cannot directly use the results of the conventional Gaussian CVQKD to calculate its secret key rate.
Fortunately, thanks to the extremality of Gaussian quantum states, the secret key rate of the non-Gaussian state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ is no less than that of a Gaussian state $\left\vert \psi \right\rangle _{A_{1}B_{1}}^{G}$ with the same covariance matrix $\Gamma_{A_{1}B_{1}}=\Gamma_{A_{1}B_{1}}^{G}$, and we obtain $K$($\left\vert \psi \right\rangle _{A_{1}B_{1}}$)$\geqslant K$($\left\vert \psi \right\rangle _{A_{1}B_{1}}^{G}$) \cite{16,18}. For reverse reconciliation, therefore, the lower bound of the asymptotic secret key rate under optimal collective attack can be given by \begin{equation} \widetilde{K}_{R}=P_{d}\left\{ \beta I^{G}\left( A:B\right) -S^{G}\left( B:E\right) \right\} , \tag{B1} \end{equation} where $\beta$ denotes the reconciliation efficiency, $I^{G}\left( A\text{:}B\right) $ denotes Alice and Bob's mutual information, and $S^{G}\left( B\text{:}E\right) $ denotes the Holevo bound, which is defined as the maximum information on Bob's final key available to Eve. In order to derive the analytical expression of the asymptotic secret key rate $K$($\left\vert \psi \right\rangle _{A_{1}B_{1}}^{G}$), we consider the covariance matrix $\Gamma_{A_{1}B_{1}}$ of the resulting state $\left\vert \psi \right\rangle _{A_{1}B_{1}}$ given by \begin{equation} \Gamma_{A_{1}B_{1}}=\left( \begin{array}{cc} X_{A}II & Z_{AB}\sigma_{z} \\ Z_{AB}\sigma_{z} & Y_{B}II \end{array} \right) , \tag{B2} \end{equation} where $II=\mathrm{diag}(1,1)$, $\sigma_{z}=\mathrm{diag}(1,-1)$, and $X_{A},Y_{B}$ and $Z_{AB}$ can be derived by using the IWOP technique as follows. First, it is required to derive the average values $\left\langle a^{\dagger}a\right\rangle ,\left\langle b^{\dagger}b\right\rangle $ and $\left\langle ab\right\rangle $.
According to Eq.(A1) and Eq.(A3), it is straightforward to get \begin{align} \left\langle a^{\dagger}a\right\rangle & =\text{Tr[}\rho_{A_{1}B_{1}}^{N}\left( aa^{\dagger}-1\right) \text{]} \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\Pi \int \frac{d^{2}\alpha}{\pi}\int \frac{d^{2}\beta}{\pi}\alpha \alpha^{\ast} \notag \\ & \times \exp[-\left\vert \alpha \right\vert ^{2}-\left\vert \beta \right\vert ^{2}+\alpha \beta W_{1}+\alpha^{\ast}\beta^{\ast}W]-1 \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\left\{ \frac{\Pi}{\left( 1-W_{1}W\right) ^{2}}\right\} -1, \tag{B3} \end{align} \begin{align} \left\langle b^{\dagger}b\right\rangle & =\text{Tr[}\rho_{A_{1}B_{1}}^{N}\left( bb^{\dagger}-1\right) \text{]} \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\Pi \int \frac{d^{2}\alpha}{\pi}\int \frac{d^{2}\beta}{\pi}\beta \beta^{\ast} \notag \\ & \times \exp[-\left\vert \alpha \right\vert ^{2}-\left\vert \beta \right\vert ^{2}+\alpha \beta W_{1}+\alpha^{\ast}\beta^{\ast}W]-1 \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\left\{ \frac{\Pi}{\left( 1-W_{1}W\right) ^{2}}\right\} -1 \notag \\ & =\left\langle a^{\dagger}a\right\rangle , \tag{B4} \end{align} \begin{align} \left\langle ab\right\rangle & =\text{Tr[}\rho_{A_{1}B_{1}}^{N}ab\text{]} \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\Pi \int \frac{d^{2}\alpha}{\pi}\int \frac{d^{2}\beta}{\pi}\alpha \beta \notag \\ & \times \exp[-\left\vert \alpha \right\vert ^{2}-\left\vert \beta \right\vert ^{2}+\alpha \beta W_{1}+\alpha^{\ast}\beta^{\ast}W] \notag \\ & =\frac{W_{0}^{2}}{P_{d}}\Re^{m,n}\left\{ \frac{\Pi W}{\left( 1-W_{1}W\right) ^{2}}\right\} .
\tag{B5} \end{align} Note that $\left\langle ab\right\rangle =\left\langle a^{\dagger}b^{\dagger}\right\rangle ^{\ast}$. By combining Eqs.(B3)-(B5), therefore, we can directly obtain the elements of the covariance matrix $\Gamma_{A_{1}B_{1}}^{N}$ in the following form \begin{align} X_{A} & =\text{Tr}\left[ \rho_{A_{1}B_{1}}^{N}\left( 1+2a^{\dagger}a\right) \right] \notag \\ & =\frac{2W_{0}^{2}}{P_{d}}\Re^{m,n}\left\{ \frac{\Pi}{\left( 1-W_{1}W\right) ^{2}}\right\} -1, \notag \\ Y_{B} & =\text{Tr}\left[ \rho_{A_{1}B_{1}}^{N}\left( 1+2b^{\dagger}b\right) \right] =X_{A}, \notag \\ Z_{AB} & =\text{Tr}\left[ \rho_{A_{1}B_{1}}^{N}\left( ab+a^{\dagger}b^{\dagger}\right) \right] \notag \\ & =\frac{2W_{0}^{2}}{P_{d}}\Re^{m,n}\left\{ \frac{\Pi W}{\left( 1-W_{1}W\right) ^{2}}\right\} . \tag{B6} \end{align} After passing through the untrusted quantum channel, which is characterized by the transmission efficiency $T_{c}$ and the excess noise $\varepsilon$, the covariance matrix $\Gamma_{A_{1}B_{2}}^{G}$ reads \begin{equation} \Gamma_{A_{1}B_{2}}^{G}=\left( \begin{array}{cc} X_{A}II & \sqrt{T_{c}}Z_{AB}\sigma_{z} \\ \sqrt{T_{c}}Z_{AB}\sigma_{z} & T_{c}\left( X_{A}+\xi \right) II \end{array} \right) , \tag{B7} \end{equation} where $\xi=\left( 1-T_{c}\right) /T_{c}+\varepsilon$ denotes the channel-added noise referred to the input of the Gaussian channel. The mutual information between Alice and Bob now can be expressed as \begin{align} I^{G}\left( A\text{:}B\right) & =\frac{1}{2}\log_{2}\frac{V_{A_{1}}}{V_{A_{1}|B_{2}}} \notag \\ & =\log_{2}\left\{ \sqrt{\frac{\left( X_{A}+1\right) \left( X_{A}+\xi \right) }{\left( X_{A}+1\right) \left( X_{A}+\xi \right) -Z_{AB}^{2}}}\right\} . \tag{B8} \end{align} Furthermore, Eve's accessible quantum information on Bob's measurement can be calculated by assuming that Eve can purify the whole system, $S^{G}\left( B\text{:}E\right) =S\left( E\right) -S\left( E|B\right) =S\left( AB\right) -S\left( A|B\right) $.
For the Gaussian modulation, the first term $S\left( AB\right) $ is a function of the symplectic eigenvalues $\lambda_{1,2}$ of $\Gamma_{A_{1}B_{2}}^{G}$, which is given by \begin{equation} S\left( AB\right) =G\left[ \left( \lambda_{1}-1\right) /2\right] +G\left[ \left( \lambda_{2}-1\right) /2\right] , \tag{B9} \end{equation} where the von Neumann entropy function $G\left[ x\right] $ is \begin{equation} G\left[ x\right] =\left( x+1\right) \log_{2}\left( x+1\right) -x\log _{2}x \tag{B10} \end{equation} and \begin{equation} \lambda_{1,2}^{2}=\frac{1}{2}\left[ \Lambda \pm \sqrt{\Lambda^{2}-4D^{2}}\right] , \tag{B11} \end{equation} with the notation \begin{align} \Lambda & =X_{A}^{2}+T_{c}^{2}\left( X_{A}+\xi \right) ^{2}-2T_{c}Z_{AB}^{2}, \notag \\ D & =X_{A}T_{c}\left( X_{A}+\xi \right) -T_{c}Z_{AB}^{2}. \tag{B12} \end{align} Moreover, the second term $S\left( A|B\right) =G\left[ \left( \lambda _{3}-1\right) /2\right] $ is a function of the symplectic eigenvalue $\lambda_{3}$ of the covariance matrix $\Gamma_{A}^{b}$ of Alice's mode after Bob performs homodyne detection, where the square of the symplectic eigenvalue $\lambda_{3}$ is \begin{equation} \lambda_{3}^{2}=X_{A}\left[ X_{A}-\frac{Z_{AB}^{2}}{X_{A}+\xi}\right] . \tag{B13} \end{equation} As a result, the asymptotic secret key rate can be written as \begin{equation} \widetilde{K}_{R}=P_{d}\left\{ \beta I^{G}\left( A:B\right) -S\left( AB\right) +S\left( A|B\right) \right\} .
\tag{B14} \end{equation} \begin{figure} \caption{(Color online) The schematic diagram of the Gaussian modulation CVQKD scheme with single-photon subtraction.} \label{Fig11} \end{figure} \section{The secret key rate of the single-photon subtraction EB-CVQKD protocol under collective attack} In order to make a comparison with the proposed long-distance CVQKD scheme via quantum catalysis, here we review the CVQKD protocol applying single-photon subtraction, and assume that the two schemes share the same quantum channel controlled by Eve. As can be seen from Fig.11, Alice generates a two-mode squeezed vacuum state $\left\vert TMSV\right\rangle _{AB}$ (EPR), and performs heterodyne detection on one half of $\left\vert TMSV\right\rangle _{AB}$. The other half of $\left\vert TMSV\right\rangle _{AB}$, after the single-photon subtraction operation, is sent to Bob through the same quantum channel marked by transmission efficiency $T_{c}$ and excess noise $\varepsilon$. Afterwards, Bob performs homodyne detection on the received state and then informs Alice about which observable he measured, so that two correlated variables, shared by both Alice and Bob, can be used to extract a common secret key.
Indeed, in the language of quantum operators, the single-photon subtraction operation can be represented by an equivalent operator $\Theta$, given by \begin{align} \Theta & ={}_{C}\left\langle 1\right\vert B\left( T\right) \left\vert 0\right\rangle _{C} \notag \\ & =\sqrt{\frac{1-T}{T}}\,b\exp \left[ b^{\dagger }b\ln \sqrt{T}\right] . \tag{C1} \end{align} Thus, the photon-subtraction state $\left\vert \Psi \right\rangle _{AB_{1}}$ after single-photon subtraction is expressed as \begin{align} \left\vert \Psi \right\rangle _{AB_{1}}& =\frac{1}{\sqrt{P_{1}}}\Theta \left\vert TMSV\right\rangle _{AB} \notag \\ & =\frac{\widetilde{A}\widetilde{B}}{\sqrt{P_{1}}}\exp \left[ \widetilde{B} a^{\dagger }b^{\dagger }\right] a^{\dagger }\left\vert 00\right\rangle _{AB} \tag{C2} \end{align} where \begin{align} \widetilde{A}& =\sqrt{\frac{\left( 1-\lambda ^{2}\right) \left( 1-T\right) }{ T}}, \notag \\ \widetilde{B}& =\lambda \sqrt{T}, \tag{C3} \end{align} and \begin{equation} P_{1}=\frac{\widetilde{A}^{2}\widetilde{B}^{2}}{\left( 1-\widetilde{B} ^{2}\right) ^{2}} \tag{C4} \end{equation} is the success probability of implementing the single-photon subtraction operation. 
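Eq.~(C4) can be cross-checked by expanding the state of Eq.~(C2) in the Fock basis: normalization forces $P_1=\widetilde A^2\widetilde B^2\sum_{n\ge 0}(n+1)\widetilde B^{2n}=\widetilde A^2\widetilde B^2/(1-\widetilde B^2)^2$. A short numerical sketch (the parameter values for $\lambda$ and $T$ are our own illustrative choices):

```python
import math

def tilde_AB(lam, T):
    """Eq. (C3)."""
    A = math.sqrt((1 - lam**2) * (1 - T) / T)
    B = lam * math.sqrt(T)
    return A, B

def P1(lam, T):
    """Success probability of single-photon subtraction, Eq. (C4)."""
    A, B = tilde_AB(lam, T)
    return A**2 * B**2 / (1 - B**2) ** 2

# cross-check Eq. (C4) against the (truncated) Fock-series norm of Eq. (C2)
lam, T = 0.5, 0.9
A, B = tilde_AB(lam, T)
series = A**2 * B**2 * sum((n + 1) * B ** (2 * n) for n in range(200))
```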
After the photon-subtraction state $\left\vert \Psi \right\rangle _{AB_{1}}$ goes through the quantum channel, similarly to Eq.~(B7) we can also obtain the covariance matrix $\Gamma ^{1}$, in the form \begin{equation} \Gamma ^{1}=\left( \begin{array}{cc} XII & \sqrt{T_{c}}Z\sigma _{z} \\ \sqrt{T_{c}}Z\sigma _{z} & T_{c}\left( Y+\xi \right) II \end{array} \right) \tag{C5} \end{equation} where $\xi =\left( 1-T_{c}\right) /T_{c}+\varepsilon $, and \begin{align} X& =\frac{4\widetilde{A}^{2}\widetilde{B}^{2}}{P_{1}\left( 1-\widetilde{B} ^{2}\right) ^{3}}-1, \notag \\ Y& =\frac{2\widetilde{A}^{2}\widetilde{B}^{2}\left( 1+\widetilde{B} ^{2}\right) }{P_{1}\left( 1-\widetilde{B}^{2}\right) ^{3}}-1, \notag \\ Z& =\frac{4\widetilde{A}^{2}\widetilde{B}^{3}}{P_{1}\left( 1-\widetilde{B} ^{2}\right) ^{3}}. \tag{C6} \end{align} Now let us calculate the asymptotic secret key rate of the single-photon subtraction EB-CVQKD protocol, using the Gaussian optimality theorem. The lower bound of the asymptotic secret key rate $K_{asy}$ for reverse reconciliation under collective attack is \begin{equation} K_{asy}=P_{1}\left \{ \beta I^{Hom}\left( A\text{:}B\right) -S^{Hom}\left( B \text{:}E\right) \right \} \tag{C7} \end{equation} where $P_{1}$ is given in Eq.~(C4), $\beta$ is the reverse-reconciliation efficiency, and the superscript Hom indicates that Bob performs homodyne detection. 
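Substituting Eq.~(C4) into Eq.~(C6) makes the dependence on $\widetilde A$ cancel: one finds $X=(3+\widetilde B^2)/(1-\widetilde B^2)$, $Y=(1+3\widetilde B^2)/(1-\widetilde B^2)$, $Z=4\widetilde B/(1-\widetilde B^2)$, and hence $XY-Z^2=3$. This simplification is our own observation, not stated in the text; it is checked numerically below (with illustrative parameters):

```python
import math

def XYZ(lam, T):
    """Eq. (C6), with A-tilde, B-tilde from Eq. (C3) and P1 from Eq. (C4)."""
    A2 = (1 - lam**2) * (1 - T) / T          # A-tilde squared
    B2 = lam**2 * T                          # B-tilde squared
    P1 = A2 * B2 / (1 - B2) ** 2             # Eq. (C4)
    X = 4 * A2 * B2 / (P1 * (1 - B2) ** 3) - 1
    Y = 2 * A2 * B2 * (1 + B2) / (P1 * (1 - B2) ** 3) - 1
    Z = 4 * A2 * B2 * math.sqrt(B2) / (P1 * (1 - B2) ** 3)
    return X, Y, Z

X, Y, Z = XYZ(0.5, 0.9)
B2 = 0.5**2 * 0.9
```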
Additionally, the mutual information between Alice and Bob is given by \begin{equation} I^{Hom}\left( A\text{:}B\right) =\log_{2}\left \{ \sqrt{\frac{\left( X+1\right) \left( Y+\xi \right) }{\left( X+1\right) \left( Y+\xi \right) -Z^{2}}}\right \} . \tag{C8} \end{equation} Under the assumption that Eve is able to purify the whole system, so that $S^{Hom}\left( B\text{:}E\right) =S\left( E\right) -S\left( E|B\right) =S\left( AB\right) -S\left( A|B\right) $, we can directly obtain the symplectic eigenvalues $\widetilde{\lambda}_{1,2}$ of the covariance matrix $\Gamma^{1}$ in the form \begin{equation} \widetilde{\lambda}_{1,2}^{2}=\frac{1}{2}\left[ \widetilde{C}\pm \sqrt{ \widetilde{C}^{2}-4\widetilde{D}^{2}}\right] , \tag{C9} \end{equation} with \begin{align} \widetilde{C} & =X^{2}+T_{c}^{2}\left( Y+\xi \right) ^{2}-2T_{c}Z^{2}, \notag \\ \widetilde{D} & =XT_{c}\left( Y+\xi \right) -T_{c}Z^{2}, \tag{C10} \end{align} and \begin{equation} \widetilde{\lambda}_{3}^{2}=X\left[ X-\frac{Z^{2}}{Y+\xi}\right] . \tag{C11} \end{equation} Furthermore, $S\left( AB\right) =G\left[ \left( \widetilde{\lambda}_{1}-1\right) /2\right] +G\left[ \left( \widetilde{\lambda}_{2}-1\right) /2\right] $ and $S\left( A|B\right) =G\left[ \left( \widetilde{\lambda}_{3}-1\right) /2\right] $, where the von Neumann entropy $G\left[ x\right] $ is defined in Eq.~(B10). \end{appendix} \end{document}
\begin{document} \title{Monoid generalizations of the Richard Thompson groups} \begin{abstract} The groups $G_{k,1}$ of Richard Thompson and Graham Higman can be generalized in a natural way to monoids, which we call $M_{k,1}$, and to inverse monoids, called ${\it Inv}_{k,1}$; this is done by simply generalizing bijections to partial functions or partial injective functions. The monoids $M_{k,1}$ have connections with circuit complexity (studied in another paper). Here we prove that $M_{k,1}$ and ${\it Inv}_{k,1}$ are congruence-simple for all $k$. Their Green relations $\cal J$ and $\cal D$ are characterized: $M_{k,1}$ and ${\it Inv}_{k,1}$ are $\cal J$-0-simple, and they have $k-1$ non-zero $\cal D$-classes. They are submonoids of the multiplicative part of the Cuntz algebra ${\cal O}_k$. They are finitely generated, and their word problem over any finite generating set is in {\sf P}. Their word problem is {\sf coNP}-complete over certain infinite generating sets.\footnote{\, {\bf Changes in this version:} Section 4 has been thoroughly revised, and errors have been corrected; however, the main results of Section 4 do not change. The main changes are in Theorem 4.5, Definition 4.5A (the concept of a {\em normal} right-ideal morphism), and the final proof of Theorem 4.13. Sections 1, 2, and 3 are unchanged, except for the proof of Theorem 2.3, which was incomplete; a complete proof was published in the Appendix of reference \cite{BiBern}, and is also given here.} \end{abstract} \section{Thompson-Higman monoids} Since their introduction by Richard J.\ Thompson in the mid 1960s \cite{Th0, McKTh, Th}, the Thompson groups have had a great impact on infinite group theory. Graham Higman generalized the Thompson groups to an infinite family \cite{Hig74}. These groups and some of their subgroups have appeared in many contexts and have been widely studied; see for example \cite{CFP, BrinSqu, Dehornoy, BrownGeo, GhysSergiescu, GubaSapir, Brin97, BCST, LawsonPolycyclic}. 
The definition of the Thompson-Higman groups lends itself easily to generalizations to inverse monoids and to more general monoids. These monoids are also generalizations of the finite symmetric monoids (of all functions on a set), and this leads to connections with circuit complexity; more details on this appear in \cite{BiDistor, BiFact, BiCoNP}. By definition the Thompson-Higman group $G_{k,1}$ consists of all maximally extended isomorphisms between finitely generated essential right ideals of $A^*$, where $A$ is an alphabet of cardinality $k$. The multiplication is defined to be composition followed by maximal extension: for any $\varphi, \psi \in G_{k,1}$, we have \ $\varphi \cdot \psi = $ max$(\varphi \circ \psi)$. Every element $\varphi \in G_{k,1}$ can also be given by a bijection $\varphi: P \to Q$ where $P, Q \subset A^*$ are two finite maximal prefix codes over $A$; this bijection can be described concretely by a finite function {\it table}. For a detailed definition according to this approach, see \cite{BiThomps} (which is also similar to \cite{Scott}, but with a different terminology); moreover, Subsection 1.1 gives all the needed definitions. It is natural to generalize the maximally extended {\em isomorphisms} between finitely generated essential right ideals of $A^*$ to {\em homomorphisms}, and to drop the requirement that the right ideals be essential. It will turn out that this generalization leads to interesting monoids, or inverse monoids, which we call Thompson-Higman monoids. Our generalization of the Thompson-Higman groups to monoids will also generalize the embedding of these groups into the Cuntz algebras \cite{BiThomps, Nekrash}, which provides an additional motivation for our definition. 
Moreover, since these homomorphisms are close to being arbitrary finite string transformations, there is a connection between these monoids and combinational boolean circuits; the study of the connection between Thompson-Higman groups and circuits was started in \cite{BiCoNP, BiFact} and will be developed more generally for monoids in \cite{BiDistor}; the present paper lays some of the foundations for \cite{BiDistor}. \subsection{Definition of the Thompson-Higman groups and monoids} Before defining the Thompson-Higman monoids we need some basic definitions that are similar to the introductory material needed for defining the Thompson-Higman groups $G_{k,1}$; we follow \cite{BiThomps} (which is similar to \cite{Scott}). We use an alphabet $A$ of cardinality $|A| = k$, and we list its elements as $A = \{a_1, \ldots, a_k\}$. Let $A^*$ denote the set of all finite {\it words} over $A$ (i.e., all finite sequences of elements of $A$); this includes the {\it empty word} $\varepsilon$. The {\it length} of $w \in A^*$ is denoted by $|w|$; let $A^n$ denote the set of words of length $n$. For two words $u,v \in A^*$ we denote their {\it concatenation} by $uv$ or by $u \cdot v$; for sets $B, C \subseteq A^*$ the concatenation is $BC = \{uv : u \in B, v \in C\}$. A {\it right ideal} of $A^*$ is a subset $R \subseteq A^*$ such that $RA^* \subseteq R$. A generating set of a right ideal $R$ is a set $C$ such that $R$ is the intersection of all right ideals that contain $C$; equivalently, $R = CA^*$. A right ideal $R$ is called {\it essential} iff $R$ has a non-empty intersection with every right ideal of $A^*$. For words $u,v \in A^*$, we say that $u$ is a {\it prefix} of $v$ iff there exists $z \in A^*$ such that $uz = v$. A {\it prefix code} is a subset $C \subseteq A^*$ such that no element of $C$ is a prefix of another element of $C$. A prefix code is {\it maximal} iff it is not a strict subset of another prefix code. 
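These definitions are easy to experiment with computationally. Below is a small sketch of ours (not from the text); for the maximality test it uses the classical Kraft-equality criterion for finite prefix codes over a $k$-letter alphabet, $\sum_{c \in C} k^{-|c|} = 1$, which is a standard fact not stated above:

```python
from fractions import Fraction

def is_prefix_code(C):
    """No element of C is a proper prefix of another element of C."""
    words = set(C)
    return not any(v != u and v.startswith(u) for u in words for v in words)

def is_maximal_prefix_code(C, k):
    """A finite prefix code over a k-letter alphabet is maximal iff the
    Kraft equality  sum_{c in C} k^{-|c|} = 1  holds (classical fact)."""
    return is_prefix_code(C) and sum(Fraction(1, k ** len(c)) for c in C) == 1
```

For $A=\{a,b\}$: $\{aa,ab,b\}$ is a maximal prefix code, $\{aa,b\}$ is a prefix code but not maximal, and $\{a,ab\}$ is not a prefix code.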
One can prove that a right ideal $R$ has a unique minimal (under inclusion) generating set, and that this minimal generating set is a prefix code; this prefix code is maximal iff $R$ is an essential right ideal. For right ideals $R' \subseteq R \subseteq A^*$ we say that $R'$ is {\em essential in} $R$ iff $R'$ intersects all right subideals of $R$ in a non-empty way. {\em Tree interpretation:} The free monoid $A^*$ can be pictured by its right Cayley graph, which is the rooted infinite regular $k$-ary tree with vertex set $A^*$ and edge set $\{(v,va): v \in A^*, a \in A\}$. We simply call this the {\em tree of} $A^*$. It is a directed tree, with all paths moving away from the root $\varepsilon$ (the empty word); by ``path'' we will always mean a directed path. A word $v$ is a prefix of a word $w$ iff $v$ is an ancestor of $w$ in the tree. A set $P$ is a prefix code iff no two elements of $P$ are on the same path. A set $R$ is a right ideal iff any path that starts in $R$ has all its vertices in $R$. The prefix code that generates $R$ consists of the elements of $R$ that are minimal (within $R$) in the prefix order, i.e., closest to the root $\varepsilon$. A finitely generated right ideal $R$ is essential iff every infinite path of the tree eventually reaches $R$ (and then stays in it from there on). Similarly, a finite prefix code $P$ is maximal iff any infinite path starting at the root eventually intersects $P$. For two finitely generated right ideals $R' \subset R$, $R'$ is essential in $R$ iff any infinite path starting in $R$ eventually reaches $R'$ (and then stays in $R'$ from there on). In other words, for finitely generated right ideals $R' \subseteq R$, $R'$ is essential in $R$ iff $R'$ and $R$ have the same ``ends''. For the prefix tree of $A^*$ we can consider also the ``boundary'' $A^{\omega}$ (i.e., all infinite words), a.k.a.\ the {\em ends} of the tree. 
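The unique minimal generating set of a right ideal $CA^*$ can be computed directly from any finite generating set $C$: keep exactly the words of $C$ that have no proper prefix in $C$ (in the tree picture, the vertices of the ideal closest to the root). A sketch:

```python
def minimal_generating_prefix_code(C):
    """Minimal generating set (a prefix code) of the right ideal C A*:
    the words of C that have no proper prefix in C."""
    words = set(C)
    return sorted(w for w in words
                  if not any(w[:i] in words for i in range(len(w))))
```

For example, $C = \{a, ab, ba, b\}$ generates the right ideal $\{a,b\}A^*$, with minimal generating prefix code $\{a, b\}$.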
In Thompson's original definition \cite{Th0,Th}, $G_{2,1}$ was given by a total action on $\{0,1\}^{\omega}$. In \cite{BiThomps} this total action was extended to a partial action on $A^* \cup A^{\omega}$; the partial action on $A^* \cup A^{\omega}$ is uniquely determined by the total action on $A^{\omega}$; it is also uniquely determined by the partial action on $A^*$. Here, as in \cite{BiThomps}, we only use the partial action on $A^*$. \begin{defn} \label{homo_right_ideals} \ A {\em right ideal homomorphism} of $A^*$ is a total function $\varphi: R_1 \to A^*$ such that $R_1$ is a right ideal of $A^*$, and for all $x_1 \in R_1$ and all $w \in A^*$: \ $\varphi(x_1w) = \varphi(x_1) \ w$. \end{defn} For any partial function $f:A^* \to A^*$, let Dom$(f)$ denote the domain and let Im$(f)$ denote the image (range) of $f$. For a right ideal homomorphism $\varphi: R_1 \to A^*$ it is easy to see that the image ${\rm Im}(\varphi)$ is also a right ideal of $A^*$, which is finitely generated (as a right ideal) if the domain $R_1 = {\rm Dom}(\varphi)$ is finitely generated. A right ideal homomorphism $\varphi: R_1 \to R_2$, where $R_1 = {\rm Dom}(\varphi)$ and $R_2 = {\rm Im}(\varphi)$, can be described by a total surjective function $P_1 \to S_2$, with $P_1, S_2 \subset A^*$; here $P_1$ is the prefix code (not necessarily maximal) that generates $R_1$ as a right ideal, and $S_2$ is a set (not necessarily a prefix code) that generates $R_2$ as a right ideal; so $R_1 = P_1 A^*$ and $R_2 = S_2 A^*$. The function $P_1 \to S_2$ corresponding to $\varphi: R_1 \to R_2$ is called the {\em table} of $\varphi$. The prefix code $P_1$ is called the {\em domain code} of $\varphi$ and we write $P_1 = {\rm domC}(\varphi)$. When $S_2$ is a prefix code we call $S_2$ the {\em image code} of $\varphi$ and we write $S_2 = {\rm imC}(\varphi)$. We denote the {\em table size} of $\varphi$ (i.e., the cardinality of ${\rm domC}(\varphi)$) by $\|\varphi\|$. 
\begin{defn} \label{special_homo_right_ideals} \ An injective right ideal homomorphism is called a right ideal {\em isomorphism}. A right ideal homomorphism $\varphi: R_1 \to R_2$ is called {\em total} iff the domain right ideal $R_1$ is essential. And $\varphi$ is called {\em surjective} iff the image right ideal $R_2$ is essential. \end{defn} The table $P_1 \to P_2$ of a right ideal isomorphism $\varphi$ is a bijection between prefix codes (that are not necessarily maximal). The table $P_1 \to S_2$ of a {\em total} right ideal homomorphism is a function from a {\em maximal} prefix code to a set, and the table $P_1 \to S_2$ of a surjective right ideal homomorphism is a function from a prefix code to a set that generates an essential right ideal. The word ``total'' is justified by the fact that if a homomorphism $\varphi$ is total (and if ${\rm domC}(\varphi)$ is finite) then $\varphi(w)$ is defined for every word that is long enough (e.g., when $|w|$ exceeds the length of the longest word in the domain code $P_1$); equivalently, $\varphi$ is defined from some point onward on every infinite path in the tree of $A^*$ starting at the root. \begin{defn} \label{restriction_extension} An {\em essential restriction} of a right ideal homomorphism $\varphi: R_1 \to A^*$ is a right ideal homomorphism $\Phi: R'_1 \to A^*$ such that $R'_1$ is essential in $R_1$, and such that for all $x'_1 \in R'_1$: \ $\varphi(x'_1) = \Phi(x'_1)$. We say that $\varphi$ is an {\em essential extension} of $\Phi$ iff $\Phi$ is an essential restriction of $\varphi$. \end{defn} Note that if $\Phi$ is an essential restriction of $\varphi$ then $R'_2 = {\rm Im}(\Phi)$ will automatically be essential in $R_2 = {\rm Im}(\varphi)$. 
Indeed, if $I$ is any non-empty right subideal of $R_1$ then $I \cap R'_1 \neq \varnothing$, hence \ $\varnothing \neq \Phi(I \cap R'_1)$ $\subseteq \Phi(I) \, \cap \, \Phi(R'_1)$ $= \Phi(I) \cap R'_2$; moreover, any right subideal $J$ of $R_2$ is of the form $J = \Phi(I)$ where $I = \Phi^{-1}(J)$ is a right subideal of $R_1$; hence, for any right subideal $J$ of $R_2$, $\varnothing \neq J \cap R'_2$. \begin{pro} \label{extension_formula} \ {\rm (1)} Let $\varphi, \Phi$ be homomorphisms between {\em finitely generated} right ideals of $A^*$, where $A = \{a_1, \ldots, a_k\}$. Then $\Phi$ is an essential restriction of $\varphi$ iff $\Phi$ can be obtained from $\varphi$ by starting from the table of $\varphi$ and applying a finite number of {\em restriction steps} of the following form: Replace $(x,y)$ in a table by $\{(xa_1,ya_1), \ldots, (xa_k,ya_k)\}$. \\ {\rm (2)} Every homomorphism between finitely generated right ideals of $A^*$ has a {\em unique maximal essential} extension. \end{pro} {\bf Proof.} \ (1) Consider a homomorphism between finitely generated right ideals $\varphi: R_1 \to R_2$, let $P_1$ be the finite prefix code that generates the right ideal $R_1$, and let $S_2 = \varphi(P_1)$, so $S_2$ generates the right ideal $R_2$. If $x \in P_1$ and $y = \varphi(x) \in S_2$ then (since $\varphi$ is a right ideal homomorphism), $ya_i = \varphi(xa_i)$ for $i = 1, \ldots, k$. Then $R_1 - \{x\}$ is a right ideal which is essential in $R_1$, and $R_1 - \{x\}$ is generated by $(P_1 - \{x\}) \, \cup \, \{xa_1, \ldots, xa_k\}$. Indeed, in the tree of $A^*$ every downward directed path starting at vertex $x$ goes through one of the vertices $xa_i$. Thus, removing $(x,y)$ from the graph of $\varphi$ is an essential restriction; for the table of $\varphi$, the effect is to replace the entry $(x,y)$ by the set of entries $\{(xa_1,ya_1), \ldots, (xa_k,ya_k)\}$. 
If finitely many restriction steps of the above type are carried out, the result is again an essential restriction of $\varphi$. Conversely, let us show that if $\Phi$ is an essential restriction of $\varphi$ then $\Phi$ can be obtained by a finite number of replacement steps of the form ``replace $(x,y)$ by $\{(xa_1,ya_1), \ldots, (xa_k,ya_k)\}$ in the table''. Using the tree of $A^*$ we have: If $R$ and $R'$ are right ideals of $A^*$ generated by the finite prefix codes $P$, respectively $P'$, and if $R'$ is essential in $R$ then every infinite path from $P$ intersects $P'$. It follows from this characterization of essentiality and from the finiteness of $P_1$ and $P'_1$ that $R_1 - R'_1$ is finite. Hence $\varphi$ and $\Phi$ differ only in finitely many places, i.e., one can transform $\varphi$ into $\Phi$ in a finite number of restriction steps. So, the restriction $\Phi$ of $\varphi$ is obtained by removing a finite number of pairs $(x,y)$ from $\varphi$; however, not every such removal leads to a right ideal homomorphism or an essential restriction of $\varphi$. If $(x_0, y_0)$ is removed from $\varphi$ then $x_0$ is removed from $R_1$ (since $\varphi$ is a function). Also, since $R'_1$ is a right ideal, when $x_0$ is removed then all prefixes of $x_0$ (equivalently, all ancestor vertices of $x_0$ in the tree of $A^*$) have to be removed. So we have the following removal rule (still assuming that domain and image right ideals are finitely generated): \noindent {\it If $\Phi$ is an essential restriction of $\varphi$ then $\varphi$ can be transformed into $\Phi$ by removing a finite set of strings from $R_1$, with the following restriction: If a string $x_0$ is removed then all prefixes of $x_0$ are also removed from $R_1$; moreover, $x_0$ is removed from $R_1$ iff $(x_0, \varphi(x_0))$ is removed from $\varphi$. 
} As a converse of this rule, we claim that if the transformation from $\varphi$ to $\Phi$ is done according to this rule, then $\Phi$ is an essential restriction of $\varphi$. Indeed, $\Phi$ will be a right ideal homomorphism: if $\Phi(x_1)$ is defined then $\Phi(x_1z)$ will also be defined (if it were not, the prefix $x_1$ of $x_1z$ would have been removed), and $\Phi(x_1z) = \varphi(x_1z) = $ $\varphi(x_1) \ z = \Phi(x_1) \ z$. Moreover, ${\rm Dom}(\Phi) = R'_1$ will be essential in $R_1$: every directed path starting at $R_1$ eventually meets $R'_1$ because only finitely many words were removed from $R_1$ to form $R'_1$. Hence by the tree characterization of essentiality, $R'_1$ is essential in $R_1$. In summary, if $\Phi$ is an essential restriction of $\varphi$ then $\Phi$ is obtained from $\varphi$ by a finite sequence of steps, each of which removes one pair $(x,\varphi(x))$. In ${\rm Dom}(\varphi)$ the string $x$ is removed. The domain code becomes $(P_1 - \{x\}) \, \cup \, \{xa_1, \ldots, xa_k\}$, since $\{xa_1, \ldots, xa_k\}$ is the set of children of $x$ in the tree of $A^*$. This means that in the table of $\varphi$, the pair $(x,\varphi(x))$ is replaced by $\{(xa_1, \varphi(x) \, a_1), \ldots, (xa_k, \varphi(x) \, a_k)\}$. \noindent (2) Uniqueness of the maximal essential extension: \ By (1) above, essential extensions are obtained by the set of rewrite rules of the form \ $\{(xa_1,ya_1), \ldots, (xa_k,ya_k)\} \to (x,y)$, applied to tables. This rewriting system is {\it locally confluent} (because different rules have non-overlapping left sides) and {\it terminating} (because they decrease the length); hence maximal essential extensions exist and are unique. 
\ \ \ $\Box$ \noindent Proposition \ref{extension_formula} yields another tree interpretation of essential restriction: Assume first that a total order $a_1 < a_2 < \ldots < a_k$ has been chosen for the alphabet $A$; this means that the tree of $A^*$ is now an {\em oriented} rooted tree, i.e., the children of each vertex $v$ have a total order (namely, $va_1 < va_2 < \ldots < va_k$). The rule ``replace $(x,y)$ in the table by $\{(xa_1,ya_1), \ldots, (xa_k,ya_k)\}$'' has the following tree interpretation: Replace $x$ and $y = \varphi(x)$ by the children of $x$, respectively of $y$, matched according to the order of the children. \noindent {\bf Important remark:} \\ As we saw, every right ideal homomorphism can be described by a table $P \to S$ where $P$ is a prefix code and $S$ is a set. But we also have: Every right ideal homomorphism $\varphi$ has an essential restriction $\varphi'$ whose table $P' \to Q'$ is such that {\em both $P'$ and $Q'$ are prefix codes}; moreover, $Q'$ can be chosen to be a subset of $A^n$ for some $n \leq {\rm max}\{ |s| : s \in S\}$. Example (with alphabet $A = \{a,b\}$): \\ $\left( \hspace{-.08in} \begin{array}{r|r} a & b \\ a & aa \end{array} \hspace{-.08in} \right) $ \ has an essential restriction \ $ \left( \hspace{-.08in} \begin{array}{r|r|r} aa & ab & b \\ aa & ab & aa \end{array} \hspace{-.08in} \right) $. Theorem 4.5B gives a tighter result with polynomial bounds. \begin{defn} \label{Tomps_part_funct_monoid} \ The Thompson-Higman {\em partial function monoid} $M_{k,1}$ consists of all maximal essential extensions of homomorphisms between finitely generated right ideals of $A^*$. The multiplication is composition followed by maximal essential extension. \end{defn} In order to prove {\it associativity} of the multiplication of $M_{k,1}$ we define the following and we prove a few Lemmas. 
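The restriction steps and the confluent, terminating rewriting system from the proof above can be implemented directly on finite tables. In the sketch below (ours, not from the text) a table is a Python dict and the alphabet a string of single-character letters:

```python
def restrict_step(table, x, alphabet):
    """One restriction step: replace (x, y) by {(x a_i, y a_i) : a_i in A}."""
    table = dict(table)
    y = table.pop(x)
    for a in alphabet:
        table[x + a] = y + a
    return table

def max_extend(table, alphabet):
    """Apply the rewriting {(x a_1, y a_1), ..., (x a_k, y a_k)} -> (x, y)
    until no rule applies; by local confluence and termination the result
    is the table of the unique maximal essential extension."""
    table = dict(table)
    changed = True
    while changed:
        changed = False
        for p in sorted({x[:-1] for x in table if x}, key=len, reverse=True):
            kids = [p + a for a in alphabet]
            if all(k in table for k in kids):
                y = table[kids[0]][:-1]
                if [table[k] for k in kids] == [y + a for a in alphabet]:
                    for k in kids:
                        del table[k]
                    table[p] = y
                    changed = True
                    break
    return table
```

On the example of the remark above (with $A=\{a,b\}$), the table $\{aa \mapsto aa,\ ab \mapsto ab,\ b \mapsto aa\}$ max-extends back to $\{a \mapsto a,\ b \mapsto aa\}$.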
\begin{defn} \label{congruence} \ By RI$_k$ we denote the monoid of all right ideal homomorphisms between finitely generated right ideals of $A^*$, with function composition as multiplication. We consider the equivalence relation $\equiv$ defined for $\varphi_1, \varphi_2 \in {\it RI}_k$ by: \ $\varphi_1 \equiv \varphi_2$ \ iff \ ${\rm max}(\varphi_1) = {\rm max}(\varphi_2)$. \end{defn} It is easy to prove that ${\it RI}_k$ is closed under composition. Moreover, by existence and uniqueness of the maximal essential extension (Prop.\ \ref{extension_formula}(2)) each $\equiv$-equivalence class contains exactly one element of $M_{k,1}$. We want to prove: \begin{pro} \label{congruence_is_congruence} \ The equivalence relation $\equiv$ is a monoid congruence on ${\it RI}_k$, and $M_{k,1}$ is isomorphic (as a monoid) to ${\it RI}_k/\!\equiv$. \ Hence, $M_{k,1}$ is associative. \end{pro} First some Lemmas. \begin{lem} \label{inters_essent_ideals} \ If $R'_i \subseteq R_i$ ($i = 1,2$) are finitely generated right ideals with $R'_i$ essential in $R_i$, then $R'_1 \cap R'_2$ is essential in $R_1 \cap R_2$. \end{lem} {\bf Proof.} We use the tree characterization of essentiality. Any infinite path $p$ in $R_1 \cap R_2$ is also in $R_i$ ($i = 1,2$), hence $p$ eventually enters into $R'_i$. Thus $p$ eventually meets $R'_1$ and $R'_2$, i.e., $p$ meets $R'_1 \cap R'_2$. \ \ \ $\Box$ \begin{lem} \label{dom_equal_im} \ All $\varphi_1, \varphi_2 \in {\it RI}_k$ have restrictions $\Phi_1, \Phi_2 \in {\it RI}_k$ (not necessarily essential restrictions) such that: \\ $\bullet$ \ \ $\Phi_2 \circ \Phi_1 = \varphi_2 \circ \varphi_1$, \ and \\ $\bullet$ \ \ ${\rm Dom}(\Phi_2) = {\rm Im}(\Phi_1) = $ ${\rm Dom}(\varphi_2) \, \cap \, {\rm Im}(\varphi_1)$. \end{lem} {\bf Proof.} Let $R = {\rm Dom}(\varphi_2) \cap {\rm Im}(\varphi_1)$. This is a right ideal which is finitely generated since ${\rm Dom}(\varphi_2)$ and ${\rm Im}(\varphi_1)$ are finitely generated (see Lemma 3.3 of \cite{BiThomps}). 
Now we restrict $\varphi_1$ to $\Phi_1$ in such a way that ${\rm Im}(\Phi_1) = R$ and ${\rm Dom}(\Phi_1) = \varphi_1^{-1}(R)$, and we restrict $\varphi_2$ to $\Phi_2$ in such a way that ${\rm Dom}(\Phi_2) = R$ and ${\rm Im}(\Phi_2) = \varphi_2(R)$. Then $\Phi_2 \circ \Phi_1(.)$ and $\varphi_2 \circ \varphi_1(.)$ agree on $\varphi_1^{-1}(R)$; moreover, ${\rm Dom}(\Phi_2 \circ \Phi_1) = \varphi_1^{-1}(R)$. Since $\varphi_2 \circ \varphi_1(x)$ is only defined when $\varphi_1(x) \in R$, we have $\Phi_2 \circ \Phi_1 = \varphi_2 \circ \varphi_1$. Also, by the definition of $R$ we have ${\rm Dom}(\Phi_2) = {\rm Im}(\Phi_1)$. \ \ \ $\Box$ \begin{lem} \label{max_max} \ For all $\varphi_1, \varphi_2 \in {\it RI}_k$ we have: \ \ \ \ \ \ \ ${\rm max}(\varphi_2 \circ \varphi_1) = $ ${\rm max}({\rm max}(\varphi_2) \circ \varphi_1) = $ ${\rm max}(\varphi_2 \circ {\rm max}(\varphi_1))$. \end{lem} {\bf Proof.} We only prove the first equality; the proof of the second one is similar. By Lemma \ref{dom_equal_im} we can restrict $\varphi_1$ and $\varphi_2$ to $\varphi'_1$, respectively $\varphi'_2$, so that \ $\varphi'_2 \circ \varphi'_1 = \varphi_2 \circ \varphi_1$, and \ ${\rm Dom}(\varphi'_2) = {\rm Im}(\varphi'_1) = $ ${\rm Dom}(\varphi_2) \cap {\rm Im}(\varphi_1)$; let $R' = {\rm Dom}(\varphi_2) \cap {\rm Im}(\varphi_1)$. Similarly we can restrict $\varphi_1$ and ${\rm max}(\varphi_2)$ to $\varphi''_1$, respectively $\varphi''_2$, so that \ $\varphi''_2 \circ \varphi''_1 = {\rm max}(\varphi_2) \circ \varphi_1$, and \ ${\rm Dom}(\varphi''_2) = {\rm Im}(\varphi''_1) = $ ${\rm Dom}({\rm max}(\varphi_2)) \cap {\rm Im}(\varphi_1)$; let $R'' = {\rm Dom}({\rm max}(\varphi_2)) \cap {\rm Im}(\varphi_1)$. Obviously, $R' \subseteq R''$ (since $\varphi_2$ is a restriction of ${\rm max}(\varphi_2)$). 
Moreover, $R'$ is essential in $R''$, by Lemma \ref{inters_essent_ideals}; indeed, ${\rm Dom}(\varphi_2)$ is essential in ${\rm Dom}({\rm max}(\varphi_2))$ since ${\rm max}(\varphi_2)$ is an essential extension of $\varphi_2$. Since $R'$ is essential in $R''$, $\varphi_2 \circ \varphi_1$ is an essential restriction of ${\rm max}(\varphi_2) \circ \varphi_1$. Hence by uniqueness of the maximal essential extension, ${\rm max}(\varphi_2 \circ \varphi_1) = $ ${\rm max}({\rm max}(\varphi_2) \circ \varphi_1)$. \ \ \ $\Box$ \noindent {\bf Proof of Prop.\ \ref{congruence_is_congruence}:} \ If $\varphi_2 \equiv \psi_2$ then, by definition, ${\rm max}(\varphi_2) = {\rm max}(\psi_2)$, hence by Lemma \ref{max_max}: ${\rm max}(\varphi_2 \circ \varphi) = $ ${\rm max}({\rm max}(\varphi_2) \circ \varphi) = $ ${\rm max}({\rm max}(\psi_2) \circ \varphi) = $ ${\rm max}(\psi_2 \circ \varphi)$, \noindent for all $\varphi \in {\it RI}_k$. Thus (by the definition of $\equiv$), $\varphi_2 \circ \varphi \equiv \psi_2 \circ \varphi$, so $\equiv$ is a right congruence. Similarly one proves that $\equiv$ is a left congruence. Thus, ${\it RI}_k/\!\!\equiv \,$ is a monoid. Since every $\equiv$-equivalence class contains exactly one element of $M_{k,1}$ there is a one-to-one correspondence between ${\it RI}_k/\!\!\equiv \, $ and $M_{k,1}$. Moreover, the map \ $\varphi \in {\it RI}_k \longmapsto {\rm max}(\varphi) \in M_{k,1}$ \ is a homomorphism, by Lemma \ref{max_max} and by the definition of multiplication in $M_{k,1}$. Hence ${\it RI}_k/\!\!\equiv \, $ is isomorphic to $M_{k,1}$. \ \ \ $\Box$ \subsection{Other Thompson-Higman monoids} We now introduce a few more families of Thompson-Higman monoids, whose definition comes about naturally in analogy with $M_{k,1}$. 
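Multiplication in $M_{k,1}$ (composition of tables followed by maximal essential extension, as in Proposition \ref{congruence_is_congruence}) can also be sketched in code. Here compose builds the table of $\psi \circ \varphi$ by matching image words of $\varphi$ against the domain code of $\psi$, restricting where necessary, and max\_extend repeats the rewriting from the proof of Proposition \ref{extension_formula}; this is our own sketch, not from the text:

```python
def compose(psi, phi):
    """Table of psi∘phi (phi applied first), before maximal extension."""
    result = {}
    for x, y in phi.items():
        for p, q in psi.items():
            if y.startswith(p):                   # y = p z  =>  x maps to q z
                result[x] = q + y[len(p):]
            elif p.startswith(y) and p != y:      # p = y z  =>  restrict x to x z
                result[x + p[len(y):]] = q
    return result

def max_extend(table, alphabet):
    """Rewriting {(x a_i, y a_i) : i} -> (x, y), applied until exhaustion."""
    table = dict(table)
    changed = True
    while changed:
        changed = False
        for p in sorted({x[:-1] for x in table if x}, key=len, reverse=True):
            kids = [p + a for a in alphabet]
            if all(k in table for k in kids):
                y = table[kids[0]][:-1]
                if [table[k] for k in kids] == [y + a for a in alphabet]:
                    for k in kids:
                        del table[k]
                    table[p] = y
                    changed = True
                    break
    return table

def multiply(psi, phi, alphabet):
    """Product psi . phi in M_{k,1}: compose, then maximally extend."""
    return max_extend(compose(psi, phi), alphabet)
```

For $A=\{a,b\}$, the transposition $\tau=\{a \mapsto b,\ b \mapsto a\}$ satisfies $\tau \cdot \tau = {\bf 1}$ (table $\{\varepsilon \mapsto \varepsilon\}$), while composing $\{a \mapsto aa\}$ with $\tau$ gives the partial map $\{b \mapsto aa\}$.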
\begin{defn} \label{Tomps_monoid_variants} \ The Thompson-Higman {\em total} function monoid ${\it tot}M_{k,1}$ and the Thompson-Higman {\em surjective} function monoid ${\it sur}M_{k,1}$ consist of maximal essential extensions of homomorphisms between finitely generated right ideals of $A^*$ where the domain, respectively, the image ideal, is an {\em essential} right ideal. The Thompson-Higman {\em inverse} monoid ${\it Inv}_{k,1}$ consists of all maximal essential extensions of isomorphisms between finitely generated (not necessarily essential) right ideals of $A^*$. \end{defn} Every element $\varphi \in {\it tot}M_{k,1}$ can be described by a function $P \to Q$, called the {\it table} of $\varphi$, where $P, Q \subset A^*$ with $P$ a finite {\em maximal} prefix code over $A$. A similar description applies to ${\it sur}M_{k,1}$ but now with $Q$ a finite maximal prefix code. Every $\varphi \in {\it Inv}_{k,1}$ can be described by a bijection $P \to Q$ where $P, Q \subset A^*$ are two finite prefix codes (not necessarily maximal). It is easy to prove that essential extension and restriction of right ideal homomorphisms, as well as composition of such homomorphisms, preserve injectiveness, totality, and surjectiveness. Thus ${\it tot}M_{k,1}$, ${\it sur}M_{k,1}$, and ${\it Inv}_{k,1}$ are submonoids of $M_{k,1}$. We also consider the intersection ${\it tot}M_{k,1} \cap {\it sur}M_{k,1}$, i.e., the monoid of all maximal essential extensions of homomorphisms between finitely generated essential right ideals of $A^*$; we denote this monoid by ${\it totsur}M_{k,1}$. The monoids $M_{k,1}$, ${\it tot}M_{k,1}$, ${\it sur}M_{k,1}$, and ${\it totsur}M_{k,1}$ are {\em regular} monoids. (A monoid $M$ is regular iff for every $m \in M$ there exists $x \in M$ such that $m x m = m$.) The monoid ${\it Inv}_{k,1}$ is an inverse monoid. (A monoid $M$ is inverse iff for every $m \in M$ there exists one and only one $x \in M$ such that $m x m = m$ and $x = x m x$.) 
We consider the submonoids ${\it totInv}_{k,1}$ and ${\it surInv}_{k,1}$ of ${\it Inv}_{k,1}$, described by bijections $P \to Q$ where $P, Q \subset A^*$ are two finite prefix codes with $P$, respectively $Q$ maximal. The (unique) inverses of elements in ${\it totInv}_{k,1}$ are in ${\it surInv}_{k,1}$, and vice versa, so these submonoids of ${\it Inv}_{k,1}$ are not regular monoids. We have \ ${\it totInv}_{k,1} \cap {\it surInv}_{k,1} \ = \ G_{k,1}$ \ (the Thompson-Higman group). It is easy to see that for all $n>0$, $M_{k,1}$ contains the symmetric monoids ${\it PF}_{k^n}$ of all partial functions on $k^n$ elements, represented by all elements of $M_{k,1}$ with a table $P \to Q$ where $P, Q \subseteq A^n$. Hence $M_{k,1}$ contains all finite monoids. Similarly, ${\it tot}M_{k,1}$ contains the symmetric monoids $F_{k^n}$ of all total functions on $k^n$ elements. And ${\it Inv}_{k,1}$ contains ${\mathcal I}_{k^n}$ (the finite symmetric inverse monoid of all injective partial functions on $A^n$). \subsection{Cuntz algebras and Thompson-Higman monoids} All the monoids, inverse monoids, and groups, defined above, are submonoids of the multiplicative part of the Cuntz algebra ${\cal O}_k$. The Cuntz algebra ${\cal O}_k$, introduced by Dixmier \cite{Dixmier} (for $k=2$) and Cuntz \cite{Cuntz}, is a $k$-generated star-algebra (over the field of complex numbers) with identity element {\bf 1} and zero {\bf 0}, given by the following finite presentation. The generating set is $A = \{a_1, \ldots, a_k\}$. Since this is defined as a star-algebra, we automatically have the star-inverses $\{ \overline{a}_1, \ldots, \overline{a}_k\}$; for clarity we use overlines rather than stars. \noindent Relations of the presentation: \ $\overline{a}_i a_i = {\bf 1}$, \ \ for $i = 1, \ldots, k$; $\overline{a}_i a_j = {\bf 0}$, \ \ when $i \neq j$, $1 \leq i, j \leq k$; $a_1 \overline{a}_1 + \ldots + a_k \overline{a}_k = {\bf 1}$. 
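The first two families of relations yield a rewriting system on monomials in the generators and their bars, which already suffices to multiply elements of the form $\sum_{x} f(x)\,\overline{x}$ term by term. A small sketch of ours (the summation relation $a_1\overline{a}_1 + \ldots + a_k\overline{a}_k = {\bf 1}$ is deliberately not captured here):

```python
def reduce_word(word):
    """Reduce a monomial over {a_i, abar_i} using  abar_i a_i = 1  and
    abar_i a_j = 0 for i != j.  Tokens: ('a', i) for a_i, ('s', i) for abar_i.
    Returns the normal form as a token list, or None for the zero monomial."""
    out = []
    for t in word:
        if out and out[-1][0] == 's' and t[0] == 'a':
            if out[-1][1] == t[1]:
                out.pop()              # abar_i a_i = 1: cancel the pair
            else:
                return None            # abar_i a_j = 0: whole monomial is 0
        else:
            out.append(t)
    return out
```

In the normal form no $\overline{a}_i$ is immediately followed by an $a_j$, so every non-zero monomial reduces to the shape $u\,\overline{v}$ with $u, v \in A^*$, matching the terms $f(x)\,\overline{x}$ above.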
\noindent It is easy to verify that this defines a star-algebra. The Cuntz algebras are actually C$^*$-algebras with many remarkable properties (proved in \cite{Cuntz}), but here we only need them as star-algebras, without their norm and Cauchy completion. In \cite{BiThomps} and independently in \cite{Nekrash} it was proved that the Thompson-Higman group $G_{k,1}$ is the subgroup of ${\cal O}_k$ consisting of the elements that have an expression of the form \ $\sum_{x \in P} f(x) \ \overline{x}$ \ where we require the following: $P$ and $Q$ range over all finite maximal prefix codes over the alphabet $\{a_1, \ldots, a_k\}$, and $f$ is any bijection $P \to Q$. Another proof is given in \cite{Hughes}. More generally we also have: \begin{thm} \label{thomps_monoid_in_cuntz_algebra} \ The Thompson-Higman monoid $M_{k,1}$ is a submonoid of the multiplicative part of the Cuntz algebra ${\cal O}_k$. \end{thm} {\bf Proof outline.} The Thompson-Higman partial function monoid $M_{k,1}$ is the set of all elements of ${\cal O}_k$ that have an expression of the form \ $\sum_{x \in P} f(x) \ \overline{x}$ \ where $P \subset A^*$ ranges over all finite prefix codes, and $f$ ranges over functions $P \to A^*$. The details of the proof are very similar to the proofs in \cite{BiThomps, Nekrash}; the definition of {\it essential} restriction (and extension) and Proposition \ref{extension_formula} ensure that the same proof goes through. \ \ \ $\Box$ The embeddability into the Cuntz algebra is a further justification of the definitional choices that we made for the Thompson-Higman monoid $M_{k,1}$. \section{Structure and simplicity of the Thompson-Higman monoids} \noindent We give some structural properties of the Thompson-Higman monoids; in particular, we show that $M_{k,1}$ and ${\it Inv}_{k,1}$ are simple for all $k$.
\subsection{Group of units, $J$-relation, simplicity} By definition, the group of units of a monoid $M$ is the set of invertible elements (i.e., the elements $u \in M$ for which there exists $x \in M$ such that $xu = ux = {\bf 1}$, where {\bf 1} is the identity element of $M$). \begin{pro} \label{group_of_units} \ The Thompson-Higman group $G_{k,1}$ is the {\em group of units} of the monoids $M_{k,1}$, ${\it tot}M_{k,1}$, ${\it sur}M_{k,1}$, ${\it totsur}M_{k,1}$, and ${\it Inv}_{k,1}$. \end{pro} {\bf Proof.} It is obvious that the groups of units of the above monoids contain $G_{k,1}$. Conversely, we want to show that if $\varphi \in M_{k,1}$ (and in particular, if $\varphi$ is in one of the other monoids) and if $\varphi$ has a left inverse and a right inverse, then $\varphi \in G_{k,1}$. First, it follows that $\varphi$ is injective, i.e., $\varphi \in $ ${\it Inv}_{k,1}$. Indeed, existence of a left inverse implies that for some $\alpha \in M_{k,1}$ we have $\alpha \ \varphi = {\bf 1}$; hence, if $\varphi(x_1) = \varphi(x_2)$ then $x_1 = \alpha \ \varphi(x_1) = $ $\alpha \ \varphi(x_2) = x_2$. Next, we show that ${\rm domC}(\varphi)$ is a {\em maximal} prefix code, hence $\varphi \in {\it totInv}_{k,1}$. Indeed, we can again consider $\alpha \in M_{k,1}$ such that $\alpha \ \varphi = {\bf 1}$. For any essential restriction of {\bf 1} the domain code is a maximal prefix code, hence ${\rm domC}(\alpha \circ \varphi)$ is maximal (where $\circ$ denotes functional composition). Moreover, ${\rm domC}(\alpha \circ \varphi)$ is also contained in the domain code of some restriction of $\varphi$, since $\varphi(x)$ must be defined when $\alpha \circ \varphi(x)$ is defined. Hence domC($\varphi'$), for some restriction $\varphi'$ of $\varphi$, is a maximal prefix code; it follows that domC($\varphi$) is a maximal prefix code.
If we apply the reasoning of the previous paragraph to $\varphi^{-1}$ (which exists since we saw that $\varphi$ is injective), we conclude that ${\rm domC}(\varphi^{-1}) = {\rm imC}(\varphi)$ is a maximal prefix code. Thus, $\varphi \in {\it surInv}_{k,1}$. We proved that if $\varphi$ has a left inverse and a right inverse then $\varphi \in {\it totInv}_{k,1} \cap {\it surInv}_{k,1}$. Since ${\it totInv}_{k,1} \cap {\it surInv}_{k,1} = G_{k,1}$ we conclude that $\varphi \in G_{k,1}$. \ \ \ $\Box$ \noindent We now characterize some of the Green relations of $M_{k,1}$ and of ${\it Inv}_{k,1}$, and we prove simplicity. By definition, two elements $x,y$ of a monoid $M$ are {\em $J$-related} (denoted $x \equiv_J y$) iff $x$ and $y$ belong to exactly the same ideals of $M$. More generally, the {\em $J$-preorder} of $M$ is defined as follows: $x \leq_J y$ iff $x$ belongs to every ideal that $y$ belongs to. It is easy to see that $x \equiv_J y$ iff $x \leq_J y$ and $y \leq_J x$; moreover, $x \leq_J y$ iff there exist $\alpha, \beta \in M$ such that $x = \alpha y \beta$. A monoid $M$ is called {\em $J$-simple} iff $M$ has only one $J$-class (or equivalently, $M$ has only one ideal, namely $M$ itself). A monoid $M$ is called {\em $0$-$J$-simple} iff $M$ has exactly two $J$-classes, one of which consists of just a zero element (equivalently, $M$ has only two ideals, one of which is $\{{\bf 0}\}$, and the other is $M$ itself). See \cite{CliffPres, Grillet} for more information on the $J$-relation. Cuntz \cite{Cuntz} proved that the multiplicative part of the $C^*$-algebra ${\cal O}_k$ is a $0$-$J$-simple monoid, and that as an algebra ${\cal O}_k$ is simple. We will now prove similar results for the Thompson-Higman monoids. \begin{pro} \label{0Jsimple} The inverse monoid ${\it Inv}_{k,1}$ and the monoid $M_{k,1}$ are $0$-$J$-simple. The monoid ${\it tot}M_{k,1}$ is $J$-simple. \end{pro} {\bf Proof.} Let $\varphi \in M_{k,1}$ (or $\in {\it Inv}_{k,1}$).
When $\varphi$ is not the empty map there are $x_0, y_0 \in A^*$ such that $y_0 = \varphi(x_0)$. Let us define $\alpha, \beta \in {\it Inv}_{k,1}$ by the tables $\alpha = \{(\varepsilon \mapsto x_0)\}$ and $\beta = \{(y_0 \mapsto \varepsilon)\}$. Recall that $\varepsilon$ denotes the empty word. Then \ $\beta \, \varphi \, \alpha(.) = $ $\{(\varepsilon \mapsto \varepsilon)\} = {\bf 1}$. So, every non-zero element of $M_{k,1}$ (and of ${\it Inv}_{k,1}$) is in the same $J$-class as the identity element. In the case of ${\it tot}M_{k,1}$ we can take $\alpha = \{(\varepsilon \mapsto x_0)\}$ as before (since the domain code of $\alpha$ is $\{ \varepsilon\}$, which is a maximal prefix code), and we take $\beta' : Q \to \{\varepsilon\}$ (i.e., the map that sends every element of $Q$ to $\varepsilon$), where $Q$ is any finite maximal prefix code containing $y_0$. Then again, \ $\beta' \, \varphi \, \alpha(.) = $ $\{(\varepsilon \mapsto \varepsilon)\} = {\bf 1}$. \ \ \ $\Box$ \noindent Thompson proved that $V$ ($= G_{2,1}$) is a simple group; Higman proved more generally that when $k$ is even then $G_{k,1}$ is simple, and when $k$ is odd then $G_{k,1}$ contains a simple normal subgroup of index 2. We will show next that in the monoid case we have {\em simplicity for all} $k$ (not only when $k$ is even). For a monoid $M$, ``simple'', or more precisely, {\em ``congruence-simple''} is defined to mean that the only congruences on $M$ are the trivial congruences (i.e., the equality relation, and the congruence that lumps all elements of $M$ into one congruence class). \begin{thm} \label{simple} The Thompson-Higman monoids ${\it Inv}_{k,1}$ and $M_{k,1}$ are congruence-simple for all $k$. \end{thm} {\bf Proof.} \ Let $\equiv$ be any congruence on $M_{k,1}$ that is not the equality relation. We will show that the whole monoid is then congruent to the empty map {\bf 0}. We will make use of $0$-$J$-simplicity.
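The separating-multiplier computations used in the cases below can be made concrete with a small Python sketch (ours; helper names are hypothetical): if $x_0A^* \subseteq {\rm Dom}(\varphi)$ while ${\rm Dom}(\psi) \cap x_0A^* = \varnothing$, then multiplying on the right by $\beta = (x_0 \mapsto x_0)$ yields $\varphi \, \beta \neq {\bf 0} = \psi \, \beta$.

```python
# A concrete separating-multiplier computation (our sketch, hypothetical names).
def apply_table(table, w):
    for x, y in table.items():
        if w.startswith(x):
            return y + w[len(x):]
    return None

phi = {"0": "1", "1": "00"}      # total: domain code {0, 1} is maximal
psi = {"1": "00"}                # Dom(psi) does not meet 0 A*
x0 = "0"
beta = {x0: x0}

def compose(f, g):               # the rightmost factor acts first, as in the text
    return lambda w: None if g(w) is None else f(g(w))

phi_beta = compose(lambda w: apply_table(phi, w), lambda w: apply_table(beta, w))
psi_beta = compose(lambda w: apply_table(psi, w), lambda w: apply_table(beta, w))
assert phi_beta("0") == "1" and phi_beta("01") == "11"  # phi beta = (x0 |-> phi(x0))
assert all(psi_beta(w) is None for w in ["0", "01", "1", "10"])  # psi beta = 0
```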
\noindent {\sf Case 0:} \ Assume that $\Phi \equiv $ {\bf 0} for some element $\Phi \neq $ {\bf 0} of $M_{k,1}$. Then for all $\alpha, \beta \in M_{k,1}$ we obviously have $\alpha \, \Phi \, \beta \equiv $ {\bf 0}. Moreover, by $0$-$J$-simplicity of $M_{k,1}$ we have \ $M_{k,1}$ $= \{ \alpha \, \Phi \, \beta : \alpha, \beta \in M_{k,1}\}$ \ since $\Phi \neq 0$. Hence in this case all elements of $M_{k,1}$ are congruent to {\bf 0}. \noindent For the remainder we suppose that $\varphi \equiv \psi$ and $\varphi \neq \psi$, for some elements $\varphi, \psi$ of $M_{k,1} - \{ {\bf 0} \}$. For a right ideal $R \subseteq A^*$ generated by a prefix code $P$ we call $P A^{\omega}$ the set of {\em ends} of $R$. We call two right ideals $R_1, R_2$ {\em essentially equal} iff $R_1$ and $R_2$ have the same ends, and we denote this by $R_1 =_{\sf ess} R_2$. This is equivalent to the following property: Every right ideal that intersects $R_1$ also intersects $R_2$, and vice versa (see \cite{BiBern} and \cite{BiCongr}). \noindent {\sf Case 1:} \ ${\rm Dom}(\varphi) \neq_{\sf ess} {\rm Dom}(\psi)$. Then there exists $x_0 \in A^*$ such that $x_0A^* \subseteq {\rm Dom}(\varphi)$, but \ ${\rm Dom}(\psi) \cap x_0A^* = \varnothing$; or, vice versa, there exists $x_0 \in A^*$ such that $x_0A^* \subseteq {\rm Dom}(\psi)$, but \ ${\rm Dom}(\varphi) \cap x_0A^* = \varnothing$. Let us assume the former. Letting $\beta = (x_0 \mapsto x_0)$, we have $\varphi \, \beta(.) = (x_0 \mapsto \varphi(x_0))$. We also have $\psi \, \beta(.) = $ {\bf 0}, since $x_0A^* \cap {\rm Dom}(\psi) = \varnothing$. So, $\varphi \, \beta \equiv \psi \, \beta = {\bf 0}$, but $\varphi \, \beta \neq$ {\bf 0}. Hence case 0, applied to $\Phi = \varphi \, \beta$, implies that the entire monoid $M_{k,1}$ is congruent to {\bf 0}. \noindent {\sf Case 2.1:} \ ${\rm Im}(\varphi) \neq_{\sf ess} {\rm Im}(\psi)$ \ and \ ${\rm Dom}(\varphi) =_{\sf ess} {\rm Dom}(\psi)$.
Then there exists $y_0 \in A^*$ such that $y_0A^* \subseteq {\rm Im}(\varphi)$, but \ ${\rm Im}(\psi) \cap y_0A^* = \varnothing$; or, vice versa, $y_0A^* \subseteq {\rm Im}(\psi)$, but \ ${\rm Im}(\varphi) \cap y_0A^* = \varnothing$. Let us assume the former. Let $x_0 \in A^*$ be such that $y_0 = \varphi(x_0)$. Then \ $(y_0 \mapsto y_0) \circ \varphi \circ (x_0 \mapsto x_0)$ $ \ = \ (x_0 \mapsto y_0)$. On the other hand, \ $(y_0 \mapsto y_0) \circ \psi \circ (x_0 \mapsto x_0) \ = \ {\bf 0}$. Indeed, if $x_0A^* \cap {\rm Dom}(\psi) = \varnothing$ then for all $w \in A^* : $ \ $\psi \circ (x_0 \mapsto x_0)(x_0w) \ = \ \psi(x_0w)$ $ \ = \ $ $\varnothing$. And if $x_0A^* \cap {\rm Dom}(\psi) \neq \varnothing$ then for those $w \in A^*$ such that $x_0w \in {\rm Dom}(\psi)$ we have \ $(y_0 \mapsto y_0) \circ \psi \circ (x_0 \mapsto x_0)(x_0w) \ = \ $ $(y_0 \mapsto y_0)(\psi(x_0w)) \ = \ \varnothing$, since ${\rm Im}(\psi) \cap y_0A^* = \varnothing$. Now case 0 applies to ${\bf 0} \neq \Phi = $ $(y_0 \mapsto y_0) \circ \varphi \circ (x_0 \mapsto x_0)$ $ \equiv {\bf 0}$; hence all elements of $M_{k,1}$ are congruent to {\bf 0}. \noindent {\sf Case 2.2:} \ ${\rm Im}(\varphi) =_{\sf ess} {\rm Im}(\psi)$ \ and \ ${\rm Dom}(\varphi) =_{\sf ess} {\rm Dom}(\psi)$. Then after restricting $\varphi$ and $\psi$ to ${\rm Dom}(\varphi) \cap {\rm Dom}(\psi)$ ($ =_{\sf ess}$ ${\rm Dom}(\varphi) =_{\sf ess} {\rm Dom}(\psi)$), we have: $\, {\rm domC}(\varphi) = {\rm domC}(\psi)$, and there exist $x_0 \in {\rm domC}(\varphi) = {\rm domC}(\psi)$ and $y_0 \in {\rm Im}(\varphi)$, $y_1 \in {\rm Im}(\psi)$ such that $\varphi(x_0) = y_0 \neq y_1 = \psi(x_0)$. We have two sub-cases. \noindent {\sf Case 2.2.1:} \ $y_0$ and $y_1$ are not prefix-comparable. Then \ $(y_0 \mapsto y_0) \circ \varphi \circ (x_0 \mapsto x_0)$ $ \ = \ (x_0 \mapsto y_0)$. 
On the other hand, \ $(y_0 \mapsto y_0) \circ \psi \circ (x_0 \mapsto x_0)(x_0 w) \ = $ \ $(y_0 \mapsto y_0)(y_1 w) \ = \ \varnothing$ \ for all $w \in A^*$ \ (since $y_0$ and $y_1$ are not prefix-comparable). So \ $(y_0 \mapsto y_0) \circ \psi \circ (x_0 \mapsto x_0) \ = \ \ {\bf 0}$. Hence case 0 applies to \ ${\bf 0} \neq\Phi \ = \ $ $(y_0 \mapsto y_0) \circ \varphi \circ (x_0 \mapsto x_0)$ $\equiv {\bf 0}$. \noindent {\sf Case 2.2.2:} \ $y_0$ is a prefix of $y_1$, and $y_0 \neq y_1$. (The case where $y_1$ is a prefix of $y_0$ is similar.) Then $y_1 = y_0 a u_1$ for some $a \in A$, $u_1 \in A^*$. Letting $b \in A - \{a\}$, and $y_2 = y_0 b$, we obtain a string $y_2$ that is not prefix-comparable with $y_1$. Now, taking $v_2 = b$ we have $\, (y_2 \mapsto y_2) \circ \varphi \circ (x_0 \mapsto x_0)(x_0v_2)$ $ \ = \ $ $(y_2 \mapsto y_2)(y_0 v_2) \ = \ y_2$. But for all $w \in A^*$, $ \, (y_2 \mapsto y_2) \circ \psi \circ (x_0 \mapsto x_0)(x_0w) \ = \ $ $(y_2 \mapsto y_2)(y_1 w) \ = \ \varnothing$, since $y_2$ and $y_1$ are not prefix-comparable. Thus, case 0 applies to \ ${\bf 0} \neq \Phi = $ $(y_2 \mapsto y_2) \circ \varphi \circ (x_0 \mapsto x_0)$ $\equiv {\bf 0}$. The same proof works for ${\it Inv}_{k,1}$ since all the multipliers used in the proof (of the form $(u \mapsto v)$ for some $u,v \in A^*$) belong to ${\it Inv}_{k,1}$. \ \ \ $\Box$ \subsection{$D$-relation} Besides the $J$-relation and the $J$-preorder, based on ideals, there are the $R$- and $L$-relations and $R$- and $L$-preorders, based on right (or left) ideals. Two elements $x,y \in M$ are {\em $R$-related} (denoted $x \equiv_R y$) iff $x$ and $y$ belong to exactly the same right ideals of $M$. The {\em $R$-preorder} is defined as follows: $x \leq_R y$ iff $x$ belongs to every right ideal that $y$ belongs to. It is easy to see that $x \equiv_R y$ iff $x \leq_R y$ and $y \leq_R x$; also, $x \leq_R y$ iff there exists $\alpha \in M$ such that $x = y \alpha$. In a similar way one defines $\equiv_L$ and $\leq_L$.
Finally, there is the {\em $D$-relation} of $M$, which is defined as follows: $x \equiv_D y$ iff there exists $s \in M$ such that $x \equiv_R s \equiv_L y$; this is easily seen to be equivalent to saying that there exists $t \in M$ such that $x \equiv_L t \equiv_R y$. For more information on these definitions see for example \cite{CliffPres, Grillet}. The $D$-relation of $M_{k,1}$ and ${\it Inv}_{k,1}$ has an interesting characterization, as we shall prove next. We will represent all elements of $M_{k,1}$ by tables of the form $\varphi: P \to Q$, where both $P$ and $Q$ are finite prefix codes over $A$ (with $|A| = k$). For such a table we also write $P = {\rm domC}(\varphi)$ (the domain code of $\varphi$) and $Q = {\rm imC}(\varphi)$ (the image code of $\varphi$). In general, tables of elements of $M_{k,1}$ have the form $P \to S$, where $P$ is a finite prefix code and $S$ is a finite set; but by using essential restrictions, if necessary, every element of $M_{k,1}$ can be given a table $P \to Q$, where both $P$ and $Q$ are finite prefix codes. Note the following invariants with respect to essential restrictions: \begin{pro} \label{essent_ext_restr_mod_k_1} \ Let $\varphi_1: P_1 \to Q_1$ be a table for an element of $M_{k,1}$, where $P_1,Q_1 \subset A^*$ are finite prefix codes. Let $\varphi_2: P_2 \to Q_2$ be another finite table for the {\em same} element of $M_{k,1}$, obtained from the table $\varphi_1$ by an essential restriction. Then $P_2, Q_2 \subset A^*$ are finite prefix codes and we have \ \ \ \ \ $|P_1| \equiv |P_2|$ \ {\rm mod} $(k-1)$ \ \ \ and \ \ \ \ \ $|Q_1| \equiv |Q_2|$ \ {\rm mod} $(k-1)$. \noindent These modular congruences also hold for essential extensions, provided that we only extend to tables in which the image is a prefix code.
\end{pro} {\bf Proof.} An essential restriction consists of a finite sequence of essential restriction steps; an essential restriction step consists of replacing a table entry $(x,y)$ of $\varphi_1$ by $\{(xa_1, ya_1), \ldots, (xa_k, ya_k)\}$ (according to Proposition \ref{extension_formula}). For a finite prefix code $Q \subset A^*$, and $q \in Q$, the finite set $(Q - \{q\}) \cup \{qa_1, \ldots, qa_k\}$ is also a prefix code, as is easy to prove. In this process, the cardinalities change as follows: \ $|P_1|$ becomes $|P_1| - 1 + k$ and $|Q_1|$ becomes $|Q_1| - 1 + k$. Indeed (looking at $Q_1$ for example), first an element $y$ is removed from $Q_1$, then the $k$ elements $\{ya_1, \ldots, ya_k\}$ are added. The elements $ya_i$ that are added are all different from the elements that are already present in $Q_1 - \{y\}$; in fact, more strongly, $ya_i$ and the elements of $Q_1 - \{y\}$ are not prefixes of each other. \ \ \ $\Box$ As a consequence of Prop.\ \ref{essent_ext_restr_mod_k_1} it makes sense, for any $\varphi \in M_{k,1}$, to talk about $|{\rm domC}(\varphi)|$ and $|{\rm imC}(\varphi)|$ as elements of ${\mathbb Z}_{k-1}$, independently of the representation of $\varphi$ by a right-ideal homomorphism. \begin{thm} \label{D_relation_characteris} For any non-zero elements $\varphi, \psi$ of $M_{k,1}$ (or of ${\it Inv}_{k,1}$) the $D$-relation is characterized as follows: \ \ \ $\varphi \equiv_D \psi$ \ \ \ iff \ \ \ $|{\rm imC}(\varphi)| \equiv |{\rm imC}(\psi)|$ \ {\rm mod} $(k-1)$. \noindent Hence, $M_{k,1}$ and ${\it Inv}_{k,1}$ have $k-1$ non-zero $D$-classes. In particular, $M_{2,1}$ and ${\it Inv}_{2,1}$ are $0$-$D$-simple (also called {\em $0$-bisimple}). \end{thm} The proof of Theorem \ref{D_relation_characteris} uses several Lemmas. 
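The invariance in Proposition \ref{essent_ext_restr_mod_k_1} can be checked mechanically. The following Python sketch (ours; helper names are hypothetical) performs essential-restriction steps, replacing $q$ in the prefix code $Q$ by $\{qa_1, \ldots, qa_k\}$, and verifies that the prefix-code property is preserved while the cardinality stays constant mod $k-1$.

```python
# One essential-restriction step: replace q in the prefix code Q by
# {q a_1, ..., q a_k}; each step adds k - 1 elements (our sketch).
def is_prefix_code(P):
    return not any(p != q and q.startswith(p) for p in P for q in P)

def restriction_step(Q, q, A):
    assert q in Q
    return (Q - {q}) | {q + x for x in A}

k = 3
A = {"a", "b", "c"}
Q = set(A)                        # the maximal prefix code A itself
sizes = [len(Q)]
for q in ["a", "ab", "b"]:        # three successive steps
    Q = restriction_step(Q, q, A)
    assert is_prefix_code(Q)
    sizes.append(len(Q))
assert sizes == [3, 5, 7, 9]                       # each step adds k - 1
assert len({n % (k - 1) for n in sizes}) == 1      # all congruent mod k - 1
```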
\begin{lem} \label{card_max_pref_code} {\rm (\cite{BiCoNP} Lemma 6.1; Arxiv version of \cite{BiCoNP} Lemma 9.9).} \ For every finite alphabet $A$ and every integer $i \geq 0$ there exists a maximal prefix code of cardinality \ $1 + (|A|-1) \, i$. And every finite maximal prefix code over $A$ has cardinality \ $1 + (|A|-1) \, i$, for some integer $i \geq 0$. It follows that when $|A| = 2$, there are finite maximal prefix codes over $A$ of every cardinality $\geq 1$. \ \ \ \ \ $\Box$ \end{lem} As a consequence of this Lemma we have for all $\varphi \in G_{k,1}$: \ $\|\varphi\| \equiv 1$ mod $(k-1)$. Thus, except for the Thompson group $V$ (when $k=2$), there is a constraint on the table size of the elements of the group. In the following ${\rm id}_Q$ denotes the element of ${\it Inv}_{k,1}$ given by the table $\{(x \mapsto x) : x \in Q\}$ where $Q \subset A^*$ is any finite prefix code. \begin{lem} \label{D_if_imC_same_card} \ {\bf (1)} For any $\varphi \in M_{k,1}$ (or $\in {\it Inv}_{k,1}$) with table $P \to Q$ (where $P, Q$ are finite prefix codes) we have: \ $\varphi \equiv_R {\rm id}_Q$. \\ {\bf (2)} If $S, T$ are finite prefix codes with $|S| = |T|$ then \ ${\rm id}_S \equiv_D {\rm id}_T$. \\ {\bf (3)} If $\varphi_1: P_1 \to Q_1$ and $\varphi_2: P_2 \to Q_2$ are such that $|Q_1| = |Q_2|$ then \ $\varphi_1 \equiv_D \varphi_2$. \end{lem} {\bf Proof.} {\bf (1)} Let $P' \subseteq P$ be a set of representatives modulo $\varphi$ (i.e., we form $P'$ by choosing one element in every set $\varphi^{-1} \varphi(x)$ as $x$ ranges over $P$). So, $|P'| = |Q|$. Let $\alpha \in {\it Inv}_{k,1}$ be given by a table $Q \to P'$; the exact map does not matter, as long as $\alpha$ is bijective. Then $\varphi \circ \alpha(.)$ is a permutation of $Q$, and $\varphi \circ \alpha \equiv_R $ $\varphi \circ \alpha \circ (\varphi \circ \alpha)^{-1} = {\rm id}_Q$.
Now, $\varphi \geq_R \varphi \circ \alpha \geq_R $ $\varphi \circ \alpha \circ (\varphi \circ \alpha)^{-1} \circ \varphi = $ ${\rm id}_Q \circ \varphi = \varphi$, hence $\varphi \equiv_R \varphi \circ \alpha$ \ ($\equiv_R {\rm id}_Q$). \noindent {\bf (2)} Let $\alpha: S \to T$ be a bijection (which exists since $|S| = |T|$); so $\alpha$ represents an element of ${\it Inv}_{k,1}$. Then $\alpha = \alpha \circ {\rm id}_S(.)$ and ${\rm id}_S = \alpha^{-1} \circ \alpha(.)$; hence, $\alpha \equiv_L {\rm id}_S$. Also, $\alpha = {\rm id}_T \circ \alpha(.)$ and ${\rm id}_T = \alpha \circ \alpha^{-1}(.)$; hence, $\alpha \equiv_R {\rm id}_T$. Thus, ${\rm id}_S \equiv_L \alpha \equiv_R {\rm id}_T$. \noindent {\bf (3)} If $|Q_1| = |Q_2|$ then ${\rm id}_{Q_1} \equiv_D {\rm id}_{Q_2}$ by (2). Moreover, $\varphi_1 \equiv_D {\rm id}_{Q_1}$ and $\varphi_2 \equiv_D {\rm id}_{Q_2}$ by (1). The result follows by transitivity of $\equiv_D$. \ \ \ $\Box$ \begin{lem} \label{special_pref_code} \ {\bf (1)} For any $m \geq k$ let $i$ be the residue of $m$ modulo $k-1$ in the range $2 \leq i \leq k$, and let us write $m = i + (k-1)j$, for some $j \geq 0$. Then there exists a prefix code $Q_{i,j}$ of cardinality $|Q_{i,j}| = m$, such that ${\rm id}_{Q_{i,j}}$ is an essential restriction of ${\rm id}_{\{a_1, \ldots,a_i\}}$. Hence, \ ${\rm id}_{Q_{i,j}} = {\rm id}_{\{a_1, \ldots,a_i\}}$ \ as elements of ${\it Inv}_{k,1}$. \\ {\bf (2)} In $M_{k,1}$ and in ${\it Inv}_{k,1}$ we have \ ${\rm id}_{\{a_1\}} \equiv_D {\rm id}_{\{a_1, \ldots, a_k\}} = {\bf 1}$. \end{lem} {\bf Proof.} {\bf (1)} For any $m \geq k$ there exist $i, j \geq 0$ such that $1 \leq i \leq k$ and $m = i + (k-1)j$. We consider the prefix code $Q_{i,j} \ = \ \{a_2, \ldots,a_i\} \ \ \cup \ \ $ $\bigcup_{r=1}^{j-1} a_1^r (A-\{a_1\}) \ \ \cup \ \ a_1^j A$. \noindent It is easy to see that $Q_{i,j}$ is a prefix code, which is maximal iff $i = k$; see Fig.\ 1 below. Clearly, $|Q_{i,j}| = i + (k-1)j$. 
Since $Q_{i,j}$ contains $a_1^j A$, we can perform an essential extension of ${\rm id}_{Q_{i,j}}$ by replacing the table entries \ $\{(a_1^j a_1, a_1^j a_1), (a_1^j a_2, a_1^j a_2), \ldots, $ $(a_1^j a_k,a_1^j a_k)\}$ \ by $(a_1^j, a_1^j)$. This replaces $Q_{i,j}$ by $Q_{i,j-1}$. So, ${\rm id}_{Q_{i,j}}$ can be essentially extended to ${\rm id}_{Q_{i,j-1}}$. By repeating this we find that ${\rm id}_{Q_{i,j}}$ is the same element (in $M_{k,1}$ and in ${\it Inv}_{k,1}$) as ${\rm id}_{Q_{i,0}} = {\rm id}_{\{a_1, \ldots, a_i\}}$. \noindent {\bf (2)} By essential restriction, \ ${\rm id}_{\{a_1\}} = {\rm id}_{\{a_1a_1, a_1a_2, \ldots, a_1a_k\}}$, in $M_{k,1}$ and in ${\it Inv}_{k,1}$. And by Lemma \ref{D_if_imC_same_card}(2), \ ${\rm id}_{\{a_1a_1, a_1a_2, \ldots, a_1a_k\}} \equiv_D $ ${\rm id}_{\{a_1, \ldots, a_k\}}$; the latter, by essential extension, is {\bf 1}. \ \ \ $\Box$ \unitlength=0.90mm \special{em:linewidth 0.4pt} \linethickness{0.4pt} \begin{picture}(150,100) \put(120,90){\makebox(0,0)[cc]{$\varepsilon$}} \put(123,87){\line(1,-2){2}} \put(136,79){\makebox(0,0)[cc]{$\{a_2,\ldots,a_i\}$}} \put(124,87){\line(2,-1){8}} \put(128,83){\makebox(0,0)[cc]{$\ldots$}} \put(117,87){\line(-1,-2){2.1}} \put(114,80){\makebox(0,0)[cc]{$a_1$}} \put(117,77){\line(1,-2){2.2}} \put(133,70){\makebox(0,0)[cc]{$a_1 \, (A-\{a_1\})$}} \put(118,77){\line(2,-1){8}} \put(122,73){\makebox(0,0)[cc]{$\ldots$}} \put(111,77){\line(-1,-2){2}} \put(107,70){\makebox(0,0)[cc]{.}} \put(106,68){\makebox(0,0)[cc]{.}} \put(105,66){\makebox(0,0)[cc]{.}} \put(103,62){\line(-1,-2){2}} \put(99,54){\makebox(0,0)[cc]{$a_1^r$}} \put(102,50){\line(1,-2){2}} \put(119,43){\makebox(0,0)[cc]{$a_1^r \, (A-\{a_1\})$}} \put(103,50){\line(2,-1){8}} \put(107,46){\makebox(0,0)[cc]{$\ldots$}} \put(96,50){\line(-1,-2){2}} \put(92,43){\makebox(0,0)[cc]{.}} \put(91,41){\makebox(0,0)[cc]{.}} \put(90,39){\makebox(0,0)[cc]{.}} \put(88,35){\line(-1,-2){2}} \put(84,26){\makebox(0,0)[cc]{$a_1^{j-1}$}} 
\put(85,22){\line(1,-2){2.7}} \put(104,13){\makebox(0,0)[cc]{$a_1^{j-1} \, (A-\{a_1\})$}} \put(86,22){\line(2,-1){10}} \put(91,17){\makebox(0,0)[cc]{$\ldots$}} \put(79,22){\line(-1,-2){2.2}} \put(73,13){\makebox(0,0)[cc]{$a_1^j$}} \put(71,9){\line(-1,-2){2.2}} \put(75,9){\line(1,-2){2.2}} \put(73,5){\makebox(0,0)[cc]{$\ldots$}} \put(73,1){\makebox(0,0)[cc]{$A$}} \put(20,-6){\makebox(0,0)[cc]{{\sf Fig.\ 1: \ The prefix tree of $Q_{i,j}$.}}} \end{picture} \begin{lem} \label{Green_rel_in_Inv_from_M} \ For all $\varphi, \psi \in {\it Inv}_{k,1}$: If \ $\varphi \geq_{L(M_{k,1})} \psi$, where $\geq_{L(M_{k,1})}$ is the $L$-preorder of $M_{k,1}$, then \ $\varphi \geq_{L(I_{k,1})} \psi$, where $\geq_{L(I_{k,1})}$ is the $L$-preorder of ${\it Inv}_{k,1}$. The same holds with $\geq_L$ replaced by $\equiv_L$, $\geq_R$, $\equiv_R$, $\equiv_D$, $\geq_J$ and $\equiv_J$. \end{lem} {\bf Proof.} If $\psi = \alpha \, \varphi$ for some $\alpha \in M_{k,1}$ then let us define $\alpha'$ by \ $\alpha' = \alpha \ {\rm id}_{{\rm Im}(\varphi)}$. Then we have: \ $\psi \ \varphi^{-1} = \alpha \ \varphi \ \varphi^{-1} = $ $\alpha \ {\rm id}_{{\rm Im}(\varphi)} = \alpha'$, hence $\alpha' \in {\it Inv}_{k,1}$ (since $\varphi, \psi \in {\it Inv}_{k,1}$). Moreover, $\alpha' \, \varphi = $ $\alpha \ {\rm id}_{{\rm Im}(\varphi)} \ \varphi = \alpha \, \varphi = $ $\psi$. \ \ \ $\Box$ So far our Lemmas imply that in $M_{k,1}$ and in ${\it Inv}_{k,1}$, every non-zero element is $\equiv_D$ to one of the $k-1$ elements ${\rm id}_{\{a_1, \ldots, a_i\}}$, for $i=1, \ldots, k-1$. Moreover the Lemmas show that if two elements of $M_{k,1}$ (or of ${\it Inv}_{k,1}$) are given by tables $\varphi_1: P_1 \to Q_1$ and $\varphi_2: P_2 \to Q_2$, where $P_1$, $Q_1$, $P_2$ and $Q_2$ are finite prefix codes, then we have: \ If \ $|Q_1| \equiv |Q_2|$ mod $(k-1)$ \ then \ $\varphi_1 \equiv_D \varphi_2$. We still need to prove the converse of this. 
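The prefix codes $Q_{i,j}$ of Lemma \ref{special_pref_code} can be generated and checked mechanically. In the following sketch (ours, for $k=3$ with $a_1 = $ ``a''), maximality is tested via the Kraft equality $\sum_{p \in Q} k^{-|p|} = 1$, a standard criterion for finite maximal prefix codes that is not taken from this paper.

```python
# Building Q_{i,j} = {a_2,...,a_i} u U_{r=1}^{j-1} a_1^r (A - {a_1}) u a_1^j A
# and checking |Q_{i,j}| = i + (k-1)j; maximal iff i = k (our sketch).
from fractions import Fraction

k = 3
A = ["a", "b", "c"]

def Q_ij(i, j):
    code = {A[r] for r in range(1, i)}                  # {a_2, ..., a_i}
    for r in range(1, j):
        code |= {"a" * r + x for x in A[1:]}            # a_1^r (A - {a_1})
    code |= {"a" * j + x for x in A}                    # a_1^j A
    return code

def is_maximal(Q):                # Kraft equality for finite maximal prefix codes
    return sum(Fraction(1, k ** len(p)) for p in Q) == 1

for i in range(1, k + 1):
    for j in range(1, 4):
        assert len(Q_ij(i, j)) == i + (k - 1) * j
        assert is_maximal(Q_ij(i, j)) == (i == k)       # maximal iff i = k
```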
It is sufficient to prove the converse for ${\it Inv}_{k,1}$, by Lemma \ref{Green_rel_in_Inv_from_M} and because every element of $M_{k,1}$ is $\equiv_D$ to an element of ${\it Inv}_{k,1}$ (namely ${\rm id}_{\{a_1,\ldots,a_i\}}$). \begin{lem} \label{L_implies_mod_k1} \ Let $\varphi, \psi \in {\it Inv}_{k,1}$. If $\varphi \equiv_D \psi$ in ${\it Inv}_{k,1}$, then \ $\|\varphi\| \equiv \|\psi\|$ {\rm mod} $(k-1)$. \end{lem} {\bf Proof.} (1) We first prove that if $\varphi \equiv_L \psi$ then $|{\rm domC}(\varphi)| \equiv |{\rm domC}(\psi)|$ {\rm mod} $(k-1)$. By definition, $\varphi \equiv_L \psi$ iff $\varphi = \beta \, \psi$ and $\psi = \alpha \, \varphi$ for some $\alpha, \beta \in {\it Inv}_{k,1}$. By Lemma \ref{dom_equal_im} \ there are restrictions $\beta'$ and $\psi'$ of $\beta$, respectively $\psi$, and an essential restriction $\Phi$ of $\varphi$ such that: $\Phi = \beta' \circ \psi'$, \ and \ ${\rm Dom}(\beta') = {\rm Im}(\psi')$. \noindent It follows that ${\rm Dom}(\Phi) \subseteq {\rm Dom}(\psi')$, since if $\psi'(x)$ is not defined then $\Phi(x) = \beta' \circ \psi'(x)$ is not defined either. Similarly, there is an essential restriction $\Psi$ of $\psi$ and a restriction $\varphi'$ of $\varphi$ such that ${\rm Dom}(\Psi) \subseteq {\rm Dom}(\varphi')$. Thus, the restriction of both $\varphi$ and $\psi$ to the intersection ${\rm Dom}(\Phi) \cap {\rm Dom}(\Psi)$ yields restrictions $\varphi''$, respectively $\psi''$ such that ${\rm Dom}(\varphi'') = {\rm Dom}(\psi'')$. \noindent {\sf Claim:} \ $\varphi''$ and $\psi''$ are essential restrictions of $\varphi$, respectively $\psi$. Indeed, every right ideal $R$ of $A^*$ that intersects ${\rm Dom}(\psi)$ also intersects ${\rm Dom}(\Psi)$ (since $\Psi$ is an essential restriction of $\psi$). Since ${\rm Dom}(\Psi) \subseteq {\rm Dom}(\varphi') \subseteq $ ${\rm Dom}(\varphi)$, it follows that $R$ also intersects ${\rm Dom}(\varphi)$.
Moreover, since $\Phi$ is an essential restriction of $\varphi$, $R$ also intersects ${\rm Dom}(\Phi)$. Thus, ${\rm Dom}(\Phi)$ is essential in ${\rm Dom}(\psi)$. Since ${\rm Dom}(\Psi)$ is also essential in ${\rm Dom}(\psi)$, it follows that ${\rm Dom}(\Phi) \cap {\rm Dom}(\Psi)$ is essential in ${\rm Dom}(\psi)$; indeed, in general, the intersection of two right ideals $R_1, R_2$ that are essential in a right ideal $R_3$, is essential in $R_3$ (this is a special case of Lemma \ref{inters_essent_ideals}). This means that $\psi''$ is an essential restriction of $\psi$. Similarly, one proves that $\varphi''$ is an essential restriction of $\varphi$. \ \ \ [This proves the Claim.] So, $\varphi''$ and $\psi''$ are essential restrictions such that ${\rm Dom}(\varphi'') = {\rm Dom}(\psi'')$. Hence, ${\rm domC}(\varphi'') = {\rm domC}(\psi'')$; Proposition \ref{essent_ext_restr_mod_k_1} then implies that \ $|{\rm domC}(\varphi)| \equiv |{\rm domC}(\varphi'')| = $ $|{\rm domC}(\psi'')| \equiv |{\rm domC}(\psi)|$ \ mod \ $(k-1)$. \noindent (2) Next, let us prove that if $\varphi \equiv_R \psi$ then $|{\rm imC}(\varphi)| \equiv |{\rm imC}(\psi)|$ {\rm mod} $(k-1)$. In ${\it Inv}_{k,1}$ we have $\varphi \equiv_R \psi$ iff $\varphi^{-1} \equiv_L \psi^{-1}$. Also, ${\rm imC}(\varphi) = {\rm domC}(\varphi^{-1})$. Hence, (2) follows from (1). \noindent The Lemma now follows from (1) and (2), since for elements of ${\it Inv}_{k,1}$, $|{\rm imC}(\varphi)| = |{\rm domC}(\varphi)| = $ $\|\varphi\|$, and since the $D$-relation is the composite of the $L$-relation and the $R$-relation. \ \ \ $\Box$ \noindent {\bf Proof of Theorem \ref{D_relation_characteris}.} We saw already (in the observations before Lemma \ref{L_implies_mod_k1} and in the preceding Lemmas) that for $\varphi_1: P_1 \to Q_1$ and $\varphi_2: P_2 \to Q_2$ (where $P_1$, $Q_1$, $P_2$ and $Q_2$ are non-empty finite prefix codes) we have: \ If \ $|Q_1| \equiv |Q_2|$ mod $(k-1)$ \ then \ $\varphi_1 \equiv_D \varphi_2$. 
In particular, when $|Q_1| \equiv i$ mod $(k-1)$ then $\varphi_1 \equiv_D {\rm id}_{\{a_1, \ldots, a_i\}}$. It follows from Lemma \ref{L_implies_mod_k1} that the elements ${\rm id}_{\{a_1, \ldots, a_i\}}$ (for $i=1, \ldots, k-1$) are all in different $D$-classes. \ \ \ $\Box$ So far we have characterized the $D$- and $J$-relations of $M_{k,1}$ and ${\it Inv}_{k,1}$. We leave the general study of the Green relations of $M_{k,1}$, ${\it Inv}_{k,1}$, and the other Thompson-Higman monoids for future work. The main result of this paper, to be proved next, is that the Thompson-Higman monoids $M_{k,1}$ and ${\it Inv}_{k,1}$ are finitely generated and that their word problem over any finite generating set is in {\sf P}. \section{Finite generating sets} We will show that ${\it Inv}_{k,1}$ and $M_{k,1}$ are finitely generated. An application of the latter fact is that a finite generating set of $M_{k,1}$ can be used to build combinational circuits for finite boolean functions that do not have fixed-length inputs or outputs. In engineering, non-fixed length inputs or outputs make sense, for example, if the inputs or outputs are handled sequentially, and if the possible input strings form a prefix code. First we need some more definitions about prefix codes. The {\em prefix tree} of a prefix code $P \subset A^*$ is, by definition, a tree whose vertex set is the set of all the prefixes of the elements of $P$, and whose edge set is $\{ (x,xa) : a \in A, \ xa$ is a prefix of some element of $P\}$. The tree is rooted, with root $\varepsilon$ (the empty word). Thus, the prefix tree of $P$ is a subtree of the tree of $A^*$. The set of leaves of the prefix tree of $P$ is $P$ itself. The vertices that are not leaves are called {\em internal vertices}. For brevity, we will say ``internal vertex of $P$'' instead of ``internal vertex of the prefix tree of $P$''. An internal vertex has between 1 and $k$ children; an internal vertex is called {\em saturated} iff it has $k$ children.
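These notions can be made concrete. In the following Python sketch (ours; helper names are hypothetical), adding the missing children of every internal vertex as new leaves embeds a finite prefix code into a finite maximal one; the Kraft equality $\sum_{p \in Q} k^{-|p|} = 1$, a standard criterion for finite maximal prefix codes, serves as the check.

```python
# Saturating a finite prefix code P over A = {a, b}: every internal vertex of
# the prefix tree gets its missing children added as new leaves (our sketch).
from fractions import Fraction

A = ["a", "b"]

def saturate(P):
    vertices = {p[:n] for p in P for n in range(len(p) + 1)}
    internal = {p[:n] for p in P for n in range(len(p))}    # proper prefixes
    return set(P) | {v + x for v in internal for x in A if v + x not in vertices}

def kraft(Q):          # equals 1 exactly for finite maximal prefix codes
    return sum(Fraction(1, len(A) ** len(p)) for p in Q)

P = {"ab", "ba"}
P_max = saturate(P)
assert P <= P_max and kraft(P_max) == 1
assert P_max == {"aa", "ab", "ba", "bb"}
```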
One can prove easily that a prefix code $P$ is maximal iff every internal vertex of the prefix tree of $P$ is saturated. Hence, every prefix code $P$ can be embedded in a maximal prefix code (which is finite when $P$ is finite), obtained by saturating the prefix tree of $P$. Moreover we have: \begin{lem} \label{ext_to_max_pref_code} \ For any two finite non-maximal prefix codes $P_1, P_2 \subset A^*$ there are finite maximal prefix codes $P'_1, P'_2 \subset A^*$ such that $P_1 \subset P'_1$, $P_2 \subset P'_2$, and $|P'_1| = |P'_2|$. \end{lem} {\bf Proof.} First we saturate $P_1$ and $P_2$ to obtain two maximal prefix codes $P''_1$ and $P''_2$ such that $P_1 \subset P''_1$, and $P_2 \subset P''_2$. If $|P''_1| \neq |P''_2|$ (e.g., if $|P''_1| < |P''_2|$) then $|P''_1|$ and $|P''_2|$ differ by a multiple of $k-1$ (by Prop.\ \ref{essent_ext_restr_mod_k_1}). So, in order to make $|P''_1|$ equal to $|P''_2|$ we repeat the following (until $|P''_1| = |P''_2|$): consider a leaf of the prefix tree of $P''_1$ that does not belong to $P_1$, and attach $k$ children at that leaf; now this leaf is no longer a leaf, and the net increase in the number of leaves is $k-1$. \ \ \ $\Box$ \begin{lem} \label{transitive_action_G_k1_on_prefixCodes} Let $P$ and $Q$ be finite prefix codes of $A^*$ with $|P| = |Q|$. If $P$ and $Q$ are both maximal prefix codes, or if both are non-maximal, then there is an element of $G_{k,1}$ that maps $P$ onto $Q$. On the other hand, if one of $P$ and $Q$ is maximal and the other one is not maximal, then there is no element of $G_{k,1}$ that maps $P$ onto $Q$. \end{lem} {\bf Proof.} When $P$ and $Q$ are both maximal then any one-to-one correspondence between $P$ and $Q$ is an element of $G_{k,1}$. When $P$ and $Q$ are both non-maximal, we use Lemma \ref{ext_to_max_pref_code} above to find two maximal prefix codes $P'$ and $Q'$ such that $P \subset P'$, $Q \subset Q'$, and $|P'| = |Q'|$. 
Consider now any bijection from $P'$ onto $Q'$ that is also a bijection from $P$ onto $Q$. This is an element of $G_{k,1}$. When $P$ is maximal and $Q$ is non-maximal, then every element $\varphi \in M_{k,1}$ that maps $P$ onto $Q$ will satisfy ${\rm domC}(\varphi) = P$; since $\varphi$ is onto $Q$, we have ${\rm imC}(\varphi) = Q$. Hence, $\varphi \not\in G_{k,1}$ since ${\rm imC}(\varphi)$ is a non-maximal prefix code. A similar reasoning shows that no element of $G_{k,1}$ maps $P$ onto $Q$ if $P$ is non-maximal and $Q$ is maximal. \ \ \ $\Box$ \noindent Notation: For $u,v \in A^*$, the element of ${\it Inv}_{k,1}$ with one-element domain code $\{u\}$ and one-element image code $\{v\}$ is denoted by $(u \mapsto v)$. When $(u \mapsto v)$ is composed with itself $j$ times the resulting element of ${\it Inv}_{k,1}$ is denoted by $(u \mapsto v)^j$. \begin{lem} \label{special_generation} \ {\bf (1)} \ For all $j > 0$: \ \ $(a_1 \mapsto a_1a_1)^j \ = \ (a_1 \mapsto a_1^{j+1})$. \\ {\bf (2)} \ Let \ $S = {\{a_1^j a_1, a_1^j a_2, \ldots, a_1^j a_i\}}$, for some $1 \leq i \leq k-1$, $0 \leq j$. Then ${\rm id}_S$ is generated by the $k+1$ elements \ $\{(a_1 \mapsto a_1a_1), \ (a_1a_1 \mapsto a_1)\} \ \cup \ $ $\{{\rm id}_{\{a_1a_1, \ a_1a_2, \ \ldots, \ a_1a_i\}} : 1 \leq i \leq k-1\}$. \\ {\bf (3)} \ For all $j \geq 2$: \ \ $(\varepsilon \mapsto a_1^j)(.) \ = \ (a_1 \mapsto a_1a_1)^{j-1} \, \cdot $ $(\varepsilon \mapsto a_1)(.)$. \end{lem} {\bf Proof.} {\bf (1)} We prove by induction that \ $(a_1 \mapsto a_1a_1)^j = (a_1 \mapsto a_1 a_1^j)$ \ for all $j \geq 1$. \\ Indeed, \ $(a_1 \mapsto a_1a_1)^{j+1}(.) = (a_1 \mapsto a_1a_1) \, \cdot$ $(a_1 \mapsto a_1a_1^j)(.)$, \ and by essential restriction this is $\left( \hspace{-.08in} \begin{array}{l|l} a_1 a_1^j & \ a_1 w \ \ \ \ (w \in A^j - \{a_1^j\}) \\ a_1a_1 a_1^j & \ a_1a_1 w \end{array} \hspace{-.08in} \right) \cdot (a_1 \mapsto a_1 a_1^j)(.) $ $ \ = \ (a_1 \mapsto a_1 a_1 a_1^j)(.)$. 
\noindent {\bf (2)} For $S = {\{a_1^j a_1, a_1^j a_2, \ldots, a_1^j a_i\}}$ we have ${\rm id}_S \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l|l|l} a_1 a_1 & a_1a_2 & \ \ldots \ & a_1a_i \\ a_1^ja_1 & a_1^ja_2 & \ \ldots \ & a_1^ja_i \end{array} \hspace{-.08in} \right) \cdot $ $\left( \hspace{-.08in} \begin{array}{l|l|l|l} a_1^j a_1 & a_1^ja_2 & \ \ldots \ & a_1^j a_i \\ a_1a_1 & a_1a_2 & \ \ldots \ & a_1a_i \end{array} \hspace{-.08in} \right)(.)$ \noindent and $\left( \hspace{-.08in} \begin{array}{l|l|l|l} a_1 a_1 & a_1a_2 & \ \ldots \ & a_1a_i \\ a_1^ja_1 & a_1^ja_2 & \ \ldots \ & a_1^ja_i \end{array} \hspace{-.08in} \right) $ $ \ = \ \left( \hspace{-.08in} \begin{array}{l|l|l|l|l|l|l} a_1 a_1 & a_1a_2 & \ \ldots \ & a_1a_i & a_1a_{i+1} & \ \ldots \ & a_1a_k \\ a_1^ja_1 & a_1^ja_2 & \ldots & a_1^ja_i & a_1^ja_{i+1} & \ \ldots \ & a_1^j a_k \end{array} \right) \cdot $ ${\rm id}_{\{a_1a_1,\ a_1a_2,\ \ldots, \ a_1a_i\}}(.)$ $ \ = \ (a_1 \mapsto a_1^j) \cdot $ ${\rm id}_{\{a_1a_1, \ a_1a_2, \ \ldots, \ a_1a_i\}}$ $ \ = \ $ $(a_1 \mapsto a_1a_1)^{j-1} \cdot $ ${\rm id}_{\{a_1a_1, \ a_1a_2, \ \ldots, \ a_1a_i\}}$. \noindent The map id$_{\{a_1a_1\}}$ is redundant as a generator since $(a_1 a_1 \mapsto a_1 a_1) = (a_1a_1 \mapsto a_1) \, $ $(a_1 \mapsto a_1a_1)(.)$. \noindent {\bf (3)} By (1) we have \ $(\varepsilon \mapsto a_1^j) = (a_1 \mapsto a_1^j) \, \cdot$ $ (\varepsilon \mapsto a_1)(.)$, and \ $(a_1 \mapsto a_1^j) = $ $(a_1 \mapsto a_1 a_1)^{j-1}$. \ \ \ $\Box$ \begin{thm} \label{Inv_fin_gen} \ \ The inverse monoid ${\it Inv}_{k,1}$ is finitely generated. \end{thm} {\bf Proof.} Our strategy for finding a finite generating set for ${\it Inv}_{k,1}$ is as follows: We will use the fact that the Thompson-Higman group $G_{k,1}$ is finitely generated. 
Hence, if $\varphi \in {\it Inv}_{k,1}$, \, $g_1, g_2 \in G_{k,1}$, and if $g_2 \varphi g_1$ can be expressed as a product $p$ over a fixed finite set of elements of ${\it Inv}_{k,1}$, then it follows that $\varphi = g_2^{-1} p \, g_1^{-1}$ can also be expressed as a product over a fixed finite set of elements of ${\it Inv}_{k,1}$. We assume that a finite generating set for $G_{k,1}$ has been chosen. For any element $\varphi \in {\it Inv}_{k,1}$ with domain code domC$(\varphi) = P$ and image code imC$(\varphi) = Q$, we distinguish four cases, depending on the maximality or non-maximality of $P$ and $Q$. \noindent (1) If $P$ and $Q$ are both maximal prefix codes then $\varphi \in G_{k,1}$, and we can express $\varphi$ over a fixed finite generating set of $G_{k,1}$. \noindent (2) Assume $P$ and $Q$ are both non-maximal prefix codes. By Lemma \ref{ext_to_max_pref_code} there are finite maximal prefix codes $P',Q'$ such that $P \subset P'$, $Q \subset Q'$, and $|P'| = |Q'|$; and by Lemma \ref{card_max_pref_code}, $|P'| = |Q'| = 1 + (k-1)N$ for some $N \geq 0$. Consider the following maximal prefix code $C$, of cardinality $|P'| = |Q'| = 1 + (k-1)N$: \ \ \ $C \ = \ \bigcup_{r=0}^{N-2} a_1^r (A-\{a_1\}) \ \ \cup \ a_1^{N-1} A$. \noindent The maximal prefix code $C$ is none other than the code $Q_{i,j}$ when $i=k$ and $j=N-1$ \, (introduced in the proof of Lemma \ref{special_pref_code}, Fig.\ 1). The elements $g_1: C \to P'$ and $g_2: Q' \to C$ of $G_{k,1}$ can be chosen so that $\psi = g_2 \varphi g_1(.)$ is a partial identity with ${\rm domC}(\psi) = {\rm imC}(\psi) \subset C$ consisting of the first $|P|$ elements of $C$ in the dictionary order. So, $\psi$ is the identity map restricted to these first $|P|$ elements of $C$, and $\psi$ is undefined on the rest of $C$. To describe ${\rm domC}(\psi) = {\rm imC}(\psi)$ in more detail, let us write $|P| = i + (k-1) \, \ell$, for some $i, \ell$ with $1 \leq i < k$ and $0 \leq \ell \leq N-1$.
Then \ \ \ ${\rm domC}(\psi) = {\rm imC}(\psi) \ = \ $ $a_1^{N-1} A \ \cup \ \bigcup_{r=j+1}^{N-2} a_1^r (A-\{a_1\}) \ \ \cup \ $ $a_1^j \ \{a_2, \ldots, a_i\}$, \noindent where $j = N-1-\ell$. Since \ $\psi = {\rm id}_{{\rm domC}(\psi)}$, we claim: \noindent by maximal essential extension, \ $\psi \ = \ {\rm id}_S$ \ (as elements of ${\it Inv}_{k,1}$), \ \ where \ $S = {\{a_1^j a_1, a_1^j a_2, \ldots, a_1^j a_i\}}$, \noindent with $i, j$ as in the description of ${\rm domC}(\psi) = {\rm imC}(\psi)$ above, i.e., $1 \leq i < k$, \ $N-1 \geq j = N-1-\ell \geq 0$, and $|P| = i + (k-1) \, \ell$. Indeed, if $|P| < k$ then $S$ is just ${\rm domC}(\psi)$, with $i = |P|$, and $\ell=0$ (hence $j = N-1$). If $|P| \geq k$ then the maximum essential extension of $\psi$ will replace the $1+ (k-1) \, \ell$ elements \ $a_1^{N-1} A \ \cup \ \ \bigcup_{r=j+1}^{N-2} a_1^r (A-\{a_1\})$ \ by the single element $a_1^{N-\ell} = a_1^{j+1}$. What remains is the set \ $S \ = \ \{a_1^{j+1}\} \ \cup \ a_1^j \, \{a_2, \ldots, a_i\}$. Finally, by Lemma \ref{special_generation}, ${\rm id}_S$ (where $S = {\{a_1^j a_1, a_1^j a_2, \ldots, a_1^j a_i\}}$) can be generated by the $k+1$ elements \ \ $\{(a_1 \mapsto a_1a_1), \ (a_1a_1 \mapsto a_1)\} \ \cup \ $ $\{ {\rm id}_{\{a_1a_1, \ a_1a_2, \ \ldots, \ a_1a_i\}} : $ $ 1 \leq i \leq k-1 \}$. \noindent (3) Assume $P$ is a maximal prefix code and $Q$ is non-maximal. Let $Q'$ be the finite maximal prefix code obtained by saturating the prefix tree of $Q$. Then $Q \subset Q'$, \ $|Q'| = 1 + (k-1)N'$, and $|P| = 1 + (k-1)N$ for some $N' > N \geq 0$. We consider the maximal prefix codes $C$ and $C'$ as defined in the proof of (2), using $N'$ for defining $C'$. We can choose $g_1: C \to P$ and $g_2: Q' \to C'$ in $G_{k,1}$ so that $\psi = g_2 \varphi g_1(.)$ is the dictionary-order preserving map that maps $C$ to the first $|C|$ elements of $C'$.
So we have ${\rm domC}(\psi) = C$, \ and ${\rm imC}(\psi) = S_0$, where $S_0 \subset C'$ consists of the first $|C|$ elements of $C'$, in dictionary order. \noindent Since $|C| = 1 + (k-1) \, N$, we can describe $S_0$ in more detail by $S_0 \ = \ \bigcup_{r=N'-N}^{N'-2} a_1^r (A-\{a_1\}) \ \ \cup \ a_1^{N'-1} A$. \noindent Next, by essential maximal extension we now obtain \ $\psi \ = \ (\varepsilon \mapsto a_1^{N'-N})$. Indeed, we saw that $|P| = 1 + (k-1) \, N$. If $|P| = 1$ then $P = \{\varepsilon\}$, and $\psi \ = \ (\varepsilon \mapsto a_1^{N'})$. If $|P| \geq k$ then the maximum essential extension of $\psi$ will replace all the elements of $C$ by the single element $\varepsilon$, and it will replace all the elements of $S_0$ by the single element $a_1^{N'-N}$. Finally, by Lemma \ref{special_generation}, $(\varepsilon \mapsto a_1^{N'-N})$ is generated by the two elements $(\varepsilon \mapsto a_1)$ and $(a_1 \mapsto a_1 a_1)$. \noindent (4) The case where $P$ is a non-maximal prefix code and $Q$ is maximal can be derived from case (3) by taking the inverses of the elements from case (3). \ \ \ $\Box$ \begin{thm} \label{M_fin_gen} \ \ The monoid $M_{k,1}$ is finitely generated. \end{thm} {\bf Proof.} Let $\varphi: P \to Q$ be the table of any element of $M_{k,1}$, mapping $P$ onto $Q$, where $P, Q \subset A^*$ are finite prefix codes. The map described by the table is total and surjective, so if $|P| = |Q|$ (and in particular, if $\varphi$ is the empty map) then $\varphi \in {\it Inv}_{k,1}$, hence $\varphi$ can be expressed over the finite generating set of ${\it Inv}_{k,1}$. In the rest of the proof we assume $|P| > |Q|$. The main observation is the following. \noindent {\sf Claim.} \ $\varphi$ can be written as the composition of finitely many elements $\varphi_i \in M_{k,1}$ with tables $P_i \to Q_i$ such that $0 \leq |P_i| - |Q_i| \leq 1$. \noindent {\sf Proof of the Claim:} We use induction on $|P| - |Q|$.
There is nothing to prove when $|P| - |Q| \leq 1$, so we assume now that $|P| - |Q| \geq 2$. If $\varphi(x_1) = \varphi(x_2) = \varphi(x_3) = y_1$ for some $x_1, x_2, x_3 \in P$ (all three being different) and $y_1 \in Q$, then we can write $\varphi$ as a composition $\varphi(.) = \psi_2 \circ \psi_1(.)$, as follows. The map $\psi_1: P \longrightarrow P - \{x_1\}$ is defined by $\psi_1(x_1) = \psi_1(x_2) = x_2$, and acts as the identity everywhere else on $P$. The map $\psi_2: P - \{x_1\} \longrightarrow Q$ is defined by $\psi_2(x_2) = \psi_2(x_3) = y_1$, and acts in the same way as $\varphi$ everywhere else on $P - \{x_1\}$. Then for $\psi_1$ we have \ \ $|P| -|P - \{x_1\}| \ < \ |P| - |Q|$, \ and for $\psi_2$ we have \ \ $|P - \{x_1\}| - |Q| \ < \ |P| - |Q|$. If $\varphi(x_1) = \varphi(x_2) = y_1$ and $\varphi(x_3) = \varphi(x_4) = y_2$ for some $x_1, x_2, x_3, x_4 \in P$ (all four being different) and $y_1, y_2 \in Q$ ($y_1 \neq y_2$), then we can write $\varphi$ as a composition $\varphi(.) = \psi_2 \circ \psi_1(.)$, as follows. First the map $\psi_1: P \longrightarrow P - \{x_1\}$ is defined by $\psi_1(x_1) = \psi_1(x_2) = x_2$, and acts as the identity everywhere else on $P$. Second, the map $\psi_2: P - \{x_1\} \longrightarrow Q$ is defined by $\psi_2(x_2) = y_1$ and $\psi_2(x_3) = \psi_2(x_4) = y_2$, and acts like $\varphi$ everywhere else on $P - \{x_1\}$. Again, for $\psi_1$ we have \ \ $|P| -|P - \{x_1\}| \ < \ |P| - |Q|$ \ and for $\psi_2$ we have \ \ $|P - \{x_1\}| - |Q| \ < \ |P| - |Q|$. \ \ \ \ \ [End, proof of the Claim.] Because of the Claim we now only need to consider elements $\varphi \in M_{k,1}$ with tables $P \to Q$ such that the prefix codes $P, Q$ satisfy $|P| = |Q| + 1$. We denote $P = \{p_1, \ldots, p_n\}$ and $Q = \{ q_1, \ldots, q_{n-1}\}$, with $\varphi(p_j) = q_j$ for $1 \leq j \leq n-1$, and $\varphi(p_{n-1}) = \varphi(p_n) = q_{n-1}$. 
We define the following prefix code $C$ with $|C| = |P|$: \noindent $\bullet$ \ if $|P| = i \leq k$ then \ \ $C \ = \ \{a_1, \ldots, a_i \}$; \ note that $i \geq 2$, since $|P| > |Q| >0$; \noindent $\bullet$ \ if $|P| > k$ then \ \ $C \ = \ \{a_2, \ldots, a_i \} \ \cup \ $ $\bigcup_{r=1}^{j-1} a_1^r (A-\{a_1\}) \ \ \cup \ \ a_1^j A$, \noindent where $i,j$ are such that $|P| = i + (k-1)j$, \ $2 \leq i \leq k$, and $1 \leq j$ (see Fig.\ 1). Let us write $C$ in increasing dictionary order as $C = \{c_1, \ldots, c_n\}$. The last element of $C$ in the dictionary order is thus $c_n = a_i$. We now write $\varphi(.) = \psi_3 \, \psi_2 \, \psi_1(.)$ where $\psi_1$, $\psi_2$, $\psi_3$ are as follows: \\ $\bullet$ \ $\psi_1: P \longrightarrow C$ \ is bijective and is defined by $p_j \mapsto c_j$ for $1 \leq j \leq n$; \\ $\bullet$ \ $\psi_2: C \longrightarrow C - \{ a_i\}$ \ is the identity map on $\{c_1, \ldots, c_{n-1} \}$, and $\psi_2(c_n) = c_{n-1}$. \\ $\bullet$ \ $\psi_3: C - \{ a_i\} \longrightarrow Q$ \ is bijective and is defined by $c_j \mapsto q_j$ for $1 \leq j \leq n-1$. It follows that $\psi_1$ and $\psi_3$ can be expressed over the finite generating set of ${\it Inv}_{k,1}$. On the other hand, $\psi_2$ has a maximum essential extension, as follows. \noindent $\bullet$ \ If \ $2 \leq |P| = i \leq k$ \ then $\psi_2 \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l|l|l|l} a_1 & \ \ldots \ & a_{i-2} & a_{i-1} & a_i \\ a_1 & \ \ldots \ & a_{i-2} & a_{i-1} & a_{i-1} \end{array} \hspace{-.08in} \right) \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l} {\rm id}_{\{a_1, \ \ldots, \ a_{i-1}\}} & a_i \\ \ & a_{i-1} \end{array} \hspace{-.08in} \right)$. \noindent $\bullet$ \ If \ $|P| = i + (k-1)j > k$ \ and if $i > 2$ then, after maximal essential extension, $\psi_2$ also becomes \ $\max(\psi_2) \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l} {\rm id}_{\{a_1, \ \ldots, \ a_{i-1}\}} & a_i \\ \ & a_{i-1} \end{array} \hspace{-.08in} \right)$. 
\noindent $\bullet$ \ If \ $|P| = i + (k-1)j > k$ \ and if $i = 2$ then, after essential extensions, $\max(\psi_2) \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l|l|l|l|l} a_1a_1 & \ \ldots \ & a_1 a_{k-2} & a_1a_{k-1} & a_1a_k & a_2 \\ a_1a_1 & \ \ldots \ & a_1 a_{k-2} & a_1a_{k-1} & a_1a_k & a_1a_k \end{array} \hspace{-.08in} \right) \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l} {\rm id}_{a_1A} & a_2 \\ \ & a_1a_k \end{array} \hspace{-.08in} \right) \ = \ $ $\left( \hspace{-.08in} \begin{array}{l|l} a_1 & a_2 \\ a_1 & a_1a_k \end{array} \hspace{-.08in} \right) $. \noindent In summary, we have factored $\varphi$ over a finite set of generators of ${\it Inv}_{k,1}$ and $k$ additional generators in $M_{k,1}$. \ \ \ $\Box$ \noindent {\bf Factorization algorithm:} The proofs of Theorems \ref{Inv_fin_gen} and \ref{M_fin_gen} are constructive; they provide algorithms that, given $\varphi \in {\it Inv}_{k,1}$ or $ \in M_{k,1}$, output a factorization of $\varphi$ over the finite generating set of ${\it Inv}_{k,1}$, respectively $M_{k,1}$. In \cite{Hig74} (p.\ 49) Higman introduces a four-element generating set for $G_{2,1}$; a special property of these generators is that their domain codes and their image codes only contain words of length $\leq 2$, and that $\big| \, |\gamma(x)| - |x| \, \big| \leq 1$ for every generator $\gamma$ and every $x \in {\rm domC}(\gamma)$. The generators in the finite generating set of $M_{k,1}$ that we introduced above also have those properties. Thus we obtain: \begin{cor} \label{lengths_in_domCimC} \ The monoid $M_{2,1}$ has a finite generating set such that all the generators have the following property: The domain codes and the image codes only contain words of length $\leq 2$, and \ $\big| \, |\gamma(x)| - |x| \, \big| \leq 1$ for every generator $\gamma$ and every $x \in {\rm domC}(\gamma)$. \ \ \ \ \ $\Box$ \end{cor} \noindent For reference we list an explicit {\em finite generating set for} $M_{2,1}$. 
It consists, first, of the Higman generators of $G_{2,1}$ (\cite{Hig74} p.\ 49): {\sc Not} $=$ $\left( \hspace{-.08in} \begin{array}{l|l} 0 & 1 \\ 1 & 0 \end{array} \hspace{-.08in} \right)$, \ \ $(01 \leftrightarrow 1) = $ $\left( \hspace{-.08in} \begin{array}{r|r|r} 00 & 01 & 1 \\ 00 & 1 & 01 \end{array} \hspace{-.08in} \right)$, \ \ $(0 \leftrightarrow 10) = $ $\left( \hspace{-.08in} \begin{array}{r|r|r} 0 & 10 & 11 \\ 10 & 0 & 11 \end{array} \hspace{-.08in} \right)$, \ \ and $\tau_{1,2} = $ $\left( \hspace{-.08in} \begin{array}{r|r|r|r} 00 & 01 & 10 & 11 \\ 00 & 10 & 01 & 11 \end{array} \hspace{-.08in} \right)$; \noindent the additional generators for ${\it Inv}_{2,1}$: $(\varepsilon \to 0)$, \ \ $(0 \to \varepsilon)$, \ \ $(0 \to 00)$, \ \ $(00 \to 0)$; \noindent the additional generators for $M_{2,1}$: $\left( \hspace{-.08in} \begin{array}{l|l} 0 & 1 \\ 0 & 0 \end{array} \hspace{-.08in} \right)$, \ and \ \ $\left( \hspace{-.08in} \begin{array}{r|r} 0 & 1 \\ 0 & 01 \end{array} \hspace{-.08in} \right)$ $ = \left( \hspace{-.08in} \begin{array}{r|r|r} 00 & 01 & 1 \\ 00 & 01 & 01 \end{array} \hspace{-.08in} \right)$ . \noindent Observe that Higman's generators of $G_{k,1}$ (in \cite{Hig74} p.\ 27) have domain and image codes with at most 3 internal vertices. The additional generators that we introduced for ${\it Inv}_{k,1}$ and $M_{k,1}$ have domain and image codes with at most 2 internal vertices. \noindent The following problem remains open: Are ${\it Inv}_{k,1}$ and $M_{k,1}$ finitely {\bf presented}? \section{The word problem of the Thompson-Higman monoids} We saw that the Thompson-Higman monoid $M_{k,1}$ is finitely generated. We want to show now that the word problem of $M_{k,1}$ over any finite generating set can be decided in deterministic polynomial time, i.e., it belongs to the complexity class {\sf P}.
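Elements of $M_{2,1}$, such as the generators listed above, act on words by prefix replacement, and this action is easy to simulate. A small Python sketch (representing a table as a dictionary; the names {\tt NOT} and {\tt MERGE} are ours):

```python
def apply_morphism(table, w):
    """Apply a right-ideal morphism, given by its finite table {p: q}
    (domain code -> image), to a word w: it maps p u to q u.
    Returns None if w has no prefix in the domain code."""
    for p, q in table.items():
        # at most one entry matches, since the domain code is a prefix code
        if w.startswith(p):
            return q + w[len(p):]
    return None

# Two of the M_{2,1} generators listed above, as tables:
NOT = {"0": "1", "1": "0"}
MERGE = {"0": "0", "1": "0"}   # the non-injective generator (0 -> 0, 1 -> 0)
```

Since the domain code is a prefix code, the loop finds at most one matching entry, so the result does not depend on the iteration order of the dictionary.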
\footnote{\, This section has been revised in depth, to correct errors.} In \cite{BiThomps} it was shown that the word problem of the Thompson-Higman group $G_{k,1}$ over any finite generating set is in {\sf P}. In fact, it is in the parallel complexity class {\sf AC}$_1$ \cite{BiThomps}, and it is co-context-free \cite{LehSchw}. In \cite{BiCoNP} it was shown that the word problem of the Thompson-Higman group $G_{k,1}$ over the infinite generating set \ $\Gamma_{k,1} \cup \{ \tau_{i,i+1} : i > 0\}$ \ is {\sf coNP}-complete, where $\Gamma_{k,1}$ is any finite generating set of $G_{k,1}$; the position transposition $\tau_{i,i+1} \in G_{k,1}$ has \ ${\rm domC}(\tau_{i,i+1}) = {\rm imC}(\tau_{i,i+1}) = A^{i+1}$, and is defined by \ $u \alpha \beta \mapsto u \beta \alpha$ \ for all letters $\alpha, \beta \in A$ and all words $u \in A^{i-1}$. We will see below that the word problem of $M_{k,1}$ over $\Gamma_{k,1} \cup \{ \tau_{i,i+1} : i > 0\}$ is also {\sf coNP}-complete, where $\Gamma_{k,1}$ is any finite generating set of $M_{k,1}$. \subsection{The image code formula} Our proof in \cite{BiThomps} that the word problem of the Thompson-Higman group $G_{k,1}$ (over any finite generating set) is in {\sf P}, was based on the following fact (the {\em table size formula}): \ \ \ \ \ $\forall \varphi, \psi \in G_{k,1}$: \ \ $\|\psi \circ \varphi\| \leq \|\psi\| + \|\varphi\|$. \noindent Here $\|\varphi\|$ denotes the {\em table size} of $\varphi$, i.e., the cardinality of ${\rm domC}(\varphi)$. See Proposition 3.5, Theorem 3.8, and Proposition 4.2 in \cite{BiThomps}. In $M_{k,1}$ the above formula does not hold in general, as the following example shows. We give some definitions and notation first. \begin{defn} For any finite set $S \subseteq A^*$ we denote the length of the longest word in $S$ by $\ell(S)$. The cardinality of $S$ is denoted by $|S|$. The {\em table} of a right-ideal morphism $\varphi$ is the set $\{(x, \varphi(x)) : x \in {\rm domC}(\varphi)\}$. 
\end{defn} \begin{pro} \label{exp_table} \ For every $n>0$ there exists $\Phi_n = \varphi_2^{n-1} \varphi_1 \in M_{2,1}$ (for some $\varphi_1, \varphi_2 \in M_{2,1}$) with the following properties: The table sizes are $\|\Phi_n\| = 2^n$, and \ $\|\varphi_2\| = \|\varphi_1\| = 2$. So, $\|\Phi_n\|$ is exponentially larger than \, $(n-1) \cdot \|\varphi_2\| + \|\varphi_1\|$. Hence the table size formula does not hold in $M_{2,1}$. The word lengths of $\varphi_1, \varphi_2$, and $\Phi_n$ (over the finite generating set $\Gamma$ of $M_{2,1}$ from Section 3 in \cite{BiMonTh}) satisfy \ $|\varphi_1|_{_{\Gamma}} =1$, \ $|\varphi_2|_{_{\Gamma}} \leq 2$, and $|\Phi_n|_{_{\Gamma}} < 2 n$. So the table size of $\Phi_n$ is exponentially larger than its word length: \ $\|\Phi_n\| \, > \, \sqrt{2}^{\, |\Phi_n|_{_{\Gamma}}}$. \end{pro} {\bf Proof.} Consider $\varphi_1, \varphi_2 \in M_{2,1}$ given by the tables \ $\varphi_1 = \{(0 \mapsto 0), \ (1 \mapsto 0)\}$, and \ $\varphi_2 = \{(0 0 \mapsto 0), \ (0 1 \mapsto 0)\}$. One verifies that \ $\Phi_n = \varphi_2^{n-1} \circ \varphi_1(.)$ \ sends every bitstring of length $n$ to the word $0$; its domain code is $\{0,1\}^n$, its image code is $\{0\}$, and it is its maximum essential extension. Thus, $\|\varphi_2^{n-1} \circ \varphi_1\| \ = \ 2^n$, whereas \ $(n-1) \cdot \|\varphi_2\| + \|\varphi_1\| = 2 n$. Also, \ $\varphi_2(.) \, = \, $ $(0 \mapsto 0, 1 \mapsto 0) \cdot (0 \mapsto \varepsilon)$, so $|\varphi_1|_{_{\Gamma}} = 1$, \ $|\varphi_2|_{_{\Gamma}} \leq 2$, and $|\Phi_n|_{_{\Gamma}} \leq 2n-1$; hence $\|\Phi_n\| > 2^{|\Phi_n|_{_{\Gamma}}/2}$. \ \ \ $\Box$ We will use the following facts that are easy to prove: If $R \subset A^*$ is a right ideal and $\varphi$ is a right-ideal morphism then $\varphi(R)$ and $\varphi^{-1}(R)$ are right ideals. The intersection and the union of right ideals are right ideals. We also need the following result. 
\noindent {\bf (Lemma 3.3 of \cite{BiThomps})} \ {\em If $P,Q,S \subseteq A^*$ are such that $PA^* \cap QA^* = SA^*$, and if $S$ is a prefix code then $S \subseteq P \cup Q$. } \ \ \ $\Box$ \begin{lem} \label{Im_of_rightideal} \ Let $\theta$ be a right-ideal morphism, and assume $SA^* \subseteq {\rm Dom}(\theta)$, where $S \subset A^*$ is a finite prefix code. Then there is a finite prefix code $R \subset A^*$ such that $\, \theta(SA^*) = RA^* \,$ and $\, R \subseteq \theta(S)$. \end{lem} {\bf Proof.} Since $\theta$ is a right-ideal morphism we have \ $\theta(SA^*) = \theta(S) \ A^*$. Since $\theta(S)$ might not be a prefix code we take \ $R = \{r \in \theta(S) : r$ is minimal (shortest) in the prefix order within $\theta(S)\}$. Then $R$ is a prefix code that has the required properties. \ \ \ $\Box$ \begin{lem} \label{InvIm_of_rightideal}\footnote{\, This Lemma was incorrect in the earlier versions of this paper and in \cite{BiMonTh}.} \ For any right-ideal morphism $\theta$ and any prefix code $Z \subset A^*$, $\theta^{-1}(Z)$ is a prefix code. In particular, $\theta^{-1}({\rm imC}(\theta))$ is a prefix code, and $\, \theta^{-1}({\rm imC}(\theta)) \subseteq {\rm domC}(\theta)$. There exist right-ideal morphisms $\theta$ with finite domain code, such that $\, \theta^{-1}({\rm imC}(\theta)) \neq {\rm domC}(\theta)$. \end{lem} {\bf Proof.} First, $\theta^{-1}(Z)$ is a prefix code. Indeed, if we had $x_1 = x_2u$ for some $x_1, x_2 \in \theta^{-1}(Z)$ with $u$ non-empty, then $\theta(x_1) = \theta(x_2) \ u$, with $\theta(x_1), \theta(x_2) \in Z$. This would contradict the assumption that $Z$ is a prefix code. Second, let $Q = {\rm imC}(\theta)$; we show that $\theta^{-1}(Q) \subseteq {\rm domC}(\theta)$. Indeed, if $x \in \theta^{-1}(Q)$, then $x = pw$ for some $p \in {\rm domC}(\theta)$ and $w \in A^*$. Hence, $\theta(x) = \theta(p) \, w$, and $\theta(x) \in Q$.
Since $\theta(p) \, w \in Q$ and $\theta(p) \in Q A^*$, we have $\theta(p) \, w = \theta(p)$ (since $Q$ is a prefix code). So $w$ is empty, hence $x = pw = p \in {\rm domC}(\theta)$. Example: Let $A = \{0,1\}$, and let $\theta$ be the right-ideal morphism defined by ${\rm domC}(\theta) = \{01, 1\}$, ${\rm imC}(\theta) = \{\varepsilon\}$, and $\, \theta(01) = 0$, $\, \theta(1) = \varepsilon$. Then, $\theta^{-1}({\rm imC}(\theta)) = \{1\} \neq {\rm domC}(\theta)$. \ \ \ $\Box$ \noindent The following generalizes the ``table size formula'' of $G_{k,1}$ to the monoid $M_{k,1}$. \begin{thm} \label{sum_of_imC} {\bf (Generalized image code formulas).} \footnote{\, This Theorem was incorrect in the previous versions and in \cite{BiMonTh}; this is a corrected (and expanded) version.} \\ Let $\varphi_i$ be right-ideal morphisms with finite domain codes, for $i = 1, 2, \ldots, n$. Then \noindent {\bf (1)} \ \ \ \ $\big|{\rm imC}(\varphi_n \circ \ \ldots \ \circ \varphi_1)\big|$ $ \ \leq \ $ $|{\rm imC}(\varphi_1)| \ + \ $ $\sum_{i=2}^n |\varphi_i({\rm domC}(\varphi_i))|$, \noindent {\bf (2)} \ \ \ \ $\ell \big({\rm domC}(\varphi_n \circ \ \ldots \ \circ \varphi_1)\big)$ $ \ \leq \ $ $\sum_{i=1}^n \ell ({\rm domC}(\varphi_i))$, \noindent {\bf (3)} \ \ \ \ $\ell \big(\varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \circ \ \ldots \ \circ \varphi_1)) \big)$ $ \ \leq \ $ $\sum_{i=1}^n \ell(\varphi_i({\rm domC}(\varphi_i)))$, \noindent {\bf (4)} \ \ \ \ $\ell \big({\rm imC}(\varphi_n \circ \ \ldots \ \circ \varphi_1) \big)$ $ \ \leq \ $ $\ell({\rm imC}(\varphi_1)) \ + \ $ $\sum_{i=2}^n \ell(\varphi_i({\rm domC}(\varphi_i)))$, \noindent {\bf (5)} \ \ \ \ $\big|\varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \circ \ \ldots \ \circ \varphi_1)) \big|$ $ \ \leq \ $ $\sum_{i=1}^n |\varphi_i({\rm domC}(\varphi_i))|$, \ and \hspace{.25in} $\varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \circ \ \ldots \, $ $\circ$ $\varphi_1))$ $ \ \subseteq \ \ $ $\bigcup_{i=1}^n$ $\varphi_n \ldots
\varphi_i({\rm domC}(\varphi_i))$. \end{thm} {\bf Proof.} \ Let $P_i = {\rm domC}(\varphi_i)$ and $Q_i = {\rm imC}(\varphi_i)$. \noindent (1) \ The proof is similar to the proof of Proposition 3.5 in \cite{BiThomps}. We have \ ${\rm Dom}(\varphi_2 \circ \varphi_1) = $ $\varphi_1^{-1}(Q_1A^* \cap P_2A^*)$ \ and \ ${\rm Im}(\varphi_2 \circ \varphi_1) = \varphi_2(Q_1A^* \cap P_2A^*)$. So the following maps are total and onto on the indicated sets: \ \ \ \ \ \ $\varphi_1^{-1}(Q_1A^* \cap P_2A^*) \ \stackrel{\varphi_1}{\longrightarrow}$ $ \ Q_1A^* \cap P_2A^* \ \stackrel{\varphi_2}{\longrightarrow} \ $ $\varphi_2(Q_1A^* \cap P_2A^*)$. \noindent By Lemma 3.3 of \cite{BiThomps} (quoted above) we have $Q_1A^* \cap P_2A^* = SA^*$ for some finite prefix code $S$ with $S \subseteq Q_1 \cup P_2$. Moreover, by Lemma \ref{Im_of_rightideal} we have $\varphi_2(SA^*) = R_2A^*$ for some finite prefix code $R_2$ such that $R_2 \subseteq \varphi_2(S)$. Now, since $S \subseteq Q_1 \cup P_2$ we have \ $R_2 \subseteq \varphi_2(S) \subseteq $ $\varphi_2(Q_1) \, \cup \, \varphi_2(P_2)$. Thus, \ $|{\rm imC}(\varphi_2 \circ \varphi_1)| = $ $|R_2| \leq |\varphi_2(P_2)| + |\varphi_2(Q_1)|$. Since $|\varphi_2(Q_1)| \le |Q_1|$, we have $|R_2| \leq |\varphi_2(P_2)| + |Q_1|$. By induction for $n > 2$, $|{\rm imC}(\varphi_n$ $\circ$ $\varphi_{n-1} \circ \ \ldots \ \circ \varphi_1)|$ $\le$ $|\varphi_n({\rm domC}(\varphi_n))|$ $+$ $|{\rm imC}(\varphi_{n-1} \circ \ \ldots \ \circ \varphi_1)|$ $\le$ $|\varphi_n({\rm domC}(\varphi_n))|$ $+$ $\sum_{i=2}^{n-1} |\varphi_i({\rm domC}(\varphi_i))| \ + \ |{\rm imC}(\varphi_1)|$ . \noindent (2) \ We prove the formula when $n = 2$; the general formula then follows immediately by induction. Let $x \in {\rm domC}(\varphi_2 \circ \varphi_1)$; then $\varphi_1(x)$ is defined, hence $x = p_1 u$ for some $p_1 \in P_1$, $u \in A^*$. And $\varphi_2$ is defined on $\varphi_1(x) = \varphi_1(p_1) \, u$, so $\varphi_1(x) \in P_2 A^* = {\rm Dom}(\varphi_2)$. 
Hence there exist $p_2 \in P_2$ and $v \in A^*$ such that \noindent $(\star)$ \hspace{.5in} $\varphi_1(p_1) \, u = p_2 v$ \ $\in \varphi_1(P_1) \, A^* \cap P_2 A^*$. \noindent It follows that $u$ and $v$ are suffix-comparable. \noindent {\sf Claim.} \ {\it The words $u$ and $v$ in $(\star)$ satisfy: $\, u = \varepsilon$, or $v = \varepsilon$. } \noindent Proof of the Claim: Since $u$ and $v$ are suffix-comparable, let us first consider the case where $v$ is a suffix of $u$, i.e., $u = t v$ for some $t \in A^*$. Then $\varphi_1(x) = \varphi_1(p_1) \, t v = p_2 v$, hence $\varphi_1(p_1) \, t = p_2$, hence $\varphi_2$ is defined on $\varphi_1(p_1) \, t = p_2$. So, $\varphi_2 \circ \varphi_1$ is defined on $p_1 t$, i.e., $p_1 t \in {\rm domC}(\varphi_2 \circ \varphi_1)$. But we also have $x = p_1 t v \in {\rm domC}(\varphi_2 \circ \varphi_1)$. Since ${\rm domC}(\varphi_2 \circ \varphi_1)$ is a prefix code, it follows that $v = \varepsilon$. Let us next consider the other case, namely where $u$ is a suffix of $v$, i.e., $v = s u$ for some $s \in A^*$. Then $\varphi_1(x) = \varphi_1(p_1) \, u = p_2 s u$, hence $\varphi_1(p_1) = p_2 s$, hence $\varphi_2$ is defined on $\varphi_1(p_1) = p_2 s$, hence $p_1 \in {\rm domC}(\varphi_2 \circ \varphi_1)$. But we also have $x = p_1 u \in {\rm domC}(\varphi_2 \circ \varphi_1)$. Since ${\rm domC}(\varphi_2 \circ \varphi_1)$ is a prefix code, it follows that $u = \varepsilon$. \ \ \ [This proves the Claim.] Now for $x \in {\rm domC}(\varphi_2 \circ \varphi_1)$ we have $x = p_1 u$, and $\varphi_1(p_1) \, u = p_2 v$, hence $|x| = |p_1| + |u|$ and $|\varphi_1(p_1)| + |u| = |p_2| + |v|$. By the Claim, either $|u| = 0$ or $|v| = 0$. If $\, |u| = 0 \,$ then $\, |x| = |p_1| \le \ell({\rm domC}(\varphi_1))$. If $\, |v| = 0 \,$ then $\, |x| = |p_1| + |u|$ $=$ $|p_1| + |p_2| + |v| - |\varphi_1(p_1)|$ $=$ $|p_1| + |p_2| - |\varphi_1(p_1)| \le |p_1| + |p_2|$ $\le$ $\ell({\rm domC}(\varphi_1)) + \ell({\rm domC}(\varphi_2))$. 
\noindent (3) \ As in the proof of (2) we only need to consider $n = 2$. Let $x \in$ ${\rm domC}(\varphi_2 \varphi_1)$, hence $\varphi_2 \varphi_1(x) \in$ $\varphi_2 \varphi_1({\rm domC}(\varphi_2 \varphi_1))$. By $(\star)$ (and with the notation of the proof of (2)) we have $\varphi_2 \varphi_1(x) = \varphi_2(\varphi_1(p_1) \, u)$ $=$ $\varphi_2(p_2) \, v \in \varphi_2(\varphi_1(P_1) \, A^* \cap P_2 A^*) =$ ${\rm Im}(\varphi_2 \varphi_1)$. By the reasoning of the proof of (2), we have two cases: If $|u| = 0$ then $|v| = |\varphi_1(p_1)| + |u| - |p_2| = |\varphi_1(p_1)| - |p_2|$ $\le$ $|\varphi_1(p_1)|$. Hence, $|\varphi_2 \varphi_1(x)| = |\varphi_2(p_2)| + |v| \le$ $|\varphi_2(p_2)| + |\varphi_1(p_1)| \ \le \ $ $\ell(\varphi_2({\rm domC}(\varphi_2)))$ $+$ $\ell(\varphi_1({\rm domC}(\varphi_1)))$. If $|v| = 0$ then $\varphi_2 \varphi_1(x) = \varphi_2(p_2)$, hence \ $|\varphi_2 \varphi_1(x)| = |\varphi_2(p_2)| \le $ $\ell(\varphi_2({\rm domC}(\varphi_2)))$. \noindent (4) \ We first consider the case $n=2$. As we saw in the proof of (1), ${\rm imC}(\varphi_2 \varphi_1) = R_2$ where $R_2 \subseteq \varphi_2(S)$, and where $S$ is a prefix code such that $S \subseteq Q_1 \cup P_2$. Hence $R_2 \subseteq$ $\varphi_2(Q_1) \cup \varphi_2(P_2)$. Hence for any $z \in R_2$, either $z \in \varphi_2(P_2)$ or $z \in \varphi_2(Q_1)$. If $z \in \varphi_2(P_2)$ then $|z| \leq \ell(\varphi_2(P_2))$. If $z \in \varphi_2(Q_1)$, then $z = \varphi_2(q_1)$ for some $q_1 \in Q_1 \cap P_2 A^*$, so $q_1 = p_2 u$ for some $p_2 \in P_2$ and $u \in A^*$. We have $q_1 \in P_2 A^*$ ($= {\rm Dom}(\varphi_2)$), so $q_1 \in {\rm Dom}(\varphi_2)$. Now $|z| = |\varphi_2(p_2)| + |u|$, and $|u| = |q_1| - |p_2| \le |q_1| \le \ell({\rm imC}(\varphi_1))$. Thus, $|z| \le |\varphi_2(p_2)| + \ell({\rm imC}(\varphi_1))$ $\le$ $\ell(\varphi_2({\rm domC}(\varphi_2)))$ $+$ $\ell({\rm imC}(\varphi_1))$. The formula for $n > 2$ now follows by induction in the same way as in the proof of (1).
\noindent (5) \ We first prove the formula for $n = 2$. As we saw in the proof of (2), if $x \in {\rm domC}(\varphi_2 \varphi_1)$ then there exist $u, v \in A^*$, $p_1 \in P_1$, $p_2 \in P_2$, such that $x = p_1 u$ and $\varphi_1(x) = \varphi_1(p_1) \, u = p_2 v$. Moreover, by the Claim in (2) we have $u = \varepsilon$ or $v = \varepsilon$. Also, $\varphi_2 \varphi_1(x) = \varphi_2(\varphi_1(p_1) \, u) = $ $\varphi_2(p_2) \, v$. If $v = \varepsilon$ then $\varphi_2 \varphi_1(x) = \varphi_2(p_2)$ $\in \varphi_2({\rm domC}(\varphi_2))$. If $u = \varepsilon$ then $\varphi_2 \varphi_1(x) = $ $\varphi_2 \varphi_1(p_1) \in \varphi_2 \varphi_1({\rm domC}(\varphi_1))$. Thus we proved the following fact: \ \ \ \ \ $\varphi_2 \varphi_1({\rm domC}(\varphi_2 \varphi_1))$ $\, \subseteq \ $ $\varphi_2({\rm domC}(\varphi_2))$ $ \ \cup \ $ $\varphi_2 \varphi_1({\rm domC}(\varphi_1))$. \noindent Now, since $|\varphi_2 \varphi_1({\rm domC}(\varphi_1))| \le $ $|\varphi_1({\rm domC}(\varphi_1))|$, the fact implies that $\, |\varphi_2 \varphi_1({\rm domC}(\varphi_2 \varphi_1))|$ $ \ \le \ $ $|\varphi_2({\rm domC}(\varphi_2))|$ $+$ $|\varphi_1({\rm domC}(\varphi_1))|$. By induction we immediately obtain $\big|\varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \circ \ \ldots \ \circ \varphi_1)) \big|$ $ \ \leq \ $ $\sum_{i=1}^n |\varphi_i({\rm domC}(\varphi_i))|$, \ and $\varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \circ \ \ldots \, $ $\circ$ $\varphi_1))$ $ \ \subseteq \ \ $ $\varphi_n({\rm domC}(\varphi_n))$ $ \ \cup \ $ $\varphi_n \varphi_{n-1}({\rm domC}(\varphi_{n-1}))$ $ \ \cup \ \ \ldots \ \ \ldots \ \ \ldots$ \ \ \ \ \ \ $\cup \ $ $\varphi_n \ldots \varphi_i({\rm domC}(\varphi_i))$ $ \ \cup \ $ $ \ \ldots \ \ \ldots \ \ \ldots \ $ $ \ \cup \ $ $\varphi_n \ldots \varphi_i \ldots \varphi_1({\rm domC}(\varphi_1))$.
\ \ \ \ \ $\Box$ \noindent {\bf Remarks.} Obviously, ${\rm Dom}(\varphi_2 \varphi_1)$ $\subseteq$ ${\rm Dom}(\varphi_1)$; however, in infinitely many cases (in ``most'' cases), ${\rm domC}(\varphi_2 \varphi_1)$ $\not\subseteq$ ${\rm domC}(\varphi_1)$. Instead, we have the more complicated formula of Theorem \ref{sum_of_imC}(5). By Prop.\ \ref{exp_table}, we cannot have a formula for $|{\rm domC}(\varphi_n \ldots \varphi_1)|$ similar in nature to the formulas in Theorem \ref{sum_of_imC}. \noindent The following class of right-ideal morphisms plays an important role here (as well as in Section 5 of \cite{BiCongr}, where it was introduced). \footnote{\, Def.\ 4.5A, Theorem 4.5B, and Cor.\ 4.5C are new in this version.} \noindent {\bf Definition 4.5A (Normal).} {\it A right-ideal morphism $\varphi$ is called {\rm normal} iff $\, \varphi({\rm domC}(\varphi)) = {\rm imC}(\varphi)$. } \noindent By Lemma 5.7 of \cite{BiCongr} we also have: $\, \varphi$ is normal iff $\varphi^{-1}({\rm imC}(\varphi)) = {\rm domC}(\varphi)$. In other words, $\varphi$ is normal iff $\varphi$ is entirely determined by the way it maps ${\rm domC}(\varphi)$ onto ${\rm imC}(\varphi)$. For example, {\it every injective right-ideal morphism is normal} (by Lemma 5.1 in \cite{BiCongr}). The finite generating set $\Gamma$ of $M_{k,1}$, constructed in Section 3, consists entirely of normal right-ideal morphisms. On the other hand, the composition of two normal right-ideal morphisms does not always result in a normal morphism, as is shown by the following example: $\, {\rm domC}(f) = \{0,1\} \, $ and $\, f(0) = 0$, $f(1) = 10$; $\, {\rm domC}(g) = \{0,1\} \, $ and $\, g(0) = g(1) = 0$; so $f$ and $g$ are normal. But ${\rm domC}(gf) = \{0, 1\}$ and $gf(0) = 0$, $\, gf(1) = 00$; so $gf$ is not normal (for more details, see Prop.\ 5.8 in \cite{BiCongr}). The next result (Theorem 4.5B) shows that every element of $M_{k,1}$ can be represented by a normal right-ideal morphism.
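As a quick illustration, the $f$, $g$, $gf$ example above can be checked mechanically. The following Python sketch (the table representation, the {\tt depth} bound, and all names are our own, not part of the paper) encodes a right-ideal morphism by its table on the domain code, composes two tables up to a fixed word length, and tests normality using the fact that $\varphi$ is normal iff $\varphi({\rm domC}(\varphi))$ is a prefix code:

```python
from itertools import product

def apply_morphism(table, w):
    """Apply the right-ideal morphism with table {p: phi(p)} to w (None if undefined)."""
    for p, q in table.items():
        if w.startswith(p):          # domC is a prefix code, so at most one p matches
            return q + w[len(p):]
    return None

def compose(g, f, depth=6, alphabet="01"):
    """Table of g o f on its domain code, restricted to words of length <= depth."""
    table = {}
    for n in range(depth + 1):
        for t in product(alphabet, repeat=n):
            w = "".join(t)
            if any(w.startswith(p) for p in table):   # w already lies in the right ideal
                continue
            y = apply_morphism(f, w)
            z = apply_morphism(g, y) if y is not None else None
            if z is not None:
                table[w] = z
    return table

def is_normal(table):
    """phi is normal iff phi(domC(phi)) is a prefix code (then it equals imC(phi))."""
    img = list(table.values())
    return not any(a != b and b.startswith(a) for a in img for b in img)

f = {"0": "0", "1": "10"}
g = {"0": "0", "1": "0"}
gf = compose(g, f)                    # {"0": "0", "1": "00"}, as in the example
assert is_normal(f) and is_normal(g) and not is_normal(gf)
```

The final assertion reproduces the example: $gf(0) = 0$ and $gf(1) = 00$ are prefix-comparable, so $gf(\mathrm{domC}(gf)) \neq \mathrm{imC}(gf)$.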
So one can say informally that ``from the point of view of $M_{k,1}$, all right-ideal morphisms are normal''. For proving this we need some definitions. We always assume $|A| \ge 2$. \noindent {\bf Definitions and notation.} \ {\it If $x_1, x_2 \in A^*$ are such that $x_1$ is a prefix of $x_2$, i.e., $x_2 \in x_1 A^*$, we denote this by $x_1 \le_{\rm pref} x_2$. For $Z \subseteq A^*$, the set of {\em prefixes of $Z$} is $\, {\sf pref}(Z) = \{v \in A^* : v \le_{\rm pref} z$ for some $z \in Z\}$. For a set $X \subseteq A^*$ and a word $v \in A^*$, $v^{-1} X$ denotes the set $\{s \in A^* : v s \in X\}$. The {\rm tree of} $A^*$ has root $\varepsilon$, vertex set $A^*$, and edge set $\, \{(w, wa) : w \in A^*, \, a \in A\}$. A {\rm subtree} of the tree of $A^*$ has as root any string $r \in A^*$, and as vertex set any subset $V \subseteq r A^*$, such that the following holds for all $v \in V$ and $u \in A^*$: \ $r \le_{\rm pref} u \le_{\rm pref} v$ implies $u \in V$. } \noindent The following is a slight generalization of the classical notion of a prefix tree. \noindent {\bf Definition (Prefix tree).} \ {\it Let $Z \subseteq A^*$, and let $q \in A^*$. The {\rm prefix tree} $T(q, Z)$ is the subtree of the tree of $A^*$ with root $q$ and vertex set \ $V_{q, Z} = \{v \in A^*: q \le_{\rm pref} v$, and $v \le_{\rm pref} z$ for some $z \in Z\}$. } \noindent {\bf Remark.} Let $L$ be the set of leaves of $T(q, Z)$; then $L$ and $q^{-1} L$ are prefix codes. \noindent {\bf Definition (Saturated tree).} \ {\it A subtree $T$ of the tree of $A^*$ is {\rm saturated} iff for every vertex $v$ of $T$ we have: $v$ has no child in $T$ (i.e., $v$ is a leaf), or $v$ has $|A|$ children in $T$. } \noindent {\bf Definition (Tree saturation).} \ {\it Let $T$ be a subtree of the tree of $A^*$, with root $q$, set of vertices $V$, and set of leaves $L$. The {\rm saturation} of $T$ is the smallest (under inclusion) {\rm saturated} subtree of the tree of $A^*$ with root $q$, that contains $T$. 
In other words, if $T$ is just $\{q\}$, it is its own saturation; otherwise the saturation has root $q$ and has vertex set $\, V \cup (V - L) \cdot A$. We denote the saturation of $T$ by ${\rm s}T$. } \noindent {\bf Remark.} {\bf (1)} The prefix tree $T(q, Z)$ and its saturation have the same {\it depth} (i.e., length of a longest path from the root). Every leaf of $T(q, Z)$ is also a leaf of ${\rm s}T(q, Z)$, but unless $T(q, Z)$ is already saturated, ${\rm s}T(q, Z)$ has more leaves than $T(q, Z)$. The non-leaf vertices of $T(q, Z)$ and ${\rm s}T(q, Z)$ are the same. \noindent {\bf (2)} The number of leaves in the saturated tree ${\rm s}T(q, Z)$ is $\, < |V_{q, Z}| \cdot |A|$. \noindent {\bf (3)} Let $L$ be the leaf set of the saturated tree ${\rm s}T(q, Z)$; if $Z$ is finite then $q^{-1} L$ is a {\it maximal} prefix code. \noindent {\bf Theorem 4.5B (Equivalent} \textbf{\textsl{normal}} {\bf morphism).} {\it For every right-ideal morphism $\varphi$ with finite domain code there exists a {\rm normal} right-ideal morphism $\varphi_0$ with finite domain code, such that $\varphi = \varphi_0$ in $M_{k,1}$. Moreover (where $P = {\rm domC}(\varphi)$), $|{\rm imC}(\varphi_0)| \ = \ |\varphi_0({\rm domC}(\varphi_0))|$ $\ \le \ $ $|A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$, $|{\rm domC}(\varphi_0)|$ $\ \le \ $ $|P| \cdot |A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$, $\ell({\rm imC}(\varphi_0)) \, = \, \ell(\varphi_0({\rm domC}(\varphi_0)))$ $\, = \, \ell(\varphi({\rm domC}(\varphi)))$, $\ell({\rm domC}(\varphi_0)) \ \le \ $ $\ell({\rm domC}(\varphi)) \, + \, \ell(\varphi({\rm domC}(\varphi)))$. } \noindent {\bf Proof.} Let $P = {\rm domC}(\varphi)$, $Q = {\rm imC}(\varphi)$, $P_0 = {\rm domC}(\varphi_0)$, $Q_0 = {\rm imC}(\varphi_0)$. For each $p \in P$, let $\varphi(p) \, W_{\varphi(p)}$ be the set of leaves of the saturated tree $\, {\rm s}T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big)$. By Remark (3) above, $W_{\varphi(p)}$ is a finite maximal prefix code.
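These tree notions are easy to make concrete. The Python sketch below (the function name is ours; it assumes $Z$ is finite, and words of $Z$ that do not extend $q$ are ignored) computes the vertex set $V_{q,Z}$, its internal vertices, and from them the leaf set of the saturation ${\rm s}T(q, Z)$:

```python
def saturated_leaves(q, Z, alphabet="01"):
    """Leaf set of the saturated prefix tree sT(q, Z), per the definitions above."""
    # Vertex set V_{q,Z}: all v with q <=_pref v <=_pref z for some z in Z.
    V = {z[:i] for z in Z if z.startswith(q) for i in range(len(q), len(z) + 1)}
    leaves = {v for v in V if not any(v + a in V for a in alphabet)}
    internal = V - leaves
    # Saturation adds every missing child of an internal vertex: sV = V u (V - L).A
    sV = V | {v + a for v in internal for a in alphabet}
    return sV - internal          # vertices of sT that have no child in sT

# Remark (3): q^{-1} L is a maximal prefix code; here 0^{-1}{00, 01} = {0, 1}.
assert saturated_leaves("0", {"0", "01"}) == {"00", "01"}
```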
Now we define $\varphi_0$ as follows: \ \ \ \ \ \ $\varphi_0$ is the restriction of $\varphi$ to $ \ \bigcup_{p \in P} \, p \, W_{\varphi(p)} \, A^*$. \noindent Let us verify that $\varphi_0$ has the required properties. Since $P$ and $W_{\varphi(p)}$ are finite prefix codes, $\bigcup_{p \in P} p \, W_{\varphi(p)}$ is a finite prefix code. So, \ \ \ \ \ \ ${\rm domC}(\varphi_0) \ = \ \bigcup_{p \in P} \, p \, W_{\varphi(p)}$. \noindent Since each $W_{\varphi(p)}$ is a maximal prefix code, the right ideal $\, \bigcup_{p \in P} p \, W_{\varphi(p)} \, A^*$ is essential in the right ideal $P A^*$; hence $\varphi$ and $\varphi_0$ are equal as elements of $M_{k,1}$. Finally, let us show that $\varphi_0({\rm domC}(\varphi_0))$ is a prefix code. We have \ \ \ \ \ \ $\varphi_0({\rm domC}(\varphi_0))$ $ \ = \ $ $\bigcup_{p \in P} \, \varphi(p) \ W_{\varphi(p)}$, \noindent which is the set of leaves of the union of the saturated prefix trees $\, {\rm s}T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big)$, for $p$ ranging over $P$. For each $p \in P$, the leaves of $\, {\rm s}T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big)$ form the prefix code $\varphi(p) \, W_{\varphi(p)}$. For $p_1 \neq p_2$ in $P$, if $\varphi(p_1)$ is a prefix of $\varphi(p_2)$ then the leaves of $\, {\rm s}T\big(\varphi(p_2), \, \varphi(P) \cap \varphi(p_2) \, A^*\big)$ are a subset of the leaves of $\, {\rm s}T\big(\varphi(p_1), \, \varphi(P) \cap \varphi(p_1) \, A^*\big)$, so the union of these two leaf sets is just the leaf set of $\, {\rm s}T\big(\varphi(p_1), \, \varphi(P) \cap \varphi(p_1) \, A^*\big)$; a similar thing happens if $\varphi(p_2)$ is a prefix of $\varphi(p_1)$. So in $\, \bigcup_{p \in P} \, \varphi(p) \ W_{\varphi(p)} \, $ we can ignore elements $p$ of $P$ for which $\varphi(p)$ is a strict prefix of another element of $\varphi(P)$. 
If $\varphi(p_1)$ and $\varphi(p_2)$ are not prefix-comparable, then the leaves of $\, {\rm s}T\big(\varphi(p_i), \, \varphi(P) \cap \varphi(p_i) \, A^*\big)$ have $\varphi(p_i)$ as a prefix, so these two trees have leaf sets that are two-by-two prefix-incomparable (namely the sets $\varphi(p_1) \, W_{\varphi(p_1)}$ and $\varphi(p_2) \, W_{\varphi(p_2)}$). The union of prefix codes that are two-by-two prefix-incomparable forms a prefix code; hence, $\, \bigcup_{p \in P} \, \varphi(p) \ W_{\varphi(p)} \, $ is a prefix code. Now, since $\varphi_0({\rm domC}(\varphi_0))$ is a prefix code it follows that $\, {\rm imC}(\varphi_0) = \varphi_0({\rm domC}(\varphi_0))$, so $\varphi_0$ is normal. This proves the first part of the theorem. Let us prove the formulas. We saw that $\, {\rm imC}(\varphi_0) = \varphi_0({\rm domC}(\varphi_0))$ $=$ $\, \bigcup_{p \in P} \, \varphi(p) \ W_{\varphi(p)}$, and $\varphi(p) \ W_{\varphi(p)}$ is the leaf set of the saturated tree $\, {\rm s}T\big(\varphi(p), \varphi(P) \cap \varphi(p) A^*\big)$. By the definition of prefix trees, the vertices of all the (non-saturated) trees $\, T\big(\varphi(p), \varphi(P) \cap \varphi(p) A^*\big) \,$ are subsets of ${\sf pref}(\varphi(P))$. By Remark (2) above, the number of leaves in a saturated tree $\, {\rm s}T\big(\varphi(p), \varphi(P) \cap \varphi(p) A^*\big) \,$ is at most $|A|$ times the number of vertices of the non-saturated tree. Hence, $|{\rm imC}(\varphi_0)| \le |A| \cdot |{\sf pref}(\varphi(P))|$. Moreover, for any finite $Z \subset A^*$, $\, |{\sf pref}(Z)| \le (1 + \ell(Z)) \cdot |Z|$, hence, $|{\rm imC}(\varphi_0)| \le$ $|A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$. We have $\, {\rm domC}(\varphi_0)$ $=$ $\bigcup_{p \in P} p \, W_{\varphi(p)}$, and $\varphi(p) \, W_{\varphi(p)}$ is the leaf set of $\, {\rm s}T\big(\varphi(p), \varphi(P) \cap \varphi(p) A^*\big)$. 
Hence by the same reasoning as for $|{\rm imC}(\varphi_0)|$: $\, |W_{\varphi(p)}| = |\varphi(p) \, W_{\varphi(p)}|$ $\, \le \,$ $|A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$. Hence, $|{\rm domC}(\varphi_0)|$ $\, \le \,$ $\sum_{p \in P} |W_{\varphi(p)}|$ $\, \le \,$ $\sum_{p \in P} \, |A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$ $\, \le \, $ $|P| \cdot |A| \cdot (\ell(\varphi(P)) + 1) \cdot |\varphi(P)|$. We have $\, {\rm imC}(\varphi_0) = \bigcup_{p \in P} \varphi(p) \, W_{\varphi(p)}$, and $\varphi(p) \, W_{\varphi(p)}$ is the leaf set of $\, {\rm s}T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big)$. Hence, $\ell({\rm imC}(\varphi_0)) \le \ell(\varphi(P))$; indeed, tree saturation does not increase the depth of a tree, and the depth of $\, T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big) \, $ is $\, \le \ell(\varphi(P))$. We have $\, {\rm domC}(\varphi_0) = \bigcup_{p \in P} p \, W_{\varphi(p)}$. And $\ell(W_{\varphi(p)}) \le \ell(\varphi(p) \, W_{\varphi(p)})$ $\le$ $\ell(\varphi(P))$, since $\varphi(p) \, W_{\varphi(p)}$ is the leaf set of $\, {\rm s}T\big(\varphi(p), \, \varphi(P) \cap \varphi(p) \, A^*\big)$. Hence, for every $x \in {\rm domC}(\varphi_0)$ we have $x \in p W_{\varphi(p)}$ for some $p \in P$, so $\, |x| \le |p| + \ell(W_{\varphi(p)})$. Therefore, $\ell({\rm domC}(\varphi_0)) \, \le \, \ell(P) + \ell(\varphi(P))$. \ \ \ $\Box$ \noindent Theorem 4.5B tells us that as far as $M_{k,1}$ is concerned, all right-ideal morphisms are normal. \footnote{\, The concept of normal morphism and Theorem 4.5 enable us to rehabilitate the {\it image code formula} (which was incorrect as stated in Theorem 4.5 of \cite{BiMonTh}, but which is correct when one adds the hypothesis that the morphisms $\varphi_i$ are {\em normal}).} \noindent {\bf Corollary 4.5C (Image code formula).} \\ {\it Let $\varphi_i$ be a right-ideal morphism (for $i = 1, \ldots, n$), and let $\, \Phi = \varphi_n \circ \ \ldots \ \circ \varphi_1$. 
\noindent {\bf (1)} If $\varphi_i$ is {\rm normal} for $2 \le i \le n$, then \hspace{.3in} $|{\rm imC}(\Phi)|$ $ \ \leq \ $ $\sum_{i=1}^n |{\rm imC}(\varphi_i)|$ . \noindent {\bf (2)} If all $\varphi_i$ are {\rm normal} (for $1 \le i \le n$), then \hspace{.3in} $\ell({\rm domC}(\Phi) \cup {\rm imC}(\Phi)) \ \leq \ $ $\sum_{i=1}^n \ell({\rm domC}(\varphi_i) \cup {\rm imC}(\varphi_i))$ . } \noindent {\bf Proof.} {\bf (1)} follows immediately from Theorem \ref{sum_of_imC}(1), and {\bf (2)} follows from \ref{sum_of_imC}(2) and \ref{sum_of_imC}(4). \ \ \ $\Box$ \noindent {\bf Counter-examples:} \noindent {\bf (1)} The following shows that the image code formula of Corollary 4.5C(1) is wrong in some examples when $\varphi_2$ is not normal (but $\varphi_1$ is normal). Let $A = \{0,1\}$, $n \ge 2$, and $\varphi_1 \ = \ \{(01, 00), \, (00, 01), \, (10, 1011), \, (11, 1100)\}$, \ and $\varphi_2 \ = \ \{(00u0, 000u1) : u \in \{0,1\}^{n-1}\}$ $\ \cup \ $ $\{(01v0, 001v1) : v \in \{0,1\}^{n-1}\}$ $\ \cup \ $ $\{(10, 000), \, (11, 001)\}$. \noindent So, ${\rm imC}(\varphi_1) = \{00, 01, 1011, 1100\}$, and ${\rm imC}(\varphi_2) = \{000, 001\}$, hence $|{\rm imC}(\varphi_1)| + |{\rm imC}(\varphi_2)| = 6$. Note that the right-ideal morphisms $\varphi_1$ and $\varphi_2$ are in maximally extended form. Now, $\varphi_2 \circ \varphi_1: \ 01u0 \mapsto 00 u0 \mapsto 000 u1$ and $\varphi_2 \circ \varphi_1: \ 00 v0 \mapsto 01 v0 \mapsto 001 v1$, for all $u, v \in \{0,1\}^{n-1}$; and $\varphi_2 \circ \varphi_1: \ 10 \mapsto 1011 \mapsto 00011$, $\varphi_2 \circ \varphi_1: \ 11 \mapsto 1100 \mapsto 00100$. Note that $\varphi_2 \circ \varphi_1$ is in maximally extended form. Then $\, {\rm imC}(\varphi_2 \circ \varphi_1) = \{00011, 00100\}$ $\cup$ $000 \, \{00, 01,11\} \, \{0,1\}^{n-2}$ $\cup$ $001 \, \{00, 01,11\} \, \{0,1\}^{n-2}$. Thus when $n \ge 2$: \ $2 + 6 \cdot 2^{n-2} = |{\rm imC}(\varphi_2 \circ \varphi_1)|$ $ \ \not\le \ $ $|{\rm imC}(\varphi_1)| + |{\rm imC}(\varphi_2)| = 6$. 
\ \ \ $\Box$ \noindent {\bf (2)} The following shows that the formula of Corollary 4.5C(2) is wrong in some examples when $\varphi_2$ is not normal (but $\varphi_1$ is normal). We abbreviate $\ell({\rm domC}(\varphi) \cup {\rm imC}(\varphi))$ by $\ell(\varphi)$. Let $A = \{0,1\}$, $n \ge 2$, and $\varphi_1 \ = \ \{(0, 0^n)\}$, \ and $\varphi_2 \ = \ \{(0, 0^{n+1}), \, (1, 0)\}$. \noindent So, $\ell(\varphi_1) = n$, and $\ell(\varphi_2) = 1$ since ${\rm imC}(\varphi_2) = \{0\}$. Now, $\varphi_2 \circ \varphi_1 = \{(0, 0^{2n})\}$. Thus when $n \ge 2$: \ $2n = \ell(\varphi_2 \circ \varphi_1) \ \not\le \ $ $\ell(\varphi_2) + \ell(\varphi_1) = n + 1$. \ \ \ $\Box$ For elements of ${\it Inv}_{k,1}$ the image code has the same size as the domain code, which is also the table size. Moreover, injective right-ideal morphisms are normal, thus Corollary 4.5C implies: \begin{cor} \ For all injective right-ideal morphisms $\varphi, \psi$: \ \ $\|\psi \circ \varphi\| \leq \|\psi\| + \|\varphi\|$. \ \ \ \ \ $\Box$ \end{cor} In other words, the table size formula holds for ${\it Inv}_{k,1}$. Another immediate consequence of Theorem \ref{sum_of_imC} is the following. \begin{cor} \label{repeated_sum_of_imC} \ Let $\varphi_i$ be {\em normal} right-ideal morphisms for $i = 1, \ldots, n$, and let $c_1, c_2$ be positive constants. \noindent {\bf (1)} \ \ \ \ If $\, |{\rm imC}(\varphi_i)| \leq c_1$ for all $i$ then \ $|{\rm imC}(\varphi_n \circ \ldots \circ \varphi_1)| \ \le \ c_1 \, n$. \noindent {\bf (2)} \ \ \ \ If $\, \ell({\rm imC}(\varphi_i)) \le c_2$ for all $i$ then \ $\ell({\rm imC}(\varphi_n \circ \ldots \circ \varphi_1)) \ \le \ c_2 \, n$. \ \ \ \ \ \ \ $\Box$ \end{cor} The position transposition $\tau_{i,j}$ (with $0<i<j$) is, by definition, the partial permutation of $A^*$ which transposes the letters at positions $i$ and $j$; $\tau_{i,j}$ is undefined on words of length $< j$. 
More precisely, we have \ ${\rm domC}(\tau_{i,j}) = {\rm imC}(\tau_{i,j}) = A^j$, and \ $u \alpha v \beta \mapsto u \beta v \alpha$ \ for all letters $\alpha, \beta \in A$ and all words $u \in A^{i-1}$ and $v \in A^{j-i-1}$. In this form, $\tau_{i,j}$ is equal to its maximum essential extension. \begin{cor} \label{wordlength_tau} \ The word-length of $\tau_{i,j}$ over any finite generating set of $M_{k,1}$ is exponential. \end{cor} {\bf Proof.} We have $|{\rm imC}(\tau_{i,j})| = k^j$. The Corollary follows then from Corollary \ref{repeated_sum_of_imC}(1). \ \ \ $\Box$ \subsection{Some algorithmic problems about right-ideal morphisms} We consider several problems about right-ideal morphisms of $A^*$ and show that they have deterministic polynomial-time algorithms. We also show that the word problem of $M_{k,1}$ over $\Gamma_{k,1} \cup $ $\{ \tau_{i,i+1} : 0 < i \}$ is {\sf coNP}-complete, where $\Gamma_{k,1}$ is any finite generating set of $M_{k,1}$. We saw that $\Gamma_{k,1}$ can be chosen so as to consist of normal right-ideal morphisms. \begin{lem} \label{essent_intersect_algo} \ There are deterministic polynomial time algorithms for the following problems. \\ {\rm Input:} Two finite prefix codes $P_1, P_2 \subset A^*$, given explicitly by lists of words. \\ {\rm Output 1:} The finite prefix code $\Pi \subset A^*$ such that $\Pi A^* = P_1A^* \cap P_2A^*$, where $\Pi$ is produced explicitly as a list of words. \\ {\rm Question 2:} Is $P_1A^* \cap P_2A^*$ essential in $P_1A^*$ (or in $P_2A^*$, or in both)? \end{lem} {\bf Proof.} We saw already that $\Pi$ exists and $\Pi \subseteq P_1 \cup P_2$; see Lemma 3.3 of \cite{BiThomps} (quoted before Lemma \ref{Im_of_rightideal} above). \noindent {\sf Algorithm for Output 1:}\ Since $\Pi \subseteq P_1 \cup P_2$, we just need to search for the elements of $\Pi$ within $P_1 \cup P_2$. For each $x \in P_1$ we check whether $x$ also belongs to $P_2A^*$ (by checking whether any element of $P_2$ is a prefix of $x$). 
Since $P_1$ and $P_2$ are explicitly given as lists, this takes polynomial time. Similarly, for each $x \in P_2$ we check whether $x$ also belongs to $P_1A^*$. Thus, we have computed the set \, $\Pi_1 = (P_1 \cap P_2A^*) \, \cup \, (P_2 \cap P_1A^*)$. Now, $\Pi$ is obtained from $\Pi_1$ by eliminating every word that has another word of $\Pi_1$ as a prefix. Since $\Pi_1$ is explicitly listed, this takes just polynomial time. \noindent {\sf Algorithm for Question 2:} \ We first compute $\Pi$ by the previous algorithm. Next, we check whether every $p_1 \in P_1$ is a prefix of some $r \in \Pi$; since $P_1$ and $\Pi$ are given by explicit lists, this takes just polynomial time. For $P_2$ it is similar. \ \ \ $\Box$ \begin{lem} \label{varphi_of_set} \ The following input-output problem has a deterministic polynomial-time algorithm. \\ $\bullet$ {\rm Input:} A finite set $S \subset A^*$, and $m$ right-ideal morphisms $\psi_j$ for $j = 1, \ldots, m$, where $S$ is given by an explicit list of words, and each $\psi_j$ is given explicitly by the list of pairs of words $\{(x, \psi_j(x)) : x \in {\rm domC}(\psi_j)\}$. \\ $\bullet$ {\rm Output:} The finite set $\, \psi_m \ldots \psi_1(S)$, given explicitly by a list of words. \end{lem} {\bf Proof.} Let $\Psi = \, $ $\psi_m \circ \ \ldots \ \circ \psi_1 \circ {\sf id}_S$. Then $\psi_m \ldots \psi_1(S) = \Psi({\rm domC}(\Psi))$. By Theorem \ref{sum_of_imC}(3) and (5), $\ell(\Psi({\rm domC}(\Psi)))$ $\le$ $\ell(S) + \sum_{i=1}^m \ell(\psi_i({\rm domC}(\psi_i))) \, $ and $\, |\Psi({\rm domC}(\Psi))| \le$ $|S| + \sum_{i=1}^m |\psi_i({\rm domC}(\psi_i))|$. So the size of $\psi_m \ldots \psi_1(S)$, in terms of the number of words and their lengths, is polynomially bounded by the size of the input. We now compute $\psi_m \ldots \psi_1(S)$ by applying $\psi_j$ to $\psi_{j-1} \ldots \psi_1(S)$ for increasing $j$. Since the sizes of the sets remain polynomially bounded, this algorithm takes polynomial time. 
\ \ \ $\Box$ \begin{cor} \label{computing_imC} \ The following input-output problems have deterministic polynomial-time algorithms. \\ $\bullet$ {\rm Input:} A list of $n$ right-ideal morphisms $\varphi_i$ for $i = 1, \ldots, n$, given explicitly by finite tables. \\ $\bullet$ {\rm Output 1:} A finite set, as an explicit list of words, that contains $\, \varphi_n \ldots $ $\varphi_1({\rm domC}(\varphi_n \ldots \varphi_1))$. \\ $\bullet$ {\rm Output 2:} The finite set $\, {\rm imC}(\varphi_n \ldots \varphi_1)$, as an explicit list of words. \end{cor} {\bf Proof.} (1) By Theorem \ref{sum_of_imC}(5) we have $\, \varphi_n \ldots \varphi_1({\rm domC}(\varphi_n \ldots \varphi_1))$ $ \ \subseteq \ \bigcup_{i=1}^n$ $\varphi_n \ldots \varphi_i({\rm domC}(\varphi_i))$. By Lemma \ref{varphi_of_set}, each set $\varphi_n \ldots \varphi_i({\rm domC}(\varphi_i))$, as well as their union, is computable in polynomial time (as an explicit list of words). (2) Let $\Phi = \varphi_n \ldots \varphi_1$, $P_i = {\rm domC}(\varphi_i)$, and $Q_i = {\rm imC}(\varphi_i)$. As in the proof of Theorem \ref{sum_of_imC}(1), \ ${\rm Dom}(\varphi_2 \circ \varphi_1) = $ $\varphi_1^{-1}(Q_1A^* \cap P_2A^*)$, \ ${\rm Im}(\varphi_2 \circ \varphi_1) = \varphi_2(Q_1A^* \cap P_2A^*)$, and the maps $\varphi_1^{-1}(Q_1A^* \cap P_2A^*) \ \stackrel{\varphi_1}{\longrightarrow}$ $ \ Q_1A^* \cap P_2A^* \stackrel{\varphi_2}{\longrightarrow} \ $ $\varphi_2(Q_1A^* \cap P_2A^*)$ \ are total and onto. By Lemma 3.3 of \cite{BiThomps} (mentioned before Theorem \ref{sum_of_imC}) we have $Q_1A^* \cap P_2A^* = S_1 A^*$ for some finite prefix code $S_1$ with $S_1 \subseteq Q_1 \cup P_2$. Moreover, by Lemma \ref{Im_of_rightideal}, $\varphi_2(S_1 A^*) = R_2 A^*$, where $\, {\rm imC}(\varphi_2 \varphi_1) = R_2 \subseteq \varphi_2(S_1)$. 
By induction, for $j \ge 2$ suppose $\, {\rm imC}(\varphi_j \ldots \varphi_1) = R_j$ $\subseteq$ $\varphi_j(S_{j-1})$, where $R_j$ and $S_{j-1}$ are finite prefix codes such that $S_{j-1} \subseteq R_{j-1} \cup P_j$, \ $S_{j-1} A^* = R_{j-1} A^* \cap P_j A^*$, \ $R_j A^* = $ ${\rm Im}(\varphi_j \ldots \varphi_1) = \varphi_j(S_{j-1} A^*)$, and the maps $\varphi_j^{-1}(R_j A^* \cap P_{j+1} A^*)$ $ \ \stackrel{\varphi_j}{\longrightarrow}$ $ \ R_j A^* \cap P_{j+1} A^* \stackrel{\varphi_{j+1}}{\longrightarrow} \ $ $\varphi_{j+1}(R_j A^* \cap P_{j+1} A^*)$ \ are total and onto. Then by Lemma 3.3 of \cite{BiThomps} we again have $R_j A^* \cap P_{j+1} A^* = S_j A^*$ for some finite prefix code $S_j$ with $S_j \subseteq R_j \cup P_{j+1}$; and by Lemma \ref{Im_of_rightideal}, $\varphi_{j+1}(S_j A^*) = R_{j+1} A^*$ for some finite prefix code $R_{j+1}$ such that ${\rm imC}(\varphi_{j+1} \varphi_j \ldots \varphi_1) = $ $R_{j+1} \subseteq \varphi_{j+1}(S_j)$. Applying Theorem \ref{sum_of_imC} to $\, R_i = {\rm imC}(\varphi_i \ \ldots \ \varphi_1)$ for any $i \ge 2$ we have $\, |R_i| \, \le \, |\varphi_i(P_i)| + \ \ldots \ $ $+$ $|\varphi_2(P_2)| + |{\rm imC}(\varphi_1)|$, \ and $\, \ell(R_i) \, \le \, \ell(\varphi_i(P_i)) + \ \ldots \ $ $+$ $\ell(\varphi_2(P_2)) + \ell({\rm imC}(\varphi_1))$. \noindent Since $S_j \subseteq P_j \cup R_{j-1}$, we have $|S_j| \le |P_j| + |R_{j-1}|$ $\, \le \,$ $|P_j| + |\varphi_{j-1}(P_{j-1})| + \ \ldots \ $ $+$ $|\varphi_2(P_2)| + |{\rm imC}(\varphi_1)|$, \ and $\ell(S_j) \le \ell(P_j) + \ell(R_{j-1})$ $\, \le \,$ $\ell(P_j) + \ell(\varphi_{j-1}(P_{j-1})) + \ \ldots \ $ $+$ $\ell(\varphi_2(P_2)) + \ell({\rm imC}(\varphi_1))$. Thus, the size of each $R_i$ and $S_j$ is less than the input size; by input size we mean the total length of all the words in the input lists. 
By Lemma \ref{essent_intersect_algo}, the prefix code $S_j$ is computed from $R_j$ and $P_{j+1}$, as an explicit list, in time $\le T_j(|P_{j+1}| + \ell(P_{j+1}) + |R_j| + \ell(R_j))$, for some polynomial $T_j(.)$. And $R_{j+1}$ is computed from $S_j$ by applying $\varphi_{j+1}$ to $S_j$, and then keeping the elements that do not have a strict prefix in $\varphi_{j+1}(S_j)$. Computing $\varphi_{j+1}(S_j)$ takes at most quadratic time, and finding the prefix code in $\varphi_{j+1}(S_j)$ also takes at most quadratic time. In the end we obtain $R_n = {\rm imC}(\varphi_n \ldots \varphi_1)$ as an explicit list of words. \ \ \ $\Box$ When we consider the word problem of $M_{k,1}$ over a finite generating set, we measure the input size by the length of the input word (with each generator having length 1). But for the word problem of $M_{k,1}$ over the infinite generating set $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$ we count the length of the position transpositions $\tau_{i-1,i}$ as $i$, in the definition of the input size of the word problem. Indeed, at least $\log_2 i$ bits are needed to describe the subscript $i$ of $\tau_{i-1,i}$. Moreover, in the connection between $M_{k,1}$ (over $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$) and circuits, $\tau_{i-1,i}$ is interpreted as the wire-crossing operation of wire number $i$ and wire number $i-1$; this suggests that viewing the size of $\tau_{i-1,i}$ as $i$ is more natural than $\log_2 i$. In any case, we will see next that the word problem of $M_{k,1}$ over $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$ is {\sf coNP}-complete, even if the size of $\tau_{i-1,i}$ is more generously measured as $i$; this is a stronger result than if $\log_2 i$ were used. \begin{thm} \label{wp_in_coNP} {\bf ({\sf coNP}-complete word problem).} \ The word problem of $M_{k,1}$ over the infinite generating set $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$ is {\sf coNP}-complete, where $\Gamma_{k,1}$ is any finite generating set of $M_{k,1}$.
\end{thm} {\bf Proof.} In \cite{BiCoNP} (see also \cite{BiFact}) it was shown that the word problem of the Thompson-Higman group $G_{k,1}$ over $\Gamma_{G_{k,1}} \cup \{ \tau_{i-1,i} : i > 1\}$ \ is {\sf coNP}-complete, where $\Gamma_{G_{k,1}}$ is any finite generating set of $G_{k,1}$. Hence, since the elements of the finite set $\Gamma_{G_{k,1}}$ can be expressed by a finite set of words over $\Gamma_{k,1}$, it follows that the word problem of $M_{k,1}$ over $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$ is {\sf coNP}-hard. We will prove now that the word problem of $M_{k,1}$ over $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$ belongs to {\sf coNP}. The {\it input} of the problem consists of two words $(\rho_m, \ldots, \rho_1)$ and $(\sigma_n, \ldots, \sigma_1)$ over $\Gamma_{k,1} \cup \{\tau_{i-1,i} : i > 1\}$. The {\it input size} is the weighted length of the words $(\rho_m, \ldots, \rho_1)$ and $(\sigma_n, \ldots, \sigma_1)$, where each generator in $\Gamma_{k,1}$ has weight 1, and each generator of the form $\tau_{i-1,i}$ has weight $i$. For every right-ideal morphism $\varphi$ we abbreviate $\ell({\rm domC}(\varphi) \cup \varphi({\rm domC}(\varphi)))$ by $\ell(\varphi)$; recall that for a finite set $X \subset A^*$, $\ell(X)$ denotes the length of a longest word in $X$. Since $\Gamma_{k,1}$ is finite there is a constant $c>0$ such that $c \geq \ell(\gamma)$ for all $\gamma \in \Gamma_{k,1}$; also, for each $\tau_{i-1,i}$ we have $\ell(\tau_{i-1,i}) = i$. By Theorem \ref{sum_of_imC}, the table of $\sigma_n \circ \ldots \circ \sigma_1$ (and more generally, the table of $\sigma_j \circ \ldots \circ \sigma_1$ for any $j$ with $n \geq j \geq 1$) contains only words of length $\leq \sum_{j=1}^n \ell(\sigma_j)$, and similarly for $\rho_m \circ \ldots \circ \rho_1$ (and for $\rho_i \circ \ldots \circ \rho_1$, $m \geq i \geq 1$). 
So all the words in the tables for any $\sigma_j \circ \ldots \circ \sigma_1$ and any $\rho_i \circ \ldots \circ \rho_1$ have lengths that are linearly bounded by the size of the input $\big((\rho_m, \ldots, \rho_1)$, $(\sigma_n, \ldots, \sigma_1)\big)$. \noindent {\sf Claim.} {\it Let $N = \max\{\sum_{i=1}^m \ell(\rho_i), \ $ $\sum_{j=1}^n \ell(\sigma_j)\}$. Then \ $\rho_m \cdot \ldots \cdot \rho_1 \neq \sigma_n \cdot \ldots \cdot \sigma_1$ \ in $M_{k,1}$ \ iff \ there exists $x \in A^N$ such that \ $\rho_m \circ \ldots \circ \rho_1(x)$ $\neq$ $\sigma_n \circ \ldots \circ \sigma_1(x)$. } \noindent {\sf Proof of the Claim:} As we saw above, the tables of $\rho_m \circ \ldots \circ \rho_1$ and $\sigma_n \circ \ldots \circ \sigma_1$ only contain words of length $\leq N$. Thus, restricting $\rho_m \circ \ldots \circ \rho_1$ and $\sigma_n \circ \ldots \circ \sigma_1$ to $A^N A^*$ is an essential restriction, and the resulting tables have domain codes in $A^N$. Therefore, $\rho_m \cdot \ldots \cdot \rho_1$ and $\sigma_n \cdot \ldots \cdot \sigma_1$ are equal (as elements of $M_{k,1}$) \, iff \, $\rho_m \circ \ldots \circ \rho_1$ and $\sigma_n \circ \ldots \circ \sigma_1$ are equal on $A^N$. \ \ \ \ \ [End, Proof of Claim] The number $N$ in the Claim is immediately obtained from the input. Based on the Claim, we obtain a nondeterministic polynomial-time algorithm which decides (nondeterministically) whether there exists $x \in A^N$ such that \ $\rho_m \circ \ldots \circ \rho_1(x)$ $ \neq \sigma_n \circ \ldots \circ \sigma_1(x)$, as follows: The algorithm guesses $x \in A^N$, computes $\rho_m \circ \ldots \circ \rho_1(x)$ and $\sigma_n \circ \ldots \circ \sigma_1(x)$, and checks that they are different words ($\in A^*$) or that one is undefined and the other is a word.
Applying Theorem \ref{sum_of_imC} to $\, \rho_m \circ \ldots \circ \rho_1 \circ {\sf id}_{A^N} \, $ and to $\, \sigma_n \circ \ldots \circ \sigma_1 \circ {\sf id}_{A^N} \, $ shows that \ $|\rho_m \circ \ldots \circ \rho_1(x)| \leq 2N$ \ and \ $|\sigma_n \circ \ldots \circ \sigma_1(x)| \leq 2N$; here $|\rho_m \circ \ldots \circ \rho_1(x)|$ denotes the length of the word $\rho_m \circ \ldots \circ \rho_1(x) \in A^*$, and similarly for $\sigma_n \circ \ldots \circ \sigma_1(x)$. Also by Theorem \ref{sum_of_imC}, all intermediate results (as we successively apply $\rho_i$ for $i = 1, \ldots, m$, or $\sigma_j$ for $j = 1, \ldots, n$) are words of length $\leq 2N$. These successive words are computed by applying the table of $\rho_i$ or $\sigma_j$ (when $\rho_i$ or $\sigma_j$ belong to $\Gamma_{k,1}$), or by directly applying the position transposition $\tau_{h,h-1}$ (if $\rho_i$ or $\sigma_j$ is $\tau_{h,h-1}$). Thus, the output $\rho_m \circ \ldots \circ \rho_1(x)$ (and similarly, $\sigma_n \circ \ldots \circ \sigma_1(x)$) can be computed in polynomial time. \ \ \ $\Box$ \subsection{The word problem of $M_{k,1}$ is in {\sf P}} We now move ahead with the proof of our main result. \begin{thm} \label{wp_M_in_P} {\bf (Word problem in {\sf P}).} \ The word problem of the Thompson-Higman monoids $M_{k,1}$, over any finite generating set, can be decided in deterministic polynomial time. \end{thm} We assume that a fixed finite generating set $\Gamma_{k,1}$ of $M_{k,1}$ has been chosen. The input consists of two sequences $(\rho_m, \ldots, \rho_1)$ and $(\sigma_n, \ldots, \sigma_1)$ over $\Gamma_{k,1}$, and the input size is $m+n$; since $\Gamma_{k,1}$ is finite and fixed, it does not matter whether we choose $m+n$ as input size, or the sum of the lengths of all the words in the tables of the elements of $\Gamma_{k,1}$.
We want to decide in deterministic polynomial time whether, as elements of $M_{k,1}$, the products $\rho_m \cdot \ldots \cdot \rho_1$ and $\sigma_n \cdot \ldots \cdot \sigma_1$ are equal. \noindent {\bf Overview of the proof:} \noindent $\bullet$ \ We compute the finite sets $\, {\rm imC}(\rho_m \circ \ldots \circ \rho_1)$, $\, {\rm imC}(\sigma_n \circ \ldots \circ \sigma_1) \subset A^*$, explicitly described by lists of words. By Corollary \ref{computing_imC} (Output 2) this can be done in polynomial time, and these sets have polynomial size. (Note however that by Proposition \ref{exp_table}, the table sizes of $\rho_m \circ \ldots \circ \rho_1$ or $\sigma_n \circ \ldots \circ \sigma_1$ could be exponential in $m$ or $n$.) \noindent $\bullet$ \ We check whether ${\rm Im}(\rho_m \circ \ldots \circ \rho_1) $ $\cap $ ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$ is essential in ${\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ and in ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$. By Lemma \ref{essent_intersect_algo} (Question 2) this can be done in polynomial time. If the answer is ``no'' then $\rho_m \cdot \ldots \cdot \rho_1$ $\neq$ $\sigma_n \cdot \ldots \cdot \sigma_1$ in $M_{k,1}$, since they don't have a common maximum essential extension. Otherwise, the computation continues. \noindent $\bullet$ \ We compute the finite prefix code $\Pi \subset A^*$ such that $\Pi A^* = {\rm Im}(\rho_m \circ \ldots \circ \rho_1) $ $\cap $ ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$. By Lemma \ref{essent_intersect_algo} (Output 1) this can be done in polynomial time, and $\Pi$ has polynomial size. Hence, the table of ${\sf id}_{\Pi A^*}$ can be computed in polynomial time. \noindent $\bullet$ \ We restrict $\rho_m \circ \ldots \circ \rho_1$ and $\sigma_n \circ \ldots \circ \sigma_1$ in such a way that their images are in $\Pi A^*$. 
In other words, we replace them by $\rho =$ ${\sf id}_{\Pi A^*} \circ \rho_m \circ \ldots \circ \rho_1$, respectively $\sigma = $ ${\sf id}_{\Pi A^*} \circ \sigma_n \circ \ldots \circ \sigma_1$. Since $\Pi A^*$ is essential in ${\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ and in ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$, we have $\rho = \rho_m \cdot \ldots \cdot \rho_1$ in $M_{k,1}$, and $\sigma = \sigma_n \cdot \ldots \cdot \sigma_1$ in $M_{k,1}$. So, $\rho_m \cdot \ldots \cdot \rho_1 = \sigma_n \cdot \ldots \cdot \sigma_1$ in $M_{k,1}$ iff $\rho = \sigma$ in $M_{k,1}$. \noindent $\bullet$ \ We compute finite sets $R_1, R_2 \subset A^*$, such that $\rho({\rm domC}(\rho)) \subseteq R_1$ and $\sigma({\rm domC}(\sigma)) \subseteq R_2$. Since $\rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma)) \subseteq \Pi A^*$, we can pick $R_1, R_2$ so that $R_1 \cup R_2 \subseteq \Pi A^*$. By Corollary \ref{computing_imC} (Output 1), the sets $R_1, R_2$ can be computed as explicit lists in polynomial time. Let $R = R_1 \cup R_2$. \noindent $\bullet$ \ We note that $\rho = \sigma$ in $M_{k,1}$ iff for all $r \in \rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$: $\, \rho^{-1}(r) = \sigma^{-1}(r)$. This holds iff for all $r \in R$: $\, \rho^{-1}(r) = \sigma^{-1}(r)$. \noindent $\bullet$ \ For every $r \in R$ we construct a deterministic finite automaton (DFA) accepting the finite set $\rho^{-1}(r) \subset A^*$, and a DFA accepting the finite set $\sigma^{-1}(r) \subset A^*$. By Corollary \ref{iterated_inv_image_acyclicDFA} this can be done in polynomial time, and the DFAs have polynomial size. (The finite sets $\rho^{-1}(r)$ and $\sigma^{-1}(r)$ themselves could have exponential size.) Note that ${\rm domC}(\rho) \subseteq \rho^{-1}(\rho({\rm domC}(\rho)))$ $\subseteq \rho^{-1}(R)$, and similarly for $\sigma$. 
Note that usually, ${\rm domC}(\rho)\not\subseteq \rho^{-1}({\rm imC}(\rho)) \, $ (since $\rho$ is not normal in general), and similarly for $\sigma$; so we need to use $\rho({\rm domC}(\rho))$, and not just ${\rm imC}(\rho)$. \noindent $\bullet$ \ For every $r \in R$ we check whether the DFA for $\rho^{-1}(r)$ and the DFA for $\sigma^{-1}(r)$ are equivalent. By classical automata theory, equivalence of DFAs can be checked in polynomial time. \noindent [End of Overview.] \noindent {\bf Automata -- notation and facts:} In the following, DFA stands for {\em deterministic finite automaton}. The language accepted by a DFA ${\cal A}$ is denoted by ${\cal L}({\cal A})$. A DFA is a structure $(S, A, \delta, s_0, F)$ where $S$ is the set of states, $A$ is the input alphabet, $s_0 \in S$ is the start state, $F \subseteq S$ is the set of accept states, and $\delta: S \times A \to S$ is the next-state function; in general, $\delta$ is a partial function (by ``function'' we always mean partial function). We extend the definition of $\delta$ to a function $S \times A^* \to S$ by defining $\delta(s,w)$ to be the state that the DFA reaches from $s$ after reading $w$ (for any $w \in A^*$ and $s \in S$). See \cite{HU,LewisPapad} for background on finite automata. A DFA is called {\em acyclic} iff its underlying directed graph has no directed cycle. It is easy to prove that a language $L \subseteq A^*$ is finite iff $L$ is accepted by an acyclic DFA. Moreover, $L$ is a finite prefix code iff $L$ is accepted by an acyclic DFA that has a single accept state (take the prefix tree of the prefix code, with the leaves as accept states, then glue all the leaves together into a single accept state). By the {\em size} of a DFA ${\cal A}$ we mean the number of states, $|S|$; we denote this by ${\sf size}({\cal A})$. 
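The conventions above can be made concrete. In the following sketch (our own illustrative representation, not taken from the paper) a DFA is a triple of start state, accept-state set, and dict-of-dicts partial next-state function, and $\delta$ is extended to words:

```python
def delta_star(delta, s, w):
    """Extend the partial next-state function delta: S x A -> S to words;
    returns None when the DFA gets stuck (delta is a partial function)."""
    for c in w:
        s = delta.get(s, {}).get(c)
        if s is None:
            return None
    return s

def accepts(dfa, w):
    s0, F, delta = dfa  # start state, accept states, transitions
    return delta_star(delta, s0, w) in F

# Acyclic DFA with a single accept state, accepting the prefix code {aa, ab};
# its underlying directed graph has no directed cycle, so L(dfa) is finite.
dfa = (0, {2}, {0: {"a": 1}, 1: {"a": 2, "b": 2}})
print([w for w in ["aa", "ab", "ba", "a"] if accepts(dfa, w)])  # -> ['aa', 'ab']
```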
For a finite set $P \subseteq A^*$ we denote the length of the longest words in $P$ by $\ell(P)$, and we define the {\em total length} of $P$ by $\, \Sigma(P) = \sum_{x \in P} |x|$; obviously, $\Sigma(P) \le |P| \cdot \ell(P)$. For a language $L \subseteq A^*$ and a partial function $\Phi: A^* \to A^*$, we define the inverse image of $L$ under $\Phi$ by \, $\Phi^{-1}(L) = \{ x \in A^* : \Phi(x) \in L\}$. For $L \subseteq A^*$ we denote the set of all {\it strict} prefixes of the words in $L$ by {\sf spref}$(L)$; precisely, ${\sf spref}(L) = \{x \in A^* : (\exists w \in L)[ \, x \le_{\rm pref} w$ and $x \neq w \,]\}$. The reason why we use acyclic DFAs to describe finite sets is that a finite set can be exponentially larger than the number of states of a DFA that accepts it; e.g., $A^n$ is accepted by an acyclic DFA with $n+1$ states. This conciseness plays a crucial role in our polynomial-time algorithm for the word problem of $M_{k,1}$. \begin{lem} \label{inv_image_acyclicDFA} \ Let ${\cal A}$ be an acyclic DFA with a single accept state. Let $\varphi$ be a {\em normal} right-ideal morphism, with $\, {\rm domC}(\varphi) \neq \{\varepsilon\} \,$ and $\, {\rm imC}(\varphi) \neq \{\varepsilon\}$. Then $\varphi^{-1}({\cal L}({\cal A}))$ is accepted by a one-accept-state acyclic DFA $\varphi^{-1}({\cal A})$ whose number of states is $\, {\sf size}(\varphi^{-1}({\cal A}))$ $ < $ ${\sf size}({\cal A}) + \Sigma({\rm domC}(\varphi))$. The transition table of the DFA $\varphi^{-1}({\cal A})$ can be constructed deterministically in polynomial time, based on the transition table of ${\cal A}$ and the table of $\varphi$. \end{lem} {\bf Proof.} If $\, \varphi^{-1}({\cal L}({\cal A})) = \varnothing$ then ${\sf size}(\varphi^{-1}({\cal A})) = 0$, so the result is trivial. Let us assume now that $\, \varphi^{-1}({\cal L}({\cal A})) \neq \varnothing$. 
Let ${\cal A} = (S, A, \delta, s_0, \{s_A\})$ where $s_A$ is the single accept state; $s_A$ has no out-going edges (they would be useless). For any set $X \subseteq A^*$ and any state $s \in S$ we denote $\{\delta(s,x) : x \in X\}$ by $\delta(s, X)$. Let $P = {\rm domC}(\varphi)$ and $Q = {\rm imC}(\varphi)$. Since ${\cal A}$ is acyclic, its state set $S$ can be partitioned into $\delta(s_0, {\sf spref}(Q))$ and $\delta(s_0, QA^*)$. Since $Q \neq \{\varepsilon\}$, the block $\delta(s_0, {\sf spref}(Q))$ contains $s_0$, so the block is non-empty. The block $\delta(s_0, QA^*)$ is non-empty because of the assumption $\varphi^{-1}({\cal L}({\cal A})) \neq \varnothing$, which implies ${\cal L}({\cal A}) \cap Q A^* \neq \varnothing$. Since ${\cal L}({\cal A})$ is a prefix code and $\varphi$ is a right-ideal morphism, $\varphi^{-1}({\cal L}({\cal A}))$ is a prefix code. To accept $\varphi^{-1}({\cal L}({\cal A}))$ we define an acyclic DFA, called $\varphi^{-1}({\cal A})$, as follows: \noindent $\bullet$ \ State set of $\varphi^{-1}({\cal A})$: \ ${\sf spref}(P) \ \cup \ \delta(s_0, QA^*)$; start state: $\varepsilon$, i.e., the root of the prefix tree of $P$ (since $P \neq \{\varepsilon\}$, $\, \varepsilon \in{\sf spref}(P)$); accept state: the accept state $s_A$ of ${\cal A}$. \noindent $\bullet$ \ State-transition function $\delta_1$ of $\varphi^{-1}({\cal A})$: For every $r \in {\sf spref}(P)$ and $a \in A$ such that $ra \in {\sf spref}(P)$: \ \ $\delta_1(r, a) = ra$. For every $r \in {\sf spref}(P)$ and $a \in A$ such that $ra \in P$: \ \ $\delta_1(r, a) = \delta(s_0, \varphi(ra))$. For every $s \in \delta(s_0, QA^*)$: \ \ $\delta_1(s, a) = \delta(s, a)$. \noindent It follows immediately from this definition that for all $p \in P$: \ $\delta_1(\varepsilon, p) = \delta(s_0, \varphi(p))$. The construction of $\varphi^{-1}({\cal A})$ assumes that $\varphi$ maps $P$ onto $Q$, i.e., it uses the assumption that $\varphi$ is normal. 
As usual, ``function'' means partial function, so $\delta(.,.)$ and $\delta_1(.,.)$ need not be defined on every state-letter pair. The DFA $\varphi^{-1}({\cal A})$ can be pictured as being constructed as follows: The DFA has two parts. The first part is the prefix tree of $P$, but with the leaves left out (and with edges to leaves left dangling). The second part is the DFA ${\cal A}$ restricted to the state subset $\delta(s_0, QA^*)$. The two parts are glued together by connecting any dangling edge, originally pointing to a leaf $p \in P$, to the state $\, \delta(s_0, \varphi(p)) \in \delta(s_0, QA^*)$. The description of $\varphi^{-1}({\cal A})$ constitutes a deterministic polynomial-time algorithm for constructing the transition table of $\varphi^{-1}({\cal A})$, based on the transition table of ${\cal A}$ and on the table of $\varphi$. By the construction, the number of states of $\varphi^{-1}({\cal A})$ is $\, < {\sf size}({\cal A}) + \Sigma(P)$. We now prove that the DFA $\varphi^{-1}({\cal A})$ accepts exactly $\varphi^{-1}({\cal L}({\cal A}))$; i.e., \ $\varphi^{-1}({\cal L}({\cal A})) = {\cal L}(\varphi^{-1}({\cal A}))$. \noindent $[\subseteq]$ \ Consider any $y \in {\cal L}({\cal A})$ such that $\varphi^{-1}(y) \neq \varnothing$. We want to show that $\varphi^{-1}({\cal A})$ accepts all the words in $\varphi^{-1}(y)$. Since $\varphi^{-1}(y) \neq \varnothing$ we have $y \in {\rm Im}(\varphi)$, hence $y = qw$ for some strings $q \in Q = {\rm imC}(\varphi)$ and $w \in A^*$. Since $Q$ is a prefix code, $q$ and $w$ are uniquely determined by $y$. Moreover, since $y \in {\cal L}({\cal A})$ it follows that $y$ has an accepting path in ${\cal A}$ of the form \ \ \ $s_0 \ \stackrel{q}{\longrightarrow} \ \delta(s_0,q) \ $ $ \stackrel{w}{\longrightarrow} \ s_A$. \noindent For every $x \in \varphi^{-1}(y)$ we have $x \in {\rm Dom}(\varphi) = P A^*$, hence $x = pv$ for some strings $p \in P$ and $v \in A^*$. So $\varphi(x) = \varphi(p) \ v$.
We also have $\varphi(x) = y = qw$, hence $\varphi(p)$ and $q$ are prefix-comparable. Therefore, $\varphi(p) = q$, since $Q$ is a prefix code and since $\varphi(p) \in Q$ (by normality of $\varphi$); hence $v = w$. Thus every $x \in \varphi^{-1}(y)$ has the form $pw$ for some string $p \in \varphi^{-1}(q)$. Now in $\varphi^{-1}({\cal A})$ there is the following accepting path on input $x = pw \in \varphi^{-1}(y)$: \ \ \ $\varepsilon \ \stackrel{p}{\longrightarrow} \ \delta_1(\varepsilon, p) = $ $\delta(s_0, \varphi(p)) \ \stackrel{w}{\longrightarrow} \ s_A$. \noindent Thus $\varphi^{-1}({\cal A})$ accepts $x = p w = p v$. \noindent $[\supseteq]$ \ Suppose $\varphi^{-1}({\cal A})$ accepts $x$. Then, because of the prefix tree of $P$ at the beginning of $\varphi^{-1}({\cal A})$, $x$ has the form $x = pw$ for some strings $p \in P$ and $w \in A^*$. The accepting path in $\varphi^{-1}({\cal A})$ on input $pw$ has the form \ \ \ $\varepsilon \ \stackrel{p}{\longrightarrow} \ \delta_1(\varepsilon, p) = $ $\delta(s_0, \varphi(p)) \ \stackrel{w}{\longrightarrow} \ s_A$. \noindent Also, $\varphi(x) = q w$ where $q = \varphi(p) \in Q$ (here we use normality of $\varphi$). Hence ${\cal A}$ has the following computation path on input $qw$: \ \ \ $s_0 \ \stackrel{q}{\longrightarrow} \ \delta(s_0,q) = $ $\delta(s_0,\varphi(p)) \ \stackrel{w}{\longrightarrow} \ s_A$. \noindent So, $\varphi(x) = \varphi(p) \, w = qw \in {\cal L}({\cal A})$. Hence, $x \in \varphi^{-1}(qw)$ $\subseteq \varphi^{-1}({\cal L}({\cal A}))$. Thus ${\cal L}(\varphi^{-1}({\cal A}))$ $\subseteq$ $\varphi^{-1}({\cal L}({\cal A}))$. \ \ \ $\Box$ \begin{cor} \label{iterated_inv_image_acyclicDFA} \ Let ${\cal A}$ be an acyclic DFA with a single accept state. For $i = 1, \ldots, n$, let $P_i, Q_i \subset A^*$ be finite prefix codes, and let $\varphi_i: P_i A^* \to Q_i A^*$ be {\em normal} right-ideal morphisms. We assume that $\, P_i \neq \{\varepsilon\} \,$ and $\, Q_i \neq \{\varepsilon\}$.
Then $\, (\varphi_n \circ \ldots \circ \varphi_1)^{-1}({\cal L}({\cal A})) \, $ is accepted by an acyclic DFA with size $\, < {\sf size}({\cal A}) + \sum_{i=1}^n \Sigma(P_i)$, with one accept state. The transition table of this DFA can be constructed deterministically in polynomial time, based on the transition table of ${\cal A}$ and the tables of $\varphi_i$ (for $i = 1, \ldots, n$). \end{cor} {\bf Proof.} We assume that $(\varphi_n \circ \ldots \circ \varphi_1)^{-1}({\cal L}({\cal A}))$ $\neq$ $\varnothing \, $ (since the empty set is accepted by a DFA of size 0). We use induction on $n$. For $n=1$ the Corollary is just Lemma \ref{inv_image_acyclicDFA}. Let $n\geq 1$, assume the Corollary holds for $n$ normal morphisms, and consider one more normal right-ideal morphism $\varphi_0: P_0 A^* \to Q_0 A^*$, where $P_0, Q_0 \subset A^*$ are finite prefix codes with $P_0 \neq \{\varepsilon\} \neq Q_0$. And assume $ \, (\varphi_n \circ \ldots \circ \varphi_1$ $\circ$ $\varphi_0)^{-1}({\cal L}({\cal A})) \neq \varnothing$. Since $(\varphi_n \circ \ldots \circ \varphi_1 \circ$ $\varphi_0)^{-1}({\cal L}({\cal A}))$ $=$ $\varphi_0^{-1} \circ (\varphi_n \circ \ldots \circ $ $\varphi_1)^{-1}({\cal L}({\cal A}))$, let us apply Lemma \ref{inv_image_acyclicDFA} to $\varphi_0$ and the acyclic DFA $(\varphi_n \circ \ldots \circ \varphi_1)^{-1}({\cal A})$. We have $\varepsilon \not\in {\rm Dom}(\varphi_n \ \ldots \ \varphi_1 \varphi_0)$; indeed, $P_i \neq \{\varepsilon\}$ is equivalent to $\varepsilon \not\in {\rm Dom}(\varphi_i)$; moreover we have $\varepsilon \not\in {\rm Dom}(\varphi_0)$, and ${\rm Dom}(\varphi_n \ \ldots \ \varphi_1 \varphi_0)$ $\subseteq$ ${\rm Dom}(\varphi_0)$. Similarly, $Q_i \neq \{\varepsilon\}$ is equivalent to $\varepsilon \not\in {\rm Im}(\varphi_i)$; and $\varepsilon \not\in {\rm Im}(\varphi_n)$ implies $\varepsilon \not\in {\rm Im}(\varphi_n \ \ldots \ \varphi_1 \varphi_0)$. 
The conclusion of Lemma \ref{inv_image_acyclicDFA} is then that $(\varphi_n \circ \ldots \circ \varphi_1$ $\circ$ $\varphi_0)^{-1}({\cal L}({\cal A}))$ is accepted by an acyclic DFA $(\varphi_n \circ \ldots \circ \varphi_1 \circ \varphi_0)^{-1}({\cal A})$ whose size is $\, < \,$ ${\sf size}((\varphi_n \circ \ldots \circ \varphi_1)^{-1}({\cal A}))$ $+$ $\Sigma(P_0)$ $ \, < \, {\sf size}({\cal A}) + \sum_{i=1}^n \Sigma(P_i) + \Sigma(P_0)$ $=$ ${\sf size}({\cal A}) + \sum_{i=0}^n \Sigma(P_i)$. \ \ \ $\Box$ \noindent {\bf Proof of Theorem \ref{wp_M_in_P}:} Let $(\rho_m, \ldots, \rho_1)$ and $(\sigma_n, \ldots, \sigma_1)$ be two sequences of generators from the finite generating set $\Gamma_{k,1}$. The elements of $\Gamma_{k,1}$ can be chosen so that the assumptions of Corollary \ref{iterated_inv_image_acyclicDFA} hold; see Section 3 of \cite{BiMonTh}, where such a generating set is given. We want to decide in deterministic polynomial time whether the products $\rho_m \cdot \ldots \cdot \rho_1$ and $\sigma_n \cdot \ldots \cdot \sigma_1$ are the same, as elements of $M_{k,1}$. First, by Corollary \ref{computing_imC} (Output 2) we can compute the sets ${\rm imC}(\rho_m \circ \ldots \circ \rho_1)$ and ${\rm imC}(\sigma_n \circ \ldots \circ \sigma_1)$, explicitly described by lists of words, in polynomial time. By Lemma \ref{essent_intersect_algo} (Question 2) we can check in polynomial time whether the right ideal $\, {\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ $\cap$ ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1) \, $ is essential in ${\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ and in ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$. If it is not essential we immediately conclude that $\rho_m \cdot \ldots \cdot \rho_1 \neq $ $\sigma_n \cdot \ldots \cdot \sigma_1$. 
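For the intersection step, recall that $x A^* \cap y A^*$ equals $zA^*$, where $z$ is the longer of $x, y$, when $x$ and $y$ are prefix-comparable, and is empty otherwise. So a generating prefix code for $Q_1 A^* \cap Q_2 A^*$ can be read off from the prefix-comparable pairs; a small sketch (our own stand-in for the more general algorithm of Lemma \ref{essent_intersect_algo}):

```python
def intersection_code(Q1, Q2):
    """Prefix code Pi with Pi A* = Q1 A*  intersect  Q2 A*, for finite
    prefix codes Q1, Q2, using: x A* meets y A* iff x, y are
    prefix-comparable, in which case the intersection is (longer word) A*."""
    Pi = set()
    for q1 in Q1:
        for q2 in Q2:
            if q1.startswith(q2) or q2.startswith(q1):
                Pi.add(q1 if len(q1) >= len(q2) else q2)
    return Pi

# a A* meets ab A* in ab A*;  ba A* meets b A* in ba A*;  a A* misses b A*
print(sorted(intersection_code({"a", "ba"}, {"ab", "b"})))  # -> ['ab', 'ba']
```

Since $Q_1$ and $Q_2$ are prefix codes, the resulting set is again a prefix code, and its cardinality and word lengths are bounded by those of $Q_1 \cup Q_2$, matching the polynomial bounds used in the proof.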
On the other hand, if it is essential, Lemma \ref{essent_intersect_algo} (Output 1) lets us compute a generating set $\Pi$ for the right ideal $\, {\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ $\cap$ ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$, in deterministic polynomial time; the generating set $\Pi$ is a finite prefix code, given explicitly by a list of words. By Corollary \ref{repeated_sum_of_imC} and because $\, \Pi \subseteq$ ${\rm imC}(\rho_m \circ \ldots \circ \rho_1)$ $\cup$ ${\rm imC}(\sigma_n \circ \ldots \circ \sigma_1)$, $\Pi$ has linearly bounded cardinality and the length of the longest words in $\Pi$ is linearly bounded in terms of $n+m$. We restrict $\rho_m \circ \ldots \circ \rho_1$ and $\sigma_n \circ \ldots \circ \sigma_1$ in such a way that their images are $\Pi A^*$; i.e., we replace them by $\rho =$ ${\sf id}_{\Pi A^*} \circ \rho_m \circ \ldots \circ \rho_1$, respectively $\sigma = {\sf id}_{\Pi A^*} \circ \sigma_n \circ \ldots \circ \sigma_1$. So, ${\rm Im}(\rho) = \Pi A^* = {\rm Im}(\sigma)$. Also, since $\Pi A^*$ is essential in ${\rm Im}(\rho_m \circ \ldots \circ \rho_1)$ and in ${\rm Im}(\sigma_n \circ \ldots \circ \sigma_1)$ we have: $\rho$ is equal to $\rho_m \cdot \ldots \cdot \rho_1$ in $M_{k,1}$, and $\sigma$ is equal to $\sigma_n \cdot \ldots \cdot \sigma_1$ in $M_{k,1}$. So for deciding the word problem it is enough to check whether $\rho = \sigma$ in $M_{k,1}$. By the next Claim, the sets $\rho({\rm domC}(\rho))$ and $\sigma({\rm domC}(\sigma))$ play a crucial role. However, instead of directly computing $\rho({\rm domC}(\rho))$ and $\sigma({\rm domC}(\sigma))$, we compute finite sets $R_1, R_2 \subset A^*$ such that $\, \rho({\rm domC}(\rho)) \subseteq R_1 \, $ and $\, \sigma({\rm domC}(\sigma)) \subseteq R_2 \,$. Moreover, since $\, \rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$ $\subseteq$ $\Pi A^*$, we can pick $R_1, R_2$ so that $R_1 \cup R_2 \subseteq \Pi A^*$. 
By Corollary \ref{computing_imC} (Output 1), the sets $R_1, R_2$ can be computed in polynomial time as explicit lists of words. Let $R = R_1 \cup R_2$. \noindent {\sf Claim.} \ {\it $\rho = \sigma$ in $M_{k,1}$ \ iff \ $\, \rho^{-1}(r) = \sigma^{-1}(r) \,$ for every $\, r \in \rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$. The latter is equivalent to $\, \rho^{-1}(r) = \sigma^{-1}(r) \,$ for every $r \in R$. } \noindent {\sf Proof of the Claim.} If $\rho = \sigma$ in $M_{k,1}$ then $\rho^{-1}(r) = \sigma^{-1}(r)$ for every $r \in \Pi A^* = {\rm Im}(\rho)$ $=$ ${\rm Im}(\sigma)$. Hence this holds in particular for all $r \in \rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$ and for all $r \in R$, since $\rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$ $\subseteq R \subset \Pi A^*$. Conversely, if $\rho^{-1}(r) = \sigma^{-1}(r)$ for every $r \in$ $\rho({\rm domC}(\rho)) \cup \sigma({\rm domC}(\sigma))$, then for all $x \in \rho^{-1}(r) = \sigma^{-1}(r)$: \ $\rho(x) = r = \sigma(x)$. Since ${\rm domC}(\rho) \subseteq \rho^{-1}(\rho({\rm domC}(\rho)))$ and ${\rm domC}(\sigma) \subseteq \sigma^{-1}(\sigma({\rm domC}(\sigma)))$, it follows that $\rho$ and $\sigma$ are equal on ${\rm domC}(\rho) \cup {\rm domC}(\sigma)$, and it follows that ${\rm domC}(\rho) = {\rm domC}(\sigma)$. Hence $\rho$ and $\sigma$ are equal as right-ideal morphisms, and hence as elements of $M_{k,1}$. \ \ \ {\sf [This proves the Claim.]} Recall that $|R|$ and $\ell(R)$, and hence $\Sigma(R)$, are polynomially bounded in terms of the input size. To check for each $r \in R$ whether $\, \rho^{-1}(r) = \sigma^{-1}(r)$, we apply Corollary \ref{iterated_inv_image_acyclicDFA}, which constructs an acyclic DFA ${\cal A}_{\rho}$ for $\rho^{-1}(r)$ from a DFA for $\{r\}$; this is done deterministically in polynomial time. Similarly, an acyclic DFA ${\cal A}_{\sigma}$ for $\sigma^{-1}(r)$ is constructed. 
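The equivalence check on the two DFAs just constructed can be done by a breadth-first search over reachable pairs of states, in the style of the classical product/Hopcroft--Karp procedure; a minimal sketch for partial transition tables (all names are ours):

```python
from collections import deque

def equivalent(dfa1, dfa2):
    """Decide L(dfa1) == L(dfa2); each DFA is (start, accept_set,
    transitions) with a partial dict-of-dicts transition table.
    A None component represents the implicit dead state."""
    (s1, F1, d1), (s2, F2, d2) = dfa1, dfa2
    alphabet = {a for d in (d1, d2) for edges in d.values() for a in edges}
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        p, q = queue.popleft()
        if (p in F1) != (q in F2):   # one accepts, the other rejects
            return False
        for a in alphabet:
            nxt = (d1.get(p, {}).get(a), d2.get(q, {}).get(a))
            if nxt != (None, None) and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two DFAs for the language {a} over different state names
print(equivalent((0, {1}, {0: {"a": 1}}), (9, {8}, {9: {"a": 8}})))  # -> True
```

The search visits at most $|S_1|\cdot|S_2|$ pairs (plus dead-state pairs), once per letter, so the check is polynomial, as claimed below.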
Thus, $\rho^{-1}(r) = \sigma^{-1}(r)$ iff ${\cal A}_{\rho}$ and ${\cal A}_{\sigma}$ accept the same language. Checking whether ${\cal A}_{\rho}$ and ${\cal A}_{\sigma}$ accept the same language is an instance of the equivalence problem for DFAs that are given explicitly by transition tables. It is well known (see e.g., \cite{HU}, or \cite{LewisPapad} p.\ 103) that the equivalence problem for DFAs is decidable deterministically in polynomial time. This proves Theorem \ref{wp_M_in_P}. \ \ \ $\Box$ \noindent {\bf Acknowledgement.} I would like to thank John Meakin for many discussions over the years concerning the Thompson groups and generalizations to inverse monoids. {\small } \noindent {\small J.C.\ Birget \\ Dept.\ of Computer Science \\ Rutgers University -- Camden \\ Camden, NJ 08102 \\ {\tt [email protected]} } \end{document}
\begin{document} \title{Temperature cooling in a quantum dissipation channel and the corresponding thermal vacuum state\thanks{ Work supported by the National Natural Science Foundation of China under grant (No: 61203061, 61403362, 61374091, 61473199 and 11175113) }} \author{Wang Yao-xiong$^{1}$, Gao Fang$^{1}$, Fan Hong-yi$^{1,2}$, and Tang Xu-bing$^{1,3\dag }$ \\ $^{1}$Institute of Intelligent Machines, \\ Chinese Academy of Sciences, Hefei 230031, China\\ $^{2}$Department of Material Science \& Engineering, \\ University of Science \ \ \& Technology of China, Hefei 230027, China\\ $^{3}$School of Mathematics \& Physics Science and Engineering, \\ Anhui University of Technology, Ma'anshan 243032, China\\ $^{\dag }[email protected]} \maketitle \begin{abstract} We examine temperature cooling of optical chaotic light in a quantum dissipation channel with damping parameter $\kappa$. We do this by introducing its thermal vacuum state, which exposes the entangling effect between the system and the reservoir. A temperature cooling formula is derived that depends on the parameter $\kappa$; by adjusting $\kappa$ one can control the temperature. \end{abstract} \section{Introduction} In nature, most systems are immersed in their environments; energy exchange between a system and its environment always happens, and this brings about the system's dissipation together with quantum decoherence \cite {Gardiner_2000_qn,Ankerhold_2003_lnp}. If these systems involve non-negligible correlations amongst their components, quantum memory (non-Markovian) effects cannot be ignored. If the feedback from the environment is extremely weak, the process is Markovian, and its dynamics is described by the master equation or the associated Langevin or Fokker-Planck equations \cite{Scully_1998_qo,Orszag_2000_qo}.
The Lindblad equation is the most general form for a Markovian master equation, and it is very important for the treatment of irreversible and non-unitary processes, from dissipation \cite{Breuer_2002_oqs} and decoherence \cite{fan_2008_mplb} to the quantum measurement process \cite{Mandel_1995_ocqo,Gardiner_2000_qn}. In quantum optics and quantum statistics theory, a damped harmonic oscillator in a thermal bath is one of the most famous dissipation models, and its dynamics is described by the Lindblad master equation. The associated amplitude damping mechanism of the system in physical processes is governed by the following master equation \cite {Scully_1998_qo,Orszag_2000_qo} \begin{equation} \frac{d\rho \left( t\right) }{dt}=\kappa \left( 2a\rho a^{\dagger }-a^{\dagger }a\rho -\rho a^{\dagger }a\right) , \label{1} \end{equation} where $\rho $ is the density operator of the system, $\kappa $ is the rate of decay, and $a,a^{\dagger }$ are the boson annihilation and creation operators, respectively, with $\left[ a,a^{\dagger }\right] =1$. Such an equation can be conveniently solved by virtue of the entangled state representation \cite {fan_1994_pra}, and the solution is in the so-called Kraus form \begin{equation} \ \rho \left( t\right) =\sum_{n=0}^{\infty }\frac{V^{n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\rho _{0}a^{\dag n}e^{-\kappa ta^{\dagger }a}, \label{2} \end{equation} where $\rho _{0}$ is the system's initial density operator, \begin{equation} V\equiv 1-e^{-2\kappa t}. \label{3} \end{equation} As is common knowledge, during the thermal communication between system and reservoir, excitation and de-excitation processes are influenced by the exchange of energy between them. The loss of energy by the system can occur in two ways: 1) the system emits quanta with positive energy $\hbar \omega $ (denoted by an annihilation operator $a$); 2) holes of particles with positive energy are created in the reservoir.
When the latter process takes place, we say that a hole is created in the reservoir by $\tilde{a} ^{\dagger }.$ The tilde mode $\tilde{a}$ (reservoir mode) is independent of the system's mode $a,$ $\left[ \tilde{a},\tilde{a}^{\dagger }\right] =1.$ An interesting question that has long been overlooked is: how does the temperature of a system change when the system undergoes dissipation? To be concrete, we consider an optical chaotic field \cite{fan_2011_cpl}, described by the density operator \begin{equation} \rho _{c}=\left( 1-e^{-\frac{\hbar \omega }{kT}}\right) e^{-\frac{\hbar \omega }{kT}a^{\dagger }a}, \label{4} \end{equation} where $\beta =\frac{1}{kT}$ $\left( k\text{ is the Boltzmann constant, }T \text{ is the temperature}\right) $; when it undergoes amplitude damping described by Eq. (\ref{1}), how does its temperature evolve with time? In order to answer this question we recall the Thermal Field Dynamics (TFD) theory of Takahashi and Umezawa \cite{Takahashi_1996_ijmpb}: to convert the statistical average Tr$\left( A\rho \right) $ at nonzero temperature $T$ into an equivalent pure-state expectation value, they introduced the state in a doubled-freedom Fock space \begin{equation} \sec \text{h}\theta \exp \left[ a^{\dagger }\tilde{a}^{\dagger }\tanh \theta \right] \left \vert 0\tilde{0}\right \rangle \equiv \left \vert 0(\beta )\right \rangle \label{6} \end{equation} such that \begin{equation} \left \langle 0(\beta )\right \vert A\left \vert 0(\beta )\right \rangle = \text{Tr}\left( A\rho \right) , \label{5} \end{equation} where the vacuum state $\left \vert 0\tilde{0}\right \rangle $ is annihilated by either $a$ or\ $\tilde{a}.$ The parameter \begin{equation} \tanh \theta =\exp \left( -\frac{\hbar \omega }{2kT}\right) , \label{7} \end{equation} is determined by comparing the Bose-Einstein distribution \begin{equation} \text{Tr}\left( \rho _{c}a^{\dagger }a\right) =\left[ e^{\omega \hbar /kT}-1 \right] ^{-1}\equiv \bar{n}, \label{8} \end{equation} with the
expectation value of the photon number operator in $\left \vert 0(\beta )\right \rangle $ \begin{equation} \left \langle 0(\beta )\right \vert a^{\dagger }a\left \vert 0(\beta )\right \rangle =\sinh ^{2}\theta . \label{9} \end{equation} The reason we work with $\left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert $ is that partial tracing over its tilde mode leads to $\rho _{c}$ (this will be proved in Sec. 2), i.e. \begin{equation} \text{\~{T}r}\left[ \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] =\rho _{c}=\left( 1-e^{- \frac{\hbar \omega }{kT}}\right) e^{-\frac{\hbar \omega }{kT}a^{\dagger }a}, \label{10} \end{equation} then when we take $\left \vert 0(\beta )\right \rangle \left \langle 0(\beta )\right \vert $ as $\rho _{0}$ and substitute it into Eq. (\ref{2}) \begin{equation} \rho _{c}\left( t\right) =\sum_{n=0}^{\infty }\frac{V^{n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert a^{\dag n}e^{-\kappa ta^{\dagger }a}, \label{11} \end{equation} we will have \begin{eqnarray} \text{\~{T}r}\left[ \rho _{c}\left( t\right) \right] &=&\sum_{n=0}^{\infty } \frac{V^{n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\left[ \text{\~{T}r}\left( \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right) \right] a^{\dag n}e^{-\kappa ta^{\dagger }a} \label{12} \\ &=&\sum_{n=0}^{\infty }\frac{V^{n}}{n!}e^{-\kappa ta^{\dagger }a}a^{n}\rho _{c}a^{\dag n}e^{-\kappa ta^{\dagger }a}, \notag \end{eqnarray} thus \~{T}r$\left[ \rho _{c}\left( t\right) \right] $ will present the correct dissipation evolution law of the chaotic field. Note that $\rho _{c}\left( t\right) $ itself includes not only the information of the system but also that of the reservoir, so we can also see how the reservoir evolves accompanying the system's dissipation.
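Before inserting $\left\vert 0(\beta )\right\rangle \left\langle 0(\beta )\right\vert$, the Kraus form (\ref{2}) itself can be checked numerically for a chaotic initial state: the sum stays diagonal in the Fock basis, preserves the trace, and the mean photon number decays as $\bar{n}\,e^{-2\kappa t}$. A small pure-Python check (the truncation order $N$ is ours):

```python
import math

def damped_thermal_diag(q, kappa, t, N=60):
    """Diagonal <m|rho(t)|m> of Eq. (2) for the thermal initial state
    rho_0 = (1-q) q^{a^dag a}, with q = exp(-hbar*omega/(k T)).
    Since a^n merely shifts Fock indices, only diagonal entries arise:
      sum_n V^n/n! * (m+n)!/m! * (1-q) q^{m+n} * e^{-2 kappa t m}."""
    V = 1.0 - math.exp(-2.0 * kappa * t)
    diag = []
    for m in range(N):
        s = sum(V**n / math.factorial(n)
                * math.factorial(m + n) / math.factorial(m)
                * (1.0 - q) * q**(m + n)
                for n in range(N))
        diag.append(s * math.exp(-2.0 * kappa * t * m))
    return diag

q, kappa, t = 0.5, 0.3, 1.0
d = damped_thermal_diag(q, kappa, t)
nbar0 = q / (1 - q)                              # initial mean photon number
nbar_t = sum(m * p for m, p in enumerate(d))     # mean photon number at time t
print(round(sum(d), 6), round(nbar_t, 6), round(nbar0 * math.exp(-2 * kappa * t), 6))
```

The resulting diagonal is again geometric, i.e. the state remains chaotic with a reduced mean photon number, consistent with the closed form obtained in Sec. 4.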
Moreover, since the temperature effect is manifest through the structure of $\rho _{c}$, we also investigate how system's dissipation accompanies the temperature variation. \section{Partial tracing over the tilde-mode of $\left \vert 0\left( \protect \beta \right) \right \rangle \left \langle 0\left( \protect \beta \right) \right \vert $} Using the coherent state representation of the tilde-mode \begin{equation} \int \frac{d^{2}z}{\pi }\left \vert \tilde{z}\right \rangle \left \langle \tilde{z}\right \vert =1,\text{ \ \ \ }\tilde{a}\left \vert \tilde{z}\right \rangle =z\left \vert \tilde{z}\right \rangle , \label{13} \end{equation} where \begin{equation} \left \vert \tilde{z}\right \rangle =\exp \left[ -\frac{|z|^{2}}{2}+z\tilde{a }^{\dagger }\right] \left \vert \tilde{0}\right \rangle , \label{14} \end{equation} we have \begin{eqnarray} \text{\~{T}r}\left[ \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] &=&\text{\~{T}r}\left[ \int \frac{d^{2}z}{\pi }\left \vert \tilde{z}\right \rangle \left \langle \tilde{z}\right \vert \left. 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] \notag \\ &=&\sec h^{2}\theta \int \frac{d^{2}z}{\pi }\left \langle \tilde{z}\right \vert e^{a^{\dag }z^{\ast }\tanh \theta }\left \vert 0\tilde{0}\right \rangle \left \langle 0\tilde{0}\right \vert e^{az\tanh \theta }\left \vert \tilde{z}\right \rangle . \label{15} \end{eqnarray} Then using $\left \langle \tilde{z}\right \vert \left. 
\tilde{0} \right \rangle =\exp \left( -\left \vert z\right \vert ^{2}/2\right) $, the normal ordering of $\left \vert 0\right \rangle \left \langle 0\right \vert $ \begin{equation} \left \vert 0\right \rangle \left \langle 0\right \vert =\colon e^{-a^{\dagger }a}\colon , \label{16} \end{equation} and \begin{equation*} e^{\lambda a^{\dagger }a}=\colon \exp \left[ \left( e^{\lambda }-1\right) a^{\dagger }a\right] \colon \end{equation*} we have \begin{equation} \text{\~{T}r}\left[ \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] =\sec h^{2}\theta \int \frac{d^{2}z}{\pi }\colon e^{-\left \vert z\right \vert ^{2}+a^{\dag }z^{\ast }\tanh \theta +az\tanh \theta -a^{\dag }a}\colon =\sec h^{2}\theta \colon e^{a^{\dag }a\left( \tanh ^{2}\theta -1\right) }\colon \label{17} \end{equation} and noting $\tanh \theta =\exp \left( -\tfrac{\hbar \omega }{2kT}\right) $ \begin{equation} \text{\~{T}r}\left[ \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] =\left[ 1-\exp \left( - \tfrac{\hbar \omega }{kT}\right) \right] \exp \left( -\tfrac{\hbar \omega }{ kT}a^{\dag }a\right) =\rho _{c}. \label{18} \end{equation} Note that the partial trace over\ mode $a$ for $\left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert $ is Tr$\left[ \left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \right] =\left( 1-e^{-\frac{\hbar \omega }{kT}}\right) e^{-\frac{\hbar \omega }{kT}\tilde{a}^{\dagger }\tilde{a}},$ since in Eq.
(\ref{6}) $\left \vert 0\left( \beta \right) \right \rangle $ is symmetric with respect to $\tilde{a}^{\dagger }$ and $a^{\dagger }.$ Remarkably, the thermo vacuum state can be rewritten as \begin{equation} \left \vert 0(\beta )\right \rangle =\sec h\theta \exp \left[ a^{\dagger } \tilde{a}^{\dagger }\tanh \theta \right] \left \vert 0\tilde{0}\right \rangle =S\left( \theta \right) \left \vert 0\tilde{0}\right \rangle \label{19} \end{equation} where \begin{equation} S\left( \theta \right) =\exp \left[ \theta \left( a^{\dagger }\tilde{a} ^{\dagger }-a\tilde{a}\right) \right] \label{20} \end{equation} has the form of a two-mode squeezing operator \cite{Wang_cpl_2010}, so $S\left( \theta \right) $ is named the thermo squeezing operator; it squeezes the vacuum state $\left \vert 0\tilde{0}\right \rangle $ at zero temperature to the thermo vacuum state $\left \vert 0(\beta )\right \rangle $ at finite temperature $T$. Because a two-mode squeezed state is an entangled state, $\left \vert 0(\beta )\right \rangle $ can be considered an entangled state in which the system's mode $a^{\dagger }$ entangles with the tilde mode $\tilde{a}^{\dagger }$. Since they are entangled, the dissipation of the system will affect its environment, as we shall show in the next section.
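The identification of $\theta$ can be verified directly: with $\tanh \theta =e^{-\hbar \omega /2kT}$ one has $\sinh ^{2}\theta =\tanh ^{2}\theta /\left( 1-\tanh ^{2}\theta \right) =\left[ e^{\hbar \omega /kT}-1\right] ^{-1}=\bar{n}$, so (\ref{7})-(\ref{9}) are mutually consistent. A short numerical check (the sample value of $\hbar \omega /kT$ is arbitrary):

```python
import math

x = 0.7                                  # stands for hbar*omega/(k T)
theta = math.atanh(math.exp(-x / 2))     # Eq. (7): tanh(theta) = e^{-x/2}
nbar = 1.0 / (math.exp(x) - 1.0)         # Bose-Einstein mean, Eq. (8)
# Eq. (9): the photon number in |0(beta)> equals sinh^2(theta)
assert abs(math.sinh(theta)**2 - nbar) < 1e-12
print(round(math.sinh(theta)**2, 6), round(nbar, 6))
```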
\section{Evolution of $\left \vert 0\left( \protect \beta \right) \right \rangle \left \langle 0\left( \protect \beta \right) \right \vert $ in dissipation channel} Using $\left[ a,e^{a^{\dagger }\tilde{a}^{\dagger }\tanh \theta }\right] =\tilde{a}^{\dagger }\tanh \theta \,e^{a^{\dagger }\tilde{a}^{\dagger }\tanh \theta }$ repeatedly, we have \begin{equation} a^{n}\left \vert 0\left( \beta \right) \right \rangle =\sec \text{h}\theta \,a^{n-1}\left[ a,e^{a^{\dagger }\tilde{a}^{\dagger }\tanh \theta }\right] \left \vert 0\tilde{0}\right \rangle =\left( \tilde{a}^{\dagger }\tanh \theta \right) a^{n-1}\left \vert 0\left( \beta \right) \right \rangle =\left( \tilde{a}^{\dagger }\tanh \theta \right) ^{n}\left \vert 0\left( \beta \right) \right \rangle , \label{21} \end{equation} so we obtain \begin{align} \rho _{c}\left( t\right) & =\sum_{n=0}^{\infty }\frac{V^{n}\tanh ^{2n}\theta }{n!}e^{-\kappa ta^{\dagger }a}\tilde{a}^{\dagger n}\left \vert 0\left( \beta \right) \right \rangle \left \langle 0\left( \beta \right) \right \vert \tilde{a}^{n}e^{-\kappa ta^{\dagger }a} \label{22} \\ & =\sec \text{h}^{2}\theta \sum_{n=0}^{\infty }\frac{V^{n}\tanh ^{2n}\theta }{n!}e^{-\kappa ta^{\dagger }a}\tilde{a}^{\dagger n}e^{a^{\dagger }\tilde{a} ^{\dagger }\tanh \theta }\left \vert 0\tilde{0}\right \rangle \left \langle 0 \tilde{0}\right \vert e^{a\tilde{a}\tanh \theta }\tilde{a}^{n}e^{-\kappa ta^{\dagger }a} \notag \end{align} Then using \begin{equation} e^{-\kappa ta^{\dagger }a}a^{\dagger }e^{\kappa ta^{\dagger }a}=e^{-\kappa t}a^{\dagger } \label{23} \end{equation} we obtain \begin{equation} \rho _{c}\left( t\right) =\sec \text{h}^{2}\theta \sum_{n=0}^{\infty }\frac{ V^{n}\tanh ^{2n}\theta }{n!}\tilde{a}^{\dagger n}e^{e^{-\kappa t}a^{\dagger } \tilde{a}^{\dagger }\tanh \theta }\left \vert 0\tilde{0}\right \rangle \left \langle 0\tilde{0}\right \vert e^{e^{-\kappa t}a\tilde{a}\tanh \theta } \tilde{a}^{n}. \label{24} \end{equation} Further, by using the normal product form \begin{equation} \left \vert 0\tilde{0}\right \rangle \left \langle 0\tilde{0}\right \vert =\colon e^{-a^{\dagger }a-\tilde{a}^{\dagger }\tilde{a}}\colon , \label{25} \end{equation} we can make the summation in Eq.
(\ref{24}) and derive the compact form of $\rho _{c}\left( t\right) ,$ \begin{align} \rho _{c}\left( t\right) & =\operatorname{sech}^{2}\theta \colon \sum_{n=0}^{\infty } \frac{V^{n}\tanh ^{2n}\theta }{n!}\tilde{a}^{\dagger n}\tilde{a} ^{n}e^{e^{-\kappa t}a^{\dagger }\tilde{a}^{\dagger }\tanh \theta }e^{e^{-\kappa t}a\tilde{a}\tanh \theta -a^{\dagger }a-\tilde{a}^{\dagger } \tilde{a}}\colon \notag \\ & =\operatorname{sech}^{2}\theta \colon \exp \{ \tilde{a}^{\dagger }\tilde{a} \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta +e^{-\kappa t}\tanh \theta \left( a^{\dagger }\tilde{a}^{\dagger }+a\tilde{a}\right) -a^{\dagger }a- \tilde{a}^{\dagger }\tilde{a}\} \colon \label{26} \\ & =\operatorname{sech}^{2}\theta e^{e^{-\kappa t}a^{\dagger }\tilde{a}^{\dagger }\tanh \theta }\left \vert 0\right \rangle \left \langle 0\right \vert \exp \{ \tilde{a}^{\dagger }\tilde{a}\ln \left[ \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta \right] \}e^{e^{-\kappa t}\tanh \theta a\tilde{a}} \notag \end{align} where $\exp \{ \tilde{a}^{\dagger }\tilde{a}\ln \left[ \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta \right] \}$ indicates that the reservoir is in a chaotic field of the tilde mode, no longer in $\left \vert \tilde{0} \right \rangle \left \langle \tilde{0}\right \vert ;$ this is because the system mode and the reservoir mode are entangled, so the dissipation of the system necessarily affects the reservoir. From (\ref{26}) we can see how a pure thermo vacuum state evolves into a mixed state during the dissipation process: not only does the squeezing parameter change, $\tanh \theta \rightarrow e^{-\kappa t}\tanh \theta ,$ but also $\left \vert 0\tilde{0}\right \rangle \left \langle 0\tilde{0}\right \vert $ evolves into $\left \vert 0\right \rangle \left \langle 0\right \vert \exp \{ \tilde{a}^{\dagger } \tilde{a}\ln \left[ \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta \right] \}, $ i.e., in the process the thermo squeezing effect decreases while the reservoir-mode vacuum becomes chaotic. 
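The reduced dynamics behind these formulas can also be checked numerically. The sketch below (ours, not from the paper) assumes the standard binomial photon-loss law for the amplitude-damping channel, with single-photon survival probability $e^{-2\kappa t}$, and recovers the new chaotic-state parameter $\tanh^{2}\theta'$ of Eq. (\ref{28}):

```python
import math

def damped_thermal(theta, kappa_t, nmax=400):
    """Photon-number distribution p_n = (1 - tanh^2 th) tanh^{2n} th of the
    chaotic (thermal) state, after photon loss in which each of n photons
    survives independently with probability eta = exp(-2 kappa t)."""
    t2 = math.tanh(theta) ** 2
    p = [(1 - t2) * t2 ** n for n in range(nmax)]
    eta = math.exp(-2 * kappa_t)
    return [sum(p[n] * math.comb(n, m) * eta ** m * (1 - eta) ** (n - m)
                for n in range(m, nmax)) for m in range(nmax // 2)]

theta, kt = 0.9, 0.3
q = damped_thermal(theta, kt)
# tanh^2 th' from Eq. (28): e^{-2 kappa t} tanh^2 th / (1 - (1 - e^{-2 kappa t}) tanh^2 th)
t2p = (math.exp(-2 * kt) * math.tanh(theta) ** 2
       / (1 - (1 - math.exp(-2 * kt)) * math.tanh(theta) ** 2))
print(q[1] / q[0], t2p)
```

The damped distribution stays geometric with ratio $\tanh^{2}\theta'$, which is exactly the statement that the chaotic state remains chaotic with $\theta \to \theta'$.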
\section{Partial tracing over the tilde mode of $\protect \rho _{c}\left( t\right) $} Now we perform the partial trace over the tilde mode of $\rho _{c}\left( t\right) ;$ using (\ref{13}), (\ref{16})--(\ref{18}) and (\ref{26}) we have \begin{eqnarray*} \text{\~{T}r}\left[ \rho _{c}\left( t\right) \right] &=&\operatorname{sech}^{2}\theta \int \frac{d^{2}z}{\pi }\left \langle \tilde{z}\right \vert \exp \left[ e^{-\kappa t}a^{\dag }\tilde{a} ^{\dag }\tanh \theta \right] \left \vert 0\right \rangle \left \langle 0\right \vert \exp \left \{ \tilde{a}^{\dagger }\tilde{a}\ln \left[ \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta \right] \right \} \exp \left[ a \tilde{a}e^{-\kappa t}\tanh \theta \right] \left \vert \tilde{z}\right \rangle \\ &=&\operatorname{sech}^{2}\theta \int \frac{d^{2}z}{\pi }\colon \exp \left \{ \left \vert z\right \vert ^{2}\left[ \left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta -1\right] +e^{-\kappa t}\left( a^{\dagger }z^{\ast }+az\right) \tanh \theta -a^{\dagger }a\right \} \colon \\ &=&\frac{1}{1+e^{-2\kappa t}\sinh ^{2}\theta }\exp \left[ a^{\dagger }a\ln \frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }\right] . \end{eqnarray*} When $t=0,$ it reduces to the original chaotic state. By identifying \begin{equation} \frac{e^{-\kappa t}\tanh \theta }{\sqrt{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }}=\tanh \theta ^{\prime }, \label{28} \end{equation} then \begin{equation} \frac{1}{1+e^{-2\kappa t}\sinh ^{2}\theta }=\operatorname{sech}^{2}\theta ^{\prime } \label{29} \end{equation} and we can express \begin{equation} \text{\~{T}r}\left[ \rho _{c}\left( t\right) \right] =\operatorname{sech}^{2}\theta ^{\prime }\exp [a^{\dagger }a\ln \tanh ^{2}\theta ^{\prime }], \label{30} \end{equation} which shows that the system is still in a chaotic state, but with the new parameter $\theta ^{\prime }.$ In analogy with Eq. 
(\ref{7}), by identifying \begin{equation} \ln \frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }=-\frac{\hbar \omega }{kT^{\prime }}, \label{31} \end{equation} we see that the system is now at the temperature \begin{equation} T^{\prime }=-\frac{\hbar \omega }{k\ln \frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }}. \label{32} \end{equation} Since \begin{equation} 1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta >e^{-2\kappa t}, \label{33} \end{equation} we have \begin{equation} \ln \frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }<0,\text{ \ \ \ \ }T^{\prime }>0. \label{34} \end{equation} Moreover, since \begin{equation} \tanh ^{2}\theta >\frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }, \label{35} \end{equation} hence \begin{equation} -\frac{1}{\ln \tanh ^{2}\theta }>\frac{-1}{\ln \frac{e^{-2\kappa t}\tanh ^{2}\theta }{1-\left( 1-e^{-2\kappa t}\right) \tanh ^{2}\theta }}, \label{36} \end{equation} and, in reference to Eq. (\ref{7}), $\ln \tanh ^{2}\theta =-\frac{\hbar \omega }{kT},$ i.e., $T=-\frac{\hbar \omega }{k\ln \tanh ^{2}\theta },$ we see that \begin{equation} T>T^{\prime }, \label{37} \end{equation} which states that during the damping process the system's temperature decreases; the rate of decrease can be controlled by adjusting the damping rate $\kappa .$ In summary, we have examined the temperature cooling of optical chaotic light in a quantum dissipation channel with damping parameter $\kappa .$ We did so by introducing its thermal vacuum state, which exposes the entanglement between the system and the reservoir. The temperature cooling formula (\ref{32}) is derived; it depends on the parameter $\kappa ,$ so by adjusting $\kappa $ one can control the temperature. \end{document}
\begin{document} \title[On the Frobenius complexity of determinantal rings] {\bf On the Frobenius complexity of determinantal rings} \author[Florian~Enescu]{Florian Enescu} \author[Yongwei~Yao]{Yongwei Yao} \address{Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303 USA} \email{[email protected]} \email{[email protected]} \subjclass[2010]{Primary 13A35} \date{} \begin{abstract} We compute the Frobenius complexity for the determinantal ring of prime characteristic $p$ obtained by modding out the $2 \times 2$ minors of an $m \times n$ matrix of indeterminates, where $m > n \ge 2$. We also show that, as $p \to \infty$, the Frobenius complexity approaches $m-1$. \end{abstract} \maketitle \section{Introduction} \subsection{Notations} Throughout this paper $R$ is a commutative Noetherian ring, often local, of prime characteristic $p$. Let $q=p^e$, where $e \in \mathbb{N} = \{ 0, 1, \ldots \}$. Consider the $e$th Frobenius homomorphism $F^e:R\to R$ defined by $F^e(r)=r^q$, for all $r \in R$. For an $R$-module $M$, an $e$th Frobenius action (or Frobenius operator) on $M$ is an additive map $\Phi: M \to M$ such that $\Phi(rm)= r^{p^e}\Phi(m)$, for all $r \in R, m \in M$. For any $e \geq 0$, we let $R^{(e)}$ be the $R$-algebra defined as follows: as a ring $R^{(e)}$ equals $R$ while the $R$-algebra structure is defined by $r \cdot s = r^{q} s$, for all $r \in R,\, s \in R^{(e)}$. Also, $R^{(e)}$ as an $R^{(e)}$-algebra is simply $R$ as an $R$-algebra. Similarly, for an $R$-module $M$, we can define a new $R$-module structure on $M$ by letting $r * m = r^{p^e}m$, for all $r\in R, m\in M$. We denote this $R$-module by $M^{(e)}$. Consider now an $e$th Frobenius action, $\Phi: M \to M$, on $M$, which is nothing other than an $R$-module homomorphism $\Phi : M \to M^{(e)}$. 
Such an action naturally defines an $R$-module homomorphism $f_{\Phi}: R^{(e)} \otimes_R M \to M$, where $f_{\Phi} (r \otimes m) = r \Phi(m)$, for all $r \in R, m \in M$. Here, $R^{(e)}$ has the usual structure (i.e., without twisting) as an $R$-module given by $R^{(e)}=R$ on the left, while on the right we have the twisted module structure via the Frobenius action. Let $\mathscr{F}^e(M)$ be the collection of all $e$th Frobenius operators on $M$. The $R$-module structure on $\mathscr{F}^e{(M)}$ is given by viewing $M^{(e)}$ as an $R$-module without twisting, that is, $(r\Phi)(x) = r\Phi(x)$ for every $r \in R,\, \Phi \in \mathscr{F}^e{(M)}$ and $x \in M$. \begin{Def}\label{def:sfe} We define {\it the algebra of Frobenius operators} on $M$ by $$\mathscr{F}{(M)} = \oplus_{e\geq 0} \mathscr{F}^e{(M)},$$ with the multiplication on $\mathscr{F}{(M)}$ determined by composition of functions; that is, if $\Phi \in \mathscr{F}^e{(M)}, \Psi \in \mathscr{F}^{e'}{(M)}$ then $\Phi \Psi:=\Phi\circ \Psi \in \mathscr{F}^{e+e'}(M)$. Hence, in general, $\Phi \Psi \neq \Psi \Phi$. \end{Def} Note that $\mathscr{F}^0{(M)} = {\rm End}_R(M)$, which is a subring of $\mathscr{F}(M)$. Naturally, each $\mathscr{F}^e{(M)}$ is a module over $\mathscr{F}^0{(M)}$. Since $R$ maps canonically to $\mathscr{F}^0{(M)}$, this makes $\mathscr{F}^e{(M)}$ an $R$-module by restriction of scalars. Note that $(\Phi \circ r)(m) = \Phi(rm) = (r^q\Phi)(m)$, for all $r \in R, m \in M$. Therefore, $\Phi r = r^q \Phi$, for all $r\in R, \, \Phi \in \mathscr{F}^e{(M)}$, $q=p^e$. \subsection{The Frobenius Complexity} The main concept studied in this paper is the Frobenius complexity of a local ring $R$, which was introduced in~\cite{EY}. 
In fact, the results in this subsection, if not referenced otherwise, are taken from \cite{EY}. We first need to review the definition of the complexity of a graded ring. \begin{Def}Let $A= \oplus_{e\geq 0} A_e$ be a $\mathbb{N}$-graded ring, not necessarily commutative. \begin{enumerate} \item Let $G_e(A)= G_e$ be the subring of $A$ generated by the elements of degree less than or equal to $e$. We agree that $G_{-1} = A_0$. \item We use $k_e=k_e(A)$ to denote the minimal number of homogeneous generators of $G_e$ as a subring of $A$ over $A_0$. (So $k_0 = 0$.) We say that $A$ is {\it degree-wise finitely generated} if $k_e < \infty$ for all $e$. We agree that $k_{-1} = 0$. \item For a degree-wise finitely generated ring $A$, we say that a set $X$ of homogeneous elements of $A$ minimally generates $A$ if for all $e$, $X_{\leq e} =\{ a \in X: \deg(a) \leq e \}$ is a minimal set of generators for $G_e$ with $k_e = |X_{\leq e} |$ for every $e \ge 0$. Also, let $X_e= \{ a \in X: \deg(a)=e \}$. \end{enumerate} \end{Def} \begin{Prop} \label{prop:minimal-gen} With the notations introduced above, let $X$ be a set of homogeneous elements of $A$. Then \begin{enumerate} \item The set $X$ generates $A$ as a ring over $A_0$ if and only if $X_{\le e}$ generates $G_e$ as a ring over $A_0$ for all $e \ge 0$ if and only if the image of $X_{e}$ generates $\frac{A_{e}}{(G_{e-1})_{e}}$ as an $A_0$-bimodule for all $e \ge 0$. \item Assume that $A$ is a degree-wise finitely generated $\mathbb{N}$-graded ring and $X$ generates $A$ as a ring over $A_0$. The set $X$ minimally generates $A$ as a ring over $A_0$ if and only if $|X_{e}|$ is the minimal number of generators (out of all homogeneous generating sets) of $\frac{A_{e}}{(G_{e-1})_{e}}$ as an $A_0$-bimodule for all $e \ge 0$. 
\end{enumerate} \end{Prop} \begin{Cor} \label{cor:minimal-gen} Let $A$ be a degree-wise finitely generated $\mathbb{N}$-graded ring and $X$ a set of homogeneous elements of $A$. Then \begin{enumerate} \item The minimal number of generators of $\frac{A_{e}}{(G_{e-1})_{e}}$ as an $A_0$-bimodule is $k_e - k_{e-1}$ for all $e \ge 0$. \item If $X$ generates $A$ as a ring over $A_0$ then $|X_e| \ge k_e - k_{e-1}$ for all $e \ge 0$. \end{enumerate} \end{Cor} \begin{Def} \label{degreewise} Let $A$ be a degree-wise finitely generated ring. The sequence $\{k_e\}_e$ is called the {\it growth} sequence for $A$. The {\it complexity} sequence is given by $\{ c_{e}(A)= k_{e}-k_{e-1} \}_{e\geq 0}$. The {\it complexity} of $A$ is $$\inf \{ n \in \mathbb{R}_{> 0}: c_e(A)=k_{e} - k_{e-1} = O(n^e) \}$$ and it is denoted by $\operatorname{cx}(A)$. If there is no $n >0$ such that $c_e(A)= O(n^e)$, then we say that $\operatorname{cx}(A)=\infty$. \end{Def} \begin{Def}\label{def:nearly-onto} Let $A$ and $B$ be $\mathbb N$-graded rings and $h \colon A \to B$ be a graded ring homomorphism. We say that $h$ is \emph{nearly onto} if $B = B_0[h(A)]$ (that is, $B$ as a ring is generated by $h(A)$ over $B_0$). \end{Def} \begin{Thm} \label{thm:nearly-onto} Let $A$ and $B$ be $\mathbb N$-graded rings that are degree-wise finitely generated. If there exists a graded ring homomorphism $h \colon A \to B$ that is nearly onto, then $c_e(A) \ge c_e(B)$ for all $e \ge 0$. \end{Thm} \begin{Def} Let $A$ be a $\mathbb{N}$-graded ring such that there exists a ring homomorphism $R \to A_0$, where $R$ is a commutative ring. We say that $A$ is a (left) $R$-{\it skew algebra} if $aR \subseteq Ra$ for all homogeneous elements $a \in A$. A right $R$-skew algebra can be defined analogously. 
In this paper, our $R$-skew algebras will be left $R$-skew algebras and therefore we will drop the adjective `left' when referring to them. \end{Def} \begin{Cor} \label{interpretation} Let $A$ be a degree-wise finitely generated $R$-skew algebra such that $R=A_0$. Then $c_{e}(A)$ equals the minimal number of generators of $\frac{A_{e}}{(G_{e-1})_{e}}$ as a left $R$-module for all $e$. \end{Cor} We are now in a position to state the definition of the Frobenius complexity of a local ring of prime characteristic. \begin{Def} \label{def-FCX} Let $(R,\mathfrak{m},k)$ be a local ring of prime characteristic $p$. We define the {\it Frobenius complexity} of the ring $R$ by $$\operatorname{cx}_F(R) = \log _p (\operatorname{cx}(\mathscr{F}{(E)})),$$ where $E = E_R(k)$ is the injective hull of the residue field of $R$. Also, denote $k_e(R) : = k_e (\mathscr{F}{(E)})$, for all $e$, and call these numbers the {\it Frobenius growth sequence} of $R$. Then $c_e= c_e(R): = k_{e}(R)-k_{e-1}(R)$ defines the {\it Frobenius complexity sequence} of $R$. If the Frobenius growth sequence of the ring $R$ is eventually constant (i.e., $\operatorname{cx}(\mathscr{F}{(E)}) = 0$), then the Frobenius complexity of $R$ is set to be $-\infty$. If $\operatorname{cx}(\mathscr{F}(E)) = \infty$, the Frobenius complexity of $R$ is set to be $\infty$. \end{Def} Katzman, Schwede, Singh and Zhang have introduced an important $\mathbb{N}$-graded ring in their paper~\cite{KSSZ}, which is an example of an $R$-skew algebra. We will study the complexity of this skew algebra in this section, and apply these results to the complexity of the ring $R$ in subsequent sections. \begin{Def}[\cite{KSSZ}] Let $\mathscr{R}$ be an $\mathbb{N}$-graded commutative ring of prime characteristic $p$ with $\mathscr{R}_0 =R$. Define $T(\mathscr{R}):= \oplus_{e\geq 0} \mathscr{R} _{p^e-1}$, which is an $\mathbb{N}$-graded ring by $$ a *b = ab^{p^e}$$ for all $a \in\mathscr{R} _{p^e-1} ,\, b\in \mathscr{R} _{p^{e'}-1}$. 
The degree $e$ piece of $T(\mathscr{R})$ is $T_e(\mathscr{R})=\mathscr{R} _{p^e-1}$. \end{Def} A number of results have been proved about the Frobenius complexity of a local ring and they are summarized below. \begin{Thm}[\cite{EY}, Corollary 2.12, Theorems 4.7, 4.9] Let $(R,\mathfrak{m},k)$ be a local ring. \begin{enumerate} \item If $R$ is $0$-dimensional then $\operatorname{cx}_F(R) = -\infty$. \item If $R$ is normal, complete and has dimension at most two, then $\operatorname{cx}_F(R) \leq 0$. \item If $R$ is normal, complete and has a finitely generated anticanonical cover, then $\operatorname{cx}_F(R) <\infty$. \end{enumerate} \end{Thm} In addition, the following holds. \begin{Thm}[\cite{KSSZ} Proposition 4.1 and \cite{EY} Theorem 4.5] If $(R,\mathfrak{m},k)$ is normal and $\mathbb{Q}$-Gorenstein, then the order of its canonical module in the divisor class group is relatively prime to $p$ if and only if $\operatorname{cx}_F(R) =-\infty$. \end{Thm} As in \cite{EY}, we will also use the following notations and terminologies in the sequel: For an integer $a \in \mathbb N$, if $a= c_{n}p^{n} + \cdots + c_1p + c_0$ with $0 \leq c_i \leq p-1$ for all $0 \le i \le n$, then we use $a = \overline{c_{n} \cdots c_0}$ to denote the base $p$ expression of $a$. Also, we write $a \vert_e$ to denote the remainder of $a$ when dividing by $p^e$. Thus, if $a = \overline{c_{n} \cdots c_0}$ then $a \vert_e = \overline{c_{e-1} \cdots c_0}$, which we refer to as the $e$th truncation of $a$. Put differently, $a \vert_e = a-\floor{\frac a{p^e}}p^e$, in which $\floor{\frac a{p^e}}$ is the floor function of $\frac a{p^e}$. When adding up integers $a_i \in \mathbb N$ with $1 \le i \le m$, all written in base $p$ expressions, we can talk about the carry-over to the digit corresponding to $p^e$, which is simply $\floor{\frac {a_1 \vert_e + \dotsb + a_m\vert_e}{p^e}}$. These notations depend on the choice of $p$, which should be clear from the context. 
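For concreteness, the truncation $a\vert_e$ and the carry-over can be computed as follows (an illustrative sketch of ours, not part of the paper):

```python
def truncate(a, p, e):
    # a|_e : the remainder of a upon division by p^e (the e-th truncation)
    return a % p ** e

def carry_over(nums, p, e):
    # carry-over to the digit corresponding to p^e when summing nums in base p
    return sum(truncate(a, p, e) for a in nums) // p ** e

p = 3
a = 2 * p**3 + 0 * p**2 + 1 * p + 2   # a = 59, base-3 digits (2, 0, 1, 2)
print(truncate(a, p, 2))              # 1*3 + 2 = 5
print(carry_over([5, 7], p, 1))       # (2 + 1) // 3 = 1
```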
For any positive integers $p$ and $m$ (with $p$ prime), denote by $M_{p,m}(i)$ (or simply $M(i)$ if $p$ and $m$ are understood) the rank of $(R[x_1,\,\dotsc,\, x_m]/(x_1^p,\,\dotsc,\, x_m^p))_i$ over $R$, for all $i \in \mathbb Z$. This is clearly independent of $R$. Observe that $M_{p,m}(i) =0$ exactly when $i > m(p-1)$ or $i < 0$. In fact, all $M_{p,m}(i)$ can be read off from the following Poincar\'e series (actually a polynomial): \[ \sum_{i = -\infty}^{\infty} M_{p,m}(i)t^i = \left(\frac{1-t^p}{1-t}\right)^m =\left(1+\dotsb + t^{p-1}\right)^m. \] \subsection{Determinantal rings} \label{sec:det} In this paper we consider the determinantal ring $K[X]/I$ where $X$ is an $m \times n$ matrix of indeterminates, $I$ is the ideal of all the $2 \times 2$ minors of $X$, and $K$ is a field. This ring is isomorphic to the Segre product of $K[x_1,\dotsc, x_{m}]$ and $K[y_1,\dotsc, y_{n}]$. Recall that, for $\mathbb N$-graded commutative rings $A = \oplus_{i \in \mathbb N}A_i$ and $B = \oplus_{i \in \mathbb N}B_i$ such that $A_0 = R = B_0$, their Segre product is \[ A \,\sharp\, B = \oplus_{i \in \mathbb N}(A_i \otimes_R B_i), \] which is a ring under the natural operations. \begin{Def} \label{def:segre-complete} Let $S_{m,n}$ denote the completion of $K[x_1,\dotsc, x_{m}] \,\sharp\, K[y_1, \dotsc, y_{n}]$ with respect to the ideal generated by all homogeneous elements of positive degree, in which $K$ is a field and $m > n \ge 2$. It is easy to see that \begin{align*} S_{m,n} &\cong \prod_{\alpha \in \mathbb N^{m},\, \beta \in \mathbb N^{n},\, |\alpha| = |\beta|} Kx^{\alpha}y^{\beta} \\ &=\left\{ \sum_{|\alpha| = |\beta|}a_{\alpha,\, \beta}x^{\alpha}y^{\beta} \; \Big | \; a_{\alpha,\, \beta} \in K, \, \alpha \in \mathbb N^{m},\, \beta \in \mathbb N^{n}\right\} \subset K[[x_1, \dotsc,x_{m}, y_1, \dotsc, y_{n}]]. \end{align*} Let $\mathscr R_{m,n}$ be the anticanonical cover of $S_{m,n}$. 
\end{Def} The anticanonical cover of such a ring was described by Kei-ichi Watanabe. \begin{Thm} [{\cite[page 430]{Wa}}] \label{thm:Watanabe} Let $K$ be a field and $m > n \ge 2$. The anticanonical cover of the Segre product of $K[x_1, \dotsc, x_{m}]$ and $K[y_1, \dotsc, y_{n}]$ is isomorphic to \[ \bigoplus_{i \in \mathbb N} \left(\bigoplus_{\alpha \in \mathbb N^{m},\, \beta \in \mathbb N^{n},\, |\alpha| - |\beta|=i(m-n)} Kx^{\alpha}y^{\beta}\right), \] in which the grading is governed by $i$. Here, for $\alpha = (a_1, \dotsc, a_m)$ and $\beta = (b_1, \dotsc, b_{n})$ we denote $x^{\alpha} = x_1^{a_1} \dotsm x_m^{a_m}$ and $y^{\beta} = y_1^{b_1} \dotsm y_{n}^{b_{n}}$. \end{Thm} It follows from Theorem~\ref{thm:Watanabe} that \[ \mathscr R_{m,n} \cong \bigoplus_{i \in \mathbb N} \left(\prod_{\alpha \in \mathbb N^{m},\, \beta \in \mathbb N^{n},\, |\alpha| - |\beta|=i(m-n)} Kx^{\alpha}y^{\beta}\right), \] in which the grading is governed by $i$. \begin{Lma}[\cite{EY}] \label{lma:nearly-onto} Let $A$ and $B$ be degree-wise finitely generated $\mathbb N$-graded commutative rings and $h \colon A \to B$ be a graded ring homomorphism. \begin{enumerate} \item The homomorphism $h$ is nearly onto if and only if $B_i$ is generated by $h(A_i)$ as a $B_0$-module for all $i \in \mathbb N$ (that is, $B$ is generated by $h(A)$ as a $B_0$-module). \item If $A$ and $B$ have prime characteristic $p$ and $h$ is nearly onto, then the induced graded homomorphism $T(h) \colon T(A) \to T(B)$ is nearly onto. \end{enumerate} \end{Lma} \begin{Cor}\label{cor:nearly-onto} Let $A$ and $B$ be $\mathbb N$-graded commutative rings of prime characteristic $p$. If there exists a graded ring homomorphism $h \colon A \to B$ that is nearly onto, then $c_e(T(A)) \ge c_e(T(B))$ for all $e \ge 0$. 
\end{Cor} \begin{Prop}[{Compare with \cite[Proposition~5.5]{EY}}] \label{prop:nearly-onto} Let $K$, $S_{m,n}$ and $\mathscr{R}_{m,n}$ be as in Definition~\ref{def:segre-complete} with $m > n \ge 2$. Then there are nearly onto graded ring homomorphisms from $\mathscr{R}_{m,n}$ to $V_{m-n}(K[x_1, \dotsc, x_{m}])$ and vice versa, in which $V_{m-n}(K[x_1, \dotsc, x_{m}])$ denotes the $(m-n)$-Veronese subring of $K[x_1, \dotsc, x_{m}]$. \end{Prop} \begin{proof} In light of Definition~\ref{def:segre-complete} and Theorem~\ref{thm:Watanabe}, we simply assume \begin{align*} \mathscr R_{m,n} = \bigoplus_{i \in \mathbb N} \left(\prod_{\alpha \in \mathbb N^{m},\, \beta \in \mathbb N^{n},\, |\alpha| - |\beta|=i(m-n)} Kx^{\alpha}y^{\beta}\right). \end{align*} Define $\Phi \colon \mathscr{R}_{m,n} \to V_{m-n}(K[x_1, \dotsc, x_{m}])$ and $\Psi \colon V_{m-n}(K[x_1, \dotsc, x_{m}]) \to \mathscr{R}_{m,n}$ by \begin{align*} \Phi(f(x_1,\dotsc, x_m,\, y_1, \dotsc, y_{n})) &= f(x_1,\dotsc, x_m,\, 0, \dotsc, 0) \in K[x_1, \dotsc, x_{m}]\\ \text{and} \qquad \Psi(g(x_1,\dotsc, x_m)) &= g(x_1,\dotsc, x_m) \in \mathscr{R}_{m,n}, \end{align*} for all $f(x_1,\dotsc, x_m,\, y_1, \dotsc, y_{n}) \in \mathscr{R}_{m,n}$ and all $g(x_1,\dotsc, x_m) \in V_{m-n}(K[x_1, \dotsc, x_{m}])$. It is routine to verify that both $\Phi$ and $\Psi$ are graded ring homomorphisms. As $\Phi \circ \Psi$ is the identity map, we see that $\Phi$ is onto and hence nearly onto. Finally, note that for every $i \in \mathbb N$, $(\mathscr{R}_{m,n})_i$ is generated by $\Psi(V_{m-n}(K[ x_1, \dotsc, x_{m}])_i) = \Psi(K[ x_1, \dotsc, x_{m}]_{i(m-n)})$ as a module over $(\mathscr{R}_{m,n})_0 = S_{m,n}$. So $\Psi$ is nearly onto, completing the proof. 
\end{proof} \begin{Thm}\label{thm:segre} Let $K$, $S_{m,n}$ and $\mathscr{R}_{m,n}$ be as in Definition~\ref{def:segre-complete} with $m > n \ge 2$. \begin{enumerate} \item Then $\mathscr{R}_{m,n}$ and $V_{m-n}(K[x_1,\dotsc, x_{m}])$ have the same complexity sequence. \item If $K$ has prime characteristic $p$, then $T(\mathscr{R}_{m,n})$ and $T(V_{m-n}(K[x_1,\dotsc, x_{m}]))$ have the same complexity sequence. \item If $K$ has prime characteristic $p$, then \[ \operatorname{cx}(\mathscr{F}(E_{m,n})) = \operatorname{cx}(T(\mathscr{R}_{m,n})) = \operatorname{cx}(T(V_{m-n}(K[x_1,\dotsc, x_{m}]))), \] in which $E_{m,n}$ stands for the injective hull of the residue field of $S_{m,n}$. Consequently, \[ \operatorname{cx}_F(S_{m,n}) = \log_p\operatorname{cx}(T(V_{m-n}(K[x_1,\dotsc, x_{m}]))). \] \end{enumerate} \end{Thm} \begin{proof} This follows from Corollary~\ref{cor:nearly-onto}, Proposition~\ref{prop:nearly-onto} and \cite[Theorem~3.3]{KSSZ}. \end{proof} In summary, to compute the Frobenius complexity of $S_{m,n}$ with $m > n \ge 2$, it suffices to study $T(V_{r}(K[x_1,\dotsc, x_{m}]))$ with $r = m-n$ (hence $0< r \le m-2$). The next section is devoted to the study of $T(V_{r}(K[x_1,\dotsc, x_{m}]))$, more generally with $1 \le r,\,m \in \mathbb N$. \section{Investigating $T(V_r(R[x_1,\,\dotsc,\,x_m]))$} Let $R$ be a commutative ring of prime characteristic $p$ and $r,\,m$ positive integers. In this section, we study $T(V_r(R[x_1,\,\dotsc,\,x_m]))$. In particular, we are interested in when it is finitely generated over $R$, as well as how to compute its complexity. To simplify notation, denote the following (with $R$, $p$, $m$ and $r$ understood): \begin{itemize} \item $\mathscr{R} := R[x_1,\,\dotsc,\,x_m]$. \item $\mathscr{V} := V_r(\mathscr{R}) = V_r(R[x_1,\,\dotsc,\,x_m])$. \item $T: = T(\mathscr{V}) =T(V_r(R[x_1,\,\dotsc,\,x_m]))$. \item $G_e:= G_{e}(T)$. 
\item $T_e:=T_e(\mathscr{V}) = T_e(V_r(R[x_1,\,\dotsc,\,x_m])) = \mathscr{R}_{r(p^e-1)} = (R[x_1,\,\dotsc,\,x_m])_{r(p^e-1)}$. As there are several gradings going on, when we say the degree of a monomial, we agree that it refers to its (total) degree in $\mathscr{R} = R[x_1,\,\dotsc,\,x_m]$. Thus a monomial in $T_e$ is a monomial of (total) degree $r(p^e-1)$. Note that $T_e=\mathscr{R}_{r(p^e-1)}$ is an $R$-free (left) module with a basis consisting of monomials of (total) degree $r(p^e-1)$. In particular, $T_0 = R$. \end{itemize} Fix any $e \in \mathbb N$. We see that $G_{e-1} = G_{e-1}(T)$ is an $R$-free (left) module with a basis consisting of monomials that can be expressed as products (under $*$, the multiplication of $T$) of monomials of degree $r(p^i-1)$ where $i \leq e-1$. So all such monomials of total degree $r(p^e-1)$ form an $R$-basis of $(G_{e-1})_e$. In conclusion, $\frac{T_e}{(G_{e-1})_e}$ is free as a left $R$-module with a basis given by monomials of degree $r(p^e-1)$ that cannot be written as products (under $*$) of monomials of degree $r(p^i-1)$, with $i \leq e-1$. We will refer to this basis as the \emph{monomial basis} of $\frac{T_e}{(G_{e-1})_e}$. By Corollary~\ref{interpretation}, we see $c_e(T) = \operatorname{rank}_R(\frac{T_e}{(G_{e-1})_e})$. As $c_0(T) = 0$ and $c_1(T) = \operatorname{rank}_R(T_1) = \operatorname{rank}_R(\mathscr{R}_{r(p-1)})$, we may assume $e \ge 2$ in the following discussion. Let $\alpha = (a_1, \dotsc, a_m) \in \mathbb N^m$ such that $|\alpha| := a_1 + \dotsb + a_m = r(p^e-1)$, so that $x^{\alpha} := x_1^{a_1} \cdots x_m^{a_m}$ is a monomial in $T_e$ (i.e., of degree $r(p^e-1)$). This monomial $x^{\alpha}$ belongs to $(G_{e-1})_e$ if and only if it can be decomposed as \[ x^{\alpha} = x^{\alpha'} * x^{\alpha''}= x^{\alpha' + p^{e'}\alpha''} \] for some $x^{\alpha'} \in T_{e'},\, x^{\alpha''}\in T_{e''}$ with $1 \leq e',\,e'' \le e-1$ and $e' +e'' =e$. 
In other words, $x^{\alpha} \in (G_{e-1})_e$ if and only if there is an equation \[ {\alpha} = {\alpha' + p^{e'}\alpha''} \] for some $\alpha',\,\alpha'' \in \mathbb N^m,\, 1 \leq e' \le e-1,\, e'+e''=e$ with $|\alpha'| =r(p^{e'}-1)$ and $|\alpha''| = r(p^{e''}-1)$, which is equivalent to the existence of equations \[ a_i = a_i' + p^{e'}a_i'' \quad \text{for all} \quad i \in \{1,\dotsc, m\} \] for some $(a_1',\dotsc,a_m'),\,(a_1'',\dotsc,a_m'') \in \mathbb N^m,\, 1 \leq e' \le e-1,\, e'+e''=e$ with $\sum_{i=1}^m a'_i =r(p^{e'}-1)$ and $\sum_{i=1}^m a''_i = r(p^{e''}-1)$. Now it is routine to see that the above holds if and only if there exist $(a_1',\dotsc,a_m') \in \mathbb N^m$ and $1 \leq e' \le e-1$ with $\sum_{i=1}^m a'_i =r(p^{e'}-1)$ such that \[ a_i\vert_{e'} \le a_i' \le a_i \ \text{ and }\ a_i\vert_{e'} = a_i'\vert_{e'} \quad \text{for all} \quad i \in \{1,\dotsc, m\}, \] which can be seen to be equivalent to the existence of an integer $1 \leq e' \leq e-1$ such that \[ a_1 \vert_{e'} + \cdots + a_m \vert_{e'} \le r(p^{e'}-1), \] which is equivalent to the existence of an integer $1 \leq e' \leq e-1$ such that \[ \floor{\frac {a_1 \vert_{e'} + \cdots + a_m \vert_{e'}}{p^{e'}}} \le \floor{\frac {r(p^{e'}-1)}{p^{e'}}}. \] Note that the backward implications of the last two equivalences rely on the fact that $a_1\vert_{e'} + \cdots + a_m \vert_{e'}$ and $r(p^{e'}-1)$ are in the same congruence class modulo $p^{e'}$; the backward implication of the next-to-last equivalence also relies on the fact that $a_i\vert_{e'} \equiv a_i \mod p^{e'}$ for all $i$, which allows us to reverse-engineer $(a_1',\dotsc,a_m') \in \mathbb N^m$ as desired. With the argument above, we establish the following result. (Again, the fact $a_1\vert_{i} + \cdots + a_m \vert_{i} \equiv r(p^{i}-1) \mod p^{i}$ is needed in part~(2) of the following proposition.) 
\begin{Prop} \label{prop:good/bad} Consider $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$, in prime characteristic $p$. \begin{enumerate} \item For any monomial $x_1^{a_1} \cdots x_m^{a_m} \in T_e$ with $e \ge 1$, the following are equivalent. \begin{itemize} \item $x_1^{a_1} \cdots x_m^{a_m} \in G_{e-1}(T)$. \item There exists an integer $i$, $1 \le i \le e-1$, such that the carry-over to the digit associated with $p^{i}$ is less than or equal to $\floor{\frac {r(p^{i}-1)}{p^{i}}}$ when $a_1 + \dotsb + a_m$ is calculated in base $p$. \end{itemize} \item For any monomial $x_1^{a_1} \cdots x_m^{a_m} \in T_e$ with $e \ge 1$, the following are equivalent. \begin{itemize} \item $x_1^{a_1} \cdots x_m^{a_m} \notin G_{e-1}(T)$. \item $a_1 \vert_{i} + \cdots + a_m \vert_{i} = r(p^{i}-1) + d_{i}p^{i}$ with $1 \le d_{i} \in \mathbb N$ for all $1 \le i \le e-1$. \item The carry-over to the digit associated with $p^{i}$ is greater than $\floor{\frac {r(p^{i}-1)}{p^{i}}}$ for all $1 \le i \le e-1$ when $a_1 + \dotsb + a_m$ is calculated in base $p$. \end{itemize} \end{enumerate} \end{Prop} \begin{Prop} \label{prop:count} For $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$, $c_e(T)$ is the number of monomials $x_1^{a_1} \cdots x_m^{a_m} \in T_e$ such that the carry-over to the digit associated with $p^{i}$ is greater than $\floor{\frac {r(p^{i}-1)}{p^{i}}}$ for all $1 \le i\le e-1$ when $a_1 + \dotsb + a_m$ is calculated in base $p$. \end{Prop} Using the criteria given in Proposition~\ref{prop:good/bad}, we are able to determine precisely when $T(V_r(R[x_1,\,\dotsc,\,x_m]))$ is finitely generated over $T_0 = R$. \begin{Thm}\label{thm:mr} Let $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$, with $r,\,m,\,R$ as above. \begin{enumerate} \item If $r \ge m-1$, then $T$ is generated by $T_1$ over $T_0$ (that is, $c_e(T) = 0$ for all $e \ge 2$). \item If $r < m-1$, then $c_e(T) > 0$ (i.e., $T_e$ is not generated by elements of lower degree) for all $e \ge 1$. 
\item The ring $T(V_r(R[x_1,\,\dotsc,\,x_m]))$ is finitely generated over $R$ if and only if $r \ge m-1$. \end{enumerate} \end{Thm} \begin{proof} Evidently, we only need to prove (1) and (2). (1) Suppose, on the contrary, that for some $e \ge 2$ there exists a monomial $x_1^{a_1} \cdots x_m^{a_m} \in T_e$ that does not belong to $G_{e-1}(T)$. Then by Proposition~\ref{prop:good/bad} \[ a_1 \vert_{i} + \cdots + a_m\vert_{i} \ge r(p^{i}-1) + p^{i} \] for all $1 \le i \le e-1$. However, the assumption $r \ge m-1$ implies \[ a_1 \vert_{i} + \cdots + a_m \vert_{i} \le m(p^{i}-1) \le (r+1)(p^{i}-1) < r(p^{i}-1) + p^{i}. \] We get a contradiction. (2) As $c_1(T) > 0$ is clear, we assume $e \ge 2$. Consider \[ x_1^{p^e-1}\dotsm x_{r-1}^{p^e-1} x_r^{p^e-p^{e-1}-1}x_{r+1}^{p^{e-1}-1}x_{r+2}^{1} \in \mathscr{R}_{r(p^e-1)} = T_e. \] Now it is routine to see that the carry-over to the digit associated with $p^{i}$ is $\floor{\frac {r(p^{i}-1)}{p^{i}}} +1$ for all $1 \le i \le e-1$ when $a_1 = p^e-1,\, \dotsc,\,a_{r-1} = p^e-1$, $a_r = p^e-p^{e-1}-1$, $a_{r+1}={p^{e-1}-1}$, $a_{r+2}={1}$ and $a_i = 0$ (for $r+2 < i \le m$) are added up in base $p$. This verifies $x_1^{p^e-1}\dotsm x_{r-1}^{p^e-1}x_r^{p^e-p^{e-1}-1}x_{r+1}^{p^{e-1}-1}x_{r+2} \notin G_{e-1}(T)$ and hence $c_e(T) > 0$. \end{proof} \section{Computing $c_e(T(V_r(R[x_1,\,\dotsc,\,x_m])))$} \label{sec:computing-ce} Let $R$, $m$, $r$, $\mathscr{R}$, $\mathscr{V}$ and $T$ be as in the last section and keep the notations. In particular, $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$ is an $\mathbb N$-graded ring. For simplicity, denote $c_e(T)$ by $c_{m,r,e}$ or simply by $c_e$ since $r$ and $m$ are understood. (It should be clear that $c_e(T(V_r(R[x_1,\,\dotsc,\,x_m])))$ is independent of $R$. Also note that $c_1 = \operatorname{rank}_R (\mathscr{R}_{r(p-1)}) = \binom{r(p-1)+m-1}{m-1}$.) Fix an integer $e \ge 2$. 
The goal is to count the monomials that form a monomial basis of $T_e/(G_{e-1})_e$. First, we set up some notation. Let $\alpha = (a_1, \dotsc, a_m) \in \mathbb N^m$ with $|\alpha| := a_1 + \dotsb + a_m = r(p^e -1)$. For each $n \in [1,\,m] := \{1,\dotsc,m\}$, write $a_n = \overline{\cdots a_{n,i} \cdots a_{n,0}}$ in base $p$. Then, for each $i \in [0,\,e-2] := \{0, \dotsc, e-2\}$, denote \[ \alpha_i := (a_{1,i}, \dotsc, a_{m, i})\in \mathbb N^m, \] which can be referred to as the vector of the digits corresponding to $p^i$. Also denote \[ \alpha_{e-1} := \left(\floor{\frac{a_{1}}{p^{e-1}}}, \dotsc,\floor{\frac{a_{m}}{p^{e-1}}}\right) = \left(\frac{a_1 - a_1 \vert_{e-1}}{p^{e-1}}, \dotsc, \frac{a_m - a_m \vert_{e-1}}{p^{e-1}}\right) \in \mathbb N^m. \] Moreover, for each $i \in \{0,\dotsc,e-1\}$, let $f_i(\alpha)$ denote the carry-over to the digit corresponding to $p^{i}$ when computing $\sum_{n=1}^m a_n$ in base $p$. In other words, \[ f_i(\alpha) := \floor{\frac{a_1 \vert_{i} + \dotsb + a_m \vert_{i}}{p^i}}. \] Note that $f_{0}(\alpha) = 0$. Then denote $f(\alpha) :=(f_{e-1}(\alpha), \dotsc, f_{0}(\alpha)) \in \mathbb N^e$. Finally, denote \[ d(\alpha) := (d_{e-1}(\alpha), \dotsc, d_0(\alpha)) : = f(\alpha) - \left(\floor{\frac{r(p^{e-1}-1)}{p^{e-1}}}, \dotsc, \floor{\frac{r(p^{0}-1)}{p^{0}}}\right)\in \mathbb Z^e, \] so that $d_i(\alpha) = f_i(\alpha)-\floor{\frac{r(p^{i}-1)}{p^{i}}}$ for all $i \in [0,\,e-1] := \{0,\dotsc,e-1\}$. Note that $d_0(\alpha) = 0$.
Moreover, for all $i \in [0,\,e-2]$, we have \begin{align*} d_{i+1}(\alpha) & = \floor{\frac{a_1 \vert_{i+1} + \dotsb + a_m \vert_{i+1}}{p^{i+1}}} - \floor{\frac{r(p^{i+1}-1)}{p^{i+1}}} \\ & \overset{\dagger}{=} \floor{\frac{|\alpha_i| + f_i(\alpha)}{p}} - \floor{\frac{r(p-1) + \floor{\frac{r(p^{i}-1)}{p^{i}}}}{p}} \\ & \overset{\ddagger}{=} \frac{1}{p} \left[(|\alpha_i| + f_i(\alpha)) - \left(r(p-1) + \floor{\frac{r(p^{i}-1)}{p^{i}}}\right)\right] \\ & = \frac{1}{p} \left[|\alpha_i| + \left(f_i(\alpha)- \floor{\frac{r(p^{i}-1)}{p^{i}}}\right) - r(p-1)\right]\\ & = \frac{1}{p} \big[|\alpha_i| + d_i(\alpha) - r(p-1)\big]. \end{align*} Note that $\overset{\dagger}{=}$ follows from how the carry-over to the digit corresponding to $p^{i+1}$ is computed, while $\overset{\ddagger}{=}$ follows from the fact that $|\alpha_i| + f_i(\alpha) \equiv r(p-1) + \floor{\frac{r(p^{i}-1)}{p^{i}}} \mod p$, since they are all congruent to the (same) number representing the digit associated with $p^i$ in the base $p$ expressions of $r(p^e-1)$ and $r(p^i-1)$. Let $\alpha = (a_1, \dotsc, a_m) \in \mathbb N^m$ with $|\alpha| = r(p^e -1)$ as above and let $\delta = (d_{e-1}, \dotsc,d_0)\in \mathbb Z^e$ with $d_{0} = 0$. By what we have established above, we see \begin{align*} d(\alpha) = \delta & \iff d_i(\alpha) = d_i,\,\forall i \in [1,\,e-1] \\ & \iff d_{i+1}(\alpha) = d_{i+1},\,\forall i \in [0,\,e-2] \\ & \iff \frac{1}{p} \big[|\alpha_i| + d_i(\alpha) - r(p-1)\big] = d_{i+1},\,\forall i \in [0,\,e-2] \\ & \iff |\alpha_i| + d_i(\alpha) - r(p-1) = d_{i+1}p,\,\forall i \in [0,\,e-2] \\ & \iff |\alpha_i| + d_i(\alpha) = r(p-1) + d_{i+1}p,\,\forall i \in [0,\,e-2] \\ & \overset{*}{\iff} |\alpha_i| + d_i = r(p-1) + d_{i+1}p,\,\forall i \in [0,\,e-2] \\ & \iff |\alpha_i| = r(p-1) + d_{i+1}p - d_i,\,\forall i \in [0,\,e-2].
\end{align*} Note that $\overset{*}{\implies}$ holds because the assumption (i.e., the antecedent) of this implication already implies $d(\alpha) =\delta$, while $\overset{*}{\impliedby}$ follows from an easy induction on $i$ (in light of the established equation $d_{i+1}(\alpha)=\frac{1}{p} \big[|\alpha_i| + d_i(\alpha) - r(p-1)\big]$). Furthermore, the assumption $|\alpha| = r(p^e -1)$ (together with $d(\alpha) = \delta$) translates to the following: \[ |\alpha_{e-1}| + f_{e-1}(\alpha) = \floor{\frac{a_1 + \dotsb + a_m}{p^{e-1}}} = \floor{\frac{r(p^e-1)}{p^{e-1}}} = r(p-1) + \floor{\frac{r(p^{e-1}-1)}{p^{e-1}}}, \] which is obtained by examining the summations $a_1 + \dotsb + a_m$ and $\overbrace{(p^e-1) + \dotsb + (p^e-1)}^{r \text{ terms}}$ in base $p$. Therefore \[ |\alpha_{e-1}| = r(p-1) + \floor{\frac{r(p^{e-1}-1)}{p^{e-1}}} - f_{e-1}(\alpha) = r(p-1)-d_{e-1}(\alpha) = r(p-1)-d_{e-1}. \] In summary, with $\alpha \in \mathbb N^m$ and $\delta \in \mathbb Z^e$ with $d_0 = 0$ as above, we conclude that $|\alpha| = r(p^e-1)$ and $d(\alpha) = \delta$ if and only if \[ |\alpha_{e-1}| = r(p-1)-d_{e-1} \quad \text{and} \quad |\alpha_i| = r(p-1) + d_{i+1}p - d_{i} \quad \text{for all} \quad i \in \{0, \dotsc, e-2\}. \] Now we are ready to formulate $c_e = c_e(T)$ for $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$. This result generalizes \cite[Proposition~3.7]{EY}. Since $c_e = 0$ for all $e \ge 2$ when $m \le r+1$, the formula in the following proposition is most meaningful when $m-r-1 > 0$.
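The equivalence just established lends itself to a direct machine check. The following Python sketch is our own illustration (not part of the paper): for a small choice of $p,\,m,\,r,\,e$ it verifies exhaustively that every $\alpha$ with $|\alpha| = r(p^e-1)$ satisfies the displayed digit-sum identities for its own vector $d(\alpha)$.

```python
# Exhaustive verification, for p=2, m=3, r=1, e=3, that
#   |alpha_i| = r(p-1) + d_{i+1} p - d_i   (0 <= i <= e-2)   and
#   |alpha_{e-1}| = r(p-1) - d_{e-1}
# hold with d = d(alpha), for every alpha with |alpha| = r(p^e - 1).
from itertools import product

p, m, r, e = 2, 3, 1, 3
deg = r * (p**e - 1)

def d_vector(alpha):
    """(d_{e-1}(alpha), ..., d_0(alpha)), straight from the definition."""
    f = [sum(a % p**i for a in alpha) // p**i for i in range(e)]
    return tuple(f[i] - (r * (p**i - 1)) // p**i for i in reversed(range(e)))

def digit_sums(alpha):
    """[|alpha_0|, ..., |alpha_{e-2}|, |alpha_{e-1}|] (top block last)."""
    sums = [sum((a // p**i) % p for a in alpha) for i in range(e - 1)]
    sums.append(sum(a // p**(e - 1) for a in alpha))
    return sums

for alpha in product(range(deg + 1), repeat=m):
    if sum(alpha) != deg:
        continue
    dd = list(reversed(d_vector(alpha)))     # dd[i] = d_i(alpha)
    s = digit_sums(alpha)
    assert dd[0] == 0                        # d_0(alpha) = 0
    assert all(s[i] == r*(p-1) + dd[i+1]*p - dd[i] for i in range(e - 1))
    assert s[e - 1] == r*(p-1) - dd[e - 1]
print("all checks passed")
```

Of course this only probes one small case; the point is that the identities are concrete enough to test mechanically.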
\begin{Prop}\label{prop:c_e} For $T = T(V_r(R[x_1,\,\dotsc,\,x_m]))$, we have the following formula: \begin{align*} c_{e}&= \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\d_i \ge 1 \text{ for } 1 \le i \le e-1}} \left(P_m\left(r(p-1)-d_{e-1}\right) \prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right) \\ &= \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\1 \le d_i \le m-r-1 \text{ for } 1 \le i \le e-1}} \left(\binom{r(p-1)-d_{e-1}+m-1}{m-1} \prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right) \end{align*} for all $e \ge 2$, where $P_m(i)$ denotes $\operatorname{rank}_R(R[x_1,\,\dotsc,\,x_m]_i)$, i.e., $P_m(i) = \binom{m+i-1}{i} = \binom{m+i-1}{m-1}$. \end{Prop} \begin{proof} Fix any $e \ge 2$ and adopt the notation set up above. Consider $x^{\alpha} = x_1^{a_1} \dotsm x_m^{a_m} \in T_e$. By Proposition~\ref{prop:good/bad}, $x^{\alpha} \notin G_{e-1}(T)$ if and only if \[ d_i(\alpha) \ge 1 \quad \text{for all} \quad i \in \{1,\,\dotsc,\,e-1\}. \] To determine $c_e$, we need to find the number of monomials with the above property, as stated in Proposition~\ref{prop:count}. This is equivalent to counting the number of $\alpha \in \mathbb N^m$ such that $|\alpha| = r(p^e -1)$ and $d_i(\alpha) \ge 1$ for all $i \in [1,\,e-1]$. Fix any $\delta = (d_{e-1}, \dotsc,d_0)\in \mathbb N^e$ with $d_{0} = 0$ and $d_i \ge 1$ for all $i \in [1,\,e-1]$. We intend to find the number of $\alpha \in \mathbb N^m$ such that $|\alpha| = r(p^e -1)$ and $d(\alpha) = \delta$, which can be written as \[ \card{\{\alpha \in \mathbb N^m : |\alpha| = r(p^e -1) \text{ and } d(\alpha) = \delta\}}, \] in which $\card X$ stands for the cardinality of a set $X$.
For each $i \in [0,\,e-2]$, the number of ways to realize $|\alpha_i| = r(p-1) + d_{i+1}p - d_{i}$ is given as follows: \[ \card{\{\alpha_i \in [0,\,p-1]^m : |\alpha_i| = r(p-1) + d_{i+1}p - d_{i}\}} =M_{p,m}(r(p-1) + d_{i+1}p - d_{i}). \] The number of ways to realize $|\alpha_{e-1}| = r(p-1) - d_{e-1}$ is given as follows: \[ \card{\{\alpha_{e-1} \in \mathbb N^m : |\alpha_{e-1}| = r(p-1) - d_{e-1}\}} =P_{m}(r(p-1) - d_{e-1}). \] Therefore, the number of $\alpha \in \mathbb N^m$ such that $|\alpha| = r(p^e -1)$ and $d(\alpha) = \delta$ is given by the following formula: \begin{multline*} \card{\{\alpha \in \mathbb N^m : |\alpha| = r(p^e -1) \text{ and } d(\alpha) = \delta\}} \\ =P_m\left(r(p-1)-d_{e-1}\right)\prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i}). \end{multline*} Observe that if $m-r-1 \le 0$, then \[ \card{\{\alpha \in \mathbb N^m : |\alpha| = r(p^e -1) \text{ and } d(\alpha) = \delta\}} = 0, \] which follows from $M_{p,m}(r(p-1) + d_{1}p - d_{0}) = 0$ since $r(p-1) + d_{1}p - d_{0} \ge r(p-1) + p = (r+1)(p-1)+1 \ge m(p-1)+1$; see also Theorem~\ref{thm:mr}(1). We further observe that, whenever $d_i > m-r-1 >0$ for some $i \in [1,\,e-1]$, then \[ \card{\{\alpha \in \mathbb N^m : |\alpha| = r(p^e -1) \text{ and } d(\alpha) = \delta\}} = 0. \] Indeed, picking the least $i \in [1,\,e-1]$ such that $d_i > m-r-1 >0$, we get $r(p-1) + d_{i}p - d_{i-1} \ge m(p-1)+1$ and hence $M_{p,m}(r(p-1) + d_{i}p - d_{i-1}) = 0$. Put differently, when adding $m$ non-negative integers that sum to $r(p^e-1)$, the carry-over to the digit associated with $p^i$ cannot exceed $\floor{\frac{r(p^{i}-1)}{p^{i}}} + m - r-1$.
Finally, exhausting all $\delta = (d_{e-1}, \dotsc,d_0)\in \mathbb N^e$ with $d_{0} = 0$ and $d_i \ge 1$ for $i \in [1,\,e-1]$, we can formulate $c_{e}=c_e(T(V_r(R[x_1, \dotsc, x_m])))$ as follows: \begin{align*} c_{e} & = \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\d_i \ge 1 \text{ for } 1 \le i \le e-1}} \card{\{\alpha \in \mathbb N^m : |\alpha| = r(p^e-1) \text{ and } d(\alpha) = (d_{e-1},\, \dotsc,\, d_1,\, d_{0})\}} \\ & = \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\d_i \ge 1 \text{ for } 1 \le i \le e-1}} \left(P_m\left(r(p-1)-d_{e-1}\right) \prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right)\\ &= \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\1 \le d_i \le m-r-1 \text{ for } 1 \le i \le e-1}} \left(\binom{r(p-1)-d_{e-1}+m-1}{m-1} \prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right), \end{align*} which verifies the asserted equations. \end{proof} Next, we outline a method that allows us to compute $c_{e} = c_e(T(V_r(R[x_1,\,\dotsc,\,x_m])))$ for any $m,\,r$ with $m \ge r+2$, in which $R$ may have any prime characteristic $p$. (Note that, if $m \le r+1$, then $c_{e}= 0$ for all $e \ge 2$; see Theorem~\ref{thm:mr}.) The following generalizes \cite[Discussion~3.8]{EY}. \begin{Dis}\label{dis:m,r,p} Fix any positive integers $r,\,m$ such that $r+1 < m$, any prime number $p$, and any ring $R$ of characteristic $p$. Let $\mathscr{R}=R[x_1,\,\dotsc,\,x_m]$.
We describe a way to determine $c_{e} = c_e(T(V_r(\mathscr{R})))$ explicitly, as follows. For every $e \ge 0$, denote \[ X_e := \begin{bmatrix} X_{e,1} \\ \vdots \\ X_{e,m-r-1} \end{bmatrix}_{(m-r-1) \times 1}, \] in which \begin{align*} X_{e,n} &:= \sum_{\substack{ (d_{e+1}=n,\, d_{e},\,\dotsc,\, d_1,\,d_{0}=0) \in \mathbb N^{e+2} \\1 \le d_i \le m-r-1 \text{ for } 1 \le i \le e}} \prod_{i=0}^{e} M_{p,m}(r(p-1) + d_{i+1}p - d_{i}) \end{align*} for all $n \in \{1,\,\dotsc,\,m-r-1\}$. With these notations, it is straightforward to see that, for all $i \in [1,\,m-r-1]$, \begin{align*} X_{e+1,i} &= \sum_{j=1}^{m-r-1}M_{p,m}(r(p-1) + ip - j)X_{e,j}. \end{align*} In other words, $X_{e+1}$ can be computed recursively: \[ X_{e+1} = U \cdot X_e, \] where \[ U:= \begin{bmatrix} u_{ij} \end{bmatrix}_{(m-r-1) \times (m-r-1)} \quad \text{with} \quad u_{ij}:=M_{p,m}(r(p-1) + ip - j). \] Therefore, \[ X_e = U^{e}\cdot X_0 \quad \text{for all} \quad e \ge 0. \] With $m,\,r$ and $p$ given, both $X_0$ and $U= (u_{ij})_{(m-r-1) \times (m-r-1)}$ can be determined explicitly. Accordingly, we can compute $X_e = U^{e}\cdot X_0$ explicitly for all $e \ge 0$.
Finally, for all $e \ge 2$, we can determine $c_{e} =c_{e}(T(V_r(\mathscr{R})))$ explicitly, as follows: \begin{align*} c_{e} &= \sum_{\substack{ (d_{e-1},\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\1 \le d_i \le m-r-1 \text{ for } 1 \le i \le e-1}} \left(P_{m}(r(p-1)-d_{e-1})\prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right)\\ &= \sum_{n = 1}^{m-r-1}\left(P_{m}(r(p-1)-n) \sum_{\substack{ (d_{e-1}=n,\, \dotsc,\, d_1,\, d_{0}=0) \in \mathbb N^{e} \\1 \le d_i \le m-r-1 \text{ for } 1 \le i \le e-2}} \prod_{i=0}^{e-2} M_{p,m}(r(p-1) + d_{i+1}p - d_{i})\right)\\ &=\sum_{n=1}^{m-r-1} P_{m}(r(p-1)-n) X_{e-2,n} = \sum_{n=1}^{m-r-1} \binom{r(p-1)-n+m-1}{m-1} X_{e-2,n}\\ &=Y_0 \cdot U^{e-2} \cdot X_0, \end{align*} where $Y_0 := \begin{bmatrix} \binom{r(p-1)-1+m-1}{m-1} & \cdots & \binom{r(p-1)-(m-r-1)+m-1}{m-1} \end{bmatrix}_{1 \times (m-r-1)}$. Consequently, $\operatorname{cx}(T(V_r(\mathscr{R})))$ can be computed. \end{Dis} \begin{Def} In what follows, we call \[ U(p,r,m)=U:= \begin{bmatrix} u_{ij} \end{bmatrix}_{(m-r-1) \times (m-r-1)} \quad \text{with} \quad u_{ij}:=M_{p,m}(r(p-1) + ip - j) \] the {\it determining matrix} for $p, r, m$. \end{Def} \begin{Thm}\label{thm:m=r+2} Consider $T = T(V_r(R[x_1,\dotsc,x_m]))$ as above with $m=r+2$. Then $c_e(T) = \binom{rp}{m-1}\binom{p+m-2}{m-1}^{e-2}\binom{p+m-3}{m-1}$ for all $e \ge 2$ and $\operatorname{cx}(T) = \binom{p+m-2}{m-1}$. \end{Thm} \begin{proof} Adopting all the notations introduced in Discussion~\ref{dis:m,r,p}, we see \begin{align*} &X_0 = M_{p,m}(r(p-1) + p) = M_{p,m}(p-2) = P_m(p-2) = \binom{p+m-3}{m-1} >0, \\ &U = M_{p,m}((r+1)(p-1)) = M_{p,m}(p-1) = P_m(p-1) =\binom{p+m-2}{m-1} >0,\\ &Y_0 = P_m(r(p-1)-1) = \binom{r(p-1)-1+m-1}{m-1} = \binom{rp}{m-1}> 0. \end{align*} Here we use the fact that $M_{p,m}(i) = M_{p,m}(m(p-1)-i)$ for all $i$.
Therefore, for all $e \ge 2$, we obtain \[ c_e =\binom{r(p-1)+m-2}{m-1}\binom{p+m-2}{m-1}^{e-2} \binom{p+m-3}{m-1}, \] which establishes \[ \operatorname{cx}(T(V_r(R[x_1,\dotsc,x_m]))) = \binom{p+m-2}{m-1} \] when $m=r+2$. \end{proof} \subsection{The Frobenius complexity as $p \to \infty$} \label{p-to-infinity} We will maintain the notation from this section, including the conditions $m \geq r+2$ and $r > 0$. The following results are straightforward and left to the reader. We will comment on their proofs only when necessary. \begin{Lma} \label{calc} Fix an integer $m>0$ and a prime number $p$. \begin{enumerate} \item $M_{p,m}(i)= M_{p,m}(m(p-1)-i).$ \item $M_{p,m}(i) \leq M_{p,m}(j)$ if $0 \leq i \leq j \leq \lceil m(p-1)/2 \rceil$ or $\lceil m(p-1)/2 \rceil \leq j \leq i \leq m(p-1)$. \end{enumerate} \end{Lma} \begin{Lma} \label{calc2} For any integers $i, j$ such that $1 \leq i,j \leq m-r-1$, we have $$ p -3 < p \leq r(p-1)+pi-j \leq m(p-1)-p+3$$ for all $p \gg 0$. \end{Lma} \begin{Def} For any $t \times s$ matrix $A=(a_{ij})$ with non-negative entries, where $t,s$ are positive integers, define $\abs A = \min \{ a_{ij} \}$ and $\Vert A \Vert = \max \{ a_{ij} \}$. \end{Def} The following lemma is a consequence of Lemmata~\ref{calc} and~\ref{calc2}. \begin{Lma} \label{abs} Given $m$ and $r$, we have the inequalities $$\binom{m-1+p-3}{m-1} \leq \abs U \leq \Vert U \Vert \leq \binom{m-1+ \lceil \frac{m(p-1)}{2} \rceil}{m-1}$$ for the determining matrix $U = U(p,r,m)$, for all $p \gg 0$. \end{Lma} \begin{Lma} Let $A, B$ be matrices with non-negative entries of sizes $l \times t$ and $t \times s$ respectively, with $l, t, s$ positive integers. Then \[ t \abs A \cdot \abs B \le \abs {A \cdot B} \le \Vert{A \cdot B} \Vert \leq t \Vert A \Vert \cdot \Vert B \Vert.
\] \end{Lma} Now, let us recall (cf.~Discussion~\ref{dis:m,r,p}) that $$c_e = Y_0 \cdot X_{e-2}= Y_0 \cdot U^{e-2} \cdot X_0,$$ where \[ X_0 = \begin{bmatrix} X_{0,1} \\ \vdots \\ X_{0,m-r-1} \end{bmatrix}_{(m-r-1) \times 1} \quad \text{with} \quad X_{0,i} = M_{p,m}( r(p-1)+ip) \] and \[ Y_0 = \begin{bmatrix} \binom{r(p-1)-1+m-1}{m-1} & \cdots & \binom{r(p-1)-(m-r-1)+m-1}{m-1} \end{bmatrix}_{1 \times (m-r-1)}. \] \begin{Lma} For all $p,\,m,\,r$ as above, both $X_0$ and $Y_0$ are non-zero. \end{Lma} \begin{proof} Indeed, $m \ge r+2$ implies $0 \le r(p-1)+p < r(p-1)+2(p-1) \le m(p-1)$, which implies $X_{0,1} = M_{p,m}(r(p-1)+p) > 0$. On the other hand, $r(p-1)-1 \ge 0$ implies $r(p-1)-1+m-1 \ge m-1$, which implies $\binom{r(p-1)-1+m-1}{m-1} > 0$. \end{proof} Moreover, both $X_0$ and $Y_0$ have all positive entries for $p \gg 0$. In fact, we can be more precise. \begin{Lma} \label{pos} If $p \geq m-r$, then both $X_0$ and $Y_0$ have all positive entries. \end{Lma} \begin{proof} If $p \geq m-r$, then $0 \leq r(p-1)+ip \leq m(p-1)$ and hence $M_{p,m}(r(p-1)+ip) > 0$ for all $i =1, \dotsc, m-r-1$. On the other hand, note that $r(m-r)- m+1 = -(r-1)(r-m+1) \ge 0$ for all $r = 1, \dotsc, m-2$. Consequently, if $p \ge m-r$, then for all $i = 1, \dotsc, m-r-1$, \begin{align*} r(p-1)-i+m-1 &\ge r(p-1)-(m-r-1) +m-1 \\ &=(rp-m+1) + m-1 \ge (r(m-r) - m +1) + m-1 \ge m-1, \end{align*} which leads to $\abs{Y_0} > 0$. \end{proof} \begin{Prop} \label{prop:bounds} We have $$c_e \leq (m-r-1)^{e-1} \cdot \Vert Y_0 \Vert \cdot \Vert U \Vert ^{e-2} \cdot \Vert X_0 \Vert$$ and $$(m-r-1)^{e-1} \cdot \abs{Y_0} \cdot \abs{U}^{e-2} \cdot \abs{X_0} \leq c_e.$$ (In fact $(m-r-1)^{e-3} \cdot \Abs{Y_0} \cdot \abs{U}^{e-2} \cdot \Abs{X_0} \leq c_e$.)
Therefore we have that $$ (m-r-1) \abs{U} \leq \operatorname{cx}(T(V_r(\mathscr{R}))) \leq (m-r-1)\Vert U \Vert$$ for $p \gg 0$, where $\mathscr{R} = R[x_1,\,\dotsc,\,x_m]$. \end{Prop} \begin{Cor} \label{cor:limit} Let $\mathscr{R} = R[x_1,\,\dotsc,\,x_m]$. If $p \gg 0$, then $$ (m-r-1)\binom{m-1+p-3}{m-1} \leq \operatorname{cx}(T(V_r(\mathscr{R}))) \leq (m-r-1) \binom{m-1+ \lceil \frac{m(p-1)}{2} \rceil}{m-1}$$ and therefore $\lim_{p \to \infty} \log_p\operatorname{cx}(T(V_r(\mathscr{R}))) = m-1$. \end{Cor} This corollary motivates the definition of Frobenius complexity in characteristic zero, which is given in Section 4; see Definition~\ref{FC0}. \subsection{Perron-Frobenius} \label{subsec:pf} We summarize a few facts about square matrices with positive real entries. Any such matrix admits a positive real eigenvalue $\lambda$ such that all other eigenvalues have absolute value less than $\lambda$. We will refer to this eigenvalue as the Perron root or Perron-Frobenius eigenvalue of the matrix. This eigenvalue is a simple root of the characteristic polynomial of the matrix. Moreover, an eigenvector for $\lambda$ either has all entries positive or has all entries negative. See \cite{Pe} and \cite{Fr}. Let $p \gg 0$. Since $U$ then has only positive entries by Lemma~\ref{abs}, let $\lambda$ be the Perron-Frobenius eigenvalue of $U$. There exists an invertible matrix $P$ such that $$U = P D P^{-1},$$ where $D$ is the Jordan canonical form of $U$. (We may also take $D$ to be the rational canonical form of $U$ over $\mathbb R$ if we prefer to stay within $\mathbb R$.) Without loss of generality, the upper left corner of $D$ is $\lambda$ (thus all the other entries of the first row and first column are $0$); that is, \[ D = \begin{bmatrix} \lambda & 0 \\ 0 & D_1 \end{bmatrix}_{(m-r-1) \times (m-r-1)} \] with $D_1$ an $(m-r-2) \times (m-r-2)$ matrix whose eigenvalues are all less than $\lambda$ in absolute value.
Hence the first column of $P$ (resp.\ the first row of $P^{-1}$) is an eigenvector of $U$ (resp.\ of $U^T$) for $\lambda$. Thus, without loss of generality, we may assume that the first column of $P$ and (consequently) the first row of $P^{-1}$ have all positive entries. Lastly, since both $Y_0$ and $X_0$ are non-zero with non-negative entries, the first entries of both $Y_0 P$ and $P^{-1} X_0$ are positive. Write $Y_0 P = [a, \ A]$ and $P^{-1} X_0 = [b, \ B]^T$ in block form. Now the fact that $\lambda$ is the largest eigenvalue in absolute value implies that \[ c_e = Y_0 U^{e-2} X_0 = (Y_0 P) D^{e-2} (P^{-1} X_0) = ab\lambda^{e-2} + AD_1^{e-2}B =ab\lambda^{e-2}+o(\lambda^{e}). \] Thus \[ \operatorname{cx}(T(V_r(\mathscr{R})))= \lambda. \] (The above argument applies as long as $p,\,m,\,r$ are such that $U$ has all positive entries, since $X_0$ and $Y_0$ are always non-zero.) \section{Frobenius complexity of determinantal rings} In this section, we combine what we have obtained so far to derive results on the Frobenius complexity of determinantal rings. In particular, we translate the results on $T(V_r(R[x_1,\dotsc,x_m]))$ to $S_{m,n}$ with $m > n \ge 2$. \begin{Thm}\label{thm:f-com-det} Let $K$, $S_{m,n}$ and $\mathscr{R}_{m,n}$ be as in Section~\ref{sec:det} (cf.~Definition~\ref{def:segre-complete}) with $m > n \ge 2$. Further assume that $K$ is a field of prime characteristic $p$. Let $E_{m,n}$ denote the injective hull of the residue field of $S_{m,n}$. \begin{enumerate} \item The ring of Frobenius operators of $S_{m,n}$ (i.e., $\mathscr{F}(E_{m,n})$) is never finitely generated over $\mathscr{F}_0(E_{m,n})$. \item When $n = 2$, we have $\operatorname{cx}_F(S_{m,2}) = \log_p\binom{p+m-2}{m-1}$. \item We have $\lim_{p \to \infty}\operatorname{cx}_F(S_{m,n}) = m-1$. \item For $p \gg0$, or whenever the determining matrix $U=U(p, m-n, m)$ has all positive entries, we have $\operatorname{cx}_F(S_{m,n}) = \log_p (\lambda)$, in which $\lambda$ is the Perron root of $U$.
\end{enumerate} \end{Thm} \begin{proof} (1) Since $m-n \le m-2$, we see that $T(V_{m-n}(K[x_1,\dotsc,x_{m}]))$ is not finitely generated over $T_0(V_{m-n}(K[x_1,\dotsc,x_{m}]))$ by Theorem~\ref{thm:mr}(2). Thus $\mathscr{F}(E_{m,n})$ is not finitely generated over $\mathscr{F}_0(E_{m,n})$ by Theorem~\ref{thm:segre}(1). (2) By Theorem~\ref{thm:segre}(3) and Theorem~\ref{thm:m=r+2}, \[ \operatorname{cx}_F(S_{m,2}) = \log_p\operatorname{cx}(T(V_{m-2}(K[x_1,\dotsc, x_{m}]))) = \log_p\binom{p+m-2}{m-1}. \] (3) This follows from Corollary~\ref{cor:limit}. (4) This is a straightforward consequence of the discussion in Subsection~\ref{subsec:pf}. \end{proof} \begin{Rem} We would like to point out the following: \begin{enumerate} \item Note that, for every $m > 2$, \[ \lim_{e \to \infty} c_e(\mathscr{F}(E_{m,2})) = \lim_{e \to \infty} c_e(T(V_{m-2}(K[x_1, \dotsc,x_m]))) = \infty. \] \item Moreover, there exists an onto (hence nearly onto) graded ring homomorphism from $T(V_{m-n}(K[x_1, \dotsc,x_m]))$ to $T(V_{m-n}(K[x_1,\dotsc,x_{m-n+2}]))$. Thus by Corollary~\ref{cor:nearly-onto}, \[ c_e(T(V_{m-n}(K[x_1, \dotsc,x_m]))) \ge c_e(T(V_{m-n}(K[x_1,\dotsc,x_{m-n+2}]))) \] for all $ e \ge 0$. Hence $c_e(\mathscr{F}(E_{m,n})) \ge c_e(\mathscr{F}(E_{m-n+2,2}))$ for all $ e \ge 0$, and consequently \[ \lim_{e \to \infty} c_e(\mathscr{F}(E_{m,n})) = \infty \] for all $m > n \ge 2$. \end{enumerate} \end{Rem} \subsection{Example} We illustrate our method with a concrete example, freely using the notation established so far (especially that of Section~\ref{sec:computing-ce}). Let $r=2,\, m=5$, and let $K$ be a field of characteristic $p=3$. We are going to compute $c_e = c_e(T(V_2(K[x_1, \ldots, x_5])))$, which in turn equals $c_e(\mathscr{F}(E_{5,3}))$ by Theorem~\ref{thm:segre}.
As in Discussion~\ref{dis:m,r,p}, we have \[ X_e= U^e\cdot X_0 \quad \text{for all} \quad e \geq 0, \] in which \begin{align*} X_e &= \begin{bmatrix} X_{e,1} \\ X_{e,2} \end{bmatrix}, \\ X_0 &= \begin{bmatrix} X_{0,1} \\ X_{0,2} \end{bmatrix} =\begin{bmatrix} M_{3,5}(7) \\ M_{3,5}(10) \end{bmatrix} =\begin{bmatrix} 30 \\ 1 \end{bmatrix}, \\ U &= \begin{bmatrix} M_{3,5}(6) & M_{3,5}(5) \\ M_{3,5}(9) & M_{3,5}(8) \end{bmatrix} = \begin{bmatrix} 45 & 51 \\ 5 & 15 \end{bmatrix}. \end{align*} Note that $U$ has all positive entries and the eigenvalues of $U$ are $2(15+ 2\sqrt{30})$ and $2(15-2\sqrt{30})$. At this point, we can apply Theorem~\ref{thm:f-com-det}(4) directly and determine the Frobenius complexity of $S_{5,3}$ by observing that the Perron root of $U$ is $2(15+2\sqrt{30})$. However, for illustrative purposes let us compute $U^e$. This is accomplished by diagonalizing $U$. Skipping the details, we get \[ U^e= \begin{bmatrix} (15+4\sqrt{30})y_e +(-15+4\sqrt{30})z_e & 51(y_e-z_e)\\ 5(y_e-z_e) & (-15+4\sqrt{30})y_e +(15+4\sqrt{30})z_e \end{bmatrix}, \] in which \[ y_e:=\frac{1}{\sqrt{15}} \cdot 2^{-\frac{7}{2}+e} \cdot (15+2\sqrt{30})^e \quad \text{and} \quad z_e:= \frac{1}{\sqrt{15}} \cdot 2^{-\frac{7}{2}+e}\cdot (15-2\sqrt{30})^e. \] Thus, for $e \geq 0$, we obtain \begin{align*} X_{e, 1} &= 30( (15+4\sqrt{30})y_{e} +(-15+4\sqrt{30})z_{e}) + 51(y_{e}-z_{e}),\\ X_{e,2} &= 150(y_{e}- z_{e}) + (-15+4\sqrt{30})y_{e} + (15+4\sqrt{30})z_{e}. \end{align*} Lastly, for $e \geq 2$, we have (cf.~Discussion~\ref{dis:m,r,p}) \begin{equation*}\label{eq:c} c_e = c_e(T(V_2(K[x_1, \ldots, x_5]))) = \binom{7}{4}X_{e-2, 1} + \binom{6}{4}X_{e-2, 2}, \end{equation*} which allows us to compute $c_e(T(V_2(K[x_1, \ldots, x_5])))$, which equals $c_e(\mathscr{F}(E_{5,3}))$.
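The computation above can be reproduced numerically. The following Python sketch is an illustration on our part: it rebuilds $U$, $X_0$ and $Y_0$ from the definition of $M_{p,m}$ and evaluates $c_e = Y_0 \cdot U^{e-2} \cdot X_0$ directly, with no square roots involved.

```python
# Numerical illustration of the example p = 3, m = 5, r = 2:
# build the determining matrix U and the vectors X_0, Y_0, then
# compute c_e = Y_0 . U^(e-2) . X_0 by iterated matrix-vector products.
from math import comb

p, m, r = 3, 5, 2

def M(n):
    """M_{p,m}(n): number of (a_1,...,a_m) in [0,p-1]^m summing to n."""
    # coefficient of x^n in (1 + x + ... + x^(p-1))^m, via convolution
    poly = [1]
    for _ in range(m):
        new = [0] * (len(poly) + p - 1)
        for i, c in enumerate(poly):
            for j in range(p):
                new[i + j] += c
        poly = new
    return poly[n] if 0 <= n < len(poly) else 0

size = m - r - 1   # here 2
U = [[M(r*(p-1) + i*p - j) for j in range(1, size+1)] for i in range(1, size+1)]
X0 = [M(r*(p-1) + i*p) for i in range(1, size+1)]
Y0 = [comb(r*(p-1) - n + m - 1, m - 1) for n in range(1, size+1)]

def c(e):
    """c_e = Y_0 . U^(e-2) . X_0 for e >= 2."""
    v = X0[:]
    for _ in range(e - 2):
        v = [sum(U[i][j] * v[j] for j in range(size)) for i in range(size)]
    return sum(Y0[n] * v[n] for n in range(size))

print(U)           # [[45, 51], [5, 15]]
print(X0, Y0)      # [30, 1] [35, 15]
print(c(2), c(3))  # 1065 51510
# the ratio c(e+1)/c(e) approaches the Perron root 2(15 + 2*sqrt(30))
```

This reproduces the matrices displayed above and gives, e.g., $c_2 = 1065$ and $c_3 = 51510$.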
Therefore we are led to the following proposition. \begin{Prop} When $p=3$, $\operatorname{cx}_F(S_{5,3}) = \log_3(2(15+2\sqrt{30}))$. \end{Prop} To conclude the paper, we introduce a definition of Frobenius complexity for rings of characteristic zero, motivated by Corollary~\ref{cor:limit} and Theorem~\ref{thm:f-com-det}(3). As the definition involves rings that may not be local, we first extend Definition~\ref{def-FCX} by defining the Frobenius complexity of a (not necessarily local) ring $R$ of prime characteristic $p$ as $\operatorname{cx}_F(R) : = \log_p(\operatorname{cx}(\mathscr{C}(R)))$. (When $(R,\fm,k)$ is F-finite complete local, $\mathscr{C}(R)$ and $\mathscr{F}(E(k))$ are opposite as graded rings; so $\operatorname{cx}(\mathscr{C}(R)) = \operatorname{cx}(\mathscr{F}(E(k)))$ and we do have an extension of the definition.) \begin{Def} \label{FC0} Let $R$ be a ring (of characteristic zero) such that $R/pR \neq 0$ for almost all prime numbers $p$. When the limit $\lim_{p \to \infty} \operatorname{cx}_F(R/pR)$ exists, we call it \emph{the Frobenius complexity} of $R$. \end{Def} It is natural to ask under what conditions, if any at all, the Frobenius complexity exists. The cases $R = \mathbb Z[X_1,\,\dotsc,\,X_n]/I$ and $R = \mathbb Z[[X_1,\,\dotsc,\,X_n]]/I$ are particularly interesting. If $R$ is a finitely generated algebra over a field $k$ of characteristic zero, we could descend $R$ to a finitely generated $A$-algebra $R_A$ (where $A$ is a subring of $k$, finitely generated over $\mathbb Z$, containing the defining data of $R$) and study the Frobenius complexity of $R_A$. \begin{thebibliography}{GSTT} \bibitem[EY]{EY} F.\ Enescu, Y.\ Yao, \emph{The Frobenius complexity of a local ring of prime characteristic}, preprint, arXiv:1401.0234.
\bibitem[Fr]{Fr} G.\ Frobenius, \emph{Ueber Matrizen aus nicht negativen Elementen}, Sitzungsber.\ K\"onigl.\ Preuss.\ Akad.\ Wiss.\ (1912), 456--477. \bibitem[KSSZ]{KSSZ} M.\ Katzman, K.~Schwede, A.~K.~Singh, W.~Zhang, \emph{Rings of Frobenius operators}, Math.\ Proc.\ Cambridge Philos.\ Soc.\ \textbf{157} (2014), no.~1, 151--167. \bibitem[Pe]{Pe} O.\ Perron, \emph{Zur Theorie der Matrices}, Math.\ Ann.\ \textbf{64} (1907), no.~2, 248--263. \bibitem[Wa]{Wa} K.-i.\ Watanabe, \emph{Infinite cyclic covers of strongly F-regular rings}, Contemp.\ Math.\ \textbf{159} (1994), 423--432. \end{thebibliography} \end{document}
\begin{document} \maketitle \begin{abstract} \vspace*{-0.05in} We introduce a \emph{Three Tier Tree Calculus} ($T^3C$) that defines in a systematic way three tiers of tree structures underlying proof search in logic programming. We use $T^3C$ to define a new -- structural -- version of resolution for logic programming. \vspace*{-0.05in} \end{abstract} \begin{keywords} Structural resolution, term trees, rewriting trees, derivation trees. \vspace*{-0.05in} \end{keywords} \section{Introduction}\label{sec:intro} As ICLP is celebrating the 200$^{th}$ anniversary of George Boole, we are reflecting on the fundamental ``laws'' underlying derivations in logic programming (LP), and making an attempt to formulate some fundamental principles for first-order proof search, analogous in generality to Boole's ``laws of thought'' for propositional logic~\cite{Boole}. Any such principles must be able to reflect two important features of first-order proof search in LP: its recursive and non-deterministic nature. For this they must satisfy two criteria: to be able to (a) model infinite structures and (b) reflect the non-determinism of proof search, relating ``laws of infinity'' with ``laws of non-determinism'' in LP. \vspace*{-0.1in} \begin{example} \label{ex:nat} The program $P_1$ inductively defines the set of natural numbers: \[\begin{array}{lrll} 0. & \mathtt{nat(0)} & \gets & \\ 1. & \mathtt{nat(s(X))} & \gets & \mathtt{nat(X)} \\ \end{array}\] \vspace*{0.05in}\noindent To answer the question ``{\em Does $P_1 \vdash \mathtt{nat(s(X))}$ hold?}'', we first represent it as the LP query $? \gets \mathtt{nat(s(X))}$ and then use SLD-resolution to resolve this query with $P_1$. The topmost clause selection strategy first resolves $\mathtt{nat(s(X))}$ with $P_1$'s second clause (Clause 1), and then resolves the resulting term with $P_1$'s first clause (Clause 0).
This gives the derivation $\mathtt{nat(s(X))} \rightarrow \mathtt{nat(X)} \rightarrow \mathtt{true}$, which computes the solution $\{\mathtt{X \mapsto 0}\}$ in its last step. So one answer to our question is ``Yes, provided $\mathtt{X}$ is $\mathtt{0}$.'' \end{example} \noindent Even for this simple inductive program, there will be clause selection strategies (or clause orderings) that will result in infinite SLD-derivations. If Clause 1 is repeatedly resolved against, the infinite computation will compute the first limit ordinal. The least and greatest Herbrand model semantics~\cite{vEmdK76,Llo87,EmdenA85} captured very well the recursive (and corecursive!) nature of LP (thus satisfying our criterion (a)). For example, the least Herbrand model for $P_1$ is an infinite set of finite terms $\mathtt{nat(0), \ nat(s(0)),}$ $\mathtt{nat(s(s(0))), \ldots}$. The greatest complete Herbrand model for program $P_1$ is the set containing all of the finite terms in the least Herbrand model for $P_1$ together with the first limit ordinal $\mathtt{nat(s(s(...)))}$. However, due to its declarative nature, the semantics does not reflect the operational non-deterministic nature of LP, and thus fails our criterion (b). The operational semantics of LP has seen the introduction of a variety of tree structures reflecting the non-deterministic nature of proof search: \emph{proof trees}, \emph{SLD-derivation trees}, and \emph{and-or-trees}, just to name a few. However, these do not adequately capture the infinite structures arising in LP proof search. It is well-known that SLD-derivations for any program $P$ are sound and complete with respect to the least Herbrand model for $P$~\cite{Llo87}, but this soundness and completeness depends crucially on termination of SLD-derivations, and termination is not always available in LP proof search. As a result, logical entailment is only semi-decidable in LP. 
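The derivation in Example~\ref{ex:nat} can be made concrete with a miniature SLD interpreter. The following Python sketch is our own illustration (the term encoding and helper names are invented for this purpose, not taken from any LP system): it implements unification and depth-first resolution with topmost clause selection for $P_1$, and recovers the answer $\mathtt{X \mapsto 0}$.

```python
# Minimal SLD-resolution sketch for the program P1 of Example 1.
# Terms: variables are strings like 'X'; compound terms are tuples
# ('functor', arg1, ...); the constant 0 is the nullary term ('0',).

def subst(t, s):
    """Apply substitution s (a dict) to term t, following chains."""
    if isinstance(t, str):
        return subst(s[t], s) if t in s else t
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

def unify(t1, t2, s):
    """Return an extension of s unifying t1 and t2, or None."""
    t1, t2 = subst(t1, s), subst(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return {**s, t1: t2}
    if isinstance(t2, str):
        return {**s, t2: t1}
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

# P1:  nat(0).   nat(s(X)) :- nat(X).   Clauses as (head, body);
# fresh variables are obtained by indexing with the step counter i.
def clauses(i):
    x = f'X{i}'
    return [(('nat', ('0',)), []), (('nat', ('s', x)), [('nat', x)])]

def solve(goals, s, i=0):
    """Lazy depth-first SLD resolution; yields answer substitutions."""
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in clauses(i):
        s1 = unify(first, head, s)
        if s1 is not None:
            yield from solve(body + rest, s1, i + 1)

query = ('nat', ('s', 'X'))
answer = next(solve([query], {}))
print(subst('X', answer))   # prints ('0',), i.e. X -> 0
```

Because `solve` is a generator, the infinite SLD-tree is explored lazily: the first answer is produced even though repeatedly resolving against Clause 1 would continue forever.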
In one attempt to match the greatest complete Herbrand semantics for potentially non-terminating programs, an operational counterpart --- called \emph{computations at infinity} --- was introduced in~\cite{Llo87,EmdenA85}. The operational semantics of a potentially nonterminating logic program $P$ was then taken to be the set of all infinite ground terms computable by $P$ at infinity. Computations at infinity better capture the computational behaviour of non-terminating logic programs, but infinite computations do not result in implementations. This observation suggests one more criterion: (c) our operational semantics must be able to provide an observational (constructive) approach to potential infinity and non-determinism of LP proof search, thus incorporating ``laws of observability''. Coinductive logic programming (CoLP)~\cite{GuptaBMSM07,SimonBMG07} provides a method for terminating certain infinite SLD-derivations (thus satisfying our criteria (a) and (c)). This is based on the principle of coinduction, which is in turn based on the ability to finitely observe coinductive hypotheses and succeed when coinductive conclusions are reached. CoLP's search for coinductive hypotheses and conclusions uses a fairly straightforward loop detection mechanism. It requires the programmer to supply annotations classifying every predicate as either inductive or coinductive. Then, for queries marked as coinductive, it observes finite fragments of SLD-derivations, checks them for unifying subgoals, and terminates when loops determined by such subgoals are found. The loop detection mechanism of CoLP has three major limitations, all arising from the fact that it has relatively weak support for analysis of the various proof-search strategies and term structures arising in LP proof search (and thus for our criterion (b)). (1) It does not work well for cases of mixed induction-coinduction.
For example, to coinductively define an infinite stream of Fibonacci numbers, we would need to include inductive clauses defining addition on natural numbers. Coinductive goals will be mixed with inductive subgoals. Closing such computations by simple loop detection is problematic. (2) There are programs for which computations at infinity \emph{produce} an infinite term, whereas CoLP fails to find unifiable loops. \noindent Consider the following (coinductive) program $P_2$ that has the single clause \vspace*{0.05in}\noindent 0. $\mathtt{from(X, scons(X,Y))} \gets \mathtt{from(s(X),Y)}$ \vspace*{0.05in}\noindent Given the query $? \gets \mathtt{from(0, X)}$, and writing $\mathtt{[\_,\_]}$ as an abbreviation for the stream constructor $\mathtt{scons}$, we have that the infinite term $t' = \mathtt{from(0,[0,[s(0),[s(s(0)),\ldots]]])}$ is computable at infinity by $P_2$ and is also contained in the greatest complete Herbrand model for $P_2$. However, $P_2 \vdash \mathtt{from(0,X)}$ cannot be proven using the unification-based loop detection technique of CoLP. Since the terms $\;\mathtt{from(0,scons(0,X'))}$, $\mathtt{from(s(0),}$ $\mathtt{scons(s(0),X''))}$, $\mathtt{from(s(s(0)),}$ $\mathtt{scons(s(s(0)),X''')}, ...$ arising in the derivation for $P_2$ and $? \gets \mathtt{from(0,X)}$ will never unify, CoLP will never terminate. (3) CoLP fails to reflect the fact that some infinite computations are not productive, i.e., do not produce an infinite term at infinity. The notion of productivity of corecursion is well studied in the semantics of other programming languages~\cite{EndrullisGHIK10,Agda,Coq}. For example, no matter how long an SLD-derivation for the following program $P_3$ runs, it does not \emph{produce} an infinite term, and the resulting computation is thus coinductively meaningless: \vspace*{0.05in}\noindent 0.
$\mathtt{bad(X)} \; \gets \; \mathtt{bad(X)}$ \vspace*{0.05in}\noindent Somewhat misleadingly, CoLP's loop detection terminates with success for such programs, thus failing to guarantee coinductive construction of infinite terms (failing criterion (a)). Is our quest for a theory of LP satisfying criteria (a), (b), and (c) hopeless? We take a step back and recollect that the semantics of first-order logic and recursive schemes offers one classical approach to formulating structural properties of potentially infinite first-order terms. Best summarised in \emph{``Fundamental Properties of Infinite Trees''}~\cite{Courcelle83}, the approach comes down to formulating some structural laws underlying first-order syntax. It starts with the definition of a \emph{tree language} as a (possibly infinite) set of sequences of natural numbers satisfying conditions of prefix-closedness and finite branching. Given a first-order signature $\Sigma$ together with a countable set of variables $Var$, a first-order term tree is defined as a map from a tree language $L$ to the set $\Sigma \cup Var$. The size of the domain of the map determines the size of the term tree. The ``laws'' are then given by imposing several structural properties: (i) in a given term tree, arities imposed by $\Sigma$ must be reflected by the branching in the underlying tree language; (ii) variables have arity $0$ and thus can only occur at leaves of the trees; and (iii) the operation of substitution is given by replacing leaf variables with term trees. A calculus for the operation can be formulated in terms of a suitable unification algorithm. We give formal definitions in Sections~\ref{sec:tl} and \ref{sec:t1}. We extend this elegant theory of infinite trees to give an operational semantics of LP that satisfies criteria (a), (b), and (c). We borrow a few general principles from this theory.
Structural properties of trees (given by arity and variable constraints) and operations on trees (substitutions) are defined by means of ``structural laws'' that hold for finite and infinite trees. This gives us a constructive approach to infinity (cf. criteria (a) and (c)). It remains to find the right kind of structures to reflect the non-determinism of proof search in LP. \begin{comment} when relating structures in the domain and codomain, and do not depend o The definition of a tree as a map from a tree language to a codomain of interest allows to relate structures in the codomain with the structure of the given tree language. This kind of analysis is general enough to give a uniform account of both finite and infinite trees by simply varying the size of the domain. \end{comment} Given a logic program $P$ and a term (tree) $t$, the first question we may ask is whether $t$ \emph{matches} any of $P$'s clauses. First-order term matching is a restricted form of unification employed in (first-order) term rewriting systems (TRS)~\cite{Terese} and --- via pattern-matching --- in functional programming. For our $P$ and $t$, we may proceed with term matching steps recursively, mimicking an SLD-derivation in which unification is restricted to term matching.
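The difference between matching and full unification can be sketched in a few lines. In the following toy encoding (ours, not the paper's formalism), a most general matcher instantiates variables of the pattern only, so variables occurring in the goal behave like constants:

```python
# Toy first-order matching: find a substitution s with s(pat) == term,
# or None.  Terms are nested tuples ('f', arg1, ...); variables are strings.

def match(pat, term, s=None):
    s = dict(s or {})
    if isinstance(pat, str):                 # variable in the pattern
        if pat in s:
            return s if s[pat] == term else None
        s[pat] = term
        return s
    if isinstance(term, str):                # goal variable: no match
        return None
    if pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        s = match(p, t, s)
        if s is None:
            return None
    return s

# Head of P2's clause: from(X, scons(X, Y))
HEAD = ('from', 'X', ('scons', 'X', 'Y'))
Z = ('0',)

# from(s(0), scons(s(0), Y2)) matches the head ...
assert match(HEAD, ('from', ('s', Z), ('scons', ('s', Z), 'Y2'))) == \
       {'X': ('s', Z), 'Y': 'Y2'}
# ... but from(0, X) does not, because the goal variable X would have to
# be decomposed -- even though the two terms would *unify*.
assert match(HEAD, ('from', Z, 'Xg')) is None
```

The failing case is exactly the situation where, in the sequel, a Tier 2 variable records that a match may still appear for some instantiation of the goal.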
Consider the matching sequences for four different terms and the coinductive program $P_2$ from above: \vspace*{-0.1in} \begin{center} \hspace*{-0.25in} \begin{tikzpicture}[scale=0.30,baseline=(current bounding box.north),grow=down,level distance=20mm,sibling distance=50mm,font=\footnotesize] \node { $\mathtt{from(0,X)}$}; \end{tikzpicture}\hspace*{0.2in} \begin{tikzpicture}[scale=0.30,baseline=(current bounding box.north),grow=down,level distance=20mm,sibling distance=60mm,font=\footnotesize ] \node { $\mathtt{from(0,[0, X'])}$} child { node {$\mathtt{from(s(0),X')}$}}; \end{tikzpicture}\hspace*{0.2in} \begin{tikzpicture}[scale=0.30,baseline=(current bounding box.north),grow=down,level distance=20mm,sibling distance=60mm,font=\footnotesize ] \node { $\mathtt{from(0,[0, [s(0), X'']])}$} child { node{ $\mathtt{from(s(0),[s(0),X''])}$ } child { node {$\mathtt{from(s(s(0)),X'')}$}}}; \end{tikzpicture}\hspace*{0.2in} \begin{tikzpicture}[scale=0.30,baseline=(current bounding box.north),grow=down,level distance=20mm,sibling distance=60mm,font=\footnotesize ] \node { $\mathtt{from(0,[0, [s(0),[s(s(0)), X''']]])}$} child { node{ $\mathtt{from(s(0),[s(0),[s(s(0)), X''']])}$ } child { node {$\mathtt{from(s(s(0)),[s(s(0)), X'''])}$} child { node {$\mathtt{from(s(s(s(0))), X''')}$} }}}; \end{tikzpicture} \end{center} \noindent Let us call term matching sequences as above \emph{rewriting trees}, to highlight their relation to TRS. The above sequences can already reveal some of the structural properties of the given logic program. If $\Sigma_2$ is the signature of the program $P_2$, and if we denote all finite term trees that can be formed from this signature as $\mathbf{Term}(\Sigma_2)$, then a rewriting tree for $P_2$ can be defined as a map from a given tree language $L$ to $\mathbf{Term}(\Sigma_2)$. Since rewriting trees are built upon term trees, we may say that term trees give a first tier of tree structures, while the rewriting trees give a second tier of tree structures. 
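The third matching sequence above can be computed mechanically: starting from a sufficiently instantiated goal, repeatedly match the clause head of $P_2$ and pass the matcher through the clause body, until matching fails at an open variable. A minimal sketch, in our own toy encoding (nested tuples for terms, strings for variables):

```python
def match(pat, term, s=None):
    """One-way matching: instantiate variables of pat only."""
    s = dict(s or {})
    if isinstance(pat, str):
        if pat in s:
            return s if s[pat] == term else None
        s[pat] = term
        return s
    if isinstance(term, str) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        s = match(p, t, s)
        if s is None:
            return None
    return s

def subst(t, s):                      # apply a substitution to a term
    if isinstance(t, str):
        return s.get(t, t)
    return (t[0],) + tuple(subst(a, s) for a in t[1:])

# P2:  from(X, scons(X, Y)) <- from(s(X), Y)
HEAD = ('from', 'X', ('scons', 'X', 'Y'))
BODY = ('from', ('s', 'X'), 'Y')

def rewrite_chain(goal):
    chain = [goal]
    while True:
        s = match(HEAD, chain[-1])
        if s is None:                 # no match: the sequence stops here
            return chain
        chain.append(subst(BODY, s))

Z = ('0',)
# The goal from(0, [0, [s(0), X'']]) of the third sequence:
chain = rewrite_chain(('from', Z, ('scons', Z, ('scons', ('s', Z), 'X2'))))
assert chain[1] == ('from', ('s', Z), ('scons', ('s', Z), 'X2'))
assert chain[2] == ('from', ('s', ('s', Z)), 'X2')
assert len(chain) == 3    # matching stops at the open variable X''
```

Each extra layer of $\mathtt{scons}$ in the goal buys exactly one more matching step, which is why the four sequences above grow one node at a time.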
To formulate suitable laws for the second tier, we need to refine our notion of rewriting trees. Given a program $P$ and a term $t$, we may additionally reflect \emph{how many} clauses from $P$ can be unified with $t$, and how many terms those clauses contain in their bodies. We thus introduce a new kind of ``or-nodes'' to track the matching clauses. If $P$ has $n$ clauses, $t$ may potentially have up to $n$ alternative matching sequences. When a clause $i$ does not match a given term tree $t$, we may use a \emph{Tier 2 variable} to denote the fact that, although $t$ does not match clause $i$ at the moment, a match may be found for some instantiation of $t$. Thus, for the program $P_1$ above and the queries $? \gets \mathtt{nat(s(X))}$ and $? \gets \mathtt{nat(s(0))}$, we will have the two rewriting trees of Figure~\ref{pic:tree2}. We note the alternating or-nodes (given by clauses) and and-nodes (given by terms from clause bodies) and Tier 2 variables. \begin{figure} \caption{\footnotesize{The rewriting trees for $P_1$ and the queries $?\gets \mathtt{nat(s(X))}$ and $?\gets \mathtt{nat(s(0))}$.}}\label{pic:tree2} \end{figure} Two kinds of laws are imposed on the structure of rewriting trees: \begin{itemize} \item[--] arity constraints: the arity of an and-node is the number of clauses in the program, and the arity of an or-node is the number of terms in its clause body. \item[--] variable constraints: variable leaves have arity $0$, and run over the objects being defined (rewriting trees). Variables are the leaves in which substitution can take place. \end{itemize} In Figure~\ref{pic:tree2}, the Tier 2 variable $X_2$ is substituted by a one-node rewriting tree $\mathtt{nat(0)} \gets$. Such substitutions constitute the fundamental operation on Tier 2 trees, and give rise to a calculus for Tier 2 given in terms of so-called rewriting tree transitions. Figure~\ref{pic:tree2} shows a transition from a rewriting tree for $? \gets \mathtt{nat(s(X))}$ to a rewriting tree for $?
\gets \mathtt{nat(s(0))}$, which corresponds to the SLD-derivation outlined in Example~\ref{ex:nat}. Thus, a derivation is a sequence of tree transitions (given by the Tier 2 operation of substitution). We call this method \emph{structural resolution}, or {\em S-resolution} for short. Its formal relation to TRS and type theory is given in~\cite{FK15}. Section~\ref{sec:t2} will introduce Tier 2 formally. We note the remarkably precise analogy between structures and operations of Tier 1 and Tier 2. Rewriting trees can be finite or infinite. For programs $P_1$ and $P_2$, any rewriting tree will be finite, but program $P_3$ will give rise to infinite rewriting trees. Once again, our structural analysis is fully generic for finite and infinite tree structures at Tier 2, which fits our criterion (a). Rewriting trees perfectly reflect the ``non-determinism laws'' (criterion (b)), thanks to and-nodes and or-nodes keeping a structural account of all the search options. Finally, our structural analysis perfectly fits criterion (c). For productive programs like $P_1$ and $P_2$, the length of a derivation may be infinite; however, each rewriting tree will necessarily be finite. This ensures an observational approach to corecursion and productivity. We complete the picture by introducing the third tier of trees reflecting different search strategies arising from substitution into different variables of Tier 2. Given the set $\mathbf{Rew}(P)$ of all finite rewriting trees defined for program $P$, a derivation tree is given by a map from a tree language $L$ to $\mathbf{Rew}(P)$. The arity of a given node in a derivation tree (itself given by a rewriting tree) is the number of Tier 2 variables in that rewriting tree. The construction of derivation trees is similar to the construction of SLD-derivation trees (as it accounts for all possible derivation strategies). The trees of Tier 3 are formally defined in Section~\ref{sec:t3}.
The resulting \emph{Three Tier Tree Calculus} ($T^3C$) developed in this paper formalises the fundamental properties of trees arising in LP proof search. Apart from being theoretically pleasing, this new theory can actually deliver very practical results. The finiteness of rewriting trees comprising a possibly infinite derivation gives an important observational property for defining and semi-deciding (observational) productivity for corecursion in LP. This puts LP on par with other languages in terms of observational productivity and coinductive semantics~\cite{EndrullisGHIK10,Agda,Coq}. With a notion of productivity in hand for LP, we can ask for results showing inductive and coinductive soundness of derivations given by transitions among rewriting trees. The two pictures above give, respectively, a sound coinductive observation of a proof for $t' = \mathtt{from(0,[0,[s(0),[s(s(0)),\ldots]]])}$ with respect to $P_2$, and a sound inductive derivation for $\mathtt{nat(s(X))}$ with respect to $P_1$. Our ongoing and future research based on $T^3C$ will be further explained in Section~\ref{sec:concl}. \begin{comment} Proofs by resolution are essentially proofs by contradiction. In contrast, in Constructive Type Theory~\cite{NPS90}, to prove that $A$ follows from $\Gamma$, we would need to \emph{construct} an inhabitant $p$ of type $A$, which is a proof $p$ of $A$ in context $\Gamma$; denoted by $\Gamma \vdash p: A$. Constructive type theory underlies Interactive Theorem Provers (ITPs)~\cite{Agda,Coq}. In the constructive setting, the \emph{structure} of $p$ that inhabits type $A$ matters. In LP, the proof structure plays little r\^{o}le. Apart from substitution produced as an answer to a query in LP, we do not possess (nor, as it may seem, need) any structural information about the proof. \emph{In this paper, we make two claims: $\bigstar$ a structural variant of SLD-resolution is genuinely needed in order to formulate a more precise operational semantics of LP. 
$\bigstar\bigstar$ ``Structural" resolution arises from giving a fine-grained analysis of various tree structures arising in LP derivations, and continuing the much older line of research~\cite{Courcelle83} into tree structures in programming language semantics.} We first motivate the claim $\bigstar$. It is well-known that SLD-derivations for any program $P$ are sound and complete with respect to the least Herbrand model for $P$~\cite{Llo87}. However, this soundness and completeness depends crucially on termination of SLD-derivations, and termination is not always available in LP proof search. Because of non-terminating cases, logical entailment is only semi-decidable in LP. When a logic program, such as $P_1$, is intended as an inductive definition, the reliance on termination of LP derivations actually works well in practice. However, it is extremely hard to analyse infinite and corecursive computations in LP. \vspace*{-0.05in} \begin{example}\label{ex:zstream} Consider the program $P_2$: \vspace*{0.05in}\noindent 0. $\mathtt{stream(scons(0,Y))} \; \gets \; \mathtt{stream(Y)}$ \vspace*{0.05in}\noindent No SLD-derivation for $P_2$ and the query $? \gets \mathtt{stream(X)}$ terminates with an answer, be it success or otherwise. Nevertheless, from a coinductive perspective, $\mathtt{stream}$ is meaningful, since it computes the infinite stream of $0$s, at its limit. \end{example} Coinductive logic programming (CoLP)~\cite{GuptaBMSM07,SimonBMG07} introduced a coinductive dialect of SLD-resolution. CoLP provides a method for terminating certain infinite derivations. This is based on the principle of coinduction, which is in turn based on the ability to finitely observe coinductive hypotheses and succeed when coinductive conclusions are reached. CoLP's search for coinductive hypotheses and conclusions uses a fairly straightforward loop detection mechanism. It requires the programmer to supply annotations classifying every predicate as either inductive or coinductive. 
Then, for queries marked as coinductive, it observes finite fragments of SLD-derivations, checks them for unifying subgoals, and terminates when loops determined by such subgoals are found. The query $? \gets \mathtt{stream(X)}$ gives rise to an SLD-derivation with subgoals $\mathtt{stream(scons(0,Y'))}$, $\mathtt{stream(scons(0,}$ $\mathtt{Y''))}$, $\mathtt{stream(scons(0,Y'''))}$, $\ldots$ Observing that $\mathtt{stream(scons(0,}$ $\mathtt{Y'))}$ and $\mathtt{stream(scons(0,Y''))})$ unify and thus comprise a loop, CoLP concludes that $\mathtt{stream(X)}$ has been proven by coinductive hypothesis $\mathtt{stream(scons(0,Y'))}$ and coinductive conclusion\\ $\mathtt{stream(scons(0,Y''))}$. CoLP returns the solution $\mathtt{X=scons(0,X)}$ in the form of a ``circular'' term indicating that $P_2$ computes at infinity the infinite stream of $\mathtt{0}$s. The above method relies on two assumptions: that the program in question indeed has an intended coinductive meaning (it would be unsound to close derivations for inductive programs on the basis of loop detection) and that unifiable loops can be found (othewise CoLP does not terminate and hence does not return any answer). CoLP cannot handle some cases of well-founded and non-well-founded corecursion. Firstly, there can be programs admitting both inductive and coinductive interpretations. Program $P_1$, seen coinductively, defines the first limit ordinal; and to enforce this operationally we need to swap the order of its first and second clauses. We would like to have methods that allow for uniform operational semantics for such programs and rely on structural properties of programs rather than heuristics controlling the search strategies (such as manually-provided ``coinductive" annotations or clause order). Secondly, some infinite computations are not productive in the sense of corecursive function productivity known in other programming languages~\cite{EndrullisGHIK10,Agda,Coq}. 
No matter how long an SLD-derivation for the following program $P_3$ runs, it does not \emph{produce} an infinite term, and such computation is thus coinductively meaningless: \vspace*{0.05in}\noindent 0. $\mathtt{bad(X)} \; \gets \; \mathtt{bad(X)}$ However, somewhat misleadingly, CoLP's loop detection terminates with success for such programs. In~\cite{EndrullisGHIK10,Agda,Coq} analogous examples would be called non-well-founded (or unproductive) cases of corecursion and are generally avoided as leading to unsound coinductive reasoning. We would like to be able to analyse productivity in LP, and for this we need richer structural methods. Finally, there are cases of productive corecursion that do not produce unifiable loops that can be detected by CoLP. The program $P_4$ comprises the single clause \vspace*{0.05in}\noindent 0. $\mathtt{from(X, scons(X,Y))} \gets \mathtt{from(s(X),Y)}$ \vspace*{0.05in}\noindent Although there exists an $\mathtt{X}$ such that $P_3 \vdash \mathtt{from(0,X)}$ (namely, $P_3 \vdash \mathtt{from(0,scons(0,(scons}$ $\mathtt{(s(0), \ldots))))}$), this cannot be proven using the unification-based loop detection technique of CoLP: since the terms $\;\mathtt{from(0,scons(0,X'))}$, $\mathtt{from(s(0),}$ $\mathtt{scons(s(0),X''))}$, $\mathtt{from(s(s(0)),}$\\ $\mathtt{scons(s(s(s(0))),X''')}, ...$ arising in the derivation for $P_3$ and $? \gets \mathtt{from(0,X)}$ will never unify, CoLP will never terminate. As the above examples show, better methods are needed in order to give a uniform semantics for recursion and corecursion in LP. A number of related techniques do already exist~\cite{deSchreye1994199,GuptaBMSM07,SimonBMG07,Pf92,RohwedderP96}. Some use ``modes'' to distinguish termination cases for appropriately annotated programs, while others use reduction orderings on terms to formulate termination conditions for individual derivations. 
But since properties of individual derivations do not necessarily carry over to the whole programs of which they are part, a perhaps more promising approach is to develop LP analogues of the ITP coinductive techniques. Types and pattern matching are key to defining (co)recursive functions in ITP and proving their termination or productivity. In ITP, types logically precede function definitions, and (co)recursive function definitions rely on the strict discipline of pre-existing (co)inductive types. But unlike ITP, LP has no separate type layer that can be used to force its programs to have (co)inductive meaning. Also, ITP uses (co)pattern matching against terms of (co)inductive types to enforce structural invariants that support termination analysis of (co)recursive programs, whereas LP computation uses full unification, rather than just matching, in its computational steps. Unfortunately, the lack of a separate typing layer and the greater generality of unification over matching prevent a direct porting to LP of the program analyses used to such good effect in ITP. Nevertheless, the structural techniques we develop in this paper are, in essence, LP analogues of the kinds of reasoning that types and pattern matching support in ITP. This leads us to our claim $\bigstar\bigstar$. The structural approach to LP put forth in this paper relies on the syntactic structure of programs rather than on their (operational, declarative, or other) semantics. It also actively constructs proof witnesses for derivations. We introduce here a novel three-tier tree calculus, called $T^3C$, that highlights analogies between different ``levels'', or ``tiers'', of proof structures arising in LP derivations. $T^3C$ views both terms and LP derivations as trees, as is traditional~\cite{Llo87,Courcelle83}, but also introduces a third tier of trees intermediate between them. 
The {\em term trees} of $T^3C$'s first tier give standard tree representations of (first-order) terms and logic programs. The {\em rewriting trees} of $T^3C$'s second tier are constructed using most general {\em matchers}, rather than full unification, of clauses against term trees, and thus mimic ITP pattern matching. And, because complete LP proof search requires unification of terms, the {\em derivation trees} of $T^3C$'s third tier are constructed using most general {\em unifiers}.\footnote{ Rewriting trees were called ``coinductive trees'' in~\cite{KP11-2,KPS12-2}, where they were introduced to study the coalgebraic semantics of LP.} A branch in a rewriting tree is a sequence of terms (resulting from term rewriting steps). A rewriting tree is a success tree if certain of its branches end with empty goals. A branch of a derivation tree is a sequence of rewriting tree transitions (we call it S-derivation) which corresponds to an SLD-derivation. If a derivation ends with a ``successful" rewriting tree --- that tree is a constructed witness of the proof found in the derivation. This paper gives the first fully formal exposition of $T^3C$ for S-Resolution. 
In Section~\ref{sec:tl}, we define tree languages, and in Sections~\ref{sec:t1} --- \ref{sec:t3} use tree languages to formally define term trees, rewriting trees and derivation trees as maps from a tree language to a suitable codomain: first-order signature, term trees, or rewriting trees, respectively; as shown on the diagram: {\footnotesize{ \[\xy0;/r.10pc/: *[o]=<65pt,10pt>\hbox{\txt{Tree language}}="a"*\frm<8pt>{-,}, (100,15)*[o]=<110pt,10pt>\hbox{\txt{Rewriting trees}}="b"*\frm<8pt>{-}, (100,0)*[o]=<110pt,10pt>\hbox{\txt{Term trees}}="c"*\frm<8pt>{-}, (100,-15)*[o]=<110pt,10pt>\hbox{\txt{First-order Signature}}="d"*\frm<8pt>{-}, (190,15)*[o]=<110pt,10pt>\hbox{\txt{defines Tier 3: Derivation trees}}="e", (190,0)*[o]=<110pt,10pt>\hbox{\txt{defines Tier 2: Rewriting trees}}="f", (190,-15)*[o]=<110pt,10pt>\hbox{\txt{defines Tier 1: Term trees}}="g", "a";"b" **\dir{-} ?>*\dir{>}, "a";"d" **\dir{-} ?>*\dir{>}, "a";"c" **\dir{-} ?>*\dir{>}, "d";"c" **\dir{-} ?>*\dir{>}, "c";"b" **\dir{-} ?>*\dir{>}, \endxy\]}} In each of these sections, we carefully trace the structural properties of objects at every tier. There are three: (1) structural properties given by the codomain of the tree map, a suitable notion of tree branching reflecting arities of objects of that tier, and a suitable notions of a tree variable; (2) main defining operation of the Tier --- notion of ``substitution" for variables of the Tier, and finally (3) calculus involving the main operation of the Tier. Systematically tracing properties of objects at every tier gives enough of information to reason about inductive and coinductive properties of a given program, its productivity, as well as to define richer coinductive proof principles for LP. These applications of $T^3C$ are left as a future work and discussed in Section~\ref{sec:concl}. \end{comment} \vspace*{-0.25in} \section{Background: Tree Languages}\label{sec:tl} Our notation for trees is a variant of that in, e.g., \cite{Llo87,Courcelle83}. 
Let ${\mathbb N}^*$ denote the set of all finite words (i.e., sequences) over the set ${\mathbb N}$ of natural numbers. The length of a word $w\in{\mathbb N}^*$ is denoted by $|w|$. The empty word $\epsilon$ has length $0$. We identify the natural number $i$ and the word $i$ of length $1$. If $w$ is a word of length $l$, then for each $i \in \{1,..., l\}$, $w_i$ is the $i^{th}$ element of $w$. We may write $w = w_1...w_{l}$ to indicate that $w$ is a word of length $l$. We use letters from the end of the alphabet, such as $u,v,$ and $w$, to denote words in ${\mathbb N}^*$ of any length, and letters from the middle of the alphabet, such as $i,j$, and $k$, to denote words in ${\mathbb N}^*$ of length $1$ (i.e., individual natural numbers). The concatenation of words $w$ and $u$ is denoted $wu$. The word $v$ is a {\em prefix} of $w$ if there exists a word $u$ such that $w = vu$, and a {\em proper prefix} of $w$ if $u \not = \epsilon$. \vspace*{-0.05in} \begin{definition}\label{def:lang} A set $L \subseteq {\mathbb N}^*$ is a \emph{(finitely branching) tree language} if the following conditions are satisfied: \vspace*{-0.05in} \begin{itemize} \item For all $w \in {\mathbb N}^*$ and all $j \in {\mathbb N}$, if $wj \in L$ then $w \in L$ and, for all $i<j$, $wi \in L$. \item For all $w \in L$, the set of all $i\in {\mathbb N}$ such that $wi\in L$ is finite. \end{itemize} \vspace*{-0.1in} \end{definition} A tree language $L$ is {\em finite} if it is a finite subset of ${\mathbb N}^*$, and {\em infinite} otherwise. Examples of finite and infinite tree languages are given in Figure~\ref{pic:tree}. We may call a word $w \in L$ a {\em node} of $L$. If $w = w_1w_2...w_l$, then a node $w_1w_2...w_k$ for $k < l$ is an {\em ancestor} of $w$. The node $w$ is the \emph{parent} of $wi$, and nodes $wi$ for $i \in {\mathbb N}$ are {\em children} of $w$.
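For finite sets, the two conditions of Definition~\ref{def:lang} are easy to check mechanically. In the following sketch (our own encoding), a word over ${\mathbb N}$ is a Python tuple of integers; for finite sets the finite-branching condition holds automatically, so only prefix-closedness and closure under smaller siblings need checking:

```python
def is_tree_language(L):
    """Check that a finite set of words (tuples of ints) is a tree language:
    if wj is in L, then so are w and every wi with i < j."""
    L = set(map(tuple, L))
    for word in L:
        if word:                          # word = wj for some j
            w, j = word[:-1], word[-1]
            if w not in L:                # prefix-closedness fails
                return False
            if any(w + (i,) not in L for i in range(j)):
                return False              # a left sibling wi is missing
    return True

# The finite tree language {eps, 0, 00, 01} from Figure pic:tree:
assert is_tree_language([(), (0,), (0, 0), (0, 1)])
# Not a tree language: node 1 is present but its sibling 0 is not.
assert not is_tree_language([(), (1,)])
# Not a tree language: the prefix (the empty word) is missing.
assert not is_tree_language([(0,)])
```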
A \emph{branch} of a tree language $L$ is a subset $L'$ of $L$ such that, for all distinct $w,v\in L'$, $w$ is an ancestor of $v$ or $v$ is an ancestor of $w$. If $L$ is a tree language and $w$ is a node of $L$, the \emph{subtree of $L$ at $w$} is $L \backslash w = \{v \mid wv \in L\}$. We can now define our three-tier calculus $T^3C$. \begin{comment} \begin{figure} \caption{\footnotesize{The finite tree language $\{\epsilon, 0, 00, 01\} \label{pic:trees} \end{figure} \end{comment} \vspace*{-0.1in} \section{Tier 1: Term Trees}\label{sec:t1} In this section, we introduce Tier 1 of $T^3C$, highlighting the structural properties of its objects (arity, branching, variables), the operation of first-order substitution, and the relevant calculus given by unification. \vspace*{0.1in} \noindent \textbf{3.1 Tier 1 structural properties: Signature as codomain, arity, and variables} \vspace*{0.05in} \noindent The trees of $T^3C$'s first tier are term trees over a (first-order) signature. A \emph{signature} $\Sigma$ is a non-empty set of \emph{function symbols}, each with an associated {\em arity}. The arity of $f \in \Sigma$ is denoted $\mathit{arity}(f)$. For example, $\Sigma_1 = \{\mathtt{stream}, \mathtt{scons}, \mathtt{0}\}$, with $\mathit{arity}(\mathtt{scons}) = 2$, $\mathit{arity}(\mathtt{stream}) = 1$, and $\mathit{arity}(\mathtt{0}) = 0$, is a signature. To define term trees over $\Sigma$, we also need a countably infinite set $\mathit{Var}$ of {\em variables} disjoint from $\Sigma$, each with arity $0$. We use capital letters from the end of the alphabet, such as $\mathtt{X}$, $\mathtt{Y}$, and $\mathtt{Z}$, to denote variables in $\mathit{Var}$. \begin{definition}\label{def:tt} Let $L$ be a non-empty tree language and let $\Sigma$ be a signature. A \emph{term tree} over $\Sigma$ is a function $t: L \rightarrow \Sigma \cup \mathit{Var}$ such that, for all $w\in L$, $\mathit{arity}(t(w)) = |\{i \mid wi \in L\}|$.
\end{definition} Structural properties of tree languages extend to term trees. For example, a term tree $t : L \rightarrow \Sigma \cup \mathit{Var}$ has depth $\mathit{depth}(t) = \max\{|w| \mid w\in L\}$. The subtree of $t$ at node $w$ is given by $t': (L \backslash w) \rightarrow \Sigma \cup \mathit{Var}$, where $t'(v) = t(wv)$ for each $v \in L \backslash w$. \begin{figure} \caption{\footnotesize{The two figures on the left depict the finite and infinite tree languages $\{\epsilon, 0, 00, 01\}$ and $\{\epsilon, 0, 1, 10, 11, 110, \ldots\}$; the two figures on the right depict the finite and infinite term trees $\mathtt{stream(scons(X,Y))}$ and $\mathtt{scons(0,scons(0,...))}$ over $\Sigma_1$ that these tree languages carry.}}\label{pic:tree} \end{figure} Term trees are finite or infinite according as their domains are finite or infinite. Term trees over $\Sigma$ may be infinite even if $\Sigma$ is finite. Figure~\ref{pic:tree} shows the finite and infinite term trees $\mathtt{stream(scons(X,Y))}$ and $\mathtt{scons(0,scons(0,...))}$ over $\Sigma_1$. The set of finite (infinite) term trees over a signature $\Sigma$ is denoted $\mathbf{Term}(\Sigma)$ ($\mathbf{Term}^\infty(\Sigma)$). The set of {\em all} (i.e., finite {\em and} infinite) term trees over $\Sigma$ is denoted by $\mathbf{Term}^\omega(\Sigma)$. Term trees with no occurrences of variables are \emph{ground}. We write $\textbf{GTerm}(\Sigma)$ ($\textbf{GTerm}^\infty(\Sigma)$, $\mathbf{GTerm}^\omega(\Sigma)$) for the set of finite (infinite, {\em all}) ground term trees over $\Sigma$. $\textbf{GTerm}(\Sigma)$ is also known as the Herbrand base for $\Sigma$, and $\mathbf{GTerm}^\omega(\Sigma)$ is known as the complete Herbrand base for $\Sigma$, in the literature~\cite{Llo87}. Both $\mathbf{GTerm}(\Sigma)$ and $\mathbf{GTerm}^\omega(\Sigma)$ are used to define the Herbrand model and complete Herbrand model (declarative) semantics of LP~\cite{Kow74,Llo87}. Additionally, $\mathbf{GTerm}^\omega(\Sigma)$ is used to give an operational semantics to SLD-computations at infinity in~\cite{Llo87,EmdenA85}.
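Definition~\ref{def:tt} suggests a direct encoding of finite term trees as finite maps from positions to symbols. The following sketch uses our own conventions (positions are tuples of integers; any symbol outside the signature's arity table is treated as a variable of arity $0$); it checks the arity constraint and computes subtrees as defined above:

```python
ARITY = {'stream': 1, 'scons': 2, '0': 0}      # the signature Sigma_1

def n_children(t, w):
    """Number of children of position w in the domain of t."""
    i = 0
    while w + (i,) in t:
        i += 1
    return i

def is_term_tree(t):
    """Check that branching matches the arity of each node's symbol
    (the domain is assumed to be a finite tree language)."""
    return all(n_children(t, w) == ARITY.get(sym, 0)   # variables: arity 0
               for w, sym in t.items())

def subtree(t, w):
    """The subtree of t at w: shift every position under w to the root."""
    return {v[len(w):]: sym for v, sym in t.items() if v[:len(w)] == w}

# stream(scons(X, Y)) as a map from {eps, 0, 00, 01} to Sigma_1 + Var:
t = {(): 'stream', (0,): 'scons', (0, 0): 'X', (0, 1): 'Y'}
assert is_term_tree(t)
assert subtree(t, (0,)) == {(): 'scons', (0,): 'X', (1,): 'Y'}
# Dropping a child of scons violates the arity constraint:
assert not is_term_tree({(): 'stream', (0,): 'scons', (0, 0): 'X'})
```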
\vspace*{0.1in} \noindent \textbf{3.2 Tier 1 operation: First-order substitution} \vspace*{0.05in} \noindent A \emph{substitution} of term trees over $\Sigma$ is a total function $\sigma: \mathit{Var} \to \mathbf{Term}(\Sigma)$. We write $\mathit{id}$ for the identity substitution. If $\sigma$ has finite support --- i.e., if $|\{\mathtt{X} \in \mathit{Var}\; |\; \sigma(\mathtt{X}) \not = \mathtt{X}\}| \in {\mathbb N}$ --- and if $\sigma$ maps the variables $\mathtt{X}_i$ to term trees $t_i$, respectively, and is the identity on all other variables, then we may write $\sigma$ as $\{\mathtt{X}_1 \mapsto t_1,..., \mathtt{X}_n \mapsto t_n\}$. The set of all substitutions over a signature $\Sigma$ is $\mathbf{Subst}(\Sigma)$. Substitutions are extended from variables to term trees homomorphically: if $t\in \mathbf{Term}(\Sigma)$ and $\sigma \in \mathbf{Subst}(\Sigma)$, then the {\em application} $\sigma(t)$ is defined by $(\sigma(t))(w) = t(w)$ if $t(w) \not \in \mathit{Var}$, and $(\sigma(t))(w) = (\sigma(X))(v)$ if $w = uv$, $t(u) = \mathtt{X}$, and $\mathtt{X} \in \mathit{Var}$. Composition of substitutions is denoted by juxtaposition, so $\sigma_2\sigma_1(t)$ is $\sigma_2(\sigma_1(t))$. Since composition is associative, we write $\sigma_3\sigma_2\sigma_1$ rather than $(\sigma_3\sigma_2)\sigma_1$ or $\sigma_3(\sigma_2\sigma_1)$. \vspace*{0.1in} \noindent \textbf{3.3 Tier 1 calculus: Unification} \vspace*{0.05in} \noindent A substitution $\sigma$ over $\Sigma$ is a \emph{unifier} for term trees $t$ and $u$ over $\Sigma$ if $\sigma(t) = \sigma(u)$, and a \emph{matcher} for $t$ against $u$ if $\sigma(t) = u$. A substitution $\sigma_1$ is {\em more general} than a substitution $\sigma_2$, denoted $\sigma_1 \leq \sigma_2$, if there exists a substitution $\sigma$ such that $\sigma \sigma_1(\mathtt{X}) = \sigma_2(\mathtt{X})$ for every $\mathtt{X} \in \mathit{Var}$. 
A substitution $\sigma$ is a {\em most general unifier} ({\em mgu}) for $t$ and $u$ if it is a unifier for $t$ and $u$, and is more general than any (other) such unifier. A {\em most general matcher} ({\em mgm}) is defined analogously. Both mgms and mgus are unique up to variable renaming. We write $t \sim_\sigma u$ if $\sigma$ is an mgu for $t$ and $u$, and $t \prec_\sigma u$ if $\sigma$ is an mgm for $t$ against $u$. Our notation is reasonable: unification is reflexive, symmetric, and transitive, but matching is reflexive and transitive only. Mgms and mgus can be computed using Robinson's seminal unification algorithm (see, e.g.,~\cite{Llo87,Pfen06}). Any standard unification algorithm (possibly represented by a system of sequent-like rules~\cite{Pfen06,FK15}) can be seen as the calculus of Tier 1. Additional details about unification and matching can be found in, e.g.,~\cite{BS01}. \vspace*{-0.1in} \section{Tier 2: Rewriting Trees}\label{sec:t2} In this section, we introduce Tier 2 of $T^3C$, highlighting the structural properties of rewriting trees: codomains comprising term trees and clauses, suitable notions of arity, the operation of Tier 2 substitution, and the relevant calculus given by rewriting tree transitions. \vspace*{0.1in} \noindent \textbf{4.1 Tier 2 structural properties: Terms and clauses as codomain, arity, and variables} \vspace*{0.05in} \noindent In LP, a {\em clause} $C$ over a signature $\Sigma$ is a pair $(A, [B_0,...,B_n])$, where $A \in \mathbf{Term}(\Sigma)$ and $[B_0, \ldots, B_n]$ is a list of term trees in $\mathbf{Term}(\Sigma)$. Such a clause $C$ is usually written as $A \gets B_0, \ldots , B_n$. The {\em head} $A$ of $C$ is denoted $\mathit{head}(C)$ and the {\em body} $B_0, \ldots , B_n$ of $C$ is denoted $\mathit{body}(C)$.
In $T^3C$, a clause over $\Sigma$ is naturally represented as a total function (also called $C$) from a finite tree language $L$ of depth $1$ to $\mathbf{Term}(\Sigma)$ such that $C(\epsilon) = \mathit{head}(C)$, and if $\mathit{body}(C)$ is $B_0, \ldots , B_n$ then, for each $i \in L \setminus \{\epsilon\}$, $C(i) = B_i$. The set of all clauses over $\Sigma$ is denoted by $\mathbf{Clause}(\Sigma)$. A {\em goal clause} $G$ over $\Sigma$ is a clause $? \gets B_0, \ldots, B_n$ over $\Sigma \cup \{?\}$. Here, $?$ is a specified symbol not occurring in $\Sigma \cup \mathit{Var}$, and $B_0, \ldots , B_n$ are term trees in $\mathbf{Term}(\Sigma)$. The goal clause $? \gets \;$ is called the {\em empty goal clause} over $\Sigma$. We consider every goal clause over $\Sigma$ to be a clause over $\Sigma$. The {\em arity} of a clause $A \gets B_0 , \ldots , B_n$ is $n+1$. The symbol $\mathit{head}(C)(\epsilon)$ is the \emph{predicate} of $C$. \begin{comment} \begin{example}\label{ex:bnstream} The program $P_5$ computes (finite and infinite) lists of bits: \vspace*{0.05in}\noindent 0. $\mathtt{bit(0)} \; \gets$\\ 1. $\mathtt{bit(1)} \; \gets$\\ 2. $\mathtt{bitlist(nil)} \; \gets$\\ 3. $\mathtt{bitlist(bcons(X,Y))} \; \gets \; \mathtt{bit(X)}, \mathtt{bitlist(Y)}$ \vspace*{0.05in}\noindent Its predicates are $\mathtt{bitlist}$ and $\mathtt{bit}$. \end{example} \end{comment} A \emph{logic program} over $\Sigma$ is a total function from a set $\{0,1,\dots,n\} \subseteq {\mathbb N}$ to the set of non-goal clauses over $\Sigma$. The set of all logic programs over $\Sigma$ is denoted $\mathbf{LP}(\Sigma)$. The {\em arity} of $P\in \mathbf{LP}(\Sigma)$ is the number $|\mathit{dom}(P)|$ of clauses in $P$. We extend substitutions from variables to clauses and programs homomorphically.
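One possible concrete encoding of these notions (ours, for illustration only) represents a clause $A \gets B_0, \ldots, B_n$ as the pair \texttt{(head, [B\_0,...,B\_n])} and a program as a list indexed by clause number, mirroring the total function from $\{0,\dots,n\}$ in the text:

```python
# Illustrative encoding (ours): clauses as (head, body) pairs over nested-tuple
# term trees; a program is a list of clauses indexed by clause number.

def head(C):
    return C[0]

def body(C):
    return C[1]

def clause_arity(C):
    # A clause A <- B_0,...,B_n has arity n+1, i.e. the length of its body.
    return len(body(C))

def program_arity(P):
    # |dom(P)|: the number of clauses in the program.
    return len(P)
```

For a nat-style two-clause program, \texttt{program\_arity} returns $2$ and the recursive clause has arity $1$.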
The variables of a clause $C$ can be renamed with ``fresh'' variables --- i.e., with variables that do not appear elsewhere in the current context --- to get a new $\alpha$-equivalent clause that can be used interchangeably with $C$. We assume variables have been thus \emph{renamed apart} whenever convenient. Renaming apart avoids circular (non-terminating) cases of unification and matching in LP. Under renaming, we can always assume that an mgm or mgu of a clause and a term is {\em idempotent}, i.e., that $\sigma \sigma = \sigma$. We now define the trees of Tier 2. Rewriting trees allow us to simultaneously track all matching sequences appearing in an LP derivation, and thus to see relationships between them. Since rewriting trees use only matching in their computation steps, they capture theorem proving (i.e., computations holding for {\em all} compatible term trees). By contrast, the Tier 3 derivation trees defined in Section~\ref{sec:t3} use full unification, and thus capture problem solving (i.e., computations holding only for {\em certain} compatible term trees). \begin{comment} \begin{figure} \caption{\footnotesize{The rewriting trees $\kw{rew} \label{pic:tree2} \end{figure} \end{comment} We distinguish two kinds of nodes in rewriting trees: {\em and-nodes} capturing terms coming from clause bodies, and {\em or-nodes} capturing the idea that every term tree can in principle match several clause heads. We also introduce or-node variables to signify the possibility of unification when matching of a term tree against a program clause fails. \begin{definition}\label{def:CT} Let $V_R$ be a countably infinite set of variables disjoint from $\mathit{Var}$.
If $P\in \cat{LP}(\Sigma)$, $C\in \cat{Clause}(\Sigma)$, and $\sigma \in \mathbf{Subst}(\Sigma)$ is idempotent, then $\kw{rew}(P,C, \sigma)$ is the function $T:\mathit{dom}(T) \rightarrow \cat{Term}(\Sigma) \cup \cat{Clause}(\Sigma) \cup V_R$, where $\mathit{dom}(T)$ is a non-empty tree language, satisfying the following conditions: \begin{enumerate} \item $T(\epsilon) = \sigma(C) \in \cat{Clause}(\Sigma)$ and, for all $i \in \mathit{dom}(C) \setminus \{\epsilon\} $, $T(i) = \sigma(C(i))$. \item For $w\in\mathit{dom}(T)$ with $|w|$ even and $|w| > 0$, $T(w) \in \cat{Clause}(\Sigma) \cup V_R$. Moreover,\\ -- if $T(w) \in V_R$, then $\{j\mid wj \in \mathit{dom}(T)\} = \emptyset$, and \\ -- if $T(w) = B \in \cat{Clause}(\Sigma)$, then there exists a clause $P(i)$ and an mgm $\theta$ for $P(i)$ against $\mathit{head}(B)$. Moreover, for every $j \in \mathit{dom}(P(i)) \setminus \{\epsilon\}$, $wj\in \mathit{dom}(T)$ and $T(wj) = \sigma(\theta(P(i)(j)))$. \item For $w\in\mathit{dom}(T)$ with $|w|$ odd, $T(w) \in \cat{Term}(\Sigma)$. Moreover, for every $i\in \mathit{dom}(P)$, we have\\ -- $wi\in \mathit{dom}(T)$, and\\ -- $T(wi) = \begin{cases} \sigma(\theta(P(i))) & \text{if } \mathit{head}(P(i)) \prec_\theta T(w) \text{ and} \\ \text{a fresh } X\in V_R &\text{otherwise} \end{cases}$ \item No other words are in $\mathit{dom}(T)$. \end{enumerate} A node $T(w)$ of $\kw{rew}(P,C,\sigma)$ is an {\em or-node} if $|w|$ is even and an {\em and-node} if $|w|$ is odd. The node $T(\epsilon)$ is the \emph{root} of $\kw{rew}(P,C, \sigma)$. If $P \in \cat{LP}(\Sigma)$, then $T$ is a {\em rewriting tree for $P$} if it is either the empty tree or $\kw{rew}(P,C,\sigma)$ for some $C \in \cat{Clause}(\Sigma)$ and $\sigma \in \mathbf{Subst}(\Sigma)$. \end{definition} The arity of a node $T(w)$ in $T = \kw{rew}(P,C,\sigma)$ is $\mathit{arity}(P)$ if $T(w) \in \mathbf{Term}(\Sigma)$, $\mathit{arity}(C)$ if $T(w) \in \mathbf{Clause}(\Sigma)$, and $0$ if $T(w) \in V_R$. 
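To make the or-node construction in item 3 of the definition concrete, the following Python fragment sketches one expansion step of an and-node: each program clause either matches the node's term tree (yielding the instantiated clause) or contributes a fresh Tier 2 variable. This is our illustration, not part of the formal definition; term trees are nested tuples with capitalised strings as variables, clauses are \texttt{(head, body)} pairs, and the third parameter $\sigma$ of $\kw{rew}$ is taken to be the identity.

```python
# One expansion step of an and-node (illustrative sketch, sigma = id).
import itertools

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def subst(theta, t):
    if is_var(t):
        return theta.get(t, t)
    f, args = t
    return (f, [subst(theta, a) for a in args])

def match(p, t, theta=None):
    """One-way matcher: binds variables of p only; returns None on failure."""
    theta = dict(theta) if theta else {}
    if is_var(p):
        if p in theta and theta[p] != t:
            return None
        theta[p] = t
        return theta
    if is_var(t):
        return None
    (f, ps), (g, ts) = p, t
    if f != g or len(ps) != len(ts):
        return None
    for a, b in zip(ps, ts):
        theta = match(a, b, theta)
        if theta is None:
            return None
    return theta

fresh = (f"X{k}" for k in itertools.count(1))   # stands in for V_R

def expand_and_node(P, t):
    """Children of an and-node labelled t: for clause i, either the matched
    clause theta(P(i)) or a fresh or-node variable when matching fails."""
    children = []
    for h, b in P:
        theta = match(h, t)
        if theta is None:
            children.append(("var", next(fresh)))
        else:
            children.append(("clause", (subst(theta, h),
                                        [subst(theta, x) for x in b])))
    return children
```

For a nat-style program in the spirit of $P_1$, expanding $\mathtt{nat(s(0))}$ yields a fresh or-node variable for the clause $\mathtt{nat(0)} \gets$ and the instantiated clause $\mathtt{nat(s(0))} \gets \mathtt{nat(0)}$ for the recursive clause.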
The role of the parameter $\sigma$ in the definition of $\kw{rew}$ will become clear when we discuss the notion of substitution for Tier 2. For now, we may think of $\sigma$ as the identity substitution. \begin{example}\label{ex:subst1} The rewriting trees $\kw{rew}(P_1,?\gets \mathtt{nat(s(X))},\mathit{id})$ and $\kw{rew}(P_1,?\gets \mathtt{nat(s(0))},\mathit{id})$ are shown in Figure~\ref{pic:tree2}. \end{example} A rewriting tree for a program $P$ is finite or infinite according as its domain is finite or infinite. We write $\mathbf{Rew}(P)$ for the set of finite rewriting trees for $P$, $\mathbf{Rew}^{\infty}(P)$ for the set of infinite rewriting trees for $P$, and $\mathbf{Rew}^\omega(P)$ for the set of all (finite and infinite) rewriting trees for $P$. In~\cite{KPS12-2}, a logic program $P$ is called (observationally) productive if every rewriting tree constructed for it is in $\mathbf{Rew}(P)$. Programs $P_1$ and $P_2$ are productive in this sense, whereas program $P_3$ is not. In future work, we will introduce methods that semi-decide observational productivity. \vspace*{0.1in} \noindent \textbf{4.2 Tier 2 operation: Substitution of rewriting trees for Tier 2 variables} \vspace*{0.05in} With rewriting trees as the objects of Tier 2 and a suitable notion of a Tier 2 variable, we can replay Tier 1 substitution at Tier 2 by defining Tier 2 substitution to be the replacement of Tier 2 variables by rewriting trees. However, in light of the structural dependency of rewriting trees on term trees in Definition~\ref{def:CT}, we must also incorporate first-order substitution into Tier 2 substitution. Exactly how this is done is reflected in the next definition. \begin{definition}\label{def:substs} Let $P \in \textbf{LP}(\Sigma)$, $C \in \mathbf{Clause}(\Sigma)$, $\sigma,\sigma' \in \mathbf{Subst}(\Sigma)$ idempotent, and $T = \kw{rew}(P,C,\sigma)$.
Then the rewriting tree $\sigma'(T)$ is defined as follows: \begin{itemize} \item for every $w \in \mathit{dom}(T)$ such that $T(w)$ is an and-node or non-variable or-node, $(\sigma'(T))(w) = \sigma'(T(w))$. \item for every $wi \in \mathit{dom}(T)$ such that $T(wi) \in V_R$, if $\theta$ is an mgm of $\mathit{head}(P(i))$ against $(\sigma'(T))(w)$, then $(\sigma'(T))(wiv) = \kw{rew}(P,\theta(P(i)),\sigma'\sigma)(v)$. (Note $v = \epsilon$ is possible.) If no mgm of $\mathit{head}(P(i))$ against $(\sigma'(T))(w)$ exists, then $(\sigma'(T))(wi) = T(wi)$. \end{itemize} \end{definition} \noindent Both items in the above definition are needed to ensure that, given a rewriting tree $T$ and a first-order substitution $\sigma$, $\sigma(T)$ satisfies Definition~\ref{def:CT}. \vspace*{-0.1in} \begin{example}\label{ex:subst} Consider the first rewriting tree $T$ of Figure~\ref{pic:tree2}. Given the first-order substitution $\sigma = \{X \mapsto 0\}$, the second tree of that figure gives $\sigma(T)$. Note that the Tier 2 variable $X_2$ is substituted by the one-node rewriting tree $\mathtt{nat(0) \gets}$ as a result. In addition, all occurrences of the first-order variable $\mathtt{X}$ in $T$ are substituted by $\mathtt{0}$ in $\sigma(T)$. \end{example} \begin{comment} \begin{figure*} \caption{\footnotesize{A tree transition relative to the variable $X_5$, induced by substitution $\{\mathtt{X} \label{fig:trans} \end{figure*} \end{comment} Drawing from Examples~\ref{ex:subst1} and \ref{ex:subst}, we would ideally like to formally connect the definition of a rewriting tree and Tier 2 substitution, and say that, given $T = \kw{rew}(P,C,\mathit{id})$ and a first-order substitution $\sigma$, $\sigma(T) = \kw{rew}(P,\sigma(C),\mathit{id})$. However, this does not hold in general, as was also noticed in~\cite{KPS12-2}. Given a clause $C = (t \gets t_1, \ldots , t_n)$, we say a variable $\mathtt{X}$ is \emph{existential} if it occurs in some $t_i$ but not in $t$.
The presence of existential variables shows why the third parameter in the definition of $\kw{rew}$ is crucial: \vspace*{-0.1in} \begin{example}\label{ex:conn} The graph connectivity program $P_4$ is given by \vspace*{0.05in}\noindent 0. $\mathtt{conn(X,X)} \gets$\\ 1. $\mathtt{conn(X,Y)} \gets \mathtt{edge(X,Z)}, \mathtt{conn(Z,Y)}$\\ 2. $\mathtt{edge(a,b)} \gets$\\ 3. $\mathtt{conn(b,c)} \gets$\\ \vspace*{-0.05in} \noindent Figure~\ref{fig:conn} shows rewriting trees $T = \kw{rew}(P_4,C,\mathit{id})$ and $T' = \kw{rew}(P_4,C,\theta)$, where $C = ? \gets \mathtt{conn(a,c)}$, and $\theta = \{\mathtt{Z'}\mapsto\mathtt{b}\}$. Note that $\theta(T) = T'$ but $\kw{rew}(P_4,\theta(C),\mathit{id}) \neq T'$. This happens because clause $1$ contains an existential variable $\mathtt{Z}$ in its body, and the construction of $\kw{rew}(P_4,\theta(C),\mathit{id})$ fails to apply the substitution $\theta$ down the tree. \end{example} Given $T = \kw{rew}(P,C,\sigma)$, for $T' = \theta(T) = \kw{rew}(P,C, \theta\sigma)$ to hold, we must make sure that the implicit renaming apart of variables performed when computing mgms during the rewriting tree construction is tuned in such a way that existential variables contained in the domain of $\theta$ are still in correspondence with the existential variables in $\kw{rew}(P,C, \theta\sigma)$. We achieve this by introducing a new renaming apart convention to supplement Definition~\ref{def:CT}. Given a program $P$ and a clause $P(i)$ with distinct existential variables $Z_1, \ldots ,Z_n \in \mathit{Var}$, we impose an additional condition on the standard \emph{renaming apart} procedure. During the construction of $T =\kw{rew}(P,C,\sigma)$, when an and-node $T(w)$ is matched with $\mathit{head}(P(i))$ via $\theta$ in order to form $T(wi) = \theta(P(i))$, $P(i)$'s existential variables $Z_1, \ldots ,Z_n$ must be renamed apart as follows: \vspace*{0.02in} -- We partition $\mathit{Var}$ into two disjoint sets called $V_U$ and $V_E$.
The set $V_E$ is used to rename existential variables apart, while $V_U$ is used to (re)name all other variables. -- Moreover, when computing an mgm $\theta$ for $T(w)$ and $P(i)$, every existential variable $Z_k$ from $Z_1, \ldots ,Z_n$ is renamed apart from variables of $T$ using the following indexing convention: $Z_k \mapsto E_{wi}^k$, with $E_{wi}^k \in V_E$. \vspace*{0.03in} \noindent When writing $T(wi) = \theta(P(i))$ we assume that the above renaming convention is already accounted for by $\theta$. This ensures that the existential variables will be uniquely determined and synchronized for every two nodes $T(w)$ and $T'(w)$ in $T = \kw{rew}(P,C,\sigma)$ and $T' = \kw{rew}(P,C, \theta\sigma)$. Subject to this renaming convention, the following theorem holds. \begin{comment} Variables occurring in $T$ that are not affected by mgms computed during the construction of $T$ can be seen as ``free variables" in $T$, and may be substituted by arbitrary term trees. The third parameter $\sigma$ used in Definition~\ref{def:CT} for $T = \kw{rew}(P,C,\sigma)$ tracks substitutions of such free variables and keeps them ``in sync'' with $T$'s computed mgus. This is necessary, since different substitutions of free variables can give rise to rewriting trees with very different properties. \end{comment} \begin{figure*} \caption{\footnotesize{The infinite rewriting trees $T$ and $T'$ for the program $P_4$ of Example~\ref{ex:conn}.}}\label{fig:conn} \end{figure*} \vspace*{-0.1in} \begin{theorem}\label{prop:sub-props} Let $P \in \textbf{LP}(\Sigma)$, $C \in \cat{Clause}(\Sigma)$, and $\theta, \sigma \in \mathbf{Subst}(\Sigma)$. Then $\theta(\kw{rew} (P, C, \sigma)) = \kw{rew} (P, C, \theta\sigma)$. \end{theorem}\vspace*{-0.1in} \emph{Proof.} Let $T = \kw{rew}(P, C, \sigma)$, and let $T' = \kw{rew} (P, C, \theta\sigma)$. We need to prove that $\theta(T) = T'$. The proof proceeds by induction on the depth of the tree $T$ and by cases on the types of nodes in $T$ and $\theta(T)$.
-- If $T(w)$ and $\theta(T(w))$ are non-variable or-nodes (including the case $T(\epsilon)$), then, by Definition~\ref{def:substs}, $\theta(T)(w) = \theta(T(w)) = \theta \sigma (C^*)$, where $C^*$ is either $C$ (i.e., it is a root node) or some $P(i) \in P$. But, by Definition~\ref{def:CT}, $T'(w) = \theta\sigma(C^*)$. (Here, the synchronisation of renamed existential variables is essential, as described.) -- If $T(w)$ and $\theta(T(w))$ are and-nodes, then the argument is similar. -- If $T(wi)$ is a variable or-node, then, by Definition~\ref{def:substs}, two cases are possible: \vspace*{0.02in} (1) If no mgm for $\theta (T(w))$ and $\mathit{head}(P(i))$ exists, then $\theta(T)(wi) = T(wi)$. But since $T'(w) = \theta(T(w))$, no mgm for $T'(w)$ and $\mathit{head}(P(i))$ exists either, so $T'(wi)$ is a variable or-node as well. \vspace*{0.02in} (2) If the mgm for $\theta (T(w))$ and $\mathit{head}(P(i))$ exists, then by Definition~\ref{def:substs}, $\theta(T)(wi) = \kw{rew}(P, \theta'(P(i)), \theta\sigma)(\epsilon)$, where $\theta'$ is the mgm of $\theta(T(w))$ and $\mathit{head}(P(i))$. The rest of the proof proceeds by induction on the depth of $\kw{rew}(P,\theta'(P(i)), \theta\sigma)$. \vspace*{0.02in} Base case. For the root $\theta(T)(wi) = \kw{rew}(P, \theta'(P(i)),\theta\sigma)(\epsilon)$, by Definition~\ref{def:CT} we have that $\kw{rew}(P,\theta'(P(i)), \theta\sigma)(\epsilon) =(\theta\sigma) (\theta'(P(i)))$. On the other hand, Definition~\ref{def:CT} also gives that $T'(wi) = (\theta\sigma)(\theta''(P(i)))$, where $\theta''$ is the mgm of $T'(w)$ and $\mathit{head}(P(i))$. Since $T'(w) = (\theta(T))(w)$ by the earlier argument for and-nodes, $\theta'$ and $\theta''$ are mgms of equal term trees and $\mathit{head}(P(i))$, so $\theta' = \theta''$. Then $T'(wi) = (\theta(T))(wi)$, as desired. Inductive case. We need only consider the situation when $T(wivj)$ is undefined, but $(\theta(T))(wivj)$ is defined.
By Definition~\ref{def:substs}, $\theta(T)(wivj) = \kw{rew}(P,\theta'(P(i)), \theta\sigma)(vj)$. This node can be either an and-node, a variable or-node, or a non-variable or-node. The first two cases are simple; we spell out only the last, more complex case. If $\theta(T)(wivj)$ is a non-variable or-node then, by Definition~\ref{def:CT}, it must be $(\theta\sigma)(\theta^*(P(j)))$, where $\theta^*$ is the mgm of $\theta(T)(wiv)$ and $\mathit{head}(P(j))$. On the other hand, Definition~\ref{def:CT} also gives that $T'(wivj) = (\theta\sigma)(\theta^{**}(P(j)))$, where $\theta^{**}$ is the mgm of $T'(wiv)$ and $\mathit{head}(P(j))$. Since $T'(wiv) = (\theta(T))(wiv)$ by the induction hypothesis, $\theta^*$ and $\theta^{**}$ are mgms of equal term trees and $\mathit{head}(P(j))$, so $\theta^* = \theta^{**}$. Thus $T'(wivj) = (\theta(T))(wivj)$, as desired. \vspace*{-0.242in} \begin{flushright} $\Box$ \end{flushright} \vspace*{0.1in} \noindent \textbf{4.3 Tier 2 calculus: Rewriting tree transitions} \vspace*{0.05in} The operation of Tier 2 substitution is all we need to define transitions among rewriting trees. Let $P \in \textbf{LP}(\Sigma)$ and $t\in \cat{Term}(\Sigma)$. If $\mathit{head}(P(i)) \sim_\sigma t$, then $\sigma$ is the \emph{resolvent} of $P(i)$ and $t$. If no such $\sigma$ exists then $P(i)$ and $t$ have {\em null resolvent}. A non-null resolvent is an \emph{internal resolvent} if it is an mgm of $P(i)$ against $t$, and it is an {\em external resolvent} otherwise. \vspace*{-0.1in} \begin{definition}\label{def:resapp} Let $P \in \textbf{LP}(\Sigma)$ and $T = \kw{rew}(P,C,\sigma') \in \textbf{Rew}^\omega(P)$. If $X = T(wi) \in V_R$, then the rewriting tree $T_{X}$ is defined as follows. If the external resolvent $\sigma$ for $P(i)$ and $T(w)$ is null, then $T_X$ is the empty tree. If $\sigma$ is non-null, then $T_X = \kw{rew}(P, C, \sigma \sigma')$.
\end{definition} \noindent If $T \in \mathbf{Rew}^\omega(P)$ and $X \in V_R$, then the computation of $T_X$ from $T$ is denoted $\kw{Trans}(P,T,X) = T_X$. If the other parameters are clear we simply write $T \rightarrow T_X$. The operation $T \rightarrow T_X$ is a \emph{tree transition} for $P$ and $C$. A {\em tree transition} for $P \in \mathbf{LP}(\Sigma)$ is a tree transition for $P$ and some $C \in \mathbf{Clause}(\Sigma)$. A (finite or infinite) sequence $T = \kw{rew}(P,C,\mathit{id}) \rightarrow T_1 \rightarrow T_2 \rightarrow \ldots$ of tree transitions for $P$ is a \emph{derivation} for $P$ and $C$. Each rewriting tree $T_i$ in the derivation is given by $\kw{rew}(P, \, C, \,\sigma_i\ldots \sigma_2 \sigma_1)$, where $\sigma_1, \sigma_2, \ldots$ is the sequence of external resolvents associated with the derivation. When we want to contrast the above derivations with SLD-derivations, we call them S-derivations, or derivations by \emph{structural resolution}. \begin{example}\label{ex:bl} Tree transitions for $P_1$ and $P_4$ are shown in Figures~\ref{pic:tree2} and~\ref{fig:conn}, respectively. \end{example} It is our current work to prove that S-derivations are sound and complete relative to the declarative semantics of LP; see also~\cite{FK15} for a comparative study of the operational properties of S-derivations and SLD-derivations. \vspace*{-0.2in} \section{Tier 3: Derivation Trees}\label{sec:t3} While the rewriting trees of Tier 2 capture transitions between Tier 1 term trees that depend on matching, the derivation trees of Tier 3 capture transitions between Tier 2 rewriting trees that depend on unification. Derivation trees thus allow us to simultaneously track all unification sequences appearing in an LP derivation. The {\em arity} of a rewriting tree $T$, denoted $\mathit{arity}(T)$, is the cardinality of the set $\mathit{indices}(T)$ of indices of variables from $V_R$ in $T$.
There is always a bijection $\mathit{pos}$ from $\mathit{indices}(T)$ to the (possibly infinite) initial segment $\{0, 1, \ldots\}$ of ${\mathbb N}$ of cardinality $\mathit{arity}(T)$. \begin{definition}\label{df:CD} If $P\in \cat{LP}(\Sigma)$ and $C\in \cat{Clause}(\Sigma)$, the \emph{derivation tree} $\kw{der}(P,C)$ is the function $D: \mathit{dom}(D) \rightarrow \mathbf{Rew}^\omega(P)$ such that $D(\epsilon) = \kw{rew}(P,C,\mathit{id})$, and if $w\in\mathit{dom}(D)$, $i < \mathit{arity}(D(w))$, and $i = \mathit{pos}(k)$, then $wi\in\mathit{dom}(D)$ and $D(wi)$ is $\kw{Trans}(P,D(w),X_k)$. \end{definition} \begin{comment} \begin{figure} \caption{\footnotesize{An initial portion of the infinite derivation tree for the program $P_5$ of Example~\ref{ex:bnstream} \label{pic:stream} \end{figure} \end{comment} For $P \in \mathbf{LP}(\Sigma)$ and $C \in \mathbf{Clause}(\Sigma)$, the derivation tree $\kw{der}(P,C)$ is unique up to renaming. If $P \in \mathbf{LP}(\Sigma)$, then $D$ is a {\em derivation tree for $P$} if it is $\kw{der}(P,C)$ for some $C \in \mathbf{Clause}(\Sigma)$. A derivation tree is finite or infinite according as its domain is finite or infinite. \begin{comment} \begin{example} Figure~\ref{pic:stream} shows an initial portion of the infinite derivation tree for $P_5$ of Example~\ref{ex:bnstream} and $? \gets \mathtt{bitlist(X)}$. \end{example} The novelty of $T^3C$ as compared to systems such as PROLOG and CoLP lies in its use of rewriting trees and their transitions, rather than transitions between lists of terms, to analyse LP derivations. As discussed and illustrated above, $T^3C$ thus allows us to capture a kind of ``lazy LP'' that provides a more fine-grained tool for investigating LP proof search than has thus far been available.
\begin{figure} \caption{\footnotesize{The finite derivation tree (consisting of just one rewriting tree) for the program $P_5$ of Example~\ref{ex:bnstream} \label{pic:fsd} \end{figure} \end{comment} Inductive programs like $P_1$ and coinductive programs like $P_2$ will have infinite derivation trees, so construction of the full derivation trees for such programs is infeasible. Nevertheless, finite initial fragments of derivation trees may be used to make coinductive observations about various routes for proof search. We are currently exploring this research direction. \vspace*{-0.3in} \section{Conclusions and Future Work}\label{sec:concl} This paper gives the first fully formal exposition of the Three Tier Tree Calculus $T^3C$ for S-resolution, relating ``laws of infinity'', ``laws of non-determinism'', and ``laws of observability'' of proof search in LP in a uniform, conceptual way. An implementation of derivations by S-resolution is available~\cite{sres}. The structural approach to LP put forth in this paper relies on the syntactic structure of programs rather than on their (operational, declarative, or other) semantics. In essence, it presents an LP analogue of the kinds of reasoning that types and pattern matching support in interactive theorem proving (ITP)~\cite{Agda,Coq}. Further study of this analogy is an interesting direction for future research. Our next steps will be to formulate a theory of universal and observational productivity of (co)recursion in LP, and to supply $T^3C$ with semi-decidable algorithms for ensuring program productivity (akin to guardedness checks in ITP). Formally proving that S-resolution is both inductively and coinductively sound is another of our current goals.
Since LP and similar automated proof search methods underlie type inference in ITP and other programming languages, S-resolution also has the potential to impact the design and implementation of typeful programming languages. This is another research direction we are currently pursuing. \pagebreak \label{lastpage} \end{document}
\begin{document} \begin{abstract} We consider the model of pushdown vector addition systems with resets. These consist of vector addition systems that have access to a pushdown stack and have instructions to reset counters. For this model, we study the coverability problem. In the absence of resets, this problem is known to be decidable for one-dimensional pushdown vector addition systems, but decidability is open for general pushdown vector addition systems. Moreover, coverability is known to be decidable for reset vector addition systems without a pushdown stack. We show in this note that the problem is undecidable for one-dimensional pushdown vector addition systems with resets. \noindent\textsc{Keywords.}~~ Pushdown vector addition systems; decidability \end{abstract} \maketitle \section{Introduction} Vector addition systems with states (VASS) play a central role for modelling systems that manipulate discrete resources, and as such provide an algorithmic toolbox applicable in many different fields. Adding a pushdown stack to vector addition systems yields so-called \emph{pushdown VASS} (PVASS), which are even more versatile: one can model, for instance, recursive programs with integer variables~\cite{atig11} or distributed systems with a recursive server and multiple finite-state clients, and PVASS can be related to decidability issues in logics on data trees~\cite{lazic13}. However, this greater expressivity comes at a price: the \emph{coverability problem} for PVASS is only known to be decidable in dimension one~\cite{leroux2015coverability}. This problem captures most of the decision problems of interest and in particular safety properties, and is the stumbling block in a classification for a large family of models combining pushdown stacks and counters~\cite{zetzsche15}.
Another viewpoint on one-dimensional PVASS~\cite{leroux2015coverability} is to see those systems as extensions of two-dimensional VASS, where one of the two counters is replaced by a pushdown stack. In this context, a complete classification with respect to decidability of coverability, and of the more difficult \emph{reachability problem}, was provided by Finkel and Sutre~\cite{FinkelSutre2000}, \begin{table}[th] \centering \caption{Decidability status of the coverability and reachability problems in extensions of two-dimensional VASS; our contribution is indicated in bold.\label{tab-decidability}} \newcolumntype{Y}{>{\raggedright\arraybackslash}X} \definecolor{llg}{cmyk}{0,0,0,0.04} \begin{subtable}{0.45\textwidth} \centering \caption{Coverability problem.\label{tab-cov}} \begin{tabularx}{\textwidth}{YYYYY|Y} $\mathbb{N}ctr$ & $\mathbb{N}ctrReset$ & $\mathbb{N}ctrTransfer$ & $\mathbb{N}ctrZero$ & $\mathsf{PD}$ &\\ \hline D~\cite{karp69} & D~\cite{arnold78} & D~\cite{dufourd98} & D~\cite{reinhardt08} & D~\cite{leroux2015coverability} & $\mathbb{N}ctr$\\ \cellcolor{llg} & D~\cite{arnold78} & D~\cite{dufourd98} & D~\cite{FinkelSutre2000}& \textbf{U} & $\mathbb{N}ctrReset$ \\ \cellcolor{llg} & \cellcolor{llg} & D~\cite{dufourd98} & U~\cite{FinkelSutre2000}& U~\cite{FinkelSutre2000} & $\mathbb{N}ctrTransfer$ \\ \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & U~\cite{Minsky1961} & U~\cite{Minsky1961} & $\mathbb{N}ctrZero$ \\ \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & U & $\mathsf{PD}$ \end{tabularx} \end{subtable}\quad \begin{subtable}{0.45\textwidth} \caption{Reachability problem.\label{tab-reach}} \centering \begin{tabularx}{\textwidth}{YYYYY|Y} $\mathbb{N}ctr$ & $\mathbb{N}ctrReset$ & $\mathbb{N}ctrTransfer$ & $\mathbb{N}ctrZero$ & $\mathsf{PD}$ &\\ \hline D~\cite{leeuwen74}& D~\cite{reinhardt08} & D~\cite{reinhardt08} & D~\cite{reinhardt08} & ?? 
& $\mathbb{N}ctr$\\ \cellcolor{llg} & D~\cite{FinkelSutre2000}& D~\cite{FinkelSutre2000}& D~\cite{FinkelSutre2000}& \textbf{U} & $\mathbb{N}ctrReset$ \\ \cellcolor{llg} & \cellcolor{llg} & D~\cite{FinkelSutre2000}& U~\cite{FinkelSutre2000}& U~\cite{FinkelSutre2000} & $\mathbb{N}ctrTransfer$ \\ \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & U~\cite{Minsky1961} & U~\cite{Minsky1961} & $\mathbb{N}ctrZero$ \\ \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & \cellcolor{llg} & U & $\mathsf{PD}$ \end{tabularx} \end{subtable} \end{table} whether one uses plain counters ($\mathbb{N}ctr$), counters with resets~($\mathbb{N}ctrReset$), counters whose contents can be transferred to the other counter~($\mathbb{N}ctrTransfer$), or counters with zero tests~($\mathbb{N}ctrZero$); see \cref{tab-decidability}. In particular, two-dimensional VASS with one counter extended to allow resets and one extended to allow zero tests have a decidable reachability problem~\cite{FinkelSutre2000}: put differently, the coverability problem for one-dimensional PVASS \emph{with resets} ($1$-PRVASS) is decidable if the stack alphabet is of the form~$\{a,\bot\}$ where~$\bot$ is a distinguished bottom-of-stack symbol. \subsubsection*{Contributions.} In this note, we show that Finkel and Sutre's decidability result does not generalise to one-dimensional pushdown VASS with resets over an arbitrary finite stack alphabet. \begin{theorem}\label{thm} The coverability problem for $1$-PRVASS is undecidable. \end{theorem} \noindent As far as the coverability problem is concerned, this fully determines the decidability status in extensions of two-dimensional VASS where one may also replace counters by pushdown stacks~($\mathsf{PD}$); see \cref{tab-cov}. Technically, the proof of \cref{thm} presented in \cref{sec-red} reduces from the reachability problem in two-counter Minsky machines. 
The reduction relies on the ability to \emph{weakly implement}~\cite{mayr81} basic operations---like multiplication by a constant---and their inverses---like division by a constant. This in itself would not bring much; for instance, plain two-dimensional VASS can already weakly implement multiplication and division by constants. The crucial point here is that, in a \mbox{$1$-PRVASS}, we can also weakly implement the inverse of a \emph{sequence} of basic operations performed by the system, by using the pushdown stack to record the sequence of basic operations and later replaying it in reverse, and relying on resets to ``clean up'' between consecutive operations. Note that without resets, while PVASS are known to be able to weakly implement Ackermannian functions already in dimension one~\cite{leroux14}, they cannot weakly implement sublinear functions~\cite{leroux19}---like iterated division by two, i.e., logarithms. \section{Pushdown Vector Addition Systems with Resets}\label{sec-pvassr} A \emph{($1$-dimensional) pushdown vector addition system with resets ($1$-PRVASS)} is a tuple $\mathcal{V}=(Q,\Gamma,A)$, where $Q$ is a finite set of \emph{states}, $\Gamma$ is a finite set of \emph{stack symbols}, and $A\subseteq Q\times I^*\times Q$ is a finite set of \emph{actions}. Here, actions are labelled by finite sequences of \emph{instructions} from $I\eqby{def}\Gamma\cup\bar{\Gamma}\cup \{\mathsf{\mathord{+}},\mathsf{\mathord{-}},\mathsf{r}\}$, where $\bar\Gamma\eqby{def}\{\bar z\mid z\in\Gamma\}$ is a disjoint copy of~$\Gamma$. A $1$-PRVASS defines a (generally infinite) transition system acting over \emph{configurations} $(q,w,n)\in Q\times\Gamma^*\times\mathbb{N}$.
For an instruction~$x\in I$, $w,w'\in\Gamma^\ast$, and $n,n'\in\mathbb{N}$, we write $(w,n)\autstepp{x}(w',n')$ in the following cases: \begin{description} \item[push] if $x=z$ for $z\in\Gamma$, then $w'=wz$ and $n'=n$, \item[pop] if $x=\bar{z}$ for $z\in\Gamma$, then $w=w'z$ and $n'=n$, \item[increment] if $x=\mathsf{\mathord{+}}$, then $w'=w$ and $n'=n+1$, \item[decrement] if $x=\mathsf{\mathord{-}}$, then $w'=w$ and $n'=n-1$ (which requires $n\geq 1$), and \item[reset] if $x=\mathsf{r}$, then $w'=w$ and $n'=0$. \end{description} Moreover, for a sequence of instructions $u=x_1\cdots x_k$ with $x_1,\ldots,x_k\in I$, we have $(w_0,n_0)\autstepp{u}(w_k,n_k)$ if for some $(w_1,n_1),\ldots,(w_{k-1},n_{k-1})\in\Gamma^*\times\mathbb{N}$, we have $(w_i,n_i)\autstepp{x_{i+1}}(w_{i+1},n_{i+1})$ for all $0\leq i<k$. Finally, for two configurations $(q,w,n),(q',w',n')\in Q\times\Gamma^*\times\mathbb{N}$, we write $(q,w,n)\autstep[\mathcal{V}](q',w',n')$ if there is an action $(q,u,q')\in A$ such that $(w,n)\autstepp{u}(w',n')$. The \emph{coverability problem} for 1-PRVASS is the following decision problem. \begin{description} \item[given] a 1-PRVASS $\mathcal{V}=(Q,\Gamma,A)$, states $s,t\in Q$. \item[question] are there $w\in\Gamma^*$ and $n\in\mathbb{N}$ with $(s,\varepsilon,0)\autsteps[\mathcal{V}](t,w,n)$? \end{description} \section{Reduction from Minsky Machines}\label{sec-red} \newcommand{\amult}[1]{\mathcal{M}_{#1}} \newcommand{\ainvmult}[1]{\bar{\mathcal{M}}_{#1}} \newcommand{\adiv}[1]{\mathcal{D}_{#1}} \newcommand{\ainvdiv}[1]{\bar{\mathcal{D}}_{#1}} \newcommand{\atest}[1]{\mathcal{T}_{#1}} \newcommand{\ainvtest}[1]{\bar{\mathcal{T}}_{#1}} We present in this section a reduction from reachability in two-counter Minsky machines to coverability in 1-PRVASS.
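The single-step semantics defined in the previous section can be traced with a small interpreter. The following sketch is our own illustration, not part of the paper's construction; the encoding of instructions as single characters (a lowercase letter pushes that symbol, an uppercase letter pops its lowercase counterpart, and `+`, `-`, `r` act on the counter) is an assumption we make for compactness.

```python
# Minimal sketch of the 1-PRVASS instruction semantics (our own encoding):
# lowercase letter = push, uppercase letter = pop of its lowercase
# counterpart, '+'/'-' = increment/decrement, 'r' = reset.
def step(config, x):
    """Apply one instruction x to (w, n); return the successor or None."""
    w, n = config
    if x == '+':
        return (w, n + 1)
    if x == '-':
        return (w, n - 1) if n > 0 else None   # the counter stays in N
    if x == 'r':
        return (w, 0)                          # reset to zero
    if x.isupper():                            # pop \bar{z}
        z = x.lower()
        return (w[:-1], n) if w.endswith(z) else None
    return (w + x, n)                          # push z

def run(config, u):
    """Apply a sequence of instructions; None if some step is blocked."""
    for x in u:
        if config is None:
            return None
        config = step(config, x)
    return config
```

For instance, `run(("", 0), "aa++A-")` pushes two `a`'s, increments twice, pops one `a`, and decrements once, while a decrement on an empty counter blocks the whole sequence.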
\subsection{Preliminaries} Recall that a \emph{two-counter (Minsky) machine} is a tuple $\mathcal{M}=(Q,A)$, where $Q$ is a finite set of states and $A\subseteq Q\times \{0,1\}\times \{\mathsf{\mathord{+}},\mathsf{\mathord{-}},\mathsf{z}\}\times Q$ a set of actions. A \emph{configuration} is now a triple $(q,n_0,n_1)$ with $q\in Q$ and $n_0,n_1\in\mathbb{N}$. We write $(q,n_0,n_1)\autstep[\mathcal{M}](q',n'_0,n'_1)$ if there is an action $(q,c,x,q')\in A$ such that $n'_{1-c}=n_{1-c}$ and \begin{description} \item[increment] if $x=\mathsf{\mathord{+}}$, then $n'_c=n_c+1$, \item[decrement] if $x=\mathsf{\mathord{-}}$, then $n'_c=n_c-1$, and \item[zero test] if $x=\mathsf{z}$, then $n'_c=n_c=0$. \end{description} The \emph{reachability problem} for two-counter machines is the following undecidable decision problem~\cite{Minsky1961}. \begin{description} \item[given] a two-counter machine $\mathcal{M}=(Q,A)$, and states $s,t\in Q$. \item[question] does $(s,0,0)\autsteps[\mathcal{M}](t,0,0)$ hold? \end{description} \subsubsection*{G\"odel Encoding.} The first ingredient of the reduction is to use the well-known encoding of counter values $(n_0,n_1)\in\mathbb{N}\times\mathbb{N}$ as a single number $2^{n_0}3^{n_1}$; for instance, the pair $(0,0)\in\mathbb{N}\times\mathbb{N}$ is encoded by $2^03^0=1$. In this encoding, incrementing the first counter means multiplying by~$2$, decrementing the second counter means dividing by~$3$, testing the second counter for zero means verifying that the encoding is not divisible by~$3$, and so on. Note that, in each case, we encode the instruction as a partial function $g\colon \mathbb{N}\nrightarrow\mathbb{N}$; let us define its \emph{graph} as the binary relation $R\eqby{def}\{(m,n)\in\mathbb{N}\times\mathbb{N} \mid \text{$g$ is defined on $m$ and $g(m)=n$}\}$.
Thus the encoded instructions are the partial functions with the following graphs: \begin{align*} R_{m_f}&\eqby{def}\{(n,f\cdot n) \mid n\in\mathbb{N}\} & &\text{for multiplication,}\\ R_{d_f}&\eqby{def}\{(f\cdot n, n) \mid n\in\mathbb{N}\} & & \text{for division, and} \\ R_{t_f}&\eqby{def}\{(n,n) \mid n\not\equiv 0\bmod{f} \} & & \text{for the divisibility test,} \end{align*} for a factor $f\in\{2,3\}$. This means that we can equivalently see \begin{itemize} \item a two-counter machine with distinguished source and target states~$s$ and~$t$ as a regular language $M\subseteq\Delta^*$ over the alphabet $\Delta\eqby{def}\{m_f, d_f, t_f\mid f\in\{2,3\}\}$, and \item reachability as the existence of a word $u=x_1\cdots x_\ell$ in the language~$M$, with $x_1,\dots,x_\ell\in\Delta$, such that the pair $(1,1)$ belongs to the composition $R_{x_1}R_{x_2}\cdots R_{x_\ell}$. \end{itemize}
We shall rely on the following \lcnamecref{two-approximations}, which is proven in \cref{proof-two-app}. \begin{restatable}{proposition}{twoapprox}\label{two-approximations} If $R_1,\ldots,R_\ell\subseteq\mathbb{N}\times\mathbb{N}$ are strictly monotone relations, then $R_1R_2\cdots R_\ell=\rdown{R_1}\rdown{R_2}\cdots\rdown{R_\ell}\cap \ldown{R_1}\ldown{R_2}\cdots\ldown{R_\ell}$. \end{restatable} We shall thus construct in \cref{sec-rec} a $1$-PRVASS~$\mathcal{V}$ in which a particular state is reachable if and only if there exists a word~$u\in M$ with $u=x_1\cdots x_\ell$ and $x_1,\ldots,x_\ell\in\Delta$, such that $(1,1)\in \rdown{R_{x_1}}\cdots \rdown{R_{x_\ell}}$ and $(1,1)\in \ldown{R_{x_1}}\cdots \ldown{R_{x_\ell}}$. Since the relations $R_{m_f}$, $R_{d_f}$, and $R_{t_f}$ for $f\in\{2,3\}$ are strictly monotone, \cref{two-approximations} guarantees that this is equivalent to $(1,1)\in R_{x_1}\cdots R_{x_\ell}$. Intuitively, if we make a mistake in the forward phase $\rdown{R_{x_1}}\cdots \rdown{R_{x_\ell}}$, then at some point, we produce a number $n$ that is smaller than the correct result $\tilde n> n$. Then, the backward phase cannot compensate for that, because it can only make the results even smaller, and cannot reproduce the initial value. \subsection{Construction}\label{sec-rec} We now describe the construction of our $1$-PRVASS~$\mathcal{V}$. Its stack alphabet is $\Gamma\eqby{def}\Delta\cup \{\bot,\#,a\}$. In~$\mathcal{V}$, each configuration will be of the form $(q,\bot w\#a^n, k)$, where $w\in\Delta^*$, and $n,k\in\mathbb{N}$. In the forward phase, we simulate the run of the two-counter machine so that~$n$ is the G\"{o}del encoding of the two counters. In order to perform the backward phase, the word~$w$ records the instruction sequence of the forward phase. The resettable counter is used as an auxiliary counter in each weak computation step.
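Before turning to the construction, the identity in the proposition above can be checked by brute force on relations restricted to a bounded domain $\{0,\dots,N\}$: the restrictions are still strictly monotone, so the proposition applies to them verbatim. The following sketch is only an illustration of the statement, with helper names of our own choosing.

```python
from collections import defaultdict

# Brute-force illustration of the proposition on relations restricted to
# {0,...,N}; the restrictions remain strictly monotone.
N = 40
dom = range(N + 1)
R_m2 = {(n, 2 * n) for n in dom if 2 * n <= N}   # multiplication by 2
R_d2 = {(2 * n, n) for n in dom if 2 * n <= N}   # division by 2
R_t3 = {(n, n) for n in dom if n % 3 != 0}       # "not divisible by 3" test

def fwd(R):   # weak forward relation: the result may come out too small
    return {(m, n) for (m, t) in R for n in range(t + 1)}

def bwd(R):   # weak backward relation: the argument may be too small
    return {(m, n) for (t, n) in R for m in range(t + 1)}

def compose(R, S):
    succ = defaultdict(set)
    for k, n in S:
        succ[k].add(n)
    return {(m, n) for (m, k) in R for n in succ[k]}

def comp_all(rels):
    out = rels[0]
    for R in rels[1:]:
        out = compose(out, R)
    return out

rels = [R_m2, R_m2, R_d2, R_t3]
exact = comp_all(rels)
weak = comp_all([fwd(R) for R in rels]) & comp_all([bwd(R) for R in rels])
```

Here `weak` coincides with `exact`, whereas the forward compositions alone contain spurious pairs such as $(1,0)$: it is precisely intersecting with the backward compositions that removes them.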
\subsubsection*{Gadgets.} \begin{figure} \caption{Gadgets used in the reduction.} \label{gadgets} \end{figure} For each weak computation step, we use one of the gadgets from \cref{gadgets}; note that, for instance, ``$+^f$'' denotes the sequence of instructions $+\cdots +$ of length~$f$. Observe that we have: \begin{align} (q_1,\bot u\#a^m,0)&\autsteps[\amult{f}](q_3,\bot v\#a^n,0) & &\text{iff} & &v=um_f \text{ and }(m,n)\in\rdown{R_{m_f}} \\ (q_1,\bot u\#a^m,0)&\autsteps[\ainvmult{f}](q_3,\bot v\#a^n,0) & &\text{iff} & &u=vm_f\text{ and }(n,m)\in\ldown{R_{m_f}} \end{align} and analogous facts hold for $\adiv{f}$ and $\ainvdiv{f}$ (with $d_f$ instead of $m_f$) and also for $\atest{f}$ and $\ainvtest{f}$ (with $t_f$ instead of $m_f$). Let us explain this in the case of $\amult{f}$. In the loop at $q_1$, $\amult{f}$ repeatedly removes $a$ from the stack and adds $f$ to the auxiliary counter. When $\#$ is on top of the stack, the automaton moves to $q_2$ and changes the stack from $\bot u\#$ to $\bot um_f\#$. Therefore, once $\amult{f}$ is in $q_2$, it has set the counter to $f\cdot m$. In the loop at $q_2$, it repeatedly decrements the counter and pushes $a$ onto the stack, before eventually resetting the counter and moving to $q_3$. Thus, in state $q_3$, we have $0\leq n\leq f\cdot m$. \subsubsection*{Main Control.} Let $M\subseteq\Delta^*$ be accepted by the finite automaton $\mathcal{A}=(\Delta,Q,A,s,t)$. Schematically, our $1$-PRVASS~$\mathcal{V}$ is structured as in the following diagram: \[ \extrastates\] The part in the dashed rectangle is obtained from~$\mathcal{A}$ as follows. Whenever there is an action $(q,m_f,q')$ in~$\mathcal{A}$, we glue in a fresh copy of~$\amult{f}$ between~$q$ and~$q'$, including $\varepsilon$-actions from~$q$ to~$q_1$ and from~$q_3$ to~$q'$. The original action $(q,m_f,q')$ is removed. We proceed analogously for actions $(q,d_f,q')$ and $(q,t_f,q')$, where we glue in fresh copies of~$\adiv{f}$ and~$\atest{f}$, respectively.
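Returning to the gadget $\amult{f}$ explained above, its net effect on the counter value $m$ encoded on the stack can be traced in a few lines. This is a sketch under our reading of the gadget, with a hypothetical helper name; it merely tabulates the reachable outcomes rather than simulating the stack machinery itself.

```python
def mult_gadget_outputs(m, f):
    """Values n with bottom u m_f # a^n reachable at q3 of M_f,
    starting from bottom u # a^m at q1 with counter 0."""
    # Loop at q1: pop each of the m copies of 'a', adding f to the counter;
    # seeing '#' forces all of them to be popped, so the counter holds f*m.
    counter = f * m
    # Loop at q2: decrement and push 'a' any number of times up to f*m,
    # then reset the counter and move to q3.
    return set(range(counter + 1))
```

The largest reachable value is the exact product $f\cdot m$, and every smaller value can be "lost" along the way: this is exactly why the weak forward relation $\rdown{R_{m_f}}$ is the right abstraction of the gadget.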
Clearly, the part in the dashed rectangle realizes the forward phase as described above. Once it reaches~$t$, $\mathcal{V}$ can check if the current number stored on the stack equals~$1$ and, if so, move to state~$b$. In state~$b$, the backward phase is implemented. The $1$-PRVASS~$\mathcal{V}$ contains a copy of $\ainvmult{f}$, $\ainvdiv{f}$, and $\ainvtest{f}$ for each $f\in\{2,3\}$. Each of these copies can be entered from~$b$ and goes back to~$b$ when exited. Finally, the stack is emptied by an action from~$b$ to~$t'$, which can be taken if and only if the stack content is~$\bot\# a$. We can check that from $(s',\varepsilon,0)$, one can reach a configuration $(t',w,m)$ with $w\in\Gamma^*$ and $m\in\mathbb{N}$, if and only if there exists $u\in M$, $u=x_1\cdots x_\ell$, and $x_1,\ldots,x_\ell\in\Delta$, with $(1,1)\in \rdown{R_{x_1}}\cdots \rdown{R_{x_\ell}}\cap \ldown{R_{x_1}}\cdots \ldown{R_{x_\ell}}$. According to \cref{two-approximations}, the latter is equivalent to $(1,1)\in R_{x_1}\cdots R_{x_\ell}$. \section{Concluding Remarks} In this note, we have proven the undecidability of coverability in one-dimensional pushdown VASS with resets (cf.\ \cref{thm}). The only remaining open question in \cref{tab-decidability} regarding extensions of two-dimensional VASS is a long-standing one, namely the reachability problem for one-dimensional PVASS. Another fruitful research avenue is to pinpoint the exact complexity in the decidable cases of \cref{tab-decidability}. Here, not much is known except regarding coverability and reachability in two-dimensional VASS: these problems are \PSPACE-complete if counter updates are encoded in binary~\cite{blondin15} and $\mathsf{NL}$-complete if updates are encoded in unary~\cite{EnglertLT16}.
\appendix \newcommand{\vge}{\rotatebox[origin=c]{-90}{$\ge$}} \newcommand{\vgt}{\rotatebox[origin=c]{-90}{$>$}} \newcommand{\veq}{\rotatebox[origin=c]{-90}{$=$}} \section{Proof of \Cref{two-approximations}}\label{proof-two-app} It remains to prove \cref{two-approximations}. We will use the following lemma. \begin{lemma}\label{monotone-pairs} Let $R_1,\ldots,R_\ell\subseteq\mathbb{N}\times\mathbb{N}$ be strictly monotone relations and $(m,n)\in \rdown{R_1}\cdots\rdown{R_\ell}$ and $(m',n')\in \ldown{R_1}\cdots\ldown{R_\ell}$. If $n'\le n$, then $m'\le m$. Moreover, if $n'<n$, then $m'<m$. \end{lemma} \begin{proof} It suffices to prove the \lcnamecref{monotone-pairs} in the case $\ell=1$: then, the general version follows by induction. Let $(m,n)\in \rdown{R_1}$ and $(m',n')\in\ldown{R_1}$. Then there are $\tilde{n}\ge n$ with $(m,\tilde{n})\in R_1$ and $\tilde{m}\ge m'$ with $(\tilde{m},n')\in R_1$. If $n'<n$, then we have the following relationships: \[ \begin{matrix} m & R_1 & \tilde{n} \\ & & \vge \\ & & n \\ & & \vgt \\ \tilde{m}& R_1 & n' \\ \vge & & \\ m' & & \end{matrix} \] Since $R_1$ is strictly monotone, this implies $\tilde{m}<m$ and thus $m'<m$. The case $n'\le n$ follows by the same argument.\myqed \end{proof} We are now ready to prove \cref{two-approximations}. \twoapprox* \begin{proof} Of course, for any relation $R\subseteq\mathbb{N}\times\mathbb{N}$, one has $R\subseteq \rdown{R}$ and $R\subseteq\ldown{R}$. In particular, $R_1R_2\cdots R_\ell$ is included in both $\rdown{R_1}\rdown{R_2}\cdots\rdown{R_\ell}$ and $\ldown{R_1}\ldown{R_2}\cdots\ldown{R_\ell}$. For the converse inclusion, suppose $(m,n)\in \rdown{R_1}\rdown{R_2}\cdots\rdown{R_\ell}\cap \ldown{R_1}\ldown{R_2}\cdots\ldown{R_\ell}$. Then there are $p_0,\ldots,p_{\ell}\in\mathbb{N}$ with $p_0=m$, $p_\ell=n$, and $(p_{i-1},p_{i})\in \rdown{R_i}$ for $0< i\leq\ell$. 
There are also $q_0,\ldots,q_{\ell}\in\mathbb{N}$ with $q_0=m$, $q_\ell=n$, and $(q_{i-1},q_{i})\in \ldown{R_i}$ for $0<i\leq\ell$. Towards a contradiction, suppose that $(p_{i-1},p_i)\notin R_i$ for some $0<i\leq\ell$. Then there is a $\tilde{p}_i>p_i$ with $(p_{i-1},\tilde{p}_i)\in R_i$. With this, we have \[ \begin{matrix} m=p_0 & \rdown{R_1}\cdots \rdown{R_i}& \tilde{p}_i & & \\ & & \vgt & & \\ & & p_i & \rdown{R_{i+1}}\cdots \rdown{R_\ell} & p_\ell\\ & & & & \veq \\ m=q_0 & \ldown{R_1}\cdots\ldown{R_i} & q_i & \ldown{R_{i+1}}\cdots\ldown{R_\ell} & q_\ell \end{matrix} \] Since $p_\ell=q_\ell$, \cref{monotone-pairs} applied to $R_{i+1},\dots,R_\ell$ implies $q_i\le p_i$ and thus $q_i<\tilde{p}_i$. Applying \cref{monotone-pairs} to $R_1,\dots,R_i$ then yields $q_0<p_0$, a contradiction. Therefore, we have $(p_{i-1},p_i)\in R_i$ for every $0<i\leq\ell$ and thus $(m,n)\in R_1\cdots R_\ell$.\myqed \end{proof} \end{document}
\begin{document} \title{Intersection exponents for biased random walks \\ on discrete cylinders} \begin{abstract} We prove existence of intersection exponents $\xi(k,\lambda)$ for biased random walks on $d$-dimensional half-infinite discrete cylinders, and show that, as functions of $\lambda$, these exponents are real analytic. As part of the argument, we prove convergence to stationarity of a time-inhomogeneous Markov chain on half-infinite random paths. Furthermore, we show this convergence takes place at exponential rate, an estimate obtained via a coupling of weighted half-infinite paths. \noindent {\bf Keywords:} Intersection exponent, biased random walk, coupling \end{abstract} \section{Introduction} In this paper, we analyze biased random walks on $d$-dimensional discrete cylinders. The treatment of the model is self-contained and does not use previous results. Our main motivation in approaching this problem is to understand the arguments and techniques needed in the study of $3$-dimensional Brownian intersection exponents. In a series of papers, Lawler, Schramm and Werner studied Brownian intersection exponents in dimensions two and three. They also proved that, in dimension two, these exponents are analytic. We think their analyticity argument will apply in three dimensions, as long as one is able to get good estimates on the coupling rate of weighted Brownian motion paths. In two dimensions, the estimates are obtained using conformal invariance, and so they do not transfer to three dimensions. By analyzing a transient random walk on the cylinder, we will be able to highlight the techniques needed to prove analyticity of intersection exponents for Brownian motion, as well as the type of estimates needed for an exponential coupling rate. Let us note that analyzing random walks on cylinders has been of great interest in recent years.
We direct the reader to the work of Sznitman and Dembo on the disconnection of cylinders by random walks, such as \cite{Dembo} and \cite{Sznitman}. The recent work of Windisch \cite{Windisch} on the disconnection time of discrete cylinders by biased random walks, where the drift depends on the size of the base, is also worth noting. \subsection{Brownian intersection exponents} Let us begin with a brief introduction to Brownian intersection exponents. Consider a set of $k$ independent Brownian motion paths started at the origin and a set of $j$ independent Brownian motion paths started away from the origin, on the ball of radius one. The probability that the two packets reach level $e^n$ without intersecting decays exponentially in $n$ with exponent $\xi^{(BM)}(k,j)$. Roughly speaking, intersection exponents for Brownian motion are a measure of how likely it is that Brownian motion paths do not intersect. We now proceed to make this precise. For $d=2, 3$, $B^1_t, B^2_t,\dots,B^k_t$ will denote $k$ independent $d$-dimensional Brownian motions started at the origin. Let $X_t$ be another $d$-dimensional Brownian motion started on the ball of radius $1$ and independent of $B^1,\dots,B^k$. For $1\le i\le k$ we write $B^i[0,t]:=\{z\in {\mathbb{R}}^d: B^i_s=z \mbox{ for some } 0\le s\le t\}$ and similarly $X[0,t]:=\{z\in {\mathbb{R}}^d: X_s=z \mbox{ for some } 0\le s\le t\}$. For $1\le j\le k$, let \[\tilde{T}^j_n=\inf\{t: |B^j_t|\ge e^n\}\] be the first time $B^j$ reaches the sphere of radius $e^n$.
Similarly, let \[\tilde{T}_n=\inf\{t: |X_t|\ge e^n\}.\] Given $k$ paths, we let $Z_n^{(BM)}$ be the probability that another path avoids them, up to the first time both sets reach the sphere of radius $e^n$, as follows: \[Z_n^{(BM)}=\Prob\left\{X[0,\tilde{T}_n]\cap (B^1[0,\tilde{T}^1_n]\cup\cdots \cup B^k[0,\tilde{T}^k_n])=\emptyset \; \vert\; B^1[0,\tilde{T}^1_n]\cup\cdots\cup B^k[0,\tilde{T}^k_n]\right\}.\] Then the intersection exponent $\xi^{(BM)}(k,j)$ is defined as \[\xi^{(BM)}(k,j):=-\lim_{n\to \infty}\frac{\log{\bf E}[(Z^{(BM)}_n)^j]}{n}.\] One can further define generalized intersection exponents that, loosely speaking, describe non-intersection probabilities between non-integer numbers of Brownian paths. They were first introduced in \cite{LW}. In other words, the discrete sequence of intersection exponents can be replaced in a natural way by a continuous function \[\xi^{(BM)}(k,\lambda):=-\lim_{n\to \infty}\frac{\log{\bf E}[(Z^{(BM)}_n)^{\lambda}]}{n}.\] The existence of intersection exponents follows from a subadditivity argument. Alternative but equivalent ways of defining such exponents can be found in \cite{Fractal Prop}, \cite{Strict Concavity}. Brownian intersection exponents have been studied extensively. In dimensions four and higher, since two Brownian paths almost surely do not intersect, Brownian intersection exponents in these dimensions equal zero. Presently, all intersection exponents for planar Brownian motion are known (see \cite{LSW1}, \cite{LSW2}, \cite{LSW3}): intersection exponents for a wide range of values of $\lambda$ have been computed using the Schramm-Loewner evolution ($SLE$). Lawler, Schramm and Werner further proved that planar Brownian intersection exponents are analytic \cite{Anal Paper}, and consequently the exponents $\xi^{(BM)}_2(k,\lambda)$ were extended by analyticity to $\lambda>0.$ However, not much progress has been made in three dimensions.
As of this moment, the only known exponents are $\xi^{(BM)}_3(k,0)=0$ and $\xi^{(BM)}_3(2,1)=\xi^{(BM)}_3(1,2)=1$. In \cite{BL}, it was proved that the exponent $\xi^{(BM)}_3(1,1)$ is between $1/2$ and $1$, and \cite{Strict Concavity} implies it is strictly between $1/2$ and $1$. Simulations further suggest this exponent is around $0.58$ (see \cite{Simulation}). \subsection{Summary of results} Let $G$ be the half-infinite discrete cylinder $\mathbb {Z}\times \T$, where $\T$ is a $(d-1)$-dimensional torus of side $L$. We consider random walks on $G$ that move according to the following transition probabilities: \begin{equation}\label{rw_transitions} p(z,w)=\left\{ \begin{array}{ll} p/d & \mbox{if } w-z=(1,0)\\ (1-p)/d & \mbox{if } w-z=(-1,0)\\ 1/(2d) & \mbox{if } w-z=(0,\vec{1})\\ 0 & \mbox{otherwise} \end{array}\right. \end{equation} where $\vec{1}$ denotes any vector of norm one on $\T$. Here the second coordinate is a $(d-1)$-dimensional vector and addition is, as usual, taken modulo $L$ on $\T.$ Then the random walk is symmetric on $\T$, and it is a one-dimensional asymmetric random walk on $\mathbb {Z}$ with parameter $p>1/2$. The random variables $Z_n$ are defined for this process in the same manner we defined them for Brownian motion. Roughly speaking, $Z_n$ is the probability that, given a set of $k$ half-infinite paths up to level $n$, another random walk coming from negative infinity will reach level $n$ without hitting the given set of paths. In order to give a precise definition for $Z_n$, we introduce a class of paths that we will call \emph{nice}; these paths have the property that they can be avoided by a random walk coming from negative infinity and, in particular, they do not disconnect negative infinity and level zero. We note that the definition of \emph{nice} paths is such that two $h$-processes conditioned to avoid a given path can be coupled (see Section \ref{nice}).
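A quick way to sanity-check the transition probabilities \eqref{rw_transitions} is to enumerate the neighbours of a point together with their probabilities. The following sketch is our own helper, with the torus point stored as a $(d-1)$-tuple; the function names are assumptions made for illustration.

```python
import random

# Sketch of the kernel p(z, w) above: z = (h, y) with h in Z and y a point
# of the (d-1)-dimensional torus of side L (helper names are ours).
def transition_probs(z, p, d, L):
    h, y = z
    moves = [((h + 1, y), p / d), ((h - 1, y), (1 - p) / d)]
    for i in range(d - 1):          # the 2(d-1) unit moves on the torus
        for s in (1, -1):
            y2 = list(y)
            y2[i] = (y2[i] + s) % L
            moves.append(((h, tuple(y2)), 1 / (2 * d)))
    return moves

def step(z, p, d, L):
    """Sample one step of the biased walk."""
    moves = transition_probs(z, p, d, L)
    return random.choices([w for w, _ in moves],
                          weights=[pr for _, pr in moves])[0]
```

The probabilities sum to one, and the $\mathbb{Z}$-coordinate performs an asymmetric walk with drift $(2p-1)/d$ per step, which is positive exactly when $p>1/2$; this is the transience toward $+\infty$ used throughout.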
One can use this coupling to describe the hitting measure on any level of a random walk started at negative infinity and conditioned to avoid a given random walk path. A similar result for 3-dimensional Brownian motion has not yet been proved but it is expected to be true. In particular, one should be able to describe the hitting measure on the ball of radius one for a Brownian motion started close to another Brownian motion and conditioned to avoid it. The random walk intersection exponent $\xi(k,\lambda)$ is defined as \[\xi(k,\lambda):=-\lim_{n\to \infty}\frac{\log{\bf E}[Z_n^{\lambda}]}{n}.\] In Section \ref{sec:exponents} we use a subadditivity argument to show these exponents exist and are finite, with $\displaystyle{{\bf E}[Z_n^{\lambda}]}$ being logarithmically asymptotic to $\displaystyle{e^{-\xi(k,\lambda)n}}$. As long as we start with a \emph{nice} initial configuration, we further prove an important estimate: ${\bf E}[Z_n^{\lambda}]$ are within constant multiples of $e^{-\xi(k,\lambda) n}$ (see Proposition \ref{estconst}), which we will denote by \[{\bf E}[Z_n^{\lambda}]\asymp e^{-\xi(k,\lambda) n}.\] As mentioned before, one of our main tools is coupling of weighted paths. Starting with an initial configuration $\og_0$, we attach to it a random walk started at the endpoint of $\og_0$ and stopped when it first reaches level one. Call the resulting path $\og_1$. We will fix a large $N$ and condition the path $\og_1$ to survive up to level $N$, that is, $N-1$ additional levels. Then we weight the new path by the probability it will survive up to level $N$, and normalize it to obtain a probability measure. This procedure defines a time-inhomogeneous Markov chain $X_n$ on the space of \emph{nice} paths, depending on the initial configuration $\og_0$.
One of our main results is the following theorem, whose proof is the content of Section \ref{sec:Exp Convergence}: \begin{theorem}\label{expc} Let $X_n$ and $X_n'$ be Markov chains with $X_0=\og_0$ and $X'_0=\ogp_0$, respectively, induced by the weighting described above. There exist constants $C,\beta>0$ such that, for all $n\ge 1$, for all $\og_0,\ogp_0 \in \overline{\mathcal{A}}$, we can define $X_n$ and $X'_n$ on the same probability space $(\tilde{\Omega},\tilde{\mathcal{F}},\tilde{\Prob})$ with \[\tilde{\Prob}\{X_n\not\equiv_{n/2}X'_n\}\le Ce^{-\beta n},\] where $X_n\equiv_{k}X'_n$ means that $X_n$ and $X'_n$ have been coupled for the past $k$ steps. \end{theorem} Let us briefly describe how we will couple two Markov chains started with different initial configurations. The coupling is similar to the coupling used in \cite{Coupling Paper} for a one-dimensional Ising-type model. The maximal coupling is done only on the set of paths that have \emph{enough} connected cross-sections, and the other transition probabilities are then adjusted so that we obtain a probability measure on the set of \emph{nice} paths. Once the two chains are coupled, they do not necessarily remain coupled; in fact, if at the next level the paths do not have \emph{enough} connected cross-sections, they will decouple. However, if the two chains are coupled, the probability they will remain coupled for an additional step increases with the number of steps for which they have already been coupled. Once the coupling is set up, we can use roughly the same argument as in \cite{Coupling Paper} to prove Theorem \ref{expc}. As a result of this coupling, one can show that starting with a given measure $\nu$ on initial configurations $\og_0$, supported on $\overline{\mathcal{A}}$, if we let the process evolve, then the measure induced by this process on half-infinite paths converges to an invariant measure $\pi$.
\begin{theorem}\label{invariant} Let $\nu$ be a measure supported on $\overline{\mathcal{A}}$ and let $\nu^n$ be the measure on paths in $\overline{\mathcal{A}}$, whose density with respect to $\nu$ is \begin{equation*} \frac{Z_n^{\lambda}}{{\bf E}^{\nu}[Z_n^{\lambda}]}. \end{equation*} Then there exists a measure $\pi$ supported on $\overline{\mathcal{A}}$ such that $\nu^n$ converges to $\pi.$ Furthermore, if $\pi^n$ has density $\displaystyle{\frac{Z_n^{\lambda}}{{\bf E}^{\pi}[Z_n^{\lambda}]}}$ with respect to the limiting measure $\pi$, then $\pi^n=\pi.$ \end{theorem} The proof of this theorem is the content of Section \ref{sec:Invariant Measure}. Our main result can be found in Section \ref{sec:anal}. Using the same ideas and techniques that Lawler, Schramm and Werner used in \cite{Anal Paper}, we prove analyticity of the intersection exponent $\xi(k,\lambda)$. \begin{theorem} \label{anal_proof} For all $k\ge 1$, $\xi(k,\lambda)$ is a real analytic function of $\lambda$ in $(0,\infty).$ \end{theorem} The argument has similarities with the proof from \cite{Ruelle} that the free energy of a one-dimensional Ising model with exponentially decreasing interactions is an analytic function. The proof follows the structure of the proof for analyticity of planar Brownian intersection exponents, presented in \cite{Anal Paper}, and it differs from \cite{Anal Paper} only in the estimates we use. In fact, this suggests that, if similar estimates can be obtained for 3-dimensional Brownian exponents, the analyticity proof from \cite{Anal Paper} would immediately apply. Here is a brief outline of the proof. We will restrict ourselves to proving analyticity for $\xi(\lambda):=\xi(1,\lambda).$ We define linear operators $T_{\lambda}$ on a Banach space of functions defined on the set of \emph{nice} paths, namely functions that depend very little on how the path looks far away in the past.
$T_{\lambda}$ and the Banach space are described in Section \ref{Bspace}. The norm on the Banach space is a little different from the one in \cite{Anal Paper}, but it follows the same principle. We first show $T_z$ is an analytic function in a neighborhood of the positive real line in Section \ref{Tanal}. The existence of the spectral gap is established in Section \ref{sec:analyticity} by showing the exponential coupling rate implies $e^{-\xi(\lambda)}$ is an isolated simple eigenvalue of $T_{\lambda}$. The theorem then follows by a simple argument from operator theory. \subsection{A word on notation} Throughout the article, we will say a sequence $a_n$ is \emph{logarithmically asymptotic} to $e^{bn}$, which we will write as $a_n\approx e^{bn}$, if \[\lim_{n\to\infty}\frac{\log a_n}{n}=b.\] Also, we will use the notation $a_n\asymp b_n$ to denote the following: there exist constants $c$ and $C$ such that for all $n$, \[c b_n\le a_n\le C b_n.\] We will often use the notation $a_n=O(b_n)$, by which we mean that there exist constants $C, N>0$ such that for all $n\ge N$, \[a_n\le C\, b_n.\] Constants $c$, $c'$ and $C$ will denote arbitrary positive constants, independent of all other quantities involved in a given expression. Their value will be allowed to change from line to line. However, other constants, such as $a$, $\hat{c}$, $c_1$, $c_2$, will be fixed. \section{Random walk intersection exponents} \subsection{Paths on the cylinder} Let $\displaystyle{\T=(\mathbb {Z}/L\mathbb {Z})^{d-1}}$ be a $(d-1)$-dimensional torus and let \[G:=\mathbb {Z}\times\T, \hspace{.3in} d\ge 1,\] be the discrete $d$-dimensional infinite cylinder. The set $\{(i,y):y\in \T\}$ will be called \emph{the level $i$} of the cylinder, and for a point $z=(i,y)\in G$ we write $|z|=i$ to mean $z$ is on level $i$ of the cylinder.
Our main motivation in using the torus as a base for the cylinder is the property that every point on the torus ``looks the same.'' Therefore, one could equally well consider a finite connected regular graph as a generalization of $\T$. For $i\le j$, denote by $G_{i,j}$ the cylinder between levels $i$ and $j$, \[G_{i,j}=\{z\in \mathbb {Z}\times\T :i\le |z|\le j\}.\] We will write $G_j=G_{-\infty,j}$ for the half-infinite cylinder up to level $j$. We construct half-infinite paths on $G$ as follows. Let $\mathcal{X}$ be the set of all paths $\gamma:[0,t_{\gamma}]\to G$, starting on level 0 and ending when first reaching level 1: \[\mathcal{X}:=\left\{\gamma:[0,t_{\gamma}]\to G : |\gamma(0)|=0, |\gamma(t_{\gamma})|=1 \mbox{ and } |\gamma(t)|<1 \mbox{ for all } t<t_{\gamma}\right\}.\] We will refer to $t_{\gamma}$ as the time-duration of the path $\gamma$ and we will say $\gamma$ and $\gamma'$ are \emph{equal} if $\gamma'$ is a translation of $\gamma$ in the $\T$ direction of $G$. Let $\mathcal{X}_i$ be the set of all paths $\gamma_i$ starting on level $(i-1)$ and stopped when they reach level $i$. Then $\mathcal{X}_i$ is exactly $\mathcal{X}$ translated by $(i-1)$ in the $\mathbb {Z}$ direction of $G$. It will be convenient to think of elements $\gamma_i$ of $\mathcal{X}_i$ as paths in $\mathcal{X}$, translated accordingly. Let \[\mathcal{A}=\{ \overline{\gamma}_0=\dotsc\gamma_{-2}\gamma_{-1}\gamma_0 : \gamma_j \in \mathcal{X}_j \mbox{ and } \gamma_{j}(t_{\gamma_{j}})=\gamma_{j+1}(0) \mbox{ for } j< 0\}.\] That is, $\mathcal{A}$ is the set of half-infinite paths $\og_0$ constructed as a sequence of $\gamma_j \in \mathcal{X}_j$, $-\infty< j\le 0$. We will think of $\mathcal{A}$ as $\cdots \mathcal{X}\times \mathcal{X}$. More precisely, an element of $\cdots \mathcal{X}\times\mathcal{X}$ along with a position at time zero uniquely determines a path in $\mathcal{A}$. 
Each $\og_0$, as a sequence of paths, is in a one-one correspondence with a path indexed by time $\og_0(t): (-\infty, 0]\rightarrow G$. The construction of $\og_0(t)$ from $\og_0$ is left as a simple exercise. We will write $\og_0$ when we look at the path as a sequence of elements of $\mathcal{X}$, and we will write $\og_0(t)$ when we need to look at $\og_0$ as a sequence of points in the half-infinite cylinder. For $n\in\mathbb {Z}$, let $\mathcal{A}_n$ be the set $\mathcal{A}$ shifted by $n$ in the $\mathbb {Z}$ coordinate of $G$, and denote its elements by $\og_n$. Observe that for $n\ge 1$, $\og_n$ can be decomposed into $\og_n=\og_0 \gamma_1\cdots \gamma_n$, for some unique $\og_0\in \mathcal{A}$ and $\gamma_j\in \mathcal{X}_j$ for $1\le j\le n$. Let $\hat{\gamma}_n=\gamma_1\cdots\gamma_n$. \begin{definition} Let $\og_n=\dotsc\gamma_{n-1}\gamma_n$ and $\ogp_n=\dotsc\gamma^{\prime}_{n-1}\gamma^{\prime}_n$ be paths in $\mathcal{A}_n$. We write \[\og_n=_k\ogp_n\] if $\gamma_j=\gamma^{\prime}_j$ for all $n-k+1\le j\le n$, and we say $\og_n$ and $\ogp_n$ agree for the last $k$ levels. \end{definition} The half-infinite paths $\og_0$ that we will be studying are biased random walks coming from negative infinity. \subsection{Intersection exponent - existence}\label{sec:exponents} Let us consider $k$ independent random walks $S^1, \dots, S^k$ defined on the probability space $(\Omega, \mathcal{F},\Prob)$, starting on level zero of $G$, and evolving according to transition probabilities as in \eqref{rw_transitions}. Let $S$ be another random walk, defined on the probability space $(\Omega_1, \mathcal{F}_1, \Prob_1)$, with the same transition probabilities. We will use ${\bf E}$ and ${\bf E} _1$ for expectations with respect to $\Prob$ and $\Prob_1$, respectively. \begin{remark} Observe that we define $S$ on $(\Omega_1, \mathcal{F}_1, \Prob_1)$ and $S^1,\dots,S^k$ on $(\Omega, \mathcal{F}, \Prob)$. 
This notation may look unnatural, but it will simplify the exposition later in the paper. \end{remark} Define stopping times: for $1\le j\le k$, \[T_r^j=\inf\{t:|S_t^j|=r\},\] \[T_r=\inf\{t:|S_t|=r\}.\] $S[r,s]$ will denote the random set $\{S_l: r\le l \le s\}$, the set of points on the cylinder visited by the random walk from time $r$ to time $s$. Similarly, for $1\le j\le k$, $S^j[r,s]=\{S^j_l: r\le l \le s\}$. We start with a $k$-tuple $\Gamma_0:=(\og_0^1,\dots,\og_0^k)$ in $\mathcal{A}^k$, which will be called an initial configuration. For $1\le j\le k$, let $S^j$ be a random walk on the cylinder $G$, started at the endpoint of $\og_0^j$, and evolving, independently of $\og_0^j$, according to the transition probabilities in \eqref{rw_transitions}. Take the path $S^j$ stopped when it first reaches level $n$ and attach it to $\og_0^j$. This is a path from $-\infty$ to level $n$, and in particular it is an element of $\mathcal{A}_n$. We denote it by $\og_n^j$. Let $\tg_n^j$ be the path $\og_n^j$ shifted accordingly in the $\mathbb {Z}$ direction of $G$ so that it is an element of $\mathcal{A}$. For all $n\in\mathbb {Z}$, define $\Gamma_n:=(\og_n^1,\dots,\og_n^k)$ and $\tilde{\Gamma}_n:=(\tg_n^1,\dots,\tg_n^k)$. Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $\Gamma_0$ and the random walks $S^1,\dots,S^k$ up to the stopping times $T_n^1,\dots,T_n^k$: \[\mathcal{F}_n=\sigma\{\Gamma_0, S_t^j; t\le T_n^j, \mbox{ for $1\le j\le k$}\}.\] Then $\Gamma_n$ is an $\mathcal{F}_n$-measurable (path-valued) random variable. We define functions $Z_n: \mathcal{A}^k\to{\mathbb{R}}$ by \begin{equation*} Z_n(\Gamma_0)=\Prob_1\{S(-\infty,T_0]\cap \Gamma_0=\emptyset| S(-\infty,T_{-n}]\cap \Gamma_{-n}=\emptyset\}. \end{equation*} Equivalently, one can define $Z_n$ as \begin{equation}\label{defZ} Z_n(\tilde{\Gamma}_n)=\Prob_1\{S(-\infty,T_n]\cap \Gamma_n=\emptyset| S(-\infty,T_0]\cap \Gamma_0=\emptyset\}.
\end{equation} More precisely, \begin{equation}\label{eq:nice_A} Z_n(\tilde{\Gamma}_n)=\lim_{|x|\to-\infty}\Prob_1^x\{S[0,T_n]\cap \Gamma_n=\emptyset| S[0,T_0]\cap \Gamma_0=\emptyset\}, \end{equation} where the initial configuration $\Gamma_0$ is \emph{nice}, meaning that $\Gamma_0$ is such that this limit exists. Note also that we write conditioning with respect to the event \[S(-\infty,T_0]\cap \Gamma_0=\emptyset,\] which is a set of probability zero. By this conditioning we mean that on $(-\infty, T_0]$, $S$ is an $h$-process conditioned not to hit $\Gamma_0$. Given that $\Gamma_0$ is \emph{nice}, the conditioning is well-defined and the limit exists, as will be discussed in Section \ref{nice}. For now, let us assume the $Z_n$ are well-defined. Let us also consider a random walk started at $z$, on level zero, and define the following $\mathcal{F}_n$-measurable random variables \[\overline{Z}_{n,z}=\Prob_1^{z}\{S[0,T_n]\cap \Gamma_n= \emptyset\},\] \[\overline{Z}_n=\sup_{|z|=0}\Prob_1^{z}\{S[0,T_n]\cap \Gamma_n= \emptyset\}.\] Let $\displaystyle{q_n=\sup_{\Gamma_0 \in \mathcal{A}^k}{\bf E}^{\Gamma_0}[Z_n^{\lambda}]}$ and $\displaystyle{\overline{q}_n=\sup_{\Gamma_0 \in \mathcal{A}^k}{\bf E}^{\Gamma_0}[\overline{Z}_n^{\lambda}]}$. Now, \begin{eqnarray*} Z_n&=&\Prob_1\{S(-\infty,T_n]\cap \Gamma_n=\emptyset| S(-\infty,T_0]\cap \Gamma_0=\emptyset\}\\ &\le& \sup_{|z|=0}\Prob_1^z\{S[0,T_n]\cap \Gamma_n=\emptyset\}=\overline{Z}_n \end{eqnarray*} and taking expectations, we have $q_n\le \overline{q}_n$. \begin{proposition} There exists $\xi(k,\lambda)$ such that $q_n\approx e^{-n\xi(k,\lambda)}$, that is, \[\lim_{n\to \infty}\frac{\log{q_n}}{n}=-\xi(k,\lambda).\] \end{proposition} \begin{proof} Let $\overline{\mathcal{A}}^k$ be the set of all \emph{nice} $k$-tuples $\Gamma_0$.
For $n\ge 1$, define functions $\Phi_n:\overline{\mathcal{A}}^k\to {\mathbb{R}}$ by \[\Phi_n(\Gamma_0)=-\log Z_n(\Gamma_0).\] We will use $\Phi$ to denote $\Phi_1$ and use the shorthand $\Phi_m$ for $\Phi_m(\Gamma_0)$. We naturally let $\Phi_0=0$. It is easy to see that the functions $\Phi_m$ are additive; more precisely, \[\Phi_{n+m}(\tilde{\Gamma}_{n+m})=\Phi_n(\tilde{\Gamma}_{n})+\Phi_m(\tilde{\Gamma}_{n+m}).\] Then we have \begin{eqnarray*} q_{m+n}&=& \sup_{\Gamma_0}{\bf E}^{\Gamma_0}[e^{-\lambda \Phi_{m+n}}]\\ &=& \sup_{\Gamma_0}{\bf E}[{\bf E}^{\Gamma_0}[e^{-\lambda \Phi_{m+n}(\tilde{\Gamma}_{n+m})}|\mathcal {F}_n]]\\ &=& \sup_{\Gamma_0}{\bf E}^{\Gamma_0}[e^{-\lambda \Phi_n} {\bf E}^{\tilde{\Gamma}_n}[e^{-\lambda\Phi_m}]]\\ &\le&\sup_{\Gamma_0}{\bf E}^{\Gamma_0}[e^{-\lambda \Phi_n}q_m]\\ &\le& q_m q_n. \end{eqnarray*} Then $\log{q_{n+m}}\le\log{q_n}+\log{q_m}$, and using an easy subadditivity argument (see \cite{RW Lawler}), we get \begin{equation}\label{existence} \lim_{n\to \infty}\frac{\log{q_n}}{n}=\inf_{n}\frac{\log{q_n}}{n}=-\xi(k,\lambda), \end{equation} with $\xi(k,\lambda)$ possibly infinite. To see that $\xi(k,\lambda) <\infty$, suppose $S$ has avoided $\Gamma_0$ up to time $T_0$. Let the paths given by $S[T_0,T_n]$ and $S^1[0,T^1_n],\dots, S^k[0,T^k_n]$ be straight lines in the $\mathbb {Z}$ direction of $G$. Then $\Gamma_n\cap S(-\infty,T_n]=\emptyset$. This configuration occurs with probability $\left(p/d\right)^{(k+\lambda)n}$ if $k<|\T|-1$, and with probability at least $\left(p/d\right)^{(|\T|-1+\lambda)n}$ if $k\ge|\T|-1$. Therefore $\xi(k,\lambda)\le (|\T|-1+\lambda)\log (d/p)$. \end{proof} From now on we will only consider the case $k=1$ and analyze the exponent $\xi(\lambda):=\xi(1,\lambda)$. Proofs for $k>1$ are essentially the same. \subsubsection{Nice paths}\label{nice} Recall definition \eqref{eq:nice_A} of $Z_n$.
In this section we will present the technicalities involved in making sense of this definition. The reader is welcome to skip this section at a first reading. Let $\og_0$ be a path in $\mathcal{A}$. Let $D(\og_0)$ be the connected component of $G_0\setminus \og_0$ connecting level 0 to negative infinity. Of course, $\og_0$ might disconnect level 0 from negative infinity, in which case $D(\og_0)=\emptyset.$ If $D(\og_0)$ is non-empty, then it is unique for the following reason: $\og_0$ has only one point on level zero, so level zero minus $\og_0$ is connected, and hence there is at most one connected component of $G_0\setminus\og_0$ containing both $-\infty$ and level zero minus $\og_0$. For $j\le 0$, let $D_j=G_{j,j}\cap D(\og_0)$ be the set of sites on level $j$ that can be reached by a random walk from $-\infty$, conditioned to avoid $\og_0$. \begin{definition}\label{def:nice} $\og_0$ is \emph{nice} if $D(\og_0)\neq \emptyset$ and for all $n\ge 0$, there exists a $k>n$ such that \[\sum_{j=-k}^{-1} 1_{\left\{D_j \mbox{ is connected}\right\}}\ge n. \] Let $\overline{\mathcal{A}}$ be the set of all such \emph{nice} paths. \end{definition} This definition simply says that if $\og_0$ does not disconnect $-\infty$ from level zero and $G\setminus \og_0$ has infinitely many connected levels, then $\og_0$ is a \emph{nice} path. Let us now address the obvious question of conditioning on a set of measure zero in \eqref{eq:nice_A}. Let $\rho_0=\inf\{t> 0: S_t\in \og_0\}$ and \[h(z)=\Prob_1^z\{T_0<\rho_0\}.\] Denote the transition probabilities for the unconditioned random walk on $G$ by $p(z,w)$. Then \[h(z)=\sum_{w}p(z,w)h(w),\] where we sum over all neighbors of $z$. In other words, $h(z)$ is zero on the path and harmonic on the complement of the path. Suppose $z$ and $w$ are in $D_j(\og_0)$ and $D_j(\og_0)$ is connected. Then we have the following Harnack-type inequality: \begin{equation} \label{harnack} a\le \frac{h(z)}{h(w)}\le a^{-1},
\end{equation} where $a$ is the minimum, over all possible connected configurations $D_j$ and over all $z$ and $w$ in $D_j$, of the probability that starting at $z$ the random walk reaches $w$ before leaving $D_j$. Since the torus has finitely many sites, $0<a<1/(2d)$. The constant $a$ is used repeatedly in the estimates of this article. It depends only on the size of the torus, or, if generalized to a regular graph, on the structure of the graph. We start the random walk at $z\notin \og_0$ and we condition on the random walk surviving up to level zero ($T_0<\rho_0$) to obtain a process evolving according to the following transition probabilities: \begin{equation}\label{hprob} \overline{p}(z,w)= \frac{\Prob_1\{S_1=w; T_0<\rho_0 | S_0=z\}}{\Prob_1\{T_0<\rho_0 | S_0=z\}}=\frac{p(z,w)h(w)}{h(z)}. \end{equation} Thus conditioning on $\{S[0,T_0]\cap \og_0=\emptyset\}$ simply means that $S$ is an $h$-process conditioned to avoid $\og_0$, evolving according to the transition probabilities given in \eqref{hprob}. We can also define the hitting measure of level $-n$ by a random walk started at $|z|<-n$ and conditioned to avoid $\og_0$ up to time $T_0$ as \[\mu_{-n,z}(w)=\Prob_1^z\{S(T_{-n})=w; T_{-n}<\rho_0\}\frac{h(w)}{h(z)}.\] Then we can show that if two $h$-processes conditioned on avoiding a \emph{nice} path $\og_0$ are started far enough away, then they can be coupled by the time they hit a given level with high probability. In fact, the reason for choosing this definition of \emph{nice} paths was the need for such a coupling result. More general definitions of \emph{nice} paths can be given, but we use this one in our present work for simplicity. We prove the coupling result in the following lemma. \begin{lemma}\label{coupling0} Let $n\ge 0$ be given.
For every $\epsilon>0$, there exists an $m>n$ such that for all pairs $z,z'\in G_{-m}$, if $S$ and $S'$ are $h$-processes started at $z$ and $z'$ respectively, with transition probabilities as in \eqref{hprob}, we can define $S$ and $S'$ on the same probability space $(\Omega_1,\mathcal{F}_1,\overline{\mu})$ such that \[\overline{\mu}\{S(T_{-n})\neq S'(T'_{-n})\}<\epsilon/2.\] Furthermore, $||\mu_{-n,z}-\mu_{-n,z'}||<\epsilon$, where $\Vert\cdot\Vert$ denotes the total variation norm. \end{lemma} \begin{proof} Fix $n\ge0$ and let $\epsilon>0$ be given. Let $T_j$ and $T_j'$ be the hitting times of level $j$ by the $h$-processes $S$ and $S'$ respectively. If $D_j$ is connected, then the hitting measures on level $j+1$ for the two $h$-processes are within a constant factor of each other. More precisely, using \eqref{harnack}, one can show \[\frac{\mu_{j+1,z}(w)}{\mu_{j+1,z'}(w)}\ge a^2.\] Then we can maximally couple $S$ and $S'$ on the same probability space $(\Omega_1, \mathcal{F}_1,\overline{\mu})$, so that \[\overline{\mu}\{S(T_{j+1})\neq S'(T'_{j+1})\}=\frac{1}{2}||\mu_{j+1,z}-\mu_{j+1,z'}||\le\frac{1}{2}(1-a^2).\] For a detailed discussion of coupling and a proof of the existence of the maximal coupling, we refer the reader to \cite{Coupling}. Once $S$ and $S'$ are coupled, we run them together. If $G_{-m,-n}\cap D(\og_0)$ has at least $k$ connected cross-sections, then the two $h$-processes fail to couple by the time they reach level $-n$ only if they fail to couple at each of the connected cross-sections: \[\overline{\mu}\{ S(T_{-n})\neq S'(T'_{-n})\} \le\overline{\mu}\{ S(T_{j})\neq S'(T'_{j}) \mbox{ for all } j \in [-m,-n)\} \le \left(\frac{1-a^2}{2}\right)^k.\] Moreover, from the standard coupling inequality we obtain \[||\mu_{-n,z}-\mu_{-n,z'}||\le 2\overline{\mu}\{S(T_{-n})\neq S'(T'_{-n})\}\le 2\left(\frac{1-a^2}{2}\right)^k.\] Choose $k$ large enough such that $(\frac{1-a^2}{2})^k\le \epsilon/2$.
Then let $m$ be such that $G_{-m,-n}\cap D(\og_0)$ has at least $k$ connected levels. Since the complement of $\og_0$ has infinitely many connected cross-sections, $m$ is finite. \end{proof} We can now show that given a \emph{nice} path $\og_0$, conditioning on avoiding this path makes sense in \eqref{eq:nice_A}. We start with the following lemma, which basically says that given a \emph{nice} path $\og_0$, we can define a hitting measure on level $-n$ of the $h$-process induced by this conditioning. \begin{lemma} If $\og_0$ is \emph{nice}, then for each $n\ge 0$, there exists a unique limiting measure \[\mu_{-n}=\lim_{z\to -\infty} \mu_{-n,z}.\] \end{lemma} \begin{proof} Fix $n$. Let $\left(z_{k}\right)_{k=1}^{\infty}$ be a sequence in $G_{-n}\cap D(\og_0)$, with the property $\displaystyle{\lim_{k\to\infty}|z_k|=-\infty}$. Let $\epsilon>0$. By Lemma \ref{coupling0}, there exists $m>0$ such that for all $z, z' \in G_{-m}\cap D(\og_0)$, \[\displaystyle{\| \mu_{-n,z} - \mu_{-n,z'} \| <\epsilon}.\] Let $n_1$ be the smallest integer such that $\left(z_{k}\right)_{k=n_1}^{\infty}$ is in $G_{-m}\cap D(\og_0)$. Then for all $i$, $j \ge n_1$, $\| \mu_{-n,z_i} - \mu_{-n,z_j} \| <\epsilon$, and so for every $w$ on level $-n$, $\left\{\mu_{-n,z_k}(w)\right\}_{k=1}^{\infty}$ is a Cauchy sequence converging to some $\mu_{-n}(w)$. Then clearly $\mu_{-n,z} \Rightarrow \mu_{-n}$ as $z \to -\infty$. \end{proof} If one can define a probability measure on conditioned paths coming from $-\infty$, then the random function $Z_n$ in \eqref{eq:nice_A} is well defined. Let $\overline{\eta}_0$ be a half-infinite path.
We define $\upsilon_{k}$ to be the measure on $\overline{\eta}_0|_k=\eta_{-k+1}\eta_{-k+2}\dots\eta_0$, the restriction of $\overline{\eta}_0$ to the last $k$ elements of the path, in the following way: \[\upsilon_{k}(\overline{\eta}_0|_k)=\mathcal{M}(\overline{\eta}_0|_k)1_{\{\overline{\eta}_0|_k\cap\og_0=\emptyset\}}\frac{\mu_{-k}(w_{-k})}{h(w_{-k})},\] where $\mathcal{M}$ denotes the unconditioned random walk measure on paths and $w_{-k}$ is the first site on level $-k$ reached by $\overline{\eta}_0$. Note that $\{\upsilon_{k}\}_{k=1}^{\infty}$ is a consistent sequence of measures, and hence by the Kolmogorov extension theorem, it can be extended to a measure on half-infinite paths $\displaystyle{\upsilon:=\lim_{k\to \infty} \upsilon_{k}}$, which depends on the initial configuration $\og_0$. Thus, when we condition on avoiding $\og_0$ up to level zero, we mean that the measure induced by the $h$-process on half-infinite paths is given by $\upsilon$. \subsection{Exponent estimate} From the definition of the intersection exponent, we know that $\displaystyle{q_n\approx e^{-n\xi(\lambda)}}.$ However, for $\lambda$ restricted to a closed interval away from zero, we will show that $q_n$ and $\overline{q}_n$ are within multiplicative constants of $e^{-n\xi(\lambda)}$. Moreover, this will also hold for ${\bf E}^{\og_0}[Z_n^{\lambda}]$ for all $\og_0\in \overline{\mathcal{A}}$. We fix $\lambda_1>0$ and $\lambda_2<\infty$ and restrict $\lambda$ to $[\lambda_1,\lambda_2]$. \begin{proposition}\label{estconst} For every $0<\lambda_1<\lambda_2<\infty$, there exist positive constants $c_1$ and $c_2$ such that for all $n\ge 0$ and all $\lambda\in [\lambda_1,\lambda_2]$, \begin{equation}\label{asymp} c_1 e^{-\xi(\lambda)n}\le q_n\le c_2 e^{-\xi(\lambda)n}. \end{equation} Note that $c_1$ and $c_2$ can be chosen so they are independent of $\lambda\in [\lambda_1,\lambda_2]$.
\end{proposition} We will use the notation $q_n \asymp e^{-\xi(\lambda) n}$ to mean that $q_n$ is bounded as in \eqref{asymp}. The reason to restrict $\lambda$ between two values $\lambda_1$, $\lambda_2$ is to get constants uniform in $\lambda$. We proceed to prove the proposition, but we will first need a couple of estimates and technical lemmas. \subsubsection{Preparation lemmas} We define the following stopping times: \begin{itemize} \item for $j\le 0$, let $\eta_{j}=\min\{t>T_0: |S_t|=j\}$ and $\eta^1_j=\min\{t>T_0^1: |S_t^1|=j\}$ \item for $j>0$, let $\eta_{j}=\min\{t>T_j: |S_t|=j\}$ and $\eta^1_j=\min\{t>T_j^1: |S_t^1|=j\}$ \end{itemize} In other words, for non-positive $j$, $\eta_{j}$ is the first time after reaching level zero that $S_t$ returns to level $j$ and, for positive $j$, $\eta_{j}$ is the second time $S_t$ reaches level $j$. \begin{lemma} For any $z$ with $|z|=0$, \begin{equation}\label{ARWestimate} \Prob_1^z\{T_n<\eta_0\}\ge \frac{2p-1}{d}. \end{equation} \end{lemma} \begin{proof} Let $\displaystyle{\psi(x)=[(1-p)/p]^x}$. If $a<0<b$, one can easily compute \[\Prob_1\{T_{b}<T_{a}\}=\frac{\psi(0)-\psi(a)}{\psi(b)-\psi(a)}.\] See \cite{Durrett} for a standard proof of such a result. The random walk started on level zero will reach level $n$ before returning to level zero if and only if it first goes one step forward and then from level $1$ it reaches level $n$ before reaching level $0$. But, because the state space is a cylinder, starting from $1$ and reaching $n$ before $0$ is equivalent to starting at $0$ and reaching $n-1$ before $-1$. Hence we can use the above expression with $a=-1$ and $b=n-1$: \begin{equation*} \Prob_1^z\{T_n<\eta_0\}=\frac{p}{d}\cdot\Prob_1^{|z'|=1}\{T_{n}<T_{0}\}=\frac{p}{d}\cdot \frac{\psi(0)-\psi(-1)}{\psi(n-1)-\psi(-1)}.
\end{equation*} Now plugging in $\psi$, and recalling that $p>1/2$ and hence $p>1-p$, we obtain \[\frac{\psi(0)-\psi(-1)}{\psi(n-1)-\psi(-1)}=\frac{2p-1}{p}\cdot\frac{1}{1-[(1-p)/p]^n}\ge\frac{2p-1}{p},\] and the lemma follows immediately. Similarly, $\displaystyle{\Prob\{T_n^1<\eta_0^1\}\ge \frac{2p-1}{d}.}$ \end{proof} \begin{lemma}\label{lemA} For all $k$, all $\og_1 \in \mathcal{A}_1$, and all $z$ on level zero of $G$, \begin{equation}\label{estk} \Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset| S[0,T_1]\cap G_{-k}\neq\emptyset \}\le \frac{d}{p}\, \Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset\}. \end{equation} \end{lemma} \begin{proof} First, on the event $\{z \notin \gamma_1\}$, since we can avoid both $\og_0$ and $\gamma_1$ by simply taking one step in the $\mathbb {Z}$ direction of $G$, we have $\Prob^z_1\{S[0,T_1]\cap \og_1 =\emptyset\}\ge p/d$ and so, \[\Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset| S[0,T_1]\cap G_{-k}\neq\emptyset\}\le \frac{d}{p}\cdot \frac{p}{d} \le \frac{d}{p}\, \Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset\}.\] Secondly, on the event $\{z \in \gamma_1\}$, we also get \[\Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset| S[0,T_1]\cap G_{-k}\neq\emptyset\}=0= \frac{d}{p}\, \Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset\}.\] Furthermore, if $A$ is any set in the $\sigma$-algebra generated by $S$ up to time $T_1$, conditioning on this event $A$ instead of $\{S[0,T_1]\cap G_{-k}\neq\emptyset\}$ yields the same inequality as \eqref{estk}. \end{proof} From the proof of Lemma \ref{lemA}, also note that if $z$ and $z'$ are on level zero of $G$ and $z, z'\notin\gamma_1$, \begin{equation} \Prob_1^z\{S[0,T_1]\cap \og_1 =\emptyset\} \le \frac{d}{p}\, \Prob_1^{z'}\{S[0,T_1]\cap \og_1 =\emptyset\}. \end{equation} In order to show $q_n\asymp e^{-\xi(\lambda) n}$, we need some preliminary results, which we summarize in the two lemmas below. First we consider the random walk started at $z$, $|z|=0$, that reaches level $n$ without hitting $\og_n$, while staying above level $-1$.
We define the random variables \[\hat{Z}_{n,z}= \Prob_1^z\{S[0,T_n]\cap \og_n =\emptyset; T_n<\eta_{-1}\},\] \[\displaystyle{\hat{Z}_n=\sup_{|z|=0}\hat{Z}_{n,z}}.\] \begin{lemma}\label{separation1} There exists a constant $c$ such that for all $n$, $\hat{Z}_n\ge c\overline{Z}_n$. In particular, \[\displaystyle{\sup_{\og_0}{\bf E}^{\og_0}[\hat{Z}_{n}^{\lambda}]\ge c \overline{q}_n}.\] \end{lemma} \begin{proof} Suppose we start $S$ at $z$, with $|z|=0$. We prove the lemma by first looking at the random walk on the event that it falls below level zero before time $T_n$. Let \[Y_{n,z}=\Prob_1^z\{S[0,T_n]\cap\og_n=\emptyset; T_n>\eta_{-1}\}.\] Let $A_n$ be the event that the random walk $S$, up to time $T_n$, hits every site on level zero. Since the cross-section of the cylinder has a finite state space, for all $n$, $\Prob_1(A_n)\ge a$, for a constant $a>0$ that can be easily computed. Similarly, given $T_n>\eta_{-1}$, let $A_{\eta_{-1}}$ be the event that the random walk $S$, up to time $\eta_{-1}$, hits every site on level zero, with $\Prob_1(A_{\eta_{-1}}|T_n>\eta_{-1})\ge a$. Here $A_{\eta_{-1}}$ is a subset of $\{S[0,\eta_{-1}]\cap \og_n\neq \emptyset\}$, which implies \begin{eqnarray*} Y_{n,z}&\le& \Prob_1^z\{S[0,\eta_{-1}]\cap \og_n=\emptyset| T_n>\eta_{-1}\} \Prob_1^z\{T_n>\eta_{-1}\}\\ &&\quad \quad\times \Prob_1^z\{S[0,T_n]\cap \og_n=\emptyset |S[0,\eta_{-1}]\cap \og_n=\emptyset;T_n>\eta_{-1}\}\\ &\le&(1-a)\sup_{|z'|=-1}\Prob_1^{z'}\{S[0,T_n]\cap \og_n=\emptyset\}\le(1-a)\overline{Z}_{n}. \end{eqnarray*} Taking the supremum over all $z$ yields $\displaystyle{Y_n=\sup_{|z|=0}Y_{n,z}\le (1-a)\, \overline{Z}_n.}$ Since $\overline{Z}_n\le \hat{Z}_n+ Y_n$, this implies $\hat{Z}_n\ge a\overline{Z}_n$, which proves the lemma, with the constant $c=a^{\lambda_2}$ uniform in $\lambda$.
\end{proof} \begin{lemma}\label{separation2} There exists a constant $c$ such that for all $n$, all histories $\og_m$, and all $z \notin \og_m$ with $|z|=m$, \[{\bf E}^{\og_m}[\hat{Z}_{n+m,z}^{\lambda}1_{ \{T^1_{n+m}<\eta^1_m\}}]\ge c\, \overline{q}_n.\] \end{lemma} \begin{proof} Without loss of generality, assume $m=0$. Recall that $S$ is started on level zero of $G$. Let $x_0$ be the endpoint of $\og_0$, where we attach $\hat{\gamma}_n$. Consider another starting configuration $\ogp_0$ with endpoint at $x'_0$, to which we attach $\hat{\gamma}_n$ (translated accordingly in the $\T$ direction of $G$). We let $\og_n=\og_0\hat{\gamma}_n$ and $\ogp_n=\ogp_0\hat{\gamma}_n$. First note that if $x_0=x'_0$, and if the random walk $S$ does not fall below level zero, $S$ can only hit $\hat{\gamma}_n$, and so $\hat{Z}_{n,z}(\og_n)=\hat{Z}_{n,z}(\ogp_n)$. For the case when $\og_0$ and $\ogp_0$ do not have the same endpoint, we start one random walk at $z$ and consider $\hat{Z}_{n,z}(\og_n)$. We can find a point $z'$ on level zero of $G$ such that the ``relative position'' of $z'$ to $x'_0$ is the same as the ``relative position'' of $z$ to $x_0$. And again, if $S$ does not fall below level zero, we get \begin{equation}\label{hat1} \hat{Z}_{n,z}(\og_n)=\hat{Z}_{n,z'}(\ogp_n). \end{equation} Now, starting a random walk anywhere on level zero, away from $\og_0$, say at $z$, we claim that for all $z'$ on level zero with $z'\notin \og_0$, \begin{equation}\label{zz'} \overline{Z}_{n,z}1_{\{T_n^1<\eta^1_0\}}\ge a\overline{Z}_{n,z'}1_{\{T_n^1<\eta^1_0\}}. \end{equation} This is easy to see: starting the random walk at $z$, with probability greater than $a$ it will reach $z'$ before hitting level $1$ or returning to level $-1$, without hitting the starting point of $\hat{\gamma}_n$. Note that in this case, the random walk from $z$ to $z'$ will not hit $\hat{\gamma}_n$, and the claim follows.
Furthermore, by the same argument, \begin{equation}\label{hat2} \hat{Z}_{n,z}1_{\{T_n^1<\eta^1_0\}}\ge a\hat{Z}_{n,z'}1_{\{T_n^1<\eta^1_0\}}. \end{equation} Since \eqref{hat2} holds for all $z'$ not on $\og_0$, taking the supremum over all $z'$, along with equation \eqref{hat1}, gives \[\hat{Z}_{n,z}(\og_n)1_{\{T_n^1<\eta^1_0\}}\ge a\hat{Z}_{n}(\ogp_n)1_{\{T_n^1<\eta^1_0\}},\] so averaging over all $\hat{\gamma}_n$, we get \[{\bf E}^{\og_0}[\hat{Z}^{\lambda}_{n,z}1_{\{T_n^1<\eta^1_0\}}]\ge a^{\lambda_2}{\bf E}^{\ogp_0}[\hat{Z}_{n}^{\lambda}1_{\{T_n^1<\eta^1_0\}}].\] Observe that this inequality holds for any pair $\og_0$, $\ogp_0$, and thus, \[{\bf E}^{\og_0}[\hat{Z}^{\lambda}_{n,z}1_{\{T_n^1<\eta^1_0\}}]\ge a^{\lambda_2}\sup_{\ogp_0}{\bf E}^{\ogp_0}[\hat{Z}^{\lambda}_{n}1_{\{T_n^1<\eta^1_0\}}].\] Next we want to show there exists a constant $c'<1$ such that \begin{equation}\label{complement} \sup_{\og_0}{\bf E}^{\og_0}[\hat{Z}^{\lambda}_{n}1_{\{ T_n^1>\eta^1_0\}}]\le c'\,\sup_{\og_0}{\bf E}^{\og_0}[\hat{Z}^{\lambda}_{n}]. \end{equation} This, along with Lemma \ref{separation1}, will finish the proof with a constant equal to $(1-c')a^{2\lambda_2}$. In order to prove (\ref{complement}), we condition on the event that $\hat{\gamma}_n$ returns to level zero before reaching level $n$. Then a random walk started on level zero will avoid $\og_n$ only if it avoids the part of $\hat{\gamma}_n$ from the first return to level zero up to the hitting time of level $n$. If we denote this part of $\hat{\gamma}_n$ by $S^1[\eta^1_0,T_n^1]$, then \[{\bf E}^{\og_0}[\hat{Z}_{n}^{\lambda} | T_n^1>\eta^1_0]\le {\bf E}^{\og_0}[\sup_z\Prob_1^z\{S[0,T_n]\cap S^1[\eta^1_0,T_n^1]=\emptyset; T_n<\eta_{-1}\}^{\lambda}]\le \sup_{\og_0}{\bf E}^{\og_0}[\hat{Z}_{n}^{\lambda}],\] where the second inequality follows from observing that $\hat{Z}_{n,z}$ depends on $\og_0$ only through the endpoint of $\og_0$.
Recall that $\Prob\{T_n^1>\eta^1_0\}\le 1-(2p-1)/d$, hence taking $c'=1-(2p-1)/d<1\,$ completes our proof. Furthermore, for all $n$ and for all $\og_0$, we have ${\bf E}^{\og_0}[\overline{Z}_{n}^{\lambda}1_{\{ T_n^1<\eta^1_0\}}]\ge c\overline{q}_n$. \end{proof} \subsubsection{Proof of Proposition \ref{estconst}} We prove the proposition by showing that for all $n\ge 1$, both $q_n \asymp e^{-\xi(\lambda) n}$ and $\overline{q}_n \asymp e^{-\xi(\lambda)n}$. By subadditivity of $\log(q_n)$, we have \[\displaystyle{\lim_{n\to \infty}\frac{\log(q_n)}{n}=\inf_n \frac{\log(q_n)}{n}=-\xi(\lambda)}\] and thus, for all $n$, \begin{equation}\label{2} q_n\ge e^{-\xi(\lambda) n}. \end{equation} Note that this means $c_1=1$. We claim there exists a constant $\hat{c}$ such that for all $n$, $q_n\ge \hat{c}\overline{q}_n$. Then, along with the trivial inequality $q_n\le\overline{q}_n$, this implies \begin{equation}\label{1} q_n\asymp\overline{q}_n. \end{equation} On the event where $\og_n$ does not reach level zero after time $0$, the random walk $S$, coming from $-\infty$ and conditioned to avoid $\og_0$, will avoid $\og_n$ if and only if $S[T_0,T_n]$ avoids $\og_n$. This is the same as starting a random walk at $S(T_0)$ and avoiding $\og_n$. But, from (\ref{zz'}) we know this is bounded below by $a\overline{Z}_{n,z'}$, for all $z'\notin \og_0$. Taking the supremum over all $z'$, we have $Z_n 1_{\{T_n^1<\eta^1_0\}} \ge a\overline{Z}_n 1_{\{T_n^1<\eta^1_0\}}$. From Lemma \ref{separation2}, \[{\bf E}^{\og_0}[Z_n^{\lambda}1_{\{T_n^1<\eta^1_0\}}]\ge a^{\lambda}{\bf E}^{\og_0}[\overline{Z}_{n}^{\lambda}1_{\{T_n^1<\eta^1_0\}}]\ge a^{\lambda}\, c \overline{q}_n.\] We let $\hat{c}=a^{\lambda_2}\, c=a^{3\lambda_2}(2p-1)/d$ and then $q_n\ge \hat{c}\overline{q}_n$, with a constant uniform in $\lambda$. Furthermore, \begin{equation}\label{chat} {\bf E}^{\og_0}[Z_n^{\lambda}1_{\{T_n^1<\eta^1_0\}}]\ge \hat{c}{\bf E}^{\og_0}[Z_n^{\lambda}].
\end{equation} In the last part of the proof we will show there exists a constant $c_2$, uniform in $\lambda$, such that \begin{equation}\label{3} \overline{q}_n\le c_2 e^{-\xi(\lambda) n}. \end{equation} Then the proposition follows immediately from inequalities (\ref{1}), (\ref{2}) and (\ref{3}). To prove inequality (\ref{3}), we bound $\overline{Z}_{n+m,z}$ by the probability that $S$ avoids $\og_{n+m}$ while not going below level $n$ between times $T_n$ and $T_{n+m}$. We are considering this probability only on the event where $S^1$ reaches level $n+m$ before returning to level $n$. Intuitively, we want $S$ and $S^1$ to have a nice behavior from level $n$ onward, so we can ``separate'' what happens up to level $n$ from what happens from level $n$ to $m+n$. Then for every pair $n,m$ we have the following relation between $\overline{q}_{n+m}$, $\overline{q}_n$ and $\overline{q}_m$: \begin{equation*} \begin{split} \overline{q}_{n+m} &= \sup_{\og_0}{\bf E}^{\og_0}[\sup_{|z|=0}\Prob_1^z \{S[0, T_{m+n}]\cap \og_{m+n} =\emptyset\}^{\lambda}]\\ &\ge \sup_{\og_0}{\bf E}^{\og_0}[\sup_{|z|=0}\Prob_1^z \{S[0, T_{m+n}]\cap \og_{m+n} =\emptyset; S(T_n,T_{n+m}]\cap G_{n-1}=\emptyset\}^{\lambda}1_{\{T_{n+m}^1<\eta^1_n\}}]\\ &\ge \sup_{\og_0}{\bf E}^{\og_0}[\sup_{|z|=0}\overline{Z}_{n,z}^{\lambda} {\bf E}^{\og_0}[\Prob_1 \{S[0, T_{m+n}]\cap \og_{m+n} =\emptyset;\\ &\qquad\qquad\qquad\qquad S(T_n,T_{n+m}]\cap G_{n-1}=\emptyset |S[0, T_n]\cap \og_n =\emptyset\}^{\lambda}1_{\{T_{n+m}^1<\eta^1_n\}}|\mathcal{F}_n]]\\ &= \sup_{\og_0}{\bf E}^{\og_0}[\sup_{|z|=0}\overline{Z}_{n,z}^{\lambda} {\bf E}^{\og_n}[\hat{Z}_{n+m,S(T_n)}^{\lambda} 1_{\{T_{n+m}^1<\eta^1_n\}}]]\\ &\ge \sup_{\og_0}{\bf E}^{\og_0}[\sup_{|z|=0}\overline{Z}_{n,z}^{\lambda}(\,c\overline{q}_m)]\\ &\ge c \overline{q}_m\, \overline{q}_n \end{split} \end{equation*} with the second-to-last step following from Lemma \ref{separation2}.
Note that $\log(c\, \overline{q}_n)$ is a superadditive function, and then using (\ref{1}), \[\displaystyle{\sup_n \frac{\log(c\, \overline{q}_n)}{n}=\lim_{n\to \infty}\frac{\log(c\, \overline{q}_n)}{n}=\lim_{n\to \infty}\frac{\log(q_n)}{n}=-\xi(\lambda)}\] and so, for all $n$, $\overline{q}_n\le c_2 e^{-\xi(\lambda)n}$, where $c_2=\left[a^{2\lambda_2}(2p-1)/d\right]^{-1}$, and the proof is complete. From the proof of Proposition \ref{estconst}, we also get the following result: \begin{corollary}\label{estconst2} For all $n$ and all $\og_0\in \overline{\mathcal{A}}$, ${\bf E}^{\og_0}[Z_n^{\lambda}]\asymp e^{-\xi(\lambda)n}$. \end{corollary} \section{Exponential convergence of Markov chains}\label{sec:Exp Convergence} In this section we will first define a Markov process on pairs of non-disconnecting random walk paths on $G$. Each exponent $\xi(\lambda)=\xi(1,\lambda)$ will then be associated with such a Markov process, along with a weighting that corresponds to the value of $\lambda$. We will fix $\lambda$ and start two Markov chains $X$ and $X'$, with different initial configurations $\og_0$ and $\ogp_0$ respectively. The goal is to show that the two chains can be coupled at an exponential rate, and as a result we will be able to describe an invariant limiting measure on half-infinite paths. \subsection{Markov process on random walk paths}\label{MC} Fix $N$ large and start with an initial configuration $\og_0$ from $\overline{\mathcal{A}}$. Attaching a random walk started at the endpoint of $\og_0$ and run until it hits level $N$ gives a path $\og_N$, which, translated accordingly, produces $\tg_N$. If $\tg_N$ does not disconnect $-\infty$ from level zero, then it is a \emph{nice} path.
(This follows from the fact that $\hat{\gamma}_N$ is finite almost surely and hence one can find a level $m$ such that $\hat{\gamma}_N$ does not fall below level $m$; but $G_{m-N}\cap D(\tg_N)=G_{m}\cap D(\og_0)$ must have infinitely many connected levels, hence $\tg_N$ is \emph{nice}.) Each $\og_N$ is weighted by $\displaystyle{e^{-\lambda \Phi_N}}$ and then normalized by ${\bf E}^{\og_0}[e^{-\lambda \Phi_N}]$ to get a probability measure ${\bf Q}_N={\bf Q}_N^{\og_0}$. Clearly, paths $\og_N$ that cannot be avoided by an $h$-process from $-\infty$ to level $N$ will be assigned measure zero, and we will say that these paths ``do not survive'' up to level $N$. For a given $\og_0$, we denote the expectation with respect to the measure ${\bf Q}_N^{\og_0}$ by ${\bf E}_{{\bf Q}}^{\og_0}$. Let \[K_N(\og_0)=e^{\xi(\lambda)N}{\bf E}^{\og_0}[e^{-\lambda \Phi_N}].\] From Corollary \ref{estconst2}, we get $K_N(\og_0)\asymp 1$, i.e.\ the $K_N(\og_0)$ are bounded above and below by positive constants, uniform in $\og_0$ and $N$. Furthermore, we can show that $\displaystyle{\lim_{N\to\infty}K_N(\og_0)}$ exists. A proof of this result can be found in Section \ref{sec:analyticity}, as a part of the proof of analyticity of $\xi(\lambda).$ If $m<N$, we have \[{\bf E}^{\og_0}[e^{-\lambda \Phi_N}|\mathcal{F}_m]= e^{-\lambda \Phi_m}{\bf E}^{\tg_m}[e^{-\lambda \Phi_{N-m}}].\] Let $\mathcal{M}$ be the unconditioned random walk measure on paths. We let the path $\og_n$ evolve, starting with $\og_0$ and conditioned to survive up to level $N$.
Then, for this conditioned random walk, the one-step transition probabilities are \begin{equation}\label{transitions} Q_N(\tg_1|\tg_0)=e^{-\lambda \Phi(\tg_1)}e^{\xi(\lambda)}\frac{K_{N-1}(\tg_1)}{K_N(\tg_0)}\mathcal{M}(\hat{\gamma}_1). \end{equation} The $m$-step transition probabilities, for $m<N$, are given by \[ Q_N(\tg_m|\tg_0)=e^{-\lambda \Phi_m}e^{\xi(\lambda) m}\frac{K_{N-m}(\tg_m)}{K_N(\tg_0)}\mathcal{M}(\hat{\gamma}_m).\] Here we have used the fact that $\og_0=\tg_0$, and we will use the two notations interchangeably. It is straightforward to check that if $n+m\le N$, the distribution of $\tg_{m+n}$ under ${\bf Q}_N^{\tg_0}$ is the same as the distribution of $\tg_{m+n}$ under ${\bf Q}_{N-n}^{\tg_n}$. Intuitively, this should be the case since we condition on paths surviving up to level $N$, and once they have reached level $n$ they only have to survive another $N-n$ steps. The transition probabilities from \eqref{transitions} describe a time-inhomogeneous Markov chain $X_n$ taking values in $\overline{\mathcal{A}}$ and conditioned to survive for a total of $N$ levels, with $X_0=\og_0$ and $X_n:=\tg_n$ for $n\ge 1$. Starting with a different \emph{nice} initial configuration $\ogp_0$, we construct a Markov chain $X'_n$ with transition probabilities given by a formula similar to \eqref{transitions} and $X'_0=\ogp_0$. We will then show that $X_n$ and $X'_n$ can be coupled as in Theorem \ref{expc}. Let $\eta|_k$ denote the restriction to the last $k$ levels of a half-infinite path $\eta$ from $\overline{\mathcal{A}}$. Then Theorem \ref{expc} implies the following corollary.
\begin{corollary} \label{coupling} For all $\og_0, \ogp_0, \eta$ in $\overline{\mathcal{A}}$ and all $0\le k\le N/2$, \[\sum_{\eta|_k}\left|\frac{{\bf E}^{\og_0}[e^{-\lambda \Phi_N}; \tg_N=_k\eta]}{{\bf E}^{\og_0}[e^{-\lambda \Phi_N}]}- \frac{{\bf E}^{\ogp_0}[e^{-\lambda \Phi_N}; \tg'_N=_k\eta]}{{\bf E}^{\ogp_0}[e^{-\lambda\Phi_N}]}\right| =O(e^{-\beta_1 N}).\] \end{corollary} \begin{proof} Let $X_n$ and $X'_n$ be the Markov chains with initial configurations $\og_0$ and $\ogp_0$, respectively, evolving according to the transition probabilities given in \eqref{transitions}. Observe that \[\frac{{\bf E}^{\og_0}[e^{-\lambda \Phi_N}; \tg_N=_k\eta]}{{\bf E}^{\og_0}[e^{-\lambda \Phi_N}]}={\bf Q}_N^{\og_0}\{X_{N}=_k \eta\}.\] Then for all $k \le N/2$, we have \begin{eqnarray*} \sum_{\eta|_k}\left\vert{\bf Q}_N^{\og_0}\{X_{N}=_k \eta\}- {\bf Q}_N^{\ogp_0}\{X'_N=_k \eta\}\right\vert &\le& 2\tProb\{X_N\not\equiv_k X'_N\}, \end{eqnarray*} and the corollary follows from noting that for all $k\le N/2$, \[\displaystyle{\tProb\{X_{N}\not\equiv_k X'_{N}\}\le \tProb\{X_{N}\not\equiv_{N/2}X'_{N}\}\le Ce^{-\beta_1 N}.}\] \end{proof} This result implies the existence of an invariant measure on paths $\og_0\in \mathcal{A}$, which we will prove in Section \ref{sec:Invariant Measure}. The rest of Section \ref{sec:Exp Convergence} is devoted to proving Theorem \ref{expc}. First we present some technical lemmas, followed by the description of the coupling in Section \ref{sec:coup} and the proof of the theorem. \subsection{Preliminary estimates} For all $i\le 0$ and $\tg \in \overline{\mathcal{A}}$, let $J_i$ be the random variable that takes the value 1 if the $i$-th cross-section of $G\setminus \tg$ is connected, and is zero otherwise. Then for $k\ge j\ge 0$, the sum $\sum_{i=-k}^{-j}J_i $ represents the number of connected cross-sections between levels $-k$ and $-j$ in $G\setminus \tg$.
For all $k\ge0$ and $j>0$, we define the sets \[V_{k,j}(\delta)=\left\{\tg\in \overline{\mathcal{A}}: \sum_{i=-k}^{-k+j-1}J_i>\delta j\right\},\] \[W_{k,j}=\left\{\gamma\in \mathcal{X}_k: \gamma \cap G_{j}=\emptyset\right\}.\] Then $V_{k,j}(\delta)$ is the set of all $\tg$ in $\overline{\mathcal{A}}$ that have at least $\delta j$ connected cross-sections between levels $-k$ and $-k+j-1$, and $W_{k,j}$ is the set of all $\gamma$ started on level $k-1$ and reaching level $k$ without touching level $j$. The set $V_{k,j}(\delta)$ is large for $j$ large enough; that is, for an appropriate $\delta$, the ${\bf Q}_N$-probability that $\tg_k$ is not in $V_{k,j}(\delta)$ decays exponentially in $j$, independently of $\tg_0$. Therefore, the weighted measure put on paths that are not in $V_{k,k/2}$ is exponentially small. Moreover, we will show that the set $V_k$ of all paths that have enough connected cross-sections between levels $-k$ and $-k+j-1$ for all integers $j$ in $[k/2,k]$ is also large. Note that \[V_k(\delta)=\bigcap_{j=k/2}^k V_{k,j}(\delta),\] so the set $V_k(\delta)$ is a subset of $V_{k,k/2}(\delta)$. \begin{lemma}\label{largedev} There exist constants $\alpha'>0$ and $\delta>0$ such that for all $\og_0\in\mathcal{A}$, all $n\le N$ and $k\le n$, \[{\bf Q}_N^{\og_0}\{\tg_k\notin V_k(\delta)\}\le c\,e^{-\alpha' k/2}.\] \end{lemma} \begin{proof} Using Chebyshev's inequality, for all integers $j \in [k/2,k]$, \begin{eqnarray*} {\bf Q}_N^{\og_0}\{\tg_k\notin V_{k,j}(\delta)\} &=&{\bf Q}_N^{\og_0}\left\{\exp\{-t\sum_{i=-k}^{-k+j-1}J_i\} \ge e^{-t\delta j}\right\} \\ &\le& e^{t\delta j}{\bf E}^{\og_0}_{{\bf Q}}\left[\exp\{-t\sum_{i=-k}^{-k+j-1}J_i\}\right]. \end{eqnarray*} Suppose we know the path $\tg_k$ up to level $i-1$ (note that $i<0$, so we have the information from $\mathcal{F}_{k+i-1}$). What is the ${\bf Q}_N^{\tg_0}$-probability that the chain evolved so that level $i-1$ is connected?
If, starting from level $i-1$, the path moved forward in the $\mathbb {Z}$ direction of $G$ and did not return to this level, then level $i-1$ remained connected. It follows from \eqref{chat} that $\displaystyle{{\bf Q}_N^{\og_0}\{J_i=1|\mathcal{F}_{k+i-1}\}\ge \hat{c}}$, and so \begin{eqnarray*} {\bf E}_{{\bf Q}}^{\og_0}\left[\exp\{-t\cdot J_i\} \big|\, \mathcal{F}_{k+i-1}\right] &=& e^{-t}{\bf Q}_N^{\og_0}\{J_i=1|\mathcal{F}_{k+i-1}\}+{\bf Q}_N^{\og_0}\{J_i=0|\mathcal{F}_{k+i-1}\}\\ &\le& e^{-t}+1-\hat{c}. \end{eqnarray*} Using this, one can check that \begin{eqnarray*} {\bf E}_{{\bf Q}}^{\og_0}\left[\exp\{-t\sum_{i=-k}^{-k+j-1}J_i\}\right] &\le& (e^{-t}+1-\hat{c})^j. \end{eqnarray*} Therefore, $\displaystyle{{\bf Q}_N^{\og_0}\{\tg_k\notin V_{k,j}(\delta)\} \le e^{t\delta j}(e^{-t}+1-\hat{c})^j}$. Fix $t$ large enough that $e^{-t}+1-\hat{c}<1$, let $\alpha'=-\delta t-\log(e^{-t}+1-\hat{c})$, and choose $\delta\le 1/4$ small enough that $\alpha'>0$. We now let $V_{k,j}=V_{k,j}(\delta)$ for this particular choice of $\delta$. To get an estimate on the size of $V_k$, we need only consider $j\in[k/2,k]$. A path $\tg_k$ is not in $V_k$ if it fails to be in at least one of the sets $V_{k,j}$ for $j$ between $k/2$ and $k$: \[{\bf Q}_N^{\og_0}\{\tg_k\notin V_k\}\le \sum_{j=k/2}^k{\bf Q}_N^{\og_0}\{\tg_k\notin V_{k,j}\} \le \frac{e^{-\alpha' k/2}}{1-e^{-\alpha'}},\] and the lemma follows. \end{proof} We start with $\tg_0=_k\tg'_0$ satisfying $\tg_0\in V_k$ (which implies $\tg'_0\in V_k$ as well). To $\tg_0$ and $\tg'_0$ we attach the same $\hat{\gamma}_1\in W_{1,-k/2}$. We want to show that $e^{-\lambda\Phi(\tg_1)}$ is close to $e^{-\lambda\Phi(\tg'_1)}$, and that $K_N(\tg_0)$ is close to $K_N(\tg'_0)$. The estimate on how close these quantities are relies on a coupling of $h$-processes. Let $S$ and $S'$ be $h$-processes given by random walks conditioned to avoid $\og_0$ and $\ogp_0$, respectively.
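Before turning to the coupling of $h$-processes, we note that the large-deviation step in the proof of Lemma \ref{largedev} is elementary to check numerically: it only uses the fact that each level is connected with conditional probability at least $\hat{c}$. The following Python sketch is purely illustrative, with assumed values $\hat{c}=0.9$ and $\delta=1/4$ (the actual constant $\hat{c}$ from \eqref{chat} is not computed here); it certifies that the exponent $\alpha'$ can be made positive.

```python
import math

def alpha_prime(t, c_hat=0.9, delta=0.25):
    # Exponent in the bound Q{ tg_k not in V_{k,j} } <= [e^{t*delta}(e^{-t} + 1 - c_hat)]^j:
    # the bound decays exponentially in j precisely when alpha_prime(t) > 0.
    return -delta * t - math.log(math.exp(-t) + 1 - c_hat)

# A grid search over t is enough to certify a positive exponent alpha'.
best_t = max((t / 10 for t in range(1, 101)), key=alpha_prime)
assert alpha_prime(best_t) > 0
```

For these assumed values the maximum is attained near $t\approx 3.4$, where the exponent is comfortably positive; for a smaller $\hat{c}$ one would instead take $\delta$ smaller, exactly as in the proof.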
If $\og_0=_k\ogp_0 \in V_k$, then one can show that the $h$-processes $S$ and $S'$ can be coupled, with high probability, by the time they first hit level $-k/2$. \begin{lemma}\label{coupling1} There exist constants $c,\alpha''>0$ such that for all $k\ge 0$ and all $\og_0, \ogp_0 \in V_k$ with $\og_0=_k\ogp_0$, if $S$ and $S'$ are the $h$-processes described above, then $S$ and $S'$ can be defined on the same probability space $(\Omega_1, \mathcal{F}_1,\overline{\mu})$ such that \[\overline{\mu}\{S(T_{-k/2})\neq S'(T'_{-k/2})\}\le c\,e^{-\alpha'' k}.\] \end{lemma} \begin{proof} For $-k<j< -k/2$, let $\mu_{j}(x)$ be the hitting measure on level $j$ of the $h$-process $S$ conditioned to avoid the path $\og_0$, and let $\mu'_{j}(x)$ be the hitting measure on level $j$ of the $h$-process $S'$ conditioned to avoid the path $\ogp_0$; let $\mu_{j,x}$ and $\mu'_{j,x}$ denote the corresponding hitting measures for the processes started at $x$. Then, if level $j$ of $\og_0$ is connected, we claim there exists a constant $c>0$ such that \begin{equation}\label{eq:coupling1} \frac{\mu_{j+1}(w)}{\mu'_{j+1}(w)}\ge c. \end{equation} Let $h(x)=\Prob_1^x\{S[0,T_0]\cap \og_0=\emptyset\}$ and $h'(x)=\Prob_1^x\{S'[0,T'_0]\cap \ogp_0=\emptyset\}$. Using \eqref{harnack}, for all $ x, x'\notin \og_0$ with $|x|=|x'|=j$, we have \[\frac{\mu_{j+1,x}(w)}{\mu'_{j+1,x'}(w)}\ge \left(\frac{p}{d} a\right) \frac{h(w)}{h(x)}\frac{h'(x')}{h'(w)}\ge \left(\frac{p}{d} a^3\right) \frac{h(w)}{h(z)}\frac{h'(z)}{h'(w)},\] where $z=w-(1,0)$ is the last point on level $j$ touched by $S$ and $S'$ before reaching level $j+1$ at $w$. Furthermore, $\displaystyle{\frac{1-p}{d}h(z)\le h(w)\le \frac{d}{p}h(z)}$, and a similar inequality holds for $h'(w)$. Thus, for all $ x,x' \notin \og_0$ with $|x|=|x'|=j$, \[\frac{\mu_{j+1,x}(w)}{\mu'_{j+1,x'}(w)}\ge \frac{p^2(1-p)}{d^3}a^3>0.
\] Then inequality \eqref{eq:coupling1} follows from noting that $\displaystyle{\mu_{j+1}(w)=\sum_{|x|=j}\mu_{j}(x)\mu_{j+1,x}(w).}$ Recall that on $V_k$, both $D(\og_0)$ and $D(\og'_0)$ have at least $k\delta$ connected cross-sections between levels $-k$ and $-k/2$. Using the same coupling as in Lemma \ref{coupling0}, we can define $S$ and $S'$ on the same probability space $(\Omega_1,\mathcal{F}_1,\overline{\mu})$, with \[\overline{\mu}\{S(T_{-k/2})\neq S'(T'_{-k/2})\}\le 2\left(\frac{1-c}{2}\right)^{\delta k},\] and then let $\alpha''=-\delta\log\left(\frac{1-c}{2}\right)$ to complete the proof. \end{proof} We are now ready to estimate $K_N(\tg_0)/K_N(\tg'_0)$. \begin{proposition}\label{cnestimate} There exists $\beta>0$ such that for all $n\le N$, all $k\ge 0$, and all histories $\og_0,\ogp_0$ in $V_k$ with $\og_0=_k\ogp_0$, \[K_n(\og_0)=K_n(\ogp_0)[1+O(e^{-\beta k})].\] \end{proposition} \begin{proof} To simplify notation, let \[Z_n^*(\tg_n)=\Prob_1\{S(-\infty,T_n]\cap \og_n=\emptyset \,|\, S(-\infty,T_0]\cap\og_0=\emptyset; S[T_0,T_n]\cap G_{-k}\neq\emptyset\}.\] Let $\mathcal{U}$ be the event $\{\hat{\gamma}_n\cap G_{-k/2}=\emptyset\}$. Then on $\mathcal{U}$, using the coupling result from Lemma \ref{coupling1} and an estimate similar to \eqref{ARWestimate}, \begin{equation}\label{bound} Z_n(\tg_n)\le Z_n(\tg'_n)+\left(\frac{1-p}{p}\right)^k Z_n^*(\tg_n)+ \left(c\,e^{-\alpha'' k}\right)Z_n(\tg_n). \end{equation} Using Lemma \ref{separation2}, it is easy to show that \begin{equation*} {\bf E}^{\og_0}[(Z^*_n)^{\lambda}1_{\mathcal{U}}]\le c{\bf E}^{\og_0}[Z_n^{\lambda}1_{\mathcal{U}}], \end{equation*} for some constant $c$ uniform over all $\lambda$ and not depending on $k$. We will show \begin{equation*} {\bf E}^{\ogp_0}[Z_n^{\lambda}1_{\mathcal{U}}]\ge{\bf E}^{\og_0}[Z_n^{\lambda}1_{\mathcal{U}}][1-O(e^{-\beta k})], \end{equation*} where $\displaystyle{\beta=\lambda_1\min\{\alpha',\alpha'',\frac{1}{2}\log\frac{p}{1-p}\}}$. There are two cases to consider.
If $\lambda\le 1$, raising \eqref{bound} to the power $\lambda$, using the subadditivity inequality $(x+y+z)^{\lambda}\le x^{\lambda}+y^{\lambda}+z^{\lambda}$, and taking expectations, \[{\bf E}^{\og_0}[Z_n^{\lambda}1_{\mathcal{U}}] \le {\bf E}^{\ogp_0}[Z_n^{\lambda}1_{\mathcal{U}}]+ \left[c\left(\frac{1-p}{p}\right)^{\lambda k}+ e^{-\lambda\alpha'' k}\right]{\bf E}^{\og_0}[Z_n^{\lambda}1_{\mathcal{U}}].\] When $\lambda>1$, using the Minkowski inequality, \[ {\bf E}^{\og_0}[Z_n^\lambda 1_{\mathcal{U}}]^{1/\lambda}\le {\bf E}^{\ogp_0}[Z_n^\lambda 1_{\mathcal{U}}]^{1/\lambda}+\left[ c^{1/\lambda}\left(\frac{1-p}{p}\right)^k+ e^{-\alpha'' k}\right]{\bf E}^{\og_0}[Z_n^\lambda 1_{\mathcal{U}}]^{1/\lambda}. \] Collecting terms and raising to the power $\lambda$, \[{\bf E}^{\ogp_0}[Z_n^\lambda 1_{\mathcal{U}}]\ge {\bf E}^{\og_0}[Z_n^\lambda 1_{\mathcal{U}}]\left[1-c^{1/\lambda}\left(\frac{1-p}{p}\right)^k- e^{-\alpha'' k}\right]^{\lambda}\ge {\bf E}^{\og_0}[Z_n^{\lambda}1_{\mathcal{U}}][1-O(e^{-\beta k})]. \] We conclude that \begin{equation}\label{good} {\bf E}^{\og_0}[Z_n^{\lambda} 1_{\mathcal{U}}]\le {\bf E}^{\ogp_0}[Z_n^{\lambda}][1+O(e^{-\beta k})]. \end{equation} Now, conditioning on paths satisfying $\hat{\gamma}_n \cap G_{-k/2}\neq\emptyset$, and using Lemma \ref{separation2}, we can find a constant $c$, uniform over $\lambda$ and independent of $k$, such that \begin{equation}\label{bad} {\bf E}^{\og_0}[Z_n^{\lambda}|\hat{\gamma}_n \cap G_{-k/2}\neq\emptyset]\le c{\bf E}^{\og_0}[Z_n^{\lambda}]. \end{equation} Using equations \eqref{good} and \eqref{bad}, we get \begin{eqnarray*} {\bf E}^{\og_0}[Z_n^{\lambda}] &=&{\bf E}^{\og_0}[Z_n^{\lambda} 1_{\mathcal{U}}]+{\bf E}^{\og_0}[Z_n^{\lambda};\hat{\gamma}_n \cap G_{-k/2}\neq\emptyset]\\ &\le& [1+O(e^{-\beta k})]{\bf E}^{\ogp_0}[Z_n^{\lambda}] +c\,\left(\frac{1-p}{p}\right)^{k/2} {\bf E}^{\og_0}[Z_n^{\lambda}]\\ &=&[1+O(e^{-\beta k})]{\bf E}^{\ogp_0}[Z_n^{\lambda}], \end{eqnarray*} with the last step coming from $\beta\le \frac{1}{2}\log\left(\frac{p}{1-p}\right)$. The proposition follows immediately from the definition of $K_n(\og_0)$.
\end{proof} \begin{corollary}\label{c1estimate} For all $k$ and all $\og_0, \ogp_0 \in V_k$ with $\og_0=_k\ogp_0$, and $\gamma_1 \in W_{1,-k/2}$, \begin{equation*}\label{c1} e^{-\lambda\Phi (\tg'_1)}\ge e^{-\lambda\Phi (\tg_1)}[1-O(e^{-\beta k})]. \end{equation*} \end{corollary} \begin{proof} Since $\tg_0$ and $\tg'_0$ have the same endpoint, we can attach to both of them the same element of $\mathcal{X}$, namely $\gamma_1$. Let $\tg_1$ and $\tg'_1$ be the resulting paths, translated accordingly. From \eqref{bound}, using Lemma \ref{lemA}, in the case $n=1$ we get \[Z_1(\tg'_1)\ge Z_1(\tg_1) \left[1-e^{-\alpha'' k}- \frac{d}{p}\left(\frac{1-p}{p}\right)^k \right].\] Taking $\beta$ as in the previous proposition, it is easy to see that for all $\lambda\in [\lambda_1,\lambda_2]$, \[\left[1-e^{-\alpha'' k}- \frac{d}{p}\left(\frac{1-p}{p}\right)^k \right]^{\lambda}\ge 1-O(e^{-\beta k}),\] and the corollary follows. \end{proof} Then, using the results above, for all $\gamma_1 \in W_{1,-k/2}$ the chain satisfies the following estimate, where the infimum is taken over all $\og_0=_k\ogp_0$ in $V_k$: \begin{equation}\label{diagbound} \inf \frac{Q_N(\tg_1|\tg_0)}{Q_N(\tg'_1|\tg'_0)}= \inf\left [ \frac{e^{-\lambda\Phi (\og_1)}}{e^{-\lambda\Phi (\ogp_1)}} \cdot \frac{K_{N-1}(\og_1)}{K_{N-1}(\ogp_1)}\cdot \frac{K_N(\ogp_0 )}{K_N(\og_0)}\right ] =1-O(e^{-\beta k}) \ge 1-c\, e^{-\beta k}, \end{equation} for some constant $c$ uniform over $\lambda$. This will be the main estimate used in the proof of Theorem \ref{expc}. \subsection{Coupling of weighted paths}\label{sec:coup} Given a double history $(\og_0,\ogp_0)$, we let $X_0=\og_0$ and $X'_0=\ogp_0$ be the starting configurations of our time-inhomogeneous Markov chains. In this section, we define $X_n$ and $X'_n$ on the same probability space $(\tilde{\Omega},\tilde{ \mathcal{F}}, \tProb)$, and we show that there is a good chance the paths will couple in two steps, and that once coupled, they will remain coupled for another step with positive probability.
Furthermore, if the paths have been coupled for $k$ steps, we will show that they decouple at a rate that decays exponentially in $k$. The key tool used in proving the exponential rate of decay is our main estimate from the previous section, equation \eqref{diagbound}. On $V_k$, we will use a maximal coupling for $X_n$ and $X'_n$. It is essentially the same coupling as the one described in \cite{Coupling Paper}. For $n\ge k+2$, if $\tg_n=_k\tg'_n$, we say the chains $X$ and $X'$ have been coupled for $k+1$ steps by level $n+1$, denoted by $X_{n+1}\equiv_{k+1}X'_{n+1}$, if: \begin{itemize} \item $X_n\equiv_k X'_n$, that is, $X$ and $X'$ have been coupled for $k$ steps by level $n$, \item $\gamma_{n+1}=\gamma'_{n+1}$, \item $\tg_{n}, \tg'_{n} \in V_{k}$. \end{itemize} We think of this coupling in the following way: suppose $X_n$ and $X'_n$ have been coupled for $k$ steps; if $\tg_n$ does not have enough connected cross-sections, we decouple; otherwise, the chains couple for an additional step if $\gamma_{n+1}=\gamma'_{n+1}$. \begin{remark} Note that when we say $X_n$ and $X'_n$ are coupled for the last $k$ steps, we do not mean that $X_j$ is equal to $X'_j$ for $n-k+1\le j\le n.$ What we mean is that for $X_n=\tg_n$ and $X'_n=\tg'_n$ to be coupled for the last $k$ steps, we simply need $\gamma_j=\gamma'_j$ for $n-k+1\le j\le n.$ An equivalent way to set up this problem is to construct a chain $\tilde{X}_n$ with history $\og_0$, on one-level paths from $\mathcal{X}$: for $j\ge 1$, let $\tilde{X}_j=\gamma_j$, with transition probabilities as in \eqref{transitions}. These chains would be non-Markovian, time-inhomogeneous and dependent on the initial configurations, but they would couple in the classical sense, that is, we would say $\tilde{X}_n$ and $\tilde{X}'_n$ are coupled if $\tilde{X}_n=\tilde{X}'_n$. We prefer our setup for notational purposes only, and the reader is welcome to think of the coupling in terms of $\tilde{X}_n$ if they so wish.
\end{remark} Let $\sigma_n=\sigma(\tg_n,\tg'_n)$ be the minimum number of steps backward needed to find a difference in the coupling. On the event $\{\sigma_n=k\}$, either the paths decouple or they couple for an additional step, and thus $\sigma_{n+1}\in \{0,k+1\}.$ We define the following family of transition probabilities $q_n$ for the triple $(X_n, X_n'; \sigma_n)$: if $\gamma_{n+1}=\gamma'_{n+1},$ \[q_{n+1}(\tg_{n+1},\tg'_{n+1}; k+1\vert \tg_{n},\tg'_{n};k)=\left[Q_{N}(\tg_{n+1}|\tg_n)\wedge Q_{N}(\tg'_{n+1}|\tg'_n)\right]1_{\{\tg_n, \tg'_n\in V_k\}}.\] If $\gamma_{n+1}\neq\gamma'_{n+1},$ the transition probability $q_{n+1}(\tg_{n+1},\tg'_{n+1}; 0\vert \tg_{n},\tg'_{n};k)$ could also be positive. One can give a formula for this case, but since we will not use it, we refer the reader to the maximal coupling presented in \cite{Coupling Paper} for details. These transition probabilities define a coupling of $X_n$ and $X'_n$. We use $\tProb$ as shorthand for $\tProb^{\og_0,\ogp_0}$. The two chains decouple at the $(n+1)$-th step if either we attach different elements of $\mathcal{X}$ to $\tg_n$ and $\tg'_n$, or if $\tg_{n}$, and implicitly $\tg'_{n}$, do not have enough connected cross-sections. We would like to estimate $\tProb\{\sigma_{n+1}=k+1\vert \sigma_n=k\}$. First observe that given $\tg_n \in V_k$, \begin{equation}\label{bad set} \sum_{\gamma_{n+1} \notin W_{n+1,n-k/2}}Q_{N}(\tg_{n+1}\vert \tg_n)\le O\left[\left(\frac{1-p}{p}\right)^{k/2}\right]=O(e^{-\beta k}), \end{equation} and thus, given two histories $\tg_n$ and $\tg'_n$ in $V_k$, and using \eqref{diagbound} and \eqref{bad set}, \begin{eqnarray*} \tProb\{\sigma_{n+1}=k+1\vert \tg_n, \tg'_n;\sigma_n=k\}&\ge& \sum Q_{N}(\tg_{n+1}\vert \tg_n)\wedge Q_{N}(\tg'_{n+1}\vert \tg'_n)\\ &\ge& \sum Q_{N}(\tg_{n+1}\vert \tg_n)\left[1-O(e^{-\beta k})\right]\\ &\ge& 1-O(e^{-\beta k}), \end{eqnarray*} where the summation above is taken over all $\gamma_{n+1} \in W_{n+1,n-k/2}$.
Hence, on $V_k$, the chains decouple with probability at most $O(e^{-\beta k})$. Taking expectations over $\hat{\gamma}_n$, and recalling our earlier result from Lemma \ref{largedev}, we can find a constant $c$ such that \begin{equation}\label{step3} \tProb\{\sigma_{n+1}=0\vert \sigma_n=k\}\le O(e^{-\beta k})+O(e^{-\alpha' k})\le ce^{-\beta k}. \end{equation} Suppose $k\ge 1$, $\tg_n=_k\tg'_n$, and the chains have been coupled for $k$ steps. First assume $\tg_n\in V_k$ and let $\gamma_{n+1}$ be a step forward in the $\mathbb {Z}$ direction of $G$. Since $\og_n$ and $\og'_n$ have the same endpoint, we attach $\gamma_{n+1}$ to both. Consider the $h$-process with the property that it reaches level $n+1$ in exactly one step from level $n$. Then one can show $Q_N(\tg_{n+1}|\tg_n)\ge c (p/d)^{1+\lambda}$, and thus there exists a constant $c>0$ such that for all $k\ge 1$ and all $\tg_n=_k\tg'_n\in V_k$, \[\tProb\{\sigma_{n+1}=k+1\vert \tg_n, \tg'_n;\sigma_n=k\}\ge Q_{N}(\tg_{n+1}\vert \tg_n)\wedge Q_{N}(\tg'_{n+1}\vert \tg'_n)\ge c\left(\frac{p}{d}\right)^{1+\lambda}.\] Taking expectations over all pairs $\tg_n=_k\tg'_n$, for all $k\ge 1$, \[ \tProb\{\sigma_{n+1}=k+1\vert \sigma_n=k\}\ge c \left(\frac{p}{d}\right)^{1+\lambda}{\bf Q}^{\tg_0}_N\{\tg_n\in V_k|\tg_{n-1}\in V_{k-1}\}.\] Reasoning as above, if we start with a half-infinite path with enough connected cross-sections and attach an additional step, then with ${\bf Q}_N$-probability greater than $c(p/d)^{\lambda+1}$ we get a half-infinite path that has enough connected cross-sections. Therefore, for all $k\ge 1$, \begin{equation}\label{step2} \tProb\{\sigma_{n+1}=k+1\vert \sigma_n=k\} \ge\left(c \left(\frac{p}{d}\right)^{1+\lambda}\right)^2. \end{equation} Observe that even when $k=0$, if $\og_n$ and $\ogp_n$ have the same endpoint, the paths will get coupled on the next step with probability greater than $c \left(\frac{p}{d}\right)^{1+\lambda}.$ Suppose $n\ge 2$.
Then \begin{equation*} \tProb\{\sigma_{n+1}=1\vert \sigma_n=0\}\ge c \left(\frac{p}{d}\right)^{1+\lambda}\tProb\{(n,0)\in \og_n\cap \ogp_n\}. \end{equation*} We proceed to find a positive lower bound for $\tProb\{(n,0)\in \og_n\cap \ogp_n\}.$ Suppose the chains have evolved up to level $n-2$. Let $\gamma_{n-1}$ be given by taking one step forward in the $\mathbb {Z}$ direction of $G$, and define $\gamma'_{n-1}$ in the same way. Let $\gamma_n$ be the following path: take the shortest path to $(n-1,0)$ by moving only on level $n-1$ of the cylinder, then take a step forward in the $\mathbb {Z}$ direction of $G$. Define $\gamma'_n$ in the same way. Then the 2-step paths $\gamma_{n-1}\gamma_n$ and $\gamma'_{n-1}\gamma'_n$ each have random walk measure bounded from below by $\left(\frac{p}{d}\right)^2a$. Attach them to $\og_{n-2}$ and $\ogp_{n-2}$ respectively. Now we have $\og_n$ and $\ogp_n$ ending at the same point, namely $(n,0)$. Observe that for any choice of $\hat{\gamma}_{n-2}$, if we attach paths $\gamma_{n-1}\gamma_n$ as described above, $\displaystyle{Q_N(\tg_n|\tg_{n-2}) \asymp e^{-\lambda\Phi_2(\tg_2)}}$. It is easy to see that this quantity is bounded from below by a positive constant: let $x$ be the point on the torus with coordinates $(\lfloor L/2\rfloor,0,\dotsc,0)$ and $x'$ be the point on $\mathcal{T}$ with coordinates $(\lfloor L/2\rfloor+1,0,\dotsc,0)$. Consider the following path: if the endpoint of $\og_{n-2}$ is equal to $(n-2,x)$, then $S$ takes the shortest path to $(n-2,x')$, conditioned to avoid $(n-2,x)$; otherwise $S$ takes the shortest path to $(n-2,x)$, conditioned to avoid the endpoint of $\og_{n-2}$; then $S$ moves two steps forward in the $\mathbb {Z}$ direction of $G$. Observe that our choices for $x$ and $x'$ ensure that $S[T_{n-2},T_n]$ avoids $\og_n$. This occurs with probability greater than $a\left(\frac{p}{d}\right)^2$, so we can bound $Q_N(\tg_n|\tg_{n-2})$ from below by a constant which we can choose uniformly over $\lambda$.
Call this constant $c'.$ Then \begin{equation}\label{step2'} \tProb\{\sigma_{n+1}=1\vert \sigma_n=0\}\ge c \left (\frac{p}{d}\right)^{1+\lambda}(c')^2. \end{equation} Let $b=\min\left\{c^2 \left(\frac{p}{d}\right)^{2(1+\lambda_2)},c \left(\frac{p}{d}\right)^{1+\lambda_2}(c')^2\right\}$. From \eqref{step2} and \eqref{step2'}, for all $n\ge 2$ and all $k\ge 0$, \[\tProb\{\sigma_{n+1}=k+1\vert \sigma_n=k\}\ge b .\] \subsection{Proof of Theorem \ref{expc}.} Using the setup from the coupling above, Theorem \ref{expc} can be restated in the following way: \begin{theorem}\label{t2} Suppose $\sigma_1, \sigma_2,\cdots$ are non-negative integer valued random variables adapted to a filtration $\mathcal{F}_1\subset\mathcal{F}_2\subset \cdots$. Suppose there exist positive constants $c, \beta, b$ such that \begin{itemize} \item on the event $\{\sigma_n=k\}$, $\sigma_{n+1}\in \{0,k+1\}$, \item for all $k$ and all $n\ge 2$, $\tProb\{\sigma_{n+1}=k+1\vert \mathcal{F}_n\}\ge b 1_{\{\sigma_n=k\}}$, \item for all $k$ and all $n$, $\tProb\{\sigma_{n+1}=k+1\vert \mathcal{F}_n\}\ge \left(1-ce^{-\beta k}\right) 1_{\{\sigma_n=k\}}$. \end{itemize} Then there exist constants $C$ and $\beta_1$ such that for all $n$ \[\tProb \{\sigma_{2n} < n \} \le C e^{-\beta_1 n}.\] \end{theorem} \begin{proof} Let $k_0$ be large enough so that $1-ce^{-\beta k}>0$ for $k\ge k_0$. Then we choose $\alpha$ small enough so that $1-b\le e^{-\alpha k_0}$ and $ce^{-\beta k}\le e^{-\alpha(k+1)}$ for $k\ge k_0$. Note that $(\sigma_n)$ is not a Markov chain, so we consider the following setup: let $s_n$ be a non-negative integer valued Markov chain with $s_0=0$, whose transition probabilities are given by \begin{align*} p_{k,k+1}&=1-e^{-\alpha (k+1)},& p_{k,0}&=e^{-\alpha (k+1)}. \end{align*} Then $\sigma_{n+2}$ stochastically dominates $s_n$.
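The dominating chain $s_n$ is simple enough to compute exactly. Purely as an illustration (not part of the proof, and with the assumed value $\alpha=1$), the following Python sketch computes the law of $s_n$ by dynamic programming; the probability of sitting at $0$ is already tiny at moderate $n$, in line with the renewal argument below.

```python
import math

def law_of_s(n, alpha=1.0):
    """Distribution of the chain s_n with s_0 = 0,
    p_{k,k+1} = 1 - e^{-alpha(k+1)}, p_{k,0} = e^{-alpha(k+1)}."""
    dist = [1.0]  # dist[k] = P{s_0 = k}
    for _ in range(n):
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            q = math.exp(-alpha * (k + 1))  # probability of resetting to 0
            new[0] += mass * q
            new[k + 1] += mass * (1.0 - q)
        dist = new
    return dist

dist = law_of_s(50)
# Sanity checks: a probability distribution, and P{s_n = n} equals the
# probability of never resetting, prod_{j=1}^{n} (1 - e^{-alpha j}).
assert abs(sum(dist) - 1.0) < 1e-9
never_reset = math.prod(1 - math.exp(-j) for j in range(1, 51))
assert abs(dist[50] - never_reset) < 1e-9
# P{s_n = 0} is already very small, consistent with exponential decay:
assert dist[0] < 1e-5
```

The positive mass escaping to infinity is exactly $\tProb\{\tau=+\infty\}=\prod_{j\ge1}(1-e^{-j\alpha})>0$, which is what drives the generating-function argument in the proof.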
We claim that for each $n\ge 0$, \begin{equation}\label{dom} \tProb\{s_n\ge k\}\le \tProb\{\sigma_{n+2}\ge k\} \mbox{ for all $k\ge 0$.} \end{equation} The proof is by induction on $n$. Note that \eqref{dom} holds for all $n$ when $k=0$, so we assume $k\ge 1$. For $n=0$, since $\tProb\{s_0=0\}=1$, it follows that $\tProb\{s_0\ge k\}=0 \le \tProb\{\sigma_2\ge k\}$. Assume $\tProb\{s_n\ge k\}\le \tProb\{\sigma_{n+2}\ge k\}$ for all $k\ge 1$. Then \begin{eqnarray*} \tProb\{\sigma_{n+3}\ge k\} &=&\sum_{j=k-1}^{\infty}\tProb\{\sigma_{n+3}= j+1\}\\ &=&\sum_{j=k-1}^{\infty}\tProb\{\sigma_{n+3}= j+1|\sigma_{n+2}= j\}\tProb\{\sigma_{n+2}= j\}\\ &\ge& \sum_{j=k-1}^{\infty}(1-e^{-\alpha(j+1)})\left(\tProb\{\sigma_{n+2}\ge j\}-\tProb\{\sigma_{n+2}\ge j+1\}\right)\\ &=&(1-e^{-\alpha k})\tProb\{\sigma_{n+2}\ge k-1\}+\sum_{j=k}^{\infty}(e^{-\alpha j}-e^{-\alpha(j+1)})\tProb\{\sigma_{n+2}\ge j\}. \end{eqnarray*} Using the inductive assumption, and then taking the same steps as above, but backwards, \begin{eqnarray*} \tProb\{\sigma_{n+3}\ge k\} &\ge& (1-e^{-\alpha k})\tProb\{s_n\ge k-1\}+\sum_{j=k}^{\infty}(e^{-\alpha j}-e^{-\alpha (j+1)})\tProb\{s_n\ge j\}\\ &=&\sum_{j=k-1}^{\infty}(1-e^{-\alpha(j+1)})\left(\tProb\{s_n\ge j\}-\tProb\{s_n\ge j+1\}\right)\\ &=& \tProb\{s_{n+1}\ge k\}. \end{eqnarray*} We prove the theorem by showing there exist constants $C$ and $\beta_1$ such that for all $n \in \mathbb{N}$, \[\tProb \{ s _{2n-2} \ge n \} \ge 1-Ce^{-\beta_1 n}.\] Let $\tau=\inf\{ n\ge 1: s_n=0\}$. Then $\tProb\{\tau=1\}=e^{-\alpha}$, $\tProb\{\tau=k+1\}=e^{-\alpha (k+1)}\prod_{j=1}^{k}(1-e^{-j\alpha})$ for $k\ge 1$, and $\tProb\{\tau=+\infty\}=\prod_{j=1}^{\infty}(1-e^{-j\alpha})>0$. Relating these stopping times to $s_n$, for $n\ge 2$ we have \[\tProb\{s_n=0\}=\sum_{k=1}^n\tProb\{\tau=k\}\tProb\{s_{n-k}=0\}.\] Define the generating function \begin{equation*}\label{F} F(s)=\sum_{n=1}^{\infty}\tProb \{\tau=n\}s^n .
\end{equation*} The radius of convergence of $F$ is $e^{\alpha}$, and since $F(1)=\tProb\{\tau<\infty\}<1$, by continuity of $F$ we can find some $s^* \in (1, e^{\alpha})$ for which $F(s^*)=1$. Then for all $s \in (1,s^*)$, $F(s)<1$ and \[\sum_{n=0}^{\infty}\tProb \{s_n=0\}s^n =\frac{1}{1-F(s)}<\infty .\] Thus, for all $s \in (1, s^*)$ and $n$ large enough, $\tProb\{s_n=0\}\le s^{-n}$. Then clearly we can find constants $c$ and $\beta_1$ such that $\tProb\{s_n=0\}\le c\, e^{-\beta_1 n}$ for all $n$. So, \[\sum_{k=0}^{n-1}\tProb\{ s _{2n-2}=k \} \le \sum_{k=0}^{n-1}\tProb\{ s _{2n-2-k}=0 \} \le \sum_{k=0}^{n-1}c\, e^{-\beta_1(2n-2-k)} =c\,e^{-\beta_1 (2n-2)}\left(\frac{e^{\beta_1 n}-1}{e^{\beta_1}-1}\right),\] and the theorem follows, with constant $C=ce^{2\beta_1}(e^{\beta_1} -1)^{-1}$. \end{proof} \subsection{Invariant measure: proof of Theorem \ref{invariant}}\label{sec:Invariant Measure} Let $\Lambda$ denote the space of measures supported on $\overline{\mathcal{A}}$. Let $L_{\lambda}: \Lambda\to \Lambda$ denote the following transformation: \[L_{\lambda}\nu(\tg_1)=\nu(\tg_0)e^{-\lambda\Phi(\tg_1)}.\] $L_{\lambda}$ is linear and continuous on $\Lambda$. Furthermore, the $n$-th iterate of $L_{\lambda}$ is, as expected, given by the expression \[L_{\lambda}^n\nu(\tg_n)=\nu(\tg_0)e^{-\lambda\Phi_n(\tg_n)}.\] Suppose we start with a distribution $\nu$ on $\overline{\mathcal{A}}$, attach an $n$-level path according to the probability measure ${\bf Q}_n$, and then re-scale the path accordingly to obtain an element of $\overline{\mathcal{A}}$. Then the law of this new path is given by $\nu^n$, whose density with respect to $\nu$ is \begin{equation}\label{nun} \frac{e^{-\lambda \Phi_n}}{{\bf E}^{\nu}[e^{-\lambda \Phi_n}]}. \end{equation} Note that $\nu^n$ is simply $L_{\lambda}^n\nu$ normalized to a probability measure, and it depends on the starting distribution $\nu$ and on $\lambda$. Let $\nu^n_{k}$ be the restriction of $\nu^n$ to the last $k$ steps of the path.
Then \[\nu^n_k(\eta)={\bf Q}_n^{\nu}\{X_n=_k\eta\}=\frac{{\bf E}^{\nu}[e^{-\lambda \Phi_n}; \tg_n=_k\eta]}{{\bf E}^{\nu}[e^{-\lambda \Phi_n}]}.\] We are interested in the convergence of $\nu^n$ as $n\to\infty$. So let us first take $\nu$ to be the Dirac measure $\delta_{\og_0}$ and define measures $\pi_k$ on $\overline{\mathcal{A}}$ as follows: for all $\eta$ in $\overline{\mathcal{A}}$, let \[\pi_k(\eta) =\lim_{N\to \infty}{\bf Q}_N^{\tg_0}\{X_N=_k \eta\}.\] Note that $\pi_k$ is a measure on the restriction of $\eta$ to the last $k$ steps. To show that the measures $\pi_k$ are well-defined and this limit exists, first recall that given $X_j=\tg_j$, the law of $X_{N+j}$ under ${\bf Q}_{N+j}^{\tg_0}$ is the same as the law of $X_{N}$ under ${\bf Q}_{N}^{\tg_j}$. Then for all $k< N/2$ and all $j\ge 0$, the coupling result from Theorem \ref{expc} implies \begin{equation}\label{jump} \begin{split} |{\bf Q}_N^{\tg_0}\{X_N&=_k \eta\}-{\bf Q}_{N+j}^{\tg_0}\{X_{N+j}=_k \eta\}| \\ &= \Big|\sum_{\hat{\gamma}_j}{\bf Q}_{N+j}^{\tg_0}\{X_j=\tg_j\}\left({\bf Q}_N^{\tg_0}\{X_N=_k\eta\}-{\bf Q}_N^{\tg_j}\{X_{N}=_k\eta\}\right)\Big|\\&\le \sum_{\hat{\gamma}_j}{\bf Q}_{N+j}^{\tg_0}\{X_j=\tg_j\}\left|{\bf Q}_N^{\tg_0}\{X_N=_k\eta\}-{\bf Q}_N^{\tg_j}\{X_{N}=_k\eta\}\right|\\ &\le \sum_{\hat{\gamma}_j}{\bf Q}_{N+j}^{\tg_0}\{X_j=\tg_j\}(C e^{-\beta_1 N})\\ &\le C e^{-\beta_1 N}. \end{split} \end{equation} It follows that, for a given history $\tg_0 \in \overline{\mathcal{A}}$, the sequence ${\bf Q}_N^{\tg_0}\{X_N=_k \eta\}$ is Cauchy and hence converges as $N \to \infty$. Now, if we start with a different history $\tg'_0$, our coupling, and more precisely Corollary \ref{coupling}, gives \[\lim_{N\to \infty}{\bf Q}_N^{\og_0}\{X_N=_k \eta\}=\lim_{N\to \infty}{\bf Q}_N^{\ogp_0}\{X'_N=_k\eta\}.\] Therefore, the measures $\pi_k$ are well-defined.
Moreover, if we start with some other initial distribution $\nu$ on paths $\tg_0$ from $\overline{\mathcal{A}}$, the following holds: \begin{equation}\label{e3_1} \sum_{\eta|_k}\left|{\bf Q}_N^{\nu}\{X_N=_k\eta\}-\pi_k(\eta)\right| =O( e^{-\beta_1 N}). \end{equation} The measures $\pi_k$ are consistent, and so, by the Kolmogorov extension theorem, they determine a limiting measure on $\overline{\mathcal{A}}$, which we will denote by $\displaystyle{\pi=\lim_{k\to \infty} \pi_k}$. It is easy to check that $\displaystyle{\pi=\lim_{n\to \infty}\nu^n}.$ We claim that $\pi$ is the unique stationary measure for the Markov chains described in this section. The result follows from the next proposition. \begin{proposition} There exists $\beta_2>0$ such that for any starting distribution $\nu$, we can find a constant $c(\nu)$ so that for all $n\ge 0$ the following holds: \begin{equation}\label{est1} {\bf E}^{\nu}[e^{-\lambda \Phi_n}]=c(\nu)e^{-\xi(\lambda)n}[1+ O\left(e^{-\beta_2 n}\right)]. \end{equation} \end{proposition} \begin{proof} Let $a_n={\bf E}^{\nu}[e^{-\lambda \Phi_n}]$. Then it is a quick check that \begin{eqnarray*} {\bf E}^{\nu}[e^{-\lambda \Phi_{n+m}}] &=&{\bf E}^{\nu}[e^{-\lambda \Phi_n}]{\bf E}^{\nu^n}[e^{-\lambda \Phi_m}]. \end{eqnarray*} Let $\tilde{\Phi}$ depend only on the last $n/2$ steps of the path $\tg_n$; that is, let $\tg'_n=\tg_n\vert_{n/2}$ be the restriction of $\tg_n$ to its last $n/2$ steps, and set \[e^{-\lambda \tilde{\Phi}(\tg_n)}=e^{-\lambda \Phi(\tg'_n)} .\] Suppose $\tg_n$ has enough connected cross-sections above level $-n/2$; more precisely, suppose $\tg_n, \tg'_n$ are in $V_{n/2}$.
By Proposition \ref{cnestimate}, $\displaystyle{{\bf E}^{\tg_n}[e^{-\lambda\Phi}]={\bf E}^{\tg_n}[e^{-\lambda\tilde{\Phi}}][1+O(e^{-\beta n/2})]}$, and so \begin{equation}\label{e1} \begin{split} \vert{\bf E}^{\nu^n}[e^{-\lambda\Phi};V_{n/2}]&-{\bf E}^{\pi}[e^{-\lambda\Phi};V_{n/2}]\vert\\ &\le \left[1+O(e^{-\beta n/2})\right]\left\vert{\bf E}^{\nu^n_{n/2}}[e^{-\lambda\tilde{\Phi}};V_{n/2}]-{\bf E}^{\pi_{n/2}}[e^{-\lambda\tilde{\Phi}};V_{n/2}]\right\vert\\ &\le \left[1+O(e^{-\beta n/2})\right]\Vert\nu^n_{n/2}-\pi_{n/2}\Vert. \end{split} \end{equation} On the complement of $V_{n/2}$, which we will denote by $V_{n/2}^c$, we bound $\displaystyle{{\bf E}^{\tg_n}[e^{-\lambda \Phi}]}$ by $1$. From Lemma \ref{largedev}, we know that \begin{equation}\label{e2} \nu^n(V^c_{n/2})=\nu^n_{n/2}(V^c_{n/2})=O( e^{-\alpha' n/4}). \end{equation} Observe that \eqref{e3_1} implies $\displaystyle{\Vert\nu^n_{n/2}-\pi_{n/2}\Vert= O(e^{-\beta_1 n})}$, and thus \begin{equation}\label{e3} \pi(V_{n/2}^c)=\pi_{n/2}(V_{n/2}^c)=O(e^{-\beta_1 n})+O(e^{-\alpha' n/4}). \end{equation} Combining estimates \eqref{e1}, \eqref{e2} and \eqref{e3}, \begin{equation*} \left\vert{\bf E}^{\nu^n}[e^{-\lambda\Phi}]-{\bf E}^{\pi}[e^{-\lambda\Phi}]\right\vert = O(e^{-\beta_1 n})+O(e^{-\alpha' n/4}). \end{equation*} Let $\beta_2=\min\{\beta_1, \alpha'/4\}$. Since ${\bf E}^{\pi}[e^{-\lambda\Phi}]\asymp e^{-\xi(\lambda)}$, and is thus bounded from below by a positive constant, we get \[\left\vert{\bf E}^{\nu^n}[e^{-\lambda\Phi}]-{\bf E}^{\pi}[e^{-\lambda\Phi}]\right\vert=O(e^{-\beta_2 n}){\bf E}^{\pi}[e^{-\lambda\Phi}].\] Note that our error term depends on the initial measure on paths, namely $\nu$.
Thus, \[\displaystyle{a_n=a_0\prod_{j=0}^{n-1}\frac{a_{j+1}}{a_j} =\prod_{j=0}^{n-1}{\bf E}^{\pi}[e^{-\lambda\Phi}] [1+O(e^{-\beta_2 j})] =c(\nu) \, \left({\bf E}^{\pi}[e^{-\lambda\Phi}]\right)^n[1+O(e^{-\beta_2 n})]}.\] We know that $a_n\asymp e^{-\xi(\lambda)n}$, and therefore we must have \[{\bf E}^{\pi}[e^{-\lambda\Phi}]=e^{-\xi(\lambda)},\] which completes the proof of the proposition. \end{proof} We now show that $\pi$ is invariant under the linear transformation $L_{\lambda}$. Rewriting \eqref{nun} in this notation, we obtain \[\nu^n=\frac{L_{\lambda}^n \nu}{{\bf E}^{\nu}[e^{-\lambda\Phi_n}]}.\] Starting with any distribution $\nu$ as the initial measure on paths, we have seen that \[L_{\lambda}\nu^n=\frac{L_{\lambda}^{n+1}\nu}{{\bf E}^{\nu}[e^{-\lambda\Phi_n}]} =\nu^{n+1}\frac{{\bf E}^{\nu}[e^{-\lambda\Phi_{n+1}}]}{{\bf E}^{\nu}[e^{-\lambda\Phi_n}]}= e^{-\xi(\lambda)}\nu^{n+1}[1+O(e^{-\beta_2 n})].\] Passing to the limit as $n\to \infty$, from the continuity of $L_{\lambda}$ we have $L_{\lambda}\pi=e^{-\xi(\lambda)}\pi$, and therefore for all $n\ge 0$, $\pi^n=\pi$. Note that $\pi$ depends on $\lambda$. \section{Analyticity of intersection exponents}\label{sec:anal} In this section we prove that $\xi(1,\lambda)$ is a real analytic function of $\lambda$ for $\lambda>0$. The proof for other values of $k$ is essentially the same. Key to this proof is the convergence to the unique invariant measure $\pi$ presented in the previous section, and more significantly the exponential rate at which this convergence takes place. For each $\lambda>0$, we associate to $L_{\lambda}$ a linear operator defined by \[T_{\lambda}^n f (\tg_0)={\bf E}[f(\tg_n)Z_n^{\lambda}],\] for continuous functions $f$ bounded under an appropriately chosen norm. This norm will be chosen so that, on the Banach space of functions with finite norm, $\lambda\mapsto T_{\lambda}$ is an analytic operator-valued function.
In particular, the functions on this Banach space have the property that their dependence on the behavior of paths far away in the past decays exponentially. Estimates already obtained from convergence to the invariant measure will show that $\xi(\lambda):=\xi(1,\lambda)$ is an isolated simple eigenvalue for $T_{\lambda}$. Using results from operator theory, for every $\lambda>0$ one can then extend $x\mapsto\xi(x)$ to an analytic function in a neighborhood of $\lambda$. This immediately proves that the intersection exponent $\xi(\lambda)$ is real analytic in $\lambda$. \begin{remark} We reiterate that this section follows the notation and proof outlines from \cite{Anal Paper}, in which Lawler, Schramm and Werner prove analyticity of 2-dimensional Brownian exponents. We include the full proofs here, for the sake of completeness, with the understanding that they differ from \cite{Anal Paper} only in the estimates we use. \end{remark} \subsection{The operator}\label{Bspace} Let $\mathcal{C}$ be the set of continuous functions $f:\overline{\mathcal{A}}\to \mathbb{C}$, bounded under the uniform norm $\displaystyle{\Vert f\Vert=\sup_{\tg_0}|f(\tg_0)|}$. We are interested in functions that depend very little on how $\tg_0$ looks near negative infinity. Recall that $\tg\equiv_k\tg'$ means the paths $\tg$ and $\tg'$ have been coupled for the last $k$ steps, in the sense of the coupling described in Section \ref{sec:coup}. Thus, let us consider the following $u$-norm: for all $f\in \mathcal{C}$, and $u>0$, let \[\Vert f\Vert_u:=\max\{\Vert f\Vert, \sup\{e^{ku}|f(\tg)-f(\tg')|: k=1,2,\cdots, \tg\equiv_k\tg' \}\}.\] This norm is similar to the one used in \cite{Anal Paper}.
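To illustrate the $u$-norm, consider a function that is genuinely local: suppose $f\in \mathcal{C}$ depends only on the last $m$ steps of the path, so that $f(\tg)=f(\tg')$ whenever $\tg\equiv_k\tg'$ with $k\ge m$. Then the supremum in the definition of $\Vert f\Vert_u$ is effectively taken over $k<m$, where $|f(\tg)-f(\tg')|\le 2\Vert f\Vert$, and hence \[\Vert f\Vert_u\le 2e^{(m-1)u}\Vert f\Vert<\infty\] for every $u>0$. Finiteness of $\Vert f\Vert_u$ in general only requires that the dependence of $f$ on the distant past decay at the exponential rate $e^{-ku}$.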
Let $\mathcal{C}_u:=\{f\in \mathcal{C}:\Vert f\Vert_u<\infty\}$ denote the Banach space of all bounded functions $f$ under the norm $\Vert f\Vert_u$. Let $\mathcal{L}_u$ be the Banach space of continuous linear operators from $\mathcal{C}_u$ to $\mathcal{C}_u$ with the usual norm \[N_u(T):=\sup_{\Vert f\Vert_u=1}\Vert T(f)\Vert_u.\] For all $\lambda>0$, and all $n>0$, we define the linear operator $T^n_{\lambda}:\mathcal{C}\to \mathcal{C}$ by \[T^n_{\lambda} f(\tg_0):={\bf E}^{\tg_0}\left[f(\tg_n)e^{-\lambda\Phi_n}\right],\] where the expectation is over the randomness of $\hat{\gamma}_n$. It is easy to see that the operators $T^n_{\lambda}$ form a semigroup: \begin{eqnarray*} T^{n+m}_{\lambda}f(\tg_0)&=&{\bf E}^{\tg_0}\left[f(\tg_{n+m})e^{-\lambda\Phi_{n+m}}\right]\\ &=&{\bf E}^{\tg_0}\left[e^{-\lambda\Phi_n}{\bf E}^{\tg_n}\left[f(\tg_{n+m})e^{-\lambda\Phi_m}\right]\right]\\ &=&{\bf E}^{\tg_0}\left[e^{-\lambda\Phi_n}T^m_{\lambda} f(\tg_n)\right]\\ &=& T^n_{\lambda}T^m_{\lambda}f(\tg_0). \end{eqnarray*} We will write $T_{\lambda}$ for $T^1_{\lambda}$. One can similarly define $T^n_{\lambda}$ for complex $\lambda$ with $\mathrm{Re}(\lambda)>0.$ \subsection{Analyticity of the operator}\label{Tanal} If we look at the operator $T_{z}$ as a function of $z$, it is analytic in a small neighborhood of the positive real line. This will be the first step in proving that $e^{-\xi(\lambda)}$ is analytic in $\lambda.$ \begin{proposition} If $\lambda_1\le\lambda\le\lambda_2$, there exist $\epsilon>0$ and $v(\lambda)>0$ such that for all $u\in(0,v)$, $z\mapsto T_z$ is an analytic function from $\{z:|z-\lambda|<\epsilon\}$ into $\mathcal{L}_u$. \end{proposition} \begin{proof} Fix $\lambda>0$.
For all $k\ge 0$, and all $f\in \mathcal{C}$, $\tg_0\in\overline{\mathcal{A}}$, let \[U_k f(\tg_0)={\bf E}^{\tg_0}\left[f(\tg_1)\frac{\Phi^k}{k!}e^{-\lambda\Phi}\right].\] An upper bound for $U_kf$ is \begin{equation}\label{uk} \left\vert U_k f(\tg_0)\right\vert\le\Vert f\Vert{\bf E}^{\tg_0}\left[\frac{(\lambda\Phi)^k}{k!}\lambda^{-k}e^{-\lambda\Phi}\right]\le \Vert f\Vert {\bf E}^{\tg_0}\left[e^{\lambda\Phi}\lambda^{-k}e^{-\lambda\Phi}\right]\le \Vert f\Vert \lambda^{-k}, \end{equation} and by dominated convergence, for all $z\in\mathbb{C}$ with $|z|<\lambda$, \[T_{\lambda-z}f(\tg_0)=\sum_{k=0}^{\infty}U_kf(\tg_0)z^k.\] We need to show that there exists a $v(\lambda)>0$ such that for $u<v$ and all $k$, the operator norm of $U_k$ in $\mathcal{L}_u$ is bounded by $b^k$ for some $b>0$. Then for $|z|<b^{-1}$, $z\mapsto T_{\lambda-z}$ is an analytic function into $\mathcal{L}_u$ and the proposition follows. We will now prove this claim. From \eqref{uk}, we have $\Vert U_k\Vert\le \lambda^{-k}.$ Now suppose $\tg_0\equiv_m\tg'_0$ (that is, $\tg_0$ and $\tg'_0$ are coupled for $m$ steps). We use $\Phi$ to denote $\Phi(\tg_1)$ and $\Phi'$ for $\Phi(\tg'_1)$. \begin{equation}\label{udk} \begin{split} \left|U_kf(\tg_0)-U_kf(\tg'_0)\right|&\le {\bf E}\left[|f(\tg_1)-f(\tg'_1)|\frac{\Phi^k}{k!}e^{-\lambda\Phi}\right]\\ &+\Vert f\Vert{\bf E}\left[\left|\frac{\Phi^k}{k!}e^{-\lambda\Phi}-\frac{(\Phi')^k}{k!}e^{-\lambda\Phi'}\right|\right]. \end{split} \end{equation} Given $\tg_0\equiv_m\tg'_0$, on the event that the two paths remain coupled for an additional step, we can bound $\displaystyle{|f(\tg_1)-f(\tg'_1)|}$ by $\Vert f\Vert_u e^{-(m+1)u}$, and otherwise we bound it by $2\Vert f\Vert$.
Using our coupling result in \eqref{step3}, an upper bound for the first term in \eqref{udk} is \begin{equation} \begin{split}\label{u-good} \Vert f\Vert_u e^{-(m+1)u}\lambda^{-k}+2\Vert f\Vert \lambda^{-k}\tilde{\Prob}\{\tg_1\not\equiv_{m+1}\tg'_1\vert \tg_0\equiv_m\tg'_0\}\\ \le \Vert f\Vert_u\lambda^{-k}\left(e^{-(m+1)u}+ce^{-m\beta}\right). \end{split}\end{equation} For the second term in \eqref{udk}, we have \begin{eqnarray*} \left(\frac{\lambda}{2}\right)^k {\bf E}\left[\left|\frac{\Phi^k}{k!}e^{-\lambda\Phi}-\frac{(\Phi')^k}{k!}e^{-\lambda\Phi'}\right|\right]&=&\frac{1}{k!} {\bf E}\left[\left\vert\left(\frac{\lambda\Phi}{2}\right)^ke^{-\lambda\Phi}-\left(\frac{\lambda\Phi'}{2}\right)^ke^{-\lambda\Phi'}\right\vert\right]\\ &\le& 3{\bf E}\left[\vert e^{-\lambda\Phi/2}-e^{-\lambda\Phi'/2}\vert\right]. \end{eqnarray*} Suppose that after being coupled for $m$ steps the paths remain coupled for an additional step. We thus attach the same $\hat{\gamma}_1$ to $\tg_0$ and $\tg'_0$, and we consider two cases: if $\hat{\gamma}_1 \in G_{m/2}$, then using our estimate from \eqref{diagbound} we get $\vert e^{-\lambda\Phi/2}-e^{-\lambda\Phi'/2}\vert\le c\,e^{-m\beta}$. On the complement of $G_{m/2}$, as well as when the paths decouple after $m$ steps, we bound this difference by $2$. Using the inequality $\displaystyle{\tilde{\Prob}\{\hat{\gamma}_1\notin G_{m/2}\}\le e^{-\beta m}}$, and recalling our result from \eqref{step3}, we obtain \[3{\bf E}\left[\vert e^{-\lambda\Phi/2}-e^{-\lambda\Phi'/2}\vert\right]\le c\, e^{-\beta m},\] with a different $c$, uniform in $\lambda$ and independent of $k$. Thus, the second term in \eqref{udk} can be bounded by \begin{equation}\label{factorial} c\,\Vert f\Vert_u \left(\frac{2}{\lambda}\right)^ke^{-m\beta}\, .
\end{equation} From estimates \eqref{u-good} and \eqref{factorial}, for all $u\le \beta$, \[\left|U_kf(\tg_0)-U_kf(\tg'_0)\right|\le c\,\Vert f\Vert_u\left(\frac{2}{\lambda}\right)^{k}e^{-mu}.\] Hence $N_u(U_k)\le c\, \left(\frac{2}{\lambda}\right)^k$ and the proposition follows with $v(\lambda)\le\beta$ and $\epsilon\le\lambda/2$. \end{proof} \subsection{Analyticity of the exponent}\label{sec:analyticity} \begin{proposition}\label{eigen} $e^{-\xi(\lambda)}$ is an isolated simple eigenvalue for $T_{\lambda}.$ \end{proposition} \begin{proof} Fix $\lambda>0$. For ease of notation, we write $T$ as shorthand for $T_{\lambda}$ and $e^{-\xi}$ for $e^{-\xi(1,\lambda)}.$ First we will show that $\displaystyle{\frac{T^n f(\tg_0)}{T^n 1(\tg_0)}}$ converges to a bounded functional $h$. Recall the result of our coupling: for any $\tg_0, \tg'_0\in \overline{\mathcal{A}}$, \[\tilde{\Prob}\{\tg_n\not\equiv_{n/2} \tg'_n\}\le Ce^{-\beta_1 n}.\] Suppose $\tg_0$ is fixed. If we let $\tg'_0=\tg_k$, then the law of $\tg'_0$ under ${\bf Q}_{n}$ is the same as the law of $\tg_k$ under ${\bf Q}_{n+k}$, and from the coupling result, for all $f\in \mathcal{C}_u$, \begin{equation}\label{hlim} \left\vert\frac{T^{n+k} f(\tg_0)}{T^{n+k} 1(\tg_0)} -\frac{T^n f(\tg_0)}{T^n 1(\tg_0)}\right\vert\le \int \vert f(\tg_n)-f(\tg'_n)\vert \,d\tilde{\Prob}\le 2\Vert f\Vert Ce^{-\beta_1 n}+\Vert f\Vert_u e^{-nu/2}. \end{equation} Therefore, $\displaystyle{\frac{T^n f(\tg_0)}{T^n 1(\tg_0)}\to h(f,\tg_0)}$. Similarly, for any starting configurations $\tg_0, \tg'_0\in \overline{\mathcal{A}}$ and all $f\in \mathcal{C}_u$, \[\left\vert\frac{T^{n} f(\tg_0)}{T^{n} 1(\tg_0)}-\frac{T^n f(\tg'_0)}{T^n 1(\tg'_0)}\right\vert\le 2\Vert f\Vert Ce^{-\beta_1 n}+\Vert f\Vert_u e^{-nu/2}.\] This shows that $h$ is independent of $\tg_0$.
It is easy to see that $h$ is linear on $\mathcal{C}$ and $|h(f)|\le \Vert f\Vert$. Then $h$ is a bounded linear functional on $\mathcal{C}_u$. Now we want to consider the operator $f \mapsto T^nf-h(f)T^n1$ and to find an upper bound for its $N_u$ norm. From \eqref{hlim}, for $u\le 2\beta_1$ and all $f\in \mathcal{C}_u$ and $\tg_0\in \overline{\mathcal{A}}$, \begin{equation}\label{eq:norm bound} \vert T^nf(\tg_0)-h(f)T^n1(\tg_0)\vert\le (2C+1)\Vert f\Vert_u e^{-nu/2}T^n1(\tg_0)\le c\Vert f\Vert_u e^{-n(\xi+u/2)}. \end{equation} Then $\displaystyle{\Vert T^n(\cdot)-h(\cdot)T^n1\Vert\le c\,e^{-n(\xi+u/2)}}$. To find the $N_u$ norm, consider $\tg_0\equiv_k\tg'_0$. For $k\le n/4$, the estimate above gives \[\vert T^nf(\tg_0)-h(f)T^n1(\tg_0)-T^nf(\tg'_0)+h(f)T^n1(\tg'_0)\vert\le 2c\Vert f\Vert_u e^{-n(\xi+u/4)}e^{-ku}.\] When $k>n/4$, if $\tg_0$ and $\tg'_0$ are coupled for the last $k$ steps, we have shown in Proposition \ref{cnestimate} that $\displaystyle{T^n1(\tg_0)=T^n1(\tg'_0)\left[1+O(e^{-\beta k})\right]}$.
From our coupling, the two paths will remain coupled for $n$ additional steps with probability greater than $\displaystyle{\prod_{j=0}^{n-1}\left[1-ce^{-\beta (k+j)}\right]}$, which is bounded below by $1-c'e^{-\beta k}$, with $c'$ independent of $n$, since $\prod_{j\ge 0}\left[1-ce^{-\beta(k+j)}\right]\ge 1-c\sum_{j\ge 0}e^{-\beta(k+j)}=1-\frac{c\,e^{-\beta k}}{1-e^{-\beta}}$. Hence for all $u\le\beta/2$ and all $f\in \mathcal{C}_u$, \[\left\vert\frac{T^{n} f(\tg_0)}{T^{n} 1(\tg_0)} -\frac{T^n f(\tg'_0)}{T^n 1(\tg'_0)}\right\vert\le 2c\,\Vert f\Vert_u e^{-\beta k}+\Vert f\Vert_u e^{-(k+n)u}\le c'\,\Vert f\Vert_ue^{-nu/4}e^{-k u}.\] Multiplying this expression by $T^n1(\tg_0)$ and recalling that $T^n1(\tg_0)\le ce^{-\xi n}$, we get \begin{equation*} \Big\vert T^{n} f(\tg_0)-h(f)T^n1(\tg_0) -T^n f(\tg'_0) \frac{T^n 1(\tg_0)}{T^n 1(\tg'_0)}+h(f)T^n1(\tg_0) \Big\vert \le c\,e^{-\xi n}\Vert f\Vert_ue^{-nu/4-ku}. \end{equation*} From \eqref{eq:norm bound} and Proposition \ref{cnestimate}, \begin{equation*} \begin{split} \vert T^nf(\tg_0)-h(f)T^n1(\tg_0)&-T^nf(\tg'_0)+h(f)T^n1(\tg'_0)\vert\\ &\le c\,e^{-\xi n}\Vert f\Vert_ue^{-nu/4}e^{-k u}+O(e^{-\beta k})\Vert f\Vert_u e^{-n(\xi+u/2)}\\ &\le c'\,\Vert f\Vert_ue^{-n(\xi+u/4)}e^{-ku}. \end{split} \end{equation*} We conclude that \begin{equation}\label{eq:norm} N_u(T^n(\cdot)-h(\cdot)T^n1)\le ce^{-n(\xi+u/4)}. \end{equation} This will show that $e^{-\xi}$ is a simple eigenvalue of $T$. Since for all $k\ge 1$, $\displaystyle{\frac{T^{n+k}1}{T^n1}\to h(T^k1)}$ and for all $\tg_0$, $T^n1(\tg_0)\asymp e^{-\xi n}$, we get $h(T1)=e^{-\xi}$ and $h(T^n1)=e^{-\xi n}$.
From \eqref{eq:norm}, \[\Vert T^{n+k}1-h(T^k1)T^n1\Vert_u\le ce^{-\xi(n+k)}e^{-nu/4}.\] Recall that $K_n(\tg_0)=e^{\xi n}T^n1(\tg_0)$, and then for all $k\ge 0$, \[\Vert K_{n+k}-K_n\Vert_u\le ce^{-nu/4},\] and therefore $K_n\to K$, for some function $K:\overline{\mathcal{A}}\to \mathbb{R}$. Furthermore, $\Vert K_n-K\Vert_u\le ce^{-nu/4}$. It follows that $N_u(T^n(\cdot)-e^{-\xi n}h(\cdot)K)\le c\,e^{-\xi n}e^{-nu/4}$. In particular, for all $f\in \mathcal{C}_u$, \begin{equation}\label{b1} \Vert T^n(f)-e^{-\xi n}h(f)K\Vert_u\le c\,e^{-n(\xi+u/4)}\Vert f\Vert_u. \end{equation} It is easy to see that for all $n\ge 1$, \begin{equation}\label{hf} h(T^nf)=e^{-\xi n}h(f). \end{equation} Moreover, from the continuity of $T$, \begin{equation}\label{TK} TK=\lim_{n\to\infty}TK_n=\lim_{n\to\infty}e^{\xi n}T^{n+1}1=e^{-\xi}\lim_{n\to\infty}K_{n+1}=e^{-\xi}K, \end{equation} and hence $e^{-\xi}$ is an eigenvalue for $T$. By continuity and linearity of $h$, \begin{equation}\label{hK} h(K)=\lim_{n\to\infty}h(K_n)=\lim_{n\to\infty}e^{\xi n}h(T^n1)=1. \end{equation} From estimates \eqref{hf}, \eqref{TK} and \eqref{hK}, one can easily check that $T^n(\cdot)-e^{-\xi n}h(\cdot)K$ is the $n$-th iterate of $T(\cdot)-e^{-\xi}h(\cdot)K$, and thus $N_u(T(\cdot)-e^{-\xi}h(\cdot)K)\le e^{-\xi}e^{-u/4}$. We claim this implies that $e^{-\xi}$ is an isolated eigenvalue. We will prove that for every $z$ with $\frac{1}{2}(1+e^{-u/4})<|z|<1$, there exists $\epsilon>0$ such that for all $\Vert f\Vert_u=1$, $\Vert e^{\xi}Tf-z f\Vert_u\ge \epsilon$, that is, $z$ is in the resolvent set of $\tT=e^{\xi}T$.
Fix $z$ with $\frac{1}{2}(1+e^{-u/4})<|z|<1$, and for $\Vert f\Vert_u=1$ let \[\tT f=z f+g, \hspace{.5in} v_n(f)=\tT ^nf-h(f)K.\] Note that $g \in \mathcal{C}_u$ and \begin{equation}\label{ng} \tT^nf=z^nf+\sum_{j=1}^nz^{n-j}\tT^{j-1}g. \end{equation} Since $K_n$ converges to $K$, by Proposition \ref{estconst} we have $\Vert K\Vert_u\le c_2$. Recalling that $|h(g)|\le\Vert g\Vert_u$ and using \eqref{b1}, we arrive at \begin{eqnarray*}\label{gf} \left\Vert \tT^nf-z^nf-\frac{1}{1-z}h(g)K\right\Vert_u &\le& \left\Vert\sum_{j=1}^nz^{n-j}\tT^{j-1}g-\frac{1}{1-z}h(g)K\right\Vert_u\\ &\le& \left\Vert\sum_{j=1}^nz^{n-j}v_{j-1}(g)\right\Vert_u+\left\Vert \left(\sum_{j=1}^nz^{n-j}-\frac{1}{1-z}\right)h(g)K\right\Vert_u\\ &\le& c|z|^n\Vert g\Vert_u \, , \end{eqnarray*} for some constant $c>1$ that depends on $z$ and $u$. Since this bound holds for all $n$, and so does \eqref{b1}, we must have $h(g)=(1-z)h(f)$. Therefore, for $f$ with $\Vert f\Vert_u=1$, and for all $n$, \[c|z|^n\Vert g\Vert_u\ge\Vert \tT^nf-z^nf-h(f)K\Vert_u\ge|z|^n\Vert f\Vert_u-\Vert v_n(f)\Vert_u\ge |z|^n-e^{-nu/4}.\] For $|z|>\frac{1}{2}(1+e^{-u/4})$, this implies \[\Vert g\Vert_u\ge\frac{1-e^{-u/4}}{c(1+e^{-u/4})}.\] It follows that the spectrum of $T$ in $\mathcal{L}_u$ is the union of $\{e^{-\xi}\}$ and a set contained in the ball of radius $\frac{1}{2}(1+e^{-u/4})e^{-\xi}$ centered at the origin. \end{proof} \begin{proofof}{\bf Proof of Theorem \ref{anal_proof}:} We prove the theorem for $k=1$. From Proposition \ref{eigen}, $\xi(1,\lambda)$ is an isolated simple eigenvalue for the analytic operator $T_{\lambda}$.
By 4.16 in \cite{Wolf}, for all $\lambda>0$, $x\mapsto\xi(1,x)$ can be extended analytically to a neighborhood of $\lambda$. More precisely, for all $\lambda>0$, there exists $\epsilon>0$ such that $z\mapsto\xi(1,z)$ is analytic in $|z-\lambda|<\epsilon$. Therefore, piecing together these $\epsilon$-balls we obtain a neighborhood of the positive real line $(0,\infty)$ where $z\mapsto\xi(1,z)$ is analytic. \end{proofof} \end{document}
\begin{document} \title{Diffusion-driven blow-up for a non-local Fisher-KPP type model} \author{Nikos I. Kavallaris} \address{Department of Mathematical and Physical Sciences, University of Chester, Thornton Science Park, Pool Lane, Ince, Chester CH2 4NU, UK} \email{[email protected]} \author{Evangelos A. Latos} \address{Institute for Mathematics and Scientific Computing, Karl-Franzens-University Graz, Heinr. 36, A-8010 Graz, Austria} \email{[email protected]} \subjclass{Primary: 35B44, 35K51; Secondary: 35B36, 92Bxx} \keywords{Pattern formation, Turing instability, diffusion-driven blow-up, non-local, reaction-diffusion} \date{\today} \maketitle \begin{abstract} The purpose of the current paper is to unveil the key mechanism responsible for the occurrence of {\it Turing-type instability} for a non-local Fisher-KPP model. In particular, we prove that the solution of the considered non-local Fisher-KPP equation in the neighbourhood of a constant stationary solution is destabilized via a {\it diffusion-driven blow-up}. It is also shown that the observed {\it diffusion-driven blow-up} is complete, whilst its blow-up rate is completely classified. Finally, the detected {\it diffusion-driven instability} results in the formation of unstable blow-up patterns, which are also identified through the determination of the blow-up profile of the solution. \end{abstract} \section{Introduction} \subsection*{The mathematical model} As early as 1952, A. Turing in his seminal paper \cite{t52} attempted, by using reaction-diffusion systems, to model the phenomenon of morphogenesis, the regeneration of tissue structures in hydra, an animal of a few millimeters in length made up of approximately $100,000$ cells.
Further observations on the morphogenesis in hydra led to the assumption of the existence of two chemical substances (morphogens), a slowly diffusing (short-range) activator and a rapidly diffusing (long-range) inhibitor. A. Turing, in \cite{t52}, indicates that although diffusion has a smoothing and trivializing effect on a single chemical, in the case of the interaction of two or more chemicals different diffusion rates could force the uniform steady states of the corresponding reaction-diffusion systems to become unstable and lead to non-homogeneous distributions of the reactants. Such a phenomenon is now known as {\it Turing-type instability} or {\it diffusion-driven instability (DDI)}; it was, however, first identified in \cite{R40}. The main purpose of the current paper is the investigation of the occurrence of a {\it Turing-type} or {\it DDI} instability for the following non-local Fisher-KPP type model: \begin{align}\label{fkpp} &\ u_t-\Delta u=|u|^{p-1}u\left(1-\sigma\rlap{$-$}\!\int_{\Omega} |u|^{\beta-1}u\;dx\right), &&\hspace{-5em}x\in\Omega,\; t>0,\\ &\ \frac{\partial u}{\partial \nu}=0, &&\hspace{-5em} x\in\partial\Omega,\;t>0,\label{nbc} \\ &\ u(x,0)=u_0(x)\geq 0, &&\hspace{-5em}x\in\Omega.\label{id} \end{align} Our motivation to investigate the possible {\it Turing-type} instability of the above model stems from the fact that the non-local Fisher-KPP equation \eqref{fkpp} arises as a mathematical model in several research areas. In particular, \eqref{fkpp} characterizes the evolution of a population of density $u$ whose individuals move by diffusion and/or interaction.
Actually, the fate of the population is determined by the interaction modus, which might lead either to growth or to decay. The reaction term describes the joint influence of a nonlinear growth accounting for a weak Allee effect and of concurrence for available resources (prevention of overcrowding). The non-local form of the reaction term indicates that several individuals of the population interact in a space/phenotypic trait/etc. domain, through sampling all occupancy information therein. Problems of this kind arise, e.g., when modeling the emergence and evolution of a biological species, cf. \cite{BH94, B00, D04, FG89, LL97, vol2}. Thereby the respective population is structured by a phenotypical trait and its individuals undergo two essential interactions: mutation and selection. From this perspective $u(x, t)>0$ serves as the density of a population having phenotype $x$ at time $t.$ The mutation process is modeled by a classical diffusion operator on the trait space, whereas the selection process is described by the non-local term $u^p\left(1-\sigma\rlap{$-$}\!\int_{\Omega} u^{\beta}\;dx\right),$ where $\sigma>0$ stands for the (non-local) parameter measuring the intensity of the selection process. Equation \eqref{fkpp} has also been proposed, cf. \cite{GP07,GVA06}, as a simple model of adaptive dynamics, where again the variable $x$ represents a phenotypical trait of a given population. The individuals of such a population with trait $x$ face competition from all their counterparts, which does not depend on the trait itself. Other types of non-local terms may arise; see \cite{CD05,PS05} for dispersal by jumps rather than by Brownian motion. Note also that the imposed Neumann type boundary condition \eqref{nbc} describes the fact that the population does not interact with its external environment.
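It is also instructive to record the spatially homogeneous steady states of \eqref{fkpp}-\eqref{nbc}. If $u\equiv c\geq 0$ is constant, then $\rlap{$-$}\!\int_{\Omega}c^{\beta}\,dx=c^{\beta}$, and the reaction term in \eqref{fkpp} reduces to \[c^{p}\left(1-\sigma c^{\beta}\right)=0,\] so the only constant steady states are $u\equiv 0$ and $u_{\infty}\equiv\sigma^{-1/\beta}$. It is the destabilization of the latter state by diffusion that is investigated in the present work.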
Besides, $\Omega$ in \eqref{fkpp} is assumed to be a bounded domain in $\mathbb{R}^N$, $N\geq3$, with boundary of class $C^{2,r}$ for some $r\in(0,1).$ We also consider $u_0\in L^\beta(\Omega)\setminus\{0\}$, and the involved exponents $p, \beta$ are assumed to satisfy \begin{equation}\label{mt1} p\geq\beta>1. \end{equation} Equation \eqref{fkpp} is actually a non-local version of the well-known Fisher-KPP equation, which was first introduced in its scalar form, \begin{eqnarray}\label{lfkpp} \frac{\partial u}{\partial t} = \frac{\partial^2u}{\partial x^2} + u^p(1-u), \end{eqnarray} by Fisher \cite{f37} and by Kolmogorov, Petrovskii and Piskunov \cite{kpp}, both in 1937, in the context of population dynamics. Here $u$ represents the population density and the reaction term in \eqref{lfkpp} is considered to be the reproduction rate of the population. When $p = 1$, this reproduction rate is proportional to the population density $u$ and to the available resources $(1-u).$ When $p = 2$, the model actually takes into account the addition of sexual reproduction, with the reproduction rate proportional to the square of the population density; see \cite{VVpp, volpet, vol1, vol2}. Later, in 1938, Zeldovich and Frank-Kamenetskii \cite{zfk38} came up with equation \eqref{lfkpp} in combustion theory, where now $u$ stands for the temperature of the combustive mixture. In the literature, far more cases of non-local problems are encountered where the non-local terms are induced by an integral of the solution over the domain of interaction $\Omega,$ cf. \cite{AlKH11, KN07, KTz, KLW17, KS18, L1, L2, LTz1, LTz2, QS, s98, Tz02} and the references therein; a non-local reaction term close to the one of \eqref{fkpp} is considered in particular in \cite{bds93, hy95,SJM07,SK08}.
Notably, in \cite{bds93} the authors considered a non-local parabolic reaction-diffusion equation of the form \begin{eqnarray}\label{gnlt} u_t=\Delta u+u^p-\frac{1}{|\Omega|}\int_\Omega u^p\;dx. \end{eqnarray} For the case $p=2$ and for $\Omega=(0,1)$ they proved the finite-time blow-up of solutions for appropriate initial data. Equation \eqref{gnlt} for a general exponent $p$ was also considered in \cite{hy95}, where the authors, among others, proved that the solution can blow up if $p>N/(N-2)$ for spiky initial data. Later on, the authors in \cite{SJM07,SK08} proved the occurrence of finite-time blow-up for \eqref{gnlt} even for $p>1$ and initial data satisfying an energy inequality, utilizing a gamma-convergence argument in order to obtain appropriate lower bounds for the considered Lyapunov functional. Owing to the negative sign of the non-local reaction term included in \eqref{fkpp} and \eqref{gnlt}, the maximum principle fails and thus comparison methods are not applicable, cf. \cite[Proposition 52.24]{QS}. Furthermore, the reaction-diffusion equation \eqref{gnlt} enjoys conservation of the total mass, which is a key property for the investigation of its dynamics; it also admits a Lyapunov functional, a helpful tool for the derivation of a priori estimates of the solution. In contrast, equation \eqref{fkpp} lacks these two key features, although the associated total mass is still bounded, a crucial property used in the investigation of its dynamics. Now, regarding the non-local reaction-diffusion equation \eqref{fkpp}, some results are already available in the literature.
More precisely, the authors in \cite{BChL} proved that problem \eqref{fkpp}-\eqref{id} for $\beta=1$ admits global-in-time solutions for $N=1,2$ with any $1 \le p <2$, or for $N \ge 3$ when $1 \le p < 1+2/N.$ Moreover, in \cite{BChL} the asymptotic convergence of solutions towards the solution of the heat equation is also proved. Some more existence results were shown for the whole-space case, i.e. when $\Omega=\mathbb{R}^N$, as well as for different boundary conditions, in \cite{B,BCh}; we refer the interested reader to these works for more references on this kind of problem. In \cite{LLC} the authors considered \eqref{fkpp} on $\mathbb{R}$ and studied the wave fronts of the corresponding nonlinear non-local bistable reaction-diffusion equation. Finally, quite recently, in \cite{LCS20}, the whole-space case with reaction term $u^p\left(1-\sigma J*u^{\beta}\right)$ for a proper kernel $J$ was investigated. Nevertheless, as far as we know there are no blow-up results available in the literature for the non-local equation \eqref{fkpp}, and so in the current paper we try to fill this gap by providing some blow-up results for the Neumann problem \eqref{fkpp}-\eqref{id}. \subsection*{Main results} In the current subsection the main results of our work, related to the occurrence of a {\it Turing-type} instability for the model, are presented. First, it is worth noting that, due to the power nonlinearity and thanks to condition \eqref{mt1}, if a {\it Turing-type} (or {\it DDI}) instability occurs for the solution of the non-local problem \eqref{fkpp}-\eqref{id}, then it should lead to the non-existence of global-in-time solutions. More precisely, such an instability would be exhibited in the form of a {\it diffusion-driven blow-up (DDBU)}, cf. \cite{FN05, hy95}. In this work we restrict ourselves to the radially symmetric case, i.e.
when $\Omega = B_1$, where \[ B_1=B_1(0):=\{x\in \mathbb{R}^N: |x|<1\}\] denotes the unit ball in $\mathbb{R}^N.$ Then the solution of \eqref{fkpp}-\eqref{id} is radially symmetric, cf. \cite{gnn}, that is $u(x, t) = u(r, t)$ for $0\leq r=|x|<1$, and so problem \eqref{fkpp}-\eqref{id} is reduced to \begin{eqnarray} && u_t- \Delta_r u = K(t)u^p,\quad 0<r<1,\; 0<t<T,\label{fkpp2a}\\ && u_r(0,t)=u_r(1,t)=0,\quad 0<t<T,\label{fkpp2b} \\ &&\ u(r,0)=u_0(r)\geq 0, \quad 0<r<1,\label{id2} \end{eqnarray} where $T > 0$ is the maximal existence time, $\Delta_r:=\frac{\partial^2}{\partial r^2}+\frac{N-1}{r} \frac{\partial }{\partial r}$ and \begin{eqnarray*} K(t)\equiv 1 -\sigma \rlap{$-$}\!\int_{\Omega} u^\beta\;dx. \end{eqnarray*} Notably, the absolute values have been dropped, since the solution of problem \eqref{fkpp2a}-\eqref{id2} is nonnegative when nonnegative initial data are considered, cf. Lemma \ref{pos}.
Next we consider, as in \cite{hy95, KS16, LN09}, spiky initial data of the form \begin{equation}\label{sid} u_0(r)=\lambda\phi_\delta(r),\quad\mbox{for}\quad 0<\lambda\ll 1, \end{equation} where \begin{align}\label{fd} \phi_\delta(r)= \begin{cases} r^{-a},\quad &\delta\leq r\leq1,\\ \delta^{-a}\left(1+\frac{a}{2}\right)-\frac{a}{2}\delta^{-(a+2)}r^2,\quad &0\leq r<\delta, \end{cases} \end{align} for $ a :=\frac{2}{p-1}$ and $\delta\in(0,1).$ Note that the two branches in \eqref{fd} match in a $C^1$ fashion at $r=\delta$, since both equal $\delta^{-a}$ and both have derivative $-a\delta^{-(a+1)}$ there. Taking also into account that $u_0'(r)<0$, we have $\max_{r\in[0,1]}u_0=u_0(0),$ and via the maximum principle for the heat operator, since $K(t)>0$ due to Lemma \ref{Lemma:mbetaestimate} $(i)$, we also deduce that $u_r(r,t)<0$; hence $||u(\cdot,t)||_{\infty}=u(0,t).$ Henceforth, we will denote by $T_{\delta}$ the maximal existence time of the solution of \eqref{fkpp2a}-\eqref{id2} with initial data given by \eqref{sid} and \eqref{fd}. In the sequel we prove that this kind of initial data can lead to finite-time blow-up of the solution of problem \eqref{fkpp2a}-\eqref{id2}, i.e. to the occurrence of $T_{\delta}<+\infty$ such that \begin{equation}\label{sbu} ||u(\cdot,t)||_{\infty}=u(0,t)\to +\infty \quad\mbox{as}\quad t\to T_{\delta}. \end{equation} Our first main result is stated as follows: \begin{theorem}\label{thmbu} Let $\Omega=B_1\subset \mathbb{R}^N$ with $N\geq 3,$ $p>\frac{N}{N-2}$, and let \eqref{mt1} hold. Then there is a $\lambda_0>0$ with the following property: any $0<\lambda\leq \lambda_0$ admits $0<\delta_0=\delta_0(\lambda)<1$ such that any solution of problem \eqref{fkpp2a}-\eqref{id2} with initial data of the form \eqref{sid}-\eqref{fd} satisfying Lemma \ref{Lemma:mbetaestimate} $(i)$ and $0<\delta\leq \delta_0$ blows up in finite time, i.e.
$T_{\delta}<+\infty.$ \varepsilonnd{theorem} As far as we are aware Theorerm \ref{thmbu} is the first available blow-up result in the literature for non-local problem \varepsilonqref{fkpp}-\varepsilonqref{id}. \begin{equation}gin{rem} Theorem \ref{thmbu} guarantees the occurrence of a diffusion-induced blow-up. Namely it can be easily seen that any spatial homogeneous solution of \varepsilonqref{fkpp2a}-\varepsilonqref{id2} initiating close to the steady-stade solution $u_{\infty}\varepsilonquiv \sigmagma^{-\frac{1}{\begin{equation}ta}},$ and solving the IVP \begin{equation}gin{eqnarray}n \frac{dU}{dt}=U^{p}\left(1-\sigmagma U^{\begin{equation}ta}\right),\; t>0,\; U(0)=U_0, \varepsilonean is stable and it converges to the steady state solution $u_{\infty}.$ Otherwise, Theorem \ref{thmbu} states that such a solution destabilizes once diffusion enters into the equation. \varepsilonnd{rem} It is known, see for example \cite[Proposition 52.24]{QS}, that the maximum principle is not applicable for the non-local problem \varepsilonqref{sid}-\varepsilonqref{fd} and hence comparison techniques fail. Therefore, our main strategy to overcome this obstacle is to derive a lower estimate of the non-local term $K(t)$ and then deal with a local problem for which comparison techniques are available. Although a lower estimate of $K(t)$ is provided by Lemma \ref{Lemma:mbetaestimate}, such an estimate is not uniform in time and thus an alternative approach should be applied to derive a uniform lower bound. To that end we will follow an approach used in \cite{hy95, KS16, KS18}, and which was actually inspired by ideas in \cite{fmc85}. The steps of the proposed approach, though, needs to be modified appropriately so we can tackle the technical difficulties arise from the very different non-local term of problem \varepsilonqref{fkpp2a}-\varepsilonqref{id2} compared with the one considered in problems discussed in \cite{hy95, KS16, KS18}. 
It is worth pointing out that the underlying method can also be implemented to predict {\it diffusion-driven blow-up (DDBU)} even in the case of an isotropically evolving domain $\Omega(t),t>0$; for more details see \cite{KBM}. Next, the form of the {\it DDBU} provided by Theorem \ref{thmbu} is further investigated. As a complementary result we show, cf. Corollary \ref{nik2}, that as soon as the solution of problem \eqref{fkpp2a}-\eqref{id2} blows up in finite time $T_{\delta}<\infty,$ then it immediately becomes unbounded along the whole domain $\Omega$ at any subsequent time; such a phenomenon is known in the literature as {\it complete blow-up}. In other words, the observed Turing-type instability is quite severe, so it destroys all the occurring instability patterns once the blow-up time is exceeded. Our next main result, identifying the blow-up (Turing-type instability) rate, is presented below: \begin{theorem}\label{tbu} Let $N\geq 3$ with $p>\frac{N}{N-2}$, and assume that \eqref{mt1} holds true. Then the blow-up rate of the diffusion-induced blowing-up solution predicted by Theorem \ref{thmbu} is determined by \begin{equation} \Vert u(\cdot, t)\Vert_\infty \ \approx \ (T_\delta-t)^{-\frac{1}{p-1}}, \quad t\uparrow T_\delta\label{ik2}. \end{equation} \end{theorem} The paper is organized as follows. Section \ref{prr} introduces some preliminary results on problem \eqref{fkpp}-\eqref{id}. Section \ref{blrn} contains the proof of our main blow-up result, Theorem \ref{thmbu}, and that of the completeness of blow-up given by Corollary \ref{nik2}. Section \ref{blp} discusses the exact blow-up rate provided by Theorem \ref{tbu}. In Section \ref{blp} we also identify the blow-up profile of the solution $u,$ and thus we determine the form of the Turing instability patterns occurring as a consequence of the diffusion-driven instability.
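Before proceeding, we note that the rate in \eqref{ik2} is the natural one, dictated by the reaction term alone: freezing the non-local coefficient at a constant value $D>0$ and neglecting diffusion, the resulting ODE $U'=DU^p$ with blow-up time $T$ integrates, via $\left(U^{1-p}\right)'=(1-p)D$, to
\begin{equation*}
U(t)=\left[(p-1)D(T-t)\right]^{-\frac{1}{p-1}},
\end{equation*}
so $U(t)\approx (T-t)^{-\frac{1}{p-1}}$ as $t\uparrow T$. Theorem \ref{tbu} thus asserts that neither diffusion nor the non-local term alters this self-similar rate.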
\section{Preparatory results}\label{prr} \ In the current section we present some key properties of the solution $u(x,t)$ of \eqref{fkpp}-\eqref{id}. We first point out that the existence of a unique classical local-in-time solution of the non-local problem \eqref{fkpp}-\eqref{id} can be established by using results existing in \cite{QS} (see Remark 51.11 and Example 51.13) and in \cite{s98}. Henceforth, we use the notation $C$ and $C_i, i=1,\dots,$ to denote positive constants. Next we provide a result establishing the positivity of solutions of \eqref{fkpp}-\eqref{id} once non-negative initial data are considered. \begin{lemma}\label{pos} Consider initial data $u_0\in L^{\beta_0}(\Omega)$ with $\beta_0=\max\{\beta,2\}$ and $u_0(x)\geq 0$ in $\Omega;$ then \begin{eqnarray*} u(x,t)\geq 0, \quad\mbox{for any}\quad (x,t) \in \overline{Q}_T, \end{eqnarray*} where $Q_T:=\Omega \times (0,T).$ \end{lemma} \begin{proof} Set $u^{-}:=-\min\{u,0\}\geq 0;$ then by the assumption on the initial data we have $u_0^{-}=0$ and thus \begin{equation}\label{ik1} \int_{\Omega} (u_0^{-})^2\,dx=0. \end{equation} Next, by testing \eqref{fkpp} by $u^{-}$ we derive \begin{eqnarray} \frac{1}{2} \frac{d}{dt} \int_{\Omega} (u^{-})^2\,dx && =-\int_{\Omega} |\nabla u^{-}|^2\,dx+\int_{\Omega}|u|^{p-1}(u^{-})^2\,dx\left(1-\sigma\rlap{$-$}\!\int_{\Omega} |u|^{\beta-1}u\;dx\right)\nonumber\\ && \leq \int_{\Omega}|u|^{p-1}(u^{-})^2\,dx\left(1+\sigma\rlap{$-$}\!\int_{\Omega} |u|^{\beta}\;dx\right)\nonumber\\ && \leq C(T)\int_{\Omega}(u^{-})^2\,dx,\label{ik2a} \end{eqnarray} where \[ C(T):=\left[M^{p-1}(T) \left(1+\sigma M^{\beta}(T)\right)\right]<\infty, \] since $M(T):=\max_{(x,t)\in Q_T} |u(x,t)|<+\infty$ for a classical solution of \eqref{fkpp}-\eqref{id}. Inequality \eqref{ik2a}, by virtue of \eqref{ik1} and Gr\"onwall's inequality, entails $\int_{\Omega} (u^{-})^2\,dx=0,$ and thus $u(x,t)\geq 0$ in $Q_T.$ \end{proof} Due to the above positivity result, henceforth we focus on the investigation of the problem \begin{eqnarray} && u_t-\Delta u=u^{p}\left(1-\sigma\rlap{$-$}\!\int_{\Omega} u^{\beta}\;dx\right), \quad\mbox{in}\quad Q_T,\label{pfkpp}\\ && \frac{\partial u}{\partial \nu}=0, \quad\mbox{on}\quad \Gamma_T:=\partial \Omega \times (0,T), \\ && u(x,0)=u_0(x)\geq 0, \quad x\in\Omega.\label{pid} \end{eqnarray} The next lemma clarifies the evolution of the quantity $$m_\beta(t):=\int_{\Omega} u^{\beta}(x,t)\,dx,$$ along a nontrivial solution of \eqref{pfkpp}-\eqref{pid}.
\begin{lemma}\label{Lemma:mbetaestimate} Let $u$ be a solution of \eqref{pfkpp}-\eqref{pid} with $u_0\in L^\beta(\Omega).$ If $\beta>1$ there holds: \begin{itemize} \item[$\rm (i)$] $0<m_\beta(0)\leq1/\sigma$ implies $m_\beta(t)<1/\sigma$ for all $t\in(0,T]$, and \item[$\rm (ii)$] $m_\beta(0)\geq1/\sigma$ implies $m_\beta(t)<m_\beta(0)$ for all $t\in(0,T]$. \end{itemize} \end{lemma} \begin{proof} A direct calculation, by virtue of \eqref{pfkpp}, implies \begin{eqnarray}\label{evo_m_beta} m'_\beta(t) &&= -4\frac{\beta-1}{\beta}\int_{\Omega}\big|\nabla u^{\beta/2}\big|^2\mathrm{d}x + \beta\left(1-\sigma m_\beta(t)\right)m_{p+\beta-1}(t)\nonumber\\ && < \beta\big(1-\sigma m_\beta(t)\big)m_{p+\beta-1}(t),\quad\mbox{for any}\quad 0<t<T,\label{EVO} \end{eqnarray} using also the fact that $\beta>1.$ Under the assumption $m_\beta(0)\leq1/\sigma$, by \eqref{EVO} we infer that there cannot be a time $t_{\sigma}>0$ such that $m_\beta(t_{\sigma})=1/\sigma$ and $m'_\beta(t_{\sigma})\geq0$. Thus $m_\beta(t)\leq1/\sigma$ for all $t\in(0,T]$, and in fact strict inequality holds. Namely, if $m_\beta(\hat{t}_{\sigma})=1/\sigma$ for some $\hat{t}_{\sigma}\in (0,T)$, then $m'_\beta(\hat{t}_{\sigma})<0$ due to \eqref{EVO}, which implies that $m_{\beta}(t)$ would have exceeded $1/\sigma$ at some previous time $t'\in(0,\hat{t}_{\sigma}),$ leading to a contradiction.
Then an identical argument to that of $(i)$ implies $(ii).$ \end{proof} \begin{rem} An immediate consequence of Lemma \ref{Lemma:mbetaestimate} $(i)$ is a lower estimate of the average of the solution $u$ over the domain $\Omega.$ Indeed, under the assumption $0<m_\beta(0)\leq1/\sigma,$ which actually guarantees that \begin{eqnarray*} K(t)\geq 0,\quad \mbox{for any}\quad 0<t<T, \end{eqnarray*} averaging \eqref{pfkpp} over $\Omega$, in conjunction with Lemma \ref{pos}, entails \begin{eqnarray*} \frac{d}{dt} \rlap{$-$}\!\int_{\Omega} u\,dx\geq 0,\quad \mbox{for any}\quad 0<t<T, \end{eqnarray*} which finally implies \begin{eqnarray}\label{as2} \overline{u}(t):=\rlap{$-$}\!\int_{\Omega} u\,dx\geq \overline{u}_0>0,\quad \mbox{for any}\quad 0<t<T, \end{eqnarray} since $u_0\in L^\beta(\Omega)\setminus\{0\}.$ \end{rem} The global existence of positive classical solutions was proven in \cite{BChL,B}; for the sake of completeness we state these results in the sequel. \begin{theorem}\label{thm1}{\cite{BChL,B}} Let $\beta\geq1$, and assume that $u_0$ is non-negative with $u_0\in L^k(\Omega)$ for $1<k<+\infty.$ Assume further that $p$ satisfies $$ 1<p<1+\left(1-\frac{2}{q}\right)\beta, $$ where $$ q= \begin{cases} \frac{2N}{N-2},\ N\geq3,\\ 2<q<+\infty,\ N=2,\\ \infty,\ N=1. \end{cases} $$ Then there exists a unique non-negative classical global-in-time solution to \eqref{pfkpp}-\eqref{pid}.
\end{theorem} \begin{rem} Note that Theorem \ref{thm1} for $N\geq 3$ guarantees the existence of global-in-time solutions of problem \eqref{fkpp2a}-\eqref{id2} in the range $1<p<1+\frac{2}{N}\beta$ and for any $\beta\geq 1.$ In particular, choosing $\beta>\frac{N}{N-2}$ we obtain global-in-time solutions for $1<p<\frac{N}{N-2},$ while on the other hand, if $p>\frac{N}{N-2}$ then Theorem \ref{thmbu} establishes finite-time blow-up. Consequently, for the specific choice $\beta>\frac{N}{N-2}$ Theorems \ref{thm1} and \ref{thmbu} provide an optimal result regarding the long-time behaviour of the solution to \eqref{fkpp2a}-\eqref{id2}, although it is still unclear what happens in the critical case $p=\beta=\frac{N}{N-2}.$ Nevertheless, for $\beta< \frac{N}{N-2}$ our approach still works but leaves a gap regarding the existence of global-in-time and blowing-up solutions in the interval $ p\in\left( 1+\frac{2}{N}\beta,\frac{N}{N-2}\right).$ \end{rem} We also have the following result, providing the asymptotic behaviour of the solution in the case $\beta=1$. \begin{theorem}\label{thm2}{\cite{BChL,B}} Let $u$ be a non-negative classical solution obtained from Theorem \ref{thm1}, and let $v$ be the solution to the heat equation with Neumann boundary conditions and initial data $v_0$ with $\int_\Omega v_0(x)\,dx= m_0$. Then \begin{align} \|u(\cdot,t)-v(\cdot,t)-(1-m_0)\|_{L^2(\Omega)}\leq C_1e^{-C_2t}, \end{align} where $C_1,C_2$ are constants depending on the initial mass $m_0$ and $\|u_0\|_{L^{2 p}(\Omega)}$.
\end{theorem} \section{Main results}\label{blrn} \subsection{Diffusion-driven blow-up} The current subsection is devoted to the proof of the occurrence of a {\it diffusion-driven blow-up (DDBU)} for the solution of problem \eqref{fkpp2a}-\eqref{id2} under spiky initial data of the form \eqref{sid}-\eqref{fd}. Accordingly, an approach previously used in \cite{hy95,KS16, KS18, LN09} will be implemented, suitably modified due to the form of the non-local term. However, in order to proceed further we first need to establish some auxiliary results. For the function $\phi_{\delta}$ defined by \eqref{fd} we set \begin{eqnarray} &&\alpha_1=\sup_{0<\delta<1}\frac{1}{\overline{\phi}_{\delta}^{\mu} }\left(\rlap{$-$}\!\int_{\Omega} \phi_{\delta}^p\,dx\right),\label{kl1}\\ &&\alpha_2=\inf_{0<\delta<1}\frac{1}{\overline{\phi}_{\delta}^{\mu} }\left(\rlap{$-$}\!\int_{\Omega} \phi_{\delta}^p\,dx\right),\label{kl2} \end{eqnarray} for \begin{eqnarray}\label{mmnk1} \mu:=\displaystyle{\frac{p\,\ell}{k-1}}>p, \end{eqnarray} and some $1<k<p$ such that $N>\frac{2p}{k-1}.$ Then the first auxiliary result presents the key properties of $\phi_{\delta}.$ \begin{lemma}[$\phi_{\delta}-$properties]\label{PhiDeltaProp} Let \begin{eqnarray}\label{st6} p>\frac{N}{N-2},\quad N\geq 3; \end{eqnarray} then the function $\phi_{\delta}$ defined by \eqref{fd} satisfies the following: \begin{enumerate}[label=(\roman*)] \item There holds that \begin{equation}\label{Deltaphi} \Delta_r \phi_{\delta}\geq-N a \phi_{\delta}^p, \end{equation} in a weak sense for any $\delta\in(0,1),$ where $\Delta_r:= \frac{\partial^2}{\partial r^2}+\frac{N-1}{r}\frac{\partial }{\partial r}.$ \item If $\zeta>0$ and $N>\zeta a$ then \begin{equation}\label{nonlocalterm} \frac{1}{|B_1|}\int_{B_1} \phi_{\delta}^\zeta(x)\,dx :=\rlap{$-$}\!\int_{\Omega} \phi_{\delta}^{\zeta}(x)\,dx=\,N\int_0^{1} \,r^{N-1}\phi_{\delta}^{\zeta}(r)\,dr=\frac{N}{N- a \zeta}+O(\delta^{N- a \zeta}),\quad\mbox{as}\quad \delta\downarrow 0, \end{equation} where $\omega_N:=|B_1|=\pi^{N/2}/\Gamma\left(\frac{N}{2}+1\right)$ is the volume of the unit ball in $\mathbb{R}^N.$ \item Now choose the parameter $\ell$ so that \begin{eqnarray}\label{st7} k-1<\ell<\frac{N(p-1)}{2p}; \end{eqnarray} then the quantities $\alpha_1,\alpha_2$ defined by \eqref{kl1} and \eqref{kl2} are bounded thanks to \eqref{nonlocalterm} and \eqref{st6}. Consider also \begin{eqnarray}\label{kl4} d=d(\lambda, \sigma):=\lambda-\sigma2^{\beta(\mu+1)/p} \alpha_1^{\beta/p} \Lambda_1^{\beta\mu/p} \lambda^{\beta+1}, \end{eqnarray} which is a positive constant for any $0<\lambda\ll 1$ since $0<\Lambda_1:=\sup_{0<\delta<1} \overline{\phi}_{\delta}<\infty$ thanks to \eqref{st7}.
Then there exists some $\lambda_0:=\lambda_0(\sigma,\delta)>0$ small enough such that \begin{align}\label{supercritical} \Delta_r u_0+d\lambda^{-1}u_0^p \geq 2 u_0^p, \end{align} for any $0<\frac{\lambda_0}{2}<\lambda<\lambda_0.$ \end{enumerate} \end{lemma} \begin{proof} For the proof of $(i)$ and $(ii)$ see \cite{hy95, KS16, KS18}; hence it remains to prove $(iii).$ Recall that $u_0=\lambda \phi_{\delta}$, and fix some $0<\lambda_0 <1;$ then \begin{align*} \Delta_r u_0+d\lambda^{-1}u_0^p = \lambda\Delta_r\phi_{\delta}+d\lambda^{p-1}\phi_{\delta}^p\geq \lambda\Delta_r\phi_{\delta}+d_0\lambda^{p-1}\phi_{\delta}^p, \end{align*} where $d_0:=\inf_{\lambda\in(\frac{\lambda_0}{2},\lambda_0)} d(\lambda),$ and so it is enough to prove that \begin{align*} \lambda\Delta_r\phi_{\delta}+d_0\lambda^{p-1}\phi_{\delta}^p &\geq 2\lambda^p\phi_{\delta}^p. \end{align*} Note that thanks to \eqref{Deltaphi} it is sufficient to show \begin{eqnarray*} -\lambda N a \phi_{\delta}^p+d_0\lambda^{p-1}\phi_{\delta}^p \geq 2\lambda^p\phi_{\delta}^p, \end{eqnarray*} or equivalently \begin{eqnarray*} d_0\lambda^{p-2} \geq 2\lambda^{p-1}+N a, \end{eqnarray*} which finally holds true for $0< \lambda_0\ll 1.$ \end{proof} Given $0<\delta<1$, we recall that $T_\delta>0$ is the maximal existence time for the solution to \eqref{fkpp2a}-\eqref{id2} with initial data $u_0=\lambda\phi_{\delta}.$ Henceforth we consider $0<\lambda<\lambda_0$ so that Lemma \ref{PhiDeltaProp} is valid.
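Note that the positivity of the constant $d$ in \eqref{kl4} for $0<\lambda\ll 1$ is immediate upon factoring out $\lambda$:
\begin{equation*}
d=\lambda\left(1-\sigma 2^{\beta(\mu+1)/p}\alpha_1^{\beta/p}\Lambda_1^{\beta\mu/p}\lambda^{\beta}\right)>0
\quad\mbox{whenever}\quad
\lambda^{\beta}<\frac{1}{\sigma 2^{\beta(\mu+1)/p}\alpha_1^{\beta/p}\Lambda_1^{\beta\mu/p}}.
\end{equation*}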
The next result provides a useful pointwise estimate, for any $0<r<1$, of the radially symmetric solution $u(r,t)$ in terms of its average over $B_1.$ \begin{lemma}\label{NewProblProp} For any $0<r\leq 1$ there holds \begin{equation}\label{as0} r^N u(r,t)\leq \overline{u}(t):=\rlap{$-$}\!\int_{\Omega} u(x,t)\,dx=N\int_0^{1} \,y^{N-1}u(y,t)\,dy \end{equation} and \begin{equation}\label{as1} u_r\left(\frac34,t\right)\leq-c,\quad 0\leq t<T_\delta, \end{equation} for some constant $c>0$ independent of $\delta$. \end{lemma} \begin{proof} We first define the operator $$ \mathcal{H}[w]:= w_t-w_{rr}+\frac{N-1}{r}w_r-pu^{p-1}K(t)w $$ with $w=r^{N-1}u_r$, and then we note that \begin{eqnarray} &&\mathcal{H}[w]=0,\;0<r<1,\;\; 0<t<T_{\delta},\label{mmnk2}\\ &&w(r,t)=0,\quad r=0,1,\quad 0<t<T_{\delta},\label{mmnk3}\\ && w(r,0)<0,\quad 0<r<1, \label{mmnk4} \end{eqnarray} following calculations similar to those in \cite{hy95}. The maximum principle, treating \eqref{mmnk2}-\eqref{mmnk4} as a local problem, then implies that $w\leq 0$, thus $u_{r}\leq0$ for $(r,t)\in(0,1)\times(0,T_{\delta})$ and so $$ \overline{u}:=N\int_0^{1} \,y^{N-1}u(y,t)\,dy \geq N \int_0^{r} \,y^{N-1}u(y,t)\,dy \geq N u(r,t)\,\int_0^r y^{N-1}\,dy = u(r,t)\,r^N, $$ for any $0<r<1$ and $0<t<T_{\delta},$ recalling that $\omega_{N}=|B_1(0)|.$ By virtue of Lemma \ref{Lemma:mbetaestimate}, and for a classical solution $u$ of \eqref{fkpp2a}-\eqref{id2}, we obtain that the term $p K(t) u^{p-1}$, that is the coefficient of the linear term in $\mathcal{H}[w]$, is uniformly bounded in $\frac{1}{2}<r<1,\; 0<t<T_{\delta}$ for all $0<\delta<\delta_0.$ Furthermore, we have $p K(t) u^{p-1}w\leq 0$ due to Lemma \ref{pos} and Lemma \ref{Lemma:mbetaestimate}.
Next we compare $w$ with the solution of the local problem \begin{eqnarray} &&\theta_t-\theta_{rr}+\frac{N-1}{r}\theta_r=0,\;\frac{1}{2}<r<1,\;\; 0<t<T_{\delta},\\ &&\theta(r,t)=0,\quad r=\frac{1}{2},1,\quad 0<t<T_{\delta},\\ && \theta(r,0)=w(r,0)<0,\quad \frac{1}{2}<r<1, \end{eqnarray} to obtain, in conjunction with the maximum principle, that $w\leq \theta\leq 0$ in $(\frac{1}{2},1)\times(0,T_{\delta}).$ In particular we have \begin{eqnarray*} u_r\left(\frac{3}{4},t\right)\leq \left(\frac{4}{3}\right)^{N-1}\,\theta\left(\frac{3}{4},t\right)\leq -c,\quad 0<t<T_{\delta}, \end{eqnarray*} where $c$ is independent of $0<\delta<\delta_0.$ \end{proof} Next we prove an essential two-sided $L^p-$estimate for the solution of \eqref{fkpp2a}-\eqref{id2}, inspired by an analogous result holding for the shadow system of the Gierer-Meinhardt model; see also \cite[Proposition 8.1]{KS16} or \cite[Chapter 5, Proposition 5.3]{KS18}. \begin{proposition}\label{prop8.1} There exist $0<\delta_0<1$ and $0<t_0\leq1$, independent of $0<\delta\leq\delta_0$, such that the following estimate holds: \begin{equation}\label{LpEst} \frac12A_2\overline{u}^\mu \leq\rlap{$-$}\!\int_{\Omega} u^p\;dx \leq 2A_1\overline{u}^\mu,\quad \mbox{for any}\quad t\in\left(0,\min\{t_0 ,T_\delta\}\right), \end{equation} where $ \mu $ is given by \eqref{mmnk1}. The positive constants $A_1$ and $A_2$ in \eqref{LpEst} are given by \begin{eqnarray*} &&A_1=\sup_{0<\delta<1}\frac{1}{\overline{u}_0^{\mu} }\left(\rlap{$-$}\!\int_{\Omega} u_0^p\,dx\right)=\lambda^{p-\mu}\alpha_1,\\ &&A_2=\inf_{0<\delta<1}\frac{1}{\overline{u}_0^{\mu} }\left(\rlap{$-$}\!\int_{\Omega} u_0^p\,dx\right)=\lambda^{p-\mu}\alpha_2, \end{eqnarray*} and are bounded due to Lemma \ref{PhiDeltaProp}.
\end{proposition} \begin{proof} For any $0<\delta<\delta_0,$ consider $[0,t_0(\delta)]$ to be the maximal time interval on which \eqref{LpEst} holds. Obviously there holds $0<t_0(\delta)\leq T_{\delta}$ for each $0<\delta<\delta_0.$ In case $t_0\geq 1$ there is nothing to prove, since the statement \eqref{LpEst} automatically holds by simply choosing $t_0=1;$ hence in the following we assume that $t_0\leq 1.$ Integration of \eqref{fkpp2a} over $B_1,$ by virtue of \eqref{LpEst}, entails \begin{align} \frac{d\overline{u}}{dt}=\rlap{$-$}\!\int_{\Omega} u^p\,dx\, \left(1-\sigma\rlap{$-$}\!\int_{\Omega} u^\beta\,dx\right)\leq \rlap{$-$}\!\int_{\Omega} u^p\,dx\leq 2A_1\overline{u}^\mu, \end{align} and thus, \begin{align} \overline{u}\leq \left[ \frac{1}{\overline{u}_0^{\;1-\mu}-2A_1(\mu-1)t} \right]^{\frac{1}{\mu-1}}, \quad\mbox{for any}\quad t\in (0,t_0). \end{align} It can also be verified that $$ \left[ \frac{1}{\overline{u}_0^{\;1-\mu}-2A_1(\mu-1)t} \right]^{\frac{1}{\mu-1}}\leq2\overline{u}_0, $$ provided that $$ t\leq \min\left\{\frac{2^{\mu-1}-1}{2^{\mu}A_1(\mu-1)\overline{u}_0^{\;\mu-1}},\frac{\overline{u}_0^{\;1-\mu}}{2 A_1(\mu-1)}\right\}. $$ Consequently, we deduce that \begin{equation}\label{zUB} \overline{u}(t) \leq2\overline{u}_0\leq 2\Lambda:=2\sup_{\delta\in(0,\delta_0)}\overline{u}_0, \end{equation} when $0<t<t_2:=\min\left\{t_0,t_1\right\},$ and for \begin{equation} t_1:=\min\left\{\frac{2^{\mu-1}-1}{2^{\mu}A_1(\mu-1)\Lambda^{\;\mu-1}},\frac{\Lambda^{\;1-\mu}}{2 A_1(\mu-1)}\right\}, \end{equation} which is independent of $0<\delta<\delta_0$.
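For the reader's convenience, we note that the upper bound for $\overline{u}$ above follows by direct integration: since $\mu>1$,
\begin{equation*}
\frac{d}{dt}\left(\overline{u}^{\;1-\mu}\right)=(1-\mu)\,\overline{u}^{-\mu}\,\frac{d\overline{u}}{dt}\geq -2A_1(\mu-1),
\end{equation*}
whence $\overline{u}^{\;1-\mu}(t)\geq \overline{u}_0^{\;1-\mu}-2A_1(\mu-1)t,$ and the claimed estimate follows by raising both sides to the power $-\frac{1}{\mu-1}$, as long as the right-hand side remains positive.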
Next, for given $\varepsilon>0$, we define the auxiliary function $$ \chi:=r^{N-1} u_{r}+\varepsilon\,r^{N}\,\frac{u^k}{\overline{u}^{\ell}}, $$ recalling that the exponents $k$ and $\ell$ are defined in \eqref{mmnk1} and \eqref{st7}. It is readily seen, cf. \cite{hy95}, that \begin{equation}\label{ps1} \mathcal{H}\left[r^{N-1} u_{r}\right]=0, \end{equation} while by straightforward calculations we derive \begin{align} \mathcal{H}\left[\varepsilon r^{N} \frac{u^k}{\overline{u}^{\ell}}\right]&=\frac{2k(N-1)\varepsilon r^{N-1}u^{k-1}}{\overline{u}^{\ell}}u_{r}+\frac{k\varepsilon r^N u^{p-1+k}}{\overline{u}^{\ell}} K(t)-\frac{\ell \varepsilon r^N u^k}{\overline{u}^{\ell+1}}\,K(t)\,\rlap{$-$}\!\int_{\Omega} u^p\,dx \nonumber\\ &-\frac{2 k N\varepsilon r^{N-1} u^{k-1}}{\overline{u}^{\ell}}u_r-\frac{k(k-1)\varepsilon r^N u^{k-2}}{\overline{u}^{\ell}}u_r^2-\frac{p\varepsilon r^N u^{p-1+k}}{\overline{u}^{\ell}} K(t) \nonumber\\ &\leq -\frac{2 k \varepsilon r^{N-1} u^{k-1}}{\overline{u}^{\ell}}u_r -\frac{\ell \varepsilon r^N u^k}{\overline{u}^{\ell+1}}\,K(t)\,\rlap{$-$}\!\int_{\Omega} u^p\,dx -\frac{(p-k)\varepsilon r^N u^{p-1+k}}{\overline{u}^{\ell}} K(t) \nonumber\\ &= -\frac{2 k \varepsilon u^{k-1}}{\overline{u}^{\ell}}\chi +\frac{2 k \varepsilon^2 r^{N} u^{2k-1}}{\overline{u}^{2\ell}} -\frac{\varepsilon r^Nu^k}{\overline{u}^{2\ell}} \left[ K(t)\ell \overline{u}^{\ell-1}\rlap{$-$}\!\int_{\Omega} u^p\,dx +(p-k) u^{p-1}\overline{u}^{\ell} K(t) \right] \nonumber\\ &= -\frac{2 k \varepsilon u^{k-1}}{\overline{u}^{\ell}}\chi +\frac{\varepsilon r^Nu^k}{\overline{u}^{2\ell}} \left[ 2k\varepsilon u^{k-1} -K(t)\ell \overline{u}^{\ell-1}\rlap{$-$}\!\int_{\Omega} u^p\,dx -(p-k) u^{p-1}\overline{u}^\ell K(t) \right] \nonumber\\ &= -\frac{2 k \varepsilon u^{k-1}}{\overline{u}^{\ell}}\chi +I. \label{hest1} \end{align} Next we show that $$I:= 2k\varepsilon u^{k-1} -K(t)\ell \overline{u}^{\ell-1}\rlap{$-$}\!\int_{\Omega} u^p\,dx -K(t)(p-k) u^{p-1}\overline{u}^\ell\leq0.$$ Indeed, using \eqref{mt1} in conjunction with Jensen's inequality and \eqref{LpEst}, \eqref{zUB} we immediately derive \begin{eqnarray}\label{mt2} K(t)&&\geq 1-\sigma\left(\rlap{$-$}\!\int_{\Omega} u^p dx\right)^{\beta/p}\nonumber\\ && \geq 1-\sigma2^{\beta(\mu+1)/p} A_1^{\beta/p} \Lambda^{\beta\mu/p}\nonumber\\ &&\geq 1-\sigma2^{\beta(\mu+1)/p} \alpha_1^{\beta/p} \Lambda_1^{\beta\mu/p} \lambda^{\beta}:=D,\quad\mbox{for any}\quad 0<t<\min\{t_0,t_1\}, \end{eqnarray} recalling that $0<\Lambda_1=\sup_{0<\delta<1} \overline{\phi}_{\delta}<\infty.$ Notably, \begin{eqnarray}\label{kl5} D= 1-\sigma2^{\beta(\mu+1)/p} \alpha_1^{\beta/p} \Lambda_1^{\beta\mu/p} \lambda^{\beta}=\lambda^{-1} d, \end{eqnarray} is a positive constant for $0<\lambda<\lambda_0\ll1,$ depending on $u_0$ but not on $0<\delta<\delta_0;$ here we recall that $d$ is defined by \eqref{kl4}. Next, combining \eqref{hest1} with \eqref{LpEst}, \eqref{zUB} and \eqref{mt2} we deduce \begin{align} I& \leq 2k\varepsilon u^{k-1} -\frac{A_2 D\ell }{2} \overline{u}^{\mu+\ell-1} -D(p-k) u^{p-1}\overline{u}^\ell\nonumber\\ & \leq 2k\varepsilon u^{k-1} -D(p-k) (2 \Lambda)^{\ell} u^{p-1}, \quad\mbox{for any}\quad 0<t<t_0. \label{I1} \end{align} Then \eqref{I1} in conjunction with Young's inequality leads to $I\leq 0,$ by also choosing $\varepsilon$ sufficiently small and independent of $0<\delta<\delta_0.$ Finally, combining \eqref{ps1} with \eqref{hest1} we obtain \begin{eqnarray*} \mathcal{H}[\chi]\leq -\frac{2 k \varepsilon u^{k-1}}{\overline{u}^{\ell}}\chi\quad\mbox{in}\quad \left(0,\frac{3}{4}\right)\times(0,t_2), \end{eqnarray*} for any $\varepsilon$ sufficiently small and independent of $0<\delta<\delta_0.$ Note that $\chi(0,t)=0,$ whilst at $r=\frac{3}{4}$, due to Lemma \ref{NewProblProp} and \eqref{zUB}, there holds \begin{align} \chi\left(\frac{3}{4},t\right) &= \left(\frac34\right)^{N-1}u_r\left(\frac34,t\right) + \varepsilon\left(\frac34\right)^{N}\frac{u^k\left(\frac{3}{4},t\right)}{\overline{u}^\ell} \nonumber\\ &\leq -c\left(\frac34\right)^{N-1} + \varepsilon\left(\frac34\right)^{N-kN}\overline{u}^{k-\ell}(t) \nonumber\\ &\leq -c\left(\frac{3}{4}\right)^{N-1} +\varepsilon\left(\frac{3}{4}\right)^{N-kN}(2\Lambda)^{k-\ell} <0, \end{align} for $\varepsilon$ sufficiently small.
Subsequently, for $t=0,\; 0<r<\delta$ and fixed $0<\lambda\ll 1$ we calculate \begin{align*} \chi(r,0) &=r^{N-1}\left[\lambda \phi'_{\delta}(r)+\varepsilon r\lambda^{k-\ell}\frac{\phi_{\delta}^k}{\overline{\phi}_{\delta}^{\ell}}\right]\\ &\leq r^{N}\left[-\frac{\lambda a }{\delta^{a+2}} +\varepsilon\lambda^{k-\ell} \frac{(1+\frac{a}{2})^k}{\delta^{ak}\left(\frac{N}{N-a\beta} +O\left(\delta^{N-a\beta}\right)\right)}\right]\\ &\leq r^{N}\left[-\frac{\lambda a }{\delta^{a+2}}+\varepsilon\lambda^{k-\ell}\frac{(1+\frac{a}{2})^k}{\delta^{ak}}\right]<0, \end{align*} since $a+2=ap>ak$, and for $\varepsilon$ small enough and independent of $0<\delta<\delta_0.$ On the other hand, for $t=0,\ \delta<r<\frac{3}{4}$ and fixed $0<\lambda \ll 1$ we have \begin{align*} \chi(r,0)&=r^{N-1}\left[-\frac{\lambda\,a}{r^{a+1}} +\varepsilon r\lambda^{k-\ell}\frac{1}{r^{ak}\left(\frac{N}{N-a\beta}+O\left(\delta^{N-a\beta}\right)\right)}\right]\\ &\leq r^{N-1}\left[-\frac{\lambda\,a}{r^{a+1}} +\varepsilon \lambda^{k-\ell}\frac{1}{r^{ak-1}}\right]<0, \end{align*} since $a+1>ak-1$, and again taking $\varepsilon$ small enough and still independent of $0<\delta<\delta_0.$ Summarizing, we have \begin{eqnarray*} &&\mathcal{H}[\chi]\leq -\frac{2 k \varepsilon u^{k-1}}{\overline{u}^{\ell}}\chi, \quad\mbox{for}\quad 0<r<\frac{3}{4}\quad\mbox{and}\quad 0<t<t_2, \\ &&\chi\leq 0,\quad\mbox{for}\quad r=0,\frac{3}{4}\quad\mbox{and}\quad 0<t<t_2, \\ &&\chi\leq 0,\quad\mbox{for}\quad 0<r<\frac{3}{4}\quad\mbox{and}\quad t=0, \end{eqnarray*} hence the maximum principle entails $\chi\leq 0,$ that is $$ u_r\leq -\varepsilon r\frac{u^k}{\overline{u}^{\ell}}, $$ which, upon integration over $(0,r)$ for $r\in(0,\frac34)$, leads to \begin{eqnarray} \label{tbsd10} u(r,t)\leq\left[\frac{2 \overline{u}^{\ell}}{\varepsilon (k-1)}\right]^{\frac{1}{k-1}} r^{-\frac{2}{k-1}}\quad\mbox{for}\quad 0<r<\frac{3}{4}\quad\mbox{and}\quad 0<t<t_2. \end{eqnarray} Hence \eqref{tbsd10} implies that for any $0<r<\frac{3}{4}$ $$ \frac{1}{|B_1|}\int_{B_r(0)} u^p\,dx\leq N\left[\frac{2}{\varepsilon (k-1)}\right]^{\frac{p}{k-1}}\frac{r^{N-\frac{2p}{k-1}}}{N-\frac{2p}{k-1}}\overline{u}^{\mu},\quad 0<t<t_2, $$ and by choosing $r$ sufficiently small, recalling that $N>\frac{2p}{k-1},$ we end up with the following estimate \begin{equation}\label{tbsd11} \frac{1}{|B_1|}\int_{B_r(0)} u^p\,dx\leq \frac{A_2}{8} \overline{u}^{\mu}, \end{equation} for all $0<t<t_2$, since $\mu=\frac{p\,\ell}{k-1}.$ Next we set $ \psi:=\frac{u}{\overline{u}^{\nu}}, $ for $\nu:=\frac{\mu}{p}=\frac{\ell}{k-1}>1;$ then we can easily check that $\psi$ satisfies the following non-local equation $$ \psi_t = \Delta \psi +\left( \frac{K(t)}{\overline{u}^{\nu}}u^p - \nu\frac{K(t)u}{\overline{u}^{\nu+1}} \rlap{$-$}\!\int_{\Omega} u^p\;dx \right).
$$ We easily observe that, by virtue of Lemma \ref{NewProblProp} and relations \eqref{as2}, \eqref{LpEst} and \eqref{zUB}, the terms \begin{eqnarray*} \frac{K(t)}{\overline{u}^{\nu}}u^p\quad\mbox{and}\quad \nu\frac{K(t)u}{\overline{u}^{\nu+1}} \rlap{$-$}\!\int_{\Omega} u^p\;dx \end{eqnarray*} are uniformly bounded in $[B_1(0)\setminus B_r(0)]\times \left(0,\min\{t_0(\delta),t_1\}\right).$ Then standard parabolic regularity theory, \cite{LSU}, guarantees the existence of a time $t_3>0$, independent of $0<\delta<\delta_0$, such that \begin{eqnarray}\label{tbsd12} \left|\frac{1}{|B_1|}\int_{B_1\setminus B_r(0)}\frac{u^p}{\overline{u}^\mu}\,dx-\frac{1}{|B_1|}\int_{B_1\setminus B_r(0)}\frac{u_0^p}{\overline{u}_0^\mu}\,dx\right|<\frac{A_2}{8}, \end{eqnarray} for $0\leq t\leq \min\{t_0(\delta), t_2,t_3\}.$ Suppose now that for some $\delta_1\in(0,\delta_0)$ there holds $t_0(\delta_1)\leq\min\{t_2,t_3, T_{\delta_1}\};$ then by virtue of \eqref{tbsd11} and \eqref{tbsd12} we deduce \begin{eqnarray*} &&\left|\frac{1}{|B_1|}\int_{B_1}\frac{u^p}{\overline{u}^\mu}\,dx-\frac{1}{|B_1|}\int_{B_1}\frac{u_0^p}{\overline{u}_0^\mu}\,dx\right|\\ &&\leq \left|\frac{1}{|B_1|}\int_{B_r(0)}\frac{u^p}{\overline{u}^\mu}\,dx-\frac{1}{|B_1|}\int_{B_r(0)}\frac{u_0^p}{\overline{u}_0^\mu}\,dx\right|\\ &&+\left|\frac{1}{|B_1|}\int_{B_1\setminus B_r(0)}\frac{u^p}{\overline{u}^\mu}\,dx-\frac{1}{|B_1|}\int_{B_1\setminus B_r(0)}\frac{u_0^p}{\overline{u}_0^\mu}\,dx\right|\\ &&\leq \frac{3 A_2}{8}, \end{eqnarray*} and thus \begin{eqnarray}\label{tbsd13} \frac{5 A_2}{8}\leq \frac{1}{|B_1|}\int_{B_1} \frac{u^p}{\overline{u}^{\mu}}\,dx\leq \frac{11 A_1}{8}\quad\mbox{for any}\quad 0<t<\min\{t_1,t_0(\delta_1)\}.
\end{eqnarray}
We can then use continuity arguments, in conjunction with \eqref{tbsd13} and the fact that $0<t_0(\delta_1)<T_{\delta_1}$, to extend the validity of \eqref{LpEst} beyond $t_0(\delta_1),$ which contradicts the definition of $t_0(\delta_1).$ Consequently, \eqref{LpEst}, as well as all the preceding estimates, is valid for any $0<t<\min\{\widetilde{t}_0,T_{\delta}\}$ with $\widetilde{t}_0=\min\{t_2,t_3\}.$ This completes the proof of the proposition.
\end{proof}
We are now ready to prove Theorem \ref{thmbu}, the main result of the current subsection.
\begin{proof}[Proof of Theorem \ref{thmbu}]
By virtue of the key estimate \eqref{mt2}, derived in the proof of Proposition \ref{prop8.1}, we can easily check that $u$ satisfies
\begin{eqnarray*}
u_t= \Delta_r u +K(t)u^p\geq \Delta_r u + D u^p\quad\mbox{in}\quad B_1\times\left(0,\min\{t_0,T_{\delta}\}\right),
\end{eqnarray*}
recalling that $D$ depends on $u_0$ but not on $0<\delta<\delta_0.$ Thus, by the comparison principle (in terms of the heat operator), we infer
\begin{eqnarray}\label{nik}
u(x,t)\geq \tilde{u}(x,t)\quad\mbox{in}\quad\bar{B}_1\times\left[0,\min\{t_0,T_{\delta}\}\right],
\end{eqnarray}
where $\tilde{u}$ solves the local problem
\begin{eqnarray}
&&\tilde{u}_t= \Delta_r \tilde{u} + D \tilde{u}^p\quad\mbox{in}\quad B_1\times\left(0,\min\{t_0,T_{\delta}\}\right),\label{lcp1}\\
&&\displaystyle\frac{\partial \tilde{u}}{\partial \nu}=0,\quad\mbox{on}\quad \partial B_1\times \left(0,\min\{t_0,T_{\delta}\}\right),\label{lcp2} \\
&&\tilde{u}(x,0)=u_0(x), \quad\mbox{in}\quad B_1.\label{lcp3}
\end{eqnarray}
Consider now the auxiliary function $h:=\tilde{u}_t-\tilde{u}^p;$ then by straightforward calculations we deduce
\begin{eqnarray*}
h_t=\Delta_r h+p(p-1) \tilde{u}^{p-2} |\nabla \tilde{u}|^2+ D p \tilde{u}^{p-1}\,h\geq \Delta_r
h+ D p \tilde{u}^{p-1}\,h \quad\mbox{in}\quad B_1\times\left(0,\min\{t_0,T_{\delta}\}\right)
\end{eqnarray*}
for $p>1$, with $\displaystyle{\frac{\partial h}{\partial \nu}}=0$ on $\partial{B}_1\times\left(0,\min\{t_0,T_{\delta}\}\right).$ Additionally, by virtue of \eqref{supercritical} and \eqref{kl5}, we have
\begin{eqnarray*}
h(x,0)=\Delta_r \tilde{u}(x,0)+D \tilde{u}^p(x,0)-\tilde{u}^p(x,0)=\Delta_r u_0+(D-1) u_0^p\geq u_0^p,\quad\mbox{in}\quad B_1.
\end{eqnarray*}
Therefore the maximum principle entails that $h>0$ in $\bar{B_1}\times\left[0,\min\{t_0,T_{\delta}\}\right],$ that is,
\begin{eqnarray*}
\tilde{u}_t>\tilde{u}^p\quad\mbox{in}\quad \bar{B_1}\times\left[0,\min\{t_0,T_{\delta}\}\right].
\end{eqnarray*}
Integrating, we derive
\begin{eqnarray*}
\tilde{u}(r,t)\geq\left(\frac{1}{u_0^{p-1}(r)}-(p-1)t\right)^{-\frac{1}{p-1}},\quad\mbox{in}\quad \bar{B_1}\times\left[0,\min\{t_0,T_{\delta}\}\right],
\end{eqnarray*}
which for $r=0$ reads
\begin{eqnarray*}
\tilde{u}(0,t)\geq\left(\frac{1}{u_0^{p-1}(0)}-(p-1)t\right)^{-\frac{1}{p-1}}=\left\{\frac{\delta^{a(p-1)}}{\left[\lambda\left(1+\frac{a}{2}\right)\right]}-(p-1)t\right\}^{-\frac{1}{p-1}},
\end{eqnarray*}
which entails finite-time blow-up for $\widetilde{u},$ i.e.
\begin{eqnarray*}
||\tilde{u}(\cdot,t)||_{\infty}=\widetilde{u}(0,t)\to \infty\quad\mbox{as}\quad t\to \tilde{T}_\delta=\frac{1}{p-1}\left[\lambda \left(1+\frac{a}{2}\right)\right]^{1-p}\,\delta^2,
\end{eqnarray*}
and consequently finite-time blow-up for the solution $u$ of \eqref{fkpp2a}-\eqref{id2} at some time $T_{\delta}\leq \widetilde{T}_\delta$, due to \eqref{nik}. Note also that $T_{\delta}\to 0$ as $\delta \to 0$, and thus the proof is complete.
\end{proof}
\begin{rem}
The finite-time blow-up established by Theorem \ref{thmbu} is actually a single-point blow-up, i.e.
the solution $u(r,t)$ of \eqref{fkpp2a}-\eqref{id2} blows up only at the origin $r=0.$ Indeed, by virtue of \eqref{LpEst} and \eqref{zUB} we derive the estimate
\begin{eqnarray*}
\rlap{$-$}\!\int_{\Omega} u(x,t)\,dx=N\int_0^{1} r^{N-1}u(r,t)\,dr\leq C<\infty,\quad\mbox{for any}\quad 0<t<T_\delta,
\end{eqnarray*}
which in conjunction with \eqref{as0} implies that the blow-up set of $u$ satisfies
\[
\mathcal{S}=\left\{r_0\in[0,1]:\;\mbox{there exist}\; r_n\to r_0\;\mbox{and}\; t_n\to T_{\delta}\;\mbox{such that}\; \lim_{n\to +\infty} u(r_n,t_n)=+\infty \right\}=\{0\}.
\]
\end{rem}
\subsection{Complete blow-up}
Interestingly, the finite-time blow-up predicted by Theorem \ref{thmbu} for the solution $u$ of \eqref{fkpp2a}-\eqref{id2} is complete; roughly speaking, there holds $u(x,t)=+\infty$ for any $x\in B_1$ and $t>T_{\delta}.$ Before proving the latter result we need an auxiliary result inspired by \cite{BC}, cf. \cite[Theorem 27.2]{QS}, for which some preliminary concepts are required. Set $f_k(V):=\min\{V^p,k\},\; V\geq 0,\; k=1,2,\dots,$ and let $\tilde{u}_k$ be the solution of the problem
\begin{eqnarray*}
&& V_t=\Delta_r V+f_k(V),\quad\mbox{in}\quad B_1\times(0,\infty),\\
&& \displaystyle\frac{\partial V}{\partial \nu}=0,\quad\mbox{on}\quad \partial B_1\times (0,\infty),\\
&& V(x,0)=u_0(x),\quad\mbox{in}\quad B_1.
\end{eqnarray*}
It is easily seen that $\tilde{u}_k$ is globally defined and $\tilde{u}_{k+1}\geq \tilde{u}_k.$ Moreover, $\tilde{u}_k$ solves the integral equation
\begin{eqnarray}\label{ps50}
\tilde{u}_k(x,t)=\int_{B_1} G(x,y,t) u_0(y)\,dy+\int_0^t \int_{B_1} G(x,y,t-s) f_k(\tilde{u}_k(y,s))\,dy\,ds,
\end{eqnarray}
for any $x\in B_1,\; t>0$, where $G$ stands for the Neumann heat kernel in $B_1.$ Now, since $G>0$ and $\tilde{u}_{k+1}\geq \tilde{u}_k$, if we pass to the limit in \eqref{ps50} then the monotone convergence theorem implies
\begin{eqnarray*}
\bar{u}(x,t)=\int_{B_1} G(x,y,t) u_0(y)\,dy+\int_0^t \int_{B_1} G(x,y,t-s)\bar{u}^p(y,s)\,dy\,ds,\quad x\in B_1,\;t>0,
\end{eqnarray*}
for $\bar{u}(x,t):=\lim_{k\to\infty}\tilde{u}_k(x,t)$, where the double integral might be infinite. Clearly $\bar{u}(\cdot,t)=\tilde{u}(\cdot,t)$ for $t<\widetilde{T}_{\delta}$ and, if we set
\begin{eqnarray*}
T^{c}=T^{c}(u_0):=\inf\left\{t\geq \widetilde{T}_{\delta}:\bar{u}(x,t)=\infty\quad\mbox{for all}\quad x\in B_1\right\},
\end{eqnarray*}
then there holds $T^{c}(u_0)\geq \widetilde{T}_{\delta}.$ Now we can give a rigorous definition of complete blow-up.
\begin{definition}
We say that the solution of problem \eqref{lcp1}-\eqref{lcp3} blows up completely if $T^{c}(u_0)=\widetilde{T}_{\delta}.$
\end{definition}
\begin{theorem}\label{cobu}
If $N\geq 3$ and $1<p<p_S:=\frac{N+2}{N-2}$, then the solution of problem \eqref{lcp1}-\eqref{lcp3} exhibits complete blow-up at $\widetilde{T}_{\delta}.$
\end{theorem}
\begin{proof}
For the reader's convenience we split the proof into several steps.\\
{\bf Step 1:} We claim that $\widetilde{u}_t\geq 0.$ Indeed, if we set $z=\widetilde{u}_t$ then, thanks to \eqref{supercritical}, $z$ satisfies
\begin{eqnarray*}
&& z_t=\Delta_r z+D p \tilde{u}^{p-1} z,\quad\mbox{in}\quad B_1\times\left(0,\widetilde{T}_{\delta}\right),\\
&& \displaystyle\frac{\partial z}{\partial \nu}=0,\quad\mbox{on}\quad \partial B_1\times \left(0,\widetilde{T}_{\delta}\right),\\
&& z(x,0)=\widetilde{u}_t(x,0)=\Delta_r u_0(x)+D u_0^p(x)\geq 2u_0(x)\geq 0,\quad\mbox{in}\quad B_1,
\end{eqnarray*}
and thus the maximum principle verifies our claim.\\
{\bf Step 2:} In the current step we prove that $||\widetilde{u}^p(\cdot,t)||_1\to +\infty$ as $t\to \widetilde{T}_{\delta}^-.$ Note that since $\widetilde{u}\geq 0$ and $\widetilde{u}_t\geq 0$, the function $g: t\mapsto||\widetilde{u}^p(\cdot,t)||_1$ is nondecreasing.
Assume to the contrary that $g$ is bounded; then the $L^{\ell}$--$L^{k}$ smoothing estimates entail
\begin{eqnarray}
\left\Vert e^{-tA}f \right\Vert_k\leq Cq(t)^{-\frac{N}{2}(\frac{1}{\ell}-\frac{1}{k})} e^{-\mu_2 t} \left\Vert f \right\Vert_\ell, \quad 1\leq \ell \leq k \leq \infty, \label{eqn:21}
\end{eqnarray}
for $t\geq 0$ and any $f\in L^{\ell}(\Omega),$ where
\[
0<q(t)=\min\{t,1\}\leq 1,
\]
and the operator $A$ in \eqref{eqn:21} denotes $-\Delta$ equipped with the Neumann boundary condition, whilst $\mu_2$ is the second eigenvalue of $A;$ see also \cite{henry, rothe}. Now, by virtue of the variation-of-parameters formula, we deduce
\begin{eqnarray}
\Vert \widetilde{u}(t)\Vert_k\leq C\Big( \Vert u_0\Vert_k +\int_{0}^te^{-\mu_2(t-s)} q(t-s)^{-\frac{N}{2}(\frac{1}{\ell}-\frac{1}{k})} ||\widetilde{u}^p(s)||_\ell \,ds\Big), \quad\mbox{for any}\quad 0<t<\widetilde{T}_\delta, \label{eqn:27}
\end{eqnarray}
where integrability near $s=t$ of the integrand in \eqref{eqn:27} is ensured under the condition
\begin{eqnarray}\label{eqn:28}
\frac{N}{2}\Big(\frac{1}{\ell}-\frac{1}{k}\Big)<1.
\end{eqnarray}
Now, for $\ell=1$, \eqref{eqn:27} in conjunction with our assumption gives
\begin{eqnarray}
\Vert \widetilde{u}(t)\Vert_k\leq C\Big( \Vert u_0\Vert_k +\int_{0}^te^{-\mu_2(t-s)} q(t-s)^{-\frac{N}{2}(1-\frac{1}{k})} ||\widetilde{u}^p(s)||_1 \,ds\Big)\leq C(\widetilde{T}_\delta),\;\mbox{for any}\; 0<t<\widetilde{T}_\delta,\quad \label{eqn:27a}
\end{eqnarray}
provided that
\begin{eqnarray}\label{ps20}
\frac{N}{2}\Big(1-\frac{1}{k}\Big)<1.
\end{eqnarray}
It is known, see \cite{BP, W}, that for $N\geq 3$ and $k>\frac{N(p-1)}{2}$ the $L^{k}$-norm of the solution of
\begin{eqnarray*}
&&\xi_t = \Delta_r \xi + D \xi^p\quad\mbox{in}\quad B_1\times\left(0,\min\{t_0,T_{\delta}\}\right),\\
&&\xi=0,\quad\mbox{on}\quad \partial B_1\times \left(0,\min\{t_0,T_{\delta}\}\right), \\
&&\xi(x,0)=u_0(x), \quad\mbox{in}\quad B_1,
\end{eqnarray*}
blows up in finite time, and thus by comparison arguments we also derive that
\begin{eqnarray}
\Vert\widetilde{u}(t)\Vert_k\to +\infty\quad\mbox{as}\quad t\to \widetilde{T}_\delta,\quad\mbox{for any}\quad k>\frac{N(p-1)}{2}.\label{ps10}
\end{eqnarray}
Since $1<p<p_S$, we can always find an exponent $k$ such that both \eqref{ps20} and \eqref{ps10} hold, and thus we arrive at a contradiction with \eqref{eqn:27a}.\\
{\bf Step 3:} Consider $\varepsilon\in (0,1)$; then
\begin{eqnarray*}
\int_0^1 r^{N-1} \tilde{u}^p(r,t)\,dr&&=\int_0^{\varepsilon} r^{N-1} \tilde{u}^p(r,t)\,dr+\int_{\varepsilon}^{1-\varepsilon} r^{N-1} \tilde{u}^p(r,t)\,dr+\int_{1-\varepsilon}^1 r^{N-1} \tilde{u}^p(r,t)\,dr\\
&&:=I_1(\varepsilon)+I_2(\varepsilon)+I_3(\varepsilon).
\end{eqnarray*}
For $I_1(\varepsilon)$, under the change of variable $r=\frac{\varepsilon(R-\varepsilon)}{1-2\varepsilon}$ we derive
\begin{eqnarray*}
I_1(\varepsilon)=\frac{\varepsilon^N}{(1-2\varepsilon)^N} \int_{\varepsilon}^{1-\varepsilon} (R-\varepsilon)^{N-1} \tilde{u}^p(R,t)\, dR\leq \frac{\varepsilon^N}{(1-2\varepsilon)^N} I_2(\varepsilon).
\end{eqnarray*}
An estimate for $I_3(\varepsilon)$ is obtained as follows:
\begin{eqnarray*}
I_3(\varepsilon)\leq \tilde{u}^p(1,t)\left(\frac{1-\left(1-\varepsilon\right)^N}{N}\right)\leq \tilde{u}^p(1-\varepsilon,t)\left(\frac{\left(1-\varepsilon\right)^N-\varepsilon^N}{N}\right)\leq I_2(\varepsilon),
\end{eqnarray*}
provided that $\varepsilon$ is chosen small enough so that $1+\varepsilon^N<2(1-\varepsilon)^N,$ where the fact that $\tilde{u}_r\leq 0$ for $r\in (0,1)$ has also been taken into account. Consequently,
\begin{eqnarray}\label{ps30}
\int_{\varepsilon}^{1-\varepsilon}r^{N-1}\tilde{u}^p(r,t)\,dr\geq C(\varepsilon) \int_{0}^{1}r^{N-1}\tilde{u}^p(r,t)\,dr,
\end{eqnarray}
for $C(\varepsilon):=\left[2+\frac{\varepsilon^N}{(1-2\varepsilon)^N}\right]^{-1}.$ Set $v(r):=\lim_{t\to \tilde{T}_\delta} \tilde{u}(r,t)$ for any $r\in (0,1)$; then
\begin{eqnarray}\label{ps40}
\int_{\varepsilon}^{1-\varepsilon} r^{N-1}v^p(r)\,dr=\lim_{t\to \tilde{T}_\delta} \int_{\varepsilon}^{1-\varepsilon}r^{N-1}\tilde{u}^p(r,t)\,dr\geq \liminf_{t\to \tilde{T}_\delta} C(\varepsilon) \int_0^1 r^{N-1}\tilde{u}^p(r,t)\,dr=\infty,\qquad
\end{eqnarray}
where we have successively used the monotone convergence of $\tilde{u}$ towards $v,$ relation \eqref{ps30} and Step 2.
\\
{\bf Step 4:} Fix now some $r\in (0,1)$ and take some $t>\widetilde{T}_{\delta}.$ Then we can find $\varepsilon>0$ sufficiently small such that $t-\widetilde{T}_{\delta}\geq 2\varepsilon$ and $\varepsilon<r<1-\varepsilon.$ Next, by virtue of \eqref{ps50}, and in conjunction with $f_k(\tilde{u}_k(R,s))\geq f_k(\tilde{u}_k(R,\widetilde{T}_\delta))$ for $s\geq \widetilde{T}_\delta$, we have
\begin{eqnarray}\label{ps70}
\tilde{u}_k(r,t) &&\geq N \omega_N\int_0^t \int_{0}^1 R^{N-1} G(r,R,t-s) f_k(\tilde{u}_k(R,s))\,dR\,ds\nonumber\\
&&\geq N \omega_N \tilde{C}(\varepsilon)\int_{t-2\varepsilon}^{t-\varepsilon} \int_{\varepsilon}^{1-\varepsilon} R^{N-1} f_k(\tilde{u}_k(R,s))\,dR\,ds\nonumber\\
&&\geq N \omega_N\tilde{C}(\varepsilon)\int_{t-2\varepsilon}^{t-\varepsilon} \int_{\varepsilon}^{1-\varepsilon} R^{N-1} f_k(\tilde{u}_k(R,\widetilde{T}_\delta))\,dR\,ds\nonumber\\
&&\geq N \omega_N\varepsilon\;\tilde{C}(\varepsilon)\int_{\varepsilon}^{1-\varepsilon} R^{N-1} f_k(\tilde{u}_k(R,\widetilde{T}_\delta))\,dR,
\end{eqnarray}
where
\[
\tilde{C}(\varepsilon):=\inf\left\{G(r,R,s): \varepsilon<r,R<1-\varepsilon,\; s\in(\varepsilon, 2\varepsilon)\right\}>0.
\]
Passing to the limit as $k\to \infty$ in \eqref{ps70}, due to \eqref{ps40} we deduce
\begin{eqnarray*}
\bar{u}(r,t)&&\geq N \omega_N\varepsilon\;\tilde{C}(\varepsilon)\lim_{k\to \infty}\int_{\varepsilon}^{1-\varepsilon} R^{N-1} f_k(\tilde{u}_k(R,\widetilde{T}_\delta))\,dR\\
&&\geq N \omega_N\varepsilon\;\tilde{C}(\varepsilon) \int_{\varepsilon}^{1-\varepsilon} R^{N-1} v^p(R)\,dR=\infty,
\end{eqnarray*}
which proves the assertion.
\end{proof}
\begin{corollary}\label{nik2}
Let $N\geq 3$ with $\frac{N}{N-2}<p<p_S.$ Then the solution $u$ of \eqref{fkpp2a}-\eqref{id2} blows up completely.
\end{corollary}
\begin{proof}
The proof is an immediate consequence of Theorem \ref{cobu} and relation \eqref{nik}.
\end{proof}
\begin{rem}
Corollary \ref{nik2} actually means that the diffusion-driven instability established by Theorem \ref{thmbu} is quite severe, and thus any Turing (instability) pattern is destroyed once the blow-up time is exceeded.
\end{rem}
\section{Blow-up rate and blow-up patterns}\label{blp}
Our aim in the current section is to determine the form of the diffusion-driven blow-up ({\it DDBU}) provided by Theorem \ref{thmbu}. We first provide some estimates of the blow-up rate of $u.$
\begin{proof}[Proof of Theorem \ref{tbu}]
We first observe that, due to Lemma \ref{Lemma:mbetaestimate} and \eqref{mt2}, there holds
\begin{eqnarray}\label{nkl4}
D:=1-\sigma 2^{\beta(\mu+1)/p} a_1^{\beta/p} \Lambda_1^{\beta\mu/p} \lambda^{\beta}<K(t)< 1<\infty,\quad\mbox{for any}\quad 0<t<T_{\delta}.
\end{eqnarray}
Consider now $\Phi$ satisfying
\begin{eqnarray*}
&&\Phi_t=\Delta \Phi+\Phi^{p},\quad\mbox{in}\quad B_1\times\left(0,T_{\delta}\right),\\
&&\frac{\partial \Phi}{\partial \nu}=0,\quad\mbox{on}\quad \partial B_1\times \left(0,T_{\delta}\right), \\
&&\Phi(x,0)=u_0(x), \quad\mbox{in}\quad B_1;
\end{eqnarray*}
then, via the comparison principle and due to \eqref{nkl4}, we derive $u\leq \Phi$ in $\bar{B}_1\times\left[0,T_{\delta}\right].$ Yet it is known, see \cite[Theorem 44.6]{QS}, that
\begin{eqnarray*}
|\Phi(x,t)|\leq C_{\eta}|x|^{-\frac{2}{p-1}-\eta},\quad\mbox{in}\quad B_1\times\left(0,T_{\delta}\right)\quad\mbox{for some}\quad \eta>0,
\end{eqnarray*}
and thus
\begin{eqnarray}\label{tbsd18}
|u(x,t)|\leq C_{\eta}|x|^{-\frac{2}{p-1}-\eta}\quad\mbox{in}\quad B_1\times\left(0,T_{\delta}\right).
\end{eqnarray}
Then, using standard parabolic estimates, we get
\begin{eqnarray}\label{nkl5}
u\in BUC^{\tau}\left(\left\{\rho_0<|x|<1-\rho_0\right\}\times \left(\frac{T_{\delta}}{2}, T_{\delta}\right)\right),
\end{eqnarray}
for some $\tau\in(0,1)$ and each $0<\rho_0<1,$ where $BUC^{\tau}(M)$ denotes the Banach space of all bounded and uniformly $\tau$-H\"{o}lder continuous functions $\omega:M\subset\mathbb{R}^N\to \mathbb{R};$ see also \cite{QS}. Consequently, \eqref{nkl5} implies that $\lim_{t\to T_{\delta}}u(x,t)$ exists and is finite for all $x\in B_1\setminus\{0\}.$ Recalling that $N>\displaystyle{\frac{2p}{p-1}}$ (or equivalently $p>\displaystyle{\frac{N}{N-2}},\; N\geq 3$), then by using \eqref{nkl4} and \eqref{tbsd18}, in view of the dominated convergence theorem, we derive
\begin{eqnarray}\label{tbsd18a}
\lim_{t\to T_{\delta}} K(t)=\gamma\in(0,+\infty).
\end{eqnarray}
Applying now Theorem 44.3(ii) in \cite{QS}, in conjunction with \eqref{tbsd18a}, we can find a constant $C_{u}>0$ such that
\begin{eqnarray}\label{ube}
\left|\left|u(\cdot,t)\right|\right|_{\infty}\leq C_{u}\left(T_{\delta}-t\right)^{-\frac{1}{p-1}}\quad\mbox{in}\quad (0, T_{\delta}).
\end{eqnarray}
On the other hand, setting $N(t):=\left|\left|u(\cdot,t)\right|\right|_{\infty}=u(0,t)$, then $N(t)$ is differentiable for almost every $t\in(0,T_{\delta}),$ in view of \cite{fmc85}, and it also satisfies
\begin{eqnarray*}
\frac{dN}{dt}\leq K(t) N^p(t).
\end{eqnarray*}
Now $K(t)\in C([0,T_{\delta}))$ is bounded on any time interval $[0,t],\; t<T_{\delta},$ and thus upon integration over $(t,T_{\delta})$ we obtain
\begin{eqnarray}\label{lbe}
\left|\left|u(\cdot,t)\right|\right|_{\infty}\geq C_l\left(T_{\delta}-t\right)^{-\frac{1}{p-1}}\quad\mbox{in}\quad (0, T_{\delta}),
\end{eqnarray}
for some positive constant $C_l$, and the proof is complete.
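For completeness, the integration step behind \eqref{lbe} can be written out as follows; this is only a sketch in our notation, where the auxiliary bound $\kappa$ for $K$ on $[0,T_{\delta})$ (one may take $\kappa=1$ by \eqref{nkl4}) is not part of the original argument.

```latex
% Sketch of the step from dN/dt <= K(t) N^p to \eqref{lbe}.
% Divide by N^p, integrate over (t,T_\delta), and use that N(s)\to\infty
% (hence N^{1-p}(s)\to 0) as s\to T_\delta, by Theorem \ref{thmbu}:
\begin{eqnarray*}
\frac{dN}{ds}\leq K(s)N^p(s)
&\Longrightarrow&
-\frac{d}{ds}\left[\frac{N^{1-p}(s)}{p-1}\right]\leq K(s)\leq \kappa,\\
\frac{N^{1-p}(t)}{p-1}\leq \kappa\left(T_{\delta}-t\right)
&\Longrightarrow&
N(t)\geq \left[(p-1)\kappa\right]^{-\frac{1}{p-1}}\left(T_{\delta}-t\right)^{-\frac{1}{p-1}},
\end{eqnarray*}
```

so that \eqref{lbe} holds with $C_l=\left[(p-1)\kappa\right]^{-\frac{1}{p-1}}.$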
\end{proof}
\begin{rem}\label{nkl6}
Condition \eqref{ik2} implies that the diffusion-induced blow-up stated in Theorem \ref{thmbu} is of type I, i.e. the blow-up mechanism is controlled by the ODE part of \eqref{fkpp2a}.
\end{rem}
Next we identify the blow-up (Turing instability) pattern of the {\it DDBU solution} obtained in Theorem \ref{thmbu}. Note that \eqref{tbsd18} provides a rough form of the blow-up pattern of $u.$ Nonetheless, due to \eqref{nkl4}, the non-local problem \eqref{fkpp2a}-\eqref{id2} can be tackled as the corresponding local one, for which the following more accurate asymptotic blow-up profile, cf. \cite{mz98}, is available:
\begin{eqnarray}\label{kk1}
\lim_{t\to T_{\delta}}u(|x|,t)\sim C\left[\frac{|\log |x||}{|x|^2}\right]^{\frac{1}{p-1}}\quad\mbox{for}\quad |x|\ll 1.
\end{eqnarray}
For a more rigorous approach regarding non-local problems the interested reader is advised to consult \cite{DKZ20}. Relation \eqref{kk1} provides the form of the blow-up profile of $u;$ in the biological context it thus identifies the form of the developing patterns induced as a result of the {\it DDI} phenomenon.
\begin{thebibliography}{ABCD}
\bibitem[AlKH11]{AlKH11} Al-Refai, M., Kavallaris, N.I. \& Hajji, M.A., {\it Monotone iterative sequences for non-local elliptic problems}, Euro. Jour. Appl. Mathematics {\bf 22 (6)} (2011), 533--552.
\bibitem[BC]{BC} Baras, P. \& Cohen, L., {\it Complete blow-up after $T_{\max}$ for the solution of a semilinear heat equation.} J. Funct. Anal. {\bf 71} (1983), 287--302.
\bibitem[BP]{BP} Baras, P. \& Pierre, M., {\it Crit\`ere d'existence de solutions positives pour des \'equations semi-lin\'eaires non monotones.} Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire {\bf 2} (1985), 185--212.
\bibitem[BL]{BL} Bebernes, J. W. \& Lacey, A. A.
, {\it Global existence and finite-time blow-up for a class of non-local parabolic problems.} Adv. Differential Equations {\bf 2 (6)} (1997), 927--953.
\bibitem[B]{B} Bian, S., {\it Global solutions to a non-local Fisher-KPP type problem.} Acta Appl. Math. {\bf 147} (2017), 187--195.
\bibitem[BCh]{BCh} Bian, S. \& Chen, L., {\it A non-local reaction diffusion equation and its relation with Fujita exponent.} J. Math. Anal. Appl. {\bf 444 (2)} (2016), 1479--1489.
\bibitem[BChL]{BChL} Bian, S., Chen, L. \& Latos, E. A., {\it Global existence and asymptotic behavior of solutions to a non-local Fisher-KPP type problem.} Nonlinear Anal. {\bf 149} (2017), 165--176.
\bibitem[BDSt]{bds93} Budd, C., Dold, B. \& Stuart, A., {\it Blowup in a partial differential equation with conserved first integral.} SIAM J. Appl. Math. {\bf 53 (3)} (1993), 718--742.
\bibitem[B00]{B00} B\"{u}rger, R., {\it The Mathematical Theory of Selection, Recombination, and Mutation}, Wiley Series in Mathematical and Computational Biology, John Wiley, 2000.
\bibitem[BH94]{BH94} B\"{u}rger, R. \& Hofbauer, J., {\it Mutation load and mutation-selection-balance in quantitative genetic traits}, J. Math. Biol. {\bf 32 (3)} (1994), 193--218.
\bibitem[CD05]{CD05} Coville, J. \& Dupaigne, L., {\it Propagation speed of travelling fronts in non local reaction--diffusion equations.} Nonlinear Anal. {\bf 60} (2005), 797--819.
\bibitem[D04]{D04} Diekmann, O., {\it A beginners guide to adaptive dynamics}, Banach Center Publications, {\bf vol. 63}, Institute of Mathematics, Polish Academy of Sciences, 2004, pp. 47--86.
\bibitem[DKZ20]{DKZ20} Duong, G.K., Kavallaris, N.I. \& Zaag, H., {\it Blowup solution for the shadow limit model of Gierer-Meinhardt system}, preprint.
\bibitem[SJM]{SJM07} El Soufi, A., Jazar, M. \& Monneau, R., {\it A gamma-convergence argument for the blow-up of a non-local semilinear parabolic equation with Neumann boundary conditions.} Ann. Inst. H. Poincar\'{e} Anal.
Non Lin\'{e}aire {\bf 24} (2007), no. 1, 17--39.
\bibitem[SK]{SK08} El Soufi, A. \& Kiwan, R., {\it Blow-up of a non-local semilinear parabolic equation with Neumann boundary conditions.} Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire {\bf 25} (2008), 215--218.
\bibitem[FN]{FN05} Fila, M. \& Ninomiya, H., {\it Reaction versus diffusion: blow-up induced and inhibited by diffusivity}, Russian Math. Surveys {\bf 60 (6)} (2005), 1217--1235.
\bibitem[FM85]{fmc85} Friedman, A. \& McLeod, J.B., {\it Blow-up of positive solutions of semilinear heat equations}, Indiana Univ. Math. J. {\bf 34} (1985), 425--447.
\bibitem[FG89]{FG89} Furter, J. \& Grinfeld, M., {\it Local versus non-local interactions in population dynamics.} J. Math. Biol. {\bf 27} (1989), 65--80.
\bibitem[gnn]{gnn} Gidas, B., Ni, W.-M. \& Nirenberg, L., {\it Symmetry and related properties via the maximum principle.} Comm. Math. Phys. {\bf 68} (1979), 209--243.
\bibitem[GP07]{GP07} G\'enieys, S. \& Perthame, B., {\it Concentration in the non-local Fisher equation: the Hamilton--Jacobi limit}, Math. Modelling Nat. Phenom. {\bf 2} (2007), 135--151.
\bibitem[GVA06]{GVA06} G\'enieys, S., Volpert, V. \& Auger, P., {\it Pattern and waves for a model in population dynamics with non-local consumption of resources}, Math. Modelling Nat. Phenom. {\bf 1} (2006), 65--82.
\bibitem[H]{henry} Henry, D., {\it Geometric Theory of Semilinear Parabolic Equations}, Springer, Berlin, 1981.
\bibitem[HY]{hy95} Hu, B. \& Yin, H.-M., {\it Semilinear parabolic equations with prescribed energy.} Rend. Circ. Mat. Palermo {\bf 44 (2)} (1995), no. 3, 479--505.
\bibitem[KS16]{KS16} Kavallaris, N.I. \& Suzuki, T., {\it On the dynamics of a non-local parabolic equation arising from the Gierer--Meinhardt system}, Nonlinearity {\bf 30} (2017), 1734--1761.
\bibitem[KS18]{KS18} Kavallaris, N.I. \& Suzuki, T., {\it Non-Local Partial Differential Equations for Engineering and Biology: Mathematical Modeling and Analysis}, Mathematics for Industry Vol.
31, Springer Nature, 2018.
\bibitem[KN07]{KN07} Kavallaris, N.I. \& Nadzieja, T., {\it On the blow-up of the non-local thermistor problem}, Proc. Edinb. Math. Soc. {\bf 50 (2)} (2007), 389--409.
\bibitem[KTz]{KTz} Kavallaris, N. I. \& Tzanetis, D. E., {\it Blow-up and stability of a non-local diffusion-convection problem arising in Ohmic heating of foods.} Differential Integral Equations {\bf 15 (3)} (2002), 271--288.
\bibitem[KLW17]{KLW17} Kavallaris, N.I., Lankeit, J. \& Winkler, M., {\it On a degenerate nonlocal parabolic problem describing infinite dimensional replicator dynamics}, SIAM J. Math. Anal. {\bf 49 (2)} (2017), 954--983.
\bibitem[KBM]{KBM} Kavallaris, N. I., Barreira, R. \& Madzvamuse, A., {\it Dynamics of shadow system of a singular Gierer-Meinhardt system on an evolving domain.} {\bf arXiv:1903.10051}
\bibitem[KPP]{kpp} Kolmogorov, A. N., Petrovsky, I. G. \& Piskunov, N. S., {\it Investigation of the equation of diffusion combined with increasing of the substance and its application to a biology problem.} Bull. Moscow State Univ. Ser. A {\bf 1} (1937), 1--25.
\bibitem[L1]{L1} Lacey, A.A., {\it Thermal runaway in a non-local problem modelling Ohmic heating: Part I: Model derivation and some special cases}, Euro. Jour. Appl. Mathematics {\bf 6 (3)} (1995), 201--224.
\bibitem[L2]{L2} Lacey, A.A., {\it Thermal runaway in a non-local problem modelling Ohmic heating. Part II: General proof of blow-up and asymptotics of runaway}, Euro. Jour. Appl. Mathematics {\bf 6 (2)} (1995), 127--144.
\bibitem[LSU]{LSU} Lady\v{z}enskaja, O. A., Solonnikov, V. A. \& Ural'ceva, N. N., {\it Linear and quasilinear equations of parabolic type.} Translated from the Russian by S. Smith. Translations of Mathematical Monographs Vol. {\bf 23}, Amer. Math. Soc., 1968.
\bibitem[LTz2]{LTz2} Latos, E. A. \& Tzanetis, D. E., {\it Grow-up of critical solutions for a non-local porous medium problem with Ohmic heating source.} NoDEA Nonlinear Differential Equations Appl. {\bf 17} (2010), no.
2, 137--151.
\bibitem[LTz1]{LTz1} Latos, E. A. \& Tzanetis, D. E., {\it Existence and blow-up of solutions for a non-local filtration and porous medium problem.} Proc. Edinb. Math. Soc. (2) {\bf 53} (2010), no. 1, 195--209.
\bibitem[LL97]{LL97} Lefever, R. \& Lejeune, O., {\it On the origin of tiger bush}, Bull. Math. Biol. {\bf 59} (1997), 263--294.
\bibitem[LN09]{LN09} Li, F. \& Ni, W.-M., {\it On the global existence and finite time blow-up of shadow systems}, J. Differ. Equ. {\bf 247} (2009), 1762--1776.
\bibitem[LLC]{LLC} Li, J., Latos, E. \& Chen, L., {\it Wavefronts for a nonlinear non-local bistable reaction-diffusion equation in population dynamics}, Journal of Differential Equations {\bf 263 (10)} (2017), 6427--6455.
\bibitem[LCS20]{LCS20} Li, J., Chen, L. \& Surulescu, C., {\it Global boundedness, hair trigger effect, and pattern formation driven by the parametrization of a nonlocal Fisher-KPP problem}, J. Diff. Equations {\bf 269} (2020), 9090--9122.
\bibitem[MZ]{mz98} Merle, F. \& Zaag, H., {\it Refined uniform estimates at blow-up and applications for nonlinear heat equations}, Geom. Funct. Anal. {\bf 8 (6)} (1998), 1043--1085.
\bibitem[PS05]{PS05} Perthame, B. \& Souganidis, P. E., {\it Front propagation for a jump process model arising in spatial ecology.} DCDS(B) {\bf 13} (2005), 1235--1248.
\bibitem[QS]{QS} Quittner, P. \& Souplet, P., {\it Superlinear parabolic problems. Blow-up, global existence and steady states.} Birkh\"{a}user Adv. Texts Basler Lehrb\"{u}cher, Birkh\"{a}user, 2007.
\bibitem[R40]{R40} Rashevsky, N., {\it An approach to the mathematical biophysics of biological self-regulation and of cell polarity}, Bull. Math. Biophys. {\bf 2} (1940), 15--25.
\bibitem[Ro]{rothe} Rothe, F., {\it Global solutions of reaction-diffusion systems}, Lect. Notes Math., Vol. 1072, Springer, Berlin, 1984.
\bibitem[S]{s98} Souplet, P., {\it Blow-up in non-local reaction-diffusion equations.} SIAM J. Math. Anal. {\bf 29} (1998), no. 6, 1301--1334.
\bibitem[T52]{t52} Turing, A.M.
, {\it The chemical basis of morphogenesis}, Phil. Trans. Roy. Soc. B {\bf 237} (1952), 37--72.
\bibitem[Tz02]{Tz02} Tzanetis, D.E., {\it Blow-up of radially symmetric solutions of a non-local problem modelling Ohmic heating}, Electron. J. Differ. Equ. {\bf 11} (2002), 1--26.
\bibitem[V1]{vol1} Volpert, V., {\it Elliptic partial differential equations: Fredholm theory of elliptic problems in unbounded domains}, Birkh\"auser, 2011.
\bibitem[V2]{vol2} Volpert, V., {\it Elliptic partial differential equations: Reaction-diffusion equations}, Birkh\"auser, 2014.
\bibitem[VP]{volpet} Volpert, V. \& Petrovskii, S., {\it Reaction-diffusion waves in biology}, Physics of Life Reviews {\bf 6} (2009), 267--310.
\bibitem[VV]{VVpp} Volpert, V. \& Vougalter, V., {\it Existence of stationary pulses for non-local reaction-diffusion equations}, Documenta Math. {\bf 19} (2014), 1141--1153.
\bibitem[W]{W} Weissler, F.B., {\it Local existence and nonexistence for semilinear parabolic equations in $L^p$.} Indiana Univ. Math. J. {\bf 38} (1981), 29--40.
\bibitem[ZFK]{zfk38} Zeldovitsch, J. B. \& Frank-Kamenetzki, D. A., {\it A theory of thermal propagation of flame.} Acta Physicochimica U.R.S.S. {\bf 9} (1938), no. 2, 341--350.
\end{thebibliography}
\end{document}
\begin{document}
\title{Convergence rate of the weighted Yamabe flow}
\author{Pak Tung Ho}
\address{Department of Mathematics, Sogang University, Seoul 04107, Korea}
\address{Korea Institute for Advanced Study, Seoul, 02455, Korea}
\email{[email protected]}
\author{Jinwoo Shin}
\address{Korea Institute for Advanced Study, Hoegiro 85, Seoul 02455, Korea}
\email{[email protected]}
\author{Zetian Yan}
\address{109 McAllister Building, Penn State University, University Park, PA 16802, USA}
\email{[email protected]}
\subjclass[2020]{Primary 53E99; Secondary 35K55, 58K05}
\date{30th June, 2022.}
\keywords{}
\begin{abstract}
The weighted Yamabe flow is a geometric flow introduced to study the weighted Yamabe problem on smooth metric measure spaces. Carlotto, Chodosh and Rubinstein have studied the convergence rate of the Yamabe flow. Inspired by their result, we study in this paper the convergence rate of the weighted Yamabe flow.
\end{abstract}
\maketitle
\section{Introduction}
Given a closed (i.e. compact without boundary) Riemannian manifold $(M,g_0)$ of dimension $n\geq 3$, the \textit{Yamabe problem} is to find a metric $g$ conformal to $g_0$ such that the scalar curvature $R_g$ of $g$ is constant. This was solved by Aubin \cite{Aubin0}, Trudinger \cite{Trudinger} and Schoen \cite{Schoen}. The \textit{Yamabe flow} is a geometric flow introduced to study the Yamabe problem, which is defined as
\begin{equation}\label{YF}
\frac{\partial g(t)}{\partial t}=-(R_{g(t)}-r_{g(t)})g(t),
\end{equation}
where
\begin{equation*}
r_{g(t)}=\frac{\int_MR_{g(t)}dV_{g(t)}}{\int_MdV_{g(t)}}
\end{equation*}
is the average of the scalar curvature of $g(t)$. The existence and convergence of the Yamabe flow have been studied in \cite{Brendle4,Brendle5,Chow,Schwetlick&Struwe,Ye}. See also \cite{Azami&Razavi,Cheng&Zhu,Daneshvar&Razavi,Ho1,Ho2,Ho3,Ma&Cheng,Schulz} and references therein for results related to the Yamabe flow.
In \cite{Chodosh}, Carlotto, Chodosh and Rubinstein studied the rate of convergence of the Yamabe flow (\ref{YF}). In particular, they proved the following:
\begin{theorem}[Theorem 1 in \cite{Chodosh}]\label{thm1}
Assume $g(t)$ is a solution of the Yamabe flow \eqref{YF} that converges in $C^{2,\alpha}(M,g_\infty)$ to $g_\infty$ as $t\to\infty$ for some $\alpha\in (0,1)$. Then there is a $\delta>0$ depending only on $g_\infty$ such that:\\
(i) If $g_\infty$ is an integrable critical point, then the convergence occurs at an exponential rate, that is,
$$\|g(t)-g_\infty\|_{C^{2,\alpha}(M,g_\infty)}\leq Ce^{-\delta t}$$
for some constant $C>0$ depending on $g(0)$. \\
(ii) In general, the rate of convergence cannot be worse than polynomial, that is,
$$\|g(t)-g_\infty\|_{C^{2,\alpha}(M,g_\infty)}\leq C(1+t)^{-\delta}$$
for some constant $C>0$ depending on $g(0)$.
\end{theorem}
\begin{theorem}[Theorem 2 in \cite{Chodosh}]\label{thm2}
Assume that $g_\infty$ is a nonintegrable critical point of the Yamabe energy with order of integrability $p\geq 3$. If $g_\infty$ satisfies the Adams-Simon positivity condition $AS_p$, then there exists a metric $g(0)$ conformal to $g_\infty$ such that the solution $g(t)$ of the Yamabe flow \eqref{YF} starting from $g(0)$ exists for all time and converges in $C^\infty(M,g_\infty)$ to $g_\infty$ as $t\to\infty$. The convergence occurs ``slowly" in the sense that
$$C^{-1}(1+t)^{-\frac{1}{p-2}}\leq \|g(t)-g_\infty\|_{C^{2,\alpha}(M,g_\infty)}\leq C(1+t)^{-\frac{1}{p-2}}$$
for some constant $C>0$.
\end{theorem}
We refer the readers to \cite[Definition 8]{Chodosh} and \cite[Definition 10]{Chodosh} respectively for the precise definitions of integrable critical points and of the Adams-Simon positivity condition $AS_p$. Explaining our results requires some terminology.
A \textit{smooth metric measure space} is a four-tuple $(M^n, g, e^{-\phi} dV_g, m)$ of a Riemannian manifold $(M^n,g)$, a smooth measure $e^{-\phi}dV_g$ determined by a function $\phi\in C^{\infty}(M)$ and the Riemannian volume element of $g$, and a dimensional parameter $m\in [0,\infty]$. In the case $m=0$, we require $\phi=0$. The \textit{weighted scalar curvature} of a smooth metric measure space $(M,g,e^{-\phi}dV_g, m)$ is defined as \begin{equation}\label{1.0} R_\phi^m:=R_g+2\Delta_g\phi-\frac{m+1}{m}|\nabla_g\phi|^2_g, \end{equation} where $R_g$ is the scalar curvature of $g$, $\Delta_g$ and $\nabla_g$ are respectively the Laplacian and the gradient of $g$. Conformal equivalence between smooth metric measure spaces is defined as follows; see \cite{Case1} for more details. \begin{definition}\label{condef} Smooth metric measure spaces $(M^n, g, e^{-\phi}dV_g, m)$ \\and $(M^n, \hat{g}, e^{-\hat{\phi}}dV_{\hat{g}}, m)$ are conformally equivalent if there is a smooth function $\sigma\in C^{\infty}(M)$ such that \begin{equation}\label{1.2} (M^n, \hat{g}, e^{-\hat{\phi}}dV_{\hat{g}}, m)=(M^n, e^{\frac{2}{m+n-2}\sigma}g, e^{\frac{m+n}{m+n-2}\sigma}e^{-\phi}dV_g, m). \end{equation} In the case $m=0$, conformal equivalence is defined in the classical sense. \end{definition} If we denote $e^{\frac{1}{2}\sigma}$ by $w$, (\ref{1.2}) is equivalent to \begin{equation} (M^n, \hat{g}, e^{-\hat{\phi}}dV_{\hat{g}}, m)=(M^n,w^{\frac{4}{m+n-2}}g,w^{\frac{2(m+n)}{m+n-2}}e^{-\phi}dV_{g}, m), \end{equation} which is an alternative way to formulate the conformal equivalence of smooth metric measure spaces.
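Two special cases of \eqref{1.0} are worth keeping in mind. If $\phi\equiv 0$, then $R^m_\phi=R_g$, so the weighted scalar curvature extends the usual one; and formally letting $m\to\infty$ in \eqref{1.0} gives
\begin{equation*}
R^\infty_\phi=R_g+2\Delta_g\phi-|\nabla_g\phi|^2_g,
\end{equation*}
since $\frac{m+1}{m}\to 1$ as $m\to\infty$.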
If $(M,g,e^{-\phi}dV_g, m)$ and $(M,g_0,e^{-\phi_0}dV_{g_0}, m)$ are conformal in the sense of (\ref{1.2}), then their weighted scalar curvatures are related by (see (2.2) in \cite{Yan} for example) \begin{equation}\label{1.6} -\frac{4(n+m-1)}{n+m-2}\Delta_{\phi_0} w+R_{\phi_0}^mw=R_\phi^m w^{\frac{m+n+2}{m+n-2}}, \end{equation} where $$\Delta_{\phi_0}:=\Delta_{g_0}-\nabla_{g_0}\phi_0\cdot\nabla_{g_0}$$ is the \textit{weighted Laplacian} of $(M,g_0,e^{-\phi_0}dV_{g_0}, m)$, i.e. $$\Delta_{\phi_0}\psi=\Delta_{g_0}\psi-\langle\nabla_{g_0} \phi_0,\nabla_{g_0}\psi\rangle~~\mbox{ for any }\psi\in C^\infty(M).$$ A fact that we will use constantly throughout this paper is that the weighted Laplacian $\Delta_{\phi_0}$ is formally self-adjoint with respect to the measure $e^{-\phi_0}dV_{g_0}$ (c.f. \cite{Case1}). Given a compact smooth metric measure space $(M,g_0,e^{-\phi_0}dV_{g_0}, m)$, the \textit{weighted Yamabe problem} is to find another smooth metric measure space $(M,g,e^{-\phi}dV_g, m)$ conformal to $(M,g_0,e^{-\phi_0}dV_{g_0}, m)$ such that its weighted scalar curvature $R^m_\phi$ is constant. The weighted Yamabe problem in this article is different from that introduced by Case in \cite{Case1}. See also \cite{Case2,Case3,deSouza,Munoz} for more results related to Case's weighted Yamabe problem. In analogy with the Yamabe flow, the \textit{weighted Yamabe flow} is a geometric flow used to study the weighted Yamabe problem; it was first introduced by Yan in \cite{Yan}. More precisely, the weighted Yamabe flow is the evolution equation defined on $(M,g(t),e^{-\phi(t)}dV_{g(t)}, m)$ given by \begin{equation}\label{1.3} \begin{split} \begin{dcases} \frac{\partial g}{\partial t} &=(r^m_{\phi}-R^m_{\phi})g, \\ \frac{\partial \phi}{\partial t} &=\frac{m}{2}(R^m_{\phi}-r^m_{\phi}), \end{dcases} \end{split} \end{equation} where $r^m_{\phi}$ is the mean value of $R^m_{\phi}$; i.e. \begin{equation}\label{1.4} r^m_{\phi}=\frac{\int_M R^m_{\phi} e^{-\phi}dV_g}{\int_M e^{-\phi}dV_g}.
\end{equation} In \cite{Yan}, Yan proved that the weighted Yamabe flow (\ref{1.3}) exists for all time and converges to a metric with constant weighted scalar curvature. Inspired by the results of Carlotto, Chodosh and Rubinstein about the convergence rate of the Yamabe flow, i.e. Theorems \ref{thm1} and \ref{thm2} mentioned above, we study in this paper the rate of convergence of the weighted Yamabe flow (\ref{1.3}). The following theorems are the main results in this paper: \begin{theorem}\label{main1} Assume that $(g(t),\phi(t))$ is a solution to the weighted Yamabe flow that converges in $C^{2,\alpha}(M,g_\infty)$ to $(g_\infty,\phi_\infty)$ as $t\to\infty$ for some $\alpha\in (0,1)$. Then, there is $\delta>0$ depending only on $(g_\infty,\phi_\infty)$ such that\\ (i) If $(g_\infty,\phi_\infty)$ is an integrable critical point, then the convergence occurs at an exponential rate $$\|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2,\alpha}(M,g_\infty)}\leq Ce^{-\delta t},$$ for some constant $C>0$ depending on $(g(0),\phi(0))$.\\ (ii) In general, the convergence cannot be worse than a polynomial rate $$\|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2,\alpha}(M,g_\infty)}\leq C(1+t)^{-\delta},$$ for some constant $C>0$ depending on $(g(0),\phi(0))$, where \begin{displaymath} \|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2,\alpha}(M,g_\infty)}=\|g(t)-g_\infty\|_{C^{2,\alpha}(M,g_\infty)}+\|\phi(t)-\phi_\infty\|_{C^{2,\alpha}(M,g_\infty)}. \end{displaymath} \end{theorem} \begin{theorem}\label{main2} Assume that $(g_\infty,\phi_\infty)$ is a nonintegrable critical point of the energy functional $E$ with order of integrability $p\geq 3$.
If $(g_\infty,\phi_\infty)$ satisfies the Adams-Simon positivity condition $AS_p$, then there exists a metric-measure structure $(g(0),\phi(0))$ conformal to $(g_\infty,\phi_\infty)$ such that the weighted Yamabe flow $(g(t),\phi(t))$ starting from $(g(0),\phi(0))$ exists for all time and converges in $C^\infty(M,g_\infty)$ to $(g_\infty,\phi_\infty)$ as $t\to\infty$. The convergence occurs ``slowly'' in the sense that $$C^{-1}(1+t)^{-\frac{1}{p-2}}\leq \|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2}(M,g_\infty)}\leq C(1+t)^{-\frac{1}{p-2}}$$ for some constant $C>0$. \end{theorem} The structure of this article is the following. Section \ref{section2} is devoted to fixing the notation and recalling some background on the normalized Yamabe functional, its analyticity and the Lyapunov--Schmidt reduction near a critical point. In particular, the precise definitions of integrable critical point and the Adams-Simon positivity condition $AS_p$ can be found there. In Section \ref{section3}, we use the {\L}ojasiewicz-Simon inequality to prove Theorem \ref{thm1}. Next, in Section \ref{section4} we study polynomial convergence phenomena for nonintegrable critical points, and in Section \ref{section5}, we construct examples of Riemannian manifolds which satisfy the condition $AS_3$. This allows us to conclude that there exists a weighted Yamabe flow converging at exactly the polynomial rate described in Theorem \ref{main2}.
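The rate $(1+t)^{-\frac{1}{p-2}}$ appearing in Theorem \ref{main2} can be anticipated from the following model computation: if $y(t)>0$ solves the ODE $y'=-c\,y^{p-1}$ with $c>0$ and $p\geq 3$, then $\frac{d}{dt}\,y^{2-p}=(p-2)c$, so
\begin{equation*}
y(t)=\big(y(0)^{2-p}+(p-2)c\,t\big)^{-\frac{1}{p-2}},
\end{equation*}
which decays like $t^{-\frac{1}{p-2}}$ as $t\to\infty$.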
\section{Definitions and Preliminaries}\label{section2}\sloppy \begin{definition} Let $(M^n, g, e^{-\phi}dV_g, m)$ be a smooth metric measure space which is conformal to $(M^n, g_0, e^{-\phi_0}dV_{g_0}, m)$ in the sense of Definition \ref{condef}, \begin{displaymath} (M^n, g, e^{-\phi}dV_g, m)=(M^n, w^{\frac{4}{n+m-2}}g_0, w^{\frac{2(m+n)}{n+m-2}}e^{-\phi_0}dV_{g_0}, m). \end{displaymath} In analogy with the classical Yamabe problem, we define the normalized energy functional $E(w)$ as \begin{equation}\label{energy} E_{(g_0,\phi_0)}(w)=\frac{\int_M \frac{4(n+m-1)}{n+m-2}\,w\,L^m_{\phi_0}w\, e^{-\phi_0}dV_{g_0} }{\left( \int_M w^{\frac{2(n+m)}{n+m-2}}e^{-\phi_0}dV_{g_0}\right)^{\frac{n+m-2}{n+m}}}, \end{equation} where $L^m_{\phi_0}$ is the weighted conformal Laplacian on $(M^n, g_0, e^{-\phi_0}dV_{g_0}, m)$, \begin{displaymath} L^m_{\phi_0}=-\Delta_{\phi_0}+\frac{n+m-2}{4(n+m-1)}R^m_{\phi_0}. \end{displaymath} \end{definition} \begin{remark}[The normalized total weighted scalar curvature] In the setting of the above definition, by the transformation law of the weighted scalar curvature in (\ref{1.6}), the normalized energy $E_{(g_0,\phi_0)}(w)$ is exactly the normalized total weighted scalar curvature of $(M^n, g, e^{-\phi}dV_g, m)$; i.e. \begin{displaymath} E_{(g_0,\phi_0)}(w)=E_{(g,\phi)}(1)= \frac{\int_M R^m_{\phi}e^{-\phi}dV_{g} }{\left( {\rm Vol}\left(M^n, e^{-\phi}dV_g\right)\right)^{\frac{n+m-2}{n+m}}}. \end{displaymath} \end{remark} Along the flow (\ref{1.3}), the volume $\displaystyle\int_M e^{-\phi(t)}dV_{g(t)}$ is preserved. Indeed, it follows from (\ref{1.3}) and (\ref{1.4}) that \begin{equation}\label{2.1} \frac{d}{dt}\left(\int_Me^{-\phi(t)} dV_{g(t)}\right) =\frac{n+m}{2}\int_M (r_{\phi(t)}^m-R_{\phi(t)}^m)e^{-\phi(t)}dV_{g(t)}=0.
\end{equation} Since the flow (\ref{1.3}) preserves the conformal structure, we can write the solution as \begin{equation}\label{2.4} (M,g(t),e^{-\phi(t)}dV_{g(t)}, m)=(M,u(t)^{\frac{4}{m+n-2}}g_\infty,u(t)^{\frac{2(m+n)}{m+n-2}}e^{-\phi_\infty}dV_{g_\infty}, m). \end{equation} We remark that this implies that \begin{equation}\label{2.4b} \phi(t)=\phi_\infty-\frac{2m}{m+n-2}\ln u(t). \end{equation} Therefore, if we assume that the volume of $(M, g_\infty, e^{-\phi_\infty}dV_{g_\infty}, m)$ satisfies \begin{equation}\label{2.5} \int_Me^{-\phi_\infty}dV_{g_\infty}=1, \end{equation} then it follows from (\ref{2.1}) that \begin{equation}\label{2.2} \int_Me^{-\phi(t)}dV_{g(t)}=1~~\mbox{ for all }t\geq 0. \end{equation} In view of (\ref{2.4}) and \eqref{2.4b}, the weighted Yamabe flow (\ref{1.3}) reduces to the following evolution equation for the conformal factor: \begin{equation}\label{1.8} \frac{\partial}{\partial t}u(t)=\frac{n+m-2}{4} (r_{\phi(t)}^m-R_{\phi(t)}^m)u(t). \end{equation} Combining this with (\ref{2.2}), we find \begin{equation}\label{1.5} \frac{d}{dt}r_{\phi(t)}^m=\frac{d}{dt}E(u(t)) =-\frac{n+m-2}{2} \int_M(R_{\phi(t)}^m-r_{\phi(t)}^m)^2e^{-\phi(t)}dV_{g(t)}\leqslant 0 \end{equation} along the flow. Consider the following unit volume conformal class associated to $(g_\infty,\phi_\infty)$: \begin{equation*} \begin{split} [(g_\infty,\phi_\infty)]_1=\left\{(w^{\frac{4}{n+m-2}}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln w): 0<w\in C^{2,\alpha}(M)\right.,\\ \left.\int_M w^{\frac{2(n+m)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}=1\right\}.
\end{split} \end{equation*} In order to avoid ambiguities, we define the following notion: for $k\in\mathbb{N}$, we denote the $k$-th differential of the energy functional $E$ on $[(g_\infty,\phi_\infty)]_1$ at the point $w$ in the directions $v_1,..., v_k$ by $$D^kE(w)[v_1,..., v_k].$$ As we will see below, the functional $v\mapsto D^kE(w)[v_1,..., v_{k-1},v]$ is in the image of $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$ under the natural embedding into $C^{2,\alpha}(M,g_\infty)'$. Therefore, we will also write $$D^kE(w)[v_1,..., v_{k-1}]$$ for this element of $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$. When $k=1$, we will drop the (second) brackets, and thus consider $DE(w)\in L^2(M,e^{-\phi_\infty}dV_{g_\infty})$. We may write the differential of $E$ restricted to $[(g_\infty,\phi_\infty)]_1$ as \begin{equation}\label{diffE} \begin{split} \frac{1}{2}DE(w)[v] &=\frac{1}{2}\left.\frac{d}{dt}E(w+tv)\right|_{t=0}\\ &=\frac{\int_M\big(\frac{4(n+m-1)}{n+m-2}\langle\nabla_{g_\infty}w,\nabla_{g_\infty}v\rangle+R_{\phi_\infty}^mwv\big)e^{-\phi_\infty}dV_{g_\infty}} {\big(\int_Mw^{\frac{2(n+m)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}\big)^{\frac{n+m-2}{n+m}}}\\ &\hspace{4mm} -\frac{E(w)}{\int_Mw^{\frac{2(n+m)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}} \int_Mw^{\frac{n+m+2}{n+m-2}}ve^{-\phi_\infty}dV_{g_\infty}\\ &=\int_M\left(-\frac{4(n+m-1)}{n+m-2}\Delta_{\phi_\infty}w+R^m_{\phi_\infty}w-r^m_{\phi}w^{\frac{n+m+2}{n+m-2}}\right)ve^{-\phi_\infty}dV_{g_\infty}\\ &=\int_M\left(R^m_{\phi}-r^m_{\phi}\right)w^{\frac{n+m+2}{n+m-2}}ve^{-\phi_\infty}dV_{g_\infty} \end{split} \end{equation} where $$(M,g,e^{-\phi}dV_g, m)=(M,w^{\frac{4}{m+n-2}}g_\infty,w^{\frac{2(m+n)}{m+n-2}}e^{-\phi_\infty}dV_{g_\infty}, m). $$ Thus, a unit volume metric-measure structure $(g,\phi)$ is a critical point for the energy $E$ restricted to $[(g_\infty,\phi_\infty)]_1$ exactly when $(g,\phi)$ has constant weighted scalar curvature.
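As a consistency check, taking $w=1$ in \eqref{diffE} yields
\begin{equation*}
\frac{1}{2}DE(1)[v]=\int_M\left(R^m_{\phi_\infty}-r^m_{\phi_\infty}\right)v\,e^{-\phi_\infty}dV_{g_\infty},
\end{equation*}
which vanishes for all directions $v$ if and only if $R^m_{\phi_\infty}$ is constant (and hence equal to its mean $r^m_{\phi_\infty}$), in agreement with the characterization of critical points just given.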
We now fix $(g_\infty,\phi_\infty)$ such that (\ref{2.5}) holds and $(g_\infty,\phi_\infty)$ has constant weighted scalar curvature. We denote by $\mathcal{CWSC}_1$ the set of unit volume constant weighted scalar curvature metric-measure structures in $[(g_\infty,\phi_\infty)]_1$. We define the \textit{linearized weighted Yamabe operator} $\mathcal{L}_\infty$ at $(g_\infty,\phi_\infty)$ by means of the formula \begin{equation*} \begin{split} -\frac{4}{n+m-2}\int_M w\mathcal{L}_\infty v e^{-\phi_\infty}dV_{g_\infty} :=&\frac{1}{2}D^2E(g_\infty,\phi_\infty)[v,w]\\ =&\frac{1}{2}\left.\frac{d}{dt}\left(DE(1+tw)[v]\right)\right|_{t=0} \end{split} \end{equation*} for $v\in C^2(M)$. A direct computation (see the Appendix) shows that $$\mathcal{L}_\infty v=(n+m-1)\Delta_{\phi_\infty}v+R_{\phi_\infty}^mv.$$ We define $\Lambda_0:=\ker\mathcal{L}_\infty\subset L^2(M,e^{-\phi_\infty}dV_{g_\infty})$. It follows from classical spectral theory that $\Lambda_0$ is finite-dimensional, since it is the eigenspace of the weighted Laplacian $\Delta_{\phi_\infty}$ for the eigenvalue $\displaystyle\frac{R_{\phi_\infty}^m}{n+m-1}$. We will write $\Lambda_0^{\perp}$ for the $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$-orthogonal complement. It is crucial throughout this work that the functional $E$ is an analytic map in the sense of \cite[Definition 8.8]{Zeidler}.
More precisely, one can easily prove the following by expanding the denominator of $E$ in a power series: fix a metric-measure structure $(g_\infty,\phi_\infty)$; then the functional $E$ is an analytic functional on $\{u\in C^{2,\alpha}(M,g_\infty):u>0\}$ in the sense that for each $w_0\in C^{2,\alpha}(M,g_\infty)$ with $w_0>0$, there are an $\epsilon>0$ and bounded multilinear operators \begin{equation*} E^{(k)}:C^{2,\alpha}(M,g_\infty)^{\times k}\rightarrow \mathbb{R}\textrm{ for each }k\geq 0 \end{equation*} such that if $\|w-w_0\|_{C^{2,\alpha}}<\epsilon$, then $\sum_{k=0}^\infty\|E^{(k)}\|\cdot\|w-w_0\|^k_{C^{2,\alpha}}<\infty$ and \begin{equation*} E(w)=\sum_{k=0}^\infty E^{(k)}(\underbrace{w-w_0,\cdots,w-w_0}_{k\textrm{-times}}). \end{equation*} We need the following proposition from \cite[Section 3]{Simon}, which can be established with the help of the implicit function theorem: \begin{prop}\label{prop7} There exist $\epsilon>0$ and an analytic map $$\Phi:\Lambda_0\cap \{v: \|v\|_{L^2}<\epsilon\} \to C^{2,\alpha}(M,g_\infty)\cap \Lambda_0^\perp$$ such that $\Phi(0)=0$, $D\Phi(0)=0$, \begin{equation}\label{eq2.7} \sup_{\substack{ \|v\|_{L^2}<\epsilon,\\ \|w\|_{L^2}\leq 1}}\|D\Phi(v)[w]\|_{L^2}<1, \end{equation} and so that defining $\Psi(v)=1+v+\Phi(v)$, we have that $\Psi(v)>0$, $\displaystyle\int_M \Psi(v)^{\frac{2(n+m)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}=1$ and $$\mbox{\emph{proj}}_{\Lambda_0^\perp}[DE(\Psi(v))]= \mbox{\emph{proj}}_{\Lambda_0^\perp} \left[\Big(R_{\phi}^m-r_{\phi}^m\Big)\Psi(v)^{\frac{n+m+2}{n+m-2}}\right]=0$$ where \begin{equation*} (g,\phi)=(\Psi(v)^{\frac{4}{n+m-2}}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln \Psi(v)).
\end{equation*} Furthermore $$\mbox{\emph{proj}}_{\Lambda_0}[DE(\Psi(v))]= \mbox{\emph{proj}}_{\Lambda_0} \left[\Big(R_{\phi}^m-r_{\phi}^m\Big)\Psi(v)^{\frac{n+m+2}{n+m-2}}\right]=DF(v),$$ where $F:\Lambda_0\cap\{v: \|v\|_{L^2}\leq \epsilon\}\to\mathbb{R}$ is defined by $F(v)=E(\Psi(v))$. Finally, the intersection of $\mathcal{CWSC}_1$ with a small $C^{2,\alpha}(M,g_\infty)$-neighborhood of $1$ coincides with $$\mathcal{S}_0:=\{\Psi(v): v\in\Lambda_0, \|v\|_{L^2}<\epsilon, DF(v)=0\},$$ which is a real analytic subvariety (possibly singular) of the following $(\dim\Lambda_0)$-dimensional real analytic submanifold of $C^{2,\alpha}(M,g_\infty)$: $$\mathcal{S}:=\{\Psi(v): v\in\Lambda_0, \|v\|_{L^2}<\epsilon\}.$$ \end{prop} We will refer to $\mathcal{S}$ as the \textit{natural constraint} for the problem. \begin{definition}\label{def8} For $(g_\infty,\phi_\infty)\in \mathcal{CWSC}_1$, we say that $(g_\infty,\phi_\infty)$ is \textit{integrable} if for all $v\in\Lambda_0$, there is a path $w(t)\in C^2((-\epsilon,\epsilon)\times M,g_\infty)$ such that $(w(t)^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln w(t))\in \mathcal{CWSC}_1$ and $w(0)=1$, $w'(0)=v$. Equivalently, $(g_\infty,\phi_\infty)$ is integrable if and only if $\mathcal{CWSC}_1$ agrees with $\mathcal{S}$ in a small neighborhood of $1$ in $C^{2,\alpha}(M,g_\infty)$. \end{definition} We remark that the integrability defined in Definition \ref{def8} is equivalent to the functional $F$ (as defined in Proposition \ref{prop7}) being constant in a neighborhood of $0$ inside $\Lambda_0$ \cite[Lemma 1]{Adams&Simon}. \begin{definition} If $\Lambda_0=\{0\}$, i.e. if $\mathcal{L}_\infty$ is injective, then we call $(g_\infty,\phi_\infty)$ a \textit{nondegenerate critical point}. On the other hand, if $\Lambda_0\neq\{0\}$, we call $(g_\infty,\phi_\infty)$ \textit{degenerate}.
\end{definition} Note that if $(g_\infty,\phi_\infty)$ is a nondegenerate critical point, then $(g_\infty,\phi_\infty)$ is automatically integrable in the above sense. Now suppose that $(g_\infty,\phi_\infty)$ is a nonintegrable critical point. Because $F(v)=E(\Psi(v))$, defined in Proposition \ref{prop7}, is analytic, we may expand it in a power series \begin{equation*} F(v)=F(0)+\sum_{j\geq p}F_j(v) \end{equation*} where $F_j$ is a degree-$j$ homogeneous polynomial on $\Lambda_0$ and $p$ is chosen so that $F_p$ is nonzero. We will call $p$ the \textit{order of integrability} of $(g_\infty,\phi_\infty)$. We will also need a further hypothesis for nonintegrable critical points introduced in \cite{Adams&Simon}. \begin{definition}\label{def10} We say that $(g_\infty,\phi_\infty)$ satisfies the \textit{Adams-Simon positivity condition}, $AS_p$ for short (here $p$ is the order of integrability of $(g_\infty,\phi_\infty)$), if it is nonintegrable and $F_p|_{\mathbb{S}^k}$ attains a positive maximum for some $\hat{v}\in \mathbb{S}^k\subset \Lambda_0$. Recall that $F_p$ is the lowest-degree nonconstant term in the power series expansion of $F(v)$ around $0$ and $\mathbb{S}^k$ is the unit sphere with respect to the $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$-norm in $\Lambda_0$. \end{definition} An important observation is that when the order of integrability $p$ is odd, the Adams-Simon positivity condition is always satisfied: since $F_p$ is homogeneous of odd degree, $F_p(-v)=-F_p(v)$, so any $\hat{v}\in\mathbb{S}^k$ with $F_p(\hat{v})\neq 0$ can be replaced by $-\hat{v}$ if necessary to produce a positive value. Moreover the order of integrability (at a critical point of $E$) always satisfies $p\geq 3$. Furthermore, we will show in the Appendix that \begin{equation}\label{1.10} F_3(v)=-\frac{8(n+m+2)}{(n+m-2)^2}R_{\phi_\infty}^m\int_M v^3 e^{-\phi_\infty}dV_{g_\infty}. \end{equation} \section{The rate of convergence}\label{section3} One of the tools for controlling the rate of convergence of the weighted Yamabe flow will be the {\L}ojasiewicz-Simon inequality stated in \cite[Definition 11]{Chodosh}.
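Recall the finite-dimensional model: if $f$ is a real analytic function near $0\in\mathbb{R}^k$ with $\nabla f(0)=0$, the classical {\L}ojasiewicz inequality asserts that $|f(x)-f(0)|^{1-\theta}\leq C|\nabla f(x)|$ for some $\theta\in(0,\frac{1}{2}]$, some $C>0$, and all $x$ near $0$. For example, $f(x)=x^p$ on $\mathbb{R}$ satisfies this with $\theta=\frac{1}{p}$, since
\begin{equation*}
|x^p|^{1-\frac{1}{p}}=|x|^{p-1}=\frac{1}{p}\,|f'(x)|,
\end{equation*}
which is the source of the exponent $\frac{1}{p}$ appearing in the nonintegrable case below.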
\begin{prop}\label{prop13} Suppose that $(g_\infty,\phi_\infty)$ satisfies \eqref{2.5} and has constant weighted scalar curvature. There are $\theta\in(0,\frac{1}{2}]$, $\epsilon>0$ and $C>0$ (all depending only on $n$ and $(g_\infty,\phi_\infty)$) such that for $u\in C^{2,\alpha}(M,g_\infty)$ with $\|u-1\|_{C^{2,\alpha}(M,g_\infty)}<\epsilon$, we have $$\Big|r_{\phi}^m-r_{\phi_\infty}^m\Big|^{1-\theta}\leq C\|DE(g,\phi)\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}$$ where \begin{equation*} (g,\phi)=(u^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln u). \end{equation*} If $(g_\infty,\phi_\infty)$ is an integrable critical point, then $\theta=\frac{1}{2}$. If $(g_\infty,\phi_\infty)$ is nonintegrable, then this holds for some $\theta\in(0,\frac{1}{p}]$, where $p$ is the order of integrability of $(g_\infty,\phi_\infty)$. \end{prop} \begin{proof} To verify this, we will show that the hypotheses of Proposition 12 in \cite{Chodosh} are satisfied for the energy functional $E$. We work with the Banach spaces ${\mathcal{B}}:=C^{2,\alpha}(M,g_\infty)$ and ${\mathcal{W}}:=L^2(M,e^{-\phi_\infty}dV_{g_\infty})$, and fix $U$ a small enough ball around $1$ in $C^{2,\alpha}(M,g_\infty)$ so that Proposition \ref{prop7} is applicable in $U$. Hypothesis (A) says that $\Lambda_0=\ker\mathcal{L}_\infty$ is complemented in $C^{2,\alpha}(M,g_\infty)$, which is immediate by the following argument. It is not hard to check that the $L^2$ projection map ${\rm{proj}}_{\Lambda_0}$ restricts to a continuous map from $C^{2,\alpha}(M,g_\infty)$ onto $\Lambda_0$ (since, of course, $C^{2,\alpha}(M,g_\infty) \hookrightarrow L^2(M,e^{-\phi_\infty}dV_{g_\infty})$ is a continuous embedding); from this, it follows that $\Lambda_0^{'}$ is complemented (by the map ${\rm{proj}}^{'}_{\Lambda_0}$) in the dual space $C^{2,\alpha}(M,g_\infty)^{'}$, and its complement may be canonically identified with $(\Lambda_0^{\perp})^{'}$.
Hypothesis (B) is satisfied as follows: Consider the map \begin{equation} {\mathcal{W}}:=L^2(M,e^{-\phi_\infty}dV_{g_\infty}) \hookrightarrow C^{2,\alpha}(M,g_\infty)^{'}, \quad f \mapsto \left(\psi\mapsto \int_M f\psi e^{-\phi_\infty}dV_{g_\infty}\right). \end{equation} (B1) This map is continuous.\\ (B2) The map ${\rm{proj}}^{'}_{\Lambda_0}\in {\mathcal{B}}(C^{2,\alpha}(M,g_\infty)^{'})$ leaves $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$ invariant; here we are considering the composition \begin{equation} {\rm{proj}}_{\Lambda_0}: C^{2,\alpha}(M,g_\infty)\to \Lambda_0 \hookrightarrow C^{2,\alpha}(M,g_\infty). \end{equation} (B3) The fact that $DE\in C^1(U,L^2(M,e^{-\phi_\infty}dV_{g_\infty}))$ follows from the explicit form of $DE$ given above.\\ (B4) Finally, we have to verify that ${\rm range}\,\mathcal{L}_\infty=(\Lambda_0^{\perp})^{'}\cap L^2(M,e^{-\phi_\infty}dV_{g_\infty})$. The fact that ${\rm range}\,\mathcal{L}_\infty \subset (\Lambda_0^{\perp})^{'}\cap L^2(M,e^{-\phi_\infty}dV_{g_\infty})$ is obvious because $\mathcal{L}_\infty$ is formally self-adjoint on $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$. The other inclusion follows from the $L^2$ spectral decomposition of $\mathcal{L}_\infty$. Therefore, to prove the {\L}ojasiewicz-Simon inequality with exponent $\theta\in (0,\frac{1}{2}]$, it suffices to check hypothesis (C), i.e. that the energy functional $E$ restricted to the natural constraint satisfies the {\L}ojasiewicz-Simon inequality with exponent $\theta\in (0,\frac{1}{2}]$. Recall that in Proposition \ref{prop7} we have defined $F(v)=E(\Psi(v))$. In the integrable case, clearly, $F(v)= F(0) $, so $F$ satisfies the {\L}ojasiewicz-Simon inequality with exponent $\frac{1}{2}$. In general, by definition, $F$ is an analytic function whose power series has its first nonzero term of degree $p$. Similar to \cite[Proposition 13]{Chodosh}, we may conclude that $F$ satisfies the {\L}ojasiewicz-Simon inequality with exponent $\frac{1}{p}$.
The claim follows from the fact that $E(u)=r^m_{\phi}$ under the volume normalization. \end{proof} Now we show how the {\L}ojasiewicz-Simon inequality yields quantitative estimates on the rate of convergence of the weighted Yamabe flow. \begin{proof}[Proof of Theorem \ref{main1}] We consider the weighted Yamabe flow $(g(t),\phi(t))=(u(t)^{\frac{4}{n+m-2}}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln u(t))$ which converges to $(g_\infty,\phi_\infty)$ in $C^{2,\alpha}(M,g_\infty)$ as $t\to\infty$. In Proposition \ref{prop13}, we have shown that there is a {\L}ojasiewicz-Simon inequality near $(g_\infty,\phi_\infty)$ for some $\theta\in (0,\frac{1}{2}]$. We emphasize that if we regard $DE(u(t))$ as an element of $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$, then \begin{equation}\label{1.9} DE(u(t))=2\big(R_{\phi(t)}^m-r_{\phi(t)}^m\big)u(t)^{\frac{n+m+2}{n+m-2}}. \end{equation} Choose $t_0$ large enough that the {\L}ojasiewicz-Simon inequality applies, i.e. so that $\|u(t)-1\|_{C^{2,\alpha}(M,g_\infty)}\leq \epsilon$ for all $t\geq t_0$. This together with (\ref{1.5}) and Proposition \ref{prop13} implies that \begin{equation}\label{2.6} \begin{split} \frac{d}{dt}\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big) &=-\frac{n+m-2}{2}\int_M(R_{\phi(t)}^m-r_{\phi(t)}^m)^2u(t)^{\frac{2(n+m)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}\\ &\leq -c\int_M(R_{\phi(t)}^m-r_{\phi(t)}^m)^2u(t)^{\frac{2(n+m+2)}{n+m-2}}e^{-\phi_\infty}dV_{g_\infty}\\ &=-c\big\|DE(u(t))\big\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}^2\\ &\leq -c\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)^{2-2\theta}, \end{split} \end{equation} where $c>0$ is a constant depending only on $n$ and $(g_\infty,\phi_\infty)$ (which may change from line to line). Let us first assume that the {\L}ojasiewicz-Simon inequality is satisfied with $\theta=\frac{1}{2}$, i.e. we are in the integrable case.
Then (\ref{2.6}) yields $0\leq r_{\phi(t)}^m-r_{\phi_\infty}^m\leq C e^{-2\delta t}$, for some $\delta>0$ depending only on $n$ and $(g_\infty,\phi_\infty)$, and $C>0$ depending on $(g(0),\phi(0))$ (chosen so that this actually holds for all $t\geq 0$). On the other hand, if the {\L}ojasiewicz-Simon inequality holds with $\theta\in (0,\frac{1}{2})$, then the same argument shows that $r_{\phi(t)}^m-r_{\phi_\infty}^m\leq C(1+t)^{\frac{1}{2\theta-1}}$. Exploiting the fact that the flow converges in $C^2$, we may use the {\L}ojasiewicz-Simon inequality to compute \begin{equation*} \begin{split} \frac{d}{dt}\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)^\theta &=\theta\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)^{\theta-1} \frac{d}{dt}\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)\\ &\leq -c\,\theta\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)^{\theta-1}\big\|DE(u(t))\big\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}^2\\ &\leq -c\,\theta\|DE(u(t))\big\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}\\ &\leq -c\,\theta\left\|\frac{\partial u(t)}{\partial t}\right\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}, \end{split} \end{equation*} where we have used (\ref{1.8}) and (\ref{1.9}) in the last inequality. Thus, if $\theta=\frac{1}{2}$ (recall $\lim_{t\to\infty}u(t)=1$), then \begin{equation*} \begin{split} \|u(t)-1\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})} & \leq\int_t^\infty\left\|\frac{\partial u(s)}{\partial s}\right\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}ds \\ & \leq -c\int_t^\infty\frac{d}{ds}\left[\big(r_{\phi(s)}^m-r_{\phi_\infty}^m\big)^{\frac{1}{2}}\right]ds\\ &=c\big(r_{\phi(t)}^m-r_{\phi_\infty}^m\big)^{\frac{1}{2}}\leq Ce^{-\delta t}. \end{split} \end{equation*} If $\theta\in (0,\frac{1}{2})$, a similar computation yields $\|u(t)-1\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}\leq C(1+t)^{-\frac{\theta}{1-2\theta}}$.
To obtain $C^{2,\alpha}$ estimates, we may interpolate between $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$ and $W^{k,2}(M,e^{-\phi_\infty}dV_{g_\infty})$ for $k$ large enough: interpolation \cite[Theorem 6.4.5]{Bergh} and Sobolev embedding yield a constant $\eta\in (0,1)$ so that $$\|u(t)-1\|_{C^{2,\alpha}(M,g_\infty)}\leq \|u(t)-1\|_{L^2(M,e^{-\phi_\infty}dV_{g_\infty})}^\eta\|u(t)-1\|_{W^{k,2}(M,e^{-\phi_\infty}dV_{g_\infty})}^{1-\eta}.$$ Because $u(t)$ converges to $1$ in $C^{2,\alpha}$ (and thus in $C^\infty$ by parabolic Schauder estimates and bootstrapping), the second term is uniformly bounded. Thus, exponential (polynomial) decay of the $L^2$ norm gives exponential (polynomial) decay of the $C^{2,\alpha}$ norm as well. Since $(g(t), \phi(t))=(u(t)^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{m+n-2}\ln u(t))$, we immediately have \begin{equation} \|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2,\alpha}(M,g_\infty)}\leq Ce^{-\delta t}, \end{equation} for some constant $C>0$ depending on $(g(0),\phi(0))$ in the integrable case; in the general case, the same argument gives the polynomial bound $\|(g(t),\phi(t))-(g_\infty,\phi_\infty)\|_{C^{2,\alpha}(M,g_\infty)}\leq C(1+t)^{-\delta}$. \end{proof} \section{Slowly converging weighted Yamabe flow}\label{section4} In this section, we show that, given a nonintegrable critical point $(g_\infty,\phi_\infty)$ satisfying a particular hypothesis, there exists a weighted Yamabe flow $(g(t),\phi(t))$ such that $(g(t),\phi(t))$ converges to $(g_\infty,\phi_\infty)$ exactly at a polynomial rate. This section is organized as follows: In Section \ref{sec3.1}, we show that the weighted Yamabe flow can be represented by two different flows. To be more specific, we will project the flow equation to the kernel $\Lambda_0$ of $\mathcal{L}_\infty$ and its orthogonal complement $\Lambda_0^\perp$, respectively. In Sections \ref{sec3.2} and \ref{sec3.3}, we solve the kernel-projected flow and the kernel-orthogonal projected flow, respectively. In Section \ref{sec3.4}, we combine all the previous results to prove Theorem \ref{main2}.
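For later use we record that, by \eqref{1.8} and \eqref{1.9}, the evolution of the conformal factor can be written as
\begin{equation*}
\frac{\partial u}{\partial t}=-\frac{n+m-2}{8}\,DE(u(t))\,u(t)^{-\frac{4}{n+m-2}},
\end{equation*}
so that, up to the conformal weight $u^{-\frac{4}{n+m-2}}$, the weighted Yamabe flow is a negative gradient flow for the energy $E$; this is why the quantity $DE(u)u^{-\frac{4}{n+m-2}}-DE(u)$ appears in the error terms estimated below.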
\subsection{Projecting the weighted Yamabe flow with estimates}\label{sec3.1} Here and in the sequel we will always use $f'(t)$ to denote the time derivative of a function $f(t)$. We omit the proof of the following lemma, since it is the same as that of \cite[Lemma 15]{Chodosh}. \begin{lem}\label{lem15} Assume that $(g_\infty,\phi_\infty)$ satisfies $AS_p$ as defined in Definition \ref{def10}, i.e. $F_p|_{\mathbb{S}^k}$ achieves a positive maximum for some point $\hat{v}$ in the unit sphere $\mathbb{S}^k\subset \Lambda_0$. Then, for any fixed $T\geq 0$, the function \begin{equation}\label{eq6} \varphi(t):=\varphi(t,T)=(T+t)^{-\frac{1}{p-2}}\left(\frac{8}{(n+m-2)p(p-2)F_p(\hat{v})}\right)^\frac{1}{p-2}\hat{v} \end{equation} solves $\frac{8}{n+m-2}\varphi'+DF_p(\varphi)=0$. \end{lem} We define the parabolic $C^{k,\alpha}$ norm on $(t,t+1)\times M$ as follows: for $\alpha\in(0,1)$, we define the seminorm \begin{equation*} |f(t)|_{C^{0,\alpha}}:=\sup_{\substack{(s_i,x_i)\in(t,t+1)\times M \\ (s_1,x_1)\neq(s_2,x_2)}}\frac{|f(s_1,x_1)-f(s_2,x_2)|}{(d_{g_\infty}(x_1,x_2)^2+|s_1-s_2|)^\frac{\alpha}{2}} \end{equation*} and for $k\geq 0$ and $\alpha\in(0,1)$, we define the norm \begin{equation}\label{eq8} \|f(t)\|_{C^{k,\alpha}}:=\sum_{|\beta|+2j\leq k}\sup_{(t,t+1)\times M}|D^\beta_x D^j_t f|+\sum_{|\beta|+2j=k}|D^\beta_x D^j_t f|_{C^{0,\alpha}} \end{equation} where the norm and derivatives in the sum are taken with respect to $g_\infty$. When we use a different norm, we will always indicate the domain. \begin{lem}\label{appdA} For the functional $E$ and $w$ such that $\|w-1\|_{C^{2,\alpha}}<1$, there holds \begin{equation}\label{eq27} \|D^3E(w)[u,v]\|_{C^{0,\alpha}}\leq C\|u\|_{C^{2,\alpha}}\|v\|_{C^{2,\alpha}} \end{equation} for some uniform constant $C>0$.
Furthermore, for $w_1,w_2$ such that $\|w_i-1\|_{C^{2,\alpha}}<1$, we have \begin{equation*} \begin{split} \|D^3E(w_1)[v,v]-D^3E(w_2)[u,u]\|_{C^{0,\alpha}}\leq& C(\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}})\\ &\qquad \times (\|u\|_{C^{2,\alpha}}+\|v\|_{C^{2,\alpha}})\|u-v\|_{C^{2,\alpha}} \end{split} \end{equation*} for some uniform constant $C>0$. \end{lem} \begin{proof} This follows from the following formula for $D^3E$, which is proved in the Appendix: \begin{equation*} D^3E(1)[v,u,z]=-\frac{8(n+m+2)}{(n+m-2)^2}R_{\phi_\infty}^m\int_M vuze^{-\phi_\infty}dV_{g_\infty}. \end{equation*} \end{proof} \begin{lem}\label{lemma16} There exist $T_0>0$, $\epsilon_0>0$ and $c>0$, all depending on $(g_\infty,\phi_\infty)$ and $\hat{v}$, such that the following holds: Fix $T>T_0$. Then, for $\varphi(t)$ as in Lemma \ref{lem15}, $w\in C^{2,\alpha}(M\times[0,\infty))$, and $u:=1+\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp$ where $w^\top:=\textrm{proj}_{\Lambda_0}(w)$ and $w^\perp:=\textrm{proj}_{\Lambda_0^\perp}(w)$, the function \begin{equation*} E_0^\top(w):=\mbox{\emph{proj}}_{\Lambda_0}\left[DE(u)u^{-\frac{4}{n+m-2}}-DE(u)\right] \end{equation*} satisfies \begin{equation*} \begin{split} &\left\|E_0^\top(w)\right\|_{C^{0,\alpha}}\leq c\left\{(T+t)^{-\frac{p-1}{p-2}}+\|w^\top\|^{p-1}_{C^{0,\alpha}}+\|w^\perp\|_{C^{2,\alpha}}\right\}\left\{(T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right\},\\ &\left\|E^\top_0(w_1)-E_0^\top(w_2)\right\|_{C^{0,\alpha}}\\ &\qquad\leq c\left\{(T+t)^{-\frac{p-1}{p-2}}+\|w_1^\top\|^{p-1}_{C^{0,\alpha}}+\|w_2^\top\|^{p-1}_{C^{0,\alpha}}+\|w_1^\perp\|_{C^{2,\alpha}}+\|w_2^\perp\|_{C^{2,\alpha}}\right\}\\ &\qquad \qquad \times \|w_1-w_2\|_{C^{2,\alpha}}\\ &+c\left\{(T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right\}\left(\|w_1^\top\|^{p-2}_{C^{0,\alpha}}+\|w_2^\top\|^{p-2}_{C^{0,\alpha}}\right)\\ &\qquad\qquad \times \|w_1^\top-w_2^\top\|_{C^{0,\alpha}}\\
&+c\left\{(T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right\}\|w_1^\perp-w_2^\perp\|_{C^{2,\alpha}}. \end{split} \end{equation*} Identical estimates hold for $E_0^\perp(w):=\mbox{\emph{proj}}_{\Lambda_0^\perp}\left[DE(u)u^{-\frac{4}{n+m-2}}-DE(u)\right]$. Here, we are using the parabolic H\"{o}lder norms on $(t,t+1)\times M$ as defined above; the bounds hold for each fixed $t\geq 0$, with the constants independent of $T$ and $t$. \end{lem} \begin{proof} Let $\eta:=\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp$. Then one can easily see that $u=1+\eta$ and \begin{equation*} \frac{d}{ds}(1+s\eta)^{-\frac{4}{n+m-2}}=-\frac{4}{n+m-2}(1+s\eta)^{-\frac{n+m+2}{n+m-2}}\eta. \end{equation*} Thus we have \begin{equation*} \begin{split} u^{-\frac{4}{n+m-2}}&=1-\frac{4}{n+m-2}\int_0^1(1+s\eta)^{-\frac{n+m+2}{n+m-2}}\eta ds. \end{split} \end{equation*} So, letting $E_0(w):=DE(u)u^{-\frac{4}{n+m-2}}-DE(u)$, we have \begin{equation}\label{eq9} \begin{split} \|E_0(w)\|_{C^{0,\alpha}}=& c \left\|DE(u)\int_0^1(1+s\eta)^{-\frac{n+m+2}{n+m-2}}\eta ds\right\|_{C^{0,\alpha}}\\ \leq & c\|DE(u)\|_{C^{0,\alpha}}\left( \|\varphi\|_{C^{0,\alpha}}+\|w^{\perp}\|_{C^{0,\alpha}}+\|\Phi(\varphi+w^\top)\|_{C^{0,\alpha}}+\|w^\top\|_{C^{0,\alpha}}\right)\\ \leq & c\|DE(u)\|_{C^{0,\alpha}}\left((T+t)^{-\frac{1}{p-2}}+\|w^\top\|_{C^{0,\alpha}}+\|w^\perp\|_{C^{0,\alpha}}\right), \end{split} \end{equation} where we have used the fact that $\Phi(0)=0$ and $\Phi$ is an analytic map. It follows from Taylor's theorem and Proposition \ref{prop7} that, for $\psi_{s,r}=1+r\left[\varphi+w^\top+\Phi(\varphi+w^\top)+sw^\perp\right]$, \begin{equation}\label{taylor1} \begin{split} DE(u)=&DE(\Psi(\varphi+w^\top))+\int_0^1D^2E(\psi_{s,1})[w^\perp]ds\\ =&DF(\varphi+w^\top)-\frac{8}{n+m-2}\mathcal{L}_\infty w^\perp\\ &\qquad +\int_0^1\int_0^sD^3E(\psi_{s,\tilde{s}})[w^\perp,\varphi+w^\top+\Phi(\varphi+w^\top)+sw^\perp]d\tilde{s}ds.
\end{split} \end{equation} Now, observe that $DF(0)=D^2F(0)=\cdots=D^{p-1}F(0)=0$, where $p$ is the order of the first non-vanishing term in the power series expansion of $F$. Therefore, by Taylor's theorem, we have \begin{equation}\label{taylor2} \|DF(\varphi+w^\top)\|_{C^{0,\alpha}}\leq c\|\varphi+w^\top\|^{p-1}_{C^{0,\alpha}}\leq c\left((T+t)^{-1-\frac{1}{p-2}}+\|w^\top\|^{p-1}_{C^{0,\alpha}}\right). \end{equation} Combining (\ref{eq27}), \eqref{taylor1}, and \eqref{taylor2}, we have \begin{equation}\label{eq10} \|DE(u)\|_{C^{0,\alpha}}\leq c\left((T+t)^{-1-\frac{1}{p-2}}+\|w^\top\|^{p-1}_{C^{0,\alpha}}+\|w^\perp\|_{C^{2,\alpha}}\right). \end{equation} We define \begin{equation*} E_0^\top(w):=\textrm{proj}_{\Lambda_0}E_0(w),\ \ E_0^\perp(w):=\textrm{proj}_{\Lambda_0^\perp} E_0(w). \end{equation*} Now the asserted bounds for $E_0^\top(w)$ follow from the bound (\ref{eq9}) on $E_0(w)$, the estimate (\ref{eq10}) and the continuity of the map $\textrm{proj}_{\Lambda_0}:C^{0,\alpha}(M,g_\infty)\rightarrow \Lambda_0$, \begin{equation*} \begin{split} \|\textrm{proj}_{\Lambda_0}f\|_{C^{0,\alpha}( M,g_\infty)}\leq c\,\|f\|_{C^{0,\alpha}( M,g_\infty)}. \end{split} \end{equation*} Note that this is a spatial bound, so it does not include the $t$-H\"{o}lder norm, but the desired space-time norm bound follows easily from it in the same spirit as \cite[Lemma 16]{Chodosh}. The bound for $E_0^\top(w_1)-E_0^\top(w_2)$ follows similarly. This, together with the bound (\ref{eq9}) on $E_0(w)$ and the estimate (\ref{eq10}), also gives the estimates for $E_0^\perp(w)$. \end{proof} As mentioned at the beginning of this section, we will reduce the weighted Yamabe flow to two flows, one on $\Lambda_0$ and the other on $\Lambda_0^\perp$. The following proposition explains how to do this. \begin{prop}\label{prop17} There exist $T_0>0$, $\epsilon_0>0$ and $c>0$, all depending on $(g_\infty,\phi_\infty)$ and $\hat{v}$, such that the following holds: Fix $T>T_0$.
Then, for $\varphi(t)$ as in Lemma \ref{lem15} and $w\in C^{2,\alpha}(M\times [0,\infty))$, there are functions $E^\top(w)$ and $E^\perp(w)$ such that $u:=1+\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp$ is a solution to the weighted Yamabe flow if and only if \begin{align} \frac{8}{n+m-2}(w^\top)'+D^2F_p(\varphi)w^\top=&E^\top(w),\label{eq11}\\ (w^\perp)'-\mathcal{L}_\infty w^\perp=&E^\perp(w).\label{eq12} \end{align} Here, as long as $\|w\|_{C^{2,\alpha}}\leq \epsilon_0$, the error terms $E^\top$ and $E^\perp$ satisfy \begin{equation*} \begin{split} \|E^\top(w)\|_{C^{0,\alpha}}\leq & c\left((T+t)^{-\frac{p-1}{p-2}}+\|w^\top\|^{p-1}_{C^{0,\alpha}}+\|w^\perp\|_{C^{2,\alpha}}\right)\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\\ &+c(T+t)^{-\frac{p}{p-2}}+c(T+t)^{-\frac{p-1}{p-2}}\|w^\top\|_{C^{0,\alpha}}+c(T+t)^{-\frac{p-3}{p-2}}\|w^\top\|^2_{C^{0,\alpha}}\\ &+c\|w^\top\|^{p-1}_{C^{0,\alpha}}+c\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\|w^\perp\|_{C^{2,\alpha}}, \end{split} \end{equation*} \begin{equation*} \begin{split} & \|E^\top(w_1)-E^\top(w_2)\|_{C^{0,\alpha}}\\ &\qquad \leq c\left((T+t)^{-\frac{p-1}{p-2}}+\|w_1^\top\|^{p-1}_{C^{0,\alpha}}+\|w_2^\top\|^{p-1}_{C^{0,\alpha}}+\|w_1^\perp\|_{C^{2,\alpha}}+\|w_2^\perp\|_{C^{2,\alpha}}\right)\\ &\qquad\qquad \times\|w_1-w_2\|_{C^{2,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)(\|w_1^\top\|^{p-2}_{C^{0,\alpha}}+\|w_2^\top\|^{p-2}_{C^{0,\alpha}})\\ &\qquad\qquad \times \|w_1^\top-w_2^\top\|_{C^{0,\alpha}}\\ &\qquad \quad +c\left((T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)\|w_1^\perp-w_2^\perp\|_{C^{2,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{p-3}{p-2}}(\|w_1^\top\|_{C^{0,\alpha}}+\|w_2^\top\|_{C^{0,\alpha}})+\|w_1^\top\|^{p-2}_{C^{0,\alpha}}+\|w_2^\top\|^{p-2}_{C^{0,\alpha}}\right)\\ &\qquad \qquad \times \|w_1^\top-w_2^\top\|_{C^{0,\alpha}}\\ &\qquad\quad
+c(T+t)^{-\frac{p-1}{p-2}}\|w_1^\top-w_2^\top\|_{C^{0,\alpha}}, \end{split} \end{equation*} \begin{equation*} \begin{split} & \|E^\perp(w)\|_{C^{0,\alpha}}\\ &\qquad \quad \leq c\left((T+t)^{-\frac{p-1}{p-2}}+\|w^\top\|^{p-1}_{C^{0,\alpha}}+\|w^\perp\|_{C^{2,\alpha}}\right)\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\\ &\qquad\qquad +c\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\|w^\perp\|_{C^{2,\alpha}}\\ &\qquad\qquad +c\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\left((T+t)^{-\frac{p-1}{p-2}}+\|w'\|_{C^{0,\alpha}}\right) \end{split} \end{equation*} \begin{equation*} \begin{split} & \|E^\perp(w_1)-E^\perp(w_2)\|_{C^{0,\alpha}}\\ &\qquad\quad \leq c\left((T+t)^{-\frac{p-1}{p-2}}+\|w_1^{\top}\|^{p-1}_{C^{0,\alpha}}+\|w_2^\top\|^{p-1}_{C^{0,\alpha}}+\|w_1^\perp\|_{C^{2,\alpha}}+\|w_2^\perp\|_{C^{2,\alpha}}\right)\\ &\qquad\qquad \times\|w_1-w_2\|_{C^{2,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)(\|w_1^\top\|^{p-2}_{C^{0,\alpha}}+\|w_2^\top\|^{p-2}_{C^{0,\alpha}})\\ &\qquad\qquad \times \|w_1^\top-w_2^\top\|_{C^{0,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)\|w_1^\perp-w_2^\perp\|_{C^{2,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{1}{p-2}}+\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)\|w_1'-w_2'\|_{C^{0,\alpha}}\\ &\qquad\quad +c\left((T+t)^{-\frac{p-1}{p-2}}+\|w_1'\|_{C^{0,\alpha}}+\|w_2'\|_{C^{0,\alpha}}\right)\|w_1-w_2\|_{C^{2,\alpha}}. \end{split} \end{equation*} Here we are using the parabolic H\"{o}lder norms on $(t,t+1)\times M$ as defined above; the bounds hold for each fixed $t\geq 0$, with the constants independent of $T$ and $t$. 
\end{prop} \begin{proof} Recall that $u$ is a solution to the weighted Yamabe flow if and only if \begin{equation}\label{reduce1} \begin{split} \frac{4}{n+m-2} \frac{\partial}{\partial t}u(t)=(r_{\phi(t)}^m-R_{\phi(t)}^m)u(t) \end{split} \end{equation} where $(g(t),\phi(t))=(u(t)^\frac{4}{m+n-2}g_\infty, \phi_\infty-\frac{2m}{m+n-2}\ln u(t))$. Regarding $DE(w)$ in \eqref{diffE} as an element of $L^2(M,e^{-\phi_\infty}dV_{g_\infty})$, we have \begin{equation}\label{reduce2} \frac{1}{2}DE(w)=-\frac{4(n+m-1)}{n+m-2}\Delta_{\phi_\infty}w+R^m_{\phi_\infty}w-r^m_{\phi}w^{\frac{n+m+2}{n+m-2}} \end{equation} where, as always in this paper, $E$ is defined on the unit volume conformal class. Combining \eqref{reduce1} and \eqref{reduce2}, we have \begin{equation*} \frac{4}{n+m-2}\frac{\partial u}{\partial t}=-\frac{1}{2}DE(u)u^{-\frac{4}{m+n-2}}. \end{equation*} We now project the weighted Yamabe flow equation onto $\Lambda_0$ and $\Lambda_0^\perp$: $u$ solves the weighted Yamabe flow if and only if the following two equations are satisfied: \begin{equation}\label{decompose} \begin{split} &\frac{8}{n+m-2}(\varphi+w^\top)'\\ &\qquad\qquad=-\textrm{proj}_{\Lambda_0}\left[DE\left(1+\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp\right)\right]-E_0^\top(w),\\ &\frac{8}{n+m-2}\left(\Phi(\varphi+w^\top)+w^\perp\right)'\\ &\qquad\qquad=-\textrm{proj}_{\Lambda_0^\perp}\left[DE\left(1+\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp\right)\right]-E_0^\perp(w), \end{split} \end{equation} where $E_0(w)$ is defined as in Lemma \ref{lemma16}.
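For the reader's convenience, we note how \eqref{decompose} arises (an elementary observation): by the definition of $E_0(w)$ in Lemma \ref{lemma16},
\begin{equation*}
DE(u)u^{-\frac{4}{n+m-2}}=DE(u)+E_0(w),
\end{equation*}
so multiplying the flow equation by $2$ and applying $\textrm{proj}_{\Lambda_0}$ and $\textrm{proj}_{\Lambda_0^\perp}$ to both sides yields the two equations in \eqref{decompose}.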
Now, by Taylor's theorem, we claim that \begin{equation}\label{proj1} \begin{split} &\textrm{proj}_{\Lambda_0}DE\left(1+\varphi+w^\top+\Phi(\varphi+w^\top)+w^\perp\right)\\ &\qquad\qquad\qquad =\textrm{proj}_{\Lambda_0}DE\left(1+\varphi+w^\top+\Phi(\varphi+w^\top)\right)+E_1^\top(w), \end{split} \end{equation} with the bounds \begin{equation}\label{bound1} \begin{split} \|E_1^\top(w)\|_{C^{0,\alpha}}\leq &c\left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\|w^\perp\|_{C^{2,\alpha}},\\ \|E_1^\top(w_1)-E_1^\top(w_2)\|_{C^{0,\alpha}}\leq &c\left(\|w_1\|_{C^{2,\alpha}}+\|w_2\|_{C^{2,\alpha}}\right)\|w_1^\perp-w_2^\perp\|_{C^{2,\alpha}}. \end{split} \end{equation} The claim follows from the integral form of the remainder in Taylor's theorem (see \cite[Proposition 17]{Chodosh} for more details), so we omit the proof here. Recall that $F(v):=E(\Psi(v))=E(1+v+\Phi(v))$; using the Lyapunov--Schmidt reduction (Proposition \ref{prop7}), we have \begin{equation}\label{proj2} \textrm{proj}_{\Lambda_0}DE\left(1+\varphi+w^\top+\Phi(\varphi+w^\top)\right)=DF(\varphi+w^\top). \end{equation} Furthermore, by analyticity (Proposition \ref{prop7}), $DF$ has a convergent power series representation around $0$ with lowest order term of order $p-1$. Thus, as long as $\varphi+w^\top$ is small enough, we may write \begin{equation}\label{proj3} DF(\varphi+w^\top)=DF(\varphi)+D^2F(\varphi)(w^\top)+E_2^\top(w^\top), \end{equation} where \begin{equation}\label{bound2} \begin{split} &\|E_2^\top(w^\top)\|_{C^{0,\alpha}}\leq c\left((T+t)^{-\frac{p-3}{p-2}}+\|w^\top\|^{p-3}_{C^{0,\alpha}}\right)\|w^\top\|^2_{C^{0,\alpha}},\\ &\|E_2^\top(w_1^\top)-E_2^\top(w_2^\top)\|_{C^{0,\alpha}}\\ &\qquad\qquad \leq c\left((T+t)^{-\frac{p-3}{p-2}}(\|w_1^\top\|_{C^{0,\alpha}}+\|w_2^\top\|_{C^{0,\alpha}})+\|w_1^\top\|^{p-2}_{C^{0,\alpha}}+\|w_2^\top\|^{p-2}_{C^{0,\alpha}}\right)\\ &\qquad\qquad \qquad \times\|w_1^\top-w_2^\top\|_{C^{0,\alpha}}.
\end{split} \end{equation} By the results we have obtained so far, the $\Lambda_0$-component of the weighted Yamabe flow may be written as \begin{equation*} \begin{split} \frac{8}{n+m-2}(\varphi+w^\top)' =-DF(\varphi)-D^2F(\varphi)(w^\top)-E_2^\top(w^\top)-E_1^\top(w)-E_0^\top(w) \end{split} \end{equation*} where we have used (\ref{proj1}), (\ref{proj2}) and (\ref{proj3}). Now, expanding $F$ in a power series, $F=F(0)+\sum_{j=p}^\infty F_j$, we may write the above expression as \begin{equation*} \begin{split} &\frac{8}{n+m-2}(\varphi+w^\top)'\\ &\qquad\qquad=-DF_p(\varphi)-D^2F_p(\varphi)(w^\top)\underbrace{-E_3^\top(w)-E_2^\top(w^\top)-E_1^\top(w)-E_0^\top(w)}_{=:E^\top(w)} \end{split} \end{equation*} where \begin{equation*} E_3^\top(w)=\sum_{j\geq p+1}(DF_j(\varphi)+D^2F_j(\varphi)w^\top). \end{equation*} On the other hand, by Lemma \ref{lem15}, we have that $\frac{8}{n+m-2}\varphi'=-DF_p(\varphi)$. Therefore, $w^\top$ must satisfy the equation \begin{equation*} \frac{8}{n+m-2}(w^\top)'+D^2F_p(\varphi)w^\top=E^\top(w). \end{equation*} By analyticity, $E_3^\top(w)$ converges in $C^{0,\alpha}$ for $\|\varphi\|_{C^{2,\alpha}}$ and $\|w\|_{C^{2,\alpha}}$ small enough. Because each term in the sum is a homogeneous polynomial, we get the following error bound by using the formula for $\varphi$: \begin{equation}\label{bound3} \begin{split} \|E_3^\top(w)\|_{C^{0,\alpha}}&\leq c\left((T+t)^{-\frac{p}{p-2}}+(T+t)^{-\frac{p-1}{p-2}}\|w^\top\|_{C^{0,\alpha}}\right),\\ \|E_3^\top(w_1)-E_3^\top(w_2)\|_{C^{0,\alpha}}&\leq c(T+t)^{-\frac{p-1}{p-2}}\|w_1^\top-w_2^\top\|_{C^{0,\alpha}}. \end{split} \end{equation} Combining (\ref{bound1}), (\ref{bound2}) and (\ref{bound3}), we can see that $E^\top(w)$ satisfies the asserted bounds. By a similar argument, we can prove the result for the $\Lambda_0^{\perp}$-portion of the weighted Yamabe flow.
\end{proof} \subsection{Solving the kernel-projected flow with polynomial decay estimates}\label{sec3.2} In this subsection we solve the kernel-projected flow (\ref{eq11}). First, from the definition of $\varphi$ in (\ref{eq6}) and the fact that $D^2F_p$ is homogeneous of degree $p-2$, \begin{equation*} D^2F_p(\varphi)=(T+t)^{-1}\underbrace{\left(\frac{8}{(n+m-2)p(p-2)F_p(\hat{v})}\right)D^2F_p(\hat{v})}_{:=\mathcal{D}}. \end{equation*} Let $\mu_1,\cdots,\mu_k$ be the eigenvalues of $\mathcal{D}$ and $\{e_i\}$ a corresponding orthonormal basis in which $\mathcal{D}$ is diagonalized. Then the kernel-projected flow is equivalent to the following system of ODEs for $v_i:=w^\top\cdot e_i$, \begin{equation}\label{eq14} \frac{8}{n+m-2}v_i'+\frac{\mu_i}{T+t}v_i=E_i^\top:=E^\top\cdot e_i,\quad i=1,\cdots,k. \end{equation} Fix for the rest of this subsection a number $\gamma$ with $\gamma\notin\left\{\frac{n+m-2}{8}\mu_1,\cdots,\frac{n+m-2}{8}\mu_k\right\}$. Define the following weighted norms: \begin{equation*} \|u\|_{C^{0,\alpha}_\gamma}:=\sup_{t>0}\left[(T+t)^\gamma\|u(t)\|_{C^{0,\alpha}}\right] \ \textrm{ and } \ \|u\|_{C^{0,\alpha}_{1,\gamma}}:=\|u\|_{C^{0,\alpha}_\gamma}+\|u'\|_{C^{0,\alpha}_{1+\gamma}}. \end{equation*} We recall that these H\"{o}lder norms are space-time norms on the interval $(t,t+1)\times M$, as defined in (\ref{eq8}). Given $\gamma$ as above, we define $\Pi_0=\Pi_0(\gamma)$ by \begin{equation} \Pi_0:=\textrm{span}\left\{v\in\Lambda_0:\mathcal{D}v=\mu v,\textrm{ and }\mu>\frac{8}{n+m-2}\gamma\right\}. \end{equation} Moreover, let $\textrm{proj}_{\Pi_0}:\Lambda_0\rightarrow \Pi_0$ be the corresponding linear projector. The next lemma concerns the system (\ref{eq14}).
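Before stating it, we record for orientation a side computation (not needed in the sequel): dropping the error term in \eqref{eq14}, the homogeneous equation $\frac{8}{n+m-2}v_i'+\frac{\mu_i}{T+t}v_i=0$ separates and integrates to
\begin{equation*}
v_i(t)=v_i(0)\left(\frac{T}{T+t}\right)^{\frac{n+m-2}{8}\mu_i},
\end{equation*}
so the $i$-th mode decays faster than the reference rate $(T+t)^{-\gamma}$ precisely when $\mu_i>\frac{8}{n+m-2}\gamma$, which is the condition appearing in the definition above; this also motivates the substitution $w_j=(T+t)^{\frac{n+m-2}{8}\mu_j}v_j$ used below.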
\begin{lem}\label{lemma18} For any $T>0$ such that $\|E^\top\|_{C^{0,\alpha}_{1+\gamma}}<\infty$, there is a unique $u$ with $u(t)\in\Lambda_0$, $t\in[0,\infty)$, satisfying $\|u\|_{C^0_\gamma}<\infty$, $\textrm{\emph{proj}}_{\Pi_0}(u(0))=0$, and such that $v_i:=u\cdot e_i$ solves the system \eqref{eq14}. Furthermore, we have the bound \begin{equation*} \|u\|_{C^{0,\alpha}_{1,\gamma}}\leq C\|E^\top\|_{C^{0,\alpha}_{1+\gamma}}. \end{equation*} Here the constant $C$ does not depend on $T$. \end{lem} \begin{proof} Letting \begin{equation*} w_j:=(T+t)^{\frac{n+m-2}{8}\mu_j}v_j, \end{equation*} the system (\ref{eq14}) is equivalent to \begin{equation*} w_j'=\frac{n+m-2}{8}(T+t)^{\frac{n+m-2}{8}\mu_j}E_j^\top,\quad j=1,\cdots,k. \end{equation*} Then, we claim that we may solve the $j$-th ODE as \begin{equation*} w_j(t)=\left\{ \begin{array}{ll} \displaystyle\alpha_j-\frac{n+m-2}{8}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau, & \hbox{if $\gamma>\displaystyle\frac{n+m-2}{8}\mu_j$;} \\ \displaystyle\alpha_j+\frac{n+m-2}{8}\int_0^t(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau, & \hbox{if $\gamma<\displaystyle\frac{n+m-2}{8}\mu_j$.} \end{array} \right. \end{equation*} First suppose that $j$ is such that $\gamma>\frac{n+m-2}{8}\mu_j$. Then the claim would imply that \begin{equation*} \begin{split} v_j(t)=(T+t)^{-\frac{n+m-2}{8}\mu_j}\alpha_j-\frac{n+m-2}{8}(T+t)^{-\frac{n+m-2}{8}\mu_j}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau.
\end{split} \end{equation*} To prove the claim, we check that the integral converges under our assumption on $E^\top$: \begin{equation*} \begin{split} &\left|(T+t)^{-\frac{n+m-2}{8}\mu_j}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau\right|\\ &\qquad\leq(T+t)^{-\frac{n+m-2}{8}\mu_j}\|E_j^\top\|_{C^0_{1+\gamma}}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j-\gamma-1}d\tau\\ &\qquad=\left(\gamma-\frac{n+m-2}{8}\mu_j\right)^{-1}(T+t)^{-\frac{n+m-2}{8}\mu_j}\|E_j^\top\|_{C^0_{1+\gamma}}(T+t)^{\frac{n+m-2}{8}\mu_j-\gamma}\\ &\qquad = C_j(T+t)^{-\gamma}\|E_j^\top\|_{C^0_{1+\gamma}}. \end{split} \end{equation*} The previous estimate also shows that, since by assumption $\gamma>\frac{n+m-2}{8}\mu_j$, to have $\|u\|_{C^{0}_\gamma}<\infty$, it must hold that $\alpha_j=0$. On the other hand, if $\gamma<\frac{n+m-2}{8}\mu_j$, by requiring $\textrm{proj}_{\Pi_0}u(0)=0$, we see that $\alpha_j=0$. As a result, the bounds for $\|v_j\|_{C^0_\gamma}$ follow from a similar calculation as before. Combining these two cases proves existence, uniqueness and the $\|u\|_{C^0_\gamma}$ bound. It thus remains to prove the inequality $\|u\|_{C^{0,\alpha}_{1,\gamma}}\leq C\|E^\top\|_{C^{0,\alpha}_{1+\gamma}}$. By finite dimensionality, the (spatial) $C^{0,\alpha}(M)$-H\"{o}lder norms of each basis element in $\Lambda_0$ are uniformly bounded. Thus, it remains to show that the desired inequality holds for the H\"{o}lder norms in the time direction, and similarly for $u'(t)$. Suppose that $j$ is such that $\gamma>\frac{n+m-2}{8}\mu_j$. Then, we have seen above that $$ v_j(t)=-\frac{n+m-2}{8}(T+t)^{-\frac{n+m-2}{8}\mu_j}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau,$$ which gives \begin{equation*} \begin{split} v_j'(t)=&\frac{(n+m-2)^2}{64}\mu_j(T+t)^{-\frac{n+m-2}{8}\mu_j-1}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau\\ &+\frac{n+m-2}{8}E_j^\top(t).
\end{split} \end{equation*} Thus \begin{equation*} \begin{split} \|v_j'\|_{C^{0,\alpha}}&\leq C\left\|(T+t)^{-\frac{n+m-2}{8}\mu_j-1}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau\right\|_{C^1}+C\|E_j^\top\|_{C^{0,\alpha}}\\ &\leq C\left\|(T+t)^{-\frac{n+m-2}{8}\mu_j-2}\int_t^\infty(T+\tau)^{\frac{n+m-2}{8}\mu_j}E_j^\top(\tau)d\tau\right\|_{C^0}+C\|E_j^\top\|_{C^{0,\alpha}}\\ &\leq C(T+t)^{-1-\gamma}\|E_j^\top\|_{C^{0,\alpha}_{1+\gamma}}. \end{split} \end{equation*} On the other hand, the case of $\gamma<\frac{n+m-2}{8}\mu_j$ can easily be obtained through a similar argument. Combining these calculations, we obtain a H\"{o}lder estimate for $v_j$. From this the claimed inequality follows. \end{proof} \subsection{Solving the kernel-orthogonal projected flow}\label{sec3.3} In this subsection, we solve the kernel-orthogonal projected flow, which is the remaining part of the weighted Yamabe flow. Define the weighted norms \begin{equation*} \|u\|_{L^2_q}=\sup_{t\in[0,\infty)}\left[(T+t)^q\|u(t)\|_{L^2(M)}\right], \end{equation*} where the $L^2$ norm is the spatial norm of $u(t)$ on $M$, taken with respect to $e^{-\phi_\infty}dV_{g_\infty}$, and \begin{equation*} \|u\|_{C^{2,\alpha}_q}=\sup_{t\geq 0} \left[(T+t)^q\|u(t)\|_{C^{2,\alpha}}\right], \end{equation*} where, as usual, the H\"{o}lder norms are the space-time norms defined in (\ref{eq8}). Also, let \begin{equation*} \begin{split} &\Lambda_\downarrow:=\overline{\textrm{span}\{\varphi\in C^\infty(M):\mathcal{L}_\infty \varphi+\delta \varphi=0,\delta>0\}}^{L^2},\\ &\Lambda_\uparrow :=\textrm{span}\{\varphi\in C^\infty(M):\mathcal{L}_\infty \varphi+\delta \varphi=0,\delta<0\}. \end{split} \end{equation*} By spectral theory, $L^2(M,e^{-\phi_\infty}dV_{g_\infty})=\Lambda_\uparrow\oplus\Lambda_0\oplus \Lambda_\downarrow$, and $\Lambda_\uparrow$ and $\Lambda_0$ are finite dimensional.
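To fix the sign conventions, we note the following elementary remark: if $\mathcal{L}_\infty\varphi+\delta\varphi=0$, then along the linear flow $u'=\mathcal{L}_\infty u$ the coefficient $u_\varphi:=\langle u,\varphi\rangle$ satisfies
\begin{equation*}
u_\varphi'=-\delta u_\varphi,\qquad\text{so that}\qquad u_\varphi(t)=e^{-\delta t}u_\varphi(0).
\end{equation*}
Hence modes in $\Lambda_\downarrow$ (where $\delta>0$) decay exponentially forward in time, while modes in $\Lambda_\uparrow$ (where $\delta<0$) grow; accordingly, in the lemma below the data is prescribed at $t=0$ on the decaying modes and at $t=\infty$ on the growing ones.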
Write the nonnegative integers as an ordered union $\mathbb{N}=K_\uparrow\cup K_0\cup K_\downarrow$, where the ordering of the indices comes from an ordering of the eigenfunctions of $\mathcal{L}_\infty$ and the partitioning of $\mathbb{N}$ corresponds to which of $\Lambda_\downarrow$, $\Lambda_0$, or $\Lambda_\uparrow$ the $k$-th eigenfunction of $\mathcal{L}_\infty$ lies in. \begin{lem}\label{lemma19} For any $T>0$ and $q<\infty$ such that $\|E^\perp\|_{L^2_q}<\infty$, there is a unique $u(t)$ with $u(t)\in\Lambda_0^\perp$, $t\in[0,\infty)$, satisfying $\|u\|_{L^2_q}<\infty$, $\textrm{\emph{proj}}_{\Lambda_\downarrow}(u(0))=0$ and \begin{equation}\label{eq15} u'=\mathcal{L}_\infty u+E^\perp. \end{equation} Furthermore, $\|u\|_{L^2_q}\leq C\|E^\perp\|_{L^2_q}$ and $\|u\|_{C^{2,\alpha}_q}\leq C\|E^\perp\|_{C^{0,\alpha}_q}$. Here the constant $C$ does not depend on $T$. \end{lem} \begin{proof} Let $\varphi_i$ be an eigenfunction of $\mathcal{L}_\infty$ with eigenvalue $-\delta_i$ which is orthogonal to the kernel $\Lambda_0$. The equation (\ref{eq15}) reduces to the system \begin{equation}\label{eq17} u_i'+\delta_iu_i=E_i^\perp, \end{equation} where $u_i=\langle u,\varphi_i\rangle$ and $E_i^\perp=\langle E^\perp,\varphi_i\rangle$. This is equivalent to \begin{equation}\label{eq18} (e^{\delta_it}u_i)'=e^{\delta_i t}E_i^\perp. \end{equation} Thus, we may represent the components of the solution as \begin{equation*} \begin{split} u_i(t)=\begin{dcases} &\beta_ie^{-\delta_it}+e^{-\delta_it}\int_0^te^{\delta_i \tau}E_i^\perp(\tau)d\tau~~\mbox{ for }i\in K_\downarrow,\\ &\beta_ie^{-\delta_it}-e^{-\delta_it}\int_t^\infty e^{\delta_i \tau}E_i^\perp(\tau)d\tau~~\mbox{ for }i\in K_{\uparrow}.
\end{dcases} \end{split} \end{equation*} In particular, we have \begin{equation*} \begin{split} u(t)&=\sum_{j\in K_\downarrow}\left(\beta_je^{-\delta_jt}+e^{-\delta_jt}\int_0^te^{\delta_j \tau}E_j^\perp(\tau)d\tau\right)\varphi_j\\ &\qquad+\sum_{j\in K_\uparrow}\left(\beta_je^{-\delta_jt}-e^{-\delta_jt}\int_t^\infty e^{\delta_j \tau}E_j^\perp(\tau)d\tau\right)\varphi_j. \end{split} \end{equation*} This sum converges in the $L^2$ sense (and elliptic regularity then guarantees that it converges uniformly on compact time intervals). We note that for $i\in K_\uparrow$, if $\|u\|_{L^2_q}<\infty$, then necessarily $\beta_i=0$. Furthermore, by requiring that $\textrm{proj}_{\Lambda_\downarrow}u(0)=0$, we also have $\beta_i=0$ for $i\in K_\downarrow$. The $L^2$-bound for the first term in $u$ can be estimated as: \begin{equation*} \begin{split} \Bigg\|\sum_{j\in K_\downarrow}u_j(t)\varphi_j\Bigg\|_{L^2}^2&\leq \sum_{j\in K_\downarrow}\left(\int_0^t e^{\delta_j( \tau-t)}E_j^\perp(\tau)d\tau\right)^2\\ &\leq \sum_{j\in K_\downarrow}\left(\int_0^t e^{\delta_{\min}( \tau-t)}E_j^\perp(\tau)d\tau\right)^2\\ &\leq \left\|\int_0^t e^{\delta_{\min}(\tau-t)}E^\perp(\tau)d\tau\right\|_{L^2}^2 \end{split} \end{equation*} where $\delta_{\min}=\min_{j\in K_\downarrow}\delta_j$, and the first and last inequalities follow from Parseval's identity. Taking square roots and using the decay assumption on $E^\perp$ gives \begin{equation*} \begin{split} \Bigg\|\sum_{j\in K_\downarrow}u_j(t)\varphi_j\Bigg\|_{L^2}&\leq \left\|\int_0^t e^{\delta_{\min}(\tau-t)}E^\perp(\tau)d\tau\right\|_{L^2}\\ & \leq \int_0^t e^{\delta_{\min}(\tau-t)}\left\|E^\perp(\tau)\right\|_{L^2}d\tau\\ & \leq \left\|E^\perp\right\|_{L^2_q}\int_0^t e^{\delta_{\min}(\tau-t)}(T+\tau)^{-q}d\tau\\ & \leq C\|E^\perp\|_{L^2_q}(T+t)^{-q}. \end{split} \end{equation*} A similar argument holds for the $K_\uparrow$ terms. From this, the asserted bounds for $\|u\|_{L^2_q}$ follow readily. We now consider the $C^{2,\alpha}_q$ bounds for $u$.
Following the argument in \cite[Lemma 19]{Chodosh}, by interior parabolic Schauder estimates \cite[Theorem 4.9]{Lieberman} and the Arzel\`{a}-Ascoli theorem, we have that for $t\geq 1$, \begin{equation*} \begin{split} &\|u(t)\|_{C^{2,\alpha}}\leq C\left(\sup_{s\in(t-1,t+1)}\|u(s)\|_{L^2(M)}+\|E^\perp\|_{C^{0,\alpha}((t-1,t+1)\times M)}\right)\\ &\hspace{6cm} +C\epsilon \|u(t)\|_{C^{0,\alpha}((t-1,t+1)\times M)}. \end{split} \end{equation*} Multiplying by $(T+t)^q$ and taking the supremum over $t\geq1$ yields \begin{equation}\label{4.3.1} \begin{split} & \sup_{t\geq 1}\left[(T+t)^q\|u(t)\|_{C^{2,\alpha}}\right]\leq C\|E^\perp\|_{C^{0,\alpha}_q}+C\epsilon\|u\|_{C^{0,\alpha}_q} \end{split} \end{equation} where we have used $\|u\|_{L^2_q}\leq C\|E^\perp\|_{L^2_q}$, which was proved earlier. To finish the proof, it remains to extend the supremum up to $t=0$. The global Schauder estimates \cite[Theorem 4.28]{Lieberman} show that \begin{equation}\label{4.3.2} \begin{split} \|u(t)\|_{C^{2,\alpha}((0,1)\times M)}&\leq C\Big(\sup_{s\in(0,1)}\|u(s)\|_{L^2( M)}+\epsilon\|u\|_{C^{0,\alpha}((0,1)\times M)}\\ &\qquad\qquad +\|E^\perp\|_{C^{0,\alpha}((0,1)\times M)}+\|u(0)\|_{C^{2,\alpha}(M)}\Big). \end{split} \end{equation} Except for the last term $\|u(0)\|_{C^{2,\alpha}( M)}$ on the right-hand side of the above expression, the remaining terms can be bounded as above. Note that \begin{equation*} u(0)=-\sum_{j\in K_\uparrow}\left(\int_{0}^\infty e^{\delta_j\tau}E_j^\perp(\tau)d\tau\right)\varphi_j. \end{equation*} The space $\Lambda_\uparrow$ is finite-dimensional, so there is a uniform constant $C>0$ such that $\|\varphi_j\|_{C^{2,\alpha}( M)}\leq C\|\varphi_j\|_{L^2( M)}$ for all $j\in K_\uparrow$.
Using this we have that \begin{equation}\label{4.3.3} \begin{split} \|u(0)\|_{C^{2,\alpha}( M)}^2& \leq C\sum_{j\in K_\uparrow}\left(\int_0^\infty e^{\delta_j\tau}E_j^\perp(\tau)d\tau\right)^2\|\varphi_j\|^2_{C^{2,\alpha}(M)}\\ & \leq C\sum_{j\in K_\uparrow}\left(\int_0^\infty e^{\delta_j\tau}E_j^\perp(\tau)d\tau\right)^2\|\varphi_j\|^2_{L^2( M)}=C\|u(0)\|^2_{L^2( M)}. \end{split} \end{equation} Combining (\ref{4.3.1}), (\ref{4.3.2}), and (\ref{4.3.3}), we obtain that \begin{equation*} \sup_{t\geq 0}\left[(T+t)^q\|u(t)\|_{C^{2,\alpha}}\right]\leq C\|E^\perp\|_{C^{0,\alpha}_q}+C\epsilon\|u\|_{C^{0,\alpha}_q}. \end{equation*} By choosing $\epsilon$ small, we get the desired H\"{o}lder bounds. \end{proof} \subsection{Construction of a slowly converging flow}\label{sec3.4} In this subsection we will combine the results from the previous two subsections to finally construct a slowly converging flow. That is, we prove Theorem \ref{main2}. To proceed further, we define the norm \begin{equation*} \|f\|_{\gamma}^*:=\|\textrm{proj}_{\Lambda_0}f\|_{C^{0,\alpha}_{1,\gamma}}+\|\textrm{proj}_{\Lambda_0^\perp}f\|_{C^{2,\alpha}_{1+\gamma}}. \end{equation*} Recall that \begin{equation*} \begin{split} &\|u\|_{C^{0,\alpha}_{1,\gamma}}=\sup_{t\geq 0}\left[(T+t)^\gamma\|u(t)\|_{C^{0,\alpha}}\right]+\sup_{t\geq0}\left[(T+t)^{1+\gamma}\|u'(t)\|_{C^{0,\alpha}}\right],\\ &\|u\|_{C^{2,\alpha}_{1+\gamma}}=\sup_{t\geq 0}\left[(T+t)^{1+\gamma}\|u(t)\|_{C^{2,\alpha}}\right], \end{split} \end{equation*} where the H\"{o}lder norms are the space-time H\"{o}lder norms defined in (\ref{eq8}). For $\gamma$ to be specified below, the Banach space $X$ is defined as \begin{equation}\label{defX} X:=\{w:\|w\|_{\gamma}^*<\infty\}. \end{equation} \begin{prop}\label{prop4.7} Assume that $(g_\infty,\phi_\infty)$ satisfies $AS_p$. We may thus fix a point where $F_p|_{\mathbb{S}^{k-1}}$ achieves a positive maximum and denote it by $\hat{v}$. 
Define \begin{equation*} \varphi(t)=(T+t)^{-\frac{1}{p-2}}\left(\frac{8}{(n+m-2)p(p-2)F_p(\hat{v})}\right)^{\frac{1}{p-2}}\hat{v} \end{equation*} as in Lemma \ref{lem15}. Then, there exists $C>0$, $T>0$, $\frac{1}{p-2}<\gamma<\frac{2}{p-2}$ and $u(t)\in C^\infty(M\times (0,\infty))$ such that $u(t)>0$ for all $t>0$, $(g(t),\phi(t)):=(u(t)^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{n+m-2}\ln u(t))$ is a solution to the weighted Yamabe flow and \begin{equation*} \|w^\top(t)+\Phi\left(\varphi(t)+w^\top(t)\right)+w^\perp(t)\|_\gamma^*=\|u(t)-\varphi(t)-1\|_\gamma^*\leq C. \end{equation*} \end{prop} \begin{proof} We fix $\frac{1}{p-2}<\gamma<\frac{2}{p-2}$ such that $\gamma\notin\left\{\frac{n+m-2}{8}\mu_1,\cdots,\frac{n+m-2}{8}\mu_k\right\}$. By Proposition \ref{prop17}, the weighted Yamabe flow can be reduced to two flows, i.e.\ the kernel-projected flow and the kernel-orthogonal projected flow, so it is enough to solve \begin{equation*} \frac{8}{n+m-2}(w^\top)'+D^2F_p(\varphi)w^\top=E^\top(w),\quad (w^\perp)'-\mathcal{L}_\infty w^\perp=E^\perp (w), \end{equation*} for $w(t)$ with $\|w\|^*_\gamma<C$. To do so, we will use the contraction mapping method. We define a map \begin{equation*} S:\{w\in X:\|w\|_\gamma^*\leq1\}\rightarrow X \end{equation*} where $X$ is the Banach space defined in (\ref{defX}), by defining $u:=\textrm{proj}_{\Lambda_0}S(w)$ and $v:=\textrm{proj}_{\Lambda_0^\perp}S(w)$ to be the solutions of the kernel-projected flow and the kernel-orthogonal projected flow with the initial values $u(0)=\textrm{proj}_{\Pi_0^\perp}w^\top(0)$ and $v(0)=\textrm{proj}_{\Lambda_\uparrow}w^\perp(0)$ respectively, i.e. \begin{equation*} \frac{8}{n+m-2}u'+D^2F_p(\varphi)u=E^\top(w)~~\mbox{ and }~~ v'-\mathcal{L}_\infty v=E^\perp (w). \end{equation*} Thus, we have defined the map $S(w)$ by its orthogonal projections onto $\Lambda_0$ and $\Lambda_0^\perp$.
These solutions exist, in the right function spaces, by combining the bounds for the error terms in Proposition \ref{prop17} with Lemmas \ref{lemma18} and \ref{lemma19}. Furthermore, we have the explicit bound \begin{equation*} \begin{split} \|\textrm{proj}_{\Lambda_0}S(w)\|_{C^{0,\alpha}_{1,\gamma}}&\leq c\|E^{\top}(w)\|_{C^{0,\alpha}_{1+\gamma}}\\ & \leq c \sup_{t\geq 0} (T+t)^{1+\gamma}\left((T+t)^{-1-\frac{1}{p-2}}+\|w^{\top}\|^{p-1}_{C^{0,\alpha}}+\|w^\perp\|_{C^{2,\alpha}}\right)\\ & \qquad \times \left((T+t)^{-\frac{1}{p-2}}+\|w\|_{C^{2,\alpha}}\right)\\ & \qquad +c\sup_{t\geq 0} \left((T+t)^{\gamma-\frac{2}{p-2}}+(T+t)^{\gamma-\frac{2}{p-2}}\|w^\top\|_{C^{2,\alpha}}\right)\\ & \qquad +c\sup_{t\geq 0} \left((T+t)^{\gamma+\frac{1}{p-2}}\|w^\top\|_{C^{2,\alpha}}+(T+t)^{\gamma+1}\|w^\top\|^{p-1}_{C^{2,\alpha}}\right)\\ & \qquad +c\sup_{t\geq 0} (T+t)^{\gamma+1}\|w^\perp\|^{2}_{C^{2,\alpha}}\\ & \leq c\left(T^{\gamma-\frac{2}{p-2}}+\left(T^{-\frac{1}{p-2}}+T^{(p-2)\left(\frac{1}{p-2}-\gamma\right)}\right)\|w\|^*_\gamma\right). \end{split} \end{equation*} By the same argument, using Proposition \ref{prop17} and Lemma \ref{lemma19}, we obtain a similar bound for the kernel-orthogonal projected part: \begin{equation*} \begin{split} \|\textrm{proj}_{\Lambda_0^\perp}S(w)\|_{C^{2,\alpha}_{1+\gamma}}\leq c\left(T^{\gamma-\frac{2}{p-2}}+\left(T^{-\frac{1}{p-2}}+T^{(p-2)\left(\frac{1}{p-2}-\gamma\right)}\right)\|w\|^*_\gamma\right). \end{split} \end{equation*} Therefore, we have \begin{equation*} \begin{split} \|S(w)\|_\gamma^*&=\|\textrm{proj}_{\Lambda_0}S(w)\|_{C^{0,\alpha}_{1,\gamma}}+\|\textrm{proj}_{\Lambda_0^\perp}S(w)\|_{C^{2,\alpha}_{1+\gamma}}\\ &\leq c\left\{T^{\gamma-\frac{2}{p-2}}+\left(T^{-\frac{1}{p-2}}+T^{(p-2)\left(\frac{1}{p-2}-\gamma\right)}\right)\|w\|^*_\gamma\right\}.
\end{split} \end{equation*} Thus, because $\gamma\in\left(\frac{1}{p-2},\frac{2}{p-2}\right)$, by choosing $T$ large enough we can ensure that $S$ maps $\{w:\|w\|_\gamma^*\leq 1\}\subset X$ into itself. Enlarging $T$ if necessary, one can also show that $S$ is a contraction, so by the Banach fixed point theorem $S$ has a unique fixed point, which yields the desired solution. \end{proof} Now we are ready to prove Theorem \ref{main2}. \begin{proof}[Proof of Theorem \ref{main2}] In Proposition \ref{prop4.7}, we constructed $\varphi(t)$ and $u(t)$ so that \begin{equation*} \varphi(t)=(T+t)^{-\frac{1}{p-2}}\left(\frac{8}{(n+m-2)p(p-2)F_p(\hat{v})}\right)^{\frac{1}{p-2}}\hat{v}, \end{equation*} $(u(t)^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{n+m-2}\ln u(t))$ is a solution to the weighted Yamabe flow, and \begin{equation*} u(t)=1+\varphi(t)+\tilde{w}(t):=1+\varphi(t)+w^\top(t)+\Phi\left(\varphi(t)+w^\top(t)\right)+w^\perp(t), \end{equation*} where $\tilde{w}(t)$ satisfies $\|\tilde{w}\|_{C^0}\leq C(1+t)^{-\gamma}$ for some $C>0$ and all $t\geq 0$. We have arranged that $\gamma>1/(p-2)$, which implies that $\varphi(t)$ decays more slowly than $\tilde{w}(t)$. Thus \begin{equation*} \|u(t)-1\|_{C^0}\geq C(1+t)^{-\frac{1}{p-2}} \end{equation*} for all sufficiently large $t$. From this, the assertion follows. \end{proof} \section{Examples satisfying $AS_p$}\label{section5} In this section, we provide a family of metrics which satisfy $AS_3$. This allows us, via Theorem \ref{main2}, to conclude the existence of slowly converging weighted Yamabe flows. We denote the $2n_2$-dimensional complex projective space equipped with the Fubini--Study metric by $(\mathbb{P}^{n_2}, g_{FS})$. (We normalize the Fubini--Study metric so that the map $\mathbb{S}^{2n_2+1}\to \mathbb{P}^{n_2}$ from the unit sphere is a Riemannian submersion.) From \cite[Section 5.1]{Chodosh}, we know that $R_{g_{FS}}=4n_2(n_2+1)$ and $\lambda_1(g_{FS})=4(n_2+1)$.
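We also record, for later use, the elementary identity behind the product construction below: under the normalizations just stated, $R_{g_{FS}}=4n_2(n_2+1)=n_2\,\lambda_1(g_{FS})$, so that, writing $\lambda_1=\lambda_1(g_{FS})$,
\begin{equation*}
\lambda_1(n_1+n_2+m-1)+R_{g_{FS}}=\lambda_1(n_1+2n_2+m-1).
\end{equation*}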
Suppose that $(M^{n_1}, g_M, e^{-\phi}dV_g, m)$ is a smooth metric measure space with constant weighted scalar curvature $R^m_{\phi}=\lambda_1(g_{FS})(n_1+n_2+m-1)$. We consider the product smooth metric measure space $(M^{n_1}\times \mathbb{P}^{n_2}, g=g_M\oplus g_{FS}, e^{-\phi}dV_{g_M\oplus g_{FS}}, m)$. Then the weighted scalar curvature on it is given by \begin{equation} R^m_{g,\phi}=R^m_{g_M,\phi}+R_{g_{FS}}=\lambda_1(n+m-1), \end{equation} where $n=n_1+2n_2$ is the dimension of the product space. So $\Lambda_0$ consists of eigenfunctions of $\Delta_{g,\phi}$ with eigenvalue $R^m_{g,\phi}/(n+m-1)=\lambda_1$. From this, we see that $(M^{n_1}\times \mathbb{P}^{n_2}, g=g_M\oplus g_{FS}, e^{-\phi}dV_{g_M\oplus g_{FS}}, m)$ is degenerate. Moreover, let $v$ be an eigenfunction corresponding to the eigenvalue $\lambda_1$, i.e. \begin{equation}\label{5.1} -\Delta_{g_{FS}} v=\lambda_1v. \end{equation} Since $\phi\in C^\infty(M)$ depends only on the first factor, the function $1\otimes v$ on $M^{n_1}\times \mathbb{P}^{n_2}$ will be an eigenfunction of $\Delta_{g,\phi}$ with eigenvalue $\lambda_1$, i.e. \begin{displaymath} (n+m-1)\Delta_{g,\phi}(1\otimes v)+R_{g,\phi}^m (1\otimes v)=(n+m-1)\Delta_{g_{FS}}v+(n+m-1)\lambda_1v=0. \end{displaymath} On the other hand, it follows from \cite[Section 5.1]{Chodosh} that \begin{equation} \int_{\mathbb{P}^{n_2}}v^3 dV_{g_{FS}}\neq 0. \end{equation} Therefore, we have \begin{equation*} \begin{split} F_3(1\otimes v)&=-2\left(\frac{n+m+2}{n+m-2}\right)\left(\frac{4}{n+m-2}\right)R_{g,\phi}^m\int_{M^{n_1}\times \mathbb{P}^{n_2}} v^3 e^{-\phi}dV_{g_M\oplus g_{FS}}\\ &=\frac{-8(n+m+2)(n+m-1)}{(n+m-2)^2}\lambda_1\left(\int_{M^{n_1}}e^{-\phi}dV_{g_M}\right)\left(\int_{\mathbb{P}^{n_2}}v^3dV_{g_{FS}}\right)\neq 0 \end{split} \end{equation*} since $\lambda_1>0$, $\int_{M^{n_1}}e^{-\phi}dV_{g_M}>0$ and $\int_{\mathbb{P}^{n_2}}v^3dV_{g_{FS}}\neq 0$.
Therefore, $(M^{n_1}\times \mathbb{P}^{n_2}, g=g_M\oplus g_{FS}, e^{-\phi}dV_{g_M\oplus g_{FS}}, m)$ satisfies the $AS_3$ condition. \section{Appendix: Computing $F_3$} In this Appendix, we prove (\ref{1.10}) by computing the term $F_3$ at a metric-measure structure $(g_\infty,\phi_\infty)$ with constant weighted scalar curvature. First we will show that $F_1(v)=F_2(v)=0$. To check this, notice that $DF(w)[v]=DE(\Psi(w))[D\Psi(w)[v]]$. Thus $DF(0)=0$ since $DE(1)=0$, as $1$ is a critical point of the functional $E$ (by assumption, $(g_\infty,\phi_\infty)\in \mathcal{CWSC}$) and $\Psi(0)=1$. Therefore, $F_1=0$. Similarly, $D^2F(w)[v,u]=D^2E(\Psi(w))[D\Psi(w)[u], D\Psi(w)[v]] +\langle DE(\Psi(w)),D^2\Psi(w)[v,u] \rangle$. Setting $w=0$ and using $\Psi(0)=1$ and $D\Psi(0)=\mathrm{id}$, we obtain \begin{equation*} \begin{split} D^2F(0)[v,u]&= D^2E(1)[u,v] +\langle DE(1),D^2\Psi(0)[v,u] \rangle\\ &=-\frac{8}{n+m-2}\langle \mathcal{L}_\infty u,v\rangle +\langle DE(1),D^2\Psi(0)[v,u] \rangle. \end{split} \end{equation*} As before, the second term vanishes. The first term vanishes because $u$ is in the kernel of $\mathcal{L}_\infty$ by assumption. Therefore, $F_2=0$. To compute $D^3F(0)$, we may in fact compute $D^3\tilde{F}(0)$, where $\tilde{F}:\Lambda_0\to\mathbb{R}$ is defined by $\tilde{F}(v)=E(1+v)$. We first compute $D^3F$: \begin{equation}\label{4.2} \begin{split} D^3F(w)[v,u,z]&=D^3E(\Psi(w))[D\Psi(w)[v],D\Psi(w)[u],D\Psi(w)[z]]\\ &\quad+D^2E(\Psi(w))[D^2\Psi(w)[u,z],D\Psi(w)[v]]\\ &\quad+D^2E(\Psi(w))[D\Psi(w)[u],D^2\Psi(w)[v,z]]\\ &\quad+D^2E(\Psi(w))[D\Psi(w)[z],D^2\Psi(w)[v,u]]\\ &\quad+\langle DE(\Psi(w)), D^3\Psi(w)[v,u,z]\rangle.
\end{split} \end{equation} Setting $w=0$, and using similar considerations as before (in particular noting that $D^2E(1)[\cdot]$ is self-adjoint), we obtain $D^3F(0)[v,u,z]=D^3E(1)[v,u,z]$. Performing the same computation for $D^3\tilde{F}(0)$ yields the same result. Next, we compute $D^3\tilde{F}(0)$. In the rest of this section all integrals are taken with respect to $e^{-\phi_\infty}dV_{g_\infty}$. First we recall that \begin{equation}\label{recall1} \frac{1}{2}DE(w)[v]=\int_M\left(a_{n,m}\Delta_{\phi_\infty}w+R_{\phi_\infty}^mw-r_\phi^mw^\frac{n+m+2}{n+m-2}\right)v \end{equation} where \begin{equation*} a_{n,m}=-\frac{4(n+m-1)}{n+m-2},\quad (g,\phi)=(w^\frac{4}{n+m-2}g_\infty,\phi_\infty-\frac{2m}{n+m-2}\ln w). \end{equation*} Because $r_\phi^m=E(w)$, equation \eqref{recall1} can be rewritten as \begin{equation*} \frac{1}{2}DE(w)[v]=\int_M\left(a_{n,m}\Delta_{\phi_\infty}w+R_{\phi_\infty}^mw-E(w)w^\frac{n+m+2}{n+m-2}\right)v. \end{equation*} So the second differential of the functional $E$ can be computed as follows: \begin{equation*} \begin{split} \frac{1}{2}D^2&E(u)[v,w]\\ =&\frac{1}{2}\left.\frac{d}{dt}\right|_{t=0}DE(u+tw)[v]\\ =&\left.\frac{d}{dt}\right|_{t=0}\int_M\left(a_{n,m}\Delta_{\phi_\infty}(u+tw)+R_{\phi_\infty}^m(u+tw)-E(u+tw)(u+tw)^\frac{n+m+2}{n+m-2}\right)v\\ =&\int_M\left(a_{n,m}\Delta_{\phi_\infty}w+R_{\phi_\infty}^mw-DE(u)[w]u^\frac{n+m+2}{n+m-2}-\frac{n+m+2}{n+m-2}E(u)u^\frac{4}{n+m-2}w\right)v. \end{split} \end{equation*} Since $DE(1)=0$ and $E(1)=R_{\phi_\infty}^m$, we have \begin{equation*} \frac{1}{2}D^2E(1)[v,w]=\int_M\left(-\frac{4(n+m-1)}{n+m-2}\Delta_{\phi_\infty}w-\frac{4}{n+m-2}R_{\phi_\infty}^mw\right)v.
\end{equation*} Similarly, the third differential of the functional $E$ can be computed as follows: \begin{equation*} \begin{split} \frac{1}{2}D^3&E(u)[v,w,z]\\ =&\frac{1}{2}\left.\frac{d}{dt}\right|_{t=0}D^2E(u+tz)[v,w]\\ =&\left.\frac{d}{dt}\right|_{t=0}\int_M\left(a_{n,m}\Delta_{\phi_\infty}w+R_{\phi_\infty}^mw-DE(u+tz)[w](u+tz)^\frac{n+m+2}{n+m-2}\right.\\ &\qquad\qquad\qquad \left.-\frac{n+m+2}{n+m-2}E(u+tz)(u+tz)^\frac{4}{n+m-2}w\right)v\\ =&\int_M\left(-D^2E(u)[w,z]u^\frac{n+m+2}{n+m-2}-\frac{n+m+2}{n+m-2}DE(u)[w]u^\frac{4}{n+m-2}z\right.\\ &\qquad \left.-\frac{n+m+2}{n+m-2}DE(u)[z]u^\frac{4}{n+m-2}w-\frac{4(n+m+2)}{(n+m-2)^2}E(u)u^{-\frac{n+m-6}{n+m-2}}zw\right)v. \end{split} \end{equation*} Therefore, for $v,w,z\in\Lambda_0$, we have \begin{equation*} D^3E(1)[v,w,z]=-\frac{8(n+m+2)}{(n+m-2)^2}R_{\phi_\infty}^m\int_Mvwz e^{-\phi_\infty}dV_{g_\infty}. \end{equation*} \end{document}
\begin{document} \author{Michel Deza} \address{Michel Deza, \'Ecole Normale Sup\'erieure, Paris, Deceased} \author{Mathieu Dutour Sikiri\'c} \address{Mathieu Dutour Sikiri\'c, Rudjer Boskovi\'c Institute, Bijenicka 54, 10000 Zagreb, Croatia, Fax: +385-1-468-0245} \email{[email protected]} \title{Generalized cut and metric polytopes of graphs and simplicial complexes} \date{} \begin{abstract} Given a graph $G$ one can define the cut polytope $\CUTP(G)$ and the metric polytope $\METP(G)$ of this graph; these polytopes encode the metric structure of the graph in a convenient way. According to Seymour's theorem, $\CUTP(G) = \METP(G)$ if and only if $K_5$ is not a minor of $G$. We consider possible extensions of this framework: \begin{enumerate} \item We compute $\CUTP(G)$ and $\METP(G)$ for many graphs. \item We define the oriented cut polytope $\WOMCUTP(G)$ and the oriented multicut polytope $\OMCUTP(G)$, as well as their oriented metric versions $\QMETP(G)$ and $\WQMETP(G)$. \item We define an $n$-dimensional generalization of metrics on simplicial complexes. \end{enumerate} \end{abstract} \keywords{max-cut problem, cut polytope, metrics, graphs, cycles, quasi-metrics, hemimetrics} \maketitle \section{Introduction}\label{Sec_Introduction} The cut polytope \cite{DL} is a natural polytope arising in the study of the maximum cut problem \cite{CompendiumOptiPb}. The cut polytope of the complete graph $K_n$ has been studied extensively (see \cite{DL}), but the cut polytope of a general graph has received much less attention \cite{CUTsmallGraphs,BarahonaCutContract,LeggettGargIneq}. Moreover, generalizations of the cut polytope of a graph seem not to have been considered.
Given a graph $G=(V,E)$, for a vertex subset $S\subseteq V=\{1, \dots, n\}$, the {\em cut semimetric} $\delta_S(G)$ is a vector (actually, a symmetric $\{0,1\}$-matrix) defined as \begin{equation}\label{DefCutMetric} \delta_S(x,y)=\left\{\begin{array}{rl} 1 &\mbox{if~~~} (xy)\in E \mbox{~~~~and ~~~} \vert S \cap \{x,y\}\vert = 1,\\ 0 &\mbox{otherwise.} \end{array}\right. \end{equation} The {\em cut polytope} $\CUTP(G)$, respectively the {\em cut cone} $\CUT(G)$, is defined as the convex hull of all such semimetrics, respectively the positive span of all non-zero ones among them. The dimension of $\CUTP(G)$ and $\CUT(G)$ is equal to the number of edges of $G$. The {\em metric cone} $\MET(K_n)$ is the set of all {\em semimetrics} on $n$ points, i.e., the functions $d : \{1,\dots , n\}^2 \rightarrow \mathbb{R}_{\ge 0}$ (actually, symmetric matrices over $\mathbb{R}_{\ge 0}$ having only zeroes on the diagonal), which satisfy all $3 {n\choose 3}$ {\em triangle inequalities} $d(i,j) + d(i,k) - d(j,k) \ge 0$. The bounding of $\MET(K_n)$ by the ${n\choose 3}$ {\em perimeter inequalities} $d(i,j) + d(i,k) + d(j,k) \le 2$ produces the {\em metric polytope} $\METP(K_n)$. For a graph $G=(V,E)$ of {\em order} $|V|=n$, let $\MET(G)$ and $\METP(G)$ denote the projections of $\MET(K_n)$ and $\METP(K_n)$, respectively, on the subspace $\mathbb{R}^{|E|}$ indexed by the edge set of $G$. Clearly, $\CUT(G)$ and $\CUTP(G)$ are projections of, respectively, $\CUT(K_n)$ and $\CUTP(K_n)$ on $\mathbb{R}^E$. It holds that \begin{equation*} \CUT(G)\subseteq \MET(G) \mbox{~and~} \CUTP(G)\subseteq \METP(G). \end{equation*} In Section \ref{Section_CutPolytope} we consider the structure of those polytopes and give the description of the facets for many graphs (see Tables \ref{TableSomeCUTpolytopes1} and \ref{TableSomeCUTpolytopes2}). The data file of the groups and orbits of facets of the considered polytopes is available from \cite{WebPageCutPolytopes}.
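As an illustration of these definitions, a few lines of Python suffice to generate the cut semimetrics of a small graph and to confirm the vertex count of $\CUTP(G)$ for a connected graph (the helper names are ours; this is a brute-force sketch, not part of the paper's software):

```python
from itertools import combinations

def cut_semimetric(edges, S):
    """Cut semimetric delta_S of a graph, as a 0/1 tuple indexed by the edge list."""
    S = set(S)
    return tuple(1 if (u in S) != (v in S) else 0 for (u, v) in edges)

def cutp_vertices(n, edges):
    """Distinct cut semimetrics, i.e., the vertices of CUTP(G) for connected G."""
    verts = set()
    for k in range(n + 1):
        for S in combinations(range(n), k):
            verts.add(cut_semimetric(edges, S))
    return verts

# Example: the 4-cycle C_4 on vertices 0,1,2,3
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(len(cutp_vertices(4, edges)))  # 8
```

For a connected graph on $n$ vertices the $2^n$ subsets give only $2^{n-1}$ distinct vectors, since $\delta_S=\delta_{V\setminus S}$.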
The construction of cuts and metrics can be generalized to metrics which are not necessarily symmetric; these are considered in Section \ref{Sec_Quasi} (see also \cite{QuasiMetric1,QuasiMetric2}). The triangle inequality becomes $d(i,j) \leq d(i,k) + d(k,j)$ and the perimeter inequality becomes $d(i,j) + d(j,k) + d(k,i) \leq 2$ for $1\leq i,j,k\leq n$. We also need the inequalities $0\leq d(i,j) \leq 1$. The quasi-metric polytope $\QMETP(K_n)$ is defined by the above inequalities, and the quasi-metric cone $\QMET(K_n)$ is defined by the inequalities passing through zero. The quasi-metric cone $\QMET(G)$ and polytope $\QMETP(G)$ are defined as the projections of the above cone and polytope. In Theorem \ref{QMET_theorem} we give an inequality description of those projections. Given an ordered partition $(S_1, \dots, S_r)$ of $\{1, \dots, n\}$ we define the oriented multicut as: \begin{equation*} \delta'(S_1, \dots, S_r)_{x,y} = \left\{\begin{array}{rl} 1 & \mbox{~if~} x\in S_i, y\in S_j \mbox{~and~} i<j,\\ 0 & \mbox{~otherwise}. \end{array}\right. \end{equation*} The convex cone generated by the oriented multicuts is the oriented multicut cone $\OMCUT(K_n)$. The corresponding polytope can also be defined, but it has vertices besides the oriented multicuts. A lower-dimensional cone $\WQMET(G)$ and polytope $\WQMETP(G)$ can be defined by adding the cycle equality \begin{equation*} d(i,j) + d(j,k) + d(k,i) = d(j,i) + d(k,j) + d(i,k) \end{equation*} to the cone $\QMET(G)$ and polytope $\QMETP(G)$. A multicut satisfies the cycle equality if and only if $r=2$. We denote the corresponding cone and polytope by $\WOMCUT(G)$ and $\WOMCUTP(G)$. In Section \ref{Sec_Quasi} we consider those cones and polytopes and their facet description. The notion of metrics can be generalized to more than $2$ points and we obtain the {\em hemimetrics}. Those were considered in \cite{DezaRosenberg1,DezaRosenberg2,DezaDutourHemimetric1,DD_mining}. Only the notion of cones makes sense in that context.
The definition of the above papers extends the triangle inequality in a direct way: it becomes a simplex inequality, with the area of one side being bounded by the sum of the areas of the other sides. In \cite{BookGenFiniteMetrCuts} we argued that this definition was actually inadequate, since it prevented the right definition of a hemimetric on a simplicial complex. In Section \ref{Sec_Simplicial} we give full details on what we argue is the right definition of the hemimetric cone. There is much more to be done in the fields of metric cones on graphs and simplicial complexes. Besides further studies of the existing cones and the ones defined in this paper, two other cases could be interesting. One is to extend the notion of the hypermetric cone $\HYP(K_n)$ to graphs; several approaches were considered in \cite{GeneralizationHypermetric}, for example projecting only on the relevant coordinates, but no general results were proved. Another generalization that could be considered is that of the {\em diversities} considered in \cite{BT_hyperconvexity,BT_hyperconvexityOTHER}. The {\em diversity cone} $DIV_n$ is the set of all {\em diversities on $n$ points}, i.e., the functions $f: \{A: A \subseteq \{1,\dots,n\}\} \rightarrow \ensuremath{\mathbb{R}}_{\ge 0}$ satisfying $f(A)= 0$ if $|A|\le 1$ and \begin{equation*} f(A\cup B)+f(B\cup C)\ge f(A\cup C) \,\mbox{~if~} \,B\neq \emptyset. \end{equation*} The {\em induced diversity metric} $d(i,j)$ is $f(\{i,j\})$. The {\em cut diversity cone} $CDIV_n$ is the positive span of all {\em cut diversities} $\delta_S$, where $S \subseteq \{1,\dots,n\}$, which are defined, for any $A \subseteq \{1,\dots,n\}$, by \begin{equation*} \delta_{S}(A) = \left\{\begin{array}{rl} 1 & \mbox{if~} A\cap S \neq \emptyset \mbox{~and~} A\setminus S \neq \emptyset,\\ 0 & \mbox{otherwise.} \end{array}\right.
\end{equation*} $CDIV_n$ is the set of all diversities from $DIV_n$ which are isometrically embeddable into an {\em $l_1$-diversity}, i.e., one defined on $\mathbb{R}^m$ with $m \le {n\choose \left\lfloor \frac{n}{2} \right\rfloor}$ by \begin{equation*} f_{m1}(A)= \sum_{i=1}^m \max_{a,b\in A}\{|a_i-b_i|\}. \end{equation*} These two cones are extensions of $\MET(K_n)$ and $\CUT(K_n)$ to complete hypergraphs, and it would be nice to have a similar definition for an arbitrary hypergraph. \section{Structure of cut polytopes of graphs}\label{Section_CutPolytope} The cut metric $\delta_S$ defined in Equation \eqref{DefCutMetric} satisfies the relation $\delta_{\{1, \dots, n\} - S} = \delta_S$. The cut polytope $\CUTP(K_n)$ is defined as the convex hull of the metrics $\delta_S$ and thus has $2^{n-1}$ vertices. For a given subset $S$ of $\{1, \dots, n\}$ we can define the {\em switching} operation $F_S$ by \begin{equation*} F_S(d)(i,j) = \left\{\begin{array}{cl} 1 - d(i,j) & \mbox{if~} \left\vert S\cap \{i,j\}\right\vert = 1,\\ d(i,j) & \mbox{otherwise}. \end{array}\right. \end{equation*} The operation on cuts is $F_S(\delta_T) = \delta_{S\Delta T}$, with $\Delta$ denoting the symmetric difference (see \cite{DL} for more details). For a graph $G$ we define $\CUTP(G)$ to be the projection of $\CUTP(K_n)$ on the coordinates corresponding to the edges of the graph $G$. If $G$ is connected then $\CUTP(G)$ has exactly $2^{n-1}$ vertices. Then $\delta_S$ can also be seen as the adjacency matrix of a {\em cut} (into $S$ and $\overline{S}$) {\em subgraph} of $G$. The cut cone $\CUT(G)$ is defined by taking the convex cone generated by the metrics $\delta_S$, but it will not be used in this section.
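The identity $F_S(\delta_T)=\delta_{S\Delta T}$ is easy to verify by brute force on a small complete graph; the following sketch (our own helper names, plain Python) checks it for all pairs of subsets on $K_4$:

```python
from itertools import chain, combinations

def delta(edges, S):
    """Cut semimetric delta_S, as a 0/1 tuple indexed by the edge list."""
    S = set(S)
    return tuple(1 if (u in S) != (v in S) else 0 for (u, v) in edges)

def switch(edges, S, d):
    """Switching F_S: replace d(i,j) by 1 - d(i,j) on the edges cut by S."""
    S = set(S)
    return tuple(1 - x if (u in S) != (v in S) else x
                 for (u, v), x in zip(edges, d))

def subsets(n):
    return chain.from_iterable(combinations(range(n), k) for k in range(n + 1))

n = 4
edges = list(combinations(range(n), 2))  # K_4
for S in subsets(n):
    for T in subsets(n):
        # F_S(delta_T) = delta_{S symmetric-difference T}
        assert switch(edges, S, delta(edges, T)) == delta(edges, set(S) ^ set(T))
print("switching identity verified on K_4")
```

The same check works for any graph, since switching acts coordinatewise on the edges cut by $S$.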
In fact, $\CUT(K_n)$ is the set of all $n$-vertex semimetrics which embed isometrically into some $l_1$-space, and the rational-valued elements of $\CUT(K_n)$ correspond exactly to the $n$-vertex semimetrics which embed isometrically, {\em up to a scale} $\lambda \in \mathbb{N}$, into the path metric of some $m$-cube $K_2^m$. This shows the importance of this cone in analysis and combinatorics. The enumeration of orbits of facets of $\CUT(K_n)$ and $\CUTP(K_n)$ was done in \cite{OrbitFacetCutPolytope5,OrbitFacetCutPolytope6,OrbitFacetCutPolytope7} for $n=5$, $6$, $7$, respectively, and in \cite{CR}, completed by \cite{CUTsmallGraphs}, for $n=8$. \subsection{Automorphism group of cut polytopes} The symmetry group $\Aut(G)$ of a graph $G=(V,E)$ induces symmetries of $\CUTP(G)$. For any $U\subset \{1,\dots, n\}$, the map $\delta_S\mapsto \delta_{U\Delta S}$ also defines a symmetry of $\CUTP(G)$. Together, these form the {\em restricted symmetry group} $ARes(\CUTP(G))$ of order $2^{|V|-1}\vert \Aut(G)\vert$. The full symmetry group $\Aut(\CUTP(G))$ may be larger. In Tables \ref{TableSomeCUTpolytopes1}, \ref{TableSomeCUTpolytopes2}, such cases are marked by ${}^*$. Denote $2^{1-|V|}|\Aut(\CUTP(G))|$ by $A(G)$. For example, $|\Aut(\CUTP(K_n))|$ is $2^{n-1}n!$ if $n\neq 4$ and $6\times 2^3 4!$ if $n=4$ (\cite{Relatives}). \begin{remark} (i) If $G=(V,E)$ is $Prism_m$ ($m\neq 4$), $APrism_m$ ($m> 3$), the M\"{o}bius ladder $M_{2m}$ or $Pyr^2(C_m)$ ($m> 3$), then $|\Aut(G)|=4m$. (ii) If $G$ is a complete multipartite graph with $t_1$ parts of size $a_1$, $\dots ,$ $t_r$ parts of size $a_r$, with $a_1< a_2< \dots <a_r$ and all $t_i\ge 1$, then $|\Aut(G)|=\prod_{i=1}^rt_i!(a_i!)^{t_i}.$ (iii) Among the cases considered here, all occurrences of $A(G)> |\Aut(G)|$ are: $A(G)=m!2^{m-1}|\Aut(G)|$ for $G=K_{2,m>2},K_{1,1,m>1}$ and $A(G)=6|\Aut(G)|$, i.e., $2m!=48, 6m!$ for $G=K_{2,2}$ and $K_{1,1,1,1}$, respectively.
(iv) If $G=P_m$ ($m\ge 3$ edges), then $|\Aut(G)|=2$, while $A(G)=m!=(|V|-1)!$. If $G=C_m$ ($m>3$), then $|\Aut(G)|=2m$, while $A(G)=2m!$ for $m=4$ and $A(G)=m!=|V|!$ for $m\ge 5$. \end{remark} \subsection{Edge faces, $s$-cycle faces and metric polytope} \begin{definition} Let $G=(V,E)$ be a graph. (i) Given an edge $e\in E$, the {\em edge inequality} (or {\em $2$-cycle inequality}) is \begin{equation*} x(e) \geq 0. \end{equation*} (ii) Given an $s$-cycle $c=(v_1, \dots, v_s), s\ge 3,$ of $G$, the {\em $s$-cycle inequality} is: \begin{equation*} x(c, (v_1, v_s)) = \sum_{i=1}^{s-1} x(v_i,v_{i+1}) -x(v_1,v_s)\ge 0. \end{equation*} \end{definition} The edge inequalities and $s$-cycle inequalities are valid on $\CUTP(G)$, since they are, clearly, valid on each cut: a cut intersects a cycle in a set of even cardinality. So they define faces, but not necessarily facets. In fact, the following holds. \begin{theorem} (i) The edge inequality $x(e)\ge 0$ is facet defining in $\CUTP(G)$ (also, in $\CUT(G)$) if and only if $e$ is not contained in a $3$-cycle of $G$. (ii) An $s$-cycle inequality is facet defining in $\CUTP(G)$ (also, in $\CUT(G)$) if and only if the corresponding $s$-cycle is chordless. (iii) $\METP(G)$ is defined by all edge and $s$-cycle inequalities, while $\MET(G)$ is defined by all $s$-cycle inequalities. \end{theorem} In fact, (i) and (ii) above were proved in \cite{BarahonaMahjoubOnCutPolytope}, and (iii) was proved in \cite{BarahonaCutsMatchingsPlanar}; see also Section 27.3 in \cite{DL}. The following theorem, proved in \cite{SeymourMatroidMulticommodity} for cones and in \cite{BarahonaCutContract} for polytopes, clarifies when the metric and cut polytopes coincide: \begin{theorem}\label{K5wonder} $\CUT(G) = \MET(G)$ or, equivalently, $\CUTP(G)=\METP(G)$ if and only if $G$ does not have any $K_5$-minor.
\end{theorem} As a corollary of Theorem \ref{K5wonder}, we have that the facets of $\CUTP(G)$ (also, of $\CUT(G)$) are determined by the edge inequalities and $s$-cycle inequalities if and only if $G$ does not have any $K_5$-minor. The $3$-cycle inequality is the usual triangle inequality; in fact, it is the only one among the edge and $s$-cycle inequalities to define a facet of $\CUTP(K_n)$. The {\em girth} and {\em circumference} of a graph having cycles are the lengths of its shortest and longest cycles, respectively. In a graph $G$, a {\em chordless cycle} is any cycle which is an induced subgraph; so any triangle, any shortest cycle, and any cycle bounding a face in some embedding of $G$ is chordless. Let $c'_s$ and $c_s$ denote the numbers of all $s$-cycles and of all chordless $s$-cycles in $G$, respectively. There are $2|E|$ edge faces, which decompose into orbits, one for each orbit of edges of $G$ under $\Aut(G)$. There are $2^{s-1}c'_s$ $s$-cycle faces, which decompose into orbits, one for each orbit of $s$-cycles of $G$ under $\Aut(G)$. The incidence of edge faces is $2^{|V|-2}$ and the size of each orbit is twice the size of the corresponding orbit of edges. The incidence of $s$-cycle faces is $2^{|V|-s}s$ and the size of each orbit is $2^{s-1}$ times the size of the corresponding orbit of $s$-cycles in $G$. By {\em Wagner's theorem} \cite{Wagner1937}, a finite graph is planar if and only if it has neither $K_5$ nor $K_{3,3}$ as a minor. For embeddability on the projective plane $\mathbb{P}^2$, there are exactly $103$ forbidden topological minors and exactly $35$ forbidden minors (see \cite{ArchDeacon35,GloverHuneke103}). For embeddability on the torus $\mathbb{T}^2$, $16629$ forbidden minors are known (see \cite{GagarinMyrvoldChambers}) but the list is not necessarily complete. The closely related {\em Kuratowski theorem} \cite{Kuratowski1930} states that a finite graph is planar if and only if it does not contain a subgraph that is a subdivision of $K_5$ or of $K_{3,3}$.
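Since a chordless cycle is exactly a vertex subset inducing a cycle, the numbers $c_s$ are easy to compute by enumeration for small graphs. A Python sketch (the helper names are ours), checked against the counts for the Octahedron $K_{2,2,2}$: $c_3=8$ triangles and $c_4=3$ chordless quadrangles:

```python
from itertools import combinations

def chordless_cycle_counts(n, edges):
    """c_s = number of chordless s-cycles, i.e., vertex subsets inducing a cycle."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = {}
    for s in range(3, n + 1):
        c = 0
        for sub in combinations(range(n), s):
            subset = set(sub)
            if any(len(adj[v] & subset) != 2 for v in sub):
                continue  # induced degrees must all equal 2
            # all induced degrees 2: the induced graph is a disjoint union of
            # cycles; it is a single s-cycle iff it is connected
            seen, stack = {sub[0]}, [sub[0]]
            while stack:
                v = stack.pop()
                for w in adj[v] & subset:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if len(seen) == s:
                c += 1
        if c:
            counts[s] = c
    return counts

# Octahedron K_{2,2,2}: parts {0,1}, {2,3}, {4,5}
parts = [(0, 1), (2, 3), (4, 5)]
edges = [(u, v) for i in range(3) for j in range(i + 1, 3)
         for u in parts[i] for v in parts[j]]
print(chordless_cycle_counts(6, edges))  # {3: 8, 4: 3}
```

For the Cube the same function returns $c_4=6$ and $c_6=4$, matching the face-bounding and vertex-containing cycles discussed below.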
\begin{table} \begin{center} \caption{The number of facets of $\CUTP(G)$ of some $K_5$-minor-free graphs $G$; ${}^{*}$ shows $A(G)$=$2^{1-|V|} |\Aut(\CUTP(G))|>|\Aut(G)|$} \label{TableSomeCUTpolytopes1} \begin{tabular}{||c|c||c|c|c||} \hline\hline $G=(V,E)$ & $\vert V\vert,\vert E\vert$ & $A(G)$ & Number of facets (orbits)&Orbit's $s$\\\hline\hline M\"{o}bius ladder $M_{8}$& $8,12$ & $16$ & $184(4)$&$2,2,4,5$\\ $M_{6}=K_{3,3}$& $6,9$ & $2(3!)^2$ & $90(2)$&$2,4$\\\hline $K_{1,1,1,m}, m> 1$ & $m$+$3,3m$+$3$ & $3!m!$ &$4+12m(2)$&$3,3$\\ $K_{1,2,m},m>1$ & $m$+$3,3m$+$2$ & $|\Aut(K_{1,2,m})|$ & $8m + 8{m\choose 2}(2)$&$3,4$\\ $K_{3,m}, m\ge 3$ & $m$+$3,3m$ & $|\Aut(K_{3,m})|$ & $6m + 24{m\choose 2}(2)$&$2,4$\\ $K_{2,m}, m> 2$ & $m$+ $2,2m$ & $2^{m-1}m!|\Aut(K_{2,m})|$ $\,^{*}$ & $4m^2(1)$&$2$ with $4$\\ $K_{2,2}$ & $4,4$ & $6|\Aut(K_{2,2})|$ $\,^{*}$ & $16(1)$&$2$ with $4$\\ $K_{1,1,m}, m>1$ & $m$+$2,2m$+$1$ & $2^{m-1}m!|\Aut(K_{1,1,m})|$ $\,^{*}$ &$4m(1)$ & $3$\\ $K_{m+1}$-$K_m$=$K_{1,m}, m$$>$$1$ & $m$+$1,m$ & $m!$ &$2m(1)$ &$2$\\ \hline $APrism_6$ & $12,24$ & $24$ & $2,032(5)$&$3,6,7,7,8$\\ $APrism_5$ & $10,20$ & $20$ & $552(4)$&$3,5,6,7$\\ $APrism_4$ & $8,16$ & $16$ & $176(3)$&$3,4,5$\\ $Prism_7$ & $14,21$ & $28$ & $7,394(6)$&$2,2,4,7,9,9$\\ $Prism_6$ & $12,18$ & $24$ & $2,452(6)$&$2,2,4,6,8,8$\\ $Prism_5$ & $10,15$ & $20$ & $742(5)$&$2,2,4,5,7$\\ $Prism_3$ & $6,9$ & $12$ & $38(3)$&$2,3,4$\\\hline Tr.
Tetrahedron & $12,18$ & $24$ & $540(4)$&$2,3,6,8$\\ Cuboctahedron & $12,24$ & $48$ & $1,360(5)$&$3,4,6,6,8$\\ \hline Dodecahedron & $20,30$ & $120$ & $23,804(5)$&$2,5,9,10,10$\\ Icosahedron & $12,30$ & $120$ & $1,552(4)$&$3,5,6,6$\\ Cube $K_2^3$ & $8,12$ & $48$ & $200(3)$&$2,4,6$\\ Octahedron $K_{2,2,2}$ & $6,12$ & $48$ & $56(2)$&$3,4$\\ Tetrahedron $K_4$ & $4,6$ & $6|\Aut(K_4)|$\,\,\,${}^{*}$ & $12(1)$&$3$\\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{The number of facets of $\CUTP(G)$ for some graphs $G$ with $K_5$-minor} \label{TableSomeCUTpolytopes2} \begin{tabular}{||c|c||c|c|c||} \hline\hline $G=(V,E)$ & $\vert V\vert,\vert E\vert$ & $A(G)$ & Number of facets (orbits)&Orbit's $s$\\\hline\hline Heawood graph &$14,21$ & $336$ & $5,361,194(9)$&$2,6,8$\\ Petersen graph & $10,15$ & $120$ & $3,614(4)$&$2,5,6$\\\hline M\"{o}bius ladder $M_{10}$& $10,15$ & $20$ & $1,414(5)$&$2,2,4,6$\\\hline M\"{o}bius ladder $M_{12}$ & $12,18$ & $24$ & $26,452(6)$&$2,2,4,7,9$\\ M\"{o}bius ladder $M_{14}$ & $14,21$ & $28$ & $369,506(9)$&$2,2,4,8,10$\\ $K_{5,5}$& $10,25$ & $2(5!)^2$ & $16,482,678,610(1,282)$&$2,4$\\ $K_{4,7}$& $11,28$ & $4!7!$ & $271,596,584(15)$&$2,4$\\ $K_{4,6}$& $10,24$ &$4!6!$ & $23,179,008(12)$&$2,4$\\ $K_{4,5}$& $9,20$ & $4!
5!$ & $983,560(8)$&$2,4$\\ $K_{4,4}$& $8,16$ & $2(4!)^2$ & $27,968(4)$&$2,4$\\\hline $K_{3,3,3}$&$9,27$&$(3!)^4$ &$624,406,788(2,015)$&$3,4$\\ $K_{1,4,4}$&$9,24$&$2(4!)^2$ &$36,391,264(175)$&$3,4$\\ $K_{1,3,5}$&$9,23$&$3!5!$ &$71,340(7)$&$3,4$\\ $K_{1,3,4}$&$8,19$&$3!4!$ &$12,480(6)$&$3,4$\\ $K_{1,3,3}$&$7,15$&$2(3!)^2$ &$684(3)$&$3,4$\\\hline $K_{1,1,3,3}$&$8,21$&$4(3!)^2$ &$432,552(50)$&$3,3,4$\\ $K_{1,2,2,2}$&$7,14$&$3!(2!)^3$ &$5,864(9)$&$3,3,4$\\ $K_{1,1,2,2}$ &$6,13$&$4(2!)^2$ &$184(4)$&$3,3,4$\\ $K_{1,1,2,m},m> 2$&$m$+$4,4m$+$5$ & $4m!$ &$8$+$20m$+$8{m\choose 2}(16m-15)(7)$&$3,3,3,4$\\\hline $K_{1,1,1,1,m}, m>1$ & $m$+$4,4m$+$6$ & $4!m!$ &$8(8m^2-3m+2)$(4)&$3,3$\\\hline $K_{1,1,1,1,1,3}$=$K_8-K_3$& $8,25$ & $360$ & $2,685,152(82)$&$3,3$\\ $K_{1,1,1,1,1,2}$=$K_7-K_2$& $7,20$ & $240$ & $31,400(17)$&$3,3$\\ $K_7-C_3$& $7,18$ & $144$ & $520(4)$&$3,3$\\ $K_7-C_4$&$7,17$&$48$&$108(4)$&$3,3,3$\\ $K_7-C_5$=$Pyr^2(C_5)$&$7,16$&$20$&$780(6)$&$3,3,5$\\ $K_7-C_6$=$Pyr(Prism_3)$&$7,15$&$12$&$452(5)$&$3,3,3,4$\\ $K_7-C_7$&$7,14$&$14$&$148(3)$&$3,4$\\ \hline $Pyr(Prism_4)$&$9,20$&$48$&$10,464(6)$&$3,4,6$\\ $Pyr(Prism_5)$&$11,25$&$20$&$208,133(22)$&$3,3,4,5,7$\\ $Pyr(APrism_4)$&$9,24$&$16$&$389,104(17)$&$3,3,3,4,5$\\\hline $Pyr^2(C_6)$&$8,19$&$24$&$3,432(7)$&$3,3,6$\\ $Pyr^2(C_7)$&$9,22$&$28$&$14,740(11)$&$3,3,7$\\\hline Tr. Octahedron on $\mathbb{P}^2$&$12,18$&$48$&$62,140(7)$&$2,2,4,6,6$\\ \hline\hline \end{tabular} \end{center} \end{table} \subsection{Skeletons of Platonic and semiregular polyhedra} Let $G$ be embedded in some oriented surface; so, it is a map $(V,E,F)$, where $F$ is the set of faces of $G$. Let $\vec{p}=(\dots, p_i, \dots)$ denote the {\em $p$-vector} of the map, enumerating the numbers $p_i>0$ of faces of each size $i$ occurring in $G$. Call {\em face-bounding} any $s$-cycle of $G$ bounding a face in the map $G$.
Call an $s$-cycle of $G$ {\em $i$-face-containing}, {\em edge-containing} or {\em point-containing}, if all its interior points form just an $i$-gonal face, an edge or a point, respectively. Call {\em equator} any cycle $C$, the interior of which (plus $C$) is isomorphic to the exterior (plus $C$). The chordless $4,6,5,9$-cycles of Octahedron, Cube, Icosahedron and Dodecahedron, respectively, are exactly their vertex-containing $4,6,5,9$-cycles. For Octahedron and Cube, they are exactly all $3$ and $4$ equators, respectively, which are, incidentally, the {\em central circuits} and {\em zigzags} (see \cite{newBook}), respectively. All $c_6$ chordless $6$-cycles of Icosahedron are exactly its $30$ edge-containing ones and $10$ face-containing ones, which are exactly the $10$ equators and the {\em weak zigzags} (\cite{newBook}). All $c_{10}$ chordless $10$-cycles of Dodecahedron are its $30$ edge-containing ones and $6$ face-containing ones, which are exactly all $6$ equators and the zigzags. \begin{proposition} If $G$ is the skeleton of a Platonic solid, then all possible facets of $\CUTP(G)$ are: edge facets and $s$-cycle facets, coming from all face-bounding cycles and from all (if they exist and are not listed before) vertex-, edge-, face-containing cycles. For instance: (i) If $G=K_4$ (Tetrahedron), then $\CUTP(G)$ has a unique orbit of $2^2p_3=16$ (simplicial) $3$-cycle facets (from all $|F|=p_3=4$ face-bounding cycles of $G$). (ii) If $G=K_{2,2,2}$ (Octahedron), then $\CUTP(G)$ has $56$ facets in $2$ orbits, namely: the orbit of $2^2p_3$ $3$-cycle facets (from all $|F|=p_3=8$ face-bounding cycles) and the orbit of $2^3c_4$ $4$-cycle facets (from all $c_4 =\frac{|V|}{2}=3$ vertex-containing $4$-cycles). (iii) If $G=K_{2}^3$ (Cube), then $\CUTP(G)$ has $200$ facets in $3$ orbits, namely: the orbit of $2|E|=24$ edge facets, the orbit of $2^3p_4$ $4$-cycle facets (from all $|F|=p_4=6$ face-bounding cycles), and the orbit of $2^5c_6=128$ $6$-cycle facets (from all $c_6=4$ vertex-containing $6$-cycles).
(iv) If $G$ is Icosahedron, then $\CUTP(G)$ has $1,552$ facets in $4$ orbits, namely: orbit of $2^2p_3=80$ $3$-cycle facets (from all $|F|=p_3=20$ face-bounding cycles), orbit of $2^4c_5=192$ $5$-cycle facets (from all $c_5=12$ vertex-containing $5$-cycles), orbit of $2^5|E|=960$ $6$-cycle facets (from $|E|=30$ edge-containing $6$-cycles), orbit of $320$ $6$-cycle facets (from $\frac{|F|}{2}=10$ face-containing $6$-cycles). (v) If $G$ is Dodecahedron, then $\CUTP(G)$ has $23,804$ facets in $5$ orbits, namely: orbit of $2|E|=60$ edge facets, orbit of $2^4p_5=192$ $5$-cycle facets (from all $|F|=p_5=12$ face-bounding cycles), orbit of $2^8c_9=5,120$ $9$-cycle facets (from all $c_9=20$ vertex-containing $9$-cycles), orbit of $2^9|E|=15,360$ $10$-cycle facets (from $30$ edge-containing $10$-cycles), orbit of $2^9\times 6=3,072$ $10$-cycle facets (from $\frac{|F|}{2}=6$ face-containing $10$-cycles). \end{proposition} In a Truncated Tetrahedron, call {\em ring-edges} those bounding a triangle, and {\em rung-edges} all $6$ other ones. \begin{proposition} (i) If $G$ is Truncated Tetrahedron, then $\CUTP(G)$ has $540$ facets: \begin{enumerate} \item orbit of $2\times 6$ edge facets (from all $6$ rung-edges), \item orbit of $2^3p_3$ $3$-cycle facets (from all $p_3=4$ $3$-face-bounding cycles), \item orbit of $2^5p_6$ $6$-cycle facets (from all $p_6=4$ $6$-face-bounding cycles), \item orbit of $2^7\times 3$ $8$-cycle facets (from $\frac{1}{2}{4\choose 2}$ rung-edge-containing $8$-cycles, which are also the equators). 
\end{enumerate} (ii) If $G$ is Cuboctahedron, then $\CUTP(G)$ has $1,360$ facets, namely: \begin{enumerate} \item orbit of $2^2p_3$ $3$-cycle facets (from all $p_3=8$ $3$-face-bounding cycles), \item orbit of $2^3p_4$ $4$-cycle facets (from all $p_4=6$ $4$-face-bounding cycles), \item orbit of $2^5|V|$ $6$-cycle facets (from all $12$ vertex-containing $6$-cycles), \item orbit of $2^5\times 4=128$ $6$-cycle facets (from all $\frac{p_3}{2}$ $3$-face-containing $6$-cycles, which are also equators and the central circuits), \item orbit of $2^7p_4=768$ $8$-cycle facets (from all $6$ $4$-face-containing $8$-cycles, which are also zigzags). \end{enumerate} \end{proposition} Given a $Prism_m$ ($m\neq 4$) or an $APrism_m$ ($m\neq 3$), we call {\em rung-edges} the edges connecting the two $m$-gons, and {\em ring-edges} the other $2m$ edges. Let $P$ be an ordered partition $X_1\cup \dots \cup X_{2t}=\{1, \dots , m\}$ into ordered sets $X_i$ of $|X_i|\ge 3$ consecutive integers. Call a {\em $P$-cycle of $Prism_m$} the chordless $(m+2t)$-cycle obtained by taking the path $X_1$ on the, say, $1$-st $m$-gon, then a rung edge (in the same direction), then the path $X_2$ on the $2$-nd $m$-gon, etc., till returning to the path $X_1$. Any vertex of $Prism_m$ can be taken as the $1$-st element of $X_1$, in order to fix a $P$-cycle. So, a $P$-cycle defines an orbit of $2^{m+2t-1}2m$ $(m+2t)$-cycle facets of $\CUTP(Prism_m)$, except the case $(|X_1|,\dots, |X_{2t}|)=(|X_2|,\dots , |X_{2t}|,|X_1|)$ when the orbit is twice smaller. A {\em $P$-cycle of $APrism_m$} is defined similarly, but we ask only $|X_i|\ge 2$, and the rung edges, needed to change the $m$-gon, should be selected, in the cases $|X_i|= 2,3$, so that they do not lead to a ring edge, i.e., a chord of $P$. Clearly, $P$-cycles are all possible chordless $t$-cycles with $t\neq 4,m$ for $Prism_m$ and with $t\neq 2,m$ for $APrism_m$.
\begin{proposition} (i) If $G$ is $Prism_m$ ($m\ge 5$), then all facets of $\CUTP(G)$ are: \begin{enumerate} \item orbit of $2m$ edge facets (from all $m$ rung-edges); \item orbit of $4m$ edge facets (from all $2m$ ring-edges); \item orbit of $2^3p_4=8m$ $4$-cycle facets (from all $m$ $4$-face-bounding $4$-cycles); \item orbit of $2^{m-1}p_m$ $m$-cycle facets (from both $m$-face-bounding $m$-cycles); \item orbits of cycle facets for all possible $P$-cycles. \end{enumerate} (ii) If $G$ is $APrism_m$ ($m\ge 4$), then all facets of $\CUTP(G)$ are: \begin{enumerate} \item orbit of $2^2p_3=8m$ $3$-cycle facets (from all $2m$ $3$-face-bounding $3$-cycles); \item orbit of $2^{m-1}p_m$ $m$-cycle facets (from both $m$-face-bounding $m$-cycles); \item orbits of cycle facets for all possible $P$-cycles. \end{enumerate} \end{proposition} \subsection{M\"{o}bius ladders and Petersen graph} All M\"{o}bius ladders $M_{2m}$ are toroidal. The M\"{o}bius ladder $M_{6}=K_{3,3}$, the Petersen graph and the Heawood graph are both toroidal and $1$-planar. Given the M\"{o}bius ladder $M_{2m}$, call {\em ring-edges} the $2m$ edges belonging to the $2m$-cycle $C_{1,\dots ,2m}$, and {\em rung-edges} all other ones, i.e., $(i,i+m)$ for $i=1,\dots ,m$. For any odd $t$ dividing $m$, denote by $C(m,t)$ the $(m+t)$-cycle of $M_{2m}$ having, up to a cyclic shift, the form $$1, \dots , 1+\frac{m}{t},1+\frac{m}{t}+m, \dots , 1+\frac{2m}{t}+m, 1+\frac{2m}{t}+2m,\dots , 1+\frac{3m}{t}+2m, \dots ,$$ i.e., $t$ consecutive sequences of $\frac{m}{t}$ ring-edges, each followed by a rung-edge. Such a $C(m,1)$ exists for any $m\ge 3$; for $t> 1$, their existence requires the divisibility of $m$ by $t$. Clearly, the number of $(m+t)$-cycles $C(m,t)$ is $\frac{2m}{t}$.
\begin{conjecture} If $G=M_{2m}$ ($m\ge 4$), then among the facets of $\CUTP(G)$ there are: two orbits of $4m$ and $2m$ edge facets (from all $2m$ ring- and $m$ rung-edges), the orbit of $2^3c_4=8m$ $4$-cycle facets (from all $m$ $4$-cycles), the orbit of $2^m2m$ $(m+1)$-cycle facets (from all $2m$ $(m+1)$-cycles $C(m,1)$) and, for any odd divisor $t>1$ of $m$, the orbit of $2^{m+t}\frac{m}{t}$ $(m+t)$-cycle facets (from all $(m+t)$-cycles $C(m,t)$). \end{conjecture} There are no other orbits for $m=3,4$; for $m=3$ the first two orbits unite into one of $18$ edge facets, while all other orbits unite into one of $2^3c_4=72$ $4$-cycle facets. $\CUTP(M_{10})$ has only one more orbit: the orbit of $2^{10}$ facets of incidence $15$ (i.e., simplicial facets), defined by a cyclic shift of $$\sum_{i=1}^{10}\frac{1}{2}(3-(-1)^i)x_{i,i+1}+\sum_{i=1}^{m}x_{i,i+m}-2(x_{5,10}+2x_{1,2}+x_{3,8}).$$ $\CUTP(M_{12})$ also has only one more orbit: $2^{12}6$ similar facets of incidence $20$. The Petersen graph has three circuit double covers: by six $5$-gons (actually, zigzags), by five cycles of lengths $9,6,5,5,5$ and by five cycles of lengths $8,6,6,5,5$. It can be embedded in the projective plane, the torus and the Klein bottle with corresponding sets of six, five and five faces. The Petersen graph has only $5$-, $6$-, $8$- and $9$-cycles; it has $c_5=12$ and $c_6=10$. The Heawood graph, i.e., the {\em $(3,6)$-cage}, has girth $6$ and $c_6=28$, $c_8=|E|=21$. \begin{proposition} $\CUTP(Petersen\, graph)$ has $3,614$ facets in $4$ orbits: \begin{enumerate} \item orbit of $2|E|=30$ edge facets, \item orbit of $2^4 c_5 =192$ $5$-cycle facets, \item orbit of $2^5c_6 =320$ $6$-cycle facets, \item orbit of $2^{10}3$ simplexes, represented by $$(C_{12345}-2x_{15}) -(C_{1'4'2'5'3'}-x_{1'4'}-x_{2'5'}) + 2\sum_{1\le i\le 5}x_{ii'},$$ where the Petersen graph is seen as $C_{12345}+C_{1'4'2'5'3'}+ \sum_{1\le i\le 5}x_{ii'}$.
\end{enumerate} \end{proposition} \begin{remark} Three of the $9$ orbits of facets of $\CUTP(Heawood\, graph)$ are: \begin{enumerate} \item $2|E|=42$ edge facets, \item $2^5c_6=896$ $6$-cycle facets and \item $2^7c_8=2,688$ $8$-cycle facets. \end{enumerate} \end{remark} \subsection{Complete-like graphs} $K_n$ is toroidal only for $n=5,6,7$, while it is $1$-planar only for $n=5,6$. Among complete multipartite graphs $G$, the planar ones are: $K_{2,m}$; $K_{1,1,m}$; $K_{1,2,2}$; $K_{1,1,1,1}=K_4$ and their subgraphs. The $1$-planar ones are, besides the above: $K_6$; $K_{1,1,1,6}$; $K_{1,1,2,3}$; $K_{2,2,2,2}$; $K_{1,1,1,2,2}$ and their subgraphs (\cite{CzapPlanarity}). Given sets $A_1,\dots , A_t$ with $t\ge 2$ and $1\le |A_1|\le \dots \le |A_t|$, let $G$ be the complete multipartite graph $K_{a_1,\dots ,a_t}$ with $a_i=|A_i|$ for $1\le i\le t$. All possible chordless cycles in $G$ are the $c_3=\sum_{1\le i<j<k\le t}a_ia_j a_k$ triangles and the $c_4=\sum_{1\le i<j\le t}{a_i \choose 2}{a_j \choose 2}$ quadrangles. Hence, $c_3>0$ if and only if $t>2$, and $c_4>0$ if and only if $a_{t-1}\ge 2$. So, among edge and $s$-cycle facets of $\CUTP(G)$, only three such orbits are possible: $2|E|$ edge facets if $t=2$, $4c_3$ $3$-cycle facets if $t\ge 3$ and $8c_4$ $4$-cycle facets if $a_{t-1}\ge 2$. All cases when there are no other facets, i.e., when $G$ has no $K_5$-minor, are given in Table \ref{TableSomeCUTpolytopes1}; note that the facets are simplexes for $G =K_{2,2}$ and $K_{1,1,1,1}$. In particular, $G=K_{m+i}-K_m$, $m>1$, has no $K_5$-minor only for $i=1,2,3$. The facets of $\CUTP(G)$ are the orbit of $2m$ edge facets for $i=1$, the orbit of $4m$ $3$-cycle facets for $i=2$ and two orbits (of sizes $12m$ and $4$) of $3$-cycle facets for $i=3$. Some of the remaining cases are presented in Table \ref{TableSomeCUTpolytopes2}. For $G=K_{m+4}-K_m=K_{1,1,1,1,m}$ ($m>1$) and $K_{1,1,2,m}$ ($m>2$), the number of orbits stays constant for any $m$: $4$ and $7$, respectively.
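The counts $c_3$ and $c_4$ for complete multipartite graphs can be confirmed by brute force; the following Python sketch (our illustration, with our own helper names) compares a direct enumeration against the two closed formulas.

```python
from itertools import combinations
from math import comb

def complete_multipartite_counts(sizes):
    """Brute-force chordless 3- and 4-cycles of K_{a_1,...,a_t}."""
    part = []
    for p, a in enumerate(sizes):
        part += [p] * a
    n = len(part)
    adj = lambda u, v: part[u] != part[v]
    c3 = sum(1 for u, v, w in combinations(range(n), 3)
             if adj(u, v) and adj(v, w) and adj(u, w))
    c4 = 0
    for quad in combinations(range(n), 4):
        # induced 4-cycle: exactly 4 of the 6 pairs are adjacent and the
        # two non-edges are vertex-disjoint
        non = [(x, y) for x, y in combinations(quad, 2) if not adj(x, y)]
        if len(non) == 2 and set(non[0]).isdisjoint(non[1]):
            c4 += 1
    return c3, c4

def formulas(sizes):
    """c_3 = sum a_i a_j a_k over i<j<k; c_4 = sum C(a_i,2)C(a_j,2) over i<j."""
    t = len(sizes)
    f3 = sum(sizes[i] * sizes[j] * sizes[k]
             for i, j, k in combinations(range(t), 3))
    f4 = sum(comb(sizes[i], 2) * comb(sizes[j], 2)
             for i, j in combinations(range(t), 2))
    return f3, f4

for sizes in [(1, 2, 3), (2, 2, 2), (1, 1, 2, 2)]:
    print(sizes, complete_multipartite_counts(sizes), formulas(sizes))
```

For instance $K_{2,2,2}$ (the octahedron) gives $c_3=8$ and $c_4=3$, its three square equators.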
Given a sequence $b_1, \dots , b_n$ of integers which sum to $1$, let us call $$hyp(b)=\sum_{1\le i,j \le n}x_{ij}b_ib_j\le 0$$ (when it is applicable) a {\em hypermetric inequality}. Note that $hyp(1,1,-1,0,\dots , 0)$ is the usual triangle inequality. Denote $hyp(b)$ with all non-zero $b_i$ being $b_x=b_y=1=-b_z$ by $Tr(x,y;z)$, and $hyp(b)$ with all non-zero $b_i$ being $b_x=b_y=b_z=1=-b_u=-b_v$ by $Pent(x,y,z;u,v)$. If $G=K_{1,1,2,m}$ with $m\ge 3$, then $\CUTP(G)$ has $8+20m+8{m\choose 2}(16m-15)$ facets in $7$ orbits: $3$ orbits of $8,4m, 16m$ $3$-cycle facets, one orbit of $8{m\choose 2}$ $4$-cycle facets and $3$ orbits of $64{m\choose 2},64{m\choose 2}, 384{m\choose 3}$ $\{0,\pm 1 \}$-valued non-$s$-cycle facets, having $4$ values $-1$ and $11,11,12$ values $1$, respectively. The partition is $\{1\},\{2\},\{3,4\},\{5,\dots, m+4\}$. $\CUTP(K_{1,1,2,2})$ has $184$ facets in $4$ orbits: $2$ orbits of $8+8,32$ $3$-cycle facets, one orbit of $8$ $4$-cycle facets and one orbit of $2^7$ facets, represented by \begin{equation*} hyp(1,1,1,-1,\,-1,0)+hyp(0,0,1,1,\,0,-1)\le 0. \end{equation*} The graph $G=K_{m+t}-K_{m}=K_{1,\dots ,1,m}$ has a $K_5$-minor only if $t\ge 4$. If $m\ge 3$, then $\CUTP(G)$ has $2$ orbits of $4m{t\choose 2}$ and $4{t\choose 3}$ $3$-cycle facets and, for $t<4$ only, no other facets. The partition is $\{1\},\dots ,\{t\},\{t+1,\dots, t+m\}$. If $G=K_{m+4}-K_{m}$, then $\CUTP(G)$ has $8(8m^2-3m+2)$ facets in $4$ orbits: $2$ orbits of $24m$, $16$ $3$-cycle facets and $2$ orbits of sizes $16m, 128{m\choose 2}$, represented by $hyp(1,1,-1,-1,\,1,0,\dots ,0)\le 0, \mbox{~i.e.,~} Pent(1,2,5;3,4)$ and \begin{equation*} hyp(1,1,-1,0,\,1,-1,0, \dots , 0) + hyp(0,0,0,-1,\,1,1,0, \dots , 0)\le 0.
\end{equation*} If $G=K_{m+5}-K_{m}$, then among the many orbits of facets of $\CUTP(G)$, there are $2$ orbits of $40,40m$ $3$-cycle facets and $3$ orbits of $16, 80m, 20m(m-1)$ facets, represented, respectively, by \begin{enumerate} \item $hyp(1,1,1,-1,-1,\,0,\dots , 0)\le 0$, \item $hyp(1,1,-1,-1,0,\,1,0, \dots , 0)\le 0$ \item and $hyp(1,-1,-1,0,0,\,1,1,0, \dots, 0) +hyp(0,0,0,1,0,\,1,-1,0, \dots, 0)\le 0$. \end{enumerate} Among the $12$ remaining orbits for $K_{7}-K_{2}$, two (of sizes $2^730, 2^760$) are $\{0,\pm 1\}$-valued; they are represented, respectively, by \begin{enumerate} \item $hyp(1,1,-1,-1,1,1,-1)+ (x_{34}+x_{47}-x_{27}+x_{12}-x_{13})\le 0$ and \item $(x_{13} + x_{34} + x_{45} + x_{15}) + (x_{23}+x_{36}+ x_{67} + x_{27}) - (x_{14} + x_{47} - x_{57} + x_{25} + x_{26} + x_{16})$. \end{enumerate} Let $G=Pyr^2(C_m)$. Clearly, it is $K_4, K_5$ if $m=2,3$, respectively. For $m\ge 4$, we have $A(G)=4m$, and all chordless cycles are the $3m$ triangles and the unique $m$-cycle. Any of the $3m+1$ edges belongs to a triangle. So, among the orbits of facets of $\CUTP(G)$, there are two orbits (of sizes $8m$ and $4m$) of $3$-cycle facets and the orbit of $2^{m-1}$ $m$-cycle facets. All other facets for $m\le 7$ are $\{0,\pm 1\}$-valued. For $Pyr^2(C_{1\dots m})$ with $m=4$, the unique remaining orbit consists of $2^7$ facets, represented by $Pent(3,5,6;1,2)+Tr(1,2;4)$. Among the remaining orbits for $m=5$ and $7$, there is an orbit of $2^{m+1}$ facets represented by \begin{enumerate} \item $Pyr^2(C_{12345})-2((x_{45}+x_{67})+(x_{16}+x_{17}+x_{36}+x_{37}))\le 0$ and, respectively, by \item $Pyr^2(C_{1234567})-2((x_{12}+x_{19})+(x_{29}+x_{38}+x_{49}+x_{58}+x_{69}+x_{78}))\le 0$. \end{enumerate} For $m=5$, the two remaining orbits (each of size $2^65$) are represented by \begin{enumerate} \item $C_{12345}-2x_{15}+x_{67}+((x_{17}-x_{16})-(x_{37}-x_{36})+(x_{47}-x_{46}))\le 0$ and \item $C_{12345}-2x_{15}+x_{67}+((x_{17}-x_{16})-(x_{47}-x_{46})+(x_{57}-x_{56}))\le 0$, respectively.
\end{enumerate} For $m=6$, one of the $4$ remaining orbits (of size $2^76$) is represented by $C_{123456}-2x_{12}+x_{78}+((x_{17}-x_{18})+(x_{57}-x_{58})-(x_{67}-x_{68}))\le 0$. Note that $K_7-C_5=Pyr^2(C_5)$. Now, $G=K_7-C_{1234}=K_{\{7\},\{6\},\{5\},\{1,3\},\{2,4\}}$ has $c_3=19$; $\CUTP(G)$ has four orbits of facets: three (of sizes $48,24,4$) of $3$-cycle facets and one orbit of size $32$, represented by $Pent(4,5,6;2,7)$. Each of the $K_5$-minors $K_{\{2, 4, 5,6,7\}}$ and $K_{\{1, 3, 5,6,7\}}$ provides $16$ of the above $32$ facets. $G=K_7-C_{7}$ has $c_3=c_4=7$; $\CUTP(G)$ has three orbits of facets: one (of size $28$) of $3$-cycle facets, one (of size $56$) of $4$-cycle facets and one of size $64$, represented by $(K_7-C_{1234567})-2(x_{15}+Path_{27364})$. \section{Quasi-metric polytopes over graphs}\label{Sec_Quasi} We first define the inequalities satisfied by quasi-metrics on $n$ points. \begin{definition}\label{DEFS_Ineqs_Or_Bound} Given a fixed $n\geq 3$, we define: (i) The oriented triangle inequality for all $1\leq i,j,k\leq n$ is \begin{equation*} d(i,j) \leq d(i,k) + d(k,j). \end{equation*} (ii) The non-negativity inequality for all $1\leq i,j\leq n$ is \begin{equation*} d(i,j) \geq 0. \end{equation*} (iii) A {\em bounded} oriented metric is an oriented metric satisfying, for all $1\leq i,j,k\leq n$, the inequalities \begin{equation*} d(j,i) + d(i,k) + d(k,j) \leq 2 \mbox{~and~} d(i,j) \leq 1. \end{equation*} \end{definition} Using this, we define the cone of quasimetrics $\QMET(K_n)$ (see \cite{QuasiMetric1,QuasiMetric2} for more details) to be the cone of oriented metrics satisfying the inequalities (i), (ii) of Definition \ref{DEFS_Ineqs_Or_Bound}. We define the polytope $\QMETP(K_n)$ to be the set of oriented metrics satisfying all inequalities of Definition \ref{DEFS_Ineqs_Or_Bound}.
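The three families of inequalities above are straightforward to test numerically. A minimal Python sketch (our illustration; the function name is ours) checks membership in $\QMETP(K_n)$:

```python
from itertools import permutations

def in_QMETP(d, n, tol=1e-9):
    """Check the bounded-quasimetric inequalities: non-negativity,
    d(i,j) <= 1, oriented triangle inequalities, and the perimeter
    inequalities d(j,i) + d(i,k) + d(k,j) <= 2."""
    for i, j in permutations(range(n), 2):
        if d[i, j] < -tol or d[i, j] > 1 + tol:
            return False
    for i, j, k in permutations(range(n), 3):
        if d[i, j] > d[i, k] + d[k, j] + tol:
            return False
        if d[j, i] + d[i, k] + d[k, j] > 2 + tol:
            return False
    return True

n = 4
half = {(i, j): 0.5 for i in range(n) for j in range(n) if i != j}
ones = {(i, j): 1.0 for i in range(n) for j in range(n) if i != j}
print(in_QMETP(half, n), in_QMETP(ones, n))  # True False
```

The constant function $d\equiv \frac{1}{2}$ lies in the polytope, while $d\equiv 1$ violates the perimeter inequality $d(j,i)+d(i,k)+d(k,j)\le 2$.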
Given a subset $S\subset \{1, \dots, n\}$, we define the {\em oriented switching}: \begin{equation*} F_S(d)(i,j) = \left\{\begin{array}{cl} 1 - d(j,i) & \mbox{if~} \left\vert S\cap \{i,j\}\right\vert = 1,\\ d(i,j) & \mbox{otherwise}. \end{array}\right. \end{equation*} The symmetric group $Sym(n)$ acts on $\QMET(K_n)$ and defines a group of size $n!$. The oriented switchings and $Sym(n)$ act on $\QMETP(K_n)$ and determine a group of size $2^{n-1} n!$. The cone $\MET(K_n)$ and the polytope $\METP(K_n)$ are embedded into $\QMET(K_n)$ and $\QMETP(K_n)$, but we have another interesting subset: \begin{definition}\label{DEFS_Weightable} Given $n\geq 3$ and an oriented metric $d\in \QMET(K_n)$, $d$ is called {\em weightable} if it satisfies the following equivalent conditions: (i) There exists a weight function $i\mapsto w_i$ such that for all $1\leq i,j\leq n$ \begin{equation*} d(i,j) + w_i = d(j,i) + w_j. \end{equation*} (ii) For all $1\leq i,j,k\leq n$ we have \begin{equation*} d(i,j) + d(j,k) + d(k,i) = d(j,i) + d(k,j) + d(i,k). \end{equation*} \end{definition} We then define the cone $\WQMET(K_n)$ and the polytope $\WQMETP(K_n)$ to be the sets of weightable quasimetrics in the cone $\QMET(K_n)$ and in the polytope $\QMETP(K_n)$, respectively. Clearly, the oriented switching preserves $\WQMETP(K_n)$. With all those definitions we can now define the corresponding objects on graphs: \begin{definition} Let $G$ be an undirected graph; we denote by $E(G)$ the set of edges and by $Dir(E(G))$ the set of directed edges of $G$: (i) We define the cones $\QMET(G)$ and $\WQMET(G)$ to be the projections of the cones $\QMET(K_n)$ and $\WQMET(K_n)$ on $\mathbb{R}^{Dir(E(G))}$. (ii) We define the polytopes $\QMETP(G)$ and $\WQMETP(G)$ to be the projections of the polytopes $\QMETP(K_n)$ and $\WQMETP(K_n)$ on $\mathbb{R}^{Dir(E(G))}$.
\end{definition} We can now give a description by inequalities of $\QMET(G)$: \begin{theorem}\label{QMET_theorem} For a given graph $G$, the polyhedral cone $\QMET(G)$ is the set of functions $d\in\ensuremath{\mathbb{R}}^{Dir(E(G))}$ such that (i) For any directed edge $e=(i,j)$ of $G$, the inequality $0\leq d(i,j)$ holds. (ii) For any oriented cycle $C=(v_1, v_2, \dots, v_m)$ of $G$, \begin{equation}\label{Oriented_Cycle_Ineq} d(v_1,v_m)\leq d(v_1,v_2) + d(v_2,v_3) + \dots + d(v_{m-1},v_m). \end{equation} The same result holds for $\WQMET(G)$ by adding the extra condition that there exists a function $w$ such that $d(i,j) - d(j,i) = w_i - w_j$. \end{theorem} \proof Our proof is adapted from the proof of \cite[Theorem 27.3.3]{DL}. It is clear that the inequalities (i) and (ii) are valid for $d\in \QMET(K_n)$ and that only directed edges of $G$ occur in their expression. Therefore, the inequalities are also valid for the projection. The proof of sufficiency is done by induction and is more complicated. Suppose that the result is proved for $G+e$, i.e., $G$ to which an edge $e=(i,j)$ has been added. Suppose we have an element $x$ of $\ensuremath{\mathbb{R}}^{Dir(E(G))}$ satisfying all oriented cycle inequalities. We need to find an antecedent of $x$, i.e., a function $y\in \ensuremath{\mathbb{R}}^{Dir(E(G) + e)}$ satisfying the inequalities and projecting onto $x$; that is, we need to find $y(i,j)$ and $y(j,i)$. We write $P_{i,j}$ for the set of directed paths from $i$ to $j$ in $G$ and, for a path $u$, denote by $x(u)$ the sum of the values of $x$ over the directed edges of $u$. Assume first that $P_{i,j}\not= \emptyset$. We write \begin{equation*} u_{i,j} = \min_{u \in P_{i,j}} x(u); \end{equation*} since $x$ is non-negative, we have $u_{i,j}\geq 0$. We then write \begin{equation*} l_{i,j} = \max_{v \in P_{i,j}, f\in v} x(r(f)) - x(v - f) \end{equation*} with $r(f)$ the reversal of the directed edge $f$. If $P_{i,j}=\emptyset$, i.e., if the edge $e$ connects two connected components of $G$, then we set $l_{i,j} = u_{i,j} = 0$.
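Concretely, $u_{i,j}$ is the length of a shortest directed path from $i$ to $j$ with respect to the weights $x$, and in the cone case one can even extend $x$ to all ordered pairs at once by all-pairs shortest paths. The following Python sketch (our illustration of this shortest-path viewpoint, not the inductive proof itself) extends a non-negative $x$ satisfying the oriented cycle inequalities via Floyd--Warshall and verifies that the result satisfies every oriented triangle inequality while agreeing with $x$ on the edges of $G$:

```python
INF = float("inf")

def shortest_path_extension(n, x):
    """x: dict (i,j) -> weight on directed edges of a graph G on {0,...,n-1}.
    Extend to all ordered pairs by all-pairs shortest paths (Floyd-Warshall)."""
    d = [[0 if i == j else x.get((i, j), INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# directed 4-cycle 0->1->2->3->0 plus all reverse arcs, non-negative weights
x = {(0, 1): 1, (1, 2): 2, (2, 3): 1, (3, 0): 3,
     (1, 0): 2, (2, 1): 1, (3, 2): 2, (0, 3): 1}
d = shortest_path_extension(4, x)
# shortest-path distances always satisfy the oriented triangle inequalities
ok = all(d[i][j] <= d[i][k] + d[k][j]
         for i in range(4) for j in range(4) for k in range(4))
# x satisfies the oriented cycle inequalities, so every arc of G is itself a
# shortest path and the extension agrees with x on Dir(E(G))
agrees = all(d[i][j] == x[i, j] for (i, j) in x)
print(ok, agrees)  # True True
```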
We have $l_{i,j}\leq u_{i,j}$: otherwise we could take a path $u$ realizing the minimum $u_{i,j}$ and a path $v$ with a directed edge $f$ realizing the maximum $l_{i,j}$, and putting them together would yield a violation of the oriented cycle inequality (ii). So, we can find a value $y_{i,j}$ such that \begin{equation*} l_{i,j} \leq y_{i,j} \leq u_{i,j}, \end{equation*} and since $u_{i,j} \geq 0$ we can choose $y_{i,j}\geq 0$. The same holds for $y_{j,i}$. Therefore we have found an antecedent of $x$ in $\ensuremath{\mathbb{R}}^{Dir(E(G) + e)}$, which proves the result for $\QMET(G)$ and so the stated theorem. For $\WQMET(G)$ we have to adjust the inductive construction. If $P_{i,j} = \emptyset$, then we can adjust the values of the weights $w$ so that $w_i = w_j$. This is possible since the weights are determined up to a constant term on each connected component. On the other hand, if $P_{i,j}$ is not empty, then the weight is already given and we should get in the end $y_{i,j} - y_{j,i} = w_i - w_j$. Actually this is not a problem, since it can easily be shown that $u_{i,j} - u_{j,i} = w_i - w_j$ and $l_{i,j} - l_{j,i} = w_i - w_j$, and so the inductive construction works. \qed Now we turn to the construction for the polytope case. \begin{theorem} For a given graph $G$, the polytope $\QMETP(G)$ is the set of functions $d\in\ensuremath{\mathbb{R}}^{Dir(E(G))}$ such that (i) For any directed edge $e=(i,j)$ of $G$, the inequality $0\leq d(i,j)\leq 1$ holds. (ii) For any oriented cycle $C=(v_1, v_2, \dots, v_m)$ of $G$ and any subset $F$ of the edges of $C$ of odd size, \begin{equation}\label{Oriented_Cycle_Ineq_Poly} \sum_{f=(v, v') \in F} d(v', v) - \sum_{f=(v, v') \in C - F} d(v, v') \leq \vert F\vert - 1. \end{equation} The same result holds for $\WQMETP(G)$ with the extra condition that there exists a function $w$ such that $d(i,j) - d(j,i) = w_i - w_j$.
\end{theorem} \proof The proof follows by remarking that the inequalities (i) and (ii) are the oriented switchings of the non-negativity inequality and of the oriented cycle inequality \ref{Oriented_Cycle_Ineq}. Thus the proof follows from Theorem \ref{QMET_theorem} and the same proof strategy as \cite[Theorem 27.3.3]{DL}. \qed The oriented multicut cones defined in the introduction are very complicated. In particular, the oriented multicuts are not stable under oriented switchings. However, we have $\WOMCUTP(K_n) = \WQMETP(K_n)$ for $n\leq 4$. Based on that and on the analogy with Theorem \ref{K5wonder}, a natural conjecture would be that $\WOMCUTP(G) = \WQMETP(G)$ if $G$ has no $K_5$ minor. But it seems that for some graphs with no $K_5$ minor we have $\WOMCUTP(G) \not= \WQMETP(G)$. \section{Hemi-metric polytopes over simplicial complexes}\label{Sec_Simplicial} We can also generalize metrics to a measure of distance between more than $2$ objects. Our approach differs from \cite{DezaRosenberg1,DezaRosenberg2,DezaDutourHemimetric1,DD_mining} and has the advantage of allowing one to define such metrics on complexes. We denote by $Set_{n,m}$ the set of subsets of $m+1$ points of $\{1, \dots, n\}$. \begin{definition} Let us fix $m\geq 1$ and $n$: (i) An $m$-dimensional complex is formed by a subset of $Set_{n,m}$. (ii) A {\em closed manifold} of dimension $m$ is formed by a subset ${\mathcal S}$ of $Set_{n,m}$ such that for each subset $S$ of $m$ points of $\{1, \dots, n\}$ the number of simplices of ${\mathcal S}$ containing $S$ is even. \end{definition} For the case $m=1$, the closed manifolds of the above definition correspond to the closed cycles. We now proceed to defining the corresponding cycle inequalities: \begin{definition} Let us fix $m\geq 1$ and $n$. Given an $m$-dimensional complex $K$ on $\{1, \dots, n\}$, the hemimetric cone $\HMET(K)$ is formed by the functions $d$ on $K$ satisfying (i) the non-negativity inequalities \begin{equation*} d(\Delta) \geq 0 \end{equation*} for all $\Delta\in K$.
(ii) For all closed manifolds $(\Delta_1, \dots, \Delta_r)$ formed by simplices $\Delta_i\in K$, the inequalities \begin{equation*} d(\Delta_i) \leq \sum_{1\leq j\leq r, i\not= j} d(\Delta_j) \end{equation*} for all $1\leq i\leq r$. \end{definition} For $m=1$ the definition corresponds to the one of $\MET(G)$. \begin{theorem} Let us fix $m\geq 1$ and $n$, and let $K$ be an $m$-dimensional complex on $n$ points. The cone $\HMET(K)$ is the projection of $\HMET(Set_{n,m})$ on the simplices included in $K$. \end{theorem} \proof Our proof is adapted from the proof for metrics of \cite[Theorem 27.3.3]{DL}. The inequalities for $\HMET(K)$ are clearly valid on $\HMET(Set_{n,m})$, which proves one inclusion. We prove the other inclusion by induction. Suppose that we have a metric $d\in \HMET(K)$ and a simplex $\Delta\notin K$. We want to find a metric $d'\in \HMET(K + \Delta)$ extending $d$; that is, we need to find a value of $d(\Delta)$ for which all inequalities still hold. For a subset $S\subset Set_{n,m}$ we define \begin{equation*} d(S) = \sum_{\Delta'\in S} d(\Delta'). \end{equation*} Let us consider the set \begin{equation*} W_{K,\Delta} = \left\{ U\subset K \mbox{~:~} U\cup \{\Delta\} \mbox{~is~a~closed~manifold}\right\}. \end{equation*} We now define the upper bound \begin{equation*} u_{K,\Delta} = \min_{U \in W_{K, \Delta}} d(U). \end{equation*} We have $u_{K,\Delta}\geq 0$ since $d\in \HMET(K)$ implies $d(\Delta')\geq 0$. The lower bound is given by \begin{equation*} l_{K,\Delta} = \max_{P\in W_{K,\Delta}, F\in P} d(F) - d(P - F). \end{equation*} Suppose that $l_{K,\Delta} > u_{K,\Delta}$, with $u_{K,\Delta}$ realized by $U_0$ and $l_{K,\Delta}$ realized by $L_0$ together with a simplex $F_0\in L_0$. The union $L_0\cup U_0$ is not necessarily a closed manifold, since $L_0$ and $U_0$ may share simplices. If that is so, we remove them and consider instead $W_0 = L_0 \cup U_0 - L_0 \cap U_0$.
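The step from $L_0\cup U_0$ to $W_0$ rests on a parity fact: the symmetric difference of two closed manifolds is again a closed manifold, since the count of simplices containing any $m$-subset changes by an even number. A small Python sketch (our illustration, with our own helper names) checks this in the case $m=1$, where closed manifolds are unions of cycles:

```python
from itertools import combinations
from collections import Counter

def is_closed_manifold(simplices, m):
    """Each m-subset (a face) must lie in an even number of the given
    m-dimensional simplices, i.e. (m+1)-subsets."""
    cnt = Counter()
    for s in simplices:
        assert len(s) == m + 1
        for face in combinations(sorted(s), m):
            cnt[face] += 1
    return all(v % 2 == 0 for v in cnt.values())

# two closed 1-manifolds (cycles) on {0,...,4} sharing the edge {0,1}
A = [frozenset(e) for e in [(0, 1), (1, 2), (2, 0)]]
B = [frozenset(e) for e in [(0, 1), (1, 3), (3, 4), (4, 0)]]
assert is_closed_manifold(A, 1) and is_closed_manifold(B, 1)
# the symmetric difference W_0 = (A u B) - (A n B) is again closed:
# here it is the 5-cycle 0-2-1-3-4
W0 = set(A) ^ set(B)
print(is_closed_manifold(list(W0), 1))  # True
```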
The inequality $l_{K,\Delta} > u_{K,\Delta}$ then implies \begin{equation*} d(F_0) > d(L_0 - F_0) + d(U_0) = d(W_0 - F_0) + 2 d(L_0\cap U_0) \geq d(W_0 - F_0), \end{equation*} which violates the fact that $d\in \HMET(K)$. Thus we can find a value $\alpha$ with \begin{equation*} l_{K,\Delta} \leq \alpha \leq u_{K,\Delta} \mbox{~and~} \alpha\geq 0. \end{equation*} That is, we can find a value for $d(\Delta)$ that is compatible with an extension. \qed The inequality set defining $\HMET(K)$ is highly redundant but still finite, so the cone $\HMET(K)$ is actually polyhedral. On the other hand, using only the inequalities obtained from boundaries of simplices does not suffice. Consider for example the complex $Set_{6,2}$. The Octahedron has $6$ vertices and $8$ faces and is a closed manifold. Thus it determines an inequality of the form \begin{equation*} x_{000} \leq x_{100} + x_{010} + x_{001} + x_{110} + x_{101} + x_{011} + x_{111}, \end{equation*} which is not implied by the inequalities on the simplices. The proof can be done by linear programming using our software {\em polyhedral} (\cite{Polyhedral}). This proves that our construction is different from the one of \cite{DezaRosenberg1,DezaRosenberg2,DD_mining} and it would be interesting to redo the computations of those works. \section{Acknowledgments} The second author gratefully acknowledges support from the Alexander von Humboldt foundation. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \newtheorem{Theorem}{Theorem}[section] \newtheorem{cor}[Theorem]{Corollary} \newtheorem{goal}[Theorem]{Goal} \newtheorem{Conjecture}[Theorem]{Conjecture} \newtheorem{exercise}[Theorem]{Exercise} \newtheorem{lemma}[Theorem]{Lemma} \newtheorem{property}[Theorem]{Property} \newtheorem{proposition}[Theorem]{Proposition} \newtheorem{claim}[Theorem]{Claim} \newtheorem{nTheorem}{Surjectivity Theorem} \theoremstyle{definition} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{problem}[Theorem]{Problem} \newtheorem{question}[Theorem]{Question} \newtheorem{Example}[Theorem]{Example} \newtheorem{remark}[Theorem]{Remark} \newtheorem{diagram}{Diagram} \newtheorem{Remark}[Theorem]{Remark} \newcommand{\ul}[1]{\underline{#1}}
\newcommand{\mf}[1]{\mathfrak{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\mc}[1]{\mathcal{#1}} \title{A bound for the conductor of an open subgroup of $GL_2$ associated to an elliptic curve} \author{Nathan Jones} \address{University of Illinois at Chicago, 851 S Morgan St, SEO 322, Chicago, IL 60614} \email{[email protected]} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{Key words and phrases:} Elliptic curves, Galois representations} \renewcommand{\thefootnote}{\arabic{footnote}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{2010 Mathematics Subject Classification:} Primary 11G05, 11F80} \renewcommand{\thefootnote}{\arabic{footnote}} \date{} \begin{abstract} Given an elliptic curve $E$ without complex multiplication defined over a number field $K$, consider the image of the Galois representation defined by letting Galois act on the torsion of $E$. Serre's open image theorem implies that there is a positive integer $m$ for which the Galois image is completely determined by its reduction modulo $m$. In this note, we prove a bound on the smallest such $m$ in terms of standard invariants associated with $E$. The bound is sharp and improves upon previous results. \end{abstract} \maketitle \section{Introduction} Let $K$ be a number field, let $E/K$ be an elliptic curve and let $E_{\mathrm{tors}}$ denote its torsion subgroup.
Denote by $G_K := \operatorname{Gal}(\ol{K}/K)$ the absolute Galois group of $K$ and consider the Galois representation \[ \rho_{E,K} : G_K \longrightarrow \operatorname{Aut}(E_{\mathrm{tors}}) \simeq \operatorname{GL}_2(\hat{\mathbb{Z}}) \] defined by letting $G_K$ act on the torsion of $E$ and choosing a $\hat{\mathbb{Z}}$-basis thereof. A celebrated theorem of J.-P. Serre \cite{serre} states that, if $E$ has no complex multiplication, then the image of $\rho_{E,K}$ is open inside $\operatorname{GL}_2(\hat{\mathbb{Z}})$, or equivalently that \begin{equation} \label{serresopenimage} \left[ \operatorname{GL}_2(\hat{\mathbb{Z}}) : \rho_{E,K}(G_K) \right] < \infty. \end{equation} Consequently, one may find a positive integer $m$ with the property that \begin{equation*} \ker\left( \operatorname{GL}_2(\hat{\mathbb{Z}}) \rightarrow \operatorname{GL}_2(\mathbb{Z}/m\mathbb{Z}) \right) \subseteq \rho_{E,K}(G_K). \end{equation*} \begin{Definition} \label{definitionofme} Given an open subgroup $G \subseteq \operatorname{GL}_2(\hat{\mathbb{Z}})$, we define the positive integer $m_G$ by \[ m_G := \min \{ m \in \mathbb{N} : \ker\left( \operatorname{GL}_2(\hat{\mathbb{Z}}) \rightarrow \operatorname{GL}_2(\mathbb{Z}/m\mathbb{Z}) \right) \subseteq G \} \] and call it the {\bf{conductor of $G$}}. In case $G = \rho_{E,K}(G_K)$ for an elliptic curve $E$ defined over a number field $K$ and without complex multiplication, we denote the conductor of $G$ by $m_{E,K}$. \end{Definition} The purpose of this note is to prove the following upper bound for $m_{E,K}$. In its statement, $\Delta_K$ denotes the absolute discriminant of the number field $K$, $\Delta_E$ denotes the minimal discriminant ideal attached to the elliptic curve $E$, $N_{K/\mathbb{Q}} : K^\times \longrightarrow \mathbb{Q}^\times$ denotes the usual norm map and \[ \operatorname{rad}(m) := \prod_{\ell \mid m \atop \ell \text{ prime}} \ell \] denotes the radical of the positive integer $m$.
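The radical just defined is elementary to compute by trial division; a small illustrative Python sketch (ours, not part of the paper):

```python
def rad(m):
    """Radical of a positive integer: the product of its distinct prime divisors."""
    assert m >= 1
    r, p, m0 = 1, 2, m
    while p * p <= m0:
        if m0 % p == 0:
            r *= p              # record the prime once
            while m0 % p == 0:  # strip all its powers
                m0 //= p
        p += 1
    if m0 > 1:                  # leftover prime factor
        r *= m0
    return r

print(rad(1), rad(12), rad(360))  # 1 6 30
```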
Given a non-zero ideal $I \subseteq \mc{O}_K$, we identify the ideal $N_{K/\mathbb{Q}}(I) \subseteq \mathbb{Z}$ with the (unique) positive integer that generates it, and thus we may regard $N_{K/\mathbb{Q}}(\Delta_E) \in \mathbb{N}$. \begin{Theorem} \label{maintheorem} Let $K$ be a number field, let $E$ be an elliptic curve over $K$ without complex multiplication, and let $m_{E,K} \in \mathbb{N}$ be as in Definition \ref{definitionofme}. Then one has \[ m_{E,K} \leq 2 \cdot \left[ \operatorname{GL}_2(\hat{\mathbb{Z}}) : \rho_{E,K}(G_K) \right] \cdot \operatorname{rad}\left( |\Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right). \] \end{Theorem} \begin{remark} The bound in Theorem \ref{maintheorem} both improves upon and generalizes a bound appearing in \cite{jones1} (see Corollary \ref{conditionalcorollary} below). Furthermore, using results in \cite{daniels}, we may see that there are infinitely many\footnote{Specifically, \eqref{mEforsomeserrecurves} holds for any Serre curve $E$ with the property that $\Delta_E$ is square-free and $\Delta_E \not\equiv 1 \bmod 4$.} elliptic curves $E$ over $\mathbb{Q}$ satisfying \begin{equation} \label{mEforsomeserrecurves} m_{E,\mathbb{Q}} = 2 \cdot \left[ \operatorname{GL}_2(\hat{\mathbb{Z}}) : \rho_{E,\mathbb{Q}}(G_\mathbb{Q}) \right] \cdot \operatorname{rad}(| \Delta_E | ). \end{equation} Thus, our bound for $m_{E,K}$ is sharp when $K = \mathbb{Q}$. \end{remark} \begin{remark} Let \[ \begin{split} \rho_{E,m} : G_K &\longrightarrow \operatorname{GL}_2(\mathbb{Z}/m\mathbb{Z}), \\ \rho_{E,\ell^\infty} : G_K &\longrightarrow \operatorname{GL}_2(\mathbb{Z}_\ell) \end{split} \] denote the Galois representations defined by letting $G_K$ act on $E[m]$ and on $\displaystyle E[\ell^\infty] := \bigcup_{n \geq 1} E[\ell^n]$ respectively, and let $K(E[m]) = \ol{K}^{\ker \rho_{E,m}}$ denote the $m$-th division field of $E$.
The conductor $m_{E,K}$ that we are considering should not be confused with ``Serre's constant,'' defined for an elliptic curve $E$ over $\mathbb{Q}$ in \cite{danielsgonzalez} (see also \cite{cojocaru}) by \[ A(E) := \prod_{\substack{ \ell^n \text{ a prime power} \\ \rho_{E,\ell^n}(G_\mathbb{Q}) \neq \GL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \\ \forall k < n, \, \rho_{E,\ell^{k}}(G_\mathbb{Q}) = \GL_2(\mathbb{Z}/\ell^{k}\mathbb{Z}) }} \ell^n. \] It is evident that $A(E)$ divides $m_{E,\mathbb{Q}}$, but $m_{E,\mathbb{Q}}$ is in general larger than $A(E)$. The main differences between these two constants are as follows: \begin{enumerate} \item A prime power $\ell^n$ divides $m_{E,\mathbb{Q}}$ whenever $\ker\left( \GL_2(\mathbb{Z}_\ell) \rightarrow \GL_2(\mathbb{Z}/\ell^{n-1}\mathbb{Z}) \right) \not\subseteq \rho_{E,\ell^\infty}(G_\mathbb{Q})$, whereas $A(E)$ is square-free, except possibly at the primes $2$ and $3$. In other words, for each prime $\ell$, $m_{E,\mathbb{Q}}$ encodes the action of $G_\mathbb{Q}$ on the entire $\ell$-adic Tate module, whereas, for $\ell \geq 5$, $A(E)$ only encodes the action of $G_\mathbb{Q}$ on the $\ell$-torsion of $E$. \item It may happen that there is a non-trivial intersection $\mathbb{Q} \neq \mathbb{Q}(E[m_1]) \cap \mathbb{Q}(E[m_2])$ for some $m_1, m_2 \in \mathbb{N}$ with $\gcd(m_1,m_2) = 1$. The constant $m_{E,\mathbb{Q}}$ encodes such ``entanglements,'' whereas $A(E)$ does not. \end{enumerate} The general phenomenon of entanglements has come up in various recent papers; see for instance \cite{braujones}, which studies elliptic curves $E$ over $\mathbb{Q}$ satisfying $\left[ \mathbb{Q}(E[2]) : \mathbb{Q} \right] = 6$ and $\mathbb{Q}(E[2]) \subseteq \mathbb{Q}(E[3])$, and also \cite{bourdon}, in which potential entanglements come up in an analysis of sporadic points on the modular curve $X_1(N)$.
\end{remark} Given an elliptic curve $E$ defined over a number field $K$, computing the positive integer $m_{E,K}$ is a step toward understanding the image $\rho_{E,K}(G_K) \subseteq \GL_2(\hat{\mathbb{Z}})$. Following Serre's open image result, there has been much interest in the nature of $\rho_{E,K}(G_K)$, for instance regarding its mod $\ell$ reductions (see \cite{mazur}, \cite{merel}, \cite{lozanorobledo}, \cite{biluparent}, \cite{biluparentrebolledo}, and \cite{zywina2}) and also more recently its reductions at composite levels (see \cite{dokchitser}, \cite{sutherlandzywina}, \cite{danielsgonzalez} and \cite{morrow}). In addition to this connection, Theorem \ref{maintheorem} also has analytic relevance; for instance, in \cite{awstitchmarsh} it is applied to the study of averages of constants appearing in various elliptic curve conjectures. Serre's open image result \eqref{serresopenimage} implies that, for any $E/K$ without complex multiplication (CM), there exists a bound $C_{E,K} > 0$ so that, for each prime $\ell > C_{E,K}$, we have $\rho_{E,\ell}(G_K) = \GL_2(\mathbb{Z}/\ell\mathbb{Z})$. Serre asked whether the constant $C_{E,K}$ may be chosen uniformly in $E$, i.e. whether \begin{equation} \label{serresquestioneqn} \exists C_K > 0 \; \text{ so that, } \; \forall E/K \text{ without CM and } \forall \text{ prime } \ell > C_K, \; \rho_{E,\ell}(G_K) = \GL_2(\mathbb{Z}/\ell\mathbb{Z}). \end{equation} This question is still open, even in the case $K = \mathbb{Q}$. An affirmative answer to it would imply that \[ [ \GL_2(\hat{\mathbb{Z}}) : \rho_{E,K}(G_K) ] \ll_K 1, \] although the implied constant is ineffective, because of an appeal to Faltings' theorem (see \cite{zywinaindex}, which details this in the case $K = \mathbb{Q}$). Theorem \ref{maintheorem} thus has the following corollary. \begin{cor} \label{conditionalcorollary} Assume that \eqref{serresquestioneqn} holds.
We then have \[ m_{E,K} \ll_K \rad \left( N_{K/\mathbb{Q}}(\Delta_E) \right). \] \end{cor} Theorem \ref{maintheorem} is proved via the following two propositions, the first of which deals generally with open subgroups $G \subseteq \GL_2(\hat{\mathbb{Z}})$. Because of group-theoretical differences present for the prime $2$ (see \cite{dokchitser} and \cite{elkies}, which concern the Galois representation on the $2$-adic (resp. on the $3$-adic) Tate module, illustrating these differences), it will be convenient to introduce the following modified radical: \begin{equation} \label{defofradprime} \rad'(m) := \begin{cases} \rad(m) & \text{ if } 4 \nmid m \\ 2\rad(m) & \text{ if } 4 \mid m. \end{cases} \end{equation} We will also distinguish the following case involving the prime $3$, in whose statement $G_3$ (resp. $G(3)$) denotes the image of $G$ under the projection map $\GL_2(\hat{\mathbb{Z}}) \longrightarrow \GL_2(\mathbb{Z}_3)$ (resp. under $\GL_2(\hat{\mathbb{Z}}) \longrightarrow \GL_2(\mathbb{Z}/3\mathbb{Z})$). The analysis proceeds a bit differently according to whether or not the condition \begin{equation} \label{theconditionat3} 9 \mid m_G, \quad \SL_2(\mathbb{Z}_3) \not\subseteq G_3 \quad \text{ and } \quad G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z}) \end{equation} holds. \begin{proposition} \label{mGprimeboundprop} Let $G \subseteq \GL_2(\hat{\mathbb{Z}})$ be an open subgroup and let $m_G$ be as in Definition \ref{definitionofme}. We then have \[ \frac{m_G}{\rad'(m_G)} \; \text{ divides } \; \left[ \pi^{-1}(G(\rad'(m_G))) : G(m_G) \right], \] where $\rad'(\cdot)$ is defined as in \eqref{defofradprime} and $\pi : \GL_2(\mathbb{Z}/m_G\mathbb{Z}) \longrightarrow \GL_2(\mathbb{Z}/\rad'(m_G)\mathbb{Z})$ denotes the canonical projection map.
Assuming that \eqref{theconditionat3} holds, we have \[ \frac{9m_G}{\rad'(m_G)} \; \text{ divides } \; \left[ \pi^{-1}(G(\rad'(m_G))) : G(m_G) \right]. \] \end{proposition} In contrast with Proposition \ref{mGprimeboundprop}, our second proposition is specific to the situation where $G = \rho_{E,K}(G_K)$, making use of facts about the Weil pairing on an elliptic curve, together with the N\'{e}ron-Ogg-Shafarevich criterion for ramification in division fields. \begin{proposition} \label{radmGdoubleprimeboundprop} Let $K$ be a number field and let $E$ be an elliptic curve defined over $K$ without complex multiplication. Let $G := \rho_{E,K}(G_K)$ be the image of the Galois representation associated to $E$ and let $m_G$ be as in Definition \ref{definitionofme}. Assuming that \eqref{theconditionat3} does not hold, we have \[ \rad'(m_G) \leq 2 \left[ \GL_2(\mathbb{Z}/\rad'(m_G)\mathbb{Z}) : G(\rad'(m_G)) \right] \rad(|\Delta_K N_{K/\mathbb{Q}}(\Delta_E)|). \] If \eqref{theconditionat3} does hold, then \[ \frac{\rad'(m_G)}{3} \leq 2 \left[ \GL_2(\mathbb{Z}/\rad'(m_G)\mathbb{Z}) : G(\rad'(m_G)) \right] \rad(|\Delta_K N_{K/\mathbb{Q}}(\Delta_E)|). \] \end{proposition} Since the index of a subgroup is preserved under taking the full pre-image, we have that \[ \left[ \GL_2(\mathbb{Z}/\rad'(m_G)\mathbb{Z}) : G(\rad'(m_G)) \right] = \left[ \GL_2(\mathbb{Z}/m_G\mathbb{Z}) : \pi^{-1}(G(\rad'(m_G))) \right], \] where $\pi : \GL_2(\mathbb{Z}/m_G\mathbb{Z}) \longrightarrow \GL_2(\mathbb{Z}/\rad'(m_G)\mathbb{Z})$ is the canonical projection map. Thus, Theorem \ref{maintheorem} follows from Propositions \ref{mGprimeboundprop} and \ref{radmGdoubleprimeboundprop}. Many of the ingredients that enter into the proof of Theorem \ref{maintheorem} may be verified for algebraic groups other than $\GL_2$.
For instance, using these same techniques, one should be able to obtain a similar bound for the analogous integer $m_{A,K}$ associated to an abelian variety $A$ defined over a number field $K$ whose Galois representation has open image inside $\GSp_{2g}(\hat{\mathbb{Z}})$. \section{Notation and preliminaries} Throughout the paper, $p$ and $\ell$ will always denote prime numbers. As usual, $\mathbb{N}$ denotes the set of natural numbers (excluding zero) and $\mathbb{Z}$ denotes the set of integers. We will occasionally use the abbreviations \[ \begin{split} \mathbb{N}_{\geq \alpha} &:= \{ n \in \mathbb{N} : n \geq \alpha \}, \\ \mathbb{Z}_{\geq \alpha} &:= \{ n \in \mathbb{Z} : n \geq \alpha \}. \end{split} \] We recall that \[ \hat{\mathbb{Z}} := \lim_{\leftarrow} \mathbb{Z}/m\mathbb{Z} \] is the inverse limit of the rings $\mathbb{Z}/m\mathbb{Z}$ with respect to the canonical projection maps $\mathbb{Z}/nm\mathbb{Z} \rightarrow \mathbb{Z}/m\mathbb{Z}$. Under the isomorphism of the Chinese Remainder Theorem, we have that \begin{equation} \label{CRTisomorphism} \hat{\mathbb{Z}} \simeq \prod_{\ell} \mathbb{Z}_\ell, \end{equation} where $\displaystyle \mathbb{Z}_\ell$ as usual denotes the ring of $\ell$-adic integers. More generally, for any $m \in \mathbb{N}_{\geq 2}$ we define $\mathbb{Z}_m$ and $\mathbb{Z}_{(m)}$ to be the quotients of $\hat{\mathbb{Z}}$ corresponding under \eqref{CRTisomorphism} to the following rings: \[ \mathbb{Z}_{m} \simeq \prod_{\ell \mid m} \mathbb{Z}_\ell, \quad\quad \mathbb{Z}_{(m)} \simeq \prod_{\ell \nmid m} \mathbb{Z}_\ell. \] For any $m \in \mathbb{N}_{\geq 2}$ we have an isomorphism \[ \hat{\mathbb{Z}} \simeq \mathbb{Z}_m \times \mathbb{Z}_{(m)}, \] and projection maps \[ \begin{split} &\hat{\mathbb{Z}} \longrightarrow \mathbb{Z}_m, \\ &\hat{\mathbb{Z}} \longrightarrow \mathbb{Z}_{(m)}.
\end{split} \] We note that these observations may also be applied to points in an algebraic group; in particular we have \[ \GL_2(\hat{\mathbb{Z}}) \simeq \GL_2(\mathbb{Z}_m) \times \GL_2(\mathbb{Z}_{(m)}) \simeq \prod_{\ell} \GL_2(\mathbb{Z}_\ell) \] and we have projection maps \begin{equation} \label{canonicalprojections} \begin{split} \pi_m : &\GL_2(\hat{\mathbb{Z}}) \longrightarrow \GL_2(\mathbb{Z}_m), \\ \pi_{(m)} : &\GL_2(\hat{\mathbb{Z}}) \longrightarrow \GL_2(\mathbb{Z}_{(m)}). \end{split} \end{equation} In most cases we will denote any projection map simply by $\pi$, but on some occasions we will decorate it with subscripts, such as in \eqref{canonicalprojections} or \[ \begin{split} &\pi_{m^\infty, m} : \GL_2(\mathbb{Z}_m) \longrightarrow \GL_2(\mathbb{Z}/m\mathbb{Z}), \\ &\pi_{nm,n} : \GL_2(\mathbb{Z}/nm\mathbb{Z}) \longrightarrow \GL_2(\mathbb{Z}/n\mathbb{Z}). \end{split} \] The ring $\hat{\mathbb{Z}}$ is a topological ring under the profinite topology, and the group $\GL_2(\hat{\mathbb{Z}})$ inherits the structure of a profinite group. We recall that any open subgroup $G \subseteq \GL_2(\hat{\mathbb{Z}})$ is a closed subgroup, but not conversely. In general, given any closed subgroup $G \subseteq \GL_2(\hat{\mathbb{Z}})$, we denote by $G_m \subseteq \GL_2(\mathbb{Z}_m)$ (resp. by $G_{(m)} \subseteq \GL_2(\mathbb{Z}_{(m)})$) its image under $\pi_m$ (resp. under $\pi_{(m)}$) as in \eqref{canonicalprojections}. We denote by $G(m)$ the image of $G$ under the canonical projection \[ \GL_2(\hat{\mathbb{Z}}) \longrightarrow \GL_2(\mathbb{Z}/m\mathbb{Z}). \] For any $m \in \mathbb{N}$ and any $d$ dividing $m$, we denote the prime-to-$d$ part of $m$ by \[ m_{(d)} := \frac{m}{\prod_{\ell \mid d} \ell^{\operatorname{ord}_\ell(m)}}.
\] Finally, we let \[ \begin{split} \id_m &: \GL_2(\mathbb{Z}_m) \longrightarrow \GL_2(\mathbb{Z}_m), \\ \id_{(m)} &: \GL_2(\mathbb{Z}_{(m)}) \longrightarrow \GL_2 (\mathbb{Z}_{(m)}) \end{split} \] denote the identity maps, and we let $1_{m}$ (resp. $1_{(m)}$) denote the identity element of $\GL_2(\mathbb{Z}_m)$ (resp. of $\GL_2(\mathbb{Z}_{(m)})$). We may also at times denote by $1_m$ the identity element of $\GL_2(\mathbb{Z}/m\mathbb{Z})$. For an abelian group $A$ and a positive integer $n$ we as usual denote by $A[n]$ the $n$-torsion subgroup of $A$. For a prime number $\ell$ we define \[ A[\ell^\infty] := \bigcup_{n = 0}^\infty A[\ell^n], \quad\quad A_{\tors} := \bigcup_{n = 1}^\infty A[n], \quad\quad A_{\tors,(\ell)} := \bigcup_{\substack{ n = 1 \\ \ell \nmid n}}^\infty A[n]. \] Note that, if $A[n]$ is finite for each $n \in \mathbb{N}$, we have \[ A_{\tors} \simeq A[\ell^\infty] \times A_{\tors, (\ell)}. \] For a number field $K$, we denote by $\mc{O}_K$ its ring of integers, by $\Delta_K$ its absolute discriminant and by \[ N_{K/\mathbb{Q}} : K \longrightarrow \mathbb{Q} \] the norm map. A critical issue that arises in the proof of Proposition \ref{radmGdoubleprimeboundprop} is that of \emph{entanglement} of division fields, i.e. the possibility that the extension $K \subseteq K(E[m_1]) \cap K(E[m_2])$ is non-trivial, where $m_1$ and $m_2$ are relatively prime positive integers. Putting $F := K(E[m_1]) \cap K(E[m_2])$, we have by Galois theory that \[ \operatorname{Gal}\left(K(E[m_1m_2]) / K\right) \simeq \left\{ (\sigma_1, \sigma_2 ) \in \operatorname{Gal}\left(K(E[m_1])/K\right) \times \operatorname{Gal}\left(K(E[m_2])/K\right) : \sigma_1 |_{F} = \sigma_2 |_{F} \right\}.
\] More generally, if $G_1$, $G_2$ and $H$ are groups and $\psi_1 : G_1 \longrightarrow H$, $\psi_2 : G_2 \longrightarrow H$ are surjective group homomorphisms, we introduce the following notation for the fibered product: \[ G_1 \times_{\psi} G_2 := \{ (g_1, g_2) \in G_1 \times G_2 : \psi_1(g_1) = \psi_2(g_2) \} \] (here $\psi$ is an abbreviation for the ordered pair $(\psi_1,\psi_2)$). Evidently, $K \neq K(E[m_1]) \cap K(E[m_2])$ if and only if the fibered product \[ \operatorname{Gal}\left(K(E[m_1])/K\right) \times_{\res} \operatorname{Gal}\left(K(E[m_2])/K\right) \] is a fibered product over a non-trivial group, where \[ \res_i : \operatorname{Gal}\left(K(E[m_i])/K\right) \longrightarrow \operatorname{Gal}\left(K(E[m_1]) \cap K(E[m_2])/K\right) \] denotes the restriction map. \section{Proof of Proposition \ref{mGprimeboundprop}} In this section we prove Proposition \ref{mGprimeboundprop}, bounding $m_G/\rad'(m_G)$ in terms of the index of $G(m_G)$ in $\pi^{-1}\left(G(\rad'(m_G))\right)$, where $G \subseteq \GL_2(\hat{\mathbb{Z}})$ is any open subgroup. We recall that, in the profinite topology, any open subgroup of $\GL_2(\hat{\mathbb{Z}})$ is necessarily closed; we will establish some lemmas regarding closed subgroups of $\GL_2(\hat{\mathbb{Z}})$ which thus apply to the open subgroup $G$. We begin by giving a more precise description of the local exponents $\beta_\ell \geq 0$ occurring in \begin{equation} \label{defofgbsubell} m_G =: \prod_{\ell} \ell^{\beta_\ell}.
\end{equation} In what follows we use the maps \begin{equation*} \begin{split} &\pi_{\ell^{\beta+1},\ell^{\beta}} \times \id_{(\ell)} : \GL_2(\mathbb{Z}/\ell^{\beta+1}\mathbb{Z}) \times \GL_2(\mathbb{Z}_{(\ell)}) \longrightarrow \GL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}) \times\GL_2(\mathbb{Z}_{(\ell)}) \\ &\pi_{\ell^\infty,\ell^{\beta+1}} \times \id_{(\ell)} : \GL_2(\mathbb{Z}_\ell) \times \GL_2(\mathbb{Z}_{(\ell)}) \longrightarrow \GL_2(\mathbb{Z}/\ell^{\beta+1}\mathbb{Z}) \times \GL_2(\mathbb{Z}_{(\ell)}) \end{split} \end{equation*} defined by the obvious projection in the first factor and the identity map in the second factor. For any prime $\ell$, we define \begin{equation} \label{defofalphasubell} \alpha_\ell := \begin{cases} 2 & \text{ if } \ell = 2 \\ 1 & \text{ if } \ell \geq 3. \end{cases} \end{equation} The next lemma follows from ideas in \cite[Lemma 3, IV-23]{serre2}. In its statement and henceforth, we will interpret $\GL_2(\mathbb{Z}/\ell^0\mathbb{Z}) := \{ 1 \}$ as the trivial group, so that $\ker \pi_{\ell,1} = \GL_2(\mathbb{Z}/\ell\mathbb{Z})$. \begin{lemma} \label{alphasubelllemma} Let $G \subseteq \GL_2(\hat{\mathbb{Z}})$ be a closed subgroup, let $\ell$ be a prime number, and let $\beta \in \mathbb{Z}_{\geq 0}$. Assume that \[ \forall \gamma \in [\beta, \max \{ \beta, \alpha_\ell \} ] \cap \mathbb{Z}, \quad \ker( \pi_{\ell^{\gamma+1}, \ell^{\gamma}} ) \times \{ 1_{(\ell)} \} \subseteq (\pi_{\ell^\infty, \ell^{\gamma + 1}} \times \id_{(\ell)}) (G), \] where $\alpha_\ell$ is as in \eqref{defofalphasubell}. We then have \begin{equation*} \ker( \pi_{\ell^{\infty}, \ell^{\beta}} ) \times \{ 1_{(\ell)} \} \subseteq G.
\end{equation*} \end{lemma} \begin{proof} Since $G \subseteq \GL_2(\hat{\mathbb{Z}})$ is closed, it suffices to prove that, for each $n \in \mathbb{Z}_{\geq \max\{\beta,\alpha_\ell\}}$, one has \begin{equation} \label{kernelinductionhypothesis} \ker (\pi_{\ell^{n+1},\ell^{n}}) \times \{ 1_{(\ell)} \} \subseteq (\pi_{\ell^\infty, \ell^{n+1}} \times \id_{(\ell)})(G). \end{equation} We prove this by induction on $n$ as follows (the base case $n = \max\{\beta,\alpha_\ell\}$ is true by hypothesis). First note that, for $n \geq 1$, we have \begin{equation} \label{shapeofkernel} \ker (\pi_{\ell^{n+1},\ell^{n}}) = \{ I + \ell^{n} \tilde{X} \mod \ell^{n+1} : \tilde{X} \in M_{2\times 2}(\mathbb{Z}_\ell) \}. \end{equation} Thus, \eqref{kernelinductionhypothesis} may be reformulated as saying \begin{equation} \label{desiredconditioninn} \forall X \in M_{2\times 2}(\mathbb{F}_\ell), \, \exists \tilde{X} \in M_{2\times 2}(\mathbb{Z}_\ell) \text{ such that } \tilde{X} \equiv X \mod \ell \text{ and } g := (I + \ell^{n}\tilde{X}, 1_{(\ell)}) \in G. \end{equation} Our goal is to deduce that \eqref{desiredconditioninn} continues to hold when $n$ is replaced by $n+1$. Since $G$ is a group, $g^\ell \in G$, and one sees by considering the binomial expansion \begin{equation} \label{binomialexpansion} (I + \ell^{n}\tilde{X})^\ell = I + \binom{\ell}{1} \ell^n \tilde{X} + \binom{\ell}{2} \ell^{2n} \tilde{X}^2 + \dots + \binom{\ell}{\ell-1} \ell^{(\ell-1)n} \tilde{X}^{\ell-1} + \ell^{\ell n} \tilde{X}^\ell \end{equation} that \[ (\pi_{\ell^\infty,\ell^{n+2}} \times \id_{(\ell)})(g^\ell) = (I + \ell^{n+1}\tilde{X} \pmod{\ell^{n+2}}, 1_{(\ell)}).
\] Since $X$ in \eqref{desiredconditioninn} was arbitrary, it follows by \eqref{shapeofkernel} that \begin{equation*} \ker (\pi_{\ell^{n+2},\ell^{n+1}}) \times \{ 1_{(\ell)} \} \subseteq (\pi_{\ell^\infty, \ell^{n+2}} \times \id_{(\ell)})(G), \end{equation*} completing the induction and proving the lemma. \end{proof} \begin{remark} \label{elladicversionremark} The ``purely $\ell$-adic version'' of Lemma \ref{alphasubelllemma} also follows by the same proof (without the $\GL_2(\mathbb{Z}_{(\ell)})$ factor). Precisely, for any prime $\ell$ and closed subgroup $G \subseteq \GL_2(\mathbb{Z}_\ell)$, and any $\beta \in \mathbb{Z}_{\geq 0}$, one has \begin{equation} \label{GL2elladicversion} \begin{matrix} \forall \gamma \in [\beta, \max \{ \beta,\alpha_\ell \}] \cap \mathbb{Z}, \\ \ker\left( \GL_2(\mathbb{Z}/\ell^{\gamma+1}\mathbb{Z}) \rightarrow \GL_2(\mathbb{Z}/\ell^{\gamma}\mathbb{Z}) \right) \subseteq G(\ell^{\gamma+1}) \end{matrix} \; \Longrightarrow \; \ker\left( \GL_2(\mathbb{Z}_\ell) \rightarrow \GL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}) \right) \subseteq G, \end{equation} where $\alpha_\ell$ is as in \eqref{defofalphasubell}. \end{remark} \begin{remark} The fact that the exponent $\alpha_\ell$ in \eqref{GL2elladicversion} is different for $\ell = 2$ and otherwise uniform for $\ell \geq 3$ stands in contrast with \cite[Lemma 3, IV-23]{serre2}, which breaks into cases according to whether $\ell \leq 3$ or $\ell \geq 5$.
The underlying reason is that we are seeking to conclude that $\ker\left( \GL_2(\mathbb{Z}_\ell) \rightarrow \GL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}) \right) \subseteq G$, rather than the weaker conclusion $\ker\left( \SL_2(\mathbb{Z}_\ell) \rightarrow \SL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}) \right) \subseteq G$, the latter breaking into cases according to the condition $ \left[ \SL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}), \SL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}) \right] = \SL_2(\mathbb{Z}/\ell^{\beta}\mathbb{Z}), $ which happens if and only if $\ell \geq 5$; see Lemma \ref{usefulSL2elladicversion} below. \end{remark} \begin{Definition} \label{defofgbsubellprime} We define the exponents $\beta_\ell' = \beta_\ell'(G)$ by \begin{equation*} \beta_\ell' := \min \{ \beta \in \mathbb{Z}_{\geq 0} : \, \forall \gamma \in [\beta, \max \{ \beta,\alpha_\ell \}] \cap \mathbb{Z}, \, \ker( \pi_{\ell^{\gamma+1}, \ell^{\gamma}} ) \times \{ 1_{(\ell)} \} \subseteq (\pi_{\ell^\infty, \ell^{\gamma + 1}} \times \id_{(\ell)}) (G) \}, \end{equation*} where $\alpha_\ell$ is as in \eqref{defofalphasubell}. \end{Definition} \begin{cor} \label{equalityofgbsubellandgbsubellprimecorollary} We have $\beta_\ell = \beta_\ell'$, where $\beta_\ell$ is as in \eqref{defofgbsubell}. \end{cor} \begin{proof} By Lemma \ref{alphasubelllemma}, for each prime $\ell$ we have \[ \ker( \pi_{\ell^{\infty}, \ell^{\beta_\ell'}} ) \times \{ 1_{(\ell)} \} \subseteq G.
\] Since $\ker \left( \GL_2(\hat{\mathbb{Z}}) \rightarrow \GL_2(\mathbb{Z}/ \prod_\ell \ell^{\beta_\ell'}\mathbb{Z}) \right)$ is equal to the subgroup of $\GL_2(\hat{\mathbb{Z}})$ generated by $\ker( \pi_{\ell^{\infty}, \ell^{\beta_\ell'}} ) \times \{ 1_{(\ell)} \}$ as $\ell$ varies over all primes, we then have \[ \ker \left( \GL_2(\hat{\mathbb{Z}}) \rightarrow \GL_2(\mathbb{Z}/ \prod_\ell \ell^{\beta_\ell'}\mathbb{Z}) \right) \subseteq G. \] Thus, by \eqref{defofgbsubell} and Definition \ref{definitionofme}, we see that $\beta_\ell \leq \beta_\ell'$. Conversely, suppose for the sake of contradiction that $\beta_\ell < \beta_\ell'$. By definition of $\beta_\ell$, we would then have \begin{equation} \label{definitingconditionforgbsubell} \ker \left( \pi_{\ell^\infty,\ell^{\beta_\ell'-1}} \right) \times \{ 1_{(\ell)} \} \subseteq \ker \left( \pi_{\ell^\infty,\ell^{\beta_\ell}} \right) \times \{ 1_{(\ell)} \} \subseteq G. \end{equation} Furthermore, since $\pi_{\ell^\infty,\ell^{\beta_\ell'}}\left( \ker ( \pi_{\ell^\infty,\ell^{\beta_\ell'-1}} ) \right) = \ker( \pi_{\ell^{\beta_\ell'},\ell^{\beta_\ell'-1}} )$, we see that \eqref{definitingconditionforgbsubell} would imply \[ \forall \gamma \in [\beta_\ell'-1, \max \{ \beta_\ell'-1,\alpha_\ell \}] \cap \mathbb{Z}, \, \ker( \pi_{\ell^{\gamma+1}, \ell^{\gamma}} ) \times \{ 1_{(\ell)} \} \subseteq (\pi_{\ell^\infty, \ell^{\gamma + 1}} \times \id_{(\ell)}) (G), \] contradicting Definition \ref{defofgbsubellprime}. Thus, $\beta_\ell' \leq \beta_\ell$.
\end{proof} We will also find it useful to have sufficient conditions to conclude that $\SL_2(\mathbb{Z}_\ell) \subseteq G$, where $G \subseteq \GL_2(\mathbb{Z}_\ell)$ is an arbitrary closed subgroup. The next lemma does so for $\ell$ odd, and gives us sufficient information to allow us to deal separately with the prime $\ell = 2$. As with Lemma \ref{alphasubelllemma}, it can be largely deduced from arguments found in the proof of \cite[Lemma 3, IV-23]{serre2}; we include the details here for the sake of completeness. \begin{lemma} \label{usefulSL2elladicversion} Let $\ell$ be a prime number and let $G \subseteq \GL_2(\mathbb{Z}_\ell)$ be a closed subgroup. If $\ell \geq 5$, then we have \begin{equation*} \SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell) \; \Longrightarrow \; \SL_2(\mathbb{Z}_\ell) \subseteq G. \end{equation*} If $\ell = 3$, we have \[ G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z}) \; \text{ and } \; \SL_2(\mathbb{Z}/9\mathbb{Z}) \subseteq G(9) \; \Longrightarrow \; \SL_2(\mathbb{Z}_3) \subseteq G. \] Finally, if $\ell = 2$, we have \[ G(4) = \GL_2(\mathbb{Z}/4\mathbb{Z}) \; \Longrightarrow \; G = \GL_2(\mathbb{Z}_2) \; \text{ or } \; [\GL_2(\mathbb{Z}/8\mathbb{Z}) : G(8)] = 2. \] \end{lemma} \begin{proof} We first assume $\ell$ is odd. Under the stated hypotheses, we will show that $\SL_2(\mathbb{Z}_\ell) \subseteq G$ by establishing that \begin{equation} \label{SL2insidecommutators} \SL_2(\mathbb{Z}_\ell) = [G,G], \end{equation} which amounts to showing that $\SL_2(\mathbb{Z}_\ell) \subseteq [G,G]$, since the reverse inclusion follows from the fact that every commutator has determinant $1$.
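The $\ell = 3$ case of this argument ultimately rests on the finite identity $[\GL_2(\mathbb{Z}/3\mathbb{Z}), \GL_2(\mathbb{Z}/3\mathbb{Z})] = \SL_2(\mathbb{Z}/3\mathbb{Z})$, obtained later in the proof by a computer computation. Purely as an illustrative aside (the script and all of its names are ours, not part of the paper), that finite check can be brute-forced as follows:

```python
from itertools import product

p = 3  # the prime ell = 3; matrices over Z/3Z are encoded as 4-tuples (a, b, c, d)

def mul(X, Y):
    a, b, c, d = X
    e, f, g, h = Y
    return ((a*e + b*g) % p, (a*f + b*h) % p, (c*e + d*g) % p, (c*f + d*h) % p)

def inv(X):
    a, b, c, d = X
    di = pow((a*d - b*c) % p, -1, p)  # inverse of the determinant mod p
    return ((d*di) % p, (-b*di) % p, (-c*di) % p, (a*di) % p)

GL = [M for M in product(range(p), repeat=4) if (M[0]*M[3] - M[1]*M[2]) % p != 0]
SL = {M for M in GL if (M[0]*M[3] - M[1]*M[2]) % p == 1}

# all commutators g h g^{-1} h^{-1}; these generate the commutator subgroup
comms = {mul(mul(g, h), mul(inv(g), inv(h))) for g in GL for h in GL}

# close the commutator set under multiplication (it already contains the
# identity and is closed under inverses, so products suffice)
subgroup, frontier = set(comms), set(comms)
while frontier:
    frontier = {mul(x, y) for x in frontier for y in subgroup} - subgroup
    subgroup |= frontier

assert subgroup == SL  # [GL_2(Z/3Z), GL_2(Z/3Z)] = SL_2(Z/3Z)
```

The same brute-force pattern (with `p = 2`) confirms that the analogous identity fails at $\ell = 2$, consistent with the separate treatment of that prime in the lemma.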
We begin by showing, by induction on $n$, that \begin{equation} \label{SL2kernelcontainedin} \ker\left( \SL_2(\mathbb{Z}/\ell^{n+1}\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/\ell^{n}\mathbb{Z}) \right) \subseteq G(\ell^{n+1}) \quad \left( \begin{matrix} \ell \geq 5 \text{ and } n \geq 0, \text{ or} \\ \ell = 3 \text{ and } n \geq 1 \end{matrix} \right). \end{equation} The binomial expansion argument \eqref{binomialexpansion} of Lemma \ref{alphasubelllemma} shows this, except for the case $\ell \geq 5$ and $n = 0$. To establish this final case, we first observe that \[ \det \left( I + \ell^n \tilde{X} \right) \equiv 1 + \ell^n \tr \tilde{X} \mod \ell^{n+1} \quad\quad (n \geq 1). \] Thus, for $n \geq 1$, we have \begin{equation} \label{shapeofSL2kernel} \ker \left( \SL_2(\mathbb{Z}/\ell^{n+1}\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \right) = \{ I + \ell^{n} \tilde{X} \mod \ell^{n+1} : \tilde{X} \in M_{2\times 2}^{\tr \equiv 0}(\mathbb{Z}_\ell) \}, \end{equation} where $M_{2\times 2}^{\tr \equiv 0}(\mathbb{Z}_\ell) := \{ \tilde{X} \in M_{2\times 2}(\mathbb{Z}_\ell) : \tr \tilde{X} \equiv 0 \mod \ell \}$. In particular, $\ker \left( \SL_2(\mathbb{Z}/\ell^{n+1}\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \right)$ is a 3-dimensional subspace of the 4-dimensional $\mathbb{Z}/\ell\mathbb{Z}$-vector space $\ker (\pi_{\ell^{n+1},\ell^n})$.
It follows from this, together with the fact that $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$ reduced modulo $\ell$ generate $\SL_2(\mathbb{Z}/\ell\mathbb{Z})$, that the set \[ \mc{K} := \left\{ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix} \right\} \subseteq M_{2\times 2}(\mathbb{Z}) \] satisfies \begin{equation} \label{mcKgenerateskernel} \left\langle I + \ell^n \mc{K} \mod \ell^{n+1} \right\rangle = \ker\left( \SL_2(\mathbb{Z}/\ell^{n+1}\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \right) \quad\quad \left( n \geq 0 \right). \end{equation} Fix $X \in \mc{K}$. Note that $I + \ell^0X \mod \ell \in \SL_2(\mathbb{Z}/\ell\mathbb{Z})$, which by hypothesis is contained in $G(\ell)$. Fix a lift $\tilde{X} \in M_{2\times 2}(\mathbb{Z}_\ell)$ for which $I + \ell^0\tilde{X} \in G$, and note that $\tilde{X}^2 \equiv {\boldsymbol{0}} \mod \ell$, so $\tilde{X}^4 \equiv {\boldsymbol{0}} \mod \ell^2$. Thus, since $\ell \geq 5$, we have \[ (I + \ell^{0}\tilde{X})^\ell = I + \binom{\ell}{1} \tilde{X} + \binom{\ell}{2} \tilde{X}^2 + \dots + \binom{\ell}{\ell-1} \tilde{X}^{\ell-1} + \tilde{X}^\ell \equiv I + \ell \tilde{X} \mod \ell^{2}, \] and more generally, \[ (I + \ell^{n}\tilde{X})^\ell \equiv I + \ell^{n+1} \tilde{X} \mod \ell^{n+2} \quad \left( \begin{matrix} \ell \geq 5 \text{ and } n \geq 0, \text{ or} \\ \ell = 3 \text{ and } n \geq 1 \end{matrix} \right). \] Therefore \eqref{SL2kernelcontainedin} is established by induction on $n$. We now proceed to verify \eqref{SL2insidecommutators} for $\ell$ an odd prime. When $\ell \geq 5$, the group $\PSL_2(\mathbb{Z}/\ell\mathbb{Z})$ is a non-abelian simple group (see e.g. \cite[Ch.
II, Hauptsatz 6.13]{huppert}), and the exact sequence \[ 1 \longrightarrow \{ \pm I \} \longrightarrow \SL_2(\mathbb{Z}/\ell\mathbb{Z}) \longrightarrow \PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \longrightarrow 1 \] does not split (see e.g. \cite[Lemma 2.3]{zywina}). From this and a computer computation for the prime $\ell = 3$, we then find that \begin{equation*} \begin{split} \ell \geq 5 \; &\Longrightarrow \; [\SL_2(\mathbb{Z}/\ell\mathbb{Z}), \SL_2(\mathbb{Z}/\ell\mathbb{Z})] = \SL_2(\mathbb{Z}/\ell\mathbb{Z}), \\ \ell = 3 \; &\Longrightarrow \; [\GL_2(\mathbb{Z}/3\mathbb{Z}),\GL_2(\mathbb{Z}/3\mathbb{Z})] = \SL_2(\mathbb{Z}/3\mathbb{Z}), \end{split} \end{equation*} and so by the hypotheses of our lemma in this case, we have $[G(\ell),G(\ell)] = \SL_2(\mathbb{Z}/\ell\mathbb{Z})$. Note further that the commutator subgroup $[G,G] \subseteq G$ projects modulo $\ell$ onto the commutator subgroup $[G(\ell),G(\ell)]$. We will prove by induction on $n \in \mathbb{N}$ that \begin{equation} \label{commutatorcontainmentwewant} [G(\ell^n), G(\ell^n)] = \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \quad\quad (n \geq 1), \end{equation} having just established the base case. Fix $n \geq 1$ and assume that \eqref{commutatorcontainmentwewant} holds. Pick any $g \in G(\ell^{n+1})$ and $\tilde{X} \in M^{\tr \equiv 0}_{2\times 2}(\mathbb{Z}_\ell)$, so that, by \eqref{SL2kernelcontainedin} and \eqref{shapeofSL2kernel}, we have $I + \ell^n\tilde{X} \mod \ell^{n+1} \in G(\ell^{n+1})$. We then compute the commutator \begin{equation} \label{shapeofcommutator} \begin{split} g (I + \ell^n \tilde{X}) g^{-1} (I + \ell^n \tilde{X})^{-1} &\equiv g (I + \ell^n \tilde{X}) g^{-1} (I - \ell^n \tilde{X}) \\ &\equiv I + \ell^n ( g \tilde{X} g^{-1} - \tilde{X} ) \mod \ell^{n+1}.
\end{split} \end{equation} Consider the following computations in $M_{2\times 2}(\mathbb{Z}/\ell\mathbb{Z})$: \[ \begin{split} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{-1} &= \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}, \\ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{-1} &= \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}, \\ \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^{-1} &= \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}. \end{split} \] It follows that, inside the additive 3-dimensional $\mathbb{Z}/\ell\mathbb{Z}$-vector space \[ M_{2\times 2}^{\tr = 0} (\mathbb{Z}/\ell\mathbb{Z}) := \{ X \in M_{2\times 2}(\mathbb{Z}/\ell\mathbb{Z}) : \tr X = 0 \}, \] we have \begin{equation*} \ell \geq 3 \; \Longrightarrow \; \left\langle \{ g X g^{-1} - X : g \in \SL_2(\mathbb{Z}/\ell\mathbb{Z}), X \in M_{2\times 2}^{\tr = 0} (\mathbb{Z}/\ell\mathbb{Z}) \} \right\rangle = M_{2\times 2}^{\tr = 0} (\mathbb{Z}/\ell\mathbb{Z}). \end{equation*} Thus, varying $g$ and $\tilde{X}$ in \eqref{shapeofcommutator}, we see that \[ \ker \left( \SL_2(\mathbb{Z}/\ell^{n+1}\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \right) \subseteq [G(\ell^{n+1}),G(\ell^{n+1}) ], \] verifying that \eqref{commutatorcontainmentwewant} holds with $n$ replaced by $n+1$, thus completing the induction step. Since $[G,G] \subseteq \SL_2(\mathbb{Z}_\ell)$ is a closed subgroup, we have therefore verified \eqref{SL2insidecommutators}, proving Lemma \ref{usefulSL2elladicversion} in case $\ell$ is odd. Now assume $\ell = 2$ and note that \eqref{mcKgenerateskernel} is still valid.
By the hypothesis that $G(4) = \GL_2(\mathbb{Z}/4\mathbb{Z})$, for each $X \in \mc{K}$ we may find a lift $\tilde{X} \in M_{2\times 2}(\mathbb{Z}_2)$ for which $\tilde{X} \equiv X \mod 2$ and $I + 2\tilde{X} \in G$. Again computing \[ (I + 2\tilde{X})^2 = I + 4\tilde{X} + 4\tilde{X}^2 \equiv I + 4\tilde{X} \mod 8, \] we see that $\ker\left( \SL_2(\mathbb{Z}/8\mathbb{Z}) \rightarrow \SL_2(\mathbb{Z}/4\mathbb{Z}) \right) \subseteq G(8)$, and it follows that $[\GL_2(\mathbb{Z}/8\mathbb{Z}) : G(8) ] \leq 2$. Finally, if $G(8) = \GL_2(\mathbb{Z}/8\mathbb{Z})$, then \eqref{GL2elladicversion} with $\beta = 0$ implies that $G = \GL_2(\mathbb{Z}_2)$. \end{proof} Next we will employ the following group-theoretic lemma. \begin{lemma} \label{GiHiNilemma} Let $G_1$ and $G_2$ be finite groups and let $\pi : G_1 \rightarrow G_2$ be a surjective group homomorphism. Let $H_1 \subseteq G_1$ and $H_2 \subseteq G_2$ be subgroups satisfying $\pi(H_1) = H_2$ and let $N_1 \unlhd G_1$ and $N_2 \unlhd G_2$ be normal subgroups satisfying $\pi(N_1) = N_2$. Assume that \begin{equation} \label{keyassumptionsonN1andkerpi} \gcd(\#N_1, \# \ker \pi) = 1 \quad \text{ and } \quad \left[ N_1, \ker \pi \right] = \{ 1 \}. \end{equation} We then have \[ N_1 \subseteq H_1 \; \Longleftrightarrow \; N_2 \subseteq H_2. \] \end{lemma} \begin{proof} The implication $\Longrightarrow$ is immediate and does not require \eqref{keyassumptionsonN1andkerpi}. For the converse, suppose that $N_2 \subseteq H_2$ and let $n_1 \in N_1$. Since $\pi(N_1) = N_2 \subseteq H_2 = \pi(H_1)$, we see that there exists $h_1 \in H_1$ satisfying $\pi(n_1) = \pi(h_1)$, and we may thus find $k \in \ker \pi$ so that $n_1k \in H_1$. Now by the commutation condition in \eqref{keyassumptionsonN1andkerpi}, together with the fact that $k^{\# \ker \pi} = 1$, we see that \[ (n_1k)^{\# \ker \pi} = n_1^{\# \ker \pi} \in H_1, \] which by the coprimality condition in \eqref{keyassumptionsonN1andkerpi} implies that $n_1 \in H_1$.
Thus, $N_1 \subseteq H_1$, proving the lemma. \end{proof} Applying Lemma \ref{GiHiNilemma} in a special case, we obtain \begin{lemma} \label{primedividesindexlemma} Let $G \subseteq \GL_2(\hat{\mathbb{Z}})$ be an open subgroup, let $m_G$ be as in Definition \ref{definitionofme} and let $\rad'(m_G)$ be defined by \eqref{defofradprime}. For any prime $\ell$ and $d \in \mathbb{N}$, one has \[ \rad'(m_G) \mid d \mid d\ell \mid m_G \; \Longrightarrow \; \ell \text{ divides } \left[ \pi_{\ell d, d}^{-1}(G(d)) : G(\ell d) \right]. \] \end{lemma} \begin{proof} We write $m := m_G$ and \[ d =: \ell^{\delta_\ell} \cdot d_{(\ell)}, \quad m =: \ell^{\beta_\ell} \cdot m_{(\ell)} \] (where $\ell \nmid d_{(\ell)} m_{(\ell)}$), and note that, by hypothesis, $\alpha_\ell \leq \delta_\ell < \beta_\ell$. Further observe that, since $\beta_\ell = \beta_\ell'$, by Definition \ref{definitionofme} and Definition \ref{defofgbsubellprime}, we have \[ \ker \left( \pi_{\ell^{\delta_\ell+1},\ell^{\delta_\ell}} \right) \times \{ 1_{m_{(\ell)}} \} \not\subseteq G(\ell^{\delta_\ell+1} m_{(\ell)}). \] We now apply Lemma \ref{GiHiNilemma} with \[ \begin{array}{lll} G_1 := \GL_2(\mathbb{Z}/\ell^{\delta_\ell+1}m_{(\ell)}\mathbb{Z}), &H_1 := G(\ell^{\delta_\ell+1}m_{(\ell)}), &N_1 := \ker( \pi_{\ell^{\delta_\ell+1},\ell^{\delta_\ell}} ) \times \{ 1_{m_{(\ell)}} \}, \\ G_2 := \GL_2(\mathbb{Z}/\ell^{\delta_\ell+1}d_{(\ell)}\mathbb{Z}), &H_2 := G(\ell^{\delta_\ell+1}d_{(\ell)}), &N_2 := \ker( \pi_{\ell^{\delta_\ell+1},\ell^{\delta_\ell}} ) \times \{ 1_{d_{(\ell)}} \}, \end{array} \] and $\pi : \GL_2(\mathbb{Z}/\ell^{\delta_\ell+1}m_{(\ell)}\mathbb{Z}) \longrightarrow \GL_2(\mathbb{Z}/\ell^{\delta_\ell+1}d_{(\ell)}\mathbb{Z})$ the canonical projection map.
The conclusion is that \[ \ker ( \pi_{\ell^{\delta_\ell+1},\ell^{\delta_\ell}} ) \times \{ 1_{d_{(\ell)}} \} \not\subseteq G(\ell^{\delta_\ell+1} d_{(\ell)}). \] Since $\ker ( \pi_{\ell^{\delta_\ell+1},\ell^{\delta_\ell}} ) \times \{ 1_{d_{(\ell)}} \} \simeq \ker ( \pi_{\ell d, d} )$ is an $\ell$-group, this proves the lemma. \end{proof} Applying Lemma \ref{primedividesindexlemma} prime by prime, for each prime $\ell$ dividing $m_G/\rad'(m_G)$, we obtain \[ \frac{m_G}{\rad'(m_G)} \; \text{ divides } \; \left[ \pi^{-1}\left( G(\rad'(m_G)) \right) : G( m_G ) \right], \] proving Proposition \ref{mGprimeboundprop} in the case that \eqref{theconditionat3} does not hold. In case \eqref{theconditionat3} does hold, we have $G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z})$ and, by Lemma \ref{usefulSL2elladicversion}, we must also have $\SL_2(\mathbb{Z}/9\mathbb{Z}) \not \subseteq G(9)$. A computer search reveals that, up to conjugation in $\GL_2(\mathbb{Z}/9\mathbb{Z})$, there are two subgroups $G_1, G_2 \subseteq \GL_2(\mathbb{Z}/9\mathbb{Z})$ meeting these two criteria\footnote{The (genus zero) modular curve associated with $G_2$ has been considered by N. Elkies \cite{elkies}, who exhibited an explicit map from it to the $j$-line.}, and $G_1 \subseteq G_2$. Furthermore, $[\GL_2(\mathbb{Z}/9\mathbb{Z}) : G_2] = 27$. From this it follows that $27$ divides $[\GL_2(\mathbb{Z}/9\mathbb{Z}) : G(9)]$, and so \[ 9 \cdot 3 \; \text{ divides } \; \left[ \pi^{-1}\left(G(\rad'(m_G))\right) : G( 3\rad'(m_G) ) \right]. \] Now starting here and applying Lemma \ref{primedividesindexlemma}, prime by prime, we conclude the proof of Proposition \ref{mGprimeboundprop} in the case that \eqref{theconditionat3} holds. \section{Proof of Proposition \ref{radmGdoubleprimeboundprop}} We now prove Proposition \ref{radmGdoubleprimeboundprop}.
The proof will rely, in part, on the following corollary to the N\'{e}ron-Ogg-Shafarevich criterion (see for instance \cite{ogg} or \cite[Ch. VII, Theorem 7.1]{silverman}). \begin{Theorem} \label{neronoggshafarevichthm} Let $K$ be a number field, let $E$ be an elliptic curve over $K$ and let $\mc{L} \subseteq \mc{O}_K$ be a prime ideal of $K$, lying over the rational prime $\ell$ of $\mathbb{Z}$. The following are equivalent: \begin{enumerate} \item[(a)] $E$ has good reduction at $\mc{L}$. \item[(b)] For each positive integer $m$ that is not divisible by $\ell$, the prime $\mc{L}$ is unramified in $K(E[m])$. \item[(c)] The prime $\mc{L}$ is unramified in $K(E_{\tors,(\ell)})$. \end{enumerate} \end{Theorem} We presently reduce the proof of Proposition \ref{radmGdoubleprimeboundprop} to the following four lemmas. The first lemma follows immediately from the classification of subgroups of $\GL_2(\mathbb{Z}/\ell\mathbb{Z})$. \begin{lemma} \label{SL2notcontainedinlemma} Let $\ell$ be a prime number and let $G(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ be any subgroup. We have \[ \SL_2(\mathbb{Z}/\ell\mathbb{Z}) \not \subseteq G(\ell) \; \Longrightarrow \; \ell \leq [ \GL_2(\mathbb{Z}/\ell\mathbb{Z}) : G(\ell) ]. \] \end{lemma} The second lemma is a consequence of the Weil pairing on an elliptic curve. \begin{lemma} \label{elldividesdiscKlemma} Let $E$ be an elliptic curve defined over a number field $K$, let $G := \rho_{E,K}(G_K) \subseteq \GL_2(\hat{\mathbb{Z}})$, and let $\ell$ be a prime number. For any positive integer $n$, we have \[ \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \subseteq G(\ell^n) \neq \GL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \; \Longrightarrow \; \ell \mid \Delta_K. \] Consequently, \[ \SL_2(\mathbb{Z}_{\ell}) \subseteq G_\ell \neq \GL_2(\mathbb{Z}_{\ell}) \; \Longrightarrow \; \ell \mid \Delta_K.
\] \end{lemma} Our third lemma utilizes the N\'{e}ron-Ogg-Shafarevich criterion in the form of Theorem \ref{neronoggshafarevichthm}. \begin{lemma} \label{neronoggshafarevichlemmaellgeq5} Let $E$ be an elliptic curve defined over a number field $K$, let $G := \rho_{E,K}(G_K) \subseteq \GL_2(\hat{\mathbb{Z}})$, let $m_G$ be as in Definition \ref{definitionofme} and let $\ell$ be an odd prime number dividing $m_G$. We then have \[ G_{\ell} = \GL_2(\mathbb{Z}_{\ell}) \; \Longrightarrow \; \ell \mid \Delta_K N_{K/\mathbb{Q}}(\Delta_E). \] \end{lemma} For the prime $\ell = 2$ we must make a finer analysis, in the form of the next (and final) lemma. Let us make the abbreviation \begin{equation*} \label{defofrprime} r' := \rad'(m_G). \end{equation*} \begin{lemma} \label{ellequals2lemma} Let $E$ be an elliptic curve defined over a number field $K$, let $G := \rho_{E,K}(G_K) \subseteq \GL_2(\hat{\mathbb{Z}})$, let $m_G$ be as in Definition \ref{definitionofme} and assume that $4$ divides $m_G$. We then have \[ \GL_2(\mathbb{Z}/4\mathbb{Z}) \times \{ 1_{r'_{(2)}} \} \not \subseteq G(r') \; \Longrightarrow \; 4 \leq 2 \left[ \pi^{-1} \left( G\left(r'_{(2)} \right) \right) : G\left(r' \right) \right] \] and \[ \GL_2(\mathbb{Z}/4\mathbb{Z}) \times \{ 1_{r'_{(2)}} \} \subseteq G(r') \; \Longrightarrow \; 2 \mid \Delta_K N_{K/\mathbb{Q}}(\Delta_E). \] \end{lemma} Let us now deduce Proposition \ref{radmGdoubleprimeboundprop} from Lemmas \ref{SL2notcontainedinlemma} -- \ref{ellequals2lemma}, postponing the proofs of those lemmas until later.
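Two of the computational inputs appearing above lend themselves to direct verification: the commutator identity $[\GL_2(\mathbb{Z}/3\mathbb{Z}), \GL_2(\mathbb{Z}/3\mathbb{Z})] = \SL_2(\mathbb{Z}/3\mathbb{Z})$ used in the proof of Lemma \ref{usefulSL2elladicversion}, and the conclusion of Lemma \ref{SL2notcontainedinlemma} for $\ell = 3$. The following brute-force Python sketch (illustrative only; a serious computation would use a computer algebra system) checks both, by enumerating the full subgroup lattice of the 48-element group $\GL_2(\mathbb{Z}/3\mathbb{Z})$:

```python
from itertools import product

p = 3
els = [g for g in product(range(p), repeat=4) if (g[0]*g[3] - g[1]*g[2]) % p != 0]
n = len(els)                                   # |GL_2(Z/3Z)| = 48
idx = {g: i for i, g in enumerate(els)}

def matmul(g, h):
    a, b, c, d = g
    e, f, u, v = h
    return ((a*e + b*u) % p, (a*f + b*v) % p, (c*e + d*u) % p, (c*f + d*v) % p)

mul = [[idx[matmul(g, h)] for h in els] for g in els]   # multiplication table
e = idx[(1, 0, 0, 1)]
inv = [next(j for j in range(n) if mul[i][j] == e) for i in range(n)]
SL = frozenset(i for i, g in enumerate(els) if (g[0]*g[3] - g[1]*g[2]) % p == 1)

def gen(S):
    """Subgroup generated by the set S of element indices (worklist closure)."""
    H, frontier = {e} | set(S), list(S)
    while frontier:
        x = frontier.pop()
        for y in list(H):
            for z in (mul[x][y], mul[y][x]):
                if z not in H:
                    H.add(z)
                    frontier.append(z)
    return frozenset(H)

# (i) the commutator subgroup of GL_2(Z/3Z) is SL_2(Z/3Z)
comms = {mul[mul[g][h]][mul[inv[g]][inv[h]]] for g in range(n) for h in range(n)}
assert gen(comms) == SL

# (ii) every subgroup not containing SL_2(Z/3Z) has index >= 3
subgroups, frontier = {gen([])}, [gen([])]
while frontier:
    H = frontier.pop()
    if len(H) == n:
        continue
    for g in range(n):
        if g not in H:
            K = gen(H | {g})
            if K not in subgroups:
                subgroups.add(K)
                frontier.append(K)
assert all(n // len(H) >= 3 for H in subgroups if not SL <= H)
```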
First, combining Lemma \ref{usefulSL2elladicversion} with Lemmas \ref{SL2notcontainedinlemma}, \ref{elldividesdiscKlemma} and \ref{neronoggshafarevichlemmaellgeq5}, one concludes the following implications, for any prime $\ell \geq 5$ that divides $m_G$: \begin{equation*} \begin{split} &\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \not \subseteq G(\ell) \; \Longrightarrow \; \ell \leq [ \GL_2(\mathbb{Z}/\ell\mathbb{Z}) : G(\ell) ], \\ &\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell) \; \Longrightarrow \; \ell \mid \Delta_K N_{K/\mathbb{Q}}(\Delta_E). \end{split} \end{equation*} This implies that \begin{equation} \label{boundforprimesabove5} \begin{split} r'_{(6)} &\leq \prod_{\substack{\ell \geq 5, \, \ell \mid r' \\ \SL_2(\mathbb{Z}/\ell\mathbb{Z}) \not \subseteq G(\ell)}} [ \GL_2(\mathbb{Z}/\ell\mathbb{Z}) : G(\ell) ] \prod_{\substack{\ell \geq 5 \\ \ell \mid \Delta_K N_{K/\mathbb{Q}}(\Delta_E) \\ \SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell)}} \ell \\ &\leq [ \GL_2(\mathbb{Z}/r'_{(6)}\mathbb{Z}) : G(r'_{(6)}) ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right)_{(6)}. \end{split} \end{equation} If the prime $\ell = 3$ divides $m_G$ then either condition \eqref{theconditionat3} holds or it does not hold. Let us first assume that \eqref{theconditionat3} does not hold, i.e. we assume that it is \emph{not} the case that $9$ divides $m_G$, $G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z})$ and $\SL_2(\mathbb{Z}_3) \not \subseteq G_3$.
We then use Lemmas \ref{SL2notcontainedinlemma}, \ref{elldividesdiscKlemma} and \ref{neronoggshafarevichlemmaellgeq5}, together with Lemma \ref{usefulSL2elladicversion}, to deduce the following implications: \begin{equation*} \begin{split} &\SL_2(\mathbb{Z}/3\mathbb{Z}) \not \subseteq G(3) \; \Longrightarrow \; 3 \leq [ \GL_2(\mathbb{Z}/3\mathbb{Z}) : G(3) ], \\ &\SL_2(\mathbb{Z}/3\mathbb{Z}) \subseteq G(3) \neq \GL_2(\mathbb{Z}/3\mathbb{Z}) \; \Longrightarrow \; 3 \mid \Delta_K, \\ &G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z}) \; \text{ and } \; \SL_2(\mathbb{Z}_3) \subseteq G_3 \neq \GL_2(\mathbb{Z}_3) \; \Longrightarrow \; 3 \mid \Delta_K, \\ &G(3) = \GL_2(\mathbb{Z}/3\mathbb{Z}) \; \text{ and } \; G_3 = \GL_2(\mathbb{Z}_3) \; \Longrightarrow \; 3 \mid N_{K/\mathbb{Q}}(\Delta_E). \end{split} \end{equation*} Inserting this information into \eqref{boundforprimesabove5}, we find that \begin{equation} \label{boudnforoddprimes} r'_{(2)} \leq [ \GL_2(\mathbb{Z}/r'_{(2)}\mathbb{Z}) : G(r'_{(2)}) ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right)_{(2)}. \end{equation} On the other hand, in case \eqref{theconditionat3} \emph{does} hold, then we obviously have \begin{equation} \label{incasetheconditionat3doeshold} \begin{split} \frac{r'_{(2)}}{3} = r'_{(6)} &\leq [ \GL_2(\mathbb{Z}/r'_{(6)}\mathbb{Z}) : G(r'_{(6)}) ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right)_{(6)} \\ &\leq [ \GL_2(\mathbb{Z}/r'_{(2)}\mathbb{Z}) : G(r'_{(2)}) ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right)_{(2)}. \end{split} \end{equation} If $\ell = 2$ divides $m_G$, then either $4 \mid m_G$ or not. If $4 \nmid m_G$, then multiplying both sides of \eqref{boudnforoddprimes} (resp. of \eqref{incasetheconditionat3doeshold}) by 2, we obtain the bound of Proposition \ref{radmGdoubleprimeboundprop}. Now assume that $4 \mid m_G$.
In this case, when \eqref{theconditionat3} does not hold, we insert the result of Lemma \ref{ellequals2lemma} into \eqref{boudnforoddprimes}, concluding that \begin{equation*} r' = 4r'_{(2)} \leq 2 [ \GL_2(\mathbb{Z}/r'\mathbb{Z}) : G(r') ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right). \end{equation*} Likewise, in case condition \eqref{theconditionat3} does hold, we insert these results into \eqref{incasetheconditionat3doeshold} and obtain \begin{equation*} \frac{r'}{3} = \frac{4r'_{(2)}}{3} \leq 2 [ \GL_2(\mathbb{Z}/r'\mathbb{Z}) : G(r') ] \rad \left( | \Delta_K N_{K/\mathbb{Q}}(\Delta_E) | \right). \end{equation*} Thus we see that Lemmas \ref{SL2notcontainedinlemma} -- \ref{ellequals2lemma} indeed imply Proposition \ref{radmGdoubleprimeboundprop}. We now prove each of these lemmas. First we state an auxiliary lemma that is used throughout and may be found in \cite[Lemma (5.2.1)]{ribet}. \begin{lemma} \label{goursatlemma} (Goursat's Lemma) \noindent Let $G_1$, $G_2$ be groups and for $i \in \{1, 2 \}$ denote by $\pr_i : G_1 \times G_2 \longrightarrow G_i$ the projection map onto the $i$-th factor. Let $G \subseteq G_1 \times G_2$ be a subgroup and assume that $$ \pr_1(G) = G_1, \; \pr_2(G) = G_2. $$ Then there exists a group $\Gamma$ together with a pair of surjective homomorphisms \[ \begin{split} \psi_1 : G_1 &\longrightarrow \Gamma \\ \psi_2 : G_2 &\longrightarrow \Gamma \end{split} \] so that \[ G = G_1 \times_\psi G_2 := \{ (g_1,g_2) \in G_1 \times G_2 : \psi_1(g_1) = \psi_2(g_2) \}. \] \end{lemma} \subsection{Proof of Lemma \ref{SL2notcontainedinlemma}} To prove Lemma \ref{SL2notcontainedinlemma}, we will use the following classification of certain proper subgroups of $\GL_2$. \begin{Definition} Let $\ell$ be any prime number.
\begin{itemize} \item[(i)] A subgroup $G(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ is called a {\bf{Borel subgroup}} if it is conjugate in $\GL_2(\mathbb{Z}/\ell\mathbb{Z})$ to the subgroup \begin{equation} \label{defofBofell} \mc{B}(\ell) := \left\{ \begin{pmatrix} a & b \\ 0 & d \end{pmatrix} : \; b \in \mathbb{Z}/\ell\mathbb{Z}, \; a, d \in (\mathbb{Z}/\ell\mathbb{Z})^\times \right\}. \end{equation} \\ \item[(ii)] A subgroup $G(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ is called a {\bf{normalizer of a split Cartan subgroup}} if it is conjugate in $\GL_2(\mathbb{Z}/\ell\mathbb{Z})$ to the subgroup \begin{equation} \label{defofNsubsofell} \mc{N}_{\s}(\ell) := \left\{ \begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix} : \; a, d \in (\mathbb{Z}/\ell\mathbb{Z})^\times \right\} \cup \left\{ \begin{pmatrix} 0 & b \\ c & 0 \end{pmatrix} : \; b, c \in (\mathbb{Z}/\ell\mathbb{Z})^\times \right\}. \end{equation} If $\ell$ is odd, then $G(\ell)$ is called a {\bf{normalizer of a non-split Cartan subgroup}} if it is conjugate in $\GL_2(\mathbb{Z}/\ell\mathbb{Z})$ to the subgroup \begin{equation} \label{defofNsubnsofell} \mc{N}_{\ns}(\ell) := \left\{ \begin{pmatrix} x & \varepsilon y \\ y & x \end{pmatrix} : \; \begin{matrix} x, y \in \mathbb{Z}/\ell\mathbb{Z}, \\ x^2-\varepsilon y^2 \neq 0 \end{matrix} \right\} \cup \left\{ \begin{pmatrix} x & -\varepsilon y \\ y & -x \end{pmatrix} : \; \begin{matrix} x, y \in \mathbb{Z}/\ell\mathbb{Z}, \\ x^2-\varepsilon y^2 \neq 0 \end{matrix} \right\}, \end{equation} where $\varepsilon$ is any fixed non-square in $(\mathbb{Z}/\ell\mathbb{Z})^\times$. If $\ell = 2$, then $G(2)$ is called a normalizer of a non-split Cartan subgroup if $G(2) = \GL_2(\mathbb{Z}/2\mathbb{Z})$.
\\ \item[(iii)] A subgroup $G(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ is called an {\bf{exceptional group}} if its image in $\PGL_2(\mathbb{Z}/\ell\mathbb{Z})$ is isomorphic to one of the groups $A_4$, $S_4$ or $A_5$ (the alternating and symmetric groups on four and five letters). \end{itemize} \end{Definition} The following lemma may be deduced from Propositions 15, 16 and Section 2.6 of \cite{serre}. \begin{lemma} \label{subgroupsofGL2lemma} Let $G(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ be a subgroup. Then one of the following must hold: \begin{enumerate} \item $G(\ell)$ is contained in a Borel subgroup. \item $G(\ell)$ is contained in the normalizer of a split Cartan subgroup. \item $G(\ell)$ is contained in the normalizer of a non-split Cartan subgroup. \item $G(\ell)$ is an exceptional group. \item $\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell)$. \end{enumerate} \end{lemma} We include the following table of indices $[\GL_2(\mathbb{Z}/\ell\mathbb{Z}) : G(\ell)]$, for each of the proper subgroups $G(\ell)$ given in Lemma \ref{subgroupsofGL2lemma}. In addition to the definitions \eqref{defofBofell}, \eqref{defofNsubsofell}, and \eqref{defofNsubnsofell}, we make the following abbreviations. For a prime $\ell$ for which $A_4 \subseteq \PGL_2(\mathbb{Z}/\ell\mathbb{Z})$, we define the exceptional subgroup $\mc{E}_{A_4}(\ell) \subseteq \GL_2(\mathbb{Z}/\ell\mathbb{Z})$ by \[ \mc{E}_{A_4}(\ell) := \{ g \in \GL_2(\mathbb{Z}/\ell\mathbb{Z}) : \varpi(g) \in A_4 \}, \] where $\varpi : \GL_2(\mathbb{Z}/\ell\mathbb{Z}) \longrightarrow \PGL_2(\mathbb{Z}/\ell\mathbb{Z})$ denotes the usual projection. The exceptional subgroups $\mc{E}_{S_4}(\ell)$ and $\mc{E}_{A_5}(\ell)$ are defined similarly.
\[ \begin{array}{|c||c|c|c|c|c|c|} \hline G(\ell) & \mc{B}(\ell) & \mc{N}_{\s}(\ell) & \mc{N}_{\ns}(\ell) & \mc{E}_{A_4}(\ell) & \mc{E}_{S_4}(\ell) & \mc{E}_{A_5}(\ell) \\ \hline \hline [\GL_2(\mathbb{Z}/\ell\mathbb{Z}) : G(\ell)] & \ell + 1 & \ell(\ell+1)/2 & \ell(\ell-1)/2 & \ell(\ell^2-1)/12 & \ell(\ell^2-1)/24 & \ell(\ell^2-1)/60 \\ \hline \end{array} \] We note that $\mc{N}_{\ns}(2) = \GL_2(\mathbb{Z}/2\mathbb{Z})$, and also that each exceptional group only occurs for certain primes $\ell$. In particular, if the expression given in the table is not a whole number, then the associated exceptional group does not occur as a subgroup of $\GL_2(\mathbb{Z}/\ell\mathbb{Z})$ for that prime $\ell$. The conclusion of Lemma \ref{SL2notcontainedinlemma} follows immediately from this table. \subsection{Proof of Lemma \ref{elldividesdiscKlemma}} We will make use of the following commutative diagram, where \[ \res : \operatorname{Gal}(K(E_{\tors})/K) \rightarrow \operatorname{Gal}(K(\mu_\infty)/K), \quad\quad \cyc : \operatorname{Gal}(K(\mu_\infty)/K) \rightarrow \hat{\mathbb{Z}}^\times \] denote respectively the restriction map and the cyclotomic character (the containment $K(\mu_\infty) \subseteq K(E_{\tors})$ follows from the Weil pairing \cite{weil}, see also \cite[Ch. III, \S 8]{silverman}). \begin{equation} \label{commutingdiagram} \begin{CD} \operatorname{Gal}(K(E[\ell^n])/K) @>\rho_{E,K}>> \GL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \\ @V{\res}VV @V{\det}VV \\ \operatorname{Gal}(K(\mu_{\ell^n})/K) @>\cyc>> (\mathbb{Z}/\ell^n\mathbb{Z})^\times \end{CD} \end{equation} By considering the commutative diagram \eqref{commutingdiagram} and Galois theory, we see that \[ \SL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \subseteq G(\ell^n) \neq \GL_2(\mathbb{Z}/\ell^n\mathbb{Z}) \; \Longrightarrow \; \det(G(\ell^n)) \neq (\mathbb{Z}/\ell^n\mathbb{Z})^\times \; \Longrightarrow \; \mathbb{Q} \neq \mathbb{Q}(\mu_{\ell^n}) \cap K.
\] Since $\mathbb{Q}(\mu_{\ell^n})$ is totally ramified at $\ell$, it follows that $\ell$ is then ramified in $\mathbb{Q}(\mu_{\ell^n}) \cap K$, so $\ell$ is ramified in $K$, and thus $\ell$ divides $\Delta_K$. \subsection{Proof of Lemma \ref{neronoggshafarevichlemmaellgeq5}} In order to prove Lemma \ref{neronoggshafarevichlemmaellgeq5}, we will make use of the following definition and lemma, which allow us to understand in more detail the nature of the fibered products that may be present in $G$. \begin{Definition} Let $G$ be a profinite group and $\Sigma$ a finite simple group. We say that \emph{$\Sigma$ occurs in $G$} if and only if there are closed subgroups $G_1$ and $N_1$ of $G$ with $N_1 \subseteq G_1 \subseteq G$, $N_1$ normal in $G_1$ and $G_1 / N_1 \simeq \Sigma$. We further define \[ \begin{split} \occ(G) &:= \{ \text{finite simple non-abelian groups } \Sigma : \; \Sigma \text{ occurs in } G \}, \\ \occ_{\JH}(G) &:= \{ \text{finite simple non-abelian groups } \Sigma : \; \Sigma \text{ is a Jordan-H\"{o}lder factor of $G$} \}. \end{split} \] \end{Definition} Note that any simple Jordan-H\"{o}lder factor of $G$ occurs in $G$ (but generally not vice versa), i.e. we have $ \occ_{\JH}(G) \subseteq \occ(G). $ Also note that, if \[ 1 \longrightarrow G' \longrightarrow G \longrightarrow G'' \longrightarrow 1 \] is an exact sequence of profinite groups, then \begin{equation} \label{JHobservation} \begin{split} \occ(G) &= \occ(G') \cup \occ(G''), \\ \occ_{\JH}(G) &= \occ_{\JH}(G') \cup \occ_{\JH}(G'').
\end{split} \end{equation} Finally, as observed in \cite[IV-25]{serre2}, one has that \begin{equation*} \occ(\GL_2(\mathbb{Z}_{\ell})) = \begin{cases} \emptyset & \text{ if } \ell \in \{ 2, 3 \} \\ \{ \PSL_2(\mathbb{Z}/5\mathbb{Z}) \} = \{ A_5 \} & \text{ if } \ell = 5 \\ \{ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \} & \text{ if } \ell > 5, \ell \equiv \pm 2 \pmod{5} \\ \{ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}), A_5 \} & \text{ if } \ell > 5, \ell \equiv \pm 1 \pmod{5}. \end{cases} \end{equation*} Thus, by \eqref{JHobservation} we have \begin{equation} \label{occincomplementofell} \occ(\GL_2(\mathbb{Z}_{(\ell)})) = \{ A_5 \} \cup \{ \PSL_2(\mathbb{Z}/p\mathbb{Z}) \}_{p \neq \ell}. \end{equation} \begin{lemma} \label{PSL2Zmodplemma} Let $\ell \geq 5$ be a prime and let $G \subseteq \GL_2(\mathbb{Z}_{\ell})$ be a closed subgroup satisfying $\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell)$. Suppose further that $\psi : G \longrightarrow H$ is a surjective group homomorphism onto a finite group $H$. Then either \begin{enumerate} \item $\PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \in \occ_{\JH}(H)$, or \item $H$ is abelian and $\SL_2(\mathbb{Z}_{\ell}) \subseteq \ker \psi$. \end{enumerate} \end{lemma} \begin{proof} As observed earlier, since $\ell \geq 5$, the group $\PSL_2(\mathbb{Z}/\ell\mathbb{Z})$ is a simple non-abelian group, and we obviously have $\{ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \} \subseteq \occ_{\JH}(\SL_2(\mathbb{Z}/\ell\mathbb{Z}))$. Furthermore, by the hypothesis $\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq G(\ell)$ together with \eqref{JHobservation}, we see that $\{ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \} \subseteq \occ_{\JH}(G(\ell))$. Thus, again by \eqref{JHobservation}, we have \begin{equation} \label{PSL2occursinG} \{ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \} \subseteq \occ_{\JH}(G).
\end{equation} Furthermore, we have that \begin{equation} \label{eitherorpsl2} \frac{\pm (\ker \psi)(\ell) \cap \SL_2(\mathbb{Z}/\ell\mathbb{Z})}{\{\pm I\}} = \begin{cases} \{ 1 \} \; \text{ or } \\ \PSL_2(\mathbb{Z}/\ell\mathbb{Z}). \end{cases} \end{equation} If the left-hand side of \eqref{eitherorpsl2} is trivial, then $\ker \psi$ is prosolvable (so that $\occ_{\JH}(\ker\psi) = \emptyset$), and considering the exact sequence \[ 1 \longrightarrow \ker \psi \longrightarrow G \longrightarrow H \longrightarrow 1, \] we see by \eqref{JHobservation} and \eqref{PSL2occursinG} that $\PSL_2(\mathbb{Z}/\ell\mathbb{Z}) \in \occ_{\JH}(H)$. If, on the other hand, we have $\PSL_2(\mathbb{Z}/\ell\mathbb{Z})$ in \eqref{eitherorpsl2}, then $\SL_2(\mathbb{Z}/\ell\mathbb{Z}) \subseteq (\ker \psi)(\ell)$, which by Lemma \ref{usefulSL2elladicversion} applied to $G = \ker \psi$ implies that $\SL_2(\mathbb{Z}_\ell) \subseteq \ker \psi$. Thus $H$ is abelian and $\psi$ factors through the determinant map, as asserted. \end{proof} We now proceed with the proof of Lemma \ref{neronoggshafarevichlemmaellgeq5}. By Lemma \ref{goursatlemma}, the hypotheses that $G_\ell = \GL_2(\mathbb{Z}_{\ell})$ and that $\ell$ divides $m_G$ imply that \begin{equation} \label{fiberedproductatell2} G \simeq \GL_2(\mathbb{Z}_{\ell}) \times_{\psi} G_{(\ell)}, \end{equation} where $\psi_{\ell} : \GL_2(\mathbb{Z}_{\ell}) \twoheadrightarrow H$ and $\psi_{(\ell)} : G_{(\ell)} \twoheadrightarrow H$ are surjective homomorphisms onto a common \emph{non-trivial} group $H$. Under the Galois correspondence, we have $\GL_2(\mathbb{Z}_{\ell}) = \operatorname{Gal}(K(E[\ell^\infty])/K)$, $G_{(\ell)} = \operatorname{Gal}(K(E_{\tors,(\ell)})/K)$ and $H = \operatorname{Gal}(F/K)$, where $F := K(E[\ell^\infty]) \cap K(E_{\tors,(\ell)}) \neq K$. Thus, the corresponding field diagram is as follows.
\begin{equation} \label{fancyfielddiagramwithF} \begin{tikzpicture} \matrix(a)[matrix of math nodes, row sep=1.5em, column sep=1.5em, text height=1.5ex, text depth=0.25ex] {K(E[\ell^\infty]) & & K(E_{\tors,(\ell)}) \\ & F \\ & K\\}; \path[-](a-1-1) edge (a-2-2); \path[-](a-1-3) edge (a-2-2); \path[-](a-2-2) edge (a-3-2); \end{tikzpicture} \end{equation} We first claim that \begin{equation} \label{FcapKmuinftyneqK} F \cap K(\mu_{\ell^\infty}) \neq K. \end{equation} We separate the verification of \eqref{FcapKmuinftyneqK} into cases. \noindent \emph{Case: $\ell \geq 5$.} By Lemma \ref{PSL2Zmodplemma}, we see that either $\PSL_2(\mathbb{Z}/\ell\mathbb{Z})$ occurs in $H$ (and thus occurs in $G_{(\ell)}$), or $H$ is abelian and $F \subseteq K(\mu_{\ell^\infty})$. If $\ell \geq 7$ then, by \eqref{occincomplementofell}, we see that $H$ must be abelian and $F \subseteq K(\mu_{\ell^\infty})$, verifying \eqref{FcapKmuinftyneqK}. If $\ell = 5$, we consider the further quotient induced by reduction modulo 5: \[ H \simeq \frac{\GL_2(\mathbb{Z}_5)}{\ker \psi_5} \longrightarrow \frac{\GL_2(\mathbb{Z}/5\mathbb{Z})}{( \ker \psi_5 )(5)} =: H(5). \] Since the kernel of this quotient is pro-solvable, we see that if $\PSL_2(\mathbb{Z}/5\mathbb{Z}) \simeq A_5$ occurs in $H$, then it must occur in $H(5)$, and a computer calculation shows that we then must have \[ (\ker \psi_5)(5) \subseteq \left\{ \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} : \lambda \in (\mathbb{Z}/5\mathbb{Z})^\times \right\}, \] and thus \[ \langle \SL_2(\mathbb{Z}_5) , \ker \psi_5 \rangle \subseteq \left\{ g \in \GL_2(\mathbb{Z}_5) : \left( \frac{\det(g) \bmod 5}{5} \right) = 1 \right\}.
\] By the Galois correspondence, we then have \[ F \cap K(\mu_{5^\infty}) = K(E[5^\infty])^{\langle \SL_2(\mathbb{Z}_5), \ker \psi_5 \rangle} \supseteq K(\sqrt{5}) \neq K, \] where we are using the fact that $G(5) = \GL_2(\mathbb{Z}/5\mathbb{Z})$, which precludes the possibility that $K(\sqrt{5}) = K$. Thus in any case, \eqref{FcapKmuinftyneqK} also holds for $\ell = 5$. \noindent \emph{Case: $\ell = 3$.} As in the previous case, we have that \eqref{fiberedproductatell2} holds with $\ell = 3$. By Galois theory, we have that \[ F := K(E[3^{\infty}])^{\ker \psi_3} \supseteq K(E[3^{\infty}])^{\langle \SL_2(\mathbb{Z}_3), \ker \psi_3 \rangle} = K(\mu_{3^\infty}) \cap F. \] As in the previous case, since $\ker \psi_3 \neq \GL_2(\mathbb{Z}_3)$, we have $F \neq K$. The following lemma will then imply that $K(\mu_{3^\infty}) \cap F \neq K$. \begin{lemma} \label{kernelat3lemma} Let $N \unlhd \GL_2(\mathbb{Z}_3)$ be a closed normal subgroup satisfying $\langle \SL_2(\mathbb{Z}_3), N \rangle = \GL_2(\mathbb{Z}_3)$. Then $N = \GL_2(\mathbb{Z}_3)$. \end{lemma} \begin{proof} A computer calculation shows that, if $H \unlhd \GL_2(\mathbb{Z}/9\mathbb{Z})$ is a normal subgroup satisfying \[ \langle \SL_2(\mathbb{Z}/9\mathbb{Z}), H \rangle = \GL_2(\mathbb{Z}/9\mathbb{Z}), \] then $H = \GL_2(\mathbb{Z}/9\mathbb{Z})$. Taking $N$ as in the statement of the lemma and setting $H := N(9)$, we see that $N(9) = \GL_2(\mathbb{Z}/9\mathbb{Z})$, and applying \eqref{GL2elladicversion} with $\beta = 1$, we conclude that $N = \GL_2(\mathbb{Z}_3)$. \end{proof} Applying Lemma \ref{kernelat3lemma} with $N = \ker \psi_3$, we find that $K(\mu_{3^\infty}) \cap F \neq K$, since $F \neq K$, verifying \eqref{FcapKmuinftyneqK} in the $\ell = 3$ case as well. Finally, we observe that \eqref{FcapKmuinftyneqK} implies the conclusion of Lemma \ref{neronoggshafarevichlemmaellgeq5}.
Indeed, let $\ell$ be any odd prime with $\ell \nmid \Delta_K$. Since $G_\ell = \GL_2(\mathbb{Z}_\ell)$, we have $K \cap \mathbb{Q}(\mu_{\ell^\infty}) = \mathbb{Q}$, and so any prime $\mf{L} \subseteq \mc{O}_K$ over $\ell$ is totally ramified in $K(\mu_{\ell^\infty})$, hence ramified in $F \cap K(\mu_{\ell^\infty})$, which is a non-trivial extension of $K$ by \eqref{FcapKmuinftyneqK}. Thus, by \eqref{fancyfielddiagramwithF}, $\mf{L}$ is ramified in $K(E_{\tors, (\ell)})$. By Theorem \ref{neronoggshafarevichthm}, we find that $\ell \mid N_{K/\mathbb{Q}}(\Delta_E)$, finishing the proof. \subsection{Proof of Lemma \ref{ellequals2lemma}} The proof of Lemma \ref{ellequals2lemma} will make use of the following sub-lemma. \begin{lemma} \label{ramificationlemma} Let $K$ be a number field for which $2 \nmid \Delta_K$ and let $\mf{p} \subseteq \mc{O}_K$ be a prime ideal lying over $2$. Let $\alpha \in \mc{O}_K - \{ 0 \}$ be any element for which $\mf{p} \nmid \alpha \mc{O}_K$. Then $2\alpha$ is not a square in $K^\times$, so the field $K(\sqrt{2\alpha})$ is a quadratic extension of $K$. Furthermore, $\mf{p}$ ramifies in $K(\sqrt{2\alpha})$. \end{lemma} \begin{proof} Let $v_{\mf{p}}$ denote the $\mf{p}$-adic valuation on $K$, normalized so that $v_{\mf{p}}(K^\times) = \mathbb{Z}$. Note that, since by assumption $2$ is unramified in $K$ and $v_{\mf{p}}(\alpha) = 0$, we have \begin{equation} \label{valuationof2alpha} v_{\mf{p}}(2\alpha) = v_{\mf{p}}(2) + v_{\mf{p}}(\alpha) = 1, \end{equation} and so in particular $2\alpha$ cannot be a square in $K^\times$, as asserted. Next, let $L := K(\sqrt{2\alpha})$, fix any prime $\mf{P} \subseteq \mc{O}_L$ lying over $\mf{p}$ and let $v_\mf{P}$ be the $\mf{P}$-adic valuation on $L$, normalized so that it extends $v_{\mf{p}}$ on $K$. By \eqref{valuationof2alpha}, we then have \[ v_{\mf{P}}\left( (2\alpha)^{1/2} \right) = \frac{1}{2} v_{\mf{p}}(2\alpha) = \frac{1}{2}.
\] It follows that $L$ is ramified over $K$ at $\mf{p}$, as asserted. \end{proof} We now proceed with the proof of Lemma \ref{ellequals2lemma}. Since we are assuming that $4$ divides $r'$, by Lemma \ref{goursatlemma} we may write $G(r')$ as a fibered product: \begin{equation} \label{fiberedproductat2} G(r') = G(4) \times_{\psi} G(r'_{(2)}). \end{equation} \noindent \emph{Case: $\GL_2(\mathbb{Z}/4\mathbb{Z}) \times \{ 1_{r'_{(2)}} \} \not \subseteq G(r')$.} In this case, either $G(4) \neq \GL_2(\mathbb{Z}/4\mathbb{Z})$ or $G(4) = \GL_2(\mathbb{Z}/4\mathbb{Z})$ and the common quotient $\psi_2(G(4)) = \psi_{(2)}(G(r'_{(2)}))$ in \eqref{fiberedproductat2} is nontrivial. If $G(4) \neq \GL_2(\mathbb{Z}/4\mathbb{Z})$, we find that $2 \leq [\GL_2(\mathbb{Z}/4\mathbb{Z}) : G(4) ] \leq [ \pi^{-1}(G(r'_{(2)})) : G(r')]$, and so the result of the lemma follows. If on the other hand $G(4) = \GL_2(\mathbb{Z}/4\mathbb{Z})$, then the common quotient in \eqref{fiberedproductat2} is nontrivial, and since \[ \begin{split} \pi^{-1}\left( G(r'_{(2)}) \right) &= \GL_2(\mathbb{Z}/4\mathbb{Z}) \times G(r'_{(2)}), \\ G\left( r' \right) &= \GL_2(\mathbb{Z}/4\mathbb{Z}) \times_{\psi} G(r'_{(2)}), \end{split} \] we thus have $2 \leq [ \pi^{-1}(G(r'_{(2)})) : G(r')]$, proving the lemma in this sub-case as well. \noindent \emph{Case: $\GL_2(\mathbb{Z}/4\mathbb{Z}) \times \{ 1_{r'_{(2)}} \} \subseteq G(r')$.} In this case, \eqref{fiberedproductat2} is a full product: \begin{equation} \label{fullproductobservation} G(r') = \GL_2(\mathbb{Z}/4\mathbb{Z}) \times G(r'_{(2)}). \end{equation} By Lemma \ref{usefulSL2elladicversion}, either $G(8)$ is an index 2 subgroup of $\GL_2(\mathbb{Z}/8\mathbb{Z})$ that surjects onto $\GL_2(\mathbb{Z}/4\mathbb{Z})$, or $G_2 = \GL_2(\mathbb{Z}_2)$. Let us treat the former subcase first.
A computer search reveals that there are exactly $4$ index $2$ subgroups of $\mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z})$ that map surjectively onto $\mathrm{GL}_2(\mathbb{Z}/4\mathbb{Z})$, namely \[ \ker(\chi_8), \; \ker(\chi_8\chi_4), \; \ker(\chi_8\varepsilon), \; \ker(\chi_8\chi_4\varepsilon), \] where $\chi_8 : \mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z}) \rightarrow \{ \pm 1 \}$ (resp. $\chi_4 : \mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z}) \rightarrow \{ \pm 1 \}$) denotes the Kronecker symbol associated to the quadratic field $\mathbb{Q}(\sqrt{2})$ (resp. to $\mathbb{Q}(\sqrt{-1})$), precomposed with the determinant map, and $\varepsilon : \mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z}) \rightarrow \mathrm{GL}_2(\mathbb{Z}/2\mathbb{Z}) \rightarrow \{ \pm 1 \}$ denotes the unique non-trivial character of order $2$ on $\mathrm{GL}_2(\mathbb{Z}/2\mathbb{Z})$, precomposed with reduction modulo $2$. We have \begin{equation} \label{quadraticfields} \begin{split} &K(E[8])^{\ker(\chi_8)} = K(\sqrt{2}), \quad \quad \quad K(E[8])^{\ker(\chi_8\varepsilon)} = K(\sqrt{2\Delta_E}), \\ &K(E[8])^{\ker(\chi_8\chi_4)} = K(\sqrt{-2}), \; \quad K(E[8])^{\ker(\chi_8\chi_4\varepsilon)} = K(\sqrt{-2\Delta_E}). \end{split} \end{equation} Here, by $K(\sqrt{\pm 2\Delta_E})$ we mean the quadratic field $K(\sqrt{\pm 2 \Delta(E_{\weier})})$, where $E_{\weier}$ is any fixed Weierstrass model of $E$ and $\Delta(E_{\weier}) \in K^\times$ denotes its discriminant (note that although $\Delta(E_{\weier})$ depends on the choice of $E_{\weier}$, the quadratic field $K(\sqrt{\pm 2 \Delta(E_{\weier})})$ depends only on $E$). By \eqref{quadraticfields}, we thus have \[ G(8) = \ker(\chi_8) \; \Longrightarrow \; \sqrt{2} \in K \quad \text{ and } \quad G(8) = \ker(\chi_8\chi_4) \; \Longrightarrow \; \sqrt{-2} \in K, \] either of which implies that $2 \mid \Delta_K$.
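The computer search referred to above was done in MAGMA (see the appendix); as an independent sanity check, the following brute-force Python sketch (ours, not part of the paper) verifies that the four kernels are distinct index-$2$ subgroups of $\mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z})$, each surjecting onto $\mathrm{GL}_2(\mathbb{Z}/4\mathbb{Z})$:

```python
import math
from itertools import product

# Independent brute-force check (our own Python, not the paper's MAGMA)
# that ker(chi_8), ker(chi_8 chi_4), ker(chi_8 eps), ker(chi_8 chi_4 eps)
# are four distinct index-2 subgroups of GL_2(Z/8Z), each of which
# surjects onto GL_2(Z/4Z) under reduction modulo 4.

def det(m, n):
    a, b, c, d = m
    return (a * d - b * c) % n

def gl2(n):
    # all 2x2 matrices mod n with invertible determinant
    return [m for m in product(range(n), repeat=4)
            if math.gcd(det(m, n), n) == 1]

def chi8(m):  # quadratic character of Q(sqrt(2)), composed with det
    return 1 if det(m, 8) in (1, 7) else -1

def chi4(m):  # quadratic character of Q(sqrt(-1)), composed with det
    return 1 if det(m, 8) % 4 == 1 else -1

def eps(m):   # sign character of GL_2(Z/2Z), composed with reduction mod 2
    a, b, c, d = (x % 2 for x in m)
    vecs = [(1, 0), (0, 1), (1, 1)]        # nonzero vectors of F_2^2
    perm = [vecs.index(((a*x + b*y) % 2, (c*x + d*y) % 2)) for x, y in vecs]
    inversions = sum(perm[i] > perm[j] for i in range(3) for j in range(i + 1, 3))
    return 1 if inversions % 2 == 0 else -1

G8 = gl2(8)
characters = [chi8,
              lambda m: chi8(m) * chi4(m),
              lambda m: chi8(m) * eps(m),
              lambda m: chi8(m) * chi4(m) * eps(m)]
kernels = [[m for m in G8 if ch(m) == 1] for ch in characters]

print(len(G8))                                    # 1536 = |GL_2(Z/8Z)|
print([len(G8) // len(K) for K in kernels])       # [2, 2, 2, 2]
print(len({frozenset(K) for K in kernels}))       # 4 distinct subgroups
print(all(len({tuple(x % 4 for x in m) for m in K}) == len(gl2(4))
          for K in kernels))                      # True: each surjects mod 4
```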
On the other hand, for any Weierstrass model $E_{\weier}$ of $E$, we have \begin{equation} \label{implicationswithepsilon} \begin{split} G(8) = \ker(\chi_8\varepsilon) \; &\Longrightarrow \; \sqrt{2\Delta(E_{\weier})} \in K, \\ G(8) = \ker(\chi_8\chi_4\varepsilon) \; &\Longrightarrow \; \sqrt{-2\Delta(E_{\weier})} \in K. \end{split} \end{equation} Let us suppose for the sake of contradiction that \begin{equation} \label{2doesnotdivide} 2 \nmid \Delta_K N_{K/\mathbb{Q}}(\Delta_E). \end{equation} Fix a prime ideal $\mf{p} \subseteq \mc{O}_K$ lying over $2$. By \eqref{2doesnotdivide}, we must have $\mf{p} \nmid \Delta_E$, and we may thus find a Weierstrass model $E_{\weier}$ of $E$ satisfying $ \mf{p} \nmid \Delta(E_{\weier}) $. Applying Lemma \ref{ramificationlemma} with $\alpha = \pm \Delta(E_{\weier})$, we see that $\sqrt{\pm2\Delta(E_{\weier})} \notin K$, contradicting \eqref{implicationswithepsilon}. Thus, we must have $2 \mid \Delta_K N_{K/\mathbb{Q}}(\Delta_E)$ whenever $G(8)$ has index 2 in $\mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z})$. We now treat the second subcase, in which $G_2 = \mathrm{GL}_2(\mathbb{Z}_2)$. We evidently must have a non-trivial common quotient in \begin{equation*} G_{r'} \simeq \mathrm{GL}_2(\mathbb{Z}_2) \times_{\psi} G_{r'_{(2)}}. \end{equation*} (If this fibered product were over a trivial quotient, then $2$ would not divide $m_G$.) We note that any non-trivial finite quotient of $\mathrm{GL}_2(\mathbb{Z}_2)$ must have order divisible by $2$, and that $\ker(G_{r'_{(2)}} \rightarrow G(r'_{(2)}))$ is a profinite group whose finite quotients each have order coprime to $2$.
It follows that the image of $G$ under $\id_{2} \times \pi_{(r'_{(2)})^\infty,r'_{(2)}}$ has the form \begin{equation} \label{shortfiberedproductat2} \mathrm{GL}_2(\mathbb{Z}_2) \times_\psi G(r'_{(2)}), \end{equation} a fibered product with a common quotient of order divisible by $2$ (and hence non-trivial). Consider the subgroup $N := \ker \psi_2 \subseteq \mathrm{GL}_2(\mathbb{Z}_2)$, where $\psi = (\psi_2,\psi_{(2)})$ in \eqref{shortfiberedproductat2}. The assumption $\mathrm{GL}_2(\mathbb{Z}/4\mathbb{Z}) \times \{ 1_{r'_{(2)}} \} \subseteq G(r')$ then implies that $N(4) = \mathrm{GL}_2(\mathbb{Z}/4\mathbb{Z})$ (otherwise the mod $r'$ image of \eqref{shortfiberedproductat2} would have a nontrivial fibering between $G(4)$ and $G(r'_{(2)})$, contradicting \eqref{fullproductobservation}). By Lemma \ref{usefulSL2elladicversion}, we find that $[\mathrm{GL}_2(\mathbb{Z}/8\mathbb{Z}) : N(8) ] = 2$. By the computation mentioned in the previous subcase, we have \[ N(8) \in \{ \ker(\chi_8), \; \ker(\chi_8\chi_4), \; \ker(\chi_8\varepsilon), \; \ker(\chi_8\chi_4\varepsilon) \}, \] and it follows by \eqref{quadraticfields} and Galois theory that one of the fields $K(\sqrt{2})$, $K(\sqrt{-2})$, $K(\sqrt{2\Delta_E})$, or $K(\sqrt{-2\Delta_E})$ must be contained in $K(E_{\tors, (2)})$. By Lemma \ref{ramificationlemma} and Theorem \ref{neronoggshafarevichthm}, it follows that if $2 \nmid \Delta_K$, then $2$ divides $N_{K/\mathbb{Q}}(\Delta_E)$. This finishes the proof of Lemma \ref{ellequals2lemma}. \section{Appendix: MAGMA code for computations} The computations referenced throughout the paper were performed using the computational algebra system MAGMA (see \cite{MAGMA}); below we give scripts that the interested reader can run to reproduce those results.
\\ ---------------------------------------------------- \noindent {\tt{> IsConjugateToSubgroup := function(G1,G2,m)}} \\ \noindent {\tt{function> answer := false;}} \\ \noindent {\tt{function> for H2 in Subgroups(G2 : OrderEqual := \#G1) do}} \\ \noindent {\tt{function|for> if IsConjugate(GL(2,Integers(m)),G1,H2`subgroup) then}} \\ \noindent {\tt{function|for|if> answer := true;}} \\ \noindent {\tt{function|for|if> break H2;}} \\ \noindent {\tt{function|for|if> end if;}} \\ \noindent {\tt{function|for> end for;}} \\ \noindent {\tt{function> return answer;}} \\ \noindent {\tt{function> end function;}}\\ ---------------------------------------------------- \\ \noindent {\tt{> G3 := GL(2,Integers(3));}} \\ \noindent {\tt{> C3 := CommutatorSubgroup(G3,G3);}} \\ \noindent {\tt{> C3 eq SL(2,Integers(3));}} \\ ---------------------------------------------------- \\ \noindent {\tt{> L9 := [];}} \\ \noindent {\tt{> G9 := GL(2,Integers(9));}} \\ \noindent {\tt{> for H in Subgroups(G9) do}} \\ \noindent {\tt{for> H3 := MatrixGroup<2,Integers(3) | Generators(H`subgroup)>;}} \\ \noindent {\tt{for> if H3 eq G3 and not SL(2,Integers(9)) subset H`subgroup then}} \\ \noindent {\tt{for|if> Append($\sim$L9,H`subgroup);}} \\ \noindent {\tt{for|if> end if;}} \\ \noindent {\tt{for> end for;}} \\ \noindent {\tt{> IsConjugateToSubgroup(L9[1],L9[2],9);}} \\ \noindent {\tt{> \#GL(2,Integers(9))/\#L9[2];}} \\ ----------------------------------------------------- \\ \noindent {\tt{> G5 := GL(2,Integers(5));}} \\ \noindent {\tt{> L5 := [];}} \\ \noindent {\tt{> for N in NormalSubgroups(G5) do}} \\ \noindent {\tt{for> if IsDivisibleBy(Floor(\#GL(2,Integers(5))/\#N`subgroup),60) then}} \\ \noindent {\tt{for|if> Append($\sim$L5,N`subgroup);}} \\ \noindent {\tt{for|if> end if;}} \\ \noindent {\tt{for> end for;}} \\ \noindent {\tt{> L5;}} \\ ----------------------------------------------------- \\ \noindent {\tt{> L9p := [];}} \\ \noindent {\tt{> for H in NormalSubgroups(G9) do}} \\ \noindent 
{\tt{for> Hext := MatrixGroup<2,Integers(9) | Generators(SL(2,Integers(9))), Generators(H`subgroup)>;}} \\ \noindent {\tt{for> if Hext eq GL(2,Integers(9)) then}} \\ \noindent {\tt{for|if> Append($\sim$L9p,Hext);}} \\ \noindent {\tt{for|if> end if;}} \\ \noindent {\tt{for> end for;}} \\ \noindent {\tt{> \#L9p;}} \\ \noindent {\tt{> L9p[1] eq GL(2,Integers(9));}} \\ ----------------------------------------------------- \\ \noindent {\tt{> G8 := GL(2,Integers(8));}} \\ \noindent {\tt{> L8 := [];}} \\ \noindent {\tt{> for G in Subgroups(G8 : IndexEqual := 2) do}} \\ \noindent {\tt{for> G4 := MatrixGroup<2,Integers(4) | Generators(G`subgroup)>;}} \\ \noindent {\tt{for> if G4 eq GL(2,Integers(4)) then}} \\ \noindent {\tt{for|if> Append($\sim$L8,G`subgroup);}} \\ \noindent {\tt{for|if> end if;}} \\ \noindent {\tt{for> end for;}} \\ \noindent {\tt{> L8;}} \\ \begin{thebibliography}{99} \bibitem{biluparent} Y. Bilu and P. Parent, \emph{Serre's uniformity problem in the split Cartan case}, Annals of Mathematics, \textbf{173}, no. 1 (2011), 569--584. \bibitem{biluparentrebolledo} Y. Bilu, P. Parent and M. Rebolledo, \emph{Rational points on $X_0^+(p^r)$}, Annales de l'Institut Fourier, \textbf{63}, no. 3 (2013), 957--984. \bibitem{awstitchmarsh} R. Bell, C. Blakestad, A.C. Cojocaru, A. Cowan, N. Jones, V. Matei, G. Smith and I. Vogt, \emph{Constants in Titchmarsh divisor problems for elliptic curves}, preprint. Available at {\tt{https://arxiv.org/abs/1706.03422}} \bibitem{MAGMA} W. Bosma, J. Cannon and C. Playoust, \emph{The {M}agma algebra system. {I}. {T}he user language}, J. Symbolic Comput. \textbf{24}, no. 3-4, (1997), 235--265. \bibitem{bourdon} A. Bourdon, O. Ejder, Y. Liu, F. Odumodu and B. Viray, \emph{On the level of modular curves that give rise to isolated $j$-invariants}, Adv. Math. \textbf{357} (2019), 106824, 33 pp. \bibitem{braujones} J. Brau and N. Jones, \emph{Elliptic curves with $2$-torsion contained in the $3$-torsion field}, Proc. Amer. Math. Soc. \textbf{144} (2016), 925--936.
\bibitem{cojocaru} A.C. Cojocaru, \emph{On the surjectivity of the Galois representations associated to non-CM elliptic curves}, Canad. Math. Bull. \textbf{48}, no. 1, (2005) 16--31. \bibitem{daniels} H. Daniels, \emph{An infinite family of Serre curves}, J. Number Theory \textbf{155} (2015), 226--247. \bibitem{danielsgonzalez} H. Daniels and E. Gonzalez-Jimenez, \emph{Serre's constant of elliptic curves over the rationals}, to appear in Exp. Math. Available at {\tt{https://arxiv.org/abs/1812.04133}} \bibitem{dokchitser} T. Dokchitser and V. Dokchitser, \emph{Surjectivity of mod $2^n$ representations of elliptic curves}, Math. Z. \textbf{272} (2012), 961--964. \bibitem{elkies} N. Elkies, \emph{Elliptic curves with $3$-adic Galois representation surjective mod $3$ but not mod $9$}, preprint (2006). \bibitem{huppert} B. Huppert, \emph{Endliche Gruppen I}, Die Grundlehren der mathematischen Wissenschaften, \textbf{134}, Springer-Verlag, Berlin Heidelberg (1967). \bibitem{jones1} N. Jones, \emph{A bound for the torsion conductor of an elliptic curve}, Proc. Amer. Math. Soc. \textbf{137} (2009), 37--43. \bibitem{lozanorobledo} A. Lozano-Robledo, \emph{On the field of definition of $p$-torsion points on elliptic curves over the rationals}, Math. Annalen \textbf{357}, no. 1 (2013), 279--305. \bibitem{mazur} B. Mazur, \emph{Rational isogenies of prime degree}, Invent. Math. \textbf{44}, no. 2 (1978), 129--162. \bibitem{merel} L. Merel, \emph{Bornes pour la torsion des courbes elliptiques sur les corps de nombres}, Invent. Math. \textbf{124}, no. 1 (1996), 437--449. \bibitem{morrow} J. Morrow, \emph{Composite images of Galois for elliptic curves over $\mathbb{Q}$ and entanglement fields}, Math. Comp. \textbf{88} (2019), 2389--2421. \bibitem{ogg} A.P. Ogg, \emph{Elliptic curves and wild ramification}, Amer. J. Math. \textbf{89}, no. 1 (1967), 1--21. \bibitem{ribet} K. Ribet, \emph{Galois action on division points of Abelian varieties with real multiplications}, Amer. J. Math.
\textbf{98}, no. 3 (1976), 751--804. \bibitem{serre} J.-P. Serre, \emph{Propri\'{e}t\'{e}s galoisiennes des points d'ordre fini des courbes elliptiques}, Invent. Math. \textbf{15} (1972), 259--331. \bibitem{serre2} J.-P. Serre, \emph{Abelian $\ell$-adic representations and elliptic curves}, Benjamin, New York-Amsterdam, 1968. \bibitem{silverman} J. Silverman, \emph{The arithmetic of elliptic curves}, Springer, New York, 1986. \bibitem{sutherlandzywina} A. Sutherland and D. Zywina, \emph{Modular curves of prime-power level with infinitely many rational points}, Algebra \& Number Theory \textbf{11}, no. 5 (2017), 1199--1229. \bibitem{weil} A. Weil, \emph{Sur les fonctions alg\'{e}briques \`{a} corps de constantes fini}, C. R. Acad. Sci. Paris \textbf{210} (1940), 592--594. \bibitem{zywina} D. Zywina, \emph{Elliptic curves with maximal Galois action on their torsion points}, Bull. Lond. Math. Soc. \textbf{42} (2010), 811--826. \bibitem{zywina2} D. Zywina, \emph{On the possible images of the mod $\ell$ representations associated to elliptic curves over $\mathbb{Q}$}, preprint. Available at {\tt{https://arxiv.org/abs/1508.07660}} \bibitem{zywinaindex} D. Zywina, \emph{Possible indices for the Galois image of elliptic curves over $\mathbb{Q}$}, preprint. Available at {\tt{https://arxiv.org/abs/1508.07663}} \end{thebibliography} \end{document}
\begin{document} \def\varepsilonm{\varepsilon_m} \title{Complexity and growth for polygonal billiards} \author{J.~Cassaigne, P.~Hubert and S.~Troubetzkoy} \address{Institut de math\'ematiques de Luminy, CNRS Luminy, Case 907, F-13288 Marseille Cedex 9, France} \email{[email protected]} \urladdr{http://iml.univ-mrs.fr/{\lower.7ex\hbox{\~{}}}cassaign/} \address{Institut de math\'ematiques de Luminy, CNRS Luminy, Case 907, F-13288 Marseille Cedex 9, France} \email{[email protected]} \address{Centre de physique th\'eorique et Institut de math\'ematiques de Luminy, CNRS Luminy, Case 907, F-13288 Marseille Cedex 9, France} \email{[email protected]} \email{[email protected]} \urladdr{http://iml.univ-mrs.fr/{\lower.7ex\hbox{\~{}}}troubetz/} \begin{abstract} We establish a relationship between the word complexity and the number of generalized diagonals for a polygonal billiard. We conclude that in the rational case the complexity function has cubic upper and lower bounds. In the tiling case the complexity has cubic asymptotic growth.
\end{abstract} \title{Complexity and growth for polygonal billiards} \section{Introduction} A billiard ball, i.e.~a point mass, moves inside a polygon $Q \subset \mathbb{R}^2$ with unit speed along a straight line until it reaches the boundary $\partial Q$; it then instantaneously changes direction according to the mirror law: ``the angle of incidence is equal to the angle of reflection,'' and continues along the new line. How complex is the game of billiards in a polygon? The first result in this direction, proven independently by Sinai \cite{S} and by Boldrighini, Keane and Marchetti \cite{BKM}, is that the metric entropy with respect to the invariant phase volume is zero. Sinai's proof in fact shows more: the ``metric complexity'' grows at most polynomially. Furthermore, it is known that the topological entropy (in various senses) is zero \cite{K,GKT,GH}. To prove finer results there are two natural quantities one can count. The first is the number of generalized diagonals, that is, (oriented) orbit segments which begin and end in a vertex of the polygon and contain no vertex of the polygon in their interior. The number of links of a generalized diagonal is called its combinatorial length, while its geometric length is simply the sum of the lengths of the segments. Let $N_g(t)$ (resp.~$N_c(n)$) be the number of generalized diagonals of geometric (resp.~combinatorial) length at most $t$ (resp.~$n$). Katok has shown that $N_g(t)$ grows slower than any exponential \cite{K}. Masur has shown that for rational polygons, that is, for polygons all of whose inner angles are commensurable with $\pi$, $N_g(t)$ has quadratic upper and lower bounds \cite{M1,M2}. By elementary reasoning there is a constant $B>1$ such that $B^{-1} \le N_c(t)/N_g(t) \le B$; thus all of these results easily extend to the quantity $N_c(n)$.
Furthermore, Veech has shown that there is a special class of polygons, now commonly referred to as Veech polygons (for example regular polygons), such that the quantity $N_g(t)/t^2$ admits a limit as $t$ tends to infinity \cite{V,V1}. To introduce the second natural quantity which can be counted, label the sides of $Q$ by symbols from a finite alphabet $\mathcal{A}$ whose cardinality is equal to the number of sides of $Q$. We code each orbit by the sequence of sides it hits. Consider the set $\mathcal{L}(n)$ of all words of length $n$ which arise via this coding, and let $p(n) := \# \mathcal{L}(n)$; this is called the complexity function of the language $\mathcal{L}(\cdot)$. The only general results known about the complexity function are that it grows slower than any exponential \cite{K} and at least quadratically \cite{Tr}. For billiards in a square the complexity function has been explicitly calculated, albeit for a slightly different coding (the alphabet consists of two symbols, one for vertical sides and one for horizontal sides) \cite{Mi,BP}. For this coding of the square the codes which appear are known as the Sturmian sequences. In fact it is not hard to relate the complexity functions for the two different codings: the relationship is $p_4(n) = 4p_2(n) -4$. There are some related results on the complexity when one restricts to certain initial conditions: for rational polygons the ``directional complexity'' in each direction is known explicitly \cite{H1}, while for general polygons there are polynomial upper bounds for the directional complexity \cite{GT}. There are several good surveys of billiards in polygons, in which one can find more details about the definitions and more precise statements of the above mentioned results; we refer the reader to \cite{Gu1,Gu2,MT,T}. Our main theorem shows that $p(n)$ and $N_c(n)$ are related.
\begin{theorem} For any convex polygon $$p(n) = \sum_{j=0}^{n-1} N_c(j).$$ \label{theorem1} \end{theorem} Here we remark that $N_c(0)$ is the number of vertices of the polygon, while the sides of $Q$ are not counted as generalized diagonals. Applying the above mentioned results of Masur \cite{M1,M2} we immediately conclude \begin{corollary} If $Q$ is a rational convex polygon then there are positive constants $D_1,D_2$ such that $$D_1 < p(n)/n^3 < D_2$$ for all $n \in \mathbb{N} \backslash \{0\}.$ \end{corollary} Next we exhibit several examples where there are exact asymptotics. We show \begin{theorem}\label{thm::2} If $Q$ is the square, the isosceles right triangle or the equilateral triangle then \begin{equation}\label{e1} \lim_{n \to \infty} \frac{p(n)}{n^3} \end{equation} exists. The following table gives the limit. \begin{center} \begin{tabular}{|l|c||l|c||l|c|}\hline & & & & &\\ Square & $\displaystyle\frac{4}{\pi^2}$ & $\displaystyle\left (\frac{\pi}2,\frac{\pi}4,\frac{\pi}4\right )$-triangle & $\displaystyle \frac{2}{3\pi^2}$ & Equilateral triangle & $\displaystyle\frac{3}{4\pi^2}$ \\ & & & & & \\ \hline \end{tabular} \end{center} \end{theorem} The proof of Theorem \ref{theorem1} is split into two parts. The first part is combinatorial; it uses the notion of bispecial words, which was developed by Cassaigne \cite{C}. The second part is geometric and uses a counting argument based on Euler's formula. {\bf Remark:} it is known that for $n$ sufficiently large the complexity of each aperiodic individual word is $4(n+1)$ for the square, $3(n+2)$ for the equilateral triangle, $4(n+2)$ for the isosceles right triangle and $6(n+2)$ for the half equilateral triangle \cite{H,H1,H2}. For the square the complexity is four times larger than that of Sturmian sequences, thus the fact that $p(n)$ is asymptotically four times the number of Sturmian words of length $n$ is not surprising \cite{Mi,BP}.
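For the square, the preceding observation is easy to test numerically: the number of Sturmian (i.e.~balanced binary) words of length $n$ has a classical closed form \cite{Mi,BP}, and combining it with $p_4(n) = 4p_2(n) - 4$ recovers the constant $4/\pi^2$ of the table. A small Python sketch (ours, not part of the paper; the formula for $p_2$ is quoted, not derived here):

```python
from math import gcd, pi

def phi(k):
    # Euler's totient function, by brute force
    return sum(1 for j in range(1, k + 1) if gcd(j, k) == 1)

def p2(n):
    # number of balanced (Sturmian) binary words of length n, via the
    # classical formula 1 + sum_{i <= n} (n+1-i) phi(i)
    return 1 + sum((n + 1 - i) * phi(i) for i in range(1, n + 1))

def p4(n):
    # complexity of the square billiard in the 4-letter coding
    return 4 * p2(n) - 4

print([p2(n) for n in range(1, 5)])   # [2, 4, 8, 14]
n = 400
print(p4(n) / n**3, 4 / pi**2)        # the two values are close
```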
Any infinite word of eventual complexity $3(n+2)$ whose language (i.e.~the collection of finite factors) is invariant under cyclic permutations of the letters arises as the coding of a billiard trajectory in the equilateral triangle \cite{H}. The third entry of the table in Theorem \ref{thm::2} gives the asymptotic growth rate of the number of all such words. Two interesting tiling cases in which it remains to evaluate the limit (\ref{e1}) are the $\displaystyle \left (\frac{\pi}2,\frac{\pi}3,\frac{\pi}6\right )$-triangle and the hexagon. In the triangular case the methods developed for the other three cases allow us to conclude that this limit exists and to calculate it explicitly. Since the combinatorics of this case is more complicated than the others, we leave its explicit computation to the dedicated reader. The hexagonal case remains open, since the corresponding lattice point counting problem seems not to have been investigated. It would be interesting to know if the limit (\ref{e1}) exists in the case of Veech polygons and also to exhibit cases when it does not exist. \section{A combinatorial lemma} Let $p(0) := 0$ and for any $n \ge 1$ let $s(n) := p(n+1) - p(n).$ For $u \in \mathcal{L}(n)$ let \begin{eqnarray*} m_l(u) & := & \#\{a \in \mathcal{A}: au \in \mathcal{L}(n+1)\}\\ m_r(u) & := & \#\{b \in \mathcal{A}: ub \in \mathcal{L}(n+1)\}\\ m_b(u) & := & \#\{(a,b) \in \mathcal{A}^2: aub \in \mathcal{L}(n+2)\}. \end{eqnarray*} We remark that all three of these quantities are larger than or equal to one. A word $u \in \mathcal{L}(n)$ is called left special if $m_l(u) > 1$, right special if $m_r(u) > 1$ and bispecial if it is left and right special.
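The quantities $m_l$, $m_r$, $m_b$ are easy to compute for concrete languages. As an illustration (ours, not from the paper), the following Python sketch checks that for the factors of the Fibonacci word, a Sturmian word, every bispecial factor $v$ is ``neutral'', i.e.~$m_b(v) - m_l(v) - m_r(v) + 1 = 0$, consistent with the complexity $p(n) = n+1$ of Sturmian languages:

```python
# Illustration (not from the paper) of m_l, m_r, m_b for a concrete
# language: the factors of the Fibonacci word, a Sturmian word.  For a
# Sturmian language p(n) = n+1, so every bispecial factor v should be
# "neutral": m_b(v) - m_l(v) - m_r(v) + 1 = 0.

def fib_word(length):
    # prefix of the fixed point of the substitution 0 -> 01, 1 -> 0
    w = "0"
    while len(w) < length:
        w = "".join("01" if c == "0" else "0" for c in w)
    return w[:length]

W = fib_word(3000)  # long enough that all short factors are visible

def factors(n):
    return {W[i:i + n] for i in range(len(W) - n + 1)}

def m_l(u):
    return sum(1 for a in "01" if a + u in factors(len(u) + 1))

def m_r(u):
    return sum(1 for b in "01" if u + b in factors(len(u) + 1))

def m_b(u):
    return sum(1 for a in "01" for b in "01" if a + u + b in factors(len(u) + 2))

neutral = all(m_b(v) - m_l(v) - m_r(v) + 1 == 0
              for n in range(1, 15) for v in factors(n)
              if m_l(v) > 1 and m_r(v) > 1)
print(neutral)  # True
```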
Let $\mathcal{BL}(n) := \{u \in \mathcal{L}(n): u \text{ is bispecial}\}.$ In this section we show that \begin{theorem}\label{thm::com} For any polygon $Q$ $$s(n+1) - s(n) = \sum_{v \in \mathcal{BL}(n)} \Big (m_b(v) - m_l(v) - m_r(v) + 1 \Big ).$$ \label{combi} \end{theorem} {\bf Remark:} there is no assumption of convexity in this theorem; in fact it is not even necessary that the language arise from the coding of a polygonal billiard. \begin{proof} Since for every $u \in \mathcal{L}(n+1)$ there exist $b \in \mathcal{A}$ and $v \in \mathcal{L}(n)$ such that $u = vb$, we have $$s(n) = \sum_{u \in \mathcal{L}(n)} (m_r(u) - 1).$$ Thus $$s(n + 1) - s(n) = \sum_{v \in \mathcal{L}(n+1)} \Big (m_r(v) - 1 \Big ) - \sum_{u \in \mathcal{L}(n)} \Big (m_r(u) - 1 \Big ).$$ For $u \in \mathcal{L}(n+1)$ we can write $u = av$ where $a \in \mathcal{A}$ and $v \in \mathcal{L}(n)$, thus $$s(n+1) - s(n) = \sum_{v \in \mathcal{L}(n)} \left [ \sum_{av \in \mathcal{L}(n+1)} \Big (m_r(av) - 1 \Big ) - \Big (m_r(v) -1 \Big ) \right ].$$ For any word $v \in \mathcal{L}(n)$ and $av \in \mathcal{L}(n+1)$, any legal prolongation to the right of $av$ is a legal prolongation to the right of $v$ as well; thus if $m_r(v) = 1$ then $m_r(av) = 1$. Hence words with $m_r(v) = 1$ do not contribute to the above sum, and $s(n+1) - s(n)$ is equal to the above sum restricted to those $v$ such that $m_r(v) > 1.$ If furthermore $m_l(v) = 1$ then there is only a single $a$ such that $av \in \mathcal{L}(n+1)$. For this $a$ we have $m_r(av) = m_r(v)$, thus such words do not contribute to the sum either.
Thus we can restrict the sum to bispecial words, yielding $$s(n+1) - s(n) = \sum_{v \in \mathcal{BL}(n)} \left [ \sum_{av \in \mathcal{L}(n+1)} \Big (m_r(av) - 1 \Big ) - \Big (m_r(v) -1 \Big ) \right ].$$ The theorem follows since for any $v \in \mathcal{BL}(n)$ we have $$ m_b(v) = \sum_{av \in \mathcal{L}(n+1)} m_r(av)$$ and $$ m_l(v) = \sum_{av \in \mathcal{L}(n+1)} 1.$$ \end{proof} \section{Proof of Theorem \ref{theorem1}} Let $X := \{ (s,v): s \in \partial Q \text{ and } v \text{ is an inner pointing unit vector} \}$ and let $P$ be the ``partition'' of $X$ induced by the sides of $Q$. The ambiguity of the definition of $P$ at the vertices plays no role in our discussion. Let $T: X \to X$ be the billiard ball map. An element of the partition $P \vee T^{-1}P \vee \cdots \vee T^{-n+1} P$ is called an $n$--cell. The code of every point in an $n$--cell has the same prefix of length $n$, thus there is a bijection between the set of $n$--cells and the language $\mathcal{L}(n)$. If the footpoint of $T^n x$ is a vertex then we say that $x$ belongs to a discontinuity of order $n$. A discontinuity (of any order) is locally a curve whose endpoints lie on the boundary of $X$ or on a discontinuity of lower order. We call each piece between such endpoints a smooth branch of the discontinuity. \begin{figure} \centerline{\psfig{file=complexity.fig1.ps,height=50mm}} \caption{A generalized diagonal of combinatorial length 4 with code $bcd$.} \end{figure} For $v \in \mathcal{BL}(n)$ let $gd(v)$ be the number of generalized diagonals of length $n+1$ such that the code of (the nonsingular part of) the generalized diagonal is $v$ (see figure 1). Let $I_l(v) := m_l(v) - 1$ and $I_r(v) := m_r(v) - 1$. For short we call $(I_l(v),I_r(v),gd(v))$ the index of $v$. \begin{lemma}\label{lem::geo} Suppose that $Q$ is a convex polygon.
For any $v \in \mathcal{BL}(n)$ $$m_b(v) = I_l(v) + I_r(v) + gd(v) + 1.$$ \end{lemma} \begin{proof} We consider the $n$--cell $C$ with bispecial code $v$. Note that an $n$--cell is a convex polygon \cite{K}, thus geometrically the number $m_b(v)$ corresponds to the number of pieces $C$ is cut into by the discontinuities of order $-1$ and $n$. Let $r$ be the number of sides of $Q$. There are $I_l(v) \le r$ vertices of $Q$ which produce the splitting on the left; they cut $C$ via singularities of $T^{-1}$. Similarly there are $I_r(v) \le r$ vertices which produce the splitting on the right; they correspond to cutting $C$ via singularities of $T^{n}$. Suppose the index of $v$ is $(i,j,k)$. The cell $C$ is cut by $i+j$ singularities with $k$ intersections inside the interior of $C$. We claim that since $Q$ is convex, each of these $k$ intersections consists of an intersection of exactly two smooth branches of the singularities. Consider an intersection point $x$. Its forward orbit arrives at a vertex in say $m > 0$ steps and ends. Thus $x$ belongs to the interior of a discontinuity of order $m$. The forward orbit hits no other vertex before time $m$, and by definition ends at time $m$; thus $x$ belongs to the interior of no other discontinuity of positive order. There are two possible continuations by continuity of the orbit of $x$. If either of these continuations is a generalized diagonal or tangent to a side of $Q$, then $x$ is an end point of another singularity of positive order. In the second case the order of this additional singularity is also $m$, while in the first case it is strictly larger than $m$. We note that the second possibility can only happen if $Q$ is not convex. Similarly, considering the backwards orbit we see that $x$ belongs to the interior of a single discontinuity of negative order, and since $Q$ is convex it is not the end point of any negative discontinuity of greater order. The claim is proven.
Next we will use Euler's formula to conclude our lemma. Let $F,E,V$ stand for the number of faces, edges and vertices respectively of the partition of the interior of $C$ by the discontinuities of order $-1$ and $n$. We have $E := i + j + 2k$ and $V:=k$. By Euler's formula we have $V-E+F = 1$, thus $F = 1-V+E = 1+i+j+k$. As discussed above $m_b(v) = F$. \end{proof} \noindent {\bf Proof of Theorem \ref{theorem1}.} The theorem follows immediately from Lemma \ref{lem::geo} and Theorem \ref{thm::com} since $N_c(n) = \sum_{j=0}^{n-1} \sum_{v \in \mathcal{BL}(j)} gd(v)$. \qed \section{Proof of Theorem \ref{thm::2}} It is well known that if the images of $Q$ under the action of $A(Q)$ tile the plane, then $Q$ is the square, the equilateral triangle, the right isosceles triangle or the half equilateral triangle (i.e.~the triangle with angles $(\pi/2,\pi/3,\pi/6)$). We will use this tiling to calculate $N_c(n)$. \subsection{The square} The tiling is the usual square grid. Fix a corner of the square and call it the origin of the grid. Consider all the generalized diagonals in the grid of combinatorial length at most $n$ which {\bf start} from this corner and are in the first quadrant. From figure 2 it is clear that the number $M_c(n)$ of such generalized diagonals is $$\#\left \{(i,j) \in \mathbb{N}^2: i+j \le n+1 \text{ and } \langle i,j \rangle = 1\right \}$$ where $\langle i,j \rangle$ is the gcd of $i$ and $j$. The condition $\langle i,j \rangle = 1$ arises since generalized diagonals stop as soon as they hit a vertex: if a line through the origin hits several vertices (for example the line $y=x$), it corresponds to only one generalized diagonal (starting at the origin), and we count it only once. Since there are four possible starting corners we have $N_c(n) = 4 M_c(n)$.
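The count above is easy to check numerically. A small Python sketch (ours, not part of the paper) evaluates $M_c(n)$ directly and compares $N_c(n) = 4M_c(n)$ with the quadratic growth $\frac{12}{\pi^2}n^2$ given by Mertens' formula:

```python
from math import gcd, pi

# Direct evaluation (not from the paper) of
#   M_c(n) = #{(i,j) in N^2 : i+j <= n+1 and gcd(i,j) = 1},
# the number of generalized diagonals of the square of combinatorial
# length at most n starting at a fixed corner.

def M_c(n):
    return sum(1 for i in range(n + 2) for j in range(n + 2 - i)
               if gcd(i, j) == 1)

print(M_c(1))                         # 3: towards (0,1), (1,0) and (1,1)
n = 1500
print(4 * M_c(n) / n**2, 12 / pi**2)  # N_c(n)/n^2 approaches 12/pi^2
```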
\begin{figure} \centerline{\psfig{file=complexity.fig2.ps,height=75mm}} \caption{Counting the generalized diagonals starting at the origin of combinatorial length at most 3 in the square.} \end{figure} The asymptotics of this quantity is well known by Mertens' formula and its generalizations \cite{HW,N}: $$N_c(n) \sim \frac{12}{\pi^2}n^2.$$ Applying Theorem \ref{theorem1} we have $$p(n) \sim \frac{12}{\pi^2} \sum_{k=1}^n k^2 \sim \frac{4}{\pi^2}n^3.$$ \subsection{The equilateral triangle} We consider the images of $Q$ under the action of $A(Q)$, specifying that one of the vertices is at the origin and another at the point $(1,0)$. We transform this grid to the grid in figure 3 via the affine mapping which fixes the vector $\left(\begin{smallmatrix} 1 \\ 0 \end{smallmatrix}\right)$ and takes the vector $\left(\begin{smallmatrix} \cos(\pi/3) \\ \sin(\pi/3) \end{smallmatrix}\right)$ to $\left(\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right)$. \begin{figure} \centerline{\psfig{file=complexity.fig3.ps,height=75mm}} \caption{The affinely transformed grid of the equilateral triangle.} \end{figure} Consider all the generalized diagonals of combinatorial length at most $n$ which start from the origin and are in the first quadrant. Let $M_c(n)$ be the cardinality of this set. Since there are 3 vertices we have $N_c(n) = 3 M_c(n)$. From figure 3 one sees that $M_c(2n) = M_c(2n+1)$ since in traversing a square in the tiling one always crosses exactly two consecutive copies of the fundamental triangle.
From the figure it is also clear that $M_c(2n) =\#\left \{(i,j) \in \mathbb{N}^2: i+j \le n+1 \text{ and } \langle i,j \rangle =1 \right\}.$ By Mertens' formula \cite{HW,N}: $$N_c(2n) = N_c(2n+1) \sim \frac{9}{\pi^2}n^2.$$ Applying Theorem \ref{theorem1} we have $$p(n) \sim \frac{3}{4\pi^2}n^3.$$ \subsection{The right isosceles triangle} There are two different quantities which we must count. First we consider all the generalized diagonals of combinatorial length at most $n$ which start from the origin of the grid in figure 4a and are in the first quadrant. Let $M_1(n)$ be the cardinality of this set. We also consider all the generalized diagonals of combinatorial length at most $n$ which start from the origin of the grid in figure 4b and are in the first octant. Let $M_2(n)$ be the cardinality of this set. There are two vertices of our triangle with angle $\pi/4$, thus $N_c(n) = M_1(n) + 2 M_2(n)$. \begin{figure} \centerline{\psfig{file=complexity.fig4a.ps,height=75mm} \quad \psfig{file=complexity.fig4b.ps,height=75mm}} \caption{The grid of the right isosceles triangle.} \end{figure} The number $2 M_2(n)$ can also be interpreted as the cardinality of the set of all the generalized diagonals of combinatorial length at most $n$ which start from the origin of the grid in figure 4b and are in the first quadrant. With this interpretation, if we overlay the grids from figures 4a and 4b we see that each generalized diagonal counted in $M_1(n)$ is also a generalized diagonal counted in $2 M_2(n+1)$, and each generalized diagonal counted in $2 M_2(n)$ is counted in $M_1(n+1).$ Thus the asymptotics of $N_c(n)$ is the same as the asymptotics of $2M_1(n)$. We want to count $M_1(n)$. All generalized diagonals in the argument below will start at the origin of the grid pictured in figure 4a and all lengths will be combinatorial lengths. We count first the solid lines which the generalized diagonal crosses; we will deal with the dashed lines later.
For most of the argument it will not matter whether the generalized diagonal is simple or not (i.e.~contains no vertices in its interior); we will restrict to the set of simple generalized diagonals only in the last step of the proof. Let $l(i,j)$ be the true combinatorial length of the generalized diagonal starting at the origin with end point $(i,j)$ for any $(i,j) \in \mathbb{N}^2$. We view this length as the sum of the solid lines and the dashed lines it crosses plus one. If $i > j$ then the number of dashed lines it crosses is $\lfloor\frac{i-j}2\rfloor$. On the other hand, the number of solid lines it crosses is characterized by the following statements. Suppose that $n=3k$. If it crosses $n-1$ solid lines, then $i + j = 2n/3 + 1 = 2k + 1$. Conversely, supposing that $i + j = 2k +1$, it crosses $n - 1 = 3k - 1$ solid lines. Combining these two facts, we have: if \begin{equation}\label{eq1} (i,j) \in \mathbb{N}^2 \text{ and } i > j \text{ and } i + j = 2k+1 \end{equation} then $$l(i,j) = n + \left \lfloor \frac{i-j}2 \right \rfloor.$$ We need to calculate the region $\mathcal{R}(n)$ consisting of all $(i,j) \in \mathbb{N}^2$ such that $i > j$ and $l(i,j) \le n.$ To do this fix $(i,j)$ as in (\ref{eq1}) and a natural number $m \le i-1$. We compare how many fewer dashed lines are crossed by the generalized diagonal ending at $(i-m,j)$ than by the generalized diagonal ending at $(i,j)$. This comparison yields $l(i-m,j) = n + \left \lfloor \frac{i-j}2 \right \rfloor - 2m + \epsilon_m$ where $\epsilon_m = m \bmod 2$. Thus $l(i-m,j) \le n$ if and only if $n + \left \lfloor \frac{i-j}2 \right \rfloor - 2m + \epsilon_m \le n$. A simple computation yields the following two implications: \begin{eqnarray*} m > \frac{i-j}4 \quad & \Longrightarrow & \quad l(i-m,j) \le n\\ m \le \frac{i-j}4 - \frac12 \quad & \Longrightarrow & \quad l(i-m,j) \ge n+1. \end{eqnarray*} Let $m_0 := \min \{ m: \ l(i-m,j) \le n\}$.
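The two implications are finite arithmetic statements and can be checked exhaustively. A small Python sketch (the function name is ours), using $l(i-m,j) = n + \lfloor\frac{i-j}{2}\rfloor - 2m + \epsilon_m$ with $n=3k$ and $i+j=2k+1$:

```python
def implications_hold(k_max=40):
    """Exhaustively verify, for i > j >= 0 with i + j = 2k + 1 and n = 3k:
      m >  (i-j)/4        implies  l(i-m, j) <= n,
      m <= (i-j)/4 - 1/2  implies  l(i-m, j) >= n + 1,
    where l(i-m, j) = n + (i-j)//2 - 2m + (m % 2)."""
    for k in range(1, k_max + 1):
        n = 3 * k
        for j in range(k + 1):        # then i = 2k + 1 - j > j automatically
            i = 2 * k + 1 - j
            for m in range(1, i):     # natural numbers m <= i - 1
                l = n + (i - j) // 2 - 2 * m + m % 2
                if m > (i - j) / 4 and not l <= n:
                    return False
                if m <= (i - j) / 4 - 0.5 and not l >= n + 1:
                    return False
    return True
```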
From the above implications we have $m_0 \le \frac{i-j}4 + 1$ and $m_0 \ge \frac{i-j}4 - \frac12$. Let $\mathcal{D}(n)$ be the line $x = -y/2 + n/2$. This line is the ``ideal boundary'' of the region $\mathcal{R}(n)$. The following computation shows that the distance of the true boundary from the ideal boundary is uniformly bounded: $$d \Big ((i-m_0,j),\mathcal{D}(n) \Big ) \le d\left ((i-m_0,j),(-\frac j2 + \frac n2,j) \right) = \left |i + \frac j2 - \frac n2 - m_0 \right | \le \frac54.$$ Let $\Delta^+(n)$ be the triangle whose boundaries are the $x$--axis, the line $y=x$ and the line $\mathcal{D}(n)$. By symmetry we also define a region $\Delta^-(n)$ in the second octant (i.e.~we consider $i < j$). Let $\tilde{M}_1(n)$ be the number of simple generalized diagonals starting at the origin whose other end is in the region $\Delta(n) := \Delta^+(n) \cup \Delta^-(n)$. Since the distance of the set $\Delta(n)$ from the set $\mathcal{R}(n)$ is uniformly bounded (in $n$), the asymptotics of $\tilde{M}_1(n)$ and $M_1(n)$ are the same. By symmetry $\tilde{M}_1(n)$ is twice the number of relatively prime lattice points in the region $\Delta^+(n)$. The area of $\Delta^+(n)$ is $n^2/12$. Thus applying Mertens' formula \cite{HW,N} yields $$M_1(n) \sim \tilde{M}_1(n) \sim 2 \cdot \frac{n^2}{2\pi^2} = \frac{n^2}{\pi^2}.$$ Thus $$N_c(n) \sim 2 M_1(n) \sim \frac{2n^2}{\pi^2}$$ and applying Theorem \ref{theorem1} gives $$p(n) \sim \frac{2}{3\pi^2}n^3.$$ \subsection{The half equilateral triangle} The procedure is along the same lines as the previous examples. We consider the affinely transformed grid similarly to the case of the equilateral triangle. Counting the generalized diagonals which start at the origin reduces to an application of Mertens' formula. The explicit description of the region to which Mertens' formula must be applied is more complicated than in the previous examples, thus we do not carry it out.
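The count behind $\tilde{M}_1(n)$ can also be sketched numerically: coprime lattice points in $\Delta^+(n)$ (below $y=x$, left of the line $2x+y=n$) have density $6/\pi^2$ on a region of area $n^2/12$, hence there are $\sim n^2/(2\pi^2)$ of them. A Python sketch (the helper name is ours; the exact boundary convention does not affect the asymptotics):

```python
from math import gcd, pi

def coprime_in_triangle(n):
    """Coprime (i, j) with 0 <= j < i and 2i + j <= n, i.e. lattice points
    of Delta^+(n) up to boundary conventions.  Expected count ~ n^2/(2 pi^2)."""
    return sum(1 for i in range(1, n // 2 + 1)
                 for j in range(min(i, n - 2 * i + 1)) if gcd(i, j) == 1)
```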
{\bf Acknowledgements:} We would like to thank Samuel Leli\`evre for a critical reading of an earlier version of this article. \begin{thebibliography}{99} \bibitem[BP]{BP} J.~Berstel and M.~Pocchiola, {\it A geometric proof of the enumeration formula for Sturmian words}, Jour.~Alg.~Comp.~{\bf 3} (1993) 349--355. \bibitem[BKM]{BKM} C.~Boldrighini, M.~Keane, and F.~Marchetti, {\it Billiards in polygons}, Ann. Prob.~{\bf 6} (1978) 532--540. \bibitem[C]{C} J.~Cassaigne, {\it Complexit\'e et facteurs sp\'eciaux}, Bull.~Belgian Math.~Soc.~{\bf 4} (1997) 67--88. \bibitem[CT]{CT} N.~Chernov and S.~Troubetzkoy, {\it Ergodicity of billiards in polygons with pockets}, Nonlinearity {\bf 11} (1998) 1095--1102. \bibitem[GKT]{GKT} G.~Galperin, T.~Kr\"uger, and S.~Troubetzkoy, {\it Local instability of orbits in polygonal and polyhedral billiards}, Comm.~Math.~Phys.~{\bf 169} (1995) 463--473. \bibitem[Gu1]{Gu1} E.~Gutkin, {\it Billiards in polygons}, Physica D {\bf 19} (1986) 311--333. \bibitem[Gu2]{Gu2} E.~Gutkin, {\it Billiards in polygons: survey of recent results}, J.~Stat.~Phys.~{\bf 174} (1995) 43--56. \bibitem[GuH]{GH} E.~Gutkin and N.~Haydn, {\it Topological entropy of generalized interval exchanges}, Bull.~AMS {\bf 32} (1995) 50--57. \bibitem[GuT]{GT} E.~Gutkin, and S.~Troubetzkoy, {\it Directional flows and strong recurrence for polygonal billiards}, in Proceedings of the International Congress of Dynamical Systems, Montevideo, Uruguay, Pitman Research Notes in Math 362, Longman, Essex (1996). \bibitem[HW]{HW} G.H.~Hardy and E.M.~Wright, {\it An introduction to the theory of numbers}, Oxford Univ.~Press (1964). \bibitem[H]{H} P.~Hubert, {\it Dynamique symbolique des billards polygonaux rationnels}, Th\`ese Universit\'e d'Aix--Marseille II (1995). \bibitem[H1]{H1} P.~Hubert, {\it Complexit\'e des suites d\'efinies par des billards rationnels}, Bull.~Soc.~Math.~France {\bf 123} (1995) 257--270.
\bibitem[H2]{H2} P.~Hubert, {\it Propri\'et\'es combinatoires des suites d\'efinies par le billard dans les triangles pavants}, Theoret.~Comput.~Sci.~{\bf 164} (1996) 165--183. \bibitem[K]{K} A.~Katok, {\it The growth rate for the number of singular and periodic orbits for a polygonal billiard}, Comm.~Math.~Phys. {\bf 111} (1987) 151--160. \bibitem[M1]{M1} H.~Masur, {\it The growth rate of a quadratic differential}, Ergod.~Th.~Dyn.~Sys.~{\bf 10} (1990) 151--176. \bibitem[M2]{M2} H.~Masur, {\it Lower bounds for the number of saddle connections and closed trajectories of a quadratic differential}, In Holomorphic functions and moduli, vol.~1, D.~Drasin ed., Springer--Verlag 1988. \bibitem[MT]{MT} H.~Masur and S.~Tabachnikov, {\it Rational billiards and flat structures}, preprint Max Planck Institut (1999). \bibitem[Mi]{Mi} F.~Mignosi, {\it On the number of factors of Sturmian words}, Theor.~Comp.~Sci.~{\bf 82} (1991) 71--84. \bibitem[N]{N} A.~Nogueira, {\it Orbit distribution on} $\mathbb{R}^2$ {\it under the natural action of} $SL(2,\mathbb{Z})$, IML preprint 21 (2000). \bibitem[S]{S} Ya.~G.~Sinai, {\it Introduction to ergodic theory}, Princeton Univ.~Press 1976. \bibitem[T]{T} S.~Tabachnikov, {\it Billiards}, ``Panoramas et Synth\`eses'', Soc.~Math.~France (1995). \bibitem[Tr]{Tr} S.~Troubetzkoy, {\it Complexity lower bounds for polygonal billiards}, Chaos {\bf 8} (1998) 242--244. \bibitem[V]{V} W.~Veech, {\it Teichm\"uller curves in moduli space, Eisenstein series and an application to triangular billiards}, Invent.~Math.~{\bf 97} (1989) 553--583. \bibitem[V1]{V1} W.~Veech, {\it The billiard in a regular polygon}, Geom.~Func.~Anal.~{\bf 2} (1992) 341--379. \end{thebibliography} \end{document}
\begin{document} \title{ FLUCTUATIONS OF MULTI-DIMENSIONAL\\ KINGMAN-L\'EVY PROCESSES } \author{Thu Nguyen} \address{Department of Mathematics; International University, HCM City; No.6 Linh Trung ward, Thu Duc District, HCM City; Email: [email protected]} \date{August 10, 2009} \begin{abstract} In the recent paper \cite{Ng5} we introduced a method of studying multi-dimensional Kingman convolutions and their associated stochastic processes by embedding them into some multi-dimensional ordinary convolutions, which allows one to study multi-dimensional Bessel processes in terms of the corresponding Brownian motions. Our further aim in this paper is to introduce k-dimensional Kingman-L\'evy (KL) processes and prove some of their fluctuation properties which are analogous to those of k-symmetric L\'evy processes. In particular, the L\'evy-It\^o decomposition and the series representation of Rosi\'nski type for k-dimensional KL-processes are obtained. \end{abstract} \maketitle \noindent Keywords and phrases: Cartesian products of Kingman convolutions; Rayleigh distributions. \section{Introduction, notations and preliminaries} The purpose of this paper is to introduce and study the multivariate KL processes defined in terms of multivariate Kingman convolutions. To begin with, we review some basic facts about Kingman convolutions and their Cartesian products. Let $\mathcal P:=\mathcal P(\mathbb R^+)$ denote the set of all probability measures (p.m.'s) on the positive half-line $\mathbb R^+$. Put, for each continuous bounded function $f$ on $\mathbb R^{+}$, \begin{multline}\label{astKi} \int_{0}^{\infty}f(x)\mu\ast_{1,\delta}\nu(dx)=\frac{\Gamma(s+1)}{\sqrt{\pi}\Gamma(s+\frac{1}{2})}\\ \int_{0}^{\infty}\int_{0}^{\infty}\int_{-1}^{1}f((x^2+2uxy+y^2)^{1/2})(1-u^2)^{s-1/2}\mu(dx)\nu(dy)du, \end{multline} where $\mu\mbox{ and }\nu\in\mathcal P\mbox{ and }\delta=2(s+1)\geq1$ (cf. Kingman\cite{Ki} and Urbanik\cite{U1}).
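For $\delta=3$ (i.e. $s=1/2$) the weight $(1-u^2)^{s-1/2}$ is constant, the normalizing factor $\Gamma(s+1)/(\sqrt{\pi}\,\Gamma(s+\frac12))$ equals $1/2$, and the kernel $\Lambda_s$ introduced below reduces to $\sin x/x$, so (\ref{astKi}) becomes the classical identity for sums of independent spherically symmetric random vectors in $\mathbb R^3$. A numerical sketch (function names are ours) of the resulting multiplicativity of the rad.ch.f., $\widehat{\delta_x\ast_{1,3}\delta_y}(t)=\Lambda_{1/2}(tx)\,\Lambda_{1/2}(ty)$:

```python
from math import sin, sqrt

def sinc(x):
    """The Kingman kernel for delta = 3, s = 1/2: Lambda_{1/2}(x) = sin(x)/x."""
    return 1.0 if x == 0.0 else sin(x) / x

def radchf_dirac_conv(x, y, t, n=100_000):
    """rad.ch.f. of delta_x *_{1,3} delta_y at t: the average of
    sinc(t * sqrt(x^2 + 2*u*x*y + y^2)) over u uniform on [-1, 1],
    computed by the midpoint rule."""
    h = 2.0 / n
    return 0.5 * h * sum(
        sinc(t * sqrt(x * x + 2.0 * u * x * y + y * y))
        for u in (-1.0 + (k + 0.5) * h for k in range(n)))
```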
The convolution algebra $(\mathcal{P},\ast_{1,\delta})$ is the most important example of Urbanik convolution algebras (cf. Urbanik\cite{U1}). In the language of Urbanik convolution algebras, the {\it characteristic measure}, say $\sigma_s$, of the Kingman convolution has the Rayleigh density \begin{equation}\label{Ray} d\sigma_s(y)= \frac{2{(s+1)^{s+1}}}{\Gamma(s+1)}y^{2s+1}\exp{(-(s+1)y^2)}dy \end{equation} with the characteristic exponent $\varkappa=2$ and the kernel $\Lambda_s$ \begin{equation}\label{eq:Lam} \Lambda_s(x)= \Gamma(s+1) J_{s}(x)/(x/2)^{s}, \end{equation} where $J_s(x)$ denotes the Bessel function of the first kind, \begin{equation}\label{eq:Bessel} J_s(x):= \sum_{k=0}^{\infty} \frac{(-1)^k (x/2)^{s+2k}}{k!\Gamma(s+k+1)}. \end{equation} It is known (cf. Kingman \cite{Ki}, Theorem 1) that the kernel $\Lambda_s$ itself is an ordinary characteristic function (ch.f.) of a symmetric p.m., say $F_s$, defined on the interval $[-1,1]$. Thus, if $\theta_s$ denotes a random variable (r.v.) with distribution $F_s$ then for each $t\in \mathbb R^+$, \begin{equation}\label{eq:LamThe} \Lambda_s(t)= E\exp{(it\theta_s)}=\int_{-1}^1\cos{(tx)}dF_s(x).\end{equation} Suppose that $X$ is a nonnegative r.v. with distribution $\mu\in\mathcal{P}$ and $X$ is independent of $\theta_s$. The {\it radial characteristic function} (rad.ch.f.) of $\mu$, denoted by $\hat\mu(t),$ is defined by \begin{equation}\label{ra.ch.f.} \hat\mu(t) = E\exp{(itX\theta_s)} = \int_0^{\infty} \Lambda_s(tx)\mu(dx), \end{equation} for every $t\in \mathbb R^{+}$. The characteristic measure of the Kingman convolution $\ast_{1, \delta}$, denoted by $\sigma_s$, has the Maxwell density function \begin{equation}\label{Maxwell density} \frac{d\sigma_s(x)}{dx}=\frac{2(s+1)^{s+1}}{\Gamma(s+1)}x^{2s+1}\exp\{-(s+1)x^2\}, \quad(0<x<\infty), \end{equation} and the rad.ch.f. \begin{equation} \hat\sigma_s(t)=\exp\{-t^2/(4(s+1))\}.
\end{equation} Let $\tilde{\mathcal P}:=\tilde{\mathcal P}(\mathbb R)$ denote the class of symmetric p.m.'s on $\mathbb R.$ Putting, for every $G\in \mathcal P$, \begin{equation*} F_s(G)=\int_0^{\infty}(T_cF_s)\, G(dc), \end{equation*} where $T_cF_s$ denotes the distribution of $c\theta_s$, we get a homeomorphism from the Kingman convolution algebra $(\mathcal{P},\ast_{1,\delta})$ onto the ordinary convolution algebra $(\tilde{\mathcal P}, \ast)$ such that \begin{eqnarray}\label{homeomorphism1} F_s\{G_1\ast_{1, \delta}G_2\}&=&(F_sG_1)\ast(F_sG_2) \qquad (G_1, G_2\in \mathcal P)\\ F_s\sigma_s&=&N(0, 2s+1) \end{eqnarray} which shows that every Kingman convolution can be embedded into the ordinary convolution $\ast$. Denote by $ \mathbb {R}^{+k}, k=1,2,...,$ the k-dimensional nonnegative cone of $ \mathbb {R}^{k}$ and by $\mathcal{P}(\mathbb R^{+k})$ the class of all p.m.'s on $\mathbb R^{+k}$, equipped with the weak convergence. In the sequel, we will denote multidimensional vectors and random vectors (r.vec.'s) and their distributions by bold face letters. For each point $z$ of any set $A$ let $\delta_z$ denote the Dirac measure (the unit mass) at the point $z$. In particular, if $\mathbf x=(x_1, x_2,\cdots,x_k)\in \mathbb R^{+k}$, then \begin{equation}\label{proddelta} \delta_{\mathbf {x}} = \delta_{x_1}\times\delta_{x_2}\times \ldots\times\delta_{x_k},\quad (k\; times), \end{equation} where the sign $\times$ denotes the Cartesian product of measures.
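As a sanity check on (\ref{Ray}) and (\ref{eq:Lam})--(\ref{eq:LamThe}): the Rayleigh density integrates to $1$, $\Lambda_s(0)=1$, and for $s=1/2$ the Bessel series gives $\Lambda_{1/2}(x)=\sin x/x$. A numerical sketch (helper names are ours, not the paper's):

```python
from math import gamma, exp, sin

def rayleigh_density(y, s):
    """The density (2 (s+1)^{s+1} / Gamma(s+1)) y^{2s+1} exp(-(s+1) y^2) of sigma_s."""
    return (2.0 * (s + 1.0) ** (s + 1.0) / gamma(s + 1.0)
            * y ** (2.0 * s + 1.0) * exp(-(s + 1.0) * y * y))

def midpoint(f, a, b, n=100_000):
    """Midpoint rule; adequate for these smooth, rapidly decaying integrands."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def kernel(x, s, terms=60):
    """Lambda_s(x) = Gamma(s+1) J_s(x) / (x/2)^s, via the Bessel series."""
    return sum((-1.0) ** k * (x / 2.0) ** (2 * k) * gamma(s + 1.0)
               / (gamma(k + 1.0) * gamma(s + k + 1.0)) for k in range(terms))
```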
We put, for $\mathbf {x} = (x_1,\cdots, x_k)\mbox{ and }\mathbf {y} = (y_1,y_2,\cdots, y_k)\in \mathbb R^{+k},$ \begin{equation}\label{convdeltas}\delta_{\mathbf x}\bigcirc_{s, k} \delta_{\mathbf {y}} = \{\delta_{x_1}\circ _s \delta_{y_1}\} \times\{\delta_{x_2} \circ _s\delta_{y_2}\} \times\cdots\ \times \{\delta_{x_k} \circ_s \delta_{y_k}\},\quad (k\; times); \end{equation} here and below, for the sake of simplicity, we denote the Kingman convolution operation $\ast_{1,\delta}, \delta=2(s+1)\ge 1,$ simply by $\circ_{s}, s\ge -\frac{1}{2}.$ Since convex combinations of p.m.'s of the form (\ref{proddelta}) are dense in $\mathcal P(\mathbb R^{+k})$, the relation (\ref{convdeltas}) can be extended to arbitrary p.m.'s $ \mathbf{G}_1 \mbox{ and } \mathbf{G}_2\in\mathcal{P}( \mathbb R^{+k})$. Namely, we put \begin{equation}\label{convF} \mathbf {G}_1 \bigcirc_{s, k} \mathbf {G}_2 = \iint\limits_{ \mathbb R^{+k}} \delta_{\mathbf {x}} \bigcirc_{s, k} \delta_{\mathbf {y}} {\mathbf G}_1(d\mathbf {x}) {\mathbf G}_2(d\mathbf {y}), \end{equation} which means that for each continuous bounded function $\phi$ defined on $\mathbb R^{+k}$ \begin{equation}\label{convof} \int\limits_{\mathbb R^{+k}} \phi({\mathbf z}) {\mathbf G}_1 \bigcirc_{s, k} {\mathbf G}_2 (d{\mathbf z})= \iint\limits_{ \mathbb R^{+k}}\big\{\int\limits_{\mathbb R^{+k}} \phi({\mathbf z}) \delta_{{\mathbf x}} \bigcirc_{s, k} \delta_{{\mathbf y}}(d{\mathbf z})\big\}{ \mathbf G}_1(d{\mathbf x}) {\mathbf G}_2(d{\mathbf y}). \end{equation} In the sequel, the binary operation $\bigcirc_{s, k}$ will be called {\it the k-times Cartesian product of Kingman convolutions} and the pair $(\mathcal P( \mathbb R^{+k}), \bigcirc_{s, k})$ will be called {\it the k-dimensional Kingman convolution algebra}. It is easy to show that the binary operation $\bigcirc_{s, k}$ is continuous in the weak topology, which together with (\ref{astKi}) and (\ref{convF}) implies the following theorem.
\begin{theorem}\label{Theo:Kingmanalgebra} The pair $(\mathcal P{( \mathbb R^{+k})} ,\bigcirc_{s, k})$ is a commutative topological semigroup with $\delta_{\mathbf 0}$ as the unit element. Moreover, the operation $\bigcirc_{s, k}$ is distributive w.r.t. convex combinations of p.m.'s in $\mathcal P( \mathbb R^{+k})$. \end{theorem} For every ${\mathbf G}\in\mathcal P( \mathbb R^{+k})$ the k-dimensional rad.ch.f. $\hat{{\mathbf G}}({\mathbf t}), {\mathbf t}=(t_1, t_2, \cdots, t_k)\in \mathbb R^{+k},$ is defined by \begin{equation}\label{k-ra.ch.f.} \hat{\mathbf G}(\mathbf {t})=\int\limits_{\mathbb R^{+k}} \prod_{j=1}^{k}\Lambda_s(t_jx_j){ \mathbf G}(d\mathbf {x}), \end{equation} where $\mathbf {x}=(x_1, x_2, \cdots, x_k)\in \mathbb R^{+k}.$ Let $\mathbf{\Theta_s} = \{\theta_{s, 1},\theta_{s, 2}, \cdots ,\theta_{s, k}\}$, where the $\theta_{s, j}$ are independent r.v.'s with the same distribution $F_s$. Next, assume that $ {\mathbf X}=\{X_1, X_2,..., X_k\}$ is a k-dimensional r.vec. with distribution $\mathbf{G}$ and $\mathbf{X}$ is independent of $\mathbf{\Theta}_s$. We put \begin{equation}\label{[Theta,X]} [{\mathbf\Theta}_s,{\mathbf X}]=\{{\theta_{s,1} X_1, \theta_{s, 2} X_2,...,\theta_{s, k}X_k}\}. \end{equation} Then, the following formula is equivalent to (\ref{k-ra.ch.f.}) (cf. \cite{Ng4}): \begin{equation}\label{multiradchf} \widehat{\mathbf G}({\mathbf t})=Ee^{i<{\mathbf t},[{\mathbf\Theta_s, \mathbf X}]>},\qquad ({\mathbf t}\in \mathbb R^{+k}). \end{equation} The reader is referred to Corollary 2.1 and Theorems 2.3 \& 2.4 of \cite{Ng4} for the principal properties of the above rad.ch.f. Given $s\ge -1/2$, define a map $F_{s, k}: \mathcal P(\mathbb R^{+k}) \rightarrow \mathcal P(\mathbb R^k)$ by \begin{equation}\label{k-map} F_{s, k}({\mathbf G})=\int\limits_{\mathbb R^{+k}} (T_{c_1}F_s)\times(T_{c_2}F_s)\times \ldots\times(T_{c_k}F_s) {\mathbf G}(d{\mathbf c}), \end{equation} here and in the sequel, for a distribution $\mathbf G$ of a r.vec.
$\mathbf X$ and a real number $c$ we denote by $T_c{\mathbf G}$ the distribution of $c{\mathbf X}$. Let us denote by $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ the sub-class of $\mathcal P(\mathbb R^k)$ consisting of all p.m.'s defined by the right-hand side of (\ref{k-map}). By virtue of (\ref{k-ra.ch.f.})-(\ref{k-map}) one can prove the following theorem. \begin{theorem}\label{symmconvo} The set $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ is closed w.r.t. the weak convergence and the ordinary convolution $\ast$, and the following equation holds: \begin{equation}\label{Fourier=rad.ch.f.} \hat{\mathbf G}({\mathbf t})=\mathcal F(F_{s, k}({\mathbf G}))({\mathbf t})\qquad ({\mathbf t}\in {\mathbb R^{+k}}) \end{equation} where $\mathcal F({\mathbf K})$ denotes the ordinary characteristic function (Fourier transform) of a p.m. ${\mathbf K}$. Therefore, for any ${\mathbf G}_1\mbox{ and } {\mathbf G}_2\in \mathcal P(\mathbb R^{+k})$ \begin{equation}\label{convolequality} F_{s, k}({\mathbf G}_1)\ast F_{s, k}({\mathbf G}_2)=F_{s, k}({\mathbf G}_1\bigcirc_{s, k}{\mathbf G}_2) \end{equation} and the map $F_{s, k}$ commutes with convex combinations of distributions and scale changes $T_c, c>0$. Moreover, \begin{equation}\label{Gaussian-Rayleigh} F_{s, k}({\Sigma_{s, k}})=N({\mathbf 0}, 2(s+1){\mathbf I}) \end{equation} where $\Sigma_{s, k}$ denotes the k-dimensional Rayleigh distribution and $N({\mathbf 0}, 2(s+1){\mathbf I}) $ is the symmetric normal distribution on $\mathbb R^k \mbox{ with variance operator } {\mathbf R}= 2(s+1) {\mathbf I}, {\mathbf I}$ being the identity operator. Consequently, every Kingman convolution algebra $\big( \mathcal P(\mathbb R^{+k}), \bigcirc_{s, k}\big)$ is embedded in the ordinary convolution algebra $\big( \tilde{\mathcal P}_{s, k}(\mathbb{R}^{+k}), \ast \big)$ and the map $F_{s, k}$ is a homeomorphism. \end{theorem} Let us denote by $\mathcal E=\{{\mathbf e}=(e_1, e_2, \ldots, e_k), e_j=\pm 1, j=1, 2, \ldots, k \}$.
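For $k=1$ and $s=1/2$, $F_s$ is the uniform law on $[-1,1]$, so $F_{s,1}(\delta_c)=T_cF_s$ is uniform on $[-c,c]$; equation (\ref{Fourier=rad.ch.f.}) then says that its ordinary characteristic function equals the rad.ch.f. $\Lambda_{1/2}(tc)=\sin(tc)/(tc)$ of $\delta_c$. A quick numerical sketch of this instance (names are ours):

```python
from math import cos, sin

def chf_uniform(c, t, n=100_000):
    """Ordinary ch.f. of the uniform law on [-c, c], by the midpoint rule."""
    h = 2.0 * c / n
    return sum(cos(t * (-c + (k + 0.5) * h)) for k in range(n)) * h / (2.0 * c)

def radchf_dirac(c, t):
    """rad.ch.f. of delta_c for s = 1/2: Lambda_{1/2}(t c) = sin(t c)/(t c)."""
    return 1.0 if t * c == 0.0 else sin(t * c) / (t * c)
```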
It is convenient to regard the elements of $\mathcal E$ as sign vectors. Denote $\mathbb R^{+k}_{\mathbf e} =\{[{\mathbf e},{\mathbf x} ]: {\mathbf x}\in \mathbb R^{+k}\}, \mbox{ where } [{\mathbf e},{\mathbf x} ]:=(e_1x_1, e_2x_2, \ldots, e_kx_k).$ Then the family $\{\mathbb R^{+k}_{\mathbf e}, {\mathbf e} \in\mathcal E \}$ is a partition of the space $\mathbb R^k.$ If $\mathbf X$ is a k-dimensional r.vec. with distribution $\mathbf G,$ the k-symmetrization of $\mathbf G$, denoted by $\tilde{\mathbf G},$ is defined by \begin{equation} \tilde{\mathbf G}=\frac{1}{2^k} \sum_{\mathbf e \in \mathcal E} S_{\mathbf e} {\mathbf G}, \end{equation} where the operator $S_{\mathbf e}$ is defined by \begin{equation} S_{\mathbf e}({\mathbf x})=[{\mathbf e},{\mathbf x}] \qquad {\mathbf x}\in{ \mathbb R^k} \end{equation} and the symbol $S_{\mathbf e} {\mathbf G}$ denotes the image of $\mathbf G$ under $S_{\mathbf e}$. \begin{definition} \label{k-symm} We say that a distribution $\mathbf G\in \mathcal P(\mathbb R^k)$ is k-symmetric, if the equation $\mathbf G=\tilde{\mathbf G}$ holds. \end{definition} \begin{definition}\label{k-ID} A p.m. ${\mathbf F} \in \mathcal P(\mathbb R^{+k})$ is called $\bigcirc_{s, k}-$infinitely divisible ($\bigcirc_{s, k}-$ID), if for every $m=1, 2, \ldots$ there exists $\mathbf F_m\in \mathcal P(\mathbb R^{+k})$ such that \begin{equation}\label{kID} { \mathbf F}={\mathbf F}_m\bigcirc_{s, k} {\mathbf F}_m\bigcirc_{s, k}\ldots \bigcirc_{s, k}{\mathbf F}_m\quad (m\;times). \end{equation} \end{definition} \begin{definition}\label{stability} $\mathbf F$ is called stable, if for any positive numbers $a$ and $b$ there exists a positive number $c$ such that \begin{equation}\label{k-stability} T_a{\mathbf F}\;{\bigcirc_{s, k}}\;T_b{\mathbf F}=T_c{\mathbf F}. \end{equation} \end{definition} By virtue of Theorem \ref{symmconvo}, the following theorem holds. \begin{theorem}\label{equivdef} A p.m. $\mathbf G\mbox{ is } \bigcirc_{s, k}-ID$, resp.
stable, if and only if $F_{s, k}({\mathbf G})$ is ID, resp. stable, in the usual sense. \end{theorem} The following theorem gives a representation of rad.ch.f.'s of $\bigcirc_{s, k}-$ID distributions. The proof is a verbatim repetition of that of Theorem 2.6 in \cite{Ng4}. \begin{theorem}\label{LevyID} A p.m. $\mu\in ID(\bigcirc_{s, k})$ if and only if there exists a $\sigma$-finite measure ${\mathbf M}$ (a L\'evy measure) on $ \mathbb R^{+k}$ with the property that ${\mathbf M}(\{{\mathbf 0}\})=0$, ${\mathbf M}$ is finite outside every neighborhood of ${\mathbf 0}$, \begin{equation}\label{integrable w. r. t. weight function} \int_{\mathbb R^{+k}}\frac {\|{\mathbf x}\|^2} {1+\|{\mathbf x}\|^2} {\mathbf M}(d{\mathbf x}) < \infty \end{equation} and for each ${\mathbf t}=(t_1,...,t_k)\in \mathbb R^{+k}$ \begin{equation}\label{Levy-Kintchine for k-dim.rad. ch. f.} -\log{\hat{\mu}({\mathbf t})}=\int_{\mathbb R^{+k}}\{1-\prod_{j=1}^{k}\Lambda_s(t_jx_j)\} \frac {1+\|{\mathbf x}\|^2} {\|{\mathbf x}\|^2} {\mathbf M}(d{\mathbf x}), \end{equation} where, at the origin $\mathbf{0}$, the integrand on the right-hand side of (\ref{Levy-Kintchine for k-dim.rad. ch. f.}) is assumed to be \begin{equation}\label{limiting integrand} \lim_{\|\mathbf {x}\|\rightarrow 0 }\{1-\prod_{j=1}^k \Lambda_s(t_jx_j)\} \frac {1+\|\mathbf x\|^2} {\|\mathbf {x}\|^2}= \sum_{j=1}^k \lambda^2_j t_j^2 \end{equation} for nonnegative $\lambda_j, j=1, 2,...,k.$ In particular, if $ {\mathbf M}=0, \mbox{ then } \mu $ becomes a Rayleighian distribution with the rad.ch.f. (see Definition \ref{Rayleigh}) \begin{equation}\label{kRayleighian rad. ch. f.} -\log{\hat{\mu}({\mathbf t})}=\frac{1}{2}\sum_{j=1}^k \lambda^2_j t_j^2,\quad {\mathbf t}\in \mathbb R^{+k}, \end{equation} for some nonnegative $\lambda_j, j=1,...,k.$ Moreover, the representation (\ref{Levy-Kintchine for k-dim.rad. ch. f.}) is unique.
\end{theorem} An immediate consequence of the above theorem is the following: \begin{corollary}\label{Cor:Pair} Each distribution $\mu\in ID(\bigcirc_{s, k})$ is uniquely determined by the pair $[\mathbf{M}, \pmb {\lambda}]$, where $\mathbf{M}$ is a L\'evy measure on $\mathbb R^{+k}$ such that $\mathbf{M}(\{\mathbf{0}\})=0,$ $\mathbf{M}$ is finite outside every neighborhood of $\mathbf{0}$, the condition (\ref{integrable w. r. t. weight function}) is satisfied, and $\pmb{\lambda}:=\{\lambda_1, \lambda_2,\cdots, \lambda_k\}\in \mathbb R^{+k}$ is a vector of nonnegative numbers appearing in (\ref{kRayleighian rad. ch. f.}). Consequently, one can write $\mu\equiv[\mathbf{M}, \pmb {\lambda}].$\\ \indent In particular, if $\mathbf{M}$ is the zero measure then $\mu=[\pmb{\lambda}]$ becomes a Rayleighian p.m. on $\mathbb R^{+k}$, defined as follows: \end{corollary} \begin{definition}\label{Rayleigh} A k-dimensional distribution, say $\pmb{\mathbf \Sigma}_{s, k}$, is called a {\it Rayleigh distribution}, if \begin{equation}\label{k-dimension Rayleigh} \pmb{\mathbf \Sigma}_{s, k}=\sigma_s\times\sigma_s\times\cdots\times\sigma_s \quad (k\;times). \end{equation} Further, a distribution ${\mathbf F}\in \mathcal P(\mathbb R^{+k})$ is called a {\it Rayleighian distribution} if there exist nonnegative numbers $\lambda_r, r=1,2, \ldots, k,$ such that \begin{equation}\label{k-dimensional rayleighian} { \mathbf F}=\{T_{\lambda_1}\sigma_s\}\times \{T_{\lambda_2}\sigma_s\} \times\ldots \times\{T_{\lambda_k}\sigma_s\}. \end{equation} \end{definition} \indent It is evident that every Rayleigh distribution is a Rayleighian distribution. Moreover, every Rayleighian distribution is $\bigcirc_{s, k}-$ID.
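The limit (\ref{limiting integrand}) prescribing the integrand at the origin can be illustrated for $k=1$ and $s=1/2$: since $\Lambda_{1/2}(u)=\sin u/u=1-u^2/6+O(u^4)$, one gets $(1-\Lambda_{1/2}(tx))(1+x^2)/x^2\to t^2/6=t^2/(4(s+1))$ as $x\to 0$. A small numerical sketch (the function name is ours):

```python
from math import sin

def levy_integrand(t, x):
    """(1 - Lambda_{1/2}(t x)) (1 + x^2) / x^2: the k = 1, s = 1/2 integrand
    of the Levy-Khinchine representation; tends to t^2/6 as x -> 0."""
    lam = sin(t * x) / (t * x)
    return (1.0 - lam) * (1.0 + x * x) / (x * x)
```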
By virtue of (\ref{Maxwell density}) and (\ref{k-dimension Rayleigh}) it follows that the k-dimensional Rayleigh density is given by \begin{equation}\label{density k-dimension Rayleigh} g({\mathbf x})=\frac{2^k(s+1)^{k(s+1)}}{\Gamma^k(s+1)}\prod_{j=1}^k x_j^{2s+1}\exp\{-(s+1)\|{\mathbf x}\|^2\}, \end{equation} where ${\mathbf x}=(x_1, x_2,\ldots, x_k)\in \mathbb R^{+k}$, and the corresponding rad.ch.f. is given by \begin{equation} \hat\Sigma_{s, k}({\mathbf t})=\exp(-\|{\mathbf t}\|^2/(4(s+1))),\quad {\mathbf t}\in \mathbb R^{+k}. \end{equation} Finally, the rad.ch.f. of a Rayleighian distribution $\mathbf F\mbox{ on } \mathbb R^{+k}$ is given by \begin{equation}\label{rad.ch.rayleighian} \hat{\mathbf F}({\mathbf t})=\exp\Big(-\frac{1}{2}\sum_{j=1}^k\lambda_j^2t_j^2\Big), \end{equation} where $\lambda_j, j=1, 2, \ldots, k,$ are some nonnegative numbers. \section{Multivariate Bessel processes} \section{Multivariate Kingman-L\'evy processes and their L\'evy-It\^o decomposition} Suppose that $\mu_t, t\ge 0$, is a continuous semigroup in $ID(\bigcirc_{s, k})$, that is, for any $t,u\ge 0$ \begin{equation}\label{Ksemigroup} \mu_t\bigcirc_{s, k}\mu_u=\mu_{t+u} \end{equation} and $\{\mu_t\}$ is continuous at $0$, i.e. \begin{equation*} \lim_{t\rightarrow 0}\mu_t=\delta_{\mathbf 0}. \end{equation*} By virtue of Theorem \ref{symmconvo} it follows that $\{F_{s, k}(\mu_t)\}$ is an ordinary continuous convolution semigroup on $\mathbb R^k.$ Putting, for each $\mathbf{x}\in \mathbb {R}^{+k}$ and for every Borel subset $\mathcal{E}\mbox{ of } \mathbb {R}^{+k}, $ \begin{equation}\label{transitionProbLevy} \mathbf{P}(t, \mathcal{E}, \mathbf{x})=\mu_t\bigcirc_{s, k}\delta_{\mathbf{x}}(\mathcal{E}) \end{equation} and using the rad.ch.f.
it follows that the family $\{ \mathbf{P}(t, \mathcal{E}, \mathbf{x}), t\ge 0\}$ satisfies the Chapman-Kolmogorov equation and, consequently, the formula (\ref{transitionProbLevy}) defines transition probabilities of an $\mathbb R^{+k}-$valued homogeneous strong Markov Feller process $\{\mathbf{X}^{\mathbf{x}}_t, t\ge 0\}$, say, such that it is stochastically continuous and has a c\`adl\`ag version (compare \cite{Ng1}, Theorem 2.6). \begin{definition}\label{KL-process} A $\mathbb R^{+k}$-valued stochastic process $\{\mathbf{X}_t, t\ge 0\}$ is called a Kingman-L\'evy process, if (i) $ \mathbf{X}_0=\mathbf{0}$ with probability 1; (ii) there exists a $\mathbb R^{+k}-$valued homogeneous strong Markov Feller process having a c\`adl\`ag version $\{\mathbf{X}^{\mathbf{x}}_t, t\ge 0\}$ with transition probabilities defined by (\ref{transitionProbLevy}) and $\mathbf{X}_t=\mathbf{X}^{\mathbf{0}}_t, t\ge 0.$ \end{definition} \section{Fluctuations of Multidimensional Bessel Processes} \begin{definition} Let $(W_t, t\ge 0)$ be a d-dimensional Brownian motion $(d=1, 2, \ldots)$. The Euclidean norm of $(W_t)$, denoted by $B_t, t\ge 0$, is called a Bessel process. \end{definition} It has been proved that Bessel processes inherit the well-known characteristics of Brownian motions: they are processes with independent, stationary ``increments'' and continuous sample paths. The term `increment' is defined as follows: \begin{definition} For any $s>u$ the random variable $| W_s-W_u |$ is called an increment of the Bessel process. \end{definition} The following theorem gives a L\'evy-Khinchin representation of the Bessel process in the sense of the Kingman convolution. \begin{theorem} The radial characteristic function $\phi(x)$ of the Bessel process $(B_t)$ is of the form \begin{equation} \phi(x)=\exp\Big\{-\frac{tx^2}{4(s+1)}\Big\}\qquad x, t \ge0, \end{equation} where $d=2(s+1)$.
\end{theorem} Since for any $s>u$ the `increment' of the Bessel process $(B_t)$ is infinitely divisible in the ordinary convolution $\ast$, we have the following representation of the Fourier transform of $B_{s-u}$: \begin{equation} \mathcal F_{B_{s-u}}(x)=\exp(-(s-u) \psi(x)), \end{equation} where $\psi(x)$ is a symmetric characteristic exponent, \begin{equation} \psi(x)=\frac{1}{2}\sigma^2x^2+\int_0^{\infty}(1-\cos xv)\,\Pi(dv), \end{equation} where the measure $\Pi$ satisfies the condition \begin{equation} \int_0^{\infty}\min(1,x^2)\,\Pi(dx)<\infty, \end{equation} which implies the following L\'evy-It\^o decomposition. \begin{theorem}(L\'evy-It\^o decomposition) There exists a Brownian motion $X^{(1)}_t$ and a compound Poisson process $ X^{(2)}_t $ independent of $X^{(1)}_t$ such that \begin{equation} B_t=||W_t ||\overset{d}{=}X^{(1)}_t + X^{(2)}_t\qquad (t\ge 0). \end{equation} \end{theorem} Before stating the Wiener-Hopf factorization (WHf) theorem for Bessel processes we introduce some concepts and notations. The importance of the WHf is that it gives us information about the ascending and descending ladder processes. We begin by recalling that for $\alpha, \beta\ge 0$ one defines the Laplace exponents $\kappa(\alpha, \beta)\mbox { and } \hat \kappa(\alpha, \beta)$ of the ascending ladder process $(L^{-1}, H)$ and the descending ladder process $(\hat L^{-1}, \hat H)$, respectively. Further, we define $$\overline{G}_t=\sup\{s<t: \overline{X}_s=X_s\} \mbox { and } \underline{G}_t= \sup\{s<t: \underline{X}_s=X_s\}.$$ \begin{theorem}\label{Wienner-Hopf}(Wiener-Hopf factorization) Let $(B_t, t\ge 0)$ be a Bessel process. Denote by ${\mathbf e}_p$ an independent exponentially distributed random variable with parameter $p$.
The pairs $(\overline{G}_{{\mathbf e}_p}, \overline{X}_{{\mathbf e}_p})\mbox{ and } ({\mathbf e}_p-\overline{G}_{{\mathbf e}_p}, \overline{X}_{{\mathbf e}_p}-X_{{\mathbf e}_p})$ are independent and infinitely divisible, yielding the factorization \begin{equation}\label{Wienner-Hopf-factorization} \frac{p}{p-i\nu+\psi(\theta)}=\Psi^{+}(\nu, \theta)\cdot\Psi^{-}(\nu, \theta)\qquad \nu, \theta\in \mathbb R, \end{equation} $\Psi^{+}, \Psi^{-}$ being Fourier transforms, called the Wiener-Hopf factors. \end{theorem} \section{L\'evy-It\^o decomposition of Kingman-L\'evy processes} \begin{thebibliography}{19} \bibitem{B1} Bingham, N.H., Random walks on spheres, Z. Wahrscheinlichkeitstheorie Verw. Geb., {\bf 22}, (1973), 169-172. \bibitem{B2} Bingham, N.H., On a theorem of Klosowska about generalized convolutions, Colloquium Math., {\bf 28}, No. {\bf 1}, (1984), 117-125. \bibitem{C} Cox, J.C., Ingersoll, J.E., Jr., and Ross, S.A., A theory of the term structure of interest rates, Econometrica, {\bf 53}(2), (1985). \bibitem{Fe} Feller, W., An Introduction to Probability Theory and Its Applications, John Wiley \& Sons Inc., Vol. II, 2nd Ed., (1971). \bibitem{I} It\^o, K., McKean, H.P., Jr., Diffusion Processes and Their Sample Paths, Berlin-Heidelberg-New York, Springer (1996). \bibitem{Ka} Kallenberg, O., Random Measures, 3rd ed., New York: Academic Press, (1983). \bibitem{Ki} Kingman, J.F.C., Random walks with spherical symmetry, Acta Math., {\bf 109}, (1963), 11-53. \bibitem{Ky} Kyprianou, Andreas E., Introductory Lectures on Fluctuations of L\'evy Processes with Applications, \bibitem{Le} Levitan, B.M., Generalized Translation Operators and Some of Their Applications, Israel Program for Scientific Translations, Jerusalem, (1962). \bibitem{LiOst} Linnik, Ju.V., Ostrovskii, I.V., Decomposition of Random Variables and Vectors, Translations of Mathematical Monographs, vol. 48, American Mathematical Society, Providence, R.I., 1977.
(Translated from the Russian, 1972, by Israel Program for Scientific Translations). \bibitem{Ng1} Nguyen V.T, Generalized independent increments processes, Nagoya Math. J.{\bf133}, (1994), 155-175. \bibitem{Ng2} Nguyen V.T., Generalized translation operators and Markov processes, Demonstratio Mathematica, {\bf 34} No 2, ,295-304. \bibitem{NgOGAWAYamazato} Nguyen T.V., OGAWA S., Yamazato M. A convolution Approach to Mutivariate Bessel Processes, Proceedings of the 6th Ritsumeikan International Symposium on "{\it Stochastic Processes and Applications to Mathematical Finance}", edt. J. Akahori, S. OGAWA and S. Watanabe, World Scientific, (2006) 233-244. \bibitem{Ng4} Nguyen V. T., A Kingman convolution approach to Bessel processes, Probab. Math. Stat, Probab. Math. Stat. {\bf 29}, fasc. 1(2009) 119-134. \bibitem{Ng5} Nguyen V. T., An analogue of the Cram\'er-L\'evy theorem for multi-dimensional Rayleigh distributions, arxiv.org/abs/0907.5035. \bibitem{ReYor} Revuz, D. and Yor, M., Continuous martingals and Brownian motion. Springer-verlag Berlin Heidelberg, (1991). \bibitem{Sa} Sato K, L\'{e}vy processes and infinitely divisible distributions, Cambridge University of Press, (1999). \bibitem{Sh} Shiga T., Watanabe S., Bessel diffusions as a one-parameter family of diffusion processes, Z. Warscheinlichkeitstheorie Verw. geb. 27,(1973), 34-46. \bibitem{U1} Urbanik K., Generalized convolutions, Studia math., {\bf 23} (1964), 217-245. \bibitem{U2} Urbanik K., Cram\'{e}r property of generalized convolutions,Bull. Polish Acad. Sci. Math.{\bf 37} No 16 (1989), 213-218. \bibitem{V} V\'olkovich, V. E., On symmetric stochastic convolutions, J. Theor. Prob. {\bf 5}, No. {\bf 3}(1992), 417-430. \end{thebibliography} \end{document} \end{document} \begin{definition} Let $(W_t, t\ge 0)$ be a d-dimensional Brownian motion (d=1, 2, \ldots). The Euclidean norm of $(W_t)$, denoted by $B_t, t\ge 0$ is called a Bessel process. 
\end{definition} It has been proved that Bessel processes inherit the well-known properties of Brownian motion: they are processes with independent stationary ``increments'' and continuous sample paths. The term `increment' is defined as follows. \begin{definition} For any $s>u$ the random variable $|W_s-W_u|$ is called an increment of the Bessel process. \end{definition} The following theorem gives a L\'evy-Khinchin representation of the Bessel process in the sense of the Kingman convolution. \begin{theorem} The radial characteristic function $\phi$ of the Bessel process $(B_t)$ is of the form \begin{equation} \phi(x)=\exp\Big\{-\frac{tx^2}{4(s+1)}\Big\},\qquad x, t \ge0, \end{equation} where $d=2(s+1)$. \end{theorem} Since for any $s>u$ the `increment' of the Bessel process $(B_t)$ is infinitely divisible in the ordinary convolution $\ast$, we have the following representation of the Fourier transform of $B_{s-u}$: \begin{equation} \mathcal F_{B_{s-u}}(x)=\exp(-(s-u) \psi(x)), \end{equation} where $\psi(x)$ is a symmetric characteristic exponent \begin{equation} \psi(x)=\frac{1}{2}\sigma^2x^2+\int_0^{\infty}(1-\cos xv)\,\Pi(dv), \end{equation} the measure $\Pi$ satisfying the condition \begin{equation} \int_0^{\infty}\min(1,x^2)\,\Pi(dx)<\infty. \end{equation} This implies the following L\'evy-It\^o decomposition. \begin{theorem}[L\'evy-It\^o decomposition] There exist a linear symmetric Brownian motion $X^{(1)}_t$ and a compound symmetric Poisson process $X^{(2)}_t$ independent of $X^{(1)}_t$ such that \begin{equation} B_t=\|W_t\|\overset{d}{=}X^{(1)}_t + X^{(2)}_t,\qquad t\ge 0. \end{equation} \end{theorem} Before stating the Wiener-Hopf factorization (WHf) theorem for Bessel processes we introduce some concepts and notation. The importance of the WHf is that it gives information about the ascending and descending ladder processes.
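The closed form of the radial characteristic function above can be checked numerically. The sketch below is our addition, not part of the original text: it takes $d=3$, i.e.\ $s=1/2$, where the kernel has the closed form $\Lambda_{1/2}(u)=\sin u/u$, and assumes the normalization implicit in the Maxwell characteristic measure, namely that the coordinates of $W_t$ have variance $t/d$; the function name is ours.

```python
import math, random

def bessel_radial_chf_mc(x, t, d=3, n=200_000, seed=7):
    """Monte Carlo estimate of E[Lambda_s(x * ||W_t||)] for d = 3,
    where Lambda_{1/2}(u) = sin(u)/u and each coordinate of W_t is
    N(0, t/d) -- the normalization implicit in the Maxwell measure."""
    assert d == 3  # Lambda_s has the closed form sin(u)/u only for s = 1/2
    rng = random.Random(seed)
    sd = math.sqrt(t / d)
    acc = 0.0
    for _ in range(n):
        r = math.sqrt(sum(rng.gauss(0.0, sd) ** 2 for _ in range(d)))
        u = x * r
        acc += math.sin(u) / u if u != 0 else 1.0
    return acc / n

# Theorem: phi(x) = exp(-t x^2 / (4(s+1))) with d = 2(s+1), so 4(s+1) = 6 here.
x, t = 1.0, 1.0
est = bessel_radial_chf_mc(x, t)
exact = math.exp(-t * x ** 2 / 6.0)
print(est, exact)
```

The Monte Carlo average of $\Lambda_s(x\|W_t\|)$ should agree with $\exp\{-tx^2/(4(s+1))\}$ up to sampling error.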
We begin by recalling, for $\alpha, \beta\ge 0$, the Laplace exponents $\kappa(\alpha, \beta)\mbox{ and } \hat \kappa(\alpha, \beta)$ of the ascending ladder process $(L^{-1}, H)$ and the descending ladder process $(\hat L^{-1}, \hat H)$, respectively. Further, we define $$\overline{G}_t=\sup\{s<t: \overline{X}_s=X_s\} \mbox{ and } \underline{G}_t= \sup\{s<t: \underline{X}_s=X_s\}.$$ \begin{theorem}\label{Wienner-Hopf}(Wiener-Hopf factorization) Let $(B_t, t\ge 0)$ be a Bessel process and let ${\mathbf e}_p$ be an exponentially distributed random variable with parameter $p$, independent of $(B_t)$. The pairs $(\overline{G}_{{\mathbf e}_p}, \overline{X}_{{\mathbf e}_p})\mbox{ and } ({\mathbf e}_p-\overline{G}_{{\mathbf e}_p}, \overline{X}_{{\mathbf e}_p}-X_{{\mathbf e}_p})$ are independent and infinitely divisible, yielding the factorization \begin{equation}\label{eq:Wienner-Hopf} \frac{p}{p-i\nu+\psi(\theta)}=\Psi^{+}(\nu, \theta)\,\Psi^{-}(\nu, \theta),\qquad \nu, \theta\in \mathbb R, \end{equation} $\Psi^{+}, \Psi^{-}$ being Fourier transforms, called the Wiener-Hopf factors. \end{theorem} \begin{thebibliography}{19} \bibitem{B1} Bingham, N.H., Random walks on spheres, Z. Wahrscheinlichkeitstheorie Verw. Geb., {\bf 22}, (1973), 169-172. \bibitem{B2} Bingham, N.H., On a theorem of Klosowska about generalized convolutions, Colloquium Math., {\bf 28}, No. {\bf 1}, (1984), 117-125. \bibitem{C} Cox, J.C., Ingersoll, J.E., Jr., and Ross, S.A., A theory of the term structure of interest rates, Econometrica, {\bf 53}(2), (1985). \bibitem{Fe} Feller, W., An Introduction to Probability Theory and Its Applications, Vol. II, 2nd ed., John Wiley \& Sons, (1971). \bibitem{I} It\^o, K., McKean, H.P., Jr., Diffusion Processes and Their Sample Paths, Springer, Berlin-Heidelberg-New York, (1996). \bibitem{Ka} Kallenberg, O., Random Measures, 3rd ed., Academic Press, New York, (1983). \bibitem{Ki} Kingman, J.F.C., Random walks with spherical symmetry, Acta Math., {\bf 109}, (1963), 11-53.
\bibitem{Le} Levitan, B.M., Generalized Translation Operators and Some of Their Applications, Israel Program for Scientific Translations, Jerusalem, (1962). \bibitem{LiOst} Linnik, Ju.V., Ostrovskii, I.V., Decomposition of Random Variables and Vectors, Translations of Mathematical Monographs, Vol. 48, American Mathematical Society, Providence, R.I., (1977). (Translated from the Russian, 1972, by the Israel Program for Scientific Translations.) \bibitem{Ng1} Nguyen, V.T., Generalized independent increments processes, Nagoya Math. J., {\bf 133}, (1994), 155-175. \bibitem{Ng2} Nguyen, V.T., Generalized translation operators and Markov processes, Demonstratio Mathematica, {\bf 34}, No. 2, 295-304. \bibitem{NgOGAWAYamazato} Nguyen, T.V., Ogawa, S., Yamazato, M., A convolution approach to multivariate Bessel processes, in: Proceedings of the 6th Ritsumeikan International Symposium on ``{\it Stochastic Processes and Applications to Mathematical Finance}'', eds. J. Akahori, S. Ogawa and S. Watanabe, World Scientific, (2006), 233-244. \bibitem{Ng4} Nguyen, V.T., A Kingman convolution approach to Bessel processes, Probab. Math. Statist., {\bf 29}, fasc. 1, (2009), 119-134. \bibitem{Ng5} Nguyen, V.T., An analogue of the Cram\'er-L\'evy theorem for multi-dimensional Rayleigh distributions, arxiv.org/abs/0907.5035; Preprint Ser. IU HCMC 2009. \bibitem{ReYor} Revuz, D. and Yor, M., Continuous Martingales and Brownian Motion, Springer-Verlag, Berlin-Heidelberg, (1991). \bibitem{Sa} Sato, K., L\'{e}vy Processes and Infinitely Divisible Distributions, Cambridge University Press, (1999). \bibitem{Sh} Shiga, T., Watanabe, S., Bessel diffusions as a one-parameter family of diffusion processes, Z. Wahrscheinlichkeitstheorie Verw. Geb., {\bf 27}, (1973), 34-46. \bibitem{U1} Urbanik, K., Generalized convolutions, Studia Math., {\bf 23}, (1964), 217-245. \bibitem{U2} Urbanik, K., Cram\'{e}r property of generalized convolutions, Bull. Polish Acad. Sci. Math., {\bf 37}, No. 16, (1989), 213-218.
\bibitem{V} V\'olkovich, V.E., On symmetric stochastic convolutions, J. Theor. Probab., {\bf 5}, No. {\bf 3}, (1992), 417-430. \end{thebibliography} \section{Introduction, Notations and Preliminaries}\label{S:intro} In probability theory and statistics, the {\bf Rayleigh distribution} is a continuous probability distribution which is widely used to model phenomena in fields such as medicine and the social and natural sciences. A multivariate Rayleigh distribution is the probability distribution of a vector of norms of Gaussian random vectors. The purpose of this paper is to introduce and study multivariate Rayleigh distributions with fractional indices via the Cartesian product of Kingman convolutions and, in particular, to prove an analogue of the L\'evy-Cram\'er theorem for multivariate Rayleigh distributions. We begin with a brief review of the Kingman convolution $\ast_{1,\delta}$. Let $\mathcal P(\mathbb R^+)$ denote the set of all probability measures (p.m.'s) on the positive half-line $\mathbb R^+$. Put, for each continuous bounded function $f$ on $\mathbb R^{+}$, \begin{multline}\label{astKi} \int_{0}^{\infty}f(x)\mu\ast_{1,\delta}\nu(dx)=\frac{\Gamma(s+1)}{\sqrt{\pi}\Gamma(s+\frac{1}{2})}\\ \int_{0}^{\infty}\int_{0}^{\infty}\int_{-1}^{1}f((x^2+2uxy+y^2)^{1/2})(1-u^2)^{s-1/2}\mu(dx)\nu(dy)du, \end{multline} where $\mu\mbox{ and }\nu\in \mathcal P(\mathbb R^+)\mbox{ and }\delta=2(s+1)\geq1$ (cf. Kingman \cite{Ki} and Urbanik \cite{U1}). The Kingman algebra $(\mathcal{P},\ast_{1,\delta})$ is the most important example of an Urbanik convolution algebra (cf. Urbanik \cite{U1}). In the language of Urbanik convolution algebras, the radial characteristic function (rad.ch.f.)
of a measure $\mu$ is defined by $$\hat{\mu}(t)=\int_0^{\infty}\Lambda_{s}(xt) \mu(dx),$$ where $t\in \mathbb R^+$ and the kernel $\Lambda_{s}$ is given by \begin{equation}\label{eq:Lam} \Lambda_{s}(x)= \Gamma(s+1) J_{s}(x)/(x/2)^{s}, \end{equation} $J_{s}(x)$ being the Bessel function of the first kind, \begin{equation}\label{eq:Bessel} J_{s}(x):= \sum_{k=0}^{\infty} \frac{(-1)^k (x/2)^{s+2k}}{k!\,\Gamma(s+k+1)}. \end{equation} It is known (cf. Kingman \cite{Ki}, Theorem 1) that the kernel $\Lambda_{s}$ itself is an ordinary ch.f. of a symmetric p.m., say $F_s$, on the interval $[-1,1]$. Thus, if $\theta_s$ denotes a r.v. with distribution $F_s$, then for each $t\in \mathbb R^+$, \begin{equation}\label{eq:LamThe} \Lambda_s(t)= E\exp{(it\theta_s)}=\int_{-1}^1 \exp{(itx)}\,dF_s(x). \end{equation} The characteristic measure of the Kingman convolution $\ast_{1, \delta}$, denoted by $\sigma_s$, has the Maxwell density \begin{equation}\label{Maxwell density} \frac{d\sigma_s(x)}{dx}=\frac{2(s+1)^{s+1}}{\Gamma(s+1)}x^{2s+1}\exp\{-(s+1)x^2\}, \quad 0<x<\infty, \end{equation} and the rad.ch.f. \begin{equation} \hat\sigma_s(t)=\exp\{-t^2/4(s+1)\}. \end{equation} \begin{example}\label{Symmetric convolution} In the case $\delta=1$ ($s=-\frac{1}{2}$) the Kingman convolution reduces to the symmetric convolution $\ast_{1, 1}$ with $\Lambda_{-\frac{1}{2}}(x)=\cos x\mbox{ and }\varkappa=2$, and the characteristic measure $\sigma_{-1/2}$ has the density $$\frac{d\sigma_{-\frac{1}{2}}(x)}{dx}=\sqrt{2/\pi}\,\exp(-x^2/2),$$ which is the density of a symmetric Gaussian distribution induced on $\mathbb R^+$. \end{example} \section{Cartesian product of Kingman convolutions} Denote by $\mathbb {R}^{+k}$, $k=1,2,\ldots$, the $k$-dimensional nonnegative cone of $\mathbb {R}^{k}$ and by $\mathcal{P}(\mathbb R^{+k})$ the class of all p.m.'s on $\mathbb R^{+k}$ equipped with the weak topology.
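Before passing to the multivariate setting, the identity $\hat\sigma_s(t)=\int_0^\infty\Lambda_s(xt)\,\sigma_s(dx)$ can be verified by direct quadrature. The sketch below is ours, for the index $s=1/2$, where the kernel has the closed form $\Lambda_{1/2}(u)=\sin u/u$; it compares a midpoint-rule integral of the kernel against the Maxwell density with $\exp\{-t^2/4(s+1)\}$.

```python
import math

def maxwell_density(x, s=0.5):
    # Maxwell density of the characteristic measure sigma_s
    return 2.0 * (s + 1) ** (s + 1) / math.gamma(s + 1) * x ** (2 * s + 1) \
        * math.exp(-(s + 1) * x * x)

def rad_chf_sigma(t, h=1e-3, xmax=12.0):
    # midpoint rule for  int_0^inf Lambda_s(x t) sigma_s(dx)  with s = 1/2
    total, x = 0.0, h / 2
    while x < xmax:
        u = x * t
        lam = math.sin(u) / u if u else 1.0   # Lambda_{1/2}(u) = sin(u)/u
        total += lam * maxwell_density(x) * h
        x += h
    return total

t = 2.0
lhs = rad_chf_sigma(t)
rhs = math.exp(-t * t / 6.0)   # exp(-t^2 / (4(s+1))) with s = 1/2
print(lhs, rhs)
```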
In the sequel, we will denote multidimensional vectors, random vectors (r.vec.'s) and their distributions by boldface letters. For each point $z$ of any set $Z$ let $\delta_z$ denote the Dirac measure (the unit mass) at the point $z$. In particular, if $\mathbf x=(x_1, x_2,\cdots,x_k)\in \mathbb R^{+k}$, then \begin{equation}\label{proddelta} \delta_{\mathbf {x}} = \delta_{x_1}\times\delta_{x_2}\times \ldots\times\delta_{x_k},\quad (k\; times), \end{equation} where the sign ``$\times$'' denotes the Cartesian product of measures. We put, for $\mathbf {x} = (x_1,\cdots, x_k)\mbox{ and }\mathbf {y} = (y_1,y_2,\cdots, y_k)\in \mathbb R^{+k},$ \begin{equation}\label{convdeltas}\delta_{\mathbf x}\bigcirc_{s, k} \delta_{\mathbf {y}} = \{\delta_{x_1}\circ _s \delta_{y_1}\} \times\{\delta_{x_2} \circ _s\delta_{y_2}\} \times\cdots \times \{\delta_{x_k} \circ_s \delta_{y_k}\},\quad (k\; times), \end{equation} where, here and below, for the sake of simplicity we denote the Kingman convolution operation $\ast_{1,\delta}$, $\delta=2(s+1)\ge 1$, simply by $\circ_{s}$, $s\ge -\frac{1}{2}$. Since convex combinations of p.m.'s of the form (\ref{proddelta}) are dense in $\mathcal P(\mathbb R^{+k})$, the relation (\ref{convdeltas}) can be extended to arbitrary p.m.'s $ \mathbf{F} \mbox{ and } \mathbf{G}\in\mathcal{P}( \mathbb R^{+k})$.
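For the boundary case $s=-\frac12$ of Example \ref{Symmetric convolution}, the coordinatewise convolution (\ref{convdeltas}) can be computed exactly on finitely supported measures, since $\delta_x\circ_{-1/2}\delta_y=\frac12\delta_{x+y}+\frac12\delta_{|x-y|}$. The following sketch is ours (measures are represented as point-to-mass dictionaries, all names hypothetical); it illustrates the product operation $\bigcirc_{s,k}$ and that total mass is preserved.

```python
from itertools import product

def circ_sym(mu, nu):
    """Symmetric convolution (Kingman case s = -1/2) of finitely supported
    measures on R^+, given as {point: mass} dicts:
    delta_x o delta_y = (delta_{x+y} + delta_{|x-y|}) / 2."""
    out = {}
    for (x, px), (y, py) in product(mu.items(), nu.items()):
        for z in (x + y, abs(x - y)):
            out[z] = out.get(z, 0.0) + 0.5 * px * py
    return out

def circ_sym_k(F, G):
    """Coordinatewise product convolution on R^{+k} (measures on k-tuples)."""
    out = {}
    for (xs, pF), (ys, pG) in product(F.items(), G.items()):
        # k-fold product of the two-point measures delta_{x_j} o delta_{y_j}
        parts = [circ_sym({x: 1.0}, {y: 1.0}) for x, y in zip(xs, ys)]
        for zs in product(*parts):
            w = pF * pG
            for j, z in enumerate(zs):
                w *= parts[j][z]
            out[zs] = out.get(zs, 0.0) + w
    return out

F = {(1.0, 2.0): 1.0}
G = {(1.0, 1.0): 0.5, (0.0, 0.0): 0.5}
H = circ_sym_k(F, G)
print(H)
```

Taking $G=\delta_{\mathbf 0}$ recovers the unit element of Theorem \ref{Theo:Kingmanalgebra}.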
Namely, we put \begin{equation}\label{convF} \mathbf {F} \bigcirc_{s, k} \mathbf {G} = \iint\limits_{ \mathbb R^{+k}} \delta_{\mathbf {x}} \bigcirc_{s, k} \delta_{\mathbf {y}} \mathbf {F}(d\mathbf {x}) \mathbf {G}(d\mathbf {y}) \end{equation} which means that for each continuous bounded function $\phi$ defined on $\mathbb R^{+k}$ \begin{equation}\label{convof} \int_{\mathbb R^{+k}} \phi({\mathbf z}) \mathbf {F} \bigcirc_{s, k} \mathbf {G} (d{\mathbf z})= \iint\limits_{ \mathbb R^{+k}}\big\{\int_{\mathbb R^{+k}} \phi({\mathbf z}) \delta_{{\mathbf x}} \bigcirc_{s, k} \delta_{{\mathbf y}}(d{\mathbf z})\big\}{ \mathbf F}(d{\mathbf x}) {\mathbf G}(d{\mathbf y}). \end{equation} In the sequel, the binary operation $\bigcirc_{s, k}$ will be called {\it the k-times Cartesian product of Kingman convolutions} and the pair $(\mathcal P( \mathbb R^{+k}), \bigcirc_{s, k})$ will be called {\it the k-dimensional Kingman convolution algebra}. It is easy to show that the binary operation $\bigcirc_{s, k}$ is continuous in the weak topology which together with (\ref{astKi}) and (\ref{convF}) implies the following theorem. \begin{theorem}\label{Theo:Kingmanalgebra} The pair $(\mathcal P{( \mathbb R^{+k})} ,\bigcirc_{s, k})$ is a commutative topological semigroup with $\delta_{\mathbf 0}$ as the unit element. Moreover, the operation $\bigcirc_{s, k}$ is distributive w.r.t. convex combinations of p.m.'s in $\mathcal P( \mathbb R^{+k})$. \end{theorem} \ For every ${\mathbf G}\in\mathcal P( \mathbb R^{+k})$ the k-dimensional rad.ch.f. 
$\hat{{\mathbf G}}({\mathbf t})$, ${\mathbf t}=(t_1, t_2, \cdots, t_k)\in \mathbb R^{+k}$, is defined by \begin{equation}\label{k-ra.ch.f.} \hat{\mathbf G}(\mathbf {t})=\int_{\mathbb R^{+k}} \prod_{j=1}^{k}\Lambda_s(t_jx_j){ \mathbf G}(\mathbf {dx}), \end{equation} where $\mathbf {x}=(x_1, x_2, \cdots, x_k)\in \mathbb R^{+k}.$ Let $\mathbf{\Theta}_s = \{\theta_{s, 1},\theta_{s, 2}, \cdots ,\theta_{s, k}\}$, where the $\theta_{s, j}$ are independent r.v.'s with the same distribution $F_s$. Next, assume that $ {\mathbf X}=\{X_1, X_2,..., X_k\}$ is a k-dimensional r.vec. with distribution $\mathbf{G}$ and that $\mathbf{X}$ is independent of $\mathbf{\Theta}_s$. We put \begin{equation}\label{[Theta,X]} [{\mathbf\Theta}_s,{\mathbf X}]=\{{\theta_{s,1} X_1, \theta_{s, 2} X_2,...,\theta_{s, k}X_k}\}. \end{equation} Then, the following formula is equivalent to (\ref{k-ra.ch.f.}) (cf. \cite{Ng4}): \begin{equation}\label{multiradchf} \widehat{\mathbf G}({\mathbf t})=Ee^{i<{\mathbf t},[{\mathbf\Theta_s, \mathbf X}]>},\qquad {\mathbf t}\in \mathbb R^{+k}. \end{equation} The reader is referred to Corollary 2.1 and Theorems 2.3 \& 2.4 of \cite{Ng4} for the principal properties of the above rad.ch.f. Given $s\ge -1/2$, define a map $H_{s, k}: \mathcal P(\mathbb R^{+k}) \rightarrow \mathcal P(\mathbb R^k)$ by \begin{equation}\label{k-map} H_{s, k}({\mathbf G})=\int_{\mathbb R^{+k}} (T_{c_1}F_s)\times(T_{c_2}F_s)\times \ldots\times(T_{c_k}F_s) {\mathbf G}(d{\mathbf c}), \end{equation} where, here and in the sequel, for a distribution $\mathbf F$ of a r.vec. $\mathbf X$ and a real number $c$ we denote by $T_c{\mathbf F}$ the distribution of $c{\mathbf X}$. Let us denote by $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ the subclass of $\mathcal P(\mathbb R^k)$ consisting of all symmetric p.m.'s defined by the right-hand side of (\ref{k-map}). By virtue of (\ref{k-ra.ch.f.})-(\ref{k-map}) it is easy to prove the following theorem.
\begin{theorem}\label{symmconvo} The set $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ is closed w.r.t. the weak convergence and the ordinary convolution $\ast$, and the following equation holds: \begin{equation}\label{Fourier=rad.ch.f.} \hat{\mathbf G}({\mathbf t})=\mathcal F(H_{s, k}({\mathbf G}))({\mathbf t}),\qquad {\mathbf t}\in {\mathbb R^{+k}}, \end{equation} where $\mathcal F({\mathbf K})$ denotes the ordinary characteristic function (Fourier transform) of a p.m. ${\mathbf K}$. Therefore, for any ${\mathbf G}\mbox{ and } {\mathbf K}\in \mathcal P(\mathbb R^{+k})$, \begin{equation}\label{convolequality} H_{s, k}({\mathbf G})\ast H_{s, k}({\mathbf K})=H_{s, k}({\mathbf G}\bigcirc_{s, k}{\mathbf K}), \end{equation} and the map $H_{s, k}$ commutes with convex combinations of distributions and scale changes $T_c$, $c>0$. Moreover, \begin{equation}\label{Gaussian-Rayleigh} H_{s, k}({\Sigma_{s, k}})=N({\mathbf 0}, {\mathbf I}), \end{equation} where $\Sigma_{s, k}$ denotes the k-dimensional Rayleigh distribution (see Definition \ref{Rayleigh}) and $N({\mathbf 0}, {\mathbf I})$ is the standard normal distribution on $\mathbb R^k$. Consequently, every Kingman convolution algebra $\big( \mathcal P(\mathbb R^{+k}), \bigcirc_{s, k}\big)$ is representable in the ordinary convolution algebra $\big(\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k}), \ast \big)$ and the map $H_{s, k}$ is a homeomorphism. \end{theorem} \begin{proof} First we prove equation (\ref{Fourier=rad.ch.f.}) by taking the Fourier transform of the right-hand side of (\ref{k-map}).
We have, for ${\mathbf t}\in \mathbb R^k,$ \begin{eqnarray*} \mathcal F(H_{s, k}({\mathbf G}))({\mathbf t})&=& \int_{\mathbb R^k}\prod_{j=1}^k \cos(t_jx_j)\,H_{s, k}({\mathbf G})(d{\mathbf x})\\ &=&\int_{\mathbb R^{+k}}\int_{\mathbb R^k} \prod_{j=1}^k\cos(t_jx_j)\,\big((T_{c_1}F_s)\times\cdots\times(T_{c_k}F_s)\big)(d{\mathbf x})\,{\mathbf G}(d{\mathbf c})\\ &=&\int_{\mathbb R^{+k}}\prod_{j=1}^k\Lambda_{s}(t_jc_j)\,{\mathbf G}(d{\mathbf c})\\ &=& \hat{\mathbf G}({\mathbf t}), \end{eqnarray*} which implies that the set $\tilde{ \mathcal P}_{s, k}(\mathbb{R}^{+k})$ is closed w.r.t. the weak convergence and the ordinary convolution $\ast$ and, moreover, that the equations (\ref{convolequality}) and (\ref{Gaussian-Rayleigh}) hold. \end{proof} \begin{definition}\label{k-ID} A p.m. ${\mathbf F} \in \mathcal P(\mathbb R^{+k})$ is called $\bigcirc_{s, k}-$infinitely divisible ($\bigcirc_{s, k}-$ID) if for every $m=1, 2, \ldots$ there exists ${\mathbf F}_m\in \mathcal P(\mathbb R^{+k})$ such that \begin{equation}\label{kID} { \mathbf F}={\mathbf F}_m\bigcirc_{s, k} {\mathbf F}_m\bigcirc_{s, k}\ldots \bigcirc_{s, k}{\mathbf F}_m\quad (m\;times). \end{equation} \end{definition} \begin{definition}\label{stability} $\mathbf F$ is called stable if for any positive numbers $a$ and $b$ there exists a positive number $c$ such that \begin{equation}\label{k-stability} T_a{\mathbf F}\;{\bigcirc_{s, k}}\;T_b{\mathbf F}=T_c{\mathbf F}. \end{equation} \end{definition} By virtue of Theorem \ref{symmconvo}, the following theorem holds true. \begin{theorem}\label{equivdef} A p.m. $\mathbf G$ is $\bigcirc_{s, k}$-ID, resp. stable, if and only if $H_{s, k}({\mathbf G})$ is ID, resp. stable, in the usual sense. \end{theorem} The following lemma will be used in the representation of $\bigcirc_{s, k}$-ID laws, $k\ge 2$. \begin{lemma}\label{Bessellimittheorem} (i) For every $t\ge 0$, \begin{equation}\label{Bessellimittheorem 1} \lim_{x\rightarrow 0}\frac{1-\Lambda_s(tx)}{x^2}= \lim_{x\rightarrow 0}\frac{1-Ee^{itx\theta_s}}{x^2}=\frac{t^2}{2}.
\end{equation} (ii) For any ${\mathbf x}=(x_0, x_1,\cdots ,x_k)\mbox{ and }{\mathbf t}=(t_0, t_1, \cdots, t_k)\in\mathbb R^{k+1}$, $k=1,2, \ldots$, \begin{equation}\label{bessellimittheorem2} \lim_{\rho\rightarrow 0}\frac{1-\prod_{r=0}^k \Lambda_s (t_rx_r)}{\rho^2}=\frac{1}{2}\sum_{r=0}^k \lambda^2_r( Arg({\mathbf x}))t_r^2, \end{equation} where $\rho=||\mathbf x||$, $Arg({\mathbf x})=\frac{\mathbf x}{||\mathbf x||},\mbox{ and } \lambda_r( Arg({\mathbf x}))$, $r=0,1, \ldots,k$, are given by \begin{equation}\label{polarization} \lambda_r( Arg({\mathbf x}))= \begin{cases}\cos\phi & r=0,\\ \sin\phi\sin\phi_1\cdots \sin\phi_{r-1}\cos\phi_{r} &1\le r\le k-2,\\ \sin\phi\sin\phi_1\cdots\sin\phi_{k-2}\cos\psi & r={k-1},\\ \sin\phi\sin\phi_1\cdots\sin\phi_{k-2}\sin\psi & r=k, \end{cases} \end{equation} where $0\le \psi, \phi, \phi_r\le\pi/2$, $r=1,2,\ldots,k-2$, are the angles appearing in the polar form of $\mathbf{x}$. \end{lemma} \begin{definition}\label{Rayleigh} A k-dimensional distribution, say $\pmb{\mathbf \Sigma}_{s, k}$, is called a Rayleigh distribution if \begin{equation}\label{k-dimension Rayleigh} \pmb{\mathbf \Sigma}_{s, k}=\sigma_s\times\sigma_s\times\cdots\times\sigma_s \quad (k\;times). \end{equation} Further, a distribution ${\mathbf F}\in \mathcal P(\mathbb R^{+k})$ is called a Rayleighian distribution if there exist nonnegative numbers $\lambda_r$, $r=1,2, \cdots, k$, such that \begin{equation}\label{k-dimensional rayleighian} { \mathbf F}=\{T_{\lambda_1}\sigma_s\}\times \{T_{\lambda_2}\sigma_s\} \times\ldots \times\{T_{\lambda_k}\sigma_s\}. \end{equation} \end{definition} \indent It is evident that every Rayleigh distribution is a Rayleighian distribution. Moreover, every Rayleighian distribution is $\bigcirc_{s, k}$-ID.
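The representation (\ref{multiradchf}) and the product structure (\ref{k-dimension Rayleigh}) can be combined into a Monte Carlo sanity check. The sketch below is our addition: it takes $s=1/2$, where $F_{1/2}$ is the uniform distribution on $[-1,1]$ (its ch.f. is $\sin t/t=\Lambda_{1/2}(t)$) and $\sigma_s$ is the law of the norm of a centred Gaussian vector in $\mathbb R^3$ with coordinate variance $1/3$; the estimated rad.ch.f. of $\Sigma_{s,k}$ is compared with $\exp\{-\|{\mathbf t}\|^2/6\}$. All function names are ours.

```python
import math, random

def sample_maxwell(rng):
    """One draw from the characteristic measure sigma_s for s = 1/2:
    the norm of a centred Gaussian vector in R^3 with coordinate
    variance 1/3, whose rad.ch.f. is exp(-t^2/6)."""
    return math.sqrt(sum(rng.gauss(0.0, math.sqrt(1.0 / 3.0)) ** 2
                         for _ in range(3)))

def rad_chf_rayleigh_mc(ts, n=200_000, seed=5):
    """Estimate the rad.ch.f. of Sigma_{s,k} = sigma_s x ... x sigma_s via
    G_hat(t) = E exp(i <t, [Theta_s, X]>), with theta_{s,j} ~ Uniform[-1,1]
    (the measure F_{1/2}, since Lambda_{1/2}(u) = sin(u)/u is its ch.f.)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        phase = sum(t * rng.uniform(-1.0, 1.0) * sample_maxwell(rng)
                    for t in ts)
        acc += math.cos(phase)        # the imaginary part averages out
    return acc / n

ts = (0.8, 0.5)
est = rad_chf_rayleigh_mc(ts)
exact = math.exp(-(0.8 ** 2 + 0.5 ** 2) / 6.0)
print(est, exact)
```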
By virtue of (\ref{Maxwell density}) and (\ref{k-dimension Rayleigh}) it follows that the k-dimensional Rayleigh density is given by \begin{equation}\label{density k-dimension Rayleigh} g({\mathbf x})=\frac{2^k(s+1)^{k(s+1)}}{\Gamma^k(s+1)}\prod_{j=1}^k x_j^{2s+1}\,\exp\{-(s+1)||{\mathbf x}||^2\}, \end{equation} where ${\mathbf x}=(x_1, x_2,\ldots, x_k)\in \mathbb R^{+k}$, and the corresponding rad.ch.f. is given by \begin{equation} \hat\Sigma_{s, k}({\mathbf t})=\exp(-||{\mathbf t}||^2/4(s+1)),\quad {\mathbf t}\in \mathbb R^{+k}. \end{equation} Finally, the rad.ch.f. of a Rayleighian distribution $\mathbf F\mbox{ on } \mathbb R^{+k}$ is given by \begin{equation}\label{rad.ch.rayleighian} \hat{\mathbf F}({\mathbf t})=\exp\Big(-\frac{1}{4(s+1)}\sum_{j=1}^k\lambda_j^2t_j^2\Big), \end{equation} where $\lambda_j$, $j=1, 2, \ldots, k$, are some nonnegative numbers. \section{An analogue of the L\'evy-Cram\'er Theorem in k-dimensional Kingman convolution algebras} We say that a distribution ${\mathbf F}\mbox{ on } \mathbb R^k$ has dimension $m$, $1\le m \le k$, if $m$ is the dimension of the smallest hyperplane which contains the support of $\mathbf F$. The following theorem can be regarded as a version of the L\'evy-Cram\'er Theorem for the multi-dimensional Kingman convolution. The case $k=1$ was proved by Urbanik \cite{U2}. \begin{theorem}\label{Levy-Cramer} Suppose that $\mathbf G,\, \mathbf K \in \mathcal P(\mathbb R^{+k})$ and \begin{equation}\label{decomposi} \Sigma_{s, k}=\mathbf G \bigcirc_{s, k} \mathbf K.
\end{equation} Then, $\mathbf G\mbox{ and } \mathbf K$ are both Rayleighian distributions fulfilling the condition that there exist nonnegative numbers $\lambda_r\mbox{ and } \gamma_r$, $r=1, 2,\ldots, k$, such that $$\lambda_r^2+\gamma_r^2=1,\qquad r=1, 2, \ldots, k,$$ and \begin{equation}\label{form 1} { \mathbf G} =T_{\lambda_1}\sigma_s\times T_{\lambda_2}\sigma_s\times \ldots\times T_{\lambda_k} \sigma_s \end{equation} and \begin{equation}\label{form 2} {\mathbf K}=T_{\gamma_1}\sigma_s\times T_{\gamma_2}\sigma_s\times\ldots \times T_{\gamma_k}\sigma_s. \end{equation} \end{theorem} \begin{proof} Suppose that equation (\ref{decomposi}) holds. Applying the map $H_{s, k}$ we obtain \begin{equation*} H_{s, k}(\Sigma_{s, k})=H_{s, k}({\mathbf G})\ast H_{s, k}({\mathbf K}), \end{equation*} which implies that \begin{equation*} N({\mathbf 0}, {\mathbf I})=H_{s, k}({\mathbf G})\ast H_{s, k}({\mathbf K}). \end{equation*} By the well-known L\'evy-Cram\'er Theorem on $\mathbb R^k$ (cf. Linnik and Ostrovskii \cite{LiOst}), $H_{s, k}({\mathbf G})$ and $H_{s, k}({\mathbf K})$ are both symmetric Gaussian distributions on $\mathbb R^k$. Consequently, they must be of the form (\ref{form 1}) and (\ref{form 2}). \end{proof} \end{document}
\begin{document} \title[Some problems on ruled hypersurfaces in nonflat complex space forms]{Some problems on ruled hypersurfaces\\ in nonflat complex space forms} \author[O.~P\'erez-Barral]{Olga P\'erez-Barral} \address{Department of Mathematics, University of Santiago de Compostela, Spain.} \email{[email protected]} \thanks{The author acknowledges support by projects MTM2016-75897-P (AEI/FEDER, Spain) and ED431C 2019/10 (Xunta de Galicia, Spain), and by a research grant under the Ram\'on y Cajal project RYC-2017-22490 (AEI/FSE, Spain).} \subjclass[2010]{53B25, 53C42, 53C55} \begin{abstract} We study ruled real hypersurfaces whose shape operators have constant squared norm in nonflat complex space forms. In particular, we prove the nonexistence of such hypersurfaces in the projective case. We also show that biharmonic ruled real hypersurfaces in nonflat complex space forms are minimal, which provides their classification due to a known result of Lohnherr and Reckziegel. \end{abstract} \keywords{Complex projective space, complex hyperbolic space, ruled hypersurface, minimal hypersurface, strongly 2-Hopf hypersurface, biharmonic hypersurface.} \maketitle \section{Introduction} A ruled real hypersurface in a nonflat complex space form, that is, in a complex projective or hyperbolic space, $\mathbb{C}P^{n}$ or $\mathbb{C}H^{n}$, is a submanifold of real codimension one which is foliated by totally geodesic complex hypersurfaces of $\mathbb{C}P^{n}$ or $\mathbb{C}H^{n}$. Ruled hypersurfaces in nonflat complex space forms constitute a very large class of real hypersurfaces. It becomes then an interesting problem to classify these objects under some additional geometric properties. For example, Lohnherr and Reckziegel classified ruled minimal hypersurfaces in nonflat complex space forms into three classes~\cite{LR}: Kimura type hypersurfaces in $\mathbb{C}P^{n}$ or $\mathbb{C}H^{n}$, bisectors in $\mathbb{C}H^{n}$ and Lohnherr hypersurfaces in $\mathbb{C}H^{n}$. 
Moreover, they proved that Lohnherr hypersurfaces in $\mathbb{C}H^{n}$ are the only complete ruled hypersurfaces with constant principal curvatures in nonflat complex space forms~\cite{LR}. Another important notion in the context of real hypersurfaces is that of Hopf hypersurface, which is defined as a real hypersurface whose Reeb vector field is an eigenvector of the shape operator at every point. Ruled hypersurfaces in nonflat complex space forms are never Hopf; indeed, the smallest tangent distribution invariant under the shape operator and containing the Reeb vector field has rank two. In particular, the minimal ruled hypersurfaces mentioned above have an additional property, which has been introduced in \cite{DDV:annali}: they are strongly 2-Hopf, that is, the smallest distribution invariant under the shape operator and containing the Reeb vector field is integrable and has rank two, and the principal curvatures associated with the principal directions defining such distribution are constant along its integral submanifolds. This concept is also important since it characterizes, at least in the complex projective and hyperbolic planes $\mathbb{C}P^{2}$ and $\mathbb{C}H^{2}$, the real hypersurfaces of cohomogeneity one that can be constructed as a union of principal orbits of a polar action of cohomogeneity two on the ambient space. Motivated by some recent results concerning the classification of ruled hypersurfaces in nonflat complex space forms having constant mean curvature \cite{holi} or having constant scalar curvature \cite{Maeda_2}, we first focus on studying ruled hypersurfaces in $\mathbb{C}P^{n}$ or $\mathbb{C}H^{n}$ whose shape operators have constant squared norm, proving that there are no such hypersurfaces in $\mathbb{C}P^{n}$, whereas any possible example in $\mathbb{C}H^{n}$ must be strongly 2-Hopf. \begin{maintheorem1} Let $M$ be a ruled real hypersurface in a nonflat complex space form whose shape operator has constant norm.
Then, $M$ is a strongly 2-Hopf real hypersurface in a complex hyperbolic space. In particular, there are no ruled hypersurfaces in complex projective spaces whose shape operator has constant norm. \end{maintheorem1} We note that, in the hyperbolic case, the Lohnherr hypersurfaces (as homogeneous ruled hypersurfaces) are examples of ruled real hypersurfaces with shape operator of constant norm. The problem of deciding whether these are the only such examples remains open. A hypersurface is said to be biharmonic if its defining isometric immersion is a biharmonic map, that is, a smooth map which is a critical point of the so-called bienergy functional (see, for example, \cite{ou} for more information on biharmonic hypersurfaces). It is well known that any minimal hypersurface is biharmonic, and there exist some known results and conjectures claiming that the converse is, under certain circumstances, also true. For example, Chen conjectured that any biharmonic hypersurface in the Euclidean space is minimal. In the context of Riemannian manifolds of nonpositive curvature, it has been proved that both compact biharmonic hypersurfaces and biharmonic hypersurfaces with constant mean curvature are exactly the minimal ones \cite{jiang,ou}. However, if one removes these conditions one cannot ensure (in principle) minimality. Indeed, deciding whether, in general, biharmonicity implies minimality in ambient spaces of nonpositive curvature is the content of the generalized Chen's conjecture, proposed by Caddeo, Montaldo and Oniciuc in \cite{caddeo}. Ou and Tang have constructed some counterexamples which prove that this conjecture is not true \cite{outang}. However, since the counterexamples provided by these two authors are incomplete, the conjecture is still one of the main motivations for studying biharmonic hypersurfaces in the setting of Riemannian manifolds of nonpositive curvature.
Then, it becomes interesting to study biharmonic hypersurfaces satisfying other conditions, such as ruled ones, particularly in complex hyperbolic spaces, which are negatively curved. We point out that a general study of biharmonic submanifolds of arbitrary codimension in complex space forms has been developed in \cite{fetcu}. It has been recently proved (see \cite{toru}) that biharmonic ruled hypersurfaces in complex projective spaces are minimal. Our second goal in this article is to extend this result to the entire context of nonflat complex space forms. In particular, we prove the following result. \begin{maintheorem2} Let $M$ be a biharmonic ruled real hypersurface in a nonflat complex space form. Then, $M$ is minimal, and therefore an open part of one of the following hypersurfaces: \begin{enumerate} \item a Kimura type hypersurface in a complex projective or hyperbolic space, or \item a bisector in a complex hyperbolic space, or \item a Lohnherr hypersurface in a complex hyperbolic space. \end{enumerate} \end{maintheorem2} \section{Preliminaries}\label{sec:preliminaries} In this section we introduce the notation and terminology that we are going to use throughout this article. For more information, we refer to \cite{CR} and \cite{NR}. Let $\bar{M}$ be a Riemannian manifold and $M$ a smooth hypersurface of $\bar{M}$. Since the arguments that follow are local, we can assume that $M$ is embedded and take a unit normal vector field $\xi$ on $M$. We denote by $\langle\,\cdot\,,\,\cdot\,\rangle$ the metric of $\bar{M}$, by $\bar{\nabla}$ its Levi-Civita connection and by $\bar{R}$ its curvature tensor. Let $\nabla$ be the Levi-Civita connection of $M$. Then, the relation between $\nabla$ and $\bar{\nabla}$ is given by the Gauss formula $\bar{\nabla}_{X}Y=\nabla_{X}Y+\ensuremath{I\!I}(X,Y)$, where $\ensuremath{I\!I}$ denotes the second fundamental form of $M$. 
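As a concrete illustration of the Gauss formula (a toy example of ours, not from the text): along the great circle $\gamma(t)=(\cos t,\sin t,0)$ of the unit sphere $S^2\subset\mathbb R^3$, the ambient acceleration $\bar{\nabla}_{\gamma'}\gamma'=\gamma''=-\gamma$ is purely normal, so $\nabla_{\gamma'}\gamma'=0$ (great circles are geodesics) and $\ensuremath{I\!I}(\gamma',\gamma')=-\xi$ for the outward unit normal $\xi=\gamma$. A finite-difference check:

```python
import math

def gamma(t):
    # great circle on the unit sphere S^2 in R^3
    return (math.cos(t), math.sin(t), 0.0)

def second_derivative(f, t, h=1e-5):
    # central second difference of a curve in R^3
    a, b, c = f(t - h), f(t), f(t + h)
    return tuple((x - 2 * y + z) / h ** 2 for x, y, z in zip(a, b, c))

t0 = 0.3
p = gamma(t0)                       # point on the sphere; also the normal xi
acc = second_derivative(gamma, t0)  # ambient derivative of gamma' along gamma'
normal_part = sum(a * x for a, x in zip(acc, p))           # <gamma'', xi>
tangential = tuple(a - normal_part * x for a, x in zip(acc, p))
print(normal_part, tangential)
```

Up to finite-difference error, the normal component is $-1=-\langle\gamma',\gamma'\rangle$ and the tangential component vanishes, as the Gauss formula predicts.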
The shape operator $S$ of $M$ is the endomorphism of the tangent bundle of $M$ given by $SX=-(\bar{\nabla}_{X}\xi)^{\top}$, where $X$ is a tangent vector to $M$ and $(\cdot)^\top$ denotes the orthogonal projection onto the tangent space to $M$. As the shape operator is a self-adjoint endomorphism with respect to the induced metric on $M$, it can be diagonalized with real eigenvalues and orthogonal eigenspaces. Each eigenvalue $\lambda$ is called a principal curvature and its corresponding eigenspace, $T_{\lambda}$, is a principal curvature space. In this paper we will need some second order equations of submanifold geometry. In particular, we will use the Codazzi equation \begin{equation*} \langle \bar{R}(X,Y)Z,\xi\rangle = \langle(\nabla_{X}S)Y,Z\rangle-\langle(\nabla_{Y}S)X,Z\rangle, \end{equation*} as well as the Gauss equation \begin{equation*} \langle\bar{R}(X,Y)Z,W\rangle = \langle R(X,Y)Z,W\rangle + \langle SX,Z\rangle\langle SY,W\rangle -\langle SX,W\rangle\langle SY,Z\rangle, \end{equation*} where $X,Y,Z,W\in\Gamma(TM)$ are sections of $TM$. In what follows, we will assume that the ambient manifold $\bar{M}$ is a complex space form of complex dimension $n$ and constant holomorphic sectional curvature $c\in\mathbb{R}$, $\bar{M}^{n}(c)$. It is known that its curvature tensor $\bar{R}$ is given by \begin{align*} \langle\bar{R}(X,Y)Z,W\rangle={}& \frac{c}{4}\Bigl( \langle Y,Z\rangle\langle X,W\rangle -\langle X,Z\rangle\langle Y,W\rangle\\[-1ex] &\phantom{\frac{c}{4}\Bigl(} +\langle JY,Z\rangle\langle JX,W\rangle -\langle JX,Z\rangle\langle JY,W\rangle -2\langle JX,Y\rangle\langle JZ,W\rangle \Bigr), \end{align*} where $J$ denotes the complex structure of $\bar{M}^{n}(c)$. Moreover, since $\bar{M}^{n}(c)$ is a K\"ahler manifold, $\bar{\nabla}J=0$. Now let $M$ be a real hypersurface of $\bar{M}^{n}(c)$, that is, a real submanifold of real codimension one. The tangent vector field $J\xi$ is usually called the Hopf or Reeb vector field of $M$. 
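The curvature tensor of $\bar{M}^{n}(c)$ displayed above can be checked numerically against its expected algebraic properties. The following sketch is ours (plain Python, with $n=2$ and $c=4$ as illustrative choices); it verifies the first Bianchi identity and that the holomorphic sectional curvature $\langle\bar{R}(X,JX)JX,X\rangle/\|X\|^{4}$ equals $c$.

```python
import random

n = 2                          # complex dimension; model tangent space R^{2n}
c = 4.0                        # constant holomorphic sectional curvature

def J(v):
    # standard complex structure on R^{2n}: (x, y) -> (-y, x) blockwise
    out = [0.0] * (2 * n)
    for k in range(n):
        out[2 * k], out[2 * k + 1] = -v[2 * k + 1], v[2 * k]
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def comb(*pairs):
    # linear combination  sum_i coeff_i * vector_i
    out = [0.0] * (2 * n)
    for coeff, vec in pairs:
        for i, x in enumerate(vec):
            out[i] += coeff * x
    return out

def Rbar(X, Y, Z):
    """Curvature tensor of a complex space form, applied to Z."""
    JX, JY, JZ = J(X), J(Y), J(Z)
    return comb((c / 4 * dot(Y, Z), X), (-c / 4 * dot(X, Z), Y),
                (c / 4 * dot(JY, Z), JX), (-c / 4 * dot(JX, Z), JY),
                (-c / 2 * dot(JX, Y), JZ))

rng = random.Random(0)
X, Y, Z = ([rng.gauss(0, 1) for _ in range(2 * n)] for _ in range(3))
bianchi = comb((1, Rbar(X, Y, Z)), (1, Rbar(Y, Z, X)), (1, Rbar(Z, X, Y)))
holo = dot(Rbar(X, J(X), J(X)), X) / dot(X, X) ** 2
print(max(abs(v) for v in bianchi), holo)
```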
We define the integer-valued function $h$ on $M$ as the number of the principal curvature spaces onto which $J\xi$ has nontrivial projection. A real hypersurface $M$ of $\bar{M}^{n}(c)$ is said to be ruled if it is foliated by totally geodesic complex hypersurfaces of $\bar{M}^{n}(c)$. Equivalently, $M$ is ruled if the orthogonal distribution to the Hopf vector field $J\xi$ is integrable and its leaves are totally geodesic submanifolds of $\bar{M}^{n}(c)$. Locally, ruled hypersurfaces are embedded, but globally, they may have self-intersections and singularities. See~\cite{Maeda} and~\cite[Section 8.5.1]{CR} for more information on ruled real hypersurfaces. We finally recall the notion of strongly 2-Hopf real hypersurface \cite{DDV:annali}. A real hypersurface $M$ in $\bar{M}^{n}(c)$ is said to be strongly 2-Hopf if the following conditions hold: \begin{enumerate} \item The smallest $S$-invariant distribution $\mathcal{D}$ of $M$ that contains the Hopf vector field $J\xi$ has rank 2. \item $\mathcal{D}$ is integrable. \item The spectrum of $S|_{\mathcal{D}}$ is constant along the integral submanifolds of $\mathcal{D}$. \end{enumerate} Notice that the first condition is equivalent to $h=2$ and that a hypersurface satisfying both (1) and (2) is said to be a 2-Hopf hypersurface \cite[Section 8.5.1]{CR}. \section{Proof of the Main Theorems} Let $M$ be a ruled real hypersurface in a nonflat complex space form $\bar{M}^{n}(c)$, $c\neq0$, with (locally defined) unit normal vector field $\xi$. We briefly recall some facts from~\cite[Section 3]{holi}. It is known that there exists an open and dense subset $\cal{U}$ of $M$ where $h=2$. 
$\cal{U}$ has exactly two nonzero principal curvature functions $\alpha$ and $\beta$, both of multiplicity one at every point, and $J\xi=aU+bV$ for some unit vector fields $U\in\Gamma(T_\alpha)$ and $V\in \Gamma(T_\beta)$ and nonvanishing smooth functions $a$, $b\colon \cal{U}\to\mathbb{R}$ satisfying $a^2+b^2=1$ and \begin{equation}\label{eq:1} a^2=\frac{\alpha}{\alpha-\beta}, \qquad b^2=\frac{\beta}{\beta-\alpha}. \end{equation} From now on, we will work in the open and dense subset $\cal{U}$ of $M$. With this notation, it can be proved~\cite[Lemma~3.1]{DD:indiana} that there exists a unit vector field $A\in\Gamma(T_0)$ such that \begin{align}\label{eq:alg} J\xi&=aU+bV, &JU&=-bA-a\xi, &JV&=aA-b\xi, &JA&=bU-aV. \end{align} Taking these expressions into account, we obtain the following. \begin{proposition}\label{prop:Levi-Civita} Let $M$ be a ruled hypersurface in a nonflat complex space form. Then, its Levi-Civita connection satisfies the following equations: \begin{align*} &\langle\nabla_{U}U,V\rangle=\frac{V(\alpha)}{\alpha-\beta}, &&\langle\nabla_{V}V,U\rangle=-\frac{U(\beta)}{\alpha-\beta},\\ &\langle\nabla_{U}U,A\rangle=\frac{4A(\alpha)-3abc}{4\alpha}, &&\langle\nabla_{V}V,A\rangle=\frac{4A(\beta)+3abc}{4\beta},\\ &\langle\nabla_{U}V,A\rangle=\frac{3c}{4(\alpha-\beta)}+\alpha-\frac{aA(\alpha)}{b\alpha}, &&\langle\nabla_{V}U,A\rangle=\frac{3c}{4(\alpha-\beta)}-\beta-\frac{bA(\beta)}{a\beta},\\ &\langle\nabla_{A}U,V\rangle=\frac{ac\beta-4a\alpha\beta^{2}-4b\alpha A(\beta)}{4a\beta(\alpha-\beta)}, &&\nabla_{A}A=0. \end{align*} Moreover, for any unit vector field orthogonal to $A$ in the 0-principal curvature distribution, $W\in\Gamma(T_{0}\ominus\mathbb{R}A)$, the following relations hold: \begin{align*} &\langle\nabla_{U}U,W\rangle=\frac{W(\alpha)}{\alpha}, &&\langle\nabla_{V}V,W\rangle=\frac{W(\beta)}{\beta}, &&\langle\nabla_{W}U,V\rangle=\frac{b\alpha W(\beta)}{a\beta(\beta-\alpha)}. 
\end{align*} In addition, \begin{align*} &U(\beta)=-\frac{\beta^{2} U(\alpha)+2ab\alpha(\alpha-\beta)V(\beta)}{3\alpha\beta}, &&V(\alpha)=\frac{2ab\beta(\alpha-\beta)U(\alpha)-\alpha^{2}V(\beta)}{3\alpha\beta},\\ &A(\alpha)=\frac{b(\alpha-\beta)(a\beta(4\alpha\beta-c)+2b\alpha A(\beta))}{2\beta^{2}}, &&W(\alpha)=-\frac{\alpha}{\beta}W(\beta). \end{align*} \end{proposition} \begin{proof} Since $U$ and $V$ are orthogonal eigenvectors of the shape operator $S$ associated with the eigenvalues $\alpha$ and $\beta$, respectively, we have \begin{align*} \langle (\nabla_{U}S)V, U\rangle &= \langle \nabla_{U}(SV)-S\nabla_{U}V, U\rangle = \langle \nabla_{U}(\beta V), U\rangle - \alpha\langle \nabla_{U}V, U\rangle\\ &=\langle U(\beta)V+\beta\nabla_{U}V, U\rangle -\alpha \langle\nabla_{U}V,U \rangle= -(\alpha-\beta)\langle \nabla_{U}V, U\rangle. \end{align*} As $U$ has constant length, $\langle \nabla_{V}U,U\rangle=0$, and thus, proceeding as before, \begin{align*} \langle (\nabla_{V}S)U, U\rangle &= \langle \nabla_{V}(SU)-S\nabla_{V}U, U\rangle = \langle \nabla_{V}(\alpha U), U\rangle - \alpha\langle \nabla_{V}U, U\rangle\\ &=\langle V(\alpha)U+\alpha\nabla_{V}U,U\rangle-\alpha\langle \nabla_{V}U, U\rangle=V(\alpha). \end{align*} Using the expression for the curvature tensor of a complex space form and the relations~\eqref{eq:alg}, we obtain that $\langle\bar{R}(U,V)U,\xi\rangle=0$. Then, using the previous relations to apply the Codazzi equation to the triple $(U,V,U)$, we get \begin{equation}\label{eq:cod_uvu} 0=V(\alpha)+(\alpha-\beta)\langle \nabla_{U}V,U\rangle, \end{equation} which gives the first relation in the statement. Analogously, the Codazzi equation applied to the triple $(V,U,V)$ yields \begin{equation}\label{eq:cod_vuv} 0=U(\beta)-(\alpha-\beta)\langle\nabla_{V}U,V\rangle, \end{equation} which is equivalent to the second relation in the statement. 
Since $\bar{\nabla}J=0$, using the definition of the shape operator and the relations $J\xi=aU+bV$ and $\langle\nabla_{U}U,U\rangle=0$, we obtain \begin{align*} U(a)&= U\langle J\xi,U\rangle = \langle \bar{\nabla}_{U}J\xi,U\rangle+\langle J\xi,\bar{\nabla}_{U} U\rangle = \langle J\bar{\nabla}_{U}\xi,U\rangle+\langle J\xi,\nabla_U U\rangle\\ &= \langle SU,JU \rangle + \langle aU+bV,\nabla_{U}U\rangle = b\langle V,\nabla_{U}U \rangle = -b\langle \nabla_{U}V,U \rangle. \end{align*} By multiplying this expression by $2a$ and taking into account that \begin{equation*} 2aU(a)=U(a^{2})=U\left(\frac{\alpha}{\alpha-\beta}\right)=\frac{\alpha U(\beta)-\beta U(\alpha)}{(\alpha-\beta)^{2}}, \end{equation*} we get, using \eqref{eq:cod_uvu}, \begin{equation}\label{eq:deriv_ua} 0=\beta U(\alpha)-\alpha U(\beta)+2ab(\alpha-\beta)V(\alpha). \end{equation} Analogously, expanding the relation $V(a)=V\langle J\xi,U\rangle$, we deduce, inserting \eqref{eq:cod_vuv}, that \begin{equation}\label{eq:deriv_va} 0=\beta V(\alpha)-\alpha V(\beta)-2ab(\beta-\alpha)U(\beta). \end{equation} Equations \eqref{eq:deriv_ua} and \eqref{eq:deriv_va} constitute a linear system in the unknowns $U(\beta)$ and $V(\alpha)$. After some calculations using~\eqref{eq:1}, we get that the determinant of the matrix of this system vanishes if and only if $\alpha\beta$ does, which cannot occur in $\mathcal{U}$. Then, there exists a unique solution given by \begin{align}\label{eq:ubeta,valpha} &U(\beta)=-\frac{2ab\alpha(\alpha-\beta)V(\beta)+\beta^{2}U(\alpha)}{3\alpha\beta}, &&V(\alpha)=\frac{2ab\beta(\alpha-\beta)U(\alpha)-\alpha^{2}V(\beta)}{3\alpha\beta}. \end{align} Now, proceeding as above, the Codazzi equation applied to the triples $(U,A,U)$, $(V,A,V)$, $(A,U,A)$ and $(A,V,A)$ yields \begin{align}\label{eq:cod_A} &\langle\nabla_{U}U,A\rangle=\frac{4A(\alpha)-3abc}{4\alpha}, &&\langle \nabla_{V}V,A\rangle=\frac{3abc+4A(\beta)}{4\beta},\\\nonumber &\langle\nabla_{A}A,U\rangle=0, &&\langle\nabla_{A}A,V\rangle=0.
\end{align} Since $\bar{\nabla}J=0$ and $J\xi=aU+bV$, expanding the relations $0=U\langle J\xi,A\rangle$ and $0=V\langle J\xi,A\rangle$, inserting the expressions for $\langle\nabla_{U}A,U\rangle$ and $\langle \nabla_{V}A,V\rangle$ that follow from~\eqref{eq:cod_A}, and using~\eqref{eq:1}, we have \begin{align}\label{eq:deriv_A} &\langle\nabla_{U}V,A\rangle= \frac{3c}{4(\alpha-\beta)}+\alpha-\frac{aA(\alpha)}{b\alpha}, &&\langle \nabla_{V}A,U \rangle =\frac{bA(\beta)}{a\beta}+\beta-\frac{3c}{4(\alpha-\beta)}. \end{align} Now, the Codazzi equation applied to the triple $(V,A,U)$ yields, after inserting the expression for $\langle\nabla_{V}A,U\rangle$ given in \eqref{eq:deriv_A}, \begin{align*} 0&=\langle \bar{R}(V,A)U,\xi\rangle-\langle(\nabla_V S)A,U\rangle+\langle(\nabla_A S)V,U\rangle\\%\nonumber &=\frac{c(2\alpha+\beta)}{4(\alpha-\beta)}+\alpha\langle\nabla_{V}A,U\rangle-(\alpha-\beta)\langle\nabla_{A}V,U\rangle =-\frac{c}{4}+\alpha\beta+(\beta-\alpha)\langle\nabla_{A}V,U\rangle+\frac{b\alpha A(\beta)}{a\beta}, \end{align*} from where \begin{equation}\label{eq:cod_vau} \langle\nabla_{A}V,U\rangle=\frac{4b\alpha A(\beta)+4a\alpha\beta^{2}-ac\beta}{4a\beta(\alpha-\beta)}. \end{equation} Similarly, applying the Codazzi equation to the triple $(U,V,A)$ and using the expressions for $\langle\nabla_{U}V,A\rangle$ and $\langle\nabla_{V}U,A\rangle$ given by~\eqref{eq:deriv_A}, we obtain \begin{align*} 0&=\langle \bar{R}(U,V)A,\xi\rangle-\langle (\nabla_U S)V,A\rangle+\langle(\nabla_V S)U,A\rangle\\&=-\frac{c}{4}-\beta\langle\nabla_{U}A,V\rangle+\alpha\langle\nabla_{V}A,U\rangle=\frac{c}{2}-2\alpha\beta+\frac{a\beta A(\alpha)}{b\alpha}-\frac{b\alpha A(\beta)}{a\beta}, \end{align*} from where, using~\eqref{eq:1}, we get the following relation between $A(\alpha)$ and $A(\beta)$: \begin{equation}\label{eq:a_alpha} A(\alpha)=\frac{b(\alpha-\beta)(a\beta(4\alpha\beta-c)+2b\alpha A(\beta))}{2\beta^{2}}. 
\end{equation} Finally, let $W\in\Gamma(T_{0}\ominus\mathbb{R}A)$ be an arbitrary unit vector field orthogonal to $A$ in the 0-principal curvature distribution. Using the expressions for $J\xi, JU$ and $JA$ given in~\eqref{eq:alg}, the Codazzi equation applied to the triple $(A,U,W)$ yields $\langle\nabla_{A}U,W\rangle=0$, from where we deduce, using \eqref{eq:cod_A}, that $\nabla_{A}U$ is proportional to $V$. In particular, since $T_{0}\ominus\mathbb{R}A$ is a complex distribution, $\langle\nabla_{A}U,JW\rangle=0$. Expanding the relation $0=A\langle JU,W\rangle$, and taking the previous fact into account, as well as the expression for $JU$ given in~\eqref{eq:alg}, one gets \begin{equation*} b\langle\nabla_{A}A,W\rangle=\langle\nabla_{A}U,JW\rangle=0. \end{equation*} Then, $\langle\nabla_{A}A,W\rangle=0$ and, in fact, using~\eqref{eq:cod_A}, we obtain that $\nabla_{A}A=0$. Proceeding as above, the Codazzi equation applied to the triples $(U,W,U)$ and $(V,W,V)$ yields \begin{align}\label{eq:cod_w} &\langle \nabla_{U}U,W\rangle=\frac{W(\alpha)}{\alpha}, &&\langle \nabla_{V}V,W\rangle=\frac{W(\beta)}{\beta}. \end{align} Expanding the relations $0=U\langle J\xi,W\rangle$ and $0=V\langle J\xi,W\rangle$ and inserting the expressions for $\langle\nabla_{U}U,W\rangle$ and $\langle\nabla_{V}V,W\rangle$ given by~\eqref{eq:cod_w}, we have \begin{align}\label{eqs_w} &\langle\nabla_{U}V,W\rangle=-\frac{aW(\alpha)}{b\alpha}, &&\langle\nabla_{V}U,W\rangle=-\frac{bW(\beta)}{a\beta}. 
\end{align} After some calculations using~\eqref{eq:cod_w} and~\eqref{eqs_w}, the Codazzi equation applied to the triple $(V,W,U)$ yields \begin{equation}\label{eq:cod_vwu} \langle\nabla_{W}V,U\rangle=\frac{b\alpha W(\beta)}{a\beta(\alpha-\beta)} \end{equation} and, analogously, the Codazzi equation applied to the triple $(U,V,W)$ reads \begin{equation*}\label{eq:cod_uvw} 0=\frac{a\beta W(\alpha)}{b\alpha}-\frac{b\alpha W(\beta)}{a\beta}, \end{equation*} from where we get, after using~\eqref{eq:1}, the last formula in the statement. \end{proof} \subsection[Constant norm of $S$]{Ruled hypersurfaces whose shape operator has constant norm} We firstly focus on the proof of Theorem 1. Let $k=\alpha^{2}+\beta^{2}$ denote the squared norm of the shape operator of $M$. Since by hypothesis $k$ is constant, $X(k)=0$ for each $X\in TM$. Thus, $\alpha X(\alpha)+\beta X(\beta)=0$, from where \begin{equation}\label{eq:X_beta} X(\beta)=-\frac{\alpha X(\alpha)}{\beta}, \ \text{ for each } X\in TM. \end{equation} Taking this fact into account, one can rewrite some of the relations given in Proposition~\ref{prop:Levi-Civita} in an easier way. \begin{proposition}\label{prop:Levi-Civita-k} Suppose that $\alpha\neq-\beta$ on an open subset of $\cal{U}$. Then, with the previous notations, the Levi-Civita connection of such open subset satisfies the following equations: \begin{align*} &\nabla_U U=-\frac{ab(8\alpha\beta^{2}+c(3\alpha+\beta))}{4\alpha(\alpha+\beta)}A, &\nabla_U V=\frac{c(3\alpha+\beta)+4\alpha(\alpha^{2}+\beta^{2})}{4(\alpha^{2}-\beta^{2})}A, \\ &\nabla_V V=\frac{ab(8\alpha^{2}\beta+c(\alpha+3\beta))}{4\beta(\alpha+\beta)}A, &\nabla_V U=\frac{c(\alpha+3\beta)+4\beta(\alpha^{2}+\beta^{2})}{4(\alpha^{2}-\beta^{2})}A. \end{align*} Moreover: \begin{equation}\label{eq:deriv_functions} U(\alpha)=V(\alpha)=W(\alpha)=0, \qquad \text{and}\qquad A(\alpha)=\frac{ab\beta(c-4\alpha\beta)}{2(\alpha+\beta)}, \end{equation} for any $W\in\Gamma(T_{0}\ominus\mathbb{R}A)$. 
\end{proposition} \begin{proof} First of all, in order to prove that $U(\alpha)=V(\alpha)=0$, we rewrite the expressions for $U(\beta)$ and $V(\alpha)$ given in Proposition~\ref{prop:Levi-Civita} using the relation~\eqref{eq:X_beta}. Some calculations using~\eqref{eq:1} show that such equations are equivalent to: \begin{align*} \beta(3\alpha^{2}-\beta^{2})\ U(\alpha)+2ab\alpha^{2}(\alpha-\beta)\ V(\alpha)&=0,\\ -2ab\beta^{2}(\alpha-\beta)\ U(\alpha)+\alpha(3\beta^{2}-\alpha^{2})\ V(\alpha)&=0, \end{align*} which constitute a homogeneous linear system in the unknowns $U(\alpha)$ and $V(\alpha)$. Using~\eqref{eq:1}, the determinant of the matrix of such a system is easily seen to be $-3\alpha\beta(\alpha^{2}-\beta^{2})^{2}$, which cannot vanish since $\alpha\beta\neq0$ and $\alpha\neq\pm\beta$ on an open subset of $\mathcal{U}$. Then, we conclude that $U(\alpha)=V(\alpha)=0$. Again, using~\eqref{eq:X_beta}, we can rewrite the expression for $A(\alpha)$ given in Proposition~\ref{prop:Levi-Civita}. Some calculations using~\eqref{eq:1} lead us to conclude that $2(\alpha+\beta)A(\alpha)=ab\beta(c-4\alpha\beta)$, which is equivalent to the last formula in the statement. Given $W\in\Gamma(T_{0}\ominus\mathbb{R}A)$, $W(\alpha)=-\alpha W(\beta)/\beta$ by Proposition~\ref{prop:Levi-Civita} and $W(\beta)=-\alpha W(\alpha)/\beta$ by~\eqref{eq:X_beta}. Then $W(\alpha)(1-\alpha^{2}/\beta^{2})=0$, from where we deduce, since $\alpha\neq\pm\beta$, that $W(\alpha)=0$. Inserting the expressions for $U(\alpha),\ V(\alpha),\ W(\alpha)$ and $A(\alpha)$ that we have just obtained into the relations given in Proposition~\ref{prop:Levi-Civita} one gets, after some calculations involving~\eqref{eq:1}, the formulas for $\nabla_{U}U,\ \nabla_{U}V,\ \nabla_{V}U,\ \nabla_{V}V$ in the statement. \end{proof} We can now conclude the proof of Theorem 1.
First of all notice that, if $\alpha=-\beta$ on $\mathcal{U}$, then $0=X(k)=X(2\alpha^{2})=4\alpha X(\alpha)$, which implies that $X(\alpha)=0$ for any $X\in T\mathcal{U}$. Since $X$ is arbitrary, we deduce that both $\alpha$ and $\beta$ have to be constant on $\mathcal{U}$, and by the density of $\mathcal{U}$, also on $M$. Suppose now that there exists a point $p\in\mathcal{U}$ such that $\alpha(p)\neq-\beta(p)$. Then, in an open neighborhood of $p$, $\alpha\neq-\beta$. Taking the expressions given by \eqref{eq:deriv_functions} into account, the definition of the Lie bracket of $M$ yields \begin{equation}\label{eq:[u,v]} [U,V](\alpha)=U(V(\alpha))-V(U(\alpha))=0. \end{equation} On the other hand, using the fact that the Levi-Civita connection of $M$ is torsion-free, and inserting the expressions for $\nabla_{U}V$ and $\nabla_{V}U$ given in Proposition~\ref{prop:Levi-Civita-k}, we obtain \begin{equation}\label{eq:torsion_free} [U,V](\alpha)=(\nabla_{U}V)(\alpha)-(\nabla_{V}U)(\alpha)=\frac{c+2(\alpha^{2}+\beta^{2})}{2(\alpha+\beta)}A(\alpha). \end{equation} Then, either $A(\alpha)=0$ or $c+2(\alpha^{2}+\beta^{2})=c+2k=0$. If $A(\alpha)=0$ on an open subset, since $U(\alpha)=V(\alpha)=W(\alpha)=0$ for any $W\in\Gamma(T_{0}\ominus\mathbb{R}A)$, both $\alpha$ and $\beta$ must be constant, and thus, $M$ has constant principal curvatures on such an open subset. Suppose now that $2k+c=0$ or, equivalently, that $k=-c/2$ on an open subset of $\mathcal{U}$. In the projective case, since $c>0$, the equation $\alpha^{2}+\beta^{2}=-c/2$ has no solution; on the other hand, no ruled hypersurface in complex projective space has constant principal curvatures \cite[Remark~5]{LR}. Therefore, there is no example of a ruled hypersurface in $\mathbb{C}P^{n}$ whose shape operator has constant norm. In the hyperbolic case, let $\mathcal{D}:=\mathrm{span}\{U,V\}$ be the smallest $S$-invariant distribution of $\cal{U}$ that contains $J\xi$, which clearly has rank 2.
On the one hand, if an open subset of $M$ has constant principal curvatures, then it is an open part of a Lohnherr hypersurface \cite[Remark~5]{LR}, which is known to be strongly $2$-Hopf. This can be checked directly from Proposition~\ref{prop:Levi-Civita}: taking into account that it has constant principal curvatures $\alpha=\sqrt{-c}/2$, $\beta=-\sqrt{-c}/2$ and $0$, it follows that $[U,V]=\nabla_{U}V-\nabla_{V}U=0$, hence $\cal{D}$ is integrable, and moreover $\cal{D}(\alpha)=\cal{D}(\beta)=0$. On the other hand, if an open subset of $M$ satisfies $k=-c/2$, it follows from~\eqref{eq:[u,v]} that $\cal{D}$ is integrable and, by virtue of~\eqref{eq:deriv_functions} and the assumption that $\alpha^2+\beta^2$ is constant, again $\cal{D}(\alpha)=\cal{D}(\beta)=0$. Thus, in any case, $M$ is a strongly 2-Hopf hypersurface, which concludes the proof. \subsection[Biharmonic hypersurfaces]{Ruled biharmonic hypersurfaces} In this subsection we prove Theorem 2. Recall that a submanifold of a Riemannian manifold is said to be biharmonic if its defining isometric immersion is a critical point of the bienergy functional \cite{ou}. Moreover, biharmonic submanifolds can be characterized as those having vanishing bitension field. In the particular case of codimension one isometric immersions, that is, in the setting of biharmonic hypersurfaces, there exists an explicit formula which completely characterizes them. \begin{proposition}\cite[Theorem 2.1]{ou} Let $\bar{M}$ be a Riemannian manifold and $M\subset\bar{M}$ a hypersurface with unit normal vector field $\xi$.
$M$ is biharmonic if, and only if, it satisfies the following relations: \begin{equation}\label{eq:biharmonic} \begin{cases} &\Delta H-H|S|^{2}+H\overline{\ensuremath{\mathbb{R}}ic}(\xi,\xi)=0,\\ &2S(\nabla H)+H\nabla H-2H(\overline{\ensuremath{\mathbb{R}}ic}(\xi))^{\top}=0, \end{cases} \end{equation} where $H=\trace(S)$ is the mean curvature of the hypersurface, $\nabla$ denotes the gradient, $\Delta$ is the Laplace-Beltrami operator of $M$, and $\overline{\ensuremath{\mathbb{R}}ic}$ denotes both the (0,2) and the (1,1) Ricci tensors of~$\bar{M}$. \end{proposition} In our context, $\bar{M}=\bar{M}^{n}(c)$ is a complex space form of constant holomorphic sectional curvature $c\neq 0$. In this case, the Ricci tensor of $\bar{M}$ satisfies $\overline{\ensuremath{\mathbb{R}}ic}(\xi,\xi)=c(n+1)/2$ and $(\overline{\ensuremath{\mathbb{R}}ic}(\xi))^{\top}=0$, which follows immediately from the formula for the curvature tensor of a complex space form. Thus, in our case, equations~\eqref{eq:biharmonic} can be rewritten as follows (cf.\ \cite[Proposition~2.1]{fetcu}): \begin{equation}\label{eq:biharmonic2} \begin{cases} & \Delta H-H|S|^{2}+\frac{1}{2}Hc(n+1)=0,\\ & 2S(\nabla H)+H\nabla H=0. \end{cases} \end{equation} We assume from now on that $M$ is a biharmonic ruled real hypersurface in a nonflat complex space form $\bar{M}^{n}(c)$. We will use the notations introduced in Section 3 for ruled real hypersurfaces. Thus, the mean curvature function of $M$ is $H=\alpha+\beta$. As in the previous subsection, according to the discussion before Proposition~\ref{prop:Levi-Civita}, there is an open and dense subset $\cal{U}$ of $M$ where $h=2$. Proposition~\ref{prop:Levi-Civita} and relations~\eqref{eq:alg} hold in this open subset. In what follows, we will work in terms of an orthonormal basis of eigenvectors $\{U,V,A,W_{4},\dots,W_{2n-1}\}$, where $W_{i}\in\Gamma(T_0\ominus\mathbb{R}A)$, $i\in\{4,\dots,2n-1\}$.
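For the reader's convenience, let us verify the value $\overline{\ensuremath{\mathbb{R}}ic}(\xi,\xi)=c(n+1)/2$ quoted above. For any unit vector $X$, the formula for the curvature tensor of $\bar{M}^{n}(c)$ gives
\begin{equation*}
\langle\bar{R}(X,\xi)\xi,X\rangle=\frac{c}{4}\bigl(1-\langle X,\xi\rangle^{2}+3\langle X,J\xi\rangle^{2}\bigr),
\end{equation*}
so, summing over an orthonormal basis $\{e_{1},\dots,e_{2n}\}$ of the tangent space of $\bar{M}^{n}(c)$ and using $\sum_{i}\langle e_{i},\xi\rangle^{2}=\sum_{i}\langle e_{i},J\xi\rangle^{2}=1$, we obtain
\begin{equation*}
\overline{\ensuremath{\mathbb{R}}ic}(\xi,\xi)=\frac{c}{4}\,(2n-1+3)=\frac{c(n+1)}{2}.
\end{equation*}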
Suppose that the mean curvature, $H=\alpha+\beta$, is not zero on an open subset of $\cal{U}$. We will work on this open subset of $M$ from now on. Since $M$ is a biharmonic hypersurface, it satisfies equations~\eqref{eq:biharmonic2}. With respect to the orthonormal eigenbasis $\{U,V,A,W_{4},\dots,W_{2n-1}\}$, we have \begin{equation*} \nabla H=U(H)U+V(H)V+A(H)A+\sum\limits_{i=4}^{2n-1}W_{i}(H) W_{i}. \end{equation*} On the other hand, since $U, V, A$ and $W_{i}$, for $i\in\{4,\dots,2n-1\}$, are orthogonal eigenvectors of the shape operator $S$ of $M$ associated with eigenvalues $\alpha, \beta$ and 0, respectively, we have \begin{equation*} S(\nabla H)=\alpha U(H)U+\beta V(H)V. \end{equation*} Thus, inserting these relations into the second equation in~\eqref{eq:biharmonic2}, we obtain \begin{equation*} 0=2S(\nabla H)+H\nabla H=(2\alpha+H)U(H)U+(2\beta+H)V(H)V+HA(H)A+\sum\limits_{i=4}^{2n-1}HW_{i}(H) W_{i}. \end{equation*} Since $H\neq0$ by assumption, one can deduce that $A(H)=0$ and $W_{i}(H)=0$ for $i\in\{4,\dots,2n-1\}$. Moreover, one of the following conditions holds on an open subset: \begin{enumerate} \item $U(H)=V(H)=0$. \item $\alpha=\beta=-H/2$. \item $U(H)=0$ and $2\beta+H=0$. \item $V(H)=0$ and $2\alpha+H=0$. \end{enumerate} Neither case (1) nor case (2) is possible. Indeed, if $U(H)=V(H)=0$ on an open subset, then such a subset has constant mean curvature and, since it is ruled, $H=0$ \cite{holi}, which gives a contradiction. Analogously, since $M$ is ruled, $\alpha\neq\beta$ on any open subset. Suppose that $U(H)=0$ and $2\beta+H=0$ or, equivalently, $H=2\alpha/3$ (case (4) is analogous). Then, both $\alpha$ and $\beta$ can be expressed as $\alpha=3H/2$ and $\beta=-H/2$, respectively. Inserting these expressions in the formula for $A(\alpha)$ given in Proposition~\ref{prop:Levi-Civita}, one gets \begin{equation*} \frac{3}{2}A(H)=A(\alpha)=2b(a(c+3H^{2})-3bA(H)). \end{equation*} Moreover, since $A(H)=0$ on $\mathcal{U}$, this yields $ab(3H^{2}+c)=0$.
Since $a$ and $b$ are not zero in $\mathcal{U}$, we get $3H^{2}+c=0$, so $M$ has constant mean curvature on $\mathcal{U}$ and, as $M$ is ruled, it must be minimal (see \cite{holi}), which concludes the proof. \end{document}
\begin{document} \section{Introduction} Computer simulations of quantum systems constitute a crucial tool for a deeper understanding of behaviour and properties of matter on the atomic scale. However, when investigating the electronic structure of larger molecules, we quickly encounter the limits of classical computers. The space and time requirements for describing the states, and even more so for performing optimization on them, grow prohibitively. Thus, we resort to many clever types of approximations \cite{doi:10.1063/1.5129672, ONMethods, Qmontecarlo,Friesner-2005,Trygve-2008,Cremer-2011,Lyakh-2012,Yu-2016,Narbe-2017}. Oftentimes, though, they are not good enough, or start to scale badly. It is natural to imagine that using one quantum mechanical system to simulate another could be a more efficient approach. The concept of a quantum computer, based on the laws of quantum physics, was first proposed by Richard Feynman \cite{Feynman}. He envisioned that the way to deal with the exponential amount of information needed to describe physical systems was to use quantum systems as computers themselves. We have come a long way since then, and today we have access to the first small, programmable quantum chips, together with 25 years of development of quantum algorithms \cite{algZoo,2018arXiv180403719A,10.5555/1972505} and protocols for simulation, optimization, and many other applications in sensing, cryptography and communication \cite{RevModPhys.89.035002,RevModPhys.74.145,RevModPhys.92.025002,RevModPhys.89.015004}. While fault-tolerant, universal quantum computing is still out of reach, much of today's development focuses on demonstrating quantum supremacy on problems without direct applications \cite{Arute2019,PhysRevLett.127.180501,PhysRevLett.127.180502}. Our goal here is less ambitious, but more practical: utilizing the imperfect computers we have at hand for a practical task.
We investigate one of the key promised applications of quantum computing: the understanding of molecular structure. While the long-term goal is to find more efficient solutions for a problem deemed intractable on conventional computers for large problem sizes, we want to understand how well we can do today for small test cases. The basic task is to solve the many-body Schrödinger equation \begin{equation}\label{SCH} \hat{H} \ket{\psi} = \textit{E} \, \ket{\psi}, \end{equation} \noindent where $\hat{H}$ denotes the Hamiltonian operator, $\ket{\psi}$ is the system wave function and $\textit{E}$ represents the energy of the system. Finding the ground state energy of the studied system is a difficult optimization problem. However, we now know several promising approaches for applications in quantum chemistry \cite{Cao2019}, relying on natural quantum encodings of the problems, and algorithms utilizing superpositions and entanglement, resulting in efficient search and energy evaluation. One way of approximately obtaining the minimum eigenvalue of a Hamiltonian is to start with an initial guess and iteratively search for optimal parameters. One such popular approach aimed at solving the many-particle problem is the hybrid classical-quantum {\em Variational Quantum Eigensolver} (VQE) algorithm proposed by Peruzzo et al. \cite{vqe}. It is an application of the time-independent variational principle that benefits from cooperation of classical and quantum computers, and is suitable for use on near-term, imperfect quantum devices. For example, in \cite{vqe}, the authors used a small photonic quantum computer to calculate the He–H$^+$ system's ground state energies for various atomic distances. Next, O'Malley et al.~\cite{OMalley-PRX-2016} studied the properties of the H$_2$ molecule using Google’s digital quantum computer with superconducting qubits. 
The researchers used two different methods for finding the ground-state energies of H$_2$: the phase-estimation algorithm (PEA), and the variational quantum eigensolver (VQE). The former method can in principle get the answer with arbitrary precision, but only if there are no errors in the process. Since in practice errors are always present, the VQE method works better. However, it also has its pitfalls, which we aim to elucidate. In this work, we want to understand the possible sources of errors in VQE calculations when implemented on current, publicly available superconducting quantum processors. Our goal is to provide a recipe for efficient use of the scarce resources, as well as practical guidance for the starting practitioner wanting to run quantum chemistry computations on current devices. If our (superconducting) qubits were a closed system, the dynamical evolution of their state would be fully determined by their initial state and Hamiltonian. In reality, the system is partially open: the qubits interact with their environment, their internal interactions are not perfect, and so the overall state is not deterministic. A generalized theory of the interactions between a quantum state and its environment was derived by Bloch~\cite{Bloch-1957}; the environment manifests as extra degrees of freedom we cannot control. These affect the state of the system and result in a loss of coherence \cite{Krantz-APR-2019}. Noise enters when controlling the qubits (e.g. via pulses applied to the qubits) or when reading out the final states of the qubits (there is a finite probability that the recorded classical bit value will be flipped from the true outcome of a measurement). We also get errors from random fluctuations of parameters that are coupled to our qubits, such as (i) thermal voltage and current fluctuations in control lines or (ii) randomly fluctuating electric and magnetic fields in the local qubit environment.
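To make the effect of readout errors concrete: in a simplified, symmetric toy model of our own (not calibrated to any particular device), where each recorded bit is flipped independently with probability $p$, the estimated expectation value of a Pauli $Z$ observable shrinks by a factor $(1-2p)$. A minimal numerical illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=12)

def estimate_z(p0, flip_prob, shots):
    """Estimate <Z> from sampled bits, where each recorded bit is
    flipped independently with probability flip_prob (readout error)."""
    bits = rng.random(shots) >= p0         # True means outcome |1>
    flips = rng.random(shots) < flip_prob  # which readouts get corrupted
    recorded = bits ^ flips
    return 1.0 - 2.0 * recorded.mean()     # <Z> = P(0) - P(1)

p0 = 0.75   # true probability of reading |0>, so the true <Z> is 0.5
p = 0.10    # assumed symmetric readout flip probability

z_noisy = estimate_z(p0, p, shots=200_000)
z_predicted = (1 - 2 * p) * (2 * p0 - 1)   # bias model: (1-2p) times true <Z>
```

On real devices the flip probabilities for $0\to1$ and $1\to0$ differ; measurement-error-mitigation schemes calibrate exactly such a response and invert it.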
Minimizing various sources of noise and errors is a very complex and demanding task involving materials science, fabrication engineering, electronics design, cryogenic engineering, and qubit design. Of course, our hope is fault-tolerant quantum processing, which is currently out of reach due to precision and overhead requirements \cite{Preskill2018quantumcomputingin}. Meanwhile, we often employ methods for dealing with noisy data. There are many different approaches to correcting the noisy results (see Refs. \cite{kandala,Kandala2019,PhysRevLett.119.180509,PhysRevX.8.031027, Geller_2021,mitigation2,Nachman-npjQI-2020,Maciejewski2020mitigationofreadout,Cai2021quantumerror,Suchsland2021algorithmicerror} and references therein), based on different techniques of noise characterization. Finally, even hypothetical noiseless quantum processors are prone to {\em stochastic noise}. Our calculations end with estimating the values of Pauli observables in the final state. This can only be done by sampling the outcome probabilities with a limited number of shots. Moreover, stochastic noise also enters through the heuristic nature of the classical optimizer part of the VQE algorithm. The good news is that we can investigate and control this noise through our choice of optimizer and the parameters of the experimental runs. We will now attempt to characterize in detail how each source of noise influences the convergence of the VQE algorithm. We will then look at how to efficiently distribute our resources in order to achieve the desired precision of the calculations, if it is possible at all. \section{Methods} There are several preparatory steps before studying molecules on real quantum devices. We start with the molecular Hamiltonian in the second quantized form \cite{SQ}.
The fermionic Hamiltonian with one- and two-electron terms ($h_{ij}$ and $h_{ijkl}$) is given by \begin{equation}\label{SQ} \hat{H} = \sum_{ij} h_{ij} \,\, a_{i}^\dagger a_{j} + \frac{1}{2} \sum_{ijkl} h_{ijkl} \,\, a_{i}^\dagger a_{j}^\dagger a_{l} a_{k}, \end{equation} \noindent where $a_{i}^\dagger$ and $a_{j}$ are fermionic creation and annihilation operators. We then need to map this fermionic Hamiltonian into qubit operators represented in the Pauli operator basis \cite{map}. This can be done in multiple ways, e.g. with the Bravyi-Kitaev \cite{bravyi} or Jordan-Wigner \cite{Jordan1928} transformations. Here we choose the Bravyi-Kitaev transformation \cite{bravyi} of the hydrogen molecule Hamiltonian, produced using the publicly available {\em qiskit} package \cite{qiskit}. The code for this and all of our following calculations is located at \cite{imihalik}. We also invite the reader to read the introductory VQE tutorial \cite{VQETut}. Choosing the {\em STO-3G} basis set, and the distance between hydrogen atoms set to $0.725 \; \text{\normalfont\oldAA}$, not taking into account the Coulomb repulsion between nuclei, we arrive at a~$4$-qubit Hamiltonian \begin{align} \hat{H}_{\mathrm{H_2}} &= c_{\mathrm 0} \; \boldsymbol{1} + c_{\mathrm 1} \; Z_{\mathrm 0} + c_{\mathrm 2} \; Z_{\mathrm 1}Z_{\mathrm 0} + c_{\mathrm 1} \; Z_{\mathrm 2} + c_{\mathrm 2} \; Z_{\mathrm 3}Z_{\mathrm 2}Z_{\mathrm 1} + c_{\mathrm 3} \; Z_{\mathrm 1} + c_{\mathrm 4} \; Z_{\mathrm 2}Z_{\mathrm 0} \nonumber\\ &+ c_{\mathrm 5} \; X_{\mathrm 2}Z_{\mathrm 1}X_{\mathrm 0} + c_{\mathrm 6} \; Z_{\mathrm 3}X_{\mathrm 2}X_{\mathrm 0} + c_{\mathrm 6} \; X_{\mathrm 2}X_{\mathrm 0} + c_{\mathrm 5} \; Z_{\mathrm 3}X_{\mathrm 2}Z_{\mathrm 1}X_{\mathrm 0} \label{H2_4_qubits}\\ &+ c_{\mathrm 7} \; Z_{\mathrm 3}Z_{\mathrm 2}Z_{\mathrm 1}Z_{\mathrm 0} + c_{\mathrm 7} \; Z_{\mathrm 2}Z_{\mathrm 1}Z_{\mathrm 0} + c_{\mathrm 8} \; Z_{\mathrm 3}Z_{\mathrm 2}Z_{\mathrm 0} + c_{\mathrm 3} \; Z_{\mathrm 3}Z_{\mathrm 1}. \nonumber \end{align} \noindent The $Z$ and $X$ terms are Pauli operators, and the coefficients $c_i$ are integrals calculated using the {\em qiskit.chemistry} package \cite{qiskit.chemistry}: \begin{equation} \begin{array}{lll} c_{\rm 0} = -0.80718, \hskip1cm & c_{\rm 1} = 0.17374, \hskip1cm & c_{\rm 2} =-0.23047,\\ c_{\rm 3} = 0.12149, \hskip1cm & c_{\rm 4} = 0.16940, \hskip1cm & c_{\rm 5} = -0.04509,\\ c_{\rm 6} = 0.04509, \hskip1cm & c_{\rm 7} = 0.16658, \hskip1cm & c_{\rm 8} = 0.17511.\\ \end{array} \label{4qubits} \end{equation} \noindent Note that the Hamiltonian in Eq.~\eqref{H2_4_qubits} commutes with $Z_1$ and with $Z_3$. Therefore, the Hamiltonian is block-diagonal with $4$ blocks, each corresponding to a particular computational basis setting of the qubits $1$ and $3$. This can be used to find a Hamiltonian with the same ground energy expressed in $2$-qubit space \cite{qiskit.chemistry}: \begin{equation} \hat{H}_{\mathrm{H_2}} = c_{\mathrm 0} \; \boldsymbol{1} + c_{\mathrm 1} \; Z_{\mathrm 0} + c_{\mathrm 1} \; Z_{\mathrm 1} + c_{\mathrm 2} \; Z_{\mathrm 1}Z_{\mathrm 0} + c_{\mathrm 3} \; X_{\mathrm 1}X_{\mathrm 0}, \label{H2_2_qubits} \end{equation} \noindent with the coefficients \begin{equation} \begin{array}{ll} c_{\rm 0} = -1.05016, \hskip1cm & c_{\rm 1} = 0.40421,\\ c_{\rm 2} = 0.01135, \hskip1cm & c_{\rm 3} = 0.18038.\\ \end{array} \label{2qubits} \end{equation} \noindent With two equivalent formulations of the problem at hand (both Hamiltonians have the same ground state energy), we will be able to investigate which form is more amenable to VQE optimization. They give us a chance to explore various problem sizes, parametrized state ansatzes and energy landscapes, and to investigate how noise affects the procedures, as the optimizations will involve different numbers of gates. We are now ready to run the VQE algorithm. Its input is a Hamiltonian expressed in a qubit basis and its aim is to find the eigenvector with the lowest eigenvalue.
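Since the $2$-qubit Hamiltonian \eqref{H2_2_qubits} acts on a $4$-dimensional space, its ground energy can also be obtained classically by exact diagonalization, which provides a reference value for judging the quality of any VQE run. A minimal sketch using only {\em numpy}, with the coefficients of Eq.~\eqref{2qubits} (the Kronecker-product ordering convention is our own):

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Coefficients of the 2-qubit hydrogen Hamiltonian, Eq. (2qubits)
c0, c1, c2, c3 = -1.05016, 0.40421, 0.01135, 0.18038

# H = c0*1 + c1*Z0 + c1*Z1 + c2*Z1Z0 + c3*X1X0
# (qubit 1 is the left factor of each Kronecker product)
H = (c0 * np.kron(I2, I2)
     + c1 * np.kron(I2, Z) + c1 * np.kron(Z, I2)
     + c2 * np.kron(Z, Z) + c3 * np.kron(X, X))

ground_energy = np.linalg.eigvalsh(H)[0]   # approximately -1.867 Hartree
```

By the variational principle, any noiseless VQE energy for this Hamiltonian is an upper bound on this value, so the gap to it is a natural figure of merit.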
For this, the VQE iterates these four steps: \begin{enumerate} \item {(quantum)} prepare a parametrized quantum state on a quantum device, \item {(quantum)} measure each Hamiltonian term (requires repetitions of step 1), \item {(classical)} sum the expectation values of the Hamiltonian terms to estimate the energy of the parametrized state, \item {(classical)} use the energy value to update the parameters of the trial quantum state, \end{enumerate} until we meet the convergence criteria of the classical optimization method. To minimize the expectation value of the energy, we choose the simultaneous perturbation stochastic approximation (SPSA)~\cite{SPSA,Spall1,Spall} classical optimization method implemented in {\em qiskit}. SPSA is a pseudo-gradient method for optimizing problems with varying numbers of unknown parameters. SPSA starts with the initial vector of parameters $\vec{\theta}^{}_{\mathrm{0}}$. In each iteration, the parameter vector is simultaneously shifted twice as \begin{equation} \vec{\theta}^{\pm}_{\mathrm{k}} = \vec{\theta}^{}_{\mathrm{k}} \pm c^{}_{\mathrm{k}}\vec{\Delta}^{}_{\mathrm{k}}, \end{equation} \noindent where $c^{}_{\mathrm{k}}$ is a preassigned positive number, ${\mathrm{k}}$ is the iteration number and $\vec{\Delta}^{}_{\mathrm{k}}$ is a randomly generated vector (from the Bernoulli distribution). To approximate the gradient at $\vec{\theta}^{}_{\mathrm{k}}$, we utilize the energies measured at $\vec{\theta}^{+}_{\mathrm{k}}$ and $\vec{\theta}^{-}_{\mathrm{k}}$ as \begin{equation} \vec{g^{}}_{\mathrm{k}}(\vec{\theta}^{}_{\mathrm{k}}) = \frac{\bra{\psi(\vec{\theta}^{+}_{\mathrm{k}})} H \ket{\psi (\vec{\theta}^{+}_{\mathrm{k}})} - \bra{\psi(\vec{\theta}^{-}_{\mathrm{k}})} H \ket{\psi (\vec{\theta}^{-}_{\mathrm{k}})}}{2c^{}_{\mathrm{k}}}\vec{\Delta}^{}_{\mathrm{k}}.
\label{SPSAgrad} \end{equation} \noindent In each iteration step we thus need to measure the energies of two quantum states, prepared with parameter settings $\vec{\theta}^{+}_{\mathrm{k}}$ and $\vec{\theta}^{-}_{\mathrm{k}}$. We then update the underlying parameters $\vec{\theta}^{}_{\mathrm{k}}$ to \begin{equation} \vec{\theta}^{}_{\mathrm{k} + 1} = \vec{\theta}^{}_{\mathrm{k}} - a^{}_{\mathrm{k}}\vec{g}^{}_{\mathrm{k}}(\vec{\theta^{}_{\mathrm{k}}}), \end{equation} \noindent where $a^{}_{\mathrm{k}}$ is a preassigned positive number and $\vec{g}^{}_{\mathrm{k}}(\vec{\theta}^{}_{\mathrm{k}})$ is the approximated gradient \eqref{SPSAgrad} that depends on $\vec{\theta}^{+}_{\mathrm{k}}$ and $\vec{\theta}^{-}_{\mathrm{k}}$. The optimization procedure runs for a number of iterations controlled by the \texttt{maxiter} parameter (in {\em qiskit}'s implementation). At the end, we measure the final energy for the optimized parameter vector $\vec{\theta}^{}_{\mathrm{k}}$. There is a subtlety that influences the total number of function evaluations. {\em Qiskit}'s implementation of SPSA includes additional initial exploration -- a calibration phase whose length is $\min\left\{\texttt{maxiter}/5,\,25\right\}$ steps. There are many possible choices for the quantum circuit that prepares the trial state for simulating the molecule using VQE. We employ the hardware-inspired (easy to implement) $R_{\mathrm{y}}R_{\mathrm{z}}$ and $R_{\mathrm{y}}$ variational forms, accompanied by linear entanglement \cite{forms}. The circuits consist of two main layers: (i) a layer of parametrized single-qubit rotations ($R_{\mathrm{y}}$, or $R_{\mathrm{y}}$ and $R_{\mathrm{z}}$) applied on each qubit in the quantum register, alternating with (ii) an entanglement-creating layer of CNOT gates.
We can easily alternate these two layers, creating circuits of varying depth and increasing the complexity of the ansatz and the number of parameters to be optimized. \begin{figure} \caption{A scheme of the 4-qubit quantum circuits for calculations of the ground state energy. The circuit depth was set to 2. Dashed lines represent blocks of two layers -- the entangled layer and the rotation layer. (\textbf{a} \label{fig:circuit_4} \end{figure} The simplest variational circuit of depth $d = 1$ consists of just the entanglement-creating layer, nested between two parametrized rotation layers. We draw the 4-qubit variational circuits of depth $d=2$ in Figure~\ref{fig:circuit_4}. For $q$ qubits and depth $d$, the $R_{\mathrm{y}}R_{\mathrm{z}}$ variational circuit has $2 q (d + 1)$ parameters -- single-qubit rotations, while the $R_{\mathrm{y}}$ variational circuit has $q (d + 1)$ of them. Our qubit Hamiltonians \eqref{H2_4_qubits} and \eqref{H2_2_qubits} contain terms with Pauli $X$ operators. To estimate their expectation values, we need to measure them in a non-diagonal basis. As only computational-basis measurements are available on the quantum processors, we need to utilize basis-switching single-qubit gates (post-rotations). For the Pauli operator $X$, we thus use a $\pi/2$ rotation around the $y$ axis, performed by the gate $R_{\mathrm{y}}(\pi/2)$, and subsequently measure in the computational ($Z$-)basis. Let us note that we do not need to estimate all of the Pauli terms in our Hamiltonians individually. We can save resources and reduce the number of required measurements and state preparations by grouping the Pauli operators that require the same post-rotations in the tensor product basis sets \cite{kandala}. To access the real quantum devices, we use the publicly available cloud-based quantum computing platform IBM Quantum~\cite{IBM}.
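The parameter counting above is easy to verify programmatically. The following sketch (plain Python, not {\em qiskit}'s actual circuit-construction API) lists the gates of the linear-entanglement variational forms:

```python
def ansatz_gates(q, depth, use_rz=False):
    """Gate list of the Ry (or RyRz) linear variational form:
    one rotation layer, then `depth` repetitions of
    (linear CNOT chain + rotation layer)."""
    gates = []

    def rotation_layer():
        for i in range(q):
            gates.append(("ry", i))
            if use_rz:
                gates.append(("rz", i))

    rotation_layer()
    for _ in range(depth):
        for i in range(q - 1):               # linear entanglement
            gates.append(("cx", i, i + 1))
        rotation_layer()
    return gates

def n_params(gates):
    # every single-qubit rotation carries one free parameter
    return sum(1 for g in gates if g[0] in ("ry", "rz"))

print(n_params(ansatz_gates(4, 2)))               # 12 = q(d+1)
print(n_params(ansatz_gates(4, 2, use_rz=True)))  # 24 = 2q(d+1)
```

For $q=4$ and $d=2$ this also yields the $(q-1)d = 6$ CNOTs of the two entangling layers, matching the gate counts quoted for the $4$-qubit circuits later in the text.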
We find that access to real quantum processors is still limited for a larger systematic study of different types of errors and noise. Therefore, we have heavily relied on classical simulations of quantum processors. The simulations were performed both in an ideal, noise-free manner and including noise. Importantly, we have used the noise-related parameters of the IBM quantum processors, which are publicly available (and updated) on a daily basis. In this way, our simulations could most closely mimic the actual gates and computations executed on a real device. The published noise-related data include an approximate noise model consisting of (i)~single-qubit and two-qubit gate errors -- depolarizing errors followed by thermal-relaxation errors, (ii)~single-qubit readout errors on all measurements, and (iii)~all errors including gate, readout and thermal-relaxation errors. Since the calibration data change frequently, we used the error model from the same calibration for most of the noisy-simulation trials. Finally, in some tests involving real quantum hardware (see Figure~\ref{fig:bogota}) we employ a simple error-mitigation method. It uses least-squares fitting with a calibration matrix to obtain the error-mitigated outcome probabilities \cite{ErrMit}. \section{Results and discussion} There are several possible sources of error for the energies calculated by the VQE. First, {\em statistical errors} in intermediate and final energy estimations, caused by the probabilistic nature of quantum mechanics. Second, {\em Hamiltonian representation and state-preparation ansatz} errors, caused by approximations in the Hamiltonian (restricted basis set), as well as the space of states we search over, given by our parametrized quantum circuit. Third, {\em hardware errors} present in noisy quantum devices running the quantum part of VQE. We study these using simulators of noisy devices, as well as real quantum processor runs.
We will now investigate the influence of these errors, and discuss the ways and costs of mitigating them. We will do this for the H$_2$ molecular Hamiltonians from the previous Section. Note that there are several factors which limit the number of gates/circuit repetitions we can execute. First, we have only limited access to the quantum processors, which severely restricts the number of shots we can use for each datapoint. Second, we do not have unlimited time either. Third, the gates themselves are noisy, so increasing system sizes or gate numbers will not work without serious error mitigation. Thus, simply increasing the number of repetitions/circuit size is not a solution to these errors, and we must work harder on error mitigation to obtain trustworthy results -- molecular energies with chemical precision. \subsection*{Number of quantum computer calls} While searching for optimal parameters, the VQE optimization requires many calls to a quantum subroutine: prepare a candidate state and measure its energy. Even if we imagined having a noiseless quantum processor, its probabilistic nature and the impossibility of measuring noncommuting observables simultaneously would force us to rely on averages over several measurements, which carry stochastic noise. We thus start by studying how the number of circuit (gate) evaluations on a quantum processor influences the convergence of the VQE algorithm, on ideal quantum devices. Our goal is to understand and efficiently fight the influence of stochastic errors, inherent in VQE regardless of the quantum hardware. We want to optimally utilize a limited number of gate executions. For this, we have two options. First, we can increase the number of shots (controlled by the \texttt{shots} parameter in the programs) for each evaluation of readout probabilities. Second, we can increase the maximum number of iterations (the parameter \texttt{maxiter}) allowed for the classical optimization subroutine.
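These two knobs fix the overall measurement budget. The following rough estimate assumes two energy evaluations per iteration and the calibration length from the SPSA description in the Methods section; the number of measurement bases per evaluation (here taken as $2$, after grouping Pauli terms by post-rotation) is a problem-dependent assumption:

```python
def circuit_executions(maxiter, shots, n_bases=2):
    """Rough estimate of total circuit executions in one VQE run.

    Assumptions (a sketch, not qiskit's exact accounting): every energy
    evaluation measures `n_bases` grouped Pauli bases with `shots`
    repetitions each; SPSA performs 2 evaluations per iteration, plus
    a calibration phase of min(maxiter/5, 25) steps, also counted as
    2 evaluations each."""
    calibration = min(maxiter // 5, 25)
    evaluations = 2 * (maxiter + calibration)
    return evaluations * n_bases * shots

# e.g. the settings used for the hardware runs discussed below:
print(circuit_executions(maxiter=75, shots=1024))  # 368640
```

Even the modest setting \texttt{maxiter} = $75$, \texttt{shots} = $1024$ thus corresponds to hundreds of thousands of circuit executions, which is why the trade-off between the two parameters matters.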
Increasing the number of shots improves the precision of the measurement outcomes for each of the terms in \eqref{H2_2_qubits} or \eqref{H2_4_qubits}. This makes the energy function more stable and easier for the classical optimization routine to minimize. Meanwhile, increasing the allowed number of iterations for the classical optimizer provides it with a better chance to get out of existing local minima and/or to fine-tune the final output value, but again comes at the cost of increasing the number of times the quantum computer needs to be accessed. Intuitively, it can be expected that increasing the number of quantum gate executions in either case increases the precision of the output. However, presently the number of gate executions on a quantum computer is a limiting factor for most users -- access to real quantum processors is either restricted or quite costly \cite{ShotPrice}. Additionally, an excessive number of quantum gate executions significantly impacts the real running time of the algorithm. In this light, it is an interesting question to find the minimum number of quantum gate executions that results in a VQE output within chemical precision of the true energy value. In our study, we ran the algorithm for different settings of the \texttt{shots} and \texttt{maxiter} parameters, on the two-qubit hydrogen molecule Hamiltonian, using the noiseless quantum simulator. Each combination of the settings was run $1000$ times. We present the experimental results in detail in Table \ref{tab:QuantumUses} (see Appendix~\ref{AppendixDataTables}), and visualize them here in Figures~\ref{fig:shots}~and~\ref{fig:maxiter}. We visualize the data using a boxplot. The box shows the middle two quartiles of the data (interquartile range, IQR), with a marked median.
Outliers are determined using Tukey fences \cite{tukey77}: a distance of $1.5$ times the IQR in each direction, as in Figure~\ref{fig:shots}(a), or the extremes of our data, as in Figure~\ref{fig:shots}(b), where all of our computed energies fall above the red line, close to the box. From Figure \ref{fig:shots}(a), we can see that with an ideal quantum computer and unrestricted iterations of the SPSA algorithm, VQE performs increasingly better with the increase of the \texttt{shots} parameter. The limit of infinitely many shots is simulated using a statevector simulator, which calculates the outcome probabilities directly from the laws of quantum mechanics. It is important to note that when using a finite number of shots, the algorithm can also output energy values lower than the actual minimum energy. Owing to stochastic noise, the estimates of outcome probabilities for the Pauli terms in \eqref{H2_2_qubits}, calculated from a limited number of shots, can be non-physical. To obtain realistic final optimized energies, one should thus invest in a precise final energy readout for the optimized state, using a larger number of shots. This also raises another interesting question: can the inaccuracy in outcome probabilities be overcome by the SPSA algorithm? In other words, are the states discovered by SPSA actually close to the minimum eigenvector of the Hamiltonian, and is most of the energy spread seen in the results caused by the small number of shots? We recalculated the energies of the discovered states in Figure \ref{fig:shots}(a), using the statevector simulator. The results (see Figure \ref{fig:shots}(b)) somewhat surprisingly show that indeed, most of the discovered states have real energies within the chemical accuracy. \begin{figure} \caption{ Improvement of the VQE energy estimates with the number of shots.
We calculate the ground state energy of H$_{\mathrm2} \label{fig:shots} \end{figure} In the context of trying to minimize quantum computer calls, this suggests that the VQE algorithm can be run with a small number of shots during the run of the classical optimization function, and after the optimization produces the final result, the energy of the candidate state with minimum energy needs to be recalculated once more, using a much larger number of shots. However, for this to work, we have to use a classical optimizer which can deal well with noisy data. We could use Bayesian methods \cite{2017arXiv170607094L,2018arXiv180702811F}, which could be quite slow, needing to invert large matrices to guess the next search points. Or we could use other optimization methods like NEWUOA \cite{Powell2006}, which can be adapted to work on noisy data. Note that in the initial stages of this work, we used the COBYLA optimization method~\cite{cobyla}, and concluded that very large numbers of shots were necessary for the method to converge at all. As described in the Methods Section, in the end we decided on the use of SPSA, thanks to its natural noise resilience, as it explores random points around the current parameter vectors. On the other hand, at least for our selected Hamiltonian, increasing the \texttt{maxiter} parameter and waiting longer for convergence seems to have diminishing returns after surpassing a value of approximately $\texttt{maxiter} = 100$. We confirm this in detail in Figure~\ref{fig:maxiter}. There, we simulate $1000$ VQE calculations for various combinations of \texttt{maxiter}/\texttt{shots} settings. For the same value of the \texttt{shots} parameter, increasing the \texttt{maxiter} parameter only decreases the number of outliers, but the spread of the data remains almost unchanged. Both of these observations suggest that only a small fraction of runs can benefit from an increase of the \texttt{maxiter} parameter beyond $100$.
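For reference, the constant-gain skeleton of the SPSA iteration whose \texttt{maxiter} parameter we vary here fits in a few lines. In this sketch, a toy quadratic stands in for the measured energy, and the fixed gains $a$, $c$ are a simplification of {\em qiskit}'s decaying gain sequences:

```python
import random

def spsa_minimize(f, theta, a=0.1, c=0.1, maxiter=200, seed=0):
    """Constant-gain SPSA: perturb all parameters at once with a random
    Bernoulli (+/-1) direction, estimate a pseudo-gradient from two
    function evaluations, and step against it."""
    rng = random.Random(seed)
    theta = list(theta)
    for _ in range(maxiter):
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus  = [t + c * d for t, d in zip(theta, delta)]
        minus = [t - c * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2 * c)   # scalar pseudo-gradient factor
        theta = [t - a * g * d for t, d in zip(theta, delta)]
    return theta

quadratic = lambda v: sum(x * x for x in v)
theta_opt = spsa_minimize(quadratic, [1.0, 1.0, 1.0])
print(quadratic(theta_opt))  # near 0 after 200 iterations
```

Note that each iteration costs exactly two evaluations of $f$ regardless of the number of parameters, which is what makes SPSA attractive when every evaluation is a batch of quantum measurements.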
\begin{figure} \caption{ (\textbf{a} \label{fig:maxiter} \end{figure} In order to evaluate the energies more precisely after the last step, we recalculated the energies using the statevector simulator for the \texttt{shots} parameter set to $512$ and $1024$, with \texttt{maxiter} settings of $50$, $75$ and $100$. The results of this calculation can be found in Figure~\ref{fig:maxiter2}, and they suggest that the combination of \texttt{shots} = $1024$ and \texttt{maxiter} = $75$ already provides a median energy within the chemical accuracy for the $2$-qubit H$_2$ Hamiltonian. \begin{figure} \caption{ \label{fig:maxiter2} \end{figure} \subsection*{Choice of Hamiltonian and state-preparation ansatz} In this Section we study the influence of the size and form of the quantum circuit used in the VQE calculation. The most straightforward way to reduce the complexity of the VQE algorithm is to find the smallest possible qubit representation of the studied Hamiltonian. Indeed, much effort today is focused on efficient encodings of fermionic Hamiltonians (see, e.g., \cite{PhysRevB.104.035118,PRXQuantum.2.030305} and references therein), as getting rid of extra qubits (and operations on them) decreases noise and can avoid space limitations. To highlight the importance of finding minimal representations, we have chosen two equivalent variants of the H$_{\mathrm2}$ molecule Hamiltonian, one using $2$ and another one using $4$ qubits (see the Methods section). Once we have chosen a Hamiltonian and its qubit implementation, VQE needs a choice of the variational circuit for state preparation. We have a vast choice of parametrized gates, arrayed in multiple rounds. This results in varying numbers of classical parameters to be optimized, as well as different reachable quantum states (ansatz quality), with varying amounts of entanglement \cite{Cerezo2021,Funcke2021dimensional,Nakaji2021expressibilityof}.
Here, we focus on two types of variational circuits -- the $R_{\mathrm{y}}$ and the $R_{\mathrm{y}}R_{\mathrm{z}}$ linear forms with a varying number of rounds, shown in Figure~\ref{fig:circuit_4} for $4$ qubits (the $2$-qubit circuits are obvious simplifications). Note that all the results in this subsection were obtained using a noiseless quantum simulation, in order to highlight the influence of the Hamiltonian/ansatz choice in the ideal case. Of course, once we consider noise and statistical readout errors, the Hamiltonian/circuit choice translates to more errors, thanks to wider circuits with a larger number of gates. In Figure \ref{fig:forms}, we depict our results for the convergence of the VQE algorithm, depending on the Hamiltonian choice ($2$- or $4$-qubit), and the ansatz complexity (type and number of rounds). The two-qubit Hamiltonian \eqref{H2_2_qubits} turns out to be simple enough that the depth of the variational ansatz (number of layers) does not significantly influence the final optimized energies. Moreover, we can see that the simpler $R_{\mathrm{y}}$ ansatz outperforms the $R_{\mathrm{y}}R_{\mathrm{z}}$ ansatz. This confirms the intuition that the simplest ansatz containing the solution will also have the best performance, since the solution space has fewer parameters to optimize over. Additionally, in the noisy regime, a simpler ansatz requires fewer gates to implement, leading to a further advantage. On the other hand, for the $4$-qubit Hamiltonian \eqref{H2_4_qubits}, both the $R_{\mathrm{y}}$ and $R_{\mathrm{y}}R_{\mathrm{z}}$ variational forms with $1$ layer are not expressive enough, and the search space does not contain the state with the lowest energy. Using two layers already alleviates this problem. Again, we see that the $R_{\mathrm{y}}$ form performs better. The $12$-parameter, two-level $R_{\mathrm{y}}$ ansatz achieves better convergence than the $24$-parameter, two-level $R_{\mathrm{y}}R_{\mathrm{z}}$ ansatz.
Moreover, if we allow the $R_{\mathrm{y}}$ form with $24$ parameters as well, the circuit has depth $5$ and outperforms the $R_{\mathrm{y}}R_{\mathrm{z}}$ ansatz even more significantly. This can be partially explained by the fact that for the same number of search space parameters, the $R_{\mathrm{y}}$ ansatz can achieve more intricately entangled states, due to the increased number of entanglement layers. \begin{figure} \caption{Comparison of the VQE energies for H$_2$ using the $R_{\mathrm{y} \label{fig:forms} \end{figure} These results only underline the importance of an efficient choice of Hamiltonian encoding and ansatz parametrization. More qubits mean the need for more intricate ansatzes, while introducing more statistical errors when estimating the energies of individual terms, and thus affecting the overall readout precision, even in an ideal (noiseless) case. Further, more complex Hamiltonians can cause the optimization algorithm to converge to a local minimum, thus causing a systematic error. This is treated in more detail in Appendix \ref{app:convergenceLocMin}. \subsection*{Imperfect quantum devices} So far, we have studied the convergence of the VQE algorithm assuming a perfect quantum computer available for the quantum subroutines. This analysis was possible thanks to the relative simplicity of classically simulating the few-qubit quantum circuits used during our VQE algorithm runs. We wish to eventually scale up the calculations, and obtain results from the hybrid classical-quantum VQE which are not available classically. For this, we will have to rely on real quantum devices. Of course, current quantum processors are far from perfect, so the calculations will be inherently noisy, also influencing the convergence of the VQE algorithm. Let us group the possible errors into two basic types -- gate errors and readout errors.
This is a natural divide, since the number of measurement (readout) errors scales only with the number of qubits and can be mitigated with classical techniques. Meanwhile, the gate errors inevitably scale with both the depth of the computational ansatz and the number of qubits, unless we use error correction/mitigation techniques. In Figures \ref{fig:12} and \ref{fig:errors}, we show the convergence of the VQE using simulations of noisy quantum processors. To obtain realistic parameters for the simulation, we used the error characterization of the real IBM devices, obtained directly from {\em qiskit}. \begin{figure} \caption{A comparison of VQE results with noise calibration from two different dates. We used quantum backend {\em ibmq$\_$santiago} \label{fig:12} \end{figure} Our first observation is that the quality of calibration of recent experimental quantum processors changes frequently. Not only that, it significantly influences the convergence, as we demonstrate in Figure~\ref{fig:12}. There, we visualize the convergence of $1000$ trials with the same settings, using calibration data from different dates, with results differing by $10\%$. Second, we observe that for these small circuits, the effect of the measurement error is more significant than that of the gate errors. As we study this further in Figure \ref{fig:errors} for the $4$-qubit Hamiltonian \eqref{H2_4_qubits}, we find that readout errors still affect the convergence more, but the effect is less pronounced. In Figure~\ref{fig:errors}(a) we use the $2$-qubit, $1$-round $R_y$ variational form, which requires $4$ single-qubit gates + $1$ CNOT, and up to $2$ Pauli operators for readout. For Figure~\ref{fig:errors}(b), we recall the results presented in Figure~\ref{fig:forms}, and choose the $4$-qubit, $2$-round $R_y$ variational form, which requires $12$ single-qubit gates + $6$ CNOTs, and up to $2$ Pauli operators for readout.
The number of gates (circuit width $\times$ depth), and thus gate errors, grows faster than the number of estimation terms, which still scales with the circuit width. Thus, we expect that the effect of readout error will become less prominent for larger Hamiltonians compared to the gate errors. Moreover, we can also implement various readout-error mitigating strategies -- e.g. avoiding systematic bias by using bit flips before the final measurements. \begin{figure} \caption{The calculations of H$_{\mathrm2} \label{fig:errors} \end{figure} \subsection*{Convergence of VQE on real quantum hardware} Finally, in this Subsection we look at how the convergence predicted by the noisy simulator corresponds to convergence on real quantum hardware. For this, we performed experimental runs on the {\em ibm$\_$lagos} quantum processor, with \texttt{maxiter} = $75$ and \texttt{shots} = $1024$, and compared them to noisy simulation results with the same parameter settings. In order to mitigate the readout errors, we performed the simple mitigation routine described in the Methods section. In Figure~\ref{fig:bogota}, we observe that in fact the predictions agree fairly well with practice. Further, we tested our hypothesis that the discovered states are very close to the optimal ones, but their energies have to be evaluated more precisely. To this end we calculated their energies using $40000$ shots on the {\em ibm$\_$lagos} quantum processor ($40000$ shots result in $\approx 0.5\%$ error in the energy calculations) as well as using a statevector simulator. In Figure~\ref{fig:bogota}, we can see that both of these techniques decrease the spread of the data points. However, only the statevector simulator consistently evaluates energies within the chemical accuracy from the exact ground state value. This shows that evaluating the energies using a large number of shots still suffers from hardware errors, which are not present when using the statevector simulator.
We conclude that a simple VQE implementation on real quantum hardware does not {\em consistently} find the minimum energy within the chemical accuracy. Nevertheless, using both error mitigation and a precise energy readout, one can obtain a trustworthy result: take the lowest of the precisely evaluated energies. \begin{figure} \caption{ A comparison of results obtained with a noisy quantum simulator with error mitigation (orange points and box plot), with real runs of the {\em ibm$\_$lagos} \label{fig:bogota} \end{figure} \section{Conclusions} In this work we provided a guided tour of quantum chemistry calculations with the Variational Quantum Eigensolver on real quantum hardware. We analyzed the influence of different types of errors on the convergence of VQE. We divided them into several categories and studied them separately, in order to understand their impact and to find ways of avoiding them, while using our resources efficiently. For this, we used noiseless and noisy classical simulations, as well as publicly available superconducting quantum processors. The errors resulting from the probabilistic nature of quantum mechanics significantly differ from the hardware errors. While the statistical error increases the spread of VQE outcomes around the optimum, gate and readout errors not only influence the spread of the outcomes, but often prevent the VQE from finding the optimum altogether. If we could use noiseless hardware, the stochastic errors could be mitigated by increasing the number of shots, and getting precise energy estimates. This turns out not to be that important in the intermediate steps of the iterative algorithm, but crucial at the end. Even with smaller numbers of shots per iteration, our optimization was able to find near-optimal states. However, in order to rule out unphysical results (energies below the actual minimum energy), but also to avoid overestimating the energy, we must invest shots into the final, precise readout.
We learned that this approach can significantly reduce the overall cost (gate evaluations) of VQE, which is important while access to real devices is limited and large runs are time-consuming. Second, it is crucial to search for efficient encodings and good trial-state ansatzes (hardware- and problem-motivated). This saves on gate evaluations, as well as on optimization complexity, by reducing the number of required parameters to search over. We have demonstrated this in the comparison of the 4-qubit and 2-qubit Hamiltonians, and two types of ansatzes, with varying depth/number of parameters, for the same molecular hydrogen problem. Finally, the parametrization of real devices changes with time, and it is crucial to run your experiments at times when the devices are calibrated well. Moreover, the current levels of noise are prohibitive, and add up so that the final results do not straightforwardly reach the desired chemical precision. In order to avoid this, error mitigation must be used, for the readouts (which contribute a surprising amount of error) as well as inside the circuit. \authorcontributions{ Conceptualization, I.M., M.Pi., M.Pl., M.F. and M.\v{S}.; methodology, M.Pi. and D.N.; software, I.M.; validation, I.M., M.Pi. and M.Pl.; formal analysis, I.M. and M.Pi.; investigation, I.M., M.Pi., M.Pl. and M.F.; resources, I.M., M.Pi. and M.Pl.; data curation, I.M.; writing---original draft preparation, I.M., M.Pi., M.Pl. and M.F.; writing---review and editing, D.N.; visualization, I.M.; supervision, M.\v{S}., M.Pi., M.Pl. and M.F.; project administration, M.\v{S}.; funding acquisition, M.\v{S}., M.Pi., M.Pl. and M.F. All authors have read and agreed to the published version of the manuscript. } \funding{ This research was funded by the Grant Agency of Masaryk University in Brno, Czech Republic, within an interdisciplinary research project No.
MUNI/G/1596/2019 entitled ``Development of algorithms for application of quantum computers in electronic-structure calculations in solid-state physics and chemistry''. } \conflictsofinterest{The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the~results.} \sampleavailability{No compounds were created in this study.} \noindent\small{\textbf{Acknowledgements:}} We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. We acknowledge the access to advanced services provided by the IBM Quantum Researchers Program. We would like to thank Martin Saip for fruitful discussions. \abbreviations{The following abbreviations are used in this manuscript:\\ \noindent \begin{tabular}{@{}ll} VQE & Variational Quantum Eigensolver\\ STO-3G & Slater-type Orbital basis set with each orbital expanded into $3$ Gaussian functions\\ SPSA & Simultaneous Perturbation Stochastic Approximation \end{tabular}} \reftitle{References} \externalbibliography{yes} \appendixtitles{yes} \appendixstart \appendix \section{Convergence to local minima}\label{app:convergenceLocMin} In this Appendix, we dig in beyond energy minimization and ask: are we finding the correct (ground) state? Our analysis above is primarily focused on identifying the impact of the different types of noise and errors that nowadays commonly occur in physical realizations of superconducting quantum processors. We have purposely chosen the H$_2$ molecule as the material system of our case study because the ground state of H$_2$ is well known. In fact, it is a frequent textbook example in quantum chemistry and, therefore, it is rather straightforward to show the impact of noise.
Importantly, there are two implicit underlying assumptions in our analysis. First, we expect the minimization procedure within our methodology to aim solely at the ground state. Second, we assume that all the obtained values, covering a wide energy range, are indeed ground-state-related solutions that differ from the known exact ground-state energy only due to the noise. These two aspects are mutually related, and the validity of our assumptions sensitively depends on both the minimizing method and the studied Hamiltonian. Had the noise-related range of energies been wider than the energy separation to the next higher-energy eigenvalue(s), and had this excited-state eigenvalue been found by the minimization procedure, yet another ``source of errors'' would have affected our analysis. This error type would be methodological, rather than noise-related. We address this issue below and show that an analysis focused on only the eigenvalues, i.e.\ the energies, is insufficient. Note that while estimating the final energy, we perform many measurements, and thus obtain a list of (significant) probabilities. This is not full tomography, which would be prohibitive for large-scale systems. We only estimate the expectation values of the Pauli operators present in the Hamiltonian. In our case, we estimate probabilities of computational basis states in order to estimate the expectation values of Pauli $Z$ operators, while estimating Pauli $X$ operators on two of the qubits forces us to perform measurements in a rotated basis. We will now study how to utilize the resulting vectors of probabilities to distinguish different classes of the states produced by VQE. Figure~\ref{fig:wide-range} shows our results for the $4$-qubit Hamiltonian \eqref{H2_4_qubits} in an energy range of $(-1.9, -1.1)$ Hartree. We can notice a set of clearly outlying values above the energy of $-1.3$ Hartree.
They can be identified in a very straightforward manner because they are separated from the main set of ground-state-related energies by quite a wide energy range that is free of other datapoints. \begin{figure} \caption{Ground-state energies of the H$_2$ molecule from $200$ calculations on a noise-less simulator, using the $4$-qubit Hamiltonian \eqref{H2_4_qubits}.} \label{fig:wide-range} \end{figure} In order to discern what these outlying energies represent, we next evaluate higher-energy eigenvalues of the studied qubit Hamiltonians of H$_2$. The eigenvalues of the $4$-qubit Hamiltonian of the H$_2$ molecule, as determined using classical techniques, are as follows: \begin{align*} (-1.867,-1.262, -1.262, -1.242, -1.242, -1.242, -1.160,\\ -1.160, -0.881, -0.465, -0.465, -0.341, -0.211, 0, 0.227). \end{align*} Their inspection reveals that while the ground-state energy of H$_2$ is very well separated from the other higher-energy eigenvalues, there are {several} eigenvalues in the energy range of $(-1.3, -1.1)$ Hartree, albeit some of them degenerate. A challenge that we face is to tell apart (i) the cases that are noise-related variants of the ground-state energy from (ii) those that are (again noise-related) variants of the other eigenvalues. In what follows we propose to distinguish between sets of values corresponding to different eigenvalues by applying a suitable similarity index to the measured probabilities of different basis states. Before we start discussing similarity measures, it is worth noting a few facts related to the measured probabilities of basis states. They are sets of {non-negative} real numbers smaller than or equal to one. The number of elements in each set is equal to $n = 2^q$, where $q$ is the number of qubits, and the sum of all elements in each set of probabilities equals one. As discussed above, the measured sets of probabilities are multiplied by values of integrals within the qubit Hamiltonian in order to evaluate the energies.
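The bookkeeping just described, a probability vector of length $2^q$ that sums to one, obtained from repeated measurements and combined into Pauli expectation values, can be sketched in a few lines of Python. This is an illustrative fragment with hypothetical helper names and made-up counts, not the code used in our calculations; note also that the left-to-right bit ordering assumed here may differ from the convention of a given framework (e.g., Qiskit reverses it).

```python
from collections import Counter

def probabilities_from_counts(counts, n_qubits):
    """Turn raw measurement counts into a probability vector over the
    2**n_qubits computational basis states."""
    shots = sum(counts.values())
    probs = [0.0] * (2 ** n_qubits)
    for bitstring, c in counts.items():
        probs[int(bitstring, 2)] = c / shots
    return probs

def expectation_z(probs, n_qubits, qubits):
    """Estimate a <Z...Z> expectation value on the given qubits: each basis
    state contributes +p or -p according to the parity of its measured bits
    on those qubits."""
    exp = 0.0
    for idx, p in enumerate(probs):
        bits = format(idx, f"0{n_qubits}b")  # qubit q = bits[q], left to right (assumed convention)
        parity = sum(int(bits[q]) for q in qubits) % 2
        exp += p if parity == 0 else -p
    return exp

counts = Counter({"1100": 980, "0000": 12, "1110": 8})  # made-up counts
probs = probabilities_from_counts(counts, 4)
print(sum(probs))                       # sums to one (up to rounding)
print(expectation_z(probs, 4, [1, 2]))  # ≈ -0.96
```

The same probability vectors, reused across the Pauli terms sharing a measurement basis, are the objects compared by the similarity measures discussed next.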
Our analysis of similarities of vectors of measured probabilities below includes two different measures. The first one is the Jaccard-Tanimoto (J-T) index, also known as the Jaccard similarity coefficient. It is used for gauging the similarity and diversity of sample sets. It was developed by Paul Jaccard~\cite{Jaccard} and independently formulated again by T. Tanimoto~\cite{Tanimoto}. The Jaccard-Tanimoto index of two sets $X$ and $Y$ is defined in general as the ratio of the intersection of the two sets over their union, $J\text{-}T(X,Y) = |X \cap Y| / |X \cup Y|$. For two vectors $\{x_i\}, \{y_i\}$ with all components non-negative $(x_i \geq 0, y_i \geq 0)$ and the same length $(i = 1, ..., n)$ it is evaluated as $J\text{-}T(\{x_i\}, \{y_i\}) = \sum_i \min(x_i,y_i)/ \sum_i \max(x_i,y_i)$. The maximum similarity is characterized by a J-T index equal to one, while very low similarity by a J-T index close to zero. The many applications of the J-T index include, e.g., various forms of image analysis and the identification of words mistyped by users of numerous computer and cell-phone/smart-phone applications. The second measure is the scalar (inner) product of the vectors representing the sets of measured probabilities. It is worth noting that while the vectors of measured probabilities have the sum of their components equal to one, their length in the vector sense is in general not equal to one. The length of such a vector equals one only when one of the basis states has probability one and all the others have probability zero; this is also the maximum possible length. For any other distribution of probabilities the length is lower than one. The other extreme case, with the shortest vector, has all $n$ basis states with the same probability $1/n$, and the length of this vector is $1/\sqrt{n}$. Therefore, all the probability vectors are normalized before we evaluate their scalar product in our analysis.
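Both measures are easy to compute from the probability vectors. The following minimal Python sketch (illustrative only; the two $2$-qubit probability vectors are made up) implements the J-T index and the scalar product of length-normalized vectors:

```python
import math

def jaccard_tanimoto(x, y):
    """J-T index of two non-negative vectors: sum of element-wise minima
    over sum of element-wise maxima (1 = identical, 0 = disjoint support)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(max(a, b) for a, b in zip(x, y))

def normalized_dot(x, y):
    """Scalar product after normalizing each vector to unit Euclidean length."""
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return sum(a * b for a, b in zip(x, y)) / (nx * ny)

peaked = [0.94, 0.02, 0.02, 0.02]   # one dominant basis state
uniform = [0.25, 0.25, 0.25, 0.25]  # flat vector, Euclidean length 1/sqrt(4)

print(jaccard_tanimoto(peaked, peaked))   # 1.0
print(jaccard_tanimoto(peaked, uniform))  # ≈ 0.18
print(normalized_dot(peaked, uniform))    # ≈ 0.53
```

Already in this toy case the normalized scalar product rates the peaked/flat pair as considerably more similar than the J-T index does, a tendency we return to below.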
As the individual additive parts of the Hamiltonian can be divided into two groups, each requiring a different topology of the quantum circuit, we have evaluated similarities for two sets of $200$ vectors of measured probabilities that we below call circuit $0$ (all qubits measured in the computational basis) and circuit $1$ (qubits $1$ and $3$ measured in the computational basis and qubits $0$ and $2$ measured in the $X$ basis). Figure~\ref{sim-fig} shows the values of both the Jaccard-Tanimoto (J-T) similarity index (Fig.~\ref{sim-fig}(a,b)) and the scalar product (Fig.~\ref{sim-fig}(c,d)) for both circuit $0$ (Fig.~\ref{sim-fig}(a,c)) and circuit $1$ (Fig.~\ref{sim-fig}(b,d)). The values plotted for each vector in each set were determined as averages over the $200$ J-T/scalar-product similarities of the given vector with all $200$ vectors in the set. \begin{figure} \caption{Comparison of similarities of vectors of measured probabilities as functions of energies. The plotted values are averages over all $200$ values, evaluated using either the Jaccard-Tanimoto (J-T) similarity index (parts (a,b)) or the scalar product (parts (c,d)), for both circuit $0$ (parts (a,c)) and circuit $1$ (parts (b,d)).} \label{sim-fig} \end{figure} Figure~\ref{sim-fig} shows that the averaged similarity values decrease as the energy increases away from its ground-state value. Without noise, the probability vector of circuit $0$ has the $\ket{1100}$ basis state with probability close to $1$ and the other basis states have probabilities close to $0$. Of the $200$ calculations, $188$, i.e. $94\%$, exhibit variations of this scenario and these results cover energies up to $-1.7$ Hartree. The J-T similarity index values (see Fig.~\ref{sim-fig}(a)) show a weakly decreasing trend with increasing energy. In contrast, the averaged scalar product of all of these $188$ probability vectors has a nearly identical value (see Fig.~\ref{sim-fig}(c)).
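The averaging procedure just described can be sketched as follows. This is a toy Python example with three made-up probability vectors rather than our $200$ measured ones, using the J-T index as the pairwise similarity:

```python
def jaccard_tanimoto(x, y):
    """J-T index: sum of element-wise minima over sum of element-wise maxima."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(max(a, b) for a, b in zip(x, y))

def averaged_similarities(vectors):
    """For each vector, the average similarity to all vectors in the set
    (itself included), i.e. the quantity plotted for each datapoint."""
    n = len(vectors)
    return [sum(jaccard_tanimoto(v, w) for w in vectors) / n for v in vectors]

vectors = [
    [0.90, 0.10, 0.00, 0.00],  # ground-state-like
    [0.85, 0.15, 0.00, 0.00],  # ground-state-like, slightly noisier
    [0.00, 0.00, 0.50, 0.50],  # a very different state
]
for avg in averaged_similarities(vectors):
    print(round(avg, 3))  # the two similar vectors average high, the outlier low
```

Vectors belonging to a large, mutually similar class obtain averaged values close to one, while an outlier's average is dominated by its dissimilarity to that class, which is the mechanism behind the trends in Figure~\ref{sim-fig}.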
As the other extreme, about $5\%$ of the vectors of probabilities have a very low averaged J-T similarity value, close to zero, and energies in the range between $-1.3$ and $-1.2$ Hartree. These we, therefore, identify as excited states because their vectors of probabilities are very different from the case when the $\ket{1100}$ basis state has probability close to one. The remaining $1\%$ of the cases plotted in Fig.~\ref{sim-fig}(a,c) are characterized by energies and similarity index values that lie in between the region close to the ground state on one hand and that of the excited states on the other hand. We believe that these are essentially erroneous calculations in which the SPSA algorithm failed to converge to a local minimum. The above mentioned percentages ($94\%$ of states similar to the ground state, $5\%$ similar to the excited states and $1\%$ erroneous states) explain the actual values of the averaged similarity index plotted in Figure~\ref{sim-fig}. As $94\%$ of the probability vectors are related to the ground state, they are mutually similar and contribute values close to one. Their averaged similarity index is lower than one because it is reduced by their medium similarity to the erroneous states ($1\%$ of the contributions to the sum) and their nearly zero similarity to the excited states (another $5\%$ of the contributions). Complementarily, the excited states have a nearly zero similarity index with $94\%$ of all probability vectors as well as with the erroneous states and, therefore, their averaged similarity index is close to zero. The intermediate values for the erroneous states arise in a similar manner, as these states are moderately similar to both the ground-state-related states and the excited states.
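The weighted-average reasoning above can be verified with a trivial Python computation. The class fractions come from the text, whereas the per-class similarity values $1.0$, $0.5$ and $0.0$ are our assumed typical values, not measured data:

```python
def averaged_similarity(fractions, class_similarities):
    """Expected averaged similarity of a reference state, modeled as a
    weighted average over the three state classes."""
    return sum(f * s for f, s in zip(fractions, class_similarities))

# Fractions from the text: ground-state-like, erroneous, excited.
fractions = [0.94, 0.01, 0.05]
ground_like = [1.0, 0.5, 0.0]  # assumed similarities of a ground-like vector to each class
excited = [0.0, 0.5, 1.0]      # assumed similarities of an excited-state vector to each class

print(averaged_similarity(fractions, ground_like))  # ≈ 0.945: close to one
print(averaged_similarity(fractions, excited))      # ≈ 0.055: close to zero
```

This simple mixture model reproduces the qualitative levels seen in Fig.~\ref{sim-fig}(a,c): averaged indices just below one for ground-state-like vectors and close to zero for the excited states.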
The situation is more complicated in the case of circuit $1$ (see Fig.~\ref{sim-fig}(b,d)), where the measurements of the ground state result in outcomes ($\ket{0100}$, $\ket{0110}$, $\ket{1100}$ and $\ket{1110}$) with rather similar probabilities (they differ by about $1/16$). The J-T similarity index performs quite similarly (see Fig.~\ref{sim-fig}(b)) to the case of circuit $0$ (see Fig.~\ref{sim-fig}(a)), but some of the excited states show significant non-zero values (up to $0.4$). This trend is much more pronounced in the case of the scalar product (see Fig.~\ref{sim-fig}(d)), where the values can reach as high as $0.74$. While future studies focused on similarity analysis in other systems are critically needed, a few general conclusions can be drawn. First, the similarity analysis of vectors of measured probabilities can be very fruitful when assessing the computed energies. The similarities seem to be able to tell the cases that are just noisy variants of the ground state from those that are (i) related to other eigenvalues of higher-energy excited states or even (ii) plainly erroneous cases. Second, the two tested similarity measures, the Jaccard-Tanimoto (J-T) similarity index and the scalar product, perform much better when the ground state and/or the excited states are associated with the situation where one of the basis states has probability close to $1$ while the others have it equal to $0$. It would thus be advantageous to develop methods associating the eigenvalues with these cases. Third, while the similarity index based on the scalar product identifies states related to the ground state by noise much more clearly than the J-T index, it tends to overrate the similarity of the excited and plainly erroneous states. Therefore, the J-T similarity index seems to better capture continuously varying levels of similarity of different states.
Fourth, we have analyzed averaged similarity measures based on the similarity between each vector of probabilities and all other such vectors in our data set. Our analysis does not profit from the fact that we can estimate the vector of probabilities related to the ground state (it has the lowest energy) and determine the similarity with respect to this particular state. As VQE can also be applied to determine excited states, the same similarity analysis can be applied to them as well, and their noise-related states can be identified more reliably. Lastly, when an eigenvalue is associated with a probability vector in which one of the basis states has probability one and the others zero, all other vectors that are noise-related to it (and that have probabilities redistributed elsewhere due to the noise and errors) can be used to determine effective error rates and, conversely, to correct for these errors. \section{Data tables} \label{AppendixDataTables} Let us now present the data behind the figures in the main text. First, in Table~\ref{tab:maxiter1000}, we list the median ground state energies and the percentages of data falling into the chemical precision range (with and without precise recalculation of energies), for various settings of the number of \texttt{shots} and \texttt{maxiter} = $1000$. This is followed by Tables~\ref{tab:QuantumUses} and \ref{tab:QuantumUses2}, in which we elaborate in more detail on the energy values obtainable with various \texttt{shots} and \texttt{maxiter} settings. Then, in Table~\ref{tab3}, we list the results underlying Figure~\ref{fig:forms}, comparing state preparation ansatzes for $2$- and $4$-qubit computations. Further, in Table~\ref{tab2}, we list the results highlighting the variability of actual hardware when performing runs on different dates, plotted in Figure~\ref{fig:errors}.
Finally, in Table~\ref{tab:realRun} we show the energies underlying the real runs presented in Figure~\ref{fig:bogota}. \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{cccccc} \toprule \multirow{2}{*}{Simulator} & \multirow{2}{*}{\texttt{shots}} & \multirow{2}{*}{Median [Ha]} & \multirow{2}{*}{$\%$} & \multicolumn{2}{c}{Recalculation} \\ &&&& Median [Ha] & $\%$ \\ \midrule \multirow{5}{*}{qasm\_simulator} & $512$ & $ -1.86637 $ & $ 10.5 \pm 3 $ & $ -1.86675 $ & $ 93.8 \pm 3 $\\ & $1024$ & $ -1.86723 $ & $ 14.8 \pm 3 $ & $ -1.86693 $ & $ 98.5 \pm 3 $\\ & $2048$ & $ -1.86667 $ & $ 21.1 \pm 3 $ & $ -1.86703 $ & $ 99.1 \pm 3 $\\ & $4096$ & $ -1.86705 $ & $ 28.7 \pm 3 $ & $ -1.86707 $ & $ 99.0 \pm 3 $\\ & $8192$ & $ -1.86729 $ & $ 38.9 \pm 3 $ & $ -1.86710 $ & $ 99.4 \pm 3 $\\ statevector\_simulator & $-$ & $ -1.86712 $ & $ 99.1 \pm 3 $ & $-$ & $-$\\ \bottomrule \end{tabular} \caption{Summary of the medians of $1000$ optimized ground state energies and the percentage of outcomes that were found within chemical precision of the exact energy ($-1.86712 \pm 0.0015$ Ha) for different settings of the \texttt{shots} parameter and \texttt{maxiter} = $1000$. The final energy in the middle two columns is the direct outcome of the SPSA algorithm, while in the last two columns it is recalculated using the statevector simulator. The $R_{\mathrm{y}}$ variational ansatz was used. Error margin of $3$ percent is estimated based on the standard deviation of $1000$ identically distributed trials.
This data is the basis for Figure \ref{fig:shots}.} \label{tab:maxiter1000} \end{specialtable} \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{cccc} \toprule Settings & \texttt{shots} & Median [Ha] & $\%$ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 50)} & $512$ & $ -1.86119 $ & $ 8.6 \pm 3 $ \\ & $1024$ & $ -1.86258 $ & $ 11.4 \pm 3 $ \\ & $4096$ & $ -1.86475 $ & $ 20.7 \pm 3 $ \\ & $8192$ & $ -1.86544 $ & $ 29.9 \pm 3 $ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 75)} & $512$ & $ -1.86276 $ & $ 9.1 \pm 3 $ \\ & $1024$ & $ -1.86439 $ & $ 12.6 \pm 3 $ \\ & $4096$ & $ -1.86559 $ & $ 24.9 \pm 3 $ \\ & $8192$ & $ -1.86601 $ & $ 34.2 \pm 3 $ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 100)} & $512$ & $ -1.86419 $ & $ 9.4 \pm 3 $ \\ & $1024$ & $ -1.86530 $ & $ 13.5 \pm 3 $ \\ & $4096$ & $ -1.86607 $ & $ 25.9 \pm 3 $ \\ & $8192$ & $ -1.86644 $ & $ 38.1 \pm 3 $ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 125)} & $512$ & $ -1.86486 $ & $ 9.3 \pm 3 $ \\ & $1024$ & $ -1.86524 $ & $ 13.6 \pm 3 $ \\ & $4096$ & $ -1.86631 $ & $ 27.0 \pm 3 $ \\ & $8192$ & $ -1.86656 $ & $ 37.5 \pm 3 $ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 150)} & $512$ & $ -1.86461 $ & $ 9.3 \pm 3 $ \\ & $1024$ & $ -1.86593 $ & $ 14.3 \pm 3 $ \\ & $4096$ & $ -1.86613 $ & $ 28.8 \pm 3 $ \\ & $8192$ & $ -1.86637 $ & $ 36.9 \pm 3 $ \\ \midrule \multirow{4}{*}{SPSA(\texttt{maxiter} = 200)} & $512$ & $ -1.86634 $ & $ 11.3 \pm 3 $ \\ & $1024$ & $ -1.86575 $ & $ 14.3 \pm 3 $ \\ & $4096$ & $ -1.86652 $ & $ 28.7 \pm 3 $ \\ & $8192$ & $ -1.86653 $ & $ 36.5 \pm 3 $ \\ \bottomrule \end{tabular} \caption{Summary of the medians of $1000$ optimized ground state energies and the percentage of outcomes that were found within chemical precision of the exact energy ($-1.86712 \pm 0.0015$ Ha) for different settings of the maximum number of iterations (\texttt{maxiter}) and the number of shots. The $R_{\mathrm{y}}$ variational ansatz was used.
Error margin of $3$ percent is estimated based on the standard deviation of $1000$ identically distributed trials. This data is the basis for Figure \ref{fig:maxiter}. \label{tab:QuantumUses} } \end{specialtable} \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{cccccc} \toprule \multirow{2}{*}{Settings} & \multirow{2}{*}{\texttt{shots}} & \multirow{2}{*}{Median [Ha]} & \multirow{2}{*}{$\%$} & \multicolumn{2}{c}{Recalculation} \\ &&&& Median [Ha] & $\%$ \\ \midrule \multirow{2}{*}{SPSA(\texttt{maxiter} = 50)} & $512$ & $ -1.86101 $ & $ 7.0 \pm 3 $ & $ -1.86349 $ & $ 25.5 \pm 3 $\\ & $1024$ & $ -1.86190 $ & $ 10.8 \pm 3 $ & $ -1.86520 $ & $ 41.4 \pm 3 $\\ \midrule \multirow{2}{*}{SPSA(\texttt{maxiter} = 75)} & $512$ & $ -1.86369 $ & $ 9.7 \pm 3 $ & $ -1.86511 $ & $ 38.7 \pm 3 $\\ & $1024$ & $ -1.86420 $ & $ 11.7 \pm 3 $ & $ -1.86594 $ & $ 57.4 \pm 3 $\\ \midrule \multirow{2}{*}{SPSA(\texttt{maxiter} = 100)}& $512$ & $ -1.86472 $ & $ 9.7 \pm 3 $ & $ -1.86560 $ & $ 49.4 \pm 3 $\\ & $1024$ & $ -1.86458 $ & $ 13.9 \pm 3 $ & $ -1.86627 $ & $ 64.5 \pm 3 $\\ \bottomrule \end{tabular} \caption{Summary of the medians of $1000$ optimized ground state energies and the percentage of outcomes that were found within chemical precision of the exact energy ($-1.86712 \pm 0.0015$ Ha) for different settings of the maximum number of iterations (\texttt{maxiter}) and the number of shots. The $R_{\mathrm{y}}$ variational ansatz was used. Error margin of $3$ percent is estimated based on the standard deviation of $1000$ identically distributed trials. The rightmost two columns are the recalculation of the obtained energies using the statevector simulator.
This data is the basis for Figure \ref{fig:maxiter2}.\label{tab:QuantumUses2}} \end{specialtable} \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{cccc} \toprule \multicolumn{4}{c}{2-qubit Hamiltonian H$_{\mathrm{2}}$} \\ Form & Depth (parameters) & Median [Ha] & $\%$ \\ \midrule $R_\mathrm{y}$ & $3$ $(8)$ & $-1.86713$ & $31.2\pm 3$ \\ $R_\mathrm{y}$$R_\mathrm{z}$ & $2$ $(12)$ & $-1.86664$ & $30.7\pm 3$ \\ $R_\mathrm{y}$ & $2$ $(6)$ & $-1.86709$ & $32.4\pm 3$ \\ $R_\mathrm{y}$$R_\mathrm{z}$ & $1$ $(8)$ & $-1.86594$ & $25.8\pm 3$ \\ $R_\mathrm{y}$ & $1$ $(4)$ & $-1.86705$ & $28.7\pm 3$ \\ \toprule \multicolumn{4}{c}{4-qubit Hamiltonian H$_{\mathrm{2}}$} \\ Form & Depth (parameters) & Median [Ha] & $\%$ \\ \midrule $R_\mathrm{y}$ & $5$ $(24)$ & $-1.86331$ & $18.1 \pm 3$ \\ $R_\mathrm{y}$$R_\mathrm{z}$ & $2$ $(24)$ & $-1.86070$ & $12.5\pm 3$ \\ $R_\mathrm{y}$ & $2$ $(12)$ & $-1.86361$ & $17.9\pm 3$ \\ $R_\mathrm{y}$$R_\mathrm{z}$ & $1$ $(16)$ & $-1.84560$ & $0.0$ \\ $R_\mathrm{y}$ & $1$ $(8)$ & $-1.84591$ & $0.0$ \\ \bottomrule \end{tabular} \caption{A comparison of various settings of quantum circuits for the 2-qubit and 4-qubit Hamiltonians of H$_{\mathrm{2}}$. The $R_{\mathrm{y}}R_{\mathrm{z}}$ and $R_{\mathrm{y}}$ variational forms accompanied by linear entanglement with different circuit depths and numbers of parameters were used. In the case of the 2-qubit Hamiltonian we used an unrestricted maximum number of iterations of the SPSA optimizer, and in the case of the 4-qubit Hamiltonian \texttt{maxiter} was set to $400$. Columns labeled $\%$ represent the percentage of the runs that ended within chemical accuracy ($-1.86712 \pm 0.0015$ Ha) of the optimum. Error margin of $3$ percent is estimated based on the standard deviation of $1000$ identically distributed trials. This data is the basis for Figure \ref{fig:forms}.
\label{tab3}} \end{specialtable} \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{ccccccc} \toprule \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Simulator \\ NoiseModel\end{tabular}} & \multirow{2}{*}{Qubits} & \multirow{2}{*}{Date} & \multicolumn{2}{c}{$R_\mathrm{y}$ form} & \multicolumn{2}{c}{$R_\mathrm{y}$$R_\mathrm{z}$ form} \\ & & & Median[Ha] & \% & Median[Ha] & \% \\ \midrule \multirow{3}{*}{ gate errors} & \multirow{2}{*}{2} & Dec.\,14,20 & $-1.85819$ & $3.3\pm 3$ & $-1.85719$ & $2.4\pm 3$ \\ & & May\,14,21 & & & $-1.85947$ & $5.3\pm 3$ \\ & $4$ & Dec.\,14,20 & $-1.79786$ & $0.0$ & $-1.77738$ & $0.0$ \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}readout\\ errors\end{tabular}} & \multirow{2}{*}{2} & Dec.\,14,20 & $-1.80617$ & $0.0$ & $-1.80510$ & 0.0 \\ & & May\,14,21 & & & $-1.82060$ & $0.0$ \\ & $4$ & Dec.\,14,20 & $-1.78270$ & $0.0$ & $-1.76214$ & $0.0$ \\ \midrule \multirow{3}{*}{ all errors} & \multirow{2}{*}{2} & Dec.\,14,20 & $-1.79816$ & $0.0$ & $-1.79646$ & $0.0$ \\ & & May\,14,21 & & & $-1.81879$ & $0.0$ \\ & $4$ & Dec.\,14,20 & $-1.70967$ & $0.0$ & $-1.68760 $ & $0.0$ \\ \bottomrule \end{tabular} \caption{A comparison of results from different dates and different noise models built from the real quantum processor {\em ibmq\_santiago}. The number of shots was set to $4096$. The calculations were performed using both the $R_{\mathrm{y}}R_{\mathrm{z}}$ and $R_{\mathrm{y}}$ variational forms. The classical optimization method SPSA was used; for the $2$-qubit system \texttt{maxiter} was unrestricted and for the $4$-qubit system \texttt{maxiter} was set to $400$. The columns labeled $\%$ represent the percentage of the runs that ended within chemical accuracy ($-1.86712 \pm 0.0015$ Ha) of the optimum. Error margin of $3$ percent is estimated based on the standard deviation of $1000$ identically distributed trials.
This data is the basis for Figures \ref{fig:12} and \ref{fig:errors}.\label{tab2}} \end{specialtable} \begin{specialtable}[ht] \widefigure \centering \begin{tabular}{ccc} \toprule \multicolumn{2}{c}{\em ibm\_lagos} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Statevector \\ simulator\end{tabular}} \\ \texttt{shots} = $1024$ & \texttt{shots} = $40000$\\ \midrule $-1.85075$ Ha & $-1.85688$ Ha & $-1.86588$ Ha \\ $-1.86237$ Ha & $-1.86035$ Ha & $-1.86558$ Ha \\ $-1.81998$ Ha & $-1.82370$ Ha & $-1.86560$ Ha \\ $-1.82088$ Ha & $-1.82312$ Ha & $-1.85678$ Ha \\ $-1.84165$ Ha & $-1.83561$ Ha & $-1.86488$ Ha \\ $-1.82453$ Ha & $-1.83271$ Ha & $-1.86456$ Ha \\ $-1.85601$ Ha & $-1.86479$ Ha & $-1.86569$ Ha \\ $-1.87990$ Ha & $-1.86107$ Ha & $-1.86605$ Ha \\ $-1.84045$ Ha & $-1.81550$ Ha & $-1.86504$ Ha \\ $-1.86599$ Ha & $-1.82214$ Ha & $-1.86554$ Ha \\ $-1.78271$ Ha & $-1.84946$ Ha & $-1.86099$ Ha \\ $-1.82068$ Ha & $-1.83412$ Ha & $-1.86482$ Ha \\ $-1.83439$ Ha & $-1.82281$ Ha & $-1.85480$ Ha \\ $-1.80671$ Ha & $-1.83176$ Ha & $-1.86270$ Ha \\ \bottomrule \end{tabular} \caption{Energy values obtained by {\em ibm\_lagos} with various final energy evaluations. This data is the basis for Figure \ref{fig:bogota}.\label{tab:realRun}} \end{specialtable} \end{paracol} \end{document}
\begin{document} \title{Factoring Sobolev inequalities through classes of functions} \author[D.\,Alonso]{David Alonso-Guti\'errez} \address{Departamento de Matem\'aticas, Universidad de Zaragoza, 50009 Zaragoza, Spain}\email[(David Alonso)]{[email protected]} \thanks{ The three authors were partially supported by MCYT grant (Spain) MTM2007-61446 and DGA E-64} \author[J.\,Bastero]{Jes\'us Bastero}\email[(Jes\'us Bastero)]{[email protected]} \author[J.\,Bernu\'es]{Julio Bernu\'es} \email[(Julio Bernu\'es)]{[email protected]} \subjclass[2000]{46E35, 46E30, 26D10, 52A40} \keywords{Sobolev inequality, sharp constants, affine isoperimetric inequalities} \begin{abstract} We recall two approaches to recent improvements of the classical Sobolev inequality. The first one follows the point of view of Real Analysis, \cite{MM1}, \cite{BMR}, while the second one relies on tools from Convex Geometry, \cite{Z}, \cite{LYZI}. In this paper we prove a (sharp) connection between them. \end{abstract} \date{} \maketitle \section{Introduction and notation} The classical Sobolev inequality states that for $1\le p<n$ and $\frac1q=\frac1p-\frac1n$, there exists a constant $C_{p,n}>0$ such that for every $f\colon\mathbb R^n\to\mathbb R$ in the Sobolev space $W^{1,p}(\mathbb R^n)$, \begin{equation}\label{Sobolev} \Vert\nabla f\Vert_p\ge C_{p,n} \Vert f\Vert_q \end{equation} where $\Vert \cdot\Vert_q$ denotes the $L_q$-norm (of the Euclidean norm, for vector-valued functions) and $\nabla f$ is the gradient of $f$. The best constant in the case $p=1$ ($q=n/(n-1)$) was obtained by H. Federer and W. Fleming \cite{FF}, and independently by V. Maz'ja \cite{M1}, \cite{M2}. They proved $C_{1,n}=n\ \omega_n^{\frac{1}{n}}$, where $\omega_n$ denotes the Lebesgue measure of the Euclidean unit ball in ${\mathbb R}^n$, and showed that this fact is equivalent to the isoperimetric inequality (see for instance \cite{C} for a survey). For the other values of $1<p<n$, Aubin and Talenti obtained the best constants, \cite{A}, \cite{T1}.
See also the recent approaches in \cite{BL} and \cite{CNV}. We point out that one key step in classical proofs of (\ref{Sobolev}) is the use of the Polya-Szeg\"o rearrangement inequality, see \cite{PS}, \begin{equation}\label{PS} \Vert \nabla f\Vert_p \ge \Vert \nabla f^\circ\Vert_p,\qquad \qquad p\ge 1 \end{equation} where $f^{\circ}(x):=f^{*}(\omega_n |x|^n)$ is a radial extension to ${\mathbb R}^n$ of the decreasing rearrangement of $f$, namely $f^{*}(t)=\inf\{\lambda>0\ ;\ |\{|f|>\lambda\}|_n\le t\}$, $t\ge 0$; here $|\cdot |$ is the Euclidean norm in ${\mathbb R}^n$ and $|\cdot|_n$ is the Lebesgue measure on the (suitable) $n$-dimensional space. The function $f^{\circ}$ has the same distribution function as $f$ and $f^{\ast}$; it is called the symmetric Schwarz rearrangement of $f$. For $p=n$, the inequality with $q=\infty$ is not true. In the sixties Trudinger \cite{Tr} and Moser \cite{M} proposed an Orlicz space, $\mathcal{MT}(\Omega)$, of functions defined on open domains $\Omega\subset {\mathbb R}^n$ with $|\Omega|_n<\infty$ and showed the continuous inclusion $W^{1,n}_0(\Omega)\hookrightarrow\mathcal{MT}(\Omega)$, where $W^{1,n}_0(\Omega)$ is the closure of the space of $\mathcal{C}^1$ functions of compact support, $\mathcal{C}^1_{00}(\Omega)$, in the Sobolev space $W^{1,n}(\Omega)$. More precisely, they proved that there exists $C_n>0$ such that for all $f\in W^{1,n}_0(\Omega)$ \[ \Vert \nabla f\Vert_n \geq C_n \Vert f\Vert_{\mathcal{MT}} \] and the constant (depending on $|\Omega|_n$) is sharp. In the late seventies, Hansson \cite{Ha} and Brezis-Wainger \cite{BW} improved the target space in the inclusion above. They introduced a rearrangement invariant function space, $H_n(\Omega)$, such that $W_0^{1,n}(\Omega)\hookrightarrow H_n(\Omega)\hookrightarrow\mathcal{MT}(\Omega)$. Moreover, $H_n(\Omega)$ was proved to be the optimal target space in the class of rearrangement invariant spaces.
Equivalently, they obtained an inequality of the form $\Vert \nabla f\Vert_n\geq c_n \Vert f\Vert_{H_n}\geq c'_n \Vert f\Vert_{\mathcal{MT}}$ for some constants $c_n, c'_n>0$ (depending on $|\Omega|_n$). Tartar \cite{Tar}, Maly-Pick \cite{MP} and Bastero-Milman-Ruiz \cite{BMR}, see also \cite{K}, refined those estimates using {\sl classes of functions} as follows: For $1\le p<\infty$ denote \[\mathcal{A}_{\infty,p}(\mathbb R^n)=\{f; \Vert f\Vert_{\infty, p}=\left(\int_0^{\infty} (f^{\ast\ast}(t)-f^\ast(t))^p\frac{dt}{t^{p/n}}\right)^{1/p}<\infty\}\] where $f^{\ast\ast}$ is the Hardy transform of $f^{*}$ defined by $\displaystyle f^{\ast\ast}(t)=\frac{1}{t}\int_0^tf^\ast(s)\ ds$. Then for all $f\in W^{1,n}_0(\Omega)$ \begin{equation}\label{BMR} \Vert \nabla f\Vert_n\geq (n-1)\,\omega_n^{\frac1n}\Vert f\Vert_{\infty,n}\geq c''_n \Vert f\Vert_{H_n} \end{equation} for some $c''_n>0$ (depending on $|\Omega|_n$). Observe that the constant in the first inequality depends neither on the measure of $\Omega$ nor on the support of $f$. Once one considers classes of functions instead of vector spaces, Sobolev-type inequalities can be extended further. At this point we recall the well known fact that $W_0^{1,p}(\mathbb R^n)=W^{1,p}(\mathbb R^n)$, \cite{M2}. In \cite{MM1} the authors proved \begin{equation} \label{des}\Vert \nabla f\Vert_p\geq c_{n,p} \Vert f\Vert_{\infty,p}\geq c'_{n,p} \Vert f\Vert_q,\qquad \forall\ f\in W^{1,p}(\mathbb R^n),\qquad 1\le p<n \end{equation} for some constants $ c_{n,p}, c'_{n,p}>0$. We now move on to a different philosophy.
We start by recalling the so-called Petty projection inequality, stated in \cite{P} for convex bodies and extended by Zhang \cite{Z} to compact subsets $K\subset{\mathbb R}^n$, \begin{equation}\label{Petty} \frac{n\omega_n}{\omega_{n-1}}\left(\int_{ S^{n-1}} \frac{du}{|P_{u^{\perp}}(K)|_{n-1}^n} \right)^{-\frac{1}{n}}\ge n\omega_n^{1/n} |K|_n^{\frac{n-1}{n}} \end{equation} where $P_{u^{\perp}}$ is the orthogonal projection onto the hyperplane $u^{\perp}$ and $du$ is the normalized Haar probability on the unit sphere $S^{n-1}$. The Petty projection inequality directly implies the isoperimetric inequality. In 1999 Zhang \cite{Z} (see also \cite{HSX} and the references therein) introduced a new class of functions \[\mathcal{E}_p(\mathbb R^n)=\left\{f\in W^{1,p}(\mathbb R^n); \mathcal{E}_p(f):= \frac1{I_p}\left( \int_{S^{n-1}}\Vert D_u f\Vert_p^{-n}du\right)^{-\frac{1}{n}}<\infty\right\}, \qquad p\ge 1 \] where $D_uf(x):=\langle\nabla f(x), u\rangle $ and $\displaystyle I^{p}_p:=\int_{S^{n-1}}|u_1|^pdu$ is a normalization constant so that $\mathcal{E}_p(f^{\circ})= \Vert \nabla f^{\circ}\Vert_p$. The expression $\mathcal{E}_p(f)$ is an energy integral having applications in information theory. It is invariant under transformations of $\mathbb R^n$ of the form $x\to x_0+Ax$, $x_0\in\mathbb R^n$, $A\in SL(n)$, \cite{LYZII}. Moreover, by Jensen's inequality and Fubini's theorem the following relation holds $$\mathcal{E}_p(f)=\frac{1}{I_p} \left(\int_{S^{n-1}}\Vert D_uf\Vert_p^{-n}du\right)^{-1/n} \le \frac{1}{I_p}\left(\int_{S^{n-1}}\Vert D_uf\Vert_p^{p}du\right)^{1/p} =\Vert \nabla f\Vert_p.$$ The following remarkable inequality \begin{equation}\label{LYZ} \mathcal{E}_p(f) \geq \mathcal{E}_p(f^\circ)\ ,\qquad 1\le p<\infty \end{equation} was proved in a series of papers: Zhang \cite{Z} initiated the approach by showing that his extension of the Petty projection inequality (\ref{Petty}) is actually equivalent to (\ref{LYZ}) for $p=1$.
The general case was proved via the $L_p$-Brunn-Minkowski theory in \cite{LYZI}, \cite{LYZII}, \cite{CLYZ}. The invariance of $\mathcal{E}_p(f)$ implies, by homogeneity, that (\ref{LYZ}) is affine-invariant, i.e. invariant under transformations of $\mathbb R^n$ of the form $x\to x_0+Ax$, $x_0\in\mathbb R^n$, $A\in GL(n)$. The inequality (\ref{LYZ}) is stronger than the Polya-Szeg\"o rearrangement inequality (\ref{PS}) and thus it yields a new proof of Sobolev's inequality \begin{equation}\label{sobolev} \Vert\nabla f\Vert_p\ge \mathcal{E}_p(f) \geq \mathcal{E}_p(f^\circ) \ge C_{n,p}\Vert f\Vert_{q}, \qquad 1\le p<n. \end{equation} See \cite{Z}, \cite{LYZII} for such a proof of (\ref{Sobolev}) with sharp constants. We remark that $\mathcal{E}_p(f) \geq C_{n,p}\Vert f\Vert_{q}$ is affine-invariant while Sobolev's inequality $\Vert\nabla f\Vert_p\ge C_{n,p}\Vert f\Vert_{q}$ is invariant only under rigid motions. In \cite{HS1}, \cite{HS2}, \cite{HSX}, the authors investigated the space $\mathcal{E}_p^+(\mathbb R^n)$, defined analogously by \[ \mathcal{E}_p^+(f):=\frac{2^{1/p}}{I_p} \left( \int_{S^{n-1}}\Vert D_u^+ f\Vert_p^{-n}du\right)^{-\frac{1}{n}} \] where $D_u^+f(x):=\max\{\langle\nabla f(x), u\rangle,0 \} $, and proved \begin{equation}\label{HS} \mathcal{E}_p(f)\geq \mathcal{E}_p^+(f) \geq \mathcal{E}_p^+(f^\circ)=\mathcal{E}_p(f^\circ), \qquad p\ge 1 \end{equation} which refined (\ref{sobolev}) for $1\le p<n$. In the case $p\ge n$, affine-invariant inequalities of Sobolev type were studied in \cite{CLYZ}, \cite{HS2} and \cite{HSX} under the hypothesis that $f$ has support of finite measure.
In the limiting case $p=n$ they proved the sharp inequality \begin{equation}\label{hs} \Vert \nabla f\Vert_n \geq \,\mathcal {E}_n(f) \geq \mathcal {E}_n^+(f) \geq C_n \Vert f\Vert_{\mathcal{MT}} \end{equation} while for $p>n$, \begin{equation}\label{hsp} \mathcal {E}_p(f) \geq\mathcal {E}_p^+(f) \geq \left(\frac{p'}{|q|}\right)^{\frac{1}{p'}} n\omega_n^{1/n}\Vert f\Vert_\infty |{\text {supp}\,f}|_n^{1/q} \qquad \text{where}\quad \frac 1p +\frac 1{p^\prime}=1 \end{equation} where the constant {\sl depending on the size of the support of $f$} is sharp (take $f^{\ast}(t)=(1-t^{-p'/q})\chi_{[0,1]}\!(t)$). In conclusion, the first approach described above sought improvements of the right-hand side of (\ref{Sobolev}), while the second approach concerned the left-hand side of (\ref{Sobolev}). In this note we link these two approaches and show that for all $1\le p <\infty$ and $\displaystyle \frac 1q:=\frac1p-\frac1n$ \[\mathcal{E}_p^+(f) \geq\left(1-\frac 1q\right){n\ \omega_n^{1/n} } \Vert f\Vert_{{\infty,p}}\qquad \forall f\in W^{1,p}(\mathbb R^n)\] (see Theorem \ref{relation}) where the constant is sharp. As a consequence, for $1\le p<n$ it gives the right constant in the first inequality in (\ref{des}) and enables us to connect (\ref{sobolev}), (\ref{HS}) and (\ref{des}). For $p=n$, Theorem \ref{relation} and its Corollary \ref{cor} connect (\ref{BMR}) and (\ref{hs}), improving the first inequality in (\ref{BMR}). In the case $p>n$ it links the first inequality in (\ref{des}) and (\ref{sobolev}) (thanks to (\ref{HS}) and to the fact that they are also valid for $p>n$). Moreover, in Proposition \ref{mejora} we see how it yields lower estimates for $\mathcal{E}_p^+(f)$ that are better than (\ref{hsp}). In the third section we include a proof of the inequalities (\ref{LYZ}) and (\ref{HS}) which derives directly from Zhang's extension of the Petty projection inequality, paying the penalty of an extra constant $\frac{I_p}{I_1}$ (which is independent of the dimension $n$).
No use is made of the $L_p$-Brunn-Minkowski theory and the polytope approximation appearing in the papers \cite{LYZI}, \cite{LYZII}, \cite{CLYZ} and \cite{HS2}. \section{The results} The first result establishes the sharp relation between $\mathcal {E}_p^+(f)$ and $\Vert f\Vert_{\infty,p}$. \begin{thm}\label{relation} Let $1\le p<\infty$ and $\displaystyle \frac 1q=\frac1p-\frac1n$, $q\in (-\infty, -n)\cup[\displaystyle \frac n{n-1}, \infty]$. Then \[\mathcal {E}_p^+(f) \geq\left(1-\frac 1q\right){n\ \omega_n^{1/n} } \Vert f\Vert_{{\infty,p}}\qquad \forall f\in W^{1,p}(\mathbb R^n). \] Moreover, the constant is sharp. \end{thm} \begin{proof} Taking (\ref{HS}) into account, it is enough to see that, for any compactly supported $C^{1}$ function $f:\mathbb R^n\to \mathbb R$, \[ \mathcal{E}_p^+(f^{\circ})=\frac{2^{1/p}}{I_p} \left(\int_{S^{n-1}}\Vert D_u^+ f^\circ\Vert_p^{-n}du \right)^{-1/n}\geq \left(1-\frac 1q\right){n\ \omega_n^{1/n} } \Vert f\Vert_{\infty,p}. \] Now, $f^\circ$ is Lipschitz, $f^\ast$ is locally Lipschitz and $f^{\ast\prime}$ is integrable (see for instance \cite{K} and \cite{Mo}).
Therefore, since $$ f^{\ast\ast}(t)-f^{\ast}(t)=\frac1t\int_0^t(f^\ast(s)-f^\ast(t))ds=\frac 1t \int_0^t\int_s^t- f^{\ast\prime}(u)duds\\ =\frac 1t \int_0^ts |f^{\ast\prime}(s)|ds $$ we have $$ \Vert f^\circ\Vert_{\infty, p}^p =\Vert f\Vert_{\infty, p}^p=\int_0^\infty (f^{\ast\ast}(t)-f^{\ast}(t))^p\frac{dt}{t^{p/n}} =\int_0^\infty\left(\frac 1t \int_0^ts |f^{\ast\prime}(s)|ds \right)^p\frac{dt}{t^{p/n}}.$$ Apply Hardy's inequality $\int_0^\infty\left(\frac 1t \int_0^tg(s)ds \right)^p\frac{dt}{t^{p/n}}\le\left( \frac{p}{p+\frac{p}{n}-1}\right)^p\int_0^\infty g(s)^p \frac{ds}{s^{p/n}}$ to $g(s)=s |f^{\ast\prime}(s)|$ (see \cite{M2} for a reference on Hardy's inequalities with weights) to obtain $$\Vert f^\circ\Vert_{\infty, p}^p\le \left(\frac{1}{1-\frac1q}\right)^p\int_0^\infty s^{\frac{(n-1)p}{n}}|f^{\ast\prime}(s)|^p\ ds.$$ On the other hand, by definition of $f^\circ(x)$, $$ \langle \nabla f^\circ(x),u\rangle_+=n\omega_n|x|^{n-1}|f^{\ast\prime}(\omega_n |x|^n)|\left\langle\frac{-x}{|x|},u\right \rangle_+ $$ and so, by polar integration ($x=r\theta$) and the change of variables $s=\omega_n r^n$, \begin{equation} \label{fpolar} \Vert D_u^+f^\circ\Vert_p^p=\int_{\mathbb R^n}\langle\nabla f^\circ(x),u\rangle_+^p\, dx=\frac12I_p^p \Big(n\omega_n^{1/n}\Big)^p\int_0^\infty s^{\frac{(n-1)p}{n}}|f^{\ast\prime}(s)|^pds \end{equation} and the result follows. In order to see that the constant is sharp, consider truncations of the functions $f^\ast(t)=t^{-1/q}$ for $p<n$, $f^\ast(t)=\log(1/t)$ for $p=n$, and $f^\ast(t)=(1-t^{-1/q})\chi_{[0,1]}(t)$ for $p>n$. \end{proof} Straightforward computations show that $f^{\ast}$, and therefore $\Vert f\Vert_{\infty, p}$, are invariant under transformations of $\mathbb R^n$ of the form $x\to x_0+Ax$, $x_0\in\mathbb R^n$, $A\in SL(n)$. Consequently, the inequality in Theorem \ref{relation} is affine-invariant. \begin{cor}\label{cor} Let $1\leq p<\infty$.
For any $f\in W^{1,p}(\mathbb R^n)$ \[\Vert \nabla f\Vert_p\ge n\ \omega_n^{1/n} \left(1-\frac 1q\right)\Vert f\Vert_{\infty, p}. \] In particular, for $p=n$ we have $\Vert \nabla f\Vert_n\ge n\ \omega_n^{1/n} \Vert f\Vert_{\infty, n}$. \end{cor} \begin{rmk} The result refines the inequalities (\ref{des}), (\ref{BMR}) and (\ref{hs}). \end{rmk} \begin{proof} This follows from the previous theorem and the facts stated in the introduction: $$ \left(1-\frac 1q\right)n\ \omega_n^{1/n}\Vert f\Vert_{\infty, p}\le \mathcal{E}^{+}_p(f^{\circ}) \le \mathcal{E}_p(f^\circ) \le \mathcal{E}_p(f)\le \Vert\nabla f\Vert_p.$$ \end{proof} We pass to the case $p>n$. As we said in the introduction, we shall see how Theorem \ref{relation} provides better estimates than inequality (\ref{hsp}). \begin{proposition}\label{mejora} Let $p>n$, $\displaystyle\frac{1}{q}=\frac{1}{p}-\frac{1}{n}$ and let $f$ be a compactly supported $C^{1}$ function. Then \[\sup_{t>0}\{ \left( \Vert f\Vert_\infty-f^\ast(t)\right)t^{1/q}\} \leq \alpha_{n,p}\Vert f\Vert_{\infty,p} \] for some $\alpha_{n,p}>0$ (independent of the support of $f$). \end{proposition} \begin{rmk} The proof gives $\alpha_{n,p}=\left(\big(p (1-\frac1{q})\big)^{p'/p}+\frac{|q|}{p'}\right)^\frac{1}{p'}$. \end{rmk} \begin{proof} For any $t>0$ we have \begin{eqnarray*} \Vert f\Vert_\infty\!\!\!\! &-&\!\!\!\!f^\ast(t)=f^{\ast\ast}(0)-f^{\ast\ast}(t)+f^{\ast\ast}(t)-f^\ast(t)= \int_0^t\!-f^{\ast\ast\prime}(u)du+f^{\ast\ast}(t)-f^\ast(t)\cr &=&\int_0^t\frac{f^{\ast\ast}(u)-f^\ast(u)}{u}du+f^{\ast\ast}(t)-f^\ast(t)\cr &=&\int_0^t\frac{f^{\ast\ast}(u)-f^\ast(u)}{u}du\left(\frac{p'}{|q|}\right)^\frac{1}{p'} \left(\frac{|q|}{p'}\right)^\frac{1}{p'} +\frac{f^{\ast\ast}(t)-f^\ast(t)}{\left(p\big(1-\frac{1}{q}\big)\right)^\frac{1}{p}} \left(p\big(1-\frac{1}{q}\big)\right)^\frac{1}{p}.
\end{eqnarray*} By H\"{o}lder's inequality, the latter expression is bounded from above by $$ \alpha_{n,p}\left(\left(\int_0^t\frac{f^{\ast\ast}(u)-f^\ast(u)}{u}du \right)^p\left(\frac{p'}{|q|}\right)^\frac{p}{p'} +\frac{\left(f^{\ast\ast}(t)-f^\ast(t)\right)^p}{p\big(1-\frac{1}{q}\big)}\right)^\frac{1}{p}. $$ Now, on the one hand, for any $s>t>0$ we have, after integrating by parts, \begin{align*} f^{\ast\ast}(s)-f^\ast(s)&=\frac 1{s}\int_0^{s} u|f^{\ast\prime}(u)|du \geq \frac 1{s}\int_0^t u|f^{\ast\prime}(u)|du=\frac{t}{s} (f^{\ast\ast}(t)-f^\ast(t)). \end{align*} Hence $$ \int_t^{\infty}(f^{\ast\ast}(s)-f^\ast(s))^ps^{-p/n}ds \ge t^{p}(f^{\ast\ast}(t)-f^\ast(t))^p\int_t^{\infty}\frac{ds}{s^{p+\frac{p}{n}}} $$ and consequently $$ \frac{\left(f^{\ast\ast}(t)-f^\ast(t)\right)^p}{p\big(1-\frac{1}{q}\big)}\leq t^{-\frac{p}{q}}\int_t^\infty(f^{\ast\ast}(s)-f^\ast(s))^ps^{-p/n}ds. $$ On the other hand, by H\"{o}lder's inequality and since $p>n$, $$ \int_0^t\frac{f^{\ast\ast}(s)-f^\ast(s)}{s}\,ds\le \left(\int_0^t s^{(\frac1{n}-1)p'}ds\right)^{1/p'} \left(\int_0^t(f^{\ast\ast}(s)-f^\ast(s))^ps^{-p/n}ds\right)^{1/p} $$ which implies $$ \left(\int_0^t\frac{f^{\ast\ast}(s)-f^\ast(s)}{s}\,ds\right)^p\left(\frac{p'}{|q|}\right)^{\frac{p}{p'}}t^{\frac{p}{q}}\leq \int_0^t(f^{\ast\ast}(s)-f^\ast(s))^ps^{-p/n}ds. $$ Thus, for any $t>0$ we have $$t^\frac{1}{q}(\Vert f\Vert_\infty-f^\ast(t))\leq\alpha_{n,p}\Vert f\Vert_{\infty,p}$$ which finishes the proof. \end{proof} \begin{rmk} $f^\ast(|{\text {supp}\,f}|_n)=0$ implies $\displaystyle \Vert f\Vert_\infty |{\text {supp}\, f}|_n^{1/q}\le\sup_{t>0}\{ \left( \Vert f\Vert_\infty-f^\ast(t)\right)t^{1/q}\}$ and so Proposition \ref{mejora} shows that Theorem \ref{relation} is, up to a constant, better than (\ref{hsp}). The example $f^\ast(t)=(1-t^{-1/q})\chi_{[0,1]}(t)$ satisfies $\displaystyle\sup_{t>0}\{ \left( \Vert f\Vert_\infty{-}f^\ast(t)\right)t^{1/q}\}=1$ while $\Vert f\Vert_{\infty, p}=\infty$.
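For instance, with $n=2$ and $p=4$ (so that $q=-4$), this behaviour can be checked numerically. The following sketch is only an illustration (the discretization and helper names are ours; only NumPy is assumed): the supremum equals $1$, while the partial integrals defining $\Vert f\Vert_{\infty,4}^4$ grow without bound, like $\frac{1}{625}\log(1/\varepsilon)$.

```python
import numpy as np

# Hedged check of the remark's example for n = 2, p = 4 (so q = -4):
# f^*(t) = 1 - t^{1/4} on (0,1], and f^{**}(t) - f^*(t) = t^{1/4}/5 there.
n, p = 2, 4
q = 1.0 / (1.0 / p - 1.0 / n)                       # q = -4

t = np.linspace(1e-9, 1.0, 10**6)
fstar = 1.0 - t ** (-1.0 / q)                       # f^*(t) = 1 - t^{1/4}
sup_term = np.max((1.0 - fstar) * t ** (1.0 / q))   # (||f||_inf - f^*(t)) t^{1/q}

def trap(y, x):                                     # trapezoidal rule on a grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# the integrand of ||f||_{inf,p}^p equals (s^{1/4}/5)^4 s^{-2} = 1/(625 s) on (0,1]
vals = []
for eps in (1e-3, 1e-6, 1e-9):
    s = np.logspace(np.log10(eps), 0.0, 200001)
    vals.append(trap((s ** 0.25 / 5.0) ** p * s ** (-p / n), s))

assert abs(sup_term - 1.0) < 1e-9                   # the supremum equals 1
# every three further decades add about log(10^3)/625 ~ 0.011: no finite limit
assert vals[1] - vals[0] > 0.009 and vals[2] - vals[1] > 0.009
```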
\end{rmk} \section{A simplified approach to affine Sobolev inequalities} In this part we will show a direct way to deduce the key inequalities (\ref{LYZ}) and (\ref{HS}) from Zhang's inequality, paying a penalty on the constant. We use ideas similar to those appearing in \cite{Mo}, which prove the Polya-Szeg\"o rearrangement inequality from the isoperimetric inequality. \begin{proposition} Let $1\leq p<\infty$. For all $f\in W^{1,p}(\mathbb R^n)$ \[ \mathcal{E}_p(f^\circ)\leq \frac{I_p}{I_1}\mathcal{E}_p\, (f) \qquad {\rm and}\qquad \mathcal{E}_p^+(f^\circ)\leq \frac{I_p}{I_1}\,\mathcal{E}_p^+(f). \] \end{proposition} \begin{rmk} It is well known that $C_1\sqrt p\le\frac{I_p}{I_1}\le C_2\sqrt p$ with $C_1, C_2$ absolute constants. \end{rmk} \begin{proof} Suppose $f$ is a ${\mathcal C}^1$ function of compact support. Let $\Phi(t)$ represent either $|t|$ or $\max\{t,0\}$. By Sard's theorem, for almost all $t>0$ the level set $\{|f|\ge t\}$ is compact with ${\mathcal C}^1$ boundary $\{|f|=t\}$ and $\nabla f(x)\ne 0$ for all $x\in \{|f|=t\}$. By Federer's co-area formula \[\int_{\mathbb R^n}\Phi(\langle \nabla f(x), u\rangle)^pdx =\int_0^\infty\left(\int_{\{|f|=t\}}\Phi(\langle \nabla f(x), u\rangle)^pd\mu(x)\right)dt\] where, for almost every $t>0$, $ d\mu(x)=\displaystyle\frac{dH_{n-1}(x)}{|\nabla f(x)|} $, with $dH_{n-1}$ denoting the corresponding Hausdorff measure on $\{|f|=t\}$. Next we use Jensen's inequality: \begin{align*} \int_{\mathbb R^n}\!\!\!\Phi(\langle \nabla f(x), u\rangle)^pdx&\ge \int_0^\infty\!\!\!\left(\int_{\{|f|=t\}}\!\!\!\!\!\!\Phi(\langle \nabla f(x), u\rangle)\frac{d\mu(x)}{\int_{\{|f|=t\}}d\mu(x)} \right)^p \left(\int_{\{|f|=t\}}\!\!\!\!\!\!\!d\mu(x)\right)dt\\ &= \int_0^\infty\left(\int_{\{|f|=t\}}\!\!\!\!\Phi(\langle \nabla f(x), u\rangle) d\mu(x) \right)^p \left(\int_{\{|f|=t\}}\!\!\!\!d\mu(x)\right)^{1-p}dt. \end{align*} Denote $\displaystyle M(t)= \int_{\{|f|\ge t\}}\!\!\!\!dx$.
For almost every $t>0$, another use of the co-area formula yields \[ \int_{\{|f|=t\}}d\mu(x)=\left(-\int_t^\infty \left(\int_{\{|f|=s\}}\frac{dH_{n-1}(x)}{|\nabla f(x)|}\right)ds\right)^\prime=-M^\prime(t)=|M^\prime(t)| \] and so \begin{align*} & \left(\int_{S^{n-1}}\left(\int_{\mathbb R^n}\Phi(\langle \nabla f(x), u\rangle)^pdx\right)^{-n/p} du\right)^{-p/n}\\ &\geq \left(\int_{S^{n-1}}\left(\int_0^\infty\left(\int_{\{|f|=t\}}\Phi(\langle \nabla f(x), u\rangle) d\mu(x) \right)^p |M^\prime(t)|^{1-p}dt \right)^{-n/p}du\right)^{-p/n}\!\!\!.\end{align*} Use Minkowski's integral inequality to bound the previous formula from below: \begin{align*} &\geq \int_0^\infty \left(\int_{S^{n-1}}\left(\int_{\{|f|=t\}}\Phi(\langle \nabla f(x), u\rangle) d\mu(x) \right)^{-n}du \right)^ {-p/n}|M^\prime(t)|^{1-p}dt\\ & = \int_0^\infty \left(\int_{S^{n-1}}\left(\int_{\{|f|=t\}}\Phi(\langle \nu(x), u\rangle)dH_{n-1}(x)\right)^{-n}du \right)^ {-p/n}|M^\prime(t)|^{1-p}dt\end{align*} where, for a.e. $t>0$, $\nu(x)$ is the outer normal vector to $\{|f|=t\}$ (w.r.t. $\{|f|\ge t\}$) at the point $x$. For every ``good'' $t>0$ from Sard's theorem, one can easily see (cf. \cite{Z}) that the linear functional $g\in\mathcal{C}(S^{n-1})\mapsto \int_{\{|f|=t\}} g(\nu(x))dH_{n-1}(x)$ can be represented by a finite measure $d\mu_t$ on $S^{n-1}$. That is, \[\int_{S^{n-1}}g(v)d\mu_t(v)=\int_{\{|f|=t\}} g(\nu(x))dH_{n-1}(x)\hskip 1truecm \forall\, g\in\mathcal{C}(S^{n-1}).\] Recall (see for instance \cite{S}) that every convex body $K\subset\mathbb R^n$ determines a {\sl surface area measure} on $S^{n-1}$ denoted by $S_K$. It is also proved in \cite{Z} that, by the Minkowski existence theorem (see \cite{S}), there exists a convex body $K_t$ in $\mathbb R^n$, unique up to translations, whose surface area measure $S_{K_t}$ is $\mu_t$. For this reason $\mu_t$ is also called the surface area measure of (the compact set) $\{|f|\ge t\}$. Let $\Pi K_t$ be the projection body associated to $K_t$, i.e.
the convex body defined by its support function as \[ h(\Pi K_t,u)=\frac12\int_{S^{n-1}}|\langle v, u\rangle|dS_{K_t}(v)=|P_{u^\perp}(K_t)|_{n-1}, \qquad u\in S^{n-1}. \] Let $\Pi^\ast K_t$ be the polar projection body of $K_t$. Its volume is \begin{align*} 2^{-n}|\Pi^\ast K_t|_n & =\int_{S^{n-1}}h(\Pi K_t,u)^{-n}du= \int_{S^{n-1}}\left(\int_{S^{n-1}}|\langle u, v\rangle|dS_{K_t}(v)\right)^{-n}du\\ &= \int_{S^{n-1}}\left(\int_{\{|f|=t\}}|\langle \nu(x), u\rangle|dH_{n-1}(x)\right)^{-n}du. \end{align*} Finally, Petty's projection inequality (\ref{Petty}) and the fact proved in \cite{Z}, $M(t)=|\{|f|\geq t\}|_n\leq |K_t|_n$, show that for almost all $t>0$ \begin{align*} \left(\int_{S^{n-1}}\left(\int_{\{|f|=t\}}|\langle \nu(x), u\rangle|dH_{n-1}(x) \right)^{-n}du\right)^ {-1/n}\geq 2\frac{\omega_{n-1}}{\omega_n^{1-1/n}} M(t)^{\frac{(n-1)}{n}}. \end{align*} Consider the case $\Phi(t)=|t|$. $M$ and $f^{\ast}$ are differentiable (except, possibly, on some set $N$ of zero measure) and $M=(f^{\ast})^{-1}$ on the set $(f^{\ast})^{-1}\big((0,\infty)\setminus N\big)$, therefore \begin{align*} \left(\int_{S^{n-1}}\Vert D_uf\Vert_p^{-n}du\right)^{-p/n} &\geq 2^{p}\left(\frac{\omega_{n-1}}{\omega_n^{1-1/n}}\right)^p \int_0^\infty M(t)^{\frac{(n-1)p}{n}}|M^\prime(t)|^{1-p}dt\\ &=\left(\frac{2\omega_{n-1}}{\omega_n^{1-1/n}}\right)^p\int_0^\infty s^{\frac{p(n-1)}{n}}|f^{\ast\prime}(s)|^pds. \end{align*} In the other case, $\Phi(t)=\max\{ t,0\}$, since $\int_{S^{n-1}}\langle u,v\rangle d\mu_t(v)=0$ we have $$ h(\Pi K_t,u)=\frac12\int_{S^{n-1}}|\langle v, u\rangle|d\mu_t(v) =\int_{S^{n-1}}\langle v, u\rangle_+d\mu_t(v). $$ Hence \begin{align*} \left(\int_{S^{n-1}}\Vert D_u^+f\Vert_p^{-n}du\right)^{-p/n} &\geq \left(\frac{\omega_{n-1}}{\omega_n^{1-1/n}}\right)^p\int_0^\infty s^{\frac{p(n-1)}{n}}|f^{\ast\prime}(s)|^pds. 
\end{align*} Finally we recall (\ref{fpolar}) and analogously \[ \left(\int_{S^{n-1}}\Vert D_uf^\circ\Vert_p^{-n}du\right)^{-p/n}= \Big(n\omega_n^{1/n}\Big)^p\left(\int_0^\infty s^{\frac{(n-1)p}{n}}|f^{\ast\prime}(s)|^pds \right)I_p^p. \] Therefore \[\mathcal{E}_p(f^\circ)\leq \frac{n\omega_n}{2\omega_{n-1}} I_p\mathcal{E}_p(f)=\frac{I_p}{I_1}\mathcal{E}_p(f)\quad \text{and}\quad \mathcal{E}_p^+(f^\circ)\leq \frac{n\omega_n}{2\omega_{n-1}} I_p\mathcal{E}_p^+(f)=\frac{I_p}{I_1}\mathcal{E}_p^+(f).\] \end{proof} {\bf Acknowledgements.} We thank the referee for useful comments that helped to improve the presentation of the manuscript. \end{document}
\begin{document} \title{\centerline{A class of bridges of iterated integrals of Brownian motion} \centerline{related to various boundary value problems involving} \centerline{the one-dimensional polyharmonic operator}} \titlerunning{Bridges of iterated integrals of Brownian motion related to various boundary value problems} \author{\centerline{Aim\'e LACHAL}} \institute{ \textsc{Institut National des Sciences Appliqu\'ees de Lyon}\\ P\^ole de Math\'ematiques/Institut Camille Jordan CNRS UMR5208\\ B\^atiment L\'eonard de Vinci, 20 avenue Albert Einstein\\ 69621 Villeurbanne Cedex, \textsc{France}\\ \email{[email protected]}\\ Web page: http://maths.insa-lyon.fr/$\mbox{}^{\sim}$lachal } \maketitle \begin{abstract} Let $(B(t))_{t\in [0,1]}$ be the linear Brownian motion and $(X_n(t))_{t\in [0,1]}$ be the $(n-1)$-fold integral of Brownian motion, $n$ being a positive integer: $$ X_n(t)=\int_0^t \frac{(t-s)^{n-1}}{(n-1)!} \,\dd B(s)\quad\mbox{for any $t\in[0,1]$.} $$ In this paper we construct several bridges between times $0$ and $1$ of the process $(X_n(t))_{t\in [0,1]}$ involving conditions on the successive derivatives of $X_n$ at times $0$ and $1$. For this family of bridges, we establish a correspondence with certain boundary value problems related to the one-dimensional polyharmonic operator. We also study the classical problem of prediction. Our results involve various Hermite interpolation polynomials. \end{abstract} \keywords{Bridges \and Gaussian processes \and Prediction \and Boundary value problems} \subclass{Primary 60G15 \and 60G25 \and Secondary 60J65} \section{Introduction} Throughout the paper, we shall denote, for any sufficiently differentiable function $f$, its $i$-th derivative by $f^{(i)}$ or $\dd^i f/\dd t^i$. Let $(B(t))_{t\in [0,1]}$ be the linear Brownian motion started at $0$ and $(\beta(t))_{t\in [0,1]}$ be the linear Brownian bridge within the time interval $[0,1]$: $(\beta(t))_{t\in [0,1]}=(B(t)|B(0)=B(1)=0)_{t\in [0,1]}$.
These processes are Gaussian processes with covariance functions $$ c_{_B}(s,t)=s\wedge t\quad\mbox{and}\quad c_{_{\beta}}(s,t)=s\wedge t-st. $$ For a given continuous function $u$, the functions $v_{_B}$ and $v_{_{\beta}}$ respectively defined on $[0,1]$ by $$ v_{_B}(t)=\int_0^1 c_{_B}(s,t)u(s)\,\dd s\quad\mbox{and}\quad v_{_{\beta}}(t)=\int_0^1 c_{_{\beta}}(s,t)u(s)\,\dd s $$ are the solutions of the respective boundary value problems on $[0,1]$: $$ \begin{cases} v_{_B}''=-u, \\ v_{_B}(0)=v_{_B}'(1)=0, \end{cases} \quad\mbox{and}\quad \begin{cases} v_{_{\beta}}''=-u, \\ v_{_{\beta}}(0)=v_{_{\beta}}(1)=0. \end{cases} $$ Observe that the differential equations are the same in both cases. Only the boundary conditions differ. They are Dirichlet-type boundary conditions for Brownian bridge while they are Dirichlet/Neumann-type boundary conditions for Brownian motion. These well-known connections can be extended to the polyharmonic operator $d^{2n}/dt^{2n}$ where $n$ is a positive integer. This latter is associated with the $(n-1)$-fold integral of Brownian motion $(X_n(t))_{t\in[0,1]}$: $$ X_n(t)=\int_0^t \frac{(t-s)^{n-1}}{(n-1)!} \,\dd B(s) \quad\mbox{for any $t\in[0,1]$.} $$ (Notice that all of the derivatives at time~$0$ naturally vanish: $X_n(0)=X_{n-1}(0)=\dots=X_2(0)=X_1(0)=0$.) 
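Before passing to the polyharmonic case, the two $n=1$ correspondences just recalled can be verified numerically. The sketch below is a hedged illustration (the test datum $u$ and the discretization are ours): it compares $v_{_B}$ and $v_{_\beta}$, computed from the covariance kernels, with the closed-form solutions of the two boundary value problems.

```python
import numpy as np

# Hedged numerical check (discretization ours) of the two n = 1 correspondences:
# v_B'' = -u with v_B(0) = v_B'(1) = 0, and v_beta'' = -u with v_beta(0) = v_beta(1) = 0.
u = lambda s: np.cos(3.0 * s)                  # arbitrary smooth test datum

s = np.linspace(0.0, 1.0, 2001)
h = s[1] - s[0]

def trap(y):                                   # trapezoidal rule on the grid s
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

v_B    = np.array([trap(np.minimum(s, t) * u(s)) for t in s])
v_beta = np.array([trap((np.minimum(s, t) - s * t) * u(s)) for t in s])

# closed-form solutions of the two boundary value problems with u(s) = cos(3s)
exact_B    = (np.cos(3.0 * s) - 1.0) / 9.0 + s * np.sin(3.0) / 3.0
exact_beta = (np.cos(3.0 * s) - 1.0) / 9.0 + s * (1.0 - np.cos(3.0)) / 9.0

assert np.max(np.abs(v_B - exact_B)) < 1e-5
assert np.max(np.abs(v_beta - exact_beta)) < 1e-5
```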
Indeed, the following facts, for instance, are known (see, e.g., \cite{bahadur} and \cite{bridge}): \begin{itemize} \item The covariance function of the process $(X_n(t))_{t\in[0,1]}$ coincides with the Green function of the boundary value problem $$ \begin{cases} v^{(2n)}=(-1)^nu\quad\mbox{on }[0,1], \\ v(0)=v'(0)=\dots=v^{(n-1)}(0)=0, \\ v^{(n)}(1)=v^{(n+1)}(1)=\dots=v^{(2n-1)}(1)=0; \end{cases} $$ \item The covariance function of the bridge $(X_n(t)|X_n(1)=0)_{t\in[0,1]}$ coincides with the Green function of the boundary value problem $$ \begin{cases} v^{(2n)}=(-1)^nu\quad\mbox{on }[0,1], \\ v(0)=v'(0)=\dots=v^{(n-1)}(0)=0, \\ v(1)=v^{(n)}(1)=\dots=v^{(2n-2)}(1)=0; \end{cases} $$ \item The covariance function of the bridge $(X_n(t)|X_n(1)=X_{n-1}(1) =\dots=X_1(1)=0)_{t\in[0,1]}$ coincides with the Green function of the boundary value problem $$ \begin{cases} v^{(2n)}=(-1)^nu\quad\mbox{on }[0,1], \\ v(0)=v'(0)=\dots=v^{(n-1)}(0)=0, \\ v(1)=v'(1)=\dots=v^{(n-1)}(1)=0. \end{cases} $$ \end{itemize} Observe that the differential equations and the boundary conditions at~$0$ are the same in all cases. Only the boundary conditions at~$1$ differ. Other boundary value problems can be found in~\cite{hn1} and~\cite{hn2}. We refer the reader to~\cite{dolph} for a pioneering work dealing with the connections between general Gaussian processes and Green functions; see also~\cite{carraro}. We also refer to~\cite{chen},~\cite{ptreg},~\cite{lli},~\cite{lin1},~\cite{nik},~\cite{lin2} and the references therein for various properties, namely asymptotic studies, of the iterated integrals of Brownian motion, as well as to~\cite{hn1},~\cite{hn2},~\cite{wah1} and~\cite{wah2} for interesting applications of these processes to statistics.
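The first of these correspondences can likewise be checked numerically for $n=2$. The following sketch is a hedged illustration (the datum $u\equiv 1$ and the discretization are ours); it uses the elementary computation $c_{_{X_2}}(s,t)=\int_0^{s\wedge t}(s-w)(t-w)\,\dd w=\frac{(s\wedge t)^2(s\vee t)}{2}-\frac{(s\wedge t)^3}{6}$.

```python
import numpy as np

# Hedged check of the first correspondence for n = 2 (datum u = 1; grids ours):
# with c_{X_2}(s,t) = (s^t)^2 (svt)/2 - (s^t)^3/6 (s^t = min, svt = max),
# v(t) = \int_0^1 c_{X_2}(s,t) ds should solve v'''' = u with
# v(0) = v'(0) = 0 and v''(1) = v'''(1) = 0.

def c_X2(s, t):
    a, b = np.minimum(s, t), np.maximum(s, t)
    return a**2 * b / 2.0 - a**3 / 6.0

t = np.linspace(0.0, 1.0, 101)
s = np.linspace(0.0, 1.0, 100001)
h = s[1] - s[0]
V = np.array([h * (np.sum(c_X2(s, tt)) - 0.5 * (c_X2(s[0], tt) + c_X2(s[-1], tt)))
              for tt in t])                      # trapezoidal rule in s

# unique solution of v'''' = 1 with these four boundary conditions
v_exact = t**4 / 24.0 - t**3 / 6.0 + t**2 / 4.0
assert np.max(np.abs(V - v_exact)) < 1e-8
```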
The aim of this work is to examine all the possible conditioned processes of $(X_n(t))_{t\in[0,1]}$ involving different events at time~$1$: $$ (X_n(t)|X_{j_1}(1)=X_{j_2}(1)=\dots=X_{j_m}(1)=0)_{t\in[0,1]} $$ for a certain number $m$ of events, $1\le m\le n$, and certain indices $j_1,j_2,\dots,j_m$ such that $1\le j_1<j_2<\dots<j_m\le n$, and to make the connection with the boundary value problems: \begin{align*} \begin{cases} v^{(2n)}=(-1)^nu\quad\mbox{on }[0,1], \\ v(0)=v'(0)=\dots=v^{(n-1)}(0)=0, \\ v^{(i_1)}(1)=v^{(i_2)}(1)=\dots=v^{(i_n)}(1)=0 \end{cases} \\[-7ex] \noalign{ (BVP)} \end{align*} for certain indices $i_1,i_2,\dots,i_n$ such that $0\le i_1<i_2<\dots<i_n\le 2n-1$. Actually, we shall see that this connection does not recover all the possible boundary value problems, and we shall characterize those sets of indices for which such a connection exists. The paper is organized as follows. In Section~\ref{sect-gaussian}, we exhibit the relationships between general Gaussian processes and Green functions of certain boundary value problems. In Section~\ref{sect-iteratedIBM}, we consider the iterated integrals of Brownian motion. In Section~\ref{sect-bridges}, we construct several bridges associated with the foregoing processes and depict explicitly their connections with the polyharmonic operator together with various boundary conditions. One of the main results is Theorem~\ref{th-BVP-gene}. Moreover, we exhibit several interesting properties of the bridges (Theorems~\ref{th-drift} and \ref{th-decompo-cov}) and solve the prediction problem (Theorem~\ref{th-prediction}). In Section~\ref{sect-IBM}, we illustrate the previous results in the case $n=2$ related to integrated Brownian motion. Finally, in Section~\ref{sect-general}, we give a characterization for the Green function of the boundary value problem (BVP) to be a covariance function. Another main result is Theorem~\ref{Green-sym}.
\section{Gaussian processes and Green functions}\label{sect-gaussian} We consider an $n$-Markov Gaussian process $(X(t))_{t\in[0,1]}$ evolving on the real line $\mathbb{R}$. By ``$n$-Markov'', it is understood that the trajectory $t\mapsto X(t)$ is $(n-1)$ times differentiable and the $n$-dimensional process $(X(t),X'(t),\dots,X^{(n-1)}(t))_{t\in[0,1]}$ is a Markov process. Let us introduce the covariance function of $(X(t))_{t\in[0,1]}$: for $s,t\in[0,1]$, $c_{_X}(s,t)=\mathbb{E}[X(s)X(t)]$. It is known (see~\cite{carraro}) that the function $c_{_X}$ admits the following representation: \begin{equation}\label{cov-gene} c_{_X}(s,t)=\sum_{k=0}^{n-1} \varphi_k(s\wedge t) \psi_k(s\vee t) \end{equation} where $\varphi_k,\psi_k$, $k\in\{0,1,\dots,n-1\}$, are certain functions. Let $\mathcal{D}_0,\mathcal{D}_1$ be linear differential operators of order less than $p$ and let $\mathcal{D}$ be a linear differential operator of order $p$ defined by $$ \mathcal{D}=\sum_{i=0}^{p} \alpha_i\frac{\dd^i}{\dd t^i} $$ where $\alpha_0,\alpha_1,\dots,\alpha_p$ are continuous functions on $[0,1]$. More precisely, we have for any $p$ times differentiable function $f$ and any $t\in[0,1]$, $$ (\mathcal{D} f)(t)=\sum_{i=0}^{p} \alpha_i(t) f^{(i)}(t). $$ \begin{theorem}\label{th-gene} Assume that the functions $\varphi_k,\psi_k$, $k\in\{0,1,\dots,n-1\}$, are $p$~times differentiable and satisfy the following conditions, for a certain constant $\kappa$: \begin{equation}\label{condition-sum} \sum_{k=0}^{n-1}\left[\varphi_k\psi_k^{(i)}-\varphi_k^{(i)}\psi_k\right]= \begin{cases} 0 &\mbox{if }\, 0\le i\le p-2,\\ \kappa & \mbox{if }\, i=p-1, \end{cases} \end{equation} \begin{equation}\label{condition-deriv} \mathcal{D}\varphi_k=\mathcal{D}\psi_k=0,\quad(\mathcal{D}_0 \varphi_k)(0)=0, \quad (\mathcal{D}_1 \psi_k)(1)=0.
\end{equation} Then, for any continuous function $u$ on $[0,1]$, the function $v$ defined on $[0,1]$ by $$ v(t)=\int_0^1 c_{_X}(s,t)u(s)\,\dd s $$ solves the boundary value problem \begin{equation}\label{BVP-gene} \begin{cases} \mathcal{D} v= \kappa\alpha_p u, \\ (\mathcal{D}_0 v)(0)=(\mathcal{D}_1 v)(1)=0. \end{cases} \end{equation} \end{theorem} \begin{remark} If the problem~(\ref{BVP-gene}) is determining, that is, if it has a unique solution, then the covariance function $c_{_X}$ is exactly the Green function of the boundary value problem~(\ref{BVP-gene}). \end{remark} \begin{proof} In view of~(\ref{cov-gene}), the function $v$ can be written as $$ v(t)=\sum_{k=0}^{n-1}\left[\psi_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s+\varphi_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right]\!. $$ The derivative of $v$ is given by \begin{align*} v'(t) & =\sum_{k=0}^{n-1}\left[\psi'_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi_k'(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right] \end{align*} and its second order derivative, since $\sum_{k=0}^{n-1}\left[\varphi_k\psi_k'-\varphi_k'\psi_k\right]=0$, by \begin{align*} v''(t) & =\sum_{k=0}^{n-1}\left[\psi''_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi''_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s+\left[\varphi_k(t)\psi'_k(t) -\varphi'_k(t)\psi_k(t)\right]u(t)\right] \\ & =\sum_{k=0}^{n-1}\left[\psi''_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi''_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right]\!. \end{align*} More generally, because of the assumptions~(\ref{condition-sum}), we easily see that, for $i\in\{0,1,\dots, p-1\}$, \begin{align*} v^{(i)}(t) & =\sum_{k=0}^{n-1}\left[\psi^{(i)}_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi^{(i)}_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right] \end{align*} and the $p$-th order derivative of $v$ is given by \begin{align*} v^{(p)}(t) & =\sum_{k=0}^{n-1}\left[\psi^{(p)}_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi^{(p)}_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s \right.
\\ &\hphantom{=\,} \left.+\left[\varphi_k(t)\psi^{(p-1)}_k(t)-\varphi^{(p-1)}_k(t)\psi_k(t)\right]u(t)\right] \\ & =\sum_{k=0}^{n-1}\left[\psi^{(p)}_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\varphi^{(p)}_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right]+\kappa u(t). \end{align*} Actually, we have proved that, for $i\in\{0,1,\dots, p-1\}$, \begin{equation}\label{deriv-v} v^{(i)}(t)=\int_0^1 \frac{\partial^i \!c_{_X}}{\partial t^i}(s,t)\,u(s)\,\dd s. \end{equation} Finally, due to~(\ref{condition-deriv}), \begin{align*} \mathcal{D} v(t) & =\sum_{k=0}^{n-1}\left[\mathcal{D}\psi_k(t) \int_0^t \varphi_k(s)u(s)\,\dd s +\mathcal{D}\varphi_k(t) \int_t^1 \psi_k(s)u(s)\,\dd s\right]+\kappa\alpha_p u(t) =\kappa\alpha_p u(t). \end{align*} Concerning the boundary value conditions, referring to~(\ref{condition-deriv}), we similarly have $$ (\mathcal{D}_0 v)(0)=\sum_{k=0}^{n-1}(\mathcal{D}_0 \varphi_k)(0) \int_0^1 \psi_k(s)u(s)\,\dd s=0,\quad (\mathcal{D}_1 v)(1)=\sum_{k=0}^{n-1}(\mathcal{D}_1 \psi_k)(1) \int_0^1 \varphi_k(s)u(s)\,\dd s=0. $$ The proof of Theorem~\ref{th-gene} is finished. \qed \end{proof} In the next two sections, we construct processes connected to the equation $\mathcal{D} v=u$ subject to the boundary value conditions at $0$: $(\mathcal{D}^i_0 v)(0)=0$ for $i\in\{0,1,\dots,n-1\}$, and others at $1$ that will be discussed subsequently, where $\mathcal{D}$ and $\mathcal{D}^i_0$ are the differential operators ($\mathcal{D}$ being of order $p=2n$) defined by $$ \mathcal{D}=(-1)^n\frac{\dd^{2n}}{\dd t^{2n}},\quad \mathcal{D}^i_0=\frac{\dd^{i}}{\dd t^{i}}. $$ \section{The $(n-1)$-fold integral of Brownian motion}\label{sect-iteratedIBM} Let $(B(t))_{t\in [0,1]}$ be the linear Brownian motion limited to the time interval $[0,1]$ and started at $0$. We introduce the $(n-1)$-fold integral of Brownian motion: for any $t\in[0,1]$, $$ X_n(t)=\int_0^t \frac{(t-s)^{n-1}}{(n-1)!} \,\dd B(s). $$ In particular, $X_1=B$.
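The covariance function of $X_n$, computed below, has a polynomial closed form; here is a hedged numerical cross-check (the parameters and helper names are ours) of that closed form against the defining integral.

```python
import math
import numpy as np

# Hedged numerical check of the closed form (valid for s <= t)
#   c_{X_n}(s,t) = sum_{k=0}^{n-1} (-1)^{n-1-k} s^{2n-1-k}/(2n-1-k)! * t^k/k!
# against the defining integral of the covariance of X_n.

def cov_integral(n, s, t, m=200001):
    u = np.linspace(0.0, s, m)                 # the integrand is a polynomial in u
    g = (s - u)**(n - 1) * (t - u)**(n - 1) / math.factorial(n - 1)**2
    h = u[1] - u[0]
    return h * (np.sum(g) - 0.5 * (g[0] + g[-1]))   # trapezoidal rule

def cov_closed_form(n, s, t):
    return sum((-1)**(n - 1 - k) * s**(2*n - 1 - k) / math.factorial(2*n - 1 - k)
               * t**k / math.factorial(k) for k in range(n))

for n in (1, 2, 3, 4):
    a = cov_integral(n, 0.3, 0.8)
    b = cov_closed_form(n, 0.3, 0.8)
    assert abs(a - b) < 1e-9, (n, a, b)
```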
The trajectories of $(X_n(t))_{t\in [0,1]}$ are $(n-1)$ times differentiable and we have $X_n^{(i)}=X_{n-i}$ for $0\le i\le n-1$. Moreover, we have at time~$0$ the equalities $X_n(0)=X_{n-1}(0)=\dots=X_2(0)=X_1(0)=0$. The process $(X_n(t))_{t\in [0,1]}$ is an $n$-Markov Gaussian process since the $n$-dimensional process $(X_n(t),X_{n-1}(t),\dots,X_1(t))_{t\in [0,1]}$ is Markovian. The covariance function of the Gaussian process $(X_n(t))_{t\in [0,1]}$ is given by $$ c_{_{X_n}}(s,t)=\int_0^{s\wedge t} \frac{(s-u)^{n-1}}{(n-1)!} \,\frac{(t-u)^{n-1}}{(n-1)!} \,\dd u. $$ In order to apply Theorem~\ref{th-gene}, we decompose $c_{_{X_n}}$ into the form~(\ref{cov-gene}). For $s\le t$, say, we have \begin{align*} c_{_{X_n}}(s,t) & =\int_0^s \frac{(s-u)^{n-1}}{(n-1)!^2}\left[\,\sum_{k=0}^{n-1}\binom{n-1}{k} (-u)^{n-1-k}t^k\right] \dd u \\ & =\sum_{k=0}^{n-1} (-1)^{n-1-k}\,\frac{t^k}{k!} \int_0^s \frac{(s-u)^{n-1}}{(n-1)!}\, \frac{u^{n-1-k}}{(n-1-k)!}\, \dd u \\ & =\sum_{k=0}^{n-1} (-1)^{n-1-k}\,\frac{s^{2n-1-k}}{(2n-1-k)!}\,\frac{t^k}{k!}. \end{align*} We then obtain the following representation: $$ c_{_{X_n}}(s,t)=\sum_{k=0}^{n-1} \varphi_k(s) \psi_k(t) $$ with, for any $k\in\{0,1,\dots,n-1\}$, $$ \varphi_k(s)=(-1)^{n-1-k}\,\frac{s^{2n-1-k}}{(2n-1-k)!},\quad \psi_k(t)=\frac{t^k}{k!}. $$ We state below a result of~\cite{bridge} that we revisit here by using Theorem~\ref{th-gene}. \begin{theorem}\label{th-BVP-IBM} Let $u$ be a fixed continuous function on $[0,1]$. The function $v$ defined on $[0,1]$ by $$ v(t)=\int_0^1 c_{_{X_n}}(s,t)u(s)\,\dd s $$ is the solution of the boundary value problem \begin{equation}\label{BVP-IBM} \begin{cases} v^{(2n)}=(-1)^n u &\mbox{on }[0,1], \\ v^{(i)}(0)=0 &\mbox{for } i\in\{0,1,\dots,n-1\}, \\ v^{(i)}(1)=0 &\mbox{for } i\in\{n,n+1,\dots,2n-1\}. \end{cases} \end{equation} \end{theorem} \begin{proof} Let us check that the conditions~(\ref{condition-sum}) and~(\ref{condition-deriv}) of Theorem~\ref{th-gene} are fulfilled.
First, we have \begin{align*} \lqn{\sum_{k=0}^{n-1}\left[\varphi_k(t)\psi_k^{(i)}(t) -\varphi_k^{(i)}(t)\psi_k(t)\right]} & =\sum_{k=0}^{n-1} (-1)^{n-1-k}\left[1\hspace{-.27em}\mbox{\rm l}_{\{k\ge i\}} \,\frac{t^{2n-1-k}}{(2n-1-k)!}\,\frac{t^{k-i}}{(k-i)!} -1\hspace{-.27em}\mbox{\rm l}_{\{k\le 2n-1-i\}}\,\frac{t^{2n-1-i-k}}{(2n-1-i-k)!}\,\frac{t^k}{k!}\right] \\ & =(-1)^{n-1}\,\frac{t^{2n-1-i}}{(2n-1-i)!} \left[\vphantom{\sum_n^n}\right.\!1\hspace{-.27em}\mbox{\rm l}_{\{i\le n-1\}}\sum_{k=i}^{n-1} (-1)^{k}\binom{2n-1-i}{k-i} -\sum_{k=0}^{(2n-1-i)\wedge(n-1)} (-1)^{k}\binom{2n-1-i}{k}\!\!\left. \vphantom{\sum_n^n}\right] \\ & =(-1)^{n-1}\,\frac{t^{2n-1-i}}{(2n-1-i)!} \left[1\hspace{-.27em}\mbox{\rm l}_{\{i\le n-1\}}\left(\,\sum_{k=0}^{n-1-i} (-1)^{i+k}\binom{2n-1-i}{k} -\sum_{k=0}^{n-1} (-1)^{k}\binom{2n-1-i}{k}\right)\!\!\right. \\ & \hphantom{=\,}\left.-1\hspace{-.27em}\mbox{\rm l}_{\{i\ge n\}}\sum_{k=0}^{2n-1-i} (-1)^{k}\binom{2n-1-i}{k}\!\right]\!. \end{align*} Performing the transformation $k\mapsto 2n-1-i-k$ in the first sum lying within the last equality, we get \begin{align*} \lqn{\sum_{k=0}^{n-1-i} (-1)^{i+k}\binom{2n-1-i}{k} -\sum_{k=0}^{n-1} (-1)^{k}\binom{2n-1-i}{k}} & =\sum_{k=n}^{2n-1-i} (-1)^{k}\binom{2n-1-i}{2n-1-i-k} +\sum_{k=0}^{n-1} (-1)^{k}\binom{2n-1-i}{k} =\sum_{k=0}^{2n-1-i} (-1)^k\binom{2n-1-i}{k}=\delta_{i,2n-1} \end{align*} and then $$ \sum_{k=0}^{n-1}\left[\varphi_k(t)\psi_k^{(i)}(t)-\varphi_k^{(i)}(t)\psi_k(t)\right] =(-1)^n\delta_{i,2n-1}. $$ On the other hand, setting $$ \mathcal{D}=(-1)^n\frac{\dd^{2n}}{\dd t^{2n}},\quad\mathcal{D}_0^{i} =\frac{\dd^i}{\dd t^i},\quad \mathcal{D}_1^{i}=\frac{\dd^{i+n}}{\dd t^{i+n}}, $$ we clearly see that, for any $k\in\{0,1,\dots,n-1\}$, $$ \mathcal{D} \varphi_k=\mathcal{D} \psi_k=0\quad\mbox{and}\quad\mathcal{D}_0^i \varphi_k(0)=\mathcal{D}_1^i \psi_k(1)=0\quad\mbox{for } i\in\{0,1,\dots,n-1\}. 
$$ Consequently, by Theorem~\ref{th-gene}, we see that the function $v$ solves the boundary value problem~(\ref{BVP-IBM}). The uniqueness part will follow from a more general argument stated in the proof of Theorem~\ref{th-BVP-gene}. \qed \end{proof} \section{Various bridges of the $(n-1)$-fold integral of Brownian motion}\label{sect-bridges} In this section, we construct various bridges related to $(X_n(t))_{t\in [0,1]}$. More precisely, we take $(X_n(t))_{t\in [0,1]}$ conditioned on the event that certain derivatives vanish at time~$1$. Let us recall that all the derivatives at time~$0$ naturally vanish: $X_n(0)=X_{n-1}(0)=\dots=X_2(0)=X_1(0)=0$. For any $m\in\{0,1,\dots,n\}$, let $J=\{j_1,j_2,\dots,j_m\}$ be a subset of $\{1,2,\dots,n\}$ with $1\le j_1<j_2<\dots<j_m\le n$ and the convention that for $m=0$, $J=\varnothing$. We see that for each $m\in\{0,1,\dots,n\}$, we can define $\binom{n}{m}$ subsets of indices $J$, and the total number of sets $J$ is then $\sum_{m=0}^n \binom{n}{m}=2^n$. Set for any $t\in[0,1]$ $$ Y(t)=(X_n(t) | X_j(1)=0,j\in J)= (X_n(t)|X_{j_1}(1)=X_{j_2}(1)=\dots=X_{j_m}(1)=0). $$ In this way, we define $2^n$ processes $(Y(t))_{t\in[0,1]}$ that we shall call ``bridges'' of $(X_n(t))_{t\in [0,1]}$. 
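The correspondence between the conditioning sets $J$ and the index sets $I$ introduced just below can be verified mechanically for all $2^n$ choices of $J$; a hedged sketch (code ours):

```python
from itertools import combinations

# Hedged check of the correspondence between conditioning sets J and
# boundary-index sets I:
#   I = (n - J) | ({n,...,2n-1} \ (J + n - 1)),   J = (n - I) & {1,...,n}.

def I_from_J(n, J):
    return {n - j for j in J} | (set(range(n, 2 * n)) - {j + n - 1 for j in J})

def J_from_I(n, I):
    return {n - i for i in I} & set(range(1, n + 1))

checked = 0
for n in range(1, 8):
    for m in range(n + 1):
        for J in combinations(range(1, n + 1), m):
            J = set(J)
            I = I_from_J(n, J)
            assert len(I) == n              # the cardinality of I is n
            assert J_from_I(n, I) == J      # J is recovered from I
            checked += 1

assert checked == sum(2**n for n in range(1, 8))   # all 2^n subsets per n
```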
In particular, \begin{itemize} \item for $J=\varnothing$, we simply have $(Y(t))_{t\in[0,1]}=(X_n(t))_{t\in[0,1]}$; \item for $J=\{1\}$, the corresponding process is the $(n-1)$-fold integral of the Brownian bridge $$ (X_n(t) | X_1(1)=0)_{t\in[0,1]}=\left(\int_0^t \frac{(t-s)^{n-1}}{(n-1)!}\, \dd \beta(s)\right)_{t\in[0,1]}; $$ \item for $J=\{n\}$, the corresponding process is the ``single'' bridge of the $(n-1)$-fold integral of Brownian motion: $$ (X_n(t) | X_n(1)=0)_{t\in[0,1]}=\left(\int_0^t \frac{(t-s)^{n-1}}{(n-1)!}\,\dd B(s) \,\bigg| \int_0^1 \frac{(1-s)^{n-1}}{(n-1)!}\,\dd B(s)=0\right)_{t\in[0,1]}; $$ \item for $J=\{1,2,\dots,n\}$, the corresponding process is $$(X_n(t) | X_n(1)=X_{n-1}(1)=\dots=X_1(1)=0)_{t\in[0,1]}.$$ This is the natural bridge related to the $n$-dimensional Markov process $(X_n(t),X_{n-1}(t),\dots,\linebreak X_1(t))_{t\in[0,1]}.$ \end{itemize} In this section, we exhibit several interesting properties of the various processes $(Y(t))_{t\in [0,1]}$. One of the main goals is to relate these bridges to additional boundary value conditions at $1$. For this, we introduce the following subset $I$ of $\{0,1,\dots,2n-1\}$: $$ I=(n-J)\cup\left[\{n,n+1,\dots,2n-1\}\backslash(J+n-1)\right] $$ with \begin{align*} n-J & =\{n-j,j\in J\}=\{n-j_1,\dots,n-j_m\}, \\ J+n-1 & =\{j+n-1,j\in J\}=\{j_1+n-1,\dots,j_m+n-1\}. \end{align*} The cardinality of $I$ is $n$. Actually, the set $I$ will be used further for enumerating the boundary value problems which can be related to the bridges labeled by $J$. Conversely, $I$ yields $J$ through $$ J=(n-I)\cap\{1,2,\dots,n\}. $$ In the table below, we give some examples of sets $I$ and $J$.
$$ \begin{array}{|@{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|} \hline &&&& \\ I & \{0,1,\dots,n-1\} & \{n,n+1,\dots,2n-1\} & \{n-1,n+1,\dots,2n-1\} & \{0,n+1,\dots,2n-2\} \\ &&&& \\ \hline &&&& \\ J & \{1,2,\dots,n\} & \varnothing & \{1\} & \{n\} \\[-2ex] &&&& \\ \hline \end{array} $$ \subsection{Polynomial drift description} Below, we provide a representation of $(Y(t))_{t\in [0,1]}$ by means of $(X_n(t))_{t\in [0,1]}$ subject to a random polynomial drift. \begin{theorem}\label{th-drift} We have the distributional identity $$ (Y(t))_{t\in[0,1]}\stackrel{d}{=}\bigg(X_n(t)-\sum_{j\in J} P_j(t)X_j(1)\bigg)_{t\in[0,1]} $$ where the functions $P_j$, $j\in J$, are Hermite interpolation polynomials on $[0,1]$ characterized by \begin{equation}\label{condition-hermite} \begin{cases} P_j^{(2n)}=0, \\[1ex] P_j^{(i)}(0)=0 &\mbox{for } i\in\{0,1,\dots,n-1\}, \\[1ex] P_j^{(i)}(1)=\delta_{j,n-i} &\mbox{for } i\in I. \end{cases} \end{equation} \end{theorem} \begin{remark} In the case where $n=2$, we retrieve a result of~\cite{drift}. Moreover, the conditions~(\ref{condition-hermite}) characterize the polynomials $P_j$, $j\in J$. We prove this fact in Lemma~\ref{hermite} in the appendix. \end{remark} \begin{proof} By invoking classical arguments of Gaussian processes theory, we have the distributional identity $$ (Y(t))_{t\in[0,1]}\stackrel{d}{=}\bigg(X_n(t)-\sum_{j\in J} P_j(t)X_j(1)\bigg)_{t\in[0,1]} $$ where the functions $P_j$, $j\in J$, are such that $\mathbb{E}[Y(t)X_k(1)]=0$ for all $k\in J$. We get the linear system \begin{equation}\label{system} \sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j(t)=\mathbb{E}[X_n(t)X_k(1)],\quad k\in J. \end{equation} We plainly have $$ \mathbb{E}[X_j(s)X_k(t)]=\int_0^{s\wedge t} \frac{(s-u)^{j-1}}{(j-1)!}\, \frac{(t-u)^{k-1}}{(k-1)!}\,\dd u. 
$$ Then, the system~(\ref{system}) reads $$ \sum_{j\in J} \frac{1}{j+k-1}\,\frac{P_j(t)}{(j-1)!} =\int_0^t \frac{(t-u)^{n-1}}{(n-1)!}\,(1-u)^{k-1}\,\dd u. $$ The matrix of this system, $\left(1/(j+k-1)\right)_{j,k\in J}$, is invertible, as can be seen by introducing the associated quadratic form, which is positive definite. Indeed, for any real numbers $x_j$, $j\in J$, we have $$ \sum_{j,k\in J} \frac{x_jx_k}{j+k-1} =\int_0^1 \left(\vphantom{\sum_n^n}\right.\!\sum_{j,k\in J} x_jx_k u^{j+k-2}\!\!\left.\vphantom{\sum_n^n}\right)\dd u =\int_0^1 \left(\vphantom{\sum_n^n}\right.\!\sum_{j\in J} x_j u^{j-1}\!\!\left.\vphantom{\sum_n^n}\right)^{\!2}\dd u\ge 0 $$ and $$ \sum_{j,k\in J} \frac{x_jx_k}{j+k-1}=0\quad\mbox{if and only if} \quad \forall j\in J,\, x_j=0. $$ Thus, the system~(\ref{system}) has a unique solution. As a result, the $P_j$ are linear combinations of the functions $t\mapsto\int_0^t (t-u)^{n-1}(1-u)^{k-1}\,\dd u$ which are polynomials of degree less than $n+k$. Hence, $P_j$ is a polynomial of degree at most $2n-1$. We now compute the derivatives of $P_j$ at $0$ and $1$. We have $P_j^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ since the functions $t\mapsto\int_0^t (t-u)^{n-1}(1-u)^{k-1}\,\dd u$ plainly enjoy this property.
For checking that $P_j^{(i)}(1)=\delta_{j,n-i}$ for $i\in I$, we successively compute \begin{itemize} \item for $i\in\{0,1,\dots,n-1\}$, $$ \frac{\dd^i}{\dd t^i} \,\mathbb{E}[X_n(t)X_k(1)]=\mathbb{E}[X_{n-i}(t)X_k(1)]; $$ \item for $i=n-1$, $$ \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)]=\mathbb{E}[B(t)X_k(1)] =\int_0^t \frac{(1-u)^{k-1}}{(k-1)!}\,\dd u; $$ \item for $i=n$, $$ \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)]=\frac{(1-t)^{k-1}}{(k-1)!}; $$ \item for $i\in\{n,n+1,\dots,n+k-1\}$, $$ \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)] =(-1)^{i+n}\,\frac{(1-t)^{k+n-i-1}}{(k+n-i-1)!}; $$ \item for $i=k+n-1$, $$ \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)]=(-1)^{k-1}; $$ \item for $i\ge k+n$, $$ \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)]=0. $$ \end{itemize} Consequently, at time~$t=1$, for $i\ge n$, \begin{equation}\label{deriv-cov} \frac{\dd^i}{\dd t^i}\,\mathbb{E}[X_n(t)X_k(1)]\Big|_{t=1}=(-1)^{k-1}\delta_{i,k+n-1}. \end{equation} Now, by differentiating~(\ref{system}), we get for $i\in\{0,1,\dots,n-1\}$, $$ \sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)] \, P_j^{(i)}(1)=\mathbb{E}[X_{n-i}(1)X_k(1)],\quad k\in J. $$ In particular, if $i\in (n-J)$, this can be rewritten as $$ \sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j^{(i)}(1) =\sum_{j\in J} \delta_{j,n-i}\,\mathbb{E}[X_j(1)X_k(1)],\quad k\in J, $$ which by identification yields $P_j^{(i)}(1)=\delta_{j,n-i}$. Similarly, for $i\in\{n,n+1,\dots,2n-1\}$, in view of~(\ref{deriv-cov}), we have $$ \sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j^{(i)}(1) =(-1)^{k-1}\delta_{i,k+n-1},\quad k\in J. $$ In particular, if $i\in\{n,n+1,\dots,2n-1\}\backslash(J+n-1)$, we have $\delta_{i,k+n-1}=0$ for $k\in J$, and then $$ \sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j^{(i)}(1)=0,\quad k\in J, $$ which by identification yields $P_j^{(i)}(1)=0=\delta_{j,n-i}$. The proof of Theorem~\ref{th-drift} is finished. 
\qed \end{proof} \subsection{Covariance function} Let $c_{_Y}$ be the covariance function of $(Y(t))_{t\in[0,1]}$: $c_{_Y}(s,t)=\mathbb{E}[Y(s)Y(t)]$. In the next theorem, we supply a representation of $c_{_Y}$ of the form~(\ref{cov-gene}). \begin{theorem}\label{th-decompo-cov} The covariance function of $(Y(t))_{t\in[0,1]}$ admits the following representation: for $s,t\in[0,1]$, $$ c_{_Y}(s,t)=\sum_{k=0}^{n-1} \varphi_k(s\wedge t) \tilde{\psi}_k(s\vee t) $$ with, for any $k\in\{0,1,\dots,n-1\}$, $$ \tilde{\psi}_k(t)=\psi_k(t)-\sum_{j\in J}\psi_k^{(n-j)}(1) P_j(t). $$ Moreover, the functions $\tilde{\psi}_k$, $k\in\{0,1,\dots,n-1\}$, are Hermite interpolation polynomials such that $$ \begin{cases} \tilde{\psi}_k^{(2n)}=0, \\[1ex] \tilde{\psi}_k^{(i)}(0)=\delta_{i,k} &\mbox{for }i\in\{0,1,\dots,n-1\}, \\[1ex] \tilde{\psi}_k^{(i)}(1)=0 &\mbox{for }i\in I. \end{cases} $$ \end{theorem} \begin{proof} We decompose $Y(t)$ into the difference $$ Y(t)=X_n(t)-Z(t) \quad\mbox{ with }\quad Z(t)=\sum_{j\in J} P_j(t)X_j(1). $$ We have \begin{align*} c_{_Y}(s,t) & =\mathbb{E}[X_n(s)X_n(t)]+\mathbb{E}[Z(s)Z(t)]-\mathbb{E}[X_n(s)Z(t)]-\mathbb{E}[Z(s)X_n(t)] \\ & =c_{_{X_n}}(s,t)+\sum_{j,k\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j(s)P_k(t) \\ & \hphantom{=\,}-\sum_{j\in J} \mathbb{E}[X_n(t)X_j(1)]\,P_j(s) -\sum_{j\in J} \mathbb{E}[X_n(s)X_j(1)]\,P_j(t). \end{align*} By definition~(\ref{system}) of the $P_j$'s, we observe that \begin{align*} \lqn{\sum_{j,k\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j(s)P_k(t) -\sum_{k\in J} \mathbb{E}[X_n(s)X_k(1)]\,P_k(t)} & =\sum_{k\in J} \left(\vphantom{\sum_n^n}\right.\!\sum_{j\in J} \mathbb{E}[X_j(1)X_k(1)]\,P_j(s)-\mathbb{E}[X_n(s)X_k(1)] \!\!\left.\vphantom{\sum_n^n}\right)\!P_k(t) =0. \end{align*} Then, we can simplify $c_{_Y}(s,t)$ into $$ c_{_Y}(s,t)=c_{_{X_n}}(s,t)-\sum_{j\in J} \mathbb{E}[X_n(t)X_j(1)]\,P_j(s). 
$$ Since the covariance functions $c_{_Y}$ and $c_{_{X_n}}$ are symmetric, we also have $$ c_{_Y}(s,t)=c_{_{X_n}}(s,t)-\sum_{j\in J} \mathbb{E}[X_n(s)X_j(1)]\,P_j(t). $$ Let us introduce the symmetric polynomial $$ Q(s,t)=\sum_{j\in J} \mathbb{E}[X_n(s)X_j(1)]\,P_j(t) =\sum_{j\in J} \mathbb{E}[X_n(t)X_j(1)]\,P_j(s). $$ It can be expressed by means of the functions $\varphi_k,\psi_k$'s as follows: $$ Q(s,t)=\sum_{j\in J}\sum_{k=0}^{n-1} \varphi_k(s\wedge t) \psi_k^{(n-j)}(1) P_j(s\vee t). $$ We can rewrite $c_{_Y}(s,t)$ as $$ c_{_Y}(s,t)=c_{_{X_n}}(s,t)-Q(s,t) =\sum_{k=0}^{n-1} \varphi_k(s\wedge t) \!\left[\vphantom{\sum_n^n}\right.\! \psi_k(s\vee t)-\sum_{j\in J}\psi_k^{(n-j)}(1) P_j(s\vee t)\!\!\left. \vphantom{\sum_n^n}\right]\! $$ and then, for $s,t\in[0,1]$, $$ c_{_Y}(s,t)=\sum_{k=0}^{n-1} \varphi_k(s\wedge t) \tilde{\psi}_k(s\vee t) \quad\mbox{with}\quad \tilde{\psi}_k(t)=\psi_k(t)-\sum_{j\in J}\psi_k^{(n-j)}(1) P_j(t). $$ We immediately see that $\tilde{\psi}_k$ is a polynomial of degree less than $2n$ such that $\tilde{\psi}_k^{(i)}(0)=\delta_{i,k}$ for $i\in\{0,1,\dots,n-1\}$ and, since $P_j^{(i)}(1)=\delta_{j,n-i}$ for $i\in I$, $$ \tilde{\psi}_k^{(i)}(1)=\psi_k^{(i)}(1)-\sum_{j\in J}\psi_k^{(n-j)}(1) P_j^{(i)}(1) =\left(1-1\hspace{-.27em}\mbox{\rm l}_{\{i\in (n-J)\}}\right)\psi_k^{(i)}(1). $$ We deduce that $$ \begin{cases} \tilde{\psi}_k^{(i)}(1)=0 &\mbox{if }i\in (n-J), \\[.5ex] \tilde{\psi}_k^{(i)}(1)=\psi_k^{(i)}(1)=0 &\mbox{if } i\in\{n,n+1,\dots,2n-1\}\backslash (J+n-1). \end{cases} $$ Then $\tilde{\psi}_k^{(i)}(1)=0$ for any $i\in I$. This completes the proof of Theorem~\ref{th-decompo-cov}. \qed \end{proof} \subsection{Boundary value problem} In this section, we write out the natural boundary value problem which is associated with the process $(Y(t))_{t\in[0,1]}$.
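Before stating the general result, it may help to recall the classical case $n=1$ (a standard fact, recalled here for orientation): for $J=\{1\}$, hence $I=\{0\}$, the process $Y$ is the Brownian bridge $\beta$, with covariance $c(s,t)=s\wedge t-st$, and the function $v(t)=\int_0^1 c(s,t)u(s)\,\dd s$ solves the second-order problem

```latex
v''=-u \quad\mbox{on }[0,1],\qquad v(0)=v(1)=0,
```

as can be checked directly from the explicit form $v(t)=(1-t)\int_0^t s\,u(s)\,\dd s+t\int_t^1 (1-s)\,u(s)\,\dd s$.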
The following statement is the main connection between the different boundary value conditions associated with the operator $d^{2n}/dt^{2n}$ and the different bridges of the process $(X_n(t))_{t\in[0,1]}$ introduced in this work. \begin{theorem}\label{th-BVP-gene} Let $u$ be a fixed continuous function on $[0,1]$. The function $v$ defined on $[0,1]$ by $$ v(t)=\int_0^1 c_{_Y}(s,t)u(s)\,\dd s $$ is the solution of the boundary value problem \begin{equation}\label{BVP-bridge} \begin{cases} v^{(2n)}=(-1)^n u &\mbox{on }[0,1], \\ v^{(i)}(0)=0 &\mbox{for } i\in\{0,1,\dots,n-1\}, \\ v^{(i)}(1)=0 &\mbox{for } i\in I. \end{cases} \end{equation} \end{theorem} \begin{proof} \noindent$\bullet$ \textsl{First step.} Recall that $c_{_Y}(s,t)=c_{_{X_n}}(s,t)-Q(s,t)$. We decompose the function $v$ into the difference $w-z$ where, for $t\in[0,1]$, $$ w(t)=\int_0^1 c_{_{X_n}}(s,t)u(s)\,\dd s,\quad z(t)=\int_0^1 Q(s,t)u(s)\,\dd s. $$ We know from Theorem~\ref{th-BVP-IBM} that $w^{(2n)}=(-1)^nu$, $w^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ and $w^{(i)}(1)=0$ for $i\in\{n,n+1,\dots,2n-1\}$. Moreover, the function $t\mapsto Q(s,t)$ being a polynomial of degree less than $2n$, the function $z$ is also a polynomial of degree less than $2n$. Then $z^{(2n)}=0$, $z^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ and $$ v^{(2n)}=w^{(2n)}-z^{(2n)}=(-1)^nu,\quad v^{(i)}(0)=w^{(i)}(0)-z^{(i)}(0)=0 \quad\mbox{for }i\in\{0,1,\dots,n-1\}. $$ On the other hand, we learn from~(\ref{deriv-v}) that, for $i\in\{0,1,\dots,2n-1\}$, $$ w^{(i)}(1)=\int_0^1 \frac{\partial^i \!c_{_{X_n}}}{\partial t^i}(s,1)\,u(s)\dd s $$ and then, for $i\in\{0,1,\dots,n-1\}$, $$ w^{(i)}(1)=\int_0^1 \mathbb{E}[X_n(s)X_{n-i}(1)]\,u(s)\,\dd s. $$ We also have, for any $i\in I$, $$ \frac{\partial^i\!Q}{\partial t^i}(s,1)=\sum_{j\in J} \mathbb{E}[X_n(s)X_j(1)]\,P^{(i)}_j(1) =1\hspace{-.27em}\mbox{\rm l}_{\{i\in (n-J)\}}\mathbb{E}[X_n(s)X_{n-i}(1)].
$$ As a result, we see that $$ z^{(i)}(1)=\int_0^1 \frac{\partial^i\!Q}{\partial t^i}(s,1)u(s)\,\dd s =1\hspace{-.27em}\mbox{\rm l}_{\{i\in (n-J)\}}w^{(i)}(1). $$ This implies that for $i\in (n-J)$, $z^{(i)}(1)=w^{(i)}(1)$ and for $i\in\{n,n+1,\dots,2n-1\}\backslash(J+n-1)$, $z^{(i)}(1)=0=w^{(i)}(1)$. Then $z^{(i)}(1)=w^{(i)}(1)$ for any $i\in I$, that is $v^{(i)}(1)=0$. Thus, the function $v$ is a solution of~(\ref{BVP-bridge}). \noindent$\bullet$ \textsl{Second step.} We now check the uniqueness of the solution of~(\ref{BVP-bridge}). Let $v_1$ and $v_2$ be two solutions of~(\ref{BVP-bridge}). Then the function $w=v_1-v_2$ satisfies $\mathcal{D} w=0$, $w^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ and $w^{(i)}(1)=0$ for $i\in I$. We compute the following ``energy'' integral: \begin{align*} \int_0^1 w^{(n)}(t)^2\,\dd t & =(-1)^{n+1}\left[\,\sum_{i=0}^{n-1} (-1)^{i} w^{(i)}(t)w^{(2n-1-i)}(t) \right]_0^1+(-1)^n\int_0^1 w(t)w^{(2n)}(t)\,\dd t \\ & =(-1)^{n+1}\sum_{i=0}^{n-1} (-1)^{i} w^{(i)}(1)w^{(2n-1-i)}(1). \end{align*} We have constructed the set $I$ in order to have $w^{(i)}(1)w^{(2n-1-i)}(1)=0$ for any $i\in\{0,1,\dots,n-1\}$: when we pick an index $i\in\{0,1,\dots,n-1\}$, either $i\in I$ or $2n-1-i\in I$. Indeed, \begin{itemize} \item[--] if $i\in I\cap\{0,1,\dots,n-1\}$, then $w^{(i)}(1)=0$; \item[--] if $i\in \{0,1,\dots,n-1\}\backslash I$, by observing that $\{0,1,\dots,n-1\}\backslash I=\{0,1,\dots,n-1\}\backslash (n-J)$, we have $2n-1-i\in \{n,n+1,\dots,2n-1\}\backslash (J+n-1)$. Since $\{n,n+1,\dots,2n-1\}\backslash (J+n-1)\subset I$, we see that $2n-1-i\in I$ and then $w^{(2n-1-i)}(1)=0$. \end{itemize} Hence $\int_0^1 w^{(n)}(t)^2\,\dd t=0$, which entails $w^{(n)}=0$, that is, $w$ is a polynomial of degree less than $n$. Moreover, with the boundary value conditions at $0$, we obtain $w=0$, that is, $v_1=v_2$. The proof of Theorem~\ref{th-BVP-gene} is finished.
\qed \end{proof} \begin{remark}\label{remark-unique} We have seen in the above proof that uniqueness is ensured as soon as the boundary value conditions at $1$ satisfy $w^{(i)}(1)w^{(2n-1-i)}(1)=0$ for any $i\in\{0,1,\dots,n-1\}$. These conditions are fulfilled when the set $I=\{i_1,i_2,\dots,i_n\}\subset\{0,1,\dots,2n-1\}$ is such that $i_1\in\{0,2n-1\}$, $i_2\in\{1,2n-2\}$, $i_3\in\{2,2n-3\}$, $\dots$, $i_n\in\{n-1,n\}$. This is equivalent to saying that $I$ and $(2n-1-I)$ make up a partition of $\{0,1,\dots,2n-1\}$, or $$ 2n-1-I=\{0,1,\dots,2n-1\}\backslash I. $$ In this manner, we get $2^n$ different boundary value problems which correspond to the $2^n$ different bridges we have constructed. We shall see in Section~\ref{sect-general} that the above identity concerning the differentiating set $I$ characterizes the possibility for the Green function of the boundary value problem~(\ref{BVP-bridge}) to be the covariance of a Gaussian process. \end{remark} \subsection{Prediction} Now, we tackle the prediction problem for the process $(Y(t))_{t\in[0,1]}$. \begin{theorem}\label{th-prediction} Fix $t_0\in[0,1]$. The shifted process $(Y(t+t_0))_{t\in[0,1-t_0]}$ admits the following representation: $$ (Y(t+t_0))_{t\in[0,1-t_0]}=\left(\tilde{Y}_{t_0}(t)+\sum_{i=0}^{n-1} Q_{i,t_0}(t) Y^{(i)}(t_0) \right)_{t\in[0,1-t_0]} $$ where $$ \tilde{Y}_{t_0}(t)=\tilde{X}_n(t)-\sum_{j\in J}\tilde{P}_{j,t_0}(t)\tilde{X}_j(1-t_0). $$ The process $\big(\tilde{X}_n(t)\big)_{t\in[0,1-t_0]}$ is a copy of $(X_n(t))_{t\in[0,1-t_0]}$ which is independent of $(X_n(t))_{t\in[0,t_0]}$.
The functions $\tilde{P}_{j,t_0}$, $j\in J$, and $Q_{i,t_0}$, $i\in\{0,1,\dots,n-1\}$, are Hermite interpolation polynomials on $[0,1-t_0]$ characterized by $$ \begin{cases} \tilde{P}^{(2n)}_{j,t_0}=0, \\[1ex] \tilde{P}^{(\iota)}_{j,t_0}(0)=0 &\mbox{for } \iota\in\{0,1,\dots,n-1\}, \\[1ex] \tilde{P}^{(\iota)}_{j,t_0}(1-t_0)=\delta_{\iota,n-j}&\mbox{for }\iota\in I, \end{cases} $$ and $$ \begin{cases} Q^{(2n)}_{i,t_0}=0, \\[1ex] Q^{(\iota)}_{i,t_0}(0)=\delta_{\iota, i} &\mbox{for } \iota\in\{0,1,\dots,n-1\}, \\[1ex] Q^{(\iota)}_{i,t_0}(1-t_0)=0 &\mbox{for }\iota\in I. \end{cases} $$ Actually, these functions can be expressed by means of the functions $P_j$, $j\in J$, and $\tilde{\psi}_i$, $i\in\{0,1,\dots,n-1\}$, as follows: $$ \tilde{P}_{j,t_0}(t)=(1-t_0)^{n-j}P_j\!\left(\frac{t}{1-t_0}\right)\!, \quad Q_{i,t_0}(t)=(1-t_0)^{i}\,\tilde{\psi}_i\!\left(\frac{t}{1-t_0}\right)\!. $$ In other words, the process $\big(\tilde{Y}_{t_0}(t)\big)_{t\in[0,1-t_0]}$ is a bridge of length $(1-t_0)$ which is independent of $(Y(t))_{t\in[0,t_0]}$, that is $$ \big(\tilde{Y}_{t_0}(t)\big)_{t\in[0,1-t_0]}\stackrel{d}{=} \big(\tilde{X}_n(t) \big| \tilde{X}_j(1-t_0)=0,j\in J\big)_{t\in[0,1-t_0]}. $$ \end{theorem} \begin{proof} \noindent$\bullet$ \textsl{First step.} Fix $t_0\in[0,1]$. We have the well-known decomposition, based on the classical prediction property of Brownian motion stipulating that $(B(t+t_0))_{t\in[0,1-t_0]}=\big(\tilde{B}_{t_0}(t)+B(t_0)\big)_{t\in[0,1-t_0]}$ where $\big(\tilde{B}_{t_0}(t)\big)_{t\in[0,1-t_0]}$ is a Brownian motion independent of $(B(t))_{t\in[0,t_0]}$, $$ (X_n(t+t_0))_{t\in[0,1-t_0]}=\left(\tilde{X}_n(t)+\sum_{k=0}^{n-1} \frac{t^k}{k!} \, X_{n-k}(t_0)\right)_{t\in[0,1-t_0]} $$ with $\tilde{X}_n(t)=\int_0^t \frac{(t-s)^{n-1}}{(n-1)!}\,\dd \tilde{B}_{t_0}(s)$. Differentiating this equality $(n-j)$ times, $j\in\{1,2,\dots,n\}$, we obtain $$ X_j(t+t_0)=\tilde{X}_j(t)+\sum_{k=n-j}^{n-1} \frac{t^{j+k-n}}{(j+k-n)!} \, X_{n-k}(t_0).
$$ Therefore, \begin{align} \lqn{Y(t+t_0)} & =X_n(t+t_0)-\sum_{j\in J} P_j(t+t_0)X_j(1) \nonumber\\ & =\tilde{X}_n(t)+\sum_{k=0}^{n-1} \frac{t^k}{k!} \,X_{n-k}(t_0) -\sum_{j\in J} P_j(t+t_0) \!\left[\vphantom{\sum_n^n}\right.\!\tilde{X}_j(1-t_0) +\sum_{k=n-j}^{n-1} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, X_{n-k}(t_0)\!\!\left.\vphantom{\sum_n^n}\right] \nonumber\\ & =\!\left[\vphantom{\sum_n^n}\right.\!\tilde{X}_n(t)-\sum_{j\in J} P_j(t+t_0) \tilde{X}_j(1-t_0) \!\!\left.\vphantom{\sum_n^n}\right] +\sum_{k=0}^{n-1} \left[\vphantom{\sum_n^n}\right.\!\frac{t^k}{k!} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P_j(t+t_0) \!\!\left.\vphantom{\sum_n^n}\right] X_{n-k}(t_0). \label{Y-translated} \end{align} We are going to express the $X_{n-k}(t_0)$, $k\in\{0,1,\dots,n-1\}$, by means of the $Y^{(i)}(t_0)$, $i\in\{0,1,\dots,n-1\}$, and the $\tilde{X}_j(1-t_0)$, $j\in J$. We have, by differentiating~(\ref{Y-translated}) $i$ times, for $i\in\{0,1,\dots,n-1\}$, \begin{align*} Y^{(i)}(t+t_0) & =\!\left[\vphantom{\sum_n^n}\right.\!\tilde{X}_{n-i}(t)- \sum_{j\in J} P^{(i)}_j(t+t_0) \tilde{X}_j(1-t_0)\!\!\left.\vphantom{\sum_n^n}\right] \\ & \hphantom{=\,} +\sum_{k=0}^{n-1} \left[\vphantom{\sum_n^n}\right.\!\frac{t^{k-i}}{(k-i)!}1\hspace{-.27em}\mbox{\rm l}_{\{k\ge i\}} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(i)}_j(t+t_0)\!\!\left.\vphantom{\sum_n^n}\right]\! X_{n-k}(t_0). \end{align*} For $t=0$, this yields \begin{equation}\label{Y-i-tzero} Y^{(i)}(t_0)=-\sum_{j\in J} P^{(i)}_j(t_0) \tilde{X}_j(1-t_0) +\sum_{k=0}^{n-1} \left[\vphantom{\sum_n^n}\right.\! \delta_{i,k} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(i)}_j(t_0)\!\!\left.\vphantom{\sum_n^n}\right] X_{n-k}(t_0). \end{equation} Set for $i,k\in\{0,1,\dots,n-1\}$ $$ a_{ik}=\delta_{i,k}-\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(i)}_j(t_0) $$ and let us introduce the matrix $$ A=(a_{ik})_{0\le i,k\le n-1} $$ together with its inverse matrix $$ B=A^{-1}=(b_{ik})_{0\le i,k\le n-1}.
$$ The equalities~(\ref{Y-i-tzero}) for $i\in\{0,1,\dots,n-1\}$ read as a linear system of $n$ equations with $n$ unknowns: $$ \sum_{k=0}^{n-1} a_{ik} X_{n-k}(t_0)=Y^{(i)}(t_0) +\sum_{j\in J} P^{(i)}_j(t_0)\tilde{X}_j(1-t_0),\quad i\in\{0,1,\dots,n-1\}, $$ which can be rewritten in matrix form as $$ A\begin{pmatrix} X_n(t_0) \\ X_{n-1}(t_0) \\ \vdots \\ X_2(t_0) \\ X_1(t_0) \end{pmatrix} =\begin{pmatrix} Y(t_0) \\ Y'(t_0) \\ \vdots \\ Y^{(n-2)}(t_0) \\ Y^{(n-1)}(t_0) \end{pmatrix} +\sum_{j\in J} \tilde{X}_j(1-t_0) \begin{pmatrix} P_j(t_0) \\ P'_j(t_0) \\ \vdots \\ P^{(n-2)}_j(t_0) \\ P^{(n-1)}_j(t_0) \end{pmatrix}\!. $$ The solution is given by $$ \begin{pmatrix} X_n(t_0) \\ X_{n-1}(t_0) \\ \vdots \\ X_2(t_0) \\ X_1(t_0) \end{pmatrix} =B\begin{pmatrix} Y(t_0) \\ Y'(t_0) \\ \vdots \\ Y^{(n-2)}(t_0) \\ Y^{(n-1)}(t_0) \end{pmatrix} +\sum_{j\in J} \tilde{X}_j(1-t_0) B\begin{pmatrix} P_j(t_0) \\ P'_j(t_0) \\ \vdots \\ P^{(n-2)}_j(t_0) \\ P^{(n-1)}_j(t_0) \end{pmatrix} $$ and we see that $X_{n-k}(t_0)$, $k\in\{0,1,\dots,n-1\}$, is of the form \begin{equation}\label{X-tzero} X_{n-k}(t_0)=\sum_{i=0}^{n-1} b_{ki} Y^{(i)}(t_0) +\sum_{j\in J} \left[\,\sum_{i=0}^{n-1} b_{ki}P^{(i)}_j(t_0)\right]\tilde{X}_j(1-t_0).
\end{equation} Therefore, by plugging~(\ref{X-tzero}) into~(\ref{Y-translated}), we obtain \begin{align*} Y(t+t_0) & =\tilde{X}_n(t)-\sum_{j\in J} P_j(t+t_0) \tilde{X}_j(1-t_0) \\ & \hphantom{=\,}+\sum_{k=0}^{n-1} \left[\vphantom{\sum_n^n}\right.\!\frac{t^{k}}{k!} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!} \, P_j(t+t_0)\!\!\left.\vphantom{\sum_n^n}\right] \\ & \hphantom{=\,}\times\!\left[\vphantom{\sum_n^n}\right.\!\sum_{i=0}^{n-1} b_{ki} Y^{(i)}(t_0) +\sum_{j\in J} \left(\sum_{i=0}^{n-1} b_{ki}P^{(i)}_j(t_0)\right)\tilde{X}_j(1-t_0) \!\!\left.\vphantom{\sum_n^n}\right] \\ & =\tilde{X}_n(t)-\sum_{j\in J} \left[\vphantom{\sum_n^n}\right.\!P_j(t+t_0)- \sum_{0\le i,k\le n-1} b_{ki} P^{(i)}_j(t_0) \\ & \hphantom{=\,} \times\left(\vphantom{\sum_n^n}\right.\!\frac{t^{k}}{k!} -\sum_{\jmath\in J:\jmath\ge n-k} \frac{(1-t_0)^{\jmath+k-n}}{(\jmath+k-n)!}\, P_{\jmath}(t+t_0) \!\!\left.\vphantom{\sum_n^n}\right)\!\!\left.\vphantom{\sum_n^n}\right]\!\tilde{X}_j(1-t_0) \\ & \hphantom{=\,} +\sum_{0\le i,k\le n-1} b_{ki}\!\left[\vphantom{\sum_n^n}\right.\!\frac{t^{k}}{k!} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P_j(t+t_0) \!\!\left.\vphantom{\sum_n^n}\right]\! Y^{(i)}(t_0). \end{align*} Finally, $Y(t+t_0)$ can be written as $$ Y(t+t_0)=\tilde{Y}_{t_0}(t)+\sum_{i=0}^{n-1} Q_{i,t_0}(t) Y^{(i)}(t_0) $$ where $$ Q_{i,t_0}(t)=\sum_{k=0}^{n-1} b_{ki}\!\left[\vphantom{\sum_n^n}\right.\!\frac{t^{k}}{k!} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P_j(t+t_0) \!\!\left.\vphantom{\sum_n^n}\right] $$ and $$ \tilde{Y}_{t_0}(t)=\tilde{X}_n(t)-\sum_{j\in J}\tilde{P}_{j,t_0}(t)\tilde{X}_j(1-t_0) $$ with \begin{align*} \tilde{P}_{j,t_0}(t) & =P_j(t+t_0)-\sum_{0\le i,k\le n-1} b_{ki} P^{(i)}_j(t_0) \!\left[\vphantom{\sum_n^n}\right.\!\frac{t^{k}}{k!} -\sum_{\jmath\in J:\jmath\ge n-k} \frac{(1-t_0)^{\jmath+k-n}}{(\jmath+k-n)!} \, P_{\jmath}(t+t_0)\!\!\left.\vphantom{\sum_n^n}\right] \\ & =P_j(t+t_0)-\sum_{i=0}^{n-1} P^{(i)}_j(t_0)Q_{i,t_0}(t). 
\end{align*} \noindent$\bullet$ \textsl{Second step.} We easily see that the functions $\tilde{P}_{j,t_0}$ and $Q_{i,t_0}$ are polynomials of degree less than $2n$. Let us now compute their derivatives at $0$ and $1-t_0$. First, concerning $Q_{i,t_0}$ we have $$ Q^{(\iota)}_{i,t_0}(t)=\sum_{k=0}^{n-1} b_{ki} \!\left[\vphantom{\sum_n^n}\right.\!\frac{t^{k-\iota}}{(k-\iota)!} 1\hspace{-.27em}\mbox{\rm l}_{\{k\ge \iota\}}-\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(\iota)}_j(t+t_0) \!\!\left.\vphantom{\sum_n^n}\right]\!. $$ Choosing $t=0$ and recalling the definition of $a_{\iota k}$ and the fact that the matrices $(a_{ik})_{0\le i,k\le n-1}$ and $(b_{ik})_{0\le i,k\le n-1}$ are inverses of each other, this gives for $\iota\in\{0,1,\dots,n-1\}$, $$ Q^{(\iota)}_{i,t_0}(0)=\sum_{k=0}^{n-1} b_{ki} \!\left[ \vphantom{\sum_n^n}\right.\!\delta_{\iota, k} -\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(\iota)}_j(t_0)\!\!\left.\vphantom{\sum_n^n}\right] =\sum_{k=0}^{n-1}a_{\iota k} b_{ki}=\delta_{\iota, i}. $$ Choosing $t=1-t_0$, we have for $\iota\in I$ \begin{equation}\label{Qitzero} Q^{(\iota)}_{i,t_0}(1-t_0)=\sum_{k=0}^{n-1} b_{ki}\!\left[ \vphantom{\sum_n^n}\right.\!\frac{(1-t_0)^{k-\iota}}{(k-\iota)!}\, 1\hspace{-.27em}\mbox{\rm l}_{\{k\ge \iota\}}-\sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!} \, P^{(\iota)}_j(1)\!\!\left.\vphantom{\sum_n^n}\right]\!. \end{equation} By Theorem~\ref{th-drift}, we know that $P^{(\iota)}_j(1)=\delta_{j,n-\iota}$ for $\iota\in I$; then $$ \sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(\iota)}_j(1) =\frac{(1-t_0)^{k-\iota}}{(k-\iota)!}\,1\hspace{-.27em}\mbox{\rm l}_{\{k\ge \iota\}}1\hspace{-.27em}\mbox{\rm l}_{\{\iota\in(n-J)\}}.
$$ Observing that, if $\iota\le k\le n-1$, the conditions $\iota\in(n-J)$ and $\iota\in I$ are equivalent, we simply have $$ \sum_{j\in J:j\ge n-k} \frac{(1-t_0)^{j+k-n}}{(j+k-n)!}\, P^{(\iota)}_j(1) =\frac{(1-t_0)^{k-\iota}}{(k-\iota)!}\,1\hspace{-.27em}\mbox{\rm l}_{\{k\ge \iota\}} $$ which immediately entails, by~(\ref{Qitzero}), $$ Q^{(\iota)}_{i,t_0}(1-t_0)=0\quad\mbox{for }\iota\in I. $$ Next, concerning $\tilde{P}^{(\iota)}_{j,t_0}$, we have $$ \tilde{P}^{(\iota)}_{j,t_0}(t) =P^{(\iota)}_j(t+t_0)-\sum_{i=0}^{n-1} P^{(i)}_j(t_0)Q^{(\iota)}_{i,t_0}(t). $$ Choosing $t=0$, this gives for $\iota\in\{0,1,\dots,n-1\}$, since $Q^{(\iota)}_{i,t_0}(0)=\delta_{\iota, i}$, $$ \tilde{P}^{(\iota)}_{j,t_0}(0) =P^{(\iota)}_j(t_0)-\sum_{i=0}^{n-1} P^{(i)}_j(t_0)Q^{(\iota)}_{i,t_0}(0)=0. $$ Choosing $t=1-t_0$, we have for $\iota\in I$, since $P^{(\iota)}_j(1)=\delta_{\iota,n-j}$ and $Q^{(\iota)}_{i,t_0}(1-t_0)=0$, $$ \tilde{P}^{(\iota)}_{j,t_0}(1-t_0) =P^{(\iota)}_j(1)-\sum_{i=0}^{n-1} P^{(i)}_j(t_0)Q^{(\iota)}_{i,t_0}(1-t_0) =\delta_{\iota,n-j}. $$ The polynomials $\tilde{P}_{j,t_0}$ (resp. $Q_{i,t_0}$) enjoy the same properties as the $P_j$'s (resp. the $\tilde{\psi}_i$'s) regarding the successive derivatives; they can be deduced from the latter by the rescaling $$ \tilde{P}_{j,t_0}(t)=(1-t_0)^{n-j}P_j\!\left(\frac{t}{1-t_0}\right)\quad (\mbox{resp. }Q_{i,t_0}(t)=(1-t_0)^{i}\,\tilde{\psi}_i\!\left(\frac{t}{1-t_0}\right)). $$ It is then easy to extract the identity in distribution below by using Gaussian conditioning: $$ \left(\vphantom{\sum_n^n}\right.\!\tilde{X}_n(t)-\sum_{j\in J}\tilde{P}_{j,t_0}(t) \tilde{X}_j(1-t_0)\!\!\left.\vphantom{\sum_n^n}\right)_{t\in[0,1-t_0]} \stackrel{d}{=} \big(\tilde{X}_n(t) \big| \tilde{X}_j(1-t_0)=0,j\in J\big)_{t\in[0,1-t_0]}. $$ Theorem~\ref{th-prediction} is established.
\qed \end{proof} \section{Example: bridges of integrated Brownian motion ($n=2$)}\label{sect-IBM} Here, we look at the particular case $n=2$, for which the corresponding process $(X_n(t))_{t\in [0,1]}$ is nothing but integrated Brownian motion (the so-called Langevin process): $$ X_2(t)=\int_0^t B(s)\,\dd s. $$ The underlying Markov process is the so-called Kolmogorov diffusion $(X_2(t),X_1(t))_{t\in [0,1]}$. All the associated conditioned processes that will be constructed are related to the equation $v^{(4)}(t)=u(t)$ with boundary value conditions at time~$0$: $v(0)=v'(0)=0$. There are four such processes: \begin{itemize} \item $(X_2(t))_{t\in[0,1]}$ (integrated Brownian motion); \item $(X_2(t)|X_1(1)=0)_{t\in[0,1]}$ (integrated Brownian bridge); \item $(X_2(t)|X_2(1)=0)_{t\in[0,1]}$ (bridge of integrated Brownian motion); \item $(X_2(t)|X_1(1)=X_2(1)=0)_{t\in[0,1]}$ (another bridge of integrated Brownian motion). \end{itemize} On the other hand, when adding two boundary value conditions at time~$1$ to the foregoing equation, we find six boundary value problems: $v(1)=v'(1)=0$, $v(1)=v''(1)=0$, $v(1)=v'''(1)=0$, $v'(1)=v''(1)=0$, $v'(1)=v'''(1)=0$, $v''(1)=v'''(1)=0$. Actually, only four of them can be related to Gaussian processes (the processes listed above) in the sense of our work, whereas the two others cannot. For each process, we provide the covariance function, the representation by means of integrated Brownian motion subject to a random polynomial drift, the related boundary value conditions at $1$ and the decomposition related to the prediction problem. Since the computations are straightforward, we omit them and only report the results. For an account of integrated Brownian motion in relation with the present work, we refer the reader to, e.g., \cite{drift} and references therein.
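Each of the drift polynomials appearing below can be checked directly against the Hermite conditions~(\ref{condition-hermite}); for instance (a verification added for the reader's convenience), for $J=\{2\}$, hence $I=\{0,2\}$, the polynomial $P_2(t)=\frac12\,t^2(3-t)$ satisfies

```latex
P_2(0)=P_2'(0)=0,\qquad
P_2(1)=1=\delta_{2,2},\qquad
P_2''(1)=(3-3t)\big|_{t=1}=0=\delta_{2,0}.
```

The three other cases can be verified in the same way.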
\subsection{Integrated Brownian motion} The process corresponding to the set $J=\varnothing$ is nothing but integrated Brownian motion: $$ (X_2(t))_{t\in [0,1]}=\left(\int_0^t B(s)\,\dd s\right)_{t\in [0,1]}. $$ The covariance function is explicitly given by $$ c(s,t)=\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]. $$ This process is related to the boundary value conditions at $1$ ($I=\{2,3\}$): $v''(1)=v'''(1)=0$. The prediction property can be stated as follows: $$ (X_2(t+t_0))_{t\in [0,1-t_0]}=\big(\tilde{X}_2(t)+X_2(t_0)+tX_1(t_0)\big)_{t\in [0,1-t_0]}. $$ \subsection{Integrated Brownian bridge} The process corresponding to the set $J=\{1\}$ is the integrated Brownian bridge: $$ (Y(t))_{t\in [0,1]}=\left(\int_0^t B(s)\,\dd s\,\Big|B(1)=0\right)_{t\in [0,1]} =\left(\int_0^t \beta(s)\,\dd s\right)_{t\in [0,1]} =(X_2(t)|X_1(1)=0)_{t\in [0,1]}. $$ This process can be represented as $$ (Y(t))_{t\in [0,1]}=\left(X_2(t)-\frac12\,t^2 X_1(1)\right)_{t\in [0,1]}. $$ The covariance function is explicitly given by $$ c(s,t)=\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]-\frac14\,s^2t^2. $$ The process $(Y(t))_{t\in [0,1]}$ is related to the boundary value conditions at $1$ ($I=\{1,3\}$): $v'(1)=v'''(1)=0$. The prediction property says that $$ (Y(t+t_0))_{t\in [0,1-t_0]}=\left(\tilde{Y}_{t_0}(t)+Y(t_0) +\left(t-\frac{t^2}{2(1-t_0)}\right)Y'(t_0)\right)_{t\in [0,1-t_0]}. $$ \subsection{Bridge of integrated Brownian motion} The process corresponding to the set $J=\{2\}$ is the bridge of integrated Brownian motion: $$ (Y(t))_{t\in [0,1]}=\left(\int_0^t B(s)\,\dd s\,\bigg|\int_0^1 B(s)\,\dd s=0\right)_{t\in [0,1]} =(X_2(t)|X_2(1)=0)_{t\in [0,1]}. $$ The bridge is understood in the sense that the process is pinned at its extremities: $Y(0)=Y(1)=0$. This process can be represented as $$ (Y(t))_{t\in [0,1]}=\left(X_2(t)-\frac12\,t^2(3-t) X_2(1)\right)_{t\in [0,1]}. $$ The covariance function is explicitly given by $$ c(s,t)=\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]-\frac{1}{12}\,s^2t^2(3-s)(3-t).
$$ The process $(Y(t))_{t\in [0,1]}$ is related to the boundary value conditions at $1$ ($I=\{0,2\}$): $v(1)=v''(1)=0$. The prediction property says that \begin{align*} \lqn{(Y(t+t_0))_{t\in [0,1-t_0]}} =\left(\tilde{Y}_{t_0}(t)+\frac{t^3-3(1-t_0)t^2+2(1-t_0)^3}{2(1-t_0)^3}\,Y(t_0) +\frac{t^3-3(1-t_0)t^2+2(1-t_0)^2t}{2(1-t_0)^2}\,Y'(t_0)\right)_{t\in [0,1-t_0]}. \end{align*} \subsection{Another bridge of integrated Brownian motion} The process corresponding to the set $J=\{1,2\}$ is another bridge of integrated Brownian motion (actually of the two-dimensional Kolmogorov diffusion): $$ (Y(t))_{t\in [0,1]}=\left(\int_0^t B(s)\,\dd s\,\bigg|\int_0^1 B(s)\,\dd s=B(1)=0\right)_{t\in [0,1]} =(X_2(t)|X_2(1)=X_1(1)=0)_{t\in [0,1]}. $$ The bridge here is understood in the sense that the process and its derivative are pinned at both extremities: $Y(0)=Y'(0)=Y(1)=Y'(1)=0$. This process can be represented as $$ (Y(t))_{t\in [0,1]}=\left(X_2(t)-t^2(t-1) X_1(1)-t^2(3-2t) X_2(1)\right)_{t\in [0,1]}. $$ The covariance function is explicitly given by $$ c(s,t)=\frac16 [s\wedge t]^2\,[1-s\vee t]^2\,[3(s\vee t)-s\wedge t-2st]. $$ The process $(Y(t))_{t\in [0,1]}$ is related to the boundary value conditions at $1$ ($I=\{0,1\}$): $v(1)=v'(1)=0$. The prediction property says that $$ (Y(t+t_0))_{t\in [0,1-t_0]}=\left(\tilde{Y}_{t_0}(t)+\frac{t^2(t+t_0-1)}{(1-t_0)^3}\,Y(t_0) +\frac{t^2(3-3t_0-2t)}{(1-t_0)^2}\,Y'(t_0)\right)_{t\in [0,1-t_0]}. $$ \subsection{Two counterexamples} \noindent$\bullet$ The solution of the problem associated with the boundary value conditions $v(1)=v'''(1)=0$ (which corresponds to the set $I_1=\{0,3\}$) is given by $$ v(t)=\int_0^1 G_1(s,t) u(s)\,\dd s, \quad t\in[0,1], $$ where $$ G_1(s,t)=\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]+\frac16\,s^2t^2(s-3).
$$ \noindent$\bullet$ The solution of the problem associated with the boundary value conditions $v'(1)=v''(1)=0$ (which corresponds to the set $I_2=\{1,2\}$), is given by $$ v(t)=\int_0^1 G_2(s,t) u(s)\,\dd s, \quad t\in[0,1], $$ where $$ G_2(s,t)=\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]+\frac16\,s^2t^2(t-3). $$ We can observe the relationships $G_1(s,t)=G_2(t,s)$ and $I_2=\{0,1,2,3\}\backslash(3-I_1)$. The Green functions $G_1$ and $G_2$ are not symmetric, so they cannot be viewed as the covariance functions of any Gaussian process. In the next section, we give an explanation of these observations. \section{General boundary value conditions}\label{sect-general} In this last part, we address the problem of relating the general boundary value problem \begin{equation}\label{BVP-gene-bis} \begin{cases} v^{(2n)}=(-1)^nu \quad\mbox{on }[0,1], \\ v(0)=v'(0)=\dots=v^{(n-1)}(0)=0, \\ v^{(i_1)}(1)=v^{(i_2)}(1)=\dots=v^{(i_n)}(1)=0, \end{cases} \end{equation} for any indices $i_1,i_2,\dots,i_n$ such that $0\le i_1<i_2<\dots<i_n\le 2n-1$, to some possible Gaussian process. Set $I=\{i_1,i_2,\dots,i_n\}$. We have noticed in Theorem~\ref{th-BVP-gene} and Remark~\ref{remark-unique} that, when $I$ satisfies the relationship $2n-1-I=\{0,1,\dots,2n-1\}\backslash I$, the system~(\ref{BVP-gene-bis}) admits a unique solution. We proved this fact by computing an energy integral. Actually, this fact holds for any set of indices~$I$; see Lemma~\ref{hermite}. Our aim is to characterize the set of indices $I$ for which the Green function of~(\ref{BVP-gene-bis}) can be viewed as the covariance function of a Gaussian process. A necessary condition for a function of two variables to be the covariance function of a Gaussian process is that it must be symmetric. So, we shall characterize the set of indices $I$ for which the Green function of~(\ref{BVP-gene-bis}) is symmetric and we shall see that in this case this function is a covariance function. 
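On the two counterexamples of Section~\ref{sect-IBM}, the failure of this necessary condition can be made completely explicit (a computation added for illustration): since the term $\frac16 [s\wedge t]^2\,[3(s\vee t)-s\wedge t]$ is symmetric in $(s,t)$, we get

```latex
G_1(s,t)-G_1(t,s)
=\frac16\,s^2t^2(s-3)-\frac16\,s^2t^2(t-3)
=\frac16\,s^2t^2(s-t),
```

which does not vanish off the diagonal; by the relationship $G_1(s,t)=G_2(t,s)$, the same defect of symmetry holds for $G_2$.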
\subsection{Representation of the solution} We first write out a representation for the Green function of~(\ref{BVP-gene-bis}). \begin{theorem}\label{th-BVP-gene-bis} The boundary value problem~(\ref{BVP-gene-bis}) has a unique solution. The corresponding Green function admits the following representation, for $s,t\in[0,1]$: $$ G_{_{\!I}}(s,t)=(-1)^n1\hspace{-.27em}\mbox{\rm l}_{\{s\le t\}}\frac{(t-s)^{2n-1}}{(2n-1)!} -(-1)^n\sum_{\iota\in I}\frac{(1-s)^{2n-1-\iota}}{(2n-1-\iota)!}\, R_{{\scriptscriptstyle I},\iota}(t) $$ where the $R_{{\scriptscriptstyle I},\iota}$, $\iota\in I$, are Hermite interpolation polynomials satisfying \begin{equation}\label{condition-hermite-bis} \begin{cases} R_{{\scriptscriptstyle I},\iota}^{(2n)}=0, \\[.5ex] R_{{\scriptscriptstyle I},\iota}^{(i)}(0)=0 &\mbox{for } i\in\{0,1,\dots,n-1\}, \\[.5ex] R_{{\scriptscriptstyle I},\iota}^{(i)}(1)=\delta_{\iota,i} &\mbox{for } i\in I. \end{cases} \end{equation} \end{theorem} \begin{remark} The conditions~(\ref{condition-hermite-bis}) characterize the polynomials $R_{{\scriptscriptstyle I},\iota}$, $\iota\in I$. We prove this fact in Lemma~\ref{hermite} in the appendix. \end{remark} \begin{proof} Let us introduce the functions $v_1$ and $v_2$ defined, for any $t\in[0,1]$, by $$ v_1(t)=(-1)^n\int_0^t \frac{(t-s)^{2n-1}}{(2n-1)!}\,u(s)\,\dd s\quad \mbox{and}\quad v_2(t)=v(t)-v_1(t). $$ We plainly have $v_1^{(2n)}=(-1)^nu$ and $v_1(0)=v_1'(0)=\dots=v_1^{(n-1)}(0)=0$. Therefore, the function $v$ solves the system~(\ref{BVP-gene-bis}) if and only if the function $v_2$ satisfies \begin{equation}\label{BVP-v2} \begin{cases} v_2^{(2n)}=0\quad\mbox{on }[0,1], & \\[1ex] v_2^{(i)}(0)=0 & \mbox{for } i\in\{0,1,\dots,n-1\}, \\ \displaystyle v_2^{(i)}(1)=(-1)^{n-1}\int_0^1 \frac{(1-s)^{2n-1-i}}{(2n-1-i)!} \,u(s)\,\dd s & \mbox{for } i\in I. 
\end{cases} \end{equation} Referring to Lemma~\ref{hermite}, the conditions~(\ref{BVP-v2}) mean that $v_2$ is a Hermite interpolation polynomial which can be expressed as a linear combination of the $R_{{\scriptscriptstyle I},\iota}$, $\iota\in I$, defined in Theorem~\ref{th-BVP-gene-bis} as follows: $$ v_2(t)=\sum_{\iota\in I} v_2^{(\iota)}(1) R_{{\scriptscriptstyle I},\iota}(t) =(-1)^{n-1} \int_0^1 \left[\,\sum_{\iota\in I} \frac{(1-s)^{2n-1-\iota}}{(2n-1-\iota)!}\, R_{{\scriptscriptstyle I},\iota}(t)\right] \!u(s)\,\dd s. $$ Consequently, the boundary value problem~(\ref{BVP-gene-bis}) admits a unique solution $v$, which can be written as $$ v(t)=v_1(t)+v_2(t)=\int_0^1 G_{_{\!I}}(s,t)u(s)\,\dd s,\quad t\in[0,1], $$ where $G_{_{\!I}}(s,t)$ is defined in Theorem~\ref{th-BVP-gene-bis}. The proof is finished. \qed \end{proof} We now state two intermediate results which will be used in the proof of Theorem~\ref{Green-sym}. \begin{proposition}\label{different} Let $I_1$ and $I_2$ be two subsets of $\{0,1,\dots,2n-1\}$ with cardinality $n$. If the sets $I_1$ and $I_2$ are different, then the corresponding Green functions $G_{_{\!I_1}}$ and $G_{_{\!I_2}}$ are different. \end{proposition} \begin{proof} Suppose that $I_1\neq I_2$. If we had $G_{_{\!I_1}}=G_{_{\!I_2}}$, then for any continuous function $u$ the function $v$ defined on $[0,1]$ by $$ v(t)=\int_0^1 G_{_{\!I_1}}(s,t)u(s)\,\mathrm{d}s=\int_0^1 G_{_{\!I_2}}(s,t)u(s)\,\mathrm{d}s $$ would be a solution of the equation $v^{(2n)}=(-1)^nu$ satisfying $v^{(i)}(0)=0$ for $i\in\{0,1,\dots,n-1\}$ and $v^{(i)}(1)=0$ for $i\in I_1\cup I_2$. Since $I_1\neq I_2$ and since $I_1,I_2$ have the same cardinality, there exists an index $i_0$ which belongs to $I_2\backslash I_1$. For this $i_0$, we would have $(\partial^{i_0}G_{_{\!I_1}}/\partial t^{i_0})(s,1)=0$ for all $s\in(0,1)$, or equivalently, $$ \sum_{\iota\in I_1} \frac{(1-s)^{2n-1-\iota}}{(2n-1-\iota)!}\, R_{{\scriptscriptstyle I_1},\iota}^{(i_0)}(1)=\frac{(1-s)^{2n-1-i_0}}{(2n-1-i_0)!}.
$$ This is impossible since the exponent $(2n-1-i_0)$ does not appear in the polynomial on the left-hand side of the foregoing equality. As a result, $G_{_{\!I_1}}\neq G_{_{\!I_2}}$. \qed \end{proof} \begin{proposition}\label{sym} Let $I_1$ and $I_2$ be two subsets of $\{0,1,\dots,2n-1\}$ with cardinality $n$. The relationship $G_{_{\!I_1}}(s,t)=G_{_{\!I_2}}(t,s)$ holds for any $s,t\in[0,1]$ (in other words, the integral operators with kernels $G_{_{\!I_1}}$ and $G_{_{\!I_2}}$ are dual) if and only if the sets $I_1$ and $I_2$ are linked by $I_2=\{0,1,\dots,2n-1\}\backslash (2n-1-I_1)$. \end{proposition} \begin{proof} We have, for any $s,t\in[0,1]$, $$ (-1)^n[G_{_{\!I_1}}(s,t)-G_{_{\!I_2}}(t,s)]=\frac{(t-s)^{2n-1}}{(2n-1)!} -\sum_{\iota\in I_1}\frac{(1-s)^{2n-1-\iota}}{(2n-1-\iota)!} R_{{\scriptscriptstyle I_1},\iota}(t) +\sum_{\iota\in I_2}\frac{(1-t)^{2n-1-\iota}}{(2n-1-\iota)!} R_{{\scriptscriptstyle I_2},\iota}(s). $$ Set, for any $s,t\in[0,1]$, $$ S(s,t)=(-1)^n[G_{_{\!I_1}}(s,t)-G_{_{\!I_2}}(t,s)]. $$ The polynomial $S$ has a degree less than $2n$ with respect to each of the variables $s$ and $t$, and satisfies \begin{align*} \frac{\partial^{i_1+i_2}S}{\partial s^{i_2}\partial t^{i_1}}\,(s,t) & =(-1)^{i_2}1\hspace{-.27em}\mbox{\rm l}_{\{i_1+i_2\le 2n-1\}}\,\frac{(t-s)^{2n-1-i_1-i_2}}{(2n-1-i_1-i_2)!} \\ &\hphantom{=\;} -(-1)^{i_2}\sum_{\iota\in I_1}1\hspace{-.27em}\mbox{\rm l}_{\{i_2\le 2n-1-\iota\}} \,\frac{(1-s)^{2n-1-\iota-i_2}}{(2n-1-\iota-i_2)!} R_{{\scriptscriptstyle I_1},\iota}^{(i_1)}(t) \\ &\hphantom{=\;} +(-1)^{i_1}\sum_{\iota\in I_2}1\hspace{-.27em}\mbox{\rm l}_{\{i_1\le 2n-1-\iota\}} \,\frac{(1-t)^{2n-1-\iota-i_1}}{(2n-1-\iota-i_1)!} R_{{\scriptscriptstyle I_2},\iota}^{(i_2)}(s).
\end{align*} In particular, \begin{itemize} \item for $t=1$ and $i_1\in I_1$, $i_2\in \{0,1,\dots,2n-1\}$, \begin{align} \frac{\partial^{i_1+i_2}S}{\partial s^{i_2}\partial t^{i_1}}\,(s,1) & =(-1)^{i_2}1\hspace{-.27em}\mbox{\rm l}_{\{i_1+i_2\le 2n-1\}}\,\frac{(1-s)^{2n-1-i_1-i_2}}{(2n-1-i_1-i_2)!} \nonumber\\ &\hphantom{=\;} -(-1)^{i_2}\sum_{\iota\in I_1} 1\hspace{-.27em}\mbox{\rm l}_{\{i_2\le 2n-1-\iota\}} \,\frac{(1-s)^{2n-1-\iota-i_2}}{(2n-1-\iota-i_2)!}\,\delta_{\iota,i_1} \nonumber\\ &\hphantom{=\;} +(-1)^{i_1}\sum_{\iota\in I_2} \delta_{\iota,2n-1-i_1}\;R_{{\scriptscriptstyle I_2},\iota}^{(i_2)}(s) \nonumber\\ &=(-1)^{i_1}1\hspace{-.27em}\mbox{\rm l}_{\{i_1\in (2n-1-I_2)\}} R_{{\scriptscriptstyle I_2},2n-1-i_1}^{(i_2)}(s); \label{partial1} \end{align} \item for $(s,t)=(1,0)$ and $i_1\in \{0,1,\dots,n-1\}$, $i_2\in I_2$, \begin{align} \frac{\partial^{i_1+i_2}S}{\partial s^{i_2}\partial t^{i_1}}\,(1,0) =(-1)^{i_1+1}\,\frac{1\hspace{-.27em}\mbox{\rm l}_{\{i_1+i_2\le 2n-1\}}}{(2n-1-i_1-i_2)!} +(-1)^{i_1}\sum_{\iota\in I_2}\frac{1\hspace{-.27em}\mbox{\rm l}_{\{i_1\le 2n-1-\iota\}}} {(2n-1-\iota-i_1)!}\,\delta_{\iota,i_2} =0; \label{partial2} \end{align} \item for $(s,t)=(0,0)$ and $i_1,i_2\in \{0,1,\dots,n-1\}$, \begin{align} \frac{\partial^{i_1+i_2}S}{\partial s^{i_2}\partial t^{i_1}}\,(0,0) =(-1)^{i_2}\,\delta_{i_1+i_2, 2n-1}=0. \label{partial3} \end{align} \end{itemize} Now we are able to establish the statement of Proposition~\ref{sym}. \begin{itemize} \item If $I_2\neq\{0,1,\dots,2n-1\}\backslash (2n-1-I_1)$, there exists $i_1\in I_1$ such that $i_1\in (2n-1-I_2)$ and then, in view of~(\ref{partial1}), $$ \frac{\partial^{i_1}\!S}{\partial t^{i_1}}\,(s,1)\neq 0. $$ The polynomial $S$ cannot be null, that is, there exist $s,t\in[0,1]$ such that $G_{_{\!I_1}}(s,t)\neq G_{_{\!I_2}}(t,s).
\item If $I_2=\{0,1,\dots,2n-1\}\backslash (2n-1-I_1)$, for any $i_1\in I_1$, we have $i_1\notin (2n-1-I_2)$ and then, in view of~(\ref{partial1}), $$ \frac{\partial^{i_1}\!S}{\partial t^{i_1}}\,(s,1)=0\quad \mbox{for } i_1\in I_1. $$ Put, for any $i_1\in \{0,1,\dots,n-1\}$, $\displaystyle\tilde{S}_{i_1}(s)=\frac{\partial^{i_1}\!S}{\partial t^{i_1}}\,(s,0)$. The polynomial $\tilde{S}_{i_1}$ has a degree less than $2n$. By~(\ref{partial2}) and~(\ref{partial3}), we have $$ \begin{cases} \tilde{S}_{i_1}^{(i_2)}(0)=0 &\mbox{for }i_2\in \{0,1,\dots,n-1\}, \\[1ex] \tilde{S}_{i_1}^{(i_2)}(1)=0 &\mbox{for } i_2\in I_2, \end{cases} $$ from which we deduce, invoking Lemma~\ref{hermite}, that all the polynomials $\tilde{S}_{i_1}$, $i_1\in \{0,1,\dots,n-1\}$, are null. Finally, the polynomial $S$ has a degree less than $2n$ with respect to $t$ and satisfies $$ \begin{cases} \displaystyle\frac{\partial^{i_1}\!S}{\partial t^{i_1}}\,(s,0)=0 &\mbox{for } i_1\in \{0,1,\dots,n-1\}, \\[1.5ex] \displaystyle\frac{\partial^{i_1}\!S}{\partial t^{i_1}}\,(s,1)=0 &\mbox{for } i_1\in I_1. \end{cases} $$ We can assert, by Lemma~\ref{hermite}, that $S$ is the null polynomial. \end{itemize} The proof of Proposition~\ref{sym} is finished. \qed \end{proof} A necessary condition for $G_{_{\!I}}$ to be the covariance function of a Gaussian process is that it must be symmetric: $G_{_{\!I}}(s,t)=G_{_{\!I}}(t,s)$ for any $s,t\in[0,1]$. The theorem below asserts that if the set of indices $I$ is not of the form displayed in the preamble of Section~\ref{sect-bridges}, that is, if $I\neq \{0,1,\dots,2n-1\}\backslash (2n-1-I)$, then the Green function $G_{_{\!I}}$ is not symmetric; consequently, it cannot be viewed as a covariance function, and the boundary value problem~(\ref{BVP-gene-bis}) cannot be related to any Gaussian process.
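For small $n$ the sets satisfying $I=\{0,1,\dots,2n-1\}\backslash(2n-1-I)$ can be enumerated directly: the condition means that $I$ contains exactly one index from each pair $\{i,2n-1-i\}$, so there are $2^n$ such sets. A short sketch (plain Python, added here for illustration):

```python
from itertools import combinations

def admissible(n):
    # I is admissible iff (2n-1)-I is exactly the complement of I,
    # i.e. I picks exactly one index from each pair {i, 2n-1-i}
    full = set(range(2 * n))
    return [set(I) for I in combinations(range(2 * n), n)
            if {2 * n - 1 - i for i in I} == full - set(I)]

print(sorted(sorted(I) for I in admissible(3)))
# [[0, 1, 2], [0, 1, 3], [0, 2, 4], [0, 3, 4],
#  [1, 2, 5], [1, 3, 5], [2, 4, 5], [3, 4, 5]]
```

For $n=3$ this returns the eight differentiating sets tabulated in Section~\ref{sect-IBM2}, and for $n=4$ the sixteen sets listed in Section~\ref{sect-IBM3}.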
\begin{theorem}\label{Green-sym} The Green function $G_{_{\!I}}$ is symmetric (and it corresponds to a covariance function) if and only if the set of indices $I$ satisfies $2n-1-I=\{0,1,\dots,2n-1\}\backslash I$. \end{theorem} \begin{proof} Set $I'=\{0,1,\dots,2n-1\}\backslash (2n-1-I)$. By Proposition~\ref{sym}, we see that $G_{_{\!I}}$ is symmetric if and only if $G_{\raisebox{0ex}[1.45ex]{$\scriptscriptstyle{\!I'}$}}(s,t)=G_{_{\!I}}(s,t)$ for any $s,t\in[0,1]$, that is, by Proposition~\ref{different}, if and only if $I=I'$. \qed \end{proof} We made several verifications with the aid of Maple. Below is the program we wrote for this. \begin{footnotesize} \begin{verbatim} [> Green_function:=proc(n,setI) local V,M,S,T,P,setIcomp; V:=i->vector(n,[seq(binomial(k,i),k=n..2*n-1)]); M:=liste->stackmatrix(seq(V(i),i=liste)); S:=(s,liste)->Matrix(1,n,[seq((1-s)^(2*n-1-i)/(i!*(2*n-1-i)!),i=liste)]); T:=t->Matrix(n,1,[seq([t^k],k=n..2*n-1)]); P:=(s,t,liste)->multiply(S(s,liste),inverse(transpose(M(liste))),T(t))[1,1]; setIcomp:=[op({seq(i,i=0..2*n-1)} minus {seq(2*n-1-i,i=setI)})]; print(`value of n`=n,`differentiating indices set I_1`=setI); print(`Green function for s<t: GI_1(s,t)` =sort(simplify((t-s)^(2*n-1)/((2*n-1)!)-P(s,t,setI)),[s,t],plex)); print(`Green function for s>t: GI_1(s,t)` =sort(simplify(-P(t,s,setI)),[s,t],plex)); print(`symmetry test: GI_1(s,t)=GI_1(t,s)?` =evalb(simplify((t-s)^(2*n-1)/((2*n-1)!)-P(s,t,setI)+P(t,s,setI))=0)); print(`complementary set of 2n+1-I_1: I_2`=setIcomp); print(`difference between the two Green functions for s<t: GI_1(s,t)-GI_2(s,t)` =sort(simplify(-P(s,t,setI)+P(s,t,setIcomp)),[s,t],plex)); print(`difference between the two Green functions for s>t: GI_1(s,t)-GI_2(s,t)` =sort(simplify(-P(s,t,setIcomp)+P(s,t,setI)),[s,t],plex)); print(`equality test between the two Green functions for s<t: GI_1(s,t)=GI_2(t,s)?` =evalb(simplify((t-s)^(2*n-1)/((2*n-1)!)-P(s,t,setI)+P(t,s,setIcomp))=0)); print(`equality test between the two Green 
functions for s>t: GI_1(s,t)=GI_2(t,s)?` =evalb(simplify((t-s)^(2*n-1)/((2*n-1)!)-P(s,t,setIcomp)+P(t,s,setI))=0)); end proc; \end{verbatim} \end{footnotesize} To obtain the two Green functions associated with the sets $I_1$ and $I_2$, $G_{_{I_1}}$ and $G_{_{I_2}}$, together with the equality test between them, run the command {\footnotesize \verb+[> Green_function(n,I_1);+}. For instance, the return of the command {\footnotesize \verb+[> Green_function(3,[1,4,5]);+} is \begin{footnotesize} \begin{verbatim} value of n = 3, differentiating indices set I_1 = [1,4,5] Green function for s<t: GI_1(s,t) = -1/120 s^5 - 1/72 s^4 t^3 + 1/24 s^4 t + 1/18 s^3 t^3 - 1/12 s^3 t^2 Green function for s>t: GI_1(s,t) = -1/120 s^5 + 1/24 s^4 t - 1/72 s^3 t^4 + 1/18 s^3 t^3 - 1/12 s^3 t^2 symmetry test: GI_1(s,t)=GI_1(t,s)? = false complementary set of 2n+1-I_1: I_2 = [2,3,5] difference between the two Green functions for s<t: GI_1(s,t)-GI_2(s,t) = -1/72 s^4 t^4 + 1/72 s^3 t^4 difference between the two Green functions for s>t: GI_1(s,t)-GI_2(s,t) = 1/72 s^4 t^3 - 1/72 s^3 t^4 equality test between the two Green functions for s<t: GI_1(s,t)=GI_2(t,s)? = true equality test between the two Green functions for s>t: GI_1(s,t)=GI_2(t,s)? = true \end{verbatim} \end{footnotesize} \subsection{Example: bridges of twice integrated Brownian motion ($n=3$)}\label{sect-IBM2} Here, we take a closer look at the particular case $n=3$, for which the corresponding process $(X_n(t))_{t\in [0,1]}$ is the twice integrated Brownian motion: $$ X_3(t)=\int_0^t (t-s)B(s)\,\dd s=\int_0^t \left(\int_0^{s_2} B(s_1)\,\dd s_1\right)\dd s_2. $$ All the associated conditioned processes that can be constructed are related to the equation $v^{(6)}(t)=-u(t)$ with boundary value conditions at time~$0$: $v(0)=v'(0)=v''(0)=0$. There are $2^3=8$ such processes.
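As a side illustration (this computation is not given in the text), the covariance of the twice integrated Brownian motion itself can be obtained symbolically; a sketch assuming sympy is available, valid for $s\le t$:

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v', positive=True)

# Cov(X_3(s), X_3(t)) = int_0^s int_0^t (s-u)(t-v) min(u,v) dv du for s <= t;
# the inner integral is split at v = u (valid since u <= s <= t)
inner = sp.integrate((t - v) * v, (v, 0, u)) + u * sp.integrate(t - v, (v, u, t))
cov = sp.expand(sp.integrate((s - u) * inner, (u, 0, s)))

print(cov)  # equals s**3*t**2/12 - s**4*t/24 + s**5/120
```

Setting $s=t$ gives the variance $t^5/20$, the $n=3$ analogue of the familiar $t^3/3$ for integrated Brownian motion.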
Since the computations are tedious and the explicit results are cumbersome, we only report the correspondence between bridges and boundary value conditions at time~$1$ through the sets of indices $I$ and $J$. These are written in the table below. $$ \begin{array}{|@{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}|} \hline &&&&&&&& \\ \mbox{conditioning set } J & \varnothing & \{1\} & \{2\} & \{3\} & \{1,2\} & \{1,3\} & \{2,3\} & \{1,2,3\} \\ &&&&&&&& \\ \hline &&&&&&&& \\ \mbox{differentiating set } I & \{3,4,5\} & \{2,4,5\} & \{1,3,5\} & \{0,3,4\} & \{1,2,5\} & \{0,2,4\} & \{0,1,3\} & \{0,1,2\} \\[-2ex] &&&&&&&& \\ \hline \end{array} $$ The Green functions associated with the other sets cannot be related to any Gaussian process. These sets are written in the table below with the correspondence $I_2=\{0,1,2,3,4,5\}\backslash (5-I_1)$.
$$ \begin{array}{|@{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|@{\hspace{.3em}}c@{\hspace{.3em}}| @{\hspace{.3em}}c@{\hspace{.3em}}|} \hline &&&&&& \\ \mbox{differentiating set } I_1 & \{0,1,4\} & \{0,1,5\} & \{0,2,5\} & \{0,3,5\} & \{0,4,5\} & \{1,4,5\} \\ &&&&&& \\ \hline &&&&&& \\ \mbox{differentiating set } I_2 & \{0,2,3\} & \{1,2,3\} & \{1,2,4\} & \{1,3,4\} & \{2,3,4\} & \{2,3,5\} \\[-2ex] &&&&&& \\ \hline \end{array} $$ \subsection{Example: bridges of thrice integrated Brownian motion ($n=4$)}\label{sect-IBM3} For $n=4$, only the $2^4=16$ following differentiating sets can be related to bridges: \begin{align*} \{0,1,2,3\},\,\{0,1,2,4\},\,\{0,1,3,5\},\,\{0,1,4,5\},\,\{0,2,3,6\},\,\{0,2,4,6\},\,\{0,3,5,6\},\,\{0,4,5,6\}, \\ \{1,2,3,7\},\,\{1,2,4,7\},\,\{1,3,5,7\},\,\{1,4,5,7\},\,\{2,3,6,7\},\,\{2,4,6,7\},\,\{3,5,6,7\},\,\{4,5,6,7\}. \end{align*} \section*{Appendix: Hermite interpolation polynomials} \renewcommand{A.\arabic{lemma}}{A.\arabic{lemma}} \renewcommand{A.\arabic{remark}}{A.\arabic{remark}} \setcounter{equation}{0}\setcounter{remark}{0} \renewcommand{A.\arabic{equation}}{A.\arabic{equation}} \begin{lemma}\label{hermite} Let $a_i$, $i\in\{0,1,\dots,n-1\}$, and $b_i$, $i\in I$, be real numbers. There exists a unique polynomial $P$ such that \begin{equation}\label{condition-hermite-ter} \begin{cases} P^{(2n)}=0, \\[.5ex] P^{(i)}(0)=a_i &\mbox{for } i\in\{0,1,\dots,n-1\}, \\[.5ex] P^{(i)}(1)=b_i &\mbox{for } i\in I. \end{cases} \end{equation} \end{lemma} \begin{remark} The conditions~(\ref{condition-hermite-ter}) characterize the Hermite interpolation polynomial at points~$0$ and~$1$ with given values of the successive derivatives at~$0$ up to order $n-1$ and given values of the derivatives at~$1$ with \textsl{selected} orders in $I$. 
When $I\neq \{0,1,\dots,n-1\}$, these polynomials differ from the usual Hermite interpolation polynomials which involve the successive derivatives at certain points \textsl{progressively} from zero order up to certain orders. \end{remark} \begin{proof} We look for polynomials $P$ in the form $P(t)=\sum_{j=0}^{2n-1} c_j\,\frac{t^j}{j!}.$ We have $$ P^{(i)}(t)=\sum_{j=0}^{2n-1} c_j\,\frac{t^{j-i}}{(j-i)!}. $$ We shall adopt the convention $1/[(j-i)!]=0$ for $i>j$. The conditions~(\ref{condition-hermite-ter}) yield the linear system (with the convention that $i$ and $j$ denote respectively the row and column indices) $$ \begin{cases} c_i=a_i &\mbox{if } i\in\{0,1,\dots,n-1\}, \\ \displaystyle\sum_{j=0}^{2n-1} \frac{c_j}{(j-i)!}=b_i &\mbox{if } i\in I. \end{cases} $$ The $(2n)\times(2n)$ matrix of this system is $$ \mathbf{A}=\begin{pmatrix} (\delta_{i,j})_{\hspace{-.2em}i\in\{0,1,\dots,n-1\}\atop j\in\{0,1,\dots,2n-1\}} \\[-.5ex] \dotfill \\ \begin{pmatrix}\displaystyle\frac{1}{(j-i)!} \end{pmatrix}_{\!\!\hspace{-4em}i\in I\atop \!\!j\in\{0,1,\dots,2n-1\}} \end{pmatrix}=\begin{pmatrix} (\delta_{i,j})_{i\in\{0,1,\dots,n-1\}\atop j\in\{0,1,\dots,n-1\}} && \;(0)_{\hspace{-1.4em}i\in\{0,1,\dots,n-1\}\atop j\in\{n,n+1,\dots,2n-1\}} \\[-0.5ex] \dotfill &&\hspace*{-.74em} \dotfill \\ \begin{pmatrix}\displaystyle\frac{1}{(j-i)!} \end{pmatrix}_{\!\!\hspace{-3.8em}i\in I\atop \!\!j\in\{0,1,\dots,n-1\}} && \;\displaystyle\left(\frac{1}{(j-i)!}\right)_{\!\!\hspace{-5.1em}i\in I\atop \!\!j\in\{n,n+1,\dots,2n-1\}} \end{pmatrix}\!. \hspace{-13.23em}\begin{array}{c}\\[-2.86ex]\vdots\\[-1.5ex]\vdots\\[-1.5ex]\vdots\\[-2.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{13em} $$ Proving the statement of Lemma~\ref{hermite} is equivalent to proving that the matrix $\mathbf{A}$ is regular.
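As a sanity check (not part of the original text), the regularity of $\mathbf{A}$ can be verified exhaustively for small $n$ with exact rational arithmetic; the sketch below checks all $\binom{2n}{n}=20$ choices of $I$ for $n=3$:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def entry(i, j):
    # 1/(j-i)! with the convention 1/(j-i)! = 0 for i > j
    return Fraction(0) if j < i else Fraction(1, factorial(j - i))

def det(M):
    # Laplace expansion along the first row, exact rational arithmetic
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det([row[:k] + row[k + 1:] for row in M[1:]])
               for k in range(len(M)) if M[0][k] != 0)

n = 3
for I in combinations(range(2 * n), n):
    # rows of A: delta_{ij} for i in {0,...,n-1}, then 1/(j-i)! for i in I
    A = ([[Fraction(int(i == j)) for j in range(2 * n)] for i in range(n)]
         + [[entry(i, j) for j in range(2 * n)] for i in I])
    assert det(A) != 0
print("A is regular for all 20 choices of I (n = 3)")
```

Since the north-west block of $\mathbf{A}$ is the identity and the north-east block vanishes, this determinant reduces to that of the lower-right $n\times n$ block studied in the proof.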
In view of the form of $\mathbf{A}$ as a block matrix, we see, since the north-west block is the unit matrix and the north-east block is the null matrix, that this is equivalent to proving that the south-east block of $\mathbf{A}$ is regular. Let us call the latter $\mathbf{A}_0$ and label its columns $\mathbf{C}_j^{(0)}$, $j\in\{0,1,\dots,n-1\}$: $$ \mathbf{A}_0=\begin{pmatrix} \displaystyle\frac{1}{(n+j-i)!} \end{pmatrix}_{\!\!\hspace{-3.7em}i\in I\atop \!\!j\in\{0,1,\dots,n-1\}} =\begin{pmatrix} \mathbf{C}_0^{(0)} & \mathbf{C}_1^{(0)} & \cdots & \mathbf{C}_{n-1}^{(0)} \end{pmatrix}\!. $$ For proving that $\mathbf{A}_0$ is regular, we factorize $\mathbf{A}_0$ into the product of two regular triangular matrices. The method consists in performing several transformations on the columns of $\mathbf{A}_0$ which do not affect its rank. We provide in this way an algorithm leading to an $\mathbf{L}\mathbf{U}$-factorization of $\mathbf{A}_0$ where $\mathbf{L}$ is a lower triangular matrix and $\mathbf{U}$ is an upper triangular matrix with no vanishing diagonal term. We begin by performing the transformation $$ \mathbf{C}_j^{(1)}=\begin{cases} \mathbf{C}_j^{(0)} &\mbox{if } j=0, \\[1ex] \displaystyle \mathbf{C}_j^{(0)}-\frac{\mathbf{C}_{j-1}^{(0)}}{n+j-i_1} &\mbox{if } j\in\{1,2,\dots,n-1\}. \end{cases} $$ The generic term of the column $\mathbf{C}_j^{(1)}$, for $j\in\{1,2,\dots,n-1\}$, is $$ \frac{1}{(n+j-i)!}-\frac{1}{n+j-i_1}\,\frac{1}{(n+j-i-1)!} =\frac{i-i_1}{n+j-i_1}\,\frac{1}{(n+j-i)!}.
$$ This transformation supplies a matrix $\mathbf{A}_1$ with columns $\mathbf{C}_j^{(1)}$, $j\in\{0,1,\dots,n-1\}$, which writes $$ \mathbf{A}_1=\begin{pmatrix} \mathbf{C}_0^{(1)} & \mathbf{C}_1^{(1)} & \cdots & \mathbf{C}_{n-1}^{(1)} \end{pmatrix} =\begin{pmatrix}\\[-1.8ex]\begin{pmatrix} \displaystyle\frac{1}{(n-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; & \begin{pmatrix} \displaystyle\frac{i-i_1}{n+j-i_1}\,\frac{1}{(n+j-i)!}\end{pmatrix}_{\!\!\hspace{-3.7em}i\in I\atop \!\!j\in\{1,2,\dots,n-1\}} \end{pmatrix}\!. \hspace{-18.1em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{14em} $$ We have written $$ \mathbf{A}_1=\mathbf{A}_0\mathbf{U_1} $$ where $\mathbf{U}_1$ is the triangular matrix with a diagonal made of~$1$ below: $$ \mathbf{U}_1=\left(\delta_{i,j}-\frac{\delta_{i,j-1}1\hspace{-.27em}\mbox{\rm l}_{\{j\ge 1\}}}{n+j-i_1}\right) _{\!0\le i,j\le n-1}\!. $$ We now perform the transformation $$ \mathbf{C}_j^{(2)}=\begin{cases} \mathbf{C}_j^{(1)} &\mbox{if } j\in\{0,1\}, \\[1ex] \displaystyle \mathbf{C}_j^{(1)}-\frac{n+j-i_1-1}{n+j-i_1} \,\frac{\mathbf{C}_{j-1}^{(1)}}{n+j-i_2} &\mbox{if } j\in\{2,3,\dots,n-1\}. \end{cases} $$ The generic term of the column $\mathbf{C}_j^{(2)}$, for $j\in\{2,3,\dots,n-1\}$, is \begin{align*} \lqn{\frac{i-i_1}{n+j-i_1}\,\frac{1}{(n+j-i)!} -\frac{i-i_1}{(n+j-i_1)(n+j-i_2)}\,\frac{1}{(n+j-i-1)!}} & =\frac{(i-i_1)(i-i_2)}{(n+j-i_1)(n+j-i_2)}\,\frac{1}{(n+j-i)!}. 
\end{align*} This transformation supplies a matrix $\mathbf{A}_2$ with columns $\mathbf{C}_j^{(2)}$, $j\in\{0,1,\dots,n-1\}$, which writes \begin{align*} \mathbf{A}_2 & =\begin{pmatrix} \mathbf{C}_0^{(2)} & \mathbf{C}_1^{(2)} & \cdots & \mathbf{C}_{n-1}^{(2)} \end{pmatrix} =\left( \vphantom{\begin{pmatrix}\displaystyle\frac{(i-i_1)(i-i_2)}{(n+j-i_1)(n+j-i_2)}\,\frac{1}{(n+j-i)!} \end{pmatrix}_{\!\!\hspace{-3.7em}i\in I\atop \!\!j\in\{2,3,\dots,n-1\}}} \begin{pmatrix} \displaystyle\frac{1}{(n-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; \begin{pmatrix} \displaystyle\frac{i-i_1}{n+1-i_1}\,\frac{1}{(n+1-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; \right. \hspace{-13.9em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{32em} \\ & \hspace{17.5em}\left.\begin{pmatrix} \displaystyle\frac{(i-i_1)(i-i_2)}{(n+j-i_1)(n+j-i_2)}\,\frac{1}{(n+j-i)!} \end{pmatrix}_{\!\!\hspace{-3.7em}i\in I\atop \!\!j\in\{2,3,\dots,n-1\}} \right)\!. \hspace{-23.9em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} \end{align*} We have written $$ \mathbf{A}_2=\mathbf{A}_1\mathbf{U}_2=\mathbf{A}_0\mathbf{U}_1\mathbf{U}_2 $$ where $\mathbf{U}_2$ is the triangular matrix with a diagonal made of~$1$ below: $$ \mathbf{U}_2=\left(\delta_{i,j}-\frac{n+j-i_1-1}{(n+j-i_1)(n+j-i_2)} \,\delta_{i,j-1}1\hspace{-.27em}\mbox{\rm l}_{\{j\ge 2\}}\right)_{\!0\le i,j\le n-1}\!. 
$$ In a recursive manner, we easily see that we can construct a sequence of matrices $\mathbf{A}_k, \mathbf{U}_k$, $k\in\{1,2,\dots,n-1\}$, such that $\mathbf{A}_k=\mathbf{A}_{k-1}\mathbf{U}_k$ where $\mathbf{U}_k$ is the triangular matrix with a diagonal made of~$1$ below: $$ \mathbf{U}_k=\left(\delta_{i,j}-\frac{(n+j-i_1-1)\dots(n+j-i_{k-1}-1)}{(n+j-i_1) \dots(n+j-i_k)}\,\delta_{i,j-1}1\hspace{-.27em}\mbox{\rm l}_{\{j\ge k\}}\right)_{\!0\le i,j\le n-1} $$ and \begin{align*} \lqn{\mathbf{A}_k =\left(\begin{matrix}\\[-1.8ex] \!\vphantom{\begin{pmatrix}\frac{1}{(n-i)!}\end{pmatrix}_{i\in I\atop j\in\{k+1,\dots,n-1\}}} \begin{pmatrix} \displaystyle\frac{1}{(n-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; & \begin{pmatrix} \displaystyle\frac{i-i_1}{n+1-i_1}\,\frac{1}{(n+1-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; & \cdots \end{matrix}\right. \hspace{-15.4em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} \hspace{12.5em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{19.4em} \hspace{-17.7em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} } & \hspace{-4.8em} \begin{pmatrix} \displaystyle\frac{(i-i_1)\dots(i-i_{k-1})}{(n+k-1-i_1)\dots(n+k-1-i_{k-1})} \,\frac{1}{(n+k-1-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; \hspace{-27.55em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} \\ & \hspace{-4.8em} \left.\begin{matrix}\\[-1.8ex]\begin{pmatrix} \displaystyle\frac{(i-i_1)\dots(i-i_k)}{(n+j-i_1)\dots(n+j-i_k)}\,\frac{1}{(n+j-i)!} \end{pmatrix}_{\hspace{-3.4em}i\in I\atop \!\!j\in\{k,\dots,n-1\}} \end{matrix}\right)\!. 
\hspace{-24.9em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} \end{align*} We finally obtain, since all the $U_k$, $k\in\{1,2,\dots,n-1\}$, are regular, that $$ \mathbf{A}_0=\mathbf{A}_{n-1}\mathbf{U}_{n-1}^{-1}\dots\mathbf{U}_1^{-1} =\mathbf{L}\mathbf{U} $$ with $\mathbf{U}=\mathbf{U}_{n-1}^{-1}\dots\mathbf{U}_1^{-1}$ and \begin{align*} \mathbf{L}=\mathbf{A}_{n-1}= \left(\begin{pmatrix} \displaystyle\frac{1}{(n-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; \begin{pmatrix} \displaystyle\frac{i-i_1}{n+1-i_1}\,\frac{1}{(n+1-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I}\;\; \,\cdots\right. \hspace{-15.5em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array} \hspace{12.48em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{19.6em} \hspace{-17.9em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{19em} \\ \left.\;\;\begin{pmatrix} \displaystyle\frac{(i-i_1)\dots(i-i_{n-1})}{(2n-1-i_1)\dots(2n-1-i_{n-1})} \,\frac{1}{(2n-1-i)!}\end{pmatrix}_{\!\scriptscriptstyle i\in I} \right)\!. \hspace{-24em}\begin{array}{c}\\[-3.5ex]\vdots\\[-1.5ex]\vdots\end{array}\hspace{28.1em} \end{align*} It is clear that the matrices $\mathbf{U}$ and $\mathbf{L}$ are triangular and regular, and then $\mathbf{A}_0$ (and $\mathbf{A}$) is also regular. Moreover, the inverse of $\mathbf{A}_0$ can be computed as $$ \mathbf{A}_0^{-1}=\mathbf{U}^{-1}\mathbf{L}^{-1} =\mathbf{U}_1\dots \mathbf{U}_{n-1}\mathbf{L}^{-1}. $$ The proof of Lemma~\ref{hermite} is finished. \qed \end{proof} \end{document}
\begin{document} \author{Andrzej Grudka} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'{n}, Poland} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'{n}sk, 80-952 Gda\'{n}sk, Poland} \affiliation{National Quantum Information Centre of Gda\'{n}sk, 81-824 Sopot, Poland} \author{Pawe{\l} Kurzy\'nski} \email{[email protected]} \affiliation{Faculty of Physics, Adam Mickiewicz University, 61-614 Pozna\'{n}, Poland} \title{Is There Contextuality for a Single Qubit?} \date{May 1, 2007} \pacs{03.65.Ta, 03.65.Ud, 03.67.-a} \begin{abstract} Cabello and Nakamura have shown [A. Cabello, Phys. Rev. Lett. {\bf 90}, 190401 (2003)] that the Kochen-Specker theorem can be applied to two-dimensional systems, if one uses Positive Operator-Valued Measures. We show that the contextuality in their models is not of the Kochen-Specker type, but is rather a result of not keeping track of the whole system on which the measurement is performed. This is connected to the fact that there is no one-to-one correspondence between the POVM elements and projectors on the extended Hilbert space, and the same POVM element has to originate from two different projectors when used in Cabello's and Nakamura's models. Moreover, we propose a hidden-variable formulation of the above models. \end{abstract} \maketitle For a long time there has been a debate whether a qubit is a truly quantum system \cite{van Enk}. Although it may exist in a superposition of two orthogonal states, it does not reveal the typical quantum oddities. The Kochen-Specker (KS) \cite{KS} and Gleason \cite{Gleason} theorems are valid only in Hilbert spaces of dimension at least three, and the Bell theorem \cite{Bell1} applies to composite systems. Finally, and most importantly, there is a hidden-variable model describing every von Neumann measurement on a two-level quantum system \cite{Bell}.
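One simple construction of such a hidden-variable model (in the spirit of the models of Bell and of Kochen and Specker; the specific response function below is our illustrative choice, not taken from the paper) assigns a hidden variable $\lambda$ uniform on $[-1,1]$ and outputs $\mathrm{sign}(\vec m\cdot\vec n+\lambda)$ for a measurement along $\vec n$ on a state with Bloch vector $\vec m$; this reproduces $\langle\vec\sigma\cdot\vec n\rangle=\vec m\cdot\vec n$. A minimal Monte Carlo sketch:

```python
import random

def hv_outcome(m_dot_n, lam):
    # deterministic response function for measuring sigma.n on a state
    # with Bloch vector m, given the hidden variable lam
    return 1 if m_dot_n + lam >= 0 else -1

def hv_expectation(m_dot_n, trials=100_000, seed=0):
    # lam uniform on [-1, 1]: P(outcome = +1) = (1 + m.n)/2, so the
    # model reproduces the quantum expectation value m.n exactly
    rng = random.Random(seed)
    return sum(hv_outcome(m_dot_n, rng.uniform(-1, 1))
               for _ in range(trials)) / trials

for x in (0.0, 0.3, -0.8):
    print(round(hv_expectation(x), 2))  # close to 0.0, 0.3, -0.8
```

Since $P(\lambda\ge -\vec m\cdot\vec n)=(1+\vec m\cdot\vec n)/2$, the outcome statistics of every von Neumann qubit measurement are recovered without any contextuality.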
Only recently have special versions of the KS and Gleason theorems been presented for a single qubit. This was done by Cabello and Nakamura (CN) \cite{Cabello} and by Busch \cite{Busch}, respectively. The authors of both papers used Positive Operator-Valued Measures (POVMs) to achieve their goals. The same year, Aravind proposed how to generalize the CN method to obtain contextual POVMs for Hilbert spaces of arbitrary dimension \cite{Aravind}. The essence of contextuality is that if an observable A is measured together with an observable B with which it commutes, then it gives a different outcome than if it is measured with an observable C with which it also commutes, and this fact has severe consequences when one wants to describe quantum mechanics within the framework of hidden-variable models. If an outcome of a measurement of an observable were preassigned before the measurement, it would have to depend on the choice of the other observables co-measured with the observable of interest. In the original KS theorem A, B and C are represented by projectors, i.e. by operators with eigenvalues 0 and 1, which makes them natural yes-no operators. According to quantum mechanics, from a set of mutually orthogonal projectors making up a measurement, exactly one brings the value 1 and the others bring 0. The goal of the KS theorem is to show that the pre-assignment of outcomes to a group of measurements inevitably leads to at least one measurement wrongly assigned. There is no contextuality in von Neumann measurements for a two-dimensional system (a qubit). This is because the choice of the first projector automatically defines the second one, and there is no freedom in choosing which one to measure together with the first projector. However, this freedom is restored if one uses POVM measurements instead of von Neumann ones. One can have as many POVM elements as one wants.
The problem is that, while for von Neumann measurements it is natural to ask whether the outcome of a measurement of an observable is somehow encoded in the state of the system prior to the measurement, for a POVM it is not that simple. To perform a POVM one has to measure an observable of a composite system --- the qubit and an auxiliary quantum system (an ancilla) --- thus fixing the outcome of a POVM prior to the measurement corresponds to fixing the outcome of the measurement of the observable on an extended Hilbert space. The important thing is that one should not assume that the outcome of a POVM is encoded in the qubit alone. The projection postulate states that an act of a measurement defines the post-measurement state of the system, therefore successive measurements of the same type should bring the same outcome. This is true for repeated von Neumann measurements, because the second measurement of the system reveals the same outcome as the first one. On the other hand, POVMs are not repeatable and the second measurement may bring a different outcome than the first one. Even worse, in order to perform the same POVM one has to reset the state of the ancilla, which results in the preparation of a new state of the whole system. This is another example which shows that it is hard to speak of the KS theorem holding for POVMs. In this Letter we show that the CN model is contextual because it does not give the complete information about the measurement. More precisely, the same POVM element acting on the original qubit can correspond to different projectors on the extended Hilbert space. We also give an example of a CN POVM which can be described by a non-contextual hidden-variable model of the system and the ancilla. \begin{figure} \caption{The structure of CN POVMs. Twenty vertices of Cabello's dodecahedron (top) and six vertices of Nakamura's hexagon (bottom).
\label{f1}} \end{figure} Let us now briefly describe the contextual set of measurements for a qubit proposed by Cabello. His construction is based on the symmetry of a dodecahedron enclosed in the Bloch sphere (see Fig.\ref{f1} top). There are twenty POVM elements pointing from the center of the dodecahedron at its vertices. These elements appear in $\pm$ pairs pointing at antipodal vertices. Each pair appears in exactly two of the five eight-element POVM measurements \begin{equation} \begin{array}[t]{llllllllll} \{ \varepsilon_{A_+}, & \varepsilon_{A_-}, & \varepsilon_{C_+}, & \varepsilon_{C_-}, & \varepsilon_{I_+}, & \varepsilon_{I_-}, & \varepsilon_{J_+}, & \varepsilon_{J_{-}}~ \}, \\ \{ \varepsilon_{A_+}, & \varepsilon_{A_-}, & \varepsilon_{D_+}, & \varepsilon_{D_-}, & \varepsilon_{G_+}, & \varepsilon_{G_-}, & \varepsilon_{H_+}, & \varepsilon_{H_-} \}, \\ \{ \varepsilon_{B_+}, & \varepsilon_{B_-}, & \varepsilon_{D_+}, & \varepsilon_{D_-}, & \varepsilon_{F_+}, & \varepsilon_{F_-}, & \varepsilon_{J_+}, & \varepsilon_{J_{-}}~ \}, \\ \{ \varepsilon_{B_+}, & \varepsilon_{B_-}, & \varepsilon_{E_+}, & \varepsilon_{E_-}, & \varepsilon_{H_+}, & \varepsilon_{H_-}, & \varepsilon_{I_+}, & \varepsilon_{I_{-}}~ \}, \\ \{ \varepsilon_{C_+}, & \varepsilon_{C_-}, & \varepsilon_{E_+}, & \varepsilon_{E_-}, & \varepsilon_{F_+}, & \varepsilon_{F_-}, & \varepsilon_{G_+}, & \varepsilon_{G_-} \}. \end{array} \label{e1} \end{equation} All elements are of the form \begin{equation} \varepsilon_{i}=\frac{1}{4}|\psi_i\rangle\langle \psi_i|. \label{e2} \end{equation} The proof of the KS theorem goes as follows. By assigning 1 to any two elements which do not occur together, we assign values to four sets --- the remaining elements in these sets have to be 0. Now, the fifth set is made of elements which have already been assigned 0, since each element occurs in exactly two sets. Thus one cannot assign a value 1 to any POVM element in the fifth set. This ends the proof.
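The counting argument can also be checked by exhaustive search: since each element occurs in exactly two of the five measurements, the total number of 1's over all measurements is even, and can never equal the required five. A brute-force sketch (plain Python, using the pair labels of the five measurements above):

```python
from itertools import product

# the five eight-element POVMs: letters name the +/- pairs they contain
pair_sets = ["ACIJ", "ADGH", "BDFJ", "BEHI", "CEFG"]
povms = [[p + sign for p in members for sign in "+-"] for members in pair_sets]

# every one of the 20 elements occurs in exactly two of the five measurements
occurrences = {}
for povm in povms:
    for e in povm:
        occurrences[e] = occurrences.get(e, 0) + 1
assert len(occurrences) == 20 and set(occurrences.values()) == {2}

# try every way of picking the element valued 1 in each measurement
# (8^5 = 32768 cases) and demand consistency: the picked set of elements
# must meet every measurement exactly once
consistent = any(
    all(sum(e in chosen for e in povm) == 1 for povm in povms)
    for choice in product(*povms)
    for chosen in [set(choice)]
)
print(consistent)  # False: no non-contextual 0/1 assignment exists
```

The same check, with three four-element measurements, rules out an assignment for Nakamura's hexagon model as well.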
Nakamura proposed a more economical proof using the symmetry of a regular hexagon (see Fig.~\ref{f1}, bottom). Now there are only three $\pm$ pairs grouped in three POVM measurements \begin{equation} \begin{array}[t]{llllllll} & \{ \varepsilon_{A_+}, & \varepsilon_{A_-}, & \varepsilon_{B_+}, & \varepsilon_{B_-} \}, \\ & \{ \varepsilon_{A_+}, & \varepsilon_{A_-}, & \varepsilon_{C_+}, & \varepsilon_{C_-} \}, \\ & \{ \varepsilon_{B_+}, & \varepsilon_{B_-}, & \varepsilon_{C_+}, & \varepsilon_{C_-} \}, \end{array} \label{e3} \end{equation} and all elements are given by \begin{equation} \varepsilon_{i}=\frac{1}{2}|\psi_i\rangle\langle \psi_i|. \label{e4} \end{equation} By assigning 1 to any element we assign values to two sets, and therefore we assign 0 to all other elements. Since each element occurs exactly twice, there is always one set in which all elements are assigned 0. We now show that the contextuality in both models comes from the non-unique extension of POVM elements. More precisely, a POVM is implemented as a von Neumann measurement on an extended Hilbert space, and the restriction of the von Neumann projectors yields the POVM elements. As shown above, assignment of a physical reality to a POVM element corresponds to the assignment of a physical reality to a projector on the extended Hilbert space. The main point of our argument is that two different projectors can lead to the same POVM element, and thus the contextuality comes from not keeping track of the whole system. First, we briefly describe the relation between POVM and von Neumann measurements. In order to perform a POVM one extends the Hilbert space by adding an ancilla and performs a von Neumann measurement on the higher-dimensional Hilbert space (see \cite{Peres} and references therein). The POVM elements $\varepsilon_{i}$ are then given by \begin{equation} \varepsilon_{i}=\text{Tr}_{A}\{(\rho_{A}\otimes {I})P_{i} \},
\label{e5} \end{equation} where $\rho_{A}$ is the state of the ancilla, $I$ is the identity on the qubit Hilbert space, $P_{i}$ is a von Neumann projector on the whole Hilbert space, and the right-hand side is traced over the ancilla. One can always assume that the qubit and the ancilla are unentangled, because the identity \begin{equation} \text{Tr}\{(U\rho U^{\dagger}) P_i\}=\text{Tr}\{\rho (U^{\dagger} P_i U)\}, \label{e6} \end{equation} shows that an entangling unitary merely corresponds to a different choice of projectors. Our task is to examine the relation between the projectors $P_i$ that generate the CN POVM measurements. More precisely, we want to know whether a POVM element that appears in two different sets corresponds to the same projector. For example, if the ancilla is a qubit in the state $|0\rangle$ and the measurement on the whole space is done in the Bell basis, two projectors will generate the element $\frac{1}{2}|0\rangle\langle0|$, whereas the remaining two will generate $\frac{1}{2}|1\rangle\langle1|$. We show that in the CN model at least one POVM element has to be extended to two different projectors. In order to prove this we put forward the following hypothesis: there is a one-to-one correspondence between the POVM elements and von Neumann projectors for each element in the CN model. We show that this hypothesis leads to a contradiction. Note that the above hypothesis includes the statement that the state of the ancilla is the same for all POVMs, because, as already shown, a change in the ancilla's state is equivalent to a change of projectors. We begin with Nakamura's model. For every POVM element $\varepsilon_{X_{\pm}}$ we define the corresponding projector $P_{X_{\pm}}$, so the measurements on the extended Hilbert space are given by \begin{equation} \begin{array}[t]{llllllll} & \{P_{A_+}, & P_{A_-}, & P_{B_+}, & P_{B_-}, & P_1\}, \\ & \{P_{A_+}, & P_{A_-}, & P_{C_+}, & P_{C_-}, & P_2\}, \\ & \{P_{B_+}, & P_{B_-}, & P_{C_+}, & P_{C_-}, & P_3\}.
\end{array} \label{e7} \end{equation} We have introduced additional projectors $P_1$, $P_2$ and $P_3$ in order to have a resolution of the identity. Because they should not contribute to the POVM, they must satisfy \begin{equation} \text{Tr}_{A}\{(\rho_{A}\otimes {I})P_{i} \}=0 \label{e8} \end{equation} for $i=1,2,3$. All projectors in each set are mutually orthogonal. From the first two sets of (\ref{e7}) we see that both $P_{A_+}$ and $P_{A_-}$ are orthogonal to $P_{B_+}$, $P_{B_-}$, $P_{C_+}$, $P_{C_-}$. Since the projectors in a set span the whole Hilbert space, the third set implies that $P_{A_+}$ and $P_{A_-}$ have to be contained in the range of $P_3$; but $P_3$ does not contribute to the POVM (by (\ref{e8})), so $P_{A_+}$ and $P_{A_-}$ cannot contribute to the POVM either. However, $\text{Tr}_{A}\{(\rho_{A}\otimes I)P_{A_{\pm}} \}=\varepsilon_{A_{\pm}}$, which is the required contradiction. The contradiction vanishes if for the same POVM element one uses different projectors in different sets. For example, if $\varepsilon_{B_{\pm}}$ in the first POVM arose from $P_{B_{\pm}}^1$ and in the second POVM from $P_{B_{\pm}}^2$, there would be no contradiction in Nakamura's case. However, this change removes the contextuality of the sets (\ref{e7}). A similar contradiction can be obtained for Cabello's model. We introduce five projectors $P_1,P_2,\dots,P_5$ which should have the property (\ref{e8}).
The measurements on the extended Hilbert space give: \begin{equation} \begin{array}[t]{lllllllll} \{P_{A_+}, & P_{A_-}, & P_{C_+}, & P_{C_-}, & P_{I_+}, & P_{I_-}, & P_{J_+}, & P_{J_-}, & P_1\}, \\ \{P_{A_+}, & P_{A_-}, & P_{D_+}, & P_{D_-}, & P_{G_+}, & P_{G_-}, & P_{H_+}, & P_{H_-}, & P_2 \}, \\ \{P_{B_+}, & P_{B_-}, & P_{D_+}, & P_{D_-}, & P_{F_+}, & P_{F_-}, & P_{J_+}, & P_{J_-}, & P_3 \}, \\ \{P_{B_+}, & P_{B_-}, & P_{E_+}, & P_{E_-}, & P_{H_+}, & P_{H_-}, & P_{I_+}, & P_{I_-}, & P_4 \}, \\ \{P_{C_+}, & P_{C_-}, & P_{E_+}, & P_{E_-}, & P_{F_+}, & P_{F_-}, & P_{G_+}, & P_{G_-}, & P_5 \}. \end{array} \label{e9} \end{equation} From the first two sets we see that $P_{A_+}$ and $P_{A_-}$ are orthogonal to $P_{C_+}$, $P_{C_-}$, $P_{I_+}$, $P_{I_-}$, $P_{J_+}$, $P_{J_-}$, $P_{D_+}$, $P_{D_-}$, $P_{G_+}$, $P_{G_-}$, $P_{H_+}$, $P_{H_-}$. From the last three sets we can conclude that $P_{A_+}$ and $P_{A_-}$ have to be confined in each of the three subspaces spanned by \begin{equation} \begin{array}{l} P_{B_+} + P_{B_-} + P_{F_+} + P_{F_-} + P_3, \\ P_{B_+} + P_{B_-} + P_{E_+} + P_{E_-} + P_4, \\ P_{E_+} + P_{E_-} + P_{F_+} + P_{F_-} + P_5. \end{array} \label{e10} \end{equation} From the last three sets we can also conclude that \begin{equation} \begin{array}{l} P_{B_+} + P_{B_-} \perp P_{F_+} + P_{F_-}, \\ P_{B_+} + P_{B_-} \perp P_{E_+} + P_{E_-}, \\ P_{E_+} + P_{E_-} \perp P_{F_+} + P_{F_-}, \end{array} \label{e11} \end{equation} thus $P_{A_+}$ and $P_{A_-}$ have to be confined in the subspace spanned by $P_3 + P_4 + P_5$, which brings us again to a contradiction. The above reasoning shows that the contextuality in CN POVMs is a result of not keeping track of the ancilla, i.e. if one considers a qubit and an ancilla together, the sets of projectors are not contextual, but if one forgets what happens to the ancilla, the projectors become contextual POVMs. 
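The relation (\ref{e5}) between projectors on the extended space and POVM elements can be checked directly in the Bell-basis example mentioned earlier (ancilla prepared in $|0\rangle$, joint measurement in the Bell basis). A minimal numerical sketch (our illustration, not part of the original argument; the helper names are ours):

```python
import numpy as np

def povm_from_projector(P, rho_A):
    """epsilon = Tr_A{(rho_A (x) I) P}, Eq. (5): partial trace over the
    ancilla (first tensor factor) of P weighted by the ancilla state."""
    W = np.kron(rho_A, np.eye(2)) @ P
    # partial trace over the ancilla = sum of the two diagonal 2x2 blocks
    return W[:2, :2] + W[2:, 2:]

def ket(*amps):
    return np.array(amps, dtype=complex) / np.sqrt(2)

# Bell basis on ancilla (x) qubit
bell = [ket(1, 0, 0, 1), ket(1, 0, 0, -1),   # |Phi+>, |Phi->
        ket(0, 1, 1, 0), ket(0, 1, -1, 0)]   # |Psi+>, |Psi->
projectors = [np.outer(b, b.conj()) for b in bell]

rho_A = np.diag([1.0, 0.0])                  # ancilla prepared in |0><0|
elements = [povm_from_projector(P, rho_A) for P in projectors]

# |Phi+->  both yield (1/2)|0><0|;  |Psi+->  both yield (1/2)|1><1|
half0 = 0.5 * np.diag([1.0, 0.0])
half1 = 0.5 * np.diag([0.0, 1.0])
assert np.allclose(elements[0], half0) and np.allclose(elements[1], half0)
assert np.allclose(elements[2], half1) and np.allclose(elements[3], half1)
```

Two distinct orthogonal projectors thus restrict to one and the same POVM element on the qubit, which is exactly the non-uniqueness exploited above.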
One may wonder if it is the same kind of contextuality as the one considered by KS (for different types of contextuality see the paper by Spekkens \cite{Spekk}). The problem is analogous to the one we present below. Consider two yes-no questions: {\em 1) Is it raining in Poland?} {\em 2) Is it raining in the USA?} It is obvious that the corresponding two answers may be different, but if we erase the ends of both questions we will have twice: {\em Is it raining?} If the answers to the questions are assigned before the erasure, we can have the same question with two different answers. Let us now present an example of a CN POVM measurement for which one can construct a non-contextual hidden-variable model. Eqs. (\ref{e2}) and (\ref{e4}) define POVM elements as projectors multiplied by the same constant ($1/2$ for Nakamura's model and $1/4$ for Cabello's model). Since $\varepsilon_{X_{+}}$ is orthogonal to $\varepsilon_{X_{-}}$, without the constant the pair would give a von Neumann measurement on a qubit. Because of this symmetry, CN POVM measurements may be implemented by performing a von Neumann measurement on the ancilla and then, depending on the outcome, performing one of two von Neumann measurements on the qubit in Nakamura's case, or one of four in Cabello's case. The outcome of the first measurement defines which $\pm$ pair is going to be measured next. It is enough to prepare the ancilla in the superposition \begin{equation} |\varphi\rangle=\frac{1}{\sqrt{N}}\sum_{i=0}^{N-1}|i\rangle, \label{e12} \end{equation} where $\{|i\rangle\}$ is an orthonormal basis and $N=2$ for Nakamura's model ($N=4$ for Cabello's model), and then to measure it in the basis $\{ |i\rangle \}$. There is always a hidden-variable model of a measurement in one basis (in our case $\{ |i\rangle \}$). For example, let $\Lambda \in \{0,1\}$ ($\Lambda \in \{0,1,2,3\}$) be a discrete hidden variable for Nakamura's model (Cabello's model).
If the total description of the ancilla is given by $\Lambda$, the state after the measurement is $|\Lambda\rangle$ with probability $1/2$ ($1/4$) if $\Lambda$ is uniformly distributed. The next measurement, made on the qubit, can be described within the framework of the Bell model \cite{Bell}. The corresponding projectors on the qubit Hilbert space are \begin{equation} V=\frac{ I+\vec{v}\cdot\vec{\sigma}}{2}, \label{e13} \end{equation} where $\vec{v}$ is the real unit vector pointing in the $V$ direction on the Bloch sphere and $\vec{\sigma}$ is the vector of Pauli matrices. Similarly, we define the state of a qubit as \begin{equation} \label{e14} |n\rangle\langle n|=\frac{ I+\vec{n}\cdot\vec{\sigma}}{2}, \end{equation} with $\vec{n}$ being a unit vector. The hidden variable is another unit vector $\vec{m}$ which is uniformly distributed over all directions on the Bloch sphere. The outcome of the measurement of a projector (\ref{e13}) is given by \begin{eqnarray} \label{e15} v_n(V) &=& 1,\quad \text{if}~(\vec{m}+\vec{n})\cdot\vec{v} > 0, \nonumber \\ v_n(V) &=& 0,\quad \text{if}~(\vec{m}+\vec{n})\cdot\vec{v} < 0. \end{eqnarray} Hidden variables allow a pre-assignment of outcomes to both von Neumann measurements. In both cases the value 1 is assigned to exactly one element, whereas the remaining elements are assigned 0. The two successive measurements on the ancilla and the qubit are equivalent to one measurement on the whole system using von Neumann projectors of the form $| i \rangle \langle i |_A \otimes V_{\pm Q}$, thus we can multiply the values assigned to the outcomes of the first measurement by the values assigned to the outcomes of the second measurement. As a result we obtain a group of measurements which is definitely non-contextual, because in every measurement there is exactly one element assigned the value 1. It is instructive to show how the different extensions of POVM elements appear in Nakamura's model. Consider first the first set of POVM elements.
The outcome $|0\rangle$ of a measurement performed on the ancilla's state (which was prepared in an equally weighted superposition of $|0\rangle$ and $|1\rangle$) tells us that we should perform a von Neumann measurement on the qubit along the $A_{\pm}$ direction, while the outcome $|1\rangle$ tells us that we should perform a von Neumann measurement along the $B_{\pm}$ direction. For the second set, if we want to have the same extension of the POVM elements, the outcome $|0\rangle$ has to lead to a von Neumann measurement along the $A_{\pm}$ direction, and then the outcome $|1\rangle$ leads to a von Neumann measurement along $C_{\pm}$. However, in the third set the outcome $|1\rangle$ cannot lead to von Neumann measurements in both the $B_{\pm}$ and $C_{\pm}$ directions. One of them has to be conditioned on the outcome $|0\rangle$. Thus, for example, the POVM elements $\varepsilon_{B_{\pm}}$ have different extensions. We have shown that the contextuality in the Cabello-Nakamura model is not of the Kochen-Specker type. It is rather a result of not keeping track of the whole system on which the measurement is performed. We proved that in the CN model there is no one-to-one correspondence between POVM elements and projectors on the extended Hilbert space. This means that some POVM elements appearing in two different POVMs have to originate from two distinct von Neumann projectors. Therefore, there is nothing surprising in the fact that they bring different outcomes when measured in two different POVMs. Moreover, POVMs are probabilistic measurements in the sense that subsequent measurements of the same type may yield different outcomes. The basic assumption of all hidden-variable models is that an outcome of every measurement is pre-encoded in the state of the system, and since in our case the whole system is not only the qubit, it is incorrect to assume that the outcome of the POVM is encoded only in the qubit's state.
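For completeness, the Bell-model assignment (\ref{e15}) indeed reproduces the quantum probability $\langle n|V|n\rangle = (1+\vec{n}\cdot\vec{v})/2$ when $\vec{m}$ is averaged uniformly over the sphere. A small Monte Carlo sketch (our illustration; the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def outcome(m, n, v):
    """Bell-model value assignment, Eq. (15): 1 if (m+n).v > 0, else 0."""
    return 1 if np.dot(m + n, v) > 0 else 0

def random_unit_vectors(k):
    x = rng.normal(size=(k, 3))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

n = np.array([0.0, 0.0, 1.0])          # qubit state |n>
v = random_unit_vectors(1)[0]          # measured direction (arbitrary)

hidden = random_unit_vectors(200000)   # uniformly distributed hidden variable m
freq = np.mean([outcome(m, n, v) for m in hidden])

quantum = 0.5 * (1 + np.dot(n, v))     # Born probability <n|V|n>
assert abs(freq - quantum) < 0.01
```

The agreement follows because the projection $\vec{m}\cdot\vec{v}$ of a uniformly random unit vector is uniform on $[-1,1]$.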
One should rather speak of preassigning an outcome of a von Neumann measurement to the whole system --- a qubit and an ancilla (which is not considered in the CN model). It would be interesting to find contextual POVMs for a qubit with a one-to-one correspondence between their elements and von Neumann projectors, or to show that no contextual POVM on a qubit can be obtained without giving up this one-to-one correspondence. We conjecture that if the corresponding von Neumann projectors form a contextual group of measurements, one may obtain contextual one-to-one POVMs. We are very grateful to Micha{\l} Horodecki, Pawe{\l} Horodecki, Ryszard Horodecki, Micha{\l} Kurzy\'{n}ski, Marco Piani and Maria Spychalska for useful comments and discussions on the subject and the manuscript. We would like to thank our Referees for comments and suggestions. A. G. was supported by the State Committee for Scientific Research Grant No. 1 P03B 014 30 and by the European Commission through the Integrated Project FET/QIPC SCALA. P. K. was supported by the State Committee for Scientific Research Grant No. N202 008 31/0183. P.K. would also like to thank Tomasz {\L}uczak for support from subsidium MISTRZ (Foundation for Polish Science). \end{document}
\begin{document} \maketitle \begin{abstract} In this article we study the inverse of the period map for the family $\mathcal{F}$ of complex algebraic curves of genus 6 equipped with an automorphism of order 5. This is a family with two parameters, and it is fibred over a certain type of Del Pezzo surface. The period satisfies the hypergeometric differential equation for Appell's $F_1(\frac{3}{5},\frac{3}{5},\frac{2}{5},\frac{6}{5})$ of two variables after a certain normalization of the variable parameter. \\ \indent This differential equation and the family $\mathcal{F}$ were studied by G. Shimura (1964), T. Terada (1983, 1985), P. Deligne - G.D. Mostow (1986) and T. Yamazaki - M. Yoshida (1984). Recently M. Yoshida presented a new approach using the concept of configuration space. Based on their results we give the representation of the inverse of the period map in terms of Riemann theta constants. This is the first extension of the work of H. Shiga (1981) and K. Matsumoto (1989, 2000) to the co-compact case. \end{abstract} \setcounter{section}{-1} \section{Introduction} Let $\mathcal{F}$ be the family of algebraic curves given by \[ C(\lambda ):\ w^5=\prod_{i=1}^5(z-\lambda _i), \] where the parameter $\lambda = (\lambda_1, \lambda_2,\lambda_3, \lambda_4,\lambda_5)$ lives in the domain $(\mathbb{P}^1)^5 - \Delta$, where \[ \Delta = \{(\lambda_i) \in (\mathbb{P}^1)^5 \ : \ \lambda_i = \lambda_j \ \text{for some} \ i \ne j\}.
\] By putting $(\lambda _1, \lambda _2, \lambda _3)=(0,1,\infty )$ we can normalize $C(\lambda )$ in the form \[ C'(x,y) : w^5=z(z-1)(z-x)(z-y), \] where the parameter $(x,y)$ lives in \[ \Lambda = \{(x,y) \in \mathbb{C}^2 \ : \ xy(x-1)(y-1)(x-y) \ne 0\}. \] The period of $C'(x,y)$ \[ \eta (x,y)=\int_{\gamma }\frac{dz}{w^2} \] satisfies the system of differential equations \begin{equation} \label{appell} \begin{split} x(1-x) \frac{\partial^2 u}{\partial x^2} + y(1-x) \frac{\partial^2 u}{\partial x \partial y} + (\frac{6}{5} - \frac{11}{5}x) \frac{\partial u}{\partial x} - \frac{3}{5} y \frac{\partial u}{\partial y} - \frac{9}{25} u &= 0 \\ y(1-y) \frac{\partial^2 u}{\partial y^2} + x(1-y) \frac{\partial^2 u}{\partial x \partial y} + (\frac{6}{5} - 2 y) \frac{\partial u}{\partial y} - \frac{2}{5} x \frac{\partial u}{\partial x} - \frac{6}{25} u &= 0 \end{split} \end{equation} This is the hypergeometric system for Appell's $F_1(\frac{3}{5},\frac{3}{5},\frac{2}{5},\frac{6}{5};x,y)$. The dimension of the solution space is equal to 3. If $\lambda' = g \circ \lambda$ holds for some projective transformation $g \in \mathrm{PGL}_2(\mathbb{C})$, then we have the biholomorphic equivalence $C(\lambda) \cong C(\lambda')$. So we consider the quotient space \[ X(2,5)c = ((\mathbb{P}^1)^5 - \Delta) / \mathrm{PGL}_2(\mathbb{C}) \] as the parameter space for $\mathcal{F}$; it is biholomorphically equivalent to $\Lambda $. \par According to the work of T. Terada (\cite{Te}), P. Deligne - G.D. Mostow (\cite{DM}) and T. Yamazaki - M. Yoshida (\cite{YY}) we have the following properties: \begin{enumerate} \item Let $\{ \eta_1, \eta_2, \eta_3 \}$ be a basis of the solutions of (\ref{appell}). The image of the Schwarz map $(x,y)\mapsto [\eta_1(x,y):\eta_2(x,y):\eta_3(x,y)]\in {\mathbb P}^2$ is an open dense subset of a 2-dimensional ball ${\mathbb B}_2$.
\item The monodromy group for (\ref{appell}) is characterized as a certain congruence subgroup of the Picard modular group for $k={\mathbb Q}(e^{2\pi i/5})$. \item Let $\mathrm{S}_5$ be the symmetric group of permutations of $\{ \lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5 \}$; it has a natural action on $X(2,5)c$. There is a compactification $X(2,5)$ of $X(2,5)c$ such that $\mathrm{S}_5 \subset \mathrm{Aut}(X(2,5))$. Yoshida showed that $X(2,5)$ is a Del Pezzo surface of degree 5. \item We obtain a single-valued modular map on ${\mathbb B}_2$ as the inverse of the Schwarz map. \end{enumerate} By the so-called Picard principle we can reduce the period map for $\mathcal{F}$ to the Schwarz map for (\ref{appell}). So our study proceeds by the following steps. \par In the first four sections we give explicit realizations of the above properties (1)--(4): \par \par Section 1. We describe the parameter space $X(2,5)c$ for $\mathcal{F}$ and its compactification $X(2,5)$. We list certain divisors that become essential in our study. \par Section 2. We construct the period map for ${\mathcal F}$ and show how it reduces to the map $\Phi : X(2,5)c \rightarrow {\mathbb B_2}$. \par Section 3. We list a generator system of the monodromy group for $\Phi$ in terms of unitary reflections. \par Section 4. We observe the degeneration of the Schwarz map for (\ref{appell}). \par Section 5 is the main part of the article. There we study the zero values of the Riemann theta functions of genus 6 with characteristics $(a,b)\in (\frac{1}{10}\mathbb{Z})^6\times (\frac{1}{10}\mathbb{Z})^6$ (Theorem \ref{Gaussinverse}). These are considered to be a certain kind of automorphic form on ${\mathbb B}_2$. Many of the above theta constants identically vanish on ${\mathbb B}_2$. First we prove that only 25 among them are invariant under the action of the monodromy group. We show that they are not identically zero on $\mathbb{B}_2^A$.
Then we determine the vanishing locus on ${\mathbb B}_2$ of every theta constant in question. \par In Section 6 we state the main theorem (Theorem \ref{maintheorem}), which is the representation of $\Phi ^{-1}$ via the theta constants. As a direct consequence we obtain the representation of the inverse Schwarz map for the Gauss hypergeometric differential equation $E_{2,1}(\frac{1}{5}, \frac{2}{5}, \frac{4}{5})$. In this case the monodromy group is the arithmetic triangle group of co-compact type $\Delta (5,5,5)$, and it is the case mentioned by Shimura \cite{Sm}. As another application we give an explicit generator system for the graded ring of automorphic forms with respect to the unitary group $U(2,1;{\mathcal O}_k)$ over ${\mathcal O}_k$ (Theorem \ref{tmmodring}). \\ \section{The configuration space $X(2,5)$} \label{sectionconfig} Here we summarize the fundamental facts about $X(2,5)$. For precise arguments, see \cite[Chapter V]{Y2}. Let $[a:b]$ be a point on $\mathbb{P}^1$, and let $\lambda = b/a$ be its representative on $\mathbb{C} \cup \{\infty\}$. We always use the notation $\lambda_i \in \mathbb{P}^1$ in this sense. Let us consider five ordered distinct points on $\mathbb{P}^1$: \[ \lambda = (\lambda_1, \lambda_2,\lambda_3, \lambda_4,\lambda_5) \in (\mathbb{P}^1)^5 - \Delta, \] where $\Delta$ is the degenerate locus: \[ \Delta = \{(\lambda_i) \in (\mathbb{P}^1)^5 \ : \ \lambda_i = \lambda_j \ \text{for some} \ i \ne j\}. \] A projective transformation $g \in \mathrm{PGL}_2(\mathbb{C})$ acts on $(\mathbb{P}^1)^5$ as \[ g \cdot (\lambda_1, \cdots, \lambda_5) = (g(\lambda_1), \cdots, g(\lambda_5)). \] The configuration space $X(2,5)c$ is defined by the quotient space \[ X(2,5)c = ((\mathbb{P}^1)^5 - \Delta) / \mathrm{PGL}_2(\mathbb{C}).
\] It has a good compactification \[ X(2,5) = \overline{X(2,5)c} = ((\mathbb{P}^1)^5 - \Delta') / \mathrm{PGL}_2(\mathbb{C}), \] where \[ \Delta' = \{(\lambda_i) \in (\mathbb{P}^1)^5 \ : \ \lambda_i = \lambda_j = \lambda_k \ \text{for some} \ i \ne j \ne k \ne i\}. \] There exist ten lines on $X(2,5)$ of the form \[ L(ij) = \{\text{the orbit of the form $\lambda_i = \lambda_j$} \} / \mathrm{PGL}_2(\mathbb{C}) \cong \mathbb{P}^1. \] Notice that $L(ij) \cap L(jk) = \emptyset \ (i \ne j \ne k \ne i)$ by definition, and the degenerate locus $X(2,5) - X(2,5)c$ is just the union of these ten lines. $X(2,5)$ is isomorphic to the blow-up of $\mathbb{P}^2$ at four points. We can see the blow-down $\pi : X(2,5) \rightarrow \mathbb{P}^2$ in the following way. Let us specialize $\lambda_4 = 0, \ \lambda_5 = \infty$ and regard $[\lambda_1:\lambda_2:\lambda_3]$ as a point in $\mathbb{P}^2$; then we obtain the following correspondence: \begin{align*} P_1 = [1:0:0] = \pi( L(15)), \quad P_2 = [0:1:0] = \pi( L(25)), \\ P_3 = [0:0:1] = \pi( L(35)), \quad P_4 = [1:1:1] = \pi( L(45)), \end{align*} and \[ \pi (X(2,5)c) = \{[\lambda_1:\lambda_2:\lambda_3] \in \mathbb{P}^2 \ : \ \lambda_i \ne \lambda_j \ (i \ne j), \quad i,j = 1,2,3,4\}. \] \\ \indent For five distinct numbers $i,j,k,l,m$ in $\{1,2,3,4,5\}$, we define a divisor $D(ijklm)$ on $X(2,5)$ by \[ D(ijklm) = L(ij) + L(jk) + L(kl) + L(lm) + L(mi). \] Such a divisor is understood as a ``juzu sequence'' (see \cite{Y2}). A 5-juzu sequence $(ijklm)$ is the pentagon with vertices $i,j,k,l,m$ in this cyclic order. The divisor $D(ijklm)$ is given by the edges of this pentagon. \begin{figure}\label{Juzu} \end{figure} We identify $(ijklm)$ and $(imlkj)$ since $L(ij) = L(ji)$. There are twelve different divisors of this form. Let $H$ be a line on $\mathbb{P}^2$. As easily shown, $D(ijklm)$ is linearly equivalent to the divisor \[ 3 \pi^*H - L(15) - L(25) - L(35) - L(45).
\] By the general theory of Del Pezzo surfaces (for example, see \cite[Chapter 5]{F}), this is the anti-canonical class and is very ample. In fact, we have the following proposition by direct calculation. \vskip3mm \begin{prop} \label{propJ} Set \begin{align*} J(ijklm)(\lambda) = (\lambda_i - \lambda_j)(\lambda_j - \lambda_k) (\lambda_k - \lambda_l)(\lambda_l - \lambda_m)(\lambda_m - \lambda_i) \end{align*} for the twelve $(ijklm)$. Then the map \begin{align*} J \ : \ X(2,5) \longrightarrow \mathbb{P}^{11}, \quad J(\lambda) = [\cdots : J(ijklm)(\lambda) : \cdots] \end{align*} is an embedding. \end{prop} \begin{rem} Let us make the above notation for $J(ijklm)$ precise. By using the homogeneous coordinate $[a_i:b_i]$ for $\lambda_i \in \mathbb{P}^1$, we set $d(ij) = a_j b_i - a_i b_j$. So $\lambda_i - \lambda_j$ stands for $d(ij)$. The ratio $[J(ijklm):J(i'j'k'l'm')]$ defines a rational function on $X(2,5)$. \end{rem} We shall give the correspondence between these divisors and theta functions in a later section. \\ \\ \section{The family of pentagonal curves and the periods} \label{sectionperiod} Let us consider the algebraic curve \begin{align*} C_{\lambda} \ : \ y^5 = (x -\lambda_1)(x -\lambda_2)(x -\lambda_3)(x -\lambda_4)(x -\lambda_5), \cr \lambda = (\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5) \in (\mathbb{P}^1)^5 - \Delta. \end{align*} If $\lambda' = g \cdot \lambda$ holds for some $g \in \mathrm{PGL}_2(\mathbb{C})$, then we have a biholomorphic equivalence $C(\lambda) \cong C(\lambda')$. So we identify $C(\lambda)$ and $C(\lambda')$ in this case. Set $\mathcal{F} = \{ C_{\lambda} : \lambda \in X(2,5)c\}$. We regard $C_{\lambda}$ as a five-sheeted cyclic covering of $\mathbb{P}^1$ branched at the $\lambda_i$ via the projection \[ \pi \ : \ C_{\lambda} \longrightarrow \mathbb{P}^1, \quad (x,y) \mapsto x. \] By the Hurwitz formula, the genus of $C_{\lambda}$ is six.
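For the reader's convenience we recall the computation (assuming all $\lambda_i$ are finite). Each $\lambda_i$ is a total branch point: a single point with ramification index $5$ lies over it, while the covering is unramified over $x=\infty$ because the degree of the right-hand side is divisible by $5$. The Riemann--Hurwitz formula then gives
\[
2g(C_{\lambda}) - 2 = 5\bigl(2g(\mathbb{P}^1) - 2\bigr) + \sum_{i=1}^{5}(5-1) = -10 + 20 = 10,
\]
so that $g(C_{\lambda}) = 6$.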
We have the following basis of $\mathrm{H}^0(C_{\lambda},\Omega^1)$: \begin{align} \label{basiof1forms} \varphi_1 = \frac{\mathrm{d}x}{y^2}, \quad \varphi_2 = \frac{\mathrm{d}x}{y^3}, \quad \varphi_3 = \frac{x\mathrm{d}x}{y^3}, \quad \varphi_4 = \frac{\mathrm{d}x}{y^4}, \quad \varphi_5 = \frac{x\mathrm{d}x}{y^4}, \quad \varphi_6 = \frac{x^2\mathrm{d}x}{y^4}. \end{align} Let $\rho$ denote the automorphism of order five: \[ \rho \ : \ C_{\lambda} \longrightarrow C_{\lambda}, \quad (x,y) \mapsto (x,\zeta y) \qquad (\zeta = \exp(2 \pi \sqrt{-1}/5)) \] on $C_{\lambda}$. \begin{rem} Throughout this article $\zeta$ stands for $\exp(2 \pi \sqrt{-1}/5)$. \end{rem} Next, we construct a symplectic basis of $\mathrm{H}_1(C_{\lambda},\mathbb{Z})$. \\ Let $\lambda^0 = (\lambda_1^0,\lambda_2^0,\lambda_3^0, \lambda_4^0,\lambda_5^0) \in X(2,5)c$ be a real point such that $\lambda_1^0 < \cdots <\lambda_5^0$ and let $C_0$ be the corresponding curve. Take a point $x_0 \in \mathbb{P}^1$ such that $\mathrm{Im}(x_0) < 0$, and make line segments $l_i$ connecting $x_0$ and $\lambda_i$. Then $\Sigma = \mathbb{P}^1 - \cup l_i$ is simply connected and $\pi^{-1}(\Sigma)$ is isomorphic to $\Sigma \times \mathbb{Z} / 5 \mathbb{Z}$. Here, we choose the fiber coordinate $k \in \mathbb{Z} / 5 \mathbb{Z}$ such that $\rho(x,k) = (x,k+1)$. Let $\alpha(i,j)$ be the oriented arc from $\lambda_i$ to $\lambda_j$ in $\Sigma$. We obtain the following five oriented arcs $\alpha_k(i,j) \ (k = 1,\cdots,5)$ in $C_0$: \begin{align} \label{arc} \alpha_k(i,j) = (\alpha(i,j), k) \subset \Sigma \times \mathbb{Z} / 5 \mathbb{Z}. \end{align} We define cycles $\gamma_1,\ \gamma_2,\ \gamma_3$ on $C_0$ (Figure \ref{Fgcycle}) using this notation: \begin{equation} \label{gamma} \begin{split} \gamma_1 &= \alpha_1(1,2) + \alpha_2(2,1), \cr \gamma_2 &= \alpha_1(3,4) + \alpha_2(4,3), \cr \gamma_3 &= \alpha_1(1,3) + \alpha_2(3,4) + \alpha_3(4,2) + \alpha_2(2,1).
\end{split} \end{equation} \begin{figure}\label{Fgcycle} \end{figure} We set \begin{equation} \label{AB} \begin{split} &A_1 = \gamma_1, \quad A_2 = \gamma_2, \quad A_3 = \gamma_3, \quad A_4 = \rho^2(\gamma_1), \quad A_5 = \rho^2(\gamma_2), \quad A_6 = \rho^4(\gamma_3), \\ &B_1 = \rho(\gamma_1) + \rho^3(\gamma_1), \quad B_2 = \rho(\gamma_2) + \rho^3(\gamma_2), \quad B_3 = \rho(\gamma_3) + \rho^2(\gamma_3), \\ &B_4 = \rho^3(\gamma_1), \quad B_5 = \rho^3(\gamma_2), \quad B_6 = \rho(\gamma_3). \end{split} \end{equation} The intersection numbers of these cycles are given by \[ A_i \cdot A_j = B_i \cdot B_j = 0, \quad A_i \cdot B_j = \delta_{ij}. \] So $\{A_i,\ B_i \}$ is a symplectic basis of $\mathrm{H}_1(C_0,\mathbb{Z})$. Let $\lambda$ be a point on $X(2,5)c$, and take an arc $r$ from $\lambda^0$ to $\lambda$. Since the family $\mathcal{F}$ is locally trivial as a topological fiber space over $X(2,5)c$, by using this trivialization along $r$ we obtain the systems $\{\alpha_k(i,j)(\lambda)\}$, $\{\gamma_i(\lambda)\}$ and the symplectic basis $\{A_i(\lambda),\ B_i(\lambda) \}$ on $C_{\lambda}$. The relation (\ref{AB}) between $\{\gamma_i(\lambda)\}$ and $\{A_i(\lambda),\ B_i(\lambda) \}$ also holds. We note that $\{A_i(\lambda),\ B_i(\lambda) \}$ depend on the homotopy class of $r$. \\ Now, we consider the period matrix of $C_{\lambda}$: \begin{align*} \Pi(\lambda) = \Pi = (Z_1, Z_2) = \begin{pmatrix} \int_{A_1} \varphi_1 & \cdots & \int_{A_6} \varphi_1 & \int_{B_1} \varphi_1 & \cdots & \int_{B_6} \varphi_1 \\ \hdotsfor{6} \\ \int_{A_1} \varphi_6 & \cdots & \int_{A_6} \varphi_6 & \int_{B_1} \varphi_6 & \cdots & \int_{B_6} \varphi_6 \end{pmatrix}. \end{align*} The normalized period matrix $\Omega(\lambda) = \Omega = Z_1^{-1} Z_2$ belongs to the Siegel upper half space of degree 6: \[ \mathfrak{S}_6 = \{\Omega \in \mathrm{GL}_6(\mathbb{C}) \ : \ {}^t \Omega = \Omega,\ \mathrm{Im}(\Omega) \ \text{is positive definite} \}.
\] The automorphism $\rho$ acts on $\mathrm{H}^0(C_{\lambda},\Omega^1)$ and $\mathrm{H}_1(C_{\lambda},\mathbb{Z})$. So we have the representation matrices $R \in \mathrm{GL}_6(\mathbb{C})$ and $M \in \mathrm{GL}_{12}(\mathbb{Z})$ of $\rho$ with respect to the bases $\{\varphi_i\}$ and $\{A_i, B_i\}$, respectively. It holds that $R \Pi = \Pi M$. Put \begin{align} \label{sigma} M = \begin{pmatrix} {}^tD & {}^tB \\ {}^tC & {}^tA \end{pmatrix}, \quad \sigma = \begin{pmatrix} A & B \\ C & D \end{pmatrix}. \end{align} Then the matrix $\sigma$ belongs to the symplectic group \[ \mathrm{Sp}_{12}(\mathbb{Z}) = \{g \in \mathrm{GL}_{12}(\mathbb{Z}) \ : \ {}^tg J g = J \}, \quad J = \begin{pmatrix} 0 & I_6 \\ -I_6 & 0 \end{pmatrix} \] and it holds that \[\Omega = (A \Omega + B)(C \Omega + D)^{-1}. \] As easily shown, the $\varphi_i \ (i = 1, \cdots, 6)$ are eigenvectors of $\rho$ and we have \[ R = \begin{pmatrix} \zeta^3 &&&&& \\ & \zeta^2 &&&0& \\ && \zeta^2 &&& \\ &&& \zeta && \\ &0&&& \zeta & \\ &&&&& \zeta \end{pmatrix}. \] By the relation (\ref{AB}) of $A_i, B_i$, we have \begin{align}\label{piri} \Pi = (a,b,c,R^2 a,R^2 b,R^4 c,(R + R^3)a,(R + R^3)b,(R + R^2)c,R^3 a, R^3 b,R c), \end{align} where we denote \[a = {}^t(\int_{\gamma_1} \varphi_1, \cdots, \int_{\gamma_1}\varphi_6),\quad b = {}^t(\int_{\gamma_2} \varphi_1, \cdots, \int_{\gamma_2}\varphi_6),\quad c = {}^t(\int_{\gamma_3} \varphi_1, \cdots, \int_{\gamma_3}\varphi_6). \] According to (\ref{AB}), \[ \rho (A_1) =\rho (\gamma_1) = (\rho (\gamma_1) + \rho^3 (\gamma_1)) - \rho^3 (\gamma_1) = B_1 - B_4. \] In the same way, we can describe $\rho (A_2) , \cdots, \rho (B_6)$ in terms of $\{A_i, B_i\}$.
So we can determine $M$, and obtain \begin{equation}\label{matrixsigma} \sigma = \left( \begin{array}{cccccccccccc} -1&0&0&0&0&0& -1&0&0&0&0&0 \\ 0&-1&0&0&0&0& 0&-1&0&0&0&0 \\ 0&0&0&0&0&-1& 0&0&-1&0&0&-1 \\ -1&0&0&0&0&0& -1&0&0&-1&0&0 \\ 0&-1&0&0&0&0& 0&-1&0&0&-1&0 \\ 0&0&1&0&0&-1& 0&0&0&0&0&0 \\ 1&0&0&-1&0&0& 0&0&0&0&0&0 \\ 0&1&0&0&-1&0& 0&0&0&0&0&0 \\ 0&0&0&0&0&1& 0&0&0&0&0&0 \\ 0&0&0&1&0&0& 0&0&0&0&0&0 \\ 0&0&0&0&1&0& 0&0&0&0&0&0 \\ 0&0&0&0&0&0& 0&0&1&0&0&0 \\ \end{array} \right). \end{equation} Put \[ \eta(\lambda) = [\eta_1(\lambda):\eta_2(\lambda):\eta_3(\lambda)] \in \mathbb{P}^2, \quad \eta_1(\lambda) = \int_{\gamma_1} \varphi_1, \quad \eta_2(\lambda) = \int_{\gamma_2} \varphi_1, \quad \eta_3(\lambda) = \int_{\gamma_3} \varphi_1. \] These are multi-valued analytic functions of $\lambda$. Applying the Riemann positivity condition \[ (\int_{A_1} \varphi_1, \cdots, \int_{B_6} \varphi_1) \ J \ {}^t(\int_{A_1} \overline{\varphi}_1, \cdots, \int_{B_6} \overline{\varphi}_1) > 0 \] to (\ref{piri}), we obtain \[ |\eta_1|^2 + |\eta_2|^2 + \frac{1 - \sqrt{5}}{2} |\eta_3|^2 < 0. \] So, $\eta = (\eta_1, \eta_2, \eta_3)$ belongs to the complex ball \begin{align} \label{ballA} \mathbb{B}_2^A = \{\eta \in \mathbb{P}^2 \ : \ {}^t\bar{\eta}A\eta < 0\}, \quad A = \mathrm{diag}(1,1,\frac{1 - \sqrt{5}}{2}). \end{align} Next, we determine $\Omega$ explicitly. Write $a = (a_i), \ b = (b_i)$ and $c = (c_i)$. Then, the Riemann bilinear relation $\Pi J {}^t\Pi = 0$ induces the following equations: \[ c_2 = -(\zeta^2 + \zeta^3)(a_1 a_2 + b_1 b_2) / c_1, \quad c_3 = -(\zeta^2 + \zeta^3)(a_1 a_3 + b_1 b_3) / c_1. \] By substituting them for $Z_1, \ Z_2$ in $\Pi$ we can carry out the calculation of $\Omega = Z_1^{-1} Z_2$ (using a computer). Hence we have the following: \begin{lem}\label{Period} Let $\Delta = \eta_1^2 + \eta_2^2 - \zeta^3 (1 + \zeta) \eta_3^2$.
The period matrix $\Omega = (\Omega_{ij})$ is given by \begin{align*} \Omega_{11} &= (\zeta^3-1)(\eta_1^2 +(1+\zeta^3) \eta_2^2 +\eta_3^2)/\Delta, & \Omega_{44} &= -\zeta^2 (\eta_1^2 + \zeta^2 \eta_2^2 -(1+\zeta) \eta_3^2)/\Delta, \\ \Omega_{22} &= (\zeta^3-1)((1+\zeta^3) \eta_1^2 +\eta_2^2 + \eta_3^2)/\Delta, & \Omega_{55} &= -\zeta^2(\zeta^2\eta_1^2 +\eta_2^2 - (1+\zeta)\eta_3^2)/\Delta, \\ \Omega_{33} &= (\zeta^2 -1)(\eta_1^2 +\eta_2^2 - \zeta^3 \eta_3^2)/\Delta, & \Omega_{66} &= -\zeta^3( \eta_1^2 + \eta_2^2 -(1+\zeta^4)\eta_3^2)/\Delta, \\ \Omega_{12} &= (\zeta^3 - \zeta) \eta_1 \eta_2/\Delta, & \Omega_{45} &= (\zeta^4 - \zeta^2) \eta_1 \eta_2 / \Delta, \\ \Omega_{15} &= (\zeta^4 - \zeta) \eta_1 \eta_2 / \Delta, & \Omega_{24} &= (\zeta^4 -\zeta) \eta_1 \eta_2 / \Delta, \\ \Omega_{13} &= (1 - \zeta^2) \eta_1 \eta_3 / \Delta, & \Omega_{23} &= (1 - \zeta^2) \eta_2 \eta_3 / \Delta, \\ \Omega_{46} &= (\zeta^4 - \zeta) \eta_1 \eta_3 / \Delta, & \Omega_{56} &= (\zeta^4 - \zeta) \eta_2 \eta_3 / \Delta, \\ \Omega_{16} &= (\zeta^3 - \zeta) \eta_1 \eta_3 / \Delta, & \Omega_{26} &= (\zeta^3 - \zeta) \eta_2 \eta_3 / \Delta, \\ \Omega_{34} &= (1-\zeta^3) \eta_1 \eta_3 / \Delta, & \Omega_{35} &= (1-\zeta^3) \eta_2 \eta_3 / \Delta, \\ \Omega_{14} &= \zeta^3((1 + \zeta) \eta_1^2 + (1 + \zeta^3) \eta_2^2 + \eta_3^2) / \Delta, & \Omega_{25} &= \zeta^3((1 + \zeta^3) \eta_1^2 + (1 + \zeta) \eta_2^2 + \eta_3^2) / \Delta, \\ \Omega_{36} &= (\zeta + \zeta^2)(\eta_1^2 + \eta_2^2 - \zeta^3 (1 + \zeta^2) \eta_3^2) / \Delta . \end{align*} \end{lem} \vskip3mm Now we define our period map \[ \Phi \ : \ X(2,5)^{\circ} \longrightarrow \mathbb{B}_2^A, \quad \lambda \mapsto [\eta_1(\lambda) : \eta_2(\lambda) : \eta_3(\lambda)], \] which is multi-valued analytic. The above lemma says that the original period map $\lambda \mapsto \Omega(\lambda)$ factors as \[ X(2,5)^{\circ} \longrightarrow \mathbb{B}_2^A \longrightarrow \mathfrak{S}_6.
\] Throughout this paper, we denote matrices of the form in Lemma \ref{Period} by $\Omega(\eta)$. \\ \section{The monodromy group and reflections} The multi-valuedness of $\Phi$ induces a representation of the fundamental group $\pi_1(X(2,5)^{\circ})$ that is unitary with respect to $A$ in (\ref{ballA}). We call it the monodromy group of $\Phi$. The structure of our monodromy group was studied in \cite{YY}. Set \[ \Gamma = \{g \in \mathrm{GL}_3(\mathbb{Z}[\zeta]) \ : \ {}^t \bar{g} A g = A \}, \quad \Gamma(1-\zeta) = \{g \in \Gamma \ : \ g \equiv I_3 \ \mathrm{mod} \ 1-\zeta \}. \] The group $\Gamma$ acts on $\mathbb{B}_2^A$ (left action). \begin{tm}[T. Yamazaki, M. Yoshida \cite{YY}]\label{tmYY} (1) \ The monodromy group of the period map $\Phi$ coincides with $\Gamma(1-\zeta)$, and the quotient $\Gamma / (\pm I) \Gamma(1-\zeta)$ is isomorphic to the symmetric group $\mathrm{S}_5$. \\ (2) \ The quotient $\mathbb{B}_2^A / \Gamma(1-\zeta)$ is biholomorphically equivalent to the blow-up of $\mathbb{P}^2$ at four points. \end{tm} \begin{rem}[see \cite{YY}] There are ten $(-1)$--curves on $\mathbb{B}_2^A / \Gamma(1-\zeta)$, and $\mathrm{S}_5$ acts transitively on them. \end{rem} In \cite{Te} and \cite{YY} it is proved that $\Gamma$ and $\Gamma(1-\zeta)$ are reflection groups, and systems of generators are given there. We present these generators in a form adapted to our calculations in the later sections. \\ \indent Let us consider the reference point $\lambda^0 \in X(2,5)^{\circ}$ again. Now we define the half-way monodromy transformation $g_{12}$ induced from the permutation of $\lambda^0_1$ and $\lambda^0_2$.
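The defining condition of $\Gamma$ can likewise be tested numerically. The following sketch checks it for the sample element $\mathrm{diag}(\zeta,1,1)$ (chosen only for illustration), which has entries in $\mathbb{Z}[\zeta]$ and is congruent to $I_3$ modulo $1-\zeta$, since $\zeta - 1 \in (1-\zeta)$, so it even lies in $\Gamma(1-\zeta)$.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 5)                   # primitive 5th root of unity
A = np.diag([1.0, 1.0, (1 - np.sqrt(5)) / 2])   # Hermitian form of signature (2,1)

g = np.diag([zeta, 1.0, 1.0])                   # entries in Z[zeta], g ≡ I_3 mod (1 - zeta)

# defining condition of Gamma:  {}^t \bar{g} A g = A
assert np.allclose(g.conj().T @ A @ g, A)

# this sample element has order five
assert np.allclose(np.linalg.matrix_power(g, 5), np.eye(3))
```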
Let us consider a continuous arc $R_{12}$ starting from $\lambda^0$: \begin{align} \label{lambdadeformation} \lambda(t) = (\lambda_1(t), \lambda_2(t), \lambda_3^0,\lambda_4^0,\lambda_5^0), \quad (0 \leq t \leq 1) \end{align} such that (Figure \ref{Fghalf}) \[ \lambda_2(1) = \lambda_1^0, \quad \lambda_1(1) = \lambda_2^0, \qquad \mathrm{Im}(\lambda_1(t)) < 0 < \mathrm{Im}(\lambda_2(t)) \quad (0 < t < 1). \] \begin{figure}\label{Fghalf} \end{figure} Let $\eta(t) = \eta(\lambda(t))$ be the corresponding periods. Recall the definition (\ref{gamma}). It is apparent that $\gamma_2$ and $\gamma_3$ are invariant under this deformation process. Tracing $\gamma_1(t)$ for $0 \leq t \leq 1$, we get $\gamma_1(1) = - \rho(\gamma_1(0))$. Namely, \[ \left( \begin{array}{c} \eta_1(1) \\ \eta_2(1) \\ \eta_3(1) \end{array} \right) = \left( \begin{array}{ccc} - \zeta^3 &0&0 \\ 0&1&0 \\ 0&0&1 \end{array} \right) \left( \begin{array}{c} \eta_1(0) \\ \eta_2(0) \\ \eta_3(0) \end{array} \right). \] The matrix on the right-hand side belongs to $\Gamma$, and we denote it by $g_{12}$. We define $g_{23}, \ g_{34}, \ g_{45}$ in the same manner. Set \begin{align} \label{hij} h_{12} = (g_{12})^2, \quad h_{23} = (g_{23})^2, \quad h_{34} = (g_{34})^2, \\ h_{13} = (g_{23})^{-1}(g_{12})^2g_{23},\quad h_{14} = (g_{23}g_{34})^{-1}(g_{12})^2g_{23}g_{34}. \end{align} \begin{prop}[see \cite{Y1}] The monodromy group is generated by the $h_{ij}$ in (\ref{hij}). \end{prop} \vskip3mm Let $\mathrm{T}_{\alpha}$ be the reflection on $\mathbb{B}_2^A$ with respect to a root $\alpha$, \[ \mathrm{T}_{\alpha}(\eta) = \eta - (1 + \zeta^3) \frac{{}^t\bar{\alpha}A\eta}{{}^t\bar{\alpha}A\alpha} \alpha, \] and let $\mathrm{R}_{\beta}$ be the reflection on $\mathbb{B}_2^A$ with respect to a root $\beta$, \[ \mathrm{R}_{\beta}(\eta) = \eta - (1 - \zeta) \frac{{}^t\bar{\beta}A\eta}{{}^t\bar{\beta}A\beta} \beta.
\] Then we have the following. \begin{lem} Set \[ \alpha_{12} = (1,0,0), \quad \alpha_{23} = (\zeta^3,1,-(1+\zeta)), \quad \alpha_{34} = (0,1,0),\quad \alpha_{45} = (0,1,\zeta^3) \] and set \begin{align*} \beta_{12} = (1,0,0), \quad \beta_{13} = (1,-1,1+\zeta), \quad \beta_{14} = (1,\zeta^3,1+\zeta), \\ \beta_{23} = (\zeta^3,1,-(1+\zeta)), \quad \beta_{34} = (0,1,0). \end{align*} Then we have $g_{ij} = \mathrm{T}_{\alpha_{ij}}$ and $h_{kl} = \mathrm{R}_{\beta_{kl}}$. Moreover, $g_{ij}$ is of order ten and $h_{kl}$ is of order five. \end{lem} \begin{rem} The group $\Gamma'$ generated by $\{g_{ij}\}$ has a representation to $\mathrm{S}_5$. According to Theorem \ref{tmYY}, $\Gamma$ is generated by $\Gamma'$ and $\pm I$. \end{rem} \indent The deformation of the curve $C_{\lambda(t)}$ along $R_{12}$ in (\ref{lambdadeformation}) induces a symplectic basis $\{A_i(t), B_i(t)\}$ on it. So $\{A_i(1), B_i(1)\}$ is again a symplectic basis on $C_{\lambda^0}$. Hence we obtain a symplectic transformation \[ {}^t(B_1(1), \cdots, B_6(1),A_1(1), \cdots,A_6(1)) = \hat{g}_{12} {}^t(B_1(0), \cdots, B_6(0),A_1(0), \cdots,A_6(0)). \] For $\hat{g}_{12} = \begin{pmatrix} A&B\\ C&D\end{pmatrix}$, we have \[ \Omega(\eta(1)) = (A \Omega(\eta(0)) + B)(C \Omega(\eta(0)) + D)^{-1}. \] Recall that $R_{12}$ induces the change of cycles \[ (\gamma_1, \gamma_2, \gamma_3) \longrightarrow (-\rho(\gamma_1), \gamma_2, \gamma_3). \] Together with (\ref{AB}), we obtain: \begin{equation} \label{matrixg12} \hat{g}_{12} = \left( \begin{array}{cccccccccccc} 1&0&0&0&0&0& 1&0&0&0&0&0 \\ 0&1&0&0&0&0& 0&0&0&0&0&0 \\ 0&0&1&0&0&0& 0&0&0&0&0&0 \\ 1&0&0&0&0&0& 1&0&0&1&0&0 \\ 0&0&0&0&1&0& 0&0&0&0&0&0 \\ 0&0&0&0&0&1& 0&0&0&0&0&0 \\ -1&0&0&1&0&0& 0&0&0&0&0&0 \\ 0&0&0&0&0&0& 0&1&0&0&0&0 \\ 0&0&0&0&0&0& 0&0&1&0&0&0 \\ 0&0&0&-1&0&0& 0&0&0&0&0&0 \\ 0&0&0&0&0&0& 0&0&0&0&1&0 \\ 0&0&0&0&0&0& 0&0&0&0&0&1 \end{array} \right).
\end{equation} By the same consideration, we obtain the following: \begin{equation}\label{matrixg23} \hat{g}_{23} = \left( \begin{array}{cccccccccccc} 1&1&0&1&-1&1& 2&-1&-1&1&-1&1 \\ -1&1&-1&-1&1&0& -2&2&0&-1&1&-2 \\ 1&-1&2&0&0&-1& 0&-1&1&0&0&1 \\ 0&1&-1&1&0&1& 1&0&-1&1&0&0 \\ 0&0&0&-1&1&-1& -1&1&0&-1&1&-1 \\ 1&0&1&1&-1&1& 2&-2&0&1&-1&2 \\ -1&0&-1&0&1&1& 0&1&0&0&0&-1 \\ 1&-1&1&0&0&-1& 0&0&1&0&0&1 \\ -1&1&-2&-1&1&1& -1&2&0&0&1&-2 \\ 0&0&0&-1&0&-1& -1&1&0&0&1&-1 \\ 0&0&1&1&-1&0& 1&-1&0&0&0&1 \\ 1&-1&2&0&-1&-2& 0&-1&1&-1&0&2 \end{array} \right), \end{equation} \begin{equation}\label{matrixg34} \hat{g}_{34} = \left( \begin{array}{cccccccccccc} 1&0&0&0&0&0& 0&0&0&0&0&0 \\ 0&1&0&0&0&0& 0&1&0&0&0&0 \\ 0&0&1&0&0&0& 0&0&0&0&0&0 \\ 0&0&0&1&0&0& 0&0&0&0&0&0 \\ 0&1&0&0&0&0& 0&1&0&0&1&0 \\ 0&0&0&0&0&1& 0&0&0&0&0&0 \\ 0&0&0&0&0&0& 1&0&0&0&0&0 \\ 0&-1&0&0&1&0& 0&1&0&0&0&0 \\ 0&0&0&0&0&0& 0&0&1&0&0&0 \\ 0&0&0&0&0&0& 0&0&0&1&0&0 \\ 0&0&0&0&-1&0& 0&1&0&0&0&0 \\ 0&0&0&0&0&0& 0&0&0&0&0&1 \end{array} \right), \end{equation} \begin{equation}\label{matrixg45} \hat{g}_{45} = \left( \begin{array}{cccccccccccc} 1&0&0&0&0&0& 0&0&0&0&0&0 \\ 0&1&-1&0&1&0& 0&2&0&0&1&-1 \\ 0&0&1&0&-1&0& 0&0&1&0&0&1 \\ 0&0&0&1&0&0& 0&0&0&0&0&0 \\ 0&0&0&0&1&0& 0&1&1&0&1&0 \\ 0&0&0&0&-1&1& 0&-1&0&0&-1&1 \\ 0&0&0&0&0&0& 1&0&0&0&0&0 \\ 0&-1&1&0&0&-1& 0&0&0&0&0&0 \\ 0&0&-1&0&1&0& 0&1&0&0&0&-1 \\ 0&0&0&0&0&0& 0&0&0&1&0&0 \\ 0&0&0&0&-1&0& 0&-1&0&0&0&1 \\ 0&-1&1&0&0&-1& 0&-1&0&0&0&1 \end{array} \right). \end{equation} \\ \section{Degenerate loci} \label{sectiondl} According to Theorem \ref{tmYY}, P. Deligne--G.D. Mostow (\cite{DM}) and T. Terada (\cite{Te}), the period map $\Phi$ induces a biholomorphic equivalence \[ \tilde{\Phi} \ : \ X(2,5)^{\circ} \stackrel{\sim}{\longrightarrow} \mathbb{B}_2^{A\circ} / \Gamma(1 - \zeta), \] where $\mathbb{B}_2^{A\circ} = \mathrm{Im}\, \Phi$.
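The reflection formulas $\mathrm{T}_{\alpha}$, $\mathrm{R}_{\beta}$ and the identities $g_{12} = \mathrm{T}_{\alpha_{12}}$, $h_{12} = (g_{12})^2 = \mathrm{R}_{\beta_{12}}$ of the previous section lend themselves to a direct numerical check. The sketch below (NumPy; the data $\alpha_{12} = \beta_{12} = (1,0,0)$ are taken from the lemma) builds the reflections as $3\times 3$ matrices and verifies their orders.

```python
import numpy as np

zeta = np.exp(2j * np.pi / 5)
A = np.diag([1.0, 1.0, (1 - np.sqrt(5)) / 2])

def reflection(root, factor):
    # matrix of  eta -> eta - factor * ({}^t \bar{root} A eta / {}^t \bar{root} A root) root
    root = np.asarray(root, dtype=complex)
    denom = root.conj() @ A @ root
    return np.eye(3, dtype=complex) - factor * np.outer(root, root.conj() @ A) / denom

alpha12 = [1, 0, 0]
T = reflection(alpha12, 1 + zeta**3)   # T_{alpha_12} = g_12
R = reflection(alpha12, 1 - zeta)      # R_{beta_12} = h_12

assert np.allclose(T, np.diag([-zeta**3, 1, 1]))             # matches the matrix of g_12
assert np.allclose(T @ T, R)                                 # h_12 = (g_12)^2
assert np.allclose(np.linalg.matrix_power(T, 10), np.eye(3)) # g_12 has order ten
assert not np.allclose(np.linalg.matrix_power(T, 5), np.eye(3))
assert np.allclose(np.linalg.matrix_power(R, 5), np.eye(3))  # h_12 has order five
```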
Moreover we have the unique extension \[ \tilde{\Phi} \ : \ X(2,5) \stackrel{\sim}{\longrightarrow} \mathbb{B}_2^A / \Gamma(1 - \zeta), \] and $\cup L(ij) = X(2,5) - X(2,5)^{\circ}$ corresponds to $(\mathbb{B}_2^A - \mathbb{B}_2^{A\circ})/ \Gamma(1 - \zeta)$. Let $\pi$ be the projection $\mathbb{B}_2^A \rightarrow \mathbb{B}_2^A / \Gamma(1-\zeta)$, and let $\ell(ij)$ denote $\pi^{-1}(\tilde{\Phi}(L(ij)))$. \\ \indent Now we consider a degenerate curve \[ y^5 = (x-\lambda_1)^2(x-\lambda_3)(x-\lambda_4)(x-\lambda_5) \] with $(\lambda_1,\lambda_1,\lambda_3,\lambda_4,\lambda_5) \in L(12)$; putting $\lambda' = (\lambda_1,\lambda_3,\lambda_4,\lambda_5)$, we denote it by $C_{\lambda'}$. Let $\tilde{C}_{\lambda'}$ denote the non-singular model of $C_{\lambda'}$. It is a curve of genus 4. Let $\mathcal{F}_{12}$ be the totality of the $\tilde{C}_{\lambda'}$. For the parameter $(\lambda^0)' = (\lambda_1^0,\lambda_3^0,\lambda_4^0,\lambda_5^0)$ the cycle $\gamma_1$ vanishes on $\tilde{C}_{(\lambda^0)'}$, but $\gamma_2$ and $\gamma_3$ survive. So we can define $A_i, B_i \ (i = 2,3,5,6)$ on $\tilde{C}_{\lambda'}$ by the same argument as for $C_{\lambda}$. Hence we obtain a basis $\{A_i, B_i \} \ (i = 2,3,5,6)$ of $\mathrm{H}_1(\tilde{C}_{\lambda'}, \mathbb{Z})$. Putting $\lambda' = (0,1,t,\infty)$, the period \begin{align} \int_{\gamma} x^{-\frac{4}{5}} (x-1)^{-\frac{2}{5}} (x-t)^{-\frac{2}{5}}dx, \qquad (\gamma \in \mathrm{H}_1(\tilde{C}_{\lambda'}, \mathbb{Z})) \end{align} on $\tilde{C}_{\lambda'}$ gives a solution of the Gauss hypergeometric differential equation $E_{2,1}(\frac{1}{5}, \frac{2}{5}, \frac{4}{5})$: \begin{align} \label{Gauss(5,5,5)} t(1 -t) \frac{\mathrm{d}^2 u}{\mathrm{d} t^2} + (\frac{4}{5} - \frac{8}{5} t) \frac{\mathrm{d} u}{\mathrm{d} t} - \frac{2}{25} u = 0. \end{align} The corresponding monodromy group is the triangle group $\Delta(5,5,5)$ (see \cite{Ta}, \cite{Y1}, \cite[p.138]{Y2}).
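The hypergeometric equation with parameters $(a,b,c) = (\frac15,\frac25,\frac45)$ has last coefficient $-ab\,u = -\frac{2}{25}u$, and its series solution $F(a,b;c;t)$ can be tested directly. The following sketch (the sample point $t = 0.3$ is an arbitrary choice of ours) evaluates the truncated series and its termwise derivatives and verifies that the residual of the differential equation is negligible.

```python
from fractions import Fraction

a, b, c = Fraction(1, 5), Fraction(2, 5), Fraction(4, 5)

# coefficients of the Gauss hypergeometric series F(a,b;c;t) = sum coef[n] t^n
N = 80
coef = [Fraction(1)]
for n in range(N):
    coef.append(coef[-1] * (a + n) * (b + n) / ((c + n) * (n + 1)))

t = 0.3   # sample point inside the unit disc (illustration only)
u   = sum(float(coef[n]) * t**n for n in range(N + 1))
du  = sum(n * float(coef[n]) * t**(n - 1) for n in range(1, N + 1))
ddu = sum(n * (n - 1) * float(coef[n]) * t**(n - 2) for n in range(2, N + 1))

# residual of  t(1-t) u'' + (4/5 - 8/5 t) u' - (2/25) u = 0
residual = t * (1 - t) * ddu + (4/5 - 8/5 * t) * du - (2/25) * u
assert abs(residual) < 1e-9
```

Exact rational coefficients keep the recurrence free of rounding; only the final evaluation uses floating point.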
Set \[ \mathbb{B}_1 = \{\eta \in \mathbb{B}_2^A \ : \ \eta_1 = 0\}; \] it is the mirror of the reflection $g_{12}$. Using the system $\{\gamma_2, \gamma_3 \}$, we define a multi-valued map \[ \Phi_{12} \ : \ L(12) \longrightarrow \mathbb{B}_1, \quad \lambda \mapsto [0: \eta_2(\lambda) : \eta_3(\lambda)]. \] It induces the restriction $\tilde{\Phi}|_{L(12)}$. In the same manner, we see that the image of $\tilde{\Phi}|_{L(ij)}$ lies in the mirror of the reflection $g_{ij}$. Suppose $\lambda \in L(12)$ and set $\eta = \eta(\lambda) = \Phi_{12}(\lambda)$. Putting $\eta_1 = 0$ in Lemma \ref{Period}, we see that \begin{align} \label{degeperiod} \Omega(\eta) = \begin{pmatrix} \Omega_{11} &\Omega_{14}\\ \Omega_{41} &\Omega_{44} \end{pmatrix} \oplus \Omega'(\eta), \quad \begin{pmatrix} \Omega_{11} &\Omega_{14}\\ \Omega_{41} &\Omega_{44} \end{pmatrix} = \tau_0 = \begin{pmatrix} \zeta - 1 &\zeta +\zeta^3 \\ \zeta +\zeta^3 & -\zeta^4 \end{pmatrix} \end{align} with a certain element $\Omega'(\eta) \in \mathfrak{S}_4$. Moreover, in the case $\eta_0 = [0:0:1] \in \ell(12) \cap \ell(34)$ we have \begin{align} \label{eta_0} \Omega(\eta_0) = \begin{pmatrix} \Omega_{11} &\Omega_{14}\\ \Omega_{41} &\Omega_{44} \end{pmatrix} \oplus \begin{pmatrix} \Omega_{22} &\Omega_{25}\\ \Omega_{52} &\Omega_{55} \end{pmatrix} \oplus \begin{pmatrix} \Omega_{33} &\Omega_{36}\\ \Omega_{63} &\Omega_{66} \end{pmatrix} = \tau_0 \oplus \tau_0 \oplus \tau_0. \end{align} We use the above matrix for the numerical evaluation of theta functions in a later section. \\ \\ \section{Theta functions} \subsection{Invariant theta characteristics} We recall basic facts on the Riemann theta functions.
For a characteristic $(a,b) \in (\mathbb{R}^g)^2$, the theta function $\Theta_{(a,b)}(z,\Omega)$ on $\mathbb{C}^g \times \mathfrak{S}_g$ is defined by the series \[ \Theta_{(a,b)}(z,\Omega) = \sum_{n \in \mathbb{Z}^g}\exp[\pi \sqrt{-1}\, {}^t(n + a)\Omega(n + a) + 2 \pi \sqrt{-1}\,{}^t(n + a)(z + b)]. \] These functions satisfy the following period relations for $m \in \mathbb{Z}^g$: \begin{align} \label{translate1} \Theta_{(a,b)}(z + m,\Omega) &= \exp(2 \pi \sqrt{-1}\,{}^tma) \Theta_{(a,b)}(z,\Omega), \end{align} \begin{align} \label{translate2} \Theta_{(a,b)}(z + \Omega m,\Omega) &= \exp(-\pi \sqrt{-1}\,{}^tm \Omega m -2 \pi \sqrt{-1}\,{}^tm(z + b)) \Theta_{(a,b)}(z,\Omega). \end{align} For the characteristics, we have \begin{align}\label{chformula} \Theta_{(a + n,b + m)}(z,\Omega) = \exp(2 \pi \sqrt{-1}\,{}^tam) \Theta_{(a,b)}(z,\Omega) \end{align} for $n, \ m \in \mathbb{Z}^g$, and \begin{align}\label{chformula2} \Theta_{(-a,-b)}(z,\Omega) = \Theta_{(a,b)}(z,\Omega). \end{align} The theta constants $\Theta_{(a,b)}(\Omega) = \Theta_{(a,b)}(0,\Omega)$ satisfy the following transformation formula (see \cite[p.~176]{Ig}) as functions on $\mathfrak{S}_g$. For $g = \begin{pmatrix} A&B\\ C&D\end{pmatrix} \in \mathrm{Sp}_{2g}(\mathbb{Z})$, set \begin{align} g\Omega &= (A \Omega + B)(C \Omega + D)^{-1}, \\ \label{gab} g(a,b) &= (Da - Cb,\ -Ba + Ab) + \frac{1}{2}( (C {}^tD)_0,\ (A {}^tB)_0), \\ \phi_{(a,b)}(g) &= -\frac{1}{2}({}^ta\,{}^tDBa - 2\,{}^ta\,{}^tBCb + {}^tb\,{}^tCAb) + \frac{1}{2}({}^ta\,{}^tD - {}^tb\,{}^tC)(A\,{}^tB)_0, \end{align} where $(A)_0$ stands for the diagonal vector of a matrix $A$. Then we have \begin{align}\label{formula} \Theta_{g(a,b)}(g\Omega) = \kappa(g)\exp(2 \pi \sqrt{-1}\,\phi_{(a,b)}(g))\, \mathrm{det}(C\Omega + D)^{\frac{1}{2}}\Theta_{(a,b)}(\Omega), \end{align} where $\kappa(g)$ is a certain 8th root of unity depending only on $g$.
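The period relations (\ref{translate1}) and (\ref{translate2}) are easy to test numerically from the defining series; the following sketch does so for $g = 1$ with sample data $\tau = i$, $(a,b) = (0.3, 0.7)$, $z = 0.25 + 0.1i$ (all chosen only for illustration), using a truncated sum.

```python
import cmath

def theta(a, b, z, tau, N=30):
    # truncated Riemann theta series with characteristic (a, b), genus g = 1
    s = 0
    for n in range(-N, N + 1):
        s += cmath.exp(1j * cmath.pi * (n + a)**2 * tau
                       + 2j * cmath.pi * (n + a) * (z + b))
    return s

a, b = 0.3, 0.7
tau = 1j
z = 0.25 + 0.1j
m = 2

# (translate1):  Theta_{(a,b)}(z + m) = exp(2 pi i m a) Theta_{(a,b)}(z)
lhs1 = theta(a, b, z + m, tau)
rhs1 = cmath.exp(2j * cmath.pi * m * a) * theta(a, b, z, tau)
assert abs(lhs1 - rhs1) < 1e-9 * abs(rhs1)

# (translate2):  Theta_{(a,b)}(z + tau) = exp(-pi i tau - 2 pi i (z + b)) Theta_{(a,b)}(z)
lhs2 = theta(a, b, z + tau, tau)
rhs2 = cmath.exp(-1j * cmath.pi * tau - 2j * cmath.pi * (z + b)) * theta(a, b, z, tau)
assert abs(lhs2 - rhs2) < 1e-8 * abs(rhs2)
```

With $\mathrm{Im}\,\tau > 0$ the terms decay like $e^{-\pi (\mathrm{Im}\,\tau) n^2}$, so the truncation error is far below the tolerances used.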
\begin{rem} \label{remtrans} By the definition, we have \[ \Theta_{(a,b)}(z,\Omega) = \exp(\pi \sqrt{-1}\, {}^ta\Omega a + 2\pi \sqrt{-1}\, {}^ta(z + b)) \Theta_{(0,0)}(z + \Omega a + b, \ \Omega), \] so we often identify a characteristic $(a,b) \in (\mathbb{R}^g)^2$ with $\Omega a + b \in \mathbb{C}^g$. For $\begin{pmatrix} A&B \\ C&D \end{pmatrix} \in \mathrm{Sp}_{2g}(\mathbb{Z})$, we have \[ \Omega' (Da - Cb) + (-Ba + Ab) = {}^t(C \Omega + D)^{-1}(\Omega a + b), \] where $\Omega' = (A \Omega + B)(C \Omega + D)^{-1}$. \end{rem} For these formulas, see \cite{Ig} and \cite{Mum}. \\ \\ \indent Henceforth we suppose that the characteristics $(a,b)$ satisfy $a,b \in (\frac{1}{10}\mathbb{Z})^6$. \begin{lem} \label{fixedlemma} Let $\Sigma$ be the matrix in (\ref{sigma}) and write $a = (a_i), \ b = (b_i)$. \\ (1) \ We have \[ \Sigma (a,b) \equiv (a,b) \ \mathrm{mod} \ \mathbb{Z} \] if and only if \begin{align*} 5 a_1 &\equiv \frac{1}{2}, & a_4 &\equiv a_1, & b_1 &\equiv -2 a_1, & b_4 &\equiv - a_1, \\ 5 a_2 &\equiv \frac{1}{2}, & a_5 &\equiv a_2, & b_2 &\equiv -2 a_2, & b_5 &\equiv - a_2, \\ 5 a_3 &\equiv \frac{1}{2}, & a_6 &\equiv a_3, & b_3 &\equiv -2 a_3, & b_6 &\equiv - a_3 \qquad \mathrm{mod} \ \mathbb{Z}. \end{align*} (2) \ Let $(a,b)$ be a characteristic with the above condition. Then we have \[ \hat{g} (a,b) \equiv (a,b) \ \mathrm{mod} \ \mathbb{Z} \quad \text{for all} \ g \in \Gamma(1-\zeta). \] \end{lem} \begin{proof} (1) \ Using the exact form (\ref{matrixsigma}), we can describe $\Sigma (a,b)$, and we deduce the assertion. \\ (2) \ The transformation $g(a,b)$ in (\ref{gab}) defines a group action of the symplectic group on $(\mathbb{R} / \mathbb{Z})^{2g}$ (see \cite{Ig}). We can check the equality for every member of the generating system $\{h_{ij} \}$ of $\Gamma(1-\zeta)$.
\end{proof} \vskip3mm \begin{defin} \label{defchara} Let $(a,b)$ be a characteristic satisfying the condition of Lemma \ref{fixedlemma} (1). Then we can put \begin{align}\label{charatype} a = \frac{1}{10}{}^t(a_1, a_2, a_3,a_1, a_2, a_3), \ b = \frac{1}{10}{}^t(-2a_1, -2a_2, -2a_3,-a_1, -a_2, -a_3). \end{align} Let $(a_1, a_2, a_3)$ denote this characteristic. We call $(a,b) = (a_1, a_2, a_3)$ ``$\Sigma$--invariant'' if $a_1, a_2, a_3$ are odd integers. For a characteristic of this type, we denote the zero locus of $\Theta_{(a_1,a_2,a_3)}$ on $\mathbb{B}_2^A$ by $\vartheta(a_1,a_2,a_3)$; \[ \vartheta(a_1,a_2,a_3) = \{\eta \in \mathbb{B}_2^A \ : \ \Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) = 0\}. \] \end{defin} \vskip3mm \begin{rem} By the transformation formula (\ref{formula}) and Lemma \ref{fixedlemma}, we see that \[ \Theta_{(a_1,a_2,a_3)}(g \Omega(\eta)) = (\text{a unit function}) \times \Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) \] for an invariant characteristic $(a_1,a_2,a_3)$ and $g \in \Gamma(1-\zeta)$. Hence if $\eta \in \vartheta(a_1,a_2,a_3)$, then the $\Gamma(1-\zeta)$--orbit of $\eta$ is contained in $\vartheta(a_1,a_2,a_3)$. \end{rem} \begin{lem} \label{vanish100} Let $(a_1, a_2, a_3)$ be a $\Sigma$--invariant characteristic. If $2 a_1^2 + 2 a_2^2 + a_3^2 \notin 5 \mathbb{Z}$, then $\vartheta(a_1,a_2,a_3) = \mathbb{B}_2^A$. Namely, $\Theta_{(a_1,a_2,a_3)}$ vanishes identically on $\mathbb{B}_2^A$. \end{lem} \begin{proof} We apply the transformation formula (\ref{formula}) for $g = \Sigma^4$. \\ To this end, we make some preparatory calculations. First, we get the explicit form of $g = \Sigma^4 = \begin{pmatrix} A&B\\ C&D\end{pmatrix}$ by using (\ref{matrixsigma}), and obtain \[ \phi_{(a_1,a_2,a_3)}(\Sigma^4) = \frac{1}{40}(2 a_1^2 + 2 a_2^2 + a_3^2). \] Using the explicit form of $\Omega(\eta)$ in Lemma \ref{Period}, we get \[ \det(C\Omega(\eta) + D) = 1 \] for all $\eta \in \mathbb{B}_2^A$ by a computer calculation.
By (\ref{chformula}), we may put \[ \Theta_{\Sigma^4(a_1,a_2,a_3)}(\Omega) = \exp[2 \pi \sqrt{-1}\, {}^t a m] \Theta_{(a_1,a_2,a_3)}(\Omega) \] for a certain $m \in \mathbb{Z}^6$. Returning to the explicit form of $\Sigma^4(a_1,a_2,a_3)$, we obtain $m$, and we check that $\exp[2 \pi \sqrt{-1}\, {}^t a m] = 1$ by a computer-aided calculation. Hence we have \[ \Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) = \kappa(\Sigma^4) \exp[\frac{1}{20}\pi \sqrt{-1}(2 a_1^2 + 2 a_2^2 + a_3^2)] \Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) \] for all $\eta \in \mathbb{B}_2^A$. This implies our assertion: since $\kappa(\Sigma^4)$ is an 8th root of unity, the scalar factor on the right-hand side is a 40th root of unity, and it differs from $1$ whenever $2 a_1^2 + 2 a_2^2 + a_3^2 \notin 5 \mathbb{Z}$. \end{proof} \vskip 3mm We consider odd integers $a_1,a_2,a_3$ modulo $10 \mathbb{Z}$. There exist 25 representatives of $\Sigma$--invariant characteristics $(a_1,a_2,a_3)$ satisfying the condition $2 a_1^2 + 2 a_2^2 + a_3^2 \in 5 \mathbb{Z}$: \begin{equation} \label{twelve1} \begin{split} (1,1,1)&, \ (1,1,9), \ (1,9,1), \ (9,1,1), \ (1,3,5), \ (1,7,5),\\ (3,1,5)&, \ (7,1,5), \ (3,3,3), \ (3,3,7),\ (3,7,3), \ (7,3,3), \end{split} \end{equation} and \begin{align}\label{twelve2} \begin{split} (9,9,9)&, \ (9,9,1), \ (9,1,9), \ (1,9,9), \ (9,7,5), \ (9,3,5), \\ (7,9,5)&, \ (3,9,5), \ (7,7,7), \ (7,7,3), \ (7,3,7), \ (3,7,7), \end{split} \end{align} and $(5,5,5)$. \begin{rem} (1) \ The characteristic $(5,5,5)$ is an odd half integer characteristic (see \cite{Mum}), hence $\Theta_{(5,5,5)}(\Omega)$ vanishes identically. \\ (2) \ By (\ref{chformula}) and (\ref{chformula2}), we see that $\Theta_{(a_1,a_2,a_3)}(\Omega)$ is a scalar multiple of $\Theta_{(b_1,b_2,b_3)}(\Omega)$ if $a_1 + b_1, a_2 + b_2,a_3 + b_3 \in 10 \mathbb{Z}$. So the system in (\ref{twelve1}) and the system in (\ref{twelve2}) are essentially the same. \end{rem} \begin{lem} \label{transitiv} Let $(a_1,a_2,a_3)$ be a member of the system (\ref{twelve1}) (equivalently, of (\ref{twelve2})).
The group $\Gamma$ acts transitively on the set of the twelve loci $\vartheta(a_1,a_2,a_3)$. \end{lem} \begin{proof} We have the explicit forms of the $\hat{g}_{ij}$ in (\ref{matrixg12}) -- (\ref{matrixg45}). Using them, we obtain \[ \hat{g}_{12}(a_1,a_2,a_3) \equiv (-a_1,a_2,a_3), \quad \hat{g}_{34}(a_1,a_2,a_3) \equiv (a_1,-a_2,a_3), \] \begin{center} \begin{tabular}{|c||c|c|} \hline $(a_1,a_2,a_3)$ & $\hat{g}_{23}(a_1,a_2,a_3)\equiv$ & $\hat{g}_{45}(a_1,a_2,a_3)\equiv$ \\ \hline (1,1,1)& (3,3,7) & (1,9,9)\\ (1,1,9)& (7,7,7) & (1,7,5) \\ (1,9,1)& (9,7,5) & (1,3,5) \\ (9,1,1)& (7,9,5) & (9,9,9) \\ (1,3,5)& (9,1,9) & (1,9,1) \\ (1,7,5)& (7,3,3) & (1,1,9) \\ (3,1,5)& (1,9,9) & (3,3,7) \\ (7,1,5)& (3,7,3) & (7,3,7) \\ (3,3,3)& (9,9,1) & (3,7,7) \\ (3,3,7)& (1,1,1) & (3,1,5) \\ (3,7,3)& (7,1,5) & (3,9,5) \\ (7,3,3)& (1,7,5) & (7,7,7) \\ \hline \end{tabular} \end{center} According to (\ref{formula}), \[ g(\vartheta(a_1,a_2,a_3)) = \vartheta(\hat{g}(a_1,a_2,a_3)), \] so the assertion follows. \end{proof} \subsection{The zero loci of twelve theta functions} Here we state Riemann's theorem. Let $C$ be an algebraic curve of genus $g$, let $\{A_i, B_i\}$ be a symplectic basis of $\mathrm{H}_1(C,\mathbb{Z})$ such that $A_i \cdot B_j = \delta_{ij}$, and let $\{\omega_i\}$ be the basis of $\mathrm{H}^0(C,\Omega^1)$ such that $\int_{A_i} \omega_j = \delta_{ij}$. Then $\Omega = (\int_{B_i} \omega_j)$ belongs to $\mathfrak{S}_g$. We denote ${}^t(\int_{\gamma} \omega_1, \cdots, \int_{\gamma} \omega_g)$ by $\int_{\gamma} \omega$. \begin{tm}[see \cite{Mum}, p.149] \label{Riemanntm} Let us fix a point $P_0 \in C$.
Then there is a vector $\Delta \in \mathbb{C}^g$ such that for all $z \in \mathbb{C}^g$, the multi-valued function \[ f(P) = \Theta_{(0,0)}(z + \int^P_{P_0} \omega,\ \Omega) \quad (P \in C) \] on $C$ either vanishes identically or has $g$ zeros $Q_1, \cdots , Q_g$ with \[ \sum_{i =1}^g \int_{P_0}^{Q_i} \omega \equiv -z + \Delta \quad \text{mod} \ \Omega \mathbb{Z}^g + \mathbb{Z}^g. \] \end{tm} \begin{rem}[see \cite{Mum}] \label{Riemannrem} (1) \ The vector $\Delta$ in the theorem is called the Riemann constant, and depends on the symplectic basis $\{A_i, B_i\}$ and the base point $P_0$. For fixed $\{A_i, B_i\}$ and $P_0$, $\Delta$ is uniquely determined as a point of the Jacobian $J(C) = \mathbb{C}^g / (\Omega \mathbb{Z}^g + \mathbb{Z}^g)$ by the property in the theorem.\\ (2) \ If we take $P_0$ such that the divisor $(2g-2)P_0$ is linearly equivalent to the canonical divisor, then we have $\Delta \in \frac{1}{2} \Omega \mathbb{Z}^g + \frac{1}{2} \mathbb{Z}^g$. \end{rem} \begin{cor}[see \cite{Mum}] \label{Riemanncor} Under the same situation as in the theorem, $\Theta_{(a,b)}(\Omega) = 0$ if and only if there exist $Q_1, \cdots, Q_{g-1} \in C$ such that \[ \Delta - (\Omega a + b) \equiv \sum_{i = 1}^{g-1} \int_{P_0}^{Q_i} \omega \quad \text{mod} \ \Omega \mathbb{Z}^g + \mathbb{Z}^g. \] \end{cor} \vskip3mm Now, let us return to our case. Let $\lambda^0 \in X(2,5)^{\circ}$ and $C_0$ be as in Section \ref{sectionperiod}, and let $\omega_1, \cdots, \omega_6$ be the basis of $\mathrm{H}^0(C_0, \Omega^1)$ such that $\int_{A_i} \omega_j = \delta_{ij}$. We denote the ramified point over $\lambda_i \in \mathbb{P}^1$ by $P_i \in C_0$. Let us take the base point $P_0$ arbitrarily among $\{P_1, \cdots, P_5\}$, and let $\Delta_0$ be the Riemann constant with respect to $\{A_i,B_i\}$ and $P_0$. \begin{lem} \label{lemma(5,5,5)} The Riemann constant $\Delta_0$ corresponds to the characteristic $(5,5,5)$. \end{lem} \begin{proof} The divisor of the holomorphic 1-form $(x-\lambda_i)^2dx/y^4$ is $10 P_i$.
Hence $\Delta_0$ is a half integer characteristic (see Remark \ref{Riemannrem}). For $z = \Omega a + b \ (a,b \in \mathbb{R}^6)$ and $\Sigma = \begin{pmatrix} A&B \\ C&D \end{pmatrix}$, applying (\ref{formula}) we have \begin{align} \label{equ} \Theta_{\Sigma(\Delta_0 -z)}(\Omega) = (\text{a unit function}) \times \Theta_{\Delta_0 -z }(\Omega), \end{align} since $\Sigma \Omega = \Omega$. By (\ref{gab}) and Remark \ref{remtrans}, we have \[ \Sigma(\Delta_0 -z) = \Sigma \Delta_0 - {}^t(C \Omega + D)^{-1} z. \] Hence \begin{align*} \Theta_{\Sigma \Delta_0 - {}^t(C \Omega + D)^{-1}z}(\Omega) = 0 \ &\Leftrightarrow \ \Theta_{\Delta_0 -z }(\Omega) =0 \\ &\Leftrightarrow \ z \equiv \sum_{i = 1}^{5} \int_{P_0}^{Q_i} \omega \quad \text{for some} \quad Q_1, \cdots, Q_5 \in C_0 \end{align*} by Corollary \ref{Riemanncor}. Namely, putting $w = {}^t(C \Omega + D)^{-1}z$, we have \begin{align*} \Theta_{\Sigma \Delta_0 - w}(\Omega) = 0 \ &\Leftrightarrow \ {}^t(C \Omega + D) w \equiv \sum_{i = 1}^{5} \int_{P_0}^{Q_i} \omega \quad \text{for some} \quad Q_1, \cdots, Q_5 \in C_0. \end{align*} Let us recall that $\Sigma$ is the symplectic representation matrix of $\rho$ with respect to the basis $\{A_i, B_i \}$ of $\mathrm{H}_1(C_0, \mathbb{Z})$. We have \[ \begin{pmatrix}I & \Omega \end{pmatrix} \begin{pmatrix}{}^tD & {}^tB \\ {}^tC & {}^tA\end{pmatrix} = \begin{pmatrix}{}^t(C \Omega + D) & {}^t(A \Omega + B)\end{pmatrix} = {}^t(C \Omega + D) \begin{pmatrix}I & \Omega \end{pmatrix}, \] so ${}^t(C \Omega + D)$ is the representation matrix of $\rho$ with respect to the basis $\{\omega_1, \cdots, \omega_6 \}$ of $\mathrm{H}^0(C_0, \Omega^1)$.
Hence \begin{align*} \Theta_{\Sigma \Delta_0 - w}(\Omega) = 0 \ &\Leftrightarrow \ w \equiv \sum_{i = 1}^{5} \int_{P_0}^{Q_i} (\rho^{-1})^* \omega \equiv \sum_{i = 1}^{5} \int_{\rho^{-1}(P_0)}^{\rho^{-1}(Q_i)} \omega \equiv \sum_{i = 1}^{5} \int_{P_0}^{\rho^{-1}(Q_i)} \omega. \end{align*} Recalling Remark \ref{Riemannrem} (1), this implies that $\Sigma \Delta_0$ is the Riemann constant, that is, $\Sigma \Delta_0 \equiv \Delta_0$. Hence we have $\Delta_0 \equiv (5,5,5)$, since $(5,5,5)$ is the unique $\Sigma$--invariant half integer characteristic. \end{proof} \vskip 3mm Next, let us consider the oriented arcs $\alpha_k(i,j)$ defined by (\ref{arc}) and the integrals $\int_{\alpha_k(i,j)} \omega \in \mathbb{C}^6$. \vskip3mm \begin{lem} \label{integrals} The integral $\int_{\alpha_k(i,j)} \omega$ is a five-torsion point $\Omega a + b$ of $\mathbb{C}^6 / (\Omega \mathbb{Z}^6 + \mathbb{Z}^6)$ of the form \[ a = \frac{1}{10}{}^t(a_1, a_2, a_3,a_1, a_2, a_3), \ b = \frac{1}{10}{}^t(-2a_1, -2a_2, -2a_3,-a_1, -a_2, -a_3) \] with $a_1, a_2, a_3 \in 2 \mathbb{Z}$. Explicitly, we have \begin{align*} \int_{\alpha_k(1,2)} \omega \equiv (6,0,0), \ \int_{\alpha_k(1,3)} \omega \equiv (8,2,6), \ \int_{\alpha_k(1,4)} \omega \equiv (8,8,6), \ \int_{\alpha_k(1,5)} \omega \equiv (8,0,8) \\ \text{mod} \ \Omega \mathbb{Z}^6 + \mathbb{Z}^6 \end{align*} with the same notation as in Definition \ref{defchara} and the identification referred to in Remark \ref{remtrans} (note that any $\alpha_k(i,j)$ is written as a combination of $\alpha_k(1,2)$, $\alpha_k(1,3)$, $\alpha_k(1,4)$ and $\alpha_k(1,5)$). \end{lem} \begin{proof} Since $D_{ij} = \alpha_i(1,5) - \alpha_j(1,5)$ is a cycle, we see that $\int_{\alpha_i(1,5)} \omega \equiv \int_{\alpha_j(1,5)} \omega$ mod $\Omega \mathbb{Z}^6 + \mathbb{Z}^6$.
And we have \begin{align*} \int_{D_{12} + D_{15}} \varphi_1 \ =\ \int_{2 \alpha_1(1,5) - \alpha_2(1,5) - \alpha_5(1,5)} \varphi_1 \ = \ (2- \zeta^2 - \zeta^3) \int_{\alpha_1(1,5)} \varphi_1. \end{align*} By the same calculation, we see that \begin{align*} \int_{\alpha_1(1,5)} \varphi_k \ =& \begin{cases} \frac{1}{5}(2- \zeta - \zeta^4) \int_{D_{12} + D_{15}} \varphi_k & (k = 1,2,3) \\ {}\\ \frac{1}{5}(2- \zeta^2 - \zeta^3) \int_{D_{12} + D_{15}} \varphi_k & (k = 4,5,6) \end{cases} \\ =& \frac{1}{5} \int_{[2 - \rho^2 - \rho^3] (D_{12} + D_{15})} \varphi_k. \end{align*} Calculating intersection numbers, we have the equality \begin{align*} [2 - \rho^2 - \rho^3] (D_{12} + D_{15}) = 2 A_1 + 2 A_3 + A_4 + A_6 + 4 B_1 + 4 B_3 - B_4 - B_6 \end{align*} of homology classes. Hence \begin{align*} \int_{\alpha_1(1,5)} \omega &\equiv \frac{1}{5} \int_{2 A_1 + 2 A_3 + A_4 + A_6 + 4 B_1 + 4 B_3 - B_4 - B_6} \omega \\ &\equiv \frac{1}{10} \int_{-6 A_1 -6 A_3 -8 A_4 -8 A_6 +8 B_1 +8 B_3 +8B_4 +8 B_6} \omega \equiv (8,0,8). \end{align*} In the same way, we obtain the results for $\alpha_k(1,2)$, $\alpha_k(1,3)$ and $\alpha_k(1,4)$. \end{proof} \vskip3mm Let $C_{\lambda} \ (\lambda \in X(2,5)^{\circ})$ be any element of our family $\mathcal{F}$. We defined in Section \ref{sectionperiod} the systems $\{\alpha_k(i,j)(\lambda) \}$, $\{\gamma_i(\lambda) \}$ and $\{A_i(\lambda), B_i(\lambda) \}$ on $C_{\lambda}$, depending on the arc $r$. The point $P_0$ always has the same meaning. So Lemmas \ref{lemma(5,5,5)} and \ref{integrals} hold for $C_{\lambda}$ with these notations. Let $\Delta \equiv (5,5,5)$ denote the Riemann constant on $C_{\lambda}$. \\ \indent Now, recall that $\mathbb{B}_2^{\circ}$ and $\ell(ij)$ stand for $\Phi(X(2,5)^{\circ})$ and $\pi^{-1}(\tilde{\Phi}(L(ij)))$, respectively (see Section \ref{sectiondl}). \begin{prop} $\vartheta(1,1,1) \cap \mathbb{B}_2^{\circ} = \emptyset$.
\end{prop} \begin{proof} Let us consider a curve $C = C_{\lambda} \ (\lambda \in X(2,5)^{\circ})$ and its period $\Omega = \Omega_{\lambda}$. We assume that $\Theta_{(1,1,1)}(\Omega) = 0$. According to Corollary \ref{Riemanncor}, there exist points $Q_1, \cdots, Q_5 \in C$ such that \[ \sum_{i =1}^5 \int_{P_5}^{Q_i} \omega \equiv \Delta - (1,1,1) \equiv (4,4,4). \] On the other hand, by Lemma \ref{integrals}, we have \[ \int_{P_4}^{P_3} \omega \equiv (0,4,0), \quad \int_{P_5}^{P_1} \omega \equiv (2,0,2). \] Hence \[ \sum_{i =1}^5 \int_{P_5}^{Q_i} \omega \equiv 2 \int_{P_5}^{P_1} \omega + \int_{P_4}^{P_3} \omega. \] By Abel's theorem, the divisor $\sum_{i =1}^5 Q_i$ is linearly equivalent to the divisor $D = 2 P_1 + P_3 - P_4 + 3 P_5$, and we have \begin{align} \label{eq} \dim \mathrm{H}^0(C, \mathcal{O}(D)) = \dim \mathrm{H}^0(C, \mathcal{O}(\sum_{i =1}^5 Q_i)) \geq 1. \end{align} For the effective divisor $D' = D + P_4$, we have \[ \dim \mathrm{H}^0(C, \mathcal{O}(D')) = \dim \mathrm{H}^0(C, \Omega^1(-D')) + 1 \] by the Riemann--Roch theorem. We claim that $\dim \mathrm{H}^0(C, \Omega^1(-D')) = 0$. In fact, the basis $\{\varphi_i\}$ is written as \begin{align*} \varphi_1 = y^2 \varphi, \quad \varphi_2 = y \varphi, \quad \varphi_3 = x^2 \varphi, \quad \varphi_4 = x \varphi, \quad \varphi_5 = xy \varphi, \quad \varphi_6 = \varphi \\ ( \varphi = 5\, \frac{dy}{f'(x)} = \frac{dx}{y^4}, \quad f(x) = \prod_{i = 1}^5(x -\lambda_i)), \end{align*} and we have the following vanishing orders: \[ \mathrm{ord}_{P_i}(y) = 1, \quad \mathrm{ord}_{P_i}(x - \lambda_j) = 5 \delta_{ij}, \quad \mathrm{ord}_{P_i}(\varphi) = 0 \quad (i,j = 1, \cdots, 5). \] Because any holomorphic 1-form is written in the form \[ \text{(inhomogeneous quadratic polynomial of $x,y$)} \times \varphi, \] we see that there is no holomorphic 1-form $\xi$ such that \[ \mathrm{ord}_{P_1}(\xi) \geq 2, \quad \mathrm{ord}_{P_3}(\xi) \geq 1, \quad \mathrm{ord}_{P_5}(\xi) \geq 3.
\] Hence we have $\dim \mathrm{H}^0(C, \mathcal{O}(D')) = 1$; that is, $\mathrm{H}^0(C, \mathcal{O}(D'))$ contains only constant functions. This contradicts (\ref{eq}), since $\mathrm{H}^0(C, \mathcal{O}(D)) \subset \mathrm{H}^0(C, \mathcal{O}(D'))$ and $D$ is not effective. \end{proof} \vskip3mm \begin{cor} \label{cornonvanish} Let $(a_1,a_2,a_3)$ be a $\Sigma$--invariant characteristic in (\ref{twelve1}). Then we have $\vartheta(a_1,a_2,a_3) \cap \mathbb{B}_2^{\circ} = \emptyset$. \end{cor} \begin{proof} This follows from Lemma \ref{transitiv}. \end{proof} \vskip3mm Hence $\vartheta(a_1,a_2,a_3)$ is the union of certain $\ell(ij)$'s. \begin{lem} \label{evaluation} Let $\eta_0$ be the point $[0:0:1] \in \mathbb{B}_2^A$, and let $(a_1,a_2,a_3)$ be a member of (\ref{twelve1}). If $a_1, a_2,a_3 \in \{1,9\}$, then we have $\Theta_{(a_1,a_2,a_3)}(\Omega(\eta_0)) \ne 0$. \end{lem} \begin{proof} Let \[ (a',b') = (\frac{1}{10}{}^t(\alpha,\alpha), \frac{1}{10}{}^t(-2 \alpha,-\alpha)) \] be a characteristic in $(\mathbb{Q}^2)^2$, and let $\Theta_{\alpha}(\tau)$ denote the theta constant $\Theta_{(a',b')}(\tau) \ (\tau \in \mathfrak{S}_2)$. Using this notation, we have \begin{align} \label{3theta} \Theta_{(a_1,a_2,a_3)}(\Omega(\eta_0)) = \Theta_{a_1}(\tau_0) \Theta_{a_2}(\tau_0) \Theta_{a_3}(\tau_0), \quad \tau_0 = \begin{pmatrix} \zeta - 1 &\zeta +\zeta^3 \\ \zeta +\zeta^3 & -\zeta^4 \end{pmatrix} \end{align} (see (\ref{eta_0})). So our assertion is reduced to the inequality $\Theta_1(\tau_0) \ne 0$, since $\Theta_9$ is a constant multiple of $\Theta_1$. Set \[ a = {}^t(\frac{1}{10},\frac{1}{10}), \quad b = {}^t(-\frac{2}{10},-\frac{1}{10}), \quad n = {}^t(n_1,n_2), \] and set \[ f(n_1,n_2) = \exp[\pi \sqrt{-1} ({}^t(n+a) \tau_0 (n+a) + 2\,{}^t(n+a)b)]. \] By definition, $\Theta_1(\tau_0) = \sum_{n_1,n_2 \in \mathbb{Z}} f(n_1,n_2)$. For simplicity, we denote $n + a$ by $m = (m_1,m_2)$.
By elementary calculations, we see that
\[
|f(n_1,n_2)| = \exp[-\pi \sin(\frac{2 \pi}{5})\{m_1^2 + (3 - \sqrt{5})m_1 m_2 + m_2^2\}].
\]
In the case $m_1 m_2 >0$, we have
\[
|f(n_1,n_2)| < \exp[-\pi \sin(\frac{2 \pi}{5})\{m_1^2 + m_2^2\}].
\]
In the case $m_1 m_2 < 0$, we have
\begin{align*}
|f(n_1,n_2)| &< \exp[-\pi \sin(\frac{2 \pi}{5})\{m_1^2 +m_1 m_2 +m_2^2\}] \\
&= \exp[-\pi \sin(\frac{2 \pi}{5})\{\frac{1}{2}(m_1^2 + m_2^2) + \frac{1}{2}(m_1 + m_2)^2\}] \\
&< \exp[-\frac{\pi}{2} \sin(\frac{2 \pi}{5})\{m_1^2 + m_2^2\}].
\end{align*}
Consequently,
\[
|f(n_1,n_2)| < \alpha^{m_1^2 + m_2^2}, \quad (\alpha = \exp[-\frac{\pi}{2} \sin(\frac{2 \pi}{5})])
\]
for any $n_1, n_2 \in \mathbb{Z}$. Set
\[
D_1 = \{ (n_1,n_2) \in \mathbb{Z}^2 \ : \ -10 \leq n_1, n_2 \leq 10\}, \quad D_2 = \mathbb{Z}^2 - D_1,
\]
and consider the summations
\[
S_1 = \sum_{D_1} f(n_1,n_2), \quad S_2 = \sum_{D_2} f(n_1,n_2).
\]
Using a computer, we can evaluate $|S_1|$ and $|S_2|$. We have the approximate value
\[
|S_1| \fallingdotseq 1.13746 \cdots
\]
by {\it Mathematica}. On the other hand, we have
\[
|S_2| < \sum_{D_2}|f(n_1,n_2)| < \sum_{D_2} \alpha^{m_1^2 + m_2^2}.
\]
The last term is very small. For example,
\[
\sum_{n_1 \geq 10, n_2 \geq 0} \alpha^{m_1^2 + m_2^2} < (\sum_{n_1 \geq 10} \alpha^{n_1})(\sum_{n_2 \geq 0} \alpha^{n_2}) = (\frac{\alpha^{10}}{1-\alpha})( \frac{\alpha}{1-\alpha}) \fallingdotseq 5.40545 \times 10^{-7},
\]
and the same calculations show $|S_1| \gg |S_2|$. This implies $\Theta_1(\tau_0) = S_1 + S_2 \ne 0$. \end{proof}

\vskip3mm

\begin{lem} \label{vanishinglemma} (1) \ If $a_1 \equiv 3,7 \ \mathrm{mod}\ 10$, then $\Theta_{(a_1,a_2,a_3)}$ vanishes on $\ell(12)$. \\
(2) \ If $a_2 \equiv 3,7 \ \mathrm{mod}\ 10$, then $\Theta_{(a_1,a_2,a_3)}$ vanishes on $\ell(34)$.
\end{lem}

\begin{proof} Set $g = \hat{g}_{12} = \begin{pmatrix} A&B\\ C&D\end{pmatrix}$ and set $\Omega = \Omega(\eta)$ with $\eta = [0:\eta_2:\eta_3] \in \mathbb{B}_2^A$. By the same computation as in the proof of Lemma \ref{vanish100}, we have
\[
g \Omega = \Omega, \quad \det(C \Omega + D) = \zeta, \quad \phi_{(a_1,a_2,a_3)}(g) = \frac{1}{40} a_1^2, \quad \Theta_{g(a_1,a_2,a_3)}(\Omega) = \Theta_{(a_1,a_2,a_3)}(\Omega).
\]
Hence it holds that
\[
\Theta_{(a_1,a_2,a_3)}(\Omega)^8 = \exp[\frac{2}{5} \pi \sqrt{-1}(a_1^2 - 1)] \Theta_{(a_1,a_2,a_3)}(\Omega)^8.
\]
Therefore $\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))$ vanishes on the mirror of $g_{12}$ provided $a_1 \equiv 3,7 \ \mathrm{mod}\ 10$. This implies assertion (1). Assertion (2) follows by the same argument with $g = g_{34}$ and $\eta = [\eta_1: 0:\eta_3] \in \mathbb{B}_2^A$. \end{proof}

\begin{prop} \label{proptable} Table \ref{table} gives the vanishing loci of the twelve theta constants coming from the system (\ref{twelve1}). In the table, ``v'' indicates that $\Theta_{(a_1,a_2,a_3)}$ vanishes there, and a blank indicates that $\Theta_{(a_1,a_2,a_3)}$ is not identically zero there. For example, $\Theta_{(1,1,1)}$ vanishes on $\ell(13)$ and is not identically zero on $\ell(12)$.
\begin{table}[hbtp] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|} \hline $(a_1,a_2,a_3)$ & $\ell(12)$ & $\ell(13)$ & $\ell(14)$ & $\ell(15)$ & $\ell(23)$ & $\ell(24)$ & $\ell(25)$ & $\ell(34)$ & $\ell(35)$ & $\ell(45)$ \\ \hline $(1,1,1)$ & &v & &v &v &v & & & &v \\ \hline $(1,1,9)$ & &v &v & & &v &v & &v & \\ \hline $(1,9,1)$ & & &v &v &v &v & & &v & \\ \hline $(9,1,1)$ & &v &v & &v & &v & & &v \\ \hline $(1,3,5)$ & & &v &v &v & &v &v & & \\ \hline $(1,7,5)$ & &v & &v & &v &v &v & & \\ \hline $(3,1,5)$ &v & &v & &v & & & &v &v \\ \hline $(7,1,5)$ &v &v & & & &v & & &v &v \\ \hline $(3,3,3)$ &v & &v & & & &v &v &v & \\ \hline $(3,3,7)$ &v & & &v &v & & &v & &v \\ \hline $(3,7,3)$ &v &v & & & & &v &v & &v \\ \hline $(7,3,3)$ &v & & &v & &v & &v &v & \\ \hline \end{tabular} \end{center} \caption{Vanishing of the twelve theta constants on the lines $\ell(ij)$} \label{table} \end{table}
\end{prop}

\begin{proof} By Lemma \ref{vanishinglemma},
\[
\Theta_{(3,1,5)}, \quad \Theta_{(7,1,5)}, \quad \Theta_{(3,3,3)}, \quad \Theta_{(3,3,7)}, \quad \Theta_{(3,7,3)}, \quad \Theta_{(7,3,3)}
\]
vanish on $\ell(12)$, and
\[
\Theta_{(1,3,5)}, \quad \Theta_{(1,7,5)}, \quad \Theta_{(3,3,3)}, \quad \Theta_{(3,3,7)}, \quad \Theta_{(3,7,3)}, \quad \Theta_{(7,3,3)}
\]
vanish on $\ell(34)$. By Lemma \ref{evaluation},
\[
\Theta_{(1,1,1)}, \quad \Theta_{(1,1,9)}, \quad \Theta_{(1,9,1)}, \quad \Theta_{(9,1,1)}
\]
are not identically zero on $\ell(12)$ and on $\ell(34)$, since $\eta_0 = [0:0:1] \in \ell(12) \cap \ell(34)$. The result is obtained by applying the transformation formula (\ref{formula}) to the above theta constants and the $\hat{g}_{ij}$. For example, we have
\[
\Theta_{\hat{g}_{12}\hat{g}_{45}(a_1,a_2,a_3)} (\hat{g}_{12}\hat{g}_{45} \Omega) = \text{(a unit function)} \times \Theta_{(a_1,a_2,a_3)}(\Omega).
\]
Since $\hat{g}_{12}\hat{g}_{45}(1,3,5) \equiv (9,9,1)$ (see Lemma \ref{transitiv}) and $g_{12}g_{45}(\ell(12)) = \ell(12)$, we see that $\Theta_{(1,3,5)}$ is not identically zero on $\ell(12)$.
\end{proof}

\subsection{Automorphic Factor}
We study the automorphic factor appearing in the transformation formula (\ref{formula}) with respect to $\Gamma(1-\zeta)$ and $\Omega = \Omega(\eta)$. Let $H$ be the diagonal matrix $\mathrm{diag}(1,1,-\zeta^3(1 + \zeta))$. We denote ${}^t\eta H \eta$ by $\left<\eta,\eta\right>$. Set
\[
F_g(\eta) = \frac{\left<g\eta,g\eta\right>}{\left<\eta,\eta\right>}
\]
for $g \in \Gamma$ and $\eta \in \mathbb{B}_2^A$. Obviously, we have the following lemma.

\begin{lem} \label{lemcocycle} $F_g(\eta)$ satisfies the cocycle condition with respect to $\Gamma$. That is,
\[
F_{g_1g_2}(\eta) = F_{g_1}(g_2\eta) F_{g_2}(\eta), \qquad g_1, g_2 \in \Gamma.
\]
\end{lem}

\vskip3mm

\begin{prop} There exists a non-trivial character
\[
\chi \ :\ \Gamma \longrightarrow \mu_5 = \{1,\zeta,\cdots, \zeta^4\}
\]
such that
\begin{align*}
\det(C \Omega(\eta) + D) = \chi(g) F_g(\eta) \quad (\eta \in \mathbb{B}_2^A)
\end{align*}
for $g \in \Gamma$, where the matrix $\begin{pmatrix} A&B\\ C&D\end{pmatrix}$ is the symplectic representation $\hat{g}$ of $g$. \end{prop}

\begin{proof} By a case-by-case calculation, we have
\[
\det(C \Omega(\eta) + D) = \zeta^3 F_g(\eta) \quad (\eta \in \mathbb{B}_2^A)
\]
for $g = g_{12}, g_{23}, g_{34}, g_{45}$. Since $\det(C \Omega(\eta) + D)/F_g(\eta)$ satisfies the cocycle condition, we obtain the result. \end{proof}

Now let $(a,b)$ be an invariant characteristic $(a_1,a_2,a_3)$, and let $(a_g,b_g)$ be $\hat{g}(a,b)$ for $g \in \Gamma(1-\zeta)$. Since
\[
(a_g,b_g) \equiv (a,b) \quad \text{mod} \ \mathbb{Z},
\]
we have
\[
\Theta_{\hat{g}(a,b)}(\Omega) = \Theta_{(a_g,b_g)}(\Omega) = \Theta_{(a_g - a + a,b_g - b + b)}(\Omega) = \exp[2 \pi \sqrt{-1} {}^ta(b_g - b)] \Theta_{(a,b)}(\Omega)
\]
by (\ref{chformula}). Set
\[
\phi'_{(a_1,a_2,a_3)}(\hat{g}) = \phi_{(a_1,a_2,a_3)}(\hat{g}) - {}^ta(b_g - b).
\]
Then we can write the transformation formula (\ref{formula}) as
\begin{align} \label{newformula}
\Theta_{(a_1,a_2,a_3)}(\Omega(g\eta)) = \kappa(\hat{g})\exp(2 \pi \sqrt{-1}\phi'_{(a_1,a_2,a_3)}(\hat{g})) [\chi(g) F_g(\eta)]^{\frac{1}{2}}\Theta_{(a_1,a_2,a_3)}(\Omega(\eta)),
\end{align}
where $\kappa(\hat{g})$ is an 8-th root of unity depending only on $\hat{g}$.

\vskip3mm

\begin{lem} \label{lemmaphi'} Let $g$ be in $\Gamma(1 - \zeta)$. Then the values
\[
[\exp(2 \pi \sqrt{-1}\phi'_{(a_1,a_2,a_3)}(\hat{g}))]^5
\]
are the same for all twelve characteristics $(a_1,a_2,a_3)$ in (\ref{twelve1}). \end{lem}

\begin{proof} By direct calculations, we have
\begin{align*}
5 \phi'_{(a_1,a_2,a_3)}(\hat{h}_{12}) \equiv \frac{1}{8}, \quad 5 \phi'_{(a_1,a_2,a_3)}(\hat{h}_{13}) \equiv \frac{3}{4}, \quad 5 \phi'_{(a_1,a_2,a_3)}(\hat{h}_{14}) \equiv \frac{1}{2}, \\
5 \phi'_{(a_1,a_2,a_3)}(\hat{h}_{23}) \equiv \frac{1}{2}, \quad 5 \phi'_{(a_1,a_2,a_3)}(\hat{h}_{34}) \equiv \frac{3}{4} \quad (\text{mod} \ \mathbb{Z})
\end{align*}
for the twelve $(a_1,a_2,a_3)$. According to Lemma \ref{lemcocycle}, the equality (\ref{newformula}) shows that
\[
\kappa(\hat{g})\exp[2 \pi \sqrt{-1}\phi'_{(a_1,a_2,a_3)}(\hat{g})]
\]
is a character on $\Gamma(1 -\zeta)$. So we obtain the result for any $g \in \Gamma(1-\zeta)$. \end{proof}

\vskip3mm

\begin{cor} Let $(a_1, a_2, a_3)$ and $(b_1, b_2, b_3)$ be in (\ref{twelve1}). Then the function
\[
\frac{\Theta_{(a_1, a_2, a_3)}(\Omega(\eta))^5} {\Theta_{(b_1, b_2, b_3)}(\Omega(\eta))^5}
\]
is well-defined as a meromorphic function on $\mathbb{B}_2^A / \Gamma(1-\zeta)$. \end{cor}

\vskip3mm

Let $\Omega = \Omega_{\lambda}$ be the period matrix of a curve $C_{\lambda} \ (\lambda \in X(2,5))$, and let $P_0$ be a ramification point of $C_{\lambda} \rightarrow \mathbb{P}^1$.

\vskip 3mm

\begin{prop} Let $(a_1, a_2, a_3)$ and $(b_1, b_2, b_3)$ be in (\ref{twelve1}).
The function
\[
f(P) = \frac{\Theta_{(a_1, a_2, a_3)} (\int_{P_0}^{P} \omega, \ \Omega)^5} {\Theta_{(b_1, b_2, b_3)}(\int_{P_0}^{P} \omega, \ \Omega)^5} \quad (P \in C_{\lambda})
\]
is a single-valued meromorphic function on $C_{\lambda}$, where the same path of integration is chosen in the numerator and the denominator. \end{prop}

\begin{proof} Note that Corollary \ref{cornonvanish} asserts
\[
\Theta_{(a_1, a_2, a_3)}(\int_{P_0}^{P_0} \omega, \ \Omega) = \text{const.} \times \Theta_{(a_1, a_2, a_3)}(0, \ \Omega) \ne 0,
\]
where the constant depends on the path of integration. So the numerator is not identically zero, and the same holds for the denominator. By the assumption we have
\[
(a_1, a_2, a_3) - (b_1, b_2, b_3) \in (\frac{1}{5} \mathbb{Z}^6)^2.
\]
By using the formulas (\ref{translate1}) and (\ref{translate2}), we can check that
\[
\frac{\Theta_{(a_1, a_2, a_3)} (\int_{P_0}^{P} \omega + \Omega m + n, \ \Omega)^5} {\Theta_{(b_1, b_2, b_3)} (\int_{P_0}^{P} \omega + \Omega m + n, \ \Omega)^5} = \frac{\Theta_{(a_1, a_2, a_3)}(\int_{P_0}^{P} \omega, \ \Omega)^5} {\Theta_{(b_1, b_2, b_3)}(\int_{P_0}^{P} \omega, \ \Omega)^5}
\]
for $m, n \in \mathbb{Z}^6$. This implies the single-valuedness of $f$. \end{proof}

\vskip3mm

Let us consider the meromorphic function
\[
f(P) = \frac{\Theta_{(1,1,1)}(\int_{P_1}^{P} \omega, \ \Omega)^5} {\Theta_{(3,3,7)}(\int_{P_1}^{P} \omega, \ \Omega)^5}
\]
on $C_{\lambda}$. By Lemma \ref{integrals}, we have
\begin{align*}
\Delta - (1,1,1) \equiv (4,4,4) \equiv 2 \int_{P_1}^{P_2} \omega + 3 \int_{P_1}^{P_3} \omega + \int_{P_1}^{P_4} \omega, \\
\Delta - (3,3,7) \equiv (2,2,8) \equiv 3 \int_{P_1}^{P_2} \omega + 2 \int_{P_1}^{P_3} \omega + \int_{P_1}^{P_4} \omega.
\end{align*}
By Corollary \ref{Riemanncor}, the zero divisors of $\Theta_{(1,1,1)}(\int_{P_1}^{P} \omega, \ \Omega)$ and $\Theta_{(3,3,7)}(\int_{P_1}^{P} \omega, \ \Omega)$ are $2 P_2 + 3 P_3 + P_4$ and $3 P_2 + 2 P_3 + P_4$, respectively.
Hence we can write
\[
f(P) = c \ \frac{x(P) - \lambda_3}{x(P) - \lambda_2},
\]
where $x(P)$ is the coordinate function $x \in \mathbb{C}[x,y]/(y^5 - \prod(x-\lambda_i))$ and $c \ne 0$ is a certain constant. By Lemma \ref{integrals},
\[
\int_{P_1}^{P_1} \omega \equiv (0,0,0), \quad \int_{P_1}^{P_5} \omega \equiv (8,0,8).
\]
Substituting $P = P_1, P_5$ into the above form, we obtain
\[
\frac{\Theta_{(1,1,1)}((0,0,0), \ \Omega)^5} {\Theta_{(3,3,7)}((0,0,0), \ \Omega)^5} = c \ \frac{\lambda_1 - \lambda_3}{\lambda_1 - \lambda_2}, \quad \frac{\Theta_{(1,1,1)}((8,0,8), \ \Omega)^5} {\Theta_{(3,3,7)}((8,0,8), \ \Omega)^5} = c \ \frac{\lambda_5 - \lambda_3}{\lambda_5 - \lambda_2}.
\]
Set $(8,0,8) = \Omega \varepsilon' + \varepsilon''$. By an elementary but patient calculation, we have
\begin{align*}
\Theta_{(1,1,1)}((8,0,8), \ \Omega)^5 = - \zeta^2 \exp[-5 \pi \sqrt{-1}{}^t\varepsilon' \Omega \varepsilon' -10 \pi \sqrt{-1} {}^t\varepsilon' \varepsilon''] \Theta_{(1,9,1)}(\Omega)^5, \\
\Theta_{(3,3,7)}((8,0,8), \ \Omega)^5 = \exp[-5 \pi \sqrt{-1}{}^t\varepsilon' \Omega \varepsilon' -10 \pi \sqrt{-1} {}^t\varepsilon' \varepsilon''] \Theta_{(1,3,5)}(\Omega)^5.
\end{align*}
Eliminating $c$, we have the following equality:
\[
\frac{\Theta_{(1,1,1)}(\Omega)^5 \Theta_{(1,3,5)}(\Omega)^5} {\Theta_{(3,3,7)}(\Omega)^5 \Theta_{(1,9,1)}(\Omega)^5} = -\zeta^2 \frac{(\lambda_1 - \lambda_3)(\lambda_5 - \lambda_2)} {(\lambda_1 - \lambda_2)(\lambda_5 - \lambda_3)}.
\]
Note that we can regard the above equality as an equality of meromorphic functions on $\mathbb{B}_2^A / \Gamma(1-\zeta) \cong X(2,5)$.
By the above equality and Proposition \ref{proptable}, we see that
\begin{enumerate}
\item The vanishing order of $\Theta_{(1,1,1)}(\Omega(\eta))^5$ on $\tilde{\Phi}(L(13))$ is 1,
\item The vanishing order of $\Theta_{(1,3,5)}(\Omega(\eta))^5$ on $\tilde{\Phi}(L(25))$ is 1,
\item The vanishing order of $\Theta_{(3,3,7)}(\Omega(\eta))^5$ on $\tilde{\Phi}(L(12))$ is 1,
\item The vanishing order of $\Theta_{(1,9,1)}(\Omega(\eta))^5$ on $\tilde{\Phi}(L(35))$ is 1.
\end{enumerate}
Because $\Gamma$ acts transitively on the set of $\sigma$--invariant characteristics (see Lemma \ref{transitiv}), we obtain the following result.

\vskip3mm

\begin{prop} \label{proporder} Let $(a_1,a_2,a_3)$ be a $\sigma$--invariant characteristic. If the multi-valued function $\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5$ on $\mathbb{B}_2^A / \Gamma(1-\zeta)$ vanishes identically on $\tilde{\Phi}(L(ij)) = \ell(ij) / \Gamma(1-\zeta)$, then the vanishing order is 1. \end{prop}

\vskip7mm

\section{Conclusion}
Now we state our results.
\par
\noindent {\bf ----- The Schwarz inverse for the Appell HGDE $F_1(\frac{3}{5},\frac{3}{5},\frac{2}{5},\frac{6}{5})$ -----}
\par
Recall the embedding $J: X(2,5) \rightarrow \mathbb{P}^{11}$ with
\[
J(ijklm)(\lambda) = (\lambda_i - \lambda_j)(\lambda_j - \lambda_k) (\lambda_k - \lambda_l)(\lambda_l - \lambda_m)(\lambda_m - \lambda_i)
\]
in Proposition \ref{propJ}, and the extended period map $\tilde{\Phi}$ in Section \ref{sectiondl}.
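The cyclic product defining $J(ijklm)$ can be checked numerically: reversing the cycle negates each of the five factors, so $J(ijklm) = -J(mlkji)$. The following Python sketch is only an illustration of this definition; the sample values of $\lambda_i$ are hypothetical and play no role in the proofs.

```python
from itertools import permutations

def J(order, lam):
    """Cyclic product (l_i-l_j)(l_j-l_k)(l_k-l_l)(l_l-l_m)(l_m-l_i)
    for order = (i, j, k, l, m), with 1-based indices into lam."""
    i, j, k, l, m = order
    x = lambda t: lam[t - 1]
    return ((x(i) - x(j)) * (x(j) - x(k)) * (x(k) - x(l))
            * (x(l) - x(m)) * (x(m) - x(i)))

lam = [0.0, 1.3, 2.1, 3.7, 5.2]  # hypothetical sample branch points
# Reversing the cycle flips the sign of all 5 factors, hence of J:
for p in permutations((1, 2, 3, 4, 5)):
    assert abs(J(p, lam) + J(p[::-1], lam)) < 1e-9
```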
\begin{tm} \label{maintheorem} We have a commutative diagram: \begin{figure}\label{diagram} \end{figure} by putting
\begin{align} \label{exactmap}
\varTheta = \begin{bmatrix} \Theta_{(1,1,1)}(\Omega(\eta))^5 \\ \Theta_{(1,1,9)}(\Omega(\eta))^5 \\ \Theta_{(1,9,1)}(\Omega(\eta))^5 \\ \Theta_{(9,1,1)}(\Omega(\eta))^5 \\ \Theta_{(1,3,5)}(\Omega(\eta))^5 \\ \Theta_{(1,7,5)}(\Omega(\eta))^5 \\ \Theta_{(3,3,3)}(\Omega(\eta))^5 \\ \Theta_{(3,3,7)}(\Omega(\eta))^5 \\ \Theta_{(3,7,3)}(\Omega(\eta))^5 \\ \Theta_{(7,3,3)}(\Omega(\eta))^5 \\ \Theta_{(7,1,5)}(\Omega(\eta))^5 \\ \Theta_{(3,1,5)}(\Omega(\eta))^5 \end{bmatrix}, \quad J = \begin{bmatrix} c_1 J(13245)(\lambda) \\ c_2 J(13524)(\lambda) \\ c_3 J(15324)(\lambda) \\ c_4 J(13254)(\lambda) \\ c_5 J(15234)(\lambda) \\ c_6 J(13425)(\lambda) \\ d_1 J(12534)(\lambda) \\ d_2 J(12345)(\lambda) \\ d_3 J(13452)(\lambda) \\ d_4 J(15342)(\lambda) \\ d_5 J(12453)(\lambda) \\ d_6 J(12354)(\lambda) \end{bmatrix},
\end{align}
with constants
\[
[c_1 : \cdots : c_6 : d_1 : \cdots : d_6] = [1:-1:1:1:\zeta^3:\zeta^3:-\zeta:\zeta:\zeta:-\zeta:-1:-1] \in \mathbb{P}^{11}.
\]
Moreover, the map $\varTheta$ is an embedding. \end{tm}

\begin{proof} By Proposition \ref{proptable} and Proposition \ref{proporder}, the zero divisor of the $i$-th component of $\varTheta$ coincides with that of the $i$-th component of $J$ via the isomorphism $\tilde{\Phi}$. So we can write
\begin{align}
\frac{\Theta_{(1,1,1)}(\Omega(\tilde{\Phi}(\lambda)))^5} {J(13245)(\lambda)} = c_1, \ \cdots\cdots, \ \frac{\Theta_{(3,1,5)}(\Omega(\tilde{\Phi}(\lambda)))^5} {J(12354)(\lambda)} = d_6
\end{align}
with certain constants $c_1, \cdots, d_6$. This shows that the diagram in question is commutative. Since $J$ is an embedding and $\tilde{\Phi}$ is an isomorphism, we see that $\varTheta$ is an embedding.
\par \indent
Next we determine the ratios of the constants $c_i, d_i$.
\par
\noindent Let $\eta \in \mathbb{B}_2^A$ be a point of the form $[0:\eta_2:\eta_3]$.
For such $\eta$, we have the decomposition
\[
\Omega(\eta) = \tau_0 \oplus \Omega(\eta)'
\]
(see (\ref{degeperiod})). So we have the splitting
\[
\Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) = \Theta_{a_1} (\tau_0) \Psi_{(a_2,a_3)}(\Omega(\eta)'),
\]
where $\Theta_{a_1} (\tau_0)$ is the same as in the proof of Lemma \ref{evaluation}, and $\Psi_{(a_2,a_3)}(\Omega(\eta)')$ is the theta function $\Theta_{(a,b)}(\Omega(\eta)')$ of degree 4 with the characteristic
\[
a = \frac{1}{10}(a_2,a_3,a_2,a_3), \quad b = \frac{1}{10}(-2 a_2,-2 a_3,- a_2,- a_3).
\]
In the same way, we have
\[
\Theta_{(a_1,a_2,a_3)}(\Omega(\eta)) = \Theta_{a_2} (\tau_0) \Psi_{(a_1,a_3)}(\Omega(\eta)'')
\]
for $\eta = [\eta_1 : 0 : \eta_3] \in \mathbb{B}_2^A$. We can check that
\[
\Theta_{1} (\tau_0)^5 = - \Theta_{9} (\tau_0)^5, \quad \Psi_{(1,9)}(\Omega(\eta)')^5 = \Psi_{(9,1)}(\Omega(\eta)')^5, \quad \Psi_{(3,5)}(\Omega(\eta)')^5 = \Psi_{(7,5)}(\Omega(\eta)')^5
\]
by (\ref{chformula}) and (\ref{chformula2}). So we have
\begin{align} \label{ident1}
\frac{\Theta_{(1,1,1)}(\Omega(\eta))^5}{\Theta_{(9,1,1)}(\Omega(\eta))^5} = \frac{\Theta_1(\tau_0)^5 \Psi_{(1,1)}(\Omega(\eta)')^5} {\Theta_9(\tau_0)^5 \Psi_{(1,1)}(\Omega(\eta)')^5} = -1 \quad \text{on} \ \ell(12).
\end{align}
In the same way, we see
\begin{align} \label{ident2}
\frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,9,1)}(\Omega(\eta))^5} = 1, \quad \frac{\Theta_{(1,3,5)}(\Omega(\eta))^5}{\Theta_{(1,7,5)}(\Omega(\eta))^5} = 1 \quad \text{on} \ \ell(12),
\end{align}
and
\begin{align} \label{ident3}
\frac{\Theta_{(1,1,1)}(\Omega(\eta))^5}{\Theta_{(1,9,1)}(\Omega(\eta))^5} = -1, \quad \frac{\Theta_{(9,1,1)}(\Omega(\eta))^5}{\Theta_{(1,1,9)}(\Omega(\eta))^5} = 1, \quad \frac{\Theta_{(3,1,5)}(\Omega(\eta))^5}{\Theta_{(7,1,5)}(\Omega(\eta))^5} = 1 \quad \text{on} \ \ell(34).
\end{align}
Moreover, we have
\begin{align} \label{ident4}
\frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,1,1)}(\Omega(\eta))^5} = \frac{\Theta_1(\tau_0)^5 \Theta_1(\tau_0)^5 \Theta_9(\tau_0)^5 } {\Theta_1(\tau_0)^5 \Theta_1(\tau_0)^5 \Theta_1(\tau_0)^5} = -1 \quad \text{for} \ \eta \in \ell(12) \cap \ell(34)
\end{align}
(see (\ref{3theta})). By the transformation formula (\ref{formula}), we have
\begin{align*}
\frac{\Theta_{\hat{g}(a_1,a_2,a_3)}(\Omega(g \eta))^5} {\Theta_{\hat{g}(b_1,b_2,b_3)}(\Omega(g \eta))^5} = \exp[2 \pi \sqrt{-1}\{\phi_{(a_1,a_2,a_3)}(\hat{g}) - \phi_{(b_1,b_2,b_3)}(\hat{g})\}] \frac{\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5} {\Theta_{(b_1,b_2,b_3)}(\Omega(\eta))^5}
\end{align*}
for any pair of $\sigma$--invariant characteristics $(a_1,a_2,a_3),(b_1,b_2,b_3)$, and $g \in \Gamma$. By explicit calculation of the above formula, we obtain
\begin{align*}
\frac{\Theta_{(1,1,1)}(\Omega(\eta))^5}{\Theta_{(9,1,1)}(\Omega(\eta))^5} &= - \zeta^4 \frac{\Theta_{(3,3,7)}(\Omega(g_{23} \eta))^5} {\Theta_{(3,1,5)}(\Omega(g_{23} \eta))^5}, \quad \frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,9,1)}(\Omega(\eta))^5} = \zeta^2 \frac{\Theta_{(3,3,3)}(\Omega(g_{23} \eta))^5} {\Theta_{(1,3,5)}(\Omega(g_{23} \eta))^5}, \\
\frac{\Theta_{(1,3,5)}(\Omega(\eta))^5}{\Theta_{(1,7,5)}(\Omega(\eta))^5} &= \zeta \frac{\Theta_{(1,9,1)}(\Omega(g_{23} \eta))^5} {\Theta_{(7,3,3)}(\Omega(g_{23} \eta))^5}, \quad \frac{\Theta_{(3,1,5)}(\Omega(\eta))^5}{\Theta_{(7,1,5)}(\Omega(\eta))^5} = \zeta \frac{\Theta_{(9,1,1)}(\Omega(g_{23} \eta))^5} {\Theta_{(3,7,3)}(\Omega(g_{23} \eta))^5}, \\
\frac{\Theta_{(3,1,5)}(\Omega(\eta))^5}{\Theta_{(7,1,5)}(\Omega(\eta))^5} &= - \frac{\Theta_{(3,3,7)}(\Omega(g_{45} \eta))^5} {\Theta_{(3,7,3)}(\Omega(g_{45} \eta))^5}, \quad \frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,1,1)}(\Omega(\eta))^5} = \zeta^2 \frac{\Theta_{(1,7,5)}(\Omega(g_{45} \eta))^5} {\Theta_{(9,1,1)}(\Omega(g_{45} \eta))^5}.
\end{align*}
Comparing these with (\ref{ident1}), (\ref{ident2}), (\ref{ident3}) and (\ref{ident4}), we have
\begin{align} \label{ident5}
\frac{\Theta_{(3,3,7)}(\Omega(\eta))^5}{\Theta_{(3,1,5)}(\Omega(\eta))^5} = \zeta, \quad \frac{\Theta_{(3,3,3)}(\Omega(\eta))^5}{\Theta_{(1,3,5)}(\Omega(\eta))^5} = \zeta^3, \quad \frac{\Theta_{(1,9,1)}(\Omega(\eta))^5}{\Theta_{(7,3,3)}(\Omega(\eta))^5} = \zeta^4 \quad \text{on} \ \ell(13),
\end{align}
\begin{align} \label{ident6}
\frac{\Theta_{(9,1,1)}(\Omega(\eta))^5}{\Theta_{(3,7,3)}(\Omega(\eta))^5} = \zeta^4 \quad \text{on} \ \ell(24),
\end{align}
\begin{align} \label{ident7}
\frac{\Theta_{(3,3,7)}(\Omega(\eta))^5}{\Theta_{(3,7,3)}(\Omega(\eta))^5} = -1 \quad \text{on} \ \ell(35),
\end{align}
\begin{align} \label{ident8}
\frac{\Theta_{(1,7,5)}(\Omega(\eta))^5}{\Theta_{(9,1,1)}(\Omega(\eta))^5} = \zeta^3 \quad \text{on} \ \ell(12) \cap \ell(35),
\end{align}
since it holds that
\begin{align*}
g_{23}(\ell(12)) = \ell(13), \quad g_{23}(\ell(34)) = \ell(24), \quad g_{45}(\ell(12)) = \ell(12), \quad g_{45}(\ell(34)) = \ell(35).
\end{align*}
Because the commutativity of the diagram is already established, by using (\ref{ident1}) we see that
\begin{align*}
-1 = \frac{\Theta_{(1,1,1)}(\Omega(\eta))^5}{\Theta_{(9,1,1)}(\Omega(\eta))^5} |_{\ell(12)} &= \frac{c_1}{c_4} \frac{J(13245)}{J(13254)}|_{L(12)} \\
&= \frac{c_1}{c_4} \frac{(\lambda_1 - \lambda_3)(\lambda_3 - \lambda_1) (\lambda_1 - \lambda_4)(\lambda_4 - \lambda_5)(\lambda_5 - \lambda_1)} {(\lambda_1 - \lambda_3)(\lambda_3 - \lambda_1)(\lambda_1 - \lambda_5) (\lambda_5 - \lambda_4)(\lambda_4 - \lambda_1)},
\end{align*}
that is, $c_1 = c_4$. By the same calculation using (\ref{ident2})--(\ref{ident8}), we obtain
\begin{align*}
c_1 = c_4, \quad c_2 = - c_3, \quad c_5 = c_6, \quad c_1 = c_3, \quad c_2 = - c_4, \quad d_5 = d_6, \quad c_1 = - c_2, \\
d_2 = - \zeta d_6, \quad d_1 = - \zeta^3 c_5, \quad c_3 = - \zeta^4 d_4, \quad c_4 = \zeta^4 d_3, \quad d_2 = d_3, \quad c_6 = \zeta^3 c_4.
\end{align*}
These equalities give the ratios in the assertion. \end{proof}

\begin{rem} We have the following equalities:
\[
J(ijklm) = - J(mlkji), \quad \Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5 = - \Theta_{(10 - a_1,10 - a_2,10 - a_3)}(\Omega(\eta))^5.
\]
\end{rem}

\vskip3mm

Let us denote $X(2,5)$ by $X$, and let $K_X$ be the canonical class of $X$.

\begin{cor} We have an isomorphism of $\mathbb{C}$--algebras
\[
\mathbb{C}[\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5] \cong \oplus_{n = 0}^{\infty} \mathrm{H}^0(X,\mathcal{O}_X(-nK_X)),
\]
where the left hand side is the $\mathbb{C}$--algebra of functions on $\mathbb{B}_2^A$ generated by the twelve theta functions in Theorem \ref{maintheorem}. In particular, the $\mathbb{C}$--vector space spanned by $\{ \Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5 \}$ coincides with $\mathrm{H}^0(X,\mathcal{O}_X(-K_X))$. \end{cor}

\begin{proof} The map $J$ is essentially the anti-canonical map (see Section \ref{sectionconfig}). Hence the assertion follows from Theorem \ref{maintheorem}. \end{proof}

\begin{rem} By the Riemann--Roch theorem, we obtain
\begin{align*}
\dim \mathrm{H}^0(X,\mathcal{O}_X(-n K_X)) &= \frac{1}{2}(-n K_X) \cdot (-n K_X -K_X) + 1 \\
&= \frac{5}{2} n(n+1) +1
\end{align*}
since $(-K_X) \cdot (-K_X) = 5$. So we have $\dim \mathrm{H}^0(X,\mathcal{O}_X(-K_X)) = 6$, and the twelve $\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5$ satisfy 6 linear relations. It is known that the image of $X$ in $\mathbb{P}^5$ by the anti-canonical map is determined by a system of quadratic equations (see \cite[Chapter 5]{F}). \end{rem}

\vskip3mm

\noindent {\bf ----- The graded ring of Automorphic forms -----}
\par
Recall the automorphic factor $F_g(\eta)$ in Lemma \ref{lemcocycle}. We consider automorphic functions $f(\eta)$ on $\mathbb{B}_2^A$ in the sense that we have
\begin{align} \label{modular}
f(g \eta) = F_g(\eta)^k f(\eta)\quad \text{for} \ g \in \Gamma(1-\zeta),
\end{align}
where $k$ is a non-negative integer.
Let us denote the vector space of holomorphic functions satisfying (\ref{modular}) by $A_k(F_g)$.

\vskip3mm

\begin{prop} \label{propmod} Let $(a_1,a_2,a_3)$ and $(b_1,b_2,b_3)$ be members of the system (\ref{twelve1}). Then it holds that
\[
\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5 \Theta_{(b_1,b_2,b_3)}(\Omega(\eta))^5 \in A_5(F_g).
\]
\end{prop}

\begin{proof} By (\ref{newformula}) and Lemma \ref{lemmaphi'}, we have
\begin{equation} \label{eeee}
\begin{split}
&\Theta_{(a_1,a_2,a_3)}(\Omega(g \eta))^5 \Theta_{(b_1,b_2,b_3)}(\Omega(g \eta))^5 \\
=&\kappa(g)^{10}\exp(2 \pi \sqrt{-1}\phi'_{(a_1,a_2,a_3)}(g))^{10} F_g(\eta)^5 \Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5 \Theta_{(b_1,b_2,b_3)}(\Omega(\eta))^5
\end{split}
\end{equation}
for $g \in \Gamma(1-\zeta)$. We must show
\begin{align} \label{kapp}
[\kappa(g)\exp(2 \pi \sqrt{-1}\phi'_{(a_1,a_2,a_3)}(g))]^{10} = 1
\end{align}
for $g \in \Gamma(1-\zeta)$. Let $\eta_0$ be the point
\[
(\text{the mirror of} \ h_{12}) \cap (\text{the mirror of} \ h_{34}) = [0:0:1] \in \mathbb{B}_2^A.
\]
Then $\eta_0$ is fixed by $h_{12}$ and $h_{34}$. Moreover, $F_g(\eta_0) = \zeta$ for $g = h_{12}, h_{34}$. So we have
\[
\Theta_{(1,1,1)}(\Omega(\eta_0))^{10} =[\kappa(g)\exp(2 \pi \sqrt{-1}\phi'_{(1,1,1)}(g))]^{10} \Theta_{(1,1,1)}(\Omega(\eta_0))^{10} \quad (g = h_{12}, h_{34})
\]
by (\ref{eeee}). Since $\Theta_{(1,1,1)}(\Omega(\eta_0)) \ne 0$ (see Lemma \ref{evaluation}), we obtain (\ref{kapp}) for $h_{12}$ and $h_{34}$. In the same way, we see that (\ref{kapp}) holds for any member $h_{ij}$ of the generator system of $\Gamma(1-\zeta)$. Hence it holds for all $g \in \Gamma(1-\zeta)$. \end{proof}

\begin{tm} \label{tmmodring} (1). \ We have an isomorphism of $\mathbb{C}$--algebras:
\begin{align*}
\oplus_{n = 0}^{\infty} A_{5n}(F_g) &= \mathbb{C}[\Theta_{(a_1,a_2,a_3)}(\Omega(\eta))^5 \Theta_{(b_1,b_2,b_3)}(\Omega(\eta))^5] \\
&\cong \oplus_{n = 0}^{\infty} \mathrm{H}^0(X, \mathcal{O}_X(-2nK_X)).
\end{align*}
(2). \ $A_n(F_g) = \{0\} \quad \text{for} \ n \in \mathbb{N},\ n \equiv 1,2,3,4 \ \text{mod} \ 5$.
\end{tm}

\begin{proof} By Proposition \ref{propmod}, a function $f \in A_{5n}(F_g)$ defines the meromorphic function
\[
\frac{f(\eta)}{\Theta_{(1,1,1)}(\Omega(\eta))^{10n}}
\]
on $\mathbb{B}_2^A / \Gamma(1-\zeta)$. So, by Theorem \ref{maintheorem}, we have the isomorphism of $\mathbb{C}$--vector spaces:
\[
A_{5n}(F_g) \cong \mathrm{H}^0(X, \mathcal{O}_X(-2nK_X)) \quad \text{for} \ n \in \mathbb{N}.
\]
Hence we have assertion (1).
\par
Next let us recall that $X$ is the blow-up of $\mathbb{P}^2$ at 4 points. We denote this blow-up by $\pi : X \rightarrow \mathbb{P}^2$. Then the N\'eron--Severi group $\mathrm{NS}(X)$ is freely generated by $E_1, E_2, E_3, E_4$ and $\pi^*H$, where $\{ E_i \}$ are the exceptional curves with respect to $\pi$, and $H$ is a general line on $\mathbb{P}^2$. For $n \notin 5 \mathbb{Z}$, there is no divisor $D$ on $X$ such that $5 D = -2n K_X$, since $-K_X = 3 \pi^* H - E_1 - E_2 - E_3 - E_4$. This implies assertion (2), since
\[
A_n(F_g)^5 \subset A_{5n}(F_g) \cong \mathrm{H}^0(X, \mathcal{O}_X(-2nK_X)).
\]
\end{proof}

\vskip3mm

\noindent {\bf ----- The Schwarz inverse for the Gauss HGDE $E_{2,1}(\frac{1}{5}, \frac{2}{5}, \frac{4}{5})$ -----}
\par
Let us consider the 1--dimensional disk
\[
\mathbb{B}_1 = \{ \eta \in \mathbb{B}_2^A \ : \ \eta_1 = 0\},
\]
and the degenerate period map
\begin{align*}
\Phi_{12} \ : L(12) \cong \mathbb{P}^1 \longrightarrow \mathbb{B}_1, \quad t \mapsto [&0:\int_{\gamma_2} \omega:\int_{\gamma_3} \omega], \\
&\omega = x^{-\frac{4}{5}} (x-1)^{-\frac{2}{5}} (x-t)^{-\frac{2}{5}}dx,
\end{align*}
as in Section \ref{sectiondl} (the parameter $\lambda$ is specialized as $(\lambda_1,\cdots,\lambda_5) = (0,0,1,t,\infty)$). Set
\[
\Gamma(1-\zeta)_1 = \{ g \in \Gamma(1-\zeta) \ : \ g(\mathbb{B}_1) = \mathbb{B}_1\}.
\]
As we mentioned in Section \ref{sectiondl}, this is the triangle group $\Delta (5,5,5)$ up to its center. Recall that these are the Schwarz map and the monodromy group for the Gauss hypergeometric differential equation $E_{2,1}(\frac{1}{5}, \frac{2}{5}, \frac{4}{5})$ (see (\ref{Gauss(5,5,5)})). We have an explicit description of the inverse:

\begin{tm} \label{Gaussinverse} The map
\[
\varTheta_{12} \ : \ \mathbb{B}_1 / \Gamma(1-\zeta)_1 \longrightarrow \mathbb{P}^1, \quad \eta \mapsto [\Theta_{(1,1,1)}(\Omega(\eta))^5 : -\Theta_{(1,1,9)}(\Omega(\eta))^5]
\]
is an isomorphism, and this is the inverse map of the Schwarz map
\[
\Phi_{12} \ : \mathbb{P}^1 \longrightarrow \mathbb{B}_1 / \Gamma(1-\zeta)_1, \quad [1:t] \mapsto [0:\int_{\gamma_2} \omega:\int_{\gamma_3} \omega].
\]
\end{tm}

\begin{proof} By Theorem \ref{maintheorem}, the restriction of the meromorphic function
\[
\frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,1,1)}(\Omega(\eta))^5}
\]
to $L(12)$ is of degree 1. In fact, $L(12) \cap L(13) = L(12) \cap L(14) = L(12) \cap L(15) = \phi$, so the numerator vanishes only at $L(12) \cap L(35)$, with order 1; the denominator vanishes only at $L(12) \cap L(45)$, with order 1; and $L(12) \cap L(35) \ne L(12) \cap L(45)$ (see Section \ref{sectionconfig}). Hence the map $\varTheta_{12}$ is an isomorphism. Moreover, by Theorem \ref{maintheorem}, we have the equality
\[
\frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,1,1)}(\Omega(\eta))^5} = - \frac{(\lambda_1 - \lambda_3)(\lambda_3 - \lambda_5) (\lambda_5 - \lambda_2)(\lambda_2 - \lambda_4)(\lambda_4 - \lambda_1)} {(\lambda_1 - \lambda_3)(\lambda_3 - \lambda_2) (\lambda_2 - \lambda_4)(\lambda_4 - \lambda_5)(\lambda_5 - \lambda_1)}
\]
on $\mathbb{B}_2^A / \Gamma(1-\zeta) \cong X(2,5)$, and this induces the equality
\[
\frac{\Theta_{(1,1,9)}(\Omega(\eta))^5}{\Theta_{(1,1,1)}(\Omega(\eta))^5} = - \frac{(\lambda_3 - \lambda_5)(\lambda_4 - \lambda_1)} {(\lambda_3 - \lambda_1)(\lambda_4 - \lambda_5)}
\]
on $L(12)$.
Putting $(\lambda_1,\lambda_3,\lambda_4,\lambda_5) = (0,1,t,\infty)$, we obtain
\[
\frac{(\lambda_3 - \lambda_5)(\lambda_4 - \lambda_1)} {(\lambda_3 - \lambda_1)(\lambda_4 - \lambda_5)} = t.
\]
\end{proof}

\vskip3mm

Let us consider a holomorphic function $f$ on $\mathbb{B}_1$ satisfying the condition
\[
f(g \eta) = F_g(\eta)^k f(\eta)\quad \text{for} \ g \in \Gamma(1-\zeta)_1,
\]
and let us denote the $\mathbb{C}$--vector space of such functions by $M_k(F_g)$.

\vskip3mm

\begin{cor} (1). \ We have an isomorphism of $\mathbb{C}$--algebras:
\begin{align*}
&\oplus_{n = 0}^{\infty} M_{5n}(F_g) \\
= &\mathbb{C}[\Theta_{(1,1,1)}(\Omega(\eta))^{10},\ \Theta_{(1,1,1)}(\Omega(\eta))^5 \Theta_{(1,1,9)}(\Omega(\eta))^5, \ \Theta_{(1,1,9)}(\Omega(\eta))^{10}] \\
\cong &\mathbb{C}[x_0^2, \ x_0x_1, \ x_1^2] \\
\cong &\oplus_{n = 0}^{\infty} \mathrm{H}^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(-nK_{\mathbb{P}^1})),
\end{align*}
where $[x_0:x_1]$ are homogeneous coordinates of $\mathbb{P}^1$. \\
(2). \ $M_n(F_g) = \{0\} \quad \text{for} \ n \in \mathbb{N},\ n \equiv 1,2,3,4 \ \text{mod} \ 5$.
\end{cor}

\begin{proof} Assertion (1) is a direct consequence of Theorem \ref{tmmodring} and Theorem \ref{Gaussinverse}. Assertion (2) is obtained by the same argument as in the proof of Theorem \ref{tmmodring}. \end{proof}

\vskip3mm

\noindent {\bf Acknowledgments.} I express my sincere thanks to Professor Hironori Shiga for his advice during the preparation of this paper.

\end{document}
\begin{document}
\title{Some coefficient sequences related to the descent polynomial}

\begin{abstract}
The descent polynomial of a finite $I\subseteq \mathbb{Z}^+$ is the polynomial $d(I,n)$ whose evaluation at $n>\max(I)$ is the number of permutations on $n$ elements for which $I$ is the set of indices where the permutation is descending. In this paper we will prove some conjectures concerning coefficient sequences of $d(I,n)$. As a corollary we will describe some zero-free regions for the descent polynomial.
\end{abstract}

\section{Introduction}
Denote the group of permutations on $[n]=\{1,\dots,n\}$ by $\mathcal{S}_n$; for a permutation $\pi\in \mathcal{S}_n$, the set of descending positions is
\[
Des(\pi)=\{i\in [n-1]~|~\pi_i>\pi_{i+1}\}.
\]
We would like to investigate the number of permutations with a fixed descent set. More precisely, for a finite $I\subseteq \mathbb{Z}^+$ let $m=\max(I\cup \{0\})$. Then for $n>m$ we can count the number of permutations with descent set $I$, which we will denote by
\[
d(I,n)=|D(I,n)|=|\{\pi \in \mathcal{S}_n~|~ Des(\pi)=I\}|.
\]
This function was shown to be a polynomial of degree $m$ in $n$ by MacMahon in \cite{macmahon2004}. In order to investigate this polynomial we extend its domain to $\mathbb{C}$, and in this paper we call $d(I,n)$ the descent polynomial of $I$. This polynomial was recently studied in the article of Diaz-Lopez, Harris, Insko, Omar and Sagan \cite{Sagan}, where the authors found a new recursion motivated by the peak polynomial. That paper investigated the roots of descent polynomials and their coefficients in different bases. In this paper we will answer a few conjectures of \cite{Sagan}.

The coefficient sequence $a_k(I)$ is defined uniquely through the following equation:
\[
d(I,n)=\sum_{k=0}^m a_k(I){n-m\choose k}.
\]
In \cite{Sagan} it was shown that the sequence $a_k(I)$ is non-negative, since it counts certain combinatorial objects.
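For small $n$ the values $d(I,n)$ can be computed directly from the definition by enumerating permutations; the following Python sketch is only a brute-force illustration (the helper names are ours, not from \cite{Sagan}).

```python
from itertools import permutations

def descent_set(pi):
    """Des(pi) for a permutation given as a tuple, with 1-based positions."""
    return {i + 1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1]}

def d(I, n):
    """Number of permutations of [n] whose descent set is exactly I (n > max I)."""
    I = set(I)
    return sum(1 for pi in permutations(range(1, n + 1)) if descent_set(pi) == I)
```

For instance $d(\{1\},n)=n-1$: the first entry can be any element except the minimum, and the remaining entries must be placed in increasing order.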
By applying a suitable transformation to this sequence we are able to apply Stanley's theorem about the statistics of heights of a fixed element in a poset. As a result we prove \begin{repTh}{thm:a_base} If $I\neq \emptyset$, then the sequence $\{a_k(I)\}_{k=0}^m$ is log-concave, meaning that for any $0<k<m$ we have \[ a_{k-1}(I)a_{k+1}(I)\le a_k^2(I). \] \end{repTh} As a corollary of the proof of Theorem~\ref{thm:a_base} we get a bound on the roots of $d(I,n)$: \begin{repTh}{cor:root_a} If $I\neq \emptyset$ and $d(I,z_0)=0$ for some $z_0\in \mathbb{C}$, then $|z_0|\le m$. \end{repTh} As in \cite{Sagan} we will also consider the coefficient sequence $c_k(I)$, which is defined by the following equation \[ d(I,n)=\sum_{k=0}^m(-1)^{m-k}c_k(I){n+1\choose k}. \] By using a new recursion from \cite{Sagan} we prove that \begin{repPro}{prop:c_base} If $I\neq \emptyset$, then for any $0\le k\le m$ the coefficient $c_k(I)\ge 0$. \end{repPro} In the last section we will establish zero-free regions for descent polynomials. In particular we will prove the following. \begin{repTh}{th:root_main} If $I\neq \emptyset$ and $d(I,z_0)=0$ for some $z_0\in \mathbb{C}$, then $|z_0-m|\le m+1$. In particular, $\Re z_0 \ge -1$. \end{repTh} This paper is organized as follows. In the next section we define the two sequences $a_k(I)$ and $c_k(I)$, recall the two main recursions for the descent polynomial, and introduce one of our key ingredients. In Section~\ref{sec:c_base} we prove a conjecture concerning the sequence $c_k(I)$ and some consequences. In Section~\ref{sec:a_base} we prove a conjecture concerning the sequence $a_k(I)$, and in Section~\ref{sec:root} we prove some bounds on the roots. \section{Preliminaries} In this section we recall some recursions for the descent polynomial and we establish the related coefficient sequences by choosing different bases for the polynomials. 
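Both coefficient sequences are easy to extract numerically. The sketch below (our own code, not from \cite{Sagan}) reads off $a_k(I)$ as forward differences of $d(I,n)$ at $n=m$, and then obtains $c_k(I)$ from the resulting polynomial extension of $d(I,n)$:

```python
from itertools import permutations
from math import comb, factorial

def d(I, n):
    """Brute-force d(I, n) = |D(I, n)|, valid for n >= max(I)."""
    I = set(I)
    return sum(1 for p in permutations(range(1, n + 1))
               if {i for i in range(1, n) if p[i - 1] > p[i]} == I)

def binom(x, k):
    """C(x, k) for an arbitrary (possibly negative) integer x; always exact."""
    num = 1
    for i in range(k):
        num *= x - i
    return num // factorial(k)

def a_coeffs(I):
    """a_k(I) as forward differences: a_k = sum_j (-1)^(k-j) C(k,j) d(I, m+j)."""
    m = max(I)
    return [sum((-1) ** (k - j) * comb(k, j) * d(I, m + j) for j in range(k + 1))
            for k in range(m + 1)]

def c_coeffs(I):
    """c_k(I), read off from the polynomial extension p(n) = sum_k a_k C(n-m, k)."""
    m, a = max(I), a_coeffs(I)
    p = lambda n: sum(a[k] * binom(n - m, k) for k in range(m + 1))
    return [(-1) ** (m - k)
            * sum((-1) ** (k - j) * comb(k, j) * p(j - 1) for j in range(k + 1))
            for k in range(m + 1)]

print(a_coeffs({1, 3}), c_coeffs({1, 3}))   # [0, 5, 6, 2] [0, 1, 2, 2]
```

On such examples one can directly observe the non-negativity of both sequences and the log-concavity of the $a_k(I)$ asserted above.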
First of all, for the rest of the paper $I$ will always denote a finite subset of $\mathbb{Z}^+$, and $m(I)$ is the maximal element of $I\cup\{0\}$; when it is clear from the context, $m(I)$ will be denoted by $m$. Let us define the coefficients $a_k(I),c_k(I)$ for any $I$ with maximal element $m$ and $k\in\mathbb{N}$ through the following expressions: \[ d(I,n)=\sum_{k=0}^m a_k(I) {n-m\choose k} = \sum_{k=0}^m (-1)^{m-k}c_k(I){n+1\choose k} \] for $k\le m$, and $c_k(I)=a_k(I)=0$ if $k>m$. Observe that they are well-defined, since $\{{n-m\choose k}\}_{k\in\mathbb{N}}$ and also $\{{n+1\choose k}\}_{k\in\mathbb{N}}$ form a basis of the space of one-variable polynomials. From now on, we will refer to the first and second bases as the ``$a$-base'' and the ``$c$-base'', respectively. We will also consider another basis, which is likewise a Newton basis. As it turns out, these coefficients are integers; moreover, they are non-negative. To be more precise, in \cite{Sagan} it has been proved that $a_k(I)$ counts some combinatorial objects (i.e. the $a_k(I)$ are non-negative integers), and that $c_0(I)$ is non-negative. The authors of \cite{Sagan} also conjectured that each $c_k(I)\ge 0$; for a proof of the affirmative answer see Proposition~\ref{prop:c_base}. Next, we would like to establish two recursions for the descent polynomial, which will be used extensively in several proofs. Before that, we need the following notation. For an $\emptyset\neq I=\{i_1,\dots,i_l\}$ and $1\le t\le l$, let \begin{align*} I^-=&I-\{i_l\},\\ I_t=&\{i_1,\dots, i_{t-1}, i_t-1, \dots, i_l-1\}-\{0\},\\ \widehat{I}_t=&\{i_1,\dots, i_{t-1}, i_{t+1}-1, \dots, i_l-1\},\\ I'=&\{i_j~|~i_j-1\notin I\},\\ I''=&I'-\{1\}. \end{align*} 
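The following small helper (our own sketch; sets are 1-based and $t$ indexes $i_t$) implements these set operations, which may be handy for checking the recursions on concrete examples:

```python
def derived_sets(I):
    """Return (I^-, {t: I_t}, {t: Ihat_t}, I', I'') for a non-empty finite I."""
    i = sorted(I)
    l = len(i)
    I_minus = set(i[:-1])
    # I_t: decrement the elements from position t onward, then drop a possible 0
    I_t = {t: set(i[:t - 1] + [x - 1 for x in i[t - 1:]]) - {0}
           for t in range(1, l + 1)}
    # Ihat_t: remove i_t and decrement the later elements
    Ihat_t = {t: set(i[:t - 1] + [x - 1 for x in i[t:]])
              for t in range(1, l + 1)}
    I_prime = {x for x in i if x - 1 not in I}
    I_dprime = I_prime - {1}
    return I_minus, I_t, Ihat_t, I_prime, I_dprime

# For I = {1, 2, 5}: I^- = {1, 2}, I_1 = {1, 4}, Ihat_2 = {1, 4},
# I' = {1, 5} and I'' = {5}.
print(derived_sets({1, 2, 5}))
```
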
\begin{Pro}\label{cor:rec_a} If $I\neq \emptyset$, then \[ d(I,n)={n\choose m}d(I^-,m)-d(I^-,n). \] \end{Pro} In contrast to the simplicity of this recursion, its disadvantage is that it writes the descent polynomial of $I$ as a difference of two polynomials. In \cite{Sagan}, the authors found another way to write $d(I,n)$, as a sum of polynomials (Theorem 2.4 of \cite{Sagan}). Now we state an equivalent form, which fits our purposes better, and we also give its proof. \begin{Cor}\label{cor:rec} If $I\neq \emptyset$, then \begin{align}\label{rec} d(I,n+1)&=\\&d(I,n)+\sum_{i_t\in I''\setminus\{m\}}d(I_t,n)+\sum_{i_t\in I'\setminus\{m\}}d(\hat{I}_t,n)+d(I^-,m-1){n\choose m-1}.\nonumber \end{align} \end{Cor} \begin{proof} Let us recall the formula of Theorem 2.4 of \cite{Sagan}: \begin{eqnarray}\label{s_rec} d(I,n+1)=d(I,n)+\sum_{i_t\in I''}d(I_t,n)+\sum_{i_t\in I'}d(\hat{I}_t,n). \end{eqnarray} If $I=\{1\}$, then \eqref{rec} is trivially true. For $I\neq\{1\}$ we distinguish two cases. If $m\notin I'$ (and hence $m\notin I''$), then by definition $m-1\in I$. But then $m-1\in I^-$ and \[ d(I^-,m-1){n\choose m-1}=0. \] Therefore the right hand side of \eqref{rec} is the same as the right hand side of \eqref{s_rec}. If $m\in I'$ (and also $m\in I''$), then $i_l=m$, $\hat{I}_l=I^-\cup \{m-1\}$ and $I_l=I^-$. Now take the difference of the right hand sides of \eqref{rec} and \eqref{s_rec}: since $\max(\hat{I}_l)=m-1$ and $\hat{I}_l^-=I^-$, Proposition~\ref{cor:rec_a} gives $d(\hat{I}_l,n)={n\choose m-1}d(I^-,m-1)-d(I^-,n)$, hence \begin{gather*} d(I^-,m-1){n\choose m-1}-d(I_l,n)-d(\hat{I}_l,n)=\\ d(I^-,m-1){n\choose m-1}-d(I^-,n)-\left(d(I^-,m-1){n\choose m-1}-d(I^-,n)\right)=0. \end{gather*} Therefore the two right hand sides agree. \end{proof} It was conjectured in \cite{Sagan} that the coefficient sequence $\{a_k(I)\}_{k=0}^m$ is log-concave, by which we mean that for any $0<k<m$ we have \[ a_{k-1}(I)a_{k+1}(I)\le a_k(I)^2. \] In particular, this would imply that the sequence $\{a_k(I)\}_{k=0}^m$ is unimodal. 
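Both recursions can be sanity-checked by brute force. Here is a quick numerical check of Proposition~\ref{cor:rec_a} (our own test harness, with $d$ computed by direct enumeration):

```python
from itertools import permutations
from math import comb

def d(I, n):
    """Brute-force d(I, n) = |D(I, n)|, valid for n >= max(I) (and for I empty)."""
    I = set(I)
    return sum(1 for p in permutations(range(1, n + 1))
               if {i for i in range(1, n) if p[i - 1] > p[i]} == I)

def check_rec_a(I, n):
    """Proposition: d(I, n) == C(n, m) d(I^-, m) - d(I^-, n), m = max(I)."""
    m = max(I)
    I_minus = set(I) - {m}
    return d(I, n) == comb(n, m) * d(I_minus, m) - d(I_minus, n)

print(all(check_rec_a(I, n)
          for I in [{1}, {3}, {1, 2}, {1, 3}, {2, 4}]
          for n in range(max(I) + 1, max(I) + 4)))   # True
```
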
Our main tool for attacking this problem will be a result of Stanley about the height of a fixed element of a finite poset across all linear extensions. So let $P$ be a finite poset and $v\in P$ a fixed element, and denote the set of order-preserving bijections from $P$ to the chain $[1,2,\dots,|P|]$ by $\textrm{Ext}(P)$. Then the height polynomial of $v$ in $P$ is defined as \[ h_{P,v}(x)=\sum_{\phi\in \textrm{Ext}(P)}x^{\phi(v)-1} = \sum_{k=0}^{|P|-1}h_k(P,v)x^k. \] In other words, $h_k(P,v)$ counts how many linear extensions of $P$ there are such that exactly $k$ elements lie below $v$. In the special case when every element comparable with $v$ (other than $v$ itself) is bigger than $v$ in $P$, we can reformulate this: $h_k(P,v)$ counts how many linear extensions of $P$ there are such that exactly $k$ elements incomparable with $v$ lie below $v$. For such a case, we can combine two results of Stanley to obtain the following theorem. \begin{Th}\label{thm:stanley} Let $P$ be a finite poset, and $v\in P$ be fixed. Then the coefficient sequence $\{h_k(P,v)\}_{k=0}^{|P|-1}$ is log-concave. Moreover, if all elements comparable with $v$ are bigger than $v$ in $P$, then $\{h_k(P,v)\}_{k=0}^{|P|-1}$ is a decreasing, log-concave sequence. \end{Th} \begin{proof} The first part of the theorem is Corollary 3.3 of \cite{STANLEY198156}. For the second part we use the fact that $h_k(P,v)$ can be interpreted as the number of linear extensions such that exactly $k$ elements incomparable with $v$ are below $v$ in the extension. Then by Theorem 6.5 of \cite{Stanley1986} we obtain the desired statement. \end{proof} We will use this theorem in a special case. For any $I$ we define a poset $P_I$ on $\{u_1,\dots, u_{m+1}\}$ by setting $u_i >u_{i+1}$ if $i\in I$ and $u_i<u_{i+1}$ if $i\notin I$. Observe that any element comparable with $u_{m+1}$ is bigger in $P_I$, therefore the sequence $\{h_k(P_I,u_{m+1})\}_{k=0}^{m}$ is decreasing and log-concave. 
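For $P_I$ the height counts can be computed by enumerating $D(I,m+1)$ directly, since a linear extension $\phi$ corresponds to the permutation $\pi_i=\phi(u_i)$. The sketch below (our code) checks the decreasing, log-concave behaviour on a few examples:

```python
from itertools import permutations

def h_sequence(I):
    """h_k(P_I, u_{m+1}) for k = 0..m: the number of pi in D(I, m+1)
    with pi_{m+1} = k + 1, i.e. with exactly k elements below u_{m+1}."""
    m, s = max(I), set(I)
    h = [0] * (m + 1)
    for p in permutations(range(1, m + 2)):
        if {i for i in range(1, m + 1) if p[i - 1] > p[i]} == s:
            h[p[m] - 1] += 1
    return h

def decreasing_logconcave(h):
    return (all(h[k] >= h[k + 1] for k in range(len(h) - 1)) and
            all(h[k - 1] * h[k + 1] <= h[k] ** 2 for k in range(1, len(h) - 1)))

for I in [{1}, {2}, {1, 3}, {2, 3}, {1, 2, 4}]:
    print(sorted(I), h_sequence(I), decreasing_logconcave(h_sequence(I)))
```
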
We would like to remark that any linear extension of $P_I$ can be viewed as an element of $D(I,m+1)$. In this way we can write \[ h(I,x)=h_{P_I,u_{m+1}}(x)=\sum_{\pi\in D(I,m+1)}x^{\pi_{m+1}-1}. \] \section{Descent polynomial in ``$c$-base''}\label{sec:c_base} The aim of this section is to give an affirmative answer to Conjecture 3.7 of \cite{Sagan}, and to give some immediate consequences for the coefficients and evaluations. For corollaries concerning the roots of $d(I,n)$ see Section~\ref{sec:root}. We would like to remark at this point that the proof will be just an algebraic manipulation, not a ``combinatorial'' proof. However, giving such a proof could imply some kind of ``combinatorial reciprocity'' for descent polynomials. First, we translate the recursion of Corollary \ref{cor:rec} into the language of the coefficients $c_k(I)$. \begin{Lemma}\label{lem:c_rec} If $I\neq \emptyset$ and $0\le k\le m-1$, then \[ c_{k+1}(I)=\sum_{i_t\in I''\setminus\{m\}}c_k(I_t)+\sum_{i_t\in I'\setminus\{m\}}c_k(\widehat{I}_t)+d(I^-,m-1). \] \end{Lemma} \begin{proof} The idea is to rewrite the equation of Corollary~\ref{cor:rec} as \[ d(I,n+1)-d(I,n)=\sum_{i_t\in I''\setminus\{m\}}d(I_t,n)+\sum_{i_t\in I'\setminus\{m\}}d(\hat{I}_t,n)+d(I^-,m-1){n\choose m-1}, \] express both sides in the $c$-base, and then compare the coefficients of ${n+1\choose k}$. The left hand side can be written as \begin{gather*} d(I,n+1)-d(I,n)=\\ \sum_{k=0}^mc_k(I)(-1)^{m-k}{n+2\choose k}-\sum_{k=0}^mc_k(I)(-1)^{m-k}{n+1\choose k}=\\ \sum_{k=1}^mc_k(I)(-1)^{m-k}{n+1\choose k-1}=\\ \sum_{k=0}^{m-1}c_{k+1}(I)(-1)^{m-k-1}{n+1\choose k}. \end{gather*} Next we use the Chu--Vandermonde identity: \[ {n\choose m-1}=\sum_{k=0}^{m-1}{n+1\choose k}{-1\choose m-1-k}=\sum_{k=0}^{m-1}(-1)^{m-1-k}{n+1\choose k}. 
\] Therefore the right hand side can be written as: \begin{gather*} \sum_{i_t\in I''\setminus\{m\}}d(I_t,n)+\sum_{i_t\in I'\setminus\{m\}}d(\widehat{I}_t,n)+d(I^-,m-1){n\choose m-1}=\\ \sum_{k=0}^{m-1}(-1)^{m-1-k}\left(\sum_{i_t\in I''\setminus\{m\}}c_{k}(I_t)+\sum_{i_t\in I'\setminus\{m\}}c_{k}(\widehat{I}_t)+d(I^-,m-1)\right){n+1\choose k}. \end{gather*} We obtain that for any $0\le k\le m-1$, \begin{align*} (-1)^{m-k-1}&c_{k+1}(I)=\\ &(-1)^{m-1-k}\left(\sum_{i_t\in I''\setminus\{m\}}c_{k}(I_t)+\sum_{i_t\in I'\setminus\{m\}}c_{k}(\widehat{I}_t)+d(I^-,m-1)\right). \end{align*} By multiplying both sides by $(-1)^{m-k-1}$ we get the desired statement. \end{proof} Similarly, we can rephrase Proposition~\ref{cor:rec_a}, but we leave the proof to the reader. \begin{Lemma} If $I\neq\emptyset$ and $0\le k\le m$, then \begin{eqnarray}\label{eq:1} c_k(I)=d(I^-,m)-(-1)^{m-m^-}c_k(I^-), \end{eqnarray} where $m^-=m(I^-)$. \end{Lemma} The next theorem settles Conjecture~3.7 of \cite{Sagan}. We would like to point out that the non-negativity of $c_0(I)$ has already been proven in \cite{Sagan}, and one can use it to find a shortcut in the proof. However, we will give a self-contained proof. \begin{Th}\label{prop:c_base} For any $I$ and $0\le k\le m$, the coefficient $c_k(I)$ is a non-negative integer. \end{Th} \begin{proof} We will proceed by induction on $m$. If $m=0$, then $I=\emptyset$, thus \[ d(I,n)=1, \] therefore $c_0(I)=1\ge 0$. If $|I|=1$, then $I=\{m\}$ and \begin{gather*} d(I,n)={n\choose m}-1=\sum_{k=0}^m {n+1 \choose k}{-1\choose m-k}- {n+1\choose 0}=\\ \sum_{k=1}^m (-1)^{m-k}{n+1\choose k} + (-1)^{m-0}{n+1 \choose 0} (1-(-1)^m). \end{gather*} We obtain that \[ c_k(I)=\left\{\begin{array}{lc} 1&\textrm{if $0<k\le m$}\\ 2 & \textrm{if $k=0$ and $m$ is odd}\\ 0 & \textrm{if $k=0$ and $m$ is even} \end{array} \right. \] For the rest of the proof, we assume that the size of $I$ is at least 2. Therefore $m>1$, and $m^-=\max(I^-)>0$. 
Since for any $i_t\in I''$ (and $i_t\in I'$) the maximum of $I_t$ (and of $\widehat{I}_t$) is exactly $m-1$, we can use induction on them, i.e. $c_k(I_t)$ is a non-negative integer (and so is $c_k(\widehat{I}_t)$). On the other hand, $d(I^-,m-1)$ counts permutations with descent set $I^-$, so $d(I^-,m-1)$ is a non-negative integer. Now by Lemma \ref{lem:c_rec} and by the previous paragraph we have for any $k\ge 1$ that \begin{eqnarray} c_k(I)=\sum_{i_t\in I''\setminus\{m\}}c_{k-1}(I_t)+\sum_{i_t\in I'\setminus\{m\}}c_{k-1}(\widehat{I}_t)+d(I^-,m-1)\ge 0. \label{eq:2} \end{eqnarray} What remains is to prove that $c_0(I)\ge 0$. This is exactly the statement of Proposition 3.10 of \cite{Sagan}, but for completeness we also give its proof. We consider two cases. If $m-1\in I$, then by \eqref{eq:1} \[ c_0(I)=d(I^-,m)-(-1)^{m-(m-1)}c_0(I^-)=d(I^-,m)+c_0(I^-)\ge 1+0>0, \] since $m>\max(I^-)$. If $m-1\notin I$, then by \eqref{eq:2}, \[ c_1(I)\ge d(I^-,m-1)\ge 1. \] On the other hand, we can express $d(I,0)$ in two ways. The first equality is by Lemma 3.8 of \cite{Sagan}, the second is by the definition of $c_k(I)$: \[ (-1)^{\# I}=d(I,0)=\sum_{k=0}^m(-1)^{m-k}c_k(I){1\choose k}=(-1)^m(c_0(I)-c_1(I)), \] therefore \[ c_0(I)=c_1(I)+(-1)^{\# I+m}\ge 1+(-1)=0. \] \end{proof} As a corollary we will see that the values of the polynomial $d(I,n)$ at negative integers all have the same sign. This phenomenon is reminiscent of a ``combinatorial reciprocity'', by which we mean that there exists a sequence of ``nice sets'' $A_n$ parametrized by $n$, such that $(-1)^md(I,-n)=|A_n|$. We think that either proving the previous theorem using combinatorial arguments or finding a combinatorial reciprocity for $d(I,n)$ could provide an answer for the other. \begin{Cor} Let $n$ be a positive integer. Then \[ (-1)^md(I,-n)\ge 0. \] Moreover, if $n>1$, then $(-1)^md(I,-n)>0$. \end{Cor} \begin{proof} First assume that $n=1$. 
Then \[ (-1)^md(I,-1)=\sum_{k=0}^m(-1)^{-k}c_k(I){-1+1\choose k}=(-1)^0c_0(I){0\choose 0}=c_0(I), \] and by the previous theorem we know that $c_0(I)\ge 0$. Now assume that $n>1$. Then \begin{gather*} (-1)^m d(I,-n)=\sum_{k=0}^m c_k(I)(-1)^{-k}{-n+1 \choose k}=\\ \sum_{k=0}^m c_k(I)(-1)^{-k}(-1)^k{n+k-2 \choose k}=\sum_{k=0}^mc_k(I){n+k-2\choose k}> 0, \end{gather*} where the last inequality holds since every term is non-negative and $c_m(I)>0$. \end{proof} We would like to remark that in Section~\ref{sec:root} we will prove that in particular there is no root of $d(I,n)$ on the half-line $(-\infty, -1)$, that is, for any real number $z_0\in(-\infty,-1)$, the expression $(-1)^md(I,z_0)$ is always positive. Moreover, by carefully following the previous proofs one can observe that $d(I,-1)=0$ iff $c_0(I)=0$ iff $I=\{m\}$ where $m$ is even or $I=[m-2]\cup\{m\}$. \section{Descent polynomial in ``$a$-base''}\label{sec:a_base} In this section we would like to investigate the coefficients $a_k(I)$. In order to do that, we will need to understand the coefficients of $d(I,n)$ in the basis $\{{n-m+k\choose k+1}\}_{k=-1}^{m-1}$, defined by the following equation \[ d(I,n)=\overline{a}_{-1}(I){n-m-1\choose 0}+\overline{a}_0(I){n-m\choose 1}+\dots+\overline{a}_{m-1}(I){n-1\choose m}. \] Observe that $\overline{a}_{-1}(I)=0$, since \[ 0=d(I,m)=\overline{a}_{-1}(I){-1\choose 0}+\sum_{k=0}^{m-1}\overline{a}_k(I){k\choose k+1}=\overline{a}_{-1}(I), \] therefore from now on we concentrate on the coefficients $\overline{a}_k(I)$ for $0\le k\le m-1$. As it will turn out, all these coefficients are non-negative integers; moreover, each of them counts some combinatorial objects. On the other hand, this new coefficient sequence is closely related to the coefficients $a_k(I)$. To show the connection, we introduce two polynomials \begin{gather*} a(I,x)=\sum_{k=0}^ma_k(I)x^k,\\ \overline{a}(I,x)=\sum_{k=0}^{m-1}\overline{a}_k(I)x^k. \end{gather*} First we will show that $\overline{a}_k(I)=h_{m-k-1}(P_I,u_{m+1})$, i.e. $\overline{a}_k(I)$ counts the number of permutations from $D(I,m+1)$ such that there are $(k+1)$ elements above $u_{m+1}$. \begin{Pro} If $I\neq \emptyset$ and $0\le k\le m-1$, then \[ \overline{a}_k(I)=h_{m-k-1}(P_I,u_{m+1}). \] \end{Pro} \begin{proof} We will show that if $n>m$, then \[d(I,n)=\sum_{k=0}^{m-1}h_{m-k-1}(P_I,u_{m+1}){n-m+k\choose k+1}.\] This is enough, since $\{{n-m+k\choose k+1}\}_{k=-1}^{m-1}$ is a basis of the space of polynomials of degree at most $m$. Let us define the sets $B_k(I,n)=\{\pi\in D(I,n)~|~\pi_{m+1}=k\}$ for $1\le k\le n$. For any $\pi\in D(I,n)$ the last descent is between $m$ and $m+1$, therefore $\pi_{m}>\pi_{m+1}<\pi_{m+2}<\dots<\pi_{n}\le n$, i.e. $\pi_{m+1}\le m$. Therefore $B_k(I,n)=\emptyset$ for any $m<k\le n$, and $D(I,n)$ is a disjoint union of the sets $B_k(I,n)$ for $1\le k\le m$. Also observe that $|B_k(I,m+1)|=h_{k-1}(P_I,u_{m+1})$, since a linear extension $\phi$ with $\phi(u_{m+1})=k$ corresponds to a permutation with $\pi_{m+1}=k$. We claim that \[|B_k(I,n)|=|B_k(I,m+1)\times {[k+1,n]\choose m+1-k}|=|B_k(I,m+1)|{n-k\choose m+1-k}.\] To prove the first equality we establish a bijection. If $\pi\in B_k(I,n)$, then let $E_\pi=\{1\le i\le m~|~\pi_i>k\}$, $V_\pi=\{\pi_i~|~i\in E_\pi\}$ and $\pi|_{m+1}\in B_k(I,m+1)$ the unique induced linear ordering on the first $m+1$ elements. As before, for any $l>m+1$ the value $\pi_l$ is bigger than $\pi_{m+1}$, therefore $|E_\pi|=m+1-k$ and $V_\pi\subseteq [k+1,n]$ has size $m+1-k$. So let $f:B_k(I,n)\to B_k(I,m+1)\times {[k+1,n]\choose m+1-k}$ be defined as \[ f(\pi)=\left(\pi|_{m+1},V_{\pi}\right). \] Checking that the function $f$ is a bijection is left to the reader. Putting the pieces together, we have \begin{gather*} d(I,n)=|D(I,n)|=|\cup_{k=1}^m B_k(I,n)|=\sum_{k=1}^m |B_k(I,n)|=\\ \sum_{k=1}^m |B_k(I,m+1)\times {[k+1,n]\choose m+1-k}|=\\ \sum_{k=1}^m |B_k(I,m+1)|{n-k\choose m+1-k}=\\ \sum_{k=1}^m h_{k-1}(P_I,u_{m+1}){n-k\choose m+1-k}=\\ \sum_{l=0}^{m-1} h_{m-l-1}(P_I,u_{m+1}){n-m+l\choose l+1}. 
\end{gather*} \end{proof} \begin{Cor}\label{cor:bara_logconcave} If $ I\neq \emptyset$, then the sequence $\overline{a}_0(I),\overline{a}_1(I),\dots, \overline{a}_{m-1}(I)$ is a monotone increasing, log-concave sequence of non-negative integers. \end{Cor} \begin{proof} By the previous proposition this sequence lists the values $h_k(P_I,u_{m+1})$ in reverse order; in particular it consists of non-negative integers. By Theorem \ref{thm:stanley} the sequence $\{h_k(P_I,u_{m+1})\}_k$ is decreasing and log-concave, hence its reversal is increasing and log-concave. \end{proof} We just want to remark that since the polynomial $\overline{a}(I,x)$ has a monotone increasing, non-negative coefficient sequence, all of its roots are contained in the unit disk (see Figure~\ref{fig:bara}). \begin{figure} \caption{The roots of $\bar{a}(I,x)$.} \label{fig:bara} \end{figure} Our next goal is to establish a connection between the coefficients $a_k(I)$ and $\overline{a}_k(I)$. \begin{Pro} If $I\neq \emptyset$, then \[ a(I,x)=x\overline{a}(I,x+1). \] \end{Pro} \begin{proof} By definition we see that \begin{gather*} d(I,n)=\sum_{k=0}^{m-1} \overline{a}_k(I){n-m+k\choose k+1}=\sum_{k=0}^{m-1} \overline{a}_k(I)\sum_{l=0}^{k+1}{n-m\choose l}{k\choose k+1-l}=\\ \sum_{k=0}^{m-1} \overline{a}_k(I)\sum_{l=1}^{k+1}{n-m\choose l}{k\choose l-1}= \sum_{l=1}^{m}{n-m\choose l}\left(\sum_{k=l-1}^{m-1}\overline{a}_k(I){k\choose l-1}\right), \end{gather*} which means that $a_0(I)=0$ and $a_l(I)=\sum_{k=l-1}^{m-1}\overline{a}_k(I){k\choose l-1}$ for $1\le l\le m$, i.e. \[ a(I,x)=\sum_{l=1}^{m}x^l\left(\sum_{k=l-1}^{m-1}\overline{a}_k(I){k\choose l-1}\right). \] On the other hand, let us calculate the coefficients of $x\overline{a}(I,x+1)$. 
\begin{gather*} x\overline{a}(I,x+1)=x\left(\sum_{k=0}^{m-1}\overline{a}_k(I)(x+1)^k\right)=\\ x\left(\sum_{k=0}^{m-1}\overline{a}_k(I)\sum_{l=0}^{k}{k\choose l}x^l\right)= x\left(\sum_{l=0}^{m-1}x^l\sum_{k=l}^{m-1}\overline{a}_k(I){k\choose l}\right)=\\ \sum_{l=0}^{m-1}x^{l+1}\sum_{k=l}^{m-1}\overline{a}_k(I){k\choose l}=\\ \sum_{l=1}^{m}x^{l}\sum_{k=l-1}^{m-1}\overline{a}_k(I){k\choose l-1}=a(I,x). \end{gather*} \end{proof} As a corollary of the two previous propositions, we give a proof of Conjecture~3.4 of \cite{Sagan}. \begin{Cor}\label{thm:a_base} If $I\neq\emptyset$, then the sequence $a_0(I),a_1(I), \dots, a_m(I)$ is a log-concave sequence of non-negative integers. \end{Cor} \begin{proof} By Corollary~\ref{cor:bara_logconcave} we know that the coefficient sequence of the polynomial $\overline{a}(I,x)$ is log-concave, and by monotonicity, it is clearly without internal zeros. Therefore by the fundamental theorem of \cite{Brenti1989unimodal}, the coefficient sequence of the polynomial $\overline{a}(I,x+1)$ is log-concave. Since multiplication by $x$ only shifts the coefficient sequence, $x\overline{a}(I,x+1)=a(I,x)$ also has a log-concave coefficient sequence. \end{proof} \section{On the roots of $d(I,n)$}\label{sec:root} In this section we will prove four propositions about the locations of the roots of $d(I,n)$: two for general $I$, and two for some special ones. The first result is obtained by the technique of Theorem 4.16 of \cite{Sagan}, based on the non-negativity of the coefficients $c_k(I)$. In the second, we will prove a bound, linear in $m$, for the length of the roots of $d(I,n)$, which will be based on the monotonicity of the coefficients $\overline{a}_k(I)$. For the third we use arguments similar to those in the proof of the second statement. In the fourth we will prove real-rootedness for some special $I$ using Neumaier's Gershgorin-type result. First we will recall some basic notation from \cite{Sagan}. 
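Before turning to the root bounds, let us record a quick numerical sanity check of the previous section's results (our own code; names are ours). Here $\overline a_k(I)$ is read off from the statistic $\pi_{m+1}$ on $D(I,m+1)$, indexed so that the defining expansion of $d(I,n)$ holds (the first printed check verifies exactly this), and $a_k(I)$ is then recovered as the coefficient sequence of $x\overline a(I,x+1)$:

```python
from itertools import permutations
from math import comb

def d(I, n):
    I = set(I)
    return sum(1 for p in permutations(range(1, n + 1))
               if {i for i in range(1, n) if p[i - 1] > p[i]} == I)

def abar(I):
    """abar_k(I), k = 0..m-1, from the statistic pi_{m+1} on D(I, m+1)."""
    m, s = max(I), set(I)
    ab = [0] * m
    for p in permutations(range(1, m + 2)):
        if {i for i in range(1, m + 1) if p[i - 1] > p[i]} == s:
            ab[m - p[m]] += 1
    return ab

def a_from_abar(ab):
    """Coefficients of x * abar(I, x+1), as in the proposition above."""
    m = len(ab)
    return [0] + [sum(ab[k] * comb(k, l - 1) for k in range(l - 1, m))
                  for l in range(1, m + 1)]

I, m = {1, 3}, 3
ab = abar(I)
# the defining expansion d(I,n) = sum_k abar_k * C(n-m+k, k+1):
print(all(d(I, n) == sum(ab[k] * comb(n - m + k, k + 1) for k in range(m))
          for n in range(m + 1, m + 5)))   # True
print(ab, a_from_abar(ab))                 # [1, 2, 2] [0, 5, 6, 2]
```
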
Let $R_m$ be the region described by Theorem 4.16 of \cite{Sagan}, that is, $R_m=S_m\cup \overline{S_m}$ and \[ S_m=\{z\in\mathbb{C}~|~\arg(z)\ge 0 \textrm{ and } \sum_{i=1}^m\arg(z-i+1)<\pi\}. \] Then we have the following corollary of Proposition~\ref{prop:c_base}. \begin{Cor} Let $I$ be a finite set of positive integers. Then no element of $(m-2)-R_m$ is a root of $d(I,z)$. In particular, if $z_0$ is a real root of $d(I,z)$, then $z_0\ge -1$. \end{Cor} \begin{proof} Let $z\in\mathbb{C}$ be a complex number such that \[S=\{(-1)^0(z+1)_{\downarrow 0},\dots, (-1)^m(z+1)_{\downarrow m}\}\] is non-negatively independent, i.e. \[S=\{(-1-z)_{\uparrow 0},\dots, (-1-z)_{\uparrow m}\}\] lies in an open half-plane $H$ such that $1\in H$. This is equivalent to the fact that the points \[ S'=\{(m-2-z)_{\downarrow m}(-1-z)_{\uparrow 0}^{-1}, \dots, (m-2-z)_{\downarrow m}(-1-z)_{\uparrow m}^{-1}\} \] lie in an open half-plane, and this is the same set as \[ S'=\{(m-2-z)_{\downarrow m},(m-2-z)_{\downarrow m-1} \dots, (m-2-z)_{\downarrow 0}\}. \] But by Theorem 4.16 of \cite{Sagan}, we know that this set lies in an open half-plane iff $m-2-z\in R_m$. Therefore $S$ lies in an open half-plane iff $m-2-z\in R_m$ iff $z\in(m-2)-R_m$. Since the coefficients $c_k(I)$ are non-negative and not all zero, $d(I,z)$ is a non-negative combination of the elements of $S$ (up to a global sign), so $d(I,z)\neq 0$ whenever $S$ lies in an open half-plane. The last statement can be obtained from the fact that $(m-1,\infty)\subseteq R_m$. \end{proof} The following lemma will be useful in the upcoming proofs. \begin{Lemma}\label{lem:convex} Let $m>0$ be a given integer and assume that $|z|>m$. Then the lengths \[ \left|{z-m+k\choose k}\right| \] are increasing for $k=0,\dots,m$. In particular, if $\alpha_0,\dots,\alpha_{m}\in \mathbb{R}$, $\alpha_m\neq 0$ and $\sum_{i=0}^{m-1}|\alpha_i|\le |\alpha_m|$, then \[ \left|\alpha_m{z\choose m}\right| > \left|\sum_{k=0}^{m-1}\alpha_k{z-m+k\choose k}\right|, \] and consequently \[ \sum_{k=0}^m\alpha_k{z-m+k\choose k}\neq 0. \] \end{Lemma} \begin{proof} Let $0\le k\le m-1$ be fixed. 
Then, to see that the lengths are increasing, we consider the ratio of two consecutive elements: \begin{align*} \left|{z-m+k+1\choose k+1}\right|& \left|{z-m+k\choose k}\right|^{-1}=\\ &=\frac{|z-m+k+1|}{k+1}\ge \frac{|z|-m+k+1}{k+1}>1. \end{align*} Therefore the sequence is increasing. To see the second statement let us define $C=\sum_{i=0}^{m-1}|\alpha_i|$. If $C=0$, then the statement is trivially true. If $C\neq 0$, then the vector \[ v=\sum_{k=0}^{m-1}\frac{\alpha_k}{C}{z-m+k\choose k}=\sum_{k=0}^{m-1}\frac{|\alpha_k|}{C}\textrm{sign}(\alpha_k){z-m+k\choose k} \] is a convex combination of the vectors $\left\{\textrm{sign}(\alpha_k){z-m+k\choose k}\right\}_{k=0}^{m-1}$. Hence \[ |v|\le \left|{z-1\choose m-1}\right|, \] and \[ \left|\sum_{k=0}^{m-1}\alpha_k{z-m+k\choose k}\right|=C|v|\le C\left|{z-1\choose m-1}\right|<|\alpha_m|\left|{z\choose m}\right|. \] \end{proof} \begin{Cor}\label{cor:root_a} If $z_0$ is a root of $d(I,z)$, then $|z_0|\le m$. \end{Cor} \begin{proof} Let us consider the polynomial $p(z)=(z-1)\bar{a}(I,z)$, and let $p_i$ (resp. $\bar{a}_i$) be the coefficient of $z^i$ in $p$ (resp. $\bar{a}$), i.e. \[ p(z)=\sum_{i=0}^mp_iz^i\qquad \bar{a}(I,z)=\sum_{i=0}^{m-1}\bar{a}_i z^i. \] The relation of $p$ and $\bar{a}$ translates as follows: \[ p_i=\left\{\begin{array}{lc} \bar{a}_{m-1} & \textrm{if $i=m$}\\ \bar{a}_{i-1}-\bar{a}_{i} & \textrm{if $0<i<m$}\\ -\bar{a}_0 & \textrm{if $i=0$} \end{array} \right. \] and \[ d(I,n)=\sum_{k=0}^mp_k{n-m+k\choose k}. \] Since the coefficient sequence of $\bar{a}(I,z)$ is non-decreasing by Corollary~\ref{cor:bara_logconcave}, all coefficients of $p$ except $p_m$ are non-positive and their sum is 0. In other words, for any $k\in\{0,1,\dots,m-1\}$: \[ |p_k|=-p_k \] and \[ \sum_{k=0}^{m-1}|p_k|=-\sum_{k=0}^{m-1}p_k=\bar{a}_{m-1}=p_m>0. \] Therefore by Lemma~\ref{lem:convex}, if $|z|>m$, then \[ d(I,z)=\sum_{k=0}^mp_k{z-m+k\choose k}\neq 0. 
\] \end{proof} In the previous proof we did not use the fact that $\bar{a}_k(I)$ is a log-concave sequence; it would be interesting if one could make use of it. Our next goal is to prove Theorem~\ref{th:root_main}. In order to prove it, we have to distinguish a few cases depending on the number of consecutive elements ending at $\max(I)$. For simplicity, first we consider the case when the distance between the last two elements is at least 2. \begin{Pro}\label{pro:c} Let $I=\{i_1,\dots,i_l\}$ for some $l\ge 1$, such that $|I|=1$ or $i_l-i_{l-1}\ge 2$. If $d(I,z_0)=0$, then \[ |m-1-z_0|\le m. \] In particular $\Re z_0\ge -1$. \end{Pro} \begin{proof} Let us consider $p(n)=(-1)^md(I,m-1-n)$, expressed using the coefficients $c_k(I)$: \begin{gather*} d(I,-(n-m+1))=\sum_{k=0}^m (-1)^{m-k}c_k(I){-(n-m+1)+1\choose k}=\\ \sum_{k=0}^m (-1)^{m-k}c_k(I)(-1)^k{n-m+k-1\choose k}=\\ (-1)^m\sum_{k=0}^m c_k(I){n-m+k-1\choose k}. \end{gather*} This computation might be familiar from the proof of Corollary~\ref{cor:root_a}. As before, we expand $p(n)$ in the basis $\{{n-m+k\choose k}\}_{k\in \mathbb{N}}$: \begin{gather*} p(n)=\sum_{k=0}^m c_k(I){n-m+k-1\choose k}=\\ \sum_{k=1}^m c_k(I)\left({n-m+k\choose k}-{n-m+k-1\choose k-1}\right) + c_0(I){n-m-1\choose 0}=\\ c_m(I){n\choose m}+\sum_{k=0}^{m-1} (c_k(I)-c_{k+1}(I)){n-m+k\choose k}=\\ \sum_{k=0}^m\tilde{c}_k(I){n-m+k\choose k}. \end{gather*} Now we claim that $\sum_{k=0}^{m-1}|\tilde{c}_k(I)|\le c_m(I)$. To prove this, we use induction on $|I|$ and $m$, together with the recursion of Lemma~\ref{lem:c_rec}. If $I=\{m\}$, then it can be easily checked. So for the rest assume that the statement is true for sets of size at most $l-1$ and with maximal element at most $m-1$. Let $|I|=l\ge 2$ with $i_l=m$ and assume that $i_l-i_{l-1}\ge 2$. 
Then \begin{gather*} \sum_{k=0}^{m-1}|c_k(I)-c_{k+1}(I)|=\\ |c_0(I)-c_1(I)|+\sum_{k=1}^{m-1}\left|\sum_{t\in I''\setminus\{m\}}c_{k-1}(I_t)-c_{k}(I_t)+\sum_{t\in I'\setminus\{m\}}c_{k-1}(\hat{I}_t)-c_{k}(\hat{I}_t)\right|\le\\ 1+ \sum_{k=0}^{m-2}\sum_{t\in I''\setminus\{m\}}|c_{k}(I_t)-c_{k+1}(I_t)|+\sum_{k=0}^{m-2}\sum_{t\in I'\setminus\{m\}}|c_k(\hat{I}_t)-c_{k+1}(\hat{I}_t)|. \end{gather*} For any $i_t\in I''\setminus\{m\}$ the two largest elements of $I_t$ are $i_{l-1}-1$ and $i_l-1=m-1$, so their difference is at least 2, and therefore we can use the inductive hypothesis. If $i_t\in I'\setminus\{m\}$, then either $\hat{I}_t$ has exactly one element, or $|\hat{I}_t|>1$. In this second case the largest element of $\hat{I}_t$ is $i_l-1=m-1$ and the second largest is $i_{l-2}$ or $i_{l-1}-1$. Clearly in each case the inductive hypothesis applies, therefore \begin{gather*} \sum_{k=0}^{m-1}|c_k(I)-c_{k+1}(I)|\le 1+\sum_{t\in I''\setminus\{m\}}c_{m-1}(I_t)+\sum_{t\in I'\setminus\{m\}}c_{m-1}(\hat{I}_t)=\\ 1+c_{m}(I)-d(I^-,m-1)\le c_{m}(I)=\tilde{c}_m(I). \end{gather*} The last inequality is true since $m-1> \max(I^-)$, so $d(I^-,m-1)\ge 1$. So we obtained that $\sum_{k=0}^{m-1}|\tilde{c}_k(I)|\le c_m(I)$, therefore by Lemma~\ref{lem:convex}, if $|z|>m$, then \[ 0\neq \sum_{k=0}^m\tilde{c}_k(I){z-m+k\choose k}=p(z)=(-1)^m d(I,m-1-z), \] equivalently, if $|m-1-z_0|>m$, then $d(I,z_0)\neq 0$. \end{proof} We would like to remark on two facts about the previous proof. First of all, the ``new'' coefficients $\tilde{c}_k(I)$ introduced here are exactly \[ \tilde{c}_k(I)=d(I^c,k) =\left\{ \begin{array}{cc} (-1)^{m+|[k+1,\infty)\cap I|+k}d(I\cap[k-1],k)& \textrm{ if $k\in I$}\\ 0 & \textrm{ otherwise} \end{array} \right., \] where $I^c=[m]\setminus I$, therefore \[ d(I,n)=\sum_{k=0}^m (-1)^{m-k}\tilde{c}_k(I){n\choose k}=\sum_{k=0}^m(-1)^{m-k}d(I^c,k){n\choose k}. 
\] Secondly, we cannot extend the proof to arbitrary $I$, because the crucial statement, namely $\sum_{k=0}^{m-1}|c_k(I)-c_{k+1}(I)|\le c_m(I)$, is not true for every $I\subseteq \mathbb{Z}^+$ (e.g. $I=\{1,2,3,4,5\}$). From now on we would like to understand the roots for $I$'s with ``non-trivial endings''. To analyze these cases we introduce, for the rest of the paper, the following notation: for any finite set $I\subseteq\mathbb{Z}^+$ and $t\in\mathbb{N}$ let $I^t=I\cup\{m+1,m+2,\dots, m+t\}$. \begin{Pro}\label{pro:c_root_small} Let $\emptyset \neq I$ be such that $m-1\notin I$. If $t=1,2,3,4$, then there exists an $m_0=m_0(t)$ such that if $m\ge m_0$ and $d(I^t,z_0)=0$, then \[ |m+t-1-z_0|\le m+t. \] \end{Pro} \begin{proof} Let us consider $d(I^t,n)$ in the basis $\{{n\choose k}\}_{k\in \mathbb{N}}$. Then \[ d(I^t,n)=\sum_{k=0}^{m+t}(-1)^{m+t-k}d(I^c,k){n\choose k}, \] where $I^c=(I^t)^c=[m+t]\setminus I^t=[m]\setminus I$. We claim that if $t\in\{1,2,3,4\}$ and $m$ is sufficiently large, then for any $m\le k<m+t$ we have \begin{eqnarray}\label{eq:cond} 2d(I^c,k)\le d(I^c,k+1). \end{eqnarray} To see this, observe that all the roots $\xi_1,\dots,\xi_{m-1}$ of $d(I^c,n)$ are in a ball of radius $m-1$ around 0 by Corollary~\ref{cor:root_a}. We may order the roots so that $\xi_1=\max(I^c)=m-1$. Then \begin{align*} \frac{d(I^c,k)}{d(I^c,k+1)}&=\left|\frac{d(I^c,k)}{d(I^c,k+1)}\right|=\frac{(k-\xi_1)\prod_{i=2}^{m-1}|k-\xi_i|}{(k+1-\xi_1)\prod_{i=2}^{m-1}|k+1-\xi_i|}\\ &=\frac{k-m+1}{k-m+2}\prod_{i=2}^{m-1}\frac{|k-\xi_i|}{|k+1-\xi_i|}\le \frac{k-m+1}{k-m+2}\prod_{i=2}^{m-1}\frac{k+m-1}{k+m}\\ &\le \frac{t}{t+1}\left(\frac{2m+t-2}{2m+t-1}\right)^{m-2} = \frac{t}{t+1}\left(1-\frac{1}{2m+t-1}\right)^{m-2}\to\frac{t}{t+1}e^{-0.5}. \end{align*} Since $\frac{t}{t+1}e^{-0.5}<1/2$ for $t\le 4$, we get that for any $t\in\{1,2,3,4\}$ there exists an $m_0=m_0(t)$, such that $\forall m\ge m_0$ and for any $m\le k<m+t$ we have $2d(I^c,k)\le d(I^c,k+1)$. 
In particular, $2^{m+t-k}d(I^c,k)\le d(I^c,m+t)$ for $m\le k\le m+t$. To finish the proof let us assume that $m\ge m_0$ for some fixed $t\in\{1,2,3,4\}$. Then consider the polynomial $p(n)=(-1)^{m+t}d(I^t,m+t-1-n)$ as in the previous proof: \begin{align*} p(n)&=(-1)^{m+t}\sum_{k=0}^{m+t}(-1)^{m+t-k}d(I^c,k){m+t-1-n\choose k}\\ &=\sum_{k=0}^{m+t}d(I^c,k){n-(m+t)+k\choose k}. \end{align*} Assume that $z_0$ is a zero of $p(n)$ with length greater than $m+t$, i.e. \[ d(I^c,m+t){z_0\choose m+t}=\sum_{k=0}^{m+t-1}(-d(I^c,k)){z_0-(m+t)+k\choose k}. \] By the previous proof we get that $\sum_{k=0}^{m-1}|d(I^c,k)|\le d(I^c,m)$, therefore \begin{align*} C=\sum_{k=0}^{m+t-1}|-d(I^c,k)|&\le d(I^c,m)+\sum_{k=m}^{m+t-1}d(I^c,k)\\ &\le 2^{-t}d(I^c,m+t)+\sum_{k=m}^{m+t-1}2^{-(m+t-k)}d(I^c,m+t)\\ &=d(I^c,m+t). \end{align*} But this means that $\frac{d(I^c,m+t)}{C}{z_0\choose m+t}$ is a convex combination of $\mathcal{F}=\{\epsilon_k{z_0-(m+t)+k\choose k}\}_{k=0}^{m+t-1}$, where $\epsilon_k=\textrm{sgn}(-d(I^c,k))$. However, this is a contradiction, since $\frac{d(I^c,m+t)}{C}\ge 1$ and ${z_0\choose m+t}$ is strictly longer than any member of the set $\mathcal{F}$. \end{proof} A trivial upper bound on $m_0$ is the smallest $m'_0$ such that for any $m\in[m_0',\infty)$ we have \begin{eqnarray}\label{eq:cond_small} \frac{t}{t+1}\left(1-\frac{1}{2m+t-1}\right)^{m-2}<1/2. \end{eqnarray} These values can be found in Table~\ref{table:values}. \begin{Lemma}\label{lem:c} Let $\emptyset \neq I$ be such that $m-1\notin I$ and \begin{eqnarray}\label{eq:cond_big} (m-1)(2m+1)\le {t+m-1\choose t}. \end{eqnarray} Then \[ d(I^c,m)(2m+1)\le d(I^c,m+t). \] \end{Lemma} \begin{proof} First of all, \[ d(I^c,m)\le d((I^c)^-,m-1)(m-1), \] because $m-1\in I^c$ forces $\pi_{m-1}>\pi_m$ (so $\pi_m\le m-1$), and hence any $\pi \in D(I^c,m)$ can be written uniquely as an element of $D((I^c)^-,m-1)\times [1,m-1]$. 
On the other hand, \[ d(I^c,m+t)\ge {t+m-1\choose t}d((I^c)^-,m-1), \] because the left hand side counts the number of elements in $D(I^c,m+t)$, while the right hand side is the number of elements $\pi$ in $D(I^c,m+t)$ such that $\pi_{m} =1$. Combining these inequalities and using the hypothesis we get the desired statement. \end{proof} \begin{Pro}\label{pro:c_root_big} Let $\emptyset \neq I$ be such that $m-1\notin I$. If \[ (m-1)(2m+1)\le {t+m-1\choose t} \] and $d(I^t,z_0)=0$, then \[ |m+t-z_0|\le m+t+1. \] \end{Pro} \begin{proof} Let us consider the polynomial $p(n)=(-1)^{m+t}d(I^t,m+t-n)$: \begin{align*} p(n)&=(-1)^{m+t}d(I^t,m+t-n)=\sum_{k=0}^{m+t}d(I^c,k){-t-m+n+k-1\choose k}\\ &=\sum_{k=0}^{m-1}d(I^c,k){n-m-t+k-1\choose k}+\sum_{k=m}^{m+t}d(I^c,k){n-m-t+k-1\choose k}\\ &=u(n)+\sum_{k=m}^{m+t}d(I^c,k){n-m-t+k-1\choose k}. \end{align*} As a result of the proof of Proposition~\ref{pro:c} we get that if $|z|>m$, then \[ |u(z+t+1)|=\left|\sum_{k=0}^{m-1}d(I^c,k){z-m+k\choose k}\right|< \left|d(I^c,m){z\choose m}\right|. \] So if $|z|>m+t+1$, then $|z-(t+1)|>m$ and therefore \begin{align*} |u(z)|&\le d(I^c,m)\left|{z-t-1\choose m}\right|\\ &\le d(I^c,m) \left(\left|{z-t\choose m}\right|+ \left|{z-t-1\choose m-1}\right|\right)\\ &= d(I^c,m) \left(\left|\frac{(m+t)\dots(m+1)}{z(z-1)\dots(z-t+1)}\right|+ \left|\frac{(m+t)\dots m}{z(z-1)\dots(z-t)}\right| \right) \left|{z\choose m+t}\right|\\ &<d(I^c,m) \left(\frac{2m+1}{t+m+1}\right)\left|{z\choose m+t}\right|. \end{align*} Let us assume that $p(z)=0$ for some $|z|>m+t+1$. Then \[ d(I^c,m+t){z\choose m+t} = \sum_{k=m-1}^{m+t-1}\left(d(I^c,k+1)-d(I^c,k)\right){z-m-t+k\choose k}+u(z), \] equivalently \[ {z\choose m+t} = \sum_{k=m-1}^{m+t-1}\frac{d(I^c,k+1)-d(I^c,k)}{d(I^c,m+t)}{z-m-t+k\choose k}+\frac{1}{d(I^c,m+t)}u(z).
\] Observe that the summation on the right hand side is a convex combination of some complex numbers, so its length can be bounded from above by the length of the longest vector, that is, \begin{align*} \left|\sum_{k=m-1}^{m+t-1}\frac{d(I^c,k+1)-d(I^c,k)}{d(I^c,m+t)}{z-m-t+k\choose k}+\frac{1}{d(I^c,m+t)}u(z)\right|\le \\ \left|{z-m-t+m+t-1\choose m+t-1}\right|+|u(z)|\\ <\frac{t+m}{t+m+1}\left|{z\choose m+t}\right| + \frac{d(I^c,m)}{d(I^c,m+t)} \left(\frac{2m+1}{t+m+1}\right)\left|{z\choose m+t}\right|\\ =\left(\frac{t+m}{t+m+1} + \frac{d(I^c,m)}{d(I^c,m+t)} \left(\frac{2m+1}{t+m+1}\right)\right)\left|{z\choose m+t}\right|. \end{align*} We claim that \[ \frac{t+m}{t+m+1} + \frac{d(I^c,m)}{d(I^c,m+t)} \left(\frac{2m+1}{t+m+1}\right)\le 1, \] equivalently \begin{eqnarray}\label{eq:more_room} d(I^c,m)(2m+1)\le d(I^c,m+t), \end{eqnarray} but this is exactly the statement of Lemma~\ref{lem:c}. Therefore we get that \[ \left|{z\choose m+t}\right|< \left(\frac{t+m}{t+m+1} + \frac{d(I^c,m)}{d(I^c,m+t)} \left(\frac{2m+1}{t+m+1}\right)\right)\left|{z\choose m+t}\right| \le \left|{z\choose m+t}\right|, \] which is a contradiction. So we obtain that any root of $p(n)$ has length at most $m+t+1$. Therefore if \[ 0=d(I^t,z_0)=d(I^t,m+t-(m+t-z_0))=(-1)^{m+t}p(m+t-z_0), \] then $|m+t-z_0|\le m+t+1$. \end{proof} \begin{Rem} With some easy calculation one could get the smallest value $m_0(t)$, for each $t$, such that the conditions of the corresponding propositions are satisfied for any $m\ge m_0(t)$. Specifically, this means that if $\max(I)>10$, then one of the conditions is satisfied. For $\max(I)\le 10$ we refer to Figure~\ref{fig:small_cases}, where we included all the possible roots of $d(I,n)$, depending on $m=\max(I)$, together with the following regions: the ball (blue) of radius $m$ around $0$, the ball (blue) of radius $m+1$ around $m$, and the ball (red) of radius $(m+1)/2$ around $(m-1)/2$.
Observe that in Proposition~\ref{pro:c_root_big} the crucial inequality was \eqref{eq:more_room}; checking this condition for these 84 cases, we end up with 16 cases in which \eqref{eq:more_room} is not satisfied. \begin{table}[h] \begin{tabular}{c|cccc} $t$ & Corollary~\ref{pro:c} & Condition~\eqref{eq:cond_small} & Condition~\eqref{eq:cond_big} & Condition~\eqref{eq:more_room}\\ \hline 0 & \bf{1} & - & - & -\\ 1 & - & \bf{3} & - & -\\ 2 & - & \bf{6} & - & -\\ 3 & - & 14 & \bf{8}& (3) \\ 4 & - & 53 & \bf{3}& (2) \\ $\ge 5$ & - & - & \bf{1} & (1) \end{tabular} \caption{Smallest values of $m_0(t)$ such that the corresponding conditions are satisfied for any $m\ge m_0(t)$. There are 84 $I$'s that do not satisfy any of the first 3 conditions, and 16 of them do not satisfy any of the 4 conditions.} \label{table:values} \end{table} \end{Rem} By combining the previous four propositions and checking the uncovered cases of the table (see Figure~\ref{fig:small_cases}) we obtain the following theorem. \begin{Th}\label{th:root_main} For any $\emptyset\neq I$, if $d(I,z_0)=0$, then \begin{enumerate} \item $|z_0|\le m$, \item $|m-z_0|\le m+1$. \end{enumerate} In particular, $-1\le \Re z_0\le m$. \end{Th} \begin{figure} \caption{Roots of $d(I,n)$ for $m=\max(I)\in\{3,\dots,10\}$.} \label{fig:small_cases} \end{figure} As the previous theorem shows, all the complex roots of $d(I^t,n)$ have their real parts between $-1$ and $m+t$. In the following proposition we show that if $t$ is large enough, then all the roots of $d(I^t,n)$ are real. \begin{Pro}\label{pro:real} Let $I\neq \emptyset$ be such that $m-1\notin I$. Then there exists a $t_0=t_0(I)\in \mathbb{N}$ such that for any $t>t_0$ and $v\in\{-1,0,\dots, m+t\}\setminus\{m-1\}$ there exists a unique root of $d(I^t,n)$ within distance $1/4$ of $v$. In particular, the roots of $d(I^t,n)$ are contained in the interval $[-1,m+t]$.
\end{Pro} \begin{proof} The proof is based on Neumaier's Gershgorin-type results on the location of roots of polynomials. For further reference see \cite{Neumaier}. Let \[p_t(n)=\frac{d(I^t,n)}{\prod_{i=1}^t(n-(m+i))}\] and \[T(n)=n(n-1)\dots(n-m+2)(n-m),\] and let us fix the value of $t$. Then the leading coefficient of $p_t$ is \[ \frac{d(I^{t-1},m+t)}{(m+t)!}, \] it has degree $m$, and for $v=0,\dots,m-2,m$ \[ |\alpha_v|=\frac{|d(I,v)|(m-1-v)}{v!(m-v)!\prod_{i=1}^t(m-v+i)}=\frac{|d(I,v)|(m-1-v)}{v!(m+t-v)!}. \] Therefore \begin{gather*} |r_v|=\frac{m}{2}\frac{|d(I,v)|(m-1-v)}{v!(m+t-v)!}\frac{(m+t)!}{d(I^{t-1},m+t)}=\frac{m}{2}\frac{|d(I,v)|(m-1-v)}{d(I^{t-1},m+t)}{m+t\choose v}. \end{gather*} If we can prove that $|r_v|\to 0$ as $t\to \infty$ for any $v=0,\dots,m-2,m$, then we are done. To prove this, we observe that \[ d(I^{t-1},m+t)\ge d(I^-,m-1){m+t-1\choose t}, \] since the set of permutations in $D(I^{t-1},m+t)$ with the largest element at position $m$ has size $d(I^-,m-1){m+t-1\choose t}$. To see this, choose the largest element $m+t$ for the $m$th position, take an arbitrary subset of $\{1,\dots,m+t-1\}$ after the $m$th position in decreasing order, and place the rest as $D(I^-,m-1)$ on the first $m-1$ positions through an order-preserving bijection of the base set. Therefore \begin{gather*} |r_v|\le \frac{m(m-1-v)}{2}\frac{|d(I,v)|}{d(I^-,m-1)}\frac{{m+t\choose v}}{{m+t-1\choose t}}=\\ \frac{m(m-1-v)}{2}\frac{|d(I,v)|}{d(I^-,m-1)}\frac{(m+t)(m-1)!}{v!}\frac{t!}{(m+t-v)!}=\\ C_{v,m}\frac{(m+t)t!}{(t+m-v)!}. \end{gather*} If $v=m$, then $|r_v|=0$, since $d(I,m)=0$. If $v\in\{0,\dots,m-2\}$, then \[ |r_v|\le C_{v,m}\frac{a_{v,m}(t)}{b_{v,m}(t)}, \] where $a_{v,m}(t)=t+m$ is a polynomial of degree 1, and $b_{v,m}(t)=\prod_{i=1}^{(m-v)}(t+i)$ is a polynomial of degree at least 2. Therefore $C_{v,m}\frac{a_{v,m}(t)}{b_{v,m}(t)}\to 0$ as $t\to \infty$, i.e. $|r_v|\to 0$.
\end{proof} \section{Some remarks and further directions} We described an interesting phenomenon in Section~\ref{sec:c_base}, namely that $c_k(I)$ and $(-1)^md(I,-n)$ are non-negative integers. This result suggests that there might be combinatorial proofs of these facts. \begin{Q} What do the coefficients $c_k(I)$ and the evaluations $(-1)^md(I,-n)$ count? \end{Q} There are two conjectured bounds on the roots of the descent polynomial: \begin{Pro}[Conjecture 4.3 of \cite{Sagan}] If $z_0$ is a root of $d(I,n)$, then \begin{itemize} \item $|z_0|\le m$, \item $\Re z_0\ge-1$. \end{itemize} \end{Pro} This conjecture follows from Theorem~\ref{th:root_main}. As a common generalization of its two parts, motivated by numerical computations for $m\le 13$ (see the red regions in Figure~\ref{fig:small_cases}), by a proof for the case $|I|=1$, and by Proposition~\ref{pro:real}, we conjecture that the roots of $d(I,n)$ lie in the disk one of whose diameters has endpoints $-1$ and $m$. More precisely: \begin{Conj} If $d(I,z_0)=0$, then $|z_0-\frac{m-1}{2}|\le\frac{m+1}{2}$. \end{Conj} Similarly to the descent polynomial, instead of counting permutations with a prescribed descent set, one could count permutations with prescribed positions of peaks (i.e. positions $i$ with $\pi_{i-1}<\pi_i>\pi_{i+1}$). As it turns out, this peak-counting function is not a polynomial; however, it can be written as a product of a polynomial and an exponential function in a ``natural way'' (see the precise definition in \cite{Sagan13peak}). This polynomial is the so-called peak polynomial. The peak polynomial behaves quite similarly to the descent polynomial, so it is natural to ask whether there is a deeper connection between them, and whether propositions similar to the ones already obtained can be proved. In line with this we propose a conjecture about the coefficients in a base similar to $\bar{a}_k(I)$.
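The root bounds and the disk conjecture above can be probed numerically for small cases. The following brute-force sketch is our own illustration (the function name and sampled values are ours, not from the paper): it counts permutations with prescribed descent set, interpolates the degree-$m$ polynomial for $I=\{1,3\}$, and inspects its roots.

```python
from itertools import permutations

import numpy as np

def d(I, n):
    """Brute-force d(I, n): permutations of [n] with descent set exactly I."""
    I = set(I)
    return sum(1 for p in permutations(range(1, n + 1))
               if {i for i in range(1, n) if p[i - 1] > p[i]} == I)

# Small sanity checks: d({1}, n) = n - 1, and the zigzag counts for I = {1, 3}.
assert [d({1}, n) for n in (3, 4, 5)] == [2, 3, 4]
assert d({1, 3}, 4) == 5 and d({1, 3}, 5) == 16

# Interpolate the degree-m polynomial for I = {1, 3} (so m = 3) from the
# m + 1 sample points n = m+1, ..., 2m+1, then look at its roots.
m = 3
xs = np.arange(m + 1, 2 * m + 2)
ys = [d({1, 3}, int(n)) for n in xs]
roots = np.sort(np.roots(np.polyfit(xs, ys, m)).real)
# The roots come out as -1, 1, 3: all satisfy |z| <= m and |m - z| <= m + 1,
# and lie in the conjectured disk |z - (m-1)/2| <= (m+1)/2.
assert np.allclose(roots, [-1.0, 1.0, 3.0], atol=1e-6)
```

For this $I$ the disk bound is attained on the boundary by the roots $-1$ and $3$, matching the diameter endpoints $-1$ and $m$.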
\begin{Conj} For the peak polynomial, the coefficients in the base $\{{n-m+k\choose k+1}\}_{k\in \mathbb{N}}$ form a symmetric, log-concave sequence of non-negative integers. \end{Conj} \noindent \textbf{Acknowledgments.} I would like to express my sincere gratitude to Bruce Sagan, who pointed out some corollaries of the behavior of the different coefficient sequences. I would also like to thank Alexander Diaz-Lopez for his helpful remarks. The research was partially supported by the MTA R\'enyi Institute Lend\"ulet Limits of Structures Research Group. \end{document}
\begin{document} \title[]{Einstein-Podolsky-Rosen steering based on semi-supervised machine learning} \author{Lifeng Zhang, Zhihua Chen} \thanks{Electronic address: [email protected]} \affiliation{School of Science, Jimei University, Xiamen 361021, China} \author{Shao-Ming Fei} \thanks{Electronic address: [email protected]} \affiliation{School of Mathematical Sciences, Capital Normal University, Beijing 100048, China,\\ Max Planck Institute for Mathematics in the Sciences, 04103 Leipzig, Germany} \begin{abstract} Einstein-Podolsky-Rosen (EPR) steering is a powerful nonlocal quantum resource in quantum information processing tasks such as quantum cryptography and quantum communication. Many criteria have been proposed in the past few years to detect steerability, both analytically and numerically. Supervised machine learning models such as support vector machines and neural networks have also been trained to detect EPR steerability. To implement supervised machine learning, however, one needs a large number of quantum states labeled by semidefinite programming, which is very time consuming. We present a semi-supervised support vector machine method which uses only a small portion of labeled quantum states in detecting quantum steering. We show with detailed examples that our approach can significantly improve the accuracy. \end{abstract} \maketitle \section{Introduction} Einstein-Podolsky-Rosen (EPR) steering is an intermediate quantum correlation between quantum entanglement and quantum nonlocality \cite{Wiseman}, wherein one party can steer the state of another distant party by performing measurements on her part of a shared bipartite state. EPR steering was first noted by Schr\"odinger in the context of the famous EPR paradox in 1935 \cite{Schrodinger,Einstein}. However, a rigorous definition of EPR steering was not proposed until 2007. EPR steering is proven to be asymmetric in general, that is, one party may steer the other but not vice versa \cite{Bowles, A-V-N, Bowles2}.
It has been shown that steering has many applications in quantum information processing, such as one-sided device-independent quantum key distribution \cite{Branciard}, randomness certification \cite{Passaro,Coyle}, subchannel discrimination \cite{Piani,Sun}, secret sharing \cite{Xiang}, quantum teleportation \cite{He}, coupled qubits and magnetoreception \cite{Ku}, no-cloning of quantum steering \cite{Chiu} and quantum networks \cite{Chen}. Many methods have been proposed to detect EPR steering, such as linear and nonlinear steering inequalities \cite{ineq1,ineq2,ineq3,ineq4,ineq5,ineq6}, uncertainty relations \cite{unc1,unc2,unc3}, the moment matrix approach \cite{mom} and the all-versus-nothing method \cite{A-V-N}. Many steering measures have also been proposed, which can be estimated by semidefinite programming \cite{SDP}. Nevertheless, it is generally very time consuming to detect the steerability of a state numerically, as a huge number of measurement directions has to be searched over. Efficient criteria for steerability are still far from satisfactory. Machine learning is a useful tool for classification problems, and it already has successful applications in quantum information processing tasks such as entanglement classification, nonlocality discrimination, Hamiltonian learning and phase transition identification \cite{svm1,svm2,svm3,svm4,svm5,svm6}. Machine learning methods such as support vector machines (SVM), artificial neural networks and decision trees have also been applied to EPR-steering detection and quantification \cite{Changliang, Zhang}. However, these are supervised machine learning methods, which require a large amount of labeled data, while labeling quantum states is a very time-consuming task. Semi-supervised machine learning was proposed to combine labeled and unlabeled data so as to improve the learning behavior \cite{Zhu X}.
Semi-supervised support vector machines combine support vector machines and semi-supervised learning; they make use of scarce labeled data together with abundant unlabeled data to improve the classification performance \cite{Ding}. The safe semi-supervised SVM (S4VM) was proposed to exploit multiple candidate low-density separators and to select representative ones among them; it is never significantly inferior to the inductive SVM \cite{S4VM}. Semi-supervised SVMs have been successfully applied to text and image classification, face recognition and many other areas \cite{ssvm1,ssvm2,ssvm3,ssvm4}. In this paper, we apply the semi-supervised SVM S4VM to quantum steering problems. We need only a small portion of labeled states, generated randomly and labeled by semidefinite programming, together with a large portion of unlabeled quantum states used in S4VM. Compared with the inductive SVM, the accuracies can be significantly improved. Moreover, less time is needed to label the quantum states, while high accuracy is still attained. \section{Supervised Machine Learning of EPR-Steering} A two-qubit quantum state, $\rho_{AB}=\frac{1}{4}(I_4+\sum\limits_{i=1}^3 r_i\sigma_i\otimes I_2+\sum\limits_{i=1}^3 s_i I_2\otimes \sigma_i+\sum\limits_{i,j=1}^3 t_{ij}\sigma_i\otimes\sigma_j)$, where $I_d$ is the $d\times d$ identity matrix and $\sigma_i$ ($i=1,2,3$) are the standard Pauli matrices, is said to admit a local hidden state (LHS) model if the probability $P(a,b|\textbf{A},\textbf{B})$ that Alice and Bob obtain outcomes $a$ and $b$ when performing the measurements $\textbf{A}$ and $\textbf{B}$, respectively, satisfies \begin{eqnarray}\label{1}\nonumber P(a,b|\textbf{A},\textbf{B})=&\rm{tr}[(M_{\textbf{A}}^a\otimes M_{\textbf{B}}^b)\rho_{AB}]\\ =&\sum p(\lambda)p(a|\textbf{A},\lambda)p_Q(b|\textbf{B},\lambda), \end{eqnarray} where $p_Q(b|\textbf{B},\lambda)=\rm{tr}[\rho_{\lambda} M_{\textbf{B}}^b]$ and $\rho_{\lambda}$ are qubit states specified by the parameter $\lambda$.
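The left-hand side of (\ref{1}) is an ordinary Born-rule probability. As a minimal numerical illustration (our own, not part of the paper's numerics), one can compute the joint outcome distribution for $\sigma_z$-measurements on the maximally entangled state:

```python
import numpy as np

# Pauli-z projectors M^a (a = 0, 1), the same for Alice and Bob.
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2).
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# P(a, b | A, B) = tr[(M_A^a (x) M_B^b) rho], the left-hand side of Eq. (1).
P = np.array([[np.trace(np.kron(M[a], M[b]) @ rho).real
               for b in (0, 1)] for a in (0, 1)])

assert np.isclose(P.sum(), 1.0)                  # a valid joint distribution
assert np.allclose(P, [[0.5, 0.0], [0.0, 0.5]])  # perfect z-correlations
```

Deciding whether such correlations admit a decomposition of the form on the right-hand side of (\ref{1}) is exactly the steering problem addressed below.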
The state $\rho_{AB}$ is said to be unsteerable if it satisfies relation (\ref{1}); otherwise we say that $\rho_{AB}$ is steerable from Alice to Bob. Equivalently, Alice cannot steer Bob if there exist probabilities $p_{\lambda}$ and a set of quantum states $\rho_{\lambda}$ such that \begin{equation} \rho^a_{\textbf{A}}=\rm{tr}_A((M^a_{\textbf{A}}\otimes I_2)\rho_{AB})=\sum p_{\lambda}p(a|\textbf{A},\lambda)\rho_{\lambda}, \end{equation} where $p(a|\textbf{A},\lambda)$ is a conditional probability distribution determined by $\textbf{A}$ and $\lambda$. For any $\rho_{AB}$, if Alice performs $m$ measurements $\textbf{A}\in\{0,1,2,\cdots,m-1\}$ with $q$ outcomes $a\in\{1,2,\cdots,q\}$, an assemblage $\{\rho_{\textbf{A}}^a\}_{\textbf{A},a}$ of $m$ ensembles is obtained. Denote $D(a|\textbf{A},\lambda)=\delta_{a,\lambda(\textbf{A})}$, that is, $D(a|\textbf{A},\lambda)=1$ if $\lambda(\textbf{A})=a$ and $D(a|\textbf{A},\lambda)=0$ if $\lambda(\textbf{A})\neq a$. The following SDP can be used to decide whether Alice can steer Bob for any given $\rho^a_{\textbf{A}}$ and $D(a|\textbf{A},\lambda)$ \cite{SDP}, \begin{eqnarray} \label{sdp} &\min\limits_{F_{a|\textbf{A}}}\rm{tr}\sum\limits_{a,\textbf{A}}F_{a|\textbf{A}}\rho^a_{\textbf{A}}\\ \nonumber &\rm{such}\ \rm{that}\ \sum\limits_{a,\textbf{A}}F_{a|\textbf{A}}D(a|\textbf{A},\lambda)\geq 0,\\ \nonumber &\rm{tr}(\sum\limits_{a,\textbf{A},\lambda}F_{a|\textbf{A}}D(a|\textbf{A},\lambda))=1. \nonumber \end{eqnarray} If the objective value of (\ref{sdp}) is negative for some measurements $\textbf{A}$, then $\rho_{AB}$ is steerable from Alice to Bob; otherwise there exists an LHS model. A state $\rho_{AB}$ is labeled $-1$ if the objective value is negative after running the SDP 100 times with different measurement settings; otherwise the state is labeled $+1$. In \cite{Changliang} an SVM is used as the classifier. For $m=2,3,\cdots,8$, $5000$ samples with label $+1$ and $5000$ samples with label $-1$ were obtained.
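The deterministic response functions $D(a|\textbf{A},\lambda)=\delta_{a,\lambda(\textbf{A})}$ entering the SDP constraints above can be enumerated explicitly: with $m$ settings and $q$ outcomes there are $q^m$ deterministic strategies $\lambda$. A small sketch (our own illustration, with hypothetical variable names):

```python
from itertools import product

import numpy as np

m, q = 3, 2  # m measurement settings, q outcomes per setting

# All q**m deterministic maps lambda: A -> a.
strategies = list(product(range(q), repeat=m))

# D[a, A, lam] = 1 iff lambda(A) = a, i.e. the delta in the SDP constraints.
D = np.zeros((q, m, len(strategies)))
for lam, s in enumerate(strategies):
    for A in range(m):
        D[s[A], A, lam] = 1.0

assert len(strategies) == q ** m         # 8 deterministic strategies here
assert np.allclose(D.sum(axis=0), 1.0)   # one outcome per setting and strategy
```

The SDP (\ref{sdp}) then has one positivity constraint per strategy $\lambda$, which is why its size grows quickly with $m$.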
The last $1000$ positive samples and $1000$ negative samples were reserved for testing; the remaining $4000$ positive and $4000$ negative samples were kept as the training set to learn a classifier. SVM is a supervised learning model used for classification and regression analysis. Given $(\textbf{x}_i,y_i)$, $i=1,2,\cdots,n$, where $n$ is the number of samples, $\textbf{x}_i$ $(i=1,2,\cdots,n)$ are the feature vectors of the samples and $y_i\in\{1,-1\}$ $(i=1,2,\cdots,n)$ are their labels. For linearly separable problems, SVM aims to find a hyperplane with the largest distance to the nearest feature vectors of each class. The hyperplane $H$ is defined by requiring $\omega \textbf{x}_i+b\geq 1$ when $y_i=1$ and $\omega \textbf{x}_i+b\leq -1$ when $y_i=-1$. Two marginal hyperplanes are defined by $H_1:$ $\omega \textbf{x}_i+b= 1$ and $H_2:$ $\omega \textbf{x}_i+b=-1$. The vectors on $H_1$ and $H_2$ are the support vectors. The distance $\frac{2}{\|\omega\|}$ between $H_1$ and $H_2$ is to be maximized, and the final separating hyperplane $H$ is the middle one between the two marginal hyperplanes. Hence the problem to be solved is \begin{eqnarray} &\min\limits_{\omega}\frac{1}{2}\omega^T\omega\\ \nonumber \rm{such}\ \rm{that}\ &y_i(\omega\textbf{x}_i+b)\geq 1. \nonumber \end{eqnarray} When classification errors $\{{\xi_i}\}_{i=1}^n$ are allowed, one arrives at the soft-margin problem \begin{eqnarray} &\min\limits_{\omega,b,\xi}\frac{1}{2}\omega^T\omega+C\sum\limits_{i=1}^n \xi_{i}\\ \nonumber \rm{such}\ \rm{that}\ &y_i(\omega\textbf{x}_i+b)\geq 1-\xi_{i},\\ \nonumber &\xi_i\geq 0, \end{eqnarray} where $\xi_i$ are slack variables ($\xi_i=0$ if there is no error for $\textbf{x}_i$) and $C$ is a tradeoff parameter between the error and the margin.
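To make the soft-margin objective concrete, here is a toy primal subgradient sketch of a linear soft-margin SVM (our own illustration on synthetic data; the paper itself uses a kernel SVM trained with standard tools):

```python
import numpy as np

def soft_margin_svm(X, y, C=1.0, lr=0.01, epochs=500):
    """Subgradient descent on (1/2)||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b))."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points with nonzero hinge loss
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

# Toy linearly separable data: class +1 around (2, 2), class -1 around (-2, -2).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 0.3, (20, 2)), rng.normal(-2, 0.3, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
w, b = soft_margin_svm(X, y)
assert np.all(np.sign(X @ w + b) == y)  # training set correctly separated
```

The slack variables correspond here to the active hinge-loss terms; the parameter $C$ plays exactly the error-versus-margin tradeoff role described above.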
For non-linearly separable problems, by using the kernel trick the problem to be solved becomes \begin{eqnarray} \label{svm} &\min\limits_{\omega,b,\xi}\frac{1}{2}\omega^T\omega+C\sum\limits_{i=1}^n \xi_{i}\\ \nonumber \rm{such}\ \rm{that}\ &y_i(\omega\phi(\textbf{x}_i)+b)\geq 1-\xi_{i},\\ \nonumber &\xi_i\geq 0, \end{eqnarray} where $\phi(\textbf{x})$ is the feature map associated with the kernel function \cite{svmp1,svmp2}. Concerning the steering detection problem, the training set $\{\textbf{x}_i,y_i\}$ $(i=1, 2, \cdots, 8000)$ and the radial basis function kernel $K(\textbf{x}_i,\textbf{x}_j)=e^{-\gamma\|\textbf{x}_i-\textbf{x}_j\|^2}$ are used, with the parameters $C$ and $\gamma$ determined by a grid search approach. The classifier is attained by solving (\ref{svm}) \cite{Changliang}. \section{Semi-supervised Machine Learning of EPR-Steering} A well-trained SVM needs a lot of labeled samples. For steering detection, generating the labeled samples and building the data set becomes increasingly time consuming as the number of measurements grows \cite{Changliang}. Semi-supervised SVMs can be used in classification problems with a small portion of labeled samples and a large number of unlabeled samples. The S3VM was first proposed to simultaneously learn the optimal hyperplane and the labels of the unlabeled instances \cite{s3vm, Zhu}.
Given a set of $l$ labeled data $\{\textbf{x}_i, y_i\}_{i=1}^l$ and a set of $u$ unlabeled data $\{\hat{\textbf{x}}_j\}_{j=l+1}^{l+u}$, $y_i\in\{1,-1\}$, one needs to solve the following optimization problem: \begin{eqnarray} \label{s3vm} &\min\limits_{\omega,b,\hat{y}\in{\emph{B}}}(\frac{1}{2}\|\omega\|^2+C_1\sum\limits_{i=1}^l \xi_i+C_2\sum\limits_{j=l+1}^{l+u} \hat{\xi}_j)\\ \nonumber & \rm{such}\ \rm{that} \quad y_i(\omega'\phi(\textbf{x}_i)+b)\geq 1-\xi_i,~~ \xi_i\geq 0\\ \nonumber &\hat{y}_{j}(\omega'\phi(\hat{\textbf{x}}_j)+b)\geq 1-\hat{\xi}_j,~~ \hat{\xi}_j\geq 0 \\ \nonumber &\forall i=1,\cdots,l,~\forall j=l+1,\cdots, l+u, \end{eqnarray} where $\emph{B}=\{\hat{y}\in \{\pm 1\}^u|-\beta\leq \frac{\sum\limits_{j=l+1}^{l+u}\hat{y}_{j}}{u}-\frac{\sum\limits_{i=1}^l y_i}{l}\leq \beta\}$ is the balance constraint that avoids the trivial solution. Unlike S3VM, S4VM was proposed to construct diverse large-margin separators, since multiple large-margin low-density separators may be consistent with the limited labeled data; the label assignment for the unlabeled instances is then optimized such that the worst-case performance improvement over the inductive SVM is maximized \cite{S4VM}. To recall S4VM in detail, one first has the following minimization problem: \begin{eqnarray} \label{s4} &\min\limits_{\{\omega_t,b_t,\hat{y}_t\in{\emph{B}}\}_{t=1}^T}\sum\limits_{t=1}^T(\frac{1}{2}\|\omega_t\|^2+C_1\sum\limits_{i=1}^l \xi_i+C_2\sum\limits_{j=l+1}^{l+u} \hat{\xi}_j)\\ \nonumber &+G \sum\limits_{1\leq t\neq \tilde{t}\leq T}\delta(\frac{\hat{y}'_t\hat{y}_{\tilde{t}}}{u}\geq 1-\varsigma),\\ \nonumber & \rm{such}\ \rm{that}\ y_i(\omega_t'\phi(\textbf{x}_i)+b_t)\geq 1-\xi_i,~~ \xi_i\geq 0\\ \nonumber &\hat{y}_{t,j}(\omega_t'\phi(\hat{\textbf{x}}_j)+b_t)\geq 1-\hat{\xi}_j,~~ \hat{\xi}_j\geq 0 \\ \nonumber &\forall i=1,\cdots,l,~\forall j=l+1,\cdots, l+u,~\forall t=1,\cdots, T.
\end{eqnarray} where $\emph{B}=\{\hat{y}_{t}\in \{\pm 1\}^u|-\beta\leq \frac{\sum\limits_{j=l+1}^{l+u}\hat{y}_{t,j}}{u}-\frac{\sum\limits_{i=1}^l y_i}{l}\leq \beta\}$, $\hat{y}_{t,j}$ is the $j$th entry of $\hat{y}_t$, $\delta$ is the indicator function, which represents a penalty enforcing diversity of the separators, $\varsigma\in [0,1]$ is a constant, and $G$ is a large constant enforcing large diversity. $C_1$ and $C_2$ are regularization parameters trading off the complexity and the empirical error on labeled and unlabeled data, and $T$ is the number of separators. Second, solving the minimization problem \eqref{s4} by a global simulated annealing search with a deterministic local search scheme, or by a sampling strategy, one arrives at the following problem: \begin{eqnarray} \label{sl} \bar{y}=\arg \max\limits_{y\in\{\pm 1\}^u}\min\limits_{\hat{y}\in M_0} J(y,\hat{y},y^{\rm{SVM}}), \end{eqnarray} where $y^{\rm{SVM}}$ denotes the labels predicted by the inductive SVM on the unlabeled instances, $M_0=\{\hat{y}_t\}_{t=1}^T$ is obtained from \eqref{s4}, and \begin{eqnarray} J(y,\hat{y},y^{\rm{SVM}})=&\rm{gain}(y,\hat{y},y^{\rm{SVM}})-\lambda \rm{loss}(y,\hat{y},y^{\rm{SVM}}) \nonumber\\ =&c_t' y + d_t \end{eqnarray} with \begin{eqnarray} &\rm{gain}(y,\hat{y},y^{\rm{SVM}})=\sum\limits_{j=l+1}^{l+u}\frac{1+y_j\hat{y}_j}{2}\frac{1-y^{\rm{SVM}}_j\hat{y}_j}{2},\\ \nonumber &\rm{loss}(y,\hat{y},y^{\rm{SVM}})=\sum\limits_{j=l+1}^{l+u}\frac{1-y_j\hat{y}_j}{2}\frac{1+y^{\rm{SVM}}_j\hat{y}_j}{2}, \\ \nonumber &c_t=\frac{1}{4}[(1+\lambda)\hat{y}_t+(\lambda-1)y^{\rm{SVM}}],\\ \nonumber &d_t=\frac{1}{4}[-(1+\lambda)\hat{y}'_ty^{\rm{SVM}}+(1-\lambda)u].\nonumber \end{eqnarray} The problem \eqref{sl} can then be reformulated as the following maximization problem, \begin{eqnarray} \label{st} &\max\limits_{y,\tau} \tau \hspace{4.5cm}\\ \nonumber \rm{such}\ \rm{that}\ &\tau \leq c_t' y + d_t,~ \forall t=1,2,\cdots, T;~ y\in\{\pm 1\}^u.
\end{eqnarray} The problem \eqref{st} can be solved as a convex linear program by relaxing $y\in\{\pm 1\}^u$ to $y\in[-1,1]^u$ and then projecting the solution back to an integer solution with minimum distance. If the objective value of the resulting integer solution is smaller than that of $y^{\rm{SVM}}$, then $y^{\rm{SVM}}$ is output as the final solution instead. For any two-qubit quantum state $\rho_{AB}$, set $\rho_0=({I}_2\otimes \rho_B^{\frac{1}{2}})\rho_{AB}({I}_2\otimes \rho_B^{\frac{1}{2}})$, where $\rho_B=\rm{tr}_A\rho_{AB}$. The coefficients $\tau_{kl}=\rm{tr}(\rho_0(\sigma_k\otimes\sigma_l))$ can be arranged into a nine-dimensional feature vector $\textbf{x}=[\tau_{11},\tau_{12},\cdots,\tau_{33}]^T$, where $T$ denotes transposition. Generating random quantum states and labeling them by SDP \cite{SDP, Changliang}, we get a set of $l$ feature vectors of labeled quantum states $\{\textbf{x}_i\}$, $i=1,2,\cdots,l$, with $y_i=-1$ for steerable states and $y_i=1$ for unsteerable states. Denote by $\{\hat{\textbf{x}}_j\}_{j=l+1}^{l+u}$ the set of feature vectors of the unlabeled quantum states, whose labels are determined by the S4VM method \cite{S4VM} described above. Radial basis function kernels $K(\textbf{x}_i,\textbf{x}_j)=e^{-\gamma\|\textbf{x}_i-\textbf{x}_j\|^2}$ are used here. The three hyperparameters $C_1$, $C_2$ and $\gamma$ are also determined by the grid search method. \section{Numerical Results} We randomly generate quantum states and obtain the class of steerable states among them by SDP. For balance, we choose half positive and half negative random quantum states. We implement SVMs by using $l$ labeled quantum states, the radial basis function kernel, ten-fold cross-validation and the grid search approach. The $u$ unlabeled quantum states are considered as the test sets and the test errors are obtained. For S4VM, the sampling strategy is implemented here, and the three hyperparameters $C_1$, $C_2$ and $\gamma$ are determined by the grid search approach.
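The linear form $J(y,\hat{y},y^{\rm{SVM}})=c_t'y+d_t$ used in the label-assignment step above can be checked by expanding the gain and loss terms; the following short numerical sketch (our own illustration) verifies the identity for random label vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u, lam = 50, 3.0
yhat = rng.choice([-1, 1], u)   # candidate labels from one separator
ysvm = rng.choice([-1, 1], u)   # labels predicted by the inductive SVM
y = rng.choice([-1, 1], u)      # label assignment being scored

# gain/loss as defined in the text.
gain = np.sum((1 + y * yhat) / 2 * (1 - ysvm * yhat) / 2)
loss = np.sum((1 - y * yhat) / 2 * (1 + ysvm * yhat) / 2)
J = gain - lam * loss

# The closed-form coefficients c_t and d_t from the text.
c = ((1 + lam) * yhat + (lam - 1) * ysvm) / 4
d = (-(1 + lam) * (yhat @ ysvm) + (1 - lam) * u) / 4
assert np.isclose(J, c @ y + d)  # J is indeed linear in y
```

Linearity in $y$ is what makes the max-min problem tractable via the constraints $\tau\le c_t'y+d_t$ and the $[-1,1]^u$ relaxation.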
We set $\beta=0.1$, $\lambda=3$, the number of clusters $T=10$ and the number of samples to 100 in the sampling strategy for the examples. The $u$ unlabeled quantum states are divided into $M$ different sets. Taking $M=2$ as an example, we have the following steps: $(1)$ The labels of the first set of unlabeled quantum states are predicted by using the $l$ labeled quantum states. Then we implement five-fold cross-validation for these $l+u/M$ quantum states by using SVMs, and the best cross-validation accuracy and the best hyperparameters are obtained. The labels of the first set of unlabeled quantum states obtained with the best hyperparameters are regarded as the real labels, and the classification accuracies of these unlabeled quantum states are computed. $(2)$ After labeling the first set of unlabeled quantum states, the $l+u/M$ labeled quantum states and the second set of unlabeled quantum states are utilized to implement S4VM a second time, so the second set of unlabeled quantum states is also labeled. Implementing five-fold cross-validation for these $l+2u/M$ quantum states by using SVMs, the best cross-validation accuracy and the best hyperparameters are obtained. The labels of the second set of unlabeled quantum states obtained with the best hyperparameters are considered as the real labels, and the classification accuracies of the second set of unlabeled quantum states are computed. $(3)$ The average accuracy over the two sets of unlabeled quantum states is taken as the classification accuracy of the unlabeled set. We compute the errors of $4000$ unlabeled quantum states when $30$ labeled quantum states are used to implement S4VM for $m=2$ and $M= 1, 2, 4, 8$. Except for the third labeled set, the errors for $M=1$ and $M=2$ are smaller than those for $M=4$ and $M=8$ in most cases, while for $m=8$ the errors for $M=2$ are smaller than those for $M=1$; see Figs. \ref{fig1} and \ref{fig2}.
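The batch-wise labeling loop in steps $(1)$-$(3)$ can be sketched schematically as follows; this is our own toy illustration, with a simple nearest-centroid learner standing in for the S4VM/SVM machinery:

```python
import numpy as np

def fit_predict(Xl, yl, Xu):
    """Toy stand-in for the base learner: predict by nearest class centroid."""
    c_pos, c_neg = Xl[yl == 1].mean(axis=0), Xl[yl == -1].mean(axis=0)
    closer_pos = (np.linalg.norm(Xu - c_pos, axis=1)
                  < np.linalg.norm(Xu - c_neg, axis=1))
    return np.where(closer_pos, 1, -1)

def batchwise_self_training(Xl, yl, Xu, M=2):
    """Label the unlabeled pool in M batches, folding each batch into the labeled set."""
    preds = []
    for batch in np.array_split(Xu, M):
        yb = fit_predict(Xl, yl, batch)
        Xl, yl = np.vstack([Xl, batch]), np.concatenate([yl, yb])
        preds.append(yb)
    return np.concatenate(preds)

rng = np.random.default_rng(0)
Xl = np.vstack([rng.normal(1, 0.2, (5, 2)), rng.normal(-1, 0.2, (5, 2))])
yl = np.array([1] * 5 + [-1] * 5)
Xu = np.vstack([rng.normal(1, 0.2, (50, 2)), rng.normal(-1, 0.2, (50, 2))])
true = np.array([1] * 50 + [-1] * 50)
perm = rng.permutation(len(Xu))          # mix the two classes across batches
Xu, true = Xu[perm], true[perm]
pred = batchwise_self_training(Xl, yl, Xu, M=2)
assert np.mean(pred == true) > 0.95
```

Larger $M$ means smaller (hence faster) S4VM runs at the cost of propagating any early labeling errors, which is the tradeoff behind the choice $M=2$ below.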
In addition, the computation for $M=2$ is three times faster than that for $M=1$. \begin{figure} \caption{Classification errors of the unlabeled quantum states by using ten different sets of $l$ labeled quantum states for $m=2,$ $l=30,$ and $u=4000$. The errors of the unlabeled states by S4VM for $M=1, 2, 4, 8$ are represented by the solid line with stars, the thick dashed line with x, the dashed line with squares and the solid line with $+$, respectively. } \label{fig1} \end{figure} \begin{figure} \caption{Classification errors of the unlabeled quantum states by using ten different sets of 30 labeled quantum states for $m=8$ and $u=4000$. The errors of the unlabeled states by S4VM for $M=1$ and $2$ are represented by the solid line with stars and the thick dashed line with x, respectively. } \label{fig2} \end{figure} Based on the above errors and computational speed, in the following numerical experiments we set $M=2$ and use $10$, $30$ or $50$ labeled states together with $4000$ unlabeled quantum states to implement S4VM for $m=2, 4, 6, 8$. When Alice performs $m$ $(m=2,4,6,8)$ measurements, S4VM is implemented ten times by using ten different sets of $l$ labeled quantum states. The errors for the $u$ unlabeled quantum states are shown in Figs. \ref{fig3}-\ref{fig5}. \begin{figure} \caption{Classification errors of SVM and S4VM by using ten different sets of $l$ labeled quantum states for $m=2, 4,6,$ and $8,$ $l=10,$ $u=4000$ and $M=2$. The errors of the unlabeled states by SVM and S4VM are represented by the lines with squares and $\star$, respectively.} \label{fig3} \end{figure} \begin{figure} \caption{Classification errors of SVM and S4VM by using ten different sets of $l$ labeled quantum states for $m=2, 4,6,$ and $8,$ $l=30,$ $u=4000$ and $M=2$.
The errors of the unlabeled states by SVM and S4VM are represented by the lines with squares and $\star$, respectively.} \label{fig4} \end{figure} \begin{figure} \caption{Classification errors of SVM and S4VM by using ten different sets of $l$ labeled quantum states for $m=2, 4,6,$ and $8,$ $l=50,$ $u=4000$ and $M=2$. The errors of the unlabeled states by SVM and S4VM are represented by the lines with squares and $\star$, respectively.} \label{fig5} \end{figure} When two measurements are performed and ten labeled quantum states are used, the errors by SVM are larger than $0.2$ in many cases; these can be reduced to about $0.05$ by S4VM in most cases. When $l=30$, the errors by S4VM are less than $0.06$, while the errors by SVM are larger than $0.09$ in most cases. When $l=50$, the errors by S4VM are less than $0.05$, while the errors by SVM are larger than $0.07$ in most cases. In Table \ref{tab1} we list the maximum differences between the errors from SVM and S4VM by using 10, 30 and 50 labeled quantum states and 4000 unlabeled quantum states for $m=2,\,4,\,6$ and 8. It can be seen that the accuracies are significantly improved. \begin{table} \caption{The maximum differences between the errors from SVM and S4VM by using ten different sets of $l$ labeled quantum states and 4000 unlabeled quantum states for $l=10,30,$ and $50$ and $m=2,4,6,$ and $8$. $\Delta E_{\rm{max,l}}$ is the maximum difference (the error from SVM minus the error from S4VM) by using $l$ labeled quantum states.} \label{tab1} \begin{tabular}{|l|l|l|l|} \hline $m$ &$\Delta E_{\rm{max,10}}$&$\Delta E_{\rm{max,30}}$&$\Delta E_{\rm{max,50}}$\\ \hline 2 & 0.216& 0.129 & 0.055 \\ \hline 4 & 0.22 & 0.088 & 0.159 \\ \hline 6 & 0.36 & 0.12 & 0.13 \\ \hline 8 & 0.104& 0.028 & 0.01\\ \hline \end{tabular} \end{table} To investigate the performance of S4VM on positive and negative quantum states, we show the errors of the positive class and the negative class for $l=10$, $30$ and $50$ in Figs.
\ref{fig6}, \ref{fig7}, and \ref{fig8}. \begin{figure} \caption{Classification errors of $2000$ positive quantum states and $2000$ negative quantum states by SVM and S4VM with ten different labeled sets for $l=10,$ $m=2, 4, 6,$ and $8$. The errors for positive and negative quantum states by S4VM are represented by lines with squares and $\star$, respectively, while the errors for positive and negative quantum states by SVM are represented by lines with diamonds and $+$, respectively.} \label{fig6} \end{figure} \begin{figure} \caption{Classification errors of $2000$ positive quantum states and $2000$ negative quantum states by SVM and S4VM with ten different labeled sets for $l=30,$ $m=2, 4, 6,$ and $8$. The errors for positive and negative quantum states by S4VM are represented by lines with squares and $\star$, respectively, while the errors for positive and negative quantum states by SVM are represented by lines with diamonds and $+$, respectively.} \label{fig7} \end{figure} \begin{figure} \caption{Classification errors of $2000$ positive quantum states and $2000$ negative quantum states by SVM and S4VM with ten different labeled sets for $l=50,$ $m=2, 4, 6,$ and $8$. The errors for positive and negative quantum states by S4VM are represented by lines with squares and $\star$, respectively, while the errors for positive and negative quantum states by SVM are represented by lines with diamonds and $+$, respectively.} \label{fig8} \end{figure} The errors for the positive and negative classes are also reduced in most cases, i.e., the errors for either the positive or the negative class by SVM are larger than the corresponding ones by S4VM for $l=10$, $30$, and $50$, respectively. In general the errors for the negative class are smaller than those for the positive class, possibly due to the errors of the SDP. When $m=2$ and $m=8$ the errors by S4VM for the negative class are approximately zero in most cases.
In Table \ref{tab2} (Table \ref{tab3}), we also list the maximum differences between the errors of positive (negative) states from SVM and S4VM by using 10, 30, or 50 labeled quantum states and 4000 unlabeled quantum states for $m=2,4,6,$ and 8. \begin{table} \caption{The maximum differences between the errors of positive states from SVM and S4VM by using ten different sets of $l$ labeled quantum states and 4000 unlabeled quantum states for $l=10,30,$ and $50$ and $m=2,4,6,$ and $8$. $\Delta E^{+}_{\rm{max,l}}$ is the maximum difference (the errors of positive states from SVM minus the errors from S4VM) by using $l$ labeled quantum states.} \label{tab2} \begin{tabular}{|l|l|l|l|} \hline $m$ &$\Delta E^{+}_{\rm{max,10}}$&$\Delta E^{+}_{\rm{max,30}}$&$\Delta E^{+}_{\rm{max,50}}$\\ \hline 2 & 0.337& 0.14 & 0.11 \\ \hline 4 & 0.24 & 0.13 & 0.11 \\ \hline 6 & 0.43 & 0.19 & 0.11 \\ \hline 8 & 0.04 & 0.007 & 0.004 \\ \hline \end{tabular} \end{table} \begin{table} \caption{The maximum differences between the errors of negative states from SVM and S4VM by using ten different sets of $l$ labeled quantum states and 4000 unlabeled quantum states for $l=10,30,$ and $50$ and $m=2,4,6,$ and $8$. $\Delta E^{-}_{\rm{max,l}}$ is the maximum difference (the errors of negative states from SVM minus the errors from S4VM) by using $l$ labeled quantum states.} \label{tab3} \begin{tabular}{|l|l|l|l|} \hline $m$ &$\Delta E^{-}_{\rm{max,10}}$&$\Delta E^{-}_{\rm{max,30}}$&$\Delta E^{-}_{\rm{max,50}}$\\ \hline 2 & 0.34& 0.10 & 0.03 \\ \hline 4 & 0.47 & 0.12 & 0.07 \\ \hline 6 & 0.15 & 0.10 & 0.07 \\ \hline 8 & 0.029& 0.032 & 0.031 \\ \hline \end{tabular} \end{table} Compared with the results from SVMs, semi-supervised SVMs can improve the accuracies in most cases. The differences between the average errors from S4VM and SVM by using ten different sets of labeled quantum states are shown in Fig. \ref{fig9}. All the average errors can be improved.
The smaller $l$ (the number of labeled samples) is, the larger the improvement in accuracy. \begin{figure} \caption{The differences between the average errors from S4VM and SVM by using ten different sets of $l$ labeled quantum states and $u$ unlabeled quantum states for $m=2,4,6,$ and $8$. The differences between the average errors for $l=50,30,$ and $10$ are represented by the lines with $\diamond$, $\star$ and circles, respectively.} \label{fig9} \end{figure} To show the validity of cross validation, the relationship between the cross validation accuracies and the accuracies of the unlabeled states is shown in Fig. \ref{fig10} by taking 10 labeled quantum states. The vertical coordinates represent the accuracies of the unlabeled states, while the horizontal coordinates represent the accuracies of cross validation. \begin{figure} \caption{The relationship between the cross validation accuracies and the accuracies of unlabeled quantum states by using ten labeled quantum states and $u$ unlabeled quantum states for $m=2,4,6,$ and $8$.} \label{fig10} \end{figure} To investigate the validity of S4VM, we also study the classification of the generalized Werner states, \begin{equation} \rho_w=p|\psi\rangle\langle\psi|+(1-p)\rho_A\otimes \frac{\mathrm{I}_2}{2}, \end{equation} where $|\psi\rangle=\cos\xi|00\rangle+\sin\xi|11\rangle$ and $\rho_A=\mathrm{tr}_B(|\psi\rangle\langle\psi|).$ The state is unsteerable from Alice to Bob if \begin{equation} \cos^22\xi\geq\frac{2p-1}{(2-p)p^3}. \end{equation} Let the unlabeled set consist of $2500$ generalized Werner states from $p=0$ to $p=1.$ By using 30 random labeled quantum states, the classification errors of the unlabeled generalized Werner states for $\xi=\frac{\pi}{8}$ by SVM and S4VM are shown in Fig. \ref{fig11}. Different from the above examples, we take $M=1$ here to show the performance of S4VM. It can be seen that the accuracies of the unlabeled quantum states can be improved to 0.95 or higher.
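As an illustrative numerical sketch (Python/NumPy; the sampled values of $p$ and $\xi$ are ours, not taken from the experiments above), the generalized Werner family and the quoted unsteerability criterion read:

```python
import numpy as np

# Sketch of the generalized Werner family and the unsteerability criterion
# quoted above. The sampled values of p and xi are illustrative only.
def werner(p, xi):
    """rho_w = p |psi><psi| + (1-p) rho_A (x) I/2 in the basis 00,01,10,11."""
    psi = np.zeros(4)
    psi[0], psi[3] = np.cos(xi), np.sin(xi)
    rho_A = np.diag([np.cos(xi) ** 2, np.sin(xi) ** 2])   # tr_B |psi><psi|
    return p * np.outer(psi, psi) + (1 - p) * np.kron(rho_A, np.eye(2) / 2)

def unsteerable(p, xi):
    """Analytic criterion: cos^2(2 xi) >= (2p - 1) / ((2 - p) p^3)."""
    return np.cos(2 * xi) ** 2 >= (2 * p - 1) / ((2 - p) * p ** 3)

rho = werner(0.8, np.pi / 8)
assert np.isclose(np.trace(rho), 1.0)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)   # valid density matrix
assert unsteerable(0.3, np.pi / 8)                  # strongly mixed: unsteerable
assert not unsteerable(0.99, np.pi / 8)             # nearly pure: criterion fails
```

For $\xi=\pi/8$ this reflects the qualitative picture above: states with small $p$ satisfy the unsteerability bound, while states with $p$ close to $1$ violate it.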
In the best cases, the accuracies by S4VM are 0.15 higher than those by SVM for $m=4$ and $m=8$. The average accuracies by S4VM are 0.969 and 0.979 for $m=4$ and $m=8$, which are 0.076 and 0.036 higher than those by SVM, respectively. \begin{figure} \caption{Classification errors of SVM and S4VM with ten different sets of $l$ labeled quantum states for $m=4$ and $m=8,$ $l=30$, $N_2=2500,$ and $M=1$. The errors of SVM for $l=30$ are represented by the line with $\star$, and the errors of S4VM for $l=30$ are represented by the line with squares.} \label{fig11} \end{figure} We only need a small portion of labeled data to implement the semi-supervised learning algorithm, which can indeed save a lot of time. However, S4VM may still be time-consuming, since the global simulated annealing search, the sampling strategy, and the grid search approach in S4VM are all computationally expensive. Besides, the accuracies by S4VM cannot be improved significantly once the number of labeled samples becomes large enough that the accuracies by SVM exceed 0.95. \section{Conclusion} Semi-supervised SVMs can be used in situations where labeled samples are scarce or very difficult to obtain. We have implemented a particular semi-supervised SVM, S4VM, to detect EPR steering with a small portion of labeled quantum states and a large portion of unlabeled quantum states. Compared with the inductive SVM, the detection errors can be significantly decreased in most cases. Our approach takes a useful step toward solving quantum correlation detection problems with semi-supervised machine learning methods, since it costs time to obtain enough labeled quantum states. This approach may be similarly applied to detect other quantum correlations such as quantum entanglement and Bell non-locality.
Our results may also highlight the applications of other more efficient semi-supervised machine-learning methods such as artificial neural networks to quantum correlation detections. \noindent{\bf ACKNOWLEDGEMENTS}\, \, This work is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 11571313, 12071179, 12075159 and 12171044; Beijing Natural Science Foundation (Grant No. Z190005); Academy for Multidisciplinary Studies, Capital Normal University; the Academician Innovation Platform of Hainan Province; and Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (No. SIQSE202001). \end{document}
\begin{document} \title{System-environment correlations and Non-Markovian dynamics} \author{A.~Pernice} \email{[email protected]} \author{J.~Helm} \author{W.T.~Strunz} \affiliation{Institut f\"ur Theoretische Physik, Technische Universit\"at Dresden, D-01062 Dresden, Germany} \begin{abstract} We determine the total state dynamics of a dephasing open quantum system using the standard environment of harmonic oscillators. Of particular interest are random unitary approaches to the same reduced dynamics and system-environment correlations in the full model. Concentrating on a model with an at times negative dephasing rate, the issue of ``non-Markovianity'' will also be addressed. Crucially, given the quantum environment, the appearance of non-Markovian dynamics turns out to be accompanied by a loss of system-environment correlations. Depending on the initial purity of the qubit state, these system-environment correlations may be purely classical over the whole relevant time scale, or there may be intervals of genuine system-environment entanglement. In the latter case, we see no obvious relation between the build-up or decay of these quantum correlations and ``Non-Markovianity''. \end{abstract} \pacs{03.65.Yz,42.50.Lc,03.65.Ta} \maketitle \section{Introduction} In open quantum system dynamics one accounts for influences of the external environment \cite{breuer+petruccione-2002, weiss-2008, gardiner+zoller-2002}. Despite the unitary time-evolution of the total state of system plus environment, the dynamics of the system itself will in general be non-unitary. Growing correlations between the system of interest and its surroundings lead to a decay of the initially present coherences. This line of thought is at the heart of decoherence theory and is put forward to explain the appearance of classical properties in quantum systems \cite{joos+zeh-2003, zurek-2003, strunz-2002, schlosshauer-2008, braun-2001}. 
Decoherence is of particular relevance for quantum technologies trying to make use of the vast computational potential forecast for applied quantum information processing \cite{nielsen+chuang-2002}. In the regime of weak system-environment coupling and short environmental correlation times, the dynamics of an open quantum system may be described in terms of the Born-Markov approximation. The corresponding Markov master equation then has a generator in Lindblad form \cite{lindblad-1976, kossakowski-1972}. The time evolution of the system depends on its present state only. Often, however, such an approximation is not justified. Then, memory effects -- often incorporated by means of integrals over the past \cite{nakajima-1958, zwanzig-1960} -- start to play an essential role. Yet, it is known that for arbitrary bath correlation functions approximate and sometimes even exact time-local master equations can be derived \cite{breuer+petruccione-2002, strunz+yu-2004}. An important example is given by the exact master equation for a damped harmonic oscillator bilinearly coupled to a bath of harmonic oscillators \cite{haake+reibold-1985, hu+paz+zhang-1992, strunz+yu-2004}. Here, we focus on a dephasing qubit as an open quantum system using the standard harmonic oscillator environment, whose exact time-local master equation is known. In more recent developments, the question of how to define and distinguish ``Markovian'' from ``non-Markovian'' dynamics from a local (system) perspective was addressed. The analysis has been based both on a single snapshot of the dynamics \cite{wolf+eisert+cubitt+cirac-2008} and on the full time evolution of the open quantum system within a certain time interval \cite{breuer+laine+piilo-2009, rivas+huelga+plenio-2010}. In the latter approach, memory effects associated with non-Markovian dynamics are expected to cause a temporary increase in the distinguishability of states (in terms of trace distance, e.g.
\cite{breuer+laine+piilo-2009}) and the dynamics may no longer be divisible~\cite{rivas+huelga+plenio-2010}. Under Markovian dynamics, on the other hand, the decay of distinguishability will be monotonic throughout and divisibility will be ensured. In this context the ``flow of quantum information'' from system to environment and back is an often employed, and certainly intuitive, picture. However, without a proper conceptual and theoretical framework, such a picture should be used with caution. Care has also to be taken with respect to the need for a proper (quantum) environment. It should be noted that for single-qubit decoherence the dynamics may always be described in terms of \emph{stochastic} fluctuations of \emph{external fields}, i.e., the dynamics has a random unitary representation~\cite{kummerer+maassen-1987,landau+streater-1993,buscemi+chiribella+dariano-2005}. The dynamics may thus be modeled \emph{without} invoking a quantum environment at all. Higher dimensions than two are needed to see decoherence that can only be understood in terms of a proper quantum environment \cite{helm+strunz-2009}. The role and nature of \emph{system-environment correlations} in open quantum system dynamics has raised some notable interest lately. In the context of quantum discord \cite{ollivier+zurek-2001}, for example, total states with no quantum correlations (zero discord) were shown to be the most general class of initial states allowing for completely positive (CP) reduced dynamics \cite{rodriguez-rosario+al-2008, shabani+lidar-2009}. Another interesting line of research concerns the relation between decoherence and system-environment entanglement. Here, it is possible that the system essentially decoheres completely \emph{without} becoming entangled with its environment at all \cite{eisert+plenio-2002, pernice+strunz-2011}.
Such cases show that classical system-environment correlations alone may account for a vast number of phenomena related to ``open quantum systems''. As we will also see in this paper, often an exchange of \emph{quantum} information between system and environment cannot be proven. In order to shed light on the nature of system-environment correlations in open system dynamics in a non-trivial (non-Lindblad) regime, and on the possible relation to recent definitions of ``non-Markovianity'', we here investigate single-qubit dephasing due to the coupling to an oscillator environment \cite{piilo+breuer-2011,pernice+strunz-2011, helm+strunz-2010}. In doing so, we favour investigating the dynamics and ``quantumness'' of system-environment correlations, thus avoiding the study of the somewhat vague notion of (quantum) ``information flow'' in open quantum system dynamics. The paper is structured as follows: in Section \ref{sec2} we present our model and give an exact, useful expression for the total system-environment state. Section \ref{sec:rand-unit-models} is concerned with the derivation of a random unitary representation for the reduced qubit state at time $t$, showing that from the system perspective alone, no quantum environment is necessary to model the dynamics. The two inherently different approaches in Sections~\ref{sec2} and~\ref{sec:rand-unit-models} are exact and thus describe decoherence in the Markovian as well as the non-Markovian regime. Accounting for non-Markovianity, in Section~\ref{sec4} we concentrate on a super-ohmic spectral density which ensures an at times negative dephasing rate. In Section~\ref{sec5} we employ a measure for system-environment correlations and relate it to ``non-Markovianity'' in the sense of~\cite{breuer+laine+piilo-2009,rivas+huelga+plenio-2010}. Following earlier work~\cite{pernice+strunz-2011}, in Section~\ref{sec6} we finally investigate the nature of these correlations.
We find that for most qubit initial states there is no relation between ``non-Markovianity'' and the build-up or decay of \emph{quantum} correlations. Furthermore, even if there are time intervals where the total state of system and environment is entangled, there is no obvious relation to the periods of ``non-Markovianity''. We will draw our conclusions in Section~\ref{sec7}. \section{Quantum decoherence model: reduced and full dynamics}\label{sec2} Continuing our earlier work on the dynamics of system-environment correlations for a dephasing qubit \cite{pernice+strunz-2011}, we start from a typical model~\cite{feynman+vernon-1963,caldeira+leggett-1981} with total Hamiltonian \begin{equation}\label{opensystemmodel} H_\text{tot}=H_\text{sys}+H_\text{int}+H_\text{env}, \end{equation} by coupling a qubit non-dissipatively to a bath of harmonic oscillators through the choices~\cite{skinner+hsu-1986,unruh-1995,kuang+zeng+tong-1999,yu+eberly-2002,yuan+kuang+liao-2010} \begin{eqnarray} H_\text{sys} & = & \frac{\hbar \Omega}{2}\sigma_z \nonumber\\ H_\text{int} & = & \sigma_z\otimes \sum_{\lambda=1}^N \hbar g_\lambda a_\lambda^\dagger+\text{h.c.}\nonumber\\ H_\text{env} & = & \sum_{\lambda=1}^N\hbar\omega_\lambda a_\lambda^\dagger a_\lambda. \end{eqnarray} Here $\Omega$ denotes the energy difference between the qubit states and the coefficients $g_\lambda$ describe the coupling strengths between the qubit and each environmental mode of frequency $\omega_\lambda$ with annihilation and creation operators $a_\lambda, a_\lambda^\dagger$. As environmental initial state we choose a thermal state $\rho_\text{therm}^\lambda=({\bar n}_\lambda + 1)^{-1}\exp[-\hbar\omega_\lambda a_\lambda^\dagger a_\lambda /k_B T]$ for each oscillator with the mean thermal occupation number ${\bar n}_\lambda = (\exp[\hbar\omega_\lambda/k_B T]-1)^{-1}$ at temperature $T$.
Initially, we assume no system-environment correlations such that the total initial state is simply given by the product $\rho_\text{tot}(0)=\rho_{\text{sys}}\otimes\rho_\text{therm}$. Accordingly, for the reduced system state $\rho_\text{red}(t)=\text{Tr}_\text{env}[\rho_\text{tot}(t)]$ the dynamical map ${\cal{E}} (t,0)$: $\rho_{\text{red}}(0) \rightarrow \rho_{\text{red}}(t)$ is completely positive (CP) with $\rho_{\text{red}}(0) = \rho_{\text{sys}}$. Already at this stage we emphasise that on the reduced level, this dynamics can equally well be described by random unitary dynamics as will be elaborated upon in Sec.~\ref{sec:rand-unit-models}. The quantum dephasing model (\ref{opensystemmodel}) may be solved without any approximation. A possible approach to the time-local master equation for the system state is provided by the non-Markovian quantum state diffusion approach to open systems \cite{diosi+gisin+strunz-1998, strunz+yu-2004}. We find for the reduced density operator $\rho_\text{red}(t)=\text{Tr}_\text{env}[\rho_\text{tot}(t)]$ \begin{equation} \dot{\rho}_\text{red}=-i\frac{\Omega}{2}\kom{\sigma_z}{\rho_\text{red}}-\frac{\gamma(t)}{2}\left(\rho_\text{red}-\sigma_z\,\rho_\text{red}\,\sigma_z\right). \label{equ:masterequation-decoherence} \end{equation} This equation is solved by \begin{equation} \rho_\text{red}(t)=\begin{pmatrix} \rho_{00}&\mathcal{D}(t)\rho_{01}\\ \mathcal{D}^*(t)\rho_{10}&\rho_{11} \end{pmatrix}, \label{equ:solution-roh_red} \end{equation} with \begin{equation}\label{decoherence_factor} \mathcal{D}(t)=\exp\left[-i\Omega t-\int_0^t\gamma(s)ds\right], \end{equation} and where the $\rho_{ij}$ represent the initial state of the qubit. 
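As a quick consistency sketch (Python/NumPy, units with $\hbar=1$; the rate $\gamma(t)$ used here is an arbitrary illustration, not the specific rate derived below), a simple Euler integration of the master equation reproduces this closed-form solution:

```python
import numpy as np

# Minimal sketch (units with hbar = 1): Euler-integrate the dephasing master
# equation  drho/dt = -i(Omega/2)[sz, rho] - (gamma(t)/2)(rho - sz rho sz)
# and compare with the closed-form solution, whose off-diagonal element is
# multiplied by D(t) = exp(-i Omega t - int_0^t gamma(s) ds).
# The rate gamma(t) below is an arbitrary illustration (it may go negative).
Omega = 1.0
gamma = lambda t: 0.3 * np.cos(2.0 * t)
sz = np.diag([1.0, -1.0]).astype(complex)

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| initial state
rho = rho0.copy()
steps, dt = 30_000, 1e-4
for n in range(steps):
    t = n * dt
    drho = -1j * (Omega / 2) * (sz @ rho - rho @ sz) \
           - (gamma(t) / 2) * (rho - sz @ rho @ sz)
    rho = rho + dt * drho

T = steps * dt
ts = np.arange(steps) * dt
D = np.exp(-1j * Omega * T - np.sum(gamma(ts)) * dt)       # decoherence factor
assert abs(rho[0, 1] - D * rho0[0, 1]) < 1e-3              # coherence follows D(t)
assert np.allclose(np.diag(rho), np.diag(rho0))            # populations frozen
```

The populations stay frozen exactly, while the coherence picks up the factor $\mathcal{D}(t)$, as in the solution above.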
Equations~(\ref{equ:masterequation-decoherence}) and (\ref{equ:solution-roh_red}) involve the time dependent dephasing rate $\gamma(t)$ which by means of the spectral density of the environment $J(\omega)=\sum_{\lambda=1}^N|g_\lambda|^2\delta(\omega-\omega_\lambda)$ can be written as \begin{equation} \gamma(t)=4\int_0^t\hspace{-0.5ex}ds\int_0^\infty \hspace{-0.5ex}\hspace{-0.5ex}d\omega J(\omega)\coth[\hbar\omega/2 k_B T]\cos[\omega s]. \label{equ:decoherence_rate} \end{equation} Later we will concentrate on environments that lead to periods in time with a {\it negative} dephasing rate. Recently, we investigated system-environment correlations of this model \cite{pernice+strunz-2011} and found the useful representation \begin{equation} \rho_\text{tot}(t)=\int\frac{d^2\xi}{\pi}\frac{1}{\overline{n}}e^{-|\xi|^2/\overline{n}}\; \hat{P}(t;\xi,\xi^*)\otimes\ket{\xi}\bra{\xi} \label{equ:P-representation} \end{equation} of the total state. It represents a partial P-representation where the environmental degrees of freedom are expanded in terms of coherent states $|\xi\rangle$. Here, $\xi=(\xi_1,\xi_2,\cdots)$ is a vector of complex numbers and we consistently make use of the notation $d^2\xi/\pi:=d^2\xi_1/\pi\; d^2\xi_2/\pi\cdots$ (see also~\cite{strunz-2005}). Furthermore, we symbolically write $\exp[-|\xi|^2/\overline{n}]/\overline{n}:=\prod_\lambda\exp[-|\xi_\lambda|^2/\overline{n}_\lambda]/\overline{n}_\lambda$ involving the mean thermal occupation number $\overline{n}_\lambda$ of the $\lambda$-th environmental mode. The system part of the total state is encoded in a matrix-valued partial P-function $\hat{P}(t)$ with values in the $2\times2$ dimensional state space of the qubit.
In order to represent a solution of the total Schr\"odinger-von-Neumann equation with initial $\rho_\text{tot}(0)=\rho_\text{sys}\otimes\rho_\text{therm}$, the partial P-function in (\ref{equ:P-representation}) reads \begin{equation} \hat{P}(t;\xi,\xi^*)= \begin{pmatrix} \mathcal{A}^+(t;\xi,\xi^*)\rho_{00}&\mathcal{B}(t;\xi,\xi^*)\rho_{01}\\ \mathcal{B}^*(t;\xi,\xi^*)\rho_{10}&\mathcal{A}^-(t;\xi,\xi^*)\rho_{11} \end{pmatrix}. \label{equ:exact-solution} \end{equation} Here, $\mathcal{A}^{\pm}=\exp[-A(t)\pm\left\{(a(t)|\xi)+(\xi|a(t))\right\}]$ and $\mathcal{B}=\exp[-i\Omega t]\exp[B(t)-\left\{({b}(t)|\xi)-(\xi|b(t))\right\}]$, where we have introduced the complex time dependent vectors ${a}(t)=(a_1(t),a_2(t),\cdots)$ and ${b}(t)$ with scalar product $(a(t)|\xi)\equiv\sum_\lambda a_\lambda^*(t)\xi_\lambda$ and vector components \begin{eqnarray} a_\lambda(t)&=&\frac{1}{\overline{n}_\lambda}\int_0^t\left(g_\lambda e^{i\omega_\lambda s}\right)ds\\ b_\lambda(t)&=&\frac{2\overline{n}_\lambda+1}{\overline{n}_\lambda}\int_0^t\left(g_\lambda e^{i\omega_\lambda s}\right)ds. \end{eqnarray} Furthermore, we use the abbreviations \begin{eqnarray} A(t)&=&2\,\text{Re}\hspace{-0.5ex}\int_0^t\hspace{-0.5ex}ds\int_0^s\hspace{-0.5ex}d\tau\left[\sum_\lambda\frac{1}{\overline{n}_\lambda}|g_\lambda|^2e^{-i\omega_\lambda(s-\tau)}\right]\nonumber\\ B(t)&=&2\,\text{Re}\hspace{-0.5ex}\int_0^t\hspace{-0.5ex}ds\int_0^s\hspace{-0.5ex}d\tau\left[\sum_\lambda\frac{2\overline{n}_\lambda+1}{\overline{n}_\lambda}|g_\lambda|^2e^{-i\omega_\lambda(s-\tau)}\right]. \nonumber \end{eqnarray} Initially, $\hat P = \rho_\text{sys} = \rho_\text{red}(0)$; note that no approximations are necessary to achieve the result~(\ref{equ:exact-solution}) and thus, via~(\ref{equ:P-representation}), to obtain the exact state of the composite system (see also \cite{pernice+strunz-2011}). Later, we study the dynamics of system-environment correlations.
Therefore, a useful representation of the total state as in (\ref{equ:P-representation}) is of central importance. The local dynamics alone is insufficient for the study of any quantities related to genuine open quantum system dynamics, i.e., involving a proper quantum environment as in eq.~(\ref{opensystemmodel}). As we will elaborate upon next, in our case the same reduced dynamics~(\ref{equ:masterequation-decoherence}) could have been obtained from a stochastic Schr\"odinger dynamics, not invoking a quantum environment at all. \section{Random unitary representations} \label{sec:rand-unit-models} With an eye on experimental conditions, decoherence of qubits is often modelled by random unitary dynamics~\cite{yu+eberly-2003,helm+strunz-2010}. In terms of a dynamical map, this implies that there exists a relation \begin{equation}\label{random_unitary} \rho_{\text{red}}(t) = \sum_k p_k U_k \rho_{\text{red}}(0)U_k^\dagger \end{equation} with suitably chosen probabilities $p_k>0$ and unitary maps $U_k$. Indeed, on the level of the reduced state, single-qubit decoherence (and indeed, all single-qubit unital CP maps) can always be modelled in this way~\cite{kummerer+maassen-1987,landau+streater-1993,buscemi+chiribella+dariano-2005}. Thus, from a reduced point of view no quantum environment as in eq.~(\ref{opensystemmodel}) is required. The reduced dynamics can be obtained from a local Schr\"odinger equation driven by a random Hermitian Hamiltonian. By contrast, genuine \emph{quantum decoherence} may be found in two-qubit systems~\cite{helm+strunz-2009}. It is also worth noting that random unitary dynamics emerging from an open quantum system with an environmental initial pure state can always be ``undone'' (quantum error correction) \cite{gregoratti+werner-2003, trendelkamp-schroer+helm+strunz-2011}.
The most straightforward random unitary realization of single-qubit decoherence with state (\ref{equ:solution-roh_red}) at time $t$ is provided by the simple quantum operation \begin{eqnarray}\label{dephasing_process} \rho_{\text{red}}(t) & = & \left(\frac{1+|\mathcal{D}(t)|}{2}\right) e^{-i\frac{\Omega}{2} t\sigma_z} \rho_{\text{red}}(0) e^{i\frac{\Omega}{2} t\sigma_z}\\ \nonumber & & + \left(\frac{1-|\mathcal{D}(t)|}{2}\right) e^{-i\frac{\Omega}{2} t\sigma_z}\sigma_z \rho_{\text{red}}(0)\sigma_z e^{i\frac{\Omega}{2} t\sigma_z} \end{eqnarray} which is obviously of the form (\ref{random_unitary}) employing just two unitaries $U_1 = \exp[-i\Omega t\sigma_z/2]$, $U_2 = \exp[-i\Omega t\sigma_z/2] \sigma_z$ and probabilities $p_{1,2}= (1\pm|\mathcal{D}(t)|)/2$. Recall that according to (\ref{decoherence_factor}), $|\mathcal{D}(t)|=\exp[-\int_0^t \gamma(s)ds]$. It is worth noting that the very same formal relation holds true for the two-time map \begin{equation}\label{equ:two_time_map} {\cal E}(t,t'): \rho_{\text{red}}(t') \rightarrow \rho_{\text{red}}(t) \end{equation} such that \begin{equation}\label{two-time_dephasing_process} \begin{split} \rho_{\text{red}}(t)&=\left(\frac{1+|\mathcal{D}(t,t')|}{2}\right) U_1(t-t')\rho_\text{red}(t')U_1^\dagger(t-t')\\ &\quad+\left(\frac{1-|\mathcal{D}(t,t')|}{2}\right)U_2(t-t')\rho_\text{red}(t')U_2^\dagger(t-t') \end{split} \end{equation} with $|\mathcal{D}(t,t')|=\exp[-\int_{t'}^t \gamma(s)ds]$. However, as $\gamma(s)$ need not be positive for all times (see later), the prefactor of the second contribution, $\frac{1-|\mathcal{D}(t,t')|}{2}$, may turn negative for $t'$ and $t$ near times of negative $\gamma(s)$. Thus, for such times $(t',t)$, the map ${\cal E}(t,t')$ in the form (\ref{two-time_dephasing_process}) ceases to take the form of a random unitary map.
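A minimal numerical sketch (Python/NumPy; the values of $\Omega$, $t$, $|\mathcal{D}|$ and the initial state are illustrative) verifies that the two-unitary mixture (\ref{dephasing_process}) reproduces the dephased state (\ref{equ:solution-roh_red}):

```python
import numpy as np

# Sketch of the two-unitary representation: with U1 = exp(-i Omega t sz / 2),
# U2 = U1 sz and p_{1,2} = (1 +/- |D(t)|)/2, the mixture leaves the diagonal
# untouched and multiplies the coherence by D(t) = |D| exp(-i Omega t).
# Omega, t, |D| and the initial state below are illustrative values.
Omega, t, absD = 1.0, 0.7, 0.6
sz = np.diag([1.0, -1.0]).astype(complex)
U1 = np.diag(np.exp(-1j * Omega * t * np.array([1.0, -1.0]) / 2))
U2 = U1 @ sz
p1, p2 = (1 + absD) / 2, (1 - absD) / 2

rho0 = np.array([[0.7, 0.3 - 0.1j], [0.3 + 0.1j, 0.3]], dtype=complex)
rho_t = p1 * U1 @ rho0 @ U1.conj().T + p2 * U2 @ rho0 @ U2.conj().T

D = absD * np.exp(-1j * Omega * t)                 # decoherence factor D(t)
assert np.allclose(np.diag(rho_t), np.diag(rho0))  # diagonal untouched
assert np.isclose(rho_t[0, 1], D * rho0[0, 1])     # coherence scaled by D(t)
```

Since $U_2\rho U_2^\dagger$ flips the sign of the coherences, the mixture weights combine to $p_1-p_2=|\mathcal{D}(t)|$, which is exactly the required decay factor.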
Indeed, using the Jamiolkowski isomorphism \cite{jamiolkowski-1972} it is straightforward to see that $|\mathcal{D}(t,t')|\leq1$, or \begin{equation}\label{equ:int_gamma} \int_{t'}^t \gamma(s)ds \geq 0, \end{equation} is a necessary and sufficient condition for the dephasing map ${\cal E}(t,t')$ defined above in eq.~(\ref{equ:two_time_map}) to be CP. The random unitary form (\ref{dephasing_process}) of the dephasing map ${\cal E}(t,0)$ is simple but has the drawback of not representing an intuitive dynamical picture of the process, due to the time dependence of the probabilities $p_1=p_1(t)$ and $p_2=p_2(t)$. Here, we want to develop an alternative random unitary representation in a more systematic way that opens the door for further generalizations, as explained below. Recall that pure dephasing of an open quantum system in the basis $\{|n\rangle\}$ implies a controlled-unitary form $U_{\rm tot}(t)=e^{-i H_{\rm tot}t/\hbar} = \sum_n |n\rangle\langle n|\otimes U_n(t)$ of the total propagator \cite{helm+strunz+rietzler+wuerflinger-2011}. Here, $U_n(t)=e^{-i H_n t /\hbar}$ with $H_n = \langle n|H_{\rm tot}|n\rangle$ are system-state-dependent propagators for the environment (see also \cite{gorin+prosen+seligman+strunz-2004}). Pure dephasing is then given by the dynamics $\langle n|\rho_{\rm red}(t)|m\rangle = \text{Tr}_{\rm env}[U_n(t)\rho_{\rm env}(0)U_m^\dagger(t)]\,\langle n|\rho_{\rm red}(0)|m\rangle$. In our case of a single qubit there is a single decoherence factor $\mathcal{D}(t) = \text{Tr}_{\rm env}[U_0(t)\rho_{\rm env}(0)U_1^\dagger(t)]$ as in (\ref{decoherence_factor}). The two propagators are determined by the environment Hamiltonians $H_i= \langle i |H_{\rm tot}|i\rangle$ with $H_1=\hbar\Omega/2 + \sum_{\lambda=1}^N \hbar g_\lambda (a_\lambda^\dagger+a_\lambda) + \sum_{\lambda=1}^N\hbar\omega_\lambda a_\lambda^\dagger a_\lambda$ and two sign changes for $H_0$.
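The Jamiolkowski argument can be made concrete in a few lines (Python/NumPy; the Choi convention $C=\frac{1}{2}\sum_{ij}|i\rangle\langle j|\otimes{\cal E}(|i\rangle\langle j|)$ and the sample values of $\mathcal{D}$ are ours):

```python
import numpy as np

# Sketch of the Jamiolkowski/Choi check: for the dephasing map
# rho -> [[r00, D r01], [D* r10, r11]] the Choi matrix has eigenvalues
# (1 +/- |D|)/2 and 0, so complete positivity holds exactly when |D| <= 1,
# i.e. when int_{t'}^{t} gamma(s) ds >= 0. D values below are illustrative.
def choi(D):
    """Choi matrix (1/2) sum_{ij} |i><j| (x) E(|i><j|) of the dephasing map."""
    C = np.zeros((4, 4), dtype=complex)
    C[0, 0] = C[3, 3] = 0.5        # E(|0><0|) = |0><0|, E(|1><1|) = |1><1|
    C[0, 3] = 0.5 * D              # E(|0><1|) = D |0><1|
    C[3, 0] = 0.5 * np.conj(D)
    return C

assert np.linalg.eigvalsh(choi(0.9)).min() >= -1e-12   # |D| <= 1: CP
assert np.linalg.eigvalsh(choi(1.2)).min() < 0         # |D| > 1: not CP
```

Positivity of the Choi matrix fails precisely when $|\mathcal{D}(t,t')|>1$, i.e. when the integrated rate between $t'$ and $t$ is negative.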
We next employ the Wigner representation of the environmental initial state, $W_0(\alpha,\alpha^*)=\int\frac{d^2\xi}{\pi}\, e^{\xi^*\alpha - \xi\alpha^*} \text{Tr}[e^{\xi a^\dagger -\xi^* a}\rho_{\rm env}]$, and accordingly for the operator $U_1^\dagger(t) U_0(t)$. We denote the latter's Wigner-Weyl symbol by $U_{01}(\alpha,\alpha^*)=\int\frac{d^2\xi}{\pi}\, e^{\xi^*\alpha - \xi\alpha^*} \text{Tr}\left[e^{\xi a^\dagger -\xi^* a} U_1^\dagger(t) U_0(t) \right]$. We find \begin{equation}\label{dephasing} {\mathcal{D}(t)}=\int\frac{d^2\alpha}{\pi}\; W_0(\alpha,\alpha^*)\; U_{01}(\alpha,\alpha^*,t). \end{equation} Due to the harmonic properties of the environment, the corresponding propagators $U_n(t)$ are known explicitly and lead to the phase factor $U_{01}(\alpha,\alpha^*,t)=\exp[-i\Phi(\alpha,\alpha^*,t)]$ with the phase \begin{equation} \Phi(\alpha,\alpha^*,t)=\frac{\Omega}{2} t -2\, \sum_\lambda g_\lambda \alpha_\lambda\int_0^t e^{-i\omega_\lambda s}ds +\text{c.c.} \end{equation} We see that $\mathcal{D}(t)$ is just an average over a random complex number of unit norm. For a thermal initial state the initial Wigner function $W_0=\frac{1}{{\bar n}+\frac{1}{2}} \exp[-|\alpha|^2/({\bar n}+\frac{1}{2})] $ is positive. Thus, expression (\ref{dephasing}) leads to a random unitary representation for ${\cal{E}}(t,0)$: \begin{equation}\label{random_unitary_2} \rho_{\rm red}(t)=\int\frac{d^2\alpha}{\pi}\; W_0(\alpha,\alpha^*)\; U_{\alpha}(t)\, \rho_{\rm red}(0)\, U_{\alpha}^\dagger(t).
\end{equation} This corresponds to a random unitary evolution of the qubit with $U_{\alpha}=\exp[-i\int_0^t H_{\alpha}(s)ds/\hbar]$ and the diagonal random Hamiltonian $H_{\alpha}(s)=\sigma_z\left(\frac{\hbar\Omega}{2} -\hbar\sum_\lambda g_\lambda \left( \alpha_\lambda e^{-i\omega_\lambda s} + \alpha^*_\lambda e^{i\omega_\lambda s}\right)\right).$ Note that in this representation the probability of occurrence of a particular unitary evolution is given by the value of the initial Wigner distribution and is thus time independent. The second random unitary representation (\ref{random_unitary_2}) of the reduced dynamics reflects an ensemble of experiments where the unitary system dynamics is determined by the random $H_{\alpha}(t)$, driven by some (classical) stochastic process. No quantum environment is involved. Note that both random unitary representations of the dynamical map ${\cal E}(t,0)$ are exact -- no restriction on the sign of $\gamma(s)$ is necessary. It may appear tempting to {\it define} a two-time map ${\cal F}(t,t')$ through (\ref{random_unitary_2}) with $U_{\alpha}\rightarrow U_{\alpha}(t,t')=\exp[-i\int_{t'}^t H_{\alpha}(s)ds/\hbar]$. However, it is clear that ${\cal F}(t,t')\neq {\cal E}(t,t')$ unless $t'=0$. Indeed, while ${\cal F}(t,t')$ is a CP map for all $t,t'$, this ceases to be true for ${\cal E}(t,t')$ (see the next Section). We close this section by pointing out an interesting additional observation: the random unitary representation (\ref{random_unitary_2}) for the quantum dephasing model is {\it not} restricted to single-qubit dephasing. In fact, for an arbitrary system Hilbert space dimension, the very same construction works for all dephasing factors ${\mathcal D}_{nm}(t)=\text{Tr}_{\rm env}[U_n(t)\rho_{\rm env}(0)U_m^\dagger(t)]$ of a quantum oscillator environment model.
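A single-mode Monte Carlo sketch (Python/NumPy; the values of $g$, $\omega$, $\bar n$ and $t$ are illustrative) shows how the average over the positive thermal Wigner distribution in (\ref{random_unitary_2}) produces a decaying coherence factor:

```python
import numpy as np

# Monte Carlo sketch for a single environmental mode: average the random
# relative phase over the (positive) thermal Wigner distribution and compare
# with the closed-form Gaussian average. Parameters g, omega, nbar, t are
# illustrative; the trivial Omega t part of the phase is dropped.
g, omega, nbar, t = 0.2, 1.0, 2.0, 3.0
sigma2 = nbar + 0.5                                 # Wigner variance per mode
f = (1 - np.exp(-1j * omega * t)) / (1j * omega)    # int_0^t e^{-i omega s} ds

rng = np.random.default_rng(0)
alpha = (rng.normal(scale=np.sqrt(sigma2 / 2), size=2_000_000)
         + 1j * rng.normal(scale=np.sqrt(sigma2 / 2), size=2_000_000))
D_mc = np.mean(np.exp(4j * (g * alpha * f).real))   # sampled phase average

# Gaussian average in closed form: <exp(2i Re(c alpha))> = exp(-sigma2 |c|^2)
D_exact = np.exp(-sigma2 * abs(2 * g * f) ** 2)
assert abs(D_mc - D_exact) < 5e-3
```

Each sampled $\alpha$ defines one run with a perfectly unitary qubit evolution; only the ensemble average decoheres, with no quantum environment involved.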
So even for Hilbert space dimension larger than two -- on a local level -- pure dephasing based on a quantum oscillator model like (\ref{opensystemmodel}) {\it cannot} be distinguished from random unitary dynamics. For genuine quantum decoherence, one needs ``more quantum mechanical'' environments~\cite{helm+strunz-2009}. \section{Negative dephasing rate and ``non-Markovianity''}\label{sec4} The physics of the harmonic oscillator environment model is encoded in its spectral density $J(\omega)$. As we are here interested in instances of a negative dephasing rate, we choose a particular super-ohmic spectral density with a sharp cutoff at frequency $\omega_c$, \begin{equation}\label{spectraldensity} J(\omega) = \kappa\frac{\omega^3}{\omega_c^2}\Theta(\omega_c-\omega) \end{equation} with $\Theta(\omega)$ the Heaviside step function and $\kappa$ a dimensionless coupling constant. In the high-temperature limit $k_B T \gg \hbar\omega_c$ the time-dependent dephasing rate (\ref{equ:decoherence_rate}) can easily be evaluated analytically; we get \begin{equation} \hbar\gamma(t)=8\kappa k_B T\left(\frac{\sin(\omega_c t)}{(\omega_ct)^2} -\frac{\cos(\omega_c t)}{(\omega_ct)}\right). \end{equation} Though many of our results do not rely on any special choice of $J(\omega)$, in the following, whenever we show figures, we will use the spectral density (\ref{spectraldensity}) and account for the high-temperature limit by choosing $T = 10\,\hbar\omega_c/k_B$. Furthermore, we choose $\kappa=10^{-2}$ throughout this paper. Single-qubit decoherence with a negative dephasing rate is interesting in connection with two recently proposed ``measures of non-Markovianity''~\cite{breuer+laine+piilo-2009,rivas+huelga+plenio-2010}. In both definitions the notion of divisibility, related to the decomposition of the dynamical map according to $\mathcal{E}(t,t')=\mathcal{E}(t,t'')\mathcal{E}(t'',t')$ with $t\ge t''\ge t'$, is at the heart of ``non-Markovianity''. As explained around eq.
(\ref{equ:int_gamma}), our $\mathcal{E}(t,t')$ ceases to be a CP map for time intervals of negative $\gamma(t)$. Indeed, as can be confirmed easily, for single-qubit dephasing the two measures of non-Markovianity from~\cite{breuer+laine+piilo-2009,rivas+huelga+plenio-2010} are non-zero whenever $\gamma(t')<0$ for some $0<t'<t$. The fact that the map $\mathcal{E}(t,t')$ is no longer CP at times $t'>0$ is a consequence of growing correlations between system and environment. These correlations may be due to entanglement, but they need not be, as will be shown in Section~\ref{sec6}. As can be seen in Fig.~\ref{fig:figure1}, our $\gamma(t)$ turns negative in certain restricted periods, while the integral \begin{equation}\label{equ:intgamma} \int_0^t\gamma(s)ds = \frac{8\kappa k_B T}{\hbar\omega_c} \left(1-\frac{\sin(\omega_c t)}{(\omega_ct)}\right) \end{equation} stays positive, as expected for the CP map $\mathcal{E}(t,0)$. \begin{figure} \caption{For our choice of super-ohmic spectral density, the dephasing rate $\gamma(t)$ shows domains of negative values. The integrated quantity $\int_0^t\gamma(s)ds$ stays positive, reflecting complete positivity of the dynamical map from $0$ to $t$.} \label{fig:figure1} \end{figure} \section{Total state and system-environment correlations}\label{sec5} Coupling to an environment leads to the build-up of correlations between system and environment and thus to changes in local entropies. From the point of view of information theory, the reduced dynamics is regarded as a ``channel'' for quantum information. In this context, several quantities related to the von Neumann entropy $S=-\text{Tr}(\rho\log\rho)$ are of interest: e.g. $C_S(t)=S_{\text{sys}}+S_{\text{env}}-S_{\text{tot}}$ as a measure for system-environment correlations \cite{holevo-2000}. These quantities are hard to compute, unless one deals with very small systems or Gaussian states.
Here we choose the purity $P=\text{Tr}(\rho^2)$ as an indicator for the mixedness of states, which is related to the ``linear entropy'' via $S_L=1-P$. Clearly, as with entropy, the total purity $P_{\rm tot}$ is preserved under unitary evolution with $H_{\rm tot}$. The product of the local purities $P_{\rm sys}(t)$ and $P_{\rm env}(t)$, however, will be smaller for $t>0$. For the initial product state we have $P_{\rm tot}=P_{\rm sys} P_{\rm env}$, and it appears natural to consider for all $t\ge 0$ the difference of logarithms of the purities as a simple measure of correlations \begin{equation}\label{correlation} C = \log(P_{\rm tot}) - \log(P_{\rm sys}) - \log(P_{\rm env}). \end{equation} $C$ is easier to compute than $C_S$, but still one finds $C=C_S=0$ for uncorrelated states and $C=C_S=2\ln N$ for maximally entangled bipartite pure states of equal dimension $N$. The dynamics of system-environment correlations is given by \begin{eqnarray}\nonumber C(t) & = & \log\left(\frac{P_{\rm sys}(0)}{P_{\rm sys}(t)}\right) + \log\left(\frac{P_{\rm env}(0)}{P_{\rm env}(t)}\right)\\ \label{correlationdecomp} &\equiv& C_\text{sys}(t)+C_\text{env}(t). \end{eqnarray} Here the contributions $C_\text{sys}$ and $C_\text{env}$ correspond to the amount of correlations created between system and environment as indicated by the increase of the local entropies in the two subsystems. Having the total state (\ref{equ:P-representation}) at hand, all these quantities can be determined easily for our dephasing qubit. For instance, the qubit purity is readily determined to give \begin{equation}\label{puritysystem} P_{\rm sys}(t) = \frac{1}{2}\left(1+z^2 + (x^2+y^2)|{\mathcal{D}}(t)|^2\right). \end{equation} Here and in the following we denote by $\vec r =(x,y,z)=\text{Tr}[\vec\sigma\rho]$ the coordinates of the Bloch vector of the {\it initial state of the qubit}. Somewhat more involved, yet still easy to determine, is the purity of the environment.
We find \begin{equation}\label{purityenvironment} P_{\rm env}(t) = \frac{1}{2}\left(1+z^2 + (1-z^2)|{\mathcal{G}}(t)|^2\right)P_{\rm env}(0) \end{equation} with the initial environmental purity \begin{equation} \label{initialpenv} \log P_{\rm env}(0)=\int_0^\infty\hspace{-0.5ex}\hspace{-0.5ex}d\omega J(\omega)\log(\tanh(\hbar\omega/k_B T)). \end{equation} In (\ref{puritysystem}), the time dependence arises from the decoherence factor $|{\mathcal{D}}(t)|=\exp[-\int_0^t\gamma(s)ds]$ of qubit dephasing with the rate $\gamma(t)=4\int_0^t\hspace{-0.5ex}ds\int_0^\infty \hspace{-0.5ex}\hspace{-0.5ex}d\omega J(\omega)\coth[\hbar\omega/2 k_B T]\cos[\omega s]$ from (\ref{equ:decoherence_rate}). By contrast, for the environment the time dependence is governed by a factor $|{\mathcal{G}}(t)|=\exp[-\int_0^t\Gamma(s)ds]$ with a dual rate $\Gamma(t)=4\int_0^t\hspace{-0.5ex}ds\int_0^\infty \hspace{-0.5ex}\hspace{-0.5ex}d\omega J(\omega)\tanh[\hbar\omega/2 k_B T]\cos[\omega s]$. The rate of change of the correlation $C(t)$ from (\ref{correlation}) stems from the two contributions $\dot C = \dot C_{\rm sys} + \dot C_{\rm env}$ with \begin{equation} \dot C_{\rm sys}(t) = \frac{2\gamma(t)}{a\,|{\mathcal D}(t)|^{-2}+1} \label{equ:Csys} \end{equation} and \begin{equation} \dot C_{\rm env}(t) = \frac{2\Gamma(t)}{b\,|{\mathcal G}(t)|^{-2}+1}, \label{equ:Cenv} \end{equation} where the initial state of the qubit determines the factors $a=(1+z^2)/(x^2+y^2)$ and $b=(1+z^2)/(1-z^2)$. Eqs. (\ref{equ:Csys}) and (\ref{equ:Cenv}) reflect a first important result: System and environment become more correlated for $\gamma(t),\Gamma(t)>0$. More interestingly, system-environment correlations \emph{decrease} for negative dephasing rates. In other words, during ``non-Markovian'' periods system and environment recover some of their initial independence. As we will elaborate in the next section, these system-environment correlations may well be purely classical without the build-up of entanglement.
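This sign statement can be checked directly from the closed forms of the previous section. The sketch below works in dimensionless units ($\hbar=k_B=\omega_c=1$) with the prefactor $8\kappa k_B T$ of $\gamma(t)$ set to one (which affects neither signs nor complete positivity) and an illustrative initial Bloch vector: the integrated rate stays non-negative although $\gamma(t)$ has negative windows, and the finite-difference derivative of $C_\text{sys}(t)=\log[P_{\rm sys}(0)/P_{\rm sys}(t)]$ tracks the sign of $\gamma(t)$.

```python
import numpy as np

x, y, z = 0.6, 0.0, 0.77                     # illustrative initial Bloch vector

t = np.linspace(1e-3, 30.0, 20_000)
gamma = np.sin(t) / t**2 - np.cos(t) / t     # dephasing rate (prefactor dropped)
Lam = 1.0 - np.sin(t) / t                    # int_0^t gamma(s) ds

# gamma has negative windows, yet its integral stays non-negative (CP of E(t,0))
print(gamma.min() < 0, Lam.min() >= 0)

# qubit purity and C_sys(t) = log(P_sys(0)/P_sys(t))
P_sys = 0.5 * (1 + z**2 + (x**2 + y**2) * np.exp(-2.0 * Lam))
C_sys = np.log(0.5 * (1 + z**2 + x**2 + y**2)) - np.log(P_sys)

# correlations grow where gamma > 0 and decay where gamma < 0
dC = np.gradient(C_sys, t)
agree = np.mean(np.sign(dC) == np.sign(gamma))
print(agree)
```

Up to finite-difference noise at the zeros of $\gamma(t)$, the sign of $\dot C_\text{sys}$ coincides with the sign of the dephasing rate on the whole time grid.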
\begin{figure} \caption{Dephasing rate $\gamma(t)$ (solid) and system-environment correlation $C(t)$ (dashed) against time for a qubit with initial purity $r=0.98$. While correlations grow for positive dephasing rates ($\gamma(t)>0$), they decrease for $\gamma(t) < 0$ (highlighted domains).} \label{abb:figure2} \end{figure} In the high temperature limit considered here, by means of~(\ref{equ:Csys}) and~(\ref{equ:Cenv}) all quantities in~(\ref{correlation}) can be obtained readily. Reflecting the huge dimension of the environmental Hilbert space, it turns out that its contribution $C_\text{env}(t)$ is small compared to $C_\text{sys}(t)$. Therefore, from~(\ref{equ:Csys}) we expect $\dot C(t)\sim\gamma(t)$. In Fig.~\ref{abb:figure2} we display system-environment correlations $C(t)$ and dephasing rate $\gamma(t)$. Clearly, changes in $C$ correlate with the sign of $\gamma$, and thus $C(t)$ decreases in domains of non-Markovianity. As we will explain in the next Section, in this case the total state is not entangled and thus $C(t)$ reflects classical correlations only. Recall that this connection between system-environment correlations and non-Markovianity can only be established on the basis of the total quantum state (\ref{equ:P-representation}). For the random unitary representation (\ref{random_unitary_2}) of the same reduced dynamics, due to the lack of an environment, the notion of system-environment correlations ceases to make sense. \section{Quantum and classical system-environment correlations}\label{sec6} Having access to the total state we can also investigate the nature of system-environment correlations. In earlier work we have shown that quantum correlations need not exist in such open system models, in particular in the high-temperature limit. In such a case, the total state may still be separable and thus all correlations could be established using classical communication. Here we argue very much as in \cite{pernice+strunz-2011}. 
With the time and temperature dependent function \begin{equation} \begin{split} S(T,t)&=4 \int_0^t\hspace{-0.5ex}ds\int_0^s\hspace{-0.5ex}d\tau\int_0^\infty\hspace{-0.5ex}\hspace{-0.5ex}d\omega\\ &\qquad\times\,J(\omega)\;\exp\left[\hbar\omega/kT\right]\cos[\omega(s-\tau)] \end{split} \label{equ:S_t_continuum} \end{equation} we have shown in~\cite{pernice+strunz-2011} that the total state is separable, as long as \begin{equation}\label{equ:condition_sep} S(T,t)\le\ln\sqrt{\frac{1-z^2}{x^2+y^2}}. \end{equation} In the high temperature limit, with our special choice of $J(\omega)$ this quantity can be easily evaluated yielding \begin{equation} S(T,t)=4\kappa\left(\frac12-\frac{\sin[\omega_c t]}{\omega_c t}-\frac{\cos[\omega_c t]}{(\omega_c t)^2}+\frac{1}{(\omega_c t)^2}\right). \label{equ:S_special_choice} \end{equation} With criterion (\ref{equ:condition_sep}) we can indeed prove that the total state underlying the correlation displayed in Fig.~\ref{abb:figure2} is separable. These findings show that the existence of a quantum environment does not imply (growing) entanglement. Moreover, there is no connection between the non-Markovian character of the dynamics and the nature of system-environment correlations. We can prove system-environment entanglement when the partial transpose $\rho_\text{tot}^\text{PT}$ of the total state yields a negative expectation value $\bra{\Psi}\rho_\text{tot}^\text{PT}\ket{\Psi}$ in some state $\ket{\Psi}$ of the composite system~\cite{peres-1996}. 
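Before turning to entanglement, we note that the closed form (\ref{equ:S_special_choice}) is readily cross-checked numerically: in the high-temperature limit $\exp[\hbar\omega/kT]\approx1$, and the double time integral of $\cos[\omega(s-\tau)]$ in (\ref{equ:S_t_continuum}) evaluates to $(1-\cos\omega t)/\omega^2$, leaving a single frequency integral (a sketch in units $\hbar=k_B=\omega_c=1$):

```python
import numpy as np

kappa, w_c = 1e-2, 1.0   # coupling and cutoff from the text (omega_c = 1)

def S_closed(t):
    # closed form of S(T,t) for the super-ohmic spectral density
    x = w_c * t
    return 4 * kappa * (0.5 - np.sin(x) / x - np.cos(x) / x**2 + 1.0 / x**2)

def S_numeric(t, n=20_000):
    # high-T limit: S(t) = 4 int_0^{w_c} dw J(w) (1 - cos(w t)) / w^2
    w = np.linspace(1e-9, w_c, n)
    f = 4 * (kappa * w**3 / w_c**2) * (1 - np.cos(w * t)) / w**2
    return np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(w))  # trapezoidal rule

for t in (1.0, 5.0, 20.0):
    print(S_closed(t), S_numeric(t))
```

The quadrature agrees with the closed form to high accuracy at all tested times.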
By means of the representation~(\ref{equ:P-representation}) we have shown in~\cite{pernice+strunz-2011} that with the time and temperature dependent function \begin{equation} \begin{split} E(T,t)&=\;8\int_0^t\hspace{-0.5ex}ds\int_0^s\hspace{-0.5ex}d\tau\int_0^\infty\hspace{-0.5ex}\hspace{-0.5ex}d\omega\\ &\qquad\times\,J(\omega)\sinh\left[\hbar\omega/kT\right]\cos[\omega(s-\tau)] \end{split} \label{equ:E_t_continuum} \end{equation} entanglement is present whenever \begin{equation}\label{equ:condition_ent} E(T,t)>\ln\left[\frac{r-z^2}{x^2+y^2}\right]. \end{equation} In contrast to the separable case studied in Fig.~\ref{abb:figure2}, choosing a qubit initial state with a purity closer to one, we can indeed prove the existence of system-environment entanglement (see highlighted regions in Fig.~\ref{abb:figure3}). Remarkably, there is a close connection between our ``entanglement witness''~(\ref{equ:condition_ent}) and the environmental contribution $C_\text{env}(t)$ of the correlations. In the high temperature limit $kT\gg \hbar\omega_c$ we find \begin{equation}\label{equ:connection_E_Cenv} \dot C_\text{env}=\frac12\frac{\dot E}{b\,e^{-8E}+1}, \end{equation} and for $E(T,t)\ll1$, $C_\text{env}(t)=E(T,t)/[2(b+1)]$. Now we are able to reformulate the entanglement criterion (\ref{equ:condition_ent}) in terms of the environmental part of the system-environment correlations \begin{equation}\label{equ:ent_crit_Cenv} C_\text{env}(t)>\frac{\ln\left[\frac{r-z^2}{x^2+y^2}\right]}{2(b+1)}. \end{equation} In Fig.~\ref{abb:figure3} we choose a larger initial state purity ($r=0.997$), leading to time intervals with system-environment entanglement (highlighted areas), according to criterion (\ref{equ:condition_ent}). The appearance of quantum correlations is closely related to the dynamics of $C_\text{env}(t)$ as explained earlier: in Fig.~\ref{abb:figure3} values of $C_\text{env}(t)$ larger than the threshold given by (\ref{equ:ent_crit_Cenv}) indicate entanglement.
Comparing these findings to the domains of negative $\gamma$ from Fig.~\ref{abb:figure2} (which are independent of the qubit initial state), we see no obvious relation between time intervals of \emph{quantum} correlations and non-Markovianity of the dynamics. \begin{figure} \caption{$C_\text{env}(t)$ for a qubit with larger initial purity $r=0.997$: in the highlighted time intervals $C_\text{env}(t)$ exceeds the threshold (\ref{equ:ent_crit_Cenv}), indicating system-environment entanglement.} \label{abb:figure3} \end{figure} \section{Conclusions}\label{sec7} We have investigated non-Markovian dynamics of a decohering qubit and its environment. Since the reduced dynamics can be modeled by means of random unitary evolution, we have stressed that a genuine ``open quantum system'' point of view is not required in this case. In particular, we argue that approaches based solely on the reduced description may be misleading with respect to interpretations. A study of ``information flow to the environment'', e.g., is questionable without the existence of environmental dynamical degrees of freedom. Considering the full dynamics of system plus quantum environment, we have investigated the measure $C=C_\text{sys}+C_\text{env}$ for system-environment correlations that emerge from an increase of the local entropies of the two subsystems. We have found (the time derivative of) this quantity to be closely related to the sign of the dephasing rate $\gamma(t)$, reflecting the non-Markovian character of the dynamics. Referring to earlier work, we are able to show that the total state underlying the correlations described by $C$ is separable for a large class of mixed qubit initial states. Therefore, given the quantum environment, ``non-Markovianity'' is still unrelated to the build-up or decay of \emph{quantum} correlations (entanglement) between system and environment. For qubit initial states with large purity, by contrast, we were able to find periods where the total state is entangled. But again, we see no obvious relation between ``non-Markovianity'' and the build-up or decay of these quantum correlations.
Interestingly though, we are able to relate the environmental part $C_\text{env}$ of the correlations $C$ to entanglement. We are confident that our approach will be helpful for further investigations with respect to system-environment correlations and ``information flow'' in open system dynamics. \end{document}
\begin{document} \footnotetext[1]{Support of the research by \"OAD, project CZ~02/2019, and support of the research of the first author by IGA, project P\v rF~2019~015, is gratefully acknowledged.} \title{Residuation in finite posets} \begin{abstract} When an algebraic logic based on a poset instead of a lattice is investigated, there is a natural problem of how to introduce the connective implication so that it is everywhere defined and satisfies (left) adjointness with the connective conjunction. We have already studied this problem for the logic of quantum mechanics, which is based on an orthomodular poset, and for the logic of quantum effects based on a so-called effect algebra, which is only partial and need not be lattice-ordered. For this, we introduced the so-called operator residuation, where the values of implication and conjunction need not be elements of the underlying poset, but only certain subsets of it. However, this approach can be generalized to posets satisfying more general conditions. If these posets are moreover finite, we can focus on maximal or minimal elements of the corresponding subsets, and the formulas for the mentioned operators can be substantially simplified. This is shown in the present paper, where all theorems are illustrated by corresponding examples. \end{abstract} {\bf AMS Subject Classification:} 06A11, 06C15, 06D15, 03G25 {\bf Keywords:} Finite poset, strongly modular poset, bounded poset, complementation, Boolean poset, residuated poset, adjointness, relatively pseudocomplemented poset If an algebraic logic is based on a lattice $(L,\leq)$, we usually ask that the connective conjunction $\odot$ and the connective implication $\rightarrow$ are related by means of an adjointness, i.e.\ for all $x,y,z\in L$ we have \[ x\odot y\leq z\text{ if and only if }x\leq y\rightarrow z. \] It is well-known (see e.g.\ \cite{B}) that for Boolean algebras one can take $x\odot y=x\wedge y$ and $x\rightarrow y=x'\vee y$ in order to obtain a residuated lattice.
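This classical Boolean residuation can be checked exhaustively on a small example, say the powerset Boolean algebra of a three-element set (a sketch for illustration; the order is inclusion and $'$ is set complement):

```python
from itertools import combinations, product

# powerset of {0,1,2} ordered by inclusion: a Boolean algebra with 8 elements
base = {0, 1, 2}
P = [frozenset(c) for r in range(4) for c in combinations(sorted(base), r)]

conj = lambda a, b: a & b            # x (.) y = x meet y
impl = lambda a, b: (base - a) | b   # x -> y = x' join y

# adjointness: x (.) y <= z  iff  x <= y -> z, for all 8^3 triples
ok = all((conj(x, y) <= z) == (x <= impl(y, z))
         for x, y, z in product(P, repeat=3))
print(ok)
```

The exhaustive check confirms the adjointness for all $8^3$ triples of this Boolean algebra.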
For modular lattices with complementation one can consider $x\odot y=y\wedge(x\vee y')$ and $x\rightarrow y=x'\vee(x\wedge y)$ in order to obtain a left-residuated lattice as shown in \cite{CF} and \cite{CL19a}. Similar results can be proved for orthomodular lattices as pointed out in \cite{CL17a} and \cite{CL17b}. However, when working with posets instead of lattices, we cannot use the lattice operations and there is a problem of how to introduce the operations $\odot$ and $\rightarrow$. Sometimes we define these operations by using the operators $L$ and $U$, i.e.\ the lower and upper cone, respectively; see e.g.\ \cite{CL18a}, \cite{CL19b} and \cite{CLP} for details. We call this kind of residuation an operator residuation. The disadvantage of this approach is that the formulas for adjointness are then rather unwieldy and the corresponding sets may be large, even for finite posets. The idea of this paper is to replace these large sets, for finite posets, by relatively small ones which work as well. Several papers on so-called operator residuation in posets with a unary operation have already been published by the authors, see e.g.\ \cite{CL18a} -- \cite{CLP}. We investigated when operators $M$ and $R$ can be introduced such that the poset satisfies the so-called operator adjointness, i.e.\ \[ M(x,y)\subseteq L(z)\text{ if and only if }L(x)\subseteq R(y,z) \] where $L(a)$ denotes the lower cone of the element $a$. We were successful in several cases, for example in the case of Boolean posets and in the case of some modifications of modular or orthomodular posets. However, for the operators $M$ and $R$ we often obtained complicated formulas involving the operators $U$ and $L$ denoting the upper and lower cone, respectively. Since $R$ models the logical connective implication within the corresponding logic and a certain implication may have several values, the values of $R$ are in general not elements, but sets, which may be large in concrete cases.
In such a case we refer to the use of such an implication as unsharp reasoning, see e.g.\ \cite{CL19a} and \cite{CL19b}. On the other hand, in practice we often work with finite posets. Hence the natural question arises whether in the case of finite posets the expression for $R$ can be simplified and the number of possible values of $R$ can be reduced. The present paper shows that this is indeed possible for certain finite posets, for example for finite bounded strongly modular posets, finite Boolean posets and finite relatively pseudocomplemented posets. Some examples illustrate the obtained results. In the following, all considered structures are assumed to have a non-empty base set. Let $(P,\leq)$ be a poset, $a\in P$ and $A,B\subseteq P$. Then we write $A\leq B$ in case $x\leq y$ for all $x\in A$ and $y\in B$. Instead of $\{a\}\leq B$ we simply write $a\leq B$. Analogously we proceed in similar cases. Further we define \begin{align*} L(A) & :=\{x\in P\mid x\leq A\}, \\ U(A) & :=\{x\in P\mid A\leq x\}. \end{align*} Instead of $L(\{a,b\})$, $L(\{a\}\cup A)$, $L(A\cup B)$ and $L(U(A))$ we simply write $L(a,b)$, $L(a,A)$, $L(A,B)$ and $LU(A)$, respectively. Analogously we proceed in similar cases. Further, denote by $\Max A$ and $\Min A$ the set of all maximal and minimal elements of $A$, respectively. For the following concepts, see e.g.\ \cite{CR}. \begin{definition} Let $\mathbf P=(P,\leq)$ be a poset. Then $\mathbf P$ is called {\em distributive} if it satisfies one of the following equivalent {\rm LU}-identities: \begin{align*} U(L(x,y),z) & \approx UL(U(x,z),U(y,z)), \\ L(U(x,y),z) & \approx LU(L(x,z),L(y,z)). \end{align*} The poset $\mathbf P$ is called {\em modular} {\rm(}cf.\ {\rm\cite{CR})} if for all $x,y,z\in P$ the following holds: \[ x\leq z\text{ implies }L(U(x,y),z)=LU(x,L(y,z)).
\] It is well-known that for lattices the modular law \[ x\leq z\text{ implies }(x\vee y)\wedge z=x\vee(y\wedge z) \] can be equivalently replaced by the so-called {\em modular identity} \[ (x\vee y)\wedge(x\vee z)\approx x\vee(y\wedge(x\vee z)). \] We call $\mathbf P$ {\em strongly modular} {\rm(}cf.\ {\rm\cite{CL19a})} if it satisfies the {\rm LU}-identities \begin{align*} L(U(x,y),U(x,z)) & \approx LU(x,L(y,U(x,z))), \\ L(U(L(x,z),y),z) & \approx LU(L(x,z),L(y,z)). \end{align*} \end{definition} Every distributive poset is modular and strongly modular. A lattice is strongly modular if and only if it is modular. Concerning the {\rm LU}-conditions occurring in the definition of distributive or modular posets we note that the one-sided inclusions hold in general. Namely, because of \[ U(x,z)\cup U(y,z)\subseteq U(L(x,y),z) \] we have \[ UL(U(x,z),U(y,z))\subseteq U(L(x,y),z). \] Analogously, \[ LU(L(x,z),L(y,z))\subseteq L(U(x,y),z) \] follows. Moreover, if $x\leq z$ then \[ \{x\}\cup L(y,z)\subseteq L(U(x,y),z) \] and hence \[ LU(x,L(y,z))\subseteq L(U(x,y),z). \] Let $\mathbf P=(P,\leq)$ be a poset. A unary operation $'$ on $P$ is called an {\em involution} if it satisfies the identity $x''\approx x$ and it is called {\em antitone} if $x\leq y$ implies $y'\leq x'$. If $\mathbf P$ has a least or a greatest element we will denote it by $0$ or $1$, respectively. If $\mathbf P$ has both $0$ and $1$ then it is called {\em bounded}. In the following, instead of singletons $\{a\}$ we simply write $a$. \begin{definition} Let $\mathbf P=(P,\leq,{}',0,1)$ be a bounded poset with a unary operation. Then $\mathbf P$ is called {\em complemented} if it satisfies the {\rm LU}-identities $L(x,x')\approx0$ and $U(x,x')\approx1$. Moreover, $\mathbf P$ is called a {\em Boolean poset} if it is both distributive and complemented. 
\end{definition} \begin{example}\label{ex1} The poset shown in Fig.~1 \vspace*{3mm} \[ \setlength{\unitlength}{7mm} \begin{picture}(6,8) \put(3,0){\circle*{.3}} \put(0,2){\circle*{.3}} \put(2,2){\circle*{.3}} \put(4,2){\circle*{.3}} \put(6,2){\circle*{.3}} \put(0,4){\circle*{.3}} \put(6,4){\circle*{.3}} \put(0,6){\circle*{.3}} \put(2,6){\circle*{.3}} \put(4,6){\circle*{.3}} \put(6,6){\circle*{.3}} \put(3,8){\circle*{.3}} \put(3,0){\line(-3,2)3} \put(3,0){\line(-1,2)1} \put(3,0){\line(1,2)1} \put(3,0){\line(3,2)3} \put(3,8){\line(-3,-2)3} \put(3,8){\line(-1,-2)1} \put(3,8){\line(1,-2)1} \put(3,8){\line(3,-2)3} \put(0,2){\line(0,1)4} \put(6,2){\line(0,1)4} \put(0,4){\line(1,1)2} \put(0,2){\line(1,1)4} \put(2,2){\line(1,1)4} \put(4,2){\line(1,1)2} \put(2,2){\line(-1,1)2} \put(4,2){\line(-1,1)4} \put(6,2){\line(-1,1)4} \put(6,4){\line(-1,1)2} \put(2.85,-.75){$0$} \put(-.6,1.9){$a$} \put(1.2,1.9){$b$} \put(4.45,1.9){$c$} \put(6.4,1.9){$d$} \put(-.6,3.9){$e$} \put(6.4,3.9){$e'$} \put(-.7,5.9){$d'$} \put(1.2,5.9){$c'$} \put(4.45,5.9){$b'$} \put(6.4,5.9){$a'$} \put(2.85,8.4){$1$} \put(2.3,-1.8){{\rm Fig.~1}} \end{picture} \] \vspace*{11mm} is Boolean. Since every Boolean poset can be embedded into a Boolean algebra {\rm(}e.g.\ by means of the Dedekind-MacNeille completion{\rm)}, the complementation in Boolean posets is unique and it is even an antitone involution. For modular posets this need not be the case. \end{example} Now we introduce the basic notion of our study. \begin{definition} A {\em residuated poset} is an ordered quintuple $\mathbf P=(P,\leq,\odot,\rightarrow,1)$ where $(P,\leq,1)$ is a poset with $1$ and $\odot$ and $\rightarrow$ are mappings from $P^2$ to $2^P$ such that for all $x,y,z\in P$ \begin{enumerate}[{\rm(i)}] \item $x\odot1\approx1\odot x\approx x$, \item $x\odot y\approx y\odot x$, \item $x\odot y\leq z$ if and only if $x\leq y\rightarrow z$.
\end{enumerate} If only {\rm(i)} and {\rm(iii)} are satisfied then $\mathbf P$ is called a {\em left-residuated poset}. A {\em{\rm(}left{\rm)} residuated poset} is called {\em idempotent} if it satisfies the identity $x\odot x\approx x$. Condition {\rm(iii)} is called {\em{\rm(}left{\rm)} adjointness}. \end{definition} In the following we describe residuation in finite bounded posets for which neither distributivity nor modularity is assumed, and $'$ may be an arbitrary unary operation satisfying the identity $1'\approx0$ and two further very general {\rm LU}-identities. \begin{theorem}\label{th1} Let $(P,\leq,{}',0,1)$ be a finite bounded poset with a unary operation satisfying the {\rm LU}-identities \begin{align*} 1' & \approx0, \\ L(U(L(x,y),y'),y) & \approx L(x,y), \\ U(L(U(x,y'),y),y') & \approx U(x,y') \end{align*} and define \begin{align*} x\odot y & :=\Max L(U(x,y'),y), \\ x\rightarrow y & :=\Min U(L(x,y),x') \end{align*} for all $x,y\in P$. Then $(P,\leq,\odot,\rightarrow,1)$ is an idempotent left-residuated poset satisfying the identities \[ x\odot0\approx0,\;0\rightarrow x\approx0',\;x\rightarrow0\approx x',\;x\rightarrow x'\approx x',\;1\rightarrow x\approx x. \] \end{theorem} \begin{proof} Let $a,b,c\in P$. Then the following are equivalent: \begin{align*} a\odot b & \leq c, \\ \Max L(U(a,b'),b) & \leq c, \\ L(U(a,b'),b) & \leq c, \\ L(U(a,b'),b) & \subseteq L(c). \end{align*} Moreover, the following are equivalent: \begin{align*} a & \leq b\rightarrow c, \\ a & \leq\Min U(L(b,c),b'), \\ a & \leq U(L(b,c),b'), \\ U(L(b,c),b') & \subseteq U(a). \end{align*} If $a\odot b\leq c$ then $L(U(a,b'),b)\subseteq L(c)$ and hence \begin{align*} U(L(b,c),b') & =U(L(b)\cap L(c),b')\subseteq U(L(b)\cap L(U(a,b'),b),b')=U(L(U(a,b'),b),b')= \\ & =U(a,b')\subseteq U(a), \end{align*} i.e., $a\leq b\rightarrow c$.
If, conversely, $a\leq b\rightarrow c$ then $U(L(b,c),b')\subseteq U(a)$ and hence \begin{align*} L(U(a,b'),b) & =L(U(a)\cap U(b'),b)\subseteq L(U(L(b,c),b')\cap U(b'),b)=L(U(L(b,c),b'),b)= \\ & =L(b,c)\subseteq L(c), \end{align*} i.e., $a\odot b\leq c$. Finally, \begin{align*} x\odot0 & \approx\Max L(U(x,0'),0)\approx0, \\ x\odot x & \approx\Max L(U(x,x'),x)\approx x, \\ x\odot1 & \approx\Max L(U(x,1'),1)\approx x, \\ 1\odot x & \approx\Max L(U(1,x'),x)\approx x, \\ 0 \rightarrow x & \approx\Min U(L(0,x),0')\approx0', \\ x\rightarrow0 & \approx\Min U(L(x,0),x')\approx x', \\ x\rightarrow x' & \approx\Min U(L(x,x'),x')\approx x', \\ 1\rightarrow x & \approx\Min U(L(1,x),1')\approx x. \end{align*} \end{proof} \begin{corollary}\label{cor1} Let $\mathbf P=(P,\leq,{}',0,1)$ be a finite bounded complemented strongly modular poset. Then $\mathbf P$ satisfies the assumptions of Theorem~\ref{th1} and hence $(P,\leq,\odot,\rightarrow,1)$, where \begin{align*} x\odot y & :=\Max L(U(x,y'),y), \\ x\rightarrow y & :=\Min U(L(x,y),x') \end{align*} for all $x,y\in P$, is an idempotent left-residuated poset satisfying the identities \[ x\odot0\approx0,\;0\rightarrow x\approx0',\;x\rightarrow0\approx x',\;x\rightarrow x'\approx x',\;1\rightarrow x\approx x. \] \end{corollary} \begin{proof} We have \begin{align*} 1' & \approx L(1,1')\approx0, \\ L(U(L(x,y),y'),y) & \approx LU(L(x,y),L(y',y))\approx LU(L(x,y),0)\approx LUL(x,y)\approx L(x,y), \\ U(L(U(x,y'),y),y') & \approx ULU(y',L(y,U(y',x)))\approx UL(U(y',y),U(y',x))\approx \\ & \approx UL(1,U(x,y'))\approx ULU(x,y')\approx U(x,y'). \end{align*} \end{proof} Corollary~\ref{cor1} can be considered as some ``finite version'' of a theorem in \cite{CL19a}. An example of a complemented strongly modular poset where the complementation is not an involution is shown in the following.
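As a computational aside, the left adjointness of Theorem~\ref{th1} can be verified exhaustively on the Boolean poset of Fig.~1 (Example~\ref{ex1}), which by Corollary~\ref{cor1} satisfies the required {\rm LU}-identities. The following sketch reads the order off the Hasse diagram and checks $x\odot y\leq z$ if and only if $x\leq y\rightarrow z$ for all $12^3$ triples:

```python
from itertools import product

# the Boolean poset of Fig. 1, with relations read off the Hasse diagram
E = ['0', 'a', 'b', 'c', 'd', 'e', "e'", "d'", "c'", "b'", "a'", '1']
covers = [('0', 'a'), ('0', 'b'), ('0', 'c'), ('0', 'd'),
          ('a', 'e'), ('b', 'e'), ('a', "b'"), ('b', "a'"),
          ('c', "e'"), ('d', "e'"), ('c', "d'"), ('d', "c'"),
          ('e', "d'"), ('e', "c'"), ("e'", "b'"), ("e'", "a'"),
          ("d'", '1'), ("c'", '1'), ("b'", '1'), ("a'", '1')]
comp = {'0': '1', '1': '0', 'a': "a'", "a'": 'a', 'b': "b'", "b'": 'b',
        'c': "c'", "c'": 'c', 'd': "d'", "d'": 'd', 'e': "e'", "e'": 'e'}

leq = {(u, u) for u in E} | set(covers)
while True:  # reflexive-transitive closure of the covering relation
    new = {(u, w) for (u, v) in leq for (v2, w) in leq if v == v2}
    if new <= leq:
        break
    leq |= new

L = lambda A: {w for w in E if all((w, a) in leq for a in A)}   # lower cone
U = lambda A: {w for w in E if all((a, w) in leq for a in A)}   # upper cone
Max = lambda A: {a for a in A if not any((a, b) in leq and a != b for b in A)}
Min = lambda A: {a for a in A if not any((b, a) in leq and a != b for b in A)}

conj = lambda x, y: Max(L(U({x, comp[y]}) | {y}))   # x (.) y = Max L(U(x,y'),y)
impl = lambda x, y: Min(U(L({x, y}) | {comp[x]}))   # x -> y  = Min U(L(x,y),x')

below = lambda A, c: all((a, c) in leq for a in A)  # A <= c
above = lambda a, B: all((a, b) in leq for b in B)  # a <= B

# left adjointness for all triples of the 12-element poset
ok = all(below(conj(x, y), z) == above(x, impl(y, z))
         for x, y, z in product(E, repeat=3))
print(ok)
```

The same script also confirms the identities $x\odot1\approx x$, $x\odot x\approx x$ and $x\rightarrow0\approx x'$ element by element.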
\begin{example} The poset shown in Fig.~2 \[ \setlength{\unitlength}{7mm} \begin{picture}(6,8) \put(3,0){\circle*{.3}} \put(0,2){\circle*{.3}} \put(1,2){\circle*{.3}} \put(2,2){\circle*{.3}} \put(4,2){\circle*{.3}} \put(6,2){\circle*{.3}} \put(0,4){\circle*{.3}} \put(6,4){\circle*{.3}} \put(0,6){\circle*{.3}} \put(2,6){\circle*{.3}} \put(4,6){\circle*{.3}} \put(5,6){\circle*{.3}} \put(6,6){\circle*{.3}} \put(3,8){\circle*{.3}} \put(3,0){\line(-3,2)3} \put(3,0){\line(-1,1)2} \put(3,0){\line(-1,2)1} \put(3,0){\line(1,2)1} \put(3,0){\line(3,2)3} \put(3,8){\line(-3,-2)3} \put(3,8){\line(-1,-2)1} \put(3,8){\line(1,-2)1} \put(3,8){\line(1,-1)2} \put(3,8){\line(3,-2)3} \put(0,2){\line(0,1)4} \put(1,2){\line(-1,2)1} \put(1,2){\line(1,1)4} \put(6,2){\line(0,1)4} \put(0,4){\line(1,1)2} \put(0,2){\line(1,1)4} \put(2,2){\line(1,1)4} \put(4,2){\line(1,1)2} \put(2,2){\line(-1,1)2} \put(4,2){\line(-1,1)4} \put(6,2){\line(-1,1)4} \put(6,4){\line(-1,1)2} \put(5,6){\line(1,-2)1} \put(2.85,-.75){$0$} \put(-.6,1.9){$a$} \put(.4,1.9){$f$} \put(2.4,1.9){$b$} \put(4.45,1.9){$c$} \put(6.4,1.9){$d$} \put(-.6,3.9){$e$} \put(6.4,3.9){$e'$} \put(-.7,5.9){$d'$} \put(1.2,5.9){$c'$} \put(3.35,5.9){$b'$} \put(5.3,5.9){$g$} \put(6.4,5.9){$a'$} \put(2.85,8.4){$1$} \put(2.3,-1.8){{\rm Fig.~2}} \end{picture} \] \vspace*{11mm} with \[ \begin{array}{c|cccccccccccccc} x & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & f & g \\ \hline x' & 1 & a' & b' & c' & d' & e' & e & d & c & b & a & a' & a \end{array} \] satisfies the assumptions of Corollary~\ref{cor1}, but it is not distributive since \[ L(U(b,f),a)=L(e,a)=L(a)\neq L(0)=LU(0,0)=LU(L(b,a),L(f,a)). \] \end{example} Concerning the assumptions of Theorem~\ref{th1} we note that the one-sided inclusions hold in general. Namely, \[ L(x,y)=LUL(x,y)=L(UL(x,y),y)\subseteq L(U(L(x,y),y'),y). \] Analogously, we obtain \[ U(x,y')\subseteq U(L(U(x,y'),y),y').
\] It is surprising that also in such a general case as described in Theorem~\ref{th1} we obtain relatively simple formulas for $\odot$ and $\rightarrow$. The following example in which the unary operation is even a complementation as well as an antitone involution shows that the assumptions of Theorem~\ref{th1} are not too restrictive. \begin{example} The poset shown in Fig.~3 \[ \setlength{\unitlength}{7mm} \begin{picture}(18,8) \put(9,0){\circle*{.3}} \put(6,2){\circle*{.3}} \put(8,2){\circle*{.3}} \put(10,2){\circle*{.3}} \put(12,2){\circle*{.3}} \put(6,4){\circle*{.3}} \put(12,4){\circle*{.3}} \put(6,6){\circle*{.3}} \put(8,6){\circle*{.3}} \put(10,6){\circle*{.3}} \put(12,6){\circle*{.3}} \put(9,8){\circle*{.3}} \put(1,4){\circle*{.3}} \put(17,4){\circle*{.3}} \put(9,0){\line(-3,2)3} \put(9,0){\line(-1,2)1} \put(9,0){\line(1,2)1} \put(9,0){\line(3,2)3} \put(9,8){\line(-3,-2)3} \put(9,8){\line(-1,-2)1} \put(9,8){\line(1,-2)1} \put(9,8){\line(3,-2)3} \put(6,2){\line(0,1)4} \put(12,2){\line(0,1)4} \put(6,4){\line(1,1)2} \put(6,2){\line(1,1)4} \put(8,2){\line(1,1)4} \put(10,2){\line(1,1)2} \put(8,2){\line(-1,1)2} \put(10,2){\line(-1,1)4} \put(12,2){\line(-1,1)4} \put(12,4){\line(-1,1)2} \put(1,4){\line(2,-1)8} \put(1,4){\line(2,1)8} \put(17,4){\line(-2,-1)8} \put(17,4){\line(-2,1)8} \put(8.85,-.75){$0$} \put(5.4,1.9){$a$} \put(7.2,1.9){$b$} \put(10.45,1.9){$c$} \put(12.4,1.9){$d$} \put(5.4,3.9){$e$} \put(.4,3.9){$f$} \put(12.4,3.9){$e'$} \put(17.4,3.9){$f'$} \put(5.3,5.9){$d'$} \put(7.2,5.9){$c'$} \put(10.45,5.9){$b'$} \put(12.4,5.9){$a'$} \put(8.85,8.4){$1$} \put(8.2,-1.8){{\rm Fig.~3}} \end{picture} \] \vspace*{15mm} satisfies the assumptions of Theorem~\ref{th1}, and it is not modular since $a\leq e$, but \[ L(U(a,f),e)=L(1,e)=L(e)\neq L(a)=LU(a)=LU(a,0)=LU(a,L(f,e)). 
\] The tables for $\odot$ and $\rightarrow$ look as follows: \[ \begin{array}{c|cccccccccccccc} \odot & 0 & a & b & c & d & e & f & f' & e' & d' & c' & b' & a' & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a & 0 & a & 0 & 0 & 0 & a & f & f' & 0 & a & a & a & 0 & a \\ b & 0 & 0 & b & 0 & 0 & b & f & f' & 0 & b & b & 0 & b & b \\ c & 0 & 0 & 0 & c & 0 & 0 & f & f' & c & c & 0 & c & c & c \\ d & 0 & 0 & 0 & 0 & d & 0 & f & f' & d & 0 & d & d & d & d \\ e & 0 & a & b & 0 & 0 & e & f & f' & 0 & e & e & a & b & e \\ f & 0 & a & b & c & d & e & f & 0 & e' & d' & c' & b' & a' & f \\ f' & 0 & a & b & c & d & e & 0 & f' & e' & d' & c' & b' & a' & f' \\ e' & 0 & 0 & 0 & c & d & 0 & f & f' & e' & c & d & e' & e' & e' \\ d' & 0 & a & b & c & 0 & e & f & f' & c & d' & e & \{a,c\} & \{b,c\} & d' \\ c' & 0 & a & b & 0 & d & e & f & f' & d & e & c' & \{a,d\} & \{b,d\} & c' \\ b' & 0 & a & 0 & c & d & a & f & f' & e' & \{a,c\} & \{a,d\} & b' & e' & b' \\ a' & 0 & 0 & b & c & d & b & f & f' & e' & \{b,c\} & \{b,d\} & e' & a' & a' \\ 1 & 0 & a & b & c & d & e & f & f' & e' & d' & c' & b' & a' & 1 \end{array} \] \[ \begin{array}{c|cccccccccccccc} \rightarrow & 0 & a & b & c & d & e & f & f' & e' & d' & c' & b' & a' & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ a & a' & 1 & a' & a' & a' & 1 & a' & a' & a' & 1 & 1 & 1 & a' & 1 \\ b & b' & b' & 1 & b' & b' & 1 & b' & b' & b' & 1 & 1 & b' & 1 & 1 \\ c & c' & c' & c' & 1 & c' & c' & c' & c' & 1 & 1 & c' & 1 & 1 & 1 \\ d & d' & d' & d' & d' & 1 & d' & d' & d' & 1 & d' & 1 & 1 & 1 & 1 \\ e & e' & b' & a' & e' & e' & 1 & e' & e' & e' & 1 & 1 & b' & a' & 1 \\ f & f' & a' & b' & c' & d' & e' & 1 & f' & e & d & c & b & a & 1 \\ f' & f & a' & b' & c' & d' & e' & f & 1 & e & d & c & b & a & 1 \\ e' & e & e & e & d' & c' & e & e & e & 1 & d' & c' & 1 & 1 & 1 \\ d' & d & \{b',c'\} & \{a',c'\} & e' & d & c' & d & d & e' & 1 & c' & b' & a' & 1 \\ c' & c & \{b',d'\} & \{a',d'\} & c & e' & d' & c 
& c & e' & d' & 1 & b' & a' & 1 \\ b' & b & e & b & \{a',d'\} & \{a',c'\} & e & b & b & a' & d' & c' & 1 & a' & 1 \\ a' & a & a & e & \{b',d'\} & \{b',c'\} & e & a & a & b' & d' & c' & b' & 1 & 1 \\ 1 & 0 & a & b & c & d & e & f & f' & e' & d' & c' & b' & a' & 1 \end{array} \] \end{example} As a special case of Theorem~\ref{th1} we obtain \begin{theorem} Let $(B,\leq,{}',0,1)$ be a finite Boolean poset and define \begin{align*} x\odot y & :=\Max L(x,y), \\ x\rightarrow y & :=\Min U(x',y) \end{align*} for all $x,y\in B$. Then $(B,\leq,\odot,\rightarrow,1)$ is an idempotent residuated poset satisfying \begin{align*} x\odot y=0 & \text{ if and only if }x\leq y', \\ x\rightarrow y=1 & \text{ if and only if }x\leq y \end{align*} for all $x,y\in B$ and \[ x\rightarrow0\approx x',\;x\rightarrow x'\approx x',\;1\rightarrow x\approx x. \] \end{theorem} \begin{proof} We have \begin{align*} L(U(L(x,y),y'),y) & \approx L(UL(U(x,y'),U(y,y')),y)\approx L(ULU(x,y'),y)\approx L(U(x,y'),y)\approx \\ & \approx LU(L(x,y),L(y',y))\approx LUL(x,y)\approx L(x,y), \\ U(L(U(x,y'),y),y') & \approx U(LU(L(x,y),L(y',y)),y')\approx U(LUL(x,y),y')\approx U(L(x,y),y')\approx \\ & \approx UL(U(x,y'),U(y,y'))\approx ULU(x,y')\approx U(x,y'), \\ L(U(x,y'),y) & \approx LU(L(x,y),L(y',y))\approx LUL(x,y)\approx L(x,y), \\ U(L(x,y),x') & \approx UL(U(x,x'),U(y,x'))\approx ULU(y,x')\approx U(x',y), \\ x\odot y & =\Max L(x,y)=0\text{ for all }x,y\in B\text{ with }x\leq y', \\ x\rightarrow y & =\Min U(x',y)=1\text{ for all }x,y\in B\text{ with }x\leq y, \\ x\odot y & \approx\Max L(x,y)\approx\Max L(y,x)\approx y\odot x. \end{align*} Moreover, if $x,y\in B$ and $x\odot y=0$ then \begin{align*} x & \in L(x)=L(x,1)=L(x,U(y,y'))=LU(L(x,y),L(x,y'))=LU(0,L(x,y'))= \\ & =LUL(x,y')=L(x,y')\subseteq L(y') \end{align*} and hence $x\leq y'$. 
Finally, if $x,y\in B$ and $x\rightarrow y=1$ then \begin{align*} y & \in U(y)=U(0,y)=U(L(x,x'),y)=UL(U(x,y),U(x',y))=UL(U(x,y),1)= \\ & =ULU(x,y)=U(x,y)\subseteq U(x) \end{align*} and hence $x\leq y$. \end{proof} Again the formulas for $\odot$ and $\rightarrow$ are very simple. As an example of a Boolean poset, we may consider that from Example~\ref{ex1}. \begin{example} The tables for $\odot$ and $\rightarrow$ for the Boolean poset from Example~\ref{ex1} are as follows: \[ \begin{array}{c|cccccccccccc} \odot & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a & 0 & a & 0 & 0 & 0 & a & 0 & a & a & a & 0 & a \\ b & 0 & 0 & b & 0 & 0 & b & 0 & b & b & 0 & b & b \\ c & 0 & 0 & 0 & c & 0 & 0 & c & c & 0 & c & c & c \\ d & 0 & 0 & 0 & 0 & d & 0 & d & 0 & d & d & d & d \\ e & 0 & a & b & 0 & 0 & e & 0 & e & e & a & b & e \\ e' & 0 & 0 & 0 & c & d & 0 & e' & c & d & e' & e' & e' \\ d' & 0 & a & b & c & 0 & e & c & d' & e & \{a,c\} & \{b,c\} & d' \\ c' & 0 & a & b & 0 & d & e & d & e & c' & \{a,d\} & \{b,d\} & c' \\ b' & 0 & a & 0 & c & d & a & e' & \{a,c\} & \{a,d\} & b' & e' & b' \\ a' & 0 & 0 & b & c & d & b & e' & \{b,c\} & \{b,d\} & e' & a' & a' \\ 1 & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & 1 \end{array} \] \[ \begin{array}{c|cccccccccccc} \rightarrow & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ a & a' & 1 & a' & a' & a' & 1 & a' & 1 & 1 & 1 & a' & 1 \\ b & b' & b' & 1 & b' & b' & 1 & b' & 1 & 1 & b' & 1 & 1 \\ c & c' & c' & c' & 1 & c' & c' & 1 & 1 & c' & 1 & 1 & 1 \\ d & d' & d' & d' & d' & 1 & d' & 1 & d' & 1 & 1 & 1 & 1 \\ e & e' & b' & a' & e' & e' & 1 & e' & 1 & 1 & b' & a' & 1 \\ e' & e & e & e & d' & c' & e & 1 & d' & c' & 1 & 1 & 1 \\ d' & d & \{b',c'\} & \{a',c'\} & e' & d & c' & e' & 1 & c' & b' & a' & 1 \\ c' & c & \{b',d'\} & \{a',d'\} & c & e' & d' & e' & d' & 1 & b' & a' & 1 \\ b' & b & e & b & \{a',d'\} & 
\{a',c'\} & e & a' & d' & c' & 1 & a' & 1 \\ a' & a & a & e & \{b',d'\} & \{b',c'\} & e & b' & d' & c' & b' & 1 & 1 \\ 1 & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & 1 \end{array} \] As one can see, again the values of $\odot$ and $\rightarrow$ are subsets consisting of at most two elements. \end{example} Now we turn our attention to a somewhat more complicated case where $\odot$ and $\rightarrow$ cannot be introduced as terms in $L$ and $U$. \begin{theorem}\label{th2} Let $(P,\leq,{}',0,1)$ be a finite bounded poset with a unary operation satisfying the following conditions (two of which are {\rm LU}-identities) \begin{align*} 0' & \approx1, \\ 1' & \approx0, \\ x' & \neq1\text{ for all }x\in P\setminus\{0\}, \\ L(U(L(x,y),x'),x) & \approx LU(L(x,y),L(x',x)), \\ L(U(x',x),U(y,x')) & \approx LU(x',L(x,U(y,x'))), \\ L(x,x') & \subseteq L(y)\text{ for all }x\in P\text{ and }y\in P\setminus\{0\}, \\ U(x,x') & \subseteq U(y)\text{ for all }x\in P\text{ and }y\in P\setminus\{1\} \end{align*} and define \[ x\odot y:=\left\{ \begin{array}{ll} 0 & \text{if }x\leq y' \\ \Max L(U(x,y'),y) & \text{otherwise} \end{array} \right.,\quad x\rightarrow y:=\left\{ \begin{array}{ll} 1 & \text{if }x\leq y \\ \Min U(L(x,y),x') & \text{otherwise} \end{array} \right. \] for all $x,y\in P$. Then $(P,\leq,\odot,\rightarrow,1)$ is a left-residuated poset satisfying \begin{align*} x\odot y & =0\text{ for all }x,y\in P\text{ with }x\leq y', \\ x\rightarrow y & =1\text{ for all }x,y\in P\text{ with }x\leq y \end{align*} and the identities \[ x\rightarrow0\approx x',\;1\rightarrow x\approx x.
\] \end{theorem} \begin{proof} We have \begin{align*} 0\odot1 & \approx1\odot0\approx0, \\ 0\rightarrow0 & \approx1\rightarrow1\approx1, \\ x\odot1 & =\Max L(U(x,1'),1)=\Max L(x)=x\text{ for all }x\in P\setminus\{0\}, \\ 1\odot x & =\Max L(U(1,x'),x)=\Max L(x)=x\text{ for all }x\in P\setminus\{0\}, \\ x\rightarrow0 & =\Min U(L(x,0),x')=\Min U(x')=x'\text{ for all }x\in P\setminus\{0\}, \\ 1\rightarrow x & =\Min U(L(1,x),1')=x\text{ for all }x\in P\setminus\{1\}. \end{align*} Now let $a,b,c\in P$. If $a\leq b'$ then \begin{align*} a\odot b=0 & \leq c, \\ a & \leq\Min U(L(b,c),b')=b\rightarrow c. \end{align*} If $b\leq c$ then \begin{align*} a\odot b=\Max L(U(a,b'),b) & \leq c, \\ a & \leq1=b\rightarrow c. \end{align*} For the rest of the proof assume $a\not\leq b'$ and $b\not\leq c$. Then the following are equivalent: \begin{align*} a\odot b & \leq c, \\ \Max L(U(a,b'),b) & \leq c, \\ L(U(a,b'),b) & \leq c, \\ L(U(a,b'),b) & \subseteq L(c). \end{align*} Moreover, the following are equivalent: \begin{align*} a & \leq b\rightarrow c, \\ a & \leq\Min U(L(b,c),b'), \\ a & \leq U(L(b,c),b'), \\ U(L(b,c),b') & \subseteq U(a). \end{align*} First assume $a\odot b\leq c$. Since $a=1$ would imply $b=1\odot b=a\odot b\leq c$, we have $a\neq1$ and hence $U(b,b')\subseteq U(a)$. Now because of $L(U(a,b'),b)\subseteq L(c)$ we obtain \begin{align*} U(L(b,c),b') & =U(L(b)\cap L(c),b')\subseteq U(L(b)\cap L(U(a,b'),b),b')=U(L(U(a,b'),b),b')= \\ & =U(b',L(b,U(a,b')))=ULU(b',L(b,U(a,b')))=UL(U(b',b),U(a,b'))\subseteq \\ & \subseteq ULU(a)=U(a), \end{align*} i.e.\ $a\leq b\rightarrow c$. Finally, assume $a\leq b\rightarrow c$. Since $c=0$ would imply $a\leq b\rightarrow c=b\rightarrow0=b'$, we have $c\neq0$ and hence $L(b,b')\subseteq L(c)$. Now because of $U(L(b,c),b')\subseteq U(a)$ we obtain \begin{align*} L(U(a,b'),b) & =L(U(a)\cap U(b'),b)\subseteq L(U(L(b,c),b')\cap U(b'),b)=L(U(L(b,c),b'),b)= \\ & =LU(L(b,c),L(b',b))\subseteq LUL(c)=L(c), \end{align*} i.e.\ $a\odot b\leq c$. 
\end{proof} Concerning the two {\rm LU}-identities occurring in Theorem~\ref{th2} we note that the one-sided inclusions hold in general. Namely, \[ L(x,y)\cup L(x',x)\subseteq L(U(L(x,y),x'),x) \] and hence \[ LU(L(x,y),L(x',x))\subseteq L(U(L(x,y),x'),x) \] for all $x,y\in P$. Analogously, \[ LU(x',L(x,U(y,x')))\subseteq L(U(x',x),U(y,x')) \] for all $x,y\in P$. The following example shows a poset which is not a lattice and satisfies the assumptions of Theorem~\ref{th2}. \begin{example}\label{ex2} The poset shown in Fig.~4 \vspace*{3mm} \[ \setlength{\unitlength}{7mm} \begin{picture}(6,12) \put(3,2){\circle*{.3}} \put(0,4){\circle*{.3}} \put(2,4){\circle*{.3}} \put(4,4){\circle*{.3}} \put(6,4){\circle*{.3}} \put(0,6){\circle*{.3}} \put(6,6){\circle*{.3}} \put(0,8){\circle*{.3}} \put(2,8){\circle*{.3}} \put(4,8){\circle*{.3}} \put(6,8){\circle*{.3}} \put(3,10){\circle*{.3}} \put(3,0){\circle*{.3}} \put(3,12){\circle*{.3}} \put(3,2){\line(-3,2)3} \put(3,2){\line(-1,2)1} \put(3,2){\line(1,2)1} \put(3,2){\line(3,2)3} \put(3,10){\line(-3,-2)3} \put(3,10){\line(-1,-2)1} \put(3,10){\line(1,-2)1} \put(3,10){\line(3,-2)3} \put(0,4){\line(0,1)4} \put(6,4){\line(0,1)4} \put(0,6){\line(1,1)2} \put(0,4){\line(1,1)4} \put(2,4){\line(1,1)4} \put(4,4){\line(1,1)2} \put(2,4){\line(-1,1)2} \put(4,4){\line(-1,1)4} \put(6,4){\line(-1,1)4} \put(6,6){\line(-1,1)2} \put(3,0){\line(0,1)2} \put(3,12){\line(0,-1)2} \put(3.4,1.9){$f$} \put(-.6,3.9){$a$} \put(1.2,3.9){$b$} \put(4.45,3.9){$c$} \put(6.4,3.9){$d$} \put(-.6,5.9){$e$} \put(6.4,5.9){$e'$} \put(-.7,7.9){$d'$} \put(1.2,7.9){$c'$} \put(4.45,7.9){$b'$} \put(6.4,7.9){$a'$} \put(3.4,9.9){$f'$} \put(2.85,-.75){$0$} \put(2.85,12.4){$1$} \put(2.3,-1.8){{\rm Fig.~4}} \end{picture} \] \vspace*{13mm} satisfies the assumptions of Theorem~\ref{th2} and the tables for $\odot$ and $\rightarrow$ look as follows: \[ \begin{array}{c|cccccccccccccc} \odot & 0 & f & a & b & c & d & e & e' & d' & c' & b' & a' & f' & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ f & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & f \\ a & 0 & 0 & a & 0 & 0 & 0 & a & f & a & a & a & f & a & a \\ b & 0 & 0 & 0 & b & 0 & 0 & b & f & b & b & f & b & b & b \\ c & 0 & 0 & 0 & 0 & c & 0 & f & c & c & f & c & c & c & c \\ d & 0 & 0 & 0 & 0 & 0 & d & 0 & d & f & d & d & d & d & d \\ e & 0 & 0 & a & b & 0 & 0 & e & f & e & e & a & b & e & e \\ e' & 0 & 0 & 0 & 0 & c & d & f & e' & c & d & e' & e' & e' & e' \\ d' & 0 & 0 & a & b & c & f & e & c & d' & e & \{a,c\} & \{b,c\} & d' & d' \\ c' & 0 & 0 & a & b & f & d & e & d & e & c' & \{a,d\} & \{b,d\} & c' & c' \\ b' & 0 & 0 & a & f & c & d & a & e' & \{a,c\} & \{a,d\} & b' & e' & b' & b' \\ a' & 0 & 0 & f & b & c & d & b & e' & \{b,c\} & \{b,d\} & e' & a' & a' & a' \\ f' & 0 & 0 & a & b & c & d & e & e' & d' & c' & b' & a' & f' & f' \\ 1 & 0 & f & a & b & c & d & e & e' & d' & c' & b' & a' & f' & 1 \end{array} \] \[ \begin{array}{c|cccccccccccccc} \rightarrow & 0 & f & a & b & c & d & e & e' & d' & c' & b' & a' & f' & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ f & f' & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ a & a' & a' & 1 & a' & a' & a' & 1 & a' & 1 & 1 & 1 & a' & 1 & 1 \\ b & b' & b' & b' & 1 & b' & b' & 1 & b' & 1 & 1 & b' & 1 & 1 & 1 \\ c & c' & c' & c' & c' & 1 & c' & c' & 1 & 1 & c' & 1 & 1 & 1 & 1 \\ d & d' & d' & d' & d' & d' & 1 & d' & 1 & d' & 1 & 1 & 1 & 1 & 1 \\ e & e' & e' & b' & a' & e' & e' & 1 & e' & 1 & 1 & b' & a' & 1 & 1 \\ e' & e & e & e & e & d' & c' & e & 1 & d' & c' & 1 & 1 & 1 & 1 \\ d' & d & d & \{b',c'\} & \{a',c'\} & e' & d & c' & e' & 1 & c' & b' & a' & 1 & 1 \\ c' & c & c & \{b',d'\} & \{a',d'\} & c & e' & d' & e' & d' & 1 & b' & a' & 1 & 1 \\ b' & b & b & e & b & \{a',d'\} & \{a',c'\} & e & a' & d' & c' & 1 & a' & 1 & 1 \\ a' & a & a & a & e & \{b',d'\} & \{b',c'\} & e & b' & d' & c' & b' & 1 & 1 & 1 \\ f' & f & f & a & b & c & d & e & e' & d' & c' & b' & a' & 1 & 1 \\ 1 & 0 & f & a & b & 
c & d & e & e' & d' & c' & b' & a' & f' & 1 \end{array} \] \end{example} In Example~\ref{ex2} the unary operation $'$ is in fact an antitone involution which is not a complementation. We are going to show that $'$ even need not be an involution, see the following example. \begin{example} The poset shown in Fig.~5 \vspace*{-4mm} \[ \setlength{\unitlength}{7mm} \begin{picture}(6,11) \put(3,2){\circle*{.3}} \put(3,4){\circle*{.3}} \put(1,6){\circle*{.3}} \put(3,6){\circle*{.3}} \put(5,6){\circle*{.3}} \put(3,8){\circle*{.3}} \put(3,10){\circle*{.3}} \put(3,2){\line(0,1)8} \put(3,4){\line(-1,1)2} \put(3,4){\line(1,1)2} \put(3,8){\line(-1,-1)2} \put(3,8){\line(1,-1)2} \put(2.875,1.25){$0$} \put(3.4,3.85){$a$} \put(.35,5.85){$b$} \put(3.4,5.85){$c$} \put(5.4,5.85){$d$} \put(3.4,7.85){$a'$} \put(2.85,10.4){$1$} \put(2.2,.3){{\rm Fig.~5}} \end{picture} \] \vspace*{-3mm} with \[ \begin{array}{c|ccccccc} x & 0 & a & b & c & d & a' & 1 \\ \hline x' & 1 & a' & c & d & b & a & 0 \end{array} \] satisfies the assumptions of Theorem~\ref{th2} and the tables for $\odot$ and $\rightarrow$ look as follows: \[ \begin{array}{c|ccccccc} \odot & 0 & a & b & c & d & a' & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a & 0 & 0 & 0 & 0 & 0 & 0 & a \\ b & 0 & 0 & b & c & 0 & b & b \\ c & 0 & 0 & 0 & c & d & c & c \\ d & 0 & 0 & b & 0 & d & d & d \\ a' & 0 & 0 & b & c & d & a' & a' \\ 1 & 0 & a & b & c & d & a' & 1 \end{array} \quad\quad\quad \begin{array}{c|ccccccc} \rightarrow & 0 & a & b & c & d & a' & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ a & a' & 1 & 1 & 1 & 1 & 1 & 1 \\ b & c & c & 1 & c & c & 1 & 1 \\ c & d & d & d & 1 & d & 1 & 1 \\ d & b & b & b & b & 1 & 1 & 1 \\ a' & a & a & b & c & d & 1 & 1 \\ 1 & 0 & a & b & c & d & a' & 1 \end{array} \] Since the considered poset is in fact a lattice, the values of $\odot$ and $\rightarrow$ are singletons. \end{example} Let us recall the concept of a relatively pseudocomplemented poset. 
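Tables such as the ones above can be verified mechanically. The following Python sketch is a brute-force check of the defining formulas of Theorem~\ref{th2} on the poset of Fig.~5; the encoding of the order and of the unary operation (with `a1` standing for $a'$) is our own transcription of the Hasse diagram, and all function names are ours.

```python
# Brute-force evaluation of the operations of Theorem th2 on the
# seven-element poset of Fig. 5: 0 < a < b, c, d < a' < 1.
# The values are returned as sets of maximal (resp. minimal) elements.

P = ["0", "a", "b", "c", "d", "a1", "1"]
below = {                       # x -> set of all elements <= x
    "0": {"0"},
    "a": {"0", "a"},
    "b": {"0", "a", "b"},
    "c": {"0", "a", "c"},
    "d": {"0", "a", "d"},
    "a1": {"0", "a", "b", "c", "d", "a1"},
    "1": set(P),
}
# the unary operation ' of the example: 0'=1, a'=a1, b'=c, c'=d, d'=b, ...
comp = {"0": "1", "a": "a1", "b": "c", "c": "d", "d": "b", "a1": "a", "1": "0"}

def leq(x, y):
    return x in below[y]

def L(S):                       # lower cone of a subset S of P
    return {z for z in P if all(leq(z, s) for s in S)}

def U(S):                       # upper cone of a subset S of P
    return {z for z in P if all(leq(s, z) for s in S)}

def Max(S):                     # maximal elements of S
    return {x for x in S if not any(leq(x, y) and x != y for y in S)}

def Min(S):                     # minimal elements of S
    return {x for x in S if not any(leq(y, x) and x != y for y in S)}

def odot(x, y):                 # x (.) y := 0 if x <= y', else Max L(U(x,y'),y)
    if leq(x, comp[y]):
        return {"0"}
    return Max(L(U({x, comp[y]})) & L({y}))

def arrow(x, y):                # x -> y := 1 if x <= y, else Min U(L(x,y),x')
    if leq(x, y):
        return {"1"}
    return Min(U(L({x, y})) & U({comp[x]}))
```

For instance, `odot("b", "c")` returns `{"c"}` and `arrow("a1", "b")` returns `{"b"}`, matching the corresponding entries of the tables above.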
These posets were systematically studied e.g.\ in \cite{CL18b} and \cite{CLP}. Of course, this notion is a generalization of the notion of a relatively pseudocomplemented lattice and in the case of lattices both notions coincide. \begin{definition} A {\em poset} $(P,\leq)$ is called {\em relatively pseudocomplemented} if for every $a,b\in P$ there exists a greatest element $x$ of $P$ satisfying $L(a,x)\subseteq L(b)$. This element is called the {\em relative pseudocomplement} $a*b$ of $a$ with respect to $b$. Instead of $(P,\leq)$ we also write $(P,\leq,*)$. The binary operation $*$ on $P$ is called {\em relative pseudocomplementation}. \end{definition} It is worth noticing that every relatively pseudocomplemented poset $(P,\leq,*)$ has a greatest element, namely $x*x$ ($x\in P$). For finite relatively pseudocomplemented posets the formulas for $\odot$ and $\rightarrow$ turn out to be rather simple. \begin{theorem} Let $(P,\leq,*)$ be a finite relatively pseudocomplemented poset and define \begin{align*} x\odot y & :=\Max L(x,y), \\ x\rightarrow y & :=x*y \end{align*} for all $x,y\in P$. Then $(P,\leq,\odot,\rightarrow,1)$ is an idempotent residuated poset satisfying \[ x\rightarrow y=1\text{ if and only if }x\leq y \] for all $x,y\in P$ and the identity $1\rightarrow x\approx x$. \end{theorem} \begin{proof} For $a,b,c\in P$ the following are equivalent: \begin{align*} a\odot b & \leq c, \\ \Max L(a,b) & \leq c, \\ L(a,b) & \leq c, \\ L(a,b) & \subseteq L(c), \\ L(b,a) & \subseteq L(c), \\ a & \leq b*c, \\ a & \leq b\rightarrow c. \end{align*} Moreover, the following are equivalent: \begin{align*} a\rightarrow b & =1, \\ a*b & =1, \\ L(a,1) & \subseteq L(b), \\ L(a) & \subseteq L(b), \\ a & \leq b. \end{align*} Finally, \begin{align*} x\odot y & \approx\Max L(x,y)\approx\Max L(y,x)\approx y\odot x, \\ x\odot x & \approx\Max L(x,x)\approx x, \\ x\odot1 & \approx\Max L(x,1)\approx x, \\ 1\rightarrow x & \approx1*x\approx x.
\end{align*} \end{proof} \begin{example} The poset shown in Fig.~6 \vspace*{-4mm} \[ \setlength{\unitlength}{7mm} \begin{picture}(6,9) \put(3,2){\circle*{.3}} \put(1,4){\circle*{.3}} \put(5,4){\circle*{.3}} \put(1,6){\circle*{.3}} \put(5,6){\circle*{.3}} \put(3,8){\circle*{.3}} \put(3,2){\line(-1,1)2} \put(3,2){\line(1,1)2} \put(1,4){\line(0,1)2} \put(1,4){\line(2,1)4} \put(5,4){\line(-2,1)4} \put(5,4){\line(0,1)2} \put(3,8){\line(-1,-1)2} \put(3,8){\line(1,-1)2} \put(2.875,1.25){$0$} \put(.35,3.85){$a$} \put(5.4,3.85){$b$} \put(.35,5.85){$c$} \put(5.4,5.85){$d$} \put(2.85,8.4){$1$} \put(2.2,.3){{\rm Fig.~6}} \end{picture} \] \vspace*{-3mm} is relatively pseudocomplemented and the tables for $\odot$ and $\rightarrow$ look as follows: \[ \begin{array}{c|cccccc} \odot & 0 & a & b & c & d & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ a & 0 & a & 0 & a & a & a \\ b & 0 & 0 & b & b & b & b \\ c & 0 & a & b & c & \{a,b\} & c \\ d & 0 & a & b & \{a,b\} & d & d \\ 1 & 0 & a & b & c & d & 1 \end{array} \quad\quad\quad \begin{array}{c|cccccc} \rightarrow & 0 & a & b & c & d & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ a & b & 1 & b & 1 & 1 & 1 \\ b & a & a & 1 & 1 & 1 & 1 \\ c & 0 & a & b & 1 & d & 1 \\ d & 0 & a & b & c & 1 & 1 \\ 1 & 0 & a & b & c & d & 1 \end{array} \] As one can see, here the values of $\odot$ contain at most two elements and the values of $\rightarrow$ are singletons. \end{example} Authors' addresses: Ivan Chajda \\ Palack\'y University Olomouc \\ Faculty of Science \\ Department of Algebra and Geometry \\ 17.\ listopadu 12 \\ 771 46 Olomouc \\ Czech Republic \\ [email protected] Helmut L\"anger \\ TU Wien \\ Faculty of Mathematics and Geoinformation \\ Institute of Discrete Mathematics and Geometry \\ Wiedner Hauptstra\ss e 8-10 \\ 1040 Vienna \\ Austria, and \\ Palack\'y University Olomouc \\ Faculty of Science \\ Department of Algebra and Geometry \\ 17.\ listopadu 12 \\ 771 46 Olomouc \\ Czech Republic \\ [email protected] \end{document}
\begin{document} \title{Some properties defined by relative versions of star-covering properties II} \newtheorem{theorem}{Theorem} [section] \newtheorem{corollary}{Corollary}[section] \newtheorem{question}{Question}[section] \newtheorem{example}{Example}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{property}{Property}[section] \newtheorem{definition}{Definition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{problem}{Problem}[section] \newcommand{\my}[1]{\textcolor{red}{\sf #1}} \newcommand{\green}[1]{\textcolor{green}{\sf #1}} \newcommand{\blu}[1]{\textcolor{blue}{\sf #1}} \newcommand{\violet}[1]{\textcolor{violet}{\sf #1}} \begin{abstract} In this paper we consider some recent relative versions of the Menger property called set strongly star Menger and set star Menger properties and the corresponding Hurewicz-type properties. In particular, using \cite{BMae}, we "easily" prove that the set strongly star Menger and set strongly star Hurewicz properties are between countable compactness and the property of having countable extent. Also we show that the extent of a regular set star Menger or a set star Hurewicz space cannot exceed $\frak c$. Moreover, we construct (1) a consistent example of a set star Menger (set star Hurewicz) space which is not set strongly star Menger (set strongly star Hurewicz) and show that (2) the product of a set star Menger (set star Hurewicz) space with a compact space need not be set star Menger (set star Hurewicz). In particular, (1) and (2) answer some questions posed by Ko\v{c}inac, Konca and Singh in \cite{KKS-MS} and \cite{S-AM}. \end{abstract} {\bf Keywords: } Star compact, strongly star compact, star Lindel\"of, strongly star Lindel\"of, star Menger, strongly star Menger, star Hurewicz, strongly star Hurewicz, set properties.
{\bf AMS Subject Classification:} 54D20 \section{Introduction} Let ${\mathcal U}$ be a cover of a space $X$ and $A$ be a subset of $X$; the star of $A$ with respect to ${\mathcal U}$ is the set $st(A,{\mathcal U})=\bigcup\{U:U\in{\mathcal U}\;\hbox{and}\;U\cap A\neq\emptyset\}$. The star of a one-point set $\{x\}$ with respect to a cover ${\mathcal U}$ is denoted by $st(x,{\mathcal U})$. Recall that a space $X$ is star compact, briefly SC (strongly star compact, briefly SSC) if for every open cover $\mathcal{U}$ of the space $X$, there exists a finite subfamily $\mathcal V$ of $\mathcal U$ (resp., a finite subset $F$ of $X$) such that $st(\bigcup{\mathcal V},{\mathcal U})=X$ (resp., $st(F,{\mathcal U})=X$) (see \cite{IK}, where different terminology is used, and \cite{vDRRT}); $X$ is star Lindel\"of, briefly SL (strongly star Lindel\"of, briefly SSL) if for every open cover $\mathcal{U}$ of the space $X$, there exists a countable subfamily $\mathcal V$ of $\mathcal U$ (resp., a countable subset $C$ of $X$) such that $st(\bigcup{\mathcal V},{\mathcal U})=X$ (resp., $st(C,{\mathcal U})=X$) (see \cite{IK2} and \cite{IK3}, where different terminology is used). In \cite{KS,KKS-MS} Ko\v{c}inac, Konca and Singh introduced the following relative versions of the SC, SSC, SL and SSL properties. \begin{definition}\rm\label{Def2}\cite{KKS-MS}\label{SC} A space $X$ is set star compact, briefly set SC (resp., set strongly star compact, briefly set SSC), if for every nonempty subset $A$ of $X$ and for every family $\mathcal U$ of open sets in $X$ such that $\overline {A}\subseteq \bigcup{\mathcal U}$, there exists a finite subfamily $\mathcal V$ of $\mathcal U$ (resp., a finite subset $F$ of $\overline A$) such that $st(\bigcup{\mathcal V},{\mathcal U})\supset A$ (resp., $st(F,{\mathcal U})\supset A$).\footnote{Recently, the properties of Definition \ref{Def2} were studied in \cite{BMae}.
Note that in \cite{BMae} there is a \underline{misprint} in the statement of the definition of "relatively$^*$ SSC" that the authors use to describe the set SSC property: in particular, the authors write that the set "$F$ is a finite subset of $A$" instead of "$F$ is a finite subset of $\overline A$".} \end{definition} Replacing "finite" with "countable" in Definition \ref{SC}, one obtains the classes of set star Lindel\"of (briefly set SL) and set strongly star Lindel\"of (briefly set SSL) spaces (see \cite{KS}). In the following CC means countably compact. \begin{proposition}\rm \cite[Proposition 2.2]{BMae}\label{prop} In the class of Hausdorff spaces SSC, set SSC and CC are equivalent properties. \end{proposition} We prove the following \begin{proposition}\rm \label{regular set SC} In the class of regular spaces set SC and CC are equivalent properties. \end{proposition} \begin{proof} Of course, every CC space is set SC. Now, let $X$ be a regular set SC space. By contradiction, assume that there exists an infinite closed and discrete subspace $D=\{x_n: n\in\omega\}$ of $X$. By regularity, there exists a disjoint family ${\cal U}=\{U_n : n \in\omega\}$ of open subsets of $X$ such that $x_n\in U_n$, for every $n\in\omega$. Then $D\subseteq \bigcup{\mathcal U}$ but for every finite subfamily $\mathcal V$ of $\mathcal U$, we have that $D\not\subset st(\bigcup{\mathcal V},{\mathcal U})$; a contradiction. \end{proof} For a space $X$, $e(X)=\sup\{|C|\,\,:\,\, C\hbox{ is a closed and discrete subset of }X\}$ and $c(X)=\sup\{|{\cal A}|\,\,:\,\, {\cal A}\hbox{ is a cellular family of }X\}$ are, respectively, the extent and the cellularity of $X$. One says that a space $X$ has the countable chain condition (briefly ccc) if $c(X)=\omega$. \begin{proposition}\rm \cite[Proposition 3.1]{BMae} \label{Proposition 3.1} In the class of $T_1$ spaces, set SSL spaces are exactly spaces having countable extent. \end{proposition} \begin{proposition}\rm \cite[Corollary 3.3]{BMae} Every ccc space is set SL.
\end{proposition} Recall that a space $X$ is Menger, briefly M, if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and $X=\bigcup_{n \in \omega}\bigcup{\mathcal V}_n$; $X$ is Hurewicz, briefly H, if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and for every $x\in X$, $x \in\bigcup{\mathcal V}_n$ for all but finitely many $n\in\omega$. In \cite{K, K1, BCK} star versions of the Menger and Hurewicz properties called star Menger, strongly star Menger, star Hurewicz and strongly star Hurewicz properties (Definitions \ref{star Menger} and \ref{star Hurewicz} below) were introduced and recently in \cite{KKS-MS} Ko\v{c}inac, Konca and Singh considered some relative versions of them called, respectively, set star Menger, set strongly star Menger, set star Hurewicz and set strongly star Hurewicz properties. In this paper we study the previous set properties. In particular, using \cite{BMae}, we easily prove that the set strongly star Menger and set strongly star Hurewicz properties are between countable compactness and the property of having countable extent. Also we show that the extent of a regular set star Menger or a set star Hurewicz space cannot exceed $\frak c$ and use this result to give a Tychonoff star Menger (star Hurewicz) space which is not set star Menger (set star Hurewicz). In fact, the constructed example (Example \ref{EXAMPLE}) is even star compact and thus it gives a positive answer to the following question. \begin{question}\rm\cite{S-AM} Does there exist a Tychonoff star compact space which is not set star compact? \end{question} Moreover, we give a consistent answer (Example \ref{set SM, not set SSM}) to the following question.
\begin{question}\rm\cite{KKS-MS} Does there exist a Tychonoff set star Menger space which is not set strongly star Menger? \end{question} Further, we give a negative answer (Example \ref{product}) to the following \begin{question} \rm \cite{KKS-MS} Is the product of a set star Menger space with a compact space a set star Menger space? \end{question} In fact Example \ref{product} shows even more: it proves that the set star compact and set star Lindel\"of properties are not preserved by products with compact spaces. Then, the same example gives a negative answer to the following two questions. \begin{question} \rm Is the product of a set star Hurewicz space with a compact space a set star Hurewicz space? \end{question} \begin{question} \rm \cite{S-AM} Is the product of a set star compact space with a compact space a set star compact space? \end{question} Moreover, we give partial answers to the following questions. \begin{question}\label{QSSM} \rm \cite{KKS-MS} Is the product of a set strongly star Menger space with a compact space a set strongly star Menger space? \end{question} \begin{question} \rm Is the product of a set strongly star Hurewicz space with a compact space a set strongly star Hurewicz space? \end{question} No separation axiom will be assumed a priori. Recall that a family of sets is almost disjoint if the intersection of any two distinct elements is finite. Let $\cal A$ be an almost disjoint family of infinite subsets of $\omega$. Put $\Psi({\mathcal A})=\omega\cup {\mathcal A}$ and topologize $\Psi({\mathcal A})$ as follows: the points of $\omega$ are isolated and a basic neighbourhood of a point $a\in\mathcal A$ takes the form $\{a\}\cup (a\setminus F)$, where $F$ is a finite set. $\Psi({\mathcal A})$ is called the Isbell-Mr\'owka or $\Psi$-space (see \cite{Eng}). Recall that for $f,g\in \omega^\omega$, $f\leq^*g$ means that $f(n)\leq g(n)$ for all but finitely many $n$ (and $f\leq g$ means that $f(n)\leq g(n)$ for all $n\in\omega$).
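The relations $\leq$ and $\leq^*$ can be contrasted with a small computational sketch; the witnesses $f(n)=n+5$ and $g(n)=2n$, the function names, and the finite inspection horizon are purely illustrative (eventual dominance is, of course, a statement about all of $\omega$).

```python
# Illustrating f <=* g (eventual dominance) versus f <= g (everywhere):
# f(n) = n + 5 exceeds g(n) = 2n exactly for n < 5, so f <=* g holds
# while f <= g fails.  We only inspect a finite initial segment.

def leq_everywhere(f, g, horizon=1000):
    # finite approximation of "f(n) <= g(n) for all n"
    return all(f(n) <= g(n) for n in range(horizon))

def leq_star(f, g, horizon=1000):
    # finite approximation of "f(n) <= g(n) for all but finitely many n":
    # look for a threshold m past which domination holds on the segment
    return any(all(f(n) <= g(n) for n in range(m, horizon))
               for m in range(horizon))

f = lambda n: n + 5
g = lambda n: 2 * n
```

Here $f\leq^*g$ holds (the inequality fails only for $n<5$), while $f\leq g$ fails already at $n=0$.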
A subset $B\subseteq \omega^\omega$ is bounded if there is $g\in\omega^\omega$ such that $f\leq^*g$ for every $f\in B$. $D\subseteq \omega^\omega$ is cofinal if for each $g\in \omega^\omega$ there is $f\in D$ such that $g\leq^*f$. The minimal cardinality of an unbounded subset of $\omega^\omega$ is denoted by ${\frak b}$, and the minimal cardinality of a cofinal subset of $\omega^\omega$ is denoted by ${\frak d}$. The value of ${\frak d}$ does not change if one considers the relation $\leq$ instead of $\leq^*$ \cite[Theorem 3.6]{vD}. \section{On set star Menger and set strongly star Menger properties.} In \cite{K}, Ko\v{c}inac introduced the following star versions of the Menger property. \begin{definition} \rm \cite{K}\label{star Menger} A space $X$ is $\bullet$ star Menger (briefly, SM) if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and $X=\bigcup_{n \in \omega}st(\bigcup{\mathcal V}_n,{\mathcal U}_n)$; $\bullet$ strongly star Menger (briefly, SSM) if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $(F_n : n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $X$ and $X=\bigcup_{n \in \omega}st(F_n,{\mathcal U}_n)$. \end{definition} The following result gives a characterization of the SSM property in terms of a relative version of it. \begin{proposition}\rm \label{caratterizzazione} The following are equivalent for a space $X$: \begin{enumerate} \item $X$ is SSM; \item for each nonempty subset $A$ of $X$ and each sequence $({\cal U}_n : n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subset \bigcup{\mathcal U}_n$ for every $n\in\omega$, there exists a sequence $(F_n : n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $X$ and $A\subset \bigcup_{n \in \omega}st(F_n,{\mathcal U}_n)$.
\end{enumerate} \begin{proof} $2.\Rightarrow 1.$ is obvious (take $A=X$). $1.\Rightarrow 2.$ Let $A \subseteq X$ be a nonempty subset and $({\cal U}_n : n \in \omega)$ be a sequence of families of open sets of $X$ such that $\overline A\subseteq \bigcup {\mathcal{U}}_{n}$ for every $n \in \omega$. Define \begin{center} ${\mathcal{U}}^{'}_{n} = {\mathcal{U}}_{n} \cup \{ X \setminus \overline{A} \}$ \end{center} for all $n \in \omega$. Clearly, each ${\mathcal{U}}^{'}_{n}$ is an open cover for $X$. Since $X$ is SSM, there is a sequence $( F_{n} : n\in \omega)$ of finite subsets of $X$ such that $X=\bigcup_{n \in \omega}st(F_n,{\mathcal U}^{'}_{n})$. Fix $x \in A$. Then there exists $n \in \omega$ such that $x \in st(F_{n},{\mathcal{U}}^{'}_{n})$. Observe that \begin{center} $x \in st(F_{n},{\mathcal{U}}^{'}_{n}) \Leftrightarrow st(x,{\mathcal{U}}^{'}_{n}) \cap F_{n} \neq \emptyset$ \end{center} We also have that \begin{center} $st(x,{\mathcal{U}}^{'}_{n}) = \bigcup \{ U \in {\mathcal{U}}^{'}_{n} : x \in U \} = \bigcup \{ U \in {\mathcal{U}}_{n} : x \in U \} = st(x,{\mathcal{U}}_{n})$ \end{center} where the second equality holds since $x\in A\subseteq\overline{A}$ implies $x\notin X\setminus\overline{A}$. So \begin{center} $x \in st(F_{n},{\mathcal{U}}^{'}_{n}) \Leftrightarrow st(x,{\mathcal{U}}^{'}_{n}) \cap F_{n} \neq \emptyset \Leftrightarrow st(x,{\mathcal{U}}_{n}) \cap F_{n} \neq \emptyset \Leftrightarrow x \in st(F_{n},{\mathcal{U}}_{n})$. \end{center} Since $x$ is an arbitrary point of $A$, we have that $A\subset \bigcup_{n \in \omega}st(F_n,{\mathcal U}_n)$. \end{proof} \end{proposition} In \cite{KKS-MS} the following relative versions of the SM and SSM properties were considered.
\begin{definition}\rm \cite{KKS-MS} A space $X$ is $\bullet$ set star Menger (shortly, set SM) if for each nonempty subset $A$ of $X$ and for each sequence $({\cal U}_n : n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subset \bigcup{\mathcal U}_n$ for every $n\in\omega$, there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and $A\subset \bigcup_{n \in \omega}st(\bigcup{\mathcal V}_n,{\mathcal U}_n)$. $\bullet$ set strongly star Menger (shortly, set SSM) if for each nonempty subset $A$ of $X$ and for each sequence $({\cal U}_n : n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subset \bigcup{\mathcal U}_n$ for every $n\in\omega$, there exists a sequence $(F_n : n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $\overline A$ and $A\subset \bigcup_{n \in \omega}st(F_n,{\mathcal U}_n)$. \end{definition} The following result is easy to check. \begin{proposition}\rm\label{ered} A space $X$ is set SSM iff every closed subspace of $X$ is SSM. \end{proposition} The previous result is not true for set SM spaces, as the following example shows. \begin{example}\rm\label{ex ered} A set SM space having a closed subspace which is not SM. \end{example} Consider the set SM space $X$ of Example \ref{product} below and its closed subspace $A$. Since $A$ is a discrete subspace of uncountable cardinality, it is not SM. $ \triangle$ Recall that in \cite{S2} it is proved that the extent of a $T_1$ SSM space can be arbitrarily large. Also \begin{proposition}\rm \label{SakaiCor}\cite[Corollary 2.2]{Sakai} Every closed and discrete subspace of a regular SSM space has cardinality less than $\frak c$. Hence a regular SSM space has extent at most $\frak c$. \end{proposition} It is well known that a CC space has countable extent. 
Since every CC space is set SSC (see \cite[Theorem 2.1.4]{vDRRT} and recall that the CC property is hereditary with respect to closed sets) and every set SSL space has countable extent \cite[Proposition 3.1]{BMae}, we have that $$\hbox{CC}\Rightarrow \hbox{set SSH}\Rightarrow \hbox{set SSM}\Rightarrow \hbox{countable extent}.$$ Note that the previous implications cannot be reversed. Indeed, every Hurewicz space which is not countably compact is a set SSH space which is not CC: consider, for example, the countable discrete space $\omega$. For the non-reversibility of the other implications, see Examples \ref{exampleSSH} and \ref{example}. In \cite{MAT} it was shown that the extent of a Tychonoff SSL space can be arbitrarily large (note that in \cite{MAT} a SSL space is called a space with countable weak extent). In \cite{Sakai} the space constructed in \cite{MAT} was used to prove that the extent of a Tychonoff SM (in fact SC) space can be arbitrarily large. Moreover, in \cite{Sakai} the author shows the following \begin{theorem}\rm\cite{Sakai}\label{Sakai} If $X$ is a regular SM space such that $w(X)=\frak c$, then every closed and discrete subspace of $X$ has cardinality less than $\frak c$. Hence, we have $e(X)\leq \frak c$. \end{theorem} Now we show that the extent of a regular set SM space cannot exceed $\frak c$. \begin{theorem}\rm\label{extentsetSM}\label{Sakai3} If $X$ is a regular set SM space, then every closed and discrete subspace of $X$ has cardinality less than $\frak c$. Hence, we have $e(X)\leq \frak c$. \end{theorem} \begin{proof} Fix a closed and discrete subspace $Y$ of $X$ and assume $|Y|=\frak c$. Consider a family $\cal B$ of open subsets of $X$ such that for every $y\in Y$ there exists $B\in{\cal B}$ such that $y\in B$ and $\overline{B}\cap Y=\{y\}$; since $|Y|=\frak c$, we may assume $|{\cal B}|=\frak c$. 
Denote by $[\cal B]^{<\omega}$ the family of all finite subsets of $\cal B$, by $\mathbb P=([\cal B]^{<\omega})^{\omega}$ the family of all the sequences of elements of $[\cal B]^{<\omega}$, and introduce on $\mathbb{P}$ the partial order ``$\leq$'' defined as follows: if $({\cal B}_n')_{n\in\omega},({\cal B}_n'')_{n\in\omega}\in {\mathbb P} \hbox{ then }({\cal B}_n')_{n\in\omega} \leq ({\cal B}_n'')_{n\in\omega} \hbox{ means }{\cal B}_n'\subseteq {\cal B}_n'' \hbox{ for every } n\in\omega$. Let $\{({\cal B}_{\alpha,n})_{n\in\omega}: \alpha<\frak c\}$ be a cofinal family in $(\mathbb P, \leq)$. Take $Z=\{y_\alpha: \alpha <\frak c\}$ by choosing for every $\alpha<\frak c$ a point $y_\alpha \in Y\setminus \bigcup_{n\in\omega}\overline{\bigcup {\cal B}_{\alpha,n}}$ with $y_\alpha\not=y_\beta$ for $\alpha\not=\beta$. For every $\alpha < \frak c$ let $\{V_n(y_\alpha): n\in\omega\}$ be a sequence of open neighbourhoods of $y_\alpha$ such that, for every $n\in\omega$, $V_n(y_\alpha)\subseteq B$ for some $B\in {\cal B}$ with $\overline{B}\cap Y=\{y_\alpha\}$, and $V_n(y_\alpha)\cap \bigcup{\cal B}_{\alpha,n}=\emptyset$ for every $n\in\omega$. For every $n\in\omega$ put ${\cal U}_n=\{V_n(y_\alpha): \alpha <\frak c\}$. Clearly $ Z=\overline{Z}\subseteq \bigcup {\cal U}_n$ for every $n\in\omega$. We will show that the subset $Z$ and the sequence $({\cal U}_n: n\in\omega)$ do not satisfy the set SM property. Let $({\cal V}_n: n\in\omega )$ be any sequence such that ${\cal V}_n$ is a finite subset of ${\cal U}_n$ for every $n\in\omega$. Let $({\cal B}_n': n\in\omega)\in \mathbb P$ be such that every member of ${\cal V}_n$ is contained in a member of ${\cal B}_n'$. Since $\{({\cal B}_{\alpha,n})_{n\in\omega}: \alpha<\frak c\}$ is a cofinal family in $\mathbb P$, there exists $\gamma <\frak c$ such that ${\cal B}_n'\subseteq {\cal B}_{\gamma,n}$ for every $n\in\omega$. Then $V_n(y_\gamma) \cap \bigcup {\cal V}_n \subseteq V_n(y_\gamma) \cap \bigcup {\cal B}_{\gamma,n}=\emptyset$ for every $n\in \omega$. 
Since $V_n(y_\gamma)$ is the only member of ${\cal U}_n$ containing $y_\gamma$, we have $y_\gamma\not\in\bigcup_{n\in\omega}st(\bigcup {\cal V}_n,{\cal U}_n)$. \end{proof} Example \ref{consistent} below gives a consistent example of a SSM space which is not set SSM. In fact, such an example was already described in \cite{KKS-MS}; here we show that it can be easily obtained from the next characterization and from the fact that set SSM spaces have countable extent. \begin{theorem}\rm \cite{BMat}\label{BoMat} The following are equivalent: \begin{itemize} \item[(i)] $\Psi({\cal A})$ is SSM; \item[(ii)] $|{\cal A}| < \mathfrak{d}$. \end{itemize} \end{theorem} \begin{example}\label{consistent} \rm \cite{KKS-MS} ($\omega_1<\frak d$) There exists a SSM space which is not set SSM. \end{example} Assume $\omega_1<\frak d$ and consider $\Psi({\cal A})$ with $|{\cal A}| = \omega_1$. By Theorem \ref{BoMat} and since $e(\Psi({\cal A}))>\omega$, we have that $\Psi({\cal A})$ is a SSM space which is not set SSM. $ \triangle$ \begin{question}\rm Does there exist a ZFC example of a SSM space which is not set SSM? \end{question} Using Theorem \ref{Sakai3} we can give a Tychonoff space distinguishing the SM and set SM properties. In fact, the following example distinguishes the SC and set SC properties too. \begin{example}\label{EXAMPLE} \rm A Tychonoff SC (hence SM) space which is not set SM (hence not set SC). \end{example} In \cite{MAT}, for each infinite cardinal $\tau$ the following space $X(\tau)$ was considered. Let $Z= \{f_{\alpha} : \alpha < \tau \}$, where $f_\alpha$ denotes the point of $2^{\tau}$ whose only coordinate equal to $1$ is the $\alpha$th one. Consider the set $$X(\tau)=(2^{\tau}\times ({\tau}^+ +1))\setminus ((2^{\tau}\setminus Z)\times \{{\tau}^+\})$$ with the topology inherited from the product topology on $2^{\tau}\times ({\tau}^++1)$. Denote $X_0=2^{\tau}\times {\tau}^+$ and $X_1=Z\times \{{\tau}^+\}$. Then $X(\tau)=X_0\cup X_1$. 
$X_1$ is a closed and discrete subspace of $X(\tau)$ of cardinality $\tau$. So the extent of $X(\tau)$ is $\tau$.\\ In \cite{Sakai} it is proven that the space $X(\mathfrak{c})$ is SC (hence SM). By Theorem \ref{Sakai3}, $X(\mathfrak{c})$ is not set SM. $ \triangle$ Recall the following result: \begin{proposition} \rm \cite{Sakai} \label{Sakai1} Every SL (SSL) space of cardinality less than $\frak d$ is SM (SSM). \end{proposition} Now we prove that the set versions of the previous proposition hold. \begin{proposition}\label{setSL implys setSM} \rm Every set SL space of cardinality less than $\frak d$ is set SM. \end{proposition} \begin{proof} Let $X$ be a set SL space of cardinality less than $\frak d$. Let $A\subseteq X$ and $({\cal U}_n\,\,:\,\, n\in\omega)$ be a sequence of families of open sets of $X$ such that $\overline{A}\subseteq \bigcup {\cal U}_n$ for every $n\in\omega$. For every $n\in \omega $ there is a countable subfamily ${\cal V}_n=\{V_{n,m}\,\,:\,\,m\in \omega\}$ of ${\cal U}_n$ such that $A\subseteq st(\bigcup {\cal V}_n, {\cal U}_n)$. For every $x\in A$ we choose a function $f_x\in\omega^\omega$ such that $st(x,{\cal U}_n)\cap V_{n, f_x(n)}\not= \emptyset$ for all $n\in\omega$. Since $|A|<\frak d$, the family $\{f_x\,\,:\,\,x\in A\}$ is not cofinal in $(\omega^\omega, \leq)$, so there are $g\in \omega^\omega$ and, for every $x\in A$, some $n_x\in\omega$ such that $f_x(n_x)<g(n_x)$. Let ${\cal W}_n=\{V_{n,j}\,\,:\,\,j\leq g(n)\}$. Then $A\subseteq \bigcup_{n\in\omega}st(\bigcup {\cal W}_n, {\cal U}_n)$, since $x\in st(\bigcup {\cal W}_{n_x}, {\cal U}_{n_x})$ for every $x\in A$. \end{proof} In a similar way we can prove the following \begin{proposition}\label{frakd} \rm Every set SSL space of cardinality less than $\frak d$ is set SSM. \end{proposition} Then, by Proposition \ref{Proposition 3.1} we obtain \begin{corollary}\label{set-SSM} \rm For every $T_1$ space $X$ of cardinality less than $\frak d$, the following are equivalent: \begin{enumerate} \item $X$ is set SSM; \item $e(X)=\omega$. 
\end{enumerate} \end{corollary} \begin{example}\label{example}\rm There is a Tychonoff space of cardinality $\frak d$ having countable extent which is not set SSM. \end{example} Let $\Bbb P$ be the space of irrationals. Take any non-Menger subspace $X\subset\Bbb P$ of cardinality $\frak d$ (for instance, consider the Baire space $\omega^\omega$, which is homeomorphic to $\Bbb P$, and take a cofinal subset of cardinality $\frak d$; it is well known that any cofinal subset of $\omega^\omega$ is not Menger). Of course, $X$ is a paracompact space having countable extent. Since in the class of paracompact Hausdorff spaces we have that M $\Leftrightarrow$ SM (see \cite{K}), $X$ is not SM and hence not set SSM. $ \triangle$ By Corollary \ref{set-SSM} and Example \ref{example} we have: \begin{corollary}\rm The following statements are equivalent: \begin{enumerate} \item $\omega_1<\frak d$; \item every $T_1$ space of cardinality $\omega_1$ having countable extent is set SSM. \end{enumerate} \end{corollary} Recall the following result. \begin{theorem}\rm\cite{Sakai}\label{Sakai2} The following statements are equivalent for regular spaces. \begin{enumerate} \item $\omega_1={\frak d}$; \item if $X$ is a SSM space, then $e(X)\leq\omega$. \end{enumerate} \end{theorem} Now we prove \begin{theorem}\rm The following statements are equivalent for regular spaces. \begin{enumerate} \item $\omega_1={\frak d}$; \item if $X$ is a SSM space, then $e(X)\leq\omega$; \item for spaces of cardinality less than ${\frak d}$, set SSM and SSM are equivalent properties; \item for spaces of cardinality less than ${\frak d}$, set SSL and SSL are equivalent properties; \item every closed subspace of a SSM space $X$ such that $|X|<{\frak d}$ is SSM. \end{enumerate} \end{theorem} \begin{proof} $1.\Leftrightarrow 2.$ holds by Theorem \ref{Sakai2}. Now we prove $2.\Rightarrow 3.$. Let $X$ be a space of cardinality less than ${\frak d}$. By 2. and Corollary \ref{set-SSM}, we have that $X$ is SSM iff $X$ is set SSM. 
Now we prove $3.\Rightarrow 1.$ by contraposition. Assume $\omega_1<{\frak d}$. Consider a space $\Psi({\cal A})$ with $|{\cal A}|=\omega_1$. By Theorem \ref{BoMat}, $\Psi({\cal A})$ is SSM, and since $e(\Psi({\cal A}))> \omega$, $\Psi({\cal A})$ is not set SSM. $3.\Leftrightarrow 4.$ is obvious. $3.\Leftrightarrow 5.$ follows from Proposition \ref{ered}. \end{proof} Of course, countable spaces are Menger, hence both set SSM and SSM. \begin{corollary}\rm For regular spaces $X$ such that $\omega<|X|<{\frak d}$, SSM and set SSM are not equivalent properties. \end{corollary} \begin{corollary}\rm Uncountable regular spaces in which SSM and set SSM are equivalent properties have cardinality $\geq {\frak d}$. \end{corollary} In \cite[Example 5]{KKS-MS} Ko\v{c}inac, Konca and Singh constructed a $T_1$ set SM space which is not set SSM and posed the following question. \begin{question}\rm\cite{KKS-MS} \label{QUESTION} Does there exist a Tychonoff set SM space which is not set SSM? \end{question} Using Proposition \ref{setSL implys setSM} we can give a consistent answer to Question \ref{QUESTION}. \begin{example}\label{set SM, not set SSM} \rm ($\omega_1<\frak d$) A Tychonoff set SM space which is not set SSM. \end{example} Assume $\omega_1<\frak d$ and consider $\Psi(\mathcal A)$ with $|{\mathcal A}|=\omega_1$. Since $\Psi(\mathcal A)$ is separable, it is set SL; hence, by Proposition \ref{setSL implys setSM}, it is set SM. Since $e(\Psi(\mathcal A))>\omega$, $\Psi(\mathcal A)$ is not set SSM. $ \triangle$ \section{On the product of set SM and set SSM with compact spaces.} Recall that the product of a SC (SSC) space with a compact space is SC (SSC) (\cite{Fle}, \cite{vDRRT}); further, the product of a SL space with a compact space is SL \cite{vDRRT}, while the product of a SSL space with a compact space need not be SSL \cite[Example 3.3.4]{vDRRT}. In \cite{K} Ko\v{c}inac proved that the product of a SM space with a compact space is SM. 
Using \cite[Lemma 2.3]{BM1}, Matveev noted that, assuming $\omega_1<\frak d$, if $X=\Psi({\cal A})$ with $|{\mathcal A}|=\omega_1$ and $Y$ is a compact space such that $c(Y)>\omega$, then the product $X\times Y$ is not SSL, hence not SSM; he thus gave a consistent example of a product of a SSM space and a compact space which is not SSM. Then, it is natural to consider the following questions. \begin{question}\label{QSSM} \rm \cite{KKS-MS} Is the product of a set SSM space with a compact space a set SSM space? \end{question} \begin{question}\label{QSM} \rm \cite{KKS-MS} Is the product of a set SM space with a compact space a set SM space? \end{question} In the following we give a partial answer to Question \ref{QSSM} and a negative answer to Question \ref{QSM}. Note that we also show that the set SSL property is preserved in the $T_1$ product with compact spaces and that the set SC and set SL properties are not preserved in the product with compact spaces. (For completeness, we note that, by Proposition \ref{prop}, the set SSC property is preserved in the Hausdorff product with compact spaces.) The following fact can be easily checked (we give the proof for the sake of completeness). Recall that a map is perfect if it is continuous, closed, onto and each fiber is compact. \begin{proposition}\rm\label{perfect} If $f:X\to Y$ is a perfect map and $A$ is an uncountable closed and discrete subspace of $X$, then $f(A)$ is an uncountable closed and discrete subspace of $Y$. \end{proposition} \begin{proof} Let $f$ and $A$ be as in the hypothesis. Clearly $f(A)$ is closed in $Y$. Note that, for every $y\in f(A)$, $f^{-1}(y)\cap A$ is a closed subset of the compact subspace $f^{-1}(y)$ and then, since $A$ is discrete, it is finite. Hence, if $f(A)$ were countable, $A$ would be countable too; therefore $f(A)$ is uncountable. Now, fix $y\in f(A)$ and say $f^{-1}(y)\cap A=\{x_1,...,x_n\}$. For every $i=1,...,n$ fix an open subset $U_i$ of $X$ such that $A\cap U_i=\{x_i\}$ and put $U=\bigcup_{i=1}^{n}U_i$. 
Since $A\setminus U$ is a closed subset of $X$, we have that $f(A\setminus U)=f(A)\setminus \{y\}$ is a closed subset of $Y$, and so $\{y\}$ is open in $f(A)$ with the topology inherited from $Y$. \end{proof} By the previous proposition, we obtain the following result. \begin{corollary}\label{preservationextent} \rm The product of a space having countable extent with a compact space has countable extent. \end{corollary} \begin{proof} Let $X$ be a space with countable extent and $Y$ be a compact space. The projection from $X\times Y$ onto $X$ is a perfect map. Then, by Proposition \ref{perfect}, $e(X\times Y)=\omega$. \end{proof} By Proposition \ref{Proposition 3.1}, the previous result can be restated as follows. \begin{proposition} \rm The $T_1$ product of a set SSL space with a compact space is set SSL. \end{proposition} \begin{corollary}\label{prodextent} \rm The product of a set SSM space with a compact space has countable extent. \end{corollary} Then, by Corollary \ref{set-SSM} we have \begin{corollary} \rm The $T_1$ product of cardinality less than $\frak d$ of a set SSM space with a compact space is set SSM. \end{corollary} Recall the following proposition. \begin{proposition}\label{prop34} \rm \cite[Proposition 3.4]{BMae} Let $X$ be a space. If there exist a closed and discrete subspace $D$ of $X$ having uncountable cardinality and a disjoint family ${\mathcal U}=\{O_a : a\in D\}$ of open neighbourhoods of points $a\in D$, then $X$ is not set SL. \end{proposition} Now we prove the following useful result. \begin{proposition}\label{corretta?} \rm If $e(X)>\omega$ and $c(Y)>\omega$, where $Y$ is $T_1$, then $X\times Y$ is not set SL. \end{proposition} \begin{proof} Let $S=\{s_\alpha : \alpha <\omega_1\}$ be a closed and discrete subset of $X$ and let ${\mathcal O}=\{O_\alpha : \alpha < \omega_1\}$ be a pairwise disjoint family of nonempty open subsets of $Y$. For every $\alpha < \omega_1$, fix $t_\alpha\in O_\alpha$. 
Put $A=\{(s_\alpha,t_\alpha): \alpha < \omega_1\}$. It is obvious that $A$ is an uncountable discrete subspace of $X\times Y$. Now we prove that $A$ is closed. For every $\alpha < \omega_1$ there exists an open set, say $N_\alpha$, such that $N_\alpha\cap S=\{s_\alpha\}$. Then $(X\times Y)\setminus A=((X\setminus S)\times Y) \cup \bigcup_{\alpha < \omega_1} (N_\alpha \times (Y\setminus \{t_\alpha\}))$. Moreover, $\{N_\alpha\times O_\alpha : \alpha<\omega_1\}$ is a disjoint family of open neighbourhoods of the points of $A$. Then, by Proposition \ref{prop34}, $X\times Y $ is not set SL. \end{proof} \begin{example}\rm \label{product} There exists a set SC (hence set SL and set SM) space $X$ and a compact space $Y$ with $c(Y)>\omega$ such that $X\times Y$ is not set SL (hence neither set SM nor set SC). \end{example} Consider the set $X=\omega_1 \cup A$, where $A=\{a_\alpha : \alpha\in \omega_1\}$ is a set of cardinality $\omega_1$, topologized as follows: $\omega_1$ has the usual order topology and is an open subspace of $X$; a basic neighborhood of a point $a_\alpha\in A$ takes the form $$O_\beta(a_\alpha)=\{a_\alpha\}\cup (\beta, \omega_1), \hbox{ where } \beta<\omega_1.$$ In \cite{BMae} it was proved that $X$ is set SC, hence $X$ is set SM. We have that $e(X)>\omega$. If $Y$ is any compact space with $c(Y)>\omega$, then, by Proposition \ref{corretta?}, $X\times Y$ is not set SL. $ \triangle$ \section{On set star Hurewicz and set strongly star Hurewicz properties.} Recall the following definitions. 
\begin{definition} \rm \cite{K, BCK}\label{star Hurewicz} A space $X$ is $\bullet$ star Hurewicz (briefly, SH) if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and, for every $x \in X$, $x \in st(\bigcup {\mathcal{V}}_{n},{\mathcal{U}}_{n})$ for all but finitely many $n \in \omega$; $\bullet$ strongly star Hurewicz (briefly, SSH) if for each sequence $({\cal U}_n : n \in \omega)$ of open covers of $X$ there exists a sequence $(F_n : n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $X$ and, for every $x \in X$, $x \in st(F_{n},{\mathcal{U}}_{n})$ for all but finitely many $n \in \omega$. \end{definition} The following result is a characterization of the SSH property in terms of a relative version of it. The proof is similar to the proof of Proposition \ref{caratterizzazione}. \begin{proposition}\rm The following are equivalent for a space $X$: \begin{enumerate} \item $X$ is SSH; \item for each nonempty subset $A$ of $X$ and for each sequence $({\cal U}_n : n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subset \bigcup{\mathcal U}_n$ for every $n\in\omega$, there exists a sequence $(F_n : n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $X$ and, for every $x \in A$, $x \in st(F_{n},{\mathcal{U}}_{n})$ for all but finitely many $n \in \omega$. 
\end{enumerate} \end{proposition} \begin{definition}\rm\cite{KKS-MS} A space $X$ is \begin{itemize} \item set star Hurewicz (briefly, set SH) if for each nonempty subset $A \subseteq X$ and for each sequence $( {\mathcal{U}}_{n}: n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subseteq \bigcup {\mathcal{U}}_{n}$ for every $n \in \omega$, there exists a sequence $({\cal V}_n : n \in \omega)$ such that ${\mathcal V}_n$, $n\in\omega$, is a finite subset of ${\mathcal U}_n$ and, for every $x \in A$, $x \in st(\bigcup {\mathcal{V}}_{n},{\mathcal{U}}_{n})$ for all but finitely many $n \in \omega$; \item set strongly star Hurewicz (briefly, set SSH) if for each nonempty subset $A \subseteq X$ and for each sequence $( {\mathcal{U}}_{n}: n \in \omega)$ of collections of open sets of $X$ such that $\overline A\subseteq \bigcup {\mathcal{U}}_{n}$ for every $n \in \omega$, there exists a sequence $( F_{n}: n \in \omega)$ such that $F_n$, $n\in\omega$, is a finite subset of $\overline{A}$ and, for every $x \in A$, $x \in st(F_{n},{\mathcal{U}}_{n})$ for all but finitely many $n \in \omega$. \end{itemize} \end{definition} \begin{example}\label{exampleSSH}\rm ($\frak b<\frak d$) There is a Tychonoff set SSM space which is not set SSH. \end{example} Consider an unbounded subset $X$ of the Baire space $\omega^\omega$ of cardinality $\frak b$. Then $X$ is not Hurewicz and, since $|X|=\frak b<\frak d$ and $e(X)=\omega$, by Corollary \ref{set-SSM} $X$ is set SSM. Since $X$ is a paracompact space and in the class of paracompact Hausdorff spaces we have that H $\Leftrightarrow $ SH (see \cite{BCK}), $X$ is not SH and hence not set SSH. $ \triangle$ Recall the following characterization of SSH spaces. \begin{theorem}\rm \cite{BMat}\label{BMat} The following properties are equivalent: \begin{itemize} \item[(i)] $\Psi({\cal A})$ is SSH; \item[(ii)] $|{\cal A}| < \mathfrak{b}$. \end{itemize} \end{theorem} Then, we can ``easily'' give the following result (the same example was given in \cite{Sing3} using a longer proof). 
\begin{example} \rm ($\omega_1<\frak b$) There exists a SSH space which is not set SSH. \end{example} Assume $\omega_1<\frak b$ and consider $\Psi({\cal A})$ with $|{\cal A}| = \omega_1$. Then, by Theorem \ref{BMat} and since $e(\Psi({\cal A}))>\omega$, we have that $\Psi({\cal A})$ is a SSH space which is not set SSH. $ \triangle$ \begin{question}\rm Does there exist a ZFC example of a SSH space which is not set SSH? \end{question} By Theorem \ref{Sakai3} we can give the following \begin{theorem}\label{Sakai4}\rm If $X$ is a regular set SH space, then every closed and discrete subspace of $X$ has cardinality less than $\frak c$. Hence, we have $e(X)\leq \frak c$. \end{theorem} In \cite[Example 2.4]{Sing3} a Hausdorff SH space which is not set SH is given. Now we can provide the following \begin{example}\label{EX} \rm A Tychonoff SC (hence SH) space which is not set SH. \end{example} Consider the space $X(\frak c)$ of Example \ref{EXAMPLE}. $X(\frak c)$ is SC (hence SH) and, by Theorem \ref{Sakai4}, it is not set SH. $ \triangle$ Recall the following \begin{proposition} \rm \cite[Corollary 3.10]{CASASDELAROSA2019572} Every SL (SSL) space of cardinality less than $\frak b$ is SH (SSH). \end{proposition} In analogy to Proposition \ref{setSL implys setSM} and Proposition \ref{frakd}, we can prove the following \begin{proposition}\label{setSL implys setSH} \rm Every set SL (set SSL) space of cardinality less than $\frak b$ is set SH (set SSH). \end{proposition} \begin{proof} Let $X$ be a set SL space of cardinality less than $\frak b$ (the proof is similar if $X$ is set SSL). Let $A\subseteq X$ and $({\cal U}_n\,\,:\,\, n\in\omega)$ be a sequence of families of open sets of $X$ such that $\overline{A}\subseteq \bigcup {\cal U}_n$ for every $n\in\omega$. For every $n\in \omega $ there is a countable subfamily ${\cal V}_n=\{V_{n,m}\,\,:\,\,m\in \omega\}$ of ${\cal U}_n$ such that $A\subseteq st(\bigcup {\cal V}_n, {\cal U}_n)$. 
For every $x\in A$ we choose a function $f_x\in\omega^\omega$ such that $st(x,{\cal U}_n)\cap V_{n, f_x(n)}\not= \emptyset$ for all $n\in\omega$. Since $|A|<\frak b$, the family $\{f_x\,\,:\,\,x\in A\}$ is bounded in $(\omega^\omega, \leq^*)$, so there exists $g\in \omega^\omega$ such that for every $x\in A$ we have that $f_x(n)\leq g(n)$ for all but finitely many $n\in\omega$. Let ${\cal W}_n=\{V_{n,j}\,\,:\,\,j\leq g(n)\}$. Then for every $x\in A$ we have that $x\in st(\bigcup {\cal W}_n, {\cal U}_n)$ for all but finitely many $n\in\omega$. \end{proof} Then, by Proposition \ref{Proposition 3.1}, we have \begin{corollary}\label{set-SSH} \rm For every $T_1$ space $X$ of cardinality less than $\frak b$, the following are equivalent: \begin{enumerate} \item $X$ is set SSH; \item $e(X)=\omega$. \end{enumerate} \end{corollary} By Corollary \ref{set-SSM} and Corollary \ref{set-SSH} we have the following \begin{corollary}\rm\label{SSHextent} For spaces $X$ such that $|X|<{\frak b}$, the following are equivalent: \begin{enumerate} \item $X$ is set SSM; \item $X$ is set SSH; \item $e(X)=\omega$. \end{enumerate} \end{corollary} In \cite{Sing3} the authors give a $T_1$ set SH space which is not set SSH. Now we provide the following \begin{example}\label{set SH, not set SSH} \rm($\omega_1<\frak b$) A Tychonoff set SH space which is not set SSH. \end{example} Assume $\omega_1<\frak b$ and consider $\Psi(\mathcal A)$ with $|{\mathcal A}|=\omega_1$. Since $\Psi(\mathcal A)$ is separable, it is set SL; hence, by Proposition \ref{setSL implys setSH}, it is set SH. Since $e(\Psi(\mathcal A))>\omega$, $\Psi(\mathcal A)$ is not set SSH. $ \triangle$ Using Example \ref{product} we can show that \begin{proposition} \rm The set SH property is not preserved in the product with compact spaces. \end{proposition} By Corollary \ref{preservationextent} we have that \begin{proposition} \rm The product of a set SSH space with a compact space has countable extent. 
\end{proposition} Then, by Corollary \ref{SSHextent} we obtain \begin{proposition} \rm The $T_1$ product of cardinality less than $\frak b$ of a set SSH space with a compact space is set SSH. \end{proposition} The following question is open. \begin{question} \rm Is the product of a set SSH space with a compact space a set SSH space? \end{question} We give the following useful diagram. \begin{picture}(150,130) \put(-50,110){{\sf Lindel\"of}} \put(-30,100){\vector(0,-1){15}} \put(-70,50){{\sf $$\boxed{ \begin{array}{c} \text{countable extent}\\ \Updownarrow \text{{\scriptsize $T_1$}}\\ \text{set SSL}\\ \end{array} }$$}} \put(-50,25){\vector(0,-1){80}} \put(-60,-65){{\sf SSL}} \put(0,25){\vector(0,-1){20}} \put(-10,-8){{\sf set SL}} \put(0,-10){\vector(0,-1){40}} \put(-5,-65){{\sf SL}} \put(-40,-60){\vector(1,0){30}} \put(-56,-5){\vector(1,0){40}} \put(-75,-8){{\sf ccc}} \put(70,-60){\vector(-1,0){55}} \put(75,-65){{\sf SM}} \put(80,-10){\vector(0,-1){40}} \put(65,-8){{\sf set SM}} \put(60,-5){\vector(-1,0){35}} \put(150,-95){\vector(-2,1){55}} \put(145,-108){\vector(-4,1){180}} \put(155,-110){{\sf SSM}} \put(165,45){\vector(0,-1){135}} \put(145,50){{\sf set SSM}} \put(147,45){\vector(-1,-1){45}} \put(140,53){\vector(-1,0){100}} \put(165,105){\vector(0,-1){40}} \put(160,110){{\sf M}} \put(155,115){\vector(-1,0){160}} \put(350,50){{\sf $$\boxed{ \begin{array}{c} \\ \text{set SSC}\\ \\ \\ \text{ CC $\Longleftrightarrow$ SSC }\\ \\ \end{array} }$$}} \put(370,40){\vector(1,2){13}} \put(400,65){\vector(1,-2){13}} \put(390,20){\sf {\scriptsize $T_2$}} \put(345,20){\vector(-1,-2){60}} \put(345,53){\vector(-1,0){50}} \put(395,3){\vector(0,-1){45}} \put(380,-55){{\sf set SC}} \put(403,-42){\vector(0,1){45}} \put(405,-22){\sf {\scriptsize regular}} \put(375,-48){\vector(-4,1){165}} \put(380,-105){\vector(-4,1){180}} \put(395,-60){\vector(0,-1){35}} \put(390,-110){{\sf SC}} \put(175,-8){{\sf set SH}} \put(172,-5){\vector(-1,0){70}} \put(190,-10){\vector(0,-1){40}} 
\put(180,-65){{\sf SH}} \put(175,-60){\vector(-1,0){80}} \put(263,-110){{\sf SSH}} \put(260,-97){\vector(-2,1){55}} \put(260,-105){\vector(-1,0){74}} \put(266,110){{\sf H}} \put(264,115){\vector(-1,0){90}} \put(250,50){{\sf set SSH}} \put(255,45){\vector(-1,-1){45}} \put(248,53){\vector(-1,0){50}} \put(270,105){\vector(0,-1){40}} \put(270,45){\vector(0,-1){140}} \end{picture} {\bf Acknowledgements.} The authors express gratitude to Masami Sakai for useful suggestions. \end{document}
\begin{document} \title{Quantum Error Correction with the Gottesman-Kitaev-Preskill Code: A Perspective} \author{Arne L. Grimsmo} \affiliation{ARC Centre of Excellence for Engineered Quantum Systems, School of Physics, The University of Sydney, Sydney, NSW 2006, Australia.} \author{Shruti Puri} \affiliation{Department of Applied Physics, Yale University, New Haven, CT 06511, USA} \date{\today} \begin{abstract} The Gottesman-Kitaev-Preskill (GKP) code was proposed in 2001 by Daniel Gottesman, Alexei Kitaev, and John Preskill as a way to encode a qubit in an oscillator. The GKP codewords are coherent superpositions of periodically displaced squeezed vacuum states. Because of the challenge of merely preparing the codewords, the GKP code was for a long time considered to be impractical. However, the remarkable developments in quantum hardware and control technology in the last two decades have made the GKP code a frontrunner in the race to build practical, fault-tolerant bosonic quantum technology. In this Perspective, we provide an overview of the GKP code with emphasis on its implementation in the circuit-QED architecture and present our outlook on the challenges and opportunities for scaling it up for hardware-efficient, fault-tolerant quantum error correction. \end{abstract} \maketitle \section{\label{sec:intro}Introduction} In 2001, Gottesman, Kitaev, and Preskill published a proposal to encode discrete quantum information in a continuous variable quantum system, or in other words, ``a qubit in an oscillator''~\cite{Gottesman01}. The encoding is designed such that it is possible to correct small shifts in the position and momentum quadratures of the oscillator. This remarkable idea can safely be said to have been ahead of its time: It took almost twenty years before the first experimental realization of their proposal was made by the Home group using an oscillating trapped ion~\cite{Fluhmann:2019aa}. 
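To make the discussion concrete, it is worth recalling the standard form of the ideal (infinite-energy) square-lattice GKP code from the original proposal~\cite{Gottesman01}; the display below summarizes that textbook material and introduces no new results:

```latex
% Ideal square-lattice GKP codewords: uniform combs of position eigenstates
% on the even and odd sqrt(pi) grids, respectively.
\begin{align}
  |\bar{0}\rangle \propto \sum_{s\in\mathbb{Z}} |\hat{q} = 2s\sqrt{\pi}\rangle,
  \qquad
  |\bar{1}\rangle \propto \sum_{s\in\mathbb{Z}} |\hat{q} = (2s+1)\sqrt{\pi}\rangle,
\end{align}
% These are the joint +1 eigenstates of the two commuting displacement
% stabilizers
\begin{align}
  \hat{S}_q = e^{2i\sqrt{\pi}\,\hat{q}}, \qquad
  \hat{S}_p = e^{-2i\sqrt{\pi}\,\hat{p}}.
\end{align}
```

Measuring both quadratures modulo $\sqrt{\pi}$ via these stabilizers reveals any shift of magnitude less than $\sqrt{\pi}/2$ in $\hat q$ or $\hat p$, which can then be undone by a counter-displacement; this is the sense in which the code "corrects small shifts" in the quadratures.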
It was almost immediately followed by the experimental realization in the Devoret group, using a microwave cavity and a circuit quantum electrodynamics (cQED) approach~\cite{Campagne2020}. More broadly, these two experiments are part of a flourishing effort to demonstrate robust encoding of quantum information using bosonic degrees of freedom, with the ultimate long-term goal of building a fault-tolerant quantum computer~\cite{Ofek16,Hu:2019aa,heeres2017implementing,xu2020demonstration,Ma2020,gertler2021protecting,de2020error}. The idea of encoding information in a continuous variable quantum system is in many ways very natural. Quantum harmonic oscillators abound in nature, and well-defined bosonic modes can be isolated from environmental noise in many quantum technology platforms. Early proposals for bosonic error-correcting codes were made as early as the late 1990s~\cite{ChuaLeunYama97,CochMilbMunr99}. The core idea behind these proposals is to encode $k$ logical qubits into $n$ bosonic modes, and attempt to exploit the large Hilbert space of each bosonic mode to achieve an efficient encoding with good error-correcting properties for a small $n$. Successful bosonic codes are referred to as \emph{hardware efficient}. Remarkably, interesting bosonic codes, and the Gottesman-Kitaev-Preskill (GKP) code in particular, exist even for $k=n=1$. The primary requirement for implementing bosonic codes, and also why it took two decades of technological developments before the GKP states were realized, is control of a high-quality harmonic oscillator mode with a sufficiently strong and high-quality ancillary non-linearity. This nonlinearity can be a discrete two-level system. In the case of trapped ions, a bosonic state is encoded in the harmonic motion of a single trapped ion by exploiting the strong coupling with the ancillary atomic pseudospin states. 
In the case of cQED, the ancillary levels of a transmon have been used to realize a bosonic encoding in the microwave fields of a superconducting cavity or resonator. Harmonic modes with high quality factors, combined with easy access to strong non-linearities with minimal dissipation, lead to unprecedented coherent control over the oscillator Hilbert space in these platforms~\cite{Ofek16,Hu:2019aa,heeres2017implementing,xu2020demonstration,Fluhmann:2018aa,Fluhmann:2019aa,Campagne2020,de2020error}. Optical systems are also actively researched in the context of GKP codes~\cite{takeda2019toward,Walshe2020,bourassa2021blueprint,larsen2021fault}. Some proposals are based on using optical nonlinearities or interaction between atoms and light to generate photonic GKP states~\cite{pirandola2004constructing,MoteBaraGilc17}. Several other proposals rely on photon number resolving detectors as the ancillary nonlinear resource~\cite{Eaton:2019aa,Tzitrin2020}. However, because of photon loss, weak optical nonlinearities, and the need for complex multiplexing with high efficiency number resolving detectors, generation of the highly non-Gaussian GKP states has not yet been demonstrated in the optical domain. Given that preparation of GKP encoded states has now been demonstrated in the lab---and it is not a big leap to imagine that gates between two encoded GKP qubits are right around the corner---it is natural to ask whether GKP encodings can become a competitive approach to large-scale, fault-tolerant quantum computing. While the GKP code can correct for small quadrature shifts in the oscillator, realistic noise in an experimental platform is more complex and can introduce uncorrectable errors. Therefore, in practice the suppression in the logical error rate with a single-mode GKP code will be limited.
A natural approach to ``scale up'' is to reduce errors as much as possible in the single-mode encoding and then concatenate a number of encoded GKP qubits to a second error correcting code, for example a surface code, for a total of one logical qubit across $n$ physical modes~\cite{Vuillot2018,Noh:2019aa,Terhal2020,noh2021low}. If this approach leads to a substantially lower logical error rate than using a comparable number of unencoded physical qubits, one may achieve a better encoding with a similar hardware cost. A first milestone towards this goal of resource-efficient fault-tolerance would be to demonstrate basic operations on the encoded GKP states, used to compose error-correction circuits, with fidelities that are comparable to or better than those of the best physical qubits to date. These operations include state preparation, entangling gates between two GKP-encoded modes, and measurement. This is a challenging goal considering the high fidelity qubit operations in both trapped ions and superconducting qubits today. A fundamental obstacle to achieving high-fidelity operations on encoded GKP states is that practical constructions of the required interactions can ruin the protection offered by the bosonic encoding. For example, if a two-level system is used to control the oscillator mode, a single error on the two-level system may propagate to a logical error on the mode~\cite{Fluhmann:2019aa,Campagne2020,de2020error}. This can prohibit the fidelity of encoded operations on the GKP states from being significantly better than those of the unencoded two-level ancilla. How to best achieve fault-tolerance against such ancilla errors in a bosonic code architecture is an important open question, and we will touch on some of the possibilities that have been put forth towards this goal. In this Perspective, we will discuss the prospect of scalable, fault-tolerant quantum computing with GKP codes with special emphasis on its implementation in a cQED architecture.
While there are several excellent review articles on GKP and other bosonic codes~\cite{cai2021bosonic,Terhal2020,ma2021quantum,joshi2021quantum}, here we provide an application-level perspective highlighting the outstanding practical challenges. We focus on cQED partly because the two authors are working in this field, but also because we believe the flexibility and scalability of superconducting circuits make this a particularly promising platform for the long-term goal of constructing a large scale quantum computer based on bosonic encodings. With this in mind we begin with an overview of the GKP code in~\cref{sec:intro_gkp}, go on to discuss state preparation and error correction in~\cref{sec:stateprep}, and address the question of fault-tolerant, scalable quantum computing with GKP codes in~\cref{sec:scaling}. Throughout this article, we emphasize not only the advances made towards GKP error correction but also the challenges that must be overcome to make practical fault-tolerance with GKP codes possible. These challenges and opportunities for future research are summarized in~\cref{sec:summary}. \section{Introduction to Gottesman-Kitaev-Preskill codes \label{sec:intro_gkp}} \subsection{\label{sec:gkpdefs}Basic definitions} In general, GKP codes encode a $d$-dimensional logical subspace in $n$ bosonic modes~\cite{Gottesman01}. We will here focus exclusively on the simplest nontrivial case $d=2$ and $n=1$, i.e., a single logical qubit encoded in a single bosonic mode. To define a GKP code, it is first convenient to introduce the displacement operators $\hat D(\alpha) = e^{\alpha \hat a^\dagger - \alpha^* \hat a}$, where $[\hat a,\hat a^\dagger] = 1$ are the usual ladder operators of a harmonic oscillator and $\alpha$ is a complex number.
The displacement operators satisfy the property \begin{equation}\label{eq:dispcomm} \begin{aligned} \hat D(\beta) \hat D(\alpha) &= e^{(\beta \alpha^* - \beta^* \alpha )/2} \hat D(\alpha + \beta)\\ &= e^{\beta \alpha^* - \beta^* \alpha} \hat D(\alpha) \hat D(\beta). \end{aligned} \end{equation} In other words, displacements commute ``up to a phase.'' In particular, if \begin{equation}\label{eq:unitarea} \beta \alpha^* - \beta^* \alpha = i\pi, \end{equation} the two operators anti-commute, while if $\beta \alpha^* - \beta^* \alpha = 2i\pi$ they commute. To define a GKP code, we first choose logical Pauli operators $\bar X = \hat D(\alpha)$ and $\bar Z = \hat D(\beta)$, where $\alpha$ and $\beta$ are any two complex numbers that satisfy~\cref{eq:unitarea}. This ensures that $\bar X\bar Z = - \bar Z\bar X$. To ensure that $\bar X$, $\bar Z$, and $\bar Y = i\bar X\bar Z = \hat D(\alpha + \beta)$ behave like the usual two-by-two Pauli matrices, they should also square to the identity on any state in the code subspace (codespace). We therefore \emph{define} the GKP logical codespace to be the simultaneous $+1$ eigenspace of the two operators \begin{equation}\label{eq:gkp_stab_and_pauli} \hat S_X = \bar X^2 = \hat D(2\alpha), \quad \hat S_Z = \bar Z^2 = \hat D(2\beta). \end{equation} It follows from~\cref{eq:dispcomm} that these two operators commute with each other and with the logical Paulis. The set $\{\hat S_X^k \hat S_Z^l\}$ for $k,l \in \mathbb{Z}$ forms the stabilizer group of the GKP code. We can write the GKP codewords explicitly in terms of sums of quadrature eigenstates. To this end, first define two generalized quadratures $\hat Q = i(\beta^* \hat a - \beta \hat a^\dagger)/\sqrt{\pi}$, $\hat P = -i(\alpha^* \hat a - \alpha \hat a^\dagger)/\sqrt{\pi}$, such that $[\hat Q, \hat P]=i$ and \begin{equation}\label{eq:paulis} \bar X = e^{-i\sqrt{\pi}\hat P},\, \bar Z = e^{i\sqrt{\pi}\hat Q},\, \bar Y = e^{i\sqrt{\pi}(\hat Q - \hat P)}.
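The algebra of~\cref{eq:dispcomm,eq:unitarea} is easy to verify numerically in a truncated Fock space. The following sketch is our own (the truncation $N$ and the particular choice $\alpha=\sqrt{\pi/2}$, $\beta=i\sqrt{\pi/2}$ are illustrative assumptions); it checks the commutation phase and the resulting anticommutation of $\bar X = \hat D(\alpha)$ and $\bar Z = \hat D(\beta)$:

```python
import numpy as np
from scipy.linalg import expm

N = 80  # Fock-space truncation (our choice; ample for these amplitudes)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
ad = a.conj().T

def D(z):
    """Displacement operator D(z) = exp(z a^dag - z* a)."""
    return expm(z * ad - np.conjugate(z) * a)

# one convenient pair satisfying beta*conj(alpha) - conj(beta)*alpha = i*pi
alpha = np.sqrt(np.pi / 2)
beta = 1j * np.sqrt(np.pi / 2)

phase = np.exp(beta * np.conjugate(alpha) - np.conjugate(beta) * alpha)  # -1

# D(beta) D(alpha) = phase * D(alpha) D(beta); compare their action on the
# vacuum, where Fock-truncation errors are negligible
vac = np.zeros(N); vac[0] = 1.0
lhs = D(beta) @ D(alpha) @ vac
rhs = phase * D(alpha) @ D(beta) @ vac
```

Since the phase comes out to $-1$, the two displacements anticommute, exactly as required of logical $\bar X$ and $\bar Z$.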
\end{equation} It is straightforward to check that \begin{subequations}\label{eq:gkpcodewords2} \begin{align} \ket{0_L} &= \sum_{j=-\infty}^\infty \ket{2j \sqrt{\pi}}_{\hat Q},\\ \ket{1_L} &= \sum_{j=-\infty}^\infty \ket{(2j+1) \sqrt{\pi}}_{\hat Q}, \end{align} \end{subequations} are $\pm 1$ eigenstates of $\bar Z$, respectively, and $+1$ eigenstates of $\hat S_X$ and $\hat S_Z$. Here we use a notation where $\ket{x}_{\hat O}$ is an eigenstate of $\hat O$ with eigenvalue $x$. We have analogous expressions in the dual basis: $\ket{+_L} = \sum_j \ket{2j\sqrt{\pi}}_{\hat P}$, $\ket{-_L} = \sum_j \ket{(2j+1)\sqrt{\pi}}_{\hat P}$. An alternative expression for the codewords can be found by noting that the state $\ket{0_L} \propto \sum_{k,l=-\infty}^\infty \hat S_X^k \bar Z^{l} \ket 0$ is a simultaneous $+1$ eigenstate of the two stabilizer generators $\hat S_X, \hat S_Z$ and logical $\bar Z$ (in fact, the vacuum state $\ket 0$ can be replaced by an arbitrary state $\ket\psi$ with non-zero overlap with $\ket{0_L}$ in this expression). With the help of~\cref{eq:dispcomm}, it follows that the logical states can be written \begin{subequations}\label{eq:gkpcodewords} \begin{align} \ket{0_L} &\propto \sum_{k,l=-\infty}^\infty e^{-i\pi kl} \ket{2k\alpha + l \beta},\\ \ket{1_L} &\propto \sum_{k,l=-\infty}^\infty e^{-i\pi (kl + l/2)} \ket{(2k+1)\alpha + l\beta}, \end{align} \end{subequations} where the kets on the right hand side are coherent states $\ket\zeta = \hat D(\zeta)\ket 0$. Analogous expressions can be found for the $\pm 1$ eigenstates of $\bar X$ following the same approach. Since any pair $\alpha,\beta$ satisfying~\cref{eq:unitarea} is a valid choice, there are infinitely many different GKP codes.
The three most common choices are square, rectangular, and hexagonal codes, defined respectively by \begin{subequations} \begin{align} \text{square: } &\alpha = \sqrt{\frac{\pi}{2}}, \quad \,\,\,\,\beta = i\sqrt{\frac{\pi}{2}},\\ \text{rect: } &\alpha = {\lambda}\sqrt{\frac{\pi}{2}}, \quad \,\,\beta = \frac{i}{{\lambda}}\sqrt{\frac{\pi}{2}}, \quad {\lambda} > 0,\label{eq:lattice_def} \\ \text{hex: } &\alpha = \sqrt{\frac{\pi}{\sqrt 3}}, \quad \beta = e^{2i\pi/3}\sqrt{\frac{\pi}{\sqrt 3}}. \end{align} \end{subequations} Note that for the square code, the generalized quadratures introduced above are just the usual position and momentum quadratures $\hat Q_\square = \hat q = (\hat a + \hat a^\dagger)/\sqrt 2$, $\hat P_\square = \hat p = -i(\hat a - \hat a^\dagger)/\sqrt 2$. The square and hexagonal GKP lattices are illustrated in~\cref{fig:gkplattices}. \begin{figure} \caption{Square (a) and hexagonal (b) GKP lattices (adapted from Ref.~\cite{Grimsmo2020}).\label{fig:gkplattices}} \end{figure} \subsection{\label{sec:approxgkp}Approximate GKP codewords} The GKP codespace is a rather abstract construction. The codewords in~\cref{eq:gkpcodewords2} are non-normalizable, and in general there is no physical process that can prepare a state lying entirely in the GKP codespace. In practice we have to make do with some type of approximation to~\cref{eq:gkpcodewords2,eq:gkpcodewords}. Colloquially, we refer to any pair of normalized states $\ket{\tilde \mu_L}$, $\mu=0,1$ that satisfy $\hat S_P \ket{\tilde \mu_L} \to \ket{\tilde \mu_L}$ and $\bar Z \ket{\tilde \mu_L} \to (-1)^\mu \ket{\tilde \mu_L}$ for $P=X,Z$, in some meaningful limit, as an approximate GKP code. One natural way to define such an approximate code is~\cite{Menicucci14} \begin{equation}\label{eq:gkpcodewords_approx1} \ket{\tilde \mu_L} \propto e^{-\Delta^2 \hat a^\dagger \hat a} \ket{\mu_L}, \end{equation} where we ignore normalization constants, for simplicity. The ideal limit corresponds to $\Delta \to 0$.
\Cref{eq:gkpcodewords_approx1} introduces a Gaussian envelope over the infinite sums in~\cref{eq:gkpcodewords}, such that each coherent state $\ket\zeta$ is replaced by $e^{-\frac12\left(1-e^{-2\Delta^2}\right)|\zeta|^2}\ket{e^{-\Delta^2} \zeta} \simeq e^{-\Delta^2|\zeta|^2}\ket{e^{-\Delta^2} \zeta}$, which ensures the normalizability of the codewords. The $\ket{\tilde 0_L}$ codewords for square and hexagonal GKP codes with $\Delta=0.3$ are shown in~\cref{fig:gkplattices}. It is also possible to view the code defined in~\cref{eq:gkpcodewords_approx1} as resulting from a modification of the stabilizers defining the codespace. More precisely, the codewords in~\cref{eq:gkpcodewords_approx1} are exact $+1$ eigenstates of the two commuting nonunitary operators \begin{equation} S_{X,Z}^\Delta = e^{-\Delta^2 \hat a^\dagger \hat a} S_{X,Z} e^{\Delta^2 \hat a^\dagger \hat a}, \end{equation} with logical operators defined analogously, $\bar P^\Delta = e^{-\Delta^2 \hat a^\dagger \hat a} \bar P e^{\Delta^2 \hat a^\dagger \hat a}$ for $\bar P = \bar X, \bar Z$~\cite{Royer2020}. Another common approximate form is found by applying weighted displacements to the ideal codewords \begin{equation}\label{eq:gkpcodewords_approx2} \begin{aligned} \ket{\tilde \mu_L} &= \int_{-\infty}^\infty du dv\, \eta_\Delta\left(u,v\right) e^{\frac{iuv}{2}} e^{-iu\hat P} e^{iv \hat Q} \ket{\mu_L}, \end{aligned} \end{equation} where $\eta_\Delta(u,v)$ is concentrated around zero as $\Delta \to 0$. \Cref{eq:gkpcodewords_approx1} is recovered with $\eta_\Delta(x,y) = e^{-(x^2+y^2)/(4\tanh(\Delta^2/2))} / [\pi(1-e^{-\Delta^2})] \simeq e^{-(x^2+y^2)/(2 \Delta^2)} / (\pi \Delta^2)$~\cite{Royer2020}.
We can use~\cref{eq:gkpcodewords2} in~\cref{eq:gkpcodewords_approx2} and perform the integral over $v$ to find yet another approximation~\cite{Gottesman01} \begin{equation}\label{eq:gkpcodewords_approx3} \begin{aligned} \ket{\tilde \mu_L} &\simeq \frac{1}{\sqrt{N_\mu}} \sum_{j=-\infty}^\infty e^{-\Delta^2 \pi (2j+\mu)^2 / 2} \\ &{} \times \int_{-\infty}^\infty du e^{-\frac{u^2}{2\Delta^2}} \ket{(2j+\mu)\sqrt{\pi}+u}_{\hat Q}, \end{aligned} \end{equation} where $N_\mu = \sqrt{\pi}/2 + \mathcal O\left(e^{-\pi/\Delta^2}\right)$ as $\Delta \to 0$. This form has the physical interpretation of a comb of squeezed states with an overall Gaussian envelope. It is convenient to introduce a metric to quantify how close an arbitrary state $\hat\rho$ is to an ideal GKP state. To this end, we introduce a modular ``squeezing'' parameter for each of the two stabilizers $\hat S_{X,Z}$~\cite{Duivenvoorden2017} \begin{subequations} \begin{align} \Delta_{X} ={}& \frac{1}{2|\alpha|}\sqrt{-\log(|\text{tr} [\hat S_{X} \hat \rho]|^2)},\\ \Delta_{Z} ={}& \frac{1}{{2|\beta|}}\sqrt{-\log(|\text{tr} [\hat S_{Z} \hat \rho]|^2)}. \end{align} \end{subequations} The squeezing parameters satisfy $\Delta_{X,Z} \ge 0$, and are zero if and only if the state is an eigenstate of the corresponding stabilizer. For the approximate GKP codewords introduced above, we have $\Delta_{X,Z} \approx \Delta$ for small $\Delta$. It is also conventional to measure the modular squeezing in dB \begin{equation}\label{eq:squeezingdB} \mathcal S_{X,Z} = - 10 \log_{10}(\Delta_{X,Z}^2). \end{equation} For example, the square (hexagonal) approximate codeword shown in~\cref{fig:gkplattices} has $\mathcal S_X = \mathcal S_Z = 10.1$ dB ($9.48$ dB), corresponding to an average photon number of approximately $\braket{\hat n} = 4.6$. We will from now on drop the subscript $X,Z$ and simply write $\Delta$ and $\mathcal S$ when the two quadratures are approximately equally squeezed and the distinction is unimportant.
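The envelope construction and the effective-squeezing metric can be illustrated with a short numerical sketch of our own (the Fock truncation, the lattice summation range, and $\Delta = 0.3$ are illustrative choices): build $\ket{\tilde 0_L}$ for the square code by damping the coherent-state sum of~\cref{eq:gkpcodewords}, then evaluate $\Delta_{X,Z}$ from the stabilizer expectation values:

```python
import numpy as np
from scipy.linalg import expm

N = 180                       # Fock truncation (our choice)
Delta = 0.3
alpha = np.sqrt(np.pi / 2)    # square code
beta = 1j * np.sqrt(np.pi / 2)

n = np.arange(N)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

def coherent(zeta):
    """Coherent state |zeta> in the Fock basis, via a stable recurrence."""
    c = np.zeros(N, dtype=complex)
    c[0] = np.exp(-abs(zeta) ** 2 / 2)
    for m in range(1, N):
        c[m] = c[m - 1] * zeta / np.sqrt(m)
    return c

# ideal |0_L> as a lattice sum of coherent states, then the Gaussian envelope
psi = np.zeros(N, dtype=complex)
for k in range(-4, 5):
    for l in range(-4, 5):
        psi += np.exp(-1j * np.pi * k * l) * coherent(2 * k * alpha + l * beta)
psi = np.exp(-Delta ** 2 * n) * psi
psi /= np.linalg.norm(psi)

def disp(z):
    return expm(z * ad - np.conjugate(z) * a)

def eff_delta(stab_shift, half_length):
    s = psi.conj() @ disp(stab_shift) @ psi   # stabilizer expectation value
    return np.sqrt(-np.log(abs(s) ** 2)) / (2 * half_length)

Delta_X = eff_delta(2 * alpha, abs(alpha))
Delta_Z = eff_delta(2 * beta, abs(beta))
S_Z_dB = -10 * np.log10(Delta_Z ** 2)
```

For $\Delta=0.3$ both effective parameters come out close to $\Delta$ itself, i.e.\ a modular squeezing of roughly 10 dB.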
\subsection{\label{sec:qec_criteria}Error correcting properties} The purpose of encoding a logical qubit in a GKP code is that it provides protection from noise through error correction. To see this, consider first an error model consisting of small displacements applied to the oscillator. For an arbitrary displacement $\hat D(\zeta)$ we can write $\zeta = (u \alpha + v\beta)/\sqrt \pi$, where $u,v$ are real, and consequently \begin{equation}\label{eq:uvdisplacement} \hat D(\zeta) = e^{iuv / 2} e^{-iu\hat P} e^{iv \hat Q}. \end{equation} From~\cref{eq:gkpcodewords2} it is clear that the quantum error correction criteria~\cite{Nielsen10} are formally satisfied for the ideal GKP code for the set of displacement errors $\hat D(\zeta)$ such that $|u|, |v| < \sqrt \pi / 2$. This also gives us some insight into why the approximate GKP codewords introduced in the previous section are ``good'' approximations. As long as $\eta_\Delta(u,v)$ in~\cref{eq:gkpcodewords_approx2} is sufficiently localized around zero, the ``error'' introduced in the approximate codewords is small, and as long as we ensure that all the logical operations used in our quantum computation do not amplify these errors too badly, i.e., they are ``fault-tolerant,'' then we can expect to perform quantum computation with these approximate GKP codewords with high accuracy (see~\cref{sec:errorspread} for a more precise discussion around what we mean by not ``too badly''). For physical GKP codes and realistic error models, we expect that the error correction criteria are at best only approximately satisfied. Realistic error models for oscillators typically include loss, heating, dephasing, unitary errors due to imperfect implementation of control Hamiltonians, etc. 
Since the displacement operators form an operator basis, any single-mode noise channel can be expanded in terms of displacements \begin{equation} \mathcal E(\hat \rho) = \int d^2 \zeta d^2\zeta' f(\zeta,\zeta') \hat D(\zeta) \hat \rho \hat D^\dagger(\zeta'). \label{eq:channel} \end{equation} Again, as long as $f(\zeta,\zeta')$ is sufficiently concentrated around zero, it is in principle possible to remove the noise with high fidelity. Realistic error models, however, typically have some finite support on displacements larger than $\sqrt\pi/2$, which means that the error cannot be corrected perfectly, even in the limit $\Delta\to 0$. In~\cref{fig:qecfidelity} we illustrate how the quantum error correction properties of the GKP code manifest for a practically relevant noise channel consisting of simultaneous loss and dephasing. More precisely, the noise model is given by the solution to a Lindblad master equation \begin{equation} \dot{\hat \rho} = \kappa \mathcal D[\hat a]\hat \rho + \kappa_\phi \mathcal D[\hat n]\hat\rho, \end{equation} with $\mathcal D[\hat A]\hat\rho = \hat A \hat\rho \hat A^\dagger - \frac12 \hat A^\dagger \hat A \hat \rho - \frac12 \hat\rho \hat A^\dagger \hat A$, integrated up to a fixed time $t$. The noise strength in this model is thus characterized by two dimensionless numbers, $\kappa t$ and $\kappa_\phi t$, describing pure loss and dephasing, respectively. The noise is then followed by the optimal recovery channel that maximizes the average gate fidelity~\cite{Nielsen02} with the identity channel. This optimal error correction map can be found numerically~\cite{Albert17}, but does not represent a practical error correction procedure---it merely puts an upper bound on the fidelity that can be achieved and illustrates the \emph{intrinsic} error correction properties of the code.
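This loss-plus-dephasing channel can be integrated directly by exponentiating the Lindbladian in a truncated Fock space. The sketch below is our own (the truncation, the noise strengths, and the initial coherent state are illustrative assumptions) and uses the standard column-stacking vectorization:

```python
import numpy as np
from scipy.linalg import expm

N = 15
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
nhat = a.conj().T @ a
I = np.eye(N)

def dissipator(A):
    """Superoperator for D[A]; column stacking: vec(X rho Y) = (Y^T kron X) vec(rho)."""
    AdA = A.conj().T @ A
    return (np.kron(A.conj(), A)
            - 0.5 * np.kron(I, AdA)
            - 0.5 * np.kron(AdA.T, I))

kappa_t, kappa_phi_t = 0.05, 0.02        # dimensionless noise strengths (ours)
L = kappa_t * dissipator(a) + kappa_phi_t * dissipator(nhat)

# initial state: a small coherent state (a stand-in for a GKP codeword)
alpha0 = 1.0
vac = np.zeros(N, dtype=complex); vac[0] = 1.0
psi0 = expm(alpha0 * a.conj().T - alpha0 * a) @ vac
rho0 = np.outer(psi0, psi0.conj())

# integrate the master equation by exponentiating the Lindbladian
rho_t = (expm(L) @ rho0.reshape(-1, order="F")).reshape(N, N, order="F")
nbar = np.trace(nhat @ rho_t).real       # decays as e^{-kappa t} under loss
```

The evolution is trace preserving and Hermiticity preserving, and the mean photon number decays by the expected factor $e^{-\kappa t}$; dephasing only degrades the coherences.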
The results in~\cref{fig:qecfidelity} show that the GKP code has an excellent potential to correct loss errors~\cite{Albert17}, but is rather poor against dephasing (there are other bosonic codes that perform far better against this latter type of noise~\cite{Grimsmo2020}). The sensitivity to dephasing is not surprising, as a rotation of phase space by a small angle gives a large displacement for large amplitudes. In practice, dephasing might arise not only due to the intrinsic frequency fluctuations of the oscillator used to encode the GKP code (which can be made very small), but also due to off-resonant coupling to ancillary quantum systems used to control the oscillator~\cite{Ofek16,Hu:2019aa,Campagne2020}. It is therefore crucial to minimize such residual couplings in practical implementations of GKP codes. A similar issue is likely to arise if there are over-rotations and/or unwanted residual Hamiltonian terms due to miscalibrated unitary gates, and very precise quantum control is therefore important for GKP codes. \begin{figure} \caption{Average gate fidelity for an approximate square GKP code as a function of photon number $n_\text{code}$.\label{fig:qecfidelity}} \end{figure} \subsection{\label{sec:logicalops}Logical operations on GKP codes} One of the attractive properties of GKP codes is that, apart from state preparation, all logical Clifford operations can be performed using only Gaussian operations, that is, interactions that are at most quadratic in creation and annihilation operators, and homodyne measurements on the oscillator. In this section we describe how to perform logical Pauli measurements and unitary Clifford gates, leaving the more difficult topic of state preparation and error correction to~\cref{sec:stateprep}.
\subsubsection{\label{sec:measurements}Pauli quadrature measurements} Destructive logical measurements in any Pauli basis ($\mathcal M_{X,Y,Z}$) can be performed by measuring one of three respective quadratures \begin{subequations} \begin{align} &\mathcal M_X: \text{ measure } {-}\hat P,\\ &\mathcal M_Y: \text{ measure } \hat Q - \hat P,\\ &\mathcal M_Z: \text{ measure } \hat Q, \end{align} \end{subequations} and rounding the outcome to the nearest multiple of $\sqrt \pi$. If the result is an even multiple, report a $+1$ outcome, and if the result is an odd multiple, report $-1$. That this gives a logical Pauli measurement follows from~\cref{eq:paulis}. An attractive feature of this measurement scheme is that it is robust to small displacement errors---precisely the type of errors the GKP code is meant to be robust against---and can in this sense be said to be fault-tolerant. The procedure is illustrated for an $\mathcal M_Z$ measurement on an approximate square GKP code in~\cref{fig:Zmeasurement}~(a). \begin{figure} \caption{(a) Position wavefunctions for logical zero (blue) and one (red) for a square GKP code with $\Delta=0.3$. A logical $\mathcal M_Z$ measurement can be executed by first measuring position (homodyne measurement) and binning the result. The white (gray) regions correspond to a logical zero (one) outcome. (b) Probability of mistaking logical zero for one under a homodyne measurement with measurement efficiency $\eta$ for $\Delta=0.3$ ($\mathcal S = 10.1$ dB, blue solid) and $\Delta=0.2$ ($\mathcal S = 13.8$ dB, orange solid). Note that the bin boundaries are scaled by $\sqrt\eta$ to account for measurement inefficiency. For comparison, the dotted lines show the corresponding measurement error with one round of noiseless phase estimation using the scheme in~\cref{fig:phaseest_measure}.\label{fig:Zmeasurement}} \end{figure} Quadrature measurements are routinely performed in both the microwave and optical domain.
Performing such measurements on an encoded GKP qubit is, however, not as straightforward. {In the context of cQED and other approaches where the GKP state is encoded in a localized high-quality mode, such as the standing modes of a cavity,} the ability to rapidly perform quadrature measurements contradicts the requirement that the oscillator mode be long-lived. It is therefore necessary to either tune the oscillator decay rate $\kappa$ from a small to a large value prior to measurement, or to map the encoded information from a high-$Q$ to a low-$Q$ mode (with $Q\sim 1/\kappa$ the quality factor)~\cite{pfaff2017controlled}. The situation is further complicated by the fact that the measurement efficiency for homodyne detection is limited in practice. The ability to distinguish the codewords deteriorates rapidly with decreasing measurement efficiency, as shown in~\cref{fig:Zmeasurement}~(b). A measurement efficiency $\eta$ below unity means that the GKP state shrinks towards vacuum, and it is important to compensate for this (assuming that $\eta$ itself is known) by rescaling the measurement bins [white and gray in~\cref{fig:Zmeasurement}~(a)] appropriately. More precisely, we {now round to the nearest integer multiple of $\sqrt{\eta\pi}$ }~\cite{Shawinprep}. To produce the numerical results in~\cref{fig:Zmeasurement}~(b) we take an approximate $\ket{\tilde 0_L}$ state, apply a pure loss channel with $\eta = e^{-\kappa t}$, perform an ideal measurement of the position quadrature, and bin the result. In the microwave domain, state-of-the-art measurement efficiencies are well below~$90\%$, even with the use of near quantum-limited amplifiers~\cite{Macklin2015,Touzard2019}. The results in~\cref{fig:Zmeasurement}~(b) show that measurement efficiencies will have to be improved for this approach to be promising for distinguishing GKP codewords with high fidelity.
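The effect of inefficiency on the binning rule can be captured by a simple Gaussian-noise caricature of the loss channel. This is a sketch of our own, not the exact channel used for the figure: the comb teeth are shrunk by $\sqrt\eta$, a fraction $(1-\eta)/2$ of vacuum noise is added, and the bins are rescaled by $\sqrt\eta$ as described above; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1234)
Delta, eta = 0.3, 0.75
n_shots = 200_000

# sample a comb tooth 2j*sqrt(pi) of |0_L> from the Gaussian envelope
j = np.arange(-6, 7)
weights = np.exp(-Delta ** 2 * np.pi * (2 * j) ** 2)  # squared envelope amplitude
weights /= weights.sum()
centers = rng.choice(2 * j * np.sqrt(np.pi), size=n_shots, p=weights)

# teeth have variance Delta^2/2; loss shrinks by sqrt(eta) and adds vacuum noise
sigma = np.sqrt(eta * Delta ** 2 / 2 + (1 - eta) / 2)
x = np.sqrt(eta) * centers + rng.normal(0.0, sigma, size=n_shots)

# bin: round to the nearest multiple of sqrt(eta*pi) and report its parity
m = np.rint(x / np.sqrt(eta * np.pi)).astype(int)
p_err = np.mean(m % 2)   # fraction of shots misassigned to logical one
```

For $\eta = 75\%$ and $\Delta = 0.3$ this toy model gives an error probability of a few percent, of the same order as the exact-channel values quoted in the text.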
For example, at $\eta=75\%$---a high but not unreasonable value for a microwave measurement chain---the error probability is about $p_\text{err} \simeq 5.6\%$ ($4.1\%$) for $\Delta=0.3$ ($0.2$). For comparison, at $\eta=90\%$ we find $0.61\%$ ($0.15\%$). Note that rather large measurement efficiencies are required to have a substantial benefit from lowering $\Delta$. The measurement efficiency may be improved if we can amplify the quadrature information \emph{prior} to releasing the GKP state to a standard microwave measurement chain~\cite{Eddins2019}. Although most theoretical work on GKP codes assumes high efficiency quadrature measurements~\cite{Vuillot2018,Noh:2019aa,Terhal2020,noh2021low}, it is an open question whether the stringent demands required for scalable, fault-tolerant quantum computing can be met with this approach. We return to this question in~\cref{sec:scaling} where we discuss concatenation with topological codes. \subsubsection{\label{sec:phaseest_measurement}Pauli phase estimation} An alternative, and non-destructive, way to do logical Pauli measurements is by performing phase estimation using an ancillary system. In the simplest case this task can be performed using a discrete two-level system as an ancilla, removing the need to perform direct quadrature measurements on an encoded GKP state. Since the logical GKP-Paulis are unitary displacement operators, $\bar X = \hat D(\alpha), \bar Z = \hat D(\beta)$, their eigenvalues are of the form $e^{i\theta}$ with $\theta \in [0,2\pi)$. The task of estimating an eigenvalue of a unitary operator, or equivalently the ``phase'' $\theta$, is generally known as phase estimation. A variety of different phase estimation protocols exist, with tradeoffs in terms of efficiency and complexity~\cite{Terhal2016,Weigand2020,Royer2020}. We here focus on a simple non-adaptive scheme using a single two-level ancilla.
Specifically, we consider the scheme that was used in the experiments in Refs.~\cite{Fluhmann:2019aa,Campagne2020}, illustrated in~\cref{fig:phaseest_measure}~(a), as well as a modified version that has theoretically been shown to give better performance, shown in~\cref{fig:phaseest_measure}~(b)~\cite{hastrup2020improved,Royer2020}. \begin{figure} \caption{(a) One round of phase estimation. The controlled displacement is defined as $C \hat D(\zeta) = \hat D(\zeta/2) \otimes \ket{0_a}\bra{0_a} + \hat D(-\zeta/2) \otimes \ket{1_a}\bra{1_a}$.\label{fig:phaseest_measure}} \end{figure} The central component of these schemes is a controlled-displacement gate $C\hat{D}(\zeta)$ which applies a displacement on the GKP mode $\hat{D}(\pm \zeta/2)$ conditioned on the state of the two-level ancilla. See Fig.~\ref{fig:CD_gate} for possible implementations of this gate in cQED. At the end of the circuit, the ancilla is measured in the $X$-basis. Consider first the simplest scheme in~\cref{fig:phaseest_measure}~(a). The probability of getting an $X=\pm$ outcome for the ancilla measurement is given by \begin{equation}\label{eq:phaseest_prob} P(\pm) = \frac12 \left[1 \pm \frac12 \left( \braket{\hat D(\zeta)} + \braket{\hat D^\dagger(\zeta)}\right)\right]. \end{equation} To be concrete, let us take $\hat D(\zeta) = \bar Z$. For an ideal GKP code we have $\braket{\mu_L|\bar Z|\mu_L} = \pm 1$ for $\mu=0,1$, and thus $P(+) = 1$, $P(-) = 0$ for the $\ket{0_L}$ state. For an approximate $\ket{\tilde 0_L}$ state, on the other hand, there is a non-zero probability of getting a $-1$ outcome. Using~\cref{eq:gkpcodewords_approx3} one can show that $\braket{\tilde 0_L|\bar Z|\tilde 0_L} \simeq e^{-\pi \Delta^2/4}$ for small $\Delta$, such that the measurement error becomes \begin{equation}\label{eq:perr_simple} p_\text{err} = P(-) \simeq \frac12\left(1 - e^{-\pi \Delta^2/4}\right) \simeq \frac{\pi\Delta^2}{8}. \end{equation} Interestingly, the performance can be improved significantly using the scheme shown in~\cref{fig:phaseest_measure}~(b).
Here, a small controlled displacement orthogonal to the logical displacement is performed first. In the case of a $\bar Z = e^{i\sqrt\pi \hat Q}$ measurement we set $\hat D(\epsilon/2) = e^{i\lambda \hat P}$ with $\lambda$ real. The intuition behind the scheme is that this gives a better approximation to a measurement of the \emph{approximate} logical Pauli operator $\bar Z^\Delta$ introduced in~\cref{sec:approxgkp}~\cite{Royer2020}. In this case, one can show that the measurement error becomes \begin{equation}\label{eq:perr_improved} p_\text{err} \simeq \frac12\left\{1 - e^{-\frac{\pi \Delta^2}{4}}\left[ e^{-\frac{\lambda^2}{\Delta^2}} + \sin\left(\sqrt{\pi}\lambda\right) \right]\right\}. \end{equation} We can treat $\lambda$ as a free parameter to be optimized. In the small $\Delta$ limit $p_\text{err}$ is minimized for $\lambda \simeq \sqrt{\pi} \Delta^2/2$ in which case one finds $p_\text{err} \simeq 0.4 \Delta^6$, a significant improvement over~\cref{eq:perr_simple}~\cite{hastrup2020improved}. In~\cref{fig:Zmeasurement}~(c) we show $p_\text{err}$ for a majority vote over $n$ rounds of ideal phase estimation with a noiseless two-level ancilla for the two respective schemes. For the simple scheme in~\cref{fig:phaseest_measure}~(a) and $\Delta=0.3$ ($0.2$) we find $p_\text{err} \simeq 3.7\%$, $1.0\%$ and $0.4\%$ ($1.6\%$, $0.2\%$ and $0.05\%$) for $n=1$, $3$, and $5$ measurements, respectively. For the improved scheme in~\cref{fig:phaseest_measure}~(b) we find $p_\text{err} \simeq 2.6 \times 10^{-4}$, $1.6 \times 10^{-4}$ and $1.5 \times 10^{-4}$ ($1.9 \times 10^{-5}$, $6.2 \times 10^{-6}$ and $2.5 \times 10^{-6}$) for the same parameters. These results were produced by numerically computing the probability of a $-1$ measurement outcome on a $\ket{\tilde 0_L}$ state for the circuits in~\cref{fig:phaseest_measure}, and in the case of the scheme in panel $(b)$ optimizing over the parameter $\lambda$. 
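The two closed-form error estimates, \cref{eq:perr_simple,eq:perr_improved}, are simple to evaluate, and the optimization over $\lambda$ can be done by a grid search. A minimal sketch (the value of $\Delta$ and the grid are our choices):

```python
import numpy as np

def perr_simple(Delta):
    """Eq. (perr_simple): misassignment probability of the basic scheme."""
    return 0.5 * (1.0 - np.exp(-np.pi * Delta ** 2 / 4))

def perr_improved(Delta, lam):
    """Eq. (perr_improved): misassignment probability with pre-displacement."""
    return 0.5 * (1.0 - np.exp(-np.pi * Delta ** 2 / 4)
                  * (np.exp(-lam ** 2 / Delta ** 2)
                     + np.sin(np.sqrt(np.pi) * lam)))

Delta = 0.3
lam_grid = np.linspace(0.0, 0.5, 5001)
p_simple = perr_simple(Delta)
p_all = perr_improved(Delta, lam_grid)
p_best = p_all.min()                   # optimized improved scheme
lam_best = lam_grid[p_all.argmin()]
```

The grid optimum lies close to the small-$\Delta$ estimate $\lambda \simeq \sqrt{\pi}\Delta^2/2$, and the optimized error is roughly two orders of magnitude below the simple-scheme value, consistent with the scaling quoted above.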
These results show that phase estimation, modified to better distinguish the approximate Pauli operators of a physical GKP code, can lead to very small measurement errors when the ancilla is noiseless. In~\cref{sec:stateprep} we will discuss how this approach can form the basis for a state preparation scheme, by measuring stabilizer operators in place of logical Paulis. A fundamental obstacle to this approach, however, is that errors on the ancilla qubit can propagate back to the GKP mode. In particular, a bit-flip of the ancilla qubit during the controlled displacement gate $C\hat D(\zeta)$ leads to a large, random displacement of the GKP code. In~\cref{sec:ftstateprep} we discuss a potential way to make these schemes robust to such ancilla errors. \subsubsection{\label{sec:gates}Clifford gates and Clifford frames} As already mentioned, Clifford gates can be performed using interactions that are at most quadratic in the creation and annihilation operators. Specifically, the Clifford group can be generated, for example, from the Hadamard ($H$), phase ($S$) and CNOT ($C_X$) gates~\cite{Gottesman01} \begin{equation}\label{eq:Cliffords} \bar H = e^{\frac{i\pi}{4} (\hat Q^2 + \hat P^2)},\quad \bar S = e^{\frac{i}{2} \hat Q^2},\quad \bar C_X = e^{-i \hat Q\otimes \hat P}, \end{equation} where we use the generalized (code-dependent) quadratures $\hat Q, \hat P$ introduced in~\cref{sec:gkpdefs}. For the $\bar C_X$ gate the first mode is the control and the second mode the target. Together with logical basis measurements $\mathcal M_Z$ and preparation of encoded states $\ket{0_L}$ and $\ket{A_L} = \frac{1}{\sqrt 2}(\ket{0_L} + e^{i\pi/4}\ket{1_L})$, this forms a universal set. 
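That $\bar H$ in~\cref{eq:Cliffords} acts as a Fourier transform on the quadratures, sending $\hat Q \to \hat P$ and $\hat P \to -\hat Q$ under conjugation (so that $\bar X$ and $\bar Z$ are exchanged up to stabilizers), can be checked numerically. The sketch below is our own, working in a truncated Fock basis with an arbitrary low-energy test state:

```python
import numpy as np
from scipy.linalg import expm

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
Q = (a + a.conj().T) / np.sqrt(2)
P = -1j * (a - a.conj().T) / np.sqrt(2)

H_bar = expm(1j * np.pi / 4 * (Q @ Q + P @ P))   # logical Hadamard

# low-energy test state (a small coherent state), far from the truncation edge
vac = np.zeros(N, dtype=complex); vac[0] = 1.0
psi = expm(0.8 * (a.conj().T - a)) @ vac

QH = H_bar @ Q @ H_bar.conj().T @ psi   # should equal  P |psi>
PH = H_bar @ P @ H_bar.conj().T @ psi   # should equal -Q |psi>
```

Since $\hat Q^2 + \hat P^2 = 2\hat n + 1$, the gate is just a quarter rotation of phase space, which is why it is Gaussian and easy to implement.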
It should be emphasized, however, that the gates in~\cref{eq:Cliffords} are in general only \emph{approximate} logical gates on approximate GKP codes, and, despite being unitary operations, may reduce the quality of the encoded information by making the codewords harder to distinguish [this can be seen from the fact that the gates do not commute with the envelope operator introduced in~\cref{eq:gkpcodewords_approx1}]~\cite{Tzitrin2020,Shawinprep}. Due to the approximate nature of the logical gates on physical GKP codewords, it is desirable to minimize the number of Clifford gates in a given quantum circuit. An elegant solution to this problem is to make use of a so-called Clifford frame, where single qubit Cliffords are tracked ``in software''~\cite{Chamberland2018,aaronson2004improved}. The idea of the (single-qubit) Clifford frame is as follows: An arbitrary quantum circuit $\mathcal C$, written in terms of preparation of $\ket{0_L}$ and $\ket{A_L}$ states, gates from the set $\{H, S, C_X\}$, and Pauli measurements $\mathcal M_{Z}$, can be replaced by an equivalent circuit $\mathcal C'$, where the state preparation is identical, the measurements are in any Pauli basis, and all the gates are from the set \begin{equation}\label{eq:Cliffords2} C_{\sigma_i\sigma_j} = I \otimes I - \frac12\left(I - \sigma_i\right)\otimes\left(I - \sigma_j\right), \end{equation} where $\sigma_{i,j} \in \{X,Y,Z\}$ runs over the usual Pauli operators. Moreover, the number of qubits, two-qubit gates, and measurements in the new circuit $\mathcal C'$ is the same as in $\mathcal C$, while all single-qubit gates have been removed. Constructing $\mathcal C'$ from $\mathcal C$ is straightforward.
One simply commutes the $H$ and $S$ gates through all the $C_X$ gates, mapping them to new gates from the set~\cref{eq:Cliffords2} (and by-product single-qubit Pauli gates) in the process, and finally absorbs any single-qubit Cliffords and Paulis into the measurements, mapping $\mathcal M_Z$ to general Pauli measurements~\cite{gottesman1998heisenberg}. An example is given in~\cref{fig:cliffordframe}. \begin{figure} \caption{Example computation in the (single-qubit) Clifford frame. A general quantum circuit~(a) can be rewritten using only (adaptive) Clifford gates and magic states $\ket A = T\ket +$, with $T=\mathrm{diag}(1, e^{i\pi/4})$.} \label{fig:cliffordframe} \end{figure} For GKP-encoded qubits, the gate set~\cref{eq:Cliffords2} is particularly attractive, because switching between gates in this set is essentially a ``free'' operation in many physical platforms. More precisely, an encoded version of the gate set for GKP codes is realized by~\cite{Shawinprep} \begin{equation}\label{eq:Cliffords3} \bar C_{\sigma_i\sigma_j} = e^{i \hat s_i \otimes \hat s_j}, \end{equation} where $\hat s_1 = -\hat P$, $\hat s_2 = \hat Q - \hat P$, $\hat s_3 = \hat Q$ are the three quadratures corresponding to logical $\bar X$, $\bar Y$ and $\bar Z$, respectively. Any gate from the set~\cref{eq:Cliffords3} can be generated from an interaction of the form $\hat H_{\theta,\phi} \propto e^{i\theta} \hat a \hat b^\dagger + e^{i\phi} \hat a\hat b + \text{H.c.}$, where $\hat a$ and $\hat b$ are annihilation operators for the two respective modes. In turn, $\hat H_{\theta,\phi}$ can be realized, for example, from a three-wave mixing interaction with a classical pump with two pump tones at the sum and difference frequencies of the two GKP modes, and with $\theta$ and $\phi$ set by the two corresponding pump phases; see Fig.~\ref{fig:gate1}(a). Alternatively, it can be realized from a four-wave mixing interaction, using four pump tones, as shown in Fig.~\ref{fig:gate1}(b).
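The frame-update bookkeeping can be spot-checked with explicit matrices. A minimal sketch (our own check, not from the text): pushing a Hadamard on the control through $C_{ZX}$ yields $C_{XX}$, i.e. $C_{ZX}(H\otimes I) = (H\otimes I)C_{XX}$, since conjugation by $H$ exchanges $Z$ and $X$.

```python
import math

# Pushing a Hadamard through a two-qubit gate from the C_{ij} set:
# C_{ZX} (H (x) I) = (H (x) I) C_{XX}.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def kron(A, B):
    return [[A[i][j] * B[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def C(si, sj):
    # C_{ij} = I(x)I - (1/2)(I - s_i)(x)(I - s_j), written entrywise.
    return [[(1.0 if r == c else 0.0)
             - 0.5 * (I2[r // 2][c // 2] - si[r // 2][c // 2])
                   * (I2[r % 2][c % 2] - sj[r % 2][c % 2])
             for c in range(4)] for r in range(4)]

HI = kron(H, I2)
lhs = matmul(C(Z, X), HI)
rhs = matmul(HI, C(X, X))
```

The same conjugation rule applied to every single-qubit Clifford in the circuit is what allows all of them to be absorbed into relabeled two-qubit gates and measurement bases.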
Updating the Clifford frame thus simply amounts to updating the programming of classical pump phases. Although logical gates between two GKP qubits have not yet been demonstrated at the time of writing, the ability to engineer interactions of the form $\hat H_{\theta,\phi}$ has already been used for other applications in cQED~\cite{pfaff2017controlled,Gao:2019aa,Wang2020,grimm2020stabilization,roy2016introduction}. \begin{figure} \caption{Illustration of how the Hamiltonian $\hat H_{\theta,\phi}$ can be realized from a three-wave mixing interaction~(a) or a four-wave mixing interaction~(b).} \label{fig:gate1} \end{figure} \subsubsection{\label{sec:errorspread}Error spread through gates} An important consequence of the fact that Clifford gates on GKP codes can be generated by quadratic Hamiltonians is that errors are not badly amplified by the gates. Consider, for example, the $\bar C_X \equiv \bar C_{ZX} = e^{-i\hat Q\otimes \hat P}$ gate from the set~\cref{eq:Cliffords3} (the other gates in the set behave analogously), and assume that a small displacement error $e^{-iu\hat P}e^{iv \hat Q}$ [c.f.~\cref{eq:uvdisplacement}] is present on the first (control) mode prior to performing the gate. The factor $e^{iv\hat Q}$ commutes with the $\bar C_X$ gate, but since \begin{equation} \left(e^{-iu\hat P} \otimes I\right) e^{-i\hat Q\otimes \hat P} = e^{-i\hat Q\otimes \hat P} \left(e^{-iu\hat P} \otimes e^{iu \hat P} \right), \end{equation} we see that the $\bar C_X$ gate spreads a displacement error $e^{iu\hat P}$ to the second (target) mode. Even though the error has spread, small displacements spread to small displacements, and the error can be corrected by a subsequent round of error correction. This is exactly analogous to a transversal $\bar C_X = C_X^{\otimes n}$ between two binary code blocks of $n$ qubits, where, say, $t$ $X$ errors on the control block can spread to $t$ $X$ errors on the target block. In this sense the Clifford gates on GKP codes are fault-tolerant.
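The ``small displacements spread to small displacements'' statement can be made concrete at the phase-space level. A minimal sketch (our own check; the sign conventions of the means map are our assumption): tracking the shift vector $(\delta q_1,\delta p_1,\delta q_2,\delta p_2)$ through the linear map of $\bar C_X$ shows that the $p$-shift on the control is unaffected, the $q$-shift is copied to the target, and the overall error magnitude grows by at most a constant factor.

```python
import math

# Phase-space bookkeeping of error spread through CX = exp(-i Q1 (x) P2).
# Means map on (q1, p1, q2, p2): q2 -> q2 + q1, p1 -> p1 - p2.
M_CX = [[1, 0, 0, 0],
        [0, 1, 0, -1],
        [1, 0, 1, 0],
        [0, 0, 0, 1]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

u, v = 0.05, 0.03            # small shift error on the control: q-shift u, p-shift v
err_in = [u, v, 0.0, 0.0]
err_out = matvec(M_CX, err_in)

# The q-shift is copied to the target; the p-shift commutes with the gate.
norm_in = math.sqrt(sum(x * x for x in err_in))
norm_out = math.sqrt(sum(x * x for x in err_out))
```

Since the map is a fixed linear operator, the output error norm is bounded by a constant times the input error norm, which is the continuous-variable analogue of a transversal gate spreading $t$ errors to at most $t$ errors.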
This is, however, only a statement about an ideal implementation of a gate such as~\cref{eq:Cliffords3}. In a realistic implementation, where the quadratic interaction $\hat H_{\theta,\phi}$ stems from an underlying nonlinearity (c.f.~\cref{fig:gate1}), there will unavoidably be spurious higher-order terms present as corrections to $\hat H_{\theta,\phi}$. Such terms might amplify and spread errors in a harmful way, and it is therefore crucial that they are made as small as possible. Again, it is not expected that GKP codes can suppress errors arbitrarily. The goal is to suppress errors to sufficiently low levels that the resource overhead for the next level of protection is reduced, as discussed further in~\cref{sec:scaling}. \section{\label{sec:stateprep}State preparation and error correction} \subsection{\label{sec:phaseest}State preparation using two-level ancilla} One way to prepare a GKP state is to non-destructively measure the stabilizers and a corresponding logical Pauli. For example, a measurement of $\hat S_X$ and $\bar Z$, both with $+1$ outcomes, would correspond to a preparation of the ideal $\ket{0_L}$ state. We discussed how logical Paulis can be measured using phase estimation in~\cref{sec:phaseest_measurement}. These ideas can be extended to measuring the stabilizers $\hat S_X, \hat S_Z$, and introducing feedback displacements to steer the state towards the code space of an approximate GKP code. \begin{figure} \caption{``Sharpen'' and ``Trim'' protocols used to prepare approximate GKP codestates. The controlled-displacement gate is defined as in~\cref{fig:phaseest_measure}.} \label{fig:phaseest} \end{figure} Several protocols have been developed to this end~\cite{Terhal2016,Fluhmann:2019aa,Campagne2020,Royer2020,de2020error,Weigand2020}. To keep the discussion concrete, we here focus on the scheme illustrated in~\cref{fig:phaseest}, which is the scheme used in the experimental demonstrations of GKP codewords in Ref.~\cite{Campagne2020}.
Let us first consider in more detail the circuit labeled ``Sharpen''. This is simply a version of the standard phase estimation circuit we introduced in~\cref{fig:phaseest_measure}~(a), with a feedback displacement used to steer the state towards a $+1$ eigenstate of $\hat D(\zeta)$. The probability of getting a $\pm$ outcome here is $P_{\pi/2}(\pm) = \frac12 \left(1 \pm \text{Im}\, \braket{\hat D(\zeta)} \right)$. To prepare a state with a target phase value $\theta=0$, we introduce a measurement-dependent displacement of $\hat D(\pm \epsilon/2)$ along a direction orthogonal to $\hat D(\zeta)$, so that for a $\pm$ outcome an eigenstate with eigenvalue $e^{i\theta}$ is mapped to one with $e^{i(\theta \mp \zeta\epsilon)}$. This ensures that $\theta = 0 \pmod{2\pi}$ is a stable fixed point, while $\theta = \pi \pmod{2\pi}$ is unstable, for sufficiently small $\epsilon$. Of course, for a fixed $\epsilon$, we cannot prepare a phase arbitrarily close to zero using the above procedure, but this is also not desirable. In practice, the scheme will be limited by experimental imperfections, such as unwanted nonlinearities and dephasing, as the photon number of the state increases. It is therefore better to directly target an approximate GKP state as defined in~\cref{sec:approxgkp}, where the choice of $\Delta$ should be optimized based on experimental considerations. It was shown in Ref.~\cite{Royer2020} that by alternating the phase estimation of $\hat D(\zeta)$ (``Sharpen'' in~\cref{fig:phaseest}) with phase estimation of a small orthogonal displacement $\hat D(\epsilon)$ (``Trim'' in~\cref{fig:phaseest}), one can prepare an approximate GKP state of the form~\cref{eq:gkpcodewords_approx1}, with $\Delta \sim \sqrt\epsilon$.
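The stochastic steering of the ``Sharpen'' step can be illustrated with a toy classical simulation (our own sketch, under the simplifying assumption that the state remains an eigenstate of $\hat D(\zeta)$ with phase $\theta$, so that $\mathrm{Im}\braket{\hat D(\zeta)} = \sin\theta$): outcomes $\pm$ occur with probability $\frac12(1\pm\sin\theta)$, and the feedback maps $\theta \to \theta \mp \zeta\epsilon$. The average drift per round is $-\zeta\epsilon\sin\theta$, so $\theta=0$ is stable and $\theta=\pi$ unstable.

```python
import math
import random

random.seed(1)

zeta_eps = 0.01   # product of stabilizer displacement and feedback step size
theta = 2.0       # initial phase, away from both fixed points
history = []

for step in range(20000):
    p_plus = 0.5 * (1.0 + math.sin(theta))   # P(+) = (1 + Im<D(zeta)>)/2
    if random.random() < p_plus:
        theta -= zeta_eps                    # + outcome: theta -> theta - zeta*eps
    else:
        theta += zeta_eps                    # - outcome: theta -> theta + zeta*eps
    history.append(theta)

# After many rounds, theta concentrates near 0 (mod 2pi) with residual
# fluctuations of order sqrt(zeta_eps), consistent with Delta ~ sqrt(eps).
tail = history[-5000:]
mean_cos = sum(math.cos(t) for t in tail) / len(tail)
final_wrapped = (theta + math.pi) % (2 * math.pi) - math.pi
```

The residual spread of $\theta$ around zero, rather than exact convergence, is the classical counterpart of the statement that a fixed $\epsilon$ targets an approximate rather than an ideal GKP state.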
The intuition behind the scheme is that the first step ``sharpens the peaks'' of the target GKP state by bringing it closer to a $+1$ eigenstate of $\hat D(\zeta)$, while the second step ``trims the envelope'' of the state by weakly measuring the orthogonal quadrature~\cite{Royer2020}. To prepare a logical state, say $\ket{\tilde 0_L}$, one can first alternate many Sharpen-and-Trim cycles of the two stabilizers $\hat S_X,\hat S_Z$ to project the state onto the logical subspace. Once in the codespace, a single phase estimation round of $\bar Z=\hat D(\beta)$ suffices to project onto one of the two logical $Z$-basis states. This can be done using, for example, either of the two circuits in~\cref{fig:phaseest_measure}. The full protocol is illustrated in~\cref{fig:sharpentrimcycle}. Alternatively, one can repeat the logical $\bar Z$ measurement a few times and postselect on getting identical outcomes, to increase the preparation fidelity~\cite{Campagne2020,Royer2020} (see also~\cref{fig:Zmeasurement}). Finally, a Pauli correction can be applied if necessary to prepare $\ket{\tilde 0_L}$. \begin{figure} \caption{Preparation of a logical state can be done by alternating ``Sharpen'' (``S'') and ``Trim'' (``T'') cycles for the two stabilizers $\hat S_X,\hat S_Z$ to project onto the codespace, and finally performing a non-destructive $\bar Z$ measurement (``M'') using, e.g., one of the two circuits in~\cref{fig:phaseest_measure}.} \label{fig:sharpentrimcycle} \end{figure} Various optimizations of the ``Sharpen-Trim'' scheme are possible, as well as measurement-free versions. We refer the reader to Refs.~\cite{hastrup2021measurement,Royer2020,de2020error} for further details. We also note that optimal control methods have successfully been used to prepare other bosonic codes~\cite{Ofek16,Hu:2019aa}, and that similar techniques may prove useful for GKP state preparation as well.
\begin{figure} \caption{Implementation of the controlled-displacement gate, $C\hat D(\zeta)$, between a GKP resonator and a transmon~(a), and a Kerr-cat realized in a SNAIL~(b).} \label{fig:CD_gate} \end{figure} \subsection{\label{sec:ftstateprep}Fault-tolerance in state preparation} An issue with the scheme illustrated in~\cref{fig:phaseest} (as well as the Pauli measurement schemes in~\cref{fig:phaseest_measure}) is that ancilla errors can propagate to the GKP code and lead to uncorrectable errors. In particular, an ancilla bit flip at a random time during the controlled displacement of the ``Sharpen'' step leads to a potentially large, random displacement error. (A bit-flip during the controlled displacement of the ``Trim'' step is much less serious, as it only leads to a displacement error of magnitude $\sim|\epsilon|$.) These simple circuits are, in other words, not fault-tolerant to the dominant error channels, such as relaxation, on the ancilla qubit. It is noteworthy, however, that there is a certain amount of built-in robustness in the phase estimation circuits, in that phase flips on the ancilla qubit are relatively benign: $Z$ errors on the ancilla commute with the gates and thus only lead to measurement errors. For the ``Sharpen'' circuit a measurement error only leads to a small displacement of magnitude $|\epsilon/2|$ in the wrong direction. This will broaden the GKP peaks, but is not very harmful as long as it does not happen too often. For the ``Trim'' circuit, a measurement error will lead to a large displacement $\hat D(\pm\zeta/2)$ in the wrong direction; however, these displacements are equivalent up to the stabilizer $\hat D(\mp\zeta)$, and thus do not lead to a logical error. A measurement error in the final ``Measure'' step illustrated in~\cref{fig:sharpentrimcycle} is more serious, as it leads to a logical error.
However, as already mentioned, we may repeat this measurement to suppress such measurement errors. In the following we outline a potential approach to increase the robustness of these circuits by exploiting the natural robustness against phase flips of the phase estimation protocols. \subsubsection{\label{subsec:bias}Biased noise ancilla} { A few different protocols can be applied for more robust phase estimation. For example, the approach considered in~\cite{shi2019fault} is based on using an extra flag qubit to prevent a single ancilla error from introducing a large displacement error in the GKP state. More hardware-efficient robustness against a single transmon ancilla error can also be achieved by applying the technique known as $\chi$-matching~\cite{Rosenblum2018}. In this approach, the transmon's $\ket{g}$ and $\ket{f}$ levels are used as computational states and the transmon-cavity coupling is engineered so that the GKP resonator is transparent to a single relaxation error from state $\ket{f}$ to $\ket e$ in the transmon. The approach we outline here for robust phase estimation is based on using a biased-noise ancilla qubit~\cite{Puri:2018aa}.} Biased-noise qubits couple asymmetrically with the environment, so that one type of error, such as phase-flips or Pauli-$Z$ errors, is more common than others, such as bit-flips or Pauli-$X,Y$ errors. In such qubits it is convenient to define a quantity called the bias, the ratio of the dominant error probability to the sum of all other error probabilities, $\eta=p_z/\left(p_x+p_y\right)$. For pure-$Z$ noise the bias is $\eta=\infty$, while for isotropic or depolarizing noise $\eta=0.5$. Many examples of such biased-noise qubits exist, including the heavy fluxonium qubit~\cite{Earnest2018}, the soft $0$--$\pi$ qubit~\cite{Gyenis2021}, and the Kerr~\cite{grimm2020stabilization} and dissipative~\cite{lescanne2020exponential} cat qubits.
In the trapped-ion implementation of GKP codes~\cite{Fluhmann:2019aa}, the ancillary pseudo-spin states used to control the motional mode naturally have such a strong bias. Thanks to the robustness against phase errors, a possible path towards creating a fault-tolerant state preparation scheme is to use a biased-noise ancilla qubit where bit-flip errors are heavily suppressed~\cite{Puri:2018aa}. To be able to implement the circuits in~\cref{fig:phaseest_measure,fig:phaseest} fault-tolerantly, it is however crucial that we can perform the required controlled-displacement gates while preserving a strong suppression of bit-flip errors. {While there are several candidates for strongly biased-noise qubits in the superconducting circuit platform~\cite{Earnest2018,Gyenis2021,siegele2021fault}}, here we focus on the Kerr-cat qubit as an illustrative example, as this provides a particularly straightforward, hardware-efficient realization of the operations required for the phase estimation protocols. In the Kerr-cat qubit the logical states are superpositions of coherent states $\ket\pm \propto \ket{\alpha} \pm \ket{-\alpha}$ ($\ket{0/1} \simeq \ket{\pm\alpha}$) of the electromagnetic field stored in a nonlinear oscillator. More precisely, these states are eigenstates of a Kerr-nonlinear oscillator in the presence of a two-photon pump: $\hat H_\text{cat} = - K \hat a^{\dagger 2} \hat a^2 + K\alpha^2 (\hat a^{\dagger 2} + \hat a^2)$, where we are working in the rotating frame of the oscillator in which the two-photon pump is resonant~\cite{Puri2017}. The cat states are separated from the closest eigenstates by a gap $\omega_\text{gap} \simeq 4K\alpha^2$, and we take $\alpha$ to be real for simplicity. Crucially, realistic noise channels for this system, including photon loss, heating and dephasing, are highly unlikely to cause transitions between the $\ket 0$ and $\ket 1$ states.
More precisely, bit-flips are exponentially suppressed in $\alpha^2$ compared to phase-flips, leading to a bias that is exponentially large in $\alpha^2$~\cite{Puri2017}. Returning to the circuits in~\cref{fig:phaseest}, we first note that preparation and measurement in the $X$-basis, as well as the $\hat S, \hat S^\dagger$ phase gates, are bias-preserving operations, i.e., the effective error channel remains biased towards $Z$ errors when performing these operations. These operations have already been demonstrated experimentally for the Kerr-cat qubit~\cite{grimm2020stabilization}. On the other hand, a controlled displacement can be implemented with a beam splitter interaction \begin{equation}\label{eq:HCD_cat} \hat H_{CD} = i(g\hat a_\text{cat} \hat a_\text{gkp}^\dagger - g^*\hat a_\text{cat}^\dagger \hat a_\text{gkp}), \end{equation} where the subscripts refer to the cat and GKP mode, respectively, and $g$ is an (in general complex) interaction strength. To show that this approximately leads to a conditional displacement, we project the Hamiltonian onto the logical cat subspace $\hat P_\text{cat} = \ket\alpha\bra\alpha + \ket{-\alpha}\bra{-\alpha}$: $\hat P_\text{cat} \hat H_{CD} \hat P_\text{cat} = i \alpha ( g \hat a_\text{gkp}^\dagger - g^* \hat a_\text{gkp}) \bar Z_\text{cat} + \mathcal O^*(e^{-2\alpha^2})$, with $\bar Z_\text{cat}$ the logical Pauli-$Z$ operator for the cat qubit (we use the $\mathcal O^*[f(x)]$ notation to suppress polynomial factors in $x$~\cite{woeginger2008}). Evolution under this interaction for a time $t$ thus leads to a controlled displacement $C\hat D(\zeta)$, with $\zeta = g\alpha t$. Interestingly, the controlled-displacement gate becomes both faster and more accurate as we increase $\alpha$, and thus the bias of the ancilla.
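The claim that the coherent states $\ket{\pm\alpha}$ span a degenerate eigenspace of $\hat H_\text{cat}$ can be verified directly: writing $\hat H_\text{cat} = -K(\hat a^{\dagger 2}-\alpha^2)(\hat a^2-\alpha^2) + K\alpha^4$ (for real $\alpha$), any eigenstate of $\hat a^2$ with eigenvalue $\alpha^2$ has energy $K\alpha^4$. The sketch below (our own numerical check, in a truncated Fock basis) applies $\hat H_\text{cat}$ to the Fock coefficients of $\ket{\pm\alpha}$ and confirms $\hat H_\text{cat}\ket{\pm\alpha} \approx K\alpha^4\ket{\pm\alpha}$:

```python
import math

K, alpha, N = 1.0, 2.0, 60  # Kerr strength, cat amplitude, Fock cutoff

def coherent(a, N):
    # Fock coefficients c_n = exp(-|a|^2/2) a^n / sqrt(n!)
    c = [math.exp(-a * a / 2.0)]
    for n in range(1, N):
        c.append(c[-1] * a / math.sqrt(n))
    return c

def H_cat(c):
    # (H c)_n for H = -K a^+2 a^2 + K alpha^2 (a^+2 + a^2)
    out = []
    for n in range(len(c)):
        val = -K * n * (n - 1) * c[n]
        if n >= 2:
            val += K * alpha ** 2 * math.sqrt(n * (n - 1)) * c[n - 2]
        if n + 2 < len(c):
            val += K * alpha ** 2 * math.sqrt((n + 1) * (n + 2)) * c[n + 2]
        out.append(val)
    return out

E = K * alpha ** 4  # exact degenerate energy of |alpha> and |-alpha>
c_p = coherent(alpha, N)
c_m = coherent(-alpha, N)
res_plus = max(abs(h - E * x) for h, x in zip(H_cat(c_p), c_p))
res_minus = max(abs(h - E * x) for h, x in zip(H_cat(c_m), c_m))
```

Both residuals are at the level of floating-point and truncation error, confirming the exact twofold degeneracy of the cat manifold (the gap $\omega_\text{gap}\simeq 4K\alpha^2$ to the rest of the spectrum is the separate, approximate statement quoted in the text).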
We note that to implement the optimized measurement circuit in~\cref{fig:phaseest_measure}~(b), as well as the optimized stabilizer protocols in Ref.~\cite{Royer2020}, one also requires a rotation gate of the form $\hat R_x = \exp(-i\pi\hat \sigma_x/2)$. This gate is conveniently very simple to implement for a Kerr-cat qubit: one simply turns off the two-photon pump used to stabilize the Kerr-cat logical subspace and lets the cat evolve freely under the Kerr Hamiltonian $\hat H_K = - K\hat a^{\dagger 2} \hat a^2$ for a time $\pi/2K$~\cite{grimm2020stabilization,Puri:2019aa}. One might worry that this gate is not bias-preserving: a phase flip prior to, or during, this gate can be rotated to a bit-flip error. However, the circuit in~\cref{fig:phaseest_measure}~(b) is constructed such that the error that propagates back to the GKP mode is precisely the logical operator we are trying to measure. Similarly, for the protocols in Ref.~\cite{Royer2020} the error is a stabilizer. This is acceptable, and we therefore expect that these improved measurement and stabilizer protocols also benefit from a biased-noise qubit. Physically, the Kerr-cat qubit can be implemented in a tunable nonlinear element such as a capacitively shunted SNAIL, or ``SNAILmon''~\cite{grimm2020stabilization}. With an external magnetic flux, the SNAIL exhibits both three-wave and four-wave mixing capabilities. Moreover, the controlled displacement~\cref{eq:HCD_cat}, which requires a three-wave mixing interaction between the cat mode, the GKP mode, and a microwave pump, can also be activated using the same nonlinear element, thus requiring only capacitive coupling between the SNAIL device and the GKP mode [see Fig.~\ref{fig:CD_gate}(b)]. Such an interaction between a Kerr-cat and an un-encoded oscillator has been demonstrated~\cite{grimm2020stabilization}. This proof-of-principle demonstration strongly suggests that the Kerr-cat in a SNAILmon can be used for fault-tolerant GKP state preparation.
Nonetheless, the viability of this approach depends on the effect of the magnetic flux required to bias the SNAIL on the lifetime of 3D cavities, and requires more experimental exploration~\cite{chapman2021mediating}. Alternatively, an approach based on high-Q oscillators in a 2D architecture must be developed. Finally, we note that this is exactly the same type of interaction as required for two-qubit gates between GKP modes, as discussed in~\cref{sec:gates} [c.f.~\cref{eq:Cliffords3}]. It should thus be possible to repurpose the same piece of hardware for both gates and state preparation [see also Fig.~\ref{fig:gate1}(a)]. \subsection{\label{sec:ec_ckts}Error correction with GKP ancillae} In~\cref{sec:qec_criteria} we briefly discussed the ability of the GKP code to correct realistic noise processes \emph{in principle}, by looking at the quantum error correction criteria for displacement errors and performing numerical simulations using an optimal recovery map. Performing error correction \emph{in practice} requires a fault-tolerant and non-destructive way to measure the stabilizers of the GKP code. One way to do this is to perform phase estimation using a two-state ancilla, as we have already outlined above. However, this approach has the disadvantage that a single bit of information is obtained per ancilla measurement, such that several measurements are required to obtain the continuous-variable GKP syndrome information with high accuracy. An alternative approach, which is the approach that has been studied most extensively in the theoretical literature on GKP codes~\cite{Gottesman01,Vuillot2018,Noh:2019aa,noh2021low}, is to use ancillae that are themselves prepared in GKP states. Here, phase estimation can be performed in a single shot (assuming we can perform high-efficiency homodyne detection), with an accuracy set by the quality of the encoded GKP ancilla states (and the measurement efficiency).
There are two canonical ways to perform ``single-shot'' GKP error correction using GKP-encoded ancillae. These are essentially bosonic versions of Steane~\cite{Steane1997} and Knill~\cite{Knill05} error correction, respectively. The two schemes are illustrated in~\cref{fig:qeccirc}. As shown in the figure, the propagation of displacement errors through the two circuits is essentially equivalent, but the Knill circuit does not require any active recovery. Here, the measurements, which contain the syndrome information, are simply used to determine whether a logical Pauli operator has been applied to the GKP state in the process of teleporting from the top to the bottom rail of the circuit~\cite{Grimsmo2020}. It is not necessary to physically apply any Pauli correction, as it can be tracked in a Pauli frame~\cite{Knill05}. We also note that a version of the Knill circuit can be performed using beam splitter interactions in place of $\bar C_X$ gates{~\cite{Walshe2020}}, and that it was shown in Refs.~\cite{larsen2021fault,noh2021low} that this leads to a lower probability of logical error. \begin{figure} \caption{Illustration of ``Steane''~(a) and ``Knill''~(b) error correction circuits.} \label{fig:qeccirc} \end{figure} Although the ``single-shot'' GKP error correction schemes are highly efficient in principle, they also have two clear practical drawbacks: they require preparation of two additional GKP states per round of error correction, and they require very high-efficiency quadrature measurements to be useful. The former itself requires repeated stabilizer measurements using a two-level ancilla as discussed in~\cref{sec:phaseest}, unless some other method is developed, and, as discussed in~\cref{sec:logicalops}, the latter is a highly nontrivial task that has not yet been demonstrated on a GKP state. In the next section we discuss different approaches to quantum error correction with GKP codes in a large-scale architecture.
\section{\label{sec:scaling}The big picture: Scalability and fault-tolerance} So far, we have seen that, whether due to finite squeezing, environmental noise, or backaction from the ancilla used in state preparation, the logical error rate in the GKP codespace cannot be decreased arbitrarily. In order to correct for residual errors, the GKP code can be concatenated with another binary quantum error correcting code. A particularly popular approach is concatenation of the GKP code with the topological surface code~\cite{Dennis02}. Fault-tolerant error correction with GKP-surface codes is being studied in the context of both gate-based~\cite{Vuillot2018,Terhal2020,Noh:2019aa,noh2021low} and measurement-based quantum computing~\cite{Menicucci14,Fukui:2017aa,Fukui2018,Fukui:2018aa,Fukui:2019aa,bourassa2021blueprint}. {Here, we will limit our discussion to the former, as it is more commonly used in the context of the superconducting-circuit and trapped-ion platforms.} In this approach, each data qubit of the surface code is replaced by a single-mode GKP code. Such concatenation provides two layers of protection. In the first layer (referred to henceforth as the \emph{inner code} and denoted $\mathcal{C}_\mathrm{GKP}$), the stabilizers $\hat{S}_X$ and $\hat{S}_Z$ are measured for each GKP mode $M$ times using additional ancillae. These additional ancillae can also be GKP-encoded, as in~\cref{sec:ec_ckts}, or discrete qubits such as transmons or Kerr-cats, as discussed in~\cref{sec:stateprep,sec:ftstateprep}. The ancilla measurement record and details of the underlying noise model are used to estimate and correct, as accurately as possible, the noise on each GKP data mode~\cite{Vuillot2018,Noh:2019aa,noh2021low}. Of course, this procedure will not perfectly remove all errors.
The remaining errors are instead corrected by interspersing the stabilizer measurements of $\mathcal{C}_\mathrm{GKP}$ with the parity checks of the surface code (referred to henceforth as the \emph{outer code}, and denoted $\mathcal{C}_\mathrm{surface}$). The surface code ancillae used to measure these parity checks can themselves be GKP-encoded or discrete qubits. This approach of concatenating $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ may seem contrary to the hardware efficiency of GKP codes argued for in the Introduction. After all, at the end we are resorting to a binary surface code, which incurs a substantial hardware overhead (the surface code has a vanishing code rate). Nevertheless, if it becomes possible to suppress the probability of error during logical operations far below the threshold of $\mathcal{C}_\mathrm{surface}$, a modest code distance might suffice to reach a target logical error rate required for useful quantum computation~\cite{noh2021low}. In this case, the resource overhead of quantum hardware and software controls may be significantly lighter than if $\mathcal{C}_\mathrm{surface}$ is directly implemented with conventional qubits such as transmons. With this overview, we now give more details on two possible constructions of the concatenated $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ code. In one approach, which is the most widely studied in the theoretical literature and is referred to as the All-GKP surface code, all the data qubits and ancillae used for error correction are GKP-encoded~\cite{Vuillot2018,Noh:2019aa,noh2021low}. The other approach takes a hybrid route where the ancillae are replaced by discrete qubits (e.g., transmons or Kerr-cats), and will be referred to as the Hybrid-GKP surface code (the two schemes were referred to as Only-Surface-Code-GKP-Ancilla and All-Regular-Qubit-Ancilla, respectively, in Ref.~\cite{Terhal2020}).
\Cref{fig:gkp_scov} outlines the building blocks of the two schemes. \begin{figure} \caption{Building blocks of the concatenated $\mathcal{C}_\mathrm{GKP}\triangleright\mathcal{C}_\mathrm{surface}$ code.} \label{fig:gkp_scov} \end{figure} \subsection{All-GKP surface code} A possible layout of $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ with GKP-encoded data \emph{and} ancilla modes in the cQED architecture is illustrated in Fig.~\ref{fig:gkp_sc1}(a). Steane- or Knill-based error correction circuits may be used for $\mathcal{C}_\mathrm{GKP}$ (c.f.~\cref{fig:qeccirc}), although the former is studied more widely. To be concise, we focus on the square lattice GKP code, where the probability of $\bar{X}$ errors equals that of $\bar{Z}$ errors, concatenated with the CSS surface code with the ``rotated'' layout of Ref.~\cite{bombin_optimal_2007}. The parity check operators for $\mathcal{C}_\mathrm{surface}$ are of two types: all $\bar{X}$-type to detect $\bar{Z}$ errors, and all $\bar{Z}$-type to detect $\bar{X}$ errors. In order to minimize amplification of displacement errors, the surface code check operators can be modified as shown in Fig.~\ref{fig:gkp_sc1}(b)~\cite{Noh:2019aa}. Each $\bar{X}$-type parity check involves two $\bar{X}$ and two $\bar{X}^\dag$, while each $\bar{Z}$-type parity check only involves $\bar{Z}$. (Note that, in contrast to regular discrete qubits, we do not have $\bar{X}=\bar{X}^\dag$ and $\bar{Z}=\bar{Z}^\dag$ outside the GKP codespace.) The parity checks of $\mathcal{C}_\mathrm{surface}$ are performed simultaneously using the circuit shown in Fig.~\ref{fig:gkp_sc1}(b). As illustrated, each GKP ancilla (shown in blue in Fig.~\ref{fig:gkp_sc1}) interacts with four neighboring GKP modes via $\bar{C}_Z=e^{i\hat{Q}\otimes\hat{Q}}$ gates for the $Z$-checks and a mix of $\bar{C}_X=e^{-i\hat{Q}\otimes\hat{P}}$ and $\bar{C}_X^\dag=e^{i\hat{Q}\otimes\hat{P}}$ for the $X$-checks. As discussed earlier, these gates can be implemented via a nonlinear coupler such as a transmon or SNAIL.
\begin{figure*} \caption{(a) Illustration of a possible realization of the $\mathcal{C}_\mathrm{GKP}\triangleright\mathcal{C}_\mathrm{surface}$ code in the cQED architecture. (b) Modified parity checks of $\mathcal{C}_\mathrm{surface}$.} \label{fig:gkp_sc1} \end{figure*} Standard techniques of decoding, for example minimum-weight perfect matching (MWPM), can be used for error correction in the surface code layer. {Moreover, the accuracy of the decoder can be enhanced by using analog information from the continuous-variable measurement outcomes~\cite{Fukui:2017aa,Fukui2018,Fukui:2018aa,Fukui:2019aa,Vuillot2018,Noh:2019aa,Terhal2020,noh2021low}. For example, one can incorporate the conditional probabilities for Pauli errors given the analog measurement outcomes into the edge weights of the MWPM problem, resulting in a dynamic matching graph.} Various independent studies have been performed to estimate the performance of the All-GKP surface code under slightly different assumptions about noise~\cite{Vuillot2018,Noh:2019aa,Terhal2020,noh2021low}. A common feature of most of these studies is that, in order to simplify the numerical analysis, noise in approximate GKP states, as well as errors introduced during error-correction operations, are modelled as independent Gaussian displacements with standard deviation $\sigma$, represented by the channel \begin{align}\label{eq:displacementchannel} \mathcal{E}_\sigma(\hat \rho)=&\frac{1}{2\pi\sigma^2}\int \mathrm{d}^2\zeta\, e^{-|\zeta|^2/2\sigma^2}\hat{D}(\zeta)\hat{\rho}\hat{D}^\dag(\zeta). \end{align} Compared to a general noise channel as in~\cref{eq:channel}, this channel is strictly diagonal in the displacement operator basis. {An arbitrary noise channel can be brought closer to diagonal form in the displacement basis using displacement twirling (though the resulting distribution is not necessarily Gaussian)~\cite{Conrad2021}. \Cref{eq:displacementchannel} describes, for example, a noise process with equal loss and heating rates, given by a master equation $\dot{\hat \rho} = \kappa \left(\mathcal D[\hat a]\hat \rho + \mathcal D[\hat a^\dagger]\hat \rho\right)$~\cite{Albert17,Noh:2019aa}.
However, in a realistic system special engineering is required to make these noise processes equal. The Gaussian displacement channel does not in general represent physical noise commonly encountered in oscillator systems, such as loss, dephasing or heating at arbitrary rates.} The standard deviation $\sigma$ in~\cref{eq:displacementchannel} determines the amount of noise in the system. Similarly to how we introduced a squeezing parameter to quantify the quality of approximate GKP states in~\cref{sec:approxgkp}, we can introduce a squeezing parameter $\mathcal S = -10\log_{10}(2\sigma^2)$ {(with the identification $\Delta^2 = 2\sigma^2$)} to quantify the noise in~\cref{eq:displacementchannel}, with large $\mathcal S$ meaning low noise. {In the numerical studies in Refs.~\cite{Vuillot2018,Noh:2019aa,noh2021low}, approximate GKP states with squeezing $\Delta$ were modeled by applying the noise channel~\cref{eq:displacementchannel} with $\sigma = \Delta/\sqrt 2$ to an ideal GKP state. This was done in order to make the numerics tractable, but the noisy GKP states defined in this manner are still unphysical (there is no ``envelope'' in phase space). In Ref.~\cite{Terhal2020} it was shown that this model with incoherent displacements underestimates the logical error compared to the coherent superposition in~\cref{eq:gkpcodewords_approx2} with $\Delta^2 = 2\sigma^2$.} When the GKP codewords and every element (i.e., gates and measurements) of the error-correction circuits are subjected to equally noisy Gaussian displacement channels, and Steane-based error correction is used for $\mathcal{C}_\mathrm{GKP}$, the threshold has been found to correspond to a squeezing of $\mathcal S = 18.6$ dB.
That is, it becomes possible to realize a logical qubit with arbitrarily small probability of error with $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ as long as the standard deviation for displacement errors is smaller than $\sim 0.09$. On the other hand, if the Gaussian displacement channel is only applied to the data and ancilla GKP codewords, while all other operations are assumed to be noiseless, this threshold is reduced to $\mathcal S=11.2$ dB~\cite{Noh:2019aa}. Further optimizations are possible, and a recent study showed that the latter threshold with only state preparation noise can be reduced to $\mathcal S = 9.9$ dB through better decoding and an optimized error correction protocol~\cite{noh2021low}. Beyond improving the threshold, it was shown that a decoding strategy that makes better use of the analog syndrome information can have a dramatic effect on the overhead cost to reach a certain logical error rate. The largest GKP state prepared experimentally so far has a squeezing of $\sim 10$ dB, and the impact of experimental limitations on the performance of two-qubit gates and measurement (cf.~\cref{fig:Zmeasurement}) is largely an open question. Thus, there is opportunity for theoretical and experimental innovations in developing scalable quantum control methods for practical implementation of the GKP-surface code architecture. One possible path towards easing the threshold requirements is to tailor the GKP code and the surface code to exploit structure in the noise. In particular, the more recently tailored surface code (TSC)~\cite{tuckett2018ultrahigh,tuckett2020fault} and the XZZX surface code~\cite{bonilla2020xzzx} exhibit ultra-high thresholds when the noise channel of the underlying elementary qubits is biased. Recall the rectangular-lattice GKP states introduced in section~\ref{sec:gkpdefs} {and defined in Eq.~\eqref{eq:lattice_def}}.
Due to the phase-space structure of the rectangular lattice, the resulting Pauli errors become asymmetric even if the translation errors from the environment are isotropic, resulting in a biased noise channel. If {$\lambda>1$} in Eq.~\eqref{eq:lattice_def}, then displacements along the momentum quadrature $\hat{P}$ are more likely to be misidentified than displacements along the position quadrature $\hat{Q}$. Consequently, after $M$ rounds of GKP error-correction, $\bar{X},\bar{Y}$-errors will be far less likely than $\bar{Z}$ errors. Thus, the error channel after $\mathcal{C}_\mathrm{gkp}$ with a rectangular-GKP state will be biased, with the bias increasing exponentially with {$\lambda$}~\cite{hanggli2020enhanced}. Now, the biased noise can be more accurately corrected if $\mathcal{C}_\mathrm{gkp}$ is concatenated with the TSC or the XZZX surface code. A recent work has studied the rectangular-GKP code concatenated with the TSC under a simplistic noise model where only the data GKP codewords are subject to a Gaussian displacement channel, while the ancillas and error-correction circuits are perfect~\cite{hanggli2020enhanced}. It is known that with such a simplistic noise model, the threshold for the square-lattice GKP code corresponds to a squeezing of $2.3$ dB. If instead a rectangular GKP code is used with $r {=\lambda^2} =3$, then as shown in~\cite{hanggli2020enhanced}, the threshold standard deviation increases, corresponding to a squeezing of $\sim$1.7 dB. Here, $r=3$ corresponds to a single-mode squeezing of $10\log_{10}(r)\sim 4.8$ dB. {For the regular CSS surface code, another study in contrast found only a marginal improvement in logical error when the ancilla (but not data) qubits are rectangular~\cite{noh2021low}.} However, more theoretical work is required to predict whether the rectangular-GKP code concatenated with the TSC, {XZZX, or another code optimized to exploit noise bias,} provides a practical advantage when realistic circuit-level noise is considered.
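The scaling of this bias can be sketched with a leading-order estimate (ours; it assumes ideal rectangular GKP states, isotropic Gaussian shifts of standard deviation $\sigma$, and a convention in which the $\hat Q$- and $\hat P$-decision boundaries sit at $\lambda\sqrt{\pi}/2$ and $\sqrt{\pi}/(2\lambda)$, respectively):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def shift_error_prob(half_spacing, sigma):
    """Leading-order probability that a Gaussian shift exceeds the decision boundary."""
    return math.erfc(half_spacing / (math.sqrt(2) * sigma))

def bias(lam, sigma):
    """Ratio of P-quadrature (Z-type) to Q-quadrature (X-type) misidentification
    rates for a rectangular GKP lattice with aspect ratio lam (assumed convention)."""
    p_q = shift_error_prob(lam * SQRT_PI / 2, sigma)       # rarely misidentified
    p_p = shift_error_prob(SQRT_PI / (2 * lam), sigma)     # dominant error type
    return p_p / p_q
```

For $\lambda^2 = 3$ and moderate $\sigma$ this ratio already spans several orders of magnitude, consistent with the exponential growth of the bias in $\lambda$.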
\subsection{Hybrid-GKP surface code} The All-GKP surface code scheme may become resource intensive because auxiliary qubits are required to prepare the GKP data and ancilla modes, and additional tunable nonlinear couplers are required to implement two-qubit gates between these GKP modes. {Each $\mathcal C_\text{GKP}$ \emph{and} $\mathcal C_\text{surface}$ syndrome measurement moreover requires preparation of fresh GKP ancilla states, which is a slow process~\cite{Campagne2020}.} Such extra {space-time costs} can overwhelm the savings in overhead that one may have otherwise expected. A more efficient approach may be to replace some or all of the ancillae with discrete qubits such as a transmon or a biased Kerr-cat qubit (see~\cref{sec:ftstateprep}). As an illustrative example, consider Fig.~\ref{fig:gkp_sc1} for counting hardware resources. In this set-up, a distance-$d$ surface code requires $3d^2-1$ high-Q resonators and $5d^2-4d$ nonlinear couplers. Contrast this with the $d^2$ high-Q cavities and $2d^2-1$ nonlinear couplers that would be required to build a Hybrid-GKP surface code with GKP-encoded data qubits and discrete-qubit (Kerr-cat or transmon) ancillae. Let us now look at the operations required to implement such a Hybrid-GKP surface code. Recall that the controlled-displacement gate $C\hat{D}(\zeta)$ is required to measure the GKP stabilizers. The same gate can also be used to implement the controlled-Pauli gates between the regular qubit ancilla and GKP codewords required for surface code parity measurements. Consider, for example, a $\bar{Z}$-type check operator. It can be measured with a discrete ancilla qubit initialized in the $\ket{+}$ state, followed by $C\hat{D}(\zeta)$ gates between the GKP data modes and the ancilla with $\zeta=\beta$, and finally an $X$-basis measurement of the ancilla.
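The resource counts quoted above are summarized by two trivial helpers (a sketch; the formulas are those given in the text):

```python
def all_gkp_resources(d):
    """All-GKP surface code at distance d: (high-Q resonators, nonlinear couplers)."""
    return 3 * d**2 - 1, 5 * d**2 - 4 * d

def hybrid_gkp_resources(d):
    """Hybrid scheme (GKP data, discrete-qubit ancillae): (high-Q cavities, couplers)."""
    return d**2, 2 * d**2 - 1
```

At $d=5$, for example, the All-GKP layout uses $74$ resonators and $105$ couplers, against $25$ cavities and $49$ couplers for the hybrid scheme.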
This can also be understood as one phase-estimation round of the surface code stabilizer, and in analogy with~\cref{eq:phaseest_prob}, the probability to get an $X=\pm 1$ outcome is \begin{align} P(\pm)=\frac{1}{2}\left[1\pm\frac{1}{2}\left(\langle\bar{Z}\bar{Z}\bar{Z}\bar{Z}\rangle+\langle\bar{Z}^\dag\bar{Z}^\dag\bar{Z}^\dag\bar{Z}^\dag\rangle\right)\right]. \end{align} Clearly, for ideal GKP states [\cref{eq:gkpcodewords}], $P(+)=1,0$ and $P(-)=0,1$ when $\langle\bar{Z}\bar{Z}\bar{Z}\bar{Z}\rangle=\pm 1$, respectively, and hence this procedure can be used for $\bar{Z}$-parity checks. In the case of approximate GKP states, however, a single round of phase estimation cannot perfectly estimate $\langle\bar{Z}\bar{Z}\bar{Z}\bar{Z}\rangle$, which leads to measurement errors even when the ancilla is noiseless. For small $\Delta$ (see~\cref{sec:approxgkp}) this measurement error is $p_\mathrm{err}\sim (1-e^{-\pi\Delta^2})/2 \simeq \pi\Delta^2/2$~\cite{Terhal2020}. The $\bar{X}$-type parity checks are measured analogously using the $C\hat{D}(\zeta)$ gate with $\zeta=\alpha$. This measurement error can in principle be reduced using the same approach as discussed in~\cref{sec:phaseest_measurement} [see~\cref{fig:phaseest_measure}~(b)] to give $p_\mathrm{err}\sim 0.8\Delta^6$ (in the small-$\Delta$ limit)~\cite{hastrup2020improved,Royer2020}. Compared to the All-GKP scheme, where GKP-encoded ancillae are used, the discrete-qubit ancillae may lead to higher fidelity parity checks when a realistic homodyne measurement efficiency is taken into account. To illustrate, consider a GKP state with $\sim 14$ dB of squeezing ($\Delta=0.2$). In section~\ref{sec:measurements} we saw that with a measurement efficiency of $\eta=75\%$ (which is still optimistically high for cQED), the error in the direct homodyne measurement of the ancilla GKP state is $\sim 4\%$.
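Plugging in numbers for a $\Delta = 0.2$ state makes the comparison concrete (a sketch evaluating the small-$\Delta$ estimates quoted above; the $2\%$ readout error is a representative value, not a measured one):

```python
import math

DELTA = 0.2  # approximate GKP state with ~14 dB of squeezing

# Single round of phase estimation with a noiseless discrete ancilla
p_single = (1 - math.exp(-math.pi * DELTA**2)) / 2  # ~6e-2

# Small-Delta scaling of the improved adaptive phase-estimation scheme
p_improved = 0.8 * DELTA**6  # ~5e-5

# A representative transmon readout error dominates the total budget
p_readout = 0.02
p_total = p_improved + p_readout  # ~2e-2
```

The total error budget is dominated by the ancilla readout rather than by the phase-estimation error itself.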
On the other hand, with a discrete qubit ancilla and the modified phase estimation circuit of Ref.~\cite{hastrup2020improved}, we have $p_\mathrm{err}\sim 0.0025\%$. Of course, the ancilla readout itself is not perfect, and its errors must be added to $p_\mathrm{err}$ to estimate the total error in the surface code parity check. For an ancilla such as a transmon, a readout error probability $<2\%$ is standard, even with $\eta<75\%$~\cite{hatridge2013quantum,Touzard2019,walter2017rapid}. Thus, the total measurement infidelity with the discrete qubit ancilla ($\sim 0.0025\%+2\%\simeq2\%$) can still be lower than that with a GKP-encoded ancilla. The hardware simplicity of the Hybrid-GKP surface code architecture, which requires only $C\hat{D}(\zeta)$ gates and standard qubit operations, makes this approach very attractive. Nonetheless, a challenge with the hybrid-GKP approach is to prevent fatal propagation of errors from the standard qubit ancilla to the encoded GKP states. Indeed, due to this effect, the current performance of the controlled-displacement gate, the building block of the hybrid scheme, is limited by the relaxation time $T_1$ of the transmon ancilla~\cite{Campagne2020,Puri:2019aa}. This limitation indicates that it will not be possible to increase the lifetime of a GKP codeword much beyond the $T_1$ of the transmon. Fortunately, it is possible to overcome this challenge by replacing the transmon with a biased-noise ancilla such as a Kerr-cat, as we discussed in~\cref{sec:ftstateprep}. This promises a hardware-efficient solution to the problem of ancilla-induced errors and motivates further study to quantify the performance of the hybrid setup. Finally, it is not necessary that all the ancillae in the GKP-surface code are of the same kind, that is, all GKP-encoded or all discrete-qubit ancillae.
Another possibility is that both the data and syndrome qubits of $\mathcal C_\mathrm{surface}$ are GKP-encoded, while discrete qubit ancillae are used to perform the $\mathcal C_\mathrm{GKP}$ stabilizer measurements and phase estimation on the GKP-encoded syndrome qubits. Depending on the properties and performance of operations, we may have an optimized code where some ancillae are discrete qubits while the others are GKP-encoded. \subsection{Universality} So far, we have restricted the discussion to error correction in the $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ code, but have not discussed how universal computation may be performed in this concatenated architecture. An attractive feature of the surface code is that all logical Clifford operations can be implemented via lattice surgery, requiring only (single- or two-qubit) logical Pauli measurements~\cite{horsman2012surface,litinski2018lattice,bourassa2021blueprint}. In surface codes with regular qubits, these measurements require nearest-neighbour, physical controlled-Pauli gates between ancilla and data qubits~\cite{litinski2018lattice}. We have seen how to implement controlled-Pauli gates between two GKP codewords or between a GKP codeword and a discrete qubit [see~\cref{eq:Cliffords3}, Fig.~\ref{fig:gate1}, and Fig.~\ref{fig:CD_gate}]. Hence, we can also employ lattice surgery for implementing logical Clifford gates in the $\mathcal{C}_\mathrm{GKP}\triangleright \mathcal{C}_\mathrm{surface}$ code. Combined with state preparation of GKP magic states $\ket{A_L} = \bar T\ket{+_L}$, logical injection and distillation~\cite{bravyi2005universal,litinski2019game} (both of which only require error correction and Clifford gates) provide the ability to perform universal quantum computation in the GKP-concatenated surface code.
There are several ways to prepare GKP magic states, including using only GKP Pauli states and vacuum as a resource {by exploiting the continuous variable nature of the state space}~\cite{Baragiola:2019aa}, but the simplest approach is to use a one-bit teleportation circuit as shown in~\cref{fig:onebitteleport}, which allows us to teleport an arbitrary state from a two-level ancilla to the GKP code. \begin{figure} \caption{One-bit teleportation circuit to teleport an arbitrary state from a two-level ancilla (bottom) to an encoded GKP state (top). The $\hat C_X$ gate is implemented by a controlled displacement $C\hat D(\alpha)$. This can be used for preparation of arbitrary states, including magic states, assuming we have universal control over the ancilla.} \label{fig:onebitteleport} \end{figure} \section{Summary and Outlook\label{sec:summary}} Implementation of the GKP code was once considered by many to be all but impossible. As pointed out by Daniel Gottesman at the Byron Bay Quantum Workshop in 2020---a workshop dedicated to the 20th anniversary of the GKP code---the authors were aware that the main challenge was going to be the first step of realizing the codewords themselves, and that the subsequent steps of realizing gates, measurements, etc.\ would be comparatively simpler. {{Technological developments since 2001 have made error correction with the GKP code a reality, and this success has inspired more exotic strategies for error correction~\cite{albert2020robust,gross2020encoding}}}. Keeping current and near-future technology in mind, in this perspective article we have explored the prospect of scalable, fault-tolerant quantum error correction with GKP states in a cQED architecture. The most intriguing open question in this direction is whether error correction with GKP states can be made more resource efficient in practice than schemes based on conventional qubits.
Below we summarize some open theoretical and experimental challenges that must be addressed to answer this question. One must develop high-fidelity operations, including state preparation, multi-qubit gates, and measurement for GKP states. These fidelities must be better than those of conventional un-encoded qubits in the same platform. At the very least, the fidelity should not be limited by decoherence in the auxiliary discrete qubits used for initialization or in the couplers used for gates. We identify three central challenges in this respect that can serve as milestones on the path towards a scalable and hardware-efficient quantum computer with GKP-encoded qubits: 1. State preparation: One must be able to prepare approximate GKP codewords with a sufficiently small $\Delta$, and ensure that the probability of logical errors on the GKP code, e.g., errors propagating from the ancilla qubit used in the state preparation, is exceedingly low. Since the goal is to outperform the best physical qubit alternatives, specifically transmons and trapped ions, the probability of a logical error in state preparation must be low compared to error rates in these systems. In our view, a biased-noise ancilla, such as a Kerr-cat qubit, is promising in this respect, but further analysis is needed to quantify the quality of the GKP states that can be prepared with this approach. 2. Gates: To implement high-fidelity gates as discussed in~\cref{sec:gates} in practice, there are several targets that must be met simultaneously. One must be able to implement pristine two-mode Hamiltonians of the form $\hat H_{\theta,\phi} \propto e^{i\theta} \hat a \hat b^\dagger + e^{i\phi} \hat a \hat b + \text{H.c.}$, while keeping any spurious non-linear terms minimal; moreover, the two-mode interaction must be switchable from near zero to a value large enough that the gates are fast compared to all decoherence rates.
It is crucial that performing these gates does not introduce errors against which the GKP code performs poorly. In particular, while the GKP code is expected to be excellent against loss (and heating), this is not necessarily true for other natural types of noise, such as dephasing (cf.~\cref{fig:qecfidelity}) and spurious nonlinearities. 3. Measurements: As we have shown in~\cref{sec:measurements}, a standard homodyne measurement is unlikely to be of sufficiently high fidelity to give GKP codes an advantage. Here, new ideas are needed. Either the effective homodyne measurement efficiency has to be increased (probably past $90\%$) using an amplification step prior to release into a standard microwave measurement chain, or one can follow the route of performing phase estimation with a discrete qubit ancilla. It remains to be seen how low the GKP measurement error can be made in practice. Along with measurement and control, efforts must be devoted to developing technology for scaling up either a 3D cQED architecture or an architecture based on high-Q resonators on chip. There are also open theoretical questions about how to design such a large-scale architecture. We have discussed two different approaches at a high level: the All-GKP surface code and the Hybrid-GKP surface code. Several numerical studies have been performed on the All-GKP surface code, showing promising thresholds and sub-threshold behavior~\cite{Vuillot2018,Noh:2019aa,Terhal2020,noh2021low}. However, the noise models used in these studies are rather unrealistic, and more work is needed to model realistic noise accurately. For the Hybrid-GKP surface code, although we think it is quite promising in terms of hardware efficiency, very little quantitative analysis has been done, and its potential is largely unexplored at this stage.
Arguably, the most pertinent question here is whether one will ultimately be limited by the discrete qubit ancillae used to stabilize the GKP-encoded qubits and perform syndrome extraction, and consequently whether the Hybrid-GKP approach can have a significant advantage over a more conventional scheme using only discrete qubits everywhere. For example, if biased-noise qubits are used as ancillae, one should note that an approach based on using such qubits as both ancillae \emph{and} data qubits also appears very promising~\cite{guillaud_repetition_2019,chamberland2020building,Darmawan2021}. There are further avenues of research we have not touched on in this Perspective, but that nonetheless seem very promising. As an alternative to active GKP error correction, it may be advantageous to explore passive error-correction techniques where the GKP states are stabilized via Hamiltonian engineering~\cite{le2019doubly,rymarz2021hardware,Conrad2021}. One can also consider alternative concatenation schemes. For example, surface code variants tailored to the specific noise structure of the GKP codewords may be used. In particular, biased-noise-tailored codes such as the XZZX surface code~\cite{bonilla2020xzzx} or the tailored surface code~\cite{tuckett2018ultrahigh,tuckett2020fault,hanggli2020enhanced} are promising candidates for a rectangular-lattice or squeezed GKP code. More work is required to estimate the performance of such an architecture when realistic circuit-level noise is considered. Other topological codes might offer some advantages over the surface code when it comes to logical gates. For example, the 2D color code has a transversal set of single-qubit Clifford gates. As we have shown that single-qubit Clifford gates on GKP codes can be done in software, this implies that logical single-qubit Clifford gates at the color code level can be done in software as well.
This may lead to some overhead savings for lattice surgery~\cite{landahl2014quantum,litinski2018lattice}. In general, one should follow the design principle of tailoring the overall fault-tolerant scheme to exploit the strengths of the underlying elementary qubits, and further research in this direction is warranted. Finally, concatenation with conventional quantum error correcting codes is not the only path towards scalability. It is possible that there is a better scheme where $k$ logical qubits are encoded in $n$ physical modes more directly, i.e., without concatenation with a binary code. {Only a small number of works have explored this avenue so far~\cite{Gottesman01,Harrington01,Noh2019capacity}.} With challenges come opportunities, and with the accelerating pace of technological and theoretical developments, the future looks bright for practical quantum computation with GKP codes. \end{document}
\begin{document} \begin{frontmatter} \title{Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges} \runtitle{MAB Models for the Optimal Design of Clinical Trials} \begin{aug} \author[A]{\fnms{Sof\'ia S.}~\snm{Villar}\corref{}\ead[label=e1]{[email protected]}}, \author[A]{\fnms{Jack}~\snm{Bowden}\ead[label=e2]{[email protected]}} \and \author[A]{\fnms{James}~\snm{Wason}\ead[label=e3]{[email protected]}} \runauthor{S. S. Villar, J. Bowden and J. Wason} \affiliation{MRC Biostatistics Unit, Cambridge and Lancaster University} \address[A]{Sof\'ia S. Villar is Investigator Statistician at MRC BSU, Cambridge and Biometrika post-doctoral research fellow, Jack Bowden is Senior Investigator Statistician at MRC BSU, Cambridge and James Wason is Senior Investigator Statistician at MRC BSU, Cambridge, MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge CB2 0SR, United Kingdom \printead {e1,e2,e3}.} \end{aug} \begin{abstract} \emph{Multi-armed bandit} problems (MABPs) are a special type of optimal control problem well suited to model resource allocation under uncertainty in a wide variety of contexts. Since the first publication of the optimal solution of the \emph{classic} MABP by a dynamic index rule, the bandit literature quickly diversified and emerged as an active research topic. Across this literature, the use of bandit models to optimally design clinical trials became a typical motivating application, yet little of the resulting theory has ever been used in the actual design and analysis of clinical trials. To this end, we review two MABP decision-theoretic approaches to the optimal allocation of treatments in a clinical trial: the infinite-horizon Bayesian Bernoulli MABP and the finite-horizon variant. These models possess distinct theoretical properties and lead to separate allocation rules in a clinical trial design context. 
We evaluate their performance compared to other allocation rules, including fixed randomization. Our results indicate that bandit approaches offer significant advantages, in terms of assigning more patients to better treatments, but also severe limitations, in terms of their resulting statistical power. We propose a novel bandit-based patient allocation rule that overcomes the issue of low power, thus removing a potential barrier to their use in practice. \end{abstract} \begin{keyword} \kwd{Multi-armed bandit} \kwd{Gittins index} \kwd{Whittle index} \kwd{patient allocation} \kwd{response adaptive procedures} \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:intro} Randomized controlled trials have become the gold-standard approach in clinical research over the last 60 years. By fixing the probability of assignment to each arm for the trial's duration, randomization removes (asymptotically) any systematic differences between patients on different arms with respect to all known or unknown confounders. The frequentist operating characteristics of the standard approach (e.g., the type-I error rate and power) are well understood, and the size of the trial can easily be chosen in advance to fix these at any level the practitioner desires. However, while it is important for a clinical trial to be adequately powered to detect a significant difference at its conclusion, the well-being of patients during the study itself must not be forgotten. MABPs are an idealized mathematical decision framework for deciding how to optimally allocate a resource among a number of competing uses, given that such allocation is to be done \emph{sequentially} and under \emph{randomly evolving conditions}. In its simplest version, the resource is work, which can be devoted to only one use at a time. The uses are treated as independent ``projects'' with binary outcomes that evolve according to Markov rules.
Their roots can be traced back to work produced by \citet{thompson1933likelihood}, which was later continued and developed in \citet{robbins1952some}, \citet{bellman1956problem}, and finally \citet{gijo74}. Although their scope is much more general, the most common scenario chosen to motivate this methodology is that of a clinical trial which has the aim of balancing two separate goals: \begin{itemize} \item To correctly identify the best treatment (\emph{exploration or learning}). \item To treat patients as effectively as possible during the trial (\emph{exploitation or earning}). \end{itemize} One might think that these two goals are naturally complementary, but this is not the case. Correctly identifying the best treatment requires some patients to be assigned to all treatments, and therefore the former acts to limit the latter. Despite this apparent near-perfect fit between a real-world problem and a mathematical theory, the MABP has yet to be applied to an actual clinical trial. Such a state of affairs was pointed out early on by Peter Armitage in a paper reflecting upon the use in practice of theoretical models to derive optimal solutions for problems in clinical trials: \begin{quote} Either the theoreticians have got hold of the wrong problem, or the practising triallists have shown a culpable lack of awareness of relevant theoretical developments, or both. In any case, the situation does not reflect particularly well on the statistical community (\citeauthor{armitage1985search}, \citeyear{armitage1985search}, page 15). \end{quote} A very similar picture is described two decades later in \citet{palmer2002ethics} when discussing and advocating for the use of ``learn-as-you-go'' designs as a means of alleviating many problems faced by those involved with clinical trials today.
More recently, Don Berry---a leading proponent of the use of Bayesian methods to develop innovative adaptive clinical trials---also highlighted the resistance to the use of bandit theoretical results: \begin{quote} But if you want to actually use the result then people will attack your assumptions. Bandit problems are good examples. An explicit assumption is the goal to treat patients effectively, in the trial as well as out. That is controversial (\,\ldots) \citep{stangl2012celebrating}. \end{quote} In view of this, a broad goal of this article is to contribute to setting the ground for change by reviewing a concrete area of theoretical bandit results, in order to facilitate their application in practice. The layout of the paper is as follows: In Section~\ref{sec:2 mabp} we first recount the basic elements of the Bayesian Bernoulli MABP. In Section~\ref{classic MABP} we focus on the infinite-horizon case, presenting its solution in terms of an index rule---whose optimality was first proved by Gittins and Jones over 30 years ago. In Section~\ref{restless MABP} we review the finite-horizon variant by reformulating it as an equivalent infinite-horizon restless MABP, which further provides a means to compute the index rule for the original problem. In Section~\ref{suitability} we compare, via simulation, the performance of the MABP approaches to existing methods of response-adaptive allocation (including standard randomization) in several clinical trial settings. These results motivate the proposal of a composite method that combines bandit-based allocation for the experimental treatment arms with standard randomization for the control arm. We conclude in Section~\ref{Conclusion} with a discussion of the existing barriers to the implementation of bandit-based rules for the design of clinical trials and point to future research.
\section{The Bayesian Bernoulli Multi-armed Bandit Problem}\label{sec:2 mabp} The Bayesian Bernoulli $K$-armed bandit problem corresponds to a MABP in which only one arm can be worked on at each time $t$, and work on arm $k= 1, \dots, K$ represents drawing a sample observation from a Bernoulli population $Y_{k,t}$ with unknown parameter $p_k$, ``earning'' the observed value $y_{k,t}$ as a reward (i.e., either $0$ or $1$). In a clinical trial context, each arm represents a treatment with an unknown success rate. The Bayesian feature is introduced by letting each parameter $p_k$ have a Beta prior with parameters $s_{k,0}$ and $f_{k,0}$ such that $(s_{k,0},f_{k,0}) \in\mathbb N^2_{+}$ before the first sample observation is drawn (i.e., at $t=0$). After having observed $S_{k,t}=s_{k,t}$ successes and $F_{k,t}=f_{k,t}$ failures, with $(S_{k,t},F_{k,t}) \in\mathbb N^2_{0}$ for any $t\ge1$, the posterior density is a Beta distribution with parameters $(s_{k,0}+s_{k,t},f_{k,0}+f_{k,t})$. Formally, the Bayesian Bernoulli MABP is defined by letting each arm $k$ be a discrete-time Markov Control Process (MCP) with the following elements: \begin{longlist}[(b)] \item[(a)] The \emph{state space}: $\mathbb{X}_{k,t}= \{(s_{k,0}+S_{k,t},f_{k,0}+F_{k,t})\in\mathbb N_+^2: S_{k,t}+F_{k,t} \le t, \mbox{ for } t=0,1,\dots, T \}$, which represents all the possible two-dimensional vectors of information on the unknown parameter $p_k$ at time $t$. We denote the available information on treatment $k$ at time $t$ as $\mathbf{x}_{k,t}=(s_{k,0}+S_{k,t}, f_{k,0}+F_{k,t})$ and the initial prior as $\mathbf{x}_{k,0}=(s_{k,0},f_{k,0})$. In a clinical trial context, the random vector $(S_{k,t},F_{k,t})$ represents the numbers of successful and unsuccessful patient outcomes (e.g., response to treatment, remission of tumor, etc.).
\item[(b)] The \emph{action set} $\mathbb A_k$ is a binary set representing the action of drawing a sample observation from population $k$ at time $t$ ($a_{k,t}=1$) or not ($a_{k,t}=0$). In a clinical context, the action variable stands for the choice of assigning patient $t$ to treatment arm $k$ or not. \item[(c)] The Markovian \emph{transition law} $\mathcal{P}_k(\mathbf{x}_{k,t+1}|\mathbf{x}_{k,t}, a_k)$ describing the evolution of the information state variable in population $k$ from time $t$ to $t+1$ is given~by \begin{equation} \label{dynamics1} \mathbf{x}_{k,t+1} = \cases{ (s_{k,0}+s_{k,t}+1 , f_{k,0}+f_{k,t} ), \cr \quad \mbox{if } a_{k,t}=1 \cr\quad\mbox{w.p. } \displaystyle\frac{s_{k,0}+ s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}} , \cr (s_{k,0}+s_{k,t},f_{k,0}+f_{k,t}+1 ), \cr \quad\mbox{if } a_{k,t}=1 \cr \quad\mbox{w.p. }\displaystyle \frac{f_{k,0}+f_{k,t}}{s_{k,0}+f_{k,0}+ s_{k,t}+f_{k,t}} , \cr \mathbf{x}_{k,t}, \cr \quad\mbox{if } a_{k,t}=0\mbox{ w.p. }1, } \end{equation} for any $\mathbf{x}_{k,t} \in\mathbb{X}_{k,t}$ and where w.p. stands for ``with probability.'' \item[(d)] The expected rewards and resource consumption functions are \begin{eqnarray} \label{rewards1} \mathcal R(\mathbf{x}_{k,t}, a_{k,t})&=& \frac{s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}}a_{k,t} , \nonumber\\ \mathcal C(\mathbf{x}_{k,t}, a_{k,t}) &=& a_{k,t}, \end{eqnarray} for $t =0,1, \ldots, T-1$, where, in accordance with \eqref{dynamics1}, a reward (i.e., a treatment success) in arm $k$ arises only if that arm is worked on, with a probability given by the posterior predictive mean of $p_k$ at time $t$; resource consumption is restricted by the fact that at most one treatment can be allocated to each patient in the trial, that is, $\sum_{k=1}^{K}a_{k,t}\le1$ for all $t$.
\end{longlist} A rule is required to operate the resulting MCP, indicating which action to take for each of the $K$ arms, for every possible combination of information states and at every time $t$, until the final horizon $T$. Such a rule forms a sequence of actions $\{a_{k,t}\}$, which depends on the information available up to time $t$, that is, on $\{\mathbf{x}_{k,t}\}$, and is known as a \emph{policy} within the Markov Decision Processes literature. To complete the specification of this multi-armed bandit model as an \emph{optimal control model}, the problem's \emph{objective function} must be selected. Given an objective function and a time horizon, a multi-armed bandit optimal control problem is mathematically summarized as the problem of finding a feasible policy, $\pi$, in $\Pi$ (the set of all the feasible policies given the resource constraint) that optimizes the selected performance objective. The performance objective in the Bayesian Bernoulli MABP is to maximize the Expected Total Discounted (ETD) number of successes after $T$ observations, letting $0 \le d < 1$ be the discount factor.
Then, the corresponding bandit optimization problem is to find a discount-optimal policy such that \begin{eqnarray} \label{eq:dop_1} &&\quad V_D^*(\mathbf{\tilde{x}_0})\nonumber \\ &&\quad\quad= \max _{{\pi} \in{{\Pi}} } \mathsf{E} ^{{\pi}} \Biggl[\sum _{t = 0}^{T-1} \sum_{k=1}^K d^t \frac {s_{k,0}+S_{k,t}}{s_{k,0}+f_{k,0}+S_{k,t}+F_{k,t}} \\ &&\quad\hphantom{\quad= \max _{{\pi} \in{{\Pi}} } \mathsf{E} ^{{\pi}} \Biggl[\hspace*{62pt}}{}\cdot a_{k,t}\Big|\mathbf {\tilde{x}_0}= (\mathbf{x}_{k,0} )_{k=1}^{K} \Biggr],\nonumber \end{eqnarray} where $\tilde{\mathbf{x}}_0$ is the initial joint state, $\mathsf{E} ^{{\pi}}[\cdot]$ denotes expectation under policy ${\pi}$ and transition probability rule \eqref {dynamics1}, $V_D^*(\tilde{\mathbf{x}}_0)$ is the optimal expected total discounted value function conditional on the initial joint state being equal to $\mathbf{\tilde{x}_0}$ (for any possible joint initial state), and where, given the resource constraint, the family of admissible feasible policies $\Pi$ contains the sampling rules $\pi$ for which it holds that $\sum_{k=1}^{K}a_{k,t}\le1$ for all $t$. A generic MABP formally consists of $K$ discrete-time MCPs with their elements defined in more generality, that is, (a)~the \emph{state space}: a Borel space, (b)~the binary \emph{action set}, (c)~the \emph {Markovian transition law}: a stochastic kernel on the state space given each action and~(d) a \emph{reward} function and a \emph{work consumption} function: two measurable functions. As before, the MABP is to find a policy that optimizes a given performance criterion, for example, it maximizes the ETD net rewards. \citet{robbins1952some} proposed an alternative version of the Bayesian Bernoulli MABP problem by considering the average \emph{regret} after allocating $T$ sample observations [for a large $T$ and for any given and unknown $(p_k)_{k=1}^K$]. 
For the Bayesian Bernoulli MABP, the total regret $\rho$ is defined as \begin{eqnarray} \label{regret}&& \rho= T \max_{k} \{p_k\}- \mathsf{E}^{\pi} \Biggl[ \sum_{t=0}^{T-1} \sum_{k=1}^{K}a_{k,t} Y_{k,t} \Biggr] \nonumber\\[-10pt]\\[-10pt] &&\quad\mbox{for some } (p_k)_{k=1}^K,\nonumber \end{eqnarray} where $Y_{k,t}$ denotes the Bernoulli$(p_k)$ response of patient $t$ if allocated to arm $k$. A form of \emph{asymptotic optimality} can be defined for sampling rules $\pi$ in terms of \eqref{regret} if it holds that for any $(p_k)_{k=1}^K$, $\lim_{T \to\infty}\frac{\rho}{T}=0$. A necessary condition for a rule to attain this property is to sample each of the $K$ populations infinitely often, that is, to continue to sample from (possibly) suboptimal arms for every $t < \infty$. In other words, asymptotically optimal rules have a strictly positive probability of allocating a patient to every arm at any point of the trial. Of course, within the set of asymptotically optimal policies secondary criteria may be defined and considered (see, e.g., \citeauthor{lai1985asymptotically}, \citeyear{lai1985asymptotically}). As will be illustrated in Section~\ref{suitability}, objectives in terms of \eqref{eq:dop_1} or \eqref{regret} give rise to sampling rules with distinct statistical properties. Asymptotically optimal rules, that is, in terms of \eqref{regret}, maximize the \emph{learning} about the best treatment, provided it exists, while the rules that are optimal in terms of \eqref{eq:dop_1} maximize the mean number of total successes in the trial. \section{The Infinite-horizon Case: A Classic MABP} \label {classic MABP} We now review the solution giving the optimal policy for optimization problem (\ref{eq:dop_1}) in the infinite-horizon setting obtained by letting $T=\infty$. In general, as MABPs are a special class of MCPs, the traditional technique to address them is via a dynamic programming (DP) approach.
Thus, the solution to \eqref{eq:dop_1}, according to Bellman's principle of optimality \citep{bellman1952theory}, is such that for every $t=0, 1, \ldots$ the following DP equation holds: \begin{eqnarray} \label{eq:dp_1} &&V_D^*(\mathbf{x}_{1,t}, \dots, \mathbf{x}_{K,t})\nonumber \\ &&\quad =\max_{k } \biggl\{ \frac{s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}}\nonumber \\ &&\hphantom{=\max_{k } \biggl\{}\quad{}+ d \biggl( \frac {s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}}\nonumber \\[-10pt]\\[-10pt] &&\hphantom{=\max_{k } \biggl\{{}+ d \biggl(}\quad{}\cdot V_D^*(\mathbf {x}_{1,t}, \dots, \mathbf{x}_{k,t}+\mathbf{e}_1 , \dots, \mathbf{x}_{K,t}) \nonumber \\ &&\hphantom{=\max_{k } \biggl\{{}+ d \biggl(}\quad{} +\frac{f_{k,0}+f_{k,t}}{s_{k,0}+f_{k,0}+ s_{k,t}+f_{k,t}}\nonumber \\ &&\hphantom{=\max_{k } \biggl\{{}+ d \biggl(+\,}\quad{}\cdot V_D^*(\mathbf{x}_{1,t}, \dots, \mathbf{x}_{k,t}+\mathbf{e}_2 , \dots, \mathbf{x}_{K,t}) \biggr) \biggr\}, \nonumber \end{eqnarray} where $\mathbf{e}_1$, $\mathbf{e}_2$ denote the unit vectors $(1,0)$ and $(0,1)$, respectively. Under the assumptions defining the Bayesian Bernoulli MABP, the theory for discounted MCPs ensures the existence of an optimal solution to \eqref{eq:dp_1} and also the monotone convergence of the value functions $V_D^*(\tilde {\mathbf{x}}_{t})$. Therefore, equation \eqref{eq:dp_1} can be approximately solved iteratively using a backward induction algorithm. \begin{figure} \caption{The number of individual computations for an approximation to the optimal rule in a particular instance of the Bayesian Bernoulli MABP as a function of $T$ with $K=3$ and $d=0.9$ for the Brute force, DP and Gittins index approaches.} \label{fig:1} \end{figure} Unfortunately, as shown in Figure~\ref{fig:1}, such a DP technique suffers from a severe computational burden, which is particularly well illustrated in the \emph{classic} MABP where the size of the state space grows with the truncation horizon $T$.
To illustrate this fact, consider the case of $K$ treatments with an initial uniform prior distribution (i.e., $s_{k,0} = f_{k,0} = 1 \ \forall k$) and truncation horizon to initialize the algorithm equal to $T$. The total number of individual calculations [i.e., the number of successive evaluations of $ V_D^*(\mathbf{x}_{1,t}, \dots, \mathbf{x}_{K,t})$] required to find an approximate optimal solution by means of the DP algorithm equals $\frac{(T-1)!}{(2K)!(T-2K-1)!}$. The precision of such an approximation depends on $d$; for example, if $d\le0.9$, values accurate to four figures are obtained for $T \ge100$. Therefore, considering the problem with $K=3$ and $d=0.9$ (and hence $T\ge100$) makes the intractability of the problem's optimal policy evident. (For a more detailed discussion see the \hyperref[index computation]{Appendix}.) \subsection{The Gittins Index Theorem}\label{GItheorem} The computational cost of the DP algorithm to solve equation \eqref {eq:dp_1} is significantly smaller than the cost of a complete enumeration of the set of feasible policies $\Pi$ (i.e., the \emph{brute force} strategy), yet it is still not small enough to make the solution of the problem practical in most real-world scenarios with more than 2 treatment arms. For this reason the problem gained the reputation of being extremely hard to solve soon after being formulated for the first time, becoming a paradigmatic problem to describe the exploration versus exploitation dilemma characteristic of any \emph{data-based learning} process. Such a state of affairs explains why the solution first obtained by \citet{gijo74} constitutes such a landmark event in the bandit literature.
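The DP count above can be checked directly, noting that $\frac{(T-1)!}{(2K)!(T-2K-1)!}$ is the binomial coefficient $\binom{T-1}{2K}$; for comparison we also tabulate the per-arm cost of the index approach, $\frac{1}{2}(T-1)(T-2)$, reported below. A quick check (function names are ours):

```python
from math import comb

def dp_evaluations(T, K):
    """Value-function evaluations needed by the DP algorithm with uniform
    priors: (T-1)! / ((2K)! (T-2K-1)!) = C(T-1, 2K)."""
    return comb(T - 1, 2 * K)

def index_evaluations(T):
    """Evaluations needed to tabulate a Gittins index table:
    (T-1)(T-2)/2, independent of the number of arms K."""
    return (T - 1) * (T - 2) // 2
```

For $K=3$ and $T=100$, `dp_evaluations(100, 3)` exceeds $10^9$, while `index_evaluations(100)` is only 4851.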
The Index theorem states that if problem $P$ is an infinite-horizon MABP with each of its $K$ composing MCPs having (1)~a finite action set $\mathbb{A}_k$, (2) a finite or countably infinite state space $\mathbb{X}_{k}$, (3) a Markovian transition law under the passive action $a_{k,t}=0$ (i.e., the \emph{passive} dynamics) such that \begin{eqnarray} \label{classic dyn} \mathcal P_k\bigl(x_k'|x_k,0 \bigr)&=& \mathcal{P}_k\bigl\{X_{k,t+1}=x_k'| X_{k,t}=x_{k}, a_{k,t}=0\bigr\}\nonumber \\[-5pt]\\[-10pt] &=& 1 _{\{x_k'=x_k\}},\nonumber \end{eqnarray} for any $x_{k}, x_{k}'\in\mathbb{X}_k$, where $1_{\{x_k'=x_k\}}$ is an indicator of the event that the state at time $t+1$, $x_k'$, equals the state at time $t$, $x_{k}$, and (4) the set of feasible policies $\Pi$ contains all policies $\pi$ such that for all $t$ \begin{equation} \label{reso dyn}\sum_{k=1}^{K}a_{k,t} \le1, \end{equation} then there exists a real-valued index function $\mathcal G(x_{k,t})$, which recovers the optimal solution to such an MABP when the objective function is defined under an ETD criterion, as in \eqref{eq:dop_1}. Such a function is defined as follows: \begin{equation} \label{Gittins} \hspace*{10pt}\mathcal G_k(x_{k,t})= \sup _{\tau\ge1}\frac{\mathsf{E} _{X_{k,t}=x_{k,t}} \sum_{i=0}^{\tau-1} \mathcal R(X_{k,t+i},1) d^i }{\mathsf{E}_{X_{k,t}=x_{k,t}} \sum_{i=0}^{\tau-1} \mathcal C(X_{k,t+i},1) d^i },\hspace*{-20pt} \end{equation} where the expectation is computed with respect to the corresponding Markovian (\emph{active}) transition law $\mathcal{P}_k(x_k'|x_k,1)$, and $\tau$ is a stopping time. Specifically, the optimal policy $\pi^*$ for problem $P$ is to work on the bandit process with the highest index value, breaking ties randomly. Note that the stopping time $\tau$ is past-measurable, that is, it is based on the information available at each decision stage only.
Observe also that the index is defined as the ratio of the ETD reward up to $\tau$ active steps to the ETD cost up to $\tau$ active steps. MABPs whose dynamics are restricted as in \eqref{classic dyn} (namely, those in which passive projects remain frozen in their states) are referred to in the specialized literature as \emph{classic} MABPs and the name Gittins index is used for the function \eqref{Gittins}. The Index theorem's significant impact derives from the possibility of using such a result to break the curse of dimensionality by decomposing the optimal solution to a $K$-armed MABP in terms of its independent parts, which are remarkably more tractable than the original problem as shown in Figure~\ref{fig:1}. The number of individual calculations required to solve problem \eqref{eq:dp_1} using the Index theorem is of order $\frac{1}{2} (T-1) (T-2)$, which no longer explodes with the truncation horizon $T$. Further, it is completely independent of $K$, which means that a single index table suffices for all possible trials, therefore reducing the computing requirements appreciably. (For more details, see the \hyperref[index computation]{Appendix}.) Such computational savings are particularly well illustrated in the Bayesian Bernoulli MABP where the Gittins index \eqref{Gittins} is given by \begin{equation}\label{GittinsB} \mathcal G_k(\mathbf{x}_{k,t})= \sup_{\tau\ge 1} \frac{\mathsf{E}_{\bolds{\cdot}} \sum_{i=0}^{\tau-1}\frac{s_{k,0}+S_{k,t+i}}{s_{k,0}+ f_{k,0}+S_{k,t+i}+ F_{k,t+i}} d^i } {\mathsf{E}_{\bolds{\cdot}} \sum_{i=0}^{\tau-1}d^i },\hspace*{-30pt} \end{equation} where\vspace*{1pt} $ \mathsf{E}_{\bolds{\cdot}}= \mathsf{E}_{\mathbf{X}_{k,t}=(s_{k,0}+s_{k,t}, f_{k,0}+f_{k,t})}$. Calculations of the indices \eqref{GittinsB} have been reported in brief tables as in \citet{gi79} and \citet {robinson1982algorithms}. 
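Such tables can be reproduced with a short calibration routine. The sketch below (our own illustration; the function name, the truncation horizon of 200 and the tolerance are ours, chosen to keep the example fast) computes \eqref{GittinsB} through the standard ``fair charge'' characterization: the index is the per-pull charge $\lambda$ at which pulling the arm at least once, and thereafter stopping optimally, breaks even with immediate retirement.

```python
def gittins_index(s0, f0, d=0.99, horizon=200, tol=1e-4):
    """Approximate Gittins index for a Bernoulli arm with Beta(s0, f0)
    posterior, via bisection on the 'fair charge' lam.  The stopping-time
    search is truncated at `horizon` pulls (illustrative value)."""
    def advantage(lam):
        # Backward induction on the one-armed bandit with retirement value 0.
        nxt = {}                                  # values one pull deeper
        for depth in range(horizon - 1, -1, -1):
            cur = {}
            for i in range(depth + 1):
                s, f = s0 + i, f0 + depth - i     # counts after `depth` pulls
                p = s / (s + f)                   # posterior predictive mean
                cont = (p - lam
                        + d * (p * nxt.get((s + 1, f), 0.0)
                               + (1 - p) * nxt.get((s, f + 1), 0.0)))
                # the arm must be pulled at least once at the root (tau >= 1)
                cur[(s, f)] = cont if depth == 0 else max(0.0, cont)
            nxt = cur
        return nxt[(s0, f0)]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if advantage(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

With these illustrative settings, `gittins_index(1, 1)` falls close to the tabulated $0.8699$; the small gap is due to the shorter truncation horizon.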
Improvements to the efficiency of computing this index have since been proposed by \citet {katehakis1985multi} and \citet {katehakis1986computing}. Moreover, since the publication of Gittins' first proof of the optimality result of the index policy for a classic MABP in \citet{gijo74}, there have been alternative proofs, each offering complementary insights and interpretations. Among them, the proofs by \citet{whittle1980multi}, \citet{varaiya1985extensions}, \citet{weber1992gittins} and \citet {bertsimas1996conservation} stand out. \begin{table} \caption{The (approximate) Gittins index values for an information vector of $s_{0}+s_{t}$ successes and $f_0+f_t$ failures, where $d=0.99$ and $T$ is truncated at $T=750$} \label{tab:Gittinstable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline $\bolds{f/s}$& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\ \hline {1} & $0.8699$ & $0.9102$ & $0.9285$ & $0.9395$& $0.9470$ & $0.9525$ \\ {2} & $0.7005$ & $\underline{0.7844}$ & $0.8268$ & $0.8533$ & $0.8719$& $0.8857$ \\[1pt] {3} & $0.5671$ & $\underline{\underline{0.6726}}$ & $0.7308$ & $0.7696$ &$0.7973$ &$0.8184$\\[3pt] {4} & $0.4701$ &$0.5806$ & $0.6490$ & $\underline{0.6952}$ & $0.7295$ &$0.7561$ \\[1pt] {5} & $0.3969$ & $0.5093$ & $0.5798$ &$ 0.6311$ & $0.6697$ &$0.6998$ \\ {6} & $0.3415$ & $0.4509$& $0.5225$ &$0.5756$ & $0.6172$ & $\underline {\underline{0.6504}}$ \\ \hline \end{tabular*} \end{table} \begin{figure} \caption{The (approximate) Gittins index values for an information vector of $s_{0}+s_{t}$ successes and $f_0+f_t$ failures, where $d=0.99$ and $T$ is truncated at $T=750$} \label{fig:2} \end{figure} To elaborate a little more on the use of the Gittins index for solving a $K$-armed Bayesian Bernoulli MABP in a clinical trial context, we have included some values of the Gittins index in Table~\ref{tab:Gittinstable} and Figure~\ref{fig:2}.
These values correspond to a particular instance in which the initial prior for every arm is uniform, the discount factor is $d=0.99$, the index precision is 4 digits and we have truncated the search for the best stopping time to $T=750$. The choice of $d=0.99$ is a widely used value in the related bandit literature. In our example, since $0.99^{750}<10^{-3}$, patients treated after this time yield an almost zero expected discounted reward and are hence ignored. The Gittins index policy assigns a number to every treatment (from an extended version of Table~\ref{tab:Gittinstable}) based on the values of $s_{k,t}$ and $f_{k,t}$ observed, and then prioritizes sampling the one with the highest value. Thus, provided that we adjust for each treatment's prior, the same table can be used for making the allocation decisions for all treatments in a trial. Furthermore, the number of treatments need not be prespecified in advance and new treatments may be seamlessly introduced part way through the trial as well (see \citeauthor{whittle1981arm}, \citeyear{whittle1981arm}). To give a concrete example, suppose that all treatments start with a common uniform prior; then all initial states are equal to $\mathbf{x}_{k,0}=(1,1)$ with a corresponding Gittins index value of $0.8699$ for all of them. Yet, if a treatment $k$ has a beta prior with parameters $(1,2)$ and another treatment $k'$ has a prior with parameters $(2,1)$, their respective initial states are $\mathbf{x}_{k,0}=(1,2)$ and $\mathbf{x}_{k',0}=(2,1)$, and their associated index values are $0.7005$ and $0.9102$, respectively. The same reasoning applies for the case in which priors combine with data so as to have $\mathbf{x}_{k,1}=(1,2)$ and $\mathbf{x}_{k',1}=(2,1)$. The underlined values in Table~\ref{tab:Gittinstable} describe situations in which the \emph{learning} element plays a key role. Consider two treatments with the same posterior mean of success $2/4=4/8=1/2$.
According to the indices denoted by the single line, the treatment with the smallest number of observations is preferred: $0.7844>0.6952$. Moreover, consider the case in which the posterior means of success suggest the superiority of one over the other: $2/5=0.4 < 6/12 = 0.5$, yet their indices denoted by the double-underline suggest the opposite, $0.6726>0.6504$, again prioritizing the least observed population. \citet{gittins1992learning} define the \emph{learning} component of the index as the difference between the index value and the expected immediate reward, which for the general Bayesian Bernoulli MABP is given by $\frac{s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}}$. This posterior probability is the current belief that a treatment $k$ is successful and it can be used for making patient allocation decisions in a myopic way, that is, exploiting the available information without taking into account the possible future learning. Consider, for instance, the case where $\mathbf{x}_{k,0}=(1,1)$ for all $k$; the learning component before making any treatment allocation decision is then $(0.8699 -0.5)=0.3699$. As the number of observations of a bandit increases, the learning part of the indices decreases. \section{The Finite Horizon Case: A Restless MABP} \label {restless MABP} Of course, clinical trials are not run with infinite resources or patients. Rather, one usually attempts to recruit the minimum number of patients to achieve a pre-determined power. Thus, we now consider the optimization problem defined in \eqref{eq:dop_1} for a finite value of $T$. Indeed, a solution could in theory be obtained via DP, but it is impractical in large-scale scenarios for reasons already stated. Moreover, the Index theorem does not apply to this case; thus, the Gittins index function as defined for the infinite-horizon variant does not exist \citep{berry1985bandit}.
In the infinite-horizon problem, at any $t$ there is always an infinite number of possible sample observations to be drawn from any of the populations. This is no longer the case in a finite-horizon problem, and the \emph{value} of a sampling history $(s_{k,t}, f_{k,t})$ is not the same when the sampling process is about to start as when it is about to end. The finite-horizon problem analysis is thus more complex, because these transient effects must be considered for the characterization of the optimal policy. In what follows we summarize how to derive an index function analogous to Gittins' rule for the finite-horizon Bayesian Bernoulli MABP based on an equivalent reformulation of it as an infinite-horizon \emph{Restless} MABP, as was done in \citet {nino2005marginal}. In the equivalent model the information state is augmented, adding the number of remaining sample observations that can be drawn from the $K$ populations. Hence, the MCP has the following modified elements: \begin{longlist}[b] \item[(a)] An augmented \emph{state space} $\mathbb{\hat{X}}_k$ given by the union of the set $\mathbb{X}_{k,t} \times\mathbb{T}$, where $\mathbb{T}= \{0,1, \dots, T\}$, and an absorbing state $\{ E \}$, representing the end of the sampling process. Thus, $\hat{\mathbf{x}}_{k,t}=(\mathbf{x}_{k,t}, T-t)$ is a three-dimensional vector combining the information on the treatment (prior and observed) and the number of remaining patients to allocate until the end of the trial. \item[(b)] The same as in Section~\ref{sec:2 mabp}. \item[(c)] A \emph{transition law} $\mathcal P_k(\hat{\mathbf{x}}_{k,t+1}|\hat {\mathbf{x}}_{k,t},a_k)$ for every $\hat{\mathbf{x}}_{k,t} \mbox{ such that } 0\le t \le T-1 $: \begin{eqnarray} \label{dynamics2} &&\hat{\mathbf{x}}_{k,t+1} \hspace*{-24pt}\nonumber \\[-4pt]\\[-16pt] &&\quad= \cases{ \mbox{if }a_{k,t}=1: & \cr \quad\bigl(s_{k,0}+s_{k,t}+1,f_{k,0}+f_{k,t}, T-(t+1) \bigr), \cr \qquad \mbox{w.p.
} \displaystyle\frac{ s_{k,t}+s_{k,0}}{s_{k,t}+f_{k,t}+s_{k,0}+f_{k,0}} , \cr \quad\bigl(s_{k,0}+s_{k,t},f_{k,0}+f_{k,t}+1, T-(t+1) \bigr), \cr \qquad \mbox{w.p. } \displaystyle\frac{f_{k,t}+f_{k,0}}{s_{k,t}+f_{k,t}+s_{k,0}+f_{k,0}}, \cr \mbox{if } a_{k,t}=0 \quad \bigl(\mathbf{x}_{k,t}, T-(t+1) \bigr) ,\cr \qquad\mbox{w.p. } 1, }\hspace*{-24pt}\nonumber \end{eqnarray} $\hat{\mathbf{x}}_{k,T}$ and $E$, under both actions, lead to $E$ with probability one. \item[(d)] The one-period expected rewards and resource consumption functions are defined as in \eqref{rewards1} for $t =0,1, \dots, T-1$, while the states $E$ and $\hat{\mathbf{x}}_{k,T}$ both yield $0$ reward and work consumption. \end{longlist} The objective in the resulting bandit optimization problem is also to find a discount-optimal policy that maximizes the ETD rewards. \subsection{Restless MABPs and the Whittle Index}\label{restless} In this equivalent version the horizon is infinite (a~fiction introduced by forcing every arm of the MABP to remain in state $E$ after period $T$); nonetheless, the Index theorem does not apply to it because its dynamics do not fulfil condition \eqref{classic dyn}. The inclusion of the number of remaining observations to allocate as a state variable causes inactive arms to evolve regardless of the selected action, and this particular feature makes the augmented MABP \emph{restless}. In the seminal work by \citet{whittle1988rba}, this particular extension to the MABP dynamics was first proposed and the name \emph {restless} was introduced to refer to this class of problems. Whittle deployed a Lagrangian relaxation and decomposition approach to derive an index function, analogous to the one Gittins had proposed to solve the \emph{classic} case, which has become known as the Whittle index. One of the main implications of Whittle's work is the realization that the existence of such an index function is not guaranteed for every \emph{restless} MABP.
Moreover, even in those cases in which it exists, the index rule does not necessarily recover the optimal solution to the original MABP (as it does in the \emph{classic} case), and is thus a heuristic rule. Whittle further conjectured that the index policy for the restless variant enjoys a form of asymptotic optimality (in terms of the ETD rewards achieved), a property later established by \citet{weber1990index} under certain conditions. Typically, the resulting heuristic has been found to be nearly optimal in various models. \subsection{Indexability of Finite-Horizon Classic MABP}\label {fh index} In general, establishing the existence of an index function for a \emph {restless} MABP (i.e., showing its \emph{indexability}) and computing it is a tedious task. In some cases, the sufficient indexability conditions (SIC) introduced by \citet{nino2001restless} can be applied for both purposes. The restless bandit reformulation of finite-horizon \emph{classic} MABPs, as defined in Section~\ref{sec:2 mabp}, is always \emph{indexable}. Such a property can either be shown by means of the SIC approach or simply using the seminal result in \citet{bellman1956problem}, by which the monotonicity of the optimal policies can be ensured, allowing attention to be focused on a nested family of stopping times. Moreover, the fact that in this restless MABP reformulation the part of the augmented state that continues to evolve under $a_{k,t}=0$, that is, $T-t$, does so in exactly the same way as under $a_{k,t}=1$ allows computation of the Whittle index as a modified version of the Gittins index, in which the search for the optimal stopping time in \eqref{Gittins} is truncated to be less than or equal to the number of remaining observations to allocate (at each decision period) (see Proposition~3.1 in \citeauthor{nino2011computing}, \citeyear{nino2011computing}).
Hence, the Whittle index for the finite-horizon Bayesian Bernoulli MABP is \begin{eqnarray} \label{Whittle} &&\mathcal W_k(\hat{\mathbf{x}}_{k,t})= \sup _{1\le\tau\le T-t}\frac{\mathsf{E} _{\hat{\mathbf{X}}_{k,t}=\hat{\mathbf{x}}_{k,t}} \sum_{i=0}^{\tau-1} \mathcal R(\hat{\mathbf{X}}_{k,t+i},1) d^i }{\mathsf{E}_{\hat {\mathbf{X}}_{k,t}=\hat{\mathbf{x}}_{k,t}} \sum_{i=0}^{\tau -1}\mathcal C(\hat{\mathbf{X}}_{k,t+i},1) d^i }, \nonumber \\[-10pt]\\[-6pt] &&\hspace*{114pt}\quad\mbox{for } \hat { \mathbf{x}}_{k,t} \in\mathbb{\hat{\mathbf{X}}}_k \setminus \{E, \hat {\mathbf{x}}_{k,T} \},\hspace*{-114pt}\nonumber \end{eqnarray} where the expectation is computed with respect to the corresponding Markovian (\emph{active}) transition law $\mathcal{P}_k(\hat{\mathbf {x}}_{k,t+1}|\hat{\mathbf{x}}_{k,t},1)$ and $\tau$ is a stopping time. Table~\ref{tab:Whittletable}, Table~\ref{tab:Whittletable1} and Table~\ref{tab:Whittletable2} include some values of the Whittle indices for instances in which, as before, the initial prior is uniform for all the arms and the index precision is 4 digits, but the discount factor is $d=1$, the sampling horizon is set to be $T=180$, and the number of remaining observations is respectively allowed to be $T-t=80$, $T-t=40$ and $T-t=1$. Again, the Whittle index rule assigns a number from these tables to every treatment, based on the values of $s_{k,0}+s_{k,t}$ and $f_{k,0}+f_{k,t}$ and on the number of remaining periods $T-t$, and then prioritizes sampling the one with the highest value.
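A sketch of how \eqref{Whittle} can be computed (our own illustration; the function name and tolerance are ours): the same fair-charge calibration used for the Gittins index, but with $d=1$ and the stopping-time search truncated at the number of remaining allocations.

```python
def whittle_index(s, f, remaining, tol=1e-5):
    """Whittle index for a Bernoulli arm with posterior counts (s, f) when
    `remaining` = T - t patients are left; d = 1, stopping times <= remaining.
    Bisects on the charge lam at which pulling once and then stopping
    optimally breaks even with retiring immediately."""
    def advantage(lam):
        nxt = {}                                   # values one pull deeper
        for depth in range(remaining - 1, -1, -1):
            cur = {}
            for i in range(depth + 1):
                si, fi = s + i, f + depth - i      # counts after `depth` pulls
                p = si / (si + fi)                 # posterior predictive mean
                cont = (p - lam
                        + p * nxt.get((si + 1, fi), 0.0)
                        + (1 - p) * nxt.get((si, fi + 1), 0.0))
                # the arm must be pulled at least once at the root (tau >= 1)
                cur[(si, fi)] = cont if depth == 0 else max(0.0, cont)
            nxt = cur
        return nxt[(s, f)]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if advantage(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Here `whittle_index(1, 1, 1)` returns the posterior mean $0.5$, as in Table~\ref{tab:Whittletable2}, and `whittle_index(1, 1, 80)` should land near the tabulated $0.8558$.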
\begin{table} \caption{The Whittle index values for an information vector of $s_{0}+s_{t}$ successes and $f_0+f_t$ failures, $T-t=80$, $d=1$ and where the size of the trial is $T=180$} \label{tab:Whittletable} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline $\bolds{f/s}$& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\ \hline {1} & $0.8558$ & $0.9002$ & $0.9204$& $0.9326$ & $0.9409$ & $0.9471$ \\ {2} &$ 0.6803$&$ \underline{0.7689}$ & $0.8140$ & $0.8423$ &$0.8621$ & $0.8769$\\ {3} & $0.5463$ & $\underline{\underline{0.6552}}$ &$0.7158$ &$0.7565$ &$0.7855$ &$0.8077$\\[3pt] {4} & $0.4503$ & $0.5630$ & $0.6335$ & $\underline{0.6812}$ & $0.7167$ & $0.7444$ \\[1pt] {5} & $0.3786$ &$0.4923$ &$0.5642$ &$0.6169$ &$0.6565$ &$0.6876$\\ {6} & $0.3247$ & $0.4348 $& $0.5073$ & $0.6040$ & $0.6040$ &$ \underline {\underline{0.6380}}$\\ \hline \end{tabular*} \end{table} \begin{table}[b] \caption{The Whittle index at $T-t=40$} \label{tab:Whittletable1} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline $\bolds{f/s}$& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\ \hline {1} & $0.8107$ & $0.8698$ & $0.8969$ & $0.9132$ & $0.9244$ & $0.9326$ \\ {2} & $0.6199$ & $\underline{0.7239}$& $0.7778$ & $0.8120$& $0.8360$& $0.8539$ \\[1pt] {3} & $0.4877$ &$\underline{\underline{0.6067}}$ & $0.6753$ & $0.7214$ & $0.7546$ & $0.7802$\\[3pt] {4} & $0.3955$ & $0.5157$ & $0.5920$ & $\underline{0.6447}$ & $0.6837 $& $0.7147$ \\[1pt] {5} & $0.3297 $& $0.4476$ & $0.5231 $& $0.5802 $& $0.6233$ & $0.6573$ \\ {6} & $0.2805 $& $0.3929 $& $0.4690$ & $0.5254 $& $0.571 $& $\underline {\underline{0.6075}}$ \\ \hline \end{tabular*} \end{table} It follows from the above tables that the \emph{learning} element of this index decreases as $T-t$ decreases. 
In the limit, when $T-t=1$ the Whittle index is exactly the posterior mean of success (which corresponds to the \emph{myopic} allocation rule that results from using current belief as an index). Conversely, as $T-t \to\infty$, the Whittle index approaches the Gittins index. Hence, for a given information vector, the relative importance of exploring (or \emph {learning}) vs. exploiting (or being \emph{myopic}) varies significantly over time in a finite-horizon problem, as opposed to the infinite-horizon case, in which this balance depends solely on the sampling history. Notice that the computational cost of a single Whittle index table is, at most, the same as for a Gittins index one; however, solving a finite-horizon MABP using the Whittle rule has significantly higher computational cost than the infinite-horizon case, because the Whittle indices must be computed at every time point $t$. \begin{table} \caption{The Whittle index at $T-t=1$} \label{tab:Whittletable2} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline $\bolds{f/s}$& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} \\ \hline {1} &$ 0.5000$ & $0.6667$ & $0.7500$ & $0.8000$ & $0.8333$ & $0.8571$ \\ {2} & $0.3333$ & $\underline{0.5000}$ & $0.6000 $& $0.6667 $& $0.7143$ & $0.7500$ \\[1pt] {3} & $0.2500$ &$ \underline{\underline{0.4000}}$ & $0.5000 $& $0.5714$ & $0.6250$ & $0.6667$\\[3pt] {4} & $0.2000$ & $0.3333$ & $0.4286$ & $\underline{0.5000}$ & $0.5556$ & $0.6000$ \\[1pt] {5} &$ 0.1667$ & $0.2857$ & $0.3750$ & $0.4444 $& $0.5000$ & $0.5455$ \\ {6} & $0.1429$ & $0.2500$ & $0.3333$ & $0.4000$ & $0.4545$ & $\underline {\underline{0.5000}}$ \\ \hline \end{tabular*} \end{table} \begin{figure} \caption{The (approximate) Whittle index values for an information vector of $s_{0}+s_{t}$ successes and $f_0+f_t$ failures, at different values of $T-t$} \label{fig3} \end{figure} This evolution of the learning vs.
earning trade-off is depicted graphically in Figure~\ref{fig3} and causes the decisions in each of the highlighted situations of Table~\ref{tab:Gittinstable} to change over time when considered for a finite-horizon problem. In Table~\ref{tab:Whittletable} with $T-t=80$ both decisions coincide with the ones described for Table~\ref{tab:Gittinstable}, while in Table~\ref{tab:Whittletable1}, in which $T-t=40$, the decision for the second example has changed, and in Table~\ref{tab:Whittletable2}, in which $T-t=1$, the decisions in both cases are different. \section{Simulation Study}\label{suitability} In this section we evaluate the performance of a range of patient allocation rules in a clinical trial context, including the bandit-based solutions of Section~\ref{classic MABP} and Section~\ref{restless MABP}. We focus on the following: statistical power $(1-\beta)$; type-I error rate ($\alpha$); expected proportion of patients in the trial assigned to the best treatment ($p^*$); expected number of patient successes (ENS); and, for the two-arm case, bias in the maximum likelihood estimate of treatment effect associated with each decision rule. Specifically, we investigate the following patient allocation procedures: \begin{itemize} \item\emph{Fixed Randomized design (FR)}: uses an equal, fixed probability to allocate patients to each arm throughout the trial. \item\emph{Current Belief (CB)}: allocates each patient to the treatment with the highest mean posterior probability of success. \item\emph{Thompson Sampling (TS)}: randomizes each patient to a treatment $k$ with a probability that is proportional to the posterior probability that treatment $k$ is the best given the data. 
In the simulations we shall use the allocation probabilities defined as \begin{eqnarray} \label{thompsonsampling} \pi_{k,t}&=&\mathcal{P}(a_{k,t}=1| \mathbf{x}_{k,t})\nonumber \\[-10pt]\\[-10pt] &=&\frac{\mathcal {P}(\max_{i} p_i=p_k| \mathbf{x}_{k,t})^c}{\sum_{k=1}^K\mathcal{P}(\max_{i} p_i=p_k| \mathbf{x}_{k,t})^c},\nonumber \end{eqnarray} where $c$ is a tuning parameter defined as $\frac{t}{2T}$, and $t$ and $T$ are the current and maximum sample size respectively. See, for example, \citet{thall2007practical}. \item\emph{Gittins index (GI)} and \emph{Whittle index (WI)}: respectively use the corresponding index functions defined by formulae \eqref{GittinsB} and \eqref{Whittle}. \item\emph{Upper Confidence Bound index (UCB)}: developed by \citet {auer2002finite}, takes into account not only the posterior mean but also its variability by allocating the next patient to the treatment with the highest value of an index, calculated as follows: $\frac {s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}} + \sqrt{\frac{2 \log {t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}} }$. \end{itemize} \subsubsection*{Semi-Randomized (Asymptotically Optimal) Bandit Approaches} In addition, we consider a randomized class of index-based bandit patient allocation procedures based on a simple modification first suggested in \citet{bather1981randomized}. The key idea is to add small perturbations to the index value corresponding to the observed data at each stage, obtaining a new set of indices in which the (deterministic) index-based part captures the importance of the \emph {exploitation} based on the accumulated information and the (random) perturbation part captures the \emph{learning} element. 
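Before turning to these randomized variants, the TS probabilities \eqref{thompsonsampling} and the UCB index listed above can be sketched as follows (a minimal illustration; the function names, the Monte Carlo estimation of the probability of being best and the default draw count are our choices):

```python
import math
import random

def ts_probabilities(states, t, T, ndraw=5000, rng=random):
    """Tuned Thompson sampling: P(arm k is best | data)^c, normalized,
    with c = t / (2T).  `states` lists the Beta parameters (s0+s, f0+f)
    of each arm; P(best) is estimated by Monte Carlo posterior draws."""
    wins = [0] * len(states)
    for _ in range(ndraw):
        draws = [rng.betavariate(a, b) for (a, b) in states]
        wins[draws.index(max(draws))] += 1
    c = t / (2 * T)
    weights = [(w / ndraw) ** c for w in wins]
    return [w / sum(weights) for w in weights]

def ucb_index(s, f, t):
    """UCB rule of Auer et al.: posterior mean plus an exploration bonus
    that shrinks as the arm's observation count s + f grows."""
    return s / (s + f) + math.sqrt(2 * math.log(t) / (s + f))
```

Under TS a patient is randomized according to the returned probabilities, whereas under UCB the patient is deterministically assigned to the arm with the largest index.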
Formally, these rules are defined as follows: \begin{eqnarray} \label{randomisedrules} &&I(s_{k,0}+s_{k,t},f_{k,0}+f_{k,t}) \nonumber \\[-10pt]\\[-10pt] &&\quad{}+ Z_t\cdot\lambda (s_{k,0}+s_{k,t}+f_{k,t}+f_{k,0}),\nonumber \end{eqnarray} where $I(s_{k,0}+s_{k,t},f_{k,0}+f_{k,t})$ is the index value associated with the prior and observed data on arm $k$ by time $t$, $Z_t$ is an i.i.d. positive and unbounded random variable, and $\lambda(s_{k,0}+s_{k,t}+f_{k,t}+f_{k,0})$ is a sequence of strictly positive constants tending to $0$ as $s_{k,0}+s_{k,t}+f_{k,t}+f_{k,0}$ tends to $\infty$. The interest in this class of rules is due to their asymptotic optimality, that is, the property in terms of \eqref{regret} discussed in Section~\ref{sec:2 mabp}; we are specifically interested in assessing how their performance compares to the index rules that are optimal (or nearly optimal) in terms of the ETD objective \eqref{eq:dop_1}. Notice that rules defined by \eqref{randomisedrules} have a decreasing, though strictly positive, probability of allocating patients to every arm at any point of the trial. In other words, rules \eqref{randomisedrules} are such that most of the patients are allocated sequentially to the current best arm (according to the criteria given by the index value), while some patients are allocated to each of the other treatment arms. For the simulations included in this paper we let $Z_t(K)$ be an exponential random variable with parameter $\frac{1}{K}$; $\lambda (s_{k,0}+s_{k,t}+f_{k,t}+f_{k,0}) = \frac {K}{s_{k,0}+s_{k,t}+f_{k,t}+f_{k,0}}$ and define two additional approaches: \begin{itemize} \item \emph{Randomized Belief index (RBI) design}: makes the sampling decisions between the populations based on an index computed setting $I(s_{k,0}+s_{k,t},f_{k,t}+f_{k,0})=\frac {s_{k,0}+s_{k,t}}{s_{k,0}+f_{k,0}+s_{k,t}+f_{k,t}}$ in \eqref{randomisedrules}.
\item \emph{Randomized Gittins index (RGI) design}: first suggested in \citet{glazebrook1980randomized}, makes the sampling decisions between the populations based on the index computed by setting $I(s_{k,0}+s_{k,t},f_{k,t}+f_{k,0})=\mathcal G(s_{k,0}+s_{k,t}, f_{k,t}+f_{k,0})$ in \eqref{randomisedrules}. \end{itemize} For every design, ties are broken at random, and in every simulated scenario we let $\mathbf{x}_{k,0}=( s_{k,0},f_{k,0})=(1,1)$ for all $k$. \subsubsection*{Design Scenarios} We implement all of the above methods in several $K$-arm trial design settings. In each case, trials are made up of $K-1$ experimental treatments and one control treatment. The control group (and its associated quantities) is always denoted by the subscript $0$ and the experimental treatment groups by $1,\ldots,K-1$. We first consider the case $K=2$. To compare the two treatments, we consider the following hypothesis: $H_0: p_0 \ge p_1$, with the type-I error rate calculated at $p_0=p_1=0.3$ and the power to reject $H_0$ calculated at $H_{1}: p_0 =0.3 ; p_1=0.5$. We set the size of the trial to be $T=148$ to ensure that FR will attain at least $80\%$ power when rejecting $H_0$ with a one-sided $5\%$ type-I error rate. We then evaluate the performance of these designs by simulating $10^4$ repetitions of the trials under each hypothesis and comparing the resulting operating characteristics of the trials. Hypothesis testing is performed using a normal cutoff value (when appropriate) and using an adjusted Fisher's exact test for comparing two binomial distributions, where the adjustment chooses the cutoff value to achieve a 5\% type-I error. \begin{table*} \tabcolsep=0pt \caption{Comparison of different two-arm trial designs of size $T= 148$.
$F_a$: Fisher's adjusted test; $\alpha$: type-I error; $1-\beta$: power; $p^{*}$: expected proportion of patients in the trial assigned to the best treatment; ENS: expected number of patient successes; UB: upper bound} \label{tab:simulations1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccc@{}} \hline & \multirow{2}{25pt}{\textbf{Crit.} \textbf{value}} & \multicolumn{3}{c}{$\bolds{H_0: p_0=p_1=0.3}$} & \multicolumn{3}{c}{$\bolds{H_1: p_0=0.3, p_1=0.5}$} \\ \ccline{3-5,6-8} & & \multicolumn{1}{c}{$\bolds{\alpha}$} &\multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}} & \textbf{ENS} \textbf{(s.e.)} & \multicolumn{1}{c}{ $\bolds{1-\beta}$} & \multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}}& \textbf{ENS (s.e.)} \\ \hline {FR} & 1.645 & 0.052 & 0.500 (0.04) & 44.34 (5.62) & 0.809 & 0.501 (0.04) & 59.17 (6.03)\\ {TS} & 1.645 & 0.066 & 0.499 (0.10) & 44.39 (5.58) & 0.795 & 0.685 (0.09) & 64.85 (6.62) \\ {UCB} &1.645 & 0.062 & 0.499 (0.10) & 44.30 (5.60) & 0.799 & 0.721 (0.07) & 66.03 (6.57) \\ {RBI} &1.645 & 0.067 & 0.502 (0.14) & 44.40 (5.57) & 0.763 & 0.737 (0.07) & 66.43 (6.54) \\ {RGI} & 1.645 & 0.063 & 0.500 (0.11) & 44.40 (5.61) & 0.785 & 0.705 (0.07) & 65.46 (6.40) \\% {CB} &\emph{$F_a$} & 0.046 & 0.528 (0.44) & 44.34 (5.55) & 0.228 & 0.782 (0.35) & 67.75 (12.0) \\ {WI} &\emph{$F_a$} & 0.048 & 0.499 (0.35) & 44.37 (5.59) & 0.282 & 0.878 (0.18) & 70.73 (8.16) \\ {GI} &\emph{$F_a$} & 0.053 & 0.501 (0.26) & 44.41 (5.58) & 0.364 & 0.862 (0.11) & 70.21 (7.11) \\[6pt] {UB} & & & & 44.40 (0.00) && 1 & 74.00 (0.00) \\ \hline \end{tabular*} \end{table*} For the $K$-arm design settings we shall consider the following hypothesis: $H_0: p_0\ge p_i$ for $i=1, \dots,K-1$ with the family-wise error rate calculated at $p_0=p_1=\cdots=p_{K-1}=0.3$. 
We use the Bonferroni correction method to account for multiple testing and therefore ensure that the family-wise error rate is less than or equal to 5\%, that is, all hypotheses whose $p$-values $p_{k}$ are such that $p_{k}<\frac{\alpha}{K-1}$ are rejected. Additionally, when there are multiple experimental treatments, we shall define the statistical power as the probability of the trial ending with the conclusion that a truly effective treatment is effective. \subsection{Two-Arm Trial Setting Simulations} Table~\ref{tab:simulations1} shows the results for $K=2$ under both hypotheses and for each proposed allocation rule. The randomized and semi-randomized response-adaptive procedures (i.e., TS, UCB, RBI and RGI) exhibit slightly lower power than a FR design; however, they have an advantage in terms of ENS over a FR design. On the other hand, the three deterministic index-based approaches (i.e., CB, WI and GI) have the best performance in terms of ENS, yet result in power values which are far below the required values. In the most extreme case, for the CB and WI rules, the power is approximately 3.5 times smaller than with a FR design. Adaptive rules have their power reduced because they induce correlation among treatment assignments; however, for the deterministic index policies this effect is most severe because they permanently skew treatment allocation toward a treatment as soon as one exhibits a certain advantage over the other arms. To illustrate the above point, let $n_0$ and $n_1$ be the numbers of patients allocated to treatments 0 and 1, respectively; then for the results in Table~\ref{tab:simulations1} it holds that $E^{\mathrm{CB}}(n_0)= 31.60$, $E^{\mathrm{CB}}(n_1)= 116.40$, $E^{\mathrm{WI}}(n_0)=16.49$, $E^{\mathrm{WI}}(n_1)=131.51$ and $E^{\mathrm{GI}}(n_0)= 19.06$, $E^{\mathrm{GI}}(n_1)= 128.94$.
Moreover, this implies that the required ``superiority'' need not be a statistically significant difference of the size specified in the alternative hypothesis, as suggested by the following values: $E^{\mathrm{CB}}_k(\mathbf{s}/\mathbf{n})= [0.1437 ; 0.4208]$, $V^{\mathrm{CB}}_k(\mathbf{s}/\mathbf{n})= [0.1528 ; 0.1831]$, $E^{\mathrm{WI}}_k(\mathbf{s}/\mathbf{n})= [0.1976 ;\break 0.4860]$, $V^{\mathrm{WI}}_k(\mathbf{s}/\mathbf{n})= [0.1470 ; 0.08875]$, $E^{\mathrm{GI}}_k(\mathbf{s}/\mathbf{n})= [0.2283 ; 0.4959]$ and $V^{\mathrm{GI}}_k(\mathbf{s}/\mathbf{n})= [ 0.1271 ; 0.0538]$. The results in Table~\ref{tab:simulations1} illustrate the natural tension between the two opposing goals of maximizing the statistical power to detect significant treatment effects (using FR) and maximizing the health of the patients in the trial (using GI). The optimality property inherent in the GI design produces an average gain in successfully treated patients of 11 (an improvement of $18.62\%$ over the FR design). This is only 4 fewer patients on average than the theoretical upper bound (calculated as $T\times p_{1} = 74$) achievable if all patients were assigned to the best treatment from the start. It is worth noting that the asymptotically optimal index approaches [w.r.t. \eqref{regret}] improve on the statistical power of the index designs (around 76\%--78\% for a 5\% type-I error rate) at the expense of attaining a lower value of ENS (around 5 fewer successes on average compared to the bandit-based rules). Yet, these rules significantly improve on the value of ENS attained by a FR design, naturally striking a better balance in the patient health/power trade-off.
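Operating characteristics such as ENS and $p^*$ are estimated by repeating simulated trials. A simplified sketch of one such run (our illustration, assuming the Beta$(1,1)$ priors used in the paper, with the current-belief rule as the example allocation) is:

```python
import numpy as np

def simulate_trial(p_true, choose_arm, T, rng=None):
    """One simulated trial: returns (number of successes, fraction of
    patients allocated to the truly best arm), i.e., one draw of the
    quantities behind ENS and p*."""
    rng = np.random.default_rng(rng)
    K = len(p_true)
    s = np.ones(K)          # Beta(1, 1) prior successes
    f = np.ones(K)          # Beta(1, 1) prior failures
    n = np.zeros(K)         # patients actually allocated per arm
    successes = 0
    for _ in range(T):
        k = choose_arm(s, f)
        outcome = rng.random() < p_true[k]
        s[k] += outcome
        f[k] += 1 - outcome
        n[k] += 1
        successes += outcome
    return successes, n[int(np.argmax(p_true))] / T

# Current-belief (CB) rule: allocate to the highest posterior mean.
cb_rule = lambda s, f: int(np.argmax(s / (s + f)))
```

Averaging the two returned quantities over many repetitions (and applying the hypothesis test to each run's final counts) yields the ENS, $p^*$, type-I error and power figures reported in the tables.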
\begin{figure*} \caption{Top: The bias in the control treatment estimate as a function of the number of allocated patients under $H_{1}$} \label{fig1} \end{figure*} From Table~\ref{tab:simulations1} one can see that the three index-based rules significantly improve on the average number of successes in the trial by increasing the allocation toward the superior treatment based on the observed data. This acts to reduce the power to detect a significant treatment effect. Another factor at play is bias: index-based rules\vadjust{\goodbreak} induce a negative bias in the treatment effect estimates of each arm; the magnitude of this bias is largest for inferior treatments (to which fewer patients are assigned than to superior treatments). When the control is inferior to the experimental treatment, this induces a positive bias in the estimated benefit of the experimental treatment over the control. This is shown in Figure~\ref{fig1}. A heuristic explanation for this is as follows. The index-based rules select a ``superior'' treatment before the trial is over based on the accumulated data. This implies that if a treatment performs worse than its true average, that is, worse for a certain number of consecutive patients, then the treatment will not be assigned further patients. The treatment's estimate then has no chance to regress up toward the true value. Conversely, if a treatment performs better than its true average, the index-based rules all assign further patients to receive it, and its estimate then has the scope to regress down toward its true value. This negative bias of the unselected arms is observed for all dynamic allocation rules, and is the most extreme for the CB method. \begin{table*} \tabcolsep=0pt \caption{Comparison of different four-arm trial designs of size $T= 423$.
$F_a$: Fisher's adjusted test; $\alpha$: family-wise type-I error; $1-\beta$: power; $p^{*}$: expected proportion of patients in the trial assigned to the best treatment; ENS: expected number of patient successes; UB: upper bound} \label{tab:simulations2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccc@{}} \hline & \multirow{2}{25pt}{\textbf{Crit.} \textbf{value}} & \multicolumn{3}{c}{$\bolds{H_0: p_0=p_i=0.3}$ \textbf{for} $\bolds{i=1,\dots,3}$} & \multicolumn{3}{c}{$\bolds{H_1: p_0=p_i=0.3}$\textbf{,} $\bolds{i=1,2}$\textbf{,} $\bolds{p_3=0.5}$} \\ \ccline{3-5,6-8} & & \multicolumn{1}{c}{$\bolds{\alpha}$} &\multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}} & \textbf{ENS} \textbf{(s.e.)} & \multicolumn{1}{c}{ $\bolds{(1-\beta)}$} & \multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}}& \textbf{ENS (s.e.)} \\ \hline {FR} & \emph{2.128} & 0.047 & 0.250 (0.02) & 126.86 (9.41) & 0.814 & 0.250 (0.02) & 148.03 (9.77) \\ {TS} &\emph{2.128} & 0.056 & 0.251 (0.07) & 126.93 (9.47) & 0.884 & 0.529 (0.09) & 172.15 (13.0) \\ {UCB} &\emph{2.128} &0.055 & 0.251 (0.06) & 126.97 (9.41) & 0.877 & 0.526 (0.07) & 171.70 (11.9) \\ {RBI} &\emph{2.128} & 0.049 & 0.250 (0.03) & 126.77 (9.40) & 0.846 & 0.368 (0.04) & 158.34 (10.4) \\ {RGI} & \emph{2.128} & 0.046 & 0.250 (0.03) & 126.80 (9.36) & 0.847 & 0.358 (0.03) & 157.26 (10.3)\\ {CB} &\emph{$F_a$} & 0.047 & 0.269 (0.39) & 126.89 (9.61) & 0.213 & 0.677 (0.41) & 184.87 (36.8) \\ {GI} &\emph{$F_a$} & 0.048 & 0.248 (0.18) & 126.68 (9.40) & 0.428 & 0.831 (0.10) & 198.25 (13.7)\\ {CG} & \emph{2.128}& 0.034 & 0.250 (0.02) & 127.16 (9.46) & 0.925 & 0.640 (0.08) & 182.10 (12.3) \\[6pt] UB & & && 126.90 (0.00) & &1 & 211.50 (0.00) \\ \hline \end{tabular*} \end{table*} The final observation refers to the fact that although all the index-based rules fail to achieve the required level of power to detect the true superior treatment, they tend to correctly skew patient allocation toward the best treatment within the trial, when it exists. 
For the simulation reported in Table~\ref{tab:simulations1} we have computed the probability that each rule makes the wrong choice (i.e., stops allocating patients to the experimental treatment). These values are as follows: $0.1730$, $0.0307$, $0.0035$ for the CB, WI and GI methods respectively. \subsection{Multi-Arm Trial Setting} \label{1arm} We now present results for a $K=4$ setting. First, we consider the case of a trial with $T=423$ patients. As before, we set the size of the trial to ensure that a \emph{FR} design results in at least $80\%$ power to detect an effective treatment for a family-wise error rate of less than $5\%$. Results for this case are depicted in Table~\ref{tab:simulations2}. The Whittle index approach is omitted because for $T$ roughly larger than 150 its performance is nearly identical to that attained by the Gittins index but with a significantly higher computational cost. In this setting, the randomized and semi-\break randomized adaptive rules (i.e., TS, UCB, RBI, RGI) exhibit an advantage over FR both in the achieved power and in ENS. The reason is that these rules continue to allocate patients to all arms while they skew allocation to the best performing arm, hence ensuring that by the end of the design the control arm will have a similar number of observations to that under FR, while the best arm will have a larger number. Among these rules, TS and UCB exhibit the best power--ENS balance, achieving the $80\%$ power while increasing ENS by approximately 23 over a FR design. The deterministic index-based rules CB and GI increase this advantage in ENS over a FR design to roughly 36 and 50, respectively. However, a severe reduction is again observed in the power values of these designs. On the other hand, the probability that each of these rules makes a wrong choice (i.e., it does not skew the allocation toward the best experimental treatment) is $0.2691$ and $0.0051$, respectively, for the CB and GI.
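For concreteness, the Thompson sampling probabilities \eqref{thompsonsampling} driving the TS rule can be approximated by Monte Carlo sampling from each arm's Beta posterior; the following is an illustrative sketch (the function name is ours):

```python
import numpy as np

def ts_allocation_probs(s, f, t, T, n_draws=10_000, rng=None):
    """Approximate the tuned TS probabilities pi_{k,t}.

    s, f: prior-plus-observed successes and failures per arm, so that
    arm k's posterior is Beta(s[k], f[k]); the exponent c = t/(2T)
    flattens the allocation early in the trial and sharpens it later.
    """
    rng = np.random.default_rng(rng)
    s, f = np.asarray(s, float), np.asarray(f, float)
    draws = rng.beta(s, f, size=(n_draws, len(s)))
    # Monte Carlo estimate of P(max_i p_i = p_k | data).
    p_best = np.bincount(draws.argmax(axis=1), minlength=len(s)) / n_draws
    w = p_best ** (t / (2.0 * T))
    return w / w.sum()
```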
\subsection{The Controlled Gittins Index Approach} \label{cgittins} To overcome the severe loss of statistical power of the Gittins index, we introduce, for the multi-arm trial setting only, a composite design in which one in every $K$ patients is allocated to the control group, while the allocation of the remaining patients among the experimental treatments is done using the Gittins index rule. We refer to this design as the \emph{controlled Gittins} (CG) approach. Based on the simulation results, CG manages to solve the trade-off quite successfully, in the sense that it achieves more than $80\%$ power while attaining a mean number of successes very close to that of the CB rule, with a third of the variability that CB exhibits in the expected number of patient successes. \begin{table*} \tabcolsep=0pt \caption{Comparison of different four-arm trial designs of size $T= 80$. F: Fisher; $\alpha$: type-I error; $1-\beta$: power; $p^{*}$: expected proportion of patients in the trial assigned to the best treatment; ENS: expected number of patient successes; UB: upper bound} \label{tab:simulations3} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccccc@{}} \hline & \multirow{2}{25pt}{\textbf{Crit.} \textbf{value}} & \multicolumn{3}{c}{$\bolds{H_0: p_0=p_i=0.3}$ \textbf{for} $\bolds{i=1, \dots,3}$} & \multicolumn{3}{c}{$\bolds{H_1: p_k=0.3+0.1\times k}$\textbf{,} $ \bolds{k=0,1,2,3}$} \\ \ccline{3-5,6-8} & & \multicolumn{1}{c}{$\bolds{\alpha}$} &\multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}} & \textbf{ENS} \textbf{(s.e.)} & \multicolumn{1}{c}{ $\bolds{(1-\beta)}$} & \multicolumn{1}{c}{$\bolds{p^*}$ \textbf{(s.e.)}}& \textbf{ENS (s.e.)} \\ \hline {FR} & \emph{F} & 0.019 & 0.251 (0.04) & 24.01 (4.07) & 0.300 & 0.250 (0.04)& 35.99 (4.41) \\ {TS} & \emph{F} & 0.013 & 0.250 (0.07) & 24.01 (4.15) & 0.246 & 0.338 (0.08) & 38.34 (4.68) \\ {UCB} & \emph{F} & 0.011 & 0.252 (0.06) & 24.00 (4.12)
& 0.218 & 0.362 (0.08) & 38.84 (4.71) \\ {RBI} & \emph{F} & 0.018 & 0.250 (0.03) & 23.97 (4.06) & 0.295 & 0.268 (0.03) & 36.52 (4.41) \\ {RGI} & \emph{F} & 0.017 & 0.250 (0.02) & 24.07 (4.07) & 0.298 & 0.265 (0.03) & 36.45 (4.36) \\ {CB} &$F_a$ & 0.017 & 0.270 (0.30) & 23.98 (4.08) & 0.056 & 0.419 (0.38) & 40.92 (6.89) \\ {WI} &$F_a$ & 0.015 & 0.258 (0.22) & 23.00 (4.14) &0.101 & 0.537 (0.31) & 42.65 (6.02) \\ {GI} &$F_a$ & 0.000 & 0.251 (0.13) & 23.97 (4.11) & 0.002 & 0.492 (0.21) & 41.60 (5.44) \\ {CG} & $F_a$ & 0.015 & 0.253 (0.13) & 24.04 (4.13) & 0.349 & 0.393 (0.16) & 38.29 (4.82) \\[6pt] UB & && & 24.00 (0.00) & & 1 & 48.00 (0.00) \\ \hline \end{tabular*} \end{table*} \subsection{Multi-Arm Trial in a Rare Disease Setting} \label{rare} Finally, we imagine a rare disease setting, where the number of patients in the trial is a high proportion of all patients with the condition, but is not enough to guarantee reasonable power to detect a treatment effect of a meaningful size. In such a context, the idea of prioritizing patient benefit over hypothesis testing is likely to raise less controversy than in a common disease context \citep{Wang18122002}. We therefore simulate a four-arm trial as before but where the size of the trial is $T=80$. Given that the size of the trial implies a very small number of observations per arm, Table~\ref{tab:simulations3} only includes the results of the tests using Fisher's exact test and Fisher's adjusted exact test (in this case, adjusted to attain the same type-I error as the other methods). Also, to make the scenario more general, we have considered that under the alternative hypothesis the parameters are such that $H_1: p_k=0.3+0.1\times k$, $k=0,1,2,3$. The FR approach exhibits a $30\%$ power and attains an ENS value of 36. Table~\ref{tab:simulations3} shows the results attained for each of the designs considered. Under the alternative hypotheses, the GI and WI designs achieve an ENS gain over the FR design of 6 patients. 
Again, the CG rule exhibits an advantage over FR both in the achieved power and in the ENS (which in the case of this small population equals the advantage achieved by TS or UCB). Its ENS is less than 10 below the theoretical upper bound of 48. An important feature to highlight is that the Whittle rule does not differ from the Gittins rule as significantly as could be expected, given that the trial (and hence its horizon) is small. These results illustrate how the GI and WI start skewing patient allocation toward the best arm (when it exists) earlier than other adaptive designs, therefore explaining their advantage in terms of $p^*$ for small $T$ over all of them. \section{Discussion}\label{Conclusion} Multi-armed bandit problems have emerged as the archetypal model for approaching learning problems while addressing the dilemma of exploration versus exploitation. Although clinical trials have long been used as {\it the} motivating example, MABPs have yet to find any real application in them. After reviewing the theory of the Bernoulli MABP approach, and the Gittins and Whittle indices in particular, we have attempted to illustrate their utility compared to other methods of patient allocation in several multi-arm clinical trial contexts. Our results in Section~\ref{suitability} show that the Gittins and Whittle index-based allocation methods perform extremely well when judged solely on patient outcomes, compared to the traditional fixed randomization approach. The two indexes have distinct theoretical properties, yet in our simulations any differences in their performances were negligible, with both designs being close to each other and to the best possible scenario in terms of patient benefit. Since it only needs to be calculated once before the trial starts, the Gittins index may naturally be preferred.
The Gittins index, therefore, represents an extremely simple---yet near optimal---rule for allocating patients to treatments within the finite horizon of a real clinical trial. Furthermore, since the index is independent of the number of treatments, it can seamlessly incorporate the addition of new arms in a trial, by balancing the need to learn about the new treatment with the need to exploit existing knowledge on others. The issue of adding treatment arms is present in today's cutting-edge clinical trials. For example, this facet has been built into the I-SPY 2 trial investigating tumour-specific treatments for breast cancer from the start \citep{barker2009}. It is also now being considered in the multi-arm multi-stage STAMPEDE trial into treatments for prostate cancer as an unplanned protocol amendment, due to a new agent becoming available (\citeauthor{sydes2009}, \citeyear{sydes2009}; \citeauthor{wason2012some}, \citeyear{wason2012some}). Gittins indices and analogous optimality results have been derived for endpoints other than binary. Therefore, the analysis and conclusions of this work naturally extend to the multinomial distribution \mbox{\citep {glazebrook1978optimal}}, normally distributed processes with known variance \citep{jones1970sequential} and with unknown variance \citep{jones1975search}, and exponentially distributed populations (\citeauthor{do1985aspects}, \citeyear{do1985aspects}; \citeauthor{gittins2011multi}, \citeyear{gittins2011multi}). Unfortunately, the frequentist properties of designs that utilize index-based rules can certainly be questioned; both the Gittins and Whittle index approaches required an adjustment of Fisher's exact test in order to attain type-I error control, produced biased estimates and, most importantly, had very low power to detect a treatment difference at the end of the trial.
Since this latter issue greatly reduces their practical appeal, we proposed a simple modification that acted to stabilize the numbers of patients allocated to the control arm. This greatly increased their power while seemingly avoiding any unwanted type-I error inflation above the nominal level. This principle is not without precedent; indeed, \citet{trippa2012} have recently proposed a Bayesian adaptive design in the oncology setting for which protecting the control group allocation is also an integral part. Further research is needed to see whether statistical tests can be developed for bandit-based designs with well-controlled type-I error rates and whether bias-adjusted estimation is possible. There are of course other obvious limitations to the use of index-based approaches in practice. A patient's response to treatment needs to be known before the next patient is recruited, since the subsequent allocation decision depends on it. This will only be true in a small number of clinical contexts, for example, in early phase trials where the outcome is quick to evaluate or for trials where the recruitment rate may be slow (e.g., some rare disease settings). MABPs rely on this simplifying assumption for the sake of ensuring both tractability and optimality, and cannot claim these special properties without making additional assumptions (see, e.g., \citeauthor{caro2010indexability}, \citeyear{caro2010indexability}). It would be interesting to see, however, if index-based approaches could be successfully applied in the more general settings where patient outcomes are observed in groups at a finite number of interim analyses, such as in a multi-arm multi-stage trial (\citeauthor{magirr2012generalized}, \citeyear{magirr2012generalized}; \citeauthor{wason2012optimal}, \citeyear{wason2012optimal}). Further research is needed to address this question.
A different limitation to the use of bandit strategies is found in the fact that the approach leads to deterministic allocation rules. Randomization naturally protects designs against many possible sources of bias, for example, patient drift unbalancing treatment arms \citep{tang2010} or unscrupulous trial sponsors cherry-picking patients \citep{fda2006}. Of course, while these are serious concerns, they could also be leveled at any other deterministic allocation rule, such as play-the-winner. Further research is needed to introduce randomization to bandit strategies and also to determine some general conditions under which arms are selected or dropped when using the index rules. Further supporting materials for this paper,\break including programs to calculate extended\break tables of the Gittins and Whittle indexes, can be found at \surl{http://www.mrc-bsu.cam.ac.uk/software/\\miscellaneous-software/}. \begin{appendix} \section*{Appendix: Index Computation}\label{index computation} There is a vast literature on the efficient computation of the Gittins indices. In \citet{Beale1979}, \citet{varaiya1985extensions} and \citet {chen1986linear}, among others, algorithms for computing the Gittins indices for the infinite-horizon \emph{classic} MABP with a finite state space are provided. The computational cost for all of them (in terms of their running time as a function of the number of states $N$) is $N^3+\mathcal{O}(N^2)$. The algorithm for computing the Gittins indices in such a case achieving the lowest time complexity, $2/3 N^3+\mathcal{O}(N^2)$, was provided by \citet{nino20072}. For MABP with an infinite state space, such as the Bayesian Bernoulli MABP in Section~\ref{classic MABP}, the indices can be computed using any of the above algorithms but confining attention to some finite set of states, which will eventually determine the precision of their calculation.
For the finite-horizon \emph{classic} MABP, as reviewed in Section~\ref{restless MABP}, an efficient exact computation method based on a recursive adaptive-greedy algorithm is provided in \citet{nino2011computing}. In what follows we examine in more detail the so-called \emph {calibration} method for the approximate index computation in the Bayesian Bernoulli MABP, both for the infinite- (Gittins index) and finite-horizon case (Whittle index). There are many reasons for focusing on this approach, not least because it was the algorithm used for computing the values presented in this paper. It also sheds light on the interpretation of the resulting index values, by connecting the Gittins index approach to the work in \citet{bellman1956problem}, and has long been the preferred computational method. \subsection*{The Calibration Method} \label{calib} \citet{bellman1956problem} studied an infinite random sampling problem involving two binomial distributions: one with a known success rate and the other one with an unknown rate but with a Beta prior. Bellman's key contribution was to show that the solution to the problem of determining the sequence of choices that maximize the ETD number of successes exists, is unique and, moreover, is expressible in terms of an index function which depends only on the total observed number of successes $s$ and failures $f$ of the unknown process. \citet{gijo74} used that result and showed that the optimal rule for an infinite-horizon MABP can also be expressed in terms of an index function for each of the $K$ Bernoulli populations and based on their observed sampling histories $(s,f)$. Such an index function is given by the value $p \in[0,1]$ for which the decision maker is indifferent between sampling the next observation from a population with known success rate $p$ or from an unknown one with an expected success rate $\frac{s}{s+f}$. 
The \emph{calibration} method uses DP to approximate the Gittins index values based on this idea, as explained in \citet{gittins1979dynamic}, and it can be adapted to compute the finite-horizon counterpart, as explained in \citeauthor{berry1985bandit} (\citeyear{berry1985bandit}), Chapter~5. Specifically, this index computation method solves, for a grid of $p$ values (the size of which determines the accuracy of the resulting index value approximations), the following DP problem: \begin{eqnarray} \label{eq:top dp ex}&& V_{D,t}^*(s,f,p)\nonumber \\ &&\quad= \max\biggl\{p \frac{ 1-d^{T-t}}{1-d},\nonumber \\ && \hphantom{\quad= \max\biggl\{}\frac{s}{s+f} \bigl(1+ d V_{D,(t+1)}^* (s+1,f,p) \bigr) \nonumber \\[-10pt]\\[-10pt] && \hphantom{\quad= \max\biggl\{\ }{}+ \frac {f}{s+f} \bigl( d V_{D,(t+1)}^*(s,f+1,p) \bigr)\biggr\},\nonumber \\ &&\hspace*{134pt}\quad t=0, \dots, T-2,\nonumber\hspace*{-134pt} \\ &&V_{D,T-1}^*(s,f,p) = \max\biggl\{ p , \frac{s}{s+f}\biggr\}.\nonumber \end{eqnarray} For the infinite-horizon problem and with $0\le d <1$, the convergence result allows for the omission of the subscript $t$ in the optimal value functions in \eqref{eq:top dp ex}, letting the reward associated with the known arm be $\frac{p}{(1-d)}$. For obtaining a reasonably good initial approximation of the optimal value function, the terminal condition on $V_{D,T-1}^*(s,f,p)$ is solved for values of $s$ and $f$ such that $s+f=T-1$ for some large $T$, and then a backward induction algorithm is applied to yield an approximate value for $V_{D,0}^*(s,f,p)$. For a fixed $p$ the total number of arithmetic operations to solve \eqref{eq:top dp ex} is $1/2 (T-1) (T-2)$, which, as stated in Section~\ref{GItheorem}, no longer grows exponentially in the horizon of truncation $T$ (nor does it grow in the number of arms of the MABP).
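The backward induction \eqref{eq:top dp ex}, combined with a bisection search for the indifference value of $p$, can be sketched as follows (an illustrative memoized implementation of ours, not the code used to produce the tables):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dp_value(s, f, p, t, T, d):
    """V*_{D,t}(s, f, p): retire to the known arm paying p per stage,
    or sample once more from the Beta(s, f) arm and continue."""
    horizon = T - t
    if horizon == 1:
        return max(p, s / (s + f))          # terminal condition
    retire = p * horizon if d == 1.0 else p * (1 - d**horizon) / (1 - d)
    cont = (s / (s + f)) * (1 + d * dp_value(s + 1, f, p, t + 1, T, d)) \
         + (f / (s + f)) * d * dp_value(s, f + 1, p, t + 1, T, d)
    return max(retire, cont)

def calibrated_index(s, f, t, T, d=1.0, tol=1e-4):
    """Bisect on p for the indifference point between the two arms."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = 0.5 * (lo + hi)
        horizon = T - t
        retire = p * horizon if d == 1.0 else p * (1 - d**horizon) / (1 - d)
        if dp_value(s, f, p, t, T, d) <= retire + 1e-12:
            hi = p    # retiring is already optimal: the index lies below p
        else:
            lo = p
    return 0.5 * (lo + hi)
```

With $d<1$ and a large truncation horizon $T$ this approximates the Gittins index; with $d=1$ and the trial's actual horizon it yields the time-$t$ Whittle index.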
For the finite-horizon variant, the terminal condition is not used for approximating the initial point of the backward-induction algorithm and the solution, but for computing the optimal value function exactly. The resulting number of operations to compute the Whittle index is basically the same as for the Gittins index, yet the total computational cost is significantly higher given that the Whittle indices must be computed and stored for every possible $t\le T-1$ and $(s,f)$. However, notice that an important advantage of the Whittle index over the Gittins index is that the discount factor $d=1$ can be explicitly considered for the former, directly adopting an Expected Total objective function, by replacing the term $\frac{1-d^{T-t}}{1-d}$ by $T-t$, using the fact that \[ \lim_{d \to1}\frac{1-d^{T-t}}{1-d} = \sum _{i=0}^{T-t-1}d^i. \] \end{appendix} \section*{Acknowledgments} The authors are grateful for the insightful and very useful comments of the anonymous referee and Associate Editor that significantly improved the presentation of this paper. Supported in part by the UK Medical Research Council (grant numbers G0800860 and MR/J004979/1) and the Biometrika Trust. \end{document}
\begin{document} \title{An Efficient Method to Transform SAT problems to Binary Integer Linear Programming Problem} \author{\IEEEauthorblockN{Wenxia Guo\IEEEauthorrefmark{1},Jin Wang\IEEEauthorrefmark{1},Majun He\IEEEauthorrefmark{1}, Xiaoqin Ren\IEEEauthorrefmark{1}, Wenhong Tian\IEEEauthorrefmark{1},Qingxian Wang\IEEEauthorrefmark{2}} \IEEEauthorblockA{\IEEEauthorrefmark{1} School of Information and Software Engineering\\ University of Electronic Science and Technology of China, Chengdu, Sichuan\\ Email: [email protected]\\ } \IEEEauthorblockA{\IEEEauthorrefmark{2} Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences\\ Email:tian\[email protected], [email protected]\\ } } \maketitle \begin{abstract} In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. Although a solution to an NP-complete problem can be verified quickly, there is no known algorithm to solve it in polynomial time. There exists a method in the literature to reduce a SAT (Satisfiability) problem to the Subset Sum Problem (SSP); however, it can only be applied to small or medium size problems. Our study aims to find an efficient method to transform a SAT problem to a mixed integer linear programming problem of larger size. Observing the structure of the variable--clause constraints in SAT, we apply a linear inequality model (LIM) to the problem and propose a method called LIMSAT. The new method works efficiently for very large problems with thousands of variables and clauses in SAT, tested using up-to-date benchmarks. \end{abstract} keywords: SAT (Satisfiability problem); Binary ILP (Integer Linear Programming); 3SAT; Reduction \IEEEpeerreviewmaketitle \section{Introduction} P problems are the class of problems that can be solved in polynomial time. This means they are problems which are solvable in time $O(n^k)$ in Big $O$ notation for some constant $k$, where $n$ is the size of the input of the problem.
NP problems are the set of problems that can be verified in polynomial time as a function of the given input size \cite{Vanoye2011Survey} using a nondeterministic Turing machine. This means that if there is a ``certificate'' of a solution, then the ``certificate'' can be proved correct in time polynomial in the size of the input to the problem \cite{Cormen2005Introduction}. In 1971, Cook \cite{Cook1971The} defined problem X to be polynomial-time reducible to problem Y if arbitrary instances of problem X can be solved using a polynomial number of standard computational steps, plus a polynomial number of calls to an oracle that solves problem Y. Therefore, a problem Y in NP is NP-complete if it has the property that for every problem X in NP, $X \leq\leftidx{_p}Y$, that is, every problem X in NP is polynomial-time reducible to problem Y. NP-complete problems constitute the class of the most difficult possible NP problems \cite{Vanoye2011Survey}. NP-complete problems can be divided into six basic genres \cite{karp1972Reducibility}, i.e., packing problems, covering problems, constraint satisfaction problems, sequencing problems, partitioning problems, and numerical problems. Constraint satisfaction problems include the Circuit Satisfiability problem, the Satisfiability problem (SAT), and 3SAT. A special case of SAT is 3SAT, in which each clause has exactly three literals, each corresponding to a distinct variable or the negation of such a variable. The computational complexity of the SAT problem in the worst case is $O(2^n)$, where $n$ is the number of variables. Because SAT can be transformed to 3SAT, 3SAT has computational complexity similar to that of SAT. The question whether an arbitrary Boolean formula is satisfiable cannot be solved within polynomial time. For a formula with $n$ variables, the number of possible variable assignments can reach $2^n$. If the formula length is polynomial in $n$, then checking all assignments takes $\Omega(2^n)$ time, which is superpolynomial in the formula length.
Due to this fact, this paper proposes an efficient method that transforms SAT problems into a mixed integer linear programming problem in order to reduce the time needed to handle SAT problems. \subsection{Related Work} Conflict-driven clause learning (CDCL) is an efficient method for solving Boolean satisfiability (SAT) problems. Many heuristics have been added to it to improve performance, for example restarts and the Variable State Independent Decaying Sum (VSIDS). Nabeshima et al. \cite{Nabeshima2017Coverage} focus on clause reduction heuristics, which aim to suppress memory consumption and sustain propagation speed. In their study, the reduction consists of two parts: an evaluation criterion and a reduction strategy. The first part measures the usefulness of learnt clauses using LBD (literals blocks distance), while the second, a new strategy based on the coverage of used LBDs, selects clauses to remove according to the criterion. In their experiments they compare the Glucose schema with the Coverage schema; the results show that the new strategy improves performance on both SAT and UNSAT instances. Another strategy for improving SAT solvers, called HSAT (Hint SAT), was proposed by Kalechstain et al. \cite{Kalechstain2015Hints}; it cuts the search space by using a hint-based partial resolution graph to reach a solution faster. For hint generation they chiefly use two heuristics. The first is Avoiding Failing Branches (AFB), which prevents the solver from spending too much time on already-explored branches made of decision variables. The second is Random Hints (RH), which aims to create hints that contradict the instance; this algorithm is based on random assignments and satisfiability checking. Experiments show that AFB can solve 113 of the 150 satisfiable instances from SAT 2013 within half an hour; the experimental results on SAT 2014 are almost the same. Gilles Audemard et al. 
\cite{Audemard2016} study how to measure SAT instances. They give 5 indicators: the number of decision levels, the number of decisions between two conflicts, the number of successive conflicts, the number of non-binary glue clauses and the number of unit propagations. They also describe the restart polarity policy added to Glucose; the new version of Glucose solves 20$\%$ more problems than the original one and increases the speed on UNSAT instances. Further, Nathan Mull et al. \cite{Mull2016On} analyze the structure of industrial benchmarks. Previous studies hold that CDCL solvers are efficient on industrial benchmarks because of their ``good community structure'' (high modularity). However, Mull et al. obtain a different result: using random unsatisfiable instances produced by a ``pseudo-industrial'' community attachment model, their experiments show that community structure is not adequate to explain the good performance of CDCL on industrial benchmarks. Symmetry is another characteristic of SAT problems. Jo Devriendt et al. \cite{Devriendt2017Symmetric} present symmetric explanation learning (SEL), in which symmetric clauses are learned only when they are unit or conflicting. Experiments on 1300 benchmark instances indicate that, among GLUCOSE, BREAKID, SEL, SP and SLS, SEL outperforms the other four solver configurations; moreover, as a dynamic symmetry handling technique, SEL is the first one competitive with static symmetry breaking, which is known to be the most effective way to handle symmetry. In terms of symmetry, C.K. Cuong et al. \cite{Cuong2016Computing} likewise propose a method transforming unavoidable sub-graphs to SAT, which can make up for shortcomings of SAT solvers. With the development of machine learning, combining SAT solving with machine learning is a promising direction. Quanrun Fan et al. \cite{Fan2015Clustering} use a clustering method based on divide and conquer to deal with Boolean satisfiability problems. 
In this way, the original problem is divided into many small ones, which reduces the overall runtime. In this algorithm, clauses are clustered into groups according to the similarity between clauses, using the Stoer-Wagner minimum-cut algorithm for undirected graphs to partition the clauses. This reduces the number of variables preventing clause-group partition (cut variables), making these variables easier to eliminate. In SAT Competition 2016 \cite{SAT2016}, the best solver for the main-Crafted benchmarks is tc\_glucose, which solves 58 instances in total. It combines CHBR\_glucose with tb\_glucose and employs two techniques: the Variable State Independent Decaying Sum (VSIDS) decision heuristic \cite{Matthew W2005Chaff} and the conflict history-based branching heuristic (CHB) \cite{JiaHuiLiang2016Exponential}. The former is good at dealing with large problems while the latter works well on small ones, so CHBR\_glucose uses CHB when the number of variables is under 15000. Because ties happen frequently once variables are scored by VSIDS, tb\_glucose updates the VSIDS scores after obtaining learned clauses, computing 1/(LBD of the clause) for each variable in that clause \cite{Tom¨¢ Balyo2016Proceeding}. This is called TBVSIDS. In tc\_glucose, TBVSIDS is the default decision heuristic and CHB is activated when the number of variables is under 15000. \section{The method} \subsection{Reduction from 3SAT to SSP} The Subset-Sum problem (SSP) asks for a subset of a given set of numbers that adds up to a target value. The reduction from 3SAT to SSP is introduced in \cite{Cormen2005Introduction}; for completeness, we restate it in the following. Given a 3-CNF formula with $n$ variables and $k$ clauses, the $n$ most significant digits are labeled by the variables and the $k$ least significant digits are labeled by the clauses. 
The construction is as follows: \begin{enumerate} \item The target $t$ has a 1 in each digit labeled by a variable and a 4 in each digit labeled by a clause. \item For each variable $x_i$, the set $S$ contains two integers $v_i$ and $v'_i$. Both $v_i$ and $v'_i$ have a 1 in the digit labeled by $x_i$ and 0 in the other variable digits. If $C_j$ contains $x_i$, then the digit labeled by $C_j$ in $v_i$ is 1. If $\neg x_i$ appears in $C_j$, then the digit labeled by $C_j$ in $v'_i$ is 1. \item For each clause $C_j$, the set $S$ contains two integers $s_j$ and $s'_j$, which are 0 in every digit except the one labeled by $C_j$: $s_j$ has a 1 in the digit labeled by $C_j$, and $s'_j$ has a 2 in the same digit. The ``slack variables'' $s_j$ and $s'_j$ help each clause-labeled digit sum to the target value 4. Table \ref{Reduction} shows an example of the reduction from 3SAT to SSP with 3 variables and 4 clauses: \begin{center} $\phi$ \ = \ $C_1$ \ $\wedge$ \ $C_2$ \ $\wedge$ \ $C_3$ \ $\wedge$ \ $C_4$ \qquad (1) \end{center} where the four clauses are: \\ $C_1$ = ($x_1$ $\vee$ ${\neg x_2}$ $\vee$ ${\neg x_3}$), \ $C_2$ = (${\neg x_1}$ $\vee$ ${\neg x_2}$ $\vee$ ${\neg x_3}$), \\ $C_3$ = ($x_1$ $\vee$ ${\neg x_2}$ $\vee$ $x_3$), \ $C_4$ = ($x_1$ $\vee$ $x_2$ $\vee$ $x_3$). The target $t$ in the SSP instance is 1114444, so the job is to find a subset whose sum equals $t$; this SSP instance can be solved by dynamic programming. One can see that each number in the resulting SSP instance has $(n+k)$ digits and that there are $2(n+k)$ numbers in total, so this approach works only for small or medium size problems. 
\end{enumerate} \begin{table}[!hbp] \caption{Reduction from 3SAT to Subset-Sum} \label{Reduction} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & $x_1$ & $x_2$ & $x_3$ & $C_1$& $C_2$& $C_3$& $C_4$ \\ \hline $v_1$ & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ \hline $v'_1$ & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hline $v_2$ & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ \hline $v'_2$ & 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ \hline $v_3$ & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\ \hline $v'_3$ & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\ \hline $s_1$ & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ \hline $s'_1$ & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ \hline $s_2$ & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ \hline $s'_2$ & 0 & 0 & 0 & 0 & 2 & 0 & 0\\ \hline $s_3$ & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \hline $s'_3$ & 0 & 0 & 0 & 0 & 0 & 2 & 0\\ \hline $s_4$ & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline $s'_4$ & 0 & 0 & 0 & 0 & 0 & 0 & 2\\ \hline $ t $ & 1 & 1 & 1 & 4 & 4 & 4 & 4\\ \hline \end{tabular} \end{center} \end{table} \subsection{Transformation from SAT to a 0-1 Integer Linear Programming Model} Observing the difficulty of handling the large numbers arising in the reduction from 3SAT to SSP, we consider a new approach to solving SAT. Instead of treating each number in the reduction as an independent integer, we treat each digit as an element of a matrix. Setting the SSP matrix as $A$, the integer linear programming (ILP) formulation of SSP becomes $x A=b$, where $b$ is the target value and $x$ is the 0-1 solution vector we are looking for. To meet the constraints of SAT, we build a model of linear inequalities as follows: \begin{enumerate} \item Prepare a SAT problem in CNF format; \item Reduce the SAT problem to SSP by the method introduced in \cite{Cormen2005Introduction}, obtaining a $2(n+k) \times (n+k)$ matrix $A$; denote by $A_1$ the $2n \times n$ submatrix of $A$ consisting of the first $2n$ rows and the first $n$ columns, and by $A_2$ the $2n \times k$ submatrix consisting of the first $2n$ rows and columns $n+1$ to $n+k$ of $A$. 
\item To meet the constraints, we need $x{A_1} \leq b_1$ and $b_2 \leq x{A_2} \leq b_3$, where $b_1$ and $b_2$ are all-ones vectors and $b_3$ is an all-threes vector; only $b_1$ and $b_2$ are needed for our problem. Set the target as an array $b=[\mathrm{ones}(1,n),-\mathrm{ones}(1,k)]$ and solve the system of linear inequalities $x[A_1,-A_2] \leq b$, where $b=[b_1,-b_2]$. If there exists a 0-1 solution $x$, then the original SAT problem is satisfiable; otherwise it is not. Our model is as follows: \begin{center} \par{min \ $c^{T}x$} \qquad (2) \par{s.t. \ $xA\leq b$} $x\in\{0,1\}$ \leftline{where $c$ is a coefficient vector (all ones by default).} \end{center} \end{enumerate} \section{Experimental Results} The test cases come from the SAT 2016 competition \cite{Crafted}. There are 5 categories: Application Benchmarks from Main/Parallel Tracks, Crafted Benchmarks from Main/Parallel Tracks, Agile Track Benchmarks, Random Track Benchmarks and Incremental Track Benchmarks. In our research we focus on the main-Crafted Track, in which a majority of instances come from the Crafted Benchmarks of the Main Tracks; these instances are limited to 5000 seconds. We implement our algorithm in Gurobi 7.5.1 and, in order to avoid running out of memory, we use a sparse matrix as input. All experiments were performed on an Intel Xeon CPU (2.4GHz) with 20G of memory, which is similar to the configuration in SAT 2016 \cite{Crafted}; the time limit was set to 5000s. In order to verify the correctness of our algorithm, we first test one hundred instances named uf250-1065 from \cite{SATLIB}. These instances, with 250 variables and 1065 clauses each, are all SAT, and our experimental results agree with the given results. Table \ref{Resultuf250}, containing part of the test results, shows that the algorithm proposed in this paper solves these problems correctly. 
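As an end-to-end check of the reduction described above, the following Python sketch (our illustration; the paper's actual implementation uses Gurobi) builds the SSP instance for formula (1) and verifies that the subset induced by the satisfying assignment x1 = true, x2 = x3 = false sums to the target:

```python
def reduce_3sat_to_ssp(n, clauses):
    """CLRS-style reduction from 3SAT to Subset-Sum.

    Each clause is a list of signed ints (literal i means x_i,
    -i means not x_i). Numbers are assembled in base 10: n variable
    digits (most significant) followed by k clause digits.
    """
    k = len(clauses)

    def as_number(digits):
        return int("".join(map(str, digits)))

    numbers = []
    for i in range(1, n + 1):
        for lit in (i, -i):              # v_i for x_i, v'_i for not x_i
            d = [0] * (n + k)
            d[i - 1] = 1                 # digit labeled by x_i
            for j, clause in enumerate(clauses):
                if lit in clause:
                    d[n + j] = 1         # digit labeled by C_j
            numbers.append(as_number(d))
    for j in range(k):                   # slack numbers s_j = 1, s'_j = 2
        for w in (1, 2):
            d = [0] * (n + k)
            d[n + j] = w
            numbers.append(as_number(d))
    target = as_number([1] * n + [4] * k)
    return numbers, target

# Worked example from the text: 3 variables, 4 clauses.
clauses = [[1, -2, -3], [-1, -2, -3], [1, -2, 3], [1, 2, 3]]
S, t = reduce_3sat_to_ssp(3, clauses)
print(t)                                 # 1114444

# x1=true, x2=false, x3=false satisfies every clause; the corresponding
# subset {v1, v'2, v'3} plus slacks {s1, s'2, s'3, s4, s'4} hits the target.
subset = [S[0], S[3], S[5], S[6], S[9], S[11], S[12], S[13]]
print(sum(subset) == t)                  # True
```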
\begin{table}[!hbp] \caption{Part of the results for the uf250-1065 instances (the others are solved within a few seconds)} \label{Resultuf250} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Filename & Variables & Clauses & Result & Time(s) \\ \hline uf250-02.cnf & 250 & 1065 & sat & 63 \\ \hline uf250-024.cnf & 250 & 1065 & sat & 24 \\ \hline uf250-029.cnf & 250 & 1065 & sat & 74 \\ \hline uf250-054.cnf & 250 & 1065 & sat & 31 \\ \hline uf250-067.cnf & 250 & 1065 & sat & 23 \\ \hline uf250-071.cnf & 250 & 1065 & sat & 53 \\ \hline uf250-086.cnf & 250 & 1065 & sat & 44 \\ \hline uf250-093.cnf & 250 & 1065 & sat & 41\\ \hline \end{tabular} \end{center} \end{table} Next, we run our algorithm on the SAT competition problems. Of the 104 main-crafted instances we test, 68 are solved and the rest remain unknown. Table \ref{Result-Craft} lists the successfully solved instances. \begin{center} \tablefirsthead{ \hline \multicolumn{1}{|c}{ Filename} & \multicolumn{1}{|c}{Result} & \multicolumn{1}{|c|}{CPUTime(s)} \\ \hline} \tablehead{ \hline \multicolumn{3}{|l|}{\small\sl continued from previous page}\\ \hline \multicolumn{1}{|c}{ Filename} & \multicolumn{1}{|c}{Result} & \multicolumn{1}{|c|}{CPUTime(s)} \\ \hline} \tabletail{ \hline \multicolumn{3}{|r|}{\small\sl continued on next page}\\ \hline} \tablelasttail{\hline} \topcaption{Successfully solved main-Craft benchmark instances} \label{Result-Craft} \begin{supertabular}{|c|c|c|} craft\_fixedbandwidth-eq-31 & unsat & $0.99$ s \\ \hline craft\_fixedbandwidth-eq-32 & unsat & $0.74$ s \\ \hline craft\_fixedbandwidth-eq-33 & unsat & $0.71$ s \\ \hline craft\_fixedbandwidth-eq-34 & unsat & $0.74$ s \\ \hline craft\_fixedbandwidth-eq-35 & unsat & $0.71$ s \\ \hline craft\_fixedbandwidth-eq-36 & unsat & $1.58$ s \\ \hline craft\_fixedbandwidth-eq-37 & unsat & $1.70$ s \\ \hline craft\_fixedbandwidth-eq-39 & unsat & $0.87$ s \\ \hline craft\_fixedbandwidth-eq-40 & unsat & $0.81$ s \\ \hline craft\_fixedbandwidth-eq-42 & 
unsat & $0.79$ s \\ \hline craft\_rphp4\_065 & unsat & $107.51$ s \\ \hline craft\_rphp4\_070 & unsat & $233.01$ s \\ \hline craft\_rphp4\_075 & unsat & $209.19$ s \\ \hline craft\_rphp4\_080 & unsat & $136.75$ s \\ \hline craft\_rphp4\_085 & unsat & $177.09$ s \\ \hline craft\_rphp4\_090 & unsat & $118.25$ s\\ \hline craft\_rphp4\_095 & unsat & $197.94$ s\\ \hline craft\_rphp4\_100 & unsat & $217.43$ s\\ \hline craft\_rphp4\_105 & unsat & $401.78$ s \\ \hline craft\_rphp4\_110 & unsat & $698.25$ s \\ \hline craft\_rphp4\_115 & unsat & $865.15$ s \\ \hline craft\_rphp4\_120 & unsat & $1296.42$ s \\ \hline craft\_rphp4\_125 & unsat & $1280.32$ s \\ \hline craft\_rphp4\_130 & unsat & $1690.78$ s \\ \hline craft\_rphp4\_135 & unsat & $1530.62$ s \\ \hline craft\_rphp4\_140 & unsat & $2562.48$ s \\ \hline craft\_rphp4\_145 & unsat & $1266.03$ s \\ \hline craft\_rphp4\_150 & unsat & $2100.18$ s \\ \hline craft\_rphp4\_155 & unsat & $3982.60$ s \\ \hline craft\_rphp4\_160 & unsat & $4671.99$ s \\ \hline craft\_rphp5\_035 & unsat & $69.89$ s \\ \hline craft\_rphp5\_040 & unsat & $66.08$ s \\ \hline craft\_rphp5\_045 & unsat & $253.78$ s \\ \hline craft\_rphp5\_050 & unsat & $370.07$ s \\ \hline craft\_rphp5\_055 & unsat & $355.40$ s\\ \hline craft\_rphp5\_060 & unsat & $2122.91$ s\\ \hline craft\_rphp5\_065 & unsat & $529.26$ s\\ \hline craft\_rphp5\_070 & unsat & $928.81$ s \\ \hline craft\_rphp5\_075 & unsat & $2529.35$ s \\ \hline craft\_rphp5\_080 & unsat & $528.06$ s \\ \hline craft\_rphp5\_085 & unsat & $717.96$ s \\ \hline craft\_rphp5\_090 & unsat & $1368.22$ s \\ \hline craft\_rphp5\_095 & unsat & $2429.29$ s \\ \hline craft\_rphp5\_100 & unsat & $1018.40$ s \\ \hline craft\_rphp5\_105 & unsat & $3916.32$ s \\ \hline craft\_Ptn-7824-b01 & unsat & $180.31$ s \\ \hline craft\_Ptn-7824-b02 & unsat & $173.49$ s \\ \hline craft\_Ptn-7824-b03 & unsat & $169.88$ s \\ \hline craft\_Ptn-7824-b04 & unsat & $170.35$ s 
\\ \hline craft\_Ptn-7824-b05 & unsat & $167.99$ s \\ \hline craft\_Ptn-7824-b06 & unsat & $164.85$ s \\ \hline craft\_Ptn-7824-b07 & unsat & $451.97$ s \\ \hline craft\_Ptn-7824-b08 & unsat & $434.41$ s \\ \hline craft\_Ptn-7824-b09 & unsat & $438.03$ s \\ \hline craft\_Ptn-7824-b010 & unsat & $193.88$ s \\ \hline \end{supertabular} \end{center} From Table \ref{Result-Craft}, it is clear that the Gurobi solver based on our 0-1 linear inequalities model performs well. In SAT Competition 2016, the best solver for the main-craft track, tc\_glucose, solved 58 instances in total on well-configured servers \cite{Crafted}. \section{Discussion and Conclusion} In this paper, we presented LIMSAT, a 0-1 ILP model for SAT. A key idea is to reduce the size of the SSP matrix from $2(n+k) \times (n+k)$ to $n \times (n+k)$ using the structure of this matrix: the rows containing $v_i$ and $v'_i$ cannot be chosen at the same time, and the slack variables are all in the lower right corner. Differently from the reduction in \cite{Cormen2005Introduction}, LIMSAT works for general SAT problems, including 3SAT, without needing to transform SAT into 3SAT. We can construct a new SSP matrix directly from the CNF file; compared to the original SSP matrix introduced in \cite{Cormen2005Introduction}, the new one only contains variables and clauses, without slack variables. The experimental results show that our algorithm is effective. For future work, we will improve the efficiency of our implementation; specifically, we consider using parallel algorithms to deal with SAT problems. \end{document}
\begin{document} \title{Einstein hypersurfaces of $\mathbb{S}^n \times \mathbb{R}$ and $\mathbb{H}^n \times \mathbb{R}$} \author{Benedito Leandro} \address{Benedito Leandro - Instituto de Matem\'atica e Estat\'istica, Universidade Federal de Goi\'as, Goi\^ania, 74001-970, Brazil} \email{[email protected]} \author{Romildo Pina} \address{Romildo Pina - Instituto de Matem\'atica e Estat\'istica, Universidade Federal de Goi\'as, Goi\^ania, 74001-970, Brazil} \email{[email protected]} \author{Jo\~ao Paulo dos Santos} \address{Jo\~ao Paulo dos Santos - Departamento de Matem\'atica, Universidade de Bras\'ilia, 70910-900, Bras\'ilia-DF, Brazil} \email{[email protected]} \thanks{The third author was supported by FAPDF 0193.001346/2016} \subjclass[2010]{53B25; 53C40, 53C42} \keywords{hypersurfaces in product spaces, Einstein manifolds, constant sectional curvature} \begin{abstract} In this paper, we classify the Einstein hypersurfaces of $\mathbb{S}^n \times \mathbb{R}$ and $\mathbb{H}^n \times \mathbb{R}$. We use the characterization of the hypersurfaces of $\mathbb{S}^n \times \mathbb{R}$ and $\mathbb{H}^n \times \mathbb{R}$ for which the tangent component of the unit vector field spanning the factor $\mathbb{R}$ is a principal direction, together with the theory of isoparametric hypersurfaces of space forms, to show that Einstein hypersurfaces of $\mathbb{S}^n \times \mathbb{R}$ and $\mathbb{H}^n \times \mathbb{R}$ must have constant sectional curvature. \end{abstract} \maketitle \section{Introduction} A Riemannian manifold $(M^n,g)$ is said to be Einstein if its Ricci tensor is proportional to the metric, i.e., if $Ric_M = \rho g$, for some constant $\rho \in \mathbb{R}$. Equivalently, $(M^n,g)$ is an Einstein manifold if it has constant Ricci curvature and, according to Besse \cite{besse}, constant Ricci curvature could be considered as a good generalization of the concept of constant sectional curvature. 
Also, as pointed out in \cite{besse}, there are several results in the literature justifying that an Einstein metric is a good candidate for a ``best'' metric on a given manifold. When $n=2$, the Einstein condition means constant Gaussian curvature, whereas a simple calculation shows that, when $n=3$, a manifold $(M^n,g)$ is Einstein if and only if it has constant sectional curvature. This paper aims to prove that an isometric immersion of an Einstein manifold $M^n$ as a hypersurface of the Riemannian products $\mathbb{S}^n \times \mathbb{R}$ and $\mathbb{H}^n \times \mathbb{R}$ occurs only when $M^n$ has constant sectional curvature. More precisely, let us denote by $Q^{n}(\varepsilon)$ the unit sphere $\mathbb{S}^n$, if $\varepsilon=1$, or the hyperbolic space $\mathbb{H}^n$, if $\varepsilon=-1$. With this notation, our main theorem is the following: \begin{theorem} Let $f: M^n \rightarrow Q^{n}(\varepsilon) \times \mathbb{R}$, $n > 3$, be an isometric immersion of an Einstein manifold. Then $M^n$ is a manifold with constant sectional curvature. \label{thm-einstein} \end{theorem} Isometric immersions of Einstein manifolds into space forms were considered initially in codimension 1 by Thomas \cite{thomas}, followed by Fialkow \cite{fialkow}, and the full classification in this case was concluded by Ryan \cite{ryan}. Briefly, an Einstein hypersurface of a space form of curvature $\varepsilon$ must have constant sectional curvature, except for the case $\varepsilon=1$, where we can find products of spheres as Einstein hypersurfaces. For arbitrary codimensions, Einstein submanifolds of space forms were considered recently under the hypothesis of having flat normal bundle. Onti \cite{onti} classified such submanifolds with parallel mean curvature, whereas Dajczer, Onti and Vlachos \cite{dajczer} proved that Einstein submanifolds of space forms with flat normal bundle are locally holonomic. 
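For the reader's convenience, the simple calculation in dimension three mentioned above can be sketched as follows (a standard computation, recorded here because the low-dimensional dichotomy motivates the hypothesis $n>3$):

```latex
% In dimension 3 the Weyl tensor vanishes, so the curvature tensor is
% determined by the Ricci tensor and the scalar curvature S:
R_{ijkl} = g_{ik}R_{jl} + g_{jl}R_{ik} - g_{il}R_{jk} - g_{jk}R_{il}
         - \frac{S}{2}\,\bigl(g_{ik}g_{jl} - g_{il}g_{jk}\bigr).
% If Ric = \rho g, then S = 3\rho, and substituting gives
R_{ijkl} = \frac{\rho}{2}\,\bigl(g_{ik}g_{jl} - g_{il}g_{jk}\bigr),
% which is precisely the curvature tensor of a space of constant
% sectional curvature \rho/2.
```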
The study of the intrinsic geometry of hypersurfaces in $\mathbb{H}^n \times \mathbb{R}$ and $\mathbb{S}^n \times \mathbb{R}$ has drawn much attention in recent years \cite{aledo1, aledo2, Veken2, Veken3, manfio, novais, Veken1}. In particular, hypersurfaces with constant sectional curvature were considered by Aledo, Espinar and G\'alvez \cite{aledo1, aledo2} in the two-dimensional case and by Manfio and Tojeiro \cite{manfio} in higher dimensions. When $n\geq 4$, Manfio and Tojeiro proved that a hypersurface with constant sectional curvature $c$ only exists when $c \geq \varepsilon$ and that it must be an open part of a complete rotation hypersurface. When $n=3$, $c \in (0,1)$ if $\varepsilon=1$ and $c \in (-1,0)$ if $\varepsilon=-1$; in this case, the hypersurface is constructed explicitly using parallel surfaces in $Q^3(\varepsilon)$. Consequently, the results given by Manfio and Tojeiro in \cite{manfio}, together with Theorem \ref{thm-einstein}, completely solve the problem of the classification of Einstein hypersurfaces in $Q^{n}(\varepsilon) \times \mathbb{R}$. \section{Preliminary notions and results} In this section we will present some preliminary notions and results that will be used in the proof of Theorem \ref{thm-einstein}. Let us first establish some notation. As said before, we will denote by $Q^n(\varepsilon)$ the unit sphere $\mathbb{S}^n$, if $\varepsilon=1$, or the hyperbolic space $\mathbb{H}^n$ if $\varepsilon=-1$. The Riemannian manifold $Q^{n}(\varepsilon)\times\mathbb{R}$ will be given in the following models: $$ \begin{array}{rcl} {\mathbb{S}}^n \times \mathbb{R} &=& \left\{(x_1,\ldots,x_{n+2})\in\mathbb{E}^{n+2}|\;x_1^2+x_2^2+\ldots+x_{n+1}^2=1\right\}, \\ {\mathbb{H}}^n \times \mathbb{R} &=& \left\{(x_1,\ldots,x_{n+2})\in\mathbb{L}^{n+2}|-x_1^2+x_2^2+\ldots+x_{n+1}^2=-1, x_1>0\right\}, \end{array} $$ with the metric induced by the ambient space. 
Here $\mathbb{E}^{n+2}$ is the $(n+2)$-dimensional Euclidean space and $\mathbb{L}^{n+2}$ is the $(n+2)$-dimensional Lorentzian space with the canonical metric $ds^2=-dx_1^2+dx_2^2+ \ldots +dx_{n+2}^2$. Let $f : M^n \rightarrow Q^{n}(\varepsilon)\times\mathbb{R}$ be a hypersurface. Denote by $N$ its unit normal and let $\partial_{x_{n+2}}$ be the coordinate vector field of the factor $\mathbb{R}$. Also, let us denote by $T$ the orthogonal projection of $\partial_{x_{n+2}}$ onto the tangent space of $M^n$. With this notation, we have the following decomposition \begin{equation} \partial_{x_{n+2}}=T+\nu N, \label{decomposition} \end{equation} where $\nu$ is a smooth function defined in $M^n$, called the \emph{angle function}. Let $\nabla$ and $R$ be the Riemannian connection and the curvature tensor of a hypersurface $f : M^n \rightarrow Q^n(\varepsilon) \times \mathbb{R}$, respectively. We adopt the following sign convention: $R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z$. If we denote by $S$ its shape operator, the Gauss equation is given by \begin{multline}\label{gauss} \langle R(X,Y)Z,W\rangle=\varepsilon(\langle X,W\rangle\langle Y,Z\rangle-\langle X,Z\rangle\langle Y,W\rangle\\ + \langle X,T\rangle\langle Z,T\rangle\langle Y,W\rangle+\langle Y,T\rangle\langle W,T\rangle\langle X,Z\rangle\\ -\langle Y,T\rangle\langle Z,T\rangle\langle X,W\rangle-\langle X,T\rangle\langle W,T\rangle\langle Y,Z\rangle)\\ +\langle SX,W\rangle\langle SY,Z\rangle-\langle SX,Z\rangle\langle SY,W\rangle. \end{multline} Moreover, since the vector field $\partial_{x_{n+2}}$ is parallel in $Q^{n}(\varepsilon)\times\mathbb{R}$, we have \begin{equation}\label{Xcos} \begin{array}{rcl} \nabla_{X}T&=&\nu SX, \\ X[\nu]&=&-\langle X,ST\rangle. \end{array} \end{equation} At this point, we present a fundamental result that will be used in the proof of Theorem \ref{thm-einstein}. 
In \cite{tojeiro}, Tojeiro presented a characterization of the hypersurfaces for which $T$ is a principal direction. Such characterization is given as follows. Let $g: \overline{M}^{n-1} \rightarrow Q^n(\varepsilon)$ be a hypersurface and let $g_s : \overline{M}^{n-1} \rightarrow Q^n (\varepsilon)$, $s \in I \subset \mathbb{R}$, be its family of parallel hypersurfaces, given by \begin{equation} g_s(x) = C_{\varepsilon} (s) g(x) + S_{\varepsilon} (s) N(x), \label{parallel-gs} \end{equation} where $x \in \overline{M}^{n-1}$, $N$ is a unit normal vector field to $g$ and the functions $C_{\varepsilon}$ and $S_{\varepsilon}$ are given by \begin{equation} C_{\varepsilon}(s) = \left\{ \begin{array}{l} \cos(s), \, \textnormal{ if }\, \varepsilon = 1, \\ \cosh(s), \, \textnormal{ if }\, \varepsilon = -1, \end{array} \right. \,\,\, \textnormal{ and } \,\,\, S_{\varepsilon}(s) = \left\{ \begin{array}{l} \sin(s), \, \textnormal{ if }\, \varepsilon = 1, \\ \sinh(s), \, \textnormal{ if }\, \varepsilon = -1. \end{array} \right. \label{e-functions} \end{equation} Let $f: M^n := \overline{M}^{n-1} \times I \rightarrow Q^n(\varepsilon) \times \mathbb{R}$ be a hypersurface defined by \begin{equation} f(x,s) = g_s(x) + a(s) \partial_{n+2}, \label{function-tojeiro} \end{equation} for a smooth function $a : I \rightarrow \mathbb{R}$ with positive derivative. In this context, the following theorem provides the mentioned characterization: \begin{theorem}[\cite{tojeiro}] Let $f$ be the map given in (\ref{function-tojeiro}), where $g_s$ is defined by (\ref{parallel-gs}). Then the map $f$ defines, at regular points, a hypersurface that has $T$ as a principal direction. Conversely, any hypersurface $f: M^n \rightarrow Q^n(\varepsilon) \times \mathbb{R}$, $n \geq 2$, with nowhere vanishing angle function that has $T$ as a principal direction is locally given in this way. 
\label{thm-tojeiro} \end{theorem} \begin{remark} The hypersurfaces for which the vector field $T$ is a principal direction constitute an important class of hypersurfaces of $Q^n(\varepsilon) \times \mathbb{R}$. This class includes the rotation hypersurfaces \cite{Veken3}, the hypersurfaces with constant sectional curvature \cite{manfio} and the hypersurfaces whose normal direction makes a constant angle with the vector field $\partial_{x_{n+2}}$ \cite{dillen2, dillen3, manfio, tojeiro}. Besides that, it was proved in \cite{tojeiro} that such a property is equivalent to $M^n$ having flat normal bundle as a submanifold of $\mathbb{E}^{n+2}$, resp. $\mathbb{L}^{n+2}$. This fact was also obtained for the two-dimensional case in \cite{Veken4, dillen}, where surfaces of $Q^2(\varepsilon) \times \mathbb{R}$ having $T$ as a principal direction were considered. \end{remark} For a hypersurface given locally by (\ref{function-tojeiro}), one has: \begin{eqnarray} |T| = \dfrac{a'(s)}{\sqrt{1+a'(s)^2}}, \label{mt} \\ \nu = \dfrac{1}{\sqrt{1+(a'(s))^2}}. \label{angle-function} \end{eqnarray} Also, the principal curvatures are given by \begin{equation} \begin{array}{rcl} \lambda_i &=& - \dfrac{a'(s)}{\sqrt{1+a'(s)^2}} \lambda_i^s, \, 1 \leq i \leq n-1, \\ \lambda_n &=& \dfrac{a''(s)}{(\sqrt{1+a'(s)^2})^3}, \end{array} \label{principal-curvatures} \end{equation} where $\lambda_n$ is the principal curvature associated to $T$ and $\lambda_i^s, \, 1 \leq i \leq n-1,$ are the principal curvatures of $g_s$, i.e., \begin{equation} \lambda_i^s = \dfrac{\varepsilon S_{\varepsilon}(s) + \lambda_i^g C_{\varepsilon}(s)}{C_{\varepsilon}(s) - \lambda_i^g S_\varepsilon(s)}, \label{principal-curvature-s} \end{equation} where $\lambda_i^g,\, 1 \leq i \leq n-1,$ are the principal curvatures of $g$. Finally, let us observe that, by equations (\ref{mt}) and (\ref{principal-curvatures}), we have \begin{equation} \lambda_n = \dfrac{d|T|}{ds}. 
\label{kn-edo} \end{equation} We also present in this section two results regarding isoparametric hypersurfaces in space forms. We refer the reader to the survey \cite{cecil} and to Section 3.1 of \cite{cecil-book} as references. Let us recall that $g: \overline{M}^{n-1} \rightarrow Q^{n}(\varepsilon)$ is said to be an isoparametric hypersurface if it has constant principal curvatures. In \cite{cartan-iso}, Cartan proved that a hypersurface $g: \overline{M}^{n-1} \rightarrow Q^{n}(\varepsilon)$ is isoparametric if and only if each parallel hypersurface $g_s$ as given in \eqref{parallel-gs} has constant mean curvature, i.e., the mean curvature of $g_s$ depends only on $s$ (see Theorem 3.6 in \cite{cecil-book}). In the same paper, Cartan established an important relation between the principal curvatures of isoparametric hypersurfaces. This relation is known as \emph{Cartan's identity} (or \emph{Cartan's formula}, following \cite{cecil-book}, page 91) and it is given as follows: let $g: \overline{M}^{n-1} \rightarrow Q^{n}(\varepsilon)$ be an isoparametric hypersurface with $d$ distinct principal curvatures and respective multiplicities $m_1, \ldots, m_d$. If $d>1$, for each $i$, $1 \leq i \leq d$, one has \begin{equation} \displaystyle \sum_{j \neq i} m_j \dfrac{\varepsilon+\lambda_i \lambda_j}{\lambda_i - \lambda_j} = 0. \label{cartan-id} \end{equation} In order to prove Theorem \ref{thm-einstein}, we will need the following lemmas. The first establishes the Ricci tensor of a hypersurface $f: M^{n} \rightarrow Q^{n}(\varepsilon) \times \mathbb{R}$, while the second shows that, on an Einstein hypersurface, the vector field $T$ is an eigenvector of the shape operator at $p \in M$, as long as $T \neq 0$ at $p$. In what follows, the Ricci tensor is given by \begin{equation} \textnormal{Ric}(Y,Z)=\textnormal{trace}\left\{ X \mapsto R(X,Y)Z \right\}. 
\label{ricci-general} \end{equation} \begin{lemma} Let $M^n$ be a hypersurface in $Q^{n}(\varepsilon) \times \mathbb{R}$. Then the Ricci tensor of $M^n$ is given by \begin{equation} \begin{array}{rcl} \textnormal{Ric}(Y,Z) &=& \varepsilon(n-1-|T|^2) \langle Y, Z \rangle + \varepsilon (2-n) \langle Y, T \rangle \langle Z, T \rangle \\ && + n H \langle SY, Z \rangle - \langle SY, SZ \rangle, \end{array} \label{ricci-tensor} \end{equation} where $Y, \, Z$ are arbitrary vector fields on $M^n$ and $H$ is the mean curvature. \end{lemma} \begin{proof} Let $\left\{ e_i \right\}_{i=1}^n$ be an orthonormal basis of principal directions, with $Se_i = \lambda_i e_i$. If we write $Y = \displaystyle \sum_{k=1}^n y_k e_k$, $Z = \displaystyle \sum_{k=1}^n z_k e_k$ and $T = \displaystyle \sum_{k=1}^n t_k e_k $, it follows from the Gauss equation (\ref{gauss}) that \begin{equation} \begin{array}{rcl} \langle R(e_k, Y)Z, e_k \rangle &=& \varepsilon \left[ \langle Y, Z \rangle - y_k z_k + t_k y_k \langle Z, T \rangle + t_k z_k \langle Y, T \rangle - \right. \\ && \left. - \langle Y, T \rangle \langle Z, T \rangle - t_k^2 \langle Y, Z \rangle \right] + \\ && + \lambda_k \langle SY, Z \rangle - \langle e_k, SZ \rangle \langle SY, e_k \rangle. \label{gauss-equation-ricci} \end{array} \end{equation} Consequently, by \eqref{gauss-equation-ricci} and \eqref{ricci-general}, the Ricci tensor is given by \eqref{ricci-tensor}. \end{proof} The next lemma gives a characterization of Einstein hypersurfaces with $T \neq 0$. \begin{lemma} Let $M^n$, $n > 3$, be an Einstein hypersurface in $Q^n(\varepsilon) \times \mathbb{R}$. If $T \neq 0$ at $p \in M^n$, then $T$ is an eigenvector for the shape operator at $p$. \label{eigen-t} \end{lemma} \begin{proof} Let $\left\{ e_i \right\}_{i=1}^n$ be an orthonormal basis of principal directions, with $Se_i = \lambda_i e_i$. Let us write $T = \displaystyle \sum_{k=1}^n t_k e_k$. If $T \neq 0$ at $p \in M^n$, there is at least one coefficient $t_k \neq 0$. 
Since $M^n$ is an Einstein manifold, its Ricci tensor satisfies $$ \textnormal{Ric}(e_i,e_j) = \rho \delta_{ij}, $$ for some constant $\rho$. Applying the Ricci tensor \eqref{ricci-tensor} to the orthonormal basis $\left\{ e_i \right\}_{i=1}^n$, we have \begin{equation} \textnormal{Ric}(e_i,e_j) = \left[ \varepsilon(n-1-|T|^2) + n H \lambda_i - \lambda_i \lambda_j \right] \delta_{ij} + \varepsilon (2-n) t_i t_j \label{ricci-basis}. \end{equation} By Equation (\ref{ricci-basis}) we must have \begin{equation} \left[ \varepsilon(n-1-|T|^2) + n H \lambda_i - \lambda_i \lambda_j - \rho \right] \delta_{ij} + \varepsilon (2-n) t_i t_j = 0 \label{einstein-condition} \end{equation} and, since $\varepsilon(2-n) \neq 0$, we conclude that $t_i t_j = 0$ for all $i, \, j$ with $i \neq j$. Consequently, there is only one coefficient $t_k \neq 0$ and therefore $T = t_k e_k$ at $p$. \end{proof} \section{Proof of the main result} \begin{proof}[Proof of Theorem \ref{thm-einstein}] If $T \equiv 0$, then $M^n$ is an open part of a slice $Q^{n}(\varepsilon) \times \left\{ t_0 \right\}$, where $t_0 \in \mathbb{R}$. Since the slices are isometric to $Q^{n}(\varepsilon)$, $M^n$ is a manifold with constant sectional curvature $\varepsilon$. Otherwise, let $\Omega$ be the open non-empty subset where $|T|> 0$. By Lemma \ref{eigen-t}, $T$ is a principal direction in $\Omega$. Without loss of generality we can write $T = t_n e_n$ and $ST = \lambda_n T$. Since $M^n$ is Einstein, we have from Equation (\ref{einstein-condition}) that \begin{eqnarray} \varepsilon(n-1-|T|^2) + nH \lambda_i - \lambda_i^2 - \rho &=& 0, \, \textnormal{ for } 1 \leq i \leq n-1 \label{equation-lambda-i} \\ \varepsilon(n-1)(1-|T|^2) + nH \lambda_n - \lambda_n^2 - \rho &=& 0. \label{equation-lambda-n} \end{eqnarray} Equation \eqref{equation-lambda-i} implies that there are at most two distinct values among the first $n-1$ principal curvatures.
In fact, from \eqref{equation-lambda-i} we have \begin{equation} (\lambda_i - \lambda_j)(nH - \lambda_i - \lambda_j) = 0 , \, \textnormal{ for } 1 \leq i, \, j \leq n-1. \label{lambda-i-j} \end{equation} Let us suppose by contradiction that there are three distinct principal curvatures $\lambda_{i_1}, \, \lambda_{i_2}, \, \lambda_{i_3}$. It follows from equation (\ref{lambda-i-j}) that $$ \begin{array}{rcl} \lambda_{i_1} + \lambda_{i_2} &=& n H, \\ \lambda_{i_2} + \lambda_{i_3} &=& n H, \\ \lambda_{i_3} + \lambda_{i_1} &=& n H. \end{array} $$ The equations above imply that $\lambda_{i_1} = \lambda_{i_2} = \lambda_{i_3}$, which is a contradiction. If $\lambda_1 = \lambda_2 = \ldots = \lambda_{n-1}=\mu$, the sectional curvature is constant. In fact, by equation \eqref{gauss-equation-ricci} we have \begin{eqnarray} \langle R(e_i,e_j)e_j,e_i\rangle &=& \varepsilon+\mu^2, \,\,\, 1 \leq i \neq j \leq n-1 \label{sectional-ij} \\ \langle R(e_i,e_n)e_n,e_i\rangle &=& \varepsilon(1-|T|^2) + \mu \lambda_n. \label{sectional-in} \end{eqnarray} It follows from \eqref{equation-lambda-n} that \begin{equation} \varepsilon(1-|T|^2) + \mu \lambda_n = \dfrac{\rho}{n-1}, \label{constant-sectional-n} \end{equation} therefore equation \eqref{sectional-in} implies that $\langle R(e_i,e_n)e_n,e_i\rangle$ is constant. By equation (\ref{equation-lambda-i}) we have \begin{equation} \varepsilon(1-|T|^2) + \mu \lambda_n = \rho - (n-2)(\mu^2 + \varepsilon). \label{constant-sectional-i} \end{equation} Combining equations (\ref{constant-sectional-n}) and (\ref{constant-sectional-i}), we have from \eqref{sectional-ij} that $$\langle R(e_i,e_j)e_j,e_i\rangle = \dfrac{\rho}{n-1}$$ and, consequently, the sectional curvature is equal to $\dfrac{\rho}{n-1}$ in $\Omega$. Next we will show that the possibility of two distinct principal curvatures does not occur.
In this case, denote the two distinct principal curvatures by $\lambda_1$ and $\lambda_2$, so that there are $p$ principal curvatures equal to $\lambda_1$ and $q$ principal curvatures equal to $\lambda_2$, with $\lambda_1 \neq \lambda_2$ and $p+q=n-1$. By equation \eqref{lambda-i-j} we have $\lambda_1 + \lambda_2 = nH$, consequently, \begin{eqnarray} \lambda_n &=& (1-p) \lambda_1 + (1-q) \lambda_2, \label{first-edo} \\ \lambda_1 \lambda_2 &=& \rho - \varepsilon(n-1-|T|^2), \label{second-edo} \end{eqnarray} where (\ref{second-edo}) is obtained by substituting $\lambda_1 + \lambda_2 = nH$ into (\ref{equation-lambda-i}). We will show that $\lambda_n \equiv 0$ in $\Omega$, and this fact will lead us to a contradiction. If $\nu \equiv 0$, it follows by \eqref{kn-edo} that $\lambda_n \equiv 0$ in $\Omega$. Otherwise, there is a point $p_0$ where $\nu(p_0) \neq 0$ and an open neighborhood $\Omega_0 \subset \Omega$ of $p_0$ on which $\nu \neq 0$. Therefore, by Lemma \ref{eigen-t}, we can apply Theorem \ref{thm-tojeiro} to conclude that $\Omega_0$ is given locally by (\ref{function-tojeiro}). In this case, \eqref{equation-lambda-n} implies that \begin{equation} \begin{array}{rcl} \varepsilon(n-1)(1-|T|^2) + \lambda_n (n-1)H_{g_s} - \rho &=& 0, \end{array} \label{einstein-tojeiro} \end{equation} where $H_{g_s}$ is the mean curvature of the parallel hypersurface $g_s$. Let us suppose by contradiction that $\lambda_n \neq 0$ in $\Omega_0$. It follows from (\ref{einstein-tojeiro}) that $H_{g_s}$ depends only on $s$, since equations (\ref{mt}) and (\ref{principal-curvature-s}) imply that the functions $|T|^2$ and $\lambda_n$ depend only on $s$. Hence the mean curvature of the parallel $g_s$ depends only on $s$, which implies that $g$ is an isoparametric hypersurface with two distinct principal curvatures. By Cartan's identity \eqref{cartan-id} we must have \begin{equation} \lambda_1^g \lambda_2^g + \varepsilon = 0.
\label{cartan-1-2} \end{equation} In this case, it follows directly from \eqref{mt}, (\ref{principal-curvatures}), (\ref{principal-curvature-s}) and \eqref{cartan-1-2} that \begin{equation} \lambda_1 \lambda_2 = - \varepsilon |T|^2. \label{product-curvatures-einstein} \end{equation} Replacing (\ref{product-curvatures-einstein}) in (\ref{second-edo}) we obtain $$|T|^2 = \dfrac{\varepsilon(n-1)-\rho}{2\varepsilon},$$ which implies that $|T|^2$ is constant. By equation (\ref{kn-edo}), it follows that $\lambda_n = 0$ in $\Omega_0$, which is a contradiction. Therefore, we have $\lambda_n \equiv 0$ in $\Omega$. It follows by (\ref{einstein-tojeiro}) that \begin{equation} |T|^2 = \dfrac{\varepsilon(n-1)-\rho}{\varepsilon(n-1)}. \label{mt-constant-einstein} \end{equation} In this case, equations (\ref{first-edo}) and (\ref{second-edo}) are rewritten as \begin{eqnarray} (p-1) \lambda_1 + (q-1) \lambda_2&=&0, \label{first-edo-2-einstein} \\ \lambda_1 \lambda_2 &=& \left( \dfrac{n-2}{n-1} \right) (\rho - \varepsilon(n-1)). \label{second-edo-2-einstein} \end{eqnarray} Since $|T| \neq 0$, equations \eqref{mt}, (\ref{principal-curvatures}), \eqref{mt-constant-einstein}, \eqref{first-edo-2-einstein} and \eqref{second-edo-2-einstein} imply that \begin{eqnarray} (p-1) \lambda_1^s + (q-1) \lambda_2^s&=&0, \label{first-edo-3-einstein} \\ \lambda_1^s \lambda_2^s &=& -\varepsilon(n-2). \label{second-edo-3-einstein} \end{eqnarray} We claim that the system above has no solution for $n>3$. In fact, Equations \eqref{first-edo-3-einstein} and \eqref{second-edo-3-einstein} imply that the $\lambda_i^s$ are constant, unless $p=q=1$, which cannot occur since $p+q=n-1>2$. Therefore, evaluating at $s=0$ we conclude that $g$ is isoparametric. By Cartan's identity \eqref{cartan-id}, $\lambda_1^g \lambda_2^g + \varepsilon = 0$. Together with Equation \eqref{second-edo-3-einstein} at $s=0$, this implies $n=3$, which is not the case.
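For clarity, we record the specialization of Cartan's identity used above: for an isoparametric hypersurface with $d=2$ distinct principal curvatures, \eqref{cartan-id} with $i=1$ contains a single term,
\begin{equation*}
m_2 \, \dfrac{\varepsilon + \lambda_1^g \lambda_2^g}{\lambda_1^g - \lambda_2^g} = 0,
\end{equation*}
and since $m_2 \geq 1$ and $\lambda_1^g \neq \lambda_2^g$, the numerator must vanish, yielding $\lambda_1^g \lambda_2^g = -\varepsilon$.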
We conclude that, in the open subset $\Omega$ where $|T| > 0$, the sectional curvature is a constant $K_0 = \dfrac{\rho}{n-1}$. If $M^n \setminus \Omega$ has empty interior, we have by continuity that $M^n$ has constant sectional curvature $K_0$. Otherwise, there is an open subset $\mathcal{O} \subset M^n \setminus \Omega$, where $T \equiv 0$. As we saw at the beginning of the proof, $\mathcal{O}$ is an open part of a slice $Q^{n}(\varepsilon) \times \left\{ t_1 \right\}$, for some $t_1 \in \mathbb{R}$, and the sectional curvature in $\mathcal{O}$ is constant and equal to $\varepsilon$, which implies $\rho = (n-1) \varepsilon$. Since $\rho$ is constant in $M^n$, we must have $K_0 = \varepsilon$, and hence the sectional curvature in $\Omega \cup \mathcal{O}$ is $\varepsilon$ for any open subset $\mathcal{O}$ where $T \equiv 0$. Again we use the continuity of the sectional curvature to conclude that $M^n$ has constant sectional curvature equal to $\varepsilon$. \end{proof} \end{document}
\begin{document} \title{Certified Robustness in Federated Learning} \begin{abstract} Federated learning has recently gained significant attention and popularity due to its effectiveness in training machine learning models on distributed data privately. However, as in the single-node supervised learning setup, models trained in federated learning suffer from vulnerability to imperceptible input transformations known as adversarial attacks, calling their deployment in security-related applications into question. In this work, we study the interplay between federated training, personalization, and certified robustness. In particular, we deploy randomized smoothing, a widely-used and scalable certification method, to certify deep networks trained in a federated setup against input perturbations and transformations. We find that the simple federated averaging technique is effective in building not only more accurate, but also more certifiably-robust models, compared to training solely on local data. We further analyze the effect of personalization, a popular technique in federated training that increases the model's bias towards local data, on robustness. We show several advantages of personalization over both alternatives~(that is, training only on local data and plain federated training) in building more robust models with faster training. Finally, we explore the robustness of mixtures of global and local~(\textit{i.e. } personalized) models, and find that the robustness of local models degrades as they diverge from the global model. Our implementation can be found \href{https://github.com/MotasemAlfarra/federated-learning-with-pytorch}{here}. \end{abstract} \section{Introduction} Machine learning models in general, and deep neural networks (DNNs) in particular, have demonstrated remarkable performance in several domains, including computer vision~\cite{krizhevsky2012imagenet} and natural language processing~\cite{hinton2012speech}, among others~\cite{lecun2015deep}.
Despite this success, DNNs are brittle against small carefully-crafted perturbations in their input space~\cite{szegedy2014intriguing, goodfellow2015explaining}. That is, a DNN $f_\theta : \mathbb R^d\rightarrow \mathcal P(\mathcal Y)$ can produce different predictions for the inputs $x \in \mathbb R^d$ and $x+\delta$, although the adversarial perturbation $\delta$ is small, and thus $x$ and $x+\delta$ are indistinguishable to the human eye. Moreover, DNNs are also susceptible to input transformations that preserve an image's semantic information, such as rotation, translation, and scaling~\cite{engstrom2018rotation}. This observation raises security concerns about deploying such models in security-critical applications, \textit{e.g. } self-driving cars. Consequently, this phenomenon sparked substantial research efforts towards building machine learning models that are not only accurate, but also robust. That is, building models that both correctly classify an input $x$ and maintain their prediction as long as $x$ preserves its semantic content. Further interest arose in theoretically characterizing the output of models whose input was subjected to perturbations. Whenever a theoretical guarantee exists about a model's robustness for classifying inputs, $f_\theta$ is said to be \emph{certifiably} robust~\cite{wong2018provable}. Among several methods that built such certifiably robust models, Randomized Smoothing (RS)~\cite{cohen2019certified} is arguably one of the most effective, scalable, and popular approaches to certify DNNs. In a nutshell, RS predicts the most probable class when the classifier's input is subjected to additive Gaussian noise. While RS originally operated on pixel-intensity perturbations, a later work extended this technique to certify against input deformations, such as rotation and translation~\cite{deformrs}.
The problem of training large-scale machine learning models in the real world may need to account for data privacy constraints that arise in distributed setups. Federated learning (FL)~\cite{fedavg, FEDLEARN, konevcny2016federated} is a viable solution to this problem, which is now commonly used in popular services~\cite{kairouz2021advances}. Although the certified robustness of models in traditional learning settings has been widely studied, few works consider the interaction between certified robustness and the federated setup. Specifically, federated learning distributes the training process across a (possibly large) set of clients (each with their own local data) and privately builds a global model without leaking nor sharing local raw data. In the federated learning setup, Zizzo~\textit{et al.}~\cite{zizzo2020fat} showed that (1)~adversarial attacks can reduce the accuracy of state-of-the-art federated learning models to 0\%, and (2)~adversarial robustness is higher in the centralized setting than in the federated setting, illustrating the difficulty of enhancing adversarial robustness. We identify two key and challenging settings for analyzing certified robustness in federated learning: \textbf{(i)} each client has insufficient data to fully train a robust model, and \textbf{(ii)} each client may be interested in building a personalized model that is robust against different transformations. In this work, we set out to study the certified robustness of models trained in a federated fashion. We deploy Randomized Smoothing to measure certified robustness against both pixel-intensity perturbations and input deformations. We first analyze when federated training benefits model robustness. We show that in the few-local-data regime---where clients have insufficient data---federated averaging improves not only model performance but also its robustness.
Furthermore, we explore how certified robustness gains from personalization~\cite{smith2017federated, mansour2020three, arivazhagan2019federated, explicit_personalization}, \textit{i.e. } fine-tuning the global model on local data. We show that personalization can provide significant robustness improvements even when federated averaging operates on models trained without augmentations aimed at enhancing robustness. Finally, we analyze a version of the recently-proposed federated mixture of global and personalized models~\cite{implicit_personalization, hanzely2020lower} through a certified-robustness lens. \section{Related Work} \textbf{Certified Adversarial Robustness.} The seminal work of Szegedy~\textit{et al.}~\cite{szegedy2014intriguing} demonstrated how DNNs are vulnerable to small perturbations in their input, now called adversarial attacks. Subsequent works noted how the brittleness against these attacks was widespread and easily exploitable~\cite{goodfellow2015explaining, carlini2017towards}. This observation led to further works on defense mechanisms that improved adversarial robustness~\cite{kannan2018adversarial, madry2018towards}, and on attacks that aimed at breaking these mechanisms~\cite{carlini2017towards, moosavi2016deepfool}. Seeking provable guarantees instead, Cohen~\textit{et al.}~\cite{cohen2019certified} proposed Randomized Smoothing~(RS), a certification approach that proved to scale successfully with models and datasets. The original RS formulation certified models against pixel-level additive perturbations, and was studied in the federated learning setup by Chen~\textit{et al.}~\cite{chen2021certifiably}. Recently, DeformRS~\cite{deformrs} reformulated RS to consider parameterized deformations, granting robustness certificates against more general and semantic perturbations such as rotation.
\textbf{Federated Learning.} Federated Learning (FL) is a data-decentralized machine learning setting in which a central server coordinates a large number of clients (\textit{e.g. } mobile devices or whole organizations) to collaboratively train a model~\cite{fedavg, FEDLEARN,konevcny2016federated, wang2021field}. In particular, FL has been successfully deployed by Google~\cite{hard2018federated}, Apple~\cite{apple19wwdc}, NVIDIA~\cite{nvidia2020}, and many others, even in safety- and security-critical settings, which motivates our work. Since the inherently multi-device nature of FL introduces data heterogeneity, interest grew in techniques for model personalization~\cite{implicit_personalization, hanzely2020lower, FL-MAML-Jakub,li2021ditto} to improve performance on each device's data~\cite{mansour2020three, explicit_personalization, jiang2019improving, paulik2021federated}. \textbf{Robustness in FL.} Concerns about the adversarial vulnerability of machine learning models have also been studied in the FL setup~\cite{bhagoji2019analyzing}. While most prior works focused on training-time adversarial attacks such as Byzantine attacks~\cite{he2020byzantine}, and model~\cite{bhagoji2019analyzing} or data poisoning~\cite{steinhardt2017certified, sun2021data}, recent works studied test-time robustness. In particular, the works of Zizzo~\textit{et al.}~\cite{zizzo2020fat} and Shah~\textit{et al.}~\cite{shah2021adversarial} studied the incorporation of adversarial training~\cite{madry2018towards}, arguably the most effective empirical defense against adversarial attacks, into the FL setup. Closest to our study is the recent work of Chen~\textit{et al.}~\cite{chen2021certifiably}, which explored certification via RS in FL. In contrast to~\cite{chen2021certifiably}, instead of pixel-level perturbations, we study realistic (\textit{e.g. } rotation and translation) perturbations.
\section{Methodology} \textbf{Notation.} We consider the standard image classification problem where a classifier $f_\theta : \mathbb R^d \rightarrow \mathcal P(\mathcal Y)$, \textit{e.g. } a DNN with parameters $\theta$, maps the input $x\in \mathbb R^{d}$ to the probability simplex $\mathcal P(\mathcal Y)$ over $k$ classes, where $\mathcal Y = \{1, 2, \dots, k\}$. That is, $\sum_{i=1}^k f_\theta^i(x) = 1$ with $f_\theta^i(x) \geq 0$ being the $i^{\text{th}}$ element of $f_\theta(x)$, representing the probability that $x$ belongs to the $i^\text{th}$ class. \textbf{Randomized Smoothing.} Randomized smoothing constructs a new ``smooth'' classifier from an arbitrary classifier $f_\theta$. In a nutshell, when this smooth classifier is queried at input $x$, it returns the average prediction of $f_\theta$ when its input $x$ is subjected to isotropic additive Gaussian noise, that is: \begin{equation}\label{eq:soft-smoothing} g (x) = \mathbb E_{\epsilon \sim \mathcal N(0, \sigma^2 I)}\left [f_\theta(x+\epsilon)\right]. \end{equation} Let $g$ predict label $c_A$ for input $x$ with some confidence, \textit{i.e. } $\mathbb{E}_\epsilon[f^{c_A}_\theta(x+\epsilon)] = p_A \ge p_B = \max_{c \neq c_A} \mathbb{E}_\epsilon[f^c_\theta(x+\epsilon)]$; then, as proved in~\cite{Zhai2020MACER:}, $g$'s prediction is certifiably robust with radius: \begin{equation} \begin{aligned} \label{eq:certification_radius} R = \frac{\sigma}{2} \left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right), \end{aligned} \end{equation} where $\Phi^{-1}$ is the inverse CDF of the standard Gaussian distribution. That is, $\text{arg}\max_i g^i(x+\delta) = \text{arg}\max_i g^i(x),\:\forall \|\delta\|_2 \leq R$. While the original RS framework certified against additive pixel perturbations in images, DeformRS~\cite{deformrs} extended RS to certify against image \textit{deformations}. DeformRS achieved this objective by proposing a \textit{parametric-domain smooth classifier}.
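As a concrete illustration of the certificate in Eq.~\eqref{eq:certification_radius}, the following minimal sketch (ours, not the reference implementation of~\cite{cohen2019certified}) computes the certified radius from estimated prediction probabilities, together with the simplified bound obtained by taking $p_B = 1 - p_A$:

```python
from scipy.stats import norm


def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """l2 certified radius R = (sigma / 2) * (Phi^{-1}(p_A) - Phi^{-1}(p_B))."""
    if p_a <= p_b:
        return 0.0  # no certificate: the smooth classifier abstains
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))


def certified_radius_simplified(p_a: float, sigma: float) -> float:
    """With p_B bounded by 1 - p_A, the radius reduces to sigma * Phi^{-1}(p_A),
    since Phi^{-1}(1 - p) = -Phi^{-1}(p)."""
    return sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
```

Both functions agree whenever $p_B = 1 - p_A$, and no radius is certified unless $p_A > p_B$.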
In particular, given an image $x$ with pixel coordinates $p$, a parametric deformation function $\nu_\phi$ with parameters $\phi$ (\textit{e.g. } if $\nu$ is a rotation, then $\phi$ is the angle of rotation) and an interpolation function $I_T$, DeformRS defined the parametric-domain smooth classifier as: \begin{equation}\label{eq:parametric-smooth-classifier} g_\phi(x, p) = \mathbb E_{\epsilon\sim \mathcal D}\left[ f_\theta(I_T(x, p+\nu_{\phi+\epsilon})) \right]. \end{equation} In a nutshell, $g$ outputs the average prediction of $f_\theta$ over transformed versions of the input $x$. Note that this formulation perturbs the pixels' \textit{location}, rather than their intensities. DeformRS showed that, analogous to the smoothed classifiers in RS, parametric-domain smooth classifiers are certifiably robust against perturbations to the parameters of the deformation function with radius \begin{align}\label{cor:parametric-certification} \begin{split} &\|\delta\|_1 \leq \sigma \left(p_A - p_B\right) \qquad \qquad \qquad \quad \,\,\,\, \text{for } \mathcal D = \mathcal{U}[-\sigma, \sigma], \\ &\|\delta\|_2 \leq \frac{\sigma}{2}\left(\Phi^{-1}(p_A) - \Phi^{-1}(p_B)\right) \qquad \text{for } \mathcal D = \mathcal N(0, \sigma^2I). \end{split} \end{align} Eq.~\eqref{cor:parametric-certification} states that, as long as the parameters of the deformation function (\textit{e.g. } rotation angle) are perturbed by a quantity upper bounded by the certified radius, $g$'s prediction will remain constant. \textbf{Specialization to Federated Training.} Earlier works proposed various training frameworks to enhance the robustness of smooth classifiers. The schemes studied by these frameworks ranged from simple data augmentation~\cite{cohen2019certified} to more sophisticated techniques, including adversarial training~\cite{salman2019provably} and regularization~\cite{Zhai2020MACER:}. In this work, we robustify the classifier via simple and inexpensive data augmentation.
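The parametric-domain smooth classifier in Eq.~\eqref{eq:parametric-smooth-classifier}, whose expectation the data augmentation above approximates with a single sample, can be sketched for the rotation deformation as follows (a Monte Carlo estimate with a placeholder probability classifier \texttt{f}; our illustration, not the DeformRS implementation):

```python
import numpy as np
from scipy.ndimage import rotate


def smooth_rotation_prediction(f, x, sigma=0.5, n_samples=100, rng=None):
    """Monte Carlo estimate of g(x) = E_{eps ~ N(0, sigma^2)}[f(rotate(x, eps))]
    for the rotation deformation; `f` maps an image to class probabilities."""
    rng = np.random.default_rng(rng)
    angles = rng.normal(0.0, sigma, size=n_samples)  # rotation angles (degrees)
    preds = [f(rotate(x, a, reshape=False, mode="nearest")) for a in angles]
    return np.mean(preds, axis=0)  # averaged class-probability vector
```

Averaging predictions over transformed copies of the input in this way perturbs the pixels' locations rather than their intensities, as noted above.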
We select this approach since our main aim is to analyze the effect of federated training on certified robustness in isolation from additional and more sophisticated training schemes. In addition, this setup allows compatibility with other FL-specific constraints and technologies, such as differential privacy and secure aggregation, which can be essential for real-world deployment~\cite{kairouz2021advances}. For federated training, we employ the common federated averaging~\cite{fedavg, Povey2014} technique. In a nutshell, we consider the case when a dataset is distributed across multiple clients in a non-overlapping manner. We assume that all clients share the architecture $f_\theta$, to allow for weight averaging across clients. Furthermore, we study three different training schemes, where each client minimizes the following regularized empirical risk~\cite{implicit_personalization}: \begin{equation}\label{eq:implicit-personalization} \min_{\theta_1, \dots, \theta_n} \frac{1}{n}\sum_{i=1}^n \lambda \mathcal L_i(\theta_i) + (1-\lambda)\left\| \theta_i - \bar \theta \right\|_2^2, \end{equation} where $\mathcal L_i$ is a training loss function (\textit{e.g. } cross entropy) evaluated on local data of client $i$, $\theta_i$ represents the $i^\text{th}$ client's model parameters, $\lambda\in[0, 1]$ is a personalization parameter, and $\bar \theta = \frac{1}{n}\sum_{i=1}^n\theta_i$ is the average model. \textbf{Local Training.} Each client trains solely on their own local data, without communication with other clients. At test time, each client employs their own local model. This setting is equivalent to minimizing the risk in Eq.~\eqref{eq:implicit-personalization} with $\lambda=1$. \textbf{Federated Training.} All clients collaborate in building one global model. This is achieved by iteratively performing (1) client-side model updates with optimization on local data (\textit{i.e. 
} solving Eq.~\eqref{eq:implicit-personalization} with $\lambda=1$), and (2) communication of client models to the server, which averages all models and broadcasts the result back to the clients (\textit{i.e. } solving Eq.~\eqref{eq:implicit-personalization} with $\lambda=0$). This process is repeated until convergence or for a fixed number of communication rounds. At test time, all clients employ the global model sent from the server at the last communication round. \textbf{Personalized Training.} Since~\cite{yu2020salvaging} exposed how the basic FL formulation with $\lambda = 1$ and $\theta_1 = \dots = \theta_n$ degrades the performance of some clients, we explore a personalization approach. Namely, upon federated training convergence, each client fine-tunes the global model communicated in the last round with a few optimization steps on their local data (\textit{i.e. } solving Eq.~\eqref{eq:implicit-personalization} with $\lambda=1$). \textbf{Mixture of models.} We also consider a more sophisticated \textit{personalized} ERM setting via a mixture of global and local models~\cite{implicit_personalization}, seeking an explicit trade-off between the global model and the local models for $\lambda\in(0,1)$. Our implementation for solving problem~\eqref{eq:implicit-personalization} for every client $i$ requires local gradient descent steps (with step size $\gamma$) and a mixing of current global and local models through: \begin{equation}\label{eq:mixture_update} \theta_i \leftarrow \theta_i - \gamma \left( \lambda \nabla \mathcal{L}_i(\theta_i) + 2(1-\lambda) (\theta_i - \bar{\theta})\right), \end{equation} alternating with communication to the server for computing the current average model $\bar \theta$. At test time, each client uses its corresponding personalized model $\theta_i$. 
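As a simplified illustration (not our actual training code), one communication round of the mixture scheme, with the client update of Eq.~\eqref{eq:mixture_update} applied to flattened parameter vectors and placeholder local gradients, can be sketched as:

```python
import numpy as np


def mixture_round(thetas, local_grad, lam, step):
    """One communication round of the mixture-of-models scheme: each client
    steps on lam * grad_i + 2 * (1 - lam) * (theta_i - theta_bar), then the
    server recomputes the average model."""
    theta_bar = np.mean(thetas, axis=0)
    new_thetas = [
        t - step * (lam * local_grad(i, t) + 2.0 * (1.0 - lam) * (t - theta_bar))
        for i, t in enumerate(thetas)
    ]
    return new_thetas, np.mean(new_thetas, axis=0)
```

Setting $\lambda=1$ recovers purely local steps, while small $\lambda$ pulls every client towards the average model, interpolating between the local and federated schemes described above.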
While the effects of federated training and personalization on model performance are widely explored in the literature, their impact on the trained model's certified robustness remains unclear. \section{Experiments} \begin{figure} \caption{\textbf{Certified Accuracy against rotation.} \label{fig:rotation-ablating-num-clients}} \end{figure} \textbf{Training Setup.} We fix ResNet18~\cite{resnet} as our architecture and train on the CIFAR10~\cite{cifars} and MNIST~\cite{lecun2010mnist} datasets. We split the training and test sets across clients in a random and non-overlapping fashion. Note that this split forms a best-case condition for local training, while remaining compatible with federated training and its associated data-privacy constraints. We conduct 90 epochs of training, divided into 45 communication rounds, \textit{i.e. } 2 epochs of local training per communication round. We set the batch size to 64, and use a starting learning rate of $0.1$, which is decreased by a factor of 10 every 30 epochs (15 communication rounds). Unless specified otherwise, given a transformation we are interested in robustifying against (\textit{e.g. } rotation), we train the model by augmenting the data with such transformation. This procedure is equivalent to estimating the output of the smooth classifier in Eq.~\eqref{eq:parametric-smooth-classifier} with one sample. We set $\sigma\in\{0.1, 0.5\}$ for rotation and translation and $\sigma \in \{0.12, 0.5\}$ for pixel perturbations, following~\cite{cohen2019certified, deformrs}. For local models, we train each client on their local data only with the aforementioned setup, without any communication. \textbf{Certification Setup.} For certification, we use the public implementation from~\cite{cohen2019certified} for pixel-intensity perturbations, and the one from~\cite{deformrs} for rotation and translation.
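Concretely, the certification procedure estimates the top class from a small number of noisy predictions and then lower-bounds its probability with a one-sided Clopper--Pearson confidence bound; the following minimal sketch (our illustration of the statistical test, assuming the standard bound, not the exact code of~\cite{cohen2019certified}) shows the bound and the resulting certification decision when $p_B$ is bounded by $1 - p_A$:

```python
from scipy.stats import beta


def lower_conf_bound(k: int, n: int, alpha: float) -> float:
    """One-sided (1 - alpha) Clopper-Pearson lower bound on a binomial
    success probability, given k top-class hits out of n noisy predictions."""
    if k == 0:
        return 0.0
    return float(beta.ppf(alpha, k, n - k + 1))


def certifies(k: int, n: int, alpha: float = 0.001) -> bool:
    """With p_B bounded by 1 - p_A, certification requires p_A > 1/2."""
    return bool(lower_conf_bound(k, n, alpha) > 0.5)
```

The certified radius then follows by plugging the lower bound on $p_A$ into Eq.~\eqref{eq:certification_radius} or Eq.~\eqref{cor:parametric-certification}.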
We use $100$ Monte Carlo samples for computing the top prediction $c_A$, and $100$k samples for estimating a lower bound on the prediction probability $p_A$; we further use a failure probability of $\alpha = 0.001$, and bound the runner-up prediction via $p_B = 1-p_A$. For experiments on deformations, we choose $I_T$ to be a bi-linear interpolation function, following~\cite{deformrs, perez20223deformrs}. To evaluate each client's certification, we fix a random subset of $500$ local samples. Finally, we plot the average certified accuracy curves across clients, highlighting the minimum and maximum values as the limits of shaded regions. We report the ``certified accuracy at radius $R$'', defined as the percentage of test samples that are both classified correctly and have a certified radius of at least $R$. We report the certified accuracy curves for all clients in the appendix. \textbf{Is Federated Training Good for Robustness?} We first investigate the benefits of federated training on certified robustness. To this end, we use a rotation transformation and vary the number of clients across which the dataset is distributed. We conduct experiments with 4, 10, and 20 clients, and report the results in Figure~\ref{fig:rotation-ablating-num-clients}. From these results, we draw the following observations. \textbf{(i)} When the number of clients is small (4 clients), \textit{i.e. } when each client holds a large amount of data, local training can be sufficient for building a model that is accurate \textit{and} robust. \textbf{(ii)} As the number of clients increases (a typical scenario in FL) and the amount of data available to each client shrinks, federated averaging shines in providing a model with better performance and robustness. In particular, we find that federated training can bring robustness improvements of over 20\% in the few-local-data regime (10 or 20 clients).
This can be interpreted as follows: when the local data sufficiently approximates the real data distribution, there is no advantage in conducting federated training. On the other hand, when the local data is insufficient for approximating the data distribution, federated learning is useful. \textbf{(iii)} The performance and robustness of personalized (fine-tuned) models are comparable to those of the global federated model. We attribute this observation to the federated model being trained with data augmentation, and thus converging to a reasonably-good local optimum. Hence, personalizing this model with a few epochs of local training would not have a very significant impact. We delve more into the benefits of personalization in the next section. \textbf{(iv)} Finally, we observe that, as the number of clients increases, the performance difference between the best and worst performing clients is significantly larger for local training. This is evidenced by the spread in Figure~\ref{fig:rotation-ablating-num-clients}, where the shaded area is notably larger for local training. We elaborate more on this phenomenon in the appendix. \begin{figure} \caption{\textbf{Comparison between personalization and local training.} \label{fig:nominal-pretraining-all}} \end{figure} \textbf{Is Personalization Beneficial for Robustness?} Next, we investigate the effect of personalization on certified robustness. In essence, the experiments in Figure~\ref{fig:rotation-ablating-num-clients} assumed all clients train robustly, aiming at collaboratively building a global robust model. However, in a more realistic federated setup, this is not necessarily the case. For instance, different clients could target robustness against different transformations or, in the extreme case, some clients could disregard robustness entirely and simply target accuracy. To study this case, we use 10 clients and modify the averaging and personalization phases as follows.
During the federated training phase, all models train without augmentations (\textit{i.e. } targeting accuracy and disregarding robustness). During the personalization phase, we augment the data with the transformation against which the client is interested in being robust. Experimentally, we personalize against pixel perturbations, rotations, and translations, and report results in Figure~\ref{fig:nominal-pretraining-all}. Our results show that personalization has a significant impact on improving model robustness. Moreover, this improvement is observed across the board, irrespective of the choice of the augmentation or $\sigma$. \textbf{Advantages of Personalization.}\label{sec:advantages-of-personalization} Further, we address the following question: when is it better to train robustly on local data than to robustly personalize a nominally-trained federated model? We examine this question by comparing the performance of a nominally-trained federated model that is robustly personalized against that of local robust training; this comparison is displayed in Figure~\ref{fig:nominal-pretraining-all}. We observe the following: \textbf{(i)} When the target local transformation is similar to the target federated one (small values of $\sigma$), personalized models significantly outperform local models, despite robust training only occurring in the personalization phase. That is, with far less computation and much faster training, federated training with personalization offers a better alternative to training only on local data. Furthermore, the performance gap increases with the number of clients---equivalent to assigning less local data to each client. \textbf{(ii)} Following intuition, as the difference between the transformations used in the federated and personalization phases increases, the performance of personalized models becomes comparable to that of their local counterparts.
Note that this is an artifact of choosing a relatively small number of clients among which the dataset is distributed. Indeed, in the appendix we report how a larger number of clients ($N=100$) displays significant robustness improvements of personalized models over local models. \begin{figure} \caption{\textbf{Mixture of models.} \label{fig:models-mixture-translation-rotation} \end{figure} \textbf{FL with Mixture Model.} Finally, we explore how using the mixture-of-models formulation in Eq.~\eqref{eq:implicit-personalization} affects the robustness of the resulting local models. Note that throughout all the aforementioned experiments, we studied the case where $\lambda=1$ for local training, or $\lambda=0$ for federated training. To this end, we compare the performance of models trained with various values of $\lambda\in(0, 1)$, leading to different personalization levels. That is, during local training, each client leverages the update step in \eqref{eq:mixture_update} before communicating the local model to the orchestrating server for an averaging step. Figure~\ref{fig:models-mixture-translation-rotation} plots the results of this experiment. The first two columns display certified accuracy curves against rotation and translation for different $\lambda$ values. The last column summarizes the results in terms of the Average Certified Radius (ACR), \textit{i.e. } the average certified radius of correctly-classified examples~\cite{Zhai2020MACER:}. From these results we observe that: \textbf{(i)} for every deformation (rotation, translation) and every $\sigma$ value, the mixture-of-models formulation improves on the robustness attained by standard federated training. This phenomenon is consistent across almost \textit{any} value of the personalization parameter $\lambda$. In particular, we observe that setting $\lambda=0.1$, corresponding to the \textit{least} personalized local models, yields the best results. 
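For concreteness, both summaries used above (a point on a certified accuracy curve, and the ACR) can be computed from per-example certification output, i.e., a certified radius and a correctness flag for each test point. The following minimal numpy sketch is ours, not the paper's code; the function names are hypothetical, and the ACR follows the phrasing above, the average radius over correctly-classified examples.

```python
import numpy as np

def certified_accuracy(radii, correct, R):
    """Fraction of test points that are correctly classified AND
    certified at radius >= R (one point on a certified accuracy curve)."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(np.mean(correct & (radii >= R)))

def average_certified_radius(radii, correct):
    """ACR as phrased above: the average certified radius of the
    correctly-classified examples (0 if none are correct)."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return float(radii[correct].mean()) if correct.any() else 0.0

# toy example: four test points with their certified radii and correctness
radii = [0.10, 0.25, 0.40, 0.05]
correct = [True, True, False, True]
acc_at_020 = certified_accuracy(radii, correct, R=0.20)  # only the 0.25 point qualifies
acr = average_certified_radius(radii, correct)           # mean of 0.10, 0.25, 0.05
```

Note that some works normalize the ACR by the full test-set size rather than by the number of correct examples; the sketch follows the wording used in the text.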
\textbf{(ii)} By contrasting these results with Figure~\ref{fig:nominal-pretraining-all}, we notice that, across clients, the spread of certified accuracy is smaller than under both naïve personalization and local training. \appendix \onecolumn \begin{center} \huge \textbf{Certified Robustness in Federated Learning \\ Supplementary Materials} \end{center} \section{Additional Experiments} \subsection{Is Federated Averaging Beneficial for Robustness?} \begin{figure} \caption{\textbf{Certified Robustness against Rotation starting from Affine. Effect of Personalization} \label{fig:affine-pretraining} \end{figure} In Section 4, we analyzed the benefits of personalization on certified robustness when all clients train nominally. Next, we address the following question: what if all other clients aim to build a model robust against transformations that are different from the desired one? We analyze the scenario where all clients train with an augmentation that differs from the one a certain client desires. Thus, we deploy augmentation with the general affine deformation~\cite{deformrs} during the federated training phase, and then personalize with either rotation or translation. We report the results in Figure~\ref{fig:affine-pretraining} and illustrate that personalization has a positive impact on robustness even when federated training is conducted with a different augmentation (an affine deformation, in this case). In Figure~\ref{fig:rotation-ablating-num-clients}, we analyzed the effect of varying the number of clients on the certified robustness against rotations. For completeness, we repeat the experiment with translations, and report the results in Figure~\ref{fig:n_clients_ablation_on_translation}. In particular, in this experiment we distribute CIFAR10 across 4 and 10 clients, and vary $\sigma\in\{0.1, 0.5\}$. 
Analogous to our earlier observation, as the number of clients increases, the local data becomes insufficient, and the performance and robustness of local training deteriorate. \begin{figure} \caption{\textbf{Certified Accuracy against translation.} \label{fig:n_clients_ablation_on_translation} \end{figure} \begin{figure} \caption{\textbf{Nominal pre-training and robust personalization with 100 clients.} \label{fig:100-clients-all} \end{figure} \subsection{100 Clients Experiment} In Section~\ref{sec:advantages-of-personalization}, we observed that when the augmentation deployed in the federated training phase differs from the one used in the personalization phase, the performance of local models is comparable to that of personalized models. We scale our experiments by distributing the dataset across a larger number of clients (100). This makes the local data of each client insufficient for training local models. We plot the certified accuracy curves of local training, federated training, and personalized training in Figure~\ref{fig:100-clients-all}. We observe that the performance gap between the personalized and local models increases, providing further evidence to support our hypothesis from Section~\ref{sec:advantages-of-personalization}. That is, federated training and personalization, compared to local training, provide a more reliable approach that improves both performance and robustness in more realistic federated scenarios. \subsection{Ablation for All Clients} In all of our experiments, we reported the certified accuracy curves averaged across all clients, highlighting the minimum- and maximum-performing clients with a shaded region. For completeness, we plot the certified accuracy of all clients for some of our experiments. In particular, for experiments with 4 clients, we show our results in Figure~\ref{fig:ablations-all-clients-4-clients}, and we show the results on 10 clients in Figure~\ref{fig:ablations-all-clients-10-clients}. 
On each plot, we show the performance variation across clients when local training is conducted (compared to personalized training). We observe, in line with our earlier observations, that the spread is significantly higher for local training. Note that this behaviour persists despite the fact that we distributed the data in a uniform and homogeneous fashion across clients. Hence, in more realistic scenarios with heterogeneous splits, this spread could grow, highlighting the reliability of federated training and personalization in building accurate and robust models. We believe this analysis is an interesting setup for future work. \begin{figure} \caption{\textbf{Ablations for all clients. 4 Clients experiment} \label{fig:ablations-all-clients-4-clients} \end{figure} \begin{figure} \caption{\textbf{Ablations for all clients. 10 Clients experiment} \label{fig:ablations-all-clients-10-clients} \end{figure} \subsection{Experimental Details} The experiments reported in the paper used the MNIST and CIFAR-10~\cite{cifars} datasets\footnote{Available at \texttt{https://www.cs.toronto.edu/~kriz/cifar.html}, under an MIT license.}. We used the splits provided in PyTorch for the training and testing sets. Moreover, we adopted the publicly-available certification code, as noted in the experimental section. \subsection{FL with Mixture Model} Finally, we experiment with the effect of the mixture model on the certified accuracy of the trained model. In Section~4, our experiments analyzed rotation and translation deformations on CIFAR10. For completeness, in Figure~\ref{fig:mixture-model-experiments} we also report results against (1) pixel perturbations on CIFAR10 and (2) rotation, translation, and pixel perturbations on MNIST. Our results further confirm our observation from Section~4: the mixture model provides a consistent improvement in certified robustness for most considered values of $\lambda$. 
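To make the role of the personalization level $\lambda$ concrete, the sketch below shows one plausible instantiation of a client's local step in a mixture-of-models scheme. The exact update referenced in the text is not reproduced here, so this interpolation of the personal model and the federated model, along with all names, is our assumption: $\lambda=1$ reduces to purely local training, and $\lambda=0$ leaves the personal component untouched, recovering plain federated training.

```python
import numpy as np

def mixture_local_step(v_i, w, grad_f_i, lam, lr=0.1):
    """One hypothetical local step of mixture-of-models training.

    v_i      : client i's personal model (parameters as a flat vector)
    w        : current federated (averaged) model
    grad_f_i : gradient oracle for client i's local loss
    lam      : personalization level in [0, 1]

    The client descends its local loss evaluated at the interpolated
    model lam * v_i + (1 - lam) * w, updating only the personal part.
    """
    mixed = lam * v_i + (1.0 - lam) * w
    return v_i - lr * lam * grad_f_i(mixed)

# toy quadratic loss f_i(x) = 0.5 * ||x - a||^2 with gradient x - a
a = np.array([1.0, -2.0])
grad = lambda x: x - a
v, w = np.zeros(2), np.ones(2)

v_local = mixture_local_step(v, w, grad, lam=1.0)  # plain local gradient step
v_fed   = mixture_local_step(v, w, grad, lam=0.0)  # personal model unchanged
```

After such a step, the local model is communicated to the orchestrating server for averaging, as described above.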
It is also worth noting that different degrees of personalization perform best for different perturbations and $\sigma$ values. \begin{figure} \caption{\textbf{FL with Mixture of Models.} \label{fig:mixture-model-experiments} \end{figure} \section{Social Impact} This paper explores (1) certified robustness and (2) federated learning. Both topics are associated with how machine learning settings relate to security which, in turn, has a social impact. In particular, robustness is associated with the secure deployment of models, hindering how malicious agents may attack recognition systems to fit their purposes. Furthermore, federated learning is a setting that arises naturally when data-privacy constraints are introduced into the learning objective: how to develop well-performing models that leverage user data while preserving privacy. All in all, our work has the potential to be used by services targeting high-performance private models to benefit users. \section{Compute Power} We used 20 Nvidia V100 GPUs for all of our experiments. \section{Limitations} There exist several frameworks that characterize federated training and personalization. A limitation of our work is the focus on the popular federated averaging technique. A possible interesting avenue for future work is to benchmark different federated and personalization frameworks in terms of robustness. Moreover, the development of new mixture models explicitly aimed at improving robustness and reliability is also left for future work. Finally, in this work we considered a homogeneous split of data across clients. That is, all clients have the same amount of data and, likely, the same set of classes. 
While this setup is less realistic than real-world scenarios, in which different clients have heterogeneous data splits, we note that it serves as the best-case condition for local training. That is, even when local training is put at an advantage, our work shows the superiority of conducting federated training and personalization in providing models that are both accurate and robust. We leave the study of the heterogeneous setup for future work. \end{document}
\begin{document} \author{Boris Stupovski and Rafael Torres} \title[Simply connected 5-manifolds of positive biorthogonal curvature]{Existence of Riemannian metrics with positive biorthogonal curvature on simply connected 5-manifolds} \address{Scuola Internazionale Superiore di Studi Avanzati (SISSA)\\ Via Bonomea 265\\34136 Trieste\\Italy} \email{[email protected]} \email{[email protected]} \subjclass[2010]{Primary 53C20, 53C21; Secondary 53B21} \maketitle \emph{Abstract}: Using recent work of Bettiol, we show that a first-order conformal deformation of Wilking's metric of almost-positive sectional curvature on $S^2\times S^3$ yields a family of metrics with strictly positive average of sectional curvatures of any pair of 2-planes that are separated by a minimal distance in the 2-Grassmannian. A result of Smale allows us to conclude that every closed simply connected 5-manifold with torsion-free homology and trivial second Stiefel-Whitney class admits a Riemannian metric with a strictly positive average of sectional curvatures of any pair of orthogonal 2-planes. \section{Introduction and main results}\label{Introduction} Let $(M, g)$ be a compact Riemannian $n$-manifold and let $\Sec_g$ be the sectional curvature of the metric. We often abuse notation and denote the Riemannian metric by $(M, g)$ as well. For each 2-plane\begin{equation}\sigma\in \Gr_2(T_pM) = \{X\wedge Y\in {\Lambda}^2 T_pM : ||X\wedge Y||^2 = 1\},\end{equation} let $\sigma^\perp\subset T_pM$ be its orthogonal complement. That is, there is a $g$-orthogonal direct sum decomposition $\sigma \oplus \sigma^{\perp} = T_pM$ at a point $p\in M$. 
\begin{definition}\label{Definition Biorthogonal Curvature} The biorthogonal curvature of a 2-plane $\sigma\in \Gr_2(T_pM)$ is\begin{equation}\label{Equation 1}\Sec_g^{\perp}(\sigma):= \underset{\begin{subarray}{c} \sigma'\in \Gr_2(T_pM)\\ \sigma' \subset \sigma^{\perp}\end{subarray}}{\Min}\frac{1}{2}(\Sec_g(\sigma) + \Sec_g(\sigma'))\end{equation}(cf. \cite[Section 5.4]{[Be2]}). We say that $(M, g)$ has positive biorthogonal curvature $\Sec_g^\perp > 0$ if (\ref{Equation 1}) is positive for every $\sigma\in \Gr_2(T_pM)$ at every point $p\in M$. \end{definition} A stronger curvature condition is the following. Choose a distance on the Grassmannian bundle $\Gr_2(TM)$ that induces the standard topology. \begin{definition}\label{Definition Distance Curvature} The distance curvature of a 2-plane $\sigma \subset T_pM$ is\begin{equation}\label{Equation 2}\Sec_g^{\theta}(\sigma):= \underset{\begin{subarray}{c} \sigma'\in \Gr_2(T_pM)\\ \Dist(\sigma, \sigma')\geq \theta\end{subarray}}{\Min}\frac{1}{2}(\Sec_g(\sigma) + \Sec_g(\sigma'))\end{equation} for each $\theta > 0$ (cf. \cite[Section 5.2]{[Be2]}). We say that $M$ has positive distance curvature if for every $\theta > 0$ there is a Riemannian metric $(M, g^\theta)$ for which (\ref{Equation 2}) is positive for every $\sigma\in \Gr_2(T_pM)$ at every point $p\in M$. \end{definition} Bettiol \cite{[Be3]} classified, up to homeomorphism, the closed simply connected 4-manifolds that admit a Riemannian metric of positive biorthogonal curvature by constructing metrics of positive distance curvature on $S^2\times S^2$ \cite[Theorem, Proposition 5.1]{[Be1]}, \cite[Theorem 6.1]{[Be2]} and showing that positive biorthogonal curvature is a property that is closed under connected sums \cite[Proposition 7.11]{[Be2]}, \cite[Proposition 3.1]{[Be3]}. In this paper, we extend Bettiol's results to dimension five. 
More precisely, we build upon Bettiol's work and show that an application of a first-order conformal deformation to Wilking's metric $(S^2\times S^3, g_W)$ of almost-positive sectional curvature \cite{[Wilking]} yields the main result of this note. \begin{thm}\label{Theorem A} For every $\theta > 0$, there is a Riemannian metric $(S^2\times S^3, g^{\theta})$ such that (a) $\Sec_{g^\theta}^\theta > 0$. (b) There is a limit metric $g^0$ such that $g^{\theta}\rightarrow g^0$ in the $C^k$-topology as $\theta\rightarrow 0$ for $k\geq 0$. (c) $g^\theta$ is arbitrarily close to Wilking's metric $g_W$ of almost-positive curvature in the $C^k$-topology for $k\geq 0$. (d) $\mathrm{Ric}_{g^{\theta}} > 0$. (e) There is a 2-plane $\sigma\in \Gr_2(T_p(S^2\times S^3))$ with $\Sec_{g^{\theta}}(\sigma) < 0$. In particular, there is a Riemannian metric of positive biorthogonal curvature on $S^2\times S^3$. \end{thm} The next corollary is a consequence of coupling Theorem \ref{Theorem A} with a classification result of Smale \cite{[S]}. \begin{cor}\label{Corollary B} Every closed simply connected 5-manifold with torsion-free homology and zero second Stiefel-Whitney class admits a Riemannian metric of positive biorthogonal curvature. \end{cor} The hypotheses imposed on the homology and the second Stiefel-Whitney class of the manifolds of Corollary \ref{Corollary B} are technical in nature; cf. Remark \ref{Remark 2}. Indeed, an examination of the canonical metric on the Wu manifold yields the following proposition. \begin{prop}\label{Proposition Wu manifold} The symmetric space metric $(\mathrm{SU}(3)/\mathrm{SO}(3), g)$ has positive biorthogonal curvature. \end{prop} The Wu manifold has second homology group of order two and non-trivial second Stiefel-Whitney class. \subsection{Acknowledgements:} We thank the referee and Renato Bettiol for useful input that allowed us to significantly improve the paper. R. T. 
thanks Nicola Gigli for very helpful discussions during the production of the paper. \section{Constructions of Riemannian metrics of positive biorthogonal curvature}\label{Section ConstructionsRMetrics} \subsection{Wilking's metric of almost-positive curvature on $S^2\times S^3$}\label{Section Wilking's Metric} We follow the exposition in \cite[Section 5]{[Wilking]} to describe Wilking's construction of a metric of almost positive curvature on the product of projective spaces $\mathbb{R} P^2\times \mathbb{R} P^3$ and its pullback to $S^2\times S^3$ under the covering map; see \cite[Section 5]{[Z1]} for a discussion relating these two constructions. The unit tangent sphere bundle of the 3-sphere\begin{equation}\label{eq:101} T_1(S^3)=S^2\times S^3\,, \end{equation}embeds into $\mathbb{R}^4\times \mathbb{R}^4=\mathbb{H}\times \mathbb{H}$ as the set of pairs of orthogonal unit quaternions\begin{equation}\label{eq:102} S^3\times S^2 = \{(p,v)\in\mathbb{H}\times\mathbb{H}:|p|=|v|=1, \langle p,v \rangle = 0\}\subset \mathbb{H}\times \mathbb{H}\,, \end{equation} where $\langle x,y \rangle = \mathrm{Re}(\bar{x}y)$, $|x|^2=\langle x,x \rangle$, and $\bar{x}$ denotes quaternion conjugation of $x$. The group $G=\mathrm{Sp(1)}\times \mathrm{Sp(1)} \simeq S^3\times S^3$ acts on $S^2\times S^3$ by \begin{equation}\label{eq:103} (q_1,q_2)\star(p,v)=(q_1 p \bar{q}_2,q_1 v \bar{q}_2)\,, \end{equation} for $q_1, q_2 \in \mathrm{Sp(1)}$ and $(p,v) \in S^2 \times S^3$. This action is almost effective and transitive. The isotropy group of the point $(1,i)\in S^2\times S^3$ is\begin{equation}H=\{(e^{i\phi},e^{i\phi})\in \mathrm{Sp(1)}\times \mathrm{Sp(1)}\}\subset G.\end{equation} Thus, $S^2\times S^3\simeq G/H$ is a homogeneous space. In order to put a metric on $S^2\times S^3$, Wilking first defines a left invariant metric $g$ on $G=\mathrm{Sp(1)}\times \mathrm{Sp(1)}$ as follows. 
Let\begin{equation}g_0((X,Y),(X',Y'))=\langle X, X' \rangle + \langle Y, Y' \rangle\end{equation} for $(X,Y),(X',Y')\in \mathfrak{sp}(1)\oplus \mathfrak{sp}(1)=\mathrm{Im}(\mathbb{H})\oplus\mathrm{Im}(\mathbb{H})$ denote a bi-invariant metric. In terms of $g_0$, the metric $g$ is \begin{equation}\label{eq:104} g((X,Y),(X',Y'))=g_0(\Phi(X,Y),(X',Y'))\,, \end{equation} where $\Phi$ is a $g_0$-symmetric, positive definite endomorphism of $\mathfrak{sp}(1)\oplus \mathfrak{sp}(1)$ given by \begin{equation}\label{eq:105} \Phi = \Id - \frac{1}{2}P\,, \end{equation} and $P$ is the $g_0$-orthogonal projection onto the diagonal subalgebra\begin{equation}\Delta\mathfrak{sp}(1)\subset \mathfrak{sp}(1)\oplus\mathfrak{sp}(1);\end{equation}see \cite[p. 125]{[Wilking]}. Wilking's doubling trick guarantees the existence of a diffeomorphism \begin{equation}\label{eq:106} G/H \simeq \Delta G\backslash G\times G/\{1_G\}\times H\,, \end{equation} where $ \Delta G\backslash$ denotes the quotient by the left diagonal action of $G$ on $G \times G$ and $H$ acts on the second factor from the right. Consider the product $(G\times G, g \oplus g)$ (cf. \eqref{eq:104}) and the induced metric on $S^2\times S^3 \simeq \Delta G\backslash G\times G/\{1_G\}\times H$ that we denote by $g_W$. That is, Wilking's metric $(S^2\times S^3, g_W)$ is the metric that makes the quotient submersion \begin{equation}\label{eq:107} (G\times G, g\oplus g) \rightarrow (\Delta G\backslash G\times G/\{1_G\}\times H, g_W) \end{equation} into a Riemannian submersion. Wilking has shown that $(S^2 \times S^3, g_W)$ has almost positive curvature, with flat 2-planes located on two hypersurfaces. These hypersurfaces are both diffeomorphic to $S^2\times S^2$, and they intersect along an $\mathbb{R} P^3$ \cite[Corollary 3, Proposition 6]{[Wilking]}. Moreover, at each point of these hypersurfaces that does not lie on one of four disjoint copies of $S^2$ inside them, the flat 2-plane is unique. 
At each point of these four 2-spheres, there is a one-parameter family of flat 2-planes, and neither the distance curvature nor the biorthogonal curvature of the metric $g_W$ is strictly positive at any of these points. \section{Proofs}\label{Section Proofs} \subsection{Proof of Theorem \ref{Theorem A}} We follow Bettiol's construction of metrics of positive distance curvature on $S^2 \times S^2$ \cite[Theorem]{[Be1]}, \cite[Theorem 6.1]{[Be2]}, and apply a first-order conformal deformation to Wilking's metric $(S^2\times S^3, g_W)$ that was described in Section \ref{Section Wilking's Metric}. This yields metrics of positive distance curvature as in Definition \ref{Definition Distance Curvature}, which converge to a metric $g^0$ in the $C^k$-topology as $\theta$ tends to zero. \begin{definition}\label{Definition confdef} Let $(M,g)$ be a compact Riemannian manifold. For any smooth function $\phi:M\rightarrow\mathbb{R}$ and any small enough $s>0$, the following is also a Riemannian metric on $M$: \begin{equation}\label{eq:201} g_s=(1+s\phi)g\,, \end{equation} called the first-order conformal deformation of $g$. \end{definition} The variation of the sectional curvature of a metric under a first-order conformal deformation is given by the following lemma \cite{[Strake]}; cf. \cite[Chapter 3, Corollary 3.4]{[Be2]}. \begin{lemma}\label{lem:1}Let $(M,g)$ be a Riemannian manifold with sectional curvature $\mathrm{sec}_g\geq0$, and let $X,Y\in T_pM$ be $g$-orthonormal vectors such that $\mathrm{sec}_g(X\wedge Y)=0$. Consider a first-order conformal deformation $g_s=(1+s\phi)g$ of $g$. The first variation of $\mathrm{sec}_{g_s}(X\wedge Y)$ is\begin{equation}\label{eq:202} \frac{\mathrm{d}}{\mathrm{d}s}\mathrm{sec}_{g_s}(X\wedge Y)\vert_{s=0}=-\frac{1}{2}\mathrm{Hess}\,\phi(X,X)-\frac{1}{2}\,\mathrm{Hess}\,\phi(Y,Y)\,. \end{equation} \end{lemma} We will also need the following elementary fact \cite[Chapter 3, Lemma 3.5]{[Be2]}. 
\begin{lemma}\label{lem:2} Let $f:[0,S]\times K\rightarrow \mathbb{R}$ be a smooth function, where $S>0$ and $K$ is a compact subset of a manifold. Assume that $f(0,x)\geq0$ for all $x\in K$, and that $\frac{\partial f}{\partial s}(0,x)>0$ whenever $f(0,x)=0$. Then there exists $s_*>0$ such that $f(s,x)>0$ for all $x\in K$ and $0<s<s_*$. \end{lemma} Wilking's metric $(S^2\times S^3, g_W)$ has positive sectional curvature away from a hypersurface $Z$; see the discussion at the end of Section \ref{Section Wilking's Metric}. The biorthogonal and distance curvatures are positive inside $Z$ except at points that lie in four disjoint copies of $S^2$. Every point in these four 2-spheres carries an $S^1$ worth of flat 2-planes. Denote these four 2-spheres by\begin{equation}\label{2-spheres with flats}\{S^2_i : i = 1, 2, 3, 4\}.\end{equation}We only deform Wilking's metric near these four submanifolds. Let\begin{equation}\chi_i: S^2\times S^3 \rightarrow \mathbb{R}\end{equation}denote a bump function for $S^2_i$, i.e., a nonnegative function that is identically zero outside a tubular neighborhood of $S^2_i$, and identically one in a smaller tubular neighborhood of $S^2_i$. Finally, we define four functions\begin{equation}\{\psi_i:S^2\times S^3 \rightarrow \mathbb{R}: i = 1, 2, 3, 4\}\end{equation} as\begin{equation}\psi_i(p)=\mathrm{dist}_{g_W}(p,S^2_i)^2\end{equation} for $p\in S^2\times S^3$, where $\mathrm{dist}_{g_W}$ is the metric distance function on $(S^2\times S^3, g_W)$. Let $\phi:S^2\times S^3 \rightarrow \mathbb{R}$ be the function defined as \begin{equation}\label{eq:203} \phi=-\chi_1\psi_1-\chi_2\psi_2-\chi_3\psi_3-\chi_4\psi_4\,, \end{equation} and consider the first-order conformal deformation of $g_W$ given by \begin{equation}\label{eq:204} g_s=(1+s\phi)g_W\,. 
\end{equation} Note that at a point $p\in S^2_i$ we have \begin{equation}\label{eq:205} \mathrm{Hess}\,\phi(X, X)=-\mathrm{Hess}\,\psi_i(X, X)=-2g_W(X_\perp, X_\perp)=-2\|X_\perp\|^2_{g_W} \,, \end{equation} where $X_\perp$ denotes the component of $X$ perpendicular to $S^2_i$. For each $\theta>0$, consider the compact subset of $(S^2\times S^3)\times \mathrm{Gr_2}(T(S^2\times S^3))\times \mathrm{Gr_2}(T(S^2\times S^3))$ given by\begin{equation}K_\theta:=\{(p,\sigma,\sigma'):\sigma,\sigma'\in\mathrm{Gr}_2(T_p(S^2\times S^3)),\mathrm{dist}(\sigma,\sigma')\geq\theta\},\end{equation} and define \begin{equation}\label{eq:206} \begin{split} f&:[0,S]\times K_\theta\rightarrow \mathbb{R}\, \\ f(s,(p,\sigma,\sigma'))&:=\frac{1}{2}(\mathrm{sec}_{g_s}(\sigma)+\mathrm{sec}_{g_s}(\sigma'))\,. \end{split} \end{equation} Notice that $f(0,(p,\sigma,\sigma'))\geq 0$ since $\mathrm{sec}_{g_W}\geq 0$. Furthermore, $f(0,(p,\sigma,\sigma'))=0$ only for\begin{equation}p\in S^2_1\cup S^2_2 \cup S^2_3\cup S^2_4\end{equation}since these are the only points of $S^2\times S^3$ that have vanishing biorthogonal and distance curvatures. Let $(p,\sigma,\sigma')$ be such that $f(0,(p,\sigma,\sigma'))=0$ and let $\sigma=X\wedge Y$ and $\sigma'=Z\wedge W$, where $X, Y$ are $g_W$-orthonormal, and $Z, W$ are $g_W$-orthonormal. Then, by Lemma \ref{lem:1} and equation \eqref{eq:205}, at these points of $K_\theta$ we have \begin{equation}\label{eq:68} \begin{split} \frac{\partial f}{\partial s}\vert_{s=0}&=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}s}(\mathrm{sec}_{g_s}(X\wedge Y)+\mathrm{sec}_{g_s}(Z\wedge W))\vert_{s=0}= \\ &=-\frac{1}{4}\mathrm{Hess}\,\phi(X,X)-\frac{1}{4}\mathrm{Hess}\,\phi(Y,Y)-\frac{1}{4}\mathrm{Hess}\,\phi(Z,Z)-\frac{1}{4}\mathrm{Hess}\,\phi(W,W)= \\ &=\frac{1}{2}\left(\|X_\perp\|^2_{g_W}+\|Y_\perp\|^2_{g_W}+\|Z_\perp\|^2_{g_W}+\|W_\perp\|^2_{g_W}\right)>0\,. \end{split} \end{equation} The previous expression is strictly greater than zero. 
Indeed, since $X\wedge Y$ and $Z\wedge W$ are different 2-planes, $\mathrm{span}\{X,Y,Z,W\}$ is at least three-dimensional while the submanifolds (\ref{2-spheres with flats}) are two-dimensional. Hence, at least one of the perpendicular components $X_\perp,Y_\perp,Z_\perp,W_\perp$ is nonzero and (\ref{eq:68}) is greater than zero. Since the assumptions of Lemma \ref{lem:2} for the function (\ref{eq:206}) are satisfied, we conclude that there is an $s_*$ such that $f(s,(p,\sigma,\sigma'))>0$ for all $(p,\sigma,\sigma')\in K_\theta$ and $0<s<s_*$. This is precisely the condition $\mathrm{sec}_{g_s}^\theta>0$ of Item (a) of Theorem \ref{Theorem A}. The claims of Item (b) and Item (c) follow from our construction; cf. \cite{[Be1]}. The claim of Item (d) follows from \cite[Proposition 4.1]{[Be1]}. As Bettiol observed in his construction of metrics of positive distance curvature on $S^2\times S^2$ \cite[Section 4.4]{[Be1]}, for every $\theta > 0$, there are 2-planes in $(S^2\times S^3, g^{\theta})$ with negative sectional curvature. This completes the proof of Theorem \ref{Theorem A}. $\square$ \begin{remark}The metrics $(S^2\times S^3, g^\theta)$ of positive distance curvature can be made invariant under the action of certain Deck transformations including the product $\mathbb{Z}/2\oplus \mathbb{Z}/2$-action. Indeed, it is possible to perform a local conformal deformation on the orbit space $(\mathbb{R} P^2\times \mathbb{R} P^3, g_W)$ equipped with Wilking's metric of almost positive curvature, and a similar statement to Theorem \ref{Theorem A} holds for $(\mathbb{R} P^2\times \mathbb{R} P^3, g^\theta)$; cf. \cite[Section 4.6]{[Be1]}. \end{remark} \subsection{Proof of Corollary \ref{Corollary B}} We will use a case of the classification up to diffeomorphism of simply connected 5-manifolds with vanishing second Stiefel-Whitney class due to Smale \cite[Theorem A]{[S]}. 
\begin{theorem} A closed simply connected 5-manifold $M$ with torsion-free homology $H_2(M; \mathbb{Z}) = \mathbb{Z}^k$ and zero second Stiefel-Whitney class $w_2(M) = 0$ is determined up to diffeomorphism by its second Betti number $b_2(M)$. In particular, $M$ is diffeomorphic to a connected sum\begin{equation}\label{5-mflds}\{S^5\#k(S^2\times S^3): k = b_2(M)\}.\end{equation}\end{theorem} Theorem \ref{Theorem A} and Bettiol's result regarding the positivity of biorthogonal curvature under connected sums \cite[Proposition 7.11]{[Be2]} imply that every 5-manifold in the set (\ref{5-mflds}) admits a Riemannian metric of positive biorthogonal curvature. $\square$ \begin{remark}\label{Remark 2}It is natural to ask if the hypothesis $w_2(M) = 0$ of Corollary \ref{Corollary B} can be removed. Barden has shown that a closed simply connected 5-manifold with torsion-free second homology group is diffeomorphic to a connected sum of copies of $S^2\times S^3$ and the total space $S^3\widetilde{\times} S^2$ of the nontrivial $3$-sphere bundle over the $2$-sphere \cite{[Ba]}. It is currently unknown if there is a metric of almost positive sectional curvature on $S^3\widetilde{\times} S^2$. Unlike $S^2\times S^3$, the nontrivial bundle does not arise as a biquotient that satisfies the symmetry hypothesis needed to apply Wilking's doubling trick; see DeVito's classification of free circle actions on $S^3\times S^3$ in \cite{[DeV]}; cf. \cite{[DeVito1], [DeVito2]}. \end{remark} \subsection{Proof of Proposition \ref{Proposition Wu manifold}}The symmetric space metric on $\mathrm{SU}(3)/\mathrm{SO}(3)$ is the metric that makes the canonical surjection \begin{equation}\label{eq:1} \begin{split} \pi :\mathrm{SU}(3) &\to \mathrm{SU}(3)/\mathrm{SO}(3) \\ u &\mapsto u\mathrm{SO}(3)\,, \end{split} \end{equation} into a Riemannian submersion, where $\mathrm{SU}(3)$ is equipped with a bi-invariant metric. 
The left action of $\mathrm{SU}(3)$ on $\mathrm{SU}(3)/\mathrm{SO}(3)$ induced from the left multiplication on $\mathrm{SU}(3)$ by \eqref{eq:1} is transitive and isometric for the symmetric space metric. This means that we can study curvature at one point of $\mathrm{SU}(3)/\mathrm{SO}(3)$ and isometrically translate the results to any other point. The Cartan decomposition that corresponds to $\mathrm{SU}(3)/\mathrm{SO}(3)$ \begin{equation}\label{eq:2} T_e\mathrm{SU}(3) \simeq \mathfrak{su}(3) = \mathfrak{so}(3) \oplus \mathfrak{so}(3)^\perp \,. \end{equation} is orthogonal with respect to the bi-invariant metric and it is precisely the decomposition of $T_e\mathrm{SU}(3)$ into vertical and horizontal subspaces of the Riemannian submersion \eqref{eq:1}. Hence, we have \begin{equation}\label{eq:3} T_{ \mathrm{SO}(3) }( \mathrm{SU}(3)/\mathrm{SO}(3) ) \simeq \mathfrak{so}(3)^\perp\,. \end{equation} To conclude that $ \mathrm{SU}(3)/\mathrm{SO}(3)$ has positive biorthogonal curvature, we need to show that no two flat 2-planes are orthogonal to each other. A result of Tapp \cite[Theorem 1.1]{[Tapp]} implies that a 2-plane on $ \mathrm{SU}(3)/\mathrm{SO}(3) $ is flat if and only if its horizontal lift is flat. Thus, it is enough to consider horizontal flat 2-planes at the identity of $ \mathrm{SU}(3) $. A horizontal 2-plane $ X \wedge Y \subset \mathfrak{so}(3)^\perp $ at the identity of $\mathrm{SU}(3)$ is flat if and only if $ [ X,Y ] = 0 $. Since the maximal number of linearly independent commuting matrices in $\mathfrak{su}(3)$ is two, every horizontal flat 2-plane corresponds to a maximal abelian subalgebra of $\mathfrak{so}(3)^\perp $ \begin{equation}\label{eq:4} \mathrm{span}_\mathbb{R} \{ X, Y \} = \mathfrak{a}_0 \subset \mathfrak{so}(3)^\perp\,. 
\end{equation} By a fundamental fact about the Cartan decomposition, see \cite[Proposition 7.29]{[Knapp]} for the precise statement, any two maximal abelian subalgebras of $\mathfrak{so}(3)^\perp$ are conjugate by an element of $\mathrm{SO}(3)$. This means that, by fixing one maximal abelian subalgebra, or equivalently one horizontal flat 2-plane, we can parametrize all horizontal flat 2-planes by $\mathrm{SO}(3)$. In what follows, we obtain an explicit parametrization of horizontal flat 2-planes at the identity of $\mathrm{SU}(3)$, and hence a parametrization of flat 2-planes at a point of $\mathrm{SU}(3)/\mathrm{SO}(3)$, by choosing a basis for $ \mathfrak{su}(3) $, fixing a horizontal flat 2-plane, and parametrizing $ \mathrm{SO}(3) $ by Euler angles. We use this explicit parametrization to show that no two flat 2-planes can be orthogonal. For the basis of $ \mathfrak{su}(3)$, we choose $ \{-i\lambda_i\}_{i=1,...,8} $, where the $ \lambda_i $'s are the traceless, self-adjoint $3\times 3$ matrices known as the Gell-Mann matrices \cite{[Gell]}. The scalar product on $\mathfrak{su}(3)$ that corresponds to the bi-invariant metric is \begin{equation}\label{eq:5} \langle X, Y \rangle = -\frac{1}{2}\mathrm{Tr}(XY)\,, \end{equation} for $X, Y \in \mathfrak{su}(3)$, and the basis $ \{-i\lambda_i\}_{i=1,...,8} $ is orthonormal with respect to (\ref{eq:5}). The Cartan decomposition \eqref{eq:2} in this basis is \begin{equation}\label{eq:6} \mathfrak{so}(3) = \mathrm{span}_\mathbb{R}\{ -i\lambda_2, -i\lambda_5, -i\lambda_7\} \end{equation}and \begin{equation}\label{eq:7} \mathfrak{so}(3)^\perp = \mathrm{span}_\mathbb{R}\{ -i\lambda_1, -i\lambda_3, -i\lambda_4,-i\lambda_6,-i\lambda_8\}\,. \end{equation} The matrices $\lambda_3$ and $\lambda_8$ are diagonal, so we use $-\lambda_3 \wedge \lambda_8$ for the reference horizontal flat 2-plane. 
Every horizontal flat 2-plane $ X \wedge Y $, with $ X, Y \in \mathfrak{so}(3)^\perp $ such that $ [X,Y]=0 $, can now be written as \begin{equation}\label{eq:8} X \wedge Y = -\mathrm{Ad}_r (\lambda_3 \wedge \lambda_8 )\,, \end{equation} for some $ r \in \mathrm{SO}(3) $. Suppose that $ X\wedge Y $ and $ X'\wedge Y' $ are two such 2-planes, with $X\wedge Y$ given by \eqref{eq:8} and $ X' \wedge Y' $ by \begin{equation}\label{eq:9} X' \wedge Y' = -\mathrm{Ad}_{r'} (\lambda_3 \wedge \lambda_8 )\,, \end{equation} for some $ r' \in \mathrm{SO}(3) $. For the 2-planes \eqref{eq:8} and \eqref{eq:9} to be orthogonal it is necessary and sufficient that the equations \begin{equation}\label{eq:10} \langle \mathrm{Ad}_r \lambda_3, \mathrm{Ad}_{r'} \lambda_3 \rangle = 0\,, \end{equation} \begin{equation}\label{eq:11} \langle \mathrm{Ad}_r \lambda_3, \mathrm{Ad}_{r'} \lambda_8 \rangle = 0\,, \end{equation} \begin{equation}\label{eq:12} \langle \mathrm{Ad}_r \lambda_8, \mathrm{Ad}_{r'} \lambda_3 \rangle = 0\,, \end{equation}and \begin{equation}\label{eq:13} \langle \mathrm{Ad}_r \lambda_8, \mathrm{Ad}_{r'} \lambda_8 \rangle = 0 \end{equation}hold. Using the $ \mathrm{Ad} $-invariance of the bi-invariant metric, equations \eqref{eq:10}, \eqref{eq:11}, \eqref{eq:12}, and \eqref{eq:13} can be rewritten as \begin{equation}\label{eq:25} \langle \lambda_3, \mathrm{Ad}_{r^{-1}r'} \lambda_3 \rangle = 0\,, \end{equation} \begin{equation}\label{eq:26} \langle \lambda_3, \mathrm{Ad}_{r^{-1}r'} \lambda_8 \rangle = 0\,, \end{equation} \begin{equation}\label{eq:27} \langle \lambda_8, \mathrm{Ad}_{r^{-1}r'} \lambda_3 \rangle = 0\,, \end{equation}and \begin{equation}\label{eq:28} \langle \lambda_8, \mathrm{Ad}_{r^{-1}r'} \lambda_8 \rangle = 0\,. 
\end{equation} We now use the Euler angle parametrization of $ \mathrm{SO}(3) $ to write $ r^{-1}r' \in \mathrm{SO}(3) $ as \begin{equation}\label{eq:29} r^{-1}r' = \mathrm{exp}(- i \lambda_2 x) \mathrm{exp}(- i \lambda_5 y) \mathrm{exp}(- i \lambda_2 z)\,, \end{equation} where $ x,y,z \in \mathbb{R} $. Plugging \eqref{eq:29} into equations \eqref{eq:25}, \eqref{eq:26}, \eqref{eq:27}, and \eqref{eq:28} and calculating the traces explicitly, we find \begin{equation}\label{eq:30} 0 = \langle \lambda_3, \mathrm{Ad}_{r^{-1} r'} \lambda_3 \rangle = \frac{1}{4} \mathrm{cos}(2x) \left (3 + \mathrm{cos}(2y) \right ) \mathrm{cos}(2z) - \mathrm{sin}(2x) \mathrm{cos}(y) \mathrm{sin}(2z)\,, \end{equation} \begin{equation}\label{eq:31} 0 = \langle \lambda_3, \mathrm{Ad}_{r^{-1} r'} \lambda_8 \rangle = - \frac{\sqrt{3}}{2} \mathrm{cos}(2x)\mathrm{sin}^2(y)\,, \end{equation} \begin{equation}\label{eq:32} 0 = \langle \lambda_8, \mathrm{Ad}_{r^{-1} r'} \lambda_3 \rangle = - \frac{\sqrt{3}}{2} \mathrm{cos}(2z)\mathrm{sin}^2(y)\,, \end{equation}and \begin{equation}\label{eq:33} 0 = \langle \lambda_8, \mathrm{Ad}_{r^{-1} r'} \lambda_8 \rangle =\frac{1}{4}(1+3\mathrm{cos}(2y))\,. \end{equation} Equation \eqref{eq:33} implies $\mathrm{cos}^2(y)=1/3$, so that $\mathrm{sin}^2(y) \neq 0$ and equations \eqref{eq:31} and \eqref{eq:32} force $\mathrm{cos}(2x)=\mathrm{cos}(2z)=0$. Plugging this into equation \eqref{eq:30}, we obtain\begin{equation} \langle \lambda_3, \mathrm{Ad}_{r^{-1} r'} \lambda_3 \rangle = -\mathrm{sin}(2x) \mathrm{cos}(y) \mathrm{sin}(2z) \neq 0,\end{equation} and conclude that there is no solution to the system given by equations \eqref{eq:30}, \eqref{eq:31}, \eqref{eq:32}, and \eqref{eq:33}. This shows that no two flat 2-planes are orthogonal. $\square$\\ \end{document}
\begin{document} \title{Wave maps on (1+2)-dimensional curved spacetimes} \author{Cristian Gavrus} \address{Department of Mathematics, Johns Hopkins University} \email{[email protected]} \author{Casey Jao} \address{Department of Mathematics, University of California, Berkeley} \email{[email protected]} \author{Daniel Tataru} \address{Department of Mathematics, University of California, Berkeley} \email{[email protected]} \begin{abstract} In this article we initiate the study of $(1+2)$-dimensional wave maps on a curved spacetime in the low regularity setting. Our main result asserts that in this context the wave maps equation is locally well-posed at almost critical regularity. As a key part of the proof of this result, we generalize the classical optimal bilinear $ L^2$ estimates for the wave equation to variable coefficients, by means of wave packet decompositions and characteristic energy estimates. This allows us to iterate in a curved $ X^{s,b}$ space. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} \ Let the $(1+2)$-dimensional spacetime $ \mathbb{R}_t \times \mathbb{R}_x^2 $ be endowed with a Lorentzian metric \[ g = g_{\alpha \beta}(t, x)\, dx^\alpha dx^\beta, \] so that the time slices $t = const$ are space-like. Here $x^0 = t$ and we adopt the standard convention of referring to spacetime coordinates by Greek indices and purely spatial coordinates by Roman indices. 
Given a smooth Riemannian manifold $ (M,h) $ with uniformly bounded geometry, a \emph{wave map} $u: (\mathbb{R}^{1+2}, g) \to (M, h)$ is formally a critical point of the Lagrangian \begin{align*} \mathcal{L} (u) &= \frac{1}{2} \int_{\mathbb{R}^{1+2}} \langle du, du \rangle_{ T^{*}( \mathbb{R} \times \mathbb{R}^2 ) \otimes u^{*} T M} \,\mathrm{d} \text{vol}_g, \end{align*} where $ u^{*} T M = \bigcup_x \{ x \} \times T_{u(x)} M $ is the pullback of $ TM $ by $ u $ and $ u^{*} h $ is the pullback metric. In local coordinates on $ M $ this becomes $$ \mathcal{L} (u) = \frac{1}{2} \int g^{\alpha \beta}(z) h_{ij} (u(z)) \partial_{\alpha} u^i(z) \partial_{\beta} u^j(z) \sqrt{\vm{g(z)} } \,\mathrm{d} z. $$ One may think of wave maps as the hyperbolic counterpart of harmonic maps. The Euler-Lagrange equations take the following form, and we refer to \cite[Chapter 8]{jost2008riemannian} for related computations: \begin{align} \label{e:EL} \frac{1}{\sqrt{\vm{g } }} {\bf D}_{\alpha} \left( \sqrt{\vm{g } } g^{\alpha \beta} \partial_{\beta} u \right) =0. \end{align} Here $ {\bf D} $ denotes the pullback covariant derivative on $ u^{*} T M $ given by $ {\bf D}_X V=\nabla_{u_{*} X} V $, where $ \nabla $ is the Levi-Civita connection on $ T M $. \ In coordinates, wave maps solve a coupled system of nonlinear wave equations. We review two useful settings for this problem: \ \emph{Intrinsic formulation.} Suppose the image of $ u $ is contained in the domain of a local coordinate patch of $ M $. 
Then the wave maps equation~\eqref{e:EL} is written as \begin{equation} \label{WM:int} \tilde{\Box}_g u^i=- \Gamma^i_{jk} (u) g^{\alpha \beta } \partial_{\alpha}u^j \partial_{\beta} u^k \end{equation} where $ \tilde{\Box}_g $ is the Laplace-Beltrami wave operator \begin{equation} \label{LB:op} \tilde{\Box}_g u = \vm{g }^{-\frac{1}{2} } \partial_{\alpha} \Big( \vm{g }^{\frac{1}{2}} g^{\alpha \beta} \partial_{\beta} u \Big) \end{equation} denoted this way in order to distinguish it from its principal part $ \Box_g $ defined by $$ \Box_g u = g^{\alpha \beta} \partial_{\alpha} \partial_{\beta} u, $$ and $ \Gamma^i_{jk} $ are the Christoffel symbols on $(M,h)$. \ \emph{Extrinsic formulation.} When the manifold $ M $ is isometrically embedded into a Euclidean space $ \mathbb{R}^m $, wave maps can be equivalently defined extrinsically; see for instance \cite{shatah1998geometric}. If $ M $ is compact, such an embedding exists for $ m $ large enough by Nash's theorem. The Lagrangian becomes $$ \mathcal{L} (u) = \frac{1}{2} \int g^{\alpha \beta} \langle \partial_{\alpha} u, \partial_{\beta} u \rangle \sqrt{\vm{g } } \,\mathrm{d} z. $$ Formal critical points satisfy $ \tilde{\Box}_g u \perp T_u M $ and our equation takes the form \begin{equation} \label{WM:ext} \tilde{\Box}_g u^i=- \mathcal{S}^i_{jk} (u) g^{\alpha \beta } \partial_{\alpha}u^j \partial_{\beta} u^k, \end{equation} where $ \mathcal{S} $ stands for the second fundamental form of $ M \subset \mathbb{R}^m $. \ \emph{Initial data} for the Cauchy problem are chosen such that $$ u(0,x) \in M, \qquad \partial_t u(0,x) \in T_{u(0,x)} M. $$ The initial data space can be viewed as an infinite dimensional manifold, in which the wave map evolution takes place. 
\ The formulations above exhibit the presence of the \emph{null form} \[ Q_g(u,v)= g^{\alpha \beta} \partial_{\alpha} u\, \partial_{\beta} v. \] Its structure is such that it eliminates quadratic resonant interactions in the wave map evolution, and it has played a key role in the low regularity well-posedness of wave maps in the flat case (\cite{klainerman1993space}, \cite{klainerman1997remark}, \cite{klainerman2002bilinear}, \cite{Tat}, \cite{Tao2}) as well as in other problems, especially in low dimensions. A simple way to see this cancellation is via the formula \begin{equation} \label{null:form:box} 2 Q_g(u,v)= 2 g^{\alpha \beta} \partial_{\alpha} u\, \partial_{\beta} v = \Box_g(uv) - u \Box_g v-v \Box_g u. \end{equation} However, in \emph{Section} \ref{Sec:alg} we will obtain estimates for $ Q_g(u,v) $ directly. \ In both the intrinsic and the extrinsic formulations one can use the coordinates to define Sobolev spaces for the initial data sets. To understand the relevant range of Sobolev indices we recall that in Minkowski space the wave maps system is invariant under the dimensionless \emph{rescaling} \[ u \mapsto u(\lambda t, \lambda x), \qquad g \mapsto g(\lambda t, \lambda x), \] which identifies the \emph{critical} (scale-invariant) initial data space as $\dot{H}^1 \times L^2 (\mathbb{R}^2)$. In addition, one may consider initial data spaces for more regular data, of the form \[ \mathcal H^s = \{ (u_0,u_1): \mathbb{R}^2 \to TM;\ \nabla u_0, u_1 \in H^{s-1}(\mathbb{R}^2) \}, \qquad s > 1. \] It is this latter case which is considered in the present paper. Several remarks are in order here. First of all, the constant functions are always acceptable states in the context of manifold valued maps $u_0$; this is why we include them all in our state space $\mathcal H^s$ above. Secondly, one may ask whether the definition of the above spaces is context dependent. 
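As an aside on the cancellation in \eqref{null:form:box}: the identity is purely algebraic and pointwise, and since no derivatives fall on the coefficients in $\Box_g$ it holds for arbitrary symmetric variable coefficients $g^{\alpha\beta}(t,x)$. A minimal symbolic check (Python with sympy; the names are illustrative):

```python
import sympy as sp

coords = sp.symbols('t x1 x2')
u = sp.Function('u')(*coords)
v = sp.Function('v')(*coords)

# generic variable coefficients g^{ab}(t, x), symmetrized below
g = [[sp.Function(f'g{a}{b}')(*coords) for b in range(3)] for a in range(3)]
for a in range(3):
    for b in range(a):
        g[a][b] = g[b][a]                  # impose g^{ab} = g^{ba}

def box(w):
    """Principal part Box_g w = g^{ab} d_a d_b w."""
    return sum(g[a][b] * sp.diff(w, coords[a], coords[b])
               for a in range(3) for b in range(3))

# the null form Q_g(u, v) = g^{ab} d_a u d_b v
Q = sum(g[a][b] * sp.diff(u, coords[a]) * sp.diff(v, coords[b])
        for a in range(3) for b in range(3))

# 2 Q_g(u, v) = Box_g(uv) - u Box_g v - v Box_g u
identity = sp.simplify(box(u * v) - u * box(v) - v * box(u) - 2 * Q)
assert identity == 0
```

The Leibniz rule produces exactly the cross terms $g^{\alpha\beta}(\partial_\alpha u\, \partial_\beta v + \partial_\beta u\, \partial_\alpha v)$, which the symmetry of $g$ collapses to $2Q_g(u,v)$.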
We separately discuss the two formulations above. The simplest set-up is in the case of the extrinsic formulation with $(M,h)$ a compact manifold. There one can directly use the $H^{s-1}(\mathbb{R}^2)$ spaces for derivatives of $\mathbb{R}^m$ valued functions. For simplicity, this is the set-up we adopt here. For the intrinsic formulation matters are not as straightforward. The fact that $s > 1$ heuristically ensures that functions $u_0$ with $\nabla u_0 \in H^{s-1}$ are H\"older continuous. Thus locally such functions are in the domain of a single chart, and the $H^{s-1}$ norms can be defined using local coordinates. Further, by Moser estimates the local spaces are algebraically independent of the choice of the local chart. Finally, the global spaces are obtained from the local ones by using a suitable collection of local charts. Of course, some care is required here in order to avoid a circular argument. Most generally such a construction would apply for manifolds $(M,h)$ with uniformly bounded geometry. Finally we remark that in the setting of low-regularity discontinuous solutions as in \cite{TataruWM}, defining the critical Sobolev spaces $ \dot{H}^1_x(\mathbb{R}^2 \to M) $ is a delicate matter due to the need for an appropriate topology on these spaces that is independent of the choice of isometric embeddings. Throughout the paper we assume that \begin{itemize} \item The metric coefficients $g_{\alpha \beta}$ and $g^{\alpha\beta}$ are uniformly bounded. \item The surfaces $t = const$ are uniformly space-like, \[ g_{ij} v^i v^j \gtrsim |v|^2. \] \end{itemize} These conditions in turn imply the bound from below \[ - g^{00} \gtrsim 1. \] Now we are ready to formulate our main result, which establishes well-posedness in the \emph{energy-subcritical} regime $s > 1$, as a stepping stone toward the energy-critical problem on a curved background. 
\ \begin{theorem} \label{thm:local} Let $M \subset \mathbb{R}^m $ be an embedded manifold. Assume that $\partial^2_{t,x} g^{ij} \in L^2_t L^\infty_x$. Then the Cauchy problem for the wave maps equation \eqref{WM:ext} is locally well-posed in $\mathcal H^s$ for $ 1< s \leq 2 $. \end{theorem} \ Our result includes existence, uniqueness, locally Lipschitz dependence on the initial data and persistence of regularity, as explained in Section~\ref{reduction}. While our main result above applies to the large data problem, the bulk of the paper is devoted to the small data problem, to which the large data case reduces after a suitable localization. To state the small data result, we replace the qualitative property of the metric $\nabla^2 g \in L^2 L^\infty$ with a quantitative version which applies in the unit time interval $t \in [0,1]$: \begin{equation} \label{assumption:rescaled:metric} \| \partial_{t,x} g^{\alpha\beta} \|_{L^\infty L^\infty} + \| \partial^2_{t,x} g^{\alpha\beta} \|_{L^2 L^\infty} \le \eta^2. \end{equation} Here $\eta \ll 1$ is a fixed small parameter. For the next result below, we work in a local patch in the intrinsic setting. We assume that $0$ belongs to the range of local coordinates associated to this patch, and work with data $(u_0,u_1)$ which is small in $ H_x^s \times H_x^{s-1}$. Then $u_0$ is continuous and uniformly small. As long as this property persists, the solution will remain within the domain of the local patch. Hence its regularity can be measured in those coordinates. Our small data result is as follows: \ \begin{theorem} \label{thm:small} Let $ 1<s \leq 2 $ and assume $ g $ satisfies \eqref{assumption:rescaled:metric} in the time interval $[0,1]$. 
There exists $ \varepsilon >0 $ such that for any initial data set $ (u_0,u_1) $ satisfying \begin{equation} \label{small:id} \vn{(u_0,u_1)}_{H^s \times H^{s-1}} \leq \varepsilon \end{equation} there exists a unique solution $ u $ to the wave maps problem \eqref{WM:int} with this data in the space $ C([0,1];H^s) \cap C^1([0,1];H^{s-1}) $, satisfying \begin{equation} \label{sol:bd} \vn{(u,\partial_t u)}_{L^{\infty}(H^s \times H^{s-1})} \lesssim \vn{(u_0,u_1)}_{H^s \times H^{s-1}}. \end{equation} The solution has a Lipschitz continuous dependence on the initial data. \end{theorem} \ Here the uniqueness is interpreted in the classical sense if $s = 2$. For $1< s < 2$, the $H^s \times H^{s-1}$ solutions can be defined as the unique limits of $H^2 \times H^1$ solutions. Alternatively, one may prove the uniqueness property in the $X^{s,\theta}$ spaces; see the discussion below. The higher Sobolev regularity is limited to $H^2$ given the regularity of the metric $g$. However, adding higher regularity to $g$ correspondingly adds regularity to the class of regular solutions. The objective of Section~\ref{reduction} will be to reduce our main result in Theorem~\ref{thm:local} to the small data result in Theorem~\ref{thm:small} by a standard scaling and finite speed of propagation argument. A key role in our analysis is played by the spaces $X^{s,\theta}$ associated to the wave operator $\Box_g$. Indeed, our main well-posedness argument is phrased as a fixed point argument in $X^{s,\theta}$ where \[ 1<s \leq 2, \qquad \frac{1}{2}<\theta \leq \min\{1, s - \frac{1}{2}\}. \] In particular one can rephrase the uniqueness property in our main theorems as an unconditional uniqueness in $X^{s,\theta}$, and the Lipschitz dependence as a Lipschitz dependence in $X^{s,\theta}$. These spaces are defined in Section~\ref{sec:Xspaces}, and the study of their linear and bilinear properties occupies much of the paper. 
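For orientation, we recall the flat-space model that these spaces adapt: in Minkowski space the classical $X^{s,b}$ norm (see the survey \cite{klainerman2002bilinear}) is \[ \| u \|_{X^{s,b}} = \big\| \langle \xi \rangle^{s} \, \big\langle |\tau| - |\xi| \big\rangle^{b} \, \widehat{u}(\tau,\xi) \big\|_{L^2_{\tau,\xi}}, \qquad \langle \cdot \rangle = (1+|\cdot|^2)^{1/2}, \] so that the weight $\big\langle |\tau| - |\xi| \big\rangle$ measures the distance of the space-time frequency to the characteristic cone of $\Box$. No such Fourier description is available for variable coefficients, and the curved analogues of Section \ref{sec:Xspaces} are instead constructed in physical space.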
\ \subsection{Previous works} Here we provide a brief survey of previous results on wave maps well-posedness, with an emphasis on variable coefficients. The Cauchy problem on a flat background $(\mathbb{R}^{1+n}, -dt^2 + dx^2)$ is by now well understood. In view of the scaling symmetry $u(t, x) \mapsto u(\lambda t, \lambda x)$, the critical Sobolev space is $\dot{H}^{\frac{n}{2}} \times \dot{H}^{\frac{n}{2} - 1} (\mathbb{R}^n)$. Local well-posedness in $H^s \times H^{s-1}$ was established for all subcritical regularities $s >\tfrac{n}{2}$ by Klainerman-Machedon for $n \ge 3$ and Klainerman-Selberg when $n = 2$ \cite{klainerman1993space}, \cite{klainerman1997remark}. The much more delicate critical problem $s = \tfrac{n}{2}$ was solved for small data in dimension $n = 2$ by Tataru~\cite{Tat}, \cite{TataruWM}, Tao~\cite{Tao2} and Krieger~\cite{Krieger2004}, with further contributions in higher dimensions by Klainerman-Rodnianski~\cite{Klainerman2001}, Shatah-Struwe~\cite{shatah1998geometric}, and Nahmod-Stefanov-Uhlenbeck~\cite{Nahmod2003}. Further, when $n = 2$, the energy-critical problem in $H^1 \times L^2$ admits a global theory for large data, as developed by Sterbenz-Tataru~\cite{Sterbenz2010}, \cite{Sterbenz2010a}, Krieger-Schlag~\cite{Krieger2012} and Tao \cite{Tao2008}. For wave maps with variable coefficients, Geba~\cite{geba2009remark} established local well-posedness in the subcritical regime $s > \tfrac{n}{2}$ when $3 \le n \le 5$, building on previous work of Geba and Tataru~\cite{geba2008gradient}. More recently, Lawrie constructed global-in-time solutions on perturbations of $\mathbb{R}^{1+4}$ Minkowski space for small data in the critical space $H^2 \times H^1(\mathbb{R}^4)$, and Lawrie-Oh-Shahshahani obtained analogous small-data results on $\mathbb{R} \times \mathbb{H}^n$, $n \ge 4$~\cite{lawrie2016cauchy}. 
See also the recent work of Li-Ma-Zhao on the stability of harmonic maps $\mathbb{H}^2 \to \mathbb{H}^2$ under the wave map flow~\cite{li2017asymptotic}. A key component in the study of wave maps in the Minkowski case at critical regularity is Tao's renormalization idea, first introduced in \cite{Tao2}. The subcritical problem considered in this article avoids the renormalization argument, which simplifies matters considerably. On the other hand, Strichartz estimates do not suffice to treat the full subcritical range $s > \tfrac{n}{2}$ in low dimensions, in particular when $n = 2, 3$. Indeed, the null structure $Q_g(u, u)$ distinguishes the wave maps system from equations with a generic quadratic derivative nonlinearity $(\nabla u)^2$; as observed by Lindblad~\cite{Lindblad1996}, the latter can be ill-posed in $H^s \times H^{s-1}(\mathbb{R}^3)$ for $s = 2 > \tfrac{3}{2}$. The previous works~\cite{klainerman1993space,klainerman1997remark} rely on $X^{s,b}$ spaces to exploit the null structure in lower dimensions. We pursue an analogous strategy using the variable-coefficient $X^{s,b}$-type spaces first introduced by Tataru~\cite{Tataru1996} and further developed by Geba and Tataru~\cite{geba2005disp}. However, the two-dimensional case involves additional subtleties, as we shall discuss shortly. \ \subsection{Reduction of Theorem \ref{thm:local} to Theorem \ref{thm:small} } \label{reduction} \ Let \[ (\tilde{u}_0,\tilde{u}_1 ) :\mathbb{R}_x^2 \to M \times T_{\tilde{u}_0}M \subset \mathbb{R}^m \times \mathbb{R}^m \] be initial data such that \[ \vn{ (\nabla \tilde{u}_0,\tilde{u}_1 ) }_{ {H}^{s-1} } \leq R. \] We have a solution $ \tilde{u} $ on a small time interval $ [0,T] $ if the rescaled function $$ u(t,x)=\tilde{u}(Tt,Tx) $$ is a solution on $ [0,1] $ with respect to the rescaled metric $g(Tt,Tx)$. The latter obeys \eqref{assumption:rescaled:metric}, provided that $T$ is small enough. 
The data for the rescaled solution will satisfy the scale invariant bound \[ \vn{ (u_0,u_1)}_{ \dot{H}^1 \times L^2 } \leq R \] as well as the homogeneous bound \[ \vn{ (u_0,u_1)}_{ \dot{H}^s \times \dot{H}^{s-1} } \leq R T^{s-1}. \] We will choose $ T $ small enough so that \[ RT^{s-1} \ll \varepsilon. \] To obtain smallness of the full $ H^s \times H^{s-1} $ norms we truncate the initial data, then apply Theorem \ref{thm:small} and glue the resulting solutions using finite speed of propagation. Let $ c $ be the largest speed of propagation and let $ (y_j)_{j} $ be the centers of a family of balls such that the truncated cones $$ K_{j}=\{ (t,x) \ | \ ct+\vm{x-y_j} \leq c+1, \ t \in[0,1] \} $$ are finitely overlapping and cover $ [0,1] \times \mathbb{R}^2 $. Let $ \chi_j $ be a smooth function which equals $ 1 $ on the ball $ B_j=B_{y_j}(c+1) $ and is supported on $ \tilde{B}_j=B_{y_j}(c+2) $. We denote by $ B_j^t=B_{y_j}(c+1-ct), \tilde{B}_j^{t} =B_{y_j}(c+2-ct)$ the corresponding balls at time $ t $. For every $ y_j $ we choose a local chart of $ M $ such that $ u_0(y_j) $ corresponds to the origin. We localize around $ y_j $, viewing (by slight abuse of notation) $ u_0,u_1 $ as having their image in the chart and defining $ u_0^j = \chi_j u_0,\ u_1^j= \chi_j u_1 $. Since $ u_0^j(y_j)=0 $, by homogeneous Sobolev embedding and Morrey's inequality we deduce that smallness is retained locally by the inhomogeneous Sobolev norms: \begin{align} \label{e:Hs-smallness} \vn{ (u_0^j, u_1^j)}_{H^s \times H^{s-1}} \leq \varepsilon. \end{align} One uses Moser estimates to pass from the extrinsic Sobolev spaces to the Sobolev norms defined using the patch coordinates. By Theorem \ref{thm:small} we obtain a solution $ u^j $ to \eqref{WM:int} which remains in the image of the chart on $ [0,1] $ by \eqref{sol:bd} and Sobolev embedding. 
Now viewing $ u^j $ as taking values in $ M \subset \mathbb{R}^m $, it solves \eqref{WM:ext} and we restrict it to the truncated cone $ K_j $. To obtain a solution $ u $ on $ [0,1] \times \mathbb{R}^2 $ defined by each $ u^j $ restricted to $ K_j $, we argue that any two of them must coincide on their common domain. For $H^2$ solutions this follows from finite speed of propagation, which is proved in a standard fashion. For rough solutions we use the well-posedness result from Theorem \ref{thm:small} to approximate them by $H^2$ solutions, and then we pass to the limit. Now we show that $ \nabla_{t,x} u(t) \in H^{s-1}(\mathbb{R}^2) $. First we note that, by \eqref{ball:localization}, we have \begin{align} \nonumber \sum_j \vn{u_0^j}_{H^s}^2 & \simeq \sum_j \vn{\chi_j u_0}_{L^2( \tilde{B}_j)}^2 + \vn{\nabla_x (\chi_j u_0)}_{H^{s-1}}^2 \lesssim \\ \label{sq:sum:interm} & \lesssim \sum_j \vn{ u_0}_{L^2( \tilde{B}_j)}^2 + \vn{ \nabla_x u_0}_{W^{s-1,2} ( \tilde{B}_j)}^2 \lesssim \sum_j \vn{ \nabla_x u_0}_{W^{s-1,2} ( \tilde{B}_j)}^2 \end{align} since by Morrey's inequality and Sobolev embedding we have $$ \vn{ u_0}_{L^2 ( \tilde{B}_j)} \lesssim \vn{\nabla_{x} u_0}_{L^{2+} (\tilde{B}_j)} \lesssim \vn{\nabla_{x} u_0}_{W^{s-1,2} (\tilde{B}_j)}. $$ Moreover, by \eqref{sq:sum:balls:easy} and \eqref{Sobolev:equiv} we have $$ \text{RHS }\eqref{sq:sum:interm} \lesssim \sum_j \vn{ \nabla_x u_0}_{L^2( \tilde{B}_j)}^2 + \vn{ \nabla_x u_0}_{\dot{W}^{s-1,2} ( \tilde{B}_j)}^2 \lesssim \vn{u_0}_{\dot{H}^1}^2+\vn{\nabla_x u_0}_{\dot{H}^{s-1}}^2 \lesssim R^2. $$ Similarly we obtain $ \sum_j \vn{u_1^j}_{H^{s-1}}^2 \lesssim R^2 $. For the solution at time $ t $, using \eqref{sq:sum:balls} we get $$ \vn{ \nabla_{t,x} u(t)}_{H^{s-1}}^2 \lesssim \sum_j \vn{\nabla_{t,x} u(t) }_{W^{s-1,2}(B_j^t)}^2 \lesssim \sum_j \vn{u^j[t]}_{H^s \times H^{s-1}}^2 \lesssim \sum_j \vn{(u^j_0,u^j_1)}_{H^s \times H^{s-1}}^2 \lesssim R^2. 
$$ This proves that $ \tilde{u}(t) \in \mathcal{H}^s $ for $ t \in [0,T] $ and $$ \vn{\tilde{u}}_{L^{\infty} \mathcal{H}^s( [0,T] \times \mathbb{R}^2) } \leq C_R \vn{(\tilde{u}_0,\tilde{u}_1)}_{\mathcal{H}^s} $$ with a constant $ C_R $ depending on $ R $. Now we address the locally Lipschitz dependence on the initial data. Let $ (\tilde{v}_0, \tilde{v}_1) :\mathbb{R}_x^2 \to M \times T_{\tilde{v}_0}M \subset \mathbb{R}^m \times \mathbb{R}^m $ be another initial data set such that $$ \vn{(\tilde{u}_0-\tilde{v}_0, \tilde{u}_1-\tilde{v}_1) }_{(\dot{H}^s \cap \dot{H}^1 \cap L^{\infty}) \times H^{s-1}} $$ is small enough, and let $ \tilde{v} $ be the solution on $ [0,T] $ with this data. Then using the argument above together with the Lipschitz dependence of the local solutions given by Theorem \ref{thm:small}, we obtain $$ \vn{\tilde{u}-\tilde{v}}_{L^{\infty} \mathcal{H}^s( [0,T] \times \mathbb{R}^2) } \lesssim C_R \vn{(\tilde{u}_0-\tilde{v}_0,\tilde{u}_1-\tilde{v}_1)}_{\mathcal{H}^s}. $$ Finally, we remark that assuming higher regularity for the metric $ g $ (such as $ \partial^k g \in L^{2} L^{\infty} $ and $ g \in L^{\infty} H^{k-1} $ for $ k \geq 3 $; see the discussion below in the proof of Theorem \ref{thm:small}) we have that $ (\dot{H}^n \cap \dot{H}^1) \times H^{n-1} $ regularity of the initial data is maintained by the solution on $ [0,T]$ for $ n \leq k$. \ \begin{remark} If we assume the initial data $ (\tilde{u}_0,\tilde{u}_1 )$ to be only in $ \dot{H}^s \times \dot{H}^{s-1} $, then we can still construct a solution, but it will be only \emph{locally} in $ H^s \times H^{s-1} $ at future times. \end{remark} \ \subsection{Proof of Theorem \ref{thm:small}} \ Here we set up the fixed point argument which yields Theorem \ref{thm:small}, and show that the proof reduces to four estimates described below. 
As a preliminary step, we replace the metric $g^{\alpha \beta}$ by $\tilde g^{\alpha \beta} = -(g^{00})^{-1} g^{\alpha \beta}$ in order to ensure that $\tilde g^{00} = -1$; this is not crucial in the analysis, but yields some minor technical simplifications. The price to pay for this substitution is that we get another term in the equations, \begin{equation}\label{conformal} \tilde{\Box}_{\tilde g} u^i=- \Gamma^i_{jk} (u) \tilde g^{\alpha \beta } \partial_{\alpha}u^j \partial_{\beta} u^k + \frac52 \partial_\alpha (\log |g^{00}|)\, \tilde g^{\alpha \beta} \partial_\beta u^i. \end{equation} The extra term on the right will easily be perturbative in our setting. Hence we will simply neglect it, drop the $\tilde g$ notation and simply assume that $g^{00} = -1$. We now consider initial data satisfying \eqref{small:id} and proceed to obtain the solution to \eqref{WM:int} on $ [0,1] $ by a fixed point argument, as $ u= \Phi(u) $ for the functional \begin{equation} \label{functional} \Phi^i(u) \vcentcolon= u_{lin}^i + \tilde{\Box}_g^{-1} \big(- \Gamma^i_{jk} (u) Q_g(u^j, u^k) \big), \end{equation} where $ u_{lin} $ is the solution of the linear equation \begin{equation} \label{inhom2:box:tilde} \tilde{\Box}_g u =F, \qquad u[0]=(u_0,u_1) \end{equation} with $ F=0 $, while $ \tilde{\Box}_g^{-1} F $ is defined as the solution of \eqref{inhom2:box:tilde} with $ u[0]=(0,0) $. \ This argument relies on the choice of two Banach spaces $ X $ (for the components of our solutions) and $ N $ (for the perturbative nonlinearity) such that $ \Phi $ is a contraction on a small ball of $ X $. Specifically, $$ X=X^{s,\theta}, \qquad N=X^{s-1,\theta-1}, $$ which are defined in \emph{Section} \ref{sec:Xspaces} with $ \theta \in (1/2,1), \ \theta < s-1/2 $. 
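To illustrate the shape of the contraction argument, consider a toy scalar model: replace $X$ by $\mathbb{R}$, $u_{lin}$ by a small number $a$, and the quadratic nonlinearity by $\varepsilon u^2$, so that $\Phi(u) = a + \varepsilon u^2$. For small data the Picard iterates converge geometrically to the unique small fixed point, exactly as in the function-space iteration. A minimal numerical sketch (Python; purely illustrative, not the actual construction):

```python
import math

def picard(a, eps, n_iter=60):
    """Iterate the toy functional Phi(u) = a + eps*u**2 from u = 0,
    mimicking u = u_lin + Box^{-1}(quadratic nonlinearity)."""
    u = 0.0
    for _ in range(n_iter):
        u = a + eps * u * u
    return u

a, eps = 0.05, 0.5                 # "small data" regime: 4*eps*a < 1
u = picard(a, eps)

# the limit solves u = a + eps*u^2 (the smaller quadratic root)
u_exact = (1 - math.sqrt(1 - 4 * eps * a)) / (2 * eps)
assert abs(u - u_exact) < 1e-12
assert 2 * eps * abs(u) < 1        # contraction factor |Phi'(u)| < 1
```

When $4 \varepsilon a > 1$ the real fixed point disappears and the iteration blows up, mirroring the role played by the smallness assumption \eqref{small:id}.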
\ The \emph{linear mapping property} in Lemma \ref{lin:map:prop} states that for solutions of \eqref{inhom2:box:tilde} we have \begin{equation} \label{lin:mapp:prop} \vn{u}_X \lesssim \vn{(u_0,u_1)}_{H^s \times H^{s-1}} + \vn{F}_N. \end{equation} \ Given \eqref{lin:mapp:prop}, we also need the mapping property written schematically as $ \Gamma(u) Q_g(u,u) : X \to N $, as well as $$ \vn{ \Gamma(u) Q_g(u,u) - \Gamma(v) Q_g(v,v) }_N \lesssim \varepsilon \vn{u-v}_X $$ for $ u, v $ in a ball of radius $ C \varepsilon $ in $ X $. These two properties are now easily reduced to the estimates \eqref{alg:est}-\eqref{nonlinear:diff} below; we obtain the $ \varepsilon $ smallness for the difference since the nonlinearity is at least quadratic. We add that this contraction argument gives Lipschitz dependence on $ u_{lin} $, and therefore on the initial data by \eqref{lin:mapp:prop}. \ The building blocks for the iteration are the following nonlinear estimates: \begin{flalign} & \emph{Algebra property:} &\vn{u \cdot v}_X &\lesssim \vn{u}_X \vn{v}_X && \label{alg:est} \\ & \emph{Product estimate:} &\vn{u \cdot F}_N & \lesssim \vn{u}_X \vn{F}_N \label{prod:est} && \\ & \emph{Null form estimate:} &\vn{ Q_g(u,v) }_N &\lesssim \vn{u}_X \vn{v}_X \label{nf:est} &&\\ & \emph{Moser estimate:} &\vn{\Gamma(u) }_X &\lesssim \vn{u}_X (1+\vn{u}_X^M). \label{moser:est} && \end{flalign} \ The proof of the \emph{Algebra property} \eqref{alg:est} occupies the main part of this paper, being the object of \emph{Section \ref{Sec:alg}}, which is based on the results of \emph{Sections~\ref{sec:Xspaces}-\ref{s:CE}}. All the other properties rely fundamentally on \eqref{alg:est}: \ The \emph{Product estimate} \eqref{prod:est} follows easily in \emph{Section \ref{Sec:Prod:est}} as a corollary of the estimates established in the proof of \eqref{alg:est}, using a duality argument. 
\ We obtain the \emph{Null form estimate} \eqref{nf:est} as a consequence of the identity \eqref{null:form:box} together with \eqref{alg:est}, \eqref{prod:est} and the linear bound $\vn{\Box_g u}_N \lesssim \vn{u}_X$ from Lemma \ref{lin:map:Box}. \ The nonlinear \emph{Moser estimate} \eqref{moser:est} is proved in \emph{Section \ref{Sec:Moser}}. For the purpose of the fixed point iteration argument based on \eqref{nf:est} we may subtract a constant and assume that $ \Gamma(0)=0 $. Using the \emph{Algebra property} \eqref{alg:est} we may subtract a polynomial from $ \Gamma $ and assume that $ \partial^{\alpha} \Gamma(0) =0 $ for $ \vm{\alpha} \leq C $. Modifying $ \Gamma $ outside a neighborhood of the origin, it suffices to prove \eqref{moser:est} under the assumption that $ \Gamma $ and its derivatives are uniformly bounded. Moreover, using the fundamental theorem of calculus, when $ \vn{u}_X, \vn{v}_X $ are bounded we obtain as a consequence \begin{equation} \label{nonlinear:diff} \vn{\Gamma(u)- \Gamma(v) }_X \lesssim \vn{u-v}_X. \end{equation} \ \emph{Persistence of regularity.} Assuming we control $ k \geq 2 $ derivatives of the metric, so that for the rescaled metric on $[0,1]$ we may assume $ \vn{ \partial^k g}_{L^2 L^{\infty}} \ll 1 $ and $ g \in L^{\infty} H^{k-1} $, we show that $ H^{\sigma} \times H^{\sigma-1} $ regularity of the initial data is maintained in time for any $ \sigma \in [s, k ] $. Let $ M \geq 1 $ and $$ \vn{u[0]}_{H^{\sigma} \times H^{\sigma-1}} \leq M, \qquad \vn{u[0]}_{H^s \times H^{s-1}} \leq \varepsilon. $$ We obtain our solution as a fixed point for $ \Phi $ from \eqref{functional}, which is a contraction on a small ball of the space $ Z=X^{\sigma,\theta} \cap X^{s,\theta} $ endowed with the norm $$ \vn{u}_Z=\frac{\varepsilon}{M} \vn{u}_{X^{\sigma,\theta}} + \vn{u}_{X^{s,\theta}}, $$ provided $ \varepsilon $ is sufficiently small, independently of $ M $. 
By Remark \ref{X:high:reg}, the mapping property \eqref{lin:mapp:prop} also holds for $ H^{\sigma} \times H^{\sigma-1} $, $ X^{\sigma,\theta} $ and $ X^{\sigma-1,\theta-1} $. The nonlinear estimates \eqref{alg:est}-\eqref{moser:est} are replaced by \begin{align*} \vn{u \cdot v}_{X^{\sigma,\theta}} & \lesssim \vn{u}_{X^{\sigma,\theta}} \vn{v}_{X^{s,\theta}} + \vn{u}_{X^{s,\theta}} \vn{v}_{X^{\sigma,\theta}} \\ \vn{Q_g(u,v)}_{X^{\sigma-1,\theta-1}} & \lesssim \vn{u}_{X^{s,\theta}} \vn{v}_{X^{\sigma,\theta}} + \vn{u}_{X^{\sigma,\theta}} \vn{v}_{X^{s,\theta}} \\ \vn{u \cdot F}_{X^{\sigma-1,\theta-1}} & \lesssim \vn{u}_{X^{s,\theta}} \vn{F}_{X^{\sigma-1,\theta-1}} + \vn{u}_{X^{\sigma,\theta}} \vn{F}_{X^{s-1,\theta-1}} \\ \vn{\Gamma(u)}_{X^{\sigma,\theta}} & \lesssim \vn{u}_{X^{\sigma,\theta}}. \end{align*} We refer to \eqref{alg:sgm}, \eqref{prod:est:higher} and Remark \ref{rk:cor:prodest:higher} for a discussion of these properties. We conclude that the unique solution obtained earlier in $ X=X^{s,\theta} $ is also in $ X^{\sigma,\theta} $, with $ \vn{u}_{X^{\sigma,\theta}} \lesssim M $. In both fixed point arguments we have continuous dependence on the initial data; therefore the obtained solution is the unique strong limit of smooth solutions. \ \subsection{Main ideas} \ We now provide an outline of the main ideas of the paper, which are the ingredients used to establish the building blocks \eqref{alg:est}-\eqref{moser:est} of our results. \ \emph{Curved $ X^{s,b} $ spaces.} The classical $ X^{s,b} $ spaces are multiplier weighted $ L^2 $-spaces on Minkowski space-time adapted to the symbol of the wave operator $ \Box $, much as Sobolev spaces are associated to the Laplacian $ \Delta $; see \cite{Klainerman1996} and the references therein. These are used to prove local well-posedness for constant-coefficient wave maps above scaling: \cite{klainerman1993space} ($ d \geq 3 $) and \cite{klainerman1997remark} ($ d=2 $). 
For the history of classical $ X^{s,b} $ spaces and applications we refer to the survey article \cite{klainerman2002bilinear}. A variable coefficient version of the $ X^{s,b} $ spaces, defined in physical space, was first introduced\footnote{Other variable-coefficient $X^{s,b}$ constructions have been proposed, for instance via spectral theory on a smooth compact manifold~\cite{Burq2005}.} in \cite{Tataru1996}, and then further developed in \cite{geba2008gradient} in order to study semilinear wave equations on curved backgrounds with a generic quadratic derivative nonlinearity. These spaces were later utilized by Geba~\cite{geba2009remark} in his treatment of energy-subcritical wave maps in dimensions $3 \le d \le 5$. We borrow this notion of curved $ X^{s,b} $ spaces in the present paper and review the relevant definitions and lemmata in \emph{Section~\ref{sec:Xspaces}}. In two space dimensions, new techniques are required to establish $X^{s,b}$ estimates, in particular the crucial algebra property~\eqref{alg:est}. Using Strichartz estimates as in the case $ d \geq 3 $ would incur an unacceptable loss of derivatives when $d=2$. Instead we adapt some ideas of Tataru from energy-critical wave maps in Minkowski space; see in particular~\cite[Theorem~3]{Tat}. Our analysis involves bilinear angular decompositions, wave packets, and energy estimates along certain null hypersurfaces. \ \emph{Bilinear estimates} in $ X^{s,b} $ require optimal control of $ \vn{u_{\lambda} \cdot v_{\mu}}_{L^2([0,1]\times \mathbb{R}^2)} $ with bounds depending on the angular frequency localization of $ u_{\lambda} $ and $ v_{\mu} $. Decomposing the lower frequency function $ v_{\mu} $ into \emph{wave packets} concentrated on certain ``tubes'' $ T $, by orthogonality it suffices to obtain bounds for $ \vn{u_{\lambda}}_{L^2(T)} $. In effect, this strategy is a variable-coefficient variant of the traveling wave decomposition employed in~\cite{Tat}.
Foliating a tube $ T $ into null surfaces $ \Lambda $ (see \emph{Section \ref{sec:wp1}}) we reduce to controlling $$ \int_{ \Lambda} \vm{ u_{\lambda}}^2 \,\mathrm{d} \sigma. $$ We call these \emph{characteristic energy estimates}. While they can be proved by Fourier analysis in the Minkowski case \cite{Tat}, the present context requires a physical space proof, which is considerably more delicate, as integration by parts based on the energy-momentum tensor only controls tangential derivatives such as $ \int_{ \Lambda} \vm{L u_{\lambda}}^2 \,\mathrm{d} \sigma $. The problem of ``inverting the $L$'' in a manner consistent with the angular separation is addressed using microlocal analysis in \emph{Section \ref{s:CE}}. \ \emph{Wave packet analysis.} A key technical device in this article consists in approximating solutions of the linear wave equation by square-summable superpositions of \emph{wave packets}, which are localized in both space and frequency on the scale of the uncertainty principle, and propagate in spacetime along null hypersurfaces. Such representations of the wave group originate in the work of Cordoba-Fefferman~\cite{Cordoba1978}. For wave equations with $C^{1,1}$ coefficients, the first wave packet parametrix was given by Smith~\cite{smith1998parametrix}. An alternate construction, based on the use of phase space (FBI) transforms, was provided by Tataru~\cite{Tat-nlw1,Tat-nlw2,Tat-nlw3} for metrics satisfying $\partial^2 g \in L^1 L^\infty$. Analogous constructions have since been employed for even rougher coefficients, as in the study of quasilinear wave equations in Smith-Tataru~\cite{smith2005sharp}. In this article we use Smith's parametrix, which easily extends to rougher metrics satisfying $\partial^2 g \in L^1 L^\infty$.
There are two reasons for that: (i) causality (the parametrix in \cite{Tat-nlw1,Tat-nlw2,Tat-nlw3} is developed in a microlocal space-time foliation, without reference to a time foliation) and (ii) localization scales (slabs~\cite{smith1998parametrix} vs. tubes~\cite{Tat-nlw1,Tat-nlw2,Tat-nlw3}). Neither of these reasons is critical, but taken together they do make a difference from a technical standpoint. \emph{Section~\ref{s:packets}} isolates the essential properties of Smith's parametrix that we shall need, while specific implementation details are reviewed in \emph{Appendix \ref{s:smithpackets}}. Another contribution of this paper is a \emph{wave packet characterization} of functions in $X^{s,b}$ with low modulation, see \emph{Proposition \ref{X:wp:dec}} and \emph{Corollary \ref{Cor:WP:dec}}. \ \emph{Null structures} arise in equations from mathematical physics, where they manifest through the vanishing of parallel interactions. The cancellations are realized in our setting by expressing the null form $Q_g(u,v)$ in a suitable \emph{null frame} $ \{ L, \underline{L}, E \} $: \[ 2 Q_{g} \big(u,v \big) = Lu \cdot \underline{L}v + \underline{L} u \cdot Lv - 2 Eu \cdot Ev. \] If $\{L, E\}$ are tangent to the null hypersurfaces along which $v$ propagates, then $Lv$ and $Ev$ are better behaved than the transverse derivative $\underline{L}v$. Hence there is a gain in $Q_g(u,v)$ over a generic quadratic term $\nabla u \cdot \nabla v$ whenever $u$ and $v$ propagate along nearby directions. Estimates for null forms have played a key role in the low regularity well-posedness of wave maps in the flat case (\cite{klainerman1993space}, \cite{klainerman1997remark}, \cite{klainerman2002bilinear}, \cite{Tat}, \cite{Tao2}) and in other problems, especially in low dimensions. Several null form estimates for variable coefficients have been obtained before by Sogge \cite{sogge1993local}, Smith-Sogge \cite{smithsogge} and Tataru \cite{tataru2003null}, under various assumptions on the metric.
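To orient the reader, the angular gain encoded in this frame decomposition can be quantified explicitly in the constant coefficient model; the following standard computation is included only as an illustration, with an assumed normalization of the flat null form $Q_0$.

```latex
% Flat model: Q_0(u,v) = \partial_t u \, \partial_t v - \nabla_x u \cdot \nabla_x v.
% Test it on half-waves traveling in unit directions \omega, \omega' at angle \theta:
%   u = e^{i \lambda (t + \omega \cdot x)}, \qquad v = e^{i \mu (t + \omega' \cdot x)}.
% Then \partial_t u = i \lambda u and \nabla_x u = i \lambda \omega u (similarly for v), so
\[
  Q_0(u,v) = -\lambda \mu \, (1 - \omega \cdot \omega') \, u v ,
  \qquad 1 - \omega \cdot \omega' = 1 - \cos \theta \simeq \theta^2 ,
\]
% a gain of \theta^2 over the generic size \lambda \mu of \nabla u \cdot \nabla v,
% vanishing exactly for parallel interactions \omega = \omega'.
```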
The strategy outlined above of obtaining $ L^2 $ estimates using wave packets and characteristic energy estimates applies as well to bounding $ \vn{Q_g(u_\theta, v_\theta)}_{{L^2([0,1]\times \mathbb{R}^2)}} $, where the terms $ u_\theta, v_\theta $ have an angular separation of $\simeq \alpha$. A Whitney-type decomposition is used to reduce to this assumption. \ The \emph{Moser estimate}~\eqref{moser:est}, required for non-analytic target manifolds, is proved in \emph{Section~\ref{Sec:Moser}} following the method of iterated multilinear paradifferential expansions introduced in~\cite{TataruWM}. \subsection{Notations and preliminaries} \label{s:preliminaries} \begin{itemize} \item We denote by $\tau$ and $\xi$ the time, respectively the spatial Fourier variables. For $0 \ne \xi$, write $\wht{\xi} := \xi/|\xi|$ for its projection to the (Euclidean) unit sphere. \item We denote the Cauchy data at time $s$ by $ v[s]=(v(s),\partial_t v(s)) $ for any time $ s $. \item Mixed Lebesgue norms shall be denoted by $\|u\|_{L^p_t L^q_x} := \bigl\| \|u(t)\|_{L^q_x} \bigr\|_{L^p_t}$; to unclutter the notation we often omit the subscripts. Also, when $p=q$ we write $L^p := L^p_t L^p_x$. \item In the context of wave packet sums we will denote by $ \ell^2_{T} $ the norm $$ \vn{c_T }_{\ell^2_{T}}^2 =\sum_{T \in \mathcal T_{\lambda} } \vm{c_T}^2 $$ \item To define Littlewood-Paley decompositions we fix a partition of unity of the positive real line \[1 = \sum_{\lambda \geq 1 \text{ dyadic}} s_\lambda(r),\] where $s_\lambda$ is supported in the interval $r \in [\lambda/2, 2\lambda]$ for $ \lambda \geq 2 $ and in $ r \leq 2 $ for $ \lambda=1$.
Setting $P_\lambda(\xi) := s_\lambda(|\xi|)$ and $P_\lambda(\tau, \xi) := s_\lambda(|(\tau, \xi)|)$, we define Littlewood-Paley frequency decompositions on space and spacetime: \begin{align*} 1_{\mathbb{R}^2} = \sum_{\lambda} P_\lambda(D_x), \quad 1_{\mathbb{R} \times \mathbb{R}^2} = \sum_{\lambda} P_{\lambda}(D_t, D_x). \end{align*} The projections $P_{<\lambda}$, $P_{>\lambda}$ are defined in the usual manner, and we also set $P_{[\lambda_1, \lambda_2]} = \sum_{\lambda \in [\lambda_1, \lambda_2]} P_\lambda$. For a multiplier $s_\lambda$, we write $\tilde{s}_\lambda$ for a slightly wider version of $s_\lambda$ so that $s_\lambda = \tilde{s}_\lambda s_\lambda$. In this article, Littlewood-Paley decompositions are defined with respect to the $x$ variable, with one main exception: the (dual) metric $g$ shall always be mollified in spacetime, and for a frequency $\mu$ we write $g_{<\mu} := P_{<\mu} (D_t, D_x) g$. \item Pseudo-differential operators in this paper are defined using the Kohn-Nirenberg quantization. When there is no danger of confusion, we sometimes denote both a symbol $\phi(x, \xi)$ and its corresponding operator $\phi(X, D)$ by the abbreviation $\phi$. \item The global Sobolev spaces $ \dot{H}^s $ and $ H^s $ are defined for $ s \in \mathbb{R} $ using the Fourier transform by \begin{equation} \label{Sobolev:global} \vn{u}_{\dot{H}^s} =\vn{ \vm{\xi}^s \hat{u}}_{L^2}, \qquad \qquad \vn{u}_{H^s}=\vn{ (1+\vm{\xi}^2)^{s/2} \hat{u}}_{L^2} \end{equation} \item We define the local Sobolev spaces $ W^{s-1,2}(B) $ for a ball $ B $ of radius $ \simeq 1 $ or for $ B=\mathbb{R}^2 $.
When $ s-1 \in \{0,1\} $ we use the classical definition, while when $ s-1 \in (0,1) $ we have the norm \begin{equation} \vn{v}_{ W^{s-1,2}(B)}^2 =\vn{v}_{L^2(B)}^2+ \vm{v}_{ \dot{W}^{s-1,2}(B)}^2 \end{equation} where $ \vm{\cdot}_{ \dot{W}^{s-1,2}(B)} $ denotes the Gagliardo seminorm \begin{equation} \label{Gagliardo:seminorm} \vm{v}_{ \dot{W}^{s-1,2}(B)}^2 = \int_B \int_B \frac{\vm{v(x)-v(y)}^{2}}{\vm{x-y}^{2s}} \,\mathrm{d} x \,\mathrm{d} y. \end{equation} \item When $ B = \mathbb{R}^2 $ one has \begin{equation} \label{Sobolev:equiv} \vm{v}_{ \dot{W}^{s-1,2}( \mathbb{R}^2)} \simeq \vn{v}_{\dot{H}^{s-1}}, \qquad \vn{v}_{ W^{s-1,2}(\mathbb{R}^2)} \simeq \vn{v}_{H^{s-1}}. \end{equation} When $ \psi \in C^{0,1}(\tilde{B}) $ is supported in a slightly smaller ball one has \begin{equation} \label{ball:localization} \vn{\psi v}_{H^{s-1}} \lesssim \vn{\psi v}_{W^{s-1,2} (\tilde{B})} \lesssim \vn{v}_{W^{s-1,2} (\tilde{B})}. \end{equation} We refer to \cite{di2012hitchhiker} for these properties.
Clearly, when the balls $ (B_j)_j $ are finitely overlapping one has \begin{equation} \label{sq:sum:balls:easy} \sum_j \vm{v}_{ \dot{W}^{s-1,2}(B_j)}^2 \lesssim \vm{v}_{ \dot{W}^{s-1,2}(\mathbb{R}^2)}^2 \end{equation} Conversely, assuming in addition that $ (B_j)_j $ cover $ \mathbb{R}^2 $, by splitting the $ \mathbb{R}^2 \times \mathbb{R}^2 $ integral in \eqref{Gagliardo:seminorm} into regions where there exists $ j $ such that both $ x,y \in B_j $ and regions where $ \vm{x-y}> \delta $ (where we bound $ \vm{v(x)-v(y)} \leq \vm{v(x)}+\vm{v(y)} $ and use the $ L^2(B_j) $ norms), we obtain \begin{equation} \label{sq:sum:balls} \vm{v}_{ \dot{W}^{s-1,2}(\mathbb{R}^2)}^2 \lesssim \sum_j \vn{v}_{ W^{s-1,2}(B_j)}^2 \end{equation} \end{itemize} {\bf Bounds on the metric.} Observe that for any $\lambda$ we have the pointwise estimates \begin{align*} \begin{split} &| \partial^2 g_{<\lambda}| \lesssim M ( \partial^2 g) \le M( \| \partial^2 g\|_{L^\infty_x}),\\ &|\partial^k \partial^2 g_{<\lambda}| \lesssim \lambda^k M ( \| \partial^2 g \|_{L^\infty_x}), \end{split} \end{align*} where $M$ is the Hardy-Littlewood maximal function, and by Bernstein's inequality \begin{align} \label{e:metric_est1} \begin{split} \| \partial^k \partial^2 g_{<\lambda} \|_{L^\infty} \lesssim \lambda^k \lambda^{\frac{1}{2}} \| \partial^2 g\|_{L^\infty_x L^2_t} \le \lambda^{\frac{1}{2} + k} \| \partial^2 g\|_{L^2_t L^\infty_x}. \end{split} \end{align} Similarly, from the bound \begin{align*} | g_\lambda| = |P_{\lambda} D^{-2} D^2 g| \lesssim \lambda^{-2} M(\| \partial^2 g\|_{L^\infty_x}) \end{align*} we have \begin{align} \label{e:metric_est2} \| g_\lambda \|_{L^2_t L^\infty_x} \lesssim \lambda^{-2} \| \partial^2 g\|_{L^2_t L^\infty_x}.
\end{align} As a consequence of the above bounds on $g$, we recall the commutator estimate \begin{equation} \label{com:est} \vn{[ \Box_{g_{<\sqrt{\lambda}} },P_{\lambda} ] v}_{L^2_x} \lesssim \lambda \vn{v}_{ L^2_x}+ \vn{\partial_t v}_{L^2_x} \end{equation} which follows from $$ [ \Box_{g_{<\sqrt{\lambda}} },P_{\lambda} ]=[g_{<\sqrt{\lambda}},P_{\lambda}] \partial_{t,x} \partial_x, \qquad \vn{[g_{<\sqrt{\lambda}},P_{\lambda}]}_{L^2 \to L^2} \lesssim \lambda^{-1} \vn{\nabla g}_{L^{\infty}}. $$ \ \subsection*{Acknowledgments} The first author was partially supported by the Simons Foundation. The second author was partially supported by an NSF Postdoctoral Fellowship. The third author was partially supported by the NSF grant DMS-1800294 as well as by a Simons Investigator grant from the Simons Foundation. \ \section{Curved \texorpdfstring{$X^{s,b}$}{Xsb} spaces} \ \label{sec:Xspaces} In this section we present and expand on the theory of $X^{s,b}$ spaces associated to wave operators with variable coefficients from \cite{geba2008gradient} (see also \cite{Tataru1996}). An alternative definition of $X^{s,b}$ spaces in the context of time-independent coefficients on compact manifolds was proposed in \cite{Burq2005} using spectral theory. \subsection{Definitions} We begin with our main building blocks, which are norms associated to a single frequency $\lambda$ and modulation $d$: \begin{definition} \label{def:X:1} Let $ s \in \mathbb{R} $, $ \theta \in (0,1) $ and let $ I $ be a time interval. \begin{enumerate} \item For dyadic $ \lambda \geq d \geq 1 $, the norm of $ X^{s,\theta}_{\lambda,d}[I] $ is defined by $$ \vn{u}_{ X^{s,\theta}_{\lambda,d}[I]}^2 = \lambda ^{2s} d^{2\theta} \vn{u}_{L^2(I\times \mathbb{R}^n)}^2 + \lambda ^{2s-2} d^{2\theta-2} \vn{\Box_{g_{<\sqrt{\lambda}} } u }_{L^2(I\times \mathbb{R}^n)}^2. $$ When $ I=[0,1] $ we drop the $ I $ and write simply $ X^{s,\theta}_{\lambda,d} $.
\\ \item The norms of $ X^{s,\theta}_{\lambda, \leq h}, X^{s,\theta}_{\lambda, \leq h, \infty} $ are defined by $$ \vn{u}_{ X^{s,\theta}_{\lambda, \leq h} }^2=\inf \ \Big\{ \ \sum_{d=1}^{h} \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2 \quad ; \quad u= \sum_{d=1}^{h} u_{\lambda,d} \ \Big\}, $$ $$ \vn{u}_{ X^{s,\theta}_{\lambda, \leq h, \infty} }^2=\inf \ \Big\{ \ \sup_{d \in \overline{1,h}} \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2 \quad ; \quad u= \sum_{d=1}^{h} u_{\lambda,d} \ \Big\}, $$ for $ h \leq \lambda $. When $ h=\lambda $ we simply write $ X^{s,\theta}_{\lambda} $ for $ X^{s,\theta}_{\lambda, \leq \lambda} $. We also use a similar definition for $ X^{s,\theta}_{\lambda, [d_1,d_2]} $ for a restricted range of modulations $ d_1 \leq d \leq d_2 $. \\ \item For $ \delta>0 $ such that $ D=\delta^{-1} $ is a dyadic integer and $ \vm{I} \simeq \delta $, we analogously define $ X^{s,\theta}_{\lambda, \leq h}[I] $, $ X^{s,\theta}_{\lambda, \leq h,\infty}[I] $, $ X^{s,\theta}_{\lambda, [d_1,d_2]}[I] $ for $ D \leq h \leq \lambda $, with summation, respectively supremum, taken over $ d \in \overline{D,\lambda} $ or $ d \in \overline{d_1,d_2} $. When $ D $ is bounded by a universal constant, we may equivalently take the summation over $ d \in \overline{1,\lambda} $. \end{enumerate} \end{definition} \ \begin{remark} There is some flexibility in how to mollify the coefficients of $\Box_g$. The threshold $\sqrt{\lambda}$ is motivated by the hypothesis on $ \partial^2 g $, which gives $ \vn{g^{\alpha \beta}_{\geq\sqrt{\lambda}}}_{L^2_t L^{\infty}_x} \lesssim \lambda^{-1} $, making $ \Box_{g_{\geq\sqrt{\lambda}}} P_{\lambda} $ effectively a first-order operator. Allowing higher frequencies would yield equivalent norms, and indeed a paradifferential-type cutoff $<\lambda$ would be more natural when considering merely Lipschitz coefficients. \end{remark} The full iteration spaces are defined as follows.
\begin{definition} \label{def:X:2} Let $ s \in \mathbb{R} $ and $ \theta \in (0,1) $. \begin{enumerate} \item A function $ u \in L^2([0,1],H^s(\mathbb{R}^n)) $ is said to be in $ X^{s,\theta} $ if it has finite norm defined by $$ \vn{u}_{ X^{s,\theta} }^2=\inf \ \Big\{ \ \sum_{\lambda = 1}^{\infty} \sum_{d=1}^{\lambda} \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2 \quad ; \quad u= \sum_{\lambda = 1}^{\infty} \sum_{d=1}^{\lambda} P_{\lambda} u_{\lambda,d} \ \Big\}$$ \\ \item A function $ f \in L^2([0,1],H^{s+\theta-2} (\mathbb{R}^n)) $ is said to be in $ X^{s-1,\theta-1} $ if it has finite norm defined by $$ \vn{f}_{ X^{s-1,\theta-1} }^2=\inf \Big\{ \ \vn{f_0}_{L^2 H^{s-1}}^2+ \sum_{\lambda = 1}^{\infty} \sum_{d=1}^{\lambda} \vn{f_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2 ; \ f=f_0+ \sum_{\lambda = 1}^{\infty} \sum_{d=1}^{\lambda} \Box_{g_{<\sqrt{\lambda}}} P_{\lambda} f_{\lambda,d} \Big\}$$ \end{enumerate} \end{definition} \begin{remark} Roughly speaking, $X^{s,\theta}_{\lambda, d}$ will hold the portion of $u$ at frequency $\lambda$ and ``modulation'' $d$. If $g$ is the Minkowski metric and we modify the above definition of $X^{s,\theta}$ by replacing $L^2([0,1]\times \mathbb{R}^n)$ with $L^2( \mathbb{R} \times \mathbb{R}^n)$, then for $u_{\lambda, d}$ localized to frequency $|\xi| \simeq \lambda$ and modulation $\bigl| |\tau| - |\xi| \bigr| \simeq d$ with $d \le \lambda$ we have \begin{align*} \|u_{\lambda, d}\|_{X^{s,\theta}_\lambda} \simeq \lambda^{s} d^\theta \| u_{\lambda, d} \|_{L^2} \simeq \| u_{\lambda, d} \|_{X^{s, \theta}_{\lambda, d}}. \end{align*} On the other hand, if $d \gg \lambda$, \begin{align*} \| u_{\lambda, d} \|_{X^{s, \theta}_{\lambda}} \simeq \lambda^{s+\theta-2} \| \Box u_{\lambda, d} \|_{L^2} \simeq \| \Box u_{\lambda, d} \|_{L^2 H^{s+\theta-2}}.
\end{align*} In the variable-coefficient context, modulation cannot be precisely interpreted in terms of localization in Fourier space. Indeed, a typical solution to $\Box_{g_{<\sqrt{\lambda}}} u = 0$ will have uncertainty at least $\sqrt{\lambda}$ in both temporal and spatial frequency. Nonetheless, the intuition from the constant-coefficient case still provides useful heuristics for proving estimates. \end{remark} \ \begin{remark} \label{rmk:tildeX:loc} By \cite[Corollary 2.5]{geba2008gradient}, in the definition of the $ X^{s,\theta} $ and $ X^{s-1,\theta-1} $ spaces one can replace the $ X^{s,\theta}_{\lambda,d} $ norm by the norm of $ \bar{X}^{s,\theta}_{\lambda,d} $ defined by $$ \vn{u}_{ \bar{X}^{s,\theta}_{\lambda,d}}^2 = \lambda ^{2s-2} d^{2\theta} \vn{\nabla_{t,x} u}_{L^2}^2 + \lambda ^{2s-2} d^{2\theta-2} \vn{\Box_{g_{<\sqrt{\lambda}} } u }_{L^2}^2. $$ This is based on the estimate in \cite[Lemma 2.4]{geba2008gradient}: \begin{equation} \label{tildeX:loc} \lambda^{s-1} d^{\theta} \vn{\nabla_{t,x} P_{\lambda} u}_{L^2}+ \lambda ^{s-1} d^{\theta-1} \vn{\Box_{g_{<\sqrt{\lambda}} } P_{\lambda} u }_{L^2} \lesssim \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}. \end{equation} This bound shows that on frequency localized functions the $ X^{s,\theta}_{\lambda,d} $ and $ \bar{X}^{s,\theta}_{\lambda,d} $ norms are comparable, and also that in Definition \ref{def:X:2} one can assume that $ u_{\lambda,d} $ and $ f_{\lambda,d} $ are localized at frequency $ \lambda $. Moreover, based on \eqref{tildeX:loc} we have the straightforward embedding \begin{equation} X^{s-1,\theta-1} \subset L^2 H^{s+\theta-2}. \end{equation} \end{remark} \subsection{Basic properties} \ Here we show how some properties known for the classical $ X^{s,b} $ spaces generalize to variable coefficients.
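Before doing so, note that the constant coefficient heuristic from the remark above, namely $ \vn{\Box u_{\lambda,d}}_{L^2} \simeq \lambda d \, \vn{u_{\lambda,d}}_{L^2} $ for modulation $ d \leq \lambda $, can be checked numerically via Plancherel: for $ \widehat{u} $ equal to the indicator of the region $ |\xi| \simeq \lambda $, $ \vm{\vm{\tau}-\vm{\xi}} \simeq d $, the ratio of the two norms is the $ L^2 $ average of the symbol $ \vm{\tau^2 - |\xi|^2} $ over that region. The following sketch is illustrative only (one space dimension, with hypothetical values $ \lambda = 64 $, $ d = 8 $):

```python
import numpy as np

lam, d = 64.0, 8.0
tau, xi = np.meshgrid(np.arange(-512.0, 512.0), np.arange(-512.0, 512.0),
                      indexing="ij")

# region |xi| ~ lam, modulation ||tau| - |xi|| ~ d (within dyadic factors)
mod = np.abs(np.abs(tau) - np.abs(xi))
region = (np.abs(xi) >= lam) & (np.abs(xi) < 2 * lam) & (mod >= d) & (mod < 2 * d)

# By Plancherel, for hat(u) = indicator of this region,
# ||Box u||_{L^2} / ||u||_{L^2} is the L^2 average of |tau^2 - xi^2| over it,
# and |tau^2 - xi^2| = ||tau| - |xi|| * (|tau| + |xi|) ~ d * lam there.
symbol = np.abs(xi ** 2 - tau ** 2)
ratio = np.sqrt(np.mean(symbol[region] ** 2))
print(ratio / (lam * d))  # an O(1) constant
```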
We begin with some properties of our frequency localized building blocks: \begin{proposition} Let $ u $ be a $ \lambda$-frequency-localized function on $ [0,1] \times \mathbb{R}^n $. Then: \begin{enumerate}[label=(\arabic*),ref=(\arabic*)] \item \label{P:En:est} (Energy estimates) For $ d \gtrsim \vm{I}^{-1} $ and any $ v $ one has: \begin{align} \label{energy:X} \lambda^{s-1} d^{\theta-\frac{1}{2}} \vn{\nabla_{t,x} u}_{L^{\infty} L^2[I]} & \lesssim \vn{u}_{X^{s,\theta}_{\lambda,d}[I]} \\ \label{energy:est:X} \vn{v}_{C H^{s}\cap C^1 H^{s-1}} & \lesssim \vn{v}_{ X^{s,\theta}} \quad \text{if} \quad \theta > \frac{1}{2}. \end{align} \item \label{P:timeloc} (Time localization) Let $ 1 \leq d_1 \leq d_2 \leq \lambda $ and let $ \chi_{d_2}(t) $ be a bump function in time localized on the $ d_2^{-1} $ scale. Then \begin{equation} \label{X:modulations:intervals:eq} \chi_{d_2} : \tilde{S}_{\lambda} X_{\lambda,d_1}^{1,\frac{1}{2}} \to X_{\lambda,d_2}^{1,\frac{1}{2}} \qquad \text{i.e.} \qquad \vn{ \chi_{d_2} u}_{X_{\lambda,d_2}^{1,\frac{1}{2} }} \lesssim \vn{u}_{X_{\lambda,d_1}^{1,\frac{1}{2}}}. \end{equation} \item \label{P:ext} (Global extension) There exists a frequency localized extension $ \tilde{u} $ of $ u $ from $ [0,1] $, supported in $(-d^{-1}, 1+d^{-1} ) $, such that \begin{align*} d^{\theta} \| \nabla_{t,x} \tilde{u} \|_{L^2(\mathbb{R} \times \mathbb{R}^n)} + d^{\theta-1} \| \Box_{g_{<\sqrt{\lambda}}} \tilde{u} \|_{L^2(\mathbb{R} \times \mathbb{R}^n)} \lesssim \vn{u}_{X^{1,\theta}_{\lambda,d}}. \end{align*} \item \label{P:dec} (Time orthogonality) Let $ 1 \leq d \leq d' \leq d'' \leq \lambda $.
For smooth partitions of unity with respect to time intervals of length $ d^{-1} $: $ 1=\sum_j \chi_d^j(t) $, one has \begin{align} \label{time:ort:1} \vn{u}_{ X_{\lambda,d'}^{s,\theta}}^2 & \simeq \sum_j \vn{ \chi_d^j(t) u}_{X_{\lambda,d'}^{s,\theta}}^2 \\ \label{time:ort:2} \vn{u}_{ X_{\lambda,[d',d'']}^{s,\theta}}^2 & \simeq \sum_j \vn{ \chi_d^j(t) u}_{X_{\lambda,[d',d'']}^{s,\theta}}^2 \end{align} \item \label{P:scaling} (Scaling) Set $ u^{\delta}(t,x)=u(\delta t, \delta x) $ defined on $ [0,1] $, where $ \delta d \geq 1 $. Then: \begin{equation} \label{eq:scaling} \vn{u^{\delta}}_{X_{\delta \lambda,\delta d}^{s,\theta}} \simeq \delta^{s+\theta-\frac{n+1}{2}} \vn{u}_{X_{\lambda,d}^{s,\theta}[0,\delta]} \end{equation} where $ X_{\delta \lambda,\delta d}^{s,\theta} $ is defined using the metric $ g^{\delta}(t,x)=g(\delta t, \delta x) $. \end{enumerate} \end{proposition} \begin{proof} {\bf \ref{P:En:est}} By energy estimates for the wave equation we obtain $$ \vn{\nabla_{t,x} u}_{L^{\infty} L^2[I]}^2 \lesssim \vm{I}^{-1} \vn{\nabla_{t,x} u}_{L^2[I]}^2 + \vn{\nabla_{t,x} u}_{L^2[I]} \vn{ \Box_{g_{< \sqrt{\lambda}}} u}_{L^2[I]} $$ which implies \eqref{energy:X} by \eqref{tildeX:loc}. From this, for $ \theta > \frac{1}{2} $ and $ I=[0,1] $, we sum over $ d $ and use the definition of $ X^{s,\theta} $ to obtain the $ L^{\infty}H^{s} \times L^{\infty}H^{s-1} $ bound in \eqref{energy:est:X}. Now we prove that the map $$ t \in [0,1] \to H^s \times H^{s-1} \ni v[t] $$ is continuous when $ v \in X^{s,\theta} $. By the fundamental theorem of calculus and Cauchy-Schwarz in $ t $, by summing modulations we have $$ \vn{v_{\lambda}(t+h)-v_{\lambda}(t)}_{H^s} \leq C_\lambda \vm{h}^{\frac{1}{2}} \vn{v_{\lambda} }_{X_{\lambda}^{s,\theta}} $$ for any $ t,t+h \in [0,1] $.
Let $ \varepsilon >0 $ and choose $ \lambda_{\varepsilon} $ such that $$ \sum_{\lambda > \lambda_{\varepsilon}} \vn{v_{\lambda}}_{L^{\infty} H^s}^2 \leq \varepsilon^2. $$ Then \begin{align} \vn{v(t+h)-v(t)}_{H^s}^2 & \lesssim \sum_{\lambda \leq \lambda_{\varepsilon}} \vn{v_{\lambda}(t+h)-v_{\lambda}(t)}_{H^s}^2 +\varepsilon^2 \\ & \leq \vm{h}^{\frac{1}{2}} C_{\varepsilon} \vn{v }_{X^{s,\theta}} + \varepsilon^2 \lesssim \varepsilon^2 \end{align} for $ h $ small enough. The same argument applies to $ \partial_t v $. {\bf \ref{P:timeloc}} By H\"older's inequality and \eqref{energy:X} we have $$ \lambda d_2^{\frac{1}{2}} \vn{\chi_{d_2} u}_{L^2} \lesssim \lambda d_2^{\frac{1}{2}} \vn{\chi_{d_2}}_{L^2_t} \vn{u}_{L^{\infty} L^2} \lesssim \vn{u}_{X_{\lambda,d_1}^{1,\frac{1}{2}}}. $$ For the term $ \Box_{g_{<\sqrt{\lambda}}} ( \chi_{d_2} u) $ we consider \begin{align*} & d_2^{-\frac{1}{2}} \vn{ \chi_{d_2} \Box_{g_{<\sqrt{\lambda}}} u}_{L^2} \lesssim d_1^{-\frac{1}{2}} \vn{ \Box_{g_{<\sqrt{\lambda}}}u}_{L^2} \lesssim \vn{u}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \\ & d_2^{-\frac{1}{2}} \vn{ \partial_t^2 \chi_{d_2} u}_{L^2} \lesssim d_2^{-\frac{1}{2}} \vn{\partial_t^2 \chi_{d_2} }_{L^2_t} \vn{u}_{L^{\infty} L^2} \lesssim \vn{u}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \\ & d_2^{-\frac{1}{2}} \vn{ \partial_t \chi_{d_2} \partial_{t,x} u}_{L^2} \lesssim d_2^{-\frac{1}{2}} \vn{\partial_t \chi_{d_2} }_{L^2_t} \vn{\partial_{t,x} u}_{L^{\infty} L^2} \lesssim \vn{u}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \end{align*} {\bf \ref{P:ext}} First we extend $ u $ by solving $ \Box_{g_{<\sqrt{\lambda}}} u =0$ outside $ [0,1] $, and then we define $ \tilde{u} = \chi \tilde{P}_{\lambda} u $, where $\chi(t)$ is a smooth time cutoff supported in $(-d^{-1}, 1+d^{-1})$ and equal to $ 1 $ on $ [0,1] $.
For $t \in [1, 1+d^{-1}]$, \begin{align*} \| \nabla_{t, x} u(t)\|_{L^2_x} &\lesssim \| \nabla_{t,x} u(t - d^{-1})\|_{L^2} + \| \Box_{g_{<\sqrt{\lambda}}} u \|_{L^1 L^2 ( [1-d^{-1}, 1+d^{-1}] \times \mathbb{R}^n)}\\ &\lesssim \| \nabla_{t,x} u (t-d^{-1}) \|_{L^2_x} + d^{-1/2} \| \Box_{g_{<\sqrt{\lambda}}} u \|_{L^2}. \end{align*} Therefore \begin{align*} \| \nabla_{t,x} \tilde{u} \|_{L^2( [1, 1+d^{-1}] \times \mathbb{R}^n) } \lesssim \| \nabla_{t,x} u\|_{L^2[0,1]} + d^{-1} \| \Box_{g_{<\sqrt{\lambda}}} u \|_{L^2[0,1]}. \end{align*} We repeat this analysis on the interval $[-d^{-1}, 0]$. For the term $ \Box_{g_{<\sqrt{\lambda}}} \tilde{u} $ outside of $ [0,1] $ we write $$ \Box_{g_{<\sqrt{\lambda}}} \chi \tilde{P}_{\lambda} = \chi''(t) \tilde{P}_{\lambda} + 2 \chi'(t) \partial_{t,x} \tilde{P}_{\lambda} + \chi [ \Box_{g_{<\sqrt{\lambda}}}, \tilde{P}_{\lambda}] + \chi \tilde{P}_{\lambda} \Box_{g_{<\sqrt{\lambda}}}. $$ Then we use the already established $ L^2 $ bounds together with \eqref{com:est}. {\bf \ref{P:dec}} See \cite[(53)]{geba2008gradient}, which is an easy commutation argument (see also Remark \ref{rmk:tildeX:loc}). {\bf \ref{P:scaling}} The equivalence follows from a change of variables. The only issue is that the scaling does not commute with taking the $ P_{<\sqrt{\lambda}} $ localization of $ g $. Instead we have $ (g_{<\sqrt{\lambda \delta^{-1}}})^{\delta}=(g^{\delta})_{ <\sqrt{\delta \lambda} } $. It remains to estimate $$ \lambda^{-1} d^{-1} \vn{\Box_{g_{[\sqrt{\lambda},\sqrt{\lambda \delta^{-1}}] }} u }_{L^2 } \lesssim \lambda^{-2} d^{-1} \vn{ \partial^2 g_{[\sqrt{\lambda},\sqrt{\lambda \delta^{-1}}] }}_{L^2 L^{\infty}} \vn{\partial^2 u }_{L^{\infty} L^2 } \lesssim \vn{u }_{X_{\lambda,d}^{0,0}[0,\delta]}. $$ \end{proof} We next drop the frequency localization. One may then ask whether we could also discard the frequency localization in the coefficients.
This is indeed the case for a restricted range of Sobolev indices. Precisely, we have \begin{proposition}\label{p:Xst-global} Let $0 \leq s \leq 2$ and $0 < \theta < 1$. Then $u \in X^{s,\theta}$ iff it admits a decomposition \[ u = \sum_{d \geq 1} u_d \] so that \[ \sum_d d^{2\theta} \| \nabla u_d\|_{L^2 H^{s-1}}^2 + d^{2\theta -2}\|\Box_g u_d\|_{L^2 H^{s-1}}^2 < \infty, \] with an equivalent norm given by \begin{equation} \| u \|_{X^{s,\theta}}^2 = \inf_{u = \sum u_d} \sum_d d^{2\theta} \| \nabla u_d\|_{L^2 H^{s-1}}^2 + d^{2\theta -2}\|\Box_g u_d\|_{L^2 H^{s-1}}^2. \end{equation} \end{proposition} We note that the counterpart of this result for $X^{s-1,\theta-1}$ is also valid. \begin{proof} a) In one direction, given $u_{\lambda,d}$ we define \[ u_d = \sum_{\lambda > d} u_{\lambda,d} \] and prove the appropriate bounds for $u_d$. b) In the opposite direction, we define \[ u_{\lambda,d} = P_\lambda u_d, \qquad d < \lambda \] and \[ u_{\lambda, \lambda} = \sum_{d > \lambda} P_\lambda u_d \] and again prove the appropriate bounds.
\end{proof} Continuing our global description of the $X^{s,\theta}$ spaces, for $s \in [0,2]$ we define the endpoints $X^{s,0}$ and $X^{s,1}$ with norms \[ \| u\|_{X^{s,0}}^2 = \| \nabla u\|_{L^2 H^{s-1}}^2, \] respectively \[ \| u\|_{X^{s,1}}^2 = \| \nabla u\|_{L^2 H^{s-1}}^2 + \| \Box_g u\|_{L^2 H^{s-1}}^2. \] Then we can also describe the full family of $X^{s,\theta}$ spaces as follows: \begin{proposition} For $0 \leq s \leq 2$ and $0 \leq \theta < 1$ we have: \begin{enumerate} \item \label{P:interpolate} (Interpolation) The space $X^{s,\theta}$ can be described by interpolation as \begin{equation} \label{interpolate} X^{s,\theta} = [X^{s,0},X^{s,1}]_\theta \end{equation} \item \label{P:duality} (Duality) For $ \frac{1}{2} < \theta < 1 $ we have \begin{equation} \label{duality} X^{s-1,\theta-1}=(X^{1-s,1-\theta}+L^2 H^{2-s-\theta} )' . \end{equation} \end{enumerate} \end{proposition} We remark that the first part of the proposition in particular shows that the spaces $X^{s,\theta}$ defined in \cite{Tataru1996} and in \cite{geba2005disp} coincide. \begin{proof} {\bf (\ref{P:interpolate})} Examining the equivalent definition of the $X^{s,\theta}$ spaces in Proposition~\ref{p:Xst-global}, one immediately sees that it is nothing but the real $(\theta,2)$ interpolation space between $X^{s,0}$ and $X^{s,1}$ constructed using the $J$-method. Since these are Hilbert spaces, the outcome coincides with the one provided by complex interpolation. {\bf (\ref{P:duality})} This is proved in \cite[Lemma 2.13]{geba2008gradient}. \end{proof} It will be technically convenient at some junctures to view functions in $X^{s, \theta}[I]$ as restrictions of globally defined functions. For this purpose we assume that the metric $g$ is extended to a global Lorentzian metric on $\mathbb{R}^{1+n}$, which can be taken constant outside a compact time interval.
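Returning to the interpolation statement \eqref{interpolate}, the identification with the $J$-method can be spelled out; the following routine computation assumes the usual convention $ J(t,a) = \max( \vn{a}_{A_0}, t \vn{a}_{A_1} ) $ for the couple $ A_0 = X^{s,0} $, $ A_1 = X^{s,1} $, with the parameter $ t = d^{-1} $ over dyadic $ d \geq 1 $.

```latex
% With J(t,a) = max( ||a||_{A_0}, t ||a||_{A_1} ) and t = d^{-1}, d >= 1 dyadic:
\[
  d^{2\theta} J(d^{-1}, u_d)^2
  \simeq d^{2\theta} \Big( \| \nabla u_d \|_{L^2 H^{s-1}}^2
     + d^{-2} \big( \| \nabla u_d \|_{L^2 H^{s-1}}^2
     + \| \Box_g u_d \|_{L^2 H^{s-1}}^2 \big) \Big)
  \simeq d^{2\theta} \| \nabla u_d \|_{L^2 H^{s-1}}^2
     + d^{2\theta-2} \| \Box_g u_d \|_{L^2 H^{s-1}}^2 ,
\]
% using d >= 1 to absorb the middle term. Taking the infimum over decompositions
% u = \sum_d u_d recovers the equivalent norm of Proposition \ref{p:Xst-global},
% i.e. the J-method norm of the real interpolation space (A_0, A_1)_{\theta, 2}.
```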
\begin{corollary} In the definition of $X^{s, \theta}_{\lambda}$ we may assume, in the decomposition $u_\lambda = \sum u_{\lambda, d}$, that $\operatorname{supp} u_{\lambda, d} \subset (-d^{-1}, 1+d^{-1}) \times \mathbb{R}^n$ and take all spacetime norms over $\mathbb{R}^{1+n}$. The same holds for other intervals. \end{corollary} We also have \begin{lemma} \label{v00:en:est} Let $ I=[0,\delta] $ and $ D=\delta^{-1} $. If $ v[0]=(0,0) $, then \begin{equation} \label{X:est:idzero} \vn{\tilde{P}_{\lambda} v }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } \lesssim \lambda^{-1} D^{-\frac{1}{2}} \vn{ \Box_{g_{<\sqrt{\lambda}} } \tilde{P}_{\lambda} v }_{ L^2[I] }. \end{equation} \end{lemma} \begin{proof} Since $ v[0]=(0,0) $, the estimate follows from energy estimates. \end{proof} \ \subsection{Linear mapping properties} Consider the linear problem \begin{equation} \label{inhom:box} \Box_g v =f, \qquad v[0]=(v_0,v_1). \end{equation} \begin{lemma} Let $ 0<s <3 $ and $ 0 < \theta \leq 1 $. Then the linear equation \eqref{inhom:box} is well-posed in $ H^s \times H^{s-1} $ and \begin{equation} \label{ref:box:2} \vn{v}_{X^{s,\theta}} \lesssim \vn{(v_0,v_1)}_{H^s \times H^{s-1}} + \vn{f}_{L^2 H^{s-1}}. \end{equation} \end{lemma} \begin{proof} See \cite[Lemma 2.11]{geba2008gradient}. \end{proof} We want to extend this bound to the Laplace-Beltrami operator $ \tilde{\Box}_g $ given by \eqref{LB:op}. \begin{lemma} \label{lin:map:prop} Let $ 1 \leq s \leq 2 $ and $ \frac{1}{2} < \theta < 1 $. The solution of the linear equation \begin{equation} \label{inhom:box:tilde} \tilde{\Box}_g u =F, \qquad u[0]=(u_0,u_1) \end{equation} satisfies \begin{equation} \label{lin:map:prop:eq} \vn{u}_{X^{s,\theta}} \lesssim \vn{(u_0,u_1)}_{H^s \times H^{s-1}} + \vn{F}_{X^{s-1,\theta-1}}.
\end{equation} \end{lemma} \begin{proof} We first recall from \cite[Lemma 2.12]{geba2008gradient} that \begin{equation} \label{ref:box:1} \Box_g^{-1} : X^{s-1,\theta-1} \to X^{s,\theta}, \end{equation} where $ \Box_g^{-1} f $ is the solution $ v $ of the inhomogeneous equation \eqref{inhom:box} with $ v[0]=(0,0) $. Denote by $ S(v_0,v_1) $ the solution of \eqref{inhom:box} with $ f=0 $. We write $$ \tilde{\Box}_g - \Box_g = h^{\alpha} \partial_{\alpha}, $$ where \begin{equation} \label{small:h:Linfty} \vn{h^{\alpha}}_{L^2 L^{\infty}} + \vn{\nabla_x h^{\alpha}}_{L^2 L^{\infty}} \ll 1, \end{equation} and we will treat $ h^{\alpha} \partial_{\alpha} u $ as a perturbation. A function $ u $ solves \eqref{inhom:box:tilde} if $ u=\Phi(u) $, where $ \Phi(v) \vcentcolon= S(u_0,u_1) + \Box_g^{-1} (F - h^{\alpha} \partial_{\alpha} v) $. To show that $ \Phi : X^{s,\theta} \to X^{s,\theta} $ and that $ \Phi $ is a contraction on $ X^{s,\theta} $, in view of \eqref{ref:box:1}, \eqref{ref:box:2} and \eqref{energy:est:X} it remains to check that \begin{equation} \label{contraction} \vn{h^{\alpha} \partial_{\alpha} v}_{L^2 H^{s-1}} \ll \vn{\nabla_{t,x} v}_{L^{\infty} H^{s-1}} \lesssim \vn{ v}_{X^{s,\theta} }. \end{equation} Using \eqref{small:h:Linfty} we obtain this bound first for $ s=1 $ and $ s=2 $, and by interpolation also for $ 1 \leq s \leq 2 $. \end{proof} Finally, we recall \begin{lemma} \label{lin:map:Box} For $ 0<s<3 $ and $0 \leq \theta \leq 1$, the operator $ \Box_g $ maps $$ \Box_g : X^{s,\theta} \to X^{s-1,\theta-1}. $$ \end{lemma} \begin{proof} See \cite[Proposition 3.1]{geba2009remark}. \end{proof} \ \begin{remark}[Higher regularity] \label{X:high:reg} Let $ k \geq 3 $. Assuming control of $ k $ derivatives of the metric $g$, we obtain the previous properties for a wider range of $ s $.
Thus, \cite[Lemma 2.9]{geba2008gradient} will hold for $ -k+2<s<k+1 $, which implies that \eqref{ref:box:2}, \eqref{ref:box:1} and Lemma \ref{lin:map:Box} extend to this range of $ s$. Assuming $ \vn{\partial^k g}_{L^2 L^{\infty}}\ll 1 $ (since $ g $ is a rescaled metric), we extend \eqref{lin:map:prop:eq} to $ s \in [1,k] $. We also refer to \cite[Theorem 4.7]{smith1998parametrix} for the fact that the parametrix representation with $ H^s \times H^{s-1} $ bounds extends to $ s \in [1,k] $ under the assumption $ g \in L^{\infty} H^{k-1} $. \end{remark} \ \subsection{$ \tilde{X}^{s,\theta} $ spaces} \ In the proof of the Moser estimate \eqref{moser:est} it will be useful to have the following modification of the $ X_{\lambda}^{s,\theta} $ norms. \begin{definition} \label{tld:X} Let $ s \in \mathbb{R} $, $ \theta \in (0,1) $ and dyadic $ \lambda \geq 1 $. \begin{enumerate} \item The norm of $ \tilde{X}_{\lambda,\lambda}^{s,\theta} $ is defined for $ \lambda $-frequency localized functions $ u $ by $$ \vn{u}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}}= \lambda^{s+\theta} \vn{ \nabla_{t,x} u}_{L^2}+ \lambda^{s+\theta-\frac{1}{2}} \vn{\nabla_{t,x} u }_{L^{\infty} L^2}. $$ For $ 1 \leq d < \lambda $ the norm of $ \tilde{X}_{\lambda,d}^{s,\theta} $ is defined by $ \vn{u}_{\tilde{X}_{\lambda,d}^{s,\theta}} = \vn{u}_{X_{\lambda,d}^{s,\theta}} $.
\item The norm of $ \tilde{X}^{s,\theta}_{\lambda} $ is defined for $ \lambda $-frequency localized functions $ u $ by $$ \vn{u}_{ \tilde{X}^{s,\theta}_{\lambda} }^2=\inf \ \Big\{ \ \sum_{d=1}^{\lambda/2} \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2 +\vn{u_{\lambda,\lambda}}_{\tilde{X}^{s,\theta}_{\lambda,\lambda}}^2 \quad ; \quad u= \sum_{d=1}^{\lambda/2} u_{\lambda,d}+u_{\lambda,\lambda} \ \Big\}. $$ \item A function $ u \in L^2([0,1],H^s) $ is said to be in $ \tilde{X}^{s,\theta} $ if it has finite norm defined by $$ \vn{u}_{ \tilde{X}^{s,\theta} }^2=\inf \ \Big\{ \ \sum_{\lambda = 1}^{\infty} \vn{u_{\lambda}}_{ \tilde{X}^{s,\theta}_{\lambda} }^2 \quad ; \quad u= \sum_{\lambda = 1}^{\infty} P_{\lambda} u_{\lambda} \ \Big\}, $$ where the $ u_{\lambda} $ can be assumed without loss of generality to be $ \lambda $-frequency localized. \end{enumerate} \end{definition} Clearly $ X^{s,\theta} \subset \tilde{X}^{s,\theta} $. The only difference between the two spaces occurs at high modulations, where we have discarded the terms $ \lambda^{s+\theta-2} \vn{\Box_{g_{<\sqrt{\lambda}} } u_{\lambda,\lambda}}_{L^2} $, making the $ \tilde{X}^{s,\theta} $ norm smaller. This will be useful in the proof of the Moser estimate \eqref{moser:est}; see Remark \ref{Rk:tildeX:Moser}. We can recover the $ X^{s,\theta} $ bound if we control high modulations through $ \Box_g $: \begin{lemma} \label{lemma:XXtildeBox} If $ f \in \tilde{X}^{s,\theta} $ and $ \Box_g f \in L^2 H^{s+\theta-2} $, then $ f \in X^{s,\theta} $ and $$ \vn{f}_{ X^{s,\theta}} \lesssim \vn{f}_{ \tilde{X}^{s,\theta}} + \vn{ \Box_g f }_{L^2 H^{s+\theta-2}}. $$ \end{lemma} \begin{proof} The proof is straightforward using Definitions \ref{def:X:1}, \ref{def:X:2}, \ref{tld:X} and the properties from Section \ref{sec:Xspaces}. \end{proof} \ \subsection{Half-wave norms} For microlocal analysis purposes, we would like an equivalent definition of the $X^{s, \theta}_{\lambda, d}$ norms in terms of half-waves.
We factor the symbol of $\Box_{g_{<\sqrt{\lambda}}}$ as \[\tau^2 - 2 g_{<\sqrt{\lambda}}^{0j} \tau \xi_j - g^{ab}_{<\sqrt{\lambda}} \xi_a \xi_b = (\tau + a^+)(\tau + a^-),\] where \[a^\pm = - g_{<\sqrt{\lambda}}^{0j}\xi_j \mp \sqrt{ (g_{<\sqrt{\lambda}}^{0j} \xi_j)^2 + g^{ab}_{<\sqrt{\lambda}} \xi_a \xi_b}= - g^{0j}_{<\sqrt{\lambda}} \xi_j \mp a .\] Write \[A^\pm = - g_{<\sqrt{\lambda}}^{0j}(t,x) D_j \mp a(t,x, D).\] Observe that while $a$ is not exactly localized at frequencies $< \sqrt{\lambda}$, one does have \begin{align*} P_{\ge\lambda} (D_x) a \in \lambda^{-N} S^1 \text{ for any } N, \end{align*} where $S^1$ denotes the classical symbol class. This decay will be more than adequate for our purposes. \begin{lemma} \label{l:Xsb-halfwave} In the definition of $X^{s, \theta}_{\lambda, d}$, $\|\Box_{g_{<\sqrt{\lambda}}} P_\lambda u\|_{L^2}$ may be replaced by $\| (D_t+A^-)(D_t + A^+) P_\lambda u\|_{L^2}$ or $\| (D_t + A^+)(D_t+A^-) P_\lambda u\|_{L^2}$.
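As a sanity check (our illustration, not part of the original argument), in the flat case the factorization above reduces to the classical half-wave decomposition:

```latex
% Flat-case check (illustration): take g^{0j} = 0 and g^{ab} = \delta^{ab},
% so the symbol above becomes \tau^2 - |\xi|^2. Then
a^{\pm} = \mp\sqrt{|\xi|^2} = \mp |\xi|,
\qquad
(\tau + a^{+})(\tau + a^{-}) = (\tau - |\xi|)(\tau + |\xi|) = \tau^{2} - |\xi|^{2},
% and A^{\pm} = \mp |D_x|, so D_t + A^{\pm} are the usual flat half-wave
% operators, with propagators e^{\pm i t |D_x|}.
```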
\end{lemma} \begin{proof} We have \begin{align*} &(D_t+A^-)(D_t +A^+)P_\lambda - (D_t+A^-)P_{>\lambda/8} (D_t + A^+) P_{>\lambda/8} P_\lambda \\ &= (D_t+A^-)P_{<\lambda/8} (D_t +A^+) P_\lambda = (D_t+A^-) P_{<\lambda/8} A^+ P_\lambda\\ &= P_{<\lambda/8} (D_tA^+) P_\lambda + P_{<\lambda/8} A^+ P_\lambda D_t + A^- P_{<\lambda/8} A^+ P_\lambda. \end{align*} In view of the off-diagonal estimates \begin{align*} &\|(P_{< \lambda/8} + P_{>8\lambda}) A P_{\lambda} \|_{L^2 \to L^2} = \| (P_{< \lambda / 8} + P_{>8\lambda})A_{> \lambda/8} P_{\lambda} \|_{L^2 \to L^2} = O(\lambda^{-\infty}), \end{align*} where $A_{>\lambda/8} = (P_{>\lambda/8} (D_x) a) (t, x, D)$, and the pseudo-differential calculus on $\mathbb{R}^{1+n}$, we have \begin{align*} &\| (\Box_{g_{<\sqrt{\lambda}}} - (D_t+A^-)(D_t + A^+) ) P_\lambda u\|_{L^2} \\&\lesssim \| (\Box_{g_{<\sqrt{\lambda}}} - (D_t+A^-)P_{>\lambda/8} (D_t + A^+) P_{>\lambda/8}) P_\lambda u\|_{L^2} + O(\lambda^{-\infty})\|\nabla_{t,x} u\|_{L^2}\\ &\lesssim \| \nabla_{t,x} u\|_{L^2}. \end{align*} \end{proof} Next we show how to reduce the study of the small modulation spaces $ X^{s,\theta}_{\lambda,1} $ to half-wave norms. \begin{proposition} \label{p:half-wave} Suppose $u: \mathbb{R} \times \mathbb{R}^{n} \to \mathbb{R}$ satisfies $u = P_\lambda(D_x) u$, and write \begin{align*} u = P_{>-\lambda/64}(D_t) u + P_{<-\lambda/64} (D_t) u := u^{+} + u^{-}. \end{align*} Then \begin{align*} \|\nabla_{t, x} u^{\pm} \|_{L^2} + \lambda \| (D_t + A^{\pm}) u^{\pm} \|_{L^2} + \|\Box_{g_{<\sqrt{\lambda}}} u^\pm\|_{L^2} \lesssim \| \nabla_{t,x} u\|_{L^2} + \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2}. \end{align*} \end{proposition} \begin{proof} We write $S^+(D_t) = P_{>-\lambda/64}(D_t)$ and $S^{-}(D_t) = P_{<-\lambda/64}(D_t)$.
Let \[ T_\lambda = \tilde{S}_{>-\lambda/16}(D_t) \widetilde{P}_{\lambda}(D_x). \] Then \begin{align*} \| (1-T_\lambda) (D_t + A^+) S^+(D_t) P_\lambda (D_x) \|_{L^2 \to L^2} = \| (1-T_\lambda) A^+_{>\lambda/64} S^+(D_t) P_{\lambda}(D_x) \|_{L^2 \to L^2} = O(\lambda^{-\infty}), \end{align*} where $A^+_{>\lambda/64}$ is mollified in $(t,x)$. Now on the support of $T_\lambda$, the symbol $\tau + a^-$ is elliptic and belongs to $ S^1_{1, \frac{3}{4}}(\mathbb{R}^{1+n})$, hence there is a parametrix $Q \in OPS^{-1}_{1, \frac{3}{4}}(\mathbb{R}^{1+n})$ such that $Q (D_t+A^-) + R= T_\lambda$ with $R \in S^{-\infty}(\mathbb{R}^{1+n})$. Write \begin{align*} (D_t+A^+) u^+ &= Q (D_t +A^-) T_\lambda (D_t + A^+)u^+ + R T_\lambda (D_t + A^+) u^+ + (1-T_\lambda) (D_t + A^+) u^+\\ &= Q \tilde{T}_\lambda (D_t + A^-) T_\lambda (D_t + A^+) u^+ \\ &+ Q (1-\tilde{T}_\lambda) (D_t + A^-) T_\lambda (D_t + A^+) u^+ + RT_\lambda (D_t + A^+) u^+\\ &+ (1-T_\lambda) (D_t + A^+) u^+. \end{align*} Therefore \begin{align*} \| (D_t + A^+) u^+\|_{L^2} &\lesssim \lambda^{-1} \| (D_t+A^-) T_\lambda (D_t + A^+) u^+ \|_{L^2} + \lambda^{-N} \| (D_t + A)T_\lambda (D_t +A^+) u^+\|_{L^2} \\ &+ \lambda^{-N} \| \nabla_{t,x} u \|_{L^2}. \end{align*} The main term is \begin{align} \label{e:half-wave1} \begin{split} (D_t+A^-) T_\lambda (D_t + A^+) u^+ &= T_\lambda (D_t + A^-)(D_t+A^+) T'_\lambda u \\ &+ [D_t + A^-, T_\lambda] T'_\lambda (D_t + A^+) u + [D_t + A^-, T_\lambda] [(D_t + A^+), T'_\lambda] u \end{split} \end{align} where $T_\lambda' = P_{>-\lambda/64}(D_t)\tilde{S}_{\lambda}(D_x)$. The second and third terms are bounded by $ \|\nabla_{t, x}u\|_{L^2}$ provided that we verify the commutator estimate \begin{align} \nonumber [A, T_\lambda] : L^2 \to L^2, \quad A = A^\pm.
\end{align} To see this, we use a spherical harmonics expansion \begin{align*} a = \sum_{l} b_l(t, x) h_l (\wht{\xi})|\xi|, \quad \wht{\xi} = \xi/|\xi|, \end{align*} where $b_l(t, x) = \langle a, h_l \rangle_{L^2(S^{n-1})}$ satisfies the same estimates as $a$ with an additional factor $c_N\langle l \rangle^{-N}$; for instance \begin{align*} P_{\ge\lambda}(D_t, D_x) b_l = \langle P_{\ge\lambda}(D_t, D_x) a, h_l \rangle_{L^2( S^{n-1}) } \le c_N \lambda^{-N} \langle l\rangle^{-N} . \end{align*} Thus we get \begin{align*} [A, T_\lambda] = \sum_l [B_l, P_{>-\lambda/16}(D_t) \tilde{S}_\lambda(D_x) ] h_l (D_x) |D_x| P_{<2\lambda}(D_x). \end{align*} By a standard kernel estimate of the form \begin{align*} |[b_l, P_{>-\lambda/8}(D_t) \tilde{S}_\lambda (D_x)] ((t,x), (s,y))| \lesssim \langle l \rangle^{-N} \| \nabla g \|_{L^\infty} \lambda^{-1} \lambda^{n+1} \langle ( \lambda(t-s), \lambda(x-y) ) \rangle^{-N}, \end{align*} Schur's test yields \begin{align*} \| [b_l, P_{>-\lambda/8}(D_t) \tilde{S}_\lambda(D_x) ] \|_{L^2 \to L^2} \lesssim \lambda^{-1} \langle l \rangle^{-N}, \end{align*} and the claim follows. For the first term on the right side of~\eqref{e:half-wave1}, we use the proof of the previous lemma and similar commutator estimates as above to obtain \begin{align*} &\| (D_t+A^-) (D_t + A^+) T'_\lambda u\|_{L^2} \\ &\le \| \Box_{g_{<\sqrt{\lambda}}} T'_\lambda u\|_{L^2} + \| [\Box_{g_{<\sqrt{\lambda}}} - (D_t+A^-)(D_t+A^+)]T'_\lambda u\|_{L^2}\\ &\lesssim \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2} + \| [g^{0j}_{<\sqrt{\lambda}}, T'_\lambda] \partial_t \partial_j u\|_{L^2} + \| [g^{ab}_{<\sqrt{\lambda}}, T'_\lambda] \partial_a \partial_b u\|_{L^2} + \| \nabla_{t,x} u \|_{L^2}\\ &\lesssim \| \nabla_{t,x} u\|_{L^2} + \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2}.
\end{align*} This last estimate also yields the desired bound for $\|\Box_{g_{<\sqrt{\lambda}}} u^+\|_{L^2}$, and altogether we find that \begin{align*} \| (D_t+A^+) u^+\|_{L^2} \lesssim \lambda^{-1}\|\nabla_{t,x}u\|_{L^2} + \lambda^{-1} \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2}, \end{align*} as needed. \end{proof} If $u$ is compactly supported in time, we may smoothly truncate the half-waves in time to obtain: \begin{corollary} \label{c:half-wave} Suppose $u = P_\lambda(D_x) u$ has spatial Fourier transform supported in $[\lambda/2, 2\lambda]$ and is supported in $[-c, c] \times \mathbb{R}^n$. Then there exists a decomposition \begin{align*} u = u^+ + u^- \end{align*} where $u^\pm$ are supported in $[-2c, 2c] \times \mathbb{R}^n$ and satisfy the estimates of the previous proposition. \end{corollary} \section{Null frames} \ \label{s:nullframes} \subsection{Null foliations} \label{s:nullfoliations} In this section we construct the null ``hyperplanes'' along which wave packets propagate. Factor the principal symbol of $\Box_g$ as $(\tau+a^+)(\tau+a^-)$. For each direction $\theta \in S^{1}$ and sign $\pm$, we construct optical functions $\Phi^{\pm}_\theta$ as solutions to the eikonal equation \begin{align*} \partial_t \Phi_\theta^\pm + a^{\pm} (t, x, \partial_x \Phi_\theta^\pm ) = 0, \quad \Phi^\pm_\theta(0, x) = \langle x, \theta \rangle. \end{align*} By the standard theory of Hamilton-Jacobi equations, for small $\eta$ these admit classical solutions on the spacetime slab $[-10, 10] \times \mathbb{R}^2$. Recall that $\Phi$ is constructed via the Hamilton flow for $a^\pm$ defined by \begin{align} \label{e:HamiltonODE} \dot{x} = a^\pm_\xi(t,x,\xi), \quad \dot{\xi} = -a^\pm_x(t,x,\xi). \end{align} Solutions to this system are called bicharacteristics. Write $t \mapsto (x^\pm_t, \xi^\pm_t)$ for the solution initialized at $(x_0, \xi_0)$.
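Before turning to the curved case, it may help to record the flat model (our illustration, under the flat-case normalization $a^\pm = \mp|\xi|$; not part of the original argument):

```latex
% Flat-case illustration: with a^{\pm}(t,x,\xi) = \mp |\xi| and data
% \Phi^{\pm}_\theta(0,x) = \langle x, \theta \rangle, the ansatz
% \partial_x \Phi^{\pm}_\theta \equiv \theta (with |\theta| = 1) solves the eikonal equation:
\partial_t \Phi^{\pm}_\theta \mp 1 = 0
\quad \Longrightarrow \quad
\Phi^{\pm}_\theta(t,x) = \langle x, \theta \rangle \pm t ,
% so the level sets \{ \Phi^{\pm}_\theta = h \} are genuine null hyperplanes, and the
% bicharacteristics \dot{x} = \mp \xi/|\xi|, \ \dot{\xi} = 0 are straight light rays.
```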
The map $(x_0, \xi_0) \mapsto (x^{\pm}_t, \xi^{\pm}_t)$ is 1-homogeneous in the second variable. Moreover, a routine linearization argument reveals that \begin{lemma} \begin{align} \nonumber \frac{\partial (x^\pm_t, \xi^\pm_t)}{\partial (x_0, \xi_0) } = \left(\begin{array}{cc} I + O(\eta) & B(t)\\ O(\eta) & I + O(\eta) \end{array}\right), \end{align} where the matrix $B(t)$ has norm $O(t)$. \end{lemma} \begin{proof} The linearized system is \begin{align*} \dot{y} &= a_{\xi x} y + a_{\xi \xi} \zeta,\\ \dot{\zeta} &= -a_{xx}y - a_{x \xi} \zeta. \end{align*} The half-wave symbols $a$ inherit the derivative bounds on the metric: \begin{align*} \| \partial_{t,x} \partial^k_\xi a (t, x, \xi)\|_{L_{t,x}^\infty} + \| \partial^2_{t,x} \partial^k_\xi a (t,x, \xi) \|_{L^2_t L^\infty_x } \lesssim \eta \quad \text{for bounded } \xi, \end{align*} hence Gronwall yields the preliminary estimate $| ( y(t), \zeta(t)) | \lesssim |( y(0), \zeta(0) )|$. Consider initial data $y(0) = I$, $\zeta(0) = 0$. Then \begin{align*} |\zeta(t)| \lesssim \| \partial_{x}^2 a\|_{L^2 L^\infty}\sqrt{t} + \int_0^t |\zeta(s)| \, ds, \end{align*} so $|\zeta(t)| \lesssim \eta \sqrt{t}$. Substituting this into the equation for $y$, we obtain \begin{align*} |y(t) - I| \lesssim \eta. \end{align*} Now consider initial data $y(0) = 0$, $\zeta(0) = I$. Then $|y(t)| \lesssim t$, and \begin{align*} |\zeta (t)-I| \lesssim \int_0^t |a_{xx}| s \, ds + \int_0^t |a_{x\xi} (s) \zeta(s)| \, ds \lesssim \eta. \end{align*} \end{proof} Hence we may parametrize the graph of the $\pm$ flow map at time $t$ by $(x^\pm_t, \xi_0) \mapsto (x_0, \xi^\pm_t)$, via the diffeomorphisms \begin{align*} (x^\pm_t, \xi_0) \mapsto (x_0, \xi_0) \mapsto (x^\pm_t, \xi^\pm_t) \mapsto (x_0, \xi^\pm_t).
\end{align*} A short computation then yields \begin{align} \label{e:jacobian_alt_param} \frac{\partial (x_0, \xi^\pm_t)}{\partial (x^\pm_t, \xi_0) } = \frac{\partial (x_0, \xi^\pm_t)}{ \partial (x^\pm_t, \xi^\pm_t)} \cdot \frac{\partial (x^\pm_t, \xi^\pm_t)}{\partial (x_0, \xi_0)} \cdot \frac{\partial (x_0, \xi_0)}{ \partial (x^\pm_t, \xi_0)} =\left(\begin{array}{cc} I + O(\eta) & B(t)\\ O(\eta) & I + O(\eta) \end{array}\right). \end{align} We define $\xi^\pm_\theta(t, x)$ by the relation \begin{equation} \label{e:fourier-variable} \xi^\pm_{\xi_0}(t, x^\pm_t) := \xi^\pm_t (x_0, \xi_0), \end{equation} and recall that the method of characteristics construction gives \begin{equation} \label{e:optical-construction} \partial_x \Phi_\theta^\pm (t, x) = \xi^\pm_\theta (t, x). \end{equation} As $ g^{\alpha \beta} \partial_\alpha \Phi_\theta^{\pm}\, \partial_\beta \Phi_\theta^\pm = 0$, we obtain a foliation $\Lambda_\theta^\pm$ of $[-10,10]\times \mathbb{R}^2$ for each $\theta$ by the null ``hyperplanes'' \begin{equation} \label{e:foliation} \Lambda_{\theta, h}^\pm := \{ \Phi^\pm_{\theta} = h\}. \end{equation} The regularity of these null surfaces is easy to compute: \begin{lemma} \label{l:optical_bounds} The functions $\Phi_\theta^\pm$ have regularity $\partial^2_{t,x} \Phi_\theta^\pm = O(\eta)$. \end{lemma} \begin{proof} To simplify notation, we fix $\theta$ and focus on the $+$ case for the remainder of this section, writing $a = a^+$ and $\Phi = \Phi^+_\theta$. In this notation, $\Phi$ solves \begin{align*} \partial_t \Phi + a(t, x, \partial_x \Phi) = 0, \quad \Phi(0, x) = \langle x, \theta \rangle. \end{align*} Differentiating the identity~\eqref{e:optical-construction} gives $\partial_x^2 \Phi = \partial_x \xi_\theta = O(\eta)$.
The estimates involving time derivatives now follow by differentiating the equation: \begin{align*} &\partial_t\partial_x \Phi + a_x + a_\xi \partial^2_x \Phi = 0, \ \partial_t \partial_x \Phi(0, x) = 0 \Rightarrow \partial_t \partial_x \Phi = O(\eta),\\ &\partial_t^2 \Phi + a_t + a_\xi \partial_t \partial_x \Phi = 0, \ \partial^2_t \Phi(0, x) = 0 \Rightarrow \partial^2_t\Phi = O(\eta). \end{align*} \end{proof} \begin{lemma}[Separation between null planes] $\operatorname{dist}(\Lambda_{h_1, \theta}, \Lambda_{h_2, \theta}) \sim |h_1 - h_2|$. More precisely, there exist constants $c_1, c_2 >0$ such that for each $(t, x) \in \Lambda_{h_1}$, we have \begin{align*} c_1|h_1-h_2| \le d ((t,x), \Lambda_{h_2}) \le c_2|h_1 - h_2|, \end{align*} where $d$ denotes Euclidean distance measured in the time slice $\{t\} \times \mathbb{R}^2$. \end{lemma} \begin{proof} Without loss of generality assume $h_2 > h_1$. By the bounds on $\partial^2 \Phi$, we have $|\partial_x \Phi| = 1 + O(\eta)$. The Euclidean gradient flow $\dot{\gamma} = \tfrac{ \nabla_x \Phi }{ | \nabla_x \Phi|}$ satisfies \begin{align*} \Phi(t, \gamma(s) ) - \Phi (t, \gamma(0) ) = \int_0^s |\nabla_x \Phi (t, \gamma(\tau) )| \, d \tau \in [C_1 s, C_2s] \end{align*} for absolute constants $C_1, C_2$. Thus the Euclidean distance to $\Lambda_{h_2}$ is at most $|h_1-h_2|/C_1$. If $\sigma$ is any other unit speed curve with $\sigma(0) = (t, x)$, then \begin{align*} |\Phi(t, \sigma(s)) - \Phi(t, \sigma(0))| \le \int_0^s |\nabla_x \Phi (t, \sigma(\tau) ) | \, d\tau \le C_2 s, \end{align*} so the distance is at least $|h_1-h_2|/C_2$. \end{proof} The next two lemmas compare the bicharacteristics and null foliations for mollified and unmollified metrics. \begin{lemma}{\cite[Prop.
4.3]{geba2005disp}} \label{l:freq-loc-flow} If $\mathbb{R}^n \times S^{n-1} \ni (x, \xi) \mapsto (x_{t}, \xi_t), (x_{t, \lambda}, \xi_{t, \lambda})$ are the $+$ bicharacteristics for the metrics $g$ and $g_{<\sqrt{\lambda}}$ with the same initial data, then \begin{align*} |x_{t, \lambda} - x_t| \lesssim \lambda^{-1/2}, \quad |\xi_{t, \lambda} - \xi_t| \lesssim \lambda^{-1/2}. \end{align*} \end{lemma} \begin{lemma}[Foliations for frequency-truncated metrics] \label{l:freq-loc-foliation} Let $\Lambda^{\lambda}_{h}$ be the foliation defined as before but replacing $g$ with $g_{< \lambda^{\frac{1}{2}}}$. Then $\operatorname{dist}(\Lambda_{h, \theta}, \Lambda^\lambda_{h, \theta}) \lesssim \lambda^{-1}$. \end{lemma} \begin{proof} The optical function $\Phi^\lambda$ for $\Lambda^\lambda$ satisfies the corresponding eikonal equation \begin{align*} \partial_t \Phi^\lambda + a_{<\lambda^{\frac{1}{2}}} (t, x, \partial_x \Phi^\lambda) = 0, \quad \Phi^\lambda (0, x) = \langle x, \theta \rangle, \end{align*} where $a_{<\lambda^{\frac{1}{2}}} = a_{<\lambda^{\frac{1}{2}}}^+$ is the $+$ half-wave symbol for the mollified metric. The difference $\psi := \Phi - \Phi^\lambda$ solves the transport equation \begin{align*} &\partial_t \psi + \langle v(t,x), \partial_x \psi \rangle = a_{<\lambda^{1/2}} (t, x, \partial_x\Phi^\lambda) - a (t, x, \partial_x \Phi^\lambda ) = O(\lambda^{-1}), \quad \psi(0, x) = 0,\\ &v = \int_0^1 a_{\xi} (t, x, \partial_x \Phi^\lambda + s \partial_x (\Phi -\Phi^\lambda) ) \, ds, \end{align*} which may be integrated along characteristics to yield $|\psi(t, x)| \lesssim |t| \lambda^{-1}$. \end{proof} Certain estimates will be expressed in spacetime coordinates adapted to the foliation $\Lambda_\theta$. The following construction is modeled on~\cite{smith2005sharp}.
Let $\Phi = \Phi_\theta$ be the optical function for $\Lambda_\theta$. We rotate the coordinates in $x$ by setting \[x_\theta = \langle x, \theta \rangle, \quad x_{\theta}' = \langle x, \theta^\perp \rangle,\] where $\theta^\perp$ is the clockwise rotation of $\theta$ by angle $\pi/2$. Then in the coordinates $(t, x_{\theta}', x_\theta)$, we have $\partial_{x_\theta} \Phi (0, \cdot) = 1$ and $\partial^2 \Phi = O(\eta)$. Hence $\partial_{x_\theta} \Phi = 1 + O(\eta)$ for all $(t, x) \in [-10, 10]\times \mathbb{R}^2$. Provided $\eta$ is sufficiently small, the global implicit function theorem lets us write \begin{align*} \Lambda_{h,\theta} = \{ (t, x_{\theta}', \psi_\theta(t, x_{\theta}', h)) \} \end{align*} for some $C^2$ function $\psi_\theta$. Then $(t, x'_\theta, h)$ also define coordinates on $[-10, 10 ] \times \mathbb{R}^2$ via \begin{align*} (t, x'_\theta, h) \mapsto (t, x'_\theta, \psi_\theta (t, x'_\theta, h) ) \in \Lambda_{h,\theta}. \end{align*} Straightforward computations show that \begin{align} \label{e:foliation_coord_deriv} \begin{split} \frac{ \partial (x_\theta', x_\theta ) }{ \partial (x'_\theta, h)} &= I + O(\eta),\\ \frac{ \partial^2 (x_\theta', x_\theta ) }{ \partial (x'_\theta, h)^2} &= O(\eta). \end{split} \end{align} The variable $h$ parametrizes the leaves of the foliation and is constant along geodesics. \subsection{The null frame} \label{s:nullframe} The future-pointing geodesic generator of $\Lambda_\theta$ is $L = -\nabla \Phi$. We complete this to a null frame by the following standard construction.
Let $E = \langle e(t, x), \partial_x \rangle$ be a vector field tangent to the fixed-time slices of $\Lambda_\theta$, defined concretely in terms of the rotated coordinates $(x_\theta', x_\theta)$ as \begin{align*} E = \frac{\tilde{E} } {\langle \tilde{E}, \tilde{E} \rangle^{1/2} }, \quad \tilde{E}(t, x_\theta', x_\theta) = \partial_{x_\theta'} + (\partial_{x_\theta'} \psi)\partial_{x_\theta}. \end{align*} Then $\langle L, E \rangle = -d_{t,x}\Phi (E) = 0$; in fact one also has \begin{align} \label{e:E-symbol} 0 = d_x \Phi(E) = \langle E, \xi_\theta(t, x) \rangle = \langle e(t, x), \xi_\theta(t,x) \rangle, \end{align} where $\xi_\theta(t, x)$ is the Fourier variable defined by~\eqref{e:fourier-variable}. Finally, let $\underline{L}$ be a null vector field transversal to $\Lambda_\theta$ satisfying $\langle \underline{L}, E \rangle = 0$ and $\langle L, \underline{L} \rangle = -1$. The vector fields $\{L, \underline{L}, E \}$ form a null frame adapted to the foliation $\Lambda_\theta$. \begin{lemma} \label{l:half-fullwave-bichar} We have $L(t,x) = \sigma(t,x) [\partial_t + \langle a_\xi (t, x, \wht{\xi_\theta}(t, x) ), \partial_x \rangle ]$ for some bounded function $\sigma$. \end{lemma} \begin{proof} The multiplicative factor reflects the difference between the null bicharacteristics of the half-wave symbol and the full wave symbol. Write $z = (t, x)$, $\zeta = (\tau, \xi)$, and let $p = g^{\alpha \beta} \zeta_\alpha \zeta_\beta = p^+ p^- = (\tau + a^+)(\tau + a^-)$. Let $\gamma(s) = (z(s), \zeta(s))$ be a null bicharacteristic for $p^+$.
Along this curve one has \begin{align*} \dot{z} = p^+_\zeta = (p^-)^{-1} p_\zeta , \quad \dot{\zeta} = - p^+_z = - (p^-)^{-1} p_z. \end{align*} Therefore \begin{align*} \dot{z}^\alpha g_{\alpha \beta} = 2(p^-)^{-1} g^{\alpha \mu} \zeta_\mu g_{\alpha \beta} = 2(p^-)^{-1} \zeta_\beta = 2(p^-)^{-1} \partial_\beta \Phi (z(s)), \end{align*} so \begin{align*} -2(p^-)^{-1} L = 2(p^-)^{-1} \nabla \Phi = \dot{z} = \partial_t + \langle a_\xi , \partial_x \rangle. \end{align*} Finally, observe that along $\gamma$ we have $\tau = -a^+$, hence $p^- = \tau + a^- = a^- - a^+ = -2 \sqrt{ (g^{0j} \xi_j)^2 + g^{ab} \xi_a \xi_b}$ is bounded above and below. \end{proof} \ \section{Wave packet analysis} \ \label{s:packets} In this section we collect and generalize the salient features of Smith's wave packet parametrix \cite{smith1998parametrix}. More specific implementation details are recalled in the appendix. \subsection{Packets, tubes, and null surfaces} \label{sec:wp1} We begin by clarifying our ``wave packet'' terminology. For $\omega \in S^{n-1}$, let $\gamma_\omega^\pm(t) = (x^\pm(t), \xi^\pm(t) )$ be a bicharacteristic for the half-wave symbol $\tau + a^{\pm}$ with $\xi^\pm(0) = \omega$. Let $\omega(t) := \xi(t) / |\xi(t)|$ be the projection of $\xi$ onto the unit sphere.
\begin{definition} \label{d:packet} A smooth function $u$ at frequency $\lambda$ is a normalized wave packet for the bicharacteristic $\gamma_\omega^{\pm}$ if: \begin{itemize} \item $u$ is localized in phase space along $\gamma^\pm$: there exist constants $C_{N}$ such that \begin{align} \label{e:packet_decay} |u(t, x)| \le C_{N} \lambda^{\frac{3}{4}} \bigl( 1 + \lambda |\langle x - x^\pm (t), \omega^\pm (t) \rangle| + \lambda^{1/2} |(x - x^{\pm}(t)) \wedge \omega^\pm(t) | \bigr)^{-N}, \end{align} and similar estimates hold for $\bigl(\lambda^{-1} \langle \omega^\pm (t), \partial_x \rangle\bigr)^a$ and $\bigl[\lambda^{-1/2} (\omega^{\pm}(t) \wedge \partial_x)\bigr]^b$ applied to $u$. \item $Lu$ satisfies the same estimates with constant $C_N(t)$, where $|C_N(t)| \lesssim_N \| \partial^2 g(t)\|_{L^\infty}$, and $L$ is the null generator for the null foliation $\Lambda_\omega^\pm$ defined in Section~\ref{s:nullfoliations}. \end{itemize} \end{definition} For each frequency $\lambda > 1$, let $\omega$ vary over a maximal collection $\Omega_{\lambda^{-1/2}}$ of unit vectors separated by at least $\lambda^{-\frac{1}{2}}$. To each such $\omega$ we associate a lattice $ \Xi_{\lambda}^{\omega} $ in the physical space $ \mathbb{R}_x^{n} $ on the dual scale, i.e.\ spaced $ \lambda^{-1} $ in the $ \omega $ direction and $ \lambda^{-\frac{1}{2}} $ in the directions in $ \omega^{\perp} $.
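The normalization $\lambda^{3/4}$ in \eqref{e:packet_decay} can be motivated by a heuristic $L^2$ count (our remark, written for the spatial dimension $n = 2$ relevant here):

```latex
% Heuristic for the \lambda^{3/4} normalization in n = 2 (our remark): at fixed
% time, a packet concentrates on a slab of length \lambda^{-1} in the direction
% \omega^{\pm}(t) and \lambda^{-1/2} transversally, of area
\bigl| \bigl\{ x : \lambda |\langle x - x^{\pm}(t), \omega^{\pm}(t) \rangle|
 + \lambda^{1/2} |(x - x^{\pm}(t)) \wedge \omega^{\pm}(t)| \le 10 \bigr\} \bigr|
 \sim \lambda^{-1} \cdot \lambda^{-\frac{1}{2}} = \lambda^{-\frac{3}{2}},
% so \|u(t)\|_{L^2_x}^2 \sim (\lambda^{3/4})^2 \cdot \lambda^{-3/2} = 1:
% normalized packets are approximately L^2-normalized on each time slice.
```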
Let \[\mathcal{T}_\lambda = \{(x, \omega) : \omega \in \Omega_{\lambda^{-1/2}}, \ x \in \Xi_\lambda^\omega\}.\] To each bicharacteristic $\gamma^\pm = (x^\pm(t), \xi^\pm(t))$ with $(x^\pm(0), \xi^\pm(0)) \in \mathcal{T}_\lambda$, we associate a spacetime ``$\lambda$-tube'' \begin{align} \label{e:tube-def} T := \{ (t,x): \lambda |\langle x - x^\pm (t), \omega^\pm (t) \rangle| + \lambda^{1/2} |(x - x^{\pm}(t)) \wedge \omega^\pm(t) | \le 10\}, \end{align} on which packets for $\gamma^\pm$ concentrate. Each tube, say with initial data $(x_0, \omega)$, is a $\lambda^{-1/2}$ neighborhood of a ray, intersected with a $\lambda^{-1}$ neighborhood of the null surface $\Lambda_{h, \omega}$ containing the ray. It is suggestive to identify $T$ with $\gamma^\pm$ and denote normalized packets by $u_T$. Let $\mathcal{T}_{\lambda}^{\pm}$ denote the $\lambda$-tubes associated to the bicharacteristics $\gamma^\pm$ for the metric $g$, initialized in the lattice $\mathcal{T}_\lambda$. By Lemmas~\ref{l:freq-loc-flow} and \ref{l:freq-loc-foliation}, using the mollified metric $g_{<\sqrt{\lambda}}$ instead of $g$ would yield an essentially equivalent family of tubes, in the sense that each tube from one family intersects boundedly many tubes from the other family, which are in turn contained in the dilate of the first tube by a fixed factor. The tubes $\mathcal{T}_{\lambda, \omega}^{\pm}$ associated to a given initial direction $\omega$ are finitely overlapping and admit a natural notion of distance which is convenient for expressing the decay of wave packets. Following Smith~\cite[Section 2]{smith1998parametrix}, if $T_1, T_2 \in \mathcal{T}_{\lambda, \omega}^+$ are tubes with initial data $(x_j, \omega), \ j= 1, 2,$ we define \begin{align*} d(T_1, T_2) := \lambda |\langle x_1 - x_2, \omega \rangle| + \lambda^{\frac{1}{2}} | (x_1 - x_2) \wedge \omega|.
\end{align*} Let $(t, x_{1,\omega}'(t), h_1)$ and $(t, x_{2, \omega}'(t), h_2)$ denote the corresponding rays in the foliation-adapted coordinates. It follows from~\eqref{e:foliation_coord_deriv} that each tube takes the form \begin{align*} T_j = \{ |x_\omega' - x_{j,\omega}'(t)| \lesssim \lambda^{-\frac{1}{2}}, \ |h - h_j| \lesssim \lambda^{-1}\}, \end{align*} and that $|x_{1, \omega}'(t) - x_{2,\omega}'(t)| + |h_1 - h_2| \sim |x_{1, \omega}'(0) - x_{2, \omega}'(0)| + |h_1 - h_2|$. Hence for $\lambda \gg 1$ we can also write \begin{align*} d(T_1, T_2) &\sim \lambda|h_1 - h_2| + \lambda^{\frac{1}{2}}|x_{1,\omega}'(t) - x_{2,\omega}'(t)|. \end{align*} \begin{figure} \caption{Tubes corresponding to the same direction $ \omega $ are finitely overlapping.} \ \ \begin{tikzpicture} \draw[very thick,xstep=2cm,ystep=0.6cm,rotate around={160:(0,0)}] (0,0) grid (12,3.6); \draw[->, very thick, rotate around={160:(0,0)}] (12,3.6) -- (12,-2) node[right] {$\omega$}; \draw[->, very thick, rotate around={160:(0,0)}] (12,3.6) -- (-2,3.6) node[right] {$\omega^{\perp}$}; \draw[rotate around={160:(0,0)}] (12,3.6) -- (11,3.6) node[below] {$\lambda^{-\frac{1}{2}}$}; \draw[rotate around={160:(0,0)}] (12,3.6) -- (12,3) node[left] {$\lambda^{-1}$}; \filldraw[fill=gray!20!white,draw=black,very thick,rotate around={160:(0,0)}] (8,2.4) rectangle (6,1.8); \draw[blue,very thick,rotate around={160:(0,0)}](8,2.4) to [out=-110,in=40] (7,-2); \draw[blue,very thick,rotate around={160:(0,0)}](6,2.4) to [out=-110,in=40] (5,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](8,1.8) to [out=-100,in=40] (7,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](6,1.8) to [out=-100,in=40](5,-3.2); \draw[blue,very thick,rotate around={160:(0,0)}](7,-2) to (5,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](7,-2.6) to (5,-3.2); \draw[very thick,rotate around={160:(0,0)}](7,-2) to (7,-2.6); \draw[very thick,rotate
around={160:(0,0)}](5,-2.6) to (5,-3.2); \filldraw[fill=gray!20!white,draw=black,very thick,rotate around={160:(0,0)}] (12,2.4) rectangle (10,1.8); \draw[blue,very thick,rotate around={160:(0,0)}](12,2.4) to [out=-110,in=40] (11,-2); \draw[blue,very thick,rotate around={160:(0,0)}](10,2.4) to [out=-110,in=40] (9,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](12,1.8) to [out=-100,in=40] (11,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](10,1.8) to [out=-100,in=40](9,-3.2); \draw[blue,very thick,rotate around={160:(0,0)}](11,-2) to (9,-2.6); \draw[red,very thick,rotate around={160:(0,0)}](11,-2.6) to (9,-3.2); \draw[very thick,rotate around={160:(0,0)}](11,-2) to (11,-2.6); \draw[very thick,rotate around={160:(0,0)}](9,-2.6) to (9,-3.2); \end{tikzpicture} \end{figure} If $u_T$ is a packet with initial direction $\omega$ and $T'$ is any other tube with initial direction $\omega$, at time $0$ the decay condition~\eqref{e:packet_decay} translates to \begin{align*} |u_T(0)|_{T'} \lesssim_N \lambda^{\frac{3}{4}} \langle d(T,T')\rangle^{-N}. \end{align*} For $t \ne 0$ a similar bound holds, but it requires justification since the condition~\eqref{e:packet_decay} is expressed in an orthogonal coordinate system whereas the null surfaces are curved. \begin{lemma} \label{e:packet_decay_foliation} Let $u = u_T$ be a frequency $\lambda$ wave packet for the $+$ bicharacteristic $(x_t^+, \xi_t^+)$ initialized at $(0, \omega)$. Let $\Lambda_{\omega}^\lambda$ be the $+$ null foliation with direction $\omega$ for $g_{<\sqrt{\lambda} }$, and denote by $\{L, \underline{L}, E\}$ the associated null frame. Then if $T' \in \mathcal{T}_{\lambda, \omega}^+$ is any other tube, one has for all $N$ \begin{align*} |u_T|_{T'} &\lesssim_N \lambda^{\frac{3}{4}} \langle d(T, T')\rangle^{-N},\\ |Eu_T|_{T'} &\lesssim_N \lambda^{\frac{3}{4} + \frac{1}{2}} \langle d(T, T')\rangle^{-N}.
\end{align*} More precisely, if $(t, x_\omega' (t), x_\omega(t) )$ represents $x_t^+$ in the rotated coordinates and $\tilde{u}(t, x_\omega', h) = u_T(t, x_\omega', \psi(t, x_\omega', h) ), \, \widetilde{Eu} (t, x_\omega', h) = (Eu_T)(t, x_\omega', \psi(t, x_\omega', h)) $ represent $u$ and $Eu$ in the foliation-adapted coordinates, then \begin{align*} &|\tilde{u}| \lesssim_N \lambda^{\frac{3}{4}} \langle \lambda^{\frac{1}{2}} |x_\omega' - x_\omega'(t) | + |\lambda h| \rangle^{-N} \text{ for all } N,\\ &|\widetilde{E u}| \lesssim_N \lambda^{\frac{3}{4} + \frac{1}{2}} \langle \lambda^{\frac{1}{2}} |x_\omega' - x_\omega'(t) | + |\lambda h| \rangle^{-N} \text{ for all } N. \end{align*} \end{lemma} \begin{proof} Modulo multiplicative factors of size $1 + O(\eta)$, we have \begin{align*} &\omega(t) = -\partial_{x_\omega'} \psi (t, x_\omega'(t), 0) \partial_{x_\omega'}+ \partial_{x_\omega},\\ &E(t, x_\omega', x_\omega(t, x_\omega', h)) = \partial_{x_\omega'} + \partial_{x_\omega'} \psi (t, x_\omega', h) \partial_{x_\omega}. \end{align*} By a slight abuse of notation we also write $E = E(t, x_\omega', h)$. We express the wave packet decay condition~\eqref{e:packet_decay} in terms of the coordinates $(x'_\omega, h)$. Let \begin{align*} y_\omega &= \langle (x_\omega' - x'_\omega(t), \psi(t, x_\omega', h) - \psi (t, x_\omega'(t), 0)), \omega(t) \rangle,\\ y'_\omega &= \langle (x_\omega' - x'_\omega(t), \psi(t, x_\omega', h) - \psi (t, x'_\omega(t), 0)), E(t) \rangle.
\end{align*}
Then
\begin{align*}
y'_\omega &= x'_{\omega}-x'_{\omega}(t) + \left[ \psi(t, x'_{\omega}, h) - \psi(t, x'_{\omega}, 0) + \psi(t, x'_{\omega}, 0) - \psi(t, x'_{\omega}(t), 0)\right] \partial_{x'_{\omega}} \psi(t, x'_{\omega}(t), 0),\\
y_\omega &= \psi(t, x'_{\omega}, h) - \psi(t, x'_{\omega}, 0) + \psi(t, x'_{\omega}, 0) - \psi(t, x'_{\omega}(t), 0) - (x'_{\omega}-x'_{\omega}(t))\, \partial_{x'_{\omega}} \psi(t, x'_{\omega}(t), 0).
\end{align*}
So
\begin{align*}
|y'_\omega| \ge |x'_{\omega} - x'_{\omega}(t)| - c (\eta t)^2 |x'_{\omega}-x'_{\omega}(t)| - c\eta t h \ge \frac{1}{2}|x'_{\omega}-x'_{\omega}(t)|
\end{align*}
whenever $h \le \dfrac{C|x'_{\omega} - x'_{\omega}(t)|}{\eta t}$. Similarly,
\begin{align*}
|y_\omega| \ge (1 + O(\eta)) h - c \eta t |x'_{\omega}-x'_{\omega}(t)| \ge \frac{1}{2}ch
\end{align*}
if $|x'_{\omega}-x'_{\omega}(t)| \le \tfrac{Ch}{\eta t}$. Thus
\begin{align*}
|\tilde{u}(t, x'_{\omega}, h)| \lesssim_N \left\{\begin{array}{ll}
\lambda^{\frac{3}{4}} \langle \lambda h\rangle^{-N}, & |x'_{\omega}-x'_{\omega}(t)|\le \frac{\eta t h}{C},\\
\lambda^{\frac{3}{4}}\langle \lambda^{\frac{1}{2}} |x'_{\omega}-x'_{\omega}(t)| + |\lambda h| \rangle^{-N}, & \frac{\eta t h}{C} \le |x'_{\omega}-x'_{\omega}(t)| \le \frac{Ch}{\eta t}, \\
\lambda^{\frac{3}{4}} \langle \lambda^{\frac{1}{2}} |x'_{\omega} - x'_{\omega}(t)| \rangle^{-N}, & |x'_{\omega}- x'_{\omega}(t)| \ge \frac{Ch}{\eta t},\end{array}\right.
\end{align*}
but one may of course insert $\lambda^{\frac{1}{2}} |x'_{\omega} - x'_{\omega}(t)|$ and $\lambda h$ in the first and third regimes respectively. To verify the bound on $\widetilde{Eu}$, write
\begin{align*}
E(t, x_\omega', h ) u = E(t, x_\omega'(t), 0) u + [E(t, x_\omega', h) - E(t, x_\omega'(t), 0)] u.
\end{align*}
For the first term we use the hypothesis that $\lambda^{-1/2}(\omega(t) \wedge \partial_x)u$ satisfies the packet bounds~\eqref{e:packet_decay}. The second term can be written as
\begin{align*}
\bigl( \partial_{x_\omega'} \psi (t, x_\omega', h) - \partial_{x_\omega'} \psi (t, x_\omega'(t), 0) \bigr)\partial_{x_\omega} u,
\end{align*}
which is $O( \lambda |x_\omega' - x_\omega'(t)| + \lambda |h|)$ times a normalized packet.
\end{proof}

\begin{corollary} \label{c:packet_decay_other_foliation}
Let $u$ be a frequency $\lambda$ wave packet for the bicharacteristic $(x^+_t, \xi^+_t)$ of $g_{<\lambda^{\frac{1}{2}}}$ initialized at $(0, \omega)$. Let $\Lambda_\omega$ be the corresponding null foliation for the \emph{untruncated} metric $g$. Then the previous estimates hold with $\Lambda^\lambda_\omega$ replaced by $\Lambda_\omega$.
\end{corollary}

\begin{proof}
The above lemmas imply that the foliations for $g$ and $g_{< \sqrt{\lambda}}$ are interchangeable as far as the bound with respect to $h$ is concerned. Similarly, as shown in~\cite[Prop. 4.3]{geba2005disp}, the bicharacteristics for $g$ and $g_{<\sqrt{\lambda}}$ differ by $O(\lambda^{-1/2})$.
\end{proof}

To express the decay of packets more compactly, we introduce the weight
\begin{align*}
m_T(t, x) := \bigl(1 + \lambda |\langle x - x^\pm (t), \omega^\pm (t) \rangle| + \lambda^{1/2} |(x - x^{\pm}(t)) \wedge \omega^\pm(t) | \bigr),
\end{align*}
write
\begin{align} \label{e:wpnorm}
\|u_T\|_{WP_T^N} := \| \lambda^{-\frac{3}{4}} m_T^N u_T\|_{L^\infty},
\end{align}
and write $\|\cdot\|_{WP}$ to denote a generic $WP_T^N$ norm, with the understanding that the constants in bounds depend on $N$.

\subsection{Superpositions of wave packets}
The following statement summarizes how the different wave packets fit together in Smith's parametrix.
Note that in the rest of the paper we shall only use the properties below, and not the specifics of Smith's construction.

\begin{namedthm}{Parametrix property} \label{param:property}
Let $ I=[0,\delta] $. For large enough dyadic $ \lambda \geq 1 $, the following properties hold:
\begin{enumerate}
\item For any initial data $ (u_1,u_2) \in L^2 \times H^{-1} $ localized at frequency $ \simeq \lambda $, there exists a $ \lambda $-wave packet superposition which depends linearly on $ (u_1,u_2) $:
\begin{equation} \label{wp:dec1}
u=u^{+}+u^{-}, \qquad \qquad u^{\pm}=\sum_{T \in \mathcal T_{\lambda}^{\pm}} c_T u_T
\end{equation}
such that $ (u(0),\partial_t u(0))=(u_1,u_2) $ and
\begin{equation} \label{wp:dec2}
\sum_{\pm,T \in \mathcal T_{\lambda}^{\pm}} \vm{c_T}^2 \approx \vn{(u_1,u_2)}_{L^2 \times H^{-1}}^2.
\end{equation}
For any such decomposition one has
\begin{equation} \label{param:small}
\vn{\Box_{g_{<\sqrt{\lambda}}} u(t)}_{L^2_x} \lesssim \lambda \| \partial^2 g(t)\|_{L^\infty_x} \vn{c_{T}}_{\ell^2_T} \qquad \forall t \in I.
\end{equation}
\item Let $ D=\delta^{-1} $. For any solution of the homogeneous problem
\begin{equation} \label{homogeneous:sol}
\Box_{g_{<\sqrt{\lambda}}} v=0, \qquad v[0]=(v_1,v_2),
\end{equation}
there exists $ u $ as above satisfying
\begin{align}
\vn{\tilde{P}_{\lambda} (v-u) }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } & \lesssim \delta \lambda^{-1} \vn{(v_1,v_2)}_{ H^{1} \times L^2 } \\
\vn{c_{T}}_{\ell^2_T} & \lesssim \lambda^{-1} \vn{(v_1,v_2)}_{ H^{1} \times L^2 }
\end{align}
\item Let $ T,T' \in \mathcal T_{\lambda,\omega}^{\pm} $ be two tubes associated to the same direction $ \omega $. For $ \nu \geq \lambda $, let the vector fields $ L,\underline{L},E $ associated to $ \pm g_{\sqrt{\nu}} $ and to $ \omega $ form a null frame as in section~\ref{s:nullframe}.
Then one has:
\begin{align}
\label{wp:Linfty:T} \vn{u_T }_{L^{\infty}(T')} & \lesssim \frac{\lambda^{\frac{3}{4}}}{\langle d(T,T') \rangle^N} \\
\label{wp:L:T} \vn{L u_T }_{L^2 L^{\infty}(T')} & \lesssim \frac{\lambda^{\frac{3}{4}}}{\langle d(T,T') \rangle^N} \\
\label{wp:E:T} \vn{E u_T }_{L^{\infty}(T')} & \lesssim \lambda^{\frac{1}{2}} \frac{\lambda^{\frac{3}{4}}}{\langle d(T,T') \rangle^N} \\
\label{wp:barL:T} \vn{\underline{L} u_T }_{L^{\infty}(T')} & \lesssim \lambda \, \frac{\lambda^{\frac{3}{4}}}{\langle d(T,T') \rangle^N}
\end{align}
Moreover, the same inequalities hold with $ u_T $ replaced by $ P_{\lambda} u_T $.
\item For any $ t \in I $ and sign $ \pm $, for $ (c_T)_T \in \ell^2_T $ one has
\begin{equation} \label{wp:sql2}
\vn{\sum_{T \in \mathcal T_{\lambda}^{\pm}} c_T u_T(t) }_{L^2_x}^2 \lesssim \sum_{T \in \mathcal T_{\lambda}^{\pm}} \vm{c_T}^2, \quad \vn{\sum_{T \in \mathcal T_{\lambda}^{\pm}} c_T u_T^{\prime}(t) }_{L^2_x}^2 \lesssim \lambda \sum_{T \in \mathcal T_{\lambda}^{\pm}} \vm{c_T}^2
\end{equation}
\end{enumerate}
\end{namedthm}

We note that property (2) follows from property (1) together with \eqref{com:est}, \eqref{param:small} and Lemma \ref{v00:en:est}. In Section~\ref{s:smithpackets} we discuss these properties in the context of Hart Smith's wave packet parametrix from \cite{smith1998parametrix}.

\begin{remark}
The decay properties~\eqref{wp:Linfty:T} through \eqref{wp:barL:T} reduce certain $L^2$ bilinear estimates to the characteristic energy estimates in section~\ref{s:CE}. Here is a typical computation. Suppose $v = \sum_{T \in \mathcal{T}_{\lambda, \omega}^+} a_T v_T$ is a superposition of frequency $\lambda$ wave packets for a given initial direction $\omega$.
Then by Schur's test we deduce
\begin{align*}
\| u v\|_{L^2}^2 &\lesssim \sum_T \| uv \|_{L^2(T)}^2 = \sum_T \sum_{T_1, T_2} a_{T_1} \overline{a_{T_2}} \langle u v_{T_1}, u v_{T_2} \rangle_{L^2(T)}\\
&\lesssim \sum_{T} \sum_{T_1, T_2} \lambda^{\frac{3}{2}} \| u\|_{L^2(T)}^2 \langle d(T, T_1)\rangle^{-N} \langle d(T, T_2)\rangle^{-N} |a_{T_1}a_{T_2}|\\
&\lesssim \bigl(\sup_T \| u\|_{L^2(T)}^2 \bigr)\ \lambda^{\frac{3}{2}} \sum_T |a_T|^2,
\end{align*}
and the sup term is essentially an estimate for $u$ over the null surfaces associated to the direction $\omega$.
\end{remark}

\subsection{Preliminaries to the general decomposition}
The next goal is to obtain a more general wave packet decomposition, similar to \eqref{wp:dec1}, for functions in $ X^{s,\theta} $ which are close to being solutions of \eqref{homogeneous:sol} in the sense of having low modulation. To allow for the extra flexibility of having inhomogeneities $ \Box_{g_{<\sqrt{\lambda}}} v $, the resulting decomposition (Prop. \ref{X:wp:dec} and Cor. \ref{Cor:WP:dec}) will have coefficients $ c_T(\cdot) $ that depend on time, which arise from Duhamel's formula
$$ v=v_0+\int_I 1_{t \geq s} v_s \,\mathrm{d} s. $$
We first express the functions $ v_s $ in the following way:

\begin{lemma} \label{parametrix:lemma:repr}
For $ s \in I=[0,\delta] $, let $ v_s $ be the solution of the equation
\begin{equation*}
\begin{cases}
\Box_{g_{<\sqrt{\lambda}}} v_s=0 \qquad \text{on } I \times \mathbb{R}^2 \\
v_s[s]=(f_s,g_s),
\end{cases}
\end{equation*}
where $ f_s,g_s $ are assumed to be localized at frequency $ \simeq \lambda $.
Then, there exists a wave packet superposition (initialized at $ t=0 $)
$$ u_s = \sum_{\pm,T \in \mathcal T_{\lambda}^{\pm}} c_{T,s} u_T $$
and a function $ w_s $ such that
$$ \tilde{P}_{\lambda} v_s= \tilde{P}_{\lambda} (u_s+w_s), $$
where
\begin{align}
& w_s[s]=(0,0) \\
& \vn{\tilde{\tilde{P}}_{\lambda} w_s }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } \lesssim \delta \vn{(f_s,g_s) }_{L^2 \times H^{-1}} \label{X:sm:delta} \\
& \vn{c_{T,s}}_{\ell^2_T} \lesssim \vn{(f_s,g_s) }_{L^2 \times H^{-1}}.
\end{align}
\end{lemma}

\begin{proof}
Denote $ M=\vn{(f_s,g_s) }_{L^2 \times H^{-1}} $ and for brevity denote $ \vn{\cdot}_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } $ by $ \vn{\cdot}_X $. Note that $ \vn{v_s[0]}_{H^1 \times L^2} \lesssim \lambda M $. We apply the Parametrix property \ref{param:property} with $ (v_1,v_2)=v_s[0] $ and we obtain a representation $ \tilde{P}_{\lambda} v_s= \tilde{P}_{\lambda} u^1+ \tilde{P}_{\lambda} w^1 $, where $ u^1 $ is a wave packet superposition. For $ s=0 $ this is sufficient. For $ s \neq 0 $, even though $ w^1[s] \neq (0,0) $, we have $ \vn{\tilde{\tilde{P}}_{\lambda} w^1 }_X \lesssim \delta M $.

Now we iterate this construction. For $ i \geq 1 $ we write
$$ \tilde{P}_{\lambda} w^i= \tilde{P}_{\lambda} (\tilde{\tilde{P}}_{\lambda} w^i-v^{i+1})+\tilde{P}_{\lambda} v^{i+1}, $$
where $ v^{i+1} $ solves the homogeneous equation
$$ \Box_{g_{<\sqrt{\lambda}}} v^{i+1}=0, \qquad v^{i+1}[s]=\tilde{\tilde{P}}_{\lambda} w^i[s]. $$
Assuming $ \vn{\tilde{\tilde{P}}_{\lambda} w^i }_X \lesssim \delta^i M $, we have $ \vn{v^{i+1}}_{L^{\infty}(H^1 \times L^2) } \lesssim \lambda \delta^i M $. As before, we use the Parametrix property \ref{param:property} to write $ \tilde{P}_{\lambda} v^{i+1}= \tilde{P}_{\lambda} (u^{i+1}+w^{i+1}) $ with $ \vn{\tilde{\tilde{P}}_{\lambda} w^{i+1} }_X \lesssim \delta^{i+1} M $.
From the above we obtain $ \tilde{P}_{\lambda} v_s= \tilde{P}_{\lambda} (u_s+w_s) $ by defining
$$ u_s=\sum_{i \geq 1} u^i, \qquad w_s=\sum_{i \geq 1} \big( \tilde{\tilde{P}}_{\lambda} w^i-v^{i+1} \big). $$
Note that $ w_s[s]=(0,0) $. Both series converge geometrically due to the powers of $ \delta $.
\end{proof}

\begin{corollary} \label{parametrix:cor:estw}
Let $ v \in X_{\lambda,D}^{0,\frac{1}{2}}[I] $ for $ I=[0,\delta] $ and let $ w $ be defined by
$$ w=w_0 + \int_I 1_{t \geq s} w_s \,\mathrm{d} s, $$
where $ w_0, w_s $ are obtained from Lemma \ref{parametrix:lemma:repr} applied with $ (f_0,g_0)=\tilde{P}_{\lambda} v[0] $, respectively $ (f_s,g_s)=(0,\Box_{g_{<\sqrt{\lambda}}} \tilde{P}_{\lambda} v(s)) $. Then
$$ \vn{\tilde{\tilde{P}}_{\lambda} w }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } \lesssim \delta \vn{\tilde{\tilde{P}}_{\lambda} v }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] }. $$
\end{corollary}

\begin{proof}
To ease notation we denote $ \tilde{\tilde{P}}_{\lambda} $ by $ P $. The inequality for $ w_0 $ follows immediately due to \eqref{energy:X}. Denoting by $ \tilde{w} $ the integral term, we have $ \tilde{w}[0]=(0,0) $ and, since $ w_t[t]=(0,0) $, we have
\begin{equation} \label{box:integrall}
\Box_{g_{<\sqrt{\lambda}}} \tilde{w}(t) = \int_I 1_{t \geq s} \Box_{g_{<\sqrt{\lambda}}} w_s \,\mathrm{d} s.
\end{equation}
Note that by \eqref{X:sm:delta}, \eqref{energy:X} and H\"older's inequality we have
\begin{equation} \label{Swlvx}
\vn{P \tilde{w} [t]}_{L^2 \times H^{-1}} \lesssim \delta \vn{P v}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]}.
\end{equation}
We write
$$ \Box_{g_{<\sqrt{\lambda}}} P \tilde{w}= P \Box_{g_{<\sqrt{\lambda}}} \tilde{w} + [ \Box_{g_{<\sqrt{\lambda}}}, P ] \tilde{w} $$
and apply \eqref{X:est:idzero}:
$$ \vn{ \tilde{\tilde{P}}_{\lambda} \tilde{w} }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] } \lesssim \lambda^{-1} D^{-\frac{1}{2} } \big( \vn{P \Box_{g_{<\sqrt{\lambda}}} \tilde{w}}_{L^2[I]}+ \vn{ [ \Box_{g_{<\sqrt{\lambda}}}, P ] \tilde{w} }_{L^2[I]} \big). $$
The second term is estimated by \eqref{com:est} and \eqref{Swlvx}. For the first term on the RHS we apply Minkowski's inequality to \eqref{box:integrall}:
\begin{align*}
& \lambda^{-1} D^{-\frac{1}{2} } \vn{P \int_I 1_{t \geq s} \Box_{g_{<\sqrt{\lambda}}} w_s \,\mathrm{d} s }_{L^2[I]} \lesssim \lambda^{-1} D^{-\frac{1}{2} } \int_I \vn{P \Box_{g_{<\sqrt{\lambda}}} w_s }_{L^2[I]} \,\mathrm{d} s \\
& \lesssim \lambda^{-1} D^{-\frac{1}{2} } \int_I \vn{ \Box_{g_{<\sqrt{\lambda}}} P w_s }_{L^2[I]} + \vn{[P, \Box_{g_{<\sqrt{\lambda}}}] w_s }_{L^2[I]} \,\mathrm{d} s \lesssim \int_I \vn{P w_s}_{X_{\lambda,D}^{0,\frac{1}{2}}[I] }\,\mathrm{d} s \\
& \lesssim \delta \int_I \lambda^{-1} \vn{\Box_{g_{<\sqrt{\lambda}}} \tilde{P}_{\lambda} v(s) }_{L^2_x} \,\mathrm{d} s \lesssim \delta \vn{\tilde{\tilde{P}}_{\lambda} v }_{X_{\lambda,D}^{0,\frac{1}{2}}[I] },
\end{align*}
where we have used \eqref{com:est}, \eqref{X:sm:delta} and H\"older's inequality in $ s $.
\end{proof}

\subsection{A wave packet characterization of the $ X^{s,\theta} $ spaces}
With the previous preliminaries we are now ready to state our general decomposition (see also \cite[Sec. 4]{tataru2003null}).
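Before stating it, let us record the elementary verification behind the Duhamel representation $ v = v_0 + \int_I 1_{t \geq s}\, v_s \,\mathrm{d} s $ used throughout this section (a schematic check, with the coefficient of $ \partial_t^2 $ in the wave operator normalized to $1$). If $ \Box v = F $, $ v_0 $ solves the homogeneous equation with $ v_0[0] = v[0] $, and each $ v_s $ solves the homogeneous equation with $ v_s[s] = (0, F(s)) $, then
\begin{align*}
\partial_t v(t) &= \partial_t v_0(t) + v_t(t) + \int_0^t \partial_t v_s(t) \,\mathrm{d} s = \partial_t v_0(t) + \int_0^t \partial_t v_s(t) \,\mathrm{d} s,\\
\partial_t^2 v(t) &= \partial_t^2 v_0(t) + \partial_t v_s(t)\big|_{s=t} + \int_0^t \partial_t^2 v_s(t) \,\mathrm{d} s = \partial_t^2 v_0(t) + F(t) + \int_0^t \partial_t^2 v_s(t) \,\mathrm{d} s,
\end{align*}
using $ v_t(t) = 0 $ and $ \partial_t v_s(t)|_{s=t} = F(t) $. Since spatial and mixed derivatives commute with the integral, and the boundary terms they produce vanish because $ v_t(t) = 0 $, we conclude $ \Box v = F $ with $ v[0] = v_0[0] $.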
\begin{proposition} \label{X:wp:dec}
Let $ I=[0,\delta] $, $ \mu \geq D=\delta^{-1} $ and let $ v \in X_{\mu,D}^{0,\frac{1}{2}}[I] $ be localized at frequency $ \simeq \mu $. We denote $ M \vcentcolon= \vn{\tilde{\tilde{P}}_{\mu} v }_{X_{\mu,D}^{0,\frac{1}{2}}[I] } \lesssim \vn{v}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} $.
\begin{enumerate}[leftmargin=*]
\item Then, there exists a wave packet decomposition
\begin{equation}
P_{\mu} v(t) = P_{\mu} \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T(t) u_T(t),
\end{equation}
where the time-dependent coefficients satisfy, for all $ t \in I $,
\begin{align}
\vn{a_T}_{\ell^2_{T } L^{\infty}_{t} } & \lesssim M \label{coeff:1} \\
\vn{a_T^{\prime} (t)}_{\ell^2_{T }} & \lesssim \mu^{-1} \vn{\Box_{g_{<\sqrt{\mu}} } \tilde{P}_{\mu} v(t)}_{L^2_x} \nonumber \\
\vn{a_T^{\prime} (t)}_{L^2_t \ell^2_{T }} & \lesssim D^{\frac{1}{2}} M \label{coeff:2} \\
\sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T^{\prime}(t) u_T(t) &=0. \label{coeff:3}
\end{align}
\item Conversely, if \eqref{coeff:1}, \eqref{coeff:2}, \eqref{coeff:3} hold for some $ M $ and
$$ w= \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T(t) u_T(t), $$
then $ \vn{w}_{X_{\mu,D}^{0,\frac{1}{2}}[I] } \lesssim M $.
\end{enumerate}
\end{proposition}

\begin{proof}
For brevity we will denote $ \vn{\cdot}_{X_{\mu,D}^{0,\frac{1}{2}}[I] } $ by $ \vn{\cdot}_X $ and $ P_{\mu}, \tilde{P}_{\mu}, \tilde{\tilde{P}}_{\mu} $ by $ P, \tilde{P}, \tilde{\tilde{P}} $.

{\bf (1)} Note that it suffices to prove that there exists a decomposition
$$ P v(t) =P w(t) + P \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T(t) u_T(t) $$
for some function $ w $ with bound $ \vn{\tilde{\tilde{P}} w }_X \lesssim \delta \vn{\tilde{\tilde{P}} v }_X $ and $ (a_T)_T $ satisfying the requirements above, since then the proposition follows by an iterative argument.
By Duhamel's formula we represent
$$ \tilde{P} v=v_0+\int_I 1_{t \geq s} v_s \,\mathrm{d} s, $$
where $ v_0 $ and each $ v_s $ are solutions to the problems
\begin{equation*}
\begin{cases}
\Box_{g_{<\sqrt{\mu}}} v_0=0 \\
v_0[0]= \tilde{P} v[0]
\end{cases}
\qquad \qquad \qquad
\begin{cases}
\Box_{g_{<\sqrt{\mu}}} v_s=0 \\
v_s[s]=(0,\Box_{g_{<\sqrt{\mu}}} \tilde{P} v(s)).
\end{cases}
\end{equation*}
We apply Lemma \ref{parametrix:lemma:repr} and obtain:
\begin{align*}
& P v_0 =P u_0 + P w_0, \quad & P v_s =P u_s + P w_s \\
&u_0 = \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} c_T u_T , \quad & u_s = \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} c_{T,s} u_T
\end{align*}
with the bounds
\begin{align}
& \vn{c_T}_{\ell^2_T} \lesssim \vn{ \tilde{P} v[0] }_{L^2 \times H^{-1} }\lesssim M , \quad & \vn{c_{T,s}}_{\ell^2_T} \lesssim \mu^{-1} \vn{\Box_{g_{<\sqrt{\mu}}} \tilde{P} v(s) }_{L^2_x} \label{cT:bounds}
\end{align}
and $ u_s(s)=v_s(s)=0 $. We write $ P v=P \tilde{P} v=P u+P w $, where
$$ u=u_0+ \int_I 1_{t \geq s} u_s \,\mathrm{d} s, \qquad w=w_0+ \int_I 1_{t \geq s} w_s \,\mathrm{d} s. $$
By Corollary \ref{parametrix:cor:estw} we have $ \vn{\tilde{\tilde{P}} w }_X \lesssim \delta \vn{\tilde{\tilde{P}} v }_X $. We obtain the representation
$$ u(t)= \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T(t) u_T(t), $$
where, for any sign $ \pm $ and any $ T \in \mathcal T_{\mu}^{\pm} $:
$$ a_T(t)=c_T+ \int_I 1_{t \geq s} c_{T,s} \,\mathrm{d} s, \qquad \vm{a_T}_{L^{\infty}_t} \lesssim \vm{c_T}+ \int_I \vm{c_{T,s}} \,\mathrm{d} s, $$
and $ a_T^{ \prime} (t)=c_{T,t} $ for all $ t \in I $.
We have
\begin{align*}
\vn{a_T}_{\ell^2_{T } L^{\infty}_{t} } & \lesssim \vn{c_T }_{\ell^2_{T }}+\vn{ \int_I \vm{c_{T,s}} \,\mathrm{d} s }_{\ell^2_{T }} \lesssim M+ \int_I \vn{c_{T,s}}_{\ell^2_{T }} \,\mathrm{d} s \\
& \lesssim M+ \int_I \mu^{-1} \vn{\Box_{g_{<\sqrt{\mu}}} \tilde{P} v(s) }_{L^2_x} \,\mathrm{d} s \lesssim M+ \delta^{\frac{1}{2}} \mu^{-1} \vn{\Box_{g_{<\sqrt{\mu}}} \tilde{P} v}_{L^2_{t,x}} \lesssim M.
\end{align*}
This verifies \eqref{coeff:1}. The next condition holds due to \eqref{cT:bounds}:
$$ \vn{a_T^{ \prime} (t)}_{\ell^2_{T }} =\vn{c_{T,t} }_{\ell^2_{T }} \lesssim \mu^{-1} \vn{\Box_{g_{<\sqrt{\mu}}} \tilde{P} v(t) }_{L^2_x}, $$
which also gives \eqref{coeff:2} by integration. Since $ u_s(s)=v_s(s)=0 $, we have $ \sum a_T^{ \prime} (s) u_T(s)=0 $ for any $ s \in I $, obtaining \eqref{coeff:3}, which completes the proof of the first part.

{\bf (2)} By H\"older's inequality in time and \eqref{wp:sql2} we have
$$ D^{\frac{1}{2}} \vn{w}_{L^2(I \times \mathbb{R}^2)} \lesssim \sum_{\pm} \vn{ \sum_{T \in \mathcal T_{\mu}^{\pm}} a_T(t) u_T(t)}_{L^{\infty}_t L^2_x} \lesssim \sum_{\pm} \vn{a_T}_{L^{\infty}_{t} \ell^2_{T } } \lesssim \sum_{\pm} \vn{a_T}_{\ell^2_{T } L^{\infty}_{t} } \lesssim M. $$
Now we consider the term $ \Box_{g_{<\sqrt{\mu}}} w $, which we write as
$$ \Box_{g_{<\sqrt{\mu}}} w=\sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} \left[ a_T(t) \Box_{g_{<\sqrt{\mu}}} u_T(t) + a_T^{ \prime}(t) g \partial_{t,x} u_T(t) + \partial_t ( a_T^{ \prime}(t) u_T(t) ) \right]. $$
For the last term we use \eqref{coeff:3}.
For the first term we use \eqref{param:small}:
$$ D^{-\frac{1}{2}} \vn{\sum a_T(t) \Box_{g_{<\sqrt{\mu}}} u_T(t) }_{L^2(I \times \mathbb{R}^2)} \lesssim \mu D^{-1} \| \partial^2 g\|_{L^2_t L_x^\infty} \sum_{\pm} \vn{a_T}_{L^{\infty}_{t} \ell^2_{T } } \lesssim \mu M, $$
while for the second term we use \eqref{wp:sql2} and \eqref{coeff:2}:
$$ \vn{\sum a_T^{\prime}(t) \partial_{t,x} u_T(t) }_{L^2(I \times \mathbb{R}^2)} \lesssim \mu \vn{a_T^{ \prime} (t)}_{L^2_t \ell^2_{T }} \lesssim \mu D^{\frac{1}{2}} M , $$
which completes the proof.
\end{proof}

The previous proposition provides the main part of the wave packet decomposition. However, it does not provide control of the second derivatives (in time) of the coefficients. To remedy this we have the following corollary.

\begin{corollary} \label{Cor:WP:dec}
Under the assumptions and notations of Prop. \ref{X:wp:dec}:
\begin{enumerate}[leftmargin=*]
\item There exists a decomposition
$$ P_{\mu} v=v^{+}+v^{-}+v_R, $$
where
\begin{equation} \label{vpm:wp:dec}
v^{\pm} = P_{\mu} \sum_{T \in \mathcal T_{\mu}^{\pm}} c_T(t) u_T(t)
\end{equation}
such that
\begin{equation} \label{vRpm:est}
\vn{v^{\pm}}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} \lesssim M, \qquad \vn{v_R}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} \lesssim M,
\end{equation}
\begin{equation} \label{vRpm:est2}
\vn{v_R}_{L^2[I]} \lesssim \mu^{-1} D^{\frac{1}{2}} M,
\end{equation}
and
\begin{align}
\vn{c_T}_{\ell^2_{T } L^{\infty}_{t} } & \lesssim M \label{reg:coeff:1} \\
\vn{c_T^{ \prime} (t)}_{L^2_t \ell^2_{T }} & \lesssim D^{\frac{1}{2}} M \label{reg:coeff:2} \\
\vn{c_T^{ \prime \prime} (t)}_{L^2_t \ell^2_{T }} & \lesssim \mu D^{\frac{1}{2}} M. \label{reg:coeff:3}
\end{align}
\item Let $ T,T' \in \mathcal T_{\mu,\omega}^{\pm} $ be two tubes associated to the same direction $ \omega $.
Then
\begin{equation} \label{wp:barL:cT}
\vn{\underline{L} P_{\mu} \big( c_T u_T \big) }_{L^{\infty}(T')} \lesssim \ \mu \ \frac{\mu^{\frac{3}{4}}}{\langle d(T,T') \rangle^N} \vm{c_T}_{L^{\infty}_t}
\end{equation}
\end{enumerate}
\end{corollary}

\begin{corollary} \label{Cor:WP:dec:rem}
Moreover, for any function $ u_{\lambda} $ localized at frequency $ \simeq \lambda \gg \mu $ we have:
\begin{equation} \label{bil:est:remainder1}
\vn{u_{\lambda} \cdot v_R}_{X_{\lambda,\mu}^{0,\frac{1}{4}}[I] } \lesssim \mu^{\frac{3}{4}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M,
\end{equation}
while for $ \lambda \simeq \mu \gtrsim \eta $ we have
\begin{equation} \label{bil:est:remainder2}
\vn{P_{\eta} (u_{\lambda} \cdot v_R)}_{X_{\eta,\eta}^{0,\frac{1}{4}}[I] } \lesssim \frac{\lambda}{\eta^{\frac{1}{4}}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M.
\end{equation}
\end{corollary}

\begin{proof}[Proof of Corollary \ref{Cor:WP:dec}]
{\bf(1)} The previous proposition provides a decomposition into $ + $ and $ - $ components, and only condition \eqref{reg:coeff:3} is missing on the coefficients. To gain it, we regularize the coefficients $ a_T(t) $ in time on the $ \mu^{-1} $ scale, at the expense of introducing the remainder $ v_R $, which obeys the favorable $ L^2 $ estimate \eqref{vRpm:est2}. We write and define
$$ a_T(t) = a_T^{<\mu}(t) +a_T^{>\mu}(t), \qquad c_T(t) \vcentcolon= a_T^{<\mu}(t). $$
The conditions \eqref{reg:coeff:1}, \eqref{reg:coeff:2} are maintained from \eqref{coeff:1}, \eqref{coeff:2}, while \eqref{reg:coeff:3} follows from \eqref{reg:coeff:2}. The $ v^{\pm} $ are defined by \eqref{vpm:wp:dec}. We prove that $ \vn{v^{\pm}}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} \lesssim M $, and as a consequence we also obtain $ \vn{v_R}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} \lesssim M $. We have to place $ v^{\pm} $ and $ \Box_{g_{<\sqrt{\mu}}} v^{\pm} $ in $ L^2 $.
The fact that $ D^{\frac{1}{2}} \vn{v^{\pm}}_{L^2[I]} \lesssim M $ and
$$ D^{-\frac{1}{2}} \mu^{-1} \vn{\sum c_T(t) \Box_{g_{<\sqrt{\mu}}} u_T(t) }_{L^2(I \times \mathbb{R}^2)} \lesssim M, $$
$$ D^{-\frac{1}{2}} \mu^{-1} \vn{\sum c_T^{ \prime}(t) \partial u_T(t) }_{L^2(I \times \mathbb{R}^2)} \lesssim M $$
follow like in the proof of part (2) of Prop. \ref{X:wp:dec}. What remains is
$$ D^{-\frac{1}{2}} \mu^{-1} \vn{\sum c_T^{ \prime \prime}(t) u_T(t) }_{L^2(I \times \mathbb{R}^2)} \lesssim D^{-\frac{1}{2}} \mu^{-1} \vn{c_T^{ \prime \prime} (t)}_{L^2_t \ell^2_{T }} \lesssim M. $$
For $ v_R $ we have
$$ v_R = P_{\mu} \sum_{\pm,T \in \mathcal T_{\mu}^{\pm}} a_T^{>\mu}(t) u_T(t). $$
Since
$$ \mu \vn{a_T^{>\mu}(t) }_{L^2_t \ell^2_{T }} \lesssim \vn{a_T^{>\mu, \prime}(t) }_{L^2_t \ell^2_{T }} \lesssim D^{\frac{1}{2}} M, $$
we obtain \eqref{vRpm:est2}.

{\bf(2)} To prove \eqref{wp:barL:cT}, for any $ t \in I $ we write
$$ \underline{L} P_{\mu} \big( c_T(t) u_T (t)\big)= c_T(t) \underline{L} P_{\mu} u_T(t) + c_T'(t) P_{\mu} u_T(t). $$
For the first term we use \eqref{wp:barL:T}, while for the second we use \eqref{wp:Linfty:T} together with
\begin{align} \label{reg:coeff:4}
\vm{c_T'}_{L^{\infty}_t } \lesssim \mu \vm{c_T}_{L^{\infty}_t},
\end{align}
which holds due to the time regularization done in Step (1).
\end{proof}

\begin{proof}[Proof of Corollary \ref{Cor:WP:dec:rem}]
Note that by Bernstein's inequality and \eqref{vRpm:est2}, \eqref{vRpm:est}, \eqref{energy:X} we have
$$ \vn{v_R}_{L^2 L^{\infty}} \lesssim D^{\frac{1}{2}} M, \qquad \vn{\Box_{g_{<\sqrt{\mu}}} v_R}_{L^2 L^{\infty}} \lesssim \mu^2 D^{\frac{1}{2}} M, \qquad \vn{v_R}_{L^{\infty}_{t,x}} \lesssim \mu M.
$$
For the $ L^2 $ part of \eqref{bil:est:remainder1} we have
$$ \mu^{\frac{1}{4}} \vn{u_{\lambda} \cdot v_R}_{L^2} \lesssim \mu^{\frac{1}{4}} \vn{u_{\lambda}}_{L^{\infty} L^2} \vn{v_R}_{L^2 L^{\infty}} \lesssim \mu^{\frac{3}{4}} (D/\mu)^{\frac{1}{2}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M, $$
and recall that $ D \leq \mu $. For $ \Box_{g_{<\sqrt{\lambda}}}( u_{\lambda} \cdot v_R ) $ we consider
\begin{align*}
& \vn{ \Box_{g_{<\sqrt{\lambda}}} u_{\lambda} \cdot v_R}_{L^2} \lesssim \vn{\Box_{g_{<\sqrt{\lambda}}} u_{\lambda} }_{L^2} \vn{v_R}_{L^{\infty}} \lesssim \lambda D^{\frac{1}{2}} \mu \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M \\
& \vn{u_{\lambda} \cdot \Box_{g_{<\sqrt{\mu}}} v_R }_{L^2} \lesssim \vn{u_{\lambda}}_{L^{\infty} L^2} \vn{ \Box_{g_{<\sqrt{\mu}}} v_R}_{L^2 L^{\infty}} \lesssim \mu^2 D^{\frac{1}{2}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M \\
& \vn{u_{\lambda} \cdot (\Box_{g_{<\sqrt{\lambda}}}-\Box_{g_{<\sqrt{\mu}}}) v_R }_{L^2} \lesssim \vn{u_{\lambda}}_{L^2} \vn{\mu^{-1} \partial^2 v_R }_{L^{\infty}} \lesssim \mu^2 D^{\frac{1}{2}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M \\
& \vn{\partial u_{\lambda} \cdot \partial v_R }_{L^2} \lesssim \vn{\partial u_{\lambda}}_{L^{\infty} L^2} \vn{ \partial v_R}_{L^2 L^{\infty}} \lesssim \lambda \mu D^{\frac{1}{2}} \vn{ u_{\lambda}}_{X_{\lambda,D}^{0,\frac{1}{2}}[I]} M
\end{align*}
These estimates combine to complete the proof of \eqref{bil:est:remainder1}, and we turn to \eqref{bil:est:remainder2}. Here we use Bernstein's inequality in the form $ P_{\eta}:L^2 L^1 \to \eta L^2 $. We have
$$ \eta^{\frac{1}{4}} \vn{P_{\eta} (u_{\lambda} \cdot v_R) }_{L^2} \lesssim \eta^{\frac{1}{4}} \eta \vn{u_{\lambda} \cdot v_R }_{L^2 L^1} \lesssim \eta^{\frac{1}{4}} \eta \vn{u_{\lambda}}_{L^{\infty}L^2} \vn{v_R}_{L^2}, $$
which clearly suffices using \eqref{vRpm:est2}.
For $ \Box_{g_{<\sqrt{\eta}}}( u_{\left|d} \cdot v_R ) $ we similarly consider \begin{equation}gin{align*} & \vn{(\Box_{g_{<\sqrt{\left|d}}}-\Box_{g_{<\sqrt{\eta}}}) P_{\eta}( u_{\left|d} \cdot v_R ) }_{L^2} \lesssim \eta^2 \vn{u_{\left|d} \cdot v_R}_{L^2 L^1} \lesssim \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \lesssim \eta^2 \vn{u_{\left|d} }_{L^2} \vn{v_R}_{L^{\infty}L^2} \lesssim \eta^2 \vn{u_{\left|d}}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} M \\ & \eta \vn{\partial u_{\left|d} \cdot \partial v_R }_{L^2 L^1} \lesssim \eta \vn{\partial u_{\left|d}}_{L^{\infty} L^2} \vn{ \partial v_R}_{L^2} \lesssim \eta \left|d D^{\frac{1}{2}} \vn{ u_{\left|d}}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} M\\ & \eta \vn{ \Box_{g_{<\sqrt{\left|d}}} u_{\left|d} \cdot v_R}_{L^2 L^1} \lesssim \eta \vn{\Box_{g_{<\sqrt{\left|d}}} u_{\left|d} }_{L^2} \vn{v_R}_{L^{\infty}L^2} \lesssim \eta \left|d D^{\frac{1}{2}} \vn{ u_{\left|d}}_{X_{\mu,D}^{0,\frac{1}{2}}[I]} M \\ \end{align*} and similarly for $ u_{\left|d} \cdot \Box_{g_{<\sqrt{\left|d}}} v_R$. Each of the term above times $ \eta^{-\frac{7}{4}} $ is $ \lesssim \left|d / \eta^{\frac{1}{4}} $. \end{proof} \ Finally, we have the following time-dependent wave packets sums analogue of \eqref{wp:Linfty:T}-\eqref{wp:barL:T}. \begin{equation}gin{lemma} For a fixed $ \omega $ and a dyadic frequency $ \eta $ let $$ v^{\omega}(t)=\sum_{ T\in \mathcal T_{\eta}^{\pm}, \omega_T=\omega } c_T(t) u_T(t), \qquad \qquad t \in I. $$ For $ \nu \geq \eta $, let the vector fields $ L,\underline{L},E $ associated to $ \pm g_{\sqrt{\nu}} $ and to $ \omega $ form a null frame as in section \ref{s:nullframes}. 
Then one has:
\begin{align}
\label{wp:Linfty:T:sum} \vn{v^{\omega}}_{L^{\infty}} & \lesssim \eta^{\frac{3}{4}} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}} \\
\label{wp:E:T:sum} \vn{E P_{\eta} v^{\omega}} _{L^{\infty}} & \lesssim \eta^{\frac{5}{4}} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}} \\
\label{wp:L:T:sum} \vn{ L P_{\eta} v^{\omega}} _{L^2 L^{\infty}} & \lesssim \eta^{\frac{3}{4}} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 + \vm{c_T'}_{L^{2}_t}^2 \big)^{\frac{1}{2}} \\
\label{wp:barL:T:sum} \vn{ \underline{L} P_{\eta} v^{\omega}} _{L^\infty} & \lesssim \eta^{\frac{7}{4}} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}}.
\end{align}
\end{lemma}

\begin{proof}
These follow easily from \eqref{wp:Linfty:T}-\eqref{wp:barL:T}. In the case of \eqref{wp:L:T:sum} and \eqref{wp:barL:T:sum} we have to consider the case in which the $ \partial_t $ derivative from $ L $ or $ \underline{L} $ falls on the coefficients $ c_T(t) $. We write
$$ L (P_{\eta} v^{\omega})(t)=\sum_{ T, \omega_T=\omega } c_T(t) L P_{\eta} u_T(t) + \sum_{ T, \omega_T=\omega } c_T'(t) P_{\eta} u_T(t). $$
For the first sum we use \eqref{wp:L:T}, while for the second we use \eqref{wp:Linfty:T}. The bound \eqref{wp:barL:T:sum} follows from \eqref{wp:barL:cT}.
\end{proof}

\section{Microlocalized characteristic energy estimates} \label{s:CE}

The proof of the algebra property $X^{s,\theta} \cdot X^{s, \theta} \subset X^{s,\theta}$ relies on estimates for directionally localized half-waves along certain characteristic surfaces.
\subsection{Microlocalization}
For each dyadic number $\alpha \le 1$, let $\tau + a_{<\alpha^{-1}}^\pm$ be the half-wave symbols for the operator $\Box_{g_{<\alpha^{-1}}}$ (note that $a_{<\alpha^{-1}}^{\pm}$ is not quite frequency-localized due to the square root) and let $\Phi^{\alpha, \pm}_t (x, \xi) = (x_t^{\alpha, \pm}, \xi_t^{\alpha, \pm})$ denote their Hamiltonian flows. On one hand, a routine linearization argument as in the proof of Lemma~\ref{l:optical_bounds} shows that the flow map satisfies
\begin{align} \label{e:flow-deriv}
\frac{\partial^k(x^{\alpha, \pm}_t, \xi^{\alpha, \pm}_t)}{\partial (x, \xi)^k } = O(\alpha^{1-|k|}), \quad |k| \ge 1.
\end{align}
On the other hand, in view of homogeneity the flow map is smoother in some directions than others. To capture this directional information, we consider a class of phase space metrics adapted to the wave equation, developed in~\cite{geba2007phase,geba2008gradient}. For each $0 < \alpha < 1$, define $g := g_{\alpha}$ by
\begin{align} \label{e:g}
g_{\alpha, (x, \xi)}(y, \eta) = \frac{ | \langle y, \xi \rangle |^2 }{ \alpha^4 |\xi|^2} + \frac{ |y \wedge \xi|^2 }{ \alpha^2 |\xi|^2} + \frac{ |\langle \eta, \xi \rangle |^2 }{ |\xi|^4 } + \frac{ |\eta \wedge \xi|^2}{ \alpha^2 |\xi|^4}.
\end{align}
We recall
\begin{lemma}[{\cite[Lemma 4.2]{geba2008gradient}}]
The flows $\Phi^{\alpha, \pm}_{t}$ are bi-Lipschitz and $g_\alpha$-smooth.
\end{lemma}
These metrics shall define the symbol classes $S(m, g)$ in which we work. At a point $(x, \xi)$, the unit ball with respect to $g_{(x,\xi)}$ consists of an $\alpha^2 \times \alpha^{n-1}$ rectangle in the spatial variable with long side orthogonal to $\xi$, and a $|\xi| \times (\alpha |\xi|)^{n-1}$ rectangle in the frequency variable with long side parallel to $\xi$.
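To make the description of the unit ball concrete, one can read it off directly from \eqref{e:g}: restricting to the spatial slice $ \eta = 0 $ and the frequency slice $ y = 0 $, and writing $ \wht{\xi} = \xi/|\xi| $, one has (up to absolute constants)
\begin{align*}
g_{\alpha, (x,\xi)}(y, 0) \le 1 \quad &\Longleftrightarrow \quad |\langle y, \wht{\xi} \rangle| \le \alpha^2 \ \text{ and } \ |y \wedge \wht{\xi}| \le \alpha,\\
g_{\alpha, (x,\xi)}(0, \eta) \le 1 \quad &\Longleftrightarrow \quad |\langle \eta, \wht{\xi} \rangle| \le |\xi| \ \text{ and } \ |\eta \wedge \wht{\xi}| \le \alpha |\xi|,
\end{align*}
which is exactly the $ \alpha^2 \times \alpha^{n-1} $ spatial box and the $ |\xi| \times (\alpha |\xi|)^{n-1} $ frequency box described above.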
One easily verifies that perturbing the basepoint $(x, \xi)$ within this unit ball yields comparable metrics. Appendix~\ref{s:microlocal} collects the relevant facts and notation concerning pseudo-differential calculus at this generality. Throughout this article, for any frequency $\mu \ge 1$ we write
\[\alpha_\mu := \mu^{-1/2}.\]
The parameter $\mu$ will eventually be the smaller of the two frequencies in products of the form $u_\lambda v_\mu \in X^{s, \theta}_\lambda \cdot X^{s,\theta}_\mu$, and in this context $\alpha_\mu$ represents the smallest angular scale in the bilinear decomposition of the product.

If $\alpha \in [\alpha_\mu, 1]$, write $\Omega_{\alpha}$ for the collection of half-open dyadic intervals on $S^1 = \mathbb{R} / \mathbb{Z}$ of width $\alpha$. For $\theta \in \Omega_\alpha$, let $s_\theta^\alpha(\xi)$ be a $0$-homogeneous function supported in the sector defined by $\wht{\xi} := \xi/|\xi| \in C \theta$, where $C\theta$ denotes the dilate of the interval $\theta$ about its center by some (fixed) factor $C > 1$, and define time-dependent symbols $\phi_{\theta}^{\alpha, \pm}$ by transporting $s_\theta^\alpha$ along the flows $\Phi^{\alpha_\mu, \pm}_t$:
\begin{align} \label{pullback:flow}
\phi_{\theta}^{\alpha, \pm}(t, x, \xi) := s_\theta^\alpha \circ \Phi_{-t}^{\alpha_\mu, \pm}.
\end{align}
By the previous Lemma, we observe
\begin{lemma}
The symbols $\phi^{\alpha, \pm}_{\theta}$ satisfy $\partial_{x,\xi} \phi^{\alpha, \pm}_{\theta} \in S( (\alpha |\xi|)^{-1}, g_{\alpha_\mu})$.
\end{lemma}
The notation $ S(m,g) $ refers to the symbol classes in Definition \ref{def:symbol:class}.
\begin{remark}
A very similar construction was proposed by Geba-Tataru \cite[Section 4]{geba2008gradient}.
However, here the time-dependent symbols are defined using the same flows $\Phi_t^{\alpha_\mu, \pm}$ for all angular widths $\alpha \ge \alpha_\mu$, whereas in that article the initial symbols $s_{\theta}^\alpha$ are transported along the flows $\Phi_{t}^{\alpha, \pm}$. Consequently their symbols satisfy the better bounds $\partial \phi_{\theta}^{\alpha, \pm} \in S( \alpha|\xi|^{-1}, g_\alpha)$. \end{remark}
For each $\lambda \ge \mu$, define the symbols at frequency $\lambda$ by
\begin{align} \label{e:pd-loc} \phi_{\theta, \lambda}^{\alpha, \pm}(t, x, \xi) := P_{<\lambda/8} (D_x)\phi_{\theta}^{\alpha, \pm} (t, x, \xi)\, s_\lambda(\xi), \end{align}
where $s_{\lambda}(\xi)$ is a smooth cutoff supported in the annulus $|\xi| \in [\lambda/4, 4\lambda]$ and equal to $1$ for $|\xi| \in [\lambda/2, 2\lambda]$.
\begin{lemma} \label{l:poisson-bracket} The symbols $\phi_{\theta, \lambda}^{\alpha, \pm}$ satisfy $\partial \phi_{\theta, \lambda}^{\alpha, \pm} \in S ( (\alpha \lambda)^{-1}, g_{\alpha_\mu})$, and also
\begin{align*} \{ \tau + a_{<\alpha_\mu^{-1} }^{\pm}, \phi_{\theta, \lambda}^{\pm, \alpha} \} \in S(1, g_{\alpha_\mu}). \end{align*} \end{lemma}
\begin{proof} The fact that $\phi_{\theta, \lambda}^{\alpha, \pm} \in S(1, g_{\alpha_\mu})$ is straightforward.
Since $\phi_\theta^{\alpha, \pm}$ is transported along the Hamilton flow for $a_{<\alpha_\mu^{-1}} (t, x, \xi)$, the Poisson bracket takes the form
\begin{align*} -(\partial_x a_{<\alpha_\mu^{-1}} \partial_\xi \tilde{s}_\lambda(\xi)) \phi_{\theta, \lambda}^{\alpha, \pm}(t,x, \xi) + s_{\lambda}(\xi)[H_{a_{<\alpha_\mu^{-1}}^{\pm}}, P_{<\lambda/8} (D_x)] \phi_\theta^{\alpha, \pm} (t, x, \xi), \end{align*}
where $\tilde{s}_\lambda$ is a fattened version of $s_{\lambda}$ and $H_a = \langle a_\xi, \partial_x\rangle - \langle a_x, \partial_\xi \rangle$ denotes the Hamiltonian vector field for a symbol $a$. The first term belongs to $S( \lambda^{-1}, g_{\alpha_\mu})$. Now for any $k_1, k_2$, one has
\begin{align*} \partial^{k_1}_x \partial^{k_2}_\xi [\partial_\xi a_{<\alpha_\mu^{-1}}^\pm, P_{<\lambda/8}] \partial_x \phi_\theta^{\alpha, \pm} = \sum_{j_1=0}^{k_1} \sum_{j_2=0}^{k_2} [ \partial^{j_1}_x \partial^{j_2}_\xi \partial_\xi a_{<\alpha_\mu^{-1}}^\pm, P_{<\lambda/8}] \partial^{k_1 -j_1}_x \partial^{k_2-j_2}_\xi (\partial_x \phi_\theta^{\alpha, \pm}). \end{align*}
As $\partial_x \partial_\xi a_{<\alpha_\mu^{-1}}^{\pm}$ is $g_{\alpha_\mu}$-smooth, for any $y_1, \dots, y_{k_1}, \eta_1, \dots, \eta_{k_2}$ with $g(y_j, 0) = g(0, \eta_j) = 1$,
\begin{align*} \|[\langle y_1, \partial_x \rangle \cdots \langle y_{j_1}, \partial_x \rangle \langle \eta_1, \partial_\xi \rangle \cdots \langle \eta_{j_2}, \partial_\xi\rangle \partial_\xi a_{<\alpha_\mu^{-1}}^\pm, P_{<\lambda/8}]\|_{L^p \to L^p} \lesssim \lambda^{-1}.
\end{align*}
Consequently
\begin{align*} \Bigl| \prod_{j_1=1}^{k_1} \prod_{j_2=1}^{k_2} \langle y_{j_1}, \partial_x \rangle \langle \eta_{j_2}, \partial_\xi \rangle [ \partial_{\xi} a_{<\alpha_\mu^{-1}}^{\pm}, P_{<\lambda/8}] \partial_x \phi_\theta^{\alpha, \pm}(t,x, \xi)\Bigr| \lesssim \lambda^{-1} (\alpha |\xi|)^{-1}, \end{align*}
therefore
\begin{align*} [\partial_\xi a_{<\alpha_\mu^{-1}}^{\pm}, P_{<\lambda/8} ] (\partial_x \phi_{\theta}^{\alpha, \pm}) \in S( (\alpha \lambda |\xi|)^{-1}, g_{\alpha_\mu}). \end{align*}
Similarly, as Bernstein implies that $\partial^2_x a_{<\alpha_\mu^{-1}}^{\pm} \in S( \alpha_\mu^{-\frac{1}{2}} |\xi|, g_{\alpha_\mu})$, we have
\begin{align*} [\partial_x a_{<\alpha_\mu^{-1}}^{\pm}, P_{<\lambda/8} ] (\partial_\xi \phi_\theta^{\alpha, \pm}) \in S( \alpha_\mu^{-\frac{1}{2}}(\alpha \lambda)^{-1}, g_{\alpha_\mu}). \end{align*} \end{proof}
For each $x$, the symbol $\phi_{\theta}^{\alpha, \pm}(t, x, \xi)$ is supported in a sector $|\hat{\xi} - \widehat{\xi^{\alpha_\mu}_\theta} (t, x)| \le c \alpha$, where $\hat{\xi} := \xi / |\xi|$ and \[ (x, \theta) \mapsto \bigl( (x^{\alpha_\mu}_\theta(t, x), \theta), \ (x, \xi^{\alpha_\mu}_\theta (t, x)) \bigr) \] parametrizes the graph of the canonical transformation $\Phi^{\alpha_\mu, \pm}_t$ (the dependence on $\pm$ is suppressed in the notation $\xi_{\theta}^{\alpha_\mu}(t, x)$).
The mollified symbol $\phi_{\theta, \lambda}^{\alpha, \pm}$ is no longer sharply localized to the sector $|\hat{\xi} - \widehat{\xi^{\alpha_\mu}_\theta }(t,x)| \le c \alpha$, but we can write
\begin{align} \label{e:input_angular_truncation} \phi_{\theta, \lambda}^{\alpha, \pm}(t, x, \xi) = \phi_{\theta, \lambda}^{\alpha, \pm} \chi_{<2c\alpha} (|\hat{\xi} - \widehat{\xi^{\alpha_\mu}_\theta} (t, x) |) + r_{\theta, \lambda}^{\alpha, \pm}, \end{align}
where the first symbol has the same regularity as $\phi_{\theta, \lambda}^{\alpha, \pm}$ and $r_{\theta, \lambda}^{\alpha, \pm} = O(\lambda^{-\infty})$. For each $\theta$, let
\begin{align*} m_\theta(t, x, \xi) := \bigl\langle \alpha^{-1} |\widehat{\xi} - \widehat{ \xi_\theta^{\alpha_\mu}} (t, x) | \bigr\rangle^{-1}. \end{align*}
In the notation of Appendix~\ref{s:microlocal} one has
\begin{align*} \phi_{\theta}^{\alpha, \pm}, \ \phi_{\theta, \lambda}^{\alpha, \pm} \in S^1_{\alpha} ( m_\theta^\infty, g_{\alpha_\mu}). \end{align*}
For future reference, we also record a technical lemma regarding the time-regularity of the symbols.
\begin{lemma} \label{l:t-derivs} There is a decomposition
\begin{align*} \partial_t \phi_{\theta}^{\alpha, \pm} = \psi_1 + \psi_2, \end{align*}
where $\psi_1\in S(1, g_{\alpha_\mu})$ and $\psi_2$ satisfies the estimates
\begin{align*} &|\psi_2| \lesssim_N \alpha^{-1} m_\theta^N \text{ for all } N, \\ &\partial_x \psi_2 \in \alpha_\mu^{-\frac{1}{2}} \alpha^{-1} S( m_\theta^\infty, g_{\alpha_\mu}) + \alpha_\mu^{-1} S(m_\theta^\infty, g_{\alpha_\mu}),\\ &\partial_\xi \psi_2 \in \alpha^{-1}(\alpha_\mu |\xi|)^{-1} S(m_\theta^\infty, g_{\alpha_\mu}); \end{align*}
in particular $\partial_t \phi_{\theta}^{\alpha, \pm} \in \alpha^{-1} S(1, g_{\alpha_\mu})$.
Also,
\begin{gather*} \partial_t \{ \tau+ a_{<\alpha_\mu^{-1}}^{\pm}, \phi_{\theta,\lambda}^{\alpha, \pm} \} \in S( \alpha_\mu^{-\frac{1}{2}} (\alpha_\mu^2 \lambda)^{-1} m_\theta^\infty, g_{\alpha_\mu}). \end{gather*} \end{lemma}
\begin{proof} The definition of the symbol implies that
\begin{align*} \partial_t \phi_{\theta}^{\alpha, \pm} = \langle a_\xi, \partial_x\rangle \phi_\theta^{\alpha, \pm} - \langle a_x, \partial_\xi\rangle \phi_{\theta}^{\alpha, \pm}. \end{align*}
The first term evidently belongs to $S(1, g_{\alpha_\mu})$. The second term is pointwise bounded by $\alpha^{-1}$ and satisfies
\begin{gather*} \partial_x (\langle a_x, \partial_\xi \rangle \phi_{\theta}^{\pm, \alpha}) = \langle a_{xx}, \partial_{\xi} \rangle \phi_{\theta}^{\alpha, \pm} + \langle a_x, \partial_\xi \rangle \partial_x \phi_{\theta}^{\alpha, \pm} \in \alpha_\mu^{-\frac{1}{2}} \alpha^{-1}S(m_\theta^\infty, g_{\alpha_\mu}) + \alpha_\mu^{-1} S(m_\theta^\infty, g_{\alpha_\mu}),\\ \partial_\xi( \langle a_x, \partial_\xi \rangle \phi_{\theta}^{\alpha, \pm}) = \langle a_{\xi x}, \partial_\xi \rangle \phi_{\theta}^{\alpha, \pm} + \langle a_x, \partial_\xi \rangle \partial_\xi \phi_{\theta}^{\alpha, \pm} \in \alpha_\mu^{-1} (\alpha |\xi|)^{-1} S(m_\theta^\infty, g_{\alpha_\mu}), \end{gather*}
where Bernstein is used for pointwise bounds on $a_{xx}$ and further derivatives as in~\eqref{e:metric_est1}. These estimates are preserved by the mollifier $P_{<\lambda/8}(D_x)$.
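This last assertion is the usual mollification argument; we sketch the unweighted case (a routine check; the weighted bounds follow in the same way, since $m_\theta$ is slowly varying on $g_{\alpha_\mu}$-balls). The operator $P_{<\lambda/8}(D_x)$ acts by convolution in $x$ with a kernel $K_\lambda$ satisfying $\| K_\lambda \|_{L^1_x} \lesssim 1$ uniformly in $\lambda$, and convolution commutes with all the derivatives above, so for each multi-index $k$,
\begin{align*}
\sup_{x} \bigl| \partial_x^{k} \, P_{<\lambda/8}(D_x) \psi(t, x, \xi) \bigr| \le \| K_\lambda \|_{L^1_x} \, \sup_{x} \bigl| \partial_x^{k} \psi(t, x, \xi) \bigr|.
\end{align*}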
The second claim is proved by inspecting the Poisson bracket estimates in the previous lemma:
\begin{align*} \partial_t \{ \tau+ a_{<\alpha_\mu^{-1}}^{\pm}, \phi_{\theta,\lambda}^{\alpha, \pm} \} &= -(\partial_x \partial_t a_{<\alpha_\mu^{-1}} \partial_\xi \tilde{s}_\lambda(\xi)) \phi^{\pm, \alpha}_{\theta, \lambda} - (\partial_x a_{<\alpha_\mu^{-1}} \partial_\xi \tilde{s}_{\lambda} (\xi)) \partial_t \phi_{\theta, \lambda}^{\alpha, \pm} (t, x, \xi)\\ &\quad + s_\lambda(\xi)[ H_{\partial_t a^{\pm}_{<\alpha_\mu^{-1}}}, P_{<\lambda/8}(D_x)] \phi_\theta^{\alpha, \pm} + s_\lambda(\xi)[ H_{ a^{\pm}_{<\alpha_\mu^{-1}}}, P_{<\lambda/8}(D_x)] \partial_t \phi_\theta^{\alpha, \pm}\\ &\in S( \alpha_\mu^{-\frac{1}{2}} (\alpha_\mu^2 \lambda)^{-1}, g_{\alpha_\mu}), \end{align*}
where we have used the Bernstein-type estimates $\partial_x \partial_\xi \partial_t a_{<\alpha_\mu^{-1}}^{\pm} \in S(\alpha_\mu^{-\frac{1}{4}}, g_{\alpha_\mu})$ and $\partial_x^2 \partial_t a_{<\alpha_\mu^{-1}}^{\pm} \in S( \alpha_\mu^{-\frac{3}{2}}|\xi|, g_{\alpha_\mu})$, respectively. \end{proof}
\begin{proposition} \label{p:orthogonality} Suppose $\alpha \ge \alpha_\mu$. If $u$ is a function at frequency $\lambda > \alpha^{-2}$, then
\begin{align*} \sum_{\theta \in \Omega_\alpha} \| \phi_{\theta, \lambda}^{\pm, \alpha}(t,X,D) u \|_{X_\pm}^2 \sim \|u \|_{X_\pm}^2, \end{align*}
where
\begin{align*} \| u\|_{X_\pm}^2 := \| u\|_{L^2_{t,x}}^2 + \| (D_t + A^\pm) u\|_{L^2_{t,x}}^2, \end{align*}
and $A^\pm = a^\pm(t, X, D)$, with $a^\pm$ the half-wave symbols for the mollified wave operator
\begin{align*} p = \tau^2 - 2 g^{0j}_{<\sqrt{\lambda}} \tau \xi_j - g^{ab}_{<\sqrt{\lambda}} \xi_a \xi_b.
\end{align*}
More precisely, we have
\begin{align*} & \sum_{\theta} \| \phi^{\pm, \alpha}_{\theta, \lambda} (t, X, D) u \|_{L^2}^2 \sim \| u\|_{L^2}^2,\\ & \sum_{\theta} \| (D_t + A^{\pm} ) \phi^{\pm, \alpha}_{\theta, \lambda} (t, X, D) u\|^2_{L^2} \sim \| (D_t + A^\pm) u\|_{L^2}^2 + O(\|u\|_{L^2}^2). \end{align*} \end{proposition}
In practice we shall usually need only the ``$\lesssim$'' direction. The following terminology will be convenient for describing the size of various operators.
\begin{definition} We say a collection of operators $\chi_\theta: L^2_x \to L^2_x$ is \emph{square-summable} with respect to $\theta$ if $\sum_\theta \| \chi_\theta u\|^2_{L^2_x} \lesssim \| u\|^2_{L^2_x}$; that is, if $\sum_\theta \chi_\theta^* \chi_\theta$ is bounded on $L^2_x$. \end{definition}
If the operators $\chi_\theta$ depend on $t$, it will be clear from the context whether the implicit constants are uniform or merely square-integrable with respect to $t$; for the latter case we use the term ``$L^2_{t,x}$-square-summable''. From the pseudo-differential calculus in Appendix~\ref{s:microlocal} and the Cotlar-Stein almost orthogonality criterion, one immediately deduces
\begin{lemma} \label{l:square-summable} If the symbols $\phi_\theta\in S ( m_\theta^\infty, g_{\alpha_\mu})$ are supported in $|\xi| \sim \lambda \ge \alpha_\mu^{-2}$, then the operators $P_{\lambda}(D)\phi_\theta (X, D)$ are square-summable with respect to $\theta$. \end{lemma}
\begin{proof}[Proof of Proposition~\ref{p:orthogonality}] Consider first the $L^2$ component. The previous lemma already shows that
\begin{align*} \sum_\theta \| \phi_{\theta, \lambda}^{\pm, \alpha} (t, X,D)u\|_{L^2}^2 \lesssim \|u\|_{L^2}^2.
\end{align*}
For the other direction, we note first that if $u$ is localized at frequency $\lambda$, then
\begin{align*} \|u\|_{L^2} \sim \Bigl\| \sum_{\theta \in \Omega_\alpha} \phi_{\theta, \lambda}^{\pm, \alpha} (t, X, D) u\Bigr\|_{L^2}. \end{align*}
As $\phi_{\theta, \lambda}^{\pm, \alpha} \in S(1, g_{\alpha_\mu})$ and is localized to frequency $\lambda$ in both input and output, the direction ``$\gtrsim$'' follows directly from Lemma~\ref{l:CV}. For the opposite direction, use the first order calculus~\eqref{e:symbol_expansion} to write
\begin{align*} u = P_\lambda(D) \Bigl( \sum_{\theta} \phi_{\theta, \lambda}^{\alpha} \Bigr)^{-1}(t, X, D) \sum_{\theta} \phi_{\theta, \lambda}^{\alpha}(t, X, D) u + P_\lambda(D) r(t,X, D)u, \end{align*}
where $r \in S( (\alpha \lambda)^{-2}, g_{\alpha_\mu})$, and apply Lemma~\ref{l:CV} to obtain
\begin{align*} \|u\|_{L^2} \lesssim \Bigl\| \sum_{\theta} \phi_{\theta, \lambda}^\alpha u\Bigr\|_{L^2} + (\alpha \lambda)^{-2} \| u\|_{L^2}. \end{align*}
(Of course there is nothing to prove if $\sum_{\theta} \phi_{\theta}^{\pm, \alpha} \equiv 1$, but recall that according to our construction of the symbols, for a fixed angular scale $\alpha$ the sum is in general merely bounded above and below.) Hence, writing $\chi_\theta := \phi_{\theta, \lambda}^{\pm, \alpha}(t, X, D)$, we obtain
\begin{align*} \|u\|_{L^2}^2 \lesssim \sum_{\theta, \theta'} \langle \chi_\theta u, \chi_{\theta'} u\rangle. \end{align*}
The pseudo-differential calculus yields the estimates
\begin{align*} \|\chi_\theta^* \chi_{\theta'}\|_{L^2 \to L^2} \lesssim \langle d_{\alpha} (\theta, \theta') \rangle^{-N}, \quad d_{\alpha}(\theta, \theta') := | \alpha^{-1}(\theta- \theta')|.
\end{align*}
Splitting
\begin{align*} \sum_{\theta, \theta'} \langle \chi_{\theta} u, \chi_{\theta'} u\rangle = \sum_{d_{\alpha}(\theta, \theta') \le M} \langle \chi_{\theta} u, \chi_{\theta'} u \rangle + \sum_{d_{\alpha}(\theta, \theta') > M} \langle \chi_{\theta} u, \chi_{\theta'} u\rangle, \end{align*}
for $M$ large enough the second sum may be absorbed into the left side, while the remaining terms are handled by Cauchy-Schwarz. Next consider the half-wave component. Without loss of generality we consider just the ``$+$'' case and set $\phi_{\theta, \lambda}^\alpha := \phi_{\theta, \lambda}^+$, $a := a^+$, $A := A^+$. Writing
\begin{align*} (D_t + A) \phi_{\theta, \lambda}^\alpha = \phi_{\theta, \lambda}^\alpha (D_t + A) + [D_t + A, \phi_{\theta, \lambda}^\alpha], \end{align*}
it suffices by the first part and energy estimates to show that
\begin{align*} \sum_{\theta} \|[D_t + A, \phi_{\theta, \lambda}^\alpha] u\|_{L^2}^2 \lesssim \|u\|_{L^\infty L^2}^2. \end{align*}
Consider first the low-frequency portion $A_\mu$, where $A_\mu$ is the corresponding half-wave operator for the low-frequency metric $g_{<\sqrt{\mu}}$. By the second-order symbol expansion~\eqref{e:symbol_expansion}, the commutator $[D_t + A_\mu, \phi_{\theta, \lambda}^\alpha]$ has symbol
\begin{align*} \frac{1}{i} \{ \tau + a_\mu, \phi_{\theta, \lambda}^\alpha\} - \frac{1}{2} \int_0^1 r_s \, ds, \end{align*}
where
\begin{align*} r_s(t, x, \xi) = &\sum_{j, k} e^{is \langle D_y, D_\eta \rangle} \bigl[ \partial_{\eta_j}\partial_{\eta_k} a_\mu(x, \eta)\partial_{y_j}\partial_{y_k} \phi_{\theta,\lambda}^\alpha(y, \xi) \\ &- \partial_{\eta_j}\partial_{\eta_k}\phi_{\theta,\lambda}^\alpha (x, \eta) \partial_{y_j} \partial_{y_k} a_\mu(y, \xi)\bigr] \Big|_{\substack{y=x\\ \eta=\xi}}.
\end{align*} The Poisson bracket belongs to $S( m_\theta^\infty, g_{\alphapha_\mu})$. Hence $P_\Deltambda (D) \{\tau + a_\mu, \phi_{\theta, \Deltambda}^\alphapha\} (t, X, D)$ is square-summable by the pseudo-differential calculus. The other frequency outputs result from $\{P_{>\Deltambda/16}(D_x) a_\mu, \phi_{\theta, \Deltambda}^{\alphapha}\}(t, X, D) \in \Deltambda^{-N} OPS( m_\theta^\infty, g_{\alphapha_\mu})$, and are therefore square-summable by a brute force bound using Lemma~\ref{l:schur-bound} and the triangle inequality. To evaluate the remainder, note that as \begin{equation}gin{alignat*}{4} &\partial^2_\xi a \in S( |\xi|^{-1}, g_{\alphapha_\mu}), &\quad &\partial_x^2 \phi_{\theta, \Deltambda}^\alphapha \in \alphapha_\mu^{-2} S( m_\theta^\infty, g_{\alphapha_\mu}),\\ &\partial^2_x a \in L^2 S( |\xi|, g_{\alphapha_\mu}), &\quad &\partial_\xi^2 \phi_{\theta, \Deltambda}^\alphapha \in (\alphapha \Deltambda)^{-1} (\alphapha_\mu \Deltambda)^{-1}S(m_\theta^\infty , g_{\alphapha_\mu}), \end{alignat*} by Lemma~\ref{l:gauss_transform} one has \begin{equation}gin{align*} r_s \in (\alphapha_\mu^2\Deltambda)^{-1}S( m_\theta^\infty, g_{\alphapha_\mu}) + (\alphapha^2 \Deltambda)^{-\frac{1}{2}} (\alphapha_\mu^2 \Deltambda)^{-\frac{1}{2}} L^2 S(m^\infty_\theta, g_{\alphapha}). \end{align*} So $r_s(t, X, D)$ are $L^2_{t,x}$-square-summable by considering the operators $P_\Deltambda(D) r_s(t, X, D)$ and $(1-P_\Deltambda(D)) r_s(t, X, D)$ separately as before. It remains to show that \begin{equation}gin{align*} \sum_\theta \| [A -A_\mu, \phi_{\theta,\Deltambda}^\alphapha] u\|_{L^2}^2 \lesssim \| u\|_{L^2}^2. 
\end{align*}
From the computations
\begin{alignat*}{4} \partial_\xi (a-a_\mu) &\in S( \alpha_\mu^2, g_{\alpha_\mu}), &\quad &\partial_x \phi_{\theta, \lambda}^\alpha \in S(1, g_{\alpha_\mu}),\\ \partial_x (a-a_\mu) &\in S( \alpha_\mu^2|\xi|, g_{\alpha_\mu}), &\quad &\partial_\xi \phi_{\theta, \lambda}^\alpha \in S( (\alpha \lambda)^{-1}, g_{\alpha_\mu}), \end{alignat*}
the formula
\begin{align*} a \circ b = ab + \frac{1}{i} \int_0^1 e^{is\langle D_y, D_\eta \rangle} \langle \partial_\eta a(x, \eta), \partial_y b(y, \xi) \rangle \Big|_{\substack{y=x\\ \eta=\xi}} \, ds, \end{align*}
and Lemma~\ref{l:gauss_transform}, it follows that the symbol of $[A-A_\mu, \phi_{\theta, \lambda}^\alpha]$ belongs to $S(\alpha_\mu^2 \alpha^{-1}, g_{\alpha_\mu})$. As before this implies that $[A-A_\mu, \phi_{\theta, \lambda}^\alpha] = \alpha_\mu^2 \alpha^{-1} \chi_\theta$ for some square-summable $\chi_\theta$. \end{proof}
\subsection{Characteristic energy estimates} \label{Char:en:subsec} The purpose of this section is to prove energy estimates for directionally localized half-waves along certain null surfaces. We begin by recalling the usual characteristic energy estimate for a general function $v$. For further details, see for instance Alinhac's book~\cite{Alinhac2010}. Suppose $\Omega$ is a spacetime domain whose boundary $\partial \Omega = \Lambda \cup \Sigma_- \cup \Sigma_+$ decomposes into a null hypersurface $\Lambda$ and the time slices $\Sigma_\pm = \{t = t_\pm\} \cap \Omega$, where $t_- < t_+$. Let $L$ be a geodesic generator for $\Lambda$, extended to a null frame $\{L, \underline{L}, E\}$ on $\Omega$ so that $L, E$ are tangent to $\Lambda$.
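Both characteristic estimates below are instances of the multiplier method, which we briefly recall for the reader's convenience (standard material; see e.g.~\cite{Alinhac2010}). If $T_{\mu\nu} = \partial_\mu v \, \partial_\nu v - \frac{1}{2} g_{\mu\nu}\, \partial^\gamma v \, \partial_\gamma v$ is the stress-energy tensor of $v$ and $Z$ is a multiplier vector field, then
\begin{align*}
\nabla^\mu T_{\mu\nu} = (\Box_g v)\, \partial_\nu v, \qquad \nabla^\mu \bigl( T_{\mu\nu} Z^\nu \bigr) = (\Box_g v)\, Zv + \frac{1}{2} \langle T, \pi^{(Z)} \rangle, \qquad \pi^{(Z)}_{\mu\nu} := \nabla_\mu Z_\nu + \nabla_\nu Z_\mu.
\end{align*}
Integrating the second identity over $\Omega$ and applying the divergence theorem, with $Z = \partial_t$ and $Z = L$ respectively, produces the two estimates that follow.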
By contracting the stress energy tensor \[T = dv \otimes dv - \frac{1}{2} g^{-1}(d v, d v) g\] with $\partial_t$ and applying the divergence theorem, one controls on $\Lambda$ the derivatives of $v$ tangential to $\Lambda$:
\begin{equation} \label{e:CE1} \begin{split} \int_{\Lambda} |Lv|^2 + |Ev|^2 \, d\sigma &\lesssim \int_{\Sigma_\pm} | \nabla_{t, x} v|^2 \, dx + \int_{\Omega} |\Box v| |\partial_t v| + |\langle T, \pi^{(\partial_t)} \rangle| \, dx dt\\ &\lesssim (1+|t_+ - t_-|) \| \nabla_{t,x} v \|_{L^\infty L^2}^2 + \| \Box v\|_{L^1L^2}^2, \end{split} \end{equation}
where $\pi^{(Z)} (X, Y)= \langle \nabla_{X} Z , Y \rangle + \langle \nabla_Y Z, X \rangle$ is the deformation tensor of a vector field $Z$. Suppose further that $v$ is microlocalized so that $Lv$ and $Ev$ are smaller than a generic derivative $\nabla v$. Then by contracting the stress energy tensor instead with $L$, one deduces
\begin{align} \label{e:CE2} \int_{\Lambda} |L v|^2 \, d\sigma \lesssim \int_{\Sigma_\pm}| Lv(t_\pm, x)|^2 + |Ev (t_{\pm}, x)|^2 \, dx + \int_{\Omega} |\Box v| |Lv| + |\langle T, \pi^{(L)} \rangle| \, dxdt. \end{align}
This yields a tighter estimate for $Lv$ along $\Lambda$, since the worst component $T^{LL} = T_{\underline{L}\underline{L}} = |\underline{L}v|^2$ is paired with $\pi^{(L)}_{LL} = 0$. Now factor the symbol \[\tau^2 - 2g_{<\sqrt{\lambda}}^{0j}\tau \xi_j - g_{<\sqrt{\lambda}}^{ab} \xi_a \xi_b = (\tau + a^+)(\tau + a^-).\] In the sequel we redenote $a := a^+$ and let $A = a(t, X, D)$ denote the corresponding half-wave operator. For each direction $\theta$, introduce the associated $+$ null foliation $\Lambda_{\theta} = \Lambda_{\theta}^\lambda$, defined by the Hamiltonian flow for the half-wave symbol $\tau + a$, and let $\{L, \underline{L}, E\}$ denote the associated null frame.
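Equivalently, the leaves $\Lambda_{h, \theta}$ may be viewed as level sets $\{ \Phi_\theta^+ = h \}$ of an optical function solving the eikonal (Hamilton-Jacobi) equation for the half-wave symbol; in this description, which is the one implicit in the optical-function identities used below,
\begin{align*}
\partial_t \Phi_\theta^+ + a\bigl(t, x, \partial_x \Phi_\theta^+\bigr) = 0,
\end{align*}
and the null generator $L$ is proportional to $\partial_t + a_\xi(t, x, \partial_x \Phi_\theta^+) \cdot \partial_x$.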
As before, we parametrize the graph of the flow $(x_0, \xi_0) \mapsto (x_t, \xi_t)$ by the variables $(x_t, \xi_0)$, and write $\xi_{\xi_0}(t, x_t) := \xi_t (x_0, \xi_0)$. Combining~\eqref{e:flow-deriv} with $\alpha = \lambda^{-1/2}$ and the computation~\eqref{e:jacobian_alt_param}, one deduces
\begin{align*} \frac{\partial^k(x_0, \xi_t)}{\partial (x_t, \xi_0)^k } = O(\lambda^{\frac{|k|-1}{2}}), \quad |k| \ge 1. \end{align*}
The operators $L$, $E$ therefore belong to $OPS^1_{1, \frac{3}{4}}$ when restricted to input frequencies $\ge \lambda$. To obtain estimates for $\partial_t \xi_{\theta}$, we use the equation
\begin{align} \label{e:xi_eqn} \partial_t \xi_{\theta} = -\langle a_{\xi} (t, x, \xi_{\theta}), \partial_x \xi_{\theta}\rangle - a_x (t, x, \xi_{\theta}), \end{align}
obtained by differentiating the definition with respect to $t$, to deduce
\begin{align} \label{e:xi_tderivs} |\partial_x^k \partial_t \xi_{\theta}| \lesssim \lambda^{\frac{k}{2}} + \lambda^{\frac{k-1}{2}} \lambda^{\frac{1}{4}}, \quad k \ge 1, \end{align}
where we used the Bernstein-type estimate~\eqref{e:metric_est1} to bound $a_{xx}$ and higher order $x$ derivatives (one could alternatively replace the $\lambda^{\frac{1}{4}}$ by $M(\|\partial^2 g(t)\|_{L^\infty_x})$). The main result of this section is
\begin{proposition} \label{p:char-energy} Let $u = P_\lambda(D_x)u$ be supported in $|t| \le 2$ and satisfy \[ \| \nabla_{t, x} u\|_{L^2} + \| \Box_{g_{<\sqrt{\lambda}}} u \|_{L^2} < \infty, \] and let $u = u^+ + u^-$ be the half-wave decomposition from Corollary~\ref{c:half-wave}. Suppose $\phi_{\theta, \lambda}^\alpha$ is a pseudo-differential localization operator of the form~\eqref{e:pd-loc}, defined via the $+$ flow of the metric $g_{<\sqrt{\mu}}$ with $\mu \le \lambda$. Assume that $\alpha \ge \alpha_\mu := \mu^{-1/2}$, and set $\beta := |\theta-\theta'|$.
\begin{enumerate}
\item If $L = L_{\theta'}^+$ is the null generator for the foliation $\Lambda_{\theta'}$, then
\begin{align*} \sup_h \int_{\Lambda_{h, \theta'}} |L \phi_{\theta, \lambda}^{\alpha} u^+|^2 \, d\sigma \lesssim (\alpha + \beta)^2 \| \chi_\theta^{1} u^+\|_{L^2}^2. \end{align*}
\item If $\beta \ge C \alpha$ for sufficiently large $C>0$, then for any $\varepsilon > 0$ one has
\begin{align*} \sup_h \int_{\Lambda_{h, \theta'}^\varepsilon} | \phi_{\theta, \lambda}^{\alpha} u^+|^2 \, dx dt \lesssim (\varepsilon +(\alpha\lambda)^{-2})\beta^{-2}\| \chi_\theta^{0} u^+\|_{L^2}^2, \end{align*}
\end{enumerate}
where $\Lambda_{h, \theta'}^\varepsilon$ denotes an $\varepsilon$-neighborhood of $\Lambda_{h, \theta'}$, and $\chi^j_\theta$ are operators such that
\begin{align} \label{sq:sum:chi} \sum_{\theta \in \Omega_\alpha} \| \chi_\theta^{j} u^+\|_{L^2}^2 \lesssim \lambda^{2(j-1)}\bigl(\| \nabla u\|_{L^2}^2 + \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2}^2\bigr). \end{align}
\end{proposition}
These are variable-coefficient analogues of the estimates on null hyperplanes proved in \cite[Section 3]{Tat}, and they shall play an essential role in the proof of the algebra property. The small-angle case $\beta \sim \alpha$ arises when studying low-modulation outputs, while the transversal case $\beta \sim 1$ is used for high-modulation outputs. In view of the relation $a^+(t, x, -\xi) = - a^-(t, x, \xi)$, the null foliations and generators are related by $\Lambda^-_{-\theta} = \Lambda^+_{\theta}$, $L_{-\theta}^- = -L_\theta^+$. Indeed the optical functions satisfy $\Phi_{-\theta}^- = -\Phi_\theta^+$. Consequently one has
\begin{corollary} \label{c:char-energy} Assume the setup of the previous proposition.
\begin{enumerate}
\item If $L = L_{-\theta'}^-$ is the null generator for the $-$ foliation $\Lambda^-_{-\theta'}$, then
\begin{align*} \sup_h \int_{\Lambda^-_{h, -\theta'}} |L \phi_{\theta, \lambda}^{\alpha} u^+|^2 \, d\sigma \lesssim (\alpha + \beta)^2 \| \chi_\theta^{1} u^+\|_{L^2}^2. \end{align*}
\item If $\beta \ge C \alpha$ for sufficiently large $C>0$, then for any $\varepsilon > 0$ one has
\begin{align*} \sup_h \int_{(\Lambda_{h, -\theta'}^-)^\varepsilon} | \phi_{\theta, \lambda}^{\alpha} u^+|^2 \, dx dt \lesssim (\varepsilon +(\alpha\lambda)^{-2})\beta^{-2}\| \chi_\theta^{0} u^+\|_{L^2}^2. \end{align*}
\end{enumerate}
\end{corollary}
Later we shall consider bilinear estimates of the form $\|(\phi_{\theta,\lambda}^\alpha u) v_T\|_{L^2}$ and $\| Q(\phi_{\theta, \lambda} u, v_T)\|_{L^2}$, where $v_T$ is a frequency-$\mu$ wave packet concentrating in some tube $T \in \mathcal{T}_{\mu, \pm \theta'}^\pm$. As each $T$ is contained in a union $\bigcup_{|h-h_0| \lesssim \mu^{-1}} \Lambda_{h, \pm \theta'}^\pm$ of null surfaces, we deduce
\begin{corollary} \label{c:char-energy-tubes} Assume the setup of the previous corollary, and let $T^\pm \in \mathcal{T}_{\pm\theta', \mu}^\pm$ be a frequency-$\mu$ ``tube'' with initial direction $\pm \theta'$.
\begin{enumerate}
\item If $L^\pm = L_{\pm\theta'}^\pm$ is the null generator for the foliation $\Lambda^{\pm}_{\pm\theta'}$, then
\begin{align*} \|L^\pm \phi_{\theta,\lambda}^\alpha u^+ \|_{L^2(T^\pm)} \lesssim \mu^{-\frac{1}{2}} (\alpha + \beta) \| \chi_{\theta}^1 u^+\|_{L^2}. \end{align*}
\item If $\beta \ge C \alpha$ for sufficiently large $C>0$, then
\begin{align*} \| \phi_{\theta, \lambda}^{\alpha} u^+\|_{L^2(T^\pm)} &\lesssim \mu^{-\frac{1}{2}} \beta^{-1} \| \chi_\theta^0 u^+\|_{L^2}.
\end{align*}
\end{enumerate}
\end{corollary}
\begin{proof}[Proof of Proposition~\ref{p:char-energy}, part (1)] We begin by recording the symbol estimates that underpin the gain or loss in the angular separation. By a harmless abuse of notation we ignore the prefactor $\sigma$ in $L$ (see Lemma~\ref{l:half-fullwave-bichar}) and redenote \[L = D_t + \partial_\xi a (t, x, \xi_{\theta'} (t, x)) \cdot D_x = D_t + \tilde{A}.\] The difference between $L$ and the half-wave operator $D_t + A$ is
\begin{align*} (D_t + A) - L = A - \tilde{A}, \end{align*}
whose symbol is
\begin{align} \label{e:char-e-symbolest1} a - \tilde{a} = a(t,x,\xi) - \langle a_\xi(t, x, \xi_{\theta'}(t, x)), \xi \rangle \sim |\xi| \angle( \wht{\xi}, \wht{\xi_{\theta'}})^2. \end{align}
Indeed, put $\omega:= \widehat{\xi_{\theta'} (t,x)}$, and let $f(\xi) := a(t, x, \xi)$, which is $1$-homogeneous and whose Hessian $f_{\xi\xi}(\xi)$ is positive definite on the orthogonal complement of $\xi$. When $\angle(\wht{\xi}, -\omega) \ge c \min_{|\omega'| = 1} f(\omega')$ for a small constant $c >0$, one computes
\begin{align*} f(\xi) - f_{\xi}(\omega) \cdot \xi &= |\xi| \langle f_{\xi} (\widehat{\xi}) - f_{\xi}(\omega), \widehat{\xi} \rangle\\ &= \int_0^1 |\xi| \langle f_{\xi\xi}( \omega + s(\widehat{\xi} - \omega)) \, (\widehat{\xi}-\omega), \widehat{\xi} - \omega - s(\widehat{\xi} - \omega) \rangle \, ds\\ &= |\xi| \int_0^1 (1-s)\langle f_{\xi\xi}( \omega + s(\widehat{\xi}-\omega)) \, (\widehat{\xi} -\omega), \widehat{\xi}-\omega \rangle \, ds\\ &\sim |\xi| \angle (\widehat{\xi}, \omega)^2, \end{align*}
where the second line uses that $f_{\xi\xi}(\eta)\, \eta = 0$ by homogeneity. In the remaining region, where $\wht{\xi}$ and $\omega$ are nearly antipodal,
\begin{align*} f(\xi) - f_\xi(\omega) \cdot \xi &= f(-\xi) - f_\xi(\omega) \cdot (-\xi) - 2 f_\xi(\omega) \cdot \xi\\ &= 2|\xi| ( f(\omega) - O( \angle (\wht{\xi}, -\omega) ) )\\ &\gtrsim |\xi|.
\end{align*}
Further, in view of the identity~\eqref{e:E-symbol}, the symbol of $E$ is $1$-homogeneous and satisfies
\begin{align} \label{e:char-e-symbolest2} e(t, x, \xi) = |\xi|\langle e(t, x), \wht{\xi} \rangle = |\xi| \langle e(t, x), \wht{\xi} - \wht{\xi_{\theta'}} \rangle. \end{align}
Recall that the symbol $\phi_{\theta, \lambda}^{\alpha, +}$ is essentially supported in the sector $\{ \xi : \angle( \xi, \xi^\mu_{\theta}(t, x) ) \lesssim \alpha\}$, where $\xi_\theta^{\mu}$ is defined by the metric $g_{<\sqrt{\mu}}$ mollified at frequency $\mu$. Precisely, we can decompose
\begin{align*} \phi_{\theta, \lambda}^{\alpha, +} = \chi_{\theta}^{\alpha} + r_\theta, \end{align*}
where $\chi_{\theta}^{\alpha}$ is essentially $\phi_\theta^{\alpha, +}(t, x, \xi) s_\lambda(\xi)$ before mollification in the $x$ variable (see~\eqref{e:pd-loc}) and has the required support, while $\|r_\theta\|_{L^2\to L^2} = O(\lambda^{-\infty})$. Thus, since $|\xi_{\theta}(t,x) - \xi_{\theta'}(t,x)| \sim |\theta -\theta'|$ and $|\wht{\xi_\theta} - \wht{\xi_\theta^\mu}| \lesssim \mu^{-1/2} \le \alpha$, one has
\begin{align} \label{e:char-e-symbolest3} |(a - \tilde{a}) \phi_{\theta, \lambda}^\alpha| \lesssim (\alpha+|\theta -\theta'|)^2\lambda, \quad |e \phi_{\theta, \lambda}^\alpha| \lesssim (\alpha+ |\theta-\theta'|)\lambda. \end{align}
On the other hand, if $|\theta-\theta'| \ge C \alpha$ for some large $C$ such that $C\alpha$ dominates the angular width of the cutoff $\chi_{\theta}^\alpha$, then the symbol $a - \tilde{a}$ is microelliptic:
\begin{align} \label{e:char-e-symbolest4} |\wht{\xi} - \wht{\xi_\theta^\mu}| \lesssim \alpha \ \Rightarrow \ |a - \tilde{a}| \sim (\alpha+|\theta -\theta'|)^2\lambda, \quad |e| \lesssim (\alpha+ |\theta-\theta'|)\lambda.
\end{align}
Without loss of generality we prove the estimate on the surface $\Lambda_{0, \theta'}$. In the sequel we write $\Box := \Box_{g_{<\sqrt{\lambda}}}$. We apply the estimate~\eqref{e:CE2} to the spacetime region
\begin{align*} \Omega = \bigcup_{h \le 0} \Lambda_{h, \theta'} \cap \{|t| \le 5\}, \end{align*}
whose boundary is
\begin{align*} \partial \Omega = (\Lambda_{0, \theta'} \cap \{ |t| \le 5\}) \cup \{t = \pm 5\}. \end{align*}
In terms of the null frame, we have
\begin{align*} \int_{\Lambda_{0, \theta'}} |L v|^2 \, d\sigma \lesssim \sum_{\pm} \int |Lv(\pm 5, x)|^2 + |E v(\pm 5, x)|^2 \,dx + \int_{\Omega} |\Box v \, Lv| \, dx dt + \int_{\Omega} | \langle T, \pi \rangle| \, dx dt, \end{align*}
where $\pi (X, Y) = \langle \nabla_X L, Y \rangle + \langle \nabla_Y L, X \rangle$ is the deformation tensor for $L$. Put $v = \phi_{\theta,\lambda}^{\alpha}u^+$. The boundary terms vanish since the half-wave $u^+$ is assumed to be supported in $|t| \le 3$. For the other terms, write
\begin{align*} \langle T, \pi \rangle &= T_{\underline{L}\underline{L}} \pi_{LL} + T_{LL} \pi_{\underline{L} \underline{L}} + 2 T_{L\underline{L}} \pi_{\underline{L} L} + 2 T_{\underline{L} E} \pi_{LE} + 2 T_{LE} \pi_{\underline{L}E} + T_{EE} \pi_{EE}. \end{align*}
The components of the deformation tensor are
\begin{align*} \pi_{LL} &= 2 \langle \nabla_{L} L, L \rangle = 0,\\ \pi_{LE} &= \langle \nabla_{L} L, E \rangle + \langle \nabla_E L, L \rangle = 0,\\ \pi_{L \underline{L}} &= \langle \nabla_{L} L, \underline{L} \rangle + \langle \nabla_{\underline{L}} L, L \rangle = 0,\\ \pi_{\underline{L} E} &= O(1),\\ \pi_{\underline{L} \underline{L}} &= O(1),\\ \pi_{EE} &= O(1); \end{align*}
the last three are a consequence of the derivative bounds in Lemma~\ref{l:optical_bounds} for the optical function.
Thus \begin{align*} |\langle T, \pi\rangle| \lesssim |Lv L v| + |Lv E v| + |Ev E v |. \end{align*} Altogether we obtain \begin{align*} \int_{\Lambda_{0, \theta'}} |L\phi_{\theta, \lambda}^{\alpha} u^+|^2 \, d\sigma \lesssim \| \Box \phi_{\theta, \lambda}^{\alpha} u^+\|_{L^2} \| L \phi^\alpha_{\theta, \lambda} u^+\|_{L^2} + \| L \phi^\alpha_{\theta, \lambda} u^+\|_{L^2}^2 + \| E \phi^\alpha_{\theta,\lambda} u^+\|_{L^2}^2. \end{align*} The claim now follows from the next lemma. \end{proof} \begin{lemma} \label{e:char-e-commutator1} If $\{L, \underline{L}, E\}$ is the null frame for the foliation $\Lambda_{\theta'}$, then \begin{align*} &L\phi_{\theta, \lambda}^\alpha = (D_t + A) \phi_{\theta, \lambda}^\alpha + (\alpha+|\theta-\theta'|)^2 \lambda \chi_\theta,\\ &E \phi_{\theta, \lambda}^\alpha = (\alpha +|\theta-\theta'|)\lambda \chi_\theta, \end{align*} where the $\chi_\theta$ are $L^2_x$-square-summable with constants uniform in time. Also \begin{align*} \sum_{\theta \in \Omega_\alpha} \| \Box \phi^\alpha_{\theta, \lambda} u^+ \|_{L^2}^2 \lesssim \| \nabla_{t,x}u\|_{L^2}^2 + \| \Box u\|_{L^2}^2. \end{align*} \end{lemma} \begin{proof} The second term on the right side of $L\phi_{\theta,\lambda}^\alpha$ is \begin{align*} (\tilde{A} - A) \phi_{\theta,\lambda}^\alpha = P_\lambda (\tilde{A} -A) \phi_{\theta, \lambda} + (\tilde{a}_{>\lambda/16}- a_{>\lambda/16})(t, X, D) \phi_{\theta, \lambda}. \end{align*} Since the first term is localized in output frequency, the symbol estimates~\eqref{e:char-e-symbolest3} and the pseudo-differential calculus (Lemmas~\ref{l:gauss_transform} and \ref{l:CV}) imply that the first term is $\alpha^2\lambda \chi_\theta$ for some square-summable operator $\chi_\theta$.
On the other hand, since $a, \tilde{a} \in S^1_{1, 3/4}$ in the region $|\xi| \ge \lambda$, one has $a_{>\lambda/16}, \tilde{a}_{>\lambda/16} \in \bigcap_N \lambda^{-N} S^1_{1, 3/4}$, so \begin{align*} \| (\tilde{a}_{>\lambda/16} - a_{>\lambda/16})(t, X, D) P_\lambda(D)\|_{L^2 \to L^2} \lesssim \lambda^{-N} \text{ for any } N. \end{align*} The estimate for $E\phi_{\theta, \lambda}^\alpha$ is similar, writing $E \phi_{\theta, \lambda}^\alpha = P_\lambda(D) E \phi_{\theta, \lambda}^\alpha + R \phi_{\theta, \lambda}^\alpha$, where $\|R\|_{L^2 \to L^2} = O(\lambda^{-N})$ for any $N$, and using~\eqref{e:char-e-symbolest3} for the main term. As the $\phi_{\theta,\lambda}^{\alpha}$ are square-summable in $\theta$, it suffices to prove that \begin{align*} \sum_\theta \| [\Box, \phi_{\theta, \lambda}^{\alpha}] u^+\|_{L^2}^2 \lesssim \| \nabla_{t, x} u\|_{L^2}^2 + \| \Box u\|_{L^2}^2. \end{align*} To this end we note first of all that \begin{align*} \sum_\theta \| [\Box - (D_t + A^-)(D_t + A^+) ] \phi_{\theta, \lambda}^\alpha u^+\|_{L^2}^2 \lesssim \sum_{\theta} \| \nabla_{t,x} \phi_{\theta, \lambda}^\alpha u^+\|_{L^2}^2 \lesssim \|\nabla_{t,x} u\|_{L^2}^2, \end{align*} and write \begin{align*} &[(D_t + A^-) (D_t + A^+), \phi_{\theta, \lambda}^\alpha ] = (D_t+A^-) [D_t + A^+, \phi_{\theta, \lambda}^\alpha] + [D_t + A^-, \phi_{\theta, \lambda}^\alpha] (D_t+A^+). \end{align*} For the first term we split $A^+ = A_\mu + A^+ - A_\mu$, where $A_\mu$ is the corresponding $+$ operator for the low-frequency metric $g_{<\sqrt{\mu}}$.
Recall from the proof of Proposition~\ref{p:orthogonality} that \[[D_t + A_\mu, \phi_{\theta, \lambda}^\alpha] = \frac{1}{i} \{\tau + a_\mu, \phi_{\theta, \lambda}^\alpha \}(t, X, D) + r(t, X, D), \] where $\{\tau+a_\mu, \phi_{\theta, \lambda}^\alpha\} \in S( m_\theta^\infty, g_{\alpha_\mu})$, $r \in (\alpha_\mu^2 \lambda)^{-1} L^2 S ( m_\theta^\infty, g_{\alpha_\mu})$, $P_{>\lambda/8}(D_x) r \in \lambda^{-N} L^2 S (m_\theta^\infty)$ for any $N$, and $\partial_t r \in \mu S( m_\theta^\infty, g_{\alpha_\mu})$. The last claim uses the modified computations \begin{alignat*}{4} &\partial^2_\xi \partial_t a \in S( |\xi|^{-1}, g_{\alpha_\mu}), &\quad &\partial_x^2 \partial_t \phi_{\theta, \lambda}^\alpha \in \alpha_\mu^{-4} S( m_\theta^\infty , g_{\alpha_\mu}),\\ &\partial^2_x \partial_t a \in S( \mu^{\frac{3}{4}}|\xi|, g_{\alpha_\mu}), &\quad &\partial_\xi^2 \partial_t \phi_{\theta, \lambda}^\alpha \in \alpha^{-1} (\alpha_\mu \lambda)^{-2} S(m_\theta^\infty , g_{\alpha_\mu}). \end{alignat*} Then \begin{align*} (D_t + A^-)\{ \tau + a_\mu, \phi_{\theta, \lambda}^\alpha\} (t, X, D) &= (D_t \{ \tau + a_\mu, \phi_{\theta, \lambda}^\alpha\}) (t, X, D) + \{\tau + a_\mu, \phi_{\theta, \lambda}^\alpha\}(t, X, D) D_t \\ &+ A^- \{ \tau + a_\mu, \phi_{\theta, \lambda}^\alpha \} (t, X, D). \end{align*} By Lemma~\ref{l:t-derivs} the first term belongs to $\alpha^{-1} OPS( m_\theta^\infty, g_{\alpha_\mu})$. Modulo a negligible remainder we may restrict each term to output frequency $\lambda$, and write \begin{align*} (D_t+A^-) \{\tau + a_\mu, \phi_{\theta, \lambda}^\alpha\} (t, X, D) = \lambda T^1_\theta + T^2_\theta D_t, \end{align*} where $T^1_\theta$ and $T^2_\theta$ are square-summable.
Similarly, \begin{align*} (D_t+A^-) r(t, X, D) &= (D_tr)(t, X, D) + r(t,X,D)D_t + A^- r(t, X, D)\\ &= \mu T^1_\theta + f(t) T^2_\theta D_t + \lambda T^3_\theta, \end{align*} where $f(t) = M(\|\partial^2 g\|_{L^\infty}) \in L^2_t$ and the $T^j_\theta$ are square-summable. This is acceptable in view of the energy estimate $\| \nabla u\|_{L^\infty L^2} \lesssim \| \nabla u\|_{L^2} + \| \Box u\|_{L^2}$. For the term $(D_t+A^-)[A -A_\mu, \phi_{\theta, \lambda}^{\alpha}]$, we simply recall from the proof of Proposition~\ref{p:orthogonality} that the commutator belongs to $\alpha_\mu^2\alpha^{-1}OPS(m_\theta^\infty, g_{\alpha_\mu})$ and outputs essentially at frequency $\lambda$, so that $A^{-} [A-A_\mu, \phi_{\theta, \lambda}^\alpha] = \lambda \chi_\theta$ for some square-summable $\chi_\theta$, and \begin{align*} D_t [A-A_\mu, \phi_{\theta, \lambda}^\alpha ] = [\partial_t A - \partial_t A_\mu, \phi_{\theta, \lambda}^\alpha] +[A-A_\mu, D_t \phi_{\theta, \lambda}^\alpha] + [A-A_\mu, \phi_{\theta, \lambda}^\alpha]D_t \end{align*} is acceptable as well. This shows that \begin{align*} \sum_{\theta} \| (D_t+A^-) [D_t + A^+, \phi_{\theta, \lambda}^\alpha] u^+\|_{L^2}^2 \lesssim \| \nabla u^+\|_{L^2}^2 + \| \Box u^+\|_{L^2}^2.
\end{align*} Next we write \begin{align*} [D_t + A^-, \phi_{\theta, \lambda}^\alpha] = (D_t\phi_{\theta, \lambda}^\alpha)(t, X, D) + [A^-, \phi_{\theta, \lambda}^\alpha] = \alpha^{-1} \chi_\theta \end{align*} for some square-summable $\chi_\theta$, so that \begin{align*} \sum_\theta \| [D_t + A^-, \phi_{\theta, \lambda}^\alpha] (D_t+A^+) u^+\|_{L^2}^2 \lesssim \alpha^{-2} \| (D_t+A^+) u^+\|_{L^2}^2 \lesssim \| \nabla u^+\|_{L^2}^2 + \| \Box u^+\|_{L^2}^2. \end{align*} Finally, recall from Proposition~\ref{p:half-wave} that $\|\nabla u^+ \|_{L^2} + \| \Box u^+\|_{L^2} \lesssim \| \nabla u\|_{L^2} + \| \Box u\|_{L^2}.$ \end{proof} For future reference, we collect the key estimates for $\phi_{\theta, \lambda}^\alpha u$ in the following \begin{corollary} If $\{L, \underline{L}, E\}$ is the null frame for the $+$ foliation $\Lambda_{\theta'}$, and $|\theta-\theta'| \sim \alpha \ge \alpha_\mu$, then: \begin{align} \label{O:est:phi} &\bigl(\sum_\theta \| \phi_{\theta, \lambda}^\alpha u \|_{L^\infty L^2}^2 \bigr)^{\frac{1}{2}} \lesssim \| u\|_{X_+}\\ \label{Box:est:phi} & \bigl(\sum_{\theta} \| \Box_{g_{<\sqrt{\lambda}}} \phi_{\theta, \lambda}^\alpha u\|_{L^2 L^2}^2\bigr)^{\frac{1}{2}} \lesssim \| \nabla u\|_{L^2} + \| \Box_{g_{<\sqrt{\lambda}}} u\|_{L^2} \\ \label{E:est:phi} & \bigl(\sum_\theta \| E \phi_{\theta, \lambda}^\alpha u\|_{L^2_x}^2\bigr)^{\frac{1}{2}} \lesssim \lambda \alpha \| u\|_{X_+}\\ \label{L:est:phi} & \bigl(\sum_{\theta} \| L \phi_{\theta, \lambda}^{\alpha} u\|^2_{L^2 L^2} \bigr)^{\frac{1}{2}} \lesssim \lambda \alpha^2 \| u\|_{X_+}\\ \label{barL:est:phi} & \bigl(\sum_\theta \| \nabla_{t,x} \phi_{\theta, \lambda}^\alpha u\|_{L^\infty L^2}^2 \bigr)^{\frac{1}{2}}\lesssim \lambda \| u\|_{X_+} \end{align} \begin{remark} For split metrics $g^{0j} = 0$, the same estimates hold with the
replacements $\Lambda_{\theta'} \to \Lambda_{-\theta'}^{-}$, $X_+ \to X_-$, and $\alpha \sim |\theta + \theta'|$. \end{remark} \end{corollary} \begin{proof} \eqref{O:est:phi} follows from the energy estimate on bounded time intervals $\| v\|_{L^\infty L^2} \lesssim \| v\|_{L^2} + \| (D_t+A^\pm) v\|_{L^2}$. For the estimate~\eqref{barL:est:phi} we simply note that by Lemma~\ref{l:t-derivs}, the commutator $[\nabla_{t, x}, \, \lambda^{-1} \phi_{\theta, \lambda}^\alpha ] = \lambda^{-1} (\nabla_{t,x} \phi_{\theta, \lambda}^\alpha)$ is square-summable, and apply~\eqref{O:est:phi}. Finally, \eqref{Box:est:phi}, \eqref{E:est:phi}, and \eqref{L:est:phi} were proved in the preceding lemma. \end{proof} We turn to the $L^2$ estimate in Proposition~\ref{p:char-energy}, which is slightly more involved. Roughly speaking, the angular separation allows one to microlocally invert the vector field $L$ for the foliation $\Lambda_{\theta'}$ on the support of the cutoff $\phi_{\theta, \lambda}^\alpha$. A similar ellipticity argument was employed previously in~\cite[Lemma 5.2]{geba2008gradient}. \begin{proof}[Proof of Prop.~\ref{p:char-energy}, part (2)] Recall from~\eqref{e:char-e-symbolest1} and \eqref{e:char-e-symbolest4} that the difference $D_t + A - L = A - \tilde{A}$ satisfies \begin{align} \label{e:microelliptic} q := a - \tilde{a} \approx \beta^2 \lambda \end{align} on a neighborhood of the support of $\chi_{\theta}^{\alpha}$. We construct a microlocal inverse for $A - \tilde{A}$. Let $\tilde{\chi}_{\theta}^{\alpha}$ be a slightly wider version of $\chi_{\theta}^{\alpha}$ supported where~\eqref{e:microelliptic} holds. Define \begin{align*} \tilde{l}(t, x, \xi) &:= q^{-1} \tilde{\chi}_{\theta}^{\alpha}, \end{align*} and \begin{align*} \tilde{L} := P_\lambda(D) \tilde{l}(t, x, D).
\end{align*} \begin{lemma} \label{l:microelliptic-regularity} On the support of $\tilde{\chi}_{\theta}^\alpha$, one has \begin{gather*} |(\beta \lambda\partial_\xi)^m \langle \xi, \partial_\xi\rangle^{m'} q| + |(\alpha_\lambda \partial_x)^k (\beta \partial_x)(\beta \lambda \partial_\xi)^m \langle \xi, \partial_\xi\rangle^{m'} q| \lesssim \beta^2\lambda, \quad \alpha_\lambda := \lambda^{-\frac{1}{2}},\\ |(\alpha_\lambda\partial_x)^k (\beta \lambda \partial_\xi)^m \langle \xi, \partial_\xi\rangle^{m'} \partial_t q| \lesssim \beta^{-1} (\beta^2\lambda). \end{gather*} \end{lemma} In conjunction with $\tilde{\chi}_{\theta}^\alpha \in S^1_\alpha( m_\theta^\infty, g_{\alpha_\mu})$, this quickly leads to \begin{corollary} \label{c:parametrix_symbol} The symbol $\tilde{l}$ satisfies \begin{align*} \tilde{l}, \ \beta \partial_x \tilde{l}, \ \alpha \lambda \partial_\xi \tilde{l} \in (\beta^2 \lambda)^{-1}S( m_\theta^\infty, g_{\alpha_\lambda}), \end{align*} where as before $ m_\theta(t, x, \xi) := \langle \alpha^{-1} | \widehat{\xi} - \widehat{ \xi_\theta^{\mu}} (t, x) | \rangle^{-1}$. \end{corollary} \begin{remark} The worse regularity of $\tilde{l}$ with respect to $\xi$ is due to the factor $\tilde{\chi}_\theta^\alpha$. \end{remark} \begin{proof}[Proof of Lemma] For simplicity of notation we suppress the $t, x$ variables in the arguments of $a$.
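Before estimating derivatives, it may help to make explicit why $q$ is small near the ray through $\xi_{\theta'}$. The following sketch uses only the $1$-homogeneity of $a$ (via Euler's identity) and the form $q(\xi) = a(\xi) - \langle a_\xi(\xi_{\theta'}), \xi\rangle$ appearing in the computations below:

```latex
% Euler's identity for the 1-homogeneous symbol a:  a(\eta) = \langle a_\xi(\eta), \eta \rangle.
\begin{align*}
q(\xi_{\theta'}) &= a(\xi_{\theta'}) - \langle a_\xi(\xi_{\theta'}), \xi_{\theta'} \rangle = 0,\\
\partial_\xi q(\xi) &= a_\xi(\xi) - a_\xi(\xi_{\theta'}),
\qquad \text{so } \partial_\xi q(\xi_{\theta'}) = 0.
\end{align*}
```

Since $a_\xi$ is $0$-homogeneous, both identities persist along the ray $\{s\,\xi_{\theta'} : s > 0\}$, and a second-order Taylor expansion in the angular variable gives $|q| \lesssim |\wht{\xi} - \wht{\xi_{\theta'}}|^2\, |\xi|$, consistent with~\eqref{e:microelliptic}.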
We have \begin{align*} \partial_x^k q (\xi) &= \partial_x^k a(\xi) - \langle \partial_x^k a_{\xi} (\xi_{\theta'}), \xi \rangle - \sum_{j \ge 1} \langle \partial_x^{k-j} a_{\xi\xi}(\xi_{\theta'})(\partial_x^j \xi_{\theta'}), \xi - |\xi| \widehat{\xi_{\theta'}} \rangle\\ &- \sum_{j\ge 1} \bigl[\partial_x^{k-j} \partial_\xi^2 a_{\xi}(\xi_{\theta'}) B_2(\partial_x \xi_{\theta'}) + \cdots + \partial_x^{k-j}\partial_\xi^j a_{\xi} (\xi_{\theta'}) B_j(\partial_x \xi_{\theta'})\bigr], \end{align*} where $B_j(\partial_x \xi_{\theta'})$ denotes a $j$-linear quantity in $\partial_x \xi_{\theta'}$ and its higher $x$ derivatives such that the total order of the derivatives equals $j$. This yields \begin{align*} |\partial_x q| &\lesssim \beta^2 \lambda + \beta \lambda, \end{align*} and when $k \ge 2$ \begin{align*} |\partial_x^k q| &\lesssim \lambda^{\frac{k-2}{2}} f(t) \beta^2\lambda + \sum_{j =1}^{k-2} \lambda^{\frac{k-j-2}{2}} f(t) ( \lambda^{\frac{j-1}{2}} \beta \lambda + \lambda^{\max(\frac{j-2}{2},0)} \lambda)\\ &+ \lambda^{\frac{k-1}{2}} \beta \lambda + \lambda^{\max(\frac{k-2}{2}, 0)} \lambda, \end{align*} where $f := M(\|\partial^2 g\|_{L^\infty_x}) \in L^2_t$. Since the metric is localized to frequencies $<\sqrt{\lambda}$, we may replace $f(t)$ by the uniform bound $\lambda^{\frac{1}{4}}$ as in~\eqref{e:metric_est1}, so that the dominant term when $\lambda \ge \alpha^{-2}$ is $\alpha_\lambda^{1-k} \beta \lambda = \alpha_\lambda^{1-k} \beta^{-1} (\beta^2 \lambda)$.
The estimates for the $\xi$ derivatives follow easily from the explicit form of $q$, and similar considerations handle \begin{align*} \partial_t q = a_t(\xi) - \langle a_{t\xi}(\xi_{\theta'}), \xi \rangle - \langle a_{\xi\xi}(\xi_{\theta'}) \partial_t \xi_{\theta'}, \xi \rangle. \end{align*} \end{proof} Some basic properties of the parametrix $\tilde{L}$ are recorded in \begin{lemma} \label{l:parametrix_bounds} The operator $\tilde{L}$ satisfies \begin{gather*} \| \tilde{L}\|_{L^2 \to L^2} \lesssim (\beta^2 \lambda)^{-1},\\ \|P_\lambda(D) [D_t + A, \tilde{L}]P_\lambda(D) \|_{L^2 \to L^2} \lesssim (\beta^2 \lambda)^{-1},\\ (A - \tilde{A} ) \tilde{L} - \tilde{\chi}_{\theta}^{\alpha, +}(t, X, D) = (\alpha \beta \lambda)^{-1} \Phi_\theta, \end{gather*} where the operators $P_\lambda(D)\Phi_\theta: L^2 \to L^2$ are square-summable in $\theta$. \end{lemma} \begin{proof} The first follows directly from the estimates for the symbol $\tilde{l}$ and Lemma~\ref{l:CV}. For the third estimate, use the first-order symbol expansion~\eqref{e:symbol_expansion} and Lemmas~\ref{l:microelliptic-regularity}, \ref{l:gauss_transform} to see that \begin{align*} (A - \tilde{A}) \tilde{L} = \tilde{\chi}_{\theta}^{\alpha, +} (t, X, D) + r(t, X, D), \quad r \in (\alpha \beta \lambda)^{-1}S( m_\theta^\infty, g_{\alpha_\mu}), \end{align*} and apply Lemma~\ref{l:CV} to the remainder. For the commutator estimate we essentially follow the proof of \cite[Lemma 5.2]{geba2008gradient}.
From the second-order symbol expansion~\eqref{e:symbol_expansion}, the symbol of the commutator is \begin{gather} \frac{1}{i} \{ \tau + a, \tilde{l} \} -\frac{1}{2} \int_0^1 r_s(t, x, \xi) \, ds, \nonumber\\ r_s(t, x, \xi) = \sum_{j, k} e^{is \langle D_y, D_\eta \rangle} [ \partial_{\eta_j}\partial_{\eta_k} a(x, \eta)\partial_{y_j}\partial_{y_k}\tilde{l}(y, \xi) - \partial_{\eta_j}\partial_{\eta_k}\tilde{l}(x, \eta) \partial_{y_j} \partial_{y_k} a(y, \xi)] |_{\stackrel{y=x}{\eta=\xi}} \label{e:order2_commutator}. \end{gather} We claim that \[r_s \in L^2 S( (\beta^2\lambda)^{-1} (\alpha^2_\mu \lambda)^{-1}, g_{\alpha_\lambda}) + S( ((\alpha_\mu^2 \lambda)^{-1} + (\beta^2\lambda)^{-\frac{1}{2}})(\beta^2\lambda)^{-1}, g_{\alpha_\lambda}),\] and is therefore acceptable. This would follow from Lemma~\ref{l:gauss_transform} and the symbol estimates \begin{equation} \label{e:parametrix_2nd-derivs} \begin{aligned} &\partial_x^2 \tilde{l} \in S((\alpha_\mu^{-2} + \alpha_\lambda^{-1}\beta^{-1}) (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), &\quad &\partial_{\xi}^2 a \in S(|\xi|^{-1}, g_{\alpha_\lambda}),\\ &\partial_\xi^2 \tilde{l} \in S( (\alpha\lambda)^{-1}(\beta^2 \lambda)^{-1} (\alpha_\mu \lambda)^{-1}, g_{\alpha_\lambda}), &\quad &\partial_{x}^2 a \in L^2 S(|\xi|, g_{\alpha_\lambda}).
\end{aligned} \end{equation} The bounds for $a$ follow directly from the hypotheses on the metric, so we consider next \[\partial_x^2 \tilde{l} = \partial_x^2 \tilde{\chi}_\theta^\alpha q^{-1} + 2 \partial_x \tilde{\chi}_{\theta}^\alpha \partial_x q^{-1} + \tilde{\chi}_{\theta}^\alpha \partial_x^2 q^{-1}.\] From Lemma~\ref{l:microelliptic-regularity} one sees that on the support of $\tilde{\chi}_{\theta}^\alpha$, \begin{align*} \partial_x (q^{-1}) \in \beta^{-1} S((\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), \quad \partial_x^2 (q^{-1}) \in \beta^{-1} \alpha_\lambda^{-1} S( (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), \end{align*} thus \begin{align*} \partial^2_x \tilde{l} \in S( (\alpha_\mu^{-2} + \alpha_\lambda^{-1} \beta^{-1}) (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}). \end{align*} Similarly \begin{align*} \partial_{\xi}^2 \tilde{l} = \partial_{\xi}^2 \tilde{\chi}_{\theta}^\alpha q^{-1} + 2 \partial_\xi \tilde{\chi}_{\theta}^\alpha \partial_\xi q^{-1} + \tilde{\chi}_{\theta}^\alpha \partial_\xi^2q^{-1}\in S( (\alpha\lambda)^{-1}(\alpha_\mu \lambda)^{-1} (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), \end{align*} as claimed. It remains to estimate the Poisson bracket. Letting $a_\mu$ denote the half-wave symbol corresponding to the low-frequency metric $g_{<\sqrt{\mu}}$, we write \begin{align} \label{e:parametrix_halfwave_poisson} \{\tau + a, \tilde{l} \} = \tilde{\chi}_{\theta}^\alpha q^{-2} \{ \tau + a, q\} + q^{-1} \{ \tau + a_\mu, \tilde{\chi}_\theta^\alpha\} + q^{-1}\{ a - a_\mu, \tilde{\chi}_{\theta}^\alpha\}.
\end{align} The symbol $q$ vanishes to second order on the submanifold \begin{align*} \{(t, x, \xi_{\theta'}(t, x) ) \} \subset \mathbb{R}_t \times (T^*\mathbb{R}^2_x \setminus 0), \end{align*} which is invariant under the Hamiltonian flow. Hence its derivative along the flow, $\{\tau + a, q\}$, vanishes to second order as well, and is smooth and $1$-homogeneous in $\xi$. Therefore we see that \[\{\tau + a, q\} \in S( \beta^2 |\xi|, g_{\alpha_\lambda}).\] The argument of Lemma~\ref{l:poisson-bracket} shows that $\{\tau + a_\mu, \tilde{\chi}_{\theta}^\alpha\} \in S(1, g_{\alpha_\mu})$. Finally, by a direct computation, $\{a-a_\mu, \tilde{\chi}_\theta^\alpha\} \in S( (\alpha_\mu^2\lambda) (\alpha \lambda)^{-1}, g_{\alpha_\mu})$. Altogether we obtain \begin{align*} \{\tau + a, \tilde{l} \} \in S( (\beta^2 \lambda)^{-1}, g_{\alpha_\lambda}). \end{align*} \end{proof} Later it will be useful to have a more computational proof of the Poisson bracket bound $\{\tau + a, q\} \in S( \beta^2\lambda, g_{\alpha_\lambda})$.
Using the equation~\eqref{e:xi_eqn} for $\partial_t \xi_{\theta'}$, we compute \begin{align*} \{\tau + a, q\} &= \partial_t q + \langle a_\xi, \partial_x q\rangle - \langle a_x, \partial_\xi q \rangle\\ &= a_t(\xi) - \langle a_{t\xi} (\xi_{\theta'}), \xi\rangle + \bigl\langle a_{\xi}(\xi), a_x(\xi) - \langle a_{x \xi}(\xi_{\theta'}), \xi \rangle \bigr\rangle\\ &+ |\xi| \langle a_{\xi\xi}(\xi_{\theta'}) (\widehat{\xi}- \widehat{\xi_{\theta'}} ), \langle a_{\xi}(\xi_{\theta'}) - a_{\xi}(\xi), \partial_x \rangle \widehat{\xi_{\theta'}} \rangle\\ &+ |\xi| \langle a_{\xi}(\xi) - a_{\xi}(\xi_{\theta'}), a_x (\widehat{\xi_{\theta'}}) - a_x (\widehat{\xi}) \rangle \\ &- |\xi|\bigl\langle a_{\xi\xi}\bigl( a_{\xi}( \widehat{\xi}) - a_{\xi}( \widehat{\xi_{\theta'}}) - a_{\xi\xi}(\widehat{\xi_{\theta'}}) (\widehat{\xi} - \widehat{\xi_{\theta'}}) \bigr)\bigr\rangle, \end{align*} and an inductive argument similar to the proof of Lemma~\ref{l:microelliptic-regularity} yields, on the support of $\tilde{\chi}_{\theta}^\alpha$, the estimates \begin{align} \begin{split} \label{e:improved_poisson_bracket} |(\alpha_\lambda \partial_x)^k (\beta \lambda \partial_\xi)^m \langle \xi, \partial_\xi\rangle^{m'} \{ \tau + a, q\}| &\lesssim \beta^2 \lambda,\\ |(\alpha_\lambda \partial_x)^k (\beta \lambda \partial_\xi)^m \langle \xi, \partial_\xi\rangle^{m'}\partial_t \{\tau + a, q\}| &\lesssim \alpha_\lambda^{-1} (\beta^2\lambda). \end{split} \end{align} Using the expansion~\eqref{e:parametrix_halfwave_poisson} and Lemma~\ref{l:microelliptic-regularity}, one deduces that \begin{align} \label{e:parametrix_halfwave_poisson_tderiv} \partial_t \{\tau + a, \tilde{l} \} \in \alpha_\lambda^{-1} S( (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}). \end{align} We continue the proof of the proposition.
Write \begin{align*} \phi^\alpha_{\theta, \lambda} u^+ &= P_\lambda(D) (D_t+A - L) \tilde{L} \phi^{\alpha}_{\theta, \lambda}u^+ + P_\lambda(D)[\tilde{\chi}_{\theta}^\alpha - (A - \tilde{A}) \tilde{L}] \phi^\alpha_{\theta, \lambda}u^+ + P_\lambda(D)(1 - \tilde{\chi}^\alpha_{\theta}) \phi_{\theta, \lambda}^\alpha u^+\\ &=P_\lambda(D) (D_t + A - L) \tilde{L} \phi^\alpha_{\theta, \lambda} u^+ + R_1 u^+ + R_2 u^+. \end{align*} By the previous lemma and the pseudo-differential calculus, the second and third terms both take the form $ (\lambda^{-1/2} + (\alpha^2 \lambda)^{-1}) \chi_\theta$ where $\chi_\theta$ is square-summable. Consequently, we estimate \begin{align*} \|R_j u^+ \|_{L^2( \Lambda_{0, \theta'}^\varepsilon)} \lesssim (\alpha \beta \lambda)^{-1} \|\chi_\theta u^+\|_{L^2}. \end{align*} Also, \begin{align*} P_\lambda(D)(D_t+A) \tilde{L} \phi^\alpha_{\theta, \lambda} u^+ = \tilde{L}(D_t+A) \phi^\alpha_{\theta, \lambda} u^+ + P_\lambda(D) [D_t + A, \tilde{L}] \phi^\alpha_{\theta, \lambda} u^+. \end{align*} By the previous lemma, this is bounded in $L^2$ by \begin{align*} &(\beta^2\lambda)^{-1} \| (D_t+A) \phi^\alpha_{\theta, \lambda} u^+\|_{L^2} + (\beta^2\lambda)^{-1} \| \phi_{\theta, \lambda}^\alpha u^+\|_{L^2}\\ &\lesssim (\beta^2 \lambda)^{-1} ( \| \phi_{\theta, \lambda}^\alpha u^+\|_{L^2} + \| (D_t+A) \phi_{\theta, \lambda}^\alpha u^+\|_{L^2}), \end{align*} which is sufficient. The remaining term is \begin{align*} P_\lambda(D)L \tilde{L} \phi^\alpha_{\theta, \lambda}u^+ = L \tilde{L} \phi^\alpha_{\theta, \lambda}u^+ + [P_\lambda, L] \tilde{L} \phi^\alpha_{\theta, \lambda}u^+.
\end{align*} As the commutator $[P_\lambda, L] = [P_\lambda, \tilde{A}]$ is bounded on $L^2$ at frequency $\lambda$, \begin{align*} \| [P_\lambda, L]\tilde{L} \phi^\alpha_{\theta, \lambda} u^+ \|_{L^2} \lesssim \| \tilde{L} \phi^\alpha_{\theta, \lambda} u^+\|_{L^2} \lesssim (\alpha^2 \lambda)^{-1} \| \phi^\alpha_{\theta, \lambda} u^+\|_{L^2}, \end{align*} which is acceptable. For the remaining term we have \begin{align*} \|L \tilde{L}\phi_{\theta, \lambda}^{\alpha}u^+ \|_{L^2(\Lambda_{0,\theta'}^\varepsilon)}^2 &\lesssim \int_{|h| \le \varepsilon} \| L \tilde{L}\phi_{\theta, \lambda}^\alpha u^+ \|_{L^2(\Lambda_{h, \theta'})}^2 \, dh. \end{align*} For each null surface $\Lambda_{h, \theta'}$, we have \begin{align*} \int_{\Lambda_{h, \theta'}} |L \tilde{L}\phi^{\alpha}_{\theta, \lambda} u^+|^2 \, d\sigma \lesssim \int | \Box \tilde{L}\phi^{\alpha}_{\theta, \lambda} u^+| |L \tilde{L}\phi^{\alpha}_{\theta, \lambda} u^+| \, dx dt + \int |\langle T, \pi \rangle| \, dx dt.
\end{align*} For the second term, write \begin{align*} \langle T, \pi \rangle &= T_{\underline{L}\underline{L}} \pi_{LL} + T_{LL} \pi_{\underline{L} \underline{L}} + 2 T_{L\underline{L}} \pi_{\underline{L} L} + 2 T_{\underline{L} E} \pi_{LE} + 2 T_{LE} \pi_{\underline{L}E} + T_{EE} \pi_{EE}\\ &\lesssim |L\tilde{L}\phi^\alpha_{\theta, \lambda} u^+ L\tilde{L} \phi^{\alpha}_{\theta, \lambda} u^+| + |L\tilde{L}\phi^\alpha_{\theta, \lambda} u^+ E\tilde{L} \phi^{\alpha}_{\theta, \lambda} u^+| + |E\tilde{L}\phi^\alpha_{\theta, \lambda} u^+ E\tilde{L} \phi^{\alpha}_{\theta, \lambda} u^+ |, \end{align*} so \begin{align*} & \| ( L \tilde{L}\phi^\alpha_{\theta,\lambda} u^+ ) v_T\|_{L^2(\Lambda_{0,\theta'}^\varepsilon)}^2 \\ &\lesssim \varepsilon \Bigl( \|\Box \tilde{L}\phi^\alpha_{\theta, \lambda} u^+\|_{L^2} \| L \tilde{L}\phi^\alpha_{\theta, \lambda} u^+\|_{L^2} + \| L \tilde{L} \phi^\alpha_{\theta, \lambda} u^+\|_{L^2}^2 + \| E \tilde{L} \phi^\alpha_{\theta,\lambda} u^+\|_{L^2}^2\Bigr)\\ &\lesssim \varepsilon(\beta^2 \lambda)^{-2} \Bigl( \| \Box \phi_{\theta, \lambda}^\alpha u^+\|_{L^2} \| L \phi_{\theta, \lambda}^\alpha u^+\|_{L^2} + \|L \phi_{\theta,\lambda}^\alpha u^+\|_{L^2}^2 + \| E\phi_{\theta, \lambda}^\alpha u^+\|_{L^2}^2\Bigr)\\ &+ \varepsilon \Bigl( \| [\Box, \tilde{L}] \phi_{\theta, \lambda}^\alpha u^+ \|_{L^2} \bigl( (\beta^2\lambda)^{-1} \|L \phi_{\theta, \lambda}^\alpha u^+\|_{L^2} + \| [L, \tilde{L}] \phi_{\theta, \lambda}^\alpha u^+\|_{L^2} \bigr)\\ &+ \bigl( (\beta^2\lambda)^{-1} \| \Box \phi_{\theta,\lambda}^\alpha u^+\|_{L^2} + \| [\Box, \tilde{L} ]\phi_{\theta,\lambda}^\alpha u^+\|_{L^2}\bigr) \| [L, \tilde{L}] \phi_{\theta,\lambda}^\alpha u^+\|_{L^2}\\ &+ \| [L, \tilde{L}] \phi_{\theta,\lambda}^\alpha u^+\|_{L^2}^2 + \| [E, \tilde{L}]
\phi_{\theta,\lambda}^\alpha u^+\|_{L^2}^2\Bigr), \end{align*} and appeal to Lemmas~\ref{e:char-e-commutator1} and \ref{e:char-e-commutator2}. \end{proof} \begin{lemma} \label{e:char-e-commutator2} If $\{L, \underline{L}, E\}$ is the null frame for the foliation $\Lambda_{\theta'}$, then \begin{align*} &\| [ E, \tilde{L} ] \|_{L^2 \to L^2} \lesssim \alpha^{-1}(\beta^2\lambda)^{-1},\\ &\| [L, \tilde{L} ]\|_{L^2 \to L^2} \lesssim (\beta^2 \lambda)^{-1},\\ &\| [\Box, \tilde{L} ]u \|_{L^2} \lesssim (\beta^2\lambda)^{-1}(\| \nabla_{t,x} u\|_{L^2} + \lambda \| (D_t+A) u\|_{L^2}) \text{ if } u = P_\lambda(D_x) u. \end{align*} \end{lemma} \begin{proof} The estimate for $[E, \tilde{L}]$ is simplest, using the first-order symbol expansion~\eqref{e:symbol_expansion}, the computations \begin{alignat*}{4} &\partial_x e \in S(|\xi|, g_{\alpha_\mu}), &\quad &\partial_{\xi} e \in S(1, g_{\alpha_\mu}), \\ &\partial_x \tilde{l} \in S(\beta^{-1} (\beta^2 \lambda)^{-1}, g_{\alpha_\mu}), &\quad &\partial_\xi \tilde{l} \in S( (\alpha\lambda)^{-1} (\beta^2 \lambda)^{-1}, g_{\alpha_\mu}), \end{alignat*} and Lemmas~\ref{l:gauss_transform} and \ref{l:CV} (again considering the outputs at frequency $\lambda$ and $\ne \lambda$ separately). Next, write $[L, \tilde{L}] = [D_t+A, \tilde{L}] + [\tilde{A} - A, \tilde{L}]$. The first term was studied in the previous lemma, while the second again follows from the first-order calculus. It remains to consider $[\Box, \tilde{L}]$; the commutator estimate is proven similarly to the lemma in the previous section, differing mainly in the symbol estimates involved.
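For orientation, the replacement performed in the next step rests on the half-wave factorization already used in the proof of Lemma~\ref{e:char-e-commutator1}. Schematically (a sketch, up to sign and normalization conventions, with $E_0$ a label introduced only here for the first-order error):

```latex
\begin{align*}
\Box = (D_t + A^-)(D_t + A^+) + E_0,
\qquad \| E_0 v \|_{L^2} \lesssim \| \nabla_{t,x} v \|_{L^2},
\end{align*}
```

so commutators with $\Box$ and with $(D_t+A^-)(D_t+A^+)$ differ by terms controlled by one derivative of the argument, which is acceptable for the bound claimed in the lemma.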
As usual we first replace $\Box$ with $(D_t + A^-)(D_t + A^+)$ at the cost of an acceptable error, and write \begin{align} \label{e:commutator_box_parametrix} [(D_t+A^-)(D_t+A^+), \tilde{L}] =(D_t + A^-) [D_t + A^+, \tilde{L}] + [D_t + A^-, \tilde{L}] (D_t+A^+). \end{align} We proceed similarly as in the proof of Lemma~\ref{l:parametrix_bounds}, but first gather symbol bounds for $\partial_t \tilde{l}$. This is a routine computation using Lemmas~\ref{l:t-derivs} and \ref{l:microelliptic-regularity}, which leads to $\partial_t \tilde{l} \in S(\alpha^{-1} (\beta^2\lambda)^{-1}, g_{\alpha_\lambda})$ and \begin{equation} \label{e:parametrix_2nd-derivs-t} \begin{aligned} &\partial_x \partial_t \tilde{l} \in S( ( \alpha_\mu^{-2} + \alpha^{-1} \alpha_{\lambda}^{-1}) (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), &\quad &\partial^2_x \partial_t \tilde{l} \in S( (\alpha_\mu^{-4} + \alpha^{-1} \alpha_\lambda^{-2}) (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}),\\ &\partial_\xi \partial_t \tilde{l} \in S( (\alpha_\mu \alpha \lambda)^{-1} (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}), &\quad &\partial_\xi^2 \partial_t \tilde{l} \in S( (\alpha \lambda)^{-1} (\alpha_\mu^2\lambda)^{-1} (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}); \end{aligned} \end{equation} the factors of $\alpha$ arise from derivatives that land on the $\chi_{\theta}^\alpha$ factor.
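To illustrate the bookkeeping behind these $\alpha$ factors (a heuristic sketch, not a precise symbol computation): the cutoffs vary on angular scale $\alpha$ around $\wht{\xi_\theta^\mu}(t,x)$, so a $t$-derivative falling on such a factor costs $\alpha^{-1}$, e.g.

```latex
\begin{align*}
\partial_t \, \chi\Bigl( \alpha^{-1} \bigl( \wht{\xi} - \wht{\xi_\theta^{\mu}}(t,x) \bigr) \Bigr)
= - \alpha^{-1} \, \chi'\Bigl( \alpha^{-1} \bigl( \wht{\xi} - \wht{\xi_\theta^{\mu}}(t,x) \bigr) \Bigr)\, \partial_t \wht{\xi_\theta^{\mu}}(t,x),
\end{align*}
```

with $\partial_t \wht{\xi_\theta^{\mu}}$ bounded uniformly (cf.\ the equation~\eqref{e:xi_eqn} used above for $\partial_t \xi_{\theta'}$).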
Also note that \begin{equation} \label{e:halfwave_2nd-derivs-t} \begin{aligned} &\partial_x \partial_t a \in f(t) S( |\xi|, g_{\alpha_\lambda}), &\quad &\partial_x^2 \partial_t a \in \alpha_\lambda^{-1} f(t) S(|\xi|, g_{\alpha_\lambda}),\\ &\partial_\xi \partial_t a \in S(1, g_{\alpha_\lambda}), &\quad &\partial_\xi^2 \partial_t a \in S(|\xi|^{-1}, g_{\alpha_\lambda}), \end{aligned} \end{equation} where as usual $f(t) = M(\|\partial^2 g(t)\|_{L^\infty_x})$. The second term on the right side of~\eqref{e:commutator_box_parametrix} is handled by the first-order estimates \begin{align*} \| P_\lambda(D) (\partial_t \tilde{l})(t, X, D)\|_{L^2 \to L^2} \lesssim \alpha^{-1}(\beta^2\lambda)^{-1}, \quad \|[A^-, \tilde{L} ]\|_{L^2\to L^2} \lesssim \alpha^{-1}(\beta^2\lambda)^{-1}. \end{align*} Also recall from the proof of Lemma~\ref{l:parametrix_bounds} that the symbol of $[D_t +A, \tilde{L}]$ is \begin{align*} \frac{1}{i} \{ \tau + a, \tilde{l} \} + r, \end{align*} where \[r \in f(t) S( (\beta^2\lambda)^{-1} (\alpha_\mu^2\lambda)^{-1}, g_{\alpha_\lambda}) + S(( (\alpha_\mu^2\lambda)^{-1} + ( \beta^2\lambda)^{-\frac{1}{2}}) (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}).\] Combining the estimates \eqref{e:parametrix_2nd-derivs}, \eqref{e:parametrix_2nd-derivs-t}, \eqref{e:halfwave_2nd-derivs-t} with the explicit form~\eqref{e:order2_commutator} of the second-order commutator expansion and Lemma~\ref{l:gauss_transform}, one obtains $ \partial_t r = r_1 + r_2 + r_3 + r_4, $ where \begin{equation} \label{e:order2_commutator_t-deriv} \begin{aligned} &r_1 \in \alpha^{-1} f(t) S( (\alpha_\mu \alpha_\lambda \lambda)^{-1} (\beta^2\lambda)^{-1}, g_{\alpha_\lambda}) , &\quad &r_2 \in \alpha^{-1}f(t) S( (\alpha_\mu^2
\Deltambda)^{-1} (\begin{equation}ta^2\Deltambda)^{-1}, g_{\alphapha_\Deltambda}),\\ &r_3 \in S((\alphapha_\mu^2\Deltambda)^{-1} + (\begin{equation}ta^2\Deltambda)^{-\frac{1}{2}})(\begin{equation}ta^2\Deltambda)^{-1}, g_{\alphapha_\Deltambda}), &\quad &r_4 \in S( (( \alphapha_\mu^{-2} (\alphapha_\mu^2 \Deltambda)^{-1} + \alphapha^{-1} ) (\begin{equation}ta^2\Deltambda)^{-1}, g_{\alphapha_\Deltambda}). \end{aligned} \end{equation} These correspond to the pairings $\{\partial_\xi^2 \tilde{l},\ \partial_x^2 \partial_t a\}$, $\{\partial_\xi^2 \partial_t \tilde{l}, \ \partial_x^2 a\}$, $\{\partial_x^2 \tilde{l}, \ \partial_\xi^2\partial_t a\}$, and $\{\partial_x^2\partial_t \tilde{l}, \ \partial_\xi^2 a\}$. For the first term on the right side of of~\eqref{e:commutator_box_parametrix}, we have \begin{equation}gin{align*} \bigl[ (D_t + A^-), [D_t + A^+, \tilde{L}]\bigr] &= \frac{1}{i} (D_t \{ \tau + a^+, \tilde{l}\})(t, X, D) + (D_tr) (t, X, D)\\ &+ \frac{1}{i} [ A^-, \{ \tau + a^+, \tilde{l}\} (t, X, D)] + [A^-, r(t, X, D)]. \end{align*} The bounds~\eqref{e:order2_commutator_t-deriv} imply that \begin{equation}gin{align*} \| (D_tr )(t, X, D) u\|_{L^2} \lesssim \alphapha^{-1} (\begin{equation}ta^2 \Deltambda)^{-1} \| u\|_{L^\infty L^2} + \mu (\begin{equation}ta^2\Deltambda)^{-1} \| u\|_{L^2} \end{align*} which is acceptable in view of the energy estimate $\| u\|_{L^\infty L^2} \lesssim \| u\|_{L^2} + \|(D_t+A^+)u\|_{L^2}$. 
By first order estimates, recalling that $\{\tau + a^+, \tilde{l}\} \in S( (\beta^2\lambda)^{-1}, g_{\alpha_\lambda})$, the last two terms of the commutator satisfy \begin{align*} &\| [A^-, \{\tau + a^+, \tilde{l} \}(t, X, D)]\|_{L^2 \to L^2} \lesssim \beta^{-2},\\ &\| [A^-, r(t, X, D)] \|_{L^2 \to L^2} \lesssim \beta^{-2} f(t) (\alpha_\mu^2\lambda)^{-1} + \beta^{-2} \bigl( (\alpha_\mu^2 \lambda)^{-1} + (\beta^2\lambda)^{-\frac{1}{2}} \bigr), \end{align*} which is also acceptable by the energy estimate. Finally, the Poisson bracket estimate~\eqref{e:parametrix_halfwave_poisson_tderiv} shows that \begin{align*} \|(D_t \{\tau + a^+, \tilde{l} \}) (t, X, D)\|_{L^2 \to L^2} \lesssim \beta^{-1} (\beta^2\lambda)^{-\frac{1}{2}}. \end{align*} This completes the proof of the lemma. \end{proof} \ \section{The algebra property \texorpdfstring{\eqref{alg:est}}{} } \label{Sec:alg} \ In this section we prove the estimate \eqref{alg:est}. \begin{proposition} \label{prop:alg:prop} Assume that $ \theta>\frac{1}{2} $ and $ s>\theta+\frac{1}{2} $. Then the space $ X^{s,\theta} $ is an algebra. Moreover, for $ \sigma >s $ we have \begin{equation} \label{alg:sgm} \vn{u \cdot v}_{X^{\sigma,\theta}} \lesssim \vn{u}_{X^{\sigma,\theta}} \vn{v}_{X^{s,\theta}} + \vn{u}_{X^{s,\theta}} \vn{v}_{X^{\sigma,\theta}}. \end{equation} \end{proposition} \ The proof of this algebra property is based on the estimates in the following two propositions. The first of these is the crux of our result and is the variable coefficient analogue of Theorem 3 from \cite{Tat}: due to the low modulations, we may think of $ u_{\lambda,1}, v_{\mu,1} $ as being approximately free waves.
\ \begin{proposition} \label{PropX} Let $ u_{\lambda,1}, v_{\mu,1} $, $ v_{\lambda',1} $ be functions localized at frequency $ \simeq \lambda $, $ \simeq \mu $, resp. $ \simeq \lambda' $, and let $ d_0=\min(\mu, \frac{\lambda}{\mu} ) $. Then: \begin{enumerate} \item In the high-low case $ \mu \ll \lambda $ we have \begin{equation} \label{bil:main:LH} \vn{ u_{\lambda,1} \cdot v_{\mu,1} }_{ X_{\lambda', \leq \mu,\infty}^{0,\frac{1}{4}}} \lesssim \mu^{\frac{3}{4}} \vn{ u_{\lambda,1}}_{X_{\lambda,1}^{0,\frac{1}{2}}} \vn{ v_{\mu,1} }_{X_{\mu,1}^{0,\frac{1}{2}}} \end{equation} \item In the high-high to low case $ \mu \lesssim \lambda \simeq \lambda' $ we have \begin{equation} \label{bil:main:HH} \vn{ P_{\mu} ( u_{\lambda,1} \cdot v_{\lambda',1}) }_{ X_{\mu, [d_0,\mu],\infty}^{1,\frac{1}{4}} + ( \tilde{X}_{\mu,\mu}^{1,\frac{1}{4}} \cap \frac{\lambda}{\mu} X_{\mu,\mu}^{1,\frac{1}{4}} )} \lesssim \frac{\mu^{\frac{3}{4}}}{\lambda} \vn{ u_{\lambda,1}}_{X_{\lambda,1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',1} }_{X_{\lambda',1}^{1,\frac{1}{2}}} \end{equation} \end{enumerate} \end{proposition} Here, due to the selection of modulation $1$ inputs, the index $\frac12$ on the right is superfluous. We have kept it in order to make it easier to compare this result with the next one. One can pass from modulation $1$ to any larger modulation by a rescaling and time orthogonality argument. This is accomplished in the next proposition. \begin{proposition} \label{Lemma:bil:inter} Let $ u_{\lambda,d_1}, v_{\mu,d_2} $, $ v_{\lambda',d_2} $ be functions localized at frequency $ \simeq \lambda $, $ \simeq \mu $, respectively $ \simeq \lambda' $, where $ d_1,d_2 \geq 1 $, and denote $ d_{\max}=\max (d_1,d_2) $. \begin{enumerate}[leftmargin=*]
\item (Low modulations) For $ 1 \leq d_1,d_2 \leq \mu < \lambda \simeq \lambda' $, denoting $ d_0=\min\left( \frac{\lambda}{\mu}, \frac{\mu}{d_{\max} }\right) $, we have \begin{align} \label{bil:inter:lowmod:LH} \vn{ u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{ X_{\lambda',[d_{\max},\mu]}^{1,\frac{1}{2}}} & \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}} \\ \label{bil:inter:lowmod:HH} \vn{ P_{\mu} ( u_{\lambda,d_1} \cdot v_{\lambda',d_2} ) }_{ X_{\mu,[d_{\max} d_0,\mu]}^{1,\frac{1}{2}} + ( \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}} \cap \frac{\lambda}{\mu} X_{\mu,\mu}^{1,\frac{1}{2}} )} & \lesssim \frac{\mu}{\lambda} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}} \end{align} \item (High modulations) For $ 1 \leq d_2 \leq \mu \leq d_1 \leq \lambda \simeq \lambda' $ we have \begin{equation} \label{bil:inter:highmod:LH} \vn{ u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{ X_{\lambda,d_1}^{1,\frac{1}{2}}} \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}} \end{equation} For $ 1 \leq \mu \leq d_{\max} \leq \lambda \simeq \lambda' $ we have \begin{equation} \label{bil:inter:highmod:HH} \vn{ P_{\mu} ( u_{\lambda,d_1} \cdot v_{\lambda',d_2} ) }_{\frac{\lambda}{\mu} X_{\mu,\mu}^{1,\frac{1}{2}} \cap \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}} } \lesssim \frac{\mu}{\lambda} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}} \end{equation} \end{enumerate} \end{proposition} \subsection{Proof of Proposition \ref{prop:alg:prop}} The proof consists of a simple summation argument based on Proposition \ref{Lemma:bil:inter}.
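The rescaling and time orthogonality argument mentioned above can be outlined as follows; this is only a heuristic sketch of the standard reduction, stated under the assumption that the interval decomposition behaves as in the constant coefficient setting. One partitions $I$ into subintervals $J$ of length $d_{\max}^{-1}$ and uses the almost orthogonality
\begin{align*}
\vn{u_{\lambda,d_1} \cdot v_{\mu,d_2}}_{X[I]}^2 \simeq \sum_{|J|=d_{\max}^{-1}} \vn{u_{\lambda,d_1} \cdot v_{\mu,d_2}}_{X[J]}^2;
\end{align*}
on each such $J$, after rescaling $J$ to an interval of unit length the inputs behave like modulation $\lesssim 1$ functions, so the modulation $1$ bounds of Proposition \ref{PropX} apply, and the resulting pieces are summed.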
Let $ u^1, u^2 \in X^{s,\theta} $ and write $$ u^i=\sum_{\lambda \geq 1} P_{\lambda} u^i_{\lambda}, \qquad \qquad \vn{u^i}_{X^{s,\theta}}^2 \simeq \sum_{\lambda \geq 1} \vn{ u^i_{\lambda}}_{X^{s,\theta}_{\lambda}}^2, \qquad \qquad i\in \overline{1,2}. $$ By Remark \ref{rmk:tildeX:loc} we may assume that the $ u^i_{\lambda} $ are localized in frequency. By the standard Littlewood-Paley decomposition we write $$ u^1 \cdot u^2 =\sum_{\lambda_1,\lambda_2,\lambda_3 \geq 1} P_{\lambda_3} ( P_{\lambda_1} u^1_{\lambda_1} \cdot P_{\lambda_2} u^2_{\lambda_2} ). $$ Splitting the sum into three terms corresponding to the three cases $ \lambda_1 \ll \lambda_2 \simeq \lambda_3$, $ \lambda_2 \ll \lambda_1 \simeq \lambda_3$, $ \lambda_3 \lesssim \lambda_1 \simeq \lambda_2 $, we obtain $ u^1 \cdot u^2 \in X^{s,\theta} $ from the following estimates, stated for arbitrary frequency localized functions $ u_{\lambda}, v_{\mu} $. Let $ s=\theta+\frac{1}{2}+\varepsilon $ for $ \varepsilon >0 $. For $ \mu \ll \lambda $, $ \lambda' \simeq \lambda $ we have \begin{equation} \label{alg:loc:HL} \vn{P_{\lambda}u_{\lambda} \cdot P_{\mu} v_{\mu} }_{X_{\lambda'}^{s,\theta}} \lesssim \frac{1}{\mu^{\varepsilon}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\mu} }_{X_{\mu}^{s,\theta} } \end{equation} For $ \mu \lesssim \lambda $, $ \lambda' \simeq \lambda $ we have \begin{equation} \label{alg:loc:HH} \vn{P_{\mu} (P_{\lambda} u_{\lambda} \cdot P_{\lambda'} v_{\lambda'} ) }_{X_{\mu}^{s,\theta}} \lesssim \frac{\mu^{s-1}}{ \lambda^{s-1+\varepsilon}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\lambda'} }_{X_{\lambda'}^{s,\theta} } \end{equation} Here it is essential that we are in the subcritical case $ s>1 $, which allows us to have the power $ \mu^{-\varepsilon} $ in \eqref{alg:loc:HL}. We write $$ u_{\lambda}= \sum_{d=1}^{\lambda} u_{\lambda,d}, \qquad \qquad \vn{u_{\lambda}}_{X^{s,\theta}_{\lambda}}^2 \simeq \sum_{d=1}^{\lambda} \vn{u_{\lambda,d}}_{X^{s,\theta}_{\lambda,d}}^2.
$$ and the similar decompositions for $ v_{\mu}, v_{\lambda'} $. Then, using \eqref{bil:inter:lowmod:LH} and \eqref{bil:inter:highmod:LH}, we have \begin{align*} & \vn{P_{\lambda}u_{\lambda} \cdot P_{\mu} v_{\mu} }_{X_{\lambda'}^{s,\theta}} \lesssim \sum_{d_1, d_2 \leq \mu} \vn{P_{\lambda}u_{\lambda,d_1} \cdot P_{\mu} v_{\mu,d_2} }_{X_{\lambda',[d_{\max},\mu]}^{s,\theta}}+\sum_{d_2 \leq \mu} \vn{P_{\lambda}u_{\lambda,\geq \mu} \cdot P_{\mu} v_{\mu,d_2} }_{X_{\lambda',\geq \mu}^{s,\theta}} \\ & \lesssim \lambda^{s-1} \mu^{\theta-\frac{1}{2}} \sum_{d_1, d_2 \leq \mu} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}} + \lambda^{s-1} \vn{u_{\lambda}}_{X_{\lambda, [\mu,\lambda]}^{1,\theta}} \sum_{d_2 \leq \mu} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}} \\ & \lesssim \frac{ \mu^{\theta-\frac{1}{2}}}{ \mu^{s-1}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\mu} }_{X_{\mu}^{s,\theta} } + \frac{1}{ \mu^{s-1}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\mu} }_{X_{\mu}^{s,\theta} } \lesssim \frac{1}{\mu^{\varepsilon}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\mu} }_{X_{\mu}^{s,\theta} }. \end{align*} In this argument we have used the factors $ d_i^{\frac{1}{2}-\theta} $ to obtain square sums in $ d_i \leq \mu $, while the modulation square summability of $ X_{\lambda',\geq \mu}^{s,\theta} $ is inherited from $ \vn{u_{\lambda}}_{X_{\lambda, [\mu,\lambda]}^{1,\theta}} $ due to \eqref{bil:inter:highmod:LH}.
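To make the dyadic summation explicit, here is a sketch of how \eqref{alg:loc:HL} yields the high-low part of the trichotomy; this is a routine Cauchy--Schwarz argument in the dyadic parameters (taking $\lambda' = \lambda$ for simplicity, since the summation over $\lambda' \simeq \lambda$ is harmless):
\begin{align*}
\Big\| \sum_{\lambda} \sum_{\mu \ll \lambda} P_\lambda u^1_\lambda \cdot P_\mu u^2_\mu \Big\|_{X^{s,\theta}}^2
&\lesssim \sum_{\lambda} \Big( \sum_{\mu \ll \lambda} \frac{1}{\mu^{\varepsilon}} \vn{u^1_\lambda}_{X^{s,\theta}_\lambda} \vn{u^2_\mu}_{X^{s,\theta}_\mu} \Big)^2 \\
&\lesssim \sum_{\lambda} \vn{u^1_\lambda}_{X^{s,\theta}_\lambda}^2 \sum_{\mu} \frac{1}{\mu^{\varepsilon}} \vn{u^2_\mu}_{X^{s,\theta}_\mu}^2 \lesssim \vn{u^1}_{X^{s,\theta}}^2 \vn{u^2}_{X^{s,\theta}}^2,
\end{align*}
using $\big(\sum_\mu \mu^{-\varepsilon} x_\mu\big)^2 \leq \big(\sum_\mu \mu^{-\varepsilon}\big)\big(\sum_\mu \mu^{-\varepsilon} x_\mu^2\big)$ together with the convergence of $\sum_{\mu \geq 1} \mu^{-\varepsilon}$ over dyadic $\mu$.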
The proof of \eqref{alg:loc:HH} is similar, using \eqref{bil:inter:lowmod:HH} and \eqref{bil:inter:highmod:HH}: \begin{align*} \vn{P_{\mu} (P_{\lambda} u_{\lambda} \cdot P_{\lambda'} v_{\lambda'} ) }_{X_{\mu}^{s,\theta}} & \lesssim \mu^{s-1} \mu^{\theta-\frac{1}{2}} \sum_{d_1,d_2} \vn{P_{\mu} (P_{\lambda} u_{\lambda,d_1} \cdot P_{\lambda'} v_{\lambda',d_2} ) }_{X_{\mu}^{1,\frac{1}{2}}} \\ & \lesssim \mu^{s-1} \mu^{\theta-\frac{1}{2}} \sum_{d_1,d_2 } \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}} \\ & \lesssim \frac{\mu^{s-1}}{\lambda^{s-1}} \frac{\mu^{\theta-\frac{1}{2}}}{\lambda^{s-1}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\lambda'} }_{X_{\lambda'}^{s,\theta} } \lesssim \frac{\mu^{s-1}}{ \lambda^{s-1+\varepsilon}} \vn{ u_{\lambda} }_{X_{\lambda}^{s,\theta}} \vn{ v_{\lambda'} }_{X_{\lambda'}^{s,\theta} }. \end{align*} Finally, \eqref{alg:sgm} also follows from \eqref{alg:loc:HL} and \eqref{alg:loc:HH} by readjusting the weights. \ \subsection{Proof of Proposition \ref{PropX}} To be able to use the wave packet decomposition from Proposition \ref{X:wp:dec} and Corollary \ref{Cor:WP:dec} we need to work on a small interval such as $ [k \delta, (k+1) \delta ]$. We fix $ \delta $ and without loss of generality prove \eqref{bil:main:LH}, \eqref{bil:main:HH} on $ I=[0,\delta] $. We sum these bounds by brute force, treating $ 1/\delta = D $ as a universal constant, so that $ \vn{ v_{\eta}}_{X_{\eta,D}^{0,\frac{1}{2}}[I]} \simeq \vn{ v_{\eta}}_{X_{\eta,1}^{0,\frac{1}{2}}[I]} $. Let $ \eta \in \{ \mu, \lambda' \} $; we refer to $ \eta = \mu $, \eqref{bil:main:LH} as Case 1, and to $ \eta = \lambda' $, \eqref{bil:main:HH} as Case 2.
For the remainder of this proof, we normalize the norms of the inputs as follows: \begin{equation} \label{input:normalization} \vn{ u_{\lambda,1}}_{X_{\lambda,1}^{0,\frac{1}{2}}[I]}=\vn{ v_{\eta,1}}_{X_{\eta,1}^{0,\frac{1}{2}}[I]}=1, \qquad \qquad \eta \in \{ \mu, \lambda' \}. \end{equation} \ We first give an overview of the estimates needed to establish \eqref{bil:main:LH} and \eqref{bil:main:HH}. \ {\bf Step~1.} {\bf (Bilinear angular decomposition)} We may apply appropriate multipliers so that $ P_{\lambda} u_{\lambda,1}=u_{\lambda,1} $ and $ P_{\eta} v_{\eta,1}=v_{\eta,1} $, $ \eta \in \{ \mu, \lambda' \} $. The terms with $ \mu \simeq 1 $ in Case 1 and with $ \lambda \simeq \lambda' \simeq \mu \simeq 1 $ in Case 2 are easily treated by H\"older's inequality and the chain rule. Thus we may assume $ \mu,\lambda' \gg 1 $ are large enough that Corollary \ref{Cor:WP:dec} is applicable, providing a decomposition $$ v_{\eta,1} = v^{+} + v^{-} + v_{R}, \qquad \qquad \eta \in \{ \mu, \lambda' \}. $$ The terms $ u_{\lambda,1} \cdot v_R $ are estimated by \eqref{bil:est:remainder1} and \eqref{bil:est:remainder2} in Corollary \ref{Cor:WP:dec:rem}. We collect \begin{equation} \label{v:omega:pm} v^{\pm}=\sum_{\omega \in \Omega_{\alpha_{\eta}}} v^{\omega,\pm}, \qquad v^{\omega,\pm} = P_{\eta} \sum_{ T\in \mathcal T_{\eta}^{\pm}, \omega_T = \omega} c_T(t) u_T(t). \end{equation} Next, by Proposition~\ref{p:half-wave} and its corollary we decompose $$ u_{\lambda,1}= u^{+} + u^{-}. $$ Recalling \cite[Theorem 3]{Tat}, if $u$ and $v$ are free waves on the Minkowski background, the modulation of the product $uv$ depends on their relative positions on the null cone $\tau^2 = |\xi|^2$. Accordingly, we perform a bilinear angular decomposition of the products $u^{\pm_1} v^{\pm_2}$ with angular separation $\simeq \alpha$.
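The role of the angle can be seen from the following constant coefficient heuristic; this computation is only motivational and is not used quantitatively below. For plane waves with signs $\pm_1, \pm_2$ and frequencies $\xi_1, \xi_2$, the product oscillates at $(\tau, \xi) = (\pm_1|\xi_1| \pm_2 |\xi_2|,\ \xi_1 + \xi_2)$, and by the law of cosines
\begin{align*}
(|\xi_1| + |\xi_2|)^2 - |\xi_1 + \xi_2|^2 &= 2|\xi_1||\xi_2|\,\big(1 - \cos \angle(\xi_1, \xi_2)\big) \simeq |\xi_1||\xi_2|\, \angle(\xi_1,\xi_2)^2, \\
(|\xi_1| - |\xi_2|)^2 - |\xi_1 + \xi_2|^2 &= -2|\xi_1||\xi_2|\,\big(1 - \cos\angle(\xi_1, -\xi_2)\big) \simeq -|\xi_1||\xi_2|\, \angle(\xi_1,-\xi_2)^2.
\end{align*}
Thus, when the relevant angle is $\simeq \alpha$, the output modulation $\big| |\tau| - |\xi_1+\xi_2| \big|$ is comparable to $|\xi_1||\xi_2|\,\alpha^2 / (|\tau| + |\xi_1+\xi_2|)$.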
The bilinear decomposition is nonstandard since the two factors are localized differently: the high frequency input will be split pseudodifferentially, while for the low frequency input we use a wave packet decomposition in order to leverage the characteristic energy estimates from the previous section. Let $\Omega_{\alpha}$ be a partition of the unit circle into arcs of angle $\alpha$, and let $ \alpha_{\eta}=\eta^{-\frac{1}{2}} $. At time $ t=0 $, for any $ \omega \in \Omega_{\alpha_{\eta}} $ we invoke the partition of unity from the Appendix (Proposition \ref{p:bilinear_pou}): \begin{align*} 1 = \sum_{j} \phi^{\alpha_j, k_j(\omega)}_{\theta_j}(\xi), \qquad t=0, \qquad k_j(\omega) \in \{1, 2, 3, 4\}. \end{align*} For each interval $\omega \in \Omega_{\alpha_\eta}$ and $(\theta, k) \in \Omega_\alpha \times \{1,2,3,4\}$, define the relation $\omega \sim_\alpha (\theta, k)$ if the triple $(\alpha, \theta, k)$ appears in the above partition of unity. Then $\omega \sim_\alpha (\theta, k)$ only if $|\theta - \omega| \sim \alpha$ or $ |\theta - \omega| \lesssim \alpha_{\eta} $, and for each scale $\alpha$ there are at most $O(1)$ intervals $\theta \in \Omega_\alpha$ related to $\omega$. Let $\Phi^{\alpha_\eta, \pm}_t$ denote the Hamiltonian flows for the half-wave symbols $\tau + a_{<\alpha_\eta^{-1}}^{\pm}$ in the factorization $g_{<\sqrt{\eta}}^{\alpha \beta} \xi_\alpha \xi_\beta = (\tau + a_{<\alpha_\eta^{-1}}^{+})(\tau + a_{<\alpha_\eta^{-1}}^{-})$.
Pulling back both sides by this flow as in \eqref{pullback:flow} and mollifying in the $x$ variable as in \eqref{e:pd-loc}, we obtain a time-dependent partition of unity for functions localized at frequency $\lambda$: \begin{align} \nonumber P_{\lambda}(\xi) &= \sum_{j} (P_{<\lambda/8}(D_x)\phi^{\alpha_j,\pm, k_j(\omega)}_{\theta_j})(t, x, \xi) P_\lambda(\xi) \\ \label{e:partition} &= \sum_j \phi_{\theta_j, \lambda}^{\alpha_j, \pm, k_j(\omega)} (t,x,\xi)\\ \label{e:partition-flipped} &= \sum_j \tilde{\phi}_{\theta_j, \lambda}^{\alpha_j, \pm, k_j(\omega)} (t,x,\xi), \end{align} where $\tilde{\phi}_\theta (t, x, \xi) := \phi_\theta(t, x, -\xi)$ (recall that the multipliers $s_\lambda$ are assumed radial). Note that in general $\tilde{\phi}_{\theta}^{\pm} \ne \phi_{-\theta}^{\pm}$. For any signs $\pm_1, \pm_2 $ let $ \pm=\pm_1 \pm_2$. One has \begin{align} \nonumber u^{\pm_1} v^{\pm_2} &=\sum_{\omega \in \Omega_{\alpha_{\eta}}} u^{\pm_1} v^{\omega,\pm_2} = \sum_{\omega} \sum_{j} (\phi_{\theta_j, \lambda}^{\alpha_j, \pm_1, k_j(\omega)} u^{\pm_1}) v^{\omega, \pm_2} \\ \nonumber &= \sum_{\alpha\in [\alpha_\eta, 1]} \sum_{\theta \in \Omega_\alpha} \sum_{k=1}^4 \phi_{\theta, \lambda}^{\alpha, \pm_1, k} u^{\pm_1} \sum_{\omega \sim (\alpha, \pm \theta, k)} v^{\omega, \pm_2}\\ \nonumber &= \sum_{\alpha\in [\alpha_\eta,1]} \sum_{\theta \in \Omega_\alpha} \sum_{k=1}^4 (\phi_{\theta,\lambda}^{\alpha, \pm_1, k} u^{\pm_1} ) v_{\pm \theta}^{\alpha, \pm_2, k}.
\end{align} Thus for $ \alpha \not\simeq \alpha_\eta $, $ v_{\pm \theta}^{\alpha, \pm_2, k} $ contains packets $ u_T $ corresponding to $ T=(x_T,\omega_T) \in \mathcal T_{\eta}^{\pm_2}$ with angular separation $ \angle(\omega_T, \pm \theta) \simeq \alpha $ relative to $ \pm \theta $: \begin{equation} \label{v.theta:sum} v^{\pm_2,\alpha,k}_{\theta}(t)=P_{\eta} \sum_{ T\in \mathcal T_{\eta}^{\pm_2}, \omega_T \sim (\alpha, \pm \theta, k) } c_T(t) u_T(t). \end{equation} Since the $ \omega_T $'s are separated by $ \simeq \eta^{-\frac{1}{2}} $, for every $ \theta$ there are roughly $ \frac{\alpha}{\eta^{-\frac{1}{2}} }=\alpha \eta^{\frac{1}{2}}$ directions $ \omega $ which obey this condition. The index $k$ is a technical artifact of our construction and can be safely ignored. Hence we shall hereafter simply write \begin{align} \label{bil:dec:uv} u^{\pm_1} v^{\pm_2} = \sum_{\alpha \in [\alpha_\eta, 1]} \sum_{\theta \in \Omega_\alpha} ( \phi_{\theta, \lambda}^{\alpha, \pm_1} u^{\pm_1} ) v_{\pm \theta}^{\alpha, \pm_2}. \end{align} A typical term $(\phi_{\theta, \lambda}^{\alpha, +} u) v_{\pm\theta}^{\alpha, \pm}$ intuitively involves waves propagating at relative angle~$\alpha$. For studying nonresonant interactions of the form $P_\mu \bigl( P_\lambda u^\pm P_\lambda v^\pm\bigr)$, we need a modified decomposition using instead the partition~\eqref{e:partition-flipped}: \begin{align} \label{bil:dec:uv:neg} u^{\pm_1} v^{\pm_2} = \sum_{\alpha \in [\alpha_\eta, 1]} \sum_{\theta \in \Omega_\alpha} ( \tilde{\phi}_{\theta, \lambda}^{\alpha, \pm_1} u^{\pm_1} ) v_{\theta}^{\alpha, \pm_2}.
\end{align} Note that both terms of the form $(\phi_{\theta, \lambda}^{\alpha,+} u ) v_{-\theta}^{\alpha, -}$ and $(\tilde{\phi}_{\theta, \lambda}^{\alpha, +} u) v^{\alpha, +}_{\theta}$ only involve interactions between pairs of frequencies $(\xi_1, \xi_2)$ with $\angle(\xi_1, -\xi_2) \sim \alpha$. Finally, we sometimes write $v_{\theta, \eta}^{\alpha, \pm_2}$ to clarify the frequency of the packets constituting $v_\theta^{\alpha, \pm_2}$. \ {\bf Step~2.} {\bf (Small angle interactions)} We first consider the minimal angle case, consisting of the $ \alpha \simeq \alpha_{\eta}=\eta^{-\frac{1}{2}} $ terms in \eqref{bil:dec:uv}, which will mostly follow from H\"older's inequality. \begin{proposition} \label{prop:alphamin} Let $ \lambda \gtrsim \eta \in \{ \mu,\lambda' \} $. Under the normalization \eqref{input:normalization}, on $ I $, one has \begin{align} \label{sum:tht:alphamin:L2} \sum_{\theta \in \Omega_{\alpha_{\eta}}} \vn{ \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\eta},\pm_2}_{\theta} }_{ L^2} \lesssim \eta^{\frac{3}{4}} \\ \label{sum:tht:alphamin:Q:L2} \sum_{\theta \in \Omega_{\alpha_{\eta}}} \vn{ Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\eta},\pm_2}_{\theta} \big) }_{ L^2} \lesssim \lambda \eta^{\frac{3}{4}} \\ \label{sum:tht:alphamin:Q:L1L2} \sum_{\theta \in \Omega_{\alpha_{\eta}}} \vn{ Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\eta},\pm_2}_{\theta} \big) }_{ L^2 L^1} \lesssim \lambda \end{align} For any $ \alpha \geq \alpha_{\eta} $, under \eqref{input:normalization}, one has \begin{align} \label{sum:tht:box:1} \sum_{\theta \in \Omega_{\alpha}} \vn{ \Box_{g_{<\sqrt{\lambda}}} \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} }_{ L^2} + \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot \Box_{g_{<\sqrt{\lambda}}} v^{\alpha,\pm_2}_{\theta} }_{ L^2} \lesssim \lambda \eta \alpha^{\frac{1}{2}} \\ \label{sum:tht:box:2} \sum_{\theta \in \Omega_{\alpha}} \vn{ \Box_{g_{<\sqrt{\lambda}}} \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} }_{ L^2 L^1 } + \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot \Box_{g_{<\sqrt{\lambda}}} v^{\alpha,\pm_2}_{\theta} }_{ L^2 L^1 } \lesssim \lambda \end{align} \end{proposition} As a consequence we will obtain the following estimates in low modulation spaces, which take care of the terms in \eqref{bil:dec:uv} with $ \alpha \simeq \alpha_{\eta} $. \begin{corollary} \label{cor:alphamin} In Case 1, resp. Case 2, under the normalization \eqref{input:normalization}, one has \begin{align} \label{bil:almin:LH:I} \sum_{\theta \in \Omega_{\alpha_{\mu}}} \vn{ \phi_{\theta,\lambda}^{\alpha_{\mu}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\mu},\pm_2}_{\theta,\mu} }_{ X_{\lambda', 1}^{0,\frac{1}{4}}[I]} & \lesssim \mu^{\frac{3}{4}} \\ \label{bil:almin:HH:I} \sum_{\theta \in \Omega_{\alpha_{\lambda'}}} \vn{ P_{\mu} \big( \phi_{\theta,\lambda}^{\alpha_{\lambda'}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\lambda'},\pm_2}_{\theta,\lambda'} \big) }_{ X_{\mu, d_0}^{0,\frac{1}{4}}[I]} & \lesssim \frac{\lambda}{\mu^{\frac{1}{4}}} \end{align} \end{corollary} \ {\bf Step~3.} {\bf (Non-resonant interactions)} We continue with the non-resonant parts of Case 2 ($ \eta=\lambda'$), which include $ u^{+} \cdot v^{+} $, $ u^{-} \cdot v^{-} $, and the terms of $ u^{\pm} \cdot v^{\mp} $ in \eqref{bil:dec:uv} with $ \alpha > \max(\frac{\mu}{\lambda}, \alpha_{\lambda} ) $.
Here we have the following estimates, which are responsible for the $ \frac{\lambda}{\mu} $ loss in the high modulation bound $ X_{\mu,\mu}^{1,\frac{1}{4}} $ in \eqref{bil:main:HH}\footnote{The loss of $ \frac{\lambda}{\mu} $ in \eqref{bil:main:HH} caused by \eqref{HH:nonresonant} is essentially due to our choice of spaces. Assuming constant coefficients, a $ ++ $ high-high to low $ (\lambda,\lambda) \to \mu $ interaction would have output modulation $ \lambda $, while we force our modulation weights to be at most equal to the frequency ($ \mu $). To compensate for this, we introduced the $ \tilde{X}_{\mu,\mu}^{1,\frac{1}{4}} $ norms, which retain the expected $ \frac{\mu}{\lambda} $ factor.} \begin{proposition} \label{p:nonresonant} Let $ D \leq \mu \ll \lambda \simeq \lambda' $ and let $ \pm $ be a sign. Suppose $u^\pm = P_\lambda u^\pm$ and $v^\pm = P_{\lambda'} v^\pm$. Then: \begin{equation}\label{HH:nonresonant}\vn{P_{\mu} ( u^{\pm} \cdot v^{\pm} ) }_{X_{\mu,\mu}^{1,\frac{1}{4}}[I]} \lesssim \mu^{-\frac{1}{4}} \| u^\pm \|_{X^{1,\frac{1}{2}}_{\lambda,1}} \| v^{\pm}\|_{X^{1, \frac{1}{2}}_{\lambda,1}}. \end{equation} More precisely, \begin{align*} \mu^{ \frac{1}{4}} \| \nabla_{t,x} P_\mu (u^\pm \cdot v^\pm) \|_{L^2} &\lesssim \mu^{-\frac{1}{4}} \frac{\mu}{\lambda}\| u^\pm \|_{X^{1,\frac{1}{2}}_{\lambda,1}} \| v^{\pm}\|_{X^{1, \frac{1}{2}}_{\lambda,1}},\\ \mu^{\frac{1}{4}-1} \| \Box_{g_{<\sqrt{\mu}}} P_\mu( u^\pm \cdot v^\pm) \|_{L^2} &\lesssim \mu^{-\frac{1}{4}} \| u^\pm \|_{X^{1,\frac{1}{2}}_{\lambda,1}} \| v^{\pm}\|_{X^{1, \frac{1}{2}}_{\lambda,1}}.
\end{align*} For $ \alpha > \max(\frac{\mu}{\lambda}, \alpha_{\lambda} ) $ one has: \begin{align}\label{HH:ang:nonresonant} \sum_{\theta \in \Omega_{\alpha}} \vn{ P_\mu \big( \phi_{ \theta, \lambda}^{\alpha} u^{\pm} \cdot v^{\mp,\alpha}_{-\theta} \big) }_{ X_{\mu, \mu}^{1,\frac{1}{4}}[I]} \lesssim_N \mu^{-\frac{1}{4}} \Bigl(\frac{\mu}{\lambda}\Bigr) \|u^{\pm} \|_{X^{1, \frac{1}{2}}_{\lambda,1}} \|v^{\mp} \|_{X^{1, \frac{1}{2}}_{\lambda,1}} \end{align} \end{proposition} The bounds \eqref{HH:nonresonant}, \eqref{HH:ang:nonresonant} are used for the $ X_{\mu,\mu}^{1,\frac{1}{4}} $ part of \eqref{bil:main:HH}. To prove the $ \tilde{X}_{\mu,\mu}^{1,\frac{1}{4}} $ part we also need $ L^{\infty} L^2 $ estimates, but these follow easily from the Bernstein inequality $ P_{\mu} : L^{\infty} L^1 \to \mu L^{\infty} L^{2} $ and H\"older's inequality $ L^{\infty} L^2 \times L^{\infty} L^2 \to L^{\infty} L^1 $. The proof of this proposition uses several technical lemmas, whose proofs are deferred to a later section. We begin with a pseudodifferential calculus estimate. \begin{lemma} \label{l:HHLfreqtails} Let $u^{\pm} = P_\lambda u^{\pm}, \ v^{\pm} = P_\lambda v^{\pm}$, where $v^{\pm} = \sum_{\omega \in \Omega_{\alpha_\lambda}} v^{\omega, \pm}$ is a superposition of frequency $\lambda$ packets. Consider the bilinear decompositions \[ u^{\pm} v^{\mp} = \sum_\alpha \sum_\theta (\phi_{\theta, \lambda}^{\alpha, \pm} u) v_{-\theta}^{\alpha, \mp}, \quad u^{\pm} v^{\pm} = \sum_{\alpha} \sum_\theta (\tilde{\phi}_{\theta, \lambda}^{\alpha, \pm} u) v_{\theta}^{\alpha, \pm} \] as in~\eqref{bil:dec:uv}, \eqref{bil:dec:uv:neg}.
If $\mu \ll \alpha \lambda$ and $\alpha \gg \lambda^{-\frac{1}{2}}$, then there is a rapidly converging expansion \begin{align*} P_\mu ( \phi_{\theta, \lambda}^\alpha u v_{-\theta, \lambda}^\alpha) = \sum_{j=1, 2} \sum_{\vec{k}} (\alpha \lambda)^{-1} P_{\mu, \vec{k}} ( \phi_{\theta, \vec{k}}^{j, \alpha} u \, \psi_{-\theta, \vec{k}}^{j, \alpha} v) + \tilde{P}_\mu(\phi_{\theta, \lambda}^\alpha u \, r_{-\theta}^\alpha), \end{align*} where $\phi^{j, \alpha}_{\theta, \vec{k} }, \ \psi^{j, \alpha}_{-\theta, \vec{k} } \in \langle \vec{k} \rangle^{-N} S(1, g_{\alpha_\lambda})$ are square-summable in $\theta$, the $P_{\mu, \vec{k}}$ are Fourier multipliers supported in $|\xi| \lesssim \mu$ with $L^2\to L^2$ norm $O(\langle \vec{k} \rangle^{-N})$, and $r_{-\theta}^\alpha = \sum_{|\omega \mp \theta| \lesssim \alpha} \sum_{T} c_T r_T$ is a superposition of packets $r_T$ with $\|r_T\|_{WP} = O( (\alpha^2\lambda)^{-\infty})$ uniformly in $T$. Here $WP$ denotes any weighted norm $WP^N_T$ as defined in~\eqref{e:wpnorm}. \end{lemma} The next lemma gives a variable coefficient version of some $L^2$ null form estimates considered by Foschi and Klainerman~\cite{foschi-klainerman}. Observe that the factor $\mu^{\frac{1}{2}}$ on the right side is consistent with scaling.
\begin{lemma} \label{l:HHLnullform} For $\mu \ll \lambda$ and any pair of signs $\pm_1, \pm_2 \in \{\pm\}$, \begin{align} \| P_\mu Q_{g_{<\sqrt{\lambda}}}(u^{\pm_1}, v^{\pm_2}) \|_{L^2} &\lesssim \mu^{\frac{1}{2}} \| u^{\pm_1} \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v^{\pm_2} \|_{X^{1, \frac{1}{2}}_{\lambda,1}},\\ \| P_\mu (u^{\pm_1} v^{\pm_2}) \|_{L^2} &\lesssim \mu^{\frac{1}{2}} \| u^{\pm_1} \|_{X^{0, \frac{1}{2}}_{\lambda, 1}} \| v^{\pm_2} \|_{X^{0, \frac{1}{2}}_{\lambda,1}}, \label{e:HHL-L2}\\ \| \partial_t P_\mu (u^{\pm_1} v^{\pm_2}) \|_{L^2} &\lesssim \mu^{\frac{1}{2}} \lambda \| u^{\pm_1} \|_{X^{0, \frac{1}{2}}_{\lambda, 1}} \| v^{\pm_2} \|_{X^{0, \frac{1}{2}}_{\lambda,1}}. \label{dt:pu:u:v:pm} \end{align} \end{lemma} Finally, the last part of Proposition~\ref{p:nonresonant} utilizes: \begin{lemma} \label{l:HHLnullform2} For $\alpha > \max(\tfrac{\mu}{\lambda}, \lambda^{-\frac{1}{2}})$ in the bilinear decomposition, \begin{gather*} \sum_{\theta} \mu^{1 + \frac{1}{2}} \| P_\mu (\phi_{\theta, \lambda}^{\alpha, \mp} u \, v_{-\theta}^{\alpha, \pm}) \|_{L^2} \lesssim_N [(\alpha \lambda)^{-1} + (\alpha^2\lambda)^{-N}] \Bigl(\frac{\mu}{\lambda}\Bigr)^2 \|u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \|v\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} ,\\ \sum_{\theta} \mu^{\frac{1}{2} - 1} \| P_\mu Q ( \phi_{\theta, \lambda}^{\alpha, \mp} u, v_{-\theta}^{\alpha, \pm}) \|_{L^2} \lesssim_N [ \alpha^{\frac{1}{2}} \mu^{-\frac{1}{2}} + (\alpha^2\lambda)^{-N}] \Bigl( \frac{\lambda^{\frac{1}{2}}}{\mu} \Bigr) \frac{\mu}{\lambda}\|u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{gather*} \end{lemma} Now we show how these claims imply Proposition~\ref{p:nonresonant}. Consider~\eqref{HH:nonresonant}. The preceding lemmas imply the $ L^2 $ bound in \eqref{HH:nonresonant}.
The $\Box$ part of the norm is bounded by \begin{align*} \mu^{\frac{1}{4} - 1} \bigl(\|P_\mu \Box_{g_{<\sqrt{\lambda}}} (u_\lambda^+ v_\lambda^+) \|_{L^2} + \| [P_\mu, \Box_{g_{<\sqrt{\lambda}} } ] (u_\lambda^+ v_{\lambda}^+)\|_{L^2} + \|( \Box_{g_{<\sqrt{\mu}}} - \Box_{g_{<\sqrt{\lambda}}}) P_\mu (u_\lambda^+ v_\lambda^+)\|_{L^2}\bigr). \end{align*} For the last term we use the estimate~\eqref{e:metric_est2}, Bernstein's inequality, and the energy estimate to infer \begin{align*} \|(\Box_{g_{<\sqrt{\mu}}} - \Box_{g_{<\sqrt{\lambda}}})P_\mu (u_\lambda^+ v_\lambda^+)\|_{L^2} &\lesssim \| P_\mu \partial_t(u_\lambda^+ v_\lambda^+)\|_{L^\infty L^2} + \mu \| P_\mu(u_\lambda^+ v_{\lambda}^+) \|_{L^\infty L^2} \\ &\lesssim \mu \| \nabla_{t,x} u\|_{L^\infty L^2} \| v\|_{L^\infty L^2} + \mu \| u\|_{L^\infty L^2} \| \nabla_{t,x} v\|_{L^\infty L^2} \\ &\lesssim \frac{\mu}{\lambda}\|u_{\lambda}^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v_\lambda^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{align*} Writing the commutator as \begin{align*} [P_\mu, \Box_{g_{<\sqrt{\lambda}}} ] &= \bigl([P_{\mu}, g_{<\mu}]P_{<8\mu} + \sum_{\nu \in [\mu, \sqrt{\lambda}]} [P_\mu, g_\nu] \tilde{P}_{\nu} \bigr)(\partial^2_x + \partial_x \partial_t), \end{align*} one deduces as above that \begin{align*} \| [P_\mu, \Box_{g_{<\sqrt{\lambda}}} ] (u_\lambda^+ v_\lambda^+)\|_{L^2} &\lesssim \frac{\mu}{\lambda} \| u_\lambda^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v_\lambda^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}.
\end{align*} For the first term, write \begin{align*} \| P_\mu \Box_{g_{<\sqrt{\lambda}}} (u_\lambda^+ v_\lambda^+)\|_{L^2} &\lesssim \| P_\mu ( \Box_{g_{<\sqrt{\lambda}}} u_\lambda^+ \cdot v_{\lambda}^+ )\|_{L^2} + \| P_\mu ( u_{\lambda}^+ \cdot \Box_{g_{<\sqrt{\lambda}}} v_\lambda^+ )\|_{L^2} \\ &\quad + \| P_\mu Q_{<\sqrt{\lambda}} (u_{\lambda}^+, v_{\lambda}^+)\|_{L^2}. \end{align*} By Bernstein, H\"{o}lder, and energy estimates, the first two terms are bounded by \begin{align*} \mu\| \Box_{g_{<\sqrt{\lambda}}} u_{\lambda}^+ \cdot v_\lambda^+ \|_{L^2L^1} + \mu\| u_\lambda^+ \cdot \Box_{g_{<\sqrt{\lambda}}} v_{\lambda}^+\|_{L^2L^1} \lesssim \frac{\mu}{\lambda} \| u_\lambda^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v_\lambda^+\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{align*} The main null form term is estimated using Lemma~\ref{l:HHLnullform}. Collecting all the estimates, we obtain the $ \Box_{g_{<\sqrt{\mu}}} $ part of \eqref{HH:nonresonant}. The estimate \eqref{HH:ang:nonresonant} is proved similarly. For the $ L^2 $ part we appeal to Lemma~\ref{l:HHLnullform2}, while for the $\Box_{g_{<\sqrt{\mu}}}$ part we argue as above to first bound \begin{align*} \sum_{\theta} \mu^{\frac{1}{2} - 1} \|(\Box_{g_{<\sqrt{\mu}}} P_\mu - P_\mu \Box_{g_{<\sqrt{\lambda}}}) (\phi_{\theta, \lambda}^\alpha u \, v_{-\theta, \lambda}^\alpha)\|_{L^2} \lesssim \mu^{-\frac{1}{2}} \Bigl[\frac{\mu}{\lambda} + \lambda^{-\frac{3}{2}}\Bigr] \|u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \|v \|_{X^{1, \frac{1}{2}}_{\lambda, 1}}, \end{align*} and then estimate $\|P_\mu \Box_{g_{<\sqrt{\lambda}}} (\phi_{\theta, \lambda}^\alpha u \, v_{-\theta, \lambda}^\alpha)\|_{L^2}$ as before. Modulo the preceding lemmas, the proof of Proposition~\ref{p:nonresonant} is complete.
\ {\bf Step~4.} {\bf (Almost resonant interactions)} We have arrived at the key part of the argument, involving \eqref{bil:dec:uv} for any signs $ \pm_1, \pm_2 $ in Case 1, respectively the signs $ \pm_1 \pm_2=- $ in Case 2, with the sum restricted to $ \alpha_{\lambda}=\lambda^{-\frac{1}{2}} < \alpha \leq \frac{\mu}{\lambda} $. Thus, in Case 2 we may assume $ \sqrt{\lambda} < \mu $. The remaining parts of Case 2 are non-resonant and were treated in Step 3. To show that $ u^{\pm_1} \cdot v^{\pm_2} $ lies in $X_{\lambda', \leq \mu,\infty}^{0,\frac{1}{4}}[I] $, respectively $ X_{\mu, [d_0,\mu],\infty}^{0,\frac{1}{4}}[I] $, it suffices to use the decomposition \eqref{bil:dec:uv} and for each $ \alpha $ to define an appropriate modulation $ d $ such that the $ \alpha $-term is in $X_{\lambda', d}^{0,\frac{1}{4}}[I] $, resp. $ X_{\mu, d}^{0,\frac{1}{4}}[I] $. This proceeds as follows: \ \textbf{Case 1. }($ \eta=\mu $). Define\footnote{\label{note1} The choice of $ d $ is motivated by the Fourier analysis of the constant coefficients case; see \cite{Tat}.} $ d=\mu \alpha^2 $. When $ \alpha $ ranges from $ \alpha_{\mu} $ to $ 1 $, $ d $ ranges from $ D $ to $ \mu $ and it suffices to prove, under \eqref{input:normalization}: \begin{equation} \label{bil:alpha:X:LH:I} \sum_{\theta \in \Omega_{\alpha}} \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta,\mu} }_{ X_{\lambda', \mu \alpha^2}^{0,\frac{1}{4}}[I]} \lesssim \mu^{\frac{3}{4}}. \end{equation} \ \textbf{Case 2. }($ \eta=\lambda' \simeq \lambda $). Define\footref{note1} $ d=\frac{\lambda^2 \alpha^2}{\mu} $.
When $ \alpha $ ranges from $ \alpha_{\lambda} $ to $ \frac{\mu}{\lambda} $, $ d $ ranges from $ d_0= \frac{\lambda}{\mu} $ to $ \mu $ and it suffices to prove, under \eqref{input:normalization}: \begin{equation} \label{bil:alpha:X:HH:I} \sum_{\theta \in \Omega_{\alpha}} \vn{ P_{\mu} \big( \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta,\lambda'} \big) }_{ X_{\mu, \frac{\lambda^2 \alpha^2}{\mu}}^{0,\frac{1}{4}}[I]} \lesssim \frac{\lambda}{\mu^{\frac{1}{4}}}. \end{equation} Both cases are based on the following proposition, which incorporates the characteristic energy estimate from Section \ref{Char:en:subsec} and the uniform bounds on wave packets. \begin{proposition} \label{Prop:Z} Let $ \alpha > \alpha_{\eta} $. In the decomposition \eqref{bil:dec:uv}, under \eqref{input:normalization} we have\footnote{ The estimate \eqref{NF:NE:WP:summed} shows that the effect of the null form $ Q_{g_{<\sqrt{\lambda}}} $ is $ \lambda \eta \alpha^2 $, which is a familiar factor from the constant coefficients case, given the angular localization.} \begin{align} \label{NE:WP:summed} \sum_{\theta \in \Omega_{\alpha}} \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} }_{L^2[I]} & \lesssim \frac{\eta^{\frac{1}{2}}}{ \alpha^{\frac{1}{2}}}, \\ \label{NF:NE:WP:summed} \sum_{\theta \in \Omega_{\alpha}} \vn{Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} \big) }_{L^2[I]} & \lesssim (\lambda \eta \alpha^2) \frac{\eta^{\frac{1}{2}}}{ \alpha^{\frac{1}{2}}}. \end{align} \end{proposition} \ The $ L^2 $ part of both \eqref{bil:alpha:X:LH:I} and \eqref{bil:alpha:X:HH:I} follows immediately from \eqref{NE:WP:summed}, by discarding the multiplier $ P_{\mu} $ if needed.
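We record, for the reader's convenience, that the modulation range in Case 2 is indeed $[d_0, \mu]$; substituting the endpoints of the $\alpha$-range into $d = \frac{\lambda^2\alpha^2}{\mu}$ gives \begin{align*} \alpha = \alpha_{\lambda} = \lambda^{-\frac{1}{2}} \implies d = \frac{\lambda^2 \cdot \lambda^{-1}}{\mu} = \frac{\lambda}{\mu} = d_0, \qquad \alpha = \frac{\mu}{\lambda} \implies d = \frac{\lambda^2}{\mu} \cdot \frac{\mu^2}{\lambda^2} = \mu. \end{align*}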
For the $ \Box_{g_{<\sqrt{\mu}}} $ part of \eqref{bil:alpha:X:HH:I} we write \begin{equation} \label{Box:S:comm:diff} \Box_{g_{<\sqrt{\mu}}} P_{\mu}= P_{\mu} \Box_{g_{<\sqrt{\lambda}}} +\big( \Box_{g_{<\sqrt{\mu}}} - \Box_{g_{<\sqrt{\lambda}}} \big) P_{\mu} + [\Box_{g_{<\sqrt{\lambda}}}, P_{\mu}]. \end{equation} For the first term, we discard $P_\mu$ and write \begin{align} \label{Box:dec:Q} \Box_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} \big) = & \ 2 \ Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} \big) + \\ \nonumber & \Box_{g_{<\sqrt{\lambda}}} \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} + \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot \Box_{g_{<\sqrt{\lambda}}} v^{\alpha,\pm_2}_{\theta}. \end{align} In both \eqref{bil:alpha:X:LH:I} and \eqref{bil:alpha:X:HH:I}, the bound for the $ Q_{g_{<\sqrt{\lambda}}} $ term follows from \eqref{NF:NE:WP:summed}, while the bounds for the other terms follow from \eqref{sum:tht:box:1}. For the other two terms in~\eqref{Box:S:comm:diff}, we note first of all that since \begin{align*} \mu^{-1}d^{\frac{1}{4}-1} \| (\Box_{g_{<\sqrt{\mu}}} - \Box_{g_{<\sqrt{\lambda}}} ) P_\mu w\|_{L^2} \lesssim \mu^{-1} d^{\frac{1}{4}-1} \|\nabla_{t,x} P_\mu w\|_{L^\infty L^2} \lesssim d^{-\frac{1}{2}} \| P_\mu w\|_{X^{0, \frac{1}{4}}_{\mu, \frac{\lambda^2\alpha^2}{\mu}} [I]}, \end{align*} the second term may be absorbed in the left side of~\eqref{bil:alpha:X:HH:I} so long as $\mu \ll \lambda$, which guarantees that $d \gg 1$.
Also, since $\mu > \sqrt{\lambda}$ we may estimate \begin{align} \label{comm:ngfreq} \| [\Box_{g_{<\sqrt{\lambda}}}, P_\mu] w\|_{L^2} \lesssim \mu \| P_{[\mu/2, 2\mu]} w\|_{L^2} + \| \partial_t P_{[\mu/2, 2\mu]} w \|_{L^2}, \end{align} and again use~\eqref{tildeX:loc} to obtain \begin{align} \label{term:rhs:lhs} &\sum_{\theta \in \Omega_\alpha} \| P_\mu (\phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v_{\theta, \lambda'}^{\alpha, \pm_2}) \|_{X^{0, \frac{1}{4}}_{\mu, \frac{\lambda^2\alpha^2}{\mu}} [I]} \\ \nonumber &\lesssim \frac{\lambda}{\mu^{\frac{1}{4}}} + \sum_{\theta \in \Omega_\alpha} \mu^{-1} d^{\frac{1}{4}-1} \| \partial_t P_{[\mu/2, 2\mu]} (\phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v_{\theta, \lambda'}^{\alpha, \pm_2})\|_{L^2}\\ \nonumber &\lesssim \frac{\lambda}{\mu^{\frac{1}{4}}} + \sum_{\theta \in \Omega_\alpha} d^{-1}\| P_{[\mu/2, 2\mu]} (\phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v_{\theta, \lambda'}^{\alpha, \pm_2}) \|_{X^{0, \frac{1}{4}}_{\mu, \frac{\lambda^2\alpha^2}{\mu}} [I]}. \end{align} The summation on the right side is handled by another perturbative argument. For fixed $\alpha > \lambda^{-\frac{1}{2}}$ and a small constant $c>0$, let $M$ be the best constant in~\eqref{bil:alpha:X:HH:I} which is uniform in $\mu \le c \lambda$. Invoking Proposition~\ref{p:nonresonant} and \eqref{bil:alpha:X:LH:I} for $\mu < \alpha \lambda$ and $\mu > c\lambda$, respectively, we have $M \lesssim 1 + d_{c\lambda}^{-1} M$, where $d_\mu := \lambda^2\alpha^2/\mu$, and the last term may be absorbed into the left side if $c$ is sufficiently small. \ In what follows we complete the proof of Proposition \ref{PropX} by establishing Proposition \ref{prop:alphamin}, Corollary \ref{cor:alphamin}, Proposition \ref{Prop:Z}, Lemma \ref{l:HHLfreqtails} and Lemmas \ref{l:HHLnullform}, \ref{l:HHLnullform2}.
\subsubsection{\bf Proof of Proposition \ref{Prop:Z}} We successively consider the two estimates in the proposition. \ {\bf (1)} Using \eqref{sq:sum:chi} and \eqref{reg:coeff:1} the estimate \eqref{NE:WP:summed} reduces to proving \begin{equation} \label{NE:WP:summed:red} \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\alpha,\pm_2}_{\theta} }_{L^2} \lesssim \frac{\eta^{\frac{1}{2}}}{ \alpha^{\frac{1}{2}}} \vn{ \chi_{\theta}^0 u^{\pm_1} }_{L^2 } \big( \sum_{ T, \omega_T \sim (\alpha,\pm \theta) } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}} \end{equation} for any $ \theta \in \Omega_{\alpha} $. Recall \begin{equation} \label{v:tht:sum} v^{\alpha,\pm_2}_{\theta} =\sum_{\omega, \omega \sim (\alpha,\pm \theta) } v^{\omega,\pm_2}, \end{equation} where $ v^{\omega,\pm_2} $ is given by \eqref{v:omega:pm}. Since there are $ \simeq \alpha \eta^{\frac{1}{2}}$ directions $ \omega $ in the sum \eqref{v:tht:sum}, the estimate \eqref{NE:WP:summed:red} follows by summing the following bound with Cauchy-Schwarz in $ \omega $: \begin{equation} \label{NE:WP:summed:red2} \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\omega,\pm_2} }_{L^2} \lesssim \frac{\eta^{\frac{1}{4}}}{ \alpha} \vn{ \chi_{\theta}^0 u^{\pm_1} }_{L^2 } \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}}. \end{equation} Since for any $ \omega $ we have $$ I \times \mathbb{R}^2 = \bigcup_{T, x_T \in \Xi_{\eta}^{\omega}} T $$ and the $ T $'s are finitely overlapping, we obtain \eqref{NE:WP:summed:red2} if we have \begin{equation} \vn{ \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot P_{\eta} c_T u_T }_{L^2(T')} \lesssim \frac{1}{\langle d(T,T') \rangle^N} \frac{\eta^{\frac{1}{4}}}{ \alpha} \vn{ \chi_{\theta}^0 u^{\pm_1} }_{L^2 } \vm{c_T}_{L^{\infty}_t} \end{equation} for any $ T,T' $ with $ \omega_T=\omega_{T'}=\omega $. This follows from \eqref{wp:Linfty:T} and Corollary \ref{c:char-energy-tubes}.
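For the reader's convenience, the bookkeeping behind this reduction: Cauchy--Schwarz over the $\simeq \alpha \eta^{\frac{1}{2}}$ directions $\omega$ converts the constant in \eqref{NE:WP:summed:red2} into the one in \eqref{NE:WP:summed:red}, since \begin{align*} (\alpha \eta^{\frac{1}{2}})^{\frac{1}{2}} \cdot \frac{\eta^{\frac{1}{4}}}{\alpha} = \frac{\eta^{\frac{1}{2}}}{\alpha^{\frac{1}{2}}}. \end{align*}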
\ {\bf (2)} The proof of \eqref{NF:NE:WP:summed} proceeds similarly by reducing to \begin{equation} \label{NF:NE:WP:summed:red2} \vn{Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot v^{\omega,\pm_2} \big) }_{L^2} \lesssim (\lambda \eta \alpha^2) \frac{\eta^{\frac{1}{4}}}{ \alpha} \vn{ \chi_{\theta}^1 u^{\pm_1} }_{L^2} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 + \vm{c_T'}_{L^{2}_t}^2 \big)^{\frac{1}{2}} \end{equation} for every $ \omega $, based on \eqref{reg:coeff:1}, \eqref{reg:coeff:2}, \eqref{sq:sum:chi} with $ j=1 $. Associated to $ g_{\sqrt{\lambda}} $ and to $ \omega $ we consider vector fields $ L, \underline{L}, E $ which form a null frame as in Section \ref{s:nullframe}. Then we can express the null form as $$ 2 Q_{g_{<\sqrt{\lambda}}} \big(u,v \big) = Lu \cdot \underline{L}v + \underline{L} u \cdot Lv - 2 Eu \cdot Ev. $$ For the term $ L \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot \underline{L} v^{\omega,\pm_2} $ we proceed as before, reducing to \begin{align*} \vn{L \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot \underline{L} P_{\eta} c_T u_T }_{L^2(T')} & \lesssim \vn{ L \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} }_{L^2(T')} \vn{\underline{L} P_{\eta}\big( c_T u_T \big)}_{L^{\infty}(T')} \\ & \lesssim \lambda \eta^{\frac{5}{4}} \alpha \frac{1}{\langle d(T,T') \rangle^N} \vn{ \chi_{\theta}^1 u^{\pm_1} }_{L^2} \vm{c_T}_{L^{\infty}_t}, \end{align*} which holds due to \eqref{wp:barL:cT} and Corollary \ref{c:char-energy-tubes}.
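Note that the constant produced here is exactly the one required in \eqref{NF:NE:WP:summed:red2}, since \begin{align*} (\lambda \eta \alpha^2) \, \frac{\eta^{\frac{1}{4}}}{\alpha} = \lambda \eta^{\frac{5}{4}} \alpha. \end{align*}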
The terms $$ \underline{L} \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot L v^{\omega,\pm_2} \qquad \text{and} \qquad E \phi_{\theta,\lambda}^{\alpha, \pm_1} u^{\pm_1} \cdot E v^{\omega,\pm_2} $$ are easier: here we obtain the corresponding part of \eqref{NF:NE:WP:summed:red2} by H\"older, using \eqref{wp:L:T:sum}, \eqref{wp:E:T:sum}, \eqref{E:est:phi}, \eqref{barL:est:phi} and the fact that $ \alpha \geq \eta^{-\frac{1}{2}} $. \ \subsubsection{\bf Proof of Proposition \ref{prop:alphamin}} For \eqref{sum:tht:alphamin:L2}--\eqref{sum:tht:alphamin:Q:L1L2} we may assume without loss of generality that $ v^{\alpha_{\eta},\pm_2}_{\theta} = v^{\omega,\pm_2} $. \ {\bf (1)} For \eqref{sum:tht:alphamin:L2} we use H\"older's inequality $$ \vn{ \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} \cdot v^{\omega,\pm_2} }_{ L^2} \lesssim \vn{ \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} }_{L^2} \vn{v^{\omega,\pm_2} }_{ L^{\infty}}. $$ Square summing this with \eqref{wp:Linfty:T:sum} and Prop. \ref{p:orthogonality} we obtain \eqref{sum:tht:alphamin:L2}. \ {\bf(2)} By square summing and Cauchy-Schwarz we reduce \eqref{sum:tht:alphamin:Q:L2} to $$ \vn{ Q_{g_{<\sqrt{\lambda}}} \big( \phi_{\theta,\lambda}^{\alpha_{\eta}, \pm_1} u^{\pm_1} \cdot v^{\omega,\pm_2} \big) }_{ L^2} \lesssim \lambda \eta^{\frac{3}{4}} \|\chi_{\theta}^{\alpha_\eta} u^{\pm_1 }\|_{L^2} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 + \vm{c_T'}_{L^{2}_t}^2 \big)^{\frac{1}{2}} $$ for some square-summable $\chi_\theta^{\alpha_\eta}$. Associated to $ g_{\sqrt{\lambda}} $ and to $ \omega $ we consider vector fields $ L, \underline{L}, E $ which form a null frame as in Section \ref{s:nullframe}.
Then we express the null form as $$ 2 Q_{g_{<\sqrt{\lambda}}} \big(u,v \big) = Lu \cdot \underline{L}v + \underline{L} u \cdot Lv - 2 Eu \cdot Ev. $$ We use H\"older's inequality: $L^2 L^2 \times L^\infty L^\infty \to L^2 L^2$ and \eqref{L:est:phi}, \eqref{wp:barL:T:sum} for $Lu \cdot \underline{L}v$; $ L^{\infty} L^2 \times L^2 L^{\infty} \to L^2 L^2 $ with \eqref{barL:est:phi}, \eqref{wp:L:T:sum} for the second term; and $L^2 L^2 \times L^\infty L^\infty \to L^2 L^2$ with \eqref{E:est:phi}, \eqref{wp:E:T:sum} for the third term. \ {\bf(3)} For \eqref{sum:tht:alphamin:Q:L1L2} we use H\"older $ L^{\infty} L^2 \times L^2 L^{2} \to L^2 L^1 $ together with the null frame $ L, \underline{L}, E $ associated to $ g_{\sqrt{\lambda}} $ and to $ \theta $, using \eqref{E:est:phi}, \eqref{barL:est:phi}, \eqref{L:est:phi} and \begin{align*} & \vn{E v^{\omega,\pm_2} }_{L^\infty L^2[I]}^2 \lesssim \sum_{ T, \omega_T=\omega } \eta \vm{c_T}_{L^{\infty}_t}^2, \\ & \vn{L v^{\omega,\pm_2} }_{L^2[I]}^2 \lesssim \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 + \vm{c_T'}_{L^{2}_t}^2, \\ & \vn{\underline{L} v^{\omega,\pm_2} }_{L^\infty L^2[I]}^2 \lesssim \sum_{ T, \omega_T=\omega } \eta^2 \vm{c_T}_{L^{\infty}_t}^2. \end{align*} \ {\bf(4)} The first part of \eqref{sum:tht:box:1} follows from $ L^2 \times L^{\infty} \to L^2 $ with Prop. \ref{e:char-e-commutator1}, \eqref{reg:coeff:1} and \begin{equation} \label{v:alpha:tht:infty} \vn{ v^{\alpha,\pm_2}_{\theta} }_{L^{\infty}} \lesssim \eta \alpha^{\frac{1}{2}} \big( \sum_{ T, \omega_T \sim (\alpha,\pm \theta) } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}}.
\end{equation} The second part of \eqref{sum:tht:box:1} follows from $ L^{\infty} L^2 \times L^2 L^{\infty} \to L^2 L^2 $ based on \eqref{O:est:phi} and \begin{equation} \label{Box:v:alpha:tht:infty} \Big( \sum_{\theta \in \Omega_{\alpha}} \vn{ \Box_{g_{<\sqrt{\lambda}}} v^{\alpha,\pm_2}_{\theta} }_{L^2 L^{\infty}}^2 \Big)^{\frac{1}{2}} \lesssim \eta^2 \alpha^{\frac{1}{2}}. \end{equation} This estimate is obtained by decomposing as in \eqref{v:tht:sum}, using Cauchy-Schwarz in $ \omega $, using \eqref{reg:coeff:1}--\eqref{reg:coeff:3} and $$ \vn{ \Box_{g_{<\sqrt{\lambda}} } v^{\omega,\pm_2}}_{L^2 L^{\infty}} \lesssim \eta^{\frac{7}{4}} \big( \sum_{ T, \omega_T=\omega } \vm{c_T}_{L^{\infty}_t}^2 + \vm{c_T'}_{L^{2}_t}^2 + \eta^{-2} \vm{c_T''}_{L^{2}_t}^2 \big)^{\frac{1}{2}}. $$ {\bf(5)} Finally, the proof of \eqref{sum:tht:box:2} is similar to the proof of \eqref{sum:tht:box:1} in {\bf(4)}, except that we use $ L^2 L^2 \times L^{\infty} L^2 \to L^2 L^1 $ for the first part and $ L^{\infty} L^2 \times L^2 L^2 \to L^2 L^1 $ for the second part. Here \eqref{v:alpha:tht:infty} is replaced by $$ \vn{ v^{\alpha,\pm_2}_{\theta} }_{L^{\infty} L^2} \lesssim \big(\sum_{ T, \omega_T \sim (\alpha,\pm \theta) } \vm{c_T}_{L^{\infty}_t}^2 \big)^{\frac{1}{2}}, $$ while \eqref{Box:v:alpha:tht:infty} is replaced by $$ \Big( \sum_{\theta \in \Omega_{\alpha}} \vn{ \Box_{g_{<\sqrt{\lambda}}} v^{\alpha,\pm_2}_{\theta} }_{L^2 L^{2}}^2 \Big)^{\frac{1}{2}} \lesssim \eta. $$ \subsubsection{\bf Proof of Corollary \ref{cor:alphamin} } The estimate \eqref{bil:almin:LH:I} follows immediately from \eqref{sum:tht:alphamin:L2}, \eqref{Box:dec:Q}, \eqref{sum:tht:alphamin:Q:L2} and \eqref{sum:tht:box:1} for $ \alpha=\alpha_{\mu}$. For \eqref{bil:almin:HH:I} we consider two cases: \begin{enumerate} \item $ \lambda^{\frac{1}{2}} \leq \mu $ and $ d_0= \frac{\lambda}{\mu} $ \item $ \mu \leq \lambda^{\frac{1}{2}} $ and $ d_0=\mu $.
\end{enumerate} The $ L^2 $ part of \eqref{bil:almin:HH:I} also follows from \eqref{sum:tht:alphamin:L2} in both cases (in Case (2) we use $ \mu^{\frac{1}{2}} \leq \lambda^{\frac{1}{4}} $). For the $ \Box_{g_{<\sqrt{\mu}}} P_{\mu} $ part of Case (1), since $ \lambda^{\frac{1}{2}} \leq \mu $ we may freely replace $ \Box_{g_{<\sqrt{\mu}}} P_{\mu} $ by $ \Box_{g_{<\sqrt{\lambda}}} P_{\mu} $. We estimate the $ P_{\mu} \Box_{g_{<\sqrt{\lambda}}} $ term by \eqref{Box:dec:Q}, \eqref{sum:tht:alphamin:Q:L2} and \eqref{sum:tht:box:1} for $ \alpha=\alpha_{\lambda'}$. Then we treat $ [P_{\mu}, \Box_{g_{<\sqrt{\lambda}}} ] $ by the argument used to prove \eqref{comm:ngfreq}, \eqref{term:rhs:lhs}. For the $ \Box_{g_{<\sqrt{\mu}}} P_{\mu} $ part of Case (2) we denote $ w=\phi_{\theta,\lambda}^{\alpha_{\lambda'}, \pm_1} u^{\pm_1} \cdot v^{\alpha_{\lambda'},\pm_2}_{\theta,\lambda'} $ and write \begin{align*} \Box_{g_{<\sqrt{\mu}}} P_{\mu} w= P_{\mu} \Box_{g_{<\sqrt{\lambda}}} w - \sum_{\nu \in [\sqrt{\mu},\sqrt{\lambda}]} P_{\mu} \big( g_{\nu} \cdot \partial_{x} \partial_{t,x} P_{\lesssim \max (\nu,\mu)} w \big) + [\Box_{g_{<\sqrt{\mu}}}, P_{\mu}]w. \end{align*} We will use Bernstein's inequality $ P_{\mu}: L^2 L^1 \to \mu L^2 L^2 $. For the first term we invoke the $ L^2 L^1 $ estimates \eqref{sum:tht:alphamin:Q:L1L2} and \eqref{sum:tht:box:2} for $ \alpha=\alpha_{\lambda'}$.
For the second term we discard the $ P_{\mu} $ and write \begin{align*} & \vn{g_{\nu} \cdot \partial_{x} \partial_{t,x} P_{\lesssim \max (\nu,\mu)} w }_{L^2 L^1} \lesssim \frac{1}{\nu^2} \max (\nu,\mu) \vn{\partial^2 g_{\nu}}_{L^2 L^{\infty}} \times \\ \times & \Big( \vn{ \partial_{t,x} \phi_{\theta,\lambda}^{\alpha_{\lambda'}, \pm_1} u^{\pm_1} }_{L^{\infty} L^2} \vn{ v^{\alpha_{\lambda'},\pm_2}_{\theta,\lambda'} }_{L^{\infty} L^2} + \vn{ \phi_{\theta,\lambda}^{\alpha_{\lambda'}, \pm_1} u^{\pm_1} }_{L^{\infty} L^2} \vn{ \partial_{t,x} v^{\alpha_{\lambda'},\pm_2}_{\theta,\lambda'} }_{L^{\infty} L^2} \Big), \end{align*} which is more than enough. The third term is estimated similarly since it is of the form $ \approx \mu^{-1} \nabla g \, \partial_{x} \partial_{t,x} $. \subsubsection{\bf Proof of Lemma \ref{l:HHLfreqtails}} We remark first of all that by standard pseudo-differential calculus arguments (see for example~\cite{hormanderv3}), if $A(x, \xi) \in S(1, g_{\alpha})$ with $\alpha \ge \alpha_\lambda := \lambda^{-1/2}$ and $v_T$ is a frequency $\lambda$ packet, then so is $A(X, D) v_T$. From this one infers \begin{lemma} \label{e:wp-pdo-tail} Let $v_T$ be a frequency $\lambda$ packet with $\omega(T) = \omega$. Suppose $|\theta -\omega| \le \alpha/4$ and $\phi_\theta(t, x, \xi) = \chi \bigl(\frac{ |\wht{\xi} - \wht{\xi_\theta}(t, x)|}{ \alpha} \bigr)P_\lambda(\xi)$ where $\chi$ is smooth and supported in $[-1,1]$. Then \begin{align*} \|(1 -\phi_\theta(t,X,D)) v_T\|_{WP} = O( (\alpha^2\lambda)^{-\infty}). \end{align*} \end{lemma} \begin{proof} Put $\psi = 1-\phi_\theta$. Without loss of generality assume $v_T$ is centered at $x =0$. Let $\chi(x)$ be a bump function supported in the ball $|x| \le \alpha/16$.
Then $\chi \psi$ only admits input frequencies outside the sector $\angle (\xi, \omega) \lesssim \alpha$, so $\|\chi \psi(t,X,D) v_T\|_{WP} = O( (\alpha^2\lambda)^{-\infty})$. On the other hand, the spatial decay of $\psi(t,X,D) v_T$ given by our initial remark implies that $\|(1-\chi) \psi(t,X,D) v_T\|_{WP} = O( (\alpha^2\lambda)^{-\infty})$ as well. \end{proof} Using Lemma~\ref{e:wp-pdo-tail} we may write \begin{align*} v_{-\theta, \lambda}^\alpha = \psi_{-\theta}^\alpha v_{-\theta, \lambda}^\alpha + r_{\theta}^\alpha, \end{align*} where $\psi_{-\theta}^\alpha = \chi \bigl(\frac{ |\wht{\xi} + \wht{\xi_\theta}(t, x)|}{ \alpha} \bigr)s_\lambda(\xi)$ for a smooth cutoff function $\chi$, and where $r_{\theta}^\alpha =\sum_T a_T r_T$ is a sum of packets with $\|r_T\|_{WP} = O( (\alpha^2\lambda)^{-\infty})$. It remains to decompose the pseudo-differential term $P_\mu(\phi^\alpha_{\theta, \lambda} u \cdot \psi_{-\theta}^{\alpha} v_{-\theta, \lambda}^\alpha)$. We write $v := v_{-\theta, \lambda}^\alpha$ and omit the dependence on $\alpha$, $\lambda$, and $t$ in the other notations. For any $w \in L^2$, \begin{align*} \langle P_\mu (\phi_{\theta} u \psi_{-\theta} v), w \rangle &= \int e^{i \langle x, \xi + \eta + \zeta\rangle} \phi_{\theta}(x, \xi) \psi_{-\theta} (x, \eta) s_\mu(\zeta) \wht{u}(\xi) \wht{v}(\eta) \wht{w} (\zeta) \, d\xi d\eta d\zeta dx\\ &= i \int e^{i \langle x, \xi + \eta + \zeta \rangle} \langle \partial_x, F(\xi, \eta, \zeta) \rangle \phi_{\theta}(x, \xi) \psi_{-\theta} (x, \eta) s_\mu(\zeta) \\ &\times\wht{u}(\xi) \wht{v}(\eta) \wht{w} (\zeta) \, d\xi d\eta d\zeta dx, \end{align*} where $ F = \tfrac{\xi + \eta + \zeta }{ | \xi +\eta + \zeta|^2 }. $ As $\mu \ll \alpha \lambda$, the denominator of $F$ is bounded from below, $|\xi + \eta + \zeta| \gtrsim \alpha \lambda$, on the support of the integrand.
Introducing slightly wider cutoffs $\tilde{\phi}_{\theta}$, $\tilde{\psi}_{-\theta}$ so that $\tilde{\phi}_{\pm \theta} \phi_{\pm \theta} = \phi_{\pm \theta}$, we separate variables, for instance using a Fourier series expansion \begin{align*} \tilde{\phi}_{\theta} \tilde{\psi}_{-\theta} s_{<2\mu} F = (\alpha \lambda)^{-1} \sum_{\vec{k}} a_{\vec{k}}(x) e_{1, k_1}(\xi) e_{2, k_2} (\eta) e_{3, k_3} (\zeta), \end{align*} where $e_{1, k_1}$ and $e_{3, k_3}$ are Fourier characters adapted to $\lambda \times (\alpha \lambda)$ rectangles and $e_{2, k_2}$ are adapted to $\mu \times \mu$ rectangles. By the derivative bounds~\eqref{e:flow-deriv} with $\alpha=\sqrt{\lambda}$, the coefficients $ a_{\vec{k}} = (\alpha \lambda) \langle \tilde{\phi}_{\theta} \tilde{\psi}_{-\theta} s_{<2\mu} F , e_{\vec{k}} \rangle $ satisfy \begin{align} \label{e:ak_deriv_bounds} \partial_x^j a_{\vec{k}} = O( \lambda^{\frac{j-1}{2}} |\vec{k}|^{-N}), \quad j \ge 1.
\end{align} Consequently, the integral takes the form \begin{align*} &(\alpha \lambda)^{-1}\sum_{\vec{k}} \int e^{i\langle x, \xi + \eta + \zeta\rangle} a_{\vec{k}}(x) \bigl[(\partial_x \phi_{\theta})(x, \xi) \psi_{-\theta}(x, \eta) + \phi_\theta (x, \xi) (\partial_x \psi_{-\theta} ) (x, \eta)\bigr]\\ &\times s_\mu(\zeta) e_{1, k_1}(\xi) e_{2, k_2}(\eta) e_{3, k_3}(\zeta)\wht{u}(\xi) \wht{v}(\eta) \wht{w}(\zeta) \, d\xi d\eta d\zeta dx\\ &= \Bigl\langle \sum_{j=1, 2} \sum_{\vec{k}} (\alpha \lambda)^{-1} \phi^j_{\theta, \vec{k}} u \, \psi^j_{-\theta, \vec{k}} v, \, P_{\mu, \vec{k}} (D)w \Bigr\rangle, \end{align*} where \begin{align*} \phi^1_{\theta, \vec{k}}(x, \xi) &= |k|^{2N/3} a_{\vec{k}}(x) (\partial_x \phi_\theta) e_{1, k_1}(\xi), \quad \phi^2_{\theta, \vec{k}}(x, \xi) = |k|^{2N/3} a_{\vec{k}}(x) \phi_\theta e_{1, k_1}(\xi),\\ \psi^1_{-\theta, \vec{k}}(x, \eta) &= |k|^{-N/3}\psi_{-\theta} (x, \eta) e_{2, k_2}(\eta), \quad \psi^2_{-\theta, \vec{k}}(x, \eta) = |k|^{-N/3}(\partial_x\psi_{-\theta}) (x, \eta) e_{2, k_2}(\eta),\\ P_{\mu, \vec{k}}(\zeta) &= |k|^{-N/3} s_\mu (\zeta) e_{3, k_3}(\zeta). \end{align*} The $\partial_x$ is harmless since $\phi_{\theta}, \ \psi_{-\theta}$ are Lipschitz in $x$, so we have a rapidly converging series expansion \begin{align*} P_\mu( \phi_{\theta} u \psi_{-\theta} v) = \sum_{j=1, 2} \sum_{\vec{k}} (\alpha \lambda)^{-1} P_{\mu, \vec{k}}(D) ( \phi^j_{\theta, \vec{k} } u \, \psi^j_{-\theta, \vec{k}} v), \end{align*} where $\phi^j_{\theta, \vec{k}}, \ \psi^j_{-\theta, \vec{k}} \in \langle \vec{k} \rangle^{-N} S(1, g_{\alpha_\lambda} )$ and retain the angular localization of $\phi_\theta, \ \psi_{-\theta}$. Although these are not quite localized in output frequency, the bounds~\eqref{e:ak_deriv_bounds} imply rapid decay away from frequency $\lambda$ on the $\sqrt{\lambda}$ scale. By Lemma~\ref{l:CV} these tails are negligible for square summability in $\theta$.
\subsubsection{\bf Proof of Lemmas~\ref{l:HHLnullform} and \ref{l:HHLnullform2}} Both parts can be proved in parallel. Without loss of generality we consider just the $++$ and $+-$ interactions. Begin with the decompositions \begin{align*} u^+ v^- = \sum_{\lambda^{-1/2} \le \alpha \le 1} \sum_{\theta} (\phi_{\theta, \lambda}^\alpha u) v_{-\theta, \lambda}^\alpha, \qquad u^{+} v^{+} = \sum_{\alpha} \sum_\theta (\tilde{\phi}_{\theta, \lambda}^{\alpha, \pm} u) v_{\theta, \lambda}^{\alpha, \pm}, \end{align*} where $v^\alpha_{\pm\theta, \lambda} = \sum_{|\omega \mp \theta| \sim \alpha} v^\omega$ is a sum of frequency $\lambda$ packets. The arguments involved depend on the relations among $\alpha$, $\mu/\lambda$, and $\lambda^{-1/2}$; for each of the following cases we discuss the null form estimates (Cases 1a, 2a, 3a, 4) and the $L^2$ estimates (Cases 1b, 2b, 3b, 4) for both $++$ and $+-$ interactions. Note that Lemma~\ref{l:HHLnullform2} is covered by Cases 2 and 4 below. Denote $Q := Q_{g_{<\sqrt{\lambda}}}$. \textbf{Case 1}: $\lambda^{-1/2} \le \alpha \le \mu/\lambda$. \textbf{Case 1a$++$}: We can use Cauchy-Schwarz in $\omega$ to obtain \begin{align*} \|P_\mu Q(\tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha ) \|_{L^2} \lesssim (\alpha^2 \lambda)^{\frac{1}{4}} \Bigl( \sum_{|\omega - \theta| \sim \alpha} \| P_\mu Q( \tilde{\phi}_{\theta, \lambda}^\alpha u, v^\omega) \|_{L^2}^2 \Bigr)^{1/2}. \end{align*} For each $\omega$ we expand $Q$ using the null frame adapted to the direction $\omega$ and discard the mollifier $P_\mu$.
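Here the prefactor $(\alpha^2\lambda)^{\frac{1}{4}}$ is again a Cauchy--Schwarz gain from the number of directions $\omega$ in the sum: assuming, as in the proof of Proposition~\ref{Prop:Z} (with $\eta = \lambda$), that there are $\simeq \alpha \lambda^{\frac{1}{2}}$ directions $\omega$ with $|\omega - \theta| \sim \alpha$, one has \begin{align*} (\alpha \lambda^{\frac{1}{2}})^{\frac{1}{2}} = (\alpha^2 \lambda)^{\frac{1}{4}}. \end{align*}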
For the term $\underline{L} \tilde{\phi}^\alpha_{\theta, \lambda} u L v^\omega$, H\"{o}lder, energy estimates, and the wave packet bounds~\eqref{wp:L:T:sum} imply \begin{align*} \| \underline {L} \tilde{\phi}^\alpha_{\theta, \lambda} u L v^\omega\|_{L^2} &\lesssim \| \underline {L} \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{L^\infty L^2}\| L v^\omega \|_{L^2L^\infty} \lesssim \lambda^{-\frac{1}{4}}\| \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v^\omega\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{align*} For the other terms we use the characteristic energy estimate \eqref{e:CE1} and the packet bounds~\eqref{wp:Linfty:T} through \eqref{wp:barL:T} to get \begin{align*} \| E\tilde{\phi}^\alpha_{\theta, \lambda} u E v^\omega \|_{L^2}^2 &\lesssim \bigl(\sum_T |c_T|_{L^{\infty}_t}^2 \lambda^{\frac{3}{2} + 1}\bigr) \cdot \sup_T \| E \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{L^2(T)}^2 \\ &\lesssim \lambda^{-\frac{1}{2}} \| \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}^2 \sum_{\omega_T = \omega} \lambda^2|c_T|_{L^{\infty}_t}^2,\\ \| L\tilde{\phi}^\alpha_{\theta, \lambda} u \underline{L} v^\omega\|_{L^2}^2 &\lesssim \| \sum_{T} c_T L \tilde{\phi}^\alpha_{\theta, \lambda} u \underline{L} u_T \|_{L^2}^2 + \bigl\| L \tilde{\phi}^\alpha_{\theta, \lambda} u \sum_T c_T' u_T \bigr\|_{L^2}^2 \\ &\lesssim \bigl (\sum_T |c_T|_{L^{\infty}_t}^2 \lambda^{\frac{3}{2} + 2}\bigr) \cdot \sup_T \| L \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{L^2(T)}^2 + \| L \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{L^\infty L^2}^2 \| \sum_T c_T' u_T\|_{L^2 L^\infty}^2\\ &\lesssim \lambda^{\frac{1}{2}} \| \tilde{\phi}^\alpha_{\theta, \lambda} u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}^2 \sum_{\omega_T = \omega } \lambda^2|c_T|_{L^{\infty}_t}^2 + \| \tilde{\phi}_{\theta, \lambda}^\alpha u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}}^2 \sum_{\omega_T = \omega} \lambda^{\frac{3}{2}} |c_T'|_{L^2_t}^2. \end{align*} Overall \begin{align*} \| P_\mu Q( \tilde{\phi}_{\theta, \lambda}^\alpha u, v^\omega)\|_{L^2} &\lesssim \lambda^{\frac{1}{4}} \|\tilde{\phi}^\alpha_{\theta, \lambda} u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{\omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}},\\ \| P_\mu Q(\tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha ) \|_{L^2} &\lesssim \mu^{\frac{1}{2}} \Bigl( \frac{ \alpha \lambda}{\mu} \Bigr)^{\frac{1}{2}} \|\tilde{\phi}^\alpha_{\theta, \lambda} u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{|\omega - \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}, \end{align*} which can be summed in $\alpha$ when $\alpha \in [\lambda^{-1/2},\, \mu/\lambda]$. \textbf{Case 1a$+-$}: In this case the waves $\phi^\alpha_{\theta, \lambda} u$ and $v^\omega$ propagate at small relative angles, so by Proposition~\ref{p:char-energy} the characteristic energy estimate for the term $L\phi^\alpha_{\theta, \lambda} u \underline{L}v^\omega$ improves by a factor of $\alpha$.
The previous estimates are replaced by \begin{align*} \| P_\mu Q( \phi^\alpha_{\theta, \lambda} u, v^\omega)\|_{L^2} &\lesssim \alpha \lambda^{\frac{1}{4}} \|\phi^\alpha_{\theta, \lambda} u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{\omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}},\\ \| P_\mu Q(\phi^\alpha_{\theta, \lambda} u, v_{-\theta, \lambda}^\alpha ) \|_{L^2} &\lesssim \alpha \mu^{\frac{1}{2}} \Bigl( \frac{ \alpha \lambda}{\mu} \Bigr)^{\frac{1}{2}} \|\phi^\alpha_{\theta, \lambda} u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{|\omega + \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda|c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}\\ &\lesssim \mu^{\frac{1}{2}} \frac{\mu}{\lambda} \Bigl( \frac{\alpha \lambda}{\mu} \Bigr)^{\frac{3}{2}} \Bigl( \sum_{|\omega + \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}. \end{align*} \textbf{Case 1b$++$}: For the $L^2$ estimate we also write \begin{align*} \|P_\mu (\tilde{\phi}^\alpha_{\theta, \lambda} u v_{\theta, \lambda}^\alpha ) \|_{L^2} \lesssim (\alpha^2 \lambda)^{\frac{1}{4}} \Bigl( \sum_{|\omega - \theta| \sim \alpha} \| P_\mu ( \tilde{\phi}^\alpha_{\theta, \lambda} u v^\omega) \|_{L^2}^2 \Bigr)^{1/2}. \end{align*} For each $\omega$, discard $P_\mu$ and use the pointwise bounds for the packets in $v^\omega$ and the characteristic $L^2$ estimate for $\tilde{\phi}^\alpha_{\theta, \lambda} u$ along characteristic surfaces for $v^\omega$ (the second part of Proposition~\ref{p:char-energy} with $\beta \sim 1$).
\begin{align*} \|\tilde{\phi}^\alpha_{\theta, \lambda} u v^\omega\|_{L^2} &\lesssim \sup_T \|\tilde{\phi}^\alpha_{\theta, \lambda} u\|_{L^2(T)} \cdot \lambda^{\frac{1}{4}} \Bigl( \sum_{T} |c_T|_{L^\infty_t}^2\Bigr)^{\frac{1}{2}} \lesssim \lambda^{\frac{1}{4}} \|\chi_{\theta}^{\alpha} u \|_{L^2} \Bigl( \sum_{T} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}, \end{align*} where the $\chi_{\theta}^\alpha$ are square-summable in $\theta$. Thus \begin{align*} \| \tilde{\phi}^\alpha_{\theta, \lambda} u v_{\theta, \lambda}^\alpha\|_{L^2} &\lesssim (\alpha \lambda)^{\frac{1}{2}} \|\chi_{\theta}^\alpha u\|_{L^2} \Bigl(\sum_{|\omega - \theta| \sim \alpha} \sum_{ \omega(T) = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}} \\ &\lesssim \mu^{\frac{1}{2}} \Bigl(\frac{\alpha \lambda}{\mu}\Bigr)^{\frac{1}{2}}\|\chi_{\theta}^\alpha u\|_{L^2} \Bigl(\sum_{|\omega - \theta| \sim \alpha} \sum_{ \omega(T) = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}. \end{align*} \textbf{Case 1b$+-$}: The argument of the $++$ case would yield a loss of $\alpha^{-1}$ due to small angles ($\beta = \alpha$ in Corollary 6.9): \begin{align*} \| \phi^\alpha_{\theta, \lambda} u v_{-\theta, \lambda}^\alpha \|_{L^2} \lesssim \mu^{\frac{1}{2}} \Bigl( \frac{\alpha \lambda}{\mu}\Bigr)^{\frac{1}{2}} \alpha^{-1} \|\chi_{\theta}^\alpha u\|_{L^2} \Bigl(\sum_{|\omega + \theta| \sim \alpha} \sum_{ \omega(T) = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}.
\end{align*} Instead we use the method of Case 3 below, with a null foliation \emph{transverse} to $\phi^\alpha_{\theta, \lambda} u$, to estimate \begin{align*} \| P_\mu (\phi^\alpha_{\theta, \lambda} u v_{-\theta, \lambda}^\alpha)\|_{L^2} &\lesssim \mu \Bigl( \sum_j \| \phi_{\theta, \lambda}^\alpha u \|_{L^2(\Sigma_j)}^2 \|v_{-\theta, \lambda}^\alpha\|_{L^\infty L^2(\Sigma_j)}^2 \Bigr)^{\frac{1}{2}}\\ &\lesssim \mu^{\frac{1}{2}} \|\chi_{\theta}^\alpha u\|_{L^2} \Bigl(\sum_{|\omega + \theta| \sim \alpha } \sum_{\omega_T = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}. \end{align*} \textbf{Case 2}: $\alpha > \mu/\lambda \ge \lambda^{-1/2}$. \textbf{Case 2a$++$}: By Lemma~\ref{l:HHLfreqtails}, the estimates of the previous case hold with an additional factor of $(\alpha \lambda)^{-1} + (\alpha^2\lambda)^{-N}$ for any $N$: \begin{align*} &\| P_\mu Q(\tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha ) \|_{L^2} \\ &\lesssim [(\alpha \lambda)^{-1} + (\alpha^2\lambda)^{-N}] \mu^{\frac{1}{2}} \Bigl( \frac{ \alpha \lambda}{\mu} \Bigr)^{\frac{1}{2}} \|\tilde{\phi}_{\theta, \lambda}^\alpha u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{|\omega - \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}, \end{align*} which is summable in $\alpha$.
\textbf{Case 2a$+-$}: \begin{align*} &\| P_\mu Q(\phi_{\theta, \lambda}^\alpha u, v_{-\theta, \lambda}^\alpha ) \|_{L^2} \\ &\lesssim [(\alpha \lambda)^{-1} + (\alpha^2\lambda)^{-N}] \mu^{\frac{1}{2}} \frac{\mu}{\lambda} \Bigl( \frac{ \alpha \lambda}{\mu} \Bigr)^{\frac{3}{2}} \|\phi_{\theta, \lambda}^\alpha u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{|\omega + \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}\\ &\lesssim \mu^{\frac{1}{2}} \frac{\mu}{\lambda} \Bigl( \alpha^{\frac{1}{2}} \mu^{-\frac{1}{2}} \frac{\lambda^{\frac{1}{2}} }{\mu} + (\alpha^2\lambda)^{-N+\frac{3}{4}} \Bigl(\frac{\lambda^{\frac{1}{2}}}{\mu}\Bigr)^{\frac{3}{2}} \Bigr) \|\phi_{\theta, \lambda}^\alpha u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \Bigl( \sum_{|\omega + \theta| \sim \alpha} \sum_{ \omega_T = \omega} \lambda^2 |c_T|_{L^\infty_t}^2 + \lambda |c_T'|_{L^2_t}^2 \Bigr)^{\frac{1}{2}}. \end{align*} \textbf{Case 2b$++$}: The same considerations as for the null form estimate yield \begin{align*} \| P_\mu (\tilde{\phi}_{\theta, \lambda}^\alpha u v_{\theta, \lambda}^\alpha ) \|_{L^2} &\lesssim [(\alpha \lambda)^{-1} + (\alpha^2\lambda)^{-N}] \mu^{\frac{1}{2}} \Bigl( \frac{ \alpha \lambda}{\mu} \Bigr)^{\frac{1}{2}} \|\chi_{\theta}^\alpha u \|_{L^2} \Bigl( \sum_{|\omega - \theta| \sim \alpha} \sum_{ \omega_T = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}. \end{align*} \textbf{Case 2b$+-$}: \begin{align*} \| P_\mu (\phi_{\theta, \lambda}^\alpha u v_{-\theta, \lambda}^\alpha)\|_{L^2} &\lesssim [(\alpha\lambda)^{-1} + (\alpha^2\lambda)^{-N}]\mu^{\frac{1}{2}} \|\chi_{\theta}^\alpha u\|_{L^2} \Bigl(\sum_{|\omega + \theta| \sim \alpha } \sum_{\omega_T = \omega} |c_T|_{L^\infty_t}^2 \Bigr)^{\frac{1}{2}}.
\end{align*} \textbf{Case 3}: $\mu \le \lambda^{1/2}$, $\alpha \approx \lambda^{-1/2}$. \textbf{Case 3a$++$}: A direct application of Bernstein and energy estimates would yield \begin{align*} \| P_\mu Q( \tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha )\|_{L^2} \lesssim \mu \| \tilde{\phi}_{\theta, \lambda}^\alpha u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v_{\theta, \lambda}^\alpha \|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{align*} The additional $\mu^{-\frac{1}{2}}$ gain required is precisely what would result from a characteristic energy estimate over a $\mu^{-1}$ neighborhood of a null surface. Using the null foliation $\Lambda_{h}$ adapted to $v_{\theta, \lambda}^\alpha$, we partition spacetime into ``null slabs'' of thickness $\mu^{-1}$: \begin{align*} \mathbb{R}^{1+2} = \bigcup_j \bigcup_{h \in [j \mu^{-1}, (j+1)\mu^{-1}] } \Lambda_{h} =: \bigcup_j \Sigma_j. \end{align*} Since the mollifier $P_\mu$ averages functions on the $\mu^{-1}$ spatial scale, we have roughly \begin{align} \label{e:x-localized-bernstein} \|P_\mu f \|_{L^2( \Sigma_j)} ``\lesssim'' \mu\| f\|_{L^2 L^1( \Sigma_j)}. \end{align} This is not quite accurate, since the kernel of $P_\mu$ is not compactly supported. However, by partitioning the kernel one can decompose $P_\mu = \sum_k P_\mu^k$, where for any function $f(x)$ and any set $K \subset \mathbb{R}^2$ one has \begin{align*} \| P_\mu^k f\|_{L^2( K) } \lesssim_N 2^{-kN} \mu \|f\|_{L^1( K + B(0, 2^k\mu^{-1}))} \text{ for any } N. \end{align*} Hence in the sequel we shall ignore this imprecision in the Bernstein estimate. We write \begin{align*} \| P_\mu Q ( \tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha) \|_{L^2}^2 &\lesssim \sum_{j}\| P_\mu Q ( \tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha ) \|_{L^2(\Sigma_j)}^2.
\end{align*} Using the ``space-localized'' Bernstein estimate~\eqref{e:x-localized-bernstein} and the null frame for $\Lambda$, we estimate \begin{align*} \| P_\mu Q( \tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha)\|_{L^2(\Sigma_j)} &\lesssim \mu \| Q(\tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha)\|_{L^2L^1(\Sigma_j)} \\ &\lesssim \mu \| L\tilde{\phi}_{\theta, \lambda}^\alpha u \|_{L^2(\Sigma_j)} \| \underline{L} v_{\theta, \lambda}^\alpha \|_{L^\infty L^2 (\Sigma_j)} \\ &+ \mu \| E \tilde{\phi}_{\theta, \lambda}^\alpha u \|_{L^2( \Sigma_j)} \| E v_{\theta, \lambda}^\alpha \|_{L^\infty L^2(\Sigma_j)} \\ &+ \mu \| \underline{L} \tilde{\phi}_{\theta, \lambda}^\alpha u \|_{L^\infty L^2(\Sigma_j)} \| L v_{\theta, \lambda}^\alpha \|_{L^2( \Sigma_j)}. \end{align*} Apply the characteristic energy estimate~\eqref{e:CE1} to $\tilde{\phi}_{\theta, \lambda}^\alpha u$ for the first two terms and to $v_{\theta, \lambda}^\alpha$ for the third term, thus obtaining a factor of $\mu^{-\frac{1}{2}}$. Since each of the resulting three products remains localized to $\Sigma_j$ in one factor, we may square-sum both sides in $j$ to conclude that \begin{align*} \| P_\mu Q ( \tilde{\phi}_{\theta, \lambda}^\alpha u, v_{\theta, \lambda}^\alpha) \|_{L^2} &\lesssim \mu^{\frac{1}{2}} (\| \nabla \tilde{\phi}_{\theta, \lambda}^\alpha u\|_{L^\infty L^2}+ \| \Box u\|_{L^1 L^2}) (\| \nabla v_{\theta, \lambda}^\alpha \|_{L^\infty L^2} + \| \Box v_{\theta, \lambda}^\alpha \|_{L^1 L^2})\\ &\lesssim \mu^{\frac{1}{2}} \| \tilde{\phi}_{\theta, \lambda}^\alpha u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \|v_{\theta, \lambda}^\alpha \|_{X^{1, \frac{1}{2}}_{\lambda, 1}}.
\end{align*} \textbf{Case 3a$+-$}: In this case we omit the spacetime partition and directly apply Bernstein, H\"{o}lder, and the following small-angle improvements of the above estimates: \begin{gather*} \lambda \| L \phi_{\theta, \lambda}^\alpha u \|_{L^2} + \lambda^{\frac{1}{2}} \|E \phi_{\theta, \lambda}^\alpha u\|_{L^2}\lesssim \| \phi_{\theta, \lambda}^\alpha u\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \text{ (by Lemma~\ref{e:char-e-commutator1})},\\ \| E v_{-\theta, \lambda}^\alpha \|_{L^\infty L^2} \lesssim \lambda^{-\frac{1}{2}} \| v_{-\theta, \lambda}^\alpha\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \text{ (by~\eqref{wp:E:T})},\\ \Bigl( \sum_{j} \| Lv_{-\theta, \lambda}^\alpha\|_{L^\infty L^2(\Sigma_j)}^2\Bigr)^{\frac{1}{2}} \lesssim \lambda^{-1} \| v_{-\theta, \lambda}^\alpha\|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \text{ (by~\eqref{wp:L:T})}. \end{gather*} Consequently, \begin{align*} \| P_\mu Q( \phi_{\theta, \lambda}^\alpha u, v_{-\theta, \lambda}^\alpha)\|_{L^2} \lesssim \frac{\mu}{\lambda} \| \phi_{\theta, \lambda}^\alpha u \|_{X^{1, \frac{1}{2}}_{\lambda, 1}} \| v_{-\theta, \lambda}^\alpha \|_{X^{1, \frac{1}{2}}_{\lambda, 1}}. \end{align*} \textbf{Case 3b$+\pm $}: For the $L^2$ estimate we again use~\eqref{e:x-localized-bernstein} and invoke the second part of Corollary~\ref{c:char-energy}, with $\beta \sim 1$: \begin{align*} \| P_\mu ( \phi_{\theta, \lambda}^\alpha u \, v_{-\theta, \lambda}^\alpha)\|_{L^2(\Sigma_j)} &\lesssim \mu \| \phi_{\theta, \lambda}^\alpha u \|_{L^2(\Sigma_j)} \| v_{-\theta, \lambda}^\alpha\|_{L^\infty L^2(\Sigma_j)}\\ &\lesssim \mu^{\frac{1}{2}} \| \chi_{\theta}^\alpha u\|_{L^2} \|v_{-\theta, \lambda}^\alpha\|_{L^\infty L^2(\Sigma_j)}, \end{align*} which can then be square-summed in $j$ and then in $\theta$. \textbf{Case 4}: $\mu \le \lambda^{1/2}$, $\alpha > \lambda^{-1/2}$. Combine the arguments from Cases 2 and 3.
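Let us also justify the kernel decomposition $P_\mu = \sum_k P_\mu^k$ used in Case 3; this is a standard sketch, and the kernels $m_k$ are notation introduced only here. Write $P_\mu f = m_\mu * f$ in the spatial variables, with $m_\mu$ Schwartz and concentrated on $B(0,\mu^{-1})$, and split $m_\mu = \sum_{k \ge 0} m_k$ with $m_k$ supported in $\{ |x| \sim 2^k \mu^{-1} \}$ (respectively $\{ |x| \lesssim \mu^{-1} \}$ for $k = 0$). The rapid decay of $m_\mu$ gives $\| m_k \|_{L^2} \lesssim_N 2^{-kN} \mu$ in two space dimensions, and for $x \in K$ only $y \in K + B(0, 2^k \mu^{-1})$ contribute to $m_k * f(x)$, so by Minkowski's inequality
$$ \| P_\mu^k f \|_{L^2(K)} \le \| m_k \|_{L^2} \| f \|_{L^1( K + B(0, 2^k\mu^{-1}))} \lesssim_N 2^{-kN} \mu \| f \|_{L^1( K + B(0, 2^k\mu^{-1}))}. $$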
\ Finally, to deduce~\eqref{dt:pu:u:v:pm} we write $u_\lambda := u^{\pm_1}$, $v_\lambda := v^{\pm_2}$, and use the Leibniz rule to bound the left-hand side of~\eqref{dt:pu:u:v:pm} by \begin{align*} \|P_\mu(\partial_t u_\lambda v_\lambda )\|_{L^2} + \| P_\mu(u_\lambda \partial_t v_\lambda)\|_{L^2}. \end{align*} To estimate the first term, decompose \[u_{\lambda} = P_{\le 8\lambda}(D_t)u_\lambda + P_{>8\lambda}(D_t) u_\lambda = u_{\lambda}^{<\lambda} + u_{\lambda}^{>\lambda}. \] The high-frequency piece satisfies the elliptic estimate \begin{align*} \| \nabla_{t,x} u_\lambda^{>\lambda} \|_{L^2} \lesssim \lambda^{-1}( \| \nabla u_\lambda \|_{L^2} + \| \Box u_\lambda \|_{L^2}), \end{align*} whose proof is similar to that of Proposition~\ref{p:half-wave}. Then, by Bernstein and H\"{o}lder, \begin{align*} \| P_\mu (\partial_t u_\lambda^{>\lambda} v_\lambda)\|_{L^2} &\lesssim \mu \|\partial_t u_\lambda^{>\lambda} \|_{L^2} \| v_\lambda\|_{L^\infty L^2}\\ &\lesssim \mu \| u_\lambda\|_{X^{0, \frac{1}{2}}_{\lambda, 1}} \| v_{\lambda} \|_{X^{0, \frac{1}{2}}_{\lambda, 1}}, \end{align*} which is more than acceptable. On the other hand, the estimate~\eqref{e:HHL-L2} gives \begin{align*} \| P_\mu ( \partial_t u_\lambda^{<\lambda} v_\lambda)\|_{L^2} \lesssim \mu^{\frac{1}{2}} \| \partial_t u_\lambda^{<\lambda} \|_{X_{\lambda, 1}^{0, \frac{1}{2}}} \| v_\lambda\|_{X_{\lambda, 1}^{0, \frac{1}{2}}}, \end{align*} and by a simple commutation argument $\|\partial_t u_\lambda^{<\lambda} \|_{X^{0, \frac{1}{2}}_{\lambda, 1}} \lesssim \lambda \| u_\lambda \|_{X^{0, \frac{1}{2}}_{\lambda, 1}}$. To treat the term $\|P_\mu(u_\lambda \partial_t v_\lambda)\|_{L^2}$, we repeat the proofs of Cases 1b, 2b, 3b, and 4, and use \eqref{wp:barL:T} as well as the Bernstein estimates~\eqref{reg:coeff:3}, \eqref{reg:coeff:4} for the wave packet coefficients.
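The commutation argument just invoked can be sketched as follows: since $u_\lambda^{<\lambda} = P_{\le 8\lambda}(D_t) u_\lambda$ is localized to temporal frequencies $|\tau| \le 8\lambda$, Plancherel in time gives
$$ \| \partial_t u_\lambda^{<\lambda} \|_{L^2} \le 8 \lambda \| u_\lambda^{<\lambda} \|_{L^2} \lesssim \lambda \| u_\lambda \|_{L^2}, $$
and the analogous gain holds for the remaining components of the $X^{0,\frac{1}{2}}_{\lambda,1}$ norm, up to commutators of $P_{\le 8\lambda}(D_t)$ with the low-frequency metric coefficients, which are lower order. This is only a sketch; the precise commutator bounds are as in the references above.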
\subsection{Proof of Proposition \ref{Lemma:bil:inter}} We successively consider the bounds in the proposition: \ {\bf(1)} \ First we consider the low modulation cases \eqref{bil:inter:lowmod:LH} and \eqref{bil:inter:lowmod:HH}. Let $ (\chi^j)_j$ be a partition of unity with respect to time intervals $ (I_j)_j $ of length $ \simeq d_{\max}^{-1} $ and let $ (\tilde{\chi}^j)_j, (\tilde{\tilde{\chi}}^j)_j $ be similar families such that $ \tilde{\chi}^j = 1 $ on $ I_j $ and $ \tilde{\tilde{\chi}}^j =1 $ on the support of $ \tilde{\chi}^j $. By rescaling from Proposition \ref{PropX} using \eqref{eq:scaling} we obtain the product estimates for frequency-localized functions: \begin{equation} \label{HLXMap} X_{\lambda,d_{\max}}^{1,\frac{1}{2}}[I_j] \cdot X_{\mu,d_{\max}}^{1,\frac{1}{2}}[I_j] \longrightarrow X_{\lambda',[d_{\max},\mu]}^{1,\frac{1}{2}}[I_j]. \end{equation} Note that the regularity of the metric's coefficients improves with the rescaling. Suppose, for instance, that $ d_1 \leq d_2 = d_{\max} $. Using the properties \eqref{time:ort:1}, \eqref{time:ort:2}, \eqref{X:modulations:intervals:eq} we have \begin{align*} \vn{ u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{ X_{\lambda',[d_{\max},\mu]}^{1,\frac{1}{2}}}^2 & \lesssim \sum_j \vn{ \chi^j ( u_{\lambda,d_1} \cdot v_{\mu,d_2} )}_{ X_{\lambda',[d_{\max},\mu]}^{1,\frac{1}{2}}}^2 \\ & \lesssim \sum_j \vn{ \tilde{\chi}^j u_{\lambda,d_1} \cdot \tilde{\chi}^j v_{\mu,d_2} }_{ X_{\lambda',[d_{\max},\mu]}^{1,\frac{1}{2}}[I_j]}^2 \\ & \lesssim \sum_j \vn{ \tilde{\tilde{\chi}}^j \tilde{\chi}^j u_{\lambda,d_1}}_{X_{\lambda,d_{\max}}^{1,\frac{1}{2}}[I_j]}^2 \vn{ \tilde{\tilde{\chi}}^j \tilde{\chi}^j v_{\mu,d_2}}_{X_{\mu,d_{\max}}^{1,\frac{1}{2}}[I_j]}^2 \\ & \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}}^2 \sum_j \vn{ \tilde{\chi}^j v_{\mu,d_2}}_{X_{\mu,d_2}^{1,\frac{1}{2}}}^2 \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}}^2 \vn{ v_{\mu,d_2}}_{X_{\mu,d_2}^{1,\frac{1}{2}}}^2.
\end{align*} The same argument gives \eqref{bil:inter:lowmod:HH}. \begin{remark} \label{rk:prod:bfMos} Note that between the rescaled \eqref{bil:main:LH} and \eqref{HLXMap} we have used the factor $ \big( \frac{d}{\mu} \big)^{\frac{1}{4}} $ to sum over $ d $, the modulation of the output. If we choose to keep this factor, the same argument gives decompositions $$ P_{\lambda'}(u_{\lambda,d_1} \cdot v_{\mu,d_2})=\sum_{d=d_{\max}}^{\mu} w_{\lambda',d}, \qquad P_{\mu} ( u_{\lambda,d_1} \cdot v_{\lambda',d_2} )=\sum_{d=d_{\max}}^{\mu/2} w_{\mu,d} + w_{\mu,\mu}, $$ for $ 1 \leq d_1,d_2 \leq \mu $, under the assumptions of Proposition~\ref{Lemma:bil:inter}\,(1), such that \begin{align} \label{opt:bil:inter:lowmod:HL} \vn{w_{\lambda',d}}_{ X_{\lambda',d}^{1,\frac{1}{2}}} & \lesssim \big( \frac{d}{\mu} \big)^{\frac{1}{4}} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}}, \\ \label{opt:bil:inter:lowmod:HH} \vn{w_{\mu,d} }_{ X_{\mu,d}^{1,\frac{1}{2}}} & \lesssim \frac{\mu}{\lambda} \big( \frac{d}{\mu} \big)^{\frac{1}{4}} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}}, \\ \label{opt:bil:inter:lowmod:HH2} \vn{w_{\mu,\mu} }_{ \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}}} & \lesssim \frac{\mu}{\lambda} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}}. \end{align} These will be useful in the proof of the Moser-type estimate. \end{remark} {\bf(2)} \ Recall that by Bernstein's inequality and the energy estimate \eqref{energy:X} we have $$ \vn{v_{\mu}}_{L^{\infty}_{t,x}} \lesssim \vn{ v_{\mu}}_{ X_{\mu, d_2}^{1,\frac{1}{2}} }. $$ We now prove \eqref{bil:inter:highmod:LH}.
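In two space dimensions, the $ L^{\infty}_{t,x} $ bound recalled above is simply the chain (a sketch, assuming the energy estimate \eqref{energy:X} controls $ \vn{\nabla v_{\mu}}_{L^{\infty} L^2} $):
$$ \vn{v_{\mu}}_{L^{\infty}_{t,x}} \lesssim \mu \vn{v_{\mu}}_{L^{\infty} L^2} \lesssim \vn{ \nabla v_{\mu}}_{L^{\infty} L^2} \lesssim \vn{ v_{\mu}}_{ X_{\mu, d_2}^{1,\frac{1}{2}} }, $$
where the first step is Bernstein on the spatial frequency annulus $ \{ |\xi| \sim \mu \} $ and the second uses the frequency localization of $ v_{\mu} $.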
We have $$ \lambda d_1^{\frac{1}{2}} \vn{ u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{L^2} \lesssim \big( \lambda d_1^{\frac{1}{2}} \vn{ u_{\lambda,d_1}}_{L^2} \big) \vn{v_{\mu,d_2}}_{L^{\infty}} \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}}. $$ For $ \Box_{ g_{<\sqrt{\lambda'}} } (u_{\lambda,d_1} \cdot v_{\mu,d_2} ) $ we consider \begin{align*} & d_1^{-\frac{1}{2}} \vn{ \Box_{ g_{<\sqrt{\lambda}} }u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{L^2} \lesssim d_1^{-\frac{1}{2}} \vn{ \Box_{ g_{<\sqrt{\lambda}} }u_{\lambda,d_1}}_{L^2} \vn{v_{\mu,d_2}}_{L^{\infty}}, \\ & d_1^{-\frac{1}{2}} \vn{ ( \Box_{ g_{<\sqrt{\lambda'}}} - \Box_{ g_{<\sqrt{\lambda}} } )u_{\lambda,d_1} \cdot v_{\mu,d_2} }_{L^2} \lesssim \lambda \vn{ u_{\lambda,d_1}}_{L^2} \vn{v_{\mu,d_2}}_{L^{\infty}}, \\ & d_1^{-\frac{1}{2}} \vn{ u_{\lambda,d_1} \cdot \Box_{ g_{<\sqrt{\mu}} } v_{\mu,d_2} }_{L^2} \lesssim d_1^{-\frac{1}{2}} \vn{ u_{\lambda,d_1}}_{L^{\infty} L^2} \vn{\Box_{ g_{<\sqrt{\mu}} } v_{\mu,d_2}}_{L^2 L^{\infty}} \\ & \qquad \qquad \qquad \qquad \qquad \quad \lesssim \frac{ \mu}{\lambda} \vn{ \nabla u_{\lambda,d_1}}_{L^{\infty} L^2} \, d_2^{-\frac{1}{2}} \vn{\Box_{ g_{<\sqrt{\mu}} } v_{\mu,d_2}}_{L^2 }, \\ & d_1^{-\frac{1}{2}} \vn{ u_{\lambda,d_1} \cdot ( \Box_{ g_{<\sqrt{\lambda'}}} - \Box_{ g_{<\sqrt{\mu}} } ) v_{\mu,d_2} }_{L^2} \lesssim d_1^{-\frac{1}{2}} \vn{ u_{\lambda,d_1}}_{L^2} \, \mu \vn{ v_{\mu,d_2}}_{L^{\infty}}, \\ & d_1^{-\frac{1}{2}} \vn{ \partial u_{\lambda,d_1} \cdot \partial v_{\mu,d_2} }_{L^2} \lesssim d_1^{-\frac{1}{2}} \vn{ \partial u_{\lambda,d_1}}_{L^2} \vn{\partial v_{\mu,d_2} }_{L^{\infty}} \\ & \qquad \qquad \qquad \qquad \qquad \lesssim \frac{\mu}{d_1} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{\partial v_{\mu,d_2} }_{L^{\infty} L^2}. \end{align*} Each of the five terms above is $ \lesssim \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\mu,d_2} }_{X_{\mu,d_2}^{1,\frac{1}{2}}} $.
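For orientation, the five contributions correspond to the Leibniz-type expansion (with $ u = u_{\lambda,d_1} $, $ v = v_{\mu,d_2} $, written schematically)
$$ \Box_{ g_{<\sqrt{\lambda'}} }(u v) = \Box_{ g_{<\sqrt{\lambda}} } u \cdot v + \big( \Box_{ g_{<\sqrt{\lambda'}}} - \Box_{ g_{<\sqrt{\lambda}} } \big) u \cdot v + u \cdot \Box_{ g_{<\sqrt{\mu}} } v + u \cdot \big( \Box_{ g_{<\sqrt{\lambda'}}} - \Box_{ g_{<\sqrt{\mu}} } \big) v + 2 g_{<\sqrt{\lambda'}}^{\alpha \beta} \partial_{\alpha} u \, \partial_{\beta} v, $$
which follows from the identity $ \Box_g (uv) = \Box_g u \cdot v + u \cdot \Box_g v + 2 g^{\alpha \beta} \partial_{\alpha} u \, \partial_{\beta} v $ applied with $ g = g_{<\sqrt{\lambda'}} $, after adding and subtracting the operators with truncations adapted to each factor.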
We continue with the proof of \eqref{bil:inter:highmod:HH}. Here we will use Bernstein's inequality $$ \vn{P_{\mu} w}_{L^2_{t,x}} \lesssim \mu \vn{w}_{L^2 L^1}. $$ Assume without loss of generality that $ d_1=d_{\max} $. We have \begin{align} \nonumber \mu^{\frac{5}{2}} \vn{ u_{\lambda,d_1} \cdot v_{\lambda',d_2} }_{ L^2 L^1} & \lesssim \mu^{\frac{5}{2}} \vn{u_{\lambda,d_1}}_{L^2} \vn{v_{\lambda',d_2 }}_{L^{\infty} L^2} \\ \label{bil:easy1} & \lesssim \Big( \frac{\mu}{\lambda} \Big)^2 \Big( \frac{\mu}{d_{\max}} \Big)^{\frac{1}{2}} \vn{ u_{\lambda,d_1}}_{X_{\lambda,d_1}^{1,\frac{1}{2}}} \vn{ v_{\lambda',d_2} }_{X_{\lambda',d_2}^{1,\frac{1}{2}}} . \end{align} The same argument applies for the $ \nabla_{t,x} $ term in the $ L^2$ part of $ \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}} $. For the $ L^{\infty} L^2 $ parts we use Bernstein, the chain rule, and $ L^{\infty} L^2 \times L^{\infty} L^2 \to L^{\infty} L^1 $. For $ \Box_{ g_{<\sqrt{\mu}} }( u_{\lambda,d_1} \cdot v_{\lambda',d_2} ) $ we split as before: \begin{align*} & \vn{ \Box_{ g_{<\sqrt{\lambda}} } u_{\lambda,d_1} \cdot v_{\lambda',d_2} }_{L^2 L^1} \lesssim \vn{ \Box_{ g_{<\sqrt{\lambda}} }u_{\lambda,d_1}}_{L^2} \vn{v_{\lambda',d_2} }_{L^{\infty} L^2}, \\ & \vn{ u_{\lambda,d_1} \cdot \Box_{ g_{<\sqrt{\lambda'}} } v_{\lambda',d_2} }_{L^2 L^1} \lesssim \vn{ u_{\lambda,d_1}}_{L^{\infty} L^2} \vn{\Box_{ g_{<\sqrt{\lambda'}} } v_{\lambda',d_2} }_{L^2}, \\ & \vn{ \partial u_{\lambda,d_1} \cdot \partial v_{\lambda',d_2} }_{L^2 L^1} \lesssim \vn{ \partial u_{\lambda,d_1}}_{L^2} \vn{ \partial v_{\lambda',d_2} }_{L^{\infty} L^2}, \\ & \vn{ ( \Box_{ g_{<\sqrt{\mu}} }- \Box_{ g_{<\sqrt{\lambda}} } ) u_{\lambda,d_1} \cdot v_{\lambda',d_2} }_{L^2 L^1} \lesssim \frac{\lambda}{\mu} \vn{ \partial u_{\lambda,d_1}}_{L^2} \vn{v_{\lambda',d_2} }_{L^{\infty} L^2}, \\ & \vn{ u_{\lambda,d_1} \cdot ( \Box_{ g_{<\sqrt{\mu}} }- \Box_{ g_{<\sqrt{\lambda'}} } ) v_{\lambda',d_2} }_{L^2 L^1} \lesssim \vn{ u_{\lambda,d_1}}_{L^2}
\frac{\lambda'}{\mu} \vn{ \partial v_{\lambda',d_2} }_{L^{\infty} L^2}. \end{align*} Each of the five terms, times $ \mu^{\frac{1}{2}} $, is $ \lesssim \|u_{\lambda, d_1}\|_{X^{1, \frac{1}{2}}_{\lambda, d_1}} \| v_{\lambda', d_2} \|_{X^{1, \frac{1}{2}}_{\lambda', d_2} } $, which completes the proof. \ \section{The product estimate \texorpdfstring{\eqref{prod:est}}{}} \label{Sec:Prod:est} \ We now turn our attention to the proof of \eqref{prod:est}. We recall the notation $ X=X^{s,\theta}$ and $N=X^{s-1,\theta-1} $ with $ \theta=\frac{1}{2}+ \varepsilon' $ and $ s=1+\varepsilon'+ \varepsilon $. The duality property \eqref{duality} states $$ N=( X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon} )'. $$ Therefore, by duality, to obtain \eqref{prod:est} it suffices to prove $$ \vn{u^1 \cdot u^2}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}} \lesssim \vn{u^1}_{X^{s,\theta}} \vn{u^2}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}}. $$ We reduce this estimate to the following bounds: \begin{align} \vn{u^1 \cdot u^2}_{L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}} \lesssim \vn{u^1}_{X^{s,\theta}} \vn{u^2}_{L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}}, \label{prod:red1} \\ \vn{u^1 \cdot u^2}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}} \lesssim \vn{u^1}_{X^{s,\theta}} \vn{u^2}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}}. \label{prod:red2} \end{align} For both estimates we use the Littlewood-Paley trichotomy and reduce to estimates for terms $ P_{\lambda_3} (u^1_{\lambda_1} \cdot u^2_{\lambda_2} ) $. \ We begin with \eqref{prod:red1}.
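Before doing so, we record how the two bounds combine to give the sum-space estimate: given any near-optimal decomposition $ u^2 = u^{2,X} + u^{2,L} $ with
$$ \vn{u^{2,X}}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}} + \vn{u^{2,L}}_{L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}} \leq 2 \vn{u^2}_{X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}}, $$
we apply \eqref{prod:red2} to $ u^1 \cdot u^{2,X} $ and \eqref{prod:red1} to $ u^1 \cdot u^{2,L} $, and sum the two outputs in the sum space.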
In the high-high to low case $ \lambda_1 \simeq \lambda_2 $ and in the low-high case $ \lambda_1 \ll \lambda_2 \simeq \lambda_3 $ we use H\"older's inequality $ L^{\infty} L^{\infty} \times L^{2} L^{2} \to L^{2} L^{2} $ together with Bernstein's inequality $ \tilde{P}_{\lambda_1} : L^{\infty} L^2 \to \lambda_1 L^{\infty} L^{\infty} $. In the high-low case $ \lambda_2 \ll \lambda_1 \simeq \lambda_3 $ we use H\"older's inequality $ L^{\infty} L^{2} \times L^{2} L^{\infty} \to L^{2} L^{2} $ together with Bernstein's inequality $ \tilde{P}_{\lambda_2} : L^{2} L^2 \to \lambda_2 L^{2} L^{\infty} $. \ Now we turn to the proof of \eqref{prod:red2}. We write $$ u_{\lambda_1}^1= \sum_{d_1=1}^{\lambda_1} u_{\lambda_1,d_1}^1, \qquad \qquad \vn{u_{\lambda_1}^1 }_{X^{s,\theta}_{\lambda_1}}^2 \simeq \sum_{d_1=1}^{\lambda_1} \vn{u_{\lambda_1,d_1}^1 }_{X^{s,\theta}_{\lambda_1,d_1}}^2, $$ and use the similar decomposition of $ u_{\lambda_2}^2 $ relative to the space $ X^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'} $. In the low-high case $ \lambda_1 \ll \lambda_2 $ it suffices to prove $$ \vn{ u^1_{\lambda_1} \cdot u^2_{\lambda_2}}_{ X^{0,\frac{1}{2}-\varepsilon'}_{\lambda_2}} \lesssim \frac{1}{\lambda_1^{\varepsilon}} \vn{ u^1_{\lambda_1} }_{ X^{1+\varepsilon'+\varepsilon, \frac{1}{2}+ \varepsilon'}_{\lambda_1} } \vn{ u^2_{\lambda_2}}_{ X^{0,\frac{1}{2}-\varepsilon'}_{\lambda_2}}. $$ We estimate $ u_{\lambda_1,d_1}^1 u_{\lambda_2,d_2}^2 $ in $ X^{0,\frac{1}{2}-\varepsilon'}_{\lambda_2, [\max(d_1,d_2),\lambda_1] } $ for $ d_1, d_2 \leq \lambda_1 $ using \eqref{bil:inter:lowmod:LH}; computing the weights, we note that there is enough room to sum over the modulations $ \leq \lambda_1 $. For $ \lambda_1 \leq d_2 \leq \lambda_2 $ we use \eqref{bil:inter:highmod:LH} instead and we obtain square-summability in $ d_2 $.
In the high-low case $ \lambda_2 \ll \lambda_1 $ we follow the same argument; here we have a better factor, including a power of $ \lambda_2/\lambda_1 $. In the high-high to low case $ \lambda_1 \simeq \lambda_2 $ we have $$ \vn{P_{\lambda_3} (u^1_{\lambda_1} \cdot u^2_{\lambda_2} )}_{X_{\lambda_3}^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'}+L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon}} \lesssim \frac{1}{\lambda_3^{\varepsilon}} \vn{ u^1_{\lambda_1} }_{ X^{1, \frac{1}{2}+ \varepsilon'}_{\lambda_1} } \vn{ u^2_{\lambda_2}}_{ X^{0,\frac{1}{2}-\varepsilon'}_{\lambda_2}}. $$ For $ d_1, d_2 \leq \lambda_3 $ we estimate $ P_{\lambda_3} ( u_{\lambda_1,d_1}^1 u_{\lambda_2,d_2}^2 ) $ using \eqref{bil:inter:lowmod:HH}: with appropriate weights, the $ X^{1,\frac{1}{2}}_{\mu,[d_{\max},\mu]} $ bound transfers to $ X_{\lambda_3}^{-\varepsilon'-\varepsilon,\frac{1}{2}-\varepsilon'} $, while the $ \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}} $ one transfers to $ L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon} $. For $ \max(d_1,d_2) \geq \lambda_3 $ we use Bernstein and \eqref{bil:easy1} to place the output into $ L^2 H^{\frac{1}{2}-2 \varepsilon'-\varepsilon} $. \ \begin{remark}[Higher regularity] Let $ \sigma >s $.
By applying the product estimate with a slightly lower $ s'<s $ (say $ s'=1+\varepsilon'+\varepsilon/2 $) and considering the Littlewood-Paley trichotomy in terms of $ u_{\mu} F_{\lambda} $, $ u_{\lambda} F_{\mu} $ ($ \mu \ll \lambda $) and $ P_{\mu} (u_{\lambda} F_{\lambda'}) $ ($ \mu \lesssim \lambda \simeq \lambda' $) we obtain inequalities of the type \begin{align*} & \vn{u_{\mu} F_{\lambda}}_{X^{\sigma-1,\theta-1}} \simeq \lambda^{\sigma-s'} \vn{u_{\mu} F_{\lambda}}_{X^{s'-1,\theta-1}} \lesssim \mu^{-\varepsilon/2} \vn{u_{\mu}}_{X^{s,\theta}} \vn{F_{\lambda}}_{X^{\sigma-1,\theta-1}}, \\ & \vn{P_{\mu} (u_{\lambda} F_{\lambda'})}_{X^{\sigma-1,\theta-1}} \lesssim (\mu/\lambda)^{\sigma-s} \vn{u_{\lambda}}_{X^{s,\theta}} \vn{F_{\lambda'}}_{X^{\sigma-1,\theta-1}}. \end{align*} Putting all these estimates together with the similar one for $ u_{\lambda} F_{\mu} $ we obtain \begin{equation} \label{prod:est:higher} \vn{u \cdot F}_{X^{\sigma-1,\theta-1}} \lesssim \vn{u}_{X^{s,\theta}} \vn{F}_{X^{\sigma-1,\theta-1}} + \vn{u}_{X^{\sigma,\theta}} \vn{F}_{X^{s-1,\theta-1}}. \end{equation} \end{remark} \begin{remark}[Higher regularity, corollaries of \eqref{prod:est:higher}] \label{rk:cor:prodest:higher} Assume higher regularity of the metric $ g $. By Remark \ref{X:high:reg}, which gives $ \vn{\Box_g u}_{X^{\sigma-1,\theta-1}} \lesssim \vn{u}_{X^{\sigma,\theta}} $ for $ \sigma \leq k+1 $, and the identity $$ 2 Q_g(u,v)= 2 g^{\alpha \beta} \partial_{\alpha} u \, \partial_{\beta} v = \Box_g(uv) - u \Box_g v-v \Box_g u, $$ from \eqref{prod:est:higher} and \eqref{alg:sgm} we obtain the null form bound \begin{equation} \label{nf:sgm} \vn{Q_g(u,v)}_{X^{\sigma-1,\theta-1}} \lesssim \vn{u}_{X^{s,\theta}} \vn{v}_{X^{\sigma,\theta}} + \vn{u}_{X^{\sigma,\theta}} \vn{v}_{X^{s,\theta}}.
\end{equation} Furthermore, when $ u,v$ are bounded in $ X^{s,\theta} $ we have the Moser estimates in the form $ \vn{\Gamma(u)}_{X^{\sigma,\theta}} \lesssim \vn{u}_{X^{\sigma,\theta}}$ and \begin{equation} \label{Mos:higher} \vn{\Gamma(u)-\Gamma(v) }_{X^{\sigma,\theta}} \lesssim \vn{u-v}_{X^{\sigma,\theta}} + \vn{u-v}_{X^{s,\theta}} (\vn{u}_{X^{\sigma,\theta}}+ \vn{v}_{X^{\sigma,\theta}} ). \end{equation} Using this together with \eqref{prod:est:higher}, \eqref{nf:sgm}, \eqref{nf:est}, \eqref{moser:est} we obtain \begin{equation} \label{nonlin:sgm} \vn{ \Gamma(u) Q_g(u,u) }_{X^{\sigma-1,\theta-1}} \lesssim \vn{u}_{X^{s,\theta}}^2 \vn{u}_{X^{\sigma,\theta}}. \end{equation} Similarly one obtains a bound for $ \Gamma(u) Q_g(u,u)- \Gamma(v) Q_g(v,v)$ in $X^{\sigma-1,\theta-1} $. \end{remark} \ \section{The Moser estimate \texorpdfstring{\eqref{moser:est}}{}} \label{Sec:Moser} \ In this section we prove the nonlinear estimate \eqref{moser:est}, which resembles the Moser-type estimates (sometimes referred to as Schauder estimates) in the context of Sobolev spaces, which can be proved using paradifferential calculus and the chain rule (see \cite[Lemma A.9]{tao2006nonlinear}). Here we will use the iterated paradifferential expansion strategy from \cite{TataruWM} to leverage the product estimates from Section \ref{Sec:alg}. \begin{proposition} \label{Prop:Moser} Let $ F $ be a smooth bounded function with uniformly bounded derivatives, satisfying $ \partial^{(j)} F(0)=0 $ for $ \vm{j} \leq C $.
If $ u \in X^{s,\theta} $ then $ F(u) \in \tilde{X}^{s,\theta} $ and \begin{equation} \label{Moser:weak} \vn{F(u)}_{ \tilde{X}^{s,\theta}} \lesssim \vn{u}_{ X^{s,\theta}} \big( 1 + \vn{u}_{ X^{s,\theta}}^{15} \big). \end{equation} Moreover, $ F(u) \in X^{s,\theta} $ and \begin{equation} \label{Moser:strong} \vn{F(u)}_{ X^{s,\theta}} \lesssim \vn{u}_{ X^{s,\theta}} \big( 1 + \vn{u}_{ X^{s,\theta}}^{15} \big). \end{equation} \end{proposition} \ \begin{remark} \label{Rk:tildeX:Moser} The bound \eqref{Moser:weak} should be seen as a key intermediate step in the proof of \eqref{Moser:strong}. The space $ \tilde{X}^{s,\theta} $ is defined in Definition \ref{tld:X} and $ X^{s,\theta} \subset \tilde{X}^{s,\theta} $. The only difference between the two spaces occurs at high modulations, where we discarded the terms $ \lambda^{s+\theta-2} \vn{\Box_{g_{<\sqrt{\lambda}} } u_{\lambda,\lambda}}_{L^2} $, making the $ \tilde{X}^{s,\theta} $ norm smaller. This allows us to have the factor $ (\lambda / \lambda_0)^s $ in Lemma \ref{multilinear}, based on the better factors $ \mu / \lambda $ for $ \tilde{X}_{\mu,\mu}^{1,\frac{1}{2}} $ compared to $ X_{\mu,\mu}^{1,\frac{1}{2}} $ in the estimates \eqref{bil:inter:lowmod:HH}, \eqref{bil:inter:highmod:HH}, thus making \eqref{Moser:weak} easier to prove. \end{remark} \ Notation-wise, we focus the proof on the case of functions $ F $ of a scalar argument, and note that the same argument applies to the case of multivariate arguments $ F(u_1, \dots, u_d) $.
\subsection{Reduction of \eqref{Moser:strong} to \eqref{Moser:weak} } \ In light of Lemma \ref{lemma:XXtildeBox} it suffices to show that $ \Box_g F(u) \in L^2 H^{s+\theta-2} $: \begin{lemma} If $ u \in X^{s,\theta} $ then $$ \vn{ \Box_g F(u)}_{ L^2 H^{s+\theta-2}} \lesssim \vn{u}_{X^{s,\theta}} \big(1+ \vn{u}_{X^{s,\theta}}^3 \big). $$ \end{lemma} \begin{proof} We split $ \Box_g F(u) $ into $ F'(u) \Box_g u $ and $ F''(u) Q_g(u,u) $. Let $ G $ be either one of the terms $ F'(u), F''(u) $ and let $ f $ be $ \Box_g u $ or $ Q_g(u,u) $. By Lemma \ref{lin:map:Box}, \eqref{nf:est} and \eqref{tld:X} we have \begin{equation} \label{f:in:H} \vn{f}_{L^2 H^{s+\theta-2}} \lesssim \vn{u}_{X^{s,\theta}} \big(1+ \vn{u}_{X^{s,\theta}} \big). \end{equation} To place $ G f $ in $ L^2 H^{s+\theta-2} $ we use a Littlewood-Paley decomposition and bound $ P_{\lambda} ( G f_{\nu} ) $ using \eqref{f:in:H}. When $ \nu \lesssim \lambda $ we place $ G \in L^{\infty} $ and get the factor $ (\nu / \lambda)^{2-s-\theta} $. When $ \nu \gg \lambda $, for the term $ P_{\lambda} ( G_{\nu} f_{\nu} ) $ we first use Bernstein $ P_{\lambda} : L^2 L^1 \to \lambda L^2 L^2 $, then we use the fact that $ G \in L^{\infty} H^{s} $ by classical Moser/Schauder estimates (see \cite[Lemma A.9]{tao2006nonlinear}), obtaining the factor $ (\lambda / \nu )^{s+\theta-1} $. \end{proof} \ We continue towards the proof of \eqref{Moser:weak} by setting up some preliminaries. \subsection{Iterated paradifferential expansions} \label{sec:para:exp} We write $$ F(u)-F(v) = (u-v) h(v,u) $$ and $$ h(v,u)-h(x,y)=(v-x) h_1(x,y,v,u)+ (u-y) h_2 (x,y,v,u), $$ and so on, where the $ h $'s are generic smooth functions with uniformly bounded derivatives.
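For instance, by the fundamental theorem of calculus one may take
$$ h(v,u) = \int_0^1 F'\big( v + t(u-v) \big)\, dt, $$
which inherits smoothness and uniformly bounded derivatives from $ F $; the functions $ h_1, h_2 $ are obtained by applying the same identity in each argument of $ h $.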
For $ \nu < \mu \leq \infty $ we may decompose \begin{align*} F(u_{<\mu}) & =F(u_{<\nu})+ \sum_{\lambda_0 = \nu}^{\mu/2} F(u_{<2 \lambda_0})-F(u_{< \lambda_0}) \\ & = F(u_{<\nu})+ \sum_{\lambda_0 = \nu}^{\mu/2} u_{\lambda_0} h(u_{< \lambda_0}, u_{< 2 \lambda_0} ). \end{align*} Repeating the same argument for $ h(u_{< \lambda_0}, u_{< 2 \lambda_0} ) $ and denoting $$ h(\dots,u_{< \lambda_1},\dots)= h_1(u_{< \lambda_1}, u_{< 2 \lambda_1}, u_{< 2 \lambda_1}, u_{< 4 \lambda_1} ) + h_2 (u_{< \lambda_1/2}, u_{< \lambda_1}, u_{< \lambda_1}, u_{< 2 \lambda_1} ), $$ we further decompose \begin{align*} F(u_{<\mu}) = F(u_{<\nu}) + \sum_{\lambda_0 = \nu}^{\mu/2} u_{\lambda_0} h(u_{1}, u_{\leq 2} )+ \sum_{\lambda_0 = \nu}^{\mu/2} \sum_{\lambda_1=2}^{\lambda_0/2} u_{\lambda_0} u_{\lambda_1} h(\dots,u_{< \lambda_1},\dots). \end{align*} Iterating this argument, we can write $ F(u_{<\mu}) $ as a sum of three types of terms: \begin{enumerate} \item $ F(u_{<\nu}) $ \item $u_{\lambda_0} h(u_{1}, u_{\leq 2} ), \ u_{\lambda_0} u_{\lambda_1} h(u_{1}, u_{\leq 2}, \dots), \dots, u_{\lambda_0} u_{\lambda_1} \cdots u_{\lambda_{N-1}} h(u_1, \dots, u_{\leq C}) $ \item $u_{\lambda_0} u_{\lambda_1} \cdots u_{\lambda_{N}} h(u_{< \lambda_N/c}, \dots, u_{< \lambda_N}, \dots, u_{< c \lambda_N} ) $ \end{enumerate} for $ \nu \leq \lambda_0 < \mu $ and $ \lambda_0 \geq \lambda_1 \geq \cdots \geq \lambda_N \geq 1 $. \subsection{Bilinear estimates involving $ \tilde{X} $ spaces} Here we supplement the estimates from Propositions \ref{PropX} and \ref{Lemma:bil:inter} with some bounds in terms of the $ \tilde{X}_{\mu,\mu}^{s,\theta} $ norms. \begin{lemma} \label{lemma:bil:suppl} Let $ \mu \leq d < \lambda $ and $ u_{\mu,\mu} \in \tilde{X}_{\mu,\mu}^{s,\theta} $.
Then there exists a decomposition \[ u_{\mu,\mu}= u_{\mu,\mu}^{<\lambda} + u_{\mu,\mu}^{> \lambda} \] such that \begin{align} \label{dec:lmd:d:mu1} \vn{u_{\lambda,d} \cdot u_{\mu,\mu}^{<\lambda}}_{X_{\lambda,d}^{s,\theta}} & \lesssim \vn{u_{\lambda,d}}_{X_{\lambda,d}^{s,\theta}} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} \\ \label{dec:lmd:d:mu2} \vn{ u_{\lambda,d} \cdot u_{\mu,\mu}^{>\lambda}}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \Big( \frac{\mu}{d} \Big)^{\theta} \vn{u_{\lambda,d}}_{X_{\lambda,d}^{s,\theta}} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} \end{align} Moreover, one has \begin{align} \label{dec:lmd:mu:mu1} \vn{u_{\lambda,\leq \mu} \cdot u_{\mu,\mu}^{<\lambda}}_{X_{\lambda,\mu}^{s,\theta}} & \lesssim \vn{u_{\lambda,\leq \mu}}_{X_{\lambda,\leq \mu}^{s,\theta}} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} \\ \label{dec:lmd:lmd:mu2} \vn{ u_{\lambda,\leq \mu} \cdot u_{\mu,\mu}^{>\lambda}}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \vn{u_{\lambda,\leq \mu}}_{X_{\lambda,\leq \mu}^{s,\theta}} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} \end{align} and \begin{equation} \label{dec:lmd:lmd:mu} \vn{u_{\lambda,\lambda} \cdot u_{\mu}}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \vn{u_{\lambda,\lambda} }_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu}}_{\tilde{X}_{\mu}^{s,\theta}} \end{equation} \end{lemma} \begin{proof} We define $ u_{\mu,\mu}^{<\lambda} $ by averaging $ u_{\mu,\mu} $ in time on the $ \lambda^{-1} $ scale. {\bf(1)} We begin with \eqref{dec:lmd:d:mu1} and \eqref{dec:lmd:d:mu2}. Similarly to the proof of \eqref{bil:inter:highmod:LH}, for all terms occurring in each of the products we place the higher frequency term in $ L^2 $ and the lower frequency term in $ L^{\infty} $.
For \eqref{dec:lmd:d:mu1} this works because $$ \vn{\Box_{g_{<\sqrt{\lambda}}} u_{\mu,\mu}^{<\lambda}}_{L^{\infty}} \lesssim \lambda \mu \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}}, $$ while for \eqref{dec:lmd:d:mu2} we use \begin{equation} \label{mu:mu:highmod} \vn{ u_{\mu,\mu}^{>\lambda}}_{L^{\infty}} \lesssim \lambda^{-1} \vn{ \partial_t u_{\mu,\mu}}_{L^{\infty}} \lesssim \frac{\mu}{\lambda} \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}}, \end{equation} and we recall that the $ \tilde{X}_{\lambda,\lambda}^{s,\theta} $ norm does not contain $ \Box_{g_{<\sqrt{\lambda}}} $ terms. {\bf(2)} In the case of \eqref{dec:lmd:mu:mu1} we use $ L^{\infty} L^2 \times L^2 L^{\infty} \to L^2 L^2 $ and Bernstein for the terms $ u_{\lambda,\leq \mu} \cdot u_{\mu,\mu}^{<\lambda} $, $ \nabla_{t,x} u_{\lambda,\leq \mu} \cdot \nabla_{t,x} u_{\mu,\mu}^{<\lambda} $ and $ u_{\lambda,\leq \mu} \cdot \Box_{g_{<\sqrt{\lambda}}} u_{\mu,\mu}^{<\lambda} $. We use $ L^2 L^2 \times L^{\infty} L^{\infty} \to L^2 L^2 $ for $ \Box_{g_{<\sqrt{\lambda}}} u_{\lambda,\leq \mu} \cdot u_{\mu,\mu}^{<\lambda} $. The place where we use the $ ^{<\lambda} $ smoothness is $$ \lambda^{-1} \mu^{\theta} \vn{\Box_{g_{<\sqrt{\lambda}}} u_{\mu,\mu}^{<\lambda}}_{L^2} \lesssim \mu^{\theta} \vn{ \nabla_{t,x} u_{\mu,\mu}}_{L^2} \lesssim \frac{1}{\mu^{\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}}. $$ Finally, in the case of \eqref{dec:lmd:lmd:mu2} we always place the $ \lambda $-frequency terms in $ L^{\infty} L^2 $, using either $ L^{\infty} L^2 \times L^2 L^{\infty} \to L^2 L^2 $ or $ L^{\infty} L^2 \times L^{\infty} L^{\infty} \to L^{\infty} L^2 $ and Bernstein for the $ \mu $-frequency terms.
This works because of \eqref{mu:mu:highmod} and $$ \vn{ u_{\mu,\mu}^{>\lambda}}_{L^2 L^{\infty}} \lesssim \frac{\mu}{\lambda} \vn{\partial_t u_{\mu,\mu}^{>\lambda}}_{L^2} \lesssim \frac{\mu}{\lambda} \frac{1}{\mu^{\theta+\varepsilon}} \vn{u_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}}. $$ {\bf(3)} The proof of \eqref{dec:lmd:lmd:mu} is straightforward by bounding the $ \mu $-frequency terms in $ L^{\infty} $. \end{proof} \subsection{Multilinear estimates} The next order of business is to obtain effective bounds for the products $u_{\lambda_0} u_{\lambda_1} \cdots u_{\lambda_{N}} $ in the expansion from Section \ref{sec:para:exp}, while the effect of multiplying by $ h(.. u_{< \lambda_N}..) $ is studied in the next subsection. \begin{lemma} \label{multilinear} Let $ \lambda_0 \geq \lambda_1\geq \dots \geq \lambda_N $ for $ N \geq 1 $ and let $ u \in \tilde{X}^{s,\theta} $. For any $ \lambda \lesssim \lambda_0 $ \begin{equation} \label{multilinear:est:sumd} \vn{P_{\lambda} \big( u_{\lambda_0} \dots u_{\lambda_N} \big) }_{\tilde{X}^{s,\theta}_{\lambda}} \lesssim \frac{\lambda^s}{\lambda_0^s} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{\tilde{X}_{\lambda_i}^{s,\theta}} \end{equation} More precisely, there exists a decomposition $$ P_{\lambda} \big( u_{\lambda_0} \dots u_{\lambda_N} \big) = \sum_{d=1}^{\lambda} v_{\lambda,d} $$ such that \begin{equation} \label{multilinear:est} \vn{v_{\lambda,d}}_{\tilde{X}_{\lambda,d}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \prod_{i=1}^{\min(3,N)} \min \Big( 1,\frac{d}{ \min(\lambda, \lambda_i)} \Big) ^{\frac{1}{4}} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{\tilde{X}_{\lambda_i}^{s,\theta}} \end{equation} Moreover, if $ \lambda > d > \lambda_1 $ then we can replace $ \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} $ by $
\vn{u_{\lambda_0,d}}_{X_{\lambda_0, d}^{s,\theta}} $. \end{lemma} \begin{remark} We recall that for $ 1 \leq d < \lambda $ the norms of $ \tilde{X}_{\lambda,d}^{s,\theta} $ and $ X_{\lambda,d}^{s,\theta} $ coincide. \end{remark} \ \begin{corollary} \label{multilinear:cor} Under the assumptions of Lemma \ref{multilinear}, for $ N \geq 3$ and any $ \gamma \leq \min(\lambda,\lambda_3) $, one has \begin{equation} \label{multilinear:est:cor} \vn{v_{\lambda,\leq \gamma }}_{\tilde{X}_{\lambda,\gamma}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{\tilde{X}_{\lambda_i}^{s,\theta}} \end{equation} \end{corollary} \ \ \noindent \emph{Proof of Corollary \ref{multilinear:cor}.} Using \eqref{multilinear:est} with $ N \geq 3$ to sum the norms of $ \vn{v_{\lambda,d}}_{L^2}$ and $ \vn{\Box_{g_{<\sqrt{\lambda}}} v_{\lambda,d}}_{L^2}$ over $ d \in \overline{1,\gamma} $ we obtain a favorable factor, since $\big(\frac{d}{\lambda_{\min}} \big) ^{\frac{3}{4}} $ can be used to always have a positive power of $ d $ on the RHS. When $ \gamma=\lambda $ we use energy estimates. \(\Box\) \ \noindent \emph{Proof of Lemma \ref{multilinear}}. We prove the statement by induction with respect to $ N $. {\bf(1)} For $ N=1 $ we use Remark \ref{rk:prod:bfMos}, Prop. \ref{Lemma:bil:inter} (2) and Lemma \ref{lemma:bil:suppl} to obtain the decomposition.
We split $$ P_{\lambda} (u_{\lambda_0} u_{\lambda_1}) = \sum_{1 \leq d_i \leq \lambda_i,i=0,1} P_{\lambda} (u_{\lambda_0,d_0} u_{\lambda_1,d_1} ) $$ where for both $ \lambda_i, \ i=0,1 $ we write, by Definition \ref{tld:X}, $$ u_{\lambda_i}= \sum_{d_i=1}^{\lambda_i/2} u_{\lambda_i,d_i}+u_{\lambda_i,\lambda_i} \qquad \quad \vn{u_{\lambda_i}}_{X^{s,\theta}_{\lambda_i}}^2 \simeq \sum_{d_i=1}^{\lambda_i/2} \vn{u_{\lambda_i,d_i}}_{X^{s,\theta}_{\lambda_i,d_i}}^2+ \vn{u_{\lambda_i,\lambda_i}}_{\tilde{X}^{s,\theta}_{\lambda_i,\lambda_i}}^2. $$ We obtain the desired estimate by a summation argument in $ d_0,d_1 $ as in the proof of Prop. \ref{prop:alg:prop}, as follows: \begin{enumerate} \item In the high-low case $ \lambda \simeq \lambda_0 \gg \lambda_1 $: for $ d_0,d_1 \leq d < \lambda_1 $ we use \eqref{opt:bil:inter:lowmod:HL} to obtain \begin{equation} \label{mod:high:low} \vn{v_{\lambda,d}}_{X_{\lambda,d}^{s,\theta}} \lesssim \Big( \frac{d}{\lambda_1} \Big) ^{\frac{1}{4}} \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0,\leq d}}_{X_{\lambda_0,\leq d}^{s,\theta}} \vn{u_{\lambda_1,\leq d} }_{X_{\lambda_1,\leq d}^{s,\theta}} \end{equation} while when $ \lambda_1 < d=d_0 < \lambda, \lambda_0 $ we have \begin{equation} \label{mod:high:low2} \vn{v_{\lambda,d}}_{X_{\lambda,d}^{s,\theta}} \lesssim \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0, d}}_{X_{\lambda_0, d}^{s,\theta}} \vn{u_{\lambda_1} }_{\tilde{X}_{\lambda_1}^{s,\theta}} \end{equation} based on \eqref{bil:inter:highmod:LH}, \eqref{dec:lmd:d:mu1} and defining $ v_{\lambda,d}=P_{\lambda} ( u_{\lambda_0,d} u_{\lambda_1,<\lambda_1} )+P_{\lambda} ( u_{\lambda_0,d} u_{\lambda_1,\lambda_1}^{<\lambda_0} ) $, where $ u_{\lambda_1,\lambda_1}^{<\lambda_0} $ is defined in Lemma \ref{lemma:bil:suppl}.
When $ d=\lambda_1 $ we define $ v_{\lambda,\lambda_1}=P_{\lambda} ( u_{\lambda_0,\lambda_1} u_{\lambda_1,<\lambda_1} )+P_{\lambda} ( u_{\lambda_0,\leq \lambda_1} u_{\lambda_1,\lambda_1}^{<\lambda_0} ) $ and using \eqref{bil:inter:highmod:LH}, \eqref{dec:lmd:mu:mu1} we have \begin{equation} \label{mod:high:low3} \vn{ v_{\lambda,\lambda_1}}_{X_{\lambda,\lambda_1}^{s,\theta}} \lesssim \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0, \leq \lambda_1}}_{X_{\lambda_0, \leq \lambda_1}^{s,\theta}} \vn{u_{\lambda_1} }_{\tilde{X}_{\lambda_1}^{s,\theta}} \end{equation} Finally, when $ d=\lambda $ we define $ v_{\lambda,\lambda}= P_{\lambda} ( u_{\lambda_0, < \lambda_0} u_{\lambda_1,\lambda_1}^{>\lambda_0} ) + \sum_{d \simeq \lambda_0} P_{\lambda_0} ( u_{\lambda_0,d} u_{\lambda_1} ) $ and using \eqref{dec:lmd:d:mu2}, \eqref{dec:lmd:lmd:mu2}, \eqref{dec:lmd:lmd:mu} we have \begin{equation} \label{mod:high:low4} \vn{v_{\lambda,\lambda} }_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \vn{u_{\lambda_1}}_{\tilde{X}_{\lambda_1}^{s,\theta}} \end{equation} \item In the high-high to low case $ \lambda \lesssim \lambda_0 \simeq \lambda_1 $: for $ d \in [1,\lambda) $, by \eqref{opt:bil:inter:lowmod:HH} we have \begin{equation} \label{mod:high:high1} \vn{v_{\lambda,d}}_{X_{\lambda,d}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \Big(\frac{d}{\lambda} \Big)^{\frac{1}{4}} \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0,\leq d}}_{X_{\lambda_0,\leq d}^{s,\theta}} \vn{u_{\lambda_1,\leq d} }_{X_{\lambda_1,\leq d}^{s,\theta}} \end{equation} while for $ d=\lambda $, $ v_{\lambda,\lambda} $ contains the contributions of $ u_{\lambda_0,d_0} u_{\lambda_1,d_1} $ when $ \max(d_0,d_1) \geq \lambda $ (use \eqref{bil:inter:highmod:HH}) and the contributions when $ d_0,d_1 \leq \lambda $ given by \eqref{opt:bil:inter:lowmod:HH2} obtaining \begin{equation} \label{mod:high:high2}
\vn{v_{\lambda,\lambda}}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \frac{1}{\lambda_1^{\varepsilon}} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \vn{u_{\lambda_1} }_{\tilde{X}_{\lambda_1}^{s,\theta}} \end{equation} \end{enumerate} {\bf(2)} Now we assume the statement holds for $ N-1 $ and prove it for $ N $. Letting $ v^{(N-1)}=u_{\lambda_0} \dots u_{\lambda_{N-1}} $, summing the induction hypothesis for $ v^{(N-1)}_{\nu,d'} $ over $ d' \leq d $ for $ d \leq \min(\nu,\lambda_1) $ we obtain \begin{equation} \label{multilinear:est:leqmod} \vn{ v^{(N-1)}_{\nu,\leq d} }_{\tilde{X}_{\nu,\leq d}^{s,\theta}} \lesssim \frac{\nu^s}{\lambda_0^s} \prod_{i=1}^{\min(3,N-1)} \min \Big( 1,\frac{d}{ \min(\nu, \lambda_i)} \Big) ^{\frac{1}{4}} \vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N-1} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{\tilde{X}_{\lambda_i}^{s,\theta}} \end{equation} We consider three cases: \begin{enumerate} \item $ \lambda \ll \lambda_N. $ We have $$ P_{\lambda} \big( u_{\lambda_0} \dots u_{\lambda_N} \big) = P_{\lambda} \big( \tilde{P}_{\lambda_N} v^{(N-1)} u_{\lambda_N} \big). $$ For $ d<\lambda $ we apply \eqref{mod:high:high1} to $ \tilde{P}_{\lambda_N} v^{(N-1)} $ and $ u_{\lambda_N} $ and then use \eqref{multilinear:est:leqmod} for $ \nu \simeq \lambda_N $. When $ d=\lambda $ we use \eqref{mod:high:high2} instead of \eqref{mod:high:high1}, and then \eqref{multilinear:est:sumd} for $ N-1 $ and $ P_{\nu} $ for $ \nu \simeq \lambda_N $, as no powers of $ d/\lambda $ are necessary. \item $ \lambda \simeq \lambda_N.
$ We decompose $$ P_{\lambda} \big( u_{\lambda_0} \dots u_{\lambda_N} \big) = \sum_{\nu \lesssim \lambda } P_{\lambda} \big( P_{\nu} v^{(N-1)} u_{\lambda_N} \big) $$ For $ d \leq \nu $ we apply \eqref{mod:high:low} and \eqref{mod:high:low3} for $ u_{\lambda_N} $ and $ P_{\nu} v^{(N-1)} $ together with \eqref{multilinear:est:leqmod}, while for $ d > \nu $ we use \eqref{mod:high:low2}, \eqref{mod:high:low4} together with \eqref{multilinear:est:sumd} for $ P_{\nu} v^{(N-1)} $. Summing the factors that contain $ \nu $ we obtain $$ \sum_{\nu \lesssim \lambda } \frac{\nu^s}{\lambda_0^s} \frac{1}{\nu^{\varepsilon}} \min \Big( 1,\frac{d}{ \nu} \Big) ^{\frac{1}{4} \min(3,N) } \simeq \frac{\lambda^s}{\lambda_0^s} \frac{1}{\lambda^{\varepsilon}} \Big( \frac{d}{ \lambda} \Big)^{\frac{1}{4} \min(3,N) } $$ which is favorable. \item $ \lambda \gg \lambda_N. $ We write $$ P_{\lambda} \big( u_{\lambda_0} \dots u_{\lambda_N} \big) = P_{\lambda} \big( \tilde{P}_{\lambda} v^{(N-1)} u_{\lambda_N} \big). $$ For $ d \leq \lambda_N $ we apply \eqref{mod:high:low} and \eqref{mod:high:low3} to $ \tilde{P}_{\lambda} v^{(N-1)} $ and $ u_{\lambda_N} $ and then use \eqref{multilinear:est:leqmod} for $ \nu \simeq \lambda $. In the case $ \lambda_N < d < \lambda $ \eqref{mod:high:low2} applies with $ \vn{v^{(N-1)}_{\nu,d}}_{X_{\nu,d}^{s,\theta}} $ on the RHS, $ \nu \simeq \lambda $, and this norm is estimated by the induction hypothesis. Finally, when $ d=\lambda $ we use \eqref{mod:high:low4} and then \eqref{multilinear:est:sumd} for $ N-1 $. \(\Box\) \end{enumerate} For the sum of the output frequencies and modulations of the products we have: \begin{corollary} \label{multilinear:cor2} For $ N \geq 4 $, let $ \lambda_0 \geq \lambda_1\geq \dots \geq \lambda_N $ and let $ u \in X^{s,\theta} $.
Denoting $ w=u_{\lambda_0} \dots u_{\lambda_N} $ and $ M=\vn{u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{\tilde{X}_{\lambda_i}^{s,\theta}} $, one has $$ \vn{w}_{L^2}+ \lambda_N^{-\frac{1}{2}} \vn{w}_{L^{\infty} L^2} + \lambda_N^{-1} \vn{\nabla_{t,x} w}_{L^2} + \lambda_N^{-\frac{3}{2}} \vn{\nabla_{t,x} w}_{L^{\infty} L^2} \lesssim \lambda_N^{-s-\theta} M $$ \end{corollary} \begin{proof} First consider $ \lambda_N \lesssim \lambda $. By summing \eqref{multilinear:est} and using also the argument of Cor. \ref{multilinear:cor}, together with \eqref{tildeX:loc}, \eqref{energy:X} we obtain \begin{equation} \label{Y:fixed:freq} \vn{w_{\lambda}}_{L^2}+ \lambda_N^{-\frac{1}{2}} \vn{w_{\lambda}}_{L^{\infty} L^2} + \lambda^{-1} \vn{\nabla_{t,x} w_{\lambda}}_{L^2} + \lambda_N^{-\frac{1}{2}} \lambda^{-1} \vn{\nabla_{t,x} w_{\lambda}}_{L^{\infty} L^2} \lesssim \lambda^{-s} \lambda_N^{-\theta} M \end{equation} Summing in $ \lambda \geq \lambda_N/C$ one obtains $$ \vn{w_{\geq \lambda_N}}_{L^2}+ \lambda_N^{-\frac{1}{2}} \vn{w_{\geq \lambda_N}}_{L^{\infty} L^2} + \lambda_N^{-1} \vn{\nabla_{t,x} w_{\geq \lambda_N}}_{L^2} + \lambda_N^{-\frac{3}{2}} \vn{\nabla_{t,x} w_{\geq \lambda_N}}_{L^{\infty} L^2} \lesssim \lambda_N^{-s-\theta} M $$ Now we consider frequencies $ \ll \lambda_N $ and write $$ P_{\ll \lambda_N } w= \tilde{P}_{\lambda_N} \big( u_{\lambda_0} \dots u_{\lambda_{N-1}} \big) u_{\lambda_N}. $$ We place $ u_{\lambda_N}, \nabla_{t,x} u_{\lambda_N} $ in $ L^{\infty}$ while for $ \tilde{P}_{\lambda_N} \big( u_{\lambda_0} \dots u_{\lambda_{N-1}} \big)$ we use the argument of the previous step used to obtain \eqref{Y:fixed:freq}. \end{proof} \subsection{Multiplicative properties} Now that we have bounds for multilinear terms in $ \tilde{X}_{\lambda,d}^{s,\theta} $, in order to make use of the expansion in terms $u_{\lambda_0} u_{\lambda_1} \cdots u_{\lambda_{N}} h(.. u_{< \lambda_N}..
) $ we need to investigate which multiplications by $ h $ leave the $ \tilde{X}_{\lambda,d}^{s,\theta} $ spaces unchanged. \begin{definition} For $ d \leq \lambda $ the norm of $ M_{\lambda,d} $ is defined by $$ \vn{h}_{M_{\lambda,d}} = \vn{h}_{L^{\infty}}+\frac{1}{d} \vn{\nabla_{t,x} h}_{L^{\infty}}+ \frac{1}{d \lambda} \vn{\Box_{g_{< \sqrt{\lambda}}} h}_{ L^{\infty}+ d^{-\frac{1}{2}} L^2 L^{\infty} } $$ and for $ \mu \leq \lambda $ the norm of $ N_{\lambda,\mu} $ is defined by $$ \vn{h}_{N_{\lambda,\mu}} = \vn{h}_{L^{\infty}}+\frac{1}{\lambda} \vn{\nabla_{t,x} h}_{L^{\infty}}+ \frac{1}{\lambda} \vn{\Box_{g_{< \sqrt{\lambda}}} h}_{ L^{\infty} L^2+ \mu^{-\frac{1}{2}} L^2 }. $$ We also define the following version of the $ \tilde{X}_{\lambda,\lambda}^{s,\theta} $ norm adapted to functions not assumed to be frequency localized: $$ \vn{w}_{Y_{\lambda}^{s,\theta}} = \lambda^{s+\theta} \big( \vn{w}_{L^2}+ \lambda^{-\frac{1}{2}} \vn{w}_{L^{\infty} L^2} +\lambda^{-1} \vn{\nabla_{t,x} w}_{L^2} + \lambda^{-\frac{3}{2}} \vn{\nabla_{t,x} w}_{L^{\infty} L^2} \big). $$ \end{definition} \ Then we have the following multiplication properties. \begin{lemma} For any function $ u $ localized at frequency $ \simeq \lambda $ one has \begin{equation} \label{mult:prop:1} \vn{u \cdot h}_{X_{\lambda,d}^{s,\theta}} \lesssim \vn{u}_{X_{\lambda,d}^{s,\theta}} \vn{h}_{M_{\lambda,d}}, \end{equation} and similarly with $ X_{\lambda,d}^{s,\theta} $ replaced by $ \tilde{X}_{\lambda,d}^{s,\theta} $.
For any function $ v $ localized at frequency $ \simeq \mu \leq \lambda $ one has \begin{align} \label{mult:prop:2} \vn{v \cdot h}_{X_{\lambda,\mu}^{s,\theta}} & \lesssim \frac{\lambda^s}{\mu^s} \vn{v}_{X_{\mu,\mu}^{s,\theta}} \vn{h}_{N_{\lambda,\mu}} \\ \label{mult:prop:4} \vn{v \cdot h}_{X_{\lambda,\mu}^{s,\theta}} & \lesssim \frac{\lambda^s}{\mu^s} \big[ \vn{v}_{\tilde{X}_{\mu,\mu}^{s,\theta}} + \mu^{\theta} \lambda^{-1} \vn{\partial_t^2 v}_{L^2 H^{s-1}} \big] \vn{h}_{N_{\lambda,\mu}} \\ \label{mult:prop:5} \vn{v \cdot h}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \lambda^{s+\theta} \vn{ (v,\lambda^{-1} \nabla_{t,x}v )}_{L^2 \cap \mu^{\frac{1}{2}} L^{\infty} L^2} \vn{h}_{N_{\lambda,\mu}}. \end{align} For functions $ w $ not assumed to be frequency localized one has: \begin{equation} \label{mult:prop:3} \vn{w \cdot h}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \vn{w}_{Y_{\lambda}^{s,\theta}} \vn{h}_{M_{\lambda,\lambda}}. \end{equation} \end{lemma} \begin{proof} The proof is straightforward, based on Leibniz's rule, H\"older's inequality and the energy estimate \eqref{energy:X}. For \eqref{mult:prop:2} we also use Bernstein's inequality $ L^2_x \to \mu L^{\infty}_x $ for $ v $. \end{proof} This property is used in conjunction with the following lemma. \begin{lemma} Let $ \mu \leq d \leq \lambda $ and $ c \simeq 1 $. For any smooth, bounded function $ h $ with uniformly bounded derivatives one has: \begin{align} \label{M:h:est1} \vn{ P_{\lesssim \mu} h(u_{<\mu}) }_{M_{\lambda,d}} & \lesssim 1+ \vn{u}_{X^{s,\theta}}^2 \\ \label{N:h:est} \vn{ \tilde{P}_{\lambda} h(u_{<c\lambda}) }_{N_{\lambda,\mu}} & \lesssim 1+ \vn{u}_{X^{s,\theta}}^2 \\ \label{M:h:est2} \vn{ \tilde{P}_{\lambda} h(u_{<\mu}) }_{M_{\lambda,\lambda}} & \lesssim \big( 1+ \vn{u}_{X^{s,\theta}}^2 \big) \max_{0 \leq k \leq 2} \vn{ \tilde{P}_{\lambda} (\partial^k h) (u_{<\mu}) }_{L^{\infty}}, \ \mu \ll \lambda.
\end{align} The same statement holds for multivariate functions $ h(u_{<\mu/c}, \cdots, u_{<\mu}, \cdots, u_{<c\mu}) $. \end{lemma} \begin{proof} The same argument applies for the scalar or multivariate case; however, for brevity we will simply use the notation $ h(u_{<\mu}) $ for both cases. We begin with the $ \nabla_{t,x} h (\cdot) $ terms since the $ \vn{h}_{L^{\infty}} $ bound is clear. By the chain rule, for $ \nu \in \{ \mu, c \lambda \} $, we write schematically $$ \nabla_{t,x} h(u_{<\nu})= \sum \partial h \cdot \nabla_{t,x} u_{<c_i \nu}. $$ We have $ \vn{ \partial h}_{L^{\infty}} \lesssim 1$ and $ \vn{ \nabla_{t,x} u_{<c_i \nu}}_{L^{\infty}} \lesssim \nu \vn{u}_{X^{s,\theta}} $, which suffices for all three bounds. Next, we write $$ \Box_{g_{< \sqrt{\lambda}}} P h(u_{<\nu}) = [\Box_{g_{< \sqrt{\lambda}}}, P] h(u_{<\nu})+ P \sum g_{< \sqrt{\lambda}} \partial^2 h \nabla u_{<c_i \nu} \nabla u_{<c_j \nu} + P \sum \partial h \Box_{g_{< \sqrt{\lambda}}} u_{<c_i \nu} $$ In the case of \eqref{M:h:est2}, since $ \mu \ll \lambda$ for $ P=\tilde{P}_{\lambda} $, the only parts of $ \partial^2 h, \partial h $ that contribute to this equality are $ \tilde{P}_{\lambda} \partial^2 h, \tilde{P}_{\lambda} \partial h $ for a frequency $ \lambda $ multiplier $ \tilde{P}_{\lambda} $.
By Bernstein we have \begin{align*} & \vn{g \partial^2 h \nabla u_{<c_i \mu} \nabla u_{<c_j \mu}}_{L^{\infty}} \lesssim \mu^2 \vn{\partial^2 h}_{L^{\infty}} \vn{\nabla u_{\lesssim \mu}}_{L^{\infty} L^2}^2 \lesssim \mu^2 \vn{\partial^2 h}_{L^{\infty}} \vn{u}_{X^{s,\theta} }^2 \\ & \vn{g \partial^2 h \nabla u_{<c_i \lambda} \nabla u_{<c_j \lambda}}_{L^{\infty} L^2} \lesssim \lambda \vn{\nabla u_{\lesssim \lambda}}_{L^{\infty} L^2}^2 \lesssim \lambda \vn{u}_{X^{s,\theta} }^2 \end{align*} Similarly, for $ P \in \{ P_{\lesssim \mu}, \tilde{P}_{\lambda} \} $ by writing $ [\Box_{g_{< \sqrt{\lambda}}}, P]= [g_{< \sqrt{\lambda}},P] \partial_{t,x} \partial_x $ and using a standard commutator estimate we obtain \begin{align*} \vn{ [\Box_{g_{< \sqrt{\lambda}}}, P_{\lesssim \mu} ] h(u_{<\mu}) }_{L^{\infty}} & \lesssim \mu^2 \vn{\partial_{t,x} u_{\lesssim \mu}}_{L^{\infty} L^2}^2 + \mu \vn{\partial_{t,x} \partial_x u_{\lesssim \mu}}_{L^{\infty} L^2} \lesssim \mu^2 \big( 1+\vn{u}_{X^{s,\theta} }^2 \big) \\ \vn{ [\Box_{g_{< \sqrt{\lambda}}}, \tilde{P}_{\lambda}] h(u_{<\mu}) }_{L^{\infty}} & \lesssim \mu^2 \vn{\tilde{P}_{\lambda} \partial^2 h}_{L^{\infty}} \vn{\partial_{t,x} u_{\lesssim \mu}}_{L^{\infty} L^2}^2 + \mu \vn{\tilde{P}_{\lambda} \partial h}_{L^{\infty}} \vn{\partial_{t,x} \partial_x u_{\lesssim \mu}}_{L^{\infty} L^2} \\ & \lesssim \mu^2 \max \big( \vn{\tilde{P}_{\lambda} \partial h}_{L^{\infty}}, \vn{\tilde{P}_{\lambda} \partial^2 h}_{L^{\infty}} \big) \big( 1+\vn{u}_{X^{s,\theta} }^2 \big) \\ \vn{ [\Box_{g_{< \sqrt{\lambda}}}, \tilde{P}_{\lambda}] h(u_{<c \lambda}) }_{L^{\infty} L^2} & \lesssim \lambda \vn{\partial_{t,x} u_{\lesssim \lambda}}_{L^{\infty} L^2}^2 + \vn{\partial_{t,x} \partial_x u_{\lesssim \lambda}}_{L^{\infty} L^2} \lesssim \lambda \big( 1+\vn{u}_{X^{s,\theta} }^2 \big) \end{align*} Using Definitions \ref{def:X:1}, \ref{def:X:2} of the $ X_{\lambda,d}^{s,\theta}, X^{s,\theta} $ spaces, the hypothesis $ \partial^2 g \in L^2
L^{\infty} $, Bernstein and a summation argument in frequencies and modulations one obtains \begin{align*} \vn{ \Box_{g_{< \sqrt{\lambda}}} u_{<c \mu}}_{ L^2 L^{\infty}} \lesssim \mu^{\frac{3}{2}} \vn{u}_{X^{s,\theta} } \\ \vn{ \Box_{g_{< \sqrt{\lambda}}} u_{<c \lambda}}_{ L^2 } \ \ \lesssim \lambda^{\frac{1}{2}} \vn{u}_{X^{s,\theta} } \end{align*} Using these we get \begin{align*} \vn{ (\tilde{P}) \partial h \Box_{g_{< \sqrt{\lambda}}} u_{<c \mu} }_{L^2 L^{\infty}} & \lesssim \mu^{\frac{3}{2}} \vn{(\tilde{P}) \partial h}_{L^{\infty}} \vn{u}_{X^{s,\theta} } \\ \vn{ \partial h \Box_{g_{< \sqrt{\lambda}}} u_{<c \lambda} }_{L^2 } & \lesssim \lambda^{\frac{1}{2}} \vn{u}_{X^{s,\theta} }\end{align*} Putting all the above inequalities together completes the proof of \eqref{M:h:est1}-\eqref{M:h:est2}. \end{proof} The main application of the multiplication properties above is: \begin{lemma} \label{lemma:mainterm} Let $ \lambda \simeq \lambda_0 \geq \lambda_1\geq \dots \geq \lambda_N $ for $ N \geq 3 $ and $ v=u_{\lambda_0} \dots u_{\lambda_N} $.
For any smooth, bounded function $ h $ with uniformly bounded derivatives one has: \begin{equation} \vn{ P_{\lambda} \big( v \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N}) \big) }_{\tilde{X}_{\lambda}^{s,\theta}} \lesssim \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^{\varepsilon/2}} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}} \end{equation} \end{lemma} \begin{proof} We decompose $$ P_{\lambda} \big( v \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N}) \big) = \sum_{\lambda' \simeq \lambda} P_{\lambda} \big( v_{\lambda'} \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N}) \big)+ \sum_{\mu \ll \lambda} P_{\lambda} \big( v_{\mu} \cdot \tilde{P}_{\lambda} P_{\lesssim \lambda_N} h(u_{<\lambda_N}) \big) $$ where the second sum is non-zero only when $ \lambda_N \simeq \lambda $, in which case we may simply write $ \tilde{P}_{\lambda} $ in place of $ \tilde{P}_{\lambda} P_{\lesssim \lambda_N} $. We separately consider the terms with: \ {\bf(1)} $ \lambda' \simeq \lambda $. We use Lemma \ref{multilinear} and Corollary \ref{multilinear:cor} to decompose $ v_{\lambda'}$. Applying \eqref{mult:prop:1} together with \eqref{M:h:est1} (with $ \mu=\lambda_N, d=\lambda_N, $ resp. $ d>\lambda_N $) and \eqref{multilinear:est:cor}, resp.
\eqref{multilinear:est} we obtain \begin{align*} \vn{v_{\lambda',\leq \lambda_N} \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N})}_{X_{\lambda,\lambda_N}^{s,\theta}} \lesssim \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}} \\ \vn{v_{\lambda',d} \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N})}_{X_{\lambda,d}^{s,\theta}} \lesssim \Big( \frac{d}{\lambda_1} \Big)^{\frac{1}{4}} \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}} \end{align*} for $ d \in [\lambda_N,\lambda_1] $, while for $ d \in (\lambda_1, \lambda)$ we have $$ \vn{v_{\lambda',d} \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N})}_{X_{\lambda,d}^{s,\theta}} \lesssim \vn{u_{\lambda_0,d}}_{X_{\lambda_0,d}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}} $$ $$ \vn{v_{\lambda',\lambda'} \cdot P_{\lesssim \lambda_N} h(u_{<\lambda_N})}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}} $$ We square-sum these bounds to obtain the conclusion for the $ v_{\lambda'} $ terms. \ {\bf(2)} $ \mu \ll \lambda $, in the case $ \lambda_N \simeq \lambda_{N-1} \simeq \dots \simeq \lambda_0 \simeq \lambda $. By \eqref{multilinear:est:cor} from Corollary \ref{multilinear:cor} and \eqref{multilinear:est} we have $$ \vn{v_{\mu,<\mu}}_{X_{\mu,\mu}^{s,\theta}} + \vn{v_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} \lesssim \frac{\mu^s}{\lambda_0^s} \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}}.
$$ Using this together with \eqref{mult:prop:2} and \eqref{N:h:est} (with $ c \lambda=\lambda_N $) we obtain: $$ \vn{ v_{\mu,<\mu} \cdot \tilde{P}_{\lambda} h(u_{<\lambda_N}) }_{{X_{\lambda,\mu}^{s,\theta}}} \lesssim \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \big(1+\vn{u}_{X^{s,\theta}}^2 \big) \prod_{i=1}^{N} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}}. $$ Summing $ \mu \leq \lambda/C $ gives a factor of $ \log \lambda $ which is overpowered by $ \frac{1}{\lambda^{\varepsilon/2}} \simeq \frac{1}{\lambda_i^{\varepsilon/2}} $. We regularize $ v_{\mu,\mu} $ in time on the $ \lambda^{-1} $ scale to split $ v_{\mu,\mu}= v_{\mu,\mu}^{<\lambda}+ v_{\mu,\mu}^{>\lambda} $. We have $$ \vn{\partial_t^2 v_{\mu,\mu}^{<\lambda}}_{L^2 H^{s-1}} \lesssim \lambda \mu^{-\theta} \vn{v_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}}, $$ so we can use this together with \eqref{mult:prop:4}. Finally, we have $$ \vn{ (v^{>\lambda},\lambda^{-1} \nabla_{t,x}v^{>\lambda} )}_{L^2 \cap \mu^{\frac{1}{2}} L^{\infty} L^2} \lesssim \lambda^{-1} \vn{ \nabla_{t,x}v^{>\lambda} }_{L^2 \cap \mu^{\frac{1}{2}} L^{\infty} L^2} \lesssim \lambda^{-1} \mu^{1-s-\theta} \vn{v_{\mu,\mu}}_{\tilde{X}_{\mu,\mu}^{s,\theta}} $$ and we use this with \eqref{mult:prop:5} to conclude. \end{proof} \subsection{Nonlinear low frequency input $ \to $ high frequency output estimates} \ Low $ \to $ high frequency interactions do not occur in bilinear or multilinear expressions, thus one expects their effect to be under control for nonlinear interactions as well. We begin with $ L^{\infty} $ bounds. \begin{lemma} Let $ \lambda \geq \mu $ and let $ h $ be a smooth, bounded function with uniformly bounded derivatives.
Then, for any even $ N \geq 2 $ \begin{align} \label{lowinput:infty} \vn{P_{\lambda} h(u_{< \mu})}_{L^{\infty}} \lesssim \Big( \frac{\mu}{\lambda} \Big)^N \vn{u}_{X^{s,\theta}} \big( 1+ \vn{u}_{X^{s,\theta}}^{N-1} \big) \\ \vn{ \nabla_{t,x} P_{\lambda} h(u_{< \mu})}_{L^{\infty}} \lesssim \mu \Big( \frac{\mu}{\lambda} \Big)^N \vn{u}_{X^{s,\theta}} \big( 1+ \vn{u}_{X^{s,\theta}}^{N} \big) \end{align} \end{lemma} \begin{proof} For any nonzero multi-index $ \beta $ we may use the chain rule to write $$ \partial_x^{\beta} h(u_{< \mu})= \sum_{j=1}^{\vm{\beta}} \sum_{\beta_1+ \dots + \beta_j=\beta} h^{(j)}(u_{< \mu}) \partial_x^{\beta_1} u_{< \mu} \dots \partial_x^{\beta_j} u_{< \mu}, \qquad \beta_i \neq 0. $$ Using $ \vn{\partial_x^{\beta} u_{< \mu} }_{L^{\infty}} \lesssim \mu^{\vm{\beta}} \vn{u}_{X^{s,\theta}} $ we obtain $$ \vn{P_{\lambda} h(u_{< \mu})}_{L^{\infty}} \lesssim \frac{1}{\lambda^N} \vn{\Delta^{N/2} P_{\lambda} h(u_{< \mu}) }_{L^{\infty}} \lesssim \Big( \frac{\mu}{\lambda} \Big)^N \vn{u}_{X^{s,\theta}} \big( 1+ \vn{u}_{X^{s,\theta}}^{N-1} \big). $$ The same argument is used for $ \nabla_{t,x} P_{\lambda} h(u_{< \mu}) $. \end{proof} For very low frequency terms we have: \begin{lemma} \label{lem:plmd:u1} Let $ h $ be a smooth, bounded function with uniformly bounded derivatives such that $ h(0)=0 $. Then, for all $ \lambda \geq 1 $ and even $ N \geq 2 $ one has \begin{equation} \label{est:plmd:u1} \vn{\tilde{P}_{\lambda} h(u_1) }_{X_{\lambda,\lambda}^{s,\theta}} \lesssim \frac{1}{\lambda^N} \vn{u_1}_{X_1^{s, \theta}} \big( 1+\vn{u_1}_{X_1^{s,\theta}}^{N+1} \big). \end{equation} The same statement holds for $ h(u_{\leq C}) $ and for multivariate functions $ h(u_{1}, u_{\leq 2}, \cdots, u_{\leq 2^k}) $.
\end{lemma} \begin{proof} When $ \lambda=1 $ we use the Lipschitz property $ \vm{h(u_1)} \lesssim \vm{u_1} $ to control $ \vn{h(u_1)}_{L^2} $. For $ \vn{\Box_{g_{<1}} \tilde{P}_{1} h(u_1) }_{L^2} $ we use the chain rule. Now we assume $ \lambda > 1 $. We have $$ \vn{\tilde{P}_{\lambda} h(u_1) }_{L^2} \lesssim \frac{1}{\lambda^{N+2}} \vn{ \Delta^{\frac{N}{2}+1} h(u_1) }_{L^2} $$ Using the chain rule we write $$ \Delta^{\frac{N}{2}+1} h(u_1)=\sum h^{(j)} (u_1) \partial_x^{\beta_1} u_1 \cdots \partial_x^{\beta_j} u_1 $$ We use the $ L^2 $ norm for $ \partial_x^{\beta_1} u_1 $ and $ L^{\infty} $ for the other terms. The same type of argument applies for bounding $ \vn{\Box_{g_{<\sqrt{\lambda}}} \tilde{P}_{\lambda} h(u_1) }_{L^2} $. \end{proof} With all the preparations above we are ready to treat high frequency outputs in high modulation spaces $ \tilde{X}_{\lambda,\lambda}^{s,\theta} $. \begin{proposition} \label{prop:low:input:freq} Let $ F $ be a smooth, bounded function with uniformly bounded derivatives such that $ F^{(j)}(0)=0 $ for $ j \leq 4 $. Then, for any even $ N \geq 2 $ and any $ \mu \ll \lambda $ one has \begin{equation} \label{low:input:freq} \vn{P_{\lambda} F(u_{<\mu})}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \Big( \frac{\mu}{\lambda} \Big)^N \big( 1+\vn{u}_{X^{s,\theta}}^{N+8} \big) \sup_{\nu \leq \mu } \Big( \frac{\nu}{\mu} \Big)^N \vn{u_{\nu}}_{X_{\nu}^{s,\theta}}. \end{equation} The same result applies for multivariate functions $ P_{\lambda} h(u_{< \mu/c}, \dots, u_{< \mu}, \dots, u_{< c \mu} ) $.
\end{proposition}
\begin{proof}
We use the iterated paradifferential expansion from Section \ref{sec:para:exp} with $ \nu = 2 $ to express $ F(u_{<\mu}) $ as a sum of terms of type:

{\bf(1)} $ F(u_{1}), \ u_{\lambda_0} h(u_{1}, u_{\leq 2} ), \ \dots, \ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} h(u_1, \dots, u_{\leq C}) $ for $ \lambda_3 \leq \lambda_2 \leq \lambda_1 \leq \lambda_0 \leq \mu \ll \lambda $. Using \eqref{est:plmd:u1} and \eqref{dec:lmd:lmd:mu} we get
\begin{align}
\vn{P_{\lambda} F(u_{1}) }_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \frac{1}{\lambda^N} \big( 1+\vn{u}_{X^{s,\theta}}^{N+1} \big) \vn{u_1}_{X_1^{s,\theta}} \nonumber \\
\vn{ u_{\lambda_0} \tilde{P}_{\lambda} h(u_{1}, u_{\leq 2} ) }_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \vn{ \tilde{P}_{\lambda} h(u_{1}, u_{\leq 2} ) }_{ \tilde{X}_{\lambda,\lambda}^{s,\theta}} \frac{1}{\lambda_0^{\varepsilon}} \vn{ u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta} } \nonumber \\
& \lesssim \frac{1}{\lambda^N} \frac{1}{\lambda_0^{\varepsilon}} \vn{ u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta} } \big( 1+\vn{u}_{X^{s,\theta}}^{N+1} \big) \vn{u_{\leq 2}}_{X^{s,\theta}} \nonumber \\
& \dots \label{dots:eq} \\
\vn{u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} \tilde{P}_{\lambda} h(u_1, \dots, u_{\leq C})}_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} & \lesssim \frac{1}{\lambda^N} \big( 1+\vn{u}_{X^{s,\theta}}^{N+1} \big) \prod_{i=0}^{3} \frac{1}{\lambda_i^{\varepsilon}} \vn{ u_{\lambda_i}}_{X_{\lambda_i}^{s,\theta} } \vn{u_{\leq C}}_{X^{s,\theta}} \nonumber
\end{align}
Then we may sum over $ \lambda_0, \dots, \lambda_3 $.

{\bf(2)} $ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} h(u_{< \lambda_4/c}, \dots, u_{< \lambda_4}, \dots, u_{< c \lambda_4} ) $ for $ \lambda_4 \leq \dots \leq \lambda_0 \leq \mu \ll \lambda $.
By abuse of notation we denote
$$ h(u_{< \lambda_4}) = h(u_{< \lambda_4/c}, \dots, u_{< \lambda_4}, \dots, u_{< c \lambda_4} ), \qquad w= u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}}. $$
Then
$$ P_{\lambda} \big( w \cdot h(u_{< \lambda_4}) \big) = P_{\lambda} \big( w \cdot \tilde{P}_{\lambda} h(u_{< \lambda_4}) \big). $$
We discard the $ P_{\lambda} $ and use \eqref{mult:prop:3}:
$$ \vn{P_{\lambda} \big( w \cdot h(u_{< \lambda_4}) \big) }_{\tilde{X}_{\lambda,\lambda}^{s,\theta}} \lesssim \vn{w}_{Y_{\lambda}^{s,\theta}} \vn{ \tilde{P}_{\lambda} h(u_{< \lambda_4}) }_{M_{\lambda,\lambda}}. $$
By Corollary \ref{multilinear:cor2} we obtain
$$ \vn{w}_{Y_{\lambda}^{s,\theta}} \lesssim \Big( \frac{\lambda}{\lambda_4} \Big)^{s+\theta} \vn{u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta}} \prod_{i=1}^{4} \frac{1}{\lambda_i^\varepsilon} \vn{u_{\lambda_i} }_{X_{\lambda_i}^{s,\theta}}. $$
Using \eqref{M:h:est2} together with \eqref{lowinput:infty} (for $ h $, $ \partial h $ and $ \partial^2 h $) we obtain
$$ \vn{ \tilde{P}_{\lambda} h(u_{< \lambda_4}) }_{M_{\lambda,\lambda}} \lesssim \Big( \frac{\lambda_4}{\lambda} \Big)^{N+2} \vn{u}_{X^{s,\theta}} \big( 1+ \vn{u}_{X^{s,\theta}}^{N+3} \big). $$
We put these together and sum over all but $ \lambda_4 $. It remains to bound
$$ \sum_{\lambda_4 \leq \mu} \log \big( \frac{\mu}{\lambda_4} \big) \Big( \frac{\lambda_4}{\lambda} \Big)^{N+2-s-\theta} \vn{u_{\lambda_4} }_{X_{\lambda_4}^{s,\theta}} \big( 1+ \vn{u}_{X^{s,\theta}}^{N+8} \big) \lesssim $$
$$ \lesssim \Big( \frac{\mu}{\lambda} \Big)^N \Big[ \sup_{\lambda_4 \leq \mu } \Big( \frac{\lambda_4}{\mu} \Big)^N \vn{u_{\lambda_4}}_{X_{\lambda_4}^{s,\theta}} \Big] \sum_{\lambda_4 \leq \mu} \Big( \frac{\lambda_4}{\lambda} \Big)^{2-s-\theta-\varepsilon} \big( 1+ \vn{u}_{X^{s,\theta}}^{N+8} \big), $$
which completes the proof.
\end{proof}
\subsection{Conclusion - Proof of Proposition \ref{Prop:Moser}} \ It remains to prove \eqref{Moser:weak}.
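Schematically (with the norms as in the preceding sections), the $ X^{s,\theta} $-type norms are square sums of frequency-localized pieces,
\begin{align*}
\vn{F(u)}_{X^{s,\theta}}^2 \simeq \sum_{\lambda \geq 1} \vn{P_{\lambda} F(u)}_{X_{\lambda}^{s,\theta}}^2,
\end{align*}
so it suffices to bound each piece $ P_{\lambda} F(u) $ by a quantity that is square-summable in $ \lambda $; this is the pattern followed in each of the cases below.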
We use the iterated paradifferential expansion from Section \ref{sec:para:exp} with $ \mu=\infty $ and $ \nu = \lambda/C $ to express $ F(u) $ as a sum of terms of type:
\begin{enumerate}
\item $ F(u_{\ll \lambda}) $
\item $u_{\lambda_0} h(u_{1}, u_{\leq 2} ), \ u_{\lambda_0} u_{\lambda_1} h(u_{1}, u_{\leq 2}, \dots), \ \dots, \ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} h(u_1, \dots, u_{\leq C}) $
\item $ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} h(u_{< \lambda_4/c}, \dots, u_{< \lambda_4}, \dots, u_{< c \lambda_4} ) $
\end{enumerate}
for $ \lambda \lesssim \lambda_0 $ and $ \lambda_0 \geq \lambda_1 \geq \cdots \geq \lambda_4 \geq 1 $. The term $ \vn{P_{\lambda} F(u_{\ll \lambda})}_{\tilde{X}^{s,\theta}_{\lambda}} $ is estimated by \eqref{low:input:freq}; note that the RHS is square-summable in $ \lambda $. For the terms in (2) we use \eqref{multilinear:est:sumd}: for $ \eta \lesssim \lambda_0 $,
$$ \vn{P_{\lambda} \big[ \Pi_i u_{\lambda_i} P_{\eta} h(u_1,\dots) \big]}_{\tilde{X}_{\lambda}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \vn{ u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta} } \vn{P_{\eta} h(u_1,\dots)}_{\tilde{X}_{\eta}^{s,\theta}} \prod_{i\neq0} \frac{1}{\lambda_i^{\varepsilon}} \vn{ u_{\lambda_i}}_{\tilde{X}_{\lambda_i}^{s,\theta} }. $$
Then we apply Lemma \ref{lem:plmd:u1} and sum in $ \eta $ and $ \lambda_i $, $ i \geq 1 $. We have square-summability in $ \lambda $ due to the factors $ \lambda^s \lambda_0^{-s} \vn{ u_{\lambda_0}}_{\tilde{X}_{\lambda_0}^{s,\theta} } $. We continue with the terms in (3). For $ \eta \gg \lambda_4 $ we use \eqref{multilinear:est:sumd} and Prop.
\ref{prop:low:input:freq} for
$$ P_{\lambda} \big[ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} P_{\eta} h(u_{< \lambda_4/c}, \dots, u_{< \lambda_4}, \dots, u_{< c \lambda_4} ) \big], $$
and then sum over $ \eta \geq C \lambda_4 $ to obtain
$$ \vn{P_{\lambda} \big[ \Pi_i u_{\lambda_i} P_{\gg \lambda_4} h(\dots, u_{< \lambda_4}, \dots ) \big]}_{\tilde{X}_{\lambda}^{s,\theta}} \lesssim \frac{\lambda^s}{\lambda_0^s} \vn{ u_{\lambda_0}}_{X_{\lambda_0}^{s,\theta} } \big( 1+ \vn{u}_{X^{s,\theta}}^{11} \big) \prod_{i=1}^{4} \frac{1}{\lambda_i^{\varepsilon}} \vn{ u_{\lambda_i}}_{X_{\lambda_i}^{s,\theta} }. $$
It remains to consider
$$ P_{\lambda} \big[ u_{\lambda_0} u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} P_{\lesssim \lambda_4 } h(u_{< \lambda_4/c}, \dots, u_{< \lambda_4}, \dots, u_{< c \lambda_4} ) \big]. $$
For $ \lambda_0 \simeq \lambda $ we use Lemma \ref{lemma:mainterm}. Now we assume $ \lambda_0 \gg \lambda $, in which case $ \lambda_1 \simeq \lambda_0 $ and
$$ P_{\lambda} \big[ \Pi_i u_{\lambda_i} P_{\lesssim \lambda_4} h(\dots, u_{< \lambda_4}, \dots ) \big] = P_{\lambda} \big[ u_{\lambda_0} \tilde{P}_{\lambda_0} \big( u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} P_{\lesssim \lambda_4} h(\dots, u_{< \lambda_4}, \dots ) \big) \big]. $$
We bound this by first using the estimate \eqref{multilinear:est:sumd} for the two factors and then Lemma \ref{lemma:mainterm} applied to $ \tilde{P}_{\lambda_0} \big( u_{\lambda_1} u_{\lambda_2} u_{\lambda_{3}} u_{\lambda_{4}} P_{\lesssim \lambda_4} h(\dots, u_{< \lambda_4}, \dots ) \big) $. This concludes the proof of Proposition \ref{Prop:Moser}. \(\Box\)

\appendix

\section{Smith's wave packets} \label{s:smithpackets}

We briefly review Smith's wave packet parametrix construction for wave equations
\begin{align*}
(\partial_t^2 - g^{ab}(t, x)\partial_a \partial_b) u = 0, \qquad u[0] = (u_0, u_1),
\end{align*}
with $C^{1,1}$ coefficients~\cite{smith1998parametrix}.
This construction works equally well for coefficients $ g^{ij} $ satisfying $ \partial^2 g \in L^2 L^{\infty} $. Begin with a partition of unity on $ \mathbb{R}^n_{\xi} $:
$$ 1= \vm{h_1(\xi)}^2 + \sum_{\lambda \geq 2} \sum_{\omega} \vm{h_{\lambda}^{\omega} (\xi) }^2, $$
where, for each frequency $ \lambda \geq 2 $, the $ \omega $'s range over $ \simeq \lambda^{\frac{n-1}{2}} $ directions that are uniformly separated on the unit sphere. The smooth functions $ h_{\lambda}^{\omega} (\xi) $ vanish outside the annular sectors
$$ \{ \ \vm{\xi} \simeq \lambda, \ \vm{ \xi / \vm{\xi}-\omega } \lesssim \lambda^{-\frac{1}{2}} \ \} $$
and satisfy the natural bounds
$$ \vm{ (\omega \cdot \nabla_{\xi})^j \partial_{\xi}^{\alpha} \ h_{\lambda}^{\omega} (\xi) } \lesssim \lambda^{-j-\frac{\vm{\alpha}}{2}}. $$
Thus each $ h_{\lambda}^{\omega} (\xi) $ is supported in a rectangle of size $ \simeq (\lambda^{\frac{1}{2}})^{n-1} \times \lambda $ oriented in the $ \omega $ direction. To each of them we associate a lattice $ \Xi_{\lambda}^{\omega} $ in the physical space $ \mathbb{R}_x^{n} $ on the dual scale, i.e.\ spaced $ \lambda^{-1} $ in the $ \omega $ direction and $ \lambda^{-\frac{1}{2}} $ in the directions of $ \omega^{\perp} $. For each $ \lambda $ we denote
\[ \mathcal T_{\lambda}=\{ T=(x,\omega) \ | \ x \in \Xi_{\lambda}^{\omega} \}, \]
and to each $ T $ we will associate a ``tube'' and a function $ \varphi_T $ defined by
$$ \widehat{\varphi}_T(\xi)= (2 \pi)^{-\frac{n}{2}} \lambda^{-\frac{n+1}{4}} e^{-i x \cdot \xi} h_{\lambda}^{\omega} (\xi). $$
This function is concentrated in phase space around $ (x, \lambda \omega) $.
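The count of directions follows from a standard volume argument: a maximal $ \lambda^{-\frac{1}{2}} $-separated set on the unit sphere has cardinality $ \simeq \lambda^{\frac{n-1}{2}} $, since the spherical caps of radius $ \tfrac{1}{2}\lambda^{-\frac{1}{2}} $ centered at its points are pairwise disjoint, each has surface measure $ \simeq \lambda^{-\frac{n-1}{2}} $, and the caps of twice the radius cover $ S^{n-1} $ by maximality.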
Thus, we have
$$ \vm{ (\omega \cdot \nabla_{y})^j (\omega^{\perp} \cdot \nabla_{y})^{\alpha} \ \varphi_T(y) } \lesssim \lambda^{j+\frac{\vm{\alpha}}{2}+\frac{n+1}{4}} \frac{1}{(1+\lambda \vm{ \omega \cdot (y-x)} + \lambda \vm{y-x}^2 )^N}. $$
These properties imply that for any sum we have
$$ \int_{\mathbb{R}^n } \big| \sum_{\lambda, T\in \mathcal T_{\lambda}} d_T \varphi_T \big|^2 \,\mathrm{d} y \lesssim \sum_{\lambda, T\in \mathcal T_{\lambda}} \vm{d_T}^2. $$
The family $ (\varphi_T)_{T,\lambda} $ is used to decompose arbitrary functions into highly localized components. Indeed, for any $ f \in L^2(\mathbb{R}^n) $ we have
$$ f=\sum_{\lambda, T\in \mathcal T_{\lambda}} c_T \varphi_T, \qquad c_T \vcentcolon= \int_{\mathbb{R}^n } \overline{\varphi_T} f \,\mathrm{d} y. $$
In addition, we have
$$ \int_{\mathbb{R}^n } \vm{f}^2 \,\mathrm{d} y = \sum_{\lambda, T\in \mathcal T_{\lambda}} \vm{c_T}^2. $$
Note that if $ f $ is localized at frequencies $ \simeq \lambda $, then its decomposition only contains terms $ \varphi_T $ with $ T \in \mathcal T_{\lambda'} $ where $ \lambda' \simeq \lambda $. By abuse of notation we will sometimes write simply the sum over $ T \in \mathcal T_{\lambda} $. The decomposition is, in general, not unique since the $ \varphi_T $'s are not linearly independent.

Let $ T=(x_T,\omega_T) \in \mathcal T_{\lambda} $ and fix a sign $ \pm $. Let $ (x_T(t),\omega_T(t)) $ be the projection to $S^*(\mathbb{R}^n)$ of the bicharacteristic initialized at $ (x_T,\omega_T) $. In other words, we set $ \omega_T(t)=\frac{\xi(t)}{\vm{\xi(t)}} $ and then $ (x_T(t),\omega_T(t) )$ solves
\begin{equation} \label{e:smith_hamilton_flow}
\begin{cases}
\frac{\,\mathrm{d} x}{\,\mathrm{d} t}=\pm \partial_{\xi} a(t,x,\omega) \\
\frac{\,\mathrm{d} \omega}{\,\mathrm{d} t}=\mp \partial_x a(t,x,\omega) \pm \langle \omega, \partial_x a(t,x,\omega) \rangle \omega,
\end{cases}
\quad a(t, x, \xi) := (g^{ab}_{<\lambda^{1/2}} \xi_a \xi_b)^{1/2}.
\end{equation}
Define the orthogonal matrix $\Theta^\pm(t)$ by the ODE
\begin{align*}
\dot{\Theta}^\pm = \mp\Theta^\pm [ \omega \otimes a_x (t, x, \omega) - a_x (t, x, \omega ) \otimes \omega ],
\end{align*}
where $v \otimes w$ is the linear map $x \mapsto v \langle w, x \rangle$; by construction, $ \Theta^\pm(t) \omega_T(t)=\omega_T $. Set
\begin{align*}
u_T^\pm (t,y)&=\varphi_T (\Theta^\pm(t)(y-x_T(t))+x_T),\\
v_T^\pm (t,y)&= \frac{1}{a(0, x_T, \omega_T)} \psi_T (\Theta^\pm(t)(y-x^\pm _T(t))+x_T),
\end{align*}
where
\begin{align*}
\widehat{\psi}_T(\xi) := -\lambda \langle \omega_T, \xi \rangle^{-1} \widehat{\varphi}_T(\xi).
\end{align*}
A parametrix for initial data $u[0] := (u_0, u_1)$ is then given by
\begin{align*}
w(t) &:= \frac{1}{2}\sum_{\lambda} \sum_{T \in \mathcal{T}_\lambda} (u_T^+(t) + u_T^{-}(t)) \langle \varphi_T, u_0\rangle_{L^2_x} \\
&+ \frac{1}{2} \Bigl[\sum_{\lambda > \lambda_0} \sum_{T \in \mathcal{T}_\lambda} \lambda^{-1} (v_T^+(t) - v_T^{-}(t)) \langle \varphi_T, (I + E)u_1 \rangle_{L^2_x} \Bigr] + tP_{<\lambda_0} (I+E) u_1\\
&=: \mathbf{c}(t, 0) u_0 + \mathbf{s}(t, 0) u_1,
\end{align*}
where $\lambda_0 \gg 1$ is an absolute constant, and the operator $E$ satisfies $\| E\|_{L^2 \to L^2} \lesssim \lambda_0^{-\frac{1}{2}}$ and ensures that $\partial_t \mathbf{s}|_{t = 0} = I$. The operators $\mathbf{c}(t, 0)$ and $\mathbf{s}(t, 0)$ are approximations of the usual wave propagators $\cos t\sqrt{-\Delta}$ and $(\sqrt{-\Delta})^{-1}\sin t\sqrt{-\Delta}$. The functions $u_T^{\pm}$, $\lambda^{-1} v_T^{\pm}$ then satisfy the definitions of the packets in Section~\ref{s:packets}, and shall hereafter be denoted generically by $u_T$.
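As a quick consistency check (our computation, with the normalizations $ \Theta^\pm(0)=I $ and $ x_T(0)=x_T $): at $ t=0 $ one has $ u_T^\pm(0)=\varphi_T $ and $ v_T^+(0)=v_T^-(0) $, so the reproducing formula $ f = \sum_{\lambda, T} \langle \varphi_T, f \rangle \varphi_T $ yields
\begin{align*}
w(0) = \frac{1}{2} \sum_{\lambda} \sum_{T \in \mathcal{T}_\lambda} 2\, \varphi_T \langle \varphi_T, u_0 \rangle_{L^2_x} = u_0,
\end{align*}
i.e.\ $ \mathbf{c}(0,0)=I $, while the correction $ E $ is only needed to normalize $ \partial_t \mathbf{s} $ at $ t=0 $.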
If $(u_0, u_1)$ is localized at frequency $\lambda \gg \lambda_0$, then the frequency-$\lambda$ part of the solution $u$ to $\Box_g u= 0$ is approximated, in the sense of~\eqref{wp:dec1}, \eqref{wp:dec2}, \eqref{param:small}, \eqref{homogeneous:sol}, by retaining just the terms in $w$ with frequencies comparable to $\lambda$; see \cite[Theorem 4.3]{smith1998parametrix}. As
$$ \langle \Theta^\pm(t)(y-x_T(t)), \omega_T \rangle=\langle y-x_T(t), \omega_T(t) \rangle, $$
for any $ N \geq 0 $ we have
\begin{align}
&\vm{ (\omega_T(t) \cdot \nabla_{y})^j (\omega_T(t)^{\perp} \cdot \nabla_{y})^{\alpha} \ u_T(t,y) } \nonumber \\
&\lesssim \lambda^{j+\frac{\vm{\alpha}}{2}+\frac{n+1}{4}} \frac{1}{ (1+\lambda \vm{ \omega_T(t) \cdot (y-x_T(t))} + \lambda \vm{y-x_T(t)}^2 )^N}. \label{e:smith-decay}
\end{align}
Thus, the $ u_T(t,\cdot) $ are concentrated on spatial rectangles of size $ \lambda^{-1} \times (\lambda^{-\frac{1}{2}})^{n-1} $ that get rotated according to $ \omega_T(t) $ as time evolves and are centered around $ x_T(t) $. By slight abuse of notation, we will also denote by $ T $ this space-time region, called a tube, where $ u_T $ is concentrated. For fixed $ \omega $ and a sign $ \pm $, corresponding to the lattice $ \Xi_{\lambda}^{\omega} $ we obtain a family $ \mathcal T_{\lambda,\omega}^{\pm} $ of spacetime tubes which are finitely overlapping. We introduce the null foliation $\Lambda_\theta$ with direction $\theta$ associated to the metric $g_{<\lambda^{1/2}}$, and construct a null frame $\{L, \underline{L}, E\}$ as before. The following computation is a variation of the proof of \cite[Lemma 3.4]{smith1998parametrix}.
\begin{lemma}
$Lu_T$ satisfies the decay estimate~\eqref{e:packet_decay}.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:half-fullwave-bichar}, we can replace $L$ with the operator $\partial_t + \langle a_\xi (t, y, \widehat{\xi}(t, y) ), \partial_y \rangle$. Assume WLOG that $x_T = 0$.
Then we have
\begin{align*}
(\partial_t + \langle a_{\xi} (t, x(t), \widehat{\xi} (t) ), \partial_y \rangle) u_T (t, y)&= \langle \widehat{\xi} (t), y - x(t) \rangle \langle a_x (t, x(t), \widehat{\xi}(t)), \partial_y \rangle u_T (t, y)\\
&-\langle \widehat{\xi}(t), y - x(t) \rangle \langle \widehat{\xi}(t), \partial_y \rangle u_T(t, y).
\end{align*}
Therefore
\begin{align*}
(\partial_t + \langle a_\xi (t, y, \widehat{\xi}(t, y) ), \partial_y \rangle) u_T &= \langle a_\xi (t, y, \widehat{\xi} (t, y)) - a_\xi (t, x(t), \widehat{\xi} (t, x(t))), \partial_y \rangle u_T\\
&- \langle a_x ( t, x(t), \widehat{\xi} (t, x(t))), y - x(t) \rangle \langle \widehat{\xi} (t, x(t)), \partial_y \rangle u_T \\
&+ \langle \widehat{\xi} (t), y - x(t) \rangle \langle a_x (t, x(t), \widehat{\xi}(t)), \partial_y \rangle u_T.
\end{align*}
The third term is $\mu |\langle y - x(t), \widehat{\xi}(t)\rangle |$ times an order-$0$ packet, therefore it is also of order $0$. To estimate the first two terms, for simplicity of notation we drop the dependence on $t$ and write $x := x(t)$, $\xi(y) := \xi(t, y)$, $a(y, \widehat{\xi}(y)) := a(t, y, \widehat{\xi}(t, y))$. The first two terms can then be written as
\begin{align*}
&\langle a_\xi (y, \widehat{\xi}(y)) - a_{\xi} (x, \widehat{\xi} (x)) - \langle a_x (x, \widehat{\xi}(x)), y-x \rangle \widehat{\xi}(x), \partial_y \rangle u_T\\
&=\langle a_{\xi}(y, \widehat{\xi}(y)) - a_{\xi}(y, \widehat{\xi}(x)), \partial_y \rangle u_T \\
&+ \langle a_{\xi}(y, \widehat{\xi}(x)) - a_{\xi} (x, \widehat{\xi}(x)) - \langle a_x (x, \widehat{\xi} (x)), y-x \rangle \widehat{\xi}(x), \partial_y \rangle u_T.
\end{align*}
For the first term, write $a_{\xi}(y, \widehat{\xi}(y)) - a_{\xi} (y, \widehat{\xi}(x)) = a_{\xi \xi} (y, \widehat{\xi}(x))[ \widehat{\xi}(y)-\widehat{\xi}(x) ] + O(|\widehat{\xi}(y)-\widehat{\xi}(x)|^2)$.
The remainder is $O(|y-x|^2)$, while the linear term has size $O(|y-x|)$ and is orthogonal to $\widehat{\xi}(x)$ by homogeneity:
\begin{align*}
\langle a_{\xi \xi} (y, \widehat{\xi}(x)) [ \widehat{\xi} (y) - \widehat{\xi}(x)], \widehat{\xi}(x) \rangle =\langle \widehat{\xi} (y) - \widehat{\xi}(x), a_{\xi \xi}(y, \widehat{\xi}(x)) \widehat{\xi}(x) \rangle = 0.
\end{align*}
Therefore
\begin{align*}
\langle a_{\xi}(y, \widehat{\xi}(y)) - a_{\xi}(y, \widehat{\xi}(x)), \partial_y \rangle u_T = \mu^{1/2}|y-x| v_T + \mu|y-x|^2 w_T,
\end{align*}
where $v_T$ and $w_T$ satisfy order-$0$ decay, hence this term is also of order $0$. Finally, the remaining term is acceptable since
\begin{align*}
&\langle \widehat{\xi}(x), a_{\xi} (y, \widehat{\xi}(x)) - a_\xi (x, \widehat{\xi}(x)) - \langle a_{x} (x, \widehat{\xi}(x)), y-x \rangle \widehat{\xi}(x) \rangle \\
&= a(y, \widehat{\xi}(x)) - a(x, \widehat{\xi}(x)) - \langle a_x (x, \widehat{\xi}(x)), y-x\rangle\\
&= O(\| \partial^2 g(t)\|_{L^\infty_x}|y-x|^2),
\end{align*}
while the component orthogonal to $\widehat{\xi}(x)$ has size $O(|y-x|)$.
\end{proof}
\subsection{General metrics}
Smith's construction adapts, with minor modifications, to more general equations of the form
\begin{align*}
g^{\alpha \beta} \partial_\alpha \partial_\beta u = 0, \qquad u[0] = (u_0, u_1).
\end{align*}
Here we will harmlessly assume that $g^{00} = 1$. For each frequency $\lambda \gg 1$, factor the principal symbol of the mollified operator $g^{\alpha\beta}_{<\lambda^{1/2}} \partial_\alpha \partial_\beta$ as $(\tau - a^+)(\tau - a^-)$, where
\[ a^\pm(t, x, \xi) := g_{<\lambda^{1/2}}^{0b} \xi_b \pm\bigl[(g^{0b}_{<\lambda^{1/2}}\xi_b)^2 + g^{ab}_{<\lambda^{1/2}} \xi_a \xi_b\bigr]^{1/2} \]
are smooth convex/concave symbols, in view of the hyperbolicity condition.
Then the analogue of~\eqref{e:smith_hamilton_flow} is
\begin{equation} \label{e:smith_hamilton_flow2}
\begin{cases}
\dfrac{\,\mathrm{d} x}{\,\mathrm{d} t}= \partial_{\xi} a^\pm(t,x,\omega) \\ \\
\dfrac{\,\mathrm{d} \omega}{\,\mathrm{d} t}= - \partial_x a^\pm(t,x,\omega) + \langle \omega, \partial_x a^\pm(t,x,\omega) \rangle \omega,\\ \\
\dfrac{\,\mathrm{d} \Theta^\pm}{\,\mathrm{d} t} = -\Theta^\pm [ \omega \otimes a^\pm_x (t, x, \omega) - a^\pm_x (t, x, \omega ) \otimes \omega ],
\end{cases}
\end{equation}
and the analogues of $u_T^\pm$ and $v_T^\pm$ are furnished by the following construction.
\begin{lemma}
Fix $(x_T, \omega_T)$, and let $\varphi_T$ be as before. Then for all sufficiently large frequencies $\lambda \gg 1$, there exist functions $\varphi_T^\pm, \psi_T^\pm$ with similar properties as $\varphi_T$ such that
\begin{align*}
u_T^\pm(t,y) := \varphi_T^\pm(\Theta^\pm(t)(y - x_T(t)) ), \quad v_T^\pm(t,y) := \psi_T^\pm(\Theta^\pm(t)(y - x_T(t)) ),
\end{align*}
satisfy the same estimates as before, and
\begin{align*}
\begin{cases}
u_T^+(0) + u_T^-(0) = \varphi_T + \lambda^{-1/2}\widetilde{\varphi_T}\\
\partial_t u_T^+(0) + \partial_t u_T^-(0) = \lambda^{-1/2}\widetilde{\varphi_T}
\end{cases}
\quad
\begin{cases}
v_T^+(0) - v_T^-(0) = \lambda^{-1/2}\widetilde{\varphi_T}\\
\partial_t v_T^+(0) - \partial_t v_T^-(0) = \varphi_T + \lambda^{-1/2}\widetilde{\varphi_T},
\end{cases}
\end{align*}
where $\widetilde{\varphi_T}$ denotes a generic function with similar smoothness and decay as $\varphi_T$.
\end{lemma}
\begin{proof}
Without loss of generality consider the first system. Using equations~\eqref{e:smith_hamilton_flow2}, and writing $\Phi_T = (\varphi_T^+, \varphi_T^-)^*$, $F_T = (\varphi_T, 0)^*$, the system takes the form
\begin{align*}
[M_T(D) + R_T(X, D)] \Phi_T = F_T + O(\lambda^{-1/2}),
\end{align*}
where
\begin{align*}
M_T(D) &= \left(\begin{array}{cc} 1 & 1 \\-\langle a_\xi^+(0, z_T), \partial_x \rangle & -\langle a_\xi^-(0, z_T), \partial_x \rangle \end{array}\right), \quad z_T = (x_T, \omega_T),\\
R_T(X, D) &= \left(\begin{array}{cc}0 & 0 \\ \langle \dot{\Theta}^+(0) (x-x_T), \partial_x \rangle & \langle \dot{\Theta}^- (0)(x-x_T) , \partial_x \rangle \end{array}\right).
\end{align*}
When $\xi$ is restricted to a small sector centered at $\omega_T$ the main term is elliptic:
\begin{align*}
|\det M_T(\xi)| = |\langle (a_\xi^+(0, z_T) - a_\xi^-(0, z_T)), \xi \rangle| \sim |\xi|(a^+ - a^-)(0, z_T) \sim |\xi|.
\end{align*}
Further, in view of the spatial localization, the operator $R_T(X, D)$ has order $\tfrac{1}{2}$ when acting on functions of the form $\widetilde{\varphi_T}$. Thus it suffices to set $\Phi_T := M_T(D)^{-1} F_T$.
\end{proof}
For each frequency $\lambda >1 $ and time $t$, let $\Phi_\lambda(t): L^2 \times \lambda^{-1}L^2 \to L^2 \times \lambda^{-1} L^2$ denote the operator
\begin{gather*}
\Phi_\lambda (t)\binom{f}{g} := \sum_{T \in \mathcal{T}_\lambda} \Phi_T (t) \binom{f_T}{g_T},\\
\Phi_T(t) := \left(\begin{array}{cc} u_T^+ + u_T^{-} & v_T^+ - v_T^{-}\\ \partial_t u_T^+ + \partial_t u_T^{-} & \partial_t v_T^+ - \partial_t v_T^{-} \end{array}\right)(t), \quad \binom{f_T}{g_T} = \binom{\langle f, \varphi_T \rangle}{\langle g, \varphi_T \rangle},
\end{gather*}
and set
\begin{align*}
\Phi(t) := \sum_{\lambda > \lambda_0} \Phi_\lambda(t) P_\lambda + \left(\begin{array}{cc} 1 & t\\ 0 & 1\end{array}\right) P_{\le\lambda_0}.
\end{align*}
Then, as $\|\Phi(0) - I\|_{L^2 \times H^{-1} \to L^2 \times H^{-1}} = O(\lambda_0^{-1/2})$, by choosing $\lambda_0$ sufficiently large the operator $\Phi(0)$ is invertible on $L^2 \times H^{-1}$.
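Indeed, invertibility follows from the Neumann series: writing $ \Phi(0) = I - E_0 $ with $ \|E_0\| = O(\lambda_0^{-1/2}) $, for $ \lambda_0 $ large enough we have $ \|E_0\| \le \tfrac{1}{2} $, so
\begin{align*}
\Phi(0)^{-1} = \sum_{k \geq 0} E_0^k, \qquad \|\Phi(0)^{-1}\|_{L^2 \times H^{-1} \to L^2 \times H^{-1}} \le (1 - \|E_0\|)^{-1} \le 2.
\end{align*}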
Hence
\begin{align*}
\widetilde{\Phi}(t) := \Phi(t) \Phi(0)^{-1}
\end{align*}
is a parametrix in the sense of Property~\ref{param:property}.

\section{Microlocal analysis tools} \label{s:microlocal}

\subsection{Symbols and phase space metrics}
We begin by briefly reviewing the framework of H\"ormander metrics; for further details consult~\cite[Section 18.4]{hormanderv3}. Let $g$ be a slowly varying metric on phase space $W :=T^* \mathbb{R}^n = \mathbb{R}^n_x \times (\mathbb{R}^n)^*_\xi$. This induces norms on tensors in the usual manner; in particular, if $l : T_zW \times \cdots \times T_z W \to \mathbb{R}$ is $k$-linear, for $z = (x, \xi) \in W$ set
\begin{align*}
|l_z|_k^g := \sup_{0 \ne t_j \in T_zW} \frac{ |l_z (t_1, \dots, t_k) | }{ \prod_{j=1}^k g_z(t_j)^{1/2}}.
\end{align*}
\begin{definition} \label{def:symbol:class}
If $m$ is a slowly varying function on $W$, write $S(m, g)$ for the space of functions $u$ on $W$ such that
\begin{align*}
|\nabla^k u|^g_k (z) / m(z) \le C_k \quad \text{for all } z\in W.
\end{align*}
\end{definition}
\begin{definition}
A map $\chi : W \to W$ is $g$-smooth if the pullback satisfies $\chi^* S(1,g) \subset S(1,g)$.
\end{definition}
By the chain rule and induction, this definition is equivalent to requiring that
\begin{align*}
|D^k \chi (z; t_1, \dots, t_k)|_{g_{\chi(z)}} \le C_k \prod_{j=1}^k g_z(t_j)^{\frac{1}{2}} \text{ for all } t_j \in T_zW, \text{ uniformly in } z.
\end{align*}
\subsection{Pseudo-differential calculus}
Let $A$ be the quadratic form on $W$ given by $A(y, \eta) = \langle y, \eta \rangle$, and for $\alpha \le 1$ let $g_\alpha$ denote the phase space metric~\eqref{e:g}.
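For orientation, a standard example (not the metric used below): taking
\begin{align*}
g_{(x,\xi)}(y, \eta) := |y|^2 + \frac{|\eta|^2}{\langle \xi \rangle^2}, \qquad m(x, \xi) := \langle \xi \rangle^{s},
\end{align*}
the class $ S(m, g) $ defined above coincides with the classical symbol class $ S^s_{1,0} $, i.e.\ the symbols satisfying $ |\partial_x^{\beta} \partial_\xi^{\gamma} a(x,\xi)| \lesssim_{\beta, \gamma} \langle \xi \rangle^{s - \vm{\gamma}} $, since a unit $g$-vector in the $\xi$ directions has Euclidean length $\langle \xi \rangle$.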
Let $U_\alpha$ denote the phase space region
\[ U_\alpha := \{ (x, \xi): |\xi| \ge \alpha^{-2}\}. \]
In the language of~\cite[Section 18.4]{hormanderv3}, the phase space metric $g$ is $A$-\emph{temperate} in $U_\alpha$ in the sense that there exist constants $C, N$ such that
\begin{align*}
g_w(t) \le C g_z (t) (1 + g^A_w (z-w) )^N \text{ for all } z, w \in U_\alpha,
\end{align*}
where for a general quadratic form $A$ one has
\begin{align*}
g^A _w(A\zeta) := \sup_{\eta \in \operatorname{Im}(A)} \frac{ |\langle \eta , \zeta \rangle|^2}{g_w(\eta)}
\end{align*}
and $g^A_w (\beta):= \infty$ for $\beta \notin \operatorname{Im}(A)$. In the present case,
\begin{align*}
g^{A}_{(x, \xi)} (y, \eta) &= g^{-1}_{(x, \xi)} (\eta, y) = \alpha^4 |\langle \eta, \wht{\xi} \rangle|^2 + \alpha^2 | \eta \wedge \wht{\xi}|^2 + |\xi|^2 | \langle y, \wht{\xi} \rangle|^2 + \alpha^2 |\xi|^2 | y \wedge \wht{\xi} |^2,
\end{align*}
where we write $\wht{\xi} := \xi/|\xi|$. Similarly, a slowly varying function $m$ is $A$-\emph{temperate} with respect to $z \in U_\alpha$ if
\begin{align*}
m(w) \le C m(z) (1 + g_w^A (z-w) )^N \text{ for all } w \in U_\alpha.
\end{align*}
For a symbol $a$, we define the corresponding pseudo-differential operator using right-quantization,
\begin{align*}
a(X, D)u(x) = (2\pi)^{-n} \iint e^{i\langle x -y, \xi \rangle} a(x, \xi) u(y) \, dy \, d\xi,
\end{align*}
and write $OPS(m, g)$ for the quantizations of symbols in $S(m, g)$. Recall that if $a$ and $b$ are symbols, then formally
\begin{align*}
a(X,D) b(X,D) = (a \circ b)(X, D),
\end{align*}
where
\begin{align*}
a \circ b (x, \xi) := e^{i\langle D_\eta, D_y \rangle} [a(x, \eta) b(y, \xi)]|_{\stackrel{y=x}{\eta=\xi}}.
\end{align*}
In particular, we have the first- and second-order symbol expansions
\begin{align} \label{e:symbol_expansion}
\begin{split}
a \circ b &= ab + \frac{1}{i} \int_0^1 r_{1,s} \, ds\\
&= ab + \frac{1}{i} a_\xi b_x -\frac{1}{2} \int_0^1 r_{2, s} \, ds,
\end{split}
\end{align}
where
\begin{align*}
r_{j, s} (x, \xi) = e^{is \langle D_y, D_\eta \rangle} \langle \partial_y, \partial_\eta\rangle^j [a(x, \eta) b(y, \xi)]_{\stackrel{y=x}{\eta=\xi}}.
\end{align*}
The remainder can be estimated as in~\cite[Section 18.4]{hormanderv3}. For a real parameter $t \ge 0$, define
\begin{align*}
a \circ_t b (x, \xi) := e^{it\langle D_\eta, D_y \rangle} [a(x, \eta) b(y, \xi)]|_{\stackrel{y=x}{\eta=\xi}}.
\end{align*}
\begin{lemma} \label{l:gauss_transform}
Suppose $m_1, m_2$ are slowly varying and $A$-temperate in the region $U_\alpha$. If $a \in S(m_1, g_{\alpha})$, $b \in S(m_2, g_{\alpha} )$ are symbols supported in $U_\alpha$, then
\begin{align*}
a \circ_t b \in S( m_1 m_2, g_{\alpha})
\end{align*}
with constants uniform in $0 \le t \le 1$.
\end{lemma}
\begin{proof}[Proof sketch]
Follow the proof of Theorem 18.4.10 and Proposition 18.5.2 in H\"ormander~\cite{hormanderv3}. The main point is that the estimates for the Gauss transform $e^{it A(D)}$ only improve when $t \le 1$, since $g^{tA} = t^{-2} g^{A} \ge g^{A}$. Let $B_t$ denote the quadratic form on $W \oplus W$ defined by $B_t((x, \eta), (y, \xi)) := t \langle y, \eta \rangle$; then one needs to verify that
\begin{itemize}
\item The metric $G := g_\alpha \oplus g_\alpha$ is slowly varying and $B_t$-temperate with respect to the diagonal $(x, \xi, x, \xi)$ in $U_\alpha \times U_\alpha$, and $G \le G^{B_t}$.
\item The weight function $M := m_1 \otimes m_2$ is slowly varying and temperate with respect to $B_t$ at the diagonal where $|\xi| \ge \alpha^{-2}$.
\end{itemize}
\end{proof}
We hereafter denote by $S(m, g_\alpha)$ those symbols supported in the region $|\xi| \ge \alpha^{-2}/8$. The corresponding operators $a(X, D) \in OPS(m, g_\alpha)$ accept input frequencies $\gtrsim \alpha^{-2}$. The previous lemma shows that $OPS(m_1, g_\alpha) \cdot OPS(m_2, g_\alpha) \subset OPS(m_1 m_2, g_\alpha)$. In our applications we encounter a slightly more complicated class of symbols associated to two angular scales. For $\beta \ge \alpha$, let
\begin{align*}
S^1_\beta (m, g_\alpha) := \{ \phi \in S(m, g_\alpha): \partial_x \phi \in S(m, g_\alpha), \ \partial_\xi \phi \in S( (\beta |\xi|)^{-1} m, g_\alpha)\}.
\end{align*}
In view of Lemma~\ref{l:gauss_transform} and the identities
\begin{align*}
\partial_x (a \circ b) &= (\partial_x a) \circ b + a \circ (i \partial_x b),\\
\partial_\xi (a \circ b) &= (i\partial_\xi a) \circ b + a \circ (\partial_\xi b),
\end{align*}
one sees that
\begin{align*}
OPS^1_\beta (m_1, g_\alpha) \cdot OPS^1_\beta (m_2, g_\alpha) \subset OPS^1_\beta ( m_1 m_2, g_\alpha).
\end{align*}
However, we do not quite have the usual pseudo-differential calculus, since $OPS(m, g_\alpha)$ is not closed under taking adjoints. One remedy is to consider the subclass of operators in $OPS(m, g_\alpha)$ whose output frequencies are $\ge \alpha^{-2}$.
\begin{lemma} \label{l:adjoint}
Suppose $m$ is slowly varying and temperate with respect to $g_{\alpha}$. If $\phi \in S(m, g_{\alpha})$ satisfies $\phi(X,D) = S_{\ge\alpha^{-2}}(D) \phi(X,D) S_{\ge\alpha^{-2}}(D)$, then
\begin{align*}
\phi^* (x, \xi) = [e^{i\langle D_y, D_\eta \rangle} \phi(y, \eta) ]_{\stackrel{y=x}{\eta=\xi}} \in S(m, g_{\alpha})
\end{align*}
and is supported in $|\xi| \ge \alpha^{-2}/8$.
\end{lemma}
This leads to the main $L^2$ estimate:
\begin{lemma}[{\cite[Theorem 4.8]{geba2007phase}}] \label{l:CV}
If $a \in S(1, g_\alpha)$ in addition satisfies
\begin{align*}
|S_{<\lambda}(D_x + \xi) a (x, \xi)| \le C_N \Bigl( \frac{\lambda}{\langle \xi \rangle} \Bigr)^N \quad \text{for } 1 \le \lambda \le \langle \xi \rangle,
\end{align*}
then $a(X, D)$ is continuous on $L^2$. In particular, the conclusion holds for operators of the form
\[a(X, D) = S_{>\lambda/8}(D)a(X, D) S_{\lambda}(D), \quad \lambda \ge \alpha^{-2}.\]
\end{lemma}
\begin{proof}
For the last statement, note that in view of
\begin{align*}
\wht{a(X,D) u}(\eta) = \int \wht{a}(\eta - \xi, \xi) \wht{u}(\xi) \, d\xi,
\end{align*}
the stated frequency localization implies that $S_{<\lambda/8} (D_x + \xi) a(x, \xi) = 0$.
\end{proof}
We also need bounds for certain pseudo-differential operators which are strongly localized in input frequency but do not quite satisfy the hypotheses of the previous lemma. For a direction field $x \mapsto \Theta(x) \in S^{n-1}$ on $\mathbb{R}^n$, let
\begin{align*}
m = m_\Theta (x, \xi) := \langle \alpha^{-1} (\wht{\xi} - \Theta(x) ) \rangle^{-1}, \quad \wht{\xi} := \xi / |\xi|.
\end{align*}
To express the angular localization of various symbols it will be convenient to introduce the following notation: for a weight function $m$, write $S(m^\infty, g_{\alpha}) := \bigcap_N S( m^N, g_\alpha)$.
\begin{lemma} \label{l:schur-bound}
Suppose $x \mapsto \Theta(x) \in S^{n-1} \subset \mathbb{R}^n$ is Lipschitz, and let $\phi \in S(m^\infty, g_\alpha)$ be supported in an annulus $|\xi| \sim \lambda \ge \alpha^{-2}$. Then $\phi(X, D)$ is bounded on $L^p$ for all $1 \le p \le \infty$.
\end{lemma}
\begin{proof}
We use Schur's test on the kernel $K(x, y) = (2\pi)^{-n}\int e^{i\langle x-y, \xi\rangle} \phi(x, \xi) \, d\xi = \mathcal{F}_2^{-1} \phi (x, x-y)$ of $\phi(X, D)$. For fixed $x$, the function $\xi \mapsto \phi(x, \xi)$ is a Schwartz function of height $1$, adapted to the sector $|\wht{\xi} -\Theta(x)| \lesssim \alpha$, $|\xi| \sim \lambda$. Therefore
\begin{align*}
|K(x, y)| \lesssim \lambda (\alpha \lambda)^{n-1} \langle \lambda |\langle x-y, \Theta(x) \rangle| + \alpha \lambda |(x-y) \wedge \Theta(x)| \rangle^{-N} \text{ for any }N,
\end{align*}
so
\begin{align*}
\sup_{x} \int |K(x, y)| \, dy < \infty.
\end{align*}
To evaluate
\begin{align*}
\sup_y \int |K(x,y)| \, dx,
\end{align*}
decompose $\phi = \sum \phi_{\nu}^\omega$, where $\omega$ ranges over a partition of the annulus $|\xi| \sim \lambda$ into $\lambda \times (\alpha \lambda)^{n-1}$ sectors, and, for each $\omega$, $\nu$ ranges over a partition of space into parallel $\alpha^2 \times \alpha^{n-1}$ parallelepipeds $R^{\omega}_\nu$ with orientation $\omega$. The kernel decomposes as
\begin{align*}
K(x,y) = \sum_{\omega} \sum_\nu \mathcal{F}_2^{-1} \phi_\nu^\omega (x, x-y) = \sum_{\omega} \sum_\nu K_\nu^\omega (x, y).
\end{align*}
Then
\begin{align*}
|K_\nu^\omega(x,y)| \lesssim \mathrm{1}_{R_\nu^\omega}(x) \langle \alpha^{-1} (\omega - \Theta(x_\nu^\omega) ) \rangle^{-N} \lambda (\alpha \lambda)^{n-1} \langle \lambda |\langle x-y, \omega \rangle| + \alpha \lambda |(x-y) \wedge \omega | \rangle^{-N}.
\end{align*} For a constant $c$ depending on the Lipschitz constant of $\Theta$, \begin{align*} &\int |K(x, y)| \, dx \lesssim \sum_{ |x_\nu^\omega-y| \le c(|\omega - \Theta(y)| + \alpha)} \langle \alpha^{-1} ( \omega - \Theta(y) ) \rangle^{-N} \\ &\times \int_{R_\nu^\omega} \lambda (\alpha \lambda)^{n-1} \langle \lambda |\langle x-y, \omega \rangle| + \alpha \lambda |(x-y) \wedge \omega | \rangle^{-2N} \, dx\\ &+ \sum_{ |x_\nu^\omega-y| > c(|\omega - \Theta(y)| + \alpha)} \langle \alpha^{-1} ( \omega - \Theta(x_\nu^\omega) ) \rangle^{-N}\langle \alpha \lambda |\omega - \Theta(y)| \rangle^{-N} \\ &\times \int_{R_\nu^\omega} \lambda (\alpha \lambda)^{n-1} \langle \lambda |\langle x-y, \omega \rangle| + \alpha \lambda |(x -y) \wedge \omega | \rangle^{-N} \, dx\\ &\lesssim \sum_{\omega} \langle \alpha^{-1}(\omega - \Theta(y)) \rangle^{-N} < \infty. \end{align*} \end{proof} \section{An angular partition of unity} Throughout this discussion we fix a minimum angular scale $\alpha_\mu \ll 1$. Choose $0 \le \eta \in C^\infty_0\bigl( (-1, 1) \bigr)$ with $\int \eta = 1$, and let $\eta_h = h^{-1} \eta( h^{-1} \cdot )$. For each $\theta \in \Omega_\alpha$, with $\theta = [a, b)$, define the scale functions \begin{align*} &h_1(\xi) = \left\{\begin{array}{ll} \frac{1}{8}\alpha, & \xi \le \frac{a+b}{2},\\ \frac{1}{16} \alpha, & \xi > \frac{a+b}{2} \end{array} \right. \quad &h_2(\xi) = \left\{\begin{array}{ll} \frac{1}{16}\alpha, & \xi \le \frac{a+b}{2},\\ \frac{1}{8} \alpha, & \xi > \frac{a+b}{2} \end{array} \right.\\ &h_3(\xi) = \left\{\begin{array}{ll} \frac{1}{16}\alpha, & \xi \le \frac{a+b}{2},\\ \frac{1}{16} \alpha, & \xi > \frac{a+b}{2} \end{array} \right.
\quad &h_4(\xi) = \left\{\begin{array}{ll} \frac{1}{8}\alpha, & \xi \le \frac{a+b}{2},\\ \frac{1}{8} \alpha, & \xi > \frac{a+b}{2} \end{array} \right. \end{align*} and for $k = 1, 2, 3, 4$, define $\phi_{\theta}^{\alpha, k}$ by mollifying the characteristic function of $\theta$ on a position-dependent scale: \begin{align*} \phi^{\alpha, k}_\theta (\xi) := \mathrm{1}_{\theta} * \eta_{h_k(\xi)} (\xi). \end{align*} Then each $\phi^{\alpha, k}_\theta$ is smooth on the $\alpha$ scale and supported in a $\tfrac{1}{8}|\theta|$ neighborhood of the interval $\theta$, and there exist $c_1, c_2 > 0$ such that \begin{align} \label{e:approx_partition} c_1 \le \sum_{\theta \in \Omega_\alpha} \phi^{\alpha, k(\theta)}_\theta \le c_2 \end{align} for any sequence $k(\theta) \in \{1, 2, 3, 4\}$. \begin{proposition} \label{p:bilinear_pou} Fix a small dyadic number $\alpha_\mu < 1/4$. For each $\omega \in \Omega_{\alpha_\mu}$, there exist dyadic intervals $\theta_0 = \omega, \theta_1, \dots $ on the circle from the families $\Omega_{\alpha}$, with $|\theta_j - \omega| \sim \alpha_j$ for $j \ge 1$, such that we have the (fixed time) partition of unity \begin{align*} 1 = \sum_{j} \phi^{\alpha_j, k(\omega)}_{\theta_j}, \end{align*} where $k(\omega) \in \{1, 2, 3, 4\}$, and the scales $\alpha_j$ satisfy \begin{itemize} \item $\tfrac{1}{2}\alpha_{j-1} \le \alpha_{j} \le 2 \alpha_{j-1}$, and \item at most $O(1)$ consecutive $\alpha_j$'s are equal. \end{itemize} The families $\Omega_{\alpha}$ are independent of $\omega$.
\end{proposition} Before the proof, we consider \begin{lemma} Given $\omega \in \Omega_{\alpha_\mu}$, there exists a sequence $\omega = \theta_0, \theta_1, \theta_2, \dots$ of dyadic intervals in $\mathbb{R}$ with the following properties: \begin{itemize} \item $\theta_j$ is right-adjacent to $\theta_{j-1}$. \item $|\theta_{j-1}| \le |\theta_j| \le 2 |\theta_{j-1}|$ for all $j$. \item If $|\theta_{j-1}| = |\theta_j|$, then $|\theta_{j+1}| = 2 |\theta_j|$. \item $\sum_{j=1}^J |\theta_j| \le 4 |\theta_J|$ for all $J$. \end{itemize} The analogous statement with ``right'' and ``left'' interchanged also holds. \end{lemma} \begin{proof} The idea of the construction is to ``double whenever possible while moving right.'' Using the usual tree terminology, define the sequence $\theta_j$ inductively as follows: \begin{itemize} \item If $\theta_{j}$ is the left child of its parent, let $\theta_{j+1}$ be its sibling. \item Otherwise, let $\theta_{j+1}$ be the right neighbor of $\theta_j$'s parent. \end{itemize} Then $|\theta_{j+1}| = |\theta_j|$ in the first case and $|\theta_{j+1}| = 2|\theta_j|$ in the second. Since each interval has only two children, there cannot be more than two consecutive intervals of the same width. The inequality $\sum_{j=1}^J |\theta_j| \le 4|\theta_J|$ is verified inductively. Assume it holds for all smaller $J$. If $|\theta_{J+1}| = 2|\theta_J|$, then \begin{align*} \sum_{j=1}^{J+1} |\theta_j| \le 4 |\theta_J| + 2|\theta_J| \le 4|\theta_{J+1}|, \end{align*} while if $|\theta_{J+1}| = |\theta_J|$, then $|\theta_J| = 2|\theta_{J-1}|$ and we have \begin{align*} \sum_{j=1}^{J+1} |\theta_j| \le 4|\theta_{J-1}| + 2|\theta_{J-1}| + 2|\theta_{J-1}| \le 4 |\theta_{J+1}|.
\end{align*} \end{proof} \begin{proof}[Proof of Prop.~\ref{p:bilinear_pou}] Starting with $\omega$, we construct dyadic intervals $\theta_1^r, \theta_2^r, \dots$ and $\theta_1^l, \theta_2^{l}, \dots$ to the right and left of $\omega$, respectively, according to the lemma. Choose $J^r$ and $J^l$ maximal such that \begin{align*} \sum_{j=1}^{J^r} |\theta^r_j| \le \frac{1}{2} - |\omega|, \quad \sum_{j=1}^{J^l} |\theta^l_j| \le \frac{1}{2} - |\omega|. \end{align*} By the lemma one must have \begin{align*} \frac{1}{8} - \frac{|\omega|}{4} \le |\theta^r_{J^r}|, |\theta^l_{J^l} | \le \frac{1}{2} - |\omega|; \end{align*} that is, \begin{align*} |\theta^r_{J^r}|, |\theta^l_{J^l}| \in \Bigl\{ \frac{1}{16}, \frac{1}{8}, \frac{1}{4} \Bigr\}. \end{align*} Assume without loss of generality that $|\theta^l_{J^l}| \le |\theta^r_{J^r}|$. \begin{itemize} \item If $|\theta^l_{J^l}| = \tfrac{1}{16}$ and $|\theta^r_{J^r}| = \tfrac{1}{4}$, then $\theta^r_{J^r}$ and $\theta^l_{J^l}$ are separated modulo 1 by fewer than $10$ dyadic intervals of width $\tfrac{1}{16}$ (the worst case being when $|\theta^l_{J^l + 1}| = \tfrac{1}{8}$ and $|\theta^r_{J^r + 1}| = \tfrac{1}{2}$). We replace $\theta^r_{J^r}$ by its two children $\theta^{r, 1}_{J^r}, \ \theta^{r, 2}_{J^r}$ of width $\tfrac{1}{8}$, and reindex the intervals by defining $\theta^r_{J^r} := \theta^{r, 1}_{J^r}$, $\theta^r_{J^r + 1} := \theta^{r, 2}_{J^r}$, and replacing $J^r$ by $J^r + 1$. \item If $|\theta^l_{J^l}|$ and $|\theta^r_{J^r}|$ differ by at most one dyadic scale, then they are separated modulo 1 by fewer than $6$ dyadic intervals of width $|\theta^l_{J^l}|$. \end{itemize} Let $\theta'_{1}, \dots, \theta'_{n} \in \Omega_{|\theta^l_{J^l}|}, \ n < 10,$ be the intervening intervals.
Projecting the intervals to $\mathbb{R} / \mathbb{Z}$, we relabel the sequence \[\theta_0, \theta^r_{1}, \dots, \theta^r_{J^r}, \theta'_1, \dots, \theta'_n, \theta^l_{J^l}, \dots, \theta^{l}_1\] as \[ \theta_0, \theta_1, \dots, \theta_{M-1}, \] and write \[\alpha_j := |\theta_j|.\] Then, interpreting the indices modulo $M$, we have \begin{itemize} \item $\tfrac{1}{2}\alpha_{j-1} \le \alpha_{j} \le 2 \alpha_{j-1}$, and \item at most $O(1)$ consecutive $\alpha_j$'s are equal. \end{itemize} For each $\theta_j = [a_j, b_j)$ (mod 1), define the scales \begin{align*} h_{j}(\xi) = \left\{\begin{array}{ll} \frac{\alpha_j}{8}, & \alpha_j \le \alpha_{j+1},\\ \frac{\alpha_j}{16}, & \alpha_j > \alpha_{j+1} \end{array} \right. \end{align*} and define $\phi^{\alpha_j}_{\theta_j}$ by mollifying the characteristic function of $\theta_j$ near the junction points: \begin{align*} \phi^{\alpha_j}_{\theta_j} (\xi) = \left\{\begin{array}{ll} \mathrm{1}_{\theta_j} * \eta_{h_{j-1}} (\xi), & a_j - \frac{\alpha_j}{8} \le \xi \le \frac{a_j + b_j}{2}, \\ \mathrm{1}_{\theta_j} * \eta_{h_j} (\xi), & \frac{a_j+b_j}{2} < \xi \le b_j + \frac{\alpha_j}{8},\\ 0, & \text{otherwise.} \end{array}\right. \end{align*} This function has one of the four forms asserted in the proposition, and \begin{align*} 1 = \sum_{j=0}^{M-1} \phi^{\alpha_j}_{\theta_j} \end{align*} since the same is true for the sum of characteristic functions $\mathrm{1}_{\theta_j}$ and the mollification scale is the same near the boundary between $\theta_j$ and $\theta_{j+1}$. \end{proof} \nocite{*} \end{document}
\begin{document} \begin{abstract} In this paper we give an asymptotically tight bound for the tolerated Tverberg theorem when the dimension and the size of the partition are fixed. To achieve this, we study certain partitions of order-type homogeneous sets and use a generalization of the Erdős-Szekeres theorem. \end{abstract} \title{A note on the tolerated Tverberg theorem} \section{Introduction}\label{sec:intro} Tverberg's theorem \cite{Tve1966} states that any set with at least $(d+1)(r-1)+1$ points in $\mathbb R^d$ can be partitioned into $r$ disjoint sets $A_1, \ldots, A_r$ such that $\bigcap_{i=1}^r \conv(A_i) \neq \emptyset$. Furthermore, this bound is tight. The tolerated Tverberg theorem generalizes Tverberg's theorem by introducing a new parameter $t$ called \emph{tolerance}. It states that there is a minimal number $N=N(d,t,r)$ so that any set $X$ of at least $N$ points in $\mathbb R^d$ can be partitioned into $r$ disjoint sets $A_1, \ldots, A_r$ such that $\bigcap_{i=1}^r \conv(A_i\setminus Y) \neq \emptyset$ for any $Y\subset X$ with at most $t$ points. In contrast with the classical Tverberg theorem, the best known bounds for $N(d,t,r)$ are not tight. In \cite{Lar1972}, Larman proved that $N(d,1,2)\leq 2d+3$, and García-Colín showed that $N(d,t,2)\le(t+1)(d+1)+1$ in her PhD thesis \cite{GCol2007}, later published in \cite{GL2015}. This was generalized by Strausz and Soberón, who gave the general bound $N(d,t,r)\le (r-1)(t+1)(d+1)+1$ \cite{SS2012}. Later, Mulzer and Stein gave the bound $N(d,t,r)\le 2^{d-1}(r(t+2)-1)$, which improves the previous bound for $d\le 2$ and is tight for $d=1$ \cite{MS2014}. As for lower bounds, Ramírez-Alfonsín \cite{RAlf2001} and García-Colín \cite{GL2015}, using oriented matroids, proved that $\ceil{\frac{5d}{3}} +3 \leq N(d, 1, 2)$ and $2d + t +1 \leq N(d, t, 2)$, respectively. Furthermore, Larman's upper bound is known to be sharp for $d=1, 2, 3$ and $4$ \cite{Lar1972,Forge2001}.
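To compare the bounds quoted above numerically, the following short Python sketch (our own illustration; the function names are not from the cited papers) tabulates the known upper and lower bounds for $N(d,t,r)$:

```python
# Upper and lower bounds on the tolerated Tverberg number N(d, t, r),
# as quoted in the introduction; function names are ours.

def ub_strausz_soberon(d, t, r):
    # Strausz--Soberon: N(d,t,r) <= (r-1)(t+1)(d+1) + 1
    return (r - 1) * (t + 1) * (d + 1) + 1

def ub_mulzer_stein(d, t, r):
    # Mulzer--Stein: N(d,t,r) <= 2^(d-1) (r(t+2) - 1), tight for d = 1
    return 2 ** (d - 1) * (r * (t + 2) - 1)

def lb_garcia_colin(d, t):
    # Garcia-Colin (r = 2): 2d + t + 1 <= N(d, t, 2)
    return 2 * d + t + 1

def lb_soberon(d, t, r):
    # Soberon: r(floor(d/2) + t + 1) <= N(d, t, r)
    return r * (d // 2 + t + 1)

# Sanity check: for d = 1 the (tight) Mulzer--Stein bound dominates
# the Soberon lower bound, as it must.
for r in (2, 3, 4):
    for t in (1, 5, 10):
        assert lb_soberon(1, t, r) <= ub_mulzer_stein(1, t, r)
```

For example, at $(d,t,r)=(2,3,3)$ the Strausz-Soberón bound gives $25$, Mulzer-Stein gives $28$, and the Soberón lower bound gives $15$, illustrating the gap that Theorem 1 below closes asymptotically.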
Lastly, Soberón gave the bound $r(\floor{\frac d2}+t+1)\le N(d,t,r)$ \cite{Sob2015}. In this paper we show that for fixed $d$ and $r$, the correct value for $N(d,t,r)$ is asymptotically equal to $rt$. To be precise, we prove the following theorem. \begin{theorem}\label{thm:main} For fixed $r$ and $d$ we have that $$N(d,t,r) = rt + o(t).$$ \end{theorem} This improves all previously known upper bounds whenever $t$ is large compared to $r$ and $d$, and comes with a matching lower bound. The proof follows from studying the behavior of $t$ with respect to $N$ and using the Erdős-Szekeres theorem for cyclic polytopes in $\mathbb R^d$. We include a short review of cyclic polytopes and the Erdős-Szekeres theorem in Section \ref{sec:prelim}. In Section \ref{sec:alternating} we prove a useful lemma about alternating partitions of a cyclic polytope which leads to an interesting open problem. The proof of Theorem \ref{thm:main} is detailed in Section \ref{sec:remarks}. \section{Preliminaries}\label{sec:prelim} In this section we introduce some definitions and recall some well known concepts which we later use in the proofs of this paper. \subsection{Order-type homogeneous sets} Any ordered set $X \subset \mathbb R^d$ with the property that the orientation of any ordered subset of $X$ with $(d+1)$ elements is always the same is called an \emph{order-type homogeneous set}. A classic example of such a set is the set of vertices of a cyclic polytope, $X$, which is constructed as follows: consider the moment curve $\gamma(\alpha)=(\alpha, \alpha^2, \ldots, \alpha^d)$; given real numbers $\alpha_1<\alpha_2<\dots<\alpha_n$, define $X=\{\gamma(\alpha_1),\gamma(\alpha_2),\dots,\gamma(\alpha_n)\}.$ The set $\conv(X)$ is the $d$-dimensional \emph{cyclic polytope} on $n$ points, and any other polytope combinatorially equivalent to the cyclic polytope is also sometimes referred to as a cyclic polytope or, more generally, as an order-type homogeneous set.
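Order-type homogeneity of points on the moment curve amounts to the positivity of Vandermonde determinants: the orientation of $(\gamma(\alpha_{i_0}),\dots,\gamma(\alpha_{i_d}))$ with $\alpha_{i_0}<\dots<\alpha_{i_d}$ is the sign of $\prod_{j<k}(\alpha_{i_k}-\alpha_{i_j})>0$. The following exact-arithmetic Python sketch (our own, with hypothetical helper names) verifies this for a small instance:

```python
from fractions import Fraction
from itertools import combinations, permutations

def moment_point(a, d):
    """gamma(a) = (a, a^2, ..., a^d), in exact rational arithmetic."""
    return [Fraction(a) ** k for k in range(1, d + 1)]

def det(m):
    """Leibniz-formula determinant; fine for the tiny matrices used here."""
    n = len(m)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):            # sign = parity of the permutation,
            for j in range(i + 1, n):  # counted via inversions
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def orientation(points):
    """Sign of det [1 p_0; 1 p_1; ...], the standard orientation test."""
    d = det([[Fraction(1)] + p for p in points])
    return (d > 0) - (d < 0)

d, params = 3, [-2, -1, 0, 1, 2, 5]
X = [moment_point(a, d) for a in params]
# every ordered (d+1)-subset of X has the same (positive) orientation
assert all(orientation(list(sub)) == 1 for sub in combinations(X, d + 1))
```

The orientation matrix here is exactly a Vandermonde matrix in the parameters $\alpha_i$, which is why the check succeeds for any increasing choice of parameters.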
Order-type homogeneous sets have been studied extensively \cite{Bis2006,Gal1963,Gr2003,Mat2002,Zie1995} and have proven to be very useful as examples with extremal properties in various combinatorial problems. In our case they will prove useful in finding better bounds for the tolerated Tverberg number $N(d,t,r)$. The following lemma, due to Gale \cite{Gal1963}, is one of the most useful tools for studying the properties of order-type homogeneous sets. \begin{lemma}[Gale's evenness criterion]\label{lem:Gale} Let $X=\{x_1,x_2,\ldots,x_n\}$ be an order-type homogeneous set. A subset $F \subset X$ such that $\card F=d$ determines a facet of $\conv(X)$ if and only if any two vertices in $ X \setminus F$ have an even number of vertices of $F$ between them in the order. \end{lemma} As a consequence of Lemma \ref{lem:Gale}, the polytopes that arise as the convex hulls of order-type homogeneous sets are known to be $\floor{\frac{d}{2}}$-neighborly. That is, writing $C=\conv(X)$, the convex hull of every $\floor{\frac{d}{2}}$ points in $X$ is contained in a facet of $C$ and, since $C$ is simplicial, the convex hull of such vertices is a $(\floor{\frac{d}{2}}-1)$-face of $C$. Another useful fact when working with order-type homogeneous sets is Lemma 2.1 from \cite{BMP2014}: \begin{lemma}\label{lem:ot} An ordered set $X=\{x_1,x_2,\dots, x_n\}$ in general position in $\mathbb R^d$ is order-type homogeneous if and only if the polygonal path $\pi=x_1x_2\dots x_n$ intersects every hyperplane in at most $d$ points, with the exception of the hyperplanes that contain an edge of $\pi$. \end{lemma} \subsection{The Erdős-Szekeres theorem} In 1935 Erdős and Szekeres proved two important theorems in combinatorial geometry \cite{ES1935}. The first Erdős-Szekeres theorem implies that any sequence of numbers with length $(n-1)^2+1$ always contains a monotone (either increasing or decreasing) subsequence of length $n$.
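The first Erdős-Szekeres theorem can be checked exhaustively at the smallest nontrivial scale; the sketch below (our own brute-force check, not part of any cited proof) verifies that all $(3-1)^2+1=5$-term sequences contain a monotone subsequence of length $3$, and that the bound is sharp:

```python
from itertools import permutations

def longest_monotone(seq):
    """Length of the longest increasing or decreasing subsequence,
    via the standard O(n^2) dynamic program for each direction."""
    def lis(s):
        best = [1] * len(s)
        for i in range(len(s)):
            for j in range(i):
                if s[j] < s[i]:
                    best[i] = max(best[i], best[j] + 1)
        return max(best, default=0)
    # a decreasing subsequence of s is an increasing one of -s
    return max(lis(seq), lis([-x for x in seq]))

# (n-1)^2 + 1 points force a monotone subsequence of length n: check n = 3.
assert all(longest_monotone(list(p)) >= 3 for p in permutations(range(5)))
# Sharpness: a 4-term sequence with no monotone subsequence of length 3.
assert longest_monotone([2, 1, 4, 3]) == 2
```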
The second Erdős-Szekeres theorem states that among any $2^{\Theta(n)}$ points in the plane there are $n$ of them in convex position. These two theorems can be thought of as results on order-type homogeneous sets in dimensions $1$ and $2$. The following theorem, proved in \cite{Suk2014,BMP2014}, generalizes both results to order-type homogeneous sets in any $d$-dimensional space. \begin{theorem}\label{thm:Suk} Let $\OT_d(n)$ be the smallest integer such that any set of $\OT_d(n)$ points in general position in $\mathbb R^d$ contains an order-type homogeneous subset of size $n$. Then $\OT_d(n)=\twr_d(\Theta(n))$, where the tower function $\twr_d$ is defined by $\twr_1(\alpha)=\alpha$ and $\twr_{i+1}(\alpha)=2^{\twr_i(\alpha)}$. \end{theorem} \section{Tolerance of partitions of sets}\label{sec:proof} Let $X=\{x_1,x_2,\dots,x_n\}$ be a set of points in $\mathbb R^d$. We define the \emph{tolerance} $t(X,r)$ of $X$ as the maximum number $t$ such that there is a partition $A_1, \ldots, A_r$ of $X$ with the property that $\bigcap_{i=1}^r \conv(A_i\setminus Y) \neq \emptyset$ for any $Y\subset X$ with at most $t$ points. \begin{observation}\label{dumbpigeonholelemma} Let $X_1$, $X_2$ be disjoint sets of points of $\mathbb R^d$. Then $t(X_1\cup X_2,r) \geq t(X_1,r)+t(X_2,r)$.
\end{observation} We also define the following two numbers: $$t(n, d, r)=\min_{\substack{ X \subset \mathbb R^d \\ |X|=n}}\{t(X,r)\} \quad \text{ and } \quad T(n, d, r)=\max_{\substack{X \subset \mathbb R^d \\ |X|=n}}\{t(X,r)\}.$$ For fixed $n$, every set $X$ of $n$ points admits a partition $A_1, \ldots, A_r$ with $\bigcap_{i=1}^r \conv(A_i\setminus Y) \neq \emptyset$ for any $Y\subset X$ with at most $t(n, d, r)$ points, and \emph{some} set of $n$ points admits no better partition; likewise, \emph{some} set $X$ of size $n$ admits a partition tolerating $T(n, d, r)$ removed points, and no set of size $n$ admits a more tolerant one. \subsection{Tolerance of order-type homogeneous sets}\label{sec:alternating} For proving Theorem \ref{thm:main} we need to study a specific type of partitions. Let $X=\{x_1,x_2,\dots,x_n\}$ be an ordered set of points in $\mathbb R^d$ with the order specified by the subindices and let $r>0$ be a fixed integer. The partition of $X$ into $r$ sets $A_1, \ldots, A_r$ given by $A_i=\{x_j : j \equiv i \mod r\}$ is called an \emph{alternating partition}. Our main interest is to determine when the convex hulls of the sets $A_i$ have a common point and how \emph{tolerant} they are. \begin{lemma} \label{lem:alternating} Let $X=\{x_1,x_2,\dots,x_n\}$ be an order-type homogeneous set of points in $\mathbb R^d$ with alternating partition $A_1, \ldots, A_r$. Then there is a number $c(d,r)\le (d+1)(\floor{\frac d2}+1)(r-1)+1\approx \frac{rd^2}{2}$ such that if $n\ge c(d,r)$, then $\bigcap_{i=1}^r \conv(A_i)\neq\emptyset$. \end{lemma} \begin{figure} \caption{An example for Lemma \ref{lem:alternating}.} \label{fig:lem} \end{figure} \begin{proof} Let $O$ be a center point for $X$. This means that every half-space containing $O$ also contains at least $\ceil{\frac{n}{d+1}}$ points of $X$. We will show that $O\in\conv(A_i)$ for every $i$.
Suppose this is not the case. Then there is a hyperplane $H$ strictly separating $O$ from some $\conv(A_i)$. We may assume (by perturbing $H$ if necessary) that no point in $X$ is contained in $H$. Let $H^+$ be the half-space bounded by $H$ that contains $O$. Since $O$ is a center point, $X\cap H^+$ contains at least $\ceil{\frac{n}{d+1}}>\left(\floor{\frac d2}+1\right)(r-1)$ points. On the other hand, by Lemma \ref{lem:ot}, the polygonal path $\pi$ generated by $X$ intersects $H$ at most $d$ times. Therefore $\pi\cap H^+$ has at most $\floor{\frac d2}+1$ connected components and, since $A_i\cap H^+=\emptyset$, each of these components is a sub-path of $\pi$ contained between two consecutive points of $A_i\subset\pi$ (see Figure \ref{fig:lem}). Thus, each component contains at most $r-1$ points from $X$, so $X\cap H^+$ has at most $(\floor{\frac d2}+1)(r-1)$ points. This contradicts our assumption that $O\not\in\conv(A_i)$. \end{proof} The bound for $c(d,r)$ given in the previous lemma is not tight. In fact, it can be improved when $d$ is even by noticing that, if $n\equiv i$ (mod $r$), then $X\cap H^+$ can have at most $\frac d2(r-1)+i$ points. The bound obtained in this case is $c(d,r)\le \min_i \left\{\frac{d(d+1)}2(r-1)+i(d+1)+s_i\right\}$, where $s_i$ is the smallest positive integer such that $s_i\equiv \frac{d(d+1)}2-id$ (mod $r$). When $r$ is large compared to $d$ this simply equals $\frac{d(d+1)}{2}r$. However, this bound is still not tight, giving rise to an interesting open question. \begin{problem} Determine the smallest value for $c(d,r)$ for which Lemma \ref{lem:alternating} holds. \end{problem} The cases $d=1$ and $d=2$ are not difficult. We have the following values: $c(1,r)=2r-1$, $c(2,1)=1$, $c(2,2)=4$ and $c(2,r)=3r$ when $r\ge 3$. Note that the bound from Lemma \ref{lem:alternating} is tight for $d=1$ and the bound described for even dimensions after the proof of the lemma is tight for $d=2$.
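The value $c(1,r)=2r-1$ can be verified directly: for $d=1$ the convex hulls $\conv(A_i)$ are intervals, which share a common point exactly when the largest left endpoint does not exceed the smallest right endpoint. A small Python check of this (our own, depending only on the ordering of the points):

```python
def alternating_intersects(n, r):
    """For n points x_1 < ... < x_n on a line, do the alternating parts
    A_i = {x_j : j = i (mod r)} have a common point?  Since conv(A_i) is
    the interval [min A_i, max A_i], this holds iff
    max_i(min A_i) <= min_i(max A_i).  Only indices matter."""
    parts = [[j for j in range(1, n + 1) if j % r == i % r]
             for i in range(1, r + 1)]
    lo = max(min(p) for p in parts)
    hi = min(max(p) for p in parts)
    return lo <= hi

# c(1, r) = 2r - 1: the alternating partition works for all n >= 2r - 1 ...
assert all(alternating_intersects(n, r)
           for r in range(2, 7) for n in range(2 * r - 1, 4 * r))
# ... and fails at n = 2r - 2, so the value is tight.
assert not any(alternating_intersects(2 * r - 2, r) for r in range(2, 7))
```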
If $r=2$, a simple separating-hyperplane argument shows that $c(d,r)=d+2$. In general it can also be proved that $c(d,r)\ge (d+1)r$ whenever $r>d$, but this is also not tight. The example in Figure \ref{fig:ex} shows that $c(3,4)>16$. \begin{figure} \caption{Four alternating tetrahedra with vertices on the moment curve $(t,t^2,t^3)$ and without a common point. In this example $t$ takes the values $-4$, $-3$, $-2$, $-2$, $-2$, $-1$, $-1$, $-1$, $0$, $1$, $2$, $6$, $6$, $7$, $8$ and $9$, which may be perturbed so that all vertices are distinct.} \label{fig:ex} \end{figure} Now we are ready to study the tolerance of an order-type homogeneous set. The upper bound in the following theorem was proved by Soberón in \cite{Sob2015}, but we include the proof for completeness. \begin{theorem}\label{thm:cotastol} Let $X=\{x_1,x_2,\dots,x_n\}$ be an order-type homogeneous set of points in $\mathbb R^d$. Then $\floor{\frac{n}{r}} - c(d,r) \le t(X,r)\leq \floor{\frac{n}{r}}-\floor{\frac{d}{2}}$, where $c(d,r)$ is the number from Lemma \ref{lem:alternating}. \end{theorem} \begin{proof} First we prove the upper bound for $t(X,r)$. For any partition of $X$ into $r$ disjoint parts $A_1, \ldots, A_r$, the pigeonhole principle gives some $i \in [r]=\{1, \ldots, r\}$ with $\card{A_i}\leq \floor{\frac{n}{r}}$. Let $Y \subset A_i $ be any subset such that $\card{A_i\setminus Y}\leq \floor{\frac{d}{2}}$. Then necessarily $\conv(A_i\setminus Y)$ is disjoint from $\conv(X\setminus A_i)$, as $X$ is the set of vertices of a $\floor{\frac{d}{2}}$-neighborly polytope and therefore $\conv(A_i\setminus Y)$ is a $(\floor{\frac{d}{2}}-1)$-face of the polytope $\conv(X)$. In particular, this implies that $\bigcap_{i=1}^r \conv(A_i \setminus Y) = \emptyset$, hence $t(X,r)\leq \floor{\frac{n}{r}}-\floor{\frac{d}{2}}$. For the lower bound, consider the alternating partition $A_1, \ldots, A_r$ of $X$. Assume that $Y \subset X$ satisfies $\card Y \leq \floor{\frac{n}{r}}-c(d,r)$.
By the pigeonhole principle, we can find $X' \subset X\setminus Y$ such that $\card{X'}= c(d,r)$ and the restriction of the partition of $X$ to $X'$ (i.e. $A_1\cap X', \ldots, A_r \cap X'$) is also an alternating partition. Thus, by Lemma \ref{lem:alternating} we have that $\bigcap_{i=1}^r \conv(A_i \cap X') \neq \emptyset$ and the theorem follows. \end{proof} \subsection{Tolerance of partitions of general sets} The bound $N(d,t,r)\leq (r-1)(t+1)(d+1)+1$ on the tolerated Tverberg number implies that for any set $X$ of $n$ points, its tolerance is bounded by $\frac{n-1}{(r-1)(d+1)}-1 \leq t(X, r).$ On the other hand, the tolerance under any partition of a set can never be greater than the size of the smallest part in the partition, \textit{i.e.} $T(n, d, r)\leq \floor{\frac{n}{r}}.$ The arguments in the previous paragraphs imply that $\frac{n-1}{(r-1)(d+1)}-1 \leq t(n, d, r) \leq t(X,r) \leq T(n, d, r) \leq \floor{\frac{n}{r}}$ holds for any $X \subset \mathbb R^d$ with $\card{X}=n$. In this section we exhibit improved bounds for the tolerance of partitions of general sets. \begin{prop}\label{lem: bound2} For any positive integers $n,d,r$ we have that $T(n,d,r)\leq \floor{\frac{n}{r}}-\floor{\frac{d}{2}}$. \end{prop} \begin{proof} Let $A_1, \ldots, A_r$ be a partition of the set, and let $t'$ be maximum such that $\bigcap_{i=1}^r \conv(A_i\setminus Y) \neq \emptyset$ for any $Y\subset X$ with at most $t'$ points. Then $t'\leq T(n, d, r).$ Let $A_i, A_j$ be parts with $i \neq j$; we may assume that $|A_i \cup A_j|\geq d+2$, otherwise $t'=0$. Then for any subset $D$ of $d$ points in $A_i \cup A_j$, the hyperplane $H=\aff(D)$ satisfies $\card{ H^+ \cap A_i} + \card{H^- \cap A_j} >t'$ and $\card{H^- \cap A_i} + \card{H^+ \cap A_j} >t'$. Hence $\card{A_i} + \card{A_j} -d > 2t'$ and, adding over all the different pairs, $ \sum_{i<j} \left(\card{A_i} + \card{A_j}\right)> \binom{r}{2} (2t'+d)$.
That is, $ (r-1)\sum_{i \in [r]} \card{A_i} > \binom{r}{2} (2t'+d)$ and thus $n >\frac{r}{2}(2t'+d)$. Rearranging the latter inequality we obtain $\frac{n}{r}>t' +\frac{d}{2}$. Therefore $\frac{n}{r}>t' + \floor{\frac{d}{2}}$ and so $\floor{\frac{n}{r}} \geq t' + \floor{\frac{d}{2}}$. \end{proof} \begin{lemma} \label{thm:main2} Let $r,d$ be fixed natural numbers. For a large enough $n$ we have that $t(n,d,r) \geq \frac{n}{r}-o(n)$. \end{lemma} \begin{proof} Fix small $\varepsilon > 0$. We shall construct a large number $n$ satisfying that, for any set $X$ of $n$ points in $\mathbb R^d$, we have $t(X,r) \geq \frac{n}{r}(1-\varepsilon)$. Let $c=c(d,r)$ be as in Lemma \ref{lem:alternating}. Assume that $n = \OT_d(k) + mk$ for some positive integers $m$ and $k$, where $\OT_d$ is the bound from Theorem \ref{thm:Suk}. Then, given a set $X$ of $n$ points in general position in $\mathbb R^d$, we can select $m$ pairwise-disjoint order-type homogeneous subsets $X_1, X_2,\dots, X_m$ of size $k$ from $X$. Partition the points of each $X_i$ into $r$ parts using the alternating method proposed in Section \ref{sec:alternating}. By Theorem \ref{thm:cotastol}, we have that $t(X_i,r) \geq \frac{k}{r} - c$ and therefore, by Observation \ref{dumbpigeonholelemma}, $t(X,r) \geq t(X_1,r)+\dots+t(X_m,r) \geq m\left(\frac kr-c\right)$. We may rewrite this last value as $$m\left(\frac kr-c\right)=\frac nr\left(\frac{mk-mcr}{n}\right)=\frac nr\left(1-\frac{\OT_d(k)+mcr}{\OT_d(k)+mk}\right).$$ By choosing a large enough $k$ so that $\frac{1+cr}{1+k} < \varepsilon$ and $m=\OT_d(k)$, we obtain $t(X,r)\ge\frac nr\left(1-\frac{1+cr}{1+k}\right)>\frac nr(1-\varepsilon)$. \end{proof} \section{Bounds on the Tolerated Tverberg number}\label{sec:remarks} So far we have been concerned with studying the behavior of $t$ with respect to $n, d$ and $r$. By a simple manipulation of the results in the previous section, we may now easily prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}] Fix $r$ and $d$. By Proposition \ref{lem: bound2} we have that $t \leq \floor{\frac{n}{r}}-\floor{\frac{d}{2}}$, which implies $n\ge tr+\frac{r(d-2)}{2}$. Lemma \ref{thm:main2} can be rewritten as $n\leq tr + o(n)$. These inequalities imply $n=\Theta(t)$, so we have that $$rt+\frac{r(d-2)}{2}\leq n \leq rt+ o(t),$$ which yields the result. \end{proof} This result clarifies why the search for the exact value of $N(d,t,r)$ has been elusive. It seems that the relationship between $t$ and $N$ changes as $t$ increases, as opposed to being a constant multiple of $t$ (for fixed $d$ and $r$). From the analysis made in Lemma \ref{thm:main2} it follows that the term $o(t)$ in Theorem \ref{thm:main} behaves like $\frac{t}{\log^{(d)}(t)}$, where $\log^{(d)}$ denotes the $d$-fold iterated logarithm; relative to $t$ this decays extremely slowly. It is our impression that $N(d,t,r)$ approaches $rt$ much faster than this. \end{document}
\begin{document} \title[Generalized Cohen-Macaulay rings]{Small perturbations in generalized Cohen-Macaulay local rings} \author[Pham Hung Quy]{Pham Hung Quy} \address{Department of Mathematics, FPT University, Hanoi, Vietnam} \email{[email protected]} \author{Van Duc Trung} \address{Department of Mathematics, University of Genoa, Via Dodecaneso 35, 16146 Genoa, Italy} \email{[email protected]} \thanks{2020 {\em Mathematics Subject Classification\/}: 13H10, 13D40, 13D45.\\ The work is partially supported by a fund of Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.04-2020.10} \keywords{Hilbert function, Small perturbation, Generalized Cohen-Macaulay ring, Local cohomology} \begin{abstract} Let $(R, \frak m)$ be a generalized Cohen-Macaulay local ring of dimension $d$, and $f_1, \ldots, f_r$ a part of a system of parameters of $R$. In this paper we give explicit numbers $N$ such that the lengths of all lower local cohomology modules and the Hilbert function of $R/(f_1, \ldots, f_r)$ are preserved when we perturb the sequence $f_1, \ldots, f_r$ by $\varepsilon_1, \ldots, \varepsilon_r \in \frak m^N$. The second assertion extends a previous result of Srinivas and Trivedi for generalized Cohen-Macaulay rings. \end{abstract} \maketitle \section{Introduction} This work is inspired by the recent work of the first author with Ma and Smirnov \cite{MQS} about the preservation of Hilbert functions under sufficiently small perturbations, which was in turn inspired by the previous work of Srinivas and Trivedi \cite{ST1}. Small perturbations arise naturally in the study of deformations, where we change the defining equations by adding terms of high order.
In this way we can transform a singularity defined analytically, e.g., as a quotient of a (convergent) power series ring, into an algebraic singularity by truncating the defining equations. This problem was first considered by Samuel in 1956. Let $f \in S = k[[x_1, \ldots, x_d]]$ be a hypersurface with an isolated singularity, i.e. the Jacobian ideal $J(f) = (\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_d})$ is $(x_1,\ldots, x_d)$-primary. Then Samuel proved that for every $\varepsilon \in (x_1,\ldots, x_d) J(f)^2$ there is an automorphism of $S$ that maps $f \mapsto f + \varepsilon$. In particular, Samuel's result asserts that if $f$ has an isolated singularity and $\varepsilon$ lies in a sufficiently large power of $(x_1, \ldots, x_d)$, then the rings $S/(f)$ and $S/(f + \varepsilon)$ are isomorphic. Samuel's result was extended by Hironaka in 1965, who showed that if $S/I$ is an equidimensional reduced isolated singularity, then $S/I \cong S/I'$ for every ideal $I'$ obtained by changing the generators of $I$ by elements of sufficiently large order such that $S/I'$ is still reduced, equidimensional, and of the same height as $I$. The isolated singularity assumption is essential in both theorems of Samuel and Hironaka. For a local ring $(R, \frak m)$ and a sequence of elements $f_1, \ldots, f_r$, instead of requiring the deformation to give isomorphic rings $R/(f_1, \ldots, f_r) \cong R/(f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r)$, we consider a weaker question: what properties and invariants are preserved by a sufficiently small perturbation? For example, Eisenbud \cite{E} showed how to control the homology of a complex under a perturbation and thus showed that the Euler characteristic and depth can be preserved. As an application, if $f_1, \ldots, f_r$ is a regular sequence, then so is the sequence $f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r$, as long as we take a sufficiently small perturbation.
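The phenomenon can be seen in the simplest hypersurface case by brute-force linear algebra (our own illustration, not taken from \cite{MQS} or \cite{ST1}): for $R = k[x,y]$ and $f = x^2$, the lengths $\ell(R/((f) + \frak m^n))$ are unchanged by the high-order perturbation $\varepsilon = y^5$, but change under the low-order perturbation $\varepsilon = x$. The function below computes these lengths as vector-space dimensions, quotienting the monomials of degree $< n$ by all truncations of monomial multiples of $f$:

```python
from fractions import Fraction
from itertools import product

def hilbert_length(g, n):
    """dim_k of k[x,y] / ((g) + m^n), where g is a dict {(a,b): coeff}
    representing a polynomial and m = (x,y).  Brute force: monomials of
    degree < n modulo the span of truncated monomial multiples of g."""
    mons = [m for m in product(range(n), repeat=2) if sum(m) < n]
    idx = {m: i for i, m in enumerate(mons)}
    rows = []
    for m in mons:  # the relation m * g, truncated to degree < n
        row = [Fraction(0)] * len(mons)
        for e, c in g.items():
            shifted = (m[0] + e[0], m[1] + e[1])
            if sum(shifted) < n:
                row[idx[shifted]] += c
        if any(row):
            rows.append(row)
    # Gaussian elimination: rank of the relation space
    rank, col = 0, 0
    while rank < len(rows) and col < len(mons):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                fac = rows[i][col] / rows[rank][col]
                rows[i] = [a - fac * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return len(mons) - rank

f = {(2, 0): Fraction(1)}                            # f = x^2
f_high = {(2, 0): Fraction(1), (0, 5): Fraction(1)}  # f + y^5
f_low = {(2, 0): Fraction(1), (1, 0): Fraction(1)}   # f + x
for n in range(2, 7):
    # high-order perturbation preserves the Hilbert function (= 2n - 1) ...
    assert hilbert_length(f, n) == hilbert_length(f_high, n) == 2 * n - 1
    # ... while the low-order one drops the multiplicity, giving length n
    assert hilbert_length(f_low, n) == n
```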
Huneke and Trivedi \cite{HT} extended this result to filter regular sequences, a generalization of the notion of regular sequence. For numerical invariants, perhaps the most natural direction is to study the behavior of the Hilbert function. Srinivas and Trivedi \cite{ST1} showed that the Hilbert function of a sufficiently small perturbation is at most the original Hilbert function. Furthermore, they proved that the Hilbert functions of $R/(f_1, \ldots, f_r)$ and $R/(f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r)$ coincide under small perturbations provided two conditions hold: (a) $f_1, \ldots, f_r$ is a filter regular sequence; (b) $R/(f_1, \ldots, f_r)$ is generalized Cohen-Macaulay. Recall that $(R, \frak m)$ is generalized Cohen-Macaulay if all lower local cohomology modules $H^i_{\frak m}(R)$, $i < \dim R$, have finite length. Moreover, a generalized Cohen-Macaulay ring is Cohen-Macaulay on the punctured spectrum. Srinivas and Trivedi gave examples showing that condition (a) is essential even if $f_1, \ldots, f_r$ is a part of a system of parameters. However, they asked whether condition (b) is superfluous. \begin{Notation} Let $(R,{\frak m})$ be a Noetherian local ring and $I = (f_1,\ldots,f_r)$ an ideal of $R$. For each $N > 0$ we denote $$C_N(I) = \{ (f_1+\varepsilon_1, \ldots, f_r+\varepsilon_r)\ | \ \varepsilon_i \in {\frak m}^N \}.$$ \end{Notation} Recently, Ma, Smirnov and the first author \cite{MQS} answered the above question of Srinivas and Trivedi affirmatively and proved the following. \begin{Theorem} Let $(R, \frak m)$ be a Noetherian local ring of dimension $d$, and let $I \subseteq R$ be generated by a filter regular sequence $f_1,\ldots,f_r$. Then there exists $N>0$ such that for all $I_N \in C_N(I)$, the Hilbert functions of $R/I$ and $R/I_N$ are equal, i.e.
$$\ell(R/(I + {\frak m}^n)) = \ell(R/(I_N + \frak m^n))$$ for all $n \ge 1$.\footnote{Actually, we proved the result for any ideal $J$ such that $(f_1, \ldots, f_r)+J$ is $\frak m$-primary. Although the main result of this paper can be extended to such ideals, we restrict to the maximal ideal for simplicity.} \end{Theorem} We also asked the following question. \begin{Question}\label{question} Can one obtain explicit bounds on $N$? \end{Question} An affirmative answer in the case $r=1$ was given in \cite[Theorem 3.3]{MQS}. If $R$ is a Cohen-Macaulay local ring of dimension $d$, Srinivas and Trivedi \cite[Proposition 1.1]{ST2} provided a formula for $N$ in terms of the multiplicity for any $r \ge 1$. Namely, we can choose $$N = (d-r)! \, e(R/I) + 2.$$ Inspired by the above formula, one can hope to give a bound for $N$ in any local ring by using an extended degree instead of the multiplicity. See the next section for more details about the notion of extended degree. The aim of the present paper is to give evidence for this belief. We extend the above result of Srinivas and Trivedi to the class of generalized Cohen-Macaulay rings, using the multiplicity and the lengths of the local cohomology modules $H^i_{\frak m}(R)$. Let $(R, \frak m)$ be a local ring and $M$ a generalized Cohen-Macaulay module of dimension $d$. The Buchsbaum invariant of $M$ is defined as follows: $$I(M) = \sum_{i = 0}^{d-1} \binom{d-1}{i} \ell(H^i_{\frak m}(M)).$$ We now present the first main result of this paper. \begin{Theorem}\label{result 1} Let $(R, {\frak m})$ be a generalized Cohen-Macaulay local ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1,\ldots,f_r$ of $R$. Let $s = d-r $, and $$N = s!\, \big(e(R/I) + I(R/I)\big) + (s+1)I(R) + 1.$$ Then for all $I_N \in C_N(I)$ the Hilbert functions of $R/I$ and $R/I_N$ are equal. 
\end{Theorem} Our proof of the above result is inspired by the method of Srinivas and Trivedi in the Cohen-Macaulay case. Let us mention the most important step. If $R$ is Cohen-Macaulay and $J = (x_1, \ldots, x_s)$ is a minimal reduction of $\frak m$ with respect to $R/I$, then we can choose $N$ such that $I + J = I_N + J$ for all $I_N \in C_N(I)$. The strategy of Srinivas and Trivedi was to pass from the Hilbert functions of $R/I$ and $R/I_N$ with respect to $\frak m$ to the Hilbert functions with respect to the parameter ideal $J$, and to use the following well-known fact for Cohen-Macaulay rings: $$\ell (R/(I+J^{n+1})) = \binom{n+s}{s} \ell(R/(I+J)) = \binom{n+s}{s} \ell(R/(I_N+J)) = \ell (R/(I_N+J^{n+1})).$$ For generalized Cohen-Macaulay rings, we also have an explicit formula for the Hilbert function with respect to special parameter ideals, the so-called standard parameter ideals, in terms of the lengths of the lower local cohomology modules (see Theorem \ref{HF of standard}). Therefore we need to control $\ell (H^i_{\frak m}(R/I))$ under sufficiently small perturbations. This is the second main result of this paper. \begin{Theorem}\label{result 2} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1,\ldots,f_r$. Let $N = e(R/I) + I(R) + 1$. Then for all $I_N \in C_N(I)$ we have $$\ell(H^i_{{\frak m}}(R/I)) = \ell(H^i_{{\frak m}}(R/I_N))$$ for every $i < d-r$. \end{Theorem} The paper is organized as follows. In the next section we recall the notation used in this paper. We prove Theorem \ref{result 2} in Section 3. Section 4 is devoted to the proof of Theorem \ref{result 1}. \begin{acknowledgement} This project resulted from a trip by the first author to the University of Genoa; we would like to thank Matteo Varbaro for making that trip possible. 
This paper was written while the first author visited the Vietnam Institute for Advanced Study in Mathematics (VIASM); he would like to thank the VIASM for its kind support and hospitality. The authors are grateful to Professor Maria Evelina Rossi for useful discussions during the first stage of the project. \end{acknowledgement} \section{Preliminaries} Throughout this paper, $(R, \frak m)$ denotes a Noetherian local ring. Let $M$ be a finitely generated $R$-module of dimension $d \geq 1$, and $J$ an ${\frak m}$-primary ideal of $R$. The {\it Hilbert function} of $M$ with respect to $J$ is defined by $$HF_J(M)(n) := \ell(M/J^{n+1}M)$$ for all $n\geq 0$. The Hilbert function of $M$, denoted by $HF(M)$, is the Hilbert function of $M$ with respect to the maximal ideal ${\frak m}$. It is well known that for $n$ sufficiently large the Hilbert function $HF_J(M)$ becomes a polynomial in $n$ of degree $d$, and can be written in the following form $$HF_J(M)(n) = e_0(J,M)\binom{n+d}{d} - e_1(J,M)\binom{n+d-1}{d-1} + \cdots + (-1)^de_d(J,M)$$ for all $n \gg 0$, where the $e_i(J,M)$ are integers, called the Hilbert coefficients of $M$ with respect to $J$. In particular, $e(J,M) = e_0(J, M)$ is called the {\it multiplicity} of $M$ with respect to $J$, and $e(M) = e({\frak m},M)$ is called the multiplicity of $M$. If $M$ is Cohen-Macaulay and $J$ is a parameter ideal, we have $$HF_J(M)(n) = \ell (M/JM) \, \binom{n+d}{d}$$ for all $n \ge 0$. In particular $e(J,M) = \ell (M/JM)$. In general we always have the inequality $e(J,M) \le \ell (M/JM)$ for all parameter ideals $J$ of $M$. \begin{Definition} An $R$-module $M$ is called {\it generalized Cohen-Macaulay} if the difference $\ell (M/JM) - e(J, M)$ is bounded above as $J$ ranges over all parameter ideals. \end{Definition} We next recall some well-known facts from the theory of generalized Cohen-Macaulay modules (see \cite{T}). 
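As a toy check of the Cohen-Macaulay formula above in the simplest case $M = R = k[x_1,\ldots,x_d]$ with $J = \frak m$ (so that $\ell(M/JM) = 1$ and $e(M) = 1$), one can count monomials of total degree at most $n$ directly; the helper below is ours and merely verifies $\ell(R/\frak m^{n+1}) = \binom{n+d}{d}$ by brute force.

```python
from itertools import product
from math import comb

def hilbert_function(d, n):
    # l( k[x_1,...,x_d] / m^{n+1} ) = number of monomials of total degree <= n
    return sum(1 for e in product(range(n + 1), repeat=d) if sum(e) <= n)

# HF_m(R)(n) = binom(n+d, d), i.e. l(M/JM) * binom(n+d, d) with l(M/JM) = 1
for d in range(1, 5):
    for n in range(6):
        assert hilbert_function(d, n) == comb(n + d, d)
```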
\begin{Remark} Let $M$ be an $R$-module of dimension $d$. Then \begin{enumerate} \item $M$ is generalized Cohen-Macaulay if and only if $H^i_{\frak m}(M)$ has finite length for every $i<d$. Moreover, we have $$\ell (M/JM) - e(J, M) \le \sum_{i=0}^{d-1}\binom{d-1}{i} \ell(H^i_{\frak m}(M))$$ for all parameter ideals $J$. The right-hand side is denoted by $I(M)$ and is called the {\it Buchsbaum invariant} of $M$. \item If $M$ is generalized Cohen-Macaulay, then for every part of a system of parameters $x_1, \ldots, x_r$ we have $I(M) \ge I(M/(x_1, \ldots, x_r)M)$. \item If $M$ is generalized Cohen-Macaulay, then every system of parameters is a filter regular sequence of $M$. Recall that $x_1, \ldots, x_t \in \frak m$ is called a {\it filter regular sequence} of $M$ if $$\mathrm{Supp} \big(\frac{(x_1, \ldots, x_{i-1})M: x_i}{(x_1, \ldots, x_{i-1})M} \big) \subseteq \{\frak m\}$$ for all $i = 1, \ldots, t$. \end{enumerate} \end{Remark} \begin{Definition} Let $M$ be a generalized Cohen-Macaulay module of dimension $d$. A parameter ideal $J$ of $M$ is called {\it standard} if $$\ell (M/JM) - e(J, M) = \sum_{i=0}^{d-1} \binom{d-1}{i} \ell(H^i_{\frak m}(M)).$$ \end{Definition} \begin{Remark} Let $M$ be a generalized Cohen-Macaulay module of dimension $d$. Then there exists a positive integer $N$ such that $J$ is standard for every parameter ideal $J \subseteq \frak m^N$. In fact we can choose $N = I(M)$. \end{Remark} The Hilbert function of a standard parameter ideal $J$ can be expressed explicitly as follows (see \cite[Corollary 4.2]{T}). 
\begin{Theorem} \label{HF of standard} $J$ is a standard parameter ideal of $M$ if and only if $$HF_J(M)(n) = \binom{n+d}{d}e(J,M) + \sum_{i=1}^d \sum_{j=0}^{d-i}\binom{n+d-i}{d-i}\binom{d-i-1}{j-1}\ell(H_{{\frak m}}^j(M)),$$ for all $n \geq 0$. \end{Theorem} In order to capture the complexity of modules that are not (generalized) Cohen-Macaulay, Vasconcelos et al. \cite{V1, V2} introduced the notion of extended degree, which generalizes the notion of multiplicity. Let $\mathcal{M}(R)$ be the category of finitely generated $R$-modules. An {\it extended degree} on $\mathcal{M}(R)$ is a numerical function $D(\bullet)$ on $\mathcal{M}(R)$ such that the following properties hold for every $R$-module $M \in \mathcal{M}(R)$: \begin{enumerate} \item $D(M) = D(M/L) + \ell(L)$, where $L = H^0_{\frak m}(M)$, \item $D(M) \geq D(M/xM)$ for a generic element $x$ of ${\frak m}$, \item $D(M) = e(M)$ if $M$ is a Cohen-Macaulay module. \end{enumerate} The prototype of an extended degree is the homological degree defined by Vasconcelos in \cite{V1}. If $R$ is a homomorphic image of a Gorenstein ring $S$ with $\dim S = n$, then the {\it homological degree} of an $R$-module $M$ is defined by $$\operatorname{hdeg}(M):= e(M) + \sum_{i=0}^{d-1}\binom{d-1}{i}\operatorname{hdeg}(\operatorname{Ext}_S^{n-i}(M,S)).$$ Recently, Cuong and the first author \cite{CQ} introduced a new extended degree, called the {\it unmixed degree} and denoted by $\mathrm{udeg}(M)$. The reader is referred to \cite{CQ} for more details about its construction. If $M$ is generalized Cohen-Macaulay we have $$\operatorname{hdeg} (M) = \mathrm{udeg} (M) = e(M) + I(M).$$ We close this section with some lemmas that will be useful for the proofs of the main results. 
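Theorem \ref{HF of standard} can be tested numerically on the standard example of a generalized Cohen-Macaulay ring that is not Cohen-Macaulay: two planes meeting at a point, $R = k[x,y,z,w]/(xz,xw,yz,yw)$, for which $d = 2$, $e(R) = 2$, $\ell(H^0_{\frak m}(R)) = 0$ and $\ell(H^1_{\frak m}(R)) = 1$; since $R$ is Buchsbaum, every parameter ideal is standard, and for $J = (x+z, y+w)$ the theorem predicts $\ell(R/J^{n+1}) = 2\binom{n+2}{2} + (n+1)$. The sketch below (the helper \texttt{length} and the chosen example are ours, for illustration) verifies this with Gr\"obner bases in sympy.

```python
from itertools import product
from math import comb
from sympy import symbols, groebner, expand

x, y, z, w = symbols('x y z w')
planes = [x*z, x*w, y*z, y*w]   # ideal of two planes meeting at the origin

def length(gens, bound):
    """l( k[x,y,z,w]/(gens) ) for an ideal supported only at the origin,
    counting standard monomials with all exponents below `bound`."""
    G = groebner(gens, x, y, z, w, order='grevlex')
    lead = [p.monoms(order='grevlex')[0] for p in G.polys]
    return sum(1 for m in product(range(bound), repeat=4)
               if not any(all(mi >= li for mi, li in zip(m, l)) for l in lead))

# J = (x+z, y+w) is a standard parameter ideal of R; known invariants of this
# example: e(J,R) = 2, l(H^0_m(R)) = 0, l(H^1_m(R)) = 1.
for n in range(3):
    Jpow = [expand((x + z)**i * (y + w)**(n + 1 - i)) for i in range(n + 2)]
    lhs = length(planes + Jpow, bound=2*n + 4)
    rhs = 2*comb(n + 2, 2) + (n + 1)   # formula of the theorem with d = 2
    assert lhs == rhs
```

In particular $\ell(R/J) - e(J,R) = 3 - 2 = 1 = I(R)$, exhibiting the standardness of $J$ directly.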
\begin{Lemma}\label{presever sop} Let $(R, \frak m)$ be a local ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1, \ldots, f_r$. Let $D(\bullet)$ be any extended degree and set $N = D(R/I) + 1$. Then for every $\varepsilon_1, \ldots, \varepsilon_r \in \frak m^{N}$, the elements $f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r$ form a part of a system of parameters of $R$. \end{Lemma} \begin{proof} Let $x_1, \ldots, x_{d-r}$ be a general sequence of elements of $R/I$. Then $$\ell (R/(I + (x_1, \ldots, x_{d-r}))) \le D(R/I) = N-1.$$ This implies that $\frak m^{N-1} \subseteq I + (x_1, \ldots, x_{d-r})$. Therefore for every $\varepsilon_1, \ldots, \varepsilon_r \in \frak m^{N}$ we have $$(f_1, \ldots, f_r,x_1, \ldots, x_{d-r}) = (f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r, x_1, \ldots, x_{d-r}).$$ Hence $f_1 + \varepsilon_1, \ldots, f_r + \varepsilon_r$ is a part of a system of parameters of $R$. \end{proof} It would be desirable to obtain similar results for (filter) regular sequences instead of systems of parameters. Such results, if available, would play an important role in answering Question \ref{question} in the general case. In the main context of this paper $R$ is generalized Cohen-Macaulay, so these notions coincide. We will need the following result \cite[Corollary 3.6]{V1}. \begin{Lemma}\label{reduction} Let $(R, \frak m)$ be a local ring of dimension $d$ with infinite residue field, and let $\operatorname{hdeg}(\bullet)$ be the homological degree. Then there exists a minimal reduction $J$ of $\frak m$ with reduction number $\mathrm{r}_J(\frak m) \le d!\, \operatorname{hdeg}(R) - 1$. \end{Lemma} \begin{Lemma} \label{prevever reduction} Let $(R, \frak m)$ be a local ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1, \ldots, f_r$. 
Let $J$ be a minimal reduction of $\frak m$ in $R/I$, and let $k$ be a non-negative integer such that $\mathrm{r}_J(\frak m , R/I) \le k$. Then for all $I_{k+2} \in C_{k+2}(I)$, $J$ is a minimal reduction of $\frak m$ in $R/I_{k+2}$ and $\mathrm{r}_J(\frak m, R/I_{k+2}) \le k+1$. \end{Lemma} \begin{proof} By assumption we have $\frak m^{k+1} + I = J\frak m^k + I$. Therefore $\frak m^{k+1} \subseteq J + I$. Hence for every $I_{k+2} \in C_{k+2}(I)$ we have $J + I = J + I_{k+2}$. We are going to prove that $\frak m^{k+2} + I_{k+2} = J\frak m^{k+1} + I_{k+2}$. We have \begin{align*} J\frak m^{k+1} + I_{k+2} + \frak m (\frak m^{k+2} + I_{k+2}) &= J\frak m^{k+1} + I_{k+2} + \frak m (\frak m^{k+2} + I)\\ &= J\frak m^{k+1} + I_{k+2} + \frak m (J\frak m^{k+1} + I)\\ &= J\frak m^{k+1} + I_{k+2} + \frak m I\\ &= \frak m( J\frak m^{k} + I)+ I_{k+2}\\ &= \frak m( \frak m^{k+1} + I)+ I_{k+2}\\ &= \frak m^{k+2} + I_{k+2} + \frak m I\\ &= \frak m^{k+2} + I_{k+2}. \end{align*} The last equality follows from the fact that $\frak m I \subseteq \frak m^{k+2} + I = \frak m^{k+2} + I_{k+2}$. By Nakayama's lemma we have $\frak m^{k+2} + I_{k+2} = J\frak m^{k+1} + I_{k+2}$. The proof is complete. \end{proof} \section{Local cohomology under small perturbations} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1, \ldots, f_r$. In this section we provide a positive integer $N$, depending on $e(R/I)$ and $I(R)$, such that for all $I_N \in C_N(I)$ the lengths of $H^i_{{\frak m}}(R/I)$ and $H^i_{{\frak m}}(R/I_N)$ coincide for every $0 \leq i < d-r$. The proof of the main result is by induction on $r$, the length of the sequence $f_1, \ldots, f_r$. First, for the case $r=1$ we have the following proposition. 
\begin{Proposition} \label{one form} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and $f$ a parameter element of $R$. Then for every $\varepsilon \in {\frak m}^{I(R)}$ such that $f + \varepsilon$ is a parameter element of $R$, we have $$\ell(H^i_{{\frak m}}(R/(f))) = \ell(H^i_{{\frak m}}(R/(f+\varepsilon)))$$ for every $i < d-1$. \end{Proposition} \begin{proof} From the short exact sequence $$0 \longrightarrow R/(0:f) \overset{f}{\longrightarrow} R \longrightarrow R/(f) \longrightarrow 0$$ we obtain the short exact sequence $$0 \longrightarrow \frac{H^i_{{\frak m}}(R)}{fH^i_{{\frak m}}(R)} \longrightarrow H^i_{{\frak m}}(R/(f)) \longrightarrow (0:_{H^{i+1}_{{\frak m}}(R)} f)\longrightarrow 0$$ for every $i < d-1$. Similarly we have the short exact sequence $$0 \longrightarrow \frac{H^i_{{\frak m}}(R)}{(f+\varepsilon)H^i_{{\frak m}}(R)} \longrightarrow H^i_{{\frak m}}(R/(f+\varepsilon)) \longrightarrow (0:_{H^{i+1}_{{\frak m}}(R)} (f+\varepsilon))\longrightarrow 0$$ for every $i < d-1$. Since $\varepsilon \in {\frak m}^{I(R)}$ we have $\varepsilon\, H^i_{{\frak m}}(R) = 0$ for all $i <d$. It follows that $(0:_{H^{i+1}_{{\frak m}}(R)} f ) \cong (0:_{H^{i+1}_{{\frak m}}(R)} (f+\varepsilon))$ and $\frac{H^i_{{\frak m}}(R)}{fH^i_{{\frak m}}(R)} \cong \frac{H^i_{{\frak m}}(R)}{(f+\varepsilon)H^i_{{\frak m}}(R)}$ for all $i < d-1$. Hence the above two short exact sequences imply $$\ell(H^i_{{\frak m}}(R/(f))) = \ell(H^i_{{\frak m}}(R/(f+\varepsilon)))$$ for all $i < d-1$. \end{proof} We now present the main result of this section. \begin{Theorem} \label{lc} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1, \ldots, f_r$. Let $N = e(R/I) + I(R) + 1$. Then for all $I_N \in C_N(I)$ we have $$\ell(H^i_{{\frak m}}(R/I)) = \ell(H^i_{{\frak m}}(R/I_N))$$ for every $i < d-r$. 
\end{Theorem} \begin{proof} Without loss of generality we may assume that the residue field is infinite. We proceed by induction on $r$. For $r=1$, we have $$N \geq e(R/(f_1)) + I(R/(f_1)) + 1 = \operatorname{hdeg}(R/(f_1)) + 1.$$ So, by Lemma \ref{presever sop}, $f_1+\varepsilon_1$ is a parameter element for every $\varepsilon_1 \in {\frak m}^N$. Hence we are done by Proposition \ref{one form}. Now let $r > 1$ and $I_N = (f_1+\varepsilon_1,\ldots,f_r+\varepsilon_r)$, where $\varepsilon_1,\ldots,\varepsilon_r \in {\frak m}^N$. Let $R_1 = R/(f_1)$. For simplicity, we identify each $f_i$ with its image in $R_1$. Since $N \geq e(R_1/({f_2},\ldots,{f_r})R_1) + I(R_1) + 1$ and $\varepsilon_2,\ldots,\varepsilon_r \in {\frak m}^N$, by induction we get $$\ell(H^i_{{\frak m}}(R/(f_1,f_2,\ldots,f_r))) = \ell(H^i_{{\frak m}}(R/(f_1,f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r)))$$ for every $i < d-r$. Since $N \geq \operatorname{hdeg}(R/I)+1$ and $\varepsilon_1,\ldots,\varepsilon_r \in {\frak m}^N$, by Lemma \ref{presever sop} both $f_1,f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r$ and $f_1+\varepsilon_1,f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r$ are parts of systems of parameters of $R$. Let $R_2 = R/(f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r)$; then $f_1$ and $f_1+\varepsilon_1$ are parameter elements of $R_2$. Moreover, since $N \geq I(R_2)$ and $\varepsilon_1 \in {\frak m}^N$, by Proposition \ref{one form} we get $$\ell(H^i_{{\frak m}}(R_2/f_1R_2)) = \ell(H^i_{{\frak m}}(R_2/(f_1+\varepsilon_1)R_2))$$ for every $i < d-r$. That is, $$\ell(H^i_{{\frak m}}(R/(f_1,f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r))) = \ell(H^i_{{\frak m}}(R/(f_1+\varepsilon_1,f_2+\varepsilon_2,\ldots,f_r+\varepsilon_r)))$$ for every $i < d-r$. Hence we obtain the desired assertion. The proof is complete. \end{proof} It is natural to ask the following question in the general case. 
\begin{Question} Let $(R, \frak m)$ be a local ring and let $I \subseteq R$ be generated by a filter regular sequence $f_1, \ldots, f_t$. Does there exist a positive integer $N$ such that for all $I_N \in C_N(I)$ we have $$\ell (H^0_{\frak m}(R/I)) = \ell (H^0_{\frak m}(R/I_N))?$$ \end{Question} \section{Hilbert function under small perturbations} In this section, let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1,\ldots,f_r$. We will find an explicit positive integer $N$, depending on $\operatorname{hdeg}(R/I)$ and $I(R)$, such that for all $I_N \in C_N(I)$ the Hilbert functions of $R/I$ and $R/I_N$ coincide. The following lemma is a special case of Lemma \ref{reduction} and Lemma \ref{prevever reduction}. \begin{Lemma}\label{reduction gCM} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$ with infinite residue field, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1,\ldots,f_r$. Let $s = d-r$, and $k = s!\, \operatorname{hdeg}(R/I)+1$. Then there exists a minimal reduction $J$ of ${\frak m}$ in $R/I$ such that \begin{enumerate} \item ${\frak m}^{k+m} + I = J^{m+1}{\frak m}^{k-1}+ I$ for all $m \geq 0$. \item For every $I_k \in C_k(I)$ one has ${\frak m}^{k+m} + I_k = J^{m+1}{\frak m}^{k-1}+ I_k$ for all $m \geq 0$. \end{enumerate} \end{Lemma} \begin{proof} By Lemma \ref{reduction} there exists a minimal reduction $J=(x_1,\ldots,x_{d-r})$ of $\frak m$ in $R/I$ such that $\mathrm{r}_J(\frak m , R/I) \le (d-r)!\, \operatorname{hdeg}( R/I) - 1 = k - 2$. Hence ${\frak m}^k + I = J{\frak m}^{k-1}+I$. By Lemma \ref{prevever reduction}, $J$ is a minimal reduction of ${\frak m}$ in $R/I_k$ and $\mathrm{r}_J(\frak m,R/I_k) \le k-1$. Hence ${\frak m}^k + I_k = J{\frak m}^{k-1} + I_k$. 
Therefore, for all $m \ge 0$ we have $\frak m^{k+m} R/I = J^{m+1} \frak m^{k-1} R/I$ and $\frak m^{k+m} R/I_k = J^{m+1} \frak m^{k-1} R/I_k$. The claims are now clear. \end{proof} The following theorem is the main result of this section. It extends the result of Srinivas and Trivedi \cite[Proposition 1.1]{ST2} to generalized Cohen-Macaulay rings. \begin{Theorem} Let $(R,{\frak m})$ be a generalized Cohen-Macaulay ring of dimension $d$, and let $I \subseteq R$ be generated by a part of a system of parameters $f_1,\ldots,f_r$. Let $s = d-r $, and $$N = s!\, \operatorname{hdeg}(R/I) + (s+1)I(R) + 1.$$ Then for all $I_N \in C_N(I)$ we have $$HF(R/I) = HF(R/I_N).$$ \end{Theorem} \begin{proof} Without loss of generality we may assume that the residue field is infinite. Let $k = s!\, \operatorname{hdeg}(R/I) + 1$. By Lemma \ref{reduction gCM} there exists an ideal $J = (x_1,\ldots,x_s) \subseteq {\frak m}$ such that $${\frak m}^{k+m} + I = J^{m+1}{\frak m}^{k-1}+I$$ for all $m \geq 0$. Moreover, since $I_N \in C_N(I) \subseteq C_k(I)$ we also have $${\frak m}^{k+m} + I_N = J^{m+1}{\frak m}^{k-1}+I_N$$ for all $m \geq 0$. Let $t= \max\{I(R),1\}$. For all $0 \le i \le t-1$ set $N_i= k + s(t-1) + i$. We have $N_i \le N$ for all $i \le t-1$. Since $I_N \in C_N(I) \subseteq C_{N_0}(I)$ we have $${\frak m}^i + I = {\frak m}^i + I_N$$ for all $ i \leq N_0$. Therefore, it is enough to prove that $$\ell \left(\frac{R}{{\frak m}^n+I}\right) = \ell \left(\frac{R}{{\frak m}^n+I_N}\right) \quad \quad (1)$$ for all $n \geq N_0.$ We will prove this in the following equivalent form: $$\ell \left(\frac{R}{{\frak m}^{N_i + mt}+I}\right) = \ell \left(\frac{R}{{\frak m}^{N_i + mt}+I_N}\right) \quad \quad (2)$$ for all $0 \le i \le t-1$ and all $m \ge 0$. 
Set $J' = (x_1^t,\ldots,x_s^t)$. Then $J' \subseteq \frak m^t$ is a standard parameter ideal of $R/I$, and $J' J^{(s-1)(t-1)} = J^{s(t-1)+1}$. \vskip 0.2cm \noindent {\bf Claim 1.} For all $0 \le i \le t-1$ and all $m \ge 0$ we have $${\frak m}^{N_i+mt} + I = J'^{m+1}{\frak m}^{N_i-t}+I, $$ and $${\frak m}^{N_i+mt} + I_N = J'^{m+1}{\frak m}^{N_i-t} + I_N$$ for all $I_N \in C_N(I)$. \begin{proof}[Proof of Claim 1] We have \begin{eqnarray*} \frak m^{N_i + mt}R/I &=& J^{s(t-1)+mt+i+1}\frak m^{k-1}R/I\\ &=& J'^{m+1} J^{(s-1)(t-1) + i }\frak m^{k-1}R/I\\ & =& J'^{m+1} \frak m^{N_i-t}R/I \end{eqnarray*} for all $m \ge 0$. Therefore $${\frak m}^{N_i+mt} + I = J'^{m+1}{\frak m}^{N_i-t}+I$$ for all $0 \le i \le t-1$ and all $m \ge 0$. The second assertion can be proved similarly. The claim is proved. \end{proof} By Claim 1, in order to prove equality (2) it is enough to show $$\ell \left(\frac{R}{J'^{m+1}{\frak m}^{N_i-t}+I}\right) = \ell \left(\frac{R}{J'^{m+1}{\frak m}^{N_i-t}+I_N}\right) \quad \quad (3)$$ for all $0 \le i \le t-1$ and all $m \ge 0$. On the other hand, since $N \ge e(R/I) + I(R) + 1$ we have $$\ell(H^i_{{\frak m}}(R/I)) = \ell(H^i_{{\frak m}}(R/I_N))$$ for all $i<s$ by Theorem \ref{lc}. We also have $I + J' = I_N + J'$ since $I + J' \supseteq {\frak m}^{N-1}$. Hence $J'$ is a standard parameter ideal of $R/I_N$ and $e(J',R/I_N)= e(J',R/I)$. By Theorem \ref{HF of standard} we have $$\ell \left(\frac{R}{J'^{m+1}+I}\right) = \ell \left(\frac{R}{J'^{m+1} +I_N}\right) \quad \quad (4)$$ for all $m \ge 0$. Therefore in order to prove equality (3) it is sufficient to prove that $$\ell \left(\frac{J'^{m+1}+I}{J'^{m+1}{\frak m}^{N_i-t}+I}\right) = \ell \left(\frac{J'^{m+1} +I_N}{J'^{m+1}{\frak m}^{N_i-t}+I_N}\right) \quad \quad (5)$$ for all $0 \le i \le t-1$ and all $m \ge 0$. Let $K' = I + J' = I_N + J'$. 
We will prove (5) in the following equivalent form: $$\ell \left(\frac{K'^{m+1}+I}{K'^{m+1}{\frak m}^{N_i-t}+I}\right) = \ell \left(\frac{K'^{m+1} +I_N}{K'^{m+1}{\frak m}^{N_i-t}+I_N}\right) \quad \quad (6)$$ for all $0 \le i \le t-1$ and all $m \ge 0$. \vskip 0.2cm \noindent {\bf Claim 2.} For all $m \ge 0$ we have $K'^{m+1} \cap I = IK'^m$ and $K'^{m+1} \cap I_N = I_NK'^m$. \begin{proof}[Proof of Claim 2] Notice that $x_1^t,\ldots,x_s^t$ forms a $d$-sequence on $R/I$, so by \cite[Theorem 2.1]{H} we have $$J'^{m+1} \cap I \subseteq J'^m I.$$ Hence, for all $m\geq0$ we have \begin{align*} K'^{m+1} \cap I &= (I+J')^{m+1} \cap I\\ &= (IK'^m + J'^{m+1}) \cap I\\ &= IK'^m + J'^{m+1} \cap I\\ &= IK'^m. \end{align*} The second assertion can be proved similarly. \end{proof} We continue the proof of the theorem. From Claim 2 we have \begin{align*} K'^{m+1}{\frak m}^{N_i-t}+K'^{m+1} \cap I &= K'^{m+1}{\frak m}^{N_i-t} + K'^mI\\ &= K'^m(K'{\frak m}^{N_i-t} + I)\\ &= K'^m(J'{\frak m}^{N_i-t}+ I)\\ &= K'^m({\frak m}^{N_i}+I)\\ &= K'^m({\frak m}^{N_i}+I_N)\\ &= K'^m(J'{\frak m}^{N_i-t} + I_N)\\ &= K'^m(K'{\frak m}^{N_i-t} + I_N)\\ &= K'^{m+1}{\frak m}^{N_i-t} + K'^{m+1} \cap I_N \end{align*} for all $0 \le i \le t-1$ and all $m \ge 0$. Therefore \begin{eqnarray*} \frac{K'^{m+1}+I}{K'^{m+1}{\frak m}^{N_i-t}+I} & \cong& \frac{K'^{m+1}}{K'^{m+1}{\frak m}^{N_i-t}+K'^{m+1}\cap I}\\ &\cong & \frac{K'^{m+1}}{K'^{m+1}{\frak m}^{N_i-t}+K'^{m+1}\cap I_N}\\ &\cong & \frac{K'^{m+1} +I_N}{K'^{m+1}{\frak m}^{N_i-t}+I_N} \end{eqnarray*} for all $0 \le i \le t-1$ and all $m \ge 0$. Equality (6) is now clear. The proof is complete. \end{proof} We close the paper with the following remark. 
\begin{Remark} If $R$ is Cohen-Macaulay, our formula $N = s!\, e(R/I) + 1$ slightly improves the formula of Srinivas and Trivedi. If $R$ is generalized Cohen-Macaulay but not Cohen-Macaulay, according to the proof we can choose $$N = N_{t-1} = s!\, \operatorname{hdeg}(R/I) + (s+1)I(R)-s.$$ \end{Remark} \begin{thebibliography}{1} \bibitem{CQ} N.T. Cuong and P.H. Quy, {\em On the structure of finitely generated modules over quotients of Cohen-Macaulay local rings}, arXiv:1612.07638. \bibitem{E} D. Eisenbud, {\em Adic approximation of complexes, and multiplicities}, Nagoya Math. J. {\bf 54}, 61--67 (1974). \bibitem{H} C. Huneke, {\em The theory of $d$-sequences and powers of ideals}, Adv. Math. {\bf 46}, 249--279 (1982). \bibitem{HT} C. Huneke and V. Trivedi, {\em The height of ideals and regular sequences}, Manuscripta Math. {\bf 93}, 137--142 (1997). \bibitem{MQS} L. Ma, P.H. Quy and I. Smirnov, {\em Filter regular sequences under small perturbations}, Math. Ann., to appear. \bibitem{ST1} V. Srinivas and V. Trivedi, {\em The invariance of Hilbert functions of quotients under small perturbations}, J. Algebra {\bf 186}, 1--19 (1996). \bibitem{ST2} V. Srinivas and V. Trivedi, {\em A finiteness theorem for the Hilbert functions of complete intersection local rings}, Math. Z. {\bf 225}, 543--558 (1997). \bibitem{T} N.V. Trung, {\em Toward a theory of generalized Cohen-Macaulay modules}, Nagoya Math. J. {\bf 102}, 1--49 (1986). \bibitem{V1} W.V. Vasconcelos, {\em The homological degree of a module}, Trans. Amer. Math. Soc. {\bf 350}, 1167--1179 (1998). \bibitem{V2} W.V. Vasconcelos, {\em Cohomological degrees of graded modules}, Six lectures on commutative algebra (Bellaterra, 1996), 345--392, Progr. Math. 166, Birkh\"{a}user, Basel (1998). \end{thebibliography} \end{document}
\begin{document} \title[On $C^*$-algebras associated to right LCM semigroups]{On $C^*$-algebras associated to right LCM semigroups} \author{Nathan Brownlowe} \address{School of Mathematics and Applied Statistics \\ University of Wollongong \\ Australia} \email{[email protected]} \author{Nadia S.~Larsen} \address{Department of Mathematics \\ University of Oslo \\ P.O. Box 1053, Blindern \\ 0316 Oslo \\ Norway} \email{[email protected]} \author{Nicolai Stammeier} \address{Mathematisches Institut, Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster \\ Germany} \email{[email protected]} \thanks{Part of this research was carried out while all three authors participated in the workshop ``Operator algebras and dynamical systems from number theory'' in November 2013 at the Banff International Research Station, Canada. We thank BIRS for its hospitality and excellent working environment. The third author was supported by DFG through SFB $878$ and by ERC through AdG $267079$.} \begin{abstract} We initiate the study of the internal structure of $C^*$-algebras associated to a left cancellative semigroup $S$ in which any two principal right ideals are either disjoint or intersect in another principal right ideal; these are variously called right LCM semigroups or semigroups that satisfy Clifford's condition. Our main findings are results about uniqueness of the full semigroup $C^*$-algebra. We build our analysis upon a rich interaction between the group of units of the semigroup and the family of constructible right ideals. As an application we identify algebraic conditions on $S$ under which $C^*(S)$ is purely infinite and simple. \end{abstract} \date{22 June 2014. Revised on 24 November 2014.} \maketitle \section{Introduction}\label{sec: intro} \noindent In recent years, $C^*$-algebras associated to semigroups have received much attention due to the range of new examples and interesting applications that they encompass. 
One such application is to the connections between operator algebras and number theory, which have grown deeper since Cuntz's work in \cite{Cun1} on the $C^*$-algebra $\mathcal{Q}_\mathbb{N}$ associated to the affine semigroup $\mathbb{N}\rtimes\mathbb{N}^\times$ over the natural numbers. Laca and Raeburn \cite{LacR2} continued the analysis of $C^*$-algebras associated to $\mathbb{N}\rtimes\mathbb{N}^\times$ by examining the Toeplitz algebra $\mathcal{T}(\mathbb{N}\rtimes\mathbb{N}^\times)$, including an analysis of its KMS structure. Cuntz, Deninger and Laca \cite{CDL} have since examined the KMS structure of Toeplitz-type $C^*$-algebras associated to $ax+b$-semigroups $R\rtimes R^\times$ of rings of integers $R$ in number fields. Li has recently defined $C^*$-algebras associated to left cancellative semigroups $S$ with identity, and initiated a study of when certain naturally arising $*$-homomorphisms are injective \cite{Li1, Li2}. The reduced $C^*$-algebra $C_r^*(S)$ associated to $S$ is defined by means of the left regular representation of $S$ on the Hilbert space $\ell^2(S)$. The full $C^*$-algebra $C^*(S)$ is defined to be the universal $C^*$-algebra generated by isometries and projections, subject to certain relations which are imposed by the regular representation. For certain classes of semigroups, the canonical isomorphism between the full and reduced semigroup $C^*$-algebras was established in \cite{Li1, Li2, No0}. In \cite{BRRW}, the authors studied the full semigroup $C^*$-algebra arising from an algebraic construction called a Zappa--Sz\'ep product of semigroups. The resulting semigroups display ordering features similar to the quasi-lattice ordered semigroups introduced by Nica \cite{Ni}, but by contrast contain a non-trivial group of units. 
These semigroups were called right LCM (for least common multiples) in \cite{BRRW}, and we shall henceforth use this terminology, but mention that in \cite[\S 4.1]{Law2} and \cite{No0} these are known as semigroups that satisfy Clifford's condition. The class of right LCM semigroups is pleasantly large and includes quasi-lattice ordered semigroups, certain semidirect products of semigroups, and also semigroups that model self-similar group actions, see \cite{Law1, LW, BRRW}. In the present work we begin a study of the internal structure of $C^*$-algebras associated to right LCM semigroups. The main thrust of our work is that when $S$ is a right LCM semigroup one may unveil the internal structure of $C^*(S)$ and answer questions about its uniqueness by carefully analysing the relationship between the group of units $S^*$ and the constructible right ideals of $S$. The problem of finding good criteria for injectivity of $*$-homomorphisms on $C^*(S)$, and in particular of deciding uniqueness of such $C^*$-algebras, is at the moment not settled in the generality of left cancellative semigroups. A powerful method to prove injectivity of $*$-representations was developed by Laca and Raeburn in \cite[Theorem 3.7]{LaRa} for $C^*(S)$ with $(G, S)$ quasi-lattice ordered. Their work recast Nica's $C^*$-algebras associated to quasi-lattice ordered groups in \cite{Ni} by viewing them as $C^*$-crossed products by semigroups of endomorphisms. Based on this realisation, they adapted a technique introduced by Cuntz in \cite{Cun} which involves a conditional expectation onto a diagonal subalgebra. There are new technical obstacles to be overcome when dealing with a semigroup $S$ that has a non-trivial group of units. In particular, not all of Laca and Raeburn's programme can be carried through beyond the case of quasi-lattice ordered pairs. 
One challenge is that the diagonal subalgebra of $C^*(S)$, denoted $\mathcal{D}$ in \cite{Li1}, may be too small to accommodate the range of a conditional expectation from $C^*(S)$, cf. an observation made in \cite{No0}. Furthermore, generating isometries in $C^*(S)$ that correspond to elements from the group of units $S^*$ give rise to unitaries. These unitaries together with the generating projections from $\mathcal{D}$ yield two new subalgebras of $C^*(S)$ whose role in explaining the structure of $C^*(S)$ is yet to be fully understood. Our initial approach was to push the Laca-Raeburn strategy to its fullest extent for an arbitrary right LCM semigroup $S$, with or without an identity. It soon became evident that the presence of non-trivial units in $S^*$ makes it unlikely that \cite[Theorem 3.7]{LaRa} will extend in the greatest generality to right LCM semigroups. However, by carefully analysing the action of the group of units $S^*$ on the constructible right ideals $\mathcal{J}(S)$ of $S$ we are able to identify conditions on $S$ which ensure that injectivity of $*$-homomorphisms on $C^*(S)$ can be characterised on $\mathcal{D}$. This approach has led us to find conditions on a right LCM semigroup $S$ which ensure that $C^*(S)$ is purely infinite and simple. The examples we have of such semigroups belong to a class of semidirect products $G\rtimes_\theta P$ of a group $G$ by an injective endomorphic action $\theta$ of a semigroup $P$. $C^*$-algebras associated to such semidirect products where $P=\mathbb{N}$ were studied by Cuntz and Vershik in \cite{CV}, and by Vieira in \cite{Vie}. Our $C^*(G\rtimes_\theta P)$ may be interpreted as higher-dimensional versions of those $C^*$-algebras. We mention that K-theory and internal structure of $C^*$-algebras associated to $ax+b$-semigroups of certain integral domains were analysed recently by Li, see \cite{Li3}. The organisation of the paper is as follows. 
In Section~\ref{sec: background} we collect some standard results about semigroups. We also introduce our conventions on semidirect product semigroups, and identify an abstract characterisation of the examples of interest $G\rtimes_\theta P$. Section~\ref{sec: right LCM algebras} contains an introduction to right LCM semigroups, and their associated full and reduced $C^*$-algebras. Since we do not assume that $S$ necessarily contains an identity element, we explain how the definitions of $C_r^*(S)$ and $C^*(S)$ from \cite{Li1} can be adapted to this slightly more general situation. In the same section we introduce the distinguished subalgebras of interest, which are built out of $\mathcal{D}$ and the unitaries coming from the group of units $S^*$. We also discuss conditional expectations onto the diagonal subalgebras of $C^*(S)$ and $C_r^*(S)$. Our first findings about injectivity of a $*$-homomorphism on $C^*(S)$ are the subject of Section~\ref{sec: diagonalsection}. We show in Theorem~\ref{thm: using the diagonal} that injectivity can be phrased as a nonvanishing condition involving projections from $\mathcal{D}$, similar to \cite[Theorem 3.7]{LaRa}, when the semigroup $S$ has at most an identity element as unit, or, in the presence of non-trivial units, satisfies a technical condition on the left action of $S^*$ on the space $\mathcal{J}(S)$. In Section~\ref{section:pisimple} we identify a number of conditions on a right LCM semigroup $S$ which imply that $C^*(S)$ is purely infinite and simple. These conditions include a characterisation of the left action of $S^*$ on $\mathcal{J}(S)$ which is a refined version of an effective action; we call this notion strongly effective. In a short Section~\ref{section:injregular} we discuss injectivity of the canonical surjection from $C^*(S)$ onto $C_r^*(S)$ and illustrate this with semigroups of the form $G\rtimes_\theta P$. 
Section~\ref{section:usingcore} initiates the study of injectivity of $*$-homomorphisms on $C^*(S)$ phrased in terms of a core subalgebra that is built from $\mathcal{D}$ and the unitaries corresponding to the group of units $S^*$ in $S$. The final section, Section~\ref{sec: examples}, is devoted to applications. Here we discuss the validity of the properties of right LCM semigroups introduced in Sections~\ref{sec: diagonalsection} and \ref{section:pisimple}. The main class of examples is that of semidirect products of the form $G\rtimes_\theta P$, and via Theorem~\ref{thm:gxp p.i. simple} we provide examples of purely infinite simple $C^*(S)$ from this class. We also take the opportunity to examine the Zappa-Sz\'ep product semigroups $X^*\bowtie G$ coming from self-similar actions $(G,X)$ as considered in \cite{Law1, LW, BRRW}; in particular, we examine some of the properties of semigroups introduced in the paper. While at this stage we cannot apply our $C^*$-algebraic results to this class of semigroups, we plan to examine these problems in further work. We thank the referee for suggesting many improvements to the presentation. \section{Some results on semigroups}\label{sec: background} \noindent By a semigroup $S$ we understand a non-empty set $S$ with an associative operation. We refer to \cite{Cli-Pre} and \cite{Lal} for basic properties of semigroups. Semigroups with an identity element for the operation are known as monoids. Here we shall use the terminology semigroup, and specify existence of an identity when this is the case. All semigroups considered in this work are discrete. A semigroup $S$ is {\em left cancellative} if $pq=pr$ implies $q=r$ for all $p,q,r\in S$; {\em right cancellative} if $qp=rp$ implies $q=r$ for all $p,q,r\in S$; and {\em cancellative} if it is both left and right cancellative. Given a semigroup $S$ with identity $1_S$, an element $x$ in $S$ is invertible if there is $y\in S$ such that $xy=yx=1_S$. 
We denote by $S^*$ the group of invertible elements of $S$ (also called the group of units of $S$). We shall write $S^*\neq \emptyset$ in case the group of units is non-empty (possibly consisting only of the identity element), and we write $S^*=\emptyset$ otherwise. If $S$ is cancellative and $x\in S^*$, then $x^{-1}$ will denote the inverse of $x$. The Green relations on a semigroup are well-known, see for example \cite[Chapter 2]{Lal}. The left Green relation $\mathcal{L}$ is $a\mathcal{L}b$ if and only if $Sa=Sb$ for $a,b\in S$. Likewise, the right Green relation $\mathcal{R}$ is given by $a\mathcal{R}b$ if and only if $aS=bS$ for $a, b\in S$. Suppose that $S$ is a semigroup with $S^* \neq \emptyset$. Since $Sx=S$ whenever $x\in S^*$, we see that $a=xb$ for some $x\in S^*$ implies that $a\mathcal{L}b$. If $S$ is right cancellative, the reverse implication holds and, moreover, the element $x$ in $S^*$ is unique. Indeed, let $Sa=Sb$. Then there are $c,d\in S$ such that $b=ca$ and $a=db$, so $b=cdb$ and $a=dca$. Thus right cancellation implies $cd=1_S=dc$, showing that $c,d\in S^*$. If right cancellation is replaced with left cancellation in the previous considerations, then $a\mathcal{R}b$ is the same as $a=by$ for a unique $y\in S^*$. If $S^*= \emptyset$, we will assume throughout this paper that $S$ has the following property: if $a,b \in S$ satisfy $aS = bS$, then $a=b$. This is what happens in the case that $S^* = \{1_S\}$. Given a semigroup $S$, a right ideal $R$ is a non-empty subset of $S$ such that $RS\subseteq R$. The \emph{principal right ideals} of $S$ are all the right ideals of the form $pS:=\{ps\mid s \in S\}$ for $p\in S$. Given a principal right ideal $pS$, an element $r\in pS$ is called a \emph{right multiple} of $p$. The right ideal generated by $p\in S$ is defined as $\{p\}\cup pS$; we shall denote it $\langle p\rangle$. \begin{remark}\label{rmk:ideal-and-regular} If $S$ has an identity it is clear that $pS=\langle p\rangle$. 
For an arbitrary left cancellative semigroup $S$ and $p\in S$, a sufficient condition to have $pS=\langle p\rangle$ is that there is an idempotent $t\in S$, i.e. $t=tt$, such that $p=pt$. Note that if $p$ is a regular element of $S$, in the sense that there is $s\in S$ such that $p=psp$, then $t=sp$ is an idempotent such that $p=pt$. Thus $p\in pS$ whenever $p$ is a regular element in a semigroup $S$. \end{remark} \begin{definition}\label{def:right-LCM-sgp} A semigroup $S$ is {\em right LCM} if it is left cancellative and every pair of elements $p$ and $q$ with a right common multiple has a right least common multiple $r$. \end{definition} \noindent It is clear that a semigroup $S$ is right LCM if it is left cancellative and for any $p, q$ in $S$, the intersection of principal right ideals $pS\cap qS$ is either empty or of the form $rS$ for some $r\in S$. This property of semigroups is called {\em Clifford's condition} in \cite[\S 4.1]{Law2} and \cite{No0}. In general, right least common multiples are not unique: if $r$ is a right least common multiple of $p$ and $q$, then so is $rx$ for any $x\in S^*$. The quasi-lattice ordered groups treated in \cite{Ni} are examples of right LCM semigroups with unique right least common multiples. We discuss other examples in Section~\ref{sec: examples}. The main class of examples of semigroups that is considered in the present work is that of semidirect product semigroups. We introduce next our conventions for a semidirect product of semigroups. For a semigroup $T$ we let $\operatorname{End}{T}$ denote the semigroup of all homomorphisms $T\to T$. The identity endomorphism is $\operatorname{id}_T$. An action $P \stackrel{\theta}{\curvearrowright} T$ of a semigroup $P$ on $T$ is a homomorphism $\theta:P\to \operatorname{End}{T}$, i.e. $\theta_p\theta_q=\theta_{pq}$ for all $p,q\in P$. If $T$ has an identity $1_T$, we shall require that $\theta_p(1_T)=1_T$ for all $p\in P$. 
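To keep these axioms concrete, here is a minimal computational sketch (our illustration, not part of the paper's development): the multiplicative semigroup $P=\mathbb{N}^\times$ acts on the additive group $T=\mathbb{Z}$ by the injective endomorphisms $\theta_p(g)=pg$, and the action axiom $\theta_p\theta_q=\theta_{pq}$ can be spot-checked on samples.

```python
# Sketch (our illustration): the multiplicative semigroup P = {1, 2, 3, ...}
# acts on the additive group T = Z by theta_p(g) = p*g, an action by
# injective endomorphisms in the sense above.

def theta(p):
    """The endomorphism theta_p of (Z, +): multiplication by p."""
    return lambda g: p * g

for p in range(1, 8):
    for q in range(1, 8):
        # theta is a homomorphism P -> End(T): theta_p o theta_q = theta_{pq}.
        for g in range(-10, 11):
            assert theta(p)(theta(q)(g)) == theta(p * q)(g)

for p in range(1, 8):
    # Each theta_p respects addition, fixes the identity 0 of T,
    # and is injective since p >= 1.
    for g in range(-10, 11):
        for h in range(-10, 11):
            assert theta(p)(g + h) == theta(p)(g) + theta(p)(h)
    assert theta(p)(0) == 0
    assert len({theta(p)(g) for g in range(-10, 11)}) == 21

print("action axioms hold on all samples")
```

With this action, the semidirect product construction that follows specialises to the $ax+b$-semigroup $\mathbb{Z}\rtimes_\theta\mathbb{N}^\times$, with composition $(s,p)(t,q)=(s+pt,pq)$.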
In case $P$ has an identity $1_P$, we shall further require that $\theta_{1_P}$ is the identity endomorphism of $T$. \begin{definition}\label{def:sdps} Let $T, P$ be semigroups and $P \stackrel{\theta}{\curvearrowright} T$ an action. The \emph{semidirect product} of $T$ by $P$ with respect to $\theta$, denoted $T{\rtimes_\theta}P$, is the semigroup $T\times P$ with composition given by \[ (s,p)(t,q) = (s\theta_{p}(t),pq), \] for $s,t\in T$ and $p, q\in P$. \end{definition} \noindent Examples of semidirect products are $ax+b$-semigroups, where $T$ comprises the additive structure, and $P$ the multiplicative structure in some ring or field. It is known that $T\rtimes_\theta P$ is right cancellative when $T$ and $P$ are both right cancellative, and $T\rtimes_\theta P$ is left cancellative when $T$ and $P$ are both left cancellative and, in addition, $\theta$ is an action by injective endomorphisms of $T$. In the next result we describe $S^*$ in the case of a semidirect product $S=G\rtimes_\theta P$ in which $G$ is a group. \begin{lemma}\label{lem: units in GxP} Let $G$ be a group, $P$ a semigroup and $P \stackrel{\theta}{\curvearrowright}G$ an action such that $G\rtimes_\theta P$ is left cancellative. If $P$ has an identity, then $(G\rtimes_\theta P)^* = G\rtimes_{\theta}P^*$ holds, otherwise $G\rtimes_\theta P$ does not have an identity. \end{lemma} \begin{proof} If $P$ has an identity element $1_P$, the identity element of $G\rtimes_\theta P$ is given by $(1_G,1_P)$. Now let $(g,x) \in (G\rtimes_\theta P)^*$. By definition, there is $(h,y) \in G\rtimes_\theta P$ such that $(g,x)(h,y) = (1_G,1_P) = (h,y)(g,x)$. Comparing second coordinates gives $xy = 1_P = yx$, so $x \in P^*$. Conversely, if $x \in P^*$ and $g \in G$, the inverse of $(g,x)$ is given by $(\theta_{x^{-1}}(g^{-1}),x^{-1})$. The second case is obvious. \end{proof} \begin{remark} Let $G$ be a group, $P$ a semigroup with $P^*=\{1_P\}$ and $P \stackrel{\theta}{\curvearrowright}G$ an action such that $G\rtimes_\theta P$ is left cancellative. 
Given $(g,p)\in G\rtimes_\theta P$, we have $(g,p)(h,1_P)=(g\theta_p(h)g^{-1},1_P)(g,p)$ for any $h\in G$. By Lemma~\ref{lem: units in GxP}, $a(G\rtimes_\theta P)^*\subset (G\rtimes_\theta P)^*a$ for any $a$ in $G\rtimes_\theta P$. This observation motivates the next considerations. \end{remark} \noindent In \cite[\S 10.3]{Cli-Pre}, a subset $H$ of a semigroup $S$ is called centric if $aH=Ha$ for every $a\in S$. For a semigroup $S$ with $S^*\neq\emptyset$, we shall consider two one-sided versions of this condition. \begin{definition} Given a semigroup $S$ with $S^*\neq\emptyset$, let (C1) and (C2) be the conditions: \begin{enumerate} \item[(C1)]\label{def: centric} $aS^*\subseteq S^*a$ for all $a\in S$. \item[(C2)]\label{def: centric-other} $S^*a\subseteq aS^*$ for all $a\in S$. \end{enumerate} \end{definition} \begin{prop}\label{prop:congruence} Let $S$ be a semigroup with $S^* \neq \emptyset$. Consider the equivalence relation on $S$ given as follows: for $a,b\in S$, \[ a\sim b \text{ if }a=xb \text{ for some }x\in S^*.\] If $S$ satisfies (C1), then $\sim$ is a congruence on $S$. Consequently, if $\mathcal{S}:=S/_\sim$ denotes the collection of equivalence classes $[a]:=\{b\in S\mid b\sim a\}$, then $\mathcal{S}$ is a semigroup with identity $[1_S]$. Moreover, $\mathcal{S}^*=\{[1_S]\}$. \end{prop} \begin{proof}[Proof of Proposition~\ref{prop:congruence}.] It is routine to check that $\sim$ is an equivalence relation. To show that it is a congruence on $S$, we must show that whenever $a\sim b$ then $cad\sim cbd$ for all $c,d$ in $S$. Let $x\in S^*$ be such that $a=xb$. By (C1), there is $x'\in S^*$ such that $cx=x'c$. Then $cad=cxbd=x'cbd$, giving the claim. Thus $[a_1]\cdot [a_2]:=[a_1a_2]$ for $a_1,a_2 \in S$ is a well-defined operation which turns $\mathcal{S}$ into a semigroup with identity $[1_S]$. Suppose that $[a][b]=[1_S]=[b][a]$ for $a,b\in S$. Then $ab=x$ and $ba=y$ for $x,y\in S^*$, which shows that $bx^{-1}=y^{-1}b$ is an inverse for $a$. 
Similarly, $b\in S^*$, and thus $[a]=[b]=[1_S]$. \end{proof} \begin{remark} The relation $\sim$ from Proposition~\ref{prop:congruence} is closely related to the left Green relation: since $Sx=S$ whenever $x\in S^*$, we see that $a\sim b$ implies $a\mathcal{L}b$. If $S$ is right cancellative, then also $a\mathcal{L}b$ implies $a\sim b$. \end{remark} \noindent Our interest is in semigroups $S$ that are left cancellative and often cancellative. So we would like to know when the semigroup $\mathcal{S}$ from Proposition~\ref{prop:congruence} inherits these properties. One sufficient condition for left cancellation to pass from $S$ to $\mathcal{S}$ is spelled out in the next lemma, whose immediate proof we omit. \begin{lemma}\label{lemma:calS-cancellative} Let $S$ be a semigroup with $S^* \neq \emptyset$ and satisfying (C1). If $S$ is right cancellative then $\mathcal{S}$ is right cancellative. Further, $\mathcal{S}$ is left cancellative if $S$ is left cancellative and has the following property: \begin{equation*}\label{eq:condC3} ab=xac \text{ for }a,b,c\in S, x\in S^* \Longrightarrow \exists y\in S^* \text{ with }xa=ay. \end{equation*} \end{lemma} \begin{prop}\label{prop:calS-for-GP} Let $P$ be a semigroup with $P^* \neq \emptyset$, $G$ a group and $P \stackrel{\theta}{\curvearrowright} G$ an action by injective group endomorphisms of $G$. Denote $S=G\rtimes_\theta P$ the resulting semidirect product. \textnormal{(a)} If $P$ satisfies (C1), then so does $S$. \textnormal{(b)} If $P$ is right cancellative and satisfies (C1), then $\mathcal{S}$ is right cancellative. \textnormal{(c)} If $P$ is left cancellative and $P^*$ is centric, then $\mathcal{S}$ is left cancellative. \end{prop} \begin{proof} For (a), let $(g,p)\in S$ and $(g', x)\in S^*=G\rtimes P^*$, according to Lemma~\ref{lem: units in GxP}. Choose by (C1) an element $y\in P^*$ such that $px=yp$. It follows that $(g, p)(g', x)=(g'', y)(g,p)$ for $g''=g\theta_p(g')\theta_y(g^{-1})$. 
For assertion (b), note that $S$ has (C1) by (a) and is right cancellative, so the claim follows by Lemma~\ref{lemma:calS-cancellative}. To prove (c), first note that $\mathcal{S}$ is well-defined since $S$ has (C1). Suppose we have elements $(g,p)$, $(h, q)$, $(k, r)$ in $S$ and $(g_0,p_0)\in S^*$ such that $(g, p)(h,q)=(g_0, p_0)(g,p)(k, r)$. Therefore $(g\theta_p(h), pq)=(g_0\theta_{p_0}(g\theta_p(k)),p_0pr)$. Since $P^*$ is centric, there is a unique $p_1 \in P^*$ such that $p_0p=pp_1$. Choosing $g_1=h\theta_{p_1}(k^{-1})$ in $G$ we have $(g_0, p_0)(g,p)=(g,p)(g_1, p_1)$. Hence Lemma~\ref{lemma:calS-cancellative} applies and shows that $\mathcal{S}$ is left cancellative. \end{proof} \noindent The next result shows that cancellative semigroups which are semidirect products of the form $G\rtimes_\theta P$, with $P^*=\{1_P\}$, can be characterised abstractly as cancellative semigroups $S$ that satisfy (C1) and for which the quotient map of $S$ onto $\mathcal{S}$ admits a homomorphism lift. \begin{prop}\label{prop: correspondence between S and GxP} There is a bijective correspondence between the class of cancellative semigroups $S$ with identity $1_S$ satisfying (C1) and such that the quotient map from $S$ onto $\mathcal{S}$ admits a transversal homomorphism which embeds $\mathcal{S}$ into $S$, and the class of semidirect product semigroups $G\rtimes_\theta P$ arising from a cancellative semigroup $P$ with $P^*=\{1_P\}$, which acts by injective endomorphisms of a group $G$. \end{prop} \begin{proof} Suppose $S$ is cancellative with $1_S$, satisfies (C1), and is such that there is an embedding of $\mathcal{S}$ as a subsemigroup of $S$ which is a right inverse for the quotient map $S\to \mathcal{S}$. For ease of notation, we identify $\mathcal{S}\subseteq S$. Then for each $p\in\mathcal{S}$ we have a map $\theta_p:S^*\to S^*$, where $\theta_p(x)$ is the unique element of $S^*$ satisfying $px=\theta_p(x)p$. 
Note that such an element exists because of (C1), and is unique because $S$ is right cancellative. We claim that $\theta:p\mapsto \theta_p$ is an action of $\mathcal{S}$ by injective endomorphisms of $S^*$. For each $p\in \mathcal{S}$ and $x,y\in S^*$ we have $\theta_p(xy)p=pxy=\theta_p(x)py=\theta_p(x)\theta_p(y)p$, which by right cancellation means $\theta_p(xy)=\theta_p(x)\theta_p(y)$. Since we obviously have $\theta_p(1_S)=1_S$, each $\theta_p$ is an endomorphism of $S^*$. For each $p,q\in\mathcal{S}$ and $x\in S^*$ we have $\theta_{pq}(x)pq=pqx=p\theta_q(x)q=\theta_p(\theta_q(x))pq$, which by right cancellation means $\theta_{pq}(x)=\theta_p(\theta_q(x))$, and so $\theta$ is an action. Hence we can form the semidirect product $S^*\rtimes_\theta\mathcal{S}$. We have each $\theta_p$ injective because $\theta_p(x)=\theta_p(y)$ implies $px=\theta_p(x)p=\theta_p(y)p=py$, resulting in $x=y$. The map $\phi:S^*\rtimes_\theta\mathcal{S}\to S$ given by $\phi((x,p))=xp$ is a homomorphism because \[\phi((x,p))\phi((y,q))=xpyq=x\theta_p(y)pq=\phi((x,p)(y,q)).\] For each $r\in S$, let $p\in\mathcal{S}$ be the representative of the class of $r$ in $\mathcal{S}$. Then $r=xp$ for some $x\in S^*$, which means $r=\phi((x,p))$, and hence $\phi$ is surjective. For injectivity note that $\phi((x,p))=\phi((y,q))$ means $p$ and $q$ differ by a unit. Hence as elements of $\mathcal{S}$ they must be equal. Then right cancellation gives $x=y$. So $\phi:S^*\rtimes_\theta\mathcal{S}\to S$ is an isomorphism. Moreover, $\mathcal{S}$ is cancellative because $S$ is cancellative, and we have $\mathcal{S}^*=\{1_S\}$ because $[x]=[1_S]$ for all $x\in S^*$. Since $\mathcal{S}^*=\{1_S\}$, $\mathcal{S}$ trivially satisfies (C1). Now suppose that $P$ is cancellative with $P^*=\{1_P\}$, and acts by injective endomorphisms on a group $G$. Then we know from the discussion on semidirect products prior to Lemma~\ref{lem: units in GxP} that $G\rtimes_\theta P$ is cancellative. 
We also know from Proposition~\ref{prop:calS-for-GP} that $G\rtimes_\theta P$ satisfies (C1). Denote by $\mathcal{S}_{G,P}$ the semigroup obtained by applying Proposition~\ref{prop:congruence} to $G\rtimes_\theta P$, and consider the map $\pi:\mathcal{S}_{G,P}\to G\rtimes_\theta P$ given by $\pi([(g,p)])=(1_G,p)$. Since $(G\rtimes_\theta P)^*=G\times\{1_P\}$, the equality $[(g,p)]=[(h,q)]$ implies $p=q$, which means $\pi([(g,p)])=\pi([(h,q)])$. So $\pi$ is well defined. We have \[\pi([(g,p)][(h,q)])=\pi([(g\theta_p(h),pq)])=(1_G,pq)=\pi([(g,p)])\pi([(h,q)])\] for each $[(g,p)],[(h,q)]\in\mathcal{S}_{G,P}$, and so $\pi$ is a homomorphism. Moreover, $\pi$ is obviously unital. Finally, for each $[(g,p)],[(h,q)]\in\mathcal{S}_{G,P}$ we have \[\pi([(g,p)])=\pi([(h,q)])\Longrightarrow p=q,\] so $(g,p)=(gh^{-1},1_P)(h,q)$, resulting in $[(g,p)]=[(h,q)]$. Thus $\pi$ is injective, and hence a semigroup embedding of $\mathcal{S}_{G,P}$ into $G\rtimes_\theta P$. \end{proof} \section{Right LCM semigroup C*-algebras}\label{sec: right LCM algebras} \subsection{Semigroup C*-algebras}\label{subsec: semigroup algebras}~\\ In \cite{Li1}, Li constructed the reduced and the full $C^*$-algebras $C_r^*(S)$ and $C^*(S)$ associated to a left cancellative semigroup $S$ with identity. In this work we shall allow semigroups that do not necessarily have an identity, so we start by investigating to what extent the construction of $C_r^*(S)$ and $C^*(S)$ from \cite{Li1} still makes sense. Let $S$ be a left cancellative semigroup, and let $\{\varepsilon_t\}_{t\in S}$ denote the canonical orthonormal basis of $\ell^2(S)$ such that $(\varepsilon_s| \varepsilon_t)=\delta_{s,t}$ for $s, t\in S$. For each $p\in S$ let $V_p$ be the operator in $\mathcal{L}(\ell^2(S))$ given by $V_p\varepsilon_t=\varepsilon_{pt}$ for all $t\in S$. We have $V_p^*V_p=I$ in $\mathcal{L}(\ell^2(S))$, so that $V_p$ is an isometry for every $p\in S$. 
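These operators can be explored concretely. The following sketch (our illustration, for the right LCM semigroup $S=\mathbb{N}^\times$ under multiplication, where $pS\cap qS=\operatorname{lcm}(p,q)S$) spot-checks that each $V_p$ sends the basis to pairwise distinct basis vectors, and that intersections of principal right ideals are again principal.

```python
from math import gcd

# Sketch (our illustration): S = {1, 2, 3, ...} under multiplication.
# V_p acts on the basis of l^2(S) by eps_t |-> eps_{p*t}, and S is
# right LCM: pS ∩ qS = lcm(p, q)S.

N = 500  # finite window {1, ..., N} of S used for the spot checks

def lcm(p, q):
    return p * q // gcd(p, q)

for p in range(1, 13):
    # V_p is an isometry: t |-> p*t is injective, so V_p maps the
    # orthonormal basis (eps_t) to pairwise distinct basis vectors.
    assert len({p * t for t in range(1, N + 1)}) == N
    for q in range(1, 13):
        # Right LCM property on the window: s lies in pS ∩ qS
        # exactly when s lies in lcm(p, q)S.
        r = lcm(p, q)
        for s in range(1, N + 1):
            assert (s % p == 0 and s % q == 0) == (s % r == 0)

print("V_p is isometric and N^x is right LCM on the sampled window")
```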
We define the \emph{reduced} $C^*$-algebra $C_r^*(S)$ to be the unital $C^*$-subalgebra of $\mathcal{L}(\ell^2(S))$ generated by $V_p$ for all $p\in S$. Given $p\in S$, clearly $V_pV_p^*\varepsilon_s=0$ when $s\notin pS$. Left cancellation implies that $V_pV_p^*\varepsilon_s=\varepsilon_s$ when $s\in pS$. Thus the range projection $V_pV_p^*$ of $V_p$ is the orthogonal projection onto the subspace $\ell^2(pS)$ corresponding to the principal right ideal $pS$. We shall denote this projection by $E_{pS}$. With reference to Remark~\ref{rmk:ideal-and-regular}, note that $p$ need not belong to $pS$. However, $p$ is contained in $pS$ if $S$ has an identity or if $p$ is a regular element of $S$. We summarise some properties of the elements $V_p$ and $E_{pS}$ in the next lemma, whose proof we omit. \begin{lemma}\label{lem:def-V-E-reduced-alg} Let $S$ be a left cancellative semigroup that does not necessarily have an identity. Then for each $p$ in $S$, the range projection of $V_p$ is equal to the orthogonal projection $E_{pS}$ onto the subspace $\ell^2(pS)$. Further, the isometries $V_p$ and the projections $E_{pS}$ satisfy the relations: \begin{enumerate} \item $V_pV_q=V_{pq}$; \item $V_p E_{qS}V_p^*=E_{pqS}$; \item $E_{pS}E_{qS}=E_{pS\cap qS}$ \end{enumerate} for all $p,q\in S$. \end{lemma} \noindent Recall from \cite{Li1} that for each right ideal $X$ and $p\in S$, the sets \[pX = \{px \mid x \in X\}\quad \text{and}\quad p^{-1} X = \{y\in S\mid py \in X\}\] are also right ideals. Li \cite[\S2.1]{Li1} defines the set of {\em constructible right ideals} $\mathcal{J}(S)$ to be the smallest family of right ideals of $S$ satisfying \begin{enumerate} \item[(1)] $\emptyset, S \in \mathcal{J}(S)$ and \item[(2)] $X \in \mathcal{J}(S),p \in S\Longrightarrow pX,p^{-1} X \in \mathcal{J}(S)$. 
\end{enumerate} An inductive argument as in the proof of \cite[Lemma 3.3]{Li1} shows that (1) and (2) imply \begin{enumerate} \item[(3)] $X, Y \in \mathcal{J}(S)\Longrightarrow X \cap Y \in \mathcal{J}(S)$. \end{enumerate} The full $C^*$-algebra for a left cancellative semigroup $S$ will be defined in terms of generators and relations similar to what is done in \cite{Li1} for semigroups with identity. \begin{definition}\label{def: Li's full algebra} Let $S$ be a left cancellative semigroup. The {\em full semigroup $C^*$-algebra} $C^*(S)$ is the universal unital $C^*$-algebra generated by isometries $(v_p)_{p\in S}$ and projections $(e_X)_{X\in\mathcal{J}(S)}$ satisfying \begin{enumerate} \item[(L1)] $v_pv_q=v_{pq}$; \item[(L2)] $v_pe_Xv_p^*=e_{pX}$; \item[(L3)] $e_\emptyset=0$ and $e_S=1$; and \item[(L4)] $e_Xe_Y=e_{X\cap Y}$, \end{enumerate} for all $p,q\in S$, $X,Y\in\mathcal{J}(S)$. \end{definition} \noindent The \emph{left regular representation} is, by definition, the $*$-homomorphism $\lambda:C^*(S)\to C_r^*(S)$ given by $\lambda(v_p)=V_p$ for all $p\in S$. In \cite{Li1}, the set of constructible right ideals $\mathcal{J}(S)$ is called {\em independent} if for every choice of $X,X_1,\dots,X_n\in\mathcal{J}(S)$ we have \[ X_j\varsubsetneqq X\text{ for all $1\le j\le n$}\Longrightarrow \bigcup_{j=1}^n X_j\varsubsetneqq X. \] Equivalently, $\mathcal{J}(S)$ is independent if $\cup_{j=1}^n X_j= X$ implies $X_j=X$ for some $1\le j\le n$. The next two lemmas explain why right LCM semigroups form a particularly tractable class of semigroups. The proof of the first of these lemmas is left to the reader. \begin{lemma}\label{lem: constr right ideals for right LCM sgp} If $S$ is a right LCM semigroup, then $\mathcal{J}(S) = \{\emptyset, S\}\cup \{pS\mid p \in S\}$. \end{lemma} \begin{lemma}\label{lem: independence for granted} Let $S$ be a right LCM semigroup. 
Then $\bigcup_{X \in F}{X} \subsetneqq S$ holds for all finite subsets $F \subset \mathcal{J}(S) \setminus \{S\}$ if and only if $\mathcal{J}(S)$ is independent. \end{lemma} \begin{proof} Clearly, independence of $\mathcal{J}(S)$ implies $\bigcup_{X \in F}{X} \subsetneqq S$ for all finite $F \subset \mathcal{J}(S) \setminus \{S\}$. Conversely, let $X,X_{1},\dots,X_{n} \in \mathcal{J}(S)$ satisfy $X_i \subsetneqq X$. Since $S$ is right LCM, Lemma~\ref{lem: constr right ideals for right LCM sgp} gives $p,p_1,\dots,p_n \in S$ with $X = pS, X_i = p_iS$ for $i = 1,\dots,n$. For each $i=1,\dots, n$, $X_i \subsetneqq X$ implies that $p_i = pp_{i}'$ for some $p_{i}' \in S$ with $p_i'S\subsetneqq S$. Thus \[\bigcup\limits_{1 \leq i \leq n}{X_{i}} = p\bigcup\limits_{1 \leq i \leq n}{p_{i}'S} \text{ and } X = pS.\] By left cancellation, $\bigcup_{1 \leq i \leq n}{X_{i}} = X$ is equivalent to $\bigcup_{1 \leq i \leq n}{p_{i}'S} = S$. However, the second statement is false by the choice of $p_{i}'S$. Hence $\bigcup_{1 \leq i \leq n}{X_{i}} \subsetneqq X$ and $\mathcal{J}(S)$ is independent. \end{proof} \begin{remark}\label{rem: Q^e_F,emptyset non-zero} Let $S$ be a left cancellative semigroup and $\mathcal{J}(S)$ the family of constructible right ideals. Let $F$ be a finite subset of $\mathcal{J}(S) \setminus \{S\}$. Note that if $S$ has an identity $1_S$, then $\bigcup\limits_{X \in F}{X} \subsetneqq S$ holds. Indeed, if we had $\bigcup\limits_{X \in F}{X} = S$, then there would exist $X \in F$ such that $1_S \in X$, so $X=S$ since $X$ is a right ideal, a contradiction. \end{remark} \begin{cor}\label{cor: independence for monoids} If $S$ is a right LCM semigroup with identity, then $\mathcal{J}(S)$ is independent. \end{cor} \begin{proof} This follows from \cite[Proposition 2.3.5]{No0}. Alternatively, apply Lemma~\ref{lem: independence for granted} and Remark \ref{rem: Q^e_F,emptyset non-zero}. 
\end{proof} \noindent If $S$ does not have an identity, we can always pass to its unitisation $\tilde{S} = S \cup \{1_S\}$, where we declare $1_S p = p = p 1_S$ for all $p \in \tilde{S}$. \begin{lemma}\label{lem:right LCM for unitisation} If $S$ is a right LCM semigroup with $S^*= \emptyset$, then for every $p,q\in S$ we have $pS \cap qS=\emptyset$ precisely when $p\tilde{S} \cap q\tilde{S}=\emptyset$, and \[pS \cap qS = rS \text{ if and only if } p\tilde{S} \cap q\tilde{S} = r\tilde{S}\] for $r \in S$. In particular, $\tilde{S}$ is right LCM and $\mathcal{J}(\tilde{S})$ is independent. \end{lemma} \begin{proof} Let $p,q \in S$. It is clear that $pS \cap qS$ is empty if and only if $p\tilde{S} \cap q\tilde{S}$ is. Suppose next that $pS \cap qS \neq \emptyset$. In case $pS=qS$, the standing assumption imposed on semigroups without identity element forces $p=q$, and so $p\tilde{S} = q\tilde{S}$. Assume therefore that $pS\neq qS$, and let $r\in S$ with $pS \cap qS = rS$. Then \[pS \cap qS = rS \subset r\tilde{S} \subseteq p\tilde{S} \cap q\tilde{S}.\] We claim that $p\tilde{S} \cap q\tilde{S} \subseteq r\tilde{S}$. Let $t\in p\tilde{S} \cap q\tilde{S}$. If $t\in pS\cap qS$ then clearly $t\in rS\subset r\tilde{S}$. Assume that $t=q=ps$ for some $s\in S$. Then $t=q1_S\in q\tilde{S}$ and $t\in pS\subset p\tilde{S}$, so $t\in r\tilde{S}$. The case that $t=p=qu$ for some $u\in S$ is similar, and the claim is established. Since left cancellation in $\tilde{S}$ is inherited from $S$, this shows that $\tilde{S}$ is a right LCM semigroup. Thus $\mathcal{J}(\tilde{S})$ is independent according to Corollary~\ref{cor: independence for monoids}. \end{proof} \noindent The following example shows that independence of $\mathcal{J}(S)$ need not hold in general for semigroups without an identity: \begin{example} Let $S = 2\mathbb{N}^\times \cup 3\mathbb{N}^\times$ be endowed with composition given by multiplication. Then $\mathcal{J}(S)$ is not independent. 
Indeed, for $X_1 = 2\mathbb{N}^\times = 3^{-1}(2S)$ and $X_2 = 3\mathbb{N}^\times = 2^{-1}(3S)$, we have $X_i \subsetneqq S$ but $X_1 \cup X_2 = S$. We remark that $S$ is not right LCM. \end{example} \noindent One can modify the previous example to get a right LCM semigroup with $S^{*} = \emptyset$ such that $\mathcal{J}(S)$ is independent. \begin{example}\label{ex:rightLCM no units} Consider the set $S = \mathbb{N}^\times \setminus \{1\}$ with composition given by multiplication. Then $S$ is a right LCM semigroup with $S^{*} = \emptyset$. We claim that $\mathcal{J}(S)$ is independent. For this it suffices to show that $\bigcup_{X \in F}{X} \subsetneqq S$ holds for all finite $F \subset \mathcal{J}(S) \setminus \{S\}$. Assume that $\bigcup_{i=1}^n{X_i}=S$ for $X_1,\dots ,X_n$ in $\mathcal{J}(S) \setminus \{S\}$. Since $S$ contains $n+1$ pairwise relatively prime elements $p_1, \dots ,p_{n+1}$, we can find $i_0\in \{1,\dots ,n\}$ and $j,k\in \{1,\dots, n+1\}$ with $j\neq k$ such that $p_j, p_k\in X_{i_0}$. But this implies that $X_{i_0}=S$, a contradiction. The underlying idea is that as long as there are infinitely many prime right ideals, $\mathcal{J}(S)$ is independent. \end{example} \begin{remark}\label{rem: range proj of the gen isometries} For a left cancellative semigroup $S$, the range projection $v_pv_p^*$ of the generating isometry $v_p$ in $C^*(S)$ equals $e_{pS}$: \[v_pv_p^* \stackrel{(L3)}{=} v_pe_Sv_p^* \stackrel{(L2)}{=} e_{pS}.\] Thus, if $S$ has an identity, then $v_x$ is a unitary in $C^*(S)$ if (and only if) $x \in S^*$. 
If $S$ is right LCM, then Lemma~\ref{lem: constr right ideals for right LCM sgp} shows that $C^*(S)$ is already generated by $(v_p)_{p \in S}$. \end{remark} \subsection{Spanning families and distinguished subalgebras.}~\\ When $S$ is a right LCM semigroup we have a description of its $C^*$-algebra $C^*(S)$ in terms of a spanning set of monomials of the kind that span $C^*$-algebras associated to quasi-lattice ordered pairs, see \cite{LaRa}. This assertion could be deduced from \cite[Proposition 3.2.15]{No0}; however, we include a proof since we do not assume here that $S$ necessarily has an identity. \begin{lemma}\label{lem: spanning family for algebra} Let $S$ be a right LCM semigroup. If $S$ has an identity, then $C^*(S)=\overline{\operatorname{span}}\{v_pv_q^* \mid p,q\in S\}$. If $S^* = \emptyset$, then $C^*(S)=\overline{\operatorname{span}}\{v_pv_q^* \mid p,q\in \tilde{S}\}$. \end{lemma} \begin{proof} In each case, the right-hand side is closed under taking adjoints and, due to Remark~\ref{rem: range proj of the gen isometries}, contains the generators of $C^*(S)$. Hence, we only need to show that the right-hand side is multiplicatively closed. Using (L1), it suffices to show that the product of $v_q^*$ and $v_p$ for arbitrary $p$ and $q$ in $S$ is $0$ or has the form $v_{p'}v_{q'}^*$ for some $p',q' \in S$. By Remark~\ref{rem: range proj of the gen isometries}, we have \[v_q^*v_p = v_q^*e_{qS}e_{pS}v_p \stackrel{(L4)}{=} v_q^*e_{qS \cap pS}v_p.\] Since $S$ is right LCM, we know that $pS \cap qS$ is either empty, in which case $e_{qS \cap pS} = 0$ by (L3), or $pS \cap qS = rS$ for some $r \in pS \cap qS$. If we let $p',q'\in S$ be such that $pp' = qq'=r$ in $S$ (which are uniquely determined since $S$ is left cancellative), then \[v_q^*v_p=v_q^*e_{rS}v_p=v_q^*v_{qq'}v_{pp'}^*v_p=v_{q'}v_{p'}^*\] establishes the claim for the second case. \end{proof} \begin{definition} Let $S$ be a left cancellative semigroup. 
Define a subalgebra of $C^*(S)$ by \begin{equation*} \mathcal{D}:=C^*(\{e_{X}\mid X\in \mathcal{J}(S)\}). \end{equation*} If $S^*\neq \emptyset$, define further the subalgebras \[ \mathcal{C}_O:=C^*(\{v_pv_xv_p^*\mid p\in S,x\in S^*\})\text{ and } \mathcal{C}_I:=C^*(\{e_{pS},v_x\mid p\in S,x\in S^*\}). \] These are, respectively, the {\em diagonal}, the {\em outer core} and the {\em inner core} of $C^*(S)$. \end{definition} \noindent It is clear that $\mathcal{D}=\overline{\operatorname{span}}\{e_X \mid X\in \mathcal{J}(S)\}$. The other two subalgebras satisfy the following: \begin{lemma}\label{lem: properties of the subalgebras} Let $S$ be a right LCM semigroup with $S^*\neq \emptyset$. Then \begin{enumerate} \item[(i)] $\mathcal{D}\subseteq\mathcal{C}_I\subseteq\mathcal{C}_O$; \item[(ii)] $\mathcal{C}_I=\overline{\operatorname{span}}\{e_{pS}v_x \mid p\in S,~x\in S^* \}$; and \item[(iii)] if $S^*=\{1_S\}$, then $\mathcal{D}=\mathcal{C}_I=\mathcal{C}_O$. \end{enumerate} \end{lemma} \begin{proof} Parts (i) and (iii) are immediate verifications. For assertion (ii) we use (L2) and (L4) to get \[ e_{pS}v_xe_{qS}v_y=e_{pS}v_xe_{qS}v_x^*v_xv_y=e_{pS\cap xqS}v_{xy}, \] for each $p,q\in S$, $x,y\in S^*$. Hence $\{e_{pS}v_x \mid p\in S,~x\in S^*\}$ is closed under multiplication. Since $(e_{pS}v_x)^*= v_x^*e_{pS}=e_{x^{-1}pS}v_{x^{-1}}$, claim (ii) follows. \end{proof} \subsection{\texorpdfstring{Conditional expectations onto canonical diagonals of $C^*(S)$ and $C_r^*(S)$}{Conditional expectations onto canonical diagonals of full and reduced semigroup C*-algebra}}\label{subsection:cond-exp-diagonal}~\\ Let $S$ be a left cancellative semigroup. The diagonal $\mathcal{D}_r$ in $C_r^*(S)$ is defined to be the subalgebra $\mathcal{D}_r=\overline{\operatorname{span}}\{E_X \mid X\in \mathcal{J}(S)\}$. We show next that when $S$ is right LCM and also \emph{right cancellative}, there is a canonical faithful conditional expectation from $C_r^*(S)$ onto its diagonal.
The result was motivated by \cite[Lemma 3.11]{Li1}, and is a generalisation to cancellative right LCM semigroups of a similar result proved for quasi-lattice ordered groups, see \cite[Remark 3.6]{Ni} and \cite{QuiRa}. More precisely, it is a consequence of the normality of the coaction in \cite[Proposition 6.5]{QuiRa} and of \cite[Lemma 6.7]{QuiRa} that the Wiener-Hopf algebra $\mathcal{T}(G, S)$, i.e. the reduced $C^*$-algebra of a quasi-lattice ordered group $(G, S)$, admits a faithful conditional expectation onto its canonical diagonal. \begin{prop}\label{prop: faithful cond exp for the red algebra} If $S$ is a cancellative right LCM semigroup, then the canonical map $\Phi_{\mathcal{D},r}:C^*_r(S) \longrightarrow \mathcal{D}_r$ given by $\Phi_{\mathcal{D}, r}(V_pV_q^*)=\delta_{p,q}V_pV_p^*$ for $p,q\in S$ is a faithful conditional expectation. \end{prop} \begin{proof} It was proved in \cite[Section 3.2]{Li1} that there is a faithful conditional expectation $E:\mathcal{L}(\ell^2(S)) \longrightarrow \ell^\infty(S)$ characterised by $\langle E(T)\varepsilon_s,\varepsilon_s\rangle = \langle T\varepsilon_s,\varepsilon_s\rangle$ for all $s \in S$ and all $T \in \mathcal{L}(\ell^2(S))$. Clearly, $\mathcal{D}_r \subset \ell^\infty(S)$. We will show that $E$ maps $C^*_r(S)$ into $\mathcal{D}_r$. Note that $C^*_r(S)$ is the closure of the span of elements $V_pV_q^*, p,q \in S$. Therefore it suffices to show that $E(V_pV_q^*) \in \mathcal{D}_r$ for any $p, q\in S$. Let $s \in S$. If $s \notin qS$, then $V_q^*\varepsilon_s = 0$, and for $s\in qS$ of the form $s=qs'$ we have $V_q^*\varepsilon_s = \varepsilon_{s'}$. Thus if $E(V_pV_q^*) \neq 0$, then there is $s' \in S$ such that $ps' = qs'$. Right cancellation then implies $p=q$, so $V_pV_q^* \in \mathcal{D}_r$. Since $\Phi_{\mathcal{D},r}$ is then the restriction of $E$ to $C^*_r(S)$, the proposition follows.
\end{proof} \noindent A successful strategy to prove injectivity of representations of $C^*(S)$ uses the classical idea of Cuntz from \cite{Cun}, which involves a conditional expectation onto a diagonal subalgebra and the construction of a projection with good approximation properties. To pursue this path, we need a faithful conditional expectation from $C^*(S)$ onto $\mathcal{D}$. Such a map can be specified by its image on the spanning elements of $C^*(S)$ as follows: \begin{equation}\label{def:Phi-D} \Phi_{\mathcal{D}}(v_pv_q^*)=\begin{cases}v_pv_p^*,&\text{ if }p=q\\ 0,&\text{ if }p\neq q.\end{cases} \end{equation} Thus in examples we need to ensure that \eqref{def:Phi-D} does extend to $C^*(S)$ and that it is faithful on positive elements. We now describe one such situation. Let us recall the notion of a semigroup crossed product by endomorphisms, see e.g. \cite{LaRa}. Let $S$ be a semigroup with identity and $A$ a unital $C^*$-algebra with an action $S \stackrel{\alpha}{\curvearrowright}A$ by endomorphisms. A nondegenerate representation of $(A,S,\alpha)$ in a unital $C^*$-algebra $B$ is given by a unital $*$-homomorphism $\pi_A: A \longrightarrow B$ and a semigroup homomorphism $\pi_S: S \longrightarrow \text{Isom}(B)$, where $\text{Isom}(B)$ denotes the semigroup of isometries in the $C^*$-algebra $B$. The pair $(\pi_A,\pi_S)$ is said to be covariant if it satisfies the covariance condition \[ \pi_S(s)\pi_A(a)\pi_S(s)^* = \pi_A(\alpha_s(a)) \text{ for all } a \in A \text{ and }s \in S. \] Assuming that there is a covariant pair, the semigroup crossed product $A \rtimes_\alpha S$ is the unital $C^*$-algebra generated by a pair $(\iota_A, \iota_S)$ which is universal for nondegenerate covariant representations.
This is to say that whenever $(\pi_A, \pi_S)$ is a nondegenerate covariant representation of $(A,S,\alpha)$ in a $C^*$-algebra $B$, there is a homomorphism $\overline{\pi}: A \rtimes_\alpha S \longrightarrow B$ such that \[ \pi_A = \overline{\pi} \circ \iota_A \text{ and } \pi_S = \overline{\pi} \circ \iota_S. \] The crossed product $A \rtimes_\alpha S$ is uniquely determined (up to canonical isomorphism) by this property. If the action $\alpha$ is by injective endomorphisms, then there is always a covariant pair and $A \rtimes_\alpha S$ is non-trivial, see \cite{Lac}. It was observed in \cite{Li1} that whenever $S$ is a left cancellative semigroup with identity, then there is an action $\tau$ of $S$ by endomorphisms of $\mathcal{D}$ given by $\tau_p(e_{X})=v_pe_{X}v_p^* = e_{pX}$ for all $p\in S$ and $X\in \mathcal{J}(S)$. The semigroup crossed product $\mathcal{D}\rtimes_\tau S$ is the universal $C^*$-algebra generated by a pair $(\iota_\mathcal{D}, \iota_S)$ of homomorphisms of $\mathcal{D}$ and $S$, respectively, subject to the covariance condition $\iota_S(p)\iota_\mathcal{D}(e_{X})\iota_S(p)^*=\iota_\mathcal{D}(e_{pX})$ for all $p\in S$ and $X\in \mathcal{J}(S)$. As shown in \cite[Lemma 2.14]{Li1}, the $C^*$-algebras $C^*(S)$ and $\mathcal{D}\rtimes_\tau S$ are canonically isomorphic, through the isomorphism that sends $v_p$ to $\iota_S(p)$ and $e_{X}$ to $\iota_\mathcal{D}(e_{X})$. We have the following consequence of Lemma~\ref{lem: spanning family for algebra}. \begin{cor} Given a right LCM semigroup $S$, let $\tau$ be the action of $S$ on $\mathcal{D}$ given by conjugation with $v_p$ for $p\in S$. If $S$ has an identity, then $\mathcal{D}\rtimes_\tau S=\overline{\operatorname{span}}\{\iota_S(p)\iota_S(q)^*\mid p,q \in S\}.$ If $S^*=\emptyset$, then $\mathcal{D}\rtimes_\tau S=\overline{\operatorname{span}}\{\iota_S(p)\iota_S(q)^*\mid p,q \in \tilde{S}\}$ holds. 
\end{cor} \noindent Recall that a semigroup $S$ is said to be \emph{right reversible} if $Sp \cap Sq$ is non-empty for all $p,q \in S$, see \cite[§10.3]{Cli-Pre}. If $S$ embeds into a group, we refer to the subgroup generated by the image of $S$ as the \emph{enveloping group} of $S$. Note that this group is unique up to canonical isomorphism in case it exists. \begin{prop}\label{prop:left-inverse} Let $S$ be a right LCM semigroup with identity such that $S$ is right reversible and its enveloping group $\mathcal{G}=S^{-1}S$ is amenable. Then there is a faithful conditional expectation from $C^*(S)$ onto $\mathcal{D}$ characterised by \eqref{def:Phi-D}. \end{prop} \begin{proof} The first observation is that the action $\tau$ admits a left inverse, $\beta$, given by \[\beta_p(e_X)=v_p^*e_{X}v_p = e_{p^{-1}X}\] for $p\in S$ and $X \in \mathcal{J}(S)$. It was proved in \cite[Corollary 2.9]{Li2} that $\beta_p$ defines an endomorphism of $\mathcal{D}$ for each $p\in S$, the reason for this being that $p^{-1}X \cap p^{-1}Y = p^{-1}(X \cap Y)$ holds for all $X,Y \in \mathcal{J}(S)$. It is clear that $\beta$ is an action of $S$ such that $\beta_p\circ \tau_p=\operatorname{id}$ for all $p\in S$. Moreover, \[ (\tau_p\circ \beta_p)(e_X)=v_pv_p^*e_Xv_pv_p^*=e_{pS}e_Xe_{pS}=e_X\tau_p(1) \] for every $p\in S$ and $X\in \mathcal{J}(S)$. Thus $\tau_p\circ \beta_p$ is simply the cut-down to the corner associated to the projection $\tau_p(1)$. One consequence of the existence of $\beta$ is that $\tau_p$ is injective for every $p\in S$. Hence \cite[Theorems 2.1 and 2.4]{Lac} show that $\mathcal{D}$ embeds in $\mathcal{D}\rtimes_\tau S$. As a second consequence of the existence of $\beta$, note that \cite[Proposition 3.1(1)]{Lar} implies that there is a coaction of $\mathcal{G}$ whose fixed-point algebra is $\iota_\mathcal{D}(\mathcal{D})$. 
Thus there is a conditional expectation $\Phi_\mathcal{D}$ from $\mathcal{D}\rtimes_\tau S$ onto $\iota_\mathcal{D}(\mathcal{D})$ such that \[ \Phi_\mathcal{D}(\iota_S(p)\iota_S(q)^*)=\begin{cases} \iota_S(p)\iota_S(p)^*,&\text{ if }p=q\\ 0,&\text{ if }p\neq q. \end{cases} \] Identifying $\iota_S(p)$ with $v_p$ and $\iota_\mathcal{D}(e_{pS})$ with $e_{pS}$ gives existence of the claimed expectation. Under the assumption that the enveloping group $\mathcal{G}$ is amenable, the map $\Phi_\mathcal{D}$ is faithful on positive elements, cf. \cite[Lemma 1.4]{Qui}. Note that the last conclusion may also be reached for the semigroup dynamical system $(\mathcal{D}, S, \tau)$ by invoking \cite[Lemma 8.2.5]{CEL1}. \end{proof} \subsection{\texorpdfstring{From quasi-lattice ordered groups to right LCM semigroups.}{From quasi-lattice ordered groups to right LCM semigroups.}}\label{subsection-LR}~\\ It turns out that a good part of the general strategy of Laca and Raeburn \cite{LaRa} for proving injectivity of representations of $C^*(S)$ in the case that $S$ is part of a quasi-lattice order $(G, S)$ can be extended to the class of right LCM semigroups, although the arguments become more delicate due to the presence of non-trivial units. The next several results make this claim precise. \begin{notation}\label{notation: projections--D} In Lemma~\ref{lem:def-V-E-reduced-alg} we introduced isometries $V_p$ for $p\in S$ and projections $E_{pS}$ for $pS\in \mathcal{J}(S)$ in $C_r^*(S)$ that satisfy conditions (L1)-(L4). Later in the paper we shall mainly be interested in families of isometries and projections satisfying (L1)-(L4) inside an arbitrary $C^*$-algebra $B$. In order to avoid unnecessary notational adornment we shall still use $V_p, E_{pS}$ in that case.
Given a family of commuting projections $(E_{i})_{i \in I}$ in a unital $C^*$-algebra $B$ and finite subsets $A \subset F$ of $I$, we denote \[Q_{F,A}^{E} := \prod\limits_{i \in A}E_{i}\prod_{j \in F \setminus A}(1-E_{j}).\] If the family is $(e_X)_{X\in \mathcal{J}(S)}$ in $C^*(S)$, we write $Q^e_{F, A}$ for the corresponding projections. In the case of a right LCM semigroup $S$, finite subsets of $\mathcal{J}(S)$ are determined by finite subsets of $S$, see Lemma~\ref{lem: constr right ideals for right LCM sgp}. \end{notation} \noindent If $S$ is a left cancellative semigroup with identity such that $\mathcal{J}(S)$ is independent, then \cite[Corollary 2.22]{Li1} and \cite[Proposition 2.24]{Li1} show that the left regular representation $\lambda$ from $C^*(S)$ to $C_r^*(S)$ restricts to an isomorphism from $\mathcal{D}$ onto the diagonal $\mathcal{D}_r$. This allows us to show: \begin{lemma}\label{lem:lambda-iso-diagonal-general} Let $S$ be a right LCM semigroup. Then the left regular representation $\lambda$ restricts to an isomorphism from the diagonal $\mathcal{D}$ of $C^*(S)$ onto the diagonal $\mathcal{D}_r$ of $C_r^*(S)$. \end{lemma} \begin{proof} If $S$ has an identity, then $\mathcal{J}(S)$ is independent by Corollary~\ref{cor: independence for monoids}. Hence the lemma is simply an application of the mentioned results from \cite{Li1}. Now suppose $S^*= \emptyset$ holds. Then $\mathcal{J}(\tilde{S})$ is independent according to Lemma~\ref{lem:right LCM for unitisation} and Corollary~\ref{cor: independence for monoids}. Moreover, by Lemma~\ref{lem:right LCM for unitisation}, we have \[pS \cap qS = rS \text{ if and only if } p\tilde{S} \cap q\tilde{S} = r\tilde{S} \text{ for all } p,q,r \in S. 
\] This fact and the standing hypothesis $S \neq \emptyset$ imply that the maps \[\begin{array}{lclclcl} \mathcal{D} &\longrightarrow& \tilde{\mathcal{D}} &\hspace*{4mm}\text{and}\hspace*{4mm}& \mathcal{D}_r &\longrightarrow& \tilde{\mathcal{D}}_r\\ e_S &\mapsto& e_{\tilde{S}}&&E_S &\mapsto& E_{\tilde{S}}\\ e_{pS} &\mapsto& e_{p\tilde{S}}&&E_{pS} &\mapsto& E_{p\tilde{S}} \end{array}\] are isomorphisms, where $\tilde{\mathcal{D}}$ and $\tilde{\mathcal{D}}_r$ denote the diagonal subalgebra of $C^*(\tilde{S})$ and $C^*_r(\tilde{S})$, respectively. Since $\mathcal{J}(\tilde{S})$ is independent, $\tilde{\lambda}:\tilde{\mathcal{D}} \longrightarrow \tilde{\mathcal{D}}_r$ is an isomorphism. Altogether, we get a commutative diagram \begin{equation}\label{com:diagram DD unitise S} \begin{tikzpicture}[>=stealth, xscale=2, yscale=1.5] \node (c) at (0,-2) {$\tilde{\mathcal{D}}$}; \node (b) at (2,0) {$\mathcal{D}_r$}; \node (a) at (0,0) {$\mathcal{D}$} edge[->](b) edge[->] (c); \node (d) at (2,-2) {$\tilde{\mathcal{D}}_r$} edge[<-] (b) edge[<-] (c); \node at (1,0.2) {$\lambda|_{\mathcal{D}}$}; \node at (-0.2,-1) {$\cong$}; \node at (2.2,-1) {$\cong$}; \node at (1,-2.2) {$\cong$}; \end{tikzpicture} \end{equation} which proves that $\lambda|_{\mathcal{D}}$ is an isomorphism. \end{proof} \begin{prop}\label{prop: part (i) independent} Suppose $S$ is a right LCM semigroup and $\pi$ is a $*$-homomorphism of $C^*(S)$. Let $E_X:=\pi(e_X)$ for $X\in \mathcal{J}(S)$ and $V_p:=\pi(v_p)$ for $p\in S$. Then the following statements are equivalent: \begin{enumerate}[(I)] \item\label{it:I1}$\pi\vert_\mathcal{D}:\mathcal{D}\longrightarrow \pi(\mathcal{D})$ is an isomorphism. \item\label{it:I2} $Q_{F,A}^{E} \neq 0$ for all non-empty finite subsets $F$ of $\mathcal{J}(S)$ and all non-empty subsets $A\subset F$ satisfying \[ \bigcap\limits_{X \in A}{X} \cap \bigcap\limits_{Y \in F \setminus A}{S \setminus Y} \neq \varnothing.
\] \item\label{it:I3} $Q_{F, \emptyset}^E\neq 0$ for all non-empty subsets $F\subset \mathcal{J}(S)\setminus\{S\}$. \end{enumerate} \end{prop} \begin{proof} Lemma~\ref{lem:lambda-iso-diagonal-general} implies that the left regular representation $\lambda$ restricts to an isomorphism from $\mathcal{D}$ onto $\mathcal{D}_r$. Thus assuming \emph{(I)} and letting $A\subset F$ be finite non-empty subsets of $\mathcal{J}(S)$ satisfying the non-empty intersection condition of \emph{(II)}, it follows that $\lambda(Q^e_{F, A})\neq 0$. Hence $Q^e_{F, A}\neq 0$, which by injectivity of $\pi\vert_\mathcal{D}$ gives $Q^E_{F, A}\neq 0$. This shows that \emph{(I)} implies \emph{(II)}. Conversely, it suffices to note that by \cite[Lemma 2.20]{Li1}, condition \emph{(I)} is equivalent to the implication $Q_{F,A}^{E} = 0 \Longrightarrow Q_{F,A}^{e} = 0$ for all non-empty finite subsets $F$ of $\mathcal{J}(S)$ and all non-empty subsets $A \subset F$. Thus \emph{(I)} and \emph{(II)} are equivalent. Consider next a non-empty finite subset $F\subset \mathcal{J}(S)\setminus\{S\}$. If $S$ has an identity, then Lemma~\ref{lem: independence for granted} provides independence of $\mathcal{J}(S)$. In particular, we have $\bigcup_{X \in F}{X} \subsetneqq S$. Hence $Q_{F,\emptyset}^{e} \neq 0$ because its image under $\lambda$ is non-zero. In case $S^*=\emptyset$, Lemma~\ref{lem:right LCM for unitisation} shows that $F \subset \mathcal{J}(S)\setminus\{S\}$ corresponds to a finite subset $\tilde{F} \subset \mathcal{J}(\tilde{S})\setminus\{\tilde{S}\}$. As $\tilde{S}$ is a right LCM semigroup with identity, we get $Q_{\tilde{F},\emptyset}^{e} \neq 0$. According to Lemma~\ref{lem:lambda-iso-diagonal-general}, this is equivalent to $Q_{F,\emptyset}^{e} \neq 0$. Since $\pi$ carries $Q_{F,\emptyset}^e$ to $Q_{F, \emptyset}^E$, it follows that \emph{(I)} implies \emph{(III)}. Thus it remains to prove that \emph{(III)} yields \emph{(II)}.
Assume \emph{(III)} and let $F\subset\mathcal{J}(S)$ be a non-empty subset and $A\subset F$ non-empty satisfying the non-empty intersection condition of \emph{(II)}. Let $\sigma_A \in S$ be such that $\sigma_AS = \bigcap\limits_{X \in A}{X}$, and note that the intersection condition forces $\bigcup\limits_{Y \in F \setminus A}{Y} \neq S$. Thus, \[Q_{F,A}^E = Q_{A,A}^E\,Q_{F \setminus A,\emptyset}^E\,Q_{A,A}^E = V_{\sigma_A}\prod\limits_{Y \in F \setminus A}{(1-V_{\sigma_A}^*E_YV_{\sigma_A})}V_{\sigma_A}^*. \] Each $Y\in F\setminus A$ has the form $Y=p_{Y}S$ for some $p_Y\in S$. Since $S$ is right LCM, there exists $q_Y\in S$ such that $\sigma_A^{-1}Y=q_YS$ and $\sigma_A q_YS=\sigma_AS\cap p_YS$. Thus $\sigma_A^{-1}Y$ is a proper right ideal of $S$ if and only if $\sigma_A \notin Y$. The choice of $F$ and $A$ therefore guarantees that $\sigma_A^{-1}Y \neq S$ for all $Y \in F \setminus A$. Hence $Q_{\sigma_A^{-1}(F\setminus A), \emptyset}^E\neq 0$ by \emph{(III)}. From \[ Q_{\sigma_A^{-1}(F\setminus A), \emptyset}^E=\prod\limits_{Y \in F \setminus A}{(1-E_{\sigma_A^{-1}(Y)})} \] and $V_{\sigma_A}^*E_YV_{\sigma_A} = E_{\sigma_A^{-1}(Y)}$, we obtain that \[ Q_{F,A}^E = V_{\sigma_A}\prod\limits_{Y \in F \setminus A}{(1-E_{\sigma_A^{-1}(Y)})}V_{\sigma_A}^* \neq 0 \] since $V_{\sigma_A}$ is an isometry. This finishes the proof of the proposition. \end{proof} \noindent The following result is a variant of \cite[Lemma 1.4]{LaRa}.
\begin{lemma}\label{lem: first Qs--D} If $(E_{i})_{i \in I}$ are commuting projections in a unital $C^*$-algebra $B$ and $A \subset F$ are finite subsets of $I$, then each $Q_{F,A}^{E}$ is a projection, $\sum\limits_{A \subset F}Q_{F,A}^{E} = 1$, and we have \begin{equation}\label{eq: 1 for the first Qs--D} \sum_{i \in F}\lambda_{i}E_{i}=\sum_{A\subset F}\Big(\sum_{i\in A}\lambda_{i}\Big) Q_{F,A}^{E} \end{equation} for any choice of complex numbers $\{\lambda_i \mid i\in I\}$ and, moreover, \begin{equation}\label{eq: 2 for the first Qs--D} \Big\|\sum_{i\in F}\lambda_{i}E_{i}\Big\|=\max\limits_{\substack{A \subset F \\ Q_{F,A}^{E} \neq 0}}\big|\sum_{i\in A}\lambda_{i}\big|. \end{equation} \end{lemma} \begin{proof} Since the projections $E_{i}$ commute, $Q_{F,A}^{E}$ is a projection. The second assertion is obtained via \[1 = \prod\limits_{i \in F}{(E_{i} + 1- E_{i})} = \sum\limits_{A \subset F}Q_{F,A}^{E}.\] Equation~\eqref{eq: 1 for the first Qs--D} follows by expanding each $E_i$ against this decomposition of the identity, and Equation~\eqref{eq: 2 for the first Qs--D} then follows because the projections $Q_{F,A}^{E}$ are mutually orthogonal. \end{proof} \noindent We now set up a conventional notation which will be used repeatedly in the sequel. Let $S$ be a right LCM semigroup. We let \begin{equation}\label{eq:tF-tFD} t_F := \sum\limits_{p,q \in F} \lambda_{p,q}v_pv_q^* \,\,\text{ and }\,\, t_{F,\mathcal{D}} := \sum\limits_{p \in F} \lambda_{p,p}e_{pS} \end{equation} denote an arbitrary, but fixed finite linear combination in $C^*(S)$ and its image in $\mathcal{D}$ under $\Phi_\mathcal{D}$, where $F$ is a finite subset of $S$ when $S$ has an identity, or, in case $S^*=\emptyset$, $F$ is a finite subset of $\tilde{S}$, and $\lambda_{p,q}\in \mathbb{C}$ for $p,q \in F$. We will decompose $t_F - t_{F,\mathcal{D}}$ into further terms, based on a suitable subset $A \subset F$ depending on the choice of the $\lambda_{p,q}$'s. We are interested in combinations $t_F$ with $t_{F, \mathcal{D}}\neq 0$, so we shall make this a standing assumption.
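\noindent To see Lemma~\ref{lem: first Qs--D} at work in the smallest non-trivial case, take $F = \{1,2\}$ and two commuting projections $E_1, E_2$. Then \eqref{eq: 1 for the first Qs--D} reads \[ \lambda_1E_1 + \lambda_2E_2 = (\lambda_1+\lambda_2)\,E_1E_2 + \lambda_1\,E_1(1-E_2) + \lambda_2\,(1-E_1)E_2, \] and if all three projections on the right-hand side are non-zero, then \eqref{eq: 2 for the first Qs--D} gives $\|\lambda_1E_1 + \lambda_2E_2\| = \max\{|\lambda_1|,|\lambda_2|,|\lambda_1+\lambda_2|\}$.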
\begin{lemma}\label{lem: basic projection--D} Let $S$ be a right LCM semigroup and $t_F$, $t_{F,\mathcal{D}}$ be as in \eqref{eq:tF-tFD}. Then there exists a non-empty subset $A\subset F$ such that the projection $Q_{F,A}^e$ is non-zero and satisfies the following: \begin{enumerate} \item[(i)] $Q_{F,A}^ev_pv_q^*Q_{F,A}^e=0$ for all $p,q\in F$ with $p\not\in A$ or $q\not\in A$. \item[(ii)] $\Vert Q_{F,A}^{e} t_{F,\mathcal{D}}Q_{F,A}^{e}\Vert= \|t_{F,\mathcal{D}}\|$. \item[(iii)] If $t_{F, \mathcal{D}}$ is positive, then we may take $Q_{F,A}^{e}t_{F,\mathcal{D}}Q_{F,A}^{e}=\|t_{F,\mathcal{D}}\|Q_{F,A}^{e}$. \end{enumerate} \end{lemma} \begin{proof} The projections $(e_{pS})_{p \in F}$ commute because of $e_{pS}e_{qS} = e_{pS \cap qS}$ for any $p, q\in S$. Applying Lemma \ref{lem: first Qs--D} yields $A \subset F$ which satisfies $Q_{F,A}^e\not=0$, and \[\Vert Q_{F,A}^{e} t_{F,\mathcal{D}}Q_{F,A}^{e}\Vert =\|t_{F,\mathcal{D}}\|.\] If $t_{F,\mathcal{D}}$ is positive, then we may choose $Q_{F,A}^{e}t_{F,\mathcal{D}}Q_{F,A}^{e}$ to be a multiple of $Q_{F,A}^{e}$. As $t_{F,\mathcal{D}} \neq 0$, we must have $A \neq \emptyset$. The fact that $Q_{F,A}^e \neq 0$ and the right LCM property of $S$ imply that \[ Q_{F,A}^e = \prod\limits_{q\in F\setminus A}(e_{\sigma_AS}-e_{\sigma_AS\cap qS}), \] where $\sigma_A \in S$ is such that $\sigma_AS=\cap_{p\in A}pS$. We claim that $Q_{F,A}^{e}v_pv_q^*Q_{F,A}^{e}=0$ for $p \in F \setminus A$. Indeed, if we have $p\not\in A$, then $Q_{F,A}^{e}v_pv_q^*Q_{F,A}^{e}$ contains a factor of $(1-e_{pS})v_p=v_p-v_p=0$, and hence $Q_{F,A}^{e}v_pv_q^*Q_{F,A}^{e}=0$. Similarly, $v_q^*(1-e_{qS}) = 0$, so we get $Q_{F,A}^{e}v_pv_q^*Q_{F,A}^{e}=0$ for $q \in F \setminus A$. \end{proof} \noindent Before we state the next result we introduce some notation. Assume the hypotheses of Lemma~\ref{lem: basic projection--D} and let $A$ be the finite subset of $F$ satisfying (i)-(iii).
Fix $\sigma_A \in S$ such that $\bigcap_{p \in A}pS = \sigma_AS$ (this element is not unique for the given $A$; if $S^*\neq \emptyset$, then $\sigma_Ax$ for any $x \in S^*$ satisfies the same identity as $\sigma_A$). For each $p\in A$, let $p_A \in S$ denote the element satisfying $pp_A = \sigma_A$. By left cancellation, this element is unique. Define now \[\begin{array}{lcl} t_{F,1} &=&\sum\limits_{\substack{p,q\in F, p\neq q\\p\notin A\text{ or } q\notin A}} \lambda_{p,q}v_pv_q^*, \label{def:TF1--D}\\[2mm] t_{F,2} &=& \sum\limits_{\substack{p,q\in A, p\neq q\\p_A S\neq q_A S}} \lambda_{p,q}v_pv_q^*,\text{ and}\\[2mm] t_{F,3} &=& \sum\limits_{\substack{p,q\in A, p\neq q\\p_A S=q_A S}} \lambda_{p,q}v_pv_q^*. \end{array}\] The sum $t_{F,3}$ will only be relevant here when $|S^*|>1$. When $|S^*|\leq 1$, we distinguish two cases: if $S^*=\emptyset$, our standing assumption says that $sS=tS$ forces $s=t$ for $s, t\in S$. Hence a term in $t_{F, 3}$ would correspond to $p_A=q_A$, which implies $pp_A=qp_A$. Thus, if the semigroup $S$ is also right cancellative, we would get $p=q$, a contradiction. The same argument rules out $t_{F,3}$ when $S^*=\{1_S\}$. \begin{lemma}\label{lem: decomposition of t_F--D} Assume the hypotheses of Lemma~\ref{lem: basic projection--D} and let $A$ be the finite subset of $F$ satisfying (i)-(iii). Fix $\sigma_A$ as above. Define a subset of $A\times A$ by \[ A_1 = \{(p,q)\mid p\neq q, \exists x,y\in S, x,y \text{ not both units}: p_Ax=q_Ay, p_AS \cap q_AS = p_AxS\}. \] Then \begin{align} e_{\sigma_AS}t_{F,2}e_{\sigma_AS} &= \sum\limits_{(p,q) \in A_1} \lambda_{p,q}v_{\sigma_Ax}v_{\sigma_Ay}^*.\label{def:TF2Q1--D} \end{align} If $|S^*|>1$, then also \begin{align} e_{\sigma_AS}t_{F,3}e_{\sigma_AS}&= \sum\limits_{\substack{p,q\in A, p\neq q,\\ p_A = q_Ax, x\in S^*}} \lambda_{p,q}v_{\sigma_A}v_{x}v_{\sigma_A}^*.\label{def:TF3Q1--D} \end{align} \end{lemma} \begin{proof} Clearly, $t_F= t_{F,\mathcal{D}}+t_{F,1}+t_{F,2}+t_{F,3}$.
Let us look more closely at the cut-downs of $t_{F,2}$ and $t_{F,3}$ by $Q_{F,A}^e$. For $p,q \in A, p \neq q$, we have \[\begin{array}{lcl} e_{\sigma_AS}v_pv_q^*e_{\sigma_AS} &=& v_{\sigma_A}v_{p_A}^*v_{q_A}v_{\sigma_A}^* \\ &=& v_{\sigma_A}v_{p_A}^*e_{p_AS}e_{q_AS}v_{q_A}v_{\sigma_A}^*\\[2mm] &=& \begin{cases} 0,&\text{ if } p_AS \cap q_AS = \emptyset\\ v_{\sigma_Ax}v_{\sigma_Ay}^*,&\text{ if } p_AS \cap q_AS \neq \emptyset, \end{cases}\end{array} \] where $x,y \in S$ satisfy $p_Ax = q_Ay$ and $p_AS \cap q_AS = p_AxS$. The choice of the pair $(x,y)$ is unique up to composition from the right by $S^*$. Hence, $v_{\sigma_Ax}v_{\sigma_Ay}^*$ is independent of the choice of $(x,y)$. Therefore, with regard to $t_{F,2}$, we only have to deal with $p,q \in A, p \neq q$ such that $p_AS \cap q_AS \neq \emptyset$. These are exactly the pairs $(p,q)$ in $A_1$, so \eqref{def:TF2Q1--D} follows. If $v_pv_q^*$ is a term in $t_{F,3}$, then $p_AS=q_AS$ means that $p_A\mathcal{R}q_A$, where $\mathcal{R}$ is the right Green relation. Thus there exists $x\in S^*$ such that $p_A=q_Ax$, and \eqref{def:TF3Q1--D} follows. \end{proof} \begin{lemma}\label{lem: second projection--D} If $S$ is a right LCM semigroup, then there are finite subsets $A,F_1$ of $S$ with $A \subset F \subset F_1$ and $Q_{F_1,A}^e \neq 0$ such that \begin{align*} Q_{F_1,A}^e t_F Q_{F_1,A}^e &= Q_{F_1,A}^e(t_{F,\mathcal{D}} + t_{F,3})Q_{F_1,A}^e \text{ and }\\ \Vert Q_{F_1,A}^e t_{F,\mathcal{D}} Q_{F_1,A}^e \Vert &= \|t_{F,\mathcal{D}}\|. \end{align*} If $t_{F, \mathcal{D}}$ is positive, then we may take $Q_{F_1,A}^e t_{F,\mathcal{D}} Q_{F_1,A}^e=\|t_{F,\mathcal{D}}\| Q_{F_1,A}^e$. \end{lemma} \begin{proof} We invoke the notation of Lemma~\ref{lem: decomposition of t_F--D}.
For each $(p,q)\in A_1$, let $\alpha_{p,q}\in S$ be given by \[\alpha_{p,q}:= \begin{cases} x & \text{if $x\in S\setminus S^*$,}\\ y & \text{if $x\in S^*$,} \end{cases}\] and set $F_1 := F \cup \{\sigma_A\alpha_{p,q}\mid (p,q) \in A_1 \}$. First of all, let us show that $Q_{F_1,A}^e \neq 0$ holds. Due to $Q_{F,A}^{e} \neq 0$, we know that $\sigma_AS\cap rS$ is a proper and non-empty subset of $\sigma_AS$ for each $r\in F\setminus A$. Choose, for each $r\in F\setminus A$, an element $r'\in S\setminus S^*$ such that $\sigma_AS\cap rS=\sigma_Ar'S$. It follows that $r'S \subsetneqq S$ and $\alpha_{p,q}S \subsetneqq S$ for all $r \in F\setminus A$ and all $(p,q) \in A_1$. If $S$ has an identity, $\mathcal{J}(S)$ is independent by Corollary~\ref{cor: independence for monoids} and hence we get \[\bigcup\limits_{r\in F\setminus A}r'S \cup \bigcup\limits_{(p,q)\in A_1}\alpha_{p,q}S \subsetneqq S\] as both index sets are finite. By taking complements and using the implication \emph{(I)} $\Rightarrow$ \emph{(II)} from Proposition \ref{prop: part (i) independent}, this shows that \[\prod\limits_{r \in F \setminus A}{(1-e_{r'S})}\prod\limits_{(p,q) \in A_1}{(1-e_{\alpha_{p,q}S})} \neq 0.\] In the case where $S^*=\emptyset$, we get $r'\tilde{S} \subsetneqq \tilde{S}$ and $\alpha_{p,q}\tilde{S} \subsetneqq \tilde{S}$ for all $r \in F\setminus A$ and all $(p,q) \in A_1$. By Lemma~\ref{lem:right LCM for unitisation}, $\mathcal{J}(\tilde{S})$ is independent. If we combine this with Proposition~\ref{prop: part (i) independent} and the isomorphism $\tilde{\mathcal{D}} \cong \mathcal{D}$ from Lemma~\ref{lem:lambda-iso-diagonal-general}, we also get \[\prod\limits_{r \in F \setminus A}{(1-e_{r'S})}\prod\limits_{(p,q) \in A_1}{(1-e_{\alpha_{p,q}S})} \neq 0\] in the case $S^*=\emptyset$.
Since $v_{\sigma_A}$ is an isometry and $Q_{F_1,A}^e$ has the form \begin{equation}\label{eq:Q1-expanded--D} Q_{F_1,A}^e = v_{\sigma_A}\Big(\prod\limits_{r\in F\setminus A}(1-e_{r'S})\prod\limits_{(p,q)\in A_1}(1-e_{\alpha_{p,q}S})\Big)v_{\sigma_A}^*, \end{equation} it follows that $Q_{F_1,A}^e \neq 0$. Then $Q_{F_1,A}^e$ is a non-trivial subprojection of $Q_{F,A}^e$, so \[\Vert Q_{F_1,A}^e t_{F,\mathcal{D}} Q_{F_1,A}^e \Vert= \|t_{F,\mathcal{D}}\|.\] If $t_{F,\mathcal{D}}$ is positive, then we have $Q_{F_1,A}^e t_{F,\mathcal{D}} Q_{F_1,A}^e=\|t_{F,\mathcal{D}}\|Q_{F_1,A}^e$. Note that Lemma~\ref{lem: basic projection--D} implies $Q_{F_1,A}^e t_{F,1} Q_{F_1,A}^e = 0$, and \eqref{def:TF2Q1--D} gives \[Q_{F_1,A}^e t_{F,2} Q_{F_1,A}^e = Q_{F_1{\setminus}A,\emptyset}^e \sum_{(p,q) \in A_1} \lambda_{p,q}v_{\sigma_Ax}v_{\sigma_Ay}^* Q_{F_1{\setminus}A,\emptyset}^e. \] Now suppose $(p,q)\in A_1$ and $\alpha_{p,q}=x$. Then $Q_{F_1{\setminus}A,\emptyset}^ev_{\sigma_Ax}v_{\sigma_Ay}^*Q_{F_1{\setminus}A,\emptyset}^e$ contains a factor $(1-e_{\sigma_AxS})v_{\sigma_Ax}v_{\sigma_Ay}^* = 0$ and hence $Q_{F_1,A}^ev_pv_q^*Q_{F_1,A}^e = 0$. A similar argument gives $Q_{F_1,A}^ev_pv_q^*Q_{F_1,A}^e = 0$ for $(p,q)\in A_1$ and $\alpha_{p,q}=y$. Therefore, we have verified $Q_{F_1,A}^e t_{F,2} Q_{F_1,A}^e = 0$, or in other words \[Q_{F_1,A}^e t_F Q_{F_1,A}^e = Q_{F_1,A}^e(t_{F,\mathcal{D}} + t_{F,3})Q_{F_1,A}^e.\] \end{proof} \begin{lemma}\label{lem: projection for trivial S*--D} Let $S$ be a \emph{cancellative} right LCM semigroup with $S^*=\emptyset$.
Then there are finite subsets $A,F_1$ of $S$ such that $A \subset F \subset F_1$, $Q_{F_1,A}^e \neq 0$ and \[\Vert Q_{F_1,A}^e t_F Q_{F_1,A}^e \Vert = \|t_{F,\mathcal{D}}\|.\] If $t_F$ is positive, then we may take $Q_{F_1,A}^e t_F Q_{F_1,A}^e=\|t_{F,\mathcal{D}}\|Q_{F_1,A}^e.$ \end{lemma} \begin{proof} It suffices to note that the sum $t_{F, 3}$ is empty, by our remark prior to Lemma~\ref{lem: decomposition of t_F--D}. Then the finite subsets $A,F_1$ of $S$ given by Lemma \ref{lem: second projection--D} satisfy the claim. \end{proof} \section{\texorpdfstring{Uniqueness theorem for right LCM semigroup algebras using $\mathcal{D}$}{Uniqueness theorem for right LCM semigroup algebras using the diagonal}}\label{sec: diagonalsection} \noindent In this section we prove a uniqueness theorem which involves a nonvanishing condition on elements of the diagonal subalgebra $\mathcal{D}\subset C^*(S)$. Our theorem will apply to right LCM semigroups $S$ satisfying additional properties, including that $S$ must be cancellative. One of these conditions, the one we call (D2), is rather technical. Before we state it we introduce two other conditions, which besides being closely related to (D2), are also likely to have more transparent formulations in examples. Indeed, they are often satisfied, while condition (D2) may be harder to obtain for large classes of semigroups. Let $S$ be a right LCM semigroup with $S^* \neq \emptyset$ and consider the action $S^* \curvearrowright\mathcal{J}(S)$ given by left multiplication, that is, $x\cdot X=xX$ for $x\in S^*$ and $X\in \mathcal{J}(S)$. Recall that the action $S^*{\curvearrowright}\mathcal{J}(S)$ is called \emph{effective} if for every $x$ in $S^*\setminus\{1_S\}$ there is $X \in \mathcal{J}(S)$ such that $xX \neq X$. We next introduce three other properties that a semigroup can have. \begin{definition}\label{def: conditions on S for using D} Let $S$ be a right LCM semigroup with $S^*\neq \emptyset$.
We say that the action $S^* \curvearrowright\mathcal{J}(S)$ given by left multiplication is {\em strongly effective} if for all $x \in S^*\setminus \{1_S\}$ and $p \in S$, there exists $q \in pS$ such that $xqS \neq qS$. \end{definition} Consider further the following two conditions that $S$ can satisfy: \begin{enumerate} \item[(D1)] For all $x \in S^*$ and $X \in \mathcal{J}(S)$, we have $xX \cap X \neq \emptyset\Longrightarrow xX = X$. \item[(D2)] If $s_0 \in S, s_1 \in s_0S$ and $F \subset S$ is a finite subset so that $s_1S \cap \biggl(S{\setminus}\bigcup\limits_{q \in F} qS\biggr) \neq \emptyset,$ then, for every $x \in S^*{\setminus}\{1_S\}$, there is $s_2 \in s_1S$ satisfying \[s_2S \cap \biggl(S{\setminus}\bigcup\limits_{q \in F} qS\biggr) \neq \emptyset \text{ and } s_0^{-1}s_2S \cap xs_0^{-1}s_2S = \emptyset.\] \end{enumerate} \begin{remark}\label{rmk:consequence-strongly-effective} Suppose that the action $S^*{\curvearrowright}\mathcal{J}(S)$ is strongly effective and (D1) is satisfied. It is immediate to see that for every $x \in S^*\setminus\{1_S\}$ and $p\in S$ there exists $q\in pS$ such that $xqS\cap qS=\emptyset$. In fact, more is true. Let $s_0\in S$, $s_1\in s_0S$, and $x\in S^*\setminus\{1_S\}$. Write $s_1=s_0r$ for some $r\in S$. By the previous observation applied to $x$ and $r\in S$, there is $r'\in rS$ such that $xr'S\cap r'S=\emptyset$. If we now let $s_2=s_0r'\in s_1 S$, we have established condition (D2) in case $F$ is the empty set. Conversely, if condition (D2) is satisfied, then by applying it with $F$ equal to the empty set and $s_0=1_S$, it follows that $S^*{\curvearrowright}\mathcal{J}(S)$ is strongly effective. \end{remark} \noindent For convenience, we will denote the elements of $F$ by $q_1,\dots,q_{|F|}$ whenever $F \neq \emptyset$. \begin{thm}\label{thm: using the diagonal} Let $S$ be a cancellative right LCM semigroup such that $\Phi_{\mathcal{D}}:C^*(S)\to \mathcal{D}$ is a faithful conditional expectation.
Let $(V_p)_{p\in S}$ and $(E_{pS})_{p\in S}$ be families of isometries and projections in a $C^*$-algebra $B$ satisfying (L1)--(L4). Let $\pi:=\pi_{V,E}$ be the associated $*$-homomorphism from $C^*(S)$ to $B$. Assume that one of the following conditions holds: \begin{enumerate}[(1)] \item $|S^*| \leq 1$. \item $|S^*|>1$ and $S$ satisfies condition (D2). \end{enumerate} Then $\pi:C^*(S)\to B$ is injective if and only if \begin{equation}\label{eq: key to faithfulness independent version} \prod_{p\in F}(1-E_{pS}) \not= 0 \text{ for every finite }F\subset S\setminus S^*. \end{equation} \end{thm} \begin{remark} \textnormal{(a)} We observe that, for a quasi-lattice ordered pair $(G,S)$ in the sense of \cite{Ni}, the semigroup $S$ is right LCM with $S^*=\{1_S\}$. Thus part (1) of the theorem recovers \cite[Theorem 3.7]{LaRa}. \textnormal{(b)} Note that Theorem~\ref{thm: using the diagonal} does not apply to the case where $S$ is a non-trivial group ($S = \{1_S\}$ amounts to $C^*(S) \cong \mathbb{C}$). The reason is that $S^* = S$ directs us to part (2) of Theorem~\ref{thm: using the diagonal}, and (D2) fails in the group case for $F = \emptyset$: indeed, there exists $x \in S^*\setminus\{1_S\}$, but for every $p \in S$ we get $xpS \cap pS = S \neq \emptyset$. \textnormal{(c)} The hypotheses of part (1) of the theorem are satisfied in the case of the semigroup from Example~\ref{ex:rightLCM no units} because $\Phi_\mathcal{D}$ is a faithful expectation; to see this, note that $\mathbb{N}\setminus \{1\}$ embeds in $\mathbb{Q}_+^*\setminus\{1\}$, hence in $\mathbb{Q}_+^*$, and the latter admits a dual action on $C^*(S)$ by \cite[Remark 3.7]{LaRa}. Semigroups satisfying condition (D2) will be described in Examples~\ref{ex: GxP sat D2 but not D3 I} and \ref{ex: GxP sat D2 but not D3 II}. \end{remark} \noindent The proof of this theorem requires some preparation.
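Strong effectiveness and the ideal conditions above can be probed by brute force in concrete semigroups. The following sketch is purely illustrative (the choice of semigroup, the search bound, and the helper names are ours, not from the text): it models $S=\mathbb{Z}\rtimes_\theta\mathbb{N}$ with $\theta_n(g)=2^ng$, in which the units are the pairs $(k,0)$ and, by the description of principal right ideals in Lemma~\ref{lem: GxP disjoint ideals} of section~\ref{sec: examples}, $(g,n)S=(h,n)S$ holds exactly when $g\equiv h \pmod{2^n}$.

```python
# Illustration only: brute-force check of strong effectiveness for the
# semigroup S = Z ⋊_θ N with θ_n(g) = 2^n g.  Elements are pairs (g, n);
# multiplication follows the semidirect product rule (g,n)(h,m) = (g + 2^n h, n+m).
# Principal right ideals with the same N-component agree iff the Z-components
# are congruent mod 2^n; units are the pairs (k, 0).

def mul(a, b):
    (g, n), (h, m) = a, b
    return (g + 2**n * h, n + m)

def same_ideal(a, b):
    # (g,n)S = (h,m)S forces n = m and g ≡ h (mod 2^n)
    return a[1] == b[1] and (a[0] - b[0]) % 2**a[1] == 0

def witness(x, p, bound=10):
    # search for q ∈ pS with x·(qS) ≠ qS, as strong effectiveness demands
    for j in range(bound):
        for h in range(-bound, bound):
            q = mul(p, (h, j))
            if not same_ideal(mul(x, q), q):
                return q
    return None

# every tested non-trivial unit admits a witness over every tested p:
# deep enough in pS, the modulus 2^n outgrows |k|
for k in [-3, -1, 1, 2, 5]:
    for p in [(0, 0), (1, 1), (-4, 2)]:
        assert witness((k, 0), p) is not None
print("strongly effective on the tested range")
```

The point of the search is that $q$ must be taken deep inside $pS$: for a unit $(k,0)$ the ideals $x\cdot qS$ and $qS$ only separate once $2^n>|k|$.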
Note that by Proposition~\ref{prop: part (i) independent}, condition \eqref{eq: key to faithfulness independent version} is equivalent to injectivity of $\pi$ on $\mathcal{D}$. Relying on this equivalence, the key step in proving Theorem~\ref{thm: using the diagonal} is the following intermediate result: \begin{prop}\label{prop:extend-to-contraction-using-D} Let $S$ be a cancellative right LCM semigroup, and let $(V_p)_{p\in S}$ and $(E_{pS})_{p\in S}$ be families of isometries and projections in a $C^*$-algebra $B$ satisfying (L1)--(L4). Let $\pi$ be the associated $*$-homomorphism from $C^*(S)$ to $B$. Assume that one of the following conditions holds: \begin{enumerate}[(1)] \item $|S^*| \leq 1$. \item $|S^*|>1$ and $S$ satisfies condition (D2). \end{enumerate} If $\pi$ is injective on $\mathcal{D}$, then the map \[\sum_{p,q\in F}\lambda_{p,q}V_pV_q^*\mapsto \sum_{p\in F}\lambda_{p,p}V_pV_p^*,\] where $F \subset S$ is finite and $\lambda_{p, q}\in \mathbb{C}$, is contractive, and hence extends to a contraction $\Phi$ of $\pi(C^*(S))$ onto $\pi(\mathcal{D})$. \end{prop} \noindent One consequence of this proposition is that when $\pi$ is the identity homomorphism, we obtain a contractive map from $C^*(S)$ onto $\mathcal{D}$ which is nothing but the conditional expectation $\Phi_\mathcal{D}$ from Theorem~\ref{thm: using the diagonal}. Thus it remains to prove Proposition~\ref{prop:extend-to-contraction-using-D}. The established strategy is to express $\Phi$ on finite linear combinations of the spanning family $(v_pv_q^*)_{p,q \in S}$ as a cut-down by a suitable projection that will depend on the given linear combination. Fix therefore finite linear combinations $t_F\in C^*(S)$ and $t_{F, \mathcal{D}}\in \mathcal{D}$ as in \eqref{eq:tF-tFD}. In view of our aim we assume, without loss of generality, that $t_{F,\mathcal{D}} \neq 0$ holds (otherwise $0$ is a suitable projection).
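Before carrying this out, it may help to see the cut-down strategy in a finite-dimensional toy model (our illustration only, not part of the argument): for a matrix $t\in M_n(\mathbb{C})$, compressing by the matrix unit $e_{ii}$ at an index where $|t_{ii}|$ is maximal annihilates all off-diagonal terms and recovers the operator norm of the diagonal part of $t$, in the same way that the projections constructed below recover $\|t_{F,\mathcal{D}}\|$ from $t_F$.

```python
# Purely illustrative finite-dimensional analogue of the cut-down strategy:
# compressing t by E = e_{ii}, where |t_ii| is maximal, leaves a rank-one
# matrix whose norm equals the operator norm of the diagonal part of t.

n = 4
t = [[complex(i - j, i + j) for j in range(n)] for i in range(n)]  # any test matrix

def compress(t, i):
    # E t E for E = e_{ii}: keeps only the single entry t[i][i]
    return [[t[r][c] if r == c == i else 0 for c in range(n)] for r in range(n)]

diag_norm = max(abs(t[i][i]) for i in range(n))     # operator norm of the diagonal part
i0 = max(range(n), key=lambda i: abs(t[i][i]))      # index attaining that norm
cut = compress(t, i0)

# the compression is rank one, so its operator norm is |t[i0][i0]| = diag_norm
assert abs(cut[i0][i0]) == diag_norm
assert sum(1 for r in range(n) for c in range(n) if cut[r][c] != 0) <= 1
```

The analytic work in the C*-algebraic setting lies in producing such a projection inside $\mathcal{D}$ from the combinatorics of the right ideals $pS$; the linear algebra above is only the guiding picture.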
Most of the preparation needed to construct $\Phi$ was done in section~\ref{subsection-LR}. For case (1), Lemma \ref{lem: projection for trivial S*--D} will suffice. Condition (D2) is relevant when there are non-trivial elements in $S^*$. These units will appear in the sum from \eqref{def:TF3Q1--D} due to right cancellation in $S$: indeed, for $p,q \in A, p \neq q$ satisfying $p_AS = q_AS$ it follows that $pp_A = qp_Ax$ for some $x\in S^*$. By right cancellation, necessarily $x \neq 1_S$. Thus there are $n \in \mathbb{N}$, $x_1,\dots,x_n \in S^*{\setminus}\{1_S\}$ and $\lambda_1, \dots, \lambda_n \in \mathbb{C}$ such that \begin{equation} \begin{array}{lcl} e_{\sigma_AS}t_{F,3}e_{\sigma_AS} &=& \sum\limits_{i=1}^n \lambda_{i}v_{\sigma_A}v_{x_i}v_{\sigma_A}^*.\label{def:TF3Q1mod--D} \end{array} \end{equation} \begin{lemma}\label{lem: projection for non-trivial units--D} Let $S$ be a cancellative right LCM semigroup such that $|S^*|>1$ and $S$ satisfies condition (D2). Let $A, F_1$ be as in Lemma~\ref{lem: second projection--D}. Then there exists $p_F\in \sigma_AS$ such that $e_F:=e_{p_FS}$ satisfies \begin{enumerate} \item[(i)] $e_FQ_{F_1{\setminus}A,\emptyset}^e \neq 0$. \item[(ii)] $\Vert e_{F}Q_{F_1{\setminus}A,\emptyset}^e t_{F} e_{F}Q_{F_1{\setminus}A,\emptyset}^e\Vert = \|t_{F,\mathcal{D}}\|.$ \item[(iii)] If $t_{F,\mathcal{D}}$ is positive, then $e_{F}Q_{F_1{\setminus}A,\emptyset}^e t_{F} e_{F}Q_{F_1{\setminus}A,\emptyset}^e=\|t_{F,\mathcal{D}}\|e_{F}Q_{F_1{\setminus}A,\emptyset}^e.$ \end{enumerate} \end{lemma} \begin{proof} Since $\mathcal{J}(S)$ is independent, $Q_{F_1,A}^e \neq 0$ is equivalent to $\sigma_AS \cap \biggl(S{\setminus}\bigcup\limits_{q \in F_1\setminus A}{qS}\biggr) \neq \emptyset$.
Applying (D2) to $\sigma_A$ in place of both $s_0$ and $s_1$, the unit $x_1\in S^*{\setminus}\{1_S\}$ and the finite set $F_1\setminus A$ gives an element $s_2 \in \sigma_AS$ such that $x_1\sigma_A^{-1}s_2S\cap \sigma_A^{-1}s_2S=\emptyset$ and $s_2S$ has non-empty intersection with $S{\setminus}\bigcup\limits_{q \in F_1{\setminus}A} qS$. Next, we apply (D2) to $\sigma_A$ as $s_0$, $s_2$ in place of $s_1$, the unit $x_2$ and $F_1\setminus A$, resulting in an element $s_3 \in s_2S$ such that $x_2\sigma_A^{-1}s_3S \cap \sigma_A^{-1}s_3S = \emptyset$ and $s_3S$ has non-empty intersection with $S{\setminus}\bigcup\limits_{q \in F_1{\setminus}A} qS$. Note that we have \[x_1\sigma_A^{-1}s_3S \cap \sigma_A^{-1}s_3S \stackrel{s_3 \in s_2S}{\subset} x_1\sigma_A^{-1}s_2S \cap \sigma_A^{-1}s_2S = \emptyset.\] Thus, proceeding inductively, we get $s_n \in \sigma_AS$ such that $s_nS$ has non-empty intersection with $S{\setminus}\bigcup\limits_{q \in F_1{\setminus}A} qS$ and $x_i\sigma_A^{-1}s_nS\cap \sigma_A^{-1}s_nS=\emptyset$ for all $i = 1,\dots,n$. This translates to $Q_{F_1,A}^e \geq e_{s_nS}Q_{F_1{\setminus}A,\emptyset}^e \neq 0$ and $e_{s_nS}t_{F,3}e_{s_nS} = 0$. Let $p_F := s_n$. Since \[ e_{F}Q_{F_1{\setminus}A,\emptyset}^e t_{F} e_{F}Q_{F_1{\setminus}A,\emptyset}^e=e_{F}Q_{F_1{\setminus}A,\emptyset}^e t_{F, \mathcal{D}} e_{F}Q_{F_1{\setminus}A,\emptyset}^e, \] an application of Lemma~\ref{lem: second projection--D} shows that $p_F$ satisfies (i)--(iii). \end{proof} \begin{proof}[Proof of Proposition \ref{prop:extend-to-contraction-using-D}] For any finite linear combination $T_F \in \operatorname{span}\{~V_pV_q^*~|~p,q \in S~\}$, consider the corresponding element $t_F \in C^*(S)$.
For case (1), we use Lemma~\ref{lem: projection for trivial S*--D} to obtain a non-zero projection $Q_{F_1,A}^e \in \mathcal{D}$ that satisfies \[\Vert Q_{F_1,A}^e t_F Q_{F_1,A}^e\Vert = \|\Phi(t_F)\|.\] Since $\pi|_{\mathcal{D}}$ is injective and $(V_p)_{p\in S},(E_{pS})_{p\in S}$ are families of isometries and projections, respectively, satisfying (L1)--(L4), we get \[\Vert Q_{F_1,A}^E T_F Q_{F_1,A}^E \Vert= \|\Phi(T_F)\|.\] As $Q_{F_1,A}^E \neq 0$ is a projection, we get \[\|\Phi(T_F)\| = \|Q_{F_1,A}^E T_F Q_{F_1,A}^E\| \leq \|T_F\|,\] so $\Phi$ is contractive on a dense subset of $\pi(C^*(S))$. By standard arguments, it extends to a contraction from $\pi(C^*(S))$ to $\pi(\mathcal{D})$. For case (2), run the same argument with $e_{F}Q_{F_1,A}^e$ given by Lemma~\ref{lem: projection for non-trivial units--D} as the suitable replacement for $Q_{F_1,A}^e$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: using the diagonal}] Since $S$ is a right LCM semigroup, Proposition~\ref{prop: part (i) independent} implies that condition~\eqref{eq: key to faithfulness independent version} is equivalent to injectivity of $\pi$ on $\mathcal{D}$. Obviously, $\pi|_{\mathcal{D}}$ is injective whenever $\pi$ is injective, showing the forward implication in the theorem. To prove the reverse implication, we apply Proposition~\ref{prop:extend-to-contraction-using-D} to obtain the following commutative diagram.
\begin{equation}\label{com:diagram DD} \begin{tikzpicture}[>=stealth, xscale=2, yscale=1.5] \node (c) at (0,-2) {$\mathcal{D}$}; \node (b) at (2,0) {$\pi(C^*(S))$}; \node (a) at (0,0) {$C^*(S)$} edge[->](b) edge[->] (c); \node (d) at (2,-2) {$\pi|_{\mathcal{D}}(\mathcal{D})$} edge[<-] (b) edge[<-] (c); \node at (1,0.2) {$\pi$}; \node at (-0.2,-1) {$\Phi_\mathcal{D}$}; \node at (2.2,-1) {$\Phi$}; \node at (1,-2.2) {$\pi|_{\mathcal{D}}$}; \end{tikzpicture} \end{equation} Now, if $a \in C^*(S)_+, a \neq 0$, then $\Phi{\circ}\pi(a) = \pi|_\mathcal{D}{\circ}\Phi_\mathcal{D}(a) \neq 0$ as $\Phi_\mathcal{D}$ is faithful and $\pi|_\mathcal{D}$ is injective. Thus, we have $\pi(a) \neq 0$. Since injectivity of $*$-homomorphisms can be detected on positive elements, $\pi$ is seen to be injective. \end{proof} \section{\texorpdfstring{Purely infinite simple $C^*(S)$ arising from right LCM semigroups}{Purely infinite simple C*(S) arising from right LCM semigroups}}\label{section:pisimple} \noindent Suppose that $S$ is a right LCM semigroup. Consider the following refinement of condition (D2): \begin{enumerate} \item[(D3)] If $s\in S$ and $F$ is a finite subset of $S$ with $sS \cap \big(S\setminus \bigcup\limits_{q \in F} {qS}\big) \neq \emptyset$, then there is $s' \in sS$ such that $s'S \cap qS = \emptyset$ for all $q \in F$. \end{enumerate} Whenever $F \neq \emptyset$, we will denote its elements by $q_1,\dots,q_n$. In section~\ref{sec: examples} we will see examples of semigroups satisfying conditions (D3) and (D2). To clarify the relationship between (D2) and (D3), we make the following observation: \begin{lemma}\label{lem:str eff+D1+D3 gives D2} Let $S$ be a right LCM semigroup with $S^*\neq \emptyset$. If the action $S^*\curvearrowright\mathcal{J}(S)$ is strongly effective and $S$ satisfies (D1) and (D3), then (D2) holds.
\end{lemma} \begin{proof} We saw in Remark~\ref{rmk:consequence-strongly-effective} that condition (D2) with $F=\emptyset$ is satisfied when $S^*\curvearrowright\mathcal{J}(S)$ is strongly effective and $S$ satisfies (D1). Thus it remains to prove (D2) in the case $F \neq \emptyset$. Let therefore $s_0,q_1,\dots,q_n \in S$ and $s_1 \in s_0S$ with $s_1S \cap \biggl(S{\setminus}\bigcup\limits_{i=1}^n q_iS\biggr) \neq \emptyset$. Applying (D3) yields an element $s \in s_1S \subset s_0S$ such that $sS \cap q_iS = \emptyset$ for $i=1,\dots, n$. Note that every $s_2 \in sS$ inherits this property, and therefore the first condition in (D2) is satisfied for such elements. Let $x \in S^*{\setminus}\{1_S\}$ and write $s=s_0r$ for some $r\in S$. By strong effectiveness and (D1) applied to $x$ and $r$ we get $s' \in rS$ with the property that $s'S\cap xs'S=\emptyset$. Now $s_2=s_0s' \in sS \subset s_1S$ satisfies $s_0^{-1}s_2S\cap xs_0^{-1}s_2S=s'S\cap xs'S=\emptyset$, proving (D2). \end{proof} \noindent We note that (D1), (D3) and strong effectiveness are properties of a semigroup that can be more readily verified than (D2). The latter condition is quite close to the operator algebraic application it is designed for. Therefore, in Theorem~\ref{thm: simple and p.i.} we provide an independent proof for the last two sets of assumptions, even though (3) may be deduced from the proof in case (2). First we need a lemma. \begin{lemma}\label{lem: projection for simplicity (D1)+str.eff+(D3).--D} Let $S$ be a cancellative right LCM semigroup such that $|S^*|>1$, the action $S^*\curvearrowright\mathcal{J}(S)$ is strongly effective, and (D1), (D3) are satisfied. Suppose $t_F\in C^*(S)$ and $t_{F,\mathcal{D}}\in \mathcal{D}$ are linear combinations as in \eqref{eq:tF-tFD}, and assume that $t_F$ is a positive element in $C^*(S)$.
Then $t_{F,\mathcal{D}}$ is positive and there is $p_F \in S$ such that $e_F:=e_{p_FS}$ satisfies \[e_{F} t_F e_{F} = \|t_{F,\mathcal{D}}\|e_F.\] \end{lemma} \begin{proof} As $t_{F,\mathcal{D}}$ is the image of $t_F$ under the natural conditional expectation $C^*(S) \to \mathcal{D}$, it is also positive. According to Lemma~\ref{lem: second projection--D}, there are finite subsets $A,F_1$ of $S$ with $A \subset F \subset F_1$ and $Q_{F_1,A}^e \neq 0$ such that \[ Q_{F_1,A}^e t_F Q_{F_1,A}^e = Q_{F_1,A}^e(t_{F,\mathcal{D}} + t_{F,3})Q_{F_1,A}^e \text{ and } Q_{F_1,A}^e t_{F,\mathcal{D}} Q_{F_1,A}^e = \|t_{F,\mathcal{D}}\|Q_{F_1,A}^e. \] Since $|S^*|>1$, the collection $\mathcal{J}(S)$ is independent by Corollary~\ref{cor: independence for monoids}. According to Proposition~\ref{prop: part (i) independent}, we have $Q_{F_1,A}^e \neq 0$ if and only if $\sigma_AS \cap \biggl(S{\setminus}\bigcup\limits_{q \in F_1{\setminus}A} qS\biggr) \neq \emptyset$. By (D3), there is $p_0 \in \sigma_AS$ such that $p_0S \cap qS = \emptyset$ for all $q \in F_1{\setminus}A$. Hence, we have \[ e_{p_0S} t_F e_{p_0S} = e_{p_0S}(t_{F,\mathcal{D}} + t_{F,3})e_{p_0S} \text{ and } e_{p_0S} t_{F,\mathcal{D}} e_{p_0S} = \|t_{F,\mathcal{D}}\|e_{p_0S}. \] Let $x_1,\dots,x_n \in S^*{\setminus}\{1_S\}$ be the invertible elements that appear in \eqref{def:TF3Q1mod--D} and let $p_0'$ denote the element satisfying $p_0 = \sigma_Ap_0'$. Applying strong effectiveness to $p_0'$ and $x_1$ yields an element $p_1' \in p_0'S$ satisfying $x_1p_1'S \neq p_1'S$. By (D1), this amounts to $x_1p_1'S \cap p_1'S = \emptyset$. 
Proceeding inductively, where $p_i \in p_{i-1}S$ is given as $p_i = \sigma_Ap_i'$ with $p_i' \in p_{i-1}'S$ satisfying $x_ip_i'S \cap p_i'S = \emptyset$, we obtain $p_n \in \sigma_AS$ such that \[\begin{array}{lclr} e_{p_nS}v_{\sigma_A}v_{x_i}v_{\sigma_A}^*e_{p_nS} &=& v_{p_n}v_{p_n'}^*v_{x_i}v_{p_n'}v_{p_n}^*\\ &=& v_{p_n}v_{p_n'}^*e_{p_i'S}v_{x_i}e_{p_i'S}v_{p_n'}v_{p_n}^* &(p_n'S \subset p_i'S)\\ &=& v_{p_n}v_{p_n'}^*e_{p_i'S \cap x_ip_i'S}v_{x_i}v_{p_n'}v_{p_n}^*\\ &=&0 \end{array}\] for all $i = 1,\dots,n$. Thus, $p_F:= p_n$ satisfies the claim of the lemma. \end{proof} \begin{thm}\label{thm: simple and p.i.} Let $S$ be a cancellative right LCM semigroup such that $\Phi_{\mathcal{D}}:C^*(S)\to \mathcal{D}$ is a faithful conditional expectation. Assume that (D3) and one of the following conditions hold: \begin{enumerate}[(1)] \item $|S^*| \leq 1$. \item $|S^*|>1$ and $S$ satisfies condition (D2). \item $|S^*|>1$, $S$ satisfies condition (D1), and the action $S^*\curvearrowright\mathcal{J}(S)$ is strongly effective. \end{enumerate} Then $C^*(S)$ is purely infinite and simple. \end{thm} \begin{proof} Recall from Lemma~\ref{lem: spanning family for algebra} that the linear span of the elements $v_pv_q^*$ is dense in $C^*(S)$. Every element from this linear span has the form $t_F = \sum\limits_{p,q \in F}\lambda_{p,q} v_pv_q^*$ for some finite $F \subset S$ and suitable $\lambda_{p,q} \in \mathbb{C}$. Moreover, $\Phi_\mathcal{D}(t_F) = t_{F,\mathcal{D}} = \sum\limits_{p \in F}\lambda_{p,p} e_{pS}$. Let $a \in C^*(S)$ be positive and non-zero, and let $\varepsilon > 0$. Choose a positive linear combination $t_F$ that approximates $a$ to within $\varepsilon$. If $\varepsilon$ is sufficiently small, we have $t_F \neq 0$, which we will assume from now on.
For the three different cases in the hypothesis of the theorem, we will use different methods to obtain a suitable small projection $e'_F:=e_{q_FS}$ that annihilates the off-diagonal terms of $t_F$ while picking up the norm of the diagonal part: that is, \[e'_{F} t_F e'_{F} = \|t_{F,\mathcal{D}}\|e'_{F} = \|\Phi_\mathcal{D}(t_{F})\|e'_{F}.\] For case (1), we use Lemma~\ref{lem: projection for trivial S*--D}, and for case (2) Lemma~\ref{lem: projection for non-trivial units--D}, to get a finite subset $F_2 = F_1\setminus A$ of $S$ and an element $p_F\in S$ such that $e_F=e_{p_FS}$ satisfies \begin{enumerate} \item[(i)] $e_{F}Q_{F_2,\emptyset}^e \neq 0$, and \item[(ii)] $e_{F}Q_{F_2,\emptyset}^e t_{F} e_{F}Q_{F_2,\emptyset}^e = \|t_{F,\mathcal{D}}\|e_{F}Q_{F_2,\emptyset}^e.$ \end{enumerate} Since $S$ is right LCM, (i) translates to $p_FS \cap \biggl(S \setminus \bigcup\limits_{q \in F_2} qS\biggr) \neq \emptyset$ according to Lemma~\ref{lem:lambda-iso-diagonal-general}. So we can apply (D3) to get an element $q_F \in p_FS$ such that $q_FS \cap qS = \emptyset$ for all $q \in F_2$. By (L4) this gives $e_{q_FS} \leq e_FQ_{F_2,\emptyset}^e$. Now $e'_F:=e_{q_FS}$ satisfies $e'_Ft_{F} e'_{F} = \|t_{F,\mathcal{D}}\| e'_F$ by (ii). For case (3), the existence of such a projection $e'_F$ follows directly from Lemma~\ref{lem: projection for simplicity (D1)+str.eff+(D3).--D}. We have $\|\Phi_\mathcal{D}(t_F)\| > 0$ since $\Phi_\mathcal{D}$ is faithful. Thus, $e'_{F}t_Fe'_{F}=\|\Phi_\mathcal{D}(t_F)\|e'_{F}$ is invertible in the corner $e'_{F}C^*(S)e'_{F}$. If $\|a-t_F\|$ is sufficiently small, this implies that $e'_{F}ae'_{F}$ is positive and invertible in $e'_{F}C^*(S)e'_{F}$ as well, because $\|\Phi_\mathcal{D}(t_F)\|~\stackrel{\varepsilon \searrow 0}{\longrightarrow}~\|\Phi_\mathcal{D}(a)\|~>~0$.
Hence, if we denote its positive inverse by $b$, we get \[\left(b^{\frac{1}{2}}v_{q_F}\right)^{*}e'_{F}ae'_{F}\left(b^{\frac{1}{2}}v_{q_F}\right) = v_{q_F}^*e'_{F}v_{q_F} = 1.\] This implies that $C^*(S)$ is purely infinite and simple. \end{proof} \section{\texorpdfstring{Injectivity of the left regular representation of $C^*(S)$}{Injectivity of the left regular representation of C*(S)}}\label{section:injregular} \noindent A major question of interest in \cite{Li1} and \cite{Li2} is to determine conditions under which the left regular representation $\lambda$ is an isomorphism $C^*(S) \cong C^*_r(S)$. In the context of right LCM semigroups, we have identified some classes of semigroups for which this isomorphism holds. \begin{prop}\label{cor: isom full and reduced algebra - simple case} Assume the hypotheses of Theorem~\ref{thm: simple and p.i.}. Then the left regular representation $\lambda:C^*(S)\to C_r^*(S)$ is an isomorphism. \end{prop} \begin{proof} The conclusion follows because in this case $C^*(S)$ is simple. \end{proof} \begin{thm}\label{cor:easy-amenability} Assume that $S$ is a cancellative right LCM semigroup such that the conditional expectation $\Phi_\mathcal{D}$ is faithful. Then the left regular representation $\lambda$ is an isomorphism from $C^*(S)$ onto $C^*_r(S)$. \end{thm} \begin{proof} By Lemma~\ref{lem:lambda-iso-diagonal-general}, $\lambda$ restricts to an isomorphism $\mathcal{D} \cong \mathcal{D}_r$. Then for $a \in C^*(S)_+, a \neq 0$, we have $\lambda|_\mathcal{D} \circ \Phi_\mathcal{D}(a) \neq 0$. From $\lambda|_\mathcal{D} \circ \Phi_\mathcal{D} = \Phi_{\mathcal{D},r} \circ \lambda$ and faithfulness of $\Phi_{\mathcal{D},r}$ established in Proposition~\ref{prop: faithful cond exp for the red algebra} it follows that $\lambda(a) \neq 0$. Hence, $\lambda$ is an isomorphism.
\end{proof} \begin{example} By Theorem~\ref{cor:easy-amenability} and Proposition~\ref{prop:left-inverse} it follows that $\lambda$ is an isomorphism in the case of $S=G\rtimes_\theta P$ that is right LCM, right reversible, and satisfies that $S^{-1}S$ is an amenable group. \end{example} \begin{remark} There is an alternative approach to injectivity of $\lambda$ for certain subsemigroups of amenable groups, see \cite{Li2}. We refer to \cite[Section 4]{Li2} for the definition of the Toeplitz condition. Namely, if $S$ is a left cancellative semigroup satisfying the conditions \begin{enumerate}[i)] \item $\mathcal{J}(S)$ is independent, \item $S$ embeds into an amenable group $H$ such that $S$ generates $H$ and $S \subset H$ satisfies the Toeplitz condition, \end{enumerate} then $\lambda: C^*(S)\longrightarrow C^*_r(S)$ is an isomorphism, cf. the equivalence of (iii) and (v) in \cite[Theorem 6.1]{Li2} applied to $A = \mathbb{C}$ (where (v) is valid because $H$ is amenable). The proof of \cite[Theorem 6.1]{Li2} depends on a relatively involved machinery of various crossed product constructions. The conclusion in Theorem~\ref{cor:easy-amenability} for right LCM semigroups is obtained through an analysis solely of the semigroup $C^*$-algebra $C^*(S)$. \end{remark} \section{\texorpdfstring{A uniqueness result using $\mathcal{C}_I$}{A uniqueness result using the inner core}}\label{section:usingcore} \noindent In this section we consider left cancellative semigroups with identity that satisfy condition (C1). Following an idea from \cite{Mi}, we show that it is possible to construct a conditional expectation from $C^*(S)$ onto $\mathcal{C}_I$ which may be used to reduce the question of injectivity of representations. The proof of the next result is similar to \cite[Proposition 3.5]{CLSV}.
For a discrete group $\Gamma$, we denote by $i_\Gamma$ the canonical homomorphism sending $\gamma$ in $\Gamma$ to the generating unitary $i_\Gamma(\gamma)$ in $C^*(\Gamma)$, and we let $\delta_\Gamma$ be the homomorphism $C^*(\Gamma)\to C^*(\Gamma)\otimes C^*(\Gamma)$ induced by the map $\gamma\mapsto i_\Gamma(\gamma)\otimes i_\Gamma(\gamma)$. \begin{prop}\label{prop:coaction} Let $S$ be a right LCM semigroup with identity and assume that there exists a homomorphism $\sigma:S\to T$ onto a subsemigroup $T$ of a group $\Gamma$ such that $T$ generates $\Gamma$. Then there is a coaction $\delta:C^*(S)\to C^*(S)\otimes C^*(\Gamma)$ such that \[\delta(v_pv_q^*)=v_pv_q^*\otimes i_\Gamma(\sigma(p)\sigma(q)^{-1})\] for all $p,q\in S$. \end{prop} \begin{proof} Since $C^*(S)=\overline{\operatorname{span}}\{v_pv_q^*:p,q\in S\}$, we define a map from $S$ to $C^*(S)\otimes C^*(\Gamma)$ by $s\mapsto v_s\otimes i_\Gamma(\sigma(s))$ for $s\in S$. A routine calculation shows that $\{v_s\otimes i_\Gamma(\sigma(s))\}_{s\in S}$ and $\{v_pv_p^*\otimes i_\Gamma(1_\Gamma)\}_{p\in S}$ satisfy the relations that characterise $C^*(S)$, hence by the universal property we obtain the required homomorphism $\delta$ such that $\delta(v_s)=v_s\otimes i_\Gamma(\sigma(s))$ for all $s\in S$. Since $C^*(S)$ is unital and $\delta(1)=1\otimes i_\Gamma(1_\Gamma)$, the map $\delta$ is nondegenerate. If we let $\varepsilon:C^*(\Gamma)\to \mathbb{C}$ be the homomorphism integrated from $\gamma\mapsto 1$, then the equality $(\operatorname{id}_{C^*(S)}\otimes \varepsilon)\circ\delta= \operatorname{id}_{C^*(S)}$ shows that $\delta$ is an injective map. The coaction identity \[ (\delta\otimes \operatorname{id}_{C^*(\Gamma)})\circ \delta=(\operatorname{id}_{C^*(S)}\otimes \delta_\Gamma)\circ \delta \] is immediately checked on the generators $v_s$ of $C^*(S)$.
\end{proof} \noindent By standard theory of coactions, see for example \cite{Qui}, the spectral subspaces for $\delta$ are defined as $C^*(S)^\delta_{\sigma(s)}=\{a\in C^*(S)\mid \delta(a)=a\otimes i_\Gamma (\sigma(s))\}$. The space \[ C^*(S)^\delta=\{a\in C^*(S)\mid \delta(a)=a\otimes i_\Gamma (1_\Gamma)\} \] is a $C^*$-subalgebra of $C^*(S)$, called the fixed-point algebra. There is always a conditional expectation $\Phi^\delta:C^*(S)\to C^*(S)^\delta$ such that $\Phi^\delta(a)=0$ if $a\in C^*(S)^\delta_{\sigma(s)}$ with $\sigma(s)\neq 1_\Gamma$. Moreover, it is known that $\Phi^\delta$ is faithful on positive elements precisely when the coaction $\delta$ is \emph{normal}: this is for instance the case when $\Gamma$ is amenable. Since $C^*(S)=\overline{\operatorname{span}}\{v_pv_q^*:p,q\in S\}$, we have $C^*(S)^\delta=\overline{\operatorname{span}}\{v_pv_q^*:p,q\in S, \sigma(p)=\sigma(q)\}$ and \begin{equation}\label{eq:cond-exp} \Phi^\delta(v_pv_q^*)=\begin{cases}v_pv_q^*,&\text{ if }\sigma(p)=\sigma(q)\\ 0,&\text{ otherwise}.\end{cases} \end{equation} We will prove in Corollary~\ref{cor:phi-CI} that the above fixed-point algebra $C^*(S)^\delta$ coincides with $\mathcal{C}_I$ when the semigroup $S$ satisfies condition (C1). \begin{remark}\label{rmk:CO-CI} Recall that $\mathcal{C}_I$ was defined as $C^*(\{v_x,e_{pS}~|~p \in S,x \in S^*\})$, where we assume $S^*$ is non-trivial, and that basic properties of this subalgebra were established in Lemma~\ref{lem: properties of the subalgebras}. We claim that if $S^*$ is non-trivial and $S$ satisfies (C1), then $\mathcal{C}_O=\mathcal{C}_I$. To see this, note that Lemma~\ref{lem: properties of the subalgebras} implies that it suffices to show $v_pv_xv_p^*\in\mathcal{C}_I$ for all $p \in S$ and $x \in S^*$. By (C1) there exists $y\in S^*$ such that $px=yp$.
Hence, \[v_pv_xv_p^*=v_ye_{pS}=e_{ypS}v_y\in\mathcal{C}_I.\] \end{remark} \noindent Given a right LCM semigroup $S$ with $S^*\neq \emptyset$ and satisfying (C1), suppose that the monoid $\mathcal{S}$ constructed in Proposition~\ref{prop:congruence} embeds into a group $\Gamma$ such that $\mathcal{S}$ generates $\Gamma$. Proposition~\ref{prop:coaction} applied to the canonical homomorphism $\sigma:S\to \mathcal{S}$, $\sigma(p)=[p]$ for $p\in S$, gives a coaction of $\Gamma$ with associated conditional expectation as described in \eqref{eq:cond-exp}. Note that, in this situation, $\sigma(p)=\sigma(q)$ means precisely that $p=xq$ for some $x\in S^*$. Hence $v_pv_q^*=v_{xq}v_q^*=v_{x}e_{qS}$, which is in $\mathcal{C}_I$. Thus $C^*(S)^\delta\subseteq \mathcal{C}_I$, and since the reverse inclusion is immediate, the two subalgebras of $C^*(S)$ are equal. In case $S$ is not right cancellative, it may happen that $p=xq=x'q$ for different $x,x'$ in $S^*$. However, $v_{x}e_{qS}=v_{xq}v_q^*=v_{x'q}v_q^*=v_{x'}e_{qS}$. We summarise these considerations in the following result. \begin{cor}\label{cor:phi-CI} Let $S$ be a right LCM semigroup such that $S^*\neq \emptyset$ and (C1) holds. Assume that $\mathcal{S}$ embeds into a group $\Gamma$ which is generated by the image of $\mathcal{S}$. Then there is a well-defined conditional expectation $\Phi_{\mathcal{C}_I}:C^*(S)\to \mathcal{C}_I$ such that \begin{equation}\label{def:cond-exp-CI} \Phi_{\mathcal{C}_I}(v_pv_q^*)=\begin{cases}v_xe_{qS},&\text{ if }p=xq\text{ for some }x\in S^*\\ 0,&\text{ if }p\not\sim q.\end{cases} \end{equation} If $\Gamma$ is amenable, then $\Phi_{\mathcal{C}_I}$ is faithful on positive elements. \end{cor} \noindent Our main result about injectivity of representations in terms of their restriction to $\mathcal{C}_I$ is the following theorem.
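The congruence $p\sim q$, meaning $p=xq$ for a unit $x$, can be made concrete in a toy example of our own choosing (not from the text): in $S=\mathbb{Z}\rtimes_\theta\mathbb{N}$ with $\theta_n(g)=2^ng$, the units are the pairs $(k,0)$, so the class of $(g,n)$ consists of all elements with the same $\mathbb{N}$-component, the quotient monoid $\mathcal{S}$ is a copy of $\mathbb{N}$, and its enveloping group $\mathbb{Z}$ is amenable. A minimal sketch, with a finite search window standing in for the infinite unit group:

```python
# Illustration only: the unit congruence p ~ q (p = x·q for a unit x) in the
# semigroup S = Z ⋊_θ N with θ_n(g) = 2^n g, where units are the pairs (k, 0)
# and multiplication is (g,n)(h,m) = (g + 2^n h, n+m).

def mul(a, b):
    (g, n), (h, m) = a, b
    return (g + 2**n * h, n + m)

def equivalent(p, q, unit_bound=50):
    # p ~ q iff p = (k, 0)·q for some unit (k, 0); we search a finite window
    return any(mul((k, 0), q) == p for k in range(-unit_bound, unit_bound))

# the class of (g, n) consists of all (g', n): sigma collapses S onto a copy of N
assert equivalent((3, 2), (-7, 2))
assert not equivalent((3, 2), (3, 1))

# condition (C1) holds here: for p = (g, n) and unit x = (k, 0), px = yp
# with y = (2^n k, 0), again a unit
p, x = (5, 3), (4, 0)
assert mul(p, x) == mul((2 ** p[1] * x[0], 0), p)
```

In this picture the conditional expectation of \eqref{def:cond-exp-CI} keeps exactly the spanning elements whose two legs lie in the same class, matching the fixed-point algebra of the dual coaction of $\mathbb{Z}$.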
\begin{thm}\label{thm:uniqueness theorem based on C_I} Let $S$ be a cancellative right LCM semigroup with identity $1_S$ such that $S$ satisfies (C1) and the semigroup $\mathcal{S}$ constructed in Proposition~\ref{prop:congruence} embeds into a group $\Gamma$ in such a way that $\Gamma$ is generated by $\mathcal{S}$. Assume that the conditional expectation $\Phi_{\mathcal{C}_I}:C^*(S) \longrightarrow \mathcal{C}_I$ constructed in Corollary~\ref{cor:phi-CI} is faithful, and that there is a faithful conditional expectation $\Phi_0$ from $\mathcal{C}_I$ onto $\mathcal{D}$ such that $\Phi_0(e_{qS}v_x)=\delta_{x, 1_S}e_{qS}$ for all $q\in S$ and $x\in S^*$. Then a $*$-homomorphism $\pi:C^*(S) \longrightarrow B$ is injective if and only if $\pi|_{\mathcal{C}_I}$ is injective. \end{thm} \begin{proof} One direction of the theorem is clear, so assume that $\pi\vert_{\mathcal{C}_I}$ is injective. We must prove that $\pi$ is injective. Let $\Phi := \Phi_0 \circ \Phi_{\mathcal{C}_I}$ be the faithful conditional expectation from $C^*(S)$ to $\mathcal{D}$ obtained by composing the two given expectations. The idea of the proof is to construct a contraction $\Phi^\pi:\pi(C^*(S))\to \mathcal{D}$ such that $\Phi^\pi\circ \pi=\Phi$. Then the injectivity of $\pi$ will follow from a standard argument: let $a\in C^*(S)_+$ with $a\neq 0$. From $\Phi^{\pi}(\pi(a))=\Phi(a)$ and the fact that $\Phi$ is faithful on positive elements it follows that $\pi(a)\neq 0$. Let $F \subset S$ be finite and $t_F \in C^*(S)$ a linear combination of $v_pv_q^*, p,q \in F$ with scalars $\lambda_{p,q}$ in $\mathbb{C}$ such that $t_F$ is positive and non-zero. Then $\Phi(t_F)\neq 0$.
We have \begin{align*} \Phi(t_F) &=\sum_{p\sim q}\lambda_{p,q} (\Phi_0\circ \Phi_{\mathcal{C}_I})(v_pv_q^*)\\ &= \sum_{p\sim q, p=xq}\lambda_{p,q}\Phi_0(v_xe_{qS})\\ &=\sum_{\{p,q\in F\mid p=q\}} \lambda_{p,q}e_{qS}, \end{align*} which is $t_{F,\mathcal{D}}$, and is non-zero. By Lemma~\ref{lem: second projection--D}, there are finite subsets $A \subset F \subset F_1 \subset S$ such that $Q_{F_1,A}^et_FQ_{F_1,A}^e \in \mathcal{C}_O$ and $Q_{F_1,A}^et_{F,\mathcal{D}} = \|t_{F,\mathcal{D}}\| Q_{F_1,A}^e \neq 0$. Remark~\ref{rmk:CO-CI} shows that $\mathcal{C}_O = \mathcal{C}_I$, so $Q_{F_1,A}^et_FQ_{F_1,A}^e \in \mathcal{C}_I$. Therefore, we get \[\begin{array}{lcl} \Phi(Q_{F_1,A}^et_FQ_{F_1,A}^e) &=& Q_{F_1,A}^e \Phi(t_F)Q_{F_1,A}^e\\ &=& Q_{F_1,A}^et_{F,\mathcal{D}}Q_{F_1,A}^e \\ &=& \|t_{F,\mathcal{D}}\| Q_{F_1,A}^e \neq 0. \end{array}\] Since $\pi|_{\mathcal{C}_I}$ is injective, it follows that \[\begin{array}{lcl} \|\pi(t_F)\| &\geq& \|\pi(Q_{F_1,A}^e)\pi(t_F)\pi(Q_{F_1,A}^e)\| \\ &=& \|\pi(Q_{F_1,A}^et_FQ_{F_1,A}^e)\| \\ &=& \|Q_{F_1,A}^et_FQ_{F_1,A}^e\| \\ &\geq& \| \Phi(Q_{F_1,A}^et_FQ_{F_1,A}^e)\| \\ &=& \|t_{F,\mathcal{D}}\|. \end{array}\] Thus $\Phi^\pi:\pi(C^*(S))\to \mathcal{D}$ given by $\Phi^\pi(\pi(t_F)):=t_{F, \mathcal{D}}$ is a well-defined contraction with the desired properties. \end{proof} \noindent For the moment the only examples of semigroups $S$ we know for which the hypotheses of Theorem~\ref{thm:uniqueness theorem based on C_I} are satisfied are semidirect products $G\rtimes_\theta P$ covered by Proposition~\ref{prop:left-inverse}. However, we expect that the theorem will apply in situations where $G$ or the enveloping group of $P$ are non-amenable, for example when they are free groups.
The challenge is to generalise the arguments of \cite{LaRa} that show existence of a faithful conditional expectation onto $\mathcal{D}$ from the case $S^*=\{1_S\}$ to the case that there are non-trivial units. \begin{cor}\label{cor: isom full and reduced algebras - easy argument} Assume the notation and hypotheses of Theorem~\ref{thm:uniqueness theorem based on C_I}. Then the left regular representation $\lambda: C^*(S) \longrightarrow C^*_r(S)$ is an isomorphism. \end{cor} \begin{proof} Let $\Phi=\Phi_0\circ \Phi_{\mathcal{C}_I}$ be the faithful conditional expectation from Theorem~\ref{thm:uniqueness theorem based on C_I}. Since $S$ has an identity, $\mathcal{J}(S)$ is independent, and therefore $\lambda$ restricts to an isomorphism $\mathcal{D} \cong \mathcal{D}_r$, \cite{Li1}. Thus the argument of Theorem~\ref{cor:easy-amenability} can be used. \end{proof} \noindent Results with the flavour of a gauge-invariant uniqueness theorem have been proved for many classes of $C^*$-algebras, see \cite{CLSV} and the references therein. In our context, a straightforward version is as presented in the next proposition. \begin{prop}\label{thm:giut} Let $S$ be a cancellative right LCM semigroup with identity such that $S$ satisfies (C1) and the conditional expectation $\Phi_{\mathcal{C}_I}:C^*(S) \longrightarrow \mathcal{C}_I$ constructed in Corollary~\ref{cor:phi-CI} is faithful. Then a $*$-homomorphism $\pi:C^*(S)\to B$ is injective if and only if $\pi\vert_{\mathcal{C}_I}$ is injective and $B$ admits a coaction $\epsilon$ of the enveloping group $\Gamma$ of $\mathcal{S}$ such that $\pi$ is $\delta$-$\epsilon$-equivariant, i.e. $(\pi\otimes\operatorname{id}_{C^*(\Gamma)})\circ \delta=\epsilon\circ \pi$.
\end{prop} \begin{proof} If $B$ admits a coaction $\epsilon$ as in the hypothesis, then there is a conditional expectation $\Phi^\epsilon$ from $B$ onto $\pi(\mathcal{C}_I)$ such that $\Phi^\epsilon\circ \pi=(\pi\vert_{\mathcal{C}_I})\circ \Phi_{\mathcal{C}_I}$. Now the standard argument shows that injectivity of $\pi$ on $\mathcal{C}_I$ lifts to injectivity on $C^*(S)$. \end{proof} \section{Applications}\label{sec: examples} \subsection{Semidirect products of groups by semigroups}~\\ The new class of right LCM semigroups covered in this work is that of semidirect products of a group by the action of a semigroup. Throughout this subsection, let $G$ be a group, $P$ a cancellative right LCM semigroup with identity $1_P$, and $P\stackrel{\theta}{\curvearrowright}G$ an action by injective group endomorphisms of $G$. For brevity, we denote the semidirect product $G\rtimes_\theta P$ by $S$. \begin{definition} The action $\theta$ is said to \emph{respect the order} on $P$ if for all $p,q \in P$ with $pP \cap qP \neq \emptyset$, we have $\theta_p(G) \cap \theta_q(G) = \theta_r(G)$, where $r\in P$ is any element such that $pP \cap qP = rP$. This is well-defined since $r_1P= pP \cap qP=r_2P$ implies that $r_1 = r_2x$ for some $x \in P^*$, which means $\theta_{r_1}(G)=\theta_{r_2x}(G)=\theta_{r_2}(G)$ because $\theta_x$ is an automorphism of $G$. \end{definition} \begin{prop}\label{prop: GxP right LCM} If $\theta$ respects the order, then $S$ is a right LCM semigroup. \end{prop} \begin{proof} Since both $G$ and $P$ are left cancellative and $\theta$ acts by injective maps, $S$ is left cancellative. Suppose $g_1,g_2 \in G$ and $p_1,p_2 \in P$ are such that $(g_1,p_1)S \cap (g_2,p_2)S \neq \emptyset$. Then $p_1P \cap p_2P \neq \emptyset$, and since $P$ is right LCM, there is $q \in P$ satisfying $p_1P \cap p_2P = qP$. Denote by $p_1',p_2'\in P$ the elements satisfying $q=p_1p_1'=p_2p_2'$.
We must also have $h_1,h_2\in G$ such that $g_1\theta_{p_1}(h_1)=g_2\theta_{p_2}(h_2)$. We claim that \[(g_1,p_1)S \cap (g_2,p_2)S = (g_1\theta_{p_1}(h_1),q)S.\] Since $(g_1\theta_{p_1}(h_1),q)=(g_1,p_1)(h_1,p_1')=(g_2,p_2)(h_2,p_2')$, the right ideal $(g_1\theta_{p_1}(h_1),q)S$ is contained in $(g_1,p_1)S \cap (g_2,p_2)S$. For the reverse containment, suppose that $(g,s),(h,t)\in S$ with $(g_1,p_1)(g,s)=(g_2,p_2)(h,t)$. Then $g_1\theta_{p_1}(g)=g_2\theta_{p_2}(h)$ and $p_1s=p_2t$. Since $p_1s=p_2t\in p_1P\cap p_2P=qP$, we have $p_1s=p_2t=qq'$ for some $q'\in P$. The identities $g_1\theta_{p_1}(h_1)=g_2\theta_{p_2}(h_2)$ and $g_1\theta_{p_1}(g)=g_2\theta_{p_2}(h)$ yield $\theta_{p_1}(h_1^{-1}g)=\theta_{p_2}(h_2^{-1}h)$. Since $\theta$ respects the order on $P$, we have $\theta_{p_1}(G)\cap\theta_{p_2}(G)=\theta_q(G)$, and hence $\theta_{p_1}(h_1^{-1}g)=\theta_q(k)$ for some $k\in G$. Then \begin{align*} (g_1,p_1)(g,s)=(g_1\theta_{p_1}(g),p_1s) =(g_1\theta_{p_1}(h_1)\theta_{p_1}(h_1^{-1}g),p_1s) &=(g_1\theta_{p_1}(h_1),q)(k,q')\\ &\in (g_1\theta_{p_1}(h_1),q)S. \end{align*} So the reverse containment holds, and hence $S$ is right LCM. \end{proof} \noindent Since the focus of this paper is on right LCM semigroups, we shall assume from now on that $\theta$ respects the order. The structure of $\mathcal{J}(S)$ is determined by the semigroup $P$ and the collection of cosets $\{G/\theta_p(G)\}_{p \in P}$. \begin{lemma}\label{lem: GxP disjoint ideals} For any $g,h \in G$ and $p \in P$ we have \[(g,p)S \cap (h,p)S = \begin{cases} (g,p)S, &\text{if } g^{-1}h \in \theta_p(G),\\ \emptyset, &\text{otherwise.} \end{cases}\] \end{lemma} \begin{proof} If the intersection is non-empty, we have $g\theta_p(g_1) = h\theta_p(h_1)$ for some $g_1,h_1 \in G$. Then $g^{-1}h =\theta_p(g_1h_1^{-1}) \in \theta_p(G)$, as needed. Conversely, if $g^{-1}h = \theta_p(k)$ for some $k \in G$, then $(h,p)=(g,p)(k,1_P)$, so $(h,p)S \subset (g,p)S$, and by symmetry the two ideals coincide. \end{proof} \begin{cor}\label{cor:useful-equivalence} Let $P$ satisfy (C2).
For any $(g,p)\in S$ and $(h,x)\in S^*$, the following are equivalent: \begin{enumerate} \item[(i)] $(h,x)(g,p)S \neq (g,p)S$; \item[(ii)] $(h\theta_x(g),p)S \cap (g,p)S =\emptyset$; \item[(iii)] $g^{-1}h\theta_x(g) \notin \theta_p(G)$. \end{enumerate} In particular, $S$ satisfies condition (D1). \end{cor} \begin{proof} Take $(g,p) \in S$ and $(h,x) \in S^*$. By Lemma~\ref{lem: units in GxP}, we have $x \in P^*$. Condition (C2) gives $y\in P^*$ with $xp = py$. Therefore, \[ (h,x)(g,p)S\cap (g,p)S=(h\theta_x(g),p)S\cap (g,p)S.\] By Lemma~\ref{lem: GxP disjoint ideals}, this intersection is non-empty if and only if $g^{-1}h\theta_x(g)\in \theta_p(G)$, in which case the ideals $(h,x)(g,p)S$ and $(g,p)S$ coincide. It follows immediately that $S$ satisfies condition (D1). \end{proof} \begin{lemma}\label{lem: GxP str eff} Let $P$ satisfy (C2). Then the action $S^*{\curvearrowright}\mathcal{J}(S)$ from Definition~\ref{def: conditions on S for using D} is strongly effective if and only if it is effective. \end{lemma} \begin{proof} Strong effectiveness implies effectiveness. Assume therefore that $S^*{\curvearrowright}\mathcal{J}(S)$ is effective. Let $(g,p) \in S$ and $(h,x) \in (G\times P^*)\setminus\{(1_G,1_P)\}$, where $S^*=G\times P^*$ by Lemma~\ref{lem: units in GxP}. If $(h,x)(g,p)S \neq (g,p)S$ holds, then $(g,p)$ itself does the job required for strong effectiveness. \noindent Let now $(h,x)(g,p)S = (g,p)S$. We have to find an element $(g',p') \in S$ satisfying \begin{equation}\label{eq:strong-effect} (h,x)(g,p)(g',p')S \neq (g,p)(g',p')S. \end{equation} It follows from Corollary~\ref{cor:useful-equivalence} that $g^{-1}h\theta_x(g) = \theta_p(\tilde{g})$ for some $\tilde{g} \in G$. Using (C2) to find $y \in P^*$ with $xp = py$, and then $y' \in P^*$ with $yp' = p'y'$, we have $xpp'=pyp'=pp'y'$, so the left-hand side of \eqref{eq:strong-effect} rewrites as \[ (h,x)(g,p)(g',p')S = (h\theta_x(g)\theta_{xp}(g'),xpp')S = (h\theta_x(g)\theta_{py}(g'),pp')S.
\] Thus to prove \eqref{eq:strong-effect} we need to ensure that \[ (g\theta_p(g'))^{-1}h\theta_x(g)\theta_{py}(g') = \theta_p((g')^{-1})g^{-1}h\theta_x(g)\theta_{py}(g') \notin \theta_{pp'}(G). \] Since $g^{-1}h\theta_x(g) = \theta_p(\tilde{g})$ and $\theta_p$ is injective, this is equivalent to \[(g')^{-1}\tilde{g}\theta_{y}(g') \notin \theta_{p'}(G).\] If $x\neq 1_P$, then $y\neq 1_P$ by right cancellation in $P$; if $x=1_P$, then $y=1_P$ and $h\neq 1_G$, so that $\tilde{g}\neq 1_G$. In either case $(\tilde{g},y) \in S^*\setminus\{1_S\}$, and the existence of $(g',p')$ is guaranteed by effectiveness of the action applied to $(\tilde{g},y)$. Thus $S^*{\curvearrowright}\mathcal{J}(S)$ is strongly effective. \end{proof} \noindent Since an action of a group on a space is effective precisely when the intersection of all stabiliser subgroups is the trivial subgroup, Lemma~\ref{lem: GxP str eff} says that we can rephrase the property of $S^* {\curvearrowright} \mathcal{J}(S)$ being strongly effective in terms of stabilisers. We first introduce some notation. For each $(g,p)\in S$, let $S_{(g,p)}$ denote the subgroup $g\theta_p(G)g^{-1}$ of $G$. For the action $S^* {\curvearrowright} \mathcal{J}(S)$ from Definition~\ref{def: conditions on S for using D}, let $S^*_{(g,p)S}$ denote the stabiliser subgroup of $(g,p)S\in\mathcal{J}(S)$. \begin{lemma}\label{lem: stab for (GxP)* on J(GxP)} Let $P$ satisfy (C2) and consider the action $S^* {\curvearrowright} \mathcal{J}(S)$ from Definition~\ref{def: conditions on S for using D}. Then the stabiliser subgroup of $(g,p)S\in \mathcal{J}(S)$ takes the form \[S^*_{(g,p)S} = \{(h,x) \in S^*\mid h\theta_x(g) \in g\theta_p(G)\}.\] If $P^* = \{1_P\}$, then $S^*_{(g,p)S} = S_{(g,p)}\times\{1_P\}$. Further, $S^* {\curvearrowright} \mathcal{J}(S)$ is strongly effective if and only if \[\bigcap_{(g,p) \in S} S_{(g,p)}= \{1_G\}.\] In particular, if $P^* = \{1_P\}$ and $G$ is abelian, then $S^* {\curvearrowright} \mathcal{J}(S)$ is strongly effective if and only if $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$.
\end{lemma} \begin{proof} The claimed description of $S^*_{(g,p)S}$ follows from Corollary~\ref{cor:useful-equivalence}, and the characterisation of strong effectiveness follows from Lemma~\ref{lem: GxP str eff}. \end{proof} \noindent We shall be able to say more for semigroups $S$ where $P$ is a countably generated free abelian semigroup with identity. For the purposes of the next results, we therefore assume that $P \cong \mathbb{N}^k$ for some $k \in \mathbb{N}$ or $P \cong \bigoplus_\mathbb{N} \mathbb{N}$. In this case, (C2) is automatic for $P$, hence $S = G\rtimes_\theta P$ satisfies (D1) by Corollary~\ref{cor:useful-equivalence}. As indicated in the comment following Lemma~\ref{lem:str eff+D1+D3 gives D2}, condition (D2) is harder to establish in full generality. The next result describes an obstruction to (D2) for $S= G\rtimes_\theta P$. \begin{lemma}\label{lem: GxP and D2} Assume $P \cong \mathbb{N}^k$ for some $k \in \mathbb{N}$ or $P \cong \bigoplus_\mathbb{N} \mathbb{N}$. If there are $q_1,\dots,q_m \in P\setminus\{1_P\}$ such that $[G:\theta_{q_i}(G)] < \infty$ and \[ \bigcap\limits_{\substack{(g,p)\in S\\ p \in P\setminus (\bigcup_{1 \leq i \leq m}q_iP)}} S_{(g,p)} \supsetneqq \{1_G\}, \] then $S$ does not satisfy (D2). \end{lemma} \begin{proof} Suppose there are $q_1,\dots,q_m$ as prescribed above and pick an element \[ h \in \bigcap\limits_{\substack{(g,p)\in S\\ p \in P\setminus (\bigcup_{1 \leq i \leq m}q_iP)}} S_{(g,p)} \] with $h \neq 1_G$. Denote $[G:\theta_{q_i}(G)] = N_i \in \mathbb{N}^\times$, and choose, for each $i=1,\dots, m$, a complete set of representatives $(h_{i,j})_{1 \leq j \leq N_i}$ for $G/\theta_{q_i}(G)$. We claim that (D2) fails for the choice of elements $(g_0,p_0)=(g_1,p_1)=(1_G,1_P)$ in $S$, $(h,1_P)$ in $S^*$, and the finite subset $\{(h_{i,j},q_i)\mid j=1, \dots ,N_i,\, i=1,\dots , m\}$ of $S$. Note that we have $(g_1,p_1) \notin (h_{i,j},q_i)S$ for all $j=1, \dots ,N_i$ and $i=1,\dots , m$.
If (D2) were to hold, it would imply the existence of $(g_2,p_2)$ such that both $(g_2,p_2) \notin (h_{i,j},q_i)S$ for all $i,j$ and, by Corollary~\ref{cor:useful-equivalence}, also $h\notin S_{(g_2,p_2)}$. Hence, by the choice of $h$, there is at least one $i$ with $p_2 \in q_iP$. For this $i$ there is a unique $j \in \{1,\dots,N_i\}$ with $g_2 \in h_{i,j}\theta_{q_i}(G)$. In other words, we would get $(g_2,p_2) \in (h_{i,j},q_i)S$, which is a contradiction. \end{proof} \noindent In the next two examples we describe some situations where $S$ satisfies condition (D2). \begin{example}\label{ex: GxP sat D2 but not D3 I} Let $G = \bigoplus_\mathbb{N} \mathbb{Z}$ and let $P$ be the unital subsemigroup of $\mathbb{N}^\times$ generated by $2$ and $3$. We shall denote $P=| 2,3\rangle$. Define an action $\theta$ of $P$ by injective endomorphisms of $G$ as follows: for $g = (g_n)_{n \in \mathbb{N}} \in G$, let \[ \theta_2(g) = 2g,\quad(\theta_3(g))_0 = 3g_0 \text{ and }(\theta_3(g))_n = g_n \text{ for all }n \geq 1. \] It is immediate that $\theta$ respects the order, so $S$ is right LCM by Proposition~\ref{prop: GxP right LCM}. Further, $[G:\theta_2(G)] = \infty$ and $[G:\theta_3(G)] = 3$. Note that $\bigcap_{n \in \mathbb{N}} \theta_{2^n}(G) = \{1_G\}$. We claim that $S=G\rtimes_\theta P$ satisfies (D2). It will follow from Lemma~\ref{lem: GxP D2} that $S$ does not satisfy (D3). Suppose that we have $s_0:=(g_0,p_0) \in S$, $s_1:=(g_1,p_1) \in (g_0,p_0)S$ as well as\linebreak $(h_1,q_1),\dots,(h_m,q_m) \in S$ such that \[ (g_1,p_1)S \cap \biggl(S\setminus\bigcup_{1 \leq i \leq m}(h_i,q_i)S\biggr) \neq \emptyset. \] In particular, this implies $(g_1,p_1) \notin (h_i,q_i)S$ for each $i=1, \dots, m$. If $p_1 \in q_iP$ for some $i$, then necessarily $g_1 \notin h_i\theta_{q_i}(G)$, and therefore Lemma~\ref{lem: GxP disjoint ideals} implies that $(g_1,p_1)S \cap (h_i,q_i)S = \emptyset$. Without loss of generality we may thus assume that $p_1 \notin q_iP$ for all $i=1, \dots, m$.
Let now $x=(g,1_P)$ with $g \neq 1_G$. An element $s_2:=(g_2, p_2)$ as required in (D2) will have to satisfy $xs_0^{-1}s_2S\cap s_0^{-1}s_2S=\emptyset$. If we denote $r:=p_0^{-1}p_2$, this requirement takes the form $(g,1_P)(h',r)S\cap (h',r)S=\emptyset$ for some $h'\in G$. Now, using $\bigcap_{n \in \mathbb{N}} \theta_{2^n}(G) = \{1_G\}$, we can choose $n \in \mathbb{N}$ large enough so that $g \notin \theta_{2^n}(G)$; setting $p_2 := p_1 2^n$, we then have $g \notin \theta_{r}(G)$ because $\theta_r(G)\subset\theta_{2^n}(G)$. By Corollary~\ref{cor:useful-equivalence}, this means that $x(h', r)S\cap (h',r)S=\emptyset$ for any $h'\in G$. Thus we have freedom to choose the first entry in $s_2$, and this choice must be made so that it ensures the second requirement in (D2). The crucial ingredient here is the fact that $[G:\theta_{2^k}(G)] = \infty$ for all $k\geq 1$, which will allow us to choose $g_2 \in g_1\theta_{p_1}(G)$ such that $(g_2,p_2) \notin (h_i,q_i)S$ for all $i$. To achieve this goal requires a careful argument. If $p_2\notin q_iP$ for all $i=1, \dots, m$, then any choice of $g_2\in g_1\theta_{p_1}(G)$ will ensure that $(g_2,p_2)\notin (h_i,q_i)S$ for all $i$. Assume next that $q_1,\dots,q_m$ are labelled in such a way that there is $m' \in \{1, \dots , m\}$ with the property that $p_2 \in q_iP$ implies $i\leq m'$. Note that the elements corresponding to $i = m'+1,\dots,m$ pose no obstruction to the choice of $g_2$, because for these indices $i$ we have $(g_2,p_2)\notin (h_i, q_i)S$ irrespective of the choice of $g_2$. Possibly changing the enumeration once more, we can assume that $1$ is minimal in $\{1,\dots,m'\}$ in the sense that \[ p_1P \cap q_1P \subset p_1P \cap q_iP \Longrightarrow p_1P \cap q_1P = p_1P \cap q_iP \text{ for all } 1 \leq i \leq m', \] and that $2,\dots,m'$ are assigned in such a way that there is $m_1$ with \[p_1P \cap q_1P = p_1P \cap q_iP \Longrightarrow i \leq m_1 \text{ for } i \in \{2,\dots,m'\}. \] Let $n_1$ be such that $p_1P \cap q_1P = p_1 2^{n_1}P$, and note that $1\leq n_1 \leq n$.
Since $[G:\theta_{2^{n_1}}(G)] = \infty$, there are infinitely many distinct principal right ideals of the form $(g_1\theta_{p_1}(g'),p_1 2^{n_1})S \subset (g_1,p_1)S$ with $g' \in G$. Since $S$ is right LCM, of these infinitely many ideals, at most $m_1$ are not admissible for a choice of $g_2$ (because they are possibly contained in $(h_1,q_1)S,\dots,(h_{m_1},q_{m_1})S$). Thus there is $g_{2,1} \in g_1\theta_{p_1}(G)$ with \[(g_{2,1},p_1 2^{n_1}) \notin (h_i,q_i)S \text{ for all } i = 1,\dots,m_1.\] Replacing $g_1$ by $g_{2,1}$, $p_1$ by $p_1 2^{n_1}$, $n$ by $n-n_1$, and $\{1,\dots,m'\}$ by $\{m_1+1,\dots,m'\}$, we can iterate this process. Thus at the second step we obtain an element $(g_{2,2}, p_12^{n_1}2^{n_2})\in (g_{2,1},p_1 2^{n_1})S$, for appropriate $1\leq n_2\leq n$ and $g_{2,2}\in G$, which also avoids the additional ideals $(h_i, q_i)S$ for $i$ in a further subset $\{m_1+1,\dots,m_2\}$ of $\{m_1+1,\dots,m'\}$, for an appropriate $m_2$. This process stops after finitely many steps because the exponents $n_1,n_2,\dots\geq 1$ add up to at most $n$. Hence the final pair $(g_{2,m},p_2)$ has the required properties. \end{example} \noindent The second example shows that we can also have (D2) in the absence of endomorphisms with infinite index: \begin{example}\label{ex: GxP sat D2 but not D3 II} Let $G = \mathbb{Z}$, $P = \mathbb{N}^\times$ and let $\theta$ be given by multiplication, i.e.\ $\theta_p(g) = pg$. Clearly, we have $[G:\theta_p(G)] = p < \infty$ for all $p \in P$. Also, note that for all $q_1,\dots,q_m \in P\setminus\{1_P\}$, we have \[\bigcap\limits_{p \in P\setminus (\bigcup\limits_{1 \leq i \leq m}q_iP)} \theta_p(G) = \{1_G\}\] since $P\setminus (\bigcup_{1 \leq i \leq m}q_iP)$ contains arbitrarily large positive integers. So there is an abundance of subsemigroups $Q \subset P$ for which the restricted action $\theta|_Q$ separates the points in $G$. We claim that $S=G\rtimes_\theta P$ satisfies (D2).
Let $(g_0,p_0)\in S$, $(g_1,p_1) \in (g_0, p_0)S$, $(h_1,q_1),\dots,(h_m,q_m) \in S\setminus\{1_S\}$ with $(g_1,p_1) \notin (h_i,q_i)S$ for $i = 1,\dots,m$, and $x=(g,1_P) \in S$ with $g \neq 1_G$. For the same reasons as in Example~\ref{ex: GxP sat D2 but not D3 I}, we can assume that $p_1 \notin q_iP$ holds for all $i$. Now choose a prime $p \in P$ that does not divide any of $q_1,\dots,q_m$. Take $n \geq 1$ such that $g \notin \theta_{p^n}(G) = p^n\mathbb{Z}$. If we let $p_2 := p_1p^n$ and $g_2 := g_1$, then $p_2 \notin q_iP$ and hence $(g_2,p_2) \notin (h_i,q_i)S$ for all $i$. Moreover, $p_0^{-1}p_2 \in p^nP$ as $p_1 \in p_0P$. Therefore $g \notin \theta_{p_0^{-1}p_2}(G)$. Hence Corollary~\ref{cor:useful-equivalence} implies that $(g,1_P)(g_0,p_0)^{-1}(g_2,p_2)S \cap (g_0,p_0)^{-1}(g_2,p_2)S = \emptyset$, showing (D2). \end{example} \begin{remark} One can relax the assumptions and consider semidirect products of suitable semigroups by semigroups instead, for instance positive cones $G_+$ in a group $G$ on which we already have an action $\theta$ of a semigroup $P$. A natural assumption in this setting would be $\theta_p(G_+) \subset G_+$. Natural examples of this kind arise for $\mathbb{N}^k \subset \mathbb{Z}^k$, where $\theta$ takes values in $\operatorname{M}_k(\mathbb{N})\cap \operatorname{GL}_k(\mathbb{Q})$. \end{remark} \subsection{\texorpdfstring{Examples of purely infinite simple semigroup $C^*$-algebras from semidirect products}{Examples of purely infinite simple semigroup C*-algebras from semidirect products}}~\\ As before, we consider $S$ of the form $G\rtimes_\theta P$, where $P$ is a countably generated, free abelian semigroup with identity and $P\stackrel{\theta}{\curvearrowright}G$ is an action by injective group endomorphisms of $G$ that respects the order. In this subsection, we show that Theorem~\ref{thm: simple and p.i.}~(3) applies to $S$ if $[G:\theta_p(G)]$ is infinite for every $p \neq 1_P$.
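\noindent Before turning to the general results, we note that the elementary ideal calculus behind Example~\ref{ex: GxP sat D2 but not D3 II} can be checked mechanically. The following sketch is our own illustration (the names \texttt{mult} and \texttt{in\_ideal} are not from the text): it implements $S=\mathbb{Z}\rtimes_\theta\mathbb{N}^\times$ with $\theta_p(g)=pg$, where $(g_1,p_1)\in (g,p)S$ holds precisely when $p\mid p_1$ and $g_1\equiv g \pmod{p}$.

```python
# Illustration (our notation, not from the text): S = Z ⋊_θ N^× with θ_p(g) = p·g.
# Elements are pairs (g, p); the semidirect product multiplication is
# (g, p)(h, q) = (g + θ_p(h), p·q) = (g + p·h, p·q).

def mult(s, t):
    (g, p), (h, q) = s, t
    return (g + p * h, p * q)

def in_ideal(x, s):
    """Decide whether x = (g1, p1) lies in the principal right ideal (g, p)S.
    This happens iff p divides p1 and g1 is congruent to g modulo p."""
    (g1, p1), (g, p) = x, s
    return p1 % p == 0 and (g1 - g) % p == 0
```

In this concrete case the disjointness criterion of Lemma~\ref{lem: GxP disjoint ideals} reads: $(g,p)S\cap(h,p)S\neq\emptyset$ if and only if $p \mid g-h$, which is how the disjointness requirement in (D2) was arranged in the example above.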
We illustrate the theorem with several concrete examples of semigroups whose semigroup $C^*$-algebra is purely infinite and simple. \begin{lemma}\label{lem: GxP D2} Assume that $P \cong \mathbb{N}^k$ for some $k \geq 1$ or $P \cong \bigoplus_\mathbb{N} \mathbb{N}$. Then $S$ satisfies (D3) if and only if the index $[G:\theta_p(G)]$ is infinite for every $p \neq 1_P$. \end{lemma} \begin{proof} To begin with, note that $S^* = G\times\{1_P\}$. Suppose that there exists $q \neq 1_P$ such that $[G:\theta_q(G)] = n < \infty$ and let $h_1,\dots,h_n \in G$ be a complete set of representatives for $G/\theta_q(G)$. We claim that (D3) fails for $(g,p)=(1_G,1_P)=1_S$ and $(h_1,q),\dots,(h_n,q)$. To see this, note first that $(g,p) \notin (h_k,q)S$ for all $1 \leq k \leq n$. If $(g',p') \in (g,p)S$ is arbitrary, then there is a unique $k$ such that $g' \in h_k\theta_q(G)$, i.e.\ $g' = h_k\theta_q(\tilde{g})$ for some $\tilde{g} \in G$. Since $P$ is commutative, we have $(g',p')S \cap (h_k,q)S \supset (g',p'q)S \cap (h_k\theta_q(\tilde{g}),p'q)S$, which equals $(g',p'q)S$, so $(g',p')S \cap (h_k,q)S$ is non-empty. Now suppose $[G:\theta_p(G)]$ is infinite for every $p \in P{\setminus}\{1_P\}$. Let $(g,p) \in S$ and $F \subset S$ be finite such that \[(g,p)S \cap \biggl(S\setminus\bigcup\limits_{(h,q) \in F} (h,q)S\biggr) \neq \emptyset.\] Without loss of generality, we may assume $(g,p)S \cap (h,q)S \neq \emptyset$ and $p \neq q$ hold for all $(h,q) \in F$. Consider \[F_P := \{ r \mid pP \cap qP = rP \text{ for some } (h,q) \in F \}.\] Pick $p_1 \in F_P$ that is minimal in the sense that $p_1 \in rP$ with $r \in F_P$ implies $r=p_1$. Let $(h_1,q_1),\dots,(h_n,q_n) \in F$ denote the elements satisfying $pP \cap q_iP = p_1P$.
According to Proposition~\ref{prop: GxP right LCM}, the fact that $(g,p)S \cap (h_i,q_i)S \neq \emptyset$ for all $i=1,\dots, n$ shows that we have \[ (g,p)S \cap (h_i,q_i)S = (g\theta_p(g'_i),p_1)S=(g,p)(g_i', p^{-1}p_1)S \] for suitable $g'_i \in G$ and each $i=1, \dots, n$. If we had $p^{-1}p_1 = 1_P$, i.e.\ $p_1 = p$, then $p \in q_iP$, and the non-empty intersection would force $g \in h_i\theta_{q_i}(G)$ (since $\theta_p(G)\subset\theta_{q_i}(G)$), so that $(g,p)S \subset (h_i,q_i)S$, contradicting the choice of $(g,p)$ and $F$. Hence $p^{-1}p_1 \neq 1_P$, and the index $[G:\theta_{p^{-1}p_1}(G)]$ is infinite. In particular, there is $g_1 \in g\theta_p(G)$ such that \[(g_1,p_1) \in (g,p)S \text{ and } (g_1,p_1)S \cap (h_i,q_i)S = \emptyset \text{ for all } i = 1,\dots,n.\] Setting \[F_1:= \{(h,q) \in F\mid (h,q)S \cap (g_1,p_1)S \neq \emptyset \},\] we observe that $F_1 \subset F\setminus\{(h_1,q_1),\dots,(h_n,q_n)\}$, so $F_1 \subsetneqq F$. If $F_1$ is empty, then we are done, so let us assume that $F_1 \neq \emptyset$. Note that the minimal way in which $p_1$ was chosen implies $p_1 \notin pP \cap qP$ for all $(h,q) \in F_1$. This will allow us to conclude \[ (g_1,p_1)S \cap \biggl(S\setminus\bigcup\limits_{(h,q) \in F_1} (h,q)S\biggr) \neq \emptyset \] by invoking the choice of $(g,p)$ and $F$. Indeed, if the intersection were empty, then there would be $(h,q) \in F_1$ with $(g_1,p_1)S \subset (h,q)S$, see Proposition~\ref{prop: GxP right LCM}. This would force $p_1 \in qP$ and therefore $p_1 \in p_1P \cap qP \subset pP \cap qP$, contradicting $(h,q) \in F_1$. Thus, we can iterate this process and, after finitely many steps, arrive at an element $(g',p') \in (g,p)S$ with the property $(g',p')S \cap (h,q)S = \emptyset$ for all $(h,q) \in F$. This completes the proof of the lemma. \end{proof} \begin{thm}\label{thm:gxp p.i. simple} Suppose $G$ is a group, $P \cong \mathbb{N}^k$ for some $k \geq 1$ or $P \cong \bigoplus_\mathbb{N} \mathbb{N}$, and $P\stackrel{\theta}{\curvearrowright}G$ is an action by injective group endomorphisms of $G$ respecting the order. Denote $S=G\rtimes_\theta P$.
Assume that $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$, $[G:\theta_p(G)]$ is infinite for every $p \neq 1_P$, and the conditional expectation $C^*(S) \stackrel{\Phi_\mathcal{D}}{\longrightarrow} \mathcal{D}$ is faithful. Then $C^*(S)$ is purely infinite and simple. \end{thm} \begin{proof} We intend to apply Theorem~\ref{thm: simple and p.i.}~(3). First, note that (D1) holds by Corollary~\ref{cor:useful-equivalence} since (C2) is trivially satisfied for $P$. By Lemma~\ref{lem: stab for (GxP)* on J(GxP)}, $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$ corresponds to strong effectiveness of $S^*{\curvearrowright}\mathcal{J}(S)$. The fact that $S$ satisfies (D3) follows from Lemma~\ref{lem: GxP D2}. Since $\Phi_\mathcal{D}$ is faithful, Theorem~\ref{thm: simple and p.i.}~(3) implies that $C^*(S)$ is purely infinite and simple. \end{proof} \noindent Let us now look at some concrete examples. We start with a shift space: \begin{example}\label{ex:gxp shift} Let $P \cong \mathbb{N}^k$ for some $k \geq 1$ or $P \cong \bigoplus_\mathbb{N} \mathbb{N}$ and suppose $G_0$ is a countable amenable group. To avoid pathologies, let us assume that $G_0$ has at least two distinct elements. Then $P$ admits a shift action $\theta$ on $G := \bigoplus\limits_P G_0$ given by \[(\theta_p((g_q)_{q \in P}))_r = \chi_{pP}(r)\, g_{p^{-1}r} \text{ for all } p,r \in P.\] It is apparent that $\theta$ is an action by injective group endomorphisms that respects the order, and that $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$ holds. We note that $S$ is a right reversible semigroup whose enveloping group $S^{-1}S$ is amenable because $G$ and $P^{-1}P$ are amenable. Using Proposition~\ref{prop:left-inverse}, we conclude that $\Phi_\mathcal{D}$ is faithful. Finally, $[G:\theta_p(G)] < \infty$ holds for $p \neq 1_P$ only if $G_0$ is finite and $P \cong \mathbb{N}$.
Indeed, if $p \neq 1_P$, then each element of $\bigoplus_{q \in P\setminus pP} G_0$ yields a distinct left coset in $G/\theta_p(G)$. Clearly, this group is finite if and only if $G_0$ is finite and $P \cong \mathbb{N}$. So if $P$ is not singly generated or $G_0$ is a countably infinite group, $C^*(S)$ is purely infinite and simple by Theorem~\ref{thm:gxp p.i. simple}. \end{example} \noindent A variant of the next example with singly generated $P$ and finite field $\mathcal{K}$ has been considered in \cite[Example 2.1.4]{CV}. \begin{example}\label{ex:gxp poly ring} Let $\mathcal{K}$ be a countably infinite field and let $G=\mathcal{K}[T]$ denote the polynomial ring in a single variable $T$ over $\mathcal{K}$. We choose non-constant polynomials $p_i \in \mathcal{K}[T]$, $i \in I$, for some index set $I$. Multiplication by $p_i$ defines an endomorphism $\theta_{p_i}$ of $G$ with $[G:\theta_{p_i}(G)] = |\mathcal{K}|^{\deg(p_i)}$, where $\deg(p_i)$ denotes the degree of $p_i \in \mathcal{K}[T]$. Thus, if we let $P$ be the semigroup generated by all the $p_i$, in notation \[P := |(p_i)_{i \in I}\rangle,\] then the index of $\theta_p(G)$ in $G$ is infinite for all $p \neq 1_P$. It is not hard to show that $\theta$ respects the order if and only if $(p_i) \cap (p_j) = (p_ip_j)$ holds for the principal ideals whenever $i \neq j$. Since every element in $G$ has finite degree and the $p_i$ are non-constant, $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$ is automatic. The expectation $\Phi_\mathcal{D}$ is faithful for the same reason as in Example~\ref{ex:gxp shift}. Thus, provided the family $(p_i)_{i \in I}$ has been chosen accordingly, $C^*(S)$ is purely infinite and simple. \end{example} \noindent We next discuss a class of semigroups $S$ based on a non-commutative group $G$. \begin{example}\label{ex:gxp F2} Let $G = \mathbb{F}_2$ be the free group on $a$ and $b$.
We define injective group endomorphisms $\theta_1,\theta_2$ of $G$ by $\theta_1(a) = a^2$, $\theta_1(b) = b$, $\theta_2(a) = a$, $\theta_2(b) = b^2$, and set $P = |\theta_1,\theta_2\rangle \cong \mathbb{N}^2$. It is clear that the induced action $\theta$ of $P$ on $G$ respects the order. Additionally, $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$ is easily checked using the word length coming from $\{a,a^{-1},b,b^{-1}\}$. To see that $[G:\theta_1(G)] = \infty$ holds, note that the family $\left((ab)^j\right)_{j\geq 1}$ yields mutually distinct left cosets in $G/\theta_1(G)$. The same argument with $ba$ instead of $ab$ shows that $\theta_2(G)$ has infinite index in $G$. Thus, $C^*(S)$ is purely infinite and simple provided that $\Phi_\mathcal{D}$ is faithful. One can show that this amounts to amenability of the action $\mathbb{F}_2 \stackrel{\tau}{\curvearrowright} \mathcal{D}$. \end{example} \noindent This example can be viewed as belonging to a larger class, as described next. The inspiration for these examples was \cite[Example 2.3.9]{Vie}, where the single endomorphism $\theta_2$ from Example~\ref{ex:gxp F2} on $\mathbb{F}_2$ is considered. \begin{example}\label{ex:gxp Fn} For $2 \leq n \leq \infty$, let $\mathbb{F}_n$ be the free group on $n$ generators $(a_k)_{1 \leq k \leq n}$. Fix $1 \leq d \leq n$ and choose for each $1 \leq i \leq d$ an $n$-tuple $(m_{i,k})_{1 \leq k \leq n} \subset \mathbb{N}^\times$ such that \begin{enumerate}[1)] \item for each $1 \leq i \leq d$, there exists $k$ such that $m_{i,k} > 1$, and \item for all $1 \leq i,j \leq d$ with $i \neq j$ and all $1 \leq k \leq n$, $m_{i,k}$ and $m_{j,k}$ are relatively prime. \end{enumerate} Then $\theta_i(a_k) = a_k^{m_{i,k}}$ defines an injective group endomorphism of $\mathbb{F}_n$ for each $1 \leq i \leq d$. We set $P = |(\theta_i)_{1\leq i\leq d}\rangle$. Using 2), one can show that the induced action $\theta$ of $P$ on $G$ respects the order.
As in Example~\ref{ex:gxp F2}, $[G:\theta_p(G)]$ is infinite for every $p \neq 1_P$. The requirement $\bigcap_{p \in P} \theta_p(G) = \{1_G\}$ reduces to \begin{enumerate} \item[3)] for each $1 \leq k \leq n$, there exists $1 \leq i \leq d$ satisfying $m_{i,k} > 1$. \end{enumerate} So $C^*(S)$ is purely infinite and simple if conditions 1)--3) above are satisfied and $\Phi_\mathcal{D}$ is faithful. As in Example~\ref{ex:gxp F2}, the latter corresponds to amenability of the action $\mathbb{F}_n \stackrel{\tau}{\curvearrowright} \mathcal{D}$.\\ \end{example} \subsection{Semigroups from self-similar actions}~\\ Another large class of right LCM semigroups arises from self-similar actions, cf.\ \cite{Law1, LW, BRRW}. We won't be able to say much here, since conditions (D2) and (D3) are not likely to hold. However, these semigroups will satisfy condition (D1), and they will satisfy strong effectiveness in the presence of right cancellation. We include these observations here, as well as a description of those semigroups that satisfy (C1). Let $X$ be a finite alphabet. We write $X^n$ for the set of all words of length $n$, and $X^*$ for the set of all finite words. We let $\varnothing$ denote the empty word. Under concatenation of words, $X^*$ is a semigroup (and is nothing more than $\mathbb{F}_{|X|}^+$). A {\em self-similar action} is a pair $(G,X)$, where $G$ is a group acting faithfully on $X^*$ such that for every $g \in G$ and $x\in X$, there exists a unique $g|_x \in G$ satisfying \begin{equation}\label{eq: ss condition} g\cdot (xw) = (g\cdot x) (g|_x \cdot w) \end{equation} for all $w\in X^*$. The group element $g|_x$ is called the \emph{restriction} of $g$ to $x$. The restriction map can be extended iteratively to all finite words, and satisfies \[ g|_{vw}=(g|_{v})|_{w}, \quad (gh)|_{v} = g|_{h\cdot v}\, h|_{v}, \quad\text{and}\quad (g|_{v})^{-1} = g^{-1}|_{g\cdot v}, \] for all $g, h\in G$ and $v,w\in X^*$.
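These identities are easy to verify on the simplest non-trivial example, the binary adding machine, where $G=\mathbb{Z}$ (generated by a single element $a$) acts on words over $X=\{0,1\}$ by addition with carry, the outgoing carry being exactly the restriction. The sketch below is our own illustration (the helper names are not from the literature); it also implements the product $(x,g)(y,h)=(x(g\cdot y),g|_y h)$, which reappears below as the Zappa-Sz\'ep product.

```python
# Illustration (our notation): the binary adding machine, G = Z = <a> acting
# self-similarly on X = {0,1}.  A word w of length n, least significant letter
# first, encodes v = sum(w[i]*2^i); a^k acts by adding k modulo 2^n, and the
# restriction a^k|_w is the outgoing carry.

def act(k, w):
    """Return (a^k . w, a^k|_w); the restriction is given as an integer exponent."""
    n = len(w)
    v = sum(int(w[i]) << i for i in range(n))
    new_v, carry = (v + k) % (1 << n), (v + k) >> n  # arithmetic shift handles k < 0
    return ''.join(str((new_v >> i) & 1) for i in range(n)), carry

def zs_mult(s, t):
    """Multiplication (x,g)(y,h) = (x(g.y), g|_y h) on X* bowtie G, G = Z additive."""
    (x, g), (y, h) = s, t
    gy, res = act(g, y)
    return (x + gy, res + h)
```

For instance, the identity $g|_{vw}=(g|_v)|_w$ corresponds to the fact that the carry out of a long addition can be computed block by block.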
Moreover, for each $n\geq 1$ the map $X^n \to X^n$ given by $w \mapsto g \cdot w$ is bijective. The proof of these properties and much more can be found in \cite{NekBook}. The Cuntz-Pimsner algebra $\mathcal{O}(G,X)$ of a self-similar group has been studied in \cite{Nek1,Nek2}, and the Toeplitz algebra $\mathcal{T}(G,X)$ has been studied in \cite{LRRW}. To each self-similar action $(G,X)$ one can associate a semigroup $X^*\bowtie G$, which is the set $X^*\times G$ with multiplication given by \[(x,g)(y,h) = (x(g\cdot y),g|_y h).\] The semigroup $X^*\bowtie G$ was introduced in \cite{Law1}, and is an example of a Zappa-Sz\'{e}p product. The $C^*$-algebra $C^*(X^*\bowtie G)$ was studied in \cite{BRRW}, and was shown to be isomorphic to $\mathcal{T}(G,X)$. Denote $S=X^*\bowtie G$. Then $S$ is right LCM, and the principal right ideals are determined by the element of $X^*$, in the sense that $(w,g)X^*\bowtie G=(z,h)X^*\bowtie G$ if and only if $w=z$. The identity in $X^*\bowtie G$ is $(\varnothing,1_G)$, and we have $(X^*\bowtie G)^*=\{\varnothing\}\times G$. For $x\in X$ let $G_x$ denote the stabiliser subgroup of $x$ in $G$. The map $\phi_x:G_x\to G$ given by $\phi_x(g)=g\vert_x$ is a homomorphism, see for example \cite[Lemma 3.1]{LW}. We now show that $S$ satisfies (D1), and we determine the precise conditions under which the action $S^* \curvearrowright\mathcal{J}(S)$ given by left multiplication on constructible ideals is strongly effective. \begin{lemma}\label{lem: self-sim Q2} Let $(G,X)$ be a self-similar action. Then $X^*\bowtie G$ satisfies (D1) from Definition~\ref{def: conditions on S for using D}. \end{lemma} \begin{proof} Let $(\varnothing,h)\in (X^*\bowtie G)^*$ and $(w,g)X^*\bowtie G\in\mathcal{J}(X^*\bowtie G)$ with \[ (\varnothing,h)(w,g)X^*\bowtie G\cap (w,g)X^*\bowtie G\not=\emptyset.
\] Then there are $(w',g'),(w'',g'')$ such that $(\varnothing,h)(w,g)(w',g')=(w,g)(w'',g'')$, and since $w$ and $h\cdot w$ have the same length, this means $w=h\cdot w$. Then \[ (\varnothing,h)(w,g)X^*\bowtie G=(h\cdot w,h|_wg)X^*\bowtie G=(w,h|_wg)X^*\bowtie G=(w,g)X^*\bowtie G.\qedhere \] \end{proof} \noindent We know from \cite[Proposition 3.11]{LW} that $X^*\bowtie G$ is right cancellative if and only if $\{w\in X^*:\exists g\in G\setminus\{1_G\}, g\cdot w=w\text{ and }g|_w=1_G\}=\emptyset$. This condition also appears in the following result: \begin{lemma}\label{lem: se action} Let $(G,X)$ be a self-similar action. Then the action $S^* \curvearrowright\mathcal{J}(S)$ given by left multiplication is strongly effective in the sense of Definition~\ref{def: conditions on S for using D} if and only if \[ \{w\in X^*:\exists g\in G\setminus\{1_G\}, g\cdot w=w\text{ and }g|_w=1_G\}=\emptyset. \] \end{lemma} \begin{proof} We prove the contrapositive of the forward implication. Suppose $w\in X^*$ and $g\in G\setminus\{1_G\}$ with $g\cdot w=w$ and $g|_w=1_G$. Then $(\varnothing,g)\in (X^*\bowtie G)^*$ and any $(w,h)\in S$ satisfy \[ (\varnothing,g)(w,h)(z,k)X^*\bowtie G=(g\cdot w,g|_wh)(z,k)X^*\bowtie G=(w,h)(z,k)X^*\bowtie G, \] for all $(z,k)\in X ^*\bowtie G$. So the action is not strongly effective. For the reverse implication, suppose $(\varnothing,g)\in (X^*\bowtie G)^*$ with $g\neq 1_G$ and $(w,h)\in X^*\bowtie G$. If $g\cdot w\not=w$, then $(\varnothing,g)(w,h)X^*\bowtie G\not=(w,h)X^*\bowtie G$. If $g\cdot w=w$, then $g|_w\not=1_G$ by assumption. Since the action of $G$ on $X^*$ is faithful, we may choose $z\in X^*$ such that $g|_w\cdot(h\cdot z)\not= h\cdot z$. Then \begin{align*} (\varnothing,g)(w,h)(z,1_G)X^*\bowtie G &= \big(w(g|_w\cdot (h\cdot z)),(g|_wh)|_z\big)X^*\bowtie G\\ & \not= (w(h\cdot z),(g|_wh)|_z)X^*\bowtie G\\ &=(w(h\cdot z),h|_z)X^*\bowtie G\\ &=(w,h)(z,1_G)X^*\bowtie G. \end{align*} So the action is strongly effective.
\end{proof} \noindent We can describe those semigroups $X^*\bowtie G$ that satisfy (C1). Recall from \cite[page 22]{LW} (or \cite{Nek1}) that a self-similar action of $G$ on $X$ is \emph{recurrent} if the action of $G$ on $X$ is transitive and the homomorphism $\phi_x$ is surjective for any $x\in X$. By \cite[Lemma 1.3(8)]{LW}, the last condition is equivalent to $\phi_w$ being surjective for all $w\in X^*.$ \begin{lemma}\label{lem:recurrent} Let $(G,X)$ be a recurrent self-similar action. Then $S:=X^*\bowtie G$ satisfies (C1). In the converse direction, if $S$ satisfies (C1), then all maps $\phi_x$ for $x\in X$ are surjective. \end{lemma} \begin{proof} Let $(w,g)\in S$ and $(\varnothing, h)\in S^*$. We will show that there is $(\varnothing, k)\in S^*$ such that $(w,g)(\varnothing, h)=(\varnothing, k)(w,g)$. Since $\phi_w$ is surjective, there is $k\in G_w$ such that $\phi_w(k)=ghg^{-1}$. In other words, there is $k\in G$ with $k\cdot w=w$ and $k\vert_w=ghg^{-1}$. Then \[ (\varnothing, k)(w,g)=(k\cdot w, k\vert_wg)=(w, ghg^{-1}g)=(w,g)(\varnothing, h), \] showing (C1). In the other direction, let $x\in X$ and $g\in G$. From (C1) applied to $(x, g^{-1})\in S$ and $(\varnothing, g) \in S^*$, there is $(\varnothing, k)\in S^*$ such that $(x, g^{-1})(\varnothing, g)=(x,1_G)=(\varnothing, k)(x, g^{-1})$, and the right-hand side equals $(k\cdot x, k\vert_{x}g^{-1})$. This says that $k\in G_x$ and $\phi_x(k)=g$, showing surjectivity for all $x\in X$ (hence for all $w\in X^*$ by \cite[Lemma 1.3(8)]{LW}). \end{proof} \noindent It would be interesting to know if for the latter class one can prove a uniqueness result using the expectation onto the $C^*$-subalgebra $\mathcal{C}_I$. \end{document}
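To make the Zappa--Sz\'ep multiplication $(x,g)(y,h)=(x(g\cdot y),g|_yh)$ concrete, here is a small sketch (our own illustration, not code from the paper) implementing $X^*\bowtie G$ for the binary adding machine (odometer), the standard example of a self-similar action: $G=\mathbb{Z}$ acts on binary words (least significant bit first) by addition, and the restriction $g|_w$ is the carry.

```python
# Zappa-Szep product X* |><| G for the binary adding machine (odometer).
# G = (Z, +) acts on binary words (tuples of bits, LSB first) by addition
# with carry; the carry is the restriction g|_w.  Illustrative sketch only.

def act(n, w):
    """Return (n.w, n|_w) for n in Z and a word w of bits, LSB first."""
    L = len(w)
    m = n + sum(b << i for i, b in enumerate(w))
    moved = tuple((m >> i) & 1 for i in range(L))  # n.w: add n modulo 2^L
    return moved, m >> L                            # n|_w: the carry

def mult(p, q):
    """(x,g)(y,h) = (x (g.y), g|_y h) in X* |><| G."""
    (x, g), (y, h) = p, q
    gy, g_res = act(g, y)
    return x + gy, g_res + h   # concatenate words, add in G = Z

# identity element (empty word, 0) and some sample elements
e = ((), 0)
s = ((1, 0, 1), 3)
t, u = ((0, 1), -2), ((1,), 5)

assert mult(e, s) == mult(s, e) == s          # (varnothing, 1_G) is the unit
assert mult(mult(s, t), u) == mult(s, mult(t, u))  # associativity
```

Note that Python's floor-division semantics for negative numbers make `m >> L` the correct carry even when the group element is negative.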
\begin{document} \title{Spectra, eigenstates and transport properties of a $\mathcal{PT}$-symmetric ring} \author{Adrian Ortega} \email{[email protected]} \affiliation{Wigner RCP, Konkoly-Thege M. u. 29-33, H-1121 Budapest, Hungary} \author{Luis Benet} \email{[email protected]} \affiliation{Instituto de Ciencias F\'isicas, Universidad Nacional Aut\'onoma de M\'exico, Av. Universidad s/n, Col. Chamilpa, C.P. 62210 Cuernavaca, Morelos, M\'exico} \author{Hern\'an Larralde} \email{[email protected]} \affiliation{Instituto de Ciencias F\'isicas, Universidad Nacional Aut\'onoma de M\'exico, Av. Universidad s/n, Col. Chamilpa, C.P. 62210 Cuernavaca, Morelos, M\'exico} \date{\today} \begin{abstract} We study, analytically and numerically, a simple $\mathcal{PT}$-symmetric tight-binding ring with an onsite energy $a$ at the gain and loss sites. We show that if $a\neq 0$, the system generically exhibits an unbroken $\mathcal{PT}$-symmetric phase. We study the nature of the spectrum in terms of the singularities in the complex parameter space, as well as the behavior of the eigenstates at large values of the gain and loss strength. We find that in addition to the usual exceptional points, there are ``diabolical points'', and inverse exceptional points at which complex eigenvalues reconvert into real eigenvalues. We also study the transport through the system. We calculate the total flux from the source to the drain, and how it splits along the branches of the ring. We find that while usually the density flows from the source to the drain, for certain eigenstates a stationary ``backflow'' of density from the drain to the source along one of the branches can occur. We also identify two types of singular eigenstates, i.e. states that do not depend on the strength of the gain and loss, and classify them in terms of their transport properties.
\end{abstract} \maketitle \section{Introduction} \label{sec:Introduction} $\mathcal{PT}$-symmetry is by now a well established field of Quantum Physics, in which many interesting phenomena arise; see \cite{Bagarellobook2016, El-GanainyNature2018, Christodoulidesbook2018, Benderbook2019} and references therein. The field has reached a stage in which the theory is on firm ground and applications are starting to appear~\cite{Chen2017,Christodoulidesbook2018,RosaJMPS2021,BergholtzRMP2021}. Among the many instances of systems that can be described by non-Hermitian Hamiltonians, a particularly useful one concerns their role as effective models of open systems. A simple example of such systems are tight-binding models with input (gain) and output (loss) at different sites, which are the kind of systems we study in this paper. While many properties are known for a one-dimensional homogeneous $\mathcal{PT}$-symmetric tight-binding chain with open boundary conditions, see for instance~\cite{OrtegaJPA2020, GraefeJPA2008, JinPRA2009,YogeshPRARapid2011}, the case with periodic boundary conditions has been less explored \cite{CerveroPLA2003, ZnojilPLA2011, ScottPRA2012, ScottJPA2012}. One reason for this is that, for constant values of the tunneling couplings and onsite energies, the spectrum typically becomes complex for any constant strength of the gain and loss different from zero~\cite{YogeshPRARapid2011}, which motivated the study of inhomogeneous rings~\cite{ZnojilPLA2011}. In this work we study the spectrum and eigenvectors of a simple extension of the homogeneous $\mathcal{PT}$-symmetric tight-binding ring. We show that in spite of the periodic boundary conditions, our system generically exhibits the usual unbroken and broken $\mathcal{PT}$-symmetric phases.
This is achieved by introducing complex $\mathcal{PT}$-symmetric onsite energies $\alpha=a\pm i\eta$ at two sites on the ring, the imaginary parts of which represent the gain and loss, while the real part represents a real onsite energy at those sites. The rationale for this extension is the following: In the absence of the complex onsite energies, the system is rotationally invariant and has multiple doubly degenerate states. In this situation, when the system is coupled to a pure gain and loss, the degenerate energy levels indeed generally, though not always, split into complex conjugate pairs and the system does not exhibit a $\mathcal{PT}$-symmetric phase. However, the inclusion of the real onsite energies (without gain and loss for the time being) retains the Hermitian nature of the system, breaks the rotational invariance, and typically lifts the degeneracies, giving rise to a non-degenerate real spectrum. Since the singularities of the spectrum must occur at degeneracies, this implies that there is now a range of values of the gain and loss, from zero up until the first degeneracy occurs, for which the spectrum remains real. In this range, the system is in a $\mathcal{PT}$-symmetric phase. This simple construction contrasts with previous analytical~\cite{ScottPRA2012} and numerical models~\cite{ScottJPA2012}, in which one has to engineer different couplings among the sites of the chain to attain a $\mathcal{PT}$-symmetric phase. Our model is simple enough to be treated analytically. Specifically, following a variation of the method outlined in \cite{OrtegaJPA2020}, we obtain a relatively simple equation that determines the quasi-momentum $\theta$, in terms of which we can write closed expressions for the eigenvalues and eigenstates of the system. These results allow a detailed study of the different types of singularities that appear in the spectrum.
We find that in addition to exceptional points (EPs), there are singularities that resemble ``diabolical points'' \cite{BerryWilkinson84, Berry2004} and, under certain circumstances, we also find inverse exceptional points, at which complex eigenvalues reconvert into real eigenvalues. Our results also allow us to characterize the transport properties through the system. In particular, we show that transport is efficient in the $\mathcal{PT}$-symmetric states, meaning that the inflow and outflow of density, if any, are equal. More interestingly, we calculate how the total flux from the source to the drain splits along the branches of the ring for these $\mathcal{PT}$-symmetric states. We find that while usually the density flows from the source to the drain along both branches at a fixed ratio that depends on the size of the system and the distance between the leads, for other eigenstates a stationary ``backflow'' of density from the drain to the source along one of the branches can occur. Next, we identify singular eigenstates, i.e. states that do not depend on the strength of the gain and loss, and we classify them in terms of their transport properties. We show that one kind of singular states, which depend only on the system size and the distance between the leads, have zero amplitude at the positions of the leads. Thus, these states cannot couple with the gain and loss. Following~\cite{OrtegaJPA2020}, these singular eigenstates are called ``opaque'' (non-conducting). A different type of singular eigenstates, which we call accidental singular states, only appear for certain configurations and specific values of the onsite energy. These states do couple with the gain and loss, and as they have efficient transport, we classify these states as ``transparent'' (or conducting). Finally, we show an interesting feature of this system, which we call ``eigenvalue reconversion''.
Usually, after a coalescence of energies at an EP, these become complex and remain so as the gain and loss increases. However, in our system we observe that as the strength of the gain and loss increases, some complex eigenvalues may coalesce again at a reverse EP, and reconvert to being purely real. Thus the states corresponding to these eigenvalues recover the properties of the $\mathcal{PT}$-symmetric states; for example, they are again capable of conducting efficiently across the system. Additionally, we also show how some eigenstates vanish on one branch of the ring as the strength of the input and output increases, attaining a ``directional character''. \section{The model and its solution} \label{sec:ModelSolution} We consider a Hamiltonian for a periodic tight-binding chain with $N$ sites, uniform couplings among neighboring sites, and complex $\mathcal{PT}$-symmetric self energies at sites $k$ and $k'$. The imaginary parts of these represent a gain and a loss, respectively; the real parts correspond to onsite energies. The Hamiltonian reads~\cite{Benderbook2019} \begin{equation} H = \sum_{i=1}^{N-1}\Big(\ket{i}\bra{i+1} + \ket{i+1}\bra{i} \Big) + \ket{1}\bra{N} + \ket{N}\bra{1} + \Big(\alpha\ket{k}\bra{k} + \alpha^*\ket{k'}\bra{k'} \Big), \label{eq:ham} \end{equation} where $\alpha=a+i\eta$ and $a,\eta$ are real parameters. With periodic boundary conditions, the relevant coordinates are the distances between the positions of gain and loss along each branch of the ring, i.e. $k'-k$ and $N-(k'-k)$ (for concreteness, we shall consider that $k'>k$), or, equivalently, $k'-k$ and $N$. The periodic boundary conditions allow one to choose an appropriate $\mathcal{P}$ operator for arbitrary positions of the source and drain.
Specifically, $\mathcal{P}$ should ``reflect'' the system along the axis that splits the ring symmetrically half way between $k$ and $k'$ (note that the representation of the parity operator can be different from that in the open boundary case~\cite{OrtegaJPA2020}). The time inversion operator is defined as usual, namely $\mathcal{T}A\mathcal{T} = A^*$, where ${}^*$ represents complex conjugation and $A$ is any matrix operator~\cite{WangPTRS2013}. Then, as expected, the Hamiltonian commutes with the operator $\mathcal{PT}$. Following the method of Losonczi-Yueh~\cite{Losonczi1992, Yueh2005, OrtegaJPA2020}, the eigenvectors can be obtained explicitly. From Eq.~(\ref{eq:ujapp}), if we call $\theta$ the quasi-momentum, then the $u_j$ component reads \begin{equation} \begin{split} u_j &=\frac{1}{\sin\theta}\Big[ u_0\sin[(1-j)\theta] + u_1\sin(j\theta) -\alpha u_k\sin[(j-k)\theta] \Theta(j-k-1)\\ &\qquad -\alpha^* u_{k'} \sin[(j-k')\theta]\Theta(j-k'-1) \Big], \end{split} \label{eq:uj} \end{equation} with $\Theta(x)$ the usual Heaviside function and $1\leq j\leq N$ an integer. The eigenvalues of the system can be written as \begin{equation} E = 2\cos\theta. \label{eq:E} \end{equation} The $u_k$ and $u_{k'}$ components are obtained from equation (\ref{eq:uj}) by evaluating at $j=k$ and $j=k'$, respectively. Thus, all the eigenvector components can be expressed in terms of $u_0$ and $u_1$. Finally, requiring that the homogeneous system for $u_0$ and $u_1$ that results from Eq.~(\ref{eq:uj}) has nontrivial solutions implies that $\theta$ must fulfill the equation \begin{equation} 4\, \sin^2\left(\frac{N\theta}{2}\right) + 2a \frac{\sin(N\theta)}{\sin\theta} - |\alpha|^2\frac{\sin[(k'-k)\theta]\sin[(N-k'+k)\theta]}{\sin^2\theta} = 0. \label{eq:theta} \end{equation} Details of the derivation of Eq.~(\ref{eq:theta}) can be found in Appendix~\ref{sec:egvalsol}, which considers a slightly more general Hamiltonian. For $\alpha=0$, i.e.
for the Hermitian, rotationally invariant system, the quasi-momenta that fulfill Eq.~(\ref{eq:theta}) are $\theta_0=2\pi m/N$, where $m$ is an integer that goes from 0 to $N-1$. We note that the quasi-momenta $2\pi m/N$ and $2\pi (N-m)/N$ yield the same eigenvalue $E$ for all values of $m$. That is, all eigenvalues are doubly degenerate except for those associated with $\theta=0$ and, if $N$ is even, for $\theta=\pi$. We consider the case $a=0$ using standard perturbation theory, and take $\eta$ as our perturbation parameter. First, we discuss what happens to the quasi-momenta that correspond to degenerate states, and later the non-degenerate cases $\theta=0,\pi$. The unperturbed solutions have the form $\theta_0=2\pi m/N$ with $m=1, \dots, N-1$, excluding $m= N/2$ for even $N$, and we write the perturbed solutions as $\theta=\theta_0+\delta\theta_e \eta$. Expanding Eq.~(\ref{eq:theta}) to second order in $\eta$ and equating the expression to zero, we obtain \begin{equation} \Big(N^2 \delta\theta_e^2 + \frac{\sin^2[(k'-k)\theta_0]}{\sin^2\theta_0}\Big)\eta^2 = 0, \end{equation} which yields \begin{equation} \delta\theta_e = i\sin[(k'-k)\theta_0]/(N\sin\theta_0). \label{eq:delta_e} \end{equation} In Eq.~(\ref{eq:delta_e}) we have chosen the positive sign of the square-root, though the negative sign would be just as good; either choice yields the same set of perturbed solutions, differing only in the specific labelling of the quasi-momenta by the eigenvalues. The resulting (perturbed) quasi-momenta no longer lead to a double degeneracy: the degenerate unperturbed energies split into a complex-conjugate pair of eigenvalues for $\eta\neq 0$, the difference between these pairs being proportional to the perturbation parameter $\eta$. Thus, for $a=0$ we have a broken $\mathcal{PT}$-symmetric phase as soon as $\eta$ is different from zero, provided that $(k'-k)\theta_0\neq r\pi$ for any integer $r$.
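These perturbative predictions are straightforward to verify numerically. Combining Eq.~(\ref{eq:E}) with Eq.~(\ref{eq:delta_e}) gives the leading-order splitting $\operatorname{Im} E \approx \pm(2/N)\sin[(k'-k)\theta_0]\,\eta$. The following sketch (our own illustration, not code from the paper; sites are 0-indexed) diagonalizes the Hamiltonian of Eq.~(\ref{eq:ham}) and checks both this splitting for $a=0$ and the reality of the spectrum at small $\eta$ when $a\neq 0$:

```python
import numpy as np

def ring_hamiltonian(N, k, kp, a, eta):
    """Tight-binding ring of Eq. (eq:ham), with 0-indexed sites.

    Gain alpha = a + i*eta at site k, loss alpha* at site kp."""
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0
    H[k, k] = a + 1j * eta
    H[kp, kp] = a - 1j * eta
    return H

N, k, kp = 6, 0, 2   # i.e. k = 1, k' = 3 in the 1-indexed notation of the text

# a = 0: complex splitting linear in eta (diabolical-point behaviour)
eta = 1e-3
E0 = np.linalg.eigvals(ring_hamiltonian(N, k, kp, 0.0, eta))
pred = (2.0 / N) * np.sin((kp - k) * 2 * np.pi / N) * eta  # m = 1 pair
print(np.abs(E0.imag).max(), pred)   # agree up to O(eta^2) corrections

# a != 0: degeneracies lifted, spectrum real at small eta (PT phase)
E = np.linalg.eigvals(ring_hamiltonian(N, k, kp, 0.5, 0.01))
print(np.abs(E.imag).max())          # numerically zero
```

The same helper is reused in the checks below; only the parameters change.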
For the case $\theta_0=0$ (or $\pi$ for even $N$), we obtain $\delta\theta_e=\sqrt{(k'-k)(N-k'+k)}/N$, which is real; thus the eigenvalues corresponding to these values of $\theta_0$ remain real for $\eta \neq 0$. Now we consider the case with $\eta=0$ and take $a\neq 0$ as our perturbation parameter. This is a Hermitian system and therefore the eigenvalues are real. As before, we begin with quasi-momenta of the form $\theta_0=2\pi m /N$. Expanding~(\ref{eq:theta}) to second order in $a$, we obtain \begin{equation} \Big(N^2\delta\theta_a^2+\frac{2N}{\sin\theta_0}\delta\theta_a + \frac{\sin^2[(k'-k)\theta_0]}{\sin^2\theta_0}\Big)a^2 = 0, \label{eq:perturb_a} \end{equation} which yields \begin{equation} \delta\theta_a = \frac{1}{N}\left(-\frac{1}{\sin\theta_0} + \frac{\cos[(k'-k)\theta_0]}{|\sin\theta_0|}\right). \label{eq:delta_a} \end{equation} As in Eq.~(\ref{eq:delta_e}), we have chosen in Eq.~(\ref{eq:delta_a}) the positive solution of the square-root. Thus, the non-zero onsite energy lifts the degeneracy of the rotationally symmetric $a=0$ case. The case $\theta_0=0$ is treated similarly, but using $\theta=\sqrt{a}\,\delta\theta_a$ due to the singularities arising from the denominators in Eq.~(\ref{eq:theta}). In this case, the first order expansion yields a purely imaginary correction $\delta\theta_a = i\sqrt{2/N}$, and therefore the corresponding energy is $2\cosh(\sqrt{2a/N})$, which is real and larger than 2. A similar approach can be used for $\theta_0=\pi$, and yields $\delta\theta_a=\sqrt{2/N}$. \begin{figure} \caption{Spectrum for $a=0$ (top row) and $a=1.5$ (bottom row) as a function of $\eta$. The left panels illustrate the real part of the eigenvalues $E$ and the right ones their imaginary part.
In the first case the spectrum is always complex for $\eta>0$ and the $\mathcal{PT}$-symmetric phase is absent; in the second case the spectrum remains real for a finite range of $\eta$.} \label{fig:uno} \end{figure} The central point is that, were it not for the self-energies $a$ at the leads, the presence of gain and loss would, in general, split the eigenvalues corresponding to degenerate states into complex conjugate pairs as soon as $\eta\neq 0$, and there would be no $\mathcal{PT}$-symmetric phase. The inclusion of the self-energy $a$ at the leads lifts the double degeneracies that are present for $a=0$, and allows for a range of values of the parameter $\eta$ in which all the eigenvalues of the system are real despite the lack of Hermiticity of the Hamiltonian. That is, a $\mathcal{PT}$-symmetric phase is possible, in general, by simply introducing the self energies. These findings are illustrated in Fig.~\ref{fig:uno}. Further, as we shall see below, under certain conditions which can be guessed from Eq.~(\ref{eq:delta_e}), there can also be a $\mathcal{PT}$-symmetric phase for $a=0$; see bottom row in Fig.~\ref{fig:singular}. For the case $a=0$, the point $\eta=0$ is somewhat unusual as a singularity in the complex plane, at least in comparison with the behaviour around an exceptional point. Indeed, as mentioned above, the first correction to the quasi-momenta is linear in $\eta$, which carries over to the energies. This is in contrast to the square-root singularity which is usually observed close to an exceptional point~\cite{BerryWilkinson84,Seyranianbook2003, Berry2004,SeyranianJPA2005,HeissJPA2012}. This behavior is illustrated in Fig.~\ref{fig:uno}, in the imaginary part of $E$ for $a=0$. The linear dependence is reminiscent of the so-called diabolical points, also known as conical intersections \cite{BerryWilkinson84, Berry2004}. Thus, the $\mathcal{PT}$-symmetric ring exhibits both diabolical points (at $a=0$) as well as exceptional points (when $a\neq 0$).
This property makes it an attractive system to exploit the different sensitivities that these singularities induce~\cite{Chen2017,RosaJMPS2021}. We consider now the behaviour of the eigenvalues in the limit $\eta\rightarrow\infty$, which can also be understood using standard perturbation theory. Similarly to the case of the chain with open boundary conditions~\cite{DangelPRA2018, OrtegaJPA2020}, when $\eta\rightarrow\infty$ the system effectively separates, in general, into four subsystems: the two sites of the gain and loss, where the imaginary potential dominates, and two sub-chains with vanishing boundary conditions (unless the gain and loss are placed beside each other). Indeed, as $\eta$ becomes very large, two eigenstates become increasingly localized at the gain and at the loss, becoming decoupled from the neighbouring sites. The eigenenergies of the states localized at the gain and loss are obtained by setting $\theta=i\phi$ in Eq.~(\ref{eq:theta}). In this case, assuming $\phi\to\infty$ as $\eta$ grows, we find that $\phi$ fulfills \begin{equation} e^{2\phi}-2ae^{\phi}+|\alpha|^2 \to 0, \end{equation} whose solution is $\phi\sim\log(a\pm i\eta)$, which is consistent with our assumption. Thus, the energy of these states is given by \begin{equation} E_{\theta,\eta\rightarrow\infty} = 2\cosh\phi \sim |a+i\eta|e^{i\arg (a\pm i\eta)} + \frac{1}{|a+i\eta|}e^{-i\arg(a \pm i\eta)}. \label{eq:Elargeeta} \end{equation} This expression shows that for large $\eta$ the real part of these eigenvalues tends to $a$, while the imaginary part grows as $\pm \eta$. We notice that for $a=0$ we have $\arg(\pm i\eta)=\pm\pi/2$ and then $E_{\theta,\eta\rightarrow\infty}\sim\pm i\left(\eta -1/\eta \right)$. This is the same result, in the corresponding limit, as for the chain with open boundary conditions~\cite{OrtegaJPA2020, DangelPRA2018}. The behaviour of the energies for the remaining values of $\theta$ is computed as follows. We first divide Eq.~(\ref{eq:theta}) by $|\alpha|^2$.
In the limit $\eta\rightarrow\infty$ we use the perturbation parameter $\epsilon=1/|\alpha|^2$. At leading order, the solutions $\theta_0$ satisfy \begin{equation} \frac{\sin[(k'-k)\theta_0]\sin[(N-k'+k)\theta_0]}{\sin^2\theta_0} = 0. \label{eq:accidental-states} \end{equation} These can be written as $\theta_0 = \pi m_1/(k'-k)$, for $m_1= 1,\dots,(k'-k)-1$, or as $\theta_0 = \pi m_2/(N-k'+k)$ for $m_2=1,\dots,(N-k'+k)-1$. To determine the nature of the first order correction in $\epsilon$, we consider the derivative of the function $g(\theta)=\sin[(k'-k)\theta]\sin[(N-k'+k)\theta]$ at $\theta_0$: \begin{eqnarray} g'(\theta_0) = (k'-k)\cos[(k'-k)\theta_0]\sin[(N-k'+k)\theta_0] + \nonumber\\ \qquad (N-k'+k)\sin[(k'-k)\theta_0]\cos[(N-k'+k)\theta_0]. \label{eq:theta0} \end{eqnarray} If $g'(\theta_0)\neq 0$, then we can write the perturbation expansion as $\theta\approx \theta_0 + \epsilon\theta_1 +\dots$, where $\theta_1$ and all subsequent corrections are real. In this situation, the imaginary part of the eigenvalues is identically zero for large enough values of $\eta$, which, as we shall see, occurs either by crossing a reverse EP or if the eigenvalue was never complex. To analyze the case $g'(\theta_0)=0$, we write generically $\theta_0=\pi m / q$, with $m,q\in \mathbb{Z}$; then $g'(\theta_0)=0$ occurs if $q$ divides $k'-k$ and $N$ simultaneously. If $Nm/q$ is even, $\theta_0$ is an exact solution of Eq.~(\ref{eq:theta}) independently of $\eta$ and corresponds to a ``singular state'', which we discuss in depth in Sect.~\ref{sec:singular}. If $Nm/q$ is odd, then the appropriate perturbation expansion is $\theta\approx \theta_0 + \epsilon^{1/2}\theta_1$, where \begin{equation} \theta_1 = \pm 2i \frac{\sin\theta_0}{\sqrt{(k'-k)(N-k'+k)}}. \label{eq:theta1OddCase} \end{equation} In this case, a complex correction term remains, and it vanishes slowly as $\eta\to\infty$.
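The large-$\eta$ decoupling is easy to observe numerically. In the sketch below (our own check, not from the paper; sites 0-indexed) we take $N=10$, $k=1$, $k'=5$ and a large $\eta$: one eigenvalue lies within $O(1/\eta)$ of $\alpha=a+i\eta$ with its eigenvector essentially localized at the gain site, while all eigenvalues other than this state and its conjugate partner have small imaginary parts.

```python
import numpy as np

def ring_hamiltonian(N, k, kp, a, eta):
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0
    H[k, k], H[kp, kp] = a + 1j * eta, a - 1j * eta
    return H

N, k, kp, a, eta = 10, 0, 4, 0.5, 50.0
alpha = a + 1j * eta
E, V = np.linalg.eig(ring_hamiltonian(N, k, kp, a, eta))

i_gain = np.argmin(np.abs(E - alpha))   # eigenvalue tracking the gain site
weight = np.abs(V[k, i_gain])**2        # eig returns normalized eigenvectors
print(abs(E[i_gain] - alpha), weight)   # O(1/eta) distance, weight close to 1

i_loss = np.argmin(np.abs(E - alpha.conjugate()))
others = np.delete(E, [i_gain, i_loss])
print(np.abs(others.imag).max())        # small: vanishing or exactly zero Im E
```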
Examples of these behaviors can be seen in Figs.~\ref{fig:5} and~\ref{fig:7}. \section{Transport and eigenstate classification} \subsection{Flux and transport} \label{ssec:Flux} As in the case of the linear chain~\cite{OrtegaJPA2020}, we define the local density flux as \begin{equation} J_n \equiv -2\,\textrm{Im}\left( c_n(t) c_{n-1}^*(t)\right), \end{equation} where the coefficients $c_n(t)$ are defined by the solution to the time-dependent Schr\"odinger equation written as $|\Psi(t)\rangle = \sum_j c_j(t) |j\rangle$, in the site basis. When the initial condition is an eigenstate of the Hamiltonian, it evolves with time-dependent coefficients $c_n(t) = \exp(-i E_\theta t) u_n$. If we focus on the non-degenerate $\mathcal{PT}$-symmetric states, then the eigenvalue $E_\theta$ is real, and we obtain \begin{eqnarray} J_{n}(\theta) = -\eta\left[1-\frac{\tan[(\frac{N}{2}-k'+k)\theta]}{\tan(\frac{N\theta}{2})} \right]|u_k|^2 +2 \eta\Big(|u_k|^2\Theta(n-k-1) - |u_{k'}|^2\Theta(n-k'-1) \Big).\nonumber\\ \label{eq:flux} \end{eqnarray} Since the flux must be equal for $n\leq k$ and $n>k'$, the above result also implies that $|u_k|^2=|u_{k'}|^2$ for these states. If we denote by $J_{\rm right}$ the flux for $k<n\leq k'$, and by $J_{\rm left}$ the flux for the other branch of the ring ($n\leq k$ and $n>k'$), from Eq.~(\ref{eq:flux}) we have \begin{eqnarray} J_{\rm right}=2\eta\left[\frac{\sin[(N-k'+k)\theta]}{\sin[(k'-k)\theta] +\sin[(N-k'+k)\theta]}\right]|u_k|^2 \label{eq.Jright} \end{eqnarray} and \begin{eqnarray} J_{\rm left}=-2\eta\left[\frac{\sin[(k'-k)\theta]}{\sin[(k'-k)\theta] +\sin[(N-k'+k)\theta]}\right]|u_k|^2. \label{eq.Jleft} \end{eqnarray} Then \begin{eqnarray} J_{\rm right}-J_{\rm left}=2\eta|u_k|^2, \label{eq:efficient_transp} \end{eqnarray} indicating that the total transport from the source to the sink is efficient, in the sense that the inflow of density equals the total flux along the branches of the ring.
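The jump structure behind Eq.~(\ref{eq:flux}) follows from taking the imaginary part of the eigenvalue equation: for a real eigenvalue, $J_{n+1}-J_n = 2\eta|u_k|^2\,\delta_{n,k}-2\eta|u_{k'}|^2\,\delta_{n,k'}$, so the flux is piecewise constant along each branch. The sketch below (our own check, 0-indexed sites, parameters as in Fig.~\ref{fig:dos}) verifies this continuity relation, together with $|u_k|=|u_{k'}|$, for a numerically obtained $\mathcal{PT}$-symmetric eigenstate:

```python
import numpy as np

def ring_hamiltonian(N, k, kp, a, eta):
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0
    H[k, k], H[kp, kp] = a + 1j * eta, a - 1j * eta
    return H

N, k, kp, a, eta = 6, 0, 2, 0.5, 0.05
E, V = np.linalg.eig(ring_hamiltonian(N, k, kp, a, eta))

i0 = np.argmin(np.abs(E.imag))   # pick a state with real eigenvalue
u = V[:, i0]
# local flux J_n = -2 Im(u_n u_{n-1}^*), with u_{-1} = u_{N-1} on the ring
J = -2 * np.imag(u * np.conj(np.roll(u, 1)))
source = np.zeros(N)
source[k], source[kp] = 2 * eta * abs(u[k])**2, -2 * eta * abs(u[kp])**2
print(np.abs(np.roll(J, -1) - J - source).max())  # flux continuity: ~ 0
print(abs(u[k]) - abs(u[kp]))                     # |u_k| = |u_k'|: ~ 0
```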
\begin{figure} \caption{Stationary local fluxes $J_n(\theta)$ corresponding to purely real energies, plotted as a radial coordinate in terms of $n$, the position along the ring, for a ring with $N=6$ and leads located at $k=1$ and $k^\prime=3$ with $a=0.5$, for different values of $\eta$. Concentric circles correspond to a constant value of the flux; for simplicity we have included three circles with the corresponding value of the flux. Note that when $\eta=0.05$, for four eigenstates $J_{\rm right}$ and $J_{\rm left}$ have the same sign.} \label{fig:dos} \end{figure} In Fig.~\ref{fig:dos} we present the stationary local density flux $J_n(\theta)$, plotted as a radial coordinate, in terms of the position along the ring of size $N=6$, with the leads located at $k=1$ and $k^\prime=3$; we take $a=0.5$ and we consider three values of $\eta=0.05,1.5,3$. The local currents for the $\mathcal{PT}$-symmetric states are given by Eq.~(\ref{eq.Jright}) from $n=1$ through $n=3$ in the clockwise direction, and by Eq.~(\ref{eq.Jleft}) in the anti-clockwise direction. Note that in the $\eta=0.05$ case, the fluxes corresponding to the states with energies $E\sim-1.85$ and $2.19$ have $J_{\rm right}>0$ and $J_{\rm left}<0$, respectively, which implies that density flows from the source to the sink on both branches. However, for all the other states, the sign of the current is the same on both branches; thus there is a backflow of density from sink to source along one branch or the other in each of these cases. For the remaining values of $\eta$, we only show the flux for the $\mathcal{PT}$-symmetric states; the other eigenstates correspond to energies having a nonzero imaginary part and thus do not have stationary fluxes. \subsection{Singular eigenstates} \label{sec:singular} As in the case with open boundary conditions, eigenstates with energies that do not depend on the values of $\eta$ and $a$ can appear in the system.
In \cite{OrtegaJPA2020} we called these states opaque if the wave function vanished at the contacts, implying that there was no transport through the system, or transparent if the wave function did not vanish at the contacts and transport was efficient, in the sense, as before, that the input is equal to the output and there is no build-up or depletion of density inside the system. For the time being we call these states ``singular states'', and later on we distinguish under which conditions they correspond to opaque or transparent states. \begin{figure} \caption{Spectra of a $\mathcal{PT}$-symmetric ring with $N=6$ and $k'-k=3$, for $a=0.5$ (top) and $a=0$ (bottom).} \label{fig:singular} \end{figure} We begin by considering solutions of Eq.~(\ref{eq:theta}) of the form $\theta_s=2\pi r/M$, where $M$ divides $N$ and $0<r<M/2$ is an integer. Writing $N=qM$ with $q$ an integer, then $N\theta_s=2\pi rq$ and $\cos (N\theta_s)=1$, $\sin(N\theta_s)=0$, and $\sin(N\theta_s/2)=0$. The first two terms of Eq.~(\ref{eq:theta}) vanish, which implies that the third term must be zero for $\theta_s$ to be a solution. We distinguish two cases when this occurs: either $k'-k$, or equivalently $N-k'+k$, is divided by $M$; or $2(k'-k)$ is divided by $M$. These conditions define the singular states $\theta_s$. To illustrate the occurrence of singular states, we consider $N=6$ as an example, which can be factorized by $M=2,3,6$. First, for $M=2$, there is no $r$ allowed, and hence no singular states associated to this value of $M$. For $M=3$ we have $r=1$ and $M$ divides $k'-k=3$, which defines the singular state $\theta_s=2\pi/3$. Finally, for $M=6$ we can have $r=1,2$, and $M$ divides $2(k'-k)=6$. The case $r=2$ yields the same singular state as $M=3$, while $r=1$ defines the singular state $\theta_s=\pi/3$. Then, for a $\mathcal{PT}$-symmetric ring of size $N=6$ and having the separation $k'-k=3$ between the gain and loss, the singular states correspond to the energies $E=\pm 1$. This case is illustrated in Fig.~\ref{fig:singular} (top) for $a=0.5$.
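These singular states can be verified directly: for $N=6$ with the leads at sites $0$ and $3$ (0-indexed), the vectors $u_j=\sin(j\theta_s)$ with $\theta_s=\pi/3$ and $\theta_s=2\pi/3$ vanish at both contacts and are exact eigenvectors with $E=1$ and $E=-1$ for every value of $a$ and $\eta$. The sketch below (our own check) confirms this, and also the circle relation $E^2+\eta^2=4$ of Eq.~(\ref{eq:circle}) for the remaining eigenvalues when $a=0$:

```python
import numpy as np

def ring_hamiltonian(N, k, kp, a, eta):
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0
    H[k, k], H[kp, kp] = a + 1j * eta, a - 1j * eta
    return H

N, k, kp = 6, 0, 3          # k' - k = 3 = N/2
j = np.arange(N)
for a, eta in [(0.5, 0.7), (0.0, 1.3), (1.0, 2.5)]:
    H = ring_hamiltonian(N, k, kp, a, eta)
    for E, u in [(1.0, np.sin(j * np.pi / 3)), (-1.0, np.sin(2 * j * np.pi / 3))]:
        assert np.allclose(H @ u, E * u)                  # eigenvector for every a, eta
        assert abs(u[k]) < 1e-12 and abs(u[kp]) < 1e-12   # opaque: zero at the leads

# a = 0, eta = 1: the non-singular eigenvalues satisfy E^2 + eta^2 = 4
ev = np.sort(np.linalg.eigvals(ring_hamiltonian(N, k, kp, 0.0, 1.0)).real)
print(ev)   # doubly degenerate +-1 plus the pair +-sqrt(3)
```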
The reasoning used above is independent of $a$ and therefore it holds for $a=0$ too. In this case the eigenvalues $E=\pm 1$ are doubly degenerate for all $\eta$; see Fig.~\ref{fig:singular} (bottom). In this configuration the exceptional point observed at $\eta= 2$ is related to the eventual coalescence of the eigenvalues $E=\pm 2$ for $\eta=0$. Note that the same situation for $a=0$ is encountered if $N$ is even and $k'-k=N/2$: all singular states are real and doubly degenerate for $\eta\in[-2,2]$, where a $\mathcal{PT}$-symmetric phase exists. Indeed, from Eq.~(\ref{eq:theta}) with $a=0$, $N$ even and $k'-k=N-k'+k=N/2$, it follows that the non-degenerate eigenvalues satisfy \begin{equation} \eta^2 + 4\cos^2\theta = \eta^2 + E^2 = 4, \label{eq:circle} \end{equation} which is a circle of radius 2 in the $E$--$\eta$ plane for $\eta^2 \leq 4$, or a hyperbola for imaginary $E$; cf. Fig.~\ref{fig:singular} (bottom). We emphasize that the $\mathcal{PT}$-symmetric phase just described has zero onsite energies at all sites. Finally, if $a\neq 0$, a tedious but simple calculation shows that for the singular states described thus far we have $u_k= u_{k'}=0$. This implies that these states do not couple with the source and sink, and cannot be used for transport through the system. In \cite{OrtegaJPA2020} we called this kind of states ``opaque''. \subsection{Accidental singular eigenstates} The introduction of the onsite energy at the contacts gives rise to a new kind of singular states that we refer to as ``accidental singular states''. These states correspond to values $\theta_a=\pi m/(k'-k)$ for integer $0<m<(k'-k)$, where we assume that $k'-k$ does not divide $N$. Similarly, the states $\theta_a=\pi m/(N-k'+k)$ also define accidental singular states, with $0<m<(N-k'+k)$ integer and with $N-k'+k$ not dividing $N$. With these definitions Eq.~(\ref{eq:accidental-states}) holds, i.e., the last term in Eq.~(\ref{eq:theta}) vanishes but not the first two.
Yet, we can now choose the value of the onsite energy $a$ so that both terms cancel; specifically \begin{equation} a=-\sin\theta_a \tan\left(N\theta_a/2\right). \label{eq:atune} \end{equation} An instance of an accidental singular state is illustrated in Fig.~\ref{fig:accidental} for $N=5$ with $k'-k=2$ or $N-k'+k=3$. In this case we have accidental eigenstates when $\theta_a=\pi/3$, if $a=0.5$, corresponding to an energy $E=1$. Other accidental eigenstates in the system occur at $\theta_a=2\pi/3$ with $a=1.5$, and $\theta_a=\pi/2$ with $a=-1$. In contrast to the singular states discussed above, the amplitude of the eigenvectors does not vanish at the contacts. This situation corresponds to the states we called ``transparent'' in \cite{OrtegaJPA2020}, through which transport is efficient, independently of the value of $\eta$. \begin{figure} \caption{Spectrum of a $\mathcal{PT}$-symmetric ring with $N=5$, $k'-k=2$ and $a=0.5$.} \label{fig:accidental} \end{figure} \section{Eigenvalue reconversion and directional eigenstates} \label{sec:egreconv} We discuss now an interesting property of the system, which we term eigenvalue reconversion. As we have shown above, for small enough $\eta$ (and $a$ distinct from zero) we have a $\mathcal{PT}$-symmetric phase, which eventually is lost through an eigenvalue coalescence at an EP. In addition, for $\eta\to\infty$ we showed that two eigenvalues have an asymptotically increasing imaginary part, whereas in the same limit the $N-2$ remaining eigenvalues have imaginary parts that either vanish as $\eta$ increases or are identically zero. Interestingly, this latter case may occur via the reconversion of a pair of complex conjugate eigenvalues back to being purely real at a reverse EP. \begin{figure} \caption{Real and imaginary parts of the spectrum in terms of $\eta$, illustrating the eigenvalue reconversion, for $N=10$, with the gain and loss located at $k=1$ and $k^\prime=5$, for $a=0.5$.
Note that there are two pairs of eigenvalues that, after a second coalescence, undergo a transition from complex back to purely real.} \label{fig:5} \end{figure} The phenomenon of eigenvalue reconversion is illustrated in Fig.~\ref{fig:5}, which shows the case $N=10$, with $k=1$, $k'=5$ and $a=0.5$. It is apparent that two energies have imaginary parts that diverge to $\pm \infty$ for $\eta$ large, and two eigenvalues remain real for all values of $\eta$, though they are not singular states since they have a weak dependence on $\eta$. The remaining six eigenvalues become complex after coalescences at EPs; four of them become real again after new coalescences at reverse EPs. We notice that the two eigenvalues that still have imaginary parts that tend asymptotically to zero as $\eta$ grows correspond to $\theta_0=\pi/2$; thus, using Eq.~(\ref{eq:theta1OddCase}) for $\theta_1$, we obtain the quasi-momenta \begin{equation} \theta \sim \frac{\pi}{2}\pm \frac{i}{\sqrt{6}|\alpha|}, \label{eq:Ea} \end{equation} indicating that the imaginary parts indeed vanish slowly in this case, as opposed to what happens at the reverse EPs. \begin{figure} \caption{Eigenvectors in the site basis for some real eigenvalues illustrated in Fig.~\ref{fig:5}.} \label{fig:6} \end{figure} As described in Sect.~\ref{sec:ModelSolution}, for large values of $\eta$ the spectrum effectively splits into four parts: two levels are associated with localized states at the gain and loss positions, and then we have two separate open chains between the contacts whose eigenvalues are asymptotically given by Eq.~(\ref{eq:accidental-states}). According to this effective decoupling, we expect that some states are non-zero in one of the sub-chains between the contacts, and vanishingly small in the other. In this case, whatever transport may happen from the gain to the loss can only occur along one branch of the ring. We call these states ``directional eigenstates''.
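This effective decoupling can be checked directly; the Python sketch below (assuming the ring Hamiltonian of the main text, with $N=10$, $k=1$, $k'=5$ and $a=0.5$, the configuration of Fig.~\ref{fig:5}) verifies that at large $\eta$ two eigenvalues approach $a\pm i\eta$ while the remaining eight approach the real spectra of the two open branches between the contacts:

```python
import numpy as np

def ring(N, k, kp, a, eta):
    # N-site periodic chain, unit hopping; onsite a + i*eta at site k (gain)
    # and a - i*eta at site kp (loss); sites are 1-indexed as in the text.
    H = np.zeros((N, N), dtype=complex)
    for j in range(N):
        H[j, (j + 1) % N] = H[(j + 1) % N, j] = 1.0
    H[k - 1, k - 1] = a + 1j * eta
    H[kp - 1, kp - 1] = a - 1j * eta
    return H

N, k, kp, a, eta = 10, 1, 5, 0.5, 60.0
ev = np.linalg.eigvals(ring(N, k, kp, a, eta))

# Two levels localize at the contacts, with eigenvalues approaching a +/- i*eta.
assert np.min(np.abs(ev - (a + 1j * eta))) < 0.1
assert np.min(np.abs(ev - (a - 1j * eta))) < 0.1

# The remaining eight approach the spectra of the two open branches between
# the contacts (3 and 5 sites): E = 2cos(pi r/4) and E = 2cos(pi s/6).
bulk = np.sort(np.array([e.real for e in ev if abs(e.imag) < 1.0]))
target = np.sort(np.concatenate([2 * np.cos(np.pi * np.arange(1, 4) / 4),
                                 2 * np.cos(np.pi * np.arange(1, 6) / 6)]))
assert bulk.size == 8 and np.allclose(bulk, target, atol=0.08)
```

The tolerances are generous on purpose: the asymptotic values are approached only as $1/\eta$.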
Figure~\ref{fig:6} illustrates the modulus squared for a couple of these eigenvectors in the site basis; the parameters used for the ring correspond to those used in Fig.~\ref{fig:5}. The eigenvectors in red correspond to energies that remain real for all $\eta$; the eigenvectors in blue correspond to a couple of energies that suffer a reconversion. The specific values of $\eta$ and $E$ are given in the figure. For small values of $\eta$, cf. the left panel in Fig.~\ref{fig:6}, the eigenvectors are more or less evenly distributed throughout the chain. The directional eigenstates become clear when $\eta$ is large compared to the value at which the reconversions occur; see Figure~\ref{fig:6}, right panel. We note that both eigenvectors in red tend to the same local probability $|\langle \Psi|n\rangle|^2$ and are localized on the right of the chain; the same statement holds for the eigenvectors illustrated in blue, which localize on the left of the chain. \begin{figure} \caption{Similar to Fig.~\ref{fig:6}, but for $N=12$, $k=1$, $k^\prime=5$ and $a=0.5$.} \label{fig:7} \end{figure} This eigenvector localization can be understood qualitatively by recalling that as $\eta$ increases, the branches of the ring become effectively decoupled. In this regime, we can consider the eigenstates and eigenvalues of each branch separately. For each eigenvalue that is not shared between both branches, an eigenstate of the complete system can be constructed by considering a vector that coincides with the corresponding eigenstate of the appropriate branch, and vanishes on the other branch, as is observed. If both branches share an eigenvalue, the corresponding eigenstates do not necessarily localize in one branch as discussed. This situation is exemplified for a ring with $N=12$, $k=1$, $k^\prime=5$ and $a=0.5$ in Fig.~\ref{fig:7}. In this case, in the large $\eta$ limit, the ring is split into two branches with 3 and 7 sites, aside from the leads.
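This picture can be made quantitative with a small numerical check (Python; assuming the ring Hamiltonian of the main text, with $N=12$, $k=1$, $k'=5$ and $a=0.5$ as in Fig.~\ref{fig:7}): at large $\eta$, an eigenstate whose eigenvalue is unique to the 7-site branch, e.g. $E=2\cos(\pi/8)$ from the open-chain spectrum, carries essentially all of its weight on that branch.

```python
import numpy as np

def ring(N, k, kp, a, eta):
    # N-site periodic chain, unit hopping; onsite a + i*eta at site k (gain)
    # and a - i*eta at site kp (loss); sites are 1-indexed as in the text.
    H = np.zeros((N, N), dtype=complex)
    for j in range(N):
        H[j, (j + 1) % N] = H[(j + 1) % N, j] = 1.0
    H[k - 1, k - 1] = a + 1j * eta
    H[kp - 1, kp - 1] = a - 1j * eta
    return H

N, k, kp, a, eta = 12, 1, 5, 0.5, 60.0
ev, U = np.linalg.eig(ring(N, k, kp, a, eta))

# Pick the eigenvalue closest to E = 2cos(pi/8), which belongs to the 7-site
# branch only (the 3-site branch spectrum is {+sqrt(2), 0, -sqrt(2)}).
i = int(np.argmin(np.abs(ev - 2 * np.cos(np.pi / 8))))
w = np.abs(U[:, i])**2
w /= w.sum()
weight_short = w[1:4].sum()    # sites 2..4: the 3-site branch
weight_long = w[5:12].sum()    # sites 6..12: the 7-site branch
assert weight_long > 0.95 and weight_short < 0.01
```

For the eigenvalues shared by both branches (those of the 3-site branch) no such localization is guaranteed, consistently with the discussion below.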
Following~\cite{OrtegaJPA2020}, the quasi-momenta that yield the energies of each branch read $\theta_{\rm short}=\pi r/4$, $r=1,\dots, 3$, and $\theta_{\rm long}=\pi s/8$, with $s=1, \dots, 7$. Clearly, all quasi-momenta from the short branch have a corresponding quasi-momentum in the long branch, and thus the eigenvalues of the short branch are also eigenvalues of the long one; this indicates that no eigenvector will be localized in the short branch. This is illustrated in Fig.~\ref{fig:7}: The top panel shows the energy spectrum, where we can see that two pairs of eigenvalues have an imaginary part that asymptotically tends to zero; the bottom panels illustrate some of the eigenstates of the system. The red eigenstates in the bottom left panel correspond to eigenvalues that are real for all $\eta$, and the blue one is a singular state shared by both branches. In the bottom right panel, two of the states illustrated are not localized in either branch; the dotted blue state localizes on the right branch. In this panel, the red states correspond to the eigenvalues whose imaginary parts asymptotically vanish, and the blue ones to real eigenvalues that have undergone a reconversion. \section{Summary and Conclusions} \label{sec:Conclusions} In this article we have studied the $\mathcal{PT}$-symmetric ring with onsite potential $a$ at the contacts, in addition to the terms $\pm i\eta$ representing the strength of the input and output at the gain and loss sites. We discussed the conditions under which the system possesses an unbroken $\mathcal{PT}$-symmetric phase. If there is no such phase, as happens in general when there is only gain and loss ($a=0$), the eigenvalues are complex and the associated singularities in the complex plane correspond to diabolical points. The introduction of a real onsite potential allows the system to present an unbroken $\mathcal{PT}$ phase for a range of values of the parameter $\eta$, until exceptional points are encountered.
We addressed the transport properties of the system for non-degenerate $\mathcal{PT}$-symmetric eigenstates. We showed that the system presents opaque as well as transparent singular eigenstates; the latter only appear at precise values of the onsite potential $a$. We termed such states ``accidental singular states'', to distinguish them from the former set, which also appears for the one-dimensional chain with open boundary conditions~\cite{OrtegaJPA2020}. The study of the local currents reveals that for certain $\mathcal{PT}$-symmetric eigenstates, a stationary back-flow (from the sink to the source) is possible along one of the branches of the ring. We also observed another interesting property of the eigenvalues of the ring, the eigenvalue reconversion, in which pairs of eigenvalues, after becoming complex at an EP, under certain circumstances undergo a reverse EP and become real again as $\eta$ increases. Finally, we found that in the large $\eta$ regime, the eigenstates may show a partial localization effect, becoming localized in only one branch of the ring. In this situation, any transport through the system may only occur in one of the branches; in this sense, these are ``directional eigenstates''. The analysis presented thus far has revealed that the periodic boundary conditions allow for an unexpectedly rich behaviour of the eigenvalues and eigenvectors of the system. From the transport point of view, one can tune the parameters of the system to modify its behavior. In addition, the system can have a backflow of density from the sink to the source, which also gives rise to an effective circulating current around the ring. We plan to study the implications of this circulating current in a future work. Finally, the number of sites needed to build systems that display the behaviors discussed in this paper fits well with the current capabilities of microwave experiments~\cite{DietzPS2019,StegmannPRB2020}.
We believe our results can be observed in such platforms. \begin{acknowledgments} We gratefully acknowledge financial support from the UNAM-PAPIIT research grant IG-100819. AO is grateful for the support of the National Research, Development and Innovation Office of Hungary (Project K124351) and the Quantum Information National Laboratory of Hungary. \end{acknowledgments} \appendix \section{Solution of the eigenvalue problem} \label{sec:egvalsol} In this section we present a simple variation of the method of Losonczi-Yueh~\cite{Losonczi1992, Yueh2005} to obtain the eigenvalues and eigenvectors of the tight-binding Hamiltonian with periodic boundary conditions, \begin{equation} H = \sum_{i=1}^{N-1}\big(\ket{i}\bra{i+1} + \ket{i+1}\bra{i} \big) + \ket{1}\bra{N} + \ket{N}\bra{1} + \big(\alpha\ket{k}\bra{k} + \beta\ket{k'}\bra{k'} \big), \label{eq:hamapp} \end{equation} with $\alpha, \beta\in \mathbb{C}$ and $k<k'$, the positions of the contacts, arbitrary. The method is an alternative to the popular Bethe ansatz~\cite{ScottPRA2012}, though, from our point of view, it is more transparent. The solution for a one-dimensional tight-binding chain with open boundary conditions has been computed in~\cite{OrtegaJPA2020}. We follow that derivation here, and only highlight the main differences due to the periodic boundary conditions. Note that the Hamiltonian~(\ref{eq:hamapp}) is more general than Eq.~(\ref{eq:ham}), and it does not have the $\mathcal{PT}$ symmetry. Clearly, $\mathcal{PT}$ symmetry is obtained by setting $\beta=\alpha^*$. Let $E$ be an eigenvalue of $H$ and $\ket{u} = \sum_j u_j \ket{j}$ its associated eigenvector.
The eigenvalue problem $H\ket{u} = E\ket{u}$ can be written as a set of linear equations \begin{equation} \begin{split} u_0 &= u_N,\\ u_0 + u_2 &= E u_1,\\ \vdots \\ u_{k-1} + u_{k+1} &= (E - \alpha) u_k,\\ \vdots \\ u_{k'-1} + u_{k'+1} &= (E - \beta) u_{k'},\\ \vdots \\ u_{N-1} + u_{N+1} &= E u_N, \\ u_{N+1} &= u_1, \end{split} \label{eq:Hsyseq} \end{equation} where $u_0 = u_N$ and $u_{N+1}=u_1$ correspond to the periodic boundary conditions. The idea of the Losonczi-Yueh method is to rewrite the system of equations as an equation for infinite sequences; a complete introduction to the method can be found in~\cite{Chengbook2003}. Following Yueh~\cite{Yueh2005}, one substitutes $E=2\cos\theta$ and, after some algebra with infinite sequences, one finds that the infinite sequence $u = \lbrace u_0,u_1,\dots,u_{N+1},0,\dots\rbrace$ has components \begin{equation} \begin{split} u_j &=\frac{1}{\sin\theta}\Big[ u_0\sin[(1-j)\theta] + u_1\sin(j\theta) - \alpha u_k\sin[(j-k)\theta] \Theta(j-k-1)\\ & \qquad - \beta u_{k'}\sin[(j-k')\theta]\Theta(j-k'-1) \Big], \end{split} \label{eq:ujapp} \end{equation} where $\Theta (n)=0$ for $n<0$ and $\Theta (n)=1$ for $n\ge 0$. The values of $u_k$ and $u_{k'}$ can be found from this equation by setting $j=k,k'$; they read \begin{equation} \begin{split} u_k &= \frac{1}{\sin\theta} \Big[ u_0\sin[(1-k)\theta] + u_1\sin(k\theta) \Big],\\ u_{k'} &= \frac{1}{\sin\theta} \Big[ u_0\sin[(1-k')\theta] + u_1\sin(k'\theta)\\ & \qquad - \alpha \frac{\sin[(k'-k)\theta]}{\sin\theta} \big(u_0\sin[(1-k)\theta] + u_1\sin(k\theta)\big)\Big]. \end{split} \label{eq:ukl} \end{equation} From Eqs.~(\ref{eq:ujapp}) and (\ref{eq:ukl}) one can see that the $u_j$ component depends on $u_0$ and $u_1$. In order to find their values, we exploit the periodic boundary conditions, $u_N=u_0$ and $u_{N+1} = u_1$.
The result can be written as a homogeneous linear system of equations \begin{equation} \begin{pmatrix} \mathfrak{a}(\theta) & ~\mathfrak{b}(\theta) \\ \mathfrak{c}(\theta) & ~\mathfrak{d}(\theta) \end{pmatrix} \begin{pmatrix} u_0 \\ u_1 \end{pmatrix} = 0, \label{eq:systemu0u1} \end{equation} where the matrix components are given explicitly as \begin{eqnarray} \mathfrak{a}(\theta) &=& \frac{1}{\sin\theta}\bigg[ \sin\theta+\sin[(N-1)\theta] -\alpha\frac{\sin[(N-k)\theta]\sin[(k-1)\theta]}{\sin\theta} \nonumber\\ && -\beta\frac{\sin[(N-k')\theta] \sin[(k'-1)\theta]}{\sin\theta} + \alpha\beta \frac{\sin[(N-k')\theta] \sin[(k'-k)\theta] \sin[(k-1)\theta]}{\sin^2\theta} \bigg], \\ \label{eq:a} \mathfrak{b}(\theta) &=& -\frac{1}{\sin\theta} \bigg[\sin(N\theta) - \alpha\frac{\sin[(N-k)\theta] \sin(k\theta)}{\sin\theta} \nonumber\\ && - \beta\frac{\sin[(N-k')\theta]\sin(k'\theta)}{\sin\theta} + \alpha\beta\frac{\sin[(N-k')\theta]\sin[(k'-k)\theta] \sin(k\theta)}{\sin^2\theta} \bigg], \\ \label{eq:b} \mathfrak{c}(\theta) &=& \frac{1}{\sin\theta} \bigg[\sin(N\theta) - \alpha\frac{\sin[(N-k+1)\theta] \sin[(k-1)\theta]}{\sin\theta} \nonumber\\ && - \beta\frac{\sin[(N-k'+1)\theta]\sin[(k'-1)\theta]}{\sin\theta} +\alpha\beta\frac{\sin[(N-k'+1)\theta]\sin[(k'-k)\theta] \sin[(k-1)\theta]}{\sin^2\theta} \bigg],\nonumber\\ \\ \label{eq:c} \mathfrak{d}(\theta) &=& \frac{1}{\sin\theta}\bigg[ \sin\theta-\sin[(N+1)\theta] +\alpha\frac{\sin[(N-k+1)\theta] \sin(k\theta)}{\sin\theta}\nonumber\\ &&+\beta\frac{\sin[(N-k'+1)\theta] \sin(k'\theta)}{\sin\theta} -\alpha\beta\frac{\sin[(N-k'+1)\theta] \sin[(k'-k)\theta] \sin(k\theta)}{\sin^2\theta} \bigg]. \label{eq:d} \end{eqnarray} This set of equations has a non-trivial solution when the corresponding determinant of the system vanishes; this defines the equation for the quasi-momentum $\theta$.
After straightforward, albeit lengthy, algebra, the equation for $\theta$ reads \begin{equation} 4\sin^2\left(\frac{N\theta}{2}\right) + \frac{\alpha+\beta}{\sin\theta} \sin(N\theta) - \frac{\alpha\beta}{\sin^2\theta} \sin[(k'-k)\theta] \sin[(N-k'+k)\theta]= 0. \label{eq:thetaapp} \end{equation} Once the condition (\ref{eq:thetaapp}) is satisfied, the eigenvectors are obtained using (\ref{eq:ujapp}) and (\ref{eq:ukl}), by writing $u_1$ in terms of $u_0$ from one of the equations in (\ref{eq:systemu0u1}); $u_0$ is fixed by imposing a normalization for the eigenvectors. \end{document}
\begin{document} \title{Calibrated quantum thermometry in cavity optomechanics} \author{A Chowdhury$^{1,2}$, P Vezio$^{2,3}$, M Bonaldi$^{4,5}$, A Borrielli$^{4,5}$, F Marino$^{1,2}$, B Morana$^{4,6}$, G Pandraud$^{6}$, A Pontin$^{7}$, G A Prodi$^{5,8}$, P M Sarro$^{6}$, E Serra$^{5,6}$ and F Marin$^{1,2,3,9}$} \address{$^1$ CNR-INO, L.go Enrico Fermi 6, I-50125 Firenze, Italy} \address{$^2$ Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Firenze, Via Sansone 1, I-50019 Sesto Fiorentino (FI), Italy} \address{$^3$ European Laboratory for Non-Linear Spectroscopy (LENS), Via Carrara 1, I-50019 Sesto Fiorentino (FI), Italy} \address{$^4$ Institute of Materials for Electronics and Magnetism, Nanoscience-Trento-FBK Division, 38123 Povo, Trento, Italy} \address{$^5$ Istituto Nazionale di Fisica Nucleare (INFN), Trento Institute for Fundamental Physics and Application, I-38123 Povo, Trento, Italy} \address{$^6$ Dept. of Microelectronics and Computer Engineering /ECTM-EKL, Delft University of Technology, Feldmanweg 17, 2628 CT Delft, The Netherlands} \address{$^7$ Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom} \address{$^8$ Dipartimento di Fisica, Universit\`a di Trento, I-38123 Povo, Trento, Italy} \address{$^{9}$ Dipartimento di Fisica e Astronomia, Universit\`a di Firenze, Via Sansone 1, I-50019 Sesto Fiorentino (FI), Italy} \ead{[email protected]} \begin{abstract} Cavity optomechanics has achieved the major breakthrough of the preparation and observation of macroscopic mechanical oscillators in peculiarly quantum states. The development of reliable indicators of the oscillator properties in these conditions is important also for applications to quantum technologies. We compare two procedures to infer the oscillator occupation number, minimizing the necessity of system calibrations. 
The former starts from homodyne spectra, while the latter is based on the measurement of the motional sidebands asymmetry in heterodyne spectra. Moreover, we describe and discuss a method to control the cavity detuning, which is a crucial parameter for the accuracy of the latter, intrinsically superior procedure. \end{abstract} \noindent{\it Keywords}: optomechanics, quantum thermometry, micro-oscillators \maketitle \section{Introduction} A crucial outcome of cavity optomechanics \cite{AKMreview} is the observation of peculiar quantum features in the behavior of macroscopic mechanical oscillators. The most relevant indicator of the achieved mechanical quantum domain is the so-called motional sidebands asymmetry. The optomechanical interaction generates spectral peaks around the carrier frequency of a probe field, at distances equal to the mechanical oscillation frequency $\Omega_m$. Their amplitudes are generally different according to quantum theory. Different interpretations have been proposed to explain such asymmetry \cite{Khalili2012,Weinstein2014,Borkje2016}, all agreeing in recognizing it as a non-classical signature of the mechanical oscillator \cite{Borkje2016}, provided that spurious experimental features are avoided \cite{Jayich2012,Safavi2013}. A particularly elucidating explanation considers that the anti-Stokes (blue) sideband implies an energy transfer from the oscillator to the field (frequency up-conversion of photons), and vice versa for the Stokes (red) sideband. Since the quantum oscillator cannot yield energy when it is in the ground state, the anti-Stokes process is less favored. It turns out that the blue and red sideband strengths are proportional respectively to $\bar{n}$ and $(\bar{n}+1)$ \cite{Wineland1987}, where $\bar{n}$ is the mean occupation number of the oscillator.
Measurements of the sidebands asymmetry have been extensively used to monitor the motion of trapped ions \cite{Diedrich1989}, and it has recently become a key technique for cavity optomechanics. Besides its utility as a direct indicator of the oscillator quantum behavior, the sidebands asymmetry is a powerful index to deduce the oscillator occupation number avoiding delicate evaluations of optomechanical parameters, such as the oscillator effective mass or the optomechanical gain, and calibrations of the detection system. It has been remarked that the thermal occupation number $\bar{n}_{\mathrm{th}}$ allows a direct evaluation of the absolute temperature, and it is therefore of extraordinary potential metrological interest. Several experiments concerning the use of optomechanical quantum effects for the measurement of absolute temperature, covering the full range from ultra-cryogenic to room temperatures, are indeed under way \cite{Purdy2017,Sudhir2017}. As a matter of fact, we expect that accurate measurements of the oscillator displacement variance and of the motional sidebands asymmetry will be extensively exploited in the near future, and they deserve a detailed investigation. Sidebands asymmetry has been measured in optical experiments by alternatively positioning a probe field at a detuning of $\pm \Omega_m$ around the cavity resonance \cite{Safavi2012}, as well as, in a single measurement, from the spectral sidebands in a probe field \cite{Purdy2015,Underwood2015,Peterson2016,Sudhir2017a,Rossi2018}. The former technique is particularly useful in the regime of deeply resolved sidebands ($\Omega_m \gg \kappa$, where $\kappa$ is the cavity linewidth), since for each position of the probe the measured sideband is at the cavity resonance frequency and is thus amplified.
On the other hand, the control of systematic effects can be an issue: the system should remain stable between two separate measurement sessions, the probe intensity and the detection efficiency must be equal for the two values of detuning, and the probe detuning itself must be very accurate. The latter technique, while introduced more recently in cavity optomechanics, is already well established, but it also requires an accurate control of the probe detuning, above all in the case of a narrow cavity resonance. The cavity indeed works as a frequency filter for the output field, with an effect that differs between the two sidebands and can thus spoil the measurement of their ratio. In this work we experimentally investigate the sidebands asymmetry as a signature of quantum behavior, and we compare it with a further indicator, i.e., the oscillator displacement variance measured from the area of the corresponding peak in the probe phase spectrum. Furthermore, we demonstrate a method for correcting the measured sidebands asymmetry for non-null probe detuning, exploiting the spectral features of the device oscillating modes that are weakly coupled to the cavity field (``heavy'' modes). \section{Theoretical background} The displacement spectrum of a mechanical oscillator is characterized by resonance peaks corresponding to the different normal modes. The area underlying each peak is a measure of the variance of the motion of the harmonic oscillator associated with the readout of that normal mode. It can be written as $\mathcal{A}_x = 2 x_{\mathrm{ZPF}}^2 (1/2 + \bar{n})$ where $x_{\mathrm{ZPF}}=\sqrt{\hbar/2 m_{\mathrm{eff}} \Omega_m}$ is the zero-point fluctuations amplitude, and $\bar{n}$ is the mean occupation number ($\Omega_m$ is the oscillator angular frequency, $m_{\mathrm{eff}}$ its effective mass).
If the oscillator is in thermal equilibrium with a background at temperature $T_{bath}$, the mean thermal occupation number is $\bar{n}_{\mathrm{th}} \simeq k_{B}T_{bath} / \hbar \Omega_m$ ($k_{B}$ is the Boltzmann constant, and this expression of $\bar{n}_{\mathrm{th}}$ is valid in the high temperature limit $\bar{n}_{\mathrm{th}} \gg 1$), and the peak width is $\Gamma_m = \Omega_m/Q$, where $Q$ is the intrinsic mechanical quality factor. When the mechanical oscillator is embedded in an optical cavity, the optomechanical interaction with the intracavity radiation yields thermalization toward the photon bath at negligible occupation number (``back-action cooling'' \cite{Arcizet2006,Gigan2006}), at a rate $\Gamma_{\mathrm{opt}}$ proportional to the cooling laser power. The width of the spectral peak becomes $\Gamma_{\mathrm{eff}} = \Gamma_m + \Gamma_{\mathrm{opt}}$ and the oscillator occupation number is reduced by a factor of $\Gamma_{\mathrm{eff}}/\Gamma_m$. However, the back-action of the optomechanical measurement introduces an additional fluctuating force acting on the oscillator, which can be seen as the effect of the quantum noise in the radiation pressure. Since such quantum fluctuations are proportional to the laser power, and actually to $\Gamma_{\mathrm{opt}}$, the resulting displacement noise of the optically damped oscillator has negligible dependence on the cooling power, in the limit $\Gamma_{\mathrm{opt}} \gg \Gamma_m$.
Its contribution to the total displacement variance can be written in terms of an additional occupation number $\bar{n}_{\mathrm{BA}}^{cool}$ as \cite{AKMreview,Marquardt2007} \begin{equation} \bar{n}_{\mathrm{BA}}^{cool}=\left[\frac{\mathcal{L}(\Delta_{cool}+\Omega_{m})}{\mathcal{L}(\Delta_{cool}-\Omega_{m})}-1\right]^{-1} \label{nbac} \end{equation} where $\mathcal{L}(\omega)=1/\left[(\kappa/2)^{2}+\omega^{2}\right]$ is the Lorentzian response function of the optical cavity with linewidth $\kappa$, and $\Delta_{cool}$ is the detuning of the cooling radiation with respect to the cavity resonance. The oscillator motion implies variations of the optical cavity resonance frequency $\omega_{\mathrm{cav}}$, at the rate $G = -\partial \omega_{\mathrm{cav}}/\partial x$. Such frequency fluctuations can be measured by exploiting the optical field leaving the cavity. The readout of the oscillator motion may be performed by analyzing the same radiation used for cooling. However, such a field is commonly strongly detuned from the cavity resonance to assure an efficient cooling, therefore the optical susceptibility of the cavity is not trivial to account for accurately. It is more practical to introduce an additional, resonant probe field. The drawback is its additional back-action, which increases the oscillator noise. The probe back-action force does not depend on the cooling power, and it has the same effect as an increased background temperature. In general, the quantum radiation pressure noise produced by an intracavity field at detuning $\Delta$ is proportional to $\bar{n}_{\mathrm{cav}}^{max}\,\mathcal{L}(\Delta)\,\left[\mathcal{L}(\Delta+\Omega_m)+\mathcal{L}(\Delta-\Omega_m)\right]$ where $\bar{n}_{\mathrm{cav}}^{max}$ is the average number of intracavity photons in the case of a resonant field, which is proportional to the input power.
This expression allows us to write the oscillator occupation number added by the probe beam in the form \begin{equation} \bar{n}_{\mathrm{BA}}^{probe}=\bar{n}_{\mathrm{BA}}^{cool} \frac{P^{probe}}{P^{cool}}\,\frac{\mathcal{L}(\Delta_{probe})}{\mathcal{L}(\Delta_{cool})}\,\frac{\mathcal{L}(\Delta_{probe}+\Omega_m)+\mathcal{L}(\Delta_{probe}-\Omega_m)}{\mathcal{L}(\Delta_{cool}+\Omega_m)+\mathcal{L}(\Delta_{cool}-\Omega_m)} \label{nbap} \end{equation} where $P^{probe/cool}$ are the input powers of the probe/cool beam. Expressions (\ref{nbac}) and (\ref{nbap}) are particularly useful in the analysis of the experimental results, since they do not require the evaluation of the laser coupling efficiency and the consequent intracavity photon number, that are often not trivial. We remark that $\bar{n}_{\mathrm{BA}}^{probe}$ is proportional to $1/P^{cool}$ and actually to $1/\Gamma_{\mathrm{opt}}$, provided that the probe beam is close to resonance and has therefore a negligible effect on the effective width. In conclusion, the total effective occupation number can be written as \begin{equation} \bar{n} = \bar{n}_{\mathrm{th}} \frac{\Gamma_m}{\Gamma_{\mathrm{eff}}}+\bar{n}_{\mathrm{BA}}^{cool}+\bar{n}_{\mathrm{BA}}^{probe} \, . \label{ntot} \end{equation} A useful parameter to be considered is the area$\times$width product $\mathcal{A} \Gamma_{\mathrm{eff}}$ of the spectral peak. In the classical limit, when the variance of the motion is still dominated by thermal noise, such product should remain constant as the cooling power is increased, keeping, in the displacement spectrum, the value of $\mathcal{A}_x \times \Gamma_{\mathrm{eff}} \simeq 2 x_{\mathrm{ZPF}}^2 \bar{n}_{\mathrm{th}} \Gamma_m = k_B T_{bath}/m_{\mathrm{eff}} \Omega_m Q$. Quantum noise is instead at the origin of a linear increase of $\mathcal{A} \Gamma_{\mathrm{eff}}$ versus $\Gamma_{\mathrm{eff}}$. 
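As a numerical illustration of Eqs.~(\ref{nbac}) and (\ref{nbap}), the short Python sketch below evaluates the three contributions to $\bar{n}$; the parameter values are representative assumptions of the order of those quoted in the experimental sections, not fitted quantities:

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# Representative parameters (assumed, of the order of those quoted in the
# experimental sections; not fitted quantities):
kappa   = 2 * np.pi * 1.4e6    # cavity linewidth [rad/s]
Omega   = 2 * np.pi * 370e3    # mechanical frequency [rad/s]
D_cool  = -2 * np.pi * 700e3   # cooling-beam detuning (~ -kappa/2)
D_probe = 0.0                  # resonant probe
T_bath  = 7.0                  # bath temperature [K]
P_ratio = 18.0 / 60.0          # probe-to-cooling input power ratio

L = lambda w: 1.0 / ((kappa / 2)**2 + w**2)   # cavity Lorentzian response

n_th = kB * T_bath / (hbar * Omega)                           # thermal occupation
n_cool = 1.0 / (L(D_cool + Omega) / L(D_cool - Omega) - 1.0)  # Eq. (nbac)
n_probe = (n_cool * P_ratio * L(D_probe) / L(D_cool)
           * (L(D_probe + Omega) + L(D_probe - Omega))
           / (L(D_cool + Omega) + L(D_cool - Omega)))         # Eq. (nbap)

print(f"n_th ~ {n_th:.3g}, n_BA^cool ~ {n_cool:.2f}, n_BA^probe ~ {n_probe:.2f}")
```

With these numbers both back-action terms come out of order one half: negligible with respect to $\bar{n}_{\mathrm{th}}\sim 4\times 10^5$ at low cooling power, but relevant once $\Gamma_m/\Gamma_{\mathrm{eff}}$ has strongly suppressed the thermal term.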
The peak area$\times$width product in the frequency spectrum can be written as \begin{equation} \mathcal{A} \Gamma_{\mathrm{eff}} = 2 g_0^2 \Gamma_m \left[\bar{n}_{\mathrm{th}} + \left(\bar{n}_{\mathrm{BA}}^{cool}+\bar{n}_{\mathrm{BA}}^{probe}+\frac{1}{2} \right)\frac{\Gamma_{\mathrm{eff}}}{\Gamma_m} \right] \label{aw} \end{equation} where the vacuum optomechanical coupling strength is $g_0 = G x_{\mathrm{ZPF}}$. An accurate independent estimate of $g_0$ is not straightforward, since it crucially relies on the readout calibration and laser coupling efficiency. On the other hand, the term in square brackets in Eq. (\ref{aw}), i.e., operatively, the ratio between the slope and the intercept in the $\mathcal{A} \Gamma_{\mathrm{eff}}$ vs $\Gamma_{\mathrm{eff}}$ line, is directly linked to meaningful properties of the oscillator quantum state. It can be used as a check of the agreement between the expected and the measured behavior of the optomechanical system, i.e., to verify the absence of unmodeled extra noise, as well as, e.g., for evaluating $\bar{n}_{\mathrm{th}}$ and actually the oscillator thermodynamic temperature $T_{bath}$. A more accurate measure of the oscillator occupation number can be obtained from the heterodyne spectra of the field reflected by the cavity, which allow one to distinguish the two sidebands produced by the Stokes and anti-Stokes processes in the optomechanical interaction. For a resonant probe, the sideband peaks have areas proportional respectively to $\bar{n}$ (anti-Stokes) and $\bar{n} + 1$ (Stokes), therefore $\bar{n}$ is directly calculated from the Stokes to anti-Stokes sidebands ratio $R$ as $\bar{n} = 1/(R-1)$. This indicator has at least two interesting properties: it does not require a calibration of the measured spectra in terms of, e.g., oscillator displacement or frequency fluctuations, and it is robust against the effects of possible misleading extra noise.
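Since the cavity transmits the two sidebands with weights $\mathcal{L}(\Delta_{probe}\pm\Omega_m)$, a residual probe detuning biases the measured ratio $R$, and hence $\bar{n}=1/(R-1)$. The Python sketch below propagates this effect for representative numbers (the assignment of the Stokes sideband to $\Delta_{probe}-\Omega_m$ is an assumed convention):

```python
import numpy as np

# Representative numbers (assumptions of the right order of magnitude):
kappa = 2 * np.pi * 1.4e6     # cavity linewidth [rad/s]
Omega = 2 * np.pi * 370e3     # mechanical frequency [rad/s]
L = lambda w: 1.0 / ((kappa / 2)**2 + w**2)   # cavity Lorentzian response

n_true = 5.0                      # assumed true occupation number
R_true = (n_true + 1) / n_true    # intrinsic Stokes / anti-Stokes ratio

ns = []
for D in 2 * np.pi * np.array([0.0, 20e3, 100e3]):   # residual probe detuning
    # Assumed convention: the Stokes sideband sits at D - Omega from cavity
    # resonance, the anti-Stokes one at D + Omega; the cavity weights them
    # by its Lorentzian response before detection.
    R_meas = R_true * L(D - Omega) / L(D + Omega)
    ns.append(1.0 / (R_meas - 1.0))
    print(f"detuning {D / (2 * np.pi * 1e3):5.0f} kHz -> inferred n = {ns[-1]:.2f}")
```

With these numbers, a residual detuning of 100 kHz biases the inferred occupation from 5 to about 2, which motivates the detuning control discussed next.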
It should be remarked, however, that correlated phase and amplitude noise in the probe field can produce a spurious sidebands asymmetry even at high occupation numbers \cite{Jayich2012}. A crucial concern for the sidebands thermometry is the residual detuning of the probe with respect to the cavity resonance. The two motional sidebands are indeed filtered by the cavity according to $\mathcal{L} (\Delta_{probe} \pm \Omega_m)$, and such a filtering effect modifies $R$ as soon as $\Delta_{probe} \ne 0$, thus spoiling the validity of the measurement. The main original contribution of our work is indeed a method to control and evaluate such residual probe detuning and the consequent correction to the measured $R$. \section{Experimental Setup} The measurements are performed on a circular SiN membrane with a thickness of $100\,$nm and a diameter of $1.64\,$mm, supported by a silicon ring frame. This frame is suspended on four points with alternating flexural and torsional springs, forming an on-chip ``loss shield'' structure \cite{Borrielli2014}. More information about the design, fabrication and characteristics of the device can be found in Borrielli {\it et al.} \cite{Borrielli2016} and Serra {\it et al.} \cite{Serra2016,Serra2018}. The theoretical resonance frequencies of the drum modes in a circular membrane are given by the expression $f_{mn} = f_0\,\alpha_{mn}$ where $\alpha_{mn}$ is the $n$-th root of the Bessel function $J_m$ of order $m$, and $ f_{0} = \frac{1}{\pi}\, \sqrt{\frac{T}{\rho}}\frac{1}{\Phi} \label{f0} $ ($T$ is the stress, $\rho$ the density, $\Phi$ the diameter of the membrane). The measured frequencies are in close agreement (to better than $0.1 \%$) with the theoretical expression, where at cryogenic temperature $f_0 = 96.6\,$kHz. For $m > 0$ we expect pairs of degenerate modes.
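The mode-frequency expression $f_{mn}=f_0\,\alpha_{mn}$ can be evaluated with a few lines of Python (using SciPy's Bessel-zero routine and the quoted $f_0=96.6\,$kHz):

```python
from scipy.special import jn_zeros  # zeros of the Bessel function J_m

f0 = 96.6  # kHz; value quoted in the text at cryogenic temperature

def f_drum(m, n):
    """Drum-mode frequency f_mn = f0 * alpha_mn, with alpha_mn the
    n-th zero of J_m (frequencies in kHz)."""
    return f0 * jn_zeros(m, n)[-1]

print(f"(0,1) fundamental: {f_drum(0, 1):.1f} kHz")  # ~ 232 kHz
print(f"(1,1) mode:        {f_drum(1, 1):.1f} kHz")  # ~ 370 kHz
```

The (1,1) value comes out at about 370 kHz, matching the modes studied in this work.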
In the real device the perfect circular symmetry is broken: two orthogonal axes are defined, and the two quasi-degenerate modes (the measured frequency split is below 100 Hz) have shapes nominally given by the expressions $J_m (\alpha_{mn} r) \cos m \theta$ and $J_m (\alpha_{mn} r) \sin m \theta$, where $(r, \theta)$ are polar coordinates and $r$ is normalized to the membrane radius. The oscillator is placed in a Fabry-Perot cavity of length $4.38$~mm, at $2$~mm from the cavity flat end mirror, forming a ``membrane-in-the-middle'' setup \cite{Jayich2008}. The input coupler is concave with a radius of 50~mm, giving a waist of 70~$\mu$m. The cavity finesse and linewidth are respectively $24500$ and $\kappa=1.4 \ \textrm{MHz} \times 2\pi$. The cavity optical axis is displaced from the center of the membrane by $\sim 0.2$ mm, roughly along the axis with $\theta \simeq 0$. As a consequence, the optomechanical coupling and readout are much more efficient for one of the modes in each quasi-degenerate pair (with the shape $\propto \cos m \theta$), which we identify as the ``light twin'', with respect to the other one (the ``heavy twin''). In this work we mainly focus on the (1,1) modes at 370 kHz, having a quality factor of $8.9\times10^{6}$ at cryogenic temperature, which leads to an intrinsic width $\Gamma_{m}/2 \pi = 40\,$mHz. \begin{figure} \caption{(a) Simplified scheme of the experimental setup. The light of a Nd:YAG laser is filtered by a cavity having a linewidth of $66\,$kHz, frequency tuned by a first acousto-optic modulator (AOM), and split into three parts. The first beam (probe) is frequency shifted by two cascaded AOMs acting on opposite orders, and phase modulated by an electro-optic modulator (EOM) at $13\,$MHz for the Pound-Drever-Hall (PDH) locking to the resonance of the optomechanical cavity (OMC). The difference between the frequencies of the cascaded AOMs defines the detuning of the second beam (cooling beam).
The third beam (local oscillator, LO) is picked up after the second AOM, and frequency shifted by a fourth AOM. Its detuning with respect to the probe is defined by the frequency difference between the third and fourth AOMs. After single-mode fibers, the first two beams are combined with orthogonal polarizations and mode-matched to the OMC. About $2 \mu$W of the reflected probe are sent to the PDH detection, while most of the probe light ($18 \mu$W typically impinge on the cavity) is combined with the LO in a balanced detection (BH). (b) Scheme of the beam frequencies. The LO is placed on the blue side of the probe and detuned by $\Delta_{\mathrm{LO}}$.} \label{setup} \end{figure} The optomechanical cavity is cooled down to $\sim 7\,$K in a helium flux cryostat. The light of a Nd:YAG laser is filtered by a Fabry-Perot cavity and split into three beams, whose frequencies are controlled by means of acousto-optic modulators (AOM) (Fig. \ref{setup}a). The first beam (probe) is always resonant with the optomechanical cavity, to which it is kept locked using the Pound-Drever-Hall detection and a servo loop. This exploits the first AOM to follow fast fluctuations, and a piezo-electric transducer to compensate for long term drifts of the cavity length. The second beam (cooling beam), orthogonally polarized with respect to the probe, is also sent to the cavity and red detuned by roughly half a linewidth $(\Delta_{cool}= -2 \pi \times 700 \mathrm{kHz} \simeq -\kappa/2)$. The third beam is used as local oscillator (LO) in a balanced detection of the probe beam, reflected by the cavity. In such a detection scheme the LO can either be frequency shifted with respect to the probe (by $\Delta_{\mathrm{LO}}/2\pi \sim 9\,$kHz), allowing a low-frequency heterodyne detection \cite{Pontin2018a} (see Fig. \ref{setup}b), or phase-locked to the probe for a homodyne detection of its phase quadrature.
The first scheme (heterodyne) is useful to separate the motional sidebands generated around the optical frequency of the probe field, at frequency shifts corresponding to the mechanical mode frequencies. The spectra acquired with the second scheme (homodyne) are calibrated in terms of cavity frequency fluctuations using a calibration tone in the probe field, and are used to measure the variance of the motion of the different membrane normal modes. \section{Experimental results} \begin{figure} \caption{(a) Calibrated homodyne spectra around the frequency of the (1,1) mechanical modes as the cooling power is increased up to $\sim60 \mu$W, maintaining a detuning of $\Delta_{cool}$.} \label{homo} \end{figure} We will focus on the (1,1) membrane modes around 370 kHz, and we will start our analysis from the homodyne spectra of our optomechanical system. The power of the cooling beam is increased in steps up to $\sim 60 \mu$W. The result, as shown in Fig. \ref{homo}a, is a gradual cooling of the light mode with a characteristic increase of $\Gamma_{\mathrm{eff}}$ and a simultaneous red-shift of the mechanical resonance frequency due to the so-called optical spring effect. On the other hand, the ``heavy twin'' mode is weakly coupled to the radiation since the optical spot is close to its nodal axis, therefore the associated spectral peak at 370 kHz shows negligible optomechanical effects. The decrease of the peak area is an indication of the reduction of the phonon occupancy $\bar{n}$ in the ``light twin'' mode. A quantitative evaluation of $\bar{n}$ from this parameter alone would require an independent measurement of the optomechanical gain.
On the other hand, the cooling factor $\Gamma_{\mathrm{eff}}/\Gamma_m$ can be accurately measured, but it provides a good estimate of the oscillator effective temperature, and consequently of its occupation number $\bar{n} \approx \bar{n}_{\mathrm{th}} \Gamma_m/\Gamma_{\mathrm{eff}}$, only in the classical limit (i.e., as long as the back-action is negligible) and in the absence of extra noise. The two indicators can be usefully put together in the area$\times$width product, which is shown in Fig. \ref{homo}b as a function of $\Gamma_{\mathrm{eff}}$. The reported values of $\mathcal{A} \Gamma_{\mathrm{eff}}$ are obtained from fits of the spectral peaks with a Lorentzian function. Eq. (\ref{aw}) predicts that the $\mathcal{A} \Gamma_{\mathrm{eff}}$ vs $\Gamma_{\mathrm{eff}}$ data should display a linearly increasing behavior, where the slope-to-offset ratio is determined by the different contributions to $\bar{n}$. We have calculated such contributions using independent measurements, as follows. $\bar{n}_{\mathrm{th}}$ is calculated from the bath temperature measured by a silicon diode sensor fixed on the cavity, and the oscillator frequency. $\bar{n}_{\mathrm{BA}}^{cool}$ is calculated from Eq. (\ref{nbac}) using the measured cavity linewidth and the detuning $\Delta_{cool}$ fixed with the AOMs. $\bar{n}_{\mathrm{BA}}^{probe}$ is calculated from Eq. (\ref{nbac}) assuming $\Delta_{probe}\simeq 0$, measuring the probe-to-cooling beam power ratio and fitting the linear dependence between $P^{cool}$ and $\Gamma_{\mathrm{eff}}$ (see the inset in Fig. \ref{homo}a). Finally, $\Gamma_m$ is obtained from ring-down measurements with an additional laser at 970 nm, for which the measured optomechanical effects are very weak due to the low cavity finesse and laser power. The experimental measurements follow the predicted slope well, shown with a solid line in Fig. \ref{homo}b, where just the overall vertical scaling factor is fitted to the data.
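As a sanity check on the orders of magnitude involved, the classical-limit relation $\bar{n} \approx \bar{n}_{\mathrm{th}} \Gamma_m/\Gamma_{\mathrm{eff}}$ can be evaluated from the parameters quoted above. The sketch below is illustrative only: it uses the high-temperature expression $\bar{n}_{\mathrm{th}} \simeq k_B T/\hbar\Omega_m$, an assumption valid here since $k_B T \gg \hbar\Omega_m$, with the numerical values taken from the text.

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Parameters quoted in the text
T_bath = 7.0                    # bath temperature, K
Omega_m = 2 * math.pi * 370e3   # (1,1) mode angular frequency, rad/s
Gamma_m = 2 * math.pi * 40e-3   # intrinsic mechanical width, rad/s

# High-temperature thermal occupancy (valid for k_B T >> hbar Omega_m)
n_th = k_B * T_bath / (hbar * Omega_m)

def n_classical(Gamma_eff):
    """Classical-limit estimate n ~ n_th * Gamma_m / Gamma_eff."""
    return n_th * Gamma_m / Gamma_eff

# Effective width needed to reach n ~ 4 in this back-action-free picture
Gamma_eff_target = n_th * Gamma_m / 4.0
```

With these numbers $\bar{n}_{\mathrm{th}}$ is of order $4\times 10^5$, so reaching $\bar{n}\sim 4$ requires a cooling factor of order $10^5$, i.e. an effective width of a few kHz, consistent with the widths visible in the strongly cooled spectra.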
Here the error bars just reflect the standard deviation of measurements performed on consecutive acquisitions. The scattering of the data shows that longer term fluctuations in system parameters (when changing the cooling power) dominate over such statistical uncertainties, which are therefore not considered meaningful in the following analysis. A further solid curve in the figure shows the behavior of $\bar{n}$ calculated from Eq. (\ref{ntot}), i.e., assuming that the system is well modeled and free of additional noise, as suggested by the good agreement between the prediction of Eq. (\ref{aw}) and the experimental data. We infer that an occupation number of $\bar{n} = 3.9$ is achieved at our maximum cooling power. Moreover, the fitted vertical scaling factor allows us to estimate, according to Eq. (\ref{aw}), a vacuum optomechanical coupling strength of $g_0/2\pi = 31 \pm 1$ Hz. We remark again that this additional inferred parameter is not involved in the evaluation of $\bar{n}$. The observed qualitative agreement is not by itself a guarantee of an accurate measurement. Heating of the membrane oscillator due to laser absorption would yield a linear increase of $T_{bath}$ with $P_{cool}$, and thus a larger slope of $\mathcal{A} \Gamma_{\mathrm{eff}}$ vs $\Gamma_{\mathrm{eff}}$. Leaving the slope as a free parameter in the fit of $\mathcal{A} \Gamma_{\mathrm{eff}}$ vs $\Gamma_{\mathrm{eff}}$, we find for the ratio between slope and offset a value of $(8.0 \pm 2.5) \times 10^{-5}$ Hz$^{-1}$, to be compared with $4.9 \times 10^{-5}$ Hz$^{-1}$ calculated from Eq. (\ref{aw}). This suggests that the sample temperature could have increased by $1.8 \pm 1.5$ K at our maximum cooling power. Furthermore, additional amplitude or frequency noise in the laser field would instead produce a quadratic term in $\mathcal{A} \Gamma_{\mathrm{eff}}$ vs $\Gamma_{\mathrm{eff}}$.
We have indeed added such a term to the fit of our data, finding a maximum contribution of $13 \pm 10\%$ to the measured $\mathcal{A}$. In both fitting procedures the uncertainty is due to the scattering of the experimental data, and the results are compatible with null effects of heating and extra laser noise. While the described analysis of the homodyne spectra at increasing cooling power gives a reliable estimate of $\bar{n}$ while skipping the independent calibration of the optomechanical coupling, we see that uncertainties in additional noise sources can reduce the measurement accuracy. Therefore, the measurement of the motional sideband ratio in heterodyne spectra remains in principle a superior procedure. Indeed, it gives direct access to the actual average phonon occupation number for each value of the cooling power, implicitly including extra heating and noise, and without the need for further independent measurements of system parameters. \begin{figure} \caption{Observation of the Stokes (right) and anti-Stokes (left) spectral peaks of the (1,1) membrane mode for two different values of the cooling power: (a) At low cooling power the spectral width $\Gamma_{\mathrm{eff}}$ is small.} \label{hetero} \end{figure} Our setup can easily switch from homodyne to heterodyne detection, by just including a frequency offset $\Delta_{\mathrm{LO}}$ in the phase locking of the local oscillator. Figure \ref{hetero} shows two examples of heterodyne spectra, again around the resonance frequency of the (1,1) modes, for two different values of the cooling power. At low power (panel a) the motional sidebands are very similar, while at higher cooling power (panel b) the increased width $\Gamma_{\mathrm{eff}}$, indicating a smaller occupation number, is accompanied by a visible asymmetry, with a smaller left (anti-Stokes) sideband. For a correct evaluation of $\bar{n}$ one must consider the filtering effect of the cavity, and in particular evaluate the residual probe detuning.
To this purpose, we have exploited the motional sidebands of the ``heavy twin'' mode that, being weakly coupled to the optical field, maintains an occupation number so high that a possible sideband asymmetry should be completely attributed to the cavity filtering effect. Operationally, we measure the sideband ratio $R_{light}$ for the ``light twin'' mode and $R_{heavy}$ for the ``heavy twin'' mode, and correct the former according to $R = R_{light}/R_{heavy}$. We use this corrected $R$ to estimate $\bar{n}$, which assumes, e.g., the value of $\bar{n}=17.1\pm3.4$ for the spectra in panel (a) and $\bar{n}=3.87\pm0.21$ for panel (b) (the reported uncertainty is the standard deviation in 10 measurements, performed on consecutive, 10 s long time intervals, for a total measurement time of 100 s). The latter value is obtained at our maximum cooling rate. \begin{figure} \caption{Method for correcting the sideband asymmetry due to the residual probe detuning. The measured sideband ratio for several weakly coupled modes is plotted as a function of the respective resonance frequencies $\Omega_m$ (blue dots), and fitted with the function $\mathcal{L} (\Delta_{probe} - \Omega_m)/\mathcal{L} (\Delta_{probe} + \Omega_m)$.} \label{correction} \end{figure} This calibration procedure relies on the close proximity of the resonance frequencies of the two (1,1) modes, which yields the same cavity filtering effect. However, one can also evaluate the sideband ratio for several weakly coupled modes, deduce the probe detuning $\Delta_{probe}$ by fitting the results with the function $\mathcal{L} (\Delta_{probe} - \Omega_m)/\mathcal{L} (\Delta_{probe} + \Omega_m)$ vs $\Omega_m$, and finally use the same function with the inferred $\Delta_{probe}$ and $\Omega_m = 2 \pi \times 370\,$kHz to correct $R_{light}$. An example of such a fit is shown in Fig. \ref{correction}. This procedure also allows us to monitor the stability of the detuning during the measurement, as shown in the inset of Fig. \ref{correction}.
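The magnitude of this instrumental asymmetry can be estimated from the quoted numbers. The sketch below assumes for $\mathcal{L}$ a Lorentzian cavity response with half-width $\kappa/2$; this functional form is an assumption consistent with the fit function above, since the exact expression used in the analysis is not spelled out in this excerpt.

```python
import math

kappa = 2 * math.pi * 1.4e6    # cavity linewidth (FWHM), rad/s
Omega_m = 2 * math.pi * 370e3  # (1,1) mechanical frequency, rad/s
Delta = 2 * math.pi * 30e3     # typical residual probe detuning, rad/s

def lorentzian(omega):
    """Assumed cavity intensity response: Lorentzian of half-width kappa/2."""
    return 1.0 / ((kappa / 2.0) ** 2 + omega ** 2)

# Instrumental asymmetry imprinted on the motional sidebands by the residual
# detuning: ratio of the cavity response at the two sideband frequencies
correction = lorentzian(Delta - Omega_m) / lorentzian(Delta + Omega_m)
```

With these numbers the correction factor is about 1.07, i.e. an instrumental contribution to the sideband ratio at the several-percent level, of the order of the corrections quoted in the text; it vanishes for $\Delta_{probe}=0$, since the Lorentzian is even.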
We have typically found a detuning $\mid \Delta_{probe}\mid /2\pi < 30$ kHz (corresponding to $0.02\kappa$) and variations during a complete measurement that are three times smaller. The consequent corrections to $R_{light}$ amount to nearly $10 \%$. A preliminary evaluation of the sideband ratio for the weakly coupled modes is indeed a good method to adjust the probe detuning at the beginning of the measurement. The corrections to $R_{light}$ obtained with this procedure are in good agreement with the method using directly the ``heavy twin'' mode. \begin{figure} \caption{Closed symbols report the occupation number $\bar{n}$.} \label{fignm} \end{figure} The occupation number $\bar{n}$ calculated from the corrected sideband ratio is shown in Figure \ref{fignm} as a function of $\Gamma_{\mathrm{eff}}$, obtained at increasing values of cooling power. Solid curves reflect the expected $\bar{n}$ and its different contributions, calculated according to Eqs. (\ref{nbac}-\ref{ntot}). In particular, $n_{BA}^{cool}\sim0.58$, showing that for the (1,1) modes we are here in the weakly resolved sideband regime, and back-action cooling can in principle bring these modes to an occupation number below unity (close to $n_{BA}$ in the weak coupling regime \cite{Marquardt2007}). With respect to the analysis of the results extracted from the homodyne spectra, described in Fig. \ref{homo}b, here the theoretical curves have no free fitting parameters: all the contributions to $\bar{n}$ are calculated on the basis of independent measurements. The agreement with the experimental data is excellent, considering the experimental statistical uncertainty, suggesting the absence of non-modeled extra noise. Each single data point can thus be exploited to extract the occupation number, using as experimental error its statistical uncertainty which, differently from the case of $\mathcal{A} \Gamma_{\mathrm{eff}}$, is now compatible with the data scattering.
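The final step, extracting $\bar{n}$ from the corrected ratio, can be sketched as follows. The relation $R = \bar{n}/(\bar{n}+1)$ between the anti-Stokes/Stokes sideband ratio and the phonon occupancy is the standard asymmetry formula; it is assumed here, since the text relies on expressions given in equations not reproduced in this excerpt.

```python
def corrected_ratio(R_light, R_heavy):
    """Divide out the instrumental (cavity-filtering) asymmetry, measured
    on the weakly coupled 'heavy twin' mode, from the raw ratio."""
    return R_light / R_heavy

def occupancy_from_ratio(R):
    """Invert the standard sideband-asymmetry relation R = n / (n + 1)."""
    if not 0.0 < R < 1.0:
        raise ValueError("a physical anti-Stokes/Stokes ratio lies in (0, 1)")
    return R / (1.0 - R)
```

For example, an oscillator at the quoted $\bar{n}=3.87$ corresponds to a corrected ratio $R = 3.87/4.87 \approx 0.79$, and the inversion recovers the occupancy exactly, while a common multiplicative factor on both twins cancels in the correction.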
On the other hand, the overall set of data can be used to evaluate $T_{bath}$, leaving $\bar{n}_{\mathrm{th}}$ as a free parameter in the expression (\ref{ntot}). In this case, the extracted value is $6.7 \pm 0.6$ K, compatible with the 7.2 K measured by the sensor. \section{Conclusions} We compare two indicators of the oscillator occupation number, namely the peak area$\times$width product of the displacement spectrum, acquired in a homodyne setup, and the motional sideband asymmetry, measured by heterodyne detection. Neither case requires additional calibrations, even if an absolute calibration of the homodyne spectrum in terms of frequency fluctuations additionally allows us to infer the single-photon optomechanical coupling strength $g_0$. Both indicators are particularly sensitive at low occupation numbers (i.e., in the quantum regime). In our case the two kinds of estimate are in agreement, showing that a minimal occupation number of 3.9 is achieved in our experiment. However, the latter indicator is superior because it is less sensitive to additional technical noise, and it gives a result with a single measurement, while the former procedure requires a set of measurements as a function of, e.g., the cooling power. To reliably exploit the latter indicator one should keep in mind that a crucial requirement for an accurate measurement of the sideband ratio is the control of the probe detuning. We demonstrate a method to achieve it, based on the observation of the spectral features of weakly coupled mechanical modes. The calibration of the detuning is thus performed using phase signals generated inside the optomechanical cavity. This method is more trustworthy than those exploiting calibration tones on the probe field, since simultaneous phase and amplitude modulation, which commonly occurs in real setups, easily generates asymmetric sidebands that spoil accurate measurements of the cavity detuning.
A widespread use of reliable quantum optomechanical indicators, toward which this work is contributing, is expected to play an important role in the development of quantum technologies \cite{Acin2018}. \ack Research performed within the Project QuaSeRT funded by the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. The research has been partially supported by INFN (HUMOR project). AP has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 749709. \section*{References} \end{document}
\begin{document} \begin{frontmatter} \title{Almost Periodicity and Ergodic Theorems for Nonexpansive Mappings and Semigroups in Hadamard Spaces} \author[mymainaddress]{Hadi Khatibzadeh\corref{mycorrespondingauthor}} \cortext[mycorrespondingauthor]{Corresponding author} \ead{[email protected]} \author[mymainaddress]{Hadi Pouladi} \ead{[email protected]} \address[mymainaddress]{Department of Mathematics, University of Zanjan, P. O. Box 45195-313, Zanjan, Iran.} \begin{abstract} The main purpose of this paper is to prove the mean ergodic theorem for nonexpansive mappings and semigroups in locally compact Hadamard spaces, including finite dimensional Hadamard manifolds. The main tool for proving ergodic convergence is the almost periodicity of orbits of a nonexpansive mapping. Therefore, in the first part of the paper, we study almost periodicity (and, as a special case, periodicity) in metric and Hadamard spaces. Then, we prove a mean ergodic theorem for nonexpansive mappings and continuous semigroups of contractions in locally compact Hadamard spaces. Finally, an application to the asymptotic behavior of the first order evolution equation associated to a monotone vector field on Hadamard manifolds is presented. \end{abstract} \begin{keyword} Almost periodic\sep Ergodic theorem\sep Karcher mean\sep Locally compact Hadamard spaces\sep Hadamard manifold. \MSC[2010] 47H25\sep 40A05\sep 40J05 \end{keyword} \end{frontmatter} \section{Introduction} Von Neumann in \cite{VNeumann1932} proved the first mean ergodic theorem for linear nonexpansive mappings in Hilbert spaces. Almost forty years later, Baillon \cite{baillon1975} proved the first mean ergodic theorem for nonlinear nonexpansive mappings. Leaving aside some details, Baillon's theorem can be stated as follows. \begin{theorem}\label{baillon} Let $C$ be a nonempty closed convex subset of a Hilbert space $H$, $T:C\longrightarrow C$ be a nonexpansive mapping and $F(T):=\{x\in C: Tx=x\}\neq \emptyset$.
Then for each $x\in C$, $\frac{1}{n}\displaystyle\sum_{k=0}^{n-1}T^kx$ converges weakly to a fixed point of $T$ as $n\rightarrow\infty$. \end{theorem} The convergence stated in Theorem \ref{baillon} for orbits of the nonexpansive mapping $T$ is called Ces\`aro (or ergodic) convergence. A stronger notion of convergence, introduced by Lorentz \cite{lorentz1948}, is called {\it almost convergence}. A sequence $\{x_n\}$ is called almost convergent if $S_n^k=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1}x_{k+i}$ converges as $n\to\infty$ uniformly with respect to $k$. Br\'{e}zis and Browder \cite{Brezis1976} extended Baillon's nonlinear ergodic theorem to the convergence of general summability methods, including almost convergence of the orbit of a nonexpansive mapping in Hilbert spaces. Reich \cite{Reich1978} improved these results and simplified their proof, and in \cite{Reich1979} generalized these results to Banach spaces. Bruck \cite{Bruck1979} provided another proof for the almost convergence of the orbit of a nonexpansive mapping in Banach spaces. After them, many researchers studied various versions of nonlinear ergodic theorems for nonexpansive mappings and semigroups and their generalizations in the linear setting, especially in Banach spaces. Another concept that plays an essential role in this paper is {\it almost periodicity}. This concept was first studied by Bohr \cite{bohr1947almost} and has a close relationship with almost convergence. In fact, it is proved that every almost periodic sequence is almost convergent (see \cite{bohr1947almost, lorentz1948}). In this paper we first study periodicity and almost periodicity in metric and Hadamard spaces (the latter are introduced in the next section). Then, using the concept of almost periodicity, we prove an ergodic convergence theorem in locally compact Hadamard spaces. The paper is organized as follows.
Section 2 is devoted to the introduction of Hadamard spaces and some preliminaries that we need in the sequel. In Section 3, we show that the periodicity and almost periodicity of a sequence are equivalent to the periodicity and almost periodicity of the real sequence formed by the distances of the sequence points to an arbitrary point. Then, we recall a result about the relation between almost periodicity and relative compactness of the orbits of a nonexpansive mapping in metric spaces. Section 3 provides the necessary tools for proving the almost convergence of orbits of a nonexpansive mapping. Finally, in Section 4, we present the main result of the paper, i.e., the ergodic theorem for nonexpansive mappings in locally compact Hadamard spaces. Our method for proving the ergodic theorem is based on the strong convexity of the square of the distance in Hadamard spaces, as well as a lemma on the stability of the minimum point of a strongly convex function. In Section 5, we show that the main results of Sections 3 and 4 hold for continuous semigroups of contractions. In the last section we give an application of the main result to the convergence of the semigroup generated by solutions of a Cauchy problem governed by a monotone vector field on Hadamard manifolds. This result extends the result of Br\'{e}zis and Baillon \cite{BaillonBerzis1976} from Hilbert spaces to Hadamard manifolds. \section{Preliminaries} A metric space $(X,d)$ is said to be a geodesic metric space if every two points $x,y$ of $X$ are joined by a geodesic segment (geodesic), that is, the image of an isometry \allowdisplaybreaks\begin{equation*} \gamma :[0,d(x,y)]\longrightarrow X, \end{equation*} with $\gamma(0)=x$, $\gamma(d(x,y))=y$ and $d\big(\gamma(t),\gamma(t')\big)=|t-t'|$ for all $t,t'\in[0,d(x,y)]$. Such a space is said to be uniquely geodesic if between any two points there is exactly one geodesic; the geodesic joining two arbitrary points $x,y$ is denoted by $[x,y]$.
The points of $[x,y]$ are written as $z_t=(1-t)x\oplus ty$ for $t\in[0,1]$, where $d(z_t,x)=td(x,y)$ and $d(z_t,y)=(1-t)d(x,y)$. A geodesic triangle $\triangle:=\triangle(x_1,x_2,x_3)$ in a geodesic space $X$ consists of three points $x_1, x_2, x_3\in X$ as vertices and three geodesic segments joining each pair of vertices as edges. A comparison triangle for the geodesic triangle $\triangle$ is the triangle $\overline{\triangle}(x_1,x_2,x_3):=\triangle(\overline{x_1},\overline{x_2},\overline{x_3})$ in the Euclidean space $\Bbb{R}^2$ such that $d(x_i,x_j)=d_{\Bbb{R}^2}(\overline{x_i},\overline{x_j})$ for all $i,j=1,2,3$. A geodesic space $X$ is said to be a $CAT(0)$ space if for each geodesic triangle $\triangle$ in $X$ and its comparison triangle $\overline{\triangle}:=\triangle(\overline{x_1},\overline{x_2},\overline{x_3})$ in $\Bbb{R}^2$, the $CAT(0)$ inequality \begin{equation*} d(x,y)\leq d_{\Bbb{R}^2}(\overline{x},\overline{y}), \end{equation*} is satisfied for all $x,y\in \triangle$ and all comparison points $\overline{x}, \overline{y}\in \overline{\triangle}$, i.e., a geodesic triangle in $X$ is at least as thin as its comparison triangle in the Euclidean plane. A $CAT(0)$ space is uniquely geodesic. A complete $CAT(0)$ space is called a Hadamard space. From now on, we denote every Hadamard space by $\mathscr H$. Let $(X,d)$ be a metric space; a mapping $T:X\longrightarrow X$ is called nonexpansive if $d(Tx,Ty)\leq d(x,y)$ for all $x,y\in X$. $F(T)=\{x\in X : Tx=x\}$ denotes the set of all fixed points of the mapping $T$, which is closed and convex in Hadamard spaces (see \cite{kirkvalencia}).
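Since the Euclidean plane is the comparison space in the definition above, the $CAT(0)$ comparison inequalities (in particular the strong convexity of $d^2(x,\cdot)$ recalled below) hold there with equality. The following numerical check, included only as an illustration of ours, verifies this identity for random configurations:

```python
import math
import random

def d(p, q):
    """Euclidean distance in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def geodesic_point(x0, x1, t):
    """The point x_t = (1 - t) x0 (+) t x1 on the segment [x0, x1]."""
    return ((1 - t) * x0[0] + t * x1[0], (1 - t) * x0[1] + t * x1[1])

random.seed(0)
for _ in range(100):
    y, x0, x1 = [(random.random(), random.random()) for _ in range(3)]
    t = random.random()
    xt = geodesic_point(x0, x1, t)
    lhs = d(y, xt) ** 2
    rhs = (1 - t) * d(y, x0) ** 2 + t * d(y, x1) ** 2 - t * (1 - t) * d(x0, x1) ** 2
    assert abs(lhs - rhs) < 1e-9  # equality in the Euclidean plane
```

In a general $CAT(0)$ space the left-hand side is only bounded above by the right-hand side; the Euclidean equality is the borderline case of the comparison.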
A function $f:\mathscr H\longrightarrow \Bbb R$ is said to be convex if for all $x,y\in \mathscr H$ and for all $\lambda\in [0,1]$ \begin{equation*} f\big((1-\lambda )x\oplus\lambda y\big)\leq (1-\lambda)f(x)+\lambda f(y). \end{equation*} Moreover, $f$ is said to be strongly convex with parameter $\gamma>0$ if for all $x,y\in \mathscr H$ and all $\lambda\in[0,1]$ \begin{equation*} f\big(\lambda x\oplus (1-\lambda)y\big)\leq \lambda f(x)+(1-\lambda)f(y)-\lambda(1-\lambda)\gamma d^2(x,y). \end{equation*} A function $f:\mathscr H\longrightarrow \mathbb{R}$ is said to be lower semicontinuous (shortly, lsc) if the set $\{x\in X : f(x)\leq \alpha\}$ is closed for all $\alpha\in \Bbb R$. Any lsc, strongly convex function on a Hadamard space has a unique minimizer \cite{bacak2014convex}. The following fact can be found in \cite[Lemma 2.5]{DHOMPONGSA20082572} and \cite[page 163]{bridson2011metric}:\\ a geodesic metric space is a $CAT(0)$ space if and only if the function $d^2(x,\cdot)$, for all $x$, is strongly convex with $\gamma=1$, i.e., for every three points $x_0 ,x_1, y \in X$ and for every $0<t<1$ \allowdisplaybreaks\begin{equation} d^2(y, x_t)\leq(1-t)d^2(y,x_0)+td^2(y,x_1)-t(1-t)d^2(x_0,x_1), \end{equation} where $x_t=(1-t)x_0\oplus tx_1$ for every $t\in[0,1]$. Berg and Nikolaev in \cite{Berg2008} introduced the notion of quasilinearization, which is the map $\langle \cdot,\cdot\rangle:(X\times X)\times (X\times X)\longrightarrow \Bbb R$ defined by \begin{equation}\label{e12} \langle\overset{\rightarrow}{ab},\overset{\rightarrow}{cd}\rangle=\frac{1}{2}\big\{d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d) \big\} \quad a,b,c,d\in X, \end{equation} where a vector $\overset{\rightarrow}{ab}$ or $ab$ denotes a pair $(a,b)\in X\times X$. In \cite{Berg2008} they proved that $CAT(0)$ spaces satisfy the Cauchy-Schwarz-like inequality: \begin{equation*} \langle ab, cd\rangle\leq d(a,b)d(c,d) \quad (a,b,c,d\in X).
\end{equation*} Let $\mathscr{U}$ denote the set of all relatively compact bounded sequences in a Hadamard space $\mathscr H$, i.e., the bounded sequences $\{x_n\}$ such that the set $\{x_n:\ n=1,2,3,\cdots\}$ is relatively compact. For $\epsilon>0$, we say that $E\subset K$ is an $\epsilon-$net of $K$ if for each $x\in K$ there exists $e\in E$ such that $d(x,e)<\epsilon$. A subset $K$ of a metric space $(X,d)$ is said to be totally bounded if for each $\epsilon>0$ there exist $x_1,x_2,\cdots,x_n\in X$ such that $K\subseteq \displaystyle\bigcup_{i=1}^{n} B_{\epsilon}(x_i)$, i.e., $K$ has a finite $\epsilon-$net. It is well known that total boundedness coincides with relative compactness in complete metric spaces (see for example \cite{brown2013topological}). \section{Periodicity and Almost Periodicity} Kurtz \cite{kurtz1970} proved that the periodicity (resp. almost periodicity) of a sequence $\{x_n\}$ in a Banach space is equivalent to the periodicity (resp. almost periodicity) of the scalar sequence $\{x^*(x_n)\}$, where $x^*$ is an arbitrary element of the dual of the Banach space. Inspired by the results of Kurtz \cite{kurtz1970}, in this section we show the equivalence between the periodicity (resp. almost periodicity) of a sequence $\{x_n\}$ in a complete metric (or Hadamard) space and the periodicity (resp. almost periodicity) of the real sequence $\{d(x_n,y)\}$, where $y$ is an arbitrary point of the metric space. These results correspond to the results of Kurtz \cite{kurtz1970} in a setting where the notion of duality is not naturally available. We then continue this section by studying the relation between almost periodicity and relative compactness of orbits of a nonexpansive mapping in complete metric spaces. First we recall the notions of periodicity and almost periodicity in metric spaces, which are natural extensions of the corresponding notions for real sequences.
\begin{definition} Let $\{x_n\}$ be a sequence in a metric space $(X,d)$. We say that this sequence is periodic with period $p$ if there exists a positive integer $p$ such that $x_{n+p}=x_n$ for all $n$.\\ A sequence $\{x_n\}$ is called almost periodic if for each $\epsilon>0$ there are natural numbers $L=L(\epsilon)$ and $N=N(\epsilon)$ such that any interval $(k,k+L)$ where $k\geq 0$ contains at least one integer $p$ satisfying \begin{equation} d(x_{n+p},x_n)<\epsilon \quad\quad \forall n\geq N. \end{equation} \end{definition} \begin{proposition}\label{theo-per} A sequence $\{x_n\}$ in a Hadamard space $\mathscr{H}$ is periodic if and only if $\{d(x_n,x)\}$ is periodic for each $x\in\mathscr{H}$. \end{proposition} \begin{proof} The necessity is trivial because for each $x\in\mathscr{H}$, if $p$ is the period of $\{x_n\}$, we have $d(x_{n+p},x)=d(x_n,x)$ for all $n$, so the real sequence $\{d(x_n,x)\}$ is periodic.\\ Conversely, if for each $x\in\mathscr{H}$ the sequence $\{d(x_n,x)\}$ is periodic, then by definition, for each $x\in \mathscr H$ there exists $k$ such that $d(x_{n+k},x)=d(x_n,x)$ for all $n$. Take \begin{equation} E_k=\{x\in\mathscr{H} : d(x_{n+k},x)=d(x_n,x), \quad\forall n\}. \end{equation} By the continuity of the metric, each $E_k$ is closed and $\mathscr{H}=\displaystyle\bigcup_{k=1}^{\infty} E_k$. By the Baire category theorem in a complete metric space, at least one of the sets $E_k$ contains an open ball; let $B_r(x_0)$ be such a ball, i.e., there is a positive integer $k$ such that $B_r(x_0)\subset E_k$. Based on \cite[Proposition 9.2.28]{burago2001course}, for all $n$, in the open ball $B_r(x_0)$ there is a geodesic segment joining $x_0$ and $x\in B_r(x_0)$ with $x\neq x_0$ that is parallel to the geodesic segment joining $x_n$ and $x_{n+k}$.
\noindent Using quasilinearization and the fact that the Cauchy-Schwarz inequality for two parallel geodesic segments becomes an equality (by \cite{Berg2007}), we have: \begin{eqnarray}\label{e1} &&\frac{1}{2}\bigg(d^2(x_n,x)+d^2(x_{n+k},x_0)-d^2(x_n,x_0)-d^2(x_{n+k},x) \bigg)\nonumber \\ &&=\langle x_nx_{n+k},x_0x \rangle \nonumber \\ &&= d(x_n,x_{n+k})d(x_0,x). \end{eqnarray} But by the definition of $E_k$, and since $x,x_0\in B_r(x_0)\subset E_k$, we get: \begin{equation}\label{e2} d^2(x_n,x)=d^2(x_{n+k},x) \quad \text{and} \quad d^2(x_n,x_0)=d^2(x_{n+k},x_0). \end{equation} Therefore \eqref{e1} and \eqref{e2} imply that: \begin{equation*} d(x_n,x_{n+k})d(x_0,x)=0. \end{equation*} Since $d(x_0,x)\neq 0$, we obtain $d(x_n,x_{n+k})=0$ for all $n$, i.e., $x_n=x_{n+k}$ for all $n$. Thus, the sequence $\{x_n\}$ is periodic. \end{proof} \begin{proposition}\label{theo-almper} A sequence $\{x_n\}$ in $\mathscr{U}$ is almost periodic if and only if $\{d(x_n,x)\}$ is almost periodic for each $x\in\mathscr{H}$. \end{proposition} \begin{proof} {\bf Necessity.} Since $\{x_n\}$ is almost periodic, for each $\epsilon >0$ there exist natural numbers $L=L(\epsilon), N=N(\epsilon)$ such that every interval $(k,k+L)$, for any $k\geq 0$, contains at least one integer $p$ such that $d(x_{n+p},x_n)<\epsilon$ holds for all $n\geq N$. Also, for each $x\in\mathscr{H}$ we have: \begin{equation*} |d(x_{n+p},x)-d(x_n,x)|\leq d(x_{n+p},x_n). \end{equation*} So the same $N, L$ and $p$ as in the definition of almost periodicity of $\{x_n\}$ show that, for each $x$, $\{d(x_n,x)\}$ is almost periodic.\\ {\bf Sufficiency.} Let $\epsilon>0$. There are $q_1,q_2,\cdots,q_s\in X$ such that the union of the family $\{B_{\frac{\epsilon}{3}}(q_i)\}_{i=1}^{s}$ contains $\{x_n\}$. Since for each $x$ the sequence $\{d(x_n,x)\}$ is almost periodic, for each $q_i$ with $1\leq i\leq s$, $\{d(x_n,q_i)\}$ is almost periodic.
Therefore for all $1\leq i\leq s$, there are $L_i, N_i$ such that any interval $(k,k+L_i)$, where $k\geq 0$, contains a $p$ satisfying \begin{equation}\label{e3} |d(x_{n+p},q_i)-d(x_n,q_i)|< \frac{\epsilon}{3}\quad \forall n\geq N_i. \end{equation} Take $L=\displaystyle\max_{1\leq i\leq s} L_i$ and $N=\displaystyle\max_{1\leq i\leq s} N_i$. For all $n\geq N$, there is $q_{i_n}$ such that\linebreak $x_n\in B_{\frac{\epsilon}{3}}(q_{i_n})$, i.e., \begin{equation}\label{e4} d(x_n,q_{i_n})< \frac{\epsilon}{3}. \end{equation} By \eqref{e3}, \eqref{e4} and the triangle inequality $d(x_{n+p},q_{i_n})\leq |d(x_{n+p},q_{i_n})-d(x_n,q_{i_n})|+d(x_n,q_{i_n})$, for any $k\geq 0$ there is $p\in (k,k+L)$ such that \begin{equation}\label{e5} d(x_{n+p},q_{i_n})< \frac{2\epsilon}{3}\quad \forall n\geq N. \end{equation} Thus \eqref{e4} and \eqref{e5} imply that for arbitrary $\epsilon >0$ there are $L, N$ such that for any $k\geq 0$ there is at least one $p$ in the interval $(k,k+L)$ satisfying \begin{equation*} d(x_{n+p},x_n)< \epsilon \quad \forall n\geq N, \end{equation*} and the proof of the almost periodicity of $\{x_n\}$ is complete. \end{proof} The following lemma shows that Theorem 1 of \cite{DAFERMOS197397} also holds for general metric spaces. We use this lemma to extend the next theorem from Banach spaces to complete metric spaces. It is easily seen that the proof of \cite{DAFERMOS197397} also works for metric spaces, so we recall the lemma without proof. \begin{lemma}\label{l1} Let $(X,d)$ be a metric space and $T:X\to X$ be a nonexpansive mapping. If $w(x)$ is the set of strong subsequential limits of the iteration sequence $\{T^nx\}$, then $T$ is an isometry on $w(x)$. \end{lemma} The next theorem extends \cite[Theorem 3.2]{Baillonbruckreich} from Banach spaces to complete metric spaces. Although the proof of the theorem is only briefly sketched in \cite{Baillonbruckreich}, we provide the reader with the following more complete proof.
\begin{theorem}\label{theo-almperreco} Suppose that $(X,d)$ is a complete metric space and $T:X\rightarrow X$ is a nonexpansive mapping with $F(T)\neq\emptyset$. Then the sequence $\{T^nx\}$ is almost periodic if and only if it is relatively compact. \end{theorem} \begin{proof} Necessity is obvious because by \cite{vesel2011} an almost periodic sequence is totally bounded, and in a complete metric space total boundedness is equivalent to relative compactness. For the converse, by assumption $\{T^nx\}$ is relatively compact. If we denote the set of all strong subsequential limits of $\{T^nx\}$ by $w(x)$, then this set is nonempty and closed, and consequently compact by the relative compactness of the iteration sequence. Also, by Lemma \ref{l1}, $T$ is an isometry on $w(x)$. Take an element $y_0$ in $w(x)$ and set $y_n=T^ny_0$. Clearly, since $T$ is an isometry, $\{y_n\}$ is a relatively compact isometric sequence, i.e., for all $n, m, i$ in $\mathbb{Z}^+$ we have $d(y_{n+i},y_{m+i})=d(y_n,y_m)$. First we show that $\{y_n\}$ is an almost periodic sequence.\\ By relative compactness, let $\{y_{m_i}\}_{i=1}^{r}$ be a finite $\epsilon-$net for $\{y_n\}$. Set $L=r+1$ and $N=0$ in the definition of almost periodicity: for arbitrary $n\geq 0$, $y_n$ belongs to $B_{\epsilon}(y_{m_i})$ for some $1\leq i\leq r$. Among $y_n, y_{n+1},\cdots, y_{n+r-1}$, if at least two points, say $y_{n+i}$ and $y_{n+i+p}$ with $p<L=r+1$, belong to the same ball, then by the isometric property of the sequence $\{y_n\}$ we have: \begin{equation}\label{e6} d(y_{n+i},y_{n+i+p})=d(y_n,y_{n+p})<\epsilon. \end{equation} If no two of $y_n, y_{n+1},\cdots, y_{n+r-1}$ belong to the same ball, then certainly $y_{n+r}$ shares a ball with one of them, so $y_{n+j}$ and $y_{n+j+p}$ with $p<L=r+1$ lie in the same ball and again, by the isometric property of the sequence $\{y_n\}$, as in \eqref{e6}, $d(y_n,y_{n+p})<\epsilon$.
Thus we find a $p$ in the interval $(0,r+1)$ such that, by the isometricity of the sequence $\{y_n\}$, $d(y_n,y_{n+p})<\epsilon$ for all $n\geq 0$. A similar argument yields, for arbitrary $k\geq 0$, such a $p$ in the interval $(k,k+L)$, and this implies that $\{y_n\}$ is almost periodic.\\ Now, since $y_0$ is a limit point of $\{T^nx\}$, there exists $N_1=N(\frac{\epsilon}{3})$ such that for all $n\geq N_1$: \begin{equation}\label{e7} d(T^nx,y_0)<\frac{\epsilon}{3}. \end{equation} Moreover, since $\{y_n\}$ is almost periodic, applying the above argument to a finite $\frac{\epsilon}{6}$-net with $r$ balls covering the sequence $\{y_n\}$ shows that, with $L=r+1$ and $N_2=0$, for any $k\geq 0$ there is at least one $p$ in the interval $(k,k+L)$ such that $d(y_{n+p},y_n)<\frac{\epsilon}{3}$ for all $n\geq 0$; in particular: \begin{equation}\label{e8} d(T^py_0,y_0)<\frac{\epsilon}{3}. \end{equation} Finally, for arbitrary $\epsilon>0$, set $N=N_1$ and $L=r+1$, where $r$ is the number of balls in the $\frac{\epsilon}{6}$-net. Since $T$ is nonexpansive, by \eqref{e7} and \eqref{e8}, if for every $k\geq 0$ we take in the interval $(k,k+L)$ the same $p$ as in \eqref{e8}, then for all $n\geq N$ we obtain: \begin{eqnarray} d(T^{n+p}x,T^nx)&\leq & d(T^{N+p}x,T^Nx) \nonumber \\ &\leq &d(T^{N+p}x,T^py_0)+d(T^py_0,y_0)+d(y_0,T^Nx)\nonumber\\ &\leq &d(T^Nx,y_0)+d(T^py_0,y_0)+d(y_0,T^Nx)\nonumber\\ &< &\frac{\epsilon}{3}+\frac{\epsilon}{3}+\frac{\epsilon}{3}=\epsilon, \nonumber \end{eqnarray} and this completes the proof. \end{proof} An immediate consequence of Theorem \ref{theo-almperreco} is the almost periodicity of the orbits of a nonexpansive mapping with a nonempty fixed point set in a locally compact metric space. We will use this property in the next section to prove the mean ergodic theorem.
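Theorem \ref{theo-almperreco} can also be illustrated numerically. The following Python sketch is not part of the formal development; the map, base point and tolerance are ad hoc choices made for illustration only. It takes $T$ to be an irrational rotation of the Euclidean plane about the origin, a nonexpansive mapping with $F(T)=\{0\}$ whose orbits are relatively compact. Since this $T$ is an isometry, $p$ is an $\epsilon$-almost period of the orbit exactly when $T^px$ returns $\epsilon$-close to $x$, and the computed return times indeed have uniformly bounded gaps, as almost periodicity requires.

```python
import math

ALPHA = 2 * math.pi * (math.sqrt(5) - 1) / 2  # irrational rotation angle

def T(p):
    """Rotation about the origin: a nonexpansive map with F(T) = {0}."""
    c, s = math.cos(ALPHA), math.sin(ALPHA)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x = (1.0, 0.0)
eps = 0.05

# Since T is an isometry, d(T^{n+p}x, T^n x) = d(T^p x, x) for all n,
# so p is an eps-almost period iff T^p x returns eps-close to x.
periods = []
y = x
for p in range(1, 20000):
    y = T(y)          # y = T^p x
    if d(y, x) < eps:
        periods.append(p)

# The eps-almost periods are relatively dense: their gaps are uniformly
# bounded, so every window (k, k+L) contains one, as the definition demands.
gaps = [b - a for a, b in zip(periods, periods[1:])]
L = max(gaps)
assert periods and L < 500
print("number of eps-almost periods:", len(periods), " max gap L =", L)
```

The same experiment with a rational rotation angle produces a genuinely periodic orbit, which is the degenerate case of almost periodicity.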
\section{Ergodic Theorem for Nonexpansive Mappings} In Hadamard spaces, Liimatainen \cite{Liimatainen2012} proved a result on the mean ergodic convergence of the Karcher means of orbits of nonexpansive mappings. The strongest result of \cite{Liimatainen2012} on the ergodic convergence of orbits of a general nonexpansive mapping states that every weak cluster point (for the definitions of weak cluster point and weak convergence in Hadamard spaces see \cite{bacak2014convex, Kirk2008weak}) of the Karcher mean (see the following definition) of a bounded orbit is a fixed point of the mapping. The weak convergence of the orbit, however, is still an open problem in general Hadamard spaces, because the set of weak cluster points is not necessarily a singleton. In this paper, using the notion of almost periodicity studied in Section 2 in Hadamard spaces (Proposition \ref{theo-almper} and Theorem \ref{theo-almperreco}), we prove the almost convergence of the orbit (which is stronger than ergodic convergence) in locally compact Hadamard spaces, including (finite-dimensional) Hadamard manifolds. First, we recall the concept of the Karcher mean in Hadamard spaces. \begin{definition}\label{Karcher mean} Given a sequence $\{x_n\}$ in a Hadamard space, for $n\in \mathbb{N}$ and $k\in \mathbb{N}\cup\{0\}$ we define the functions \begin{equation}\label{e22} \mathcal{F}_n(x)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_i,x), \end{equation} and \begin{equation}\label{e23} \mathcal{F}_n^k(x)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(x_{k+i},x). \end{equation} From \cite[Proposition 2.2.17]{bacak2014convex} we know that these functions have unique minimizers. For $\mathcal{F}_n(x)$ the unique minimizer is denoted by $\sigma_n(x_0,\ldots ,x_{n-1})$ (shortly, $\sigma_n$) and is called the mean of $x_0,\ldots ,x_{n-1}$. Similarly, for $\mathcal{F}_n^k(x)$ the unique minimizer is denoted by $\sigma_n^k(x_k,\ldots ,x_{k+n-1})$ (shortly, $\sigma_n^k$) and is called the mean of $x_k,\ldots ,x_{k+n-1}$.
These means are known as Karcher means \cite{karcher1977}, and in Hilbert spaces they coincide with the usual (linear) means (see \cite[Example 2.2.5]{bacak2014convex}). A sequence $\{x_n\}$ in a Hadamard space $\mathscr H$ is called Cesaro convergent, or mean convergent (resp.\ almost convergent), to $x\in \mathscr H$ if $\sigma_n$ (resp.\ $\sigma_n^k$) converges (resp.\ converges uniformly in $k$) to $x$. \end{definition} \noindent For an orbit $\{T^n x \mid n=0,1,2,\ldots\}$, $\sigma_n(x)$ and $\sigma_n^k(x)$ are defined, respectively, as the unique minimizers of the functions \begin{equation*} \mathcal{F}[x]_n(y)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(T^ix,y), \end{equation*} and \begin{equation*} \mathcal{F}[x]_n^k(y)=\frac{1}{n}\displaystyle\sum_{i=0}^{n-1} d^2(T^{k+i}x,y). \end{equation*} The following two lemmas are needed to state the main result. \begin{lemma}\label{lemstconfun1} Let $(\mathscr H, d)$ be a Hadamard space, let $f:\mathscr H\longrightarrow \Bbb R$ be a lower semicontinuous and strongly convex function, and let $x$ be the unique minimizer of $f$. Then for every $y$ outside the ball of radius $\delta$ centered at $x$, we have: \begin{equation*} f(x)<f(y) - \big(d(x,y)-\delta\big)d(x,y). \end{equation*} \end{lemma} \begin{proof} Let $y$ be a point outside the ball of radius $\delta$ centered at $x$. The geodesic segment joining $x$ and $y$ intersects the boundary of this ball at $\lambda_0 x+(1-\lambda_0)y$, where $0<\lambda_0<1$. \noindent By the properties of the metric in a Hadamard space, we have $(1-\lambda_0)d(x,y)=\delta$, so $d(x,y)=\delta +\lambda_0 d(x,y)$ and hence \begin{equation}\label{ineqstconeq1} \lambda_0=\frac{d(x,y)-\delta}{d(x,y)}.
\end{equation} Since $x$ is the unique minimizer of the strongly convex function $f$, we have: \allowdisplaybreaks\begin{eqnarray} f(x)& <& f\big(\lambda_0 x+(1-\lambda_0)y\big)\nonumber \\ &\leq &\lambda_0 f(x)+(1-\lambda_0)f(y)- \lambda_0(1-\lambda_0) d^2(x,y), \nonumber \end{eqnarray} and therefore \begin{equation*} (1-\lambda_0)f(x)<(1-\lambda_0)f(y) - \lambda_0(1-\lambda_0)d^2(x,y). \end{equation*} Dividing by $(1-\lambda_0)$, we get the desired result by \eqref{ineqstconeq1}. \end{proof} \begin{lemma}\label{lemstconfun2} Let $(\mathscr H, d)$ be a Hadamard space and let $\{f_n^k\}_{k,n}$ be a family of convex functions on $\mathscr H$. Suppose that $\{x_n^k\}_{k,n}$ is a family of minimum points of $\{f_n^k\}_{k,n}$ and $x$ is the unique minimizer of a strongly convex function $f$, satisfying: \begin{enumerate} \renewcommand{\theenumi}{\Roman{enumi}} \item\label{lemstconfun2i1} the sequence $\{f_n^k\}$ converges pointwise to $f$ as $n$ tends to infinity, uniformly in $k\geq 0$, \item\label{lemstconfun2i2} $\displaystyle\limsup_{n\rightarrow\infty} \sup_{k\geq 0}\big(f(x_n^k)-f_n^k(x_n^k)\big)\leq 0$. \end{enumerate} Then $x_n^k$ converges to $x$ uniformly in $k\geq 0$ as $n\rightarrow \infty$. \end{lemma} \begin{proof} The goal is to prove that $\displaystyle\lim_{n\rightarrow\infty}\sup_{k\geq 0} d(x_n^k,x)=0$. Suppose, to the contrary, that \begin{equation*} \exists \delta>0 \quad :\quad \forall N\quad \exists n\geq N\quad \text{such that}\quad \sup_{k\geq 0} d(x_n^k,x)\geq\delta. \end{equation*} Therefore, for each $0<\epsilon<\delta$ there exist $k=k(\epsilon,n)$ and a subsequence of $\{x_n^k\}_n$ (which we denote by the same symbol $\{x_n^k\}_n$) such that $d(x_n^k,x)\geq\delta-\epsilon$. Choosing $\epsilon<\frac{\delta}{2}$, we have: \begin{equation}\label{ineqstconeq2} d(x_n^k,x)\geq\delta-\epsilon>\frac{\delta}{2}.
\end{equation} Using Lemma \ref{lemstconfun1} with $\frac{\delta}{2}$ in place of $\delta$, we get: \begin{equation}\label{ineqstconeq3} f(x)<f(x_n^k) - \big(d(x,x_n^k)-\frac{\delta}{2}\big)d(x,x_n^k). \end{equation} On the other hand, since $x_n^k$ is a minimum point of $f_n^k$, we have: \begin{equation}\label{ineqstconeq4} f_n^k(x_n^k)\leq f_n^k(x). \end{equation} By \eqref{ineqstconeq2}, \eqref{ineqstconeq3} and \eqref{ineqstconeq4}, we obtain: \allowdisplaybreaks\begin{eqnarray} (\frac{\delta}{2}-\epsilon)\frac{\delta}{2}&< & \big(d(x,x_n^k)-\frac{\delta}{2}\big)d(x,x_n^k)\nonumber \\ &< &f(x_n^k)-f(x)+f_n^k(x)-f_n^k(x_n^k). \nonumber \end{eqnarray} Letting $n\rightarrow\infty$, assumptions \eqref{lemstconfun2i1} and \eqref{lemstconfun2i2} force the limit superior of the right-hand side to be at most $0$, while the left-hand side is a positive constant, a contradiction. Thus $x_n^k$ converges to $x$ uniformly in $k\geq 0$ as $n\rightarrow \infty$. \end{proof} \begin{theorem}\label{maintheo1} Let $C$ be a nonempty, closed and convex subset of a locally compact Hadamard space $\mathscr{H}$ and let $T:C\rightarrow C$ be a nonexpansive mapping with $F(T)\neq\emptyset$. Then the sequence $\{T^nx\}$ is almost convergent to a fixed point of $T$. \end{theorem} \begin{proof} Let $x\in C$. Since $F(T)\neq\emptyset$, $\{T^nx\}$ is bounded and, by the local compactness of the space, relatively compact. Theorem \ref{theo-almperreco} implies that $\{T^nx\}$ is almost periodic, and by Proposition \ref{theo-almper} $\{d^2(T^nx,y)\}$ is almost periodic for all $y\in \mathscr H$. By \cite{lorentz1948} (see also \cite{bohr1947almost}), the scalar sequence $\{d^2(T^nx,y)\}$ is almost convergent for all $y\in \mathscr H$. Define: \begin{equation}\label{eqmainresu1} \mathcal{F}[x]_n^k(y):=\displaystyle\frac{1}{n}\sum_{i=0}^{n-1}d^2(T^{k+i}x,y), \end{equation} and \begin{equation}\label{eqmainresu2} \mathcal{F}[x](y):=\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}d^2(T^{k+i}x,y)\quad \text{uniformly in}\ k\geq 0.
\end{equation} The almost convergence of $\{d^2(T^nx,y)\}$ for every $y\in \mathscr H$ shows that \eqref{eqmainresu2} is well defined. By the strong convexity of the functions $y\mapsto d^2(z,y)$, the functions $\mathcal{F}[x]_n^k$ and $\mathcal{F}[x]$ are strongly convex and therefore have unique minimizers $\sigma_n^k(x)$ and $\sigma(x)$, respectively. If we show that assumption \eqref{lemstconfun2i2} of Lemma \ref{lemstconfun2} holds, then $\mathcal{F}[x]_n^k$ and $\mathcal{F}[x]$ satisfy all the assumptions of Lemma \ref{lemstconfun2}, and hence $\{T^nx\}$ is almost convergent to $\sigma(x)$. Note that assumption \eqref{lemstconfun2i1} holds by \eqref{eqmainresu2}; moreover, by the first part of the proof of Lemma \ref{lemstconfun2}, there exists a subsequence $\{\sigma_n^k(x)\}_n$ satisfying the inequalities obtained there. It remains to show that \begin{equation*} \displaystyle\limsup_{n\rightarrow\infty} \sup_{k\geq 0}\Big(\mathcal{F}[x]\big(\sigma_n^k(x)\big)-\mathcal{F}[x]_n^k\big(\sigma_n^k(x)\big)\Big)\leq 0. \end{equation*} Suppose, to the contrary, that \begin{equation*} \displaystyle\limsup_{n\rightarrow\infty} \sup_{k\geq 0}\Big(\mathcal{F}[x]\big(\sigma_n^k(x)\big)-\mathcal{F}[x]_n^k\big(\sigma_n^k(x)\big)\Big)> 0, \end{equation*} i.e., \begin{equation*} \exists \lambda>0\quad :\quad \displaystyle\limsup_{n\rightarrow\infty} \sup_{k\geq 0}\Big(\mathcal{F}[x]\big(\sigma_n^k(x)\big)-\mathcal{F}[x]_n^k\big(\sigma_n^k(x)\big)\Big)> \lambda. \end{equation*} Therefore there exists a subsequence $\{n_i\}$ of $\{n\}$ such that: \begin{equation*} \sup_{k\geq 0}\Big(\mathcal{F}[x]\big(\sigma_{n_i}^k(x)\big)-\mathcal{F}[x]_{n_i}^k\big(\sigma_{n_i}^k(x)\big)\Big)> \lambda. \end{equation*} On the other hand, for each $0<\epsilon<\lambda$ there exists $k=k(\epsilon)$ such that $$\mathcal{F}[x]\big(\sigma_{n_i}^k(x)\big)-\mathcal{F}[x]_{n_i}^k\big(\sigma_{n_i}^k(x)\big)>\lambda -\epsilon,$$ i.e., \begin{equation}\label{eqmainresu3} \mathcal{F}[x]\big(\sigma_{n_i}^k(x)\big)>\lambda -\epsilon +\mathcal{F}[x]_{n_i}^k\big(\sigma_{n_i}^k(x)\big).
\end{equation} By \eqref{eqmainresu2} and \eqref{eqmainresu3}, there exists $p_0$ such that for all $p\geq p_0$ we have: \begin{equation*} \displaystyle\frac{1}{p}\sum_{j=0}^{p-1} d^2\big(T^{k+j}x,\sigma_{n_i}^k(x)\big)>\lambda-\epsilon+\displaystyle\frac{1}{n_i}\sum_{j=0}^{n_i-1} d^2\big(T^{k+j}x,\sigma_{n_i}^k(x)\big). \end{equation*} Taking $p=n_i$ for $i$ sufficiently large that $n_i\geq p_0$, we obtain: \begin{equation*} \displaystyle\frac{1}{n_i}\sum_{j=0}^{n_i-1} d^2\big(T^{k+j}x,\sigma_{n_i}^k(x)\big)>\lambda-\epsilon+\displaystyle\frac{1}{n_i}\sum_{j=0}^{n_i-1} d^2\big(T^{k+j}x,\sigma_{n_i}^k(x)\big), \end{equation*} which is a contradiction. This shows that assumption \eqref{lemstconfun2i2} of Lemma \ref{lemstconfun2} holds. Now we show that $\sigma(x)$ is a fixed point of $T$. We have: \allowdisplaybreaks\begin{eqnarray} \mathcal{F}[x]\big(T\sigma(x)\big)&=&\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1} d^2\big(T^{k+i}x,T\sigma(x)\big)\nonumber\\ &= &\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\big\{d^2\big(T^kx,T\sigma(x)\big)+\displaystyle\sum_{i=1}^{n-1} d^2\big(T^{k+i}x,T\sigma(x)\big)\big\}\nonumber\\ &\leq &\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-2} d^2\big(T^{k+i}x,\sigma(x)\big)\ \ \ \ \ \text{by nonexpansivity and the boundedness of the orbit}\nonumber\\ &\leq &\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1} d^2\big(T^{k+i}x,\sigma(x)\big)\nonumber\\ &=&\mathcal{F}[x]\big(\sigma(x)\big).\nonumber \end{eqnarray} Uniqueness of the minimizer implies that $T\sigma(x)=\sigma(x)$, which completes the proof. \end{proof} \section{Ergodic Theorem for Continuous Semigroups of Contractions} In this section, we state the analogues of the results of the two previous sections for continuous semigroups of contractions with a nonempty fixed point set, and we show the almost convergence of the orbit to a fixed point of the semigroup in a locally compact Hadamard space.
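Before turning to the continuous case, we remark that the discrete theorem just proved is easy to visualize in the simplest Hadamard space, the Euclidean plane, where the Karcher mean reduces to the arithmetic mean. The following Python sketch is purely illustrative (the fixed point, rotation angle and tolerances are ad hoc choices, not part of the theory): for a rotation about a point $P_0$, the orbit never converges, yet the means $\sigma_n^k$ approach $P_0$ uniformly in $k$, which is exactly almost convergence.

```python
import math

P0 = (1.0, 2.0)                       # the unique fixed point
ALPHA = math.sqrt(2)                  # irrational rotation angle

def T(p):
    """Rotation about P0: nonexpansive (an isometry) with F(T) = {P0}."""
    c, s = math.cos(ALPHA), math.sin(ALPHA)
    u, v = p[0] - P0[0], p[1] - P0[1]
    return (P0[0] + c * u - s * v, P0[1] + s * u + c * v)

# Precompute the orbit T^i x
x = (4.0, 2.0)
orbit = [x]
for _ in range(6000):
    orbit.append(T(orbit[-1]))

def sigma(n, k):
    """In Euclidean space the Karcher mean sigma_n^k is the arithmetic
    mean of T^k x, ..., T^{k+n-1} x (the minimizer of F[x]_n^k)."""
    xs = orbit[k:k + n]
    return (sum(p[0] for p in xs) / n, sum(p[1] for p in xs) / n)

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# The orbit itself does not converge ...
assert d(orbit[5000], orbit[5001]) > 1.0
# ... but sigma_n^k -> P0 uniformly in k (almost convergence):
n = 4000
worst = max(d(sigma(n, k), P0) for k in range(0, 2000, 50))
assert worst < 0.01
print("sup_k d(sigma_n^k, P0) over sampled k:", worst)
```

The same computation with the sequence $\sigma_n^0$ alone exhibits ordinary Cesaro convergence; sampling over $k$ is what distinguishes almost convergence.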
Since the proofs are similar to the discrete versions, we state only the results, without proofs. Let $C$ be a closed convex subset of a Hadamard space $(\mathscr{H},d)$. A one-parameter continuous semigroup of contractions is a family $\mathcal{S}=\{S(t) : 0\leq t<\infty\}$ of self-mappings $S(t):C\to C$ satisfying the following conditions: \begin{enumerate} \renewcommand{\theenumi}{\roman{enumi}} \item for all $x\in C$, $S(0)x=x$, \item for all $x\in C$ and all $t,s\in [0,\infty)$, $S(t+s)x=S(t)S(s)x$, \item for each $x\in C$, $t\mapsto S(t)x$ is continuous on $[0,\infty)$, \item for each $t\in [0,\infty)$, $S(t)$ is a nonexpansive mapping on $C$. \end{enumerate} Let $F(\mathcal{S})$ denote the common fixed point set of the family $\mathcal{S}$, i.e., $F(\mathcal{S})=\displaystyle\bigcap_{t\geq 0} F\big(S(t)\big)$. Note that, by \cite{kirkvalencia}, $F(\mathcal{S})$ is a closed and convex subset of a Hadamard space. \begin{definition}[Almost periodic function] Let $(X,d)$ be a metric space and let $f:\Bbb R^+\longrightarrow X$ be a continuous function. Then $f$ is called an almost periodic function if for each $\epsilon>0$ there exist real numbers $L=L(\epsilon)$ and $\tau=\tau(\epsilon)$ such that any interval $(q, q+L)$ with $q\geq 0$ contains at least one number $T$ for which \begin{equation*} d\big(f(t+T),f(t)\big)<\epsilon \quad \forall t\geq \tau. \end{equation*} \end{definition} Considering this definition and replacing sequences with nets, the arguments of Proposition \ref{theo-almper} remain valid for relatively compact nets in Hadamard spaces, and we have the next proposition. \begin{proposition} A net $\{x_t\}_{t\in \Bbb R^+}$ is almost periodic if and only if $\{d(x_t,x)\}_{t\in \Bbb R^+}$ is almost periodic for each $x\in\mathscr{H}$. \end{proposition} The proof of \cite[Theorem 1]{DAFERMOS197397} is stated for continuous semigroups on Banach spaces, but it also works in metric spaces.
Therefore, the continuous analogue of Lemma \ref{l1} holds. \begin{lemma} Let $C$ be a closed convex subset of a metric space $(X,d)$, let $\mathcal{S}$ be a continuous semigroup of nonexpansive mappings on $C$, let $x\in C$, and let $w(x)$ be the set of strong subsequential limits of the orbit $\{S(t)x\}$. Then for each $t\in [0,\infty)$, $S(t)$ is an isometry on $w(x)$. \end{lemma} \noindent Thus Theorem \ref{theo-almperreco} remains valid for semigroups of contractions, with a similar proof; we state it as follows. \begin{theorem} Suppose that $(X,d)$ is a complete metric space, $C$ is a closed convex subset of $X$ and $\mathcal{S}$ is a continuous semigroup of nonexpansive mappings on $C$ with $F(\mathcal{S})\neq \emptyset$. Then for $x\in C$ the function $t\mapsto S(t)x$ is almost periodic if and only if its range is relatively compact. \end{theorem} For an orbit $\{S(t)x\}$, $\sigma_T(x)$ and $\sigma_T^s(x)$ are defined, respectively, as the unique minimizers of the functions \begin{equation*} \mathcal{G}[x]_T(y)=\frac{1}{T}\displaystyle\int_{0}^{T} d^2(S(t)x,y)dt, \end{equation*} and \begin{equation*} \mathcal{G}[x]_T^s(y)=\frac{1}{T}\displaystyle\int_{0}^{T} d^2(S(s+t)x,y)dt. \end{equation*} A net $\{S(t)x\}$ in a Hadamard space is called Cesaro convergent, or mean convergent (resp.\ almost convergent), to a point $y$ if $\sigma_T$ (resp.\ $\sigma_T^s$) converges (resp.\ converges uniformly in $s$) to $y$.\\ Using nets instead of sequences in Lemmas \ref{lemstconfun1} and \ref{lemstconfun2}, the same arguments remain valid, and we have: \begin{lemma} Let $(\mathscr H, d)$ be a Hadamard space and let $\{f_t^s\}_{s,t}$ be a net of convex functions on $\mathscr H$.
Suppose that $\{x_t^s\}_{s,t}$ is a net of minimum points of $\{f_t^s\}_{s,t}$ and $x$ is the unique minimizer of a strongly convex function $f$, satisfying: \begin{enumerate} \renewcommand{\theenumi}{\Roman{enumi}} \item\label{lemstconfun2i1net} the net $\{f_t^s\}$ converges pointwise to $f$ as $t\rightarrow\infty$, uniformly in $s\geq 0$, \item\label{lemstconfun2i2net} $\displaystyle\limsup_{t\rightarrow\infty} \sup_{s\geq 0}\big(f(x_t^s)-f_t^s(x_t^s)\big)\leq 0$. \end{enumerate} Then $x_t^s$ converges to $x$ uniformly in $s\geq 0$ as $t\rightarrow \infty$. \end{lemma} Therefore the main result holds for semigroups of contractions. \begin{theorem}\label{semimaintheo1} Let $C$ be a nonempty, closed and convex subset of a locally compact Hadamard space $\mathscr{H}$ and let $\mathcal{S}$ be a continuous semigroup of nonexpansive mappings on $C$ with $F(\mathcal{S})\neq \emptyset$. Then the orbit $\{S(t)x\}$ is almost convergent to a common fixed point of $\mathcal{S}$. \end{theorem} \begin{proof} The techniques used in the proof of Theorem \ref{maintheo1} also apply to nets; hence, for a continuous semigroup of nonexpansive mappings, $\sigma_T^s(x)\rightarrow \sigma(x)$ uniformly in $s\geq 0$ as $T\rightarrow\infty$. Now we show that $\sigma(x)$ is a common fixed point of $\mathcal{S}$. For each $r\geq 0$, without loss of generality taking $s=u+r\geq r$, we have: \allowdisplaybreaks\begin{eqnarray} \mathcal{G}[x]\big(S(r)\sigma(x)\big)&=&\displaystyle\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T} d^2\big(S(s+t)x,S(r)\sigma(x)\big) dt\nonumber\\ &\leq &\displaystyle\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T} d^2\big(S(u+t)x,\sigma(x)\big) dt\nonumber\\ &=&\mathcal{G}[x]\big(\sigma(x)\big).\nonumber \end{eqnarray} Uniqueness of the minimizer implies that $S(r)\sigma(x)=\sigma(x)$ for each $r\geq 0$, which completes the proof.
\end{proof} \section{Ergodic Convergence of Semigroups Generated by Monotone Vector Fields} Consider a Hilbert space $H$ with inner product $\langle\cdot ,\cdot \rangle$ and induced norm $\|\cdot\|$. A set-valued mapping $A: H\rightarrow 2^H$ is said to be monotone if for all $x,y\in D(A)$ \begin{equation*} \langle x^*-y^*,x-y\rangle \geq 0 \quad \forall~x^*\in Ax,~y^*\in Ay, \end{equation*} where $D(A)=\{x\in H : Ax\neq \emptyset\}$. A monotone operator $A$ is said to be maximal monotone if there exists no monotone operator whose graph properly contains $Gr(A)$, where $Gr(A)=\{(x,y)\in H\times H : x\in D(A),\ y\in Ax \}$. The set of zeros of the operator $A$ is $A^{-1}0=\{x\in H : 0\in Ax\}$. For more details about monotone operators in Hilbert spaces, the reader may consult \cite{bauschke2017convex}. Let $A$ be a maximal monotone operator on $H$. The Cauchy problem \begin{equation}\label{eveq1} \begin{cases} -x^{\prime}(t)\in Ax(t), &\\ x(0)=x_0, & \end{cases} \end{equation} has been studied by many authors, with regard to both the existence of solutions and their asymptotic behavior. Setting $S(t)x_0:=x(t)$, by the existence and uniqueness of the solution of \eqref{eveq1}, $\mathcal{S}=\{S(t) : t\geq 0\}$ is a continuous semigroup. It is easily seen that the monotonicity of $A$ implies that $\mathcal{S}$ is a semigroup of contractions (see \cite{morosanu1988nonlinear}). It is known that $p\in A^{-1}0$ if and only if $p$ is a common fixed point of the semigroup $\mathcal{S}$; therefore $A^{-1}0$ coincides with the set of common fixed points of $S(t)$, $t\geq 0$ (see \cite{morosanu1988nonlinear}). Hence, if $A^{-1}0\neq \emptyset$, the ergodic theorem proved by Baillon and Br\'{e}zis in \cite{BaillonBerzis1976} shows that for any $y\in H$ the orbit $\{S(t)y\}$ is mean convergent to a point in the set of zeros of $A$, i.e., to a common fixed point of the semigroup $\mathcal{S}$. Using the main theorem of the previous section, we extend this result to Hadamard manifolds.
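In the flat case $H=\mathbb{R}^2$, the Baillon--Br\'ezis phenomenon can be checked by direct computation. The Python sketch below is illustrative only (the operator and the time horizon are ad hoc choices): we take $A=J$, the $90^\circ$-rotation generator, which is a maximal monotone linear operator with $A^{-1}0=\{0\}$ since $\langle Jz,z\rangle=0$ for all $z$. The semigroup solving $-x'(t)=Jx(t)$ consists of rotations, so the orbit circles forever, while its Cesaro means converge to the zero of $A$.

```python
import math

# A = J, the 90-degree rotation generator, is maximal monotone on R^2:
# <J z, z> = 0 >= 0 for all z, and A^{-1}0 = {0}.  The Cauchy problem
# -x'(t) = J x(t) is solved by S(t) = rotation by -t, a continuous
# semigroup of contractions (here: isometries).
def S(t, p):
    c, s = math.cos(t), math.sin(t)
    return (c * p[0] + s * p[1], -s * p[0] + c * p[1])

x0 = (3.0, 0.0)

def mean_orbit(T, steps=100000):
    """(1/T) * integral_0^T S(t)x0 dt, approximated by a Riemann sum."""
    h = T / steps
    sx = sy = 0.0
    for i in range(steps):
        p = S(i * h, x0)
        sx += p[0]
        sy += p[1]
    return (sx / steps, sy / steps)

# The orbit stays on a circle of radius 3 forever, but its Cesaro mean
# converges to the unique zero of A (the common fixed point of the
# semigroup), namely the origin.
m = mean_orbit(200.0)
assert math.hypot(*m) < 0.05
print("mean of orbit over [0, 200]:", m)
```

Increasing the horizon $T$ shrinks the mean roughly like $1/T$, consistent with the exact value $\frac{1}{T}\int_0^T S(t)x_0\,dt=(3\sin T/T,\,3(\cos T-1)/T)$ for this example.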
A Hadamard manifold is a complete, simply connected $n$-dimensional Riemannian manifold of nonpositive sectional curvature. Hadamard manifolds are locally compact examples of Hadamard spaces \cite{bacak2014convex}. Let $M$ be a Hadamard manifold; for $p\in M$, $T_pM$ and $TM=\displaystyle\bigcup_{p\in M} T_pM$ denote the tangent space at $p$ and the tangent bundle of $M$, respectively. The exponential map $exp_p: T_pM\rightarrow M$ at $p$ is defined by $exp_p(v)=\gamma_v(1)$ for each $v\in T_pM$, where $\gamma_v(\cdot)$ is the geodesic with $\gamma_v(0)=p$ and $\dot{\gamma}_v(0)=v$. A mapping $A: D(A)\subset M\rightarrow 2^{TM}$, which maps each $x\in M$ to a subset of $T_xM$, is called a monotone multivalued vector field if \begin{equation*} \langle x^*,exp_x^{-1}y \rangle +\langle y^*,exp_y^{-1}x \rangle \leq 0, \end{equation*} for all $x,y\in D(A)$ and all $x^*\in Ax$, $y^*\in Ay$. For a complete bibliography and more details on the basic concepts of Hadamard manifolds, the exponential map and monotone vector fields, we refer the reader to \cite{sakai1996riemannian, nemeth1999, lopezmarquez2009}. If the mapping $A: D(A)\subset M\rightarrow 2^{TM}$ is a monotone multivalued vector field, then by \cite[Theorems 5.1 and 5.2]{Iwamiya2003}, \eqref{eveq1} has a global solution, which is unique by \cite[Proposition 4.2]{ahmadikhatib2018}. Setting $S(t)x_0=x(t)$, $\mathcal{S}=\{S(t) : t\geq 0\}$ is a nonexpansive semigroup (see \cite[Lemma 4.1]{ahmadikhatib2018}). It is easy to see that the set of singularities of $A$ (i.e., the set $A^{-1}(0)$) equals the set of common fixed points of $\mathcal{S}$. We now have the following result as an application of Theorem \ref{semimaintheo1}. \begin{theorem} Suppose $A: D(A)\subset M\rightarrow 2^{TM}$ is a monotone vector field with at least one singularity. Then every orbit $\{S(t)x\}$ of the semigroup generated by the solutions of \eqref{eveq1} is almost convergent to a singularity of $A$. \end{theorem} \end{document}
\begin{document} \bibliographystyle{plain} \begin{abstract} We show that separably closed valued fields of finite imperfection degree (either with \(\lambda\)-functions or commuting Hasse derivations) eliminate imaginaries in the geometric language. We then use this classification of interpretable sets to study stably dominated types in those structures. We show that separably closed valued fields of finite imperfection degree are metastable and that the space of stably dominated types is strict pro-definable. \end{abstract} \maketitle \section{Introduction} Since the work of Ax-Kochen and Ershov on the model theory of Henselian valued fields of residue characteristic 0, yielding, e.g., the approximate solution of Artin's conjecture via the famous transfer principle between $\mathbb{Q}_p$ and $\mathbb{F}_p((t))$ for large $p$, valued fields have been a constant source of deep model-theoretic applications. One may mention here Denef's rationality results for certain Poincar\'e series, and the foundations of motivic integration. But it is only with the work of Haskell, Hrushovski and Macpherson \cite{HaHrMa06,HaHrMa08} on the model theory of algebraically closed valued fields (\(\mathrm{ACVF}\)) that the methods of geometric model theory have been made available for the study of valued fields. Then, in their groundbreaking work (\cite{HrLo16}, see also \cite{Duc13}), Hrushovski and Loeser used these newly available tools to give a new account of the topological tameness properties of the Berkovich analytification $\defn{V}^{an}$ of a quasi-projective algebraic variety \(\defn{V}\). The purpose of this paper is to study separably closed valued fields of finite imperfection degree from the point of view of geometric model theory, in the light of these results. \medskip Our first goal is the classification of the interpretable sets, also called \emph{imaginary sorts}.
Recall that a set is interpretable if it is the quotient of a definable set by a definable equivalence relation. A theory is said to eliminate imaginaries if for every interpretable set $\defn{X}$ there is a definable bijection between $\defn{X}$ and a definable set, in other words, if the category of definable sets is closed under quotients. In valued fields, contrary to what happens, e.g., in various contexts of algebraically closed fields with operators, it is not the case, in general, that the interpretable sets can all be understood in terms of the definable subsets of Cartesian powers of the field. There are some truly new quotients that appear and, in the case of \(\mathrm{ACVF}\), these new quotients are all described by Haskell, Hrushovski and Macpherson \cite{HaHrMa06} as certain quotients of linear algebraic groups by definable subgroups. They show that it is enough to add to the valued field sort, for any $n\geq 1$, the set of $\mathcal{O}$-lattices in $K^n$, which is given by $\mathbf{GL}_n(K)/\mathbf{GL}_n(\mathcal{O})$, as well as the union of all $s/\mathfrak{m} s$, where $s$ is an $\mathcal{O}$-lattice in $K^n$. Here \(\mathcal{O}\) denotes the valuation ring of \(K\) and $\mathfrak{m}$ denotes the maximal ideal of $\mathcal{O}$. These sorts are called the \emph{geometric sorts} and we will denote them by $\mathbfcal{G}$. Since then, it has been shown that various theories of valued fields eliminate imaginaries down to the geometric sorts, namely the theory RCVF of real closed valued fields (by work of Mellor \cite{Mel06}), the theories of $p$-adic fields and of ultraproducts of $p$-adic fields (by work of Hrushovski, Martin and the third author \cite{HrMaRi14}), and the theory $\mathrm{VDF}_{\mathcal{EC}}$ of existentially closed valued differential fields $(K,v,\partial)$ of residue characteristic 0 satisfying $v(\partial(x))\geq v(x)$ for all $x$ (by work of the third author \cite{RidVDF}).
Let us now consider separably closed valued fields. Recall that if $K$ is a non-perfect separably closed field of characteristic $p>0$, then $[K:K^p]=p^{e}$ for some $e\in\mathbb{N}^{*}\cup\{\infty\}$, and the elementary theory of $K$ is determined by $e$, the so-called \emph{imperfection degree} or \emph{Ershov invariant} of $K$. Hrushovski's model-theoretic proof of the relative Mordell-Lang Conjecture in positive characteristic \cite{Hru96} illustrates that the theory of separably closed non-perfect fields -- a stable theory -- is a model-theoretic framework which provides a very useful approach to questions from (arithmetic) algebraic geometry in positive characteristic. In the valued context, the completions of the theory \(\mathrm{SCVF}\) of separably closed non-trivially valued fields are determined, as in the case without valuation, by the imperfection degree. One also has an explicit description of the definable sets, by work of Delon and, more recently, work of Hong \cite{Hon-QE}. Our first result is that, as in the case of \(\mathrm{ACVF}\), the geometric sorts are sufficient to describe all the interpretable sets in separably closed valued fields of finite degree of imperfection: \begin{theoremA}[Theorem\,\ref{T:SCVF-EI}] The theory $\mathrm{SCVF}_{p,e}$ of separably closed valued fields of finite degree of imperfection $e$, with the elements of a $p$-basis named by constants, eliminates imaginaries down to the geometric sorts. \end{theoremA} We prove this rather directly, reducing first to semi-algebraic sets using \(\lambda\)-functions and then performing a topological reduction to the corresponding result in $\mathrm{ACVF}_{p,p}$. The crucial ingredient is Hong's density theorem from \cite{HonPhD}, whose proof we include for convenience.
However, it seems more appropriate for practical purposes to work in the (strict) reduct obtained when, instead of working over a $p$-basis, one adds a sequence of $e$ commuting Hasse derivations (see \cite{ZieSCH}) to the language of valued fields. The situation is much trickier in this context. It is no longer the case that a definable subset of a Cartesian power of the field is necessarily in definable bijection with a semi-algebraic set. In order to reduce questions to $\mathrm{ACVF}_{p,p}$, prolongations come into the picture. In Corollary\,\ref{C:SCVH-EI}, we prove that the analogue of Theorem A also holds in \(\mathrm{SCVH}_{p,e}\), the theory of existentially closed valued fields with \(e\) commuting Hasse derivations. Note that Theorem A follows formally from this second result, but we chose to present both proofs, as the shorter topological proof seems interesting and instructive to us. The first proof consists in finding a canonical way of representing a semi-algebraic set definable in a separably closed valued field \(K\) by the \(K\)-points of a set definable in \(\alg{K}\models\mathrm{ACVF}_{p,p}\). The second proof, inspired by the work of the third author on the theory $\mathrm{VDF}_{\mathcal{EC}}$ (\cite{RidVDF}), is much more local. We only achieve a correspondence between \(K\) and \(\alg{K}\) at the level of types. The main technical result which allows us, in the second proof, to reduce questions about definable sets to questions about types is the following density result for definable types, with parameters from the geometric sorts: \begin{theoremB}[Theorem\,\ref{thm:SCVH dense def}] Let $K\models\mathrm{SCVH}_{p,e}$, and let $\defn{X}\subseteq\Sort{VF}^{n}$ be a \(K\)-definable set. Let $A = \mathbfcal{G}(\eq{\mathrm{acl}}(\code{\defn{X}}))$. Then, there exists an $A$-definable type $p$ such that $p(x)\vdash x\in\defn{X}$.
\end{theoremB} Here, $\code{\defn{X}}$ denotes the canonical parameter of $\defn{X}$ in \(\eq{K}\). Recall that a type $p(x)$ over some structure $M$ is said to be definable if for every formula $\varphi(x;y)$ there exists a formula $\theta(y)$ over \(M\), usually denoted by $\mathrm{d}_{p} x\,\varphi(x;y)$, such that for all $m\in M$: \[\varphi(x;m)\in p\text{ if and only if }M\models\theta(m).\] The proof of Theorem B follows the same strategy as in the case of \(\mathrm{VDF}_{\mathcal{EC}}\) mentioned above. The main new technical point is a quantifier elimination result for dense pairs of valued fields satisfying certain conditions. We then show that the pair $(\alg{K},K)$, for $K$ a model of \(\mathrm{SCVH}_{p,e}\), satisfies these conditions. It follows immediately from this density statement that $\mathrm{SCVH}_{p,e}$, considered in the geometric sorts $\mathbfcal{G}$, has weak elimination of imaginaries. That finite sets are coded in $\mathbfcal{G}$ can easily be transferred from the corresponding result in $\mathrm{ACVF}_{p,p}$. This approach also has the added benefit of giving us, as a by-product, the fact that any type over an algebraically closed set has an automorphism-invariant global extension, which is an important technical result for what follows. In order to be able to classify imaginaries in the language of valued fields alone (in the case of finite imperfection degree, or even in the case of infinite degree of imperfection), it seems that one would need new ideas. In these cases, the goal would be to give a classification relative to those imaginaries which are definable in the field \emph{without valuation}. \medskip The second part of this paper is devoted to studying metastability and stably dominated types, first introduced by Haskell, Hrushovski and Macpherson \cite{HaHrMa06,HaHrMa08} to prove elimination of imaginaries in \(\mathrm{ACVF}\) down to the geometric sorts.
A type is said to be stably dominated if its ``generic'' extensions are controlled by pure stable interpretable subsets of the structure. A typical example is the generic type of the valuation ring $\mathbfcal{O}$, which is controlled by the residue map. Haskell, Hrushovski and Macpherson show that in a model of \(\mathrm{ACVF}\), over the value group, there are ``many'' stably dominated types. One says that \(\mathrm{ACVF}\) is \emph{metastable}, and this gives a formal meaning to the idea that a model of \(\mathrm{ACVF}\) is controlled in a very strong sense by its value group and its residue field. Precise definitions can be found in Section\,\ref{S:metastability}. In \cite{HrLo16}, Hrushovski and Loeser use the machinery of geometric model theory in $\mathrm{ACVF}$ to construct a model-theoretic avatar $\widehat{\defn{V}}$ of $\defn{V}^{an}$, the Berkovich analytification of a quasi-projective algebraic variety \(\defn{V}\); the points of $\widehat{\defn{V}}$ are given by the stably dominated types concentrating on $\defn{V}$. Among other things, they show that $\widehat{\defn{V}}$ admits a definable strong deformation retraction onto a $\Sort{\Gamma}$-internal subset $\Sort{\Sigma}$, where $\Sort{\Gamma}$ is the value group. Since the divisible ordered abelian group \(\Sort{\Gamma}\) is the natural model-theoretic framework for piecewise linear geometry, this result implies that, without any smoothness assumption on \(\defn{V}\), $\defn{V}^{an}$ is locally contractible and admits a strong deformation retraction onto a piecewise linear space. Our first result regarding these questions is that stable domination in a model $K$ of $\mathrm{SCVH}_{p,e}$ is characterized, via prolongations, by stable domination in the algebraic closure of $K$ (Proposition\,\ref{P:char st dom}).
From this, we deduce the following result, using a description of definable closure obtained in Proposition~\ref{prop:descr dcl}: \begin{theoremC}[Corollary \ref{C:SCVH-metastable}] The theories $\mathrm{SCVH}_{p,e}$ and $\mathrm{SCVF}_{p,e}$ are metastable over the value group ${\mathbf\Gamma}$. \end{theoremC} Secondly, we establish the analogue of an important technical result from \cite{HrLo16}. Before we may state this result, we need to recall some notions. A \emph{pro-definable} set in $U$ is a set of the form $\defn{D}=\varprojlim_{i\in I}\defn{D}_i$, where $(\defn{D}_i)_{i\in I}$ is a projective system in the category of definable sets and $I$ is a small index set. If all the $\defn{D}_i$ and the transition maps are $C$-definable, $\defn{D}$ is called $C$-pro-definable. A pro-definable function is a (bounded) family of definable functions (equivalently, a function whose graph is pro-definable). The ($C$-)pro-definable sets form a category with respect to ($C$-)pro-definable maps. If the pro-definable set $\defn{D}=\varprojlim_{i\in I}\defn{D}_i$ is isomorphic to a pro-definable set with surjective transition functions, it is called \emph{strict pro-definable}. Equivalently, for every $i\in I$, the set $\pi_i(\defn{D})\subseteq \defn{D}_i$ is definable (and not just type-definable). Here, $\pi_i$ denotes the projection map onto the $i$th coordinate. Dually, one defines (strict) ind-definable sets. We refer to \cite[Section 2.2]{HrLo16} for the basic properties of these notions. Let $\defn{X}$ be a $C$-definable set in \(\mathrm{ACVF}\). In \cite{HrLo16} it is shown that there is a strict $C$-pro-definable set $\widehat{\defn{X}}$ such that for any $A\supseteq C$, the set $\widehat{\defn{X}}(A)$ is in canonical bijection with the set of $A$-definable global stably dominated types $p(x)$ such that $p(x)\vdash x\in\defn{X}$.
If this is the case in a theory $T$, we say (rather loosely) that the set of stably dominated types in $T$ is strict pro-definable. From the proof in \cite{HrLo16}, we extract an abstract condition on a metastable NIP theory $T$ which implies that the set of stably dominated types is strict pro-definable in $T$. This yields the following: \begin{theoremD}[Corollaries\,\ref{C:SCVF-strict-pro} and \ref{C:VDF-strict-pro}] The set of stably dominated types is strict pro-definable in $\mathrm{SCVH}_{p,e}$ as well as in $\mathrm{VDF}_{\mathcal{EC}}$. \end{theoremD} The paper is organized as follows. In Section\,\ref{S:prelim} we gather some preliminaries about valued fields, the model theory of algebraically closed valued fields, separably closed fields as well as separably closed valued fields. We then present, in Section \ref{S:density}, the density theorem for semi-algebraic sets, and we prove Theorem A, namely that \(\mathrm{SCVF}_{p,e}\) eliminates imaginaries down to the geometric sorts. Section \ref{S:pairs} is devoted to the proof of Theorem B and of the fact that the geometric sorts classify the imaginaries even when working with Hasse derivations. This section mostly consists in the proof of a quantifier elimination result, of independent interest, for dense pairs of valued fields. Subsequently, we show that various notions in $\mathrm{SCVH}_{p,e}$ reduce in the nicest possible way to $\mathrm{ACVF}_{p,p}$. In the short Section \ref{S:dcl-acl}, we establish this for the definable and the algebraic closure; Section \ref{S:metastability} gives a complete description of the stable stably embedded sets in $\mathrm{SCVH}_{p,e}$ (they are more or less the same as in $\mathrm{ACVF}_{p,p}$) as well as of the stably dominated types, in terms of stable domination of the prolongation in $\mathrm{ACVF}_{p,p}$. Putting all this together, we obtain the metastability of $\mathrm{SCVH}_{p,e}$ (Theorem C).
In a final section we present an abstract framework, for metastable NIP theories, which guarantees the strict pro-definability of the set of stably dominated types, and we show that both $\mathrm{SCVH}_{p,e}$ and $\mathrm{VDF}_{\mathcal{EC}}$ fall under this framework, thus establishing Theorem D. \subsection{Acknowledgments} This collaboration began during the Spring 2014 MSRI program {\em Model Theory, Arithmetic Geometry and Number Theory}. The authors would like to thank MSRI for its hospitality and stimulating research environment. We are grateful to the referee for a thorough reading of our paper, and for many useful suggestions. \section{Preliminaries}\label{S:prelim} Let us fix some notation. We will normally denote (ind-, pro-)definable sets in a given theory by bold letters (such as \(\defn{X}\)). As is customary, we will often identify such objects with their set of realizations in a universal domain (a fixed sufficiently saturated model), which we keep unspecified. In such cases, by a \emph{set} (of parameters) we will mean a small subset of this universal domain. Whenever $\defn{X}$ is a definable set (or a union of definable sets) and $A$ is a set of parameters, $\defn{X}(A)$ will denote $\defn{X}\cap A$. This notation often carries an implicit definable closure, but we avoid that here: more often than not, several languages (and hence several definable closures) will be in play. Similarly, if $\mathbfcal{S}$ is a set of definable sets, we will write $\mathbfcal{S}(A)$ for $\bigcup_{\Sort{S}\in\mathbfcal{S}}\Sort{S}(A)$. When the language is clear, we will write \(A\leq K\) when \(A\) is a substructure of \(K\) (i.e., closed under function symbols). We will be working with a fixed prime \(p\), and will write \(\ppow[n]{K}\) for the set \(\Set{x^{p^n}}{x\in{}K}\) of \(p^n\)-powers in a field \(K\).
Likewise, if \(\defn{X}\) is a definable field, \(\ppow[n]{\defn{X}}\) is the definable set of \(p^n\)-powers. We write \(\ppowi{K}\) (resp.~\(\ppowi{\defn{X}}\)) for the intersection of \(\ppow[n]{K}\) (resp.~\(\ppow[n]{\defn{X}}\)) over all \(n\). \subsection{Imaginaries} Recall that, in model theory, an imaginary point is a class of a definable equivalence relation. To every theory $T$, we can associate a theory $\eq{T}$ obtained by adding all the imaginary points. Every model \(M\) of \(T\) expands uniquely to a model \(\eq{M}\) of $\eq{T}$. We write $\eq{\mathrm{dcl}}$ and $\eq{\mathrm{acl}}$ to denote the definable and algebraic closure in $\eq{M}$. Let \(\defn{X}\) be a set definable with parameters in a sufficiently saturated and homogeneous structure $M$. We denote by $\code{\defn{X}}\subseteq \eq{M}$ the set of points fixed by the group \(G_{\code{\defn{X}}}\) of automorphisms stabilizing $\defn{X}$ globally. We say that \(\defn{X}\) has a \emph{canonical parameter} (or \emph{is coded}) if it can be defined over $\code{\defn{X}}\cap M$. Likewise, \(\defn{X}\) has an \emph{almost canonical parameter} (or is \emph{weakly coded}) if it can be defined over \(\eq{\mathrm{acl}}(\code{\defn{X}})\cap M\), i.e., over some tuple with a finite orbit under the action of the same group. Note that if this finite orbit (viewed as a definable set) itself has a canonical parameter, then it is a canonical parameter for \(\defn{X}\) as well. A theory admits elimination of imaginaries precisely if every definable set has a canonical parameter.
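A standard illustration of these notions (in the theory of algebraically closed fields, say; it is included here only as an example and is not taken from the text): unordered pairs are imaginaries that are coded in the home sort.

```latex
% Consider the definable equivalence relation
(x_1,x_2)\sim(y_1,y_2) \iff \{x_1,x_2\}=\{y_1,y_2\}.
% The class of (a,b) is a typical imaginary point. In a field, it is
% interdefinable with the tuple of elementary symmetric functions
c=(a+b,\;ab),
% so c is a canonical parameter for the set {a,b}. By contrast, the single
% element a is only an almost canonical parameter: its orbit under the group
% of automorphisms stabilizing {a,b} has at most two elements.
```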
\subsection{Valued fields} \subsubsection{Notation and conventions} Whenever $(K,\operatorname{val})$ is a valued field, we will denote by ${\mathbf\Gamma}(K):=\operatorname{val}(K^\star)$ its value group, by $\mathbfcal{O}(K)$ its valuation ring, by $\boldsymbol{\mathfrak{m}}(K)$ its maximal ideal, by ${\mathbf k}(K) := \mathbfcal{O}(K)/\boldsymbol{\mathfrak{m}}(K)$ its residue field and by $\operatorname{res}_K:\mathbfcal{O}(K)\rightarrow {\mathbf k}(K)$ the canonical projection. When the field $K$ is clear from context, we will write $\Gamma$, $\mathcal{O}$, $\mathfrak{m}$, $k$ and $\operatorname{res}$. We usually identify \(\Gamma\) and \(\Gamma_\infty := \operatorname{val}(K) = \Gamma\cup\{\infty\}\). We will also denote by \(\Sort{VF}(K)\) the points of the valued field itself (this will make more sense once we consider multi-sorted structures). Let us now briefly recall the \emph{geometric sorts} from \cite{HaHrMa06}. For \(n\geq 1\), let \(\mathbf{S}_n(K)\) be the set of \emph{\(\mathcal{O}\)-lattices} in \(K^n\), i.e., \(\mathbf{S}_n(K)\simeq\mathbf{GL}_n(K)/\mathbf{GL}_n(\mathcal{O})\). Note that, for any \(s\in \mathbf{S}_n(K)\), the quotient \(s/\mathfrak{m} s\) is an \(n\)-dimensional \(k\)-vector space. One puts \(\mathbf{T}_n(K):=\dot{\bigcup}_{s\in \mathbf{S}_n(K)}s/\mathfrak{m} s\). Note that \(\mathbf{T}_n(K)\) can similarly be identified with \((\mathbf{GL}_n/\Sort{G})(K)\), where \(\Sort{G}\) is the inverse image of the stabilizer of a non-zero vector under the residue map on \(\mathbf{GL}_n(\mathbfcal{O})\). Hence it is indeed an imaginary sort. The map associating to an element of \(\mathbf{T}_n\) the corresponding lattice in \(\mathbf{S}_n\) is denoted by \(\tau_n\).
When \(n=1\), the map \(\operatorname{val}:\mathbf{GL}_1(K)\rightarrow\Gamma\) is a surjective homomorphism with kernel \(\mathbf{GL}_1(\mathcal{O})\), and therefore \(\mathbf{S}_1\cong{\mathbf\Gamma}\) canonically. Similarly, \({\mathbf k}^\star=(\mathbfcal{O}/\boldsymbol{\mathfrak{m}})^\star\subseteq \mathbf{T}_1\) canonically. In fact, $\mathbf{T}_1(K)$ is canonically isomorphic, as above, to the quotient of multiplicative groups $K^\star/(1+\mathfrak{m})$, which is often denoted $\mathrm{RV}(K)$. \subsubsection{The valuation topology} \begin{comment} By a \Emph{definable topology} on a definable set \(\defn{X}\) we mean an ind-definable set \(\defn{T}\) of subsets of \(\defn{X}\) that satisfies the definable analogue of the axioms of a topology. More precisely, it is an ind-definable set \(\defn{T}\), together with an ind-definable subset \(\defn{U}\subseteq\defn{X}\times\defn{T}\), such that (setting \(\defn{U}_a=\Set{x\in\defn{X}}{(x,a)\in\defn{U}}\)): each \(\defn{U}_a\) is definable (rather than ind-definable), for all \(a,b\in\defn{T}\), \(\defn{U}_a\cap\defn{U}_b=\defn{U}_c\) for some \(c\in\defn{T}\), and for any definable \(\defn{T}_0\subseteq\defn{T}\) (possibly with parameters), \(\Set{x}{\exists{}t\in\defn{T}_0\,(x\in\defn{U}_t)}=\defn{U}_a\) for some \(a\in\defn{T}\). As usual, a definable subset of the form \(\defn{U}_a\) will be called open. We note that if \(M\) is a model, \(\defn{T}(M)\) is not necessarily a topology on \(\defn{X}(M)\), but it is the collection of definable open subsets in the topology generated by \(\defn{T}(M)\). Other notions, such as a definable basis for \(\defn{T}\), or continuous definable maps, are defined similarly. \end{comment} Let \(\widetilde{T}\) be a theory of valued fields, with valued field sort \(\Sort{VF}\).
We consider \(\Sort{VF}^n\) with the definable \Def{valuation topology}, where a definable subset \(\defn{X}\) is open if each of its points belongs to a product of open balls contained in \(\defn{X}\). Here, \(\defn{X}\), the balls and the points are over parameters, but if \(\defn{X}\) is open and over \(K\), and \(a\in\defn{X}(K)\) for some valued field \(K\), then the ball can also be chosen over \(K\). Continuous definable functions and other topological notions are defined similarly. Given a valued field \(K\), the collection of the sets \(\defn{X}(K)\), where \(\defn{X}\) ranges over \(K\)-definable open subsets of \(\Sort{VF}^n\), forms a basis for the usual valuative topology on \(K^n\). Note that an inclusion of valued fields $K\subseteq L$ is continuous if and only if ${\mathbf\Gamma}(K)$ is a cofinal subset of ${\mathbf\Gamma}(L)$. \begin{lemma}\label{lem:dense geom} Assume that \(K\) is dense in \(L\) in the valuation topology. Then we have \(\mathbf{S}_n(K)=\mathbf{S}_n(L)\) and \(\mathbf{T}_n(K)=\mathbf{T}_n(L)\) for every \(n\geq1\). In particular, the extension \(L/K\) is immediate. \end{lemma} \begin{proof} Since \(K\) is dense in \(L\), the set \(K^N\) is dense in \(L^N\) for every \(N\), and so \(\mathbf{GL}_n(K)\) is dense in \(\mathbf{GL}_n(L)\), as \(\mathbf{GL}_n(L)\) is an open subset of \(L^{n^2}\). For any \(M\in\mathbf{GL}_n(L)\), the set \(M\cdot\mathbf{GL}_n(\mathbfcal{O}(L))\) is open in \(\mathbf{GL}_n(L)\), so it contains some \(M_0\in\mathbf{GL}_n(K)\), showing that \(\mathbf{S}_n(K)=\mathbf{GL}_n(K)/\mathbf{GL}_n(\mathbfcal{O}(K))=\mathbf{GL}_n(L)/\mathbf{GL}_n(\mathbfcal{O}(L))=\mathbf{S}_n(L)\). Now let \(t\in \mathbf{T}_n(L)\).
By the previous paragraph, we know that \(\tau_n(t)=s\in \mathbf{S}_n(K)\), so applying a \(K\)-linear change of variables we may assume that \(s=\mathbfcal{O}^n\) and \(t\in {\mathbf k}(L)^n=\mathbfcal{O}(L)^n/\boldsymbol{\mathfrak{m}}\mathbfcal{O}(L)^n\). But then \(\pi^{-1}(t)\), where \(\pi\colon\mathbfcal{O}(L)^n\to{\mathbf k}(L)^n\) denotes the reduction map, is an open subset of \(\mathbfcal{O}(L)^n\), so by the density assumption there is a tuple \(\overline{a}\in\mathbfcal{O}(K)^n\) such that \(\pi(\overline{a})=t\). It follows that \(t\in{}\mathbf{T}_n(K)\). \end{proof} Note that there are immediate extensions \(L/K\) such that \(K\) is not dense in \(L\), e.g., the Puiseux series field \(K=\bigcup_{n\in\mathbb{N}}\mathbb{C}((t^{1/n}))\) inside the Hahn series field \(L=\mathbb{C}((t^{\mathbb{Q}}))\). Our goal now is to show that every smooth subvariety of affine space (viewed as a definable subset of \(\Sort{VF}^n\)) is a topological manifold, i.e., definably locally homeomorphic to an open subset of \(\Sort{VF}^m\) for some $m$. If \(\defn{X}\) is a variety over a field \(K_0\), and \(a\in\defn{X}(K_0)\), by a \Def{local coordinate system} around \(a\) we mean an \'etale map from a Zariski neighborhood of \(a\) to \(\Sort{VF}^d\), all over \(K_0\), taking \(a\) to \(0\) (here \(d\) is the dimension of \(\defn{X}\) at \(a\)). The following observation is well known, see, e.g.,~\cite[Proposition~4.9]{MilneLEC} or~\cite[Proposition~I.3.24]{MilneEt}. \begin{fact}\label{F:coords} If \(a\) is a smooth point of a variety \(\defn{X}\), it admits a system of local coordinates. \end{fact} The following statement is essentially the implicit function theorem in the valuative setting. \begin{proposition}\label{P:manifold} Let \(\defn{X}\) be a smooth subvariety of affine space, viewed as a definable subset of \(\Sort{VF}^n\) in a theory \(\widetilde{T}\) of Henselian valued fields.
Then the induced definable topology on \(\defn{X}\) is the unique topology for which every local coordinate system around every point \(a\) of \(\defn{X}\) is a homeomorphism in a neighborhood of \(a\). In particular, \(\defn{X}\) admits a unique definable topology making \(\defn{X}(K)\) locally homeomorphic to an open ball in \(K^d\) around each point. \end{proposition} \begin{proof} Uniqueness is clear by Fact~\ref{F:coords}. To show that every local coordinate system is a local homeomorphism, we first note that the problem is local for the Zariski topology on \(\defn{X}\), and by~\cite[Prop.~I.3.24]{MilneEt} we may therefore assume that \(\defn{X}\) is the zero set of a polynomial map \(P(y_1,\dots,y_n)=(f_{d+1}(\overline{y}),\dots,f_n(\overline{y})):\Sort{VF}^n\rightarrow\Sort{VF}^{n-d}\), where the tangent space of \(\defn{X}\) at \(a\) is given as the kernel of \(dP(a)\), and has dimension \(d\). Also, we are given a coordinate system \(\bar{F}=(\bar{f}_1,\dots,\bar{f}_d):\defn{X}\rightarrow\Sort{VF}^d\), which we may assume to be globally \'etale. Let \(F=(f_1,\dots,f_d)\) be any lift of \(\bar{F}\) to a polynomial function on \(\Sort{VF}^n\). Then \(dF\) restricts to a bijection from the tangent space of \(\defn{X}\) at \(a\) to \(\Sort{VF}^d\). It follows that the Jacobian matrix \((\partial{}f_i/\partial{}y_j)\) of the combined map \((F,P)\) is invertible at \(a\). By rescaling, we may assume that both \(a\) and the coefficients of \(F\) and \(P\) lie in \(\mathbfcal{O}\). Replacing \(\defn{X}\) by the graph of \(F\), we are in the situation of Hensel's Lemma (as in, e.g., \cite[Thm.~9.14]{fvk} or~\cite[Thm.~7.4]{PrZi}).
Thus, there is a value \(r\in{\mathbf\Gamma}(K_0)\) (namely, the valuation of the Jacobian determinant at \(a\)), such that for any \(\overline{x}\in\Sort{VF}^d\) with \(\operatorname{val}(\overline{x})>2r\), there is a unique \(\overline{y}\in\defn{X}\) with \(F(\overline{y})=\overline{x}\) and \(\operatorname{val}(\overline{y}-a)\ge\operatorname{val}(\overline{x})-r\). This inequality also gives the continuity. \end{proof} Note that this shows, in particular, that there is a valuation topology on any smooth affine variety, independent of an embedding into affine space, and therefore that the same definition determines a topology on any (not necessarily affine) smooth variety. \subsection{Model theory of algebraically closed valued fields} Recall the following languages for valued fields: \begin{itemize} \item \(\mathcal{L}_{\operatorname{div}}=\mathcal{L}_{\mathrm{ring}}\cup\{\operatorname{div}\}\) is the language with one sort \(\Sort{VF}\), where \(x\,\operatorname{div}\, y\Leftrightarrow\operatorname{val}(x)\leq\operatorname{val}(y)\). \item \(\mathcal{L}_{\Gamma}\) is the two-sorted language with sorts \(\Sort{VF}\) and \({\mathbf\Gamma}\), given by \(\mathcal{L}_{\mathrm{ring}}\) on \(\Sort{VF}\), \(\mathcal{L}_{\mathrm{oag}}=\{0,+,-,<,\infty\}\) on \({\mathbf\Gamma}\) and \(\operatorname{val}:\Sort{VF}\rightarrow{\mathbf\Gamma}\).
\item \(\mathcal{L}_{\Gamma k}\) is the three-sorted language with sorts \(\Sort{VF}\), \({\mathbf k}\) and \({\mathbf\Gamma}\), given by \(\mathcal{L}_{\mathrm{ring}}\) on \(\Sort{VF}\), (another copy of) \(\mathcal{L}_{\mathrm{ring}}\) on \({\mathbf k}\), \(\mathcal{L}_{\mathrm{oag}}\) on \({\mathbf\Gamma}\) and the functions \(\operatorname{val}:\Sort{VF}\rightarrow{\mathbf\Gamma}\) and \(\operatorname{Res}:\Sort{VF}^2\rightarrow {\mathbf k}\) between the sorts. Here, \[ \operatorname{Res}(x,y):=\begin{cases} \operatorname{res}(\frac{x}{y}) & \text{if } \infty\neq\operatorname{val}(y)\leq\operatorname{val}(x);\\ 0 & \text{otherwise.} \end{cases} \] \item \(\mathcal{L}_{\mathrm{RV}}\) is the $\mathrm{RV}$-language (also called the language with leading terms), with two sorts $\Sort{VF}$ and $\mathrm{RV}$, where the latter sort is interpreted as $\Sort{VF}^\star/(1+\boldsymbol{\mathfrak{m}})$. We add an element $0$ to $\mathrm{RV}$ for the image of $0$. The language consists of the ring language on $\Sort{VF}$, a map $\operatorname{rv} : \Sort{VF}\to\mathrm{RV}$ and the language $\mathcal{L}_{\operatorname{div}}$ on $\mathrm{RV}$. The symbol $\cdot$ interprets the (multiplicative) group structure on $\mathrm{RV}$ and the symbol $+$ the trace of the addition, when it is well defined. To be precise, if $\operatorname{val}(x) < \operatorname{val}(y)$, then $\operatorname{rv} (x) + \operatorname{rv} (y) = \operatorname{rv} (y) + \operatorname{rv} (x) = \operatorname{rv} (x)$; if $\operatorname{val}(x) = \operatorname{val}(y) = \operatorname{val}(x+y)$, then $\operatorname{rv} (x) + \operatorname{rv} (y) = \operatorname{rv} (x + y)$; and otherwise $\operatorname{rv} (x) + \operatorname{rv} (y) = 0$.
Finally, $\operatorname{rv} (x)\operatorname{div}\operatorname{rv} (y)$ is interpreted as $\operatorname{val}(x)\leq\operatorname{val}(y)$. Note that we will write $\sum_i x_i$ where $x_i \in\mathrm{RV}$. This notation is slightly abusive, as $+$ as defined above is not associative. What we mean by $\sum_i x_i$ is in fact $\sum_{i\in I_0} x_i$, where $I_0$ is the set of indices $i$ such that $x_i$ is of minimal valuation. \item \(\mathcal{L}_{\mathcal{G}}\) is the language in the geometric sorts (or geometric language) from \cite[Section 3.1]{HaHrMa06}, with set of sorts \(\mathbfcal{G} := \{\Sort{VF},{\mathbf k},{\mathbf\Gamma}\}\cup \Set{\mathbf{S}_n}{n\geq1}\cup\Set{\mathbf{T}_n}{n\geq1}\). It is an extension of \(\mathcal{L}_{\Gamma k}\). We use the notation from \cite{HaHrMa06}, although we write the value group additively and not multiplicatively. \end{itemize} \begin{fact}\label{F:ACVF-QE} The theory \(\mathrm{ACVF}\) of algebraically closed non-trivially valued fields eliminates quantifiers in each of the languages \(\mathcal{L}_{\operatorname{div}}\), \(\mathcal{L}_{\Gamma}\), \(\mathcal{L}_{\Gamma k}\), \(\mathcal{L}_{\mathrm{RV}}\) and \(\mathcal{L}_{\mathcal{G}}\). \end{fact} For the first three languages, we refer to \cite[Theorem 2.1.1]{HaHrMa06}. (In the case of \(\mathcal{L}_{\operatorname{div}}\), the result is more or less due to Robinson.) The quantifier elimination result in the language \(\mathcal{L}_{\mathcal{G}}\) is \cite[Theorem 3.1.2]{HaHrMa06}. Quantifier elimination in \(\mathcal{L}_{\mathrm{RV}}\) for \(\mathrm{ACVF}\) is folklore, but we are not aware of a published proof. Let us give a sketch of the proof.
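Before the proof, the \(\mathrm{RV}\) arithmetic just defined can be checked on a concrete example. The following sketch (an illustration, not part of the text) works in \(\mathbb{Q}\) with the \(p\)-adic valuation, where a nonzero class in \(\mathrm{RV}\) is determined by the valuation together with the leading \(p\)-adic digit:

```python
from fractions import Fraction

P = 5  # residue characteristic; an rv class in Q_p is a pair (val, leading digit)

def val(x):
    """p-adic valuation of a nonzero rational."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % P == 0:
        num //= P; v += 1
    while den % P == 0:
        den //= P; v -= 1
    return v

def rv(x):
    """rv(x) = x mod (1 + m), encoded as (val(x), leading digit); rv(0) = None."""
    x = Fraction(x)
    if x == 0:
        return None
    v = val(x)
    u = x / Fraction(P) ** v                 # unit part: val(u) = 0
    digit = (u.numerator * pow(u.denominator, -1, P)) % P
    return (v, digit)

def rv_add(a, b):
    """Addition on RV, following the three cases in the definition."""
    if a is None: return b
    if b is None: return a
    (va, da), (vb, db) = a, b
    if va < vb: return a                     # val(x) < val(y): rv(x) + rv(y) = rv(x)
    if vb < va: return b
    d = (da + db) % P
    return (va, d) if d != 0 else None       # leading digits cancel: the sum is 0
```

For instance, with \(p=5\), \(\operatorname{rv}(2)+\operatorname{rv}(3)=0\), since \(2+3\) has strictly larger valuation, while \(\operatorname{rv}(2)+\operatorname{rv}(4)=\operatorname{rv}(6)\); whenever \(\operatorname{val}(x+y)=\min(\operatorname{val}(x),\operatorname{val}(y))\), the computed sum agrees with \(\operatorname{rv}(x+y)\).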
\begin{proof}[{Proof of Fact\,\ref{F:ACVF-QE}, \(\mathcal{L}_{\mathrm{RV}}\) case}] We have to show that given two models $M$ and $N$ of $\mathrm{ACVF}$ such that $N$ is $|M|^+$-saturated and given any isomorphism \(f : A \to B\) between $\mathcal{L}_{\mathrm{RV}}$-substructures of $M$ and $N$, respectively, we can extend $f$ to $M$. First, one can check that $f$ can be extended to the closure of $A$ under inverses (both in $\Sort{VF}$ and $\mathrm{RV}$). Let $a\in\mathrm{RV}(A)$ be such that there is no $c\in\Sort{VF}(A)$ with $\operatorname{val}(c) = \operatorname{val}_{\mathrm{RV}}(a)$, where $\operatorname{val}_{\mathrm{RV}}$ is induced by $\operatorname{val}$ on $\mathrm{RV}$. If $\operatorname{val}_{\mathrm{RV}}(a) \in\mathbb{Q}\otimes\operatorname{val}(A)$, let $n>0$ be minimal such that $n\operatorname{val}_{\mathrm{RV}}(a) = \operatorname{val}(e)$ for some $e\in\Sort{VF}(A)$. There exists $c\in\Sort{VF}(M)$ such that $c^n = e$ and $\operatorname{rv} (c) = a$. To show that this holds, it suffices to prove it for $e = 1$, and that can be done easily by applying Hensel's lemma and the Frobenius on the residue field (if the residue characteristic is positive). Similarly, there exists $d\in\Sort{VF}(N)$ such that $d^n = f(e)$ and $\operatorname{rv} (d) = f(a)$. If $\operatorname{val}_{\mathrm{RV}}(a) \not\in\mathbb{Q}\otimes\operatorname{val}(A)$, take any $c$ such that $\operatorname{rv} (c) = a$ and any $d\in\Sort{VF}(N)$ such that $\operatorname{rv} (d) = f(a)$. Then one can extend $f$ by sending $c$ to $d$. Repeating this last step, we may assume that $\operatorname{val}(\Sort{VF}(A)) = \operatorname{val}_{\mathrm{RV}}(\mathrm{RV}(A))$ (and that $\mathrm{RV}(A)$ and $\Sort{VF}(A)$ are closed under inverses).
Given \(r\in\mathrm{RV}(A)\), let \(a\in\Sort{VF}(A)\) be an element with \(\operatorname{val}(a)=\operatorname{val}_{\mathrm{RV}}(r)\). Then \(\frac{r}{\operatorname{rv} (a)}\) is a well-defined element \(c\) of \({\mathbf k}(A)\), so \(r=c\operatorname{rv} (a)\), and \(f(r)\) is uniquely determined. Hence such an $f$ is completely determined by its reduct to \(\mathcal{L}_{\Gamma k}\) (actually, the sort ${\mathbf\Gamma}$ is useless here, but the two-sorted language with $\Sort{VF}$ and $\mathbf{k}$ is not usually considered), and so $f$ extends to $M$ by quantifier elimination in $\mathcal{L}_{\Gamma k}$. \end{proof} We note that in the multi-sorted languages \(\mathcal{L}_{\Gamma}\), \(\mathcal{L}_{\Gamma k}\), \(\mathcal{L}_{\mathrm{RV}}\) and \(\mathcal{L}_{\mathcal{G}}\), all added sorts are interpretable in \(\mathcal{L}_{\operatorname{div}}\), and the structure is just the one induced by the corresponding interpretations in \(\mathcal{L}_{\operatorname{div}}\). The completions of \(\mathrm{ACVF}\) are given by specifying the pair of characteristics \((\mathrm{char}(\Sort{VF}),\mathrm{char}(\mathbf{k}))\in\Set{(0,0),(0,p),(p,p)}{p\,\,\text{a prime number}}\). The completion corresponding to \((p,q)\) is denoted by \(\mathrm{ACVF}_{p,q}\). \medskip The following is the main result of \cite{HaHrMa06}. \begin{fact}[{\cite[Theorem 3.4.10]{HaHrMa06}}]\label{F:ACVF-EI} The theory \(\mathrm{ACVF}\) eliminates imaginaries in \(\mathcal{L}_{\mathcal{G}}\). \end{fact} \subsection{Separably closed fields} \subsubsection{Notation and conventions} Let $K$ be a field of characteristic $p > 0$. Let $b = (b_j)_{j\in J}$ be a (possibly infinite) tuple from $K$ and let $I : J \to p=\{0,\ldots,p-1\}$ be a function with finite support (that is, a function with value $0$ outside a finite set). We denote by $b^I$ the monomial $\prod_{j\in J} b_j^{I(j)}$.
The tuple $b$ is said to be a \emph{$p$-basis} of $K$ if the monomials $b^I$ form a linear basis of $K$ as a vector space over \(\ppow{K}\). Then every $x\in K$ can be uniquely written as $x=\sum_{I}x_I^p b^{I}$. The $x_I$ are called the \emph{\(p\)-components of \(x\)} (with respect to $b$) and the functions $f_I : K\to K$ sending $x$ to $x_I$ are called the \emph{$p$-coordinate functions} or \emph{$\lambda$-functions}. Any field of characteristic $p$ admits a $p$-basis, and all $p$-bases of $K$ have the same cardinality $e$, usually called the \emph{imperfection degree} or the \emph{Ershov invariant} of $K$. Obviously, we have $[K:\ppow{K}] = p^e$ when \(e\) is finite. A field $K$ is said to be \emph{separably closed} if it has no proper separable algebraic extension. For a prime $p$ and $e < \infty$, let $\mathcal{L}^{\lambda}_{p,e}:=\mathcal{L}_{\mathrm{ring}}\cup\{b_1,\ldots,b_e\}\cup\Set{f_I}{I\in p^{e}}$ be the language with one sort \(\Sort{K}\) and let \(\mathrm{SCF}_{p,e}\) be the theory of characteristic \(p\) separably closed fields with imperfection degree \(e\), where the $b_j$ form a $p$-basis with corresponding \(p\)-coordinate functions given by the \(f_I\). We will denote by \(\lambda^n:\Sort{K}\to\Sort{K}^{p^{ne}}\) the definable function whose coordinates are the $f_{I_n}\circ\cdots\circ f_{I_1}$ for all tuples $(I_n,\ldots,I_1)$. Note that $\lambda^n(x)^{p^n}$ is the tuple of \(\ppow[n]{\Sort{K}}\)-coordinates of $x$ in the basis $b^{(I_n,\ldots,I_1)} = \prod_j (b^{I_j})^{p^{j-1}}$. \begin{fact} The theory \(\mathrm{SCF}_{p,e}\) eliminates quantifiers and imaginaries and is complete. In case $e>0$, it is stable but not superstable. \end{fact} The quantifier elimination result in that particular language is due to Delon \cite{DelSCF,DelSCF_MLBook}. The completeness result goes back to work of Ershov \cite{ErsSCF}. Stability and non-superstability are proved in \cite{WooSCF}.
Elimination of imaginaries is due to Delon, and a proof can be found in \cite[Proposition\,3.9]{DelSCF_MLBook}. The (type-definable) subfield \(\ppowi{\Sort{K}}\) is the largest perfect subfield of \(\Sort{K}\), and it is algebraically closed. The induced structure on \(\ppowi{\Sort{K}}\) (by the ambient structure) is that of a pure algebraically closed field. \subsection{Hasse derivations} The $\lambda$-functions already give a ``field with operators'' flavor to separably closed fields. Separably closed fields can also be naturally equipped with more classical operators: Hasse derivations. As explained by Hoffmann \cite{HofSCH}, there are two natural ways in which to endow a separably closed field with Hasse derivations. Here, we follow Ziegler \cite{ZieSCH}. A \Def{Hasse derivation} on a ring $R$ is a sequence $D = (D_n)_{n\in\mathbb{N}}$ of additive functions $D_n:R\to R$ such that for all $x$, $y\in R$, $D_0(x) = x$ and $D_n(x y) = \sum_{k+l = n} D_k(x)D_l(y)$. We say that $D$ is \Def{iterative} if $D_m\circ D_n = \binom{m+n}{n}D_{m+n}$ also holds. We will assume all Hasse derivations to be iterative. Let $e\in\mathbb{N}$. Let $K$ be a field of characteristic $p > 0$ and $(D_1,\ldots,D_{e})$ be a tuple of commuting Hasse derivations on $K$ (i.e., $D_{i,n}\circ D_{j,m} = D_{j,m}\circ D_{i,n}$ for all $i,j \leq e$ and $n,m\in\mathbb{N}$). It is easy to check that \(\ppowi{K}\) is contained in the field \[ C_\infty:=\Set{x\in K}{D_{i,n}(x)=0\text{ for all \(i\leq e\) and \(n>0\)}} \] of \emph{(absolute) constants}. The field $K$ is said to be \Def{strict} if \[\ppow{K}=C_1:=\Set{x\in{}K}{D_{i,1}(x)=0\text{ for all \(i\leq{}e\)}}.\] (We then have $\ppowi{K}=C_\infty$.)
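These axioms can be checked mechanically in the basic example of the divided-power derivatives on \(\mathbb{F}_p[t]\), given by \(D_n(t^m)=\binom{m}{n}t^{m-n}\) (an illustrative sketch, not taken from the text; polynomials are represented as dictionaries mapping exponents to nonzero coefficients mod \(p\)):

```python
from math import comb

P = 5  # the characteristic; polynomials over F_p are dicts {exponent: coefficient}

def D(n, f):
    """n-th Hasse derivative on F_p[t]: D_n(t^m) = C(m, n) t^(m - n)."""
    return {m - n: (comb(m, n) * a) % P
            for m, a in f.items() if m >= n and comb(m, n) % P != 0}

def mul(f, g):
    """Product of two polynomials, reduced mod P with zero terms dropped."""
    h = {}
    for m, a in f.items():
        for k, b in g.items():
            h[m + k] = (h.get(m + k, 0) + a * b) % P
    return {m: a for m, a in h.items() if a}

def add(f, g):
    h = dict(f)
    for m, a in g.items():
        h[m] = (h.get(m, 0) + a) % P
    return {m: a for m, a in h.items() if a}

def scal(c, f):
    return {m: (c * a) % P for m, a in f.items() if (c * a) % P}

f = {7: 2, 3: 1, 0: 4}   # 2t^7 + t^3 + 4
g = {5: 3, 1: 1}         # 3t^5 + t

# Leibniz rule: D_n(fg) = sum over k + l = n of D_k(f) D_l(g)
n = 3
rhs = {}
for k in range(n + 1):
    rhs = add(rhs, mul(D(k, f), D(n - k, g)))
assert D(n, mul(f, g)) == rhs

# Iterativity: D_m o D_n = C(m + n, n) D_{m + n}
m, n = 2, 3
assert D(m, D(n, f)) == scal(comb(m + n, n), D(m + n, f))
```

The Leibniz rule here follows from the Vandermonde identity \(\sum_{i+j=n}\binom{m}{i}\binom{k}{j}=\binom{m+k}{n}\), and iterativity from \(\binom{s}{n}\binom{s-n}{m}=\binom{m+n}{n}\binom{s}{m+n}\); both survive reduction mod \(p\).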
Let \[\mathcal{L}_{p,e}^{D}:=\mathcal{L}_{\mathrm{ring}}\cup\Set{D_{i,n}}{0<i\leq e\text{ and }n\in\mathbb{N}}\] and let \(\mathrm{SCH}_{p,e}\) be the theory of characteristic \(p\) separably closed strict fields of imperfection degree $e$ with $e$ commuting Hasse derivations $D_i = (D_{i,n})_{n\in\mathbb{N}}$. For $N = (n_1,\ldots,n_e)\in\mathbb{N}^e$, we will denote $D_{N}(x) = D_{1,n_1}\circ\ldots\circ D_{e,n_{e}}(x)$ and $D_\omega(x) = (D_{N}(x))_{N\in\mathbb{N}^e}$. Note that any separably closed field of imperfection degree $e$ (and $p$-basis $b$) can be made into a strict field with $e$ commuting Hasse derivations by setting \[D_{i,n}(b^I) = \binom{I(i)}{n}b_i^{I(i)-n}\prod_{j\neq i}b_j^{I(j)}\] and \[D_{i,n}(x) = \sum_I\lambda^m_I(x)^{p^m}D_{i,n}(b^I)\] for any $m$ such that $n < p^m$. For all $n>0$, we then have $D_{i,n}(b_j) = 1$ if $i=j$ and $n=1$, and $D_{i,n}(b_j) = 0$ otherwise. Such a $p$-basis is said to be \Def{canonical}. Conversely, if $b$ is a canonical $p$-basis, the $D_{i,n}$ can be expressed as above using $b$ and $\lambda$. \begin{fact}[\cite{ZieSCH}] The theory $\mathrm{SCH}_{p,e}$ eliminates quantifiers and imaginaries and is complete. \end{fact} As noted in \cite{ZieSCH}, the quantifier elimination result can be deduced from quantifier elimination in $\mathrm{SCF}_{p,e}$. This remains true in the valued setting, as will be seen below. \subsection{Separably closed valued fields} We now consider a separably closed field $K$ of positive characteristic $p$ and finite imperfection degree $e$ endowed with a non-trivial valuation $\operatorname{val}$.
As is the case for algebraically closed valued fields, there are a number of natural languages in which to consider these structures: \begin{itemize} \item The one sorted language $\mathcal{L}_{\operatorname{div},p,e}^{\lambda}:=\mathcal{L}_{\operatorname{div}}\cup\mathcal{L}_{p,e}^{\lambda}$; \item The two sorted language $\mathcal{L}_{\Gamma,p,e}^{\lambda}:=\mathcal{L}_{\Gamma}\cup\mathcal{L}_{p,e}^{\lambda}$; \item The three sorted language $\mathcal{L}_{\Gamma k,p,e}^{\lambda}:=\mathcal{L}_{\Gamma k}\cup\mathcal{L}_{p,e}^{\lambda}$; \item The leading term language $\mathcal{L}_{\mathrm{RV},p,e}^{\lambda} := \mathcal{L}_{\mathrm{RV}}\cup\mathcal{L}_{p,e}^{\lambda}$; \item The geometric language $\mathcal{L}_{\mathcal{G},p,e}^{\lambda}:=\mathcal{L}_{\mathcal{G}}\cup\mathcal{L}_{p,e}^{\lambda}$. \end{itemize} Let $\mathrm{SCVF}_{p,e}$ denote the theory of separably closed non-trivially valued fields of characteristic $p$ and imperfection degree $e$ (with $\lambda$-functions) in any of these languages. Similarly, we define $\mathcal{L}_{\operatorname{div},p,e}^{D}$, $\mathcal{L}_{\Gamma,p,e}^{D}$, $\mathcal{L}_{\Gamma k,p,e}^{D}$, $\mathcal{L}_{\mathrm{RV},p,e}^{D}$ and $\mathcal{L}_{\mathcal{G},p,e}^{D}$ to be the languages with $e$ Hasse derivations, and we denote by $\mathrm{SCVH}_{p,e}$ the theory (in any of these languages) of separably closed strict non-trivially valued fields of imperfection degree $e$ with $e$ commuting Hasse derivations. We will also denote by $\mathrm{SCVF}$ the theory of separably closed non-trivially valued fields in the language $\mathcal{L}_{\operatorname{div}}$. \begin{proposition}\label{prop:SCVF-dense} Let $K$ be a separably closed non-trivially valued field. For all $n\in\mathbb{N}$, $\ppow[n]{K}$ is dense in $\alg{K}$. Moreover, if $K$ is $\omega$-saturated, then $\ppowi{K}$ is dense in $\alg{K}$.
In particular $\mathbf{S}_n(\ppowi{K}) = \mathbf{S}_n(\alg{K})$ and $\mathbf{T}_n(\ppowi{K}) = \mathbf{T}_n(\alg{K})$ in this case. \end{proposition} \begin{proof} Cf. \cite[Lemma\,5.2.5 and Remark\,5.2.6]{HonPhD}. The statement about $\ppowi{K}$ follows by compactness, and the one about the $\mathbf{S}_n$ and $\mathbf{T}_n$ is a consequence of Lemma\,\ref{lem:dense geom}. \end{proof} In \cite{HonPhD}, Hong proved that $\mathrm{SCVF}_{p,e}$ eliminates quantifiers in the language with two sorts. We now show that his proof generalizes to any of the five languages we have been considering. Let us also mention that Hong later proved, in \cite{Hon-QE}, a stronger quantifier elimination result using ``parametrized $\lambda$-functions'' which also covers the case of separably closed valued fields of infinite Ershov invariant. \begin{proposition}\label{prop:SCVF EQ} $\mathrm{SCVF}_{p,e}$ eliminates quantifiers in the one, two and three sorted languages, in the leading term language, as well as in the geometric language. \end{proposition} In the following pages, let $\mathcal{L}$ denote any of the five languages $\mathcal{L}_{\operatorname{div}}$, $\mathcal{L}_{\Gamma}$, $\mathcal{L}_{\Gamma k}$, $\mathcal{L}_{\mathrm{RV}}$ and $\mathcal{L}_{\mathcal{G}}$, and let $\mathcal{L}^\lambda=\mathcal{L}\cup\mathcal{L}_{p,e}^\lambda$ be the corresponding enrichment with $\lambda$-functions (for some fixed \(p\) and \(e\)). The proof relies on one main technical tool, $\lambda$-resolutions: \begin{lemma}[{\cite[Definition-Proposition 5.2.2]{HonPhD}}]\label{lem:lambda res} Let $M\models\mathrm{SCVF}_{p,e}$, $A\leq M$ be a substructure and $\defn{X}\subseteq \Sort{VF}^m$ be quantifier free $\mathcal{L}^\lambda(A)$-definable. There exists $n\in\mathbb{N}$ such that $\lambda^n(\defn{X}) = \defn{Y}$ for some quantifier free $\mathcal{L}(A)$-definable set $\defn{Y}\subseteq \Sort{VF}^{mp^{en}}$.
Such a set $\defn{Y}$ is called a \emph{$\lambda$-resolution} of $\defn{X}$. \end{lemma} Note that once we know quantifier elimination, \emph{all} definable subsets of $\Sort{VF}^{n}$ will have a $\lambda$-resolution. \begin{proof} This is an immediate consequence of the fact that $\lambda(x+y)$ and $\lambda(x y)$ can be written as polynomials in $\lambda(x)$ and $\lambda(y)$, and that $\lambda^n$ is onto. \end{proof} \begin{lemma}\label{lem:ext transc} Let $M$ and $N$ be two non-trivially valued fields (considered as $\mathcal{L}$-structures), $A\leq M$, $f : A \to N$ be an $\mathcal{L}$-embedding and $a\in\Sort{VF}(M)$ be transcendental over $\Sort{VF}(A)$. Assume that $N$ is $|A|^+$-saturated and $\Sort{VF}(N)$ is dense in $\alg{\Sort{VF}(N)}$. Then $f$ can be extended to an $\mathcal{L}$-embedding $A(a) \to N$. \end{lemma} Before giving the proof, we mention that in \(\mathrm{ACVF}\), any definable function from an imaginary geometric sort to \(\Sort{VF}\) has finite image. This follows, for example, from the fact that there are uncountable models of \(\mathrm{ACVF}\) whose imaginary part is countable (along with quantifier elimination for \(\mathrm{ACVF}\)). See Lemma~\ref{lem:acl field} for a similar argument for \(\mathrm{SCVF}\) or \(\mathrm{SCVH}\). \begin{proof} By compactness, it suffices to prove that for every quantifier free $\mathcal{L}(A)$-definable set $\defn{X}$ such that $a\in \defn{X}(M)$, we have $f_\star \defn{X}\cap \Sort{VF}(N)\neq\emptyset$. Since \(N\) is non-trivially valued, \(\alg{\Sort{VF}(N)}\) is a model of \(\mathrm{ACVF}\), and since $\mathrm{ACVF}$ eliminates quantifiers in $\mathcal{L}$, $f_\star \defn{X}\cap \alg{\Sort{VF}(N)}\neq\emptyset$. If $f_\star \defn{X}\cap \alg{\Sort{VF}(N)}$ has non-empty interior, then we conclude by density of $\Sort{VF}(N)$ in $\alg{\Sort{VF}(N)}$.
If it has empty interior, then $f_\star \defn{X}\cap \alg{\Sort{VF}(N)}$ is finite, and so is $\defn{X}\cap\alg{\Sort{VF}(M)}$. In particular, it follows that $a$ is algebraic (in $\mathrm{ACVF}$) over $A$ and hence (by the remark preceding the proof) over $\Sort{VF}(A)$. This contradicts the fact that $a$ is transcendental over $\Sort{VF}(A)$. \end{proof} In fact, $f(a)$ can be chosen in any dense subfield $K_0$ of $\Sort{VF}(N)$. \begin{corollary}\label{cor:ext separable} Let $M$ and $N$ be two non-trivially valued fields (considered as $\mathcal{L}$-structures), $A\leq M$ and $f : A \to N$ be an $\mathcal{L}$-embedding. Assume that $N$ is separably closed and $|M|^+$-saturated. Let $\Sort{VF}(A)\leq K_0\leq \Sort{VF}(M)$ be such that the extension $\Sort{VF}(A)\leq K_0$ is separable. Then $f$ can be extended to an $\mathcal{L}$-embedding $A(K_0) \to N$. \end{corollary} \begin{proof} First, if $K_0$ is an algebraic extension of $\Sort{VF}(A)$, then, by quantifier elimination in $\mathrm{ACVF}$, $f$ extends to an $\mathcal{L}$-embedding of $A(K_0)$ into $N$ (a priori, the image of $f$ is in $\alg{N}$, but because $N$ is separably closed, it is, in fact, in $N$). So we may always assume that $\Sort{VF}(A)$ is separably closed. By Proposition \ref{prop:SCVF-dense}, $N$ is dense in $\alg{N}$. By compactness and saturation of $N$, it is enough to show the result for $K_0$ a finitely generated separable extension of $\Sort{VF}(A)$. Note that such an extension admits a separating transcendence basis. Using Lemma \ref{lem:ext transc}, we may thus conclude by induction on $\mathrm{trdeg}(K_0/\Sort{VF}(A))$. \end{proof} \begin{proof}[{Proof of Proposition\,\ref{prop:SCVF EQ} (following {\cite[Proof of 5.2.1, p.\,59]{HonPhD}})}] Let $M$, $N$ be two models of $\mathrm{SCVF}_{p,e}$, $A\leq M$ and $f : A\to N$ an $\mathcal{L}^\lambda$-embedding.
We have to show, given $a\in M$ and provided $N$ is $|M|^+$-saturated, that $f$ extends to $A(a)$. By compactness, it suffices to show that for every quantifier free $\mathcal{L}^\lambda(A)$-definable set $\defn{X}$ such that $a\in \defn{X}(M)$, $f_\star \defn{X} \cap N\neq\emptyset$. By Lemma\,\ref{lem:lambda res}, we may assume that $\defn{X}$ is $\mathcal{L}(A)$-definable, at the cost of turning $a$ into a finite tuple of elements of $\Sort{VF}(M)$. Note that, because $A$ is closed under $\lambda$-functions, $\Sort{VF}(M)/\Sort{VF}(A)$ is a separable field extension, so we may extend $f$ to an $\mathcal{L}$-embedding $M\to N$ by Corollary \ref{cor:ext separable}. We then have $f(a) \in f_\star \defn{X} \cap N$, and that concludes the proof. \end{proof} \begin{corollary}\label{cor:SCVF compl} Let $M\models\mathrm{SCVF}_{p,e}$. The theory of $M$ is completely determined by the $\mathcal{L}_{\operatorname{div}}$-isomorphism type of $\mathbb{F}_p[b_1,\ldots,b_e]$. \end{corollary} From quantifier elimination (Proposition \ref{prop:SCVF EQ}), the existence of $\lambda$-resolutions and the fact that $\mathrm{ACVF}_{p,p}$ is $\mathrm{NIP}$, we obtain the following. \begin{corollary}[{Delon, see \cite[Corollary 5.2.13]{HonPhD}}]\label{C:SCVF-NIP} Any completion of $\mathrm{SCVF}_{p,e}$ is $\mathrm{NIP}$. \end{corollary} Similar results hold for $\mathrm{SCVH}_{p,e}$: \begin{proposition}\label{prop:SCVH QE} $\mathrm{SCVH}_{p,e}$ is complete and eliminates quantifiers in the one, two and three sorted languages, in the leading term language, as well as in the geometric language. \end{proposition} The proof uses the following notion: \begin{definition} Let $M\models\mathrm{SCVH}_{p,e}$ and $A\subseteq M$.
A $p$-basis $b$ of $M$ is said to be \emph{very canonical} over $A$ if it is a canonical $p$-basis such that $b_i\in\mathbfcal{O}$ for all $i$ and the $\operatorname{res}(b_i)$ are algebraically independent over $\mathbf{k}(A)$. \end{definition} \begin{lemma} Let $M\models\mathrm{SCVH}_{p,e}$ and $A\subseteq M$. If $M$ is $|A|^+$-saturated, then $M$ admits a $p$-basis which is very canonical over $A$. \end{lemma} \begin{proof} By \cite[Corollary\,4.2]{ZieSCH}, $M$ has a canonical $p$-basis. As $D_{i,n}(\ppowi{\Sort{VF}})=0$ for any $n>0$, if $b$ is a canonical $p$-basis and $a$ is an $e$-tuple from $\ppowi{\Sort{VF}}(M)$, then $b+a$ is also a canonical $p$-basis. Since $\ppowi{\Sort{VF}}(M)$ is dense in $\Sort{VF}(M)$, for any $c\in\Sort{VF}(M)$, $\operatorname{res}(c+\ppowi{\Sort{VF}}(M)) = \mathbf{k}(M)$. Using $|A|^+$-saturation, one can find a tuple $c\in\mathbf{k}(M)$ algebraically independent over $\mathbf{k}(A)$ and a tuple $a\in\ppowi{\Sort{VF}}$ such that $\operatorname{res}(b+a) = c$. \end{proof} \begin{proof}[Proof of Proposition\,\ref{prop:SCVH QE}] Let us first prove quantifier elimination in the leading term language. Quantifier elimination in the one, two and three sorted languages then follows formally. To be exact, in the case of the three sorted language, a little work is still necessary, as there is no sort for ${\mathbf\Gamma}$ in $\mathcal{L}_{\mathrm{RV}}$. But, when doing a back-and-forth, that can easily be taken care of by lifting points in ${\mathbf\Gamma}$ to $\mathrm{RV}$ using quantifier elimination in $\mathrm{ACVF}$. We will be using Ziegler's trick from \cite{ZieSCH}. Let $\varphi(x)$ be an $\mathcal{L}_{\mathrm{RV}}^{D}$-formula. As noted above, the $D_{i,n}$ can be expressed in terms of $b$ and $\lambda$.
It follows that $\varphi(x)$ is equivalent to an $\mathcal{L}_{\mathrm{RV}}^{\lambda}$-formula $\psi(x)$ and, by Proposition\,\ref{prop:SCVF EQ}, we may assume that $\psi$ is quantifier free. Note that, by Corollary\,\ref{cor:SCVF compl}, the formula $\psi$ does not depend on the actual choice of very canonical $p$-basis. The only occurrences of $b$ and $\lambda$ in $\psi$ are in terms of the form $\operatorname{rv}(\sum_I P_I(x)b^I)$, where the $P_I$ and $Q_I$ are polynomials in $\lambda^m(x)$ for some $m\in\mathbb{N}$ (provided we rewrite $\sum_I P_I(x)b^I = 0$ into $\operatorname{rv}(\sum_I P_I(x)b^I) = 0$). Applying the Frobenius automorphism $m$ times, we may assume that the $P_I$ and $Q_I$ are polynomials in $\lambda^m(x)^{p^m}$. But by \cite[Lemma\,4.3]{ZieSCH}, the $\lambda^m(x)^{p^m}$ can be expressed as polynomials in the $D_{i,n}(x)$ and $b$. We may therefore assume that the $P_I$ and $Q_I$ are polynomials in the $D_{i,n}(x)$. Taking $b$ to be a very canonical basis over $\langle x \rangle$, the structure generated by $x$ (which we can do because the choice of very canonical $p$-basis did not matter so far), we get that \[\operatorname{rv}\left(\sum_I P_I(x)b^I\right) = \sum_I\operatorname{rv}(P_I(x))\operatorname{res}(b)^I.\] Moreover, for all $t_I\in\operatorname{rv}(\langle x \rangle)$, $\sum_I t_I\operatorname{res}(b^I) = 0$ if and only if $\bigwedge_I t_I=0$. Therefore, $\psi$ can be rewritten so that $b$ does not appear in $\psi$ (and the rewriting does not depend on the point $x$ we are considering). This concludes the proof in the leading term language. Let us now prove quantifier elimination in the geometric language. Let $M$ and $N$ be models of $\mathrm{SCVH}_{p,e}$ in the geometric language, $A\leq M$ and $f:A\to N$ an $\mathcal{L}_{\mathcal{G}}^D$-embedding. Assume that $M$ is $\omega$-saturated and $N$ is $|M|^+$-saturated. We may assume that $\Sort{VF}(A)$ is strict.
Indeed, by \cite[Lemma\,2.4]{ZieSCH}, there is a smallest strict extension of $\Sort{VF}(A)$, which is uniquely determined up to isomorphism -- as a field extension, it is algebraic and purely inseparable. It now follows from \cite[Corollary\,2.2]{ZieSCH} that the field extension $\Sort{VF}(A)\subseteq\Sort{VF}(M)$ is separable. By Corollary \ref{cor:ext separable}, $f$ extends to an $\mathcal{L}_{\mathcal{G}}$-embedding of $M$ into $N$, so there exists in particular an $\mathcal{L}_{\mathcal{G}}$-embedding $g: B := A(\ppowi{\Sort{VF}}(M))\to N$ extending $f$. Clearly $g(\ppowi{\Sort{VF}}(M))\subseteq\ppowi{\Sort{VF}}(N)$. Since we have added only absolute constants, $g$ is automatically an $\mathcal{L}_{\mathcal{G}}^D$-embedding. As $M$ is $\omega$-saturated, $\ppowi{\Sort{VF}}(M)$ is dense in $\Sort{VF}(M)$ by Proposition \ref{prop:SCVF-dense}, and so \(B\) is generated by \(\Sort{VF}(B)\). We may now use quantifier elimination in the one sorted language to extend $g$ to $M$. This concludes the proof in the geometric language. \end{proof} We will sometimes consider the set of all geometric sorts except the valued field sort $\Sort{VF}$. We will call these sorts the \emph{imaginary} geometric sorts and write $\mathbfcal{G}^{\mathrm{im}}:=\mathbfcal{G}\setminus\{\Sort{VF}\}=\{{\mathbf\Gamma},{\mathbf k}\}\cup\{\mathbf{S}_n \mid n\geq1\}\cup\{\mathbf{T}_n \mid n\geq1\}$. \begin{corollary}\label{cor:purely-geometric} Let $K\models T$ and $A\substr K$, where $T=\mathrm{SCVH}^{\mathcal{G}}_{p,e}$ (or $T=\mathrm{SCVF}^{\mathcal{G}}_{p,e}$, respectively). Let $L=\alg{K}$.
Then $\mathcal{G}^{\mathrm{im}}(K)=\mathcal{G}^{\mathrm{im}}(L)$, and restriction to \(K\)-points determines an equivalence between the $\mathcal{L}_{\mathcal{G}}^D(A)$-definable sets (the $\mathcal{L}_{\mathcal{G}}^\lambda(A)$-definable sets, respectively) in the multi-sorted structure $\mathcal{G}^{\mathrm{im}}(K)$ and the $\mathcal{L}_{\mathcal{G}}(A)$-definable sets in $\mathcal{G}^{\mathrm{im}}(L)$. \end{corollary} \begin{proof} We may assume that $T=\mathrm{SCVH}^{\mathcal{G}}_{p,e}$, as the result for $\mathrm{SCVF}^{\mathcal{G}}_{p,e}$ is a consequence of the one for $\mathrm{SCVH}^{\mathcal{G}}_{p,e}$. By Proposition \ref{prop:SCVF-dense}, we have $\mathcal{G}^{\mathrm{im}}(K)=\mathcal{G}^{\mathrm{im}}(L)$. The statement about definable sets follows from quantifier elimination for $\mathrm{SCVH}_{p,e}$ in the language $\mathcal{L}_{\mathcal{G}}^D$ (Proposition \ref{prop:SCVH QE}). \end{proof} \begin{remark}\mbox{} \begin{enumerate} \item Note that in $\mathrm{SCVF}_{p,e}$ or $\mathrm{SCVH}_{p,e}$, the field of absolute constants is not stably embedded (although, since $\Sort{VF}\subseteq \mathrm{dcl}(\ppow[n]{\Sort{VF}})$, each of the $\ppow[n]{\Sort{VF}}$ is). Indeed, let $M$ be an $\omega$-saturated model, $a\in\Sort{VF}(M)\setminus\ppowi{\Sort{VF}}(M)$, and let $B$ be the set of balls in $M$ that contain $a$. If $\ppowi{\Sort{VF}}(M)$ were stably embedded, then, by quantifier elimination, $B$ would be \(\mathcal{L}_{\mathcal{G}}(\ppowi{\Sort{VF}}(M))\)-definable, and by definable spherical completeness of \(\mathrm{ACVF}\), there would exist \(c\in\bigcap_{b\in{}B}b(\ppowi{\Sort{VF}}(M))\). But \(\bigcap_{b\in{}B}b(M)=\{a\}\) and $a\not\in\ppowi{\Sort{VF}}(M)$.
\item Nevertheless, by quantifier elimination, $\ppowi{\Sort{VF}}$ is a pure algebraically closed valued field in the following (weak) sense: any definable set in $\ppowi{\Sort{VF}}(M)$ (including the geometric sorts) is the intersection of a quantifier free $\mathcal{L}_{\mathcal{G}}(M)$-definable set with (some Cartesian power of) $\ppowi{\Sort{VF}}$. \end{enumerate} \end{remark} \section{Imaginaries and density}\label{S:density} \subsection{The density theorem} \begin{lemma}\label{L:Subvar-SCF} Let \(\defn{V}\) be an irreducible variety defined over \(K=\sep{K}\). Then any \(\alg{K}\)-definable open subvariety \(\Sort{U}\subseteq \defn{V}\) is defined over \(K\). \end{lemma} \begin{proof} We may suppose that \(\mathrm{char}(K)=p>0\), and we may assume that \(\defn{V}\) is affine and \(\Sort{U}=\defn{V}_f\), where \(f\in{}\alg{K}[\defn{V}]=K[\defn{V}]\otimes_{K}\alg{K}\). So there is some \(N=p^k\) such that \(f^N\in K[\defn{V}]\). Thus, \(\defn{V}_f=\defn{V}_{f^N}\) is defined over \(K\). \end{proof} The valuation topology on powers of a model \(L\) of \(\mathrm{ACVF}\) determines, in the terminology of~\cite[2.11]{vdd}, a \emph{topological system} (using the language \(\mathcal{L}_{\operatorname{div}}\)): all polynomials are continuous, and punctured balls are open. Further, it satisfies the assumptions of~\cite[2.15]{vdd}, and therefore we have: \begin{fact}[{\cite[2.18]{vdd}}]\label{F:dim} In \(\mathrm{ACVF}\), a set \(\defn{X}\subseteq\Sort{VF}^n(L)\) definable with parameters is Zariski dense if and only if it contains a non-empty open ball.
\end{fact} Using Proposition\,\ref{P:manifold}, we obtain the same result for any smooth variety: \begin{corollary}\label{C:dim} A definable subset $\defn{X}$ of a (connected) smooth algebraic variety \(\defn{V}\) in \(\mathrm{ACVF}\) is Zariski dense if and only if it contains a non-empty subset which is open in \(\defn{V}\) for the valuation topology. \end{corollary} \begin{proof} The claim is local, hence we may choose local coordinates as in Proposition\,\ref{P:manifold} and reduce to the case \(\defn{V}=\Sort{VF}^n\). Now the claim follows from Fact~\ref{F:dim}. \end{proof} \begin{lemma}\label{L:strat} If \(K\models\mathrm{SCVF}\), and \(\defn{X}\subseteq\Sort{VF}^n\) is a semi-algebraic subset defined over \(K\), then there are absolutely irreducible affine subvarieties \(\defn{Y}_i\) of \(\Sort{VF}^n\) defined over \(K\) and \(\defn{X}_i\subseteq \defn{Y}_i\) given by a Boolean combination of conditions of the form \(\operatorname{val}(h)>0\), where \(h\) is an invertible regular function on \(\defn{Y}_i\), such that \(\defn{X}=\bigcup_i\defn{X}_i\). \end{lemma} \begin{proof} Let \(J\) be the set of polynomials \(F\) over \(K\) such that for some polynomial \(G\), \(F\operatorname{div}{}G\) or \(G\operatorname{div}{}F\) occurs in a semi-algebraic definition of \(\defn{X}\), and for every subset \(I\) of \(J\), let \(\defn{W}_I\) be the locally closed subset of \(\Sort{VF}^n\) given by \begin{equation*} \bigwedge_{F\in I}F(\overline{x})=0\wedge\prod_{F\not\in I}F(\overline{x})\neq0. \end{equation*} For each \(I\), \(\defn{W}_I\) is an affine subvariety of \(\Sort{VF}^n\), and \(\defn{X}\cap{}\defn{W}_I\) is given by a Boolean combination of polynomial equations and valuative inequalities as in the definition. Hence, by restricting to \(\defn{W}_I\), we may assume that \(\defn{X}\) itself was given by such a Boolean combination to begin with.
Writing \(\defn{X}\) in disjunctive normal form, we present \(\defn{X}\) as a finite union of definable sets \(\defn{X}_i\), each the intersection of a (Zariski) locally closed subset \(\defn{Z}_i\) with valuative inequalities. Let \(\defn{Y}_i\) be the Zariski closure of \(\defn{X}_i(K)\) in \(\defn{Z}_i\) (over \(K\)). The irreducible components of \(\defn{Y}_i\) are defined over \(K\), since \(K\)-points are dense. Hence we may assume that each \(\defn{Y}_i\) is (absolutely) irreducible. Then they satisfy the requirements of the claim. \end{proof} \begin{proposition}[Hong {\cite[Theorem 5.3.1]{HonPhD}}]\label{P:vdense} Let \(K\models\mathrm{SCVF}\) and \(\defn{X}\subseteq \Sort{VF}^n(K)\) be a semi-algebraic subset of \(\Sort{VF}^n(K)\). Let \(L=\alg{K}\). Then there is a quantifier-free \(\mathcal{L}_{\operatorname{div}}(K)\)-formula \(\psi(\overline{x})\) with \(\psi(K)=\defn{X}\) and such that \(\psi(K)\) is dense in \(\psi(L)\). \end{proposition} \begin{remark} Actually, Hong states his result only for \(\aleph_1\)-saturated \(K\), but it is easy to see that the result for general \(K\) follows from this. We add a full proof for convenience. \end{remark} \begin{proof} By Lemma\,\ref{L:strat}, we may assume that \(\defn{X}\) is a (non-empty) valuation-open subset of a variety \(\defn{V}\) over \(K\). Since \(\defn{X}\) is Zariski dense in \(\defn{V}\), we may pass to the smooth locus and assume that \(\defn{V}\) is smooth. By Proposition~\ref{P:manifold}, we may now reduce to the case \(\defn{V}=\Sort{VF}^d\). Indeed, as $K$ is separably closed, there exists an open cover of $\defn{V}$ by subvarieties $\defn{V}_i$ and \'etale maps $f_i:\defn{V}_i\rightarrow\Sort{VF}^d$ defined over $K$ which are local homeomorphisms. As $K=\sep{K}$ and $f_i$ is \'etale, for every $a\in\defn{V}_i(L)$, one has $f_i(a)\in\Sort{VF}^d(K)$ if and only if $a\in\defn{V}_i(K)$.
Now $\Sort{VF}^d(K)$ is dense in $\Sort{VF}^d(L)$ by Proposition \ref{prop:SCVF-dense}, and so $\defn{V}_i(K)$ is dense in $\defn{V}_i(L)$. \end{proof} \begin{corollary}\label{cor:red to open in affine space} Let \(K\models\mathrm{SCVF}\) and \(\defn{X}\) be a semi-algebraic subset of \(\Sort{VF}^n(K)\). Let \(\defn{V}\) be the Zariski closure of \(\defn{X}\), and let $d=\dim(\defn{V})$. Then there is a semi-algebraic subset $\defn{X}'$ of $\defn{X}$ and a polynomial map $f:\defn{X}'\rightarrow\Sort{VF}^d$ defined over $K$ which induces a homeomorphism between $\defn{X}'$ and $\mathbfcal{O}^d(K)$. \end{corollary} \begin{proof} The result follows from the proof of Proposition \ref{P:vdense}. \end{proof} \subsection{Elimination of imaginaries in \texorpdfstring{\(\mathrm{SCVF}_{p,e}\)}{SCVFpe}} In this section, \(K\) denotes a sufficiently saturated and homogeneous model of \(\mathrm{SCVF}_{p,e}\), and \(L:=\alg{K}\) its algebraic closure, so a model of \(\mathrm{ACVF}_{p,p}\). We consider \(L\) in the language \(\mathcal{L}_{\mathcal{G}}\), and \(K\) in the language \(\mathcal{L}_{\mathcal{G},p,e}^\lambda\). In \(K\), \(\eq{\mathrm{acl}}\) etc. refer to \(\mathrm{SCVF}_{p,e}^{\mathrm{eq}}\). \begin{lemma}\label{L:SCVF-ACVF} \begin{enumerate} \item Any automorphism of \(K\) (uniquely) lifts to an automorphism of \(L\). In particular, if \(\overline{a},\overline{b}\in\mathbfcal{G}(K)\) and \(\overline{b}\in\mathrm{dcl}_{\mathrm{ACVF}_{p,p}}(\overline{a})\), then \(\overline{b}\in\mathrm{dcl}_{\mathrm{SCVF}_{p,e}}(\overline{a})\). \item For every tuple \(\overline{a}\in\mathbfcal{G}(L)\) there is a tuple \(\overline{a}'\in\mathbfcal{G}(K)\) such that \(\mathrm{dcl}_{\mathrm{ACVF}_{p,p}}(\overline{a})=\mathrm{dcl}_{\mathrm{ACVF}_{p,p}}(\overline{a}')\).
\item In the structure \(K\), finite sets are coded, i.e., for every \(\{\overline{a}_1,\ldots,\overline{a}_n\}\subseteq\mathbfcal{G}(K)\) there is \(\overline{b}\in\mathbfcal{G}(K)\) which is interdefinable in \(\eq{K}\) with \(\code{\{\overline{a}_1,\ldots,\overline{a}_n\}}\). \end{enumerate} \end{lemma} \begin{proof} (1) is clear. To prove (2), note that \(\mathbf{S}_n(K)=\mathbf{S}_n(L)\) and \(\mathbf{T}_n(K)=\mathbf{T}_n(L)\) for every \(n\geq1\). So it is enough to show (2) for elements of the field sort. But for any \(a\in L\) there is \(m\) such that \(a^{p^m}\in K\), and \(a\) and \(a^{p^m}\) are interdefinable. (3) By elimination of imaginaries in \(\mathrm{ACVF}_{p,p}\) down to the geometric sorts, it follows that finite subsets of \(\mathbfcal{G}(L)\) are coded in \(\mathbfcal{G}(L)\). We finish by combining (2) and (1). \end{proof} \begin{theorem}\label{T:SCVF-EI} The theory \(\mathrm{SCVF}_{p,e}\) eliminates imaginaries down to the geometric sorts. \end{theorem} \begin{proof} Let \(\defn{X}\) be a definable set and let \(A\) contain \(\mathcal{G}(\code{\defn{X}})\). We have to prove that \(\defn{X}\) can be defined over \(A\). Since, by Lemma~\ref{L:SCVF-ACVF}, finite sets are coded, it is enough to show that \(\defn{X}\) is weakly coded, so we may assume that \(A\) is algebraically closed. Also, because $\Sort{VF}$ is dominant, we may assume that $\defn{X}$ is a subset of some Cartesian power of $\Sort{VF}$. Let $K_0:=\Sort{VF}(A)$. Since \(K_0\) is closed under $\lambda$-functions and relatively algebraically closed in \(K\), the extension $K/K_0$ is regular. By Lemma\,\ref{lem:lambda res}, there exists an $n$ such that $\lambda^n(\defn{X}) = \psi(K)$, where $\psi$ is a quantifier free $\mathcal{L}_{\operatorname{div}}(K)$-formula.
As $\lambda$ is injective and $\emptyset$-definable, it follows that finding a (weak) code for $\lambda^n(\defn{X})$ is equivalent to finding one for $\defn{X}$, so we may assume that $\defn{X}=\psi(K)$ for some quantifier free $\mathcal{L}_{\operatorname{div}}(K)$-formula $\psi(x)$. Moreover, by Proposition\,\ref{P:vdense}, we may assume that $\defn{X}$ is dense in $\defn{Y}:=\psi(L)$. Let $\defn{V} = \Zar{\defn{X}}$ be the Zariski closure of $\defn{X}$. Then $\defn{V}$ is $K_0$-definable, by the existence of a smallest field of definition of $\defn{V}$. Moreover, $\defn{Y}\subseteq\defn{V}(L)$. We proceed by induction on $\dim(\defn{V})$. Since $\defn{V}(K)$ is Zariski dense in $\defn{V}(L)$ and the extension $K/K_0$ is regular, the (absolute) irreducible components $\defn{V}_1,\ldots,\defn{V}_l$ of $\defn{V}$ are defined over $K_0$. Hence, encoding the $\defn{V}_i(K)\cap\defn{X}$ one by one, we may assume that $\defn{V}$ is absolutely irreducible. It obviously suffices to encode $\clvK{\defn{X}}$ and $\clvK{\defn{X}}\setminus\defn{X}$, where $\clvK{\defn{X}}$ denotes the valuative closure of $\defn{X}$ in \(\Sort{VF}^n(K)\). But $\clvK{\defn{X}}\setminus\defn{X} \subseteq \clvL{\defn{Y}}\setminus\defn{Y}$, a subset of $\defn{V}(L)$ which has empty interior (in $\defn{V}(L)$), so, by Corollary\,\ref{C:dim}, $\Zar{\clvK{\defn{X}}\setminus\defn{X}} \subseteq \Zar{\clvL{\defn{Y}}\setminus\defn{Y}}$ is a strict subvariety of $\defn{V}$. By induction, $\clvK{\defn{X}}\setminus\defn{X}$ is $A$-definable. It follows that we may assume $\defn{X}$ valuatively closed in $\defn{V}(K)$. The set $\widetilde{\defn{Y}} = \clvL{\defn{Y}} = \clvL{\defn{X}}$ is also definable by a quantifier free $\mathcal{L}_{\operatorname{div}}(K)$-formula, say by $\widetilde{\psi}$, and one has $\widetilde{\psi}(K) = \clvK{\defn{X}} = \defn{X}$.
By elimination of imaginaries in \(\mathrm{ACVF}\) (Fact\,\ref{F:ACVF-EI}), $\widetilde{\defn{Y}}$ is definable over some $e\in\mathbfcal{G}(\code{\widetilde{\defn{Y}}})$ and, by Lemma\,\ref{L:SCVF-ACVF}, we may assume that $e\in \mathbfcal{G}(K)$. Clearly $\defn{X}$ is $\mathcal{L}_{\mathcal{G}}$-definable over $e$ (in $K$), so it only remains to show that $e \in \code{\defn{X}}$. Let $\sigma$ be an automorphism of $K$ that stabilizes $\defn{X}$ globally; then the (unique) extension of $\sigma$ to $L$ must stabilize $\clvL{\defn{X}} = \widetilde{\defn{Y}}$ and hence fixes $e$. So $e\in\code{\defn{X}}$. \end{proof} \section{Imaginaries, definable types and dense pairs}\label{S:pairs} \subsection{Quantifier elimination in dense pairs of valued fields} Much of the following is inspired by work of Delon \cite{DelDens}. \subsubsection{The pure field case} Let $\mathcal{L}_{\mathrm{P}}$ denote the language $\mathcal{L}_{\mathrm{ring}}\cup\{\FF,\lin{n},\flin{n,i}\mid n\in\mathbb{N}_{>0}\text{ and }0<i\leq n\}$, where $\FF$ is a new unary predicate, the $\lin{n}$ are new $n$-ary predicate symbols and the $\flin{n,i}$ are new $(n+1)$-ary function symbols. Note that, for this section, the field sort will be denoted by $\Sort{K}$ and not by $\Sort{VF}$, as it is not valued. Let $\mathrm{T}_{\mathrm{P}}$ be the $\mathcal{L}_{\mathrm{P}}$-theory of pairs of fields, with $\FF$ defining the smaller field, and where $\lin{n}(y_{1},\ldots,y_n)$ holds if and only if the $y_i$ are linearly independent over $\FF$, and if $\lin{n}(y)\wedge \neg\lin{n+1}(x,y)$ holds, then $x =\sum_i \flin{n,i}(x,y)y_i$ where $\flin{n,i}(x,y)\in\FF$ (otherwise, we set $\flin{n,i}=0$). When $A$ is a field, we will denote linear disjointness over $A$ by $\ldis[A]$. One can easily check the following two facts.
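Before checking these facts, the intended interpretation of $\lin{n}$ and $\flin{n,i}$ can be made concrete in a toy pair, say $\mathbb{Q}\subseteq\mathbb{Q}(\sqrt2)$ (this pair is our own illustrative choice; it is of course not one of the dense valued pairs this section is ultimately about). The Python sketch below codes $a+b\sqrt2$ as a pair $(a,b)$ of rationals, decides linear independence over the small field by a rank computation, and recovers the $\flin{n,i}$-coordinates, which indeed land in the small field, with the convention that they are $0$ when the defining clause does not apply.

```python
from fractions import Fraction as Q

# Toy pair of fields: K = Q(sqrt 2), elements coded as (a, b) meaning a + b*sqrt(2);
# the small field FF is the set of pairs with second coordinate 0.

def rank(rows):
    """Rank over Q of a list of length-2 rows (Gaussian elimination with Fractions)."""
    rows = [[Q(u) for u in r] for r in rows]
    rk = 0
    for col in range(2):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col] != 0:
                factor = rows[i][col] / rows[rk][col]
                rows[i] = [u - factor * v for u, v in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def lin(ys):
    """lin_n(y_1, ..., y_n): the tuple is linearly independent over the small field."""
    return rank(ys) == len(ys)

def flin(x, ys):
    """flin_{n,i}(x, y): coordinates of x in the basis ys over the small field,
    when lin_n(y) and not lin_{n+1}(x, y) hold; all zero otherwise."""
    n = len(ys)
    if not lin(ys) or lin([x] + ys):
        return [Q(0)] * n
    if n == 1:
        y = ys[0]
        return [Q(x[0]) / y[0] if y[0] != 0 else Q(x[1]) / y[1]]
    # n == 2: Cramer's rule; the determinant is nonzero since lin_2(ys) holds.
    det = ys[0][0] * ys[1][1] - ys[0][1] * ys[1][0]
    return [(x[0] * ys[1][1] - x[1] * ys[1][0]) / Q(det),
            (ys[0][0] * x[1] - ys[0][1] * x[0]) / Q(det)]

one, root2 = (Q(1), Q(0)), (Q(0), Q(1))

assert lin([one, root2])                 # 1 and sqrt(2) are independent over Q
assert not lin([one, (Q(2), Q(0))])      # 1 and 2 are dependent over Q

# x = 3 - 2*sqrt(2): its flin-coordinates in the basis (1, sqrt(2)) lie in FF.
assert flin((Q(3), Q(-2)), [one, root2]) == [Q(3), Q(-2)]

# The 'otherwise' clause: sqrt(2) is independent from 1, so flin returns 0.
assert flin(root2, [one]) == [Q(0)]
print("lin/flin semantics verified in Q(sqrt 2)")
```

In the general theory such a rank computation is of course not available uniformly; the point of the $\flin{n,i}$ is precisely to add these coordinate functions to the language, so that substructures are closed under them (Lemma \ref{lem:crit substr}).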
\begin{lemma}\label{lem:crit substr} Let $M\models\mathrm{T}_{\mathrm{P}}$ and $A\subseteq M$. Then $A\substr M$ if and only if $A$ is a subring, $\FF(A)$ is a field and $A\ldis[\FF(A)]\FF(M)$. \end{lemma} \begin{proof} First of all, $A$ is an $\mathcal{L}_{\mathrm{ring}}$-substructure of $M$ if and only if it is a subring. Let us now assume that $A$ is closed under the functions $\flin{n,i}$. Note that, if $x\in\FF$, $\flin{1,1}(1,x) = x^{-1}$, and hence $\FF(A)$ is a field. Now, let $a\in A$ be a tuple. If $a$ is not linearly independent over $\FF(M)$, we may assume that $a_{0} = \sum_{i>0} c_i a_i$, where the $(a_i)_{i>0}$ are linearly independent over $\FF(M)$. Then $c_i = \flin{n,i}(a)\in \FF(A)$, and hence the tuple $a$ is not linearly independent over $\FF(A)$. We have just proved that $A\ldis[\FF(A)]\FF(M)$. Conversely, assume that $\FF(A)$ is a field and $A\ldis[\FF(A)]\FF(M)$. We have to show that $A$ is closed under the $\flin{n,i}$ functions. Let $a$ be a tuple in $A$ such that $(a_i)_{i>0}$ is linearly independent over $\FF(M)$ and $a_0 = \sum_{i>0} \flin{n,i}(a)a_i$. By hypothesis, the tuple $a$ is not linearly independent over $\FF(A)$ either, and (because $\FF(A)$ is a field) there exist elements $c_i\in\FF(A)$ such that $a_{0} = \sum_{i>0} c_i a_i$. Since the $(a_i)_{i>0}$ are linearly independent, we have $\flin{n,i}(a) = c_i\in\FF(A)$.
\end{proof} \begin{lemma}\label{lem:crit iso} Let $M_i\models\mathrm{T}_{\mathrm{P}}$ and $A_i\substr M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{ring}}$-isomorphism such that $f(\FF(A_1))=\FF(A_2)$. Then $f$ is in fact an $\mathcal{L}_{\mathrm{P}}$-isomorphism. \end{lemma} \begin{proof} We have to check that $f$ respects the predicates $\lin{n}$ and the functions $\flin{n,i}$. First, let $a\in A_1$ be a tuple. The tuple $a$ is linearly dependent over $\FF(M_1)$ if and only if it is linearly dependent over $\FF(A_1)$, i.e., there exist $\lambda_i\in\FF(A_1)$ such that $\sum_i\lambda_i a_i = 0$. Equivalently, $f(\sum_i\lambda_i a_i) = \sum_i f(\lambda_i)f(a_i) = 0$ and the tuple $f(a)$ is linearly dependent over $\FF(M_2)$. By symmetry, we also have that if $f(a)$ is linearly dependent over $\FF(M_2)$, then $a$ is linearly dependent over $\FF(M_1)$. Hence $f$ respects $\lin{n}$. Let us now assume that $a$ is linearly independent over $\FF(M_1)$ and that $c = \sum_i \flin{n,i}(c,a)a_i$. Then $f(c) = f(\sum_i \flin{n,i}(c,a)a_i) = \sum_i f(\flin{n,i}(c,a))f(a_i)$. Moreover, the tuple $(c,a)$ is linearly dependent over $\FF(M_1)$ but $a$ is not, and hence $(f(c),f(a))$ is linearly dependent over $\FF(M_2)$ but $f(a)$ is not. Therefore $f(c) = \sum_i\flin{n,i}\left(f(c),f(a)\right) f(a_i)$ and, by uniqueness of the coefficients in a decomposition of $f(c)$ in the basis $f(a)$, we obtain that $f(\flin{n,i}(c,a)) = \flin{n,i}(f(c),f(a))$. \end{proof} \begin{lemma}\label{lem:frac pure} Let $M_i\models\mathrm{T}_{\mathrm{P}}$ and $A_i\substr M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{P}}$-isomorphism.
Then $\operatorname{Frac}(A_i)\substr M_i$, $\mathbb{F}(\operatorname{Frac}(A_i)) = \mathbb{F}(A_i)$ and $f$ extends to a unique $\mathcal{L}_{\mathrm{P}}$-isomorphism between $\operatorname{Frac}(A_1)$ and $\operatorname{Frac}(A_2)$.
\end{lemma}
\begin{proof}
One checks, by clearing denominators, that $\operatorname{Frac}(A_i)\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$, and thus that $\mathbb{F}(\operatorname{Frac}(A_i)) = \mathbb{F}(A_i)$. Lemmas\,\ref{lem:crit substr} and \ref{lem:crit iso} now allow us to conclude.
\end{proof}

\begin{lemma}\label{lem:small field pure}
Let $M_i\models\mathrm{T}_{\mathrm{P}}$, $A_i\substr M_i$ and $\mathbb{F}(A_i)\substr[\mathcal{L}_{\mathrm{ring}}] C_i\substr[\mathcal{L}_{\mathrm{ring}}]\mathbb{F}(M_i)$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{P}}$-isomorphism and $g: C_1\to C_2$ an $\mathcal{L}_{\mathrm{ring}}$-isomorphism such that $g|_{\mathbb{F}(A_1)} = f|_{\mathbb{F}(A_1)}$. Assume that $A_i$ and $C_i$ are fields. Then $A_i C_i \substr M_i$, $\mathbb{F}(A_i C_i) = C_i$ and there is a unique $\mathcal{L}_{\mathrm{P}}$-isomorphism $h:A_1 C_1\to A_2 C_2$ extending $f$ and $g$.
\end{lemma}
\begin{proof}
As $A_i\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$ and $C_i\substr\mathbb{F}(M_i)$, we have $A_i C_i \ldis[C_i]\mathbb{F}(M_i)$. It follows that $\mathbb{F}(A_i C_i) = C_i$ and, by Lemma\,\ref{lem:crit substr}, that $A_i C_i \substr M_i$. As $A_i\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$, it follows that $A_i C_i$ is isomorphic as a ring to $A_i \otimes_{\mathbb{F}(A_i)} C_i$ and hence there exists a unique ring isomorphism $h:A_1 C_1\to A_2 C_2$ extending $f$ and $g$.
By Lemma\,\ref{lem:crit iso}, $h$ is in fact an $\mathcal{L}_{\mathrm{P}}$-isomorphism.
\end{proof}

\begin{lemma}\label{lem:alg pure}
Let $M_i\models\mathrm{T}_{\mathrm{P}}$ and $A_i\substr M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{P}}$-isomorphism. Assume that $A_i$ is a field and that the extension $\mathbb{F}(A_i)\subseteq \mathbb{F}(M_i)$ is regular. Then $A_i^{\mathrm{alg}}\cap M_i\substr M_i$, $\mathbb{F}(A_i^{\mathrm{alg}}\cap M_i) = \mathbb{F}(A_i)$ and any field isomorphism $g:A_1^{\mathrm{alg}}\cap M_1\to A_2^{\mathrm{alg}}\cap M_2$ extending $f$ is an $\mathcal{L}_{\mathrm{P}}$-isomorphism.
\end{lemma}
\begin{proof}
We have $A_i\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$ and $\mathbb{F}(A_i)\subseteq \mathbb{F}(M_i)$ regular, hence $A_i\subseteq A_i\mathbb{F}(M_i)$ is also regular, that is, $A_i^{\mathrm{alg}}\ldis[A_i]A_i\mathbb{F}(M_i)$. By transitivity of linear disjointness, we obtain that $A_i^{\mathrm{alg}}\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$ and hence $A_i^{\mathrm{alg}}\cap M_i\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$. In particular, we have $\mathbb{F}(A_i^{\mathrm{alg}}\cap M_i) = \mathbb{F}(A_i)$ and we may conclude using Lemmas \ref{lem:crit substr} and \ref{lem:crit iso}.
\end{proof}

\begin{lemma}\label{lem:trans pure}
Let $M_i\models\mathrm{T}_{\mathrm{P}}$, $A_i\substr M_i$ and $c_i\in M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{P}}$-isomorphism. Assume that $A_i$ is a field and that $c_i$ is transcendental over $A_i\mathbb{F}(M_i)$. Then $A_i(c_i)\substr M_i$, $\mathbb{F}(A_i(c_i)) = \mathbb{F}(A_i)$ and there exists an $\mathcal{L}_{\mathrm{P}}$-isomorphism $g:A_1(c_1)\to A_2(c_2)$ extending $f$ and sending $c_1$ to $c_2$.
\end{lemma}
\begin{proof}
We have that $f$ extends to a ring isomorphism $g$ on $A_1(c_1)$ sending $c_1$ to $c_2$. Moreover, $A_i(c_i)$ is algebraically independent from $A_i\mathbb{F}(M_i)$ over $A_i$. As $A_i\subseteq A_i(c_i)$ is regular, $A_i(c_i) \ldis[A_i] A_i\mathbb{F}(M_i)$. Since $A_i\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$, by transitivity, it follows that $A_i(c_i)\ldis[\mathbb{F}(A_i)]\mathbb{F}(M_i)$. In particular, $\mathbb{F}(A_i(c_i)) = \mathbb{F}(A_i)$ and, by Lemma\,\ref{lem:crit iso}, $g$ is in fact an $\mathcal{L}_{\mathrm{P}}$-isomorphism.
\end{proof}

We can now prove a slightly improved version of \cite[Theorem\,1]{DelDens}. Let $\mathrm{T}_{\mathrm{P}}^{\mathrm{alg}} := \mathrm{T}_{\mathrm{P}}\cup\mathrm{ACF}\cup\{[\Sort{K}:\mathbb{F}] \geq n\mid n\in\mathbb{N}\}$.

\begin{theorem}\label{thm:EQ pure}
The theory $\mathrm{T}_{\mathrm{P}}^{\mathrm{alg}}$ resplendently eliminates quantifiers relative to $\mathbb{F}$; that is, for every language $\widetilde{\mathcal{L}}\supseteq\mathcal{L}_{\mathrm{ring}}$ and every $\widetilde{\mathcal{L}}$-theory $\widetilde{T}$ which eliminates quantifiers, the $\mathcal{L}_{\mathrm{P}}\cup\widetilde{\mathcal{L}}$-theory $\mathrm{T}_{\mathrm{Pp}}:=\mathrm{T}_{\mathrm{P}}^{\mathrm{alg}}\cup \widetilde{T}^{\mathbb{F}}$ eliminates quantifiers, where $\widetilde{T}^{\mathbb{F}}$ is the relativization of $\widetilde{T}$ to $\mathbb{F}$.
\end{theorem}
\begin{proof}
Let us denote by $\mathcal{L}_{\mathrm{Pp}}$ the language $\mathcal{L}_{\mathrm{P}}\cup\widetilde{\mathcal{L}}$. Note that an $\mathcal{L}_{\mathrm{Pp}}$-isomorphism is an $\mathcal{L}_{\mathrm{P}}$-isomorphism that restricts to an $\widetilde{\mathcal{L}}$-isomorphism on $\mathbb{F}$.
Let $M$ and $N$ be models of $\mathrm{T}_{\mathrm{Pp}}$, $A\substr M$ and $f:A\to N$ an $\mathcal{L}_{\mathrm{Pp}}$-embedding. Assume that $N$ is $|M|^+$-saturated. We have to show that $f$ extends to $M$.

By Lemma\,\ref{lem:frac pure}, we may assume that $A$ is a field. Since $\mathbb{F}(\operatorname{Frac}(A)) = \mathbb{F}(A)$, this new embedding is an $\widetilde{\mathcal{L}}$-embedding. By quantifier elimination in $\widetilde{T}$, we can extend $f|_{\mathbb{F}(A)}$ to $g:\mathbb{F}(M)\to\mathbb{F}(N)$ and, by Lemma\,\ref{lem:small field pure}, we may assume that $\mathbb{F}(M)\subseteq A$. As $\widetilde{T}$ eliminates quantifiers, $f(\mathbb{F}(M))\preccurlyeq_{\widetilde{\mathcal{L}}} \mathbb{F}(N)$ and this extension is regular. Applying Lemma\,\ref{lem:alg pure}, we may assume that $A$ is algebraically closed.

\begin{claim}\label{claim:tr deg pure}
The transcendence degree of $N$ over $\mathbb{F}(N)$ is larger than $|M|$.
\end{claim}
\begin{proof}
By compactness and saturation, this follows from the fact that $N$ is an infinite extension of $\mathbb{F}(N)$.
\end{proof}

Let $c\in M$ be transcendental over $A$ and $d\in N$ be transcendental over $f(A)\mathbb{F}(N)$. Then, by Lemma\,\ref{lem:trans pure}, $f$ extends to an $\mathcal{L}_{\mathrm{P}}$-embedding $g:A(c)\to N$ sending $c$ to $d$. Moreover, $g|_{\mathbb{F}} = f|_{\mathbb{F}}$ and hence $g$ is also an $\widetilde{\mathcal{L}}$-embedding. Now, by Lemma\,\ref{lem:alg pure}, $g$ extends to an $\mathcal{L}_{\mathrm{Pp}}$-embedding on $A(c)^{\mathrm{alg}}$. Repeating this last step sufficiently many times, we obtain an $\mathcal{L}_{\mathrm{Pp}}$-embedding of $M$ into $N$.
\end{proof}

\subsubsection{The valued case}
We now want to extend the previous results to the setting of dense pairs of valued fields. Let $\mathcal{L}_{\mathrm{Pv}} := \mathcal{L}_{\mathrm{P}}\cup\mathcal{L}_{\mathrm{RV}}$. We will consider the $\mathcal{L}_{\mathrm{Pv}}$-theory
\[\mathrm{T}_{\mathrm{Pv}} := \mathrm{T}_{\mathrm{P}}\cup\{\text{$\Sort{VF}$ is a valued field and $\mathbb{F}\subseteq\Sort{VF}$ is dense}\}.\]
In any model $M\models \mathrm{T}_{\mathrm{Pv}}$, by density of the pair $\mathbb{F}(M)\subseteq\Sort{VF}(M)$, we have $\operatorname{rv}(\Sort{VF}(M)) = \operatorname{rv}(\mathbb{F}(M))$. We define $\mathbb{F}\mathrm{RV} := \mathbb{F}\cup\mathrm{RV}$.

\begin{lemma}\label{lem:frac valued}
Let $M_i\models\mathrm{T}_{\mathrm{Pv}}$ and $A_i\substr M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism. Then $f$ extends to an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism between $A_1(\operatorname{Frac}(\Sort{VF}(A_1)))$ and $A_2(\operatorname{Frac}(\Sort{VF}(A_2)))$.
\end{lemma}
\begin{proof}
By quantifier elimination in $\mathcal{L}_{\mathrm{RV}}$, $f$ extends to an $\mathcal{L}_{\mathrm{RV}}$-isomorphism $g:A_1(\operatorname{Frac}(\Sort{VF}(A_1))) \to A_2(\operatorname{Frac}(\Sort{VF}(A_2)))$. By Lemma\,\ref{lem:frac pure}, $g|_{\Sort{VF}}$ is an $\mathcal{L}_{\mathrm{P}}$-isomorphism.
\end{proof}

\begin{lemma}\label{lem:small field valued}
Let $M_i\models\mathrm{T}_{\mathrm{Pv}}$, $A_i\substr M_i$ and $\mathbb{F}\mathrm{RV}(A_i)\substr[\mathcal{L}_{\mathrm{RV}}] C_i\substr[\mathcal{L}_{\mathrm{RV}}]\mathbb{F}\mathrm{RV}(M_i)$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism and $g: C_1\to C_2$ an $\mathcal{L}_{\mathrm{RV}}$-isomorphism such that $g|_{\mathbb{F}\mathrm{RV}(A_1)} = f|_{\mathbb{F}\mathrm{RV}(A_1)}$. Moreover, assume that for all $a\in \Sort{VF}(A_i)\setminus\Sort{VF}(C_i)$ there is a Cauchy sequence $(c_{\alpha}^a)$ in $\Sort{VF}(C_i)$ converging to $a$ (in $\Sort{VF}(A_i(C_i))$) and such that $g(c_{\alpha}^a) = c_{\alpha}^{f(a)}$. Assume that $\Sort{VF}(A_i)$ and $\Sort{VF}(C_i)$ are fields. Then there exists a unique $\mathcal{L}_{\mathrm{Pv}}$-isomorphism $h:A_1(C_1)\to A_2(C_2)$ extending $f$ and $g$.
\end{lemma}
\begin{proof}
Our assumptions imply that $\Sort{VF}(C_i)$ is dense in $\Sort{VF}(A_i(C_i))$. By Lemma\,\ref{lem:small field pure}, there exists a unique $\mathcal{L}_{\mathrm{P}}$-isomorphism $h : \Sort{VF}(A_1(C_1))\to \Sort{VF}(A_2(C_2))$ extending $f|_{\Sort{VF}}$ and $g|_{\Sort{VF}}$. We extend it to $\mathrm{RV}$ by setting $h|_{\mathrm{RV}} := g|_{\mathrm{RV}}$. Note that, by density, $\mathrm{RV}(A_i(C_i))=\mathrm{RV}(C_i)$. We have to check that $h$ preserves $\operatorname{rv}$.

Let us assume that all the sequences $(c_\alpha^a)$ are indexed by the same ordinal (the cofinality of $\operatorname{val}(\Sort{VF}(C_i)^\star)$). Let $P(\ol{X})\in C_1[\ol{X}]$ and $a\in \Sort{VF}(A_1)^{|\ol{X}|}$. If $P(a) = 0$, then $h(P(a)) = 0$ and $\operatorname{rv}(h(P(a)))=\infty=h(\operatorname{rv}(P(a)))$.
Thus, we may assume that $P(a)\neq 0$ and $P^g(f(a)) \neq 0$. Since $\operatorname{val}(a-c_{\alpha}^a)$ is cofinal in $\operatorname{val}(\Sort{VF}(A_1(C_1)))$ and $P(c_{\alpha}^{a}) = P(a) + \sum_{I\neq 0} (c_{\alpha}^{a}-a)^I P_I(a)$, for large enough $\alpha$ one has $\operatorname{rv}(P(a)) = \operatorname{rv}(P(c_{\alpha}^{a}))$. As $g$ is an $\mathcal{L}_{\mathrm{RV}}$-isomorphism, $g(\operatorname{rv}(P(c_{\alpha}^{a}))) = \operatorname{rv}(P^g(c_{\alpha}^{f(a)}))$. Similarly, $\operatorname{rv}(P^g(c_{\alpha}^{f(a)})) = \operatorname{rv}(P^g(f(a))) = \operatorname{rv}(h(P(a)))$. Hence $h(\operatorname{rv}(P(a))) = g(\operatorname{rv}(P(c_{\alpha}^{a}))) = \operatorname{rv}(h(P(a)))$.
\end{proof}

\begin{corollary}\label{cor:small field valued}
Let $M_i\models\mathrm{T}_{\mathrm{Pv}}$, $A_i\substr M_i$ and $\mathbb{F}\mathrm{RV}(A_i)\substr[\mathcal{L}_{\mathrm{RV}}] C_i\substr[\mathcal{L}_{\mathrm{RV}}]\mathbb{F}\mathrm{RV}(M_i)$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism and $g: C_1\to C_2$ an $\mathcal{L}_{\mathrm{RV}}$-isomorphism such that $g|_{\mathbb{F}\mathrm{RV}(A_1)} = f|_{\mathbb{F}\mathrm{RV}(A_1)}$. Assume that $\mathbb{F}(A_i)$ is dense in $\Sort{VF}(A_i)$, that $\operatorname{val}(\Sort{VF}(A_i)^{\star})$ is a cofinal subset of $\operatorname{val}(\Sort{VF}(A_i(C_i))^{\star})$ and that $\Sort{VF}(A_i)$ and $\Sort{VF}(C_i)$ are fields. Then there exists a unique $\mathcal{L}_{\mathrm{Pv}}$-isomorphism $h:A_1(C_1)\to A_2(C_2)$ extending $f$ and $g$.
\end{corollary}
\begin{proof}
This is an immediate consequence of Lemma\,\ref{lem:small field valued}.
Indeed, for all $a\in \Sort{VF}(A_1)\setminus\Sort{VF}(C_1)$, let $(c_\alpha^a)$ be a Cauchy sequence in $\mathbb{F}(A_1)$ converging to $a$ in $\Sort{VF}(A_1)$ (and hence in $\Sort{VF}(A_1(C_1))$, by cofinality) and define $c_\alpha^{f(a)}$ to be $f(c_\alpha^{a})$. Now Lemma\,\ref{lem:small field valued} applies.
\end{proof}

\begin{comment}
\begin{lemma}\label{lem:alg valued}
Let $M_i\models\mathrm{T}_{\mathrm{Pv}}$ and $A_i\substr M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism. Assume that $\Sort{VF}(A_i)$ is a field, that the extension $\mathbb{F}(A_i)\subseteq \mathbb{F}(M_i)$ is regular and that $\mathbb{F}(A_i)$ is dense in $\Sort{VF}(A_i)^{\mathrm{alg}}$. Then any extension of $f$ to an $\mathcal{L}_{\mathrm{RV}}$-isomorphism $g : A_1(\Sort{VF}(A_1)^{\mathrm{alg}}\cap M_1) \to A_2(\Sort{VF}(A_2)^{\mathrm{alg}}\cap M_2)$ is an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism.
\end{lemma}
\begin{proof}
We only have to show that $g|_{\Sort{VF}}$ is an $\mathcal{L}_{\mathrm{P}}$-isomorphism. But that follows from Lemma\,\ref{lem:alg pure}.
\end{proof}

\begin{lemma}\label{lem:trans valued}
Let $M_i\models\mathrm{T}_{\mathrm{Pv}}$, $A_i\substr M_i$ and $c_i\in M_i$ for $i=1,2$, and let $f : A_1\to A_2$ be an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism. Assume that $A_i$ is a field, that the extension $\mathbb{F}(A_i)\subseteq \mathbb{F}(M_i)$ is regular and that $c_i$ is transcendental over $A_i\mathbb{F}(M_i)$. Then any $\mathcal{L}_{\mathrm{RV}}$-isomorphism $g:A_1(c_1)\to A_2(c_2)$ extending $f$ is an $\mathcal{L}_{\mathrm{Pv}}$-isomorphism.
\end{lemma}
\begin{proof}
We only have to show that $g|_{\Sort{VF}}$ is an $\mathcal{L}_{\mathrm{P}}$-isomorphism. But that follows from Lemma\,\ref{lem:trans pure}.
\end{proof}
\end{comment}

Let $\mathcal{L}$ be an $\mathrm{RV}$-enrichment of $\mathcal{L}_{\mathrm{RV}}$ and let $T$ be an $\mathcal{L}$-theory of valued fields that eliminates quantifiers. The two main examples are $\mathrm{ACVF}$ and theories of equicharacteristic zero Henselian fields Morleyized on $\mathrm{RV}$. As always, the present results remain true in mixed characteristic provided $\mathcal{L}_{\mathrm{RV}}$ is understood as the language of higher order leading terms, i.e., with sorts for $\mathrm{RV}_n := \Sort{VF}/(1+n\boldsymbol{\mathfrak{m}})$ for all $n\in\mathbb{N}$. Let $\widetilde{\mathcal{L}}\supseteq\mathcal{L}$ be a language and let $\widetilde{T}$ be an $\widetilde{\mathcal{L}}$-theory which eliminates quantifiers. We will now consider the theory
\[\mathrm{T}_{\mathrm{Ppv}} := \mathrm{T}_{\mathrm{Pv}}\cup T\cup\widetilde{T}^{\mathbb{F}\mathrm{RV}}\cup\{[\Sort{VF}:\mathbb{F}] \geq n\mid n\in\mathbb{N}\}.\]
We will denote by $\mathbf{B}$ the (interpretable) set of open balls.

\begin{theorem}\label{thm:EQ valued}
Assume that:
\begin{enumerate}
\item for all $M\models\mathrm{T}_{\mathrm{Ppv}}$, $A\leq\mathbb{F}(M)$ and tuples $b_1$, $b_2\in\mathbf{B}(M)$: if $b_1\equiv_{\mathcal{L}(A)}^{M}b_2$, then $b_1\equiv_{\widetilde{\mathcal{L}}(A)}^{\mathbb{F}(M)}b_2$;
\item for every $A\leq \mathbb{F}(M)\models\widetilde{T}$ with $M$ sufficiently saturated with respect to $|A|$, the set $C_A:= \{x\in \Sort{VF}(M)\mid$ the $\widetilde{\mathcal{L}}$-structure and the $\mathcal{L}$-structure generated by $Ax$ are equal$\}$ is dense in $\Sort{VF}(M)$.
\end{enumerate}
Then $\mathrm{T}_{\mathrm{Ppv}}$ eliminates quantifiers.
In particular, if $M\models\mathrm{T}_{\mathrm{Ppv}}$, then $\mathbb{F}(M)$ is stably embedded in $M$, with induced structure given by $\widetilde{\mathcal{L}}$.
\end{theorem}
\begin{proof}
Let $M$ and $N$ be models of $\mathrm{T}_{\mathrm{Ppv}}$, $A\substr M$ and $f:A\to N$ an $\mathcal{L}_{\mathrm{Ppv}}$-embedding, where $\mathcal{L}_{\mathrm{Ppv}}:=\mathcal{L}_{\mathrm{Pv}}\cup\widetilde{\mathcal{L}}$. Assume that $N$ is $|M|^+$-saturated. We have to show that $f$ extends to $M$. By Lemma\,\ref{lem:frac valued}, we may assume that $\Sort{VF}(A)$ is a field.

\begin{claim}
At the cost of enlarging $M$ (without changing its cardinality) and $A\leq M$, we may assume that $\mathbb{F}(A)$ is dense in $\Sort{VF}(A)$ and that $\operatorname{val}(\Sort{VF}(A)^\star)$ is cofinal in $\operatorname{val}(M^\star)$.
\end{claim}
\begin{proof}
Let $M'$ be some sufficiently saturated extension of $M$. Let $\kappa := |\Sort{VF}(A)|$ and let $(a_i)_{i\in\kappa}$ be an enumeration of $\Sort{VF}(A)$. By induction on $(n,i)\in\omega\times\kappa$ (ordered lexicographically), we will construct
\begin{itemize}
\item a sequence $(e_{n,i})_{n\in\omega,i\in\kappa}$ in $\mathbb{F}(M')$;
\item an increasing chain $(M_{n,i})_{n\in\omega,i\in\kappa}$ of elementary substructures of $M'$ containing $M$;
\item an increasing chain of $\mathcal{L}_{\mathrm{Ppv}}$-embeddings $f_{n,i}:A_{n,i}\to N$ extending $f$, where $A_{n,i}$ is the structure generated by $A\cup\{e_{m,j}\mid(m,j)\leq(n,i)\}$,
\end{itemize}
satisfying the following conditions for all $(n,i)\in\omega\times\kappa$:
\begin{enumerate}
\item[(i)] $|M_{n,i}|=|M|$;
\item[(ii)] $e_{n,i}\in M_{n,i}$;
\item[(iii)] $\operatorname{val}(e_{n,i} - a_{i}) > \operatorname{val}(M_{<(n,i)}^{\star})$, where $M_{<(n,i)}:=M\cup \bigcup_{(n',i')<(n,i)}M_{n',i'}$.
\end{enumerate}
We will denote the image of $f$ by $C$, and that of $f_{n,i}$ by $C_{n,i}$.

Let us first construct $e_{0,0}$, $M_{0,0}$ and $f_{0,0}$. By saturation of $M'$ and Hypothesis 2, we find $e_{0,0}\in\mathbb{F}(M')$ such that $\operatorname{val}(e_{0,0}-a_0)>\mathbf{\Gamma}(M)\setminus\{\infty\}$ and the $\widetilde{\mathcal{L}}$-structure and the $\mathcal{L}$-structure generated by $\mathbb{F}(A)e_{0,0}$ are equal. We also find an elementary submodel $M_{0,0}$ of $M'$ of cardinality $|M|$ containing $M(e_{0,0})$.

Let us now show that we may extend $f$ to $e_{0,0}$. Note that Hypothesis 1 implies that if $\eta:A\to N$ is an $\widetilde{\mathcal{L}}$-isomorphism between subsets of $\mathbb{F}$ and $\rho$ is an extension of $\eta$ to some tuple of balls $b$ which is an $\mathcal{L}$-embedding, then there exists an $\widetilde{\mathcal{L}}$-embedding extending $\eta$ and sending $b$ to $\rho(b)$. Indeed, by quantifier elimination in $\widetilde{T}$, $\eta^{-1}$ can be extended to an $\widetilde{\mathcal{L}}$-embedding $i$ defined at $\rho(b)$. Then $i\circ \rho$ fixes $A$, so, applying Hypothesis 1 to $b$ and $i(\rho(b))$, we get that there exists an $\widetilde{\mathcal{L}}$-embedding $j$ fixing $A$ and sending $b$ to $i(\rho(b))$. Then $h := i^{-1}\circ j$ is an $\widetilde{\mathcal{L}}$-embedding extending $\eta$ and sending $b$ to $\rho(b)$.

Let us now come back to our previous notation. By quantifier elimination in $T$, $f$ extends to an elementary $\mathcal{L}$-embedding $g:M_{0,0}\to N$.
By the previous paragraph and quantifier elimination in $\widetilde{T}$, there exists an $\widetilde{\mathcal{L}}$-embedding $h : \mathbb{F}(M_{0,0})\to \mathbb{F}(N)$ extending $f|_{\mathbb{F}}$ such that $h|_{\mathbf{B}(M_{0,0})} = g|_{\mathbf{B}(M_{0,0})}$. Let $c_{0,0} = h(e_{0,0}) \in\mathbb{F}(N)$. By construction, we have that $\mathrm{tp}_{\widetilde{\mathcal{L}}}(c_{0,0}/\mathbb{F}(C)) = f_{\star}\mathrm{tp}_{\widetilde{\mathcal{L}}}(e_{0,0}/\mathbb{F}(A))$.

Let $d\in M$, $\gamma := \operatorname{rv}(e_{0,0} - d)$ and $b := \{x\mid \operatorname{rv}(x - d) = \gamma\}$. We have $e_{0,0}\in b \in\mathbf{B}(M_{0,0})$ and hence $c_{0,0}\in h(b) = g(b) = \{x\mid \operatorname{rv}(x-g(d)) = g(\gamma)\}$. It follows that $\operatorname{rv}(c_{0,0} - g(d)) = g(\operatorname{rv}(e_{0,0} - d))$. In particular, $\operatorname{val}(c_{0,0}-f(a_{0})) > \mathbf{\Gamma}(M)\setminus\{\infty\}$ and hence, for all non-zero polynomials $P = \sum_i g(d_i) X^i\in\Sort{VF}(g(M))[X]$, letting $i_0=\min\{i\mid g(d_i)\neq0\}$ and writing $P^{g^{-1}} := \sum_i d_i X^i$, we have
\begin{multline*}
\operatorname{rv}(P(c_{0,0}-f(a_0)))=\operatorname{rv}(g(d_{i_0}))\cdot \operatorname{rv}(c_{0,0}-f(a_0))^{i_0} \\
=g(\operatorname{rv}(d_{i_0}(e_{0,0}-a_0)^{i_0})) = g(\operatorname{rv}(P^{g^{-1}}(e_{0,0} - a_0))).
\end{multline*}
It follows (since $g$ is an $\mathcal{L}$-isomorphism) that $\mathrm{tp}_{\mathcal{L}}(c_{0,0}/g(M)) = g_{\star}\mathrm{tp}_{\mathcal{L}}(e_{0,0}/M)$. In particular, $\mathrm{tp}_{\mathcal{L}}(c_{0,0}/C) = f_{\star}\mathrm{tp}_{\mathcal{L}}(e_{0,0}/A)$. Let $f_{0,0}$ be an $\mathcal{L}$-embedding extending $f$ to $A(e_{0,0})$ and sending $e_{0,0}$ to $c_{0,0}$.
Then $f_{0,0}|_{\mathbb{F}}$ is an $\widetilde{\mathcal{L}}$-embedding (recall that the $\widetilde{\mathcal{L}}$-structure and the $\mathcal{L}$-structure generated by $\mathbb{F}(A)e_{0,0}$ are equal). Finally, by Lemma\,\ref{lem:small field pure}, $f_{0,0}$ is also an $\mathcal{L}_{\mathrm{P}}$-embedding.

Now let $(n,i)>(0,0)$ be given, and assume that $e_{m,j}$, $M_{m,j}$ and $f_{m,j}$ have been constructed for all $(m,j)<(n,i)$ satisfying the above. We may then construct $e_{n,i}$, $M_{n,i}$ and $f_{n,i}$ in the exact same way: in the argument above, it is enough to replace $M$ by $M_{<(n,i)}$, $A$ by $A_{<(n,i)}=\bigcup_{(m,j)<(n,i)}A_{m,j}$ and $f$ by $f_{<(n,i)}=\bigcup_{(m,j)<(n,i)}f_{m,j}$. We define $f_{<(\omega,\kappa)}$, $M_{<(\omega,\kappa)}$ and $A_{<(\omega,\kappa)}\leq M_{<(\omega,\kappa)}$ similarly. It is easy to check that $\operatorname{val}(\mathbb{F}(A_{<(\omega,\kappa)})^\star)$ is cofinal in $\operatorname{val}(M_{<(\omega,\kappa)}^{\star})$ and, as the $(e_{n,i})_{n\in\omega}$ are Cauchy sequences whose limits are the $a_i$ in $\Sort{VF}(A_{<(\omega,\kappa)})$, that $\mathbb{F}(A_{<(\omega,\kappa)})$ is dense in $\Sort{VF}(A_{<(\omega,\kappa)})$.
\end{proof}

We can now apply Corollary\,\ref{cor:small field valued} to extend $f$ to all of $\mathbb{F}(M)$, and we may thus assume that $\mathbb{F}(A) = \mathbb{F}(M)$. We may now extend $f$ to the relative algebraic closure of $A$ in $M$ using Lemma\,\ref{lem:alg pure} and quantifier elimination in $T$.

\begin{claim}\label{claim:tr ball}
Any ball in $N$ has transcendence degree at least $|M|^+$ over $\mathbb{F}(N)$.
\end{claim}
\begin{proof}
It suffices to prove the claim for $\mathcal{O}$ and, in that case, it is an easy consequence of Claim\,\ref{claim:tr deg pure}.
\end{proof}

Now let $a\in M\setminus A$; then $a\in\Sort{VF}(M)$ and $a$ is transcendental over $\Sort{VF}(A)$. Let $(a_\alpha)$ be a Cauchy sequence in $A$ converging to $a$. Note that $\operatorname{val}(a - a_\alpha)$ is cofinal in $\mathbf{\Gamma}(M)$. Let $b$ be a ball in $N$ that only contains pseudo-limits of the sequence $(f(a_\alpha))$. By Claim\,\ref{claim:tr ball}, we can find a point $c\in b$ which is transcendental over $\Sort{VF}(f(A))\mathbb{F}(N)$. Let $P\in\Sort{VF}(A)[X]$; then $\operatorname{val}(P(a)) < \infty$ and, for large enough $\alpha$, $\operatorname{rv}(P(a)) = \operatorname{rv}(P(a_\alpha))$ (cf.\ the proof of Lemma\,\ref{lem:small field valued}). Similarly, $\operatorname{rv}(P^f(c)) = \operatorname{rv}(P^f(f(a_\alpha))) = f(\operatorname{rv}(P(a)))$. It follows that $f$ can be extended to an $\mathcal{L}$-embedding sending $a$ to $c$. By Lemma\,\ref{lem:trans pure}, this extension is an $\mathcal{L}_{\mathrm{Ppv}}$-embedding. Repeating these last two steps, we can extend $f$ to $M$.
\end{proof}

\begin{remark}
The hypotheses of Theorem\,\ref{thm:EQ valued} are verified in the two following cases:
\begin{itemize}
\item If $\widetilde{T} = T$, Hypothesis 1 follows immediately from the fact that $\widetilde{\mathcal{L}} = \mathcal{L}$ and the fact that $\mathbb{F}(M)\preccurlyeq_{\mathcal{L}} M$. Hypothesis 2 is trivial in this case, as $\widetilde{\mathcal{L}} = \mathcal{L}$. The previous result therefore applies to dense elementary pairs of characteristic zero Henselian fields.
\item Let $T = \mathrm{ACVF}_{p,p}$ and $\widetilde{T} = \mathrm{SCVH}_{p,e}$ (or $\mathrm{SCVF}_{p,e}$, respectively).
Hypothesis 1 follows from the fact that if $b_1\equiv_{\mathcal{L}_{\mathrm{RV}}(A)}^M b_2$ for $b_1,b_2\in\mathbf{B}(M)=\mathbf{B}(\mathbb{F}(M))$ and $A\leq\mathbb{F}(M)$, then $b_1\equiv_{\mathcal{L}_{\mathcal{G}}(A)}^M b_2$ (since the additional sorts in $\mathcal{L}_{\mathcal{G}}$ are interpretable) and thus $b_1\equiv_{\mathcal{L}_{\mathcal{G},p,e}^D(A)}^{\mathbb{F}(M)} b_2$ (similarly for $\mathcal{L}_{\mathcal{G},p,e}^\lambda$) by Corollary\,\ref{cor:purely-geometric}. Hypothesis 2 follows from the fact that, for all $A$, $\ppowi{\mathbb{F}(M)}\subseteq C_A$ and $\ppowi{\mathbb{F}(M)}$ is dense in $\mathbb{F}(M)$, a field which is dense in $\Sort{VF}(M)$ by assumption.
\end{itemize}
\end{remark}

\subsection{Dense pairs \texorpdfstring{\((\mathrm{ACVF},\mathrm{SCVF})\)}{(ACVF,SCVF)}}
We will now focus on the case $T = \mathrm{ACVF}_{p,p}^{\mathcal{G}}$ and $\widetilde{T} = \mathrm{SCVH}_{p,e}^{\mathcal{G}}$ (there are similar corollaries in the case $\widetilde{T} = \mathrm{SCVF}_{p,e}^{\mathcal{G}}$). Note that the geometric language does not exactly fit in the setting of the previous section (since there are additional geometric sorts), but for the results that we give here the precise language in which we are working is completely immaterial.

\begin{corollary}\label{cor:st emb pair}
In models of $\mathrm{T}_{\mathrm{Ppv}}$, $\mathbf{\Gamma}$ is stably embedded and a pure divisible ordered Abelian group, and $\mathbf{k}$ is stably embedded and a pure algebraically closed field.
\end{corollary}
\begin{proof}
It follows from Theorem\,\ref{thm:EQ valued} that $\mathrm{RV}$ is stably embedded in models of $\mathrm{T}_{\mathrm{Ppv}}$ and that its structure is the one induced by $\mathrm{ACVF}$. Stable embeddedness and purity of $\mathbf{\Gamma}$ and $\mathbf{k}$ now follow from the equivalent result in $\mathrm{ACVF}$.
\end{proof}

\begin{corollary}\label{cor:complete th pair}
The theory $\mathrm{T}_{\mathrm{Ppv}}$ is complete.
\end{corollary}
\begin{proof}
The field $\mathbb{F}_p$ can be embedded, as a subset of $\mathbb{F}$, in every model of $\mathrm{T}_{\mathrm{Ppv}}$. Completeness then follows from Theorem\,\ref{thm:EQ valued}.
\end{proof}

\begin{lemma}\label{lem:max complete SCVF}
Let $K\models \mathrm{SCVF}$. The following are equivalent:
\begin{enumerate}[(i)]
\item $K$ does not admit any separable immediate valued field extension, i.e., $K$ is separably maximally complete;
\item $K^{\mathrm{alg}}$ does not admit any immediate valued field extension, i.e., $K^{\mathrm{alg}}$ is maximally complete.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us first prove that (i) implies (ii). Let $K^{\mathrm{alg}} \subseteq L$ be a proper immediate extension. We may assume that $L = K^{\mathrm{alg}}(t)$, where $t$ is transcendental over $K^{\mathrm{alg}}$. But then $K(t)/K$ is immediate (and separable, of course), contradicting (i).

Conversely, assume that $K$ has a proper immediate separable extension. As $K$ is separably closed, this extension cannot be algebraic and we may thus assume that it is of the form $K(t)$ with $t$ transcendental. As $\mathbf{\Gamma}(K)=\mathbf{\Gamma}(K(t))$ is divisible and $\mathbf{k}(K)=\mathbf{k}(K(t))$ is algebraically closed, the algebraic extension $K^{\mathrm{alg}}(t)/K(t)$ is immediate, and so $K^{\mathrm{alg}}(t)$ is a proper immediate extension of $K^{\mathrm{alg}}$.
\end{proof}

\begin{proposition}\label{prop:complete model pair}
The theory $\mathrm{T}_{\mathrm{Ppv}}$ admits a model $M$ such that $\mathbf{\Gamma}(M)=\mathbb{R}$ and $\Sort{VF}(M)$ is maximally complete.
\end{proposition}
\begin{proof}
Let $(x_{\alpha})_{\alpha\in2^{\aleph_{0}}}$ be a linear basis of $\mathbb{R}$ over $\mathbb{Q}$. Let $L_{0} := \mathbb{F}_{p}(t_{i})_{1\leq i\leq e}$ be trivially valued and let $L_{1} := L_{0}(x_{\alpha}^{p^{-\infty}}:\alpha\in2^{\aleph_{0}})$ be equipped with the unique valuation such that $\operatorname{val}(x_{\alpha}) = x_{\alpha}$. Then $L_{2} := \sep{L_{1}}$ is separably closed of characteristic $p$ and Ershov invariant $e$, and so is any separably maximally complete extension $L\supseteq L_1$. The pair $(L^{\mathrm{alg}},L)$ is dense and $\operatorname{val}((L^{\mathrm{alg}})^{\star}) = \mathbb{Q}\otimes\langle x_{\alpha}\rangle_{\alpha\in2^{\aleph_{0}}} = \mathbb{R}$. Then $L$ can be endowed with Hasse derivations so that $(L^{\mathrm{alg}},L)\models\mathrm{T}_{\mathrm{Ppv}}$. Moreover, by Lemma\,\ref{lem:max complete SCVF}, $L^{\mathrm{alg}}$ is maximally complete.
\end{proof}

\subsection{Imaginaries in \texorpdfstring{$\mathrm{SCVH}_{p,e}$}{SCVHpe}}
We begin with a review of some results from~\cite{RidVDF,RidSimNIP}. Consider an arbitrary complete theory $T$ and a fixed universal domain $\mathbb{M}$. As usual, for a definable set $\defn{X}$ over parameters, we denote by $\mathrm{acl}(\code{\defn{X}})$ the set of elements of $\mathbb{M}$ whose orbit under the (set-wise) stabilizer of $\defn{X}(\mathbb{M})$ in $\mathrm{Aut}(\mathbb{M})$ is finite. We will use the following criterion for elimination of imaginaries:

\begin{proposition}[{\cite[Proposition\,10.1]{RidVDF}}]\label{prop:EICrit}
Assume that every non-empty definable set $\defn{X}$ in a theory $T$ contains an $\mathrm{acl}(\code{\defn{X}})$-invariant global type.
Then $T$ admits weak elimination of imaginaries.
\end{proposition}
In fact, it suffices to consider sets $\defn{X}$ in (powers of) a dominant sort. While the above criterion holds with invariant types, we will actually show the existence of a \emph{definable} type in $\defn{X}$. This is shown in several steps. The following result from~\cite{RidVDF} produces an $\mathrm{ACVF}$-type consistent with a given definable set in a suitable expansion, definable in that expansion:

\begin{proposition}[{\cite[Proposition\,9.6]{RidVDF}}]\label{prop:dens def}
Let ${\widehat{T}}\supseteq\mathrm{ACVF}^{\mathcal{G}}$ be a theory in a countable language ${\widehat{\mathcal{L}}}$ such that:
\begin{enumerate}
\item ${\widehat{T}}$ eliminates imaginaries;
\item $\mathbf{k}$ and $\mathbf{\Gamma}$ are stably embedded; and
\item the induced theories on $\mathbf{k}$ and $\eq{\mathbf{\Gamma}}$ eliminate $\exists^\infty$.
\end{enumerate}
Let $\widehat{A}=\mathrm{acl}_{{\widehat{T}}}(\widehat{A})\subseteq\widehat{N}\models{\widehat{T}}$ and let $\defn{X}$ be a non-empty strict pro-$\widehat{A}$-definable set of $\Sort{VF}$ elements. Then there exists a global $\mathrm{ACVF}^\mathcal{G}$-type $p$ consistent with $\defn{X}$, which is $\widehat{A}$-definable (in ${\widehat{T}}$).
\end{proposition}

To replace definability in ${\widehat{T}}$ with definability in $\mathrm{ACVF}^\mathcal{G}$, we will use the following result. We recall that a subset $A$ of a structure $\mathbb{M}$ is \Def{uniformly stably embedded} if for every formula $\varphi(x,y)$ there is a formula $\psi(x,z)$ such that for every $m\in\mathbb{M}$ there is $a\in A$ such that $\varphi(A,m)=\psi(A,a)$.
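For instance (a standard illustration, recorded here under our reading of the definition and not used below), $\mathbb{R}$ is uniformly stably embedded in any elementary extension $R$ of the ordered field of real numbers: for $\varphi(x,y) := x<y$ and $m\in R$, Dedekind completeness of $\mathbb{R}$ implies that the trace $\varphi(\mathbb{R},m)=\{x\in\mathbb{R}\mid x<m\}$ is one of
\[
\{x\in\mathbb{R}\mid x<r\},\qquad \{x\in\mathbb{R}\mid x\leq r\},\qquad \emptyset,\qquad \mathbb{R},
\]
with $r\in\mathbb{R}$, and all four cases are instances, for suitable real parameters, of a single formula $\psi(x,\bar z)$ built as a Boolean combination of $x<z_1$, $x\leq z_2$ and conditions on $\bar z$ alone; arbitrary formulas $\varphi$ are handled similarly, using o-minimality.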
We have: \begin{proposition}[{\cite[Corollary\,1.7]{RidSimNIP}}]\label{prop:def enrich} Let $T$ be an $\mathrm{NIP}$ $\mathcal{L}$-theory that eliminates imaginaries. Let ${\widehat{T}}\supseteq T$ be a complete ${\widehat{\mathcal{L}}}$-theory that also eliminates imaginaries. Suppose that there exists $\widehat{M}\models{\widehat{T}}$ such that $\restr{\widehat{M}}{\mathcal{L}}$ is uniformly stably embedded in every elementary extension. Let $\widehat{N}\models{\widehat{T}}$, $\widehat{A}=\mathrm{dcl}_{\widehat{T}}(\widehat{A})\subseteq\widehat{N}$ and $p$ be a global \(\mathcal{L}\)-type. If $p$ is $\widehat{A}$-definable in \({\widehat{T}}\), then it is in fact $\restr{\widehat{A}}{\mathcal{L}}$-definable in \(T\). \end{proposition} We now go back to our context. We set \(T=\mathrm{ACVF}^\mathcal{G}_{p,p}\) and \({\widehat{T}}=\eq{(\mathrm{T}_{\mathrm{P},\mathrm{pv}})}\). Combining the last two results, we obtain: \begin{corollary}\label{cor:denseDef} Let \(\defn{X}\) be a non-empty strict pro-definable set of \(\Sort{VF}\) elements in \(\mathrm{T}_{\mathrm{P},\mathrm{pv}}\) (over parameters), and let \(A=\mathbfcal{G}(\mathrm{acl}_{\eq{(\mathrm{T}_{\mathrm{P},\mathrm{pv}})}}(\code{\defn{X}}))\). Then there is a global $\mathrm{ACVF}^\mathcal{G}_{p,p}$-type \(p\), \(A\)-definable in $\mathrm{ACVF}^\mathcal{G}_{p,p}$ and consistent with \(\defn{X}\). \end{corollary} \begin{proof} Let \(\widehat{A}\) be the algebraic closure of the code of \(\defn{X}\) in \({\widehat{T}}=\eq{(\mathrm{T}_{\mathrm{P},\mathrm{pv}})}\). Corollary\,\ref{cor:st emb pair} shows that the hypothesis of Proposition\,\ref{prop:dens def} holds, and therefore provides us with an \(\widehat{A}\)-definable (in \({\widehat{T}}\)) global \(T\)-type \(p\), consistent with \(\defn{X}\).
Now, \cite[Corollary\,A.7]{RidVDF} asserts that the model provided by Proposition\,\ref{prop:complete model pair} satisfies the condition in Proposition\,\ref{prop:def enrich}, and Corollary\,\ref{cor:complete th pair} shows that \(\mathrm{T}_{\mathrm{P},\mathrm{pv}}\) (hence \({\widehat{T}}\)) is complete, so Proposition\,\ref{prop:def enrich} applies to show that \(p\) is definable in \(T\) over \(\mathbfcal{G}(\widehat{A}) = A\). \end{proof} Recall that if \(a\) is an element of a model of \(\mathrm{SCVH}_{p,e}\), we denote by \(D_\omega(a)\) the infinite tuple obtained by applying the derivations to \(a\). If \(p(x)\) is a (partial) type in the field sort of \(\mathrm{SCVH}_{p,e}\), we let \(\nabla(p)\) be the pro-definable set in the language of \(\mathrm{ACVF}\) determined by the condition: \(D_\omega(a)\models\nabla(p)\) if and only if \(a\models{}p\), for all tuples \(a\) in a model of \(\mathrm{SCVH}_{p,e}\) (in other words, \(\nabla(p)\) is the prolongation of \(p\)). If \(\defn{X}\) is a definable set in \(\mathrm{SCVH}_{p,e}\) (in some $\Sort{VF}^n$), then its image \(D_\omega(\defn{X})\) is a strict pro-definable set over the same parameters, and for all types \(p\) of \(\mathrm{SCVH}_{p,e}\), \(p\) is consistent with \(\defn{X}\) if and only if \(\nabla(p)\) is consistent with \(D_\omega(\defn{X})\). Furthermore, any complete \(\mathrm{ACVF}_{p,p}\)-type consistent with \(D_\omega(\defn{X})\) is of the form \(\nabla(p)\) for a complete \(\mathrm{SCVH}_{p,e}\)-type in \(\defn{X}\), over the same parameters. Thus, \(\nabla\) provides a bijection between complete \(\mathrm{SCVH}_{p,e}\)-types consistent with \(\defn{X}\) and complete \(\mathrm{ACVF}_{p,p}\)-types consistent with \(D_\omega(\defn{X})\), over the same parameters. \begin{theorem}\label{thm:SCVH dense def} Let $\widetilde{N}\models\mathrm{SCVH}_{p,e}^{\mathcal{G}}$ and $\defn{X}\subseteq\Sort{VF}^n$ be $\widetilde{N}$-definable.
Let $A=\mathrm{acl}(\code{\defn{X}})$. Then there exists an $A$-definable type $p$ such that $p(x)\vdash x\in\defn{X}$. \end{theorem} \begin{proof} Let us assume that $\widetilde{N}$ is sufficiently saturated and let $N$ denote its algebraic closure. Then $N_P:=(N,\widetilde{N})\models\mathrm{T}_{\mathrm{P},\mathrm{pv}}$. Replacing \(\defn{X}\) with \(D_\omega(\defn{X})\) as above, it suffices to find an \(A\)-definable \(\mathrm{ACVF}_{p,p}\)-type \(p\). Let $B=\eq{\mathrm{acl}}_{\mathcal{L}_{\mathrm{P},\mathrm{pv}}}(\code{\defn{X}})$. According to Corollary~\ref{cor:denseDef}, \(\defn{X}\) contains an \(\mathrm{ACVF}_{p,p}^\mathcal{G}\)-type \(p\) definable over \(\mathbfcal{G}(B)\). To complete the proof, it remains to show that $\mathbfcal{G}(B)$ is contained in $\mathrm{dcl}_N(A)$. To establish this, by Lemma~\ref{L:SCVF-ACVF}(2) it is enough to show that $\mathbfcal{G}(B)\cap\widetilde{N}\subseteq A$. This latter inclusion follows from the fact that $\widetilde{N}$ is stably embedded in $N_P$ as a pure model of $\mathrm{SCVH}_{p,e}^{\mathcal{G}}$ (by Theorem~\ref{thm:EQ valued}). \end{proof} Let again \(T\) be an arbitrary theory. In case \(T\) eliminates imaginaries, the condition in Prop.~\ref{prop:EICrit} can be viewed as asserting the density of invariant types in the space of types over \(\mathrm{acl}(\code{X})\). Applying compactness, one obtains: \begin{proposition}[{\cite[Proposition\,10.2]{RidVDF}}]\label{prop:invTpDense} For a theory \(T=\eq{T}\) and a set of parameters \(A\), the following are equivalent: \begin{enumerate} \item Every \(A\)-definable set contains an \(A\)-invariant type. \item \emph{(the invariant extension property over \(A\))} Every type over \(A\) extends to a global \(A\)-invariant type.
\end{enumerate} \end{proposition} Combining everything, we obtain: \begin{corollary}\label{C:SCVH-EI} The theory $\mathrm{SCVH}_{p,e}^{\mathcal{G}}$ eliminates imaginaries and has the invariant extension property, i.e., every type over an algebraically closed set of parameters has a global invariant extension. \end{corollary} \begin{proof} Weak elimination of imaginaries follows from Theorem~\ref{thm:SCVH dense def} using Proposition~\ref{prop:EICrit}. Finite sets are coded by Lemma\,\ref{L:SCVF-ACVF}. Elimination of imaginaries follows. The invariant extension property follows from the same theorem, using Proposition~\ref{prop:invTpDense}. \end{proof} \begin{remark} When a theory \(T\) is \(\mathrm{NIP}\) and eliminates imaginaries, as is the case for $\mathrm{SCVH}_{p,e}^{\mathcal{G}}$, the invariant extension property has various model-theoretic consequences (cf.\ \cite[Proposition~2.11]{HruPil}): \begin{itemize} \item Lascar strong type, Kim--Pillay strong type and strong type coincide. That is, over an algebraically closed set \(A\), two points have the same type if and only if there exists a model containing \(A\) over which they have the same type. \item Every algebraically closed set is an extension base, and thus, by \cite{CheKap}, forking coincides with dividing in \(T\). \end{itemize} In fact, since non-forking types are Lascar invariant in \(\mathrm{NIP}\) theories, the invariant extension property is equivalent to the conjunction of the above two conditions. \end{remark} \section{Algebraic and definable closure}\label{S:dcl-acl} In this section, we wish to describe the algebraic and definable closure in $\mathrm{SCVF}_{p,e}$ and $\mathrm{SCVH}_{p,e}$. Our main result is that they are no larger than what could be expected: they are the (relative) algebraic and definable closure in $\mathrm{ACVF}$ of the structure generated by the parameters.
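As a point of reference, let us recall the shape of the $\lambda$-functions in the simplest case (a sketch; the precise indexing conventions are those fixed earlier in the text). If $K$ is separably closed of characteristic $p$ and imperfection degree $e=1$, with $p$-basis $\{t\}$, then $[K:K^p]=p$ and every $x\in K$ admits a unique decomposition \[x \;=\; \sum_{i=0}^{p-1}\lambda_i(x)^p\, t^i,\] which defines the functions $\lambda_i\colon K\rightarrow K$. For general $e$, one decomposes along the $p^e$ monomials $t_1^{i_1}\cdots t_e^{i_e}$ with $0\leq i_j<p$. Closing a set of field elements under these functions is what the operator $\langle\,\cdot\,\rangle_\lambda$ below achieves.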
We will denote by $\langle A\rangle_\lambda$ (respectively $\langle A\rangle_D$) the $\mathcal{L}_{\mathcal{G},p,e}^{\lambda}$-structure (respectively $\mathcal{L}_{\mathcal{G},p,e}^{D}$-structure) generated by $A$. \begin{lemma}\label{lem:acl SCVF field} Let $M\models\mathrm{SCVF}_{p,e}$ and $A\substr \Sort{VF}(M)$ (in $\mathcal{L}_{\operatorname{div},p,e}^{\lambda}$). Then \[\Sort{VF}(\mathrm{acl}(A)) \subseteq \alg{A}.\] \end{lemma} \begin{proof} Let us first assume that $\operatorname{val}(A)\neq 0$. Since $A$ is closed under $\lambda$-functions, the extension $A\subseteq \Sort{VF}(M)$ is separable and, as $\Sort{VF}(M)$ is separably closed, $\sep{A}\subseteq M$. As $A$ contains a $p$-basis of $\Sort{VF}(M)$, $\sep{A}$ and $\Sort{VF}(M)$ have the same imperfection degree and hence $\sep{A}\models\mathrm{SCVF}_{p,e}$. By model completeness, $\Sort{VF}(\mathrm{acl}(A))\subseteq\sep{A}\subseteq\alg{A}$. Now, if $\operatorname{val}(A) = 0$, assume that $M$ is sufficiently saturated and let $c\in\ppowi{\Sort{VF}}(M)$ be transcendental over $\Sort{VF}(\mathrm{acl}(A))$ and have positive valuation. By the previous paragraph, $\Sort{VF}(\mathrm{acl}(A)) \subseteq \alg{\Sort{VF}(\langle A c\rangle_\lambda)} = \alg{A(c)}$. Let $a\in\Sort{VF}(\mathrm{acl}(A)) \subseteq \alg{A(c)}$. By construction, $c\not\in\alg{A(a)}$ and hence $a\in\alg{A}$. \end{proof} A similar argument allows us to reduce the study of $\mathrm{acl}_{\mathrm{SCVH}_{p,e}}$ on the field sort to the above result. \begin{lemma}\label{lem:acl SCVH field} Let $M\models\mathrm{SCVH}_{p,e}$ and $A\substr \Sort{VF}(M)$ (in $\mathcal{L}_{\operatorname{div},p,e}^{D}$).
Then \[\Sort{VF}(\mathrm{acl}(A)) \subseteq \alg{A}.\] \end{lemma} \begin{proof} Let $b$ be a canonical $p$-basis of $\Sort{VF}(M)$ with $\mathrm{trdeg}(b/\Sort{VF}(\mathrm{acl}(A))) = e$ (for example, a very canonical $p$-basis over $\Sort{VF}(\mathrm{acl}(A))$). As the $D_{i,n}(x)$ can be expressed as polynomials in $\lambda(x)$ and $b$, it follows that $\Sort{VF}(\mathrm{acl}(Ab)) \subseteq \Sort{VF}(\mathrm{acl}_{\mathrm{SCVF}_{p,e}}(Ab)) \subseteq \alg{\langle Ab\rangle_\lambda}$. The last inclusion is proved in Lemma\,\ref{lem:acl SCVF field}. Moreover, by \cite[Lemma\,4.3]{ZieSCH}, $\lambda(x)^p$ can be expressed as a polynomial in $D(x)$ and $b$, and hence $\langle Ab\rangle_\lambda \subseteq \alg{\langle A b\rangle_D} = \alg{A(b)}$. Let $a\in\Sort{VF}(\mathrm{acl}(A)) \subseteq \alg{A(b)}$. By construction, the tuple $b$ is algebraically independent from $a$ over $A$ and hence $a\in\alg{A}$. \end{proof} \begin{lemma}\label{lem:acl field} Let $T = \mathrm{SCVF}_{p,e}^\mathcal{G}$ or $T = \mathrm{SCVH}_{p,e}^\mathcal{G}$, $M\models T$ and $A\substr M$. Then \[\Sort{VF}(\mathrm{acl}(A)) = \Sort{VF}(\mathrm{acl}(\Sort{VF}(A))) = \alg{\Sort{VF}(A)}\cap M.\] \end{lemma} \begin{proof} The second equality follows from Lemmas\,\ref{lem:acl SCVF field} and \ref{lem:acl SCVH field}. To prove the first equality, it suffices to show that all definable functions from some $\mathbf{S}_n$ or $\mathbf{T}_n$ into $\Sort{VF}$ have finite image. It is enough to prove this for $T=\mathrm{SCVF}^{\mathcal{G}}_{p,e}$. Consider $M\models\mathrm{SCVF}^{\mathcal{G}}_{p,e}$ of cardinality continuum that contains a dense countable subfield (for example $\sep{\mathbb{F}_p(t_i\mid 0 < i < e)((t_0))}$).
In such a model, the sorts $\mathbf{S}_n$ and $\mathbf{T}_n$ are countable, but any definable subset $\defn{X}$ of (some Cartesian power of) $\Sort{VF}(M)$ is either finite or has cardinality continuum. To prove this last statement, taking a $\lambda$-resolution we may assume that $\defn{X}$ is quantifier-free $\mathcal{L}_{\operatorname{div}}(M)$-definable. If it is not finite then, by Corollary\,\ref{cor:red to open in affine space}, there is a semialgebraic subset $\defn{X}'\subseteq\defn{X}$ which is in $K$-definable bijection with $\mathbfcal{O}^d(K)$ for some $d>0$. It follows that $\defn{X}'$, and thus $\defn{X}$, has cardinality continuum. As functions with countable domain cannot have an image of cardinality continuum, it follows that any definable function from some $\mathbf{S}_n$ or $\mathbf{T}_n$ into $\Sort{VF}$ has finite image and hence $\Sort{VF}(\mathrm{acl}(A)) \subseteq \mathrm{acl}(\Sort{VF}(A))$. \end{proof} \begin{proposition}\label{prop:descr acl} Let $T = \mathrm{SCVF}_{p,e}^\mathcal{G}$ or $T = \mathrm{SCVH}_{p,e}^\mathcal{G}$, $M\models T$ and $A\substr M$. Then \[\mathrm{acl}_T(A) = \mathrm{acl}_{\mathrm{ACVF}^{\mathcal{G}}_{p,p}}(A)\cap M.\] \end{proposition} \begin{proof} Pick $a\in \mathrm{acl}_T(A)$. If $a\in\Sort{VF}$, by Lemma\,\ref{lem:acl field}, $a\in\alg{\Sort{VF}(A)} \subseteq \mathrm{acl}_{\mathrm{ACVF}_{p,p}^{\mathcal{G}}}(A)$. Let us now assume that $a\in \mathbf{S}_n$ (the same proof also works if $a\in \mathbf{T}_n$). By quantifier elimination in the geometric language, there is a quantifier-free $\mathcal{L}_{\mathcal{G}}(A)$-formula $\varphi(x)$ such that $\varphi(M)$ is finite and $M\models\varphi(a)$. As $\mathbf{S}_n(\alg{M}) = \mathbf{S}_n(M)$ and $\varphi(x)$ is quantifier-free, we have $a\in\varphi(M)=\varphi(\alg{M})$, with $\varphi(\alg{M})$ finite. It follows that $a\in\mathrm{acl}_{\mathrm{ACVF}_{p,p}^{\mathcal{G}}}(A)$.
\end{proof} \begin{proposition}\label{prop:descr dcl} Let $T = \mathrm{SCVF}_{p,e}^\mathcal{G}$ or $T = \mathrm{SCVH}_{p,e}^\mathcal{G}$, $M\models T$ and $A\substr M$. Then \[\mathrm{dcl}_T(A) = \mathrm{dcl}_{\mathrm{ACVF}_{p,p}^{\mathcal{G}}}(A)\cap M.\] \end{proposition} \begin{proof} Pick $a\in \mathrm{dcl}_T(A)$. If $a\in \mathbf{S}_n$ or $\mathbf{T}_n$, then, as above, there is a quantifier-free $\mathcal{L}_{\mathcal{G}}(A)$-formula $\varphi(x)$ such that $\varphi(M) = \{a\} = \varphi(\alg{M})$, and thus $a\in\mathrm{dcl}_{\mathrm{ACVF}_{p,p}^{\mathcal{G}}}(A)$. If $a\in\Sort{VF}$, by Lemma\,\ref{lem:acl field}, $a\in\alg{\Sort{VF}(A)}\cap M=:F$. Let $\sigma\in\mathrm{Aut}_{\mathrm{ACVF}_{p,p}^{\mathcal{G}}}(\alg{M}/A)$. The Hasse derivations (and hence the $\lambda$-functions) extend uniquely from $\Sort{VF}(A)$ to $F$. It follows that $\restr{\sigma}{A\cup F}$ respects all the new structure on $\Sort{VF}$ in $T$, and therefore $\sigma(a)$ is a $T$-conjugate of $a$ over $A$. In particular, $\sigma(a) = a$ and $a\in\mathrm{dcl}_{\mathrm{ACVF}^{\mathcal{G}}_{p,p}}(A)$. \end{proof} \section{Metastability}\label{S:metastability} In this section, we will show that $\mathrm{SCVF}_{p,e}$ is metastable (as defined by Haskell, Hrushovski and Macpherson in \cite{HaHrMa08}). But first, let us give some definitions. An $\mathcal{L}(A)$-definable set $\defn{X}$ is said to be \emph{stably embedded} if every $\mathcal{L}(M)$-definable set $\defn{Y}\subseteq \defn{X}^n$ is $\mathcal{L}(A\cup \defn{X}(M))$-definable. The set $\defn{X}$ is said to be \emph{stable} if it is stably embedded and the structure on $\defn{X}$ induced by $\mathcal{L}(A)$ is stable. For a thorough discussion of (stably embedded) stable sets, we refer the reader to the appendix of \cite{ChHr99}.
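A classical example to keep in mind (in $\mathrm{ACVF}$, cf.\ \cite{HaHrMa06}): the residue field $\mathbf{k}$ is stably embedded and its induced structure is that of a pure algebraically closed field, so $\mathbf{k}$ is a stable stably embedded set; the value group ${\mathbf\Gamma}$, on the other hand, is stably embedded but unstable, since it carries a definable total order.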
We denote by $\mathrm{St}_A$ the structure whose sorts are the stable (stably embedded) sets which are $\mathcal{L}(A)$-definable, equipped with their $\mathcal{L}(A)$-induced structure. We will denote by $\mathop{\mathpalette\Ind{}}_{A}$ forking independence in $\mathrm{St}_A$. \begin{lemma}\label{lem:VF-unstable} Let $T=\mathrm{SCVH}_{p,e}$ or $T=\mathrm{SCVF}_{p,e}$, and let $\defn{X}$ be an infinite definable subset of $\Sort{VF}^n$ for some $n\in\mathbb{N}$. Then there is a definable function $f:\defn{X}\rightarrow{\mathbf\Gamma}$ with infinite image. In particular, $\defn{X}$ is unstable. \end{lemma} \begin{proof} We may work over parameters, and it is thus enough to prove the result for $T=\mathrm{SCVF}_{p,e}$. Assume $\defn{X}$ is defined over $K\models\mathrm{SCVF}_{p,e}$. Using $\lambda$-resolutions, we may assume that $\defn{X}$ is a semialgebraic subset of $K^n$, i.e., defined by a quantifier-free $\mathcal{L}_{\operatorname{div}}(K)$-formula. By Corollary \ref{cor:red to open in affine space}, there is a semialgebraic subset $\defn{X}'\subseteq\defn{X}$ which is in $K$-definable bijection with $\mathcal{O}_K^d$, where $d>0$ is the dimension of the Zariski closure of $\defn{X}$. The result follows by considering the function $f:\mathcal{O}_K^d\rightarrow{\mathbf\Gamma}$ sending $x$ to $\operatorname{val}(x_1)$. \end{proof} \begin{proposition}\label{P:StA} Let $T=\mathrm{SCVH}^{\mathcal{G}}_{p,e}$ or $T=\mathrm{SCVF}^{\mathcal{G}}_{p,e}$, and let $A\substr M\models T$. Suppose that $\defn{X}$ is an $A$-definable set. Then the following are equivalent: \begin{enumerate} \item $\defn{X}$ is stable stably embedded. \item $\defn{X}$, expanded by relations for the $A$-definable subsets of $\defn{X}^n$ for all $n$, has finite Morley rank.
\item $\defn{X}\perp{\mathbf\Gamma}$, i.e., any definable subset of $\defn{X}^n\times{\mathbf\Gamma}^m$ is a finite union of rectangles. \item There is no definable function $f:\defn{X}\rightarrow{\mathbf\Gamma}$ with infinite image. \item $\defn{X}$ is $\mathbf{k}$-internal. \item $\defn{X}$ is $\mathbf{k}$-analyzable. \item Possibly after a permutation of coordinates, $\defn{X}$ is contained in a finite union of sets of the form $s_1/\boldsymbol{\mathfrak{m}} s_1\times\cdots\times s_m/\boldsymbol{\mathfrak{m}} s_m\times F$, where the $s_i$ are $\mathrm{acl}_T(A)$-definable lattices and $F$ is an $A$-definable finite set of tuples from $\mathbfcal{G}(M)$. \end{enumerate} \end{proposition} \begin{proof} The same characterization of stable stably embedded definable sets holds in $\mathrm{ACVF}$ by \cite[Lemma 2.6.2]{HaHrMa06}. Note that (3) and (4) are equivalent, since ${\mathbf\Gamma}$ is stably embedded in $T$ and eliminates imaginaries. The characterization thus holds in $T$ for any $A$-definable set $\defn{X}$ which lives in $\mathcal{G}^{\mathrm{im}}$, by Corollary \ref{cor:purely-geometric}. Now let $\defn{X}$ be a definable subset of $\Sort{VF}^n\times G$, where $G$ is a finite product of sorts from $\mathcal{G}^{\mathrm{im}}$. If the projection of $\defn{X}$ to $\Sort{VF}^n$ is infinite, the negation of (4) holds by Lemma \ref{lem:VF-unstable}, and the negation of all other statements follows easily from this. We may thus assume that the projection $F$ of $\defn{X}$ to $\Sort{VF}^n$ is finite, and so we are reduced to definable subsets of $\mathcal{G}^{\mathrm{im}}$. \end{proof} \begin{definition}[Stable domination] Let $M$ be some $\mathcal{L}$-structure, $C\subseteq M$, $f$ a pro-definable map to $\mathrm{St}_C$ (defined with parameters in $C$) and $a\in M$.
Set $p = \mathrm{tp}(a/C)$. We say that $p$ is \emph{stably dominated via $f$} if for every $a\models p$ and $B\subseteq M$ such that $\mathrm{St}_C(\mathrm{dcl}(CB))\mathop{\mathpalette\Ind{}}_{C} f(a)$, we have \[\mathrm{tp}(B/Cf(a))\vdash\mathrm{tp}(B/Ca).\] We say that $p$ is \emph{stably dominated} if it is stably dominated via some map $f$. \end{definition} Note that if $p$ is stably dominated, then it is stably dominated via any map enumerating $\mathrm{St}_C(\mathrm{dcl}(Ca))$ for any $a\models p$. \begin{remark}\label{rem:suff large} In the definition of stable domination, it suffices to show that for any $B$ there exists $B'$ such that $B\subseteq\mathrm{dcl}(B')$ and, if $\mathrm{St}_C(\mathrm{dcl}(CB'))\mathop{\mathpalette\Ind{}}_{C} f(a)$, then $\mathrm{tp}(B'/Cf(a))\vdash\mathrm{tp}(B'/Ca)$. Indeed, as $\mathrm{St}_C$ is stable, if $\mathrm{St}_C(\mathrm{dcl}(CB))\mathop{\mathpalette\Ind{}}_{C} f(a)$, we may also assume (taking a conjugate of $B'$ over $B$) that $\mathrm{St}_C(\mathrm{dcl}(CB'))\mathop{\mathpalette\Ind{}}_{C} f(a)$; and if $\mathrm{tp}(B'/Cf(a))\vdash\mathrm{tp}(B'/Ca)$, then $\mathrm{tp}(B/Cf(a))\vdash\mathrm{tp}(B/Ca)$. \end{remark} \begin{definition}[Metastability] Let $T$ be a theory and ${\mathbf\Gamma}$ an $\emptyset$-definable stably embedded set. We say that $T$ is \emph{metastable over ${\mathbf\Gamma}$} if: \begin{enumerate} \item The theory $T$ has the invariant extension property (as in Corollary \ref{C:SCVH-EI}). \item For $M\models T$ sufficiently saturated and for every small subset $A\subseteq M$, there exists a small subset $C\subseteq M$ containing $A$ such that for all tuples $a\in M$, $\mathrm{tp}(a/C{\mathbf\Gamma}(\mathrm{dcl}(Ca)))$ is stably dominated. Such a \(C\) is called a \emph{metastability basis}. \end{enumerate} \end{definition} Let $T$ be a theory and $U\models T$ a monster model of $T$.
Let $p(x),q(y)\in S(U)$ be definable types. Then one may define the tensor product $(p\otimes q)(x,y)\in S(U)$ as follows. Let $C\subseteq U$ be small and such that $p$ and $q$ are $C$-definable. Then $p\otimes q$ is the unique $C$-definable type in $S(U)$ satisfying \[(a,b)\models(p\otimes q)|B\text{ if and only if } a\models p|Bb\text{ and }b\models q|B\] for all small \(B\subseteq U\) containing \(C\) and all tuples \((a,b)\) from \(U\). We refer to \cite{Sim15} for the basic properties of definable types, of \(\otimes\), and of generically stable types, which we define now. \begin{definition} Let \(T\) be \(\mathrm{NIP}\) and \(U\models T\) a monster model. An invariant type \(p(x)\in S(U)\) is called \emph{generically stable} if \(p(x)\otimes p(y)=p(y)\otimes p(x)\). \end{definition} Let us also define orthogonality. \begin{definition} Let \(Q\) be an \(\emptyset\)-definable stably embedded set. A type \(p\in S(C)\) is said to be \emph{almost orthogonal} to \(Q\) if \(\mathrm{dcl}(Ca)\cap \eq{Q}=\mathrm{dcl}(C)\cap\eq{Q}\) for any \(a\models p\). An invariant type \(p\in S(U)\) is \emph{orthogonal} to \(Q\), denoted by \(p\perp Q\), if \(p|B\) is almost orthogonal to \(Q\) for every set \(B\subseteq U\) over which \(p\) is invariant. \end{definition} As a consequence of metastability (and \(\mathrm{NIP}\)), we get an alternative characterization of stably dominated types in case \({\mathbf\Gamma}\) is totally ordered. \begin{proposition}\label{P:char-st-dom} Let \(T\) be an \(\mathrm{NIP}\) theory which is metastable over the stably embedded \(\emptyset\)-definable set \({\mathbf\Gamma}\). Assume that \({\mathbf\Gamma}\) admits a definable total ordering and eliminates imaginaries.
A global invariant type $p\in S(U)$ is stably dominated if and only if $p$ is generically stable, if and only if $p\perp{\mathbf\Gamma}$. \end{proposition} \begin{proof} In \cite[Proposition 2.9.1.a]{HrLo16}, Hrushovski and Loeser show that the above equivalences hold in \(\mathrm{ACVF}\). The proof given there generalizes easily to this more abstract setting. \end{proof} In \cite{HaHrMa08}, Haskell, Hrushovski and Macpherson showed that maximally complete fields are metastability bases in $\mathrm{ACVF}$. In $\mathrm{SCVF}_{p,e}$ and $\mathrm{SCVH}_{p,e}$, we will prove that the situation is quite similar: separably maximally complete fields are metastability bases. But first, let us characterize stably dominated types in $\mathrm{SCVH}_{p,e}$. As $\mathrm{SCVF}_{p,e}$ is an enrichment of $\mathrm{SCVH}_{p,e}$ by constants, the same results will follow for $\mathrm{SCVF}_{p,e}$. We extend \(D_\omega\) to all of \(\mathcal{G}\) by setting \(D_\omega(a) = a\) for non-field points. \begin{proposition}\label{P:char st dom} Let $M\models\mathrm{SCVH}_{p,e}^\mathcal{G}$, $C \subseteq M$ be closed under $D$, $a\in M$ a tuple and $f$ a pro-definable function. The following are equivalent: \begin{enumerate}[(i)] \item The type $\mathrm{tp}_{M}(a/C)$ is stably dominated via $f$ (in $M$). \item There exists, in $\alg{M}$, a pro-definable function $g$ such that $f = g \circ D_\omega$ and the type $\mathrm{tp}_{\alg{M}}(D_\omega(a)/C)$ is stably dominated via $g$ (in $\alg{M}$). \end{enumerate} \end{proposition} \begin{proof} First, note that the existence of $g$ follows immediately from Proposition\,\ref{prop:descr dcl}. By Remark\,\ref{rem:suff large}, to prove stable domination of $\mathrm{tp}_{M}(a/C)$ and $\mathrm{tp}_{\alg{M}}(D_\omega(a)/C)$, it suffices to consider $B = \Sort{VF}(\mathrm{dcl}_{M}(CB))\subseteq \Sort{VF}(M)$.
Moreover, for such a $B$, we have that $\mathrm{St}_C^{M}(B)\mathop{\mathpalette\Ind{}}^{M}_{C} g(D_\omega(a))$ if and only if $\mathrm{St}_C^{\alg{M}}(\mathrm{dcl}_{\alg{M}}(B))\mathop{\mathpalette\Ind{}}^{\alg{M}}_{C} g(D_\omega(a))$. Indeed, by Proposition\,\ref{P:StA}, $\mathrm{St}_C^{M}$ and $\mathrm{St}_C^{\alg{M}}$ are essentially the same structure, up to the fact that $\mathrm{St}_C^{\alg{M}}$ has some more finite sorts that are irrelevant to forking. Also, by quantifier elimination, $\mathrm{tp}_{M}(B/Ca)$ is equivalent to $\mathrm{tp}_{\alg{M}}(B/CD_\omega(a))$ and $\mathrm{tp}_{M}(B/Cf(a))$ is equivalent to $\mathrm{tp}_{\alg{M}}(B/Cg(D_\omega(a)))$ (note that we are implicitly using the fact that $Cg(D_\omega(a))$ is closed under $D$, as the image of $g$ is in $\mathrm{St}_C$). The equivalence of (i) and (ii) is an immediate consequence. \end{proof} \begin{proposition}\label{P:SCVF-met-basis} Let $M\models\mathrm{SCVH}_{p,e}$, $C \substr \Sort{VF}(M)$ be separably maximally complete and $a\in M$ be a tuple. Then $\mathrm{tp}(a/C{\mathbf\Gamma}(\mathrm{dcl}(Ca)))$ is stably dominated. \end{proposition} \begin{proof} Let \begin{gather*} E = C{\mathbf\Gamma}(\mathrm{dcl}_{\alg{M}}(CD_\omega(a))) = C{\mathbf\Gamma}(\mathrm{dcl}_{M}(Ca)),\\ \alg{E} = E \cup\perf{C},\\ g(D_\omega(a)) = \mathrm{St}_E^{M}(\mathrm{dcl}_{M}(ED_\omega(a))),\\ \alg{g}(D_\omega(a))=\mathrm{St}_{\alg{E}}^{\alg{M}}(\mathrm{dcl}_{\alg{M}}(\alg{E}D_\omega(a)))= g(D_\omega(a))\cup\perf{C}. \end{gather*} By Lemma\,\ref{lem:max complete SCVF}, $\perf{C}$ is maximally complete.
Thus, $\mathrm{tp}_{\alg{M}}(D_\omega(a)/\alg{E})$ is stably dominated via $\alg{g}$ by \cite[Theorem\,12.18.(ii)]{HaHrMa08}, and hence $\mathrm{tp}_{\alg{M}}(D_\omega(a)/E)$ is stably dominated via $g$. By Proposition\,\ref{P:char st dom}, $\mathrm{tp}_{M}(a/E) = \mathrm{tp}(a/C{\mathbf\Gamma}(\mathrm{dcl}(Ca)))$ is also stably dominated. \end{proof} \begin{corollary}\label{C:SCVH-metastable} The theories $\mathrm{SCVH}_{p,e}$ and $\mathrm{SCVF}_{p,e}$ are metastable over ${\mathbf\Gamma}$. \end{corollary} \section{The stable completion of a definable set in \texorpdfstring{$\mathrm{SCVF}$}{SCVF}} The goal of this section is to generalize a result of Hrushovski and Loeser \cite{HrLo16} on the strict pro-definability of the space of stably dominated types. We show that their proof holds in a context general enough to also encompass separably closed valued fields of finite imperfection degree and Scanlon's theory of contractive valued differential fields (see \cite{Sca00}). For a proof of the following result, see, e.g., \cite[Remark 2.32]{Sim15}. \begin{fact}\label{F:Uniform-def} Let $T$ be \(\mathrm{NIP}\) and $U\models T$. Then generically stable types are uniformly definable in $T$: for any formula $\varphi(x;y)$ there is a formula $\theta(y;z)$ such that for every generically stable type $p(x)\in S(U)$ there is $b\in U$ such that $\mathrm{d}_p x\,\varphi(x;y)=\theta(y,b)$. \end{fact} Hrushovski and Loeser \cite{HrLo16} use this fact, together with Proposition \ref{P:char-st-dom}, to encode the set $\widehat{\defn{X}}$ of global stably dominated types concentrating on some definable set $\defn{X}$ in \(\mathrm{ACVF}\) as a pro-definable set.
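For a concrete example of the objects being encoded (classical, cf.\ \cite{HrLo16}): in $\mathrm{ACVF}$, the generic type $p_{\mathcal{O}}$ of the valuation ring, i.e.\ the global type axiomatized by \[\operatorname{val}\Big(\sum_i c_i x^i\Big)\;=\;\min_i \operatorname{val}(c_i)\quad\text{for every polynomial }\sum_i c_i x^i,\] is orthogonal to ${\mathbf\Gamma}$ and stably dominated (via the residue map $x\mapsto\operatorname{res}(x)$), and thus yields an $\emptyset$-definable point of $\widehat{\mathbb{A}^1}$.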
\begin{notation*} Given an $\emptyset$-definable stably embedded set $\Sort{Q}$, a $C$-definable set $\defn{X}$ and a set $A$ containing $C$, $S_{\mathrm{def},\defn{X}}(A)$ denotes the set of global $A$-definable types $p(x)$ such that $p(x)\vdash x\in \defn{X}$, and $S_{\mathrm{def},\defn{X}}^{\Sort{Q}}(A):=\{p\in S_{\mathrm{def},\defn{X}}(A)\mid p\perp \Sort{Q}\}$. Finally, $\widehat{\defn{X}}(A):=\{p\in S_{\mathrm{def},\defn{X}}(A)\mid p\text{ is stably dominated}\}$. \end{notation*} \begin{fact}[{\cite[Lemma 2.5.1]{HrLo16}}]\label{F:HL-prodef} Assume $T$ eliminates imaginaries. Let $\Sort{Q}$ be an $\emptyset$-definable stably embedded set. Assume that the definable types orthogonal to $\Sort{Q}$ are uniformly definable. Then for any $C$-definable set $\defn{X}$, $S_{\mathrm{def},\defn{X}}^{\Sort{Q}}$ is $C$-pro-definable: there is a $C$-pro-definable set $\Sort{Z}$ such that for any $A\supseteq C$ there is a canonical bijection $\Sort{Z}(A) \simeq S_{\mathrm{def},\defn{X}}^{\Sort{Q}}(A)$. Moreover, if $f:\defn{X}\rightarrow \Sort{Y}$ is a definable function, then, identifying $S_{\mathrm{def},\defn{X}}^{\Sort{Q}}$ and $S_{\mathrm{def},\Sort{Y}}^{\Sort{Q}}$ with the corresponding pro-definable sets, the map $f_{\mathrm{def},\defn{X}}:S_{\mathrm{def},\defn{X}}^{\Sort{Q}}\rightarrow S_{\mathrm{def},\Sort{Y}}^{\Sort{Q}}$, $p\mapsto f_\star(p)$, is pro-definable. \end{fact} We briefly sketch the argument. For notational simplicity, we will assume $C=\emptyset$. Let $f:\defn{X}\rightarrow\eq{\Sort{Q}}$ be definable (with parameters) and let $p\in S_{\mathrm{def},\defn{X}}^{\Sort{Q}}(U)$. As $p\perp \Sort{Q}$, $f_\star(p)$ is a realized type, i.e., there is $\gamma\in \eq{\Sort{Q}}$ such that $\mathrm{d}_{p} x\,(f(x)=\gamma)$. We will denote this by $p_\star(f)=\gamma$.
Now let $f:\defn{X}\times \Sort{W}\rightarrow \eq{\Sort{Q}}$ be $\emptyset$-definable, and set $f_w:=f(-,w)$. It follows from the assumptions that there are a set $\Sort{S}$ and a function $g:\Sort{S}\times \Sort{W}\rightarrow \eq{\Sort{Q}}$, both $\emptyset$-definable, such that for every $p\in S_{\mathrm{def},\defn{X}}^\Sort{Q}(U)$, the function \[p_\star(f):\Sort{W}\rightarrow\eq{\Sort{Q}}, \quad w\mapsto p_\star(f_w)\] is equal to $g_s=g(s,-)$ for a unique $s\in \Sort{S}$. Now choose an enumeration $f_i:\defn{X}\times \Sort{W}_i\rightarrow\eq{\Sort{Q}}$ ($i\in I$) of the functions as above (with corresponding $g_i:\Sort{S}_i\times \Sort{W}_i\rightarrow\eq{\Sort{Q}}$). Then \[p\mapsto c(p):=(s_i)_{i\in I},\quad\text{where } p_\star(f_i)=g_{i,s_i} \text{ for all } i,\] defines an injection of $S_{\mathrm{def},\defn{X}}^{\Sort{Q}}$ into $\prod_{i\in I}\Sort{S}_i$, and one may show that the image $\Sort{Y}_i$ of $c(S_{\mathrm{def},\defn{X}}^{\Sort{Q}})$ under the projection map to $\Sort{S}_i$ is $\infty$-definable. Since the set of $\Sort{S}_i$'s is closed under taking finite products (this may be seen using products of the corresponding $f_i$'s), pro-definability of $c(S_{\mathrm{def},\defn{X}}^{\Sort{Q}})$ follows. \begin{corollary}\label{C:def-types-pro-def} Let $T$ be a theory which eliminates imaginaries, and let $\defn{X}$ be a $C$-definable set. \begin{enumerate} \item Assume $T$ is stable. Then $S_{\mathrm{def},\defn{X}}$ is canonically a $C$-pro-definable set. \item Assume $T$ is \(\mathrm{NIP}\) and metastable over the stably embedded $\emptyset$-definable set ${\mathbf\Gamma}$. Assume that ${\mathbf\Gamma}$ admits a definable total ordering and eliminates imaginaries. Then $\widehat{\defn{X}}$ is canonically $C$-pro-definable.
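To make the sketch concrete (an illustration in $\mathrm{ACVF}$, with $\Sort{Q}={\mathbf\Gamma}$ and $\defn{X}=\mathcal{O}$, not taken from the text above): take $f(x,w)=\operatorname{val}(x-w)$ with $w$ ranging over $\Sort{W}=\Sort{VF}$. If $p$ is the generic type of a closed sub-ball of $\mathcal{O}$ of valuative radius $\gamma\geq 0$ around $a\in\mathcal{O}$, then \[p_\star(f)(w)\;=\;\min(\operatorname{val}(a-w),\gamma),\] so the functions $p_\star(f)$ arising from such types form the uniformly definable family $g((a,\gamma),w)=\min(\operatorname{val}(a-w),\gamma)$ with parameter set $\Sort{S}=\mathcal{O}\times{\mathbf\Gamma}_{\geq0}$; the Gauss point, i.e.\ the generic type of $\mathcal{O}$ itself, corresponds to the parameter $(0,0)$.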
Moreover, if $f:\defn{X}\rightarrow \Sort{Y}$ is a definable function, then the map $\widehat{f}:\widehat{\defn{X}}\rightarrow\widehat{\Sort{Y}}$, $p\mapsto f_\star(p)$ is pro-definable, once $\widehat{\defn{X}}$ and $\widehat{\Sort{Y}}$ are identified with the corresponding pro-definable sets. \end{enumerate} \end{corollary} \begin{proof} Both parts follow from Fact \ref{F:HL-prodef}. For (1), note that if $\Sort{Q}$ is a 2-element set, then $S_{\mathrm{def},\defn{X}}^{\Sort{Q}}=S_{\mathrm{def},\defn{X}}$. Since in a stable theory all definable types are generically stable, uniform definability of types follows from Fact \ref{F:Uniform-def}. In (2), one has $\widehat{\defn{X}}(A)=S_{\mathrm{def},\defn{X}}^{{\mathbf \Gamma}}(A)$ for all $A\supseteq C$, by Proposition\,\ref{P:char-st-dom}. As in the proof of (1), uniform definability holds by Fact \ref{F:Uniform-def}. \end{proof} Recall that a theory $T$ has the \emph{finite cover property} if there is a formula $\varphi(x,y)$ (where $x$ and $y$ may be tuples of variables) such that for every $n\in\mathbb{N}$ there are $a_1,\ldots,a_n\in U\models T$ such that $\models\neg\exists x\bigwedge_{i\leq n}\varphi(x,a_i)$ and $\models\exists x\bigwedge_{i\leq n,i\neq k}\varphi(x,a_i)$ for every $k\leq n$. The theory $T$ is \emph{nfcp} if it does not have the finite cover property. By a result of Shelah \cite[II Theorems\,4.2, 4.4]{ShClass}, $T$ is nfcp if and only if $T$ is stable and $\eq{T}$ eliminates $\exists^\infty x$. The following characterization is due to Poizat. \begin{fact}[{\cite[Th\'eor\`eme\,5]{PoiBellesPaires}}]\label{F:Poizat-nfcp} Let $T$ be stable.
Then $T$ is nfcp if and only if for every pair of formulas $\varphi(x;y)$ and $\theta(y;z)$ the set \[\Sort{D}_{\varphi,\theta}=\{c\in U\, | \, \theta(y,c)\text{ is the $\varphi$-definition of a (complete) global type}\}\] is definable. \end{fact} \begin{corollary}\label{C:nfcp-strictpro} Let $T$ be stable. Then $T$ is nfcp if and only if for every definable set $\defn{X}$, the set $S_{\mathrm{def},\defn{X}}$ is strict pro-definable. \end{corollary} \begin{proof} Let $\Sort{Z}=\varprojlim \Sort{Z}_i$ be the pro-definable set given by the proof of Fact \ref{F:HL-prodef}, with $\Sort{Z}(A)\simeq S_{\mathrm{def},\defn{X}}(A)$ canonically. Then $\Sort{Z}_i$ corresponds to canonical parameters of instances of a formula $\theta=\theta_\varphi(y;z)$, and $\pi_i(\Sort{Z})\subseteq \Sort{Z}_i$ is precisely the set of those parameters corresponding to $\varphi$-definitions of complete global types, i.e., $\pi_i(\Sort{Z})=\Sort{D}_{\varphi,\theta}$. We conclude by Fact \ref{F:Poizat-nfcp}. \end{proof} Hrushovski and Loeser showed that for every definable set $\defn{X}$ in \(\mathrm{ACVF}\), the set $\widehat{\defn{X}}$ of stably dominated types concentrated on $\defn{X}$ is strict pro-definable \cite[Theorem 3.1.1]{HrLo16}. Analyzing their proof, we obtain the following abstract version of this important technical result, which covers the theories we are interested in. \begin{theorem}\label{T:criterion-strictpro} Let $T$ be a complete theory which eliminates imaginaries, is \(\mathrm{NIP}\) and metastable over some $\emptyset$-definable stably embedded set ${\mathbf \Gamma}$. Let ${\mathbf k}_i$ ($i\in I$) be $\emptyset$-definable stable stably embedded sets.
Assume the following properties hold: \begin{enumerate} \item ${\mathbf \Gamma}$ eliminates imaginaries and admits a definable total ordering; \item every set of parameters $A$ is included in a metastability basis $C$ such that for every parameter set $B = \mathrm{dcl}(CB)$, $\mathrm{St}_C(B)$ is interdefinable with $\bigcup_i {\mathbf k}_i(B)$; \item the (multisorted) structure $\bigcup_i {\mathbf k}_i$ (with the induced structure) is nfcp. \end{enumerate} Then, for every $A$-definable set $\defn{X}$, the set $\widehat{\defn{X}}$ is strict $A$-pro-definable. \end{theorem} \begin{proof} Let $\defn{X}$ be $A$-definable. Pro-definability of $\widehat{\defn{X}}$ is the content of Corollary \ref{C:def-types-pro-def}(2). We recall the construction of the pro-definable set $\Sort{D}$ satisfying $\Sort{D}(B)=\widehat{\defn{X}}(B)$ canonically, for every parameter set $B\supseteq A$. Let $f_i:\defn{X}\times \Sort{W}_i\rightarrow{\mathbf \Gamma}^{n_i}$ ($i\in I$) be an enumeration of all $A$-definable families of functions from $\defn{X}$ to $\eq{{\mathbf \Gamma}}$. As ${\mathbf \Gamma}$ eliminates imaginaries, it is enough to consider functions with values in ${\mathbf \Gamma}^{n}$ for some $n$. Let $g_i:\Sort{S}_i\times \Sort{W}_i\rightarrow{\mathbf \Gamma}^{n_i}$ be $A$-definable such that for any $p\in\widehat{\defn{X}}(U)$ the function $p_\star(f_i):\Sort{W}_i\rightarrow{\mathbf \Gamma}^{n_i}$ is equal to $g_{i,s_i}=g_i(s_i,-)$ for some unique $s_i\in \Sort{S}_i$. Then $\Sort{D}$ is the image of the injective map $c:\widehat{\defn{X}}\hookrightarrow\prod_{i\in I}\Sort{S}_i$, $p\mapsto(s_i)_{i\in I}$.
In order to show that $\Sort{D}$ is strict pro-definable, it is enough to show that the projection $\Sort{Y}_i:=\pi_i(\Sort{D})\subseteq \Sort{S}_i$ is definable for all $i\in I$. (Note that the $\Sort{S}_i$'s are closed under taking finite products.) We already know that $\Sort{Y}_i$ is $\infty$-definable, so it only remains to show that it is a union of definable sets, i.e., $\mathrm{ind}$-definable. Now fix $i\in I$. We will omit indices and write $f:\defn{X}\times \Sort{W}\rightarrow{\mathbf \Gamma}^n$ and $g:\Sort{S}\times \Sort{W}\rightarrow {\mathbf \Gamma}^n$ in what follows. Let $\Sort{Z}$ be the set of functions $g_0 : \Sort{W} \to {\mathbf \Gamma}^n$ such that there exist: \begin{itemize} \item a finite product $\Sort{L} = \prod_j {\mathbf k}_j^{n_j}$; \item a function $h : \defn{X} \to \Sort{L}$, definable with parameters $c$, and \item for $\varphi(y;c,v,w)= (\exists x\in \defn{X}\,h(x) = y)\wedge (\forall x\in \defn{X}\,h(x) = y \to f(x,w) = v)$, a definable $\varphi(y;c,v,w)$-type $q_0$ concentrating on $\Sort{L}$ \end{itemize} satisfying $g_0(w) = \gamma\text{ if and only if }\mathrm{d}_{q_0}y\,\varphi(y;c,\gamma,w)$. By Fact \ref{F:Poizat-nfcp}, for every formula $\theta(v,w;z)$, the fact that, for a given $t$, $\theta(v,w,t)$ is the $\varphi(y;c,v,w)$-definition of a consistent type is a definable condition in $t$. It follows that $\Sort{Z}$ is $\mathrm{ind}$-definable. We will now show that $\Sort{Z} = \Sort{Y}$. First, pick $p\in\widehat{\defn{X}}$. Let $C$ be a set as in Hypothesis 2 such that $C \supseteq A$ and $p$ is $C$-definable. By stable domination, Hypothesis 2 (and compactness), there exist $h$ and $\Sort{L}$ as above such that for all $a\models\restr{p}{C}$ and all $b$ and $\gamma$, if $h(a)\mathop{\mathpalette\Ind{}}_C \mathrm{St}_C(\mathrm{dcl}(Cb\gamma))$, then $\mathrm{tp}(b,\gamma,h(a)) \vdash f(a,b) = \gamma$.
Actually, enlarging $h$ if necessary, we may assume that the above holds for all $a\in \defn{X}$. Let $q = h_\star p$; then $\restr{q}{Cb\gamma}(y)\vdash \forall x\in \defn{X}\,h(x) = y \to f(x,b) = \gamma$, and the tuple $(\Sort{L},h,\restr{q}{\theta})$ witnesses that $p_\star f \in \Sort{Z}$; hence $\Sort{Y}\subseteq \Sort{Z}$. Now, let $g_0\in \Sort{Z}$ and let $\Sort{L}$, $h$ and $q_0$ be as in the definition of $\Sort{Z}$. Let $C$ be as in Hypothesis 2, such that $g_0$, $h$ and $q_0$ are defined over $C$. Let $b\models \restr{q_0}{C}$ and $a\in \defn{X}$ such that $h(a) = b$. Let $C' = \mathrm{acl}(C{\mathbf \Gamma}(\mathrm{dcl}(Ca)))$. Since $C$ is a metastability basis, $\mathrm{tp}(a/C')$ is stably dominated (and thus in particular definable over $C'$); let $p$ be its unique global $C'$-definable extension, so $h_\star p$ is definable over $C'$. By orthogonality of ${\mathbf \Gamma}$ and the stable part, $h_\star p$ is definable over $\mathrm{St}_C(C')=\mathrm{St}_C(C)$, which is interdefinable with $C$. As $h_\star p$ extends $\restr{q_0}{C}$, we have that $h_\star p = q_0$. Let $b\in \Sort{W}$, $\gamma = g_0(b)$ and $a\models\restr{p}{C'b\gamma}$; then $h(a)\models\restr{q_0}{Cb\gamma}$ and, by definition of $\Sort{Z}$, $f(a,b) = \gamma$. It follows that $p_\star f(b) = g_0(b)$, hence $g_0 = p_\star f \in \Sort{Y}$ and so $\Sort{Z}\subseteq \Sort{Y}$. \end{proof} \begin{corollary}\label{C:SCVF-strict-pro} Let $T=\mathrm{SCVH}_{p,e}$ (or any completion of $\mathrm{SCVF}_{p,e}$). Then for every $A$-definable set $\defn{X}$, $\widehat{\defn{X}}$ may be canonically identified with a strict $A$-pro-definable set. \end{corollary} \begin{proof} This follows from Theorem \ref{T:criterion-strictpro}.
Indeed, $\mathrm{SCVH}_{p,e}$ is \(\mathrm{NIP}\) (Corollary \ref{C:SCVF-NIP}), and it is metastable over ${\mathbf \Gamma}$ by Corollary \ref{C:SCVH-metastable}. Since ${\mathbf \Gamma}$ is a stably embedded pure divisible ordered Abelian group, it eliminates imaginaries. Any parameter set $A$ is contained in a separably maximal model $K$, and such a $K$ is a metastability basis (see Proposition \ref{P:SCVF-met-basis}). Over $K$, indeed over any model of $\mathrm{SCVH}_{p,e}$, $\mathrm{St}_K(B)=\mathrm{dcl}({\mathbf k}(B))$ for every parameter set $B=\mathrm{dcl}(BK)$. This follows from Proposition \ref{P:StA}(7) and the fact that every lattice $s\in \mathbf{S}_n(K)$ has a $K$-definable basis. Since the residue field ${\mathbf k}$ is a pure model of ACF$_p$, it is nfcp. (Note that purity and stable embeddedness of ${\mathbf \Gamma}$ and ${\mathbf k}$ follow from the corresponding results in $\mathrm{ACVF}_{p,p}$, by Corollary \ref{cor:purely-geometric}.) Thus, all hypotheses of Theorem \ref{T:criterion-strictpro} are satisfied. \end{proof} We now discuss a similar context in which Theorem \ref{T:criterion-strictpro} applies. Let \(\mathrm{VDF}_{\mathcal{EC}}\) be the theory of existentially closed valued differential fields $(K,v,\partial)$ of residue characteristic 0 satisfying $v(\partial(x))\geq v(x)$ for all $x$. This theory was first studied by Scanlon \cite{Sca00}. The theory \(\mathrm{VDF}_{\mathcal{EC}}\) is \(\mathrm{NIP}\), and the residue field is stably embedded and a pure model of DCF$_0$, with the derivation induced by $\partial$. The third author has shown in \cite{RidVDF} that \(\mathrm{VDF}_{\mathcal{EC}}\) eliminates imaginaries in the geometric sorts and that it is metastable over the value group ${\mathbf \Gamma}$, a stably embedded pure divisible ordered Abelian group.
As shown there, every set of parameters is included in a metastability basis which is a model, and over any model any stably embedded stable set is interdefinable with the residue field, since this is the case in the underlying algebraically closed valued field. As DCF$_0$ is stable and eliminates imaginaries, it is enough to show that it eliminates $\exists^\infty x$ in order to conclude that it is nfcp. But this follows, by quantifier elimination, from the fact that the algebraic closure of a set $A$ is the field-theoretic algebraic closure of the differential field generated by $A$. We have thus proved the following result. \begin{corollary}\label{C:VDF-strict-pro} Let $\defn{X}$ be any $A$-definable set in a model of \(\mathrm{VDF}_{\mathcal{EC}}\). Then $\widehat{\defn{X}}$ may be canonically identified with a strict $A$-pro-definable set. \end{corollary} \bibliography{SCVF-EI} \end{document}
\begin{document} \title[Symmetric power $L$-functions]{On the value-distribution of symmetric power $L$-functions} \author{Kohji Matsumoto} \address{K. Matsumoto: Graduate School of Mathematics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan} \email{[email protected]} \author{Yumiko Umegaki} \address{Y. Umegaki: Faculty, Division of Natural Sciences, Research Group of Mathematics, Nara Women's University, Kitauoya Nishimachi, Nara 630-8506, Japan} \email{[email protected]} \keywords{symmetric power $L$-function, automorphic $L$-function, value-distribution, density function, $M$-function} \subjclass[2010]{Primary 11F66, Secondary 11M41} \begin{abstract} We first briefly survey the value-distribution theory of $L$-functions of the Bohr-Jessen flavor (or the theory of ``$M$-functions''). Limit formulas for the Riemann zeta-function, Dirichlet $L$-functions, automorphic $L$-functions etc. are discussed. Then we prove new results on the value-distribution of symmetric power $L$-functions, which are limit formulas involving associated $M$-functions. \end{abstract} \maketitle \section{The Bohr-Jessen limit theorem}\label{sec1} We begin with the classical result of Bohr and Jessen \cite{BJ3032} on the value-distribution of the Riemann zeta-function $\zeta(s)$. Let $R$ be a rectangle in the complex plane $\mathbb{C}$ with the edges parallel to the axes. Let $s=\sigma+it$ be a complex variable. By $\mu_1$ we mean the $1$-dimensional Lebesgue measure. For $\sigma>1/2$ and $T>0$, we define \begin{equation}\label{1-1} V_{\sigma}(T,R;\zeta)=\mu_1\{t\in[-T,T]\;|\;\log\zeta(\sigma+it)\in R\}, \end{equation} where the rigorous definition of $\log\zeta(\sigma+it)$ will be given later (in Section \ref{sec2}). Then the result of Bohr and Jessen can be stated as follows. \begin{theorem}[Bohr and Jessen \cite{BJ3032}]\label{thm-BJ} \begin{itemize} \item[(i)] There exists the limit \begin{equation}\label{1-2} W_{\sigma}(R;\zeta)=\lim_{T\to\infty}\frac{1}{2T}V_{\sigma}(T,R;\zeta).
\end{equation} \item[(ii)] This limit can be written as \begin{equation}\label{1-3} W_{\sigma}(R;\zeta)=\int_R \mathcal{F}_{\sigma}(z,\zeta)|dz|, \end{equation} where $z=x+iy\in\mathbb{C}$, $|dz|=dxdy/2\pi$, and $\mathcal{F}_{\sigma}(z,\zeta)$ is a continuous, non-negative, explicitly constructed function defined on $\mathbb{C}$. \end{itemize} \end{theorem} The limit $W_{\sigma}(R;\zeta)$ may be regarded as the probability that the values of $\log\zeta(\sigma+it)$ on the line $\Re s=\sigma$ belong to the given rectangle $R$, and $\mathcal{F}_{\sigma}(z,\zeta)$ may be called the density function of this probability. Theorem \ref{thm-BJ} is now called the Bohr-Jessen limit theorem. \begin{remark} A reformulation of this type of results in terms of weak convergence of probability measures was given by Laurin{\v c}ikas (see \cite{Lau96}). \end{remark} The original proof of Bohr and Jessen has a geometric flavor. Their proof starts with the expression \begin{equation}\label{1-4} \log\zeta(\sigma+it)=-\sum_{n=1}^{\infty}\log(1-p_n^{-\sigma-it}), \end{equation} where $p_n$ is the $n$th prime, which is valid for $\sigma>1$. They consider the truncation \begin{equation}\label{1-5} f_N(\sigma+it)=-\sum_{n=1}^N\log(1-p_n^{-\sigma-it}) =-\sum_{n=1}^N\log(1-p_n^{-\sigma}e^{-it\log p_n}), \end{equation} which, even in the case $1/2<\sigma\leq 1$, approximates $\log\zeta(\sigma+it)$ in a certain mean value sense. A key idea of Bohr and Jessen is to introduce the auxiliary mapping $S_N:\mathbb{T}^N\to\mathbb{C}$ associated with $f_N(\sigma+it)$ (where $\mathbb{T}^N\simeq [0,1)^N$ is the $N$-dimensional unit torus) defined by \begin{equation}\label{1-6} S_N(\theta_1,\ldots,\theta_N;\zeta)=-\sum_{n=1}^N\log(1-p_n^{-\sigma}e^{2\pi i\theta_n}) \qquad (0\leq \theta_n <1). \end{equation} Let $z_n(\theta;\zeta)=-\log(1-p_n^{-\sigma}e^{2\pi i\theta})$.
Then each term $z_n(\theta_n;\zeta)$ on the right-hand side of \eqref{1-6} describes a planar convex curve when $\theta_n$ varies from 0 to 1. Therefore $S_N(\theta_1,\ldots,\theta_N;\zeta)$ is a kind of geometric ``sum'' of convex curves. Bohr and Jessen \cite{BJ29} developed a detailed theory of such sums of convex curves, and applied it to the proof of their Theorem \ref{thm-BJ}. Later Jessen and Wintner \cite{JW35} published an alternative proof of Theorem \ref{thm-BJ}, which is more analytic (Fourier theoretic). In their proof they used a certain inequality (the Jessen-Wintner inequality), which is also related to convexity properties of curves. We also note that the analogue of Theorem \ref{thm-BJ} for $(\zeta'/\zeta)(s)$ was shown by Kershner and Wintner \cite{KW37}. As for the explicit construction of the density function, see van Kampen and Wintner \cite{vKW37}. \section{A generalization of the Bohr-Jessen limit theorem}\label{sec2} It is a natural question to ask how to generalize Theorem \ref{thm-BJ}, the Bohr-Jessen limit theorem, to more general zeta and $L$-functions. An obstacle is that, in more general situations, the geometry of the corresponding curves becomes more complicated; in particular, convexity is not valid in general. Still, part (i) of Theorem~\ref{thm-BJ} can be generalized to a fairly general class of zeta-functions. Let $\mathbb{N}$ be the set of positive integers. For any $n\in\mathbb{N}$, let $g(n)\in \mathbb{N}$, $f(k,n)\in\mathbb{N}$ and $a_n^{(k)}\in\mathbb{C}$ ($1\leq k\leq g(n)$). Using the polynomials given by \[ A_n(X)=\prod_{k=1}^{g(n)}\left(1-a_n^{(k)}X^{f(k,n)}\right), \] we define the zeta-function $\varphi(s)$ by the Euler product \begin{equation}\label{2-1} \varphi(s)=\prod_{n=1}^{\infty} A_n(p_n^{-s})^{-1}. \end{equation} Assume \begin{equation}\label{2-2} g(n)\leq C_0 p_n^{\alpha}, \qquad |a_n^{(k)}|\leq p_n^{\beta} \end{equation} with constants $\alpha,\beta\geq 0$, $C_0 >0$.
Then \eqref{2-1} converges absolutely in the region $\Re s>\alpha+\beta+1$. Let $\mathscr{M}_{\alpha\beta}$ be the set of all functions $\varphi(s)$ defined as above, satisfying \eqref{2-2} and the following: \begin{itemize} \item[(i)] $\varphi(s)$ can be continued meromorphically to $\sigma\geq\sigma_0$, where $\alpha+\beta+1/2\leq\sigma_0<\alpha+\beta+1$, and all poles in this region are included in a compact subset of $\{s\;|\;\sigma>\sigma_0\}$, \item[(ii)] $\varphi(\sigma+it)=O((|t|+1)^{C'_0})$ for any $\sigma\geq\sigma_0$, with a constant $C'_0>0$, \item[(iii)] It holds that \begin{equation}\label{2-3} \int_{-T}^T |\varphi(\sigma_0+it)|^2 dt = O(T). \end{equation} \end{itemize} The class \[ \mathscr{M}=\bigcup_{\alpha,\beta\geq 0}\mathscr{M}_{\alpha\beta} \] was first introduced by the first author \cite{Mat90}. For $\sigma >\sigma_0$, define \begin{equation}\label{2-4} V_{\sigma}(T,R;\varphi)=\mu_1\{t\in[-T,T]\;|\;\log\varphi(\sigma+it)\in R\}, \end{equation} where the definition of $\log\varphi(s)$ (for $\varphi\in\mathscr{M}$) is as follows. First, when $\sigma>\alpha+\beta+1$ define \[ \log\varphi(s)=-\sum_{n=1}^{\infty}\sum_{k=1}^{g(n)}\mathrm{Log}\left(1-a_n^{(k)} p_n^{-f(k,n)s}\right), \] where $\mathrm{Log}$ means the principal branch. Next, let \[ G(\varphi)=\{s\;|\;\sigma\geq\sigma_0\}\setminus\bigcup_{\rho} \{\sigma+i\Im\rho\;|\;\sigma_0\leq\sigma\leq\Re\rho\}, \] where $\rho$ runs over all zeros and poles with $\Re\rho\geq\sigma_0$. For any $s\in G(\varphi)$, define $\log\varphi(s)$ by analytic continuation along the horizontal path from the right. In this general situation, the corresponding mapping is \begin{equation}\label{2-5} S_N(\theta_1,\ldots,\theta_N;\varphi)=\sum_{n=1}^N z_n(\theta_n;\varphi) \qquad (0\leq \theta_n <1), \end{equation} where \begin{equation}\label{2-6} z_n(\theta_n;\varphi)=-\sum_{k=1}^{g(n)} \log(1-a_n^{(k)}p_n^{-f(k,n)\sigma}e^{2\pi if(k,n)\theta_n}).
\end{equation} In \cite{Mat90}, the following generalization of Theorem \ref{thm-BJ} (i) was shown. \begin{theorem}[\!\!\cite{Mat90}]\label{thm-M90} If $\varphi\in\mathscr{M}$, then for any $\sigma>\sigma_0$, the limit \begin{equation}\label{2-7} W_{\sigma}(R;\varphi)=\lim_{T\to\infty}\frac{1}{2T}V_{\sigma}(T,R;\varphi) \end{equation} exists. \end{theorem} It can be seen that the class $\mathscr{M}$ includes many important zeta and $L$-functions. The reason why such a general statement can be proved is that geometric properties of the corresponding curves \eqref{2-6} are not necessary for the proof of this theorem. In fact, the proof of Theorem \ref{thm-M90} is based only on Prokhorov's theorem from probability theory (besides simple arithmetic facts). An alternative proof is given in \cite{Mat92a} in the case of Dedekind zeta-functions of algebraic number fields. The method in \cite{Mat92a} is to use L{\'e}vy's convergence theorem, again from probability theory. This method can also be applied to general $\varphi\in\mathscr{M}$, as pointed out in \cite{Mat92b}; a sketch of the argument in the general case is described in \cite{MU2}. Therefore we can now say that part (i) of Theorem \ref{thm-BJ} has been sufficiently generalized. However, Theorem \ref{thm-BJ} also includes part (ii), which gives an explicit expression of the limit value in terms of the density function; it is therefore highly desirable to generalize part (ii) as well, in order to study the behavior of the limit $W_{\sigma}(R;\varphi)$ more closely. However, this part is related to the geometry of the corresponding curves, and its generalization is much more difficult. Joyner \cite{Joy86} discussed the properties of density functions in the case of Dirichlet $L$-functions, and the first author \cite{Mat92a} studied the density functions for Dedekind zeta-functions of Galois number fields, but both treat cases in which the corresponding curves \eqref{2-6} are convex.
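The convexity underlying the Bohr-Jessen and Jessen-Wintner arguments is easy to see analytically: the map $u\mapsto-\log(1-u)$ satisfies $\Re\bigl(1+u f''(u)/f'(u)\bigr)=\Re\bigl(1/(1-u)\bigr)>0$ on the unit disk, so it maps each circle $|u|=r<1$ to a convex curve, and $z_n(\theta;\zeta)$ is such an image. A minimal numerical sanity check (our illustration, not part of any proof): sample the curve for one value of $r=p_n^{-\sigma}$ and verify that consecutive edge cross products keep a constant sign.

```python
import cmath
import math

def curve_point(r, theta):
    """One term z_n(theta) = -log(1 - r e^{2 pi i theta}) of the Bohr-Jessen sum."""
    return -cmath.log(1 - r * cmath.exp(2j * math.pi * theta))

def is_convex(points):
    """A closed polygon is convex iff all consecutive edge cross products share a sign."""
    n = len(points)
    crosses = []
    for i in range(n):
        a, b, c = points[i], points[(i + 1) % n], points[(i + 2) % n]
        e1, e2 = b - a, c - b
        crosses.append(e1.real * e2.imag - e1.imag * e2.real)
    return all(x > 0 for x in crosses) or all(x < 0 for x in crosses)

# r = p^{-sigma} for p = 2, sigma = 0.6 (any 0 < r < 1 behaves the same way).
r = 2 ** (-0.6)
pts = [curve_point(r, k / 2000) for k in range(2000)]
print(is_convex(pts))  # the sampled zeta-case curve passes the convexity test
```

The same check applied to the curves \eqref{2-6} of automorphic $L$-functions can fail, which is exactly the obstacle discussed below.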
In the case of automorphic $L$-functions, the corresponding curve \eqref{2-6} is not always convex. The study of this case will be given in later sections. \section{$M$-functions}\label{sec3} The theorems of Bohr-Jessen type consider the situation where $t=\Im s$ varies. That is, Theorems \ref{thm-BJ} and \ref{thm-M90} are results in $t$-aspect. When we consider more general zeta and $L$-functions, it is also important to study the value-distribution in other aspects. For example, it is possible to consider the modulus aspect for Dirichlet or Hecke $L$-functions. Let $\chi$ be a certain character, and $L(s,\chi)$ the associated $L$-function (over a certain number field or function field). Ihara \cite{Iha08} studied the behavior of $(L'/L)(s,\chi)$ from this aspect, and proved a limit formula of the form \begin{equation}\label{3-1} \mathrm{Avg}_{\chi}\Phi\left(\frac{L'}{L}(s,\chi)\right)=\int_{\mathbb{C}}M_{\sigma}(z) \Phi(z)|dz| \end{equation} for a certain average (specified below) with respect to $\chi$, where $\Phi$ is a test function, and $M_{\sigma}:\mathbb{C}\to\mathbb{R}$ is an explicitly constructed density function, which is non-negative and belongs to the class $C^{\infty}$. Ihara called this $M_{\sigma}$ the ``$M$-function'' associated with the value-distribution of $L(s,\chi)$. When $\sigma>1$, Ihara proved \eqref{3-1} for any continuous test function $\Phi$. In the function field case, using the (proved) Riemann hypothesis, Ihara proved \eqref{3-1} even in some subregion of the critical strip for a more restricted class of $\Phi$ (e.g. $\sigma>3/4$ when $\Phi\in L^1\cap L^{\infty}$ and moreover its Fourier transform has compact support).
As for the meaning of $\mathrm{Avg}_{\chi}$, Ihara considered several types of averages, but when the ground field is the rational number field $\mathbb{Q}$, the meaning is one of the following. The first type is \begin{equation}\label{3-2} \mathrm{Avg}_{\chi}\phi(\chi)=\lim_{m\to\infty}\frac{1}{\pi(m)} \sum_{2< p\leq m}\frac{1}{p-2} {\sum_{\chi (\text{mod}\; p)}}^{\!\!\!\!\!*} \phi(\chi) \end{equation} for a complex-valued function $\phi$ of $\chi$, where $\pi(m)$ denotes the number of primes up to $m$, $p$ runs over primes, and $\sum^*$ stands for the sum over primitive Dirichlet characters of modulus $p$. The second type is considered for the character $\chi_{\tau}(p)=p^{-i\tau}$. Then the Euler product of the associated $L$-function is \[ \prod_p(1-\chi_{\tau}(p)p^{-s})^{-1} =\prod_p(1-p^{-s-i\tau})^{-1}=\zeta(s+i\tau), \] and the meaning of the average is given by \begin{equation}\label{3-3} \mathrm{Avg}_{\chi}\phi(\chi)=\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T \phi(\chi_{\tau}) d\tau. \end{equation} The second type of average actually yields a limit formula for the Riemann zeta-function in $t$-aspect. In particular, the formula \eqref{3-1} for this second type of average, with $\Phi$ being the characteristic function of $R$, coincides with the formulation of Kershner and Wintner \cite{KW37}. An important discovery in Ihara \cite{Iha08} is that the same function $M_{\sigma}$ can be used in the formula \eqref{3-1} for both meanings of the average. Now we restrict ourselves to the case when the ground field is $\mathbb{Q}$, so the meaning of the average is \eqref{3-2} or \eqref{3-3}. We also consider the value-distribution of $\log L(s,\chi)$, so the corresponding limit formula is of the form \begin{equation}\label{3-4} \mathrm{Avg}_{\chi}\Phi\left(\log L(s,\chi)\right)=\int_{\mathbb{C}}\mathcal{M}_{\sigma}(z) \Phi(z)|dz| \end{equation} with the density function $\mathcal{M}_{\sigma}$.
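The inner sum of \eqref{3-2} rests on the orthogonality of Dirichlet characters: for a prime $p$ and $p\nmid a$, summing $\chi(a)$ over the $p-2$ primitive (equivalently, non-principal) characters mod $p$ gives $p-2$ if $a\equiv 1 \pmod p$ and $-1$ otherwise. A small numerical sketch of this mechanism (our illustration, with the characters built from a primitive root, hard-coded here for $p=11$):

```python
import cmath
import math

p = 11          # prime modulus
g = 2           # a primitive root mod 11 (powers of 2 run over all nonzero residues)

# Discrete log table: dlog[a] = k with g^k = a (mod p).
dlog = {pow(g, k, p): k for k in range(p - 1)}

def chi(j, a):
    """The Dirichlet character mod p sending g to e^{2 pi i j/(p-1)}."""
    return cmath.exp(2j * math.pi * j * dlog[a % p] / (p - 1))

def primitive_sum(a):
    """Sum of chi(a) over the p-2 non-principal (= primitive) characters mod p."""
    return sum(chi(j, a) for j in range(1, p - 1))

print(round(primitive_sum(1).real))  # p - 2 = 9, since every character takes value 1 at 1
print(round(primitive_sum(3).real))  # -1, the off-diagonal orthogonality value
```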
\begin{theorem}[Ihara and Matsumoto \cite{IM11a} \cite{IM14}]\label{thm-IM1114} For any $\sigma> 1/2$, and for the average \eqref{3-2} or \eqref{3-3}, both \eqref{3-1} and \eqref{3-4} hold with explicitly constructed density functions (``$M$-functions'') $M_{\sigma}$ and $\mathcal{M}_{\sigma}$, for any test function $\Phi$ which is (i) any bounded continuous function, or (ii) the characteristic function of either a compact subset of $\mathbb{C}$ or the complement of such a subset. \end{theorem} In the number field case the Riemann hypothesis is of course not yet proved, but instead, we can apply certain mean value estimates to obtain the above theorem. Therefore Theorem \ref{thm-IM1114} is unconditional. In particular, this theorem includes the Bohr-Jessen limit theorem, and its $\zeta'/\zeta$ analogue due to Kershner and Wintner, as special cases. If we assume the Riemann hypothesis (for Dirichlet $L$-functions), an even stronger result can be shown. In \cite{IM11b}, the average \begin{equation}\label{3-5} \mathrm{Avg}_{\chi}\phi(\chi)=\lim_{p\to\infty}\frac{1}{p-2} {\sum_{\chi (\text{mod}\; p)}}^{\!\!\!\!\!*} \phi(\chi) \end{equation} was considered, and for this average, both \eqref{3-1} and \eqref{3-4} were proved for a more general class of test functions (that is, (i) of Theorem~\ref{thm-IM1114} is replaced by any continuous function with at most exponential growth) under the assumption of the Riemann hypothesis. The corresponding study for $M$-functions in the function field case was done in \cite{IM10} \cite{IM11b}. Here we mention several further studies in the theory of $M$-functions. Let $D$ be a fundamental discriminant, and $\chi_D$ the associated real character. Mourtada and Murty \cite{MM15} studied the value-distribution of $(L'/L)(\sigma,\chi_D)$ (where $\sigma>1/2$) in $D$-aspect, and proved a limit formula similar to \eqref{3-1} under the assumption of the Riemann hypothesis.
Akbary and Hamieh \cite{AH18} proved an analogous result in the cubic character case, without assuming the Riemann hypothesis. As for the value-distribution of $(\zeta'/\zeta)(s)$ in $t$-aspect, there is another approach due to Guo \cite{Guo96a} \cite{Guo96b}. Inspired by the idea of Guo, Mine \cite{Mine1} proved the existence (and gave the explicit construction) of the $M$-function for $(\zeta'_K/\zeta_K)(s)$ in $t$-aspect, where $\zeta_K(s)$ denotes the Dedekind zeta-function of an algebraic number field $K$ (including the non-Galois case), with an explicit error estimate in the limit formula of the form \eqref{3-1}. In \cite{Mine3}, he extended the result to the case of more general $L$-functions, belonging to a certain subclass of $\mathscr{M}$. In another paper \cite{Mine2}, Mine treated the limit theorem of Bohr-Jessen type (but without taking the logarithm) for Lerch zeta-functions, and proved a refinement, written in terms of the associated $M$-function. This paper of Mine shows that the theory of $M$-functions works for zeta-functions without Euler products. Suzuki \cite{Suz15} discovered that a certain $M$-function appears even in a rather different context. He studied the zeros of the real or imaginary part of \[ \xi(s)=\frac{1}{2}s(s-1)\pi^{-s/2}\Gamma(s/2)\zeta(s), \] and proved that the distribution of spacings of the second-order normalization of the imaginary parts of those zeros can be represented by an integral involving the $M$-function for $(\zeta'/\zeta)(s)$. \section{The value-distribution of automorphic $L$-functions (the modulus and level aspects)}\label{sec4} At the end of the preceding section we saw that $M$-functions have been studied for various zeta and $L$-functions. Since one of the most important classes of $L$-functions is the class of automorphic $L$-functions, it is natural to ask what the theory of $M$-functions associated with automorphic $L$-functions looks like. First we fix the notation.
Let $k$ be an even integer and $N$ a positive integer, and let $S_k(N)$ be the set of cusp forms of weight $k$ for $\Gamma_0(N)$. We write the Fourier expansion of $f\in S_k(N)$ as \[ f(z)=\sum_{n=1}^{\infty}\lambda_f(n)n^{(k-1)/2}e^{2\pi inz}, \] and define the attached $L$-function by \[ L(f,s)=\sum_{n=1}^{\infty}\lambda_f(n)n^{-s}. \] Now we assume that $f\in S_k(N)$ is a primitive form, that is, a normalized Hecke-eigen newform. Then $L(f,s)$ has the Euler product \begin{align}\label{4-1} L(f,s)&=\prod_{p|N}(1-\lambda_f(p)p^{-s})^{-1} \prod_{p\nmid N}(1-\lambda_f(p)p^{-s}+p^{-2s})^{-1}\\ &=\prod_{p|N}(1-\lambda_f(p)p^{-s})^{-1} \prod_{p\nmid N}(1-\alpha_f(p)p^{-s})^{-1}(1-\beta_f(p)p^{-s})^{-1},\notag \end{align} where $|\alpha_f(p)|=1$, $\beta_f(p)=\overline{\alpha_f(p)}$, and $\alpha_f(p)+\beta_f(p)=\lambda_f(p)$ (for $p\nmid N$). First consider the modulus aspect. Let $\chi$ be a Dirichlet character. The twisted $L$-function $L(f\otimes\chi,s)$ is defined by replacing $p^{-s}$ by $\chi(p)p^{-s}$ in each local factor. Lebacque and Zykin \cite{LZ} developed a theory similar to \cite{IM11b} for $L(f\otimes\chi,s)$, and proved the limit formulas corresponding to \eqref{3-1} and \eqref{3-4}. More difficult is the case of the level aspect. So far there are two attempts in this direction, the aforementioned paper of Lebacque and Zykin \cite{LZ}, and an article of the authors \cite{MU1}. Here we briefly mention the results proved in \cite{MU1}. Let $\gamma\in\mathbb{N}$, and define the (partial) $\gamma$th symmetric power $L$-function attached to $f$ by \begin{equation}\label{4-2} L_N(\mathrm{Sym}_f^{\gamma},s)= \prod_{p\nmid N}\prod_{h=0}^{\gamma}(1-\alpha_f^{\gamma-h}(p)\beta_f^h(p)p^{-s})^{-1}. \end{equation} Here we consider the situation $N=q^m$, where $q$ is a prime number.
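As an aside on \eqref{4-2}: for $p\nmid N$ the parameters $\alpha_f(p)$, $\beta_f(p)$ are the roots of $X^2-\lambda_f(p)X+1$, so the local factor of $\mathrm{Sym}_f^{\gamma}$ is computable directly from $\lambda_f(p)$. A short numerical sketch (our illustration; $\lambda$ is an arbitrary sample value satisfying the Deligne bound $|\lambda_f(p)|\leq 2$, and $x$ plays the role of $p^{-s}$):

```python
import cmath

def satake(lam):
    """Roots alpha, beta of X^2 - lam*X + 1 = 0; |alpha| = 1 and beta = conj(alpha) for |lam| <= 2."""
    disc = cmath.sqrt(lam * lam - 4)
    return (lam + disc) / 2, (lam - disc) / 2

def sym_local_factor_inv(lam, gamma, x):
    """Inverse local factor prod_{h=0}^{gamma} (1 - alpha^{gamma-h} beta^h x) of Sym^gamma at x = p^{-s}."""
    a, b = satake(lam)
    prod = 1.0 + 0j
    for h in range(gamma + 1):
        prod *= 1 - a ** (gamma - h) * b ** h * x
    return prod

lam, x = 1.2, 0.1 + 0.05j  # sample values: |lam| <= 2, x = p^{-s}
# gamma = 1 recovers the inverse local factor 1 - lam*x + x^2 of L(f,s) itself.
print(abs(sym_local_factor_inv(lam, 1, x) - (1 - lam * x + x * x)) < 1e-12)
```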
Then the form of the right-hand side of \eqref{4-2} is the same for all $m$, which we denote by \begin{equation}\label{4-3} L_q(\mathrm{Sym}_f^{\gamma},s)= \prod_{p\neq q}\prod_{h=0}^{\gamma}(1-\alpha_f^{\gamma-h}(p)\beta_f^h(p)p^{-s})^{-1}. \end{equation} Let $\mu,\nu\in\mathbb{N}$ with $\mu-\nu=2$. By $Q(\mu)$ we denote the smallest prime number satisfying $2^{\mu}/\sqrt{Q(\mu)}<1$. The main result in \cite{MU1} is a limit formula for the value-distribution of the difference \[ \log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma) \qquad(\sigma>1/2). \] In the proofs of the limit theorems mentioned in the present article, some kind of ``independence'' or ``orthogonality'' property is necessary. For example, in the proofs of Theorem \ref{thm-BJ} and Theorem \ref{thm-M90}, the Kronecker-Weyl theorem on the uniform distribution of sequences is used. Ihara's argument \cite{Iha08} for $L$-functions is based on the orthogonality of Dirichlet characters. In the present situation, the necessary tool is supplied by Petersson's formula, in the form shown in the second author's article \cite{Ich11}. In view of the formula in \cite{Ich11}, we define the following weighted sum for any sequence $\{A_f\}$ over primitive forms $f\in S_k(q^m)$: \begin{equation}\label{4-4} {\sum_f}^{\prime}A_f= \frac{1}{C_k(1-C_q(m))}\sum_f \frac{A_f}{\langle f,f\rangle_P}, \end{equation} where \[ C_k=\frac{(4\pi)^{k-1}}{\Gamma(k-1)},\quad C_q(m)= \begin{cases} 0, & m=1,\\ q(q^2-1)^{-1}, & m=2,\\ q^{-1}, & m\geq 3, \end{cases} \] the symbol $\langle\,,\rangle_P$ is the Petersson inner product, and the sum on the right-hand side of \eqref{4-4} runs over all primitive forms belonging to $S_k(q^m)$. We define two types of averages in the level aspect.
The first one is \begin{align}\label{4-5} \lefteqn{\mathrm{Avg}_{\text{prime}} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))}\\ &=\lim_{q\to\infty}{\sum_f}^{\prime} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))\notag \end{align} for fixed $m$, where $\Psi:\mathbb{R}\to\mathbb{C}$ is a test function. The second one is \begin{align}\label{4-6} \lefteqn{\mathrm{Avg}_{\text{power}} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))}\\ &=\lim_{m\to\infty}{\sum_f}^{\prime} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))\notag \end{align} for fixed $q$, where $q$ is a prime and $q\geq Q(\mu)$ if $1\geq\sigma >1/2$. \begin{theorem}[\!\!\cite{MU1}]\label{thm-MU1} Let $f\in S_k(N)$ be a primitive form, $2\leq k<12$ or $k=14$, and $N=q^m$ with a certain prime $q$. Let $\mu,\nu\in\mathbb{N}$, $\mu-\nu=2$. We assume that the symmetric power $L$-functions $L_q(\mathrm{Sym}_f^{\mu},s)$, $L_q(\mathrm{Sym}_f^{\nu},s)$ can be continued holomorphically to $\sigma>1/2$, satisfy the estimate $\ll q^m(|t|+2)$ in the strip $2\geq\sigma>1/2$, and have no zero in $1\geq\sigma>1/2$. Then, for any $\sigma>1/2$, there exists a density function $\mathcal{M}_{\sigma}:\mathbb{R}\to\mathbb{R}_{\geq 0}$ which can be explicitly constructed, and for which the formula \begin{align}\label{4-7} \lefteqn{\mathrm{Avg}_{\mathrm{prime}} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))}\\ &=\mathrm{Avg}_{\mathrm{power}} \Psi(\log L_q(\mathrm{Sym}_f^{\mu},\sigma)-\log L_q(\mathrm{Sym}_f^{\nu},\sigma))\notag\\ &=\int_{\mathbb{R}}\mathcal{M}_{\sigma}(u)\Psi(u)\frac{du}{\sqrt{2\pi}}\notag \end{align} holds for any test function $\Psi$ which is bounded and continuous, or a compactly supported characteristic function. \end{theorem} In this theorem we require several assumptions, which are plausible but seem very difficult to prove. 
The main reason for using those assumptions is that we have no idea how to prove suitable mean value estimates for symmetric power $L$-functions. \par For any $\sigma >1$, since $\mu-\nu=2$, we have \begin{align*} &\log L_q(\textrm{Sym}_f^{\mu}, s) - \log L_q(\textrm{Sym}_f^{\nu}, s) \\ &=\sum_{p\neq q}(-\log(1-\alpha_f^{\mu}(p)p^{-s}) -\log(1-\beta_f^{\mu}(p)p^{-s})). \end{align*} If we could find a method for studying $\mathrm{Avg}_{\text{prime}}$ and $\mathrm{Avg}_{\text{power}}$ of the right-hand side of the above equation in the case $\mu=1$, it would imply a limit theorem for $\log L(f,s)$ similar to \eqref{3-4}, but at present we cannot extend the theorem to $\log L(f,s)$. (The theorem is shown only for $\mu\geq 3$.) Lebacque and Zykin \cite{LZ} studied $\log L(f,s)$ and $(L'/L)(f,s)$ along the line of \cite{IM11b}, and obtained a result analogous to \cite[Theorem 1]{IM11b}. However, their argument also does not arrive at a limit theorem for $\log L(f,s)$ or $(L'/L)(f,s)$ of the form \eqref{3-1} or \eqref{3-4}. \section{The value-distribution of automorphic $L$-functions (the $t$-aspect)}\label{sec5} Now we return to the matter of the $t$-aspect. As we mentioned in Section \ref{sec2}, part (ii) of Theorem \ref{thm-BJ} has been generalized only in some special cases where convexity properties can be used. Automorphic $L$-functions are typical examples for which the corresponding curves are not always convex, so it is an important problem to generalize part (ii) of Theorem \ref{thm-BJ} to the case of automorphic $L$-functions $L(f,s)$. This has been done in \cite{MU2}. Since $L(f,s)\in\mathscr{M}_{00}$, the existence of the limit $W_{\sigma}(R;L(f,\cdot))$ (for $\sigma>1/2$) is already known by Theorem \ref{thm-M90}. 
\begin{theorem}[\!\!\cite{MU2}]\label{thm-MU2} For any $\sigma>1/2$, there exists a continuous non-negative function ${\mathcal M}_{\sigma}(z,L(f,\cdot))$, explicitly defined on $\mathbb{C}$, for which \begin{equation}\label{5-1} W_{\sigma}(R;L(f,\cdot))=\int_R {\mathcal M}_{\sigma}(z,L(f,\cdot))|dz| \end{equation} holds. \end{theorem} \begin{remark}\label{rem-MU2} Once \eqref{5-1} is proved, we can deduce \begin{align}\label{5-1bis} \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T \Phi(\log L(f,s+i\tau))d\tau =\int_{\mathbb{C}}{\mathcal M}_{\sigma}(z,L(f,\cdot))\Phi(z)|dz| \end{align} for any test function $\Phi$ as in the statement of Theorem \ref{thm-IM1114}, by the argument given in \cite[Remark 9.1]{IM11a}. Note that the left-hand side of \eqref{5-1bis} is a function of the variable $s$, but the $M$-function on the right-hand side depends only on $\sigma=\Re s$. \end{remark} The basic structure of the proof of Theorem \ref{thm-MU2} in \cite{MU2}, which we briefly outline here, is along lines similar to \cite{Mat92a}. Actually, in \cite{MU2} we work in a more general situation, namely in the class $\mathscr{M}$. Let $\varphi\in\mathscr{M}$. Define the integral \begin{equation}\label{5-2} K_n(w;\varphi)=\int_0^1 \exp\left(i\langle z_n(\theta_n;\varphi),w\rangle\right)d\theta_n \qquad (n\in\mathbb{N}), \end{equation} where $z_n(\theta_n;\varphi)$ is defined by \eqref{2-6} and $\langle z,w\rangle=\Re z \Re w + \Im z \Im w$. In \cite{MU2}, we prove the following \begin{lemma}[\!\!\cite{MU2}]\label{lem-MU2} If there are at least five $n$'s for which \begin{equation}\label{5-3} K_n(w;\varphi)=O_n(|w|^{-1/2}) \qquad (|w|\to\infty) \end{equation} holds, then we can find a continuous non-negative function ${\mathcal M}_{\sigma}(z,\varphi)$ for $\sigma>\sigma_0$ by which we can write \begin{equation}\label{5-4} W_{\sigma}(R;\varphi)=\int_R {\mathcal M}_{\sigma}(z,\varphi) |dz|. 
\end{equation} Moreover, ${\mathcal M}_{\sigma}(z,\varphi)$ is explicitly given by \begin{equation}\label{5-5} {\mathcal M}_{\sigma}(z,\varphi)=\int_{\mathbb{C}}e^{-i\langle z,w\rangle} \Lambda(w;\varphi)|dw|, \end{equation} where \begin{equation}\label{5-6} \Lambda(w;\varphi)=\int_{\mathbb{C}}e^{i\langle z,w\rangle}dW_{\sigma}(z;\varphi). \end{equation} \end{lemma} Therefore the main problem is reduced to the proof of \eqref{5-3}. Jessen and Wintner \cite{JW35} proved that $K_n(w;\varphi)=O(|w|^{-1/2})$ for any $n$ when the corresponding curves are convex. This is the original Jessen-Wintner inequality. Now consider the case of automorphic $L$-functions. Let $\mathbb{P}_f(\varepsilon)$ be the set of primes $p$ satisfying $|\lambda_f(p)|>\sqrt{2}-\varepsilon$. Then $\mathbb{P}_f(\varepsilon)$ is of positive density (M. R. Murty \cite{RM83} for the full modular case, and M. R. Murty and V. K. Murty \cite{RMKM} for any level $N$). In \cite{MU2}, we observed the geometric behavior of the corresponding curves and proved \begin{lemma}[\!\!\cite{MU2}]\label{lem2-MU2} If $p_n\in\mathbb{P}_f(\varepsilon)$ and $n$ is sufficiently large, then \begin{equation}\label{5-9} K_n(w;L(f,\cdot))=O_{\varepsilon}\left(p_n^{\sigma/2}|w|^{-1/2} +p_n^{\sigma}|w|^{-1}\right) \end{equation} holds. \end{lemma} Since $\mathbb{P}_f(\varepsilon)$ is of positive density, we can obviously find at least five (actually infinitely many) $n$'s for which \eqref{5-9} is valid. Therefore, using Lemma \ref{lem-MU2}, we can deduce the conclusion of Theorem \ref{thm-MU2}. \section{The value-distribution of symmetric power $L$-functions (the $t$-aspect)} \label{sec6} Now we proceed to state the new results of the present paper, on the value-distribution of the symmetric power $L$-functions defined by \eqref{4-2}. The proofs of the results stated in this section will be given in Sections \ref{sec7} and \ref{sec8}. 
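The proofs in Sections \ref{sec7} and \ref{sec8} ultimately rest on square-root decay in $|w|$ of integrals of the type $K_n(w;\varphi)$ defined in \eqref{5-2} (cf. \eqref{5-9}). This decay is visible numerically already for a single Euler factor. The following Python sketch, whose parameters ($p=2$, $\sigma=1$, a single coefficient $a_n^{(1)}=1$, and $w$ real) are purely illustrative, approximates $K_n(w;\varphi)$ by an equispaced Riemann sum, which is highly accurate here because the integrand is smooth and periodic:

```python
import cmath

p, sigma = 2, 1.0            # illustrative single Euler factor (hypothetical choice)
r = p ** (-sigma)            # p^{-sigma} = 0.5

def g(theta):
    # Re z(theta) for z(theta) = -log(1 - p^{-sigma} e^{2 pi i theta}),
    # i.e. the phase function g_{tau,n} with tau = 0
    return (-cmath.log(1 - r * cmath.exp(2j * cmath.pi * theta))).real

def K(w, steps=20000):
    # equispaced Riemann sum for K(w) = int_0^1 exp(i w g(theta)) d theta
    total = 0j
    for k in range(steps):
        total += cmath.exp(1j * w * g(k / steps))
    return total / steps

assert abs(K(0) - 1) < 1e-9   # normalization: the integrand is 1 at w = 0
assert abs(K(200)) < 0.3      # decay of stationary-phase type has set in
```

The two bounds checked at the end are deliberately loose; the stationary-phase heuristic predicts $|K(w)|\asymp |w|^{-1/2}$ up to a constant depending on $p^{\sigma/2}$, in line with \eqref{5-9}.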
First consider the case $\gamma=2$, that is, the symmetric square $L$-function \[ L(\mathrm{Sym}_f^2, s)=L_N(\mathrm{Sym}_f^2, s)\prod_{p\mid N}(1-\lambda_f(p^2)p^{-s})^{-1}. \] Assume $N$ is square-free and let $f\in S_k(N)$ be a primitive form. Let \[ \Lambda(\mathrm{Sym}_f^2,s)=N^s \pi^{-3s/2}\Gamma\left(\frac{s+1}{2}\right) \Gamma\left(\frac{s+k-1}{2}\right)\Gamma\left(\frac{s+k}{2}\right) L(\mathrm{Sym}_f^2,s). \] Then it is known (Shimura \cite{Shi75}, Gelbart and Jacquet \cite{GJ78}; see also \cite{IM}) that $\Lambda(\mathrm{Sym}_f^2,s)$ can be continued to an entire function, and satisfies the functional equation \begin{align}\label{6-1} \Lambda(\mathrm{Sym}_f^2,s)= \Lambda(\mathrm{Sym}_f^2,1-s). \end{align} Because of \eqref{6-1}, we can apply the general theorem of Kanemitsu, Sankaranarayanan and Tanigawa \cite{KST02}. Part (iii) of their Theorem 1 implies that, in the strip $2/3<\sigma<1$, it holds that \begin{align}\label{6-2} \int_1^T |L(\mathrm{Sym}_f^2,\sigma+it)|^2 dt=C_2(\sigma,f)T+ O\left(T^{2-(3/2)\sigma+\varepsilon}\right) \end{align} for any $\varepsilon>0$, where $C_2(\sigma,f)$ is a constant depending on $\sigma$ and $f$. (Note that the first author \cite{Mat05} developed a more refined general theory, which improves the error estimate in \eqref{6-2} to $O(T^{3-3\sigma+\varepsilon})$; see \cite[(3.10)]{Mat05}.) From \eqref{6-2} we find that \begin{align}\label{6-3} \int_1^T |L(\mathrm{Sym}_f^2,\sigma+it)|^2 dt=O(T) \end{align} for $\sigma>2/3$. This is condition (iii) of the class $\mathscr{M}_{00}$. Condition (ii) also follows from \eqref{6-1} by invoking the Phragm{\'e}n-Lindel{\"o}f principle. Therefore $L(\mathrm{Sym}_f^2,\cdot)\in\mathscr{M}_{00}$, so the method in \cite{MU2} can be applied to $L(\mathrm{Sym}_f^2,\cdot)$. The result is \begin{theorem}\label{thm-square} Let $N$ be a square-free integer, and $f\in S_k(N)$ a primitive form. 
For any $\sigma>2/3$, there exists a continuous non-negative function ${\mathcal M}_{\sigma}(z,L(\mathrm{Sym}_f^2,\cdot))$, explicitly defined on $\mathbb{C}$, for which \begin{equation}\label{6-4} \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T \Phi(\log L(\mathrm{Sym}_f^2,s+i\tau))d\tau= \int_{\mathbb{C}} {\mathcal M}_{\sigma}(z,L(\mathrm{Sym}_f^2,\cdot))\Phi(z)|dz| \end{equation} holds for any test function $\Phi$ as in the statement of Theorem \ref{thm-IM1114}. \end{theorem} Next consider more general symmetric power $L$-functions $L_N(\mathrm{Sym}_f^{\gamma}, s)$ (see \eqref{4-2}) associated with a primitive form $f\in S_k(N)$, where $N$ is a positive integer. It is known that $L_N(\mathrm{Sym}_f^{\gamma}, s)$ has a meromorphic continuation to the whole complex plane (see \cite{BGHT}). We assume the following \begin{assumption}\label{automorphy} There exist predicted local factors $L_p(\mathrm{Sym}_f^{\gamma}, s)$ for $p\mid N$, and $L_N(\mathrm{Sym}_f^{\gamma}, s)$ satisfies the functional equation \begin{align}\label{6-5} \Lambda(\mathrm{Sym}_f^{\gamma},s)=\varepsilon_{\gamma,f} \Lambda(\mathrm{Sym}_f^{\gamma},1-s), \end{align} where $|\varepsilon_{\gamma,f}|=1$ and \[ \Lambda(\mathrm{Sym}_f^{\gamma},s) =q_{\gamma,f}^{s/2}\widetilde{\Gamma}_{\gamma}(s) L_N(\mathrm{Sym}_f^{\gamma},s)\prod_{p\mid N}L_p(\mathrm{Sym}_f^{\gamma},s) \] with the conductor $q_{\gamma,f}$ and the ``gamma factor'' $\widetilde{\Gamma}_{\gamma}(s)$. 
Here, the gamma factor is written as \begin{align}\label{gamma-factor} \widetilde{\Gamma}_{\gamma}(s) =\pi^{-(\gamma+1)s/2}\prod_{j=1}^{\gamma+1}\Gamma\bigg(\frac{s+\kappa_{j,\gamma}}{2}\bigg), \end{align} where $\kappa_{j,\gamma}\in\mathbb{R}$, and each local factor for $p|N$ is written as \begin{align}\label{local-factor} L_p(\mathrm{Sym}_f^{\gamma},s)=(1-\lambda_{p,\gamma,f}p^{-s})^{-1}, \quad |\lambda_{p,\gamma,f}|\leq p^{-\gamma/2} \end{align} (see Cogdell and Michel \cite{CM04}, Moreno and Shahidi \cite{MS85}, Rouse \cite{rouse07}, and Rouse and Thorner \cite{RT17}). \end{assumption} The above assumptions are reasonable in view of the Langlands functoriality conjecture. Let \[ L(\mathrm{Sym}_f^{\gamma}, s)= L_N(\mathrm{Sym}_f^{\gamma},s)\prod_{p\mid N}L_p(\mathrm{Sym}_f^{\gamma},s). \] From \eqref{4-2} and \eqref{local-factor} we see that the Dirichlet series expansion of $L(\mathrm{Sym}_f^{\gamma}, s)$ is of the form $\sum_{n=1}^{\infty}c_n n^{-s}$ with $|c_n|\ll n^{\varepsilon}$. Since the gamma factor is given by \eqref{gamma-factor}, again using the general result of \cite{KST02}, we obtain \begin{align}\label{6-7} \int_1^T |L(\mathrm{Sym}_f^{\gamma},\sigma+it)|^2 dt=C_{\gamma}(\sigma,f)T+ O\left(T^{1+(\gamma/2)-((\gamma+1)/2)\sigma+\varepsilon}\right) \end{align} in the strip $1-1/(\gamma+1) <\sigma<1$, with a certain constant $C_{\gamma}(\sigma,f)$. Therefore $L(\mathrm{Sym}_f^{\gamma},\cdot)\in\mathscr{M}_{00}$. Another tool we use is the following quantitative version of the Sato-Tate conjecture due to Thorner \cite{Tho14}. We write $\alpha_f(p)=e^{i\theta_f(p)}$; we may assume $0\leq\theta_f(p)\leq\pi$. Let $I$ be any subset of $[0,\pi]$, and let \[ \pi_I(x)=\#\{p:\text{prime}\;|\;p\leq x, \;\theta_f(p)\in I\}. 
\] Then Thorner's result is, under Assumption~\ref{automorphy}, \begin{align}\label{Thorner} \frac{\pi_I(x)}{\pi(x)}=\frac{2}{\pi}\int_a^b \sin^2\theta\,d\theta +O\left(\frac{x}{\pi(x)(\log x)^{9/8-\varepsilon}}\right) \end{align} for any $\varepsilon>0$, where $I=[a,b]$ and $\pi(x)$ denotes the number of primes up to $x$. (Under the assumption of the GRH for $L(\mathrm{Sym}_f^{\gamma},s)$, sharper estimates for the error term are known.) \begin{theorem}\label{thm-general} Let $N$ be a positive integer. Let $f\in S_k(N)$ be a primitive form which is not of CM-type. Let $\gamma\geq 2$, and assume Assumption~\ref{automorphy}. Then, for any $\sigma>1-1/(\gamma+1)$, there exists a continuous non-negative function ${\mathcal M}_{\sigma}(z,L(\mathrm{Sym}_f^{\gamma},\cdot))$, explicitly defined on $\mathbb{C}$, for which \begin{align}\label{6-9} &\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T \Phi(\log L(\mathrm{Sym}_f^{\gamma},s+i\tau))d\tau\\ &= \int_{\mathbb{C}} {\mathcal M}_{\sigma}(z,L(\mathrm{Sym}_f^{\gamma},\cdot))\Phi(z)|dz|\notag \end{align} holds for any test function $\Phi$ as in the statement of Theorem \ref{thm-IM1114}. \end{theorem} \begin{remark} In Theorem \ref{thm-general} we assume Assumption~\ref{automorphy}, because it is not yet fully proved. However, Barnet-Lamb et al. \cite{BGHT} proved the ``potential automorphy'' of $L(\mathrm{Sym}_f^{\gamma},s)$, which gives a certain functional equation (see \cite[Theorem B, Assertion 2]{BGHT}). If the factors appearing in the functional equation are shown to be sufficiently well-behaved, we can apply the result in \cite{KST02} to obtain some suitable mean value result unconditionally, so we can remove Assumption~\ref{automorphy} from the statement of Theorem \ref{thm-general}. \end{remark} \section{Some general lemmas}\label{sec7} We start the proof of the theorems stated in the preceding section. In this section we consider the general situation that $\varphi\in\mathscr{M}_{00}$ with $f(k,n)=1$ for all $k$ and $n$. 
Then \begin{equation}\label{7-1} z_n(\theta_n;\varphi)=-\sum_{k=1}^{g(n)} \log(1-a_n^{(k)}p_n^{-\sigma}e^{2\pi i\theta_n}). \end{equation} Put \[ R_n(X;\varphi)=-\sum_{k=1}^{g(n)}\log(1-a_n^{(k)}X), \] and write its Taylor expansion as $R_n(X;\varphi)=\sum_{j=1}^{\infty}r_{j,n} X^j$. Then we have \begin{equation}\label{7-2} z_n(\theta_n;\varphi)=R_n(p_n^{-\sigma}e^{2\pi i\theta_n};\varphi) =\sum_{j=1}^{\infty}r_{j,n} p_n^{-j\sigma}e^{2\pi ij\theta_n}. \end{equation} Let $x_n(\theta_n;\varphi)=\Re z_n(\theta_n;\varphi)$ and $y_n(\theta_n;\varphi)=\Im z_n(\theta_n;\varphi)$. Write $w=|w|e^{i\tau}=|w|\cos\tau+i|w|\sin\tau$. Then \begin{equation}\label{7-3} \langle z_n(\theta_n;\varphi),w\rangle=|w|g_{\tau,n}(\theta_n;\varphi), \end{equation} where \[ g_{\tau,n}(\theta_n;\varphi)=x_n(\theta_n;\varphi)\cos\tau +y_n(\theta_n;\varphi)\sin\tau. \] Substituting this into \eqref{5-2}, we have \begin{equation}\label{7-4} K_n(w;\varphi)=\int_0^1 \exp\left(i|w|g_{\tau,n}(\theta_n;\varphi)\right)d\theta_n. \end{equation} Therefore, to evaluate $K_n(w;\varphi)$, the essential point is to analyze the behavior of $g_{\tau,n}(\theta_n;\varphi)$. We prove \begin{lemma}\label{lem-7-1} Let $\varphi\in\mathscr{M}_{00}$. The function $g_{\tau,n}(\theta_n;\varphi)$ is of class $C^{\infty}$ as a function of $\theta_n$. Moreover, if $n$ is sufficiently large, and \begin{equation}\label{7-5} |r_{1,n}|\geq C \end{equation} holds with a positive constant $C$, then for those $n$, $g_{\tau,n}^{\prime\prime}(\theta_n;\varphi)$ has exactly two zeros on the interval $[0,1)$. The same assertion also holds for $g_{\tau,n}^{\prime}(\theta_n;\varphi)$. \end{lemma} \begin{proof} This lemma is an analogue of \cite[Lemma 7.1]{MU2}. From the definition, we have \begin{equation}\label{7-6} r_{j,n}=\frac{1}{j}\sum_{k=1}^{g(n)}(a_{n}^{(k)})^j. \end{equation} Since $\varphi\in\mathscr{M}_{00}$, we find that $|r_{j,n}|\leq g(n)/j \leq C_0/j$. 
Noting this point, we can see that exactly the same argument as in the proof of \cite[Lemma 7.1]{MU2} can be applied to our present situation. (The part on $g_{\tau,n}^{\prime}(\theta_n;\varphi)$ is the same as in \cite[Remark 7.2]{MU2}.) \end{proof} Now we can show the following lemma, which is the analogue of Lemma \ref{lem2-MU2} for $\varphi\in\mathscr{M}_{00}$. \begin{lemma}[The Jessen-Wintner inequality for $\varphi$]\label{lem-JW} Let $\varphi\in\mathscr{M}_{00}$, and assume that $n$ is sufficiently large and \eqref{7-5} holds. Then we have \begin{equation}\label{7-7} K_n(w;\varphi)=O\left(\frac{p_n^{\sigma/2}}{|w|^{1/2}}+ \frac{p_n^{\sigma}}{|w|}\right). \end{equation} \end{lemma} \begin{proof} The method of the proof is the same as in \cite[Proposition 7.3]{MU2} (whose idea goes back to Jessen and Wintner \cite{JW35}), so we just sketch the idea briefly. Using \eqref{7-2} we have $$ g_{\tau,n}(\theta_n;\varphi)=\sum_{j=1}^{\infty}|r_{j,n}|p_n^{-j\sigma} \cos(\gamma_{j,n}+2\pi j\theta_n-\tau), $$ where $\gamma_{j,n}=\arg r_{j,n}$, and hence \begin{align*} &g_{\tau,n}^{\prime}(\theta_n;\varphi)=-2\pi |r_{1,n}|p_n^{-\sigma} \sin(\gamma_{1,n}+2\pi\theta_n-\tau)+O(p_n^{-2\sigma}),\\ &g_{\tau,n}^{\prime\prime}(\theta_n;\varphi)=-(2\pi)^2 |r_{1,n}|p_n^{-\sigma} \cos(\gamma_{1,n}+2\pi\theta_n-\tau)+O(p_n^{-2\sigma}). \end{align*} Let $\theta_n=\theta_1^c, \theta_2^c$ be the two solutions of $\cos(\gamma_{1,n}+2\pi\theta_n-\tau)=0$ ($0\leq\theta_n<1$). Then, when $n$ is sufficiently large and \eqref{7-5} holds, the two solutions of $g_{\tau,n}^{\prime\prime}(\theta_n;\varphi)=0$ stated in Lemma \ref{lem-7-1} are close to $\theta_1^c, \theta_2^c$. Similarly, the two solutions of $g_{\tau,n}^{\prime}(\theta_n;\varphi)=0$ are close to the two solutions $\theta_n=\theta_1^s, \theta_2^s$ of $\sin(\gamma_{1,n}+2\pi\theta_n-\tau)=0$. 
Then, for each $i,j$ ($1\leq i,j \leq 2$), there exists a unique $\theta_{ij}$ between $\theta_i^c$ and $\theta_j^s$ for which \[ |\sin(\gamma_{1,n}+2\pi\theta_n-\tau)|=|\cos(\gamma_{1,n}+2\pi\theta_n-\tau)|=1/\sqrt{2} \] holds. We divide the interval $0\leq\theta_n<1$ (mod 1) into four subintervals at the values $\theta_{ij}$, and divide also the integral \eqref{7-4} accordingly. On two of those subintervals $|\sin(\gamma_{1,n}+2\pi\theta_n-\tau)|\geq 1/\sqrt{2}$, which implies that $|g_{\tau,n}^{\prime}(\theta_n;\varphi)|$ is not close to 0. Therefore the integrals on those subintervals can be evaluated by the first derivative test. On the other two subintervals $|g_{\tau,n}^{\prime\prime}(\theta_n;\varphi)|$ is not close to 0, so the second derivative test works. These evaluations give the conclusion \eqref{7-7}. \end{proof} If there exist at least five large values of $n$ for which \eqref{7-5} holds, then we can combine Lemma \ref{lem-JW} with Lemma \ref{lem-MU2} to obtain \begin{equation}\label{7-8} W_{\sigma}(R;\varphi)=\int_R {\mathcal M}_{\sigma}(z,\varphi)|dz| \end{equation} for any $\sigma>\sigma_0$, with an explicitly constructed continuous non-negative function ${\mathcal M}_{\sigma}(z,\varphi)$ (the associated $M$-function). Then, as indicated in Remark \ref{rem-MU2}, we can deduce a formula of the form \begin{equation}\label{7-9} \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^T \Phi(\log\varphi(s+i\tau))d\tau= \int_{\mathbb{C}} {\mathcal M}_{\sigma}(z,\varphi)\Phi(z)|dz| \end{equation} in the region $\sigma>\sigma_0$, for any test function $\Phi$ as in the statement of Theorem \ref{thm-IM1114}. Therefore, to complete the proof of our theorems, the only remaining task is to show \eqref{7-5} for sufficiently many large values of $n$. \section{Proof of Theorems \ref{thm-square} and \ref{thm-general}}\label{sec8} Now we return to the specific situation of symmetric power $L$-functions. 
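The proofs of Theorems \ref{thm-square} and \ref{thm-general} below reduce the key inequality \eqref{7-5} to elementary trigonometric identities for $r_{1,n}$ in terms of the Sato-Tate angle $\theta_f(p_n)$: for $\gamma=2$ one has $r_{1,n}=\lambda_f(p_n)^2-1$, and in general $r_{1,n}\sin\theta_f(p_n)=\sin((\gamma+1)\theta_f(p_n))$. As a quick numerical sanity check (a Python sketch with an arbitrary illustrative angle and power; not part of the proofs), both identities can be verified:

```python
import math

theta, gamma = 1.1, 6                      # illustrative angle and symmetric power
alpha = complex(math.cos(theta), math.sin(theta))   # alpha = e^{i theta}
beta = alpha.conjugate()                             # beta = e^{-i theta}, alpha*beta = 1
lam = 2 * math.cos(theta)                            # lambda_f(p) = alpha + beta

# gamma = 2: r_{1,n} = alpha^2 + alpha*beta + beta^2 = lambda^2 - 1
r1_sym2 = (alpha**2 + alpha * beta + beta**2).real
assert abs(r1_sym2 - (lam**2 - 1)) < 1e-12

# general gamma: r_{1,n} sin(theta) = sin((gamma+1) theta)  (Dirichlet-kernel identity)
r1 = sum(math.cos((gamma - 2 * h) * theta) for h in range(gamma + 1))
assert abs(r1 * math.sin(theta) - math.sin((gamma + 1) * theta)) < 1e-12
```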
\begin{proof}[Proof of Theorem \ref{thm-square}] In this case, for any $n$ such that $p_n\nmid N$, we see that $g(n)=3$, and from \eqref{7-6} we have \begin{align}\label{8-1} r_{1,n}&=\alpha_f^2(p_n)+\alpha_f(p_n)\beta_f(p_n)+\beta_f^2(p_n)\\ &=(\alpha_f(p_n)+\beta_f(p_n))^2 -\alpha_f(p_n)\beta_f(p_n)\notag\\ &= (\lambda_f(p_n))^2-1. \notag \end{align} If $p_n\in\mathbb{P}_f(\varepsilon)$, then $|\lambda_f(p_n)|>\sqrt{2}-\varepsilon$, so \[ r_{1,n}>(\sqrt{2}-\varepsilon)^2-1=1-(2\sqrt{2}\varepsilon-\varepsilon^2), \] which is positive if $\varepsilon$ is small. Since $\mathbb{P}_f(\varepsilon)$ is a set of positive density, we now obtain the inequality \eqref{7-5} for infinitely many values of $n$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm-general}] In this case, for any $n$ such that $p_n\nmid N$, \[ r_{1,n}=\sum_{h=0}^{\gamma}\alpha_f^{\gamma-h}(p_n)\beta_f^h(p_n). \] In particular $r_{1,n}$ is real, and so \[ r_{1,n}=\Re r_{1,n}=\sum_{h=0}^{\gamma}\cos((\gamma-2h)\theta_f(p_n)). \] Then it is easy to see that \[ r_{1,n}\sin\theta_f(p_n)=\sin((\gamma+1)\theta_f(p_n)) \] (cf. \cite[p.86]{RMKM}), hence \begin{equation}\label{8-2} |r_{1,n}|\geq |\sin((\gamma+1)\theta_f(p_n))|. \end{equation} Fix a number $\xi\in(0,\pi/2)$, and let $\eta=\sin\xi$. Then $0<\eta<1$. Define the intervals \[ A(j)=\left[\frac{2\pi j+\xi}{\gamma+1},\frac{2\pi j+\pi-\xi}{\gamma+1}\right],\; B(j)=\left[\frac{2\pi j+\pi+\xi}{\gamma+1},\frac{2\pi j+2\pi-\xi}{\gamma+1}\right] \] where $j$ is a non-negative integer. If $\gamma$ is odd, then \begin{equation}\label{8-3} |\sin((\gamma+1)\theta_f(p_n))|\geq \eta \end{equation} if and only if \begin{equation}\label{8-4} \theta_f(p_n)\in I_1:=\bigcup_{j=0}^{(\gamma-1)/2}\left(A(j)\cup B(j)\right). \end{equation} If $\gamma$ is even, then \eqref{8-3} holds if and only if \begin{equation}\label{8-5} \theta_f(p_n)\in I_2:=\bigcup_{j=0}^{(\gamma-2)/2}\left(A(j)\cup B(j)\right) \cup A(\gamma/2). 
\end{equation} These observations together with \eqref{8-2} imply that $|r_{1,n}|\geq\eta$ holds whenever $\theta_f(p_n)\in I_1$ (if $\gamma$ is odd) or $\theta_f(p_n)\in I_2$ (if $\gamma$ is even). Therefore, to prove Theorem \ref{thm-general}, it is enough to show that the set \begin{equation}\label{8-6} \{p:\text{prime}\;|\;\theta_f(p)\in I_\ell \}\quad(\ell=1,2) \end{equation} is of positive density. Since \[ \int_a^b \sin^2\theta\,d\theta= \frac{1}{2}\left(b-a-\frac{1}{2}(\sin 2b-\sin 2a)\right), \] from \eqref{Thorner} we have \begin{align}\label{8-7} \frac{\pi_I(x)}{\pi(x)}=\frac{1}{\pi}\left(b-a-\frac{1}{2}(\sin 2b-\sin 2a)\right) +O\left((\log x)^{-1/8+\varepsilon}\right) \end{align} for $I=[a,b]$. Denote \begin{align*} &a_{A(j)}=\frac{2\pi j+\xi}{\gamma+1}, \;\;b_{A(j)}=\frac{2\pi j+\pi-\xi}{\gamma+1},\\ &a_{B(j)}=\frac{2\pi j+\pi+\xi}{\gamma+1},\;\; b_{B(j)}=\frac{2\pi j+2\pi-\xi}{\gamma+1}. \end{align*} Then from \eqref{8-7} we can write \begin{equation}\label{8-8} \frac{\pi_{I_\ell}(x)}{\pi(x)}=\frac{1}{\pi}S_{\ell}+\frac{1}{2\pi}T_{\ell} +O\left((\log x)^{-1/8+\varepsilon}\right)\qquad (\ell=1,2), \end{equation} where \begin{align*} &S_1=\sum_{j=0}^{(\gamma-1)/2}\left((b_{A(j)}-a_{A(j)})+(b_{B(j)}-a_{B(j)})\right),\\ &S_2=\sum_{j=0}^{(\gamma-2)/2}\left((b_{A(j)}-a_{A(j)})+(b_{B(j)}-a_{B(j)})\right) +\left(b_{A(\gamma/2)}-a_{A(\gamma/2)}\right),\\ &T_1=\sum_{j=0}^{(\gamma-1)/2}\left((\sin(2b_{A(j)})-\sin(2a_{A(j)})) +(\sin(2b_{B(j)})-\sin(2a_{B(j)}))\right),\\ &T_2=\sum_{j=0}^{(\gamma-2)/2}\left((\sin(2b_{A(j)})-\sin(2a_{A(j)})) +(\sin(2b_{B(j)})-\sin(2a_{B(j)}))\right)\\ &\qquad\qquad+\left(\sin(2b_{A(\gamma/2)})-\sin(2a_{A(\gamma/2)})\right). \end{align*} It is easy to see that \begin{equation}\label{8-9} S_{\ell}=\pi-2\xi \qquad (\ell=1,2). \end{equation} Next we show that \begin{equation}\label{8-10} T_{\ell}=0 \qquad (\ell=1,2). 
\end{equation} In fact, we know \[ \sin(2b_{\Box(j)})-\sin(2a_{\Box(j)}) =2\sin\frac{\pi-2\xi}{\gamma+1}\cos\frac{4\pi j +c\pi}{\gamma +1}, \] where $c=1$ if $\Box=A$ and $c=3$ if $\Box=B$. Then \begin{align*} T_1&=2\sin\frac{\pi-2\xi}{\gamma+1}\sum_{j=0}^{(\gamma-1)/2}\left( \cos\frac{4\pi j+\pi}{\gamma+1}+\cos\frac{4\pi j+3\pi}{\gamma+1}\right)\\ &=4\sin\frac{\pi-2\xi}{\gamma+1}\cos\frac{\pi}{\gamma+1} \sum_{j=0}^{(\gamma-1)/2}\cos\frac{4\pi j+2\pi}{\gamma+1}, \end{align*} and \begin{align*} \sin\frac{2\pi}{\gamma+1}\sum_{j=0}^{(\gamma-1)/2}\cos\frac{4\pi j+2\pi}{\gamma+1} &=\frac{1}{2}\sum_{j=0}^{(\gamma-1)/2}\left( \sin\frac{4\pi(j+1)}{\gamma+1}-\sin\frac{4\pi j}{\gamma+1}\right)\\ &=\frac{1}{2}(\sin(2\pi)-\sin 0)=0, \end{align*} therefore $T_1=0$. Similarly we find that \begin{align*} T_2=4\sin\frac{\pi-2\xi}{\gamma+1}\cos\frac{\pi}{\gamma+1} \sum_{j=0}^{(\gamma-2)/2}\cos\frac{4\pi j+2\pi}{\gamma+1} +2\sin\frac{\pi-2\xi}{\gamma+1}\cos\frac{\pi}{\gamma+1}, \end{align*} and the sum on the right-hand side is equal to $-1/2$, and hence $T_2=0$. From \eqref{8-8}, \eqref{8-9} and \eqref{8-10} we obtain \begin{align}\label{8-11} \frac{\pi_{I_\ell}(x)}{\pi(x)}=1-\frac{2\xi}{\pi} +O\left((\log x)^{-1/8+\varepsilon}\right)\qquad (\ell=1,2). \end{align} Since $\xi<\pi/2$, this implies that the set \eqref{8-6} is of positive density in the set of all primes. This completes the proof. \end{proof} \begin{remark} Actually, to prove Theorem \ref{thm-general}, it is not necessary to invoke the quantitative result of Thorner \cite{Tho14}. The above argument, combined with the famous solution of the Sato-Tate conjecture \cite{BGHT}, implies \begin{align}\label{8-12} \frac{\pi_{I_\ell}(x)}{\pi(x)}\sim 1-\frac{2\xi}{\pi}>0, \end{align} which is sufficient for our purpose. However, we may expect that a quantitative formula like \eqref{8-11} will be useful when we try to develop a more detailed study of $M$-functions. \end{remark} \end{document}
\begin{document} \title[Class group twists of $\operatorname{GL}_n$-automorphic $L$-functions] {Class group twists and Galois averages of $\operatorname{GL}_n$-automorphic $L$-functions} \author{Jeanine Van Order} \begin{abstract} Fix $n \geq 2$ an integer, and let $F$ be a totally real number field. We derive estimates for the finite parts of the $L$-functions of irreducible cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representations twisted by class group characters or ring class characters of a totally imaginary quadratic extension $K$ of $F$, evaluated at central values $s=1/2$ or more generally at values $s \in {\bf{C}}$ within the strip $\frac{1}{2} - \frac{1}{n^2 + 1} < \Re(s) < 1$. Assuming the generalized Ramanujan conjecture at infinity, we obtain estimates for all arguments in the critical strip $0 < \Re(s) < 1$. We also derive finer nonvanishing estimates for central values $s=1/2$ twisted by ring class characters of $K$. When the dimension $n \leq 4$ is small, these give us nonvanishing estimates depending on the best known approximations towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, and in particular unconditional nonvanishing for $n \leq 3$ (with the case of $n=3$ being new). We derive such estimates via certain exact integral representations for the moments, and in particular via new bounds on the shifted convolution problem in this context. In the setting where the cuspidal representation is cohomological of even rank $n \geq 2$, we also explain how to view these estimates in terms of recent rationality theorems towards Deligne's conjecture for automorphic motives over CM fields. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let $F$ be a totally real number field of degree $d = [F: {\bf{Q}}]$ and adele ring ${\bf{A}}_F$. 
Fix $n \geq 2$ an integer, and let $\Pi = \otimes_v \Pi_v$ be an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ having unitary central character $\omega = \otimes_v \omega_{\Pi_v}$. We write $\Lambda(s, \Pi) = L(s, \Pi_{\infty})L(s, \Pi)$ to denote the standard $L$-function of $\Pi$. Let $K$ be a totally imaginary quadratic extension of $F$ whose relative discriminant $\mathfrak{D} = \mathfrak{D}_{K/F} \subset \mathcal{O}_F$ is prime to the conductor $\mathfrak{f}(\Pi) \subset \mathcal{O}_F$ of $\Pi$, and whose associated idele class character of $F$ we denote by $\eta = \eta_{K/F}$. We write $D_K$ to denote the absolute discriminant of $K$. Let $\chi = \otimes_w \chi_w$ be an idele class character of $K$ corresponding to a ring class Hecke character of $K$. Hence, there exists a minimal nonzero ideal $\mathfrak{c} \subset \mathcal{O}_F$ for which $\chi$ is a character of the class group $C(\mathcal{O}_{\mathfrak{c}})$ of the $\mathcal{O}_F$-order $\mathcal{O}_{\mathfrak{c}} := \mathcal{O}_F + \mathfrak{c} \mathcal{O}_K$ of conductor $\mathfrak{c}$ in $K$. If $\mathfrak{c} = \mathcal{O}_F$, then this is simply a character of the ideal class group $C(\mathcal{O}_K)$ of $\mathcal{O}_K$. Let us write $\pi(\chi) = \otimes_v \pi(\chi)_v$ to denote the automorphic representation of $\operatorname{GL}_2({\bf{A}}_F)$ induced from such a character. We consider the corresponding Rankin-Selberg $L$-function $\Lambda(s, \Pi \times \pi(\chi)) = L(s, \Pi_{\infty} \times \pi(\chi)_{\infty}) L(s, \Pi \times \pi(\chi))$ on $\operatorname{GL}_n({\bf{A}}_F) \times \operatorname{GL}_2({\bf{A}}_F)$. 
Equivalently, writing $\Pi_K = \otimes_w \Pi_{K, w}$ to denote the basechange of $\Pi$ to an automorphic representation of $\operatorname{GL}_n({\bf{A}}_K)$ (which exists thanks to Arthur-Clozel \cite{AC}), we consider the corresponding basechange $L$-function $\Lambda(s, \Pi_K \otimes \chi) = L(s, \Pi_{K, \infty} \otimes \chi_{\infty}) L(s, \Pi_K \otimes \chi)$ of $\Pi_K \otimes \chi$ on $\operatorname{GL}_n({\bf{A}}_K) \times \operatorname{GL}_1({\bf{A}}_K)$. This $L$-function has a well-known analytic continuation to all $s \in {\bf{C}}$, and satisfies the functional equation \begin{align*} \Lambda(s, \Pi \times \pi(\chi)) &= \epsilon(s, \Pi \times \pi(\chi))\Lambda(1-s, \widetilde{\Pi} \times \widetilde{\pi(\chi)}).\end{align*} Here, the epsilon factor \begin{align*} \epsilon(s, \Pi \times \pi(\chi)) &= q(\Pi \times \pi(\chi))^{\frac{1}{2}-s} \epsilon(1/2, \Pi \times \pi(\chi)) \end{align*} is given in terms of the conductor $q(\Pi \times \pi(\chi))$ and the root number $\epsilon(1/2, \Pi \times \pi(\chi)) \in {\bf{S}}^1$ of $\Lambda(s, \Pi \times \pi(\chi))$, and $\widetilde{\Pi} = \otimes_v \widetilde{\Pi}_v$ denotes the contragredient representation associated to $\Pi$. We derive two main estimates for values within the strip $0 < \Re(s) < 1$ of these $L$-functions $L(s, \Pi \times \pi(\chi))$. Our main innovation is to derive new integral presentations for these values as sums of specializations of constant coefficients of some other automorphic form, and in particular to derive the following preliminary special estimates for the shifted convolution problem in this setting. That is, we first prove the following key estimates using novel integral presentations involving the classical projection operator $\mathbb P^n_1$. Since we anticipate this should be of independent interest, with distinct arithmetic applications, let us first state this key estimate. 
\begin{theorem}[Theorem \ref{SDSCS}, Theorem \ref{SDSCS2}]\label{SCS} Let $\Pi = \otimes_v \Pi_v$ be a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with unitary central character. Let $L(s, \Pi)$ denote the finite part of the standard $L$-function $\Lambda(s, \Pi) = L(s, \Pi) L(s, \Pi_{\infty})$ of $\Pi$, with Dirichlet series expansion \begin{align*} L(s, \Pi) = \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} c_{\Pi}(\mathfrak{m}) {\bf{N}}\mathfrak{m}^{-s} \end{align*} for $\Re(s) >1$. We derive the following uniform bounds for shifted convolution sums of the coefficients $c_{\Pi}$. Let $0 \leq \theta_0 \leq 1/2$ denote the best uniform approximation towards the generalized Ramanujan conjecture for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms, hence with $\theta_0 = 0$ conjectured and $\theta_0 = 7/64$ admissible by the theorem of Blomer and Brumley \cite{BB}. Let $0 \leq \sigma_0 \leq 1/4$ denote the best uniform approximation towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, hence with $\sigma_0=0$ conjectured, and $\sigma_0 = 103/512$ admissible by the theorem of Blomer and Harcos \cite{BH10}. Let $W \in \mathcal{C}^{\infty}({\bf{R}}_{>0})$ be any smooth and compactly supported function which satisfies $W^{(i)} \ll 1$ for all $i \geq 0$.\\ \begin{itemize} \item[(i)] Let $\alpha \in \mathcal{O}_F$ be any nonzero $F$-integer, which we identify with its image under the diagonal embedding $\alpha \mapsto (\alpha, \alpha, \ldots) \in {\bf{A}}_F^{\times}$. 
Given any real numbers $Y >1$ and $\varepsilon >0$, we derive the uniform estimate \begin{align*} \sum\limits_{r \in \mathcal{O}_F / \mathcal{O}_F^{\times} } \frac{c_{\Pi}( r^2 + \alpha)}{ {\bf{N}}(r^2 + \alpha)^{\frac{1}{2}}} W \left( \frac{ {\bf{N}}(r^2 + \alpha) }{Y} \right) &\ll_{\Pi, \varepsilon} Y^{\frac{1}{4}} \cdot {\bf{N}}\alpha^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}\alpha }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} Here, the sum runs over nonzero $F$-integers $r \in \mathcal{O}_F$ up to the action of units $\mathcal{O}_F^{\times}$, and hence corresponds to summing over a set of generators of all principal ideals of $\mathcal{O}_F$. \\ \item[(ii)] Let $q(x) = A x^2 + B x + C$ be any irreducible quadratic polynomial with $F$-integer coefficients $A, B, C \in \mathcal{O}_F$ and totally definite (hence nonzero) discriminant $\Delta = B^2 - 4 AC$. Given any real numbers $Y > 1$ and $\varepsilon >0$, and taking for granted the same conventions as used to describe the shifted convolution sum in (i), we derive the uniform estimate \begin{align*} \sum\limits_{r \in \mathcal{O}_F/ \mathcal{O}_F^{\times}} \frac{c_{\Pi}(r)}{ {\bf{N}} q(r)^{\frac{1}{2}}} W \left( \frac{ {\bf{N}}q(r) }{Y} \right) &\ll_{\Pi, \varepsilon} Y^{\frac{1}{4} + \sigma_0} \cdot {\bf{N}}A \cdot {\bf{N}}\Delta^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}\Delta }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} \end{itemize} \end{theorem} We refer to the discussion below for more details, but note in passing that a major departure from previous theorems in this direction (beyond the novel integral presentations we derive) is to decompose the sums initially into linear combinations of Poincar\'e series. Decomposing the Poincar\'e series spectrally and applying standard Sobolev norm arguments then allows us to derive the stated bounds, after setting up and justifying the initial decomposition. 
The second part (ii) gives a broad and ``soft'' generalization of Blomer's quadratic progressions theorem \cite{VB}, and appears to be the most general estimate of this form in the literature. Although these bounds are completely new for higher rank $n \geq 3$, they have various well-known precedents in the literature for $n=2$ (e.g.~\cite{TeT}, \cite{BH10}, \cite{VB}). Here, we apply them as follows to estimate moments of critical values of certain $\operatorname{GL}_n({\bf{A}}_K)$-automorphic $L$-functions, i.e.~for $K$ a fixed totally imaginary quadratic extension of $F$, or equivalently for moments of critical values of certain Rankin-Selberg $L$-functions for $\operatorname{GL}_n({\bf{A}}_F) \times \operatorname{GL}_2({\bf{A}}_F)$ of $\Pi$ times some dihedral representation $\pi(\chi)$ of $\operatorname{GL}_2({\bf{A}}_F)$ induced from a Hecke character $\chi$ of $K$. \subsubsection*{Class group twists} We first estimate average values of $L(s, \Pi \times \pi(\chi))$, for $\chi$ ranging over class group characters of the CM extension $K/F$, where $s \in {\bf{C}}$ is any argument with $1/2 - 1/(n^2+1) < \Re(s) < 1$ inside of the critical strip $0 < \Re(s) < 1$. In fact, if the $\operatorname{GL}_n({\bf{A}}_F)$-representation is tempered at the real places of $F$, as predicted by the generalized Ramanujan conjecture, then we obtain nonvanishing estimates for all $0 < \Re(s) < 1$. In any case, the estimates we obtain in dimensions $n \geq 3$ are completely new, even for the rational number field $F = {\bf{Q}}$. 
Recall that the standard $L$-function of $\Pi$ can be described as an Euler product \begin{align*} \Lambda(s, \Pi) = L(s, \Pi_{\infty}) L(s, \Pi) = \prod_{v \leq \infty} L(s, \Pi_v)\end{align*} whose local components at places $v$ of $F$ where $\Pi_v$ is unramified take the form \begin{align*} L(s, \Pi_v) &= \begin{cases} \prod_{j=1}^n \left( 1 - \alpha_{j, v} {\bf{N}} v^{-s} \right)^{-1} &\text{for $v \nmid \infty$ nonarchimedean} \\ \prod_{j=1}^n \Gamma_{\bf{R}}(s - \mu_{j, v}) &\text{for $v \mid \infty$ real}. \end{cases} \end{align*} Here, the complex numbers $\alpha_{j, v}$ and $\mu_{j, v}$ correspond to the Satake parameters of $\Pi_v$, and $\Gamma_{\bf{R}}(s) = \pi^{-\frac{s}{2}} \Gamma \left( \frac{s}{2} \right)$. This $L$-function $\Lambda(s, \Pi)$ is holomorphic, except for simple poles at $s = 0$ and $s=1$ when $\Pi$ is the trivial representation. It also satisfies the functional equation \begin{align*} \Lambda(s, \Pi) &= \epsilon(s, \Pi) \Lambda(1-s, \widetilde{\Pi}), \end{align*} where $\widetilde{\Pi}$ denotes the contragredient representation, and the $\epsilon$-factor $\epsilon(s, \Pi)$ equals \begin{align*} \epsilon(s, \Pi) &= (D_F^n {\bf{N}} \mathfrak{f}(\Pi) )^{\frac{1}{2} - s} W(\Pi). \end{align*} Here, $D_F$ denotes the absolute discriminant of $F$, with ${\bf{N}} \mathfrak{f}(\Pi)$ the absolute norm of the conductor $\mathfrak{f}(\Pi)$, and $W(\Pi) \in {\bf{S}}^1$ the root number. The generalized Ramanujan conjecture predicts that for each place $v$ of $F$ for which $\Pi_v$ is unramified, the Satake parameters satisfy the constraints \begin{align*} \begin{cases}\vert \alpha_{j, v} \vert = 1 &\text{ for each $1 \leq j \leq n$ if $v \nmid \infty$ is nonarchimedean } \\ \vert \Re(\mu_{j, v}) \vert = 0 &\text{ for each $1 \leq j \leq n$ if $v \mid \infty$ is real}. 
\end{cases} \end{align*} We know thanks to Luo-Rudnick-Sarnak \cite[Theorem 2]{LRS2} that we have the following approximations to this conjecture at places $v$ where the local representation $\Pi_v$ is unramified: \begin{align*} \begin{cases} \vert \log_{ {\bf{N}}(v) } \vert \alpha_{j, v} \vert \vert \leq \frac{1}{2} - \frac{1}{n^2 +1} &\text{ for each $1 \leq j \leq n$ if $v \nmid \infty$ is nonarchimedean } \\ \vert \Re(\mu_{j, v}) \vert \leq \frac{1}{2} - \frac{1}{n^2 + 1} &\text{ for each $1 \leq j \leq n$ if $v \mid \infty$ is archimedean}. \end{cases} \end{align*} Let us now assume that the archimedean component $\Pi_{\infty}$ of our fixed representation $\Pi = \otimes_v \Pi_v$ is spherical unless the generalized Ramanujan conjecture is known or taken for granted (cf.~\cite{LRS}, \cite{LRS2}). Let $\chi$ denote a character of the ideal class group of $K$, with $\pi(\chi)$ again denoting the induced automorphic representation of $\operatorname{GL}_2({\bf{A}}_F)$. The corresponding Rankin-Selberg $L$-function is defined (first for $\Re(s) \gg 1$) by an Euler product \begin{align*} \Lambda(s, \Pi \times \pi(\chi)) = L(s, \Pi_{\infty} \times \pi(\chi)_{\infty}) L(s, \Pi \times \pi(\chi)) = \prod_{v \leq \infty} L(s, \Pi_v \times \pi(\chi)_v).\end{align*} This completed function $\Lambda(s, \Pi \times \pi(\chi))$ is entire, and satisfies a functional equation of the following form (\cite[Proposition 4.1]{BR}): Assuming that $\mathfrak{D}$ is coprime to $\mathfrak{f}(\Pi)$, we have for any class group character $\chi$ of $K$ that \begin{align}\label{FE} \Lambda(s, \Pi \times \pi(\chi)) &= \epsilon(s, \Pi \times \pi(\chi)) \Lambda(1 - s, \widetilde{\Pi} \times \pi(\chi)), \end{align} where the epsilon factor $\epsilon(s, \Pi \times \pi(\chi))$ is given by the formula \begin{align*} \epsilon(s, \Pi \times \pi(\chi)) &= \left(D_K^n {\bf{N}} \mathfrak{f}(\Pi_K) \right)^{\frac{1}{2} - s} \epsilon(1/2, \Pi \times \pi(\chi)), \end{align*} and the root number $\epsilon(1/2, \Pi \times 
\pi(\chi)) \in {\bf{S}}^1$ by the formula \begin{align*} \epsilon(1/2, \Pi \times \pi(\chi)) &= W(\Pi_K) = W(\Pi) W(\Pi \otimes \eta). \end{align*} Here we write $\Pi_K$ to denote the basechange of $\Pi$ to $\operatorname{GL}_n({\bf{A}}_K)$, and use the well-known equality of $L$-functions $\Lambda(s, \Pi \times \pi(\chi)) = L(s, \Pi_K \otimes \chi)$. Now, it is also well-known that the root number $\epsilon(1/2, \Pi \times \pi(\chi))$ does not depend on the choice of $\chi$ (see e.g.~Lemma \ref{root}). We are therefore justified in dropping the $\chi$ from the notation, writing $W(\Pi_K) = \epsilon(1/2, \Pi \times \pi(\chi))$ for each $\chi$ to denote the corresponding root number. Let us also consider the quotient of archimedean factors more simply as \begin{align}\label{F} F(s) &= \frac{L(1-s, \widetilde{\Pi}_{\infty} \times \pi(\chi)_{\infty})}{L(s, \Pi_{\infty} \times \pi(\chi)_{\infty})}.\end{align} This quotient is also independent of the choice of class group character $\chi$ (see Lemma \ref{arch}), and so we are henceforth justified in dropping the $\chi$ from the notation here as well. Notice too that $F(\delta) \neq 0$ for any $\delta \in {\bf{C}}$ in the interval $\frac{1}{2} - \frac{1}{n^2 + 1} < \Re(\delta) < 1$ thanks to the theorem of Luo-Rudnick-Sarnak \cite{LRS2}. Moreover, if we know or assume the generalized Ramanujan conjecture at the real places of $F$, then $F(\delta) \neq 0$ for any $\delta \in {\bf{C}}$ in the critical strip $0 < \Re(\delta) <1$. Let us now fix some $\delta \in {\bf{C}}$ inside of the critical strip $0 < \Re(\delta) < 1$. In the event that $\delta = 1/2$ and $\Pi$ is self-contragredient, we shall also assume (for nontriviality) that the root number $W(\Pi_K)$ is not $-1$. That is, we shall exclude this situation, as it corresponds to having forced vanishing of central values by the functional equation $(\ref{FE})$. 
Writing $C(\mathcal{O}_K)^{\vee}$ to denote the character group of the ideal class group $C(\mathcal{O}_K)$ of $\mathcal{O}_K$, we estimate the corresponding average of values at $\delta$, denoted by \begin{align*} X_{\mathfrak{D}}(\Pi, \delta) &= \frac{1}{\vert C(\mathcal{O}_K) \vert } \sum_{\chi \in C(\mathcal{O}_K)^{\vee}} L(\delta, \Pi \times \pi(\chi)). \end{align*} To describe the estimates we obtain for this average, let us first introduce the symmetric square $L$-function $\Lambda(s, \operatorname{Sym^2} \Pi) = \prod_{v \leq \infty} L(s, \operatorname{Sym^2} \Pi_v)$ (see \cite{Sha2}, \cite{Sha}). Note that at a finite place $v$ of $F$ where $\Pi_v$ is unramified, the local Euler factor is given by the expression \begin{align*} L(s, \operatorname{Sym^2} \Pi_v) &= \prod_{1 \leq i \leq j \leq n} \left(1 - \alpha_{i, v} \alpha_{j, v} {\bf{N}} v^{-s} \right)^{-1}. \end{align*} Let $c_{\Pi}$ denote the $L$-function coefficients of $\Pi$, so that the finite part $L(s, \Pi)$ of the standard $L$-function $\Lambda(s, \Pi) = L(s, \Pi_{\infty}) L(s, \Pi)$ has the expansion $L(s, \Pi) = \sum_{\mathfrak{n} \subset \mathcal{O}_F} c_{\Pi}(\mathfrak{n}) {\bf{N}} \mathfrak{n}^{-s}$ for $\Re(s) \gg 1$. We shall consider the following Dirichlet series expansion related to $L(s, \operatorname{Sym^2} \Pi)$ for $\Re(s) \gg 1$, \begin{align*} L^{\star}(s, \operatorname{Sym^2} \Pi) &= L(2s, \omega) \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n}^2)}{{\bf{N}}\mathfrak{n}^s} = \sum_{ \mathfrak{m} \subset \mathcal{O}_F} \frac{ \omega (\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{2s}} \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n}^2)}{{\bf{N}}\mathfrak{n}^s}. \end{align*} This series agrees with the symmetric square $L$-function $L(s, \operatorname{Sym^2} \Pi)$ up to a product of Euler factors which converges absolutely for $\Re(s) > 1/2$. 
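We note in passing why the principal class ${\bf{1}}$ plays a distinguished role in these averages. By the standard orthogonality relation for characters of the finite abelian group $C(\mathcal{O}_K)$, we have for any class $A \in C(\mathcal{O}_K)$ that \begin{align*} \frac{1}{\vert C(\mathcal{O}_K) \vert} \sum_{\chi \in C(\mathcal{O}_K)^{\vee}} \chi(A) &= \begin{cases} 1 &\text{if $A = {\bf{1}}$} \\ 0 &\text{otherwise,} \end{cases} \end{align*} so that averaging over all class group characters $\chi$ isolates the contribution of ideals lying in the principal class. 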
Note that by Shahidi \cite[Theorem 5.1]{Sha} and \cite{Sha2}, we know that $L(1, \operatorname{Sym^2} \Pi)$ does not vanish, from which it is simple to deduce that $L^{\star}(1, \operatorname{Sym^2} \Pi)$ does not vanish. We refer to \cite{GHL} and \cite{CM} for more on the analytic properties satisfied by these $L$-functions. Here, we shall consider the partial $L$-series defined by the following Dirichlet series expansions: Writing ${\bf{1}} = {\bf{1}}_F$ to denote the principal class of $F$ (so that ${\bf{1}}$ is simply the ring of integers $\mathcal{O}_F$ when $F$ has class number one), we consider \begin{align*} L^{\star}_{\bf{1}}(s, \operatorname{Sym^2} \Pi) &= L_{\bf{1}}(2s, \omega) \sum_{\mathfrak{n} \in {\bf{1}}} \frac{c_{\Pi}(\mathfrak{n}^2)}{{\bf{N}}\mathfrak{n}^s} = \sum_{ \mathfrak{m} \in {\bf{1}} } \frac{ \omega (\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{2s}} \sum_{\mathfrak{n} \in {\bf{1}}} \frac{c_{\Pi}(\mathfrak{n}^2)}{{\bf{N}}\mathfrak{n}^s}. \end{align*} Note that this Dirichlet series does not generally admit an Euler product. We obtain the following estimate for the average $X_{\mathfrak{D}}(\Pi, \delta)$, given in terms of these partial symmetric square $L$-values $L^{\star}_{\bf{1}}(s, \operatorname{Sym^2} \Pi)$. Let $w_K$ denote the number of roots of unity in $K$ not contained in $\mathcal{O}_F$, or equivalently the number of automorphs of any $F$-rational binary quadratic form representative associated to the principal class of $\mathcal{O}_K$. \begin{theorem}[Theorem \ref{CGTmain}]\label{SCest} Let $\Pi = \otimes_v \Pi_v$ be a cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation of conductor $\mathfrak{f}(\Pi) \subset \mathcal{O}_F$ and unitary central character $\omega = \otimes_v \omega_v$. Let $K$ be a totally imaginary quadratic extension of $F$ of relative discriminant $\mathfrak{D} \subset \mathcal{O}_F$ prime to $\mathfrak{f}(\Pi)$ and corresponding idele class character $\eta$. 
Let $\delta \in {\bf{C}}$ be any complex number in the critical strip $0 < \Re(\delta) < 1$. Writing $D_K = {\bf{N}} \mathfrak{D}$ to denote the absolute discriminant of $K$, and $Y = (D_K^n {\bf{N}} \mathfrak{f}(\Pi_K))^{\frac{1}{2}}$ the square root of the conductor of each $L$-function in the average $X_{ \mathfrak{D} }(\Pi, \delta)$, we have for any choice of $\varepsilon >0$ the estimate \begin{equation*}\begin{aligned} &X_{ \mathfrak{D} }(\Pi, \delta)\\ &= \frac{1}{w_K} \left( L(2 \delta, \eta \omega) \cdot \frac{L_{\bf{1}}(4(1 - \delta), \overline{\omega})}{L_{\bf{1}}(4 \delta, \omega)} \cdot L^{\star}_{{\bf{1}}}(2 \delta, \operatorname{Sym^2} \Pi) + W(\Pi_K) \cdot Y^{1 - 2 \delta} \cdot F(\delta) \cdot L(2 - 2\delta, \eta \overline{\omega}) \cdot L^{\star}_{\bf{1}}(2(1-\delta), \operatorname{Sym^2} \widetilde{\Pi}) \right) \\ &+ O_{\Pi, \varepsilon} \left( Y^{\frac{3}{4} - \Re(\delta) + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{- \frac{1}{2}} \right). \end{aligned}\end{equation*} \end{theorem} It is easy to see from this estimate that if $\Re(\delta) > \frac{1}{2} - \frac{1}{n^2 + 1}$, so that the factor of $F(\delta)$ in the second residual term does not vanish, then the average $X_{\mathfrak{D}}(\Pi, \delta)$ does not vanish for sufficiently large absolute discriminant $D_K \gg 1$ whenever the inequality $Y^{1-2 \Re(\delta)} > Y^{\frac{3}{4} + \sigma_0 - \Re(\delta)}$ is satisfied, equivalently whenever $\Re(\delta) < \frac{1}{4} - \sigma_0$. In other words, Theorem \ref{SCest} implies that $X_{\mathfrak{D}}(\Pi, \delta)$ does not vanish for $D_K \gg 1$, provided that the constraint \begin{align*} \frac{1}{2} - \frac{1}{n^2 + 1} < \Re(\delta) < \frac{1}{4} - \sigma_0 \end{align*} is satisfied. 
If we assume that $\Pi$ satisfies the generalized Ramanujan conjecture at the real places of $F$, which is for instance the case for the cohomological representations studied in \cite{GHL}, then we can deduce when $0 < \Re(\delta) < \frac{1}{4} - \sigma_0$ that the corresponding average $X_{\mathfrak{D}}(\Pi, \delta)$ does not vanish as ${\bf{N}}\mathfrak{D} = D_K \rightarrow \infty$. These results are completely new in higher dimensions $n \geq 3$, even over the rational number field $F={\bf{Q}}$. They have several antecedents in the literature, not only the theorem of Luo-Rudnick-Sarnak \cite{LRS2} (cf.~\cite{LRS}), but also that of Barthel-Ramakrishnan \cite{BR} dealing with regions in the critical strip closer to the edge $\Re(s) = 1$. In the special case of dimension $n=2$, there is the theorem of Rohrlich \cite{Ro3}, which deals with central values of the completed $L$-function $\Lambda(s, \Pi_K \otimes \chi)$ for $\Pi_K$ a Hecke Grossencharacter and $\chi$ a ring class character. There is also the estimate of Templier \cite{Te} for central derivative values corresponding to the degenerate case of $\Pi \cong \widetilde{\Pi}$ with $W(\Pi_K) = -1$ described above, as well as the estimates for moments implicit in the shifted convolution bounds of Templier-Tsimerman \cite{TeT} (see also \cite{VO9}). \subsubsection*{Galois averages} In general, we can also estimate averages over ring class characters of central values in this direction, motivated in part by a conjecture of Harris et al.~(see e.g.~\cite[Conjecture 3.7]{GHL}), which leads us to the second and more arithmetic part of this work with central values and cohomological representations. Here, as we explain, this method of estimating moments via Theorem \ref{SCS} is limited in its applicability to nonvanishing applications for small dimensions $n \leq 4$. Nevertheless, we explain the general setup, as we anticipate it will be useful for future work, and outline the connection to periods of automorphic motives. 
We first derive estimates for the following averages of central values. Let us first fix a prime ideal $\mathfrak{c} \subset \mathcal{O}_F$ coprime to the conductor of $\Pi$. We consider averages over primitive ring class characters $\chi$ of $K$ of conductor $\mathfrak{c}$. That is, we consider characters of the class group $C(\mathcal{O}_{\mathfrak{c}})$ of the $\mathcal{O}_F$-order $\mathcal{O}_{\mathfrak{c}} := \mathcal{O}_F + \mathfrak{c} \mathcal{O}_K$ of conductor $\mathfrak{c}$ which do not factor through $C(\mathcal{O}_{\mathfrak{c}'})$ for any proper divisor $\mathfrak{c}' \mid \mathfrak{c}$. Writing $P(\mathfrak{c})$ to denote the set of such characters, we consider the corresponding weighted average \begin{align*} X_{\mathfrak{c}}(\Pi) &= \frac{1}{\vert P(\mathfrak{c}) \vert } \sum\limits_{ \chi \in P(\mathfrak{c}) } L(1/2, \Pi \times \pi(\chi)).\end{align*} We also consider the following more Iwasawa-theoretic subaverages. Let $\mathfrak{p} \subset \mathcal{O}_F$ be a prime ideal with underlying rational prime $p$. We also consider certain averages over ring class characters which factor through the profinite limit $C(\mathcal{O}_{\mathfrak{p}^{\infty}}) = \varprojlim_{\alpha} C(\mathcal{O}_{\mathfrak{p}^{\alpha}})$. To be more precise, we consider primitive ring class characters of some conductor $\mathfrak{p}^{\alpha}$ whose induced character on the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$ is fixed. That is, fixing $\chi_0$ some character of $C_0$, we consider weighted averages over primitive ring class characters of conductor $\mathfrak{p}^{\alpha}$ whose induced character on $C_0$ is $\chi_0$. 
Writing $P(\mathfrak{p}^{\alpha}, \chi_0)$ to denote the set of such characters, we also consider the corresponding weighted average \begin{align*} X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) &= \frac{1}{\vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert} \sum\limits_{ \chi \in P(\mathfrak{p}^{\alpha}, \chi_0) } L(1/2, \Pi \times \pi(\chi)). \end{align*} These averages are sometimes referred to as ``tame'' subaverages (see \cite{Va}, \cite{CV}), as the abelian extension of $K$ cut out by $C_0$ contains a tamely ramified subextension of the tower of ring class extensions of $K$ of $\mathfrak{p}$-power conductor. We also derive the following estimates for these averages. \begin{theorem}[Corollary \ref{GAmain}] Let $\Pi$ be a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with unitary central character $\omega$. Let us again fix a totally imaginary quadratic extension $K$ of $F$ with corresponding character $\eta$. Again, we write $0 \leq \sigma_0 \leq 1/4$ to denote the best approximation to the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, with $\sigma_0=0$ conjectured, $\sigma_0 = 103/512$ admissible by the theorem of Blomer and Harcos \cite{BH10}, and moreover $\sigma_0 = 3/16$ admissible when $F = {\bf{Q}}$ by the antecedent theorem of Blomer-Harcos \cite{BH07}. 
\\ \begin{itemize} \item[(i)] Given $\mathfrak{c} \subset \mathcal{O}_F$ coprime to the conductor of $\Pi$, the corresponding average $X_{\mathfrak{c}}(\Pi)$ can be estimated as \begin{align*} X_{\mathfrak{c}}(\Pi) &= \frac{1}{w_K} \left( L(1, \omega \eta) \cdot \frac{ L_{\bf{1}}^{\star}(1, \operatorname{Sym^2} \Pi) }{ L_{\bf{1}}(2, \omega) } + W(\Pi_K) \cdot L(1, \overline{\omega} \eta) \cdot \frac{ L_{\bf{1}}^{\star} (1, \operatorname{Sym^2} \widetilde{\Pi}) }{ L_{\bf{1}}(2, \overline{\omega})} \right) \\ &+ O_{\Pi, \varepsilon} \left( Y^{ \frac{1}{4} + \sigma_0 + \varepsilon} \cdot {\bf{N}}( \mathfrak{c}^2 \mathfrak{D})^{-\frac{1}{2}} \right). \end{align*} In particular, assuming $W(\Pi_K) \neq -1$ in the event that $\Pi \cong \widetilde{\Pi}$ is self-dual, if the constraint $n(1/4 + \sigma_0) < 1$ is satisfied, then $X_{\mathfrak{c}}(\Pi) \neq 0$ as ${\bf{N}}(\mathfrak{c}^2 \mathfrak{D}) \rightarrow \infty$. \\ \item[(ii)] Given a prime ideal $\mathfrak{p} \subset \mathcal{O}_F$ coprime to the conductor of $\Pi$, an integer $\alpha \geq 1$, and $\chi_0$ a character of the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$, the corresponding ``tame'' subaverage $X_{ \mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ of $X_{\mathfrak{p}^{\alpha}}(\Pi)$ can be estimated as follows. Given a class $A \in C(\mathcal{O}_{\mathfrak{p}^{\alpha}})$, we write $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$ to denote the primitive reduced binary quadratic form class representative, hence with $F$-integer coefficients $a_A, b_A, c_A \in \mathcal{O}_F$ satisfying ${\bf{N}} b_A \leq {\bf{N}} a_A \leq {\bf{N}} c_A$. 
Let us simplify notation by writing \begin{align*} R(\Pi, a_A; q) &:= \frac{1}{w_K} \sum\limits_{q \mid a_A} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \omega \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym^2} \Pi) }{ L_{\bf{1}}(2, \omega) } \\ R(\widetilde{\Pi}, a_A; q) &:= \frac{1}{w_K} \sum\limits_{q \mid a_A} \mu(q) \cdot \overline{\omega}(q) \cdot \frac{ c_{\widetilde{\Pi}}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \overline{\omega} \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym^2} \widetilde{\Pi}) }{ L_{\bf{1}}(2, \overline{\omega}) }\end{align*} for each divisor $q \mid a_A$ to denote the residual terms appearing in Proposition \ref{residue2}. We have that \begin{align*} &X_{ \mathfrak{p}^{\alpha}, \chi_0}(\Pi) \\ &= \sum\limits_{A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})} \sum\limits_{q \mid a_A} \left( R(\Pi, a_A; q) + W(\Pi_K) \cdot R(\widetilde{\Pi}, a_A; q) \right) + O_{\Pi, \varepsilon} \left( Y^{\frac{1}{4} + \sigma_0 + \varepsilon} \cdot {\bf{N}} a_A \cdot {\bf{N}}(\mathfrak{p}^{2\alpha} \mathfrak{D})^{-\frac{1}{2}} \right). \end{align*} In particular, assuming $W(\Pi_K) \neq -1$ in the event that $\Pi \cong \widetilde{\Pi}$ is self-dual, if the constraint $n(1/4 + \sigma_0) < 1$ is satisfied, then $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) \neq 0$ as $\alpha \rightarrow \infty$. \\ \end{itemize} \end{theorem} Let us now make a few remarks on nonvanishing in this context. It is easy to see that for $n=2$, these estimates imply the nonvanishing of the corresponding averages as either ${\bf{N}} \mathfrak{c}$, $\alpha$, or ${\bf{N}} \mathfrak{D} = D_K$ tends to infinity. Here, we use the well-known nonvanishing of the symmetric square $L$-function $L(s, \operatorname{Sym^2} \Pi)$ at $s=1$, shown in this setting by Shahidi \cite{Sha2}. 
In this way, as we explain, we can derive completely analytic proofs of the theorems of Vatsal \cite{Va} and Cornut-Vatsal \cite{CV}, generalizing earlier theorems of Rohrlich \cite{Ro2} and Greenberg \cite{Gr}. In higher rank, we see an immediate limitation of this method to $n \leq 4$ by inspection of the exponents in the error terms. Nevertheless, we derive novel nonvanishing estimates for rank $n=3$ in this way. We also explain how the nonvanishing of the tame averages $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ in general can be used to derive stronger nonvanishing theorems in the style of Rohrlich \cite{Ro}, \cite{Ro2}, Vatsal \cite{Va}, and Cornut-Vatsal \cite{CV}, \cite{CV2} in the setting where $\Pi = \otimes_v \Pi_v$ is a regular cohomological representation of $\operatorname{GL}_n({\bf{A}}_F)$ of even dimension $n \geq 2$. That is, we use various known results towards Deligne's rationality conjecture \cite[$\S 2$]{De} (cf.~\cite[$\S 3$]{HL}) to derive the following. \begin{theorem}[Theorem \ref{GAL}] Let $n \geq 2$ be an even integer. Let $\Pi = \otimes_v \Pi_v$ be a self-dual, cuspidal, cohomological automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$. Writing $\lambda = (\lambda_{j})_{j=1}^n$ to denote the corresponding highest weight, and $w = \lambda_{j} + \lambda_{n+1 - j}$ (for any choice of $j$) the purity weight, we assume that $\Pi$ is regular, i.e.~that $\lambda_{i} \neq \lambda_{j}$ for each pair of indices $i \neq j$. Let us assume additionally that the Hodge-Tate weights of $\Pi$ satisfy the condition that $\lambda_j \neq \frac{w}{2}$ for each index $1 \leq j \leq n$. Fix $K$ a totally imaginary quadratic extension of $F$, together with $\mathfrak{p} \subset \mathcal{O}_F$ a prime ideal coprime to the conductor of $\Pi$ with underlying rational prime $p$. 
Fix $\chi_0$ a character of the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$ of the profinite limit \begin{align*} C(\mathcal{O}_{\mathfrak{p}^{\infty}}) &= \varprojlim_{\alpha} C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \cong {\bf{Z}}_p^{\delta_{\mathfrak{p}}} \times C_0, \quad \delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p]. \end{align*} Suppose that \\ \begin{itemize} \item[(i)] The weighted average $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) = \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert^{-1} \sum_{\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} L(1/2, \Pi \times \pi(\chi))$ does not vanish for all sufficiently large exponents $\alpha \gg 1$. Equivalently, suppose there exists for each sufficiently large exponent $\alpha \gg 1$ a ring class character $\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)$ for which $L(1/2, \Pi \times \pi(\chi)) \neq 0$. \\ \item[(ii)] The Hecke field ${\bf{Q}}(\Pi_K) = {\bf{Q}}(\Pi)$ is linearly disjoint over ${\bf{Q}}$ from the cyclotomic field ${\bf{Q}}(\mu_{p^{\infty}})$ obtained by adjoining to ${\bf{Q}}$ all $p$-power roots of unity. \\ \end{itemize} If the residue degree $\delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p]$ of $\mathfrak{p}$ equals one, then for all but finitely many ring class characters $\chi$ of $p$-power conductor, the central value $L(1/2, \Pi \times \pi(\chi))$ does not vanish. More generally, if $F$ is an arbitrary totally real number field with $\mathfrak{p} \subset \mathcal{O}_F$ a prime ideal of residue degree $\delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p] \geq 1$, then for a positive proportion of all primitive ring class characters $\chi$ of $\mathfrak{p}$-power conductor, the central value $L(1/2, \Pi \times \pi(\chi))$ does not vanish. 
That is, for each sufficiently large exponent $\alpha \gg 1$, there exists a character $\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)$ for which the corresponding Galois average $G_{[\chi]}(\Pi)$ does not vanish, and the collection of all such Galois conjugate values as $\alpha \rightarrow \infty$ describes this positive density set. \end{theorem} We remark that such nonvanishing properties have well-known applications to the corresponding Iwasawa-Greenberg and Bloch-Kato main conjectures (see e.g.~\cite{Gr94}). Hence, this result could be viewed as a criterion describing the minimal amount of ``analytic'' input required to derive nontriviality properties for typical or conjectural/expected constructions of $p$-adic $L$-functions and Euler systems in this setup -- at least in the lower rank cases (e.g.~$n=2$), where such constructions have already been given. \subsection{Connection to periods of automorphic motives} To illustrate motivation, let us outline connections to Deligne's conjecture and the factorization of periods of automorphic motives over CM fields due to Grobner-Harris-Lin \cite{GHL18} (cf.~\cite{MH}, \cite{GH}, and \cite{HL}) and its forthcoming sequel. These latter results relate automorphic periods on different groups, these periods being associated to motives (for absolute Hodge cycles) occurring in the cohomology of different Shimura varieties, and are consistent with the predictions of Tate's conjecture on cycle classes in $l$-adic cohomology. Essentially, these results relate critical values of $L$-functions of automorphic motives associated to cohomological automorphic forms on unitary groups to periods of arithmetically normalized automorphic forms on the corresponding unitary Shimura varieties. We begin with the following formal consequence of Theorem \ref{GAL}, which in particular shows special cases of \cite[Conjecture 3.7]{GHL}. 
\begin{conjecture}\label{3.7} Let $\Pi_K$ be any cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_K)$. Let $S$ be any finite set of nonarchimedean places of the totally real field $F$. Fix for each place $v \in S$ a continuous algebraic character $\alpha_v: \operatorname{GL}_1(\mathcal{O}_{F_v}) \rightarrow {\bf{C}}^{\times}$. Let $\rho_{\infty}$ be an algebraic character of $\operatorname{GL}_1(K \otimes_{\bf{Q}} {\bf{R}})$. There exists an idele class character $\rho = \otimes_w \rho_w$ of $K$ with conjugate self-dual archimedean component $\rho_{\infty}$ such that $\rho\vert_{F_v^{\times}} = \alpha_v$ for each $v \in S$ and such that $L(1/2, \Pi_K \otimes \rho) \neq 0$. \end{conjecture} \begin{corollary}\label{3.7cor} If $n \leq 3$ and $\Pi_K$ arises via cyclic basechange (in the sense of Arthur-Clozel \cite{AC}) from a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}_F})$, then Conjecture \ref{3.7} is true. \end{corollary} \begin{proof} Let $\chi' = \otimes_w \chi_w'$ be any idele class character of $K$ whose archimedean component is given by $\chi_{\infty}' = \rho_{\infty}$, and whose restrictions to $F_v^{\times}$ for each $v \in S$ are given by $\chi'\vert_{F_v^{\times}} = \alpha_v$. Put $\Pi_K' = \Pi_K \otimes \chi'$, and view this as a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_K)$. We argue that $\Pi_K'$ is only self-dual if each continuous character $\alpha_v$ is trivial, in which case we can assume without loss of generality that $W(\Pi_K' \otimes \rho_{\infty}) \neq -1$ after replacing $\Pi_K'$ by $\Pi_K' \otimes \xi_K$, where $\xi_K = \xi \circ {\bf{N}}$ denotes the composition with the norm homomorphism ${\bf{N}}: K \rightarrow {\bf{Q}}$ of some suitable primitive even Dirichlet character $\xi$. Theorem \ref{GAmain} implies that there exists a class group character $\chi$ of $K$ for which $L(1/2, \Pi_K' \otimes \chi) = L(1/2, \Pi_K \otimes \chi' \chi)$ does not vanish. 
Since any such character $\chi$ has trivial archimedean component $\chi_{\infty}$ and trivial restriction $\chi \vert_{ {\bf{A}}_F^{\times} }$ to ${\bf{A}}_F^{\times}$, it is easy to deduce that the idele class character $\rho = \chi' \chi$ of $K$ has archimedean component $\rho_{\infty} = \chi_{\infty}'$ and local restrictions $\rho \vert_{ F_v^{\times} } = \alpha_v$ for each $v \in S$. A similar statement can be deduced for $\chi$ a ring class character of $K$ of some conductor coprime to the conductor of $\Pi_K'$ and the discriminant of $K$ via Theorem \ref{GAmain}. \end{proof} We now outline the application of this result via \cite[Theorem 5.2]{GHL18} to the factorization of automorphic periods and Deligne's conjecture for automorphic motives over CM fields, pointing to precise references for details. Recall that, given a motive $\mathcal{M}$ of rank $n$ defined over ${\bf{Q}}$ with coefficients in a number field $E$, one can construct via the corresponding $l$-adic realization an $L$-series $L(s, \mathcal{M})$ (see e.g.~\cite[$\S 2$]{De} or \cite[$\S 2$]{HL}). This $L$-series $L(s, \mathcal{M})$ is a priori absolutely convergent only for $\Re(s) \gg 1$, but conjectured to have an analytic continuation $\Lambda(s, \mathcal{M}) = L_{\infty}(s, \mathcal{M})L(s, \mathcal{M})$ to all $s \in {\bf{C}}$ for some archimedean component of gamma factors $L_{\infty}(s, \mathcal{M})$, and to satisfy a functional equation $\Lambda(s, \mathcal{M}) = \epsilon(s, \mathcal{M})\Lambda(1-s, \widetilde{\mathcal{M}})$. Granted such an analytic continuation, an integer $m$ is said to be {\it{critical}} if neither $L_{\infty}(s, \mathcal{M})$ nor $L_{\infty}(1-s, \widetilde{\mathcal{M}})$ has a pole at $s = m$, in which case Deligne's rationality conjecture predicts the following description of the special value $L(m, \mathcal{M})$. 
Writing $c(\mathcal{M})$ to denote the Deligne period obtained via the comparison isomorphism between the complexified Betti and de Rham realizations $\mathcal{M}_B \otimes {\bf{C}} \cong \mathcal{M}_{dR} \otimes {\bf{C}}$ (with respect to fixed $E$-bases), and $c^{\pm}(\mathcal{M})$ those associated to the invariants of $\pm 1$ times the infinite Frobenius map $\Phi_{\infty}: \mathcal{M}_B \otimes {\bf{C}} \rightarrow \mathcal{M}_B \otimes {\bf{C}}$, the special value $L(m, \mathcal{M})$ should be given up to a nonzero scalar in the number field $E$ by the number $(2 \pi i)^{m n^{\pm}} c^{\pm}(\mathcal{M})$. Hence, writing $\sim_E$ to denote equality up to multiplication by a nonzero scalar in $E$, Deligne's conjecture predicts that \begin{align*} L(m, \mathcal{M}) \sim_E (2 \pi i)^{m n^{\pm}} c^{\pm}(\mathcal{M}). \end{align*} A natural analogue of this conjecture can be made for tensor products of motives over number fields, as explained in \cite[$\S$ 2-3]{HL}. If $\mathcal{M}$ arises via the Weil restriction of scalars from a tensor product of automorphic motives $M(\Pi_K)$ and $M(\Pi_K')$ associated to cohomological automorphic representations $\Pi_K$ of $\operatorname{GL}_n({\bf{A}}_K)$ and $\Pi_K'$ of $\operatorname{GL}_{n-1}({\bf{A}}_K)$ defined over the CM field $K/F$, so $\mathcal{M} = \operatorname{Res}_{K/{\bf{Q}}}(M(\Pi_K) \otimes M(\Pi_K'))$, then more can be said about this conjecture. In particular, if $\Pi_K$ and $\Pi_K'$ arise via basechange from cuspidal automorphic representations of unitary groups defined over $F$, then this conjecture -- including a construction of the motives $M(\Pi_K)$ and $M(\Pi_K')$ -- is to be addressed in the forthcoming work of Grobner-Harris-Lin, using as input the following factorization of automorphic periods (\cite[Theorem 5.2]{GHL18}).
Roughly speaking, if for $\rho$ an idele class character of $K$ the central value $L(1/2, \Pi_K \otimes \rho)$ does not vanish, then the strategy initiated in Harris \cite{MH} and Grobner-Harris \cite{GH} using known cases of the so-called Ichino-Ikeda-Neal-Harris (IINH) conjecture can be used to express $L(1/2, \Pi_K \otimes \rho)$ in terms of periods of arithmetically normalized automorphic forms on some related unitary Shimura varieties. Taking the setup of this approach for granted, the preliminary work \cite{GHL} derives a factorization of these latter automorphic periods (to be used crucially in the sequel work), which as explained in \cite[$\S 2$]{MH} is consistent with implications of Tate's conjecture on algebraic cycles. To describe and state this factorization theorem more precisely, let us first fix some conventions. Let $S_{\infty} = S_{\infty}(K)$ denote the set of pairs of conjugate complex embeddings $v = (\iota_v, \overline{\iota}_v)$ of $K$ into ${\bf{C}}$, dropping the subscript $v$ from the notations when the context is clear. Note that $S_{\infty}$ can be identified with the set of real places of the maximal totally real subfield $F \subset K$ via restriction to the first component, or equivalently via the assignment $v \rightarrow \iota_v$, and that this fixes a CM type of $K$ which we denote by $\Sigma$. Let us also write $\operatorname{Gal}(K/F) = \lbrace {\bf{1}}, c \rbrace$, so that $c$ denotes the nontrivial automorphism of the quadratic extension $K/F$. Given an integer $n \geq 2$ and an $n$-dimensional nondegenerate $c$-hermitian space $(V_n, \langle \cdot , \cdot \rangle )$ defined over $K$, we write $H = H_n = U(V_n)$ to denote the corresponding unitary group defined over $F$. 
Recall that a representation $\Pi_{K, \infty}$ of $G_{\infty} = G_{n, \infty} = \operatorname{Res}_{K/{\bf{Q}}}(\operatorname{GL}_n)({\bf{R}})$ is said to be {\it{cohomological}} if there exists a highest weight module $\mathcal{E}_{\mu}$ for which the corresponding relative Lie algebra cohomology $H^*(\operatorname{Lie}(G_{\infty}), \operatorname{Lie}(K_{G_{\infty}}), \Pi_{K, \infty} \otimes \mathcal{E}_{\mu})$ is nontrivial, where $K_{G_{\infty}} \subset G_{\infty}$ denotes the product of the centre of $G_{\infty}$ with the connected component of the identity of a maximal compact subgroup of $G_{\infty}$. This module $\mathcal{E}_{\mu}$ is an irreducible finite-dimensional representation of the real Lie group $G_{\infty}$ determined by its highest weight $\mu = (\mu_v)_{v \in S_{\infty}}$. We assume throughout that $\mathcal{E}_{\mu}$ is algebraic in the sense that the weight $\mu$ has integer coordinates $\mu = (\mu_{\iota_v}, \mu_{\overline{\iota}_v}) \in {\bf{Z}}^n \times {\bf{Z}}^n$ (with respect to standard choices of subtorus and basis of complexified Lie algebra of $G_{\infty}$). Let us also recall that for a given integer $m \geq 1$, the representation $\Pi_{K, \infty}$ is said to be {\it{$m$-regular}} if $\mu_{\iota_v, j} - \mu_{\overline{\iota}_v, j+1} \geq m$ and $\mu_{\overline{\iota}_v, j} - \mu_{\iota_v, j+1} \geq m$ for all pairs $v = (\iota_v, \overline{\iota}_v) \in S_{\infty}$ and indices $1 \leq j \leq n-1$. If $m=1$, then $\Pi_{K, \infty}$ is said simply to be {\it{regular}}.
In a similar way, a representation $\pi_{\infty}$ of $H_{\infty} = H_{n, \infty} = \operatorname{Res}_{K/{\bf{Q}}}(H_n)({\bf{R}})$ is said to be {\it{cohomological}} if there exists a highest weight module $\mathcal{F}_{\lambda}$ for which the corresponding relative Lie algebra cohomology $H^*(\operatorname{Lie}(H_{\infty}), \operatorname{Lie}(K_{H_{\infty}}), \pi_{\infty} \otimes \mathcal{F}_{\lambda})$ is nontrivial, where $K_{H_{\infty}} \subset H_{\infty}$ denotes the product of the centre of $H_{\infty}$ with the connected component of the identity of a maximal compact subgroup of $H_{\infty}$. As explained in \cite[$\S 1.3.2$]{GHL18}, if $\pi$ is a unitary automorphic representation of $H({\bf{A}}_F) = H_n({\bf{A}}_F)$ which is tempered and cohomological with respect to $\widetilde{F}_{\lambda}$, then each of the local archimedean component representations $\pi_v$ of $H_v \cong U(r_v, s_v)$ is isomorphic to one of the $d_v = {n \choose r_v}$ many inequivalent discrete series representations denoted by $\pi_{\lambda, q}$, for $1 \leq q \leq d_v$. In this way, one can assign to such a cohomological representation $\pi_{\infty}$ an $S_{\infty}$-tuple of Harish-Chandra parameters $(A_v)_{v \in S_{\infty}}$ so that $\pi_{\infty} = \otimes_{v \in S_{\infty}} \pi(A_v)$, where $\pi(A_v)$ denotes the discrete series representation of $H_v$ with parameter $A_v$. Suppose now that $\pi = \otimes_v \pi_v$ is a square integrable automorphic representation of $H({\bf{A}}_F)$ whose archimedean component $\pi_{\infty}$ is cohomological in this sense. One knows by works of Labesse \cite{La}, Harris-Labesse \cite{HaLa}, and others (\cite{KK04}, \cite{KK05}, \cite{SM}, \cite{SWS}) that there exists a basechange lifting $\Pi_K = \operatorname{BC}(\pi)$ of $\pi$ to $\operatorname{GL}_n({\bf{A}}_K)$. 
This representation $\Pi_K$ decomposes into an isobaric sum $\Pi_K = \Pi_K(n_1) \boxplus \cdots \boxplus \Pi_K(n_k)$ of conjugate self-dual automorphic representations $\Pi_K(n_j)$ of $\operatorname{GL}_{n_j}({\bf{A}}_K)$ for some partition $\sum_{j=1}^k n_j = n$, where each of the representations $\Pi_K(n_j)$ is determined uniquely in terms of the Satake parameters of $\pi$. In particular, if the archimedean component $\pi_{\infty}$ of $\pi$ is cohomological, then so too is the archimedean component $\Pi_{K, \infty}$ of $\Pi_K$. Equipped with this information, we can then define the global $L$-packet $L_{\Phi}(H, \Pi_K) = L_{\Phi}(H_n, \Pi_K)$ to be the set of all cohomological tempered essentially square integrable (i.e.~square integrable up to a multiple of an idele class character) automorphic representations of $H({\bf{A}}_F)$ which admit a basechange lifting $\operatorname{BC}(\pi) = \Pi_K$ to $\operatorname{GL}_n({\bf{A}}_K)$. Let $H^{(0)} = H_n^{(0)}$ denote the unitary group of a hermitian space over $K$ having signature $(n-1, 1)$ at a fixed complex embedding $\iota_0: K \hookrightarrow {\bf{C}}$ which is definite at each of the remaining complex embeddings $\iota \neq \iota_0$. Given an integer $0 \leq q \leq n-1$, and writing $v_0$ to denote the real place of $F$ corresponding to $\iota_0$, a representation $\pi \in \Pi_{\Phi}(H^{(0)}, \widetilde{\Pi}_K)$ is said to have {\it{degree $q$}} if the corresponding Harish-Chandra parameter $A_{v_0}$ of $\pi_{v_0}$ satisfies the condition $q(\pi_{v_0}) = q(A_{v_0}) = q$ (see \cite[$\S$ 4.3, Definition 5.1]{GHL18}). 
The main theorem of \cite{GHL} is the following factorization of automorphic periods corresponding to arithmetically normalized automorphic forms $\phi_{(r_{\iota_0}, s_{\iota_0})}(\Pi_K)$ on the Shimura variety associated to a unitary group having signature $(r_{\iota_0}, s_{\iota_0})$ at $\iota_0$, and periods associated to arithmetically normalized automorphic forms on the Shimura variety associated to $H^{(0)}$. Given an automorphic representation $\pi$, let us write $E(\pi)$ to denote the compositum of the Hecke field ${\bf{Q}}(\pi)$ with the Galois closure of the CM field $K$ in $\overline{{\bf{Q}}}$. Given nonzero complex numbers $A$ and $B$, we then write $A \sim_{E(\pi)} B$ to denote equality up to a nonzero scalar in $E(\pi)$, so that $A = \alpha B$ for some $\alpha \in E(\pi)^{\times}$. \begin{theorem} Let $\Pi_K$ be a cohomological cuspidal $(n-1)$-regular conjugate self-dual automorphic representation of $\operatorname{GL}_n({\bf{A}}_K)$ of central character $\omega_{\Pi_K}$. Suppose that for each choice of $I = (I_{\iota})_{\iota \in \Sigma} \in \lbrace 0, 1, \ldots, n \rbrace^{\Sigma}$, the contragredient representation $\widetilde{\Pi}_K$ descends via basechange to a cohomological cuspidal automorphic representation $\pi$ of the unitary group $U_I({\bf{A}}_F)$ of signature $(n - I_{\iota}, I_{\iota})$ at each $\iota \in \Sigma$, with tempered archimedean component $\pi_{\infty}$. We then write $P^I(\Pi_K) = \langle \phi_I, \phi_I \rangle$ to denote the Petersson inner product $\langle \cdot, \cdot \rangle$ of an arithmetically normalized automorphic form $\phi_I$ on the Shimura variety corresponding to $U_I$, which as explained in \cite[$\S$2.2]{GHL} decomposes into a product of automorphic periods $P^I(\Pi_K) \sim_{E(\Pi_K)} \prod_{\iota \in \Sigma} P^{(I_{\iota})}(\Pi_K, \iota)$.
Writing $M(\Pi_K)$ to denote the motive over $K$ with coefficients in $E(\Pi_K)$ associated (conjecturally) to $\Pi_K$, these latter automorphic periods can be characterized according to their (conjectural) relation to the motivic periods $Q_i(M(\Pi_K), \iota)$ defined by taking the inner product of a vector in the Betti realization $M(\Pi_K)_{\iota}$ of $M(\Pi_K)$ at $\iota$ whose image under the comparison isomorphism lands in the $i$-th bottom degree of the Hodge filtration for $M(\Pi_K)$. That is, for each integer $0 \leq q \leq n-1$, \begin{align*} Q_{q+1}(M(\Pi_K), \iota) &\sim_{E(\Pi_K)} \frac{P^{(q+1)}(\Pi_K, \iota)}{P^{(q)}(\Pi_K, \iota)}. \end{align*} For each integer $0 \leq q \leq n-1$, let $\pi(q) = \pi(q, \iota_0) \in \Pi_{\Phi}(H^{(0)}, \widetilde{\Pi}_K)$ be an element of degree $q$. Let $Q(\pi(q)) = \langle \phi_{\pi(q)}, \phi_{\pi(q)} \rangle$ denote the Petersson inner product $\langle \cdot, \cdot \rangle$ of a de Rham-rational vector $\phi_{\pi(q)} \in V_{\pi(q)}$ (see \cite[$\S$ 2.6]{GHL}), and $p(\overline{\omega}_{\Pi_K}, \Sigma)$ the CM period associated to the idele class character $\overline{\omega}_{\Pi_K}^c$ of $K$ and the CM-type $\Sigma$ (as defined e.g.~in the appendix to \cite{HK}). Assume in addition that the following criteria are satisfied: \\ \begin{itemize} \item[(a)] The Gan-Gross-Prasad conjecture of Ichino-Ikeda and Neal Harris (\cite[Conjecture 3.3]{GHL18}) for pairs of cohomological representations on totally definite and definite-at-all-but-one-real-place unitary groups over $F$ holds. This is known if these groups have the following signatures (see \cite[Assumption 4.7]{GHL18}): There exists a pair $v_0 \in S_{\infty}(K)$ such that $(r_{v_0}, s_{v_0}) = (n-1, 1)$ and $(r'_{v_0}, s'_{v_0}) = (n-1, 1)$ respectively; for $v \neq v_0$ the signatures are given by $(n, 0)$ and $(n-1, 0)$ respectively. (See also \cite{BP15ii}, \cite{BP15}, and \cite{BP18}). 
\\ \item[(b)] The conclusion of Conjecture \ref{3.7} holds for any cohomological $(n-1)$-regular conjugate self-dual cuspidal automorphic representation $\Pi_K$ of $\operatorname{GL}_2({\bf{A}}_K)$. \\ \item[(c)] The rationality of archimedean local period integrals associated to coherent cohomology classes of Shimura varieties as posed in \cite[Conjecture 4.16]{GHL18} holds. \\ \end{itemize} Then, we have for each integer $0 \leq q \leq n-1$ the factorization of automorphic periods \begin{align*} Q(\pi(q)) &\sim_{E(\pi(q))} p( \overline{\omega}_{\Pi_K}^{c}, \Sigma)^{-1} \frac{P^{(q+1)}(\Pi_K, \iota_0)}{P^{(q)}(\Pi_K, \iota_0)}. \end{align*} \end{theorem} In other words, the works of \cite{MH}, \cite{GH}, and \cite{GHL18} forge a passage to showing the (highly non-obvious) relation of automorphic motivic periods \begin{align*} Q(\pi(q)) &\sim_{E(\pi(q))} p( \overline{\omega}_{\Pi_K}^{c}, \Sigma)^{-1} Q_{q+1}(M(\Pi_K), \iota_0). \end{align*} As explained in \cite[$\S 1$]{GHL18}, the most serious hypothesis of their work is that of \cite[Conjecture 3.7]{GHL18}, which we address for a special case in Corollary \ref{3.7cor} if $\Pi$ is non-orthogonal. To conclude this discussion, we remark that Conjecture \ref{3.7} might be accessible in all cases by an argument closer to the calculations given by Cornut and Vatsal \cite{Va}, \cite{CV}, \cite{CV2}, using the theorems of Ratner \cite{Ra} and Margulis-Tomanov \cite{MT} on $p$-adic unipotent flows, together with recent work in progress on the Gan-Gross-Prasad conjecture for $U_n({\bf{A}}_K) \times U_1({\bf{A}}_K)$ by Beuzart-Plessis, Chaudouard, and Zydor. \section{Rankin-Selberg $L$-functions} Fix $n \geq 2$ an integer, and let $\Pi = \otimes_v \Pi_v$ be a cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation with unitary central character $\omega = \otimes_v \omega_v$ (an idele class character of $F$). 
Fix $K$ a totally imaginary quadratic extension of $F$, and let $\chi$ be a ring class character of $K$ of some conductor $\mathfrak{c} \subset \mathcal{O}_F$. Recall that such characters factor through the class group $C(\mathcal{O}_{\mathfrak{c}})$ of the $\mathcal{O}_F$-order $\mathcal{O}_{\mathfrak{c}} := \mathcal{O}_F + \mathfrak{c} \mathcal{O}_K$ of conductor $\mathfrak{c}$ in $\mathcal{O}_K$, \begin{align*} C(\mathcal{O}_{\mathfrak{c}}) &= {\bf{A}}_K^{\times} / K_{\infty}^{\times} K^{\times} \widehat{ \mathcal{O} }_{ \mathfrak{c} }^{\times}. \end{align*} Given such a character $\chi$, let $\pi(\chi)$ denote the corresponding induced automorphic representation of $\operatorname{GL}_2({\bf{A}}_F)$. Note that this induced representation $\pi(\chi)$ has conductor $\mathfrak{D} \mathfrak{c}^2$, where $\mathfrak{D} = \mathfrak{D}_{K/F} \subset \mathcal{O}_F$ denotes the relative discriminant of $K/F$, and $\mathfrak{c} = c(\chi) \subset \mathcal{O}_F$ the conductor of the ring class character $\chi$. We consider the $\operatorname{GL}_n({\bf{A}}_F) \times \operatorname{GL}_2({\bf{A}}_F)$ Rankin-Selberg $L$-function \begin{align*} \Lambda(s, \Pi \times \pi(\chi)) = \prod_{v \leq \infty} L(s, \Pi_v \times \pi(\chi)_v) = L(s, \Pi_{\infty} \times \pi(\chi)_{\infty}) L(s, \Pi \times \pi(\chi))\end{align*} of $\Pi$ times $\pi(\chi)$, whose Euler factors $L(s, \Pi_v \times \pi(\chi)_v)$ at places $v$ of $F$ where both local representations $\Pi_v$ and $\pi(\chi)_v$ are unramified take the form \begin{align*} L(s, \Pi_v \times \pi(\chi)_v) &= \begin{cases} \prod_{j=1}^n \prod_{k=1}^2 \left(1 - \alpha_j(\Pi_v) \alpha_k(\pi(\chi)_v) {\bf{N}}v^{-s} \right)^{-1} &\text{ if $v$ is nonarchimedean} \\ \prod_{j=1}^n \prod_{k=1}^2 \Gamma_{\bf{R}}(s - \mu_j(\Pi_v) - \mu_k(\pi(\chi)_v)) &\text{ if $v$ is real}.
\end{cases} \end{align*} \subsection{Basechange equivalence} Let us now record that $\Lambda(s, \Pi \times \pi(\chi))$ is equivalent to the $\operatorname{GL}_n({\bf{A}}_K) \times \operatorname{GL}_1({\bf{A}}_K)$ $L$-function of the basechange $\Pi_K = \otimes_w \Pi_{K, w}$ of $\Pi$ to $\operatorname{GL}_n({\bf{A}}_K)$, twisted by the idele class character $\chi = \otimes_w \chi_w$ of $K$. That is, we have the equivalence of $L$-functions $\Lambda(s, \Pi \times \pi(\chi)) = \Lambda(s, \Pi_K \otimes \chi)$, where \begin{align*} \Lambda(s, \Pi_K \otimes \chi) &= \prod_{w \leq \infty} L(s, \Pi_{K, w} \otimes \chi_w) = L(s, \Pi_{K, \infty} \otimes \chi_{\infty})L(s, \Pi_K \otimes \chi) \end{align*} is the $L$-function of degree $n$ over $K$ whose Euler factors at places $w$ of $K$ where neither $\Pi_{K, w}$ nor $\chi_w$ is ramified take the form \begin{align*} L(s, \Pi_{K, w} \otimes \chi_w) &= \begin{cases} \prod_{j=1}^n \left( 1 - \alpha_j(\Pi_{K, w}) \chi_w(\varpi_w) {\bf{N}} w^{-s} \right)^{-1} &\text{ if $w$ is nonarchimedean} \\ \prod_{j=1}^n \Gamma_{\bf{C}}(s - \mu_j(\Pi_{K, w})) &\text{ if $w$ is complex}. \end{cases} \end{align*} Here, we use the standard notation $\Gamma_{\bf{C}}(s) = 2(2 \pi)^{-s} \Gamma(s)$, we write $\varpi_w$ for a local uniformizer at $w$, and the coefficients $\alpha_j(\Pi_{K, w})$ and $\mu_j(\Pi_{K, w})$ are once again determined by the Satake parameters of the corresponding local representation $\Pi_{K, w}$. Observe too that the Euler factors at the archimedean places do not depend on the choice of ring class character $\chi$, as a consequence of the fact that such characters are unramified at infinity (cf.~\cite{Ro3}, \cite{BR}). This latter $L$-function $\Lambda(s, \Pi_K \otimes \chi)$ has an analytic continuation and functional equation completely analogous to the standard $L$-function $\Lambda(s, \Pi)$ described above (see e.g.~\cite{BR}).
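The shape of this basechange factorization of unramified Euler factors can be checked numerically in the simplest toy case $F = {\bf{Q}}$, $K = {\bf{Q}}(i)$, $n = 1$, with both $\Pi$ and $\chi$ trivial: at an odd unramified prime $p$, the degree-$2$ factor $(1 - p^{-s})^{-1}(1 - \eta(p) p^{-s})^{-1}$ over $F$ should match the product of degree-$1$ factors over the places $w \mid p$ of $K$. The sketch below is purely illustrative (the function names are ours, not from the text):

```python
# Toy check of the basechange factorization of unramified Euler factors
# for F = Q, K = Q(i), n = 1, Pi and chi trivial (illustrative only).

def eta(p):
    # Quadratic character attached to Q(i)/Q at an odd prime p:
    # +1 if p splits (p = 1 mod 4), -1 if p is inert (p = 3 mod 4).
    return 1 if p % 4 == 1 else -1

def factor_over_F(p, s):
    # Degree-2 local factor L_p(s, 1) * L_p(s, eta) over F = Q.
    return 1.0 / ((1 - p ** (-s)) * (1 - eta(p) * p ** (-s)))

def factor_over_K(p, s):
    # Product of the local factors over the places w | p of K = Q(i).
    if eta(p) == 1:
        # p splits: two places w, each of norm p.
        return (1.0 / (1 - p ** (-s))) ** 2
    # p inert: one place w of norm p^2.
    return 1.0 / (1 - p ** (-2 * s))

# The two expressions agree at every odd unramified prime.
for p in (3, 5, 7, 13):
    assert abs(factor_over_F(p, 2.0) - factor_over_K(p, 2.0)) < 1e-12
```

The split case is the identity $(1 - p^{-s})^{-2} = (1 - p^{-s})^{-2}$, while the inert case reduces to the factorization $1 - p^{-2s} = (1 - p^{-s})(1 + p^{-s})$.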
Using this, we deduce that $L(s, \Pi \times \pi(\chi))$ has an analytic continuation to the complex plane, with at worst simple poles at $s=0,1$ if $\Pi_K \otimes \chi$ is trivial, and satisfies the functional equation \begin{align*} \Lambda(s, \Pi \times \pi(\chi)) &= \epsilon(s, \Pi \times \pi(\chi)) \Lambda(1-s, \widetilde{\Pi} \times \pi(\chi)). \end{align*} Here, the epsilon factor is given by \begin{align*} \epsilon(s, \Pi \times \pi(\chi)) &= \left( D_K^n {\bf{N}} \mathfrak{f}(\Pi_K) {\bf{N}} \mathfrak{f}(\chi)^n \right)^{\frac{1}{2} - s} \epsilon(1/2, \Pi \times \pi(\chi)), \end{align*} where $D_K$ denotes the absolute discriminant of $K$, and ${\bf{N}}\mathfrak{f}(\Pi_K)$ the absolute norm of the conductor $\mathfrak{f}(\Pi_K) \subset \mathcal{O}_K$ of the basechange representation $\Pi_K$. \subsection{Root number invariance} Keep the setup above, with $K$ a totally imaginary quadratic extension of $F$, and $\chi$ a ring class character of $K$ of conductor $\mathfrak{c} \subset \mathcal{O}_F$. The root number $\epsilon(1/2, \Pi \times \pi(\chi)) = \epsilon(1/2, \Pi_K \otimes \chi)$ can be given explicitly in terms of the root number $\epsilon(1/2, \Pi)$ of the standard $L$-function $\Lambda(s, \Pi)$, and in particular does not depend on the choice of ring class character $\chi$. To be more precise, we have the following formula for this root number (whose exact form we shall not require in our subsequent arguments). \begin{proposition}\label{root} Let $\Pi$ be an irreducible cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation of conductor $\mathfrak{f}(\Pi) \subset \mathcal{O}_F$ and unitary central character $\omega = \omega_{\Pi}$. Let $\Pi_K = \operatorname{BC}_{K/F}(\Pi)$ denote the quadratic basechange of $\Pi$ to $\operatorname{GL}_n({\bf{A}}_K)$, with $\mathfrak{f}(\Pi_K) \subset \mathcal{O}_K$ its conductor. Let $\chi$ be any primitive ring class character of $K$ of conductor $\mathfrak{c} \subset \mathcal{O}_F$. 
If $(\mathfrak{f}(\Pi), \mathfrak{c} \mathfrak{D}) = 1$ then the root number $\epsilon(1/2, \Pi \times \pi(\chi)) = \epsilon(1/2, \Pi_K \otimes \chi)$ is given by \begin{align*} \epsilon(1/2, \Pi \times \pi(\chi)) &= \omega(\mathfrak{D} \mathfrak{c}^2) \cdot \eta(\mathfrak{f}(\Pi)) \cdot \epsilon(1/2, \Pi), \end{align*} which in particular does not depend on the choice of ring class character $\chi$ (but rather on the conductor $\mathfrak{c}$). \end{proposition} \begin{proof} Note that via the basechange equivalence $L(s, \Pi \times \pi(\chi)) = L(s, \Pi_K \otimes \chi)$, there are two equivalent ways of deriving the stated formula for the corresponding root number $\epsilon(1/2, \Pi \times \pi(\chi)) = \epsilon(1/2, \Pi_K \otimes \chi)$. To derive the formula via the Rankin-Selberg presentation $\epsilon(1/2, \Pi \times \pi(\chi))$, we use that the conductor $\mathfrak{f}(\Pi)$ of $\Pi$ is coprime to the conductor $\mathfrak{D} \mathfrak{c}^2$ of $\pi(\chi)$ to invoke the well-known general formula (see \cite{LRS}, \cite{JP-PS}) \begin{align*} \epsilon(1/2, \Pi \times \pi(\chi)) &= \omega_{\Pi}(\mathfrak{D} \mathfrak{c}^2) \cdot \eta_{K/F}(\mathfrak{f}(\Pi)) \cdot \epsilon(1/2, \Pi) \cdot \epsilon(1/2, \eta_{K/F})^{2n}. \end{align*} Here, we also use the fact that $\eta_{K/F}$ is the central character of $\pi(\chi)$. Now, the corresponding root number $\epsilon(1/2, \eta_{K/F}) \in \lbrace \pm 1 \rbrace$ is real-valued. It is then easy to deduce that $\epsilon(1/2, \eta_{K/F})^{2n} = 1$, as required. To derive the formula via the equivalent basechange presentation $\epsilon(1/2, \Pi_K \otimes \chi)$, we can use the local calculation given in \cite[Proposition 4.1]{BR}. That is, let $\Pi_{K} = \otimes_w \Pi_{K, w}$ be an irreducible cuspidal $\operatorname{GL}_n({\bf{A}}_K)$-automorphic representation of central character $\omega_{\Pi_K}$ and conductor $\mathfrak{f}(\Pi_K) \subset \mathcal{O}_K$.
Let $\chi$ be any Hecke character of $K$ of conductor $\mathfrak{f}(\chi) \subset \mathcal{O}_K$ coprime to $\mathfrak{f}(\Pi_K)$ which is unramified at the archimedean places of $K$ (and hence corresponds to a wide ray class character of $K$). Then, \cite[Proposition 4.1]{BR} shows that \begin{align*} \epsilon(1/2, \Pi_K \otimes \chi) &= \omega_K (\mathfrak{f}(\chi)) \cdot \chi(\mathfrak{f}(\Pi_K)) \cdot \epsilon(1/2, \Pi_K) \cdot \epsilon(1/2, \chi)^n. \end{align*} Artin formalism implies that we have the decomposition $\epsilon(1/2, \Pi_K) = \epsilon(1/2, \Pi) \epsilon(1/2, \Pi \otimes \eta_{K/F})$. According to \cite[Theorem 2 (d)]{GL}, we also have the composition $\omega_K = \omega_{\Pi} \circ {\bf{N}}_{K/F}$ of $\omega_{\Pi}$ with the relative norm homomorphism ${\bf{N}}_{K/F}: K \rightarrow F$. Hence, $\omega_K(\mathfrak{f}(\chi)) = \omega({\bf{N}}_{K/F} \mathfrak{f}(\chi)) = \omega({\bf{N}}_{K/F}(\mathfrak{c} \mathcal{O}_K) )= \omega(\mathfrak{c}^2)$, and so \begin{align*} \epsilon(1/2, \Pi_K \otimes \chi) &= \omega( \mathfrak{c}^2) \cdot \chi(\mathfrak{f}(\Pi_K)) \cdot \epsilon(1/2, \Pi) \cdot \epsilon(1/2, \Pi \otimes \eta_{K/F}) \cdot \epsilon(1/2, \chi)^n. \end{align*} If $(\mathfrak{f}(\Pi), \mathfrak{D}) = 1$, then we can also decompose the root number $\epsilon(1/2, \Pi \otimes \eta_{K/F})$ of the quadratic twist as \begin{align*} \epsilon(1/2, \Pi \otimes \eta_{K/F}) &= \omega_{\Pi}(\mathfrak{D}) \cdot \eta_{K/F}(\mathfrak{f}(\Pi)) \cdot \epsilon(1/2, \Pi) \cdot \epsilon(1/2, \eta_{K/F})^n,\end{align*} which gives us the more explicit formula \begin{align*} \epsilon(1/2, \Pi_K \otimes \chi) &= \omega_{\Pi}(\mathfrak{D} \mathfrak{c}^2) \cdot \eta_{K/F}(\mathfrak{f}(\Pi)) \cdot \chi(\mathfrak{f}(\Pi_K)) \cdot \epsilon(1/2, \Pi)^2 \cdot \epsilon(1/2, \eta_{K/F})^n \cdot \epsilon(1/2, \chi)^n. \end{align*} Now, a classical calculation due to Hecke (see e.g.~\cite[Ch. VII, (8.5)]{Ne}) shows that $\epsilon(1/2, \chi) = \tau(\chi) {\bf{N}} \mathfrak{f}(\chi)^{-\frac{1}{2}}$, where $\tau(\chi)$ denotes the Gauss sum corresponding to $\chi$. Moreover, since ring class characters are equivariant under complex conjugation, we have that $\epsilon(1/2, \chi) \in \lbrace \pm 1 \rbrace$ (cf.~\cite{Ro2}). Since the induced representation $\pi(\chi)$ has central character equal to $\eta_{K/F}$, we deduce that $\epsilon(1/2, \chi) = \epsilon(1/2, \eta_{K/F})$, which gives us \begin{align*} \epsilon(1/2, \Pi_K \otimes \chi) &= \omega_{\Pi}(\mathfrak{D} \mathfrak{c}^2) \cdot \eta_{K/F}(\mathfrak{f}(\Pi)) \cdot \chi(\mathfrak{f}(\Pi_K)) \cdot \epsilon(1/2, \Pi)^2. \end{align*} In this way, we can also deduce that if $(\mathfrak{f}(\Pi), \mathfrak{D}) = 1$, then $\chi(\mathfrak{f}(\Pi_K)) \cdot \epsilon(1/2, \Pi) = 1$. \end{proof} Thus, we justify the independence of the root number $\epsilon(1/2, \Pi \times \pi(\chi)) = \epsilon(1/2, \Pi_K \otimes \chi)$ of the choice of ring class character $\chi$: generically, it depends only on the factorization of the conductor $\mathfrak{f}(\Pi)$ in the quadratic extension $K/F$. We shall take this for granted henceforth, writing $W(\Pi_K) = \epsilon(1/2, \Pi_K \otimes \chi) = \epsilon(1/2, \Pi \times \pi(\chi))$ to denote this root number. As well, we shall rule out forced vanishing in the self-dual setting $\Pi \cong \widetilde{\Pi}$ by insisting that $W(\Pi_K) \neq -1$. \subsection{Gamma factor invariance} The archimedean components $L(s, \Pi_{\infty} \times \pi(\chi)_{\infty})$ are also independent of the choice of ring class character $\chi$, analogous to the setup with root numbers $\epsilon(1/2, \Pi \times \pi(\chi))$ described above.
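Hecke's formula $\epsilon(1/2, \chi) = \tau(\chi) {\bf{N}} \mathfrak{f}(\chi)^{-\frac{1}{2}}$ invoked in the proof above can be made concrete in the classical case $F = {\bf{Q}}$ with $\chi$ a primitive quadratic Dirichlet character: Gauss's evaluation of the quadratic Gauss sum gives $\tau(\chi) = \sqrt{p}$ for $p \equiv 1 \bmod 4$, so the normalized epsilon factor is $+1 \in \lbrace \pm 1 \rbrace$. The following sketch (a toy numerical illustration, with function names of our own choosing) verifies this for the Legendre symbol:

```python
import cmath

def gauss_sum(q, chi):
    # Gauss sum tau(chi) = sum over a mod q of chi(a) * e^(2*pi*i*a/q).
    return sum(chi(a) * cmath.exp(2j * cmath.pi * a / q) for a in range(1, q))

def legendre(a, p):
    # Legendre symbol (a/p) via Euler's criterion, for p an odd prime.
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

# For p = 5 (p = 1 mod 4), Gauss: tau(chi) = sqrt(5), so the epsilon
# factor tau(chi) * p^(-1/2) equals +1, consistent with eps in {+1, -1}.
p = 5
eps = gauss_sum(p, lambda a: legendre(a, p)) / p ** 0.5
assert abs(eps - 1) < 1e-9
```

The same check passes for any prime $p \equiv 1 \bmod 4$; for $p \equiv 3 \bmod 4$ the Gauss sum is $i\sqrt{p}$, reflecting the odd-character case.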
\begin{lemma}\label{arch} If an idele class character $\chi = \otimes_w \chi_w$ of $K$ is a class group or ring class group character (or more generally corresponds to a wide ray class character), then the archimedean component $\chi_{\infty}$ is trivial, whence $L(s, \Pi_{K, \infty} \otimes \chi_{\infty}) = L(s, \Pi_{K, \infty})$. Consequently, we have for any ring class group character $\chi$ of any quadratic extension $K/F$ that \begin{align*} L(s, \Pi_{\infty} \times \pi(\chi)_{\infty}) = L(s, \Pi_{K, \infty}) = L(s, \Pi_{\infty}) L(s, \Pi_{\infty} \otimes \eta_{\infty}).\end{align*} In particular, the archimedean component $L(s, \Pi_{\infty} \times \pi(\chi)_{\infty})$ of $\Lambda(s, \Pi \times \pi(\chi))$ does not depend on the choice of class group character $\chi$ of $K$. \end{lemma} \begin{proof} Consider the Euler product decomposition \begin{align*} \Lambda(s, \Pi_K \otimes \chi) &= L(s, \Pi_{K, \infty} \otimes \chi_{\infty}) L(s, \Pi_K \otimes \chi) = \prod_{w \leq \infty} L(s, \Pi_{K, w} \otimes \chi_w)\end{align*} over all places $w$ of $K$. We have the Artin decomposition \begin{align}\label{AD} \Lambda(s, \Pi_K) &= \Lambda(s, \Pi) \Lambda(s, \Pi \otimes \eta) = L(s, \Pi_{\infty}) L(s, \Pi_{\infty} \otimes \eta_{\infty}) L(s, \Pi) L(s, \Pi \otimes \eta) \end{align} for the completed $L$-function $\Lambda(s, \Pi_K) = L(s, \Pi_{K, \infty})L(s, \Pi_K)$ of $\Pi_K$. Now, it is classical (see e.g.~\cite[Corollary (6.10)]{Ne}) that the archimedean component $\chi_{\infty}$ of any class group character $\chi$ is trivial, in which case we derive (via $(\ref{AD})$) that \begin{align*} L(s, \Pi_{K, \infty} \otimes \chi_{\infty}) = L(s, \Pi_{K, \infty}) &= L(s, \Pi_{\infty}) L(s, \Pi_{\infty} \otimes \eta_{\infty}).\end{align*} The result is then easy to deduce, using that $\Lambda(s, \Pi \times \pi(\chi)) = \Lambda(s, \Pi_K \otimes \chi)$. \end{proof} Recall that we consider the quotient of archimedean factors $F(s)$ defined in $(\ref{F})$ above.
\begin{corollary}\label{poles} The function $F(s)$ has no poles in the region $0 < \Re(s) < \frac{1}{2} + \frac{1}{n^2 + 1}$. \end{corollary} \begin{proof} Note that by Lemma \ref{arch}, we have the equivalent expressions \begin{align*} F(s) &= \frac{L(1- s, \widetilde{\Pi}_{K, \infty})}{L(s, \Pi_{K, \infty})} = \frac{ L(1-s, \widetilde{\Pi}_{\infty}) L(1-s, \widetilde{\Pi}_{\infty} \otimes \eta_{\infty}) }{L(s, \Pi_{\infty}) L(s, \Pi_{\infty} \otimes \eta_{\infty})}. \end{align*} Writing $(\mu_{j, v})_{v \mid \infty}$ and $(\mu_{K, j, w})_{w \mid \infty}$ to denote the archimedean Satake parameters of $\Pi_{\infty}$ and of the basechange component $\Pi_{K, \infty}$ respectively, we know thanks to Luo-Rudnick-Sarnak \cite[Theorem 2]{LRS2} that \begin{align*} \vert \Re(\mu_{j, v}) \vert \leq \frac{1}{2} - \frac{1}{n^2 + 1} &\text{~~ for each $1 \leq j \leq n$ and each archimedean place $v \mid \infty$ of $F$ } \\ \vert \Re(\mu_{K, j, w}) \vert \leq \frac{1}{2} - \frac{1}{n^2 + 1} &\text{~~ for each $1 \leq j \leq n$ and each archimedean place $w \mid \infty$ of $K$ }. \end{align*} The claim is then easy to deduce from the definition $L(s, \Pi_{K, \infty}) = \prod_{w \mid \infty} L(s, \Pi_{K, w})$. \end{proof} \subsection{Dirichlet series expansions} Fix $\chi$ a ring class character of $K$ of some conductor $\mathfrak{c} \subset \mathcal{O}_F$. We can decompose the Rankin-Selberg $L$-function $\Lambda(s, \Pi \times \pi(\chi))$ under consideration into archimedean and nonarchimedean parts as $\Lambda(s, \Pi \times \pi(\chi)) = L_{\infty}(s, \Pi_{\infty} \times \pi(\chi)_{\infty}) L(s, \Pi \times \pi(\chi))$, where the Euler product over finite places $L(s, \Pi \times \pi(\chi))$ has the expansion (first for $\Re(s) >1$) given by \begin{align*} L(s, \Pi \times \pi(\chi)) &= \sum_{ \mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2s}} \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n})}{{\bf{N}}\mathfrak{n}^s} \sum_{ A \in C(\mathcal{O}_{\mathfrak{c}})} r_A(\mathfrak{n}) \chi(A).
\end{align*} Here, the first two sums range over nonzero ideals in the ring of integers $\mathcal{O}_F$, and we omit this obvious condition from the notations throughout. The third sum ranges over classes $A$ of order $\mathcal{O}_{\mathfrak{c}} = \mathcal{O}_F + \mathfrak{c} \mathcal{O}_K$ of conductor $\mathfrak{c}$ in $K$, and $r_A(\mathfrak{n})$ denotes the number of ideals in a class $A$ whose image under the relative norm homomorphism ${\bf{N}}_{K/F}: K \rightarrow F$ equals $\mathfrak{n}$. Let us note that there are many ways to parametrize this counting function $r_A$. On the one hand, fixing an $\mathcal{O}_F$-basis $[\gamma_A, \delta_A]$ for a proper integral ideal representative of the class $A \in C(\mathcal{O}_{\mathfrak{c}})$, and writing ${\bf{N}}_{K/F}: K \rightarrow F$ to denote the relative norm homomorphism sending $\mathfrak{a} \subset \mathcal{O}_K$ to $\mathfrak{a} \overline{\mathfrak{a}} \subset \mathcal{O}_F$, this function can be parametrized as \begin{align*} r_A(\mathfrak{n}) &= \frac{1}{w_K} \cdot \# \left\lbrace a, b \in \mathcal{O}_F: {\bf{N}}_{K/F}(a \gamma_A + b \delta_A) = \mathfrak{n} \right\rbrace, \end{align*} where $w_K$ denotes the number of roots of unity in $K$. On the other hand, taking $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$ to be any representative in the corresponding class group of positive definite $F$-rational binary quadratic forms of discriminant $\mathfrak{D} \mathfrak{c}^2$, we can also parametrize this counting function equivalently as \begin{align}\label{rAnqf} r_A(\mathfrak{n}) &= \frac{1}{w_K} \cdot \# \left\lbrace a, b \in \mathcal{O}_F: q_A(a, b) = \mathfrak{n} \right\rbrace. \end{align} Note that $w_K$ is also equal to the number of automorphs of the quadratic form $q_A(x,y)$. Here, we shall always take $q_A(x, y)$ to be the unique primitive reduced form. Hence, the $F$-integer coefficients $a_A, b_A$, and $c_A$ are mutually coprime, with ${\bf{N}} b_A \leq {\bf{N}} a_A \leq {\bf{N}} c_A$. 
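The quadratic-form parametrization $(\ref{rAnqf})$ of the counting function $r_A$ can be illustrated concretely in the classical toy case $F = {\bf{Q}}$, $K = {\bf{Q}}(i)$: here the class number is $1$, $w_K = 4$, the principal reduced form is $q(x, y) = x^2 + y^2$, and $r(n) = \frac{1}{4} \# \lbrace (a, b) \in {\bf{Z}}^2 : a^2 + b^2 = n \rbrace$ counts the ideals of ${\bf{Z}}[i]$ of norm $n$. The sketch below (our own illustrative code, using Jacobi's classical divisor formula as an independent check) verifies this agreement:

```python
# Toy check of r_A for F = Q, K = Q(i): the principal form x^2 + y^2,
# with w_K = 4 automorphs, counts ideals of Z[i] of norm n.

def r_via_form(n):
    # (1/w_K) * number of representations n = a^2 + b^2 over Z.
    count = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
                if a * a + b * b == n)
    return count // 4

def r_via_ideals(n):
    # Number of ideals of Z[i] of norm n, via Jacobi's formula:
    # r(n) = sum over d | n of chi(d), chi the nontrivial character mod 4.
    chi = {0: 0, 1: 1, 2: 0, 3: -1}
    return sum(chi[d % 4] for d in range(1, n + 1) if n % d == 0)

# The two counts agree for every n; e.g. n = 25 has the 12 lattice
# points (+-5, 0), (0, +-5), (+-3, +-4), (+-4, +-3), so r(25) = 3.
for n in range(1, 50):
    assert r_via_form(n) == r_via_ideals(n)
```

For general $F$ and nonprincipal classes $A$ the same counting runs over the reduced form $q_A$, weighted by the automorphs of $q_A$ exactly as in the text.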
Let us also note that when $A = {\bf{1}} \in C(\mathcal{O}_{\mathfrak{c}})$ is the principal class, then the leading coefficient is simply $a_A = a_{\bf{1}} = 1$. \section{Projected cuspidal forms} Let $\psi = \otimes_v \psi_v$ denote the standard additive character on ${\bf{A}}_F/F$. Hence, $\psi$ is the unique continuous additive character of ${\bf{A}}_F$ which is trivial on $F$, given by $x \mapsto \exp(2 \pi i(x_1 + \cdots + x_d))$ on $x = (x_j)_{j=1}^d \in F_{\infty} = F \otimes_{\bf{Q}} {\bf{R}}$, and at each nonarchimedean place $v$ (identified with its corresponding prime ideal $v \subset \mathcal{O}_F$) is trivial on the local inverse different $\mathfrak{d}_{F, v}^{-1}$ but nontrivial on $v^{-1} \mathfrak{d}_{F, v}^{-1}$. We extend this additive character $\psi$ in the usual way to the unipotent subgroup $N_n \subset \operatorname{GL}_n$. Fix $(\Pi, V_{\Pi})$ an irreducible cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation, and let $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ be a pure tensor. We view $\varphi$ as a cuspidal automorphic form on $\operatorname{GL}_n({\bf{A}}_F)$. Hence for $g \in \operatorname{GL}_n({\bf{A}}_F)$, it has the Fourier-Whittaker expansion \begin{align*} \varphi(g) &= \sum_{\gamma \in N_{n-1}(F) \backslash \operatorname{GL}_{n-1}(F)} W_{\varphi} \left( \left( \begin{array} {cc} \gamma & ~ \\ & 1 \end{array}\right) g \right), \end{align*} with Whittaker coefficients \begin{align*} W_{\varphi}(g) &= W_{\varphi, \psi}(g) = \int_{N_n(F) \backslash N_n({\bf{A}}_F)} \varphi(n g) \psi^{-1}(n) dn. \end{align*} This series converges absolutely and uniformly on compact sets of $\operatorname{GL}_n({\bf{A}}_F)$. Note that this expansion for $n \geq 3$ is a theorem of Piatetski-Shapiro and Shalika.
In the special case $n=2$, it takes the simpler form \begin{align*} \varphi(g) &= \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma & ~ \\ & 1 \end{array}\right) g \right), \end{align*} where \begin{align*} W_{\varphi}(g) &= \int_{{\bf{A}}_F/F} \varphi \left( \left( \begin{array} {cc} 1 & x \\ & 1 \end{array}\right) g \right) \psi(-x) dx. \end{align*} We now consider the image of $\varphi$ under a certain projection operator $\mathbb P^n_1$ to the mirabolic subgroup $P_2({\bf{A}}_F) \subset \operatorname{GL}_2({\bf{A}}_F)$, and in particular its Fourier-Whittaker expansion. Let us remark that this operator $\mathbb P^n_1$ plays a prominent role in the theory of Eulerian integral presentations for automorphic $L$-functions on $\operatorname{GL}_n({\bf{A}}_F) \times \operatorname{GL}_1({\bf{A}}_F)$ (see e.g.~\cite[$\S 2.2.1$]{JC}); the construction itself goes back to earlier works of Ginzburg, Jacquet, Piatetski-Shapiro, and Shalika. Here, we shall use it in a precise way to derive integral presentations for the averages we seek to estimate. \subsection{Definitions and Fourier-Whittaker expansions} Let $Y_{n,1} \subset \operatorname{GL}_n$ denote the unipotent radical of the standard parabolic subgroup attached to the partition $(2, 1, \ldots, 1)$ of $n$. Hence, we have the semi-direct product decomposition $N_n = N_{2} \ltimes Y_{n,1}$. Note as well that $Y_{n, 1}$ is normalized by $\operatorname{GL}_2 \subset \operatorname{GL}_n$. Let $P_2({\bf{A}}_F) \subset \operatorname{GL}_2({\bf{A}}_F)$ denote the mirabolic subgroup \begin{align*} P_2({\bf{A}}_F) &= \left\lbrace \left( \begin{array} {cc} y & x \\ 0 & 1 \end{array}\right): y \in {\bf{A}}_F^{\times}, x \in {\bf{A}}_F \right\rbrace \subset \operatorname{GL}_2({\bf{A}}_F).
\end{align*} Given a pure tensor $\varphi \in V_{\Pi}$, we define the projection $\mathbb P^n_1 \varphi$ as a function on $p \in P_2({\bf{A}}_F)$ by \begin{align}\label{projector} \mathbb P_1^n \varphi(p) &= \vert \det(p) \vert^{-\left( \frac{n-2}{2} \right)} \int_{Y_{n, 1}(F) \backslash Y_{n,1}({\bf{A}}_F)} \varphi \left( y \left( \begin{array} {cc} p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array}\right) \right) \psi^{-1}(y) dy, \end{align} where ${\bf{1}}_{m}$ for a positive integer $m$ denotes the $m \times m$ identity matrix. This integral is taken over a compact domain, and hence converges absolutely. \begin{proposition}\label{FWEP} Given a cuspidal automorphic form $\varphi$ on $\operatorname{GL}_n({\bf{A}}_F)$, the projection $\mathbb P^n_1 \varphi$ defined by the unipotent integral $(\ref{projector})$ is a cuspidal automorphic form on $P_2({\bf{A}}_F)$ having the Fourier-Whittaker expansion \begin{align*} \mathbb P_1^n \varphi(p) &= \vert \det(p) \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \left( \begin{array} {cc} p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array}\right) \right). \end{align*} In particular, for $x \in {\bf{A}}_F$ any adele and $y \in {\bf{A}}_F^{\times}$ any idele, we have the Fourier-Whittaker expansion \begin{align*} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y & x \\ ~ & 1\end{array} \right) \right) &= \vert y \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} y \gamma & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \right) \psi(\gamma x). \end{align*} \end{proposition} \begin{proof} See \cite[Lemma 5.2]{JC} or \cite[$\S 2.2.1$]{JCn} for the first two statements. The third is an easy consequence of specialization. We spell out proofs in this special case for the convenience of the reader. 
Let us first show that $\mathbb P^n_1 \varphi$ determines a cuspidal automorphic form on $P_2({\bf{A}}_F)$, by which we mean a left $P_2(F)$-invariant function on $p \in P_2({\bf{A}}_F)$ whose unipotent integrals vanish in the usual sense. We consider the normalized function \begin{align}\label{normalized} \varphi'(p) &= \vert \det (p) \vert^{\frac{n-2}{2}} \mathbb P^n_1 \varphi(p). \end{align} Since $\varphi(g)$ is smooth on $\operatorname{GL}_n({\bf{A}}_F)$, we deduce that the unipotent integral defining $\varphi'(p)$ is smooth on $P_2({\bf{A}}_F)$. To check that $\varphi'(p)$ is left $P_2(F)$-invariant, let us for $\gamma \in P_2(F)$ consider the integral \begin{align*} \varphi'(\gamma p) &= \int_{ Y_{n, 1}(F) \backslash Y_{n,1}({\bf{A}}_F)} \varphi \left( y \left( \begin{array} {cc} \gamma & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \left( \begin{array} {cc} p & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \right) \psi^{-1}(y) dy. \end{align*} Since $P_2$ normalizes $Y_{n, 1}$ and fixes $\psi$, we can then make a crucial change of variables \begin{align*} y \longmapsto \left( \begin{array} {cc} \gamma & \\ & {\bf{1}}_{n-2} \end{array}\right) y \left( \begin{array} {cc} \gamma & \\ & {\bf{1}}_{n-2} \end{array}\right)^{-1} \end{align*} to derive \begin{align*} \varphi'(\gamma p) &= \int_{ Y_{n, 1}(F) \backslash Y_{n,1}({\bf{A}}_F)} \varphi \left( \left( \begin{array} {cc} \gamma & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) y \left( \begin{array} {cc} p & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \right) \psi^{-1}(y) dy. \end{align*} Now, since $\varphi(g)$ is automorphic on $\operatorname{GL}_n({\bf{A}}_F)$, it is left invariant under $\operatorname{GL}_n(F)$, and hence we can deduce that $\varphi'(\gamma p) = \varphi'(p)$.
To check that $\varphi'(p)$ is cuspidal, let $U \subset P_2$ denote the standard unipotent subgroup attached to the partition $(n_1, n_2) = (1,1)$ of $2$, so that it suffices to show the vanishing of the integral \begin{align*} &\int_{U(F) \backslash U({\bf{A}}_F)} \varphi'(up) du \\ &= \int_{U(F) \backslash U({\bf{A}}_F) } \int_{ Y_{n, 1}(F) \backslash Y_{n,1}({\bf{A}}_F) } \varphi \left( y \left( \begin{array} {cc} u & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \left( \begin{array} {cc} p & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \right) \psi^{-1}(y) \, dy \, du. \end{align*} Let $U' = U \ltimes Y_{n,1} \subset \operatorname{GL}_n$ be the standard unipotent subgroup attached to the partition $(n_1, n_2, 1, \ldots, 1)$ of $n$. Let $U'' \subset \operatorname{GL}_n$ be the standard unipotent subgroup attached to the partition $(n_1, n_2, n-2)$ of $n$. Let $\widetilde{N}_{n-2} \subset \operatorname{GL}_n$ denote the subgroup determined by the image of the embedding \begin{align*} N_{n-2} \longrightarrow \operatorname{GL}_n, ~~~~~n \longmapsto \left( \begin{array} {cc} {\bf{1}}_{2} & ~ \\ & n \end{array}\right). \end{align*} We have the decomposition $U' = \widetilde{N}_{n-2} \ltimes U''$. Let us now extend the additive character $\psi$ on $Y_{n, 1}$ to $U'$ by taking it to be trivial on $U$. Hence in this latter decomposition, the additive character $\psi$ is trivial on the component corresponding to $U$, and depends only on that of $\widetilde{N}_{n-2}$, where it coincides with the standard additive character on $N_{n-2}$. This allows us to unwind the latter integral as \begin{align*} &\int_{U(F) \backslash U({\bf{A}}_F)} \varphi'(up) du \\ &= \int_{N_{n-2}(F) \backslash N_{n-2}({\bf{A}}_F)} \int_{ U''(F) \backslash U''({\bf{A}}_F)} \varphi \left( u''\left( \begin{array} {cc} {\bf{1}}_{2} & ~ \\ & n \end{array}\right) \left( \begin{array} {cc} p & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \right) du'' \psi^{-1}(n) dn.
\end{align*} Now, observe that $\varphi$ is cuspidal on $\operatorname{GL}_n$, and since $U''$ is a standard unipotent subgroup of $\operatorname{GL}_n$, it follows from the definition of cuspidality for $\operatorname{GL}_n$ that the inner integral must vanish: \begin{align*} \int_{U''(F) \backslash U''({\bf{A}}_F)} \varphi \left( u''\left( \begin{array} {cc} {\bf{1}}_{2} & ~ \\ & n \end{array}\right) \left( \begin{array} {cc} p & ~ \\ & {\bf{1}}_{n-2} \end{array}\right) \right) du'' \equiv 0. \end{align*} Hence, we derive the desired cuspidality condition \begin{align*} \int_{U(F) \backslash U({\bf{A}}_F)} \varphi'(up) du \equiv 0. \end{align*} To derive the Fourier-Whittaker expansion for any $p \in P_2({\bf{A}}_F)$, let us again work with the normalized function $(\ref{normalized})$. Since we know from the argument above that $\varphi'(p)$ is a cuspidal automorphic form on the mirabolic subgroup $P_2({\bf{A}}_F) \subset \operatorname{GL}_2({\bf{A}}_F)$, we deduce that it has a Fourier expansion of the form \begin{align}\label{ie} \varphi'(p) &= \sum_{\gamma \in F^{\times}} W_{\varphi'} \left( \left( \begin{array} {cc} \gamma & ~ \\ ~ & 1 \end{array} \right) p \right), \end{align} where $W_{\varphi'} = W_{\varphi', \psi}$ denotes the corresponding Whittaker coefficient \begin{align*} W_{\varphi'}(p) &= \int_{N_2(F) \backslash N_2({\bf{A}}_F) } \varphi'(np) \psi^{-1}(n)dn. \end{align*} We now express $(\ref{ie})$ in terms of the initial $\operatorname{GL}_n({\bf{A}}_F)$ pure tensor $\varphi$. Opening up definitions, we have \begin{align*} W_{\varphi'}(p) &= \int_{N_2(F) \backslash N_2({\bf{A}}_F) } \varphi'(n' p) \psi^{-1}(n') dn' \\ &= \int_{N_2(F) \backslash N_2({\bf{A}}_F) } \int_{Y_{n,1}(F) \backslash Y_{n,1}({\bf{A}}_F) } \varphi \left( y \left( \begin{array} {cc} n' p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array} \right) \right) \psi^{-1}(y) dy \psi^{-1}(n') dn'.
\end{align*} The key point is that the maximal unipotent subgroup $N_n \subset \operatorname{GL}_n$ decomposes as $N_n = N_2 \ltimes Y_{n, 1}$. Writing $n = n' y \in N_2 \ltimes Y_{n, 1}$ accordingly with $n' \in N_2$ and $y \in Y_{n,1}$, then using that $\psi(n) = \psi(n') \psi(y)$, we can rewrite this latter integral as \begin{align}\label{ie2} W_{\varphi'}(p) &= \int_{N_n(F) \backslash N_n({\bf{A}}_F) } \varphi \left( n \left( \begin{array} {cc} p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array} \right) \right) \psi^{-1}(n)dn = W_{\varphi} \left( \left( \begin{array} {cc} p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array} \right) \right). \end{align} Substituting $(\ref{ie2})$ back into $(\ref{ie})$ then gives the desired expansion \begin{align*} \varphi'(p) &= \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \left( \begin{array} {cc} p & ~ \\ ~ & {\bf{1}}_{n-2} \end{array} \right) \right). \end{align*} For the third assertion, writing $\varphi'$ to denote the normalized function $(\ref{normalized})$, we specialize the previous expansion to $p = \left( \begin{array} {cc} y & x \\ ~ & 1\end{array}\right)$. It is then easy to check, using $\mathbb P^n_1 \varphi = \vert \det(\cdot) \vert^{- \left( \frac{n-2}{2} \right)} \varphi'$, that \begin{align*} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y & x \\ & 1 \end{array}\right) \right) &= \vert y \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi'} \left( \left( \begin{array} {cc} \gamma & \\ & 1 \end{array}\right) \left( \begin{array} {cc} 1 & x \\ & 1 \end{array}\right) \left( \begin{array} {cc} y & \\ & 1 \end{array}\right) \right) \\ &= \vert y \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi'} \left( \left( \begin{array} {cc} 1 & \gamma x \\ & 1 \end{array}\right) \left( \begin{array} {cc} \gamma & \\ & 1 \end{array}\right) \left( \begin{array} {cc} y & \\ & 1 \end{array}\right) \right) \\ &= \vert y \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi'} \left( \left( \begin{array} {cc} y \gamma & \\ & 1 \end{array}\right) \right) \psi(\gamma x).
\end{align*} \end{proof} \subsection{Eulerian integral presentations} Keeping notations as above, we now consider the integral \begin{align*} I(s, \varphi) &= \int_{ {\bf{A}}_F^{\times} / F^{\times} } \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} h & \\ & 1 \end{array} \right) \right) \vert h \vert^{s - \frac{1}{2}} dh, \end{align*} defined first for $s \in {\bf{C}}$ with $\Re(s) >1$. As explained in \cite[Lecture 5]{JC} or \cite[$\S 2.2$]{JCn} (again with $m=1$), we can open up the Fourier-Whittaker expansion of $\mathbb P^n_1 \varphi$ in this integral to derive the integral presentation \begin{align*} I(s, \varphi) &= \int_{ {\bf{A}}_F^{\times}/F^{\times} } \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma h & \\ & {\bf{1}}_{n-1} \end{array} \right) \right) \vert h \vert^{s - \frac{1}{2} - \left( \frac{n-2}{2} \right)} dh \\ &= \int_{ {\bf{A}}_F^{\times} } W_{\varphi} \left( \left( \begin{array} {cc} h & \\ & {\bf{1}}_{n-1} \end{array} \right) \right) \vert h \vert^{s - \left( \frac{n-1}{2} \right)} dh. \end{align*} Now, since we choose $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ in such a way that each nonarchimedean local component is an essential Whittaker vector (as we can at the ramified places thanks to Matringe \cite{Ma}), we deduce from the theory of Eulerian integrals for $\operatorname{GL}_n \times \operatorname{GL}_1$ that we have the following exact integral presentation for the finite part $L(s, \Pi)$ of the standard $L$-function $\Lambda(s, \Pi) = L(s, \Pi_{\infty} ) L(s, \Pi)$ of $\Pi$. \begin{theorem}\label{CM} Fix $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ a pure tensor, and assume that each of the nonarchimedean local factors is an essential Whittaker vector.
Then, the finite part $L(s, \Pi)$ of the standard $L$-function $\Lambda(s, \Pi) = L(s, \Pi_{\infty}) L(s, \Pi)$ has the integral presentation \begin{align*} L(s, \Pi ) &= \prod_{v < \infty} \int_{F_v^{\times}} W_{\varphi_v} \left( \left( \begin{array} {cc} h_v & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \vert h_v \vert_v^{s - \left( \frac{n-1}{2} \right)} dh_v \\ &= \int_{ {\bf{A}}_{F, f}^{\times} } W_{\varphi} \left( \left( \begin{array} {cc} h_f & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \vert h_f \vert^{s - \left( \frac{n-1}{2} \right)} dh_f. \end{align*} Here, we make a choice of local Haar measure on $F_v^{\times}$ at each nonarchimedean place $v$ dividing the conductor of $\Pi$, following Matringe \cite[Corollary 3.3]{Ma}, and for an idele $h = (h_v)_v \in {\bf{A}}_F^{\times}$ write the corresponding decomposition into nonarchimedean and archimedean components as $ h = h_f h_{\infty}$ for $h_f \in {\bf{A}}_{F, f}^{\times}$ and $h_{\infty} \in F_{\infty}^{\times}$. Writing the corresponding decomposition of the specialized Whittaker function as \begin{align*} \rho_{\varphi}(h_f) &:= W_{\varphi} \left( \left( \begin{array} {cc} h_f & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) ~\text{and}~ W_{\varphi}(h_{\infty}) := W_{\varphi} \left( \left( \begin{array} {cc} h_{\infty} & \\ & {\bf{1}}_{n-1} \end{array}\right) \right), \end{align*} we then obtain the following description of $L(s, \Pi)$: Fixing a finite idele representative of each nonzero integral ideal $\mathfrak{m} \subset \mathcal{O}_F$, and using the same symbol to denote this so that $\mathfrak{m} \in {\bf{A}}_{F, f}^{\times}$, we have that \begin{align*} L(s, \Pi) &:= \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ c_{\Pi}(\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^s} = \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \rho_{\varphi}(\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{s - \left( \frac{n-1}{2} \right)} }.
\end{align*} \end{theorem} \begin{proof} The first claim follows from the theory of Eulerian integrals for $\operatorname{GL}_n \times \operatorname{GL}_1$ according to the description in \cite[Lecture 9]{JC}, using the result of \cite[Corollary 3.3]{Ma} to describe the local zeta integrals at each nonarchimedean place $v$ where $\Pi_v$ is ramified. Putting these together gives us the identification \begin{align*} L(s, \Pi_v ) &= \int_{F_v^{\times}} W_{\varphi_v} \left( \left( \begin{array} {cc} h_v & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \vert h_v \vert_v^{s - \left( \frac{n-1}{2} \right)} dh_v \end{align*} for each nonarchimedean place where $\Pi_v$ is ramified, the unramified case being well-understood (\cite[Lecture 7]{JC}). Note that this claim can also be derived in the classical description (see e.g.~\cite[$\S$12.3]{Go}), without recourse to essential Whittaker vectors, using an explicit description of the Fourier-Whittaker expansion instead. The second assertion follows after comparing coefficients of Mellin transforms. \end{proof} In particular (considering $s=(n-1)/2$), comparing Fourier-Whittaker coefficients, we deduce from the third assertion of Proposition \ref{FWEP} that for $x \in {\bf{A}}_F$ any adele and $y = y_{\infty} \in F_{\infty}^{\times}$ any archimedean idele, we have the Fourier-Whittaker expansion \begin{align}\label{FWEPexp}\mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x \\ & 1 \end{array}\right) \right) &= \vert y_{\infty} \vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} \frac{c_{\Pi}(\gamma )}{ \vert \gamma \vert^{\frac{n-1}{2}}} W_{\varphi} \left( \gamma y_{\infty} \right) \psi(\gamma x). \end{align} Here, we also use the symbol $\gamma$ to denote the image of an $F$-rational number $\gamma \in F^{\times}$ in the ideles ${\bf{A}}_F^{\times}$ under the diagonal embedding $\gamma \mapsto (\gamma, \gamma, \ldots)$.
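Let us also record the standard unfolding identity implicit in the derivation of $I(s, \varphi)$ above: since $\vert \gamma \vert = 1$ for each $\gamma \in F^{\times}$ by the product formula, any function $f$ on ${\bf{A}}_F^{\times}$ for which both sides converge absolutely satisfies \begin{align*} \int_{ {\bf{A}}_F^{\times}/F^{\times} } \sum_{\gamma \in F^{\times}} f(\gamma h) \vert h \vert^{s} dh &= \int_{ {\bf{A}}_F^{\times}/F^{\times} } \sum_{\gamma \in F^{\times}} f(\gamma h) \vert \gamma h \vert^{s} dh = \int_{ {\bf{A}}_F^{\times} } f(h) \vert h \vert^{s} dh. \end{align*} Taking $f(h) = W_{\varphi}(\operatorname{diag}(h, {\bf{1}}_{n-1}))$ with the exponent $s - \frac{1}{2} - \left( \frac{n-2}{2} \right) = s - \left( \frac{n-1}{2} \right)$ recovers the second expression for $I(s, \varphi)$.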
\subsection{Surjectivity of the archimedean Kirillov map} Let us now explain the following consequence of the archimedean Kirillov map for the Whittaker function $W_{\varphi}$, and hence for the Fourier-Whittaker coefficients appearing in the expansion $(\ref{FWEPexp})$ of the projected cusp form $\mathbb P^n_1 \varphi$ on $P_2({\bf{A}}_F)$. Let $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ be a pure tensor as above, whose nonarchimedean local vectors $\varphi_v$ are taken to be essential Whittaker vectors (via \cite{Ma}), and let us view this as a cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic form. Recall that to such a vector, we have a corresponding Whittaker function defined on $g \in \operatorname{GL}_n({\bf{A}}_F)$ by \begin{align*} W_{\varphi} \left( g \right) &= \int_{N_n(F) \backslash N_n({\bf{A}}_F) } \varphi(n g) \psi^{-1}(n) dn. \end{align*} Again, $N_n \subset \operatorname{GL}_n$ denotes the unipotent subgroup of upper triangular matrices, and $\psi$ the extension of our fixed (standard) additive character of ${\bf{A}}_F/F$ to this subgroup in the usual way. We again consider the archimedean component of this function defined on $y_{\infty} \in F_{\infty}^{\times}$ by \begin{align*} W_{\varphi}(y_{\infty}) &= W_{\varphi} \left( \left( \begin{array} {cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \right).\end{align*} \begin{proposition}[Jacquet-Shalika]\label{choice} Let $W$ be any smooth and summable function on $F_{\infty}^{\times}$ which decays rapidly at infinity or is compactly supported. There exists a pure tensor $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ whose corresponding archimedean Whittaker function $W_{\varphi}(y_{\infty}) $ satisfies $W_{\varphi}(y_{\infty}) = W(y_{\infty})$. \end{proposition} \begin{proof} The result is shown in \cite[(3.8)]{JS}. See also \cite[Lemma 5.1]{IT}.
\end{proof} \begin{remark} Note that the chosen archimedean vector $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_v$ in Proposition \ref{choice} need not be smooth. In brief, the theorem of Jacquet-Shalika \cite[(3.8)]{JS} shows that the restriction of the unitary representation $\Pi$ to the mirabolic subgroup $P_n \subset \operatorname{GL}_n$ is equivalent to the unitary representation $\tau$ of $P_n$ induced from the character $\psi$ of $N_n$. There exist vectors in $\tau$ which are not smooth on $\operatorname{GL}_n$. However, this subtlety (for rank $n \geq 3$) does not affect our subsequent arguments. \end{remark} \subsection{Liftings to automorphic forms on $\operatorname{GL}_2({\bf{A}}_F)$} Let us now say a few words about the automorphic form defined by the product $\mathbb P^n_1 \varphi \overline{\theta} = \mathbb P^n_1 \varphi \cdot \overline{\theta}$ of the mirabolic cusp form $\mathbb P^n_1 \varphi$ on $p \in P_2({\bf{A}}_F)$ with some theta series $\overline{\theta}$, on $g \in \operatorname{GL}_2({\bf{A}}_F)$ or its two-fold metaplectic cover $\overline{g} = (g, \zeta) \in \overline{G}({\bf{A}}_F)$. Note that we shall often write $g = (g, 1)$ in the latter setting when the context is clear. As well, we shall write $\overline{\theta}$ to denote the image of $\theta$ under the Hecke operator $T_{-1}$ sending $x \mapsto -x \in {\bf{A}}_F$: \begin{align*} \overline{\theta} \left( \left( \begin{array} {cc} y & x \\ ~ & 1\end{array}\right) \right) &= T_{-1} \theta\left( \left( \begin{array} {cc} y & x \\ ~ & 1\end{array}\right) \right) = \theta \left( \left( \begin{array} {cc} y & - x \\ ~ & 1\end{array}\right) \right). \end{align*} We first consider how to extend the mirabolic form $\mathbb P^n_1 \varphi$ to an $L^2$-automorphic form $\widetilde{\mathbb P^n_1 \varphi}$ on $\operatorname{GL}_2({\bf{A}}_F)$, making an argument analogous to the classical passage from Maass forms on the $d$-fold upper-half plane $\mathfrak{H}^d \cong P_2(F_{\infty})$ to automorphic forms on $\operatorname{GL}_2({\bf{A}}_F)$.
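In the classical case $F = {\bf{Q}}$, the passage in question is the familiar one: by strong approximation we have $\operatorname{GL}_2({\bf{A}}_{\bf{Q}}) = \operatorname{GL}_2({\bf{Q}}) \operatorname{GL}_2({\bf{R}})^{+} \prod_{p} \operatorname{GL}_2({\bf{Z}}_p)$, and a weight-zero Maass form $f$ on $\operatorname{SL}_2({\bf{Z}}) \backslash \mathfrak{H}$ lifts to the adelic automorphic form defined by \begin{align*} \varphi_f(g) &= f(g_{\infty} \cdot i), \quad g = \gamma g_{\infty} k, \quad \gamma \in \operatorname{GL}_2({\bf{Q}}), ~ g_{\infty} \in \operatorname{GL}_2({\bf{R}})^{+}, ~ k \in \prod_{p} \operatorname{GL}_2({\bf{Z}}_p), \end{align*} which is well defined precisely because $\operatorname{GL}_2({\bf{Q}}) \cap \operatorname{GL}_2({\bf{R}})^{+} \prod_{p} \operatorname{GL}_2({\bf{Z}}_p) = \operatorname{SL}_2({\bf{Z}})$ and $f$ is $\operatorname{SL}_2({\bf{Z}})$-invariant (see \cite[Theorem 4.4.4]{GoHu}). The construction below extends this picture to totally real $F$ of class number possibly greater than one.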
We then consider relations of Fourier-Whittaker coefficients, and in particular show that $\widetilde{\mathbb P^n_1 \varphi}$ contains enough information about the $L$-function coefficients of the cuspidal $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation $\Pi$ through its Fourier-Whittaker coefficients. We shall also describe some background on genuine metaplectic forms, as well as the theta series we consider in subsequent arguments. \subsubsection{Liftings to $\operatorname{GL}_2({\bf{A}}_F)$} Let $C(\mathcal{O}_F)$ denote the class group of $F$, so that we have the identification \begin{align*} {\bf{A}}_F^{\times} / F_{\infty}^{\times} F^{\times} \prod_{v \nmid \infty} \mathcal{O}_{F_v}^{\times} \cong C(\mathcal{O}_F). \end{align*} Fixing a set of representatives $\Delta$ in ${\bf{A}}_F^{\times}$ of $C(\mathcal{O}_F)$, and defining the product \begin{align*} I &= F_{\infty}^{\times} \cdot \prod_{v \nmid \infty} \mathcal{O}_{F_v}^{\times} \cdot \coprod_{\zeta \in \Delta} \zeta, \end{align*} we then have the decomposition \begin{align}\label{AFSA} {\bf{A}}_F^{\times} &= \coprod_{\alpha \in F^{\times}} \alpha \cdot I. \end{align} Let $\mathcal{K} = \prod_{v \leq \infty} \mathcal{K}_v = O_2(F_{\infty}) \prod_{v < \infty} \operatorname{GL}_2(\mathcal{O}_{F_v})$ denote the maximal compact subgroup of $\operatorname{GL}_2({\bf{A}}_F)$, and also $h_{\zeta} = \operatorname{diag}(\zeta, \zeta) \in Z_2({\bf{A}}_F)$ the diagonal matrix associated to a given representative $\zeta \in \Delta$, i.e.~with respect to the natural identification \begin{align*} {\bf{A}}_F^{\times} &\cong Z_2({\bf{A}}_F), \quad \zeta \longmapsto h_{\zeta} := \left( \begin{array} {cc} \zeta & ~ \\ ~ & \zeta \end{array}\right). \end{align*} Consider the subgroup $\operatorname{GL}_2(F) \operatorname{GL}_2(F_{\infty}) \mathcal{K} \subset \operatorname{GL}_2({\bf{A}}_F)$. 
We know by strong approximation that the map \begin{align*} \operatorname{GL}_2({\bf{A}}_F) \xrightarrow {\operatorname{det}} {\bf{A}}_F^{\times} \longrightarrow C(\mathcal{O}_F) \end{align*} factors to give an identification \begin{align*} \operatorname{GL}_2({\bf{A}}_F)/ \operatorname{GL}_2(F) \operatorname{GL}_2(F_{\infty}) \mathcal{K} &\cong C(\mathcal{O}_F), \end{align*} and hence that we have the decomposition \begin{align}\label{SAde} \operatorname{GL}_2({\bf{A}}_F) &= \operatorname{GL}_2(F) \operatorname{GL}_2(F_{\infty}) \mathcal{K} \cdot \coprod_{\zeta \in \Delta} h_{\zeta}. \end{align} Now, fixing a fundamental domain for the action of $\operatorname{GL}_2(\mathcal{O}_F)$ on $\operatorname{GL}_2(F_{\infty})$, and using the standard Iwasawa decomposition $\operatorname{GL}_2(F_{\infty}) = P_2(F_{\infty}) Z_2(F_{\infty}) O_2(F_{\infty})$ for $\operatorname{GL}_2(F_{\infty})$, we can derive the following unique decomposition property for matrices $g \in \operatorname{GL}_2({\bf{A}}_F)$ via the induced decomposition \begin{align}\label{SAI} \operatorname{GL}_2({\bf{A}}_F) &= \coprod_{\zeta \in \Delta} \operatorname{GL}_2(F) P_2(F_{\infty}) h_{\zeta} Z_2(F_{\infty}) \mathcal{K}. \end{align} To describe this unique decomposition property more precisely, let us first describe how to derive such a fundamental domain from a chosen fundamental domain $\mathcal{F}$ for the action of the Hilbert modular group $\Gamma_F = \operatorname{PSL}_2(\mathcal{O}_F)$ on the $d$-fold upper-half plane $\mathfrak{H}^d$, which we identify with $P_2(F_{\infty})$ in the natural way via \begin{align*} \mathfrak{H}^d &\longrightarrow P_2(F_{\infty}) = \operatorname{GL}_2(F_{\infty})/Z_2(F_{\infty}) O_2(F_{\infty}), \quad z_{\infty} = x_{\infty} + iy_{\infty} \longmapsto \left( \begin{array}{cc} y_{\infty} & x_{\infty} \\ ~& 1 \end{array} \right). 
\end{align*} Here, we follow the abstract description given in \cite[$\S$ 1.6]{Ga}, but note that a more explicit description is implicit in \cite[$\S$ I.3]{vdG}, and also that we recount this description briefly for completeness only. Recall that there is a well-known bijective correspondence between cusp (representatives) $\sigma = (\alpha: \beta) \in \mathbb P^1(F)$ for $\Gamma_F$ and (representatives of) ideal classes $[\mathfrak{a}] = [(\alpha, \beta)] \in C(\mathcal{O}_F)$, i.e.~where $(\alpha, \beta)$ denotes the ideal of $\mathcal{O}_F$ generated by $\alpha, \beta \in F$. We write $\infty$ as usual to denote the cusp represented by $(1:0) \in \mathbb P^1(F)$. Fixing a set of cusp representatives $\lbrace \sigma_i \rbrace_i = \lbrace (\alpha_i: \beta_i) \rbrace_i$ corresponding to a full set of representatives $[(\alpha_i, \beta_i)]$ for the ideal class group $C(\mathcal{O}_F)$, let us for each index $i=1, \ldots, h_F = \#C(\mathcal{O}_F)$ fix an element $\delta_i \in \operatorname{SL}_2(F)$ such that $\delta_i \sigma_i = \infty$. Recall that a standard Siegel set $S$ for $\mathfrak{H}^d$ is a subset $S \subset \mathfrak{H}^d$ of the following form: For some compact subset $R \subset {\bf{R}}^d$ and some constant $B >0$, \begin{align*} S &= \left\lbrace z_{\infty} = (z_{\infty, j})_{j=1}^d \in \mathfrak{H}^d: \Re(z_{\infty, j}) \in R, \Im(z_{\infty, j}) >B \text{ for all $j = 1, \ldots, d$} \right\rbrace. \end{align*} We know by \cite[$\S$ 1.6, Theorem]{Ga} (for instance) that for some such Siegel set $S = S_{\Gamma_F}$, the fundamental domain $\mathcal{F}$ for $\Gamma_F \backslash \mathfrak{H}^d$ can be given by \begin{align}\label{fdF} \mathcal{F} &= \bigcup_{i=1}^{h_F} \delta_i \left( S \right). \end{align} Moreover, as can be deduced from the more explicit description given in \cite[$\S$ I.3]{vdG}, we can assume without loss of generality that the compact domain $R$ takes the form $R = [0, 1/2]^d$. 
Now, we can construct from this fundamental domain $\mathcal{F}$ a fundamental domain $\mathcal{G}$ for the action of $\operatorname{GL}_2(\mathcal{O}_F)$ on $\operatorname{GL}_2(F_{\infty})$ as follows. Observe that for each $x_{\infty} = (x_{\infty, j})_{j=1}^d \in F_{\infty}$ and $y_{\infty} = (y_{\infty, j})_{j=1}^d \in F_{\infty}^{\times}$, we have the elementary identity \begin{align*} \left( \begin{array} {cc} -1 & ~ \\ ~ & 1\end{array}\right) \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} -1 & ~ \\ ~ & 1\end{array}\right) &= \left( \begin{array} {cc} y_{\infty} & -x_{\infty} \\ ~ & 1 \end{array}\right).\end{align*} This allows us to deduce that a fundamental domain $\mathcal{G}$ for $\operatorname{GL}_2(\mathcal{O}_F) \backslash \operatorname{GL}_2(F_{\infty})$ will be one half of the fundamental domain $(\ref{fdF})$ for $\Gamma_F \backslash \mathfrak{H}^d$ in the $x_{\infty} = \Re(z_{\infty})$-variable. Thus, we have a fundamental domain of the form \begin{align}\label{fdG} \mathcal{G} &= \bigcup_{i=1}^{h_F} \delta_i \left( \left\lbrace z_{\infty} = (z_{\infty, j})_{j=1}^d \in \mathfrak{H}^d: \Re(z_{\infty, j}) \in [0,1/2], \Im(z_{\infty, j}) > B \text{ for all $j=1, \ldots, d$} \right\rbrace \right) \end{align} for the action of $\operatorname{GL}_2(\mathcal{O}_F)$ on $\operatorname{GL}_2(F_{\infty})$.
\\ \begin{itemize} \item[(i)] Fixing a fundamental domain $\mathcal{G}$ for the action of $\operatorname{GL}_2(\mathcal{O}_F)$ on $\operatorname{GL}_2(F_{\infty})$ as in $(\ref{fdG})$ above, each matrix $g$ in $\operatorname{GL}_2( {\bf{A}}_F)$ can be expressed uniquely as the product \begin{align*} g &= \coprod_{\zeta \in \Delta} \gamma \cdot \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \cdot \left( \begin{array} {cc} r_{\infty} & \\ ~ & r_{\infty} \end{array}\right) \cdot \left( \begin{array}{cc} \zeta & ~\\ ~ & \zeta \end{array} \right) \cdot k \end{align*} of a rational matrix $\gamma \in \operatorname{GL}_2(F)$ embedded diagonally into $\operatorname{GL}_2({\bf{A}}_F)$, a mirabolic matrix \begin{align*} \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \in P_2(F_{\infty})\end{align*} with archimedean coordinates $x_{\infty} \in F_{\infty}$ and $y_{\infty} \in F_{\infty}^{\times}$, a central element \begin{align*} \left( \begin{array} {cc} r_{\infty} & ~ \\ & r_{\infty} \end{array}\right) \in Z_2(F_{\infty}) \end{array} \in Z_2(F_{\infty}) \end{align*} with archimedean coordinates, and $k \in \mathcal{K} = O_2(F_{\infty}) \prod_{v < \infty} \operatorname{GL}_2(\mathcal{O}_{F_v})$. Here, our choice of fundamental domain $\mathcal{G}$ imposes the following constraints: for each index $1 \leq j \leq d$, the components $x_{\infty} = (x_{\infty, j})_{j=1}^d$, $r_{\infty} = (r_{\infty, j})_{j=1}^d$, and $y_{\infty} = (y_{\infty, j})_{j=1}^d$ must satisfy \begin{align*} r_{\infty, j} >0, \quad y_{\infty, j} > B, \quad 0 \leq x_{\infty, j} \leq 1/2.
\end{align*} \item[(ii)] Taking for granted the unique decomposition of matrices $g \in \operatorname{GL}_2({\bf{A}}_F)$ described in (i), the function defined on $g \in \operatorname{GL}_2( {\bf{A}}_F)$ by the rule \begin{align*} \widetilde{\mathbb P^n_1 \varphi}(g) = \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \end{align*} determines an $L^2$-automorphic form on $\operatorname{GL}_2( {\bf{A}}_F)$ having trivial central character and trivial right action by the maximal compact subgroup $\mathcal{K} \subset \operatorname{GL}_2({\bf{A}}_F)$. \end{itemize} \end{theorem} \begin{proof} The first assertion (i) can be deduced from the decomposition $(\ref{SAI})$ together with the choice $\mathcal{G}$ of fundamental domain for the action of $\operatorname{GL}_2(\mathcal{O}_F)$ on $\operatorname{GL}_2(F_{\infty})$. Note that this is the same style of argument used to pass from Maass forms on the $d$-fold upper-half plane $\mathfrak{H}^d \cong P_2(F_{\infty})$ to automorphic forms on $\operatorname{GL}_2({\bf{A}}_F)$. We refer to \cite[Theorem 4.4.4]{GoHu} for the special case of $F = {\bf{Q}}$, and note again that the discussion given above allows us to generalize the result directly to the setting of totally real fields. To be more precise, we first use the following standard Iwasawa decomposition of $\operatorname{GL}_2(F_{\infty})$ (see e.g.~\cite[Proposition 4.1.1]{GH}).
Fixing a fundamental domain $\mathcal{G}$ for $\operatorname{GL}_2(\mathcal{O}_F) \backslash \operatorname{GL}_2(F_{\infty})$ as in $(\ref{fdG})$ above, each $g_{\infty} \in \operatorname{GL}_2(F_{\infty})$ can be expressed uniquely as \begin{align}\label{Iw} g_{\infty} &= \left( \begin{array}{cc} 1 & x_{\infty} \\ ~ & 1 \end{array} \right) \cdot \left( \begin{array}{cc} y_{\infty} & ~\\ ~& 1 \end{array} \right) \cdot k \cdot \left( \begin{array}{cc} r_{\infty} & ~\\ ~& r_{\infty} \end{array} \right) \end{align} for $k \in O_2(F_{\infty})$, and for $x_{\infty} = (x_{\infty, j})_{j=1}^d \in F_{\infty}$, $y_{\infty} = (y_{\infty, j})_{j=1}^d \in F_{\infty}^{\times}$, and $r_{\infty} = (r_{\infty, j})_{j=1}^d \in F_{\infty}^{\times}$ subject to the stated constraints. That is, we can assume without loss of generality that $r_{\infty, j} >0$ for each $j = 1, \ldots, d$; the remaining constraints on $x_{\infty}$ and $y_{\infty}$ can be read off directly from the choice of fundamental domain $(\ref{fdG})$. Thus, each matrix $g \in \operatorname{GL}_2({\bf{A}}_F)$ can be decomposed uniquely as a product \begin{align*} g &= \coprod_{\zeta \in \Delta} \gamma \cdot \left( \begin{array}{cc} 1 & ~ x_{\infty} \\ ~ & 1 \end{array} \right) \cdot \left( \begin{array}{cc} y_{\infty} & ~\\ ~& 1 \end{array} \right) \cdot \left( \begin{array} {cc} r_{\infty} & ~ \\ ~ & r_{\infty} \end{array}\right) \cdot \left( \begin{array} {cc} \zeta & ~ \\ ~ & \zeta \end{array}\right) \cdot k, \end{align*} with $\gamma \in \operatorname{GL}_2(F)$ a rational matrix embedded diagonally into $\operatorname{GL}_2({\bf{A}}_F)$, $k \in \mathcal{K} = O_2(F_{\infty}) \prod_{v < \infty} \operatorname{GL}_2(\mathcal{O}_{F_v})$ an element of the maximal compact subgroup, and $x_{\infty} \in F_{\infty}$ and $r_{\infty}, y_{\infty} \in F_{\infty}^{\times}$ as in $(\ref{Iw})$ above. The second assertion (ii) is easy to deduce from (i).
The extended function $\widetilde{\mathbb P^n_1 \varphi}(g)$ is convergent in the $L^2$-norm, since the automorphic function $\mathbb P^n_1 \varphi$ on the mirabolic subgroup $P_2({\bf{A}}_F)$ is itself $L^2$, which follows as a consequence of the cuspidality of $\varphi$. By construction, $\widetilde{\mathbb P^n_1 \varphi}$ is both left $\operatorname{GL}_2(F)$-invariant and right $\mathcal{K}$-invariant. Hence, it is trivially ``$\mathcal{K}$-finite" in the natural sense, although it is not $\mathcal{Z}$-finite. To check the invariance under the centre, we argue via $(\ref{AFSA})$ that for any $z \in Z_2({\bf{A}}_F) \cong {\bf{A}}_F^{\times}$, we can find $\alpha \in F^{\times}$, $w_{\infty} \in F_{\infty}^{\times}$, and $k \in \mathcal{K}$ such that \begin{align*} z &= \coprod_{\zeta \in \Delta} \left( \begin{array} {cc} \alpha & ~ \\ ~ & \alpha \end{array}\right) \cdot \left( \begin{array} {cc} w_{\infty} & ~ \\ ~ & w_{\infty} \end{array}\right) \cdot \left( \begin{array}{cc} \zeta & ~\\ ~& \zeta \end{array} \right) \cdot k. \end{align*} It is then easy to check that \begin{align*} \widetilde{ \mathbb P^n_1 \varphi} \left( z \cdot g \right) &= \widetilde{ \mathbb P^n_1 \varphi} \left(g \cdot z \right) = \widetilde{ \mathbb P^n_1 \varphi} \left( \coprod_{\zeta \in \Delta} g \cdot \left( \begin{array} {cc} \alpha & ~ \\ ~ & \alpha \end{array}\right) \cdot \left( \begin{array} {cc} w_{\infty} & ~ \\ ~ & w_{\infty} \end{array}\right) \cdot \left( \begin{array}{cc} \zeta & ~\\ ~& \zeta \end{array} \right) \cdot k \right) = \widetilde{ \mathbb P^n_1 \varphi}(g), \end{align*} since $g \cdot z$ is then in the correct form for the definition. This completes the proof.
\end{proof} \subsubsection{Relations of Fourier-Whittaker expansions} We now consider the coefficients in the Fourier-Whittaker expansion of the mirabolic form $\mathbb P^n_1 \varphi$, and more precisely of its product $\mathbb P^n_1 \varphi \cdot \overline{\theta}$ with some metaplectic theta series $\overline{\theta}$. That is, for a given $F$-integer $\alpha \in \mathcal{O}_F$, writing $x \in {\bf{A}}_F$ and $y = y_f y_{\infty} \in {\bf{A}}_F^{\times}$ as usual, we would like to consider the corresponding Fourier-Whittaker coefficient defined by the unipotent integral \begin{align*} \int_{{\bf{A}}_F/F} \mathbb P^n_1 \varphi \cdot \overline{\theta} \left( \left( \begin{array} {cc} y & x \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x) dx = \int_{{\bf{A}}_F/F} \mathbb P^n_1 \varphi \cdot \overline{\theta} \left( \left( \begin{array} {cc} 1 & x \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} y & ~ \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x)dx. \end{align*} Note that we can express the Fourier-Whittaker coefficients equivalently as integrals over the compact domain $I \cong [0, 1]^d \subset F_{\infty} \approx {\bf{R}}^d$. That is, for any $F$-integer $\alpha \in \mathcal{O}_F$, we have the identification of unipotent integrals \begin{align*} \int_{ {\bf{A}}_F/F} \mathbb P^n_1 \varphi \cdot \overline{\theta} \left( \left( \begin{array} {cc} y & x \\ ~ & 1\end{array}\right) \right) \psi(- \alpha x) dx &= \int_{[0, 1]^d} \mathbb P^n_1 \varphi \cdot \overline{\theta} \left( \left( \begin{array} {cc} y & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty})dx_{\infty}. \end{align*} This is a consequence of the fact that both domains determine complete orthogonal systems for the corresponding additive characters. We shall use the latter semi-classical description in most of the discussion that follows.
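Concretely, the orthogonality underlying this identification is the familiar one for the exponentials $x \mapsto e^{2 \pi i k x}$ on $[0, 1]$. The following small numerical sketch (purely illustrative, for the prototype case $F = {\bf{Q}}$, $d = 1$, with $e^{2 \pi i x}$ standing in for the adelic character $\psi$) checks it:

```python
import cmath

def char_pairing(m, r, N=1024):
    # N-point Riemann sum approximating the integral over [0, 1] of
    # e^{2 pi i m x} * e^{-2 pi i r x}.  For integers with |m - r| < N
    # the geometric series makes the sum exact: it equals 1 when m == r
    # and 0 otherwise, mirroring the orthogonality used in the text.
    return sum(cmath.exp(2j * cmath.pi * (m - r) * j / N) for j in range(N)) / N
```

Picking off the $\alpha$-th Fourier coefficient of a periodic function amounts to pairing against the single character $e^{2\pi i \alpha x}$ in exactly this way.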
Let us now consider the Fourier-Whittaker coefficients of the $\operatorname{GL}_2({\bf{A}}_F)$-automorphic form $\widetilde{\mathbb P^n_1 \varphi}$ defined in Theorem \ref{extension} (ii) above. We shall argue that for any $y_{\infty} \in F_{\infty}^{\times}$ with $\vert y_{\infty} \vert \gg 1$ contained in our fixed fundamental domain of Theorem \ref{extension} (i), we have a relation of Fourier-Whittaker coefficients \begin{align*} \int_{I \cong [0, 1]^d \subset F_{\infty}} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) d x_{\infty} &= \int_{I \cong [0,1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left(\left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty} \end{align*} for any $F$-integer $\alpha \in \mathcal{O}_F$. \begin{proposition}\label{inversion} The following assertions are true. \\ \begin{itemize} \item[(i)] Let $\phi \in L^2(\operatorname{GL}_2(F) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}})^{\mathcal{K}}$ be any $L^2$-automorphic form on $\operatorname{GL}_2({\bf{A}}_F)$ having trivial central character ${\bf{1}}$ which is also right $\mathcal{K}$-invariant. Then for any idele $y \in {\bf{A}}_F^{\times}$, we have the relation \begin{align*} \phi \left( \left( \begin{array} {cc} y & ~ \\ ~ & 1 \end{array}\right) \right) &= \phi \left( \left( \begin{array} {cc} y^{-1} & \\ ~ & 1 \end{array}\right) \right). \end{align*} \item[(ii)] Let $\Pi = \otimes_v \Pi_v$ be any irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$, and let $\varphi \in V_{\Pi}$ be any vector in the underlying space. Assume that $W_{\varphi}(y_{\infty}) = W_{\varphi}(-y_{\infty})$, i.e.
so that we have \begin{align*} W_{\varphi} \left( \left( \begin{array} {cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \right) &= W_{\varphi} \left( \left( \begin{array} {cc} - y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \right) \end{align*} as functions of $ y_{\infty} \in F_{\infty}^{\times}$. Let $\mathbb P^n_1 \varphi$ denote the projection of $\varphi$ to the mirabolic subgroup $P_2({\bf{A}}_F)$ of $\operatorname{GL}_2({\bf{A}}_F)$, and let $\widetilde{\mathbb P^n_1 \varphi} \in L^2(\operatorname{GL}_2(F) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}})^{\mathcal{K}}$ denote its extension to $\operatorname{GL}_2({\bf{A}}_F)$ as defined in Theorem \ref{extension} above. Let $\alpha \in \mathcal{O}_F$ be any $F$-integer, embedded diagonally into $F_{\infty} \cong {\bf{R}}^d$. We have for any archimedean idele $y_{\infty} \in F_{\infty}^{\times}$ with $\vert y_{\infty} \vert > B$ contained in our fixed fundamental domain $\mathcal{G}$ of Theorem \ref{extension} (i) the identification of Fourier-Whittaker coefficients \begin{align*} &\int_{I \cong [0, 1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty} \\ &= \int_{I \cong [0, 1]^d \subset F_{\infty}} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty}. \\ \end{align*} \item[(iii)] Again, suppose we choose the pure tensor $\varphi \in V_{\Pi}$ in such a way that $W_{\varphi}(y_{\infty}) = W_{\varphi}(-y_{\infty})$.
Fixing $y_{\infty} \in F_{\infty}^{\times}$ in our chosen fundamental domain $\mathcal{G}$ for $\operatorname{GL}_2(\mathcal{O}_F) \backslash \operatorname{GL}_2(F_{\infty})$ described in $(\ref{fdG})$ above, the corresponding extended function \begin{align*} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array}{cc} y_{\infty} & x_{\infty} \\ ~& 1 \end{array} \right) \right) \end{align*} is continuous as a function of $x_{\infty} \in I \cong [0, 1]^d \subset F_{\infty}$. \end{itemize} \end{proposition} \begin{proof} To show (i), observe that since $\phi$ is both left invariant by $\left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array} \right) \in \operatorname{GL}_2(F)$ and right invariant by $\left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array} \right) \in \mathcal{K}$, we have via the elementary matrix identity \begin{align*} \left( \begin{array} {cc} y^{-1} & ~ \\ ~ & 1 \end{array}\right) &= \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \left( \begin{array} {cc} y & \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} y^{-1} & \\ ~ & y^{-1} \end{array}\right) \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \end{align*} that \begin{align*} \phi \left( \left( \begin{array} {cc} y^{-1} & ~ \\ ~ & 1 \end{array}\right) \right) &= \phi \left( \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \left( \begin{array} {cc} y & \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} y^{-1} & \\ ~ & y^{-1} \end{array}\right) \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \right) = \phi \left( \left( \begin{array} {cc} y & \\ ~ & 1 \end{array}\right) \right). \end{align*} Here, the second identification follows from the fact that $\phi$ has trivial central character. This shows the stated relation for (i).
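The elementary matrix identity used here can be verified mechanically. As a sanity check, the following sketch (purely illustrative, working with a single rational value of $y$ in exact arithmetic rather than with ideles) confirms the factorization $\operatorname{diag}(y^{-1}, 1) = w \cdot \operatorname{diag}(y, 1) \cdot \operatorname{diag}(y^{-1}, y^{-1}) \cdot w$ for the Weyl element $w$:

```python
from fractions import Fraction

def mat_mul(A, B):
    # 2x2 matrix product over exact rationals.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def check_identity(y):
    # Verify diag(y^{-1}, 1) = w . diag(y, 1) . diag(y^{-1}, y^{-1}) . w,
    # where w is the long Weyl element antidiag(1, 1), for a nonzero scalar y.
    w = [[0, 1], [1, 0]]
    lhs = [[1 / y, 0], [0, 1]]
    rhs = mat_mul(mat_mul(mat_mul(w, [[y, 0], [0, 1]]), [[1 / y, 0], [0, 1 / y]]), w)
    return lhs == rhs
```

The middle factor $\operatorname{diag}(y^{-1}, y^{-1})$ is central, which is why it can be absorbed by the trivial central character in the step that follows.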
Before going to (ii), let us first observe that the elementary matrix identity \begin{align*} \left( \begin{array} {cc} y_{\infty} & ~ \\ ~ & 1 \end{array}\right) &= \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \left( \begin{array} {cc} y_{\infty}^{-1} & \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} y_{\infty} & \\ ~ & y_{\infty} \end{array}\right) \left( \begin{array} {cc} ~ & 1 \\ 1 & ~ \end{array}\right) \end{align*} puts the matrix on the left hand side into the correct form for the unique factorization result of Theorem \ref{extension} (i) via the calculation on the right hand side. Hence for $y_{\infty} \in F_{\infty}^{\times}$ with $\vert y_{\infty} \vert > B$ contained in our chosen fundamental domain, we derive from the definition of $\widetilde{\mathbb P^n_1 \varphi}$ the relations \begin{align*} \widetilde{ \mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & ~ \\ ~ & 1 \end{array}\right) \right) &= \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} 1/y_{\infty} & ~ \\ ~ & 1 \end{array}\right) \right) = \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} 1/y_{\infty} & ~ \\ ~ & 1 \end{array}\right) \right). \end{align*} To show (ii), we divide the interval $I \cong [0, 1]^d \subset F_{\infty}$ into parts $I = I_1 + I_2$ with $I_1 \cong [0, 1/2]^d \subset F_{\infty}$ and $I_2 \cong (1/2, 1]^d \subset F_{\infty}$, so that \begin{align*} &\int_{I \cong [0, 1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) d x_{\infty} \\ &= \int_{I_1 \cong [0, 1/2]^d \subset F_{\infty} }\mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) d x_{\infty} \\ &+ \int_{I_2 \cong (1/2, 1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right)
\psi(- \alpha x_{\infty}) d x_{\infty}. \end{align*} It is easy to see from the definition (given in Theorem \ref{extension}) that the claim is true for the first integral, \begin{align*} &\int_{I_1 \cong [0, 1/2]^d \subset F_{\infty} } \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi(-\alpha x_{\infty}) d x_{\infty} \\ &= \int_{ I_1 \cong [0, 1/2]^d \subset F_{\infty} } \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi(- \alpha x_{\infty}) d x_{\infty}. \end{align*} To make a similar identification for the second integral, fix $x_{\infty} \in I_2 \cong (1/2, 1]^d \subset F_{\infty}$. We first observe that the condition $W_{\varphi}(y_{\infty}) = W_{\varphi}(-y_{\infty})$ gives us the relation \begin{align}\label{one} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} -y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right).
\end{align} Indeed, comparing Fourier-Whittaker expansions, and making use of our condition on the archimedean Whittaker coefficients, we see that \begin{align*} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} -y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \left\vert - y_{\infty} \right\vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} -\gamma y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \right) \psi(\gamma x_{\infty}) \\ &= \left\vert y_{\infty} \right\vert^{- \left( \frac{n-2}{2} \right)} \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array}\right) \right) \psi(\gamma x_{\infty}) = \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right). \end{align*} Now, since the mirabolic form $\mathbb P^n_1 \varphi$ is left invariant under $ \left( \begin{array} {cc} -1 & ~ \\ ~ & 1 \end{array}\right) \in P_2(F)$, we have the relation \begin{align}\label{two} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} - y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & -x_{\infty} \\ ~ & 1 \end{array}\right) \right). \end{align} At the same time, since the mirabolic form $\mathbb P^n_1 \varphi$ is left invariant by $ \left( \begin{array} {cc} 1 & -1 \\ ~ & 1 \end{array}\right) \in P_2(F)$, we also have that \begin{align}\label{three} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & 1 - x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & -x_{\infty} \\ ~ & 1 \end{array}\right) \right). 
\end{align} Now, notice that the archimedean adele component on the left hand side of $(\ref{three})$ is in the right form for the unique factorization of Theorem \ref{extension}, in which case we have (by the definition) that \begin{align}\label{four} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & 1 - x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & 1 - x_{\infty} \\ ~ & 1 \end{array}\right) \right). \end{align} Moreover, since the extended form $\widetilde{\mathbb P^n_1 \varphi}$ is both left invariant by $ \left( \begin{array} {cc} -1 & ~ \\ ~ & 1 \end{array}\right) \in \operatorname{GL}_2(F)$ and right invariant by $ \left( \begin{array} {cc} -1 & ~ \\ ~ & 1 \end{array}\right) \in \mathcal{K}$, we see via the direct calculation \begin{align*} \left( \begin{array} {cc} -1 & ~ \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} y_{\infty} & x_{\infty}-1 \\ ~ & 1 \end{array}\right) \left( \begin{array} {cc} -1 & ~ \\ ~ & 1 \end{array}\right) &= \left( \begin{array} {cc} y_{\infty} & 1 - x_{\infty} \\ ~ & 1 \end{array}\right) \end{align*} that \begin{align}\label{five} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & 1-x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty}-1 \\ ~ & 1 \end{array}\right) \right). \end{align} As well, we see from the fact that the extended form $\widetilde{\mathbb P^n_1 \varphi}$ is left invariant under $ \left( \begin{array} {cc} 1 & -1 \\ ~ & 1 \end{array}\right) \in \operatorname{GL}_2(F)$ that \begin{align}\label{six} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty}-1 \\ ~ & 1 \end{array}\right) \right) &= \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right).
\end{align} Hence, putting together the identifications $(\ref{one})$, $(\ref{two})$, $(\ref{three})$, $(\ref{four})$, $(\ref{five})$, and $(\ref{six})$ respectively, we see that \begin{equation}\begin{aligned}\label{seriesID} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) &= \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} -y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) = \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & -x_{\infty} \\ ~ & 1 \end{array}\right) \right) = \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & 1-x_{\infty} \\ ~ & 1 \end{array}\right) \right) \\ &= \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & 1-x_{\infty} \\ ~ & 1 \end{array}\right) \right) = \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} - 1 \\ ~ & 1 \end{array}\right) \right) = \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \end{aligned}\end{equation} and hence \begin{align*} &\int_{I_2 \cong (1/2, 1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(- \alpha x_{\infty}) dx_{\infty} \\ &= \int_{I_2 \cong (1/2, 1]^d \subset F_{\infty}} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ ~ & 1 \end{array}\right) \right) \psi(- \alpha x_{\infty}) d x_{\infty}, \end{align*} as required to deduce the claimed relation of coefficients via the decomposition $I = I_1 + I_2$. For (iii), the claim is easy to derive from this relation $(\ref{seriesID})$ of the Fourier-Whittaker coefficients. \end{proof} \subsection{Relations to automorphic forms on the metaplectic group} Let us now consider automorphic forms on the metaplectic cover $\overline{G}$ of $\operatorname{GL}_2$. 
We refer to \cite[$\S 2$]{Ge} for more background. To summarize briefly, the group of adelic points $\overline{G}({\bf{A}}_F)$ is a central extension of $\operatorname{GL}_2({\bf{A}}_F)$ by the square roots of unity $\mu_2 = \lbrace \pm 1 \rbrace$, and fits into the exact sequence \begin{align*} 1 \longrightarrow \mu_2 \longrightarrow \overline{G}({\bf{A}}_F) \longrightarrow \operatorname{GL}_2({\bf{A}}_F) \longrightarrow 1. \end{align*} This sequence splits over the rational points $\operatorname{GL}_2(F)$, as well as over the unipotent subgroup $N_2(F) \subset \operatorname{GL}_2(F)$. An automorphic form on $\overline{G}({\bf{A}}_F)$ is said to be {\it{genuine}} if it transforms nontrivially under the action of $\mu_2$, in which case it corresponds to a Hilbert modular form of half-integral weight. A form which transforms trivially under $\mu_2$ is said to be {\it{non-genuine}}, and corresponds to a lift of a Hilbert modular form of integral weight (cf.~\cite[Proposition 3.1, p.~57]{Ge}). Writing $\overline{g} = (g, \xi) \in \overline{G}({\bf{A}}_F)$ to denote a generic element, with $\xi \in \mu_2$ and $g \in \operatorname{GL}_2({\bf{A}}_F)$, we can consider the following spectral decomposition of a genuine $L^2$-automorphic form $\theta(\overline{g})$ of some central character $\omega = \omega_{\theta}$, factoring through the corresponding Hilbert space $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ of such automorphic forms.
This space $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ decomposes into a direct sum of a discrete spectrum $L^2_{\operatorname{disc}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ plus a continuous spectrum $L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ spanned by analytic continuations of Eisenstein series, \begin{align*} L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) &= L^2_{\operatorname{disc}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) \oplus L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega).\end{align*} The subspace $L^2_{\operatorname{disc}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ decomposes into an orthogonal direct sum of the subspace of residual forms $L^2_{\operatorname{res}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ which occur as residues of metaplectic Eisenstein series, with orthogonal complement given by the subspace of cuspidal forms $L^2_{\operatorname{cusp}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ (see \cite[p.~53]{Ge}): \begin{align*} L^2_{\operatorname{disc}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) &= L^2_{\operatorname{res}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) \oplus L^2_{\operatorname{cusp}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega).\end{align*} To be more precise, the subspace $ L^2_{\operatorname{cusp}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ is spanned by a basis $\lbrace f_i \rbrace_i$ of cuspidal forms $f_i$. 
The subspace $L^2_{\operatorname{res}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ is spanned by a basis $\lbrace \vartheta_{\xi} \rbrace_{\xi}$ of residual forms $\vartheta_{\xi}$ which occur as residues of metaplectic Eisenstein series, and which moreover are given as translates of the metaplectic theta series associated to the quadratic form $Q(x) = x^2$ (see \cite[Theorem 6.1]{Ge}). The continuous spectrum $L^2_{\operatorname{cont}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ is spanned by analytic continuations of metaplectic Eisenstein series $\mathcal{E}_{\varpi}$, \begin{align*} L^2_{\operatorname{cont}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) &= \bigoplus_{\varpi} \int_{\Re(s) = 1/2} \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i}. \end{align*} Let us also remark that the forms making up the continuous spectrum $L^2_{\operatorname{cont}} (\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ can be realized more explicitly after Mellin inversion as Poincar\'e series (see e.g.~\cite[$\S$ 7.4]{Ku}). In particular, Poincar\'e series span this closed subspace of $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$. We shall revisit this point later. \subsubsection{Theta series} Let us now describe the theta series we shall work with. Let $Q$ denote the $F$-rational quadratic form $Q(x)=x^2$, and $\theta_Q$ the corresponding genuine theta series on the metaplectic cover $\overline{G}({\bf{A}}_F)$ of $\operatorname{GL}_2({\bf{A}}_F)$ (cf.~e.g.~\cite[Proposition 2.35]{Ge}). In fact, we shall only consider the theta series of $Q$ associated to the principal class in $C(\mathcal{O}_F) \cong {\bf{A}}_F^{\times}/ F_{\infty}^{\times} F^{\times} \prod_{v < \infty} \mathcal{O}_{F_v}^{\times}$.
To be explicit, this partial theta series has the following expansion: given $x_{\infty} \in F_{\infty}$ and $y_{\infty} \in F_{\infty}^{\times}$, we have \begin{align}\label{mettheta} \theta_Q \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right)\right) = \theta_Q \left( \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right), 1 \right) \right) &= \vert y_{\infty} \vert^{\frac{1}{4}} \sum_{\gamma \in \mathcal{O}_F / \mathcal{O}_F^{\times}} \psi(Q(\gamma)(x_{\infty} + iy_{\infty})). \end{align} Here, the sum runs over $F$-integers up to the action of units $\mathcal{O}_F^{\times}$, and hence corresponds to a sum over principal ideals of $F$. The image $\overline{\theta}_Q = T_{-1}\theta_Q$ under the Hecke operator at infinity $T_{-1}$ sending $x_{\infty} \mapsto -x_{\infty} \in F_{\infty}$ has the corresponding expansion \begin{align*} \overline{\theta}_Q \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right)\right) = \overline{\theta}_Q \left( \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right), 1 \right) \right) &= \vert y_{\infty} \vert^{\frac{1}{4}} \sum_{\gamma \in \mathcal{O}_F/ \mathcal{O}_F^{\times}} \psi(Q(\gamma)(-x_{\infty} + iy_{\infty})). \end{align*} Here, we only evaluate on elements of the form $(g, 1) \in \overline{G}({\bf{A}}_F)$ with $g \in \operatorname{GL}_2({\bf{A}}_F)$, and more specifically at a mirabolic matrix $g \in P_2(F_{\infty})$. Hence, we shall often drop the square root of unity term $1 \in \mu_2$ from the notation. Note however that this can be thought of as a genuine automorphic form on $\overline{g} = (g, \xi) \in \overline{G}({\bf{A}}_F)$.
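The archimedean prototype of such a series, for $F = {\bf{Q}}$ and $Q(x) = x^2$, is the classical Jacobi theta function $\theta(t) = \sum_{n \in {\bf{Z}}} e^{-\pi n^2 t}$, whose transformation law $\theta(1/t) = t^{1/2}\, \theta(t)$ reflects the half-integral-weight automorphy on the metaplectic cover. A small numerical sketch (using this classical normalization, not the $\vert y_{\infty} \vert^{1/4}$-normalized adelic expansion $(\ref{mettheta})$):

```python
import math

def jacobi_theta(t, terms=60):
    # Partial sum of theta(t) = sum over n in Z of exp(-pi * n^2 * t), for t > 0.
    # The tail beyond |n| = terms is of size roughly exp(-pi * terms^2 * t),
    # so a few dozen terms already give machine precision for t of moderate size.
    return 1 + 2 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))
```

Numerically one can then confirm, say, $\theta(2) \approx \sqrt{1/2}\,\theta(1/2)$, an instance of the functional equation obtained by Poisson summation.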
\subsection{Spectral decompositions of shifted convolution sums} Let us now explain how to use the setup above to derive nontrivial bounds for shifted convolution sums of the $L$-function coefficients of our fixed $\operatorname{GL}_n({\bf{A}}_F)$-automorphic representation. Note that while we give a self-contained account of these bounds, variations of the same bounds (for distinct applications) are also derived in the related works \cite{VO9} and \cite{VO19}. Fix $\Pi = \otimes_v \Pi_v$ an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ as above, and let us take for granted the discussion of the projection $\mathbb P^n_1 \varphi$ of a given pure tensor $\varphi = \otimes_v \varphi_v \in V_{\Pi}$, together with the relation of its Fourier-Whittaker coefficients to the $L$-function coefficients of $\Pi$ as described in $(\ref{FWEPexp})$. We shall also take for granted the setup with extensions and relations of Fourier-Whittaker expansions derived in Theorem \ref{extension} and Proposition \ref{inversion}, including the calculations given for the proofs. Building on this discussion we shall derive bounds for sums of the form \begin{align}\label{ssum} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(Q(r) + \alpha)}{ \vert Q(r) + \alpha \vert^{\frac{1}{2}} } W \left( \frac{\vert Q(r) + \alpha \vert}{ \vert Y_{\infty} \vert} \right) \end{align} for $\alpha \in \mathcal{O}_F$ a given nonzero $F$-integer, $Y_{\infty} \in F_{\infty}^{\times}$ some idele of norm $\vert Y_{\infty} \vert > 1$, and $W$ some suitable smooth weight function on $y \in {\bf{R}}_{>0}$ of compact support with bounded derivatives $W^{(i)} \ll 1$ for all $i \geq 0$. We shall also consider sums of the following more general form. Let $q(x) = A x^2 + B x + C$ be an irreducible definite quadratic polynomial with totally positive $F$-integer coefficients $A, B, C \in \mathcal{O}_F$ and totally negative discriminant $\Delta = B^2 - 4 A C \ll 0$.
We also derive bounds for sums of the form \begin{align}\label{ssum2} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q(r))}{ \vert q(r) \vert^{\frac{1}{2}} } W \left( \frac{\vert q(r) \vert}{ \vert Y_{\infty} \vert} \right). \end{align} Although it is not a priori obvious, we shall reduce the task of finding bounds for each of these sums $(\ref{ssum})$ and $(\ref{ssum2})$ to the better-understood setting of rank $n=2$, using distinct arguments with decompositions into Poincar\'e series to derive sufficient bounds for our applications to moments of central values of $L$-functions. Here, we shall first give some relevant general background about the Poincar\'e series we use, including derivations of formulae for the Fourier-Whittaker coefficients. We then recall some standard background on Sobolev norms and bounds for classical Whittaker functions near zero before deriving bounds for each of the sums $(\ref{ssum})$ and $(\ref{ssum2})$. \subsubsection{Generalized Poincar\'e series} Let us first introduce a class of generalized Poincar\'e series, following \cite{CPS} and \cite{CoPSS}. Let us write $G$ to denote either the group $\operatorname{GL}_2$ or its metaplectic cover $\overline{G}$, taking for granted that the context will make the distinction clear.
Fixing $\psi' = \otimes_v \psi'_v$ a nontrivial additive character of ${\bf{A}}_F/F$, extended in the natural way to $N_2({\bf{A}}_F)/N_2(F)$, and $\omega = \otimes_v \omega_v$ an idele class character of ${\bf{A}}_F^{\times}/F^{\times}$, let us consider the space $\mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi', \omega)$ of functions $f: G({\bf{A}}_F) \longrightarrow {\bf{C}}$ such that: \\ \begin{itemize} \item $f(n g) = \psi'(n) f(g)$ for all $n \in N_2({\bf{A}}_F)$ and $g \in G({\bf{A}}_F)$,\\ \item $f(z g) = \omega(z) f(g)$ for all $z \in Z_2({\bf{A}}_F) \cong {\bf{A}}_F^{\times}$, \\ \item The function $f = \otimes_v f_v$ is decomposable, with each $f_v$ factoring through the analogously-defined space of local functions $\mathcal{S}(N_2(F_v) \backslash G(F_v); \psi_v', \omega_v)$, \\ \item The archimedean component $f_{\infty} = \otimes_{v \mid \infty} f_v$ is Schwartz modulo $N_2(F_{\infty})$ (cf.~\cite[$\S$ 2]{CPS}), i.e.~for each operator $X \in \mathcal{U}(\mathfrak{g})$, the corresponding function $R X f_{\infty}$ is rapidly decreasing modulo $N_2(F_{\infty})$. \\ \end{itemize} To construct Poincar\'e series from such a choice of function $f \in \mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi', \omega)$, let us also fix $\mu: \operatorname{GL}_2 \longrightarrow {\bf{C}}$ a multiplier system (see e.g.~\cite{Pro}) and $\Gamma \subset \operatorname{GL}_2({\bf{A}}_F)$ a discrete finite-covolume subgroup whose stabilizer subgroup for the cusp at infinity we denote by $\Gamma_{\infty} \cong N_2(F)$. We then consider the function $P_{f, \omega, \mu}$ defined on $g \in G({\bf{A}}_F)$ by the summation \begin{align*} P_{f, \omega, \mu}(g) &= \sum\limits_{\gamma \in \Gamma_{\infty} \backslash \Gamma} \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f(\gamma g). \end{align*} Note that the multiplier system $\mu$ is trivial, and hence dropped from our subsequent notations, when $G = \operatorname{GL}_2$.
We shall also drop the central character $\omega$ from our subsequent notations in the event that it is the trivial character $\omega = {\bf{1}}$. As explained in \cite[Proposition 2.2]{CPS} (for instance), our choice of decomposable function $f \in \mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi', \omega)$ with $f_{\infty}$ Schwartz modulo $N_2(F_{\infty})$ ensures that the Poincar\'e series $P_{f, \omega, \mu}$ determines a {\it{smooth}} $L^2$-automorphic form on $G({\bf{A}}_F)$ with central character $\omega$. That is, such a choice of function $f = \otimes_v f_v \in \mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi', \omega)$ ensures that $P_{f, \omega, \mu} \in L^2(\operatorname{GL}_2(F) \backslash G({\bf{A}}_F), \omega)$ is a smooth $L^2$-automorphic form. To describe the Fourier-Whittaker coefficients of such a Poincar\'e series $P_{f, \omega, \mu}$, we must first introduce the Bruhat decomposition. Let us thus introduce the set of archimedean idele (vectors) \begin{align*} \Omega(\Gamma) &= \left\lbrace c \in F_{\infty}^{\times} : \left( N_2(F_{\infty}) \cdot w \cdot \underline{c} \cdot N_2(F_{\infty}) \right) \cap \Gamma \neq \emptyset \right\rbrace, \end{align*} where we use the shorthand matrix notations \begin{align*} w &= \left( \begin{array}{cc} & ~ -1 \\ 1 & ~ \end{array} \right) \quad \text{ and } \quad \underline{c} = \left( \begin{array}{cc} c & ~\\ ~ & 1\end{array} \right). \end{align*} Given $c \in \Omega(\Gamma)$, we shall also consider the corresponding subgroup \begin{align*} \Gamma_c &= \left( N_2(F_{\infty}) \cdot w \cdot \underline{c} \cdot N_2(F_{\infty}) \right) \cap \Gamma \subset \Gamma. \end{align*} We write the corresponding Bruhat decomposition of any element $\gamma \in \Gamma_c$ as $\gamma = n_1(\gamma) \cdot w \cdot \underline{c} \cdot n_2(\gamma)$, i.e.~with $n_j(\gamma) \in N_2(F_{\infty})$ for each of $j=1,2$.
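This factorization can be checked concretely. A sketch in exact rational arithmetic (purely illustrative, for a single real $2 \times 2$ matrix with nonzero lower-left entry $c$, with $w$ the element $\left(\begin{smallmatrix} ~ & -1 \\ 1 & ~ \end{smallmatrix}\right)$ and middle torus element $\operatorname{diag}(c, c^{-1}(ad - bc))$):

```python
from fractions import Fraction as Fr

def mat_mul(A, B):
    # 2x2 matrix product over exact rationals.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def bruhat(gamma):
    # Reassemble gamma = n1 . w . diag(c, det/c) . n2, with n1, n2 upper
    # unipotent and w = [[0, -1], [1, 0]], assuming the lower-left entry
    # c of gamma is a nonzero integer.  Returns the recomputed product,
    # which should equal gamma entrywise.
    (a, b), (c, d) = gamma
    det = a * d - b * c
    n1 = [[1, Fr(a, c)], [0, 1]]
    w = [[0, -1], [1, 0]]
    mid = [[c, 0], [0, Fr(det, c)]]
    n2 = [[1, Fr(d, c)], [0, 1]]
    return mat_mul(mat_mul(mat_mul(n1, w), mid), n2)
```

Multiplying out recovers the off-diagonal entry as $a d c^{-1} - \det \cdot c^{-1} = b$, which is the computation behind the explicit decomposition displayed next.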
Explicitly, this can be viewed as the elementary matrix decomposition \begin{align*} \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) &= \left( \begin{array}{cc} 1 & ac^{-1} \\ ~& 1 \end{array} \right) \left( \begin{array}{cc} & ~ -1 \\ 1 & ~ \end{array} \right) \left( \begin{array}{cc} c & ~ \\ ~ & c^{-1}(ad-bc) \end{array} \right) \left( \begin{array}{cc} 1 & dc^{-1} \\ ~& 1 \end{array} \right), \end{align*} so that \begin{align*} n_1(\gamma) = \left( \begin{array}{cc} 1 & ac^{-1} \\ ~ & 1 \end{array} \right) \quad \text{and} \quad n_2(\gamma) = \left( \begin{array}{cc} 1 & d c^{-1} \\ ~ & 1 \end{array} \right). \end{align*} We can now describe the Fourier coefficients of the Poincar\'e series $P_{f, \omega, \mu}$ introduced above as follows. \begin{proposition}\label{FWEPS} Given two nontrivial additive characters $\psi_1$ and $\psi_2$ of ${\bf{A}}_F/F$, together with a decomposable function $f = \otimes_v f_v \in \mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi_1, \omega)$ as above, the Fourier coefficient defined on $g \in G({\bf{A}}_F)$ with respect to $\psi_2$ by the unipotent integral \begin{align*} W_{P_{f, \omega, \mu}}(g) = W_{ P_{f, \omega, \mu}, \psi_2 }(g) &= \int_{ {\bf{A}}_F/F } P_{f, \omega, \mu} \left( \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) g \right) \psi_2(-x)dx \end{align*} is given by the formula \begin{align*} &W_{ P_{f, \omega, \mu}}(g) = \int_{ {\bf{A}}_F/F } f(g) \psi_1(x) \psi_2(-x) dx \\ &+ \sum\limits_{c \in \Omega(\Gamma)} \sum\limits_{\gamma \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot \psi_1(n_1(\gamma)) \cdot \psi_2(n_2(\gamma)) \cdot \int_{ {\bf{A}}_F } f \left( w \cdot \underline{c} \cdot \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \cdot g \right) \psi_2(-x) dx.
\end{align*} Explicitly, writing $\psi = \otimes_v \psi_v$ as above to denote the standard additive character on $x \in {\bf{A}}_F$, if we choose $\psi_1(x) = \psi_{\infty}(mx)$ and $\psi_2(x) = \psi_{\infty}(rx)$ for nonzero $F$-integers $m$ and $r$, then we obtain the classical formula \begin{align}\label{classicalPFWE} W_{ P_{f, \omega, \mu} }(g) &= \int_{ {\bf{A}}_F/F } f \left( \left( \begin{array}{cc} 1 & x \\ ~ & 1\end{array} \right) g \right) \psi_{\infty}((m - r)x) dx + \sum\limits_{ c \in \Omega(\Gamma) } \operatorname{Kl}_{\Gamma, \omega, \mu}(m, r; c) \cdot \mathcal{F}_{f, r, c} (g), \end{align} where each $\operatorname{Kl}_{\Gamma, \omega, \mu}(m, r; c)$ denotes the Kloosterman sum defined by \begin{align*} \operatorname{Kl}_{\Gamma, \omega, \mu}(m, r; c) &= \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot \psi_{\infty} \left( \frac{a m }{c} \right) \psi_{\infty} \left( \frac{ d r}{c} \right), \end{align*} and each $\mathcal{F}_{f, r, c} (g)$ the intertwining integral defined by \begin{align*} \mathcal{F}_{f, r, c}(g) &= \int_{ {\bf{A}}_F } f \left( w \cdot \underline{c} \cdot \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \cdot g \right) \psi(-rx)dx. \end{align*} \end{proposition} \begin{proof} See \cite[Proposition 2.1]{CPS}; the same proof works here.
To be clear, given $f \in \mathcal{S}(N_2({\bf{A}}_F) \backslash G({\bf{A}}_F); \psi_1, \omega)$ and $\mu$ as above, we open up definitions to find that \begin{align*} W_{P_{f, \omega, \mu}}(g) &= \int_{ {\bf{A}}_F/F } P_{f, \omega, \mu} \left( \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) g \right) \psi_2(-x)dx \\ &= \int_{ {\bf{A}}_F/F } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f \left( \gamma \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) g \right) \psi_2(-x)dx, \end{align*} which after using the Bruhat decomposition $\Gamma = \Gamma_{\infty} \cup \bigcup_{c \in \Omega(\Gamma)} \Gamma_c$ is the same as \begin{align*} \int_{ {\bf{A}}_F/F } f \left( \left( \begin{array}{cc} 1 & x \\ ~ & 1\end{array} \right) g \right) \psi_2(-x)dx + \sum\limits_{c \in \Omega(\Gamma)} \int_{ {\bf{A}}_F/F } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c} \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f \left( \gamma \left( \begin{array}{cc} 1 & x \\ ~ & 1\end{array} \right) g \right) \psi_2(-x) dx.
\end{align*} Now, to compute the second integral in this latter expression, we use the natural identification $\Gamma_{\infty} \cong N_2(F)$ with the matrix decomposition $\gamma = n_1(\gamma) \cdot w \cdot \underline{c} \cdot n_2(\gamma) \in \Gamma_c$ described above to find that \begin{align*} &\sum\limits_{c \in \Omega(\Gamma)} \int_{ {\bf{A}}_F/F } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f \left( \gamma \left( \begin{array}{cc} 1 & x \\ ~ & 1\end{array} \right) g \right) \psi_2(-x) dx \\ &= \sum\limits_{c \in \Omega(\Gamma)} \int_{ N_2(F) \backslash N_2({\bf{A}}_F) } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f \left( \gamma n g \right) \cdot \psi_2^{-1}(n ) dn \\ &= \sum\limits_{ c \in \Omega(\Gamma) } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \int_{N_2( {\bf{A}}_F)} \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot f \left( \gamma n g \right) \cdot \psi_2^{-1}(n ) dn \\ &= \sum\limits_{ c \in \Omega(\Gamma) } \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} \atop \gamma = n_1(\gamma) \cdot w \cdot \underline{c} \cdot n_2(\gamma) } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot \int_{ {\bf{A}}_F} f \left( \gamma \cdot \left( \begin{array}{cc} 1 & x \\ ~ & 1\end{array} \right) \cdot g \right) \cdot \psi_2(-x) dx \\ &= \sum\limits_{c \in \Omega(\Gamma)} \sum\limits_{ \gamma \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} \atop \gamma = n_1(\gamma) \cdot w \cdot \underline{c} \cdot n_2(\gamma) } \omega(\gamma) \cdot \overline{\mu(\gamma)} \cdot \psi_1(n_1(\gamma)) \cdot \psi_2(n_2(\gamma)) \cdot \int_{ {\bf{A}}_F } f \left( w \cdot \underline{c} \cdot \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \cdot g \right) \psi_2(-x) dx. \end{align*} Here, in the last step, we make the translation change of variables $x \rightarrow x - d c^{-1}$ to bring out the factor of $\psi_2(n_2(\gamma))$, and use the left equivariance of $f$ under $n_1(\gamma) \in N_2(F_{\infty})$ to bring out the factor of $\psi_1(n_1(\gamma))$.
\end{proof} \subsubsection{Spectral decompositions and Sobolev norm bounds} Recall that the Hilbert space $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ has the spectral decomposition \begin{align*} L_{\operatorname{cusp}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) \oplus L_{\operatorname{res}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) \oplus L_{\operatorname{cont}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega) \end{align*} into a subspace of cuspidal forms $L_{\operatorname{cusp}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega )$, a subspace of residual forms $L_{\operatorname{res}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega )$, and a subspace $L_{\operatorname{cont}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega )$ spanned by analytic continuations of Eisenstein series (see \cite{Ge}). Let us thus fix such a basis $\mathcal{B}$ of smooth forms of $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega )$ consisting of: \\ \begin{itemize} \item An orthonormal basis $\lbrace \phi_i \rbrace_i$ of cuspidal forms $\phi_i \in L_{\operatorname{cusp}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ of respective weights $\kappa_i = (\kappa_{i,j})_{j=1}^d$ and spectral parameters $\nu_i = (\nu_{i, j})_{j=1}^d$; \\ \item An orthonormal basis $\lbrace \vartheta_{\xi} \rbrace_{\xi}$ of residual forms $\vartheta_{\xi} \in L_{\operatorname{res}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ of respective weights $\kappa_{\xi} = (\kappa_{\xi, j})_{j=1}^d$ and spectral parameters $\nu_{\xi} = (\nu_{\xi, j})_{j=1}^d$; \\ \item An orthonormal basis $\lbrace \mathcal{E}_{\varpi} \rbrace_{\varpi}$ of Eisenstein series $\mathcal{E}_{\varpi} = \mathcal{E}_{\varpi}(*, s)$ of respective weights $\kappa_{\varpi} = (\kappa_{\varpi, j})_{j=1}^d$ and 
spectral parameters $\nu_{\varpi, s} = (\nu_{\varpi, s, j})_{j=1}^d$, whose analytic continuations furnish the corresponding continuous spectrum: \begin{align*} \bigoplus_{\varpi} \int_{\Re(s)= 1/2} \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} &= L_{\operatorname{cont}}^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega). \end{align*} Note that via Mellin inversion (cf.~\cite[$\S$7.4]{Ku}), such an orthonormal basis can be identified with a spanning set of smooth Poincar\'e series $\lbrace \mathcal{P}_{\varpi} \rbrace_{\varpi}$ of the same weights $\kappa_{\varpi}$. \end{itemize} Hence, we can decompose any $L^2$-automorphic form $\phi \in L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ (not necessarily smooth) into such an orthonormal basis to derive the $L^2$-expansion \begin{equation}\begin{aligned}\label{SD} \phi &= \sum_i \langle \phi_i, \phi \rangle \cdot \phi_i + \sum_{\xi} \langle \vartheta_{\xi}, \phi \rangle \cdot \vartheta_{\xi} + \bigoplus_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), \phi \rangle \cdot \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} \\ &= \sum_i \langle \phi_i, \phi \rangle \cdot \phi_i + \sum_{\xi} \langle \vartheta_{\xi}, \phi \rangle \cdot \vartheta_{\xi} + \sum_{\varpi} \langle \phi, \mathcal{P}_{\varpi} \rangle \cdot \mathcal{P}_{\varpi}. \end{aligned}\end{equation} Note that this decomposition {\it{does not imply}} that $\phi$ is smooth. That is, while the decomposition $(\ref{SD})$ is convergent in the $L^2$-norm, it is not necessarily convergent in the supremum norm over compact subsets of $\overline{G}({\bf{A}}_F)$. We now review some background on Sobolev norms for later use. In particular, we show how these can be used to bound the spectral coefficients of smooth $L^2$-automorphic forms uniformly in the spectral parameters of the basis forms. We shall later apply these to an auxiliary family of Poincar\'e series for our arguments.
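The distinction between $L^2$-convergence and uniform convergence emphasized here can be illustrated in the elementary toy setting of classical Fourier series (a loose analogue, not part of the argument): the partial sums of the square wave $\operatorname{sgn}(\sin x)$ converge in $L^2$, yet their supremum-norm error stays bounded away from zero near the jump (the Gibbs phenomenon).

```python
import math

def square_wave_partial_sum(x, N):
    # N-th Fourier partial sum of sign(sin x): (4/pi) * sum over odd k <= N of sin(kx)/k
    return (4.0 / math.pi) * sum(math.sin(k * x) / k for k in range(1, N + 1, 2))

# Sample on (0, pi), where sign(sin x) = 1.
xs = [i * math.pi / 2000 for i in range(1, 2000)]
vals = [square_wave_partial_sum(x, 399) for x in xs]

# The L^2 error on (0, pi) is small (and shrinks as N grows) ...
l2_err = math.sqrt(sum((v - 1.0) ** 2 for v in vals) * (math.pi / 2000))
# ... but the sup-norm error does not tend to 0 with N.
sup_err = max(abs(v - 1.0) for v in vals)
```

Here the $L^2$ error is already an order of magnitude smaller than the supremum-norm error at $N = 399$.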
Recall that the action of $\operatorname{GL}_2(F_{\infty})$ on the space $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ induces an action of its Lie algebra $\mathfrak{gl}_2$, and hence of the Lie subalgebra $\mathfrak{g} = \mathfrak{sl}_2$. Writing $e_j = (0, \ldots, 1, \ldots, 0)$ with entry $1$ at the $j$-th component for each index $1 \leq j \leq d$, this action is generated by the linearly independent vectors \begin{align*} H_j &= \left( \begin{array} {cc} e_j & 0 \\ 0 & - e_j \end{array}\right), \quad R_j = \left( \begin{array} {cc} 0 & e_j \\ 0 & 0 \end{array}\right), \quad L_j = \left( \begin{array} {cc} 0 & 0 \\ e_j & 0 \end{array}\right). \end{align*} The universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ also acts on $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ via differential operators. Writing $\vert \vert \phi \vert \vert = \langle \phi, \phi \rangle^{\frac{1}{2}}$ to denote the $L^2$-norm of $\phi \in L^2( \operatorname{GL}_2(F) \backslash \overline{G} ( {\bf{A}}_F), \omega)$, we define for each integer $B \geq 1$ the corresponding Sobolev norm $\vert \vert \cdot \vert \vert_B$ on any $L^2$-automorphic form $\phi \in L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ by \begin{align*} \vert \vert \phi \vert \vert_B &= \sum_{\operatorname{ord}(\mathcal{D}) \leq B} \vert \vert \mathcal{D} \phi \vert \vert, \end{align*} where the sum runs over all monomials $\mathcal{D}$ in the $H_{j_1}$, $R_{j_2}$, and $L_{j_3}$ of degree at most $B$. Note that this sum need not be finite for general $\phi$.
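For a single archimedean place (so $d = 1$, dropping the index $j$), these generators form the standard $\mathfrak{sl}_2$-triple, and the commutation relations $[H, R] = 2R$, $[H, L] = -2L$, $[R, L] = H$ can be spot-checked by direct matrix computation (a toy verification, not part of the argument):

```python
def mat_mul(A, B):
    # 2x2 integer matrix multiplication
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def bracket(A, B):
    # Lie bracket [A, B] = AB - BA
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def scale(t, A):
    return [[t * A[i][j] for j in range(2)] for i in range(2)]

# The standard sl_2 triple (the d = 1 case of H_j, R_j, L_j above)
H = [[1, 0], [0, -1]]
R = [[0, 1], [0, 0]]
L = [[0, 0], [1, 0]]
```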
Suppose now that $\phi \in L^2 \left( \operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega \right)$ is any smooth (genuine) $L^2$-automorphic form on $\overline{G}({\bf{A}}_F)$ of central character $\omega$, decomposed spectrally with respect to our fixed orthonormal basis $\mathcal{B}$ as \begin{equation}\begin{aligned}\label{SDPsi} \phi &= \sum_i \langle \phi_i, \phi \rangle \cdot \phi_i + \sum_{\xi} \langle \vartheta_{\xi}, \phi \rangle \cdot \vartheta_{\xi} + \bigoplus_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), \phi \rangle \cdot \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} \\ &= \sum_i \langle \phi_i, \phi \rangle \cdot \phi_i + \sum_{\xi} \langle \vartheta_{\xi}, \phi \rangle \cdot \vartheta_{\xi} + \sum_{\varpi} \langle \phi, \mathcal{P}_{\varpi} \rangle \cdot \mathcal{P}_{\varpi}. \end{aligned}\end{equation} Since $\phi$ is smooth, its Sobolev norms $\vert \vert \phi \vert \vert_B$ are finite for all integers $B \geq 1$. The significance of this observation is that each differential operator $\mathcal{D} \in \mathcal{U}(\mathfrak{g})$ is compatible with the spectral decomposition $(\ref{SDPsi})$ in the sense that after applying $\mathcal{D}$ and taking $L^2$-norms, we derive the relation \begin{equation*}\begin{aligned} \vert \vert \mathcal{D} \phi \vert \vert^2 &= \sum_i \langle \phi, \phi_i \rangle^2 \cdot \vert \vert \mathcal{D} \phi_i \vert \vert^2 + \sum_{\xi} \langle \phi, \vartheta_{\xi} \rangle^2 \cdot \vert \vert \mathcal{D} \vartheta_{\xi} \vert \vert^2 + \bigoplus_{\varpi} \left\vert \left\vert \int_{\Re(s) = 1/2} \langle \phi, \mathcal{E}_{\varpi}(*, s) \rangle \cdot \mathcal{D} \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} \right\vert \right\vert^2\\ &= \sum_i \langle \phi, \phi_i \rangle^2 \cdot \vert \vert \mathcal{D} \phi_i \vert \vert^2 + \sum_{\xi} \langle \phi, \vartheta_{\xi} \rangle^2 \cdot \vert \vert \mathcal{D} \vartheta_{\xi} \vert \vert^2 + \sum_{\varpi} \langle \phi, \mathcal{P}_{\varpi} \rangle^2 \cdot \vert \vert \mathcal{D}
\mathcal{P}_{\varpi} \vert \vert^2. \end{aligned}\end{equation*} Fixing an arbitrary integer $B \geq 1$, taking $B$-fold iterates $\Delta^{(B)}$ of the suitable-weight Laplacian operator $\Delta$, and using that $\mathcal{B}$ is an orthonormal basis, we can then bound the spectral coefficients $\langle \phi, \phi_i \rangle$, $\langle \phi, \vartheta_{\xi} \rangle$, and $\langle \phi, \mathcal{E}_{\varpi}(*, s) \rangle$ in terms of the corresponding spectral parameters $\nu_i$, $\nu_{\xi}$, and $\nu_{\varpi, s}$ in the same style as explained in \cite[$\S$ 5, (5.7)]{TeT} or \cite[$\S 2.6$]{VO9} (for instance). That is, we can consider for any integer $B \geq 1$ the application of the $B$-fold iterate $\Delta^{(B)}$ to the spectral decomposition $(\ref{SDPsi})$ of $\phi$ to get \begin{equation*}\begin{aligned} \Delta^{(B)} \phi &= \sum_i \langle \phi, \phi_i \rangle \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{i, j}^2 \right)^B \cdot \phi_i + \sum_{\xi} \langle \phi, \vartheta_{\xi} \rangle \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{\xi, j}^2 \right)^B \cdot \vartheta_{\xi} \\ &+ \bigoplus_{\varpi} \int_{\Re(s) = 1/2} \langle \phi, \mathcal{E}_{\varpi}(*, s) \rangle \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{\varpi, s, j}^2 \right)^B \cdot \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i}.
\end{aligned}\end{equation*} Taking $L^2$-norms then gives us the relation \begin{align*} \vert \vert \Delta^{(B)} \phi \vert \vert^2 &= \sum_i \langle \phi, \phi_i \rangle^2 \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{i, j}^2 \right)^{2B} \cdot \vert \vert \phi_i \vert \vert^2 + \sum_{\xi} \langle \phi, \vartheta_{\xi} \rangle^2 \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{\xi, j}^2 \right)^{2B} \cdot \vert \vert \vartheta_{\xi} \vert \vert^2 \\ &+ \bigoplus_{\varpi} \left\vert \left\vert \int_{\Re(s) = 1/2} \langle \phi, \mathcal{E}_{\varpi}(*, s) \rangle \cdot \prod_{j=1}^d \left( \frac{1}{4} + \nu_{\varpi, s, j}^2 \right)^B \cdot \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} \right\vert \right\vert^2. \end{align*} Since this latter identity holds for an arbitrary integer $B \geq 1$, it is easy to deduce from the Sobolev norm bound $\vert \vert \Delta^{(B)} \phi \vert \vert \ll \vert \vert \phi \vert \vert_{A} \ll 1$ for some integer $A = A(B)$ depending only on the choice of $B \geq 1$ that the spectral coefficients appearing in $(\ref{SDPsi})$ can be bounded for any constant $C \in {\bf{R}}$ as \begin{align}\label{standardSCB} \langle \phi, \phi_i \rangle \ll_C \prod_{j=1}^d \vert \nu_{i, j} \vert^C, \quad \langle \phi, \vartheta_{\xi} \rangle \ll_C \prod_{j=1}^d \vert \nu_{\xi, j} \vert^C, \quad \langle \phi, \mathcal{E}_{\varpi}(*, s) \rangle\vert_{\Re(s) = 1/2} \ll_C \prod_{j=1}^d \vert (\nu_{\varpi, s, j})_{\Re(s) = 1/2} \vert^C. \end{align} We shall apply this observation later on to the smooth Poincar\'e series introduced above, i.e.~after showing that we can decompose the mirabolic forms $\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q$ and $\mathbb P^n_1 \varphi$ (used to derive relevant integral presentations) into linear combinations of such Poincar\'e series as the first step to bounding the sums $(\ref{ssum})$ and $(\ref{ssum2})$.
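The mechanism behind the bound $(\ref{standardSCB})$ — smoothness forces spectral coefficients to decay faster than any fixed polynomial in the spectral parameter — has an elementary Fourier-series analogue (a toy model, not part of the argument), where the classical Laplacian $-d^2/dx^2$ on the circle plays the role of the iterates $\Delta^{(B)}$: for a smooth periodic function, $\vert c_n \vert \cdot n^{B}$ remains bounded for every fixed power $B$.

```python
import math

def fourier_coeff(f, n, grid=2000):
    # n-th Fourier cosine coefficient c_n = (1/pi) \int_0^{2 pi} f(x) cos(nx) dx,
    # computed by the (spectrally accurate) rectangle rule on the periodic interval
    h = 2.0 * math.pi / grid
    return (h / math.pi) * sum(f(k * h) * math.cos(n * k * h) for k in range(grid))

# A smooth periodic test function; its coefficients decay superpolynomially.
f = lambda x: math.exp(math.cos(x))

c0 = fourier_coeff(f, 0)    # here c_0 = 2 I_0(1) ~ 2.532 (modified Bessel)
c10 = abs(fourier_coeff(f, 10))  # already far below 10^{-6}
```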
\subsubsection{Bounds for archimedean Whittaker functions} Let us now record some general bounds for the classical Whittaker function $W_{p, \nu}(y)$ for $y \rightarrow 0$, following the discussion in \cite[$\S$ 7]{TeT}. Thus for complex numbers $p, \nu \in {\bf{C}}$, we consider the classical Whittaker function $W_{p, \nu}(y)$ on $y \in {\bf{R}}_{>0}$, which according to \cite[7.621-11]{GrRy} can be described via the integral presentation \begin{align*} \int_0^{\infty} e^{- \frac{y}{2}}W_{p, \nu}(y) y^s \frac{dy}{y} &= \frac{ \Gamma \left( \frac{1}{2} + s + \nu \right) \Gamma \left( \frac{1}{2} + s - \nu \right)} { \Gamma \left( s + 1 -p \right)}, \quad \Re(s) > \vert \Re(\nu) \vert - \frac{1}{2}. \end{align*} Taking the inverse Mellin transform then gives us the usable contour integral presentation \begin{align*} W_{p, \nu}(y) &= e^{\frac{y}{2}} \int_{\Re(s) = \sigma} \frac{ \Gamma \left( \frac{1}{2} + s + \nu \right) \Gamma \left( \frac{1}{2} + s - \nu \right)}{ \Gamma \left( s + 1 -p \right)} y^{-s} \frac{ds}{2 \pi i} \end{align*} for any admissible real number $\sigma >0$. We remark that one can consider other normalizations of this Whittaker function $W_{p, \nu}(y)$. For instance, as explained in \cite[$\S$ 2.4]{BH10}, if we fix an integer $k$ and suppose \begin{align*} \nu \in \begin{cases} (\frac{1}{2} + {\bf{Z}}) \cup i {\bf{R}} \cup (-\frac{1}{2}, \frac{1}{2}) &\text{ if $k \equiv 0 \bmod 2$} \\ {\bf{Z}} \cup i {\bf{R}} &\text{ if $k \equiv 1 \bmod 2$} \end{cases}, \end{align*} then the normalized Whittaker function defined by \begin{align*} W_{\frac{k}{2}, \nu}^{\star}(y) &= \frac{ i^{ \operatorname{sgn}(y) \frac{k}{2} } W_{ \operatorname{sgn}(y) \frac{k}{2}, \nu }(4 \pi \vert y \vert) } { \left\vert \Gamma \left( \frac{1}{2} - \nu + \operatorname{sgn}(y) \frac{k}{2} \right) \Gamma \left( \frac{1}{2} + \nu + \operatorname{sgn}(y) \frac{k}{2} \right) \right\vert^{ \frac{1}{2} } } \end{align*} has some natural interpretation.
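As a quick numerical sanity check on the Mellin pair recorded above (a toy verification, not part of the argument), note that at the degenerate parameters $(p, \nu) = (0, \tfrac{1}{2})$ one has $W_{0, 1/2}(y) = e^{-y/2}$, so the integral identity reduces to $\int_0^{\infty} e^{-y} y^{s} \frac{dy}{y} = \Gamma(s)$, which can be checked directly:

```python
import math

def mellin_lhs(s, Y=80.0, n=20000):
    # Truncated Riemann sum for \int_0^Y e^{-y/2} W_{0,1/2}(y) y^{s-1} dy,
    # using the degenerate special value W_{0,1/2}(y) = e^{-y/2}
    h = Y / n
    return sum(math.exp(-i * h) * (i * h) ** (s - 1.0) for i in range(1, n + 1)) * h

def mellin_rhs(s, p=0.0, nu=0.5):
    # Gamma(1/2 + s + nu) Gamma(1/2 + s - nu) / Gamma(s + 1 - p)
    return math.gamma(0.5 + s + nu) * math.gamma(0.5 + s - nu) / math.gamma(s + 1.0 - p)
```

At $s = 2$ both sides equal $\Gamma(2) = 1$, and at $s = 3$ both sides equal $\Gamma(3) = 2$, up to quadrature error.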
Namely, a theorem derived in \cite[$\S$ 4]{BM} shows that these functions furnish an orthonormal basis of the Hilbert space $L^2({\bf{R}}^{\times}, d^{\times}y)$ of square integrable functions on ${\bf{R}}^{\times}$ with respect to the Haar measure $d^{\times} y = dy /\vert y \vert$: For each choice of parity $\mu \in \lbrace 0 ,1 \rbrace$, we have the decomposition \begin{align*} L^2({\bf{R}}^{\times}, d^{\times}y) &= \bigoplus_{k \in {\bf{Z}} \atop k \equiv \mu \bmod 2} W_{\frac{k}{2}, \nu}^{\star} {\bf{C}}, \quad \langle W_{\frac{k_1}{2}, \nu}^{\star}, W_{\frac{k_2}{2}, \nu}^{\star} \rangle = \begin{cases} 1 &\text{if $k_1 = k_2$} \\ 0 &\text{otherwise}. \end{cases} \end{align*} In any case, these Whittaker functions $W_{p, \nu}$ and $W_{\frac{k}{2}, \nu}^{\star}$ are relevant to the setting of totally real fields we consider here, for if $\phi$ is a smooth, $\mathcal{Z}$-finite automorphic form on $\operatorname{GL}_2({\bf{A}}_F)$ of weight $p = (p_j)_{j=1}^d$ and spectral parameter $\nu = (\nu_j)_{j=1}^d$, then its archimedean Whittaker function $W_{\phi}$ is given by the product \begin{align*} W_{\phi}(y_{\infty}) &= \prod_{j=1}^d W_{p_j, \nu_j} (y_{\infty, j}), \quad y_{\infty} = (y_{\infty, j})_{j=1}^d \in F_{\infty}^{\times} \cong ({\bf{R}}^{\times})^d. \end{align*} In particular, the archimedean Whittaker functions that appear in the spectral decompositions in terms of a smooth orthonormal basis of $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ will always be proportional to such a $d$-fold product of these classical Whittaker functions. We therefore record the following uniform bounds for later use. \begin{proposition}\label{Whittaker} We have the following uniform bounds for the Whittaker function $W_{p, \nu}(y)$ as $y \rightarrow 0$.
For any $\varepsilon >0$, there exists in each case a constant $A>0$ for which the following estimates hold: \\ \begin{itemize} \item[(i)] Given parameters $p, r \in {\bf{R}}$, we have \begin{align*} \frac{W_{p, ir}(y)}{\Gamma\left( \frac{1}{2} + ir + p \right) } \ll_{\varepsilon} \left( \vert p \vert + \vert r \vert + 1 \right)^A y^{\frac{1}{2}-\varepsilon}. \end{align*} \item[(ii)] Given parameters $p, \nu \in {\bf{R}}$ with $0 < \nu < \frac{1}{2}$, we have \begin{align*} \frac{W_{p, \nu}(y)}{ \Gamma \left( \frac{1}{2} + p \right) } &\ll_{\varepsilon} \left( \vert p \vert +1 \right)^A y^{\frac{1}{2} - \nu - \varepsilon}. \end{align*} \item[(iii)] Given parameters $p, \nu \in {\bf{R}}$ such that $p - \nu - \frac{1}{2} \in {\bf{Z}}_{\geq 0}$ is a nonnegative integer and $\nu > - \frac{1}{2} + \varepsilon$, we have \begin{align*} \frac{ W_{p, \nu}(y) }{ \left\vert \Gamma \left( \frac{1}{2} - \nu + p \right) \Gamma \left( \frac{1}{2} + \nu + p \right) \right\vert^{\frac{1}{2}} } &\ll_{\varepsilon} \left( \vert p \vert + \vert \nu \vert + 1 \right)^A y^{\frac{1}{2}- \varepsilon}. \\ \end{align*} \end{itemize} \end{proposition} \begin{proof} See \cite[Proposition 3.1, $\S$ 7]{TeT} for a proof of the stated estimates in the region $0 < y < 1$, i.e.~for $y \rightarrow 0$. \end{proof} \subsubsection{Bounds for the shifted convolution sum $(\ref{ssum})$} We begin with the observation that the sum $(\ref{ssum})$ can be described as $\vert Y_{\infty} \vert^{\frac{1}{4}}$ times the Fourier-Whittaker coefficient at $\alpha$ of a certain genuine metaplectic form. \begin{proposition}\label{metaIP} Fix $\Pi = \otimes_v \Pi_v$ an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with unitary central character. Let $W$ be any smooth function on ${\bf{R}}_{>0}$ of moderate decay near zero and rapid decay near infinity, which is either square integrable or else compactly supported, and whose derivatives are bounded as $W^{(i)} \ll 1$ for all $i \geq 0$.
Let $Y_{\infty} = (Y_{\infty, j})_{j=1}^d \in F_{\infty}^{\times}$ be any totally positive archimedean idele with norm $\vert Y_{\infty} \vert = Y >1$, contained strictly within our fixed fundamental domain of Theorem \ref{extension} (i). Let $\alpha \in \mathcal{O}_F$ be any $F$-integer. Let $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ be a pure tensor whose nonarchimedean local components are all essential Whittaker vectors, and whose archimedean local component is chosen so that the corresponding Whittaker coefficient is given as a function of $y_{\infty} \in F_{\infty}^{\times}$ by \begin{align*} W_{\varphi}(y_{\infty}) = W_{\varphi} \left( \left( \begin{array}{cc} y_{\infty} & ~\\ ~& {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-2}{2}} \psi(- i y_{\infty} ) \psi \left( i \cdot \frac{ \alpha }{Y_{\infty}} \right) W\left( \vert y_{\infty} \vert \right). \end{align*} Then, we have for any $y_{\infty} \in F_{\infty}^{\times}$ the unipotent integral relation \begin{align*} \int_{ I \cong [0,1]^d \subset F_{\infty} } &\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty} ) dx_{\infty} = \left\vert y_{\infty} \right\vert^{\frac{1}{4}} \sum_{a \in \mathcal{O}_F} \frac{c_{\Pi}(Q(a) + \alpha)}{ \vert Q(a) + \alpha \vert^{\frac{1}{2}}} W \left( \vert (Q(a) + \alpha) Y_{\infty} \vert \right). 
\end{align*} In particular, taking $y_{\infty} = 1/Y_{\infty}$, we have the unipotent integral relation \begin{equation}\begin{aligned}\label{ssumIP} \vert Y_{\infty} \vert^{\frac{1}{4}} \int_{ I \cong [0,1]^d \subset F_{\infty} } &\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array} {cc} \frac{1}{Y_{\infty}} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty} ) dx_{\infty} &= \sum_{a \in \mathcal{O}_F} \frac{c_{\Pi}(Q(a) + \alpha)}{ \vert Q(a) + \alpha \vert^{\frac{1}{2}}} W \left( \frac{\vert Q(a) + \alpha \vert }{\vert Y_{\infty} \vert } \right). \end{aligned}\end{equation} Moreover, if the archimedean local component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_v$ here is chosen so that the corresponding Whittaker coefficient is given as a function of $y_{\infty} \in F_{\infty}^{\times}$ by \begin{align*} W_{\varphi}(y_{\infty}) = W_{\varphi} \left( \left( \begin{array}{cc} y_{\infty} & ~\\ ~& {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-2}{2}} \psi(- i \vert y_{\infty} \vert ) \psi \left( i \cdot \alpha Y_{\infty} \right) W\left( \vert y_{\infty} \vert \right). \end{align*} Note that this function satisfies the condition $W_{\varphi}(y_{\infty}) = W_{\varphi}(-y_{\infty})$ for all $y_{\infty} \in F_{\infty}^{\times}$, and so is admissible for Proposition \ref{inversion}. Then, for any $F$-integer $\alpha \in \mathcal{O}_F$, we have the relation \begin{align*} \int_{ I \cong [0,1]^d \subset F_{\infty} } &\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q \left( \left( \begin{array} {cc} Y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty} ) dx_{\infty} = \vert Y_{\infty} \vert^{\frac{1}{4}} \sum_{a \in \mathcal{O}_F} \frac{c_{\Pi}(Q(a) + \alpha)}{ \vert Q(a) + \alpha \vert^{\frac{1}{2}}} W \left( \vert (Q(a) + \alpha) Y_{\infty} \vert \right). 
\end{align*} \end{proposition} \begin{proof} We open up Fourier-Whittaker expansions and switch the order of summation, using orthogonality of additive characters on the compact abelian group $I \cong [0,1]^d \cong ({\bf{R}}/{\bf{Z}})^d \subset F_{\infty}$ to compute \begin{align*} &\int_{ I \cong [0,1]^d \subset F_{\infty} } \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \overline{\theta}_Q \left( \left( \begin{array} {cc} y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty} \\ &= \left\vert y_{\infty} \right\vert^{\frac{1}{4} - \left( \frac{n-2}{2} \right) } \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma y_{\infty} & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \sum_{a \in \mathcal{O}_F} \psi \left( i \cdot Q(a) y_{\infty} \right) \int_{ I \cong [0,1]^d \subset F_{\infty} } \psi(\gamma x_{\infty} - Q(a) x_{\infty} - \alpha x_{\infty}) dx_{\infty} \\ &= \left\vert y_{\infty} \right\vert^{\frac{1}{4} - \left( \frac{n-2}{2} \right)} \sum_{a \in \mathcal{O}_F} W_{\varphi} \left( \left( \begin{array} {cc} (Q(a) + \alpha) y_{\infty} & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \psi \left( i \cdot Q(a) y_{\infty} \right) \\ &= \left\vert y_{\infty} \right\vert^{\frac{1}{4} - \left( \frac{n-2}{2} \right)} \sum_{a \in \mathcal{O}_F} \rho_{\varphi}( Q(a) + \alpha ) W_{\varphi}((Q(a) + \alpha) y_{\infty}) \psi \left( i \cdot Q(a) y_{\infty} \right) \\ &= \left\vert y_{\infty} \right\vert^{ \frac{1}{4} - \left( \frac{n-2}{2} \right) } \sum_{a \in \mathcal{O}_F} \frac{c_{\Pi}(Q(a) + \alpha)}{ \vert Q(a) + \alpha \vert^{\frac{n-1}{2}} } W_{\varphi}( (Q(a) + \alpha) y_{\infty} ) \psi \left( i \cdot Q(a) y_{\infty} \right). \end{align*} Substituting the chosen archimedean local vector (and hence function $W_{\varphi}(y_{\infty})$), we then derive the first claim.
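The collapsing of the $\gamma$-sum in the second step uses only the orthogonality of additive characters on $I \cong ({\bf{R}}/{\bf{Z}})^d$; for $d = 1$ and integer frequencies this can be spot-checked numerically (a toy check, not part of the proof):

```python
import cmath

def unipotent_integral(gamma, m, n=2048):
    # Riemann sum for \int_0^1 exp(2 pi i (gamma - m) x) dx over a uniform grid;
    # for integer gamma, m this equals 1 if gamma == m and 0 otherwise
    return sum(cmath.exp(2j * cmath.pi * (gamma - m) * k / n) for k in range(n)) / n
```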
To show the second claim, we first note that our choice of archimedean local Whittaker function satisfies the requisite condition of $W_{\varphi}(y_{\infty}) = W_{\varphi}(-y_{\infty})$ for all $y_{\infty} \in F_{\infty}^{\times}$ of Proposition \ref{extension} (ii) above. Thus, using Proposition \ref{extension} and $(\ref{seriesID})$, we may proceed in the same way to compute \begin{align*} &\int_{ I \cong [0,1]^d \subset F_{\infty} } \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array} {cc} Y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \overline{\theta}_Q \left( \left( \begin{array} {cc} Y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty} \\ &= \int_{ I \cong [0,1]^d \subset F_{\infty} } \mathbb P^n_1 \varphi \left( \left( \begin{array} {cc} Y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \overline{\theta}_Q \left( \left( \begin{array} {cc} Y_{\infty} & x_{\infty} \\ & 1 \end{array}\right) \right) \psi(-\alpha x_{\infty}) dx_{\infty} \\ &= \left\vert Y_{\infty} \right\vert^{ \frac{1}{4} - \left( \frac{n-2}{2} \right) } \sum_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array} {cc} \gamma Y_{\infty} & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \sum_{a \in \mathcal{O}_F} \psi \left( i \cdot Q(a) Y_{\infty} \right) \int_{ I \cong [0,1]^d \subset F_{\infty} } \psi(\gamma x_{\infty} - Q(a) x_{\infty} - \alpha x_{\infty}) dx_{\infty} \\ &= \left\vert Y_{\infty} \right\vert^{ \frac{1}{4} - \left( \frac{n-2}{2} \right)} \sum_{a \in \mathcal{O}_F} W_{\varphi} \left( \left( \begin{array} {cc} (Q(a) + \alpha) Y_{\infty} & \\ & {\bf{1}}_{n-1} \end{array}\right) \right) \psi \left( i \cdot Q(a) Y_{\infty} \right) \\ &= \left\vert Y_{\infty} \right\vert^{ \frac{1}{4} - \left( \frac{n-2}{2} \right)} \sum_{a \in \mathcal{O}_F} \rho_{\varphi}( Q(a) + \alpha ) W_{\varphi}((Q(a) + \alpha) Y_{\infty}) \psi \left( i \cdot Q(a) Y_{\infty} \right) \\ &= \left\vert Y_{\infty} \right\vert^{
\frac{1}{4} - \left( \frac{n-2}{2} \right) } \sum_{a \in \mathcal{O}_F} \frac{c_{\Pi}(Q(a) + \alpha)}{ \vert Q(a) + \alpha \vert^{\frac{n-1}{2}} } W_{\varphi}( (Q(a) + \alpha)Y_{\infty} ) \psi \left( i \cdot Q(a) Y_{\infty} \right). \end{align*} Substituting the chosen archimedean local vector (and hence the function $W_{\varphi}(y_{\infty})$), we derive the claim. \end{proof} \begin{theorem} \label{SDSCS} Let $\Pi = \otimes_v \Pi_v$ be an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$. Let $c_{\Pi}$ denote the $L$-function coefficients of $\Pi$, so that the Dirichlet series determined by the Euler product over finite primes of $F$ of the standard $L$-function of $\Pi$ can be written $L(s, \Pi) = \sum_{\mathfrak{n} \subset \mathcal{O}_F} c_{\Pi}(\mathfrak{n}) {\bf{N}} \mathfrak{n}^{-s}$ for $\Re(s) >1$. Let $0 \leq \theta_0 \leq 1/2$ denote the best known approximation of the exponent towards the generalized Ramanujan conjecture for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms, with $\theta_0 = 0$ conjectured, and $\theta_0 = 7/64$ admissible by the theorem of Blomer-Brumley \cite{BB}. Let $0 \leq \sigma_0 \leq 1/4$ denote the best known approximation of the exponent towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, with $\sigma_0 = 0$ conjectured, and $\sigma_0 = 103/512$ admissible by the theorem of Blomer-Harcos \cite{BH10} (using $\theta_0= 7/64$). Let $\alpha$ be any nonzero $F$-integer, and let us also use the same symbol to denote the image of $\alpha$ under the diagonal embedding $\alpha \mapsto (\alpha, \alpha, \cdots) \in {\bf{A}}_F^{\times}$. Finally, let $W$ be any smooth function on ${\bf{R}}_{>0}$ of moderate decay near zero and rapid decay at infinity which is either square integrable or else compactly supported. Assume that $W^{(i)} \ll 1$ for all $i \geq 0$. 
Given $Y_{\infty} \in F_{\infty}^{\times}$ any totally positive archimedean idele with archimedean norm $\vert Y_{\infty} \vert = Y $ which is contained in our fixed fundamental domain of Theorem \ref{extension} (i), we have for any $\varepsilon >0$ the following uniform estimate for the corresponding shifted convolution sum: \begin{align*} \sum\limits_{a \in \mathcal{O}_F / \mathcal{O}_F^{\times}} &\frac{c_{\Pi}(a^2 + \alpha)}{{\bf{N}}(a^2 + \alpha)^{\frac{1}{2}}} W \left( \frac{ {\bf{N}}( a^2 + \alpha) }{ Y } \right) \ll_{\Pi, \varepsilon} Y^{\frac{1}{4}} \cdot \vert \alpha \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ \vert \alpha \vert}{ Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon }. \end{align*} \end{theorem} \begin{proof} Recall that by the integral presentation $(\ref{ssumIP})$ shown in Proposition \ref{metaIP}, we reduce to estimating a factor of $Y^{\frac{1}{4}}$ times the Fourier-Whittaker coefficient \begin{align*} W_{\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) &= \int_{I \cong [0,1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left(\begin{array}{cc} 1 & x_{\infty} \\ ~& 1\end{array} \right) \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) \psi_{\infty}(-\alpha x_{\infty}) dx_{\infty} \end{align*} of the metaplectic mirabolic form $\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q$. On the other hand, recall from the discussion above that we construct a lifting of this form to an $L^2$-automorphic form $\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q$ on the metaplectic cover $\overline{G}({\bf{A}}_F)$.
We now use the well-known and classical fact (see e.g.~\cite[$\S$ 7.4]{Ku}) that Poincar\'e series span the closed subspace $L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ of $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ to decompose $\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q$ into some linear combination of Poincar\'e series $P_{f, \omega, \mu} \in L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$ as \begin{align*} \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot P_{f, \omega, \mu}. \end{align*} Hence, for any $x_{\infty} \in I \cong [0,1]^d \subset F_{\infty}$, it is easy to see from the discussion of Theorem \ref{extension} and Proposition \ref{inversion} above that we have the more explicit decomposition \begin{align*} \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot P_{f, \omega, \mu} \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right), \end{align*} which by the definition of the lifting $\widetilde{\mathbb P^n_1 \varphi}$ is the same as the decomposition \begin{align}\label{MD} \mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot P_{f, \omega, \mu} \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \end{align} for the underlying mirabolic form $\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q$.
Now, observe that $(\ref{MD})$ can be viewed as a decomposition of functions in the Hilbert space $L^2(P_2(F) \backslash P_2({\bf{A}}_F))$ of measurable square integrable functions on $P_2({\bf{A}}_F)$ which are left $P_2(F)$-invariant. As such, we can consider the right regular action on each side of $(\ref{MD})$ by \begin{align*} \left( \begin{array}{cc} Y_{\infty}^{-2} & ~ \\ ~ & 1 \end{array} \right) \in P_2(F_{\infty}) \quad \implies \quad \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ & 1 \end{array} \right) \left( \begin{array}{cc} Y_{\infty}^{-2} & ~ \\ ~ & 1 \end{array} \right) = \left( \begin{array}{cc} Y_{\infty}^{-1} & x_{\infty} \\ ~ & 1 \end{array} \right) \end{align*} to get the corresponding decomposition \begin{align}\label{EMD} \mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} Y_{\infty}^{-1} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot P_{f, \omega, \mu} \left( \left( \begin{array}{cc} Y_{\infty}^{-1} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \end{align} for any $x_{\infty} \in I \cong [0,1]^d \subset F_{\infty}$, hence extending to the region we wish to consider. We now argue as follows that we may pass to Fourier-Whittaker coefficients on each side of $(\ref{EMD})$, i.e.~using that the coefficients in the decomposition are known only to be $l^2$ convergent. Observe that the functions on each side of $(\ref{EMD})$ can be viewed as functions on some domain of the form $\mathcal{J} \subset P_2(F_{\infty}) \subset P_2({\bf{A}})$ determined by a set of the form $J(Y_{\infty}^{-1}) \times I \subset \mathfrak{H}^d$, with $J(Y_{\infty}^{-1})$ a neighbourhood of $Y_{\infty} \in F_{\infty}^{\times}$ not crossing our chosen fundamental domain, and $I \cong [0,1]^d \subset F_{\infty}$.
It is apparent from the discussion above that as functions on such a domain $\mathcal{J}$, the functions on each side of the expansion $(\ref{EMD})$ are continuous, and consequently they admit unique Fourier-Whittaker expansions at each point in $\mathcal{J}$. Since the coefficients in these Fourier-Whittaker expansions are uniquely determined, we deduce that we have the desired decomposition of Fourier-Whittaker coefficients at $\alpha$: \begin{align}\label{MWD} W_{\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot W_{P_{f, \omega, \mu}} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right), \end{align} or equivalently \begin{align*} \int_{I \cong [0,1]^d \subset F_{\infty}} &\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi_{\infty}(-\alpha x_{\infty}) d x_{\infty} \\ &= \sum_f c_f ( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q ) \cdot \int_{I \cong [0,1]^d \subset F_{\infty}} P_{f, \omega, \mu} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi_{\infty}(-\alpha x_{\infty}) d x_{\infty}.
\end{align*} Now, starting with the decomposition $(\ref{MWD})$, we can further decompose each of the Poincar\'e series $P_{f, \omega, \mu}$ spectrally in terms of the fixed orthonormal basis $\mathcal{B}$ described above as \begin{align*} P_{f, \omega, \mu} &= \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \cdot \phi_i + \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \cdot \vartheta_{\xi} + \sum_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \cdot \mathcal{E}_{\varpi}(*, s) \frac{ds}{2 \pi i} \end{align*} to get the corresponding decomposition \begin{equation*}\begin{aligned} W_{\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right) \right) &= \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \cdot W_{\phi_i} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right) \right) \\ &+ \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \cdot W_{\vartheta_{\xi}} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right) \right) \\ &+ \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \cdot W_{\mathcal{E}_{\varpi}(*, s)} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right) \right) \frac{ds}{2 \pi i}, \end{aligned}\end{equation*} which after normalizing the Whittaker coefficients as above then gives us the more explicit decomposition \begin{equation}\begin{aligned}\label{EDD} W_{\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right) \right) &= \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\phi_i}(\alpha)}{\vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_i, \nu_i} \left( \frac{\alpha}{Y_{\infty}} \right) \\ &+ \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\vartheta_{\xi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\xi}, \nu_{\xi}} \left( \frac{\alpha}{Y_{\infty}} \right) \\ &+ \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\mathcal{E}_{\varpi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\varpi}, \nu_{\varpi, s}} \left( \frac{\alpha}{Y_{\infty}} \right) \frac{ds}{2 \pi i}. \end{aligned}\end{equation} Note that the spectral coefficients of the Poincar\'e series can be bounded uniformly in terms of the spectral parameters of the orthonormal basis forms as in $(\ref{standardSCB})$ above. That is, for any choice of real number $C \in {\bf{R}}$ in each case, we have the bounds \begin{equation}\begin{aligned}\label{SCBP} \langle \phi_i, P_{f, \omega, \mu} \rangle \ll_C \prod_{j=1}^d \vert \nu_{i, j} \vert^C, \quad \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \ll_C \prod_{j=1}^d \vert \nu_{\xi, j} \vert^C, \quad \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \vert_{\Re(s) = 1/2} \ll_C \prod_{j=1}^d \vert \nu_{\varpi, s, j} \vert^C. \end{aligned}\end{equation} Taking this setup for granted, we can reduce the task of deriving bounds to the standard argument for $\operatorname{GL}_2({\bf{A}}_F)$ as follows. Let us fix any pair of integers $p, q \geq 1$ such that $1/p + 1/q = 1$.
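Before it is applied, the discrete H\"older inequality for such a fixed pair $(p, q)$ can be illustrated with a minimal numerical sketch; the sequences below are invented toy data standing in for the coefficient data of the expansion, and do not model the actual automorphic coefficients.

```python
def lp_norm(xs, p):
    """l^p norm of a finite real sequence."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

# Invented toy sequences standing in for the coefficients c_f(...) and
# the inner spectral sums; purely illustrative.
a = [0.9, -0.4, 0.25, 0.1, -0.05]
b = [1.3, 0.7, -0.6, 0.2, 0.15]

# Discrete Hoelder: |sum a_i b_i| <= ||a||_p * ||b||_q with 1/p + 1/q = 1.
p, q = 2, 2
lhs = abs(sum(x * y for x, y in zip(a, b)))
rhs = lp_norm(a, p) * lp_norm(b, q)
assert lhs <= rhs + 1e-12
```

Taking $p = q = 2$ recovers the Cauchy-Schwarz case used in the argument.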
Writing $\vert \vert \cdot \vert \vert_p$ to denote the $l^p$ norm and $\vert \vert \cdot \vert \vert_q$ the $l^q$ norm, we deduce from H\"older's inequality that each double sum over coefficients can be bounded above by \begin{align*} \sum_f &c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\phi_i}(\alpha)}{\vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_i, \nu_i} \left( \frac{\alpha}{Y_{\infty}} \right) \\ &\ll \left\vert \left\vert \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \right\vert \right\vert_p \left\vert \left\vert \sum_f \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\phi_i}(\alpha)}{\vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_i, \nu_i} \left( \frac{\alpha}{Y_{\infty}} \right) \right\vert \right\vert_q, \end{align*} \begin{align*} \sum_f &c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\vartheta_{\xi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\xi}, \nu_{\xi}} \left( \frac{\alpha}{Y_{\infty}} \right) \\ &\ll \left\vert \left\vert \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \right\vert \right\vert_p \left\vert \left\vert \sum_f \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\vartheta_{\xi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\xi}, \nu_{\xi}} \left( \frac{\alpha}{Y_{\infty}} \right) \right\vert \right\vert_q, \end{align*} and \begin{align*} \sum_f &c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \sum_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\mathcal{E}_{\varpi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\varpi}, \nu_{\varpi, s}} \left( \frac{\alpha}{Y_{\infty}} \right) \frac{ds}{2 \pi i} \\ &\ll \left\vert \left\vert \sum_f c_f(\widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \right\vert \right\vert_p \left\vert \left\vert \sum_f \sum_{\varpi} \int_{\Re(s) = 1/2} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \cdot \frac{\rho_{\mathcal{E}_{\varpi}}(\alpha)}{ \vert \alpha \vert^{\frac{1}{2}}} \cdot W^{\star}_{k_{\varpi}, \nu_{\varpi, s}} \left( \frac{\alpha}{Y_{\infty}} \right) \frac{ds}{2 \pi i} \right\vert \right\vert_q. \end{align*} Taking $p=q=2$ for instance (so that these follow from a variation of the Cauchy-Schwarz inequality), we argue that we can assume without loss of generality that the spanning set of Poincar\'e series $\lbrace P_{f, \omega, \mu} \rbrace_f$ forms an orthonormal basis of $L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), \omega)$. Taking $l^2$-norms of both sides of $(\ref{MD})$ then implies that \begin{equation*}\begin{aligned} \left\vert \left\vert \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q \right\vert \right\vert_2^2 &= \sum_f \left\vert c_f( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \right\vert^2 \cdot \left\vert \left\vert P_{f, \omega, \mu} \right\vert \right\vert_2^2 = \sum_f \left\vert c_f( \widetilde{\mathbb P^n_1 \varphi} \cdot \overline{\theta}_Q) \right\vert^2 \ll 1, \end{aligned}\end{equation*} so that we can deduce from the Sobolev norm bounds $(\ref{SCBP})$ that our sums over coefficients are bounded as \begin{equation}\begin{aligned}\label{coefficientbounds} &\sum_f \sum_i \langle \phi_i, P_{f, \omega, \mu} \rangle \ll_{\Pi} 1, \quad \sum_f \sum_{\xi} \langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle \ll_{\Pi} 1, \quad \sum_f \sum_{\varpi} \langle \mathcal{E}_{\varpi}(*, s), P_{f, \omega, \mu} \rangle \vert_{\Re(s) = 1/2} \ll_{\Pi} 1. \end{aligned}\end{equation} Let us now return to the expansion $(\ref{EDD})$ above. We first argue that we can ignore the contribution of the residual spectrum.
One way to justify this is through the interpretation of the residual forms $\vartheta_{\xi}$ as residues of metaplectic Eisenstein series $\vartheta_{\xi} = \operatorname{Res}_{s=s_0} \mathcal{E}_{\xi}(s, *)$, so that $\langle \vartheta_{\xi}, P_{f, \omega, \mu} \rangle = \operatorname{Res}_{s=s_0} \langle \mathcal{E}_{\xi}(*, s), P_{f, \omega, \mu} \rangle$ contributes only if the Rankin-Selberg product $\langle \mathcal{E}_{\xi}(*, s), P_{f, \omega, \mu} \rangle$ has a pole, which by the decay properties of $P_{f, \omega, \mu}$ is seen by inspection not to occur. Hence, we deduce that we can drop the contributions of the residual spectrum in the decomposition $(\ref{EDD})$. Putting together the bounds $(\ref{coefficientbounds})$ for the remaining coefficients with the archimedean Whittaker coefficient bounds of Proposition \ref{Whittaker} then allows us to deduce via $(\ref{EDD})$ that \begin{equation*}\begin{aligned} W_{\mathbb P^n_1 \varphi \cdot \overline{\theta}_Q} \left( \left( \begin{array}{cc} \frac{\alpha}{Y_{\infty}} & ~ \\ ~ & 1\end{array} \right)\right) &\ll_{\Pi, \varepsilon} \vert \alpha \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{\vert \alpha \vert }{ Y } \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{aligned}\end{equation*} Note that this latter bound is derived by a variation of the standard arguments for the special case of $n=2$ given in \cite{TeT} and \cite{BH10} (for instance). We then derive the stated bound after multiplying in the necessary factor of $Y^{\frac{1}{4}}$ to account for the weight of the metaplectic theta series $\overline{\theta}_Q$. \end{proof} \subsubsection{Bounds for the shifted convolution sum $(\ref{ssum2})$} We now derive the following bounds for the shifted convolution sum $(\ref{ssum2})$.
Here, we shall present the sum first in terms of the Fourier-Whittaker coefficients of some chosen pure tensor $\varphi = \otimes_v \varphi_v \in V_{\Pi}$, and then argue as in Theorem \ref{SDSCS} that we can decompose the projected mirabolic form $\mathbb P^n_1 \varphi$ into a linear combination of smooth Poincar\'e series $P_f \in L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}})$. Passing to unipotent integrals to derive an expression for the sum $(\ref{ssum2})$, we then open up Fourier-Whittaker coefficients according to $(\ref{classicalPFWE})$ above and apply Poisson summation in the style of the argument of Blomer \cite{VB} to relate to the Fourier-Whittaker coefficient of some genuine metaplectic Eisenstein series. We then decompose spectrally to obtain bounds, so that this argument may be viewed as a variation of Theorem \ref{SDSCS} above. Let us remark that this result (in such generality) does not appear elsewhere in the literature. \begin{theorem}\label{SDSCS2} Let us retain the setup of Theorem \ref{SDSCS}. Hence, we fix $\Pi = \otimes_v \Pi_v$ an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with $L$-function coefficients $c_{\Pi}$. We write $0 \leq \theta_0 \leq 1/2$ to denote the best known approximation to the generalized Ramanujan conjecture for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, and $0 \leq \sigma_0 \leq 1/4$ that towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect. We also fix $W$ a smooth and rapidly decaying weight function on $y \in {\bf{R}}_{>0}$ which is either square integrable or compactly supported, and which satisfies the additional property that $W^{(i)} \ll 1$ for all integers $i \geq 0$.
Let $q(x) = A x^2 + B x + C$ be an irreducible, definite quadratic polynomial with totally positive integer coefficients $A, B, C \in \mathcal{O}_F$ and totally negative discriminant $\Delta = B^2 - 4 A C$. Given $Y_{\infty} \in F_{\infty}^{\times}$ any totally positive archimedean idele of norm $Y = \vert Y_{\infty} \vert >1$ contained in our fundamental domain of Theorem \ref{extension}, we have the uniform bound \begin{align*} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q(r))}{ \vert q(r) \vert^{\frac{1}{2}} } W \left( \frac{\vert q(r) \vert}{Y} \right) \ll_{\Pi, \varepsilon} Y^{\frac{1}{4}} \cdot \vert A \vert \cdot \vert \Delta \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{\vert \Delta \vert }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} \end{theorem} \begin{proof} Let $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ be a pure tensor whose nonarchimedean local components $\varphi_v$ are each essential Whittaker vectors as described above, and whose archimedean component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_v$ is chosen so that \begin{align*} W_{\varphi}(y_{\infty}) := W_{\varphi} \left( \left( \begin{array}{cc} y_{\infty} & ~\\ ~& {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-2}{2}} W(\vert y_{\infty} \vert) \end{align*} as functions of $y_{\infty} \in F_{\infty}^{\times}$.
It is then easy to see from our discussion of Fourier-Whittaker expansions in $(\ref{FWEPexp})$ above that we have for each nonzero $F$-integer $r \in \mathcal{O}_F$ (embedded into ${\bf{A}}_F^{\times}$ diagonally) the relation \begin{align*} W_{\mathbb P^n_1 \varphi} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) &= \int_{I \cong [0,1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & x_{\infty}\\ ~ &1 \end{array} \right) \right) \psi_{\infty}(-q(r) x_{\infty}) dx_{\infty} \\ &= \int_{I \cong [0,1]^d \subset F_{\infty}} \vert Y_{\infty} \vert^{\frac{n-2}{2}} \sum\limits_{\gamma \in F^{\times}} W_{\varphi} \left( \left( \begin{array}{cc} \frac{\gamma}{Y_{\infty}} & ~\\ ~ &{\bf{1}}_{n-1} \end{array} \right) \right) \psi_{\infty}(\gamma x_{\infty} - q(r) x_{\infty}) dx_{\infty} \\ &= \vert Y_{\infty} \vert^{\frac{n-2}{2}} \cdot W_{\varphi} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) = \frac{c_{\Pi}(q(r))}{\vert q(r) \vert^{\frac{1}{2}}} \cdot W \left( \frac{\vert q(r) \vert }{Y} \right). \end{align*} It follows that our shifted convolution sum $(\ref{ssum2})$ can be presented equivalently as \begin{align}\label{Re1} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q(r))}{\vert q(r) \vert^{\frac{1}{2}}} W \left( \frac{\vert q(r) \vert }{Y} \right) &= \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} W_{\mathbb P^n_1 \varphi} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right). \end{align} Now, we argue in the style of Theorem \ref{SDSCS} that $\mathbb P^n_1 \varphi$ can be decomposed into a linear combination of Poincar\'e series on $\operatorname{GL}_2({\bf{A}}_F)$.
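The collapse of the $\gamma$-sum in the display above rests on orthogonality of additive characters with integer frequencies over the unit cube $I$; as a sanity check, here is a minimal numerical analogue for $F = {\bf{Q}}$ (so $d = 1$), with invented, rapidly decaying coefficients $W(\gamma)$ standing in for the Whittaker data.

```python
import cmath
import math

def e(t):
    """Standard additive character e(t) = exp(2 pi i t)."""
    return cmath.exp(2j * math.pi * t)

# Truncated "Fourier expansion" with integer frequencies and invented
# decaying coefficients W[gamma]; the unit-interval integral against
# e(-q x) isolates the single coefficient W[q].
W = {g: 1.0 / (1 + g * g) for g in range(-10, 11)}
q = 3

M = 4096  # quadrature points on [0, 1]; exact here since |g - q| < M
integral = sum(
    sum(W[g] * e(g * (m / M)) for g in W) * e(-q * (m / M)) for m in range(M)
) / M
assert abs(integral - W[q]) < 1e-9
```

The Riemann sum is exact for these integer frequencies because the discrete character sum over $m \bmod M$ vanishes unless $\gamma \equiv q \bmod M$.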
Hence, let us fix a spanning set $\lbrace P_f \rbrace_f = \lbrace P_{f, {\bf{1}}, {\bf{1}}} \rbrace_f$ of Poincar\'e series in $L^2_{\operatorname{cont}}(\operatorname{GL}_2(F) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}})$ associated to decomposable Schwartz functions $f \in \mathcal{S}_2(N_2({\bf{A}}) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}}; \psi')$ as described above. We shall argue as before that we can choose an orthonormal basis here, and that we can parametrize these according to the choice of additive character $\psi'(x) = \psi_{\infty}(lx)$ on $x \in {\bf{A}}_F$, i.e.~for $l \in \mathcal{O}_F$ some nonzero $F$-integer, in which case we shall later write $P_l = P_f$ for simplicity. Again, we use the construction of the lifting $\widetilde{\mathbb P^n_1 \varphi} \in L^2_{\operatorname{cusp}}(\operatorname{GL}_2(F) \backslash \operatorname{GL}_2({\bf{A}}_F), {\bf{1}})$ to argue that we have for any $Y_{\infty} \in F_{\infty}^{\times}$ in the chosen fundamental domain of Theorem \ref{extension} and $x_{\infty} \in I \cong [0, 1]^d \subset F_{\infty}$ the expansion \begin{align*} \widetilde{\mathbb P^n_1 \varphi} \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ &1 \end{array} \right) \right) = \mathbb P^n_1 \varphi \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ &1 \end{array} \right) \right) &= \sum\limits_f c_f(\widetilde{\mathbb P^n_1 \varphi}) \cdot P_f \left( \left( \begin{array}{cc} Y_{\infty} & x_{\infty} \\ ~ &1 \end{array} \right) \right), \end{align*} which we can view as a decomposition of functions in $L^2(P_2(F) \backslash P_2({\bf{A}}_F))$.
Acting on the right by $\left( \begin{array}{cc} Y_{\infty}^{-2} & ~ \\ ~ &1 \end{array} \right)$ then gives us the desired decomposition \begin{align*} \mathbb P^n_1 \varphi \left( \left( \begin{array}{cc} Y_{\infty}^{-1} & x_{\infty} \\ ~ &1 \end{array} \right) \right) &= \sum\limits_f c_f(\widetilde{\mathbb P^n_1 \varphi}) \cdot P_f \left( \left( \begin{array}{cc} Y_{\infty}^{-1} & x_{\infty} \\ ~ &1 \end{array} \right) \right). \end{align*} Hence, decomposing $\mathbb P^n_1 \varphi$ in $(\ref{Re1})$ in this way and passing to unipotent integrals, we obtain the expansion \begin{equation}\begin{aligned}\label{Re2} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q(r))}{\vert q(r) \vert^{\frac{1}{2}}} W \left( \left\vert \frac{q(r)}{Y_{\infty}} \right\vert \right) &= \sum_f c_f( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} W_{P_f} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{(r)} W_{P_l} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right). \end{aligned}\end{equation} Here, we simplify the latter sum as in the previous discussion, writing $(l)$ and $(r)$ respectively to denote the corresponding sums over principal ideals parametrized by $F$-integer generators $l \in \mathcal{O}_F/\mathcal{O}_F^{\times}$ and $r \in \mathcal{O}_F/\mathcal{O}_F^{\times}$.
To proceed in deriving bounds, we first open up the Fourier-Whittaker coefficients of the Poincar\'e series according to $(\ref{classicalPFWE})$ above as \begin{align*} W_{P_l} \left( \left( \begin{array}{cc} \frac{q(r)}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) &= \int_{{\bf{A}}_F/F } f \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \psi_{\infty}(l x - q(r) x) dx + \sum\limits_{c \in \Omega(\Gamma)} \operatorname{Kl}_{\Gamma}(f, q(r) ; c) \cdot \mathcal{F}_{f, q(r), c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right). \end{align*} Observe that the integral defining the first term can be evaluated via orthogonality as \begin{align*} \int_{{\bf{A}}_F/F } f \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \psi_{\infty}(l x - q(r) x) dx &= \begin{cases} f \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) &\text{if $l=q(r)$} \\ 0 &\text{otherwise}. \end{cases} \end{align*} We can then deduce from the rapid decay properties of the Schwartz function $f \bmod N_2(F_{\infty})$ that the corresponding contribution to the sum $(\ref{Re2})$ is negligible, so that it will do to estimate the remaining sum \begin{equation}\begin{aligned}\label{Re3} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{(r)} \sum\limits_{c \in \Omega(\Gamma)} \operatorname{Kl}_{\Gamma}(f, q(r) ; c) \cdot \mathcal{F}_{f, q(r), c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{(r)} \sum\limits_{c \in \Omega(\Gamma)} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \psi_{\infty} \left( \frac{d q(r)}{c} \right) \mathcal{F}_{f, q(r), c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right). \end{aligned}\end{equation} We now partition the $r$-sum in $(\ref{Re3})$ into congruence classes modulo $c$ and apply the Poisson summation formula to the intertwining integral, analogous to the argument given in \cite[$\S 4$]{VB}. To be clear, recall that for $\mathcal{F}$ any Schwartz class function on $x_{\infty} = (x_{\infty, j})_{j=1}^d \in F_{\infty}^{\times}$, we have the summation formula \begin{align*} \sum\limits_{m \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop m \equiv u \bmod c} \mathcal{F}(m) &= \frac{1}{\vert c \vert} \sum\limits_{ h \in \mathcal{O}_F/ \mathcal{O}_F^{\times}} \widehat{\mathcal{F}}\left( \frac{h}{c} \right) \psi_{\infty} \left( \frac{hu}{c} \right) \end{align*} where \begin{align*} \widehat{\mathcal{F}}(x_{\infty}) &= \int_{F_{\infty} \cong {\bf{R}}^d} \mathcal{F}(z_{\infty}) \psi_{\infty}(-x_{\infty} z_{\infty}) dz_{\infty}\end{align*} denotes the Fourier transform.
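The summation formula just stated can be checked numerically in the rational case $F = {\bf{Q}}$ (so $d = 1$) with a Gaussian test function, whose Fourier transform is known in closed form; this is an illustrative sketch with invented parameters, not the actual intertwining integral.

```python
import cmath
import math

def e(t):
    """Standard additive character e(t) = exp(2 pi i t)."""
    return cmath.exp(2j * math.pi * t)

# Gaussian test function F(z) = exp(-pi (z/s)^2), with Fourier transform
# F_hat(x) = s * exp(-pi (s x)^2); parameters here are invented.
s = 2.0
F = lambda z: math.exp(-math.pi * (z / s) ** 2)
F_hat = lambda x: s * math.exp(-math.pi * (s * x) ** 2)

c, u = 5, 2
# Poisson summation over the progression m = u mod c:
#   sum_{m = u mod c} F(m) = (1/c) * sum_h F_hat(h/c) * e(h u / c).
lhs = sum(F(m) for m in range(-200, 201) if m % c == u % c)
rhs = (1 / c) * sum(F_hat(h / c) * e(h * u / c) for h in range(-200, 201))
assert abs(lhs - rhs.real) < 1e-9 and abs(rhs.imag) < 1e-9
```

Both truncated sums converge extremely fast here, so the truncation at $\vert m \vert, \vert h \vert \leq 200$ is far below the tolerance.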
Partitioning the inner $r$-sum in $(\ref{Re3})$ into congruence classes and applying Poisson summation in this way, we obtain \begin{equation}\begin{aligned}\label{Re4} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{(r)} \sum\limits_{c \in \Omega(\Gamma)} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \psi_{\infty} \left( \frac{d q(r)}{c} \right) \mathcal{F}_{f, q(r), c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \sum\limits_{u \bmod c} \sum\limits_{(r) \atop r \equiv u \bmod c } \psi_{\infty} \left( \frac{d q(r)}{c} \right) \mathcal{F}_{f, q(r), c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{1}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \sum\limits_{u \bmod c} \sum\limits_{(h)} \psi_{\infty} \left( \frac{hu}{c} \right) \psi_{\infty} \left( \frac{d q(u)}{c} \right) \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right), \end{aligned}\end{equation} where we write each of the corresponding Fourier transforms as \begin{align*} &\widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) = \int_{F_{\infty}} \mathcal{F}_{f, \frac{q(z_{\infty})}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \psi_{\infty}(-h z_{\infty}) dz_{\infty} \\ &= \int_{ F_{\infty} } \int_{F_{\infty}} f \left( w \cdot \underline{c} \left( \begin{array}{cc} 1 & x_{\infty} \\ ~ &1 \end{array} \right) \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \psi_{\infty} \left( - \frac{q(z_{\infty})}{c} \cdot x_{\infty} \right) \psi_{\infty}(-h z_{\infty}) dx_{\infty } dz_{\infty}.\end{align*} Here again, we write $(h)$ as shorthand to denote the sum over $F$-integers $h \in \mathcal{O}_F / \mathcal{O}_F^{\times}$ corresponding to a sum over principal ideals of $\mathcal{O}_F$. Switching the order of summation in the inner sum of $(\ref{Re4})$, we then derive \begin{equation}\begin{aligned}\label{Re5} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{1}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \sum\limits_{(h)} \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \sum\limits_{u \bmod c} \psi_{\infty} \left( \frac{hu + d q(u)}{c} \right). \end{aligned}\end{equation} We now consider the inner quadratic Gauss sum \begin{align*} \sum\limits_{u \bmod c} \psi_{\infty} \left( \frac{hu + d q(u)}{c} \right) &= \psi_{\infty} \left( \frac{dC}{c} \right) \sum\limits_{u \bmod c} \psi_{\infty} \left( \frac{ d A u^2 + (h + dB)u }{c} \right)\end{align*} in this latter expression $(\ref{Re5})$. Let us first rewrite the inner sum over classes $u \bmod c$ as follows so that the terms in the numerator and denominator of the phase are mutually coprime.
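For orientation, the rational analogue of these quadratic Gauss sums admits a classical closed-form evaluation by completing the square, which can be verified numerically: for odd modulus $c$ with $\gcd(a, c) = 1$ one has $\sum_{u \bmod c} e((au^2 + bu)/c) = \epsilon_c \sqrt{c} \left( \frac{a}{c} \right) e(-\overline{4a} \, b^2/c)$, with $\left( \frac{\cdot}{\cdot} \right)$ the Jacobi symbol. This sketch is illustrative and independent of the normalizations used in the text.

```python
import cmath
import math

def e(t):
    return cmath.exp(2j * math.pi * t)

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd positive n."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def gauss_sum(a, b, c):
    """Direct evaluation of sum_{u mod c} e((a u^2 + b u)/c)."""
    return sum(e((a * u * u + b * u) / c) for u in range(c))

# Closed form for odd c with gcd(a, c) = 1, eps_c = 1 or i according
# to c = 1 or 3 mod 4; bar(4a) is the inverse of 4a mod c.
for (a, b, c) in [(1, 0, 5), (2, 3, 7), (3, 1, 11), (4, 6, 9)]:
    eps = 1 if c % 4 == 1 else 1j
    inv4a = pow(4 * a, -1, c)  # requires Python 3.8+
    closed = eps * math.sqrt(c) * jacobi(a, c) * e(-inv4a * b * b / c)
    assert abs(gauss_sum(a, b, c) - closed) < 1e-9
```

The reduction in the text (extracting $k = \gcd(Ad, c)$ first) serves exactly to reach this coprime situation componentwise at each real embedding.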
Hence, we let $k = \gcd(Ad, c)$ so that we can write $dA = k d' A'$ with $c=kc'$, and write $\overline{k}$ to denote the multiplicative inverse of $k \bmod c'$: \begin{align*} \sum\limits_{u \bmod c} \psi_{\infty} \left( \frac{hu + d q(u)}{c} \right) &= \psi_{\infty} \left( \frac{dC}{c} \right) \sum\limits_{u \bmod c'} \psi_{\infty} \left( \frac{ d' A' u^2 + \overline{k}(h + dB)u }{c'} \right). \end{align*} Using that the archimedean additive character $\psi_{\infty}$ is defined on $z_{\infty} = (z_{\infty, j})_{j=1}^d \in F_{\infty}$ by \begin{align*} \psi_{\infty}(z_{\infty}) &= e \left( \operatorname{Tr}(z_{\infty}) \right) = \exp(2 \pi i \operatorname{Tr}(z_{\infty})), \quad \operatorname{Tr}(z_{\infty}) = \sum_{j=1}^d z_{\infty, j} , \end{align*} we see that \begin{align*} \sum\limits_{u \bmod c'} \psi_{\infty} \left( \frac{ d' A' u^2 + \overline{k}(h + dB)u }{c'} \right) = \sum\limits_{u \bmod c'} e \left( \operatorname{Tr}\left( \frac{ d' A' u^2 + \overline{k}(h + dB)u }{c'} \right) \right).\end{align*} Writing the real embeddings of $F$ as $\sigma_j:F \rightarrow {\bf{R}}$ for $1 \leq j \leq d$, and writing $\alpha_j$ to denote the image of an $F$-integer $\alpha \in \mathcal{O}_F$ under $\sigma_j$, it is then easy to check from distributivity that we have \begin{align*} \sum\limits_{u \bmod c'} \psi_{\infty} \left( \frac{ d' A' u^2 + \overline{k}(h + dB)u }{c'} \right) &= \prod_{j=1}^d \sum\limits_{u_j \bmod c_j'} e \left( \frac{d_j' A_j' u_j^2 + \overline{k}_j (d_j B_j + h_j) u_j}{c_j'} \right).
\end{align*} We can then apply the calculation of \cite[Lemma 7]{VB} to each inner Gauss sum to evaluate the product \begin{align*} &\prod_{j=1}^d \sum\limits_{u_j \bmod c_j'} e \left( \frac{d_j' A_j' u_j^2 + \overline{k}_j (d_j B_j + h_j) u_j}{c_j'} \right) \\ &=\begin{cases} 0 &\text{ if $2 \nmid \vert \overline{k}(d B + h) \vert$} \\ \prod_{j=1}^{[F:{\bf{Q}}]} (1+i) \cdot \sqrt{c_j'} \cdot \left( \frac{c_j'}{d_j' A_j'} \right) \cdot \epsilon_{d_j' A_j'}^{-1} \cdot e \left(- \frac{ \overline{d_j' A_j' k_j^2}(d_j B_j + h_j)^2/4}{c_j'} \right) &\text{ if $2 \mid \vert \overline{k}(d B + h) \vert$}, \end{cases} \end{align*} where $\left( \frac{\cdot}{\cdot} \right)$ denotes the Legendre symbol, and \begin{align*} \epsilon_q &= \begin{cases} 1 &\text{ if $q \equiv 1 \bmod 4$} \\ i &\text { if $q \equiv 3 \bmod 4$}. \end{cases} \end{align*} Substituting this calculation back into $(\ref{Re5})$, interpreting products as norms, we arrive at the expression \begin{equation}\begin{aligned}\label{Re6} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{1}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \sum\limits_{(h)} \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \sum\limits_{u \bmod c} \psi_{\infty} \left( \frac{hu + d q(u)}{c} \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{1}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \psi_{\infty} \left( \frac{al}{c} \right) \\ &\times \sum\limits_{(h)} \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~\\ ~ &1 \end{array} \right) \right) \psi_{\infty} \left( \frac{dC}{c} \right) \sum\limits_{u \bmod c'} \psi_{\infty} \left( \frac{ d' A' u^2 + \overline{k}(h + dB)u }{c'} \right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{(1+i)^{[F:{\bf{Q}}]} \vert c' \vert^{\frac{1}{2}}}{\vert c \vert} \sum\limits_{(h) \atop \vert \overline{k}(dB+h)\vert \equiv 0 \bmod 2 } \psi_{\infty} \left( - \frac{B \overline{A} h /2}{c} \right) \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right)\right) \\ &\times \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \left( \frac{c}{dA} \right) \left( \prod_{j=1}^{[F:{\bf{Q}}]} \epsilon_{d_j A_j}^{-1} \right) \psi_{\infty} \left( \frac{a(l - \overline{A} h^2/4)}{c} \right) \psi_{\infty} \left( \frac{d(C - \overline{A} B^2/4)}{c} \right), \end{aligned}\end{equation} i.e.~after using the elementary calculation \begin{align*} \frac{dC}{c} - \frac{\overline{d' A' k^2}(dB + h)^2/4}{c'} &= \frac{dC - \overline{dA}(dB + h)^2/4}{c} = \frac{dC - d \overline{A}B^2/4 - \overline{A} B h/2 - \overline{dA} h^2/4}{c} \\ &= \frac{d(C - \overline{A} B^2/4)}{c} - \frac{\overline{A}Bh/2}{c} - \frac{a \overline{A} h^2/4}{c} \end{align*} to evaluate \begin{align*} \psi_{\infty} \left( \frac{dC}{c} \right) \psi_{\infty} \left( - \frac{\overline{d' A' k^2}(dB +h)^2/4}{c'} \right) &= \psi_{\infty} \left( \frac{d(C - \overline{A} B^2/4)}{c} \right) \psi_{\infty} \left( - \frac{\overline{A} Bh/2}{c} \right) \psi_{\infty} \left( - \frac{a \overline{A} h^2/4}{c} \right).
\end{align*} Now, switching the order of summation in the inner sum of $(\ref{Re6})$, we arrive at the expression \begin{equation*}\begin{aligned} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{(1+i)^{[F:{\bf{Q}}]} \vert c' \vert^{\frac{1}{2}}}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \left( \frac{c}{dA} \right) \left( \prod_{j=1}^{[F:{\bf{Q}}]} \epsilon_{d_j A_j}^{-1} \right) \psi_{\infty} \left( \frac{a l}{c} \right) \psi_{\infty} \left( \frac{d(C - \overline{A} B^2/4)}{c} \right) \\ &\times \sum\limits_{(h) \atop \vert \overline{k}(dB+h)\vert \equiv 0 \bmod 2 } \psi_{\infty} \left( \frac{-a \overline{A} h^2/4 - B \overline{A}h/2}{c} \right) \widehat{\mathcal{F}}_{f, \frac{q(h)}{c}, c} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right)\right), \end{aligned}\end{equation*} which after opening the Fourier transform in the inner $(h)$-sum is the same as \begin{equation}\begin{aligned}\label{Re7} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{(1+i)^{[F:{\bf{Q}}]} \vert c' \vert^{\frac{1}{2}}}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \left( \frac{c}{dA} \right) \left( \prod_{j=1}^{[F:{\bf{Q}}]} \epsilon_{d_j A_j}^{-1} \right) \psi_{\infty} \left( \frac{a l}{c} \right) \psi_{\infty} \left( \frac{d(C - \overline{A} B^2/4)}{c} \right) \\ &\times \sum\limits_{(h) \atop \vert \overline{k}(dB+h)\vert \equiv 0 \bmod 2 } \psi_{\infty} \left( \frac{-a \overline{A} h^2/4 - B \overline{A}h/2}{c} \right) \int_{{\bf{A}}_F } f \left( w \underline{c} \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) \int_{F_{\infty}} \psi_{\infty} 
\left( \frac{-q(z_{\infty}) x_{\infty} - hc z_{\infty} }{c} \right) d z_{\infty} \, dx. \end{aligned}\end{equation} Note that for each $x_{\infty}$ with $\vert x_{\infty} \vert \neq 0$, we can evaluate the inner integral \begin{align*} \int_{F_{\infty}} \psi_{\infty} \left( \frac{-q(z_{\infty}) x_{\infty} - hc z_{\infty} }{c} \right) d z_{\infty} &= \int_{F_{\infty}} \psi_{\infty} \left( - \left( \frac{A x_{\infty} }{c} \right) z_{\infty}^2 - \left( \frac{ B x_{\infty} + hc}{c} \right) z_{\infty} - \left( \frac{C x_{\infty}}{c} \right) \right) d z_{\infty} \end{align*} in this latter sum $(\ref{Re7})$ as \begin{align*} \int_{F_{\infty}} \psi_{\infty} \left( \frac{-q(z_{\infty}) x_{\infty} - hc z_{\infty} }{c} \right) d z_{\infty} &= \left\vert \frac{c}{2 i A x_{\infty}} \right\vert^{\frac{1}{2}} \cdot \psi_{\infty} \left( \frac{(B^2 - 4 AC)x_{\infty}}{4Ac} \right) \psi_{\infty} \left( \frac{2 B x_{\infty} h c + h^2 c^2}{4 A c x_{\infty}} \right) \end{align*} by the integral formula \begin{align*} \int_{-\infty}^{\infty} e^{- \left( \mathcal{A}(x_{\infty, j}) z_{\infty, j}^2 + \mathcal{B}(x_{\infty, j}) z_{\infty, j} + \mathcal{C}(x_{\infty, j}) \right)} dz_{\infty, j} &= \sqrt{\frac{\pi}{\mathcal{A}(x_{\infty, j})}} \cdot e^{ \frac{ \mathcal{B}(x_{\infty, j})^2 - 4 \mathcal{A}(x_{\infty, j}) \mathcal{C}(x_{\infty, j})}{4 \mathcal{A} (x_{\infty, j})} } \end{align*} applied to each component of $x_{\infty} = (x_{\infty, j})_{j=1}^d \in F_{\infty}$ with \begin{align*} \mathcal{A}(x_{\infty, j}) &= 2 \pi i \left( \frac{A_j x_{\infty, j}}{c_j} \right), \quad \mathcal{B}(x_{\infty, j}) = 2 \pi i \left( \frac{B_j x_{\infty, j} + h_j c_j}{c_j} \right), \quad \text{ and } \quad \mathcal{C}(x_{\infty, j}) = 2 \pi i \left( \frac{C_j x_{\infty, j}}{c_j} \right). 
\end{align*} In this way, we first argue that for some decomposable Schwartz function \begin{align*} f' \in \mathcal{S}\left( N_2({\bf{A}}_F) \backslash \overline{G}({\bf{A}}_F); {\bf{1}}, \psi' \right) \end{align*} modulo $N_2(F_{\infty})$, the sum $(\ref{Re7})$ can be approximated by the simpler expression \begin{equation}\begin{aligned}\label{Re8} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c \in \Omega(\Gamma)} \frac{(1+i)^{[F:{\bf{Q}}]} \vert c' \vert^{\frac{1}{2}}}{\vert c \vert} \sum\limits_{ \gamma = \left( \begin{array}{cc} a & b \\ c & d\end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_c / \Gamma_{\infty} } \left( \frac{c}{dA} \right) \left( \prod_{j=1}^{[F:{\bf{Q}}]} \epsilon_{d_j A_j}^{-1} \right) \psi_{\infty} \left( \frac{a l}{c} \right) \psi_{\infty} \left( \frac{d(C - \overline{A} B^2/4)}{c} \right) \\ &\times \int_{{\bf{A}}_F } f' \left( w \underline{c} \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) \psi_{\infty} \left( \frac{\Delta}{4 A c} \cdot x \right) dx. \end{aligned}\end{equation} Indeed, evaluating the $\vert x_{\infty} \vert \neq 0$ terms in this way, then switching the order of summation between the inner $(h)$-sum and integral in the latter expression $(\ref{Re7})$, we obtain \begin{align*} &\int_{{\bf{A}}_F } f \left( w \underline{c} \left( \begin{array}{cc} 1 & x \\ ~ & 1 \end{array} \right) \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) \\ &\times \left\lbrace \left\vert \frac{c}{2 i A x_{\infty}} \right\vert^{\frac{1}{2}} \sum\limits_{(h) \atop \vert \overline{k}(dB+h)\vert \equiv 0 \bmod 2} \psi_{\infty} \left( \frac{- a \overline{A} h^2/4 - B \overline{A} h/2}{c} \right) \psi_{\infty} \left( \frac{Bhc/2}{cA} \right) \psi_{\infty} \left( \frac{h^2 c^2 }{4 A c x_{\infty}} \right) \right\rbrace \psi_{\infty} \left( \frac{\Delta}{4Ac} \cdot x \right) dx. 
\end{align*} We then argue that the inner $(h)$-sum in this latter expression can be approximated as a Gaussian integral and treated as a constant. Now, we argue that we can further simplify $(\ref{Re8})$ by making a change of variables $c \rightarrow c'' = 4Ac$ to reduce to bounding the expression \begin{equation}\begin{aligned}\label{Re9} &\sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c'' \in \Omega(\Gamma)} \sum\limits_{ \gamma = \left( \begin{array}{cc} a'' & b'' \\ c'' & d'' \end{array} \right) \in \Gamma_{\infty} \backslash \Gamma_{c''} / \Gamma_{\infty} } \left( \frac{c''}{d''} \right) \left( \prod_{j=1}^{[F:{\bf{Q}}]} \epsilon_{d_j''}^{-1} \right) \psi_{\infty} \left( \frac{a'' l}{c''} \right) \psi_{\infty} \left(- \frac{d'' \Delta}{c''} \right) \cdot \mathcal{F}_{f', - \Delta, c''} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right)\right) \\ &= \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \sum\limits_{c'' \in \Omega(\Gamma)} \operatorname{Kl}_{\Gamma'', {\bf{1}}, \mu}(4 Al, -\Delta; c'') \cdot \mathcal{F}_{f', - \Delta, c''} \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right)\right), \end{aligned}\end{equation} where each $\operatorname{Kl}_{\Gamma'', {\bf{1}}, \mu}(4 Al, -\Delta; c'')$ is the corresponding Kloosterman sum associated to the theta multiplier of half-integral weight $\mu$ (as described e.g.~in \cite[$\S$ 2]{Ge} and \cite{Pro} for each component). 
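The Gauss sum evaluation invoked at the start of this computation is the classical one for a modulus divisible by $4$: for $4 \mid c$, $a$ odd and coprime to $c$, and $b$ even, $\sum_{u \bmod c} e((au^2 + bu)/c) = (1+i)\,\epsilon_a^{-1}\sqrt{c}\,\left(\tfrac{c}{a}\right) e\!\left(-\overline{a}(b/2)^2/c\right)$. The following standalone Python sketch (purely illustrative, not part of the argument; the function names are ours) checks this evaluation numerically against a direct summation:

```python
import cmath
from math import gcd, sqrt, pi

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def e(x):
    # The additive character e(x) = exp(2 pi i x).
    return cmath.exp(2j * pi * x)

def gauss_sum(a, b, c):
    # Direct evaluation of sum_{u mod c} e((a u^2 + b u)/c).
    return sum(e((a * u * u + b * u) / c) for u in range(c))

def predicted(a, b, c):
    # (1+i) eps_a^{-1} sqrt(c) (c/a) e(-abar (b/2)^2 / c), valid for
    # 4 | c, a odd with gcd(a, c) = 1, and b even.
    eps = 1 if a % 4 == 1 else 1j
    abar = pow(a, -1, c)  # inverse of a mod c
    return (1 + 1j) * eps**-1 * sqrt(c) * jacobi(c, a) * e(-(abar * (b // 2) ** 2) / c)
```

The linear term is removed by completing the square, $au^2 + bu \equiv a(u + \overline{a}\,b/2)^2 - \overline{a}(b/2)^2 \bmod c$, which is the same manipulation carried out for the $u$-sums above.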
It is then simple to argue from the discussion of Fourier-Whittaker expansions described in $(\ref{classicalPFWE})$ above that for some corresponding family of Poincar\'e series $\mathcal{P}_l$ in $L^2(\operatorname{GL}_2(F) \backslash \overline{G}({\bf{A}}_F), {\bf{1}})$, we reduce to estimating the more natural sum \begin{align}\label{Re10} \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \cdot W_{\mathcal{P}_l} \left( \left( \begin{array}{cc} - \frac{\Delta}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right), \end{align} i.e.~up to a factor of $Y^{\frac{1}{4}}$ to account for the parallel half-integral weight of each series $\mathcal{P}_l$. In this way, using Weyl's law to trace the dependence on the level, we deduce from a variation of the argument given for Theorem \ref{SDSCS} above that we can derive the stated bound \begin{align*} \sum\limits_{r \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q(r))}{\vert q(r) \vert^{\frac{1}{2}}} W \left( \frac{\vert q(r) \vert }{Y} \right) &\ll \sum_{(l)} c_l( \widetilde{\mathbb P^n_1 \varphi}) \cdot W_{\mathcal{P}_l} \left( \left( \begin{array}{cc} - \frac{\Delta}{Y_{\infty}} & ~ \\ ~ & 1 \end{array} \right) \right) \ll_{\Pi, \varepsilon} Y^{\frac{1}{4}} \cdot \vert A \vert \cdot \vert \Delta \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{\vert \Delta \vert }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} \end{proof} \section{Class group twists in the critical strip} We now prove Theorem \ref{SCest}. \subsection{Approximate functional equations} Keep the setup described above, with $\chi = \otimes \chi_w$ a class group character of the fixed totally imaginary quadratic extension $K/F$. We identify $\chi$ with its corresponding idele class character of $K$. 
Notice that the functional equation $(\ref{FE})$ can be written in the asymmetric form \begin{align}\label{fFE} L(s, \Pi \times \pi(\chi)) &= \epsilon(1/2, \Pi \times \pi(\chi)) \left( D_K^n {\bf{N}}\mathfrak{f}(\Pi_K) \right)^{\frac{1}{2} - s} F(s) L(1-s, \widetilde{\Pi} \times \pi(\chi)), \end{align} where $F(s)$ is the quotient of archimedean factors defined in $(\ref{F})$, and the Dirichlet series $L(s, \Pi \times \pi(\chi))$ corresponds to the Euler product over the finite places of $F$. Here, we use the description of Lemma \ref{arch}, which shows that this quotient $F(s)$ does not depend on the choice of class group character $\chi$. Fix $\delta \in {\bf{C}}$ in the critical strip $0 < \Re(\delta) < 1$. Recall that the Dirichlet series expansion of $L(s, \Pi \times \pi(\chi))$ for $\Re(s) > 1$ is given by \begin{align}\label{DSE} L(s, \Pi \times \pi(\chi)) &= \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta (\mathfrak{m})}{{\bf{N}}\mathfrak{m}^{2s}} \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n}) c_{\chi}(\mathfrak{n})}{ {\bf{N}}\mathfrak{n}^s}, \end{align} where each sum is taken over nonzero integral ideals of $F$, $c_{\Pi}$ denotes the $L$-function coefficient of $\Pi$ so that $L(s, \Pi) = \sum_{\mathfrak{n} \subset \mathcal{O}_F} c_{\Pi}(\mathfrak{n}) {\bf{N}} \mathfrak{n}^{-s}$ for $\Re(s) > 1$, and \begin{align*} c_{\chi}(\mathfrak{n}) &= \sum_{A \in \operatorname{Cl}(\mathcal{O}_K)} \chi(A) r_A(\mathfrak{n}) \end{align*} denotes the coefficient of the Hecke $L$-function $L(s, \chi)$. Again, this latter sum is taken over the classes $A$ of the ideal class group $\operatorname{Cl}(\mathcal{O}_K)$ of $K$, and each $r_A(\mathfrak{n})$ counts the number of ideals in the class $A$ whose image under the relative norm homomorphism ${\bf{N}}_{K/F}: K \longrightarrow F$ equals $\mathfrak{n}$. 
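For intuition in the simplest case $F = {\bf Q}$, $K = {\bf Q}(i)$ (class number one, $w_K = 4$), there is a single counting function $r_{\bf 1}(n)$: the number of ideals of ${\bf Z}[i]$ of norm $n$, i.e.~the number of solutions to $a^2 + b^2 = n$ divided by the four units, which equals the Dirichlet coefficient $\sum_{d \mid n} \chi_{-4}(d)$ of $\zeta(s) L(s, \chi_{-4}) = \zeta_{{\bf Q}(i)}(s)$. A small Python sketch (purely illustrative, not part of the argument) checks this numerically:

```python
def chi4(d):
    # The nontrivial Dirichlet character mod 4 attached to Q(i)/Q.
    return {1: 1, 3: -1}.get(d % 4, 0)

def r_ideals(n):
    # Number of ideals of Z[i] of norm n: lattice points on a^2 + b^2 = n,
    # divided by the w_K = 4 units of Z[i].
    reps = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
               if a * a + b * b == n)
    return reps // 4

def r_divisors(n):
    # The Dirichlet coefficient of zeta(s) * L(s, chi4) = zeta_{Q(i)}(s).
    return sum(chi4(d) for d in range(1, n + 1) if n % d == 0)
```

This is exactly the kind of lattice-point parametrization of $r_A(\mathfrak{n})$ by quadratic forms that is used below.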
Choose a smooth and compactly supported test function $f \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$, and let $k_0(s) = \int_0^{\infty} f(y) y^s \frac{dy}{y}$ denote its Mellin transform. Assume that $k_0(0) = 1$. We can and do suppose that the Mellin transform $k_0(s)$ vanishes at any possible poles of $F(s)$ in the region $\frac{1}{2} + \frac{1}{n^2 + 1} < \Re(s) < 1$ (see Corollary \ref{poles}). Let us then define $k(s)$ to be the function of $s \in {\bf{C}}$ defined by \begin{align*} k(s) &= k_0(s) L_{\bf{1}}(4(1 - s - \delta), \overline{\omega}). \end{align*} Writing $F(s)$ again to denote the quotient of archimedean factors defined in $(\ref{F})$, we consider the following functions defined on a real variable $y \in {\bf{R}}_{>0}$: \begin{align}\label{V1} V_1 \left( y \right) &= \int_{\Re(s) = 2} \frac{k(s)}{s} y^{-s} \frac{ds}{2 \pi i} \end{align} and \begin{align}\label{V2} V_2 \left( y \right) = V_{2, \delta} \left( y \right)&= \int_{\Re(s) = 2} \frac{k(-s) F(-s + \delta)}{s} y^{-s} \frac{ds}{2 \pi i}.\end{align} \begin{lemma}\label{AFE} We have the following expression for $L(s, \Pi \times \pi(\chi))$ at any $\delta \in {\bf{C}}$: \begin{align*} L(\delta, \Pi \times \pi(\chi) ) = &\sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n}) c_{\chi} (\mathfrak{n})}{{\bf{N}} \mathfrak{n}^{\delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y} \right) \\ &+ W(\Pi_K) Y^{1 - 2 \delta} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1 - \delta)} } \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\widetilde{\Pi}}(\mathfrak{n}) c_{\chi} (\mathfrak{n})}{{\bf{N}} \mathfrak{n}^{1 - \delta} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y} \right). 
\end{align*} Here, $W(\Pi_K)$ denotes the root number $\epsilon(1/2, \Pi \times \pi(\chi))$, which does not depend on the choice of class group character $\chi$ (see Proposition \ref{root}), and $Y = (D_K^n {\bf{N}} \mathfrak{f}(\Pi_K))^{\frac{1}{2}}$ denotes the square root of the conductor of $L(s, \Pi \times \pi(\chi))$. \end{lemma} \begin{proof} The proof is given by a standard contour argument (see e.g.~\cite[Lemma 3.2]{LRS}). \end{proof} \begin{lemma}\label{RD} The cutoff functions $V_1$ and $V_2 = V_{2, \delta}$ decay as follows. \begin{itemize} \item[(i)] We have that $V_j(y) = O_C(y^{-C})$ for any $C >0$ as $y \rightarrow \infty$ ($j=1,2$), i.e.~for $y > 1$. \item[(ii)] We have that $V_1(y) = O_B(y^B)$ for any $B \geq 1$ as $y \rightarrow 0$, i.e.~for $0 < y \leq 1 $. \item[(iii)] We have that $V_2(y) \ll 1 + O_{\varepsilon}(y^{1 - \Re(\delta) - \delta_0 + \varepsilon })$ as $y \rightarrow 0$, i.e.~for $0 < y \leq 1$, where $$\delta_0 = \max_j \Re(\mu_j) \leq \frac{1}{2} - \frac{1}{n^2 + 1}.$$ \end{itemize} \end{lemma} \begin{proof} The proof is also given by a standard contour argument (see e.g.~\cite[Lemma 3.1]{LRS}). \end{proof} \subsection{Estimates} Again we fix $\delta \in {\bf{C}}$ with $0 < \Re(\delta) < 1$, and write $Y = (D_K^n {\bf{N}} \mathfrak{f}(\Pi_K))^{\frac{1}{2}}$ to denote the square root of the conductor of each $L(s, \Pi \times \pi(\chi))$ in the average $X_{\mathfrak{D}}(\Pi, \delta)$. In the event that $\delta = 1/2$ and $\Pi$ is self-contragredient, we again assume additionally that the root number $W(\Pi_K)$ is not $-1$, so as to exclude the setting of forced vanishing from the functional equation $(\ref{FE})$. 
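The contour arguments behind the two lemmas above rest on truncated Mellin inversion: with the trivial choice $k \equiv 1$ (no smoothing at all, purely for illustration), the cutoff $\int_{(2)} \frac{y^{-s}}{s} \frac{ds}{2\pi i}$ is the sharp indicator function of $0 < y < 1$. The following Python sketch (an ad hoc numerical illustration; the truncation height $T$ and step size are our choices) approximates the truncated line integral and recovers this indicator behavior:

```python
from math import pi

def perron(y, T=1000.0, step=0.02):
    # Trapezoid-rule approximation to (1/(2 pi i)) int_{2-iT}^{2+iT} y^{-s}/s ds,
    # which tends to 1 for 0 < y < 1 and to 0 for y > 1 as T -> infinity.
    n = int(2 * T / step)
    total = 0j
    for k in range(n + 1):
        s = 2 + 1j * (-T + k * step)
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * y ** (-s) / s
    # ds = i dt, so (1/(2 pi i)) * i * int dt = (1/(2 pi)) * int dt.
    return (total * step / (2 * pi)).real
```

The smoothed versions $V_1$ and $V_2$ replace the sharp cutoff by rapidly decaying weights, which is what produces the decay rates recorded in Lemma \ref{RD}.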
Using Lemma \ref{AFE} together with the orthogonality of characters on $\operatorname{Cl}(\mathcal{O}_K)$, it is easy to see that \begin{align*} X_{\mathfrak{D}}(\Pi, \delta) = &\sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n}) r (\mathfrak{n})}{{\bf{N}} \mathfrak{n}^{\delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y} \right) \\ &+ W(\Pi_K) Y^{1 - 2 \delta} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1-\delta)} } \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\widetilde{\Pi}}(\mathfrak{n}) r (\mathfrak{n})}{{\bf{N}} \mathfrak{n}^{1 - \delta} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y} \right), \end{align*} where $r(\mathfrak{n})$ denotes the number of ideals in the principal class ${\bf{1}} \in \operatorname{Cl}(\mathcal{O}_K)$ whose image under the relative norm homomorphism ${\bf{N}}_{K/F}: K \longrightarrow F$ equals $\mathfrak{n} \subset \mathcal{O}_F$. Recall that we take $q_{\bf{1}}(x, y) = a_{\bf{1}} x^2 + b_{\bf{1}} xy + c_{\bf{1}} y^2$ to be the primitive reduced quadratic form representative associated to the principal class ${\bf{1}} \in \operatorname{Cl}(\mathcal{O}_K)$. Hence, using the same notation $\mathfrak{D}$ to denote an $F$-integer representative of the relative discriminant $\mathfrak{D} \subset \mathcal{O}_F$, \begin{align*} q_{\bf{1}}(x, y) &= \begin{cases} x^2 - \frac{\mathfrak{D}}{4} y^2 &\text{ if $\mathfrak{D} \equiv 0 \bmod 4 \mathcal{O}_F$} \\ x^2 + xy+ \left( \frac{1-\mathfrak{D}}{4} \right) y^2 &\text{ if $\mathfrak{D} \equiv 1 \bmod 4 \mathcal{O}_F$}. 
\end{cases} \end{align*} Hence $a_{\bf{1}} = 1$, with $b_{\bf{1}} = 0, 1$ according as to whether $\mathfrak{D} \equiv 0, 1 \bmod 4 \mathcal{O}_F$, and $c_{\bf{1}} = -\mathfrak{D}/4, (1-\mathfrak{D})/4$ according as to whether $\mathfrak{D} \equiv 0, 1 \bmod 4 \mathcal{O}_F$. Expanding out the counting function $r(\mathfrak{n}) = r_{\bf{1}}(\mathfrak{n})$ in terms of the corresponding parametrization $(\ref{rAnqf})$, and again writing $w_K$ to denote the number of roots of unity in $K$ (or equivalently, the number of automorphs of $q_{\bf{1}}(x, y)$), we then obtain the equivalent formula \begin{equation}\begin{aligned}\label{param} &X_{\mathfrak{D}}(\Pi, \delta) = \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum_{ a, b \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop q_{\bf{1}}(a, b) \neq 0} \frac{c_{\Pi}(q_{\bf{1}}(a, b) )}{{\bf{N}}(q_{\bf{1}}(a, b) )^{\delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 (q_{\bf{1}}(a, b) ))}{Y} \right) \\ &+ W(\Pi_K) Y^{1 - 2 \delta} \cdot \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1-\delta)} } \sum_{ a, b \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop q_{\bf{1}}(a, b) \neq 0} \frac{c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b) )}{{\bf{N}}(q_{\bf{1}}(a, b) )^{1 - \delta} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2(q_{\bf{1}}(a, b) ))}{Y} \right). \end{aligned}\end{equation} Here, we write the sums over $F$-integers $a, b \in \mathcal{O}_F$ up to the action of units as $a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}$, so that these double sums are equivalent to double sums over principal ideals in $\mathcal{O}_F$. Let us first estimate the (residual) contributions coming from $b = 0$ in this expression $(\ref{param})$. \begin{lemma}\label{residue} Let $\delta \in {\bf{C}}$ be any complex number in the critical strip $0 < \Re(\delta) < 1$. 
The contribution from the $b=0$ terms in the expression $(\ref{param})$ is estimated for any choices of $\varepsilon>0$ and $C >0$ as \begin{align*} &\frac{1}{w_K} \left( L(2 \delta, \eta \omega) \cdot \frac{L_{\bf{1}}(4(1 - \delta), \overline{\omega})}{L_{\bf{1}}(4 \delta, \omega)} \cdot L^{\star}_{{\bf{1}}}(2 \delta, \operatorname{Sym^2} \Pi) + W(\Pi_K) \cdot Y^{1 - 2 \delta} \cdot F(\delta) \cdot L(2 - 2\delta, \eta \overline{\omega}) \cdot L^{\star}_{\bf{1}}(2 - 2 \delta, \operatorname{Sym^2} \widetilde{\Pi}) \right) \\ &+ O_C(Y^{-C}) . \end{align*} \end{lemma} \begin{proof} Again, we put $Y = (D_K^n {\bf{N}} \mathfrak{f}(\Pi_K))^{\frac{1}{2}}$. We seek to estimate the contribution of the sums \begin{equation}\begin{aligned}\label{squares} &\frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum\limits_{ a \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\Pi}(a^2)}{{\bf{N}} a^{2 \delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right) \\ &+ W(\Pi_K) Y^{1 - 2 \delta} \cdot \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1-\delta)} } \sum\limits_{ a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\widetilde{\Pi}}(a^2)}{{\bf{N}}a^{2(1 - \delta)} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right). 
\end{aligned}\end{equation} Opening up the definition of the cutoff function $V_1$ in the first sum, we have that \begin{align*} \frac{1}{w_K} &\sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum\limits_{ a \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\Pi}(a^2)}{{\bf{N}} a^{2 \delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right) \\ &= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum\limits_{ a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\Pi}(a^2)}{{\bf{N}} a^{2 \delta} } \int_{\Re(s) = 2} \frac{k_0(s) L_{\bf{1}}(4(1 - s - \delta), \overline{\omega})}{s} \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right)^{-s} \frac{ds}{2 \pi i} \\ &= \frac{1}{w_K}\int_{\Re(s) = 2} \frac{k_0(s) L_{\bf{1}}(4(1 - s - \delta), \overline{\omega})}{s} L(2\delta + 2 s, \omega \eta) \frac{L^{\star}_{\bf{1}}(2\delta + 2s, \operatorname{Sym^2} \Pi)}{ L_{\bf{1}}(4\delta + 4s, \omega)} Y^s \frac{ds}{2 \pi i}. \end{align*} Shifting the contour leftward to $\Re(s) = -2$ (say), applying functional equations to each of the Dirichlet series in the contour as required, we cross a simple pole at $s=0$ to arrive at the preliminary estimate \begin{align*} &\frac{1}{w_K} \cdot L(2\delta, \omega \eta) \cdot \frac{L_{\bf{1}}(4(1 - \delta), \overline{\omega})}{L_{\bf{1}}(4 \delta, \omega)} \cdot L^{\star}_{\bf{1}}(2 \delta, \operatorname{Sym^2} \Pi) \\ &+ \frac{1}{w_K} \int_{\Re(s) = -2} \frac{k_0(s) L_{\bf{1}}(4(1 - s - \delta), \overline{\omega})}{s} L(2\delta + 2 s, \omega \eta) \frac{L^{\star}_{\bf{1}}(2\delta + 2s, \operatorname{Sym^2} \Pi)}{ L_{\bf{1}}(4\delta + 4s, \omega)} Y^s \frac{ds}{2 \pi i}. 
\end{align*} To bound the remaining contour in this expression, we then use Stirling's approximation formula with known bounds for the $L$-functions in the contour to deduce that for any constant $C>0$, this is bounded as \begin{align*} \frac{1}{w_K} \cdot L(2\delta, \omega \eta) \cdot \frac{L_{\bf{1}}(4(1 - \delta), \overline{\omega})}{L_{\bf{1}}(4 \delta, \omega)} \cdot L^{\star}_{\bf{1}}(2 \delta, \operatorname{Sym^2} \Pi) + O_C(Y^{-C}). \end{align*} To estimate the second sum in $(\ref{squares})$, we then open up the second cutoff function $V_2 = V_{2, \delta}$ to find \begin{align*} &\frac{1}{w_K}\sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1-\delta)} } \sum_{ a \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop a \neq 0}\frac{c_{\widetilde{\Pi}}(a^2)}{{\bf{N}}a^{2(1 - \delta)} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right) \\ &= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{ \overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2(1-\delta)} } \sum_{ a \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\widetilde{\Pi}}(a^2)}{{\bf{N}}a^{2(1 - \delta)} } \int_{\Re(s) = 2} \frac{k(-s) F(-s + \delta)}{s} \left( \frac{ {\bf{N}}(\mathfrak{m}^2 a^2)}{Y} \right)^{-s} \frac{ds}{2 \pi i} \\ &= \frac{1}{w_K} \int_{\Re(s) = 2} \frac{k(-s) F(-s + \delta)}{s} L(2(1 - \delta) + 2s, \overline{\omega} \eta) \frac{L^{\star}_{\bf{1}}(2(1 - \delta) + 2s, \operatorname{Sym^2} \widetilde{\Pi})}{L_{\bf{1}}(4(1 - \delta) + 4s, \overline{\omega})} Y^s \frac{ds}{2 \pi i} \\ &= \frac{1}{w_K} \int_{\Re(s) = 2} \frac{k_0(-s) F(-s + \delta)}{s} L(2(1 - \delta) + 2s, \overline{\omega} \eta) L^{\star}_{\bf{1}}(2(1 - \delta) + 2s, \operatorname{Sym^2} \widetilde{\Pi}) Y^s \frac{ds}{2 \pi i}, \end{align*} which after shifting the contour to $\Re(s) = -C$ for some suitable $C >0$ (using our choice of $k(s)$) becomes \begin{align*} &\frac{1}{w_K} \cdot F(\delta) \cdot L(2(1 - \delta), \overline{\omega} \eta) \cdot 
L^{\star}_{\bf{1}}(2(1 - \delta), \operatorname{Sym^2} \widetilde{\Pi}) \\ &+ \frac{1}{w_K} \int_{\Re(s) = -C} \frac{k_0(-s) F(-s + \delta)}{s} L(2(1 - \delta) + 2s, \overline{\omega} \eta) L^{\star}_{\bf{1}}(2(1 - \delta) + 2s, \operatorname{Sym^2} \widetilde{\Pi}) Y^s \frac{ds}{2 \pi i}.\end{align*} Applying the functional equation to the Dirichlet series $L(2(1-\delta ) +2s, \overline{\omega} \eta)$ as needed, the latter contour can still be bounded as $O_{C}(Y^{-C})$ for any $C >0$. Taking $C \gg 1 - 2 \Re(\delta)$ then implies the claimed estimate. \end{proof} Let us now consider the remaining $b \neq 0$ contributions to the average $X_{\mathfrak{D}}(\Pi, \delta)$. It is easy to see from the rapid decay properties of the cutoff functions $V_j$ (for $j=1,2$) that it will suffice in any case to estimate the truncated sum defined by \begin{equation}\begin{aligned}\label{CGTTS} X_{\mathfrak{D}}^{\dagger}(\Pi, \delta) &:= \frac{1}{w_K} \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{2 \delta} } \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}, b \neq 0 \atop {\bf{N}}(\mathfrak{m}^2 q_{\bf{1}}(a, b)) \leq Y} \frac{ c_{\Pi}(q_{\bf{1}}(a, b))}{ {\bf{N}}q_{\bf{1}}(a, b)^{\delta} } V_1 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 q_{\bf{1}}(a, b)) }{Y} \right) \\ &+ W(\Pi_K) \cdot Y^{1 - 2 \delta} \cdot \frac{1}{w_K} \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{2 (1-\delta)} } \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}, b \neq 0 \atop {\bf{N}}(\mathfrak{m}^2 q_{\bf{1}}(a, b)) \leq Y} \frac{ c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b))}{ {\bf{N}}q_{\bf{1}}(a, b)^{1-\delta} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 q_{\bf{1}}(a, b)) }{Y} \right). \end{aligned}\end{equation} \begin{proposition}\label{CGTOD} We have for each $F$-integer $b \neq 0$ in the sum $(\ref{CGTTS})$ above the following uniform estimates. 
\begin{itemize} \item[(i)] For $W_1 \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$ any smooth and rapidly decaying function with compact support satisfying $W_1^{(i)} \ll 1$ for all $i \geq 0$, and for any choice of $\varepsilon >0$, \begin{align*} \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(q_{\bf{1}}(a, b))}{ {\bf{N}}q_{\bf{1}}(a, b)^{\delta} } W_1 \left( \frac{ {\bf{N}}q_{\bf{1}}(a, b) }{Y} \right) \ll_{\Pi, \varepsilon} Y^{\frac{3}{4} - \Re(\delta)} \cdot {\bf{N}}(b^2 \mathfrak{D})^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}(b^2 \mathfrak{D}) }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} \item[(ii)] For $W_2 \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$ any smooth and rapidly decaying function with compact support satisfying $W_2^{(i)} \ll 1$ for all $i \geq 0$, and for any choice of $\varepsilon >0$, \begin{align*} \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b))}{ {\bf{N}}q_{\bf{1}}(a, b)^{1-\delta} } W_2 \left( \frac{ {\bf{N}}q_{\bf{1}}(a, b) }{Y} \right) \ll_{\Pi, \varepsilon} Y^{\Re(\delta)- \frac{1}{4} } \cdot {\bf{N}}(b^2 \mathfrak{D})^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}(b^2 \mathfrak{D}) }{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align*} \end{itemize} \end{proposition} \begin{proof} We divide into cases on $\mathfrak{D} \equiv 0, 1 \bmod 4 \mathcal{O}_F$. Let us suppose first that $\mathfrak{D} \equiv 0 \bmod 4 \mathcal{O}_F$, so that we have $q_{\bf{1}}(a, b) = a^2 + c_{\bf{1}} b^2 = a^2 - \frac{\mathfrak{D}}{4} b^2$ in the notations described above. Let us then put $\alpha = - \frac{\mathfrak{D}}{4} b^2$. We can deduce (i) and (ii) from the following variations of the proof of Theorem \ref{SDSCS}. 
For (i), we fix $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ a pure tensor whose nonarchimedean local components are each essential Whittaker vectors, but whose archimedean local component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_{\infty}$ is chosen so that \begin{align*} W_{\varphi}\left( \left( \begin{array}{cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-1}{2} - \delta} \cdot \psi \left( -i\vert y_{\infty} \vert \right) \cdot \psi \left( i \cdot \frac{\alpha}{Y_{\infty}} \right) \cdot W_1 \left( \vert y_{\infty} \vert \right) \end{align*} as functions of $y_{\infty} \in F_{\infty}^{\times}$. It is then easy to deduce from the same calculation given in Proposition \ref{metaIP} above that we have the integral presentation \begin{align*} Y^{\frac{3}{4} - \delta} \int_{I \cong [0,1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi(- \alpha x_{\infty}) d x_{\infty} &= \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(a^2 + \alpha) }{ {\bf{N}}(a^2 + \alpha)^{\delta} } W_1 \left( \frac{ {\bf{N}}(a^2 + \alpha) }{Y} \right). \end{align*} The argument of Theorem \ref{SDSCS} then implies the stated bound \begin{align}\label{TS1SCS} \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\Pi}(a^2 + \alpha) }{ {\bf{N}}(a^2 + \alpha)^{\delta} } W_1 \left( \frac{ {\bf{N}}(a^2 + \alpha) }{Y} \right) &\ll_{\Pi, \varepsilon} Y^{\frac{3}{4} - \Re(\delta)} \cdot \vert \alpha \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{\vert \alpha \vert}{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. 
\end{align} Similarly for (ii), we fix $\varphi = \otimes_v \varphi_v \in V_{\widetilde{\Pi}}$ a pure tensor whose nonarchimedean local components are each essential Whittaker vectors, but whose archimedean local component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_{\infty}$ is chosen so that \begin{align*} W_{\varphi}\left( \left( \begin{array}{cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-1}{2} - (1-\delta)} \cdot \psi \left( -i\vert y_{\infty} \vert \right) \cdot \psi \left( i \cdot \frac{\alpha}{Y_{\infty}} \right) \cdot W_2 \left( \vert y_{\infty} \vert \right) \end{align*} as functions of $y_{\infty} \in F_{\infty}^{\times}$. It is then easy to deduce from Proposition \ref{metaIP} that we have \begin{align*} Y^{\delta - \frac{1}{4} } \int_{I \cong [0,1]^d \subset F_{\infty}} \mathbb P^n_1 \varphi \cdot \overline{\theta}_Q \left( \left( \begin{array}{cc} \frac{1}{Y_{\infty}} & x_{\infty} \\ ~ & 1 \end{array} \right) \right) \psi(- \alpha x_{\infty}) d x_{\infty} &= \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\widetilde{\Pi}}(a^2 + \alpha) }{ {\bf{N}}(a^2 + \alpha)^{1-\delta} } W_2 \left( \frac{ {\bf{N}}(a^2 + \alpha) }{Y} \right). \end{align*} The argument of Theorem \ref{SDSCS} then implies the stated bound \begin{align}\label{TS2SCS} \sum\limits_{a \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \frac{c_{\widetilde{\Pi}}(a^2 + \alpha) }{ {\bf{N}}(a^2 + \alpha)^{1-\delta} } W_2 \left( \frac{ {\bf{N}}(a^2 + \alpha) }{Y} \right) &\ll_{\Pi, \varepsilon} Y^{\Re(\delta) - \frac{1}{4}} \cdot \vert \alpha \vert^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{\vert \alpha \vert}{Y} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon}. \end{align} Let us now suppose that $\mathfrak{D} \equiv 1 \bmod 4 \mathcal{O}_F$. 
We consider the quadratic polynomial defined by the specialization of our fixed binary quadratic form $q(x) = q_{\bf{1}}(x, b) = x^2 + b_{\bf{1}} b x + c_{\bf{1}} b^2$, noting that this has discriminant $\Delta_b := (b_{\bf{1}}b )^2 - 4 c_{\bf{1}} b^2 = - b^2 \mathfrak{D}$. We can then deduce the stated bounds (i) and (ii) in a similar way via the following variation of the proof of Theorem \ref{SDSCS2}. To be more precise, let us write $\alpha = \Delta_b = - b^2 \mathfrak{D}$. For the first sum (i), we take $\varphi = \otimes_v \varphi_v \in V_{\Pi}$ to be a pure tensor whose nonarchimedean local components are each essential Whittaker vectors, and whose archimedean local component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_{\infty}$ is chosen so that \begin{align*} W_{\varphi}\left( \left( \begin{array}{cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-1}{2} - \delta} \cdot W_1 \left( \vert y_{\infty} \vert \right) \end{align*} as functions of $y_{\infty} \in F_{\infty}^{\times}$. Using this choice of pure tensor $\varphi \in V_{\Pi}$ in the proof of Theorem \ref{SDSCS2} then implies the stated bound $(\ref{TS1SCS})$. Similarly for (ii), we take $\varphi = \otimes_v \varphi_v \in V_{\widetilde{\Pi}}$ to be a pure tensor whose nonarchimedean local components are each essential Whittaker vectors, and whose archimedean local component $\varphi_{\infty} = \otimes_{v \mid \infty} \varphi_{\infty}$ is chosen so that \begin{align*} W_{\varphi}\left( \left( \begin{array}{cc} y_{\infty} & ~ \\ ~ & {\bf{1}}_{n-1} \end{array} \right) \right) &= \vert y_{\infty} \vert^{\frac{n-1}{2} - (1-\delta)} \cdot W_2 \left( \vert y_{\infty} \vert \right) \end{align*} as functions of $y_{\infty} \in F_{\infty}^{\times}$. Using this choice of pure tensor $\varphi \in V_{\widetilde{\Pi}}$ in the proof of Theorem \ref{SDSCS2} then implies the stated bound $(\ref{TS2SCS})$. 
\end{proof} \begin{corollary}\label{CGTTSbound} We have the following uniform estimate for the truncated sum $(\ref{CGTTS})$: For any choice of $\varepsilon >0$, \begin{align*} X_{\mathfrak{D}}^{\dagger}(\Pi, \delta) &\ll_{\Pi, \varepsilon} Y^{\frac{3}{4} - \Re(\delta) + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{- \frac{1}{2}}. \end{align*} \end{corollary} \begin{proof} Let us start with the first term defining the truncated sum $(\ref{CGTTS})$. Let $W_1 \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$ be any smooth function with support on $[1/2, 1]$ satisfying $W_1^{(i)} \ll 1$ for all $i \geq 0$. Let $R \geq 1$ be any real number. Then, \begin{equation*}\begin{aligned} &\left\vert \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum\limits_{a,b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop b \neq 0} \frac{c_{\Pi}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{\delta}} W_1 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{R} \right) \right\vert \\ &\ll \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{1}{ {\bf{N}}\mathfrak{m}^{2 \Re(\delta)} } \sum\limits_{b \neq 0 \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \left\vert \sum\limits_{a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} } \frac{c_{\Pi}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{\delta}} W_1 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{R} \right) \right\vert, \end{aligned}\end{equation*} which by Proposition \ref{CGTOD} (i) can be bounded above by \begin{equation*}\begin{aligned} &\ll \sum\limits_{b \neq 0 \in \mathcal{O}_F / \mathcal{O}_F^{\times} \atop {\bf{N}}b \leq \left( \frac{R}{ {\bf{N}}\mathfrak{D} } \right)^{\frac{1}{2}} } R^{\frac{3}{4} - \Re(\delta)} {\bf{N}}(b^2 \mathfrak{D})^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}(b^2 \mathfrak{D}) }{R} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon} \\ &\ll R^{\frac{1}{4} - \Re(\delta) + \frac{\theta_0}{2} + \varepsilon} \cdot
{\bf{N}}\mathfrak{D}^{\sigma_0 - \frac{\theta_0}{2} - \varepsilon} \cdot \left( \frac{R}{ {\bf{N}}\mathfrak{D} } \right)^{\frac{1}{2} + \sigma_0 - \frac{\theta_0}{2} - \varepsilon} = R^{\frac{3}{4} - \Re(\delta) + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{-\frac{1}{2}}. \end{aligned}\end{equation*} Taking the sum over $\log Y$ many ranges $R \geq 1$, we then have by a standard partition of unity and dyadic decomposition the bound \begin{align}\label{CGTts1} \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 \delta} } \sum\limits_{a,b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop b \neq 0} \frac{c_{\Pi}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{\delta}} V_1 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{Y} \right) \ll_{\Pi, \varepsilon} Y^{\frac{3}{4} - \Re(\delta) + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{-\frac{1}{2}}. \end{align} For the second sum in (\ref{CGTTS}), we argue in the same way. That is, for $W_2 \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$ any smooth function with support on $[1/2, 1]$ satisfying $W_2^{(i)} \ll 1$ for all $i \geq 0$, and any real number $R \geq 1$, we have \begin{equation*}\begin{aligned} &\left\vert \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{\overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 (1-\delta)} } \sum\limits_{a,b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop b \neq 0} \frac{c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{1-\delta}} W_2 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{R} \right) \right\vert \\ &\ll \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{1}{ {\bf{N}}\mathfrak{m}^{2 (1-\Re(\delta))} } \sum\limits_{b \neq 0 \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \left\vert \sum\limits_{a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} } \frac{c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{1-\delta}} W_2 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{R}
\right) \right\vert, \end{aligned}\end{equation*} which by Proposition \ref{CGTOD} (ii) can be bounded above by \begin{equation*}\begin{aligned} &\ll \sum\limits_{b \neq 0 \in \mathcal{O}_F / \mathcal{O}_F^{\times} \atop {\bf{N}}b \leq \left( \frac{R}{ {\bf{N}}\mathfrak{D} } \right)^{\frac{1}{2}} } R^{ \Re(\delta) - \frac{1}{4}} {\bf{N}}(b^2 \mathfrak{D})^{\sigma_0 - \frac{1}{2}} \cdot \left( \frac{ {\bf{N}}(b^2 \mathfrak{D}) }{R} \right)^{\frac{1}{2} - \frac{\theta_0}{2} - \varepsilon} \\ &\ll R^{\Re(\delta) - \frac{3}{4} + \frac{\theta_0}{2} + \varepsilon} \cdot {\bf{N}}\mathfrak{D}^{\sigma_0 - \frac{\theta_0}{2} - \varepsilon} \cdot \left( \frac{R}{ {\bf{N}}\mathfrak{D} } \right)^{\frac{1}{2} + \sigma_0 - \frac{\theta_0}{2} - \varepsilon} = R^{ \Re(\delta) - \frac{1}{4} + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{-\frac{1}{2}}. \end{aligned}\end{equation*} Taking the sum over $\log Y$ many ranges $R \geq 1$, we then have by a standard partition of unity and dyadic decomposition the bound \begin{align}\label{CGTts2} \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F } \frac{\overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}^{2 (1-\delta)} } \sum\limits_{a,b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop b \neq 0} \frac{c_{\widetilde{\Pi}}(q_{\bf{1}}(a, b))}{ {\bf{N}} q_{\bf{1}}(a, b)^{1-\delta}} V_2 \left( \frac{ {\bf{N}} ( \mathfrak{m}^2 q_{\bf{1}}(a, b)) }{Y} \right) \ll_{\Pi, \varepsilon} Y^{\Re(\delta) - \frac{1}{4} + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{-\frac{1}{2}}. \end{align} Putting together the bounds $(\ref{CGTts1})$ and $(\ref{CGTts2})$ in the definition $(\ref{CGTTS})$ then implies the stated bound.
\end{proof} \begin{corollary}\label{CGTmain} We have the following estimate for the average $X_{\mathfrak{D}}(\Pi, \delta)$: For any $\varepsilon >0$, \begin{equation*}\begin{aligned} &X_{ \mathfrak{D} }(\Pi, \delta) \\ &= \frac{1}{w_K} \left( L(2 \delta, \eta \omega) \cdot \frac{L_{\bf{1}}(4(1 - \delta), \overline{\omega})}{L_{\bf{1}}(4 \delta, \omega)} \cdot L^{\star}_{{\bf{1}}}(2 \delta, \operatorname{Sym}^2 \Pi) + W(\Pi_K) \cdot Y^{1 - 2 \delta} \cdot F(\delta) \cdot L(2 - 2\delta, \eta \overline{\omega}) \cdot L^{\star}_{\bf{1}}(2 - 2 \delta, \operatorname{Sym}^2 \widetilde{\Pi}) \right) \\ &+ O_{\Pi, \varepsilon} \left( Y^{\frac{3}{4} - \Re(\delta) + \sigma_0} \cdot {\bf{N}} \mathfrak{D}^{- \frac{1}{2}} \right). \end{aligned}\end{equation*} \end{corollary} \section{Galois averages of central values} We now consider the following setup with central values $\delta = 1/2$, and $\chi$ taken more generally to be a ring class character of some conductor $\mathfrak{c} \subset \mathcal{O}_F$ of $K$. Recall that we write $C(\mathcal{O}_{\mathfrak{c}})$ to denote the class group of the $\mathcal{O}_F$-order $\mathcal{O}_{\mathfrak{c}} = \mathcal{O}_F + \mathfrak{c} \mathcal{O}_K$. Note that the cardinality $\# C(\mathcal{O}_{\mathfrak{c}})$ is given by Dedekind's formula \begin{align*} \#C(\mathcal{O}_{\mathfrak{c}}) &= \frac{h_K [\mathcal{O}_K: \mathcal{O}_{\mathfrak{c}}]}{[\mathcal{O}_K^{\times}: \mathcal{O}_{\mathfrak{c}}^{\times}]} \prod_{\mathfrak{q} \mid \mathfrak{c}} \left( 1 - \frac{\eta(\mathfrak{q})}{ {\bf{N}}\mathfrak{q} } \right), \end{align*} where $h_K = \# C(\mathcal{O}_K)$ denotes the class number of $K$, and the product ranges over prime ideal divisors of the conductor $\mathfrak{c}$. Here, we shall derive estimates for weighted averages of central values $L(1/2, \Pi \times \pi(\chi))$ over primitive ring class characters $\chi$ of a given conductor.
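As a toy numerical illustration of Dedekind's formula (our own specialization, not taken from the text), one can take $F = {\bf{Q}}$ and $K = {\bf{Q}}(i)$, where $h_K = 1$, the unit index $[\mathcal{O}_K^{\times} : \mathcal{O}_{\mathfrak{c}}^{\times}]$ equals $2$ for any conductor $c > 1$, and $\eta$ is the Kronecker symbol $\left( \frac{-4}{\cdot} \right)$:

```python
# Hypothetical sketch of Dedekind's formula for the order O_c = Z + c*O_K,
# specialized to F = Q, K = Q(i); h_K, the unit index, and eta = (-4/.)
# are the standard values for this field (names are ours).
from math import prod

def kronecker_m4(q):
    # eta(q) = (-4/q) for a prime q: 0 at q = 2 (ramified),
    # +1 if q = 1 mod 4 (split), -1 if q = 3 mod 4 (inert)
    if q == 2:
        return 0
    return 1 if q % 4 == 1 else -1

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p); n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def ring_class_number(c, h_K=1, unit_index=2):
    # Dedekind: #C(O_c) = h_K * [O_K:O_c] / [O_K^x:O_c^x] * prod_{q|c} (1 - eta(q)/q),
    # with [O_K:O_c] = c over F = Q; valid here for conductors c > 1
    return round(h_K * c / unit_index *
                 prod(1 - kronecker_m4(q) / q for q in prime_factors(c)))
```

For instance $c = 5$ gives $\frac{5}{2}(1 - \frac{1}{5}) = 2$, matching the class number of the discriminant $-100$ order ${\bf{Z}}[5i]$.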
We shall also consider the following more arithmetic setup, fixing a prime ideal $\mathfrak{p} \subset \mathcal{O}_F$ with underlying rational prime $p$, and taking averages over certain ring class characters which factor through the profinite limit $$C( \mathcal{O}_{ \mathfrak{p}^{\infty}}) = \varprojlim_{\alpha} C(\mathcal{O}_{\mathfrak{p}^{\alpha}}).$$ To be more precise, it is well-known that this profinite limit $C(\mathcal{O}_{\mathfrak{p}^{\infty}})$ is isomorphic as a topological group to ${\bf{Z}}_p^{\delta_{\mathfrak{p}}} \times C_0$, where $\delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p]$ denotes the local degree of $\mathfrak{p}$, and that $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$ is the torsion subgroup (which is finite). We shall consider weighted averages of central values $L(1/2, \Pi \times \pi(\chi))$ over primitive ring class characters $\chi$ of $\mathfrak{p}$-power conductor which factor through $C(\mathcal{O}_{\mathfrak{p}^{\infty}})$ and have a given induced character $\chi_0$ on the finite torsion subgroup $C_0$. We note that this is the same setup as considered by Cornut and Vatsal \cite{CV}, and moreover that nonvanishing of such subaverages suffices for most Iwasawa-theoretic applications (see e.g.~\cite{VO7}). \subsection{Approximate functional equations} Let us first set up the following simpler symmetric approximate functional equation to describe central values. Again, we fix a smooth test function of compact support $f \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$, and write $k(s) = f^*(s) := \int_0^{\infty} f(x) x^s \frac{dx}{x}$ to denote its Mellin transform. Hence, $k(s) = k_0(s)$ in our previous notations, as we do not normalize by any factor of the partial Dirichlet $L$-function $L_{\bf{1}}(s, \omega)$. Let us again assume the test function $f \in \mathcal{C}_c^{\infty}({\bf{R}}_{>0})$ is chosen so that its Mellin transform $f^*(s) = k(s)$ satisfies $k(0) = 1$.
Again, in the event that we do not know the generalized Ramanujan conjecture for $\Pi_{K, \infty}$, we can and do choose the test function in such a way that $k(s)$ vanishes at any additional poles of the quotient of archimedean local factors $F(s)$ in this region $0 < \Re(s) < \frac{1}{2} - \frac{1}{n^2 + 1}$. Fixing such a choice of test function $k(s) = \int_0^{\infty} f(x) x^s \frac{dx}{x}$ from now on, we have the following expression for the central values $L(1/2, \Pi \times \pi(\chi)) = L(1/2, \Pi_K \otimes \chi)$, i.e.~where $\chi$ is any ring class character\footnote{or more generally wide ray class character} of $K$. Taking $\delta=1/2$, we then define smooth cutoff functions $V_j(y)$ (for $j=1,2$) on $y \in {\bf{R}}_{>0}$ as in $(\ref{V1})$ and $(\ref{V2})$ above, so \begin{align*} V_1(y) &= \int_{\Re(s) = 2} \frac{k(s)}{s} y^{-s} \frac{ds}{2 \pi i}, \quad \quad V_2(y) = V_{2, 1/2}(y) = \int_{\Re(s)=2} \frac{k(-s)}{s} F(-s + 1/2) y^{-s} \frac{ds}{2 \pi i}. \end{align*} Note again that these functions $V_j$ decay rapidly according to Lemma \ref{RD} above. \begin{lemma}\label{AFEb} Fix $\chi$ a ring class character of conductor $\mathfrak{c} \subset \mathcal{O}_F$ of $K$, and let us write $Y = (D_K^n {\bf{N}} \mathfrak{f}(\Pi_K) {\bf{N}}\mathfrak{c}^{2n})^{\frac{1}{2}}$ to denote the square root of the conductor of the corresponding $L$-function $L(s, \Pi \times \pi(\chi)) = L(s, \Pi_K \otimes \chi)$. 
We have for any choice of real parameter $Z >0$ the following formula for the central value $L(1/2, \Pi \times \pi(\chi))$: \begin{align*} L(1/2, \Pi \times \pi(\chi)) &= \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n})}{ {\bf{N}}\mathfrak{n}^{\frac{1}{2}}} \sum_{A \in C(\mathcal{O}_{\mathfrak{c}})} \chi(A) r_A(\mathfrak{n}) V_1 \left( {\bf{N}}\mathfrak{m}^2 {\bf{N}}\mathfrak{n} Z \right) \\ &+ \epsilon(1/2, \Pi \times \pi(\chi)) \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \overline{\omega}(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}} \sum_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\widetilde{\Pi}}(\mathfrak{n})}{ {\bf{N}}\mathfrak{n}^{\frac{1}{2}}} \sum_{A \in C(\mathcal{O}_{\mathfrak{c}})} \chi(A) r_A(\mathfrak{n}) V_2 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}}\mathfrak{n} }{Y^2 Z}\right). \end{align*} \end{lemma} \begin{proof} We again apply a standard contour argument as in Lemma \ref{AFE} above (cf.~e.g.~\cite[$\S 5.2$]{IK}, \cite[$\S 3$]{LRS}). \end{proof} \subsection{Averages} We now use the approximate functional equation described in Lemma \ref{AFEb} above to derive formulae for the averages we consider. As we shall explain, after using M\"obius inversion and orthogonality in each case, we reduce to estimating sums of the following general type. Throughout, we fix $\Pi = \otimes_v \Pi_v$ a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with unitary central character $\omega = \otimes_v \omega_v$. 
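The rapid decay of the cutoff functions $V_j$ can be made concrete in the simplest case: unfolding $\frac{1}{s} = \int_1^{\infty} t^{-s} \frac{dt}{t}$ in the contour integral defining $V_1$ and applying Mellin inversion gives $V_1(y) = \int_y^{\infty} f(x) \frac{dx}{x}$, so that $V_1(y) \rightarrow k(0) = 1$ as $y \rightarrow 0$, while $V_1(y) = 0$ once $y$ exceeds the support of $f$. A minimal numeric sketch (ours; the bump function and grid sizes are illustrative choices):

```python
# Sketch (assumptions ours): f is a smooth bump on [1/2, 1], normalized so
# that k(0) = \int_0^infty f(x) dx/x = 1; then V_1(y) = \int_y^infty f(x) dx/x
# transitions from ~1 (small y) to 0 (y >= 1).
import math

def f_bump(x):
    # smooth compactly supported bump on (1/2, 1)
    if not (0.5 < x < 1.0):
        return 0.0
    u = (x - 0.5) / 0.5  # rescale support to (0, 1)
    return math.exp(-1.0 / (u * (1.0 - u)))

N = 100000  # midpoint Riemann sum resolution

def _mass(lo):
    # numeric approximation of \int_{lo}^{1} f(x) dx / x
    if lo >= 1.0:
        return 0.0
    step = (1.0 - lo) / N
    return sum(f_bump(lo + (i + 0.5) * step) / (lo + (i + 0.5) * step)
               for i in range(N)) * step

Z = _mass(0.5)  # normalizing constant enforcing k(0) = 1

def V1(y):
    # V_1(y) = \int_y^infty f(x)/Z dx/x
    return _mass(max(y, 0.5)) / Z
```

This is only a numeric stand-in for the contour definition, but it exhibits the qualitative behavior used throughout: the cutoff is essentially the indicator of $y \leq 1$, smoothed.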
Given an ideal $\mathfrak{c} \subset \mathcal{O}_F$ and a ring class $A \in C(\mathcal{O}_{\mathfrak{c}})$, we shall consider for any choice of real parameter $Z >1$ the sums \begin{align*} D_{A, \mathfrak{c}, 1}(\Pi; Z) &= \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum_{ \mathfrak{n} \subset \mathcal{O}_F } \frac{c_{\Pi}(\mathfrak{n}) r_A(\mathfrak{n})}{ {\bf{N}}\mathfrak{n}^{\frac{1}{2}}} V_1 \left( Z {\bf{N}}\mathfrak{m}^2 {\bf{N}} \mathfrak{n} \right) \end{align*} and \begin{align*} D_{A, \mathfrak{c}, 2}(\Pi; Z) &= \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \overline{\omega}(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}} \sum_{ \mathfrak{n} \subset \mathcal{O}_F } \frac{c_{\widetilde{\Pi}}(\mathfrak{n}) r_A(\mathfrak{n})}{ {\bf{N}}\mathfrak{n}^{\frac{1}{2}}} V_2 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}} \mathfrak{n} }{Y^2 Z }\right). \end{align*} Here again, we write $r_A(\mathfrak{n})$ to denote the function counting the number of ideals in the class $A \in C(\mathcal{O}_{\mathfrak{c}})$ whose image under the relative norm homomorphism ${\bf{N}}_{K/F}: K \rightarrow F$ equals $\mathfrak{n}$. We shall use the following parametrization of these counting functions henceforth. For each class $A \in C(\mathcal{O}_{\mathfrak{c}})$, let us write $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$ to denote the corresponding primitive reduced binary quadratic form class representative. Hence, the $F$-integer coefficients $a_A, b_A, c_A \in \mathcal{O}_F$ satisfy the bounds ${\bf{N}} b_A \leq {\bf{N}} a_A \leq {\bf{N}} c_A$. Moreover, the coefficients $a_A$ and $c_A$ are totally positive, as is the middle coefficient $b_A$ if either ${\bf{N}} b_A = {\bf{N}} a_A$ or $a_A = c_A$.
Writing $w_K$ as before to denote the number of automorphs, we then have the parametrization \begin{align*} r_A(\mathfrak{n}) &= \frac{1}{w_K} \cdot \# \left\lbrace a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}: q_A(a, b) = \mathfrak{n} \right\rbrace. \end{align*} Hence, we can and do expand these sums equivalently in terms of this parametrization as \begin{align}\label{DA1} D_{A, \mathfrak{c}, 1}(\Pi; Z) &= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop q_A(a, b) \neq 0} \frac{c_{\Pi}(q_A(a, b))}{ {\bf{N}}q_A(a, b)^{\frac{1}{2}}} V_1 \left( Z {\bf{N}}\mathfrak{m}^2 {\bf{N}} q_A(a, b) \right) \end{align} and \begin{align}\label{DA2} D_{A, \mathfrak{c}, 2}(\Pi; Z) &= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \overline{\omega}(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}} \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop q_A(a, b) \neq 0} \frac{c_{\widetilde{\Pi}}(q_A(a, b))}{ {\bf{N}}q_A(a, b)^{\frac{1}{2}}} V_2 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}}q_A(a, b) }{Y^2 Z }\right). \end{align} \subsubsection{Primitive averages} Fix an integral ideal $\mathfrak{c} \subset \mathcal{O}_F$, and let us for simplicity write $C(\mathfrak{c}) = C(\mathcal{O}_{\mathfrak{c}})^{\vee}$ to denote the group of characters of $C(\mathcal{O}_{\mathfrak{c}})$. Recall that a ring class character $\chi$ of $C(\mathcal{O}_{\mathfrak{c}})$ is primitive if it does not factor through the ring class group $C(\mathcal{O}_{\mathfrak{c}'})$ for any proper divisor $\mathfrak{c}' \mid \mathfrak{c}$. We write $P(\mathfrak{c}) = C(\mathfrak{c}) - \bigcup_{\mathfrak{c}' \mid \mathfrak{c}, \mathfrak{c}' \neq \mathfrak{c}} C(\mathfrak{c}')$ to denote the set of such characters.
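The counting function $r_A$ and its quadratic-form parametrization above can be sanity-checked numerically in the simplest case (a toy verification of ours): $F = {\bf{Q}}$, $K = {\bf{Q}}(i)$, $A = {\bf{1}}$ the principal class, so that $q_{\bf{1}}(x, y) = x^2 + y^2$, $w_K = 4$, and the ideal count is $r_{\bf{1}}(n) = \sum_{d \mid n} \eta(d)$ with $\eta = \left( \frac{-4}{\cdot} \right)$:

```python
# Toy check (assumptions ours): for K = Q(i), the number of ideals of Z[i]
# of norm n equals (1/4) * #{(a,b) in Z^2 : a^2 + b^2 = n}, the 4 accounting
# for the automorphs (units of Z[i]).
def eta(d):
    # Kronecker symbol (-4/d)
    return 0 if d % 2 == 0 else (1 if d % 4 == 1 else -1)

def r_ideal(n):
    # ideal-counting side: sum of eta over divisors of n
    return sum(eta(d) for d in range(1, n + 1) if n % d == 0)

def r_form(n, w_K=4):
    # quadratic-form side: representations by q_1(x, y) = x^2 + y^2, up to automorphs
    reps = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
               if a * a + b * b == n)
    assert reps % w_K == 0
    return reps // w_K
```

For instance $n = 5$: the eight representations $(\pm 1, \pm 2), (\pm 2, \pm 1)$ collapse to $r_{\bf{1}}(5) = 2$, matching the two prime ideals of norm $5$ in ${\bf{Z}}[i]$.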
Note that by M\"obius inversion (inclusion-exclusion), we have the relations \begin{align*} P(\mathfrak{c}) &= \sum\limits_{\mathfrak{c'} \mid \mathfrak{c} } \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c'} } \right) C(\mathfrak{c}') \end{align*} and \begin{align*} \vert P(\mathfrak{c}) \vert &= \sum\limits_{\mathfrak{c'} \mid \mathfrak{c} } \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c'} } \right) \vert C(\mathfrak{c}') \vert. \end{align*} Here, the sums run over all divisors $\mathfrak{c}' \mid \mathfrak{c}$ (including $\mathfrak{c}' = \mathfrak{c}$), and $\mu$ denotes the M\"obius function on integral ideals of $\mathcal{O}_F$. Given any class $A \in C(\mathcal{O}_{\mathfrak{c}})$, it is then easy to see from orthogonality of characters on each of the finite abelian groups $C(\mathcal{O}_{\mathfrak{c}'})$ that we have the relation \begin{align*} \sum\limits_{\chi \in P(\mathfrak{c})} \chi(A) &= \sum\limits_{\mathfrak{c}' \mid \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{\mathfrak{c}'} \right) \sum\limits_{\chi' \in C(\mathfrak{c}')} \chi'(A) = \sum\limits_{\mathfrak{c}' \mid \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{\mathfrak{c}'} \right) \cdot \begin{cases} \vert C(\mathfrak{c}') \vert & \text{ if $A = {\bf{1}} \in C(\mathcal{O}_{\mathfrak{c}'})$} \\ 0 & \text{ otherwise}. \end{cases} \end{align*} To put this into a simpler form for our purposes, let us for each proper divisor $\mathfrak{c}' \mid \mathfrak{c}$ write $Z(\mathfrak{c}, \mathfrak{c}')$ to denote the kernel of the natural surjective map $C(\mathcal{O}_{\mathfrak{c}}) \longrightarrow C(\mathcal{O}_{\mathfrak{c}'})$.
Dividing out by the cardinality $\vert P(\mathfrak{c}) \vert$, the previous relation then implies that we have \begin{align*} \frac{1}{ \vert P(\mathfrak{c}) \vert} \sum\limits_{\chi \in P(\mathfrak{c})} \chi(A) &= \begin{cases} 1 &\text{ if $A = {\bf{1}} \in C(\mathcal{O}_{\mathfrak{c}})$ is the principal class} \\ \mu \left( \frac{\mathfrak{c}}{\mathfrak{c}'} \right) \cdot \frac{\vert C(\mathfrak{c}') \vert}{ \vert P(\mathfrak{c}) \vert } &\text{ if $A \in Z(\mathfrak{c}, \mathfrak{c}')$ for some proper divisor $\mathfrak{c}' \mid \mathfrak{c}$, with $A \neq {\bf{1}} \in C(\mathcal{O}_{\mathfrak{c}})$} \\ 0 &\text{ otherwise}. \end{cases} \end{align*} Moreover, since each class $A \in Z(\mathfrak{c}, \mathfrak{c}')$ maps to the principal class ${\bf{1}} \in C(\mathcal{O}_{\mathfrak{c'}})$, we can re-arrange terms in this latter expression to obtain for each nonzero integral ideal $\mathfrak{n} \subset \mathcal{O}_F$ the relation \begin{equation}\begin{aligned}\label{PO} \frac{1}{\vert P(\mathfrak{c}) \vert} \sum\limits_{\chi \in P(\mathfrak{c})} \chi(A) r_A(\mathfrak{n}) &= \left( 1 - \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{\mathfrak{c}'} \right) \frac{ \vert C(\mathfrak{c}')\vert }{ \vert P(\mathfrak{c}) \vert } \right) r_{\bf{1}}(\mathfrak{n}) + \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c}' } \right) \frac{ \vert C(\mathfrak{c}')\vert }{ \vert P(\mathfrak{c}) \vert } \cdot r_{ {\bf{1}}_{\mathfrak{c}'}}(\mathfrak{n}). \end{aligned}\end{equation} Here, we write ${\bf{1}}$ to denote the principal class in $C(\mathcal{O}_{\mathfrak{c}})$, and ${\bf{1}}_{\mathfrak{c}'}$ for each proper divisor $\mathfrak{c}' \mid \mathfrak{c}$ to denote the principal class in $C(\mathcal{O}_{\mathfrak{c}'})$.
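The primitive-character average above can be verified in a finite toy model (our own, not from the text): take $C(\mathcal{O}_{\mathfrak{c}}) = {\bf{Z}}/9$ with unique proper quotient $C(\mathcal{O}_{\mathfrak{c}'}) = {\bf{Z}}/3$, so that the primitive characters are $\chi_k(a) = e^{2 \pi i k a / 9}$ with $3 \nmid k$:

```python
# Toy model (ours): C(O_c) ~ Z/9, C(O_c') ~ Z/3 its only proper quotient,
# P(c) = characters chi_k with 3 not dividing k, |P(c)| = 9 - 3 = 6.
import cmath

n, m = 9, 3
P = [k for k in range(n) if k % m != 0]

def avg(a):
    # normalized average (1/|P|) * sum_{chi in P} chi(a)
    return sum(cmath.exp(2j * cmath.pi * k * a / n) for k in P) / len(P)
```

One checks that the average is $1$ at the principal class $a = 0$, equals $\mu(\mathfrak{c}/\mathfrak{c}') \vert C(\mathfrak{c}') \vert / \vert P(\mathfrak{c}) \vert = -3/6$ on the nontrivial kernel classes $a = 3, 6$, and vanishes elsewhere, exactly the case analysis displayed above.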
Writing $W(\Pi_K) = \epsilon(1/2, \Pi \times \pi(\chi))$ again to denote the generic root number term (which does not depend on the choice of $\chi \in P(\mathfrak{c})$), we can then use this relation $(\ref{PO})$ directly to compute the corresponding average for any choice of real parameter $Z >1$ as \begin{align*} X_{\mathfrak{c}}(\Pi) &:= \frac{1}{\vert P(\mathfrak{c}) \vert} \sum\limits_{\chi \in P(\mathfrak{c})} L(1/2, \Pi \times \pi(\chi)) \\ &= \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m} } \sum\limits_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n})}{ {\bf{N}} \mathfrak{n}^{\frac{1}{2}} } V_1 \left( {\bf{N}}(\mathfrak{m}^2 \mathfrak{n}) Z \right) \cdot \frac{1}{\vert P(\mathfrak{c}) \vert} \sum\limits_{\chi \in P(\mathfrak{c})} \chi(A) r_A(\mathfrak{n}) \\ &+ W(\Pi_K) \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m} } \sum\limits_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\widetilde{\Pi}}(\mathfrak{n})}{ {\bf{N}} \mathfrak{n}^{\frac{1}{2}} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y^2 Z} \right) \cdot \frac{1}{\vert P(\mathfrak{c}) \vert} \sum\limits_{\chi \in P(\mathfrak{c})} \chi(A) r_A(\mathfrak{n}) \end{align*} as \begin{equation}\begin{aligned}\label{PAformula}X_{\mathfrak{c}}(\Pi) &= \left( 1 - \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c}' } \right) \frac{ \vert C(\mathfrak{c}') \vert }{ \vert P(\mathfrak{c} ) \vert } \right) D_{ {\bf{1}}, \mathfrak{c}, 1}(\Pi; Z) + \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c}' } \right) \frac{ \vert C(\mathfrak{c}') \vert }{ \vert P(\mathfrak{c} ) \vert } D_{ {\bf{1}}, \mathfrak{c}', 1}(\Pi; Z) \\ &+ W(\Pi_K) \left( 1 - \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq
\mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c}' } \right) \frac{ \vert C(\mathfrak{c}') \vert }{ \vert P(\mathfrak{c} ) \vert } \right) D_{ {\bf{1}}, \mathfrak{c}, 2}(\Pi; Z) + W(\Pi_K) \sum\limits_{\mathfrak{c}' \mid \mathfrak{c} \atop \mathfrak{c}' \neq \mathfrak{c}} \mu \left( \frac{\mathfrak{c}}{ \mathfrak{c}' } \right) \frac{ \vert C(\mathfrak{c}') \vert }{ \vert P(\mathfrak{c} ) \vert } D_{ {\bf{1}}, \mathfrak{c}', 2}(\Pi; Z). \end{aligned}\end{equation} \subsubsection{``Tame'' subaverages} We now fix a prime ideal $\mathfrak{p} \subset \mathcal{O}_F$ with underlying rational prime $p$, and consider the setup described above corresponding to ring class characters $\chi$ factoring through the profinite group $C(\mathcal{O}_{\mathfrak{p}^{\infty}}) = \varprojlim C(\mathcal{O}_{\mathfrak{p}^{\alpha}})$. Hence, let us fix a character $\chi_0$ of the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$, and for each (sufficiently large) integer $\alpha \geq 1$ write $P(\mathfrak{p}^{\alpha}, \chi_0)$ to denote the subset of primitive ring class characters $\chi \in P(\mathfrak{p}^{\alpha})$ whose induced character on $C_0$ is given by $\chi_0$.
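As a concrete illustration of the tower $C(\mathcal{O}_{\mathfrak{p}^{\infty}}) \cong {\bf{Z}}_p^{\delta_{\mathfrak{p}}} \times C_0$ (our own specialization, not from the text): for $F = {\bf{Q}}$, $K = {\bf{Q}}(i)$, and the split prime $p = 5$ (so $\eta(5) = +1$), Dedekind's formula gives $\# C(\mathcal{O}_{5^{\alpha}}) = \frac{5^{\alpha}}{2} \left( 1 - \frac{1}{5} \right) = 2 \cdot 5^{\alpha - 1}$, i.e.~a ${\bf{Z}}_5$-tower with torsion part $C_0$ of order $2$:

```python
# Hypothetical illustration (ours): orders in the ring class tower for
# F = Q, K = Q(i), p = 5 split, via Dedekind's formula specialized to a
# prime-power conductor p^alpha (h_K = 1, unit index 2 for alpha >= 1).
def tower_order(alpha, p=5, eta_p=1, h_K=1, unit_index=2):
    # #C(O_{p^alpha}) = h_K * p^alpha / unit_index * (1 - eta(p)/p)
    return round(h_K * p**alpha / unit_index * (1 - eta_p / p))

orders = [tower_order(a) for a in range(1, 5)]
```

The successive quotients all have order $p = 5$, consistent with the free ${\bf{Z}}_5$-part of rank $\delta_{\mathfrak{p}} = 1$ here.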
To compute the corresponding average \begin{align*} X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) &= \frac{1}{ \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert } \sum\limits_{\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} L(1/2, \Pi \times \pi(\chi)) \\ &= \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\omega \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m} } \sum\limits_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\Pi}(\mathfrak{n})}{ {\bf{N}} \mathfrak{n}^{\frac{1}{2}} } V_1 \left( {\bf{N}}(\mathfrak{m}^2 \mathfrak{n}) Z \right) \cdot \frac{1}{\vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert} \sum\limits_{\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} \chi(A) r_A(\mathfrak{n}) \\ &+ W(\Pi_K) \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\overline{\omega} \eta(\mathfrak{m})}{ {\bf{N}} \mathfrak{m} } \sum\limits_{\mathfrak{n} \subset \mathcal{O}_F} \frac{c_{\widetilde{\Pi}}(\mathfrak{n})}{ {\bf{N}} \mathfrak{n}^{\frac{1}{2}} } V_2 \left( \frac{ {\bf{N}}(\mathfrak{m}^2 \mathfrak{n})}{Y^2 Z} \right) \cdot \frac{1}{\vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert} \sum\limits_{\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} \chi(A) r_A(\mathfrak{n}), \end{align*} we first recall the following orthogonality relation shown in \cite[Lemma 2.8]{CV}. Let us for a given integer $\alpha \geq 1$ write $C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ to denote the image of the torsion subgroup $C_0$ in $C(\mathcal{O}_{\mathfrak{p}^{\alpha}})$, with $\overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}}) = C(\mathcal{O}_{\mathfrak{p}^{\alpha}})/C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$. Then, we know thanks to the arguments detailed in \cite[Lemma 2.8]{CV} that: \\ \begin{itemize} \item The natural map $C_0 \longrightarrow C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ is surjective for $\alpha \gg 1$ sufficiently large. 
\\ \item The natural surjective map $C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \longrightarrow \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ induces a bijection between the kernel groups $Z(\mathfrak{p}^{\alpha}) := \ker( C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \rightarrow C(\mathcal{O}_{\mathfrak{p}^{\alpha-1}}) )$ and $\ker( \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \rightarrow \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}}) )$. \\ \item The kernel group $\mathcal{Z}(\mathfrak{p}^{\alpha}) := \ker( C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \rightarrow \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}}) )$ decomposes as $\mathcal{Z}(\mathfrak{p}^{\alpha}) \cong C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \oplus Z(\mathfrak{p}^{\alpha})$. \\ \end{itemize} In particular, it can be deduced following the argument given in \cite[Lemma 2.8]{CV} that there exists a character $\chi_0'$ of $C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ inducing $\chi_0$ on $C_0$ and the trivial character on $Z(\mathfrak{p}^{\alpha})$ such that the subset of primitive characters $P(\mathfrak{p}^{\alpha}, \chi_0) \subset P(\mathfrak{p}^{\alpha})$ can be presented explicitly as \begin{align*} P(\mathfrak{p}^{\alpha}, \chi_0) &= \chi_0' \cdot \left( \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}})^{\vee} - \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}})^{\vee} \right). 
\end{align*} Using this presentation in lieu of M\"obius inversion, we derive for each class $A \in C(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ the relation \begin{align*} \frac{1}{ \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert} \sum\limits_{ \chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} \chi(A) &= \frac{\chi_0'(A)}{ \left( \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \vert - \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha -1}}) \vert \right) } \left( \sum\limits_{\overline{\chi} \in \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}})} \overline{\chi}(A) - \sum\limits_{\overline{\chi}' \in \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}})} \overline{\chi}'(A) \right) \\ &= \chi'_0(A) \cdot \begin{cases} 1 &\text{ if $A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$} \\ - \frac{ \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}}) \vert}{ \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert } &\text{ if $A$ maps into $C_0(\mathcal{O}_{\mathfrak{p}^{\alpha-1}})$ with $A \notin C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})$ } \\ 0 & \text{ otherwise}. \end{cases} \end{align*} Here, we use that $\vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert = \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \vert - \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha -1}}) \vert $ to simplify the formula.
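The relation just derived can likewise be checked in a finite toy model (ours, not from the text): take $C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \cong {\bf{Z}}/2 \times {\bf{Z}}/25$ with torsion part $C_0 = {\bf{Z}}/2$, level $\alpha - 1$ quotient ${\bf{Z}}/2 \times {\bf{Z}}/5$, and $\chi_0$ the nontrivial character of $C_0$:

```python
# Toy model (ours): classes A = (a, b) in Z/2 x Z/25; P(p^alpha, chi_0) is
# modeled by the characters (chi_0, psi) with psi primitive mod 25, i.e.
# psi_k(b) = e(kb/25) with 5 not dividing k, so |P| = 25 - 5 = 20.
import cmath

def chi0(a):
    # nontrivial character of the torsion part C_0 = Z/2
    return 1 if a % 2 == 0 else -1

prim = [k for k in range(25) if k % 5 != 0]

def avg(a, b):
    # normalized average (1/|P|) * sum over chi in P(p^alpha, chi_0) of chi(A)
    return chi0(a) * sum(cmath.exp(2j * cmath.pi * k * b / 25)
                         for k in prim) / len(prim)
```

The average is $\chi_0'(A)$ when $b \equiv 0 \bmod 25$ (torsion classes), equals $-\chi_0'(A) \cdot 5/20$ when $5 \mid b$ but $25 \nmid b$ (classes becoming torsion only at level $\alpha - 1$), and vanishes otherwise, mirroring the displayed case analysis.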
It is then easy to see that for any choice of real unbalancing parameter $Z >1$, we derive the corresponding average formula \begin{equation}\begin{aligned}\label{TAformula} X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) &= \sum\limits_{A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})} \chi'_0(A) \cdot \left( D_{A, \mathfrak{p}^{\alpha}, 1}(\Pi; Z) + W(\Pi_K) \cdot D_{A, \mathfrak{p}^{\alpha}, 2}(\Pi; Z) \right) \\ &- \frac{ \vert \overline{C}(\mathcal{O}_{\mathfrak{p}^{\alpha-1}}) \vert }{ \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert } \sum\limits_{A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha-1} } ) \atop A \notin C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})} \chi'_0(A) \cdot \left( D_{A, \mathfrak{p}^{\alpha-1}, 1}(\Pi; Z) + W(\Pi_K) \cdot D_{A, \mathfrak{p}^{\alpha-1}, 2}(\Pi; Z) \right). \end{aligned}\end{equation} \subsection{Estimates} We now estimate the sums $D_{A, \mathfrak{c}, j}(\Pi; Z)$ for each of $j=1,2$, i.e.~as parametrized respectively in $(\ref{DA1})$ and $(\ref{DA2})$ above via the primitive quadratic form representative $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$. \subsubsection{Residual terms} Let us first consider the $b=0$ terms. That is, we now estimate the sums \begin{align*} D_{A, \mathfrak{c}, 1}(\Pi; Z)\vert_{b=0} &:= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\Pi}(q_A(a, 0))}{ {\bf{N}}q_A(a, 0)^{\frac{1}{2}}} V_1 \left( Z {\bf{N}}\mathfrak{m}^2 {\bf{N}} q_A(a, 0) \right) \end{align*} and \begin{align*} D_{A, \mathfrak{c}, 2}(\Pi; Z)\vert_{b=0} &:= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \overline{\omega}(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}} \sum\limits_{a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\widetilde{\Pi}}(q_A(a, 0))}{ {\bf{N}}q_A(a, 0)^{\frac{1}{2}}} V_2 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}}q_A(a, 0) }{Y^2 Z }\right).
\end{align*} To account for the more general setting where ${\bf{N}} a_A \geq 2$ (namely the setting where $A$ is not the principal class), let us introduce some additional notations. Fix a class $A \in C(\mathcal{O}_{\mathfrak{c}})$, and a divisor $q \mid a_A$ of the leading coefficient in its reduced binary quadratic form representative $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$. We consider the following Dirichlet series related to the partial symmetric square $L$-function $L_{\bf{1}}^{\star}(s, \operatorname{Sym}^2 \Pi)$, \begin{align*} L_{ {\bf{1}}, q }^{\star}(s, \operatorname{Sym}^2 \Pi) &= L_{\bf{1}}^{\star}(2s, \omega) \sum\limits_{ a \neq 0 \in \mathcal{O}_F/\mathcal{O}_F^{\times} \atop a \equiv 0 \bmod q \mathcal{O}_F } \frac{c_{\Pi}(a^2)}{ {\bf{N}} a^s }. \end{align*} Note that if $q = 1$, then this is simply the series $L_{ {\bf{1}}, 1}^{\star}(s, \operatorname{Sym}^2 \Pi) = L_{\bf{1}}^{\star}(s, \operatorname{Sym}^2 \Pi)$ we considered before. \begin{proposition}\label{residue2} We have the following estimates for the $b=0$ terms in the sums $(\ref{DA1})$ and $(\ref{DA2})$. Let us again write $0 \leq \theta_0 \leq 1/2$ to denote the best known approximation towards the generalized Ramanujan conjecture for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms, hence with $\theta_0 =0$ conjectured and $\theta_0 = 7/64$ admissible by \cite{BB}. \begin{itemize} \item[(i)] We have for any choices of real numbers $Z >1$ and $\varepsilon >0$ that \begin{align*} D_{A, \mathfrak{c}, 1}(\Pi; Z)\vert_{b=0} = \frac{1}{w_K} &\sum\limits_{q \mid a_A} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \omega \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym}^2 \Pi) }{ L_{\bf{1}}(2, \omega) } \\ &+ O_{\Pi, \varepsilon} \left( D_K^{ \frac{1}{4} - \frac{(1-2 \theta_0)}{16} + \varepsilon} ({\bf{N}}a_A Z)^{\frac{1}{4}} \right).
\end{align*} \item[(ii)] We have for any choices of real numbers $Z >1$ and $\varepsilon >0$ that \begin{align*} D_{A, \mathfrak{c}, 2}(\Pi; Z)\vert_{b=0} = \frac{1}{w_K} &\sum\limits_{q \mid a_A} \mu(q) \cdot \overline{\omega}(q) \cdot \frac{ c_{\widetilde{\Pi}}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \overline{\omega} \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym}^2 \widetilde{\Pi}) }{ L_{\bf{1}}(2, \overline{\omega}) } \\ &+ O_{\Pi, \varepsilon} \left( D_K^{ \frac{1}{4} - \frac{(1-2 \theta_0)}{16} + \varepsilon} \left( \frac{ {\bf{N}} a_A }{ Y^2 Z } \right)^{\frac{1}{4}} \right). \end{align*} \end{itemize} \end{proposition} \begin{proof} For (i), we start with \begin{align*} D_{A, \mathfrak{c}, 1}(\Pi; Z)\vert_{b=0} &:= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{a \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \neq 0} \frac{c_{\Pi}(a_A a^2)}{ {\bf{N}}(a_A a^2)^{\frac{1}{2}}} V_1 \left( Z {\bf{N}}\mathfrak{m}^2 {\bf{N}}(a_A a^2) \right), \end{align*} which after using the Hecke relation \begin{align*} c_{\Pi}(a_A a^2) &= \sum\limits_{q \mid \gcd(a_A, a^2)} \mu(q) \cdot \omega(q) \cdot c_{\Pi} \left( \frac{a_A}{q} \right) c_{\Pi}\left( \frac{a^2}{q} \right) \end{align*} for each $F$-integer $a \neq 0 \in \mathcal{O}_F/\mathcal{O}_F^{\times}$ gives us the contour expression \begin{equation*}\begin{aligned} &\frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{a \neq 0 \in \mathcal{O}_F/\mathcal{O}_F^{\times}} \sum\limits_{q \mid \gcd(a_A, a^2)} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi} \left( a_A q^{-1} \right)}{ {\bf{N}} a_A^{\frac{1}{2}} } \frac{c_{\Pi}(a^2 q^{-1})}{ {\bf{N}}a } V_1( {\bf{N}}(\mathfrak{m}^2 a_A a^2) Z ) \\ &= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{q \mid a_A}
\mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi} \left( a_A q^{-1} \right)}{ {\bf{N}}(q^2a_A)^{\frac{1}{2}} } \sum\limits_{a \neq 0 \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \equiv 0 \bmod q \mathcal{O}_F} \frac{c_{\Pi}(a^2)}{ {\bf{N}} a} V_1( {\bf{N}}(\mathfrak{m}^2 a_A a^2) Z ) \\ &= \frac{1}{w_K} \sum\limits_{q \mid a_A} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi} \left( a_A q^{-1} \right)}{ {\bf{N}}(q^2a_A)^{\frac{1}{2}} } \int_{\Re(s) = 2} \frac{k(s)}{s} \sum\limits_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{ {\bf{N}} \mathfrak{m}^{1 + 2s} } \sum\limits_{a \neq 0 \in \mathcal{O}_F/ \mathcal{O}_F^{\times} \atop a \equiv 0 \bmod q \mathcal{O}_F} \frac{c_{\Pi}(a^2)}{ {\bf{N}}a^{1 + 2s}} ( {\bf{N}}a_A Z )^{-s} \frac{ds}{2 \pi i} \\ &= \frac{1}{w_K} \sum\limits_{q \mid a_A} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi} \left( a_A q^{-1} \right)}{ {\bf{N}}(q^2a_A)^{\frac{1}{2}} } \int_{\Re(s) = 2} \frac{k(s)}{s} L(1+ 2s, \omega \eta) \cdot \frac{ L_{ {\bf{1}}, q }^{\star}(2s + 1, \operatorname{Sym^2} \Pi)}{ L_{\bf{1}}^{\star}(4s + 2, \omega) } ( {\bf{N}}a_A Z )^{-s} \frac{ds}{2 \pi i} .\end{aligned}\end{equation*} Shifting the contour in the latter expression to $\Re(s) = -1/4$, we cross a simple pole at $s=0$ of residue \begin{align*} \frac{1}{w_K} &\sum\limits_{q \mid a_A} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \omega \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym^2} \Pi) }{ L_{\bf{1}}(2, \omega) }. \end{align*} The remaining contour integral is then bounded using Stirling's approximation formula with the Burgess-like subconvexity bound \begin{align*} L(s, \omega \eta)\vert_{\Re(s) = 1/2} \ll_{\Pi, \varepsilon} D_K^{\frac{1}{4} - \frac{(1-2 \theta_0)}{16} + \varepsilon} \end{align*} of \cite{Wu}. This shows (i). For (ii), we use a completely analogous argument to derive the stated estimate.
\end{proof} \subsubsection{Off-diagonal terms} Let us now consider the remaining off-diagonal terms. For simplicity, we now take $Z = Y^{-1}$ with $Y = D_K^{\frac{n}{2}} {\bf{N}} \mathfrak{f}(\Pi_K)^{\frac{1}{2}} {\bf{N}} \mathfrak{c}^n$ the square root of the conductor, and estimate the corresponding sums \begin{align*} D^{\dagger}_{A, \mathfrak{c}, 1}(\Pi; Y^{-1}) &:= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \omega(\mathfrak{m})}{{\bf{N}}\mathfrak{m}} \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}, b \neq 0 \atop {\bf{N}}(\mathfrak{m}^2 q_A(a,b)) \leq Y } \frac{c_{\Pi}(q_A(a, b))}{ {\bf{N}}q_A(a, b)^{\frac{1}{2}}} V_1 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}} q_A(a, b)}{Y} \right) \end{align*} and \begin{align*} D^{\dagger}_{A, \mathfrak{c}, 2}(\Pi; Y^{-1}) &:= \frac{1}{w_K} \sum_{\mathfrak{m} \subset \mathcal{O}_F} \frac{\eta \overline{\omega}(\mathfrak{m})}{ {\bf{N}}\mathfrak{m}} \sum\limits_{a, b \in \mathcal{O}_F/ \mathcal{O}_F^{\times}, b \neq 0 \atop {\bf{N}}(\mathfrak{m}^2 q_A(a,b)) \leq Y } \frac{c_{\widetilde{\Pi}}(q_A(a, b))}{ {\bf{N}}q_A(a, b)^{\frac{1}{2}}} V_2 \left( \frac{ {\bf{N}}\mathfrak{m}^2 {\bf{N}}q_A(a, b) }{Y }\right). \end{align*} \begin{proposition}\label{OD2} Let us again write $0 \leq \theta_0 \leq 1/2$ to denote the best approximation towards the generalized Ramanujan conjecture for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms, and $0 \leq \sigma_0 \leq 1/4$ that towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect. We have the following uniform estimates, given in terms of the square root of the conductor $Y = D_K^{\frac{n}{2}} {\bf{N}} \mathfrak{f}(\Pi_K)^{\frac{1}{2}} {\bf{N}} \mathfrak{c}^n$. 
For any integral ideal $\mathfrak{c} \subset \mathcal{O}_F$, any class $A$ in the corresponding ring class group $C(\mathcal{O}_{\mathfrak{c}})$, and either index $j=1,2$, we have the following uniform bound for any choice of $\varepsilon >0$: \begin{align*} D^{\dagger}_{A, \mathfrak{c}, j}(\Pi; Y^{-1}) \ll_{\Pi, \varepsilon} Y^{\frac{1}{4} + \sigma_0 + \varepsilon} \cdot {\bf{N}} a_A \cdot {\bf{N}}(\mathfrak{c}^2 \mathfrak{D})^{-\frac{1}{2}}. \end{align*} \end{proposition} \begin{proof} Formally, the proof is derived in the same way as that for Proposition \ref{CGTOD} above with $\delta = 1/2$ via Theorems \ref{SDSCS} and \ref{SDSCS2}, using that the underlying binary quadratic form $q_A(x, y) = a_A x^2 + b_A xy + c_A y^2$ has discriminant $b_A^2 - 4 a_A c_A = \mathfrak{c}^2 \mathfrak{D}$. Here, we also add in the contribution of the leading coefficient $a_A$ for the corresponding application of Theorem \ref{SDSCS2}. \end{proof} \subsubsection{Estimates for the averages} Putting together the formulae for the averages $X_{\mathfrak{c}}(\Pi)$ and $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ derived above with the estimates of Propositions \ref{residue2} and \ref{OD2} for each of the summands gives us the following. \begin{corollary}\label{GAmain} We have the following estimates for the averages $X_{\mathfrak{c}}(\Pi)$ and $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi)$, given in terms of the square root of the conductor $Y = D_K^{\frac{n}{2}} {\bf{N}} \mathfrak{f}(\Pi_K)^{\frac{1}{2}} {\bf{N}} \mathfrak{c}^n$. Let us take all of the setup described above for granted.
Again, we write $0 \leq \sigma_0 \leq 1/4$ to denote the best approximation towards the generalized Lindel\"of hypothesis for $\operatorname{GL}_2({\bf{A}}_F)$-automorphic forms in the level aspect, with $\sigma_0 = 0$ conjectured, and $\sigma_0 = 103/512$ admissible by the main theorem of \cite{BH10}, and also $\sigma_0 = 3/16$ admissible when $F = {\bf{Q}}$ by the antecedent theorem of \cite{BH07}. \\ \begin{itemize} \item[(i)] Given $\mathfrak{c} \subset \mathcal{O}_F$ coprime to the conductor of $\Pi$, the corresponding average $X_{\mathfrak{c}}(\Pi)$ can be estimated as \begin{align*} X_{\mathfrak{c}}(\Pi) &= \frac{1}{w_K} \left( L(1, \omega \eta) \cdot \frac{ L_{\bf{1}}^{\star}(1, \operatorname{Sym^2} \Pi) }{ L_{\bf{1}}(2, \omega) } + W(\Pi_K) \cdot L(1, \overline{\omega} \eta) \cdot \frac{ L_{\bf{1}}^{\star} (1, \operatorname{Sym^2} \widetilde{\Pi}) }{ L_{\bf{1}}(2, \overline{\omega})} \right) \\ &+ O_{\Pi, \varepsilon} \left( Y^{ \frac{1}{4} + \sigma_0 + \varepsilon} \cdot {\bf{N}}( \mathfrak{c}^2 \mathfrak{D})^{-\frac{1}{2}} \right). \end{align*} In particular, assuming $W(\Pi_K) \neq -1$ in the event that $\Pi \cong \widetilde{\Pi}$ is self-dual, if the constraint $n(1/4 + \sigma_0) < 1$ is satisfied, then $X_{\mathfrak{c}}(\Pi) \neq 0$ as ${\bf{N}}(\mathfrak{c}^2 \mathfrak{D}) \rightarrow \infty$. \\ \item[(ii)] Given a prime ideal $\mathfrak{p} \subset \mathcal{O}_F$ coprime to the conductor of $\Pi$, an integer $\alpha \geq 1$, and $\chi_0$ a character of the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$, the corresponding ``tame" subaverage $X_{ \mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ of $X_{\mathfrak{p}^{\alpha}}(\Pi)$ can be estimated as follows. 
Let us simplify notation by writing \begin{align*} R(\Pi, a_A; q) &:= \frac{1}{w_K} \mu(q) \cdot \omega(q) \cdot \frac{ c_{\Pi}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \omega \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym^2} \Pi) }{ L_{\bf{1}}(2, \omega) } \\ R(\widetilde{\Pi}, a_A; q) &:= \frac{1}{w_K} \mu(q) \cdot \overline{\omega}(q) \cdot \frac{ c_{\widetilde{\Pi}}(a_A q^{-1}) }{ {\bf{N}}(q^2 a_A)^{\frac{1}{2}} } \cdot L(1, \overline{\omega} \eta) \cdot \frac{ L_{ {\bf{1}}, q}^{\star}(1, \operatorname{Sym^2} \widetilde{\Pi}) }{ L_{\bf{1}}(2, \overline{\omega}) }\end{align*} for each divisor $q \mid a_A$ to denote the residual terms appearing in Proposition \ref{residue2}. We have that \begin{align*} &X_{ \mathfrak{p}^{\alpha}, \chi_0}(\Pi) \\ &= \sum\limits_{A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}})} \sum\limits_{q \mid a_A} \left( R(\Pi, a_A; q) + W(\Pi_K) \cdot R(\widetilde{\Pi}, a_A; q) \right) + O_{\Pi, \varepsilon} \left( Y^{\frac{1}{4} + \sigma_0 + \varepsilon} \cdot {\bf{N}} a_A \cdot {\bf{N}}(\mathfrak{p}^{2\alpha} \mathfrak{D})^{-\frac{1}{2}} \right). \end{align*} In particular, assuming $W(\Pi_K) \neq -1$ in the event that $\Pi \cong \widetilde{\Pi}$ is self-dual, if the constraint $n(1/4 + \sigma_0) < 1$ is satisfied, then $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) \neq 0$ as $\alpha \rightarrow \infty$. \\ \end{itemize} \end{corollary} \begin{proof} Both estimates follow from the discussion above with Propositions \ref{residue2} and \ref{OD2}, taking $Z = Y^{-1}$. The nonvanishing claim for (i) is deduced in a direct way from nonvanishing properties for the symmetric square $L$-function $L(s, \operatorname{Sym^2} \Pi)$ on the line $\Re(s) = 1$, as shown in \cite{Sha2}. To be clear, the implication is direct if $\Pi \cong \widetilde{\Pi}$ is self-dual.
Otherwise, the root number $W(\Pi_K)$ is not real-valued, and it is simple to see that the sum of residues cannot vanish. The nonvanishing claim for (ii) is deduced from a variation of this argument, as given in \cite{VO7}. In brief, it is not hard to see that each of the leading coefficients $a_A$ associated to the classes $A \in C_0(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \cong C_0$ which appear in the average is bounded above by some constant independent of the choice of exponent $\alpha$. Here, it is also simple to see by inspection that the residual terms $R(\Pi, a_A; q)$ and $R(\widetilde{\Pi}, a_A; q)$ always have a nonvanishing contribution from $q=1$. \end{proof} \subsection{Rationality theorems and Galois averages} Let us now explain how stronger nonvanishing properties can be derived from nonvanishing estimates for the tame subaverages $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ via certain rationality theorems when the rank $n \geq 2$ is even, in the style of the theorems of Greenberg \cite{Gr} and Rohrlich \cite{Ro2}, \cite{Ro} for central values of $L$-functions of elliptic curves with complex multiplication, as well as those of Vatsal \cite{Va} and Cornut-Vatsal \cite{CV}, \cite{CV2} for self-dual families of central values of Rankin-Selberg $L$-functions on $\operatorname{GL}_2({\bf{A}}_F) \times \operatorname{GL}_2({\bf{A}}_F)$ indexed by ring class characters of $\mathfrak{p}$-power conductor. \subsubsection{The classical setting} The theorems of Rohrlich \cite{Ro2}, \cite{Ro} and Vatsal \cite{Va} use the following theorem of Shimura \cite{Sh2}, which we now summarize for convenience. Suppose that $n=2$, and that our cuspidal $\operatorname{GL}_2({\bf{A}}_F)$-automorphic representation $\Pi$ is attached to a holomorphic Hilbert modular eigenform $f$ of arithmetic weight $k = (k_j)_{j=1}^d$ (for $d = [F: {\bf{Q}}]$), with each $k_j \geq 2$. Here, arithmetic means that $k_i \equiv k_j \bmod 2$ for all indices $i \neq j$.
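As an aside, the rank constraint $n(1/4 + \sigma_0) < 1$ appearing in Corollary \ref{GAmain} can be made explicit for the values of $\sigma_0$ recorded there. The following short script (an illustrative aside only, using exact rational arithmetic; not part of the argument) tabulates the admissible even ranks $n$:

```python
from fractions import Fraction

# Rank constraint n*(1/4 + sigma_0) < 1 from the corollary above, evaluated
# at the quoted values of sigma_0 (exact rational arithmetic throughout).
bounds = {
    "Lindelof (conjectural)": Fraction(0),
    "Blomer-Harcos, general F": Fraction(103, 512),
    "Blomer-Harcos, F = Q": Fraction(3, 16),
}
for label, sigma0 in bounds.items():
    cutoff = 1 / (Fraction(1, 4) + sigma0)              # need n < cutoff
    admissible = [n for n in range(2, 12, 2) if n < cutoff]
    print(f"{label}: n < {cutoff} ~ {float(cutoff):.3f}, admissible even n: {admissible}")
```

In particular, with the current subconvexity records the constraint holds only for $n = 2$, and even under the Lindel\"of hypothesis ($\sigma_0 = 0$) it restricts to even ranks $n < 4$, i.e.~again only $n = 2$.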
Fixing $K$ a totally imaginary quadratic extension of $F$, let $\chi$ be any ring class character of $K$, or more generally any Hecke character of $K$ whose corresponding theta series $\theta(\chi)$ has arithmetic weight $w = (w_j)_{j=1}^d$ with $w_j < k_j$ for each $1 \leq j \leq d$. The main theorem of Shimura \cite{Sh2} implies that there exists some nonzero complex number $\Omega(\Pi)$ (independent of the choice of $\chi$), for which the ratio \begin{align}\label{shimura} \mathcal{L}(1/2, \Pi \times \pi(\chi)) &= L(1/2, \Pi \times \pi(\chi))/\Omega(\Pi) \end{align} is an algebraic number. To be more explicit, this number $\Omega(\Pi)$ is given by $8 \pi^2$ times the Petersson norm of $f$. As well, this ratio $(\ref{shimura})$ is contained in the compositum ${\bf{Q}}(\Pi, \chi) = {\bf{Q}}(\Pi) {\bf{Q}}(\chi)$ of the Hecke field ${\bf{Q}}(\Pi)$ of $\Pi$, obtained by adjoining to ${\bf{Q}}$ the Hecke eigenvalues of $\Pi$, with the cyclotomic field ${\bf{Q}}(\chi)$ obtained by adjoining to ${\bf{Q}}$ the values of $\chi$. Moreover, writing $G_{\bf{Q}} = \operatorname{Gal}(\overline{\bf{Q}}/{\bf{Q}})$ to denote the absolute Galois group of ${\bf{Q}}$, there is a natural action of $\sigma \in G_{\bf{Q}}$ on these values $\mathcal{L}(1/2, \Pi \times \pi(\chi))$ of the form \begin{align*} \sigma \left( \mathcal{L}(1/2, \Pi \times \pi(\chi)) \right) &= \mathcal{L}(1/2, \Pi^{\gamma} \times \pi(\chi^{\sigma})).\end{align*} Here, $\gamma$ denotes the restriction of $\sigma$ to the Hecke field ${\bf{Q}}(\Pi)$, i.e.~$\gamma = \sigma \vert_{ {\bf{Q}}(\Pi)}$, with the action being determined on the level of Hecke eigenvalues or Fourier coefficients in the natural way (see \cite{Sh2}). As well, viewing $\chi$ as a character on nonzero ideals $\mathfrak{a} \subset \mathcal{O}_K$, we write $\chi^{\sigma}$ to denote the character determined by the rule $\mathfrak{a} \mapsto \sigma \left( \chi(\mathfrak{a}) \right)$.
In particular, if we take $\Sigma = \Sigma_{[\chi]}$ to be the set of all embeddings $\sigma: {\bf{Q}}(\Pi, \chi) \rightarrow {\bf{C}}$ which fix ${\bf{Q}}(\Pi)$, then the values in the set defined by $G_{[\chi]}(\Pi) = \left\lbrace \sigma \left( \mathcal{L}(1/2, \Pi \times \pi(\chi))\right) : \sigma \in \Sigma \right\rbrace $ are Galois conjugate. An immediate consequence is the following rigidity property: $\mathcal{L}(1/2, \Pi \times \pi(\chi)) = 0$ if and only if $\mathcal{L}(1/2, \Pi \times \pi(\chi^{\sigma})) = 0$ for all $\sigma \in \Sigma_{[\chi]}$. This property underlies the averaging theorems of Rohrlich \cite{Ro2}, \cite{Ro}\footnote{However, although the principle remains the same, the setting of Dirichlet characters implicit in the latter theorem \cite{Ro} takes a slightly different form than described here (e.g.~as the $L$-functions considered there are not of Rankin-Selberg type).}, and Vatsal \cite{Va}. \subsubsection{Automorphic periods over CM fields} A generalization of Shimura's theorem \cite{Sh2} in the direction of Deligne's rationality conjectures \cite{De} can be established for special cases of the basechange $L$-functions $L(s, \Pi_K \otimes \chi)$ we consider above when the dimension $n \geq 2$ is even. We refer to Garrett-Harris \cite{GaHa} for special cases with $n=4$ via the functoriality theorem of Ramakrishnan \cite{Ram}, with the general case of even $n$ given in Harris \cite{MH} and in work of Guerberoff and Guerberoff-Lin, as described in Harris-Lin \cite[$\S 4.3$]{HL}. To give a brief summary in the notations and normalizations used here, let $\Pi = \otimes_v \Pi_v$ be a cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ for $n \geq 2$ even. We assume from now on that $\Pi$ is cohomological, which means essentially that $\Pi$ can be realized in the cohomology of some Shimura variety, and refer to Clozel \cite{Clo} for a more detailed account of such representations.
For each real place $\tau$ of $F$, such a cohomological representation $\Pi$ has a corresponding highest weight given by an $n$-tuple $(\lambda_j)_{j=1}^n = (\lambda_{\tau, j})_{j=1}^n$, and it is known (see \cite[Lemme de Puret\'e 4.9]{Clo}) that the sum $\lambda_{j} + \lambda_{n+1-j}$ for each index $1 \leq j \leq n$ is equal to a constant $w \in {\bf{Z}}$ known as the {\it{purity weight}}. This purity weight $w$ does not depend on the choice of real place $\tau$. As well, such a representation $\Pi$ is said to be {\it{regular}} if $\lambda_{j} \neq \lambda_{i}$ for all indices $i \neq j$ (see \cite[3.5]{Clo}). In the setup we consider above, the central value $s = 1/2$ of $L(s, \Pi_K \otimes \chi)$ (with $\chi$ a ring class character of $K$) is critical in the sense of Deligne \cite{De} if and only if $\lambda_{j} \neq \frac{w}{2}$ for each index $1 \leq j \leq n$. We can now state the following special case of Deligne's conjecture for our averaging setup: \begin{theorem}\label{CMrationality} Let $n \geq 2$ be an even integer. Let $\Pi = \otimes_v \Pi_v$ be a regular, self-dual, cuspidal, cohomological automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$. Assume that the Hodge-Tate weights of $\Pi$ satisfy the condition described above, so that $\lambda_j \neq \frac{w}{2}$ for each index $1 \leq j \leq n$. There exists a nonzero complex number $\Omega(\Pi_K)$ (depending only on the basechange representation $\Pi_K$) such that for any ring class character $\chi$ (and more generally any wide ray class character) of $K$, the ratio \begin{align*} \mathcal{L}(1/2, \Pi_K \otimes \chi) &= L(1/2, \Pi_K \otimes \chi) / \Omega(\Pi_K) \end{align*} is an algebraic number, contained in the compositum of the Hecke field ${\bf{Q}}(\Pi_K)$ of $\Pi_K$ with the cyclotomic field ${\bf{Q}}(\chi)$ obtained by adjoining to ${\bf{Q}}$ the values of $\chi$.
Moreover, there is a natural action of $\sigma \in G_{\bf{Q}}$ on these values $\mathcal{L}(1/2, \Pi_K \otimes \chi)$ of the form \begin{align*} \sigma \left( \mathcal{L}(1/2, \Pi_K \otimes \chi) \right) &= \mathcal{L}(1/2, \Pi_K^{\gamma} \otimes \chi^{\sigma}), \end{align*} where $\gamma$ denotes the restriction of $\sigma$ to the Hecke field ${\bf{Q}}(\Pi_K)$, i.e.~$\gamma = \sigma \vert_{ {\bf{Q}}(\Pi_K)}$. Hence in particular, if we take $\Sigma = \Sigma_{[\chi]}$ to be the set of all embeddings $\sigma: {\bf{Q}}(\Pi_K, \chi) \rightarrow {\bf{C}}$ which fix ${\bf{Q}}(\Pi_K)$, then the values in the set \begin{align*} G_{[\chi]}(\Pi_K) = \left\lbrace \sigma \left( \mathcal{L}(1/2, \Pi_K \otimes \chi)\right) : \sigma \in \Sigma \right\rbrace \end{align*} are Galois conjugate. Consequently, $\mathcal{L}(1/2, \Pi_K \otimes \chi) = 0$ if and only if $\mathcal{L}(1/2, \Pi_K \otimes \chi^{\sigma}) = 0$ for all $\sigma \in \Sigma_{[\chi]}$. \end{theorem} \begin{proof} See \cite[Theorem 4.7]{HL}; the theorem is established by Harris \cite{MH} and more generally by Guerberoff-Lin (see \cite{HL} for details). The special case of certain triple products of holomorphic forms corresponding to $n=4$ (thanks to the functoriality theorem of Ramakrishnan \cite{Ram}) is also implied by Garrett-Harris \cite{GaHa}.
\end{proof} \subsubsection{Automorphic periods via Eisenstein cohomology} We also have the following constructions of periods in the sense of Deligne \cite{De} given by Harder-Raghuram \cite{HR1} and Mahnkopf \cite{Mahn} via Eisenstein cohomology for the $\operatorname{GL}_n({\bf{A}}_F) \times \operatorname{GL}_1({\bf{A}}_F)$ $L$-functions $L(s, \Pi \otimes \xi)$, i.e.~with $\xi = \otimes_v \xi_v$ an idele class character of the totally real\footnote{Note that the induced representation $\pi(\chi)$ of $\operatorname{GL}_2({\bf{A}}_F)$ coming from a ring class character $\chi$ of a totally imaginary quadratic extension $K/F$ is {\it{not}} cohomological, and hence the theorems of \cite{HR0}, \cite{HR1}, \cite{Mahn} do not apply in the setting of Rankin-Selberg $L$-functions $L(s , \Pi \times \pi(\chi)) = L(s, \Pi_K \otimes \chi)$ we consider above.} field $F$. To describe the setup briefly, we begin with the following intuitive or conceptual characterization: \begin{remark}[Heuristic observation.] Let $\Pi$ be an irreducible cuspidal automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$ with unitary central character $\omega$. Assume again that $\Pi$ is cohomological. For the purpose of this preliminary discussion, it will in fact suffice to assume that the root number $\epsilon(1/2, \Pi) \in {\bf{S}}^1$ of $L(s, \Pi)$ is an algebraic number. Fix $\xi = \otimes_v \xi_v$ an idele class character of $F$ of conductor $\mathfrak{f}(\xi) \subset \mathcal{O}_F$ prime to the conductor $\mathfrak{f}(\Pi) \subset \mathcal{O}_F$ of $\Pi$, which we shall later take to have trivial archimedean component. Hence, $\xi$ corresponds to a wide ray class Hecke character.
Recall that the twisted $L$-function $L(s, \Pi \otimes \xi)$, defined first for $\Re(s) >1$ by the Dirichlet series expansion $L(s, \Pi \otimes \xi) = \sum_{\mathfrak{m}\subset \mathcal{O}_F} c_{\Pi}(\mathfrak{m}) \xi(\mathfrak{m}) {\bf{N}} \mathfrak{m}^{-s}$, has a well-known analytic continuation, and satisfies the functional equation \begin{align}\label{FEr} \Lambda(s, \Pi \otimes \xi) &= \epsilon(1/2, \Pi \otimes \xi ) \Lambda(1-s, \widetilde{\Pi} \otimes \xi^{-1}),\end{align} where $\Lambda(s, \Pi \otimes \xi) = L(s, \Pi_{\infty} \otimes \xi_{\infty}) L(s, \Pi \otimes \xi)$ denotes the completed $L$-function whose archimedean component (in the notations described above) is given by \begin{align*} L(s, \Pi_{\infty} \otimes \xi_{\infty}) &= \left( D_F^n {\bf{N}} \mathfrak{f}(\Pi) {\bf{N}} \mathfrak{f}(\xi)^n \right)^{\frac{s}{2}} \prod_{v \mid \infty} \prod_{j=1}^n \Gamma_{\bf{R}} \left( s - \xi_v(-1) \mu_j(\Pi_{\infty}) \right) \\ &= \left( D_F^n {\bf{N}} \mathfrak{f}(\Pi) {\bf{N}} \mathfrak{f}(\xi)^n \right)^{\frac{s}{2}} \gamma_{\infty}(s), \end{align*} and root number $\epsilon(1/2, \Pi \otimes \xi) \in {\bf{S}}^1$ given by the formula \begin{align*} \epsilon(1/2, \Pi \otimes \xi) &= \omega( \mathfrak{f}(\xi) ) \xi ( \mathfrak{f}(\Pi) ) \epsilon(1/2, \Pi) \epsilon(1/2, \xi)^n \\ &= \omega( \mathfrak{f}(\xi) ) \xi ( \mathfrak{f}(\Pi) ) \epsilon(1/2, \Pi) \left( \frac{\tau(\xi)} { \sqrt{{\bf{N}}\mathfrak{f}(\xi)} } \right)^n.\end{align*} Here, we write $\tau(\xi)$ to denote the Gauss sum of $\xi$ (see e.g.~\cite{Ne} for definitions), and $D_F$ for the absolute discriminant of $F$. Since the Hecke character $\xi$ and its Gauss sum $\tau(\xi)$ take algebraic values, our assumption that $\epsilon(1/2, \Pi)$ is algebraic implies the same for the root number $\epsilon(1/2, \Pi \otimes \xi)$ for any idele class character $\xi = \otimes_v \xi_v$ of $F$.
As a consequence, we see via $(\ref{FEr})$ that for any complex argument $\delta \in {\bf{C}}$ for which the corresponding value $L(\delta, \Pi \otimes \xi)$ does not vanish, the ratio defined by \begin{align*} \mathfrak{L}(\delta, \Pi \otimes \xi) &= \frac{ \Lambda(\delta, \Pi \otimes \xi)}{\Lambda(1- \delta, \widetilde{\Pi} \otimes \xi^{-1})} \\ &= ( D_F^n {\bf{N}} \mathfrak{f}(\Pi) {\bf{N}} \mathfrak{f}(\xi)^{n})^{\delta - \frac{1}{2}} \cdot \frac{\gamma_{\infty}(\delta)L(\delta, \Pi \otimes \xi)}{ \widetilde{\gamma}_{\infty}(1 - \delta) L (1-\delta, \widetilde{\Pi} \otimes \xi^{-1})} \end{align*} is an algebraic number of complex modulus one. Moreover, there is a natural action of $\sigma \in G_{\bf{Q}}$ on these values $\sigma \left( \mathfrak{L}(\delta, \Pi \otimes \xi) \right) = \mathfrak{L}(\delta, \Pi^{\gamma} \otimes \xi^{\sigma})$ via the natural action on the root number $\epsilon(1/2, \Pi \otimes \xi)$, although this is only well-defined if each of the values in the orbit is known a priori to be nonvanishing. \end{remark} The construction of periods via Eisenstein cohomology contains a similar ratio of completed $L$-values $\mathfrak{L}(\delta, \Pi \otimes \xi)$, where the argument $\delta \in {\bf{C}}$ is critical in the sense of Deligne \cite{De}, as characterized in this setting in \cite[Proposition 7.36]{HR1}. In particular, we derive from \cite[Theorem 7.40]{HR1} the following rationality property. \begin{theorem}\label{EISrationality} Fix an even integer $n \geq 2$. Let $\Pi = \otimes_v \Pi_v$ be a regular cuspidal cohomological representation of $\operatorname{GL}_n({\bf{A}}_F)$. Assume that the Hodge-Tate weights of $\Pi$ described for Theorem \ref{CMrationality} above satisfy the condition of \cite[Proposition 6.1]{GR}, so that $\lambda_{\frac{n}{2}} \leq 0 \leq \lambda_{\frac{n}{2} + 1} = w - \lambda_{\frac{n}{2}}$. Let $\xi = \otimes_v \xi_v$ be any idele class character of $F$ with trivial archimedean component.
There exists a nonzero complex number $\Omega(\Pi)$ (depending only on the representation $\Pi$) such that the ratio \begin{align*} \mathcal{L}(1/2, \Pi \otimes \xi) &:= \frac{ \Lambda(1/2, \Pi \otimes \xi)}{ \Omega(\Pi) \Lambda(1/2, \widetilde{\Pi} \otimes \xi^{-1}) } = \frac{ \mathfrak{L}(1/2, \Pi \otimes \xi)}{ \Omega(\Pi)}\end{align*} is an algebraic number, contained in the compositum ${\bf{Q}}(\Pi, \xi) = {\bf{Q}}(\Pi) {\bf{Q}}(\xi)$ of the Hecke field ${\bf{Q}}(\Pi)$ of $\Pi$ with the cyclotomic field ${\bf{Q}}(\xi)$ obtained by adjoining to ${\bf{Q}}$ the values of $\xi$. Moreover, there is a natural action of $\sigma \in G_{\bf{Q}}$ on these values of the form $\sigma \left( \mathcal{L}(1/2, \Pi \otimes \xi) \right) = \mathcal{L}(1/2, \Pi^{\gamma} \otimes \xi^{\sigma}),$ where $\gamma$ denotes the restriction of $\sigma$ to the Hecke field ${\bf{Q}}(\Pi)$, i.e.~$\gamma = \sigma \vert_{ {\bf{Q}}(\Pi)}$. In particular, if we take $\Sigma = \Sigma_{[\xi]}$ to be the set of all embeddings $\sigma: {\bf{Q}}(\Pi, \xi) \rightarrow {\bf{C}}$ which fix ${\bf{Q}}(\Pi)$, then the values in the set \begin{align*} G_{[\xi]}(\Pi) = \left\lbrace \sigma \left( \mathcal{L}(1/2, \Pi \otimes \xi)\right) : \sigma \in \Sigma \right\rbrace \end{align*} are Galois conjugate, and hence $\mathcal{L}(1/2, \Pi \otimes \xi) = 0$ if and only if $\mathcal{L}(1/2, \Pi \otimes \xi^{\sigma}) = 0$ for all $\sigma \in \Sigma_{[\xi]}$. \end{theorem} \begin{proof} See Harder-Raghuram \cite[Theorem 4.1]{HR0} and \cite[$\S$ 7.2 and Theorem 7.40]{HR1}. See also Mahnkopf \cite[Theorem A]{Mahn} with the nonvanishing theorem of Sun \cite{BS}, as well as Harder \cite{Ha} for the case of $n = 2$.
\end{proof} \subsubsection{Galois averages of ring class characters} We now explain how analogues of the theorems of Vatsal \cite{Va}, Cornut-Vatsal \cite{CV}, and Rohrlich \cite{Ro2} can be deduced from nonvanishing estimates for the tame subaverages $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi)$ of $X_{\mathfrak{p}^{\alpha}}(\Pi)$ considered for Corollary \ref{GAmain} above when the dimension $n \geq 2$ is even. \begin{theorem}\label{GAL} Let $n \geq 2$ be an even integer. Let $\Pi = \otimes_v \Pi_v$ be a regular, self-dual, cuspidal, cohomological automorphic representation of $\operatorname{GL}_n({\bf{A}}_F)$. Let us assume additionally that the Hodge-Tate weights of $\Pi$ satisfy the condition described above, so that $\lambda_j \neq \frac{w}{2}$ for each index $1 \leq j \leq n$. Let $K$ be a totally imaginary quadratic extension of $F$, and $\mathfrak{p} \subset \mathcal{O}_F$ a prime ideal coprime to the conductor of $\Pi$ with underlying rational prime $p$. Fix $\chi_0$ a character of the finite torsion subgroup $C_0 = C(\mathcal{O}_{\mathfrak{p}^{\infty}})_{\operatorname{tors}}$ of the profinite limit \begin{align*} C(\mathcal{O}_{\mathfrak{p}^{\infty}}) &= \varprojlim_{\alpha} C(\mathcal{O}_{\mathfrak{p}^{\alpha}}) \cong {\bf{Z}}_p^{\delta_{\mathfrak{p}}} \times C_0, \quad \delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p]. \end{align*} Suppose that \\ \begin{itemize} \item[(i)] The weighted average $X_{\mathfrak{p}^{\alpha}, \chi_0}(\Pi) = \vert P(\mathfrak{p}^{\alpha}, \chi_0) \vert^{-1} \sum_{\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)} L(1/2, \Pi \times \pi(\chi))$ does not vanish for all sufficiently large exponents $\alpha \gg 1$. Equivalently, suppose there exists for each sufficiently large exponent $\alpha \gg 1$ a ring class character $\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)$ for which $L(1/2, \Pi \times \pi(\chi)) \neq 0$.
\\ \item[(ii)] The Hecke field ${\bf{Q}}(\Pi_K) = {\bf{Q}}(\Pi)$ is linearly disjoint over ${\bf{Q}}$ from the cyclotomic field ${\bf{Q}}(\mu_{p^{\infty}})$ obtained by adjoining to ${\bf{Q}}$ all $p$-power roots of unity. \\ \end{itemize} If the residue degree $\delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p]$ of $\mathfrak{p}$ equals one, then for all but finitely many ring class characters $\chi$ of $p$-power conductor, the central value $L(1/2, \Pi \times \pi(\chi))$ does not vanish. More generally, if $F$ is an arbitrary totally real number field with $\mathfrak{p} \subset \mathcal{O}_F$ a prime ideal of residue degree $\delta_{\mathfrak{p}} = [F_{\mathfrak{p}}: {\bf{Q}}_p] \geq 1$, then for a positive proportion of all primitive ring class characters $\chi$ of $\mathfrak{p}$-power conductor, the central value $L(1/2, \Pi \times \pi(\chi))$ does not vanish. That is, for each sufficiently large exponent $\alpha \gg 1$, there exists a character $\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)$ for which the corresponding Galois average $G_{[\chi]}(\Pi)$ does not vanish, and the limit over all such Galois conjugate values as $\alpha \rightarrow \infty$ describes this positive density set. \end{theorem} \begin{proof} We deduce from condition (i) that for each sufficiently large exponent $\alpha \gg 1$, there exists a ring class character $\chi \in P(\mathfrak{p}^{\alpha}, \chi_0)$ for which $L(1/2, \Pi \times \pi(\chi)) = L(1/2, \Pi_K \otimes \chi)$ does not vanish, and hence for which the corresponding Galois average $G_{[\chi]}(\Pi_K)$ does not vanish. We then consider the limit over all exponents $\alpha \gg 1$ of these Galois averages $G_{[\chi]}(\Pi_K)$, and in particular the implications that follow from Theorems \ref{CMrationality} and \ref{EISrationality} above.
In the special case where $\delta_{\mathfrak{p}} = 1$ so that $C(\mathcal{O}_{\mathfrak{p}^{\infty}}) \cong {\bf{Z}}_p \times C_0$, and moreover the simplifying condition (ii) is met, this limit describes all of the ``wild" characters factoring through the quotient $C(\mathcal{O}_{\mathfrak{p}^{\infty}})/C_0 \cong {\bf{Z}}_p$. \end{proof} \end{document}
\begin{document} \title{Remote preparation of arbitrary time-encoded single-photon ebits} \author{Alessandro Zavatta} \affiliation{Istituto Nazionale di Ottica Applicata - CNR, L.go E. Fermi, 6, I-50125, Florence, Italy} \author{Milena D'Angelo} \affiliation{LENS, Via Nello Carrara 1, 50019 Sesto Fiorentino, Florence, Italy} \author{Valentina Parigi} \affiliation {Department of Physics, University of Florence, I-50019 Sesto Fiorentino, Florence, Italy} \author{Marco Bellini} \email{[email protected]} \affiliation{Istituto Nazionale di Ottica Applicata - CNR, L.go E. Fermi, 6, I-50125, Florence, Italy} \affiliation{LENS, Via Nello Carrara 1, 50019 Sesto Fiorentino, Florence, Italy} \date{\today} \begin{abstract} We propose and experimentally verify a novel method for the remote preparation of entangled bits (ebits) made of a single-photon coherently delocalized in two well-separated temporal modes. The proposed scheme represents a remotely tunable source for tailoring arbitrary ebits, whether maximally or non-maximally entangled, which is highly desirable for applications in quantum information technology. The remotely prepared ebit is studied by performing homodyne tomography with an ultra-fast balanced homodyne detection scheme recently developed in our laboratory. \end{abstract} \pacs{03.67.Mn, 42.50.Dv, 03.65.Wj} \maketitle Entanglement, nonlocal correlations, indistinguishable alternatives are, historically, among the most intriguing and appealing topics of quantum mechanics. Besides their relevance in fundamental physics \cite{epr}, these phenomena have attracted much attention due to their usefulness in quantum information technology \cite{qu_infor}. Extravagant but promising protocols such as quantum teleportation, quantum cryptography, and quantum computation have been proposed and experimentally verified (see, e.g., \cite{qu_infor} and references therein). All these schemes were originally based on two-photon entanglement. 
Recently, increasing attention has been given to a new quantum information perspective: the carriers of quantum information are no longer the photons, but rather the field modes ``carrying'' them. Based on this idea, two different approaches have been followed. The first one exploits the entanglement in momentum generated when a single photon impinges on a beam splitter, and is characterized by the state $\alpha |1\rangle_a |0\rangle_b + \beta |0\rangle_a |1\rangle_b$, where $a$ and $b$ denote two distinct spatial modes, and $\alpha$ and $\beta$ are complex amplitudes such that $|\alpha|^2+|\beta|^2=1$ (see, e.g., \cite{ent_1phot,knill} and references therein). The second and more recent road has been traced by Gisin's group \cite{gisin} along the lines of Franson's approach \cite{franson}, and leads to two-photon systems entangled in ultra-short co-propagating temporal modes (or ``time-bins'') \cite{simon}. In this Letter, we propose the first remotely tunable source of arbitrary single-photon entangled states (ebits) in the time domain and experimentally demonstrate its working principle. We start from the spontaneous parametric down conversion (SPDC) signal-idler pairs~\cite{klyshko} generated by a train of phase-locked pump pulses~\cite{spdc_pulse,gisin} and generate indistinguishability between pairs of consecutive non-overlapping temporal modes propagating in the idler channel; this enables us to remotely delocalize the twin signal photon between two identical and well-separated time-bins, thus generating the single-photon temporal ebit: $\alpha |1^{(n)}\rangle |0^{(n+1)} \rangle + \beta |0^{(n)} \rangle |1^{(n+1)} \rangle$, where $n$ denotes the temporal mode associated with the $n^{th}$ pump pulse. Both maximally and non-maximally entangled single-photon states, with any relative phase, can be produced by performing simple and reversible operations in the remote idler channel.
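The entanglement carried by such a single-photon ebit can be made quantitative in a few lines. The following sketch (ours, for illustration only; the amplitudes are chosen arbitrarily, and the two-mode basis is truncated to at most one photon per time-bin) builds the state $\alpha|1,0\rangle + \beta|0,1\rangle$ and computes the entanglement entropy of one time-bin, which equals $1$ ebit for $|\alpha| = |\beta| = 1/\sqrt{2}$:

```python
import numpy as np

# Single-photon time-bin ebit  alpha |1,0> + beta |0,1>  in the basis
# {|0,0>, |0,1>, |1,0>, |1,1>} of two time-bin modes (<=1 photon per mode).
alpha, beta = 1 / np.sqrt(2), np.exp(1j * 0.3) / np.sqrt(2)  # illustrative amplitudes
psi = np.zeros(4, dtype=complex)
psi[2] = alpha  # |1,0>: photon in time-bin n
psi[1] = beta   # |0,1>: photon in time-bin n+1

# Reduced state of the first time-bin: trace out the second mode.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b, a', b')
rho_a = np.einsum('abcb->ac', rho)                   # partial trace over mode b

# Eigenvalues are |beta|^2 and |alpha|^2; entropy is 1 when |alpha| = |beta|.
evals = np.linalg.eigvalsh(rho_a)
entropy = -np.sum(evals * np.log2(evals))
print(np.round(evals, 6), np.round(entropy, 6))
```

Varying $\alpha$ and $\beta$ away from $1/\sqrt{2}$ reproduces the non-maximally entangled case, with the entropy dropping accordingly.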
The proposed scheme may find immediate application in quantum information technology; single-photon ebits have been proven to enable linear optics quantum teleportation \cite{ent_1phot,gisin_tele} and play a central role in linear optics quantum computation \cite{knill,pittman}. Furthermore, time-bin entanglement has been proven suitable for long distance applications \cite{gisin_crypto,gisin_tele}, where the insensitivity to both depolarization and polarization fluctuations becomes a strong requirement. In addition, since the carriers of entanglement are naturally separated (i.e., no further optical element is required) and undergo the same losses, entanglement in time is less sensitive to losses and easier to purify \cite{yamamoto}. The experimental setup is pictured in Fig.~\ref{fig_setup}. The $1.5$ ps pulses at $786$ nm from a mode-locked Ti:Sapphire laser at a repetition rate of $82$ MHz are frequency doubled in a LBO crystal. The resulting pulse train impinges on a non-linear BBO crystal cut for degenerate ($\Omega_s=\Omega_i=\Omega_p/2$) non-collinear type-I SPDC; signal-idler photon pairs centered around $786$ nm are thus generated in two distinct spatial modes. A single mode fiber and a pair of etalon interference filters (F) are employed for spatial and spectral filtering of the idler beam before its entrance in a fiber-coupled piezo-controlled (PZT) Michelson interferometer; a single-photon detector ($D_1$) is inserted at the exit port of the interferometer. The signal beam propagates in free space before being mixed at a 50-50 beam-splitter (BS) with a local oscillator (LO) for high-frequency time-domain balanced homodyne detection (HD) \cite{josa02,marco_pra70_04}. \begin{figure}\label{fig_setup} \end{figure} Spatial and spectral filtering of the idler mode guarantees the conditional projection of the signal photons into a single-photon pure state \cite{Aichele_02,marco_prl90_03,marco_pra69_04}. 
On the other hand, the Michelson interferometer generates indistinguishability between two consecutive temporal modes propagating in the idler channel: an idler photon detected by $D_1$ may have been generated by either the $N^{th}$ or the $(N+1)^{th}$ pump pulse, provided that the time delay ($T$) between the short and long arms of the interferometer is chosen to be approximately equal to the time separation between two consecutive pump pulses ($T_p=12.3$ ns). Notice that the bandpass of the spectral filter in the idler arm ($\sigma_i=50$ GHz) is wide enough so that no first order interference occurs ($\sigma_i \gg \pi / T_p$). Based on a standard quantum mechanical calculation, we find that the combination of indistinguishability and tight filtering in the idler channel allows the conditional remote preparation, in the signal channel, of the temporally delocalized single-photon ebit: \begin{equation}\label{state_fin} |\Psi_s^{\phi_i} \rangle = \frac{1}{\sqrt{2}} (|1^{(n)}, 0^{(n+1)} \rangle + e^{-i \phi_i} |0^{(n)}, 1^{(n+1)} \rangle), \end{equation} with $\phi_i= \Omega_p(T_p-T/2)$. Interestingly, the relative phase $\phi_i$ characterizing the remotely prepared ebit is defined not only by the phase difference introduced by the Michelson interferometer ($\varphi_{int}=\Omega_i T$), but also by the relative phase between consecutive pump pulses ($\varphi_{pump}=\Omega_p T_p$). The result of Eq.~(\ref{state_fin}) represents the temporal counterpart of the spatially delocalized single-photon produced at the output ports of a beam splitter; this case has been studied experimentally by Babichev, {\em et al.} \cite{lvovsky_prl04_2}. However, different from Ref.~\cite{lvovsky_prl04_2}, the entangled state of Eq.~(\ref{state_fin}) has been prepared remotely, without performing any manipulation on the signal photons. 
It is the interferometer in the idler arm that generates indistinguishability between two consecutive non-overlapping temporal modes; this indistinguishability, together with the coherence of both the pump beam and the SPDC process, gives rise, in the signal channel, to the coherent superposition of two previously independent and still temporally separated time-bins. An important advantage of such a remote state preparation scheme is the possibility of generating both maximally and non-maximally entangled single-photon states, with any relative phase $\phi_i$, by performing simple and reversible operations in the idler arm (or on the train of pump pulses). For instance, two of the four Bell states, namely $|\Psi_s^{\pm} \rangle = \frac{1}{\sqrt{2}} (|1^{(n)}, 0^{(n+1)} \rangle \pm |0^{(n)}, 1^{(n+1)} \rangle)$, can be easily generated by manipulating the interferometer. Furthermore, the probability amplitudes characterizing the delocalized single photon may be continuously varied by simply introducing controllable losses in one arm of the interferometer; this only lowers the production rate and does not introduce any impurity into the state generated in the signal channel.
The expected two-mode Wigner function \cite{leonardth} for the delocalized single photon of Eq.~(\ref{state_fin}) is given by: \begin{eqnarray}\label{wigner_fin} & & W^{\phi_i}(x_1,y_1; x_2,y_2)= \frac{1}{2}[8 W^{\phi_i}_{10} (x_1,y_1; x_2,y_2) \; \\ &+& W_1(x_1,y_1) W_0(x_2,y_2) + W_0(x_1,y_1) W_1(x_2,y_2)],\nonumber \end{eqnarray} where $W_1(x,y)=\frac{2}{\pi} e^{-2x^2} e^{-2y^2} (4x^2+4y^2-1)$ and $W_0(x,y)=\frac{2}{\pi} e^{-2x^2} e^{-2y^2}$ are the single-mode Wigner functions associated with a single-photon Fock state and with the vacuum, respectively; on the other hand, $W^{\phi_i}_{10} (x_1,y_1; x_2,y_2)$ is a non-factorable 4-D function which couples the quadratures of two consecutive non-overlapping signal temporal modes: \begin{eqnarray}\label{wigner_acc} W^{\phi_i}_{10} (x_1,y_1; x_2,y_2)&=& W_0(x_1,y_1) W_0(x_2,y_2) \nonumber \\ & \times& (x_1 x_2^{\phi_i}+y_1 y_2^{\phi_i}), \end{eqnarray} where $x_2^{\phi_i}=x_2 \cos \phi_i -y_2 \sin \phi_i$ and $y_2^{\phi_i}=x_2 \sin \phi_i +y_2 \cos \phi_i$. Thus, the Wigner function associated with the delocalized signal photon contains information about the characteristic phase $\phi_i$ introduced through the idler arm. Also notice that, by introducing the phase-dependent correlation quadratures $x_{\pm}^{\phi_i} = (x_1 \pm x_2^{\phi_i})/\sqrt{2}$ and $y_{\pm}^{\phi_i} = (y_1 \pm y_2^{\phi_i})/\sqrt{2}$, the Wigner function of Eq.~(\ref{wigner_fin}) factors: \begin{eqnarray}\label{wigner_finAB} W(x_+^{\phi_i} ,y_+^{\phi_i} ; x_-^{\phi_i} ,y_-^{\phi_i} )= W_1(x_+^{\phi_i} ,y_+^{\phi_i} ) \times W_0(x_-^{\phi_i} ,y_-^{\phi_i} ). \end{eqnarray} This result explicitly indicates that the delocalized single photon cannot be described in terms of the quadratures associated with either of the two distant temporal modes ($1$ and $2$) separately; however, the single photon is well defined in the phase space $(x_+^{\phi_i},y_+^{\phi_i})$, while the vacuum is defined in the phase space $(x_-^{\phi_i},y_-^{\phi_i})$.
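The factorization in Eq.~(\ref{wigner_finAB}) can be checked numerically. The sketch below is our own verification code (the quadrature conventions are taken from the formulas above; the sampled points are arbitrary); it evaluates both sides of the identity at a random phase-space point:

```python
import numpy as np

def W0(x, y):   # vacuum Wigner function
    return (2/np.pi) * np.exp(-2*x**2 - 2*y**2)

def W1(x, y):   # single-photon Fock-state Wigner function
    return W0(x, y) * (4*x**2 + 4*y**2 - 1)

def W_ebit(x1, y1, x2, y2, phi):
    # Two-mode Wigner function of the delocalized single photon, Eq. (wigner_fin)
    x2p = x2*np.cos(phi) - y2*np.sin(phi)
    y2p = x2*np.sin(phi) + y2*np.cos(phi)
    W10 = W0(x1, y1) * W0(x2, y2) * (x1*x2p + y1*y2p)
    return 0.5*(8*W10 + W1(x1, y1)*W0(x2, y2) + W0(x1, y1)*W1(x2, y2))

rng = np.random.default_rng(0)
x1, y1, x2, y2, phi = rng.uniform(-2, 2, 5)
x2p = x2*np.cos(phi) - y2*np.sin(phi)
y2p = x2*np.sin(phi) + y2*np.cos(phi)
xp, yp = (x1 + x2p)/np.sqrt(2), (y1 + y2p)/np.sqrt(2)
xm, ym = (x1 - x2p)/np.sqrt(2), (y1 - y2p)/np.sqrt(2)
# Factorized form of Eq. (wigner_finAB): single photon in (+), vacuum in (-)
assert np.isclose(W_ebit(x1, y1, x2, y2, phi), W1(xp, yp)*W0(xm, ym))
```

The identity holds pointwise because the Gaussian envelope is invariant under the rotation $(x_2,y_2)\mapsto(x_2^{\phi_i},y_2^{\phi_i})$ and under the $45^\circ$ rotation to the $\pm$ quadratures.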
Thus, the 4-D Wigner function reproduces the correlations remotely generated between pairs of well-separated temporal modes in the signal arm. We have experimentally verified the correctness of the above predictions by performing balanced homodyne tomography and reconstructing both the density matrix and the Wigner function of the ebit remotely prepared in the signal channel. The density matrix has been reconstructed directly from the homodyne data by employing the method developed by D'Ariano, {\em et al.} \cite{dariano_den}; its elements have then been used to reconstruct the Wigner function (for more details see our previous works \cite{marco_pra70_04,marco_science_04}). In order to reconstruct the two-mode 4-D Wigner function of Eq.~(\ref{wigner_fin}), one would normally need to measure the joint marginal distribution of the quadratures $X_1(\theta_1)=x_1 \cos \theta_1 - y_1 \sin \theta_1$ and $X_2(\theta_2)=x_2 \cos \theta_2 - y_2 \sin\theta_2$, while varying the phases $\theta_1$ and $\theta_2$ of two LO pulses spatially and temporally matched (i.e., synchronized) to the modes $1$ and $2$, respectively. However, the particular state investigated here is invariant with respect to the global phase, and only the relative phase $\Delta \theta =\theta_1-\theta_2$ needs to be controlled in the experiment \cite{lvovsky_prl04_2,leonardth}. Moreover, the joint marginal distribution is invariant under interchange of $\phi_i$ and $\Delta \theta$. We exploited this property in order to overcome the difficulty connected to the generation of a pair of phase-controllable LO pulses out of the train coming from the laser. Rather than varying the relative LO phase, one may keep $\Delta \theta$ fixed (by just using any two consecutive pulses directly from the mode-locked train) and vary the phase $\phi_i$ by means of the interferometer. 
Although what we actually do in this case is to measure fixed quadratures on the two modes for a varying quantum state $|\Psi_s^{\phi_i} \rangle$, it is immediate to show that this is equivalent to performing a conventional LO phase scan of the fixed quantum state $|\Psi_s^{\phi_i=const} \rangle$. We shall name this technique as ``remote balanced homodyne tomography''. For each value of the interferometer phase $\varphi_{int}$ and upon detection of an idler photon, stable and fast quadrature measurements have been realized on the corresponding pair of consecutive signal time-bins (plus one containing just the vacuum and used for calibration), while keeping both the local oscillator and the homodyne detection apparatus unchanged. A total of $10^6$ quadrature measurements, equally distributed over the range $[0,\pi]$ of $\varphi_{int}$, has been performed on each of the three time-bins. \begin{figure}\label{fig_correlation} \end{figure} The experimental results are reported in Fig.~\ref{fig_correlation}, where we plot the measured values of the quadratures $X_1$ and $X_2$ obtained for three different values of the remote phase $\phi_i$, while leaving $\Delta \theta$ fixed. According to the above reasoning, these results also represent the marginal distributions $p(X_1,X_2,\Delta \theta)$ associated with the ebit of Eq.~(\ref{state_fin}) for $\phi_i=0$, and obtained for three different values of the relative phase $\Delta \theta$. Notice that, while the joint distribution $p(X_1,X_2,\Delta \theta)$ is strongly phase-dependent, the marginal distributions $p(X_1)$ and $p(X_2)$ associated with each temporal mode, separately, are phase-independent. This is consistent with the fact that each mode, separately, is an incoherent statistical mixture of vacuum and single-photon Fock state; however, the pair of modes $1$ and $2$, as a whole, is in the coherent superposition described by Eq.~(\ref{state_fin}), with $\phi_i=0$. 
Figure \ref{fig_correlation} also shows that a single-photon Fock state is defined in the phase space $(x_+^{\phi_i=0},y_+^{\phi_i=0})$, while the vacuum is defined in the phase space $(x_-^{\phi_i=0 },y_-^{\phi_i=0})$, as expected from Eq.~(\ref{wigner_finAB}). Figure~\ref{fig_wigner} (a) reports the reconstructed density matrix: $\hat{\rho}= (1-\eta)|0\rangle \langle0|+\eta|\Psi_s^{\phi_i=0}\rangle \langle \Psi_s^{\phi_i=0}|$, where the overall efficiency $\eta=60.5$\% accounts for both preparation and detection efficiencies; notice that almost no multi-photon contribution exists. From this figure it is also apparent that the vacuum contamination, hence the losses, does not degrade the coherence of the remotely delocalized single-photon; in fact, both the non-diagonal and the diagonal ($|01 \rangle \langle 01|$ and $|10 \rangle \langle 10|$) elements of the reconstructed density matrix are reduced by the same amount. This may be understood as a consequence of the common losses undergone by the pair of entangled time-bins. \begin{figure}\label{fig_wigner} \end{figure} Figures \ref{fig_wigner}(b) and (c) reproduce, respectively, the $(x_1,y_1)$ and $(x_1,x_2)$ sections of the reconstructed 4-D Wigner function $W^{\phi_i=0}(x_1, y_1, x_2, y_2)$. The cross section $(x_1,y_1)$ resembles the standard Wigner function of a single-photon Fock state, but is characterized by a well-defined phase; the existence of this phase is the result of the coherent delocalization of the single-photon between two separate temporal modes. The $(x_1,x_2)$ section of the reconstructed Wigner function explicitly shows the correlation between the quadratures $x_1$ and $x_2$; the non-factorable nature of the delocalized single-photon is here apparent. In summary, the experimental reconstruction of the Wigner function of the conditionally prepared single-photon ebit has enabled us to verify its entangled nature and study its purity. 
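The common-loss argument above can be illustrated with a toy calculation (our own sketch; only the quoted efficiency $\eta=60.5\%$ is taken from the text, and multi-photon terms are neglected, as the reconstruction indicates): mixing the $\phi_i=0$ ebit with vacuum scales the $|01\rangle\langle01|$, $|10\rangle\langle10|$ and off-diagonal one-photon elements by the same factor, leaving the one-photon sector fully coherent:

```python
import numpy as np

eta = 0.605                      # overall efficiency quoted in the text
psi = np.zeros(4)                # basis |00>, |01>, |10>, |11> for the two time-bins
psi[1] = psi[2] = 1/np.sqrt(2)   # (|01> + |10>)/sqrt(2), i.e. phi_i = 0

vac = np.diag([1.0, 0, 0, 0])
rho = (1 - eta)*vac + eta*np.outer(psi, psi)

# Diagonal and off-diagonal one-photon elements are reduced by the same factor eta
print(rho[1, 1], rho[2, 2], rho[1, 2])   # all equal to eta/2
```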
Besides the non-classical behavior typical of single-photon Fock states (negative values around the origin), the reconstructed 4-D Wigner function has been found to be characterized by intriguing phase information and by correlations between well-separated temporal modes, as expected from Eq.~(\ref{wigner_fin}). It may seem counterintuitive that a single photon simultaneously affects two non-overlapping temporal modes or, equivalently, carries a well-defined phase. However, the effect is a direct consequence of the coherent superposition remotely induced between otherwise independent signal time-bins; it can then be understood in terms of quantum entanglement between two co-propagating but distinct temporal modes carrying a single photon. \begin{figure}\label{fig_contour} \end{figure} From the viewpoint of applications, one of the most interesting aspects of the proposed scheme is the dependence of the relative phase characterizing the delocalized (signal) single photon on both the relative phase between pump pulses and the phase delay introduced by the remote Michelson interferometer. Based on this effect, for any fixed value of the remotely controlled phase $\phi_i$, one may generate, in the signal arm, a specific single-photon ebit. This point is pictorially demonstrated by Fig.~\ref{fig_contour}, where we draw the contour plots of the $(x_1,x_2)$ section of the 4-D Wigner function for all possible values of the remotely tunable phase $\phi_i$ characterizing the ebit of Eq.~(\ref{state_fin}).
The Wigner function $W^{\phi_i}(x_1,0;x_2,0)$ associated with each conditionally prepared ebit $|\Psi_s^{\phi_i}\rangle$ reveals a specific correlation between the field quadratures of two distinct signal temporal modes; as the preparation phase $\phi_i$ is changed from $0$ to $\pi$, we observe the transition from correlated to anti-correlated quadratures through a ``saddle'' point at $\phi_i = \pi/2$, where the anti-correlation is transferred into the quadrature space $(x_1,y_2)$ (not shown in the figure). In other words, the proposed scheme can be regarded as a remotely tunable source of arbitrary single-photon ebits; such a source is highly desirable for applications in quantum information technology. This work has been performed in the framework of the ``Spettroscopia laser e ottica quantistica'' project of the Physics Department of the University of Florence and partially supported by Ente Cassa di Risparmio di Firenze and MIUR, under the FIRB contract RBNE01KZ94. M.D. acknowledges the support of Marie Curie RTN. \end{document}
\begin{document} \title[Characterization of singular numbers]{Characterization of singular numbers of products of operators in matrix algebras and finite von Neumann algebras} \author[Bercovici]{H.\ Bercovici$^1$} \address{H.\ Bercovici, Department of Mathematics, Indiana University, Bloomington, IN 47405, USA} \thanks{\footnotesize $^1$Research supported in part by a grant from the NSF} \email{[email protected]} \author[Collins]{B.\ Collins$^2$} \address{B.\ Collins, D\'epartement de Math\'ematique et Statistique, Universit\'e d'Ottawa, 585 King Edward, Ottawa, ON, K1N6N5 Canada, WPI AIMR, Tohoku, Sendai, 980-8577 Japan and CNRS, Institut Camille Jordan Universit\'e Lyon 1, France} \email{[email protected]} \thanks{\footnotesize $^2$Research supported in part by ERA, NSERC and AIMR} \author[Dykema]{K.\ Dykema$^1$} \address{K.\ Dykema, Department of Mathematics, Texas A\&M University, College Station, TX 77843-3368, USA} \email{[email protected]} \author[Li]{W.S.\ Li$^1$} \address{W.S.\ Li, School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332-1060, USA} \email{[email protected]} \subjclass[2000]{15A42, 46L10} \date{June 25, 2013} \begin{abstract} We characterize in terms of inequalities the possible generalized singular numbers of a product $AB$ of operators $A$ and $B$ having given generalized singular numbers, in an arbitrary finite von Neumann algebra. We also solve the analogous problem in matrix algebras $M_n(\Cpx)$, which seems to be new insofar as we do not require $A$ and $B$ to be invertible. \end{abstract} \maketitle \section{Introduction and summary of results} H.\ Weyl~\cite{W12} asked: what are the possible eigenvalues of $A+B$ when $A$ and $B$ are $n\times n$ symmetric matrices whose eigenvalues are given? A complete answer was conjectured by A.\ Horn~\cite{H62}. 
This problem became known as the Horn problem, attracted the attention of many mathematicians, and was finally solved (proving Horn's conjecture) with the critical input of A. A.\ Klyachko~\cite{Kl98} and Knutson and Tao~\cite{KT99}. We refer to \cite{Fulton00} for a description of the results. Later, these results were extended in many directions. Let us mention two that are of interest for this paper: the multiplicative direction, and the infinite dimensional direction. In the multiplicative direction, one problem is to describe the possible singular values of $AB$ when $A$ and $B$ are $n\times n$ symmetric matrices whose eigenvalues are given. We will call this the {\em multiplicative Horn problem}. This problem is fully solved in the case where $A$ and $B$ are invertible matrices, because Klyachko~\cite{Kl00} showed it is equivalent to the additive problem after taking logarithms. We could not find a proof in the literature for the case where $A$ and $B$ need not be invertible, and we solve it here. The solution to this problem is a limit of the invertible case, but the description is perhaps not completely obvious; the proof relies on the solution of the invertible case and on an interpolation result from~\cite{BLT09}. In the infinite dimensional direction (meaning here, in infinite dimensional von Neumann algebras that have normal, faithful traces; thus, in so-called finite von Neumann algebras), it was proved in~\cite{BCDLT10} that the solution to the additive Horn problem essentially survives, after natural adaptation to the infinite dimensional setting. The main result of this paper is to settle the multiplicative Horn problem in the setting of finite von Neumann algebras. Similar to the additive case, the result is an infinite dimensional modification of the finite dimensional case.
In von Neumann algebras that have the Connes embedding property (namely, those that embed in an ultrapower $R^\omega$ of the hyperfinite II$_1$-factors, that is, where finite tuples of elements can be approximated in mixed moments by complex matrices) it is not so difficult to see, nor is it surprising, that the additive and multiplicative Horn problems have solutions described by the obvious infinite dimensional modifications of the finite dimensional solutions (see~\cite{BL01}). Thus, our main result shows that, as in the additive case, the solution of the multiplicative Horn problem in arbitrary finite von Neumann algebras is equivalent to its counterpart in finite von Neumann algebras that have the Connes embedding property. It is not known whether all finite von Neumann algebras with separable predual have the Connes embedding property. The main theorems of this paper are Theorems~\ref{thm:KKt} and \ref{thm:MMt}. The paper is organized as follows: Section~\ref{sec:prelimMats} contains preliminaries about Horn triples and matrix algebras; Section~\ref{sec:singMat} proves inequalities involving singular numbers in matrix algebras; Section~\ref{sec:non-inv} gives the solution of the multiplicative Horn problem for all (including non-invertible) matrices; Section~\ref{sec:preliminaries} contains some preliminaries about finite von Neumann algebras and Horn triples; Section~\ref{sec:sing} proves inequalities involving singular numbers of products in finite von Neumann algebras; Section~\ref{sec:finvN} gives the solution of the multiplicative Horn problem in finite von Neumann algebras. 
\section{Preliminaries on Horn triples and matrix algebras} \label{sec:prelimMats} \subsection{Horn triples and additive Horn inequalities} \label{subsec:HornTrips} Given integers $1\le r\le n$ and a triple $(I,J,K)$ of subsets of $\{1,\ldots,n\}$, each of cardinality $r$, write \begin{align*} I&=\{i(1)<i(2)<\cdots<i(r)\}, \\ J&=\{j(1)<j(2)<\cdots<j(r)\}, \\ K&=\{k(1)<k(2)<\cdots<k(r)\}. \end{align*} If the identity \begin{equation}\label{eq:IJKident} \sum_{\ell=1}^r\big((i(\ell)-\ell)+(j(\ell)-\ell)+(k(\ell)-\ell)\big)=2r(n-r), \end{equation} holds, then one associates to $(I,J,K)$ the Littlewood--Richardson coefficient $c_{IJK}$, which is a nonnegative integer. (See, e.g., Fulton~\cite{Fulton00} for more on this, though note that his $K$ in triples $(I,J,K)$ corresponds to our $\overline K$, under the operation defined below by~\eqref{eq:Ibar}.) Supposing $A,B$ and $C$ are $n\times n$ Hermitian matrices with eigenvalues (listed according to multiplicity and in decreasing order) $\alpha=(\alpha_1,\ldots,\alpha_n)$ for $A$, $\beta=(\beta_1,\ldots,\beta_n)$ for $B$ and $\gamma=(\gamma_1,\ldots,\gamma_n)$ for $C$, the corresponding Horn inequality is \begin{equation}\label{eq:Horn} \sum_{i\in I}\alpha_i+\sum_{j\in J}\beta_j+\sum_{k\in K}\gamma_k\le0. \end{equation} Horn's conjecture is equivalent to the assertion that the set of all triples of eigenvalue sequences $(\alpha,\beta,\gamma)$ arising from $n\times n$ Hermitian matrices $A$, $B$ and $C$ subject to $A+B+C=0$ equals the set of triples $(\alpha,\beta,\gamma)$ such that the equality \begin{equation}\label{eq:Tr=} \sum_{i=1}^n(\alpha_i+\beta_i+\gamma_i)=0 \end{equation} holds and the inequality~\eqref{eq:Horn} holds for all $(I,J,K)$ such that $c_{IJK}>0$. Belkale~\cite{Be01} showed that the inequalities with $c_{IJK}>1$ are redundant, so that Horn's conjecture concerns the convex body determined by the inequalities~\eqref{eq:Horn} for all triples $(I,J,K)$ with $c_{IJK}=1$.
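For concreteness, the $r=1$ triples $(\{i\},\{j\},\{k\})$ with $i+j+k=2n+1$ satisfy~\eqref{eq:IJKident}, and the corresponding instances of~\eqref{eq:Tr=} and~\eqref{eq:Horn} reduce to Weyl-type eigenvalue inequalities. The following sketch (our own numerical check, not part of the paper; the random Hermitian matrices are arbitrary) verifies them for a triple with $A+B+C=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_herm(n):
    X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

A, B = rand_herm(n), rand_herm(n)
C = -(A + B)
alpha = np.sort(np.linalg.eigvalsh(A))[::-1]   # decreasing eigenvalue lists
beta  = np.sort(np.linalg.eigvalsh(B))[::-1]
gamma = np.sort(np.linalg.eigvalsh(C))[::-1]

# Trace identity (eq. Tr=): the eigenvalue sums vanish since A + B + C = 0
assert abs(alpha.sum() + beta.sum() + gamma.sum()) < 1e-10

# r = 1 Horn inequalities (eq. Horn): alpha_i + beta_j + gamma_k <= 0 when i + j + k = 2n + 1
worst = max(alpha[i-1] + beta[j-1] + gamma[2*n - i - j]
            for i in range(1, n+1) for j in range(1, n+1)
            if 1 <= 2*n + 1 - i - j <= n)
assert worst <= 1e-10
```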
For future use, we let $H(n,r)$ be the set of all triples $(I,J,K)$ as described above, satisfying~\eqref{eq:IJKident} and with $c_{IJK}=1$. For convenience, we also declare that $(\varnothing,\varnothing,\varnothing)$ is a Horn triple and let $H(n,0)=\{(\varnothing,\varnothing,\varnothing)\}$. Given $n\in\Nats$ and a subset $K$ of $\{1,\ldots,n\}$, following \cite{Fulton00} we let \begin{equation}\label{eq:Ibar} \Kbar=\{n+1-k\mid k\in K\}. \end{equation} Letting $D=-C$ and letting $\rho$ be the eigenvalue sequence of $D$, we have $\gamma_k=\rho_{n+1-k}$ and from~\eqref{eq:Horn} and~\eqref{eq:Tr=} we get the two inequalities \begin{gather} \sum_{i\in I}\alpha_i+\sum_{j\in J}\beta_j\le\sum_{k\in\Kbar}\rho_k \label{eq:AddHorn<} \\ \sum_{i\in I^c}\alpha_i+\sum_{j\in J^c}\beta_j\ge\sum_{k\in\Kbar^c}\rho_k, \label{eq:AddHorn>} \end{gather} where in~\eqref{eq:AddHorn>} the complements $I^c$, $J^c$ and $\Kbar^c$ are taken in $\{1,\ldots,n\}$. These may be called the additive Horn inequalities for $D=A+B$. \subsection{The intersection property in matrix algebras} \label{subsec:IPmat} A {\em full flag} in $\Cpx^n$ is an increasing sequence $E=\{E_1,\ldots,E_n\}$ of subspaces of $\Cpx^n$ such that $\dim(E_j)=j$ for all $j$. Given $n\in\Nats$ and a set $I=\{i(1),\ldots,i(r)\}$ with $1\le i(1)<i(2)<\cdots<i(r)\le n$, and given a full flag $E$ in $\Cpx^n$, we consider the corresponding {\em Schubert variety} $S(E,I)$. It is the set of all subspaces $V\subseteq\Cpx^n$ of dimension $r$ such that for all $\ell\in\{1,\ldots,r\}$, \[ \dim(V\cap E_{i(\ell)})\ge\ell. \] Given subsets $I$, $J$ and $K$ of $\{1,\ldots,n\}$, each of cardinality $r$, we say the triple $(I,J,K)$ has the {\em intersection property} in $M_n(\Cpx)$ if, whenever $E$, $F$ and $G$ are full flags in $\Cpx^n$, the intersection $S(E,I)\cap S(F,J)\cap S(G,K)$ is nonempty. It is well known that every triple $(I,J,K)\in\bigcup_{r=0}^n H(n,r)$ has the intersection property.
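As a small illustration of the dimension conditions defining $S(E,I)$ (our own example, using the standard coordinate flag and a coordinate subspace; none of this appears in the paper), one can verify the intersection counts numerically:

```python
import numpy as np

n, I = 5, [2, 4]                                 # r = 2, I = {i(1), i(2)} = {2, 4}
E = [np.eye(n)[:, :k] for k in range(1, n+1)]    # standard full flag: E_k = span(e_1..e_k)

# V = span{e_2, e_4} belongs to the Schubert variety S(E, I)
V = np.eye(n)[:, [1, 3]]

def dim_intersection(U, W):
    # dim(col U  ∩  col W) = rank U + rank W - rank [U | W]
    r = np.linalg.matrix_rank
    return r(U) + r(W) - r(np.hstack([U, W]))

for ell, i in enumerate(I, start=1):
    assert dim_intersection(V, E[i-1]) >= ell    # dim(V ∩ E_{i(ell)}) >= ell
```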
In fact, a construction in~\cite{BCDLT10} provides for every $(I,J,K)\in H(n,r)$ an algorithm to find the projection onto a subspace belonging to $S(E,I)\cap S(F,J)\cap S(G,K)$ as a lattice polynomial of the projections onto the subspaces in the flags $E$, $F$ and $G$, provided that the latter are in general position. \section{Inequalities for singular numbers in matrix algebras} \label{sec:singMat} In this section, we prove inequalities involving singular numbers of a product of matrices (Proposition~\ref{prop:ABCMn}). This can be viewed as a template for the proof of an analogous inequality for singular numbers in finite von Neumann algebras that is found in Section~\ref{sec:sing}, but the result is also used in Section~\ref{sec:non-inv} to help solve the multiplicative Horn problem for (non-invertible) matrices. Recall that the singular numbers of an $n\times n$ complex matrix $A\in M_n(\Cpx)$ are $\|A\|=s_1(A)\ge s_2(A)\ge\cdots\ge s_n(A)\ge0$, where \begin{equation}\label{eq:defsingMn} s_j(A)=\inf\|A(1-P)\|, \end{equation} with the infimum over all self-adjoint projections $P$ of rank $j-1$. In other words, singular numbers of $A$ are the eigenvalues of $|A|=(A^*A)^{1/2}$, listed according to multiplicity and in decreasing order. Note also, for $A\in M_n(\Cpx)$, we have \begin{equation}\label{eq:absdet} |\tdet(A)|=\tdet(|A|)=\prod_{j=1}^ns_j(A) \end{equation} and if $A$ is invertible, then \[ |\tdet(A)|=\exp\big(\Tr(\log|A|)\big), \] where $\Tr$ is the unnormalized trace. \begin{lemma}\label{lem:detsnum} Suppose $A\in M_n(\Cpx)$. Let $v_1,\ldots,v_n$ be an orthonormal set of eigenvectors of $|A|$ with corresponding eigenvalues $s_1(A),\ldots,s_n(A)$, respectively. Consider the full flag $E_1\subsetneq\cdots\subsetneq E_n$ given by \[ E_k=\lspan\{v_1,v_2,\ldots,v_k\}. 
\] Let $I=\{i(1),\ldots,i(r)\}\subseteq\{1,\ldots,n\}$, with $i(1)<i(2)<\cdots<i(r)$, suppose $V\subseteq\Cpx^n$ is a subspace of dimension $r$ with \[ \dim(V\cap E_{i(\ell)})\ge\ell\quad(1\le\ell\le r) \] and let $P\in M_n(\Cpx)$ be the self-adjoint projection onto $V$. Let $Q$ be the self-adjoint projection onto an $r$-dimensional subspace containing $A(V)$. Let $W\in M_n(\Cpx)$ be any partial isometry such that $WW^*=P$ and $W^*W=Q$. Let $\tdet_P$ denote the determinant on the algebra $PM_n(\Cpx)P$, which is unitarily isomorphic to the $r\times r$ matrices. Then \[ \big|\tdet_P(WAP)\big|\ge s_{i(1)}(A)s_{i(2)}(A)\cdots s_{i(r)}(A). \] \end{lemma} \begin{proof} Since $|\tdet_P(WAP)|=\prod_{\ell=1}^rs_\ell(WAP)$, it will suffice to show \begin{equation}\label{eq:sineq} s_\ell(WAP)\ge s_{i(\ell)}(A)\quad(\ell\in\{1,\ldots,r\}). \end{equation} Suppose $F\subseteq V$ is a subspace of dimension $\ell-1$. Since its orthocomplement $V\ominus F$ has dimension $r-\ell+1$ and since the dimension of $V\cap E_{i(\ell)}$ is at least $\ell$, there is a unit vector $x$ in $(V\ominus F)\cap E_{i(\ell)}$. Then $\|WAPx\|=\|Ax\|\ge s_{i(\ell)}(A)$. This proves~\eqref{eq:sineq}. \end{proof} \begin{notation} We call a flag $E=(E_1,E_2,\ldots,E_n)$ as in Lemma~\ref{lem:detsnum} an {\em eigenvector flag} of $|A|$. \end{notation} Though the following proposition follows from Klyachko's Theorem (given as Theorem~\ref{thm:klyachko} below), we will give a proof here because it provides a model for the proof of the finite von Neumann algebra version, Theorem~\ref{thm:ABC}. \begin{prop}\label{prop:ABCMn} Let $A,B,C\in M_n(\Cpx)$ be such that $ABC=1_n$. Suppose $(I,J,K)\in H(n,r)$ for some $0\le r\le n$. Then \begin{equation}\label{eq:ABCMn} \prod_{\ell=1}^rs_{i(\ell)}(A)s_{j(\ell)}(B)s_{k(\ell)}(C)\le 1. \end{equation} \end{prop} \begin{proof} If $r=0$ then $(I,J,K)=(\varnothing,\varnothing,\varnothing)$ and~\eqref{eq:ABCMn} is by definition an equality. So we may suppose $r\ge1$.
Let $E$, $F$ and $G$ be eigenvector flags of $|A|$, $|B|$ and $|C|$, respectively. We choose a subspace $V\subseteq\Cpx^n$ of dimension $r$ such that \[ \dim(BC(V)\cap E_{i(\ell)})\ge\ell,\quad\dim(C(V)\cap F_{j(\ell)})\ge\ell,\quad\dim(V\cap G_{k(\ell)})\ge\ell, \quad(1\le\ell\le r). \] This can be done by applying the intersection property to the flags $C^{-1}B^{-1}(E)$, $C^{-1}(F)$ and $G$. Let $P$, $Q$ and $R$ be the self-adjoint projections onto $V$, $C(V)$ and $BC(V)$, respectively. Let $W_{Q,P}$ and $W_{R,P}$ be partial isometries such that $W_{Q,P}^*W_{Q,P}=P=W_{R,P}^*W_{R,P}$, $W_{Q,P}W_{Q,P}^*=Q$ and $W_{R,P}W_{R,P}^*=R$. Note that, since $ABC=1_n$, we have $AR=PAR$. Then applying Lemma~\ref{lem:detsnum} three times, we have \begin{align*} 1&=\tdet_P(ABCP)=\tdet_P(PARBQCP) \\[1ex] &=\tdet_P(PAW_{R,P})\tdet_P(W_{R,P}^*BW_{Q,P})\tdet_P(W_{Q,P}^*CP) \\[1ex] &=\tdet_P(W_{R,P}AR)\tdet_Q(W_{Q,P}W_{R,P}^*BQ)\tdet_P(W_{Q,P}^*CP) \\ &\ge\prod_{\ell=1}^rs_{i(\ell)}(A)s_{j(\ell)}(B)s_{k(\ell)}(C), \end{align*} as required. \end{proof} \begin{cor}\label{cor:MnInvIneqs} If $A,B\in M_n(\Cpx)$ are invertible and if $D=AB$, then for every $(I,J,K)\in\bigcup_{r=0}^n H(n,r)$, we have \begin{gather} \sum_{i\in I}\log s_i(A)+\sum_{j\in J}\log s_j(B)\le\sum_{k\in\Kbar}\log s_k(D) \label{eq:Mnsineq<} \\ \sum_{k\in\Kbar^c}\log s_k(D)\le \sum_{i\in I^c}\log s_i(A)+\sum_{j\in J^c}\log s_j(B), \label{eq:Mnsineq>} \end{gather} where in~\eqref{eq:Mnsineq>}, $\Kbar^c$, $I^c$ and $J^c$ indicate the respective complements in $\{1,\ldots,n\}$. \end{cor} \begin{proof} To obtain~\eqref{eq:Mnsineq<}, we apply Proposition~\ref{prop:ABCMn} with $C=D^{-1}$ and observe that we have $s_k(C)=s_{n+1-k}(D)^{-1}$. Now~\eqref{eq:Mnsineq>} follows from~\eqref{eq:Mnsineq<}, using~\eqref{eq:absdet} and $\det(D)=\det(A)\det(B)$. \end{proof} \begin{thm}\label{thm:MnIneqs} Let $A,B\in M_n(\Cpx)$ and let $D=AB$. 
Then for every $(I,J,K)\in\bigcup_{r=0}^n H(n,r)$, the inequalities~\eqref{eq:Mnsineq<} and~\eqref{eq:Mnsineq>} hold, where the value $-\infty$ is allowed. \end{thm} \begin{proof} Writing $A=U|A|$ and $B=|B^*|V$ for unitaries $U$ and $V$, we have $D=U|A||B^*|V$ and $s_i(A)=s_i(|A|)$, $s_j(B)=s_j(|B^*|)$ and $s_k(D)=s_k(|A||B^*|)$. Thus, we may without loss of generality assume $A\ge0$ and $B\ge0$. Let $\eps_1,\eps_2>0$ and let $D(\eps_1,\eps_2)=(A+\eps_11)(B+\eps_21)$. Then $s_i(A+\eps_11)=\eps_1+s_i(A)$ and $s_j(B+\eps_21)=\eps_2+s_j(B)$, and from Corollary~\ref{cor:MnInvIneqs} we get \begin{gather} \sum_{i\in I}\log(\eps_1+s_i(A))+\sum_{j\in J}\log(\eps_2+s_j(B))\le\sum_{k\in\Kbar}\log s_k(D(\eps_1,\eps_2)) \label{eq:sumlog<Deps} \\ \sum_{k\in\Kbar^c}\log s_k(D(\eps_1,\eps_2))\le\sum_{i\in I^c}\log(\eps_1+s_i(A))+\sum_{j\in J^c}\log(\eps_2+s_j(B)). \label{eq:sumlogDeps<} \end{gather} Letting $\eps_1,\eps_2\to0$, we have $s_k(D(\eps_1,\eps_2))\to s_k(D)$ for each $k$, and from~\eqref{eq:sumlog<Deps} and~\eqref{eq:sumlogDeps<} we obtain the desired inequalities~\eqref{eq:Mnsineq<} and~\eqref{eq:Mnsineq>}, with $-\infty$ as a possible value. \end{proof} \section{The multiplicative Horn problem for non-invertible matrices} \label{sec:non-inv} We let $\Reals^{n\downarrow}$ and, respectively, $\mathbb{R}_+^{n\downarrow}$ and $\mathbb{R}_+^{*n\downarrow}$ denote the sets of nonincreasing sequences of length $n$ of real numbers and, respectively, of nonnegative real numbers and of strictly positive real numbers. We will need the following theorem, which follows from the solution of the additive Horn problem and a result of Klyachko~\cite{Kl00} (see Theorem 2 of~\cite{Sp05}). Again, $H(n,r)$ is the set of Horn triples with Littlewood--Richardson coefficient equal to $1$, as described in~\S\ref{subsec:HornTrips}.
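As a numerical sanity check of~\eqref{eq:Mnsineq<} (our own sketch, not part of the proofs), the $r=1$ Horn triples give $s_i(A)\,s_j(B)\le s_{i+j-n}(AB)$ for $i+j\ge n+1$, which is easily tested on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))

sA = np.linalg.svd(A, compute_uv=False)        # s_1 >= ... >= s_n
sB = np.linalg.svd(B, compute_uv=False)
sD = np.linalg.svd(A @ B, compute_uv=False)

# Determinant identity (eq. absdet): |det(AB)| = |det A| |det B|
assert np.isclose(np.prod(sD), np.prod(sA)*np.prod(sB))

# r = 1 instances of eq. (Mnsineq<), with kbar = i + j - n
for i in range(1, n+1):
    for j in range(n+1-i, n+1):
        kbar = i + j - n
        assert sA[i-1]*sB[j-1] <= sD[kbar-1]*(1 + 1e-9)
```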
\begin{thm}\label{thm:klyachko} For sequences \[ \lambda = (\lambda_1, \ldots , \lambda_n),\quad \mu = (\mu_1, \ldots , \mu_n),\quad \gamma = (\gamma_1, \ldots , \gamma_n) \] in $\mathbb{R}_+^{*n\downarrow}$, there exist matrices $A,B,C\in M_n(\Cpx)$ such that $ABC=1_n$ and having singular values $\lambda$, $\mu$ and $\gamma$, respectively, if and only if the following hold: \begin{equation}\label{eq:detID} \prod_{i=1}^n\lambda_i\mu_i\gamma_i=1 \end{equation} and \begin{equation}\label{eq:multHornIneq} \forall(I,J,K)\in\bigcup_{r=0}^n H(n,r),\quad \prod_{i\in I}\lambda_i\prod_{j\in J}\mu_j\prod_{k\in K}\gamma_k\leq 1. \end{equation} \end{thm} This theorem fully solves the multiplicative Horn problem in $M_n(\Cpx)$ in the case where $A,B,C$ are invertible. In particular, the multiplicative Horn problem for invertibles is equivalent to the additive Horn problem by taking logarithms. In this section we will deal with the non-invertible case, namely: given $\lambda,\mu\in\mathbb{R}_+^{n\downarrow}$, what are the possible singular values $\nu\in\mathbb{R}_+^{n\downarrow}$ of $AB$ when $A$ and $B$ in $M_n(\Cpx)$ have respective singular values $\lambda$ and $\mu$? We will denote the set of all such $\nu$ by $K_{\lambda,\mu}$ and call it the {\em multiplicative Horn body}. It is equal to the set of all singular values of matrices $\diag(\lambda)U\diag(\mu)$, where $U$ ranges over the $n\times n$ unitary group. To summarize, our goal in this section is to describe the set \[ K_{\lambda,\mu}:=\{\nu\in\mathbb{R}_+^{n\downarrow}\mid \nu = \text{ singular values of } \diag(\lambda)U\diag(\mu),\,U\in\mathbb{U}_n\}. \] As one might expect, the answer is a continuous limit of the invertible case, though with one subtlety in its description. Note that, since the singular values of a matrix are continuous with respect to the operator norm and since the unitary group is compact, $K_{\lambda,\mu}$ is a compact subset of $\mathbb{R}_+^{n\downarrow}$.
See~\eqref{eq:Ibar} for the operation $I\mapsto\Ibar$. Given $\lambda,\mu\in \mathbb{R}_+^{n\downarrow}$, we let \begin{align} \Kt_{\lambda,\mu }=\bigg\{\nu\in \mathbb{R}_+^{n\downarrow}\;\bigg|\;&\forall(I,J,K)\in\bigcup_{r=0}^n H(n,r), \label{eq:Ktdef} \\ &\prod_{i\in I}\lambda_i\prod_{j\in J}\mu_j\leq \prod_{k\in \overline K}\nu_k \label{eq:Ktdef<} \\ &\text{ and } \prod_{i\in I^c}\lambda_i\prod_{j\in J^c}\mu_j\geq \prod_{k\in\Kbar^c}\nu_k\bigg\}. \label{eq:Ktdef>} \end{align} \begin{lemma}\label{lem:invertibleCase} If $\lambda,\mu \in \mathbb{R}_+^{*n\downarrow}$, then $K_{\lambda,\mu}=\Kt_{\lambda,\mu}$. \end{lemma} \begin{proof} This follows easily from Theorem~\ref{thm:klyachko}. Indeed, $K_{\lambda,\mu}$ is the set of all $\nu$ arising from sequences $\gamma$ satisfying the conditions~\eqref{eq:detID} and~\eqref{eq:multHornIneq} of Theorem~\ref{thm:klyachko}, where the correspondence between $\nu$ and $\gamma$ is given by $\nu_i=\gamma_{n+1-i}^{-1}$. Thus, $K_{\lambda,\mu}$ is the set of all $\nu\in\mathbb{R}_+^{n\downarrow}$ such that \begin{equation}\label{eq:nudetID} \prod_1^n\nu_i=\prod_1^n\lambda_i\mu_i \end{equation} and \begin{equation}\label{eq:numultHornIneq} \forall(I,J,K)\in\bigcup_{r=0}^n H(n,r),\quad \prod_{i\in I}\lambda_i\prod_{j\in J}\mu_j\leq \prod_{k\in \overline K}\nu_k. \end{equation} If $\nu\in\Kt_{\lambda,\mu}$, then~\eqref{eq:numultHornIneq} holds by definition. To see that~\eqref{eq:nudetID} holds, we apply the inequality~\eqref{eq:Ktdef<} for the Horn triple $(\{1,\ldots ,n\},\{1,\ldots ,n\},\{1,\ldots ,n\} )\in H(n,n)$ and the inequality~\eqref{eq:Ktdef>} for the Horn triple $(\varnothing,\varnothing,\varnothing)\in H(n,0)$. This yields $\Kt_{\lambda,\mu}\subseteq K_{\lambda,\mu}$. For the reverse inclusion, assume $\nu\in K_{\lambda,\mu}$. Then~\eqref{eq:Ktdef<} is~\eqref{eq:numultHornIneq}, while~\eqref{eq:Ktdef>} follows from~\eqref{eq:Ktdef<} and the determinant identity~\eqref{eq:nudetID}. Thus, $\nu\in\Kt_{\lambda,\mu}$.
\end{proof} We are now able to prove the main result of this section: \begin{thm}\label{thm:KKt} For all $\lambda,\mu \in\mathbb{R}_+^{n\downarrow}$, we have $K_{\lambda,\mu}=\Kt_{\lambda,\mu}$. \end{thm} \begin{proof} The inclusion $K_{\lambda,\mu}\subseteq\Kt_{\lambda,\mu}$ follows from Theorem~\ref{thm:MnIneqs} by exponentiating. It will be convenient to have the notation, applicable for any $\lambda,\mu\in\mathbb{R}_+^{n\downarrow}$, that $\Kt_{\lambda,\mu}^+$ is the set of all $\nu\in\Reals_+^{n\downarrow}$ so that~\eqref{eq:Ktdef<} holds for all Horn triples, and $\Kt_{\lambda,\mu}^-$ is the set of all $\nu\in\Reals_+^{n\downarrow}$ so that~\eqref{eq:Ktdef>} holds for all Horn triples. Thus, \[ \Kt_{\lambda,\mu}=\Kt_{\lambda,\mu}^-\cap\Kt_{\lambda,\mu}^+. \] For $\eps\in\Reals_+$ and $\nu\in\Reals_+^{n\downarrow}$, let $\nu+\eps\in\Reals_+^{n\downarrow}$ be obtained from $\nu$ by adding $\eps$ to all $n$ coordinates. Similarly, if $\nu\in\Reals_+^{*n\downarrow}$, then $\log\nu\in\Reals^{n\downarrow}$ is obtained by taking the logarithm of each component. Now, to show $\Kt_{\lambda,\mu}\subseteq K_{\lambda,\mu}$, let $\nu\in\Kt_{\lambda,\mu}$ and let $\eps>0$. Then $\nu+\eps\in\Kt_{\lambda,\mu}^+$ and for all $(I,J,K)\in\bigcup_{r=0}^n H(n,r)$, the inequality \[ \prod_{i\in I}\lambda_i\prod_{j\in J}\mu_j\leq \prod_{k\in \overline K}(\nu+\eps)_k \] holds with strict inequality, except when $(I,J,K)=(\varnothing,\varnothing,\varnothing)$, when it is the equality $1=1$. Similarly, we have $\nu\in\Kt_{\lambda+\eps,\mu+\eps}^-$ and all of the inequalities \[ \prod_{i\in I^c}(\lambda+\eps)_i\prod_{j\in J^c}(\mu+\eps)_j\geq \prod_{k\in\Kbar^c}\nu_k \] hold with strict inequality, except when $(I^c,J^c,K^c)=(\varnothing,\varnothing,\varnothing)$, when it is the equality $1=1$. Therefore, there is $\delta=\delta(\eps)$ satisfying $0<\delta<\eps$ such that \[ \nu+\eps\in\Kt_{\lambda+\delta,\mu+\delta}^+\quad\text{ and }\quad \nu+\delta\in\Kt_{\lambda+\eps,\mu+\eps}^-. 
\] Now $(\log(\lambda+\delta),\log(\mu+\delta),\log(\nu+\eps))$ satisfies the additive Horn inequalities~\eqref{eq:AddHorn<} while $(\log(\lambda+\eps),\log(\mu+\eps),\log(\nu+\delta))$ satisfies the additive Horn inequalities~\eqref{eq:AddHorn>}. By the interpolation result, Proposition~3.2 of~\cite{BLT09}, it follows that there is $(\alpha,\beta,\rho)\in(\Reals^{n\downarrow})^3$ satisfying all of the inequalities for the additive Horn problem and such that componentwise we have \begin{equation}\label{eq:abcineqs} \begin{gathered} \log(\lambda+\delta)\le\alpha\le\log(\lambda+\eps), \\ \log(\mu+\delta)\le\beta\le\log(\mu+\eps), \\ \log(\nu+\delta)\le\rho\le\log(\nu+\eps). \end{gathered} \end{equation} Thus, letting \[ \lambdat_\eps=\exp(\alpha),\quad\mut_\eps=\exp(\beta),\quad\nut_\eps=\exp(\rho), \] we have $\nut_\eps\in\Kt_{\lambdat_\eps,\mut_\eps}$ and, by Lemma~\ref{lem:invertibleCase}, $\nut_\eps\in K_{\lambdat_\eps,\mut_\eps}$. So there is an $n\times n$ unitary matrix $U_\eps$ so that the singular numbers of $\diag(\lambdat_\eps)U_\eps\diag(\mut_\eps)$ are precisely $\nut_\eps$. Of course, the inequalities~\eqref{eq:abcineqs} give, componentwise, \[ \lambda+\delta(\eps)\le\lambdat_\eps\le\lambda+\eps,\quad \mu+\delta(\eps)\le\mut_\eps\le\mu+\eps,\quad \nu+\delta(\eps)\le\nut_\eps\le\nu+\eps. \] Since the singular numbers of an $n\times n$ matrix are continuous with respect to the operator norm, choosing by compactness a sequence $\eps(k)$ tending to zero so that $U_{\eps(k)}$ converges as $k\to\infty$ in norm to a unitary matrix $U$ and taking the limit as $k\to\infty$ we obtain that the singular numbers of $\diag(\lambda)U\diag(\mu)$ are precisely $\nu$. Thus, $\nu\in K_{\lambda,\mu}$, as required. \end{proof} \section{Preliminaries in finite von Neumann algebras} \label{sec:preliminaries} Throughout this section and the next, $\Mcal$ will denote a finite von Neumann algebra with a normal, faithful, tracial state $\tau$. 
We will also assume $\Mcal$ is diffuse, meaning that it has no minimal projections. Now we recall some facts about finite von Neumann algebras and introduce notation that is used in the remainder of the paper. Some of this notation (for example, related to flags and singular numbers) is in conflict with the notation used for matrix algebras in previous sections. \subsection{Projections and flags} We let $\Proj(\Mcal)$ denote the set of (self-adjoint) projections in $\Mcal$ and, for $P\ne0$ such a projection, the cut-down von Neumann algebra $P\Mcal P$ will usually be taken with the tracial state $\tau(\cdot)/\tau(P)$. Recall that for $P,Q\in\Proj(\Mcal)$, their greatest lower bound $P\wedge Q\in\Proj(\Mcal)$ has trace satisfying \[ \tau(P\wedge Q)\ge\tau(P)+\tau(Q)-1. \] By relativising, we obtain the following easy but useful result. \begin{lemma}\label{lem:EFP} Let $E,F,P\in\Proj(\Mcal)$ with $E\le F$. Then \begin{equation}\label{eq:EFP} \tau(P\wedge E)\ge\tau(P\wedge F)-\tau(F-E). \end{equation} \end{lemma} \begin{proof} Working in $F\Mcal F$, since $P\wedge E =(P\wedge F)\wedge E$, we have \[ \tau^{(F\Mcal F)}(P\wedge E)\ge\tau^{(F\Mcal F)}(P\wedge F)+\tau^{(F\Mcal F)}(E)-1 \] and multiplying by $\tau(F)$ yields~\eqref{eq:EFP}. \end{proof} A {\em flag} $E$ in $\Mcal$ will be a function $E:D\to\Proj(\Mcal)$ for some subset $D\subseteq[0,1]$ so that for all $s,t\in D$ with $s\le t$, we have $\tau(E(t))=t$ and $E(s)\le E(t)$. A {\em full flag} is a flag whose domain $D$ is $[0,1]$. If $T\in\Mcal$ and $P\in\Proj(\Mcal)$, then we let $T\cdot P$ denote the range projection of $TP$. The following properties are well known and easy to prove (see, for example, \S2.2 of \cite{CD09} for this and more). \begin{prop} Let $S,T\in\Mcal$ and let $P,Q\in\Proj(\Mcal)$.
Then \begin{enumerate}[(i)] \item $S\cdot(T\cdot P)=(ST)\cdot P$; \item in general, $\tau(T\cdot P)\le\tau(P)$, while if $T$ is invertible or, more generally, if $T$ has zero kernel, then $\tau(T\cdot P)=\tau(P)$; \item $T\cdot(P\wedge Q)=(T\cdot P)\wedge(T\cdot Q)$. \end{enumerate} \end{prop} \subsection{Singular numbers and eigenvalue functions} The singular numbers in the setting of a finite von Neumann algebra were introduced by von Neumann. See Fack and Kosaki~\cite{FaKo86} for an excellent presentation and development of this theory. For an element $A\in\Mcal$, the singular number function $s_A:[0,1]\to[0,\infty)$ is the right-continuous, nonincreasing function defined by \begin{equation}\label{eq:snums} s_A(t)=\inf\{\|A(1-Q)\|\mid Q\in\Proj(\Mcal),\,\tau(Q)\le t\}. \end{equation} We may also write $s_A^{(\Mcal)}(t)$ if we want to indicate the von Neumann algebra. Thus, for example, if $P\in\Proj(\Mcal)$ and $A\in P\Mcal P$, then we have \begin{equation}\label{eq:sPMP} s_A^{(\Mcal)}(t)=\begin{cases} s_A^{(P\Mcal P)}(t/\tau(P)),&0\le t<\tau(P), \\ 0,&\tau(P)\le t\le1. \end{cases} \end{equation} Note that we have $s_A=s_{A^*}=s_{|A|}$ where $|A|=(A^*A)^{1/2}$. For a self-adjoint $T\in\Mcal$, its {\em spectral distribution} $\mu_T$ is the Borel probability measure supported on the spectrum of $T$ whose moments agree with those of $T$ (with respect to $\tau$). It is also equal to $\tau$ composed with the projection valued spectral measure of $T$. The {\em eigenvalue function} of $T$ is the nonincreasing, right-continuous function $\lambda_T:[0,1)\to\Reals$ given by \[ \lambda_T(t)=\sup\{x\in\Reals\mid\mu_T((x,\infty))>t\}. \] There is a full flag $E_T$ of projections in $\Mcal$, that can be obtained by starting with a chain of spectral projections of $T$ and extending in the case that the distribution $\mu_T$ has atoms, so that \[ T=\int_0^1\lambda_T(t)\,dE_T(t). 
\] We will call $E_T$ a {\em spectral flag} of $T$, and the possible nonuniqueness of spectral flags will not concern us. We note that, for $T\ge0$, we have $\lambda_T=s_T$ and, for all $t\in[0,1]$, \begin{gather} \|T(1-E_T(t))\|=s_T(t) \label{eq:Esnum} \\[1ex] TE_T(t)\ge s_T(t)E_T(t). \label{eq:sEunder} \end{gather} \subsection{Fuglede--Kadison determinant} The Fuglede--Kadison determinant~\cite{FuKa52} is the function $\Delta:\Mcal\to[0,\infty)$ defined by $\Delta(T)=\Delta(|T|)=\exp\tau(\log|T|)$ for $T$ invertible and $\Delta(T)=\lim_{\eps\to0^+}\Delta(|T|+\eps)$ for $T$ non-invertible, and it satisfies, for all $S,T\in\Mcal$, $\Delta(ST)=\Delta(S)\Delta(T)$. We may also write $\Delta^{(\Mcal)}(T)$ for $\Delta(T)$, to emphasize the von Neumann algebra (and, implicitly, the trace) with respect to which the Fuglede--Kadison determinant is taken. \subsection{The intersection property in II$_1$-factors} \label{subsec:IP} Given $n\in\Nats$ and a set $I=\{i(1),\ldots,i(r)\}$ with $1\le i(1)<i(2)<\cdots<i(r)\le n$, and given a flag $E$ in $\Mcal$ whose domain includes the rational numbers \begin{equation}\label{eq:ratsn} \left\{0,\frac1n,\frac2n,\ldots,\frac{n-1}n,1\right\}, \end{equation} we consider the corresponding {\em Schubert variety} $S(E,I)$. It is the set of all projections $P\in\Mcal$ satisfying $\tau(P)=r/n$ and, for all $\ell\in\{1,\ldots,r\}$, \[ \tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right)\ge\frac\ell n. \] Given subsets $I$, $J$ and $K$ of $\{1,\ldots,n\}$, each of cardinality $r$, we say the triple $(I,J,K)$ has the {\em intersection property} in $\Mcal$ if, whenever $E$, $F$ and $G$ are flags in $\Mcal$ each of whose domains includes the rational numbers~\eqref{eq:ratsn}, there is a projection $P\in S(E,I)\cap S(F,J)\cap S(G,K)$. A main result of~\cite{BCDLT10} is that every $(I,J,K)\in H(n,r)$ has the intersection property in every II$_1$-factor $\Mcal$.
\section{Singular numbers of products in finite von Neumann algebras} \label{sec:sing} In this section, we prove inequalities for singular numbers of products in finite von Neumann algebras that generalize those proved for matrix algebras in Section~\ref{sec:singMat} (and whose proofs are also analogous). Note that our results and techniques overlap with and extend those of Harada~\cite{H07}. For the next two lemmas, we fix $n\in\Nats$ and $r\in\{1,\ldots,n\}$ and $I=\{i(1),\ldots,i(r)\}\subseteq\{1,\ldots,n\}$ such that $i(j)<i(j+1)$. Consider the corresponding union of subintervals of $[0,1]$ \begin{equation}\label{eq:FI} F_I=\bigcup_{\ell=1}^r\left[\frac{i(\ell)-1}n,\frac{i(\ell)}n\right]\subseteq[0,1] \end{equation} and let $m$ denote Lebesgue measure on $[0,1]$. \begin{lemma}\label{lem:PEx} Suppose $P\in\Proj(\Mcal)$ and $E$ is a full flag in $\Mcal$ and \begin{equation}\label{eq:PFil} \forall\ell\in\{1,\ldots,r\},\quad\tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right)\ge\frac\ell n. \end{equation} Then \begin{equation}\label{eq:tauPEx} \forall x\in[0,1],\quad\tau(P\wedge E(x))\ge m([0,x]\cap F_I). \end{equation} \end{lemma} \begin{proof} Since the left hand side is nondecreasing in $x$ and the right hand side is constant when $x$ varies over intervals disjoint from $F_I$, we need only prove~\eqref{eq:tauPEx} for $x\in\big[\frac{i(\ell)-1}n,\frac{i(\ell)}n\big]$, $\ell\in\{1,\ldots,r\}$. For such $x$, using Lemma~\ref{lem:EFP} and the hypothesis~\eqref{eq:PFil}, we get \begin{align*} \tau(P\wedge E(x))&\ge\tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right) -\tau\left(E\left(\frac{i(\ell)}n\right)-E(x)\right)\ge\frac\ell n-\left(\frac{i(\ell)}n-x\right) \\[1ex] &=m\left(\left[0,\frac{i(\ell)}n\right]\cap F_I\right)-m\left(\left[x,\frac{i(\ell)}n\right]\cap F_I\right)=m([0,x]\cap F_I). \end{align*} \end{proof} \begin{lemma}\label{lem:FKPAP} Let $A\in\Mcal$, and let $E=E_{|A|}$ be a spectral flag of $|A|=(A^*A)^{1/2}$.
Let $P\in\Proj(\Mcal)$ be such that $\tau(P)=r/n$ and \[ \forall\ell\in\{1,\ldots,r\},\quad\tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right)\ge\frac\ell n. \] Let $Q=A\cdot P$ and let $W\in\Mcal$ be any partial isometry such that $W^*W\ge Q$ and $WW^*=P$. Then \begin{equation}\label{eq:FKPAP} \log\Delta^{(P\Mcal P)}(WAP)\ge\frac1{\tau(P)}\int_{F_I}\log(s_A(t))\,dt. \end{equation} \end{lemma} \begin{proof} By re-indexing the integrand and using~\eqref{eq:sPMP}, we get \begin{align*} \log\Delta^{(P\Mcal P)}(WAP)&=\int_{[0,1]}\log s^{(P\Mcal P)}_{WAP}(x)\,dx \\ &=\frac1{m(F_I)}\int_{F_I}\log s^{(P\Mcal P)}_{WAP}\left(\frac{m([0,t]\cap F_I)}{m(F_I)}\right)\,dt. \end{align*} We will now prove that \begin{equation}\label{eq:sWAP} s^{(P\Mcal P)}_{WAP}\left(\frac{m([0,t]\cap F_I)}{m(F_I)}\right)\ge s_A^{(\Mcal)}(t) \end{equation} holds for almost all $t\in F_I$, which will yield the desired inequality~\eqref{eq:FKPAP}. We take $t\in F_I\backslash\partial F_I$ and we will show that for all $\eps>0$, we have \begin{equation}\label{eq:sWAPeps} s^{(P\Mcal P)}_{WAP}\left(\frac{m([0,t]\cap F_I)}{m(F_I)}\right)\ge s_A^{(\Mcal)}(t+\eps), \end{equation} which by right-continuity of $s_A$ will imply~\eqref{eq:sWAP}. Suppose $Q_0\le P$ is a projection and $\tau(Q_0)\le m([0,t]\cap F_I)$. By Lemma~\ref{lem:PEx}, we have \[ \tau(P\wedge E(t+\eps))\ge m([0,t+\eps]\cap F_I), \] and so, by Lemma~\ref{lem:EFP}, we have \begin{multline*} \tau(E(t+\eps)\wedge(P-Q_0))\ge\tau(E(t+\eps)\wedge P)-\tau(Q_0) \\ \ge m([0,t+\eps]\cap F_I)-m([0,t]\cap F_I)=m([t,t+\eps]\cap F_I)>0. \end{multline*} By applying the operator $WA(P-Q_0)$ to a vector belonging to the range of the projection $E(t+\eps)\wedge(P-Q_0)$ and using~\eqref{eq:sEunder}, we obtain $\|WA(P-Q_0)\|\ge s_A(t+\eps)$.
Hence, we have \begin{multline*} s^{(P\Mcal P)}_{WAP}\left(\frac{m([0,t]\cap F_I)}{m(F_I)}\right) \\[1ex] =\inf\{\|WA(P-Q)\|\mid Q\in\Proj(P\Mcal P),\,\tau(Q)\le m([0,t]\cap F_I)\} \\ \ge s_A(t+\eps) \end{multline*} and~\eqref{eq:sWAPeps} is proved. \end{proof} The next results apply to Horn triples $(I,J,K)\in H(n,r)$ as described in~\S\ref{subsec:HornTrips}, using that the triple has the intersection property in every II$_1$-factor, as described in~\S\ref{subsec:IP}. \begin{thm}\label{thm:ABC} Let $A,B,C\in\Mcal$ be such that $ABC=1$. For $r,n\in\Nats$ with $r\le n$, and for $(I,J,K)\in H(n,r)$, we have \begin{equation}\label{eq:intlogs} \int_{F_I}\log s_A +\int_{F_J}\log s_B +\int_{F_K}\log s_C\le0, \end{equation} where $F_I$, $F_J$ and $F_K$ are the corresponding unions of subintervals of $[0,1]$ as defined in~\eqref{eq:FI} and where the integrals are with respect to Lebesgue measure. \end{thm} \begin{proof} Since every finite von Neumann algebra with specified normal, faithful, tracial state can be embedded into a II$_1$-factor in a trace-preserving way (see, e.g., Prop.~8.1 of~\cite{BCDLT10}), we may without loss of generality assume that $\Mcal$ is a II$_1$-factor. Consider the full flags \[ E=(BC)^{-1}\cdot E_{|A|},\quad F=C^{-1}\cdot E_{|B|},\quad G=E_{|C|}. \] Since $(I,J,K)$ has the intersection property in $\Mcal$, there is a projection $P\in\Mcal$ with $\tau(P)=r/n$ such that for all $\ell\in\{1,\ldots,r\}$ we have \[ \tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right)\ge\frac\ell n,\quad \tau\left(P\wedge F\left(\frac{j(\ell)}n\right)\right)\ge\frac\ell n,\quad \tau\left(P\wedge G\left(\frac{k(\ell)}n\right)\right)\ge\frac\ell n. \] Let $Q=C\cdot P$ and $R=BC\cdot P$. Since $C$ and $B$ are invertible, these projections have the same trace as $P$ and we can choose partial isometries $W_{Q,P},W_{R,P}\in\Mcal$ such that \[ W_{Q,P}^*W_{Q,P}=P,\quad W_{Q,P}W_{Q,P}^*=Q,\qquad W_{R,P}^*W_{R,P}=P,\quad W_{R,P}W_{R,P}^*=R.
\] Then we have \begin{align*} 1&=\Delta^{(P\Mcal P)}(PABCP) =\Delta^{(P\Mcal P)}(PARBQCP) \\ &=\Delta^{(P\Mcal P)}(PAW_{R,P}W_{R,P}^*BW_{Q,P}W_{Q,P}^*CP) \\ &=\Delta^{(P\Mcal P)}(PAW_{R,P})\,\Delta^{(P\Mcal P)}(W_{R,P}^*BW_{Q,P})\,\Delta^{(P\Mcal P)}(W_{Q,P}^*CP). \end{align*} So \begin{multline}\label{eq:ABClogsum} 0=\log\Delta^{(R\Mcal R)}(W_{R,P}AR)+\log\Delta^{(Q\Mcal Q)}(W_{Q,P}W_{R,P}^*BQ) \\ +\log\Delta^{(P\Mcal P)}(W_{Q,P}^*CP). \end{multline} But for all $\ell\in\{1,\ldots,r\}$ we have \[ R\wedge E_{|A|}\left(\frac{i(\ell)}n\right)=(BC\cdot P)\wedge\left(BC\cdot E\left(\frac{i(\ell)}n\right)\right) =BC\cdot\left(P\wedge E\left(\frac{i(\ell)}n\right)\right), \] so \[ \tau\left(R\wedge E_{|A|}\left(\frac{i(\ell)}n\right)\right)=\tau\left(P\wedge E\left(\frac{i(\ell)}n\right)\right)\ge\frac\ell n \] and, similarly, \[ \tau\left(Q\wedge E_{|B|}\left(\frac{j(\ell)}n\right)\right)=\tau\left(P\wedge F\left(\frac{j(\ell)}n\right)\right)\ge\frac\ell n \] and, since $E_{|C|}=G$ we have \[ \tau\left(P\wedge E_{|C|}\left(\frac{k(\ell)}n\right)\right)\ge\frac\ell n. \] Using~\eqref{eq:ABClogsum} and applying Lemma~\ref{lem:FKPAP} three times yields the desired inequality~\eqref{eq:intlogs}. \end{proof} See~\eqref{eq:Ibar} for the definition of the operation $K\mapsto\Kbar$. \begin{cor}\label{cor:ABinv} Let $A,B\in\Mcal$ be invertible and let $D=AB$. With $(I,J,K)$ as in Theorem~\ref{thm:ABC}, we have \begin{gather} \int_{F_I}\log s_A+\int_{F_J}\log s_B\le\int_{F_\Kbar}\log s_D \label{eq:intlog<D} \\ \int_{(F_\Kbar)^c}\log s_D\le\int_{(F_I)^c}\log s_A+\int_{(F_J)^c}\log s_B, \label{eq:intlogD<} \end{gather} where the complements in~\eqref{eq:intlogD<} are taken in $[0,1]$. \end{cor} \begin{proof} Applying Theorem~\ref{thm:ABC} with $C=D^{-1}$ and using that the equality \[ \log s_D(t)=-\log s_C(1-t) \] holds for almost every $t\in[0,1]$ (namely, at points of continuity) yields~\eqref{eq:intlog<D}.
Now using $\log\Delta(T)=\int_{[0,1]}\log s_T$ and $\Delta(D)=\Delta(A)\Delta(B)$, we get~\eqref{eq:intlogD<} from~\eqref{eq:intlog<D}. \end{proof} \begin{thm}\label{thm:AB} Let $A,B\in\Mcal$ and let $D=AB$. Then for all $r,n\in\Nats$ and all triples $(I,J,K)$ as in Theorem~\ref{thm:ABC}, the inequalities~\eqref{eq:intlog<D} and~\eqref{eq:intlogD<} hold, with $-\infty$ allowed for values. \end{thm} \begin{proof} Writing $A=U|A|$ and $B=|B^*|V$ for unitaries $U$ and $V$, we have $D=U|A||B^*|V$ and $s_A=s_{|A|}$, $s_B=s_{|B^*|}$ and $s_{D}=s_{|A||B^*|}$. Thus, we may without loss of generality assume $A\ge0$ and $B\ge0$. Let $\eps_1,\eps_2>0$ and let $D(\eps_1,\eps_2)=(A+\eps_11)(B+\eps_21)$. Then $s_{A+\eps_11}=\eps_1+s_A$ and $s_{B+\eps_21}=\eps_2+s_B$ and from Corollary~\ref{cor:ABinv}, we get \begin{gather} \int_{F_I}\log(\eps_1+s_A)+\int_{F_J}\log(\eps_2+s_B)\le\int_{F_\Kbar}\log s_{D(\eps_1,\eps_2)} \label{eq:intlog<Deps} \\ \int_{(F_\Kbar)^c}\log s_{D(\eps_1,\eps_2)}\le\int_{(F_I)^c}\log(\eps_1+s_A)+\int_{(F_J)^c}\log(\eps_2+s_B). \label{eq:intlogDeps<} \end{gather} Since for any projection $P\in\Mcal$, and for $0<\eps_1'\le\eps_1$ we have \[ \|D(\eps_1,\eps_2)(1-P)\|^2=\|(1-P)(B+\eps_2)(A^2+2\eps_1A+\eps_1^2)(B+\eps_2)(1-P)\| \] and \[ A^2+2\eps_1'A+(\eps_1')^2\le A^2+2\eps_1A+\eps_1^2, \] we get $\|D(\eps'_1,\eps_2)(1-P)\|\le\|D(\eps_1,\eps_2)(1-P)\|$ and, from the definition~\eqref{eq:snums} of singular numbers, we obtain that $s_{D(\eps_1,\eps_2)}$ is nondecreasing in $\eps_1$. Moreover, since $D(\eps_1,\eps_2)$ and $(B+\eps_21)(A+\eps_11)$ have the same singular numbers, we similarly obtain that $s_{D(\eps_1,\eps_2)}$ is nondecreasing in $\eps_2$. Now letting $\eps_1,\eps_2\to0$, so that $s_{D(\eps_1,\eps_2)}$ decreases monotonically, and using the monotone convergence theorem in~\eqref{eq:intlog<Deps} and~\eqref{eq:intlogDeps<}, we obtain the desired inequalities~\eqref{eq:intlog<D} and~\eqref{eq:intlogD<} for our $A$, $B$ and $D$.
\end{proof} \section{The multiplicative Horn problem in finite von Neumann algebras} \label{sec:finvN} In this section we solve the multiplicative Horn problem in finite von Neumann algebras. The solution is analogous to the result in matrix algebras that was proved in Section~\ref{sec:non-inv}. Let $\SVF$ denote the set of all real-valued, right-continuous, nonnegative, nonincreasing functions on $[0,1]$. These are the functions that can be singular number functions of elements in finite von Neumann algebras. For a Horn triple $(I,J,K)\in H(n,r)$, we will make use of the subsets $F_I$ of $[0,1]$ introduced in~\eqref{eq:FI} at the beginning of Section~\ref{sec:sing}. Let $f,g\in\SVF$. Let $M_{f,g}$ be the set of all $h\in\SVF$ such that there exists a diffuse, finite von Neumann algebra $\Mcal$ with normal, faithful tracial state $\tau$ and there exist $A,B\in\Mcal$ yielding singular number functions $s_A=f$, $s_B=g$ and $s_{AB}=h$. Our goal is to describe the set $M_{f,g}$. Mimicking the finite-dimensional case, we define \begin{align*} \Mt_{f,g}=\bigg\{h\in\SVF\;\bigg|\;&\forall n\in\Nats\;\forall(I,J,K)\in\bigcup_{r=1}^n H(n,r), \\ &\int_{F_I}\log f+\int_{F_J}\log g\leq \int_{F_\Kbar}\log h \\ &\text{and } \int_{(F_I)^c}\log f+\int_{(F_J)^c}\log g\geq \int_{(F_\Kbar)^c}\log h \bigg\}, \end{align*} where of course, the definitions of $F_I$, {\em etcetera}, in the above inequalities depend on the value of $n$ under consideration, where $-\infty$ is allowed for values of the integrals and where the complements are taken in $[0,1]$. Our main result is: \begin{thm}\label{thm:MMt} For all $f,g\in\SVF$, we have $M_{f,g}=\Mt_{f,g}$. \end{thm} \begin{proof} The inclusion $M_{f,g}\subseteq\Mt_{f,g}$ follows from Theorem \ref{thm:AB}. To show the reverse inclusion, let $f,g\in\SVF$ and let $h\in\Mt_{f,g}$.
For a function $s\in\SVF$ and $n\in\Nats$, we let $s^{(n)}\in\Reals_+^{n\downarrow}$ be the sequence whose $j$-th element is \[ s^{(n)}_j=\exp\left(n\int_{(j-1)/n}^{j/n}\log s(x)\,dx\right). \] Now from $h\in\Mt_{f,g}$, we easily verify $h^{(n)}\in\Kt_{f^{(n)},g^{(n)}}$, and using Theorem~\ref{thm:KKt}, we deduce that there are matrices $A_n,B_n\in M_n(\Cpx)$ with the property that the singular values of $A_n$ and, respectively, of $B_n$ and $A_nB_n$ are the sequences $f^{(n)}$ and, respectively, $g^{(n)}$ and $h^{(n)}$. Now, letting $\omega$ be a free ultrafilter on $\Nats$, letting $\Mcal=\prod_\omega M_n(\Cpx)$ be the corresponding ultraproduct of matrix algebras and letting $A,B\in\Mcal$ be the elements represented by the sequences $(A_n)_{n=1}^\infty$ and $(B_n)_{n=1}^\infty$, respectively, we have that the singular value functions of $A$, $B$ and $AB$ are, respectively, $f$, $g$ and $h$. Thus, $h\in M_{f,g}$. \end{proof} Let us conclude by expanding on the remark made in the penultimate paragraph of the introduction about Connes' embedding property. For $f,g\in\SVF$, let $M_{f,g}^{\text{emb}}$ be the set of all $h\in\SVF$ such that there exist $A$ and $B$ in an ultrapower $R^\omega$ of the hyperfinite II$_1$-factor, with singular value functions $s_A=f$, $s_B=g$ and $s_{AB}=h$. We clearly have $M_{f,g}^{\text{emb}}\subseteq M_{f,g}$, while the proof of the above theorem actually showed $\Mt_{f,g}\subseteq M_{f,g}^{\text{emb}}$. Thus, we get: \begin{cor} For any $f,g\in\SVF$, we have $M_{f,g}=M_{f,g}^{\text{emb}}$. \end{cor} \begin{bibdiv} \begin{biblist} \bib{Be01}{article}{ author={Belkale, Prakash}, title={Local systems on $\mathbb{P}^1-S$ for $S$ a finite set}, journal={Compos.
Math.}, volume={129}, year={2001}, pages={67--86} } \bib{BCDLT10}{article}{ author={Bercovici, Hari}, author={Collins, Beno\^it}, author={Dykema, Ken}, author={Li, Wing Suet}, author={Timotin, Dan}, title={Intersections of Schubert varieties and eigenvalue inequalities in an arbitrary finite factor}, journal={J. Funct. Anal.}, year={2010}, volume={258}, pages={1579--1627} } \bib{BL01}{article}{ author={Bercovici, Hari}, author={Li, Wing-Suet}, title={Inequalities for eigenvalues of sums in a von Neumann algebra}, conference={ title={Recent advances in operator theory and related topics}, address={Szeged}, date={1999}, }, book={ series={Oper. Theory Adv. Appl.}, volume={127}, year={2001}, publisher={Birkh\"auser}, address={Basel}, }, pages={113--126} } \bib{BLT09}{article}{ author={Bercovici, H.}, author={Li, W. S.}, author={Timotin, D.}, title={The Horn conjecture for sums of compact selfadjoint operators}, journal={Amer. J. Math.}, volume={131}, date={2009}, pages={1543--1567}, } \bib{CD09}{article}{ author={Collins, Beno\^it}, author={Dykema, Ken}, title={On a reduction procedure for Horn inequalities in finite von Neumann algebras}, journal={Oper. Matrices}, volume={3}, year={2009}, pages={1--40}, } \bib{FaKo86}{article}{ author={Fack, Thierry}, author={Kosaki, Hideki}, title={Generalized $s$-numbers of $\tau$-measurable operators}, journal={Pacific J. Math.}, volume={123}, date={1986}, pages={269--300}, } \bib{FuKa52}{article}{ author={Fuglede, Bent}, author={Kadison, Richard V.}, title={Determinant theory in finite factors}, journal={Ann. of Math.}, volume={55}, year={1952}, pages={520--530} } \bib{Fulton00}{article}{ author={Fulton, William}, title={Eigenvalues, invariant factors, highest weights, and Schubert calculus}, journal={Bull. Amer. Math. Soc.
(N.S.)}, volume={37}, year={2000}, pages={209--249} } \bib{H07}{article}{ author={Harada, Tetsuo}, title={Multiplicative versions of Klyachko's theorem in finite factors}, journal={Linear Algebra Appl.}, volume={425}, date={2007}, pages={102--108}, } \bib{H62}{article}{ author={Horn, Alfred}, title={Eigenvalues of sums of Hermitian matrices}, journal={Pacific J. Math.}, volume={12}, year={1962}, pages={225--241} } \bib{Kl98}{article}{ author={Klyachko, Alexander A.}, title={Stable bundles, representation theory and Hermitian operators}, journal={Selecta Math. (N.S.)}, volume={4}, year={1998}, pages={419--445} } \bib{Kl00}{article}{ author={Klyachko, Alexander A.}, title={Random walks on symmetric spaces and inequalities for matrix spectra}, journal={Linear Algebra Appl.}, volume={319}, year={2000}, pages={37--59} } \bib{KT99}{article}{ author={Knutson, Allen}, author={Tao, Terence}, title={The honeycomb model of GL$_{n}(\mathbb{C})$ tensor products. I. Proof of the saturation conjecture}, journal={J. Amer. Math. Soc.}, volume={12}, year={1999}, pages={1055--1090} } \bib{Sp05}{article}{ author={Speyer, David E.}, title={Horn's problem, Vinnikov curves, and the hive cone}, journal={Duke Math. J.}, volume={127}, date={2005}, pages={395--427}, } \bib{W12}{article}{ author={Weyl, H.}, title={Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen}, journal={Math. Ann.}, volume={71}, year={1912}, pages={441--479}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{Quantum Friction in Arbitrarily Directed Motion} \author{J. Klatt} \affiliation{Physikalisches Institut, Albert-Ludwigs-Universit\"at Freiburg, Hermann-Herder-Str. 4, D-79104 Freiburg, Germany} \author{M. Bel\'{e}n Far\'ias} \affiliation{Departamento de F\'isica, FCEyN, UBA and IFIBA, CONICET, Pabell\'on 1, Ciudad Universitaria, 1428 Buenos Aires, Argentina} \author{D.A.R. Dalvit} \affiliation{Theoretical Division, MS B213, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA} \author{S.Y. Buhmann} \affiliation{Physikalisches Institut, Albert-Ludwigs-Universit\"at Freiburg, Hermann-Herder-Str. 4, D-79104 Freiburg, Germany}\affiliation{Freiburg Institute for Advanced Studies, Albert-Ludwigs-Universit\"{a}t Freiburg, Albertstr. 19, D-79104 Freiburg i. Br., Germany} \date{\today} \begin{abstract} Quantum friction, the electromagnetic fluctuation-induced frictional force decelerating an atom which moves past a macroscopic dielectric body, has so far eluded experimental evidence despite more than three decades of theoretical studies. Inspired by the recent finding that dynamical corrections to such an atom's internal dynamics are enhanced by one order of magnitude for vertical motion -- compared to the paradigmatic setup of parallel motion -- we generalize quantum friction calculations to arbitrary angles between the atom's direction of motion and the surface in front of which it moves. Motivated by the disagreement between quantum friction calculations based on Markovian quantum master equations and time-dependent perturbation theory, we carry out our derivations of the quantum frictional force for arbitrary angles employing both methods and compare them.
\end{abstract} \maketitle \begin{section}{Introduction} The successful measurement of the Casimir-Polder force by, e.g., Sukenik \textit{et~al.} \cite{Sukenik1993} is one of the rather impressive investigations of the quantum vacuum and of how it may be shaped through boundary conditions imposed by macroscopic bodies. As predicted by Casimir and Polder \cite{Casimir1948}, this force between a neutral atom and a polarizable macroscopic object arises from a position dependence in the atom's Lamb shift introduced by the presence of the body. A few decades after these early works, theoreticians have come to agree that there must exist dynamical corrections to both the position-dependent Lamb shift and the resulting Casimir-Polder force when the atom moves parallel to the nearby surface \cite{Ferrell1980,Pendry1997,Volokitin2007}. Various theoretical methods have been employed, including time-dependent perturbation theory \cite{Barton2010,Barton2012}, quantum master equations \cite{Scheel2009}, generalized non-equilibrium fluctuation-dissipation relations \cite{Intravaia2014, Intravaia2016}, and influence-functional methods \cite{Farias2016}. Experimentally, however, such velocity-dependent corrections have eluded confirmation, as they are extremely small and short-ranged. In a recent work \cite{Klatt2016}, dynamical corrections to the internal dynamics, i.e., atomic level shifts and rates, were shown to be significantly larger when the atom moves vertically, rather than parallel to the macroscopic surface. The question thus arises whether a similar qualitative change may be found for the quantum frictional force in the case of perpendicular motion -- potentially facilitating the experimental accessibility of the quantum friction force in such vertical setups. Hence, in this work, we generalize dynamical Casimir-Polder calculations from the paradigmatic scenario of parallel motion to arbitrarily directed motion.
Since the absence of an experimental benchmark has fostered the co-existence of alternative theoretical approaches to quantum friction -- some of them even giving contradictory predictions -- in this work we will carry out our derivations in a two-fold manner, using both Markovian quantum master equations and time-dependent perturbation theory. These two methods are known to lead to different results for the leading order in relative velocity of the quantum friction force for parallel motion, given by a linear \cite{Scheel2009} and cubic \cite{Intravaia2015} dependence on velocity $\boldsymbol{v}$, respectively (we note that for certain exactly solvable models quantum friction for parallel motion has been shown to be cubic in velocity on asymptotic timescales \cite{Intravaia2016}). We find this discrepancy to prevail also for arbitrary angles between the atom's direction of motion and the surface. For the parallel setup -- where it is possible to define a non-equilibrium steady state (NESS) in which the atom moves at constant velocity at a fixed distance from the plate -- generalized non-equilibrium fluctuation-dissipation relations can be employed in order to infer the dipole correlations entering the quantum friction force \cite{Intravaia2016}. Using these relations, a recent study suggests that, in the asymptotic large-time limit, the aforementioned discrepancies in the expressions for that force stem from different forms of the dipole power spectra implied by either approach \cite{Intravaia2016b}. The vanishing of the linear order in relative velocity seen in the fluctuation-relation approach is intimately linked to the symmetry of the atomic response function in the frequency domain. This symmetry -- more precisely, the Schwarz reflection principle required of any physical response function -- is \textit{necessarily} broken when the Markov approximation is applied, resulting in a non-vanishing linear order of the quantum friction force in the atomic velocity.
However, for intermediate timescales -- that is, the realm of time-dependent perturbation theory and Markovian quantum master equations -- and moreover for the case of arbitrarily directed motion studied in this work, it is not possible to define such a NESS. In the former case, because stationarity is not yet reached, and in the latter because stationarity \textit{cannot} be reached at all since the atom is continuously approaching (or leaving) the surface. Hence, it is not possible to obtain an exact expression for the dipole spectrum, and therefore it is unclear which of the two approaches for the quantum friction force acting upon an arbitrarily moving atom is appropriate, as they may simply apply in different temporal regimes. We will study the paradigmatic setup of an atom moving next to a planar macroscopic body at zero temperature. The atom is neutral and possesses neither an electric nor a magnetic permanent dipole moment. The charge distribution provided by its constituents, however, is subject to quantum fluctuations which give rise to the atom's electric polarizability. The electric polarizability of the macroscopic body, which is assumed to be a metal or dielectric, is taken into account effectively through its permittivity $\varepsilon(\omega)$. The field induced by the atomic zero-point dipole fluctuations polarizes the body and thereby leads to the build-up of a mirror charge within it. That is, the field reflected by the surface is equal to a field induced by a fluctuating charge distribution, which for a perfect conductor is identical to the atom's but of opposite sign and mirrored at the interface. As the atom's dipole oscillates, the mirror atom's dipole does as well. Since both oscillations are correlated, a finite dipole--dipole interaction emerges and hence an attractive force -- the Casimir-Polder force. The direction of the force is given by the axis between the atom and its mirror image.
For an atom at rest in front of a plane surface, it points perpendicularly towards that surface and is nothing but the Casimir-Polder force $\F_\T{CP}$ \cite{Casimir1948}. As shown in Fig. \ref{fig:setup}, in the case of a moving atom, fields induced by mirror images at previous times reach the actual atom and superpose in its current position, resulting in a tilted force. In addition to the ever-present Casimir-Polder force perpendicular to the surface, there then exists a finite force component pointing in the direction opposite to the atom's motion, causing it to decelerate. This force is what is called quantum friction in our context. \begin{figure} \caption{Atom moving next to a surface with velocity $\V$ at zero temperature. Its electric dipole $\di(t)$ fluctuates about zero, resulting in emission of virtual photons. The atom may, e.g., have emitted a photon at time $t-\tau$ which after reflection is reabsorbed at time $t$. This interaction leads to the Casimir-Polder force $\F_\T{CP}$.\label{fig:setup}} \end{figure} The paper is organized as follows. In Section II the common ground for both the Markovian and the perturbative approaches to quantum friction is laid out. Subsequently, in Section III, we develop the Markovian quantum master equation approach to quantum friction in arbitrarily directed motion, while in Section IV we contrast this calculation with time-dependent perturbation theory. Lastly, Section V contains our conclusions. \end{section} \begin{section}{Setup}\label{sec:setup} As mentioned in the Introduction, we here consider an atom moving in the proximity of a homogeneous, dielectric half space, $z<0$, while the atom itself is placed in vacuum at zero temperature.
The Hamiltonian of the entire system consists of atomic, field and interaction contributions, \begin{align}\label{eq:hamiltonian} \hat H= &\hat H_\T{A}+\hat H_\T{F}+\hat H_\T{AF}\\\label{eq:atom}= &\frac{\hat \Pa_\TA^2}{2m_\TA}+\sum_{n=0}^\infty E_n \hat A_{nn}\\\label{eq:field} &+\hb\sum_{\sigma=\T{e},\T{m}}\int\!\! d\R\id\int_0^\infty\!\!\id\id d\w\w\,\hat \f^\dag_\si(\R,\w)\,\,\id\cdot\,\id\hat \f_\si(\R,\w)\\\label{eq:int} &-\sum\mn \hat A\mn \di\mn\,\,\id\cdot\,\id\hat \E(\R_\TA). \end{align} Here, $\hat \Pa_\TA$ is the atom's center-of-mass momentum operator and the $E_n$ are the atom's internal eigenenergies. The $\hat A\mn=\ket{m}\bra{n}$ are so-called flip operators which, for $m\,\id=\,\id n$, project onto the $n^\T{th}$ eigenstate or, for $m\,\id\neq\,\id n$, induce transitions from state $\ket{n}$ to $\ket{m}$. The medium-assisted excitations $\hat \f^\dag_\si$ result from the quantization of the field in the presence of the bulk medium \cite{Huttner1992,Philbin2010}: \begin{align} \hat \E(\R)=\id\sum_{\sigma=\T{e},\T{m}}\int\!\! d\R'\id\int_0^\infty\!\!\id\id d\w\,\tens{G}_\si(\R,\R',\w)\,\,\id\cdot\,\id\f_\si(\R',\w)+\T{h.c.}. \end{align} They can be thought of as representing electric (e) or magnetic (m) unit dipoles residing in $\R'$ and oscillating at frequency $\w$, thereby populating the appropriate field mode. They act upon the field's vacuum state $\ket{\{0\}}$ as \begin{align} \hat\f_\si(\R,\w)\ket{\{0\}}=&\,0,\\ \hat \f^\dag_\si(\R,\w)\ket{\{0\}}=&\ket{\bm{1}_\si(\R,\w)}, \end{align} and fulfill bosonic commutation relations. The electric and magnetic coefficients, \begin{align} &\tens{G}_\T{e}(\R,\R',\w)=i\tfrac{\w^2}{c^2}\sqrt{\tfrac{\hb\e_0}{\pi}\T{Im}\,\e(\w)}\,\tens{G}(\R,\R',\w)\,,\\ &\tens{G}_\T{m}(\R,\R',\w)=i\tfrac{\w}{c}\sqrt{\tfrac{\hb}{\pi\m_0}\tfrac{\T{Im}\,\m(\w)}{|\m(\R',\w)|^2}}[\na'\times\tens{G}(\R,\R',\w)]^\T{T}, \end{align} respectively, derive from the electromagnetic Green's tensor $\tens{G}$. 
The latter is defined as the formal solution to the inhomogeneous Helmholtz equation which arises from Faraday's and Amp\`ere's law, supplemented by vanishing boundary conditions at $|\R-\R'|\to\infty$ \cite{Buhmann2013}. The $\tens{G}_\si$ fulfill the integral relation \begin{align}\label{eq:intrel} \sum_\si\,\id\int\id d\s\tens{G}_\si(\R\!,\s,\w)\!\!\cdot\!\!\tens{G}_\si^*(\s,\R'\!\!\!,\w)=\tfrac{\hb\m_0\w^2}{\pi}\T{Im}\tens{G}(\R\!,\R'\!\!\!,\w). \end{align} Calculating the variance of $\hat\E$ using this relation and invoking the fluctuation-dissipation theorem reveals that $\mu_0\w^2\tens{G}$ is the linear response function of the electromagnetic field. In order to eventually evaluate the force, the Green's tensor has to be specified according to the geometry and material properties of the bulk medium. We will assume a Drude-Lorentz model for the dielectric and employ the non-retarded limit of the half-space scattering Green's tensor \cite{Buhmann2013}, \begin{align}\label{eq:Gs2} \Gs\,\id(\R,\R'\,\id,\w)=\tfrac{r_\T{p}(\w)c^2}{8\pi^2\w^2}\!\!\int\!\!\tfrac{d^2\K^\pa}{k^\pa}\,(\K\otimes\K^*)\,e^{i\K^\pa\cdot(\R-\R')-k^\pa(z+z')}. \end{align} It describes the near-field scattering of medium-assisted field excitations of frequency $\w$ and wave vector \begin{align} \K=(\K^\pa,ik^\pa)=(k^\pa\cos\phi,k^\pa\sin\phi,ik^\pa)\,. \end{align} The reflection is governed by the Fresnel reflection coefficient $r_\T{p}(\w)$ for transverse magnetic radiation of frequency $\w$, whose non-retarded limit reads \begin{align} r_\T{p}(\w)=\frac{\e(\w)-1}{\e(\w)+1}\,. \end{align} The permittivity entering the reflection coefficient should be local and dispersive for our results to hold. The last term of the Hamiltonian (\ref{eq:hamiltonian}) describes the interaction of atom and field in the electric-dipole approximation, where the atomic dipole $\di$ has been expanded in the flip-operator basis.
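As an illustrative aside (not part of the derivation), the behavior of the non-retarded reflection coefficient on the imaginary frequency axis, which is invoked repeatedly below, can be checked with a short numerical sketch. The Drude-Lorentz parameter values here are arbitrary placeholders, not values taken from this work:

```python
# Drude-Lorentz permittivity; omega_p, omega_0, gamma are placeholder values
# (in units of the resonance frequency), assumed purely for illustration.
OMEGA_P, OMEGA_0, GAMMA = 1.0, 1.0, 0.1

def eps(w):
    """Drude-Lorentz permittivity eps(w) = 1 + wp^2 / (w0^2 - w^2 - i*gamma*w)."""
    return 1.0 + OMEGA_P**2 / (OMEGA_0**2 - w**2 - 1j * GAMMA * w)

def r_p(w):
    """Non-retarded Fresnel coefficient r_p = (eps - 1) / (eps + 1)."""
    e = eps(w)
    return (e - 1.0) / (e + 1.0)

# On the imaginary axis w = i*xi one has eps(i*xi) = 1 + wp^2/(w0^2 + xi^2 + gamma*xi),
# which is real and > 1, so r_p(i*xi) lies in (0, 1) and decreases with xi.
xis = [0.1 * n for n in range(1, 50)]
vals = [r_p(1j * xi).real for xi in xis]
```

For any single-resonance Drude-Lorentz permittivity, $r_p(i\xi)$ is thus real, strictly between 0 and 1, and monotonically decreasing, the property used in the sign analysis of the friction force below.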
The force acting upon the atom is \begin{align}\label{eq:force} \F(t)=\langle\hat\di\cdot\nabla\hat\E(\R_\TA)\rangle\,. \end{align} As was already studied in Ref.~\cite{Intravaia2015}, the force crucially depends on the atom's trajectory. In the Markovian approach (Section III), the trajectory can be assumed approximately straight and uniform \begin{align} \R_\TA -\R'_\TA \simeq\V_\TA(t-t'), \end{align} on time scales of the field's auto-correlation time. Within the perturbative approach (Section IV), we will assume that the atom is at rest for times $t<0$ and that it moves at constant velocity $\V_\TA=v(\sin\theta,0,\cos\theta)$ with $v>0$ for times $t>0$. This sudden boost trajectory is precisely that used in Ref.~\cite{Barton2010} in the case of parallel motion. Note that this prescribed uniform motion for $t>0$ is maintained by an external force counteracting the quantum friction force. For $t<0$ the atom is located at $\R_\TA(t<0)=(x_0,y_0,z_0)$ and for $t>0$ its trajectory is given by \begin{equation} \R_\TA(t)=(x_0+vt\sin\theta,y_0,z_0+v t \cos\theta). \end{equation} Note that $\theta\!=\!\pm\pi/2$ corresponds to parallel motion, $\theta\!=\!\pi$ to vertical motion towards the plane, and $\theta\!=\!0$ to vertical motion away from the plane. Trajectories containing, e.g., a continuous acceleration from zero velocity to constant final velocity over a given time interval have been considered in Ref.~\cite{Intravaia2015} for the parallel motion case, and could be analogously implemented for our case of arbitrarily directed motion. However, for simplicity, in this paper we only consider the sudden boost trajectory described above. \end{section} \begin{section}{Markovian Approach}\label{sec:markov} We will first solve the internal dynamics of the atom by means of a Markovian quantum master equation.
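The angle convention of the sudden-boost trajectory introduced above can be summarized in a minimal numerical sketch (the initial coordinates and velocity are arbitrary placeholders): for $\theta=\pm\pi/2$ the atom-surface distance stays constant, for $\theta=\pi$ it decreases, and for $\theta=0$ it increases.

```python
import math

def trajectory(t, v, theta, r0=(0.0, 0.0, 1.0)):
    """Sudden-boost trajectory: at rest for t < 0, uniform velocity
    v*(sin(theta), 0, cos(theta)) for t > 0; r0 is a placeholder start point."""
    x0, y0, z0 = r0
    if t <= 0.0:
        return (x0, y0, z0)
    return (x0 + v * t * math.sin(theta), y0, z0 + v * t * math.cos(theta))

# theta = pi/2: parallel motion, z constant
# theta = pi  : vertical motion towards the plane (z decreases)
# theta = 0   : vertical motion away from the plane (z increases)
z_par = trajectory(2.0, 0.1, math.pi / 2)[2]
z_in  = trajectory(2.0, 0.1, math.pi)[2]
z_out = trajectory(2.0, 0.1, 0.0)[2]
```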
The master-equation treatment then further allows for inferring atomic dipole correlations via the quantum regression hypothesis \cite{Lax2000}, which can be proven to hold as a theorem for semi-group Markovian processes \cite{Breuer2002}. The dipole correlations obtained in this way eventually lead to an evaluable expression for the quantum friction force. \begin{subsection}{Internal Atomic Dynamics} By means of a Born-Oppenheimer-type argument, the internal dynamics of the atom can be separated from its center-of-mass motion. That is, it can be solved for an arbitrary but fixed instantaneous position $\R_\TA$ and momentum $\Pa_\TA$. Subsequently, the force determining the change in this very momentum may be calculated for given internal dynamics. The latter are captured by the time evolution of the flip operators $\hat A_{mn}$. In this subsection we solve the corresponding Heisenberg equation, showing that it leads to velocity-dependent rates of spontaneous decay and velocity-dependent eigenfrequencies. Note that the full Hamiltonian includes the field surrounding the atom. This environment will be traced over later on, leading to dissipative dynamics in the first place. At any point in time, the full Hamiltonian can be decomposed as in (\ref{eq:hamiltonian}). The purely atomic operators $\hat A\mn$ then commute with the field contribution (\ref{eq:field}) at equal times, since they live in orthogonal subspaces of the total Hilbert space. The commutator with the atomic Hamiltonian (\ref{eq:atom}) yields the eigenfrequencies as in the absence of the field, while the commutator with the interaction Hamiltonian (\ref{eq:int}) will lead to the Lamb shift and Einstein rates: \begin{align}\label{eq:HeisA} \dot{\hat A}\mn(t)=i\w\mn \hat A\mn+\tfrac{1}{i\hb}[\hat A\mn(t),\hat H_\T{AF}(t)]. \end{align} This differential equation may be solved formally and re-substituted into itself \textit{ad infinitum}.
This procedure results in a Dyson-like expansion based on which one may design a series of approximate solutions $\hat A\mn^{(k)}$, where $k$ is even, converging towards the exact one as $k\,\id\rightarrow\,\id\infty$. The $k^\T{th}$ element of the above series is of order $\di^{k}$. Therefore, its dynamics comprise re-absorption of up to $k/2$-fold reflected photons. In the following, multiple reflections by the atom will be neglected. That is, the dynamics will be solved up to an order $k$ equal to two. Employing the prescription above, one writes \begin{align}\label{eq:a1} \dot{\hat A}\mn^{(2)}(t)=&\,\dot{\hat A}\mn^{(0)}(t)+\tfrac{1}{i\hb}[\hat A\mn^{(0)}(t),\hat H_\T{AF}^{(2)}(t)]\q,\\\label{eq:h0} \hat H_\T{AF}^{(2)}(t)=&\,-\sum\mn \hat A\mn^{(0)}(t)\di\mn\,\,\id\cdot\,\id\hat \E^{(1)}(\R_\TA,t). \end{align} Note that $\hat \E^{(1)}$ denotes the free field plus the field induced by an atom described as $\hat A\mn^{(0)}$. Starting from the Heisenberg equation for the $\hat \f_\si$ and employing the integral relation (\ref{eq:intrel}), one arrives at \begin{align}\label{eq:e0} \hat \E^{(1)}(\R_\TA,t)= &\sum_\si\int\!\! d\R\id\int_0^\infty\!\!\id\id d\w\,\tens{G}_\si(\R_\TA,\R,\w)\cdot\hat \f_\si(\R,\w)\\\nonumber &+\tfrac{i\m_0}{\pi}\sum\mn\int_{t_0}^t\id\!dt'\id\int_0^\infty\!\!\id\id d\w\w^2e^{i\w\mn(t-t')}\\\nonumber &\times \T{Im}\tens{G}(\R_\TA,\R'_\TA,\w)\,\,\id\cdot\,\,\id\di\mn \hat A^{(0)}\mn(t')+\T{h.c.} , \end{align} where absence of a time argument indicates initial time $t_0$, and primed quantities are evaluated at $t'$. Substituting the above field into (\ref{eq:h0}) makes it apparent that the interaction process taken into account is re-absorption of a reflected, medium-assisted photon that has been emitted by the atom at an earlier time $t'$, i.e., a dipole interaction between $\di(t)$ and $\di(t')$.
Each of these reflected photons accumulates a phase $i\w\mn(t-t')$ as it travels from the place of its creation $\R'_\TA$ to its destination $\R_\TA$, where it eventually interferes with others of its kind, according to their respective relative phases. Inserting (\ref{eq:e0}) into the normal-ordered vacuum-expectation value of (\ref{eq:a1}) leads to the reduced dynamics of the atom's internal degrees of freedom: \begin{align}\nonumber &\langle\dot{\hat A}^{(2)}\mn(t)\rangle=\,i\w\mn\langle\hat A^{(2)}\mn(t)\rangle\\\label{eq:master} &\qq\qq\qq-\sum_k[C_{nk}(t)+C^*_{mk}(t)]\langle\hat A^{(2)}\mn(t)\rangle, \end{align} with the coefficients \begin{align}\nonumber &C\nk=\,\tfrac{\m_0}{\pi\hb}\int_{t_0}^t\id\!dt'\id \int_0^\infty\!\!\id\id d\w\w^2\,e^{-i(\w-\w\nk)(t-t')}\\\label{eq:c} &\qq\qq\qq \times \di\nk\cdot\T{Im}\tens{G}(\R_\TA,\R'_\TA,\w)\cdot\di\kn. \end{align} Here we employed the fact that, in the absence of degenerate and quasi-degenerate dipole transitions in the atom, the off-diagonal flip-operator dynamics effectively decouple from the diagonal ones as well as from each other due to the orthogonality of non-commensurate oscillations; terms of order $d^4$ were neglected. Now, the real part of the $C\nk$ renders the rates $\Ga_n$ of spontaneous decay of $\ket{n}$, while their imaginary part delivers the corrections $\hbar\de\w_n$ to the eigenenergy $E_n$ of the free atom -- both stemming from the interaction with the field: \begin{align}\label{eq:dw} &\de\w_n=\sum_k\de\w\nk=\sum_k\IM C\nk\,,\\\label{eq:G} &\Ga_n=\sum_k\Ga\nk=2\sum_k\RE C\nk\,. \end{align} The $t'$ integration in (\ref{eq:c}) involves the Green's tensor via $\R'_\TA$, and the oscillating exponential describing a field excitation of frequency $\w\nm$ traveling from $\R'_\TA$ to $\R_\TA$. In the spirit of the Doppler effect, the spatial distance between the photon's origin and destination may equally well be translated into a shift in frequency.
Under the conditions of (a) uniform motion on timescales of the field's auto-correlation time and (b) $t$ much larger than these times, performing the limit of $t_0\to-\infty$, i.e., a Markov approximation, and employing the non-retarded Green's tensor (\ref{eq:Gs2}) leads to \begin{align}\label{eq:cres} &C\nk^\T{res}\!=\!-\tfrac{i}{8\pi^2\hb\e_0}\id\int_0^{2\pi}\id\id\id d\phi\!\!\int_0^\infty\id\id\!\!dk^\pa k^{\pa2} d\nk^{( \phi)2}r_\T{p}(\w'\nk)\Theta(\w'\nk)e^{-2k^\pa z_\TA(t)}\!\!,\\\label{eq:cnres} &C\nk^\T{nres}\!=\!\tfrac{i}{8\pi^3\hb\e_0}\id\int_0^\infty\id\id\!\!d\xi\!\!\int_0^{2\pi}\id\id\id d\phi\!\!\int_0^\infty\id\id\!\!dk^\pa k^{\pa2} d\nk^{( \phi)2}\frac{\w'\nk r_\T{p}(i\xi)}{(\w\nk^{\prime2}+\xi^2)}e^{-2k^\pa z_\TA(t)}, \end{align} for the resonant and non-resonant contributions to the Heisenberg coefficients (\ref{eq:c}), respectively \cite{Klatt2016}. Above we have introduced the shorthand notation, \begin{align} & d\nk^{(\phi)2}=\di\nk\cdot\li(\id\begin{smallmatrix}\cos^2\phi&\cos\phi\sin\phi&-i\cos\phi\\\cos\phi\sin\phi&\sin^2\phi&-i\sin\phi\\i\cos\phi&i\sin\phi&1\end{smallmatrix}\,\id\re)\cdot\di\nk\,, \end{align} and the Doppler-shifted, complex-valued, frequency \begin{align}\label{eq:wtilde} \w'\nk=\w\nk+vk^\pa(\sin\theta\cos\phi-i\cos\theta)\,. \end{align} The Heaviside step function appearing in Eq.~(\ref{eq:cres}) is understood with respect to the real part of $\w'\nk$. Without loss of generality, the coordinate system was chosen such that the $y$-component of the velocity vanishes. The direction of the atomic velocity $\V$ is hence solely determined by the angle $\theta$ between $\V$ and the $z$-axis. Depending on $\theta$, the frequency $\w\nk$ may be shifted along the imaginary axis, which is unusual. 
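The limiting cases of the Doppler-shifted complex frequency in Eq.~(\ref{eq:wtilde}) can be made explicit in a brief numerical sketch (all parameter values are placeholders): parallel motion yields a purely real shift, whereas purely vertical motion shifts the frequency along the imaginary axis, with opposite signs for approach and retreat.

```python
import math

def doppler_shift(w_nk, v, kpar, theta, phi):
    """Doppler-shifted complex frequency, cf. Eq. (wtilde):
    w' = w_nk + v*kpar*(sin(theta)*cos(phi) - 1j*cos(theta))."""
    return w_nk + v * kpar * (math.sin(theta) * math.cos(phi) - 1j * math.cos(theta))

# Placeholder values: bare frequency 1, v*kpar = 0.1, phi = 0.
w_par  = doppler_shift(1.0, 0.05, 2.0, math.pi / 2, 0.0)  # parallel: real shift
w_to   = doppler_shift(1.0, 0.05, 2.0, math.pi, 0.0)      # towards plane: +i*v*kpar
w_away = doppler_shift(1.0, 0.05, 2.0, 0.0, 0.0)          # away from plane: -i*v*kpar
```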
The Doppler effect can be understood as the shortened/lengthened time interval between the passing of two consecutive wave fronts through a certain (and possibly itself moving) observation point in space due to the relative motion of the waves' source and that very point. In the scenario at hand, the source is the atomic dipole at a time $t-\tau$ in the past and the observation point is the position of the atom at time $t$. While in the case of the atom moving parallel to the surface, there are actual wave fronts propagating along the direction of motion, in the case of vertical motion, in the non-retarded regime, only evanescent waves extend along the direction of motion. The latter do not possess such a thing as a wave front and thus the traditional intuition for the Doppler effect does not apply. Just as evanescent waves are characterized by a complex $k$-vector, the Doppler shift we encounter in vertical motion has a component along the imaginary axis. Finally, we emphasize that since we are using the non-retarded form of the Green's tensor, all the results below will be valid as long as the atom is always within the near-field zone. The expressions (\ref{eq:cres}) and (\ref{eq:cnres}) for the resonant and non-resonant contributions to the internal dynamics of an atom moving in front of a half-space hold for arbitrarily directed atomic motion. However, since they result from a series expansion in $v/[z_\TA(t)\,\w\nk]$ (see Ref.~\cite{Klatt2016}), the condition $v\!<\!z_\TA(t)\w\nk$ has to be fulfilled. The static Casimir-Polder shifts and rates are re-obtained by replacing the primed, i.e., Doppler-shifted, frequencies $\w'\nk$ in (\ref{eq:cres}) and (\ref{eq:cnres}) by their bare counterparts $\w\nk$, which is identical to considering only the zeroth order in relative velocity. For finite velocity, the integrals in (\ref{eq:cres}) and (\ref{eq:cnres}) can be solved numerically.
In the case of parallel motion, due to the rotational symmetry of the setup, shifts and rates only vary quadratically with the atom's velocity. As for vertical motion, the variation is linear \cite{Klatt2016}. Therefore, for small velocities the dynamical corrections to the static shifts and rates are significantly larger if the atom moves perpendicularly to the surface. Since they are needed later on, we will conclude this Section by giving both the decay rate and non-resonant level shift for a ground-state atom as prescribed by the real and imaginary part of \eqref{eq:cnres}. In order to match the configuration of the perturbative calculation, the atom is assumed to be a two-level system. Hence, for an isotropic ground state atom, \begin{align}\nonumber \Ga_0^{(\theta=\frac{\pi}{2})} &=\tfrac{d^2}{2\pi^2\hb\e_0}\int_0^\infty\id\id\!\!d\w\!\!\int_0^\infty\id\id\!\!d^2k^\pa k^\pa e^{-2k^\pa z_0}\IM r_p(\w)\\\label{eq:Ga01res} & \qq \times \de(\w+\w_{10}-k^\pa v\cos\phi), \,\\\nonumber \Ga_0^{(\theta\neq\frac{\pi}{2})} &\simeq-\tfrac{3 d^2v\cos\theta}{8\pi^2\hb\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\w\;\tfrac{\IM r_p(\w)}{(\w+\w_{10})^2}\\\label{eq:Ga01nres} &=\tfrac{3d^2v\cos\theta}{8\pi^2\hb\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\xi\;\tfrac{\xi^2-\w_{10}^2}{(\xi^2+\w_{10}^2)^2}\;r_p(i\xi), \,\\\nonumber \de\w_0 &\simeq-\tfrac{d^2}{8\pi^2\hb\e_0z_\TA^3(t)}\int_0^\infty\id\id\!\!d\w\;\tfrac{\IM r_p(\w)}{\w+\w_{10}}\left[1-\tfrac{3}{4}\tfrac{v^2(1+3\cos2\theta)}{(\w+\w_{10})^2z_\TA^2(t)}\right]\\\label{eq:dw01} &=-\tfrac{d^2}{8\pi^2\hb\e_0z_\TA^3(t)}\int_0^\infty\id\id\!\!d\xi\;\tfrac{\w_{10} }{\w_{10}^2+\xi^2} r_p(i\xi) \\\nonumber &\qq\qq\qq \times \left[1-\tfrac{3}{4}\tfrac{v^2(\w_{10}^2-\xi^2)(1+3\cos2\theta)}{(\w_{10}^2+\xi^2)^2z_\TA^2(t)}\right] . \end{align} Note that the rates signify a spontaneous excitation of the atom. In the parallel case, however, this process is resonant. 
Due to the resonance-enforcing Heaviside-$\Theta$ function, the probability for this Cherenkov-type excitation is only finite if the Doppler-shift energy $\hb vk^\pa$ is large enough to bridge the gap between the ground state and first excited state. This imposes a constraint on the parallel projection of the wave vector, namely $k^\pa>\w_{10}/v$. This constraint manifests itself in the exponential of the $k$-space integration in both the rate and the shift and results in an exponential suppression of the motion-induced excitation of a ground-state atom moving parallel to the surface as its speed approaches zero \cite{Intravaia2015}. Not so in non-parallel motion, where to leading order the excitation rate scales linearly with velocity. Remarkably, as for the non-resonant level shift, the directionality does not change the leading-order power law in $v$. It scales quadratically with the atomic velocity in any case. Lastly, note that for parallel motion the above expressions perfectly coincide with the ones obtained perturbatively in \cite{Barton2010,Intravaia2015} if one recalls that a factor of $4\pi\e_0$ is needed in order to translate from SI units (used in our work) to Gaussian units (used in \cite{Barton2010,Intravaia2015}). \end{subsection} \begin{subsection}{Casimir-Polder and Friction Force}\label{sec:comm} After having solved the internal atomic dynamics for an arbitrary but fixed atomic center-of-mass velocity, one can now solve the Newtonian dynamics of the atom for arbitrary but fixed atomic transition frequencies and decay rates. The force (\ref{eq:force}) determining the Newtonian dynamics of the atom can be decomposed into its projection $F_v$ onto the atom's direction of motion and a component orthogonal to that. While the former decelerates the atom, the latter changes its direction of motion. Since we are interested in a frictional force, we study the gradient $\nabla_v\equiv \V\cdot\nabla/v$ of the force along the atom's velocity.
The electric field entering the force (\ref{eq:force}) was already determined in (\ref{eq:e0}). Inserting it and normal-ordering atom and field operators leads to \begin{align}\label{eq:F1} F_v(t)&=\frac{i\mu_0}{\pi}\int_0^\infty\id\id\!\!d\tau\id\int_0^\infty\id\id\!\!d\w\w^2e^{-i\w\tau}\\\nonumber &\times \nabla_v\langle\di(t)\cdot\IM\G^{(1)}(\R_\TA,\R'_\TA,\w)\cdot\di(t-\tau)\rangle+\T{h.c.}\,. \end{align} This clearly shows the dependence of the force on the auto-correlation of the atomic dipole. Within the Markov approximation, one may infer the two-point correlation function needed above via Lax's quantum-regression hypothesis \cite{Lax2000,Breuer2002}. It gives \begin{align}\label{eq:C} \langle\di(t)\di(t\!-\!\tau)\rangle\!=\!\sum_{nk}\!\di_{nk}\di_{kn}\,p_n(t)\,e^{[i\w\nk-\frac{1}{2}(\Ga_n+\Ga_k)]\tau}, \end{align} where $p_n(t)\!\equiv\!\langle A_{nn}(t)\rangle$ is the population of the atomic state $\ket{n}$. Inserting Eq.~(\ref{eq:C}) into the force (\ref{eq:F1}), and restricting to the non-retarded regime, leads to a decomposition of the friction force into contributions associated with the various energy levels of the atom, each weighted with the population $p_n$ of that level, \begin{align} F_v(t)=\sum\nk p_n(t)F\nk(t)\,.
\end{align} The summands are given by \begin{align}\label{eq:F} &F\nk(t)=-\tfrac{1}{4\pi^3\e_0}\int_0^\infty\id\id\!\!d\w\id\int_0^{2\pi}\id\id\!\!d\phi\id\int_0^\infty\id\id\!\!dk^\pa k^{\pa3}e^{-2k^\pa z_\TA(t)}\, d\nk^{(\phi)2}\\\nonumber &\times \frac{(\w+\Omega'\kn)\cos\theta+\frac{1}{2}(\Ga'_n+\Ga'_k)\sin\theta\cos\phi}{(\w+\Omega'\kn)^2+\frac{1}{4}(\Ga'_n+\Ga'_k)^2}\,\T{Im}r_{\,\,\id\T{p}}(\w)\,, \end{align} where the capital omega indicates that frequencies include the formerly calculated Casimir-Polder shifts (\ref{eq:dw}), \begin{align} \Omega\nk=\w\nk+\de\w_n-\de\w_k\,, \end{align} and the primed shifts and rates carry Doppler shifts \begin{align}\label{eq:Omprime} \Omega\nk' &=\Omega\nk-vk^\pa\sin\theta\cos\phi\,,\\\label{eq:Gaprime} \Ga\nk' &=\Ga\nk-vk^\pa\cos\theta\,. \end{align} We will now focus on an atom which at time $t$ is in its ground state, i.e., $p_n(t)=\delta_{n,0}$, and only consider its first excited state when calculating the Casimir-Polder and quantum friction force, i.e., $F_v(t)=F_{01}(t)$. This facilitates the comparison to the perturbative calculations of Section~IV. For better readability, we will drop subscripts, that is $\Omega\!\equiv\!\Omega_{10}$ and $\Ga\!\equiv\!\tfrac{1}{2}(\Ga_0+\Ga_1)$. The full force acting on the ground-state atom can be split into a resonant term -- which stems from the pole in the $\w$-integration -- and a non-resonant term. 
This decomposition gives \begin{align}\nonumber &F_v^\T{res}(t) =-\tfrac{1}{4\pi^2\e_0}\,\T{Re}\int_0^{2\pi}\id\id\!\!d\phi\id\int_0^\infty\id\id\!\!dk^\pa k^{\pa3}e^{-2k^\pa z_\TA(t)}\,d^{(\phi)2}\\\label{eq:Fres} &\times (\cos\theta-i\sin\theta\cos\phi)\,r_{\,\,\id\T{p}}(-\Omega'+i\Ga')\,\Theta(-\Omega'),\\\nonumber &F_v^\T{nres}(t) =-\tfrac{1}{4\pi^3\e_0}\,\T{Re}\!\!\int_0^{2\pi}\id\id\!\!d\phi\id\int_0^\infty\id\id\!\!dk^\pa k^{\pa3}e^{-2k^\pa z_\TA(t)}\,d^{(\phi)2}\\\label{eq:Fnres} &\times (\cos\theta-i\sin\theta\cos\phi)\id\int_0^\infty\id\id\!\!d\xi\,\frac{\Omega'-i\Ga'}{(\Omega'-i\Ga')^2+\xi^2}\,r_{\,\,\id\T{p}}(i\xi)\,, \end{align} respectively. Again, due to the Heaviside-$\Theta$ function, the resonant force is only finite if the Doppler-shift energy $\hb vk^\pa$ is large enough to bridge the gap between the ground state and first excited state. The resonant friction force hence stems from a Cherenkov-like excitation of the atom and is exponentially suppressed due to the constraint on $k^\pa$ imposed by the $\Theta$ function \cite{Intravaia2015}. We will therefore neglect that term and focus on the non-resonant friction force in the following. The velocity dependence of that non-resonant force (\ref{eq:Fnres}) is of two-fold nature. The $v$-dependence brought about by the Doppler-shifted frequencies $\Omega'$ and rates $\Ga'$ we will call \textit{explicit} dependence. The velocity dependence $\de^vC\nk$ already included in the non-shifted $\Omega$ and $\Ga$ via the coefficients \begin{align} C\nk=\left.C\nk\right|_{v=0}+\de^vC\nk\,, \end{align} given in (\ref{eq:c}), instead, will be referred to as \textit{implicit} dependence. 
Eventually, up to linear order in $v$ the force acting upon the atom reads, \begin{align}\label{eq:FMarkov0} &F_v(t)\simeq-\frac{3d^2}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\w\,\frac{(\w+\Omega)\,\IM r_{\,\,\id\T{p}}(\w)}{(\w+\Omega)^2+\Ga^2}\\\nonumber &\times\!\li[\!\cos\theta\!+\!\frac{2v\,\Ga(1+\cos^2\!\theta)}{z_\TA(t)[(\w+\Omega)^2+\Ga^2]}\!-\!\li(\de^vC^*_{01}+\de^vC_{10}\re)\cos\theta\re]\\\nonumber &\qq=-\frac{3d^2}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\xi\,\frac{\Omega(\Omega^2+\Ga^2+\xi^2)\,r_{\,\,\id\T{p}}(i\xi)}{(\Omega^2+\Ga^2-\xi^2)^2+4\Omega^2\xi^2}\\\nonumber &\times\!\li[\!\cos\theta\!+\!\frac{2v\,\Ga(1+\cos^2\!\theta)}{z_\TA(t)[(\w+\Omega)^2+\Ga^2]}\!-\!\li(\de^vC^*_{01}+\de^vC_{10}\re)\cos\theta\re]\!. \end{align} The first summand gives the static Casimir-Polder force's projection onto the direction of motion. The second summand comprises the explicit velocity dependence and the last one stems from the implicit dependence mentioned above. As an anchor to previous results as well as the perturbative calculations to come, we now start by deducing the static Casimir-Polder force up to order $d^2$ from the expression \eqref{eq:FMarkov0}. This is done by omitting the decay rate as well as the corrections $\de\w_0$ to the bare frequency $\w_{10}$, and setting the velocity to zero. For an isotropic preparation of the atom, i.e., $d_x\!=\!d_y\!=\!d_z\!\equiv\!d$, this yields \begin{align}\nonumber F^{(2)}_\T{CP}(t) &=-\frac{3d^2\cos\theta}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\w\,\frac{\T{Im}r_{\,\,\id\T{p}}(\w)}{\w+\w_{10}}\\\label{eq:Fcpd2} &=-\frac{3d^2\cos\theta}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\xi\,\frac{\w_{10}\;r_{\,\,\id\T{p}}(i\xi)}{\w^2_{10}+\xi^2}\,. \end{align} The cosine stems from the force's projection onto the direction of motion. 
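The equivalence of the real- and imaginary-frequency forms of Eq.~(\ref{eq:Fcpd2}), a rotation of the frequency integral onto the imaginary axis exploiting the analyticity of $r_p$, can be verified numerically. A minimal sketch with placeholder Drude-Lorentz parameters (not values from this work):

```python
# Numerical check (illustrative only) of
#   I1 = Int_0^inf Im r_p(w) / (w + w10) dw
#      = I2 = Int_0^inf w10 * r_p(i*xi) / (w10^2 + xi^2) dxi.
# OMEGA_P, OMEGA_0, GAMMA, W10 are arbitrary placeholder parameters.
OMEGA_P, OMEGA_0, GAMMA, W10 = 1.0, 1.0, 0.1, 1.0

def r_p(w):
    """Non-retarded reflection coefficient for a Drude-Lorentz permittivity."""
    e = 1.0 + OMEGA_P**2 / (OMEGA_0**2 - w**2 - 1j * GAMMA * w)
    return (e - 1.0) / (e + 1.0)

def trapz(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

# Both tails decay fast enough (~1/w^4) that a finite cutoff suffices.
I1 = trapz(lambda w: r_p(w).imag / (w + W10), 0.0, 100.0, 40000)
I2 = trapz(lambda xi: W10 * r_p(1j * xi).real / (W10**2 + xi**2), 0.0, 100.0, 40000)
```

The two quadratures agree to well below a percent, confirming that both representations of the static Casimir-Polder integral may be used interchangeably for a causal, passive permittivity.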
If the atom's trajectory is chosen such that it moves towards the plane and hence $\cos\theta<0$, the projection of the Casimir-Polder force onto the velocity vector is positive, as it should be. The leading-order-in-$v$ friction force according to Eq. \eqref{eq:FMarkov0} amounts to \begin{align}\nonumber &F_\T{fr}(t)\simeq-\frac{3d^2}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\w\,\frac{\IM r_{\,\,\id\T{p}}(\w)}{\w+\Omega}\\\nonumber &\times\li[\frac{v\Ga_1(1+\cos^2\!\theta)}{z_\TA(t)\,(\w+\Omega)^2}-\li(\de^vC^*_{01}+\de^vC_{10}\re)\cos\theta\re]\\\label{eq:FMarkov} &\qq=-\frac{3d^2}{8\pi^2\e_0z_\TA^4(t)}\int_0^\infty\id\id\!\!d\xi\,\frac{\Omega\,r_{\,\,\id\T{p}}(i\xi)}{\Omega^2+\xi^2}\\\nonumber &\times\li[\frac{v\Ga_1(\Omega^2\!-3\xi^2)(1\!+\cos^2\!\theta)}{z_\TA(t)\,(\Omega^2+\xi^2)^2}-\li(\de^vC^*_{01}+\de^vC_{10}\re)\cos\theta\re]\!. \end{align} Several remarks are in order at this point. First and foremost, note that the above expression is only correct up to order $d^4$, since the internal dynamics of the atom, entering the force, were only determined up to order $d^2$. Secondly, the \textit{implicit} $v$-dependence only contributes for non-parallel motion since it is weighted with a factor $\cos\theta$. Lastly note that, even though the leading order in $v$ of the above force is linear, this linear order in velocity is of fourth order in the atomic dipole moment. If only $d^2$ terms are considered, the friction force vanishes to first order in $v$. This is because in this lowest-order perturbation theory the rates and implicit velocity dependencies are set identically to zero, i.e., $\Ga_1=\de^vC_{01}=0$. It can be shown that, on the $d^2$ level for parallel motion, the non-resonant force is strictly zero to all orders in the atomic velocity.
For any other direction of motion, the leading-order-in-$v$ force at the $d^2$ level reads \begin{align}\nonumber F^{(2)}_\T{fr}(t) &\simeq\frac{15\,d^2v^2}{16\pi^2\e_0z_\TA ^6(t)}\int_0^\infty\id\id\!\!d\xi\,\frac{\w_{10}(\w_{10}^2-3\xi^2)}{(\w_{10}^2+\xi^2)^3}\,r_{\,\,\id\T{p}}(i\xi)\\ &\qq \times (1+\cos^2\theta)\cos\theta\, \label{f2fricMark} \end{align} and is hence quadratic in the atomic velocity. This provides us with a first answer to the initial question of whether quantum friction is qualitatively different for non-parallel motion. Up to second order in the coupling, the answer is yes. Whereas the $d^2$ force on an atom moving parallel to the surface is exponentially suppressed, as we saw above and as was also previously shown in Ref.~\cite{Intravaia2015}, an atom in non-parallel motion experiences an unsuppressed force which scales quadratically with velocity. This can be easily understood. At the $d^2$ level, the time integral entering the force for parallel motion takes the form \begin{align} \RE\!\!\int_0^\infty\id\id\!\!d\tau\;e^{-i(\omega+\Omega-k^\pa v)\tau}=\pi\de(\omega+\Omega-k^\pa v)\,. \end{align} As illustrated before, this resonance condition enforces a constraint on the wave vector which in turn leads to the exponential suppression. For vertical motion, however, the time integral entering the $d^2$ friction force reads \begin{align} -\IM\!\!\int_0^\infty\id\id\!\!d\tau\;e^{-[i(\omega+\Omega)+k^\pa v]\tau}=\frac{\omega+\Omega}{(\omega+\Omega)^2+(k^\pa v)^2}\,. \end{align} Instead of the sharp resonance condition in the co-moving frame, enforced by a Dirac $\de$, we here encounter a Lorentzian whose width is given by the velocity. To leading order, this Lorentzian depends quadratically on $v$. This very basic consideration also reveals how a finite linear order in $v$ comes about in the Markovian $d^4$ friction force. There, the same Lorentzian appears, however with a width now given by the sum of the atomic rate of spontaneous decay and the velocity.
The latter hence broadens the Lorentzian peak in a linear manner. Note that this qualitative difference between parallel and vertical friction is intimately linked to the evanescent nature of near-field waves. Lastly, note that from the expression \eqref{f2fricMark} the directionality of the force is not immediately evident. A more careful study of \eqref{f2fricMark} reveals that the integral of $(\w_{10}^2-3\xi^2)/(\w_{10}^2+\xi^2)^3$, from zero to infinite imaginary frequency, identically vanishes, since the positive contribution for small imaginary frequencies exactly balances the negative contribution for large imaginary frequencies. Since $r_p(i\xi)$ is strictly positive and monotonically decreasing, this leads to overall positivity of the imaginary-frequency integral. For trajectories such that the atom moves towards the plane ($\cos\theta<0$), the projection of the frictional force onto the velocity is negative and counteracts the motion. \end{subsection} \end{section} \begin{section}{Time-Dependent Perturbation Theory} In this Section we will compute the quantum frictional force on an atom moving with constant velocity in an arbitrary direction using time-dependent perturbation theory. We will closely follow the approach used in Refs.~\cite{Barton2010,Intravaia2015}. To this end, the mathematical framework sketched in Section~\ref{sec:setup} is slightly modified. First of all, calculations are now performed in the interaction picture, rather than in the Heisenberg picture. Inspired by the 1s and 2p states of the hydrogen atom, the lowest quantum states of the atom are now taken to be the ground state $|g \rangle$ and three degenerate excited states written as $| \boldsymbol{\eta} \rangle$. The unit vector $\boldsymbol{\eta}$ is taken from a set $\{ \boldsymbol{\eta} \}$ forming an orthonormal and real basis.
As before, the bare transition frequency between the levels is $\omega_{10}$ and the atom interacts with the electromagnetic field through its electric dipole moment, $\hat{\di}(t)$. Its nonzero matrix elements in the interaction picture are $\bra{g} \hat{\di}(t) \ket{\boldsymbol{\eta}}=\boldsymbol{\eta} d e^{-i \omega_{10} t}$ and $\bra{\boldsymbol{\eta}} \hat{\di}(t) \ket{g}=\boldsymbol{\eta} d e^{i \omega_{10} t}$. As mentioned in Section~\ref{sec:setup}, we assume that for $t<0$ the atom is static at a distance $z_0$ from the surface, and that for $t>0$ its distance from the surface varies as $z_A(t)=z_0 + v t \cos \theta$. As in the Markovian approach discussed before, we will also assume here that $z_A(t)$ is always within the near-field zone, irrespective of whether the atom is moving towards or away from the surface. Hence the electric field operator can be written as \cite{Barton2010,Intravaia2015} \begin{align} \hat{\E}(\R_\TA) &= \int\!\! d^2\K^\pa\!\!\int_0^{\infty}\id\id\!\! d\omega\; i \K \;\hat{a}_{\K^\pa \omega} \psi_{\K^\pa \omega} \\\nonumber &\qq\times e^{i \K^\pa \cdot \R_\TA^\pa(t)-i\omega t - k^\pa z_\TA(t)} + \text{h.c.} \end{align} Here, $\hat{a}_{\K^\pa \omega}$, $\hat{a}^{\dagger}_{\K^\pa \omega}$ are bosonic annihilation/creation operators (which roughly correspond to the 2D Fourier transform of $\f^\dag_\T{e}(\R,\w)$ in Section~\ref{sec:setup}), and $\psi_{\K^\pa \omega}$ are complex plasmon amplitudes whose modulus squared is $|\psi_{\K^\pa \omega}|^2= (\hbar/(8\pi^3\e_0 k^\pa)) \text{Im} r_{\rm p}(\omega)$. Note that they differ by a factor $\sqrt{4\pi\e_0}$ with respect to the aforementioned references. This is rooted in the use of different unit systems: here we employ SI units, whereas in those references Gaussian units were used. We assume that the initial state of the system is the atom in its ground state and no photons, i.e., $\ket{\psi (0)}= \ket{g;\text{vac}}$.
As in \cite{Intravaia2015}, we express the joint atom+field state in a perturbative expansion in the coupling constant $d$. To third order, it is given by \begin{align} \ket{\psi (t)} &\simeq\left(1+c_0^{(2)}(t)\right) \ket{g;\text{vac}} \\\nonumber &\qq+ \sum_{\boldsymbol{\eta}} \int\id d^3 \kappa \left(c_1^{(1)}(t) + c_1^{(3)}(t)\right) \ket{\boldsymbol{\eta}; \kappa}\\\nonumber &\qq+ \tfrac{1}{2}\int\id d^3 \kappa_1\!\! \int\id d^3 \kappa_2 \;c_2^{(2)}(t) \ket{g;\kappa_1,\kappa_2}\,, \end{align} where $c_n^{(p)}(t)$ denotes the transition amplitudes for states with $n$ photons in the $p$th perturbative order and can be obtained by using the standard techniques of perturbation theory. We need to compute the state to third order in order to evaluate the force to fourth order in the coupling. Above we have used the compact notation $\kappa=\{\K^\pa,\omega\}$, and the integrals $\int d^3 \kappa =\int d^2\K^\pa \int_0^{\infty} d\omega$. The expectation value in the state $\ket{\psi (t)}$ of the force operator along the direction of motion, $\hat{F}_v = \V \cdot \hat{\F}/v = \sin \theta \hat{F}_x +\cos \theta \hat{F}_z$, is given by \begin{align}\label{eq:fza} &F_v(t) = 2 \text{Re}\sum_{\boldsymbol{\eta}} \left\{\int\!\!d^3\kappa \bra{g;\text{vac}} \hat{F}_v \ket{\boldsymbol{\eta}; \kappa} \right.\\\nonumber &\qq\qq\qq\times \left[c_1^{(1)}(t)+c_0^{(2)*}(t) c_1^{(1)}(t)+c_1^{(3)}(t)\right] \nonumber \\\nonumber &+ \left.\tfrac{1}{2}\int\!\!d^3\kappa\,d^3\kappa_1\,d^3\kappa_2\, \bra{\boldsymbol{\eta}; \kappa} \hat{F}_v \ket{g;\kappa_1,\kappa_2}c_1^{(1)*}(t) c_2^{(2)}(t) \right\} , \end{align} valid to fourth order in the coupling. 
The relevant matrix elements of the interaction Hamiltonian are \begin{align} & \bra{g;\text{vac}}\hat{H}_{\rm AF} \ket{\boldsymbol{\eta};\kappa}=i d (\boldsymbol{\eta} \cdot \K) \psi_{\kappa}\\\nonumber &\qq\times e^{-i(\omega_{10}+\omega)t+i \K^\pa \cdot \R_\TA^\pa(t)- k^\pa z_\TA(t)}\,,\\ &\bra{\boldsymbol{\eta};\kappa} \hat{H}_{\rm AF} \ket{g;\kappa_1,\kappa_2} =i d (\boldsymbol{\eta} \cdot \K_1) \psi_{\kappa_1}\\\nonumber &\qq\times e^{i(\omega_{10}-\omega_1)t+i\bm{\K^\pa_1}\cdot \R_\TA^\pa(t)- k^\pa_1 z_\TA(t)} \delta^3(\kappa-\kappa_2)\\\nonumber &\qq+ (1 \leftrightarrow 2)\,, \nonumber \\ & \bra{\boldsymbol{\eta};\text{vac}} \hat{H}_{\rm AF} \ket{g;\kappa}= i d (\boldsymbol{\eta} \cdot \K) \psi_{\kappa}\\\nonumber &\qq\times e^{i (\omega_{10}-\omega)t+i \K^\pa \cdot \R_\TA^\pa(t)- k^\pa z_\TA(t)}\,, \end{align} where $\delta^3(\kappa-\kappa_1) = \delta^2(\K^\pa-\K^\pa_1) \delta(\omega-\omega_1)$. The relevant matrix elements of the force operator are easily computed. We obtain \begin{align} &\bra{g;\text{vac}} \hat{F}_v \ket{\boldsymbol{\eta}; \kappa} = - i d k^\pa (\boldsymbol{\eta} \cdot \K) f_{\phi\theta} \psi_{\kappa}\\\nonumber &\qq\times e^{-i (\omega_{10} + \omega)t + i \K^\pa \cdot \R_\TA^\pa(t)-k^\pa z_\TA(t)}\,,\\ &\bra{\boldsymbol{\eta}; \kappa} \hat{F}_v \ket{g;\kappa_1,\kappa_2} =- i d k^\pa_1 (\boldsymbol{\eta} \cdot \K_1) f_{\phi_1 \theta} \psi_{\kappa_1}\\\nonumber &\qq\times e^{i (\omega_{10} - \omega_1)t + i \K^\pa_1\cdot \R_\TA^\pa(t)-k^\pa_1 z_\TA(t)} \delta^3(\kappa-\kappa_2)\\\nonumber &\qq+ (1 \leftrightarrow 2)\,, \end{align} where $f_{\phi\theta} =-\cos\theta + i \sin \theta \cos\phi$. \begin{subsection}{Internal Atomic Dynamics} We now compute the relevant transition amplitudes $c_n^{(p)}(t)$ necessary to evaluate the force to fourth order in perturbation theory. 
The coefficient $c_1^{(1)}(t)$ is given by \begin{align} \label{c11} c_1^{(1)}(t) &=- \tfrac{i}{\hbar} \int_0^t dt' \bra{\boldsymbol{\eta};\kappa} \hat{H}_{\rm AF}(t') \ket{g;\text{vac}}\\\nonumber &= \frac{i d (\boldsymbol{\eta} \cdot \K)^* \psi^*_{\kappa}}{\hbar (\omega_{10} +\omega')}\; e^{-i \K^\pa \cdot \R_0- k^\pa z_0} \left[ e^{i(\omega_{10}+\omega')t} -1 \right]\,, \end{align} where we have defined the complex frequency \begin{align} \omega'=\omega - v k^\pa \cos \phi \sin \theta + i v k^\pa \cos \theta\,, \end{align} and $\R_0=(x_0,y_0)$. The coefficient $c_2^{(2)}(t)$ is given by \begin{align} \label{c22} &c_2^{(2)}(t) =- \tfrac{i}{\hbar}\sum_{\boldsymbol{\eta}}\!\!\int\!\!d^3\kappa\!\!\int_{0}^t dt'c_1^{(1)}(t') \bra{g;\kappa_1 \kappa_2 }\hat{H}_{\rm AF}(t')\ket{\boldsymbol{\eta};\kappa}\\\nonumber &=\!- \frac{d^2 (\K_1 \cdot \K_2)^* \psi_{\kappa_1}^* \psi_{\kappa_2}^* }{\hbar^2} e^{-i (\K^\pa_1+\K^\pa_2) \cdot \R_0 - (k^\pa_1+k^\pa_2) z_0} \\ \nonumber & \times \left[ \frac{e^{i (\omega'_1+\omega'_2) t} -1}{\omega_1'+\omega_2'} \left( \frac{1}{\omega_{10}+\omega'_1} + \frac{1}{\omega_{10}+\omega'_2} \right) \right. \\ \nonumber & \left. + \frac{e^{-i (\omega_{10}-\omega'_1)t}-1}{(\omega_{10}-\omega'_1) (\omega_{10}+\omega'_2)} + \frac{e^{-i (\omega_{10}-\omega'_2)t}-1}{(\omega_{10}-\omega'_2) (\omega_{10}+\omega'_1)} \right], \end{align} with the separately shifted frequencies \begin{align} \omega'_j=\omega_j-v k^\pa_j\cos\phi_j \sin\theta + i v k^\pa_j \cos\theta\,.
\end{align} The coefficient $c_0^{(2)}(t)$ is given by \begin{align}\nonumber c_0^{(2)}(t) &=-\tfrac{i}{\hbar}\sum_{\boldsymbol{\eta}}\!\!\int\!\!d^3\kappa\!\!\int_0^t dt'c_1^{(1)}(t')\bra{g;\text{vac}}\hat{H}_{\rm AF}(t') \ket{\boldsymbol{\eta};\kappa}\\\nonumber &=-\tfrac{i d^2}{4\pi^3\hbar\e_0}\!\!\int_0^{\infty}\id\id\!\!d\omega \text{Im}r_{\rm p}(\omega) \!\!\int\!\!d^2\K^\pa \frac{k^\pa e^{-2 k^\pa z_0}}{\omega_{10} +\omega'} \\\label{c02} & \times \left[ \frac{e^{-2 k^\pa v \cos\theta t}-1}{-2 k^\pa v \cos\theta} - \frac{e^{-i(\omega_{10} + \omega')t -2 k^\pa v \cos\theta t}-1}{-i(\omega_{10} + \omega') -2 k^\pa v \cos\theta} \right]. \end{align} This coefficient involves the energy shift of the state $\ket{g;{\rm vac}}$ and the rate for the process $\ket{g;{\rm vac}} \rightarrow \ket{\boldsymbol{\eta};\kappa}$. In the limit $k^\pa v t \ll 1$ (small times or small velocities), the first term within the square brackets grows as $t$, while the second one is subleading in time (it is a sum of a modulated oscillatory function and a time-independent term). We can then approximate $c_0^{(2)}(t)$ as \begin{align} c_0^{(2)}(t) \simeq - \frac{i t}{\hbar} \delta E_g - t \frac{\Gamma_g}{2} , \end{align} and hence $1+ c_0^{(2)}(t) \simeq \exp[- \frac{i t}{\hbar} \delta E_g - t \frac{\Gamma_g}{2}]$, where $\delta E_g$ is the energy shift and $\Gamma_g$ is the rate. Performing a further expansion of the integrand in (\ref{c02}) in powers of $k^\pa v$ and carrying out the momentum integration, we obtain \begin{align} \delta E_g &\simeq-\tfrac{d^2}{8\pi^2 \e_0 z_0^3} \int_0^\infty\id\id\!\!d\w\;\tfrac{\IM r_p(\w)}{\w+\w_{10}} \left[ 1-\tfrac{3}{4}\tfrac{v^2(1+3\cos2\theta)} {(\w+\w_{10})^2z_0^2} \right], \label{Eg} \end{align} and \begin{align} \Gamma_g & \simeq -\tfrac{3 d^2v\cos\theta}{8\pi^2\hb\e_0z_0^4} \int_0^\infty\id\id\!\!d\w\;\tfrac{\IM r_p(\w)}{(\w+\w_{10})^2} . \label{Gg} \end{align} These equations for the shift and rate coincide with Eqs.
(\ref{eq:Ga01nres}) and (\ref{eq:dw01}) obtained in the Markovian approach, with the slight difference that in the latter the instantaneous height $z_\TA(t)$ rather than the initial height $z_0$ appears. We note that the rate (\ref{Gg}) vanishes for parallel motion, consistent with the exponentially small rate found in the perturbative approach for parallel motion studied in Refs.~\cite{Barton2010,Intravaia2015}. Finally, we compute the $c_1^{(3)}(t)$ coefficient, which we express as a sum of two contributions $c_1^{(3)}(t) = c_{1,0}^{(3)}(t) + c_{1,2}^{(3)}(t)$. The subscript 0 in the first term denotes contributions from the vacuum, and the subscript 2 in the second term denotes those from the two-photon sector. They are respectively given by \begin{align}\label{c13-0} &c_{1,0}^{(3)}(t)=-\tfrac{i}{\hbar}\!\!\int_0^t dt'c_0^{(2)}(t')\bra{\boldsymbol{\eta};\kappa} \hat{H}_{\rm AF}(t') \ket{g;\text{vac}}\\\nonumber &\q=\tfrac{d (\boldsymbol{\eta} \cdot \K)^* \psi_{\kappa}^*}{\hbar}\!\!\int_0^t dt' e^{i(\omega_{10}+\omega)t'-i \K^\pa \cdot \R^\pa_\TA(t')-k^\pa z_\TA(t')} \\\nonumber &\qq\qq\times t' \left( \tfrac{\Gamma_g}{2} + \tfrac{i \delta E_g}{\hbar} \right) \end{align} and \begin{align}\label{c13-2} &c_{1,2}^{(3)}(t)= \tfrac{i}{2\hbar}\!\!\int\!\!d^3\kappa_1 d^3\kappa_2\!\!\int_0^t dt'c_2^{(2)}(t')\bra{\boldsymbol{\eta};\kappa} \hat{H}_{\rm AF}(t')\ket{g; \kappa_1 \kappa_2}\\\nonumber &= \frac{i d^3}{\hbar^3} \psi_{\kappa}^* e^{-i \K^\pa \cdot \R_0} \int\!\!d^3\kappa_1 \frac{(\boldsymbol{\eta} \cdot \K_1) (\K_1 \cdot \K)^* |\psi_{\kappa_1}|^2 e^{-2k^\pa_1z_0}} {\omega'_1+\omega'} \\ \nonumber & \times \left\{ \left[ \frac{1}{\omega_{10}+\omega'_1} + \frac{1}{\omega_{10}+\omega'} \right] \right.
\\ \nonumber & \left[ \frac{e^{i(\omega_{10}+\omega'+2 i v k^\pa \cos\theta)t}-1}{\omega_{10}+\omega'+2 i v k^\pa \cos\theta} - \frac{e^{i(\omega_{10}-\omega'_1+2 i v k^\pa \cos\theta)t}-1}{\omega_{10}-\omega'_1+2 i v k^\pa \cos\theta} \right] \\ \nonumber & + \frac{1}{(\omega_{10}+\omega') (\omega_{10}-\omega'_1)} \\ \nonumber & \left. \left[ \frac{e^{-2 v k^\pa \cos\theta t}-1}{2 i v k^\pa \cos\theta} - \frac{e^{i(\omega_{10}-\omega'_1+2 i v k^\pa \cos\theta)t}-1}{\omega_{10}-\omega'_1+2 i v k^\pa \cos\theta} \right] \right\}. \end{align} \end{subsection} \begin{subsection}{Casimir-Polder and Friction Force: 2nd Order} The first non-vanishing contribution to the force is second order in the coupling, and is given by the term in Eq.(\ref{eq:fza}) containing only the $c_1^{(1)}(t)$ coefficient. We obtain \begin{align} \label{f2} &F^{(2)}_v(t)=2 \text{Re} \sum_{\boldsymbol{\eta}} \int\!\!d^3\kappa \bra{g;\text{vac}} \hat{F}_v \ket{\boldsymbol{\eta}; \kappa}c_1^{(1)}(t)\\\nonumber &=\frac{d^2}{2\pi^3\e_0}\int_0^{\infty}\id\id\!\!d\omega\!\!\int\!\!d^2\K^\pa k^{\pa2} e^{-2 k^\pa z_\TA(t)}\text{Im}r_{\rm p}(\omega) \\\nonumber & \times {\rm Re}\!\left[ \tfrac{f_{\phi\theta}}{\omega_{10}+\omega'} \left(1-e^{-i (\omega_{10}+\omega-v k^\pa \cos\phi \sin\theta)t} \right) \right]. \end{align} The second term within the square brackets leads to a modulated oscillatory contribution to the force, and averages out to zero after time averaging. The first term, on the other hand, gives a non-vanishing contribution, and its explicit form can be evaluated in the low-velocity limit. To this end, it is convenient to introduce the dimensionless variables $s=k^\pa z_\TA(t)$ and $y=v/[z_\TA(t) (\omega_{10}+\omega)]$. 
Then the force is rewritten as \begin{align}\label{f2tot} F^{(2)}_v(t) &=-\frac{d^2}{2\pi^3\e_0}\frac{\cos \theta}{z_\TA^4(t)}\int_0^\infty\id\id\!\!d\omega\;\frac{\text{Im} r_{\rm p}(\omega)}{\omega_{10}+\omega}\\\nonumber &\q\times\!\!\int_0^{2\pi}\id\id\!\!d\phi\!\int_0^\infty\id\id\!\!ds\,\frac{s^3 e^{-2s} (1-2 y s \cos \phi \sin \theta)}{(1-y s \cos\phi \sin\theta)^2+(y s \cos \theta)^2}. \end{align} In the adiabatic regime $y\ll 1$, in which the characteristic frequency of the motion $v/z_\TA(t)$ is much smaller than the atom's transition frequency $\omega_{10}$, we can express the force as the sum of two contributions $F_v^{(2)}(t) = F_{\rm CP}^{(2)}(t) + F_{\text{fr}}^{(2)}(t)$, where \begin{align}\nonumber F_{\rm CP}^{(2)}(t) &=-\frac{3 d^2}{8\pi^2\e_0} \frac{\cos \theta}{z^4_\TA(t)} \int_0^\infty\id\id\!\!d\omega\;\frac{\text{Im} r_{\rm p}(\omega)}{ \omega_{10}+\omega}\\\label{fcp2pert} &=-\frac{3 d^2}{8\pi^2\e_0} \frac{\cos \theta}{z^4_\TA(t)} \int_0^\infty\id\id\!\!d\xi\;\frac{\omega_{10}}{\omega^2_{10}+\xi^2}\;r_{\rm p}(i\xi) \end{align} is the projection of the standard Casimir-Polder force along the direction of the motion, evaluated at the instantaneous position of the atom. In the last step we have Wick-rotated to imaginary frequencies $\omega \rightarrow i \xi$. This perturbative result perfectly coincides with the expression \eqref{eq:Fcpd2} obtained via the Markovian approach. Note that for parallel motion ($\theta = \pi/2$) this projection is zero, as expected, since the direction of motion is orthogonal to the Casimir-Polder force. Also note that for vertical motion ($\theta=0, \pi$), $F_{\rm CP}^{(2)}(t)$ changes sign, which is simply due to the change of sign of the velocity vector.
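The Wick rotation leading to the second line of Eq.~\eqref{fcp2pert} can be verified directly. Assuming that $r_{\rm p}$ is analytic in the upper half of the complex frequency plane, so that it admits the Kramers-Kronig-type representation $r_{\rm p}(i\xi)=(2/\pi)\int_0^\infty d\omega\,\omega\,\text{Im}\, r_{\rm p}(\omega)/(\omega^2+\xi^2)$, one finds after exchanging the order of integration

```latex
\begin{align*}
\int_0^\infty\!\! d\xi\,\frac{\omega_{10}\,r_{\rm p}(i\xi)}{\omega_{10}^2+\xi^2}
&=\frac{2}{\pi}\int_0^\infty\!\! d\omega\,\omega\,\text{Im}\,r_{\rm p}(\omega)
\int_0^\infty\!\!\frac{d\xi\;\omega_{10}}{(\omega_{10}^2+\xi^2)(\omega^2+\xi^2)}\\
&=\frac{2}{\pi}\int_0^\infty\!\! d\omega\,\omega\,\text{Im}\,r_{\rm p}(\omega)\,
\frac{\pi}{2\,\omega\,(\omega+\omega_{10})}
=\int_0^\infty\!\! d\omega\,\frac{\text{Im}\,r_{\rm p}(\omega)}{\omega_{10}+\omega}\,,
\end{align*}
```

where we used the tabulated integral $\int_0^\infty d\xi/[(a^2+\xi^2)(b^2+\xi^2)]=\pi/[2ab(a+b)]$, reproducing the real-frequency form in the first line of Eq.~\eqref{fcp2pert}.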
The other force term $F_{\text{fr}}^{(2)}(t)$ is given by \begin{align}\nonumber F_{\text{fr}}^{(2)}(t) &=\frac{15 d^2 v^2 \cos \theta (1+\cos^2\theta)}{16\pi^2\e_0 z^6_\TA(t)}\int_0^\infty\id\id\!\!d\omega\,\frac{\text{Im} r_{\rm p}(\omega) }{(\omega_{10}+\omega)^3}\\\nonumber &=\frac{15 d^2 v^2 }{16\pi^2\e_0 z^6_\TA(t)} \int_0^{\infty}\id\id\!\!d\xi\,\frac{\omega_{10} (\omega_{10}^2-3 \xi^2)}{ (\omega_{10}^2+\xi^2)^3} r_{\rm p}(i \xi)\\\label{fric2pert} &\qq\times(1+\cos^2\theta)\cos \theta\,. \end{align} This equation coincides with Eq.~(\ref{f2fricMark}) obtained via the Markovian approach. Note that $F_{\text{fr}}^{(2)}(t)=0$ for parallel motion, in agreement with previous works in the literature that showed that quantum friction to second order in the coupling is vanishingly small (see, for example, Refs.~\cite{Barton2010,Intravaia2015}). \end{subsection} \begin{subsection}{Casimir-Polder and Friction Force: 4th Order, via Vacuum} We now compute the next order of the force, which is fourth-order in the coupling. To this end, we follow an approach similar to the one described in Appendix C of Ref.~\cite{Intravaia2015}. We first consider the part involving the mixed amplitude $c_0^{(2)*}(t) c_1^{(1)}(t)$ in Eq.~(\ref{eq:fza}) and the part of $c_1^{(3)}(t)$ going via vacuum, Eq.~(\ref{c13-0}). Integrating by parts and using Eq.~\eqref{c11}, $c_{1,0}^{(3)}(t)$ can be written as \begin{align} \label{inter1} & c_{1,0}^{(3)}(t)= c^{(2)}_0(t) \; c^{(1)}_1(t) - \frac{i d (\boldsymbol{\eta} \cdot \K)^* \psi^*_{\kappa}}{\hbar (\omega_{10} +\omega')}\; c^{(2)}_0(t) e^{-i \K^\pa \cdot \R_0- k^\pa z_0} \\\nonumber & + \left(- \frac{1}{\hbar} \delta E_g + i \frac{\Gamma_g}{2}\right) \frac{d (\boldsymbol{\eta} \cdot \K)^* \psi^*_{\kappa}}{\hbar (\omega_{10} +\omega')^2}\; e^{-i \K^\pa \cdot \R_0- k^\pa z_0} \\\nonumber & \times [e^{i(\omega_{10} +\omega')t}-1].
\end{align} Combining the first term above with $c_0^{(2)*}(t) c_1^{(1)}(t)$ results in $-\Gamma_g t c_1^{(1)}(t)$, and then the corresponding fourth-order force is \begin{equation} F^{(4)}_{v,0}(t)= -\Gamma_g t F^{(2)}_v(t), \label{f40} \end{equation} which, when combined with the force at second order $F^{(2)}_v (t)$, represents a loss of probability in the ground state. The subscript 0 in $F^{(4)}_{v,0}(t) $ denotes contributions from the vacuum. The contribution to the force of the second term in \eqref{inter1} is \begin{align} \label{van1} & -\frac{d^2}{2 \pi^3 \epsilon_0} \int_0^{\infty} d\omega \text{Im} r_{\rm p}(\omega) \int d^2k^\pa (k^\pa)^2 e^{-2 k^\pa z_0 - k^\pa v t \cos\theta} \\\nonumber & \times \text{Re} \left[ \frac{f_{\phi \theta}}{\omega_{10}+\omega'} \left(\frac{i t}{\hbar} \delta E_g + t \frac{\Gamma_g}{2}\right) e^{-i (\omega_{10} + \omega - k^\pa v \cos\phi \sin \theta)t} \right]. \end{align} This is a modulated oscillatory function of time, and vanishes after time-averaging. The contribution to the force of the third term in \eqref{inter1} is \begin{align} \label{van2} & \frac{d^2}{2 \pi^3 \hbar \epsilon_0} \int_0^{\infty} d\omega \text{Im} r_{\rm p}(\omega) \int d^2k^\pa (k^\pa)^2 e^{-2 k^\pa z_\TA(t)} \\\nonumber & \times \text{Re} \left\{ \left( - \frac{1}{\hbar} \delta E_g + i \frac{\Gamma_g}{2} \right)\frac{f_{\phi \theta}}{(\omega_{10}+\omega')^2} \right\} + \text{m.o.t.} , \end{align} where ``m.o.t.'' denotes modulated oscillatory terms -- similar to the ones appearing in \eqref{van1} -- that vanish after time-averaging. In order to evaluate \eqref{van2}, we rewrite the second line in terms of the dimensionless variables $s=k^\pa z_\TA(t)$ and $y=v/[z_\TA(t) (\omega_{10}+\omega)]$ introduced above, and perform an expansion in powers of velocity.
To order $y^0$, we obtain a $d^4$-correction to the Casimir-Polder force \eqref{fcp2pert}, \begin{align}\label{FCP4} F_{\rm CP}^{(4)}(t) &=\frac{3 d^2 \delta E^{(0)}_g}{8\pi^2 \hbar \e_0} \frac{\cos \theta}{z^4_\TA(t)} \int_0^\infty\id\id\!\!d\omega\;\frac{\text{Im} r_{\rm p}(\omega)}{(\omega_{10}+\omega)^2}\,, \end{align} where $\delta E^{(0)}_g$ is the velocity-independent term of the shift $\delta E_g$ defined in \eqref{Eg}. Note that this correction identically vanishes for parallel motion. To order $y^1$, we obtain a $d^4$-correction to the friction force \eqref{fric2pert}, \begin{align}\label{Ffr4} F_{\rm fr}^{(4)}(t) &=- \frac{3 d^2 \Gamma_g v (1+\cos^2 \theta)}{8\pi^2 \hbar \e_0 z^5_\TA(t)} \int_0^\infty\id\id\!\!d\omega\;\frac{\text{Im} r_{\rm p}(\omega)}{(\omega_{10}+\omega)^3}. \end{align} Recalling the definition of the rate $\Gamma_g$ in \eqref{Gg}, we conclude that $F_{\rm fr}^{(4)}(t)$ goes as $v^2$ and vanishes for parallel motion. \end{subsection} \begin{subsection}{Casimir-Polder and Friction Force: 4th Order, via Two Photons}\label{sec:v3} The final piece of the fourth-order force contains two contributions. One arises from the part of $c_1^{(3)}(t)$ that involves the two-photon sector, i.e., $c_{1,2}^{(3)}(t)$ in Eq.(\ref{c13-2}), and another one from the coherence between the one- and two-photon sectors, the last term in Eq.(\ref{eq:fza}). We denote them by $F^{(4)[03]}_{v,2}(t)$ and $F^{(4)[12]}_{v,2}(t)$, respectively, and we compute them separately. The subscript 2 in $F^{(4)}_{v,2}(t) $ denotes contributions from the two-photon sector.
They read \begin{align}\nonumber &F^{(4)[03]}_{v,2}(t) = \frac{d^4}{4 \pi \epsilon_0 \hbar^3}\;{\rm Re}\!\!\int\!\!d^3\kappa_1 d^3\kappa_2| \K_1 \cdot \K_2 |^2|\psi_{\kappa_1}|^2 |\psi_{\kappa_2}|^2\\\nonumber &\q\times e^{-2(k^\pa_1+k^\pa_2) z_\TA(t)} \left[ \tfrac{k^\pa_1 f_{\phi_1\theta}}{\omega_{10}+\omega'_1 + 2 i v k^\pa_2 \cos\theta} + (1 \leftrightarrow 2) \right]\\ &\q\times \tfrac{1}{\omega'_1 + \omega'_2}\left[ \tfrac{1}{\omega_{10}+\omega'_2} + (1 \leftrightarrow 2) \right] + \text{m.o.t.}, \end{align} and \begin{align}\nonumber &F^{(4)[12]}_{v,2}(t) = \frac{d^4}{4 \pi \epsilon_0 \hbar^3}\;{\rm Re}\!\!\int d^3\kappa_1 d^3\kappa_2 | \K_1 \cdot \K_2 |^2|\psi_{\kappa_1}|^2 |\psi_{\kappa_2}|^2\\\nonumber &\q\times e^{-2(k^\pa_1+k^\pa_2) z_\TA(t)} \left[ \tfrac{k^\pa_1 f_{\phi_1\theta}}{\omega_{10}+\omega'_2} + (1 \leftrightarrow 2) \right]\\ &\q\times \tfrac{1}{\omega'_1 + \omega'_2}\left[ \tfrac{1}{\omega_{10}+\omega'_2} + (1 \leftrightarrow 2) \right] + \text{m.o.t.}. \end{align} The modulated oscillatory (m.o.t.) contributions vanish after time-averaging, and we will discard them in the following. Note that the two equations above have the same structure, except for the bracket in the second line of each of them. Defining their sum as $\Sigma^{(4)}(t) = F^{(4)[03]}_{v,2}(t) + F^{(4)[12]}_{v,2}(t)$, we obtain \begin{align}\label{Sigma4} &\Sigma^{(4)}(t) = \frac{d^4}{4 \pi \epsilon_0\hbar^3}\;{\rm Re}\!\!\int\!\!d^3\kappa_1 d^3\kappa_2| \K_1 \cdot \K_2 |^2|\psi_{\kappa_1}|^2 |\psi_{\kappa_2}|^2\\\nonumber &\q\times \frac{ e^{-2(k^\pa_1+k^\pa_2) z_\TA(t)} }{\omega'_1 + \omega'_2}\left[ \frac{1}{\omega_{10}+\omega'_2} + (1 \leftrightarrow 2) \right]\\\nonumber &\q\times\left[ k^\pa_1 f_{\phi_1\theta} \left(\tfrac{1}{\omega_{10}+\omega'_1 + 2 i v k^\pa_2 \cos\theta} + \tfrac{1}{\omega_{10}+\omega'_2}\right)+ (1 \leftrightarrow 2) \right] . \end{align} In order to evaluate Eq.
(\ref{Sigma4}), we proceed as in the previous subsections, and define variables $y_1=v/[z_\TA(t) (\omega_{10}+\omega_1)]$ and $y_2=v/[z_\TA(t) (\omega_{10}+\omega_2)]$, and perform an expansion in powers of $y_1$ and $y_2$. The calculations are quite cumbersome, and here we only report the main results. To lowest order in velocity (i.e., terms proportional to $y_1^0 y_2^0$), we obtain \begin{align}\nonumber \Sigma^{(4)}_0(t) &=-\frac{3 d^4}{128 \pi^3 \hbar\e_0} \frac{\cos\theta}{z_\TA^7(t)} \int_0^{\infty}\id\id\!\!d\omega_1 d\omega_2\text{Im} r_{\rm p}(\omega_1) \text{Im} r_{\rm p}(\omega_2)\\ &\q\times \frac{(2 \omega_{10}+\omega_1+\omega_2)^2}{(\omega_1 + \omega_2) (\omega_{10}+\omega_1)^2 (\omega_{10}+\omega_2)^2}\,. \end{align} The subscript $0$ in $\Sigma^{(4)}_0(t)$ denotes zero-order in velocity. Hence, $\Sigma^{(4)}_0(t)$ is a correction to the Casimir-Polder force \eqref{fcp2pert}, coming from processes involving the emission or absorption of two photons. The linear-in-velocity force (arising from terms proportional to $y_1^1 y_2^0$ and $y_1^0 y_2^1$) vanishes identically, i.e. $\Sigma^{(4)}_1(t) =0$. The next non-vanishing order is quadratic in velocity (it arises from terms proportional to $y_1^2 y_2^0$, $y_1^0 y_2^2$, and $y_1^1 y_2^1$), and the resulting force is $\Sigma^{(4)}_2(t) \propto v^2 \cos\theta / z_\TA^9(t)$ (the prefactor is a complicated integral over $\omega_1$ and $\omega_2$, and we do not report it here). Note that for parallel motion ($\theta=\pi/2$) both $\Sigma^{(4)}_0(t)$ and $\Sigma^{(4)}_2(t)$ are zero, and the first non-vanishing term is proportional to $v^3$, in agreement with the result found in Ref.~\cite{Intravaia2015}.
\end{subsection} \end{section} \begin{section}{Conclusions} \begin{table} \begin{tabular}{|m{2cm}|m{3cm}|m{3cm}|} \hline \rule[-2ex]{0pt}{5.5ex} & \q Markov & \qq Perturbation \\ \hline\hline \rule[-2ex]{0pt}{5.5ex} \q$\pa$ shift & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ \\ \hline \rule[-2ex]{0pt}{5.5ex} \q$\perp$ shift & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ \\ \hline\hline \rule[-2ex]{0pt}{5.5ex} \q$\pa$ rate & \qq exp. small & \qq 0 \\ \hline \rule[-2ex]{0pt}{5.5ex} \q$\perp$ rate & \qq$\frac{v}{\w_{10}z_\TA}$ & \qq$\frac{v}{\w_{10}z_\TA}$ \\ \hline\hline \rule[-2ex]{0pt}{5.5ex} \q$\pa$ force $d^2$ & \qq exp. small & \qq 0 \\ \hline \rule[-2ex]{0pt}{5.5ex} \q$\perp$ force $d^2$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ \\ \hline\hline \rule[-2ex]{0pt}{5.5ex} \q$\pa$ force $d^4$ & \qq$\left(\frac{v}{\omega_{10}z_\TA}\right)\left(\frac{\Ga_1}{\omega_{10}}\right)$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!3}$ \\ \hline \rule[-2ex]{0pt}{5.5ex} \q$\perp$ force $d^4$ & \qq$\left(\frac{v}{\omega_{10}z_\TA}\right)\left(\frac{\Ga_1}{\omega_{10}}\right)$ & \qq$\left(\!\frac{v}{\w_{10}z_\TA}\!\right)^{\!2}$ \\ \hline \end{tabular} \caption{Comparison of the motion-induced corrections to both the internal dynamics of a ground-state atom moving in front of a macroscopic body and the Casimir-Polder force that acts upon that atom. The table gives the scaling of shifts, rates and forces with increasing atomic speed $v$, bare atomic transition frequency $\omega_{10}$, atom-surface separation $z_\TA$ and atomic rate of spontaneous decay $\Ga_1$. Here perpendicular ($\perp$) refers to an atom moving towards the body. 
The results obtained via Markovian quantum master equations (``Markov'') and time-dependent perturbation theory (``Perturbation'') agree for the level shift and rate, as well as the $d^2$ level friction force, but differ for the $d^4$ friction force.} \label{tab:results} \end{table} We summarize our results for the motion-induced corrections to both the internal dynamics -- energy level and rate of transition -- of a ground-state atom and the Casimir-Polder force that acts upon that very atom in Table~\ref{tab:results}. In this table, non-parallel motion is represented by its extreme, i.e., perfectly vertical motion of the atom towards the surface. The results for such motion are contrasted with the ones for the parallel scenario. In both cases, the Markovian approach and time-dependent perturbation theory agree concerning leading-order dynamical corrections to level shifts (compare Eqs.~\eqref{eq:dw01} and \eqref{Eg}) and decay rates (compare Eqs.~\eqref{eq:Ga01res}, \eqref{eq:Ga01nres} and \eqref{Gg}). Note that the Markovian approach predicts an exponentially small parallel rate, while this rate is identically zero in the perturbative approach. Regarding the quantum frictional force, both approaches agree to second order in the atom-field coupling. For example, they both predict a vanishing force for parallel motion, and a $v^2$ scaling for vertical motion (compare Eqs.~\eqref{f2fricMark} and \eqref{fric2pert}). In contrast, the two approaches differ to fourth order in the coupling, irrespective of the direction of motion. For example, within the Markovian approach the force is linear in $v$ for all directions, while according to time-dependent perturbation theory the friction force changes qualitatively, from a $v^3$ behavior for parallel motion to a $v^2$ scaling for vertical motion.
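The relative sizes of the $d^4$ scalings collected in Table~\ref{tab:results} are easily compared numerically. The following sketch (illustrative only) uses the representative parameter values quoted in this paper's closing discussion ($v=10^3\,$m/s, $\omega_{10}=10\,$THz, $\Gamma_1=10^9\,$s$^{-1}$, $z_\TA=1\,$nm).

```python
# Quick numerical comparison (illustrative only) of the d^4 friction-force
# scalings of Table I, expressed relative to the static Casimir-Polder force.
v = 1e3        # atomic speed [m/s]
w10 = 1e13     # bare transition frequency, 10 THz [rad/s]
Gamma1 = 1e9   # rate of spontaneous decay [1/s]
zA = 1e-9      # atom-surface separation, 1 nm [m]

x = v / (w10 * zA)               # small dimensionless velocity, here 0.1

markov_d4 = x * Gamma1 / w10     # Markovian scaling, any direction
pert_parallel_d4 = x**3          # perturbative scaling, parallel motion
pert_vertical_d4 = x**2          # perturbative scaling, vertical motion

# The perturbative vertical result is the largest, roughly 1e-2: the
# friction force reaches about 1% of the static Casimir-Polder force.
print(markov_d4, pert_parallel_d4, pert_vertical_d4)
```

Even in this optimistic scenario the vertical $d^4$ scaling dominates the Markovian one by three orders of magnitude, yet the force itself remains a percent-level correction.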
As explained in more detail in the introduction, at this point in time we cannot decide whether either of the two results is flawed -- and if so, for which reason -- or whether they merely apply to different temporal regimes. Both the Markovian and the perturbative approach rely on approximations which restrict their respective applicability. The Markov approximation presupposes exponential decay of both one- and two-point expectation values on all timescales, which in turn implies perfectly Lorentzian power spectra of all observables. This restricts the realm of validity of the results thus obtained in a twofold manner: firstly, the Markovian results only apply to a spectral range near atomic and surface-plasmon resonances and secondly, as a consequence, they only apply to timescales smaller than those where algebraic decay sets in. More interesting when comparing to perturbation theory, however, is the fact that the Markovian results also do not apply to timescales shorter than the electromagnetic field's auto-correlation time. It is on this timescale that memory effects lead to transient behavior which is not resolved when applying the Markov approximation. These short times -- fractions of the atomic excited state's lifetime -- though, are the domain of time-dependent perturbation theory. Therefore, one need not be surprised if Markovian and perturbative results for both the internal dynamics of the atom and the quantum friction force do not agree. However, it would be desirable to map out the transition from one regime to the other or to verify the respective results by means of a third theoretical method or experiment. Finally, recall that the facilitation of quantum friction force measurements -- which as of now are far out of reach -- was the original motivation for this work.
Inspired by the finding that motion-induced corrections to the internal dynamics of a vertically moving atom exceed the ones obtained for the parallel-motion scenario by an order of magnitude \cite{Klatt2016}, we set out to study whether such qualitative changes may likewise be found for the quantum friction force. After all, the physics leading to both phenomena is identical. Looking at the results summarized in Table~\ref{tab:results}, we can conclude that there are indeed qualitative changes in the velocity dependence of the force when changing from parallel to vertical relative motion of the atom. While, up to second order in the atom-field coupling, the quantum friction force is exponentially small for parallel motion, it is found to scale quadratically with the relative velocity in the vertical case. Moreover, it is found to do so consistently in both the Markovian and the perturbative approach. If the terms of fourth order in the coupling are considered as well, the Markovian approach only predicts a change in the numerical prefactor of the force when changing from parallel to vertical motion, whereas time-dependent perturbation theory indeed suggests an increase by one order of magnitude. However, we must recognize that even for optimistic parameter estimates such as $v\!=\!10^3\,\T{m}/\T{s}$, $\w_{10}\!=\!10\,\T{THz}$, $\Ga_1\!=\!10^9\,\T{s}^{-1}$ and $z_\TA\!=\!1\,\T{nm}$, the best-case scenario, i.e., the largest prediction for the quantum friction force among the ones given in Tab.~\ref{tab:results}, \begin{align} F_\T{fric}\propto\left(\!\tfrac{v}{\w_{10}z_\TA}\!\right)^{\!2}F_\T{CP}\,, \end{align} amounts to only about 1\% of the static Casimir-Polder force and thereby continues to elude detection. \end{section} \begin{section}{Acknowledgments} This work was supported by the DFG (Grants BU 1803/3-1476, GRK 2079/1), the Freiburg Institute for Advanced Studies and the LANL LDRD program. 
M.B.F. would like to thank ANPCyT, CONICET, UBA, LANL and CNLS. We are grateful to G. Barton, C. Henkel, F. Intravaia, F. Mazzitelli, V. Mkrtchian and S. Scheel for discussions. \end{section} \end{document}
\begin{document} \title[Functional calculus on Hilbert space] {Functional calculus for a bounded $C_0$-semigroup on Hilbert space} \author[L. Arnold]{Loris Arnold} \email{[email protected]} \address{Laboratoire de Math\'ematiques de Besan\c con, UMR 6623, CNRS, Universit\'e Bourgogne Franche-Comt\'e, 25030 Besan\c{c}on Cedex, France} \author[C. Le Merdy]{Christian Le Merdy} \email{[email protected]} \address{Laboratoire de Math\'ematiques de Besan\c con, UMR 6623, CNRS, Universit\'e Bourgogne Franche-Comt\'e, 25030 Besan\c{c}on Cedex, France} \date{\today} \maketitle \begin{abstract} We introduce a new Banach algebra $\mathcal{A}(\mathbb{C}_+)$ of bounded analytic functions on $\mathbb{C}_+=\{z\in\mathbb{C}\, :\, {\rm Re}(z)>0\}$ which is an analytic version of the Figa-Talamanca-Herz algebras on $\mathbb{R}$. Then we prove that the negative generator $A$ of any bounded $C_0$-semigroup on Hilbert space $H$ admits a bounded (natural) functional calculus $\rho_A\colon \mathcal{A}(\mathbb{C}_+)\to B(H)$. We prove that this is an improvement of the bounded functional calculus $\mathcal{B}_0(\mathbb{C}_+)\to B(H)$ recently devised by Batty-Gomilko-Tomilov on a certain Besov algebra $\mathcal{B}_0(\mathbb{C}_+)$ of analytic functions on $\mathbb{C}_+$, by showing that $\mathcal{B}_0(\mathbb{C}_+)\subset \mathcal{A}(\mathbb{C}_+)$ and $\mathcal{B}_0(\mathbb{C}_+)\not= \mathcal{A}(\mathbb{C}_+)$. In the Banach space setting, we give similar results for negative generators of $\gamma$-bounded $C_0$-semigroups. The study of $\mathcal{A}(\mathbb{C}_+)$ involves dealing with Fourier multipliers on the Hardy space $H^1(\mathbb{R})\subset L^1(\mathbb{R})$ of analytic functions. \end{abstract} \vskip 0.8cm \noindent {\it 2000 Mathematics Subject Classification:} 47A60, 30H05, 47D06. \noindent {\it Key words:} Functional calculus, Semigroups, Hardy spaces, Fourier multipliers, Besov spaces, $\gamma$-boundedness. 
\vskip 1cm \section{Introduction}\label{Intro} Let $H$ be a Hilbert space and let $-A$ be the infinitesimal generator of a bounded $C_0$-semigroup $(T_t)_{t\geq 0}$ on $H$. To any $b\in L^1(\mathbb{R}_+)$, one may associate the operator $\Gamma(A,b)\in B(H)$ defined by $$ [\Gamma(A,b)](x) = \int_0^\infty b(t) T_t(x)\, dt,\qquad x\in H. $$ The mapping $b\mapsto \Gamma(A,b)$ is the so-called Hille-Phillips functional calculus (\cite{hp}, see also \cite[Section 3.3]{haa1}) and we obviously have $$ \norm{\Gamma(A,b)}\leq C\norm{b}_1,\qquad b\in L^1(\mathbb{R}_+), $$ where $C=\sup_{t\geq 0}\norm{T_t}$. This holds true as well for any bounded $C_0$-semigroup on Banach space. However we focus here on semigroups acting on Hilbert space. If $(T_t)_{t\geq 0}$ is a contractive semigroup (i.e. $\norm{T_t}\leq 1$ for all $t\geq 0$) on $H$, then we have the much stronger estimate $\norm{\Gamma(A,b)}\leq \norm{\widehat{b}}_\infty$ for all $b\in L^1(\mathbb{R}_+)$, where $\widehat{b}$ denotes the Fourier transform of $b$. This is a semigroup version of von Neumann's inequality, see \cite[Section 7.1.3]{haa1} for a proof. Hence more generally, if $(T_t)_{t\geq 0}$ is similar to a contractive semigroup, then there exists a constant $C\geq 1$ such that \begin{equation}\label{PB} \norm{\Gamma(A,b)}\leq C\norm{\widehat{b}}_\infty,\qquad b\in L^1(\mathbb{R}_+). \end{equation} However not all negative generators of bounded $C_0$-semigroups satisfy such an estimate. Indeed if $A$ is sectorial of type $<\frac{\pi}{2}$, it follows from \cite[Section 3.3]{haa1} that $A$ satisfies an estimate of the form (\ref{PB}) exactly when $A$ has a bounded $H^\infty$-functional calculus, see Subsection \ref{H-infty} for more on this. The motivation for this paper is the search for sharp estimates of $\norm{\Gamma(A,b)}$, and of the norms of other functions of $A$, valid for all negative generators of bounded $C_0$-semigroups. 
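As an elementary illustration of the Hille-Phillips formula (our sketch, not part of the paper): for $b(t)=e^{-\lambda t}$ the calculus produces the resolvent, $\Gamma(A,b)=(\lambda I + A)^{-1}$. In finite dimensions, where $T_t=e^{-tA}$ for a matrix $A$ with spectrum in $\mathbb{C}_+$, this is easy to check numerically (assuming NumPy is available; the matrix below is our own choice):

```python
import numpy as np

# Finite-dimensional sketch: for the semigroup T_t = exp(-tA) and
# b(t) = exp(-lam*t), the Hille-Phillips operator
#   Gamma(A, b) = int_0^oo b(t) T_t dt
# equals the resolvent (lam*I + A)^{-1}.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # spectrum {1, 2}: exp(-tA) is bounded
lam = 1.0

def semigroup(t):
    # closed form of exp(-tA) for this (triangular) choice of A
    return np.array([[np.exp(-t), np.exp(-2 * t) - np.exp(-t)],
                     [0.0, np.exp(-2 * t)]])

# Truncated trapezoidal quadrature of the defining integral.
ts = np.linspace(0.0, 40.0, 4001)
vals = np.array([np.exp(-lam * t) * semigroup(t) for t in ts])
dt = ts[1] - ts[0]
gamma = dt * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

resolvent = np.linalg.inv(lam * np.eye(2) + A)
print(np.allclose(gamma, resolvent, atol=1e-4))  # True
```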
A major breakthrough was achieved by Haase \cite[Corollary 5.5]{haa3} who proved an estimate \begin{equation}\label{MH} \norm{\Gamma(A,b)}\leq C\norm{L_b}_{\mathcal{B}_0}, \end{equation} where $\norm{\,\cdotp}_{\mathcal{B}_0}$ denotes the norm with respect to a suitable Besov algebra $\mathcal{B}_0(\mathbb{C}_+)$ of analytic functions, and $$ L_b\colon \mathbb{C}_+=\{{\rm Re}(\,\cdotp)>0\}\to \mathbb{C},\quad L_b(z)=\int_0^\infty b(t)e^{-tz}\, dt, $$ is the Laplace transform of $b$. More recently, Batty-Gomilko-Tomilov \cite{bgt1} (see also \cite{bgt2}) extended Haase's result by providing an explicit construction of a bounded functional calculus $\mathcal{B}_0(\mathbb{C}_+)\to B(H)$ associated with $A$, extending the Hille-Phillips functional calculus. It is worth mentioning the related works by Vitse \cite{vit1} and White \cite{white} (see also Subsection \ref{Added}). In this paper we introduce the space $\mathcal{A}(\mathbb{R})\subset H^\infty(\mathbb{R})$ defined by \[ \mathcal{A}(\mathbb{R})= \Big\{ F= \sum_{k=1}^{\infty}f_k\star h_k\, :\, (f_k)_{k\in \mathbb{N}}\subset BUC(\mathbb{R}), \, (h_k)_{k\in \mathbb{N}}\subset H^1(\mathbb{R}), \, \sum_{k=1}^{\infty}\normeinf{f_k}\norme{h_k}_1 < \infty \Big\}, \] equipped with the norm $\norme{F}_{\mathcal{A}} = \inf \{\sum_{k=1}^{\infty}\normeinf{f_k}\norme{h_k}_1\}$, where the infimum runs over all sequences $(f_k)_{k\in \mathbb{N}} \subset BUC(\mathbb{R})$ and $(h_k)_{k\in \mathbb{N}} \subset H^1(\mathbb{R})$ such that $\sum_{k=1}^{\infty}\normeinf{f_k}\norme{h_k}_1<\infty$ and $F= \sum_{k=1}^{\infty}f_k\star h_k$. The definition of this space is inspired by Peller's paper \cite{pel1}, where a discrete analogue of $\mathcal{A}(\mathbb{R})$ was introduced to study functions of power bounded operators on Hilbert space. We also refer to \cite{Wells} for earlier results on this theme. 
Furthermore $\mathcal{A}(\mathbb{R})$ can be regarded as an $H^1(\mathbb{R})$-version of the Figa-Talamanca-Herz algebras $A_p(\mathbb{R})$, $1<p<\infty$, for which we refer e.g. to \cite[Chapter 3]{der}. We prove in Section \ref{AA0} that $\mathcal{A}(\mathbb{R})$ is indeed a Banach algebra for pointwise multiplication. Next in Section \ref{CF} we introduce the natural half-plane version $\mathcal{A}(\mathbb{C}_+)\subset H^\infty(\mathbb{C}_+)$ of $\mathcal{A}(\mathbb{R})$, we prove that it contains $L_b$ for all $b\in L^1(\mathbb{R}_+)$, and we show (Corollary \ref{main3}) that whenever $A$ is the negative generator of a bounded $C_0$-semigroup $(T_t)_{t\geq 0}$ on Hilbert space, there is a unique bounded homomorphism $\rho_A\colon\mathcal{A}(\mathbb{C}_+)\to B(H)$ such that \begin{equation}\label{compatible} \rho_A(L_b) =\Gamma(A,b),\qquad b\in L^1(\mathbb{R}_+). \end{equation} In particular we show that $$ \norm{\Gamma(A,b)}\leq\Bigl(\sup_{t\geq 0}\norm{T_t}\Bigr)^2\,\norm{L_b}_{\mathcal{A}},\qquad b\in L^1(\mathbb{R}_+). $$ This improves Haase's estimate (\ref{MH}) mentioned above. Our work also improves \cite[Theorem 4.4]{bgt1} in the Hilbert space case. Indeed we show in Section \ref{Besov} that the Besov algebra considered in \cite{haa1,bgt1} is included in $\mathcal{A}(\mathbb{C}_+)$, with an estimate $\norm{\,\cdotp}_{\mathcal{A}}\lesssim \norm{\,\cdotp}_{\mathcal{B}_0}$, and we also show that the converse inclusion does not hold. In general, our main result (Corollary \ref{main3}) does not hold true on non-Hilbertian Banach spaces (see the start of Section \ref{Banach}). In Section \ref{Banach}, following ideas from \cite{arn2, arn1, lem1}, we give a Banach space version of Corollary \ref{main3}, using the notion of $\gamma$-boundedness. 
Namely we show that if $A$ is the negative generator of a bounded $C_0$-semigroup $(T_t)_{t\geq 0}$ on a Banach space $X$, then the set $\{T_t\, :\, t\geq 0\}\subset B(X)$ is $\gamma$-bounded if and only if there exists a $\gamma$-bounded homomorphism $\rho_A\colon\mathcal{A}(\mathbb{C}_+)\to B(X)$ satisfying (\ref{compatible}). This should be regarded as a semigroup version of \cite[Theorem 4.4]{lem1}, where a characterization of $\gamma$-bounded continuous representations of amenable groups was established. Our results make crucial use of Fourier multipliers on the Hardy space $H^1(\mathbb{R})$. Section \ref{MultH1} is devoted to this topic. In particular we establish the following result of independent interest: if a bounded operator $T\colon H^1(\mathbb{R})\to H^1(\mathbb{R})$ commutes with translations, then there exists a bounded continuous function $m\colon\mathbb{R}_+^*\to\mathbb{C}$ such that $\norm{m}_\infty\leq\norm{T}$ and for any $h\in H^1(\mathbb{R})$, $\widehat{T(h)} = m\widehat{h}$. \subsection*{Notation and convention.} We use the notations $\mathbb{R}_+=[0,\infty)$, $\mathbb{R}_+^*=(0,\infty)$ and $\mathbb{R}_-=(-\infty,0]$ on the real line. We will use the following open half-planes of $\mathbb{C}$, $$ \mathbb{C}_+ := \{ z \in \mathbb{C}\, :\, {\rm Re}(z) > 0 \}, \quad P_+ := \{ z \in \mathbb{C}\, :\, {\rm Im}(z) > 0 \}, \quad P_- := \{ z \in \mathbb{C}\, :\, {\rm Im}(z) < 0 \}. $$ Also for any real $\alpha\in\mathbb{R}$, we set \begin{equation}\label{HP0} {\mathcal H}_\alpha = \{ z \in \mathbb{C}\, :\, {\rm Re}(z) > \alpha \}. \end{equation} In particular, ${\mathcal H}_0=\mathbb{C}_+$. For any $s\in \mathbb{R}$, we let $\tau_s\colon L^1(\mathbb{R})+L^\infty(\mathbb{R})\to L^1(\mathbb{R})+L^\infty(\mathbb{R})$ denote the translation operator defined by $$ \tau_sf(t) = f(t-s),\qquad t\in\mathbb{R}, $$ for any $f\in L^1(\mathbb{R})+L^\infty(\mathbb{R})$. 
The Fourier transform of any $f\in L^1(\mathbb{R})$ is defined by \[ \widehat{f}(u) = \int_{-\infty}^\infty f(t)e^{-itu}\,dt, \qquad u\in\mathbb{R}. \] Sometimes we write $\mathcal{F}(f)$ instead of $\widehat{f}$. We will also let $\mathcal{F}(f)$ or $\widehat{f}$ denote the Fourier transform of any $f\in L^1(\mathbb{R})+L^\infty(\mathbb{R})$. Wherever it makes sense, we will use $\mathcal{F}^{-1}$ to denote the inverse Fourier transform. We will use several times the following elementary result (which follows from Fubini's theorem and the Fourier inversion theorem). \begin{lem}\label{Tool} Let $f_1,f_2\in L^1(\mathbb{R})$ be such that either $\widehat{f_1}$ or $\widehat{f_2}$ belongs to $L^1(\mathbb{R})$. Then $$ \int_{-\infty}^\infty f_1(t)f_2(t)\, dt \,=\,\frac{1}{2\pi}\, \int_{-\infty}^\infty \widehat{f_1}(u)\widehat{f_2}(-u)\, du. $$ \end{lem} The norm on $L^p(\mathbb{R})$ will be denoted by $\norm{\,\cdotp}_p$. We let $C_0(\mathbb{R})$ (resp. $BUC(\mathbb{R})$, resp. $C_b(\mathbb{R})$) denote the Banach algebra of continuous functions on $\mathbb{R}$ which vanish at infinity (resp. of bounded and uniformly continuous functions on $\mathbb{R}$, resp. of bounded continuous functions on $\mathbb{R}$), equipped with the sup-norm $\norm{\,\cdotp}_\infty$. We set \begin{equation}\label{C00} C_{00}(\mathbb{R}) = \{f\in L^1(\mathbb{R})\, :\, \widehat{f}\in L^1(\mathbb{R})\}. \end{equation} This is a dense subspace of $C_0(\mathbb{R})$. Further we let $\mathcal{S}(\mathbb{R})$ denote the Schwartz space on $\mathbb{R}$ and we let $M(\mathbb{R})$ denote the Banach algebra of all bounded Borel measures on $\mathbb{R}$. We will use the identification $M(\mathbb{R})\simeq C_0(\mathbb{R})^*$ (Riesz's theorem) provided by the duality pairing \begin{equation}\label{Riesz} \langle\mu,f\rangle \,=\, \int_{\mathbb{R}} f(-t)\, d\mu(t),\qquad \mu\in M(\mathbb{R}),\ f\in C_0(\mathbb{R}). 
\end{equation} The use of a minus sign in this duality pairing will make the study of $\mathcal{A}(\mathbb{R})$ easier. For any non empty open set $\mathcal{O}\subset\mathbb{C}$, we let $H^{\infty}(\mathcal{O})$ denote the Banach algebra of all bounded analytic functions on $\mathcal{O}$, equipped with the sup-norm $\norm{\,\cdotp}_\infty$. Let $X,Y$ be (complex) Banach spaces. We let $B(X,Y)$ denote the Banach space of all bounded operators $X\to Y$. We simply write $B(X)$ instead of $B(X,X)$, when $Y=X$. We let $I_X$ denote the identity operator on $X$. The domain of an operator $A$ on some Banach space $X$ is denoted by ${\rm Dom}(A)$. Its kernel and range are denoted by ${\rm Ker}(A)$ and ${\rm Ran}(A)$, respectively. If $z\in\mathbb{C}$ belongs to the resolvent set of $A$, we let $R(z,A)=(zI_X-A)^{-1}$ denote the corresponding resolvent operator. \section{Fourier multipliers on $H^1(\mathbb{R})$}\label{MultH1} We denote by $H^1(\mathbb{R})$ the classical Hardy space, defined as the closed subspace of $L^1(\mathbb{R})$ of all functions $h$ such that $\widehat{h}(u) = 0$ for all $u\leq 0$. For any $1<p< \infty $, we denote by $H^p(\mathbb{R})$ the closure of $H^1(\mathbb{R})\cap L^p(\mathbb{R})$ in $L^p(\mathbb{R})$. Also we let $H^\infty(\mathbb{R})$ denote the $w^*$-closure of $H^1(\mathbb{R})\cap L^\infty(\mathbb{R})$ in $L^\infty(\mathbb{R})$. We recall (see e.g. \cite{gar}, \cite{hof} or \cite{koo}) that for any $1\leq p\leq\infty$, $H^p(\mathbb{R})$ coincides with the subspace of all functions $f\in L^p(\mathbb{R})$ whose Poisson integral ${\mathcal P}[f]\colon P_+\to\mathbb{C}$ is analytic. It is well-known that $H^p(\mathbb{R})$ also coincides with the subspace of all functions in $L^p(\mathbb{R})$ whose (distributional) Fourier transform has support in $\mathbb{R}_+$. 
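For a concrete example (our illustration; this function is not discussed in the text): $h(t)=(t+i)^{-2}$ belongs to $L^1(\mathbb{R})$ since $|h(t)|=(1+t^2)^{-1}$, and a residue computation gives $\widehat{h}(u)=-2\pi u e^{-u}$ for $u\geq 0$ and $\widehat{h}(u)=0$ for $u\leq 0$, so $h\in H^1(\mathbb{R})$. A crude numerical check of this (assuming NumPy):

```python
import numpy as np

# h(t) = 1/(t+i)^2 lies in H^1(R): its Fourier transform
# (convention fhat(u) = int h(t) exp(-i*t*u) dt) vanishes for u <= 0
# and equals -2*pi*u*exp(-u) for u >= 0, by a residue computation.
dt = 0.02
t = np.arange(-2000.0, 2000.0, dt)
h = 1.0 / (t + 1j) ** 2

def fourier(u):
    """Plain quadrature of the Fourier integral of h at frequency u."""
    return np.sum(h * np.exp(-1j * t * u)) * dt

print(fourier(1.0))   # close to -2*pi*exp(-1) ~ -2.31
print(fourier(-1.0))  # close to 0
```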
In particular, $H^2(\mathbb{R})$ is the subspace of all functions in $L^2(\mathbb{R})$ whose Fourier transform (regarded as an element of $L^2(\mathbb{R})$) vanishes almost everywhere on $\mathbb{R}_{-}$. This can be expressed by the identification \begin{equation}\label{H2=FL2} \mathcal{F}(H^2(\mathbb{R})) = L^{2}(\mathbb{R}_+). \end{equation} Let $m \in L^{\infty}(\mathbb{R}_+)$. Using (\ref{H2=FL2}), we may associate \[ \begin{array}{ccccc} T_m & : & H^2(\mathbb{R}) & \to & H^2(\mathbb{R}) \\ & & h & \mapsto & \mathcal{F}^{-1} (m\widehat{h}), \\ \end{array} \] and we have $\norm{T_m}=\norm{m}_\infty$. The function $m$ is called the symbol of $T_m$. Let $1 \leq p < \infty$. Assume that $T_m$ is bounded with respect to the $H^p(\mathbb{R})$-norm, that is, there exists a constant $C>0$ such that \begin{equation}\label{ineH1mul} \bignorm{\mathcal{F}^{-1} (m\widehat{h})}_p \leq C\norme{h}_p,\qquad h\in H^p(\mathbb{R})\cap H^2(\mathbb{R}). \end{equation} Then since $H^p(\mathbb{R})\cap H^2(\mathbb{R})$ is dense in $H^p(\mathbb{R})$, $T_m$ uniquely extends to a bounded operator on $H^p(\mathbb{R})$ whose norm is the least possible constant $C$ satisfying \eqref{ineH1mul}. In this case we keep the same notation $T_m\colon H^p(\mathbb{R})\to H^p(\mathbb{R})$ for this extension. Operators of this form are called bounded Fourier multipliers on $H^p(\mathbb{R})$. They form a subspace of $B(H^p(\mathbb{R}))$, that we denote by $\mathcal{M}(H^p(\mathbb{R}))$. It is plain that ${\mathcal M}(H^2(\mathbb{R}))\simeq L^{\infty}(\mathbb{R}_+)$ isometrically. In the sequel we will be mostly interested in $\mathcal{M}(H^1(\mathbb{R}))$. The above definitions parallel the classical definitions of bounded Fourier multipliers on $L^p(\mathbb{R})$, that we will use without any further reference. \begin{ex} Let $s \in \mathbb{R}$. For all $h \in L^1(\mathbb{R})$, one has $\widehat{\tau_s h}(u) = e^{-isu}\widehat{h}(u)$ for all $u\in\mathbb{R}$. Hence $\tau_s$ maps $H^1(\mathbb{R})$ into itself. 
Further $\tau_s$ is a bounded Fourier multiplier on $H^1(\mathbb{R})$, with symbol $m(u) = e^{-isu}$. \end{ex} In the sequel we say that a bounded operator $T\colon H^1(\mathbb{R}) \to H^1(\mathbb{R})$ commutes with translations if $T\tau_s = \tau_s T$ for each $s\in \mathbb{R}$. Classical properties of the Fourier transform easily imply that any bounded Fourier multiplier on $H^1(\mathbb{R})$ commutes with translations. The next result implies that the converse is true and provides a sharp estimate on the symbol of an element of $\mathcal{M}(H^1(\mathbb{R}))$. \begin{thm}\label{thmmutlH1} Let $T \in B(H^1(\mathbb{R}))$ and assume that $T$ commutes with translations. Then there exists a bounded continuous function $m \colon \mathbb{R}_+^* \rightarrow \mathbb{C}$ such that $\widehat{Th} =m\widehat{h}$ for all $h\in H^1(\mathbb{R})$ (and hence $T = T_m$). In this case, we have \begin{equation}\label{ineMH1} \normeinf{m} \leq \norme{T}. \end{equation} \end{thm} \begin{proof} Except for the estimate (\ref{ineMH1}), this statement can be deduced from \cite[pp. 131-132]{BJORK}, and from the fact that $\{h_1 + h_2(-\cdotp)\, :\, h_1, h_2\in H^1(\mathbb{R})\}\subset L^1(\mathbb{R})$ coincides with the so-called real $H^1$-space (see also \cite[Theorem 7.31]{GCRF}). We briefly prove the first part of our statement for completeness and then focus on the proof of (\ref{ineMH1}). We use Bochner spaces and Bochner integrals, for which we refer to \cite{du}. Let $T \in B(H^1(\mathbb{R}))$ and assume that $T$ commutes with translations. Let $h,g \in H^1(\mathbb{R})$. The identification $L^1(\mathbb{R}^2) = L^1(\mathbb{R};L^1(\mathbb{R}))$ and the fact that $$ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|h(t-s)||g(s)|\,dtds = \norme{h}_1\norme{g}_1\,<\infty $$ imply that $s\mapsto g(s)\tau_sh$ is an almost everywhere defined function belonging to the Bochner space $L^1(\mathbb{R};L^1(\mathbb{R}))$. 
Since $\tau_sh$ belongs to $H^1(\mathbb{R})$ for all $s\in\mathbb{R}$, the latter is actually an element of $L^1(\mathbb{R};H^1(\mathbb{R}))$. Further its integral (which is an element of $H^1(\mathbb{R})$) is equal to the convolution of $h$ and $g$, that is, \begin{equation}\label{eq1} h \star g = \int_{-\infty}^{\infty}\tau_s h\, g(s) \,ds. \end{equation} It follows, using the assumption, that $$ T(h\star g) = \int_{-\infty}^{\infty} T(\tau_sh)g(s) \,ds\, = \int_{-\infty}^{\infty} \tau_s(Th)g(s)\, ds\,= Th \star g. $$ (The last equality comes from \eqref{eq1} with $h$ replaced by $Th$.) Likewise $T(h \star g) =h\star Tg$, whence $Th \star g= h \star Tg$. Applying the Fourier transform to the latter equality, one obtains \begin{equation}\label{eq2} \widehat{Th}\cdotp\widehat{g} = \widehat{h}\cdotp\widehat{Tg}. \end{equation} Now let $f \in \mathcal{S}(\mathbb{R})$ satisfy $f=0$ on $\mathbb{R}_-$ and $f > 0$ on $\mathbb{R}_+^*$. The function $g = \mathcal{F}^{-1}(f)$ belongs to $H^1(\mathbb{R})$ and we may define $m\colon \mathbb{R}_+^*\to\mathbb{C}$ by \begin{equation}\label{eq3} m(u) = \frac{\widehat{Tg}(u)}{\widehat{g}(u)}\,, \qquad u>0. \end{equation} Obviously $m$ is continuous. Furthermore it follows from \eqref{eq2} that for any $h\in H^1(\mathbb{R})$, we have $\widehat{Th}=m\widehat{h}$ on $\mathbb{R}_+^*$. It therefore suffices to show that $m$ is bounded and that (\ref{ineMH1}) holds true. We adapt an argument from \cite{lem3}. Let us first prove the inequality \begin{equation}\label{m(1)} |m(1)| \leq \norme{T}. \end{equation} To obtain this, fix $\gamma_1\in C_{00}(\mathbb{R})$ (see (\ref{C00})) such that $\norme{\gamma_1}_1 = 1$ and ${\rm Supp}(\widehat{\gamma_1}) \subset [-1,1]$. Then consider $\gamma_2\in C_{00}(\mathbb{R})$ such that $\normeinf{\gamma_2} =1$. 
For each $0 < \varepsilon \leq \frac{1}{2}$, we define $h_{\varepsilon}, g_{\varepsilon} \colon \mathbb{R} \rightarrow \mathbb{C}$ by \[ h_{\varepsilon}(t) = \varepsilon e^{it}\gamma_1(\varepsilon t) \quad \text{ and } \quad g_{\varepsilon}(t) = e^{-it}\gamma_2(\varepsilon t),\qquad t\in\mathbb{R}. \] Then, $h_{\varepsilon} \in H^1(\mathbb{R})$, $g_{\varepsilon} \in C_{00}(\mathbb{R})$, $\norme{h_{\varepsilon}}_1 =1$, $\normeinf{g_{\varepsilon}} = 1$ and $$ \widehat{h_{\varepsilon}}(u) = \widehat{\gamma_1} \Big(\frac{u-1}{\varepsilon}\Big) \qquad\hbox{and} \qquad \widehat{g_{\varepsilon}}(u) = \frac{1}{\varepsilon}\widehat{\gamma_2}\Big(\frac{u+1}{\varepsilon}\Big) $$ for all $u\in\mathbb{R}$. Moreover using Lemma \ref{Tool}, we have \[ \int_{-\infty}^{\infty}\bigl[T(h_{\varepsilon})\bigr](t)g_{\varepsilon}(t)\,dt = \frac{1}{2\pi}\int_{0}^{\infty} \widehat{T(h_{\varepsilon})}(u)\widehat{g_{\varepsilon}}(-u) \,du. \] We infer that \begin{align*} \int_{-\infty}^{\infty}\bigl[T(h_{\varepsilon})\bigr](t)g_{\varepsilon}(t) \, dt & = \frac{1}{2\pi}\int_{0}^{\infty} m(u) \widehat{h_{\varepsilon}}(u)\widehat{g_{\varepsilon}}(-u)du \\ &= \frac{1}{2\pi}\int_{1-\varepsilon}^{1+\varepsilon} \frac{m(u)}{\varepsilon}\widehat{\gamma_1}\Big(\frac{u-1}{\varepsilon}\Big) \widehat{\gamma_2}\Big(\frac{-u+1}{\varepsilon}\Big)\, du. \end{align*} Therefore, \[ \frac{1}{\varepsilon}\bigg| \int_{1-\varepsilon}^{1+\varepsilon} m(u)\widehat{\gamma_1} \Big(\frac{u-1}{\varepsilon}\Big) \widehat{\gamma_2}\Big(\frac{-u+1}{\varepsilon}\Big)\,du \bigg| \leq 2\pi\norme{T}. \] By the change of variable $u\mapsto \frac{u-1}{\varepsilon}$, this reads \[ \bigg| \int_{-1}^{1} m(1+ \varepsilon u) \widehat{\gamma_1}(u) \widehat{\gamma_2}(-u)\,du \bigg| \leq 2\pi\norme{T}. 
\] Since $m$ is continuous on $\mathbb{R}_+^*$ and $\varepsilon \in (0,\frac{1}{2}]$, the function $u \mapsto m(1+\varepsilon u)$ is bounded on $(-1,1)$, independently of $\varepsilon$. Moreover $\widehat{\gamma_1}$ is bounded on $(-1,1)$ and $\widehat{\gamma_2}$ is integrable. Hence by Lebesgue's dominated convergence theorem, \[ \int_{-1}^{1} m(1+ \varepsilon u) \widehat{\gamma_1}(u) \widehat{\gamma_2}(-u)\,du \underset{\varepsilon \rightarrow 0}\longrightarrow m(1)\int_{-1}^{1}\widehat{\gamma_1}(u) \widehat{\gamma_2}(-u)du. \] Since ${\rm Supp}(\widehat{\gamma_1}) \subset [-1,1]$, Lemma \ref{Tool} yields $\int_{-1}^{1} \widehat{\gamma_1}(u)\widehat{\gamma_2}(-u)\, du =2\pi \int_{-\infty}^{\infty}\gamma_1(t)\gamma_2(t)\,dt$, and we deduce the inequality \[ |m(1)|\, \Big|\int_{-\infty}^{\infty} \gamma_1(t)\gamma_2(t)\, dt\Big| \leq \norme{T}. \] Since the unit ball of $C_{00}(\mathbb{R})$ is $w^*$-dense in the unit ball of $L^\infty(\mathbb{R})$, $$ \sup\Bigl\{\Bigl|\int_{-\infty}^{\infty}\gamma_1(t) \gamma_2(t)\,dt\Bigr|\, :\, \gamma_2\in C_{00}(\mathbb{R}),\,\norme{\gamma_2}_\infty \leq 1\Bigr\}\,=\,\norme{\gamma_1}_1=1. $$ Hence (\ref{m(1)}) holds true. Now consider an arbitrary $a >0$. Define $r_a \colon H^1(\mathbb{R}) \rightarrow H^1(\mathbb{R})$ by $$ [r_a h](t) =\frac{1}{a}h\Bigl(\frac{t}{a}\Bigr),\qquad h\in H^1(\mathbb{R}),\ t\in\mathbb{R}. $$ Then $r_a$ is an isometry satisfying $\widehat{r_a h}(u) = \widehat{h}(au)$ for any $h\in H^1(\mathbb{R})$ and $u \in \mathbb{R}$. Set $$ T_{(a)} = r_a T r_{\frac{1}{a}} \colon H^1(\mathbb{R}) \rightarrow H^1(\mathbb{R}). $$ Then $\tau_s T_{(a)} = T_{(a)}\tau_s$ for all $s\in\mathbb{R}$ and the function $\mathbb{R}_+^*\to\mathbb{C}$ associated with $T_{(a)}$ by the formula \eqref{eq3} is $u \mapsto m(au)$. Further $\norme{T_{(a)}} = \norme{T}$. Hence applying (\ref{m(1)}) to $T_{(a)}$ we deduce that \[ |m(a)| \leq \norme{T}. \] This proves the boundedness of $m$ and (\ref{ineMH1}). 
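For completeness, the dilation identity $\widehat{r_a h}(u) = \widehat{h}(au)$ used in this last step is a one-line substitution ($s = t/a$):
$$
\widehat{r_a h}(u) \,=\, \int_{-\infty}^{\infty} \frac{1}{a}\, h\Bigl(\frac{t}{a}\Bigr)e^{-itu}\, dt \,=\, \int_{-\infty}^{\infty} h(s)\, e^{-is(au)}\, ds \,=\, \widehat{h}(au);
$$
in particular ${\rm Supp}(\widehat{r_a h}) \subset \mathbb{R}_+$ whenever ${\rm Supp}(\widehat{h}) \subset \mathbb{R}_+$, so $r_a$ does map $H^1(\mathbb{R})$ into itself.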
\end{proof} \begin{rq1}\label{MHP} \ (1) Let $1<p<\infty$. Let $S\colon L^p(\mathbb{R})\to L^p(\mathbb{R})$ be a bounded Fourier multiplier. Then $S$ maps $H^p(\mathbb{R})$ into itself and the restriction $S_{\vert H^p}\colon H^p(\mathbb{R})\to H^p(\mathbb{R})$ is a bounded Fourier multiplier. Let $Q\colon L^p(\mathbb{R})\to H^p(\mathbb{R})$ be the Riesz projection and let $J\colon H^p(\mathbb{R})\to L^p(\mathbb{R})$ be the canonical embedding. Then conversely, for any bounded Fourier multiplier $T\colon H^p(\mathbb{R})\to H^p(\mathbb{R})$, $S=JTQ$ is a bounded Fourier multiplier on $L^p(\mathbb{R})$, whose restriction to $H^p(\mathbb{R})$ coincides with $T$. Thus $\mathcal{M}(H^p(\mathbb{R}))$ can be simply regarded as a subspace of ${\mathcal M}(L^p(\mathbb{R}))$, the space of bounded Fourier multipliers on $L^p(\mathbb{R})$. It is also easy to check that a bounded operator $H^p(\mathbb{R})\to H^p(\mathbb{R})$ belongs to $\mathcal{M}(H^p(\mathbb{R}))$ if and only if it commutes with translations, using the similar result on $L^p(\mathbb{R})$. (2) Let $p'=\frac{p}{p-1}$ be the conjugate exponent of $1<p<\infty$. Using $Q$ again, we see that given any $m\in L^\infty(\mathbb{R}_+)$, the operator $T_m\colon H^2(\mathbb{R})\to H^2(\mathbb{R})$ extends to a bounded Fourier multiplier on $H^p(\mathbb{R})$ if and only if it extends to a bounded Fourier multiplier on $H^{p'}(\mathbb{R})$. Thus, \begin{equation}\label{duality} \mathcal{M}(H^p(\mathbb{R}))\simeq\mathcal{M}(H^{p'}(\mathbb{R})) \end{equation} isomorphically. (3) Recall that the bounded Fourier multipliers on $L^1(\mathbb{R})$ are the operators of the form $h\mapsto \mu\star h$, with $\mu\in M(\mathbb{R})$, and that the norm of the latter operator is equal to $\norm{\mu}_{M(\mathbb{R})}$ (see e.g. \cite[Chapter I, Theorem 3.19]{sw}). For any $\mu\in M(\mathbb{R})$, let $R_\mu\colon H^1(\mathbb{R})\to H^1(\mathbb{R})$ be the restriction of $h\mapsto \mu\star h$ to $H^1(\mathbb{R})$. 
This is a bounded Fourier multiplier whose symbol is equal to the restriction of $\widehat{\mu}$ to $\mathbb{R}_+^*$. We set \begin{equation}\label{Trivial} {\mathcal R} = \{ R_\mu\, :\, \mu\in M(\mathbb{R})\}\,\subset \mathcal{M}(H^1(\mathbb{R})). \end{equation} In contrast with the result in part (1) of this remark, we have $$ {\mathcal R}\not=\mathcal{M}(H^1(\mathbb{R})). $$ Indeed this follows from \cite[Remark (ii)]{Gau}. \end{rq1} The following lemma will play a crucial role. \begin{lem}\label{noMpisnotM1} For any $1\leq p<\infty$, we have $\mathcal{M}(H^1(\mathbb{R}))\subset \mathcal{M}(H^p(\mathbb{R}))$. \end{lem} \begin{proof} By definition we have $\mathcal{M}(H^1(\mathbb{R}))\subset \mathcal{M}(H^{2}(\mathbb{R}))$. By (\ref{duality}) we may assume that $p\in (1,2)$. Let $\theta = 2\bigl(1-\frac{1}{p}\bigr)$. Then in the complex interpolation method, we have \[ [H^1(\mathbb{R}),H^{2}(\mathbb{R})]_{\theta} \simeq H^p(\mathbb{R}), \] by \cite[Theorem 4.3]{pis4}. The result follows at once. \end{proof} \section{Algebras $\mathcal{A}_0$ and $\mathcal{A}$}\label{AA0} We introduce and study new algebras of functions which will be used in Section \ref{CF} to establish a functional calculus for negative generators of bounded $C_0$-semigroups on Hilbert space. The next definitions are inspired by \cite{pel1}, see also the ``Notes and Remarks on Chapter 6'' in \cite{pis1}. \subsection{Definitions and properties}\label{DandP} \begin{df}\label{defA} We let $\mathcal{A}_0(\mathbb{R})$ (resp. $\mathcal{A}(\mathbb{R})$) be the set of all functions $F \colon \mathbb{R} \rightarrow \mathbb{C}$ such that there exist two sequences $(f_k)_{k\in \mathbb{N}}$ in $C_0(\mathbb{R})$ (resp. $BUC(\mathbb{R})$) and $(h_k)_{k\in \mathbb{N}}$ in $H^1(\mathbb{R})$ satisfying \begin{equation}\label{ineA} \sum_{k=1}^{\infty} \normeinf{f_k}\norme{h_k}_1 < \infty \end{equation} and \begin{equation}\label{equA2} \forall\,s \in \mathbb{R}, \qquad F(s) = \sum_{k=1}^{\infty} (f_k\star h_k)(s). 
\end{equation} \end{df} Note that for any $f\in C_0(\mathbb{R})$ (resp. $BUC(\mathbb{R})$) and any $h\in H^1(\mathbb{R})$, $f\star h$ belongs to $C_0(\mathbb{R})\cap H^\infty(\mathbb{R})$ (resp. $BUC(\mathbb{R})\cap H^\infty(\mathbb{R})$). Further for any $(f_k)_{k\in \mathbb{N}}$ and $(h_k)_{k\in \mathbb{N}}$ as in Definition \ref{defA}, we have $\normeinf{f_k \star h_k} \leq \normeinf{f_k}\norme{h_k}_1$, and hence $\sum_{k=1}^{\infty} \normeinf{f_k\star h_k} < \infty\,$, by \eqref{ineA}. This ensures the convergence of the series in \eqref{equA2} and implies that $$ \mathcal{A}_0(\mathbb{R}) \subset C_0(\mathbb{R})\cap H^\infty(\mathbb{R})\qquad\hbox{and}\qquad \mathcal{A}(\mathbb{R}) \subset BUC(\mathbb{R})\cap H^\infty(\mathbb{R}). $$ \begin{df}\label{defNA} For all $F \in \mathcal{A}_0(\mathbb{R})$ (resp. $F \in \mathcal{A}(\mathbb{R})$), we set \[ \norme{F}_{\mathcal{A}_0} = \inf \Big\{\sum_{k=1}^{\infty} \normeinf{f_k}\norme{h_k}_1 \Big\} \qquad \big(\text{resp. } \norme{F}_{\mathcal{A}} = \inf \Big\{\sum_{k=1}^{\infty} \normeinf{f_k}\norme{h_k}_1 \Big\} \big), \] where the infimum runs over all sequences $(f_k)_{k\in \mathbb{N}}$ in $C_0(\mathbb{R})$ (resp. $BUC(\mathbb{R})$) and $(h_k)_{k\in \mathbb{N}}$ in $H^1(\mathbb{R})$ satisfying \eqref{ineA} and \eqref{equA2}. \end{df} It is clear that $$ \norme{F}_\infty\leq \norme{F}_{\mathcal{A}},\qquad F \in \mathcal{A}(\mathbb{R}). $$ To show that $\norme{\cdot}_{\mathcal{A}_0}$ and $\norme{\cdot}_{\mathcal{A}}$ are complete norms, we make a connection with projective tensor products, which will be useful throughout this section. If $X$ and $Y$ are any Banach spaces, the projective norm of $\zeta \in X \otimes Y$ is defined by \[ \norme{\zeta}_{\wedge} = \inf\Bigl\{ \sum_k \norme{x_k}\norme{y_k} \Bigr\}, \] where the infimum runs over all finite families $(x_k)_k$ in $X$ and $(y_k)_k$ in $Y$ satisfying $\zeta = \sum_k x_k \otimes y_k\,$. 
The completion of $(X\otimes Y, \norme{\,\cdotp}_{\wedge})$, denoted by $X\widehat{\otimes} Y$, is called the projective tensor product of $X$ and $Y$. Let $Z$ be a third Banach space. To any $\ell\in B_2(X\times Y, Z)$, the space of bounded bilinear maps from $X\times Y$ into $Z$, one can associate a linear map $\overset{\circ}{\ell}\colon X\otimes Y\to Z$ by the formula $$ \overset{\circ}{\ell}(x\otimes y)=\ell(x,y),\qquad x\in X,\, y\in Y. $$ Then $\overset{\circ}{\ell}$ extends to a bounded operator (still denoted by) $\overset{\circ}{\ell}\colon X\widehat{\otimes} Y\to Z$, with $\norm{\overset{\circ}{\ell}}=\norm{\ell}$. Further the mapping $\ell\mapsto\overset{\circ}{\ell}$ yields an isometric identification \begin{equation}\label{biliproj} B_2(X\times Y, Z) \simeq B(X \widehat{\otimes} Y,Z). \end{equation} Let us apply the above property in the case when $Z=\mathbb{C}$. Using the standard identification $B_2(X\times Y, \mathbb{C})=B(Y,X^*)$, we obtain an isometric identification \begin{equation}\label{isomidenEprojF} (X\widehat{\otimes}Y)^* \simeq B(Y,X^*). \end{equation} We refer to either \cite[Chapter 8, Theorem 1 $\&$ Corollary 2]{du} or \cite[Theorem 2.9]{Ryan} for these classical facts. Consider the bilinear map $\sigma \colon C_0(\mathbb{R}) \times H^1(\mathbb{R}) \rightarrow C_0(\mathbb{R})$ defined by \begin{equation}\label{sigma} \sigma(f,h) = f\star h,\qquad f\in C_0(\mathbb{R}),\ h\in H^1(\mathbb{R}). \end{equation} Applying (\ref{biliproj}), let $$ \overset{\circ}{\sigma}\colon C_0(\mathbb{R})\widehat{\otimes} H^1(\mathbb{R})\longrightarrow C_0(\mathbb{R}) $$ be associated with $\sigma$. Then $\mathcal{A}_0(\mathbb{R}) = {\rm Ran}(\overset{\circ}{\sigma})$. 
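Note that $\sigma$ is contractive, $\normeinf{f\star h}\leq \normeinf{f}\norme{h}_1$, which is why $\overset{\circ}{\sigma}$ is well defined on the projective tensor product. A small numerical illustration of this inequality (our own sketch, on a concrete pair of our choosing; assuming NumPy):

```python
import numpy as np

# Check the contractivity ||f * h||_oo <= ||f||_oo * ||h||_1 of the
# convolution map sigma on a concrete pair, discretized on a grid.
dt = 0.05
t = np.arange(-200.0, 200.0, dt)
f = 1.0 / (1.0 + t**2)          # f in C_0(R), ||f||_oo = 1
h = 1.0 / (t + 1j) ** 2         # h in H^1(R), |h(t)| = 1/(1+t^2)

conv = np.convolve(f, h, mode="same") * dt   # samples of (f * h)
sup_conv = np.abs(conv).max()
norm_f = np.abs(f).max()
norm_h1 = np.sum(np.abs(h)) * dt             # ~ pi

print(sup_conv <= norm_f * norm_h1)  # True (discrete Young inequality)
```

The discrete analogue of the inequality holds exactly for the Riemann sums above, so the printed comparison is guaranteed; refining the grid approximates the continuous statement.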
Through the resulting linear isomorphism between $\mathcal{A}_0(\mathbb{R})$ and $(C_0(\mathbb{R})\widehat{\otimes} H^1(\mathbb{R}))/{\rm Ker}(\overset{\circ}{\sigma})$, $\norme{\,\cdotp}_{\mathcal{A}_0}$ corresponds to the quotient norm on the latter space (this follows from either \cite[Chapter 8, Proposition 9 (b)]{du} or \cite[Proposition 2.8]{Ryan}). Thus $(\mathcal{A}_0(\mathbb{R}),\norme{\,\cdotp}_{\mathcal{A}_0})$ is a Banach space and $\overset{\circ}{\sigma}$ induces an isometric identification \begin{equation}\label{quotient} \mathcal{A}_0(\mathbb{R}) \simeq \frac{C_0(\mathbb{R}) \widehat{\otimes} H^1(\mathbb{R})}{{\rm Ker}(\overset{\circ}{\sigma})}\,. \end{equation} Similarly, $\norme{\,\cdotp}_{\mathcal{A}}$ is a norm on $\mathcal{A}(\mathbb{R})$ and $(\mathcal{A}(\mathbb{R}),\norme{\,\cdotp}_{\mathcal{A}})$ is a Banach space. \begin{rq1}\label{rqA0} It is clear from Definition \ref{defNA} that $\mathcal{A}_0(\mathbb{R})\subset \mathcal{A}(\mathbb{R})$ and that for any $F \in \mathcal{A}_0(\mathbb{R})$, we have \begin{equation}\label{comparnormA} \norme{F}_{\mathcal{A}} \leq \norme{F}_{\mathcal{A}_0}. \end{equation} \end{rq1} \subsection{$\mathcal{A}$ and $\mathcal{A}_0$ are Banach algebras}\label{BA} \begin{prop}\label{propAlg} The spaces $\mathcal{A}_0(\mathbb{R})$ and $\mathcal{A}(\mathbb{R})$ are Banach algebras for the pointwise multiplication. Furthermore, $\mathcal{A}_0(\mathbb{R})$ is an ideal of $\mathcal{A}(\mathbb{R})$ and for any $F \in \mathcal{A}(\mathbb{R})$ and $G \in \mathcal{A}_0(\mathbb{R})$, we have \begin{equation}\label{ideal} \norme{FG}_{\mathcal{A}_0} \leq \norme{F}_{\mathcal{A}} \norme{G}_{\mathcal{A}_0}. \end{equation} \end{prop} \begin{proof} We will adapt an idea from \cite{pel1}. Let $f_1, f_2 \in C_0(\mathbb{R})$ and $h_1,h_2 \in H^1(\mathbb{R})$. We note that \begin{equation}\label{eqL1L1} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}|h_1(t)||h_2(t+s)| \,dtds = \norme{h_1}_1\norme{h_2}_1. 
\end{equation} We define, for $s\in \mathbb{R}$, $\varphi_s\colon \mathbb{R} \rightarrow \mathbb{C}$ and $\psi_s \colon \mathbb{R} \rightarrow \mathbb{C}$ by \[ \varphi_s(t) = f_1(t)f_2(t-s) \qquad\hbox{and}\qquad \psi_s(t) = h_1(t)h_2(t+s). \] Since $f_2$ is uniformly continuous, the function $s \mapsto \varphi_s$ is continuous from $\mathbb{R}$ into $C_0(\mathbb{R})$. Thus $s \mapsto \varphi_s$ belongs to the Bochner space $L^{\infty}(\mathbb{R}; C_0(\mathbb{R}))$. Using (\ref{eqL1L1}) and arguing as at the beginning of the proof of Theorem \ref{thmmutlH1}, we see that $s \mapsto \psi_s$ belongs to $L^1(\mathbb{R}; H^1(\mathbb{R}))$. It follows that $s \mapsto \varphi_s\star \psi_s$ is defined almost everywhere and belongs to $L^{1}(\mathbb{R}; \mathcal{A}_0(\mathbb{R}))$. Moreover, \begin{equation}\label{inealgA0} \int_{-\infty}^{\infty}\norme{\varphi_s\star \psi_s}_{\mathcal{A}_0}ds \leq \normeinf{f_1}\normeinf{f_2}\norme{h_1}_1\norme{h_2}_1. \end{equation} Indeed, using (\ref{eqL1L1}) and Fubini's theorem, \begin{align*} \int_{-\infty}^{\infty}\norme{\varphi_s\star \psi_s}_{\mathcal{A}_0}ds\, &\leq \int_{-\infty}^{\infty} \norme{\varphi_s}_\infty\norme{\psi_s}_1 ds\\ &\leq \normeinf{f_1}\normeinf{f_2}\int_{-\infty}^{\infty}\norme{\psi_s}_1 ds, \end{align*} which is equal to the right-hand side of (\ref{inealgA0}). The integral of $s \mapsto \varphi_s\star \psi_s$ is an element of $\mathcal{A}_0(\mathbb{R})$. We claim that we actually have \begin{equation}\label{eqalgA0} \int_{-\infty}^{\infty}\varphi_s\star \psi_s\, ds = (f_1\star h_1)(f_2\star h_2). \end{equation} Indeed, using again \eqref{eqL1L1} and Fubini's theorem, we have for all $u \in \mathbb{R}$, \begin{align*} \int_{-\infty}^{\infty}(\varphi_s\star \psi_s)(u)ds &= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}f_1(t)f_2(t-s)h_1(u-t)h_2(u-t+s)\,dtds \\ & = \int_{-\infty}^{\infty}f_1(t)h_1(u-t)\int_{-\infty}^{\infty}f_2(t-s)h_2(u-(t-s))\,dsdt \\ &= (f_1\star h_1)(u)(f_2\star h_2)(u). 
\end{align*} Combining (\ref{eqalgA0}) and (\ref{inealgA0}), we obtain that $(f_1\star h_1)(f_2\star h_2) \in \mathcal{A}_0(\mathbb{R})$, with \[ \norme{(f_1\star h_1)(f_2\star h_2)}_{\mathcal{A}_0} \leq \normeinf{f_1}\normeinf{f_2}\norme{h_1}_1\norme{h_2}_1. \] Now let $F,G \in \mathcal{A}_0(\mathbb{R})$ and let $\varepsilon > 0$. Consider sequences $(f^1_k)_{k\in \mathbb{N}}$, $(f^2_k)_{k\in \mathbb{N}}$ in $C_0(\mathbb{R})$ and $(h^1_k)_{k\in \mathbb{N}}$, $(h^2_k)_{k\in \mathbb{N}}$ in $H^1(\mathbb{R})$ such that \[ F = \sum_{k=1}^{\infty} f^1_k\star h^1_k \qquad \text{and} \qquad \sum_{k=1}^{\infty} \normeinf{f^1_k}\norme{h^1_k}_1 \leq \norme{F}_{\mathcal{A}_0} + \varepsilon, \] as well as \[ G = \sum_{k=1}^{\infty} f^2_k\star h^2_k \qquad \text{and} \qquad \sum_{k=1}^{\infty} \normeinf{f^2_k}\norme{h^2_k}_1 \leq \norme{G}_{\mathcal{A}_0} + \varepsilon. \] Then, using summation in $C_0(\mathbb{R})$, we have $$ FG = \sum_{k,l=1}^{\infty}(f^1_k\star h^1_k) (f^2_l\star h^2_l). $$ Further $(f^1_k\star h^1_k)(f^2_l\star h^2_l)\in\mathcal{A}_0(\mathbb{R})$ for all $k,l\geq 1$, and \begin{align*} \sum_{k,l=1}^{\infty}\norme{(f^1_k\star h^1_k) (f^2_l\star h^2_l)}_{\mathcal{A}_0}\, &\leq \sum_{k,l=1}^{\infty} \normeinf{f^1_k}\normeinf{f^2_l}\norme{h^1_k}_1\norme{h^2_l}_1\\ & = \Bigl(\sum_{k=1}^{\infty} \normeinf{f^1_k}\norme{h^1_k}_1\Bigr)\,\Bigl(\sum_{l=1}^{\infty} \normeinf{f^2_l}\norme{h^2_l}_1\Bigr)\\ & \leq \bigl(\norme{F}_{\mathcal{A}_0} + \varepsilon\bigr) \bigl(\norme{G}_{\mathcal{A}_0} + \varepsilon\bigr). \end{align*} Since $\mathcal{A}_0(\mathbb{R})$ is complete, this shows that $FG\in\mathcal{A}_0(\mathbb{R})$. Since $\varepsilon > 0$ is arbitrary, we obtain that $\norme{FG}_{\mathcal{A}_0} \leq \norme{F}_{\mathcal{A}_0} \norme{G}_{\mathcal{A}_0}$. This shows that $\mathcal{A}_0(\mathbb{R})$ is a Banach algebra. Analogously, $\mathcal{A}(\mathbb{R})$ is a Banach algebra.
Moreover if $f_1 \in BUC(\mathbb{R})$ and $f_2 \in C_0(\mathbb{R})$, then for each $s\in \mathbb{R}$, $\varphi_s\colon t\mapsto f_1(t)f_2(t-s)$ belongs to $C_0(\mathbb{R})$. Hence the computations above show that $\mathcal{A}_0(\mathbb{R})$ is an ideal of $\mathcal{A}(\mathbb{R})$, as well as \eqref{ideal}. \end{proof} We note that the Banach algebra $\mathcal{A}_0(\mathbb{R})$ can be naturally regarded as an $H^1(\mathbb{R})$-version of the Fig\`a-Talamanca-Herz algebras $A_p(\mathbb{R})$, $1<p<\infty$ (see e.g. \cite[Chapter 3]{der}). \subsection{Duality results and consequences}\label{Duality} The main aim of this subsection is to identify $\mathcal{A}_0(\mathbb{R})^*$ with a subspace of $\mathcal{M}(H^1(\R))$, the space of bounded Fourier multipliers on $H^1(\mathbb{R})$. This requires the use of duality tools. We let $H^1_{-}(\mathbb{R})$ (resp. $H^\infty_{-}(\mathbb{R})$) be the subspace of all $f$ in $L^1(\mathbb{R})$ (resp. $L^\infty(\mathbb{R})$) such that $f(-\,\cdotp)$ belongs to $H^1(\mathbb{R})$ (resp. $H^\infty(\mathbb{R})$). Recall the identification $M(\mathbb{R})\simeq C_0(\mathbb{R})^*$ provided by (\ref{Riesz}) and regard $H^1(\mathbb{R})\subset L^1(\mathbb{R})\subset M(\mathbb{R})$ in the usual way. By \cite[Chapter II, Theorem 3.8]{gar}, we have \begin{equation}\label{Ortho-H1} (C_0(\mathbb{R})\cap H^\infty_{-}(\mathbb{R}))^\perp = H^1(\mathbb{R}). \end{equation} Set $Z_0 := C_0(\mathbb{R})\cap H^\infty_{-}(\mathbb{R})$ for convenience; then the above result yields an isometric identification \begin{equation}\label{H1Dual} \biggl(\frac{C_0(\mathbb{R})}{Z_0}\biggr)^*\,\simeq\, H^1(\mathbb{R}). \end{equation} In the sequel, we let $\dot{f}\in \frac{C_0(\mathbb{R})}{Z_0}$ denote the class of any $f\in C_0(\mathbb{R})$. We note that for any $f\in C_0(\mathbb{R})$, $h\in H^1(\mathbb{R})$ and $s\in\mathbb{R}$, we have \begin{equation}\label{Form1} (f\star h)(s) = \langle \tau_{-s}h, f\rangle.
\end{equation} Thus $f\star h=0$ for any $f\in Z_0$ and $h\in H^1(\mathbb{R})$. The bilinear map $\sigma\colon C_0(\mathbb{R})\times H^1(\mathbb{R})\to C_0(\mathbb{R})$ defined by (\ref{sigma}) therefore induces a bilinear map $\delta \colon \frac{C_0(\mathbb{R})}{Z_0}\,\times H^1(\mathbb{R})\to C_0(\mathbb{R})$ given by $$ \delta(\dot{f},h) = f\star h, \qquad f\in C_0(\mathbb{R}),\ h\in H^1(\mathbb{R}). $$ Let \begin{equation}\label{delta} \overset{\circ}{\delta} \colon \frac{C_0(\mathbb{R})}{Z_0} \widehat{\otimes} H^1(\mathbb{R}) \longrightarrow C_0(\mathbb{R}) \end{equation} be the bounded map induced by $\delta$. Then the argument leading to (\ref{quotient}) shows as well that $\mathcal{A}_0(\mathbb{R})$ is equal to the range of $\overset{\circ}{\delta}$ and that \begin{equation}\label{quotient2} \mathcal{A}_0(\mathbb{R}) \simeq \Bigl(\frac{C_0(\mathbb{R})}{Z_0}\widehat{\otimes} H^1(\mathbb{R})\Bigr)\Big/{\rm Ker}(\overset{\circ}{\delta}). \end{equation} Since $H^1(\mathbb{R})$ is a dual space, by (\ref{H1Dual}), $B(H^1(\mathbb{R}))$ is a dual space as well. Indeed applying (\ref{isomidenEprojF}), we have an isometric identification \begin{equation}\label{BH1-dual} B(H^1(\mathbb{R}))\simeq \Bigl(\frac{C_0(\mathbb{R})}{Z_0}\widehat{\otimes} H^1(\mathbb{R})\Bigr)^*. \end{equation} If we unravel the identifications leading to (\ref{BH1-dual}), we obtain that the latter is given by \begin{equation}\label{BH1-dual+} \langle T, \dot{f}\otimes h\rangle\,=\, \int_{-\infty}^\infty (Th)(t)f(-t)\, dt, \qquad T\in B(H^1(\mathbb{R})),\ f\in C_0(\mathbb{R}),\ h\in H^1(\mathbb{R}). \end{equation} \begin{lem}\label{MH-closed} The space $\mathcal{M}(H^1(\R))$ is $w^*$-closed in $B(H^1(\mathbb{R}))$. \end{lem} \begin{proof} According to Theorem \ref{thmmutlH1}, an operator $T\in B(H^1(\mathbb{R}))$ belongs to $\mathcal{M}(H^1(\R))$ if and only if $\tau_sT=T\tau_s$ for all $s\in\mathbb{R}$.
Hence it suffices to show that the maps $T\mapsto\tau_sT$ and $T\mapsto T\tau_s$ are $w^*$-continuous on $B(H^1(\mathbb{R}))$, for all $s\in\mathbb{R}$. Fix such an $s$ and let $(T_i)_i$ be a bounded net of $B(H^1(\mathbb{R}))$ converging to some $T\in B(H^1(\mathbb{R}))$ in the $w^*$-topology. It readily follows from (\ref{BH1-dual+}) that for any $f\in C_0(\mathbb{R})$ and $h\in H^1(\mathbb{R})$, we have $$ \langle \tau_s T, \dot{f}\otimes h\rangle = \langle T, \dot{\overbrace{\tau_s f}}\otimes h\rangle \qquad\hbox{and}\qquad \langle \tau_s T_i, \dot{f}\otimes h\rangle = \langle T_i, \dot{\overbrace{\tau_s f}}\otimes h\rangle, $$ for all $i$. This implies that $\langle \tau_s T_i, \dot{f}\otimes h\rangle \to \langle \tau_s T, \dot{f}\otimes h\rangle$ when $i\to\infty$. By linearity, this implies that for any $\Phi\in \frac{C_0(\mathbb{R})}{Z_0}\otimes H^1(\mathbb{R})$, we have $\langle \tau_s T_i, \Phi \rangle \to \langle \tau_s T, \Phi\rangle$ when $i\to\infty$. Since $(T_i)_i$ is bounded, this implies that $\tau_sT_i\to \tau_sT$ in the $w^*$-topology. By the Krein-Smulian theorem, this shows that $T\mapsto\tau_sT$ is $w^*$-continuous. The proof that $T\mapsto T\tau_s$ is $w^*$-continuous is similar. \end{proof} We introduce $$ \mathcal{PM} = \overline{\rm Span}^{w^*}\bigl\{\tau_s\, :\, s\in\mathbb{R}\bigr\}\,\subset B(H^1(\mathbb{R})). $$ A direct consequence of Lemma \ref{MH-closed} is that \begin{equation}\label{MH-inclusion} \mathcal{PM}\subset \mathcal{M}(H^1(\R)). \end{equation} \begin{lem}\label{PM} Recall the mapping $\overset{\circ}{\delta}$ from (\ref{delta}). Then $\bigl[{\rm Ker}(\overset{\circ}{\delta})\bigr]^\perp =\mathcal{PM}$. \end{lem} \begin{proof} Let $s\in \mathbb{R}$ and let $\Phi\in \frac{C_0(\mathbb{R})}{Z_0}\widehat{\otimes} H^1(\mathbb{R})$. 
According to \cite[Proposition 2.8]{Ryan}, we may choose two sequences $(f_k)_{k\in \mathbb{N}}$ in $C_0(\mathbb{R})$ and $(h_k)_{k\in \mathbb{N}}$ in $H^1(\mathbb{R})$ such that $$ \sum_{k=1}^\infty \norme{f_k}_\infty\norme{h_k}_1<\infty \qquad\hbox{and}\qquad \Phi=\sum_{k=1}^\infty \dot{f_k}\otimes h_k. $$ Then by (\ref{Form1}), $$ \bigl[\overset{\circ}{\delta}(\Phi)\bigr](s) = \sum_{k=1}^\infty (f_k\star h_k)(s) = \sum_{k=1}^\infty \langle \tau_{-s} h_k, f_k\rangle = \langle\tau_{-s},\Phi\rangle. $$ This shows that $$ {\rm Span}\bigl\{\tau_s\, :\, s\in\mathbb{R}\bigr\}_\perp = {\rm Ker}(\overset{\circ}{\delta}). $$ The result follows at once. \end{proof} By standard duality and (\ref{quotient2}), the dual space $\mathcal{A}_0(\mathbb{R})^*$ may be identified with $\bigl[{\rm Ker}(\overset{\circ}{\delta})\bigr]^\perp$. Applying Lemma \ref{PM} and (\ref{BH1-dual+}), we therefore obtain the following. \begin{thm}\label{DualA0} \ \begin{itemize} \item [(1)] For any $T\in\mathcal{PM}$, there exists a unique $\eta_T\in\mathcal{A}_0(\mathbb{R})^*$ such that $$ \langle\eta_T, f\star h\rangle = \int_{-\infty}^\infty (Th)(t) f(-t)\, dt $$ for any $f\in C_0(\mathbb{R})$ and any $h\in H^1(\mathbb{R})$. \item [(2)] The mapping $T\mapsto \eta_T$ induces a $w^*$-homeomorphic and isometric identification $$ \mathcal{A}_0(\mathbb{R})^* \simeq\mathcal{PM}. $$ \end{itemize} \end{thm} \begin{rq1} Recall ${\mathcal R}$ from (\ref{Trivial}). It turns out that $$ \mathcal{PM} =\overline{{\mathcal R}}^{w^*}. $$ Indeed for any $s\in \mathbb{R}$, $\tau_s\colon H^1(\mathbb{R})\to H^1(\mathbb{R})$ is equal to the convolution by the Dirac mass at $s$, which yields $\subset$. To show the converse inclusion, we observe that for any $\mu\in M(\mathbb{R})$ and any $\Phi\in \frac{C_0(\mathbb{R})}{Z_0}\widehat{\otimes} H^1(\mathbb{R})$, we have \begin{equation}\label{Connection} \langle \mu, \overset{\circ}{\delta}(\Phi)\rangle =\langle R_\mu,\Phi\rangle. 
\end{equation} Here we use the identification $M(\mathbb{R})\simeq C_0(\mathbb{R})^*$ given by (\ref{Riesz}) on the left-hand side and we use (\ref{BH1-dual}) on the right-hand side. To prove this identity, let $f\in C_0(\mathbb{R})$ and $h\in H^1(\mathbb{R})$. Then \begin{align*} \langle \mu, \overset{\circ}{\delta}(\dot{f}\otimes h)\rangle \, & = \langle \mu, f\star h\rangle \\ & = \int_{\mathbb{R}}\int_{-\infty}^\infty f(-t)h(t-s)\, dt\, d\mu(s)\\ & = \langle \mu\star h, f \rangle. \end{align*} This shows (\ref{Connection}) when $\Phi=\dot{f}\otimes h$. By linearity, this implies (\ref{Connection}) for $\Phi\in \frac{C_0(\mathbb{R})}{Z_0} \otimes H^1(\mathbb{R})$. Then by density, we deduce (\ref{Connection}) for all $\Phi$. It clearly follows from (\ref{Connection}) that ${\mathcal R}\subset\bigl[{\rm Ker}(\overset{\circ}{\delta})\bigr]^\perp$. By Lemma \ref{PM}, this yields $\supset$. \end{rq1} We now give a few consequences of the above duality results. \begin{prop}\label{lemcont} For any $b$ in $L^1(\mathbb{R}_+)$, the function $\widehat{b}(-\,\cdotp)\colon\mathbb{R}\to\mathbb{C}\,$ belongs to $\mathcal{A}_0(\mathbb{R})$ and $$ \bignorm{\widehat{b}(-\,\cdotp)}_{\mathcal{A}_0}\leq\norme{b}_1. $$ Moreover the mapping $L^1(\mathbb{R}_+)\to \mathcal{A}_0(\mathbb{R})$ taking $b$ to $\widehat{b}(-\,\cdotp)$ is a Banach algebra homomorphism with dense range. \end{prop} \begin{proof} We will use the space $C_{00}(\mathbb{R})$ defined by (\ref{C00}). Let $C_{00}\star H^1\subset \mathcal{A}_0(\mathbb{R})$ be the linear span of the functions $f\star h$, for $f\in C_{00}(\mathbb{R})$ and $h\in H^1(\mathbb{R})$. By definition of $C_{00}(\mathbb{R})$, the Fourier transform maps $C_{00}\star H^1$ into $L^1(\mathbb{R}_+)$. Then we consider $$ {\mathcal C}_{0,1}=\{\widehat{F}\, :\, F\in C_{00} \star H^1\}\subset L^1(\mathbb{R}_+). $$ Let $b\in C^\infty(\mathbb{R}_+^*)$ with compact support.
Let $c\in C^\infty(\mathbb{R}_+^*)$ with compact support such that $c\equiv 1$ on the support of $b$, so that $b=bc$. Then ${\mathcal F}^{-1}(b)\in C_{00}(\mathbb{R})$, ${\mathcal F}^{-1}(c)\in H^1(\mathbb{R})$ and the Fourier transform of ${\mathcal F}^{-1}(b)\star {\mathcal F}^{-1}(c)$ is equal to $b$. Thus $b\in {\mathcal C}_{0,1}$. Consequently, ${\mathcal C}_{0,1}$ is dense in $L^1(\mathbb{R}_+)$. Let $b\in {\mathcal C}_{0,1}$ and let $F=\widehat{b}(-\,\cdotp)$, so that $$ \widehat{F}\,=(2\pi)b. $$ Take finite families $(f_k)_k$ in $C_{00}(\mathbb{R})$ and $(h_k)_k$ in $H^1(\mathbb{R})$ such that $F=\sum_k f_k\star h_k$. Pick $\eta\in\mathcal{A}_0(\mathbb{R})^*$ such that $\norme{\eta}=1$ and $\norme{F}_{\mathcal{A}_0}= \langle\eta,F\rangle$. By Theorem \ref{DualA0}, there exists $T\in\mathcal{M}(H^1(\R))$ such that $\norme{T}_{B(H^1)}=1$ and for any $k$, $$ \langle\eta, f_k\star h_k\rangle = \int_{-\infty}^\infty (Th_k)(u) f_k(-u)\, du. $$ By Theorem \ref{thmmutlH1}, the symbol $m$ of the multiplier $T$ satisfies $\norme{m}_\infty\leq 1$. Moreover by Lemma \ref{Tool}, we have \begin{align*} \int_{-\infty}^\infty (Th_k)(u) f_k(-u)\, du\, &= \,\frac{1}{2\pi}\int_{0}^\infty \widehat{Th_k}(t) \widehat{f_k}(t)\, dt\\ &= \,\frac{1}{2\pi}\int_{0}^\infty m(t) \widehat{h_k}(t)\widehat{f_k}(t)\, dt, \end{align*} for all $k$. Summing over $k$, we deduce that $$ \langle\eta,F\rangle = \,\frac{1}{2\pi}\, \sum_k \int_{0}^\infty m(t) \widehat{h_k}(t)\widehat{f_k}(t)\, dt\, = \,\frac{1}{2\pi}\,\int_{0}^\infty m(t) \widehat{F}(t)\, dt, $$ and hence $$ \langle\eta,F\rangle = \int_{0}^\infty m(t)b(t)\,dt. $$ We deduce that $$ \norm{F}_{\mathcal{A}_0}\leq\norm{b}_1. 
$$ Since ${\mathcal C}_{0,1}$ is dense in $L^1(\mathbb{R}_+)$ and $\mathcal{A}_0(\mathbb{R})$ is complete, this estimate implies that $\widehat{b}(-\,\cdotp)\colon\mathbb{R}\to\mathbb{C}\,$ belongs to $\mathcal{A}_0(\mathbb{R})$ for any $b\in L^1(\mathbb{R}_+)$, with $\bignorm{\widehat{b}(-\,\cdotp)}_{\mathcal{A}_0}\leq\norme{b}_1$. It is plain that $b\mapsto \widehat{b}(-\,\cdotp)$ is a Banach algebra homomorphism. Its range contains $C_{00}\star H^1$ and the latter is dense in $\mathcal{A}_0(\mathbb{R})$, because $C_{00}(\mathbb{R})$ is dense in $C_0(\mathbb{R})$. \end{proof} \begin{rq1}\label{DualA0+} Let $\eta\in\mathcal{A}_0(\mathbb{R})^*$, let $T\in \mathcal{M}(H^1(\R))$ be the multiplier associated with $\eta$, according to Theorem \ref{DualA0}, and let $m\in C_b(\mathbb{R}_+^*)$ be the symbol of $T$. Then it follows from the previous result and its proof that for all $b\in L^1(\mathbb{R}_+)$, $$ \langle\eta,\widehat{b}(-\,\cdotp)\rangle = \int_{0}^\infty m(t)b(t)\,dt. $$ \end{rq1} \begin{rq1}\label{Rational} For any $\lambda\in P_-$, we let $$ b_\lambda(t)= ie^{-i\lambda t},\qquad t>0. $$ Then $b_\lambda\in L^1(\mathbb{R}_+)$ and we have $$ \widehat{b_\lambda}(-u) = \,\frac{1}{\lambda - u},\qquad u\in\mathbb{R}. $$ Applying Proposition \ref{lemcont}, we obtain that $(\lambda - \,\cdotp)^{-1}$ belongs to $\mathcal{A}_0(\mathbb{R})$ for any $\lambda\in P_-$. Since $\mathcal{A}_0(\mathbb{R})$ is a Banach algebra, this implies that any rational function $F\colon \mathbb{R}\to\mathbb{C}\,$ with degree $\deg(F)\leq -1$ and poles in $P_-$ belongs to $\mathcal{A}_0(\mathbb{R})$. \end{rq1} We can now strengthen Remark \ref{rqA0} as follows. \begin{prop}\label{normequA0} For any $F \in \mathcal{A}_0(\mathbb{R})$, we have \[ \norme{F}_{\mathcal{A}_0}=\norme{F}_{\mathcal{A}}. \] \end{prop} \begin{proof} For any $N\in\mathbb{N}$, let $G_N\colon\mathbb{R}\to\mathbb{C}$ be defined by $$ G_N(u)=\,\frac{N}{N-iu}\,,\qquad u\in\mathbb{R}.
$$ Then $G_N = \widehat{Ne^{-N\,\cdotp}}(-\,\cdotp)$. We note that the sequence $(Ne^{-N\,\cdotp})_{N\in\mathbb{N}}$ is a contractive approximate unit of $L^1(\mathbb{R}_+)$. It therefore follows from Proposition \ref{lemcont} that $(G_N)_{N\in\mathbb{N}}$ is a contractive approximate unit of $\mathcal{A}_0(\mathbb{R})$. Let $F\in\mathcal{A}_0(\mathbb{R})$. By Proposition \ref{propAlg}, we have \begin{equation}\label{inenormA0A} \norme{FG_N}_{\mathcal{A}_0} \leq \norme{F}_{\mathcal{A}}\norme{G_N}_{\mathcal{A}_0} \leq \norme{F}_{\mathcal{A}}. \end{equation} Moreover $FG_N\to F$ in $\mathcal{A}_0(\mathbb{R})$ hence $\norme{FG_N}_{\mathcal{A}_0}\to \norme{F}_{\mathcal{A}_0}$ when $N\to\infty$. We deduce that \[ \norme{F}_{\mathcal{A}_0} \leq \norme{F}_{\mathcal{A}}. \] Combining with \eqref{comparnormA}, we obtain $\norme{F}_{\mathcal{A}_0} = \norme{F}_{\mathcal{A}}$. \end{proof} \begin{rq1}\label{densities} According to \cite[Chapter II, Theorem 3.8]{gar}, $H^1_{-}(\mathbb{R})\subset M(\mathbb{R})$ is the annihilator of $\bigl\{(\lambda-\,\cdotp)^{-1}\, :\, \lambda\in P_{-}\bigr\}\subset C_0(\mathbb{R})$. Hence we deduce from Remark \ref{Rational} that $H^1_{-}(\mathbb{R})$ contains $\mathcal{A}_0(\mathbb{R})^\perp$. By (\ref{Ortho-H1}), we have $(C_0(\mathbb{R})\cap H^\infty(\mathbb{R}))^\perp=H^1_{-}(\mathbb{R})$. Hence $\mathcal{A}_0(\mathbb{R})^\perp=(C_0(\mathbb{R})\cap H^\infty(\mathbb{R}))^\perp$. This implies that $\mathcal{A}_0(\mathbb{R})$ is dense in $C_0(\mathbb{R})\cap H^\infty(\mathbb{R})$. Using Proposition \ref{lemcont}, or repeating the above argument, we also obtain that the space $\{\widehat{b}(-\,\cdotp)\, :\, b\in L^1(\mathbb{R}_+)\}$ is dense in $C_0(\mathbb{R})\cap H^\infty(\mathbb{R})$. 
\end{rq1} \subsection{Half-plane versions}\label{HP-Versions} For the purpose of studying functional calculus in the next three sections, we now introduce half-plane versions $\mathcal{A}_0(\mathbb{C}_+)$ and $\mathcal{A}(\mathbb{C}_+)$ of $\mathcal{A}_0(\mathbb{R})$ and $\mathcal{A}(\mathbb{R})$, respectively. Let $F\in H^\infty(\mathbb{R})$. We may consider its Poisson integral ${\mathcal P}[F]\colon P_+\to\mathbb{C}$ and the latter is a bounded holomorphic function (see e.g. \cite[Sect. I.3]{gar}). Then we define $$ \widetilde{F}\colon \mathbb{C}_+\longrightarrow\mathbb{C} $$ by setting $$ \widetilde{F}(z) = {\mathcal P}[F](iz),\qquad z\in\mathbb{C}_+. $$ Note that the mapping $F\mapsto \widetilde{F}$ is an isometric algebra isomorphism of $H^\infty(\mathbb{R})$ onto $H^\infty(\mathbb{C}_+)$. We set $$ \mathcal{A}_0(\mathbb{C}_+) = \bigl\{\widetilde{F}\, :\, F\in\mathcal{A}_0(\mathbb{R})\bigr\} \qquad\hbox{and}\qquad \mathcal{A}(\mathbb{C}_+) = \bigl\{\widetilde{F}\, :\, F\in\mathcal{A}(\mathbb{R})\bigr\}. $$ We equip these spaces with the norms induced by $\mathcal{A}_0(\mathbb{R})$ and $\mathcal{A}(\mathbb{R})$, respectively. That is, we set $\norm{\widetilde{F}}_{\mathcal{A}_0}= \norm{F}_{\mathcal{A}_0}$ (resp. $\norm{\widetilde{F}}_{\mathcal{A}}= \norm{F}_{\mathcal{A}}$) for any $F\in\mathcal{A}_0(\mathbb{R})$ (resp. $F\in\mathcal{A}(\mathbb{R})$). By construction, we have \begin{equation}\label{Inclusions} \mathcal{A}_0(\mathbb{C}_+)\subset \mathcal{A}(\mathbb{C}_+) \qquad\hbox{and}\qquad \mathcal{A}(\mathbb{C}_+)\subset H^\infty(\mathbb{C}_+). \end{equation} It clearly follows from Proposition \ref{propAlg} and from the multiplicativity of the mapping $F\mapsto \widetilde{F}$ that $\mathcal{A}(\mathbb{C}_+)$ is a Banach algebra for pointwise multiplication and that $\mathcal{A}_0(\mathbb{C}_+)$ is an ideal of $\mathcal{A}(\mathbb{C}_+)$. 
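The definition of $\widetilde{F}$ can be illustrated numerically. The sketch below uses the specific boundary function $F(u)=1/(1-iu)$ (an illustrative choice, not taken from the text), whose bounded holomorphic extension to the upper half-plane is $w\mapsto 1/(1-iw)$; evaluating the Poisson integral of $F$ at $iz$ should therefore reproduce $\widetilde{F}(z)=1/(1+z)$ for $z\in\mathbb{C}_+$.

```python
import numpy as np

# Numerical sketch of F~(z) = P[F](iz) for the illustrative boundary
# function F(u) = 1/(1 - iu); its bounded holomorphic extension to the
# upper half-plane is w -> 1/(1 - iw), hence F~(z) = 1/(1 + z) on C_+.
z = 0.7 + 0.3j                        # a point of C_+ (Re z > 0)
w = 1j * z                            # iz lies in the upper half-plane P_+
x, y = w.real, w.imag
assert y > 0

# Poisson integral over the boundary, truncated to [-500, 500]
s = np.linspace(-500.0, 500.0, 1_000_001)
ds = s[1] - s[0]
F = 1.0 / (1.0 - 1j * s)
poisson = np.sum((y / np.pi) * F / ((x - s) ** 2 + y ** 2)) * ds

assert abs(poisson - 1.0 / (1.0 + z)) < 1e-5
```

The truncation error is of order $y/(2\pi S^2)$ for the cutoff $S=500$, which is well below the tolerance used above.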
Further the second inclusion in (\ref{Inclusions}) is contractive and it follows from Proposition \ref{normequA0} that the first inclusion in (\ref{Inclusions}) is an isometry. Let $b\in L^1(\mathbb{R}_+)$ and consider $F=\widehat{b}(-\,\cdotp)\colon\mathbb{R}\to\mathbb{C}$. Then $\widetilde{F}$ coincides with the Laplace transform $L_b\colon\mathbb{C}_+\to\mathbb{C}$ of $b$ defined by \begin{equation}\label{Lb} L_b(z)=\,\int_{0}^{\infty} e^{-tz} b(t)\, dt,\qquad z\in\mathbb{C}_+. \end{equation} As a consequence of Proposition \ref{lemcont}, we therefore obtain the following. \begin{lem}\label{Laplace} For any $b\in L^1(\mathbb{R}_+)$, $L_b$ belongs to $\mathcal{A}_0(\mathbb{C}_+)$ and $\norm{L_b}_{\mathcal{A}_0}\leq \norm{b}_1$. Moreover the mapping $b\mapsto L_b$ is a Banach algebra homomorphism from $L^1(\mathbb{R}_+)$ into $\mathcal{A}_0(\mathbb{C}_+)$, and the space $\{L_b\,:\, b\in L^1(\mathbb{R}_+)\}$ is dense in $\mathcal{A}_0(\mathbb{C}_+)$. \end{lem} \section{Functional calculus on $\mathcal{A}_0$ and $\mathcal{A}$}\label{CF} The goal of this section is to construct a bounded functional calculus for the negative generator of a bounded $C_0$-semigroup on Hilbert space, defined on $\mathcal{A}(\mathbb{C}_+)$. \subsection{Half-plane holomorphic functional calculus}\label{HPFC} We need some background on the half-plane holomorphic functional calculus introduced by Batty-Haase-Mubeen in \cite{bat-haa}, to which we refer for details. Let $X$ be an arbitrary Banach space. Let $\omega\in\mathbb{R}$ and recall the definition of ${\mathcal H}_{\omega}$ from (\ref{HP0}). Let $A$ be a closed and densely defined operator on $X$ such that the spectrum of $A$ is included in the closed half-plane $\overline{{\mathcal H}_{\omega}}$ and \begin{equation}\label{HP} \forall\,\alpha<\omega,\qquad \sup \bigl\{\norme{R(z,A)}\, :\, {\rm Re}(z) \leq \alpha \bigr\} < \infty. 
\end{equation} Consider the auxiliary algebra \[ \mathcal{E}({\mathcal H}_{\alpha}) := \bigl\{\varphi \in H^{\infty}({\mathcal H}_{\alpha}) \,:\,\exists s>0,\, \vert \varphi(z)\vert = O(|z|^{-(1+s)}) \text{ as } |z| \rightarrow \infty\bigr\}, \] for any $\alpha<\omega$. For any $\varphi\in \mathcal{E}({\mathcal H}_{\alpha})$ and for any $\beta\in(\alpha,\omega)$, the assumption (\ref{HP}) ensures that the integral \begin{equation}\label{fofA} \varphi(A) := \frac{-1}{2\pi}\,\int_{-\infty}^{\infty}\varphi(\beta+is)R(\beta+is,A)\,ds \end{equation} is absolutely convergent in $B(X)$. Further its value is independent of the choice of $\beta$ (this is due to Cauchy's Theorem for vector-valued holomorphic functions) and the mapping $\varphi\mapsto \varphi(A)$ is an algebra homomorphism from $\mathcal{E}({\mathcal H}_{\alpha})$ into $B(X)$. This definition is compatible with the usual rational functional calculus; indeed for any $\mu\in\mathbb{C}$ with ${\rm Re}(\mu)<\alpha$ and any integer $m\geq 2$, the function $$ e_{\mu,m}\colon z\mapsto (\mu -z)^{-m}, $$ which belongs to $\mathcal{E}({\mathcal H}_{\alpha})$, satisfies $e_{\mu,m}(A)=R(\mu,A)^m$. Let $\varphi\in H^{\infty}({\mathcal H}_{\alpha})$. We can define a closed, densely defined operator $\varphi(A)$ by regularisation, as follows (see \cite{bat-haa} and \cite{haa1} for more on such constructions). Take $\mu\in\mathbb{C}$ with ${\rm Re}(\mu)<\alpha$, and set $e=e_{\mu,2}$. Then $e\varphi \in \mathcal{E}({\mathcal H}_{\alpha})$ and $e(A) = R(\mu,A)^2$ is injective. The operator $\varphi(A)$ is then defined by \begin{equation}\label{eqregul} \varphi(A) = e(A)^{-1}(e\varphi)(A), \end{equation} with domain ${\rm Dom}(\varphi(A))$ equal to the space of all $x\in X$ such that $[(e\varphi)(A)](x)$ belongs to the range of $e(A)$ $(= {\rm Dom}(A^2))$. It turns out that this definition does not depend on the choice of $\mu$.
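The compatibility $e_{\mu,m}(A)=R(\mu,A)^m$ can be tested numerically in finite dimensions, where (\ref{fofA}) is an ordinary matrix-valued integral. In the sketch below (the matrix and all parameters are illustrative choices, not from the text), $A$ is a triangular $2\times 2$ matrix with spectrum $\{1,3\}$, so that (\ref{HP}) holds with $\omega=1$; we take $\alpha=0$, $\beta=0.5$, $\mu=-1$ and $\varphi=e_{\mu,2}$, and compare the truncated integral with $R(\mu,A)^2$.

```python
import numpy as np

# Illustrative data: A has spectrum {1, 3}, so (HP) holds with omega = 1.
A = np.array([[1.0, 1.0], [0.0, 3.0]])
I2 = np.eye(2)
mu, beta = -1.0, 0.5          # Re(mu) < alpha = 0 < beta < omega = 1

s = np.linspace(-500.0, 500.0, 400_001)
ds = s[1] - s[0]
z = beta + 1j * s

phi = (mu - z) ** -2          # phi = e_{mu,2}, an element of E(H_alpha)

# Entries of the resolvent R(z, A) = (z I - A)^{-1}, written out for
# this triangular A so the integral can be evaluated in vectorized form.
R11 = 1.0 / (z - 1.0)
R22 = 1.0 / (z - 3.0)
R12 = 1.0 / ((z - 1.0) * (z - 3.0))

# phi(A) = (-1 / 2 pi) int phi(beta + i s) R(beta + i s, A) ds
phiA = np.empty((2, 2), dtype=complex)
phiA[0, 0] = (-1.0 / (2.0 * np.pi)) * np.sum(phi * R11) * ds
phiA[0, 1] = (-1.0 / (2.0 * np.pi)) * np.sum(phi * R12) * ds
phiA[1, 0] = 0.0
phiA[1, 1] = (-1.0 / (2.0 * np.pi)) * np.sum(phi * R22) * ds

Rmu = np.linalg.inv(mu * I2 - A)
assert np.allclose(phiA, Rmu @ Rmu, atol=1e-4)
```

Since the integrand decays like $|s|^{-3}$, truncating at $|s|=500$ leaves an error far below the tolerance used in the comparison.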
The half-plane holomorphic functional calculus $\varphi\mapsto \varphi(A)$ satisfies the following ``Convergence Lemma'', provided by \cite[Theorem 3.1]{bat-haa}. \begin{lem}\label{CVLemma} Assume that $A$ satisfies (\ref{HP}) for some $\omega\in\mathbb{R}$ and let $\alpha<\omega$. Let $(\varphi_i)_i$ be a net of $H^\infty({\mathcal H}_\alpha)$ such that $\varphi_i(A)\in B(X)$ for all $i$ and let $\varphi\in H^\infty({\mathcal H}_\alpha)$ such that $\varphi_i\to \varphi$ pointwise on ${\mathcal H}_\alpha$, when $i\to\infty$. If $$ \sup_i\norm{\varphi_i}_{H^\infty({\mathcal H}_\alpha)}\,<\infty \qquad\hbox{and}\qquad \sup_i\norm{\varphi_i(A)}_{B(X)}\,<\infty, $$ then $\varphi(A)\in B(X)$ and for any $x\in X$, $[\varphi_i(A)](x)\to [\varphi(A)](x)$ when $i\to\infty$. \end{lem} Let $(T_t)_{t\geq 0}$ be a bounded $C_0$-semigroup on $X$ and let $-A$ denote its infinitesimal generator. For any $b\in L^1(\mathbb{R}_+)$, we set $$ \Gamma(A,b) :=\,\int_0^\infty b(t)\, T_t\, dt, $$ where the latter integral is defined in the strong sense. The mapping $b\mapsto \Gamma(A,b)$ is the so-called Hille-Phillips functional calculus. We refer to \cite[Section 3.3]{haa1} for information and background. We recall that this functional calculus is a Banach algebra homomorphism $$ L^1(\mathbb{R}_+)\longrightarrow B(X). $$ We now provide a compatibility result between the half-plane holomorphic functional calculus and the Hille-Phillips functional calculus. This kind of compatibility property is very common in functional calculus (see e.g. \cite[Section 3.3]{haa1}) and we follow a classical approach. Note that any $A$ as above satisfies (\ref{HP}) for $\omega=0$. Thus for any $\varepsilon >0$, the operator $A+\varepsilon$ satisfies (\ref{HP}) for $\omega=\varepsilon$. For any $b\in L^1(\mathbb{R}_+)$, this allows us to define $L_b(A+\varepsilon)$, where $L_b$ is the Laplace transform defined by (\ref{Lb}).
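In finite dimensions the Hille-Phillips integral can be computed directly, which gives a sanity check of this compatibility without the shift $\varepsilon$: for a matrix $A$ with spectrum in the open right half-plane, $\Gamma(A,b)$ and $L_b(A)$ both make sense as they stand. The matrix and the function $b$ below are illustrative choices; with $b(t)=e^{-t}$ we have $L_b(z)=1/(1+z)$, so the integral should reproduce $(I+A)^{-1}$.

```python
import numpy as np

# Finite-dimensional sanity check (illustrative data): with T_t = e^{-tA}
# and b(t) = e^{-t}, Gamma(A, b) = int_0^inf b(t) T_t dt = (I + A)^{-1}.
A = np.array([[1.0, 1.0], [0.0, 3.0]])   # spectrum {1, 3} in the right half-plane
lam, V = np.linalg.eig(A)                # A = V diag(lam) V^{-1}
Vinv = np.linalg.inv(V)

t = np.linspace(0.0, 40.0, 40_001)
dt = t[1] - t[0]

def integral(g):
    # composite trapezoid rule on the grid t
    return (0.5 * (g[0] + g[-1]) + g[1:-1].sum()) * dt

# Gamma(A, b) assembled through the eigendecomposition of A:
# e^{-tA} = V diag(e^{-t lam}) V^{-1}, so the integral acts diagonally.
diag = np.diag([integral(np.exp(-t) * np.exp(-l * t)) for l in lam])
gamma = V @ diag @ Vinv

assert np.allclose(gamma, np.linalg.inv(np.eye(2) + A), atol=1e-5)
```

The truncation at $t=40$ is harmless here since the integrand decays like $e^{-2t}$.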
\begin{lem}\label{lemcompatibility} Let $b\in L^1(\mathbb{R}_+)$ and let $-A$ be the generator of a bounded $C_0$-semigroup on $X$. Then for any $\varepsilon>0$, we have $$ L_b(A+\varepsilon) = \Gamma(A+\varepsilon,b). $$ \end{lem} \begin{proof} Let $\varepsilon >0$, let $\beta\in(0,\varepsilon)$, and let $b\in L^1(\mathbb{R}_+)$. First suppose that $L_b \in \mathcal{E}({\mathcal H}_0)$. Then by the Laplace formula, $$ L_b(A+\varepsilon) =\, \frac{1}{2\pi} \int_{-\infty}^{\infty} L_b(\beta + is) \Big(\int_{0}^{\infty}e^{ist}e^{\beta t}e^{-\varepsilon t}T_t\,dt\Big)ds. $$ Since $L_b \in \mathcal{E}({\mathcal H}_0)$, the function $s\mapsto L_b(\beta +is)$ belongs to $L^1(\mathbb{R})$. Hence by Fubini's theorem, $$ L_b(A+\varepsilon) = \,\frac{1}{2\pi}\int_{0}^{\infty}e^{\beta t} \Big(\int_{-\infty}^{\infty}L_b(\beta + is)e^{ist}\,ds\Big) \,e^{-\varepsilon t}T_t\,dt. $$ By the Fourier inversion Theorem, we have \[ \frac{1}{2\pi}\int_{-\infty}^{\infty}L_b(\beta +is)e^{ist}\,ds = e^{-\beta t}b(t) \] for each $t>0$. We deduce that \[ L_b(A+\varepsilon) = \int_{0}^{\infty}b(t)e^{-\varepsilon t}T_t\, dt = \Gamma(A+\varepsilon,b). \] For the general case, let us consider $e \in \mathcal{E}({\mathcal H}_0)$ defined by $e(z) = (1+z)^{-2}$. We note that $e$ is the Laplace transform of the function $c\in L^1(\mathbb{R}_+)$ defined by $c(t) = te^{-t}$. The product $eL_b$, which belongs to $\mathcal{E}({\mathcal H}_0)$, is therefore the Laplace transform of $b\star c$. Hence by the first part of this proof, \[ (eL_b)(A+\varepsilon) = \Gamma(A+\varepsilon,b\star c). \] The multiplicativity of the Hille-Phillips functional calculus yields $\Gamma(A+\varepsilon,b\star c)=\Gamma(A+\varepsilon,c)\Gamma(A+\varepsilon, b)$. Further $e(A+\varepsilon) = \Gamma(A+\varepsilon, c)$ by the Laplace formula. Thus we have $$ e(A+\varepsilon)\Gamma(A+\varepsilon,b) = (eL_b)(A+\varepsilon).
$$ Applying \eqref{eqregul}, we obtain that $L_b(A+\varepsilon) = \Gamma(A+\varepsilon,b)$ as required. \end{proof} \subsection{Functional calculus on $\mathcal{A}_0(\mathbb{C}_+)$}\label{MAIN} Throughout this subsection, we fix a Hilbert space $H$, we let $(T_t)_{t \geq 0}$ be a bounded $C_0$-semigroup on $H$ and we let $-A$ denote its infinitesimal generator. We set \[ C := \sup\{ \norme{T_t}\, :\, t\geq 0 \}. \] For any $f\in C_{00}(\mathbb{R})$ and any $h \in H^1(\mathbb{R})$, the function $$ b = (2\pi)^{-1}\widehat{f}\widehat{h} $$ belongs to $L^1(\mathbb{R}_+)$ and we have $\widehat{b}(-\,\cdotp)= f\star h$. Consequently, \[ (f\star h)^{\sim} = L_b. \] Further we have the following key estimate, which is inspired by \cite[Proposition 4.16]{pis1}. \begin{lem}\label{key} For any $f\in C_{00}(\mathbb{R})$ and any $h \in H^1(\mathbb{R})$, \begin{equation}\label{inelemcalfon} \bignorm{\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)} \leq C^2\normeinf{f}\norme{h}_1. \end{equation} \end{lem} \begin{proof} We fix $f\in C_{00}(\mathbb{R})$. Let $w,v \in H^2(\mathbb{R})\cap \mathcal{S}(\mathbb{R})$ and let $h=wv$. By definition, \[ \Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)= \,\frac{1}{2\pi}\, \int_{0}^{\infty}\widehat{f}(t)\widehat{wv}(t)T_t\, dt. \] By assumption, $\widehat{w}$ and $\widehat{v}$ belong to $L^1(\mathbb{R}_+)$ hence $\widehat{wv} = (2\pi)^{-1} \widehat{w}\star\widehat{v}$ belongs to $L^1(\mathbb{R}_+)$. Further $f$ belongs to $L^1(\mathbb{R})$. We can therefore apply Fubini's Theorem and we obtain that \[ \frac{1}{2\pi}\,\int_{0}^{\infty}\widehat{f}(t) \widehat{wv}(t)T_t\, dt = \frac{1}{2\pi}\, \int_{-\infty}^{\infty} f(s)\Bigl(\int_{0}^{\infty}\widehat{wv}(t) e^{-its}T_t\, dt\Bigr)\,ds. \] Note that for any $s\in\mathbb{R}$, $$ \int_{0}^{\infty}\widehat{wv}(t) e^{-its}T_t\, dt = \Gamma(A+is,\widehat{wv}).
$$ According to the multiplicativity of the Hille-Phillips functional calculus, we have \[ \int_{0}^{\infty}\widehat{wv}(t)e^{-its}T_t \,dt = \frac{1}{2\pi} \Big(\int_{0}^{\infty}\widehat{w}(r)e^{-irs}T_r\,dr \Big)\Big(\int_{0}^{\infty}\widehat{v}(t)e^{-its}T_{t}\, dt\Big). \] Let $W,V\colon\mathbb{R}\to B(H)$ be defined by $$ W(s) = \int_{0}^{\infty} \widehat{w}(r)e^{-irs}T_r\,dr \qquad\hbox{and}\qquad V(s) = \int_{0}^{\infty} \widehat{v}(t)e^{-its}T_t\,dt, \qquad s\in\mathbb{R}. $$ It follows from above that for any $x,x^*\in H$, we have \begin{equation}\label{equaforgammbound} \bigl\langle \Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)x,x^* \bigr\rangle = \frac{1}{4\pi^2}\int_{-\infty}^\infty f(s) \langle W(s) x, V(s)^*x^* \rangle\,ds . \end{equation} Applying the Cauchy-Schwarz inequality, we deduce \[ \big|\big\langle \Gamma\bigl(A, (2\pi)^{-1} \widehat{f}\widehat{h}\bigr)x,x^* \big\rangle\big| \leq \frac{1}{4\pi^2}\norme{f}_{\infty}\Bigl(\int_{-\infty}^\infty\norme{W(s)x}^2ds \Bigr)^{\frac{1}{2}} \Bigl(\int_{-\infty}^\infty \norme{V(s)^*x^*}^2 ds\Bigr)^{\frac{1}{2}}. \] According to the Fourier-Plancherel equality on $L^2(\mathbb{R};H)$, we have $$ \int_{-\infty}^\infty \norme{W(s)x}^2 ds = 2\pi \int_{0}^\infty |\widehat{w}(r)|^2\norme{T_r x}^2dr. $$ This implies $$ \int_{-\infty}^\infty \norme{W(s)x}^2 ds \leq 2\pi C^2\int_{0}^\infty |\widehat{w}(r)|^2\norme{x}^2 dr \, = 4\pi^2 C^2 \norme{w}^2_2\norme{x}^2. $$ Similarly, we have \[ \int_{-\infty}^\infty \norme{V(s)^*x^*}^2ds \leq 4\pi^2 C^2\norme{v}^2_2\norme{x^*}^2. \] Hence, $$ \big|\big\langle \Gamma\bigl(A, (2\pi)^{-1} \widehat{f}\widehat{h}\bigr)x,x^* \big\rangle\big| \leq C^2\norme{f}_{\infty}\norme{w}_2\norme{v}_2 \norm{x}\norm{x^*}. $$ Since this is true for all $x,x^*$, we have proved that $$ \bignorm{\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)} \leq C^2\norme{f}_{\infty}\norme{w}_2\norme{v}_2 .
$$ Now let $h$ be an arbitrary element of $H^1(\mathbb{R})$. As is well-known (see e.g. \cite[Exercise 1, p. 84]{gar}), there exist $w,v\in H^2(\mathbb{R})$ such that $h =wv$ and $\norme{w}_2^2 =\norme{v}_2^2 = \norme{h}_1$. Since $\mathcal{F}(\mathcal{S}(\mathbb{R})) = \mathcal{S}(\mathbb{R})$, it follows from (\ref{H2=FL2}) that $\mathcal{F}(H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})) = L^2(\mathbb{R}_+) \cap \mathcal{S}(\mathbb{R})$. Since $ L^2(\mathbb{R}_+) \cap \mathcal{S}(\mathbb{R})$ is dense in $L^2(\mathbb{R}_+)$, we readily deduce that $H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})$ is dense in $H^2(\mathbb{R})$. Thus, there exist sequences $(w_k)_{k\in \mathbb{N}}$, $(v_k)_{k\in \mathbb{N}} $ in $H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})$ such that $w_k \to w$ and $v_k \to v$ in $H^2(\mathbb{R})$, when $k\to\infty$. This implies that $$ \norm{w_k}_2\norm{v_k}_2\longrightarrow\norm{h}_1 $$ and $w_kv_k \to wv=h$ in $H^1(\mathbb{R})$, when $k\to\infty$. Consequently, $$ \bignorm{\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{w_kv_k}\bigr) -\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)} \longrightarrow 0 $$ when $k\to\infty$. Indeed, \begin{align*} \bignorm{\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{w_kv_k}\bigr) -\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{h}\bigr)} &= \frac{1}{2\pi}\norme{\int_{0}^{\infty} \widehat{f}(t)\widehat{(w_kv_k-wv)}(t)T_t\,dt} \\ & \leq C\norm{\widehat{f}}_1\norm{\widehat{(w_kv_k-wv)}}_\infty \\ & \leq C\norm{\widehat{f}}_1\norm{w_kv_k-wv}_1. \end{align*} For all $k \in \mathbb{N}$, we have $$ \bignorm{\Gamma\bigl(A, (2\pi)^{-1}\widehat{f}\widehat{w_kv_k}\bigr)} \leq C^2\normeinf{f}\norme{w_k}_2\norm{v_k}_2, $$ by the first part of the proof. Passing to the limit, we obtain (\ref{inelemcalfon}). \end{proof} We now arrive at the main result of this subsection.
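Before stating it, let us record, for the reader's convenience, a classical instance of the factorization $h=wv$ used at the end of the proof of Lemma \ref{key} (this is standard Hardy space theory, not specific to our setting). If $h\in H^1(\mathbb{R})$ is outer, then its holomorphic extension to $\mathbb{C}_+$ has no zeros, a holomorphic square root $h^{\frac12}$ belongs to $H^2(\mathbb{R})$, and one may simply take

```latex
h = wv, \qquad w = v = h^{\frac12}, \qquad
\norme{w}_2^2 = \norme{v}_2^2
= \int_{-\infty}^{\infty} \vert h(s)\vert\, ds = \norme{h}_1.
```

In the general case one writes $h=\Theta g$ with $\Theta$ inner and $g$ outer, and takes $w=\Theta g^{\frac12}$, $v=g^{\frac12}$; see e.g. \cite{gar}.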
\begin{thm}\label{main1} There exists a unique bounded homomorphism $\rho_{0,A}\colon \mathcal{A}_0(\mathbb{C}_+)\to B(H)$ such that \begin{equation}\label{main2} \rho_{0,A}(L_b)=\int_0^\infty b(t)T_t\,dt \end{equation} for all $b\in L^1(\mathbb{R}_+)$. Moreover $\norm{\rho_{0,A}}\leq C^2$. \end{thm} \begin{proof} By Lemma \ref{key} and the density of $C_{00}(\mathbb{R})$ in $C_0(\mathbb{R})$, there exists a unique bounded bilinear map \[ u_A \colon C_0(\mathbb{R})\times H^1(\mathbb{R}) \longrightarrow B(H) \] such that $u_A(f,h) = \Gamma\bigl(A,(2\pi)^{-1}\widehat{f}\widehat{h}\bigr)$ for each $(f,h)\in C_{00}(\mathbb{R}) \times H^1(\mathbb{R})$. Moreover $\norm{u_A}\leq C^2$. For each $\varepsilon >0$, $(A+\varepsilon)$ is the negative generator of the semigroup $(e^{-\varepsilon t}T_t)_{t \geq 0}$. Therefore, in the same manner as above, one can define $u_{A+\varepsilon} \colon C_0(\mathbb{R})\times H^1(\mathbb{R}) \rightarrow B(H)$ and we have the uniform estimate \begin{equation}\label{estuAeps} \forall \varepsilon > 0, \qquad \bignorm{u_{A+\varepsilon} : C_0(\mathbb{R})\times H^1(\mathbb{R}) \longrightarrow B(H)} \leq C^2. \end{equation} We claim that for each $\varepsilon>0$, we have \begin{equation}\label{proofpropcalfoncstep1} u_{A+\varepsilon}(f,h)= (f\star h)^\sim(A+\varepsilon), \qquad f\in C_0(\mathbb{R}),\, h\in H^1(\mathbb{R}). \end{equation} (We recall that the operator on the right-hand side is defined by the half-plane holomorphic functional calculus. In particular the above formula shows that $(f\star h)^\sim(A+\varepsilon)$ is bounded.) Recall that if $f\in C_{00}(\mathbb{R})$, then $b=(2\pi)^{-1}\widehat{f}\widehat{h} \in L^1(\mathbb{R}_+)$ and $(f\star h)^\sim=L_b$. Hence (\ref{proofpropcalfoncstep1}) is given by Lemma \ref{lemcompatibility} in this case. In the general case, let $(f_n)_{n\in \mathbb{N}}$ be a sequence in $C_{00}(\mathbb{R})$ such that $f_n \to f$ in $C_0(\mathbb{R})$, when $n\to\infty$.
Then $u_{A + \varepsilon}(f_n,h) \to u_{A + \varepsilon}(f,h)$, hence $(f_n\star h)^\sim(A+\varepsilon) \to u_{A + \varepsilon}(f,h)$. Moreover $(f_n\star h)^\sim\to (f\star h)^\sim$ in $H^\infty(\mathbb{C}_+)$. Therefore by the Convergence Lemma \ref{CVLemma}, $(f\star h)^\sim(A+\varepsilon)$ is bounded and \eqref{proofpropcalfoncstep1} holds true. Next we show that in $B(H)$, we have \begin{equation}\label{eps-zero} u_{A+\varepsilon}(f,h) \underset{\varepsilon \rightarrow 0}\longrightarrow u_A(f,h),\qquad f\in C_0(\mathbb{R}),\, h\in H^1(\mathbb{R}). \end{equation} In the case when $f\in C_{00}(\mathbb{R})$, $$ u_{A+\varepsilon}(f,h)= \,\frac{1}{2\pi}\,\int_0^\infty \widehat{f}(t)\widehat{h}(t)e^{-\varepsilon t} T_t\,dt $$ for all $\varepsilon\geq 0$. Hence $$ \bignorm{u_{A}(f,h) -u_{A+\varepsilon}(f,h)} \leq\,\frac{C}{2\pi} \int_0^\infty\bigl\vert \widehat{f}(t)\widehat{h}(t)\bigr\vert (1-e^{-\varepsilon t})\, dt. $$ This integral goes to $0$ when $\varepsilon\to 0$, by Lebesgue's dominated convergence theorem. This yields the result in this case. The general case follows from the density of $C_{00}(\mathbb{R})$ in $C_0(\mathbb{R})$ and the uniform estimate (\ref{estuAeps}). We now construct $\rho_{0,A}$. Let $F\in \mathcal{A}_0(\mathbb{R})$ and consider two sequences $(f_k)_{k\in \mathbb{N}}$ in $C_0(\mathbb{R})$ and $(h_k)_{k\in \mathbb{N}}$ in $H^1(\mathbb{R})$ satisfying \eqref{ineA} and \eqref{equA2}. We let $$ F_N=\sum_{k=1}^N f_k\star h_k,\qquad N\geq 1. $$ For any fixed $\varepsilon>0$, it follows from \eqref{proofpropcalfoncstep1} that for any $N\geq 1$, $$ \widetilde{F_N}(A+\varepsilon) = \sum_{k=1}^N u_{A+\varepsilon}(f_k,h_k). $$ We have both that $\widetilde{F_N} \to \widetilde{F}$ in $H^\infty(\mathbb{C}_+)$ and that $\sum_{k=1}^N u_{A+\varepsilon}(f_k,h_k)\to \sum_{k=1}^\infty u_{A+\varepsilon}(f_k,h_k)$ in $B(H)$.
Appealing again to Lemma \ref{CVLemma}, we deduce that $\widetilde{F}(A+\varepsilon)\in B(H)$ and that \begin{equation}\label{proofpropcalfoncstep2} \widetilde{F}(A+\varepsilon) = \sum_{k=1}^\infty u_{A+\varepsilon}(f_k,h_k). \end{equation} We observe that \begin{equation}\label{proofpropcalfoncstep3} \sum_{k=1}^\infty u_{A+\varepsilon}(f_k,h_k) \underset{\varepsilon \rightarrow 0} \longrightarrow \sum_{k=1}^\infty u_A(f_k,h_k) \end{equation} in $B(H)$. To check this, let $a>0$ and choose $N\geq 1$ such that $\sum_{k=N+1}^\infty \norm{f_k}_\infty\norm{h_k}_1\leq a$. We have \begin{align*} \Bignorm{ \sum_{k=1}^{\infty} u_{A+\varepsilon} (f_k,h_k)\,-\, \sum_{k=1}^{\infty} u_{A} (f_k,h_k)} &\,\leq \Bignorm{\sum_{k=1}^{N} u_{A+\varepsilon} (f_k,h_k)\,-\, \sum_{k=1}^{N} u_{A} (f_k,h_k)}\\ & +\, \sum_{k=N+1}^{\infty} \norm{u_{A+\varepsilon}(f_k,h_k)}\ +\, \sum_{k=N+1}^{\infty} \norm{u_{A}(f_k,h_k)}. \end{align*} Using the uniform estimate (\ref{estuAeps}), this implies that $$ \Bignorm{ \sum_{k=1}^{\infty} u_{A+\varepsilon} (f_k,h_k)\,-\, \sum_{k=1}^{\infty} u_{A} (f_k,h_k)} \,\leq \Bignorm{\sum_{k=1}^{N} u_{A+\varepsilon} (f_k,h_k)\,-\, \sum_{k=1}^{N} u_{A} (f_k,h_k)} +\,2C^2a. $$ Applying (\ref{eps-zero}), we deduce that $$ \Bignorm{ \sum_{k=1}^{\infty} u_{A+\varepsilon} (f_k,h_k)\,-\, \sum_{k=1}^{\infty} u_{A} (f_k,h_k)} \,\leq 3C^2a $$ for $\varepsilon>0$ small enough, which shows the result. Combining (\ref{proofpropcalfoncstep2}) and (\ref{proofpropcalfoncstep3}) we obtain that $\widetilde{F}(A+\varepsilon)$ has a limit in $B(H)$, when $\varepsilon\to 0$. We set $$ \rho_{0,A}(\widetilde{F}) :=\lim_{\varepsilon\to 0} \widetilde{F} (A+\varepsilon). $$ It is plain that $\rho_{0,A}\colon \mathcal{A}_0(\mathbb{C}_+)\to B(H)$ is a linear map.
It follows from the construction that $$ \norm{\rho_{0,A}(\widetilde{F})}\leq C^2 \norm{\widetilde{F}}_{\mathcal{A}_0} $$ for any $F\in\mathcal{A}_0(\mathbb{R})$, hence $\rho_{0,A}$ is bounded with $\norm{\rho_{0,A}}\leq C^2$. Let $b\in L^1(\mathbb{R}_+)$. By the compatibility Lemma \ref{lemcompatibility}, we have $$ L_b(A+\varepsilon) =\int_0^\infty b(t)e^{-\varepsilon t} T_t\,dt $$ for all $\varepsilon >0$. Passing to the limit and using Lebesgue's dominated convergence theorem, we obtain (\ref{main2}). It follows from the density of $\{L_b \,:\, b\in L^1(\mathbb{R}_+)\}$ in $\mathcal{A}_0(\mathbb{C}_+)$, given by Lemma \ref{Laplace}, that $\rho_{0,A}$ is unique. Moreover the multiplicativity of the Hille-Phillips functional calculus ensures that $\rho_{0,A}$ is a Banach algebra homomorphism. \end{proof} \begin{rq1}\label{Indep1} Let $F\in \mathcal{A}_0(\mathbb{R})$ and let $(f_k)_{k\in \mathbb{N}}$ and $(h_k)_{k\in \mathbb{N}}$ be sequences in $C_0(\mathbb{R})$ and $H^1(\mathbb{R})$, respectively, satisfying \eqref{ineA} and \eqref{equA2}. It follows from the proof of Theorem \ref{main1} that \begin{equation}\label{Indep2} \rho_{0,A}(\widetilde{F}) =\,\sum_{k=1}^\infty u_A(f_k,h_k). \end{equation} This equality shows that the right-hand side of (\ref{Indep2}) does not depend on the choice of $(f_k)_{k\in \mathbb{N}}$ and $(h_k)_{k\in \mathbb{N}}$. The reason why we did not take (\ref{Indep2}) as a definition of $\rho_{0,A}$ is precisely that we did not know a priori that $\sum_{k=1}^\infty u_A(f_k,h_k)$ was independent of the representation of $F$. \end{rq1} \subsection{Functional calculus on $\mathcal{A}(\mathbb{C}_+)$}\label{FCA} We keep the notation from the previous subsection. We can extend Theorem \ref{main1} as follows. \begin{cor}\label{main3} There exists a unique bounded homomorphism $\rho_{A}\colon \mathcal{A}(\mathbb{C}_+)\to B(H)$ extending $\rho_{0,A}$. Moreover $\norm{\rho_A}\leq C^2$.
\end{cor} \begin{proof} We follow an idea from \cite{bgt1}, using regularization. Consider the sequence $(G_N)_{N\in\mathbb{N}}$ defined in the proof of Proposition \ref{normequA0}. Then $$ \widetilde{G_N}(z)=\,\frac{N}{N+z}\,,\qquad z\in\mathbb{C}_+,\, N\geq 1. $$ For any $\varphi\in\mathcal{A}(\mathbb{C}_+)$, we let $S_\varphi$ be the operator defined by $$ S_\varphi=(1+A)\rho_{0,A}(\varphi\widetilde{G_1}), $$ with domain ${\rm Dom}(S_\varphi)=\{x\in H\, :\, [\rho_{0,A}(\varphi \widetilde{G_1})](x)\in {\rm Dom}(A)\}$. In this definition, we use the fact that $\varphi\widetilde{G_1}$ belongs to $\mathcal{A}_0(\mathbb{C}_+)$, which follows from Proposition \ref{propAlg}. It is clear that $S_\varphi$ is closed. Further ${\rm Dom}(A)\subset {\rm Dom}(S_\varphi)$, hence $S_\varphi$ is densely defined. More precisely, if $x\in {\rm Dom}(A)$, then $x= \rho_{0,A}(\widetilde{G_1})(1+A)(x)$ hence $$ [\rho_{0,A}(\varphi \widetilde{G_1})](x) = [\rho_{0,A}(\varphi \widetilde{G_1}^2)](1+A)(x) =(1+A)^{-1}[\rho_{0,A}(\varphi \widetilde{G_1})](1+A)(x) $$ belongs to ${\rm Dom}(A)$ and we have \begin{equation}\label{DomA} S_\varphi(x)= \rho_{0,A}(\varphi\widetilde{G_1})(1+A)(x). \end{equation} Since $\rho_{0,A}$ is multiplicative, we have $\rho_{0,A}(\varphi \widetilde{G_N}\widetilde{G_1})= \rho_{0,A}(\varphi \widetilde{G_N})(1+A)^{-1}$ for any $N\geq 1$. Moreover as noticed in the proof of Proposition \ref{normequA0}, $(G_N)_{N\in\mathbb{N}}$ is an approximate unit of $\mathcal{A}_0(\mathbb{R})$, hence $\varphi \widetilde{G_N}\widetilde{G_1}\to \varphi\widetilde{G_1}$ in $\mathcal{A}_0(\mathbb{C}_+)$, when $N\to \infty$. We deduce, using (\ref{DomA}), that for any $x\in{\rm Dom}(A)$, $$ S_\varphi(x) =\lim_N \rho_{0,A}(\varphi \widetilde{G_N}\widetilde{G_1})(1+A)(x) = \lim_N \rho_{0,A}(\varphi \widetilde{G_N})(x).
$$ For any $N\geq 1$, $$ \norm{\rho_{0,A}(\varphi \widetilde{G_N})} \leq C^2 \norm{\varphi\widetilde{G_N}}_{\mathcal{A}_0} \leq C^2 \norm{\varphi}_{\mathcal{A}}, $$ by (\ref{ideal}). Consequently, $\norm{S_\varphi(x)} \leq C^2 \norm{\varphi}_{\mathcal{A}}\norm{x}$ for any $x\in{\rm Dom}(A)$. This shows that ${\rm Dom}(S_\varphi)=H$ and $S_\varphi\in B(H)$. We now define $\rho_{A}\colon \mathcal{A}(\mathbb{C}_+)\to B(H)$ by $\rho_A(\varphi)=S_\varphi$. It is clear from above that $\rho_{A}$ is linear and bounded, with $\norm{\rho_A}\leq C^2$. It extends $\rho_{0,A}$ because if $\varphi\in\mathcal{A}_0(\mathbb{C}_+)$, then we have $\rho_{0,A}(\varphi\widetilde{G_1}) = \rho_{0,A}(\widetilde{G_1})\rho_{0,A}(\varphi)=(1+A)^{-1} \rho_{0,A}(\varphi)$, hence $S_\varphi=\rho_{0,A}(\varphi)$. Let $\varphi_1,\varphi_2\in \mathcal{A}(\mathbb{C}_+)$. For any integers $N_1,N_2\geq 1$, we have $$ \rho_{0,A}\bigl(\varphi_1\varphi_2 \widetilde{G_{N_1}}\widetilde{G_{N_2}}\bigr) = \rho_{0,A}(\varphi_1 \widetilde{G_{N_1}}) \rho_{0,A}(\varphi_2 \widetilde{G_{N_2}}), $$ because $\rho_{0,A}$ is multiplicative. We deduce that $\rho_{0,A}(\varphi_1\varphi_2 \widetilde{G_{N_1}}) = \rho_{0,A}(\varphi_1 \widetilde{G_{N_1}}) \rho_{A}(\varphi_2)$ for all $N_1\geq 1$, by letting $N_2\to\infty$. Next we obtain $\rho_{A}(\varphi_1\varphi_2) = \rho_{A}(\varphi_1) \rho_{A}(\varphi_2)$ by letting $N_1\to\infty$. Thus $\rho_{A}$ is multiplicative. The uniqueness property is clear. \end{proof} \subsection{Operators with a bounded $H^\infty(\mathbb{C}_+)$-functional calculus}\label{H-infty} The goal of this subsection is to explain the connections between our main results (Theorem \ref{main1}, Corollary \ref{main3}) and $H^\infty$-functional calculus. We will assume that the reader is familiar with sectorial operators and their $H^\infty$-functional calculus, for which we refer to \cite{haa1} or \cite[Chapter 10]{hnvw}.
Using standard notation, for any $\theta\in(0,\pi)$ we let $\Sigma_\theta=\{z\in\mathbb{C}^*\, :\, \vert{\rm Arg}(z)\vert<\theta\}$ and $$ H_0^\infty(\Sigma_{\theta}) =\bigl\{\varphi \in H^{\infty}(\Sigma_{\theta}) \,:\,\exists s>0,\, \vert \varphi(z)\vert \lesssim \min\{|z|^s,\vert z\vert^{-s}\} \text{ on } \Sigma_{\theta}\bigr\}. $$ Let $(T_t)_{t\geq 0}$ be a bounded $C_0$-semigroup on some Banach space $X$, with generator $-A$. Recall that $A$ is a sectorial operator of type $\frac{\pi}{2}$. The following lemma is probably known to specialists; we include a proof for the sake of completeness. In part (i), the operator $\varphi(A)$ is defined by (\ref{fofA}) whereas in part (ii), the operator $\varphi(A)$ is defined by \cite[(2.5)]{haa1}. It is worth noting that if $\varphi\in {\mathcal E}({\mathcal H}_\alpha)\cap H_0^\infty(\Sigma_{\theta})$, then these two definitions coincide. \begin{lem}\label{Calcul-H} The following assertions are equivalent. \begin{itemize} \item [(i)] There exists a constant $C>0$ such that for all $\alpha<0$ and for all $\varphi\in {\mathcal E}({\mathcal H}_\alpha)$, \begin{equation}\label{Calcul-HH} \norm{\varphi(A)}\leq C\norm{\varphi}_{H^\infty(\mathbb{C}_+)}. \end{equation} \item [(ii)] There exists a constant $C>0$ such that for all $\theta\in\bigl(\frac{\pi}{2},\pi\bigr)$ and for all $\varphi\in H_0^\infty(\Sigma_{\theta})$, $$ \norm{\varphi(A)}\leq C\norm{\varphi}_{H^\infty(\mathbb{C}_+)}. $$ \item [(iii)] There exists a constant $C>0$ such that for all $b\in L^1(\mathbb{R}_+)$, \begin{equation}\label{Calcul-HHH} \Bignorm{\int_{0}^\infty b(t)T_t\, dt}\leq C\norm{\widehat{b}}_\infty. \end{equation} \end{itemize} \end{lem} \begin{proof} Assume (i). By the approximation argument at the beginning of \cite[Section 5]{bat-haa}, (\ref{Calcul-HH}) holds as well for any $\varphi\in H^\infty({\mathcal H}_\alpha)$. Let $b\in L^1(\mathbb{R}_+)$.
The function $L_b(\,\cdotp+\varepsilon)$ belongs to $H^\infty({\mathcal H}_{-\varepsilon})$ for any $\varepsilon>0$, hence we have $$ \norm{L_b(A+\varepsilon)}\leq C \norm{L_b(\,\cdotp+\varepsilon)}_{H^\infty(\mathbb{C}_+)} \leq C \norm{L_b}_{H^\infty(\mathbb{C}_+)} = C\norm{\widehat{b}}_\infty. $$ Applying Lemma \ref{lemcompatibility} and letting $\varepsilon\to 0$, we obtain (\ref{Calcul-HHH}), which proves (iii). The fact that (iii) implies (ii) follows from \cite[Lemma 3.3.1 $\&$ Proposition 3.3.2]{haa1}, see also \cite[Lemma 2.12]{lem2}. Assume (ii) and let us prove (i). For any $\varepsilon\in (0,1)$, consider the rational function $q_\varepsilon$ defined by $$ q_\varepsilon(z) = \frac{\varepsilon +z}{1+\varepsilon z}, \qquad z\not= \frac{-1}{\varepsilon}. $$ We may and do assume that $\alpha\in (-1,0)$ when proving (i). Fix some $\varepsilon\in (0,1)$. It is easy to check (left to the reader) that $q_\varepsilon$ maps ${\mathcal H}_\alpha$ into itself. Moreover there exists $\theta\in\bigl(\frac{\pi}{2},\pi\bigr)$ such that $q_\varepsilon$ maps $\Sigma_{\theta}$ into $\mathbb{C}_+$. Let $\varphi\in {\mathcal E}({\mathcal H}_\alpha)$; then $$ \varphi_\varepsilon : = \varphi\circ q_\varepsilon\colon {\mathcal H}_\alpha\cup \Sigma_\theta \longrightarrow\mathbb{C} $$ is a well-defined bounded holomorphic function. Moreover we have \begin{equation}\label{unifo} \norm{\varphi_\varepsilon}_{H^{\infty}({\mathcal H}_\alpha)} \leq \norm{\varphi}_{H^{\infty}({\mathcal H}_\alpha)}. \end{equation} By \cite[Lemma 2.2.3]{haa1}, $\varphi_\varepsilon$ belongs to $H_0^\infty(\Sigma_{\theta})\oplus {\rm Span}\{1,(1+\,\cdotp)^{-1}\}$. Further the definition of $\varphi_\varepsilon(A)$ provided by the functional calculus of sectorial operators coincides with the definition of $\varphi_\varepsilon(A)$ provided by the half-plane functional calculus.
Hence for some constant $C'>0$ not depending on $\varepsilon$, we have $$ \norm{\varphi_\varepsilon(A)}\leq C'\norm{\varphi_\varepsilon}_{H^\infty(\mathbb{C}_+)} \leq C'\norm{\varphi}_{H^\infty(\mathbb{C}_+)}, $$ by (ii). Since $\varphi_\varepsilon\to \varphi$ pointwise on ${\mathcal H}_\alpha$, it now follows from (\ref{unifo}) and the Convergence Lemma \ref{CVLemma} that $\norm{\varphi(A)}\leq C'\norm{\varphi}_{H^\infty(\mathbb{C}_+)}$, which proves (i). \end{proof} We say that $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus if one of (equivalently, all of) the properties of Lemma \ref{Calcul-H} holds true. If $A$ is sectorial of type $<\frac{\pi}{2}$, the latter is equivalent to $A$ having a bounded $H^\infty$-functional calculus of angle $\frac{\pi}{2}$ in the usual sense. The main feature of the ``bounded $H^\infty(\mathbb{C}_+)$-functional calculus'' property considered here is that it may apply to the case when the sectorial type of $A$ is not $<\frac{\pi}{2}$. We now come back to the specific case when $X=H$ is a Hilbert space. Here are a few known facts in this setting: \begin{itemize} \item [(f1)] If $(T_t)_{t\geq 0}$ is a contractive semigroup (that is, $\norm{T_t}\leq 1$ for all $t\geq 0$), then $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus. See \cite[Section 7.1.3]{haa1} for a proof and more on this theme. \item [(f2)] We say that $(T_t)_{t\geq 0}$ is similar to a contractive semigroup if there exists an invertible operator $S\in B(H)$ such that $(ST_tS^{-1})_{t\geq 0}$ is a contractive semigroup. A straightforward application of the previous result is that in this case, $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus. \item [(f3)] If $A$ is sectorial of type $<\frac{\pi}{2}$, then $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus (if and) only if $(T_t)_{t\geq 0}$ is similar to a contractive semigroup. This goes back to \cite[Section 4]{lemsim}.
\item [(f4)] There exist sectorial operators of type $<\frac{\pi}{2}$ which do not admit a bounded $H^\infty(\mathbb{C}_+)$-functional calculus, by \cite{mcya, baicle} (see also \cite[Section 7.3.4]{haa1}). \item [(f5)] There exists a bounded $C_0$-semigroup $(T_t)_{t\geq 0}$ such that $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus but $(T_t)_{t\geq 0}$ is not similar to a contractive semigroup. This follows from \cite[Proposition 4.8]{lemsim} and its proof. \end{itemize} We now establish analogues of Theorem \ref{main1} and Corollary \ref{main3} in the case when $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus. Just as we did in Subsection \ref{HP-Versions}, we set $$ {\mathcal C}_0(\mathbb{C}_+)=\bigl\{\widetilde{F}\, :\, F\in C_0(\mathbb{R})\cap H^\infty(\mathbb{R})\bigr\} \qquad\hbox{and}\qquad {\mathcal C}(\mathbb{C}_+)=\bigl\{\widetilde{F}\, :\, F\in C_b(\mathbb{R})\cap H^\infty(\mathbb{R})\bigr\}. $$ Since $\{\widehat{b}(-\,\cdotp)\, :\, b\in L^1(\mathbb{R}_+)\}$ is dense in $C_0(\mathbb{R})\cap H^\infty(\mathbb{R})$, by Remark \ref{densities}, the following is straightforward. \begin{prop}\label{H-infini2} Assume that $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus on $H$. Then there exists a unique bounded homomorphism $\nu_{0,A}\colon {\mathcal C}_0(\mathbb{C}_+) \to B(H)$ such that \begin{equation}\label{main4} \nu_{0,A}(L_b)=\int_0^\infty b(t)T_t\,dt \end{equation} for all $b\in L^1(\mathbb{R}_+)$. \end{prop} Now arguing as in the proof of Corollary \ref{main3}, we deduce the following. \begin{cor}\label{H-infini3} Assume that $A$ admits a bounded $H^\infty(\mathbb{C}_+)$-functional calculus on $H$. Then there exists a unique bounded homomorphism $\nu_{A}\colon {\mathcal C}(\mathbb{C}_+) \to B(H)$ such that (\ref{main4}) holds true for all $b\in L^1(\mathbb{R}_+)$. \end{cor} Of course when the above corollary applies, $\nu_{A}$ is an extension of the mapping $\rho_A$ from Corollary \ref{main3}. 
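As a basic illustration of Corollary \ref{H-infini3} (a standard observation, stated under simplifying assumptions), suppose that $A$ is an invertible positive self-adjoint operator on $H$, so that $(e^{-tA})_{t\geq 0}$ is a contraction semigroup and (f1) applies. Denoting by $E$ the spectral measure of $A$, the spectral functional calculus provides a bounded homomorphism ${\mathcal C}(\mathbb{C}_+)\to B(H)$ satisfying (\ref{main4}), so by uniqueness

```latex
\nu_A(\varphi) = \int_{\sigma(A)} \varphi(\lambda)\, dE(\lambda),
\qquad
\norm{\nu_A(\varphi)} \leq \sup_{\lambda\in\sigma(A)}\vert\varphi(\lambda)\vert
\leq \norm{\varphi}_{H^\infty(\mathbb{C}_+)},
```

where we use that $\sigma(A)\subset(0,\infty)$, on which any $\varphi\in{\mathcal C}(\mathbb{C}_+)$ is defined and continuous. The compatibility (\ref{main4}) follows from $\int_0^\infty b(t)e^{-t\lambda}\,dt = L_b(\lambda)$ and Fubini's theorem.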
Thus our main results (Theorem \ref{main1}, Corollary \ref{main3}) should be regarded as a way to obtain a ``good'' functional calculus for negative generators of bounded $C_0$-semigroups which do not admit a bounded $H^\infty(\mathbb{C}_+)$-functional calculus. \subsection{Note added in May 2022}\label{Added} A first version of this paper has circulated since the beginning of 2021. A few months later, together with Safoura Zadeh, we proved in \cite[Section 4]{ALZ} that the inclusion (\ref{MH-inclusion}) is actually an equality. Equivalently (see Theorem \ref{DualA0} above), we have $$ {\mathcal A}_0(\mathbb{R})^*\simeq \mathcal{M}(H^1(\R)). $$ The paper \cite{ALZ} also contains a new proof of Theorem \ref{main1} based on a description of the so-called $S^1$-bounded Fourier multipliers on $H^1(\mathbb{R})$ and on a tensor product estimate of independent interest, inspired by an old result of White \cite[Section 5]{white}. \section{Comparison with the Besov functional calculus}\label{Besov} In this section we compare the functional calculus constructed in Section \ref{CF} (Theorem \ref{main1} and Corollary \ref{main3}) with the Besov functional calculus from \cite[Subsection 5.5]{haa3} and \cite{bgt1}. We start with some background on the analytic homogeneous Besov space used in the latter paper. We refer to \cite[Section 6]{bgt1} for further details. Let $\psi\in \mathcal{S}(\R)$ be such that ${\rm Supp}(\psi)\subset \bigl[\frac12,2\bigr]$, $\psi(t)\geq 0$ for all $t\in\mathbb{R}$, and $\psi(t)+\psi\bigl(\frac{t}{2}\bigr)=1$ for all $t\in \bigl[1,2\bigr]$. For any $k\in\mathbb{Z}$, we let $\psi_k\in\mathcal{S}(\R)$ be defined by $\psi_k(t)=\psi(2^{-k}t)$, $t\in\mathbb{R}$. A key property of the sequence $(\psi_k)_{k\in\tiny\mathbb{Z}}$ is that for any $k_0\in\mathbb{Z}$, we have \begin{equation}\label{sum1} \forall\, t\in [2^{k_0},2^{k_0+1}):\qquad \psi_{k_0}(t)+\psi_{k_0+1}(t)=1\quad\hbox{and}\quad \psi_k(t)=0\ \hbox{if}\ k\notin\{k_0,k_0+1\}.
\end{equation} Next define $\phi_k={\mathcal F}^{-1}(\psi_k)$. It is plain that for any $k\in\mathbb{Z}$, \begin{equation}\label{phi-k} \phi_k\in H^1(\mathbb{R}) \qquad\hbox{and}\qquad \norm{\phi_k}_1 = \norm{\phi_0}_1. \end{equation} It follows that for any $F\in BUC(\mathbb{R})$ and any $k\in\mathbb{Z}$, $F\star \phi_k$ belongs to $BUC(\mathbb{R})\cap H^\infty(\mathbb{R})$. We define a Besov space $\mathcal{B}_0(\mathbb{R})$ by $$ \mathcal{B}_0(\mathbb{R})=\biggl\{F\in BUC(\mathbb{R})\, :\, \sum_{k\in\tiny{\mathbb{Z}}}\norm{F\star \phi_k}_\infty\,<\infty \ \hbox{ and }\ F=\sum_{k\in\tiny{\mathbb{Z}}} F\star \phi_k \biggr\}. $$ This is a Banach space for the norm $$ \norm{F}_{\mathcal{B}_0} = \sum_{k\in\tiny{\mathbb{Z}}}\norm{F\star \phi_k}_\infty. $$ This space is denoted by $\mathcal{B}_{dyad}$ in \cite[Section 6]{bgt1}. Next we set $\mathcal{B}_{00}(\mathbb{R})= \mathcal{B}_0(\mathbb{R})\cap C_0(\mathbb{R})$, equipped with the norm of $\mathcal{B}_0(\mathbb{R})$. Then $\mathcal{B}_{00}(\mathbb{R})$ is a closed subspace of $\mathcal{B}_0(\mathbb{R})$ and we clearly have $$ \mathcal{B}_{00}(\mathbb{R})\subset C_0(\mathbb{R})\cap H^\infty(\mathbb{R}) \qquad\hbox{and}\qquad \mathcal{B}_0(\mathbb{R})\subset BUC(\mathbb{R})\cap H^\infty(\mathbb{R}). $$ We wish to underline that the above definitions of $\mathcal{B}_0(\mathbb{R})$ and $\mathcal{B}_{00}(\mathbb{R})$ do not depend on the choice of the function $\psi$. More precisely if $\psi^{(1)},\psi^{(2)}$ are two functions as above and if we let $\mathcal{B}_0^{\psi^{(1)}}(\mathbb{R})$ and $\mathcal{B}_0^{\psi^{(2)}}(\mathbb{R})$ denote the associated spaces, then $\mathcal{B}_0^{\psi^{(1)}}(\mathbb{R})$ and $\mathcal{B}_0^{\psi^{(2)}}(\mathbb{R})$ coincide as vector spaces and the norms $\norm{\,\cdotp}_{\mathcal{B}_0^{\psi^{(1)}}}$ and $\norm{\,\cdotp}_{\mathcal{B}_0^{\psi^{(2)}}}$ are equivalent. We refer to \cite[Section 6]{bgt1} and the references therein for these properties.
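As an elementary illustration of these definitions (included only for orientation), consider $F\in BUC(\mathbb{R})$ whose Fourier transform (in the distributional sense) is supported in a single dyadic interval $[2^{k_0},2^{k_0+1})$ for some $k_0\in\mathbb{Z}$. Since $\mathcal{F}(F\star\phi_k)=\psi_k\,\mathcal{F}(F)$, it follows from \eqref{sum1} that $F\star\phi_k=0$ whenever $k\notin\{k_0,k_0+1\}$, hence $F\in\mathcal{B}_0(\mathbb{R})$ with

```latex
F = F\star\phi_{k_0} + F\star\phi_{k_0+1}
\qquad\hbox{and}\qquad
\norm{F}_{\mathcal{B}_0} \leq 2\norm{\phi_0}_1\norm{F}_\infty,
```

by Young's inequality and \eqref{phi-k}.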
Similarly to Subsection \ref{HP-Versions}, we introduce half-plane versions of $\mathcal{B}_0(\mathbb{R})$ and $\mathcal{B}_{00}(\mathbb{R})$, by setting $$ \mathcal{B}_{00}(\mathbb{C}_+) = \bigl\{\widetilde{F}\, :\, F\in \mathcal{B}_{00}(\mathbb{R})\bigr\} \qquad\hbox{and}\qquad \mathcal{B}_0(\mathbb{C}_+) = \bigl\{\widetilde{F}\, :\, F\in \mathcal{B}_0(\mathbb{R})\bigr\}. $$ According to \cite[Proposition 6.2]{bgt1}, the space $\mathcal{B}_0(\mathbb{C}_+)\subset H^\infty(\mathbb{C}_+)$ coincides with the space $\mathcal{B}_0$ considered by Batty-Gomilko-Tomilov in \cite[Subsection 2.2]{bgt1}. By \cite[Subsection 2.4]{bgt1}, we have \begin{equation}\label{LM} \bigl\{L_b\, :\, b\in L^1(\mathbb{R}_+)\bigr\}\,\subset\, \mathcal{B}_{00}(\mathbb{C}_+). \end{equation} Moreover Batty-Gomilko-Tomilov established the following remarkable functional calculus result. \begin{thm}\label{BGT} (\cite[Theorem 4.4]{bgt1}, \cite[Theorem 6.1]{bgt2}) Let $X$ be a Banach space, let $(T_t)_{t\geq 0}$ be a bounded $C_0$-semigroup on $X$ and let $-A$ denote its generator. The following are equivalent. \begin{itemize} \item [(i)] There exists a constant $K>0$ such that $$ \int_{-\infty}^{\infty}\bigl\vert \langle R(\beta +it, A)^2(x),x^*\rangle\bigr\vert\, dt \,\leq \,\frac{-K}{\beta}\,\norm{x}\norm{x^*} $$ for all $\beta<0$, all $x\in X$ and all $x^*\in X^*$. \item [(ii)] There exists a bounded homomorphism $\gamma_A\colon \mathcal{B}_0(\mathbb{C}_+)\to B(X)$ such that $$ \gamma_A(L_b) = \int_{0}^{\infty} b(t) T_t\, dt,\qquad b\in L^1(\mathbb{R}_+). $$ \end{itemize} In this case, $\gamma_A$ is unique. \end{thm} Condition (i) in Theorem \ref{BGT} goes back at least to \cite{gom} and \cite{shifeng}. In fact, condition (i) can be defined for any closed and densely defined operator $A$ satisfying (\ref{HP}) for $\omega=0$. Then it follows from \cite{gom,shifeng} that (i) actually implies that $-A$ generates a bounded $C_0$-semigroup on $X$. (See also \cite[Theorem 6.4]{bat-haa}.)
Conversely, if $X=H$ is a Hilbert space, it is proved in \cite{gom,shifeng} that if $-A$ generates a bounded $C_0$-semigroup, then $A$ satisfies (i). (The assumption that $X=H$ is a Hilbert space is crucial here, see the beginning of Section \ref{Banach} for more on this.) Thus if $(T_t)_{t\geq 0}$ is a bounded $C_0$-semigroup with generator $-A$ on a Hilbert space, then property (ii) in Theorem \ref{BGT} holds true. It is therefore natural to compare Corollary \ref{main3} with that property. This is the aim of the rest of this section. \begin{prop}\label{embeddingBesovA} We have \[ \mathcal{B}_0(\mathbb{C}_+)\subset \mathcal{A}(\mathbb{C}_+) \qquad\hbox{and}\qquad \mathcal{B}_{00}(\mathbb{C}_+)\subset \mathcal{A}_0(\mathbb{C}_+). \] Moreover there exists a constant $K>0$ such that $\norm{\varphi}_{\mathcal{A}}\leq K\norm{\varphi}_{\mathcal{B}_0}$ for any $\varphi\in \mathcal{B}_0(\mathbb{C}_+)$. \end{prop} \begin{proof} It follows from (\ref{sum1}) that \begin{equation}\label{sum2} \psi_k=\psi_k(\psi_{k-1}+\psi_k+\psi_{k+1}),\qquad k\in\mathbb{Z}. \end{equation} Consequently, $$ \phi_k=\phi_k\star\bigl(\phi_{k-1}+\phi_{k}+\phi_{k+1}\bigr), \qquad k\in\mathbb{Z}. $$ Let $F\in \mathcal{B}_0(\mathbb{R})$. Applying the above identity, we have $$ F = \sum_{k\in\tiny\mathbb{Z}} F\star \phi_k \star \bigl(\phi_{k-1}+\phi_{k}+\phi_{k+1}\bigr). $$ Appealing to (\ref{phi-k}), we observe that $F\star \phi_k\in BUC(\mathbb{R})$ and $\phi_{k-1}+\phi_{k}+\phi_{k+1}\in H^1(\mathbb{R})$ for each $k\in\mathbb{Z}$, and that $$ \sum_{k\in\tiny\mathbb{Z}} \norm{F\star \phi_k}_\infty \norm{\phi_{k-1}+\phi_{k}+\phi_{k+1}}_1\,\leq 3\norm{\phi_0}_1\norm{F}_{\mathcal{B}_0}. $$ This shows that $F\in\mathcal{A}(\mathbb{R})$, with $$ \norm{F}_\mathcal{A}\leq 3\norm{\phi_0}_1\norm{F}_{\mathcal{B}_0}. $$ This yields $\mathcal{B}_0(\mathbb{C}_+)\subset \mathcal{A}(\mathbb{C}_+)$. The above argument also shows that $\mathcal{B}_{00}(\mathbb{C}_+)\subset \mathcal{A}_0(\mathbb{C}_+)$.
\end{proof} Let $H$ be a Hilbert space and let $A$ be the negative generator of a bounded $C_0$-semigroup on $H$. We already noticed that $A$ satisfies property (ii) in Theorem \ref{BGT}. According to Proposition \ref{embeddingBesovA} and (\ref{main2}), the functional calculus $\rho_A\colon\mathcal{A}(\mathbb{C}_+)\to B(H)$ from Corollary \ref{main3} extends the functional calculus $\gamma_A\colon\mathcal{B}_0(\mathbb{C}_+)\to B(H)$. It turns out that the extension from $\gamma_A$ to $\rho_A$ is an actual improvement, because of the following result. \begin{thm}\label{AdifferentB0} We have \[ \mathcal{B}_0(\mathbb{C}_+)\not= \mathcal{A}(\mathbb{C}_+) \qquad\hbox{and}\qquad \mathcal{B}_{00}(\mathbb{C}_+)\not= \mathcal{A}_0(\mathbb{C}_+). \] \end{thm} We need some preparation before coming to the proof. We use an idea from \cite[Paragraph 2.6.4]{triebel}. First for the definition of the Besov space $\mathcal{B}_0(\mathbb{R})$, we make the additional assumption that $\psi(t)=1$ for any $t\in\bigl[\frac34,1\bigr]$. This is allowed by the aforementioned fact that the definition of $\mathcal{B}_0(\mathbb{R})$ does not depend on $\psi$. This implies that ${\rm Supp}(\psi)\subset \bigl[\frac12,\frac32\bigr]$. Second we fix a non-zero function $f_0\in\mathcal{S}(\R)$ such that ${\rm Supp}(f_0)\subset \bigl[\frac34,1\bigr]$. Next for any integer $n\geq 0$, we set $N_n=2^n-1$ and $f_n = \tau_{N_n} f_0 = f_0(\,\cdotp -N_n)$. By construction, ${\rm Supp}(\psi_k)\subset \bigl[2^{k-1},\frac32 2^k\bigr]$ for all $k\in\mathbb{Z}$ and ${\rm Supp}(f_n)\subset \bigl[2^n-\frac14,2^n\bigr]$ for all $n\geq 0$. We derive that \begin{equation}\label{product} \forall\, k\geq 0 :\qquad f_k\psi_k = f_k \quad\hbox{and}\quad f_n\psi_k=0\ \hbox{if}\ n\not=k, \end{equation} as well as \begin{equation}\label{product2} \forall\, n,n'\geq 0 : \qquad f_n f_{n'}=0\ \hbox{if}\ n\not=n'.
\end{equation} \begin{lem}\label{lemBnoeqA} There exists a bounded continuous function $m \colon \mathbb{R}_+^*\to\mathbb{C}$ such that \begin{equation}\label{eqB1infty} \underset{k\in \mathbb{Z}}\sup \norm{\widehat{m\psi_k}}_1 < \infty \end{equation} and the mapping $T_m\colon H^2(\mathbb{R})\to H^2(\mathbb{R})$ does not belong to $\mathcal{M}(H^1(\R))$. \end{lem} \begin{proof} Using the definitions preceding the lemma, we set $$ m(t) =\sum_{n=0}^\infty e^{iN_n t}f_n(t),\qquad t>0. $$ At most one term of this sum is nonzero, hence $m$ is well-defined and $m\in C_b(\mathbb{R}_+^*)$. Let $k\geq 0$. According to (\ref{product}), we have $m\psi_k = e^{iN_k\,\cdotp} f_k$, hence $$ \norm{\widehat{m\psi_k}}_1 = \norm{\widehat{f_k}(\cdotp - N_k)}_1 =\norm{\widehat{f_k}}_1 =\norm{\widehat{f_0}}_1. $$ Since $m\psi_k=0$ if $k<0$, this shows (\ref{eqB1infty}). Define $$ g_N = {\mathcal F}^{-1}\Bigl( \sum_{n=0}^N e^{-iN_n\,\cdotp}f_n\Bigr) $$ for all $N\geq 0$. Then $g_N\in \mathcal{S}(\R)\cap H^1(\mathbb{R})$ hence $g_N\in H^p(\mathbb{R})$ for any $1\leq p\leq\infty$. Let us estimate its $L^p$-norm. On the one hand, we have $$ \norm{g_N}_1\leq \sum_{n=0}^N \bignorm{{\mathcal F}^{-1}(e^{-iN_n\,\cdotp}f_n)}_1 = \sum_{n=0}^N \bignorm{\bigl[{\mathcal F}^{-1}(f_n)\bigr](\,\cdotp -N_n)}_1 = \sum_{n=0}^N \bignorm{{\mathcal F}^{-1}(f_n)}_1, $$ hence $$ \norm{g_N}_1\leq (N+1)\norm{{\mathcal F}^{-1}(f_0)}_1. $$ On the other hand, for any $t\in\mathbb{R}$, we have $$ g_N(t) = \sum_{n=0}^N {\mathcal F}^{-1}(f_n)(t-N_n), $$ hence $$ \vert g_N(t)\vert \leq \sum_{n=0}^N \vert g_0(t-N_n)\vert. $$ Since $g_0=\mathcal{F}^{-1}(f_0)\in\mathcal{S}(\R)$ we infer that $$ \sup_{N\geq 0}\norm{g_N}_\infty\,<\infty. $$ For any $1<p<\infty$, we have $\norm{g_N}_p\leq\norm{g_N}_1^{\frac{1}{p}} \norm{g_N}_\infty^{1-\frac{1}{p}}$, hence the above estimates imply the existence of a constant $K>0$ such that \begin{equation}\label{Np} \norm{g_N}_p\leq K N^{\frac{1}{p}},\qquad N\geq 1.
\end{equation} By (\ref{product2}), we have $$ m\widehat{g_N} = \Bigl(\sum_{n=0}^\infty e^{iN_n\,\cdotp}f_n \Bigr)\Bigl( \sum_{n=0}^N e^{-iN_n\,\cdotp}f_n\Bigr) = \sum_{n=0}^N f_n^2. $$ For any $n\geq 0$, ${\rm Supp}(f_n^2)\subset [2^{n-1},2^n]$ hence by \cite[Theorem 5.1.5]{gra}, we have an estimate $$ \bignorm{{\mathcal F}^{-1}(m\widehat{g_N})}_p\approx \Bignorm{\Bigl(\sum_{n=0}^N \vert {\mathcal F}^{-1}(f_n^2)\vert^2\Bigr)^\frac12}_p. $$ Further, $f_n^2 = f_0^2(\,\cdotp -N_n)$ hence $\vert {\mathcal F}^{-1}(f_n^2)\vert=\vert {\mathcal F}^{-1}(f_0^2)\vert$ for any $n\geq 0$. Consequently, $$ \Bigl(\sum_{n=0}^N \vert {\mathcal F}^{-1}(f_n^2)\vert^2\Bigr)^\frac12 =(N+1)^\frac12\vert {\mathcal F}^{-1}(f_0^2)\vert. $$ Thus we have $$ \bignorm{{\mathcal F}^{-1}(m\widehat{g_N})}_p\approx N^\frac12. $$ Comparing with (\ref{Np}) we deduce that if $2<p<\infty$, then $T_m\colon H^2(\mathbb{R})\to H^2(\mathbb{R})$ is not a bounded Fourier multiplier on $H^p(\mathbb{R})$. By Lemma \ref{noMpisnotM1}, we deduce that $T_m\notin \mathcal{M}(H^1(\R))$. \end{proof} \begin{proof}[Proof of Theorem \ref{AdifferentB0}] If $\mathcal{A}(\mathbb{C}_+)$ were equal to $\mathcal{B}_0(\mathbb{C}_+)$, we would have $\mathcal{A}_0(\mathbb{C}_+)=\mathcal{B}_{00}(\mathbb{C}_+)$, which in turn is equivalent to $\mathcal{A}_0(\mathbb{R}) =\mathcal{B}_{00}(\mathbb{R})$. So it suffices to show that this equality fails. Let us assume, by contradiction, that $\mathcal{A}_0(\mathbb{R}) =\mathcal{B}_{00}(\mathbb{R})$. Let $m$ be given by Lemma \ref{lemBnoeqA}. Let $b\in L^1(\mathbb{R}_+)$. For any $k\in\mathbb{Z}$, we have $$ \psi_kb = \,\frac{1}{2\pi}\,{\mathcal F}\bigl( \phi_k\star \widehat{b}(-\,\cdotp)\bigr).
$$ Hence using (\ref{sum1}), (\ref{sum2}) and Lemma \ref{Tool}, we have \begin{align*} \int_{-\infty}^{\infty} m(t)b(t)\,dt & = \sum_{k\in\mathbb{Z}} \int_{-\infty}^{\infty} m(t)\psi_k(t)b(t)\,dt\\ &= \sum_{k\in \mathbb{Z}} \int_{-\infty}^{\infty} \bigl(\psi_{k-1}(t)+\psi_k(t)+\psi_{k+1}(t)\bigr) m(t)\psi_k(t)b(t)\,dt \\ &=\,\frac{1}{2\pi}\,\sum_{k\in \mathbb{Z}} \int_{-\infty}^\infty \bigl[\mathcal{F}\bigl((\psi_{k-1}+\psi_k+\psi_{k+1})m\bigr)\bigr](u) \bigl[\mathcal{F}(\psi_k b)\bigr](-u)\, du \\ &= \,\sum_{k\in \mathbb{Z}} \int_{-\infty}^\infty \bigl[\mathcal{F}\bigl((\psi_{k-1}+\psi_k+\psi_{k+1})m\bigr)\bigr](u) \bigl[\phi_k\star\widehat{b}(-\,\cdotp)\bigr](u)\, du. \end{align*} Therefore, $$ \Bigl\vert \int_{-\infty}^\infty m(t)b(t)\,dt\,\Bigr\vert\,\leq\, \sum_{k\in \mathbb{Z}} \bignorm{\mathcal{F}\bigl((\psi_{k-1}+\psi_k+\psi_{k+1})m\bigr)}_1 \bignorm{\phi_k\star\widehat{b}(-\,\cdotp)}_\infty. $$ Applying \eqref{eqB1infty}, we deduce the existence of a constant $K>0$ such that $$ \Bigl\vert \int_{-\infty}^\infty m(t)b(t)\,dt\,\Bigr\vert\,\leq\, K \sum_{k\in \mathbb{Z}} \bignorm{\phi_k\star\widehat{b}(-\,\cdotp)}_\infty = K\norm{\widehat{b}(-\,\cdotp)}_{\mathcal{B}_0}. $$ Therefore there exists $\eta\in\mathcal{B}_{00}(\mathbb{R})^*$ such that $$ \langle\eta,\widehat{b}(-\,\cdotp)\rangle = \int_{-\infty}^\infty m(t)b(t)\,dt,\qquad b\in L^1(\mathbb{R}_+). $$ By assumption, $\eta\in\mathcal{A}_0(\mathbb{R})^*$. Applying Theorem \ref{DualA0}, let $T\in\mathcal{M}(H^1(\R))$ be associated to $\eta$ and let $m_0\in C_b(\mathbb{R}_+^*)$ be the symbol of $T$. Then by Remark \ref{DualA0+}, we have $$ \langle\eta,\widehat{b}(-\,\cdotp)\rangle = \int_{-\infty}^\infty m_0(t)b(t)\,dt,\qquad b\in L^1(\mathbb{R}_+). $$ We deduce that $m_0=m$, and this contradicts the fact that $T_m\notin\mathcal{M}(H^1(\R))$. \end{proof} We conclude this section with a series of remarks.
\begin{rq1}\label{narrow} Let $A$ be as in Subsections \ref{MAIN} and \ref{FCA}. Let $\mu\in M(\mathbb{R}_+)$, with $\mu(\{0\})=0$. According to \cite[Subsection 2.2 and Proposition 6.2]{bgt1}, its Laplace transform $L_\mu\colon\mathbb{C}_+\to\mathbb{C}$ belongs to $\mathcal{B}_0(\mathbb{C}_+)$. Hence $L_\mu$ belongs to $\mathcal{A}(\mathbb{C}_+)$, by Proposition \ref{embeddingBesovA}. The argument in the proof of Corollary \ref{main3} shows that $\rho_{A}(L_\mu)$ is the strong limit of $\rho_{0,A}(L_\mu\widetilde{G_N})$ when $N\to\infty$. Define $c_N(t)= Ne^{-Nt}$ for any $t> 0$ and recall that $\widetilde{G_N} = L_{c_N}$. Then $L_\mu \widetilde{G_N} = L_{\mu\star c_N}$ for any $N\geq 1$. Further $\mu\star c_N\to \mu$ narrowly when $N\to\infty$. It therefore follows from (\ref{main2}) that $$ [\rho_A(L_\mu)](x) = \,\int_{\mathbb{R}_+} T_t(x)\,d\mu(t),\qquad x\in H. $$ \end{rq1} \begin{rq1}\label{D} Let ${\mathcal D}\subset H^1(\mathbb{R})$ be the space of all $h\in H^1(\mathbb{R})$ such that ${\rm Supp}(\widehat{h})$ is a compact subset of $\mathbb{R}_+^*$. It is well-known that ${\mathcal D}$ is dense in $H^1(\mathbb{R})$. To check this, take any $h\in H^1(\mathbb{R})$ and recall that there exist $v,w \in H^2(\mathbb{R})$ such that $h=wv$. Let $(d_n)_{n\in\mathbb{N}}$ and $(c_n)_{n\in\mathbb{N}}$ be sequences in $C_b(\mathbb{R}_+^*)$ with compact support such that $d_n\to \widehat{w}$ and $c_n\to \widehat{v}$ in $L^2(\mathbb{R}_+)$. Then $\mathcal{F}^{-1}(d_n)\to w$ and $\mathcal{F}^{-1}(c_n)\to v$ in $H^2(\mathbb{R})$, hence $\mathcal{F}^{-1}(d_n)\mathcal{F}^{-1}(c_n)\to h$ in $H^1(\mathbb{R})$. Now it is easy to see that $\mathcal{F}^{-1}(d_n)\mathcal{F}^{-1}(c_n)$ belongs to ${\mathcal D}$ for any $n\in\mathbb{N}$. Let $BUC\star {\mathcal D}\subset \mathcal{A}(\mathbb{R})$ be the linear span of the functions $f\star h$, for $f\in BUC(\mathbb{R})$ and $h\in {\mathcal D}$. It follows from above that this is a dense subspace of $\mathcal{A}(\mathbb{R})$.
Let ${\mathcal G}\subset H^\infty(\mathbb{R})$ be the space of all $F\in H^\infty(\mathbb{R})$ such that ${\rm Supp}(\widehat{F})$ is a compact subset of $\mathbb{R}_+^*$. Then we have $$ BUC\star {\mathcal D}\subset {\mathcal G}\subset \mathcal{B}_0(\mathbb{R}). $$ The first inclusion is obvious and the second one is given by \cite[Lemma 2.4]{bgt1}. It follows that $\mathcal{B}_0(\mathbb{R})$ is dense in $\mathcal{A}(\mathbb{R})$, or equivalently that $\mathcal{B}_0(\mathbb{C}_+)$ is dense in $\mathcal{A}(\mathbb{C}_+)$. Also $\mathcal{B}_{00}(\mathbb{C}_+)$ is dense in $\mathcal{A}_{0}(\mathbb{C}_+)$, by (\ref{LM}) and Lemma \ref{Laplace}. \end{rq1} \begin{rq1}\label{nonunital} It follows from \cite[Subsection 2.2]{bgt1} that for any $\varphi\in\mathcal{B}_0(\mathbb{C}_+)$, $\lim_{y\to\infty} \varphi(y)=0$, where the limit is taken for $y$ going to $\infty$ along the real axis. We noticed in Remark \ref{D} that $\mathcal{B}_0(\mathbb{C}_+)$ is dense in $\mathcal{A}(\mathbb{C}_+)$. Since $\norm{\,\cdotp}_{H^\infty(\mathbb{C}_+)}\leq \norm{\,\cdotp}_{\mathcal{A}(\mathbb{C}_+)}$, this implies that any element of $\mathcal{A}(\mathbb{C}_+)$ is the uniform limit of a sequence of elements of $\mathcal{B}_0(\mathbb{C}_+)$. Consequently, $\lim_{y\to\infty} \varphi(y)=0$ for any $\varphi\in\mathcal{A}(\mathbb{C}_+)$. Thus the algebra $\mathcal{A}(\mathbb{C}_+)$ (equivalently, the algebra $\mathcal{A}(\mathbb{R})$) does not contain any nonzero constant function and hence is not unital. \end{rq1} \begin{rq1}\label{open} The following problem is open: Let $-A$ be the generator of a bounded $C_0$-semigroup on a Hilbert space. Is the Cayley transform $V=(A-I_H)(A+I_H)^{-1}$ power bounded? This problem is discussed in \cite[Section 5.5]{bgt1}, to which we refer for information. Let $v\in H^\infty(\mathbb{C}_+)$ be defined by $v(z)=(z-1)(z+1)^{-1}$.
It follows from Remark \ref{Rational} that for any integer $n\geq 1$, the function $\varphi_n\colon\mathbb{C}_+\to\mathbb{C}$ defined by $\varphi_n(z) = v(z)^n - (-1)^n$ belongs to $\mathcal{A}_0(\mathbb{C}_+)$ and $$ V^n= (-1)^n I_H +\rho_{0,A}(\varphi_n). $$ Hence $\norme{V^n} = O\bigl(\norme{\varphi_n}_{\mathcal{A}_0}\bigr)$. Therefore it would be interesting to determine the behaviour of $\norme{\varphi_n}_{\mathcal{A}_0}$. It is shown in \cite[Section 5.1]{bgt2} that $\norme{\varphi_n}_{\mathcal{B}_0}\asymp \log(n)$. We do not know if the asymptotic behaviour of $\norme{\varphi_n}_{\mathcal{A}_0}$ differs from that of $\norme{\varphi_n}_{\mathcal{B}_0}$. \end{rq1} \section{$\gamma$-Bounded semigroups on Banach spaces}\label{Banach} In general, Theorem \ref{main1} and Corollary \ref{main3} do not hold true if $H$ is replaced by an arbitrary Banach space. Indeed it is shown in \cite[Corollary 6.7]{bgt2} that the translation semigroup $(T_t)_{t\geq 0}$ on $L^p(\mathbb{R})$, for $1\leq p\not=2<\infty$, does not satisfy condition (i) in Theorem \ref{BGT}. Hence by the latter theorem and Proposition \ref{embeddingBesovA}, the mapping $$ L_b\mapsto \int_{0}^{\infty} b(t)T_t\, dt\,,\qquad b\in L^1(\mathbb{R}_+), $$ is not bounded with respect to the $\mathcal{A}_0(\mathbb{C}_+)$-norm. In this section we will, however, establish versions of Theorem \ref{main1} and Corollary \ref{main3} on Banach spaces, involving $\gamma$-boundedness. We start with some background and basic facts on this topic and refer to \cite[Chapter 9]{hnvw} for details and more information. Let $X$ be a Banach space. Let $(\gamma_n)_{n \geq 1}$ be a sequence of independent complex-valued standard Gaussian variables on some probability space $\Sigma$ and let $G_0\subset L^2(\Sigma)$ be the linear span of the $\gamma_n$.
We denote by $G(X)$ the closure of \[ G_0\otimes X = \Bigl\{ \sum_{k=1}^N \gamma_k \otimes x_k\, :\, x_k \in X, \, N \in \mathbb{N} \Bigr\} \] in the Bochner space $L^2(\Sigma;X)$, equipped with the induced norm. Next we let $G'(X^*)$ denote the closure of $G_0\otimes X^*$ in the dual space $G(X)^*$. A bounded set $\mathcal{T}\subset B(X)$ is called $\gamma$-bounded if there exists a constant $C \geq 0 $ such that for all finite sequences $(S_k)_{k=1}^{N} \subset \mathcal{T}$ and $(x_k)_{k=1}^{N} \subset X$, we have: \begin{equation}\label{Rboundedness} \Bignorm{\sum_{k=1}^N \gamma_k\otimes S_k(x_k)}_{G(X)} \leq C\Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k}_{G(X)}. \end{equation} The least admissible constant $C$ in the above inequality is called the $\gamma$-bound of $\mathcal{T}$ and is denoted by $\gamma(\mathcal{T})$. Let $Z$ be any Banach space and let ${\rm Ball}(Z)$ denote its closed unit ball. A bounded operator $\rho\colon Z\to B(X)$ is called $\gamma$-bounded if the set $\rho({\rm Ball}(Z))\subset B(X)$ is $\gamma$-bounded. In this case we set $\gamma(\rho) = \gamma(\rho({\rm Ball}(Z)))$. We now turn to the definition of $\gamma$-spaces, which goes back to the paper \cite{kal-wei1} (which began to circulate 20 years ago). Let $H$ be a Hilbert space. A bounded operator $T \colon H \rightarrow X$ is called $\gamma$-summing if \[ \norme{T}_{\gamma} := \sup\Bigl\{ \Bignorm{\sum_{k=1}^{N}\gamma_k\otimes T(e_k)}_{G(X)}\Bigr\} < \infty, \] where the supremum is taken over all finite orthonormal systems $(e_k)_{k=1}^{N}$ in $H$. We let $\gamma_{\infty}(H;X)$ denote the space of all $\gamma$-summing operators and we endow it with the norm $\norme{\,\cdotp}_{\gamma}$. Then $\gamma_{\infty}(H;X)$ is a Banach space. Any finite rank bounded operator is $\gamma$-summing.
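When $X$ is itself a Hilbert space, the Gaussian sums above can be computed exactly: independence and $\mathbb{E}\vert\gamma_k\vert^2=1$ give $\norm{\sum_k\gamma_k\otimes x_k}_{G(X)}^2=\sum_k\norm{x_k}^2$, so that in this case every uniformly bounded set of operators is $\gamma$-bounded. The following short numpy sketch is a numerical illustration we add here (it is not part of the text, and all variable names are ours); it checks this identity by Monte Carlo for $X=\mathbb{C}^d$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_terms, n_mc = 4, 6, 200_000   # dim of X = C^d, number of terms, Monte Carlo size

# vectors x_1, ..., x_n in X
xs = rng.standard_normal((n_terms, d)) + 1j * rng.standard_normal((n_terms, d))

# independent standard complex Gaussians gamma_k, normalized so that E|gamma_k|^2 = 1
g = (rng.standard_normal((n_mc, n_terms))
     + 1j * rng.standard_normal((n_mc, n_terms))) / np.sqrt(2.0)

# Monte Carlo estimate of || sum_k gamma_k (x) x_k ||_{G(X)}^2 = E || sum_k gamma_k x_k ||^2
samples = g @ xs                                    # each row is one realization of the sum
lhs = np.mean(np.sum(np.abs(samples) ** 2, axis=1))

# exact value in the Hilbert space case: sum_k ||x_k||^2
rhs = float(np.sum(np.abs(xs) ** 2))
print(lhs, rhs)   # the two values agree up to Monte Carlo error
```

Applying a family of operators $S_k$ with $\norm{S_k}\leq C$ to the $x_k$ multiplies the right-hand side by at most $C^2$, which is the Hilbert-space case of inequality (\ref{Rboundedness}); on general Banach spaces no such exact identity is available, which is why $\gamma$-boundedness is a genuine restriction there.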
We let $$ \gamma(H;X)\subset \gamma_{\infty}(H;X) $$ denote the closure of the space of finite rank bounded operators in $\gamma_{\infty}(H;X)$. In the sequel, finite rank bounded operators are represented by the algebraic tensor product $H^*\otimes X$ in the usual way. Following \cite[Section 5]{kal-wei1}, we let $\gamma'_+(H^*;X^*)$ be the space of all bounded operators $S\colon H^*\to X^*$ such that $$ \norme{S}_{\gamma'} := \sup\bigl\{ \vert {\rm tr}(T^*S)\vert\, \big\vert \, T\colon H\to X,\, {\rm rank}(T)<\infty,\,\norm{T}_\gamma\leq 1\bigr\}\,<\infty. $$ Then $\norme{\,\cdotp}_{\gamma'}$ is a norm on $\gamma'_+(H^*;X^*)$ and according to \cite[Proposition 5.1]{kal-wei1}, we have \begin{equation}\label{Dual-gamma} \gamma'_+(H^*;X^*)\,=\,\gamma(H;X)^* \end{equation} isometrically, through the duality pairing $$ (S,T)\mapsto {\rm tr}(T^*S),\qquad T\in \gamma(H;X),\ S\in \gamma'_+(H^*;X^*). $$ We will focus on the case when $H$ is an $L^2$-space. Let $(\Omega,\mu)$ be a $\sigma$-finite measure space. We identify $L^2(\Omega)^*$ and $L^2(\Omega)$ in the usual way. A function $\xi\colon\Omega \rightarrow X$ is called weakly-$L^2$ if for each $x^* \in X^*$, the function $\langle x^*, \xi(\,\cdotp) \rangle$ belongs to $L^2(\Omega)$. Then the operator $x^*\mapsto \langle x^*, \xi(\,\cdotp) \rangle$ from $X^*$ into $L^2(\Omega)$ is bounded. If $\xi$ is both measurable and weakly-$L^2$, then its adjoint takes values in $X$ and we let $\mathbb{I}_\xi \colon L^2(\Omega)\to X$ denote the resulting operator. More explicitly, $$ \langle x^*, \mathbb{I}_\xi (g)\rangle= \int_\Omega g(t)\langle x^*, \xi(t) \rangle\,d\mu(t)\, , \qquad g\in L^2(\Omega),\, x^*\in X^*. $$ We let $\gamma(\Omega;X)$ be the space of all measurable and weakly-$L^2$ functions $\xi\colon\Omega \rightarrow X$ such that $\mathbb{I}_\xi$ belongs to $\gamma(L^2(\Omega);X)$, and we write $\norme{\xi}_{\gamma} = \norme{\mathbb{I}_\xi}_{\gamma}$ for any such function. 
Likewise a function $\zeta\colon\Omega \rightarrow X^*$ is called weakly$^*$-$L^2$ if for each $x \in X$, the function $\langle \zeta(\,\cdotp),x \rangle$ belongs to $L^2(\Omega)$. In this case, the operator $x\mapsto\langle \zeta(\,\cdotp),x \rangle$ from $X$ into $L^2(\Omega)$ is bounded and we let $\mathbb{I}_\zeta \colon L^2(\Omega)\to X^*$ denote its adjoint. We let $\gamma'_+(\Omega;X^*)$ be the space of all weakly$^*$-$L^2$ functions $\zeta\colon\Omega \rightarrow X^*$ such that $\mathbb{I}_\zeta$ belongs to $\gamma'_+(L^2(\Omega);X^*)$, and we write $\norme{\zeta}_{\gamma'} = \norme{\mathbb{I}_\zeta}_{\gamma'}$ for any such function. Note that our space $\gamma'_+(\Omega;X^*)$ is a priori bigger than the one from \cite[Definition 4.5]{kal-wei1}, where only measurable functions $\Omega\to X^*$ are considered. \begin{lem}\label{Integral} For any $\xi\in\gamma(\Omega;X)$ and any $\zeta\in \gamma'_+(\Omega;X^*)$, the function $t\mapsto \langle\zeta(t),\xi(t)\rangle$ belongs to $L^1(\Omega)$ and in the duality (\ref{Dual-gamma}), we have $$ \langle \mathbb{I}_\zeta,\mathbb{I}_\xi\rangle =\,\int_\Omega \langle\zeta(t),\xi(t)\rangle\, d\mu(t). $$ Moreover $$ \int_\Omega \vert\langle\zeta(t),\xi(t)\rangle\vert \,d\mu(t)\,\leq \norm{\xi}_\gamma\norm{\zeta}_{\gamma'}. $$ \end{lem} If we consider measurable functions $\zeta\colon \Omega\to X^*$ only, the above statement is provided by \cite[Corollary 5.5]{kal-wei1}. The fact that it holds as well in the more general setting of the present paper follows from the proof of \cite[Theorem 9.2.14]{hnvw}. The main result of this section is the following. \begin{thm}\label{main5} Let $(T_t)_{t\geq 0}$ be a bounded $C_0$-semigroup on $X$ and let $A$ be its negative generator. The following assertions are equivalent.
\begin{itemize} \item [(i)] The semigroup $(T_t)_{t\geq 0}$ is $\gamma$-bounded, that is, the set ${\mathcal T}_A = \{T_t\, :\, t\geq 0\}$ is $\gamma$-bounded; \item [(ii)] There exists a $\gamma$-bounded homomorphism $\rho_{0,A} \colon\mathcal{A}_0(\mathbb{C}_+)\to B(X)$ such that (\ref{main2}) holds true for all $b\in L^1(\mathbb{R}_+)$. \end{itemize} In this case, $\rho_{0,A}$ is unique and $\gamma({\mathcal T}_A) \leq \gamma(\rho_{0,A}) \leq \gamma({\mathcal T}_A)^2$. Further there exists a unique bounded homomorphism $\rho_{A}\colon \mathcal{A}(\mathbb{C}_+)\to B(X)$ extending $\rho_{0,A}$; this homomorphism is $\gamma$-bounded and $\gamma(\rho_{A}) =\gamma(\rho_{0,A})$. \end{thm} A thorough look at the proofs of Theorem \ref{main1} and Corollary \ref{main3} reveals that in Subsections \ref{MAIN} and \ref{FCA}, the Hilbertian structure was used only in Lemma \ref{key}. So, unsurprisingly, the main point in proving Theorem \ref{main5} is the following $\gamma$-bounded version of Lemma \ref{key}. \begin{lem}\label{key2} Let $(T_t)_{t\geq 0}$ be a $\gamma$-bounded $C_0$-semigroup on $X$ and let $A$ be its negative generator. Let $C= \gamma({\mathcal T}_A)$. Then the set \begin{equation}\label{Theset} \Bigl\{\Gamma\bigl(A,(2\pi)^{-1}\widehat{f}\widehat{wv}\bigr)\, :\, f\in C_{00}(\mathbb{R}),\, w,v\in H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R}),\, \max\{\norm{f}_\infty,\norm{w}_2,\norm{v}_2\}\leq 1\Bigr\} \end{equation} is $\gamma$-bounded, with $\gamma$-bound $\leq C^2$. \end{lem} \begin{proof} Let $N\in\mathbb{N}$ and let $f_1,\ldots, f_N\in C_{00}(\mathbb{R})$ and $w_1,\ldots,w_N,v_1,\ldots,v_N \in H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})$ be such that $\norm{f_k}_\infty\leq 1$, $\norm{w_k}_2\leq 1$ and $\norm{v_k}_2\leq 1$ for any $k=1,\ldots, N$. We set $$ S_k= \Gamma\bigl(A,(2\pi)^{-1}\widehat{f_k}\widehat{w_kv_k}\bigr), \qquad k=1,\ldots, N. $$ Let $x_1,\ldots,x_N\in X$ and $x_1^*,\ldots,x_N^*\in X^*$.
Following the notation in the proof of Lemma \ref{key}, we define, for any $k=1,\ldots, N$, two strongly continuous functions $W_k,V_k\colon \mathbb{R}\to B(X)$ by $$ W_k(s) = \int_{0}^{\infty} \widehat{w_k}(r)e^{-irs}T_r\,dr \qquad\hbox{and}\qquad V_k(s) = \int_{0}^{\infty} \widehat{v_k}(t)e^{-its}T_t\,dt, \qquad s\in\mathbb{R}. $$ According to (\ref{equaforgammbound}), $$ \sum_{k=1}^{N} \langle S_k(x_k),x_k^*\rangle\, =\, \frac{1}{4\pi^2}\,\sum_{k=1}^{N} \int_{-\infty}^\infty f_k(s) \langle W_k(s) x_k, V_k(s)^*x_k^* \rangle\,ds, $$ hence $$ \Bigl\vert \sum_{k=1}^{N} \langle S_k(x_k),x_k^*\rangle\Bigr\vert \,\leq\, \frac{1}{4\pi^2}\,\sum_{k=1}^{N} \int_{-\infty}^\infty \bigl\vert \langle W_k(s) x_k, V_k(s)^*x_k^* \rangle \bigr\vert\, ds. $$ We let $\mathbb{N}_N=\{1,\ldots,N\}$ for convenience. We will use $\gamma$-spaces on either $\mathbb{R}$ or $\mathbb{N}_N\times\mathbb{R}$. For any $k=1,\ldots,N$, the function $$ \alpha_k := W_k(\,\cdotp)x_k\colon \mathbb{R}\longrightarrow X $$ is measurable and weakly-$L^2$. Likewise, $$ \beta_k := V_k(\,\cdotp)^*x_k^*\colon \mathbb{R}\longrightarrow X^* $$ is weakly$^*$-$L^2$. If we are able to show that $\alpha_k\in \gamma(\mathbb{R};X)$ and $\beta_k\in \gamma'_+(\mathbb{R};X^*)$ for any $k=1,\ldots,N$, then Lemma \ref{Integral} ensures that \begin{equation}\label{4pi2} \Bigl\vert \sum_{k=1}^{N} \langle S_k(x_k),x_k^*\rangle\Bigr\vert \,\leq\, \frac{1}{4\pi^2}\, \bignorm{(k,s)\mapsto W_k(s)x_k}_{\gamma(\mathbb{N}_N\times\mathbb{R};X)} \bignorm{(k,s)\mapsto V_k^*(s)x_k^*}_{\gamma'(\mathbb{N}_N\times\mathbb{R};X^*)}. \end{equation} Our aim is now to check that $\alpha_k\in \gamma(\mathbb{R};X)$ and $\beta_k\in \gamma'_+(\mathbb{R};X^*)$ for any $k$ and to estimate the right-hand side of (\ref{4pi2}). By assumption, ${\mathcal T}_A=\{T_t\, :\, t\geq 0\}$ is $\gamma$-bounded.
According to the Multiplier Theorem stated as \cite[Theorem 6.1]{haa-roz}, there exists a bounded operator $$ M\colon \gamma(L^2(\mathbb{R});X)\longrightarrow \gamma(L^2(\mathbb{R});X) $$ with norm $\leq C = \gamma({\mathcal T}_A)$, mapping $\gamma(\mathbb{R};X)$ into itself, and such that for any $\xi\in\gamma(\mathbb{R};X)$, $[M(\xi)](t) = T_t(\xi(t))$ if $t\geq 0$, and $[M(\xi)](t) = 0$ if $t< 0$. Further by the Extension Theorem stated as \cite[Theorem 9.6.1]{hnvw}, ${\mathcal F}\otimes I_X\colon L^2(\mathbb{R})\otimes X\to L^2(\mathbb{R})\otimes X$ admits a (necessarily unique) bounded extension $$ \Psi\colon \gamma(L^2(\mathbb{R});X)\longrightarrow \gamma(L^2(\mathbb{R});X), $$ with norm $\leq \sqrt{2\pi}$. According to \cite[Lemma 2.19]{arn1}, $\mathbb{I}_{\alpha_k} = (\Psi\circ M)(\widehat{w_k}\otimes x_k)$ for any $k=1,\ldots,N$. This shows that $\alpha_k\in \gamma(\mathbb{R};X)$. Let $(e_k)_{k=1}^N$ be the canonical basis of $\ell^2_N$. It follows from above that \begin{align*} \bignorm{(k,s)\mapsto W_k(s)x_k}_{\gamma(\mathbb{N}_N\times\mathbb{R};X)} \, & = \Bignorm{\sum_{k=1}^N e_k\otimes (\Psi\circ M)(\widehat{w_k}\otimes x_k)}_{\gamma(L^2(\mathbb{N}_N\times\mathbb{R});X)} \\ & \leq \sqrt{2\pi}\, C\, \Bignorm{\sum_{k=1}^N e_k \otimes\widehat{w_k}\otimes x_k}_{\gamma(L^2(\mathbb{N}_N\times\mathbb{R});X)}. \end{align*} The finite sequence $(e_k \otimes\widehat{w_k})_{k=1}^N$ is an orthogonal family of $L^2(\mathbb{N}_N\times\mathbb{R})$. Consequently, \begin{align*} \Bignorm{\sum_{k=1}^N e_k \otimes\widehat{w_k}\otimes x_k}_{\gamma(L^2(\mathbb{N}_N\times\mathbb{R});X)}\, & =\Bignorm{\sum_{k=1}^N \norm{\widehat{w_k}}_2\gamma_k\otimes x_k}_{G(X)}\\ &\leq\max_k\norm{\widehat{w_k}}_2\, \Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k}_{G(X)}.
\end{align*} Since $\norm{\widehat{w_k}}_2=\sqrt{2\pi}\norm{w_k}_2\leq \sqrt{2\pi}$ for any $k=1,\ldots,N$, we finally obtain that $$ \bignorm{(k,s)\mapsto W_k(s)x_k}_{\gamma(\mathbb{N}_N\times\mathbb{R};X)}\, \leq\,2\pi\, C\,\Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k}_{G(X)}. $$ We now analyse the $\beta_k$. Fix $k$ and consider $g\in L^2(\mathbb{R})$ and $x\in X$. Using Lemma \ref{Integral}, we have \begin{align*} \langle \mathbb{I}_{\beta_k}(g), x\rangle \, & =\int_{-\infty}^\infty g(s)\langle x_k^*, V_k(s)x\rangle\,ds\\ & =\int_{-\infty}^\infty g(s) {\mathcal F}\bigl( \widehat{v_k}\langle x_k^*, T_{\cdotp}(x) \rangle\bigr)(s)\, ds\\ & =\int_{0}^\infty \widehat{g}(t) \widehat{v_k}(t) \langle x_k^*, T_t(x)\rangle\, dt\\ & =\bigl\langle \widehat{v_k}\otimes x_k^*, M(\widehat{g}\otimes x)\bigr\rangle\\ & =\bigl\langle \widehat{v_k}\otimes x_k^*, (M\circ\Psi)(g\otimes x)\bigr\rangle\\ & =\bigl\langle (\Psi^*\circ M^*)(\widehat{v_k}\otimes x_k^*), g\otimes x\bigr\rangle. \end{align*} This shows that $\beta_k\in\gamma'_+(\mathbb{R};X^*)$, with $\mathbb{I}_{\beta_k} = (\Psi^*\circ M^*)(\widehat{v_k}\otimes x_k^*)$. Now arguing as in the $W_k(\,\cdotp)x_k$ case, we obtain that $$ \bignorm{(k,s)\mapsto V_k^*(s)x_k^*}_{\gamma'(\mathbb{N}_N\times\mathbb{R};X^*)} \, \leq\,2\pi\, C\,\Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k^*}_{G'(X^*)}. $$ We now implement these estimates in (\ref{4pi2}) to obtain that $$ \Bigl\vert \sum_{k=1}^{N} \langle S_k(x_k),x_k^*\rangle\Bigr\vert \,\leq\,C^2 \Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k}_{G(X)}\Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k^*}_{G'(X^*)}. $$ By the very definition of $G'(X^*)$, this means that $$ \Bignorm{\sum_{k=1}^N \gamma_k\otimes S_k(x_k)}_{G(X)}\,\leq\, C^2 \Bignorm{\sum_{k=1}^N \gamma_k\otimes x_k}_{G(X)}, $$ which completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{main5}] Assume (i).
By Lemma \ref{key2}, any element in the set (\ref{Theset}) has norm $\leq C^2$. Hence the proof of Theorem \ref{main1} shows the existence of a unique bounded homomorphism $\rho_{0,A} \colon\mathcal{A}_0(\mathbb{C}_+)\to B(X)$ such that (\ref{main2}) holds true for all $b\in L^1(\mathbb{R}_+)$. To prove $\gamma$-boundedness of $\rho_{0,A}$, we introduce the set $$ {\mathcal L}\,=\, \bigl\{(f\star wv)^{\sim}\, :\, f\in C_{00}(\mathbb{R}),\, w,v\in H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R}),\, \max\{\norm{f}_\infty,\norm{w}_2,\norm{v}_2\}\leq 1\bigr\}\, \subset\mathcal{A}_{0}(\mathbb{C}_+). $$ Recall (see the proof of Lemma \ref{key}) that any $h\in H^1(\mathbb{R})$ can be written as a product $h=wv$, with $w,v\in H^2(\mathbb{R})$ and $\norm{w}_2^2=\norm{v}_2^2=\norm{h}_1$, and that $H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})$ is dense in $H^2(\mathbb{R})$. Going back to Definition \ref{defNA}, we derive that $$ {\rm Ball}(\mathcal{A}_0(\mathbb{C}_+)) =\overline{\rm Conv}({\mathcal L}). $$ This implies that $$ \rho_{0,A}\bigl({\rm Ball}(\mathcal{A}_0(\mathbb{C}_+))\bigr) \subset \overline{\rm Conv}\bigl(\rho_{0,A}(\mathcal L)\bigr). $$ Since $\rho_{0,A}((f\star wv)^{\sim}) = \Gamma\bigl(A,(2\pi)^{-1}\widehat{f}\widehat{wv}\bigr)$ for any $f\in C_{00}(\mathbb{R})$ and any $w,v\in H^2(\mathbb{R})\cap\mathcal{S}(\mathbb{R})$, Lemma \ref{key2} says that $\rho_{0,A}(\mathcal L)$ is $\gamma$-bounded, with $\gamma$-bound $\leq C^2$. Owing to the fact that $\gamma$-boundedness and $\gamma$-bounds are preserved by convex hulls (see e.g. \cite[Proposition 8.1.21]{hnvw}) and uniform limits, we infer that $\rho_{0,A}$ is $\gamma$-bounded, with $\gamma(\rho_{0,A})\leq C^2$. This proves (ii). Conversely, assume (ii).
The proof of Corollary \ref{main3} shows the existence of a unique bounded homomorphism $\rho_{A}\colon \mathcal{A}(\mathbb{C}_+)\to B(X)$ extending $\rho_{0,A}$, as well as the fact that $\rho_{A}\bigl({\rm Ball}(\mathcal{A}(\mathbb{C}_+))\bigr)$ is contained in the strong closure of $\rho_{0,A}\bigl({\rm Ball}(\mathcal{A}_0(\mathbb{C}_+))\bigr)$. Since $\gamma$-boundedness and $\gamma$-bounds are preserved by strong limits, we obtain that $\rho_{A}$ is $\gamma$-bounded, with $\gamma(\rho_{A}) =\gamma(\rho_{0,A})$. Finally the argument in Remark \ref{narrow} shows that for any $t>0$, $$ T_t\in \rho_{A}\bigl({\rm Ball}(\mathcal{A}(\mathbb{C}_+))\bigr). $$ This implies (i), with $\gamma({\mathcal T}_A)\leq \gamma(\rho_A)$. \end{proof} \end{document}
\begin{document} \title{Adaptive shrinkage of singular values} \date{\today} \author{Julie Josse\\ \emph{Agrocampus Ouest}\\and\\ Sylvain Sardy\\ \emph{Universit\'e de Gen\`eve}} \pagestyle{myheadings} \markright{Adaptive shrinkage of singular values} \maketitle \begin{abstract} To recover a low rank structure from a noisy matrix, truncated singular value decomposition has been extensively used and studied. Recent studies suggested that the signal can be better estimated by shrinking the singular values. We pursue this line of research and propose a new estimator offering a continuum of thresholding and shrinking functions. To avoid an unstable and costly cross-validation search, we propose new rules to select the two thresholding and shrinking parameters from the data. In particular we propose a generalized Stein unbiased risk estimation criterion that does not require knowledge of the variance of the noise and that is computationally fast. A Monte Carlo simulation reveals that our estimator outperforms the tested methods in terms of mean squared error on both low-rank and general signal matrices across different signal-to-noise ratio regimes. In addition, it accurately estimates the rank of the signal when it is detectable. Keywords: denoising, singular value shrinking and thresholding, Stein's unbiased risk estimate, adaptive trace norm, rank estimation \end{abstract} \section{Introduction} In many applications such as image denoising, signal processing and collaborative filtering, it is common to model the data ${\bf X}$, an $N \times P$ matrix, as \begin{equation} {\bf X}={\bf W}+{\bf E}, \label{eq:model} \end{equation} where the unknown matrix ${\bf W}$ is measured with i.i.d.~${\rm N}(0,\sigma^2)$ errors ${\bf E}$. The matrix ${\bf W}$ is assumed to have low rank $R < \min(N,P)$, which means that its singular value decomposition (SVD) ${\bf W}={\bf P}{\bf D}{\bf Q}^{\rm T}$ has $R$ non-zero singular values $d_1\geq \ldots \geq d_R$.
Note that model (\ref{eq:model}) is also known as the bilinear model \citep{Mandel69} in analysis of variance, and as the fixed factor score model \citep{deleeuw1985} or the fixed effect model \citep{caussinus1986} in principal component analysis. Such models describe data well in many sciences, such as genotype-environment data in agronomy, or relational data in social science and in biological networks, where the variation between rows and columns is of equal interest \citep{hoff_2007_jasa}. To denoise the data, an old approach consists in performing the SVD of the matrix ${\bf X}={\bf U}\boldsymbol{\Lambda} {\bf V}^{\rm T}$ and defining $\hat {\bf W}=\hat {\bf P} \hat {\bf D} \hat {\bf Q}^{\rm T}$ with $\hat {\bf P}={\bf U}$ and $\hat {\bf Q}={\bf V}$, keeping the first $R$ singular values and setting the others to zero. In other words, the so-called truncated SVD keeps the empirical directions ${\bf U}$ and ${\bf V}$, and estimates the singular values by \begin{equation}\label{eq:hard} \hat d_i= \lambda_i \cdot 1(i\leq R)=\lambda_i \cdot 1(\lambda_i \geq \tau), \end{equation} which can be parametrized either in the rank $R$ or the threshold $\tau$ (here $1(\cdot)$ is the indicator function). This estimate is also the solution \citep{Eckart:1936} to \begin{equation}\label{eq:rank} \min_{{\bf W} \in \mathbb{R}^{N\times P}} \frac{1}{2} \|{\bf X}-{\bf W} \|_F^2 \quad {\rm s.t.} \quad {\rm rank}({\bf W})\leq R, \end{equation} where $\| {\bf M} \|_F$ is the Frobenius norm of the matrix ${\bf M}$. The truncation (\ref{eq:hard}) and its penalty formulation (\ref{eq:rank}) are reminiscent of the hard thresholding \citep{Dono94b} solution to best subset variable selection in regression. The truncated SVD requires $R$ as a tuning parameter, which can be selected using cross-validation \citep{Owen:cv:2009,Josse11b} or Bayesian considerations \citep{hoff_2007_jasa}.
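As a concrete illustration (our own numerical sketch, not the authors' code; all variable names are ours), the truncated SVD estimator takes a few lines of numpy: simulate the model ${\bf X}={\bf W}+{\bf E}$ with a rank-3 signal whose singular values sit well above the noise level, compute the SVD of ${\bf X}$, and keep the $R$ leading singular values.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, R, sigma = 50, 40, 3, 1.0

# rank-R signal W with singular values well above the noise level
U0, _ = np.linalg.qr(rng.standard_normal((N, R)))
V0, _ = np.linalg.qr(rng.standard_normal((P, R)))
d = np.array([50.0, 30.0, 20.0])
W = U0 @ np.diag(d) @ V0.T
X = W + sigma * rng.standard_normal((N, P))   # X = W + E

def truncated_svd(X, R):
    """Hard thresholding of singular values: keep the R largest, zero the rest."""
    U, lam, Vt = np.linalg.svd(X, full_matrices=False)
    lam = lam.copy()
    lam[R:] = 0.0
    return U @ np.diag(lam) @ Vt

W_hat = truncated_svd(X, R)
print(np.linalg.matrix_rank(W_hat))                         # 3
print(np.linalg.norm(W_hat - W) <= np.linalg.norm(X - W))   # denoising helps in this regime
```

The estimator is exactly the solution of the rank-constrained least squares problem quoted above; when some signal singular values approach the noise bulk edge, keeping or dropping them becomes the delicate choice that motivates the shrinkage rules discussed next.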
While this approach is still extensively used, recent studies \citep{Sourav:2013, Donoho:SVDHT:2013} suggested an optimal hard threshold for singular values with better asymptotic mean squared error than thresholding at $R$ or at the bulk edge (the limit of detection). More precisely, \cite{Donoho:SVDHT:2013} considered an asymptotic framework in which the matrix size is much larger than the rank of the signal matrix to be recovered and the signal-to-noise ratio of the low-rank piece stays constant while the matrix grows, and showed that the optimal threshold is $(4/\sqrt{3}) \sqrt{p}\, \sigma$ in the case of a square $(p \times p)$ matrix and $\sigma$ known. The other thresholds, for the cases of rectangular matrices and unknown $\sigma$, are also detailed in their paper. Another popular and recent estimation strategy consists in applying a soft thresholding rule to the singular values \begin{equation}\label{eq:soft} \hat d_i= \lambda_i \max(1-\tau/\lambda_i,0), \end{equation} where any singular value smaller than the threshold $\tau$ is set to zero. The estimate $\hat {\bf W}={\bf U} \hat {\bf D} {\bf V}^{\rm T}$ with $\hat d_i$ in (\ref{eq:soft}) is also the closed-form solution \citep{Mazumder:STMiss:2010, Cai:soft:2010} to \begin{equation} \label{eq:tracenorm} \min_{{\bf W} \in \mathbb{R}^{N\times P}} \frac{1}{2} \|{\bf X}-{\bf W} \|_F^2 + \tau \| {\bf W} \|_*, \end{equation} where $\| {\bf W} \|_*=\sum_{i=1}^{\min(N,P)} d_i$ is the trace norm of the matrix ${\bf W}$. The regularization (\ref{eq:soft}) and its penalty formulation (\ref{eq:tracenorm}) are inspired by soft thresholding \citep{Dono94b} and lasso \citep{Tibs:regr:1996}. The tuning parameter $\tau$ is often selected by cross-validation. Recently, \citet{CandesSURE:2013} defined a Stein unbiased risk estimate (SURE) \citep{Stein:1981} to select $\tau$ more efficiently, considering the noise variance $\sigma^2$ as known.
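Singular value soft thresholding, and the trace-norm problem it solves, are easy to check numerically. The sketch below is our illustration (names are ours): it applies the rule $\hat d_i=\max(\lambda_i-\tau,0)$, which is the same as $\lambda_i\max(1-\tau/\lambda_i,0)$, and verifies that the resulting matrix beats random perturbations of itself on the penalized objective, as expected for the global minimizer of a strictly convex problem.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, tau = 20, 15, 5.0
X = rng.standard_normal((N, P))

def svt(X, tau):
    """Singular value soft thresholding: hat d_i = max(lambda_i - tau, 0)."""
    U, lam, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(lam - tau, 0.0)) @ Vt

def objective(W, X, tau):
    """0.5 * ||X - W||_F^2 + tau * ||W||_*  (trace-norm penalized least squares)."""
    return 0.5 * np.linalg.norm(X - W) ** 2 + tau * np.linalg.svd(W, compute_uv=False).sum()

W_hat = svt(X, tau)
best = objective(W_hat, X, tau)
# the closed-form solution should beat any perturbation of itself
worse = min(objective(W_hat + 0.1 * rng.standard_normal((N, P)), X, tau) for _ in range(100))
print(best <= worse)   # True
```

Because the objective is strictly convex in ${\bf W}$, the soft-thresholded SVD is its unique minimizer, so every perturbed candidate scores at least as high.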
Finally, other reconstruction schemes involving nonlinear shrinkage of the singular values have been proposed in the literature \citep{Verbanck:RegPCA:2013,Nada:2013,Shabalin2013}. More precisely, using the same asymptotic framework as above and asymptotic results on the distribution of the singular values and singular vectors \citep{John:asympeignull:2001,Baik:asympteig:2006, Paul:asympeig:2007}, \citet{Shabalin2013} and \citet{OS:2014} showed that the shrinkage estimator $\hat {\bf W}$ closest to ${\bf W}$ in terms of mean squared error has the form $\hat {\bf W}={\bf U} \hat {\bf D} {\bf V}^{\rm T}$ with, when $\sigma=1$, \begin{equation}\label{eq:shaba} \hat d_i = \frac{1}{\lambda_i} \sqrt{\left(\lambda_i^2 - \beta -1 \right)^2 - 4 \beta} \cdot 1\left(\lambda_i\geq 1+\sqrt{\beta}\right), \end{equation} where $N/P \rightarrow \beta\in (0,1]$. The case of unknown $\sigma$ is also covered in their paper. With different asymptotics, considering that the noise variance tends to zero while $N$ and $P$ are fixed, \citet{Verbanck:RegPCA:2013} also reached similar estimates and suggested the following heterogeneous shrinkage estimate: \begin{equation}\label{eq:twosteps} \hat d_i = \lambda_i \left(1-\frac{\sigma^2}{\lambda_i^2}\right) \cdot 1(i\leq R). \end{equation} Unlike soft thresholding, the smallest singular values are shrunk more than the largest ones. It is a two-step procedure: first select $R$, then shrink the $R$ largest singular values. It is thus a compromise between hard and soft thresholding. In regression, \citet{Zou:adap:2006} also bridged the gap between soft and hard thresholding by defining the adaptive lasso estimator, governed by two parameters chosen by cross-validation to control thresholding and shrinkage. To avoid expensive resampling, \citet{SardySBITE2012} selected them by minimizing a Stein unbiased estimate of the risk.
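A minimal sketch of the two nonlinear shrinkers above, assuming $\sigma=1$ for (\ref{eq:shaba}); the singular values, aspect ratio, rank and $\sigma$ below are hypothetical numbers chosen only to exercise the formulas.

```python
import numpy as np

def optimal_shrinker(lam, beta):
    """Asymptotically optimal shrinker (eq:shaba) for sigma = 1, where
    beta is the limit of the aspect ratio N/P; values below the bulk
    edge 1 + sqrt(beta) are set to zero."""
    bulk = 1.0 + np.sqrt(beta)
    d = np.zeros_like(lam)
    keep = lam >= bulk
    d[keep] = np.sqrt((lam[keep]**2 - beta - 1.0)**2 - 4.0 * beta) / lam[keep]
    return d

def two_step(lam, R, sigma):
    """Heterogeneous shrinkage (eq:twosteps): keep the R largest values
    and shrink each by the factor 1 - sigma^2/lam^2 (clipped at zero
    here as a safeguard); small values are shrunk more than large ones."""
    d = lam * np.maximum(1.0 - sigma**2 / lam**2, 0.0)
    d[R:] = 0.0
    return d

lam = np.array([3.0, 2.0, 1.2])          # hypothetical singular values
d_os = optimal_shrinker(lam, beta=0.4)
d_ts = two_step(lam, R=2, sigma=0.5)
```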
The adaptive lasso enjoys oracle properties and has shown good prediction accuracy, especially when tuned with a Stein unbiased risk estimate. In this paper, we propose in Section~\ref{subsct:def} an estimator inspired by the adaptive lasso to recover ${\bf W}$. It thresholds and shrinks the singular values in a single step, using two parameters that parametrize a continuum of thresholding and shrinking functions. We propose in Section~\ref{subsct:selection} simple yet efficient strategies to select the two tuning parameters from the data, without relying on unstable and costly cross-validation. One approach consists in estimating the $\ell_2$-loss, another in selecting the threshold at the detection limit, estimated empirically from the data matrix ${\bf X}$, and the last one can be applied when the variance $\sigma^2$ is unknown. Finally, we assess the method on simulated data in Section~\ref{sec:simu} and show that it outperforms state-of-the-art methods in terms of mean squared error and rank estimation. \section{Adaptive trace norm} \subsection{Definition} \label{subsct:def} The evolution of regularization for reduced rank matrix estimation reveals that the empirical singular values should not simply be thresholded (as in hard thresholding), but should also be shrunk (as in soft thresholding), or even shrunk nonlinearly as in (\ref{eq:shaba}) and (\ref{eq:twosteps}). Inspired by the adaptive lasso \citep{Zou:adap:2006}, we propose a continuum of functions indexed by two parameters $(\tau,\gamma)$. The thresholding and shrinkage function \begin{equation}\label{eq:adaptivelambda} \hat d_i= \lambda_i \max\left(1-\frac{\tau^\gamma}{\lambda_i^\gamma},0\right) \end{equation} is defined for a positive threshold $\tau$, and encompasses soft thresholding (\ref{eq:soft}) for $\gamma=1$ and hard thresholding (\ref{eq:hard}) when $\gamma \rightarrow \infty$.
We call the associated estimator \begin{equation} \label{eq:ATN} \hat {\bf W}_{\tau, \gamma} = \sum_{i=1}^{\min(N,P)} U_i \lambda_i \max\left(1-\frac{\tau^\gamma}{\lambda_i^\gamma},0\right) V_i' \end{equation} the adaptive trace norm (ATN) estimator, with $\tau\geq 0$ and $\gamma \geq 1$. Note that, as a byproduct, the rank of the matrix is also estimated, by $\hat R= \sum_{i=1}^{\min(N,P)} 1(\hat d_{i} > 0)$. In comparison to the hard and soft thresholding rules, the advantage of using the single and more flexible thresholding and shrinkage function (\ref{eq:adaptivelambda}) is twofold. First, (\ref{eq:adaptivelambda}) parametrizes a rich family of functions that can more closely approach an ideal thresholding and shrinking function to recover the structure of the underlying matrix ${\bf W}$, given the noise level $\sigma^2$. Second, the specific multiplicative factors $(1-\tau^\gamma/\lambda_i^\gamma)$ fit the rationale that the largest singular values correspond to stable directions and should be shrunk only mildly. In comparison to other nonlinear thresholding rules, (\ref{eq:adaptivelambda}) does not rely on asymptotic derivations. Instead it selects its parameters $(\hat \tau,\hat \gamma)$ from the data, which leads to a smaller MSE in many scenarios, as illustrated in Section~\ref{sec:simu}. Our estimator is related to penalized Frobenius norm regularization.
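The following sketch implements (\ref{eq:ATN}) and illustrates the continuum: $\gamma=1$ reproduces soft thresholding exactly, while a large $\gamma$ behaves almost like hard thresholding. The matrix and the choice of $\tau$ are hypothetical.

```python
import numpy as np

def atn(X, tau, gamma):
    """Adaptive trace norm estimator (eq:ATN): threshold at tau and
    shrink each kept singular value by the factor 1 - (tau/lam)^gamma."""
    U, lam, Vt = np.linalg.svd(X, full_matrices=False)
    d = lam * np.maximum(1.0 - (tau / lam)**gamma, 0.0)
    return (U * d) @ Vt

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 20))
lam = np.linalg.svd(X, compute_uv=False)
tau = float(lam[4])                    # hypothetical threshold keeping four values
soft_like = atn(X, tau, gamma=1.0)     # gamma = 1: exactly soft thresholding
hard_like = atn(X, tau, gamma=200.0)   # large gamma: close to hard thresholding
```

Because the shrinkage factor increases with $\gamma$, the large-$\gamma$ fit shrinks every kept singular value less than the soft fit does.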
In a matrix completion and regression context, \citet{Mazumder:STMiss:2010}, \citet{GaiffasWTN:2011} and \citet{CDarxivC12} proved that, for a weakly increasing weight sequence~${\boldsymbol \omega}$, the optimization problem \begin{equation} \label{eq:weightedpenalty} \min_{{\bf W} \in \mathbb{R}^{N\times P}} \frac{1}{2} \|{\bf X}-{\bf W} \|_F^2 + \alpha \| {\bf W} \|_{*,{\boldsymbol \omega}} \quad {\rm with} \quad \| {\bf W} \|_{*,{\boldsymbol \omega}}=\sum_{i=1}^{\min(N,P)} \omega_i d_i \end{equation} has the closed-form solution $\hat {\bf W}={\bf U} \hat {\bf D} {\bf V}^{\rm T}$ with $\hat d_i= \max(\lambda_i-\alpha \omega_i,0)$. The adaptive trace norm estimator \eqref{eq:ATN} thus has weights inversely proportional to the empirical singular values: it corresponds to $\alpha=\tau^\gamma$ and $\omega_i=1/\lambda_i^{\gamma-1}$. \subsection{Selection of $\tau$ and $\gamma$} \label{subsct:selection} The parameters $\tau$ and $\gamma$ could be estimated using cross-validation. Leave-one-out cross-validation first consists in removing one cell $(i,j)$ of the data matrix ${\bf X}$. Then, for one pair $(\tau, \gamma)$, its value is predicted using the estimator obtained from the dataset that excludes this cell; the predicted value is denoted $\hat X_{ij}^{-ij}$. Finally, the prediction error $(X_{ij}- \hat X_{ij}^{-ij})^2$ is computed, and the operation is repeated for all the cells in ${\bf X}$ and for each pair $(\tau, \gamma)$. The pair that minimizes the prediction error is selected. Such a procedure requires a method that provides an estimator despite the missing values. Methods estimating the singular vectors and singular values from incomplete data exist, but they use computationally intensive iterative algorithms \citep{Ilin10, Mazumder:STMiss:2010, JosseHusson12}. This makes the cross-validation procedure difficult to use in practice, even with a $K$-fold strategy.
As an alternative, we suggest three methods whose strength is to select $\tau$ and $\gamma$ by adapting to the signal ${\bf W}$ and the noise level $\sigma^2$. \subsubsection{When $\sigma$ is known} The first method seeks a good $\ell_2$-risk. The mean squared error $\mbox{MSE}=\mathbb{E}||\hat {\bf W}-{\bf W}||^{2}$, or risk, of the estimator $\hat {\bf W}={\bf U} \hat {\bf D} {\bf V}^{\rm T}$ depends on the unknown ${\bf W}$ and cannot be computed explicitly. However, it can be estimated unbiasedly using the Stein unbiased risk estimate ($\mathbb{E}(\mbox{SURE})=\mbox{MSE}$). For estimators that threshold and shrink singular values, SURE takes the classical form of the residual sum of squares (RSS) penalized by the divergence of the operator: \begin{equation} {\rm SURE} = - NP\sigma^2 + \mbox{RSS}+ 2 \sigma^2 {\rm div}(\hat {\bf W} ). \label{eq:SUREdiv} \end{equation} However, the form of the divergence is not straightforward, and \citet[Theorem 4.3]{CandesSURE:2013} showed that it has the following closed-form expression: \begin{eqnarray*} {\rm div}(\hat {\bf W})&=&\sum_{i=1}^{\min(N,P)} \left(f'_{i}(\lambda_{i}) + |N-P| \frac{f_{i}(\lambda_{i})}{\lambda_{i}} \right)+ 2 \sum_{i=1}^{\min(N,P)} \sum_{j \neq i} \frac{\lambda_{i} f_{i}(\lambda_{i})}{\lambda_i^2-\lambda_j^2}, \end{eqnarray*} where $f_{i}(\lambda_{i})$ is the thresholding and shrinking function applied to the $i$th singular value. The derivation of SURE requires this function to be weakly differentiable in the Stein sense, that is, differentiable except on a set of measure zero. This is for instance the case for the soft-thresholding function used by \citet{CandesSURE:2013}. Likewise, the adaptive thresholding function (\ref{eq:adaptivelambda}) is differentiable except at $\lambda_i=\tau$ for the range of interest $\gamma \in [1,\infty)$.
Hence SURE for the adaptive trace norm is \begin{equation} \label{eq:SURE} {\rm SURE} (\tau,\gamma)= - NP\sigma^2 + \sum_{s=1}^{\min(N,P)}\lambda_s^2 \min\left(\frac{\tau^{2\gamma}}{\lambda_s^{2\gamma}}, 1\right) + 2 \sigma^2 {\rm div}(\hat {\bf W}_{\tau,\gamma}), \end{equation} where \begin{eqnarray*} {\rm div}(\hat {\bf W}_{\tau,\gamma})&=&\sum_{s=1}^{\min(N,P)} \left[\left(1+(\gamma-1) \frac{\tau^\gamma}{\lambda_s^\gamma}\right) \cdot 1\left(\lambda_s \geq \tau\right) + |N-P| \max\left(1-\frac{\tau^\gamma}{\lambda_s^\gamma},0\right)\right.\\ &&\left.{}+ 2 \sum_{t \neq s} \frac{\lambda_s^2 \max\left(1-\frac{\tau^\gamma}{\lambda_s^\gamma},0\right) }{\lambda_s^2-\lambda_t^2}\right]. \end{eqnarray*} A first selection rule for $\tau \geq 0$ and $\gamma \geq 1$ is thus to find the pair $(\tau, \gamma)$ that minimizes the bivariate function ${\rm SURE} (\tau,\gamma)$ in (\ref{eq:SURE}). It is not computationally costly, unlike cross-validation, but it assumes the noise variance $\sigma^2$ is known. The second selection method is primarily driven by a good estimation of the rank of the matrix ${\bf W}$. The parameter that determines the estimated rank is the threshold $\tau$, since any empirical singular value $\lambda_i\leq \tau$ is set to zero by (\ref{eq:adaptivelambda}). Inspired by the universal rule of \citet{Dono94b} and thresholding tests \citep{SardyANOVA13}, we propose to use as the selected threshold the $(1-\alpha)$-quantile of the distribution of the largest empirical singular value $\lambda_1$ of ${\bf X}$ under the null hypothesis that ${\bf W}$ has rank zero. With $\alpha$ tending to zero with the sample size, null rank estimation is guaranteed with probability tending to one under the null hypothesis. \citet{Dono94b} implicitly used a level of order $\alpha=O(1/\sqrt{\log N})$ when $N=P$, so we choose a similar level tending to zero with the maximum of $N$ and $P$.
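The criterion (\ref{eq:SURE}) depends on the data only through the singular values, so minimizing it over a grid of $(\tau,\gamma)$ is cheap. A NumPy sketch, with a hypothetical rank-2 signal and a small hand-chosen grid (both are illustration choices, not the paper's settings):

```python
import numpy as np

def sure_atn(lam, tau, gamma, sigma, N, P):
    """SURE (eq:SURE) for the ATN shrinker; lam holds the singular
    values of X. Uses the closed-form divergence given above."""
    shrink = np.maximum(1.0 - (tau / lam)**gamma, 0.0)
    rss = np.sum(lam**2 * np.minimum((tau / lam)**(2 * gamma), 1.0))
    div = np.sum((1.0 + (gamma - 1.0) * (tau / lam)**gamma) * (lam >= tau))
    div += abs(N - P) * np.sum(shrink)
    l2 = lam**2
    diff = l2[:, None] - l2[None, :]
    np.fill_diagonal(diff, np.inf)          # drop the t = s terms
    div += 2.0 * np.sum((l2 * shrink)[:, None] / diff)
    return -N * P * sigma**2 + rss + 2.0 * sigma**2 * div

rng = np.random.default_rng(3)
N, P, sigma = 40, 30, 1.0
W = 2.0 * rng.standard_normal((N, 2)) @ rng.standard_normal((2, P))
X = W + sigma * rng.standard_normal((N, P))
lam = np.linalg.svd(X, compute_uv=False)
grid = [(t, g) for t in np.linspace(0.1, lam[0], 40) for g in (1.0, 2.0, 5.0, 50.0)]
tau_hat, gamma_hat = min(grid, key=lambda tg: sure_atn(lam, *tg, sigma, N, P))
```

A useful sanity check: at $\tau=0$ the estimator is the identity, whose divergence is $NP$, so SURE reduces to $NP\sigma^2$ exactly.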
This leads to the definition of the universal threshold for reduced rank mean matrix estimation: \begin{equation} \label{eq:tauNP} \tau_{\max(N,P)}=\sigma F_{\Lambda_1}^{-1}\left(1-\frac{1}{\sqrt{\log(\max(N,P))}}\right), \end{equation} where $F_{\Lambda_1}$ is the cumulative distribution function of the largest singular value under Gaussian white noise with unit variance. We then select the shrinkage parameter $\gamma$ by minimizing SURE (\ref{eq:SURE}) in $\gamma$ for $\tau=\tau_{\max(N,P)}$. In practice, the finite sample distribution of $\Lambda_1$ for an $N\times P$ matrix of independent and identically distributed ${\rm N}(0,1)$ random variables is known \citep{Zanella:2009} but difficult to use. We therefore simulate random variables from that distribution and take the appropriate empirical quantile to estimate $\tau_{\max(N,P)}$ in (\ref{eq:tauNP}). Alternatively, we could use results of \citet{Shabalin2013}, who derived the asymptotic distribution of the singular values of model (\ref{eq:model}) based on results from random matrix theory \citep{John:asympeignull:2001,Baik:asympteig:2006, Paul:asympeig:2007}. \subsubsection{When $\sigma$ is unknown} Both previous methods need the noise variance $\sigma^2$ as an input. In some applications, such as image denoising \citep{milanf:2013, CandesSURE:2013}, it is known or a good estimate is available. Very often, however, this is not the case, and the formula (\ref{eq:SURE}) cannot be used as such. Inspired by generalized cross-validation \citep{CravenWahba79}, we propose the generalized SURE: \begin{equation} \label{eq:GSURE} {\rm GSURE} (\tau,\gamma)= \frac { \sum_{s=1}^{\min(N,P)}\lambda_s^2 \min\left(\frac{\tau^{2\gamma}}{\lambda_s^{2\gamma}}, 1\right) } {(1-{\rm div}(\hat {\bf W}_{\tau,\gamma})/(NP))^2}.
\end{equation} Using the first-order Taylor expansion $1/(1-\epsilon)^2 \approx 1+2\epsilon$ in \eqref{eq:GSURE}, we get $\mbox{GSURE}\approx\mbox{RSS}\left(1+2\ {\rm div}(\hat {\bf W}_{\tau,\gamma})/(NP)\right)$; then, with the variance estimate $\hat \sigma^2=\mbox{RSS}/(NP)$, one sees how GSURE approximates SURE \eqref{eq:SUREdiv}. The GSURE criterion has the great advantage of not requiring any input value, and can be applied straightforwardly to select both tuning parameters. \section{Simulations} \label{sec:simu} \subsection{Gaussian setting with large $N$ and $P$} We compare the adaptive trace norm estimator to existing ones by reproducing the simulations of \citet{CandesSURE:2013}. Matrices of size $200\times 500$ are generated according to model \eqref{eq:model} with four signal-to-noise ratios SNR $\in \{0.5, 1, 2, 4\}$ (calculated as $1/(\sigma \sqrt{NP})$) and two values of the rank, $R \in \{10, 100\}$. For each combination, 50 datasets are generated. We consider five estimators: \begin{itemize} \item Truncated SVD (TSVD). We use the standard version with the true rank $R$ in (\ref{eq:hard}), as well as the versions proposed by \citet{Donoho:SVDHT:2013} with the asymptotically MSE-optimal hard threshold $\tau=\lambda_{*}(\frac{N}{P})\sqrt{P}\sigma$ when $\sigma$ is known, and $\tau=w(\frac{N}{P}) {\rm median}(\lambda_i)$ when $\sigma$ is unknown. The values of the coefficients $\lambda_{*}(\frac{N}{P})$ and $w(\frac{N}{P})$ are given in their Tables 1 and 4. \item Optimal shrinkage (OS) of \citet{Shabalin2013} and \citet{OS:2014}, both when $\sigma$ is known and when $\sigma$ is unknown, as defined in Section 7 of \citet{OS:2014}. \item Singular value soft thresholding (SVST) in (\ref{eq:soft}), with $\tau$ selected to minimize SURE \citep{CandesSURE:2013}, knowing $\sigma$. \item The 2-step estimator (\ref{eq:twosteps}) with the true rank $R$ \citep{Verbanck:RegPCA:2013}.
\item Adaptive trace norm (ATN) with three selection rules for the two parameters indexing the family of shrinkers. With $\sigma$ known: SURE (\ref{eq:SURE}), and SURE as a function of $\gamma$ only ($\tau$ being set to the universal threshold (\ref{eq:tauNP})). With $\sigma$ unknown: GSURE (\ref{eq:GSURE}). \end{itemize} We report in Tables~\ref{simu_candes} and~\ref{simu_candes*} the estimated mean squared error between the fitted matrix $\hat {\bf W}$ and the true signal ${\bf W}$, and the estimated rank (the number of singular values not set to zero). We also include the lower bound on the worst-case MSE of any matrix denoiser given in \citet{Donoho:minimaxsvst:2014}, which can be used as a baseline. The standard deviations of the MSEs are very small for all the estimators, varying from the order of $10^{-5}$ for high SNR to $10^{-3}$ for small SNR; the MSEs can therefore be compared directly across estimators. We indicate the standard deviations for the rank. Table~\ref{simu_candes} reports the performance with no oracle information, while Table~\ref{simu_candes*} reports it when some parameters are known, either the true rank or the true noise variance. Comparing the two tables allows one to assess the performance lost by having to estimate all parameters, as in most real-life applications.
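The universal threshold (\ref{eq:tauNP}) used by the second ATN variant above can be estimated by simulating the largest singular value under pure noise, as described in Section~\ref{subsct:selection}. A sketch with hypothetical dimensions and number of Monte Carlo replications:

```python
import numpy as np

def universal_threshold(N, P, sigma=1.0, n_sim=200, seed=0):
    """Monte Carlo estimate of the universal threshold (eq:tauNP): the
    (1 - 1/sqrt(log(max(N, P))))-quantile of the largest singular value
    of an N x P pure-noise Gaussian matrix, scaled by sigma."""
    rng = np.random.default_rng(seed)
    top = [np.linalg.svd(rng.standard_normal((N, P)), compute_uv=False)[0]
           for _ in range(n_sim)]
    alpha = 1.0 / np.sqrt(np.log(max(N, P)))
    return sigma * np.quantile(top, 1.0 - alpha)

tau_u = universal_threshold(40, 30)
```

For a $40\times 30$ matrix the largest noise singular value concentrates near $\sqrt{40}+\sqrt{30}\approx 11.8$, so the estimated threshold lands close to that bulk edge.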
\begin{table} \begin{center} \scriptsize \begin{tabular}{rr||r|r|r|r} \multicolumn{1}{r}{$R$} & \multicolumn{1}{c||}{SNR} & \multicolumn{1}{c|}{ATN} & \multicolumn{1}{c|}{TSVD} & \multicolumn{1}{c|}{OS} & Lower\\ & & \multicolumn{1}{c|}{{\bf GSURE}} & \multicolumn{1}{c|}{${\boldsymbol \tau}$} && bound \\ \hline {\bf MSE} &&&& &\\ 10&4&\textbf{0.004}& \textbf{0.004} & \textbf{0.004}& 0.0012 \\ 100&4&\textbf{0.037}&0.409 & 0.335 &0.0106\\ 10&2&\textbf{0.017}&\textbf{0.017} & \textbf{0.017} &0.0024\\ 100&2&{\bf 0.142}&0.755 & 0.606 &0.0212\\ 10&1&\textbf{0.067} & 0.072 & \textbf{0.067}&0.0048\\ 100&1&\textbf{0.454}&1.000 & 0.892&0.0424\\ 10&0.5&\textbf{0.254}& 0.321 & \textbf{0.250} & 0.0097\\ 100&0.5&\textbf{0.978}&1.000 & 0.994& 0.0845\\ \hline {\bf Rank}&&&&\\ 10&4& 11 (1.8) & \textbf{10} (0.0)& \textbf{10} (0.0)\\ 100&4& \textbf{102} (1.7)& 49 (1.2) & 78 (0.7)\\ 10&2& 11 (1.4) & \textbf{10} (0.0)& \textbf{10} (0.0)\\ 100&2& \textbf{112} (2.8)& 20 (1.6)& 48 (1.3)\\ 10&1& 11 (1.3)& \textbf{10} (0.0)& \textbf{10} (0.0) \\ 100&1& \textbf{140} (4.3)& 0 (0.0)& 16 (1.6)\\ 10&0.5& 15 (1.6)& \textbf{10} (0.0)& \textbf{10} (0.0)\\ 100&0.5& \textbf{14} (7.6)& 0 (0.0)& 2 (1.2)\\ \hline \end{tabular} \caption{Monte Carlo results in terms of mean squared errors (top) and rank estimation with its standard deviation (bottom). $R$~is the true rank (10 or 100) and SNR is the signal-to-noise ratio. Three fully automatic estimators are considered: adaptive trace norm (ATN) based on GSURE (\ref{eq:GSURE}), truncated SVD (TSVD) using $\tau=w(0.4)\, {\rm median}(\lambda_i)$ \citep{Donoho:SVDHT:2013}, and optimal shrinkage (OS) using an estimate of the noise variance \citep{OS:2014}. The lower bound of \citet{Donoho:minimaxsvst:2014} is indicated. Sample size is $N = 200$ individuals and number of variables is $P = 500$. Results correspond to the mean over the 50 simulations. The best results in each line are indicated in {\bf bold}.
\label{simu_candes}} \end{center} \end{table} \normalsize \begin{table} \begin{center} \scriptsize \begin{tabular}{rr||r|r|r|r|r|r|r} \multicolumn{1}{r}{$R$} & \multicolumn{1}{c||}{SNR} & \multicolumn{2}{c|}{ATN} & \multicolumn{2}{c|}{TSVD} & \multicolumn{1}{c|}{OS} & \multicolumn{1}{c|}{SVST} & \multicolumn{1}{c}{2-steps} \\ \hline & & \multicolumn{1}{c}{${\rm SURE}$} & \multicolumn{1}{c|}{${\rm universal}$} & \multicolumn{1}{c}{$R$} & \multicolumn{1}{c|}{$\tau$} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{${\rm SURE}$} & \multicolumn{1}{c}{} \\ \hline {\bf MSE} &&&&&&& \\ 10&4 & \textbf{0.004}&\textbf{0.004}& \textbf{0.004} & \textbf{0.004} & \textbf{0.004} &0.008&\textbf{0.004}\\ 100&4&\textbf{0.037}&\textbf{0.037}& 0.038 &0.038 & \textbf{0.037} & 0.045&\textbf{0.037}\\ 10&2&\textbf{0.017}&\textbf{0.017}& \textbf{0.017} & \textbf{0.017} & \textbf{0.017} & 0.033&\textbf{0.017}\\ 100&2 &{\bf 0.142}&0.147& 0.152 & 0.158 & 0.146 & 0.156&\textbf{0.141}\\ 10&1& \textbf{0.067}&\textbf{0.067} & 0.072 & 0.072 & \textbf{0.067} & 0.116&\textbf{0.067}\\ 100&1&\textbf{0.448}&0.623& 0.733 &0.856 & 0.600 & \textbf{0.448}&0.491\\ 10&0.5&0.253& 0.251& 0.321 & 0.321 & \textbf{0.250} & 0.353&0.257\\ 100&0.5&\textbf{0.852}&0.957& 3.164 &1.000 & 0.961 &\textbf{0.852}&1.477\\\hline {\bf Rank}&&&&&&&& \\ 10&4& 11 (1.6)& \textbf{10} (0.0) & & \textbf{10} & \textbf{10} & 65 (2.3)& \\ 100&4& 103 (1.9)& \textbf{100} (0.0)& & \textbf{100} & \textbf{100} & 193 (0.8) & \\ 10&2& 11 (1.0)& \textbf{10} (0.1)& & \textbf{10} & \textbf{10} & 63 (2.2)& \\ 100&2 &114 (2.2)& \textbf{100} (0.0)& & \textbf{100} & \textbf{100} & 181 (1.1)& \\ 10&1& 11 (1.2)& \textbf{10} (0.0)& & \textbf{10} & \textbf{10} & 59 (1.7) & \\ 100&1 &154 (1.8)& \textbf{65} (0.8)& & 38 (0.6)& 64 (0.7) & 154 (1.2) & \\ 10&0.5 & 15 (1.6)& \textbf{10} (0.0) & & \textbf{10} & \textbf{10} & 51 (2.7)& \\ 100&0.5 & \textbf{87} (3.3)& 16 (0.8)& & 0 & 15 (0.8) & 86 (2.6)& \\ \hline \end{tabular} \caption{Same setting as for the Monte 
Carlo simulations of Table~\ref{simu_candes}. The same estimators are considered, except that oracle quantities, either the true rank $R$ or the true variance $\sigma^2$, are used. The considered estimators are: ATN with two selection rules, SURE and universal, for the two parameters, assuming $\sigma$ is known; TSVD knowing the true rank $R$, and with $\tau=\lambda_*(0.4)\sqrt{500}\sigma$ \citep{Donoho:SVDHT:2013}; optimal shrinkage (OS) with $\sigma$ known \citep{OS:2014}; singular value soft thresholding (SVST) with $\sigma$ known \citep{CandesSURE:2013}; the 2-step estimator knowing the true rank $R$ \citep{Verbanck:RegPCA:2013}. \label{simu_candes*}} \end{center} \end{table} \normalsize Looking at Table~\ref{simu_candes}, we see that the proposed adaptive trace norm estimator performs remarkably well, owing to its flexibility (with two parameters) and to a good selection of the appropriate model with GSURE. It can even outperform oracle estimators (in Table~\ref{simu_candes*}) that are governed by a single parameter. Here the GSURE results are very similar to the corresponding SURE results, showing that \eqref{eq:GSURE} is a good approximation to \eqref{eq:SUREdiv} thanks to the large value of $NP$. Figure~\ref{fig:tau_gamma} illustrates the striking ability of GSURE to approximate the true loss in different regimes. On the top, the estimated risk of ATN GSURE is represented as a function of $(\tau,\gamma)$ for $R$=10 and SNR=1 on the left (row 6 of Table \ref{simu_candes}) and for $R$=100 and SNR=0.5 (last row of Table \ref{simu_candes}) on the right. On the bottom, the true loss function is plotted as a function of $\tau$ and $\gamma$. The estimated values (top) and the optimal choice (bottom), each located by a cross, are close. \begin{figure} \caption{Top: Generalized Stein unbiased risk estimate (GSURE); Bottom: corresponding true $\ell_2$-loss.
Both are plotted as a function of $(\tau,\gamma)$ for data generated as in Table~\ref{simu_candes}.\label{fig:tau_gamma}} \end{figure} Looking at Table~\ref{simu_candes*}, we see that when the SNR is high (equal to 4 or 2), the ATN, 2-step, TSVD (with $R$ or $\tau$) and OS estimators give MSEs of the same order of magnitude and clearly outperform the SVST approach. When the SNR decreases, the TSVD, and to a lesser extent the 2-step and OS methods, collapse; this is when SVST provides better results, especially in the difficult setting where the rank is $R=100$. The good behavior of the 2-step approach in many situations highlights the fact that it is often a good strategy to apply a different amount of shrinkage to each singular value. These simulations provide good insights into the regimes for which each estimator is well suited: the low-noise regime for the TSVD, the moderate-noise regime for the 2-step, and high noise for the SVST. But if one is interested in a single estimator, regardless of the unknown underlying structure, then ATN becomes the method of choice. Figure \ref{fig:shapeshrink} illustrates the adaptation of ATN to various SNRs and ranks by representing a typical shrinking and thresholding function with $(\tau,\gamma)$ selected by SURE. As expected, when the SNR is 0.5, ATN is close to soft thresholding, whereas when the SNR is 4, it is close to hard thresholding. \begin{figure} \caption{Typical thresholding function selected by ATN with SURE for ranks $R=10$ (left) and $R=100$ (right), and for four values of SNR.\label{fig:shapeshrink}} \end{figure} As far as rank estimation is concerned, even if it is not the primary objective, ATN estimates the rank very well, while SVST considerably overestimates it. This phenomenon is also known in the regression setup, where the lasso tends to select too many variables \citep{Zou:adap:2006,Zhang:2008}.
Note that ATN using the universal threshold (column 2 of Table~\ref{simu_candes*}) was designed for estimating the rank, and it indeed provides a very good estimate. Finally, the results of TSVD and OS (oracle or not) for $R=100$ are not as competitive as for $R=10$, because these methods are based on asymptotics and assume a rank that is low compared to the matrix size. In addition, the MSEs are very similar across the two tables for $R=10$, whereas they differ for $R=100$, which indicates that the estimation of $\sigma$ encounters difficulties. The case SNR=0.5 and $R$=100 in Table~\ref{simu_candes*} is also worth a comment. Here, the data are so noisy that part of the signal is indistinguishable from the noise: only 16 singular values are greater than those that would be obtained under the null hypothesis that the rank of the $N\times P$ matrix ${\bf W}$ is zero. Nevertheless, ATN with SURE estimates on average a matrix of rank 87, which is quite remarkable. In this situation, the same amount of shrinkage is applied to all the singular values, as with soft thresholding (the selected value is $\hat \gamma=1$ in Figure~\ref{fig:shapeshrink}), leading to the smallest MSE. \subsection{Non-Gaussian noise} The methods considered above all rely on the assumption of Gaussian noise. We now assess their sensitivity to Student noise with 5 degrees of freedom, based on the same simulations as for Table~\ref{simu_candes}. We also considered a more difficult situation with the matrix size divided by 10, that is $N=20$ and $P=50$. Figure~\ref{fig:ptitgauss_stud} points to two noticeable consequences: the boxplots are more variable with Student noise, yet centered around the same medians, except for ATN with GSURE, whose efficiency drops when both $N$ and $P$ are small (bottom right). \begin{figure} \caption{MSE boxplots for $R=10$ and SNR=1. Top: $N=200$, $P=500$; bottom: $N=20$, $P=50$. Left: Gaussian; right: Student.
\label{fig:ptitgauss_stud}} \end{figure} \subsection{Simulations based on a real small data set} We consider here a realistic simulation based on a wine dataset with $N=21$ wines described by $P=30$ sensory descriptors (the data are available in the R package FactoMineR \citep{Le:Josse:Husson:2008:JSSOBK:v25i01}). We used the fitted rank-$R$ matrix as the true signal matrix, and then added Gaussian noise to perform a Monte Carlo simulation. Note that in practice it makes sense to center the data before applying the estimators, since the values are shrunk toward zero. \begin{figure} \caption{Distribution of the MSE for the wine dataset simulations for $R=8$.} \label{fig:ptit} \end{figure} On this small sample case, we found the same trends as observed previously. As illustrated in Figure~\ref{fig:ptit} for two levels of noise and $R$=8, the estimators often manage to improve on the usual truncated SVD, SVST performs well in the high-noise regime and poorly otherwise, and ATN remains very powerful. Note that ATN with the universal $\tau$ gives results similar to those of the optimal shrinker (OS), which provides an empirical interpretation of the OS estimator and highlights the capability of ATN to find the optimal way to shrink the singular values. Finally, GSURE, although still the best method among the blind estimators (the last three estimators on the graphics) in terms of median MSE, is more variable. \section{Conclusions} Recovering a reduced rank matrix from noisy data is a topic that has aroused the interest of the scientific community for several years, as attested by the abundant recent literature on the subject. The adaptive trace norm estimator combines the strengths of the hard, soft and two-step procedures by means of a shrinking parameter and a thresholding parameter indexing a family of shrinkers. The method adapts to the data, which ensures good estimation of both low-rank and general signal matrices, whatever the regime encountered in practice.
The tuning parameters are estimated without computationally intensive resampling methods, thanks to the SURE and GSURE formulae; the latter has the great advantage of not requiring knowledge of the noise variance. In addition, the rank is also estimated accurately, especially with the universal version designed for that purpose. Our method outperforms its competitors in simulations and can thus be recommended to users. We showed that Student noise affects the results, but other corruptions, such as strong outliers, alter the performance of the estimators to a greater extent. To tackle this issue, a natural extension could be to derive robust estimators, for instance by replacing the Frobenius norm in (\ref{eq:weightedpenalty}) by the robust Huber loss function $\rho$. This leads to the estimator solving \begin{equation} \label{eq:robustATN} \min_{{\bf W} \in \mathbb{R}^{N\times P}} \|{\bf X}-{\bf W} \|_\rho + \alpha \| {\bf W} \|_{*,{\boldsymbol \omega}} \quad {\rm with} \quad \| {\bf W} \|_{*,{\boldsymbol \omega}}=\sum_{i=1}^{\min(N,P)} \omega_i d_i, \end{equation} where $\| H \|_\rho=\sum_{i=1}^N \sum_{j=1}^P \rho(h_{ij})$ and $$ \rho(h)= \left \{ \begin{array}{ll} h^2/2, & |h|\leq \tilde \alpha \\ \tilde \alpha |h|-\tilde \alpha^2/2, & |h|> \tilde \alpha \end{array} \right . $$ is the Huber loss with cutpoint $\tilde \alpha$ \citep{Huberbook}. Extending the results of \citet{STB01}, we could rewrite (\ref{eq:robustATN}) as $$ \min_{{\bf W},{\bf R} \in \mathbb{R}^{N\times P}} \frac{1}{2}\|{\bf X}-{\bf W} - {\bf R} \|_F^2 + \alpha \| {\bf W} \|_{*,{\boldsymbol \omega}} + \tilde \alpha \| {\bf R} \|_1 \quad {\rm with} \quad \| {\bf R} \|_1 = \sum_{i=1}^N \sum_{j=1}^P |R_{ij}|. $$ This would allow solving the problem by block coordinate relaxation, and could be used as an alternative to the robust estimator of \citet{robustPCA09}. Two other extensions should be considered.
First, our estimator could be assessed in a missing data framework, as an alternative to iterative soft thresholding algorithms \citep{Mazumder:STMiss:2010}. Second, it could be assessed for denoising inner product matrices or covariance matrices, as in \citet{Ledoit:nonlinshrink:2012}. Finally, we can mention the work of \citet{hoff_2013}, who suggested a Bayesian treatment of Tucker decomposition methods to analyze array datasets. To better fit the data, he pointed to hierarchical priors that learn the values of the hyper-parameters from the data with an empirical Bayes approach, which pursues essentially the same goal as our method. The results are reproducible with the {\tt R} code provided by the first author on her webpage. \section*{Acknowledgment} The authors are grateful for the helpful comments of the reviewers and editors. J.J. is supported by an AgreenSkills fellowship of the European Union Marie-Curie FP7 COFUND People Programme. S.S. is supported by the Swiss National Science Foundation. This work started while both authors were visiting Stanford University; they would like to thank the Department of Statistics for hosting them and for its stimulating seminars. \end{document}
\begin{document} \pagestyle{myheadings} \def\R{\mathbb{R}} \title[Oscillatory Integrals and Fractal Dimension]{Oscillatory Integrals and Fractal Dimension} \author{J.-P.\ Rolin, D.\ Vlah, V.\ \v{Z}upanovi\'{c}} \begin{abstract} We study geometrical representation of oscillatory integrals with an analytic phase function and a smooth amplitude with compact support. Geometrical properties of the curves defined by the oscillatory integral depend on the type of a critical point of the phase. We give explicit formulas for the box dimension and the Minkowski content of these curves. Methods include Newton diagrams and the resolution of singularities. \end{abstract} \date{} \maketitle Keywords: oscillatory integral, box dimension, Minkowski content, critical points, Newton diagram. AMS Classification: 58K05, 42B20, (secondary 28A75, 34C15)\\ \newtheorem{theorem}{Theorem} \newtheorem*{theorem*}{Theorem} \newtheorem{cor}{Corollary} \newtheorem{prop}{Proposition} \newtheorem{lemma}{Lemma} \theoremstyle{remark} \newtheorem{remark}{Remark} \newtheorem{example}{Example} \font\csc=cmcsc10 \def\esssup{\mathop{\rm ess\,sup}} \def\essinf{\mathop{\rm ess\,inf}} \def\wo#1#2#3{W^{#1,#2}_0(#3)} \def\w#1#2#3{W^{#1,#2}(#3)} \def\wloc#1#2#3{W_{\scriptstyle loc}^{#1,#2}(#3)} \def\osc{\mathop{\rm osc}} \def\Var{\mathop{\rm Var}} \def\supp{\mathop{\rm supp}} \def\Cap{{\rm Cap}} \def\norma#1#2{\|#1\|_{#2}} \def\G{\Gamma} \let\text=\mbox \catcode`\@=11 \let\ced=\c \def\a{\alpha} \def\b{\beta} \def\c{\gamma} \def\d{\delta} \def\g{\lambda} \def\o{\omega} \def\q{\quad} \def\n{\nabla} \def\s{\sigma} \def\div{\mathop{\rm div}} \def\sing{{\rm
Sing}\,} \deltaef\sigmaingg{{\rm Sing}_\infty\,} \deltaef{\mathcal A}{{\lambdaammaal A}} \deltaef{\mathcal F}{{\lambdaammaal F}} \deltaef{\mathcal H}{{\lambdaammaal H}} \deltaef{\bf W}{{\betaf W}} \deltaef{\mathcal M}{{\lambdaammaal M}} \deltaef{\mathcal N}{{\lambdaammaal N}} \deltaef{\mathcal S}{{\lambdaammaal S}} \deltaef\infty{\infty} \deltaef\varepsilon{\mathop{\rm Var}epsilon} \deltaef\varphi{\mathop{\rm Var}phi} \deltaef{\penalty10000\hbox{\kern1mm\rm:\kern1mm}\penalty10000}{{\penalty10000\hbox{\kern1mm\rm:\kern1mm}\penalty10000}} \deltaef\omegav#1{\omegaverline{#1}} \deltaef\Delta{\Deltaelta} \deltaef\Omega{\Omegamega} \deltaef\partial{\partialrtial} \deltaef\sigmat{\sigmaubset} \deltaef\sigmatq{\sigmaubseteq} \deltaef\pd#1#2{\varphirac{\partial#1}{\partial#2}} \deltaef\sigmagn{{\rm sgn}\,} \deltaef\sigmap#1#2{\langle#1,#2\rangle} \nablaewcount\betar@j \betar@j=0 \deltaef\quad{\quaduad} \deltaef\lambdag #1#2{\hat G_{#1}#2(x)} \deltaef\int_0^{\ty}{\int_0^{\infty}} \deltaef\omegad#1#2{\varphirac{d#1}{d#2}} \deltaef\betag{\betaegin} \deltaef\varepsilonq{equation} \deltaef\betageq{\betag{\varepsilonq}} \deltaef\varepsilonndeq{\varepsilonnd{\varepsilonq}} \deltaef\betageqnn{\betag{eqnarray*}} \deltaef\varepsilonndeqnn{\varepsilonnd{eqnarray*}} \deltaef\betageqn{\betag{eqnarray}} \deltaef\varepsilonndeqn{\varepsilonnd{eqnarray}} \deltaef\betageqq#1#2{\betageqn\label{#1} #2\left\{\betaegin{array}{ll}} \deltaef\varepsilonndeqq{\varepsilonnd{array}\right.\varepsilonndeqn} \deltaef\alphabstract{\betagroup\leftskip=2\partialrindent\rightskip=2\partialrindent \nablaoindent{\betaf Abstract.\varepsilonnspace}} \deltaef\varepsilonndabstract{\partialr\varepsilongroup} \deltaef\udesno#1{\unskip\nablaobreak\hfil\penalty50\hskip1em\hbox{} \nablaobreak\hfil{#1\unskip\ignorespaces} \partialrfillskip=\z@ \varphiinalhyphendemerits=\z@\partialr \partialrfillskip=0pt plus 1fil} \lambdaammaatcode`\@=11 \deltaef\lambdaammaal{\mathcal} \deltaef\varepsilonN{\mathbb{N}} 
\section{Introduction, motivation and definitions} This paper is a starting point of a study intended to relate the standard classification of singularities of maps to the fractal dimension and the Minkowski content of curves defined by oscillatory integrals. The close link between the theory of singularities and the investigation of oscillatory integrals is well known, and is explained in detail in \cite{arnold-vol2}. Our purpose is to connect these notions to the analysis of fractal data of curves as described in \cite{tricot}. In particular we consider the \emph{box counting dimension} (also called the \emph{box dimension}) and the \emph{Minkowski content}. It is worth noticing that every rectifiable curve has box dimension equal to $1$; hence the box dimension is a tool for distinguishing nonrectifiable curves. Note that the Hausdorff dimension, another commonly used fractal dimension, takes the value $1$ on every nonrectifiable smooth curve, and therefore cannot distinguish between them.\\ One motivation originates in previous works in which the behavior of a (discrete or continuous) dynamical system in the neighborhood of a singular point is analyzed through the box dimension of an orbit. For example, in \cite{zuzu} the authors consider a family of planar polynomial vector fields, called the \emph{standard model of the Hopf-Takens bifurcation}. They prove that any trajectory spiraling in the neighborhood of a limit cycle of multiplicity $m$ has box dimension $2-1/m$.
They also link in \cite{zuzulien}, for a planar analytic system with a weak focus singular point, the box dimension of a spiraling trajectory to the Lyapunov coefficients of the singularity.\\ If we now consider a discrete dynamical system on the real line in the neighborhood of a fixed point, the box dimension of a discrete orbit is related to the multiplicity of the generating function. This approach, together with the standard methods combining the study of discrete and continuous systems via the Poincar{\'e} first return map, leads to further results (see \cite{MRZ} and \cite{zuzulien}). It is proved in \cite{majaformal} that the formal class of an analytic parabolic diffeomorphism is fully determined by the fractal data of a single orbit: namely, its box dimension, its Minkowski content and another number called its \emph{residual content}.\\ Based on these considerations, it seems relevant to study the singularities of a map $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ by considering the fractal data of an oscillatory integral with phase $f$, and its geometric representation as a plane curve parametrized by its real and imaginary parts. We indeed observe a relation between the type of critical point of the phase and the box dimension of the associated curve: a ``high degeneracy'' of the critical point causes a ``big accumulation'' of the curve, which is reflected in a larger box dimension. This is the exact analogue of the phenomenon observed above for orbits and trajectories of dynamical systems. A well-known example of this situation is the oscillatory \emph{Fresnel integral} and its geometric representation, the \emph{Cornu spiral} (also known as the \emph{clothoid} or \emph{Euler spiral}).
This curve plays an important role in the problem of constructing optimal trajectories of a planar motion with a bounded derivative of the curvature; see \cite{kostov}. Its fractal data have been computed in \cite{clothoid}. It is worth noticing that the phase function of a Fresnel integral has only nondegenerate critical points. Our results can be summarized as follows. We consider oscillatory integrals with an analytic phase function and an amplitude with compact support. We study the graph of the oscillatory integral $I(\tau)$, as $\tau\to\infty$, and also the curves defined in a standard way, analogous to the Cornu spiral, by the parametrization given by the real and imaginary parts of the integral $I(\tau)$. We show that the box dimension and the Minkowski content of these curves reveal the leading term of the asymptotic expansion. More precisely, the \emph{oscillation index} can be read from the box dimension, while the leading coefficient can be read from the Minkowski content in the case of Minkowski nondegeneracy. Minkowski degeneracy corresponds to a nontrivial multiplicity of the oscillation index. In particular, for phase functions of two variables, we show explicitly how to connect these notions to their Newton diagram.\\ We plan to pursue the present work in various directions. One goal is to study, from our point of view, the bifurcations in parametric families of maps and their caustics. Second, we would like to know how our results behave if we take, in the oscillatory integral, an amplitude function which is not of class $C^{\infty}$ (for example, oscillatory integrals on half-spaces).
Finally, we want to develop our subject in the direction of tame, but non-analytic, phase functions.\\ The main results of this paper are presented in three theorems, according to the dimension of the space: Theorems \ref{tm_case_n1}, \ref{tm_case_n2} and \ref{tm_case_ng2}, for $n=1$, $n=2$ and $n>2$, respectively. The main difference between the first two theorems is caused by logarithmic terms, which can appear in the expansion of the integral in Theorem \ref{tm_case_n2}, while in Theorem \ref{tm_case_n1} this is not possible. In Theorem \ref{tm_case_ng2} powers of logarithmic terms can also appear. \subsection{The box dimension}\label{subsec_box_dim} For a bounded set $A\subset\mathbb{R}^N$ we define the \emph{$\varepsilon$-neigh\-bour\-hood} of $A$ as $A_\varepsilon:=\{y\in\mathbb{R}^N : d(y,A)<\varepsilon\}$. By the \emph{lower $s$-dimensional Minkowski content} of $A$, for $s\ge0$, we mean $$ {\mathcal M}_*^s(A):=\liminf_{\varepsilon\to0}\frac{|A_\varepsilon|}{\varepsilon^{N-s}}, $$ and analogously for the \emph{upper $s$-dimensional Minkowski content} ${\mathcal M}^{*s}(A)$. If ${\mathcal M}^{*s}(A)={\mathcal M}_*^{s}(A)$, we call the common value the \emph{$s$-dimensional Minkowski content of $A$}, and denote it by ${\mathcal M}^s(A)$. The lower and upper box dimensions of $A$ are $$ \underline\dim_BA:=\inf\{s\ge0 : {\mathcal M}_*^s(A)=0\} $$ and, analogously, $\overline\dim_BA:=\inf\{s\ge0 : {\mathcal M}^{*s}(A)=0\}$. If these two values coincide, we call the common value simply the box dimension of $A$, and denote it by $\dim_BA$. This will be our situation. If $0<{\mathcal M}_*^d(A)\le{\mathcal M}^{*d}(A)<\infty$ for some $d$, then we say that $A$ is \emph{Minkowski nondegenerate}. In this case obviously $d=\dim_BA$.
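These definitions can be illustrated numerically. The following sketch (our own illustration, not part of the original computations) computes $|A_\varepsilon|$ exactly for a finite set $A\subset\mathbb{R}$ and estimates the box dimension from the scaling $|A_\varepsilon|\sim\varepsilon^{1-s}$; for a truncation of the set $\{1/k\}$ the estimate is close to the theoretical value $1/2$ recalled in the next subsection.

```python
import numpy as np

def eps_neighborhood_length(points, eps):
    """Exact length |A_eps| of the eps-neighborhood of a finite set A in R.

    For sorted points, the union of the intervals (p - eps, p + eps) has
    length 2*eps plus, for each consecutive gap, min(gap, 2*eps)."""
    pts = np.sort(np.asarray(points, dtype=float))
    gaps = np.diff(pts)
    return 2.0 * eps + np.minimum(gaps, 2.0 * eps).sum()

def box_dimension_estimate(points, eps1, eps2):
    """Estimate dim_B from the scaling |A_eps| ~ eps^(1 - s) (ambient N = 1)."""
    l1 = eps_neighborhood_length(points, eps1)
    l2 = eps_neighborhood_length(points, eps2)
    slope = (np.log(l2) - np.log(l1)) / (np.log(eps2) - np.log(eps1))
    return 1.0 - slope

# truncated a-string {k^(-a)} with a = 1; theory gives dim_B = 1/(1 + a) = 1/2
A = 1.0 / np.arange(1, 2_000_000, dtype=float)
d_est = box_dimension_estimate(A, 1e-7, 1e-9)
```

The truncation level only has to be fine compared with the smaller $\varepsilon$; with the values above the estimate agrees with $1/2$ to about two decimal places.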
In the case when the lower or upper $d$-dimensional Minkowski content of $A$ is equal to $0$ or $\infty$, where $d=\dim_BA$, we say that $A$ is \emph{degenerate}. If ${\mathcal M}^d(A)$ exists for some $d$ and ${\mathcal M}^d(A)\in(0,\infty)$, then we say that $A$ is \emph{Minkowski measurable}. For more details on these definitions see, e.g., Falconer \cite{falc} and \cite{zuzu}. \subsection{Examples of the box dimension} \begin{enumerate} \item A basic example of a fractal set with a nontrivial box dimension is the $a$-string $A=\{k^{-a}\colon k\in\mathbb{N}\}$, where $a>0$, introduced by Lapidus; see, e.g., \cite{lapidusfrank2}. Here $\dim_BA=1/(1+a)$. \item Further important examples are curves covered by Tricot's formulas; see \cite[p.\ 121]{tricot}. The box dimension of a spiral in the plane defined in polar coordinates by $r=m\,\varphi^{-\alpha}$, $\varphi\ge\varphi_1>0$, where $\varphi_1$, $m>0$ and $\alpha\in(0,1]$ are fixed, is equal to $2/(1+\alpha)$. \item Assuming that $0<\alpha\le\beta$, the box dimension of the graph of the function $f_{\alpha,\beta}(x)=x^{\alpha}\sin(x^{-\beta})$, for $x\in(0,1]$, called the $(\alpha,\beta)$-chirp, is equal to $2-(\alpha+1)/(\beta+1)$; see \cite[p.\ 121]{tricot}. \end{enumerate} \subsection{Oscillatory integrals} The main objects of interest in this paper are the oscillatory integrals \begin{equation}\label{integral} I(\tau)=\int_{\mathbb{R}^n}e^{i\tau f(x)}\phi(x)\,dx,\qquad \tau\in\mathbb{R}, \end{equation} where $f$ is called the phase function and $\phi$ the amplitude. Throughout this paper, in all theorems, we will use the following assumptions on the phase function $f$ and the amplitude $\phi$, which we call \emph{the standard assumptions}.
The amplitude function $\phi:\mathbb{R}^n\to\mathbb{R}$ \begin{itemize} \item is of class $C^\infty$, \item is a non-negative function with compact support, \item is such that the point $0\in\mathbb{R}^n$ is contained in the interior of its support. \end{itemize} The phase function $f:\mathbb{R}^n\to\mathbb{R}$ is such that \begin{itemize} \item the point $0$ is a critical point of $f$, \item $f$ is \emph{real analytic} in a neighborhood of its critical point~$0$, \item the point $0$ is \emph{the only} critical point of $f$ in the interior of the support of $\phi$. \end{itemize} The asymptotic expansion of $I(\tau)$, as $\tau\to\infty$, depends essentially on the critical points of $f$. A critical point of $f$ is a point at which all partial derivatives vanish; a critical point is nondegenerate if the Hessian there is regular. In that case the integral (\ref{integral}) is called a Fresnel integral in Arnold et al.\ \cite{arnold-vol2}. We use theorems from \cite{arnold-vol2} to obtain the asymptotic expansion of $I(\tau)$ as $\tau\to\infty$ in the cases where $f$ has no critical points, a nondegenerate critical point, or a degenerate critical point. The phase function $f$ determines the exponents in the asymptotic expansion, while the amplitude function determines the coefficients. We will discuss the curves defined by the oscillatory functions \begin{eqnarray}\label{curve} X(\tau)&=&\mathrm{Re}\ I(\tau),\nonumber\\ Y(\tau)&=&\mathrm{Im}\ I(\tau), \end{eqnarray} for $\tau$ near $\infty$, and also the reflected functions $x(t):=X(1/t)$, $y(t):=Y(1/t)$, as $t\to 0$.
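Under the standard assumptions, the integral (\ref{integral}) is easy to evaluate numerically. The following sketch (ours; the bump amplitude is our choice, not one from the paper) evaluates $I(\tau)$ for the phase $f(x)=x^2+1$ and compares $|I(\tau)|$ with the classical stationary-phase prediction $|C_1|\tau^{-1/2}$, where $C_1=\phi(0)\sqrt{2\pi}\,|f''(0)|^{-1/2}e^{i\pi/4}$ for a nondegenerate critical point:

```python
import numpy as np

def bump(x):
    """C^infinity amplitude with support [-1, 1] (standard bump function)."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def oscillatory_integral(tau, n_grid=400_001):
    """Trapezoidal evaluation of I(tau) = int e^{i tau (x^2 + 1)} phi(x) dx."""
    x = np.linspace(-1.0, 1.0, n_grid)
    vals = np.exp(1j * tau * (x ** 2 + 1.0)) * bump(x)
    dx = x[1] - x[0]
    return (vals[:-1] + vals[1:]).sum() * dx / 2.0

tau = 200.0
I_num = oscillatory_integral(tau)

# stationary phase: |I(tau)| ~ |C_1| * tau^{-1/2}, with phi(0) = e^{-1},
# |C_1| = phi(0) * sqrt(2*pi) * |f''(0)|^{-1/2} and f''(0) = 2
C1_abs = np.exp(-1.0) * np.sqrt(2.0 * np.pi) / np.sqrt(2.0)
predicted = C1_abs * tau ** (-0.5)
```

At $\tau=200$ the relative discrepancy is well below one percent, since the correction terms are of relative order $1/\tau$ and the amplitude is flat to infinite order at the endpoints.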
\subsection{Oscillation and singular indices} Applying \cite[Theorem 6.3]{arnold-vol2} to (\ref{integral}) we get the asymptotic expansion \begin{equation}\label{expansion} I(\tau)\sim e^{i\tau f(0)}\sum_{\alpha}\sum_{k=0}^{n-1}a_{k,\alpha}(\phi)\tau^{\alpha}\left(\log\tau\right)^k,\quad\textrm{as}\ \tau\to\infty . \end{equation} According to the same theorem, the parameter $\alpha$ runs through a finite collection of arithmetic progressions, which depend only on the phase $f$ and consist of negative rational numbers. The coefficients $a_{k,\alpha}$ depend only on the amplitude $\phi$. The \emph{index set} of an analytic phase $f$ at a critical point is defined as the set of all numbers $\alpha$ with the following property: for any neighborhood of the critical point there is an amplitude with support in this neighborhood for which, in the asymptotic series (\ref{expansion}), there is a number $k$ such that the coefficient $a_{k,\alpha}$ is not equal to zero. The \emph{oscillation index} $\beta$ of an analytic phase $f$ at a critical point is the maximal number in the index set. The \emph{multiplicity of the oscillation index} $K$ of an analytic phase $f$ at a critical point is the maximal number $k$ with the following property: for any neighborhood of the critical point there is an amplitude with support in this neighborhood for which, in the asymptotic series (\ref{expansion}), the coefficient $a_{k,\beta}$ is not equal to zero. The \emph{singular index} of an analytic phase $f$ in $n$ variables at a critical point is equal to $\beta+n/2$. The \emph{multiplicity of the singular index} is the multiplicity of $\beta$. \subsection{Oscillatory and curve dimensions} We say that $x(t)=X(1/t)$ is oscillatory near the origin if $X(\tau)$ is oscillatory near $\tau=\infty$.
We measure the rate of oscillation of $X(\tau)$ near $\tau=\infty$ by the rate of oscillation of $x(t)$ near $t=0$. More precisely, the {\it oscillatory dimension} $\dim_{osc}(X)$ (near $\tau=\infty$) is defined as the box dimension of the graph of $x(t)$ near $t=0$. We also investigate the associated Minkowski contents. Analogously for $y(t)$ and $Y(\tau)$. Given the oscillatory integral $I(\tau)$ from (\ref{integral}), we define the \emph{curve dimension} of $I(\tau)$ as the box dimension of the curve defined in the complex plane by $I(\tau)$, near $\tau=\infty$. As for the oscillatory dimension, we also investigate the associated Minkowski contents. \begin{figure}[htp] \begin{center} \begin{tabular}{cc} $f_1(x)=x^2+1$ & $f_2(x)=x^3+1$\\ \includegraphics[width=0.45\textwidth]{curve-s-2.png} & \includegraphics[width=0.45\textwidth]{curve-s-3.png}\\ $d_1=\frac{4}{3}$ & $d_2=\frac{3}{2}$ \end{tabular} \end{center} \caption{Curves defined by the oscillatory integrals $I_i(\tau)$ from (\ref{integral}), for the phase functions $f_i$ and their respective curve dimensions $d_i$; see Theorem \ref{tm_case_n1} below.} \end{figure} It is well known that degenerate critical points of the phase function contribute to the leading term of the asymptotic expansion (\ref{expansion}) of the oscillatory integral (\ref{integral}). On the other hand, the curve dimension of (\ref{curve}) is determined by the asymptotic expansion, so we will connect the type of the critical point with the curve dimension. More precisely, in Theorems \ref{tm_case_n2} and \ref{tm_case_ng2}, the oscillatory and curve dimensions are related to the oscillation index. It is also well known that the asymptotic expansion is related to the Newton diagram of the phase function.
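The dimensions quoted in the figure can be reproduced from the relations proved below: by Theorem \ref{tm_case_n1}, a one-variable phase with $f^{(s)}(0)\neq 0$ has curve dimension $2s/(s+1)$ and oscillatory dimension $(3s-1)/(2s)$; equivalently, in terms of the oscillation index $\beta=-1/s$, these read $2/(1-\beta)$ and $(\beta+3)/2$. A small exact-arithmetic sketch (ours) checking the two values in the figure:

```python
from fractions import Fraction

def curve_dimension(beta):
    """d = 2 / (1 - beta), with beta the oscillation index."""
    return 2 / (1 - beta)

def oscillatory_dimension(beta):
    """d' = (beta + 3) / 2."""
    return (beta + 3) / 2

# one-variable phases x^s + 1: oscillation index beta = -1/s
for s, expected_d in [(2, Fraction(4, 3)), (3, Fraction(3, 2))]:
    beta = Fraction(-1, s)
    assert curve_dimension(beta) == expected_d            # d_1 = 4/3, d_2 = 3/2
    assert oscillatory_dimension(beta) == Fraction(3 * s - 1, 2 * s)
```

Using `Fraction` keeps the check exact, so the identities $2/(1-\beta)=2s/(s+1)$ and $(\beta+3)/2=(3s-1)/(2s)$ are verified without floating-point tolerance.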
\subsection{The Newton diagram} Following \cite{arnold-vol2}, we use the notion of the Newton polyhedron of the phase function to formulate our results in dimension $n\ge 2$. The Newton polyhedron is defined for the Taylor series of the phase at the critical point. Consider the positive orthant of the space $\mathbb{R}^n$ and an arbitrary subset of this orthant consisting of points with integer coordinates. At each such point we attach a parallel positive orthant. The {\em Newton polyhedron} of the subset is the convex hull in $\mathbb{R}^n$ of the union of all these parallel orthants. The {\em Newton diagram} $\Delta$ of the subset is the union of the compact faces of its Newton polyhedron. We consider the power series of the phase $f$, $$f(x)=\sum a_kx^k,$$ with real coefficients, whose monomials $$ x^k=x_1^{k_1}\dots x_n^{k_n} $$ carry the multi-index $k=(k_1,\dots,k_n)$. The Newton polyhedron and diagram of this power series are constructed from the multi-indices in the reduced support of the series; the reduced support is obtained by removing the origin from the support of the series. The support is the subset of the positive orthant, consisting of points with non-negative coordinates, given by the multi-indices of all monomials of the power series with non-zero coefficients. The polynomial $f_\Delta$, equal to the sum of the monomials belonging to the Newton diagram, is called the {\em principal part} of the series. To each face $\gamma$ of the Newton diagram is associated a quasi-homogeneous polynomial, whose type of quasi-homogeneity is determined by the slope of the face. Furthermore, we introduce the concept of nondegeneracy of the principal part.
Notice that in this article we use $3$ distinct types of nondegeneracy: \begin{itemize} \item nondegeneracy of a critical point with respect to the Hessian, \item nondegeneracy of the Minkowski content, \item nondegeneracy of the principal part of the series. \end{itemize} The principal part $f_\Delta$ of the power series $f$ with real coefficients is $\mathbb{R}$-nondegenerate if for every compact face $\gamma$ of the Newton polyhedron of the series the polynomials $$ \partial f_{\gamma}/\partial x_1,\dots,\partial f_{\gamma}/\partial x_n $$ have no common zeroes in $(\mathbb{R}\setminus 0)^n$. Roughly speaking, $\mathbb{R}$-nondegeneracy means that these derivatives have the same common zeroes as monomials. This property is essential for the resolution of the singularity; see \cite[p.\ 195]{arnold-vol2}. Furthermore, the set of all series with a degenerate principal part is ``small'': more precisely, the set of $\mathbb{R}$-nondegenerate series is dense in the space of all series with a fixed Newton polyhedron; see \cite[Lemma 6.1]{arnold-vol2}. A generalization of the notion of the principal part for $\mathbb{R}$-degenerate vector fields can be found in \cite{zup2000}. The asymptotic expansion of an oscillatory integral is related to properties of the critical points of its phase function which can be read off the Newton diagram. Consider the bisector of the positive orthant in $\mathbb{R}^n$, that is, the line consisting of points with equal coordinates. The bisector intersects the boundary of the Newton polyhedron in exactly one point $(c,\dots,c)$, called the center of the boundary of the Newton polyhedron. The number $c$ is called the {\em distance to the Newton polyhedron}. The {\em remoteness of the Newton polyhedron} is $r=-1/c$.
If $r>-1$, the Newton polyhedron is {\em remote}, which means that it does not contain the point $(1,\dots,1)$. Let the phase be an analytic function in a neighborhood of its critical point. The {\em remoteness of the critical point} of the phase is the supremum of the remotenesses of the Newton polyhedra of the Taylor series of the phase over all systems of local analytic coordinates with origin at the critical point. The coordinates in which the remoteness is the greatest are called the coordinates {\em adapted} to the critical point. We consider the open face which contains the center of the boundary of the Newton polyhedron. The codimension of this face, less one, is called the {\em multiplicity} of the remoteness. If the face is a vertex then the multiplicity is $n-1$, and if the face is an edge then the multiplicity is $n-2$. \section{Main results} We use Theorems 6.1, 6.2, 6.3 and 6.4 of \cite{arnold-vol2} in order to measure the oscillation of the oscillatory integral by means of the box dimension. These theorems give the asymptotic expansion of (\ref{integral}) when the phase $f$ has no critical points, a nondegenerate critical point, or a degenerate critical point; Theorem 6.4 involves Newton diagrams. In our theorems we use these results about asymptotic expansions. In Theorems \ref{tm_case_n1}, \ref{tm_case_n2} and \ref{tm_case_ng2} we present our main results about the fractal analysis of singularities in dimensions $n=1$, $n=2$ and $n>2$, respectively. Proofs of these theorems are presented in Section \ref{section_proofs}. \begin{theorem}[The phase function of a single variable]\label{tm_case_n1} Let $n=1$, let the standard assumptions on $f$ and $\phi$ hold, and let $f(0)\neq 0$. Assume $f'(0)=f''(0)=\cdots=f^{(s-1)}(0)=0$ and $f^{(s)}(0)\neq 0$ for some integer $s\geq 2$. Let $\Gamma$ be the curve defined by (\ref{integral}) and (\ref{curve}), near the origin.
Then: $(i)$ The oscillatory dimension of both $X$ and $Y$ from (\ref{curve}) is equal to $d'=\frac{3s-1}{2s}$ and the associated graphs are Minkowski nondegenerate. $(ii)$ The curve dimension of $I$ is $d=\frac{2s}{s+1}$, the curve $\Gamma$ is Minkowski measurable, and the $d$-dimensional Minkowski content of $\Gamma$ is \begin{equation}\label{Mcontent} {\mathcal M}^d(\Gamma)=|C_1|^{\frac{2s}{s+1}}\cdot\pi\cdot\left(\frac{\pi}{s\cdot f(0)}\right)^{-\frac{2}{s+1}}\cdot\frac{s+1}{s-1}, \end{equation} where the constant $C_1$ depends on the phase function $f$ and on the value $\phi(0)$ of the amplitude function at the origin. \end{theorem} \begin{remark} The constant $C_1$ can be explicitly calculated using a standard formula for phase functions with a nondegenerate critical point; see Remark \ref{remark_non_deg_coeff} in Section \ref{section_examples} with examples. For more general phase functions $f$ see \cite{stein}. \end{remark} \begin{theorem}[The phase function of two variables]\label{tm_case_n2} Let $n=2$, let the standard assumptions on $f$ and $\phi$ hold, and let $f(0)\neq 0$. Let $\beta$ be the remoteness of the critical point of the phase function $f$. Let $\Gamma$ be the curve defined by (\ref{integral}) and (\ref{curve}), near the origin, with asymptotic expansion (\ref{expansion}). Then: $(i)$ If the multiplicity of the remoteness $\beta$ is equal to $0$, or the remoteness $\beta$ is equal to $-1$, then the oscillatory dimension of both $X$ and $Y$ from (\ref{curve}) is equal to $d'=(\beta+3)/2$ and the associated graphs are Minkowski nondegenerate.
The curve dimension of $I$ is $d=2/(1-\beta)$ and the associated Minkowski content is \begin{equation}\label{Mcontent2} {\mathcal M}^d(\Gamma)=\left[\frac{|a_{0,\beta}(\phi)|}{f(0)^{\beta}}\right]^{\frac{2}{1-\beta}}\cdot[-\beta]^{\frac{2\beta}{1-\beta}} \cdot\pi^{\frac{1+\beta}{1-\beta}}\cdot\frac{1-\beta}{1+\beta}. \end{equation} $(ii)$ If the multiplicity of the remoteness $\beta$ is equal to $1$ and the remoteness $\beta$ is bigger than $-1$, then the oscillatory and curve dimensions are the same as in the previous case, with the associated Minkowski contents degenerate. \end{theorem} \begin{theorem}[The phase function of more than two variables]\label{tm_case_ng2} Let $n>2$, let the standard assumptions on $f$ and $\phi$ hold, and let $f(0)\neq 0$. Assume that the principal part of the Taylor series of $f$ at its critical point is $\mathbb{R}$-nondegenerate, and that the Newton polyhedron of this series is remote, with remoteness equal to $\beta$. Let $\Gamma$ be the curve defined by (\ref{integral}) and (\ref{curve}), near the origin, with asymptotic expansion (\ref{expansion}). Then: $(i)$ If $a_{0,\beta}(\phi)\ne 0$ and $a_{i,\beta}=0$ for $i=1,\dots,n-1$, the oscillatory dimension of both $X$ and $Y$ from (\ref{curve}) is equal to $d'=(\beta+3)/2$ and the associated graphs are Minkowski nondegenerate. The curve dimension of $I$ is $d=2/(1-\beta)$ and the associated Minkowski content is given by (\ref{Mcontent2}). $(ii)$ If $a_{L,\beta}\neq 0$ for some $L>0$, the oscillatory and curve dimensions are the same as in the previous case and the associated Minkowski contents are degenerate.
\end{theorem} \begin{remark} In Theorems \ref{tm_case_n1}, \ref{tm_case_n2} and \ref{tm_case_ng2}, if we take $f(0)=0$, then the curve $\Gamma$ and the associated reflected graphs are rectifiable, and all the dimensions are equal to $1$. \end{remark} If there are no singularities in the observed domain, given by the support of the amplitude $\phi$, then Proposition \ref{pr_no_sing} gives only a trivial fractal dimension. \begin{prop} {\rm (The regular phase function)}\label{pr_no_sing} Assume that the standard assumptions on $\phi$ hold, and that $f$ has no critical point in the interior of the support of $\phi$. Let $\Gamma$ be the curve defined by (\ref{integral}) and (\ref{curve}), near the origin. Then $\Gamma$ is a rectifiable curve and the curve dimension of $I$ is equal to $1$. Furthermore, the graphs of the functions $x(t)=X(1/t)$ and $y(t)=Y(1/t)$, where $X$ and $Y$ are from (\ref{curve}), are rectifiable. Hence the oscillatory dimension of $X$ and $Y$ equals $1$. \end{prop} \begin{proof} From \cite[Theorem 6.1]{arnold-vol2} it follows that $I(\tau)$ tends to zero more rapidly than any power of the parameter, as $\tau\to+\infty$. The claim is based on the Riemann-Lebesgue lemma; see \cite[p.\ 16]{wong}. In the $1$-dimensional situation we have \[ I'(\tau)=i\int_{\mathbb{R}}e^{i\tau f(x)}f(x)\phi(x)\,dx. \] The integral $I'(\tau)$ admits the same type of asymptotic expansion as $I(\tau)$, and all derivatives go to zero more rapidly than any power, that is, $\tau^n I^{(k)}(\tau)\to 0$ as $\tau\to+\infty$, for all $k\ge 0$, $n\in\mathbb{N}$. We deduce that $\tau^{n}\sqrt{X'\left(\tau\right)^{2}+Y'\left(\tau\right)^{2}}\rightarrow0$ as $\tau\rightarrow+\infty$, for all $n\in\mathbb{N}$, so that $\Gamma$ is rectifiable. Therefore (see \cite{tricot}) its box dimension equals $1$.
For the same reason $x'\left(t\right)=-t^{-2}X'\left(1/t\right)\rightarrow0$ as $t\rightarrow0$, hence $\int_{0}^{x_{0}}\sqrt{1+x'\left(t\right)^{2}}\,dt<\infty$. The same holds for $y$. This proves that the graphs of the functions $x$ and $y$ are rectifiable, so the oscillatory dimension of $X$ and $Y$ equals $1$. \end{proof} Proposition \ref{pr_nondeg_high} shows that in dimensions higher than $2$, nondegenerate singularities cannot be detected by the fractal dimension. \begin{prop} {\rm (The nondegenerate critical point in a higher dimension)}\label{pr_nondeg_high} Assume that the standard assumptions on $\phi$ and $f$ hold, and that $0\in\mathbb{R}^n$, where $n>2$, is a nondegenerate critical point of $f$ (the Hessian matrix of $f$ at $0$ is nonsingular). Let $\Gamma$ be the curve defined by (\ref{integral}) and (\ref{curve}), near the origin. Then $\Gamma$ is a rectifiable curve and the curve dimension of $I$ is equal to $1$. Furthermore, the graphs of the functions $x(t)=X(1/t)$ and $y(t)=Y(1/t)$, where $X$ and $Y$ are from (\ref{curve}), are rectifiable. Hence the oscillatory dimension of $X$ and $Y$ equals $1$. \end{prop} \begin{proof} The nondegeneracy of the critical point implies that $I\left(\tau\right)\sim C\cdot e^{i\tau f\left(0\right)}\cdot\tau^{-\frac{n}{2}}$ as $\tau\rightarrow+\infty$, where $C\in\mathbb{C}$ (see \cite[Theorem 6.2]{arnold-vol2}). As in the proof of Proposition \ref{pr_no_sing}, $I'\left(\tau\right)$ admits the same type of asymptotic expansion. Hence $\sqrt{X'\left(\tau\right)^{2}+Y'\left(\tau\right)^{2}}\leq C_1\tau^{-\frac{n}{2}}$ for some $C_1>0$, so $\Gamma$ is rectifiable. As above, $x'\left(t\right)^{2}=t^{-4}X'\left(\frac{1}{t}\right)^{2}\leq C_2 t^{n-4}$ for some $C_2>0$, so $\sqrt{1+x'\left(t\right)^{2}}\leq 1 + C_{3}t^{\frac{n}{2}-2}$ for some $C_{3}>0$. As $\frac{n}{2}>1$, we conclude that the graph of $x$ is rectifiable.
The same holds for $y$. For the dimensions, we conclude as in the proof of Proposition \ref{pr_no_sing}. \end{proof} \section{Examples}\label{section_examples} \begin{remark}\label{remark_non_deg_coeff} In \cite[Theorem 6.2]{arnold-vol2} there is an explicit formula for the leading coefficient in the asymptotic expansion of an oscillatory integral whose phase has a nondegenerate critical point, in a space of dimension $n$. If the phase $f$ and the amplitude $\phi$ satisfy the standard assumptions, then the leading coefficient is the coefficient of the power ${\tau}^{-n/2}$ and is equal to $$ \phi(0)(2\pi)^{n/2}\exp \left((i\pi/4)\cdot\mathop{\rm sgn}(f_{xx}^{\prime\prime} (0))\right)|\det f_{xx}^{\prime\prime} (0)|^{-1/2}. $$ \end{remark} \begin{example} A computation of the Minkowski content of the curve in the nondegenerate case in a $1$-dimensional space. Using Theorem \ref{tm_case_n1}, for $s=2$ we obtain the oscillatory and curve dimensions for the integral and the curve defined by (\ref{integral}) and (\ref{curve}), respectively. The oscillatory dimension is equal to $5/4$ and the curve dimension is equal to $4/3$. Using Remark \ref{remark_non_deg_coeff}, for $n=1$ we compute $$ C_1=\phi(0)\sqrt{2\pi}\,{| f^{\prime\prime} (0)|}^{-1/2}\exp \left((i\pi/4)\cdot\mathop{\rm sgn}(f^{\prime\prime} (0)) \right) , $$ and using formula (\ref{Mcontent}) we obtain the Minkowski content of the curve $\Gamma$: $$ {\mathcal M}^{4/3}(\Gamma)=3|C_1|^{\frac{4}{3}}\pi\left(\frac{\pi}{2 f(0)}\right)^{-\frac{2}{3}}. $$ For example, if $f(x)=x^2+1$, then $$ {\mathcal M}^{4/3}(\Gamma)=3\cdot 2^{2/3}\pi {\phi(0)}^{4/3}. $$ \end{example} \begin{example} A caustic consisting of the elementary critical points $A_k$ and $D_k$.
The books \cite{arnold-vol1} and \cite{arnold-vol2} introduced the classification of singularities using normal forms of singularities and parametric families. By the assumptions of our theorems we work here with phases whose critical value is nonzero, so we shift the graph of our map: the situation when the critical value is zero is not oscillatory (see the expansion (\ref{expansion}) for $f(0)=0$), so we take $f(0)=1$. Let us suppose that for a given value of the parameters the phase function has a unique critical point. In this case the caustic in a neighborhood of the given value of the parameter is said to be elementary. Here we mention examples of elementary caustics obtained by varying two or three parameters; see \cite[p.\ 174, 185]{arnold-vol2}, \cite[p.\ 246]{arnold-vol1}. The caustics consist of degenerate critical points of type $A_k$ for $k\ge 1$, and $D_k$ for $k\ge 4$. The contributions of the critical points of the phase to the asymptotic expansion of the oscillatory integral depend on the type of these critical points. Each degenerate critical point contributes a term of order $\tau^{\gamma-n/2}$, where $\gamma=(k-1)/(2k+2)$ for $A_k$ and $\gamma=(k-2)/(2k-2)$ for $D_k$; these are the singular indices. According to Theorem \ref{tm_case_n2}, the box dimension of the associated curve, the curve dimension, is equal to $d=2/(1-\beta)$, where $\beta=\gamma-n/2$. If $k\to\infty$ then $\gamma\to 1/2$, so $\beta\to(1-n)/2$, hence the curve dimension $d\to 4/(1+n)$. We see that the curve dimension increases and tends to $2$ for $n=1$, and to $4/3$ for $n=2$, as the critical points become more complicated and their singular index tends to $1/2$. The oscillatory dimension is equal to $d'=\frac{3+\beta}{2}$. \end{example} \begin{example}\label{example_greenblatt} The normal forms of the type $x^p+y^q$.
Consider the phase $f(x,y)=x^p+y^q+1$, for integers $p,q\geq 2$ with $(p,q)\neq (2,2)$, so that $f(0,0)\neq 0$. In this case the remoteness is $\beta=-\frac1p-\frac1q$, hence it follows from Theorem \ref{tm_case_n2} that the curve dimension is equal to $$ d =\frac{2}{1+\frac1p +\frac1q} , $$ while the oscillatory dimension is equal to $$ d^{\prime}=\frac32-\frac{1}{2p}-\frac{1}{2q} . $$ The computation of the Minkowski content (\ref{Mcontent2}) is more involved, as it depends on the computation of the first coefficient in the asymptotic expansion (\ref{expansion}) of the integral. Notice that in this example we replace our standard notation $\Gamma$ for the curve associated to the oscillatory integral by $\mathcal{C}$, in order to avoid confusion with the gamma function. According to \cite{greenblatt} we can compute the first coefficient in the expansion. The phase is written in adapted coordinates, which means that the remoteness is the largest possible among the remotenesses of the Newton diagrams of the map in all coordinate systems. In this case the Newton diagram has only one compact side $S_0$. As the bisector intersects the interior of the compact edge, the leading term of the asymptotic expansion is $d_0(\phi)\tau^{\beta}$, where $\beta$ is the remoteness and $\phi$ the amplitude. First, define $S_0(x,y)$ to be the edge polynomial of the phase, that is, $S_0(x,y)=x^p+y^q$. Now, define the function $S_0^+(x,y)^{\beta}$ to be equal to $S_0(x,y)^{\beta}$ when $S_0(x,y)>0$ and zero otherwise. Analogously, define the function $S_0^-(x,y)^{\beta}$ to be equal to $(-S_0(x,y))^{\beta}$ when $S_0(x,y)<0$ and zero otherwise. The coordinates are \emph{superadapted}; see \cite{greenblatt}, which means that $S_0(1,y)$ and $S_0(-1,y)$ have no real roots of order greater than $-1/\beta$, except $y=0$.
Hence, according to \cite[Theorem 1.2]{greenblatt}, if we put $$ c_0(\phi):=\frac{\phi (0,0)}{m+1}\int_{-\infty}^{+\infty}\left(S_0^+(1,y)^{\beta}+S_0^+(-1,y)^{\beta}\right)dy , $$ $$ C_0(\phi):=\frac{\phi (0,0)}{m+1}\int_{-\infty}^{+\infty}\left(S_0^-(1,y)^{\beta}+S_0^-(-1,y)^{\beta}\right)dy , $$ where $-1/m$ is the slope of the edge $S_0$, then the leading term coefficient of the asymptotic expansion of $I(\tau)$ is equal to \begin{equation}\label{greenblatt_coefficient} a_{0,\beta}(\phi) = -\beta\,\Gamma\left(-\beta\right)\left(e^{-i\frac{\pi}{2}\beta}c_0(\phi)+e^{i\frac{\pi}{2}\beta}C_0(\phi)\right) . \end{equation} There are four distinct cases in the computation, according to the parities of the integers $p$ and $q$. We compute the leading term coefficient and the Minkowski content (\ref{Mcontent2}) for the case of $p$ and $q$ both even; the other three cases are computed in a similar fashion. We have $\beta=-1/p-1/q$ and $m=p/q$. After integration we obtain \[ c_0(\phi)=\frac{4\,\phi(0,0)}{q(m+1)}\,B\left(\frac{1}{p}, \frac{1}{q}\right),\qquad C_0(\phi)=0 , \] \[ a_{0,\beta}(\phi) = 4\,\phi(0,0)\,e^{i\frac{\pi}{2}\left(\frac{1}{p}+\frac{1}{q}\right)}\Gamma\left(\frac{1}{p}+1\right)\Gamma\left(\frac{1}{q}+1\right) , \] expressed using the beta function $B$ and the gamma function $\Gamma$.
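The closed form for $a_{0,\beta}(\phi)$ can be checked numerically against (\ref{greenblatt_coefficient}). The following sketch is our own illustration, with $\phi(0,0)=1$ and the sample values $p=2$, $q=4$; it evaluates both expressions and compares them.

```python
import cmath
import math

# Illustration (our own): for even p, q and phi(0,0) = 1, compare
#   a = -beta * Gamma(-beta) * exp(-i*pi*beta/2) * c_0,  C_0 = 0,
# with c_0 = 4/(q*(m+1)) * B(1/p, 1/q), m = p/q, beta = -1/p - 1/q,
# against the closed form 4*exp(i*pi/2*(1/p+1/q))*Gamma(1/p+1)*Gamma(1/q+1).
p, q = 2, 4
beta = -1.0 / p - 1.0 / q
m = p / q
B = math.gamma(1 / p) * math.gamma(1 / q) / math.gamma(1 / p + 1 / q)
c0 = 4.0 / (q * (m + 1.0)) * B
a_formula = -beta * math.gamma(-beta) * cmath.exp(-1j * math.pi * beta / 2) * c0
a_closed = (4.0 * cmath.exp(1j * math.pi / 2 * (1 / p + 1 / q))
            * math.gamma(1 / p + 1) * math.gamma(1 / q + 1))
print(a_formula, a_closed)
```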
As we took $f(0,0)=1$, and as we may without loss of generality fix $\phi(0,0)=1$, we obtain that the Minkowski content of the curve $\mathcal{C}$ is equal to \[ {\mathcal M}^{\frac{2}{1-\beta}}(\mathcal{C}) = \left[4\,\Gamma\left(\frac{1}{p}+1\right)\Gamma\left(\frac{1}{q}+1\right)\right]^{\frac{2}{1-\beta}}\cdot[-\beta]^{\frac{2\beta}{1-\beta}} \cdot\pi^{\frac{1+\beta}{1-\beta}}\cdot\frac{1-\beta}{1+\beta} , \] by substituting the coefficient $a_{0,\beta}(\phi)$ into formula (\ref{Mcontent2}); as $\beta$ depends only on $p$ and $q$, the Minkowski content essentially depends only on $p$ and $q$. Finally, notice that the normal forms from this example include the singularities of types $E_6$ and $E_8$ from the standard classification (see \cite{arnold-vol1}), for $(p,q)=(3,4)$ and $(p,q)=(3,5)$, respectively. Also, the normal form for the ordinary cusp is obtained by taking $(p,q)=(2,3)$. \end{example} \section{Proofs of main results} \label{section_proofs} \begin{proof}[Proof of Theorem \ref{tm_case_n1}] Without loss of generality we assume that $f(0)>0$. In the case $f(0)<0$, we consider the integral $J$ with the phase $\widetilde{f}(x)=-f(x)$. Then $J(\tau)=\overline{I(\tau)}$, and we see from the definitions of the fractal properties of an oscillatory integral (the oscillatory and curve dimensions and the Minkowski contents) that they are invariant under complex conjugation of $I$. We use the asymptotic expansion of the integral $I$ from (\ref{integral}), $$ I(\tau) \sim e^{i\tau f(0)}\sum\limits_{j=1}^{\infty} C_j\cdot\tau^{-j/s},\quad\mathrm{as}\ \tau\to\infty, $$ where $C_j\in\mathbb{C}$, from \cite[Proposition 3 on page 334]{stein}, and it holds that $C_1\neq 0$. From the same reference it follows that each constant $C_j$ depends on only finitely many derivatives of $f$ and $\phi$ at $0$.
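The displayed content is consistent with the spiral content formula of Theorem \ref{tm_ffam}, applied with $\alpha=-\beta$ and $m=|a_{0,\beta}|$. The following sketch is our own consistency check for the sample values $(p,q)=(2,4)$, assuming exactly that correspondence.

```python
import math

# Consistency sketch (our own; assumes the spiral content formula
#   M^d = m^d * pi * (pi*alpha)^(-2*alpha/(1+alpha)) * (1+alpha)/(1-alpha)
# with alpha = -beta and m = |a_{0,beta}|, phi(0,0) = 1).
p, q = 2, 4
beta = -1 / p - 1 / q
alpha = -beta
m = 4 * math.gamma(1 / p + 1) * math.gamma(1 / q + 1)   # |a_{0,beta}|
d = 2 / (1 + alpha)

via_tm_ffam = (m ** d * math.pi
               * (math.pi * alpha) ** (-2 * alpha / (1 + alpha))
               * (1 + alpha) / (1 - alpha))
closed_form = (m ** (2 / (1 - beta)) * (-beta) ** (2 * beta / (1 - beta))
               * math.pi ** ((1 + beta) / (1 - beta))
               * (1 - beta) / (1 + beta))
print(via_tm_ffam, closed_form)
```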
We write \begin{equation}\label{integral-eP} I(\tau) = e^{i\tau f(0)}P(\tau), \end{equation} where the function $P(\tau)\sim\sum\limits_{j=1}^{\infty} C_j\cdot\tau^{-j/s}$, as $\tau\to\infty$. First we show that the function $I$ is of class $C^{\infty}(\mathbb{R})$, using differentiation under the integral sign. By differentiating (\ref{integral}), we get $$ I'(\tau)=i\int_{\mathbb{R}^n}e^{i\tau f(x)}\phi_1(x) dx, $$ where $\phi_1(x)=f(x)\phi(x)$. Inductively, we see that $$ I^{(m)}(\tau)=i^m\int_{\mathbb{R}^n}e^{i\tau f(x)}\phi_m(x) dx,\quad\mathrm{for\ all}\ m\in\mathbb{N}, $$ where $\phi_m(x)=[f(x)]^m\phi(x)$. Notice that for every $m\in\mathbb{N}$, the function $I^{(m)}$ is equal to the constant $i^m$ times an oscillatory integral of type (\ref{integral}), with the phase $f$ and the amplitude $\phi_m$. Further, using the asymptotic expansion of this integral, we get \begin{equation}\label{D-integral-eP} I^{(m)}(\tau)=i^m e^{i\tau f(0)}P_m(\tau),\quad\mathrm{for\ all}\ m\in\mathbb{N}, \end{equation} where the function $P_m$ possesses an asymptotic expansion in the same asymptotic sequence as $P$. Now we want to prove that the function $P$ is of class $C^{\infty}(\mathbb{R})$, and that each of its derivatives possesses an asymptotic expansion in the same asymptotic sequence as $P$, that is, $P^{(m)}(\tau)\sim\sum\limits_{j=1}^{\infty} C_j^{(m)}\cdot\tau^{-j/s}$, as $\tau\to\infty$, where $C_j^{(m)}\in\mathbb{C}$. Notice that it follows immediately from (\ref{integral-eP}) and the fact that $I\in C^{\infty}$ that $P\in C^{\infty}$. By differentiating (\ref{integral-eP}), we get $$ I'(\tau)=i f(0) e^{i\tau f(0)}P(\tau)+e^{i\tau f(0)}P'(\tau).
$$ Using (\ref{D-integral-eP}) and dividing each term by $e^{i\tau f(0)}$, we get the expression $$ P'(\tau)=i\left(P_1(\tau)-f(0)P(\tau)\right). $$ By \cite[p.\ 14]{erdelyi}, this shows that $P'$ possesses an asymptotic expansion in the same asymptotic sequence as $P$. It follows by induction that for all $m\in\mathbb{N}$, the function $P^{(m)}$ also possesses an asymptotic expansion in the same asymptotic sequence as $P$. Notice that the exponents of the monomials of the asymptotic sequence are integer multiples of the common real number $-1/s$. Hence, it follows from the classical proof (see \cite[p.\ 21]{erdelyi}) that the asymptotic expansion of $P^{(m)}$ is obtained by differentiating the asymptotic expansion of $P$ term by term, $m$ times. Now define $a_j=\mathrm{Re}\,C_j$ and $b_j=\mathrm{Im}\,C_j$ for all $j\in\mathbb{N}$. Also, define the functions $A(\tau)=\mathrm{Re}\,P(\tau)$ and $B(\tau)=\mathrm{Im}\,P(\tau)$. Using (\ref{curve}) we get \begin{eqnarray} X(\tau) &=& \cos(\tau f(0))A(\tau)-\sin(\tau f(0))B(\tau),\\ Y(\tau) &=& \sin(\tau f(0))A(\tau)+\cos(\tau f(0))B(\tau), \end{eqnarray} where \begin{eqnarray} A(\tau) &\sim& \sum\limits_{j=1}^{\infty} a_j\cdot\tau^{-j/s},\quad\mathrm{as}\ \tau\to\infty,\\ B(\tau) &\sim& \sum\limits_{j=1}^{\infty} b_j\cdot\tau^{-j/s},\quad\mathrm{as}\ \tau\to\infty. \end{eqnarray} Notice that it follows from $P(\tau)=A(\tau)+i B(\tau)$ that the functions $A$ and $B$ are of class $C^{\infty}(\mathbb{R})$ and that $A^{(m)}$ and $B^{(m)}$, $m\in\mathbb{N}_0$, possess asymptotic expansions obtained by differentiating the asymptotic expansions of $A$ and $B$ term by term, $m$ times, respectively. For the oscillatory dimension we provide the proof for $Y$; for the function $X$ the proof is analogous.
Using the substitution $t=1/\tau$, to determine the oscillatory dimension of $Y$ we further investigate the asymptotic expansion of the function $y$, defined by $y(t)=Y(1/t)$, near the origin, $$ y(t)=\sin(f(0)/t)a(t)+ \cos(f(0)/t)b(t), $$ where $a(t)=A\left(t^{-1}\right)$ and $b(t)=B\left(t^{-1}\right)$. Now \begin{eqnarray} a(t) &\sim& \sum\limits_{j=1}^{\infty} a_j\cdot t^{j/s},\quad\mathrm{as}\ t\to 0^+,\\ b(t) &\sim& \sum\limits_{j=1}^{\infty} b_j\cdot t^{j/s},\quad\mathrm{as}\ t\to 0^+. \end{eqnarray} Notice that both functions $a$ and $b$ are of class $C^{\infty}(\mathbb{R}^+)$ and that both $a^{(m)}$ and $b^{(m)}$, for all $m\in\mathbb{N}_0$, possess asymptotic expansions near the origin, given by $\displaystyle\frac{d^m}{d t^m}\left[A\left(t^{-1}\right)\right]$ and $\displaystyle\frac{d^m}{d t^m}\left[B\left(t^{-1}\right)\right]$, respectively. Indeed, these $m$-th derivatives are finite linear combinations of products of $A^{(k)}\left(t^{-1}\right)$ or $B^{(k)}\left(t^{-1}\right)$, $k\leq m$, with negative powers of $t$, and a linear combination of asymptotic expansions is again an asymptotic expansion; see \cite[p.\ 14]{erdelyi}. Finally, we define the functions $p(t)=\sqrt{a^2(t)+b^2(t)}$ and $\psi:\mathbb{R}\rightarrow [0,2\pi)$ such that $$ \cos\psi(t)=\frac{a(t)}{p(t)},\qquad \sin\psi(t)=\frac{b(t)}{p(t)}. $$ Exploiting the trigonometric addition formulas we get the expression $$ y(t)=p(t)\sin(q(t)),\quad\mathrm{where}\ q(t)=f(0)\cdot t^{-1}+\psi(t). $$ The function $p$ is of class $C^{\infty}(\mathbb{R}^+)$, and $q$ is also of class $C^{\infty}(\mathbb{R}^+)$ for sufficiently small $t$, as $\psi$ is $C^{\infty}(\mathbb{R}^+)$ by definition.
For the derivative of $q$, we get \begin{equation}\label{Der-q} q'(t)=-f(0)\cdot t^{-2}+\psi'(t),\quad\mathrm{where}\ \psi(t)=\arctan\frac{b(t)}{a(t)} . \end{equation} Because $b(t)/a(t)$ tends to a constant as $t\to 0^+$, and as $\arctan$ is an analytic function, for all $m\in\mathbb{N}$ the derivatives $\psi^{(m)}$ and $q^{(m)}$ possess asymptotic expansions. Using the same principle, we can prove that for all $m\in\mathbb{N}$, $p^{(m)}$ possesses an asymptotic expansion. The asymptotic representation of the function $p$ is easily determined, \begin{eqnarray} p(t) &=& \sqrt{\left(a_1 t^{1/s} + O\left(t^{2/s}\right)\right)^2 + \left(b_1 t^{1/s} + O\left(t^{2/s}\right)\right)^2}\\ &=& t^{1/s}\sqrt{a^2_1+b^2_1}\left(1+ O\left(t^{1/s}\right)\right) \sim t^{1/s}\sqrt{a^2_1+b^2_1}, \end{eqnarray} as $t\to 0$, and the asymptotic representation of any derivative of $p$ is given by differentiating the asymptotic representation of $p$ the corresponding number of times. For the function $q$, as $\psi$ is bounded, it follows that $q(t)\sim f(0)\cdot t^{-1}$, as $t\to 0$, and for all $m\in\mathbb{N}$, $q^{(m)}(t)\sim f(0)\cdot\frac{d^m}{dt^m}\left[t^{-1}\right]$, as $t\to 0$. Finally, we use Theorem \ref{BDchirp} from Section \ref{section_pwr-log-chrips-spirals}, with $S(t)=\sin t$ and the constants $T=\pi$, $\alpha=1/s$ and $\beta=1$. Notice that all of the assumptions of that theorem are satisfied. Let $\Gamma_y$ be the graph of the function $y$. We conclude that $\dim_B\Gamma_y=\dim_{osc}Y=2-\frac{\alpha+1}{\beta+1}=\frac{3s-1}{2s}$ and that $\Gamma_y$ is Minkowski nondegenerate. \smallskip In order to compute the curve dimension, we investigate the oscillatory integral $I$ in polar coordinates.
We first define the real function $G(\tau)=|I(\tau)|=|P(\tau)|=\sqrt{X^2(\tau)+Y^2(\tau)}=\sqrt{A^2(\tau)+B^2(\tau)}$; it follows that $G$ is of class $C^{\infty}$. Using the asymptotic expansions of $A$ and $B$, we get $$ G(\tau)\sim\sum\limits_{j=1}^{\infty} c_j\cdot\tau^{-j/s},\qquad\mathrm{as}\ \tau\to\infty, $$ where $c_1=\sqrt{a^2_1+b^2_1}=|C_1|$ and, more generally, each $c_j\in\mathbb{R}$ depends only on $C_1,\dots,C_j$. Next, we define the continuous function $\varphi:[\tau_0,\infty)\rightarrow\mathbb{R}$, $\tau_0>0$, by $$ \tan\varphi(\tau)=\frac{Y(\tau)}{X(\tau)}, $$ where $X(\tau)\neq 0$, and extend it by continuity; the zero set of $X$ is discrete because of the asymptotic expansion of $X'(\tau)$. Using the trigonometric addition formulas we calculate $$ \tan\varphi(\tau)=\frac{\sin(\tau f(0)+\Psi(\tau))G(\tau)}{\cos(\tau f(0)+\Psi(\tau))G(\tau)}=\tan(\tau f(0)+\Psi(\tau)), $$ where $\Psi:[\tau_0,\infty)\rightarrow[0,2\pi)$ is such that $$ \cos\Psi(\tau)=\frac{A(\tau)}{G(\tau)},\qquad \sin\Psi(\tau)=\frac{B(\tau)}{G(\tau)}. $$ We compute the expression $$ \Psi'(\tau)=\frac{A(\tau)B'(\tau)-B(\tau)A'(\tau)}{A^2(\tau)+B^2(\tau)}=K\cdot\tau^{-1-1/s}\left(1+O\left(\tau^{-1/s}\right)\right),\ \mathrm{as}\ \tau\to\infty, $$ where the constant $\displaystyle K=\frac{a_2 b_1-a_1 b_2}{s\left(a_1^2+b_1^2\right)}$, so $\varphi'(\tau)=f(0)+\Psi'(\tau)\sim f(0)$, with $\varphi'(\tau)-f(0)=\Psi'(\tau)\sim K\cdot\tau^{-1-1/s}$, as $\tau\to\infty$. From the expression for $\Psi'(\tau)$ it follows that $\Psi'$ is of class $C^{\infty}$. As $\Psi$ is continuous for sufficiently large $\tau_0$, it follows that $\Psi$ is of class $C^{\infty}$ and that $\varphi(\tau)=\tau f(0)+\Psi(\tau)$; hence $\varphi$ is also of class $C^{\infty}$.
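The leading coefficient $K$ of $\Psi'(\tau)$ can be verified symbolically on truncated expansions. The following sketch is our own illustration, with the sample values $s=3$ and $a_1=1$, $a_2=3$, $b_1=2$, $b_2=5$ (so that the predicted value is $K=(a_2b_1-a_1b_2)/(s(a_1^2+b_1^2))=1/15$).

```python
import sympy as sp

# Symbolic sketch (our own; two-term truncations of A and B assumed):
# verify K = (a2*b1 - a1*b2)/(s*(a1^2 + b1^2)) as the leading coefficient of
# Psi'(tau) = (A*B' - B*A')/(A^2 + B^2) ~ K * tau^(-1-1/s).
x = sp.symbols('x', positive=True)   # x = tau**(1/s), so tau = x**s
s = 3
a1, a2, b1, b2 = 1, 3, 2, 5          # sample expansion coefficients
tau = x**s
A = a1 / x + a2 / x**2               # A(tau) = a1*tau^(-1/s) + a2*tau^(-2/s)
B = b1 / x + b2 / x**2
dA = sp.diff(A, x) / sp.diff(tau, x) # derivative with respect to tau
dB = sp.diff(B, x) / sp.diff(tau, x)
expr = (A * dB - B * dA) / (A**2 + B**2)
K = sp.limit(expr * tau**(1 + sp.Rational(1, s)), x, sp.oo)
print(K)
```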
Analogously as before, the functions $G^{(m)}$, $\varphi^{(m)}$ and $\Psi^{(m)}$ possess asymptotic expansions for all $m\in\mathbb{N}_0$. As $f(0)>0$, we can take $\tau_0>0$ sufficiently large such that $\varphi'(\tau)>0$ for every $\tau\in[\tau_0,\infty)$. As now $\varphi:[\tau_0,\infty)\rightarrow[\varphi_0,\infty)$, where $\varphi_0=\varphi(\tau_0)$, is of class $C^{\infty}$ and a strictly increasing bijection, so is its inverse function $\tau:[\varphi_0,\infty)\rightarrow[\tau_0,\infty)$. Now the radius function $r:[\varphi_0,\infty)\rightarrow[0,\infty)$, defined by $r(\varphi)=G(\tau(\varphi))$, is of class $C^{\infty}$. We want to determine the asymptotic representations of the first two derivatives of $r(\varphi)$, as $\varphi\to\infty$. From before, we know that $G(\tau)\sim c_1\cdot\tau^{-1/s}$, $G'(\tau)\sim -\frac{c_1}{s}\cdot\tau^{-1-1/s}$ and $G''(\tau)\sim \frac{c_1}{s}\left(1+\frac{1}{s}\right)\cdot\tau^{-2-1/s}$, as $\tau\to\infty$.
We compute \begin{eqnarray} r'(\varphi) &=& G'(\tau(\varphi))\tau'(\varphi)= \frac{G'(\tau(\varphi))}{\varphi'(\tau(\varphi))}, \nonumber\\ r''(\varphi) &=& G''(\tau(\varphi))(\tau'(\varphi))^2 + G'(\tau(\varphi))\tau''(\varphi) \nonumber\\ &=& \frac{G''(\tau(\varphi))}{(\varphi'(\tau(\varphi)))^2}-G'(\tau(\varphi))\frac{\varphi''(\tau(\varphi))}{\left[\varphi'(\tau(\varphi))\right]^3} , \nonumber \end{eqnarray} since $\tau'(\varphi)=\left[\varphi'(\tau(\varphi))\right]^{-1}$ and $\tau''(\varphi)=-\varphi''(\tau(\varphi))/\left[\varphi'(\tau(\varphi))\right]^3$. As $\Psi$ is a bounded function, it follows that $\varphi(\tau)\sim\tau f(0)$, as $\tau\to\infty$. It is easy to see that for the inverse, $\tau(\varphi)\sim\varphi/f(0)$, as $\varphi\to\infty$. Notice that $\varphi'(\tau)\sim f(0)$ and $\varphi''(\tau)\sim K\left(-1-\frac{1}{s}\right)\cdot\tau^{-2-1/s}$, as $\tau\to\infty$. Finally, notice that $\tau(\varphi)\to\infty$, as $\varphi\to\infty$.
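The inverse-function identities used above can be illustrated numerically. The following sketch is our own, on the model phase-angle function $\varphi(\tau)=2\tau+\tau^{-1/3}$ (our choice, mimicking $\varphi(\tau)=\tau f(0)+\Psi(\tau)$), restricted to a domain where it is increasing.

```python
# Numerical sketch (model data, our own): check tau'(phi) = 1/phi'(tau)
# and tau''(phi) = -phi''(tau)/[phi'(tau)]^3 for phi(tau) = 2*tau + tau**(-1/3).
def phi(t):   return 2.0 * t + t ** (-1.0 / 3.0)
def dphi(t):  return 2.0 - (1.0 / 3.0) * t ** (-4.0 / 3.0)
def d2phi(t): return (4.0 / 9.0) * t ** (-7.0 / 3.0)

def tau(p, lo=1.0, hi=1.0e6):
    # bisection for the inverse of phi, which is increasing on [1, 1e6]
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p0, h = 25.0, 0.1
t0 = tau(p0)
d1 = (tau(p0 + h) - tau(p0 - h)) / (2.0 * h)              # finite-difference tau'
d2 = (tau(p0 + h) - 2.0 * tau(p0) + tau(p0 - h)) / h**2   # finite-difference tau''
print(d1, 1.0 / dphi(t0))
print(d2, -d2phi(t0) / dphi(t0) ** 3)
```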
Hence, we can compute \begin{eqnarray} r(\varphi) &\sim& c_1\cdot\left(\varphi/f(0)\right)^{-1/s} = c_1 f(0)^{1/s}\varphi^{-1/s}, \nonumber\\ r'(\varphi) &\sim& \frac{-\frac{c_1}{s}\cdot\left(\varphi/f(0)\right)^{-1-1/s}}{f(0)} = -\frac{c_1}{s}f(0)^{1/s}\cdot\varphi^{-1-1/s}, \nonumber\\ r''(\varphi)&\sim& \frac{\frac{c_1}{s}\left(1+\frac{1}{s}\right)\cdot\left(\varphi/f(0)\right)^{-2-1/s}}{(f(0))^2} \nonumber\\ &+& \frac{c_1}{s}\cdot\left(\varphi/f(0)\right)^{-1-1/s}\frac{K\left(-1-\frac{1}{s}\right)\cdot\left(\varphi/f(0)\right)^{-2-1/s}}{(f(0))^3} \nonumber\\ &\sim& \frac{c_1}{s}\left(1+\frac{1}{s}\right)f(0)^{1/s}\cdot\varphi^{-2-1/s}, \nonumber \end{eqnarray} as $\varphi\to\infty$. Notice that, as $c_1>0$, we have $r'(\varphi)<0$ for $\varphi$ sufficiently large, so we can take $\tau_0>0$ sufficiently large such that $r$ is a strictly decreasing function. Finally, we use Theorem \ref{tm_ffam} from Section \ref{section_pwr-log-chrips-spirals}, taking $\alpha=1/s$. The function $r$ satisfies the assumptions of this theorem. We calculate the constant $m$ from (\ref{lim}), below, to be equal to $f(0)^{1/s} |C_1|$, and $|r''(\varphi)\,\varphi^{\alpha}|\to 0$, as $\varphi\to\infty$, so it is uniformly bounded as a function of $\varphi$ on its domain $[\varphi_0,\infty)$. As all of the assumptions of that theorem are satisfied, we conclude that the curve dimension of $I$ is $d:=2/(1+\alpha)=2s/(s+1)$, the curve $\Gamma$ is Minkowski measurable and its $d$-dimensional Minkowski content is given by (\ref{Mcontent}).
\end{proof} \begin{proof}[Proof of Theorem \ref{tm_case_n2}] Using \cite[Theorem 6.5]{arnold-vol2} we conclude that the oscillation index of the critical point of $f$ is equal to its remoteness $\beta$. Then, writing $a_{k,\gamma}:=a_{k,\gamma}(\phi)$ and rewriting (\ref{expansion}), we get the asymptotic expansion \begin{equation}\label{neq2} I(\tau)\sim e^{i\tau f(0)}\left(a_{1,\beta}\tau^{\beta}\log\tau+a_{0,\beta}\tau^{\beta} + \sum_{\alpha<\beta}\left(a_{1,\alpha}\tau^{\alpha}\log\tau+a_{0,\alpha}\tau^{\alpha}\right)\right) \end{equation} as $\tau\to\infty$, where $\alpha$ runs through a finite set of arithmetic progressions; hence there exists $\varepsilon>0$ such that $|\beta-\alpha|>\varepsilon$ for all such $\alpha$. Without loss of generality, we can assume that we work in \emph{superadapted}, hence adapted, coordinates; see \cite[Section 7]{greenblatt}. We now have to establish, in cases $(i)$ and $(ii)$, whether the first coefficient $a_{1,\beta}$ vanishes or not. In case $(i)$, as the multiplicity of the remoteness is equal to $0$ and the dimension is $n=2$, we conclude that the open face of the Newton diagram of the phase $f$ that contains the center of the boundary of the associated Newton polyhedron is an edge. If the Newton polyhedron is remote, that is, $\beta>-1$, we are in Case $1$ or $3$ of \cite[Theorem 1.2]{greenblatt}, from which it follows that $a_{1,\beta}=0$. From the definition of the oscillation index it follows that $a_{0,\beta}\neq 0$. If $\beta=-1$, it follows from \cite[Lemma 1.0]{greenblatt} that the critical point of $f$ at the origin is nondegenerate. Now, from \cite[Theorem 6.2]{arnold-vol2} it follows that $a_{0,\beta}\neq 0$ and $a_{1,\beta}=0$. The rest of the proof now essentially follows the proof of Theorem \ref{tm_case_n1}.
Minor differences arise in the treatment of the more complicated asymptotic expansion (\ref{neq2}), which contains logarithmic terms. In case $(ii)$, the multiplicity of the remoteness is equal to $1$, hence the center of the boundary of the associated Newton polyhedron is a vertex. As $\beta>-1$, we are in Case $2$ of \cite[Theorem 1.2]{greenblatt}, hence from \cite[Comment 2]{greenblatt} it follows that $a_{1,\beta}\neq 0$. Now the first term in the asymptotic expansion contains a logarithm. As in case $(i)$, the proof of Theorem \ref{tm_case_n1} is adapted to account for the log-terms in the asymptotic expansions. Further differences arise in the final steps of the proof, when applying Theorems \ref{BDchirp} and \ref{tm_ffam}, for the oscillatory and curve dimensions, respectively. We first consider the proof for the oscillatory dimension. It is easy to see that here, contrary to the proof of Theorem \ref{tm_case_n1}, it holds that $p(t)\sim \mathrm{const}\cdot t^{-\beta}\log(t^{-1})$, as $t\to 0$, and hence $p'(t)\sim \mathrm{const}\cdot t^{-\beta-1}\log(t^{-1})$, as $t\to 0$. So instead of Theorem \ref{BDchirp} from Section \ref{section_pwr-log-chrips-spirals}, which cannot be applied here, we use Theorem~\ref{BDchirp_mod}. For the proof for the curve dimension, instead of using Theorem \ref{tm_ffam} on the curve radius function $r=r(\varphi)$ (see the proof of Theorem \ref{tm_case_n1}), we directly use Theorem \ref{ffa}. \end{proof} \begin{proof}[Proof of Theorem \ref{tm_case_ng2}] Using \cite[Theorem 6.4]{arnold-vol2} we conclude that the oscillation index of the critical point of $f$ is equal to the remoteness $\beta$.
Using (\ref{expansion}) we get the asymptotic expansion \begin{equation}\label{ng2} I(\tau)\sim e^{i\tau f(0)}\left(\sum_{k=0}^{n-1} a_{k,\beta}(\phi)\tau^{\beta}\log^k\tau + \sum_{\alpha<\beta}\sum_{k=0}^{n-1} a_{k,\alpha}(\phi)\tau^{\alpha}\log^k\tau\right) \end{equation} as $\tau\to\infty$, where $\alpha$ runs through a finite set of arithmetic progressions. The rest of the proof is analogous to the proof of Theorem \ref{tm_case_n2}, in both cases. Notice that here the asymptotic scale involves terms consisting of $\tau$ raised to a negative rational power, multiplied by $\log^k\tau$. \end{proof} \section{Fractal properties of chirps and spirals related to oscillatory integrals}\label{section_pwr-log-chrips-spirals} In order to compute the curve and oscillatory dimensions of oscillatory integrals and the related Minkowski contents, we use the theorems presented in this section. Theorems \ref{BDchirp} and \ref{tm_ffam}, cited below, have been used before in different settings, related to the fractal analysis of differential equations, Fresnel integrals and dynamical systems; see \cite{cswavy}, \cite{clothoid} and \cite{zuzu}. Here, they are used in the proofs of Theorems \ref{tm_case_n1}, \ref{tm_case_n2} and \ref{tm_case_ng2}. Moreover, for the proofs of Theorems \ref{tm_case_n2} and \ref{tm_case_ng2}, Theorems \ref{BDchirp} and \ref{tm_ffam} had to be modified, as the original versions do not take into account the power-log asymptotics of the leading term in the asymptotic expansion of the related oscillatory integrals. These modified variants, Theorems \ref{BDchirp_mod} and \ref{ffa} below, are proved throughout the rest of this section. \begin{theorem}[Theorem 5 from \cite{cswavy}]\label{BDchirp} Let $y(x)=p(x)S(q(x))$, where $x \in I=(0,c]$ and $c>0$.
Let the functions $p(x)$, $q(x)$ and $S(t)$ satisfy the following assumptions: \begin{equation} \mbox{ $p\in C(\bar{I})\cap C^{1}(I)$, $q\in C^{1}(I)$, $S\in C^1(\mathbb{R})$}. \end{equation} The function $S(t)$ is assumed to be a $2T$-periodic real function defined on $\mathbb{R}$ such that \begin{equation}\label{Sx1} \left\{ \begin{array}{c} \mbox{$S(a)=S(a+T)=0$ for some $a\in\mathbb{R}$,}\\ \mbox{$S(t)\neq 0$ for all $t\in (a,a+T)\cup (a+T,a+2T)$,} \end{array} \right. \end{equation} where $T$ is a positive real number and $S(t)$ alternately changes sign on the intervals $(a+(k-1)T,a+kT)$, $k\in \mathbb{N}$. Without loss of generality, we take $a=0$. Let us suppose that $0<\alpha \leq \beta$ and: \begin{equation}\label{apsp_org} p(x)\simeq_1 x^{\alpha} \quad \mbox{as} \quad x\to 0,\qquad q(x)\simeq_1 x^{-\beta} \quad \mbox{as} \quad x\to 0. \end{equation} Let $\Gamma_y$ be the graph of the function $y$. Then $\dim_B\Gamma_y=2-(\alpha+1)/(\beta+1)$ and $\Gamma_y$ is Minkowski nondegenerate. \end{theorem} \begin{theorem}[Theorem 2 from \cite{clothoid}]\label{tm_ffam} Assume that $\varphi_1>0$ and $r\colon[\varphi_1,\infty)\to(0,\infty)$ is a decreasing $C^2$ function converging to zero as $\varphi\to\infty$. Let the limit \begin{equation}\label{lim} m:=\lim_{\varphi\to\infty}\frac{r'(\varphi)}{(\varphi^{-\alpha})'} \end{equation} exist, where $\alpha\in(0,1)$. Assume that $|r''(\varphi)\,\varphi^{\alpha}|$ is uniformly bounded as a function of $\varphi$. Let $\Gamma$ be the graph of the spiral $\rho=r(\varphi)$ and define $d:=2/(1+\alpha)$. Then $\dim_B\Gamma=d$, the spiral is Minkowski measurable, and moreover, \begin{equation} {\mathcal M}^d(\Gamma)=m^d\pi(\pi\alpha)^{-2\alpha/(1+\alpha)}\frac{1+\alpha}{1-\alpha}. \end{equation} \end{theorem} \begin{remark} Theorem \ref{tm_ffam} is a simplified but equivalent form of the result first introduced in \cite{zuzu}. \end{remark} \begin{theorem}\label{BDchirp_mod} {\bf (Box dimension and Minkowski degeneracy of the graph of a logarithmic $(\alpha,1)$-chirp-like function)} Let $y(x)=p(x)\sin(q(x))$, $x \in I=(0,c]$, $c>0$. Let the functions $p(x)$ and $q(x)$ satisfy the following assumptions: \begin{equation}\label{px} \mbox{ $p\in C(\bar{I})\cap C^{1}(I)$, $q\in C^{1}(I)$}. \end{equation} Let us suppose that $0<\alpha \leq 1$, $l\in\mathbb{N}$ and: \begin{equation}\label{apsp} p(x)\simeq_1 x^{\alpha}\left[\log(x^{-1})\right]^l \quad \mbox{as} \quad x\to 0, \end{equation} \begin{equation}\label{qq} q(x)\simeq_1 x^{-1} \quad \mbox{as} \quad x\to 0. \end{equation} Let $\Gamma_y$ be the graph of the function $y$. Then $\dim_B\Gamma_y=d$, where $d=2-(\alpha+1)/2$, and $\Gamma_y$ is Minkowski degenerate, with $\mathcal{M}^d(\Gamma_y)=\infty$. \end{theorem} \begin{remark} Theorem \ref{BDchirp_mod} is a modified variant of Theorem \ref{BDchirp}, obtained by setting $S(x)=\sin x$, $T=\pi$ and $\beta=1$ in the original theorem, and adapting condition (\ref{apsp_org}) by the introduction of log-term asymptotics. The same applies to Propositions \ref{druga_mod} and \ref{treca_mod} below. \end{remark} The proof of Theorem \ref{BDchirp_mod} uses Theorem \ref{mersat_mod} below, which is a modified variant of \cite[Theorem 2.1]{mersat}, and two propositions concerning the properties of the functions $p$ and $q$, which are modified variants of \cite[Propositions 1 and 2]{cswavy}, also given below.
We also need \cite[Definition 2.1]{mersat}: for some $\varepsilon_0>0$, we say that a function $k=k(\varepsilon)$ is \emph{an index function} on $(0,\varepsilon_0]$ if $k:(0,\varepsilon_0]\to\mathbb{N}$, $k(\varepsilon)$ is nonincreasing and $\lim_{\varepsilon\to 0} k(\varepsilon)=\infty$. \begin{theorem}[Modification of Theorem 2.1 from \cite{mersat}]\label{mersat_mod} Let $y\in C^1((0,T])$ be a bounded function on $(0,T]$. Let $s\in[1,2)$ be a real number, let $l\in\mathbb{N}$, and let $(a_n)$ be a decreasing sequence of consecutive zeros of $y(x)$ in $(0,T]$ such that $a_n\to 0$ as $n\to \infty$. Let there exist constants $c_1$, $c_2$, $\varepsilon_0$ such that for all $\varepsilon \in(0,\varepsilon_0)$ we have: \begin{equation}\label{left} c_1\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l\leq \sum_{n\geq k(\varepsilon)} \max_{x\in[a_{n+1},a_n]} |y(x)|(a_n-a_{n+1}), \end{equation} \begin{equation}\label{right} a_{k(\varepsilon)} \sup_{x\in(0,a_{k(\varepsilon)}]} |y(x)| + \varepsilon \int_{a_{k(\varepsilon)}}^{a_1} |y'(x)|dx \leq c_2\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l, \end{equation} where $k(\varepsilon)$ is an index function on $(0,\varepsilon_0]$ such that $$ |a_n-a_{n+1}|\leq \varepsilon \quad \mbox{for all} \quad n\geq k(\varepsilon) \quad\mbox{and}\quad \varepsilon \in (0,\varepsilon_0). $$ Let $G(y)$ be the graph of the function $y$. Then $\dim_B(G(y))=s$ and $G(y)$ is Minkowski degenerate, with $\mathcal{M}^s(G(y))=\infty$.
\end{theorem} \begin{proof} Let $G_{\varepsilon}(y)$ be the $\varepsilon$-neighbourhood of the graph $G(y)$ of the function $y$. From \cite[Lemma 2.1]{mersat} it follows that $|G_{\varepsilon}(y)|\geq c_1\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l$, and from \cite[Lemma 2.2]{mersat} it follows that $|G_{\varepsilon}(y)|\leq c\left[\varepsilon + c_2\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l\right]$, where $c>0$. From the definitions of ${\mathcal M}^{*s}(G(y))$ and ${\mathcal M}_*^{s}(G(y))$ it follows that \[ {\mathcal M}^{*s}(G(y)) \geq {\mathcal M}_*^{s}(G(y)) \geq \liminf_{\varepsilon\to0} \frac{c_1\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l}{\varepsilon^{2-s}} = +\infty , \] and that \[ {\mathcal M}_*^{s'}(G(y)) \leq {\mathcal M}^{*s'}(G(y)) \leq \limsup_{\varepsilon\to0} \frac{c\left[\varepsilon + c_2\varepsilon^{2-s}\left[\log(\varepsilon^{-1})\right]^l\right]}{\varepsilon^{2-s'}} = 0 \] holds for all $s'>s$, hence the theorem is proved. \end{proof} \begin{prop}[Modification of Proposition 1 from \cite{cswavy}]\label{druga_mod} Assume that the functions $p(x)$ and $q(x)$ satisfy conditions {\rm (\ref{px})}, {\rm (\ref{apsp})} and {\rm (\ref{qq})}. Then there exist $\delta_0 >0$, $l\in\mathbb{N}$ and positive constants $C_1$ and $C_2$ such that: \begin{eqnarray} C_1x^{\alpha}\left[\log(x^{-1})\right]^l\leq & p(x) & \leq C_2x^{\alpha}\left[\log(x^{-1})\right]^l, \nonumber\\ C_1x^{\alpha-1}\left[\log(x^{-1})\right]^l\leq & p'(x) & \leq C_2x^{\alpha-1}\left[\log(x^{-1})\right]^l, \nonumber\\ C_1x^{-1}\leq & q(x) & \leq C_2x^{-1}, \nonumber\\ C_1x^{-2}\leq & -q'(x) & \leq C_2x^{-2}, \nonumber \end{eqnarray} for all $x\in(0,\delta_0]$.
Furthermore, there exists the inverse function $q^{-1}$ of the function $q$ defined on $[m_0,\infty)$, where $m_0=q(\deltaelta_0)$, and it holds: \betaegin{equation*}\label{qq-1} q^{-1}(t) \sigmaimeq_1 t^{-1}\quaduad \mbox{as}\quaduad t\to\infty, \varepsilonnd{equation*} \betaegin{equation*}\label{qq-1st} C_1t^{-2}(t-s)\leq q^{-1}(s)- q^{-1}(t)\leq C_2s^{-2}(t-s), \quaduad m_0\leq s<t. \varepsilonnd{equation*} \varepsilonnd{prop} \betaegin{prop}[Modification of Proposition 2 from \lambdaammaite{cswavy}]{\label{treca_mod}} For any function $q(x)$ with properties {\rm (\ref{px})} and {\rm (\ref{qq})}, we have: \betaegin{itemize} \item[\rm{(i)}] Let $a_k=q^{-1}(k\pi)$ and $s_k=q^{-1}(t_0+k\pi), \ k\in \varepsilonN$, where $t_0\in(0,\pi)$ is arbitrary. Then there exist $k_0\in \varepsilonN$ and $c_0>0$ such that $a_k\in(0,\deltaelta_0]$, $y(a_k)=0$, $s_k\in(a_{k+1},a_k)$ for all $k\lambdaeq k_0$, $a_k\sigmaearrow 0$ as $k\to \infty$, $a_k\sigmaimeq k^{-1}$ as $k\to \infty$, and \betaegin{equation*}\label{pro4} \max_{x\in[a_{k+1},a_k]} |y(x)| \lambdaeq c_0(k+1)^{-\alphalpha}\left[\log{(k+1)}\right]^l \quaduad \mbox{for all $k\lambdaeq k_0$, $c_0>0$}, \varepsilonnd{equation*} where $y(x)=p(x)\sigmain(q(x))$ and the function $p(x)$ and $l\in\varepsilonN$ satisfy (\ref{apsp}). \item[\rm{(ii)}] There exists $\mathop{\rm Var}epsilon_0>0$ and a function $k:(0,\mathop{\rm Var}epsilon_0)\to\varepsilonN$ such that \betaegin{equation}\label{kaeps} \varphirac{1}{\pi}\left(\varphirac{\mathop{\rm Var}epsilon}{\pi C_2} \right)^{-\varphirac{1}{2}}\leq k(\mathop{\rm Var}epsilon)\leq \varphirac{2}{\pi}\left(\varphirac{\mathop{\rm Var}epsilon}{\pi C_2} \right)^{-\varphirac{1}{2}}. \varepsilonnd{equation} In particular, $$\varphirac{C_1}{\pi}(k+1)^{-2}\leq a_k-a_{k+1}\leq \mathop{\rm Var}epsilon ,$$ for all $k\lambdaeq k(\mathop{\rm Var}epsilon)$ and $\mathop{\rm Var}epsilon\in (0,\mathop{\rm Var}epsilon_0)$. 
\varepsilonnd{itemize} \varepsilonnd{prop} Proofs of Propositions \ref{druga_mod} and \ref{treca_mod} are analogous as in \lambdaammaite{cswavy}. \betaegin{proof}[Proof of Theorem \ref{BDchirp_mod}] We have to check that assumptions (\ref{left}) and (\ref{right}) are satisfied. By Proposition \ref{treca_mod} we have \betaegin{align*} \sigmaum_{k\lambdaeq k(\mathop{\rm Var}epsilon)} \max_{x\in[a_{k+1},a_k]} |y(x)|(a_k-a_{k+1}) &\lambdaeq \varphirac{c_0 C_1}{\pi}\sigmaum_{k=k(\mathop{\rm Var}epsilon)}^\infty (k+1)^{-\alphalpha-2}\left[\log (k+1)\right]^l \\ &\lambdaeq c\sigmaum_{k=k(\mathop{\rm Var}epsilon)+1}^\infty k^{-\alphalpha-2}\left[\log k\right]^l=ca, \varepsilonnd{align*} where the series $ a=\sigmaum_{k=k(\mathop{\rm Var}epsilon)+1}^\infty k^{-\alphalpha-2}\left[\log k\right]^l$ is convergent. Then, using the integral test for convergence and (\ref{kaeps}), we obtain that \betaegin{align*} ca &\lambdaeq \int_{k=k(\varepsilon)+1}^\infty k^{-\alphalpha-2}\left[\log k\right]^l \lambdaeq c_1(\varphirac{1}{k(\varepsilon)+1})^{\alphalpha+1}\left[\log (k(\varepsilon)+1)\right]^l \\ &\lambdaeq \varphirac{c_1 }{2}(\varphirac{1}{k(\varepsilon)})^{\alphalpha+1}\left[\log k(\varepsilon)\right]^l \lambdaeq c_2\varepsilon^{2-\left(2-\varphirac{\alphalpha+1}{2}\right)}\left[\log\left(\varepsilon^{-1}\right)\right]^l,\nablaonumber \varepsilonnd{align*} for all $\mathop{\rm Var}epsilon \in (0,\mathop{\rm Var}epsilon_0)$. Using Proposition \ref{druga_mod} it follows that \betaegin{align*} |y'(x)|=|p'(x)\sigmain(q(x))+p(x)q'(x)\lambdaammaos(q(x))| &\leq c_3 x^{\alphalpha-1}\left[\log\left(x^{-1}\right)\right]^l \\ &\leq c_4 x^{\alphalpha-2}, \varepsilonnd{align*} which holds near $x=0^+$. 
By Proposition \ref{treca_mod} we conclude that \betaegin{align*} a_{k(\varepsilon)} \sigmaup_{x\in(0,a_{k(\varepsilon)}]} |y(x)| &+ \varepsilon \int_{a_{k(\varepsilon)}}^{a_{k_0}} |y'(x)|dx \\ &\leq c_5\varepsilon^{\varphirac{\alphalpha+1}{2}}\left[\log\left(\varepsilon^{-1}\right)\right]^l+\varepsilon c_4[a_{k_0}^{\alphalpha-1}+ a_{k(\varepsilon)}^{\alphalpha-1}] \\ &\leq c_6\varepsilon^{2-\left(2-\varphirac{\alphalpha+1}{2}\right)}\left[\log\left(\varepsilon^{-1}\right)\right]^l , \varepsilonnd{align*} for all $\mathop{\rm Var}epsilon \in(0,\mathop{\rm Var}epsilon_0)$. Finally, we apply Theorem \ref{mersat_mod}, where $s=2-\varphirac{\alphalpha+1}{2}$. \varepsilonnd{proof} The last part of this section is devoted to proving the modified variant of Theorem \ref{tm_ffam}. More precisely, we will prove the modified variant of the original result, \lambdaammaite[Theorem 5]{zuzu}. To prove this variant, Theorem \ref{ffa} below, we proceed our presentation as in \lambdaammaite{zuzu}, by first stating and proving where necessary, some auxiliary definitions and results. We define a spiral in the plane as the graph $\Gamma$ of a function $r=f(\varphi)$, $\varphi\lambdae\varphi_1$, in polar coordinates, where \betageqn\label{cond} \left\{ \betaegin{array}{ll} \mbox{$f{\penalty10000\hbox{\kern1mm\rm:\kern1mm}\penalty10000}[\varphi_1,\infty)\to(0,\infty)$ is such that $f(\varphi)\to0$ as $\varphi\to\infty$,}&\\ \mbox{$f$ is {\it radially decreasing} (ie, for any fixed $\varphi\lambdae\varphi_1$}&\\ \mbox{the function $\varepsilonN\nablai k \mapsto f(\varphi+2k\pi)$ is decreasing)}&\\ \varepsilonnd{array} \right. \varepsilonndeqn Let $\Gamma$ be a spiral defined by $r=f(\varphi)$, $\varphi\lambdae\varphi_1$. 
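As a quick numerical sanity check of the scalings used for such spirals (illustrative only, not part of the argument; the model function $f(\varphi)=\varphi^{-\alpha}$ corresponds to $l=0$), one can verify that the angle beyond which successive turns come within $2\varepsilon$ of each other grows like $\varepsilon^{-1/(1+\alpha)}$, in accordance with $\dim_B(\Gamma,rad)=2/(1+\alpha)$:

```python
import math

# Model spiral f(phi) = phi**(-alpha); find the smallest phi >= phi1 with
# f(phi) - f(phi + 2*pi) <= 2*eps by bisection (the gap is decreasing in phi).
def phi2(eps, alpha, phi1=10.0):
    f = lambda p: p ** (-alpha)
    lo, hi = phi1, 1e12
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) - f(mid + 2 * math.pi) <= 2 * eps:
            hi = mid
        else:
            lo = mid
    return hi

alpha = 0.5
e1, e2 = 1e-6, 1e-8
# Empirical exponent of phi2(eps) as a power of eps, from two sample values
slope = math.log(phi2(e2, alpha) / phi2(e1, alpha)) / math.log(e2 / e1)
assert abs(slope + 1 / (1 + alpha)) < 0.05  # phi2(eps) ~ eps**(-1/(1+alpha))
```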
We denote the subset of the spiral $\Gamma$ corresponding to angles in the interval $(\varphi_0,\varphi_2)$ by $\Gamma(\varphi_0,\varphi_2)$; more precisely,
\begin{equation}
\Gamma(\varphi_0,\varphi_2):=\{(r,\varphi)\in\Gamma\colon\varphi\in(\varphi_0,\varphi_2)\}.
\end{equation}
Let $A$ be a bounded set in $\mathbb{R}^N$, and let the radial distance function $d_{rad}(x,A)$ be defined as the Euclidean distance from $x$ to the set $A\cap\{tx\colon t\ge0\}$, provided the intersection is nonempty, and as $\infty$ otherwise. The radial $\varepsilon$-neighbourhood of $A$ is then defined as the set $A_{\varepsilon,rad}:=\{y\in\mathbb{R}^N\colon d_{rad}(y,A)<\varepsilon\}$. Using the radial $\varepsilon$-neighbourhood we define the \emph{radial $s$-dimensional lower and upper Minkowski contents} of the set $A$, analogously as in Section \ref{subsec_box_dim}, denoted by ${\mathcal M}_*^s(A,rad)$ and ${\mathcal M}^{*s}(A,rad)$, respectively. Analogously we also define the \emph{radial lower and radial upper box dimensions} of $A$, denoted by $\underline\dim_B(A,rad)$ and $\overline\dim_B(A,rad)$, respectively. If both quantities coincide, the common value is denoted by $\dim_B(A,rad)$, and we call it the {\it radial box dimension} of $A$. For a general definition of directional box dimensions in $\mathbb{R}^2$, see Tricot \cite[pp.\ 248--249]{tricot}. Since $A_{\varepsilon,rad}\subseteq A_\varepsilon$, it is clear that
\begin{equation}\label{dimrad}
\underline\dim_B(A,rad)\le\underline\dim_BA,\quad \overline\dim_B(A,rad)\le\overline\dim_BA.
\end{equation}
We define the (radial) $\varepsilon$-{\it nucleus} of the spiral $\Gamma$ as the radial $\varepsilon$-neighbourhood of $\Gamma(\varphi_2(\varepsilon),\infty)\subset\Gamma$, that is,
\begin{equation}\label{N}
N(\Gamma,\varepsilon):=\Gamma(\varphi_2(\varepsilon),\infty)_{\varepsilon,rad},
\end{equation}
where by $\varphi_2(\varepsilon)$ we denote the smallest angle such that for all $\psi\ge\varphi_2(\varepsilon)$ we have $f(\psi)-f(\psi+2\pi)\le2\varepsilon$; more precisely,
\begin{equation}\label{fi2}
\varphi_2(\varepsilon):=\inf\{\varphi\ge\varphi_1\colon\forall\psi\ge\varphi,\,\,f(\psi)-f(\psi+2\pi)\le2\varepsilon\}.
\end{equation}
The set $T(\Gamma,\varepsilon)$ obtained as the radial $\varepsilon$-neighbourhood of the arc $\Gamma(\varphi_1,\varphi_2(\varepsilon))$, that is,
\begin{equation}\label{T}
T(\Gamma,\varepsilon):=\Gamma(\varphi_1,\varphi_2(\varepsilon))_{\varepsilon,rad},
\end{equation}
is called the (radial) $\varepsilon$-{\it tail} of the spiral $\Gamma$. The notions of nucleus and tail of a spiral were introduced by Tricot \cite[pp.\ 121, 122]{tricot}. We consider the lower nucleus and lower tail $s$-dimensional Minkowski contents of $\Gamma$, defined by
\begin{equation}\label{nt}
{\mathcal M}_*^s(\Gamma,n):=\liminf_{\varepsilon\to0}\frac{|N(\Gamma,\varepsilon)|}{\varepsilon^{2-s}},\quad
{\mathcal M}_*^s(\Gamma,t):=\liminf_{\varepsilon\to0}\frac{|T(\Gamma,\varepsilon)|}{\varepsilon^{2-s}},
\end{equation}
respectively, for $s\ge0$, and analogously the upper nucleus and upper tail Minkowski contents. It is clear that
\begin{equation}\label{gnc}
{\mathcal M}^{*s}(\Gamma,rad)\le{\mathcal M}^{*s}(\Gamma,n)+{\mathcal M}^{*s}(\Gamma,t).
\end{equation}
Indeed, we can express the radial $\varepsilon$-neighbourhood of $\Gamma$ as $\Gamma_{\varepsilon,rad}=N(\Gamma,\varepsilon)\cup T(\Gamma,\varepsilon)\cup S(\varepsilon)$, where $S(\varepsilon):=\{(r,\varphi)\in\Gamma_{\varepsilon,rad}\colon\varphi=\varphi_2(\varepsilon)\}$ is of $2$-dimensional Lebesgue measure zero. Hence
\[
{\mathcal M}^{*s}(\Gamma,rad)\le\limsup_{\varepsilon\to0}\frac{|N(\Gamma,\varepsilon)|}{\varepsilon^{2-s}}+\limsup_{\varepsilon\to0}\frac{|T(\Gamma,\varepsilon)|}{\varepsilon^{2-s}}.
\]
First we prove the modified variant of \cite[Theorem 1]{zuzu}.

\begin{theorem}\label{F}
Let $f\colon[\varphi_1,\infty)\to(0,\infty)$, where $\varphi_1>e$, be a measurable, radially decreasing function, see (\ref{cond}). Let $\alpha\in(0,1)$ and $l\in\mathbb{N}$ be such that for some positive numbers $\underline m$ and $\overline m$ we have
\begin{equation}\label{m}
\underline m\,\varphi^{-\alpha}\left[\log\varphi\right]^l\le f(\varphi)\le\overline m\,\varphi^{-\alpha}\left[\log\varphi\right]^l
\end{equation}
for all $\varphi\ge\varphi_1$. Assume that there exist positive constants $\underline a$ and $\overline a$ such that for all $\varphi\ge\varphi_1$,
\begin{equation}\label{a}
\underline a\,\varphi^{-\alpha-1}\left[\log\varphi\right]^l\le f(\varphi)-f(\varphi+2\pi)\le \overline a\,\varphi^{-\alpha-1}\left[\log\varphi\right]^l.
\end{equation}
Let $\Gamma$ be the graph of $r=f(\varphi)$ in polar coordinates. Then
\begin{eqnarray*}
&d:=\dim_B(\Gamma,rad)=\frac2{1+\alpha},&\\
&{\mathcal M}^{*d}(\Gamma,rad)=+\infty.&
\end{eqnarray*}
\end{theorem}

\begin{proof}
We first obtain an upper bound for the area of the $\varepsilon$-nucleus of $\Gamma$. Note that the inequality $f(\varphi)-f(\varphi+2\pi)> 2\varepsilon$ is satisfied whenever $\underline a\,\varphi^{-\alpha-1}\left[\log\varphi\right]^l\ge \underline a\,\varphi^{-\alpha-1} > 2\varepsilon$, that is, for $\varphi<\underline \varphi_2(\varepsilon)$, where $\underline\varphi_2(\varepsilon):=\left(\frac{2\varepsilon}{\underline a}\right)^{-1/(1+\alpha)}$. From the definition of $\varphi_2(\varepsilon)$, see (\ref{fi2}), we have
\begin{equation}\label{uf2}
\varphi_2(\varepsilon)\ge\underline\varphi_2(\varepsilon),
\end{equation}
and therefore
\begin{equation}
|N(\Gamma,\varepsilon)|\le\pi\Big(\sup_{[\underline\varphi_2(\varepsilon),\underline\varphi_2(\varepsilon)+2\pi]} f+\varepsilon\Big)^2\le \pi\left(\overline m\,\underline\varphi_2(\varepsilon)^{-\alpha}\left[\log\underline\varphi_2(\varepsilon)\right]^l+\varepsilon\right)^2.
\end{equation}
We see that
\begin{equation}
|N(\Gamma,\varepsilon)|\le\overline c_1\cdot\varepsilon^{2\alpha/(1+\alpha)}\left[\log(\varepsilon^{-1})\right]^{2l},
\end{equation}
where $\overline c_1>0$. Now we estimate the area of the $\varepsilon$-tail of $\Gamma$ from above. The inequality $f(\varphi)-f(\varphi+2\pi)< 2\varepsilon$ is satisfied when $\overline a\,\varphi^{-\alpha-1}\left[\log\varphi\right]^l< 2\varepsilon$. Hence, $f(\varphi)-f(\varphi+2\pi)< 2\varepsilon$ is satisfied for $\varphi>\overline \varphi_2(\varepsilon)$, where
\begin{equation}\label{ovf2}
\overline\varphi_2(\varepsilon):=\left(\frac{2\varepsilon}{\overline a}\right)^{-1/(1+\alpha-\delta)},
\end{equation}
with $\delta=\delta(\varepsilon):=\inf\{\delta>0 : [\log\varphi]^l<\varphi^{\delta}\ \mbox{for all}\ \varphi\ge\varphi_2(\varepsilon)\}$. Notice that $\overline a\,\varphi^{-\alpha-1}\left[\log\varphi\right]^l\leq\overline a\,\varphi^{-(\alpha-\delta)-1}$ for all $\varphi\ge\varphi_2(\varepsilon)$. Therefore $\varphi_2(\varepsilon)\le\overline\varphi_2(\varepsilon)$, and from this we have
\begin{eqnarray*}
|T(\Gamma,\varepsilon)|&\le&\frac12\int_{\varphi_1}^{\overline\varphi_2(\varepsilon)}[(f(\varphi)+\varepsilon)^2-(f(\varphi)-\varepsilon)^2]\,d\varphi \\
&=&2\varepsilon\int_{\varphi_1}^{\overline\varphi_2(\varepsilon)}f(\varphi)\,d\varphi\le 2\varepsilon\overline m\int_{\varphi_1}^{\overline\varphi_2(\varepsilon)}\varphi^{-\alpha}\left[\log\varphi\right]^l d\varphi \\
&\le& 2\varepsilon\overline m\int_{\varphi_1}^{\overline\varphi_2(\varepsilon)}\varphi^{-(\alpha-\delta)}d\varphi \\
&=&\frac{2\varepsilon\overline m}{1-(\alpha-\delta)}\left(\overline\varphi_2(\varepsilon)^{1-(\alpha-\delta)}-\varphi_1^{1-(\alpha-\delta)}\right)\le\overline c_2\cdot\varepsilon^{2(\alpha-\delta)/(1+\alpha-\delta)},
\end{eqnarray*}
where $\overline c_2>0$. Notice that $\delta=\delta(\varepsilon)\to 0$ as $\varepsilon\to 0$. Define $d:=2/(1+\alpha)$. Since $N(\Gamma,\varepsilon)\subseteq\Gamma_{\varepsilon,rad}$, the lower bound for $|N(\Gamma,\varepsilon)|$ obtained below will give
\begin{equation}
{\mathcal M}^{*d}(\Gamma,rad)\ge {\mathcal M}^{*d}(\Gamma,n)=+\infty.
\end{equation}
On the other hand, for every $d'>d$ it holds that ${\mathcal M}^{*d'}(\Gamma,n)=0$, and, taking $\varepsilon>0$ (and hence $\delta(\varepsilon)>0$) sufficiently small, also ${\mathcal M}^{*d'}(\Gamma,t)=0$. Hence, by (\ref{gnc}), we conclude that $\overline\dim_B(\Gamma,rad)\leq d$.

To obtain a lower bound for the area of the $\varepsilon$-nucleus of $\Gamma$, we show that
\begin{equation}\label{NB}
N(\Gamma,\varepsilon)\supset B_r(0),\quad r:=\inf_{\varphi\in[\overline\varphi_2(\varepsilon),\overline\varphi_2(\varepsilon)+2\pi]} f(\varphi),
\end{equation}
analogously as in the proof of \cite[Theorem 1]{zuzu}. Using (\ref{NB}) and (\ref{m}) we obtain
\begin{equation}\label{NN}
|N(\Gamma,\varepsilon)|\ge \pi r^2\ge\pi\left(\underline m\,(\overline\varphi_2(\varepsilon)+2\pi)^{-\alpha}\left[\log(\overline\varphi_2(\varepsilon))\right]^l\right)^{2},
\end{equation}
hence
\begin{equation}
|N(\Gamma,\varepsilon)|\ge\underline c_1\cdot\varepsilon^{2\alpha/(1+\alpha-\delta)}\left[\log(\varepsilon^{-1})\right]^{2l},
\end{equation}
where $\underline c_1>0$. Similarly as above we obtain
\begin{equation}
|T(\Gamma,\varepsilon)|\ge2\varepsilon\int_{\varphi_1}^{\underline\varphi_2(\varepsilon)}f(\varphi)\,d\varphi \ge \underline c_2\cdot\varepsilon^{2\alpha/(1+\alpha)},
\end{equation}
where $\underline c_2>0$, provided $\varepsilon$ is sufficiently small. As $\underline\varphi_2(\varepsilon)\leq\overline\varphi_2(\varepsilon)$, the corresponding regions are disjoint, and we conclude that
\begin{equation}
{\mathcal M}_*^{d}(\Gamma,rad)\ge\liminf_{\varepsilon\to0}\frac{\underline c_1\cdot\varepsilon^{2\alpha/(1+\alpha-\delta)}\left[\log(\varepsilon^{-1})\right]^{2l} +\underline c_2\cdot\varepsilon^{2\alpha/(1+\alpha)}}{\varepsilon^{2-d}} \ge \underline c_2.
\end{equation}
For every $d'<d$ it then holds that ${\mathcal M}_*^{d'}(\Gamma,rad)=+\infty$, hence $\underline\dim_B(\Gamma,rad)\geq d$, and altogether $\dim_B(\Gamma,rad)=d$. Moreover, since $\delta(\varepsilon)\log(\varepsilon^{-1})=O(\log\log(\varepsilon^{-1}))$, the first summand in the numerator above divided by $\varepsilon^{2-d}$ tends to $+\infty$ as $\varepsilon\to0$, which gives ${\mathcal M}^{*d}(\Gamma,n)=+\infty$, as announced.
\end{proof}

The following theorem is a marginally modified variant of \cite[Theorem 4]{zuzu}; the only difference is the additional log term $\left[\log(\varepsilon^{-1})\right]^l$.

\begin{theorem}\label{FI}
Let $\Gamma$ be a spiral of focus type defined by $r=f(\varphi)$, $f\colon[\varphi_1,\infty)\to(0,\infty)$, such that $f(\varphi)$ is decreasing and $f(\varphi)\to0$ as $\varphi\to\infty$. Let there exist $\varepsilon_0>0$ such that the functional inequality
\begin{equation}\label{fi}
f\left( \varphi+\frac{\varepsilon}{f(\varphi)}+\frac{\varepsilon}{f(\varphi)-\varepsilon} \right) > f(\varphi)-\varepsilon
\end{equation}
holds for all $\varepsilon\in(0,\varepsilon_0)$ and $\varphi\in(\varphi_1,\varphi_2(\varepsilon))$, where $\varphi_2(\varepsilon)$ is defined by (\ref{fi2}). Assume also that there exist positive constants $\underline C$, $\overline C$ and $q<1$ such that $\underline C\,\varepsilon^{q}\le f(\varphi_2(\varepsilon))\le\overline C\,\varepsilon^{1-\frac d2}\left[\log(\varepsilon^{-1})\right]^l$, where $d:=\overline\dim_B(\Gamma,rad)$ and $l\in\mathbb{N}$. Then
\begin{equation}
\overline\dim_B\Gamma=\overline\dim_B(\Gamma,rad).
\end{equation}
\end{theorem}

We omit the proof of Theorem \ref{FI}, as it is almost completely analogous to the proof of \cite[Theorem 4]{zuzu}; the only small difference occurs in the treatment of the log term in the condition $f(\varphi_2(\varepsilon))\le\overline C\,\varepsilon^{1-\frac d2}\left[\log(\varepsilon^{-1})\right]^l$.

The following excision property of Minkowski contents will enable us to handle the condition that $\varphi_1$ be sufficiently large in Theorem~\ref{ffa}. We omit its proof completely, as it is already given in \cite{zuzu}.

\begin{lemma}\label{excision} {\rm(Excision property for simple smooth curves)}
Let $\Gamma$ be a simple smooth curve in $\mathbb{R}^2$, that is, let $\Gamma$ be the graph of a continuously differentiable injection $h\colon[\varphi_1,\infty)\to\mathbb{R}^2$. Assume that $\underline\dim_B\Gamma>1$. Let $\overline\varphi_1>\varphi_1$ be given and $\Gamma_1:=h(\overline\varphi_1,\infty)$. Then
\begin{eqnarray*}
&\underline d:=\underline\dim_B\Gamma_1=\underline\dim_B\Gamma,\quad\overline d:=\overline\dim_B\Gamma_1=\overline\dim_B\Gamma,&\\
&{\mathcal M}_*^{\underline d}(\Gamma_1)= {\mathcal M}_*^{\underline d}(\Gamma),\quad {\mathcal M}^{*\overline d}(\Gamma_1)={\mathcal M}^{*\overline d}(\Gamma).&
\end{eqnarray*}
An analogous claim holds for radial box dimensions and radial Minkowski contents: if $\underline\dim_B(\Gamma,rad)>1$, then
\begin{eqnarray*}
&\underline \delta:=\underline\dim_B(\Gamma_1,rad)=\underline\dim_B(\Gamma,rad),\quad\overline \delta:=\overline\dim_B(\Gamma_1,rad)= \overline\dim_B(\Gamma,rad),&\\
&{\mathcal M}_*^{\underline \delta}(\Gamma_1,rad)= {\mathcal M}_*^{\underline \delta}(\Gamma,rad),\quad {\mathcal M}^{*\overline \delta}(\Gamma_1,rad)={\mathcal M}^{*\overline \delta}(\Gamma,rad).&
\end{eqnarray*}
In particular, the conclusions hold for smooth spirals $r=f(\varphi)$, where $f(\varphi)$ is a decreasing function tending to $0$ as $\varphi\to\infty$.
\end{lemma}

Finally, here is the modified variant of Theorem \ref{tm_ffam}.

\begin{theorem}\label{ffa}
Assume, in addition to the assumptions of Theorem \ref{F}, that the function $f$ is decreasing, of class $C^2$, and that there exist positive constants $M_1$ and $M_2$ such that for all $\varphi\ge\varphi_1$,
\begin{equation}\label{Mi}
M_1\varphi^{-\alpha-1}[\log\varphi]^l\le|f'(\varphi)|\le M_2\varphi^{-\alpha-1}[\log\varphi]^l.
\end{equation}
Then
\begin{equation}\label{drad}
\dim_B\Gamma=\dim_B(\Gamma,rad)=d
\end{equation}
and
\begin{equation}\label{Mrad}
{\mathcal M}^{*d}(\Gamma)={\mathcal M}^{*d}(\Gamma,rad)=+\infty,
\end{equation}
where $d:=\frac2{1+\alpha}$.
\end{theorem}

\begin{proof}
(a) By the excision result, Lemma~\ref{excision}, we may assume without loss of generality that $\varphi_1$ is sufficiently large, which we need below. We first check that condition (\ref{fi}) of Theorem~\ref{FI} is fulfilled. By the Lagrange mean value theorem, for all $\varphi\in(\varphi_1,\varphi_2(\varepsilon))$, where $\varphi_2(\varepsilon)$ is defined in (\ref{fi2}), we have
\begin{align*}
D &:= f(\varphi)-f\left( \varphi+\frac{\varepsilon}{f(\varphi)}+\frac{\varepsilon}{f(\varphi)-\varepsilon} \right) \\
&\le |f'(\varphi)|\left( \frac\varepsilon{f(\varphi)}+\frac{\varepsilon}{f(\varphi)-\varepsilon} \right) \\
&\le M_2\varphi^{-\alpha-1}[\log\varphi]^l\left( \frac\varepsilon{\underline m\varphi^{-\alpha}[\log\varphi]^l}+\frac{\varepsilon}{\underline m\varphi^{-\alpha}[\log\varphi]^l-\varepsilon} \right) \\
&= \varepsilon\cdot M_2\,\varphi^{-1}\left( \frac1{\underline m}+\frac1{\underline m-\varepsilon\cdot\frac{\varphi^{\alpha}}{[\log\varphi]^l}} \right).
\end{align*}
Since $\varphi_2(\varepsilon)\le\overline c\cdot\varepsilon^{-1/(1+\alpha-\delta)}$, see the proof of Theorem \ref{F}, we have
\[
\varepsilon\cdot\frac{\varphi^\alpha}{[\log\varphi]^l}\le\varepsilon\cdot\varphi^\alpha\le\varepsilon\cdot\varphi_2(\varepsilon)^\alpha \le\overline c^{\,\alpha}\varepsilon^{(1-\delta)/{(1+\alpha-\delta)}}\le\frac 12\,\underline m
\]
for all $\varepsilon\in(0,\varepsilon_0)$, provided $\varepsilon_0$ is sufficiently small. Therefore,
\begin{equation}
D\le\varepsilon\cdot\frac{3M_2}{\underline m\,\varphi_1}<\varepsilon,
\end{equation}
where we assume that $\varphi_1$ is sufficiently large, namely $\varphi_1>3M_2/\underline m$. The second condition of Theorem~\ref{FI} is also fulfilled. Indeed, since $\underline c\cdot\varepsilon^{-1/(1+\alpha)}\le\varphi_2(\varepsilon)\le\overline c\cdot\varepsilon^{-1/(1+\alpha-\overline\delta)}$, where $\overline\delta:=\sup_{\varepsilon\in(0,\varepsilon_0)} \delta(\varepsilon)$ and $\delta(\varepsilon)$ is defined as in the proof of Theorem \ref{F}, we conclude that
\begin{align*}
\underline m\,\overline c^{\,-\alpha}\varepsilon^{\alpha/(1+\alpha-\overline\delta)} &\le \underline m\,\overline c^{\,-\alpha}\varepsilon^{\alpha/(1+\alpha-\overline\delta)}\left[\log\left(\underline c \cdot \varepsilon^{-1/(1+\alpha)}\right)\right]^l \\
&\le f(\varphi_2(\varepsilon))\le\overline m\,\underline c^{\,-\alpha}\varepsilon^{\alpha/(1+\alpha)}\left[\log\left(\overline c \cdot \varepsilon^{-1/(1+\alpha-\overline\delta)}\right)\right]^l,
\end{align*}
that is, $\underline C\,\varepsilon^q\le f(\varphi_2(\varepsilon))\le\overline C\,\varepsilon^{1-\frac d2}\left[\log(\varepsilon^{-1})\right]^l$ for $\varepsilon$ small enough, where $q:=\alpha/(1+\alpha-\overline\delta)<1$. Therefore, by Theorem~\ref{FI} we have $\overline\dim_B\Gamma=\overline\dim_B(\Gamma,rad)$. From this, using (\ref{dimrad}) and Theorems~\ref{F} and~\ref{FI}, we obtain
$$
\frac2{1+\alpha}=\underline\dim_B(\Gamma,rad)\le \underline\dim_B\Gamma\le\overline\dim_B\Gamma=\overline\dim_B(\Gamma,rad)=\frac2{1+\alpha}.
$$
This proves (\ref{drad}).

(b) To prove (\ref{Mrad}) it suffices to check that for all $\varepsilon\in(0,\varepsilon_0)$,
\begin{equation}\label{c}
|\Gamma(\varphi_1,\infty)_{\varepsilon,rad}|-O(\varepsilon^2)\le|\Gamma(\varphi_1,\infty)_\varepsilon|.
\end{equation}
Indeed, $\Gamma(\varphi_1,\infty)_{\varepsilon,rad}\subseteq\Gamma(\varphi_1,\infty)_\varepsilon\cup A(\varepsilon)$, where $A(\varepsilon)$ is the part of $\Gamma(\varphi_1,\infty)_\varepsilon$ corresponding to angles $\varphi<\varphi_1$. The area of $A(\varepsilon)$ is of order $O(\varepsilon^2)$, since $A(\varepsilon)$ is contained in the disk $B_{\varepsilon}(T_1)$, where $T_1$ is the point on $\Gamma$ corresponding to $\varphi_1$. From (\ref{c}) we have ${\mathcal M}^{*s}(\Gamma,rad)\leq{\mathcal M}^{*s}(\Gamma)$ for all $s\geq 0$. From Theorem \ref{F} it follows that ${\mathcal M}^{*d}(\Gamma,rad)=+\infty$, hence ${\mathcal M}^{*d}(\Gamma)=+\infty$.
\end{proof}

\paragraph{Acknowledgments.}
This research was supported by the Croatian Science Foundation (HRZZ) under the project IP-2014-09-2285, the French ANR project STAAVF 11-BS01-009, the French--Croatian bilateral Cogito project \emph{Classification de points fixes et de singularit\'es \`a l'aide d'epsilon-voisinages d'orbites et de courbes}, and University of Zagreb research support for 2015 and 2016.

\bibliographystyle{plain}
\bibliography{bibliografija}

\end{document}
\begin{document} \title{The simplex method is strongly polynomial for deterministic Markov decision processes} \author{Ian Post \thanks{Department of Combinatorics and Optimization, University of Waterloo. Research done while at Stanford University. Email: \href{mailto:[email protected]}{[email protected]}. Research supported by NSF grant 0904325. We also acknowledge financial support from grant \#FA9550-12-1-0411 from the U.S. Air Force Office of Scientific Research (AFOSR) and the Defense Advanced Research Projects Agency (DARPA). } \and Yinyu Ye \thanks{Department of Management Science and Engineering, Stanford University. Email: \href{mailto:[email protected]}{[email protected]}.} } \maketitle \pdfbookmark[1]{Abstract}{MyAbstract} \begin{abstract} We prove that the simplex method with the highest gain/most-negative-reduced cost pivoting rule converges in strongly polynomial time for deterministic Markov decision processes (MDPs) regardless of the discount factor. For a deterministic MDP with $n$ states and $m$ actions, we prove the simplex method runs in $O(n^3 m^2 \log^2 n)$ iterations if the discount factor is uniform and $O(n^5 m^3 \log^2 n)$ iterations if each action has a distinct discount factor. Previously the simplex method was known to run in polynomial time only for discounted MDPs where the discount was bounded away from 1 \cite{ye_simplex}. Unlike in the discounted case, the algorithm does not greedily converge to the optimum, and we require a more complex measure of progress. We identify a set of layers in which the values of primal variables must lie and show that the simplex method always makes progress optimizing one layer, and when the upper layer is updated the algorithm makes a substantial amount of progress. 
In the case of nonuniform discounts, we define a polynomial number of ``milestone'' policies and we prove that, while the objective function may not improve substantially overall, the value of at least one dual variable is always making progress towards some milestone, and the algorithm will reach the next milestone in a polynomial number of steps. \end{abstract} \section{Introduction} Markov decision processes (MDPs) are a powerful tool for modeling repeated decision making in stochastic, dynamic environments. An MDP consists of a set of states and a set of actions that one may perform in each state. Based on an agent's actions it receives rewards and affects the future evolution of the process, and the agent attempts to maximize its rewards over time (see Section \ref{section_prelim} for a formal definition). MDPs are widely used in machine learning, robotics and control, operations research, economics, and related fields. See the books \cite{puterman} and \cite{bertsekas} for a thorough overview. Solving MDPs is also an important problem theoretically. Optimizing an MDP can be formulated as a linear program (LP), and although these LPs possess extra structure that can be exploited by algorithms like Howard's policy iteration method \cite{howard}, they lie just beyond the point at which our ability to solve LPs in strongly-polynomial time ends (and are a natural target for extending this ability), and they have proven to be hard in general for algorithms previously thought to be quite powerful, such as randomized simplex pivoting rules \cite{fhz_randsimplex}. In practice \cite{littmandeankaelbling} MDPs are solved using policy iteration, which may be viewed as a parallel version of the simplex method with multiple simultaneous pivots, or value iteration \cite{bellman}, an inexact approximation to policy iteration that is faster per iteration. 
If the discount factor $\gamma$, which determines the effective time horizon (see Section \ref{section_prelim}), is small, it has long been known that policy and value iteration will find an $\epsilon$-approximation to the optimum \cite{bellman}. It is also well known that value iteration may be exponential, but policy iteration resisted worst-case analysis for many years. It was conjectured to be strongly polynomial, but except for highly restricted examples \cite{madani}, only exponential time bounds were known \cite{mansoursingh}. Building on results for parity games \cite{paritygame}, Fearnley recently gave an exponential lower bound \cite{fearnley}. Friedmann, Hansen, and Zwick extended Fearnley's techniques to achieve sub-exponential lower bounds for randomized simplex pivoting rules \cite{fhz_randsimplex} using MDPs, and Friedmann gave an exponential lower bound for MDPs using the least-entered pivoting rule \cite{friedmann_simplex}. Melekopoglou and Condon proved several other simplex pivoting rules are exponential \cite{melecondon_simplex}. On the positive side, Ye designed a specialized interior-point method that is strongly polynomial in everything except the discount factor \cite{ye_intpt}. Ye later proved that for discounted MDPs with $n$ states and $m$ actions, the simplex method with the most-negative-reduced-cost pivoting rule and, by extension, policy iteration run in time $O(nm/(1-\gamma)\log(n/(1-\gamma)))$, which is polynomial for fixed $\gamma$ \cite{ye_simplex}. Hansen, Miltersen, and Zwick improved the policy iteration bound to $O(m/(1-\gamma)\log(n/(1-\gamma)))$ and extended it to both value iteration and the strategy iteration algorithm for two-player turn-based stochastic games \cite{strategyiteration}. But the performance of policy iteration and simplex-style basis-exchange algorithms on MDPs remains poorly understood.
Policy iteration, for instance, is conjectured to run in $O(m)$ iterations on deterministic MDPs, but the best upper bounds are exponential, although a lower bound of $\Omega(m)$ is known \cite{hansenzwick_mmc}. Improving our understanding of these algorithms is an important step in designing better ones with polynomial or even strongly polynomial guarantees. Motivated by these questions, we analyze the simplex method with the most-negative-reduced-cost pivoting rule on deterministic MDPs. For a deterministic MDP with $n$ states and $m$ actions, we prove that the simplex method terminates in $O(n^3m^2\log^2 n)$ iterations regardless of the discount factor, and if each action has a distinct discount factor, then the algorithm runs in $O(n^5 m^3 \log^2 n)$ iterations. Our results do not extend to policy iteration, and we leave this as a challenging open question. Deterministic MDPs were previously known to be solvable in strongly polynomial time using specialized methods not applicable to general MDPs---minimum mean cycle algorithms \cite{papadimitriou_mdp} or, in the case of nonuniform discounts, methods exploiting the property that the dual LP has only two variables per inequality \cite{hochbaum_tvpi}. The fastest known algorithm for uniformly discounted deterministic MDPs runs in time $O(mn)$ \cite{madani2010discounted}. However, these problems were not known to be solvable in polynomial time with the more generic simplex method. More generally, we believe that our results help shed some light on how algorithms like simplex and policy iteration function on MDPs. Our proof techniques, particularly in the case of nonuniform discounts, may be of independent interest. For uniformly discounted MDPs, we show that the values of the primal flux variables must lie within one of two intervals or layers of polynomial size, depending on whether an action is on a path or a cycle.
Most iterations update variables in the smaller path layer, and we show these converge rapidly to a locally optimal policy for the paths, at which point the algorithm must update the larger cycle layer and makes a large amount of progress towards the optimum. Progress takes the form of many small improvements interspersed with a few much larger ones rather than uniform convergence. The nonuniform case is harder, and our measure of progress is unusual and, to the best of our knowledge, novel. We again define a set of intervals in which the value of variables on cycles must fall, and these define a collection of intermediate milestone or checkpoint values for each dual variable (the value of a state in the MDP). Whenever a variable enters a cycle layer, we argue that a corresponding dual variable is making progress towards the layer's milestone and will pass this value after enough updates. When each of these checkpoints has been passed, the algorithm must have reached the optimum. We believe some of these ideas may prove useful in other problems as well. In Section \ref{section_prelim} we formally define MDPs and describe a number of well-known properties that we require. In Section \ref{section_uniform} we analyze the case of a uniform discount factor, and in Section \ref{section_nonuniform} we extend these results to the nonuniform case. \section{Preliminaries} \label{section_prelim} Many variations and extensions of MDPs have been defined, but we will study the following problem. A Markov decision process consists of a set of $n$ states $S$ and $m$ actions $A$. Each action $a$ is associated with a single state $s$ in which it can be performed, a reward $\mb{r}_a \in \mathbb{R}$ for performing the action, and a probability distribution $P_a$ over states to which the process will transition when using action $a$. We denote by $P_{a,s}$ the probability of transitioning to state $s$ when taking action $a$. There is at least one action usable in each state.
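Since every action in the deterministic setting has a single target state, such an MDP can be encoded directly as a directed multigraph with rewarded edges. The following minimal sketch (an illustrative encoding with toy data, not taken from the paper; policies are defined formally below) records each action as a (source, target, reward) triple and checks the basic well-formedness conditions just described:

```python
# States are nodes; each action is a directed edge (source, target, reward).
states = [0, 1, 2]
actions = {
    "a0": (0, 1, 2.0),   # in state 0, move to state 1, reward 2.0
    "a1": (0, 2, 0.0),
    "a2": (1, 2, 1.0),
    "a3": (2, 0, -1.0),
}

# A pure policy chooses a single admissible action for each state.
policy = {0: "a0", 1: "a2", 2: "a3"}

# Every state has at least one action, and each chosen action is
# performable in the state it is assigned to.
for s in states:
    assert any(src == s for (src, tgt, rew) in actions.values())
for s, a in policy.items():
    assert actions[a][0] == s
```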
Let $\mb{r}$ be the vector of rewards indexed by $a$ with entries $\mb{r}_a$, $A_s \subset A$ be the set of actions performable in state $s$, and $P$ be the $n$ by $m$ matrix with columns $P_a$ and entries $P_{a,s}$. We will restrict the distributions $P_a$ to be deterministic for all actions, in which case states may be thought of as nodes in a graph and actions as directed edges. However, the results in this section apply to MDPs with stochastic transitions as well. At each time step, the MDP starts in some state $s$ and performs an action $a$ admissible in state $s$, at which point it receives the reward $\mb{r}_a$ and transitions to a new state $s'$ according to the probability distribution $P_a$. We are given a discount factor $\gamma < 1$ as part of the input, and our goal is to choose actions to perform so as to maximize the expected discounted reward we accumulate over an infinite time horizon. The discount can be thought of as a stopping probability---at each time step the process ends with probability $1-\gamma$. Normally, the discount $\gamma$ is uniform for the entire MDP, but in Section \ref{section_nonuniform} we will allow each action to have a distinct discount $\gamma_a$. Due to the Markov property---transitions depend only on the current state and action---there is an optimal strategy that is memoryless and depends only on the current state. Let $\pi$ be such a {\em policy}, a distribution of actions to perform in each state. This defines a Markov chain and a value for each state: \begin{defn} \label{def_value} Let $\pi$ be a policy, $P^\pi$ be the $n$ by $n$ matrix where $P^\pi_{s,s'}$ is the probability of transitioning from $s'$ to $s$ using $\pi$, and $\mb{r}_\pi$ the vector of expected rewards for each state according to the distribution of actions in $\pi$. The {\em value vector} $\mb{v}^\pi$ is indexed by states, and $\mb{v}^\pi_s$ is equal to the expected total discounted reward of starting in state $s$ and following policy $\pi$. 
It is defined as $\mb{v}^\pi = \sum_{i\ge 0} (\gamma (P^\pi)^T)^i\mb{r}_\pi = (I-\gamma P^\pi)^{-T}\mb{r}_\pi$ or equivalently by \begin{equation} \label{eq_value_vector} \mb{v}^\pi = \mb{r}_\pi + \gamma (P^\pi)^T \mb{v}^\pi . \end{equation} \end{defn} If policy $\pi$ is randomized and uses two or more actions in some state $s$, then the value of $\mb{v}^\pi_s$ is an average of the values of performing each of the pure actions in $s$, and one of these is the largest. Therefore we can replace the distribution by a single action and only increase the value of the state. In the remainder of the paper we will restrict ourselves to pure policies in which a single action is taken in each state. In addition to the value vector, a policy $\pi$ also has an associated flux vector $\mb{x}^\pi$ that will play a critical role in our analysis. It acts as a kind of ``discounted flow.'' Suppose we start with a single unit of ``mass'' on every state and then run the Markov chain. At each time step we remove a $1-\gamma$ fraction of the mass on each state and redistribute the remaining mass according to the policy $\pi$. Summing over all time steps, the total amount of mass that passes through each action is its flux. More formally, \begin{defn} \label{def_flux} Let $\pi$ be a policy and $P^\pi$ the $n$ by $n$ transition matrix for $\pi$ formed by the columns $P_a$ for actions in $\pi$. The {\em flux vector} $\mb{x}^\pi$ is indexed by actions. If action $a$ is not in $\pi$ then $\mb{x}^\pi_a = 0$, and if $\pi$ uses $a$ in state $s$, then $\mb{x}^\pi_a = \mb{y}_s$, where \begin{equation} \label{eq_flux_vector} \mb{y} = \sum_{i \ge 0} (\gamma P^\pi)^i \mb{1} = (I - \gamma P^\pi)^{-1} \mb{1} \; , \end{equation} and $\mb{1}$ is the all ones vector of dimension $n$. The flux is the total discounted number of times we use each action if we start the MDP in all states and run the Markov chain $P^\pi$ discounting by $\gamma$ each iteration. 
\end{defn} Note that if $a \in \pi$ then $\mb{x}^\pi_a \ge 1$, since the initial flux placed on $a$'s state always passes through $a$. Further note that each bit of flux can be traced back to one of the initial units of mass placed on each state, although the vector $\mb{x}^\pi$ sums flux from all states. This will be important in Section \ref{section_nonuniform}. Solving the MDP can be formulated as the following primal/dual pair of LPs, in which the flux and value vectors correspond to primal and (possibly infeasible) dual solutions: \begin{equation} \label{eq_primal} \begin{array}{rlrl} \textsc{Primal:} && \\ \textrm{maximize} & \multicolumn{2}{l}{ \sum_a \mb{r}_a\mb{x}_a } \\ \textrm{subject to} &\forall s \in S, & \sum_{a \in A_s} \mb{x}_a & = 1 + \gamma \sum_{a} P_{a,s}\mb{x}_a\\ & & \mb{x} & \ge 0 \\ \end{array} \end{equation} \begin{equation} \label{eq_dual} \begin{array}{rlrl} \textsc{Dual:} && \\ \textrm{minimize} & \multicolumn{2}{l}{ \sum_s \mb{v}_s } \\ \textrm{subject to} &\forall s \in S, a \in A_s, & \mb{v}_s \ge \mb{r}_a + \gamma \sum_{s'} P_{a,s'}\mb{v}_{s'} \\ \end{array} \end{equation} The constraint matrix of \eqref{eq_primal} is equal to $M - \gamma P$, where $M_{s,a} = 1$ if action $a$ can be used in state $s$ and 0 otherwise. The dual value LP \eqref{eq_dual} is often defined as the primal, as it is perhaps more intuitive, and \eqref{eq_primal} is rarely considered. However, our analysis centers on the flux variables, and algorithms that manipulate policies can more naturally be seen as moving through the polytope \eqref{eq_primal}, since vertices of the polytope represent policies: \begin{lem} The LP \eqref{eq_primal} is non-degenerate, and there is a bijection between vertices of the polytope and policies of the MDP. \label{lem_vertices} \end{lem} \begin{proof} Policies have exactly $n$ nonzero variables, and solving for the flux vector in \eqref{eq_flux_vector} is identical to solving for a basis in the polytope, so policies map to bases. 
Write the constraints in \eqref{eq_primal} in the standard matrix form $A\mb{x} = \mb{b}$. The vector $\mb{b}$ is $ \mb{1}$, and $A = M - \gamma P$. In a row $s$ of $A$ the only positive entries are on actions usable in state $s$, so if $A\mb{x} = \mb{b}$, then $\mb{x}$ must have a nonzero entry for every state, i.e., a choice of action for every state. Bases of the LP have $n$ variables, so they must include only one action per state. Finally, as shown above $\mb{x}^\pi_a \ge 1$ for all $a$ in a policy/basis, so the LP is not degenerate, and bases correspond to vertices. \end{proof} By Lemma \ref{lem_vertices}, the simplex method applied to \eqref{eq_primal} corresponds to a simple, single-switch version of policy iteration: we start with an arbitrary policy, and in each iteration we change a single action that improves the value of some state. Since the LP is not degenerate, the simplex method will find the optimal policy with no cycling. We will use Dantzig's most-negative-reduced-cost pivoting rule to choose the action switched. Since \eqref{eq_primal} is written as a maximization problem, we will refer to reduced costs as gains and always choose the highest gain action to switch/pivot. For MDPs, the gains have a simple interpretation: \begin{defn} \label{def_reduced_costs} The {\em gain} (or {\em reduced cost}) of an action $a$ for state $s$ with respect to a policy $\pi$ is denoted $\mb{r}^\pi_a$ and is the improvement in the value of $s$ if $s$ uses action $a$ once and then follows $\pi$ for all time. Formally, $\mb{r}^\pi_a = (\mb{r}_a + \gamma P_a^T \mb{v}^{\pi}) - \mb{v}^\pi_s$, or, in vector form \begin{equation} \label{eq_reducedcost} \mb{r}^\pi = \mb{r} - (M - \gamma P)^T\mb{v}^\pi \; . \end{equation} \end{defn} We denote the optimal policy by $\pi^*$, and the optimal flux, values, and gains by $\mb{x}^*$, $\mb{v}^*$, and $\mb{r}^*$. The following are basic properties of the simplex method, and we prove them for completeness. 
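To make Definitions \ref{def_value}--\ref{def_reduced_costs} concrete, the following sketch (our own toy encoding, not from the paper) evaluates the value vector, flux vector, and gains of a small deterministic MDP by iterating the defining fixed-point equations \eqref{eq_value_vector} and \eqref{eq_flux_vector}; the iterations converge geometrically since $\gamma < 1$.

```python
# Sketch: evaluate v^pi, x^pi, and the gains r^pi for a small deterministic MDP.
# The encoding (src, dst, reward) triples and all names below are our own.
gamma = 0.9
actions = [(0, 1, 2.0), (1, 2, -1.0), (2, 0, 3.0), (2, 2, 0.5)]  # (src, dst, reward)
n, m = 3, len(actions)
policy = {0: 0, 1: 1, 2: 2}  # one action per state: the cycle 0 -> 1 -> 2 -> 0

def values(policy, iters=2000):
    # Iterate v_s = r_a + gamma * v_{dst(a)} for the action a used in s.
    v = [0.0] * n
    for _ in range(iters):
        v = [actions[policy[s]][2] + gamma * v[actions[policy[s]][1]] for s in range(n)]
    return v

def flux(policy, iters=2000):
    # Iterate y_s = 1 + gamma * (mass flowing into s under the policy).
    y = [1.0] * n
    for _ in range(iters):
        y = [1.0 + gamma * sum(y[t] for t in range(n) if actions[policy[t]][1] == s)
             for s in range(n)]
    x = [0.0] * m            # flux is zero on actions the policy does not use
    for s, a in policy.items():
        x[a] = y[s]
    return x

v = values(policy)
x = flux(policy)
gains = [r + gamma * v[dst] - v[src] for (src, dst, r) in actions]

assert all(x[policy[s]] >= 1.0 for s in range(n))   # flux on policy actions is >= 1
assert all(abs(gains[a]) < 1e-9 for a in policy.values())  # used actions have gain 0
```

On this instance every policy action carries flux $1/(1-\gamma) = 10$, since the policy is a single $3$-cycle.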
\begin{lem} \label{lem_reducedcosts} Let $\pi$ and $\pi'$ be any policies. The gains satisfy the following properties \begin{itemize} \item $(\mb{r}^\pi)^T\mb{x}^{\pi'} = \mb{r}^T\mb{x}^{\pi'} - \mb{r}^T\mb{x}^\pi = \mb{1}^T\mb{v}^{\pi'}-\mb{1}^T\mb{v}^\pi$, \item $\mb{r}^\pi_a = 0$ for all $a \in \pi$, and \item $\mb{r}^*_a \le 0$ for all $a$. \end{itemize} \end{lem} \begin{proof} From the definition of gains $(\mb{r}^\pi)^T\mb{x}^{\pi'} = (\mb{r} - (M - \gamma P)^T\mb{v}^\pi)^T\mb{x}^{\pi'} = \mb{r}^T\mb{x}^{\pi'} - (\mb{v}^\pi)^T(M-\gamma P) \mb{x}^{\pi'} = \mb{r}^T\mb{x}^{\pi'} - (\mb{v}^{\pi})^T\mb{1}$, using that $(M-\gamma P)$ is the constraint matrix of \eqref{eq_primal}. From the definition of value and flux vectors $\mb{r}^T\mb{x}^{\pi} = \mb{r}_\pi^T (I-\gamma P^\pi)^{-1}\mb{1} = (\mb{v}^\pi)^T \mb{1}$, where $\mb{r}_\pi$ is the reward vector restricted to indices $\pi$. Combining these two gives the first result. For the second result, if $a$ is in $\pi$, then $\mb{v}^\pi_s = \mb{r}_a + \gamma P_a^T \mb{v}^\pi$, so $\mb{r}^\pi_a = 0$. Finally, if $\mb{r}^*_a > 0$ for some $a$, then consider the policy $\pi$ that is identical to $\pi^*$ but uses $a$. Then $(\mb{r}^*)^T\mb{x}^{\pi} > 0$, and the first identity proves that $\pi^*$ is not optimal. \end{proof} A key property of the simplex method on MDPs that we will employ repeatedly is that not only is the overall objective improving, but also the values of all states are monotone non-decreasing, and there exists a single policy we denote by $\pi^*$ that maximizes the values of all states: \begin{lem} \label{lem_monotone} Let $\pi$ and $\pi'$ be policies appearing in an execution of the simplex method with $\pi'$ being used after $\pi$. Then $\mb{v}^{\pi'} \ge \mb{v}^{\pi}$. Further, let $\pi^*$ be the policy when simplex terminates, and $\pi''$ be any other policy. Then $\mb{v}^* \ge \mb{v}^{\pi''}$. \end{lem} \begin{proof} Suppose $\pi$ and $\pi'$ are subsequent policies. 
The gains of all actions in $\pi'$ with respect to $\pi$ are equal to $\mb{r}_{\pi'} - (I - \gamma P^{\pi'})^T\mb{v}^{\pi}$, all of which are nonnegative. Therefore $\mb{0} \le (I-\gamma P^{\pi'})^{-T}(\mb{r}_{\pi'} - (I - \gamma P^{\pi'})^T\mb{v}^{\pi}) = \mb{v}^{\pi'} - \mb{v}^\pi$, using that $(I-\gamma P^{\pi'})^{-T} = \sum_{i\ge 0} (\gamma (P^{\pi'})^T)^i \ge \mb{0}$. By induction, this holds if $\pi$ and $\pi'$ occur further apart. Performing a similar calculation using the gains $\mb{r}^*$, which are nonpositive, shows that $\mb{v}^* - \mb{v}^{\pi''} \ge \mb{0}$ for any policy $\pi''$. \end{proof} \section{Uniform discount} \label{section_uniform} As a warmup before delving into our analysis of deterministic MDPs, we briefly review the analysis of \cite{ye_simplex} for stochastic MDPs with a fixed discount. Consider the flux vector in Definition \ref{def_flux}. One unit of flux is added to each state, and every step it is discounted by a factor of $\gamma$, for a total of $n(1 + \gamma + \gamma^2 + \cdots) = n/(1-\gamma)$ flux overall. If $\pi$ is the current policy and $\Delta$ is the highest gain, then, by Lemma \ref{lem_reducedcosts} the farthest $\pi$ can be from $\pi^*$ is if all $n/(1-\gamma)$ units of flux in $\pi^*$ are on the action with gain $\Delta$, so $\mb{r}^T\mb{x}^* - \mb{r}^T\mb{x}^\pi \le n\Delta/(1-\gamma)$. If we pivot on this action, at least 1 unit of flux is placed on the new action, increasing the objective by at least $\Delta$. Thus we have shrunk the gap to $\pi^*$ by a factor of $1 - (1-\gamma)/n$, which is substantial if $1/(1-\gamma)$ is polynomial. Now consider $\mb{r}^T\mb{x}^* - \mb{r}^T\mb{x}^\pi = -(\mb{r}^*)^T\mb{x}^\pi$. All the terms $-\mb{r}^*_a \mb{x}^\pi_a$ are nonnegative, and for some action $a$ in $\pi$ we have $-\mb{r}^*_a \mb{x}^\pi_a \ge -(\mb{r}^*)^T\mb{x}^\pi / n$. The term $-\mb{r}^*_a \mb{x}^\pi_a$ is at most $-\mb{r}^*_a n/(1-\gamma)$, so $-\mb{r}^*_a \ge -(\mb{r}^*)^T\mb{x}^\pi / (n^2/(1-\gamma))$. 
But for any policy $\pi'$ that includes $a$, $-(\mb{r}^*)^T\mb{x}^{\pi'} \ge -\mb{r}^*_a \mb{x}^{\pi'}_a \ge - \mb{r}^*_a$, so after $\mb{r}^T\mb{x}^* - \mb{r}^T\mb{x}^\pi$ has shrunk by a factor of $n^2/(1-\gamma)$, action $a$ cannot appear in any future policy, and this occurs after \[ \log_{1-(1-\gamma)/n} \frac{1-\gamma}{n^2} = O\left( \frac{n}{1-\gamma}\log \frac{n}{1-\gamma} \right) \] steps. See \cite{ye_simplex} for the details. The above result hinged on the fact that the size of all nonzero flux lay within the interval $[1,n/(1-\gamma)]$, which was assumed to be polynomial but gives a weak bound if $\gamma$ is very close to 1. However, consider a policy for a deterministic MDP. It can be seen as a graph with a node for each state and a single directed edge leaving each node representing the chosen action, so the graph consists of one or more directed cycles and directed paths leading to these cycles. Starting on a path, the MDP uses each path action once before reaching a cycle, so the flux on paths must be small. Flux on the cycles may be substantially larger, but since the MDP revisits each action after at most $n$ steps, the flux on cycle actions varies by at most a factor of $n$. \begin{lem} \label{lem_flux_size} Let $\pi$ be a policy with flux vector $\mb{x}^\pi$ and $a$ an action in $\pi$. If $a$ is on a path in $\pi$ then $1 \le \mb{x}^\pi_a \le n$, and if $a$ is on a cycle then $1/(1-\gamma) \le \mb{x}^\pi_a \le n/(1-\gamma)$. The total flux on paths is at most $n^2$, and the total flux on cycles is at most $n/(1-\gamma)$. \end{lem} \begin{proof} All actions have at least 1 flux. If $a$ is on a path, then starting from any state we can only use $a$ once and never return, contributing flux at most 1 per state, so $\mb{x}^\pi_a \le n$. Summing over all path actions, the total flux is at most $n^2$. If $a$ is on a cycle, each state on the cycle contributes a total of $1/(1-\gamma)$ flux to the cycle. 
By symmetry this flux is distributed evenly among actions on the cycle, so $\mb{x}^\pi_a \ge 1/(1-\gamma)$. The total flux in the MDP is $n/(1-\gamma)$, so $\mb{x}^\pi_a \le n/(1-\gamma)$. \end{proof} The overall range of flux is large, but all values must lie within one of two polynomial layers. We will prove that simplex can essentially optimize each layer separately. If a cycle is not updated, then not much progress is made towards the optimum, but we make a substantial amount of progress in optimizing the paths for the current cycles. When the paths are optimal, the algorithm is forced to update a cycle, at which point it makes a substantial amount of progress towards the optimum but resets all progress on the paths. First we analyze progress on the paths: \begin{lem} \label{lem_path_converge} Suppose the simplex method pivots from $\pi$ to $\pi'$, which does not create a new cycle. Let $\pi''$ be the final policy such that cycles in $\pi''$ are a subset of those in $\pi$ (i.e., the final policy before a new cycle is created). Then $\mb{r}^T(\mb{x}^{\pi''} - \mb{x}^{\pi'}) \le (1-1/n^2)\mb{r}^T(\mb{x}^{\pi''} - \mb{x}^{\pi})$. \end{lem} \begin{proof} Let $\Delta = \max_a \mb{r}^\pi_a$ be the highest gain. Consider $(\mb{r}^{\pi})^T\mb{x}^{\pi''}$. Since cycles in $\pi''$ are contained in $\pi$, $\mb{r}^{\pi}_a = 0$ for any action $a$ on a cycle in $\pi''$, and by Lemma \ref{lem_flux_size}, $\pi''$ has at most $n^2$ units of flux on paths, so $(\mb{r}^{\pi})^T\mb{x}^{\pi''} = \mb{r}^T(\mb{x}^{\pi''} - \mb{x}^\pi) \le n^2\Delta$. Policy $\pi'$ has at least 1 unit of flux on the action with gain $\Delta$, so \[ \mb{r}^T(\mb{x}^{\pi''} - \mb{x}^{\pi'}) \le \mb{r}^T(\mb{x}^{\pi''} - \mb{x}^{\pi}) - \Delta \le \left(1-\frac{1}{n^2}\right)\mb{r}^T(\mb{x}^{\pi''} - \mb{x}^{\pi}) \; . \qedhere \] \end{proof} Due to the polynomial contraction in the lemma above, not too many iterations can pass before a new cycle is formed. 
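The single-switch simplex method analyzed here can be sketched as follows (our own toy instance and names, not from the paper): pivot on the highest-gain action until no gain is positive, verifying at every intermediate policy that path flux lies in $[1,n]$ and cycle flux in $[1/(1-\gamma), n/(1-\gamma)]$, the two layers of Lemma \ref{lem_flux_size}.

```python
# Sketch of simplex with the highest-gain (Dantzig) pivoting rule on a toy
# deterministic MDP, checking the two flux layers at every policy it visits.
gamma = 0.9
# (src, dst, reward); each state has one "real" action and one self-loop.
actions = [(0, 1, 2.0), (0, 0, 0.0),
           (1, 2, -1.0), (1, 1, 0.0),
           (2, 0, 3.0), (2, 2, 0.5)]
n = 3

def values(pi, iters=3000):
    v = [0.0] * n
    for _ in range(iters):                    # fixed-point iteration for v^pi
        v = [actions[pi[s]][2] + gamma * v[actions[pi[s]][1]] for s in range(n)]
    return v

def flux(pi, iters=3000):
    y = [1.0] * n
    for _ in range(iters):                    # fixed-point iteration for the flux
        y = [1.0 + gamma * sum(y[t] for t in range(n) if actions[pi[t]][1] == s)
             for s in range(n)]
    return y  # y[s] = flux through the action pi uses in state s

def on_cycle(pi, s):
    t = s                                     # s is on a cycle iff the policy
    for _ in range(n):                        # walk returns to s within n steps
        t = actions[pi[t]][1]
        if t == s:
            return True
    return False

pi = {0: 1, 1: 3, 2: 5}                       # start with all self-loops
pivots = 0
while True:
    y = flux(pi)
    for s in range(n):                        # path layer or cycle layer
        lo, hi = ((1 / (1 - gamma), n / (1 - gamma)) if on_cycle(pi, s) else (1, n))
        assert lo - 1e-6 <= y[s] <= hi + 1e-6
    v = values(pi)
    gains = [r + gamma * v[dst] - v[src] for (src, dst, r) in actions]
    best = max(range(len(actions)), key=lambda a: gains[a])
    if gains[best] <= 1e-9:
        break                                 # no improving action: optimal
    pi[actions[best][0]] = best               # single-switch pivot
    pivots += 1
```

On this instance three pivots suffice, the last of which creates a new cycle, illustrating the pattern of path updates punctuated by occasional cycle updates.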
\begin{lem} \label{lem_path_elimination} Let $\pi$ be a policy. After $O(n^2\log n)$ iterations starting from $\pi$, either the algorithm finishes, a new cycle is created, a cycle is broken, or some action in $\pi$ never appears in a policy again until a new cycle is created. \end{lem} \begin{proof} Let $\pi$ be the policy in some iteration, $\pi'$ the last policy before a new cycle is created, and $\pi''$ an arbitrary policy occurring between $\pi$ and $\pi'$ in the algorithm. Policy $\pi$ differs from $\pi'$ in actions on paths and possibly in cycles that exist in $\pi$ but have been broken in $\pi'$. By Lemma \ref{lem_reducedcosts} $-(\mb{r}^{\pi'})^T\mb{x}^\pi = \mb{r}^T(\mb{x}^{\pi'}- \mb{x}^{\pi}) = \mb{1}^T(\mb{v}^{\pi'}-\mb{v}^{\pi})$. We divide the analysis into two cases. First suppose that there exists an action $a$ used in state $s$ on a path such that $-\mb{r}^{\pi'}_a\mb{x}^{\pi}_a \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}/n$ (note $(\mb{r}^{\pi'})^T\mb{x}^\pi \le 0$). Since $a$ is on a path $\mb{x}^\pi_a \le n$, which implies $-\mb{r}^{\pi'}_a n^2 \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}$. Now if policy $\pi''$ uses action $a$, then \begin{align*} -(\mb{r}^{\pi'})^T\mb{x}^{\pi''} = \mb{1}^T(\mb{v}^{\pi'}-\mb{v}^{\pi''}) \ge \mb{v}^{\pi'}_s - \mb{v}^{\pi''}_s = & \mb{v}^{\pi'}_s - (\mb{r}_a + \gamma P_a^T \mb{v}^{\pi''}) \\ \ge & \mb{v}^{\pi'}_s - (\mb{r}_a + \gamma P_a^T \mb{v}^{\pi'}) = -\mb{r}^{\pi'}_a \ge \frac{-(\mb{r}^{\pi'})^T\mb{x}^{\pi}}{n^2} \; , \end{align*} using that the values of all states are monotone nondecreasing. In the second case there is no action $a$ on a path in $\pi$ satisfying $-\mb{r}^{\pi'}_a\mb{x}^{\pi}_a \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}/n$. The remaining portion of $-(\mb{r}^{\pi'})^T\mb{x}^{\pi}$ is due to cycles, so there must be some cycle $C$ consisting of actions $\{a_1, \ldots, a_k\}$ used in states $\{s_1,\ldots, s_k\}$ such that $\sum_{a \in C} -\mb{r}^{\pi'}_a \mb{x}^\pi_a \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}/n$. 
All flux in $C$ first enters $C$ either from a path ending at $C$ or from the initial unit of flux placed on some state $s$ in $C$. If $y_s \ge 1$ units of flux first enter $C$ at state $s$ in policy $\pi$, then that flux earns $y_s(\mb{v}^{\pi'}_s-\mb{v}^{\pi}_s)$ reward with respect to the rewards $-\mb{r}^{\pi'}$, so $\sum_{a \in C} -\mb{r}^{\pi'}_a \mb{x}^\pi_a = \sum_{s \in C} y_s(\mb{v}^{\pi'}_s-\mb{v}^{\pi}_s)$. Moreover, each term $\mb{v}^{\pi'}_s - \mb{v}^{\pi}_s$ is nonnegative, since the values of all states are nondecreasing. Now note that $\sum_{s \in C} (\mb{v}^{\pi'}_s - \mb{v}^{\pi}_s) = \sum_{a \in C} -\mb{r}^{\pi'}_a/(1-\gamma)$, and at most $n$ units of flux enter the cycle at each state. Therefore $-n \sum_{a \in C} \mb{r}^{\pi'}_a/(1-\gamma) \ge \sum_{a \in C} -\mb{r}^{\pi'}_a \mb{x}^\pi_a$, implying $-n^2 \sum_{a \in C} \mb{r}^{\pi'}_a/(1-\gamma) \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}$. As long as cycle $C$ is intact, each $a \in C$ has $1/(1-\gamma)$ flux from states in $C$ (Lemma \ref{lem_flux_size}), so if $C$ is in policy $\pi''$ then \begin{equation} \label{eqn_cycle_breaking} -(\mb{r}^{\pi'})^T\mb{x}^{\pi''} = \mb{1}^T(\mb{v}^{\pi'}-\mb{v}^{\pi''}) \ge \sum_{s \in C} (\mb{v}^{\pi'}_s - \mb{v}^{\pi''}_s) = -\frac{\sum_{a \in C} \mb{r}^{\pi'}_a}{1-\gamma} \ge \frac{-(\mb{r}^{\pi'})^T\mb{x}^{\pi}}{n^2} \; . \end{equation} Now if $\log_{n^2/(n^2-1)} n^2$ iterations occur between $\pi$ and $\pi''$, Lemma \ref{lem_path_converge} implies \[ -(\mb{r}^{\pi'})^T\mb{x}^{\pi''} < -\left(1-\frac{1}{n^2}\right)^{\log_{n^2/(n^2-1)} n^2} (\mb{r}^{\pi'})^T\mb{x}^{\pi} \le \frac{-(\mb{r}^{\pi'})^T\mb{x}^{\pi}}{n^2} \; . \] In the first case action $a$ cannot appear in $\pi''$, and in the second case cycle $C$ must be broken in $\pi''$. This takes $\log_{n^2/(n^2-1)} n^2 = O(n^2\log n)$ iterations if no new cycles interrupt the process. 
\end{proof} \begin{lem} \label{lem_new_cycle} Either the algorithm finishes or a new cycle is created after $O(n^2 m \log n)$ iterations. \end{lem} \begin{proof} Let $\pi_0$ be a policy after a new cycle is created, and consider the policies $\pi_1, \pi_2, \ldots$ each separated by $O(n^2\log n)$ iterations. If no new cycle is created, then by Lemma \ref{lem_path_elimination} each of these policies $\pi_i$ has either broken another cycle in $\pi_0$ or contains an action that cannot appear in $\pi_{j}$ for all $j >i$. There are at most $n$ cycles in $\pi_0$ and at most $m$ actions that can be eliminated, so after $(m+n)O(n^2 \log n) = O(n^2 m \log n)$ iterations, the algorithm must terminate or create a new cycle. \end{proof} When a new cycle is formed, the algorithm makes a substantial amount of progress towards the optimum but also resets the path optimality above. \begin{lem} \label{lem_cycle_converge} Let $\pi$ and $\pi'$ be subsequent policies such that $\pi'$ creates a new cycle. Then $\mb{r}^T(\mb{x}^* - \mb{x}^{\pi'}) \le (1-1/n)\mb{r}^T(\mb{x}^* - \mb{x}^\pi)$. \end{lem} \begin{proof} Let $\Delta = \max_{a'} \mb{r}^\pi_{a'}$ and $a = \argmax_{a'} \mb{r}^\pi_{a'}$. There is a total of $n/(1-\gamma)$ flux in the MDP, so $\mb{r}^T\mb{x}^* - \mb{r}^T\mb{x}^{\pi} = (\mb{r}^\pi)^T\mb{x}^* \le \Delta n/(1-\gamma)$. By Lemma \ref{lem_flux_size}, pivoting on $a$ and creating a cycle will result in at least $1/(1-\gamma)$ flux through $a$. Therefore $\mb{r}^T\mb{x}^{\pi'} \ge \mb{r}^T\mb{x}^\pi + \Delta/(1-\gamma)$, so \[ \mb{r}^T(\mb{x}^* - \mb{x}^{\pi'}) \le \mb{r}^T(\mb{x}^* - \mb{x}^{\pi}) - \frac{\Delta}{1-\gamma} \le \left(1-\frac{1}{n}\right)\mb{r}^T(\mb{x}^* - \mb{x}^{\pi}) \; . \qedhere \] \end{proof} \begin{lem} \label{lem_cycle_elimination} Let $\pi$ be a policy. 
Starting from $\pi$, after $O(n \log n)$ iterations in which a new cycle is created, some action in $\pi$ is either eliminated from cycles for the remainder of the algorithm or entirely eliminated from policies for the remainder of the algorithm. \end{lem} \begin{proof} Consider a policy $\pi$ with respect to the optimal gains $\mb{r}^*$. There is an action $a$ such that $-\mb{r}^*_a\mb{x}^\pi_a \ge -(\mb{r}^*)^T\mb{x}^\pi/n$. If $a$ is on a path in $\pi$, then $1 \le \mb{x}^\pi_a \le n$, so $-\mb{r}^*_a \ge -(\mb{r}^*)^T\mb{x}^\pi/n^2$, and if $a$ is on a cycle, then $1/(1-\gamma) \le \mb{x}^\pi_a \le n/(1-\gamma)$, so $-\mb{r}^*_a/(1-\gamma) \ge -(\mb{r}^*)^T\mb{x}^\pi/n^2$. Since $\mb{r}^*$ are the gains for the optimal policy, $\mb{r}^*_{a'} \le 0$ for all $a'$. Therefore if $\pi'$ is any policy containing $a$, then $-\mb{r}^*_a \le -\mb{r}^*_a \mb{x}^{\pi'}_a \le -(\mb{r}^*)^T\mb{x}^{\pi'}$, and if $\pi'$ is any policy containing $a$ on a cycle, then $-\mb{r}^*_a/(1-\gamma) \le -\mb{r}^*_a \mb{x}^{\pi'}_a \le -(\mb{r}^*)^T\mb{x}^{\pi'}$. Now by Lemma \ref{lem_cycle_converge}, if there are more than $\log_{n/(n-1)} n^2 = O(n \log n)$ new cycles created between policies $\pi$ and $\pi'$ then \[ -(\mb{r}^*)^T\mb{x}^{\pi'} < -\left(1-\frac{1}{n}\right)^{\log_{n/(n-1)} n^2}(\mb{r}^*)^T\mb{x}^{\pi} = -\frac{(\mb{r}^*)^T\mb{x}^\pi}{n^2} \; . \] Therefore if $\pi$ contained $a$ on a path, then $a$ cannot appear in any policy after $\pi'$ for the remainder of the algorithm, and if $\pi$ contained $a$ on a cycle, then $a$ cannot appear in a cycle after $\pi'$ (but may appear in a path) for the remainder of the algorithm. \end{proof} \begin{thm} The simplex method converges in at most $O(n^3 m^2 \log^2 n)$ iterations on deterministic MDPs with uniform discount using the highest gain pivoting rule. \end{thm} \begin{proof} Consider the policies $\pi_0, \pi_1, \pi_2,\ldots$ where $O(n \log n)$ new cycles have been created between $\pi_i$ and $\pi_{i+1}$. 
By Lemma \ref{lem_cycle_elimination}, each $\pi_i$ contains an action that is either eliminated entirely in $\pi_j$ for $j>i$ or eliminated from cycles. Each action can be eliminated from cycles and paths, so after $2m$ such rounds of $O(n\log n)$ new cycles the algorithm has converged. By Lemma \ref{lem_new_cycle} cycles are created every $O(n^2 m \log n)$ iterations, for a total of $O(n^3 m^2 \log^2 n)$ iterations. \end{proof} \section{Varying Discounts} \label{section_nonuniform} In this section we allow each action $a$ to have a distinct discount $\gamma_a$. This significantly complicates the proof of convergence since the total flux is no longer fixed. When updating a cycle we can no longer bound the distance to the optimum based solely on the maximum gain, since the optimal policy may employ actions with smaller gain with respect to the current policy but substantially more flux. We are able to exhibit a set of layers in which the flux on cycles must lie based on the discount of the actions, and we will show that when a cycle is created in a particular layer we make progress towards the optimum value for the updated state {\em assuming that it lies within that layer}. These layers will define a set of bounds whose values we must surpass, which serve as milestones or checkpoints on the way to the optimum. When we update a cycle we cannot claim that the overall objective increases substantially but only that the values of individual states make progress towards one of these milestone values. When the values of all states have surpassed each of these intermediate milestones the algorithm will terminate. We first define some notation. Recall that to calculate flux we place one unit of ``mass'' in each state and then run the Markov chain, so all flux traces back to some state, but $\mb{x}^\pi$ aggregates all of it together. 
Because we will be concerned with analyzing the values of individual states in this section, it will be useful to separate out the flux originating in a particular state $s$. Consider the following alternate LP: \begin{equation} \label{eq_mod_primal} \begin{array}{rlrl} \textrm{maximize} & \mb{r}^T\mb{x} \\ \textrm{subject to} & & \sum_{a \in A_s} \mb{x}_a & = 1 + \sum_{a} \gamma_a P_{a,s}\mb{x}_a\\ & \forall s' \neq s & \sum_{a \in A_{s'}} \mb{x}_a & = \sum_{a} \gamma_a P_{a,s'} \mb{x}_a\\ & & \mb{x} & \ge 0 \\ \end{array} \end{equation} The LP \eqref{eq_mod_primal} is identical to \eqref{eq_primal}, except that initial flux is only added to state $s$ rather than all states, and the dual of \eqref{eq_mod_primal} matches \eqref{eq_dual} if the objective in \eqref{eq_dual} is changed to minimize only $\mb{v}_s$. Feasible solutions in \eqref{eq_mod_primal} measure only flux originating in $s$ and contributing to $\mb{v}_s$. For a state $s$ and policy $\pi$ we use the notation $\mb{x}^{\pi,s}$ to denote the corresponding vertex in \eqref{eq_mod_primal}. Note that $\mb{x}^\pi = \sum_s \mb{x}^{\pi,s}$. The following lemma is analogous to Lemma \ref{lem_reducedcosts} and has an identical proof: \begin{lem} \label{lem_subset_reducedcosts} For a state $s$ and for policies $\pi$ and $\pi'$, $(\mb{r}^\pi)^T\mb{x}^{\pi',s} = \mb{r}^T\mb{x}^{\pi',s} - \mb{r}^T\mb{x}^{\pi,s} = \mb{v}^{\pi'}_s - \mb{v}^\pi_s$. \end{lem} We now define the intervals in which the flux must lie. As in Section \ref{section_uniform} flux on paths is in $[1,n]$. Let $C$ be a cycle in some policy, and $\gamma_C = \prod_{a\in C} \gamma_a$ be the total discount of $C$. We will prove that the smallest discount in $C$ determines the rough order of magnitude of the flux through $C$. \begin{defn} Let $C$ be a cycle and $a$ an action in $C$. The discount of $a$ {\em dominates} the discount of $C$ if $\gamma_a \le \gamma_{a'}$ for all $a' \in C$. 
\end{defn} \begin{lem} \label{lem_cycle_flow_nonuniform} Let $\pi$ be a policy containing the cycle $C$ with discount dominated by $\gamma_a$ and total discount $\gamma_C$. Let $s$ be a state on $C$, $a'$ the action used in $s$ and $a''$ an arbitrary action in $C$, then \begin{itemize} \item $\mb{x}^{\pi,s}_{a'} = 1/(1-\gamma_C)$, \item $\gamma_C/(1-\gamma_C) \le \mb{x}^{\pi,s}_{a''} \le 1/(1-\gamma_C)$, and \item $1/(n(1-\gamma_a)) \le 1/(1-\gamma_C) \le 1/(1-\gamma_a)$. \end{itemize} \end{lem} \begin{proof} For the first equality, all flux originates at $s$, so the flux through $a'$ (used in state $s$) either just originated in $s$ or came around the cycle from $s$, implying $\mb{x}^{\pi,s}_{a'} = 1 + \gamma_C\mb{x}^{\pi,s}_{a'}$. An analogous equation holds for all other actions $a''$ on $C$, but now the initial flow from $s$ may have been discounted by at most $\gamma_C$ before reaching $a''$, giving $\gamma_C/(1-\gamma_C) \le \mb{x}^{\pi,s}_{a''} \le 1/(1-\gamma_C)$. The upper bound in the final inequality, $1/(1-\gamma_C) \le 1/(1-\gamma_a)$ holds since $a \in C$ ($\gamma_a$ dominates the discount of $C$). For the lower bound, let $\ell = 1-\gamma_a$. Then $\gamma_C \ge \gamma_a^n = (1-\ell)^n \ge 1-n\ell = 1-n(1-\gamma_a)$, implying $1/(1-\gamma_C) \ge 1/(n(1-\gamma_a))$. \end{proof} Flux on paths still falls in $[1,n]$, so the algorithm behaves the same on paths as it did in the uniform case: \begin{lem} \label{lem_new_cycle_nonuniform} Either the algorithm finishes or a new cycle is created after $O(n^2 m \log n)$ iterations. \end{lem} \begin{proof} This is identical to the proof of Lemma \ref{lem_new_cycle}, which depends on Lemmas \ref{lem_path_converge} and \ref{lem_path_elimination}. 
Lemma \ref{lem_path_converge} holds for nonuniform discounts, and Lemma \ref{lem_path_elimination} holds after adjusting Equation \eqref{eqn_cycle_breaking} as follows \[ -(\mb{r}^{\pi'})^T\mb{x}^{\pi''} \ge \sum_{s \in C} (\mb{v}^{\pi'}_s - \mb{v}^{\pi''}_s) \ge -\frac{\sum_{a \in C} \mb{r}^{\pi'}_a}{1-\gamma_C} \ge \frac{-(\mb{r}^{\pi'})^T\mb{x}^{\pi}}{n^2} \; , \] using that $-n\sum_{a \in C} \mb{r}^{\pi'}_a/(1-\gamma_C) \ge -(\mb{r}^{\pi'})^T\mb{x}^{\pi}/n$ and Lemma \ref{lem_cycle_flow_nonuniform}. \end{proof} Now suppose the simplex method updates the action for state $s$ in policy $\pi$ and creates a cycle dominated by $\gamma_a$. Again, $\mb{v}_s$ may not improve much, since there may be a cycle with discount much larger than $\gamma_a$. However, in any policy $\pi'$ where $s$ is on a cycle dominated by $\gamma_{a}$ and $s$ uses some action $a'$, $1/(n(1-\gamma_a)) \le \mb{x}^{\pi',s}_{a'} \le 1/(1-\gamma_a)$, which allows us to argue that $\mb{v}_s$ has made progress towards the highest value achievable when it is on a cycle dominated by $\gamma_a$, and after enough such progress has been made, $\mb{v}_s$ will beat this value and $s$ will never again appear on any cycle dominated by $\gamma_a$. The optimal values achievable for each state on a cycle dominated by each $\gamma_a$ serve as the above-mentioned milestones. Since all cycles are dominated by some $\gamma_a$, there are $m$ milestones per state. \begin{lem} \label{lem_cycle_converge_nonuniform} Suppose the simplex method moves from $\pi$ to $\pi'$ by updating the action for state $s$, creating a new cycle $C$ with discount dominated by $\gamma_a$ for some $a$ in $\pi'$. Let $\pi''$ be the final policy used by the simplex method in which $s$ is in a cycle dominated by $\gamma_a$. Then $\mb{v}_s^{\pi''} - \mb{v}_s^{\pi'} \le (1-1/n^2)(\mb{v}_s^{\pi''} - \mb{v}_s^{\pi})$. \end{lem} \begin{proof} Let $\Delta = \max_{a'} \mb{r}^{\pi}_{a'}$ be the value of the highest gain with respect to $\pi$. 
Any cycle contains at most $n$ actions, each of which has gain at most $\Delta$ in $\mb{r}^\pi$, so if $s$ is on a cycle dominated by $\gamma_a$ in $\pi''$ then by Lemma \ref{lem_cycle_flow_nonuniform} and Lemma \ref{lem_subset_reducedcosts}, $\mb{v}_s^{\pi''} - \mb{v}_s^\pi \le n\Delta/(1-\gamma_a)$, and since $\pi'$ creates a cycle dominated by $\gamma_a$, by the same lemmas $\mb{v}_s^{\pi'} \ge \mb{v}_s^\pi +\Delta/(n(1-\gamma_a))$. Combining the two, \[ \mb{v}_s^{\pi''} - \mb{v}_s^{\pi'} = (\mb{v}_s^{\pi''} - \mb{v}_s^{\pi}) - (\mb{v}_s^{\pi'} - \mb{v}_s^{\pi}) \le (\mb{v}_s^{\pi''} - \mb{v}_s^{\pi}) - \frac{\Delta}{n(1-\gamma_a)} \le \left(1-\frac{1}{n^2}\right)(\mb{v}_s^{\pi''} - \mb{v}_s^{\pi}) \; . \qedhere \] \end{proof} The following lemma is the crux of our analysis and allows us to eliminate actions when we get close to a milestone value. This occurs because the positive gains must shrink or else the algorithm would surpass the milestone, and as the positive gains shrink they can no longer balance larger negative gains, forcing such actions out of the cycle. \begin{lem} \label{lem_cycle_elimination_nonuniform} Suppose policy $\pi$ contains a cycle $C$ with discount dominated by $\gamma_a$ and $s$ is a state in $C$. There is some action $a'$ in $C$ (depending on $s$) such that after $O(n^2 \log n)$ iterations that change the action for $s$ and create a cycle with discount dominated by $\gamma_a$, action $a'$ will never again appear in a cycle dominated by $\gamma_a$. \end{lem} \begin{proof} Let $\pi$ be a policy containing a cycle $C$ with discount dominated by $\gamma_a$ and $s$ a state in $C$. Let $\pi'$ be another policy where $s$ is on a cycle dominated by $\gamma_a$ after at least $1+\log_{n^2/(n^2-1)} n^5 = O(n^2 \log n)$ iterations that create such a cycle by changing the action for $s$ and $\pi''$ the final policy used by the algorithm in which $s$ is on a cycle dominated by $\gamma_a$. 
Consider the policy $\hat{\pi}$ in the iteration immediately preceding $\pi'$. By Lemma \ref{lem_cycle_converge_nonuniform}, and the choice of $\pi'$, \[ \mb{v}^{\pi''}_s - \mb{v}^{\hat{\pi}}_s \le \left(1-\frac{1}{n^2}\right)^{\log_{n^2/(n^2-1)} n^5}(\mb{v}^{\pi''}_s - \mb{v}^{\pi}_s) = \frac{1}{n^5}(\mb{v}^{\pi''}_s - \mb{v}^{\pi}_s) \; , \] or equivalently $\mb{v}^{\pi}_s - \mb{v}^{\pi''}_s \le -n^5(\mb{v}^{\pi''}_s - \mb{v}^{\hat{\pi}}_s )$, implying \begin{equation} \label{eqn_v_gap} \mb{v}^{\pi}_s - \mb{v}^{\hat{\pi}}_s = (\mb{v}^{\pi}_s - \mb{v}^{\pi''}_s) + (\mb{v}^{\pi''}_s - \mb{v}^{\hat{\pi}}_s) \le (-n^5 +1)(\mb{v}^{\pi''}_s - \mb{v}^{\hat{\pi}}_s) \; . \end{equation} Since the gap $\mb{v}^{\pi}_s - \mb{v}^{\hat{\pi}}_s$ is large and negative, there must be highly negative gains in $\mb{r}^{\hat{\pi}}$. By Lemma \ref{lem_subset_reducedcosts} $\mb{v}_s^{\pi}-\mb{v}_s^{\hat{\pi}} = (\mb{r}^{\hat{\pi}})^T\mb{x}^{\pi,s}$. Let $\mb{r}^{\hat{\pi}}_{a'} = \min_{a \in C} \mb{r}^{\hat{\pi}}_a$ and $s'$ be the state using $a'$. By Lemma \ref{lem_cycle_flow_nonuniform}, $\mb{x}^{\pi,s} \le 1/(1-\gamma_a)$, and $C$ has at most $n$ states, so applying Equation \eqref{eqn_v_gap} \begin{equation} \label{eqn_min_gain} \frac{\mb{r}^{\hat{\pi}}_{a'}}{1-\gamma_a} \le \frac{1}{n}(\mb{v}_s^\pi-\mb{v}_s^{\hat{\pi}}) \le \left(-n^4 + \frac{1}{n}\right)(\mb{v}^{\pi''}_s - \mb{v}^{\hat{\pi}}_s) \; . \end{equation} The positive entries in $\mb{r}^{\hat{\pi}}$ must all be small, since there is only a small increase in the value of $s$. Let $\Delta = \max \mb{r}^{\hat{\pi}}$. The algorithm pivots on the highest gain, and by assumption it updates the action for $s$ and creates a cycle dominated by $\gamma_a$. 
By Lemma \ref{lem_cycle_flow_nonuniform}, the new action is used at least $1/(n(1-\gamma_a))$ times by flux from $s$, since it is the first action in the cycle, so \begin{equation} \label{eqn_max_gain} \frac{\Delta}{n(1-\gamma_a)} \le \mb{v}_s^{\pi'} - \mb{v}_s^{\hat{\pi}} \le \mb{v}_s^{\pi''} - \mb{v}_s^{\hat{\pi}} \; . \end{equation} We prove that the highly negative $\mb{r}^{\hat{\pi}}_{a'}$ cannot coexist with only small positive gains bounded by $\Delta$. Consider any policy in which $s'$ is on a cycle $C'$ containing $a'$ (but not necessarily containing $s$) with total discount $\gamma_{C'}$ dominated by $\gamma_a$. By Lemma \ref{lem_cycle_flow_nonuniform}, there is at least $1/(1-\gamma_{C'}) \ge 1/(n(1-\gamma_a))$ flux from $s'$ going through $a'$, and in the rest of the cycle there are at most $n-1$ other actions with at most $1/(1-\gamma_{C'}) \le 1/(1-\gamma_a)$ flux. The highest gain with respect to $\hat{\pi}$ is $\Delta$, so the value of $\mb{v}_{s'}$ relative to $\mb{r}^{\hat{\pi}}$ is at most \begin{align*} \frac{\mb{r}^{\hat{\pi}}_{a'}}{n(1-\gamma_a)} + \frac{n\Delta}{1-\gamma_a} & \le \left(-n^3 + \frac{1}{n^2}\right)(\mb{v}_s^{\pi''} -\mb{v}_s^{\hat{\pi}}) + n^2(\mb{v}_s^{\pi''} - \mb{v}_s^{\hat{\pi}}) \\ & = \left(-n^3 + \frac{1}{n^2} + n^2\right)(\mb{v}_s^{\pi''} - \mb{v}_s^{\hat{\pi}}) < 0 \end{align*} using Equations \eqref{eqn_min_gain} and \eqref{eqn_max_gain}. But $\mb{v}^{\hat{\pi}}_{s'} = 0$ relative to $\mb{r}^{\hat{\pi}}$, and it only increases in future iterations, so $a'$ cannot appear again in a cycle dominated by $\gamma_a$. \end{proof} \begin{lem} \label{lem_discount_elimination} For any action $a$, there are at most $O(n^3m \log n)$ iterations that create a cycle with discount dominated by $\gamma_a$.
\end{lem} \begin{proof} After $O(n^3 \log n)$ iterations that create a cycle dominated by $\gamma_a$, some state must have been updated in $O(n^2 \log n)$ of those iterations, so by Lemma \ref{lem_cycle_elimination_nonuniform} some action will never appear again in a cycle dominated by $\gamma_a$. After $m$ repetitions of this process all actions have been eliminated. \end{proof} \begin{thm} The simplex method terminates in at most $O(n^5 m^3 \log^2 n)$ iterations on deterministic MDPs with nonuniform discounts using the highest-gain pivoting rule. \end{thm} \begin{proof} There are $O(m)$ possible discounts $\gamma_a$ that can dominate a cycle, and by Lemma \ref{lem_discount_elimination} there are at most $O(n^3 m \log n)$ iterations creating a cycle dominated by any particular $\gamma_a$, for a total of $O(n^3 m^2 \log n)$ iterations that create a cycle. By Lemma \ref{lem_new_cycle_nonuniform} a new cycle is created every $O(n^2 m \log n)$ iterations, for a total of $O(n^5 m^3 \log^2 n)$ iterations overall. \end{proof} \section{Open problems} A difficult but natural next step would be to try to extend these techniques to handle policy iteration on deterministic MDPs. The main problem encountered is that the multiple simultaneous pivots used in policy iteration can interfere with each other in such a way that the algorithm effectively pivots on the {\em smallest} improving switch rather than the largest. See \cite{hansenzwick_mmc} for such an example. Another challenging open question is to design a strongly polynomial algorithm for general MDPs. Finally, we believe the technique of dividing variable values into polynomial-sized layers may be helpful for entirely different problems. \pdfbookmark[1]{\refname}{My\refname} \end{document}
\begin{document} \draft \title{Quantum Communication with Correlated Nonclassical States} \author{S. F. Pereira,* Z. Y. Ou,** and H. J. Kimble} \address{Norman Bridge Laboratory of Physics 12-33 \\ California Institute of Technology\\ Pasadena, CA 91125} \date{\today} \maketitle \begin{abstract} Nonclassical correlations between the quadrature-phase amplitudes of two spatially separated optical beams are exploited to realize a two-channel quantum communication experiment with a high degree of immunity to interception. For this scheme, either channel alone can have an arbitrarily small signal-to-noise ratio (SNR) for transmission of a coherent ``message''. However, when the transmitted beams are combined properly upon authorized detection, the encoded message can in principle be recovered with the original SNR of the source. An experimental demonstration has achieved a 3.2 dB improvement in SNR over that possible with correlated classical sources. Extensions of the protocol to improve its security against eavesdropping are discussed. 
\end{abstract} \pacs{03.67.-a, 03.67.Hk, 42.50} \section{Introduction} Principal motivations for the investigation of manifestly quantum or nonclassical states of the electromagnetic field have been their possible exploitation for optical communication$^{\cite{tak,yuen,cav}}$ and for enhanced measurement sensitivity.$^{\cite{caves}}$ For example, relative to a coherent state, the reduced quantum fluctuations associated with squeezed and number states offer potential for improving channel capacity in the transmission of information.$^{\cite{cav}}$ Squeezed states of light have been widely employed to achieve measurement sensitivity beyond the standard quantum limits in applications such as precision interferometry,$^{\cite{xiao87}}$ the detection of directly encoded amplitude modulation,$^{\cite{xiao88}}$ atomic spectroscopy,$^{\cite{pol}}$ and quantum noise reduction in optical amplification.$^{\cite{ou93}}$ Likewise, nonclassical correlations for the amplitudes of spatially separated beams have been exploited in diverse situations, including demonstrations of the EPR paradox for continuous variables,$^{\cite{ou92}}$ of quantum nondemolition detection (QND),$^{\cite{roch,grangier98}}$ and of a quantum-optical tap.$^{\cite{poiz}}$ Within the broader setting of quantum information science (QIS), there has been growing interest and important progress concerning the prospects for quantum information processing with continuous quantum variables, including universal quantum computation,$^{\cite{lloyd98a}}$ quantum error correction,$^{\cite{lloyd98b,sam98i,sam98ii}}$ and entanglement purification.$^{\cite{plenio99,cirac99}}$ Theories for quantum teleportation of continuous quantum variables in an infinite-dimensional Hilbert space have been developed,$^{\cite{vaidman,sam98a,ralph98,kurizki99}}$ including for broad bandwidth teleportation$^{\cite{sam98b}}$ and for teleportation of atomic wavepackets.$^{\cite{scott99a}}$ This formalism has also been applied to super-dense
quantum coding.$^{\cite{sam98c}}$ On an experimental front, these developments in QIS led to the first {\it bona fide} demonstration of quantum teleportation, which was carried out by exploiting nonclassical states of light in conjunction with continuous quantum variables.$^{\cite{furusawa98,fuchs99}}$ Against this backdrop, the focus of attention in this article is optical communication in two channels with quantum-correlated light fields and the associated quadrature amplitudes.$^{\cite{patent}}$ The goal is to explore the extension of quantum cryptography from the usual setting of discrete variables as pioneered by C. Bennett and colleagues$^{\cite{ben92a}}$ (e.g., photon polarization as in the experiments of Refs.\cite{franson95,gisin00,hughes00,townsend98}) into the realm of continuous quantum variables (e.g., the complex amplitude of the electromagnetic field). Apart from our work, several related schemes for quantum cryptography based upon continuous variables have recently been analyzed, including a single-beam scheme with squeezed light$^{\cite{hillery99}}$ as well as dual-beam schemes with shared entanglement.$^{\cite{ralph99,reid99}}$ However, we stress at the outset that neither for our scheme nor for any of these other protocols can any claim about {\it absolute} security be made. Rather, we suggest that these protocols (and suitable extensions thereof) are worthy candidates for more detailed analyses.
Such an undertaking would involve various important matters of principle as well as practice for continuous quantum variables, and might hopefully lead to security proofs such as have recently emerged in the case of discrete variables.$^{\cite{ben99,lo99,mayers99}}$ As illustrated in Figure 1, the basic idea in our scheme is to construct a ``transmitter'' which combines a coherent signal of amplitude $\epsilon /t$ (the ``message'') with the large fluctuating fields generated in nondegenerate optical parametric amplification (the ``noise'').$^{\cite{patent}}$ The ``message'' and the ``noise'' are superimposed at mirror $M$ with transmission coefficient $t << 1$. Note that although each of the two transmitted beams along channels ($A,B$) has large phase insensitive fluctuations that are individually indistinguishable from a thermal source,$^{\cite{NOPA}}$ the quadrature-phase amplitudes of the two beams can be quantum copies of one another,$^{\cite{ou92,kim}}$ and in fact form an entangled EPR state.$^{\cite{EPR,reid}}$ Hence proper subtraction of the photocurrents at the ``receiver'' can result in the faithful reconstruction of the encoded ``message'' even though the signal-to-noise ratios $R_{j}$ ($j=A,B$) during transmission are individually much less than one. Indeed, in a lossless system with large parametric gain, the signal-to-noise ratio of the reconstructed message $R_{t}$ can approach the signal-to-noise ratio $R_{0}$ of the original message ($\epsilon ^{2}/t^{2}$), which was written in the transmitter as a coherent state before the mirror $M$ in Figure 1. Note that the individual channels $(A,B)$ have a high degree of immunity to unauthorized interception since the signal-to-noise ratios $R_{A,B}$ in these channels are each very small.
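This recovery-by-subtraction can be illustrated with a minimal numerical sketch of an idealized, lossless two-mode squeezed (EPR) pair. The Gaussian sampling model, the squeezing parameter $r$, and the symmetric $\pm \epsilon /2$ encoding below are all illustrative assumptions rather than the actual experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000      # samples of the detected quadratures
r = 2.0          # squeezing parameter (stands in for large parametric gain)
eps = 1.0        # coherent "message" amplitude after the mirror

# Idealized two-mode squeezed vacuum: the detected quadratures X_A, X_B
# individually look thermal with variance cosh(2r)/2, while the
# difference X_A - X_B has variance e^{-2r}, below the vacuum level.
x_plus = rng.normal(0.0, np.sqrt(np.exp(2 * r) / 2), N)    # anti-squeezed sum
x_minus = rng.normal(0.0, np.sqrt(np.exp(-2 * r) / 2), N)  # squeezed difference
xa = (x_plus + x_minus) / np.sqrt(2)
xb = (x_plus - x_minus) / np.sqrt(2)

# Encode the message symmetrically, with opposite sign in the two detected
# quadratures (a simplification of the 45-degree polarization injection).
ia = xa + eps / 2
ib = xb - eps / 2

def snr(signal, noise_var):
    return signal**2 / noise_var

# Each channel alone buries the message in large thermal-like noise ...
print(snr(eps / 2, xa.var()))     # much less than 1 for large r
# ... but the difference photocurrent recovers it.
print(snr(eps, (xa - xb).var()))  # roughly eps^2 * e^{2r}, far above 1
```

For large $r$ the individual channels carry almost no usable SNR, while the squeezed difference variance $e^{-2r}$ lets the message reappear upon subtraction.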
Furthermore, any attempt to extract information from the $(A,B)$ channels will reveal itself either by a decrease in $R_{t}$ (classical extraction) or by an increase in the fluctuations of the orthogonal quadrature amplitude (quantum extraction). In addition to achieving a faithful reconstruction of the message transmitted through $M$ to the receiver, note that the scheme of Figure 1 also preserves to a high degree the signal-to-noise ratio for the original message beam that reflects from $M$. More specifically, for high gain and for losses dominated by the transmission coefficient of $M$, the signal-to-noise ratio $R_{r}$ for the reflected beam can approach $R_{0}$ for the original message. In this limit, we then have that the information transfer coefficient $T\equiv (R_{r}+R_{t})/R_{0}\rightarrow 2$, where $0\leq T\leq 1$ for classical devices and $1<T\leq 2$ for manifestly quantum or nonclassical situations.$^{\cite{poiz}}$ Hence the scheme depicted in Fig. 1 acts as a quantum optical tap in the fashion originally discussed by Shapiro.$^{\cite{sha}}$ It provides a received (or ``tapped'') message with a signal-to-noise ratio equal to that of the input, $(R_{t}/R_{0})\rightarrow 1$, while simultaneously transmitting an output field with signal-to-noise ratio equal to that of the input, $(R_{r}/R_{0})\rightarrow 1$. Of course, similar schemes for two-channel communication can be implemented with correlated classical noise sources (i.e., thermal light), each with large fluctuations which ``hide'' the message $\epsilon $ during transmission. However, with classical sources of whatever type, only excess fluctuations can be subtracted; the quantum fluctuations at the vacuum-state level will remain unchanged and will enforce a noise floor for information transmission and extraction.
For the case illustrated in Figure 1, this noise floor for the message at the receiver is given by the sum of independent vacuum fluctuations from fields in channels ($A,B$) and sets a fundamental noise level of ``2'' (with ``1'' as the individual vacuum-state limits for the two channels). Here we adopt the usual convention for the demarcation between classical and nonclassical correlations in terms of the behavior of the Glauber-Sudarshan phase-space function.$^{\cite{kim}}$ Hence for the case illustrated in Figure 1 but with classical input fields, the signal-to-noise ratio $R_{t}^{\prime }$ for the detected message at the receiver is given by $R_{t}^{\prime }\simeq |\epsilon |^{2}\ll R_{t}$. In fact, for classical inputs, we have that $T^{\prime }\equiv (R_{t}^{\prime }+R_{r}^{\prime })/R_{0}\leq 1$, and the system no longer functions as a quantum optical tap. Furthermore, the individual channels ($A,B$) are not protected from unauthorized eavesdropping, since information can be extracted from these channels with impunity for classical noise much greater than the vacuum-state limit. Apart from these considerations related to secure communication and quantum optical tapping, the configuration of Figure 1 can also be viewed as a means to realize super-dense quantum coding$^{\cite{densecoding}}$ for continuous quantum variables.$^{\cite{sam98c}}$ Here, the message $\epsilon /t$ is again encoded at the mirror $M$, but now in a single channel corresponding to one component of the entangled EPR state (e.g., channel $A$). This combination of the message and the fluctuations from one component of the NOPA is transmitted to the receiving station, where it is combined with the second component of the entangled output of the NOPA that has been independently transmitted (e.g., along channel $B$). The signal is then decoded by combining the outputs of the two channels in a fashion similar to that shown in Figure 1 as discussed in more detail in Ref.\cite{sam98c}.
The principal distinctions between this dense coding scheme and the aforementioned dual-channel arrangement are (1) the message is encoded in a single component of the entangled EPR beam instead of symmetrically in both, and (2) the received beams from paths $(A,B)$ must be physically recombined, with the phases of the local oscillators $(A,B)$ at the receiving station offset by $\frac{\pi }{2}$. Recall that for dense coding in its canonical form,$^{\cite{densecoding}}$ no signal modulation is applied to the second (i.e., channel $B$) component of the entangled state, so that it carries no information by itself. In subsequent sections of this paper, we describe in more detail the implementation of this general discussion about quantum communication with correlated nonclassical fields. In our experiment, we have been able to demonstrate an improvement in signal-to-noise ratio by a factor of 2.1 over that possible with any classical source (that is, $10\log [R_{t}/R_{t}^{\prime }]=3.2$ dB) and have succeeded in suppressing the noise of the difference photocurrent $i_{-}\equiv i_{A}-i_{B}$ below that associated with the vacuum fluctuations of even a single beam, thus making possible transmission with $|\epsilon |^{2}<1$. Quantum dense coding would thereby be enabled with the aforementioned changes in the overall experimental protocol. We conclude with a discussion of possible extensions for enhanced security against unauthorized eavesdropping. \section{Implementation by Nondegenerate Parametric Amplification} As illustrated in Figure 1, correlated nonclassical states for our work are generated by a nondegenerate optical parametric amplifier (NOPA) that produces orthogonally polarized but frequency-degenerate signal and idler beams for channels ($A,B$).
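As a quick arithmetic check of the SNR figures quoted in the Introduction, a factor of 2.1 in SNR and 3.2 dB are indeed the same statement:

```python
import math

# An SNR improvement factor of 2.1 expressed in decibels.
factor = 2.1
improvement_db = 10 * math.log10(factor)
print(round(improvement_db, 1))  # prints 3.2

# Conversely, 3.2 dB corresponds to a factor of about 2.1.
print(round(10 ** (3.2 / 10), 1))  # prints 2.1
```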
We emphasize that these beams represent a realization of the entangled state originally discussed by Einstein, Podolsky, and Rosen.$^{\cite{EPR,reid}}$ For the original EPR state, there exist perfect correlations both in position and momentum for two massive particles. In the optical case, the quadrature amplitudes of the electromagnetic field play the roles of position and momentum with a finite degree of correlation for finite NOPA gain, as has been experimentally demonstrated$^{\cite{ou92}}$ and exploited to realize quantum teleportation.$ ^{\cite{furusawa98}}$ A coherent-state ``message'' of total amplitude $\epsilon /t$ is encoded in equal measure onto these entangled EPR beams by orienting its polarization at 45$^{\circ }$ with respect to the signal and idler polarizations at the mirror $M$ of Figure 1. To obtain a quantitative statement of the performance of this system, we must include the finite gain of the amplifier as well as various passive losses, which together limit the degree of correlation that can be exploited for communication. Following the analysis of Ref.\cite{ou92}, we find that the SNR $R_{j}(\Omega )$ for the individual signal and idler photocurrents for propagation and detection in the presence of overall channel efficiency $\xi $ is given by $R_{j}(\Omega )=\xi {{ \epsilon ^{2}}/{2G_{q}(\Omega )}}$, where $G_{q}(\Omega )$ is the detected quantum-noise gain of the amplifier which can be determined experimentally from measurements of the spectral densities $\Psi _{A,B}(\Omega )$ for the fluctuations of photocurrents for signal and idler beams alone at either detector. 
Relative to the frequency of the optical carrier determined by the down-conversion process in the NOPA, the frequency $\Omega $ specifies the Fourier components of the quadrature-phase amplitudes of signal and idler fields as well as of the coherent field $\epsilon $.$^{\cite{kim}}$ Note that $\xi $ ($0\leq \xi \leq 1$) incorporates the cavity escape efficiency for our NOPA, the propagation efficiency from the NOPA to the detectors, and the homodyne and quantum efficiencies of the balanced detectors themselves.$^{\cite{ou92}}$ Although the individual fluctuations for channels ($A,B$) give rise to a level $G_{q}(\Omega )>1$ (that is, greater than the vacuum-state limit of either beam alone), these large fluctuations are correlated in a nonclassical manner and hence can be eliminated by proper choice of the quadrature amplitudes detected at ($A,B$). As shown in Ref.\cite{ou92}, there is a continuous set of such amplitudes with minimum variance for their difference, requiring only that the quadrature-phase angles ($\theta _{A},\theta _{B})$ satisfy $\theta _{A}+\theta _{B}=2p\pi $ ($p$ = integer). Denoting one such pair by ($X_{A},X_{B}$), we have that \begin{equation} \langle (X_{A}(\Omega )-X_{B}(\Omega ))(X_{A}(\Omega ^{\prime })-X_{B}(\Omega ^{\prime }))\rangle =V_{-}(\Omega )\delta (\Omega +\Omega ^{\prime }), \label{variance} \end{equation} where $V_{-}(\Omega )$ is a variance which quantifies the degree of correlation between $(X_{A},X_{B})$. Explicit expressions for both $V_{-}(\Omega )$ and $G_{q}(\Omega )$ are given in Ref.\cite{ou92}. For propagation and detection in the presence of loss, we introduce the quantities ($V_{-}^{d},G_{q}^{d}$) which refer to the variance and quantum noise gain for fictitious fields having propagated with total loss $(1-\xi )$, where the spectral density of the photocurrent fluctuations $\Phi _{-}(\Omega )$ is proportional to $V_{-}^{d}(\Omega )$.
Hence, the SNR $R_{d}$ for detection of the message via $i_{-}$ is given by $R_{d}={{2\eta \epsilon ^{2}}/{V_{-}^{d}(\Omega )}}$, where $\eta $ accounts for the propagation and detection efficiency for the message from the mirror $M$ to the photocurrent $i_{A,B}$. Without discussing the general case, here we note simply that for efficient propagation and detection with $(1-\xi )<<1$ and for near-threshold operation with (analysis frequency $\Omega )<<$ (cavity linewidth $\Gamma )$, then $G_{q}^{d}(\Omega )\rightarrow 1+{\frac{1}{2}}(\Gamma /\Omega )^{2}\xi $, while $V_{-}^{d}(\Omega )\rightarrow 2(1-\xi )<<1$, so that $R_{d}(\Omega )\rightarrow \eta \epsilon ^{2}/(1-\xi )$. Hence in the ideal case with $\eta \rightarrow 1$ and $\xi \rightarrow (1-{|t|}^{2})$, with $t$ as the amplitude transmission coefficient of mirror $M$, we find that the reconstructed message is recovered with the same SNR with which it was originally encoded (namely $R_{d}\rightarrow \epsilon ^{2}/t^{2}$), while the fluctuations in the individual channels become arbitrarily large ($R_{A,B}\sim \Omega ^{2}\epsilon ^{2}/\Gamma ^{2}\rightarrow 0$ for $\Omega /\Gamma \rightarrow 0$). As for the performance as an optical tap, note that the transfer coefficient associated with the {\em detected} message at the receiver and with the reflected output field is given by $T_d = (R_d+R_r)/R_0$, where $R_d$ is related to $R_t$ by way of the propagation and detection efficiency $\eta$ from $M$ to the photocurrents at the receivers. In the present case, we have that \begin{equation} T_d= \frac{\vert r \vert ^2}{U_-^r (\Omega)} + \frac{2 \eta \vert t \vert ^2}{V_-^d(\Omega)}, \end{equation} with $\vert r \vert ^2$ as the reflectivity of mirror $M$ $(\vert r \vert ^{2} + \vert t \vert ^{2}=1)$ and $U_-^r(\Omega)$ as the variance of the reflected field.
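Equation (2) can be evaluated in a short numerical sketch using the near-threshold limit quoted above, $V_-^d(\Omega) \rightarrow 2(1-\xi)$, together with the simplifying assumption of a vacuum-limited reflected variance $U_-^r(\Omega) = |r|^2$ (an idealization of the reflected port):

```python
def transfer_coefficient(t2, xi, eta):
    """Evaluate Eq. (2) for T_d with the near-threshold limit
    V_-^d(Omega) -> 2(1 - xi) and, as a simplifying assumption,
    a vacuum-limited reflected variance U_-^r(Omega) = |r|^2."""
    r2 = 1.0 - t2           # |r|^2 + |t|^2 = 1
    V_d = 2.0 * (1.0 - xi)  # V_-^d(Omega) in the near-threshold limit
    U_r = r2                # assumed ideal reflected variance
    return r2 / U_r + 2.0 * eta * t2 / V_d

# Ideal case: losses dominated by the mirror itself (xi -> 1 - |t|^2)
# and perfect post-mirror efficiency (eta -> 1): T_d approaches 2.
print(round(transfer_coefficient(t2=0.01, xi=0.99, eta=1.0), 6))   # 2.0

# With values of the order of those in the experiment (|t|^2 = 0.01,
# xi ~ 0.65, eta ~ 0.75), the same expression gives T_d close to 1,
# i.e. only marginally in the quantum domain.
print(round(transfer_coefficient(t2=0.01, xi=0.65, eta=0.75), 2))  # 1.02
```

The second evaluation is only a rough check under the stated assumptions; the inferred experimental value depends on the actual reflected-field variance rather than the idealized $U_-^r = |r|^2$.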
Hence in the ideal case with $\xi \rightarrow (1-\vert t \vert^2)$, with $V_-^d(\Omega) \rightarrow 2(1-\xi)$ and with $U_-^r(\Omega) = \vert r \vert ^2$, we have that $R_d \rightarrow R_t$ and $T_d \rightarrow 2$. Thus in addition to providing large quantum fluctuations for secure transmission, the system also acts as a quantum optical tap with a nearly ideal transfer coefficient $T_d$. In fact, the system can be considered as a realization of the scheme for quantum tapping that was originally suggested by Shapiro.$^{\cite{sha}}$ To see this more clearly, recall that the projection of signal and idler fields along the $45^{\circ}$ polarization direction of the message beam results in a squeezed field.$^{\cite{kim}}$ Hence, from the perspective of Ref.\cite{sha}, we are ``tapping'' the original message field by injecting squeezed light into the normally open (or vacuum) port of mirror $M$. The use of the output of a nondegenerate parametric amplifier allows us subsequently to decompose this squeezed plus coherent field into individually noisy signal and idler fields at polarizer $P$ for transmission. \section{Experimental Setup and Results} The general scheme for our experimental implementation of these ideas is shown in Figure 1, where frequency-degenerate but orthogonally polarized signal and idler beams are generated by Type II down-conversion in a subthreshold optical parametric oscillator formed by a folded cavity containing an $a$-cut crystal of potassium titanyl phosphate (KTP) that provides noncritical phase matching at 1.08$\mu $m. The crystal is 10 mm long, is anti-reflection coated for both 1.08$\mu $m and 0.54$\mu $m, and has a measured harmonic conversion efficiency of $6\times 10^{-4}$/W (single-pass) for this geometry. The total intracavity passive losses at 1.08 $\mu $m are 0.3\% and the transmission coefficient of mirror M1 is 3\%.
The amplifier is pumped by green light at 0.54 $\mu $m generated by external frequency doubling of a frequency-stabilized, TEM$_{00}$-mode Nd:YAP laser.$ ^{\cite{ouol}}$ The subthreshold oscillator acts as a narrow-band amplifier (NOPA) which is locked to the original laser frequency with a weak counter-propagating beam. Simultaneous resonance for the orthogonally polarized signal and idler fields is achieved by adjusting the temperature of the KTP crystal around 60$^{\circ }$C with milliKelvin precision. The pump field at 0.54 $\mu $m is itself resonant in a separate and independently locked build-up cavity (enhancement $\sim $ 5x). As we have demonstrated in our previous experiments, $^{\cite{ou92}}$ the orthogonally polarized signal and idler fields generated by the NOPA individually are fields of zero mean values and exhibit large phase insensitive fluctuations. It is in the midst of this noise that we now hide a ``message'', with this coherent field being combined with the signal and idler fields at the highly reflecting mirror $M$ shown in Figure 1 ($ (1-t^{2})\simeq 0.99$). The coherent beam is injected at 45$^{\circ }$ with respect to signal and idler polarizations and is frequency shifted by $ \Omega _{0}/2\pi =1.1$MHz (single-side band) from the primary laser frequency with the help of a pair of acoustooptic modulators, which are gated ``on'' and ``off'' to provide information encoded for transmission. The noisy but correlated signal and idler beams together with the coherent information are then separated by a polarizer $P$, transmitted independently over the two channels ($A,B$), and then directed to two separate balanced homodyne detectors for measurements of their individual quadrature-phase amplitudes and their mutual correlations. The local oscillators for the two balanced homodyne detectors originate from the laser at 1.08$\mu $m; their phases can be independently controlled by mirrors mounted on piezoelectric transducers. 
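Before turning to the measured spectra, note that the photocurrent spectral densities introduced below are Fourier transforms of autocorrelations (the Wiener-Khinchin theorem). A discrete periodogram sketch on purely hypothetical toy photocurrents (a classical model that can illustrate only the cancellation of correlated excess noise, not the sub-vacuum performance of the quantum fields) is:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**16

# Hypothetical toy photocurrents: strongly correlated "amplifier" noise
# common to both channels plus a small independent part in each channel
# (all values illustrative, not the measured data).
common = rng.normal(0.0, 3.0, N)
i_a = common + rng.normal(0.0, 0.3, N)
i_b = common + rng.normal(0.0, 0.3, N)

def spectral_density(i):
    """Periodogram estimate: by the Wiener-Khinchin theorem this is the
    Fourier transform of the autocorrelation <i(t) i(t+tau)>."""
    I = np.fft.rfft(i - i.mean())
    return np.abs(I) ** 2 / len(i)

psi_a = spectral_density(i_a).mean()            # level of Psi_A
phi_minus = spectral_density(i_a - i_b).mean()  # level of Phi_-

# The common noise cancels in i_- = i_A - i_B, so the difference-current
# spectral density sits far below the single-channel level.
print(phi_minus < psi_a)
```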
The spectral densities of the photocurrents for the two channels ($A=$ signal, $B=$ idler) are defined by \begin{equation} \Psi _{A,B}(\Omega )=\int \langle i_{A,B}(t)i_{A,B}(t+\tau )\rangle e^{i\Omega \tau }d\tau \end{equation} and are recorded by an RF spectrum analyzer, as is the spectral density \begin{equation} \Phi _{-}(\Omega )=\int \langle i_{-}(t)i_{-}(t+\tau )\rangle e^{i\Omega \tau }d\tau \label{phiminus} \end{equation} for the difference photocurrent $i_{-}\equiv i_{A}-i_{B}$. In Figures 2 and 3 we present results from a series of measurements of these various spectral densities. First of all, in Figure 2a, trace {\it i} gives the spectral density $\Psi _{A}$ for channel $A$ alone with an injected ``message'' and with the amplifier turned on to generate large ($\sim 7$ dB) phase insensitive noise above the vacuum-state level $\Psi _{0A}$ (indicated by a dashed line in Figure 2) for the signal beam. A similar trace is obtained for the spectral density $\Psi _{B}$. By contrast, trace {\it ii} in Figure 2a gives the spectral density $\Phi _{-}$ for the difference photocurrent $i_{-}$, with the phases of the local oscillators adjusted for minimum noise and maximum coherent signal. In this trace, the coherent message that was completely obscured in trace {\it i} emerges with high signal-to-noise ratio. Note that in trace {\it ii} the correlated quantum fluctuations for signal and idler fields are subtracted to approximately 0.4 dB below the vacuum-noise level $\Psi _{0A}$ of the signal beam alone (and likewise for the idler), indicating an improvement in SNR over a conventional single-channel communication scheme with a classical light source. To complete the discussion, we present in Figure 2b results obtained with the amplifier turned off (that is, uncorrelated vacuum-state inputs for signal and idler fields which are combined with the coherent ``message'' information at mirror $M$).
Trace {\it i} shows the result for the signal beam alone ($\Psi _{A}$), where again the noise floor $\Psi _{0A}$ is from the vacuum fluctuations of the signal beam; a similar trace is obtained for the idler beam $(\Psi _{B})$. Trace {\it ii} gives the corresponding result for $\Phi _{-}$ for the combined signal and idler photocurrents when the amplifier is off. Note that this trace represents the best possible SNR with which the encoded information can be recovered when correlated classical noise sources are employed since here the (uncorrelated) vacuum fluctuations of signal and idler beams set an ultimate noise floor 3 dB above $\Psi _{0A}$ (that is, $\Phi _{0-}=2\Psi _{0A}$).$^{\cite{inf}}$ On comparing traces {\it ii} in Figures 2a and 2b, we see that the correlated quantum fluctuations of signal and idler fields brought about by parametric amplification result in an improvement in SNR of 3.2 dB relative to that possible with classical noise sources. The improvement in SNR with correlated quantum fields over classical fields in our two-channel communication scheme can be of utility especially when the message is so weak that the SNR is poor for transmission with correlated classical sources (that is, for the case where vacuum noise dominates the encoded message). This situation is illustrated in Figure 3, where we plot $ \Phi _{-}$ for the two cases without (trace {\it i}) and with (trace {\it ii} ) correlated quantum fields.$^{\cite{inf}}$ Relative to Figure 2, here the coherent beam has been attenuated resulting in a smaller SNR for the ``message''. Indeed in trace {\it i}, this information is ``buried'' by the vacuum noise $\Phi _{0-}$ associated with independent vacuum fluctuations in channels $A$ and $B$; recovery of the encoded information is poor. 
On the other hand, as shown in trace {\it ii}, when correlated quantum fields are employed, there is a reduction in the noise floor by more than 3 dB, which makes possible improved recovery of the encoded information, with the recovery here limited by losses in propagation and detection.$^{\cite{ou92}}$ As for the actual performance with respect to optical tapping, our system falls far short of the projected possibilities discussed in the preceding section because of an unfortunate mismatch between the transmissivity $|t|^{2}$ for mirror $M$ and the overall system efficiency $\xi $. In quantitative terms, recall that the transfer coefficient $T$ for encoding information from the input beam to the reflected and transmitted beams at $M$ is given by $T\equiv (R_{r}+R_{t})/R_{0}$, whereas the transfer coefficient for the detected message photocurrent and the reflected signal field is $T_{d}\equiv (R_{r}+R_{d})/R_{0}$, as given explicitly in Eq. (2). For the propagation and detection efficiencies in our experiment ($\xi \simeq 0.65$ and $\eta \simeq 0.75$), these transfer coefficients are optimized for mirror transmission $|t|^{2}\sim 0.5$ for $M$. In our arrangement we have instead $|t|^{2}=0.01$, with the inferred result that $T_{d}\simeq 1.02$, which is only marginally in the quantum domain. In the experiment described here, the receiver uses a local oscillator (LO) that originates from the fundamental frequency of the same laser that generated the pump beam for the NOPA. This LO is necessary for proper detection of the quadrature amplitudes of the nonclassical beams and of the message, since it provides a phase reference that follows phase fluctuations of the NOPA's pump beam. In practice, as the stability of the available lasers improves, one should consider schemes for which the measurement is carried out with nominally independent lasers for the LO and for the source.
For example, one might employ a stabilized laser diode as a reference to phase lock lasers both at the sender and at the receiver, where the laser diode could be widely distributed through optical fibers. Alternatively, Ralph has analyzed a scheme in which the local oscillators are transmitted and recovered as part of the overall protocol.$^{\cite{ralph99}}$ \section{Comparison with Other Dual Beam Schemes} It is perhaps obvious that the degree of immunity to interception for a two channel scheme such as we have discussed is related to the degree of excess fluctuations for each individual beam. For the demonstration in Ref.\cite {man}, the excess noise used to ``hide'' the encoded information in each beam comes from some artificial unrelated source. Unfortunately such uncorrelated excess fluctuations also add noise to the coincidence signal in the recovery of the ``message,'' even though the added noise scales differently as a function of photon number for single-beam measurements (linearly) and for dual beam measurements (quadratically). Hence larger background noise which better ``hides'' the encoded information also brings larger added noise in the extraction of the ``message.'' Because of the quadratic dependence on the total photon number for the extra noise added in coincidence detection, this scheme is best suited to low light level transmission, as demonstrated in the pioneering experiment by Hong et al.$^{ \cite{man}}$ The situation is quite different for the quadrature-phase amplitudes of the correlated signal and idler fields generated by the NOPA. As the NOPA is pumped harder and the threshold for parametric oscillation is approached, the gain of the amplifier increases, as do the excess fluctuations of the signal and idler fields. However, the correlation between the fluctuations of the signal and idler beams also improves, giving rise to even better SNR for the recovered signal. 
The key point is that the large fluctuations in the signal and idler beams needed for immunity to interception are intrinsic and do not add extra noise to the recovered signal but, on the contrary, serve to reduce the noise in $i_{-}$ as the gain of the amplifier increases. In the end, the SNR for the recovered message is arbitrated by the imperfect correlation resulting from finite gain and from passive losses in propagation and detection. On the other hand, this dependence provides a powerful means to detect eavesdropping, because unauthorized extraction of signal or idler fields from channels $A$ or $B$ results in a reduction of the detected correlation and hence an increase in the noise floor of the recovered message. Note that unauthorized extraction of information from both channels by way of a quantum optical tap$^{\cite{sha}}$ or a quantum nondemolition measurement$^{\cite{grangier98}}$ can likewise be detected because of the unavoidable increase of fluctuations for the orthogonal quadrature-phase amplitudes ($\theta _{A,B}+\pi /2$) of the two channels. Furthermore, these quantum eavesdropping schemes can be defeated in large measure by random switching of the phases of the message, signal, and idler beams, as discussed below. Our system also offers advantages with respect to the (classical) digital Vernam cipher, where a message is decomposed into two correlated random signals and transmitted over two one-way channels. Although this system seems to be similar to ours in the sense that it is also secure provided the eavesdropper has access to one channel only, the situation is different if the eavesdropper can split off a small fraction of both channels, since in the classical case this can be done without the knowledge of the receiver. 
However, in our system the eavesdropper cannot arbitrarily choose the reflectivity of any ``beamsplitter'' used for extraction from the two channels, since in the quantum case the fraction of the beams extracted must be large enough that the signal-to-noise ratio for the intercepted message is greater than one. But if this is the case, then the unavoidable extra ``noise'' added to the transmitted beams by the open port of the ``beamsplitter'' degrades the signal-to-noise ratio of the message at the legitimate receiver, thus revealing the unauthorized intervention during transmission. One might attempt to circumvent this difficulty by employing a quantum extraction procedure, such as quantum nondemolition detection$^{\cite{grangier98}}$ of the quadrature amplitudes in channels $(A,B)$. Although the signal-to-noise ratio $R_{d}$ at the receiver would not in this case be degraded by an ideal eavesdropper, the unauthorized intervention could nonetheless be discovered because of the injection of large fluctuations (``back-action'' noise) in the quadrature orthogonal to that in which the signal information is stored, as previously noted. \section{Extensions via Random Phase Switching} One way an eavesdropper {\it Eve}\ could access the signal and idler beams without the knowledge of the legitimate receiver is if she can intercept both channels completely, detect in the same manner as does the legitimate receiver (i.e., {\it Eve} should also have access to a local oscillator phase stable with respect to that of sender and receiver), and retransmit the beams in the same way as the legitimate sender. Because of this possibility, our protocol as described is certainly not secure, in contrast to the protocols for discrete variables.$^{\cite{ben99,lo99,mayers99}}$ However, we suggest that simple extensions of our protocol might lead to significant enhancements in security. 
If the goal were to achieve quantum key distribution, one idea is to make straightforward adaptations of the protocols introduced by Bennett and colleagues for the discrete case, as in Ref.\cite{hillery99,ralph99,reid99}. Here, we propose that the sending station ({\it Alice}) and receiving station {\it (Bob}) make random choices for the set of phases of the coherent message beam, as well as for the signal and idler beams. Recall that the variance $V_{-}(\Omega )$ of Eq.\ref{variance} is the minimum possible and applies only for the choice of quadrature-phase angles $(\theta _{A},\theta _{B})$ for the signal and idler beams that satisfy $\theta _{A}+\theta _{B}=2p\pi $ ($p$ = integer). For definiteness, assume the following two choices. \begin{enumerate} \item $(\theta _{A}^{0},\theta _{B}^{0})$, with $\theta _{A}^{0}+\theta _{B}^{0}=0$ and corresponding quadrature amplitudes $(X_{A},X_{B})$. \item $(\theta _{A}^{\pi /2}=\theta _{A}^{0}+\frac{\pi }{2},\theta _{B}^{\pi /2}=\theta _{B}^{0}+\frac{\pi }{2})$ and corresponding quadrature amplitudes $(Y_{A},Y_{B})$. \end{enumerate} In the first case, the minimum variance $V_{-}(\Omega )$ results for the combination $(X_{A}-X_{B})$, while in the second case, the combination $ (Y_{A}+Y_{B})$ has minimum variance. This is because $Y_{B}\rightarrow -Y_{B} $ is equivalent to the shift $\theta _{B}^{\pi /2}\rightarrow \theta _{B}^{\pi /2}+\pi $, so that $\theta _{A}^{\pi /2}+\theta _{B}^{\pi /2}+\pi =2\pi $. With these definitions, {\it Alice} at the sending station (randomly) makes one of two choices. \begin{enumerate} \item {\it Phase} $0$ -- Set the quadrature-phase angles $(\theta _{A},\theta _{B})$ to $(\theta _{A}^{0},\theta _{B}^{0})$ and the phases $ \beta _{A,B}=\beta _{A,B}^{0}$ for the coherent message beam $|\alpha \rangle =|\alpha |\exp [i\beta ]$ corresponding to the $X$ quadratures of $ (A,B)$. 
\item {\it Phase} $\frac{\pi }{2}$ -- Set $(\theta _{A},\theta _{B})$ to $ (\theta _{A}^{\pi /2},\theta _{B}^{\pi /2})$ and $\beta _{A,B}=\beta _{A,B}^{\pi /2}=\beta _{A,B}^{0}\pm \frac{\pi }{2}$ corresponding to the $Y$ quadratures. \end{enumerate} The encoded message (which could consist of $|\alpha |=[a_{0},a_{1}]$ for a binary transmission) is sent to {\it Bob's} receiving station precisely as in Figure 1. {\it Bob} must then choose the appropriate phases $(\phi _{A},\phi _{B})$ for his local oscillators $(LO_{A},LO_{B})$ to detect quadrature amplitudes such that the spectral density $\Phi _{-}(\Omega )$ for the difference photocurrent $i_{-}\equiv i_{A}-i_{B}$ is minimized and the signal maximized. In the case {\it Phase} $0$, denote the local oscillator settings as $(\phi _{A}^{0},\phi _{B}^{0})$, in correspondence to the detection of $(X_{A},X_{B})$ with minimum variance $V_{-}(\Omega )$. On the other hand, for the case {\it Phase} $\frac{\pi }{2}$, the local oscillator phases $(\phi _{A},\phi _{B})\rightarrow (\phi _{A}^{\pi /2},\phi _{B}^{\pi /2})=(\phi _{A}^{0}+\frac{\pi }{2},\phi _{B}^{0}+\frac{3\pi }{2})$ , in correspondence to the detection of $(Y_{A},-Y_{B})$ with minimum variance. In both cases, the encoded message would be recovered with maximum signal-to-noise ratio. Note that precisely such a switching protocol was implemented in our prior experiment of Ref.\cite{ou92} with results as stated for the variances. Of course, {\it Bob }does not know in advance which choice $[0,\frac{\pi }{2} ]$ {\it Alice} will have made for any given transmission. Hence, he makes a random selection between the alternatives $(\phi _{A}^{0},\phi _{B}^{0})$ and $(\phi _{A}^{\pi /2},\phi _{B}^{\pi /2})$, recovering the message in some cases but not others. After a series of transmissions, {\it Alice} and {\it Bob} communicate publicly about their choice of bases, keeping measurement results only when their choices coincide. 
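To make the bookkeeping of this switching protocol concrete, the sifting step can be sketched in a few lines of Python. This is a schematic of the classical post-selection only, not a simulation of the optical fields; the round count and seed are illustrative assumptions.

```python
import random

def sift(n_rounds, rng):
    """Simulate the basis-sifting step of the phase-switching protocol.

    Alice picks a quadrature-phase setting in {0, pi/2} for each round;
    Bob independently picks his local-oscillator setting from the same
    two alternatives.  After public comparison, only rounds in which
    the choices coincide are kept.
    """
    kept = []
    for i in range(n_rounds):
        alice = rng.choice([0.0, 0.5])   # in units of pi: 0 or pi/2
        bob = rng.choice([0.0, 0.5])
        if alice == bob:                  # announced publicly afterwards
            kept.append(i)
    return kept

rng = random.Random(1)
kept = sift(10_000, rng)
# With independent uniform choices, roughly half the rounds survive.
print(len(kept) / 10_000)
```

As in the discrete-variable protocols cited above, the cost of the random switching is that about half of the transmissions are discarded.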
Now, if an eavesdropper {\it Eve} attempts to intervene (either by a strategy of partial tapping or by one of complete interception and re-broadcast), she will necessarily increase the noise level and error rate at {\it Bob's} receiving station. The random switching of the phases $(\theta _{A},\theta _{B})$ by {\it Alice} forces {\it Eve} to make a guess as to the correct quadratures $(\delta _{A},\delta _{B})$ to be detected. Having made a choice, information about the orthogonal quadrature is lost. Of course, rather than homodyne detection, she could choose to employ heterodyne detection to gain information about the full complex amplitude. However, relative to homodyne detection, heterodyne detection brings a well-known penalty of a $3$ dB reduction in signal-to-noise ratio.$^{\cite{AK}}$ While it is beyond the scope of the current paper to make any claims about the quantitative limits to the information that {\it Eve} might access or about the absolute ability of {\it Alice} and {\it Bob} to detect her presence, we do suggest that these would be interesting questions to investigate. There are certainly intervention strategies beyond those that we have mentioned that a cunning {\it Eve} would want to consider, such as an adaptive strategy for adjusting the phases $(\delta _{A},\delta _{B})$ over the duration of the transmission of any given message.$^{\cite{wiseman95}}$ Likewise, in any real-world setting, overcoming the deleterious effects of losses in propagation from {\it Alice} to {\it Bob} will be an overriding consideration. The question of preserving the entanglement of the initial EPR\ state in the face of such losses is a fascinating one for continuous quantum variables. Although initial attempts have been made to develop error-correcting quantum codes for continuous variables,$^{\cite{lloyd98b,sam98i,sam98ii}}$ no adequate solution seems yet to have been found. 
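The 3 dB heterodyne penalty quoted above can be checked with a toy Monte Carlo. This sketch assumes Gaussian quadrature noise of unit (shot-noise) variance for homodyne detection and twice that for heterodyne detection, the standard simultaneous-measurement penalty; the signal amplitude and sample size are illustrative, not values from the experiment.

```python
import math
import random

def snr_db(signal_amp, noise_var, n, rng):
    """Monte Carlo estimate of the SNR (in dB) of a quadrature
    measurement with Gaussian noise of the given variance."""
    samples = [signal_amp + rng.gauss(0.0, math.sqrt(noise_var))
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / var)

rng = random.Random(0)
# Homodyne: a single quadrature with unit shot-noise variance (assumed).
homodyne = snr_db(5.0, 1.0, 20_000, rng)
# Heterodyne: simultaneous conjugate quadratures add one extra vacuum
# unit, doubling the noise variance on each quadrature.
heterodyne = snr_db(5.0, 2.0, 20_000, rng)
print(homodyne - heterodyne)  # close to 10*log10(2), i.e. about 3 dB
```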
Finally, it would be of interest to analyze the case where only one of the two correlated beams is sent to {\it Bob}, with {\it Alice} then retaining the other. \acknowledgments We gratefully acknowledge the comments of J. H. Shapiro, who pointed out the connection of our experiment to Ref.\cite{sha}, of S. L. Braunstein and H. Mabuchi for critical discussions, and of one of the referees who brought to our attention the Vernam cipher. This work was supported by the Office of Naval Research, by the National Science Foundation, and by DARPA via the QUIC administered by the Army Research Office. \begin{references} \bibitem[*]{byline1} Present address: Delft University of Technology, Faculty of Applied Sciences - Optics Research Group, Lorentzweg 1, 2628 CJ Delft, The Netherlands. \bibitem[**]{byline2} Present address: Department of Physics, Indiana University-Purdue University at Indianapolis, 402 N Blackford St., Indianapolis, IN 46202. \bibitem{tak} H. Takahasi, Adv. Commun. Syst. {\bf 1}, 227 (1965). \bibitem{yuen} H. P. Yuen and J. H. Shapiro, IEEE Trans. Inform. Th. {\bf IT-24}, 657 (1978). \bibitem{cav} C. M. Caves and P. D. Drummond, Rev. Mod. Phys. (1994); H. P. Yuen and M. Ozawa, Phys. Rev. Lett. {\bf 70}, 363 (1993). \bibitem{caves} C. M. Caves, Phys. Rev. {\bf D23}, 1963 (1981) and Phys. Rev. {\bf D26}, 1817 (1982); C. M. Caves et al., Rev. Mod. Phys. {\bf 52}, 341 (1980). \bibitem{xiao87} M. Xiao, L. A. Wu, and H. J. Kimble, Phys. Rev. Lett. {\bf 59}, 278 (1987); P. Grangier, R. E. Slusher, B. Yurke, and LaPorta, Phys. Rev. Lett. {\bf 59}, 2153 (1987). \bibitem{xiao88} M. Xiao, L. A. Wu, and H. J. Kimble, Opt. Lett. {\bf 13}, 476 (1988). \bibitem{pol} E. S. Polzik, J. Carri, and H. J. Kimble, Phys. Rev. Lett. {\bf 68}, 3020 (1992). \bibitem{ou93} Z. Y. Ou, S. F. Pereira, and H. J. Kimble, Phys. Rev. Lett. {\bf 70}, 3239 (1993). \bibitem{ou92} Z. Y. Ou, S. F. Pereira, H. J. Kimble, and K. C. Peng, Phys. Rev. Lett. {\bf 68}, 3663 (1992); Z. Y. Ou, S. F. 
Pereira, and H. J. Kimble, Appl. Phys. {\bf B55}, 265 (1992). \bibitem{roch} J. F. Roch, G. Roger, P. Grangier, J. M. Courty, and S. Reynauld, Appl. Phys. {\bf B55}, 291 (1992). \bibitem{grangier98} For a review, see P. Grangier, J. A. Levenson, and J.-P. Poizat, Nature {\bf 396}, 537 (1998). \bibitem{poiz} J. Ph. Poizat and P. Grangier, Phys. Rev. Lett. {\bf 70}, 271 (1993). \bibitem{lloyd98a} S. Lloyd and S. L. Braunstein, quant-ph/9810082. \bibitem{lloyd98b} S. Lloyd and J. J. E. Slotine, Phys. Rev. Lett. {\bf 80} , 4088 (1998). \bibitem{sam98i} S. L. Braunstein, Phys. Rev. Lett. {\bf 80}, 4084 (1998). \bibitem{sam98ii} S. L. Braunstein, Nature {\bf 394}, 47 (1998). \bibitem{plenio99} S. Parker, S. Bose, and M. B. Plenio, quant-ph/9906098. \bibitem{cirac99} L.-M. Duan, G. Giedke, J. I. Cirac, and P. Zoller, quant-ph/9912017. \bibitem{vaidman} L. Vaidman, Phys. Rev. {\bf A49}, 1473 (1994). \bibitem{sam98a} S. L. Braunstein and H. J. Kimble, Phys. Rev. Lett. {\bf 80 }, 869 (1998). \bibitem{ralph98} T. C. Ralph and P. K. Lam, Phys. Rev. Lett. {\bf 81}, 5668 (1998). \bibitem{kurizki99} T. Opatrny, G. Kurizki, and D.-G. Welsch, quant-ph/9907048. \bibitem{sam98b} P. van Loock, S. L. Braunstein, and H. J. Kimble, Phys. Rev. {\bf A} (accepted, 2000); quant-ph/9902030. \bibitem{scott99a} A. S. Parkins and H. J. Kimble, Journal of Optics B - Quantum and Semiclassical Optics {\bf 1}, 496 (1999), available as quant-ph/9904062; and submitted, 1999, quant-ph/9909021. \bibitem{sam98c} S. L. Braunstein and H. J. Kimble, Phys. Rev. {\bf A} (accepted, 2000); quant-ph/9810082. \bibitem{furusawa98} A. Furusawa, J. Sorensen, S. L. Braunstein, C. Fuchs, H. J. Kimble, and E. S. Polzik, Science {\bf 282}, 706 (1998). \bibitem{fuchs99} S.\ L.\ Braunstein, C.\ A.\ Fuchs, H.\ J.\ Kimble, Journal of Modern Optics {\bf 47}, 267 (2000), available as quant-ph/9910030. 
\bibitem{patent} United States Patent $5,339,182$, ``Method and Apparatus for Quantum Communication Employing Nonclassical Correlations of Quadrature-Phase Amplitudes,'' issued August 16, 1994, H. J. Kimble, Z. Y. Ou, and S. F. Pereira. \bibitem{ben92a} C. H. Bennett, G. Brassard, and A. K. Ekert, Sci. Am. {\bf 267}, 50 (1992). \bibitem{franson95} J. D. Franson and B. C. Jacobs, Electronics Letters {\bf 31}, 232 (1995). \bibitem{gisin00} G. Ribordy, J. D. Gautier, N. Gisin, O. Guinnard, and H. Zbinden, J. Mod. Optics {\bf 47}, 517 (2000). \bibitem{hughes00} R. J. Hughes, G. L. Morgan, and C. G. Peterson, J. Mod. Optics {\bf 47}, 533 (2000). \bibitem{townsend98} P. D. Townsend, Opt. Fiber Technology {\bf 4}, 345 (1998). \bibitem{hillery99} M. Hillery, quant-ph/9909006. \bibitem{ralph99} T. C. Ralph, quant-ph/9907073. \bibitem{reid99} M. D. Reid, quant-ph/9909030. \bibitem{ben99} For a recent review of security proofs in quantum cryptography, see C. H. Bennett and P. W. Shor, Science {\bf 284}, 747 (1999). \bibitem{lo99} H.-K. Lo and H. F. Chau, Science {\bf 283}, 2050 (1999). \bibitem{mayers99} D. Mayers, quant-ph/9802025. \bibitem{NOPA} S. M. Barnett and P. Knight, J. Opt. Soc. Am. {\bf B2}, 467 (1985). \bibitem{kim} H. J. Kimble, in {\it Fundamental Systems in Quantum Optics}, ed. by J. Dalibard, J. M. Raimond, and J. Zinn-Justin (Elsevier, Amsterdam, 1992), 545ff. \bibitem{EPR} A.\ Einstein, B.\ Podolsky, and N.\ Rosen, Phys.\ Rev.\ {\bf 47}, 777 (1935). \bibitem{reid} M.\ D.\ Reid and P.\ D.\ Drummond, Phys.\ Rev.\ Lett.\ {\bf 60}, 2731 (1988); M.\ D.\ Reid, Phys.\ Rev.\ A {\bf 40}, 913 (1989). \bibitem{sha} J. H. Shapiro, Opt. Lett. {\bf 5}, 351 (1980). \bibitem{densecoding} C.\ H.\ Bennett and S.\ J.\ Wiesner, Phys.\ Rev.\ Lett.\ {\bf 69}, 2881 (1992). \bibitem{ouol} Z. Y. Ou, S. F. Pereira, E. S. Polzik, and H. J. Kimble, Opt. Lett. {\bf 17}, 640 (1992). 
\bibitem{inf} In Figs.(2,3), $\Phi _{-}$=1.9 $\Psi _{0,A}$ (2.8 dB above) due to a slight imbalance (0.1 dB) between the ($A,B$) detectors and to a small contribution (0.1 dB) from detector thermal noise. \bibitem{man} L. Mandel, J. Opt. Soc. Am. {\bf B1},108 (1984); C. K. Hong, S. R. Friberg, and L. Mandel, Appl. Opt. {\bf 24}, 3877 (1985). \bibitem{AK} E.\ Arthurs and J.\ L.\ Kelly Jr., Bell. Syst. Tech. J. April, 725 (1965). \bibitem{wiseman95} H. M. Wiseman, Phys.\ Rev.\ Lett.\ {\bf 75}, 4587 (1995); H. M. Wiseman and R. B. Killip, Phys.\ Rev.\ {\bf A57}, 2169 (1998). \end{references} \begin{figure} \caption{Principal components of the experiment showing the ``transmitter'', where a message $\protect\epsilon $ is combined with noise fields from a nondegenerate optical parametric amplifier (NOPA) at mirror $M$. The orthogonally polarized signal and idler beams are separated by polarizer $P$ , and then propagate along independent channels ($A,B$) to two separate balanced homodyne detectors that form the ``receiver''. In the figure, arrows represent coherent amplitudes of various fields, while the shaded circles are meant to indicate their fluctuations.} \label{exp} \end{figure} \begin{figure} \caption{(a) Signal recovery with correlated quantum states in channels ($ A,B $). Trace i gives the spectral density $\Psi_A(\Omega)$ of photocurrent fluctuations for channel $A$ alone as a function of time. Trace ii is the spectral density $\Phi_-(\Omega)$ for the combined photocurrent $i_-=i_A-i_B$ from both channels; here the ``message'' (coherent beam chopped on and off) clearly emerges. (b) Signal recovery with uncorrelated vacuum fluctuations. Again, trace i gives $\Psi_A(\Omega)$ (channel $A$ only) while trace ii gives $\Phi_-(\Omega)$ (combined photocurrent $i_-$). 
In (a) and (b) the vacuum-state level $\Psi_{0A}$ is indicated.} \label{big} \end{figure} \begin{figure} \caption{Spectral density of the photocurrent fluctuations $\Phi_-(\Omega)$ for $i_-=i_A-i_B$ for the case when the on-off modulation of the message is encoded with small SNR. Trace i $-$ Uncorrelated vacuum fluctuations. Trace ii $-$ Correlated quantum fluctuations in channels ($A,B$). The vacuum-state limits $\Psi_{0A}$ are indicated.} \label{small} \end{figure} \end{document}
\begin{document} \title{Minimizing Maximum Response Time and \\ Delay Factor in Broadcast Scheduling} \author{ Chandra Chekuri\thanks{Department of Computer Science, University of Illinois, 201 N.\ Goodwin Ave., Urbana, IL 61801. {\tt [email protected]}. Partially supported by NSF grants CCF-0728782 and CNS-0721899. } \and Sungjin Im\thanks{Department of Computer Science, University of Illinois, 201 N.\ Goodwin Ave., Urbana, IL 61801. {\tt [email protected]}} \and Benjamin Moseley\thanks{Department of Computer Science, University of Illinois, 201 N.\ Goodwin Ave., Urbana, IL 61801. {\tt [email protected]}. Partially supported by NSF grant CNS-0721899. } } \iffalse \authorrunning{Chekuri, Im, Moseley} \tocauthor{Chandra Chekuri(UIUC), Sungjin Im(UIUC) and Ben Moseley(UIUC) } \institute{Dept.\ of Computer Science, University of Illinois, Urbana, IL 61801. \\ \email{\{chekuri, im3, bmosele2\}@cs.uiuc.edu}} \fi \begin{titlepage} \def\thepage{} \maketitle \begin{abstract} We consider {\em online} algorithms for pull-based broadcast scheduling. In this setting there are $n$ pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page $p$, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor and their {\em weighted} versions. We obtain the following results in the worst-case online competitive model. \begin{itemize} \item We show that FIFO (first-in first-out) is $2$-competitive even when the page sizes are different. Previously this was known only for unit-sized pages \cite{ChangEGK08} via a delicate argument. Our proof differs from \cite{ChangEGK08} and is perhaps more intuitive. \item We give an online algorithm for maximum delay-factor that is $O(1/\epsilon^2)$-competitive with $(1+\epsilon)$-speed for unit-sized pages and with $(2+\epsilon)$-speed for different sized pages. 
This improves on the algorithm in \cite{ChekuriM09} which required $(2+\epsilon)$-speed and $(4+\epsilon)$-speed respectively. In addition we show that the algorithm and analysis can be extended to obtain the same results for maximum {\em weighted} response time and delay factor. \item We show that a natural greedy algorithm modeled after LWF (Longest-Wait-First) is not $O(1)$-competitive for maximum delay factor with any constant speed even in the setting of standard scheduling with unit-sized jobs. This complements our upper bound and demonstrates the importance of the tradeoff made in our algorithm. \end{itemize} \end{abstract} \end{titlepage} \section{Introduction} We consider \emph{online} algorithms in pull-based broadcasting. In this model there are $n$ pages (representing some form of useful information) available at a server and clients request a page that they are interested in. When the server transmits a page $p$, all outstanding requests for that page $p$ are satisfied since it is assumed that all clients can simultaneously receive the information. It is in this respect that broadcast scheduling differs crucially from standard scheduling, where each job needs its own service from the server. We distinguish two cases: when all the pages are of the same size (unit size without loss of generality) and when the pages can be of different sizes. Broadcast scheduling is motivated by several applications in wireless and LAN based systems \cite{AcharyaFZ95, AksoyF98, Wong88}. It has seen substantial interest in the algorithmic scheduling literature starting with the work of Bartal and Muthukrishnan \cite{BartalM00}; see \cite{KalyanasundaramPV00}. In addition to the applications, broadcast scheduling has sustained interest due to the significant technical challenges that basic problems in this setting have posed for algorithm design and analysis. 
To distinguish broadcast scheduling from ``standard'' job scheduling, we refer to the latter as unicast scheduling --- we use requests in the context of broadcast and jobs in the context of unicast scheduling. In this paper, we focus on scheduling to minimize two related objectives: the maximum response time and the maximum delay factor. We also consider their {\em weighted} versions. Interestingly, the maximum response time metric was studied in the (short) paper of Bartal and Muthukrishnan \cite{BartalM00} where they claimed that the online algorithm FIFO (for First In First Out) is $2$-competitive for broadcast scheduling, and moreover that no deterministic online algorithm is $(2-\epsilon)$-competitive. (It is easy to see that FIFO is optimal in unicast scheduling.) Despite the claim, no proof was published. It is only recently, almost a decade later, that Chang et al. \cite{ChangEGK08} gave formal proofs for these claims for unit-sized pages. This simple problem illustrates the difficulty of broadcast scheduling: the ability to satisfy multiple requests for a page $p$ with a single transmission makes it difficult to relate the total ``work'' that the online algorithm and the offline adversary do. The upper bound proof for FIFO in \cite{ChangEGK08} is short but delicate. In fact, \cite{BartalM00} claimed $2$-competitiveness for FIFO even when pages have different sizes. As noted in previous work \cite{BartalM00,EdmondsP03,PruhsU03}, when pages have different sizes, one needs to carefully define how a request for a page $p$ gets satisfied if it arrives midway during the transmission of the page. In this paper we consider the sequential model \cite{EdmondsP03}, the most restrictive one, in which the server broadcasts each page sequentially and a client receives the page sequentially without buffering; see \cite{PruhsU03} on the relationship between different models. 
The claim in \cite{BartalM00} regarding FIFO for different pages is in a less restrictive model in which clients can buffer and take advantage of partial transmissions and the server is allowed to preempt. The FIFO analysis in \cite{ChangEGK08} for unit-sized pages does not appear to generalize for different page sizes. Our first contribution in this paper is the following. \begin{theorem} \label{thm:fifo} FIFO is $2$-competitive for minimizing maximum response time in broadcast scheduling even with different page sizes. \end{theorem} Note that FIFO, whenever the server is free, picks the page $p$ with the earliest request and {\em non-preemptively} broadcasts it. Our bound matches the lower bound shown even for unit-sized pages, thus closing one aspect of the problem. Our proof differs from that of Chang et al.; it does not explicitly use the unit-size assumption and this is what enables the generalization to different page sizes. The analysis is inspired by our previous work on maximum delay factor \cite{ChekuriM09} which we discuss next. \noindent {\bf Maximum (Weighted) Delay Factor and Weighted Response Time:} The delay factor of a schedule is a metric recently introduced in \cite{ChangEGK08} (and implicitly in \cite{BenderCT08}) when requests have deadlines. Delay factor captures how much a request is delayed compared to its deadline. More formally, let $J_{p,i}$ denote the $i$'th request of page $p$. Each request $J_{p,i}$ arrives at $a_{p,i}$ and has a deadline $d_{p,i}$. The finish time $f_{p,i}$ of a request $J_{p,i}$ is defined to be the earliest time after $a_{p,i}$ when the page $p$ is sequentially transmitted by the scheduler starting from the beginning of the page. Note that multiple requests for the same page can have the same finish time. Formally, the delay factor of the job $J_{p,i}$ is defined as $ \max\{1, \frac{f_{p,i} - a_{p,i}}{d_{p,i} - a_{p,i}}\}$; we refer to the quantity $S_{p,i} = d_{p,i} - a_{p,i}$ as the {\em slack} of $J_{p,i}$. 
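The definition just given is simple enough to state directly in code; the following Python sketch computes the delay factor of a single request (the numeric requests are hypothetical illustrations, not instances from the paper):

```python
def delay_factor(arrival, deadline, finish):
    """Delay factor of a request: max(1, (f - a) / (d - a)).

    The denominator d - a is the request's slack.  For unit-sized
    pages, setting d = a + 1 recovers the response time, as noted
    in the text.
    """
    slack = deadline - arrival
    return max(1.0, (finish - arrival) / slack)

# A request arriving at t=0 with deadline 4 (slack 4), finished at t=10:
print(delay_factor(0, 4, 10))   # 2.5
# Finished before its deadline, the delay factor is clamped to 1:
print(delay_factor(0, 4, 3))    # 1.0
# Unit slack makes the delay factor equal to the response time:
print(delay_factor(0, 1, 7))    # 7.0
```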
For a more detailed motivation of delay factor, see \cite{ChekuriM09}. Note that for unit-sized pages, delay factor generalizes response time since one could set $d_{p,i} = a_{p,i} + 1$ for each request $J_{p,i}$, in which case its delay factor equals its response time. In this paper we are interested in online algorithms that minimize the maximum delay factor; in other words, the objective function is $\min \max_{p,i} \max\{1, \frac{f_{p,i} - a_{p,i}}{d_{p,i} - a_{p,i}}\}$. We also consider a related metric, namely {\em weighted} response time. Let $w_{p,i}$ be a non-negative weight associated with $J_{p,i}$; the weighted response time is then $w_{p,i} (f_{p,i} - a_{p,i})$ and the goal is to minimize the maximum weighted response time. Delay factor and weighted response time have syntactic similarity if we ignore the $1$ term in the definition of delay factor --- one can think of the weight as the inverse of the slack. Although the metrics are somewhat similar, we note that there is no direct way to reduce one to the other. On the other hand, we observe that upper bounds for one appear to translate to the other. We also consider the problem of minimizing the maximum weighted delay factor $\min \max_{p,i} w_{p,i} \max\{1, \frac{f_{p,i} - a_{p,i}}{d_{p,i} - a_{p,i}}\}$. Surprisingly, the maximum weighted response time metric appears not to have been studied formally even in classical unicast scheduling; however a special case, namely maximum {\em stretch}, has received attention. The stretch of a job is its response time divided by its processing time; essentially the weight of a job is the inverse of its processing time. Bender et al.\ \cite{BenderCM98,BenderMR02}, motivated by applications to web-server scheduling, studied maximum stretch and showed very strong lower bounds in the online setting. Using similar ideas, in some previous work \cite{ChekuriM09}, we showed strong lower bounds for minimizing maximum delay factor even for unit-time jobs. 
In \cite{ChekuriM09}, constant competitive algorithms were given for minimizing maximum delay factor in both unicast and broadcast scheduling; the algorithms are based on resource augmentation \cite{KalyanasundaramP95} wherein the algorithm is given a speed $s > 1$ server while the offline adversary is given a speed $1$ server. They showed that $\Algorithm{SSF}$ (shortest slack first) is $O(1/\epsilon)$-competitive with $(1+\epsilon)$-speed in unicast scheduling. $\Algorithm{SSF}$ does not work well in broadcast scheduling. A different algorithm that involves waiting, $\Algorithm{SSF-W}$ (shortest slack first with waiting), was developed and analyzed in \cite{ChekuriM09}; the algorithm is $O(1/\epsilon^2)$-competitive for unit-size pages with $(2+\epsilon)$-speed and with $(4+\epsilon)$-speed for different sized pages. In this paper we obtain improved results by altering the analysis of $\Algorithm{SSF-W}$ in a subtle and important way. In addition we show that the algorithm and analysis can be altered in an easy fashion to obtain the same bounds for weighted response time and delay factor. \begin{theorem} \label{thm:delayfactor} There is an algorithm that is $(1+\epsilon)$-speed $O(1/\epsilon^2)$-competitive for minimizing maximum delay factor in broadcast scheduling with unit-sized pages. For different sized pages there is a $(2+\epsilon)$-speed $O(1/\epsilon^2)$-competitive algorithm. The same bounds apply for minimizing maximum weighted response time and maximum weighted delay factor. \end{theorem} \begin{remark} Minimizing maximum delay factor is NP-hard and there is no $(2- \epsilon)$-approximation unless $P=NP$ for any $\epsilon >0$ in the {\em offline} setting for unit-sized pages. There is a polynomial time computable $2$-speed schedule with the optimal delay factor (with $1$-speed) \cite{ChangEGK08}. Theorem~\ref{thm:delayfactor} gives a polynomial time computable $(1+\epsilon)$-speed schedule that is $O(1/\epsilon^2)$-optimal (with $1$-speed). 
\end{remark} We remark that the algorithm $\Algorithm{SSF-W}$ makes an interesting tradeoff between two competing metrics and we explain this tradeoff in the context of weighted response time and a lower bound we prove in this paper for a simple greedy algorithm. Recall that FIFO is $2$-competitive for maximum response time in broadcast scheduling and is optimal for job scheduling. What are natural ways to generalize FIFO to delay factor and weighted response time? As shown in \cite{ChekuriM09}, $\Algorithm{SSF}$ (which is equivalent to maximum weight first for weighted response time) is $O(1/\epsilon)$-competitive with $(1+\epsilon)$-speed for job scheduling but is not competitive for broadcast scheduling --- it may end up doing much more work than necessary by transmitting a page repeatedly instead of waiting and accumulating requests for a page. One natural algorithm that extends FIFO for delay factor or weighted response time is to schedule the request in the queue that has the largest current delay factor (or weighted wait time). This greedy algorithm was labeled $\Algorithm{LF}$ (longest first) since it can be seen as an extension of the well-studied $\Algorithm{LWF}$ (longest-wait-first) for average flow time. Since $\Algorithm{LWF}$ is known to be $O(1)$-competitive with $O(1)$-speed for average flow time, it was suggested in \cite{ChekuriIM09} that $\Algorithm{LF}$ may be $O(1)$-speed $O(1)$-competitive for maximum delay factor. We show that this is not the case even for unicast scheduling. \begin{theorem} \label{thm:lf} For any constants $s, c > 1$, $\Algorithm{LF}$ is not $c$-competitive with $s$-speed for minimizing maximum delay factor (or weighted response time) in unicast scheduling of unit-time jobs. \end{theorem} Our algorithm $\Algorithm{SSF-W}$ can be viewed as an interesting tradeoff between $\Algorithm{SSF}$ and $\Algorithm{LF}$. 
$\Algorithm{SSF}$ gives preference to small slack requests while the $\Algorithm{LF}$ strategy helps avoid doing too much extra work in broadcast scheduling by giving preference to pages that have waited sufficiently long even if they have large slack. The algorithm $\Algorithm{SSF-W}$ considers all requests whose delay factor at time $t$ (or weighted wait time) is within a constant factor of the largest delay factor at $t$ and amongst those requests schedules the one with the smallest slack. This algorithmic principle may be of interest in other settings and is worth exploring in the future. \noindent {\bf Other Related Work:} We have focused on maximum response time and its variants and have already discussed closely related work. Other metrics that have received substantial attention in broadcast scheduling are minimizing average flow time and maximizing throughput of satisfied requests when requests have deadlines. We refer the reader to a comprehensive survey on online scheduling algorithms by Pruhs, Sgall and Torng \cite{PruhsST} (see also \cite{Pruhs07}). The recent paper of Chang et al.\ \cite{ChangEGK08} addresses, among other things, the offline complexity of several basic problems in broadcast scheduling. Average flow-time received substantial attention in both the offline and online settings \cite{KalyanasundaramPV00,ErlebachH02,GandhiKKW04,GandhiKPS06,BansalCKN05,BansalCS06}. For average flow time, there are three $O(1)$-speed $O(1)$-competitive online algorithms. $\Algorithm{LWF}$ is one of them \cite{EdmondsP03,ChekuriIM09} and the others are BEQUI \cite{EdmondsP03} and its extension \cite{EdmondsP09}. Our recent work \cite{ChekuriIM09} has investigated $L_k$ norms of flow-time and showed that $\Algorithm{LWF}$ is an $O(k)$-speed $O(k)$-competitive algorithm. Constant competitive online algorithms for maximizing throughput for unit-sized pages can be found in \cite{Kimc04,ChanLTW04,ZhengFCCPW06,ChrobakDJKK06}. 
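The selection rule of SSF-W described above can be sketched in Python. This is a schematic only: the tuple layout of requests and the threshold parameter $c$ are illustrative assumptions, and it omits the speed augmentation and exact constants of the cited algorithm.

```python
def ssf_w_pick(requests, t, c):
    """Among requests whose current delay factor is within a factor c
    of the largest current delay factor, pick the one with smallest
    slack (shortest slack first with waiting).

    requests: list of (arrival, slack, page) tuples still outstanding.
    """
    current = lambda r: max(1.0, (t - r[0]) / r[1])  # delay factor at t
    worst = max(current(r) for r in requests)
    candidates = [r for r in requests if current(r) >= worst / c]
    return min(candidates, key=lambda r: r[1])       # shortest slack wins

reqs = [(0, 10, 'p'),   # waited long, large slack -> delay factor 1.0
        (8, 1, 'q'),    # small slack, short wait  -> delay factor 2.0
        (5, 2, 'r')]    # delay factor 2.5
print(ssf_w_pick(reqs, t=10, c=2))   # (8, 1, 'q')
```

Here the large-slack request `'p'` is excluded from the candidate set because it has not waited long enough relative to the worst delay factor, while among the two candidates the smallest-slack request `'q'` is chosen, illustrating the tradeoff between the SSF and LF strategies.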
A more thorough description of related work is deferred to a full version of the paper. \noindent {\bf Organization:} We prove each of the theorems mentioned above in a different section. The algorithm and analysis for weighted response time and weighted delay factor are very similar to those for delay factor; hence, in this version, we omit the analysis and only describe the algorithms. \noindent {\bf Notation:} We denote the length of page $p$ by $\ell_p$. That is, $\ell_p$ is the amount of time a 1-speed server takes to broadcast page $p$ non-preemptively. We assume without loss of generality that for any request $J_{p,i}$, $S_{p,i} \ge \ell_p$. For an algorithm $A$ we let $\alpha^A$ denote the maximum delay factor witnessed by $A$ for a given sequence of requests. We let $\alpha^*$ denote the optimal delay factor of an offline schedule. Likewise, we let $\rho^A$ denote the maximum response time witnessed by $A$ and $\rho^*$ the optimal response time of an offline schedule. For a time interval $I=[a,b]$ we define $|I| = b - a$ to be the length of interval $I$. \section{Minimizing the Maximum Response Time} In this section we analyze $\Algorithm{FIFO}$ for minimizing maximum response time when page sizes are different. We first describe the algorithm. $\Algorithm{FIFO}$ broadcasts pages {\em non-preemptively}. Consider a time $t$ when $\Algorithm{FIFO}$ finishes broadcasting a page. Let $J_{p,i}$ be the request in $\Algorithm{FIFO}$'s queue with the earliest arrival time, breaking ties arbitrarily. $\Algorithm{FIFO}$ begins broadcasting page $p$ at time $t$. At any time during this broadcast, we will say that $J_{p,i}$ \emph{forced} $\Algorithm{FIFO}$ to broadcast page $p$ at this time. When broadcasting a page $p$, all requests for page $p$ that arrived before the start of the broadcast are simultaneously satisfied when the broadcast completes.
Any requests for page $p$ that arrive during the broadcast are not satisfied until the next full transmission of $p$. We consider $\Algorithm{FIFO}$ when given a $1$-speed machine. Let $\sigma$ be an arbitrary sequence of requests. Let $\textrm{\sc OPT}$ denote some fixed offline optimum schedule and let $\rho^*$ denote the optimum maximum response time. We will show that $\rho^{\Algorithm{FIFO}} \leq 2 \rho^*$. For the sake of contradiction, assume that $\Algorithm{FIFO}$ witnesses a response time $c \rho^*$ by some request $J_{q,k}$ for some $c > 2$. Let $t^*$ be the time $J_{q,k}$ is satisfied, that is $t^* = f_{q,k}$. Let $t_1$ be the smallest time less than $t^*$ such that at any time $t$ during the interval $[t_1, t^*]$ the request which forces $\Algorithm{FIFO}$ to broadcast a page at time $t$ has response time at least $\rho^*$ when satisfied. Throughout the rest of this section we let $I = [t_1, t^*]$. Let $\mathcal{J}_I$ denote the requests which forced $\Algorithm{FIFO}$ to broadcast during $I$. Notice that all requests in $\mathcal{J}_I$ are completely satisfied during the interval $I$: any request in $\mathcal{J}_I$ starts being satisfied during $I$ and is finished during $I$. We say that $\textrm{\sc OPT}$ \emph{merges} two distinct requests for a page $p$ if they are satisfied by the same broadcast. \begin{lemma} \label{lem:main-fifo} $\textrm{\sc OPT}$ cannot merge any two requests in $\mathcal{J}_I$ into a single broadcast. \end{lemma} \begin{proof} Let $J_{p,i}, J_{p,j} \in \mathcal{J}_I$ such that $i < j$. Note that $J_{p,i}$ is satisfied before $J_{p,j}$. Let $t'$ be the time that $\Algorithm{FIFO}$ \emph{starts} satisfying request $J_{p,i}$. By the definition of $I$, request $J_{p,i}$ has response time at least $\rho^*$.
The request $J_{p,j}$ must arrive after time $t'$, that is $a_{p,j} > t'$, otherwise request $J_{p,j}$ is satisfied by the same broadcast of page $p$ that satisfied $J_{p,i}$. Therefore, it follows that if $\textrm{\sc OPT}$ merges $J_{p,i}$ and $J_{p,j}$ then the finish time of $J_{p,i}$ in $\textrm{\sc OPT}$ is strictly greater than its finish time in $\Algorithm{FIFO}$, which is already at least $\rho^*$; this is a contradiction to the definition of $\rho^*$. \end{proof} \begin{lemma} \label{lem:arrival-fifo} All requests in $\mathcal{J}_I$ arrived no earlier than time $t_1 - \rho^*$. \end{lemma} \begin{proof} For the sake of contradiction, suppose some request $J_{p,i} \in \mathcal{J}_I$ arrived at time $a_{p,i} < t_1 - \rho^*$. During the interval $[a_{p,i} + \rho^*, t_1]$ the request $J_{p,i}$ has wait time at least $\rho^*$. But then any request which forces $\Algorithm{FIFO}$ to broadcast during $[a_{p,i} + \rho^*, t_1]$ must have response time at least $\rho^*$, contradicting the definition of $t_1$. \end{proof} We are now ready to prove Theorem~\ref{thm:fifo}, stating that $\Algorithm{FIFO}$ is 2-competitive. \begin{proof} Recall that all requests in $\mathcal{J}_I$ are completely satisfied during $I$. Thus the total size of the requests in $\mathcal{J}_I$ is $|I|$. By definition $J_{q,k}$ witnesses a response time greater than $2\rho^*$ and therefore $t^* - a_{q,k} > 2\rho^*$. Since $J_{q,k} \in \mathcal{J}_I$ is the last request completed by $\Algorithm{FIFO}$ during $I$, all requests in $\mathcal{J}_I$ must arrive no later than $a_{q,k}$. Therefore, these requests must be finished by time $a_{q,k} + \rho^*$ by the optimal solution. From Lemma~\ref{lem:arrival-fifo}, all the requests in $\mathcal{J}_I$ arrived no earlier than $t_1 - \rho^*$. Thus $\textrm{\sc OPT}$ must finish all requests in $\mathcal{J}_I$, whose total volume is $|I|$, during $I_{opt} = [t_1 - \rho^*, a_{q,k}+\rho^*]$.
Thus it follows that $|I| \leq |[t_1 - \rho^*, a_{q,k}+\rho^*]|$, which simplifies to $t^* \leq a_{q,k} + 2\rho^*$. This is a contradiction to the fact that $t^* - a_{q,k} > 2 \rho^*$. \end{proof} \begin{figure} \caption{Broadcasts by $\Algorithm{FIFO}$.} \end{figure} \iffalse Theorem~\ref{thm:fifo} is surprising because in the broadcast setting requests must receive pages sequentially from the beginning. To understand the difficulties of varying sized pages, the interested reader should contrast the proofs for varying sized pages for minimizing the maximum response time and minimizing the maximum delay factor. \fi We now discuss the differences between our proof of $\Algorithm{FIFO}$ for varying sized pages and the proof given by Chang et al.\ in \cite{ChangEGK08} showing that $\Algorithm{FIFO}$ is 2-competitive for unit sized pages. In \cite{ChangEGK08} it is shown that at any time $t$, the set $F(t)$ of {\em unique} pages in $\Algorithm{FIFO}$'s queue satisfies the following property: $|F(t) \setminus O(t)| \le |O(t)|$, where $O(t)$ is the set of unique pages in $\textrm{\sc OPT}$'s queue. This easily implies the desired bound. To establish this, they use a slot model in which unit-sized pages arrive only during integer times, which allows one to define unique pages. This may appear to be a technicality; however, when considering different sized pages it is not clear how one even defines unique pages, since this number varies during the transmission of $p$ as requests accumulate. \iffalse Moreover, this assumes that $\Algorithm{FIFO}$ is defined as a non-preemptive algorithm as we do; if preemptions are allowed, one encounters more difficulties since requests are satisfied sequentially.\fi Our approach avoids this issue in a clean manner by not assuming a slot model or unit-sized pages.
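To make the non-preemptive $\Algorithm{FIFO}$ policy concrete, the following is a minimal simulation sketch. It is not from the paper; the function and variable names are our own, and it assumes a 1-speed server with requests given as (arrival time, page) pairs.

```python
# Minimal sketch of non-preemptive FIFO broadcast scheduling (1-speed server).
# A broadcast of page p satisfies exactly the outstanding requests for p that
# arrived by the start of the broadcast; later arrivals wait for the next one.

def fifo_max_response_time(requests, lengths):
    """requests: list of (arrival_time, page); lengths: dict page -> length.
    Returns the maximum response time witnessed by FIFO."""
    pending = sorted(requests)          # FIFO order: earliest arrival first
    t, worst = 0.0, 0.0
    while pending:
        # The next broadcast is forced by the earliest-arrived pending request.
        arrival, page = pending[0]
        start = max(t, arrival)
        finish = start + lengths[page]
        # All requests for `page` that arrived by `start` are satisfied together.
        satisfied = [(a, p) for (a, p) in pending if p == page and a <= start]
        pending = [r for r in pending if r not in satisfied]
        worst = max(worst, max(finish - a for (a, p) in satisfied))
        t = finish
    return worst
```

For instance, on three unit-length requests where a second request for the same page arrives mid-broadcast, the late arrival waits for the next transmission of that page, exactly as described above.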
\section{Minimizing Maximum Delay Factor and Weighted Response Time} In this section we consider the problem of minimizing maximum delay factor and prove Theorem~\ref{thm:delayfactor}. \subsection{Unit Sized Pages} In this section we consider the problem of minimizing the maximum delay factor when all pages are of unit size. In this setting we assume preemption is not allowed. In the standard unicast scheduling setting where each broadcast satisfies exactly one request, it is known that the algorithm which always schedules the request with the smallest slack at any time is $(1+\epsilon)$-speed $O(\frac{1}{\epsilon})$-competitive \cite{ChekuriM09}. However, in the broadcast setting this algorithm, along with other simple greedy algorithms, fails to provide a constant competitive ratio even with extra speed. The reason for this is that the adversary can force these algorithms to repeatedly broadcast the same page even though the adversary can satisfy each of these requests in a single broadcast. Due to this, we consider a more sophisticated algorithm called $\Algorithm{SSF-W}$ (Shortest-Slack-First with Waiting). This algorithm was developed and analyzed in \cite{ChekuriM09}. In this paper we alter the algorithm in a slight but practically important way. The main contribution, however, is a new analysis that is at a high level similar in outline to the one in \cite{ChekuriM09} but is subtly different and leads to a much improved bound on its performance. $\Algorithm{SSF-W}$ \emph{adaptively} forces requests to wait after their arrival before they are considered for scheduling. The algorithm is parameterized by a real value $c > 1$ which is used to determine how long a request should wait. Before scheduling a page at time $t$, the algorithm determines the largest current delay factor $\alpha_t$ of any request that is unsatisfied at time $t$.
Amongst the unsatisfied requests that have a current delay factor at least $\frac{1}{c} \alpha_t$, the page corresponding to the request with the smallest slack is broadcast. Note that in the algorithm, each request is forced to wait to be scheduled until it has delay factor at least $\frac{1}{c} \alpha_t$. Thus $\Algorithm{SSF-W}$ can be seen as an adaptation, with explicit waiting, of the smallest-slack-first algorithm to the broadcast setting. Waiting is used to potentially satisfy multiple requests with similar arrival times in a single broadcast. Another interpretation, that we mentioned earlier, is that $\Algorithm{SSF-W}$ is a balance between $\Algorithm{LF}$ and $\Algorithm{SSF}$. \begin{center} \begin{tabular}[r]{|c|} \hline \textbf{Algorithm}: \Algorithm{SSF-W} \\ \\ \begin{minipage}{\textwidth} \begin{itemize} \item Let $\alpha_t$ be the maximum delay factor of any request in $\Algorithm{SSF-W}$'s queue at time $t$. \item At time $t$, let $Q(t) = \{ J_{p,i} \mid \mbox{ $J_{p,i}$ has not been satisfied and ${{t - a_{p,i} \over {S_{p,i}}} } \geq \frac{1}{c} \alpha_t $} \}$. \item If the machine is free at $t$, schedule the request in $Q(t)$ with the smallest slack {\em non-preemptively}. \end{itemize} \end{minipage}\\ \hline \end{tabular} \end{center} First we note the difference between $\Algorithm{SSF-W}$ above and the one described in \cite{ChekuriM09}. Let $\alpha'_t$ denote the maximum delay factor witnessed so far by $\Algorithm{SSF-W}$ at time $t$ over all requests seen by $t$, including satisfied and unsatisfied requests. In \cite{ChekuriM09}, a request $J_{p,i}$ is in $Q(t)$ if $\frac{t-a_{p,i}}{S_{p,i}} \geq \frac{1}{c}\alpha'_t$. Note that $\alpha'_t$ increases monotonically with $t$, while $\alpha_t$ can increase and decrease with $t$ and is never more than $\alpha'_t$. In the old algorithm it is possible that $Q(t)$ is empty and no request is scheduled at $t$ even though there are outstanding requests!
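The selection rule of the modified $\Algorithm{SSF-W}$ can be sketched as follows. This is an illustrative fragment with hypothetical names, not the paper's code; it uses the current maximum delay factor $\alpha_t$ over pending requests rather than the historical maximum $\alpha'_t$.

```python
# Sketch of the SSF-W selection rule (illustrative only; names are ours).
# Each pending request is a tuple (arrival, slack, page).

def ssfw_pick(pending, t, c):
    """Return the request SSF-W would broadcast at time t, for waiting
    parameter c > 1. Because alpha_t is the maximum over *pending*
    requests, Q(t) always contains the request attaining alpha_t,
    so Q(t) is nonempty whenever pending is."""
    alpha_t = max((t - a) / s for (a, s, _) in pending)
    # Q(t): unsatisfied requests whose current delay factor is >= alpha_t / c.
    q = [r for r in pending if (t - r[0]) / r[1] >= alpha_t / c]
    return min(q, key=lambda r: r[1])   # smallest slack in Q(t)
```

For example, a request with a large slack that has waited long enough to enter $Q(t)$ can still lose to a later-arriving small-slack request, which is exactly the intended tradeoff between $\Algorithm{LF}$ and $\Algorithm{SSF}$.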
Our new version of $\Algorithm{SSF-W}$ can be seen as more practical, since $Q(t)$ is nonempty whenever there are outstanding requests; moreover, the algorithm adapts and $\alpha_t$ may decrease as the request sequence changes with time. It is important to note that our analysis and the analysis given in \cite{ChekuriM09} hold for both definitions of $\Algorithm{SSF-W}$ with some adjustments. We analyze $\Algorithm{SSF-W}$ when it is given a $(1 + \epsilon)$-speed machine. Let $c > 1 + \frac{2}{\epsilon}$ be the constant which parameterizes $\Algorithm{SSF-W}$. Let $\sigma$ be an arbitrary sequence of requests. We let $\textrm{\sc OPT}$ denote some fixed offline optimum schedule and let $\alpha^*$ and $\alpha^{\Algorithm{SSF-W}}$ denote the maximum delay factor achieved by $\textrm{\sc OPT}$ and $\Algorithm{SSF-W}$, respectively. We will show that $\alpha^{\Algorithm{SSF-W}} \le c^2\alpha^*$. For the sake of contradiction, suppose that $\Algorithm{SSF-W}$ witnesses a delay factor greater than $c^2 \alpha^*$. We consider the \emph{first} time $t^*$ when $\Algorithm{SSF-W}$ has some request in its queue with delay factor $c^2 \alpha^{*}$. Let $J_{q,k}$ be a request which achieves the delay factor $c^2 \alpha^{*}$ at time $t^*$. Let $t_1$ be the smallest time less than $t^*$ such that at each time $t$ during the interval $[t_1, t^*]$, if $\Algorithm{SSF-W}$ is forced to broadcast by request $J_{p,i}$ at time $t$ then $\frac{t - a_{p,i}}{S_{p,i}} \geq \alpha^*$ and $S_{p,i} \leq S_{q,k}$. \iffalse Let $t_1$ be the smallest time less than $t^*$ such that for any request $J_{p,i}$ with $f_{p,i} \in [t_1, t^*]$ and $J_{p,i}$ forced $\Algorithm{SSF-W}$ to broadcast at time $f_{p,i}$, it is the case that $\frac{f_{p,i} - a_{p,i}}{S_{p,i}} \geq \alpha^*$ and $S_{p,i} \leq S_{a,k}$. \fi Throughout this section we let $I = [t_1, t^*]$. The main difference between the analysis in \cite{ChekuriM09} and the one here is in the definition of $t_1$.
In \cite{ChekuriM09}, $t_1$ was implicitly defined to be $a_{q,k} + c(f_{q,k} - a_{q,k})$. We let $\mathcal{J}_I$ denote the requests which forced $\Algorithm{SSF-W}$ to schedule broadcasts during the interval $[t_1, t^*]$. We now show that no two requests in $\mathcal{J}_I$ can be satisfied with a single broadcast by the optimal solution. Intuitively, the most effective way for the adversary to perform better than $\Algorithm{SSF-W}$ is to merge requests of the same page into a single broadcast. Here we show this is not possible for the requests in $\mathcal{J}_I$. We defer the proof of Lemma~\ref{lem:main} to the Appendix, since it is similar to that of Lemma~\ref{lem:main-fifo}. \begin{lemma} \label{lem:main} $\textrm{\sc OPT}$ cannot merge any two requests in $\mathcal{J}_I$ into a single broadcast. \end{lemma} To fully exploit the advantage of speed augmentation, we need to ensure that the length of the interval $I$ is sufficiently long. \begin{lemma} \label{lem:len} $|I| = |[t_1, t^*]| \geq (c^2 -c)S_{q,k}\alpha^*$. \end{lemma} \begin{proof} The request $J_{q,k}$ has delay factor at least $c\alpha^*$ at any time during $I' = [t', t^*]$, where $t' = t^* -(c^2 -c)S_{q,k}\alpha^*$. Let $\tau \in I'$. The largest delay factor any request can have at time $\tau$ is less than $c^2 \alpha^*$, by the definition of $t^*$ as the first time $\Algorithm{SSF-W}$ witnesses delay factor $c^2 \alpha^*$. Hence, $\alpha_{\tau} \leq c^2 \alpha^*$. Thus, the request $J_{q,k}$ is in the queue $Q(\tau)$ because $c\alpha^* \geq \frac{1}{c}\alpha_{\tau}$. Moreover, this means that any request that forced $\Algorithm{SSF-W}$ to broadcast during $I'$ must have delay factor at least $\alpha^*$, and since $J_{q,k} \in Q(\tau)$ for any $\tau \in I'$, the requests scheduled during $I'$ must have slack at most $S_{q,k}$. \end{proof} We now give a high-level overview of how we derive a contradiction.
From Lemma~\ref{lem:main}, we know no two requests in $\mathcal{J}_I$ can be merged by $\textrm{\sc OPT}$. Thus if we show that $\textrm{\sc OPT}$ must finish all these requests during an interval which is not long enough to include all of them, we can derive a contradiction. More precisely, we will show that all requests in $\mathcal{J}_I$ must be finished during $I_{opt}$ by $\textrm{\sc OPT}$, where $I_{opt} = [t_1 -2S_{q,k} \alpha^* c, t^*]$. It is easy to see that all these requests already have delay factor $\alpha^*$ by time $t^*$, thus the optimal solution must finish them by time $t^*$. As a starting point, we bound the arrival times of the requests in $\mathcal{J}_I$ in the following lemma. After that, we derive a contradiction in \lemref{competitiveone}. \begin{lemma} \label{lem:arrivaltime} Any request in $\mathcal{J}_I$ must have arrived after time $t_1 - 2 S_{q,k} \alpha^* c$. \end{lemma} \begin{proof} For the sake of contradiction, suppose that some request $J_{p,i} \in \mathcal{J}_I$ arrived at time $t' < t_1 -2 S_{q,k} \alpha^* c$. Recall that $J_{p,i}$ has slack no bigger than $S_{q,k}$ by the definition of $I$. Therefore at time $t_1 - S_{q,k} \alpha^* c$, $J_{p,i}$ has a delay factor of at least $c \alpha^*$. Thus any request scheduled during the interval $I' = [t_1 - S_{q,k} \alpha^* c, t_1]$ has a delay factor no less than $\alpha^*$. We observe that $J_{p,i}$ is in $Q(\tau)$ for $\tau \in I'$; otherwise there must be a request with a delay factor bigger than $c^2 \alpha^*$ at time $\tau$, a contradiction to the assumption that $t^*$ is the first time that $\Algorithm{SSF-W}$ witnessed a delay factor of $c^2 \alpha^*$. Therefore any request scheduled during $I'$ has slack no bigger than $S_{p,i}$. Also we know that $S_{p,i} \leq S_{q,k}$.
In sum, we showed that any request scheduled during $I'$ has slack no bigger than $S_{q,k}$ and a delay factor no smaller than $\alpha^*$, which is a contradiction to the definition of $t_1$. \end{proof} Now we are ready to prove the competitiveness of $\Algorithm{SSF-W}$. \begin{lemma} \lemlab{competitiveone} Suppose $c$ is a constant such that $ c > 1 + 2/ \epsilon$. If $\Algorithm{SSF-W}$ has $(1+ \epsilon)$-speed then $\alpha^{\Algorithm{SSF-W}} \leq c^2 \alpha^*$. \end{lemma} \begin{proof} For the sake of contradiction, suppose that $\alpha^{\Algorithm{SSF-W}} > c^2 \alpha^*$. During the interval $I$, the number of broadcasts which $\Algorithm{SSF-W}$ transmits is $(1+\epsilon)|I|$. From Lemma~\ref{lem:arrivaltime}, all the requests processed during $I$ arrived no earlier than $t_1 -2 c \alpha^* S_{q,k}$. We know that the optimal solution must process these requests before time $t^*$ because these requests have delay factor at least $\alpha^*$ by $t^*$. By Lemma~\ref{lem:main} the optimal solution must make a unique broadcast for each of these requests. Thus, the optimal solution must finish all of these requests in $2 c \alpha^* S_{q,k} + |I|$ time steps, so it must hold that $(1+\epsilon)|I| \leq 2c \alpha^* S_{q,k} + |I|$. Using Lemma~\ref{lem:len}, this simplifies to $c \leq 1 + 2/\epsilon$, which is a contradiction to $c > 1 + 2/\epsilon$. \end{proof} The previous lemmas prove the first part of Theorem~\ref{thm:delayfactor} when $c = 1 + 3/\epsilon$: namely, $\Algorithm{SSF-W}$ is a $(1+ \epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive algorithm for minimizing the maximum delay factor in broadcast scheduling with unit sized pages. We now compare the proofs of Theorem~\ref{thm:delayfactor} and Theorem~\ref{thm:fifo} with the analysis given in \cite{ChekuriM09}.
The central technique used in \cite{ChekuriM09} and in our analysis is to derive a contradiction by showing that the optimal solution must complete more requests than possible on some time interval $I$. This technique is well known in unicast scheduling. At the heart of this technique is finding which requests to consider and bounding the length of the interval $I$. This is where our proof and the one given in \cite{ChekuriM09} differ. Here we are more careful about how $I$ is defined and how we find requests the optimal solution must broadcast during $I$. This allows us to show tighter bounds on the speed and competitive ratios while simplifying the analysis. In fact, our analyses of $\Algorithm{FIFO}$ and $\Algorithm{SSF-W}$ show the importance of these definitions. Our analysis of $\Algorithm{FIFO}$ shows that a tight bound on the length of $I$ can force a contradiction without extra speed-up given to the algorithm. Our analysis of $\Algorithm{SSF-W}$ shows how resource augmentation can be used to force the contradiction when the length of $I$ varies. \subsection{Weighted Response Time and Weighted Delay Factor} Before showing that $\Algorithm{SSF-W}$ is $(2+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for minimizing the maximum delay factor with different sized pages, we show the connection of our analysis of $\Algorithm{SSF-W}$ to the problem of minimizing \emph{weighted} response time. In this setting a request $J_{p,i}$ has a weight $w_{p,i}$ instead of a slack. The goal is to minimize the maximum weighted response time $\max_{p,i} w_{p,i}(f_{p,i} - a_{p,i})$. We develop an algorithm which we call $\Algorithm{BWF-W}$ for Biggest-Wait-First with Waiting. This algorithm is defined analogously to $\Algorithm{SSF-W}$. The algorithm is parameterized by a constant $c >1$.
At any time $t$ before broadcasting a page, $\Algorithm{BWF-W}$ determines the largest weighted wait time of any request which has yet to be satisfied. Let this value be $\rho_t$. The algorithm then chooses to broadcast a page corresponding to the request with largest weight amongst the requests whose current weighted wait time at time $t$ is larger than $\frac{1}{c}\rho_t$. \begin{center} \begin{tabular}[r]{|c|} \hline \textbf{Algorithm}: \Algorithm{BWF-W} \\ \\ \begin{minipage}{\textwidth} \begin{itemize} \item Let $\rho_t$ be the maximum weighted wait time of any request in $\Algorithm{BWF-W}$'s queue at time $t$. \item At time $t$, let $Q(t) = \{ J_{p,i} \mid \mbox{ $J_{p,i}$ has not been satisfied and $w_{p,i}(t - a_{p,i}) \geq \frac{1}{c} \rho_t $} \}$. \item If the machine is free at $t$, schedule the request in $Q(t)$ with largest weight {\em non-preemptively}. \end{itemize} \end{minipage}\\ \hline \end{tabular} \end{center} Although minimizing the maximum delay factor and minimizing the maximum weighted flow time are very similar metrics, the problems are not equivalent. It may also be of interest to minimize the maximum \emph{weighted} delay factor. In this setting each request has a deadline and a weight. The goal is to minimize $\max_{p,i} w_{p,i}(f_{p,i} - a_{p,i}) / S_{p,i}$. For this setting we develop another algorithm which we call $\Algorithm{SRF-W}$ (Smallest-Ratio-First with Waiting). The algorithm takes the parameter $c$. At any time $t$ before broadcasting a page, $\Algorithm{SRF-W}$ determines the largest weighted delay factor of any request which has yet to be satisfied. Let this value be $\alpha^w_t$. The algorithm then chooses to broadcast a page corresponding to the request with the smallest ratio of the slack over the weight amongst the requests whose current weighted delay factor at time $t$ is larger than $\frac{1}{c}\alpha^w_t$. The algorithm can be formally expressed as follows. 
\begin{center} \begin{tabular}[r]{|c|} \hline \textbf{Algorithm}: \Algorithm{SRF-W} \\ \\ \begin{minipage}{\textwidth} \begin{itemize} \item Let $\alpha^w_t$ be the maximum weighted delay factor of any request in $\Algorithm{SRF-W}$'s queue at time $t$. \item At time $t$, let $Q(t) = \{ J_{p,i} \mid \mbox{ $J_{p,i}$ has not been satisfied and $w_{p,i}(t - a_{p,i})/S_{p,i} \geq \frac{1}{c} \alpha^w_t $} \}$. \item If the machine is free at $t$, schedule the request in $Q(t)$ with smallest slack over weight {\em non-preemptively}. \end{itemize} \end{minipage}\\ \hline \end{tabular} \end{center} For the problems of minimizing the maximum weighted response time and weighted delay factor, the upper bounds shown for $\Algorithm{SSF-W}$ in this paper also hold for $\Algorithm{BWF-W}$ and $\Algorithm{SRF-W}$, respectively. The analysis of $\Algorithm{BWF-W}$ and $\Algorithm{SRF-W}$ is very similar to that of $\Algorithm{SSF-W}$ and the proofs are omitted. \subsection{Varying Sized Pages} Here we extend our ideas to the case where pages can have different sizes for the objective of minimizing the maximum delay factor. We develop a generalization of $\Algorithm{SSF-W}$ for this setting which is similar to the generalization of $\Algorithm{SSF-W}$ given in \cite{ChekuriM09}. For each page $p$, we let $\ell_p$ denote the length of page $p$. Since pages have different lengths, we allow preemption. Therefore, if $t_1$ is the time where the broadcast of page $p$ is started and $t_2$ is the time that this broadcast is completed, it is the case that $\ell_p \leq t_2 - t_1$. A request for the page $p$ is satisfied by this broadcast only if the request arrives before time $t_1$. A request that arrives during the interval $(t_1, t_2]$ does not start being satisfied because it must receive a sequential transmission of page $p$ starting from the beginning.
It is possible that a transmission of page $p$ is \emph{restarted} due to the arrival of another request for page $p$ which has smaller slack. The original transmission of page $p$ in this case is abandoned. Notice that this results in wasted work by the algorithm. It is because of this wasted work that more speed is needed to show the competitiveness of $\Algorithm{SSF-W}$. We outline the details of the modifications to $\Algorithm{SSF-W}$. As before, the algorithm maintains a queue $Q(t)$ at each time $t$, where a request $J_{p,i}$ is in $Q(t)$ if and only if $\frac{t - a_{p,i}}{S_{p,i}} \geq \frac{1}{c} \alpha_t$. The algorithm broadcasts the page of the request with the smallest slack in $Q(t)$. The algorithm may preempt a broadcast of $p$ that is forced by request $J_{p,i}$ if another request $J_{p',j}$ becomes available for scheduling such that $S_{p',j} < S_{p,i}$. If the request $J_{p,i}$ ever forces $\Algorithm{SSF-W}$ to broadcast again, then $\Algorithm{SSF-W}$ continues to broadcast page $p$ from where it left off before the preemption. If another request for page $p$ forces $\Algorithm{SSF-W}$ to broadcast page $p$ before $J_{p,i}$ is satisfied, then the transmission of page $p$ is \emph{restarted}. A key difference between our generalization of $\Algorithm{SSF-W}$ and the one from \cite{ChekuriM09} is that in our new algorithm, requests can be forced out of $Q$ even after they have been started. Hence, in our version of $\Algorithm{SSF-W}$ every request in $Q(t)$ has current delay factor at least $\frac{1}{c} \alpha_t$ at time $t$. Our algorithm breaks ties arbitrarily. In \cite{ChekuriM09}, ties are broken arbitrarily, but the algorithm ensures that if a request $J_{p,k}$ is started before a request $J_{p',j}$ then $J_{p,k}$ will be finished before request $J_{p',j}$. Here, this requirement is not needed. Note that the algorithm may preempt a request $J_{p,i}$ by another request $J_{p,k}$ for the same page $p$ if $S_{p,k} < S_{p,i}$.
In this case the first broadcast of page $p$ is abandoned. Notice that multiple broadcasts of page $p$ can repeatedly be abandoned. We now analyze the extended algorithm assuming that it has a $(2+\epsilon)$-speed advantage over the optimal offline algorithm. As mentioned before, the extra speed is needed to overcome the wasted work caused by abandoned broadcasts. As before, let $\sigma$ be an arbitrary sequence of requests. We let $\textrm{\sc OPT}$ denote some fixed offline optimum schedule and let $\alpha^*$ denote the optimum delay factor. Let $c > 1 + \frac{4}{\epsilon}$ be the constant that parameterizes $\Algorithm{SSF-W}$. We will show that $\alpha^{\Algorithm{SSF-W}} \le c^2\alpha^*$. For the sake of contradiction, suppose that $\Algorithm{SSF-W}$ witnesses a delay factor greater than $c^2 \alpha^*$. We consider the \emph{first} time $t^*$ when $\Algorithm{SSF-W}$ has some request in its queue with delay factor $c^2 \alpha^{*}$. Let $J_{q,k}$ be a request which achieves the delay factor $c^2 \alpha^{*}$ at time $t^*$. Let $t_1$ be the smallest time less than $t^*$ such that at each time $t$ during the interval $[t_1, t^*]$, if $\Algorithm{SSF-W}$ is forced to broadcast by request $J_{p,i}$ at time $t$ then $\frac{t - a_{p,i}}{S_{p,i}} \geq \alpha^*$ and $S_{p,i} \leq S_{q,k}$. Throughout this section we let $I=[t_1, t^*]$. Notice that some requests that force $\Algorithm{SSF-W}$ to broadcast during $I$ could have started being satisfied before $t_1$. We say that a request \emph{starts} being scheduled at time $t$ if it is the request which forces $\Algorithm{SSF-W}$ to broadcast at time $t$ and $t$ is the first time the request forces $\Algorithm{SSF-W}$ to schedule a page. Notice that a request can only start being scheduled once, and at most one request starts being scheduled at any time. We now show a lemma analogous to Lemma~\ref{lem:main}.
\begin{lemma} \label{lem:main-varying} Consider two distinct requests $J_{x,j}$ and $J_{x,i}$ for some page $x$. If $J_{x,j}$ and $J_{x,i}$ both start being scheduled by $\Algorithm{SSF-W}$ during the interval $I$ then $\textrm{\sc OPT}$ cannot satisfy $J_{x,j}$ and $J_{x,i}$ by a single broadcast. \end{lemma} \begin{proof} Without loss of generality say that request $J_{x,j}$ was satisfied before request $J_{x,i}$ by $\Algorithm{SSF-W}$. Let $t'$ be the time that $\Algorithm{SSF-W}$ \emph{starts} satisfying request $J_{x,j}$. By the definition of $I$, request $J_{x,j}$ must have delay factor at least $\alpha^*$ at this time. We also know that the request $J_{x,i}$ must arrive after time $t'$, otherwise request $J_{x,i}$ would also be satisfied at time $t'$. If the optimal solution combines these requests into a single broadcast then the request $J_{x,j}$ must wait until the request $J_{x,i}$ arrives to be satisfied. However, this means that the request $J_{x,j}$ achieves a delay factor greater than $\alpha^*$ in $\textrm{\sc OPT}$, a contradiction to the definition of $\alpha^*$. \end{proof} The next two lemmas have proofs similar to those of Lemma~\ref{lem:len} and Lemma~\ref{lem:arrivaltime}; we defer the proofs to the Appendix. \begin{lemma} \label{lem:len-varying} $|I| = |[t_1, t^*]| \geq (c^2 -c)S_{q,k}\alpha^*$. \end{lemma} \iffalse \begin{proof} The request $J_{q,k}$ has delay factor at least $c\alpha^*$ at any time $t$ during $I' = [t', t^*]$, where $t' = t^* -(c^2 -c)S_{q,k}\alpha^*$. The largest delay factor any request can have during $I'$ is less than $c^2 \alpha^*$ by definition of $t^*$ being the first time $\Algorithm{SSF-W}$ witnesses delay factor $c^2 \alpha^*$. Hence the request $J_{q,k}$ is in the queue $Q(t)$ at any time $t$ during $I'$.
Therefore, any request that forced $\Algorithm{SSF-W}$ to broadcast during $I'$, must have delay factor at least $\alpha^*$ and since $J_{q,k} \in Q(t')$, the requests scheduled during $I'$ must have slack at most $S_{q,k}$. \end{proof} \fi \begin{lemma} \label{lem:arrivaltime-varying} Any request which forced $\Algorithm{SSF-W}$ to schedule a page during $I$ must have arrived after time $t_1 - 2 S_{q,k} \alpha^* c$. \end{lemma} \iffalse \begin{proof} For the sake of contradiction, suppose that some request $J_{p,i}$ that forced $\Algorithm{SSF-W}$ to broadcast page $p$ during the interval $I$ arrived at time $t' < t_1 -2 S_{q,k} \alpha^* c$. Recall that $J_{p,i}$ has a slack no bigger than $S_{q,k}$ by the definition of $I$. Therefore at time $t_1 - S_{q,k} \alpha^* c$, $J_{p,i}$ has a delay factor of at least $c \alpha^*$. Thus any request scheduled during the interval $[t_1 - S_{q,k} \alpha^* c, t_1]$ has a delay factor no less than $\alpha^*$. We observe that $J_{p,i}$ is in $Q(\tau)$ for $\tau \in [t_1 - S_{q,k} \alpha^* c, t_1]$; otherwise there must be a request with a delay factor bigger than $c^2 \alpha^*$ during the interval and this is a contradiction to the assumption that $t^*$ is the first time that $\Algorithm{SSF-W}$ witnessed a delay factor of $c^2 \alpha^*$. Therefore any request that forced $\Algorithm{SSF-W}$ to preform a broadcast during $[t_1 - S_{q,k} \alpha^* c, t_1]$ has a slack no bigger than $S_{p,i}$. Also we know that $S_{p,i} \leq S_{q,k}$ by the definition of $I$. In sum, we showed that any request that forced $\Algorithm{SSF-W}$ to do a broadcast during $[t_1 - S_{q,k} \alpha^* c, t_1]$ have a slack no bigger than $S_{q,k}$ and a delay factor no smaller than $\alpha^*$, which is a contradiction of the definition of $t_1$. \end{proof} \fi Using the previous lemmas we can bound the competitiveness of $\Algorithm{SSF-W}$. 
In the following lemma the main difference between the proof for unit sized pages and the proof for varying sized pages can be seen. The issue is that there can be some requests which start being satisfied before time $t_1$ and force $\Algorithm{SSF-W}$ to broadcast a page during the interval $I$. When these requests were started, their delay factors need not have been bounded by $\alpha^*$. Due to this, it is possible for these requests to be merged with other requests which forced $\Algorithm{SSF-W}$ to broadcast on the interval $I$. \begin{lemma} \lemlab{competitive} Suppose $c$ is a constant such that $c > 1 + 4/ \epsilon$. If $\Algorithm{SSF-W}$ has $(2+ \epsilon)$-speed then $\alpha^{\Algorithm{SSF-W}} \leq c^2 \alpha^*$. \end{lemma} \begin{proof} For the sake of contradiction, suppose that $\alpha^{\Algorithm{SSF-W}} > c^2 \alpha^*$. Let $\mathcal{A}$ be the set of requests which start being satisfied before time $t_1$ and force $\Algorithm{SSF-W}$ to broadcast at some time during $I$. Notice that no two requests in $\mathcal{A}$ are for the same page. Let $\mathcal{B}$ be the set of requests which start being satisfied during the interval $I$. Note that the sets $\mathcal{A}$ and $\mathcal{B}$ may consist of requests whose corresponding broadcast was abandoned at some point and that $\mathcal{A} \cap \mathcal{B} = \emptyset$ by definition. Let $V_\mathcal{A}$ and $V_\mathcal{B}$ denote the total size of the requests in $\mathcal{A}$ and $\mathcal{B}$, respectively. During the interval $I$, the volume of broadcasts which $\Algorithm{SSF-W}$ transmits is $(2+\epsilon)|I|$. Notice that $V_\mathcal{A} + V_\mathcal{B} \geq (2+\epsilon)|I|$, since $\mathcal{A} \cup \mathcal{B}$ accounts for all requests which forced $\Algorithm{SSF-W}$ to broadcast their pages during $I$. From Lemma~\ref{lem:arrivaltime-varying}, \emph{all} the requests processed during $I$ arrived no earlier than $t_1 -2 c \alpha^* S_{q,k}$.
We know that the optimal solution must process these requests before time $t^*$ because these requests have delay factor at least $\alpha^*$ by this time. By Lemma~\ref{lem:main-varying} the optimal solution must make a unique broadcast for each request in $\mathcal{B}$. We also know that no two requests in $\mathcal{A}$ can be merged because no two requests in $\mathcal{A}$ are for the same page. The optimal solution, however, could possibly merge requests in $\mathcal{A}$ with requests in $\mathcal{B}$. Thus, the optimal solution must broadcast at least a $\max\{V_\mathcal{A}, V_\mathcal{B}\}$ volume of requests during the interval $[t_1 - 2c\alpha^* S_{q,k}, t^*]$. Notice that $\max\{V_\mathcal{A}, V_\mathcal{B} \} \geq \frac{1}{2} (V_\mathcal{A} + V_\mathcal{B}) \geq \frac{1}{2} (2+\epsilon)|I|$ and that $|[t_1 - 2c\alpha^* S_{q,k}, t^*]| = 2 c \alpha^* S_{q,k} + |I|$. Therefore, it must hold that $\frac{1}{2}(2+\epsilon)|I| \leq 2c \alpha^* S_{q,k} + |I|$, that is, $\frac{\epsilon}{2}|I| \leq 2c\alpha^* S_{q,k}$. Substituting $|I| \geq (c^2-c)S_{q,k}\alpha^*$ from Lemma~\ref{lem:len-varying}, this simplifies to $c \leq 1 + 4 / \epsilon$. This is a contradiction to $c > 1 + 4/ \epsilon$. \end{proof} Thus, we have the second part of Theorem~\ref{thm:delayfactor} by setting $c = 1 + 5/\epsilon$. Namely, $\Algorithm{SSF-W}$ is $(2+\epsilon)$-speed $O(\frac{1}{\epsilon^2})$-competitive for minimizing the maximum delay factor for different sized pages. \section{Lower Bound for a Natural Greedy Algorithm $\Algorithm{LF}$} In this section, we consider a natural algorithm which is similar to $\Algorithm{SSF-W}$. This algorithm, which we will call $\Algorithm{LF}$ for Longest Delay First, always schedules the page which has the largest delay factor. Notice that $\Algorithm{LF}$ is the same as $\Algorithm{SSF-W}$ when $c=1$. However, we are able to show a negative result on this algorithm for minimizing the maximum delay factor.
This demonstrates the importance of the tradeoff between scheduling a request with smallest slack and forcing requests to wait. The algorithm $\Algorithm{LF}$ was suggested and analyzed in our recent work \cite{ChekuriIM09} and is inspired by $\Algorithm{LWF}$, which was shown to be $O(1)$-competitive with $O(1)$-speed for average flow time \cite{EdmondsP04}. In \cite{ChekuriIM09}, $\Algorithm{LF}$ is shown to be $O(k)$-competitive with $O(k)$-speed for $L_k$ norms of flow time and delay factor in broadcast scheduling for unit sized pages. Note that $\Algorithm{LF}$ is a simple greedy algorithm. It was suggested in \cite{ChekuriIM09} that $\Algorithm{LF}$ may be competitive for the maximum delay factor, which is the $L_{\infty}$-norm of delay factor. To show the lower bound, we will show that $\Algorithm{LF}$ is not $O(1)$-speed $O(1)$-competitive, even in the standard unicast scheduling setting with unit sized jobs. Since we are considering the unicast setting, where processing a page satisfies exactly one request, we drop the terminology of `requests' and use `jobs'. We also drop the index of a request $J_{p,i}$ and use $J_i$, since there can be only one request for each page. Let us say that $J_{i}$ has a wait ratio of $r_{i}(t) = \frac{t- a_{i}}{S_{i}}$ at time $t > a_i$, where $a_i$ and $S_i$ are the arrival time and slack size of $J_i$. Note that the delay factor of $J_i$ is $\max(1, r_i(f_i))$, where $f_i$ is $J_i$'s finish time. We now formally define $\Algorithm{LF}$. The algorithm $\Algorithm{LF}$ schedules the request with the largest wait ratio at each time. $\Algorithm{LF}$ can be seen as a natural generalization of $\Algorithm{FIFO}$, since $\Algorithm{FIFO}$ schedules the request with the largest wait time at each time. Recall that $\Algorithm{SSF-W}$ forces requests to wait to help merge potential requests in a single broadcast.
The algorithm $\Algorithm{LF}$ behaves similarly, since it implicitly delays each request until it is the request with the largest wait ratio, potentially merging many requests into a single broadcast. Hence, this algorithm is a natural candidate for the problem of minimizing the maximum delay factor, and it does not need any parameters, unlike the algorithm $\Algorithm{SSF-W}$. However, this algorithm cannot have a constant competitive ratio with any constant speed. For any speed-up $s \geq 1$ and any constant $c \geq 2$, we construct the following adversarial instance $\sigma$. For this problem instance we will show that $\Algorithm{LF}$ has wait ratio at least $c$, while $\textrm{\sc OPT}$ has wait ratio at most $1$. Hence, we can force $\Algorithm{LF}$ to have a competitive ratio of $c$ for any constant $c \geq 2$. In the instance $\sigma$, there is a series of job groups $\mathcal{J}_i$ for $0 \leq i \leq k$, where $k$ is a constant to be fixed later. We now fix the jobs in each group. For simplicity of notation and readability, we will allow jobs to arrive at negative times. We can simply shift all of the times later, so that all arrival times are positive. It is also assumed that $s$ and $c$ are integers in our example. All jobs in each group $\mathcal{J}_i$ have the same arrival time $A_i = -(sc)^{k-i+1} - \sum_{j=0}^{k-i-1} (sc)^j$ and the same slack size $S_i = \frac{c(sc)^{k-i} }{ (1-1/sc)^{k-i}}$. There are $s(sc)^{k+1}$ jobs in the group $\mathcal{J}_0$ and $s(sc)^{k-i}$ jobs in the group $\mathcal{J}_i$ for $1 \leq i \leq k$. We now explain how $\Algorithm{LF}$ and $\textrm{\sc OPT}$ behave for the instance $\sigma$. For simplicity, we will refer to $\mathcal{J}_i$, instead of a job in $\mathcal{J}_i$, since all jobs in the same group are indistinguishable to the scheduler. For the first group $\mathcal{J}_0$, $\Algorithm{LF}$ starts processing $\mathcal{J}_0$ upon its arrival and continues until completing it.
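The bookkeeping behind the instance $\sigma$ can be sanity-checked numerically. The sketch below is ours (not part of the original analysis) and uses the illustrative values $s=2$, $c=3$, $k=4$; writing $F_i = A_i + (sc)^{k-i+1}$, it verifies that arrival times strictly increase, slacks strictly decrease, and the intervals $[A_0,F_0]$ and $[F_{i-1},F_i]$ are exactly long enough for an $s$-speed machine to process the unit sized jobs of $\mathcal{J}_0$ and $\mathcal{J}_i$, respectively.

```python
# Sanity check (ours) of the adversarial instance sigma; jobs are unit sized.
from fractions import Fraction

def build_instance(s, c, k):
    """Return (A_i, S_i, |J_i|) for each group J_i of the instance sigma."""
    groups = []
    for i in range(k + 1):
        A = -(s * c) ** (k - i + 1) - sum((s * c) ** j for j in range(k - i))
        S = c * (s * c) ** (k - i) / (1 - Fraction(1, s * c)) ** (k - i)
        size = s * (s * c) ** (k + 1) if i == 0 else s * (s * c) ** (k - i)
        groups.append((A, S, size))
    return groups

s, c, k = 2, 3, 4
groups = build_instance(s, c, k)
A = [g[0] for g in groups]
S = [g[1] for g in groups]
F = [A[i] + (s * c) ** (k - i + 1) for i in range(k + 1)]  # F_i = A_i + (sc)^{k-i+1}

assert A == sorted(A) and len(set(A)) == k + 1   # arrival times strictly increase
assert S == sorted(S, reverse=True)              # slacks strictly decrease
# [A_0, F_0] fits J_0 at speed s, and each [F_{i-1}, F_i] fits J_i exactly.
assert F[0] - A[0] == groups[0][2] // s
for i in range(1, k + 1):
    assert F[i] - F[i - 1] == groups[i][2] // s  # exactly (sc)^{k-i}
    assert A[i] <= F[i - 1]                      # J_i has arrived by then
```

With these parameters the intervals tile contiguously and end at $F_k = 0$, matching the schedule described for $\Algorithm{LF}$ below.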
On the other hand, we let $\textrm{\sc OPT}$ postpone $\mathcal{J}_0$ until $\textrm{\sc OPT}$ finishes all jobs in $\mathcal{J}_1$ to $\mathcal{J}_k$. This does not hurt $\textrm{\sc OPT}$, since the slack size of the jobs in $\mathcal{J}_0$ is so large. In fact, we will show that $\textrm{\sc OPT}$ can finish $\mathcal{J}_0$ by its deadline. For each group $\mathcal{J}_i$ for $1 \leq i \leq k$, $\textrm{\sc OPT}$ will start $\mathcal{J}_i$ upon its arrival and complete each job in $\mathcal{J}_i$ without interruption. In contrast, for each $1 \leq i \leq k$, $\Algorithm{LF}$ will not begin scheduling $\mathcal{J}_i$ until the jobs have been substantially delayed. The delay of $\mathcal{J}_k$ is critical for $\Algorithm{LF}$, since the slack of $\mathcal{J}_k$ is small. For intuitive understanding, we refer the reader to Figure 2. \begin{figure} \caption{Comparison of the schedules of $\Algorithm{LF}$ and $\textrm{\sc OPT}$ on the job groups $\mathcal{J}_i$.} \end{figure} We now formally prove that $\Algorithm{LF}$ achieves wait ratio $c$, while $\textrm{\sc OPT}$ has wait ratio at most 1 for the given problem instance $\sigma$. Let $F_i = A_i + (sc)^{k-i+1}$ for $0 \leq i \leq k$. Let $R_i$ be the maximum wait ratio for any job in $\mathcal{J}_i$ witnessed by $\Algorithm{LF}$. We now define $k$ to be a constant such that $(1 - \frac{1}{sc})^k c \leq \frac{1}{3s}$. \begin{lemma} \label{lem:ld_act} $\Algorithm{LF}$, given speed $s$, processes $\mathcal{J}_0$ during $[A_0, F_0]$ and $\mathcal{J}_i$ during $[F_{i-1}, F_i]$ for $1 \leq i \leq k$. \end{lemma} \begin{proof}[Proof Sketch:] By simple algebra one can check that the length of the time interval $[A_0, F_0]$ (resp.\ $[F_{i-1}, F_i]$) is exactly the amount of time that $\Algorithm{LF}$ with $s$-speed needs to completely process $\mathcal{J}_0$ (resp.\ $\mathcal{J}_i$). First we show that $\mathcal{J}_0$ is finished during $[A_0, F_0]$ by $\Algorithm{LF}$.
It can be seen that at time $F_0$ the jobs in $\mathcal{J}_j$ for $2 \leq j \leq k$ have not arrived, so we can focus on the class $\mathcal{J}_1$. The jobs in $\mathcal{J}_1$ can be shown to have the same wait ratio as the jobs in $\mathcal{J}_0$ at time $F_0$, and therefore the jobs in $\mathcal{J}_1$ have smaller wait ratio than the jobs in $\mathcal{J}_0$ at all times before $F_0$. This is because $\mathcal{J}_0$ has a bigger slack than $\mathcal{J}_1$. Hence, $\Algorithm{LF}$ will finish all of the jobs in $\mathcal{J}_0$ before beginning the jobs in $\mathcal{J}_1$. To complete the proof, we show that $\mathcal{J}_i$ is finished during $[F_{i-1}, F_i]$ by $\Algorithm{LF}$. It can be seen that at time $F_i$ the jobs in $\mathcal{J}_j$ for $i+2 \leq j \leq k$ have not arrived, so we can focus on the class $\mathcal{J}_{i+1}$. The jobs in $\mathcal{J}_i$ can be shown to have the same wait ratio as the jobs in $\mathcal{J}_{i+1}$ at time $F_i$, and therefore the jobs in $\mathcal{J}_{i+1}$ have smaller wait ratio than the jobs in $\mathcal{J}_i$ at all times before $F_i$. Hence, $\Algorithm{LF}$ will finish all of the jobs in $\mathcal{J}_i$ before beginning the jobs in $\mathcal{J}_{i+1}$. \iffalse Thus to show that $\Algorithm{LF}$ schedules the groups in the order from $\mathcal{J}_0$ to $\mathcal{J}_k$, we only need to show that during $[A_0, F_0]$ ($[F_{i-1}, F_i]$), $\mathcal{J}_0$ ($\mathcal{J}_i$) has a bigger wait ratio than $\mathcal{J}_1$ ($\mathcal{J}_{i+1}$) through $\mathcal{J}_{k}$. Indeed, the wait ratio of $\mathcal{J}_i$, $1 \leq i \leq k-1$, at time $F_i$ is $(F_i - A_i)/ S_i$, which is the same as that of $\mathcal{J}_{i+1}$ at time $F_i$, $(F_i - A_{i+1}) /S_{i+1}$. The case for the first group can be shown similarly. \fi \end{proof} Using Lemma~\ref{lem:ld_act} and the given arrival times of each of the jobs we have the following lemma.
\begin{lemma} \label{lem:ld_poor} $R_i = c(1 - \frac{1}{sc})^{k-i}$ for $0 \leq i \leq k$. \end{lemma} Notice that Lemma~\ref{lem:ld_poor} implies that $R_k \geq c$. Hence, the maximum delay factor witnessed by $\Algorithm{LF}$ is at least $c$. In the following lemma, we show that there exists a valid schedule for $\textrm{\sc OPT}$ in which the maximum wait ratio is at most one. This will show that $\Algorithm{LF}$ has a competitive ratio of at least $c$. Note that Lemma~\ref{lem:ld_poor} and the choice of $k$ show that $R_0 \leq \frac{1}{3s}$. \begin{lemma} \label{lem:opt_good} Consider a schedule which processes each job in $\mathcal{J}_{0}$ during $[F_k, F_k + |\mathcal{J}_{0}|]$ and each job in $\mathcal{J}_{i}$ during $[A_i, A_i + |\mathcal{J}_{i}|]$ for $1 \leq i \leq k$. This schedule is valid and, moreover, the maximum wait ratio witnessed by this schedule is at most one. \end{lemma} \begin{proof}[Proof Sketch:] It is not hard to show that the time intervals $[F_k, F_k + |\mathcal{J}_{0}|]$ and $[A_i, A_i + |\mathcal{J}_{i}|]$ for $1 \leq i \leq k$ do not overlap; therefore this is a valid schedule. The wait ratio witnessed by the jobs in the groups $\mathcal{J}_i$ for $1 \leq i \leq k$ can easily be seen to be at most 1, because this schedule processes each of these jobs as soon as it arrives. We now show that the wait ratio of each of the jobs in $\mathcal{J}_0$ is at most $1$. Recall that $R_0$, the maximum wait ratio of $\mathcal{J}_0$ under $\Algorithm{LF}$, which is witnessed at time $F_0$, is at most $\frac{1}{3s}$. Using the fact that $sc \geq 2$, we can easily show that $|F_k - A_0| \leq 2 |F_0 - A_0|$. Note that $\textrm{\sc OPT}$ can finish $\mathcal{J}_0$ during $[F_k, F_k + s|F_0 - A_0|]$, since $\Algorithm{LF}$ with $s$-speed could finish $\mathcal{J}_0$ during $[A_0, F_0]$. Thus the wait ratio of $\mathcal{J}_0$ at time $F_k + s|F_0 - A_0|$ is at most $\frac{2+s}{3s} \leq 1$.
\end{proof} From Lemma~\ref{lem:ld_poor} the maximum delay factor witnessed by $\Algorithm{LF}$ is $c$, and by Lemma~\ref{lem:opt_good} the maximum delay factor witnessed by $\textrm{\sc OPT}$ is $1$. Hence we have the proof of Theorem~\ref{thm:lf}. \section{Conclusion} In this paper, we showed an almost fully scalable algorithm\footnote{An algorithm is said to be almost fully scalable if for any fixed $\epsilon>0$, it is $(1+\epsilon)$-speed $O(1)$-competitive.} for minimizing the maximum delay factor in broadcasting for unit sized jobs. The slight modification we make to $\Algorithm{SSF-W}$ from \cite{ChekuriM09} makes the algorithm more practical. Using the intuition developed for the maximum delay factor, we proved that $\Algorithm{FIFO}$ is in fact 2-competitive for varying sized jobs, closing the problem of minimizing the maximum response time online in broadcast scheduling. We close this paper with the following open problems. Although the new algorithm for the maximum delay factor with unit sized jobs is almost fully scalable, it explicitly depends on the speed given to the algorithm. Can one obtain an algorithm without this dependence? For different sized pages, it is still open whether there exists a $(1+\epsilon)$-speed algorithm that is $O(1)$-competitive. For minimizing the maximum response time offline, it is of theoretical interest to show a lower bound on the approximation ratio that can be achieved or to show an algorithm that is a $c$-approximation for some $c < 2$. \appendix \section{Omitted Proofs} \subsection{Proof of Lemma~\ref{lem:main}} \begin{proof} Let $J_{x,i}, J_{x,j} \in \mathcal{J}_I$ with $i < j$. Let $t'$ be the time that $\Algorithm{SSF-W}$ starts satisfying request $J_{x,i}$. By the definition of $I$, request $J_{x,i}$ must have delay factor at least $\alpha^*$ at time $f_{x,i}$. We also know that the request $J_{x,j}$ must arrive after time $t'$; otherwise request $J_{x,j}$ would also be satisfied at time $t'$.
If the optimal solution combines these requests into a single broadcast, then the request $J_{x,i}$ must wait until the request $J_{x,j}$ arrives to be satisfied. However, this means that the request $J_{x,i}$ must achieve a delay factor greater than $\alpha^*$ by $\textrm{\sc OPT}$, a contradiction of the definition of $\alpha^*$. \end{proof} \subsection{Proof of Lemma~\ref{lem:len-varying}} \begin{proof} The request $J_{q,k}$ has delay factor at least $c\alpha^*$ at any time $t$ during $I' = [t', t^*]$, where $t' = t^* -(c^2 -c)S_{q,k}\alpha^*$. The largest delay factor any request can have during $I'$ is less than $c^2 \alpha^*$ by the definition of $t^*$ as the first time $\Algorithm{SSF-W}$ witnesses delay factor $c^2 \alpha^*$. Hence the request $J_{q,k}$ is in the queue $Q(t)$ at any time $t$ during $I'$. Therefore, any request that forced $\Algorithm{SSF-W}$ to broadcast on $I'$ must have delay factor at least $\alpha^*$, and since $J_{q,k} \in Q(t)$ for all $t \in I'$, the requests scheduled on $I'$ must have slack at most $S_{q,k}$. \end{proof} \subsection{Proof of Lemma~\ref{lem:arrivaltime-varying}} \begin{proof} For the sake of contradiction, suppose that some request $J_{p,i}$ that forced $\Algorithm{SSF-W}$ to broadcast page $p$ on the interval $I$ arrived at time $t' < t_1 -2 S_{q,k} \alpha^* c$. Recall that $J_{p,i}$ has a slack no bigger than $S_{q,k}$ by the definition of $I$. Therefore at time $t_1 - S_{q,k} \alpha^* c$, $J_{p,i}$ has a delay factor of at least $c \alpha^*$. Thus any request scheduled during the interval $[t_1 - S_{q,k} \alpha^* c, t_1]$ has a delay factor no less than $\alpha^*$. We observe that $J_{p,i}$ is in $Q(\tau)$ for $\tau \in [t_1 - S_{q,k} \alpha^* c, t_1]$; otherwise there must be a request with a delay factor bigger than $c^2 \alpha^*$ at time $\tau$, and this is a contradiction to the assumption that $t^*$ is the first time that $\Algorithm{SSF-W}$ witnessed a delay factor of $c^2 \alpha^*$.
Therefore any request that forced $\Algorithm{SSF-W}$ to broadcast during $[t_1 - S_{q,k} \alpha^* c, t_1]$ has a slack no bigger than $S_{p,i}$. Also we know that $S_{p,i} \leq S_{q,k}$ by the definition of $I$. In sum, we showed that any request that forced $\Algorithm{SSF-W}$ to perform a broadcast during $[t_1 - S_{q,k} \alpha^* c, t_1]$ has a slack no bigger than $S_{q,k}$ and a delay factor no smaller than $\alpha^*$, which is a contradiction of the definition of $t_1$. \end{proof} \end{document}
\begin{document} \title{Line bundles for which a projectivized jet bundle is a product} \author{Sandra Di Rocco and Andrew J. Sommese} \date{June 25, 1998} \maketitle \begin{abstract} We characterize the triples $(X,L,H)$, consisting of line bundles $L$ and $H$ on a complex projective manifold $X$, such that for some positive integer $k$, the $k$-th holomorphic jet bundle of $L$, $J_k(X,L)$, is isomorphic to a direct sum $H\oplus\dots\oplus H$. \noindent {\em 1991 Mathematics Subject Classification}. 14J40, 14M99.\newline \indent{\em Keywords and phrases.} jet bundle, complex projective manifold, projective space, Abelian variety. \end{abstract} \section*{Introduction} Let $X$ be a complex projective manifold. A large amount of information on the geometry of an embedding $i: X\hookrightarrow \pn N$ is contained in the ``bundles of jets'' of the line bundles $k\sL=i^*\sO_{\pn{N}}(k)$ for $k\geq 1$. The $k$-th jet bundle of a line bundle $L$ (sometimes called the $k$-th principal part of $L$) is usually denoted by $J_k(X,L)$, or by $J_k(L)$ when the space $X$ is clear from the context. The bundle $J_k(k\sL)$ is spanned by the $k$-jets of global sections of $k\sL$. If $\gra:\pn{}(J_1(\sL))\to \pn{N}$ is the map given by the $1$-jets of elements of $H^0(X,\sL)$, then $\gra(\pn{}(J_1(\sL))_x)={\Bbb T}_x$, where ${\Bbb T}_x$ is the embedded tangent space at $x$. Similarly $\pn{}(J_k(k\sL))_x$ is mapped to the ``$k$-th embedded tangent space'' at $x$ (see \cite{GrHa} for more details). Given this interpretation of the projectivized jet bundle, it is natural to expect that a projectivized jet bundle is a product of the base and a projective space only under very rare circumstances. In this paper we analyze the more general setting where $L$ is a line bundle on $X$ (with no hypothesis on its positivity) and $J_k(L)=\oplus H$, with $H$ a line bundle on $X$.
Some years ago the second author \cite{So} analyzed the pairs $(X,L)$ under the stronger hypothesis that $J_k(L)$ is a trivial bundle. Though the results in this paper do not follow from \cite{So}, the only possible projective manifolds with a projectivized jet bundle of some line bundle being a product of the base manifold and a projective space turn out to be the same, i.e., Abelian varieties and projective space. We completely characterize the possible triples $(X,L,H)$. In fact, on Abelian varieties we have the necessary and sufficient condition that $L\in {\mbox{\rm Pic}_0}(X)$, while on $\pn{n}$, if $L=\sO_{\pn{n}}(a)$, then only the range $a\geq k$ or $a\leq -1$ can occur. A significant amount of the research on this paper was done during our stay at the Max-Planck-Institut f\"ur Mathematik in Bonn, to which we would like to express our gratitude for the excellent working conditions. The second author would also like to thank the Alexander von Humboldt Foundation for its support. \section{Some preliminaries} We follow the usual notation of algebraic geometry. Often we denote the direct sum of $m$ copies of a vector bundle $\sE$, for some integer $m> 0$, by $\displaystyle\bigoplus_m\sE$. We freely use the additive notation for line bundles. We often use the same symbols for a vector bundle and its associated locally free sheaf of germs of holomorphic sections. We say a vector bundle is spanned if the global sections generate each fiber of the bundle. For general references we refer to \cite{Book} and \cite{KS}. By $X$ we will always denote a projective manifold over the complex numbers $\comp$ and by $L$ a holomorphic line bundle on $X$. Let $k$ be a nonnegative integer.
The {\em $k$-th jet bundle} $J_k(L)$ associated to $L$ is defined as the vector bundle of rank ${k+n}\choose{n}$ associated to the sheaf $p^*L/(p^*L\otimes\sJ_{\Delta}^{k+1})$, where $p:X\times X\to X$ is the projection on the first factor and $\sJ_{\Delta}$ is the sheaf of ideals of the diagonal, $\Delta$, of $X\times X$. Let $j_k:H^0(X,L)\times X \to J_k(L)$ be the map which associates to each section $s\in H^0(X,L)$ its $k$-th jet. That means that locally, choosing coordinates $(x_1,...,x_n)$ and a trivialization of $L$ in a neighborhood of $x$, $j_k(s,x)=(a_1,...,a_{{n+k}\choose {n}})$, where the $a_i$'s are the coefficients of the terms of degree up to $k$ in the Taylor expansion of $s$ around $x$. Notice that the map $j_k$ is surjective if and only if $H^0(X,L)$ generates all the $k$-jets at all points $x\in X$. For example $j_1$ being surjective is equivalent to $|L|$ defining an immersion of $X$ in $\Bbb P^{h^0(L)-1}$. We will often use the associated exact sequence: $$ 0\to T_X^{*(k)}\otimes L\to J_k(L)\to J_{k-1}(L)\to 0\;\;\;\;\;\;\;\;(j_k) $$ where $T_X^{*(k)}$ denotes the $k$-th symmetric power of the cotangent bundle of $X$. There is an injective bundle map \cite[p.\ 52]{KS} $$\grg_{i,j}: J_{i+j}(L)\to J_i(J_j(L)).$$ Using the sequence $(j_k)$ it is easy to see that $$\det J_k(L)=\frac{1}{n+1}{{n+k}\choose{n}}(kK_X+(n+1)L)$$ \begin{lemma}\label{split} Let $X$ be a compact K\"ahler variety and $L$ a holomorphic line bundle on $X$. Then $c_1(L)=0$ in $H^1(T^*_X)$ if and only if the bundle sequence $(j_1)$ splits. If $c_1(L)= 0$ in $H^1(T^*_X)$, then $J_k(L)\cong J_k(\sO_X)\otimes L$. \end{lemma} \proof A local computation, using Cech coverings, shows that the Atiyah class defined by the sequence $(j_1)$ is the cocycle $c_1(L)\in H^1(X,T_X^*)$. Thus $c_1(L)=0$ in $H^1(T^*_X)$ if and only if the bundle sequence $(j_1)$ splits. If $c_1(L)=0$ in $H^1(T^*_X)$ then $L$ has constant transition functions. 
From this, the isomorphism $J_k(L)\cong J_k(\sO_X)\otimes L$ follows. \qed Let $A$ and $B$ be vector bundles on $X$; we recall that if $\gra\in H^1(A\otimes B^*)$ represents the vector bundle extension $E$, then any nonzero multiple $\grl\gra$ gives an isomorphic extension. If $\grl=0$ this is of course false, since we would get the trivial extension. It follows that: \begin{lemma}\label{ext} Let $\grl$ be a nonzero integer; then $J_1(\grl L)\cong J_1(L)\otimes (\grl -1)L$. \end{lemma} \proof Consider the extension: $$0\to T_X^*\otimes \grl L\to J_1(L)\otimes (\grl -1)L\to\grl L\to 0\;\;\;\;\;\;\;\;(j_1)\otimes (\grl -1)L$$ represented by $c_1(L)+ c_1((\grl -1)L)=\grl c_1(L)=c_1(\grl L)$. Then, from what was observed above, $J_1(\grl L)$, which is the vector bundle extension given by $c_1(\grl L)$, is isomorphic to $ J_1(L)\otimes (\grl -1)L$. \qed \section{The basic examples} In this section we will characterize line bundles with splitting $k$-th jet bundles on Abelian varieties and on $\pn{n}$. These turn out to be the only possible examples. \begin{proposition}\label{Abelian} Let $L$ and $H$ be line bundles on an Abelian variety $X$; then the following assertions are equivalent: \begin{itemize} \item $J_k(L)\cong \oplus H$ for some $k$; \item $H\cong L$ and $L\in {\mbox{\rm Pic}_0}(X)$. \end{itemize} In particular $J_k(L)$ splits into a sum of spanned line bundles only when $L=\sO_X$ and $J_k(L)=\oplus \sO_X$. \end{proposition} \proof Let $X$ be an Abelian variety and assume that $J_k(L)=\oplus H$ for some line bundle $H$. Then the sequences $(j_m)$ for $m\le k$ imply that there is a surjection of the trivial bundle $J_k(L)\otimes (-H)$ onto $L-H$. Thus the line bundle $L-H$ is a spanned line bundle with $${{n+k}\choose{n}}c_1(L-H)=0,$$ which implies $L=H$. Using this we can find a direct summand $L$ of $J_k(L)$ whose image in $J_1(L)$ maps onto $L$ under the map $J_1(L)\to L\to 0$ of the sequence $(j_1)$.
Thus the sequence $(j_1)$ splits and therefore $c_1(L)=0$ in $H^1(T^*_X)$ by Lemma \ref{split}. Conversely assume that we have a line bundle $L\in {\mbox{\rm Pic}_0}(X)$. Then by Lemma \ref{split}, we have that $J_k(L)\cong J_k(\sO_X)\otimes L$. Using the triviality of $T_X^{(j)}$ for all $j\ge 0$, it follows that $J_k(\sO_X)$ is trivial. \qed \begin{proposition}\label{pn} $J_k(\opn{n}(a))$ is isomorphic to a direct sum $\displaystyle \bigoplus_{{k+n}\choose{n}}\opn{n}(q)$ for some $k,a,q\in {\Bbb Z}$ with $k>0$ if and only if $q=a-k$ and either $a\geq k$ or $a\leq -1$. \end{proposition} \proof Let $L=\opn{n}(a)$. First notice that if the $k$-th jet bundle splits as $\displaystyle J_k(L)=\bigoplus_{{k+n}\choose{n}}\opn{n}(q)$ for some nonnegative $k$ and some integer $q$, then $$\det J_k(L)=\opn{n}\left({{n+k}\choose{k}}(a-k)\right),$$ which implies that $q=a-k$. Then the fact that the $k$-th jet bundle of $\opn{n}(a)$ always has sections for $a\ge 0$ rules out the cases $a=0,...,k-1$. Lemma \ref{ext} gives $J_1(\opn{n}(a))\cong J_1(\opn{n}(1))\otimes \opn{n}(a-1) $. Then $J_1(\opn{n}(1))=\oplus \opn{}$ (see, e.g., \cite{So}) implies $$J_1(\opn{n}(a))=\bigoplus_{n+1}\opn{n}(a-1)$$ Dualizing $\grg_{1,1}$ and using $$J_1\left(J_1(\opn{n}(a)\right)=J_1\left(\bigoplus_{n+1}\opn{n}(a-1)\right)=\bigoplus_{n+1}\left(\bigoplus_{n+1}\opn{n}(a-2)\right)$$ we obtain the quotient $$\bigoplus_{n+1}\left(\bigoplus_{n+1}\opn{n}(a-2)\right)^*\to J_2(\opn{n}(a))^*\to 0$$ and thus, tensoring by $\opn{n}(a-2)$ $$\bigoplus_{(n+1)^2}\opn{n}\to J_2(\opn{n}(a))^*\otimes \opn{n}(a-2)\to 0$$ Then the vector bundle $J_2(\opn{n}(a))^*\otimes \opn{n}(a-2)$ is spanned with $\det(J_2(\opn{n}(a))^*\otimes \opn{n}(a-2))=\opn{n}$ which implies that $J_2(\opn{n}(a))^*\otimes \opn{n}(a-2)=\oplus\opn{n}$. 
Iterating this argument one gets $\displaystyle J_k(\opn{n}(a))^*\otimes \opn{n}(a-k)=\bigoplus_{{k+n}\choose{n}}\opn{n}.$ \qed \section{The main result} In this section we characterize complex projective manifolds having a line bundle whose projectivized $k$-jet bundle is a product of the base manifold and a projective space. \begin{theorem} Let $L$ and $H$ be holomorphic line bundles on $X$. Then $J_k(L)=H\oplus\dots\oplus H$ if and only if the triple $(X,L,H)$ is one of the two below: \begin{enumerate} \item $(X,L,L)$ where $X$ is Abelian and $L\in {\mbox{\rm Pic}_0}(X)$; \item $(\pn{n}, \opn{n}(a),\opn{n}(a-k))$ with $a\geq k$ or $a\leq -1$. \end{enumerate} \end{theorem} \proof Assume $J_k(L)=H\oplus\dots\oplus H$. Tensoring the sequence $(j_k)$ by $H^*$ gives the quotient $ \displaystyle \bigoplus_{{n+k}\choose{n}}\sO_X\to L\otimes H^*\to 0.$ Tensoring the sequence $(j_k)$ by $H^*$ and then dualizing it gives the quotient $ \displaystyle \bigoplus_{{n+k}\choose{n}}\sO_X\to T_X^{(k)}\otimes (H\otimes L^*)\to 0.$ It follows that $L\otimes H^*$, $T_X^{(k)}\otimes (H\otimes L^*)$, and thus $T_X^{(k)}$, are spanned bundles over $X$. First assume that the canonical bundle $K_X$ is nef. Since $T_X^{(k)}$ is spanned, we conclude that $\det T_X^{(k)}$, which is a spanned negative multiple of $K_X$, is trivial. Thus $T_X^{(k)}$ is a trivial bundle. From this we conclude that $L-H$ is trivial. It follows that the trivial bundle $J_k(L)^*\otimes L$ has a filtration with quotient bundles $T^{(j)}_X$ for $0\le j\le k$. This shows that $T^{(j)}_X$ is trivial for all $j> 0$. Thus, under the assumption that $K_X$ is nef, we conclude that $X$ is an Abelian variety. Proposition \ref{Abelian} also implies that $L\in {\mbox{\rm Pic}_0}(X)$. This gives the first case of the theorem. We are thus left with the case when $K_X$ is not nef. Since $T_X^{(k)}$ is spanned, $-K_X$ is nef.
The cone theorem then yields the existence of an extremal ray $\reals_+[ \grg]$, with $1\leq -K_X\cdot\grg\leq n+1.$ Let $l$ be the normalization of $\grg$ and let $$T_{X|l}=\bigoplus \sO_l(a_i),\;\;\;(L\otimes H^*)_l=\sO_l(b),$$ where by abuse of notation we denote the pullback of a bundle $\sE$ on $\grg$ to $l$ by $\sE_l$. Writing for simplicity $T^{(k)}_{X|l}=(\oplus_i \sO_l(ka_i))\oplus P$ we get $$T^{(k)}_{X|l}\otimes (L\otimes H^*)^*_l=(\oplus_i \sO(ka_i-b))\oplus (P\otimes\sO_l(-b))$$ Since $T^{(k)}_{X|l}\otimes (L\otimes H^*)^*_l$ is spanned, we conclude that $ka_i\geq b\ge 0$. Note that $b> 0$. Indeed, if $b=0$, then $$0=\deg \det J_k(L)_l\otimes (-H_l)= \frac{1}{n+1}{{n+k}\choose{n}}kK_X\cdot l\not=0.$$ Thus $a_i>0$ for all $i$. Moreover, from the sheaf injection $\displaystyle 0\to T_l\to T_{X|l}$ we see that $T_{X|l}$ must contain a factor $\sO_l(a_i)$ with $a_i\geq 2$. Then $\displaystyle -K_X\cdot\grg=\sum_{i=1}^{n}a_i\leq n+1$ implies that $a_j=2$ for one $j$ and $a_i=1$ for $i\neq j$, i.e., $-K_X\cdot\grg=n+1$. Now from Mori's proof of the Hartshorne conjecture (see \cite[\S 4]{L}) we deduce that $X$ must be $\pn{n}$. At this point we have recovered case (2) by applying Proposition \ref{pn}. Propositions \ref{Abelian} and \ref{pn} show that if we are in cases (1) and (2) respectively, then $\displaystyle J_k(L)=H\oplus\dots\oplus H$ is satisfied. \qed { \begin{tabular}{ll} Sandra Di Rocco&Andrew J. Sommese\\ Department of Mathematics&Department of Mathematics\\ KTH, Royal Institute of Technology\ \ \ &University of Notre Dame\\ S-100 44 Stockholm, Sweden&Notre Dame, Indiana 46556, U.S.A.\\ e-mail: [email protected]&e-mail: [email protected]\\ http://www.math.kth.se/$\sim$sandra&http://www.nd.edu/$\sim$sommese \end{tabular} } \end{document}
\begin{document} \title[Convex characteristics of quaternionic positive definite functions]{Convex characteristics of quaternionic positive definite functions on abelian groups} \author[Z. P. Zhu]{Zeping Zhu} \email{zzp$\symbol{64}$mail.ustc.edu.cn} \begin{abstract} This paper is concerned with the topological space of normalized quaternion-valued positive definite functions on an arbitrary abelian group $G$, especially its convex characteristics. There are two main results. Firstly, we prove that the extreme elements in the family of such functions are exactly the homomorphisms from $G$ to the sphere group $\mathbb{S}$, i.e., the unit $3$-sphere in the quaternion algebra. Secondly, we reveal a phenomenon which does not exist in the complex setting: the compact convex set of such functions is not a Bauer simplex except when $G$ is of exponent $\leq 2$. In contrast, its complex counterpart is always a Bauer simplex, as is well known. We also present an integral representation for such functions as an application, as well as some other minor results of interest. \end{abstract} \maketitle \section{Introduction} \subsection{background} The theory of positive definite and related functions is an important ingredient in harmonic analysis, with strong connections to moment problems, unitary representations, reproducing kernel Hilbert spaces, stationary stochastic processes, Hoeffding-type inequalities and many other fields. Our interest lies in its further development in the noncommutative setting, especially the quaternionic case. It is well known that under certain regularity conditions every complex-valued positive definite function is the Fourier-type transform of a positive measure. This is formulated in exact terms in the famous theorems of Herglotz, Bochner, Hamburger and Bernstein-Widder. All these theorems are special cases of a theorem in the general setting of abelian semigroups (one may refer to, e.g., \cite{Berg-1976,Berg-1984,Lindahl-1971} for details).
The four results mentioned above correspond to the following semigroups with involution, respectively: $$(\mathbb{Z},+,x^*=-x),\ (\mathbb{R},+,x^*=-x),\ (\mathbb{N},+,x^*=x),\ ([0,+\infty),+,x^*=x).$$ Together, these contributions constitute a rather satisfactory theory of the characterization of complex-valued positive definite functions. Some earlier contributions on this topic or related topics in the quaternionic setting have been made by Alpay and his coauthors (see, e.g., \cite{Alpay-2016,Alpay-2017}) and by others. But some fundamental problems still remain unsolved. \subsection{study subject} In the complex setting, for a normalized positive definite function $\phi$ on an abelian group $G$, there exists a unique regular Borel probability measure $\mu$ on the dual group $\widehat{G}$ such that $$\phi(x)=\int_{\widehat{G}}\gamma(x)d\mu(\gamma), $$ and vice versa. Here the dual group $\widehat{G}$ consists of the complex characters of $G$. From the viewpoint of convexity, this correspondence indicates that the extreme elements of the convex set $\mathcal{P}_*$ of normalized positive definite functions are precisely the complex characters. Moreover, based on Choquet's theory, the uniqueness of $\mu$ implies that $\mathcal{P}_*$ is a simplex, and hence a Bauer simplex since its extreme boundary is closed (one may refer to Theorems 3.6 and 4.1 in Chapter II of \cite{Alfsen-1971} for more details). In brief, $\mathcal{P}_*$ shares almost the same convex characteristics as a finite-dimensional simplex. These results are also valid for the exponentially bounded positive definite functions on abelian semigroups (see, e.g., \cite{Berg-1976,Berg-1984}).
Although there are several generalizations of Bochner's theorem that characterize quaternionic positive definiteness, the convex structure of the set of normalized quaternion-valued positive definite functions has remained unexplored, since the known integral characterizations are given by Fourier-type transforms of certain kinds of quaternion-valued measures, not necessarily positive ones, on the dual group (see, e.g., Theorem 4.5 in \cite{Alpay-2016}). In this paper, we intend to solve two questions on this topic: \begin{itemize} \item[1)]How to characterize the extreme boundary of the convex set of normalized quaternion-valued positive definite functions on an abelian group. \item[2)]Whether this convex set is a Bauer simplex in the pointwise convergence topology. \end{itemize} \subsection{main results} The answers to the main questions mentioned above are as follows: \begin{itemize} \item[1)]The extreme boundary of the convex set of normalized quaternion-valued positive definite functions consists of all the quaternion-valued characters. \item[2)]This set is a Bauer simplex if and only if the given group is of exponent $\leq 2$, or equivalently is a $\mathbb{Z}_2$-vector space. \end{itemize} A global characterization of quaternion-valued positive definite functions follows naturally: \begin{itemize} \item[3)]For a normalized positive definite function $\phi$ on an abelian group $G$, there exists a (not necessarily unique) regular Borel probability measure $\mu$ on its quaternionic dual $G^\delta$ such that $$\phi(x)=\int_{G^\delta}\Gamma(x)d\mu(\Gamma). $$ Conversely, every function admitting such an expression is quaternionic positive definite. \end{itemize} Here $G^\delta$ consists of all the quaternion-valued characters, i.e., homomorphisms from $G$ to the group of unit quaternions with the composition being the multiplication inherited from the quaternion algebra.
Intuitively speaking, in most cases (except when $G$ is of exponent $\leq 2$) the convex set of normalized quaternion-valued positive definite functions has a more evenly distributed extreme boundary than its complex counterpart, the extreme boundary of which is less evenly distributed, as in the primitive case of an $n$-simplex. Our work thus reveals a phenomenon different from the classical case, and also fills a research gap in the quaternionic setting. \subsection{arrangement of sections} This paper is organized as follows. Section 2 contains basic notions and properties of quaternion-valued positive definite functions. Section 3 is split into three major parts. In the first subsection, the main questions are discussed in the special case of $G=\mathbb{Z}$. In the second subsection, the first of the main results above is established in the general case where $G$ is an arbitrary abelian group, and an integral representation for quaternion-valued positive definite functions, i.e., the third main result, is given as an application. The final subsection is devoted to the second of the main results above. \section{definitions and basic properties} We work mainly with quaternion-valued positive definite functions on abelian groups. Nevertheless, the relevant concepts shall be introduced in the more general setting of semigroups with involution. \subsection{quaternionic positive definite functions}\label{subsec-pdf} \begin{definition}\cite{Drazin-1978} Let $S$ be a semigroup, i.e., a nonempty set furnished with an associative binary operation $\circ$ and a neutral element $e$. An involution on $S$ is a bijection $s\mapsto s^*$ of $S$ onto itself satisfying $(s^*)^*=s$ and $(s\circ t)^*=t^*\circ s^*$. \end{definition} For an abelian group, the composition and the neutral element are conventionally denoted by $+$ and $0$, and there are two natural involutions, defined as $s^*=-s$ and $s^*=s$, respectively.
Let $\mathbb{H}$ denote the real quaternion algebra $$\{q=a_0i_0+a_1i_1+a_2i_2+a_3i_3:\ a_i\in\mathbb{R}\ (i=0,1,2,3)\}, $$ where $i_0=1$, $i_3=i_1i_2$, and $i_1,i_2$ are the generators of $\mathbb H$, subject to the following identities: $$i_1^2=i_2^2=-1, \qquad i_1i_2=-i_2i_1. $$ For any $q\in \mathbb{H}$, its conjugate is defined as $\overline{q}:=a_0i_0-a_1i_1-a_2i_2-a_3i_3$, and its norm is given by $\lvert q\rvert:=\sqrt{a_0^2+a_1^2+a_2^2+a_3^2}$. One may refer to \cite{Gurlebeck-2008} for more details about the real quaternion algebra. Hereinafter we always assume that $S$ is a semigroup equipped with some involution. \begin{definition} A quaternion-valued function $\phi$ on $S$ is said to be positive definite if for any $s_1,s_2,\cdots,s_k\in S$ and any $q_1,q_2,\cdots,q_k\in\mathbb{H}$, the following inequality is satisfied: \begin{equation}\label{Eq-Def-PDF} \sum_{1\leq i,j\leq k} \overline{q_i}\phi(s_i^*\circ s_j)q_j\geq 0. \end{equation} \end{definition} We denote the set of quaternion-valued positive definite functions on $S$ by $\mathcal{P}^\mathbb{H}(S)$. Additionally, a function $\phi$ on $S$ is called normalized if $\phi(e)=1$. Then we define $$\mathcal{P}_*^\mathbb{H}(S):=\{\phi\in\mathcal{P}^\mathbb{H}(S): \phi \text{ is normalized}\}. $$ One may refer to \cite{Alpay-2016,Alpay-2017}, etc., for analogous definitions of quaternionic positive definiteness. \begin{theorem}\label{thm-basic-properties} Any quaternion-valued positive definite function $\phi$ satisfies the following properties: \begin{itemize} \item[i)]$\phi$ is hermitian, i.e., $\phi(s^*)=\overline{\phi(s)}$ for $s\in S$; \item[ii)]$\phi(s^*\circ s)\geq0$ and $\lvert\phi(s^*\circ t)\rvert^2\leq\phi(s^*\circ s) \phi(t^*\circ t)$ for $s,t\in S$. \end{itemize} \end{theorem} The proof of this theorem is omitted since it follows the same procedure as in the classical case.
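The relations above can be spot-checked numerically. The following sketch (our illustration, not part of the paper) models $\mathbb{H}$ by the standard embedding into $2\times 2$ complex matrices, under which quaternionic conjugation becomes the conjugate transpose and $\lvert q\rvert^2$ equals the determinant:

```python
import numpy as np

# A minimal numerical model of H (an illustrative device, not the paper's):
# q = a0 + a1*i1 + a2*i2 + a3*i3 is embedded as a 2x2 complex matrix.
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

one = quat(1, 0, 0, 0)
i1, i2 = quat(0, 1, 0, 0), quat(0, 0, 1, 0)
i3 = i1 @ i2                      # i3 = i1*i2 by definition

# Generator relations from the text: i1^2 = i2^2 = -1, i1*i2 = -i2*i1.
assert np.allclose(i1 @ i1, -one) and np.allclose(i2 @ i2, -one)
assert np.allclose(i1 @ i2, -(i2 @ i1))

# Conjugate and norm of q = 1 + 2*i1 + 3*i2 + 4*i3: conjugation is the
# adjoint in this model, and qbar*q = |q|^2 = det(q).
q = quat(1, 2, 3, 4)
norm_sq = np.linalg.det(q).real   # = 1 + 4 + 9 + 16 = 30
assert np.allclose(q.conj().T @ q, norm_sq * one)
```

The embedding is only a computational convenience; the later identities in this section can be checked in the same model.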
\begin{theorem}\label{thm-positive-definiteness} Let $\phi$ be a quaternion-valued positive definite function on $S$. Then for any $p_i\in\mathbb{H}$ and $t_i\in S$ $(i=1,2,\cdots,n)$, the function $\varphi$ defined as $$\varphi(s):= \sum_{1\leq j,k\leq n} \overline{p_j}\phi(t_j^*\circ s \circ t_k)p_k \quad\text{for}\quad s\in S,$$ is also positive definite on $S$. \end{theorem} \begin{proof} For any $s_1,s_2,\cdots,s_m\in S$ and any $q_1,q_2,\cdots,q_m\in\mathbb{H}$, a direct calculation yields $$\sum_{1\leq j',k'\leq m} \overline{q_{j'}}\varphi(s_{j'}^*\circ s_{k'})q_{k'}=\sum_{1\leq j',k'\leq m}\sum_{1\leq j,k\leq n}\overline{p_jq_{j'}}\phi\left((s_{j'}\circ t_j)^* \circ (s_{k'}\circ t_k)\right)p_kq_{k'}; $$ thus we derive by the quaternionic positive definiteness of $\phi$ that $$\sum_{1\leq j',k'\leq m} \overline{q_{j'}}\varphi(s_{j'}^*\circ s_{k'})q_{k'} \geq 0. $$ This completes the proof. \end{proof} \subsection{exponential boundedness} \begin{definition}\label{def-absolute-value}\cite{Berg-1984} A nonnegative real-valued function $\alpha$ on $S$ is called an absolute value if it satisfies the following conditions: \begin{itemize} \item[i)]$\alpha(e)=1$; \item[ii)]$\alpha(s\circ t)\leq \alpha(s)\alpha(t)$; \item[iii)]$\alpha(s^*)=\alpha(s)$. \end{itemize} \end{definition} As in the complex case, this notion provides a certain type of boundedness condition that plays a vital role in the spectral characterization of positive definite functions. \begin{definition} A quaternion-valued function $\phi$ on $S$ is called bounded w.r.t. an absolute value $\alpha$ (or $\alpha$-bounded for short) if $\phi$ is dominated by $\alpha$, i.e., there exists a positive real constant $C$ such that $$\lvert\phi(s)\rvert\leq C\alpha(s) \quad\text{for}\quad s\in S.
$$ \end{definition} For a fixed absolute value $\alpha$, we write the set of $\alpha$-bounded quaternion-valued positive definite functions on $S$ as $\mathcal{P}_\alpha^\mathbb{H}(S)$, and the set of normalized elements in $\mathcal{P}_\alpha^\mathbb{H}(S)$ as $\mathcal{P}_{\alpha,*}^\mathbb{H}(S)$. We say a function $\phi$ is exponentially bounded if it is dominated by some absolute value. The set of exponentially bounded quaternion-valued positive definite functions on $S$ is denoted by $\mathcal{P}_e^\mathbb{H}(S)$, and the set of normalized elements in $\mathcal{P}_e^\mathbb{H}(S)$ by $\mathcal{P}_{e,*}^\mathbb{H}(S)$. \begin{theorem}\label{thm-inequality} Let $\phi$ be a quaternion-valued positive definite function on $S$ that is bounded w.r.t. an absolute value $\alpha$. Then $$|\phi(s)|\leq\phi(e)\alpha(s)\quad \text{for} \quad s\in S. $$ \end{theorem} The proof of this result is quite similar to that given in the complex case \cite{Berg-1984} and so is omitted. A simple corollary follows immediately by applying Tychonoff's theorem. \begin{corollary}\label{cor-compactness} For any absolute value $\alpha$ on $S$, the (possibly empty) convex set $\mathcal{P}_{\alpha,*}^\mathbb{H}(S)$ is compact in the pointwise convergence topology. \end{corollary} \subsection{RKHS associated with $\phi$} Assume that $S$ is a semigroup with involution and the function $\phi:S\to \mathbb{H}$ is positive definite. Denote the set of all quaternion-valued functions on $S$ by $\mathbb{H}^S$. We would like to mention that we only focus on the conventional right quaternion-linear structure in $\mathbb{H}^S$ given by pointwise scalar multiplication of functions from the right side, which is to say that for $f\in \mathbb{H}^S$ and $q\in\mathbb{H}$, $fq$ is defined by $$(fq)(s)=f(s)q \quad \text{for} \quad s\in S.
$$ In the quaternionic setting, for technical reasons a right quaternion-linear space may be endowed with an extra, well-chosen left linear structure (see, e.g., \cite{Alpay-2016-1,Alpay-2016-2}), and the situation can then differ from what is usual in the commutative case. So one should be careful when considering two-sided quaternionic linearity. Let us consider the right quaternion-linear subspace $H_\phi$ of $\mathbb{H}^S$ generated by the functions $\{\phi_s\}_{s\in S}$ with $\phi_s(t):=\phi(t^*\circ s)$. We furnish $H_\phi$ with a quaternion-valued scalar product defined as $$\langle\sum_{s}\phi_s q_s, \sum_{t}\phi_t p_t\rangle:= \sum_{s,t} \overline{p_t}\phi(t^*\circ s) q_s,$$ where the indices $s$ and $t$ run over two arbitrary finite nonempty subsets of $S$, respectively, and $q_s, p_t\in\mathbb{H}$. It is obvious that this product satisfies \begin{eqnarray*} \langle f, g\rangle&=& \overline{\langle g, f\rangle}, \\ \langle f+g,h\rangle&=&\langle f,h\rangle+\langle g,h\rangle, \\ \langle fp, g\rangle&=&\langle f, g\rangle p, \\ \langle f, gp\rangle&=&\overline{p}\langle f, g\rangle, \end{eqnarray*} for all $f, g, h\in H_\phi$ and $p\in\mathbb{H}$. In addition, since the function $\phi$ is positive definite, the inequality $\langle f, f\rangle\geq0$ always holds. Furthermore, the Cauchy-Schwarz inequality yields that for any $f=\sum_{s}\phi_s q_s\in H_\phi$ and $t\in S$, $$\lvert f(t)\rvert= \lvert\sum_{s}\phi(t^*\circ s) q_s\rvert=\lvert\langle f, \phi_t\rangle\rvert\leq \langle f,f\rangle^{1/2}\langle \phi_t,\phi_t\rangle^{1/2}, $$ which implies $$\langle f, f\rangle=0 \quad\text{iff}\quad f=0. $$ Therefore, $\langle\cdot, \cdot\rangle: H_\phi\times H_\phi\to\mathbb{H}$ is a quaternionic inner product (see, e.g., \cite{Alpay-2016-1,Viswanath-1971}) on $H_\phi$. We call the completion of $H_\phi$ the reproducing kernel Hilbert space (RKHS) associated with $\phi$, denoted by $\mathcal{H}_\phi$.
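The listed properties of the scalar product can be illustrated numerically. In the sketch below (our illustration; the unit quaternion $s$ and the coefficient sets are arbitrary choices, not data from the paper), elements of $H_\phi$ are stored as finite coefficient maps and the product is evaluated directly from its definition, for $\phi(n)=s^n$ on $\mathbb{Z}$ with the inverse involution:

```python
import numpy as np

# Quaternions as 2x2 complex matrices (conjugation = adjoint).
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

t = 0.9
s = quat(np.cos(t), 0.6 * np.sin(t), 0.8 * np.sin(t), 0)  # unit quaternion

def phi(n):                        # phi(n) = s^n; s unit, so s^{-1} = conj(s)
    base = s if n >= 0 else s.conj().T
    return np.linalg.matrix_power(base, abs(n))

def ip(f, g):
    # f, g represent sums  sum_a phi_a f[a];  <f,g> = sum conj(g[b]) phi(a-b) f[a]
    return sum(g[b].conj().T @ phi(a - b) @ f[a] for a in f for b in g)

f = {0: quat(1, 2, 0, 0), 3: quat(0, 0, 1, -1)}
g = {-1: quat(0.5, 0, 0, 1), 2: quat(1, 1, 1, 1)}
p = quat(2, -1, 3, 0)

assert np.allclose(ip(f, g), ip(g, f).conj().T)        # <f,g> = conj(<g,f>)
fp = {a: q @ p for a, q in f.items()}                  # right action f*p
assert np.allclose(ip(fp, g), ip(f, g) @ p)            # <fp,g> = <f,g> p
gp = {b: q @ p for b, q in g.items()}
assert np.allclose(ip(f, gp), p.conj().T @ ip(f, g))   # <f,gp> = conj(p) <f,g>

# Reproducing property: f(t0) = <f, phi_{t0}>.
t0 = 4
f_val = sum(phi(a - t0) @ f[a] for a in f)             # f(t0) = sum phi_a(t0) f[a]
assert np.allclose(f_val, ip(f, {t0: quat(1, 0, 0, 0)}))
```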
Because all the quaternion-valued functionals $\Lambda_s: f \mapsto f(s)$ with $s\in S$ are continuous on $H_\phi$, the norm topology on $H_\phi$ must be stronger than or equal to the pointwise convergence topology. Accordingly, its completion $\mathcal{H}_\phi$ can be treated as a subset of $\mathbb{H}^S$, i.e., the space of all quaternion-valued functions on $S$. In other words, every element of the completion is a concrete function as well. There is an alternative way to construct the RKHS: Let $H$ be the family of quaternion-valued functions on $S$ with finite support. Obviously, the positive definite function $\phi$ induces a (possibly degenerate) inner product $$\langle h,k\rangle:=\sum_{s,t\in S}\overline{k(t)}\phi(t^*\circ s)h(s) $$ for all $h,k\in H$. Quotienting $H$ by the subspace of functions with zero norm eliminates the degeneracy. Then taking the completion gives a quaternionic Hilbert space which is isomorphic to $\mathcal{H}_\phi$ via the mapping $h\mapsto f=\sum_{s}\phi_sh(s)$. A natural representation of $S$ on the quaternionic pre-Hilbert space $H_\phi$ is given as \begin{equation}\label{eq-unitary-rep} \lambda(s)\left(\sum_{t}\phi_t q_t\right):=\sum_{t}\phi_{s\circ t} q_t \quad\text{for}\quad s\in S, \end{equation} where the index $t$ runs over an arbitrary finite nonempty subset of $S$, and $q_t$ is a quaternion for every $t$. Theoretically, this representation contains all the information about $\phi$. One can easily observe that \begin{equation}\label{eq-representation} \phi(s)=\langle \lambda(s)\phi_e,\phi_e\rangle \quad\text{for}\quad s\in S. \end{equation} Moreover, the norm-boundedness of $\lambda$ is equivalent to the exponential boundedness of $\phi$, as shown below. \begin{theorem}\label{thm-boundedness} Let $\phi$ be a quaternion-valued positive definite function on $S$, and let $\lambda$ be the representation above.
Then $\phi$ is exponentially bounded if and only if $\lambda(s)$ is a bounded operator on $H_\phi$ for all $s\in S$. \end{theorem} \begin{proof}First, we show the sufficiency. Assume that $\lambda(s)$ is a bounded operator on $H_\phi$ for all $s\in S$. It is easy to verify that $\alpha(s):=\|\lambda(s)\|$ is an absolute value, and by \eqref{eq-representation} we obtain $$|\phi(s)|\leq\|\phi_e\|^2\|\lambda(s)\|. $$ Hence, $\phi$ is bounded w.r.t. $\alpha$, and therefore exponentially bounded. Next, we prove the necessity. Suppose that $\phi$ is exponentially bounded. Then there exists an absolute value $\alpha$ which dominates $\phi$. Take an arbitrary function $f=\sum_{i=1}^{n}\phi_{t_i}p_i\in H_\phi$, and define $$\varphi(s):= \sum_{1\leq j,k\leq n} \overline{p_j}\phi(t_j^*\circ s \circ t_k)p_k \quad\text{for}\quad s\in S.$$ First, Theorem \ref{thm-positive-definiteness} tells us that $\varphi$ must be positive definite. Second, Theorem \ref{thm-inequality}, together with the last two conditions in Definition \ref{def-absolute-value}, yields the inequality $|\phi(t_j^*\circ s \circ t_k)|\leq \phi(e)\alpha(t_j)\alpha(t_k)\alpha(s)$. We thus derive that $$|\varphi(s)|\leq \left(\phi(e)\sum_{1\leq j,k\leq n}|p_j||p_k|\alpha(t_j)\alpha(t_k)\right)\alpha(s) $$ holds for all $s\in S$, which means $\varphi$ is also bounded w.r.t. $\alpha$. Then applying Theorem \ref{thm-inequality} again we have $$|\varphi(s)|\leq \varphi(e)\alpha(s)=\|f\|^2\alpha(s). $$ It follows that $$\|\lambda(s)f\|^2= \langle\lambda(s^*\circ s)f,f\rangle=\varphi(s^*\circ s)\leq \|f\|^2\alpha(s^*\circ s)\leq\|f\|^2\alpha(s)^2. $$ Therefore, $\lambda(s)$ is bounded for all $s$. \end{proof} Based on the result above, we know that when $\phi$ is exponentially bounded, the representation $\lambda$ has a unique bounded extension to the quaternionic Hilbert space $\mathcal{H}_\phi$.
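Theorem \ref{thm-positive-definiteness}, which the proof above relies on, can be illustrated numerically on $(\mathbb{Z},+,n^*=-n)$. In the sketch below (ours; the unit quaternion $s$, the points $t_j$ and the weights $p_j$ are arbitrary), positive definiteness is tested via the standard fact that a quaternionic Gram matrix is positive semidefinite exactly when its complex adjoint matrix is:

```python
import numpy as np

# Quaternions as 2x2 complex matrices.
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

t = 0.7
s = quat(np.cos(t), np.sin(t) / 3, 2 * np.sin(t) / 3, 2 * np.sin(t) / 3)

def phi(n):                        # phi(n) = s^n, positive definite on Z
    base = s if n >= 0 else s.conj().T
    return np.linalg.matrix_power(base, abs(n))

def gram_is_psd(f, pts, tol=1e-9):
    # Quaternionic Gram [f(n_j - n_i)] is pd iff its complex adjoint
    # (the 2k x 2k block matrix in this model) is PSD -- a standard fact.
    M = np.block([[f(nj - ni) for nj in pts] for ni in pts])
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -tol

pts = [-3, -1, 0, 2, 5]
assert gram_is_psd(phi, pts)

# The compressed function of the theorem stays positive definite:
# psi(n) = sum_{j,k} conj(p_j) phi(-t_j + n + t_k) p_k.
ts = [0, 1, 4]
ps = [quat(1, 0, 0, 0), quat(0.5, -1, 0, 2), quat(0, 0, 1, 1)]
def psi(n):
    return sum(ps[j].conj().T @ phi(-ts[j] + n + ts[k]) @ ps[k]
               for j in range(3) for k in range(3))
assert gram_is_psd(psi, pts)
```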
More explicitly, the extension of $\lambda(s)$ coincides with the left shifting operator $L_{s^*}$ given as $$L_{s^*}(f)(t):=f(s^*\circ t)$$ for all $f\in \mathcal{H}_\phi$ and $t\in S$. \subsection{relations between complex and quaternionic positive definiteness} It is not difficult to show that a complex positive definite function must be quaternionic positive definite. \begin{theorem}\label{thm-equiv-c-h-pd} Let $\phi$ be a $\mathbb{C}_I$-valued function on $S$. Then $\phi$ is complex positive definite if and only if it is quaternionic positive definite. \end{theorem} Here $\mathbb{C}_I$ stands for an arbitrary complex slice of $\mathbb{H}$, namely $\mathbb{C}_I:=\mathbb{R}\oplus I\mathbb{R}$ with $I$ being an imaginary unit in $\mathbb{H}$ \cite{Gurlebeck-2008}. \begin{proof} Evidently, quaternionic positive definiteness is stronger than complex positive definiteness, so we only need to verify the converse implication. Suppose that $\phi: S\to\mathbb{C}_I$ is complex positive definite, which is to say that for any $c_1,\cdots,c_n\in\mathbb{C}_I$ and $s_1,\cdots,s_n\in S$, the following inequality holds: \begin{equation}\label{eq-proof-thm-equiv-c-h-pd} \sum_{i,j=1}^{n}\overline{c_i}c_j\phi(s_i^*\circ s_j)\geq 0. \end{equation} Take an imaginary unit $J$ in the quaternion algebra such that $I\perp J$, or equivalently $IJ=-JI$. Notice that any $q\in\mathbb{H}$ can be expressed as $$q=c+c'J\quad\text{with}\quad c,c'\in\mathbb{C}_I.$$ Hence, for any $q_1,\cdots,q_n\in\mathbb{H}$ and $s_1,\cdots,s_n\in S$, we can decompose the sum $\sum_{i,j=1}^{n}\overline{q_i}\phi(s_i^*\circ s_j)q_j$ as $$\sum_{i,j=1}^{n}\overline{c_i}c_j\phi(s_i^*\circ s_j)+\sum_{i,j=1}^{n}\overline{c'_i}c'_j\phi(s_i^*\circ s_j)-J\sum_{i,j=1}^{n}\overline{c'_i}c_j\phi(s_i^*\circ s_j)+J\sum_{i,j=1}^{n}\overline{c'_j}c_i\overline{\phi(s_i^*\circ s_j)}, $$ where $q_i=c_i+c'_iJ$ with $c_i, c'_i\in\mathbb{C}_I$ $(i=1,\cdots,n)$.
\eqref{eq-proof-thm-equiv-c-h-pd} indicates that the first two terms are nonnegative, and the last two terms cancel each other since $\phi$ is hermitian, i.e., $\phi(s^*)=\overline{\phi(s)}$. Consequently, we obtain $$\sum_{i,j=1}^{n}\overline{q_i}\phi(s_i^*\circ s_j)q_j\geq 0,$$ which means $\phi$ is quaternionic positive definite. \end{proof} As mentioned in the proof above, any $q\in\mathbb{H}$ can be expressed as $$q=c+c'J\quad\text{with}\quad c,c'\in\mathbb{C}_I,$$ where $J$ is an arbitrary imaginary unit orthogonal to $I$. Moreover, the value of $c$ is independent of the choice of $J$. We call $c$ the projection of $q$ on the complex slice $\mathbb{C}_I$. For convenience, we denote the mapping $q\mapsto c$ by $P_I$. \begin{theorem}\label{Thm-qpd-to-cpd} Let $\phi$ be a quaternion-valued positive definite function on $S$. Then $P_I\phi$, i.e., the projection of $\phi$ on $\mathbb{C}_I$, is complex positive definite for any imaginary unit $I\in\mathbb{H}$. \end{theorem} \begin{proof} Note that the equality $$P_I(q)=\frac{q-IqI}{2}$$ is valid for any quaternion $q$ and any imaginary unit $I$. Thus for any $c_1,\cdots,c_n\in\mathbb{C}_I$ and $s_1,\cdots,s_n\in S$, the sum $$ \sum_{i,j=1}^{n}\overline{c_i}c_jP_I\phi(s_i^*\circ s_j) $$ can be split into two parts, $$ \frac{1}{2} \sum_{i,j=1}^{n}\overline{c_i}\phi(s_i^*\circ s_j)c_j+\frac{1}{2} \sum_{i,j=1}^{n}\overline{c_iI}\phi(s_i^*\circ s_j)c_jI, $$ and is therefore nonnegative owing to the quaternionic positive definiteness of $\phi$. In conclusion, $P_I\phi$ is complex positive definite for any imaginary unit $I\in\mathbb{H}$. \end{proof} The converse of the theorem above is not true; that is, $\phi:S\to\mathbb{H}$ need not be positive definite when $P_I\phi:S\to\mathbb{C}_I$ is positive definite for all $I$.
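The identity $P_I(q)=(q-IqI)/2$ and the decomposition $q=c+c'J$ can be verified directly. The sketch below (ours) fixes the concrete pair $I=i_1$, $J=i_2$; any orthogonal pair of imaginary units would work the same way:

```python
import numpy as np

# Quaternions as 2x2 complex matrices.
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

i1, i2 = quat(0, 1, 0, 0), quat(0, 0, 1, 0)
a0, a1, a2, a3 = 1.5, -2.0, 0.5, 3.0
q = quat(a0, a1, a2, a3)

# P_I(q) = (q - I q I)/2 keeps exactly the C_I-component a0 + a1*i1:
P_I_q = (q - i1 @ q @ i1) / 2
assert np.allclose(P_I_q, quat(a0, a1, 0, 0))

# q = c + c'J with c = a0 + a1*i1 and c' = a2 + a3*i1, both in C_I:
c, cp = quat(a0, a1, 0, 0), quat(a2, a3, 0, 0)
assert np.allclose(q, c + cp @ i2)

# J anticommutes with I, hence J*conj(c) = c*J for every c in C_I:
assert np.allclose(i2 @ c.conj().T, c @ i2)
```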
\section{Characterizations for quaternion-valued positive definite functions} As mentioned at the beginning of Subsection \ref{subsec-pdf}, on an abelian group there exist two natural involutions: the identical involution $s^*=s$ and the inverse involution $s^*=-s$. Correspondingly, we get two different types of positive definiteness for quaternion-valued functions on abelian groups. One is called positive definiteness in the semigroup sense, which applies when the group is endowed with the identical involution. The other is called positive definiteness in the group sense, which is suitable when the inverse involution is in force. If a quaternion-valued function $\phi$ is positive definite in the semigroup sense, then Theorem \ref{thm-basic-properties} yields that $\phi(s)=\phi(s^*)=\overline{\phi(s)}$ holds on the whole group, so the function $\phi$ must be real-valued. This means there is essentially no such thing as a quaternion-valued positive definite function in the semigroup sense. Besides, there already exists a quite complete theory of real-valued positive definite functions on abelian groups. For these reasons, we only focus on positive definiteness in the group sense, and all positive definite functions appearing hereinafter are, by default, positive definite in the group sense. Given an arbitrary quaternion-valued positive definite function $\phi$ on an abelian group $G$, it is not difficult to verify that $\lvert\phi(g)\rvert\leq \phi(0)$ holds for all $g\in G$, which is a direct generalization, via Theorem \ref{Thm-qpd-to-cpd}, of the boundedness of complex-valued positive definite functions on abelian groups to the quaternionic case. Thus every quaternion-valued positive definite function on an abelian group is dominated by the constant absolute value $1$, and hence exponentially bounded.
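The bound $\lvert\phi(g)\rvert\leq\phi(0)$ can be spot-checked on a sample positive definite function. In the sketch below (our illustration), the mixture $\phi(n)=\frac13 s_1^n+\frac23 s_2^n$ of two arbitrarily chosen unit quaternions is normalized, obeys the bound, and is strictly smaller than $1$ in modulus at some points:

```python
import numpy as np

# Quaternions as 2x2 complex matrices; |q| = sqrt(det) in this model.
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

def qnorm(q):
    return np.sqrt(max(np.linalg.det(q).real, 0.0))

s1 = quat(np.cos(0.4), np.sin(0.4), 0, 0)   # exp(i1 * 0.4)
s2 = quat(np.cos(1.3), 0, np.sin(1.3), 0)   # exp(i2 * 1.3)

def power(s, n):
    base = s if n >= 0 else s.conj().T
    return np.linalg.matrix_power(base, abs(n))

def phi(n):                                  # convex mixture, pd on Z
    return power(s1, n) / 3 + 2 * power(s2, n) / 3

vals = [qnorm(phi(n)) for n in range(-20, 21)]
assert all(v <= qnorm(phi(0)) + 1e-12 for v in vals)   # |phi(n)| <= phi(0) = 1
assert qnorm(phi(1)) < 1 - 1e-6                        # strict for a mixture
```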
\subsection{characterizations for quaternion-valued positive definite functions on the group of integers} We start with one of the simplest cases, that in which the abelian group is $\mathbb{Z}$. All the characterizations in this part shall be given in explicit forms. In addition, a new phenomenon will be demonstrated by certain examples. Assume that $\phi$ is a quaternion-valued positive definite function on $\mathbb{Z}$; as shown above, it is automatically exponentially bounded. Then Theorem \ref{thm-boundedness} says that the representation $\lambda$ of $\mathbb{Z}$, defined as in \eqref{eq-unitary-rep} with the semigroup $S$ replaced by the group $\mathbb{Z}$, can be uniquely extended to the RKHS associated with $\phi$. Furthermore, the extension of $\lambda(n)$, $n\in \mathbb{Z}$, is nothing but the shifting operator $L_{n}$ given by $$L_{n}(f)(m):=f(m-n)$$ for all $f$ in the RKHS and all $m\in \mathbb{Z}$. In an unpublished work of ours, we have established a spectral characterization of quaternion-valued positive definite functions on the group of real numbers based on the S-functional calculus. By the same method as employed in that work, we can obtain the following result on the group of integers: there exists a unique pair consisting of a resolution of the identity $E$ on the set $\mathbb{S}^+:=\{s=e^{it}\mid t\in [0,\pi]\}$ and a bounded operator $J$ on the RKHS, subject to the conditions $EJ=JE$ and $J^2=-1$, such that $$\lambda(n)=\int_{\mathbb{S}^+}\Re(s^n)dE(s)+J\int_{\mathbb{S}^+}\Im(s^n)dE(s)$$ holds for any integer $n$, where $\Re(s^n)$ and $\Im(s^n)$ stand for the real part and the imaginary part of $s^n$, respectively. This is a quite interesting result in our opinion. One can easily see that an integral characterization of the function $\phi$ follows naturally, since Equation \eqref{eq-representation} indicates $\phi(g)=\langle \lambda(g)\phi_0,\phi_0\rangle$.
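The identity $\phi(g)=\langle\lambda(g)\phi_0,\phi_0\rangle$ and the isometry of each $\lambda(n)$ (here $\alpha\equiv1$) can be checked numerically; the sketch below is our own and does not reproduce the spectral construction mentioned above:

```python
import numpy as np

# Quaternions as 2x2 complex matrices.
def quat(a0, a1, a2, a3):
    return np.array([[a0 + a1 * 1j,  a2 + a3 * 1j],
                     [-a2 + a3 * 1j, a0 - a1 * 1j]])

s = quat(np.cos(0.5), 0, np.sin(0.5) * 0.6, np.sin(0.5) * 0.8)  # unit

def phi(n):
    base = s if n >= 0 else s.conj().T
    return np.linalg.matrix_power(base, abs(n))

def ip(f, g):   # <sum phi_a f[a], sum phi_b g[b]> on Z (group involution)
    return sum(g[b].conj().T @ phi(a - b) @ f[a] for a in f for b in g)

one = quat(1, 0, 0, 0)
for n in range(-5, 6):
    # lambda(n) phi_0 = phi_n, so <lambda(n) phi_0, phi_0> = phi(n):
    assert np.allclose(phi(n), ip({n: one}, {0: one}))

# lambda(3) shifts generator indices and preserves the inner product:
f = {0: quat(1, 1, 0, 0), 2: quat(0, 0, 2, -1)}
shifted = {a + 3: q for a, q in f.items()}
assert np.allclose(ip(shifted, shifted), ip(f, f))
```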
But unfortunately, this approach cannot solve either of the main questions stated in our introduction on the convex structure of quaternion-valued positive definite functions. So we are forced to find another way to achieve our aims, and we leave further discussion of the above approach to possible future work. We now turn to the first main question for the group of integers: `how to characterize the extreme boundary of the convex set of normalized quaternion-valued positive definite functions'. Recall that $\mathbb{S}$ stands for the group of all unit quaternions with the composition being the multiplication inherited from the quaternion algebra, and that the convex set of normalized positive definite functions on an abelian group $G$ is denoted by $\mathcal{P}_*^\mathbb{H}(G)$. The theorem below is concerned with the extreme elements of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$. \begin{theorem}\label{Thm-ext-boundary-on-Z} A function $\phi:\mathbb{Z}\to\mathbb{H}$ is an extreme element of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ if and only if it is a homomorphism from $\mathbb{Z}$ to the sphere group $\mathbb{S}$. \end{theorem} More explicitly, every extreme element has the form $\phi(n)=s^{n}$ with $s\in\mathbb{S}$. \begin{proof} First we show the sufficiency. Assume that $\phi$ is a homomorphism from $\mathbb{Z}$ to the sphere group $\mathbb{S}$; then $\phi(n-m)=\phi(n)\phi(m)^{-1}=\phi(n)\overline{\phi(m)}$ holds for all integers $n$ and $m$. It follows that the homomorphism $\phi$ is normalized, i.e., satisfies the condition $\phi(0)=1$, and positive definite, which means it belongs to the family $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$.
Moreover, if $\phi$ is a convex combination of two functions $f$ and $g$ in $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$, namely $\phi=\lambda f + (1-\lambda) g$ with a constant $\lambda\in[0,1]$, then based on the following facts: \begin{itemize} \item[i)] $\lvert\phi(n)\rvert\equiv 1$, since $\phi(n)$ is always a unit quaternion; \item[ii)] $\lvert f(n)\rvert, \lvert g(n)\rvert\leq 1$ for every integer $n$, as indicated by Theorem \ref{Thm-qpd-to-cpd} together with the boundedness of complex-valued positive definite functions; \end{itemize} it is not difficult to verify that both $f$ and $g$ must be identical to $\phi$: for $\lambda\in(0,1)$, the strict convexity of the closed unit ball of $\mathbb{H}$ forces $f(n)=g(n)=\phi(n)$ for every $n$. In conclusion, every homomorphism from $\mathbb{Z}$ to $\mathbb{S}$ is an extreme element of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$. Next we show the necessity. Suppose that $\phi$ is an extreme element of the convex set $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$, and for an arbitrary integer $m$ consider the two related functions $$\tau_m^+ \phi(n):= \phi(n)+\frac{1}{2}\phi(n+m)+\frac{1}{2}\phi(n-m); $$ $$\tau_m^- \phi(n):= \phi(n)-\frac{1}{2}\phi(n+m)-\frac{1}{2}\phi(n-m). $$ We claim that both functions above are (quaternionic) positive definite on $\mathbb{Z}$. This can be proved in a straightforward way. For any $n_1,n_2,\cdots,n_k\in \mathbb{Z}$ and any $q_1,q_2,\cdots,q_k\in\mathbb{H}$, by the first conclusion of Theorem \ref{thm-basic-properties} we have \begin{eqnarray*} \sum_{1\leq i,j\leq k} \overline{q_i}\tau_m^{\pm} \phi(n_j-n_i)q_j &=& \sum_{1\leq i,j\leq k} \overline{q_i}\phi(n_j-n_i)q_j \\ & & \pm\Re \left\{\sum_{1\leq i,j\leq k} \overline{q_i}\phi(n_j-n_i+m)q_j\right\} \\ &=& \varphi(0)\pm\Re \varphi(m), \end{eqnarray*} where \begin{equation}\label{eq-def-varphi-proof-integer} \varphi(n):=\sum_{1\leq i,j\leq k} \overline{q_i}\phi(n_j-n_i+n)q_j \quad\text{for}\quad n\in\mathbb{Z}, \end{equation} and the symbol $\Re$ means taking the real part.
Theorem \ref{thm-positive-definiteness} says the function $\varphi$ defined in such a form is positive definite; then applying Theorem \ref{Thm-qpd-to-cpd}, in light of the boundedness of complex-valued positive definite functions, gives \begin{equation}\label{Eq-proof-integer-varphi-boundedness} \lvert \varphi(m)\rvert\leq\varphi(0)\quad\text{for}\quad m\in\mathbb{Z}. \end{equation} Accordingly, we get $$\sum_{1\leq i,j\leq k} \overline{q_i}\tau_m^{\pm} \phi(n_j-n_i)q_j=\varphi(0)\pm\Re \varphi(m)\geq 0, $$ which confirms our claim. Moreover, because $\phi$ is an extreme element of the convex set $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$, as supposed, and obviously equals a convex combination of the positive definite functions $\tau_m^{\pm}\phi$, there must exist a nonnegative constant $C_m$ such that $\tau_m^{+}\phi=C_m\phi$. We thus obtain a vital equality: \begin{equation}\label{Eq-integer-convex-comb} \phi(n+m)+\phi(n-m)=C'_m \phi(n) \quad\text{for}\quad n\in \mathbb{Z}, \end{equation} with $C'_m$ a real constant depending on the parameter $m$. Based on the above equality, we shall derive that the range of any extreme element $\phi$ of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ is contained in a complex slice. Recall that a complex slice $\mathbb{C}_I$ is the subalgebra of the quaternion algebra generated by an imaginary unit $I$. \eqref{Eq-integer-convex-comb} indicates that if both $\phi(n-m)$ and $\phi(n)$ are contained in the same complex slice, then so is $\phi(n+m)$. Besides, $\phi(0)$ is certainly a nonnegative real number and hence lies in every complex slice. It thus follows by induction on $n$ that all the values $\phi(0), \phi(\pm 1), \cdots, \phi(\pm n)$ must belong to the same complex slice for any positive integer $n$. As a consequence, there exists an imaginary unit $I$ such that the range of $\phi$ is contained in $\mathbb{C}_I$.
Then for an arbitrary integer $m$, we define another pair of auxiliary functions $\kappa_m^{\pm}\phi$ by $$\kappa_m^+ \phi(n):= \phi(n)+\frac{I}{2}\phi(n+m)-\frac{I}{2}\phi(n-m); $$ $$\kappa_m^- \phi(n):= \phi(n)-\frac{I}{2}\phi(n+m)+\frac{I}{2}\phi(n-m). $$ We claim that they are also (quaternionic) positive definite, like $\tau_m^{\pm}\phi$. Since $\kappa_m^{\pm}\phi$ are $\mathbb{C}_I$-valued, according to Theorem \ref{thm-equiv-c-h-pd} it suffices to prove that they are (complex) positive definite. The proof of this claim is quite similar to the one given earlier for the positive definiteness of the functions $\tau_m^{\pm}\phi$, so some minor details are omitted. For any $n_1,n_2,\cdots,n_k\in \mathbb{Z}$ and any $c_1,c_2,\cdots,c_k\in\mathbb{C}_I$, a simple calculation leads to $$ \sum_{1\leq i,j\leq k} \overline{c_i}c_j\kappa_m^{\pm} \phi(n_j-n_i)=\varphi^{*}(0)\pm\Re \{I\varphi^{*}(m)\}, $$ where $$\varphi^{*}(n):=\sum_{1\leq i,j\leq k} \overline{c_i}c_j\phi(n_j-n_i+n) \quad\text{for}\quad n\in\mathbb{Z}.$$ It is easy to see that $\varphi^{*}$ is just the special case of the function $\varphi$ given by \eqref{eq-def-varphi-proof-integer} in which all the combination coefficients are restricted to the complex slice $\mathbb{C}_I$. Then \eqref{Eq-proof-integer-varphi-boundedness} implies $$ \sum_{1\leq i,j\leq k} \overline{c_i}c_j\kappa_m^{\pm} \phi(n_j-n_i)\geq 0, $$ or equivalently, $\kappa_m^{\pm}\phi$ are (complex) positive definite. Thus our second claim is also confirmed. Moreover, both functions $\kappa_m^{\pm}\phi$ must be nonnegative scalar multiples of $\phi$, since $\phi$ is an extreme element of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$, namely the set of normalized positive definite functions, and equals a convex combination of the functions $\kappa_m^{\pm}\phi$, which are also positive definite (but possibly not normalized).
It follows immediately that there exists a real constant $C''_m$, depending on the parameter $m$, such that \begin{equation*} \phi(n+m)-\phi(n-m)=IC''_m \phi(n) \quad\text{for}\quad n\in \mathbb{Z}. \end{equation*} Combining this equality with its analogue \eqref{Eq-integer-convex-comb}, we see that \begin{equation*} \phi(n+m)=c(m) \phi(n) \quad\text{for}\quad n,m\in \mathbb{Z}, \end{equation*} where $c(m)$ is a certain $\mathbb{C}_I$-valued function of the variable $m$. Setting $n=0$, we get $c(m)=\phi(m)$. Consequently, \begin{equation*} \phi(m+n)= \phi(m) \phi(n)\quad\text{for}\quad n,m\in \mathbb{Z}. \end{equation*} In addition, Theorem \ref{thm-basic-properties} says that every positive definite function is hermitian, hence $\phi(-n)=\overline{\phi(n)}$. Then it is easy to see that $$\lvert\phi(n)\rvert^2=\phi(-n)\phi(n)=\phi(0)=1\quad\text{for}\quad n\in \mathbb{Z}. $$ Therefore, $\phi$ is a homomorphism from $\mathbb{Z}$ to the sphere group $\mathbb{S}$. The proof is now completed. \end{proof} \begin{remark} The key to the proof of Theorem \ref{Thm-ext-boundary-on-Z} is to show that the range of every extreme element of the convex set of normalized positive definite functions on the group of integers is contained in some complex slice $\mathbb{C}_I$. This result ensures the positive definiteness of the second pair of auxiliary functions $\kappa_m^{\pm}\phi$. \end{remark} Theorem \ref{Thm-ext-boundary-on-Z} yields a one-to-one correspondence between the sphere group $\mathbb{S}$ and the extreme boundary of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$, given by $$s\in\mathbb{S} \longmapsto \phi\in Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right):n\mapsto s^n.
$$ This bijection is clearly continuous with respect to the pointwise convergence topology on the extreme boundary $Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right)$, and is therefore a homeomorphism, since its domain is compact and its codomain is Hausdorff; it is thus legitimate, in the topological sense, to identify the sphere group with the extreme boundary. An integral characterization of quaternionic positive definite functions on the group of integers follows immediately. \begin{theorem}\label{Thm-characterization-Z} A function $\phi:\mathbb{Z}\to\mathbb{H}$ is positive definite if and only if there exists a (not necessarily unique) nonnegative Radon measure $\mu$ on the sphere group $\mathbb{S}$ such that \begin{equation}\label{Eq-thm-characterization-on-Z-integral-expression} \phi(n)=\int_{\mathbb{S}}s^n d\mu(s) \quad\text{for}\quad n\in \mathbb{Z}. \end{equation} \end{theorem} \begin{proof} The sufficiency can be easily verified from the fact that for any $s\in\mathbb{S}$, $s^n$ is positive definite as a function of the integer variable $n$. As for the necessity, it suffices to prove that every $\phi\in\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ can be expressed in the form \eqref{Eq-thm-characterization-on-Z-integral-expression} with $\mu$ a regular Borel probability measure. Recall that on the group of integers, every positive definite function is bounded, in other words dominated by the constant absolute value $1$. Hence, by Corollary \ref{cor-compactness} with the absolute value $\alpha\equiv 1$, the convex set $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ is compact.
Then, in light of Theorems 3.23 (the Krein-Milman theorem) and 3.28 in \cite{Rudin-1991}, or of the Choquet-Bishop-de Leeuw theorem, for any $\phi\in\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ there is a regular Borel probability measure $\mu$ on the extreme boundary of $\mathcal{P}_*^\mathbb{H}(\mathbb{Z})$ such that the equality \begin{equation*} \phi=\int_{Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right)}\varphi \quad d\mu(\varphi) \end{equation*} holds in the sense of vector-valued integration. Noting that evaluation at any integer point is continuous with respect to the pointwise convergence topology, we thus have \begin{equation*} \phi(n)=\int_{Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right)}\varphi(n) \quad d\mu(\varphi) \quad\text{for}\quad n\in \mathbb{Z}. \end{equation*} As shown in the discussion preceding this theorem, the sphere group $\mathbb{S}$ can be identified with the extreme boundary $Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right)$ via the homeomorphism $$s\in\mathbb{S} \longmapsto \phi\in Ex \left(\mathcal{P}_*^\mathbb{H}(\mathbb{Z})\right):n\mapsto s^n. $$ This observation allows us to rewrite the preceding equality in the form \eqref{Eq-thm-characterization-on-Z-integral-expression}, as desired. The proof is now complete. \end{proof} In the remark below, we answer the second main question stated in the introduction, namely whether the convex set of normalized quaternion-valued positive definite functions is a Bauer simplex in the pointwise convergence topology, in the case of the group of integers; two explicit examples are presented to support our assertion. \begin{remark}\label{Re-Z} In terms of convexity, the biggest difference between the complex case and the quaternionic one is how the extreme elements of the family of normalized positive definite functions are distributed.
More specifically, the set of complex-valued normalized positive definite functions on any abelian group is a Bauer simplex, that is to say, every real-valued continuous function on its extreme boundary has a unique affine extension to the whole set (see Theorem 4.3 in Chapter II of \cite{Alfsen-1971}); in contrast, its quaternionic counterpart is not. Intuitively speaking, in the quaternionic setting the distribution of the extreme elements is more uniform. This major difference can also be observed from the viewpoint of integral characterization. For simplicity and coherence of presentation, at the present stage we restrict the discussion to the group of integers rather than an arbitrary abelian group. The Herglotz representation theorem states that for a complex-valued normalized positive definite function $\phi$, there exists a unique regular Borel probability measure $\mu$ on the unit circle $\mathbb{T}$ in the complex plane such that $$\phi(n)=\int_{\mathbb{T}}z^n d\mu(z) \quad\text{for}\quad n\in \mathbb{Z}. $$ In short, both the existence and the uniqueness of the measure representation hold. A natural question is whether this remains true in the quaternionic setting. The answer turns out to be partially negative: the existence is valid, as stated in Theorem \ref{Thm-characterization-Z}, but the uniqueness is not, as illustrated by the two examples below. \begin{itemize} \item[Ex 1:] Consider $\phi(n)=\alpha^n$, where the constant $\alpha$ is a unit quaternion. By Theorem \ref{Thm-ext-boundary-on-Z}, $\phi$ is an extreme element in the family of normalized positive definite functions. Hence only one nonnegative measure satisfies \eqref{Eq-thm-characterization-on-Z-integral-expression}, namely $\mu=\delta_\alpha$, the Dirac measure centred at the point $\alpha$. In other words, $\phi$ cannot be represented as the barycenter of any nonnegative measure other than $\delta_\alpha$.
\item[Ex 2:] Consider the cosine function $\phi(n)=\cos n$; one easily sees that $$\phi(n)=\frac{1}{2}(\alpha^n+\overline{\alpha}^n)\quad\text{for}\quad n\in \mathbb{Z},$$ whenever $\alpha=e^I$ with $I$ a pure imaginary unit quaternion. Thus, for each such $\alpha$, the measure $\mu=\frac{1}{2}(\delta_\alpha+\delta_{\overline{\alpha}})$ satisfies \eqref{Eq-thm-characterization-on-Z-integral-expression}, so the uniqueness of the measure representation breaks down in this example. \end{itemize} In conclusion, the measure representation is unique only for some quaternion-valued positive definite functions. Further discussion of this phenomenon in the general case of an arbitrary abelian group comes at the end of this section, where we establish a criterion for which measure representations, i.e., nonnegative Radon measure solutions to \eqref{Eq-thm-characterization-on-Z-integral-expression}, are unique. \end{remark} \subsection{generalized results for quaternion-valued positive definite functions on arbitrary abelian groups}In this part, we focus on the former of the two major questions, i.e., how to characterize the extreme boundary of the convex set of normalized quaternion-valued positive definite functions, in the general case of arbitrary abelian groups. The key of our approach is still to show that the range of every extreme element is contained in a complex slice, as in the case of the group of integers; however, a more delicate argument is required in the general case. For simplicity of presentation, hereinafter we always assume that $G$ is an abelian group equipped with the inverse involution. In preparation for establishing our first main result in this setting, we need the following two lemmas. \begin{lemma}\label{Lemma-Ext-1} Any extreme element $\phi$ in $\mathcal{P}_*^\mathbb{H}(G)$ satisfies the equations below.
\begin{align*} 2\Re \phi(b)\Re \phi(a) &=\Re \phi(a+b)+\Re \phi(a+b^*), \\ 2\Re \phi(b)\Im \phi(a) &=\Im \phi(a+b)+\Im \phi(a+b^*), \end{align*} for all $a, b\in G$. \end{lemma} Here the operations $\Re$ and $\Im$ take the real part and the imaginary part of a given quaternion, respectively. More explicitly, for $q=a_0i_0+a_1i_1+a_2i_2+a_3i_3\in\mathbb{H}$, $$\Re\ q=a_0i_0 \quad \text{and} \quad \Im\ q=a_1i_1+a_2i_2+a_3i_3.$$ \begin{proof} Suppose that $\phi$ is an extreme element in the convex set $\mathcal{P}_*^\mathbb{H}(G)$, and for an arbitrary element $b\in G$ consider the two related functions $$\tau_b^\pm \phi(a):= \phi(a)\pm\frac{1}{2}\phi(a+b)\pm\frac{1}{2}\phi(a+b^*) \quad\text{for}\quad a\in G. $$ Following the same procedure as in the proof of Theorem \ref{Thm-ext-boundary-on-Z}, we easily deduce that $\tau_b^\pm \phi$ are both positive definite on $G$. Then, since $\phi$ is an extreme element in the convex set $\mathcal{P}_*^\mathbb{H}(G)$ and equals a convex combination of $\tau_b^{\pm}\phi$, there exists a positive constant $C_b$ depending on $b$ such that $\tau_b^{+}\phi=C_b\phi$. Accordingly, we have \begin{equation}\label{eq-proof-Lemma-Ext-1} \phi(a+b)+\phi(a+b^*)=(2C_b-1) \phi(a) \quad\text{for}\quad a\in G. \end{equation} Moreover, putting $a=0$, we obtain $2C_b-1=\phi(b)+\phi(b^*)$. In addition, Theorem \ref{thm-basic-properties} says that $\phi$ is hermitian. Thus the equality \begin{equation*} 2C_b-1=2\Re\phi(b) \end{equation*} holds. Inserting this back into \eqref{eq-proof-Lemma-Ext-1} and separating the real and imaginary parts of both sides leads to the desired result. \end{proof} We mention that the result of Lemma \ref{Lemma-Ext-1} is also valid in the case of an abelian semigroup with involution. The other technical lemma is as follows.
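As an informal numerical sanity check (not part of the formal development), the two identities of Lemma \ref{Lemma-Ext-1} can be verified for the sphere-valued functions $\phi(n)=s^n$ on $G=\mathbb{Z}$ with the inverse involution $b^*=-b$; the encoding of quaternions as \texttt{[w, x, y, z]} arrays is our own choice:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions encoded as arrays [w, x, y, z]
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qpow(q, n):
    # integer power; for a unit quaternion the inverse is the conjugate
    if n < 0:
        return qpow(np.array([q[0], -q[1], -q[2], -q[3]]), -n)
    r = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(n):
        r = qmul(r, q)
    return r

# a unit quaternion s = cos(t) + sin(t) * I with a randomly chosen imaginary unit I
rng = np.random.default_rng(0)
v = rng.normal(size=3)
I = v / np.linalg.norm(v)
t = 0.7
s = np.array([np.cos(t), *(np.sin(t) * I)])
phi = lambda n: qpow(s, n)

# check: 2 Re(phi(b)) Re(phi(a)) = Re(phi(a+b)) + Re(phi(a-b)), likewise for Im
for a in range(-5, 6):
    for b in range(-5, 6):
        assert abs(2*phi(b)[0]*phi(a)[0] - (phi(a+b)[0] + phi(a-b)[0])) < 1e-9
        assert np.allclose(2*phi(b)[0]*phi(a)[1:],
                           phi(a+b)[1:] + phi(a-b)[1:], atol=1e-9)
```

Here $s^n=\cos(nt)+\sin(nt)I$ stays in the slice $\mathbb{C}_I$, so both identities reduce to the product-to-sum formulae for cosine and sine.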
\begin{lemma}\label{Lemma-Ext-2} Let $\phi$ be a quaternion-valued solution to the following system \begin{align} \label{eq-Lemma-Ext-2-1}2\Re \phi(b)\Re \phi(a) &=\Re \phi(a+b)+\Re \phi(a+b^*), \\ \label{eq-Lemma-Ext-2-2}2\Re \phi(b)\Im \phi(a) &=\Im \phi(a+b)+\Im \phi(a+b^*), \end{align} for all $a, b\in G$, subject to the condition that $\phi(0)=1$ and $\displaystyle{\sup_G\lvert\phi\rvert=1}$. Then there exists an imaginary unit $I\in\mathbb{H}$ such that the range of $\phi$ is contained in the complex slice $\mathbb{C}_I$. \end{lemma} \begin{proof} We divide our proof into three steps, in which the following properties are verified in sequence. \begin{itemize} \item[i)] For every $a\in G$, there is a complex slice which contains $\phi(na)$ for all integers $n$. \item[ii)] For any $a, b\in G$, there is one which contains $\phi(n(a+b))$ and $\phi(n(a+b^*))$ for all integers $n$. \item[iii)] For any $a, b\in G$, there is one which contains both $\phi(a)$ and $\phi(b)$. \end{itemize} The details are as follows. Step 1: Let $a$ be an arbitrary element of $G$. We define $$f(n):=\phi(na)\quad\text{for}\quad n\in\mathbb{Z}. $$ Substituting this expression into \eqref{eq-Lemma-Ext-2-2} yields \begin{equation}\label{eq-Lemma-Ext-2-3} 2\Re f(m)\Im f(n) =\Im f(n+m)+\Im f(n-m)\quad\text{for}\quad n,m\in\mathbb{Z}, \end{equation} which implies that if a complex slice contains both $f(n)$ and $f(n-m)$, then it also contains $f(n+m)$. Accordingly, an obvious induction on $n$ shows that all the values $$f(0), f(\pm 1),\cdots, f(\pm n)$$ lie in the same complex slice. Hence, there exists an imaginary unit $I_a$ such that the complex slice $C_{I_a}$ contains $f(n)$ for all integers $n$. In other words, $f$ is essentially complex-valued. In addition, substituting the expression of $f$ into \eqref{eq-Lemma-Ext-2-1} gives \begin{equation*} 2\Re f(m)\Re f(n) =\Re f(n+m)+\Re f(n-m)\quad\text{for}\quad n,m\in\mathbb{Z}.
\end{equation*} One may notice that the equality above and \eqref{eq-Lemma-Ext-2-3} have the same form as the product-to-sum formulae for trigonometric functions. From this viewpoint it is not difficult to see that there exist two constants $\theta_a\in[0,2\pi)$ and $t_a\in[0,1]$ such that \begin{equation}\label{eq-Lemma-Ext-2-4-key} \phi(na)=f(n)=\cos (n\theta_a)+t_a\sin(n\theta_a)I_a\quad\text{for}\quad n\in\mathbb{Z}. \end{equation} The first part of this proof is now completed. Step 2: In this part, we prove by contradiction that for any given $a, b\in G$, all the values $\phi(n(a+b))$ and $\phi(n(a+b^*))$ with $n\in\mathbb{Z}$ lie in one complex slice. First, we assume the opposite: for some $a, b\in G$, these values cannot all be put in the same complex slice. We then derive consequences of this premise until a contradiction arises. The final conclusion of the first step states that there are imaginary units $I_a,I_b\in\mathbb{H}$, angles $\theta_a,\theta_b\in[0,2\pi)$ and real constants $t_a, t_b\in[0,1]$ such that the expressions \begin{align*} \phi(na)&=\cos (n\theta_a)+t_a\sin(n\theta_a)I_a,\\ \phi(nb)&=\cos (n\theta_b)+t_b\sin(n\theta_b)I_b, \end{align*} hold for all integers $n$. Similarly, we have \begin{align*} \phi(n(a+b))&=\cos (n\theta_1)+t_1\sin(n\theta_1)I_1, \\ \phi(n(a+b^*))&=\cos (n\theta_2)+t_2\sin(n\theta_2)I_2. \end{align*} Moreover, the premise above indicates that the imaginary units $I_1,I_2$ are linearly independent over the reals, the angles $\theta_1,\theta_2$ belong to $(0,\pi)\cup(\pi,2\pi)$, and the constants $t_1, t_2$ lie in $(0,1]$. Replacing $a,b$ by $na,nb$ in \eqref{eq-Lemma-Ext-2-2} and inserting the four preceding expressions leads to the equality below. \begin{equation*} 2t_a\cos(n\theta_b)\sin(n\theta_a)I_a=t_1\sin(n\theta_1)I_1+t_2\sin(n\theta_2)I_2\quad\text{for}\quad n\in\mathbb{Z}.
\end{equation*} When $n=1$, the right side is clearly not zero, which forces the imaginary unit $I_a$ on the left side to be a real linear combination of the ones on the right side. Hence, there exist two real constants $c_i$ such that \begin{equation}\label{eq-Lemma-Ext-2-5} \sin(n\theta_i)=c_i\cos(n\theta_b)\sin(n\theta_a), \quad i=1,2; \end{equation} is valid for all integers $n$. Performing the same procedure with $a$ and $b$ interchanged, we get \begin{equation}\label{eq-Lemma-Ext-2-6} \sin(n\theta_i)=c'_i\cos(n\theta_a)\sin(n\theta_b), \quad i=1,2; \end{equation} where the $c'_i$ are another pair of real constants. Thus, \begin{equation*} \sin(n\theta_i)=\frac{1}{2}(c_i+c'_i)\sin(n(\theta_a+\theta_b))=\frac{1}{2}(c_i-c'_i)\sin(n(\theta_a-\theta_b)), \quad i=1,2; \end{equation*} holds for all $n$, implying $\theta_a\pm\theta_b\equiv\theta_i$ or $-\theta_i$ mod $2\pi$ in the usual additive group structure of $\mathbb{R}$. It follows that $\theta_a$ or $\theta_b\equiv0$ mod $\pi$, and hence, in view of \eqref{eq-Lemma-Ext-2-5} and \eqref{eq-Lemma-Ext-2-6}, \begin{equation*} \sin(n\theta_1)=\sin(n\theta_2)=0\quad\text{for}\quad n\in\mathbb{Z}, \end{equation*} which contradicts the condition $\theta_1,\theta_2\in(0,\pi)\cup(\pi,2\pi)$. Therefore, the premise is false: for any given $a, b\in G$, there is indeed an imaginary unit $I$ such that the complex slice $C_I$ contains all the values $\phi(n(a+b))$ and $\phi(n(a+b^*))$. This accomplishes the second part of the proof. Step 3: It only remains to show that for any $a, b\in G$, there is a complex slice which contains both $\phi(a)$ and $\phi(b)$. The approach is similar to that of the second step, and some minor details are omitted to avoid repetition. Let $a, b\in G$, and suppose that no complex slice contains both $\phi(a)$ and $\phi(b)$. In order to derive a contradiction, we first gather some preliminary facts.
Firstly, owing to the final conclusion of the first step, the values of $\phi$ on the two subgroups generated by $a$ and $b$ respectively have the following forms: \begin{align*} \phi(na)&=\cos (n\theta_a)+t_a\sin(n\theta_a)I_a, \\ \phi(nb)&=\cos (n\theta_b)+t_b\sin(n\theta_b)I_b, \end{align*} with $I_a,I_b$ a pair of imaginary units that are linearly independent over the reals, $\theta_a,\theta_b\in(0,\pi)\cup(\pi,2\pi)$ and $t_a, t_b\in (0,1]$. Secondly, according to the final conclusion of the second step, all $\phi(n(2b+a^*))$ and $\phi(na)$ with $n\in\mathbb{Z}$ lie in the same complex slice, since $2b+a^*=b+(a+b^*)^*$ and $a=b+(a+b^*)$. This means that $\phi(n(2b+a^*))$ must belong to $C_{I_a}$, because this complex slice is the only one that contains all $\phi(na)$. Applying the final result of the first step again then gives \begin{equation*} \phi(n(2b+a^*))=\cos (n\theta_3)+t_3\sin(n\theta_3)I_a, \end{equation*} where $\theta_3\in[0,2\pi)$ and $t_3\in [0,1]$; while for the values of $\phi$ on the subgroup generated by $a+b^*$, we have \begin{equation*} \phi(n(a+b^*))=\cos (n\theta_4)+t_4\sin(n\theta_4)I_4, \end{equation*} where $\theta_4\in[0,2\pi)$, $t_4\in [0,1]$ and $I_4$ is another imaginary unit, whose relation to $I_a,I_b$ is not yet known. Next, we proceed to derive a contradiction. Replacing $a$ and $b$ with $nb$ and $n(a+b^*)$ respectively in \eqref{eq-Lemma-Ext-2-1} and \eqref{eq-Lemma-Ext-2-2} and then substituting the four expressions for $\phi$ above yields \begin{align} \label{eq-Lemma-Ext-2-7}2\cos (n\theta_4)\cos (n\theta_b)&=\cos (n\theta_a)+\cos (n\theta_3), \\ \label{eq-Lemma-Ext-2-8}2t_b\cos (n\theta_4)\sin (n\theta_b)I_b&=\left(t_a\sin (n\theta_a)+t_3\sin (n\theta_3)\right)I_a. \end{align} It is immediately seen from \eqref{eq-Lemma-Ext-2-8} that the equalities \begin{align} \label{eq-Lemma-Ext-2-9}\cos (n\theta_4)\sin (n\theta_b)&=0, \\ \label{eq-Lemma-Ext-2-10}t_a\sin (n\theta_a)+t_3\sin (n\theta_3)&=0
\end{align} hold for all integers $n$, since $I_a$ and $I_b$ are linearly independent over the reals. The condition $\theta_b\in(0,\pi)\cup(\pi,2\pi)$, in light of \eqref{eq-Lemma-Ext-2-9}, leads to $\theta_b$ and $\theta_4\equiv \frac{\pi}{2}$ mod $\pi$. Hence, the left side of \eqref{eq-Lemma-Ext-2-7} equals $0$ when $n$ is odd, and $2$ when $n$ is even. Moreover, applying the conditions $\theta_a\in(0,\pi)\cup(\pi,2\pi)$ and $t_a\neq 0$ to \eqref{eq-Lemma-Ext-2-10}, we get $t_a=t_3$ and $\theta_a=2\pi-\theta_3$, so that the right side of \eqref{eq-Lemma-Ext-2-7} equals $2\cos(n\theta_a)$. Therefore, \begin{equation*} \cos(n\theta_a)=\left\{\begin{array}{lcl} 0, & & \text{for } n \text{ odd}; \\ & &\\ 1, & & \text{for } n \text{ even}. \end{array}\right. \end{equation*} It is clear that no such angle $\theta_a$ exists. Thus the opposite of the desired conclusion is reduced to an absurdity, which completes the whole proof. \end{proof} We can now give an answer to the first main question. Recall that $\mathbb{S}$ stands for the $3$-dimensional sphere consisting of unit quaternions, and $\mathcal{P}_*^\mathbb{H}(G)$ for the family of normalized quaternion-valued positive definite functions on $G$. \begin{theorem}\label{Thm-ext-boundary-on-G} A function $\phi:G\to\mathbb{H}$ is an extreme element in $\mathcal{P}_*^\mathbb{H}(G)$ if and only if it is a homomorphism from $G$ to the sphere group $\mathbb{S}$. \end{theorem} \begin{proof} First, we verify the necessity. Provided that $\phi$ is an extreme element in the convex set $\mathcal{P}_*^\mathbb{H}(G)$, combining Lemmas \ref{Lemma-Ext-1} and \ref{Lemma-Ext-2} yields an imaginary unit $I\in\mathbb{H}$ such that the corresponding complex slice $C_I$ contains the range of $\phi$.
Then for an arbitrary $b\in G$, we define a pair of auxiliary functions $\kappa_b^{\pm}\phi$ by $$\kappa_b^+ \phi(a):= \phi(a)+\frac{I}{2}\phi(a+b)-\frac{I}{2}\phi(a+b^*); $$ $$\kappa_b^- \phi(a):= \phi(a)-\frac{I}{2}\phi(a+b)+\frac{I}{2}\phi(a+b^*). $$ An argument similar to the one used in the proof of Theorem \ref{Thm-ext-boundary-on-Z} shows that $\kappa_b^{\pm}\phi$, treated as $C_I$-valued functions, are both complex positive definite, and hence quaternionic positive definite by Theorem \ref{thm-equiv-c-h-pd}. Obviously, the extreme element $\phi$ is a convex combination of $\kappa_b^{\pm}\phi$, and the quaternionic positive definiteness of the two functions $\kappa_b^{\pm}\phi$ then forces them to be nonnegative scalar multiples of $\phi$. We thus obtain \begin{equation*} \phi(a+b)-\phi(a+b^*)=IC_b\phi(a)\quad\text{for}\quad a\in G, \end{equation*} where $C_b$ is a real constant depending on $b$. Putting $a=0$, we get $IC_b=\phi(b)-\phi(b^*)=2\Im \phi(b)$ by Theorem \ref{thm-basic-properties}. Hence, \begin{equation*} \phi(a+b)-\phi(a+b^*)=2\Im \phi(b)\phi(a)\quad\text{for}\quad a,b\in G. \end{equation*} Moreover, Lemma \ref{Lemma-Ext-1} says \begin{equation*} \phi(a+b)+\phi(a+b^*)=2\Re \phi(b)\phi(a)\quad\text{for}\quad a,b\in G. \end{equation*} Therefore, \begin{equation*} \phi(a+b)=\phi(b)\phi(a)\quad\text{for}\quad a,b\in G, \end{equation*} which, in light of Theorem \ref{thm-basic-properties}, implies $\lvert \phi(a)\rvert^2=\phi(a^*)\phi(a)=\phi(0)=1$. In conclusion, every extreme element $\phi$ is a homomorphism from $G$ to the sphere group $\mathbb{S}$. As for the sufficiency, its proof is simply a replication of that given earlier for the sufficiency in Theorem \ref{Thm-ext-boundary-on-Z}, and so is omitted. \end{proof} As shown in the above theorem, homomorphisms from $G$ to the sphere group $\mathbb{S}$ play a significant role in the convex structure of quaternion-valued positive definite functions, and we now give them a name of their own.
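The sufficiency direction can also be illustrated numerically: for a character $\phi(n)=s^n$ on $\mathbb{Z}$, whose powers $s^n=\cos(nt)+\sin(nt)I$ stay in a single slice, every Gram matrix $[\phi(n_j-n_i)]$ becomes Hermitian positive semidefinite after replacing each quaternionic entry by its $2\times 2$ complex representation, a test equivalent to quaternionic positive definiteness. The following is an informal sketch; the embedding and the sample points are our own choices:

```python
import numpy as np

def embed(q):
    # complex 2x2 representation of the quaternion q = w + x i + y j + z k
    w, x, y, z = q
    return np.array([[w + 1j*x,  y + 1j*z],
                     [-y + 1j*z, w - 1j*x]])

# character phi(n) = s^n with s = cos(t) + sin(t) * I; powers stay in the slice C_I
t = 1.234
v = np.array([1.0, -2.0, 0.5])
I = v / np.linalg.norm(v)
phi = lambda n: np.array([np.cos(n*t), *(np.sin(n*t) * I)])

n_pts = [0, 1, 3, -2, 7]
k = len(n_pts)
M = np.zeros((2*k, 2*k), dtype=complex)
for i, ni in enumerate(n_pts):
    for j, nj in enumerate(n_pts):
        M[2*i:2*i+2, 2*j:2*j+2] = embed(phi(nj - ni))

assert np.allclose(M, M.conj().T)           # the Gram matrix is Hermitian
assert np.linalg.eigvalsh(M).min() > -1e-9  # and positive semidefinite
```

Indeed, $\phi(n_j-n_i)=\overline{\phi(n_i)}\,\phi(n_j)$ for a character, so the embedded Gram matrix factors as $A^{H}A$ and positive semidefiniteness is automatic.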
\begin{definition} A quaternionic character on $G$ is a homomorphism from $G$ to the sphere group $\mathbb{S}$. The set of all quaternionic characters is called the quaternionic dual of $G$ and denoted by $G^\delta$, namely $G^\delta=Hom(G,\mathbb{S})$. \end{definition} We refer to the pointwise convergence topology as the canonical topology of $G^\delta$. With this notation, Theorem \ref{Thm-ext-boundary-on-G} can be restated as $$Ex\left(\mathcal{P}_*^\mathbb{H}(G)\right)=G^\delta. $$ Note that in contrast to its complex counterpart, i.e., $Hom(G,\mathbb{T})$, usually called the Pontryagin dual or the dual group of $G$, the quaternionic dual $G^\delta$ does not possess a natural group structure, because of the non-commutativity of the quaternions. Nevertheless, there are still many operations on $G^\delta$ with algebraic meaning, some of which will be introduced in the next subsection. As an application of our first main result, we now present an integral characterization of quaternion-valued positive definite functions. \begin{theorem}\label{Thm-characterization-G} A function $\phi:G\to\mathbb{H}$ is positive definite if and only if there exists a (not necessarily unique) nonnegative Radon measure $\mu$ on $G^\delta$ such that \begin{equation}\label{Eq-thm-characterization-on-G-integral-expression} \phi(g)=\int_{G^\delta}\gamma(g) d\mu(\gamma) \quad\text{for}\quad g\in G. \end{equation} \end{theorem} \begin{proof} The proof is almost the same as that of Theorem \ref{Thm-characterization-Z}, and so is omitted. \end{proof} \subsection{final arguments on the uniqueness of measure representations on the quaternionic dual} We now turn to the second main question, namely whether $\mathcal{P}_*^\mathbb{H}(G)$ is a Bauer simplex in the pointwise convergence topology. As is well known, there are multiple ways to describe a Bauer simplex (see, e.g., Theorems 4.1 and 4.3 in Chapter II of \cite{Alfsen-1971}).
Among them we choose the one directly related to the measure representation given in Theorem \ref{Thm-characterization-G}: if the nonnegative solution $\mu$ of \eqref{Eq-thm-characterization-on-G-integral-expression} is unique for every $\phi\in \mathcal{P}_*^\mathbb{H}(G)$, then $\mathcal{P}_*^\mathbb{H}(G)$ is a Bauer simplex, and vice versa. In the special case where $G$ is the group of integers, as Remark \ref{Re-Z} says, the nonnegative solution is unique for some $\phi$'s, whereas the uniqueness breaks down for others; this means that $\mathcal{P}_*^\mathbb{H}(G)$ is not a Bauer simplex when $G=\mathbb{Z}$. A natural question arises at this point: does this phenomenon occur in every case? We shall see later that the answer depends on a certain algebraic feature of $G$. Two approaches to the final conclusion are provided as follows. \begin{itemize} \item[1)] Lemma \ref{lemma-criteria-G} $\longrightarrow$ Theorem \ref{thm-property-p-d-f-G} (with Lemma \ref{lemma-equiv-algebra-feature}) $\longrightarrow$ Theorem \ref{thm-not-Bauer-G} \item[2)] A construction method (with Lemma \ref{lemma-equiv-algebra-feature}) $\longrightarrow$ Theorem \ref{thm-not-Bauer-G} \end{itemize} The former involves some manipulations of measures and is relatively abstract, while the latter is much more straightforward. Throughout this subsection, the space of (signed) Radon measures on the quaternionic dual $G^\delta$ is denoted by $\mathcal{M}(G^\delta)$, and the subspace $\mathcal{K}(G^\delta)$ consists of those $\nu\in \mathcal{M}(G^\delta)$ satisfying $$ \int_{G^\delta}\gamma\ d\nu(\gamma)=0. $$ In the lemma below, we give a criterion for whether the nonnegative solution to \eqref{Eq-thm-characterization-on-G-integral-expression} is unique. \begin{lemma}\label{lemma-criteria-G} Let $\mu\in\mathcal{M}(G^\delta)$ be a nonnegative solution of \eqref{Eq-thm-characterization-on-G-integral-expression} for a given $\phi\in\mathcal{P}_*^\mathbb{H}(G)$.
Then it is the unique one if and only if every nonzero $\nu\in\mathcal{K}(G^\delta)$ satisfies one of the following conditions: \begin{itemize} \item[i)]$\nu_2^-\neq 0$, \item[ii)]$\nu_2^-=0$ and $d\nu_1^-/d\mu$ is essentially unbounded, \end{itemize} where $\nu^-=\nu_1^-+\nu_2^-$ is the Lebesgue decomposition of the lower variation of $\nu$ into the absolutely continuous part $\nu_1^-$ and the singular part $\nu_2^-$ w.r.t. $\mu$. \end{lemma} \begin{proof} Assume that \eqref{Eq-thm-characterization-on-G-integral-expression} has multiple nonnegative solutions; then there exists a solution $\mu'$ other than $\mu$. One easily sees that the nonzero Radon measure $\nu:=\mu'-\mu$ satisfies neither of Conditions i and ii. Indeed, the lower variation of $\nu$ is absolutely continuous with respect to $\mu$, and furthermore the corresponding Radon-Nikodym derivative is essentially bounded. So the sufficiency holds. As for the necessity, if a nonzero Radon measure $\nu$ fails to satisfy either of Conditions i and ii, then the Radon-Nikodym derivative $d\nu^-/d \mu$ exists and is essentially bounded by a positive constant $C$, and $\mu':=\mu+\frac{1}{C}\nu$ is another nonnegative solution, different from $\mu$. This completes the proof. \end{proof} For convenience, we identify the circle group $\mathbb{T}$ with the unit circle in a fixed complex slice $C_{\widehat I}$ of the quaternion algebra. The classical dual group $\widehat{G}:=Hom(G,\mathbb{T})$ can then be considered as a subset of the quaternionic dual $G^\delta$. We denote the subset of all real-valued elements of $G^\delta$ (e.g., the constant character $1$) by $G^\delta_\mathbb{R}$. In particular, $G^\delta_\mathbb{R}\subset \widehat{G} \subset G^\delta$. The following lemma shows that these inclusions become equalities if and only if $G$ has exponent $\leq 2$. Although its proof is evident, this result is crucial in both approaches to Theorem \ref{thm-not-Bauer-G}.
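Before stating the lemma, we note that the function constructed in its proof can be tested numerically. For instance (an informal check, with parameters of our own choosing), take $G=\mathbb{Z}_3$, $a=1$ and $J=j$: the resulting $\phi$, with $\phi(0)=1$ and $\phi(\pm 1)=\pm j/2$, is quaternionic positive definite, as the eigenvalues of the complexified Gram matrix show, while its values $\pm j/2$ do not lie in the distinguished slice $\mathbb{C}_{\widehat I}$:

```python
import numpy as np

def embed(q):
    # complex 2x2 representation of the quaternion q = w + x i + y j + z k
    w, x, y, z = q
    return np.array([[w + 1j*x,  y + 1j*z],
                     [-y + 1j*z, w - 1j*x]])

# phi on Z_3: phi(0) = 1, phi(1) = j/2, phi(2) = phi(-1) = -j/2
phi = {0: np.array([1.0, 0.0, 0.0, 0.0]),
       1: np.array([0.0, 0.0, 0.5, 0.0]),
       2: np.array([0.0, 0.0, -0.5, 0.0])}

M = np.zeros((6, 6), dtype=complex)
for i in range(3):
    for j in range(3):
        M[2*i:2*i+2, 2*j:2*j+2] = embed(phi[(j - i) % 3])

assert np.allclose(M, M.conj().T)
print(np.linalg.eigvalsh(M).min())  # 1 - sqrt(3)/2 ≈ 0.134 > 0
```

The smallest eigenvalue is $1-\sqrt{3}/2>0$, so $\phi$ is indeed positive definite on $\mathbb{Z}_3$, a group of exponent $3>2$.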
\begin{lemma}\label{lemma-equiv-algebra-feature} The following conditions are equivalent. \begin{itemize} \item[i)]$\widehat{G}=G^\delta$, \item[ii)]every $\phi\in\mathcal{P}_*^\mathbb{H}(G)$ is real valued (i.e., $G^\delta_\mathbb{R}=G^\delta$), \item[iii)] $G$ is of exponent $\leq 2$. \end{itemize} \end{lemma} \begin{proof} (i $\Rightarrow$ iii) Suppose that $G$ has exponent $>2$; then there exists an element $a\in G$ satisfying $a\neq -a$. This allows us to define a function on $G$ by $$\phi(x):=\left\{\begin{array}{ll} 1,& \quad x=0, \\ \pm \frac{J}{2},&\quad x=\pm a, \\ 0,& \quad otherwise; \end{array}\right. $$ where $J$ is an imaginary unit $\neq \pm\widehat I$. A simple calculation shows that $\phi$ is positive definite. It is apparent that the range of $\phi$ is not contained in $C_{\widehat I}$, which indicates $\widehat{G}\neq G^\delta$ in view of Theorem \ref{Thm-characterization-G}. (iii $\Rightarrow$ ii) Let $G$ be an abelian group of exponent $\leq 2$. Then $x=-x$ holds for all $x\in G$. We thus have $\phi(x)=\overline{\phi(-x)}=\overline{\phi(x)}$ for $\phi\in\mathcal{P}_*^\mathbb{H}(G)$, since $\phi$ is hermitian according to Theorem \ref{thm-basic-properties}. (ii $\Rightarrow$ i) This implication is trivial. \end{proof} Although the quaternionic dual $G^\delta$ has no natural group structure, we can still find many operations on $G^\delta$ with algebraic meaning, for example, the involution $\phi^*:=\overline{\phi}$ and the rotation $R_q\phi:= q\phi q^{-1}$ with $q$ a unit quaternion. Both operations clearly preserve the pointwise convergence topology. They thus induce two push-forward mappings $\mu\mapsto \mu^{\hat *}$ and $\mu\mapsto \hat R_q\mu$ on $\mathcal{M}(G^\delta)$ as follows. $$\mu^{\hat *}(\Omega):=\mu(\Omega^*)\quad\text{and}\quad \hat R_q\mu(\Omega):=\mu( R_q^{-1}\Omega),\quad\forall\text{ Borel set } \Omega\subset G^\delta.
$$ Here $\Omega^*$ stands for the image (equivalently, the inverse image) of $\Omega$ under the involution, and $R_q^{-1}\Omega$ for the inverse image under the rotation. We now present an interesting property of unique measure representations. \begin{theorem}\label{thm-property-p-d-f-G} If $\mu\in\mathcal{M}(G^\delta)$ is the unique nonnegative solution of \eqref{Eq-thm-characterization-on-G-integral-expression} for some given $\phi\in\mathcal{P}_*^\mathbb{H}(G)$, then the restrictions of $\mu^{\hat *}$ and $\mu$ to $G^\delta\setminus G^\delta_\mathbb{R}$ are mutually singular, or in short $\mu^{\hat *}\perp\mu$ on $G^\delta\setminus G^\delta_\mathbb{R}$. \end{theorem} \begin{proof} Assume that the restrictions of $\mu^{\hat *}$ and $\mu$ to $G^\delta\setminus G^\delta_\mathbb{R}$ are not mutually singular. By Lemma \ref{lemma-criteria-G}, it suffices to show that there exists a (signed) measure $\nu\in \mathcal{K}(G^\delta)$ such that its lower variation $\nu^-$ is absolutely continuous with respect to $\mu$ and the Radon-Nikodym derivative $d \nu^- /d \mu$ is essentially bounded. In order to find such a measure $\nu$, we need to construct a neighbourhood with a well-chosen separation property for every element $\phi\in G^\delta\setminus G^\delta_\mathbb{R}$. The details are as follows. Since $\phi$ is not real-valued, we have $\phi^*\neq \phi$. Moreover, as demonstrated in the proof of Theorem \ref{Thm-ext-boundary-on-G}, there exists an imaginary unit $I$ such that the complex slice $C_{I}$ contains the range of $\phi$, which is identical with that of $\phi^*$. Take $J$ to be another imaginary unit that anti-commutes with $I$ and define a quaternion $q:=\frac{1}{\sqrt 2}(1+J)$. Then we get four distinct elements of $G^\delta\setminus G^\delta_\mathbb{R}$, namely $\phi$, $\phi^*$, $R_q\phi$ and $R_q\phi^*$. Since $G^\delta\setminus G^\delta_\mathbb{R}$ is Hausdorff, these four elements have pairwise disjoint neighbourhoods.
We denote these neighbourhoods by $U_i$ ($i=1,2,3,4$) in sequence. Finally, a neighbourhood of $\phi$ in the subspace $G^\delta\setminus G^\delta_\mathbb{R}$ is given by $$U_\phi:=U_1\cap (U_2)^* \cap R_{q^{-1}}(U_3) \cap R_{q^{-1}}((U_4)^*),$$ and by its construction there is a quaternion $q_\phi$ such that $U_\phi$, $(U_\phi)^*$, $R_{q_\phi}(U_\phi)$ and $R_{q_\phi}((U_\phi)^*)$ are pairwise disjoint. Now we proceed to construct the desired measure $\nu$. Let $\mu^{\hat *}=\mu_1+\mu_2$ be the Lebesgue decomposition of $\mu^{\hat *}$ into the absolutely continuous part $\mu_1$ and the singular part $\mu_2$ with respect to $\mu$. By assumption, the absolutely continuous part does not vanish on $G^\delta\setminus G^\delta_\mathbb{R}$, namely $\mu_1(G^\delta\setminus G^\delta_\mathbb{R})$ is strictly positive. Thus the Radon-Nikodym derivative $d \mu_1/ d \mu$ is not essentially zero on $G^\delta\setminus G^\delta_\mathbb{R}$. Neither is the nonnegative measurable function $f:=\min\left\{1,\frac{d \mu_1}{d \mu}\right\}$, which implies that $f$ must be not essentially zero (w.r.t. $\mu$) on some neighbourhood $U_\phi$, since the neighbourhoods $U_\phi$ cover the whole subspace $G^\delta\setminus G^\delta_\mathbb{R}$. This observation allows us to define a nonzero measure $\nu$ by $$\nu:=-\lambda-\lambda^{\hat *}+\hat R_{q_\phi}\lambda+ \hat R_{q_\phi}\lambda^{\hat *}. $$ Here $\lambda$ is given by $d\lambda=\left(\chi_{U_\phi}f\right)d\mu$, and $\chi_{U_\phi}$ stands for the characteristic function of the neighbourhood $U_\phi$. Indeed, $\lambda$ and the three push-forward measures are nonzero, because $f$ is not essentially zero on $U_\phi$ with respect to $\mu$. Furthermore, their masses lie in the four pairwise disjoint neighbourhoods $U_\phi$, $(U_\phi)^*$, $R_{q_\phi}(U_\phi)$ and $R_{q_\phi}((U_\phi)^*)$ respectively, so these four nonzero measures are mutually singular. This confirms that $\nu$ is nonzero.
In particular, the lower variation of $\nu$ is exactly the sum of $\lambda$ and $\lambda^{\hat *}$. It remains to show that $\nu$ satisfies the following two conditions. \begin{itemize} \item[i)]$\nu$ belongs to $\mathcal{K}(G^\delta)$, namely $\displaystyle{\int_{G^\delta}\gamma d\nu(\gamma)=0}$. \item[ii)]The lower variation of $\nu$, i.e., $\lambda+\lambda^{\hat *}$, is absolutely continuous with respect to $\mu$, and the corresponding Radon-Nikodym derivative is essentially bounded. \end{itemize} We start with the first condition. By definition, \begin{equation}\label{eq-them-p-d-f-G-1} \begin{array}{rll} \displaystyle{\int_{G^\delta}\gamma d\nu(\gamma)}= &\displaystyle{-\int_{G^\delta}\gamma d\lambda (\gamma)} & \displaystyle{-\int_{G^\delta}\gamma d\lambda^{\hat *}(\gamma)} \\ & \\ & \displaystyle{+\int_{G^\delta}\gamma d\hat R_{q_\phi}\lambda (\gamma)} & \displaystyle{+\int_{G^\delta}\gamma d\hat R_{q_\phi}\lambda^{\hat *}(\gamma)}. \end{array} \end{equation} Here $\gamma$ is regarded as the identity function, i.e., a vector-valued variable on $G^\delta$. For brevity, we write the four terms on the right as $T_i$ ($i=1,\dots,4$). Recall that $\hat{*}$ and $\hat R_{q_\phi}$ are the push-forward mappings induced by the involution $*$ and the rotation $R_{q_\phi}$ on $G^\delta$, respectively. Thus for the last three terms on the right, we have \begin{align*} T_2=&\int_{G^\delta}\gamma^* d\lambda (\gamma), \\ T_3=&\int_{G^\delta}R_{q_\phi}\gamma d\lambda (\gamma), \\ T_4=&\int_{G^\delta}R_{q_\phi}\gamma^* d\lambda (\gamma). \end{align*} Substituting these three expressions back into \eqref{eq-them-p-d-f-G-1} gives \begin{equation*} \int_{G^\delta}\gamma d\nu(\gamma)=\int_{G^\delta}-(\gamma+\gamma^*)+R_{q_\phi}(\gamma+\gamma^*)\,d\lambda(\gamma). \end{equation*} It follows that $\nu$ belongs to $\mathcal{K}(G^\delta)$, since $\gamma+\gamma^*$ is real-valued as a function on $G$ and every real-valued function is invariant under the rotation $R_{q_\phi}$.
Now we turn to the second condition. Recall that $\mu^{\hat *}=\mu_1+\mu_2$ is the Lebesgue decomposition of $\mu^{\hat *}$ into the absolutely continuous part $\mu_1$ and the singular part $\mu_2$ with respect to $\mu$, and that $\lambda$ is defined by $d\lambda=\left(\chi_{U_\phi}\min\left\{1,\frac{d \mu_1}{d \mu}\right\}\right)d\mu$, where the neighbourhood $U_\phi$ is disjoint from its image under the involution, i.e., $(U_\phi)^*$. From the definition of $\lambda$ we see: \begin{itemize} \item[a.]The mass of $\lambda$ is concentrated in $U_\phi$; \item[b.]$\lambda\leq\mu_1$ and $\lambda\leq\mu$. \end{itemize} In addition, applying the involution to the Lebesgue decomposition of $\mu^{\hat *}$ leads to $\mu=\mu_1^{\hat *}+\mu_2^{\hat *}\geq \mu_1^{\hat *}$, which together with Condition b yields $\mu\geq\mu_1^{\hat *}\geq \lambda^{\hat *}$. Moreover, by Condition a, the mass of $\lambda^{\hat *}$ is concentrated in $(U_\phi)^*$. To summarize: firstly, $\lambda\leq\mu$ and $\lambda^{\hat *}\leq\mu$; secondly, $\lambda$ and $\lambda^{\hat *}$ are mutually singular, since their masses are concentrated in two disjoint sets. Consequently $\lambda+\lambda^{\hat *}\leq\mu$, so $\lambda+\lambda^{\hat *}$ is absolutely continuous with respect to $\mu$ and the corresponding Radon-Nikodym derivative is essentially bounded. This completes the proof. \end{proof} The answer to the second main question is given in the theorem below. \begin{theorem}\label{thm-not-Bauer-G} $\mathcal{P}_*^\mathbb{H}(G)$ is a Bauer simplex if and only if $G$ is an abelian group of exponent $\leq 2$, or equivalently a $\mathbb{Z}_2$-vector space. \end{theorem} Roughly speaking, $\mathcal{P}_*^\mathbb{H}(G)$ is not a Bauer simplex in most cases. \begin{proof} The sufficiency is an immediate consequence of Lemma \ref{lemma-equiv-algebra-feature}.
Indeed, when $G$ is of exponent $\leq 2$, the quaternionic dual $G^\delta$, i.e., the extreme boundary of $\mathcal{P}_*^\mathbb{H}(G)$, coincides with the classical dual group $\widehat{G}:=\mathrm{Hom}(G,\mathbb{T})$, which, as is well known, is the extreme boundary of a Bauer simplex. The necessity can be proved in two ways, as follows. 1. If $G$ is of exponent $> 2$, then by Lemma \ref{lemma-equiv-algebra-feature} the set $G^\delta_\mathbb{R}$ is a proper subset of $G^\delta$. Thus there exists a nonnegative Radon measure $\nu\in\mathcal{M}(G^\delta)$ satisfying $\nu(G^\delta\setminus G^\delta_\mathbb{R})>0$. The nonnegative Radon measure $\mu:=\nu+\nu^{\hat *}$ is clearly identical with its involution $\mu^{\hat *}$, so the restrictions of $\mu$ and $\mu^{\hat *}$ to $G^\delta\setminus G^\delta_\mathbb{R}$ are not mutually singular. Hence, in view of Theorem \ref{thm-property-p-d-f-G}, the uniqueness of the solution to \eqref{Eq-thm-characterization-on-G-integral-expression} breaks down for some positive definite function. Therefore $\mathcal{P}_*^\mathbb{H}(G)$ is not a Bauer simplex when $G$ is of exponent $> 2$. 2. Under the same assumption on $G$, we have $G^\delta\setminus G^\delta_\mathbb{R}\neq\emptyset$. Take $\phi$ to be an element of $G^\delta\setminus G^\delta_\mathbb{R}$. As demonstrated in the preceding subsection, the range of $\phi$ is contained in some complex slice $C_I$. Define $\varphi:=(1+J)\phi(1+J)^{-1}$, where $J$ is an imaginary unit that anti-commutes with $I$. Notice that $\phi$, $\phi^*$, $\varphi$ and $\varphi^*$ are four distinct elements of $G^\delta\setminus G^\delta_\mathbb{R}$. Hence the sum of the Dirac measures centred at $\phi$ and $\phi^*$ does not equal the sum of the Dirac measures centred at $\varphi$ and $\varphi^*$.
A direct calculation shows that both sums correspond, via \eqref{Eq-thm-characterization-on-G-integral-expression}, to the same positive definite function $2\Re \phi$. As a consequence, $\mathcal{P}_*^\mathbb{H}(G)$ is not a Bauer simplex. This completes the proof. \end{proof} \end{document}
\begin{document} \title{Continuous ensembles and the $\chi$-capacity of infinite-dimensional channels} \author{A.S. Holevo, M.E. Shirokov\\Steklov Mathematical Institute, 119991 Moscow, Russia} \date{} \maketitle \section{Introduction} This paper is devoted to a systematic study of the classical capacity (more precisely, of a closely related quantity -- the $\chi$-capacity) of infinite-dimensional quantum channels, following \cite{H-c-w-c}, \cite{H-Sh}, \cite{Sh}. While major attention in quantum information theory has so far been paid to finite-dimensional systems, there is an important and interesting class of Gaussian channels, see e.g. \cite{H-W}, \cite{Gio}, \cite{E}, which act in infinite-dimensional Hilbert space. Although many questions for Gaussian Bosonic systems with a finite number of modes can be solved with finite-dimensional matrix techniques, a general underlying Hilbert space operator analysis is indispensable. Moreover, it was observed recently \cite{Sh} that Shor's proof of the global equivalence of different forms of the famous additivity conjecture is related to a peculiar discontinuity of the $\chi$-capacity in the infinite-dimensional case. All this calls for a mathematically rigorous treatment involving specific results from operator theory in a Hilbert space and from measure theory. There are two important features essential for channels in infinite dimensions. One is the necessity of input constraints (such as the mean energy constraint for Gaussian channels) to prevent infinite capacities (although considering input constraints was recently shown to be quite useful also in the study of the additivity conjecture for channels in finite dimensions \cite{H-Sh}). Another is the natural appearance of infinite and, in general, ``continuous'' state ensembles, understood as probability measures on the set of all quantum states.
By using compactness criteria from probability theory and operator theory we show that the set of all generalized ensembles with average in a compact set of states is itself a compact subset of the set of all probability measures. With this in hand we give a sufficient condition for the existence of an optimal generalized ensemble for a constrained quantum channel. This condition can be verified, in particular, in the case of Bosonic Gaussian channels with constrained mean energy. In the case of convex constraints we give a characterization of the optimal generalized ensemble extending the ``maximal distance property'' \cite{Sch-West-1}, \cite{H-Sh}. \section{Preliminaries} Let $\mathcal{H}$ be a separable Hilbert space, $\mathfrak{B}(\mathcal{H})$ the algebra of all bounded operators in $\mathcal{H}$, $\mathfrak{T}( \mathcal{H})$ the Banach space of all trace-class operators with the trace norm $\Vert \cdot \Vert _{1}$, and $\mathfrak{S}(\mathcal{H})$ the closed convex subset of $\mathfrak{T}(\mathcal{H})$ consisting of all density operators (states) in $\mathcal{H}$, which is a complete separable metric space with the metric defined by the norm. We shall use the fact that convergence of a sequence of states to a \textit{state} in the weak operator topology is equivalent to convergence of this sequence to this state in the trace norm \cite{D-A}. A closed subset $\mathcal{K}$ of states is compact if and only if for any $\varepsilon>0$ there is a finite-dimensional projector $P$ such that $\mathrm{Tr}\rho P\geq 1-\varepsilon$ for all $\rho\in\mathcal{K}$ \cite{S}. A finite collection $\{\pi _{i},\rho _{i}\}$ of states $\rho _{i}$ with the corresponding probabilities $\pi _{i}$ is conventionally called an \textit{ensemble}. The state $\bar{\rho}=\sum_{i}\pi _{i}\rho _{i}$ is called \textit{the average} of the ensemble. We refer to \cite{Bil}, \cite{Par} for definitions and facts concerning probability measures on separable metric spaces.
In particular, we denote by $\mathrm{supp}(\pi)$ the support of a measure $\pi$ as defined in \cite{Par}. \textbf{Definition}. We call an arbitrary Borel probability measure $\pi $ on $\mathfrak{S}(\mathcal{H})$ a \textit{generalized ensemble}. The \textit{average}\footnote{Also called the barycenter of the measure $\pi$.} of the generalized ensemble $\pi$ is defined by the Pettis integral \[ \bar{\rho}(\pi )=\int\limits_{\mathfrak{S}(\mathcal{H})}\rho \pi (d\rho ). \] Using the result of \cite{D-A} it is possible to show that the above integral exists also in the Bochner sense \cite{H&P}. The conventional ensembles correspond to measures with finite support. Denote by $\mathcal{P}$ the convex set of all probability measures on $\mathfrak{S}(\mathcal{H})$ equipped with the topology of weak convergence \cite{Bil}. It is easy to see (due to the result of \cite{D-A}) that the mapping $\pi \mapsto \bar{\rho}(\pi )$ is continuous in this topology. \textbf{Lemma 1.} \textit{The subset of measures with finite support is dense in the set of all measures with given average }$\bar{\rho}$\textit{.} A proof of this statement is given in Appendix A. In what follows $\log $ denotes the function on $[0,+\infty ),$ which coincides with the usual logarithm on $\left( 0,+\infty \right) $ and vanishes at zero. If $A$ is a positive finite-rank operator in $\mathcal{H},$ then its entropy is defined as \begin{equation} H(A)=\mathrm{Tr}A\left( I\log \mathrm{Tr}A-\log A\right) , \label{ent} \end{equation} where $I$ is the unit operator in $\mathcal{H}$. If $A,B$ are two such operators, then the relative entropy is defined as \begin{equation} H(A\,\Vert B)=\mathrm{Tr}(A\log A-A\log B+B-A) \label{relent} \end{equation} provided $\mathrm{ran}A\subseteq\mathrm{ran}B$, and $H(A\,\Vert B)=+\infty$ otherwise (throughout this paper $\mathrm{ran}$ denotes the closure of the range of an operator in $\mathcal{H}$).
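For experimentation, the finite-rank definitions (\ref{ent}) and (\ref{relent}) translate directly into numerical routines for Hermitian NumPy matrices (the eigenvalue cutoffs below are implementation choices, not part of the definitions):

```python
import numpy as np

def entropy(A):
    # H(A) = Tr A (I log Tr A - log A), cf. (ent); A positive semidefinite
    lam = np.linalg.eigvalsh(A)
    lam = lam[lam > 1e-12]
    t = lam.sum()
    return float(t * np.log(t) - np.sum(lam * np.log(lam)))

def rel_entropy(A, B):
    # H(A || B) = Tr(A log A - A log B + B - A), cf. (relent);
    # finite only if ran A lies in ran B -- here B is taken strictly positive
    la, Ua = np.linalg.eigh(A)
    lb, Ub = np.linalg.eigh(B)
    logA = Ua @ np.diag(np.log(np.clip(la, 1e-300, None))) @ Ua.conj().T
    logB = Ub @ np.diag(np.log(np.clip(lb, 1e-300, None))) @ Ub.conj().T
    return float(np.trace(A @ (logA - logB) + B - A).real)

rho   = np.diag([0.5, 0.5])     # maximally mixed qubit state
sigma = np.diag([0.75, 0.25])
```

For a density operator ($\mathrm{Tr}A=1$) the first term in (\ref{ent}) vanishes and `entropy` reduces to the usual von Neumann entropy; e.g. `entropy(rho)` above equals $\log 2$.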
These definitions can be extended to arbitrary positive $A,B\in \mathfrak{T}(\mathcal{H})$ with the help of the following lemma \cite{L}: \textbf{Lemma 2.} \textit{Let $\left\{ P_{n}\right\} $ be an arbitrary sequence of finite-dimensional projectors monotonously increasing to the unit operator $I$. The sequences }$\left\{ H(P_{n}AP_{n})\right\} ,$ $ \left\{ H(P_{n}AP_{n}\Vert P_{n}BP_{n})\right\} $\textit{\ are monotonously increasing and have limits in the range }$\left[ 0,+\infty \right] $ \textit{\ independent of the choice of the sequence $\left\{ P_{n}\right\} .$ } We can thus define the entropy and the relative entropy as\textit{\ \[ H(A)=\lim_{n\rightarrow +\infty }H(P_{n}AP_{n});\;\quad H(A\,\Vert B)=\lim_{n\rightarrow +\infty }H(P_{n}AP_{n}\Vert P_{n}BP_{n}). \] } As is well known, the properties of the entropy for infinite- and finite-dimensional Hilbert spaces differ quite substantially: in the latter case the entropy is a bounded continuous function on $\mathfrak{S}(\mathcal{H}),$ while in the former it is discontinuous (only lower semicontinuous) at every point, and infinite ``almost everywhere'' in the sense that the set of states with finite entropy is a first category subset of $\mathfrak{S}(\mathcal{H})$ \cite{W}. \section{The $\chi$-capacity of constrained channels} Lemma 2 implies, in particular, that the nonnegative function\break $\rho\mapsto H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))$ is measurable on $ \mathfrak{S}(\mathcal{H})$. Hence the functional \[ \chi _{\Phi }(\pi )=\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))\pi (d\rho ) \] is well defined on the set $\mathcal{P}$ (with values in $[0,+\infty ]$). \textbf{Proposition 1.} \textit{The functional $\chi _{\Phi }(\pi )$ is lower semicontinuous on $\mathcal{P}$.
If $H(\Phi (\bar{\rho}(\pi )))<\infty,$ then} \begin{equation}\label{formula} \chi _{\Phi }(\pi)= H(\Phi (\bar{\rho}(\pi )))-\int\limits_{\mathfrak{S}( \mathcal{H})}H(\Phi (\rho ))\pi (d\rho ). \end{equation} \textbf{Proof.} Let $\left\{ P_{n}\right\} $ be an arbitrary sequence of finite-dimensional projectors monotonously increasing to the unit operator $I$. We first show that the functionals \[ \chi _{\Phi }^{n}(\pi )=\int\limits_{\mathfrak{S}(\mathcal{H})}H(P_{n}\Phi (\rho )P_{n}\Vert P_{n}\Phi (\bar{\rho}(\pi ))P_{n})\pi (d\rho ) \] are continuous. We have \[ \mathrm{ran}(P_{n}\Phi (\rho )P_{n})\subseteq \mathrm{ran}(P_{n}\Phi (\bar{\rho}(\pi ))P_{n}) \] for $\pi$-almost all $\rho $. Indeed, the closure of the range is the orthogonal complement of the null subspace of a Hermitian operator, and for the null subspaces the opposite inclusion obviously holds. It follows that \[ \begin{array}{c} H(P_{n}\Phi (\rho )P_{n}\Vert P_{n}\Phi (\bar{\rho}(\pi ))P_{n})=\mathrm{Tr}((P_{n}\Phi (\rho )P_{n})\log (P_{n}\Phi (\rho )P_{n}) \\\\ -(P_{n}\Phi(\rho)P_{n})\log (P_{n}\Phi (\bar{\rho}(\pi ))P_{n})+P_{n}\Phi (\bar{\rho}(\pi ))P_{n}-P_{n}\Phi (\rho )P_{n}) \end{array} \] for $\pi$-almost all $\rho $. By using (\ref{ent}) we have \[ \begin{array}{c} \chi _{\Phi }^{n}(\pi )=-\int\limits_{\mathfrak{S}(\mathcal{H} )}H(P_{n}\Phi (\rho )P_{n})\pi (d\rho )+ \int\limits_{\mathfrak{S}(\mathcal{H})}\mathrm{Tr}(P_{n}\Phi (\rho ))\log \mathrm{Tr}(P_{n}\Phi (\rho ))\pi (d\rho ) \\\\ -\int\limits_{\mathfrak{S}(\mathcal{H})}\mathrm{Tr}(P_{n}\Phi (\rho )P_{n})\log (P_{n}\Phi (\bar{\rho}(\pi ))P_{n})\pi (d\rho)\\\\ +\int\limits_{\mathfrak{S}(\mathcal{H})}\mathrm{Tr}(P_{n}\Phi (\bar{\rho} (\pi )))\pi (d\rho )-\int\limits_{\mathfrak{S}(\mathcal{H})}\mathrm{Tr} (P_{n}\Phi (\rho))\pi (d\rho ).
\end{array} \] It is easy to see that the last two terms cancel, while the central term can be transformed in the following way \[ \begin{array}{c} -\int\limits_{\mathfrak{S}(\mathcal{H})}\mathrm{Tr}(P_{n}\Phi (\rho )P_{n})\log (P_{n}\Phi (\bar{\rho}(\pi ))P_{n})\pi (d\rho ) \\\\ =-\mathrm{Tr}\int\limits_{\mathfrak{S}(\mathcal{H})} (P_{n}\Phi (\rho )P_{n})\log (P_{n}\Phi (\bar{\rho}(\pi ))P_{n})\pi (d\rho )\\\\ =H(P_{n}\Phi (\bar{\rho}(\pi ))P_{n})-\mathrm{Tr}(P_{n}\Phi (\bar{\rho}(\pi )))\log \mathrm{Tr}(P_{n}\Phi (\bar{\rho}(\pi ))). \end{array} \] Hence \begin{equation}\label{chi-n} \begin{array}{c} \chi _{\Phi }^{n}(\pi )=H(P_{n}\Phi (\bar{\rho}(\pi ))P_{n})-\mathrm{Tr}(P_{n}\Phi (\bar{\rho}(\pi )))\log \mathrm{Tr}(P_{n}\Phi (\bar{\rho}(\pi )))\\\\-\int\limits_{\mathfrak{S}(\mathcal{H})}H(P_{n}\Phi(\rho )P_{n})\pi (d\rho )+\!\!\!\int\limits_{\mathfrak{S}(\mathcal{H})}\!\mathrm{Tr}(P_{n}\Phi (\rho ))\log \mathrm{Tr}(P_{n}\Phi (\rho ))\pi (d\rho ). \end{array} \end{equation} The continuity and boundedness of the quantum entropy in the finite-dimensional case and the similar properties of the function $\rho\mapsto\mathrm{Tr}(P_{n}\Phi (\rho ))\log \mathrm{Tr}(P_{n}\Phi (\rho ))$ imply continuity of the functionals $\chi _{\Phi }^{n}(\pi ).$ By the monotonous convergence theorem (in what follows, the m.c.-theorem) \cite{K&F},\cite{H&P} the sequence of functionals $\chi _{\Phi }^{n}(\pi )$ is monotonously increasing and converges pointwise to $\chi _{\Phi }(\pi )$. Hence the functional $\chi_{\Phi }(\pi )$ is lower semicontinuous. To prove (\ref{formula}), note that lemma 2 implies $$ \lim_{n\rightarrow+\infty}H(P_{n}\Phi (\bar{\rho}(\pi ))P_{n})=H(\Phi (\bar{\rho}(\pi ))) $$ and, due to the m.c.-theorem, $$ \lim_{n\rightarrow+\infty}\int\limits_{\mathfrak{S}(\mathcal{H})}H(P_{n}\Phi(\rho )P_{n})\pi (d\rho )=\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi(\rho))\pi (d\rho). $$
For every $\rho$ the sequence $\{\mathrm{Tr}(P_{n}\Phi (\rho))\}$ is in $[0,1]$ and converges to $1$, therefore $ \lim_{n\rightarrow+\infty}\mathrm{Tr}(P_{n}\Phi(\rho))\log \mathrm{Tr}(P_{n}\Phi(\rho))=0; $ in particular the second term in (\ref{chi-n}) tends to $0$. Since $|x\log x|<1$ for all $x\in(0,1],$ the last term also tends to $0$ by the dominated convergence theorem, so passing to the limit $n\to\infty$ in (\ref{chi-n}) gives (\ref{formula}).$\square$ Let $\mathcal{H},\mathcal{H}^{\prime }$ be a pair of separable Hilbert spaces, which we shall call the input and the output space, respectively. A channel $\Phi $ is a linear positive trace-preserving map from $\mathfrak{T}(\mathcal{H})$ to $\mathfrak{T}(\mathcal{H}^{\prime })$ such that the dual map $\Phi ^{\ast }:\mathfrak{B}(\mathcal{H}^{\prime })\mapsto\mathfrak{B}(\mathcal{H})$ (which exists since $\Phi $ is bounded) is completely positive. Let $\mathcal{A}$ be an arbitrary subset of $\mathfrak{S}(\mathcal{H})$. We consider a constraint on the input ensemble $\{\pi _{i},\rho _{i}\}$, defined by the requirement $\bar{\rho}\in \mathcal{A}$. The channel $\Phi $ with this constraint is called the $\mathcal{A}$-\textit{constrained} channel. We define the $\chi $-\textit{capacity} of the $\mathcal{A}$-constrained channel $\Phi $ as \begin{equation} \bar{C}(\Phi ;\mathcal{A})=\sup_{\bar{\rho}\in \mathcal{A}}\chi _{\Phi }(\{\pi _{i},\rho _{i}\}), \label{ccap} \end{equation} where \begin{equation} \chi _{\Phi }(\{\pi _{i},\rho _{i}\})=\sum_{i}\pi _{i}H(\Phi (\rho _{i})\Vert \Phi (\bar{\rho})). \label{h-q} \end{equation} \textit{Throughout this paper we shall consider constraint sets $\mathcal{A}$ such that} \begin{equation} \bar{C}(\Phi ;\mathcal{A})<+\infty . \label{A} \end{equation} The subset of $\mathcal{P}$ consisting of all measures $\pi $ with the average state $\bar{\rho}(\pi)$ in a subset $\mathcal{A}\subseteq \mathfrak{S}(\mathcal{H})$ will be denoted $\mathcal{P}_{\mathcal{A}}$.
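In the finite-dimensional case the quantity (\ref{h-q}) can be computed directly, and it agrees with the entropy-difference expression of proposition 1. A sketch with a qubit depolarizing channel acting on diagonal states (the channel and the ensemble are arbitrary illustrative choices):

```python
import numpy as np

def H(p):
    # entropy of a state diagonal in the computational basis
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def D(p, q):
    # relative entropy H(p || q) for commuting (diagonal) states
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def Phi(p, lam=0.5):
    # qubit depolarizing channel restricted to diagonal states
    return (1 - lam) * p + lam * np.array([0.5, 0.5])

weights = [0.5, 0.5]
states  = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rho_bar = sum(w * s for w, s in zip(weights, states))

chi_rel = sum(w * D(Phi(s), Phi(rho_bar)) for w, s in zip(weights, states))  # (h-q)
chi_ent = H(Phi(rho_bar)) - sum(w * H(Phi(s)) for w, s in zip(weights, states))
assert abs(chi_rel - chi_ent) < 1e-12
```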
Lemma 1 and proposition 1 imply \textbf{Corollary 1.} \textit{The $\chi$-capacity of the $\mathcal{A}$-constrained channel $\Phi$ can be defined by} \[ \bar{C}(\Phi ;\mathcal{A})=\!\!\sup\limits_{\pi \in \mathcal{P}_{\mathcal{A} }}\chi _{\Phi }(\pi ). \] \textbf{Proof.} The definition (\ref{ccap}) gives a similar expression in which the supremum is taken over all measures in $\mathcal{P}_{\mathcal{A}}$ with finite support. By lemma 1 we can approximate an arbitrary measure $\pi$ in $\mathcal{P}_{\mathcal{A}}$ by a sequence $\{\pi_{n}\}$ of measures in $\mathcal{P}_{\mathcal{A}}$ with finite support. By proposition 1, $\liminf_{n\rightarrow+\infty}\chi _{\Phi }(\pi_{n})\geq\chi _{\Phi }(\pi)$. It follows that the supremum over all measures in $\mathcal{P}_{\mathcal{A}}$ coincides with the supremum over all measures in $\mathcal{P}_{\mathcal{A}}$ with finite support.$\square$ \section{Compact constraints} It is convenient to introduce the following notion. An unbounded positive operator $H$ in $\mathcal{H}$ with discrete spectrum of finite multiplicity will be called an $\mathfrak{H}$-\textit{operator}. Let $Q_n$ be the spectral projector of $H$ corresponding to the $n$ lowest eigenvalues. Following \cite{H-c-w-c} we shall denote \begin{equation}\label{limit}\mathrm{Tr}\rho H=\lim_{n\to\infty}\mathrm{Tr}\rho Q_n H, \end{equation} where the sequence on the right side is monotonously nondecreasing. It was shown in \cite{H-c-w-c} that \begin{equation} \mathcal{K}=\left\{ \rho :\mathrm{Tr}\rho H\leq h\right\} , \label{ham} \end{equation} where $H$ is an $\mathfrak{H}$-operator, is a compact subset of $\mathfrak{S}(\mathcal{H})$. \textbf{Lemma 3.} \textit{Let $\mathcal{A}$ be a compact subset of $\mathfrak{S}(\mathcal{H})$.
Then there exist an $\mathfrak{H}$-operator $H$ and a positive number $h$ such that $\mathrm{Tr}\rho H\leq h$ for all $\rho\in\mathcal{A}$.} \textbf{Proof.} By the compactness criterion from \cite{S}, for any natural $n$ there exists a finite-rank projector $P_{n}$ such that $\mathrm{Tr}\rho P_{n}\geq 1-n^{-3}$ for all $\rho $ in $\mathcal{A}$. Without loss of generality we may assume that $\bigvee_{k=1}^{+\infty}P_{k}(\mathcal{H})=\mathcal{H}$, where $\bigvee$ denotes the closed linear span of the subspaces. Let $\hat{P}_{n}$ be the projector onto the finite-dimensional subspace $\bigvee_{k=1}^{n}P_{k}(\mathcal{H})$. Then $H=\sum_{n=1}^{+\infty }n(\hat{P}_{n+1}-\hat{P}_{n})$ is an $\mathfrak{H}$-operator satisfying \[ \mathrm{Tr}\rho H=\sum_{n=1}^{+\infty }n\mathrm{Tr}\rho (\hat{P}_{n+1}-\hat{P}_{n})\leq \sum_{n=1}^{+\infty }n\mathrm{Tr}\rho (I_{\mathcal{H}}-\hat{P}_{n})\leq \sum_{n=1}^{+\infty }n^{-2}=h \] for an arbitrary state $\rho $ in the set $\mathcal{A}$. $\square $ \textbf{Proposition 2.} \textit{The set $\mathcal{P}_{\mathcal{A}}$ is a compact subset of $\mathcal{P}$ if and only if the set $\mathcal{A}$ is a compact subset of $\mathfrak{S}(\mathcal{H})$.} \textbf{Proof.} Let the set $\mathcal{P}_{\mathcal{A}}$ be compact. The set $\mathcal{A}$ is the image of the set $\mathcal{P}_{\mathcal{A}}$ under the continuous mapping $\pi \mapsto \bar{\rho}(\pi )$, hence it is compact. Conversely, let the set $\mathcal{A}$ be compact. By lemma 3 there exists an $\mathfrak{H}$-operator $H$ such that $\mathrm{Tr}\rho H\leq h$ for all $\rho $ in $\mathcal{A}$.
For an arbitrary $\pi \in \mathcal{P}_{\mathcal{A}}$ we have \begin{equation} \int\limits_{\mathfrak{S}(\mathcal{H})}(\mathrm{Tr}\rho H) \pi (d\rho )=\mathrm{Tr}\left(\;\int\limits_{\mathfrak{S}(\mathcal{H})}\rho \pi (d\rho)\;H\right) =\mathrm{Tr}\bar{\rho}(\pi)H\leq h. \label{int-eq} \end{equation} The existence of the integral on the left side and the first equality follow from the m.c.-theorem, since by (\ref{limit}) the function $\mathrm{Tr}\rho H$ is the limit of a nondecreasing sequence of continuous bounded functions $\mathrm{Tr}\rho Q_{n}H$. Let $\mathcal{K}_{\varepsilon }=\{\rho :\mathrm{Tr}\rho H\leq h\varepsilon ^{-1}\}$. The set $\mathcal{K}_{\varepsilon }$ is a compact subset of $\mathfrak{S}(\mathcal{H})$ for any $\varepsilon>0 $. By (\ref{int-eq}), for any measure $\pi $ in $\mathcal{P}_{\mathcal{A}}$ we have \begin{equation} \begin{array}{c} \pi (\mathfrak{S}(\mathcal{H})\backslash \mathcal{K}_{\varepsilon })=\int\limits_{\mathfrak{S}(\mathcal{H})\backslash \mathcal{K}_{\varepsilon }}\pi (d\rho )\leq \varepsilon h^{-1}\int\limits_{\mathfrak{S}(\mathcal{H} )\backslash \mathcal{K}_{\varepsilon }}(\mathrm{Tr}\rho H) \pi (d\rho )\leq \varepsilon. \end{array} \label{int-ineq} \end{equation} By Prokhorov's theorem \cite{P} the (obviously closed) set $\mathcal{P}_{\mathcal{A}}$ is compact.$ \square $ We will use the following notions, introduced in \cite{Sh}. The sequence of ensembles $\{\pi _{i}^{k},\rho _{i}^{k}\}$ with the averages $\,\bar{\rho}^{k}\in \mathcal{A}$ is called an \textit{approximating sequence} if \[ \lim_{k\rightarrow +\infty }\chi _{\Phi }(\{\pi _{i}^{k},\rho _{i}^{k}\})= \bar{C}(\Phi ;\mathcal{A}). \] The state $\bar{\rho}\in \mathcal{A}$ is called an \textit{optimal average state} if it is a partial limit of the sequence of average states of some approximating sequence of ensembles. Compactness of the set $\mathcal{A}$ implies that the set of optimal average states is not empty.
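The construction in the proof of lemma 3 is easy to illustrate numerically: for a diagonal state whose tails satisfy $\mathrm{Tr}\rho(I-\hat P_n)\leq n^{-3}$, the operator $H=\sum_n n(\hat P_{n+1}-\hat P_n)$ obeys $\mathrm{Tr}\rho H\leq\sum_n n^{-2}$. A truncated sketch (the eigenvalue profile $p_k\propto k^{-4}$ is an arbitrary choice satisfying the tail bound):

```python
import numpy as np

N = 5000                               # truncation level (illustration only)
k = np.arange(1, N + 1)
p = k.astype(float) ** -4
p /= p.sum()                           # diagonal state rho with fast-decaying tails

# Tr rho (I - Phat_n) = sum_{m > n} p_m obeys the n^{-3} bound of the proof
tails = 1.0 - np.cumsum(p)
assert np.all(tails <= k.astype(float) ** -3 + 1e-12)

# H = sum_n n (Phat_{n+1} - Phat_n) acts as multiplication by k - 1
tr_rho_H = float(np.sum(p * (k - 1)))
assert tr_rho_H <= np.pi ** 2 / 6      # h = sum_n n^{-2} = pi^2/6
```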
\textbf{Theorem.} \textit{If the restriction of the output entropy $H(\Phi(\rho))$ to the set $\mathcal{A}$ is continuous at least at one optimal average state $\bar{\rho}_{0}\in\mathcal{A}$, then there exists an optimal generalized ensemble $\pi^{*}$ in $\mathcal{P}_{\mathcal{A}}$ such that $\mathrm{supp}\pi^{*}\subseteq\mathrm{Extr}\mathfrak{S}(\mathcal{H})$ and $$ \bar{C}(\Phi ;\mathcal{A})=\chi_{\Phi}(\pi^{*})= \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi(\rho)\|\Phi(\bar{\rho}(\pi^{*})))\pi^{*}(d\rho). $$} \textbf{Proof.} We first show that the function \[ \pi \mapsto \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho ))\pi (d\rho ) \] is well defined and lower semicontinuous on the set $\mathcal{P}_{\mathcal{A}}$. By lemma 2 the function $H(\Phi (\rho ))$ is the pointwise limit of the monotonously increasing sequence of functions \[ f_{n}(\rho )=\mathrm{Tr}\left((P_{n}\Phi (\rho )P_{n})\left( I\log \mathrm{Tr}(P_{n}\Phi (\rho )P_{n})-\log (P_{n}\Phi (\rho )P_{n})\right)\right) , \] which are continuous and bounded on $\mathfrak{S}(\mathcal{H})$. Hence the function $H(\Phi (\rho ))$ is measurable, and the m.c.-theorem implies \[ \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho ))\pi (d\rho )=\lim_{n\rightarrow \infty }\int\limits_{\mathfrak{S}(\mathcal{H})}f_{n}(\rho )\pi (d\rho ). \] The sequence of continuous functionals \[ \pi \mapsto \int\limits_{\mathfrak{S}(\mathcal{H})}f_{n}(\rho )\pi (d\rho ) \] is nondecreasing. Hence its pointwise limit is lower semicontinuous. By assumption, the restriction of the function $H(\Phi (\rho ))$ to the set $\mathcal{A}$ is continuous at some optimal average state $\bar{\rho}_{0}$. The continuity of the mapping $\pi \mapsto \bar{\rho}(\pi )$ implies that the restriction of the functional $\pi \mapsto H(\Phi (\bar{\rho}(\pi )))$ to the set $\mathcal{P}_{\mathcal{A}}$ is continuous at any point $\pi _{0}$ such that $\bar{\rho}(\pi _{0})=\bar{\rho}_{0}$.
Hence $H(\Phi(\bar{\rho}(\pi)))<+\infty$ for any point $\pi$ in the intersection of $\mathcal{P}_{\mathcal{A}}$ with some neighbourhood of $\pi_{0}$. For every such point $\pi$ the relation (\ref{formula}) holds. Therefore the restriction of the functional $\chi _{\Phi }(\pi)$ to the set $\mathcal{P}_{\mathcal{A}}$ is upper semicontinuous, and by proposition 1 it is continuous at any point $\pi _{0}$ in $\mathcal{P}_{\mathcal{A}}$ such that $\bar{\rho}(\pi _{0})=\bar{\rho}_{0}$. Let $\{\pi _{i}^{n},\rho _{i}^{n}\}$ be an approximating sequence of ensembles with the corresponding sequence of average states $\bar{\rho}^{n}$ converging to the state $\bar{\rho}_{0}$. Decomposing each state of the ensemble $\{\pi _{i}^{n},\rho _{i}^{n}\}$ into a countable convex combination of pure states, we obtain a sequence $\{\hat{\pi}_{j}^{n},\hat{\rho}_{j}^{n}\}$ of generalized ensembles consisting of a countable number of pure states with the same sequence of average states $\bar{\rho}^{n}$. Let $\hat{\pi}^{n}$ be the measure ascribing the value $\hat{\pi}_{j}^{n}$ to the one-point set $\{ \hat{\rho}_{j}^{n}\}$ for each $j$. It follows that \begin{equation}\label{chi-exp} \chi _{\Phi }(\hat{\pi}^{n})=\sum\limits_{j}\hat{\pi}_{j}^{n}H(\Phi (\hat{\rho}_{j}^{n})\Vert \Phi (\bar{\rho}^{n}))\geq \sum\limits_{i}\pi _{i}^{n}H(\Phi (\rho _{i}^{n})\Vert \Phi (\bar{\rho}^{n}))=\chi _{\Phi }(\{\pi _{i}^{n},\rho _{i}^{n}\}), \end{equation} where the inequality follows from the convexity of the relative entropy. By construction $\mathrm{supp}\hat{\pi}^{n}\subseteq \mathrm{Extr}\mathfrak{S}(\mathcal{H})$ for each $n$. By proposition 2 there exists a subsequence $\hat{\pi}^{n_{k}}$ converging to some measure $\pi ^{\ast }$ in $\mathcal{P}_{\mathcal{A}}$.
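The inequality (\ref{chi-exp}) -- refining ensemble states into pure components cannot decrease $\chi_\Phi$ -- can be checked in a commuting toy model (diagonal qubit states under the identity channel; the values are illustrative only):

```python
import numpy as np

def D(p, q):
    # relative entropy for diagonal states
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def chi(weights, states, avg):
    # chi of an ensemble of diagonal states under the identity channel
    return sum(w * D(s, avg) for w, s in zip(weights, states))

# original ensemble: one mixed and one pure diagonal state
w = [0.5, 0.5]
S = [np.array([0.7, 0.3]), np.array([1.0, 0.0])]
avg = 0.5 * S[0] + 0.5 * S[1]

# refined ensemble: the mixed state split into its pure components (same average)
w_ref = [0.5 * 0.7, 0.5 * 0.3, 0.5]
S_ref = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])]

chi_orig    = chi(w, S, avg)
chi_refined = chi(w_ref, S_ref, avg)
assert chi_refined >= chi_orig - 1e-12
```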
Since the set $\mathrm{Extr}\mathfrak{S}(\mathcal{H})$ of all pure states is a closed subset of $\mathfrak{S}(\mathcal{H})$\footnote{The set $\mathrm{Extr}\mathfrak{S}(\mathcal{H})$ is described by the inequality $H(\rho )\leq 0$, and due to the lower semicontinuity of the quantum entropy it is closed.}, we have $\mathrm{supp}\pi ^{\ast }\subseteq \mathrm{Extr}\mathfrak{S}(\mathcal{H})$ due to theorem 6.1 in \cite{Par}. It is clear that $\bar{\rho}(\pi ^{\ast })=\bar{\rho}_{0}$ and hence, as shown above, the restriction of the functional $\chi _{\Phi }(\pi)$ to the set $\mathcal{P}_{\mathcal{A}}$ is continuous at the point $\pi ^{\ast }$. This, together with the approximating property of the sequence $\{\pi _{i}^{n},\rho _{i}^{n}\}$ and (\ref{chi-exp}), implies \[ \bar{C}(\Phi ;\mathcal{A})=\lim_{k\rightarrow \infty }\chi _{\Phi }(\{\pi _{i}^{n_{k}},\rho _{i}^{n_{k}}\})\leq \lim_{k\rightarrow \infty }\chi _{\Phi }(\hat{\pi}^{n_{k}})=\chi _{\Phi }(\pi ^{\ast }). \] Since the converse inequality follows from corollary 1, we obtain $\bar{C}(\Phi ;\mathcal{A})=\chi _{\Phi }(\pi ^{\ast })$, which means that the measure $\pi ^{\ast }$ is an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi $.$\square $ \textbf{Corollary 2.} \textit{For an arbitrary state $\rho _{0}$ with $H(\Phi (\rho _{0}))<+\infty $ there exists a generalized ensemble\footnote{In what follows we may consider generalized ensembles as measures supported by the set of pure states.} $\pi _{0}$ such that $\bar{\rho}(\pi _{0})=\rho _{0}$ and \[ \chi _{\Phi }(\rho _{0})\equiv \sup_{\sum_{i}\pi _{i}\rho _{i}=\rho _{0}}\chi _{\Phi }(\{\pi _{i},\rho _{i}\})=\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\rho _{0}))\pi _{0}(d\rho ). \] } \textbf{Proof.} It is sufficient to note that the condition of the theorem holds trivially for $\mathcal{A}=\{\rho _{0}\}$.
$\square$ In the finite-dimensional case we obviously have \begin{equation}\label{cap-chi-rel} \bar{C}(\Phi ;\mathcal{A})=\chi _{\Phi }(\bar{\rho}), \end{equation} where $\bar{\rho}$ is the average state of any optimal ensemble. The generalization of this relation to the infinite-dimensional case is closely connected with the question of the existence of an optimal generalized ensemble. \textbf{Corollary 3.} \textit{If an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi$ exists, then the equality (\ref{cap-chi-rel}) holds for some optimal average state $\bar{\rho}$ for the $\mathcal{A}$-constrained channel $\Phi$.} \textit{Conversely, if the equality (\ref{cap-chi-rel}) holds for some optimal average state $\bar{\rho}$ for the $\mathcal{A}$-constrained channel $\Phi$ with $H(\Phi (\bar{\rho}))<+\infty$, then there exists an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi$.} \textbf{Proof.} The first assertion is obvious, while the second one follows from corollary 2.$\square$ \textbf{Remark.} The continuity condition in the theorem is essential, as is shown in Appendix B. It is possible to show that this condition holds automatically if the set $\mathcal{A}$ is convex with a finite number of extreme points having finite output entropy. We conjecture that this condition holds for an arbitrary convex compact set $\mathcal{A}$, due to the special properties of optimal average states in this case, considered in \cite{Sh}. \textbf{Proposition 3.} \textit{Let $H^{\prime }$ be an $\mathfrak{H}$-operator on the space $\mathcal{H}^{\prime }$ such that } \begin{equation} \mathit{\mathrm{Tr}\exp (-\beta H^{\prime })<+\infty \quad \ \mathrm{for\ all}\ \quad \beta >0} \label{beta} \end{equation} \textit{and $\mathrm{Tr}\,\Phi (\rho)H^{\prime }\leq h^{\prime }$ for all $\rho\in\mathcal{A}$.
Then there exists an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi $.} \textbf{Proof.} We will show that under the condition of the proposition the restriction of the output entropy $H(\Phi (\rho ))$ to the set $\mathcal{A}$ is continuous, which implies the validity of the condition of the theorem. Let $\rho _{\beta }^{\prime }=(\mathrm{Tr}\exp (-\beta H^{\prime }))^{-1}\exp (-\beta H^{\prime })$ be a state in $\mathfrak{S}(\mathcal{H}^{\prime })$. For an arbitrary $\rho $ in $\mathcal{A}$ we have \begin{equation} H(\Phi (\rho )\Vert \rho _{\beta }^{\prime })=-H(\Phi (\rho ))+\beta \mathrm{Tr}\Phi (\rho )H^{\prime }+\log \mathrm{Tr}\exp (-\beta H^{\prime }). \label{r-e-exp} \end{equation} Let $\{\rho _{n}\}$ be an arbitrary sequence of states in $\mathcal{A}$ converging to the state $\rho $. By using (\ref{r-e-exp}) and the lower semicontinuity of the relative entropy we obtain \[ \begin{array}{c} \limsup\limits_{n\rightarrow \infty }H(\Phi (\rho _{n}))\leq H(\Phi (\rho ))+H(\Phi (\rho )\Vert \rho _{\beta }^{\prime })-\liminf\limits_{n\rightarrow \infty }H(\Phi (\rho _{n})\Vert \rho _{\beta }^{\prime }) \\ +\limsup\limits_{n\rightarrow \infty }\beta \mathrm{Tr}\Phi (\rho _{n})H^{\prime }-\beta \mathrm{Tr}\Phi (\rho )H^{\prime }\leq H(\Phi (\rho ))+\beta h^{\prime }. \end{array} \] Letting $\beta $ tend to zero in the above inequality establishes the upper semicontinuity of the restriction of the function $H(\Phi (\rho ))$ to the set $\mathcal{A}$. The lower semicontinuity of this function follows from the lower semicontinuity of the entropy \cite{W}. Hence the restriction of the function $H(\Phi (\rho ))$ to the set $\mathcal{A}$ is continuous. $\square $ The condition of proposition 3 is fulfilled for Gaussian channels with the power constraint of the form (\ref{ham}), where $H=R^{T}\epsilon R$ is the many-mode oscillator Hamiltonian with nondegenerate energy matrix $\epsilon ,$ and $R$ are the canonical variables of the system.
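The identity (\ref{r-e-exp}) used in the proof of proposition 3 is the standard relation between the relative entropy with respect to a Gibbs state, the entropy, and the mean energy; for states commuting with $H^{\prime}$ it reduces to elementary arithmetic (the spectrum of $H^{\prime}$ below is an arbitrary choice):

```python
import numpy as np

beta = 0.7
hp   = np.array([0.0, 1.0, 2.0])         # spectrum of H' (illustrative)
Z    = float(np.sum(np.exp(-beta * hp)))
gibbs = np.exp(-beta * hp) / Z           # the Gibbs state rho'_beta, diagonal

sigma = np.array([0.5, 0.3, 0.2])        # an output state commuting with H'
H_sigma = float(-np.sum(sigma * np.log(sigma)))
D = float(np.sum(sigma * (np.log(sigma) - np.log(gibbs))))
mean_energy = float(np.sum(sigma * hp))

# identity (r-e-exp): H(sigma || rho'_beta) = -H(sigma) + beta Tr sigma H' + log Z
assert abs(D - (-H_sigma + beta * mean_energy + np.log(Z))) < 1e-12
```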
We give a brief sketch of the argument, which can be made rigorous by taking care of the unboundedness of the canonical variables. Indeed, let \[ R^{\prime }=KR+K_{E}R_{E} \] be the equation of the channel in the Heisenberg picture, where $R_{E}$ are the canonical variables of the environment, which is in the Gaussian state with zero mean and the correlation matrix $\alpha _{E}$ \cite{H-W}. Taking $H^{\prime }=c[R^{T}R-I\,\mathrm{Sp}\,\alpha _{E}K_{E}^{T}K_{E}]$, we have $\Phi ^{\ast }(H^{\prime })=cR^{T}K^{T}KR$, and we can always choose a positive $c$ such that $\Phi ^{\ast }(H^{\prime })\leq H$. Moreover, $H^{\prime }$ satisfies the condition (\ref{beta}). Thus the conditions of proposition 3 can be fulfilled in this case. \textbf{Conjecture.} \textit{For an arbitrary Gaussian channel with the power constraint an optimal generalized ensemble is given by a Gaussian measure supported by the set of pure Gaussian states with arbitrary mean and a fixed correlation matrix.} This conjecture was stated in \cite{H-W} for the attenuation/amplification channel with classical noise. For the case of the pure attenuation channel, characterized by zero minimal output entropy, the validity of this conjecture was established in \cite{Gio}. \section{Convex constraints} In the case of a convex constraint set there are further special properties, such as uniqueness of the output of the optimal average state; see \cite{Sh}. The following lemma is a generalization of Donald's identity \cite{Don}. \textbf{Lemma 4.} \textit{For arbitrary measure $\pi$ in $\mathcal{P}$ and arbitrary state $\sigma$ in $\mathfrak{S}(\mathcal{H})$ the following identity holds \begin{equation} \int\limits_{\mathfrak{S}(\mathcal{H})}H(\rho \Vert \sigma )\pi(d\rho )=\int\limits_{\mathfrak{S}(\mathcal{H})}H(\rho \Vert \bar{\rho}(\pi ))\pi (d\rho )+H(\bar{\rho}(\pi )\Vert \sigma ).
\label{relens} \end{equation} } \textbf{Proof.} We first notice that in the finite-dimensional case Donald's identity \[ \sum_{i}\pi _{i}H(\rho_{i}\Vert \sigma )=\sum_{i}\pi _{i}H(\rho_{i}\Vert \bar{\rho} (\pi ))+H(\bar{\rho}(\pi )\Vert \sigma ) \] holds for not necessarily normalized positive operators with the generalized definition of the relative entropy (\ref{relent}). This can obviously be extended to generalized ensembles in a finite-dimensional Hilbert space, giving (\ref{relens}) for this case. Thus this relation holds for the operators $P_{n}\rho P_{n},P_{n}\sigma P_{n}$, where $P_{n}$ is an arbitrary sequence of finite projectors increasing to $I_{\mathcal{H}}$. Passing to the limit $n\rightarrow \infty $ and referring to the m.c.-theorem, we obtain (\ref{relens}) in the infinite-dimensional case. $\square $ The following proposition is a generalization of the ``maximal distance property'', cf. proposition 1 in \cite{H-Sh}. \textbf{Proposition 4.} \textit{Let $\mathcal{A}$ be a convex subset of $\mathfrak{S}(\mathcal{H})$. A measure $\pi \in \mathcal{P}_{\mathcal{A}}$ is an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $ \Phi $ if and only if \begin{equation} \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho} (\pi )))\mu (d\rho )\leq \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))\pi (d\rho )=\chi _{\Phi }(\pi ) \label{opt-ch} \end{equation} for arbitrary measure $\mu \in \mathcal{P}_{\mathcal{A}}$.} \textbf{Proof.} Let inequality (\ref{opt-ch}) hold for an arbitrary measure $\mu \in \mathcal{P}_{\mathcal{A}}$.
By lemma 4 we have \[ \begin{array}{c} \chi _{\Phi }(\mu )\leq \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\mu )))\mu (d\rho )+H(\Phi (\bar{\rho}(\mu ))\Vert \Phi (\bar{\rho}(\pi ))) \\ =\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))\mu (d\rho )\leq \chi _{\Phi }(\pi ), \end{array} \] which implies optimality of the measure $\pi $. Conversely, let $\pi $ be an optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi $ and let $\mu $ be an arbitrary measure in $\mathcal{P}_{\mathcal{A}}$. By convexity of the set $\mathcal{A}$ the measure $\pi _{\eta }=\eta \mu +(1-\eta )\pi $ is also in $\mathcal{P}_{\mathcal{A}}$ for arbitrary $\eta \in (0,1)$. Using lemma 4 we have \[ \begin{array}{c} \chi _{\Phi }(\pi _{\eta })=\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi _{\eta })))\pi _{\eta }(d\rho ) \\ =\eta \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi _{\eta })))\mu (d\rho )+(1-\eta )\chi _{\Phi }(\pi )+(1-\eta )H(\Phi (\bar{\rho}(\pi ))\Vert \Phi (\bar{\rho}(\pi _{\eta }))). \end{array} \] The optimality of $\pi $ and the nonnegativity of the relative entropy imply \begin{equation} \begin{array}{c} \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi _{\eta })))\mu (d\rho )-\chi _{\Phi }(\pi )\leq \eta ^{-1}(\chi _{\Phi }(\pi _{\eta })-\chi _{\Phi }(\pi ))\leq 0.
\end{array} \label{chi-ineq-one} \end{equation} By lemma 4 and the lower semicontinuity of the relative entropy \[ \begin{array}{c} \liminf\limits_{\eta \rightarrow 0}\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi _{\eta })))\mu (d\rho ) \\ =\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\mu )))\mu (d\rho )+\liminf\limits_{\eta \rightarrow 0}H(\Phi (\bar{\rho}(\mu ))\Vert \Phi (\bar{\rho}(\pi _{\eta }))) \\ \geq \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\mu )))\mu (d\rho )+H(\Phi (\bar{\rho}(\mu ))\Vert \Phi (\bar{\rho}(\pi ))) \\ =\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))\mu (d\rho ). \end{array} \] Then (\ref{chi-ineq-one}) implies \[ \begin{array}{c} \int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi )))\mu (d\rho )-\chi _{\Phi }(\pi ) \\ \leq \liminf\limits_{\eta \rightarrow 0}\int\limits_{\mathfrak{S}(\mathcal{H})}H(\Phi (\rho )\Vert \Phi (\bar{\rho}(\pi _{\eta })))\mu (d\rho )-\chi _{\Phi }(\pi ) \\ \leq \liminf\limits_{\eta \rightarrow 0}\eta ^{-1}(\chi _{\Phi }(\pi _{\eta })-\chi _{\Phi }(\pi ))\leq 0.\;\square \end{array} \] \section{Appendices} \textbf{A. Proof of lemma 1.} We first notice that $\mathrm{supp}(\pi )\subseteq U$, where $U$ is a closed convex subset of $\mathfrak{S}(\mathcal{H})$, implies \begin{equation} \bar{\rho}(\pi )\in U. \label{l1} \end{equation} This is obvious for an arbitrary measure $\pi $ with finite support. By theorem 6.3 in \cite{Par} the set of such measures is dense in $\mathcal{P}$. The continuity of the mapping $\pi \mapsto \bar{\rho}(\pi )$ completes the proof of (\ref{l1}). Now let $\pi $ be an arbitrary measure in $\mathcal{P}$.
Since $\mathfrak{S}(\mathcal{H})$ is separable we can, for each $n\in \mathbb{N}$, find a sequence $\left\{ A_{i}^{n}\right\} $ of Borel sets of diameters less than $1/n$ such that $\mathfrak{S}(\mathcal{H})=\bigcup_{i}A_{i}^{n}$, $A_{i}^{n}\cap A_{j}^{n}=\emptyset $ for $j\neq i$. Find a number $m=m(n)$ such that $\sum_{i=m+1}^{+\infty }\pi (A_{i}^{n})<1/n$. Consider the finite collection of Borel sets $\{\hat{A}_{i}^{n}\}_{i=1}^{m+1}$, where $\hat{A}_{i}^{n}=A_{i}^{n}$ for all $i=\overline{1,m}$ and $\hat{A}_{m+1}^{n}=\bigcup_{i=m+1}^{+\infty }A_{i}^{n}$. We have \begin{equation} \bar{\rho}(\pi )=\sum_{i=1}^{m+1}\int\limits_{\hat{A}_{i}^{n}}\rho \pi (d\rho )=\sum_{i=1}^{m+1}\pi _{i}^{n}\rho _{i}^{n}, \label{int-decomp} \end{equation} where $\pi _{i}^{n}=\mathrm{Tr}\int\limits_{\hat{A}_{i}^{n}}\rho \pi (d\rho )=\pi (\hat{A}_{i}^{n})$ and $\rho _{i}^{n}=(\pi (\hat{A}_{i}^{n}))^{-1}\int\limits_{\hat{A}_{i}^{n}}\rho \pi (d\rho )$ (without loss of generality we assume $\pi _{i}^{n}>0$). Let $\pi ^{n}$ be the probability measure on $\mathfrak{S}(\mathcal{H})$ ascribing the value $\pi _{i}^{n}$ to the point $\rho _{i}^{n}$. Equality (\ref{int-decomp}) implies $\bar{\rho}(\pi ^{n})=\bar{\rho}(\pi )$. Since $\pi ^{n}$ has finite support for each $n$, to prove the assertion of the lemma it is sufficient to show that $\pi ^{n}$ tends to $\pi $ in the weak topology as $n$ tends to $+\infty $. By theorem 6.1 in \cite{Par}, to establish the above convergence it is sufficient to show that \[ \lim_{n\rightarrow +\infty }\int\limits_{\mathfrak{S}(\mathcal{H})}f(\rho )\pi ^{n}(d\rho )=\int\limits_{\mathfrak{S}(\mathcal{H})}f(\rho )\pi (d\rho ) \] for an arbitrary bounded uniformly continuous function $f(\rho )$ on $\mathfrak{S}(\mathcal{H})$. Let $M_{f}=\sup_{\rho \in \mathfrak{S}(\mathcal{H})}|f(\rho )|$.
For arbitrary $\varepsilon >0$ let $n_{\varepsilon }$ be such that $\varepsilon n_{\varepsilon }>2M_{f}$ and \[ \sup_{\rho \in U(n_{\varepsilon })}f(\rho )-\inf_{\rho \in U(n_{\varepsilon })}f(\rho )<\varepsilon \] for an arbitrary closed ball $U(n_{\varepsilon })$ of diameter $1/n_{\varepsilon }$. Let $n\geq n_{\varepsilon }$. By construction the set $\hat{A}_{i}^{n}$ is contained in some ball $U_{i}(n)$ for each $i=\overline{1,m}$. By (\ref{l1}) the state $\rho _{i}^{n}$ lies in the same ball $U_{i}(n)$. Hence we have \[ \begin{array}{c} |\int\limits_{\mathfrak{S}(\mathcal{H})}f(\rho )\pi ^{n}(d\rho )-\int\limits_{\mathfrak{S}(\mathcal{H})}f(\rho )\pi (d\rho )| \\ \leq \sum\limits_{i=1}^{m+1}\int\limits_{\hat{A}_{i}^{n}}|f(\rho )-f(\rho _{i}^{n})|\pi (d\rho ) \\ \leq \varepsilon \sum_{i=1}^{m}\pi (\hat{A}_{i}^{n})+2M_{f}\pi (\hat{A}_{m+1}^{n})<2\varepsilon \end{array} \] for all $n\geq n_{\varepsilon }$. $\square $ \textbf{B. Example of a channel without optimal generalized ensembles.} Consider the Abelian von Neumann algebra $\mathbf{\textit{l}}_{\infty}$ and its predual $\mathbf{\textit{l}}_{1}$. Let $\Phi$ be the noiseless channel on $\mathbf{\textit{l}}_{1}$. Consider the sequence of states $$\rho_{n}= \{1-q_{n},\underbrace{\textstyle\frac{q_{n}}{n},\frac{q_{n}}{n},\dots,\frac{q_{n}}{n}}_{n}, 0, 0,\dots\}, $$ where $q_{n}$ is a sequence of numbers in $[0,1]$ which will be defined below. Note that in this case $\chi_{\Phi}(\rho_{n})=H(\rho_{n})=h_{2}(q_{n})+q_{n}\log n$, where $h_{2}(x)=-x\log x-(1-x)\log(1-x)$. We will show below that there exists a sequence $q_{n}$ such that $\lim_{n\rightarrow+\infty}q_{n}=0$ while the corresponding sequence $\chi_{\Phi}(\rho_{n})=H(\rho_{n})$ monotonically increases to $1$. Let $q_{n}$ be such a sequence and let $\mathcal{A}$ be the closure of the sequence $\rho_{n}$, which obviously consists of the states $\rho_{n}$ and the pure state $\rho_{*}=\lim_{n\rightarrow+\infty}\rho_{n}=\{1, 0, 0,\dots\}$.
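Such a sequence $q_n$ is constructed at the end of this appendix by inverting the function $f(x)=x(1-\ln x)$. The following numerical sketch (bisection inversion of $f$ and base-2 logarithms are our own illustrative choices) confirms the claimed behavior of $q_n=g(n)$ and of $\chi_\Phi(\rho_n)=H(\rho_n)$:

```python
import math

def h2(x):
    """Binary entropy (bits)."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def f(x):
    """f(x) = x(1 - ln x): strictly increasing from [0,1] onto [0,1]."""
    return 0.0 if x == 0.0 else x * (1.0 - math.log(x))

def f_inv(y):
    """Invert f on [0,1] by bisection (f is strictly increasing there)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ns = range(1, 2001)
q = [n * f_inv(math.log(2) / n) for n in ns]               # q_n = g(n)
H = [h2(qn) + qn * math.log2(n) for n, qn in zip(ns, q)]   # H(rho_n) in bits

# q_n decreases to 0 while H(rho_n) increases toward its supremum 1
print(q[0], q[-1], H[0], H[-1])
```

The monotone increase of the printed entropies toward $1$, with $q_n\to 0$, is exactly the behavior exploited in the example.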
By definition and the above monotonicity $\bar{C}(\Phi ;\mathcal{A})=\lim_{n\rightarrow+\infty}\chi_{\Phi}(\rho_{n})=1$, while $\rho_{*}$ is the only optimal average state for the $\mathcal{A}$-constrained channel $\Phi$ and $\chi_{\Phi}(\rho_{*})=H(\rho_{*})=0$. So we have $\bar{C}(\Phi ;\mathcal{A})>\chi_{\Phi}(\rho_{*})$, and corollary 3 implies that there is no optimal generalized ensemble for the $\mathcal{A}$-constrained channel $\Phi$. Let us construct the sequence $q_{n}$ with the above properties. Consider the strictly increasing function $f(x)=x(1-\ln x)$ on $[0,1]$. It is easy to see that $f'(x)=-\ln x$ and $f([0,1])=[0,1]$. Let $f^{-1}$ be the inverse function and $g(x)=xf^{-1}(\ln 2/x)$ for all $x\geq 1$. Note that the function $g(x)$ is implicitly defined by the equation \begin{equation}\label{a-e} g(1-\ln(g/x))=\ln 2. \end{equation} Using this it is easy to see that the function $g(x)$ satisfies the following differential equation \begin{equation}\label{d-e} \ln(g/x)g'=g/x. \end{equation} Since $g(x)/x=f^{-1}(\ln 2/x)$ we have $g(x)/x\in[0,1]$. This with (\ref{a-e}) and (\ref{d-e}) implies $g(x)\in[0,1]$, $\lim_{x\rightarrow+\infty}g(x)=0$ and $g'(x)<0$ respectively. Consider the function $H(x)=h_{2}(g(x))+g(x)\log x$. By (\ref{a-e}) and (\ref{d-e}) with the above observations we have $$ \lim_{x\rightarrow+\infty}H(x)=(\ln 2)^{-1}\lim_{x\rightarrow+\infty}g(x)\ln x=1 $$ and $$ \begin{array}{c} H'(x)=(\ln 2)^{-1}\left(g'(x)\ln(1-g(x))-g'(x)\ln g(x)+g'(x)\ln x+g(x)/x\right)\\\\=(\ln 2)^{-1}g'(x)\ln (1-g(x))>0,\quad \forall x>1. \end{array} $$ It follows that $H(x)$ is an increasing function on $[1,+\infty)$, tending to its upper bound $1$ at infinity. Setting $q_{n}=g(n)$ we obtain a sequence with the desired properties. {\bf Acknowledgements.} The first author acknowledges support from the QIS Program, the Newton Institute, Cambridge, where this paper was completed. The work was partially supported by INTAS grant 00-738. \end{document}
\begin{document} \title{Energy efficiency of consecutive fragmentation processes} \author{Joaqu\'{\i}n Fontbona, Nathalie Krell, Servet Mart\'{\i}nez} \maketitle \begin{abstract} We present a first study on the energy required to reduce a unit mass fragment by consecutively using several devices, as happens in the mining industry. Two devices are considered, which we represent as different stochastic fragmentation processes. Following the self-similar energy model introduced by Bertoin and Mart\'\i nez \cite{bermar}, we compute the average energy required to attain a size $\eta_0$ with this two-device procedure. We then asymptotically compare, as $\eta_0$ goes to $0$ or $1$, its energy requirement with that of individual fragmentation processes. In particular, we show that for a certain range of parameters of the fragmentation processes and of their energy cost-functions, the consecutive use of two devices can be asymptotically more efficient than using each of them separately, or conversely. \end{abstract} Keywords: fragmentation process, fragmentation energy, subordinators, Laplace exponents. Mathematics Subject Classification (2000): Primary 60J85, Secondary 60J80. \section{Introduction} The present work is motivated by the mining industry, where mechanical devices are used to break rocks in order to liberate the metal contained in them. This fragmentation procedure is carried out in a series of steps (the first of them being blasting, followed by crushers, grinders or mills) until fragments attain a sufficiently small size for the mining purposes. One of the problems that faces the mining industry is to minimize the total amount of energy consumed in this process. To be more precise, at each intermediate step, material is broken by a repetitive mechanism until particles can go across a classifying grid leading to the next step. The output sizes are known to be not optimal in terms of the global energy cost.
Moreover, since crushers or mills are large and hardly replaceable machines, those output sizes are in practice one of the few parameters on which a decision can be made. In an idealized setting, the problem might be posed as follows: suppose that a unit-size fragment is to be reduced into fragments of sizes smaller than a fixed threshold $\eta_0\in (0,1]$, by passing consecutively through two different fragmentation mechanisms (for instance the first one could be constituted by the crushers and the second one by the mills). In this ``two-step'' fragmentation procedure, each mass fragment evolves in the first fragmentation mechanism until it first becomes smaller than $\eta\in (\eta_0,1]$, at which moment it immediately enters the second mechanism. Then, the fragment continues to evolve until the first instant it becomes smaller than $\eta_0$, when it finally exits the system. The central question is: \smallskip (*) \ what is the optimal choice for the intermediate threshold $\eta$? \smallskip To formulate this problem we shall model each fragmentation mechanism by a continuous-time random fragmentation process, in which particles break independently of each other (branching property) and in a self-similar way. (For a recent account of and developments in the mathematical theory of fragmentation processes, we refer to Bertoin \cite{ber}.) The self-similarity hypothesis agrees with observations made by the mining industry; see e.g. \cite{Charles}. In particular, it is reasonable to assume that the energy required to break a block of size $s$ into a set of smaller blocks of sizes $(s_1,s_2,...)$ is of the form $s^\beta \varphi(s_1/s,s_2/s,\ldots)$, where $\varphi$ is a cost function and $\beta >0$ a fixed parameter. For example, in the so-called potential case, one has $\varphi(s_1,s_2,\ldots)=\sum_{n=1}^{\infty} s_n^\beta-1$, which corresponds to the law of Charles, Walker and Bond \cite{Charles}.
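To see why this case is called ``potential'', note that each split of a block of size $s$ into pieces $(s_1,s_2,\dots)$ then costs $\sum_n s_n^\beta - s^\beta$, so the total energy telescopes over the fragmentation tree and depends only on the terminal fragment sizes. A toy simulation (binary uniform splits, chosen purely for illustration; this is not the general model studied below) checks the telescoping property:

```python
import random

def fragment(s, eta, beta, rng):
    """Split a block of size s into two uniform pieces, recursively, until
    every piece is below eta; return (terminal sizes, total energy) for the
    potential cost s^beta * (sum (s_i/s)^beta - 1)."""
    if s < eta:
        return [s], 0.0
    u = rng.random()
    parts = [s * u, s * (1.0 - u)]
    energy = sum(p ** beta for p in parts) - s ** beta   # potential-case cost
    sizes = []
    for p in parts:
        sz, en = fragment(p, eta, beta, rng)
        sizes += sz
        energy += en
    return sizes, energy

rng = random.Random(1)
beta = 0.5
sizes, energy = fragment(1.0, 0.05, beta, rng)
# total energy equals sum_i s_i^beta - 1 over the terminal fragments only
print(abs(energy - (sum(s ** beta for s in sizes) - 1.0)))
```

The printed discrepancy is at floating-point level: with the potential cost function, the intermediate splitting history is energetically irrelevant.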
Within that mathematical framework, the asymptotic behavior of the energy required by a single fragmentation process in order that all fragments attain sizes smaller than $\eta$ was studied in \cite{bermar}. It was shown that the mean energy behaves as $1/ \eta^{\alpha-\beta}$ when $\eta\to0$, where $\alpha$ denotes the Malthusian exponent of the fragmentation process and where $\alpha>\beta$ in physically reasonable cases. Therefore, the performances of two individual fragmentation processes are asymptotically comparable by means of the quantities $\alpha-\beta$ and $\widehat{\alpha}-\widehat{\beta}$, where $\widehat{\alpha}>\widehat{\beta}$ are the parameters associated with a second fragmentation process. We shall formulate problem (*) in mathematical terms, adopting the same mean energy point of view as in \cite{bermar}. First, we will explicitly compute the objective function, which we express in terms of the L\'evy and renewal measures associated with the ``tagged fragment'' of each of the two fragmentation processes (see \cite{ber2}). Then, our goal will be to study a preliminary question related to (*), which is weaker but still relevant for the mining industry: \smallskip (**) \ when is the above described ``two-step'' procedure efficient in terms of mean energy, compared to the ``one-step'' procedures where only the first or only the second fragmentation mechanism reduces a unit size fragment to fragments not larger than $\eta_0$? \smallskip We shall address this question in asymptotic regimes, namely for $\eta$ and $\eta_0$ going together either to $0$ or to $1$. In both cases, we will give explicit estimates in terms of $\eta$ for the efficiency gain or loss of using the two-step procedure.
As we shall see, if $\alpha,\beta$, $\widehat{\alpha}$ and $\widehat{\beta}$ are different, for any value of $\eta_0/\eta\in (0,1)$ the relations between those four parameters determine the relative efficiency between the first, the second, and the two-step fragmentation procedures if $\eta$ is sufficiently small. In particular, when $\alpha>\widehat{\alpha}$ and $\beta >\widehat{\beta}$ the answer to question (**) is affirmative for $\eta$ sufficiently small, so that the solution to problem (*) is in general nontrivial. We shall carry out a similar analysis for large (that is, close to unit-size) thresholds. In order to quantify the comparative efficiency of the two-step procedure, we shall make an additional hypothesis of regular variation at $\infty$ of the L\'evy exponents of the tagged fragment processes. This will be transparently interpreted in terms of the infinitesimal average energy required by each of the fragmentation processes to break arbitrarily close to unit-size fragments. We will show that, at least for small values of $\log \eta_0 /\log \eta$ and variation indexes in $(0,\frac{1}{2}]$ for both fragmentation processes, the relative infinitesimal efficiency of the two fragmentation processes determines the comparative efficiency of the three alternative fragmentation procedures if $\eta$ is sufficiently close to $1$. We point out that the relevant parameters involved in our analysis could in principle be statistically estimated. A first concrete step in that direction has been made by Hoffmann and Krell \cite{kre}, who asymptotically estimate the L\'evy measure of the tagged fragment from the observations of the sizes of the fragments at the first time they become smaller than $\eta\to 0$. Although this is in general not enough to recover the characteristics of the fragmentation process, it provides all the relevant parameters we need which are not observable by other means. The remainder of this paper is organized as follows.
In Section 2 we recall the construction of homogeneous fragmentation processes in terms of Poisson point processes, describe our model of the two-step fragmentation procedure and compute its average energy using first passage laws for subordinators. In Section 3 we recall some results on renewal theory for subordinators and use them to study the small-threshold asymptotics of our problem in Theorems \ref{2frvs1} and \ref{2frvs2}, where the two-step procedure is respectively compared with the first and the second fragmentation processes. The comparative efficiency of the three alternatives according to the values of $\alpha,\beta$, $\widehat{\alpha}$ and $\widehat{\beta}$ is summarized in Corollary \ref{summary}. In Section 4 we introduce the idea of relative ``infinitesimal efficiency'' of two fragmentation procedures. We relate it to a regular variation assumption at infinity for the L\'evy exponent of the tagged fragment, and use it to analyze the comparative efficiency of the two-step fragmentation procedure for close to unit-size fragments, using Dynkin-Lamperti asymptotics for subordinators at first passage. \section{The model} \subsection{The fragmentation process} We shall model the fragmentation mechanisms as homogeneous fragmentation processes, as introduced in \cite{ber}. Such a process is a homogeneous Markov process $\mathbf{X}=(X(t,\mathbf{x}) :t\geq 0)$ taking values in $$\mathcal{S}^{\downarrow}:=\left\{\mathbf{s}= (s_{1},s_{2},...) \ : \ s_{1}\geq s_{2}\geq ...\geq 0\ , \sum_{i=1}^{\infty} s_{i} \leq 1 \right\}\,, $$ which satisfies the two fundamental properties of homogeneity and branching. The parameter $\mathbf{x}=(x_{1},x_{2},...)$ is an element of $\mathcal{S}^{\downarrow}$ standing for the initial condition: $X(0,\mathbf{x} )=\mathbf{x}$ a.s. In the case $\mathbf{x}=(1,0,\dots)$ we simply write $X(t)=X(t,\mathbf{x})$, $t\geq 0$.
We observe that homogeneous fragmentation processes are self-similar fragmentation processes with zero index of self-similarity (see \cite{ber}). Since self-similar fragmentation processes with different indexes are related by a family of random time-changes (depending on fragments), there is no loss of generality in working here in the homogeneous case, as the quantities we study are only size-dependent (see also \cite{bermar}). We assume that no creation of mass occurs. It is known that in this case the process $\mathbf{X}$ is entirely characterized by an erosion coefficient $c\geq 0$ and a dislocation measure $\nu$, which is a measure on $\mathcal{S}^{\downarrow}$ satisfying the conditions \begin{equation} \label{mesuredelevy} \nu (\{1,0,0,...\})=0 \, \hbox{ and } \, \int_{\mathcal{S}^{\downarrow}} (1-s_{1})\nu(d\mathbf{s})<\infty\,. \end{equation} Moreover, we suppose that we are in the dissipative case $\sum_{i=1}^{\infty}s_{i}\leq 1 \hbox{ a.s.}$, and we assume absence of erosion: $c=0$. Let us recall the construction of a homogeneous fragmentation process in this setting, in terms of the atoms of a Poisson point process (see \cite{be2}). Let $ \nu$ be a dislocation measure fulfilling conditions \eqref{mesuredelevy}. Let $\mathbf{K} = \left( (\Delta(t) , k(t)): t \geq 0 \right)$ be a Poisson point process with values in $\mathcal{S}^{\downarrow}\times \mathbb{N}$, and with intensity measure $\nu\otimes\sharp$, where $\sharp$ is the counting measure on $\mathbb{N}$. As in \cite{be2}, we can construct a unique $\mathcal{S}^{\downarrow}$-valued process $\mathbf{X}= (X(t,\mathbf{x}): t\geq 0)$ started from $\mathbf{x}$ with paths that jump only at instants $t\geq 0$ at which a point $ (\Delta(t)=(\Delta_{1},\Delta_{2},....),k (t))$ occurs.
Plainly, $X(t,\mathbf{x})$ is obtained by replacing the $k(t)$-th term $X_{k(t)}(t-,\mathbf{x})$ of $X(t-,\mathbf{x})$ by the masses $X_{k(t)}(t-,\mathbf{x})\Delta_{1},X_{k(t)}(t-,\mathbf{x})\Delta_{2},\dots$, that is, by the decreasing rearrangement of the sequence $X_{1}(t-,\mathbf{x}),...,X_{k-1}(t-,\mathbf{x}),X_{k}(t-,\mathbf{x}) \Delta_{1},X_{k}(t-,\mathbf{x})\Delta_{2},...,X_{k+1}(t-,\mathbf{x}),... $, where $k=k(t)$. Define $$\underline{p}:=\inf\left\{p\in\mathbb{R}:\ \int_{\mathcal{S}^{\downarrow}}\sum_{j=2}^{\infty}s_{j}^{p} \nu (d\mathbf{s})<\infty \right\} $$ and for every $q\in (\underline{p},\infty)$ consider \begin{equation} \label{kappa} \kappa (q) := \int_{\mathcal{S}^{\downarrow}}\left (1-\sum_{j=1}^{\infty}s_{j}^{q}\right) \nu (d\mathbf{s})\, . \end{equation} In the sequel, we assume the Malthusian hypothesis: there exists a unique $\alpha \geq \underline{p}$ such that $\kappa (\alpha )=0$; this $\alpha$ is called the Malthusian exponent. A key tool in fragmentation theory is the tagged fragment associated with $\mathbf{X}$. For the precise definition, we refer the reader to \cite{ber2}. The tagged fragment is a process defined by $$ \chi (t):=X_{J(t)}(t), $$ where $J(t)$ is a random integer such that, conditioned on $X(t)$, $\mathbb{P}(J(t)=i|X(t))=X_{i}(t)$ for all $i\geq 1$, and $\mathbb{P}(J(t)=0|X(t))=1-\sum_{i=1}^{\infty}X_{i}(t)$. It is shown by Bertoin (Theorem 3 in \cite{ber2}) that the process $$ \xi_{t}=-\log \chi (t) $$ is a subordinator. Moreover, its Laplace exponent $\phi$ is given by $$\phi(q):=\kappa (q+1)$$ for $q>\underline{p}-1$. Since $\phi(\alpha-1)=0$, the process $e^{(1-\alpha)\xi (t)}$ is a nonnegative martingale, and we can then define a probability measure $\widetilde{\mathbb{P}}$ on the path space by \begin{equation} \label{eq1111} d\widetilde{\mathbb{P}}{\big|}_{\mathcal{F}_{t}}=e^{(1-\alpha)\xi(t)} d\mathbb{P}{\big|}_{\mathcal{F}_{t}} \,, \end{equation} where $\left(\mathcal{F}_{t} : t\geq 0 \right)$ denotes the natural filtration of $\xi$.
It is well known that under this ``tilted'' law, $\xi$ is a subordinator with Laplace exponent \begin{equation} \label{phitilde} \widetilde{\phi}(q)=\phi(q+\alpha-1). \end{equation} We will respectively denote by ${\Pi}$ and ${U}$ the L\'evy measure and the renewal measure of $\xi_{t}$ under ${\widetilde{\mathbb{P}}}$ (see e.g. \cite{be1}). For $\eta\in (0,1]$ we denote by $$ T_{\eta}:=\inf\{ t\geq 0 : \xi_{t}>\log(1/\eta)\} $$ the first time that the size of the tagged fragment is smaller than $\eta$. \subsection{The fragmentation energy} Following \cite{bermar}, we shall assume that the energy needed to split a fragment of size $x\in [0,1]$ into a sequence $x_1\geq x_2\geq \dots$ is given by the formula $$x^{\beta}\varphi\left(\frac{x_1}{x},\frac{x_2}{x},\dots\right),$$ where $\beta>0$ is a fixed constant and $\varphi :\mathcal{S}^{\downarrow}\rightarrow \mathbb{R}$ is a measurable ``cost function'' such that $\varphi ((1,0,...))=0$. We are interested in the total energy ${{\mathsf E}}^{(\mathbf{x})} (\eta)$ used in splitting the initial fragments of $\mathbf{x}$ until each of them has reached, for the first time, a size smaller than $\eta$. This quantity is given by $$ {{\mathsf E}}^{(\mathbf{x})}(\eta)=\sum_{t\geq 0} \nbOne_{X_{k(t)}(t_{-}, \mathbf{x})\geq \eta} X_{k(t)}^{\beta} (t_{-},\mathbf{x})\varphi (\Delta (t)). $$ We shall simply write $$ {{\mathsf E}}(\eta):={{\mathsf E}}^{(1,0,\dots)}(\eta)\,. $$ The following consequence of the homogeneity property will be useful. \begin{lm} \label{ssproperty} Let $\mathbf{x}=(x_{1},x_{2},...)\in \mathcal{S^{\downarrow}}$ and $\eta \in [0,1]$.
We have \begin{equation} \label{eqenergie} {{\mathsf E}}^{(\mathbf{x})}(\eta) \stackrel{(law)}{=} \sum_{i} \nbOne_{x_{i}\ge \eta} x_{i}^{\beta}{{\mathsf E}}_{i}(\eta/x_{i}), \end{equation} where for each $i\geq 1$, ${{\mathsf E}}_{i}(\cdot)$ is the energy of a fragmentation process $\mathbf{X}^{(i)}$ issued from $(1,0,\dots)$ with the same characteristics as $\mathbf{X}$, and the copies $(\mathbf{X}^{(i)}: i\geq 1)$ are independent. \end{lm} \begin{dem} Let $\left((\Delta_{i}(t),k_{i}(t)): t\geq 0 \right)$, $i\geq 1$, be i.i.d. Poisson point processes with intensity measure $\nu\otimes \sharp$. Denote by ${\overline {\mathbf{X}}}^{(x_{i})}$, $i\geq 1$, the sequence of independent homogeneous fragmentation processes constructed from the latter processes, respectively starting from $(x_{i},0,\dots)$. From the branching property of $\mathbf{X}$, we have the identity $$ {{\mathsf E}}^{(\mathbf{x})}(\eta)\stackrel{(law)}{=}\sum_{i}\sum_{t\geq 0} \nbOne_{x_{i}\geq\eta}\, \nbOne_{{\overline X}_{k_{i}(t)}^{(x_{i})} (t_{-})\geq \eta} ({\overline X}_{k_{i}(t)}^{(x_{i})})^{\beta}(t_{-})\, \varphi (\Delta_{i} (t)). $$ Denoting now by $\left( (\Delta^{(i)}(t),k^{(i)}(t)): t\geq 0 \right)$, $i\geq 1$, the family of i.i.d. Poisson point processes associated with the process $\mathbf{X}^{(i)}$, we get by homogeneity that $$ {{\mathsf E}}^{(\mathbf{x})}(\eta)\stackrel{(law)}{=}\sum_{i}\sum_{t\geq 0} \nbOne_{x_{i}\geq\eta} \, \nbOne_{x_{i}X_{k^{(i)}(t)}^{(i)}(t_{-})\geq \eta} x_{i}^{\beta} (X_{k^{(i)}(t)}^{(i)})^{\beta}(t_{-}) \, \varphi (\Delta^{(i)} (t)), $$ and the statement follows.
\end{dem} \subsection{The energy of a two-step fragmentation procedure} To formulate our problem, we introduce a second Poisson point process $\widehat{\mathbf{K}} = (({\widehat \Delta}(t), {\widehat k}(t) ), t \geq 0 )$ with values in $\mathcal{S}^{\downarrow}\times \mathbb{N}$, and with intensity measure ${\widehat{\nu}}\otimes\sharp$, where ${\widehat{\nu}}$ is a dislocation measure satisfying the same type of assumptions as $\nu$. We can then simultaneously define a family of fragmentation processes ${\widehat{\X}}=({\widehat X}(t,\mathbf{x}): t\geq 0)$ indexed by the initial condition $\mathbf{x}=(x_{1},x_{2},...)$. We denote by ${\widehat \alpha}$ the Malthusian exponent of ${\widehat{\nu}}$. The energy used in the second fragmentation process is assumed to take the same form as for the first, in terms of (possibly different) parameters ${\widehat \beta}$ and ${\widehat{\varphi}}$. We assume that $\mathbf{K}$ and $\widehat{\mathbf{K}}$ are independent, so the families of fragmentation processes $\mathbf{X}$ and ${\widehat{\X}}$ are independent; they are called respectively the first and the second fragmentation processes. In the sequel we assume that the first fragmentation process $\mathbf{X}$ is issued from the unitary fragment $(1,0,\dots)$. Let $1\geq \eta \geq \eta_0>0$. We let each mass fragment evolve in the first fragmentation process until the instant it first becomes smaller than $\eta$. Then it immediately enters the second fragmentation process ${\widehat{\X}}$, and evolves until it first becomes smaller than $\eta_0$. For each $\eta\in (0,1]$ let $\mathbf{x}^{\eta}\in \mathcal{S^{\downarrow}}$ be the mass partition given by the ``output'' of $\mathbf{X}$ when each of the fragments reaches for the first time a size smaller than $\eta$. More precisely, each fragment is ``frozen'' at that time, while the other (larger than $\eta$) fragments continue their independent evolutions.
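The relay mechanism just described can be mocked up in a few lines; in the sketch below both devices are replaced by toy binary uniform splitters with potential cost functions (illustrative simplifications of our own, not the general model):

```python
import random

def evolve(s, thresh, beta, rng):
    """Evolve one fragment by binary uniform splits until its size first
    drops below thresh; return (output sizes, energy spent) for the
    potential cost sum(pieces^beta) - s^beta per split."""
    if s < thresh:                 # already small enough: exits immediately
        return [s], 0.0
    u = rng.random()
    parts = [s * u, s * (1.0 - u)]
    energy = sum(p ** beta for p in parts) - s ** beta
    sizes = []
    for p in parts:
        sz, en = evolve(p, thresh, beta, rng)
        sizes += sz
        energy += en
    return sizes, energy

def two_step(eta, eta0, beta1, beta2, rng):
    """First device runs down to eta; its frozen output then feeds the
    second device, which runs down to eta0 (fragments already below eta0
    exit the system at once)."""
    out1, e1 = evolve(1.0, eta, beta1, rng)
    final, e2 = [], 0.0
    for x in out1:
        sz, en = evolve(x, eta0, beta2, rng)
        final += sz
        e2 += en
    return final, e1 + e2

rng = random.Random(7)
final, energy = two_step(eta=0.3, eta0=0.05, beta1=0.4, beta2=0.6, rng=rng)
print(len(final), energy)
```

Averaging such runs over many samples would give a Monte Carlo counterpart of the expected energy computed exactly in Lemma \ref{expen}.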
We write \begin{equation} \label{ouput1} \mathbf{x}^{\eta}=(x_1^{\eta},x_2^{\eta},\dots) \end{equation} for the decreasing rearrangement of the (random) frozen sizes of fragments when exiting the first fragmentation process. By the homogeneity and branching properties, if ${\mathcal E}(\eta ,\eta_0)$ denotes the total energy spent in reducing the unit-size fragment by this procedure, we have the identity \begin{equation} \label{energieloi} {\mathcal E}(\eta,\eta_0)\stackrel{(law)}{=} {{\mathsf E}}(\eta)+ {\widehat{{\mathsf E}}}^{(\mathbf{x}^{\eta})}(\eta_0)\,, \end{equation} where ${\widehat{{\mathsf E}}}^{(\mathbf{x})}(\cdot)$ is the energy of a copy of the second fragmentation process ${\widehat{\X}}$ starting from $\mathbf{x}$, independent of the first fragmentation process. \begin{rk} Notice that ${\mathcal E}(1,\eta_0)$ is the energy required to initially dislocate the unit mass with the first fragmentation process, and then use the second fragmentation process to continue breaking its fragments if their sizes are larger than or equal to $\eta_0$ (the other ones immediately exit from the system). We will denote by ${\mathcal E}(1^+,\eta_0)={\widehat{{\mathsf E}}}(\eta_0)$ the total energy required when only the second fragmentation process is used from the beginning. For the quantity ${\mathcal E}(\eta_0,\eta_0)={{\mathsf E}}(\eta_0)$ no confusion arises: it corresponds to the case when the first fragmentation process is used during the whole procedure. \end{rk} Our goal now is to compute the expectation of ${\mathcal E}(\eta ,\eta_0)$. The notation ${\widehat \xi}$, ${\widehat T}_{\eta}$, ${\widehat{\Pi}}$, ${\widehat {U}}$ and so on will be used for the analogous objects associated with the fragmentation process ${\widehat{\X}}$. So far the notation ${\mathbb{P}}$ has been used to denote the law of $\xi$.
In all the sequel, we keep the same notation ${\mathbb{P}}$ to denote the product law of independent copies of the processes $\xi$ and $\widehat{\xi}$ in the product path space. Extending accordingly the definition in (\ref{eq1111}), we will also denote by $\widetilde{\mathbb{P}}$ the product measure whose first marginal is given by $d\widetilde{\mathbb{P}}{\big|}_{\mathcal{F}_{t}}=e^{(1-\alpha)\xi(t)} d\mathbb{P}{\big|}_{\mathcal{F}_{t}}$ and whose second marginal is given by $d\widetilde{\mathbb{P}}{\big|}_{{\widehat {\mathcal{F}}}_{t}}= e^{(1-{\widehat \alpha}){\widehat \xi}(t)} d\mathbb{P}{\big|}_{{\widehat {\mathcal{F}}}_{t}}$. Here $\left(\mathcal{F}_{t} : t\geq 0 \right)$ and $\left({\widehat {\mathcal{F}}}_{t} : t\geq 0 \right)$ are the natural filtrations of $\xi$ and $\widehat \xi$ respectively. We shall assume throughout that the following integrability condition holds: \begin{equation} \label{inte1} \varphi\in L^1(\nu)\; \hbox{ and } \; {\widehat{\varphi}}\in L^1({\widehat{\nu}})\,. \end{equation} In this case we define $$ C= \int_{ S } \varphi(\mathbf{s}) \nu( d \mathbf{s}) \; \hbox{ and } \; {\widehat C}= \int_{ S } {\widehat{\varphi}}(\mathbf{s}) {\widehat{\nu}}( d \mathbf{s})\,. $$ Let us introduce the functions $$ \Psi(x)=C \, \int_{0}^{x} e^{(\alpha-\beta) y} {U} (dy)\,,\;\;\; {\widehat{\Psi}}(x)= {\widehat C} \,\int_{0}^{x} e^{({\widehat \alpha}-{\widehat \beta}) y} \widehat{U} (dy) \,,\; x\geq 0\, . $$ To simplify the notation we will put $$ \forall\, a>0 : \;\, \ell(a):=\log (1/a)\,. $$ We now have all the elements needed to compute the expected energy requirement of the two-step fragmentation procedure. \begin{lm} \label{expen} Assume that the integrability condition (\ref{inte1}) is satisfied. Let $\eta_0\in (0,1)$.
Then, we have for $\eta_0<\eta<1$ that $$ \begin{disarray}{rcl} \mathbb{E}({\mathcal E}(\eta ,\eta_0))&=& C \,\int_{0}^{ \ell(\eta)} e^{(\alpha-\beta) y} \, {U} (dy) \\ && + {\widehat C} \, \int_{0}^{ \ell(\eta)} \int_{ \ell(\eta)-y}^{\ell(\eta_0)-y} e^{(\alpha-{\widehat \beta})(z+y)} \left[\int_{0}^{\ell(\eta_0)-(z+y)} e^{({\widehat \alpha}-{\widehat \beta})x} \widehat{U}(dx)\right] {\Pi}(dz){U}(dy) \\ &=&\Psi( \ell(\eta))+ \widetilde {\mathbb{E}} \left(\nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)} e^{(\alpha-{\widehat \beta}) \,\xi_{T_{\eta}}} \, {\widehat{\Psi}}( \ell(\eta_0)-\xi_{T_{\eta}})\right), \\ \end{disarray} $$ and $$ \mathbb{E}({\mathcal E}(\eta_0 ,\eta_0))= \mathbb{E}({{\mathsf E}}(\eta_0))= \Psi(\ell(\eta_0)), \quad \mathbb{E}({\mathcal E}(1^+ ,\eta_0))= \mathbb{E}({\widehat {\mE}}(\eta_0))={\widehat{\Psi}}( \ell(\eta_0)). $$ When the renewal measure $U(dx)$ has no atom at $0$, one has $$ \mathbb{E}({\mathcal E}(1^+ ,\eta_0))= \mathbb{E}({\mathcal E}(1 ,\eta_0))\,. $$ \end{lm} \begin{dem} The proof is an extension of arguments given in \cite{bermar}, corresponding to the cases ``$\eta=1^+$'' and $\eta=\eta_0$, which we repeat here for convenience. By the compensation formula for the Poisson point process $(\Delta (u), k(u))$ associated with the first fragmentation process $\mathbf{X}$, we get that for $\eta_0\in (0,1]$, \begin{equation*} \label{eq1} \mathbb{E}({{\mathsf E}}(\eta_0)) =\mathbb{E}\left(\int_{0}^{\infty} \nbOne_{\chi(t)>\eta_0} (\chi (t))^{\beta-1} dt\right) \int_{ S } \varphi (\mathbf{s}) \nu ( d \mathbf{s}) =C ~\mathbb{E}\left(\int_{0}^{\infty} \nbOne_{\xi_{t}<\ell(\eta_0) } e^{(1-\beta)\xi_{t}} dt\right)\,.
\end{equation*} Thus \begin{equation} \label{aven} \mathbb{E}({{\mathsf E}}(\eta_0)) =C\widetilde{\mathbb{E}}\left(\int_{0}^{\infty} \nbOne_{\xi_{t} <\ell(\eta_0) } e^{(\alpha-\beta)\xi_{t}} dt\right) =C\int_{0}^{\ell(\eta_0)} e^{(\alpha-\beta) y} {U} (dy) =\Psi(\ell(\eta_0)). \end{equation} Similarly, $$ \begin{disarray}{rcl}\mathbb{E}({\widehat {\mE}}(\eta_0)) &=&{\widehat C} \int_{0}^{\ell(\eta_0)} e^{({\widehat \alpha}-{\widehat \beta}) y} \widehat{U} (dy)={\widehat{\Psi}}(\ell(\eta_0)). \end{disarray} $$ The above identity also implies that $\mathbb{E}({\mathcal E}(1 ,\eta_0))= {\widehat C} \int_{0^+}^{\ell(\eta_0)} e^{({\widehat \alpha}-{\widehat \beta}) y} \widehat{U}(dy)= \mathbb{E}({\widehat {\mE}}(\eta_0))$ when $U$ has no atom at $0$. The statement is thus proved in the cases ``$\eta=1^+$'' and $\eta=\eta_0$. For the general case, we use Lemma \ref{ssproperty} to get $$ \begin{disarray}{rcl} \mathbb{E}({\widehat {\mE}}^{(\mathbf{x}^{\eta})}(\eta_0))&=& \mathbb{E}\left(\sum_{i} \nbOne_{x^{\eta}_{i}>\eta_0} (x^{\eta}_{i})^{{\widehat \beta}}{\widehat {\mE}}_{ i}(\eta_0/x^{\eta}_{i})\right)\\&=& \mathbb{E}\left(\sum_{i} \nbOne_{x^{\eta}_{i}>\eta_0} (x^{\eta}_{i})^{{\widehat \beta}}\mathbb{E} ({\widehat {\mE}}_{i}(\eta_0/x^{\eta}_{i}) \, | \, x^{\eta}_{i})\right)\\ &=& \mathbb{E} \left( \nbOne_{\chi(T_{\eta})>\eta_0}(\chi (T_{\eta}))^{{\widehat \beta}-1} {\widehat {\mathbb{E}}}({\widehat {\mE}}(\eta_0/y))\vert_{y= \chi (T_{\eta})} \right), \end{disarray} $$ where ${\widehat {\mE}}(\cdot)$ is the energy of a copy of the second fragmentation process, starting from the unit mass and independent of the first one, and the ${\widehat {\mE}}_{i}(\cdot)$ are independent copies of ${\widehat {\mE}}(\cdot)$.
Then, since $\chi (t)=e^{-\xi_t}$, we have $$ \begin{disarray}{rcl} \mathbb{E}({\widehat {\mE}}^{(\mathbf{x}^{\eta})}(\eta_0))&=& \widetilde {\mathbb{E}} \left( \nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)}e^{(\alpha-{\widehat \beta})\xi_{T_{\eta}}} {\widehat {\mathbb{E}}}({\widehat {\mE}}(\eta_0e^{z} ))\vert_{ z=\xi_{T_{\eta}}}\right) \\ &=& \widetilde {\mathbb{E}} \left( \nbOne_{\xi_{T_{\eta}}<\ell(\eta_0)}e^{(\alpha-{\widehat \beta}) \xi_{T_{\eta}}}{\widehat{\Psi}}(\ell(\eta_0)-\xi_{T_{\eta}})\right) .\\ \end{disarray} $$ According to Lemma 1.10 of \cite{ber1999}, the distribution of $\xi_{T_{\eta}}$ under $\widetilde{\mathbb{P}}$ is given by $$ \widetilde{\mathbb{P}}(\xi_{T_{\eta}}\in dz)= \int_{0}^{\ell(\eta)} \nbOne_{ \ell(\eta )< z} {\Pi} (dz-y){U} (dy) . $$ Therefore, $$ \mathbb{E}({\widehat {\mE}}^{(\mathbf{x}^{\eta})}(\eta_0))= \int_{0}^{ \ell(\eta)} \left[\int_{\ell(\eta)-y}^{ \ell(\eta_0)-y}e^{(\alpha-{\widehat \beta})(z+y)} {\widehat{\Psi}} \left( \ell(\eta_0)-(z+y)\right){\Pi}(dz)\right]{U}(dy). $$ Bringing the pieces together and using the identity \eqref{energieloi}, we get the result. \end{dem} In analogy with (\ref{ouput1}), we introduce the notation \begin{equation} \label{ouput2} {\widehat{\x}}^{\eta}=({\widehat x}_1^{\eta},{\widehat x}_2^{\eta},\dots) \end{equation} for the decreasing rearrangement of the frozen sizes of the fragments, smaller than $\eta$, that exit the second fragmentation process started from the unit mass.
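Lemma \ref{expen} can also be checked numerically in the halving toy case $\nu=\delta_{(1/2,1/2,0,\dots)}$, $\varphi\equiv 1$ (an illustrative assumption of ours, not a case treated above). There $C=1$, the Malthus coefficient is $\alpha=1$, the tagged-fragment subordinator $\xi$ is compound Poisson with unit rate and jumps of size $\log 2$, so that $\widetilde{\mathbb{P}}=\mathbb{P}$ and the renewal measure is $U=\sum_{k\geq 0}\delta_{k\log 2}$ (which, note, does have an atom at $0$), and \eqref{aven} becomes an identity between finite sums:

```python
import math

LOG2 = math.log(2)

def Psi(x, beta):
    """Psi(x) = C * int_0^x e^{(alpha-beta)y} U(dy) for the halving cascade,
    with C = 1, alpha = 1 and renewal measure U = sum_k delta_{k log 2}."""
    return sum(math.exp((1.0 - beta) * k * LOG2) for k in range(int(x / LOG2) + 1))

def direct_mean_energy(eta0, beta):
    """E(E(eta0)) by direct enumeration: level k carries 2**k fragments of
    mass 2**-k, each dislocated at cost (2**-k)**beta while its mass >= eta0."""
    total, k = 0.0, 0
    while 2.0**-k >= eta0:
        total += 2**k * (2.0**-k)**beta
        k += 1
    return total

ell = lambda a: math.log(1 / a)
```

For non-dyadic thresholds both sides agree exactly; at a dyadic $\eta_0$ the atom of $U$ sitting at the endpoint $\ell(\eta_0)$ makes the convention for the closed integration interval matter.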
The following decompositions of the total energy will be useful in the sequel: \begin{rk} \label{decomp} For $1\geq \eta \geq \eta_0 >0$ we have $$ {\widehat {\mE}}(\eta_0)= {\widehat {\mE}}(\eta)+ {\widehat {\mE}}^{({{\widehat{\x}}}^{\eta})}(\eta_0), $$ whence \begin{equation*} \begin{split} {\mathcal E}(\eta ,\eta_0)-{\mathcal E}(1^+,\eta_0)= & {{\mathsf E}}(\eta) -{\widehat {\mE}}(\eta) + {\widehat {\mE}}^{(\mathbf{x}^{\eta})}(\eta_0) - {\widehat {\mE}}^{({{\widehat{\x}}}^{\eta})}(\eta_0)\,. \end{split} \end{equation*} From this relation and computations similar to those in Lemma \ref{expen}, we can write \begin{equation*} \begin{split} \mathbb{E}({\mathcal E}(\eta ,\eta_0)-{\mathcal E}(1^+,\eta_0)) &= \Psi(\ell(\eta)) - {\widehat{\Psi}}(\ell(\eta)) \\ &\quad + {\widetilde {\mathbb{E}}} \left( \nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)} e^{(\alpha-{\widehat \beta})\xi_{T_{\eta}}} {\widehat{\Psi}}(\ell(\eta_0)-\xi_{T_{\eta}})\right) \\ & \quad -{\widetilde{\mathbb{E}}} \left(\nbOne_{ {\widehat \xi}_{ {\widehat T}_{\eta} }<\ell(\eta_0) } e^{({\widehat \alpha}-{\widehat \beta}){\widehat \xi}_{ {\widehat T}_{\eta} } } {\widehat{\Psi}}(\ell(\eta_0)-{\widehat \xi}_{ {\widehat T}_{\eta} } ) \right) .\\ \end{split} \end{equation*} Observe that when $U$ has no atom at $0$, one can replace ${\mathcal E}(1^+,\eta_0)$ by ${\mathcal E}(1,\eta_0)$ on the left-hand side of the formula.
Similarly, we have \begin{equation*} \begin{split} \mathbb{E}({\mathcal E}(\eta ,\eta_0)- {\mathcal E}(\eta_0,\eta_0))= & \ \mathbb{E}\left({\widehat {\mE}}^{(\mathbf{x}^{\eta})}(\eta_0) - {{\mathsf E}}^{({\mathbf{x}}^{\eta})}(\eta_0)\right)\\ = & \ {\widetilde {\mathbb{E}}} \bigg( \nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)}e^{(\alpha-{\widehat \beta})\xi_{T_{\eta}}} {\widehat{\Psi}}(\ell(\eta_0)-\xi_{T_{\eta}}) - \nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)}e^{(\alpha-\beta)\xi_{T_{\eta}}} \Psi(\ell(\eta_0)-\xi_{T_{\eta}})\bigg).\\ \end{split} \end{equation*} \end{rk} \section{Small thresholds} In this section, we consider the total energy $\mathbb{E}({\mathcal E}(\eta ,\eta_0))$ when $\eta_0$ and $\eta$ go to $0$ in a suitable joint asymptotics. Our goal is to compare it with the mean energy required for reducing the unit fragment to fragments smaller than $\eta_0$ using only the first or only the second fragmentation process. We shall assume that the quantities $$ m(\alpha):=\int_{\mathcal{S}^{\downarrow}}\sum_{n=1}^{\infty}s_n^{\alpha} \log\left(\frac{1}{s_n^{\alpha}}\right)\nu(d\mathbf{s})\, \mbox{ and } \, {\widehat m}({\widehat \alpha}):=\int_{\mathcal{S}^{\downarrow}}\sum_{n=1}^{\infty}s_n^{{\widehat \alpha}} \log\left(\frac{1}{s_n^{{\widehat \alpha}}}\right){\widehat{\nu}}(d\mathbf{s}) $$ are finite. Moreover, we impose the conditions $$ \beta<\alpha\ \mbox{ and }\ {\widehat \beta}<{\widehat \alpha}. $$ The latter assumption is physically reasonable, since otherwise the energy $\Psi(\infty)$ (respectively ${\widehat{\Psi}}(\infty)$) required for all fragments to vanish in the first (respectively second) fragmentation process would be finite (see Remark 1 in \cite{bermar}). The following asymptotic result on the mean energy of a single fragmentation process is based on the renewal theorem for subordinators (Bertoin et al. \cite{Be99renew}).
Its proof is adapted in a straightforward way from that of Lemma 4 in \cite{bermar}; see also Theorem 1 therein. \begin{lm} \label{ensmallfrag} Under the previous assumptions, we have $$\lim_{\eta\to 0} \eta^{\alpha-\beta} \mathbb{E}({{\mathsf E}}(\eta))=\frac{C}{(\alpha-\beta)m(\alpha)} \, \mbox{ and } \, \lim_{\eta\to 0} \eta^{{\widehat \alpha}-{\widehat \beta}} \mathbb{E}({\widehat {\mE}}(\eta))=\frac{{\widehat C}}{({\widehat \alpha}-{\widehat \beta}){\widehat m}({\widehat \alpha})}.$$ \end{lm} By the renewal theorem for subordinators we also have, as $\eta\to 0^+$, $$ \widetilde{\mathbb{P}} \left(\xi_{T_{\eta}}-\ell(\eta)\in du\right) \to M(du):=\frac{1}{m(\alpha)}\int_{\mathbb{R}^+} {\Pi}(y+du)dy $$ and $$ \widetilde{\mathbb{P}} \left({\widehat \xi}_{{\widehat T}_{\eta}}- \ell(\eta)\in du\right) \to {\widehat M}(du):= \frac{1}{{\widehat m}({\widehat \alpha})}\int_{\mathbb{R}^+} {\widehat \Pi}(y+du)dy $$ in the weak sense. For a fixed parameter $\lambda>0$, let us define the finite and strictly positive constants $$ F_{\lambda}:=m(\alpha)\int_0^{\lambda}e^{(\alpha-{\widehat \beta})u} {\widehat{\Psi}}(\lambda-u) M(du),$$ $$D_{\lambda}:=m(\alpha)\int_0^{\lambda}e^{(\alpha-\beta)u} \Psi(\lambda-u) M(du)$$ and $$ {\widehat D}_{\lambda}:={\widehat m}({\widehat \alpha})\int_0^{\lambda}e^{({\widehat \alpha}-{\widehat \beta})u} {\widehat{\Psi}}(\lambda-u) {\widehat M}(du). $$ With these elements, we are in a position to study explicitly the (comparative) behavior of the total energy for small thresholds $\eta$ and $\eta_0$, when these are bound by the relation $$ \eta_0=\eta e^{-\lambda}. $$ \begin{theo} \label{2frvs1} (Two-step procedure versus first fragmentation only) Assume that the renewal measure $U(dx)$ has no atom at $0$.
For any $\lambda>0$, the following hold: $(a)$ If ${\widehat \beta}>\beta$, then $\forall \, \varepsilon\in (0,\frac{\alpha-\beta}{C}D_{\lambda})\;$ $\exists \, \eta_{\varepsilon}\in (0,1)$ such that $$ \forall \, \eta\leq \eta_{\varepsilon}: \;\;\; \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\leq \left(\varepsilon-\frac{\alpha-\beta}{C}D_{\lambda}\right) \mathbb{E}({{\mathsf E}}(\eta))+\mathbb{E}({{\mathsf E}}(\eta e^{-\lambda}))< \mathbb{E}({{\mathsf E}}(\eta e^{-\lambda})). $$ \noindent $(b)$ If ${\widehat \beta}<\beta$, then $\forall \, M>0\;$ $\exists \, \eta_{M}\in (0,1)$ such that $$ \forall \, \eta\leq \eta_{M}: \;\;\; \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\geq M \mathbb{E}({{\mathsf E}}(\eta)) +\mathbb{E}({{\mathsf E}}(\eta e^{-\lambda}))> \mathbb{E}({{\mathsf E}}(\eta e^{-\lambda})). $$ \noindent $(c)$ If ${\widehat \beta}=\beta$, then $\forall \, \varepsilon\in (0,1)$ $\exists \, \eta_{\varepsilon}\in (0,1)$ such that $\forall \, \eta\leq \eta_{\varepsilon}$: \begin{multline*} \left(-\varepsilon+\frac{\alpha-\beta}{C}(F_{\lambda}-D_{\lambda})\right) \mathbb{E}({{\mathsf E}}(\eta))+\mathbb{E}({{\mathsf E}}(\eta e^{-\lambda})) \\ \leq \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\leq \left(\varepsilon+\frac{\alpha-\beta}{C}(F_{\lambda}-D_{\lambda})\right) \mathbb{E}({{\mathsf E}}(\eta))+\mathbb{E}({{\mathsf E}}(\eta e^{-\lambda}))\,. \end{multline*} In all cases, one can replace $\mathbb{E}({{\mathsf E}}(\eta))$ by $C\left[ \eta^{\alpha-\beta} m(\alpha)(\alpha-\beta)\right]^{-1}$.
\end{theo} \begin{dem} All parts are obtained by taking the limit as $\eta\to 0$ in the identity \begin{equation*} \begin{split} \eta^{\alpha-\beta}\left(\mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\!-\!\mathbb{E}({\mathcal E}(\eta e^{-\lambda} ,\eta e^{-\lambda}))\right)= & \ \widetilde {\mathbb{E}} \bigg( e^{(\alpha-{\widehat \beta})(\xi_{T_{\eta}}-\ell(\eta))} {\widehat{\Psi}}\big(\lambda\!-\!(\xi_{T_{\eta}}\!-\!\ell(\eta))\big) \nbOne_{\xi_{T_{\eta}}-\ell(\eta)<\lambda}\bigg)\eta^{{\widehat \beta}-\beta} \\ & -\widetilde {\mathbb{E}} \bigg( e^{(\alpha-\beta)(\xi_{T_{\eta}}-\ell(\eta))} \Psi\big(\lambda\!-\!(\xi_{T_{\eta}}\!-\!\ell(\eta))\big) \nbOne_{\xi_{T_{\eta}}-\ell(\eta)<\lambda }\bigg), \end{split} \end{equation*} which follows from Remark \ref{decomp}, and then using Lemma \ref{ensmallfrag} and the previously mentioned weak convergence result for $\widetilde{\mathbb{P}} \left(\xi_{T_{\eta}} -\ell(\eta)\in dy\right)$ (notice that the limit is absolutely continuous). \end{dem} \begin{theo} \label{2frvs2} (Two-step procedure versus second fragmentation only) Assume that $U(dx)$ and $\widehat{U}(dx)$ have no atom at $0$. For any $\lambda>0$, the following hold: $(a)$ If ${\widehat \alpha}>\alpha$, then $\forall \, \varepsilon\in (0,\frac{{\widehat \alpha}-{\widehat \beta}}{{\widehat C}} {{\widehat D}}_{\lambda})\;$ $\exists \, \eta_{\varepsilon}\in (0,1)$ such that $$ \forall \, \eta\leq \eta_{\varepsilon} : \;\;\; \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\leq \left(\varepsilon-\frac{{\widehat \alpha}-{\widehat \beta}}{{\widehat C}}{{\widehat D}}_{\lambda}\right) \mathbb{E}({\widehat {\mE}}(\eta))+ \mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda}))< \mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda}))\,.
$$ \noindent $(b)$ If ${\widehat \alpha}<\alpha$, then $\forall \, M>0 \;$ $\exists \, \eta_{M}\in (0,1)$ such that $$ \forall \, \eta\leq \eta_{M}: \;\;\; \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\geq M \mathbb{E}({\widehat {\mE}}(\eta)) +\mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda}))> \mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda})). $$ \noindent $(c)$ If ${\widehat \alpha}=\alpha$, then $\forall \, \varepsilon\in (0,1)\;$ $\exists \, \eta_{\varepsilon}\in (0,1)$ such that $\forall \, \eta\leq \eta_{\varepsilon}$, \begin{multline*} \left(-\varepsilon+\frac{{\widehat \alpha}-{\widehat \beta}}{{\widehat C}}(F_{\lambda}-{{\widehat D}}_{\lambda})\right) \mathbb{E}({\widehat {\mE}}(\eta))+\mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda})) \\ \leq \mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda}))\leq \left(\varepsilon+\frac{{\widehat \alpha}-{\widehat \beta}}{{\widehat C}}(F_{\lambda}-{{\widehat D}}_{\lambda})\right) \mathbb{E}({\widehat {\mE}}(\eta))+\mathbb{E}({\widehat {\mE}}(\eta e^{-\lambda}))\,. \end{multline*} In all cases, one can replace $\mathbb{E}({\widehat {\mE}}(\eta))$ by ${\widehat C}\left[\eta^{{\widehat \alpha}-{\widehat \beta}}{\widehat m}({\widehat \alpha})({\widehat \alpha}-{\widehat \beta})\right]^{-1}$.
\end{theo} \begin{dem} The proof is similar to the previous one, noting that \begin{equation*} \begin{split} \eta^{{\widehat \alpha}-{\widehat \beta}}\left(\mathbb{E}({\mathcal E}(\eta ,\eta e^{-\lambda})) \!-\!\mathbb{E}({\mathcal E}(1 ,\eta e^{-\lambda}))\right)= & \ \eta^{{\widehat \alpha}-{\widehat \beta}}\left(\Psi(\ell(\eta))-{\widehat{\Psi}}(\ell(\eta))\right) \\ & + \widetilde {\mathbb{E}} \bigg( e^{(\alpha-{\widehat \beta})(\xi_{T_{\eta}}-\ell(\eta))} {\widehat{\Psi}}\big(\lambda\!-\!(\xi_{T_{\eta}}\!-\!\ell(\eta))\big) \nbOne_{\xi_{T_{\eta}}-\ell(\eta)<\lambda}\bigg)\eta^{{\widehat \alpha}-\alpha} \\ & -\widetilde {\mathbb{E}} \bigg( e^{({\widehat \alpha}-{\widehat \beta})({\widehat \xi}_{{\widehat T}_{\eta}}-\ell(\eta))} {\widehat{\Psi}}\big(\lambda\!-\!({\widehat \xi}_{{\widehat T}_{\eta}}\!-\!\ell(\eta))\big) \nbOne_{{\widehat \xi}_{{\widehat T}_{\eta}}-\ell(\eta)<\lambda }\bigg), \end{split} \end{equation*} which follows from Remark \ref{decomp}. \end{dem} We next summarize the main results of this section in an asymptotic comparative scheme. The notation $F_{1,2}$ refers to the situation where in the two-step fragmentation procedure both devices are effectively used (i.e.\ $\eta_0/\eta\in (0,1)$), whereas the notations $F_1$ and $F_2$ respectively refer to the situations where only the first or only the second fragmentation process is used. \begin{cor}\label{summary} Assume that $U(dx)$ and $\widehat{U}(dx)$ have no atom at $0$. In each of the following cases, the corresponding assertion holds true for any value of $\eta_0/\eta\in (0,1)$ as soon as $\eta$ is sufficiently small: \begin{equation*} \begin{split} \widehat{\alpha} >\alpha, \ \widehat{\beta}<\beta \mbox{ (thus }\alpha - \beta < \widehat{\alpha} -\widehat{\beta}) :& \ F_1 \mbox{ is better than }F_{1,2} \mbox{ which is better than } F_2 \ . \\ \widehat{\alpha} <\alpha , \ \widehat{\beta}>\beta \mbox{ (thus }\alpha - \beta > \widehat{\alpha} -\widehat{\beta}) :& \ F_2 \mbox{ is better than } F_{1,2} \mbox{ which is better than }F_1 \ .
\\ \widehat{\alpha} <\alpha, \ \widehat{\beta}<\beta \mbox{ and } \alpha - \beta < \widehat{\alpha} -\widehat{\beta} :& \ F_1 \mbox{ is better than } F_2 \mbox{ which is better than } \ F_{1,2} \ . \\ \widehat{\alpha} <\alpha, \ \widehat{\beta}<\beta \mbox{ and } \alpha -\beta >\widehat{\alpha} -\widehat{\beta} :& \ F_2 \mbox{ is better than } F_1 \mbox{ which is better than } \ F_{1,2} \ . \\ \widehat{\alpha} >\alpha, \ \widehat{\beta}>\beta \mbox{ and }\alpha - \beta <\widehat{\alpha} -\widehat{\beta} :& \ F_{1,2} \mbox{ is better than }F_1 \mbox{ which is better than }F_2 \ . \\ \widehat{\alpha} >\alpha, \ \widehat{\beta}>\beta \mbox{ and } \alpha - \beta > \widehat{\alpha} -\widehat{\beta} :& \ F_{1,2} \mbox{ is better than }F_2 \mbox{ which is better than }F_1 \ . \\ \end{split} \end{equation*} \end{cor} \smallskip \begin{rk} By parts $(c)$ of Theorems \ref{2frvs1} and \ref{2frvs2}, if $\widehat{\alpha} =\alpha$ or if $\widehat{\beta}=\beta$, the comparative efficiency of $F_1$, $F_2$ and $F_{1,2}$ for $\eta$ small enough is in general determined not only by those parameters but also by the value of $\eta_0/\eta\in (0,1)$. \end{rk} \section{Close-to-unit size thresholds} We shall next be interested in the behavior of $ \mathbb{E}({\mathcal E}(\eta ,\eta_0))$ for large values of $\eta$ and $\eta_0$. Again, we shall compare the mean energy of the two-step fragmentation procedure with the situations where only the second, or only the first, fragmentation process is used. We shall assume in this analysis that the subordinators $\xi$ and ${\widehat \xi}$ satisfy under $\widetilde{\mathbb{P}}$ a condition of regular variation at $\infty$.
Namely, respectively denoting by $\widetilde{\phi}$ and $\widehat{{\widetilde \phi}}$ their Laplace exponents (see (\ref{phitilde})), we assume $$ ({\bf RV})\;\;\;\; \exists \; \rho,\, {\widehat \rho} \, \in \, (0,1) \, \hbox{ such that } \, \forall \, \lambda\geq 0 \,: \;\;\; \lim_{q\to \infty} \frac{\widetilde{\phi}(\lambda q)}{\widetilde{\phi}(q)}= \lambda^{\rho}\,,\;\;\; \lim_{q\to \infty} \frac{\widehat{{\widetilde \phi}}(\lambda q)}{\widehat{{\widetilde \phi}}(q)}= \lambda^{{\widehat \rho}}\,. $$ This assumption can be equivalently (and more transparently) stated in terms of the infinitesimal behavior near $\eta=1$ of the ``mean energy functions'' $\eta\mapsto \mathbb{E}({{\mathsf E}}(\eta))$ and $\eta\mapsto\mathbb{E}({\widehat {\mE}}(\eta))$ of each of the fragmentation processes; see Remark \ref{avennear0} below. Recall that a function $G:\mathbb{R}_+\to\mathbb{R}_+ $ is said to vary slowly at $0$ if $\lim_{x\to 0^+} G(\lambda x) /G(x) =1$ for all $\lambda \in (0,\infty)$. A well-known fact that will be used in the sequel is that this convergence is uniform in $\lambda \in [0,\lambda_0]$, for all $\lambda_0 \in (0,\infty)$. By $L$ and ${\widehat L}$ we shall denote the nonnegative slowly varying functions at $0$ defined by the relations $$ L\left(\frac{1}{ x}\right) = \frac{x^{\rho}}{\widetilde{\phi}(x)} \,,\;\;\; {{\widehat L}}\left(\frac{1}{x}\right)= \frac{x^{{\widehat \rho}}}{\widehat{{\widetilde \phi}}(x)}. $$ \begin{rk} Using the aforementioned uniform convergence result for $L$ and ${\widehat L}$, it is not hard to check that $ \lim\limits_{q \to \infty}\frac{{\phi}( q)}{\widetilde{\phi}(q)}= \lim\limits_{q \to \infty}\frac{{{\widehat \phi}}( q)}{\widehat{{\widetilde \phi}}(q)} =1$. Consequently, ({\bf RV}) implies that the same condition holds for $\phi$ and ${\widehat \phi}$, and conversely.
\end{rk} We define $$ Q_{\phi,{\widehat \phi}}:=\lim\limits_{q \to \infty} \frac{{\widehat \phi}( q)}{\phi(q)} \ = \lim\limits_{q \to \infty}\frac{\widehat{{\widetilde \phi}}(q)}{\widetilde{\phi}(q)} = \lim\limits_{x \to 0^+} \frac{L(x)x^{\rho}}{{\widehat L}(x)x^{{\widehat \rho}}} $$ if the limit exists in $ [0,\infty]$. More generally, we write $$ Q_{{\phi},{\widehat \phi}}^+ :=\limsup\limits_{q \to \infty} \frac{{\widehat \phi}( q)}{\phi(q)} \ = \limsup\limits_{x \to 0^+} \frac{L(x)x^{\rho}}{{\widehat L}(x)x^{{\widehat \rho}}} $$ and $$ Q_{{\phi},{{\widehat \phi}}}^- :=\liminf\limits_{q \to \infty}\frac{{\widehat \phi}( q)}{\phi(q)} \ = \liminf\limits_{x \to 0^+} \frac{L(x)x^{\rho}}{{\widehat L}(x)x^{{\widehat \rho}}}. $$ Recall the notation $$ \Psi(x)=C \, \int_{0}^{x} e^{(\alpha-\beta)y} {U} (dy)\,,\;\;\; {\widehat{\Psi}}(x)={\widehat C} \,\int_{0}^{x} e^{({\widehat \alpha}-{\widehat \beta}) y} \widehat{U} (dy) . $$ \begin{lm} \label{firstpartofthelimit} We have $$ C Q_{{\phi},{{\widehat \phi}}}^- -{\widehat C}\leq \liminf\limits_{\eta\to 1^-} \frac{\Psi(\ell(\eta)) -{\widehat{\Psi}}(\ell(\eta))}{{\widehat L}(\ell(\eta)) {\ell(\eta)}^{{\widehat \rho}}} \leq \limsup\limits_{\eta\to 1^-} \frac{\Psi(\ell(\eta)) -{\widehat{\Psi}}(\ell(\eta))}{{\widehat L}(\ell(\eta)) {\ell(\eta)}^{{\widehat \rho}}}\leq C Q_{{\phi},{{\widehat \phi}}}^+ -{\widehat C} . $$ In particular, \begin{equation*} \lim\limits_{\eta\to 1^-} \frac{\Psi(\ell(\eta)) -{\widehat{\Psi}}(\ell(\eta))}{{\widehat L}(\ell(\eta)) {\ell(\eta)}^{{\widehat \rho}}}= \begin{cases} \infty & \mbox{ if } {\widehat \rho}>\rho \\ - {\widehat C} & \mbox{ if } {\widehat \rho}<\rho\\ C Q_{{\phi},{{\widehat \phi}}} -{\widehat C} & \mbox{ if } {\widehat \rho}=\rho \mbox{ and } \exists \; Q_{{\phi},{{\widehat \phi}}}=\lim\limits_{x \to 0^+} \frac{L(x)}{{\widehat L}(x)} \in [0,\infty]\,. \end{cases} \end{equation*} \end{lm} \begin{dem} By classical Tauberian theorems (see e.g.
Theorem 5.13 in \cite{Ky} or Section 0.7 in \cite{be1}), our assumptions on $\widetilde{\phi}$ and $\widehat{{\widetilde \phi}}$ are respectively equivalent to \begin{equation*} \lim\limits_{x \to 0^+} \frac{{U}(x)}{x^{\rho}L(x)}=1 \,,\;\;\; \lim\limits_{x \to 0^+} \frac{\widehat{U}(x)}{x^{{\widehat \rho}}{\widehat L}(x)}= 1\,. \end{equation*} On the other hand, we have \begin{equation*} \begin{split} \Psi(x) -{\widehat{\Psi}}(x)\leq & \ C e^{|\alpha-\beta|x} {U}(x)-{\widehat C} e^{-|{\widehat \alpha}-{\widehat \beta}|x}\widehat{U}(x) \\ = & {\widehat L}(x)x^{{\widehat \rho}} {\widehat C} \left( \frac{C}{{\widehat C}} e^{|\alpha-\beta|x} \frac{{U}(x)}{x^{\rho}L(x)} \frac{L(x)}{{\widehat L}(x)}x^{\rho-{\widehat \rho}} -e^{-|{\widehat \alpha}-{\widehat \beta}|x} \frac{\widehat{U}(x)}{x^{{\widehat \rho}}{\widehat L}(x)} \right)\\ \end{split} \end{equation*} and similarly, \begin{equation*} \begin{split} \Psi(x)-{\widehat{\Psi}}(x)\geq & \ C e^{-|\alpha-\beta|x} {U}(x)-{\widehat C} e^{|{\widehat \alpha}-{\widehat \beta}|x}\widehat{U}(x) \\ = & {\widehat L}(x)x^{{\widehat \rho}} {\widehat C} \left( \frac{C}{{\widehat C}} e^{-|\alpha-\beta|x} \frac{{U}(x)}{x^{\rho}L(x)} \frac{L(x)}{{\widehat L}(x)}x^{\rho-{\widehat \rho}} -e^{|{\widehat \alpha}-{\widehat \beta}|x} \frac{\widehat{U}(x)}{x^{{\widehat \rho}}{\widehat L}(x)} \right)\,.\\ \end{split} \end{equation*} The first statement follows from these bounds.
To complete the proof, notice that since $\frac{L(x)}{{\widehat L}(x)}$ is slowly varying at $0$, we have \begin{equation*} \lim_{x\to 0^+} \frac{L(x)}{{\widehat L}(x)}x^{\rho-{\widehat \rho}}= \begin{cases} \infty & \mbox{ if }{\widehat \rho}>\rho \\ 0 & \mbox{ if } {\widehat \rho}<\rho\\ Q_{{\phi},{{\widehat \phi}}} & \mbox{ if } {\widehat \rho}=\rho \mbox{ and } \exists \; \lim\limits_{x \to 0^+} \frac{L(x)}{{\widehat L}(x)}\in [0,\infty]\,, \end{cases} \end{equation*} using also the fact that $\lim\limits_{x\to 0^+}G(x)=0$ for any function $G$ that is regularly varying at $0$ with positive index. \end{dem} Notice that ({\bf RV}) implies that $U$ has no atom at $0$ (see e.g.\ the first lines of the previous proof). \begin{rk} \label{avennear0} The estimates used in the proof of Lemma \ref{firstpartofthelimit} show that $$\Psi(x)\sim C {U}(x)\quad \mbox{ and }\quad {\widehat{\Psi}}(x)\sim {\widehat C} \widehat{U}(x)\quad \mbox{ when }x\to 0^+,$$ so that $\Psi(x)\sim C x^{\rho}L(x)$ and ${\widehat{\Psi}}(x)\sim {\widehat C} x^{{\widehat \rho}}{\widehat L}(x)$ as well. Consequently, by the aforementioned Tauberian results, assumption ({\bf RV}) is equivalent to the condition \smallskip \noindent ({\bf RV$'$}) \ the functions $x\mapsto\mathbb{E}({{\mathsf E}}(e^{-x}))$ and $x\mapsto\mathbb{E}({\widehat {\mE}}(e^{-x}))$ are regularly varying at $0^+$ with indices $\rho,\, {\widehat \rho}\in (0,1)$ respectively. \smallskip \noindent This alternative formulation has the advantage of providing a way to infer the regularity indices from separate observations of both fragmentation processes, if one were able to measure the energies required to obtain fragments of different close-to-unit sizes. More precisely, $$ \frac{\log \mathbb{E}({{\mathsf E}}(\eta^{\lambda}))- \log \mathbb{E}({{\mathsf E}}(\eta))}{\log \lambda} $$ should be close to $\rho$ for $\eta$ sufficiently close to $1$. Alternatively, $\rho$ could in principle also be deduced from the estimation method for $\phi$ developed in \cite{kre}.
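As a numerical illustration of this inference recipe, take a hypothetical mean-energy function with $\rho=1/2$ and slowly varying factor $L(x)=\log(1/x)$ (both choices are ours, purely for illustration):

```python
import math

RHO = 0.5  # hypothetical regularity index, chosen for this illustration

def mean_energy(x):
    """A hypothetical E(E(e^{-x})) = x**RHO * L(x) with slowly varying
    factor L(x) = log(1/x); valid for 0 < x < 1 (an assumption, not a
    quantity derived from a specific dislocation measure)."""
    return x**RHO * math.log(1 / x)

def rho_estimate(x, lam):
    """(log E(E(eta**lam)) - log E(E(eta))) / log(lam), written in the
    variable x = ell(eta), using ell(eta**lam) = lam * ell(eta)."""
    return (math.log(mean_energy(lam * x)) - math.log(mean_energy(x))) / math.log(lam)
```

The slowly varying factor biases the estimate by $\log\big(L(\lambda x)/L(x)\big)/\log \lambda$, which vanishes only as $\eta\to 1^-$, so the estimate improves as the observed sizes approach the unit mass.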
In the same vein, we remark that the existence of the limit $Q_{\phi,{\widehat \phi}}$ is equivalent to $$ \exists \; Q:=\lim\limits_{\eta\to 1^-} \frac{\mathbb{E}({{\mathsf E}}(\eta))}{\mathbb{E}({\widehat {\mE}}(\eta))}=\frac{C}{{\widehat C}}Q_{\phi,{\widehat \phi}}. $$ In general, Lemma \ref{firstpartofthelimit} indeed shows that $$ {\widehat C}(Q^- -1)\leq \liminf\limits_{\eta\to 1^-} \frac{\Psi(\ell(\eta)) -{\widehat{\Psi}}(\ell(\eta))}{{\widehat L}(\ell(\eta)) {\ell(\eta)}^{{\widehat \rho}}} \leq \limsup\limits_{\eta\to 1^-} \frac{\Psi(\ell(\eta)) -{\widehat{\Psi}}(\ell(\eta))}{{\widehat L}(\ell(\eta)) {\ell(\eta)}^{{\widehat \rho}}}\leq {\widehat C}(Q^+ -1), $$ where $$ Q^+:=\limsup\limits_{\eta\to 1^-} \frac{\mathbb{E}({{\mathsf E}}(\eta))}{\mathbb{E}({\widehat {\mE}}(\eta))}=\frac{C}{{\widehat C}}Q_{\phi,{\widehat \phi}}^+ \;, $$ and $$ Q^-:=\liminf\limits_{\eta\to 1^-} \frac{\mathbb{E}({{\mathsf E}}(\eta))}{\mathbb{E}({\widehat {\mE}}(\eta))}=\frac{C}{{\widehat C}} Q_{\phi,{\widehat \phi}}^-\;. $$ \end{rk} We recall now that, under our assumptions on the Laplace exponents $\widetilde{\phi}$ and $\widehat{{\widetilde \phi}}$, the Dynkin-Lamperti theorem yields the weak convergences, as $\eta\to 1^-$, $$ \widetilde{\mathbb{P}} \left(\frac{\xi_{T_{\eta}}-\ell(\eta)}{\ell(\eta)}\in dy\right) \to \mu(dy):=\frac{\sin(\rho \pi)}{\pi} \frac{dy}{(1+y)y^{\rho}} $$ and $$ \widetilde{\mathbb{P}} \left(\frac{{\widehat \xi}_{{\widehat T}_{\eta}}-\ell(\eta)}{\ell(\eta)}\in dy\right)\to {\widehat{\mu}}(dy):= \frac{\sin({\widehat \rho} \pi)}{\pi} \frac{dy}{(1+y)y^{{\widehat \rho}}}\,. $$ This suggests the way in which $\eta$ and $\eta_0$ should go to $1$ in order to observe a coherent close-to-unit size asymptotic behavior. In all the sequel, $\gamma>1$ is a fixed parameter and we assume that $$ \eta_0=\eta_0(\eta)=\eta^{\gamma}.
$$ We then have the following lemma. \begin{lm}\label{computationofseconterm} \begin{multline} \label{secondterm} \lim_{\eta\to 1^-}\frac{\widetilde {\mathbb{E}} \left( \nbOne_{\xi_{T_{\eta}}< \ell(\eta_0)}e^{(\alpha-{\widehat \beta}) \xi_{T_{\eta}}}{\widehat{\Psi}}(\ell(\eta_0)-\xi_{T_{\eta}})\right) -\widetilde{\mathbb{E}} \left(\nbOne_{{\widehat \xi}_{{\widehat T}_{\eta}}< \ell(\eta_0)}e^{({\widehat \alpha}-{\widehat \beta}){\widehat \xi}_{{\widehat T}_{\eta}}} {\widehat{\Psi}}(\ell(\eta_0)-{\widehat \xi}_{{\widehat T}_{\eta}})\right)} {(\ell(\eta))^{{\widehat \rho}}{\widehat L}(\ell(\eta))} \\ = {\widehat C} \left[ \int_0^{\gamma-1}\!\!\! (\gamma-1-y)^{{\widehat \rho}}\mu(dy)- \int_0^{\gamma-1} \!\!\! (\gamma-1-y)^{{\widehat \rho}} {\widehat{\mu}}(dy)\right]\,. \end{multline} Moreover, in the case $\frac{1}{2}\geq \rho>{\widehat \rho}$, the limit is a nonnegative and increasing function of $\gamma$ for $\gamma\in [1,2]$, which goes to $0$ as $\gamma \to 1^+$. \end{lm} \begin{dem} Denote by $\partial(\eta)$ the numerator on the left-hand side of \eqref{secondterm}, and respectively by $\mu^{\eta}$ and ${{\widehat{\mu}}}^{\eta}$ the laws of $$ \frac{\xi_{T_{\eta}}-\ell(\eta)}{\ell(\eta)} \;\, \hbox{ and } \;\, \frac{{\widehat \xi}_{{\widehat T}_{\eta}}-\ell(\eta)}{\ell(\eta)}.
$$ We then easily see that \begin{equation*} \begin{split} \partial(\eta) \leq & e^{\gamma \ell(\eta) |\alpha-{\widehat \beta}|} \int_0^{\gamma-1}{\widehat{\Psi}}(\ell(\eta)(\gamma-1-y))\mu^{\eta}(dy) \\& - e^{- \gamma \ell(\eta) |{\widehat \alpha}-{\widehat \beta}|} \int_0^{\gamma-1}{\widehat{\Psi}}(\ell(\eta)(\gamma-1-y)){{\widehat{\mu}}}^{\eta}(dy)\\ \end{split} \end{equation*} and \begin{equation*} \begin{split} \partial(\eta) \geq & e^{-\gamma \ell(\eta) |\alpha-{\widehat \beta}|} \int_0^{\gamma-1}{\widehat{\Psi}}(\ell(\eta)(\gamma-1-y))\mu^{\eta}(dy) \\& - e^{\gamma \ell(\eta) |{\widehat \alpha}-{\widehat \beta}|} \int_0^{\gamma-1}{\widehat{\Psi}}(\ell(\eta)(\gamma-1-y)){{\widehat{\mu}}}^{\eta}(dy).\\ \end{split} \end{equation*} On the other hand, by estimates similar to those in the previous lemma, one checks that \begin{equation} \label{theta} \theta(x):=\frac{{\widehat{\Psi}}(x)}{{\widehat C} {\widehat L}(x)x^{{\widehat \rho}}}\to 1 \end{equation} as $x\searrow 0$, and thus $\theta(x)$ is slowly varying at $0$. Fix now $\varepsilon\in (0,1)$, and recall that for a function $G(x)$ slowly varying at $0$, the convergence $G(\lambda x)/G(x)\to 1$ is uniform in $\lambda \in [0,\lambda_0]$ for all $\lambda_0\in (0,\infty)$. Therefore, since \begin{equation*} {\widehat{\Psi}}(\ell(\eta) y)= \frac{\theta(\ell(\eta)y)}{\theta(\ell(\eta))} \frac{{\widehat L}(\ell(\eta) y)}{{\widehat L}(\ell(\eta))} {\widehat{\Psi}}(\ell(\eta))y^{{\widehat \rho}}, \end{equation*} we deduce that if $\eta\in (0,1)$ is sufficiently close to $1$, then \begin{equation*} \forall \, y\in [0,\gamma-1]:\;\;\; (1-\varepsilon){\widehat{\Psi}}(\ell(\eta))y^{{\widehat \rho}} \leq {\widehat{\Psi}}(\ell(\eta) y ) \leq (1+\varepsilon){\widehat{\Psi}}(\ell(\eta)) y ^{{\widehat \rho}}\,.
\end{equation*} Moreover, from \eqref{theta} it follows that if $\eta$ is sufficiently close to $1$, then \begin{equation} \label{ineqs} \forall \, y\in [0,\gamma-1] :\;\;\; \widehat{C} (1-\varepsilon)^2 y^{\widehat{\rho}} \leq \frac{\widehat{\psi}(\ell(\eta) y )} {(\ell(\eta))^{\widehat{\rho}}\widehat{L}(\ell(\eta))} \leq \widehat{C} (1+\varepsilon)^2 y^{\widehat{\rho}}\,. \end{equation} It follows that \begin{equation*} \limsup_{\eta\to 1^-} \frac{\partial(\eta)}{(\ell(\eta))^{\widehat{\rho}} \widehat{L}(\ell(\eta))} \leq (1+\varepsilon)^2 \widehat{C} A_{\gamma}- (1-\varepsilon)^2 \widehat{C} \widehat{A}_{\gamma} \end{equation*} and \begin{equation*} \liminf_{\eta\to 1^-} \frac{\partial(\eta)}{(\ell(\eta))^{\widehat{\rho}} \widehat{L}(\ell(\eta))} \geq (1-\varepsilon)^2 \widehat{C} A_{\gamma}- (1+\varepsilon)^2 \widehat{C} \widehat{A}_{\gamma}, \end{equation*} where $$ A_{\gamma}=\frac{\sin(\pi\rho)}{\pi}\!\int_0^{\gamma-1} \!\!\!\! (\gamma\!-\!1\!-\!u)^{\widehat{\rho}} \frac{du}{(1+u)u^{\rho}}\,, \;\; \widehat{A}_{\gamma}=\frac{\sin(\pi \widehat{\rho})}{\pi} \! \int_0^{\gamma-1} \!\!\!\! (\gamma\!-\!1\!-\!u)^{\widehat{\rho}} \frac{du}{(1+u)u^{\widehat{\rho}}} \,. $$ The first statement follows by letting $\varepsilon\to 0^+$. The asserted properties of $\widehat{C}(A_{\gamma}-\widehat{A}_{\gamma})$ are a consequence of the inequalities $u^{-\rho}>u^{-\widehat{\rho}}$ for $u\in (0,1)$, $\sin(\pi \rho)> \sin(\pi \widehat{\rho})>0$ when $\frac{1}{2}>\rho> \widehat{\rho}$, and dominated convergence. \end{dem} We next introduce some helpful concepts in order to state our results on the energy for large thresholds. \begin{df} \noindent $(i)$ The fragmentation process $\mathbf{X}$ is said to be infinitesimally efficient ({\it inf.
eff.}) compared to $\widehat{\mathbf{X}}$ if ({\bf RV}) holds and $Q_{\phi,\widehat{\phi}}^+<\frac{\widehat{C}}{C}$. \noindent $(ii)$ Conversely, the fragmentation process $\widehat{\mathbf{X}}$ is said to be {\it inf. eff.} compared to $\mathbf{X}$ if ({\bf RV}) holds and $Q_{\phi,\widehat{\phi}}^->\frac{\widehat{C}}{C}$. \end{df} For instance, $\mathbf{X}$ is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ if $\rho> \widehat{\rho}$, or if $\rho=\widehat{\rho}$ and $Q_{\phi,\widehat{\phi}}$ exists in $[0,\frac{\widehat{C}}{C})$. Similarly, $\widehat{\mathbf{X}}$ is {\it inf. eff.} compared to $\mathbf{X}$ e.g. if $\rho< \widehat{\rho}$, or if $\rho=\widehat{\rho}$ and $Q_{\phi,\widehat{\phi}}$ exists in $(\frac{\widehat{C}}{C},\infty]$. \begin{rk} We observe that $\mathbf{X}$ (respectively $\widehat{\mathbf{X}}$) is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ (respectively $\mathbf{X}$) if and only if $\mathbb{E}({\mathsf E}(e^{-x}))$ and $\mathbb{E}(\widehat{\mathsf E}(e^{-x}))$ are regularly varying functions at $x=0$ with indices in $(0,1)$ and $Q^+=\limsup\limits_{\eta\to 1^-} \frac{\mathbb{E}({\mathsf E}(\eta))}{\mathbb{E}(\widehat{\mathsf E}(\eta))}<1$ (respectively $Q^-=\liminf\limits_{\eta\to 1^-} \frac{\mathbb{E}({\mathsf E}(\eta))}{\mathbb{E}(\widehat{\mathsf E}(\eta))}>1$). \end{rk} Putting everything together, we obtain: \begin{theo} \label{mainforetacloseto1} (Two-step procedure versus second fragmentation only) For each $\gamma \in (1,\infty)$ it holds: \smallskip \noindent $(a)$ If $\widehat{\mathbf{X}}$ is {\it inf. eff.} compared to $\mathbf{X}$ and $Q^-=Q=\infty$ (in particular if $\widehat{\rho} >\rho$), then: $\forall \, M> 0\;$ $\exists\, \eta_M\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_M,1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))> \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+M \mathbb{E}(\widehat{\mathsf E}(\eta))> \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma})).
\end{equation*} \noindent $(b)$ If $\widehat{\mathbf{X}}$ is {\it inf. eff.} compared to $\mathbf{X}$ and $Q^-\in (1,\infty)$ (and thus $\rho=\widehat{\rho}$), then: $\forall \, \varepsilon\in (0,Q^- -1)\;$ $\exists\, \eta_{\varepsilon}\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_{\varepsilon},1] : \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))> \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (Q^- -1-\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta)) > \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma})). \end{equation*} \noindent $(c)$ If $\mathbf{X}$ is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ and $Q^+\in (0,1)$ (and thus $\rho=\widehat{\rho}$), then: $\forall \, \varepsilon\in (0,1-Q^+)\;$ $\exists\, \eta_{\varepsilon}\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_{\varepsilon},1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma})) < \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (Q^+-1+\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta))<\mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma})). \end{equation*} \noindent $(d)$ If $\mathbf{X}$ is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ and $Q^+=Q=0$ (in particular if $\widehat{\rho}<\rho$), then: $\forall \, \varepsilon\in (0,1)$, $\exists\, \eta_{\varepsilon}\in (0,1)$ such that $\forall \, \eta\in (\eta_{\varepsilon},1]$: \begin{multline*} \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (A_{\gamma}-\widehat{A}_{\gamma}-1-\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta)) \!< \!\mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))\!< \! \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))\!+\! (A_{\gamma}-\widehat{A}_{\gamma}-1+\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta)). \end{multline*} (The quantities $A_{\gamma}$ and $\widehat{A}_{\gamma}$ were defined in Lemma \ref{computationofseconterm}).
Moreover, if $\frac{1}{2}\geq \rho>\widehat{\rho}$, then $\exists \, \gamma_0\in (1,2]$ such that $\forall \, \gamma \in (1,\gamma_0]$ one has $1-A_{\gamma}+\widehat{A}_{\gamma}>0$, and $\forall \, \varepsilon \in (0,1-A_{\gamma}+\widehat{A}_{\gamma})$, \begin{equation*} \forall \, \eta\in (\eta_{\varepsilon},1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))< \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (A_{\gamma}-\widehat{A}_{\gamma}-1+\varepsilon) \mathbb{E}(\widehat{\mathsf E}(\eta))<\mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma})). \end{equation*} In all four cases, similar statements hold with $\mathbb{E}(\widehat{\mathsf E}(\eta))$ replaced by $\widehat{C} \left[\widehat{\widetilde{\phi}}\left(\frac{1}{\log (1/\eta)}\right)\right]^{-1}$. \end{theo} \begin{dem} By Remark \ref{decomp} and the previous results, we simply have to notice that, as $\eta \to 1^-$, $$ \frac{\mathbb{E}(\widehat{\mathsf E}(\eta))}{\widehat{C}}\sim \widehat{L}(\ell(\eta))(\ell(\eta))^{\widehat{\rho}} = \left[\widehat{\widetilde{\phi}}\left(\frac{1}{\ell(\eta)}\right)\right]^{-1}, $$ that ${\cal E}(1 ,\eta^{\gamma})= \widehat{\mathsf E}(\eta^{\gamma})$, and that the quantities $A_{\gamma}$ and $\widehat{A}_{\gamma}$ are equal if $\rho=\widehat{\rho}$. The last assertion in part $(d)$ is a consequence of the last part of Lemma \ref{computationofseconterm}. \end{dem} The previous theorem provided conditions on large thresholds $\eta$ and $\eta_0$ under which the use of the second fragmentation process can be said to be efficient or not. We next briefly address the efficiency of using, or not using, the first fragmentation process. The arguments of the following theorem are similar to those of the previous lemmas, so we only sketch its proof. We use the following notation: $$ \forall \, \gamma \in (1,\infty): \;\;\; B_{\gamma}:=\frac{\sin(\pi\rho)}{\pi} \! \int_0^{\gamma-1} \!\!(\gamma\!-\!1\!-\!u)^{\rho} \frac{du}{(1+u)u^{\rho}}\,.
$$ \begin{theo} (Two-step procedure versus first fragmentation only) For all $\gamma \in (1,\infty)$ it holds: \noindent $(a)$ If $\widehat{\mathbf{X}}$ is {\it inf. eff.} compared to $\mathbf{X}$, then: $\forall \, \varepsilon\in (0,1-\frac{1}{Q^-})\;$ $\exists \, \eta_{\varepsilon}\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_{\varepsilon},1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))< \mathbb{E}({\mathsf E}(\eta^{\gamma}))+ \left(\frac{1}{Q^-}-1 +\varepsilon\right) B_{\gamma}\mathbb{E}({\mathsf E}(\eta)) < \mathbb{E}({\mathsf E}(\eta^{\gamma})). \end{equation*} \noindent $(b)$ If $\mathbf{X}$ is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ and $Q^+\in (0,1)$ (and thus $\rho=\widehat{\rho}$), then: $\forall \, \varepsilon\in (0,\frac{1}{Q^+}-1)$, $\exists \, \eta_{\varepsilon}\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_{\varepsilon},1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))> \mathbb{E}({\mathsf E}(\eta^{\gamma}))+ \left(\frac{1}{Q^+}-1 -\varepsilon\right) B_{\gamma}\mathbb{E}({\mathsf E}(\eta)) > \mathbb{E}({\mathsf E}(\eta^{\gamma})). \end{equation*} \noindent $(c)$ If $\mathbf{X}$ is {\it inf. eff.} compared to $\widehat{\mathbf{X}}$ and $Q^+=Q=0$ (in particular if $\widehat{\rho}<\rho$), then: $\forall \, M> 0\;$ $\exists \, \eta_M\in (0,1)$ such that \begin{equation*} \forall \, \eta\in (\eta_M,1]: \;\;\; \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))> \mathbb{E}({\mathsf E}(\eta^{\gamma}))+M \mathbb{E}({\mathsf E}(\eta))> \mathbb{E}({\mathsf E}(\eta^{\gamma})). \end{equation*} In all cases, one can replace $\mathbb{E}({\mathsf E}(\eta))$ by $C\left[\widetilde{\phi}\left(\frac{1}{\ell(\eta)}\right)\right]^{-1}$. \end{theo} \begin{dem} Fix $\gamma >1$ and $\varepsilon\in (0,1)$.
As in Lemma \ref{computationofseconterm}, we get that for all $y\in [0,\gamma-1]$, \begin{equation*} C (1-\varepsilon)^2 y^{\rho} \leq \frac{\psi(\ell(\eta) y )} {(\ell(\eta))^{\rho}L(\ell(\eta))} \leq C (1+\varepsilon)^2 y^{\rho} \end{equation*} and \begin{equation*} \widehat{C} (1-\varepsilon)^2 \frac{(\ell(\eta))^{\widehat{\rho}}\widehat{L}(\ell(\eta))}{(\ell(\eta))^{\rho}L(\ell(\eta))} y^{\widehat{\rho}} \leq \frac{\widehat{\psi}(\ell(\eta) y )} {(\ell(\eta))^{\rho}L(\ell(\eta))} \leq \widehat{C} (1+\varepsilon)^2 \frac{(\ell(\eta))^{\widehat{\rho}} \widehat{L}(\ell(\eta))}{(\ell(\eta))^{\rho}L(\ell(\eta))} y^{\widehat{\rho}} \end{equation*} if $\eta$ is close enough to $1$. Set now $\overline{\partial}(\eta):=\mathbb{E}({\cal E}(\eta ,\eta^{\gamma})-{\cal E}(\eta^{\gamma},\eta^{\gamma}))$. From the previous bounds, and from the explicit expression for $\overline{\partial}(\eta)$ given in Remark \ref{decomp}, we deduce that \begin{equation*} C \left(\frac{A_{\gamma}}{Q^+}- B_{\gamma}\right) \leq \liminf_{\eta\to 1^-} \frac{\overline{\partial}(\eta)}{(\ell(\eta))^{\rho} L(\ell(\eta))} \leq \limsup_{\eta\to 1^-} \frac{\overline{\partial}(\eta)}{(\ell(\eta))^{\rho} L(\ell(\eta))} \leq C\left(\frac{A_{\gamma}}{Q^-} - B_{\gamma}\right). \end{equation*} Part $(a)$ follows from this relation, using the facts that $A_{\gamma}=B_{\gamma}$ if $\rho =\widehat{\rho}$, and that $Q^- =\infty$ if $\widehat{\rho}> \rho$. The remaining parts are similar.
\end{dem} \begin{rk} If $\rho=\widehat{\rho}$ and $Q\in (0,\infty)$ exists, one obtains, for $\eta$ close enough to $1$, \begin{equation*} \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (Q-1-\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta)) < \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))< \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))+ (Q-1+\varepsilon)\mathbb{E}(\widehat{\mathsf E}(\eta)) \end{equation*} and \begin{equation*} \mathbb{E}({\mathsf E}(\eta^{\gamma}))+ (Q^{-1}-1-\varepsilon)B_{\gamma}\mathbb{E}({\mathsf E}(\eta)) < \mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))< \mathbb{E}({\mathsf E}(\eta^{\gamma}))+ (Q^{-1}-1+\varepsilon)B_{\gamma}\mathbb{E}({\mathsf E}(\eta)). \end{equation*} In particular, when $Q=1$ we deduce that $\mathbb{E}({\cal E}(\eta ,\eta^{\gamma}))\sim \mathbb{E}(\widehat{\mathsf E}(\eta^{\gamma}))\sim \mathbb{E}({\mathsf E}(\eta^{\gamma}))$ as $\eta\to 1^-$, as one would expect. \end{rk} \noindent {\bf Acknowledgments.} J. Fontbona and S. Mart\'\i nez are indebted to the Basal CONICYT Project. \noindent JOAQU\'IN FONTBONA \noindent {\it Departamento Ingenier{\'\i}a Matem\'atica and Centro Modelamiento Matem\'atico, Universidad de Chile, UMI 2807 CNRS, Casilla 170-3, Correo 3, Santiago, Chile.} e-mail: $<[email protected]>$ \noindent NATHALIE KRELL \noindent {\it IRMAR, Universit\'e Rennes 1, Campus de Beaulieu, 35042 Rennes Cedex, France.} e-mail: $<[email protected]>$ \noindent SERVET MART\'INEZ \noindent {\it Departamento Ingenier{\'\i}a Matem\'atica and Centro Modelamiento Matem\'atico, Universidad de Chile, UMI 2807 CNRS, Casilla 170-3, Correo 3, Santiago, Chile.} e-mail: $<[email protected]>$ \end{document}
\begin{document} \title{Entropy of a quantum error correction code} \author[David W. Kribs]{David W. Kribs$^{1}$} \address{$^{1}$Department of Mathematics and Statistics, University of Guelph, Guelph, Ontario, N1G 2W1, Canada\\and Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada} \author[Aron Pasieka]{Aron Pasieka$^{2}$} \address{$^{2}$Department of Physics, University of Guelph, Guelph, Ontario, N1G 2W1, Canada} \author[Karol {\.Z}yczkowski]{Karol {\.Z}yczkowski$^{3}$} \address{$^{3}$Instytut Fizyki im. Smoluchowskiego, Uniwersytet Jagiello{\'n}ski, ul. Reymonta 4, 30-059 Krak{\'o}w, Poland\\and Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotnik{\'o}w 32/44, 02-668 Warszawa, Poland} \begin{abstract} We define and investigate a notion of entropy for quantum error correcting codes. The entropy of a code for a given quantum channel has a number of equivalent realisations, such as through the coefficients associated with the Knill-Laflamme conditions and the entropy exchange computed with respect to any initial state supported on the code. In general the entropy of a code can be viewed as a measure of how close it is to the minimal entropy case, which is given by unitarily correctable codes (including decoherence-free subspaces), or the maximal entropy case, which from dynamical Choi matrix considerations corresponds to non-degenerate codes. We consider several examples, including a detailed analysis in the case of binary unitary channels, and we discuss an extension of the entropy to operator quantum error correcting subsystem codes. \end{abstract} \maketitle \section{Introduction} Quantum error correcting codes are a central weapon in the battle to overcome the effects of environmental noise associated with attempts to control quantum mechanical systems as they evolve in time \cite{NC00,Got06}. 
It is thus important to develop techniques that assist in determining whether one code is better than another for a given noise model. In this paper we make a contribution to this study by introducing a notion of entropy for quantum error correcting codes. No single quantity can be expected to capture all information about a code, its entropy included. Nevertheless, the entropy of a code is one way in which the amount of effort required to recover a code can be quantified. In the extremal case, a code has zero entropy if and only if it can be recovered with a single unitary operation. This is the simplest of all correction operations in that a measurement is not required as part of the correction process. These codes have recently been coined {\it unitarily correctable} \cite{KLPL05,KS06,BNPV07}, and include {\it decoherence-free subspaces} \cite{DG97,KLV00a,LCW98a,Zan01b,ZR97c} in the case that the recovery is the trivial identity operation. Thus, more generally, the entropy can be regarded as a measure of how close a code is to being unitarily correctable, or decoherence-free in some cases. In the next section we introduce the entropy of a code, along with the required nomenclature. We also consider an example motivated by the stabilizer formalism \cite{Got96} and discuss an extension of code entropy to operator quantum error correcting subsystem codes \cite{KLP05,Pou05,Bac05,KlaSar06}. We then consider in detail an illustrative class of quantum operations for which the code structure has recently been characterised, the class of {\it binary unitary channels} \cite{CKZ06a,CKZ06b,CHKZ08,Woe07,LS08,CGHK08,Kazakov08}. \section{Entropy of a Quantum Error Correction Code} Let $\rho$ denote a quantum state: a Hermitian, positive operator satisfying the trace normalisation condition Tr$\rho=1$.
A linear quantum operation (or channel) $\Phi$, which sends a density operator $\rho$ of size $N$ to an image $\rho'$ of the same size, may be described in the Choi-Kraus form \cite{Cho75a,Kr71} \begin{equation} \rho'=\Phi(\rho) =\sum_{i=1}^{M} E_i\rho E_i^{\dagger} . \label{Kraus} \end{equation} The Kraus operators $E_i$ can be chosen to be orthogonal, $\langle E_i|E_j\rangle={\rm Tr}E_i^{\dagger}E_j = d_i \delta_{ij}$, so that the non-negative weights $d_i$ become eigenvalues of the dynamical (Choi) matrix, $D_\Phi=(\langle E_i|E_j\rangle )$. We refer to the rank of the Choi matrix as the {\it Choi rank} of $\Phi$. Observe that the Choi rank of $\Phi$ is equal to the minimal number of Kraus operators required to describe $\Phi$ as in (\ref{Kraus}). Hence in the canonical form the number $M$ of non-zero Kraus operators does not exceed $N^2$. By Choi's theorem, the map $\Phi$ is completely positive (CP) if and only if the corresponding dynamical matrix is positive, which in turn holds if and only if $\Phi$ admits a form as in (\ref{Kraus}). The map $\Phi$ is {\it trace preserving}, ${\rm Tr} \rho'={\rm Tr}\rho=1$, if and only if $\sum_{i=1}^{N^2} E_i^{\dagger} E_i =\mathbbm{1}$, where we assume some $E_{i}$ are zero if $M$ is less than $N^{2}$. The family of operators $E_i$ is not unique. However, if $\{F_j\}$ is another family of operators that determines $\Phi$ as in (\ref{Kraus}), then there is a scalar unitary matrix $U=(u_{ij})$ such that $E_i = \sum_j u_{ij} F_j$ for all $i$. We refer to this as the {\it unitary invariance} of Choi-Kraus decompositions. \subsection{Entropy Exchange and Lindblad Theorem} To characterise the information missing in a quantum state one uses its {\sl von Neumann} entropy, \begin{equation}\label{vNentropy}S(\rho) \ = \ - {\rm Tr} \rho \log \rho.\end{equation} We will use the convention that $\log$ refers to the logarithm base two, as this provides a cleaner operational qubit definition in the context of quantum information.
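As a quick numerical aside (our illustration, not part of the original text), the von Neumann entropy (\ref{vNentropy}) depends only on the spectrum of $\rho$, so for a state with known eigenvalues it reduces to the base-two Shannon entropy of those eigenvalues:

```python
import math

def von_neumann_entropy(eigenvalues):
    # S(rho) = -Tr(rho log2 rho), computed from the spectrum of rho.
    # Zero eigenvalues contribute nothing (0 log 0 = 0 by continuity).
    return 0.0 - sum(p * math.log2(p) for p in eigenvalues if p > 0)

print(von_neumann_entropy([1.0, 0.0]))   # pure state -> 0.0
print(von_neumann_entropy([0.5, 0.5]))   # maximally mixed qubit -> 1.0
print(von_neumann_entropy([0.25] * 4))   # maximally mixed two-qubit state -> 2.0
```

With the base-two convention, the maximally mixed state on $n$ qubits has entropy exactly $n$ bits.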
In order to describe the action of a CP map $\Phi$, represented in the canonical Choi-Kraus form (\ref{Kraus}), on an initial state $\rho$, we may compare its entropy with the entropy of the image $S(\rho')=S(\Phi(\rho))$. To obtain a bound for such an entropy change we can define an operator $\sigma=\sigma(\Phi,\rho)$ acting on an extended Hilbert space ${\cal H}_{N^2}$, \begin{equation} \sigma_{ij} \ = \ \mbox{Tr} \rho E_i^{\dagger} E_j , \quad i,j=1,\dots, N^2 \ . \label{osigma} \end{equation} \noindent If the map $\Phi$ is stochastic, the operator $\sigma$ is positive semidefinite and normalised, so it represents a density operator in its own right, $\sigma \in {\cal M}_{N^2}$ (specifically, it is an initially pure environment evolved by the unitary dilation of $\Phi$). Observe that for any unitary map, $\Phi_U(\rho)=U\rho U^{\dagger}$, the form (\ref{Kraus}) consists of a single term only. Hence in this case the operator $\sigma$ reduces to a single number equal to unity, and its entropy vanishes, $S\bigl(\sigma(\Phi_U,\rho)\bigr)=0$. The auxiliary state $\sigma$ acting in an extended Hilbert space was used by Lindblad to derive bounds for the entropy of the image $\rho'=\Phi(\rho)$ of any initial state. The bounds of Lindblad \cite{Li91}, \begin{equation} 0 \ \le \ |S(\rho')-S(\sigma)| \ \le \ S(\rho) \ \le \ S(\sigma)+S(\rho') \ , \label{boundent} \end{equation} are obtained by defining another density matrix in the composite Hilbert space ${\cal H}_N \otimes {\cal H}_{M}$, \begin{equation} \omega \: = \: \sum_{i=1}^{M} \sum_{j=1}^{M} E_j \rho E_i^{\dagger} \otimes |i\rangle\langle j| \ , \label{omega} \end{equation} \noindent where $M=N^2$ and $|i\rangle$ forms an orthonormal basis in ${\cal H}_{M}$. Thus, $\omega$ is simply the system and an initially pure environment evolved by the unitary dilation of $\Phi$. Computing partial traces one finds that Tr$_N \omega=\sigma$ and Tr$_M \omega=\rho'$.
It is possible to verify that $S(\omega) = S(\rho)$, so making use of the subadditivity of entropy and the triangle inequality \cite{AL70} one arrives at (\ref{boundent}). If the initial state is pure, that is if $S(\rho) = 0$, we find from (\ref{boundent}) that the final state $\rho'$ has entropy $S(\sigma)$. For this reason $S(\sigma)$ was called the {\it entropy exchange} of the operation by Schumacher \cite{Sc96}. In that work an alternative formula for the entropy exchange was proven, \begin{equation} S\bigl(\sigma(\Phi,\rho)\bigr) \ = \ S \Bigl( ( \Phi \otimes {\mathbbm 1}) |\psi\rangle \langle \psi| \Bigr) \ , \label{osigma2} \end{equation} \noindent where $|\psi\rangle$ is an \emph{arbitrary} purification of the mixed state, Tr$_B |\psi\rangle \langle \psi|=\rho$. Thus, the entropy exchange is invariant under purification of the initial state and remains a function only of the initial density operator $\rho$ and the map $\Phi$. \subsection{Quantum Error Correcting Codes} A quantum operation $\Phi$ admits an error correction scheme in the standard framework for quantum error correction \cite{Got96,Sho95a,Ste96a,BDSW96a,KL97} if there exists a subspace $\cal C$ such that for some set of complex scalars $\Lambda = (\lambda_{ij})$ the corresponding projection operator $P_{\cal{C}}$ satisfies \begin{eqnarray} P_{\cal{C}} E_i^{\dagger} E_j P_{\cal{C}} \ = \ \lambda_{ij} P_{\cal{C}} {\rm \quad for \quad all \quad} i,j=1,\dots,N^2. \label{compression} \end{eqnarray} Specifically, this is equivalent to the existence of a quantum {\sl recovery operation} $\Psi$ such that \begin{equation} \Psi \circ \Phi \circ {\cal{P}}_{\cal{C}} = {\cal{P}}_{\cal{C}}, \label{superoperator} \end{equation} where ${\cal{P}}_{\cal{C}}$ is the map ${\cal{P}}_{\cal{C}}(\rho) = P_{\cal{C}} \rho P_{\cal{C}}$. The subspace related to $P_{\cal{C}}$ determines a {\sl quantum error correcting code} (QECC) for the map $\Phi$.
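To make the entropy-exchange matrix (\ref{osigma}) concrete, the following sketch (ours; the single-qubit bit-flip channel and the value $p=0.25$ are illustrative choices, not taken from the text) computes $\sigma_{ij}={\rm Tr}\,\rho E_i^{\dagger}E_j$ for the Kraus operators $E_1=\sqrt{1-p}\,\mathbbm{1}$, $E_2=\sqrt{p}\,X$ and the pure input $\rho=|0\rangle\langle 0|$:

```python
import math

def dag(m):  # conjugate transpose of a 2x2 matrix
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def mul(a, b):  # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def tr(m):  # trace of a 2x2 matrix
    return m[0][0] + m[1][1]

p = 0.25
I = [[1.0, 0.0], [0.0, 1.0]]
X = [[0.0, 1.0], [1.0, 0.0]]
E = [[[math.sqrt(1 - p) * I[i][j] for j in range(2)] for i in range(2)],  # E1
     [[math.sqrt(p) * X[i][j] for j in range(2)] for i in range(2)]]      # E2

rho = [[1.0, 0.0], [0.0, 0.0]]  # the pure state |0><0|

# Entropy-exchange matrix sigma_ij = Tr(rho E_i^dagger E_j).
sigma = [[tr(mul(rho, mul(dag(E[i]), E[j]))) for j in range(2)] for i in range(2)]
```

For this input the off-diagonal entries vanish, so $\sigma={\rm diag}(1-p,p)$ and the entropy exchange equals the binary entropy of $p$; it vanishes precisely when the channel acts unitarily ($p\in\{0,1\}$).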
A special class of codes is given by the {\it unitarily correctable codes} (UCC), which are characterised by the existence of a unitary recovery operation $\Psi(\rho) = U \rho U^\dagger$. These codes include {\it decoherence-free subspaces} (DFS) in the case that $\Psi$ is the identity map, $\Psi(\rho)=\rho$. It can be shown that the matrix $\Lambda = (\lambda_{ij})$ is Hermitian and positive, and is in fact a density matrix, so it can be considered as an auxiliary state acting in an extended Hilbert space of size at most $N^2$. It is easy to obtain a more refined global upper bound on the rank of $\Lambda$ in terms of the map $\Phi$. \begin{lemma}\label{upperbound} Let $\Lambda$ be the matrix determined by a code $\mathcal C$ for a quantum map $\Phi$. Then the rank of $\Lambda$ is bounded above by the Choi rank of $\Phi$. \end{lemma} \noindent\textbf{Proof.} Without loss of generality assume the Choi matrix $D_\Phi$ is diagonal. We have, for all $i$, \begin{equation} \lambda_{ii} \dim {\mathcal C} = \mbox{Tr}(\lambda_{ii}P_{\mathcal{C}})= \mbox{Tr} (P_{{\mathcal C}} E_i^\dagger E_i P_{{\mathcal C}}) \leq \mbox{Tr} (E_i^\dagger E_i) = \langle E_i|E_i\rangle , \end{equation} and the result follows from the positivity of $D_\Phi$ and $\Lambda$. \pfQED Unitarily correctable codes are typically highly {\it degenerate} codes, as the map $\Phi$ collapses to a single unitary operation when restricted to the code subspace. In particular, the unitary invariance of Choi-Kraus representations implies that the restricted operators $E_i P_{\mathcal C}$ are all scalar multiples of a single unitary. More generally, one can see that a code ${\mathcal C}$ is degenerate for $\Phi$ precisely when the Choi rank of $\Phi$ is strictly greater than the rank of $\Lambda$.
Indeed, the Choi rank counts the minimal number of Kraus operators required to implement $\Phi$ via (\ref{Kraus}), and satisfaction of this strict inequality means there is redundancy in the description of $\Phi \circ {\mathcal P}_{\mathcal C}$ by the operators $E_i P_{\mathcal C}$. For these reasons we shall refer to codes as {\it non-degenerate} if the Choi rank of $\Phi$ coincides with the rank of $\Lambda$ and the spectrum of $\Lambda$ is equally balanced -- that is to say, non-degenerate codes correspond to the maximally degenerate error correction matrix, with $\Lambda$ proportional to the identity matrix. \subsection{Entropy of a Code} Assume now that an error correcting code $\cal C$ exists and all conditions (\ref{compression}) are satisfied. If a quantum state $\rho$ is supported on the code then $P_{\cal{C}} \rho P_{\cal{C}} = \rho$ and calculation of the entropy exchange (\ref{osigma}) simplifies, \begin{equation} \sigma_{ij} = \ \mbox{Tr} \rho E_i^{\dagger} E_j = \mbox{Tr} P_{\cal{C}} \rho P_{\cal{C}} E_i^{\dagger} E_j = \mbox{Tr} \rho P_{\cal{C}} E_i^{\dagger} E_j P_{\cal{C}} = \mbox{Tr} \rho \lambda_{ij} P_{\cal{C}}=\lambda_{ij} . \label{osigma3} \end{equation} In this way we have shown that the error correction matrix $\Lambda$ is {\sl equal} to the auxiliary matrix $\sigma$ of Lindblad, provided the initial state belongs to the code subspace. From another direction, given an error correcting code $\cal{C}$ for a map $\Phi$, in \cite{KS06,NaySen06} it was shown that there is a quantum state $\tau$ and an isometry $V$ such that for all $\rho = P_{\cal{C}} \rho P_{\cal{C}}$, \begin{equation} \Phi(\rho) = V (\tau \otimes \rho) V^\dagger. \label{ssprinc} \end{equation} The result, which can be seen as a consequence of the decoupling condition of \cite{SW02}, gives an explicit way to ``see'' a code at the output stage of a quantum process for which the code is correctable.
The result (and its subsystem generalization -- see below) may also be viewed as a formalisation of the {\it subsystem principle} for preserving quantum information \cite{KLV00a}. From the proof of this result one can see directly that the entropy of $\tau$ satisfies $S(\tau) = S(\Lambda)$. This equality follows also from the fact that $\tau$ and $\Lambda$ can be interpreted as the states obtained by partial trace of an initially pure state with respect to two different subsystems. Thus, from multiple perspectives we find motivation for the following: \begin{definition}\label{entdefn} Given a quantum operation $\Phi$ with Kraus operators $\left\{E_{i}\right\}$ and a code $\cal{C}$ with matrix $\Lambda$ given by (\ref{compression}), we call the von Neumann entropy $S(\Phi,{\cal{C}}):= S(\Lambda)$ the \emph{entropy of} ${\cal{C}}$ \emph{relative to} $\Phi$. \end{definition} The entropy of a code depends only on the map and the subspace defined by $P_{\mathcal{C}}$, not on any particular state in the code subspace. Thus, the entropy exchange will be the same for all initial states supported on the code subspace and is therefore a property of the code itself. In the following result we determine what possible values the code entropy can take, and we derive a characterisation of the extremal cases in terms of both the code and the map. \begin{theorem}\label{ucccase} Let $\Phi$ be a quantum operation and let ${\mathcal C}$ be a code with matrix $\Lambda$ given by (\ref{compression}). Then $S(\Phi,{\cal{C}})$ belongs to the closed interval $[0,\log D]$, where $D$ is the Choi rank of $\Phi$. Furthermore, the extremal cases are characterised as follows: \begin{itemize} \item[$(i)$] $S(\Phi,{\cal{C}})=0$ if and only if ${\mathcal C}$ is a unitarily correctable code for $\Phi$. \item[$(ii)$] $S(\Phi,{\cal{C}})=\log D$ if and only if $\mathcal C$ is a non-degenerate code for $\Phi$. 
\end{itemize} \end{theorem} \noindent\textbf{Proof.} By Lemma~\ref{upperbound} and the subsequent discussion, the maximal entropy case occurs when the ranks of $\Lambda$ and $D_\Phi$ coincide and the spectrum of $\Lambda$ is equally balanced; that is, the code is non-degenerate. This occurs (by a standard spectral majorization argument) precisely when the code entropy satisfies $S(\Phi,{\cal{C}})=\log D$. For the minimal entropy case, first suppose that $\cal{C}$ is a UCC for $\Phi$. Then by (\ref{superoperator}) there is a unitary operation ${\cal{U}}(\rho)= U\rho U^\dagger$ such that $\Phi \circ {\cal{P}}_{\cal{C}} = {\cal{U}}\circ {\cal{P}}_{\cal{C}}$. Thus by the unitary invariance of Choi-Kraus decompositions, it follows that $E_iP_{\cal{C}} = \alpha_i U P_{\cal{C}}$ for some scalars $\alpha_i$. Hence we have $\Lambda = (\overline{\alpha}_i \alpha_j) = \kb{\psi}{\psi}$, where $\ket{\psi}$ is the vector state with coordinates $\alpha_i$, and so $S(\Phi,{\cal{C}})=S(\Lambda)=0$. On the other hand, suppose $\Lambda = \kb{\psi}{\psi}$ is rank one. Let $V$ be a scalar unitary that diagonalises $\Lambda$. It follows that $V$ induces a unitary change of representation for $\Phi$ via (\ref{compression}) from $\{E_i\}$ to $\{F_j\}$. But since $V\Lambda V^\dagger$ is diagonal, only one of the compressed operators $F_jP_{\cal{C}}$, say $FP_{\cal{C}}$, is non-zero, and hence by unitary invariance we have $FP_{\cal{C}}=UP_{\cal{C}}$ for some unitary $U$. Thus $\Phi \circ {\cal{P}}_{\cal{C}} = \cal{U}\circ {\cal{P}}_{\cal{C}}$, and the result follows. \pfQED From an operational perspective, the numerical value of the entropy of a code allows us to quantify the number of ancilla qubits needed to perform a recovery operation. Specifically, the Choi rank, $D$, of a map $\Phi$ gives the minimum number of Kraus operators necessary to describe the map or, by Stinespring's dilation theorem~\cite{stinespring}, the dimension of the ancilla required to implement $\Phi$ as a unitary.
The rank of $\Lambda$ then gives the number of Kraus operators necessary to describe the action of the map restricted to $\mathcal{C}$, and thus the number of Kraus operators, $M$, necessary for a recovery operation in the usual measurement cum reversal picture of recovery. Again by Stinespring's dilation theorem we need an $M$-dimensional ancilla to implement the recovery as a unitary, and thus this requires $\log{M}$ qubits. Hence the entropy is equal to zero for a unitarily correctable code, for which the action of the noise is unitary and thus requires no ancilla to implement the recovery operation. If the code entropy is positive, then any state of the code can potentially evolve to any one of multiple locations in the system Hilbert space under the action of the noise $\Phi$. This fact has to be compensated for by the recovery operation $\Psi$. The maximal entropy case for a particular $\Phi$ is characterised by evolution to each of these locations ($M=D$ by Lemma~\ref{upperbound}) with equal probability (by Theorem~\ref{ucccase}). Here the entropy, and thus the number of qubits in the ancilla, will be $\log{D}$. \subsection{Stabilizer Example} As an example from the stabilizer formalism, consider a three-qubit system with the usual notation $X_i$, $Z_i$, $i=1,2,3$, for Pauli operators \cite{NC00}. The single-qubit stabilizer code with generators $\{Z_{1}Z_{2},Z_{2}Z_{3}\}$ is spanned by $|0_L\rangle = |000\rangle$ and $|1_L\rangle = |111\rangle$. The set of operators $\{\mathbbm{1},X_{1},X_{2},X_{3}\}$ forms a correctable set of errors for this stabilizer code; thus we can consider a channel composed of these operators -- for example, the three-qubit bit-flip channel with Kraus operators $E_{1}=\sqrt{\frac{1}{3}(3-p-q-r)}\mathbbm{1}$, $E_{2}=\sqrt{\frac{1}{3}p}X_{1}$, $E_{3}=\sqrt{\frac{1}{3}q}X_{2}$ and $E_{4}=\sqrt{\frac{1}{3}r}X_{3}$.
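The Knill-Laflamme coefficients for this channel and code can be verified numerically. The sketch below (ours; the choice $p=q=r=\frac{3}{4}$ is one particular assignment of the error weights) computes $\lambda_{ij}={\rm Tr}(P_{\mathcal{C}}E_i^{\dagger}E_jP_{\mathcal{C}})/\dim\mathcal{C}$ by tracking which qubits each error flips:

```python
import math

p = q = r = 0.75  # illustrative error weights
weights = [math.sqrt((3 - p - q - r) / 3),  # E1: identity
           math.sqrt(p / 3),                # E2: X1
           math.sqrt(q / 3),                # E3: X2
           math.sqrt(r / 3)]                # E4: X3
flips = [0b000, 0b100, 0b010, 0b001]  # bit mask of qubits flipped by each operator

code = [0b000, 0b111]  # basis of the code subspace: |000>, |111>

def lam(i, j):
    # E_i^dagger E_j flips the bits in flips[i] XOR flips[j]; a basis state
    # contributes to the trace only if that combined flip leaves it fixed.
    mask = flips[i] ^ flips[j]
    return weights[i] * weights[j] * sum(1 for x in code if x ^ mask == x) / len(code)

Lambda = [[lam(i, j) for j in range(4)] for i in range(4)]
entropy = -sum(Lambda[i][i] * math.log2(Lambda[i][i]) for i in range(4) if Lambda[i][i] > 0)
# Off-diagonal entries vanish, and for p = q = r = 3/4 the spectrum of Lambda is
# uniform, giving the maximal code entropy log 4 = 2.
```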
Using $P_{\mathcal{C}}=|000\rangle\langle 000| + |111\rangle\langle 111|$,~(\ref{compression}) tells us that the error correction matrix is $$\Lambda=\left(\begin{array}{cccc}\frac{1}{3}(3-p-q-r)&0&0&0\\0&\frac{1}{3}p&0&0\\0&0&\frac{1}{3}q&0\\0&0&0&\frac{1}{3}r\end{array}\right).$$ The entropy of this code is therefore \begin{multline}S(\Lambda)=-\frac{1}{3}(3-p-q-r)\log{\frac{1}{3}(3-p-q-r)} \\-\frac{1}{3}p\log{\frac{1}{3}p}-\frac{1}{3}q\log{\frac{1}{3}q}-\frac{1}{3}r\log{\frac{1}{3}r}.\notag\end{multline} As we would expect, the minimum entropy is achieved when $p=q=r=0$. The maximum entropy occurs when \hbox{$p=q=r=\frac{3}{4}$}, which we might also expect since this puts the auxiliary density matrix $\Lambda$ into the maximally mixed state. Here, the Choi rank of our map is $4$ as is the rank of $\Lambda$, and the spectrum of $\Lambda$ is equally balanced. So $\mathcal{C}$ is non-degenerate for $\Phi$. Further, as we expect from Theorem~\ref{ucccase}, the entropy is \hbox{$\log{4}=2$}. Since the rank of $\Lambda$ is the number of Kraus operators required for a recovery operation we find that in order to dilate the recovery operation to a unitary process, we need a $4$-dimensional space, or equivalently a $2$-qubit ancilla, which matches the value of the entropy. We next consider a variant of this noise model, in which the third qubit undergoes no error and the first and second qubits are flipped with probability equal to that of no error. Thus the map $\Phi$ has noise operators $ \{ I, X_1, X_2 \}$, each weighted by $\frac{1}{\sqrt{3}}$. The entropies for a trio of correctable codes for $\Phi$ are given in Table~\ref{EntTable} (vectors are assumed to be normalised). The first code is non-degenerate, and thus yields the maximal entropy for this noise model. The second code has positive entropy less than the maximum since it is partially degenerate for $\Phi$. 
Indeed, one can check that $X_1$ (and $I$) act trivially on the code, whereas $X_2$ maps the code to an orthogonal subspace. The final code shows zero entropy since it is fully degenerate for $\Phi$. In fact, as can be directly verified, it is a decoherence-free subspace for $\Phi$. \begin{table}[h] \caption{Codes for the noise model $\Phi = \frac{1}{\sqrt{3}} \{ I, X_1, X_2 \}$.} \label{EntTable} \begin{center} \begin{tabular}{cc} \hline Qubit Code ${\mathcal C} = \{ |0_L \rangle , |1_L \rangle \}$ & Entropy $S(\Phi,{\mathcal C})$ \\ \hline $\{|000\rangle , |111\rangle\}$ & $\log 3$ \\ $\{|000\rangle + |100\rangle, |011\rangle + |111\rangle\}$ & $\log 3 - \frac{2}{3}$ \\ $\{|000\rangle + |100\rangle + |010\rangle + |110\rangle,$ & $0$\\ $|011\rangle + |111\rangle + |001\rangle + |101\rangle \}$ & \\ \hline \end{tabular} \end{center} \end{table} \subsection{Entropy of a Subsystem Code} We next consider an extension of code entropy to the case of operator quantum error correcting subsystem codes. We shall only introduce this notion here and leave further investigation for future work. Subsystem codes were formally introduced under the umbrella of {\it operator quantum error correction} \cite{KLPL05,KLP05}, a framework that unifies the active and passive approaches to quantum error correction. These codes now play a central role in fault-tolerant quantum computing. Let $\cal{H}$ be a Hilbert space of finite dimension $N$. Any decomposition $\cal{H} = (A\otimes B)\oplus \cal{K}$ determines subsystems $A$ and $B$ of $\cal{H}$.
Given a quantum operation $\Phi$ acting on $\cal{H}$, we say that a subsystem $B$ of $\cal{H}$ is {\it correctable} for $\Phi$ if there exist maps $\Psi$ on $\cal{H}$ and $\tau_A$ on $A$ such that \begin{equation}\label{subsystemcode} \Psi\circ \Phi \circ \cal{P}_{AB}= (\tau_A\otimes {\rm id}_B) \circ \cal{P}_{AB}, \end{equation} where ${\cal{P}}_{AB}(\rho)= P_{AB} \rho P_{AB}$ and $P_{AB}$ is the projection onto the subspace $A\otimes B$. Subsystem codes generalize standard (subspace) codes (\ref{superoperator}) in the sense that subspaces may be regarded as subsystems with trivial ancilla ($\dim A=1$). Thus, it is natural to suggest that a notion of entropy for subsystem codes should generalize the subspace definition. We find motivation for such a notion through the main result of \cite{KS06} alluded to above. For every correctable subsystem $B$ for $\Phi$, there are subsystems $C$ and $B'\cong B$, a map $\tau_{C|A}$ from $A$ to $C$ and a unitary map $\cal{V}_{B'|B}$ from $B$ to $B'$ such that \begin{equation}\label{KSresult} \Phi \circ \cal{P}_{AB}= (\tau_{C|A}\otimes \cal{V}_{B'|B}) \circ \cal{P}_{AB}. \end{equation} Recall that the {\it maximal output entropy} of a channel $\Psi$ is given by $S(\Psi) := \max_\rho S(\Psi(\rho))$. This motivates the following. \begin{definition} Let $\Phi$ be a quantum operation with correctable subsystem $B$ that satisfies (\ref{subsystemcode}). Then we define the entropy of $B$ relative to $\Phi$ as the maximal output entropy of the associated ancilla channel, $S(\tau_{C|A})$ from (\ref{KSresult}). \end{definition} Observe that this generalizes Definition~\ref{entdefn}, since a density operator may be regarded as a channel from a one-dimensional input space. The minimal entropy case is characterized by the ancilla channel $\tau_{C|A}$ having range supported on a one-dimensional subspace. Such codes are unitarily correctable, in fact as subspaces, but the converse is not true.
Instead, in the more general subsystem setting, the minimal entropy case is described by the associated ancilla subsystem $A$ undergoing ``cooling'' to a fixed state. In principle one should be able to conduct a deeper analysis of subsystem code entropy. We leave this as an open direction for future work. \section{Entropy of a Code for Binary Unitary Channels} A binary unitary channel has the form \begin{equation} \rho'=\Phi(\rho) = (1-p) W_1 \rho W_1^{\dagger} + p W_2 \rho W_2^{\dagger} \label{biunitary1} \end{equation} where $W_1$ and $W_2$ denote two arbitrary unitary operators and the probability $p$ belongs to $[0,1]$. It is clear that the problem of finding an error correcting code subspace $\cal C$ for the above map is equivalent to the case \begin{equation} \rho'' = \Phi_U(\rho) = (1-p) \rho+ p U \rho U^{\dagger} \label{biunitary2} \end{equation} where $U=W_1^{\dagger}W_2$. The number $M$ of Kraus operators is equal to $2$, with $E_1=\sqrt{1-p} {\mathbbm 1}$ and $E_2=\sqrt{p} U$. Thus the error correction matrix $\Lambda$ is of size two and reads \begin{equation} \Lambda \ = \ \left(\begin{matrix} 1-p & \sqrt{p(1-p)} \lambda \\ \sqrt{p(1-p)} \lambda^* & p \end{matrix}\right) \label{lambda1} \end{equation} where $\lambda$ is a solution of the {\it compression problem} for $U$ \begin{equation} P_{\cal{C}} U P_{\cal{C}}= \lambda P_{\cal{C}} \ . \label{comp1} \end{equation} The set of solutions to this problem can be phrased in terms of the {\it higher-rank numerical range} of the matrix $U$. The rank-$k$ numerical range of $U$ is defined as \begin{equation} \label{rank-k-num-range} \Omega_{k}(U)=\left\{\lambda\in\mathbb{C}\, |\, PUP=\lambda P\text{ for some rank-$k$ projection } P \right\}. \end{equation} Given a dimension $k$ for the desired correctable code, each $\lambda$ in $\Omega_{k}(U)$ corresponds to a particular correctable code defined by the associated projection $P$ that solves (\ref{comp1}).
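For a given compression value $\lambda$ and probability $p$, the matrix (\ref{lambda1}) and the resulting code entropy can be evaluated directly; a minimal numerical sketch (numpy assumed; the function name is ours):

```python
import numpy as np

def code_entropy(p, lam):
    """Entropy of a code with compression value lam for the channel
    rho -> (1-p) rho + p U rho U^dag, via the 2x2 matrix (lambda1)."""
    c = np.sqrt(p * (1 - p)) * lam
    Lam = np.array([[1 - p, c], [np.conj(c), p]])
    spec = np.linalg.eigvalsh(Lam)       # eigenvalues (1 +- sqrt(1-4p(1-p)(1-|lam|^2)))/2
    spec = spec[spec > 1e-12]            # discard the zero eigenvalue when Lam is pure
    return float(-np.sum(spec * np.log2(spec)))

p = 0.01
print(code_entropy(p, 0.0))   # |lam| = 0: maximal entropy for this p, approx. 0.081
print(code_entropy(p, 1.0))   # |lam| = 1: Lam is pure, entropy 0
```

For $p=0.01$ and $\lambda=0$ this reproduces the value $0.081$ appearing in the example treated below, and the entropy vanishes as $|\lambda|\to 1$.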
The following is a straightforward application of (\ref{compression}). \begin{proposition} \label{correctable-if-numrange} Given a binary unitary channel $\Phi$, there exists a rank-$k$ correctable code for $\Phi$ if and only if the rank-$k$ numerical range of $U$ is non-empty. \end{proposition} Thus, the problem of finding the correctable codes for a given binary unitary channel can be reduced to the problem of finding the higher-rank numerical range of $U$. This problem has recently been solved in its entirety \cite{CKZ06a,CKZ06b,CHKZ08,Woe07,LS08,CGHK08,Kazakov08}. Most succinctly, in terms of the eigenvalues $\sigma(U)$ of $U$, the $k$th numerical range of $U$ is the convex subset of the unit disk given by \begin{equation}\label{omega1} \Omega_k(U) \,\,\,= \,\,\,\bigcap_{\Gamma\,\subseteq\, \sigma(U);\,\,|\Gamma|=N-k+1}\, {\rm conv}\, (\Gamma), \end{equation} where ${\rm conv}\,\{\lambda_1,\ldots,\lambda_m\}$ is the set of convex combinations $\lambda = t_1\lambda_1 + \ldots +t_m\lambda_m$ with $\sum_{j=1}^m t_j = 1$ and $t_j\geq 0$. Figures~1.a and 1.b depict the case of a generic two-qubit unitary ($N=4$) with $k=2$, while Figure 1.c shows the case of a generic two-qutrit unitary noise ($N=3\times 3 = 9$) with $k=3$.
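The description (\ref{omega1}) reduces membership testing in $\Omega_k(U)$ to a finite family of convex-hull feasibility problems, one per $(N-k+1)$-point subset of the spectrum, each a small linear program; a sketch, assuming numpy and scipy are available (function names are ours):

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def in_hull(point, pts):
    """Is `point` in the convex hull of the complex numbers `pts`? (LP feasibility)"""
    m = len(pts)
    A_eq = np.array([[z.real for z in pts],
                     [z.imag for z in pts],
                     [1.0] * m])
    b_eq = [point.real, point.imag, 1.0]
    res = linprog(c=[0.0] * m, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return res.status == 0               # status 0: a feasible point was found

def in_rank_k_range(lam, eigs, k):
    """Membership in Omega_k(U) via (omega1): lam must lie in conv(Gamma)
    for every subset Gamma of the spectrum with |Gamma| = N - k + 1."""
    N = len(eigs)
    return all(in_hull(lam, G) for G in combinations(eigs, N - k + 1))

# N = 4 spectrum of Figure 1.a: z = exp(i k pi / 4), k = 1, 3, 5, 7
eigs = [np.exp(1j * k * np.pi / 4) for k in (1, 3, 5, 7)]
print(in_rank_k_range(0.0 + 0.0j, eigs, 2))   # True: Omega_2(U) = {0}
print(in_rank_k_range(0.3 + 0.0j, eigs, 2))   # False
```

Each size-$(N-k+1)$ subset of the spectrum in this example contains a pair of antipodal eigenvalues, so the origin lies in every hull, recovering $\Omega_2(U)=\{0\}$.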
\begin{figure} \caption{Higher rank numerical range $\Omega_k$ for unitary matrices $U$ describing a bi-unitary channel: two qubit system, a) example 2 with $\lambda=0$, b) case with $r=|\lambda| \gg 0$ for which the code entropy is smaller, c) two qutrit case with $\lambda\in \Omega_3(U)$ chosen to maximize its modulus $r$ and to minimize the code entropy $S(\Lambda)$.} \label{range08a} \end{figure} With a particular $\lambda$ in hand, straightforward algebra provides us with the spectrum of the matrix (\ref{lambda1}), \begin{equation} \Lambda_{\pm} \ = \ \frac{1}{2} \left(1 \pm \sqrt{ 1-4p(1-p)(1-|\lambda|^2)} \right), \label{lambda3} \end{equation} which allows us to calculate the entropy of the code, \begin{equation} S(\Phi,{\cal C}) = S(\Lambda) \ = \ - \Lambda_+ \log \Lambda_+ - \Lambda_- \log \Lambda_-. \label{lambda5} \end{equation} We are then led to the following: \begin{theorem} \label{entropy-biunitary} A minimum entropy rank-$k$ code for a binary unitary channel $\Phi(\rho)=(1-p) \rho + p U \rho U^{\dagger}$ corresponds to any code for which the magnitude $|\lambda|$ of the compression values $\lambda\in\Omega_k(U)$ is closest to unity, while the maximum entropy corresponds to $|\lambda|$ closest to zero. Moreover, a code with minimal entropy can be constructively obtained. \end{theorem} \noindent\textbf{Proof.} The first statement follows directly from an application of first and second derivative tests on (\ref{lambda5}), constrained to the unit disk. A minimal entropy code can be explicitly constructed based on the analysis of higher-rank numerical ranges \cite{CKZ06a,CKZ06b,CHKZ08,Woe07,LS08,CGHK08,Kazakov08}. \pfQED \begin{example} As an illustration of the code construction in the simplest possible case ($N=4,k=2$), let $U$ be the unitary with spectrum depicted in Figure~1.a, $\sigma(U) = \{ z_k = \exp (ik\frac{\pi}{4}) : k=1,3,5,7\}$. Let $\ket{\psi_1},\ldots,\ket{\psi_4}$ be the associated eigenstates, ordered counterclockwise by eigenvalue.
In this case we have $\Omega_2(U)=\{0\}$ so $\lambda=0$, and one can check directly that a single qubit correctable code for $\Phi$ is given by ${\cal{C}} = {\rm span}\,\{ \ket{\phi_1}, \ket{\phi_2} \}$, where $$\left\{\begin{array}{lcl}\ket{\phi_1} &=& \frac{1}{\sqrt{2}} (\ket{\psi_1}+\ket{\psi_3})\\ \ket{\phi_2} &=& \frac{1}{\sqrt{2}} (\ket{\psi_2}+\ket{\psi_4})\end{array}.\right.$$ For a concrete example, in the case that $p=0.01$, (\ref{lambda5}) yields a code entropy of \hbox{$S(\Phi,{\cal{C}}) = 0.081$}. The general case requires a more delicate construction; nevertheless, it can be done. The ``eigenstate grouping'' procedure used above can be applied whenever $k$ divides $N$. For instance, in the generic $N=9$ and $k=3$ case depicted in Figure~1.c, a single qutrit code can be constructed for all $\lambda$ in the region $\Omega_3(U)$. The states $\ket{\phi_i}$, $i=1,2,3$, can be constructed in an analogous manner by grouping the nine eigenstates for $U$ into three groups of three, and writing $\lambda$ in three different ways as a linear combination of the associated unimodular eigenvalues $z_j$, $j=1,\dots,9$. However, without going into the details of this construction we can still analyze the corresponding code entropies. For simplicity assume the nine eigenvalues are distributed evenly around the unit circle with $z_1=1$. By Theorem~\ref{entropy-biunitary} we know that the entropy will be minimized for any $\lambda$ that gives the minimum distance from $\Omega_3(U)$ to the unit circle. An elementary calculation shows that one such $\lambda$, given by the intersection of the lines through the first and seventh, and sixth and ninth eigenvalues (counting counterclockwise) is approximately $\lambda_0 = 0.092 - 0.524\,i$. With the probability $p=0.01$, the corresponding error correction matrix $\Lambda$ has spectrum $\{0.007,0.993\}$.
Thus, (\ref{lambda3}) and (\ref{lambda5}) yield the minimal qutrit code entropy for this channel as $$\min_{\dim{\cal{C}}=3} S(\Phi,{\cal{C}})= \min_{\lambda\in\Omega_3(U)} S(\Lambda) = S(\Lambda_{\lambda=\lambda_0}) = 0.060.$$ On the other hand, as $\lambda =0$ belongs to $\Omega_3(U)$, by Theorem~\ref{entropy-biunitary} and (\ref{lambda1}) we also see the maximal entropy for $p=0.01$ occurs for any code with $\lambda=0$. In such cases the spectrum of $\Lambda$ is $\{0.01, 0.99\}$, and hence the maximal entropy is $S(\Lambda_{\lambda=0}) = 0.081$. Changing focus briefly, if we fix an arbitrary unitary $U$, then we could consider the family of channels determined by varying the probability $p$. It follows from (\ref{lambda5}) that the channel with the correctable codes of maximal entropy corresponds to the $p=\frac{1}{2}$ channel, and the channels whose correctable codes possess minimal entropy correspond to $p=0$ and $p=1$. Indeed, the value of $\lambda$ depends on $U$ but not on $p$; thus a given $\lambda$ will solve (\ref{comp1}) for any $p$, and so $\lambda$ can be chosen independently using the above theorem. The result once again follows from application of first and second derivative tests with $p$ between $0$ and $1$. \end{example} The following results show that the entropy of a code for a binary unitary channel can be regarded as a measure of how close the code is to a decoherence-free subspace. \begin{lemma}\label{uccdfs} If $\Phi$ is a binary unitary channel, then the sets of unitarily correctable subspaces and decoherence-free subspaces coincide. \end{lemma} \noindent\textbf{Proof.} As proved in \cite{KS06}, for a bistochastic (unital) map $\Phi$, a condition satisfied by every binary unitary channel, the unitarily correctable codes (respectively the decoherence-free subspaces) for $\Phi$ are embedded in the fixed point algebra for the map $\Phi^\dagger\circ\Phi$ (respectively $\Phi$), where $\Phi^\dagger$ is the Hilbert-Schmidt dual map of $\Phi$.
In particular, it follows from this fact that the former set is given by the set of operators that commute with $U$ and $U^\dagger$, whereas the latter is the set of operators that commute with $U$. By the Spectral Theorem these two sets coincide. \pfQED \begin{theorem}\label{dfsbuc} Let $\Phi$ be a binary unitary channel. Then there is a rank-$k$ code $\cal{C}$ of zero entropy, $S(\Phi,{\cal{C}})=0$, for $\Phi$ if and only if there is a $k$-dimensional decoherence-free subspace for $\Phi$ if and only if there exists $\lambda\in\Omega_k(U)\cap\sigma(U)$. \end{theorem} \noindent\textbf{Proof.} A $k$-dimensional decoherence-free subspace for $\Phi$ corresponds to an eigenvalue $\lambda$ of $U$ with multiplicity at least $k$ (see \cite{KS06} and the references therein); that is, $\lambda\in\Omega_k(U)\cap\sigma(U)$. The rest follows from the lemma and previous theorem. \pfQED In order to further illustrate these results, consider again the case of an arbitrary two-qubit system ($N=4$). The correctable codes with largest entropy are those with $p=\frac{1}{2}$ and so the spectrum of $\Lambda$ reads \begin{equation} \Lambda_{\pm} \ = \ \frac{1}{2} \bigl(1 \pm |\lambda| \bigr). \label{lambda4} \end{equation} In the two-qubit case, the complex number $\lambda$ is given by the point inside the unit circle at which two diagonals of the quadrangle formed by the spectrum of $U$ cross (see Figures~1.a and 1.b). Consider a special case of the problem where $U$ has a doubly degenerate eigenvalue, so that $|\lambda|=1$. For example, $U$ could be any (non-identity) element of the two-qubit Pauli group. Then the spectrum of $\Lambda$ consists of $\{1,0\}$ which implies $S(\Lambda)=0$ (despite $p$ having been chosen for the largest entropy correctable codes). Hence $\Lambda$ is pure and there exists a decoherence-free subspace -- the eigenspace of the degenerate eigenvalue of $U$.
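This degenerate two-qubit case is easy to verify numerically; a sketch with $U = X\otimes\mathbbm{1}$, whose eigenvalues $\pm 1$ are each doubly degenerate (numpy assumed):

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
U = np.kron(X, np.eye(2))     # non-identity two-qubit Pauli; eigenvalues +-1, each twice

# Projector onto the +1 eigenspace: a rank-2 solution of the compression problem
w, V = np.linalg.eigh(U)
Vp = V[:, w > 0]
P = Vp @ Vp.conj().T

# P U P = lambda P with lambda = 1, so |lambda| = 1
assert np.allclose(P @ U @ P, P)

# For p = 1/2, (lambda4) gives the spectrum (1 +- |lambda|)/2 = {1, 0}: zero entropy
p, lam = 0.5, 1.0
print(sorted([(1 - lam) / 2, (1 + lam) / 2]))   # [0.0, 1.0]

# ... and a random state in the eigenspace is decoherence-free:
# (1-p) rho + p U rho U^dag = rho
rng = np.random.default_rng(0)
c = rng.normal(size=2) + 1j * rng.normal(size=2)
c /= np.linalg.norm(c)
v = Vp @ c
rho = np.outer(v, v.conj())
assert np.allclose((1 - p) * rho + p * U @ rho @ U.conj().T, rho)
```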
In general, for binary unitary channels one may use the entropy (\ref{lambda5}) as a measure quantifying to what extent a given error correction code is close to a decoherence-free subspace. For instance, any channel (\ref{biunitary2}) acting on a two qubit system and described by a unitary matrix $U=W_1^{\dagger}W_2$ of size $4$ may be characterized by the radius $r=|\lambda|$ of the point at which two diagonals of the quadrangle of the spectrum cross. The larger $r$, the smaller the entropy $S(\Phi,{\cal C})$, and the closer the error correction code is to a decoherence-free subspace. The code entropy can also be used to classify codes designed for a binary unitary channel acting on larger systems. For instance, in the case of two qutrits, $N=3 \times 3=9$, one can find a correctable code supported on a $k=3$ dimensional subspace. The solution is far from unique, and can be parametrized by complex numbers $\lambda$ belonging to an intersection of $3$ triangles, which forms a convex set of positive measure. From this set one can thus select a concrete solution providing a code $\cal C$ for which $r=|\lambda|$ is largest, which implies that the code entropy, $S(\Phi,{\cal C})$, is smallest -- see Figure~1.c. Such an error correction code is distinguished by being as close to a decoherence-free subspace as possible. \section{Conclusions} We have investigated a notion of entropy for quantum error correcting codes and quantum operations. The entropy has multiple natural realisations through fundamental results in the theory of quantum error correction. We showed how the extremal cases are characterised by unitarily correctable codes and decoherence-free subspaces on the one hand, and the non-degenerate case determined by the Choi matrix of the map on the other. We considered examples from the stabilizer formalism, and conducted a detailed analysis in the illustrative case of binary unitary channels.
Recently developed techniques on higher-rank numerical ranges were used to give a complete geometrical description of code entropies for binary unitary channels; in particular, the structure of these subsets of the complex plane can be used to visually determine how close a code is to a decoherence-free subspace. We also introduced an extension of code entropy to subsystem codes, and left a deeper investigation of this notion for future work. It could be interesting to explore further applications of the code entropy in quantum error correction. For instance, although quantum error correction codes were originally designed for models of discrete time evolution in the form of a quantum operation, generalizations to the case of continuous evolution in time \cite{Bra98,LS98,OLB08} have been investigated. Further, we have investigated perfect correction codes only, for which the error recovery operation brings the quantum state corrupted by the noise back to the initial state with fidelity equal to one. Such perfect correction codes may be treated as a special case of more general approximate error correction codes \cite{SW02,CGS05,Kl07}. Another recent investigation \cite{Ali08} includes analysis suggesting that the measurement component of recovery may prove to be problematic in quantum error correction, and hence may motivate further investigation of unitarily correctable codes. \end{document}
\begin{document} \title{A family of Eta Quotients and an Extension of the Ramanujan-Mordell Theorem} \author{Ay\c{s}e Alaca, \c{S}aban Alaca, Zafer Selcuk Aygin} \maketitle \markboth{AY\c{S}E ALACA, \c{S}ABAN ALACA, ZAFER SELCUK AYGIN} {A FAMILY OF ETA QUOTIENTS AND THE RAMANUJAN-MORDELL THEOREM} \begin{abstract} Let $k\geq 2$ be an integer and $j$ an integer satisfying $1\leq j \leq 4k-5$. We define a family $\{ C_{j,k}(z) \}_{1\leq j \leq 4k-5} $ of eta quotients, and prove that this family constitutes a basis for the space $S_{2k} (\Gamma_0 (12))$ of cusp forms of weight $2k$ and level $12$. We then use this basis together with certain properties of modular forms at their cusps to prove an extension of the Ramanujan-Mordell formula.\\ \noindent Key words and phrases: Ramanujan-Mordell formula, Dedekind eta function, eta quotients, eta products, theta functions, Eisenstein series, Eisenstein forms, modular forms, cusp forms, Fourier coefficients, Fourier series.\\ \noindent 2010 Mathematics Subject Classification: 11F11, 11F20, 11F27, 11E20, 11E25, 11F30, 11Y35 \end{abstract} \section{Introduction} Let $\mathbb{N}$, $\mathbb{N}_0$, $\mathbb{Z}$, $\mathbb{Q}$ and $\mathbb{C}$ denote the sets of positive integers, non-negative integers, integers, rational numbers and complex numbers, respectively. Let $N\in\mathbb{N}$. Let $\Gamma_0(N)$ be the modular subgroup defined by \begin{eqnarray*} \Gamma_0(N) = \left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \mid a,b,c,d\in \mathbb{Z} ,~ ad-bc = 1,~c \equiv 0 \smod {N} \right\} . \end{eqnarray*} Let $k \in \mathbb{Z}$. We write $M_k(\Gamma_0(N))$ to denote the space of modular forms of weight $k$ for $\Gamma_0(N)$, and $E_k (\Gamma_0(N))$ and $S_k(\Gamma_0(N))$ to denote the subspaces of Eisenstein forms and cusp forms of $M_k(\Gamma_0(N))$, respectively. It is known that \begin{eqnarray} \label{1_10} M_k (\Gamma_0(N)) = E_k (\Gamma_0(N)) \oplus S_k(\Gamma_0(N)). \end{eqnarray} The Dedekind eta function $\eta (z)$ is the holomorphic function defined on the upper half plane $\mathbb{H} = \{ z \in \mathbb{C} \mid \mbox{\rm Im}(z) >0 \}$ by the product formula \begin{eqnarray*} \eta (z) = e^{\pi i z/12} \prod_{n=1}^{\infty} (1-e^{2\pi inz}). \end{eqnarray*} An eta quotient is defined to be a finite product of the form \begin{eqnarray} \label{1_30} f(z) = \prod_{\delta } \eta^{r_{\delta}} ( \delta z), \end{eqnarray} where $\delta$ runs through a finite set of positive integers and the exponents $r_{\delta}$ are non-zero integers. By taking $N$ to be the least common multiple of the $\delta$'s we can write the eta quotient {\rm (\ref{1_30})} as \begin{eqnarray} \label{1_40} f(z) = \prod_{1\leq \delta \mid N} \eta^{r_{\delta}} ( \delta z) , \end{eqnarray} where some of the exponents $r_{\delta}$ may be $0$. When all the exponents $r_{\delta}$ are nonnegative, $f(z)$ is said to be an eta product. As in \cite{Kohler}, throughout the paper we use the notation $q=e(z):=e^{2\pi i z}$ with $z\in \mathbb{H}$, so that $|q| < 1$ and $q^{1/24} = e(z/24)$. Ramanujan's theta function $\varphi (z)$ is defined by \begin{eqnarray*} \varphi(z) = \sum_{n=-\infty}^\infty q^{ n^2 }. \end{eqnarray*} It is known that $\varphi (z)$ can be expressed as an eta quotient as \begin{eqnarray} \label{1_50} \varphi(z)=\frac{\eta^5(2z)}{\eta^2(z) \eta^2(4z)}. \end{eqnarray} \noindent For $a_j \in \mathbb{N} $, $1 \leq j \leq 4k$, we define \begin{eqnarray*} N(a_1, \ldots, a_{4k};n):=\mathrm{card}\{(x_1, \ldots , x_{4k}) \in \mathbb{Z}^{4k} \mid n=a_1 x_1^2 + \cdots +a_{4k} x_{4k}^2 \} . \end{eqnarray*} Then we have \begin{eqnarray} \label{1_60} \varphi(a_1 z) \cdots \varphi(a_{4k} z)=\sum_{n=0}^\infty N(a_1, \ldots, a_{4k};n) q^{ n }. \end{eqnarray} The value of $N(a_1, \ldots, a_{4k};n)$ is independent of the order of the $a_j$'s.
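Identities such as (\ref{1_50}) and the counting interpretation (\ref{1_60}) are easy to check on truncated $q$-expansions; a sketch in exact integer arithmetic (the helper names are ours, and only the product parts of the eta quotients are needed here since the $q^{1/24}$-powers cancel):

```python
from itertools import product

T = 36  # truncation order in q

def mul_cyclotomic(s, m, e):
    """Multiply the series s by (1 - q^m)^e for integer e (negative = divide)."""
    for _ in range(abs(e)):
        if e > 0:
            s = [s[n] - (s[n - m] if n >= m else 0) for n in range(len(s))]
        else:
            t = [0] * len(s)
            for n in range(len(s)):
                t[n] = s[n] + (t[n - m] if n >= m else 0)
            s = t
    return s

def eta_quotient(parts):
    """Product part of prod_delta eta(delta z)^{r_delta} as a q-series."""
    s = [0] * (T + 1)
    s[0] = 1
    for d, r in parts.items():
        for n in range(1, T // d + 1):
            s = mul_cyclotomic(s, d * n, r)
    return s

# (1_50): phi(z) = eta^5(2z) / (eta^2(z) eta^2(4z)); the q^{1/24}-powers cancel
phi = eta_quotient({2: 5, 1: -2, 4: -2})
theta = [0] * (T + 1)
n = 0
while n * n <= T:
    theta[n * n] += 2 if n else 1
    n += 1
assert phi == theta          # 1 + 2q + 2q^4 + 2q^9 + ...

# (1_60) with 4k = 4 and all a_j = 1: coefficients of phi^4 count the
# representations n = x1^2 + ... + x4^2 (brute force below for small n)
def pmul(a, b):
    c = [0] * (T + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), T + 1 - i)):
                c[i + j] += ai * b[j]
    return c

phi4 = phi
for _ in range(3):
    phi4 = pmul(phi4, phi)

B = 4  # |x_i| <= B suffices for n <= 10
for n in range(11):
    count = sum(1 for xs in product(range(-B, B + 1), repeat=4)
                if sum(x * x for x in xs) == n)
    assert phi4[n] == count
print(phi4[:8])   # [1, 8, 24, 32, 24, 48, 96, 64]
```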
Let $k \geq 2$ be an integer, and let $a_j\in \{1, 3\}$, $1 \leq j \leq 4k$, with an even number of $a_j$'s equal to $3$. Then we write \begin{eqnarray*} N(a_1, \ldots, a_{4k};n)=N(1^{4k-2i}, 3^{2i};n), \end{eqnarray*} where $i$ is an integer with $0\leq i \leq 2k$. Ramanujan \cite{ramanujan} stated a formula for $N(1^{2k}, 3^0;n)$, which was proved by Mordell in \cite{mordell}; see also \cite{cooper, aaw}. In this paper we define a family $\{ C_{j,k}(z) \}_{1\leq j \leq 4k-5} $ of eta quotients, and prove that this family constitutes a basis for the space $S_{2k} (\Gamma_0 (12))$ of cusp forms of weight $2k$ and level $12$. We then use this basis together with certain properties of modular forms at their cusps to prove an extension of the Ramanujan-Mordell formula, that is, we give a formula for $N(1^{4k-2i},3^{2i};n)$. For $n, k \in \mathbb{N}$ we define the sum of divisors function $\ds \sigma_{k}(n)$ by \begin{eqnarray*} \sigma_k(n) = \sum_{1\leq m \mid n} m^{k}. \end{eqnarray*} If $n \not\in \mathbb{N}$ we set $\sigma_{k }(n)=0$. We define the Eisenstein series $E_{2k}(z)$ by \begin{eqnarray} \label{1_80} &&\ds E_{2k} (z) :=-\frac{B_{2k}}{4k}+\sum_{n=1}^{\infty} \sigma_{2k-1}(n)q^{ n }, \end{eqnarray} where $B_{2k}$ are the Bernoulli numbers defined by the generating function \begin{eqnarray} \label{1_90} \ds \frac{x}{e^x-1}=\sum_{n=0}^\infty\frac{B_n x^n}{n!}. \end{eqnarray} The cusps of $\Gamma_0(N)$ can be represented by rational numbers $a/c$, where $a\in \mathbb{Z}$, $c\in \mathbb{N}$, $c|N$ and $\gcd(a,c)=1$; see \cite[p. 320]{okamoto} and \cite[p. 103]{DiamondShurman}. We can choose the representatives of the cusps of $\Gamma_0(12)$ as \begin{eqnarray*} 1, 1/2, 1/3, 1/4, 1/6, \infty. \end{eqnarray*} Throughout the paper we use $\infty$ and $1/12$ interchangeably as they are equivalent cusps for $\Gamma_0(12)$. Let $f(z)$ be an eta quotient given by (\ref{1_40}).
A formula for the order $ \ds v_{a/c}(f) $ of $f(z)$ at the cusp $a/c$ (see \cite[p. 320]{okamoto} and \cite[Proposition 3.2.8]{Ligozat}) is given by \begin{eqnarray} \label{1_110} v_{a/c}(f)=\frac{N}{24\gcd(c^2,N)}\sum_{1 \leq \delta|N}\frac{\gcd(\delta,c)^2 \cdot r_\delta }{\delta} . \end{eqnarray} We use the following theorem to determine if a given eta quotient is in $M_k(\Gamma_0(N))$. See \cite[Proposition 1, p. 284]{Lovejoy}, \cite[Corollary 2.3, p. 37]{Kohler}, \cite[p. 174]{GordonSinor}, \cite{Ligozat} and \cite{Kilford}. \begin{theorem}{\bf (Ligozat)} Let $f(z)$ be an eta quotient given by (\ref{1_40}) which satisfies the following conditions:\\ {\em (L1)~} $\ds \sum_{ 1\leq \delta \mid N} \delta \cdot r_{\delta} \equiv 0 \smod {24}$,\\ {\em (L2)~} $\ds \sum_{ 1 \leq \delta \mid N} \frac{N}{\delta} \cdot r_{\delta} \equiv 0 \smod {24}$, \\ {\em (L3)~} For each $d \mid N$, $\ds \sum_{1 \leq \delta \mid N} \frac{ \gcd (d, \delta)^2 \cdot r_{\delta} }{\delta} \geq 0 $,\\ {\em (L4)~} $\ds \sqrt{\prod_{1 \leq \delta \mid N} \delta^{ r_{\delta} } }\in \mathbb{Q} $, \\ {\em (L5)~} $\ds k = \frac{1}{2} \sum_{1 \leq \delta \mid N} r_{\delta} $ is an even integer. \\ \noindent Then $f(z) \in M_k(\Gamma_0(N))$. Furthermore, if all inequalities in {\em (L3)} are strict, then $f(z) \in S_k(\Gamma_0(N))$. \end{theorem} \noindent For $N=12$ in (L4) of Theorem 1.1, we have \begin{eqnarray} \label{1_130} \sqrt{\prod_{1 \leq \delta \mid 12} \delta^{ r_{\delta} } }=2^{r_4+r_{12}} \sqrt{2^{ r_{2} +r_6} 3^{ r_{3}+r_6+r_{12} } }. \end{eqnarray} Thus the expression in (\ref{1_130}) is a rational number if and only if \begin{eqnarray*} r_{2} +r_6 \equiv 0 \smod{2} \mbox{ and } r_{3} +r_6+r_{12} \equiv 0 \smod{2}. \end{eqnarray*} \section{Statements of main results} For $j, k \in \mathbb{Z}$ we define an eta quotient $C_{j,k}(z)$ by \begin{eqnarray} C_{j,k}(z)&:=& \Big(\frac{\eta^{10}(2z) \eta^{5}(3z) \eta(4z) \eta^{2}(6z) }{\eta^{15}(z) \eta^{3}(12z)} \Big) \Big(\frac{\eta^{2}(2z) \eta(3z) \eta^{3}({12z})}{ \eta^{3}(z) \eta(4z) \eta^{2}(6z) }\Big)^j \Big(\frac{\eta^{6}(z)\eta(6z) }{\eta^{3}(2z) \eta^{2}(3z)}\Big)^{k} \nonumber \\ \label{2_10} &=& q^j+ \sum_{n=j+1}^\infty c_{j,k}(n) q^{ n }. \end{eqnarray} In the following theorem we give a basis for $M_{2k} (\Gamma_0 (12))$ when $k \geq 2$. \begin{theorem} Let $k\geq 2$ be an integer. {\em \bf (a)} The family $\{ C_{j,2k}(z) \}_{1\leq j \leq 4k-5} $ constitutes a basis for $S_{2k} (\Gamma_0 (12))$. {\em \bf (b)} The set of Eisenstein series \begin{eqnarray*} && \{ E_{2k} (z),~E_{2k} (2z), ~E_{2k} (3z), ~E_{2k} (4z), ~E_{2k} (6z), ~E_{2k} ({12z}) \} \end{eqnarray*} constitutes a basis for $E_{2k}(\Gamma_0(12))$. {\em \bf (c)} The set \begin{eqnarray*} \{ E_{2k} (\delta z) \mid \delta =1, 2, 3, 4, 6, 12 \} \cup \{ C_{j,2k} (z) \mid 1 \leq j \leq 4k-5 \} \end{eqnarray*} constitutes a basis for $M_{2k}(\Gamma_0(12))$. \end{theorem} For convenience we set \begin{eqnarray} \label{2_20} \alpha_k=\frac{-4k}{(2^{2k}-1)(3^{2k}-1)B_{2k}}, \end{eqnarray} where $B_{2k}$ are the Bernoulli numbers given in (\ref{1_90}). Also we write $[j]f(z):= a_j$ for $\ds f(z)=\sum_{n=0}^\infty a_n q^{n}$. We now give an extension of the Ramanujan-Mordell Theorem. \begin{theorem} Let $k\geq 2$ be an integer and $i$ an integer satisfying $0 \leq i \leq 2k$. Let $\alpha_k$ be as in {\em(\ref{2_20})}.
Then \begin{eqnarray*} \varphi^{4k-2i}(z)\varphi^{2i}(3z)= \sum_{r \mid 12} b_{(r,i,k)} E_{2k}(rz) + \sum_{1\leq j \leq 4k-5}a_{(j,i,k)} C_{j,2k}(z) , \end{eqnarray*} where \begin{eqnarray} \label{2_30} && b_{(1,i,k)}=(-1)^{k}(3^{2k-i}+(-1)^{i+1}) \cdot \alpha_k,\\ \label{2_40} && b_{(2,i,k)}=(-1)^{i+1}(1+(-1)^{i+k})(3^{2k-i}+(-1)^{i+1})\cdot \alpha_k,\\ \label{2_50} && b_{(3,i,k)}=(-1)^{i+k}3^{2k-i}(3^{i}+(-1)^{i+1}) \cdot \alpha_k,\\ \label{2_60} && b_{(4,i,k)}=(-1)^{i}2^{2k}(3^{2k-i}+(-1)^{i+1}) \cdot \alpha_k,\\ \label{2_70} && b_{(6,i,k)}=-(1+(-1)^{i+k})3^{2k-i}(3^{i}+(-1)^{i+1})\cdot \alpha_k,\\ \label{2_80} && b_{(12,i,k)}=2^{2k}3^{2k-i}(3^{i}+(-1)^{i+1})\cdot \alpha_k, \end{eqnarray} and for $1 \leq j \leq 4k-5$ \begin{eqnarray} \label{2_90} &&a_{(j,i,k)}= N(1^{4k-2i},3^{2i};j) -\sum_{r \mid 12} b_{(r,i,k)} \sigma_{2k-1}(j/r) - \sum_{1\leq l \leq j-1}a_{(l,i,k)} c_{l,2k}(j). \end{eqnarray} \end{theorem} The following theorem follows immediately from Theorem 2.2. \begin{theorem} Let $k\geq 2$ be an integer and $i$ an integer with $0 \leq i \leq 2k$. Then \begin{eqnarray*} N(1^{4k-2i}, 3^{2i};n) = \sum_{r \mid 12} b_{(r,i,k)} \sigma_{2k-1}(n/r) + \sum_{1\leq j \leq 4k-5} a_{(j,i,k)} c_{j,2k}(n) , \end{eqnarray*} where $a_{(j,i,k)}$, $c_{j,2k}(n)$ and $b_{(r,i,k)}~(r=1,2,3,4,6,12)$ are given in {\em (\ref{2_90}), (\ref{2_10})}, and {\em (\ref{2_30})--(\ref{2_80})} respectively. \end{theorem} \section{Proof of Theorem 2.1} {\bf (a)} By Theorem 1.1 we see that $C_{j,2k}(z)\in S_{2k}(\Gamma_0(12))$ for $1\leq j \leq 4k-5$. It follows from (\ref{2_10}) that $\{ C_{j,2k}(z)\}_{1\leq j \leq 4k-5}$ is a linearly independent set. We deduce from the formulae in \cite[Section 6.3, p. 98]{stein} that \begin{eqnarray*} && \ds \dim(S_{2k}(\Gamma_0(12)))=4k-5. \end{eqnarray*} Thus the set $\{ C_{j,2k}(z)\}_{1\leq j \leq 4k-5}$ is a basis for $S_{2k}(\Gamma_0(12))$.
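The appeal to Theorem 1.1 in part (a) can be checked mechanically: reading off the exponent vectors of the three factors in (\ref{2_10}), conditions (L1)--(L5), with (L3) strict, hold precisely for $1 \leq j \leq 4k-5$. A sketch (the helper names are ours):

```python
from math import gcd
from fractions import Fraction

DELTAS = (1, 2, 3, 4, 6, 12)
BASE = (-15, 10, 5, 1, 2, -3)   # exponents of the first factor in (2_10)
JVEC = (-3, 2, 1, -1, -2, 3)    # exponents of the j-th-power factor
KVEC = (6, -3, -2, 0, 1, 0)     # exponents of the k-th-power factor

def exponents(j, K):
    """Exponent vector r_delta of C_{j,K}(z)."""
    return tuple(b + j * a + K * c for b, a, c in zip(BASE, JVEC, KVEC))

def is_cusp_form(j, K, N=12):
    """Ligozat's conditions (L1)-(L5), with (L3) strict, for C_{j,K}."""
    r = exponents(j, K)
    L1 = sum(d * rd for d, rd in zip(DELTAS, r)) % 24 == 0
    L2 = sum((N // d) * rd for d, rd in zip(DELTAS, r)) % 24 == 0
    L3 = all(sum(Fraction(gcd(d, dd) ** 2 * rd, dd)
                 for dd, rd in zip(DELTAS, r)) > 0
             for d in DELTAS)
    # (L4): the square root in (1_130) is rational
    r1, r2, r3, r4, r6, r12 = r
    L4 = (r2 + r6) % 2 == 0 and (r3 + r6 + r12) % 2 == 0
    L5 = sum(r) % 4 == 0        # weight sum(r)/2 = K is an even integer
    return L1 and L2 and L3 and L4 and L5

# For weight 2k the theorem needs exactly the range 1 <= j <= 4k-5:
for k in (2, 3, 4, 5):
    K = 2 * k
    assert all(is_cusp_form(j, K) for j in range(1, 4 * k - 4))
    assert not is_cusp_form(4 * k - 4, K)   # (L3) fails just past the range
print("Ligozat conditions verified for k = 2..5")
```

The order at the cusp $1$ works out to a positive multiple of $-8 - 2j + 8k$, which is what pins down the upper bound $j \leq 4k-5$.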
{\bf (b)} It follows from \cite[Theorem 5.9]{stein} that \begin{eqnarray*} && \{ E_{2k} (z),~E_{2k} (2z), ~E_{2k} (3z), ~E_{2k} (4z), ~E_{2k} (6z), ~E_{2k} ({12z}) \} \end{eqnarray*} constitutes a basis for $E_{2k}(\Gamma_0(12))$. {\bf (c)} Appealing to (\ref{1_10}), the assertion follows from (a) and (b). \section{Fourier series expansions of $\eta (rz)$ and $E_{2k}(rz)$ at certain cusps} Let $k \geq 2$ be an integer. For convenience we set $\eta_r (z)=\eta(rz)$ for $r \in \mathbb{N}$. We also set \begin{eqnarray} \label{4_10} A_c= \begin{bmatrix} -1 & 0 \\ c & -1 \end{bmatrix} \in SL_2(\mathbb{Z}). \end{eqnarray} The Fourier series expansions of $\eta_r(z)$ for $r=1,2,3,4,6,12$ at the cusp $1/c$ are given by the Fourier series expansions of $\eta_r(A_c^{-1}z)$ at the cusp $\infty$. In \cite[Theorem 1.7 and Proposition 2.1]{Kohler} we take $L = \begin{bmatrix} x & y \\ u & v \end{bmatrix} =L_r$ as \begin{eqnarray*} &&L_1=\begin{bmatrix} -1 & 1 \\ -1 & 0 \end{bmatrix}, L_2=\begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix}, L_3=\begin{bmatrix} -3 & 1 \\ -1 & 0 \end{bmatrix},\\ && L_4=\begin{bmatrix} -4 & 1 \\ -1 & 0 \end{bmatrix}, L_6=\begin{bmatrix} -6 & 1 \\ -1 & 0 \end{bmatrix}, L_{12}=\begin{bmatrix} -12 & 1 \\ -1 & 0 \end{bmatrix}, \end{eqnarray*} and $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} = A_1$, where $A_1$ is given by (\ref{4_10}).
We obtain the Fourier series expansions of $\eta_r(z)$ for $r=1,2,3,4,6,12$ at the cusp $1$ as \begin{eqnarray} \label{4_20} && \ds \eta_1(A_1^{-1}z)=e^{ \pi i /3} (-z-1)^{1/2} \, \sum_{n \geq 1} \dqu{12}{n} e\Big( \frac{n^2}{24} (z+1) \Big), \\ && \ds \eta_2(A_1^{-1}z)=\frac{e^{5 \pi i /12}}{2^{1/2}}(-z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big( \frac{n^2}{48} (z+1) \Big), \\ && \ds \eta_3(A_1^{-1}z)=\frac{e^{ \pi i /2}}{3^{1/2}}(-z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big( \frac{n^2}{72} (z+1) \Big), \\ && \ds \eta_4(A_1^{-1}z)=\frac{e^{7 \pi i /12}}{2}(-z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big( \frac{n^2}{96} (z+1) \Big), \\ && \ds \eta_6(A_1^{-1}z)=\frac{e^{3 \pi i /4}}{6^{1/2}}(-z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big( \frac{n^2}{144} (z+1) \Big), \\ \label{4_70} && \ds \eta_{12}(A_1^{-1}z)=\frac{e^{5 \pi i /4}}{12^{1/2}}(-z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big( \frac{n^2}{288} (z+1) \Big). \end{eqnarray} From (\ref{1_50}) and (\ref{4_20})--(\ref{4_70}) we obtain the Fourier series expansions of $\varphi (z)$ and $\varphi (3z)$ at the cusp $1$ as \begin{eqnarray} & \ds \varphi(A_1^{-1} z)&\ds = \frac{\eta_2^5 (A_1^{-1} z)}{\eta_1^2(A_1^{-1} z)\eta_4^2(A_1^{-1}z)} \nonumber \\ \label{4_80} && \ds = \frac{\ds \frac{ e^{\pi i /4}}{2^{1/2}} (-z-1)^{1/2} \ds \Big(\sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{48} (z+1) \big) \Big )^5} { \ds \Big(\sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{24} (z+1)\big) \Big)^2 \ds\Big( \sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{96} (z+1)\big) \Big)^2}, \\ & \ds \varphi(3 A_1^{-1} z)&\ds = \frac{\eta_6^5 (A_1^{-1} z)}{\eta_3^2(A_1^{-1} z)\eta_{12}^2(A_1^{-1}z)} \nonumber \\ \label{4_90} && =\ds\frac{ \ds \frac{e^{\pi i /4}}{6^{1/2}} (-z-1)^{1/2} \Big( \ds \sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{144} (z+1) \big) \Big)^5} {\Big( \ds \sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{72} (z+1) \big) \Big)^2 \Big( \ds \sum_{n\geq 1}\dqu{12}{n} e\big( \frac{n^2}{288} (z+1)\big) \Big)^2} . \end{eqnarray} Similarly, by taking $A=A_3$ in \cite[Theorem 1.7 and Proposition 2.1]{Kohler} and $L$ as \begin{eqnarray*} &&L_1:=\begin{bmatrix} -1 & -5 \\ -3 & -16 \end{bmatrix},~ L_2:=\begin{bmatrix} -2 & -5 \\ -3 & -8 \end{bmatrix},~ L_3:=\begin{bmatrix} -1 & 1 \\ -1 & 0 \end{bmatrix},\\ &&L_4:=\begin{bmatrix} -4 & -5 \\ -3 & -4 \end{bmatrix},~ L_6:=\begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix},~ L_{12}:=\begin{bmatrix} -4 & 1 \\ -1 & 0 \end{bmatrix} \end{eqnarray*} we obtain the Fourier series expansions of $\eta_r(z)$ for $r=1,2,3,4,6,12$ at the cusp $1/3$ as \begin{eqnarray} \label{4_100} && \ds \eta_1(A_3^{-1}z)=e^{5 \pi i /3}(-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big(\frac{n^2}{24}(z-5)\Big), \\ && \ds \eta_2(A_3^{-1}z)= \frac{e^{7 \pi i /12}}{2^{1/2}} (-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big(\frac{n^2}{48}(z-5)\Big), \\ && \ds \eta_3(A_3^{-1}z)=e^{ \pi i /3} (-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n} e\Big(\frac{n^2}{24}(3 z+1)\Big), \\ && \ds \eta_4(A_3^{-1}z)= \frac{e^{17 \pi i /12}}{2} (-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\frac{n^2}{96}(z-5)\Big), \\ && \ds \eta_6(A_3^{-1}z)=\frac{e^{5 \pi i /12}}{2^{1/2}}(-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\frac{n^2}{48}(3 z+1 )\Big), \\ \label{4_150} && \ds \eta_{12}(A_3^{-1}z)= \frac{e^{7 \pi i /12}}{2} (-3z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\frac{n^2}{96}(3 z+1 )\Big). \end{eqnarray} From (\ref{1_50}) and (\ref{4_100})--(\ref{4_150}) we obtain the Fourier series expansions of $\varphi (z)$ and $\varphi (3z)$ at the cusp $1/3$ as \begin{eqnarray} & \ds \varphi(A_3^{-1} z) &= \frac{\eta_2^5 (A_3^{-1} z)}{\eta_1^2(A_3^{-1} z)\eta_4^2(A_3^{-1}z)} \nonumber \\ \label{4_160} && =\ds \frac{ \ds \frac{e^{3\pi i /4}}{2^{1/2}} (-3z-1)^{1/2} \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{48}(z-5)\big)\Big)^5} {\Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{24}(z-5)\big) \Big)^2 \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{96}(z-5)\big) \Big)^2}, \\ & \ds \varphi(3 A_3^{-1} z) &= \frac{\eta_6^5 (A_3^{-1} z)}{\eta_3^2(A_3^{-1} z)\eta_{12}^2(A_3^{-1}z)} \nonumber \\ \label{4_170} && =\ds\frac{ \ds \frac{e^{\pi i /4}}{2^{1/2}} (-3z-1)^{1/2} \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{48}(3 z+1)\big) \Big)^5} {\Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{24}(3 z+1 )\big)\Big )^2 \Big ( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\frac{n^2}{96}(3 z+1 )\big) \Big)^2}
\end{equation}ar Again by taking $A=A_4$ in \cite[Theorem 1.7 and Proposition 2.1]{Kohler} and $L$ as \begin{equation}ars &&L_1:=\begin{bmatrix} -1 & 1 \\ -4 & 3 \end{bmatrix},~ L_2:=\begin{bmatrix} -1 & 2 \\ -2 & 3 \end{bmatrix},~ L_3:=\begin{bmatrix} -3 & 1 \\ -4 & 1 \end{bmatrix} ,\\ &&L_4:=\begin{bmatrix} -1 & 4 \\ -1 & 3 \end{bmatrix},~ L_6:=\begin{bmatrix} -3 & 2 \\ -2 & 1 \end{bmatrix},~ L_{12}:=\begin{bmatrix} -3 & 4 \\ -1 & 1 \end{bmatrix} \end{equation}ars we obtain the Fourier series expansions of $\eta_r(z)$ for $r=1,2,3,4,6,12$ at the cusp $1/4$ as \begin{equation}ar \label{4_180} && \ds \eta_1(A_4^{-1}z)=e^{13 \pi i /12}(-4z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{24}(z+1)\Big),\\ && \ds \eta_2(A_4^{-1}z)=e^{ \pi i /6} (-4z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{12}( z+ 1)\Big),\\ && \ds \eta_3(A_4^{-1}z)=\fracac{e^{5 \pi i /12}}{3^{1/2}}(-4z-1)^{1/2}\, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{72}(z+1)\Big),\\ && \ds \eta_4(A_4^{-1}z)=e^{ \pi i /12} (-4z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{6}(z+1)\Big),\\ && \ds \eta_6(A_4^{-1}z)=\fracac{e^{ \pi i /3}}{3^{1/2}}(-4z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{36}(z+1)\Big),\\ \label{4_230} && \ds \eta_{12}(A_4^{-1}z)=\fracac{e^{5 \pi i /12}}{3^{1/2}} (-4z-1)^{1/2} \, \sum_{n\geq 1}\dqu{12}{n}e\Big(\fracac{n^2}{18}(z+1)\Big). 
\end{equation}ar From (\ref{1_50}) and (\ref{4_180})--(\ref{4_230}) we obtain the Fourier series expansions of $\varphi (z)$ and $\varphi (3z)$ at the cusp $1/4$ as \begin{equation}ar & \ds \varphi(A_4^{-1} z) &= \fracac{\eta_2^5 (A_4^{-1} z)}{\eta_1^2(A_4^{-1} z)\eta_4^2(A_4^{-1}z)} \nonumber \\ \label{4_240} && =\ds\fracac{ e^{\pi i /2} (-4z-1 )^{1/2} \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{12}( z+1)\big) \Big)^5} {\Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{24}(z+1)\big)\Big)^2 \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{6}(z+ 1)\big)\Big)^2}, \\ & \ds \varphi(3 A_4^{-1} z) &= \fracac{\eta_6^5 (A_4^{-1} z)}{\eta_3^2(A_4^{-1} z)\eta_{12}^2(A_4^{-1}z)} \nonumber \\ \label{4_250} && =\ds\fracac{ \ds\fracac{1}{3} (-4z-1)^{1/2} \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{36}(z+1)\big)\Big)^5} {\Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{72}( z+ 1)\big) \Big)^2 \Big( \ds \sum_{n\geq 1}\dqu{12}{n}e\big(\fracac{n^2}{18}(z+1)\big)\Big)^2} . \end{equation}ar If the Fourier series expansion of a modular form $f(z)$ of weight $k$ at the cusp $1/c$ is of the form \begin{equation}ar \label{4_260} f(A_c^{-1}z)=(-cz-1)^k\sum_{n \geq 0} a_n e^{2 \pi i n z_c}, \end{equation}ar where $z_c\in \mathbb{H}$ depends on $z$ and $A_c$, then we refer to $(-cz-1)^k a_0$ in (\ref{4_260}) as the ``first term'' of the Fourier series expansion of $f(z)$ at the cusp $1/c$. For $k\geq 2$ an integer and $i$ an integer with $0 \leq i \leq 2k$ we give the first terms of the Fourier series expansions of $\varphi^{4k-2i}(z) \varphi^{2i} (3z)$ at certain cusps in the following table. We deduce them (except for the cusps $1/2$ and $1/6$) from (\ref{4_80}), (\ref{4_90}), (\ref{4_160}), (\ref{4_170}), (\ref{4_240}) and (\ref{4_250}). 
\begin{center} \begin{longtable}{l | c c c c c c} \caption{First terms of $\varphi^{4k-2i}(z) \varphi^{2i} (3z)$ at certain cusps} \endhead {\rm cusp} & $\infty$ & $1$ & $ 1/2$ & $1/3$ & $1/4$ & $1/6$ \\ \hline \\[-2mm] {\rm term} & $ 1 $ & $\ds (-z -1)^{2k}\frac{(-1)^k}{2^{2k}3^i}$ & $ 0$ & $\ds (-3z-1)^{2k}\frac{(-1)^{i+k}}{2^{2k}} $ & $\ds (-4z-1)^{2k}\frac{ (-1)^i}{3^i} $ & $0$ \end{longtable} \end{center} By (\ref{1_110}), we have \begin{eqnarray*} &&v_{1/2}(\varphi^{4k-2i}(z)\varphi^{2i}(3z))=3k-i > 0,~\\ &&v_{1/6}(\varphi^{4k-2i}(z)\varphi^{2i}(3z))=k+i > 0 \end{eqnarray*} for all $0\leq i \leq 2k$, that is, the first terms of the Fourier series expansions of $\varphi^{4k-2i}(z)\varphi^{2i}(3z)$ at the cusps $1/2$ and $1/6$ are $0$. This completes Table 4.1. The following theorem is an analogue of \cite[Proposition 2.1]{Kohler} for the Eisenstein series $E_{2k}(tz)$. For convenience we set $E_{(2k,t)}(z)=E_{2k}(tz)$ for $t \in \mathbb{N}$. \begin{theorem} Let $k\geq 2$ be an integer and $t \in \mathbb{N}$. The Fourier series expansion of $E_{(2k,t)}(z)$ at the cusp $1/c\in \mathbb{Q}$ is \begin{eqnarray*} E_{(2k,t)}(A_c^{-1}z)=\Big(\frac{g}{t}\Big)^{2k}(-cz-1)^{2k} E_{2k}\Big(\frac{g^2}{t}z+\frac{y g}{t}\Big), \end{eqnarray*} where $g=\gcd(t, c)$, $y$ is some integer, and $A_c$ is given in {\em (\ref{4_10})}. \end{theorem} \begin{proof} The Fourier series expansion of $E_{(2k,t)}(z)$ at the cusp $ 1/c $ is given by the Fourier series expansion of $E_{(2k,t)}(A_c^{-1}z)$ at the cusp $\infty$. We have \begin{eqnarray*} E_{(2k,t)}(A_c^{-1}z) =E_{(2k,t)}\Big(\frac{-z}{-cz-1}\Big) = E_{2k}\Big(\frac{-tz}{-cz-1}\Big) = E_{2k}(\gamma z), \end{eqnarray*} where $\gamma = \begin{bmatrix} -t & 0 \\ -c & -1 \end{bmatrix}$. As $\gcd(t/g,c/g)=1$, there exist $y,v\in \mathbb{Z}$ such that $\ds \frac{t}{g}(-v)+\frac{c}{g}y=1$. Thus $L := \begin{bmatrix} -t/g & y \\ -c/g & v \end{bmatrix} \in SL_2(\mathbb{Z})$.
Then for $k \geq 2$, we have \begin{eqnarray*} & E_{(2k,t)}(A_c^{-1}z) &=E_{2k}(L L^{-1} \gamma z)\\ & &=\Big(-c\Big(\frac{(-vt +cy) z +y}{t}\Big) + v \Big)^{2k} E_{2k}\Big( \frac{(-vt +cy) z +y}{t/g}\Big)\\ & &=(g/t)^{2k}\Big(c\frac{vt-cy}{g}z + \frac{vt-cy}{g} \Big)^{2k} E_{2k}\Big( \frac{g^2 z + yg}{t}\Big)\\ & &=(g/t)^{2k}(-cz - 1 )^{2k} E_{2k}\Big( \frac{g^2 }{t}z + \frac{yg}{t}\Big), \end{eqnarray*} which completes the proof. \end{proof} It follows from Theorem 4.1 and (\ref{1_80}) that the first term of the Fourier series expansion of $E_{2k}(tz)$ at the cusp $1/c$ is \begin{eqnarray} \label{4_270} \Big(\frac{g}{t}\Big)^{2k} (-cz-1)^{2k} \Big( \frac{-B_{2k}}{4k} \Big). \end{eqnarray} \section{Proofs of Theorems 2.2 and 2.3} Let $k\geq 2$ be an integer and $i$ an integer with $0 \leq i \leq 2k$. By (\ref{1_50}) we have \begin{eqnarray*} && \ds \varphi^{4k-2i}(z)\varphi^{2i}(3z)=\frac{\eta^{20k-10i}(2z)}{\eta^{8k-4i}(z)\eta^{8k-4i}(4z)}\cdot \frac{\eta^{10i}(6z)}{\eta^{4i}(3z) \eta^{4i}({12z})}. \end{eqnarray*} By Theorem 1.1, we have $\varphi^{4k-2i}(z)\varphi^{2i}(3z) \in M_{2k}(\Gamma_0(12))$. By Theorem 2.1(c), we have \begin{eqnarray} \label{5_10} \varphi^{4k-2i}(z)\varphi^{2i}(3z) = \sum_{r \mid 12} b_{(r,i,k)} E_{2k}(rz) + \sum_{1\leq j \leq 4k-5}a_{(j,i,k)} C_{j,2k}(z) \end{eqnarray} for some constants $b_{(1,i,k)}, b_{(2,i,k)}, b_{(3,i,k)}, b_{(4,i,k)}, b_{(6,i,k)}, b_{(12,i,k)}, a_{(1,i,k)}, \ldots, a_{(4k-5,i,k)}$. Since $C_{1,2k}(z), \ldots, C_{4k-5,2k}(z)$ are cusp forms, the first terms of their Fourier series expansions at all cusps are $0$.
By appealing to (\ref{4_270}) and Table 4.1 we equate the first terms of the Fourier series expansions on both sides of (\ref{5_10}) at the cusps $ \infty$, $1$, $1/2$, $1/3$, $1/4$, $1/6$ to obtain the system of linear equations \begin{eqnarray*} && b_{(1,i,k)} + b_{(2,i,k)} + b_{(3,i,k)} + b_{(4,i,k)} + b_{(6,i,k)} + b_{(12,i,k)} = \frac{-4k}{B_{2k}} ,\\ && b_{(1,i,k)} + \frac{b_{(2,i,k)}}{2^{2k}} + \frac{b_{(3,i,k)}}{3^{2k} } + \frac{b_{(4,i,k)}}{4^{2k}} + \frac{b_{(6,i,k)}}{6^{2k}} + \frac{b_{(12,i,k)}}{12^{2k}} = \frac{ (-1)^k }{2^{2k} 3^i} \cdot \frac{-4k}{B_{2k}},\\ && b_{(1,i,k)} + b_{(2,i,k)}+ \frac{b_{(3,i,k)}}{ 3^{2k}} + \frac{b_{(4,i,k)}}{2^{2k} } + \frac{b_{(6,i,k)}}{3^{2k}} + \frac{b_{(12,i,k)}}{6^{2k}}= 0,\\ && b_{(1,i,k)} + \frac{b_{(2,i,k)} }{2^{2k}} + b_{(3,i,k)} + \frac{b_{(4,i,k)} }{ 4^{2k}} + \frac{b_{(6,i,k)}}{2^{2k}} + \frac{ b_{(12,i,k)}}{ 4^{2k}} = \frac{(-1)^{i+k}}{2^{2k}} \cdot \frac{-4k}{B_{2k}},\\ && b_{(1,i,k)} + b_{(2,i,k)} + \frac{b_{(3,i,k)}}{3^{2k}} + b_{(4,i,k)} + \frac{b_{(6,i,k)} }{3^{2k}} + \frac{b_{(12,i,k)} }{3^{2k}} =\frac{ (-1)^{i} }{3^i} \cdot \frac{-4k}{B_{2k}},\\ && b_{(1,i,k)} + b_{(2,i,k)} + b_{(3,i,k)} + \frac{b_{(4,i,k)} }{ 2^{2k} } + b_{(6,i,k)} + \frac{b_{(12,i,k)} }{ 2^{2k} }=0. \end{eqnarray*} Solving the above system of linear equations, we obtain the asserted expressions for $b_{(r,i,k)}$ for $r=1,2,3,4,6,12$ in (\ref{2_30})--(\ref{2_80}). By (\ref{2_10}), we have $[j]C_{j,2k}(z) =1$ for each $j$ with $1 \leq j \leq 4k-5$. Equating the coefficients of $q^{ j }$ on both sides of (\ref{5_10}) we obtain \begin{eqnarray*} N(1^{4k-2i},3^{2i};j) = \sum_{r \mid 12} b_{(r,i,k)} \sigma_{2k-1} (j/r) + \sum_{1\leq l \leq j-1}a_{(l,i,k)} c_{l,2k}(j) + a_{(j,i,k)}. \end{eqnarray*} We isolate $a_{(j,i,k)}$ to complete the proof of Theorem 2.2. Finally, Theorem 2.3 follows from (\ref{1_60}), (\ref{1_80}), (\ref{2_10}) and Theorem 2.2.
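As an independent sanity check (ours, not part of the paper), the $6\times 6$ system above can be solved by machine in exact rational arithmetic. The sketch below does this for $k=2$, $i=1$ (so $B_{2k}=B_4=-1/30$ and $-4k/B_{2k}=240$), and also brute-forces the first few representation numbers $N(1^6,3^2;n)$; the outputs agree with the values quoted in the first Example of the next section.

```python
from fractions import Fraction as Fr
from itertools import product
from math import isqrt

# Exact-rational check of the 6x6 system above for k = 2, i = 1
# (the case N(1^6, 3^2; n)).  Here B_4 = -1/30, so -4k/B_{2k} = 240.
k, i = 2, 1
w = 2 * k          # the weight 2k
E = Fr(240)        # -4k/B_{2k}

rows = [
    ([1, 1, 1, 1, 1, 1], E),                                         # cusp oo
    ([1, Fr(1, 2**w), Fr(1, 3**w), Fr(1, 4**w), Fr(1, 6**w),
      Fr(1, 12**w)], Fr((-1)**k, 2**w * 3**i) * E),                  # cusp 1
    ([1, 1, Fr(1, 3**w), Fr(1, 2**w), Fr(1, 3**w), Fr(1, 6**w)],
     Fr(0)),                                                         # cusp 1/2
    ([1, Fr(1, 2**w), 1, Fr(1, 4**w), Fr(1, 2**w), Fr(1, 4**w)],
     Fr((-1)**(i + k), 2**w) * E),                                   # cusp 1/3
    ([1, 1, Fr(1, 3**w), 1, Fr(1, 3**w), Fr(1, 3**w)],
     Fr((-1)**i, 3**i) * E),                                         # cusp 1/4
    ([1, 1, 1, Fr(1, 2**w), 1, Fr(1, 2**w)], Fr(0)),                 # cusp 1/6
]

# Gauss-Jordan elimination over the rationals.
M = [[Fr(c) for c in r] + [v] for r, v in rows]
for c in range(6):
    p = next(r for r in range(c, 6) if M[r][c] != 0)
    M[c], M[p] = M[p], M[c]
    M[c] = [x / M[c][c] for x in M[c]]
    for r in range(6):
        if r != c and M[r][c] != 0:
            M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]
b = [M[r][6] for r in range(6)]
# b_{(r,1,2)}, r = 1,2,3,4,6,12: 28/5, 0, -108/5, -448/5, 0, 1728/5
print(b)

# Brute-force N(1^6, 3^2; n): integer solutions of
# x_1^2 + ... + x_6^2 + 3 x_7^2 + 3 x_8^2 = n.
def N(n):
    R = range(-isqrt(n), isqrt(n) + 1)
    return sum(1 for x in product(R, repeat=8)
               if sum(t * t for t in x[:6]) + 3 * (x[6]**2 + x[7]**2) == n)

print([N(n) for n in range(1, 4)])  # [12, 60, 164]
```

The uniqueness of the solution reflects the fact that a level-12 Eisenstein series vanishing at all six cusps is identically zero.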
\section{Examples and Remarks} We now illustrate Theorems 2.2 and 2.3 by some examples. \begin{example} We determine $N(1^{6}, 3^{2};n)$ for all $n \in \mathbb{N}$. We take $k=2$ and $i=1$ in Theorems {\em 2.2} and {\em 2.3}. By {\em (\ref{2_30})--(\ref{2_80})} we have \begin{eqnarray} \label{6_10} \left\{\begin{array}{l} b_{(1,1,2)}= 28/5,~b_{(2,1,2)}=0,~b_{(3,1,2)}=-108/5,\\[1mm] b_{(4,1,2)}=-448/5,~b_{(6,1,2)}=0,~b_{(12,1,2)}=1728/5. \end{array}\right. \end{eqnarray} We compute $N(1^{6}, 3^{2};n)$ for $n=1, 2, 3$ as $4k-5 =3$, and obtain \begin{eqnarray} \label{6_20} N(1^{6}, 3^{2};1)=12,~N(1^{6}, 3^{2};2)=60,~N(1^{6}, 3^{2};3)=164. \end{eqnarray} By {\em (\ref{2_90})}, {\em (\ref{6_10})} and {\em (\ref{6_20})}, we obtain \begin{eqnarray*} a_{(1,1,2)}=32/5,~a_{(2,1,2)}=48,~a_{(3,1,2)}=576/5. \end{eqnarray*} Then, by Theorem {\em 2.3}, for all $n \in \mathbb{N}$, we have \begin{eqnarray*} N(1^6, 3^2;n) &=& \frac{28}{5} \sigma_{3}(n) - \frac{108}{5} \sigma_{3}(n/3) -\frac{448}{5} \sigma_{3}(n/4) + \frac{1728}{5} \sigma_{3}(n/12) \\ &&+ \frac{32}{5} c_{1,4}(n) + 48 c_{2,4}(n) + \frac{576}{5} c_{3,4}(n), \end{eqnarray*} which agrees with the known results, see for example \cite{alacaw}. We note that the last two coefficients in the above expression are different from the ones in \cite{alacaw} since we used a different basis for the space of cusp forms. \end{example} \begin{example} We determine $N(1^4, 3^8;n)$ for all $n \in \mathbb{N}$. We take $k=3$ and $i=4$ in Theorems {\em 2.2} and {\em 2.3}. By {\em (\ref{2_30})--(\ref{2_80})} we have \begin{eqnarray} \label{6_30} \left\{\begin{array}{l} b_{(1,4,3)}= 8/91,~b_{(2,4,3)}=0,~b_{(3,4,3)}=720/91,\\ b_{(4,4,3)}=-512/91,~b_{(6,4,3)}=0,~b_{(12,4,3)}=-46080/91. \end{array}\right. \end{eqnarray} We compute $N(1^{4}, 3^{8};n)$ for $n=1, 2, 3, 4, 5, 6, 7$ as $4k-5 =7$, and obtain \begin{eqnarray} \label{6_40} \left\{\begin{array}{l} N(1^{4}, 3^{8};1)=8,~N(1^{4}, 3^{8};2)=24,~N(1^{4}, 3^{8};3)=48,\\ N(1^{4}, 3^{8};4)=152,~N(1^{4}, 3^{8};5)=432,\\ N(1^{4}, 3^{8};6)=720,~N(1^{4}, 3^{8};7)=1344. \end{array}\right. \end{eqnarray} By {\em (\ref{2_90})}, {\em (\ref{6_30})} and {\em (\ref{6_40})}, we obtain \begin{eqnarray*} &&a_{(1,4,3)}=720/91,~a_{(2,4,3)}=14880/91,~a_{(3,4,3)}=123376/91,~a_{(4,4,3)}=40640/7,\\ &&a_{(5,4,3)}=1248448/91,~a_{(6,4,3)}=1551360/91,~a_{(7,4,3)}=792576/91. \end{eqnarray*} Then, by Theorem {\em 2.3}, for all $n \in \mathbb{N}$, we have \begin{eqnarray*} N(1^4, 3^8;n) &=& \frac{8}{91} \sigma_{5}(n) + \frac{720}{91} \sigma_{5}(n/3) -\frac{512}{91} \sigma_{5}(n/4) - \frac{46080}{91} \sigma_{5}(n/12) \\ &&+ \frac{720}{91} c_{1,6}(n) + \frac{14880}{91} c_{2,6}(n) + \frac{123376}{91} c_{3,6}(n) + \frac{40640}{7} c_{4,6}(n) \\ &&+ \frac{1248448}{91} c_{5,6}(n) + \frac{1551360}{91} c_{6,6}(n) + \frac{792576}{91} c_{7,6}(n), \end{eqnarray*} which agrees with the known results, see for example \cite{ayse-1}. \end{example} \begin{example} Ramanujan \cite{ramanujan} stated a formula for $N(1^{2k},3^0;n)$, which was proved by Mordell in \cite{mordell}, see also \cite{cooper, aaw}. By taking $i=0$ in Theorems {\em 2.2} and {\em 2.3}, we obtain \begin{eqnarray*} N(1^{4k},3^0;n) &=& \frac{4k}{(2^{2k}-1)B_{2k}} \Big( (-1)^{k+1} \sigma_{2k-1}(n) + (1+(-1)^{k}) \sigma_{2k-1}(n/2) \\ &&\hspace{30mm} -2^{2k} \sigma_{2k-1}(n/4) \Big) + \sum_{1\leq j \leq 4k-5} a_{(j,0,k)} c_{j,2k}(n), \end{eqnarray*} where $a_{(j,0,k)}$ and $c_{j,2k}(n)$ are given by {\em (\ref{2_90})} and {\em (\ref{2_10})}, respectively. The coefficients of the $\sigma$-functions in the above formula agree with those in \cite[Theorem 1.1]{cooper} and \cite[Theorem 4.1]{aaw}.
Different coefficients in the cusp part are due to the choice of our basis for the space $S_{2k}\big(\Gamma_0(12)\big)$ of cusp forms. \end{example} \begin{remark} Throughout the paper we assumed that $k \geq 2$. For $k=1$ we have ${\rm dim}\big(S_{2}\big(\Gamma_0(12)\big)\big)=0$. A basis for $M_{2}\big(\Gamma_0(12)\big) = E_{2}\big(\Gamma_0(12)\big)$ is given in \cite{alacaaygin}, see also \cite{williams, alaca-2, alaca-3}. \end{remark} \begin{remark} Let $k\geq 2$ be an integer. Let $N\in \mathbb{N}$ and let $\chi$ be a Dirichlet character of modulus dividing $N$. We write $M_k(\Gamma_0(N), \chi)$ to denote the space of modular forms of weight $k$ with multiplier system $\chi$ for $\Gamma_0(N)$, and $E_k (\Gamma_0(N), \chi)$ and $S_k(\Gamma_0(N), \chi)$ to denote the subspaces of Eisenstein forms and cusp forms of $M_k(\Gamma_0(N), \chi)$, respectively. We deduce from the formulae in \cite[Section 6.3, p. 98]{stein} that \begin{eqnarray*} && {\rm dim} \big( S_{2k-1}\big(\Gamma_0(12),\chi_1\big)\big)=4k-7,\\ && {\rm dim}\big( S_{2k-1}\big(\Gamma_0(12),\chi_2\big)\big)=4k-6,\\ && {\rm dim}\big( S_{2k}\big(\Gamma_0(12),\chi_3\big)\big)=4k-4, \end{eqnarray*} where \begin{eqnarray} \label{6_50} \chi_1(m)=\dqu{-3}{m},~ \chi_2(m)=\dqu{-4}{m},~\chi_3(m)=\dqu{12}{m} \end{eqnarray} are Legendre-Jacobi-Kronecker symbols. By appealing to a more general version of Theorem {\em 1.1 (Ligozat)}, see for example {\em \cite[Theorem 1.64]{onoweb}} and \cite[Corollary 2.3, p.
37]{Kohler}, we deduce that the families of eta quotients \begin{eqnarray*} && \ds \big\{ C_{j,2k-1}(z) \big\}_{1\leq j \leq 4k-7} , \\[1mm] && \ds \Big\{ \frac{\eta^4(z)\eta(4z)\eta(12z)}{\eta^4(2z)\eta^2(6z)} C_{j,2k-1}(z) \Big\}_{1\leq j \leq 4k-6},\\[1mm] && \ds \Big\{ \frac{\eta^4(z)\eta(4z)\eta(12z)}{\eta^4(2z)\eta^2(6z)}C_{j,2k}(z)\Big\}_{1 \leq j \leq 4k-4} \end{eqnarray*} constitute a basis for $S_{2k-1}(\Gamma_0(12),\chi_1)$, $S_{2k-1}(\Gamma_0(12),\chi_2)$, $S_{2k}(\Gamma_0(12),\chi_3)$, respectively, where $\chi_1,\chi_2,\chi_3$ are given in {\em (\ref{6_50})}. \end{remark} \section*{Acknowledgments} The authors would like to thank Professor Shaun Cooper for bringing this research problem to our attention at CNTA XIII (2014). The research of the first two authors was supported by Discovery Grants from the Natural Sciences and Engineering Research Council of Canada (RGPIN-418029-2013 and RGPIN-2015-05208). Zafer Selcuk Aygin's studies are supported by the Turkish Ministry of Education. \noindent School of Mathematics and Statistics\\ Carleton University\\ Ottawa, Ontario, Canada K1S 5B6 \noindent e-mail addresses : \\ [email protected]\\ [email protected]\\ [email protected] \end{document}
\begin{document} \title{Structures in Concrete Categories} \author{Henri Bourl\`{e}s\thanks{ Satie, ENS de Cachan/CNAM, 61 Avenue Pr\'{e}sident Wilson, F-94230 Cachan, France ([email protected]) }} \maketitle \begin{abstract} The monumental treatise ``\'{E}l\'{e}ments de math\'{e}matique'' of N. Bourbaki is based on the notion of structure and on the theory of sets. \ On the other hand, the theory of categories is based on the notions of morphism and functor. An appropriate definition of a structure in a concrete category is given in this paper. This definition makes it possible to bridge the gap between Bourbaki's structural approach and the categorical language. \end{abstract} \section{Introduction} The monumental treatise ``\'{E}l\'{e}ments de math\'{e}matique'' of N. Bourbaki is based on the notion of structure and on the theory of sets \cite{Bourbaki-E}. \ On the other hand, the theory of categories is based on the notions of morphism and functor. \ The notion of ``concrete category'', as developed in \cite{Adamek}, did not bridge the gap between the categorical approach and Bourbaki's structures. According to \cite{Adamek}, a structure in a concrete category $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) ,$ where $\mathcal{X}$ is the base category and $\left\vert .\right\vert :\mathcal{C}\rightarrow \mathcal{X}$ is the forgetful functor, is an object $A$ of $\mathcal{C}$. \ If $A,$ $A^{\prime }$ are two $\mathcal{C}$-isomorphic objects such that $\left\vert A\right\vert =\left\vert A^{\prime }\right\vert ,$ these objects are not equal in general. \ Nevertheless, if $\mathcal{C}$ is, for example, the category $\mathbf{Top}$ of topological spaces and $\mathcal{X}$ is the category $\mathbf{Set}$ of sets, $A$ and $A^{\prime }$ are two equal sets endowed with the same topologies.
\begin{definition} A structure in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $ is an equivalence class of objects of $\mathcal{C}$ with respect to the following equivalence relation: two objects $A$ and $A^{\prime }$ of $\mathcal{C}$ are equivalent if $\left\vert A\right\vert =\left\vert A^{\prime }\right\vert $ and there is a $\mathcal{C}$-isomorphism $A\rightarrow A^{\prime }.$ \end{definition} Some consequences of this definition are studied in the sequel. \section{Structures} \subsection{Isomorphisms of structures} Let $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2}$\ be two structures in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) .$ \ Assume that there exist representatives $A_{1}$ and $A_{2}$ of $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2},$ respectively, which are isomorphic in $\mathcal{C}$. \ Then $\left\vert A_{1}\right\vert $ and $\left\vert A_{2}\right\vert $ are not necessarily equal, but are isomorphic in $\mathcal{X}$. \ If $A_{1}^{\prime }$ and $A_{2}^{\prime }$ are other representatives of $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2},$ they are isomorphic in $\mathcal{C}$. \begin{definition} The structures $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2}$\ are called isomorphic if the above condition holds. \end{definition} This definition determines an equivalence relation on structures in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $.
\subsection{Comparison of structures} Obviously, if $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2}$\ are two structures in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $, and $A_{1}$ and $A_{2}$ are representatives of $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2},$ respectively, such that $\left\vert A_{1}\right\vert =\left\vert A_{2}\right\vert $ and there exists a $\mathcal{C}$-morphism $A_{1}\rightarrow A_{2}$ such that $\left\vert A_{1}\rightarrow A_{2}\right\vert =\func{id}_{\left\vert A_{1}\right\vert },$ then for any representatives $A_{1}^{\prime }$ and $A_{2}^{\prime }$ of $\mathfrak{S}_{1}$ and $\mathfrak{S}_{2},$ respectively, there exists a $\mathcal{C}$-morphism $A_{1}^{\prime }\rightarrow A_{2}^{\prime }$ such that $\left\vert A_{1}^{\prime }\rightarrow A_{2}^{\prime }\right\vert =\func{id}_{\left\vert A_{1}^{\prime }\right\vert }.$ \begin{definition} If the above condition holds, then the structure $\mathfrak{S}_{1}$ is called finer than $\mathfrak{S}_{2}$ (written $\mathfrak{S}_{1}\geq \mathfrak{S}_{2}$) and $\mathfrak{S}_{2}$ is called coarser than $\mathfrak{S}_{1}$ (written $\mathfrak{S}_{2}\leq \mathfrak{S}_{1}$). \end{definition} \begin{lemma} \label{lemma-order-relation}The relation $\geq $ on structures in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $ is an order relation. \end{lemma} \begin{proof} The reflexivity and the transitivity are obvious. \ Assume that $\mathfrak{S}_{1}\geq \mathfrak{S}_{2}$ and $\mathfrak{S}_{2}\geq \mathfrak{S}_{1}.$ \ Let $A_{1},A_{2}$ be representatives of $\mathfrak{S}_{1},\mathfrak{S}_{2}$ respectively.
\ There exist $\mathcal{C}$-morphisms $A_{1}\rightarrow A_{2}$ and $A_{2}\rightarrow A_{1}$ such that $\left\vert A_{1}\rightarrow A_{2}\right\vert =\func{id}_{\left\vert A_{1}\right\vert }$ and $\left\vert A_{2}\rightarrow A_{1}\right\vert =\func{id}_{\left\vert A_{2}\right\vert }.$ \ Thus $A_{1}\rightarrow A_{2}\rightarrow A_{1}$ and $A_{2}\rightarrow A_{1}\rightarrow A_{2}$ are $\mathcal{C}$-morphisms such that $\left\vert A_{1}\rightarrow A_{2}\rightarrow A_{1}\right\vert =\func{id}_{\left\vert A_{1}\right\vert }$ and $\left\vert A_{2}\rightarrow A_{1}\rightarrow A_{2}\right\vert =\func{id}_{\left\vert A_{2}\right\vert }.$ \ Since the forgetful functor is faithful, it follows that $A_{1}\rightarrow A_{2}\rightarrow A_{1}=\func{id}_{A_{1}}$ and $A_{2}\rightarrow A_{1}\rightarrow A_{2}=\func{id}_{A_{2}}.$ \ Therefore, $A_{1}\rightarrow A_{2}$ and $A_{2}\rightarrow A_{1}$ are isomorphisms, and each of them is the inverse of the other. \ As a result, $\mathfrak{S}_{1}=\mathfrak{S}_{2}.$ \end{proof} \begin{example} A structure in the concrete category $\left( \mathbf{Top},\left\vert .\right\vert ,\mathbf{Set}\right) $ is a topology. \ The structure $\mathfrak{S}_{1}$ is finer than $\mathfrak{S}_{2}$ if, and only if, the topology $\mathfrak{S}_{1}$ is finer than the topology $\mathfrak{S}_{2}.$ \end{example} \subsection{Initial structures and final structures} Let $A,B$ be objects of $\mathcal{C}$. \ Recall that the relation ``$f:\left\vert B\right\vert \rightarrow \left\vert A\right\vert $ is a $\mathcal{C}$-morphism'' means that there exists a morphism $f:B\rightarrow A$ (necessarily unique) such that $\left\vert B\rightarrow A\right\vert =\left\vert B\right\vert \rightarrow \left\vert A\right\vert $ (\cite{Adamek}, Remark 6.22). Let $\left( f_{i}:A\rightarrow A_{i}\right) _{i\in I}$ be a source in $\mathcal{C}$, i.e. a family of $\mathcal{C}$-morphisms.
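The $\mathbf{Top}$ Example above can be made concrete with a small computation. In the following sketch (our own illustration; the helper names are ours), topologies on a two-point base set are modeled as families of open sets, and $\mathfrak{S}_{1}\geq \mathfrak{S}_{2}$ is checked as continuity of the identity map from the finer to the coarser topology, i.e. as a $\mathbf{Top}$-morphism lying over $\limfunc{id}$ of the base set.

```python
# Structures in (Top, |.|, Set) on the base set X = {0, 1}: a structure
# is a topology on X, and S1 >= S2 (S1 finer than S2) exactly when the
# identity map is continuous as a map (X, S1) -> (X, S2).
X = frozenset({0, 1})

def is_topology(T):
    """Open-set axioms on a finite set (pairwise closure suffices)."""
    T = set(map(frozenset, T))
    return (frozenset() in T and X in T
            and all(U | V in T and U & V in T for U in T for V in T))

def continuous(f, T_dom, T_cod):
    """f is continuous iff the preimage of every open set is open."""
    opens = set(map(frozenset, T_dom))
    return all(frozenset(x for x in X if f(x) in U) in opens
               for U in map(frozenset, T_cod))

discrete   = [set(), {0}, {1}, {0, 1}]
sierpinski = [set(), {0}, {0, 1}]
indiscrete = [set(), {0, 1}]
assert all(map(is_topology, (discrete, sierpinski, indiscrete)))

ident = lambda x: x
# discrete >= sierpinski >= indiscrete in the order defined above:
assert continuous(ident, discrete, sierpinski)
assert continuous(ident, sierpinski, indiscrete)
# ... and the order is strict here: the identity is not continuous upward.
assert not continuous(ident, indiscrete, sierpinski)
```

The comparison is only a partial order: the two Sierpi\'nski-type topologies $\{\emptyset,\{0\},X\}$ and $\{\emptyset,\{1\},X\}$ are incomparable.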
\ This source is called initial if for each object $B$ of $\mathcal{C},$ the relation \qquad ``$f:\left\vert B\right\vert \rightarrow \left\vert A\right\vert $ is a $\mathcal{C}$-morphism'' is equivalent to the relation \qquad ``for all $i\in I,$ $f_{i}\circ f:\left\vert B\right\vert \rightarrow \left\vert A_{i}\right\vert $ is a $\mathcal{C}$-morphism''. Obviously, the source $\left( f_{i}:A\rightarrow A_{i}\right) _{i\in I}$ is initial if, and only if, for every object $A^{\prime }$ with the same structure as $A,$ i.e. for which there exists a $\mathcal{C}$-isomorphism $\varphi :A^{\prime }\rightarrow A$ with $\left\vert \varphi \right\vert =\limfunc{id}_{\left\vert A\right\vert }$, the source $\left( f_{i}\circ \varphi :A^{\prime }\rightarrow A_{i}\right) _{i\in I}$ is initial. \begin{definition} If the source $\left( f_{i}:A\rightarrow A_{i}\right) _{i\in I}$ is initial, then the structure of $A$ is called initial with respect to the family $\left( f_{i}:\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert \right) _{i\in I}.$ \end{definition} \begin{theorem} If the structure $\mathfrak{S}$ of $A$ is initial for the family $\left( f_{i}:\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert \right) _{i\in I},$ then $\mathfrak{S}$ is the coarsest of all structures $\mathfrak{S}^{\prime }$ of objects\ $A^{\prime }$ of $\mathcal{C}$ such that all $\mathcal{X}$-morphisms $f_{i}:\left\vert A^{\prime }\right\vert \rightarrow \left\vert A_{i}\right\vert $ $\left( i\in I\right) $\ are $\mathcal{C}$-morphisms, and consequently is unique. \end{theorem} \begin{proof} Assume that the structure $\mathfrak{S}$ of $A$ is initial for the family $\left( f_{i}:\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert \right) _{i\in I}.$ \ Then all $f_{i}:\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert $ $\left( i\in I\right) $\ are $\mathcal{C}$-morphisms.
\ Let $A^{\prime }$ be such that all $f_{i}:\left\vert A^{\prime }\right\vert \rightarrow \left\vert A_{i}\right\vert $ are $\mathcal{C}$-morphisms. \ Since each $f_{i}$ is an $\mathcal{X}$-morphism, $\left\vert A^{\prime }\right\vert =\left\vert A\right\vert $; thus $f=\limfunc{id}_{\left\vert A\right\vert }$ is an $\mathcal{X}$-morphism, and for each $i\in I,$ $f_{i}\circ f=f_{i}:\left\vert A^{\prime }\right\vert \rightarrow \left\vert A_{i}\right\vert $ is a $\mathcal{C}$-morphism. \ Thus $f:\left\vert A^{\prime }\right\vert \rightarrow \left\vert A\right\vert $ is a $\mathcal{C}$-morphism and $\mathfrak{S}$ is coarser than the structure of $A^{\prime }.$ \ This determines $\mathfrak{S}$ in a unique way by Lemma \ref{lemma-order-relation}. \end{proof} As shown by (\cite{Bourbaki-E}, p. IV.30, exerc. 30), it may happen that a structure $\mathfrak{S}$ in a concrete category $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $ is the coarsest of all structures $\mathfrak{S}^{\prime }$ of objects\ $A^{\prime }$ of $\mathcal{C}$ such that all $\mathcal{X}$-morphisms $f_{i}:\left\vert A^{\prime }\right\vert \rightarrow \left\vert A_{i}\right\vert $ are $\mathcal{C}$-morphisms, while this structure is not initial with respect to the family $\left( f_{i}:\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert \right) _{i\in I}.$ The opposite of the concrete category $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X}\right) $ is $\left( \mathcal{C}^{\limfunc{op} },\left\vert .\right\vert ^{\limfunc{op}},\mathcal{X}^{\limfunc{op}}\right) $ where for each pair of objects $A,B$ of $\mathcal{C}$, and any morphism $ A\longleftarrow B$ in $\mathcal{C}^{\limfunc{op}},\ \left\vert A\longleftarrow B\right\vert ^{\limfunc{op}}=\left\vert A\leftarrow B\right\vert .$ \ The structure of $A$ is called final with respect to the family $\left( f_{i}:\left\vert A\right\vert \longleftarrow \left\vert A_{i}\right\vert \right) _{i\in I}$ in $\left( \mathcal{C},\left\vert
.\right\vert ,\mathcal{X}\right) $\ if it is initial with respect to this family in $\left( \mathcal{C}^{\limfunc{op}},\left\vert .\right\vert ^{ \limfunc{op}},\mathcal{X}^{\limfunc{op}}\right) .$ \section{Applications} Examples of initial structures (\cite{Bourbaki-E}, \S\ 2.4): inverse image of a structure, induced structure, concrete product (see Remark \ref {remark-product}). Examples of final structures (\cite{Bourbaki-E}, \S\ 2.6): direct image of a structure, quotient structure, concrete coproduct (see Remark \ref {remark-product}). \begin{remark} \label{remark-product}Let $\mathcal{P}=\left( \limfunc{pr}_{i}:A\rightarrow A_{i}\right) _{i\in I}$ be a product in $\mathcal{C}.$ \ This product is called concrete in $\left( \mathcal{C},\left\vert .\right\vert ,\mathcal{X} \right) $ if $\left\vert \mathcal{P}\right\vert =\left( \left\vert \limfunc{ pr}_{i}\right\vert :\left\vert A\right\vert \rightarrow \left\vert A_{i}\right\vert \right) _{i\in I}$ is a product in $\mathcal{X}$ (\cite {Adamek}, \S 10.52). \ All products considered in (\cite{Bourbaki-E}, \S\ 2.4) are concrete but there exist concrete categories with non-concrete products (\cite{Adamek}, \S 10.55(2)). \ Dually, there exist concrete categories with non-concrete coproducts (\cite{Adamek}, \S 10.67). \end{remark} \end{document}
\begin{document} \title{\bf Lowering-Raising triples and $U_q(\mathfrak{sl}_2)$ } \author{ Paul Terwilliger} \date{} \maketitle \begin{abstract} We introduce the notion of a lowering-raising (or LR) triple of linear transformations on a nonzero finite-dimensional vector space. We show how to normalize an LR triple, and classify up to isomorphism the normalized LR triples. We describe the LR triples using various maps, such as the reflectors, the inverters, the unipotent maps, and the rotators. We relate the LR triples to the equitable presentation of the quantum algebra $U_q(\mathfrak{sl}_2)$ and Lie algebra $\mathfrak{sl}_2$. \noindent {\bf Keywords}. Lowering map, raising map, quantum group, quantum algebra, Lie algebra. \hfil\break \noindent {\bf 2010 Mathematics Subject Classification}. Primary: 17B37. Secondary: 15A21. \end{abstract} \section{Introduction} \noindent For the quantum algebra $U_q(\mathfrak{sl}_2)$, the equitable presentation was introduced in \cite{equit} and further investigated in \cite{fduq}, \cite{ba}. For the Lie algebra $\mathfrak{sl}_2$, the equitable presentation was introduced in \cite{ht} and comprehensively studied in \cite{bt}. These equitable presentations have been related to Leonard pairs \cite{alnajjar}, \cite{alnajjar2}, tridiagonal pairs \cite{bockting}, Leonard triples \cite{gao}, \cite{huang}, the universal Askey-Wilson algebra \cite{uawe}, the tetrahedron algebra \cite{hartwig}, \cite{ht}, \cite{3ptsl2}, the $q$-tetrahedron algebra \cite{irt}, \cite{qtet}, and distance-regular graphs \cite{boyd}. See also \cite{alnajjar3}, \cite{neubauer}, \cite{uqsl2hat}, \cite{tersym}. \\ \noindent From the equitable point of view, consider a finite-dimensional irreducible module for $U_q(\mathfrak{sl}_2)$ or $\mathfrak{sl}_2$. In \cite[Lemma~7.3]{fduq} and \cite[Section~8]{bt} we encounter three nilpotent linear transformations of the module, with each transformation acting as a lowering map and raising map in multiple ways. 
In order to describe this situation more precisely, we now introduce the notion of a lowering-raising (or LR) triple of linear transformations. \noindent An LR triple is described as follows (formal definitions begin in Section 2). Fix an integer $d\geq 0$. Let $\mathbb F$ denote a field, and let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. By a decomposition of $V$ we mean a sequence $\lbrace V_i\rbrace_{i=0}^d$ of one-dimensional subspaces whose direct sum is $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$. A linear transformation $A \in {\rm End}(V)$ is said to lower $\lbrace V_i\rbrace_{i=0}^d$ whenever $AV_i = V_{i-1} $ for $1 \leq i \leq d$ and $AV_0 = 0$. The map $A$ is said to raise $\lbrace V_i\rbrace_{i=0}^d$ whenever $AV_i = V_{i+1} $ for $0 \leq i \leq d-1$ and $AV_d = 0$. An ordered pair of elements $A,B$ in ${\rm End}(V)$ is called lowering-raising (or LR) whenever there exists a decomposition of $V$ that is lowered by $A$ and raised by $B$. A 3-tuple of elements $A,B,C$ in ${\rm End}(V)$ is called an LR triple whenever any two of $A,B,C$ form an LR pair on $V$. The LR triple $A,B,C$ is said to be over $\mathbb F$ and have diameter $d$. \noindent In this paper we obtain three main results, which are summarized as follows: (i) we show how to normalize an LR triple, and classify up to isomorphism the normalized LR triples; (ii) we describe the LR triples using various maps, such as the reflectors, the inverters, the unipotent maps, and the rotators; (iii) we relate the LR triples to the equitable presentations of $U_q(\mathfrak{sl}_2)$ and $\mathfrak{sl}_2$. \noindent We now describe our results in more detail. We set the stage with some general remarks; the assertions therein will be established in the main body of the paper. Let the integer $d$ and the vector space $V$ be as above, and assume for the moment that $d=0$. 
Then $A,B,C \in {\rm End}(V)$ form an LR triple if and only if each of $A,B,C$ is zero; this LR triple is called trivial. Until further notice, assume that $d\geq 0$ and let $A,B,C$ denote an LR triple on $V$. As we describe this LR triple, we will use the following notation. Observe that any permutation of $A,B,C$ is an LR triple on $V$. For any object $f$ that we associate with $A,B,C$ let $f'$ (resp. $f''$) denote the corresponding object for the LR triple $B,C,A$ (resp. $C,A,B$). Since $A,B$ is an LR pair on $V$, there exists a decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$ that is lowered by $A$ and raised by $B$. This decomposition is uniquely determined by $A,B$ and called the $(A,B)$-decomposition of $V$. For $0 \leq i \leq d$ we have $ A^{d-i}V= V_0+V_1+ \cdots + V_i$ and $ B^{d-i}V= V_d+V_{d-1}+ \cdots + V_{d-i}$. \noindent We now introduce the parameter array of $A,B,C$. For $1 \leq i \leq d$ we have $AV_i=V_{i-1}$ and $BV_{i-1}=V_i$. Therefore, $V_i$ is invariant under $BA$ and the corresponding eigenvalue is a nonzero scalar in $\mathbb F$. Denote this eigenvalue by $\varphi_i$. For notational convenience define $\varphi_0=0$ and $\varphi_{d+1}=0$. We call the sequence \begin{eqnarray*} (\lbrace \varphi_i \rbrace_{i=1}^d; \lbrace \varphi'_i \rbrace_{i=1}^d; \lbrace \varphi''_i \rbrace_{i=1}^d) \end{eqnarray*} the parameter array of $A,B,C$. \noindent We now introduce the idempotent data of $A,B,C$. For $0 \leq i \leq d$ define $E_i \in {\rm End}(V)$ such that $(E_i-I)V_i=0$ and $E_iV_j=0$ for $0 \leq j \leq d$, $j \not=i$. Thus $E_i$ is the projection from $V$ onto $V_i$. Note that $V_i = E_iV$. We have \begin{eqnarray*} E_i = \frac{A^{d-i}B^d A^i}{\varphi_1 \cdots \varphi_d}, \qquad \qquad E_i = \frac{B^{i}A^d B^{d-i}}{\varphi_1 \cdots \varphi_d}. \end{eqnarray*} We call the sequence \begin{eqnarray*} ( \lbrace E_i\rbrace_{i=0}^d; \lbrace E'_i\rbrace_{i=0}^d; \lbrace E''_i\rbrace_{i=0}^d) \end{eqnarray*} the idempotent data of $A,B,C$. 
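The parameter sequence and the idempotent formulas above are easy to check numerically. The following sketch (our illustration, not part of the paper) uses NumPy with diameter $d=2$ and arbitrarily chosen nonzero parameters $\varphi_1,\varphi_2$; with respect to an $(A,B)$-basis, $A$ has $1$'s on the superdiagonal and $B$ has $\varphi_1,\ldots,\varphi_d$ on the subdiagonal, so $A$ lowers and $B$ raises the decomposition into coordinate lines.

```python
import numpy as np

d = 2
phi = [3.0, 5.0]                     # phi_1, ..., phi_d: arbitrary nonzero scalars

# V_i = span(e_i). A lowers this decomposition (1's on the superdiagonal),
# B raises it (phi_1, ..., phi_d on the subdiagonal).
A = np.diag(np.ones(d), k=1)
B = np.diag(np.array(phi), k=-1)

# BA acts on V_i with eigenvalue phi_i (with phi_0 = 0).
BA = B @ A
for i in range(d + 1):
    assert np.isclose(BA[i, i], 0.0 if i == 0 else phi[i - 1])

# Idempotent formula: E_i = A^{d-i} B^d A^i / (phi_1 ... phi_d)
# should give the projection onto V_i.
mp = np.linalg.matrix_power
for i in range(d + 1):
    E_i = mp(A, d - i) @ mp(B, d) @ mp(A, i) / np.prod(phi)
    P_i = np.zeros((d + 1, d + 1))
    P_i[i, i] = 1.0                  # projection from V onto V_i
    assert np.allclose(E_i, P_i)
```

The same check goes through for any $d$ and any choice of nonzero $\varphi_i$.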
\noindent We now introduce the Toeplitz data of $A,B,C$. A basis $\lbrace v_i \rbrace_{i=0}^d$ of $V$ is called an $(A,B)$-basis whenever $v_i \in V_i$ for $0 \leq i \leq d$ and $Av_i=v_{i-1}$ for $1 \leq i \leq d$. Let $\lbrace u_i \rbrace_{i=0}^d$ denote a $(C,B)$-basis of $V$ and let $\lbrace v_i \rbrace_{i=0}^d$ denote a $(C,A)$-basis of $V$ such that $u_0=v_0$. Let $T$ denote the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$. Then $T$ has the form \begin{eqnarray*} T = \left( \begin{array}{ c c cc c c } \alpha_0 & \alpha_1 & \cdot & \cdot& \cdot & \alpha_d \\ & \alpha_0 & \alpha_1 & \cdot& \cdot & \cdot \\ & & \alpha_0 & \cdot & \cdot& \cdot \\ & & & \cdot & \cdot & \cdot \\ & & & & \cdot & \alpha_1 \\ {\bf 0} && & & & \alpha_0 \\ \end{array} \right), \end{eqnarray*} where $\alpha_i \in \mathbb F$ for $0 \leq i \leq d$ and $\alpha_0=1$. A matrix of the above form is said to be upper triangular and Toeplitz, with parameters $\lbrace \alpha_i\rbrace_{i=0}^d$. The matrix $T^{-1}$ is upper triangular and Toeplitz; let $\lbrace \beta_i\rbrace_{i=0}^d$ denote its parameters. We call the sequence \begin{eqnarray*} ( \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d ) \end{eqnarray*} the Toeplitz data of $A,B,C$. \noindent We now introduce the trace data of $A,B,C$. For $0 \leq i \leq d$ let $a_i$ denote the trace of $CE_i$. We have $\sum_{i=0}^d a_i=0$. If $A,B,C$ is trivial then $a_0=0$. If $A,B,C$ is nontrivial then $a_i = \alpha'_1(\varphi''_{d-i+1}-\varphi''_{d-i})$ and $a_i = \alpha''_1(\varphi'_{d-i+1}-\varphi'_{d-i})$ for $0 \leq i \leq d$. We call the sequence \begin{eqnarray*} (\lbrace a_i \rbrace_{i=0}^d; \lbrace a'_i \rbrace_{i=0}^d; \lbrace a''_i \rbrace_{i=0}^d ) \end{eqnarray*} the trace data of $A,B,C$. 
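As a sanity check on the Toeplitz data, the following NumPy sketch (ours, with arbitrarily chosen parameters) verifies that the inverse of a unit upper-triangular Toeplitz matrix is again unit upper-triangular Toeplitz, so the parameters $\lbrace \beta_i\rbrace_{i=0}^d$ are well defined.

```python
import numpy as np

def toeplitz_upper(params):
    """Upper-triangular Toeplitz matrix with (i,j)-entry params[j-i] for j >= i."""
    n = len(params)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            T[i, j] = params[j - i]
    return T

alphas = [1.0, 2.0, -1.0, 0.5]       # alpha_0 = 1; the other alphas are arbitrary
T = toeplitz_upper(alphas)
Tinv = np.linalg.inv(T)

# T^{-1} is again unit upper-triangular Toeplitz; its top row gives the betas.
betas = Tinv[0, :]
assert np.isclose(betas[0], 1.0)                       # beta_0 = 1
assert np.allclose(Tinv, toeplitz_upper(list(betas)))  # Toeplitz structure
```

This reflects the fact that the unit upper-triangular Toeplitz matrices form a group under multiplication; in particular $\beta_1 = -\alpha_1$.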
\noindent With respect to an $(A,B)$-basis of $V$, the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1 & 0 & && & \\ & \varphi_2 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } a_0 & \varphi'_d/\varphi_1 & && & \bf 0 \\ \varphi''_d & a_1 & \varphi'_{d-1}/\varphi_2 && & \\ & \varphi''_{d-1} & a_2 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi'_1/\varphi_d \\ {\bf 0} && & & \varphi''_1 & a_d \\ \end{array} \right). \end{eqnarray*} Assume for the moment that $A,B,C$ is nontrivial. Then $A,B,C$ is determined up to isomorphism by its parameter array and any one of \begin{eqnarray*} a_0, a'_0, a''_0; \quad \qquad a_d, a'_d, a''_d; \quad \qquad \alpha_1, \alpha'_1, \alpha''_1; \quad \qquad \beta_1, \beta'_1, \beta''_1. \end{eqnarray*} We often put the emphasis on $\alpha_1$, and call this the first Toeplitz number of $A,B,C$. In Propositions \ref{prop:AlphaRecursion}--\ref{prop:AlphaRecursion3} we obtain some recursions that give the Toeplitz data of $A,B,C$ in terms of its parameter array and first Toeplitz number. \noindent We now introduce the bipartite condition. The LR triple $A,B,C$ is said to be bipartite whenever $a_i = a'_i = a''_i = 0$ for $0 \leq i \leq d$. Assume for the moment that $A,B,C$ is not bipartite. Then $A,B,C$ is nontrivial, and each of \begin{eqnarray*} \alpha_1, \qquad \alpha'_1, \qquad \alpha''_1, \qquad \beta_1, \qquad \beta'_1, \qquad \beta''_1 \end{eqnarray*} is nonzero. Until further notice assume that $A,B,C$ is bipartite. Then $d=2m$ is even. 
Moreover for $0 \leq i \leq d$, each of \begin{eqnarray*} \alpha_i, \qquad \alpha'_i, \qquad \alpha''_i, \qquad \beta_i, \qquad \beta'_i, \qquad \beta''_i \end{eqnarray*} is zero if $i$ is odd and nonzero if $i$ is even. There exists a direct sum $V=V_{\rm out} + V_{\rm in}$ such that $V_{\rm out}$ is equal to each of \begin{eqnarray*} \sum_{j=0}^m E_{2j} V, \qquad \quad \sum_{j=0}^m E'_{2j} V, \qquad \quad \sum_{j=0}^m E''_{2j} V \end{eqnarray*} and $V_{\rm in}$ is equal to each of \begin{eqnarray*} \sum_{j=0}^{m-1} E_{2j+1} V, \qquad \quad \sum_{j=0}^{m-1} E'_{2j+1} V, \qquad \quad \sum_{j=0}^{m-1} E''_{2j+1} V. \end{eqnarray*} The dimensions of $V_{\rm out}$ and $V_{\rm in}$ are $m+1$ and $m$, respectively. We have \begin{eqnarray*} && AV_{\rm out} = V_{\rm in}, \qquad \qquad BV_{\rm out} = V_{\rm in}, \qquad \qquad CV_{\rm out} = V_{\rm in}, \\ && AV_{\rm in} \subseteq V_{\rm out}, \qquad \qquad BV_{\rm in} \subseteq V_{\rm out}, \qquad \qquad CV_{\rm in} \subseteq V_{\rm out}. \end{eqnarray*} Define \begin{eqnarray} \label{eq:6listIntro} A_{\rm out}, \qquad A_{\rm in}, \qquad B_{\rm out}, \qquad B_{\rm in}, \qquad C_{\rm out}, \qquad C_{\rm in} \end{eqnarray} in ${\rm End}(V)$ as follows. The map $A_{\rm out}$ acts on $V_{\rm out}$ as $A$, and on $V_{\rm in}$ as zero. The map $A_{\rm in}$ acts on $V_{\rm in}$ as $A$, and on $V_{\rm out}$ as zero. The other maps in (\ref{eq:6listIntro}) are similarly defined. By construction \begin{equation*} A = A_{\rm out}+ A_{\rm in}, \qquad \qquad B = B_{\rm out}+ B_{\rm in}, \qquad \qquad C = C_{\rm out}+ C_{\rm in}. \end{equation*} We are done assuming that $A,B,C$ is bipartite. \noindent We now introduce the equitable condition. The LR triple $A,B,C$ is said to be equitable whenever $\alpha_i = \alpha'_i=\alpha''_i$ for $0 \leq i \leq d$. In this case $\beta_i = \beta'_i=\beta''_i$ for $0 \leq i \leq d$. Assume for the moment that $A,B,C$ is trivial. Then $A,B,C$ is equitable. Next assume that $A,B,C$ is nonbipartite. 
Then $A,B,C$ is equitable if and only if $\alpha_1 = \alpha'_1=\alpha''_1$. In this case $\varphi_i = \varphi'_i = \varphi''_i$ for $1 \leq i \leq d$, and $a_i = a'_i = a''_i = \alpha_1(\varphi_{d-i+1}-\varphi_{d-i})$ for $0 \leq i \leq d$. Next assume that $A,B,C$ is bipartite and nontrivial. Then $A,B,C$ is equitable if and only if $\alpha_2 = \alpha'_2=\alpha''_2$. In this case $ \varphi_{i-1}\varphi_i = \varphi'_{i-1}\varphi'_i = \varphi''_{i-1}\varphi''_i$ for $2 \leq i \leq d$. We are done with our general remarks. \noindent Concerning the normalization of LR triples, we now define what it means for $A,B,C$ to be normalized. Assume for the moment that $A,B,C$ is trivial. Then $A,B,C$ is normalized. Next assume that $A,B,C$ is nonbipartite. Then $A,B,C$ is normalized whenever $\alpha_1=\alpha'_1=\alpha''_1=1$. Next assume that $A,B,C$ is bipartite and nontrivial. Then $A,B,C$ is normalized whenever $\alpha_2= \alpha'_2=\alpha''_2=1$. If $A,B,C$ is normalized then $A,B,C$ is equitable. We now explain how to normalize $A,B,C$. Assume for the moment that $A,B,C$ is trivial. Then there is nothing to do. Next assume that $A,B,C$ is nonbipartite. Then there exists a unique sequence $\alpha, \beta, \gamma$ of nonzero scalars in $\mathbb F$ such that $\alpha A, \beta B, \gamma C$ is normalized. Next assume that $A,B,C$ is bipartite and nontrivial. Then there exists a unique sequence $\alpha, \beta, \gamma$ of nonzero scalars in $\mathbb F$ such that \begin{eqnarray*} \alpha A_{\rm out} +A_{\rm in}, \qquad \beta B_{\rm out}+ B_{\rm in}, \qquad \gamma C_{\rm out}+ C_{\rm in} \end{eqnarray*} is normalized. \noindent We now describe our classification up to isomorphism of the normalized LR triples over $\mathbb F$. Up to isomorphism there exists a unique normalized LR triple over $\mathbb F$ with diameter $d=0$, and this LR triple is trivial. 
Up to isomorphism there exists a unique normalized LR triple over $\mathbb F$ with diameter $d=1$, and this is given in Lemma \ref{lem:1or2}. For $d\geq 2$, we display nine families of normalized LR triples over $\mathbb F$ that have diameter $d$, denoted \begin{eqnarray*} && {\rm NBWeyl}^+_d(\mathbb F;j,q), \qquad \quad {\rm NBWeyl}^-_d(\mathbb F;j,q), \qquad \quad {\rm NBWeyl}^-_d(\mathbb F;t), \\ && {\rm NBG}_d(\mathbb F;q), \qquad \quad {\rm NBG}_d(\mathbb F;1), \\ && {\rm NBNG}_d(\mathbb F;t), \\ && {\rm B}_d(\mathbb F;t,\rho_0,\rho'_0,\rho''_0), \qquad \quad {\rm B}_d(\mathbb F;1,\rho_0,\rho'_0,\rho''_0), \qquad \quad {\rm B}_2(\mathbb F;\rho_0,\rho'_0,\rho''_0). \end{eqnarray*} We show that each normalized LR triple over $\mathbb F$ with diameter $d$ is isomorphic to exactly one of these examples. \noindent We now describe the LR triples using various maps. Let $A,B,C$ denote an LR triple on $V$. We show that there exists a unique antiautomorphism $\dagger$ of ${\rm End}(V)$ that sends $A\leftrightarrow B$. We call $\dagger$ the $(A,B)$-reflector. Assume for the moment that $A,B,C$ is equitable and nonbipartite. We show that $\dagger$ fixes $C$. Next assume that $A,B,C$ is equitable, bipartite, and nontrivial. We show that $\dagger$ sends $A_{\rm out}\leftrightarrow B_{\rm in}$ and $B_{\rm out}\leftrightarrow A_{\rm in}$. We also show that $\dagger$ sends each of $C_{\rm out},C_{\rm in}$ to a scalar multiple of the other. Define \begin{eqnarray*} \Psi = \sum_{i=0}^d \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\varphi_d \varphi_{d-1}\cdots \varphi_{d-i+1}} E_i. \end{eqnarray*} We call $\Psi$ the $(A,B)$-inverter. We show that the following three LR pairs are mutually isomorphic: \begin{eqnarray*} A, \Psi^{-1}B\Psi \qquad \qquad B,A \qquad \qquad \Psi A \Psi^{-1}, B. 
\end{eqnarray*} Define \begin{eqnarray*} {\mathbb A} = \sum_{i=0}^d E_{d-i} E''_{i}, \qquad \qquad {\mathbb B} = \sum_{i=0}^d E'_{d-i} E_{i}, \qquad \qquad {\mathbb C} = \sum_{i=0}^d E''_{d-i} E'_i. \end{eqnarray*} We call $\mathbb A, \mathbb B, \mathbb C$ the unipotent maps for $A,B,C$. We show that \begin{eqnarray*} \mathbb A = \sum_{i=0}^d \alpha'_i A^i, \qquad \qquad \mathbb B = \sum_{i=0}^d \alpha''_i B^i, \qquad \qquad \mathbb C = \sum_{i=0}^d \alpha_i C^i \end{eqnarray*} and \begin{eqnarray*} \mathbb A^{-1} = \sum_{i=0}^d \beta'_i A^i, \qquad \qquad \mathbb B^{-1} = \sum_{i=0}^d \beta''_i B^i, \qquad \qquad \mathbb C^{-1} = \sum_{i=0}^d \beta_i C^i. \end{eqnarray*} By a rotator for $A,B,C$ we mean an element $R \in {\rm End}(V)$ such that for $0 \leq i \leq d$, \begin{eqnarray*} E_i R = R E'_i, \qquad \qquad E'_i R = R E''_i, \qquad \qquad E''_i R = R E_i. \end{eqnarray*} Let $\mathcal R$ denote the set of rotators for $A,B,C$. Note that $\mathcal R$ is a subspace of the $\mathbb F$-vector space ${\rm End}(V)$. We obtain the following basis for $\mathcal R$. Assume for the moment that $A,B,C$ is trivial. Then $\mathcal R ={\rm End}(V)$ has a basis consisting of the identity element. Next assume that $A,B,C$ is nonbipartite. Then $\mathcal R$ has a basis $\Omega$ such that \begin{eqnarray*} && \Omega = \mathbb B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E_i\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E'_i\Biggr) \mathbb B \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i\Biggr) \mathbb C. \end{eqnarray*} Next assume that $A,B,C$ is bipartite and nontrivial. 
Then $\mathcal R$ has a basis $\Omega_{\rm out}$, $\Omega_{\rm in}$ such that \begin{eqnarray*} && \Omega_{\rm out} = \mathbb B \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2\cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E_{2j}\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2 \cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E'_{2j}\Biggr) \mathbb B \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2 \cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E''_{2j}\Biggr) \mathbb C \end{eqnarray*} and \begin{eqnarray*} &&\Omega_{\rm in} = \mathbb B \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3\cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2}\cdots \varphi_{d-2j}} E_{2j+1}\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3 \cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2}\cdots \varphi_{d-2j}} E'_{2j+1}\Biggr) \mathbb B \nonumber \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3\cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2} \cdots \varphi_{d-2j}} E''_{2j+1}\Biggr) \mathbb C. \end{eqnarray*} \noindent We now briefly relate the LR triples to the equitable presentations of $U_q(\mathfrak{sl}_2)$ and $\mathfrak{sl}_2$. Adjusting the equitable presentation of $U_q(\mathfrak{sl}_2)$ in two ways, we obtain an algebra $U^R_q(\mathfrak{sl}_2)$ called the reduced $U_q(\mathfrak{sl}_2)$ algebra, and an algebra $U^E_q(\mathfrak{sl}_2)$ called the extended $U_q(\mathfrak{sl}_2)$ algebra. Let $A,B,C$ denote an LR triple on $V$. After imposing some minor restrictions on its parameter array, we use $A,B,C$ to construct a module on $V$ for $U_q(\mathfrak{sl}_2)$ or $U^R_q(\mathfrak{sl}_2)$ or $U^E_q(\mathfrak{sl}_2)$ or $\mathfrak{sl}_2$. Each construction involves the equitable presentation. \noindent This paper is organized as follows. 
In Section 2 we review some basic concepts and explain our notation. In Sections 3--10 we develop a theory of LR pairs that will be applied to LR triples later in the paper. In Section 11 we classify a type of finite sequence said to be constrained, for use in our LR triple classification later in the paper. Section 12 is about upper triangular Toeplitz matrices. In Section 13 we introduce the LR triples, and discuss their parameter array, idempotent data, Toeplitz data, and trace data. In Sections 14, 15 we obtain some equations relating the parameter array, Toeplitz data, and trace data. We also introduce the LR triples of Weyl and $q$-Weyl type. Sections 16--18 are about the bipartite, equitable, and normalized LR triples, respectively. In Sections 19, 20 we compare the structure of a bipartite and nonbipartite LR triple, using the notions of an idempotent centralizer and double lowering space. Sections 21--23 are about the unipotent maps, rotators, and reflectors, respectively. In Sections 24--30 we classify up to isomorphism the normalized LR triples. Section 31 is about the Toeplitz data, and how the unipotent maps are related to the exponential function and quantum exponential function. In Section 32 we display some relations that are satisfied by an LR triple. In Section 33 we relate the LR triples to the equitable presentations of $U_q(\mathfrak{sl}_2)$ and $\mathfrak{sl}_2$. Section 34 contains three characterizations of an LR triple. Sections 35, 36 are appendices that contain some matrix representations of an LR triple. \section{Preliminaries} \noindent We now begin our formal argument. In this section we review some basic concepts and explain our notation. We will be discussing algebras and Lie algebras. An algebra without the Lie prefix is meant to be associative and have a 1. A subalgebra has the same 1 as the parent algebra. Recall the ring of integers $\mathbb Z = \lbrace 0,\pm 1,\pm 2,\ldots\rbrace$. 
Throughout the paper we fix an integer $d\geq 0$. For a sequence $\lbrace u_i\rbrace_{i=0}^d$, we call $u_i$ the {\it $i$-component} or {\it $i$-coordinate} of the sequence. By the {\it inversion} of $\lbrace u_i\rbrace_{i=0}^d$ we mean the sequence $\lbrace u_{d-i}\rbrace_{i=0}^d$. Let $\mathbb F$ denote a field. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let ${\rm End}(V)$ denote the $\mathbb F$-algebra consisting of the $\mathbb F$-linear maps from $V$ to $V$. Let ${\rm Mat}_{d+1}(\mathbb F)$ denote the $\mathbb F$-algebra consisting of the $d+1$ by $d+1$ matrices that have all entries in $\mathbb F$. We index the rows and columns by $0,1,\ldots, d$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote a basis for $V$. For $A \in {\rm End}(V)$ and $M\in {\rm Mat}_{d+1}(\mathbb F)$, we say that {\it $M$ represents $A$ with respect to $\lbrace v_i\rbrace_{i=0}^d$} whenever $Av_j = \sum_{i=0}^d M_{ij}v_i$ for $0 \leq j \leq d$. Suppose we are given two bases for $V$, denoted $\lbrace u_i\rbrace_{i=0}^d$ and $\lbrace v_i\rbrace_{i=0}^d$. By the {\it transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$} we mean the matrix $S \in {\rm Mat}_{d+1}(\mathbb F)$ such that $v_j = \sum_{i=0}^d S_{ij}u_i$ for $0 \leq j \leq d$. Let $S$ denote the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$. Then $S^{-1}$ exists and equals the transition matrix from $\lbrace v_i\rbrace_{i=0}^d$ to $\lbrace u_i\rbrace_{i=0}^d$. Let $\lbrace w_i\rbrace_{i=0}^d$ denote a basis for $V$ and let $H$ denote the transition matrix from $\lbrace v_i\rbrace_{i=0}^d$ to $\lbrace w_i\rbrace_{i=0}^d$. Then $S H$ is the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace w_i\rbrace_{i=0}^d$. Let $A \in {\rm End}(V)$ and let $M \in {\rm Mat}_{d+1}(\mathbb F)$ represent $A$ with respect to $\lbrace u_i\rbrace_{i=0}^d$. Then $S^{-1}MS$ represents $A$ with respect to $\lbrace v_i\rbrace_{i=0}^d$. 
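The transition-matrix conventions above can be illustrated numerically. In the sketch below (ours, using two randomly chosen bases of ${\mathbb F}^{d+1}$ stored as matrix columns), $S=U^{-1}V$ is the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$, and $S^{-1}MS$ represents the same linear map in the new basis.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

# Columns of U and V are two (generically invertible) bases u_0..u_d and v_0..v_d.
U = rng.standard_normal((d + 1, d + 1))
V = rng.standard_normal((d + 1, d + 1))

# Transition matrix from {u_i} to {v_i}: v_j = sum_i S_{ij} u_i, i.e. V = U S.
S = np.linalg.solve(U, V)
assert np.allclose(V, U @ S)

# A linear map A (acting on standard coordinates); M, N represent it in the two bases.
A = rng.standard_normal((d + 1, d + 1))
M = np.linalg.solve(U, A @ U)        # A u_j = sum_i M_{ij} u_i
N = np.linalg.solve(V, A @ V)        # A v_j = sum_i N_{ij} v_i
assert np.allclose(np.linalg.solve(S, M @ S), N)   # S^{-1} M S represents A
```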
Define a matrix ${\bf Z} \in {\rm Mat}_{d+1}(\mathbb F)$ with $(i,j)$-entry $\delta_{i+j,d}$ for $0 \leq i,j\leq d$. For example if $d=3$, \begin{eqnarray*} {\bf Z}= \left( \begin{array}{ c c cc} 0 & 0 & 0 &1 \\ 0 & 0 & 1 &0 \\ 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0 \\ \end{array} \right). \end{eqnarray*} \noindent Note that ${\bf Z}^2=I$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote a basis for $V$ and consider the inverted basis $\lbrace v_{d-i}\rbrace_{i=0}^d$. Then ${\bf Z}$ is the transition matrix from $\lbrace v_i\rbrace_{i=0}^d$ to $\lbrace v_{d-i}\rbrace_{i=0}^d$. \noindent By a {\it decomposition of $V$} we mean a sequence $\lbrace V_i\rbrace_{i=0}^d$ of one dimensional subspaces of $V$ such that $V= \sum_{i=0}^d V_i$ (direct sum). Given a decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$, for notational convenience define $V_{-1}=0$ and $V_{d+1}=0$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. For $0 \leq i \leq d$ let $V_i$ denote the span of $v_i$. Then the sequence $\lbrace V_i \rbrace_{i=0}^d$ is a decomposition of $V$, said to be {\it induced} by the basis $\lbrace v_i \rbrace_{i=0}^d$. Let $\lbrace u_i \rbrace_{i=0}^d$ and $\lbrace v_i \rbrace_{i=0}^d$ denote bases for $V$. Then the following are equivalent: (i) the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$ is diagonal; (ii) $\lbrace u_i \rbrace_{i=0}^d$ and $\lbrace v_i \rbrace_{i=0}^d$ induce the same decomposition of $V$. \noindent Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$. For $0 \leq i \leq d$ define $E_i \in {\rm End}(V)$ such that $(E_i - I)V_i = 0$ and $E_iV_j=0$ for $0 \leq j \leq d$, $j\not=i$. We call $E_i$ the $i$th {\it primitive idempotent} for $\lbrace V_i \rbrace_{i=0}^d$. We have (i) $E_i E_j = \delta_{i,j}E_i$ $(0 \leq i,j\leq d)$; (ii) $I = \sum_{i=0}^d E_i$; (iii) $V_i = E_iV$ $(0 \leq i \leq d)$; (iv) ${\rm rank}(E_i) = 1 ={\rm tr}(E_i)$ $(0 \leq i \leq d)$, where tr means trace. 
We call $\lbrace E_i \rbrace_{i=0}^d$ the {\it idempotent sequence} for $\lbrace V_i \rbrace_{i=0}^d$. Note that $\lbrace E_{d-i}\rbrace_{i=0}^d$ is the idempotent sequence for the decomposition $\lbrace V_{d-i}\rbrace_{i=0}^d$. \noindent Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. Let $\lbrace V_i \rbrace_{i=0}^d$ denote the induced decomposition of $V$, with idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. For $0 \leq r \leq d$ consider the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that represents $E_r$ with respect to $\lbrace v_i \rbrace_{i=0}^d$. This matrix has $(r,r)$-entry 1 and all other entries 0. \begin{lemma} \label{lem:EiMeaning} Let $A \in {\rm End}(V)$. Let $\lbrace V_i \rbrace_{i=0}^d$ denote a decomposition of $V$ with idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. Consider a basis for $V$ that induces $\lbrace V_i \rbrace_{i=0}^d$. Let $M \in {\rm Mat}_{d+1}(\mathbb F)$ represent $A$ with respect to this basis. Then for $0 \leq r,s\leq d$ the entry $M_{r,s}=0$ if and only if $E_r A E_s = 0$. \end{lemma} \begin{proof} Represent $A, E_r,E_s$ by matrices with respect to the given basis. \end{proof} \noindent By a {\it flag on $V$} we mean a sequence $\lbrace U_i \rbrace_{i=0}^d$ of subspaces of $V$ such that $U_i$ has dimension $i+1$ for $0 \leq i \leq d$ and $U_{i-1} \subseteq U_i$ for $1 \leq i \leq d$. For a flag $\lbrace U_i \rbrace_{i=0}^d$ on $V$ we have $U_d=V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$. For $0 \leq i \leq d$ define $U_i = V_0 + \cdots + V_i$. Then the sequence $\lbrace U_i \rbrace_{i=0}^d$ is a flag on $V$. This flag is said to be {\it induced} by the decomposition $\lbrace V_i\rbrace_{i=0}^d$. Let $\lbrace u_i\rbrace_{i=0}^d$ denote a basis of $V$. This basis induces a decomposition of $V$, which in turn induces a flag on $V$. This flag is said to be {\it induced} by the basis $\lbrace u_i\rbrace_{i=0}^d$. 
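To make the flag constructions concrete, here is a small numerical sketch (ours, not from the paper) over ${\mathbb F}={\mathbb R}$ with $d=3$: starting from the decomposition given by a random basis, it builds the flag induced by the decomposition and the flag induced by its inversion, and checks that these intersect trivially in complementary degrees, with $U_i \cap U'_{d-i}$ recovering the one-dimensional component $V_i$ (the opposite-flag condition treated just below).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
# A decomposition of F^{d+1}: V_i = span of column i of a random basis.
Vbasis = rng.standard_normal((d + 1, d + 1))

def flag(order):
    # U_i = V_{order[0]} + ... + V_{order[i]}, returned as spanning columns.
    return [Vbasis[:, order[: i + 1]] for i in range(d + 1)]

U = flag(list(range(d + 1)))          # flag induced by {V_i}
Up = flag(list(range(d, -1, -1)))     # flag induced by the inversion {V_{d-i}}

def dim_intersection(X, Y):
    # dim of intersection = dim X + dim Y - dim of sum
    rk = np.linalg.matrix_rank
    return rk(X) + rk(Y) - rk(np.hstack([X, Y]))

for i in range(d + 1):
    for j in range(d + 1):
        if i + j < d:
            assert dim_intersection(U[i], Up[j]) == 0
    assert dim_intersection(U[i], Up[d - i]) == 1   # recovers V_i
```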
Let $\lbrace u_i\rbrace_{i=0}^d$ and $\lbrace v_i\rbrace_{i=0}^d$ denote bases of $V$. Then the following are equivalent: (i) the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$ is upper triangular; (ii) $\lbrace u_i\rbrace_{i=0}^d$ and $\lbrace v_i\rbrace_{i=0}^d$ induce the same flag on $V$. \noindent Suppose we are given two flags on $V$, denoted $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$. These flags are called {\it opposite} whenever $U_i \cap U'_j = 0$ if $i+j<d$ $(0 \leq i,j\leq d)$. The following are equivalent: (i) $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$ are opposite; (ii) there exists a decomposition $\lbrace V_i \rbrace_{i=0}^d$ of $V$ that induces $\lbrace U_i \rbrace_{i=0}^d$ and whose inversion induces $\lbrace U'_i \rbrace_{i=0}^d$. In this case $V_i = U_i \cap U'_{d-i}$ for $0 \leq i \leq d$. \noindent Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$. For $A \in {\rm End}(V)$, we say that $A$ {\it lowers} $\lbrace V_i\rbrace_{i=0}^d$ whenever $A V_i = V_{i-1}$ for $1 \leq i \leq d$ and $AV_0 = 0$. \begin{lemma} \label{lem:LowerRaise} Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$, with idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. For $A \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $A$ lowers $\lbrace V_i\rbrace_{i=0}^d$; \item[\rm (ii)] $E_i A E_j = \begin{cases} \not=0 & {\mbox{\rm if $j-i=1$}}; \\ 0 & {\mbox{\rm if $j-i\not=1$}} \end{cases} \qquad \qquad (0 \leq i,j \leq d)$. \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:EiMeaning}. \end{proof} \noindent Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$ and let $A \in {\rm End}(V)$. Assume that $\lbrace V_i\rbrace_{i=0}^d$ is lowered by $A$. Then $V_i = A^{d-i}V_d$ for $0 \leq i \leq d$. Moreover $A^{d+1}=0$. For $0 \leq i \leq d$ the subspace $V_0 + \cdots + V_i$ is the kernel of $A^{i+1}$ and equal to $A^{d-i}V$.
In particular, $V_0$ is the kernel of $A$ and equal to $A^dV$. The sequences $\lbrace {\rm ker}\,A^{i+1} \rbrace_{i=0}^d$ and $\lbrace A^{d-i}V\rbrace_{i=0}^d$ both equal the flag on $V$ induced by $\lbrace V_i\rbrace_{i=0}^d$. We say that $A$ {\it raises} $\lbrace V_i\rbrace_{i=0}^d$ whenever $A V_i = V_{i+1}$ for $0 \leq i \leq d-1$ and $AV_d = 0$. Note that $A$ raises $\lbrace V_i\rbrace_{i=0}^d$ if and only if $A$ lowers the inverted decomposition $\lbrace V_{d-i}\rbrace_{i=0}^d$. \begin{lemma} \label{lem:RaiseLower} Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$, with idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. For $A \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $A$ raises $\lbrace V_i\rbrace_{i=0}^d$; \item[\rm (ii)] $E_i A E_j = \begin{cases} \not=0 & {\mbox{\rm if $i-j=1$}}; \\ 0 & {\mbox{\rm if $i-j\not=1$}} \end{cases} \qquad \qquad (0 \leq i,j \leq d)$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:LowerRaise} to the decomposition $\lbrace V_{d-i}\rbrace_{i=0}^d$. \end{proof} \begin{definition} \label{def:Nil} \rm An element $A \in {\rm End}(V)$ will be called {\it Nil} whenever $A^{d+1}=0$ and $A^d \not=0$. \end{definition} \begin{lemma} \label{lem:NilRec} For $A \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $A$ is Nil; \item[\rm (ii)] there exists a decomposition of $V$ that is lowered by $A$; \item[\rm (iii)] there exists a decomposition of $V$ that is raised by $A$; \item[\rm (iv)] for $0 \leq i \leq d$ the kernel of $A^{i+1}$ is $A^{d-i}V$; \item[\rm (v)] the kernel of $A$ is $A^dV$; \item[\rm (vi)] the sequence $\lbrace {\rm ker}\,A^{i+1} \rbrace_{i=0}^d$ is a flag on $V$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By assumption there exists $v \in V$ such that $A^dv\not=0$. By assumption $A^{d+1}v=0$. Define $v_i=A^{d-i}v$ for $0 \leq i \leq d$.
Then $Av_i= v_{i-1}$ for $1 \leq i \leq d$ and $Av_0=0$. By these comments, for $0 \leq i \leq d$ the vector $v_i$ is in the kernel of $A^{i+1}$ and not in the kernel of $A^i$. Therefore $\lbrace v_i \rbrace_{i=0}^d$ are linearly independent, and hence form a basis for $V$. By construction the induced decomposition of $V$ is lowered by $A$. \\ \noindent ${\rm (ii)}\Leftrightarrow {\rm (iii)}$ A decomposition of $V$ is raised by $A$ if and only if its inversion is lowered by $A$. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (iv)}$ By the comments above Lemma \ref{lem:RaiseLower}. \\ \noindent ${\rm (iv)}\Rightarrow {\rm (v)}$ Clear. \\ \noindent ${\rm (v)}\Rightarrow {\rm (i)}$ Observe that $A^{d+1}V=A(A^dV)=0$, so $A^{d+1}=0$. The map $A$ is not invertible (if it were, its kernel would be zero, whereas by assumption this kernel is $A^dV=V\not=0$), so $A$ has nonzero kernel. This kernel is $A^dV$, so $A^dV\not=0$. Therefore $A^d\not=0$. So $A$ is Nil by Definition \ref{def:Nil}. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (vi)}$ By the comments above Lemma \ref{lem:RaiseLower}. \\ \noindent ${\rm (vi)}\Rightarrow {\rm (i)}$ For $0 \leq i \leq d$ let $U_i$ denote the kernel of $A^{i+1}$. By assumption $\lbrace U_i\rbrace_{i=0}^d$ is a flag on $V$. We have $U_d=V$, so $A^{d+1}=0$. We have $U_{d-1}\not=V$, so $A^d\not=0$. Therefore $A$ is Nil by Definition \ref{def:Nil}. \end{proof} \noindent We emphasize a point from Lemma \ref{lem:NilRec}. For a Nil element $A \in {\rm End}(V)$ the sequence $\lbrace A^{d-i}V\rbrace_{i=0}^d$ is a flag on $V$. \section{LR pairs} \noindent In this paper, our main topic is the notion of an LR triple. As a warmup, we first consider the notion of an LR pair. \noindent Throughout this section $V$ denotes a vector space over $\mathbb F$ with dimension $d+1$. \begin{definition} \label{def:lr} \rm An ordered pair $A,B$ of elements in ${\rm End}(V)$ is called {\it lowering-raising} (or {\it LR}) whenever there exists a decomposition of $V$ that is lowered by $A$ and raised by $B$.
We refer to such a pair as an {\it LR pair on $V$}. This LR pair is said to be {\it over $\mathbb F$}. We call $V$ the {\it underlying vector space}. We call $d$ the {\it diameter} of the pair. \end{definition} \begin{lemma} Let $A,B$ denote an LR pair on $V$. Then $B,A$ is an LR pair on $V$. \end{lemma} \begin{lemma} \label{lem:ABdecIndNil} Let $A,B$ denote an LR pair on $V$. Then each of $A,B$ is Nil. \end{lemma} \noindent We mention a very special case. \begin{example} \label{def:triv} \rm Assume that $d=0$. Then $A,B \in {\rm End}(V)$ form an LR pair if and only if $A=0$ and $B=0$. This LR pair will be called {\it trivial}. \end{example} \noindent Let $A,B$ denote an LR pair on $V$. By Definition \ref{def:lr}, there exists a decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$ that is lowered by $A$ and raised by $B$. We have $V_i = A^{d-i}V_d = B^i V_0$ for $0 \leq i \leq d$. Moreover $V_0=A^dV$ and $V_d=B^dV$. Therefore $V_i = A^{d-i}B^dV= B^iA^dV$ for $0 \leq i \leq d$. The decomposition $\lbrace V_i\rbrace_{i=0}^d$ is uniquely determined by $A,B$; we call $\lbrace V_i\rbrace_{i=0}^d$ the {\it $(A,B)$-decomposition of $V$}. Its inversion $\lbrace V_{d-i}\rbrace_{i=0}^d$ is the $(B,A)$-decomposition of $V$. \begin{definition} \label{def:ABE} \rm Let $A,B$ denote an LR pair on $V$. By the {\it idempotent sequence} for $A,B$ we mean the idempotent sequence for the $(A,B)$-decomposition of $V$. \end{definition} \noindent We have some comments. \begin{lemma} \label{lem:Ebackward} Let $A,B$ denote an LR pair on $V$, with idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. Then the LR pair $B,A$ has idempotent sequence $\lbrace E_{d-i} \rbrace_{i=0}^d$. \end{lemma} \begin{lemma} \label{lem:ABdecInd} Let $A,B$ denote an LR pair on $V$. The $(A,B)$-decomposition of $V$ induces the flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$. The $(B,A)$-decomposition of $V$ induces the flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$. 
The flags $\lbrace A^{d-i}V\rbrace_{i=0}^d$ and $\lbrace B^{d-i}V\rbrace_{i=0}^d$ are opposite. \end{lemma} \begin{lemma} \label{lem:alphaBeta} Let $A,B$ denote an LR pair on $V$. For nonzero $\alpha, \beta \in \mathbb F$ the pair $\alpha A, \beta B$ is an LR pair on $V$. The $(\alpha A, \beta B)$-decomposition of $V$ is equal to the $(A,B)$-decomposition of $V$. Moreover the idempotent sequence for $\alpha A, \beta B$ is equal to the idempotent sequence for $A, B$. \end{lemma} \begin{lemma} \label{lem:ABAaction} Let $A,B$ denote an LR pair on $V$. For $0 \leq r,s\leq d$, consider the action of the map $A^rB^dA^s$ on the $(A,B)$-decomposition of $V$. The map sends the $s$-component onto the $(d-r)$-component. The map sends all other components to zero. \end{lemma} \begin{lemma} \label{lem:ABAbasis} Let $A,B$ denote an LR pair on $V$. Then the following is a basis for the $\mathbb F$-vector space ${\rm End}(V)$: \begin{eqnarray} \label{eq:ABAbasis} A^r B^d A^s \qquad 0 \leq r,s\leq d. \end{eqnarray} \end{lemma} \begin{proof} The dimension of ${\rm End}(V)$ is $(d+1)^2$. The list (\ref{eq:ABAbasis}) contains $(d+1)^2$ elements, and these are linearly independent by Lemma \ref{lem:ABAaction}. The result follows. \end{proof} \begin{corollary} \label{cor:ABgen} Let $A,B$ denote an LR pair on $V$. Then the $\mathbb F$-algebra ${\rm End}(V)$ is generated by $A,B$. \end{corollary} \begin{proof} By Lemma \ref{lem:ABAbasis}. \end{proof} \begin{lemma} \label{lem:paPre} Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. Then the following {\rm (i)--(iv)} hold. \begin{enumerate} \item[\rm (i)] For $0 \leq i \leq d$ the subspace $V_i$ is invariant under $AB$ and $BA$. \item[\rm (ii)] The map $BA$ is zero on $V_0$. \item[\rm (iii)] The map $AB$ is zero on $V_d$. \item[\rm (iv)] For $1 \leq i \leq d$, the eigenvalue of $AB$ on $V_{i-1}$ is nonzero and equal to the eigenvalue of $BA$ on $V_i$. 
\end{enumerate} \end{lemma} \begin{proof} (i)--(iii) The decomposition $\lbrace V_i\rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. \\ \noindent (iv) Pick $0 \not= u \in V_{i-1}$ and $0 \not=v \in V_i$. There exist nonzero $r,s \in \mathbb F$ such that $Av=r u$ and $Bu=sv$. The scalar $rs$ is the eigenvalue of $AB$ on $V_{i-1}$, and the eigenvalue of $BA$ on $V_i$. \end{proof} \begin{definition} \label{def:pa} \rm Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. For $1 \leq i \leq d$ let $\varphi_i$ denote the eigenvalue referred to in Lemma \ref{lem:paPre}(iv). Thus $0 \not=\varphi_i\in \mathbb F$. The sequence $\lbrace \varphi_i\rbrace_{i=1}^d$ is called the {\it parameter sequence} for $A,B$. For notational convenience define $\varphi_0=0$ and $\varphi_{d+1}=0$. \end{definition} \begin{lemma} \label{lem:BAvAB} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Then the LR pair $B,A$ has parameter sequence $\lbrace \varphi_{d-i+1}\rbrace_{i=1}^d$. \end{lemma} \begin{proof} Use Lemma \ref{lem:paPre} and Definition \ref{def:pa}. \end{proof} \noindent Here is an example of an LR pair. \begin{example} \label{ex:LR} \rm Let $\lbrace \varphi_i\rbrace_{i=1}^d$ denote a sequence of nonzero scalars in $\mathbb F$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote a basis for $V$. Define $A \in {\rm End}(V)$ such that $Av_i = \varphi_i v_{i-1}$ for $1 \leq i \leq d$ and $Av_0=0$. Define $B \in {\rm End}(V)$ such that $Bv_{i} = v_{i+1}$ for $0 \leq i \leq d-1$ and $Bv_d=0$. Then the pair $A,B$ is an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. The $(A,B)$-decomposition of $V$ is induced by the basis $\lbrace v_i \rbrace_{i=0}^d$. \end{example} \noindent Let $A,B$ denote an LR pair on $V$, with idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. Our next goal is to obtain each $E_i$ in terms of $A,B$. 
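To make Example \ref{ex:LR} concrete, here is the case $d=2$ written out in matrix form with respect to the basis $\lbrace v_i\rbrace_{i=0}^2$; this is an illustrative sketch (the nonzero scalars $\varphi_1, \varphi_2$ are arbitrary), not part of the formal development.

```latex
% Case d = 2 of Example ex:LR: A lowers (A v_i = \varphi_i v_{i-1}, A v_0 = 0)
% and B raises (B v_i = v_{i+1}, B v_2 = 0), relative to the basis {v_0, v_1, v_2}.
\[
A = \left(
\begin{array}{ccc}
0 & \varphi_1 & 0 \\
0 & 0 & \varphi_2 \\
0 & 0 & 0
\end{array}
\right),
\qquad \qquad
B = \left(
\begin{array}{ccc}
0 & 0 & 0 \\
1 & 0 & 0 \\
0 & 1 & 0
\end{array}
\right).
\]
% A direct computation gives
\[
AB = \left(
\begin{array}{ccc}
\varphi_1 & 0 & 0 \\
0 & \varphi_2 & 0 \\
0 & 0 & 0
\end{array}
\right),
\qquad \qquad
BA = \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \varphi_1 & 0 \\
0 & 0 & \varphi_2
\end{array}
\right),
\]
% so for i = 1, 2 the eigenvalue of AB on V_{i-1} equals the eigenvalue
% of BA on V_i, namely \varphi_i, as in Lemma lem:paPre(iv).
```

In this instance the $(A,B)$-decomposition is $V_i = {\rm span}(v_i)$, and the parameter sequence from Definition \ref{def:pa} is $\varphi_1, \varphi_2$, as claimed in Example \ref{ex:LR}.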
\begin{lemma} \label{lem:AiBione} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d $ and idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. Then \begin{eqnarray} && \label{eq:AiBione} AB = \sum_{j=0}^{d-1} E_j \varphi_{j+1}, \qquad \qquad \label{eq:BiAione} BA = \sum_{j=1}^d E_j \varphi_{j}. \end{eqnarray} \end{lemma} \begin{proof} Use Definitions \ref{def:ABE}, \ref{def:pa}. \end{proof} \noindent The following result is a generalization of Lemma \ref{lem:AiBione}. \begin{lemma} \label{lem:AiBi} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d $ and idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. Then for $0 \leq r \leq d$, \begin{eqnarray} && \label{eq:AiBi} A^rB^r = \sum_{j=0}^{d-r} E_j \varphi_{j+1}\varphi_{j+2} \cdots \varphi_{j+r}, \\ && \label{eq:BiAi} B^rA^r = \sum_{j=r}^{d} E_j \varphi_{j}\varphi_{j-1} \cdots \varphi_{j-r+1}. \end{eqnarray} \end{lemma} \begin{proof} To verify (\ref{eq:AiBi}), note that for $0 \leq i \leq d$, the two sides agree on component $i$ of the $(A,B)$-decomposition of $V$. Line (\ref{eq:BiAi}) is similarly verified. \end{proof} \begin{lemma} \label{lem:Eform} Let $A,B$ denote an LR pair on $V$, with idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. Then for $0 \leq i \leq d$, \begin{eqnarray} \label{eq:TwoE} E_i = \frac{A^{d-i}B^d A^i}{\varphi_1 \cdots \varphi_d}, \qquad \qquad E_i = \frac{B^{i}A^d B^{d-i}}{\varphi_1 \cdots \varphi_d}, \end{eqnarray} where $\lbrace \varphi_j \rbrace_{j=1}^d$ is the parameter sequence for $A,B$. \end{lemma} \begin{proof} To obtain the formula on the left in (\ref{eq:TwoE}), in the equation $A^{d-i}B^d A^i=A^{d-i}B^{d-i}B^iA^i$, evaluate the right-hand side using Lemma \ref{lem:AiBi} and simplify the result using $E_r E_s = \delta_{r,s}E_r$ $(0 \leq r,s\leq d)$. The formula on the right in (\ref{eq:TwoE}) is similarly obtained. 
\end{proof} \begin{lemma} \label{lem:zeroprod} Let $A,B$ denote an LR pair on $V$, with idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. Then for $0 \leq i<j\leq d$ the following are zero: \begin{eqnarray*} A^j E_i, \qquad \qquad E_j A^{d-i}, \qquad \qquad E_i B^j, \qquad \qquad B^{d-i} E_j. \end{eqnarray*} \end{lemma} \begin{proof} By (\ref{eq:TwoE}) together with $A^{d+1}=0$ and $B^{d+1}=0$. \end{proof} \begin{lemma} \label{lem:varphiTrace} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$ and idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. Then for $0 \leq i \leq d$, \begin{eqnarray} {\rm tr}(AB E_i)=\varphi_{i+1}, \qquad \qquad {\rm tr}(BAE_i)=\varphi_{i}. \label{eq:traceE} \end{eqnarray} \end{lemma} \begin{proof} In the equation on the left in (\ref{eq:AiBione}), multiply each side on the right by $E_i$ to get $ABE_i = \varphi_{i+1} E_i$. In this equation, take the trace of each side, and recall that $E_i$ has trace 1. This gives the equation on the left in (\ref{eq:traceE}). The other equation in (\ref{eq:traceE}) is similarly verified. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$. We now describe a set of bases for $V$, called $(A,B)$-bases. \begin{definition} \label{def:ABbasis} \rm Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i \rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. A basis $\lbrace v_i \rbrace_{i=0}^d$ for $V$ is called an {\it $(A,B)$-basis} whenever: \begin{enumerate} \item[\rm (i)] $ v_i \in V_i$ for $0 \leq i \leq d$; \item[\rm (ii)] $Av_i = v_{i-1}$ for $1 \leq i \leq d$. \end{enumerate} \end{definition} \begin{lemma} \label{lem:BAbasisU} Let $A,B$ denote an LR pair on $V$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote an $(A,B)$-basis for $V$, and let $\lbrace v'_i\rbrace_{i=0}^d$ denote any vectors in $V$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v'_i\rbrace_{i=0}^d$ is an $(A,B)$-basis for $V$; \item[\rm (ii)] there exists $0 \not=\zeta \in \mathbb F$ such that $v'_i = \zeta v_i$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} Use Definition \ref{def:ABbasis}. \end{proof} \begin{lemma} \label{lem:ABmatrix} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i \rbrace_{i=0}^d$ is an $(A,B)$-basis for $V$; \item[\rm (ii)] with respect to $\lbrace v_i \rbrace_{i=0}^d$ the matrices representing $A$ and $B$ are \begin{eqnarray} \label{eq:ABrep} A:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1 & 0 & && & \\ & \varphi_2 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Use Definitions \ref{def:pa}, \ref{def:ABbasis}. \\ ${\rm (ii)}\Rightarrow {\rm (i)}$ Let $\lbrace V_i \rbrace_{i=0}^d$ denote the decomposition of $V$ induced by $\lbrace v_i \rbrace_{i=0}^d$. By (\ref{eq:ABrep}), $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. Therefore $\lbrace V_i \rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$. Now by Definition \ref{def:ABbasis}, $\lbrace v_i \rbrace_{i=0}^d$ is an $(A,B)$-basis for $V$. \end{proof} \begin{lemma} \label{lem:5Char} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote any vectors in $V$.
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i\rbrace_{i=0}^d$ is an $(A,B)$-basis for $V$; \item[\rm (ii)] $0 \not= v_0 \in A^dV$ and $Bv_i = \varphi_{i+1}v_{i+1}$ for $0 \leq i \leq d-1$; \item[\rm (iii)] there exists $0 \not= \eta \in A^dV$ such that $v_i = (\varphi_1 \varphi_2 \cdots \varphi_i)^{-1}B^i \eta$ for $0 \leq i \leq d$; \item[\rm (iv)] $0 \not=v_d \in B^dV$ and $Av_i = v_{i-1}$ for $1 \leq i \leq d$; \item[\rm (v)] there exists $0 \not= \xi \in B^dV$ such that $v_i = A^{d-i} \xi$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:ABmatrix}. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$. By an {\it inverted $(A,B)$-basis for $V$} we mean the inversion of an $(A,B)$-basis for $V$. \begin{lemma} \label{lem:invABbasis} Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. A basis $\lbrace v_i \rbrace_{i=0}^d$ for $V$ is an inverted $(A,B)$-basis if and only if both \begin{enumerate} \item[\rm (i)] $v_i \in V_{d-i}$ for $0 \leq i \leq d$; \item[\rm (ii)] $Av_i = v_{i+1}$ for $0 \leq i \leq d-1$. \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:ABbasis} and the meaning of inversion. \end{proof} \begin{lemma} \label{lem:invABmatrix} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i \rbrace_{i=0}^d$ is an inverted $(A,B)$-basis for $V$; \item[\rm (ii)] with respect to $\lbrace v_i \rbrace_{i=0}^d$ the matrices representing $A$ and $B$ are \begin{eqnarray} \label{eq:invABrep} A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_d & && & \bf 0 \\ & 0 & \varphi_{d-1} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1 \\ {\bf 0} && & & & 0 \\ \end{array} \right). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:ABmatrix} and the meaning of inversion. \end{proof} \begin{lemma} \label{lem:5CharInv} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote any vectors in $V$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i\rbrace_{i=0}^d$ is an inverted $(A,B)$-basis for $V$; \item[\rm (ii)] $0 \not=v_0 \in B^dV$ and $Av_i = v_{i+1}$ for $0 \leq i \leq d-1$; \item[\rm (iii)] there exists $0 \not= \xi \in B^dV$ such that $v_i = A^i \xi$ for $0 \leq i \leq d$; \item[\rm (iv)] $0 \not=v_d \in A^dV$ and $Bv_i = \varphi_{d-i+1}v_{i-1}$ for $1 \leq i \leq d$; \item[\rm (v)] there exists $0 \not= \eta \in A^dV$ such that $v_i = (\varphi_1 \varphi_{2} \cdots \varphi_{d-i})^{-1}B^{d-i} \eta$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:5Char} and the meaning of inversion. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$. We now consider a $(B,A)$-basis for $V$. \begin{lemma} \label{lem:BAbasis} Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. 
A basis $\lbrace v_i \rbrace_{i=0}^d$ for $V$ is a $(B,A)$-basis if and only if both \begin{enumerate} \item[\rm (i)] $v_i \in V_{d-i}$ for $0 \leq i \leq d$; \item[\rm (ii)] $Bv_i = v_{i-1}$ for $1 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} Apply Definition \ref{def:ABbasis} to the LR pair $B,A$. \end{proof} \begin{lemma} \label{lem:BAbasisMat} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i \rbrace_{i=0}^d$ is a $(B,A)$-basis for $V$; \item[\rm (ii)] with respect to $\lbrace v_i \rbrace_{i=0}^d$ the matrices representing $A$ and $B$ are \begin{eqnarray} \label{eq:BArep} A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_d & 0 & && & \\ & \varphi_{d-1} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:ABmatrix} to the LR pair $B,A$. \end{proof} \begin{lemma} \label{lem:5CharBA} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote any vectors in $V$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i\rbrace_{i=0}^d$ is a $(B,A)$-basis for $V$; \item[\rm (ii)] $0 \not=v_0 \in B^dV$ and $Av_i = \varphi_{d-i} v_{i+1}$ for $0 \leq i \leq d-1$; \item[\rm (iii)] there exists $0 \not= \xi \in B^dV$ such that $v_i = (\varphi_d \varphi_{d-1} \cdots \varphi_{d-i+1})^{-1} A^i \xi$ for $0 \leq i \leq d$; \item[\rm (iv)] $0 \not=v_d \in A^dV$ and $Bv_i = v_{i-1}$ for $1 \leq i \leq d$; \item[\rm (v)] there exists $0 \not= \eta \in A^dV$ such that $v_i = B^{d-i} \eta$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:5Char} to the LR pair $B,A$. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$. We now consider an inverted $(B,A)$-basis for $V$. \begin{lemma} Let $A,B$ denote an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. A basis $\lbrace v_i \rbrace_{i=0}^d$ for $V$ is an inverted $(B,A)$-basis if and only if both \begin{enumerate} \item[\rm (i)] $v_i \in V_{i}$ for $0 \leq i \leq d$; \item[\rm (ii)] $Bv_i = v_{i+1}$ for $0 \leq i \leq d-1$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:invABbasis} to the LR pair $B,A$. \end{proof} \begin{lemma} \label{lem:BAmatrixInv} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i \rbrace_{i=0}^d$ is an inverted $(B,A)$-basis for $V$; \item[\rm (ii)] with respect to $\lbrace v_i \rbrace_{i=0}^d$ the matrices representing $A$ and $B$ are \begin{eqnarray} \label{eq:invBArep} A:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_1 & && & \bf 0 \\ & 0 & \varphi_2 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right). \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:invABmatrix} to the LR pair $B,A$. \end{proof} \begin{lemma} \label{lem:5CharBAinv} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote any vectors in $V$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i\rbrace_{i=0}^d$ is an inverted $(B,A)$-basis for $V$; \item[\rm (ii)] $0 \not=v_0 \in A^dV$ and $Bv_i = v_{i+1}$ for $0 \leq i \leq d-1$; \item[\rm (iii)] there exists $0 \not= \eta \in A^dV$ such that $v_i = B^{i} \eta$ for $0 \leq i \leq d$; \item[\rm (iv)] $0 \not=v_d \in B^dV$ and $Av_i = \varphi_i v_{i-1}$ for $1 \leq i \leq d$; \item[\rm (v)] there exists $0 \not= \xi \in B^dV$ such that $v_i = (\varphi_d \varphi_{d-1} \cdots \varphi_{i+1})^{-1} A^{d-i} \xi$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma \ref{lem:5CharInv} to the LR pair $B,A$. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$. Earlier we used $A,B$ to obtain four bases for $V$. We now consider some transitions between these bases. \begin{lemma} \label{lem:sometrans} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. 
\begin{enumerate} \item[\rm (i)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an $(A,B)$-basis for $V$. Then the sequence $\lbrace \varphi_1 \varphi_2\cdots \varphi_i v_i\rbrace_{i=0}^d$ is an inverted $(B,A)$-basis for $V$. \item[\rm (ii)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an inverted $(B,A)$-basis for $V$. Then $\lbrace (\varphi_1 \varphi_2\cdots \varphi_i )^{-1}v_i\rbrace_{i=0}^d$ is an $(A,B)$-basis for $V$. \item[\rm (iii)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote a $(B,A)$-basis for $V$. Then the sequence $\lbrace (\varphi_1 \varphi_{2}\cdots \varphi_{d-i})^{-1} v_i\rbrace_{i=0}^d$ is an inverted $(A,B)$-basis for $V$. \item[\rm (iv)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an inverted $(A,B)$-basis for $V$. Then $\lbrace \varphi_1 \varphi_{2}\cdots \varphi_{d-i} v_i\rbrace_{i=0}^d$ is a $(B,A)$-basis for $V$. \end{enumerate} \end{lemma} \begin{proof} (i), (ii) Compare Lemma \ref{lem:5Char}(iii) and Lemma \ref{lem:5CharBAinv}(iii). \\ \noindent (iii), (iv) Compare Lemma \ref{lem:5CharInv}(v) and Lemma \ref{lem:5CharBA}(v). \end{proof} \noindent We now discuss isomorphisms for LR pairs. \begin{definition} \label{def:isoLRP} \rm Let $A,B$ denote an LR pair on $V$. Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B'$ denote an LR pair on $V'$. By an {\it isomorphism of LR pairs from $A,B$ to $A',B'$} we mean an $\mathbb F$-linear bijection $\sigma :V \to V'$ such that $\sigma A = A' \sigma $ and $\sigma B = B' \sigma$. The LR pairs $A,B$ and $A',B'$ are called {\it isomorphic} whenever there exists an isomorphism of LR pairs from $A,B$ to $A',B'$. \end{definition} \noindent We now classify the LR pairs up to isomorphism. \begin{proposition} \label{prop:LRpairClass} Consider the map which sends an LR pair to its parameter sequence. 
This map induces a bijection between the following two sets: \begin{enumerate} \item[\rm (i)] the isomorphism classes of LR pairs over $\mathbb F$ that have diameter $d$; \item[\rm (ii)] the sequences $\lbrace \varphi_i \rbrace_{i=1}^d$ of nonzero scalars in $\mathbb F$. \end{enumerate} \end{proposition} \begin{proof} By Example \ref{ex:LR} and Lemma \ref{lem:BAmatrixInv}. \end{proof} \noindent We have some comments about Definition \ref{def:isoLRP}. \begin{lemma} \label{lem:isoMove} Referring to Definition \ref{def:isoLRP}, let $\lbrace E_i \rbrace_{i=0}^d$ and $\lbrace E'_i \rbrace_{i=0}^d$ denote the idempotent sequences for $A,B$ and $A',B'$ respectively. Let $\sigma$ denote an isomorphism of LR pairs from $A,B$ to $A',B'$. Then $\sigma E_i = E'_i \sigma $ for $0 \leq i \leq d$. \end{lemma} \begin{proof} Use Lemma \ref{lem:Eform}. \end{proof} \begin{lemma} \label{lem:isoFix} Let $A,B$ denote an LR pair on $V$. For nonzero $\sigma \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $A,B$ to $A,B$; \item[\rm (ii)] $\sigma$ commutes with $A$ and $B$; \item[\rm (iii)] $\sigma$ commutes with everything in ${\rm End}(V)$; \item[\rm (iv)] there exists $0 \not=\zeta \in \mathbb F$ such that $\sigma = \zeta I$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By Definition \ref{def:isoLRP}. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (iii)}$ By Corollary \ref{cor:ABgen}. \\ \noindent ${\rm (iii)}\Rightarrow {\rm (iv)}$ By linear algebra. \\ \noindent ${\rm (iv)}\Rightarrow {\rm (i)}$ By Definition \ref{def:isoLRP}. \end{proof} \noindent We have some comments about Lemma \ref{lem:alphaBeta}. \begin{lemma} \label{lem:alphaBetacom} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$.
For nonzero $\alpha, \beta \in \mathbb F$ the LR pair $\alpha A,\beta B$ has parameter sequence $\lbrace \alpha \beta \varphi_i \rbrace_{i=1}^d$. \end{lemma} \begin{proof} Use Definition \ref{def:pa}. \end{proof} \begin{lemma} Let $A,B$ denote a nontrivial LR pair over $\mathbb F$. For nonzero $\alpha, \beta \in \mathbb F$ the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR pairs $A,B$ and $\alpha A, \beta B$ are isomorphic; \item[\rm (ii)] $\alpha \beta = 1$. \end{enumerate} \end{lemma} \begin{proof} Use Proposition \ref{prop:LRpairClass} and Lemma \ref{lem:alphaBetacom}. \end{proof} \begin{lemma} Let $A,B$ denote an LR pair on $V$. Pick nonzero $\alpha, \beta \in \mathbb F$. \begin{enumerate} \item[\rm (i)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an $(A,B)$-basis for $V$. Then the sequence $\lbrace \alpha^{-i} v_i \rbrace_{i=0}^d$ is an $(\alpha A, \beta B)$-basis for $V$. \item[\rm (ii)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an inverted $(A,B)$-basis for $V$. Then the sequence $\lbrace \alpha^{i} v_i \rbrace_{i=0}^d$ is an inverted $(\alpha A, \beta B)$-basis for $V$. \item[\rm (iii)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote a $(B,A)$-basis for $V$. Then the sequence $\lbrace \beta^{-i} v_i \rbrace_{i=0}^d$ is a $(\beta B,\alpha A)$-basis for $V$. \item[\rm (iv)] Let $\lbrace v_i \rbrace_{i=0}^d$ denote an inverted $(B,A)$-basis for $V$. Then the sequence $\lbrace \beta^{i} v_i \rbrace_{i=0}^d$ is an inverted $(\beta B, \alpha A)$-basis for $V$. \end{enumerate} \end{lemma} \begin{proof} To obtain part (i) use Lemma \ref{lem:alphaBeta} and Definition \ref{def:ABbasis}. Parts (ii)--(iv) are similarly obtained. \end{proof} \begin{definition} \label{def:FlagLR} \rm Let $\lbrace U_i\rbrace_{i=0}^d$ denote a flag on $V$. An element $A \in {\rm End}(V)$ is said to {\it lower} $\lbrace U_i\rbrace_{i=0}^d$ whenever $AU_i = U_{i-1}$ for $1 \leq i \leq d$ and $AU_0=0$. 
The map $A$ is said to {\it raise} $\lbrace U_i\rbrace_{i=0}^d$ whenever $U_i + AU_i = U_{i+1}$ for $0 \leq i \leq d-1$. \end{definition} \begin{lemma} \label{lem:Ainduce} Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$ that is lowered by $A \in {\rm End}(V)$. \begin{enumerate} \item[\rm (i)] The flag induced by $\lbrace V_i\rbrace_{i=0}^d$ is lowered by $A$. \item[\rm (ii)] The flag induced by $\lbrace V_{d-i}\rbrace_{i=0}^d$ is raised by $A$. \end{enumerate} \end{lemma} \begin{proof} (i) Let $\lbrace U_i\rbrace_{i=0}^d$ denote the flag on $V$ that is induced by $\lbrace V_i\rbrace_{i=0}^d$. Then $U_i= V_0 + \cdots + V_i$ for $0 \leq i \leq d$. By assumption $AV_i = V_{i-1} $ for $1 \leq i \leq d$ and $AV_0=0$. Therefore $AU_i = U_{i-1}$ for $1 \leq i \leq d$ and $AU_0=0$. In other words, the flag $\lbrace U_i\rbrace_{i=0}^d$ is lowered by $A$. \\ \noindent (ii) Let $\lbrace U_i\rbrace_{i=0}^d$ denote the flag on $V$ that is induced by $\lbrace V_{d-i}\rbrace_{i=0}^d$. Then $U_i = V_{d-i}+ \cdots + V_d$ for $0 \leq i \leq d$. For $0 \leq i \leq d-1$, $AU_i = V_{d-i-1}+ \cdots + V_{d-1}$. By these comments $U_i + AU_i = U_{i+1}$. Therefore $\lbrace U_i\rbrace_{i=0}^d$ is raised by $A$. \end{proof} \begin{lemma} \label{lem:LRFlag} Let $A,B$ denote an LR pair on $V$. \begin{enumerate} \item[\rm (i)] The flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. \item[\rm (ii)] The flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$ is raised by $A$ and lowered by $B$. \end{enumerate} \end{lemma} \begin{proof} The $(B,A)$-decomposition of $V$ is the inversion of the $(A,B)$-decomposition of $V$. The $(A,B)$-decomposition of $V$ is lowered by $A$. The $(B,A)$-decomposition of $V$ is lowered by $B$. By Lemma \ref{lem:ABdecInd}, the $(A,B)$-decomposition of $V$ induces the flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$, and the $(B,A)$-decomposition of $V$ induces the flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$. 
The result follows in view of Lemma \ref{lem:Ainduce}. \end{proof} \begin{lemma} \label{lem:oppF} Let $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$ denote flags on $V$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$ are opposite; \item[\rm (ii)] there exists $A \in {\rm End}(V)$ that lowers $\lbrace U_i \rbrace_{i=0}^d$ and raises $\lbrace U'_i \rbrace_{i=0}^d$. \end{enumerate} \noindent Assume that {\rm (i), (ii)} hold, and define $V_i = U_i \cap U'_{d-i}$ for $0 \leq i \leq d$. Then the decomposition $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$. \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Define $V_i = U_i \cap U'_{d-i}$ for $0 \leq i \leq d$. Then $\lbrace V_i \rbrace_{i=0}^d$ is a decomposition of $V$ that induces $\lbrace U_i \rbrace_{i=0}^d$ and whose inversion induces $\lbrace U'_i \rbrace_{i=0}^d$. Let $A \in {\rm End}(V)$ lower $\lbrace V_i \rbrace_{i=0}^d$. By Lemma \ref{lem:Ainduce}, $A$ lowers $\lbrace U_i \rbrace_{i=0}^d$ and raises $\lbrace U'_i \rbrace_{i=0}^d$. \\ ${\rm (ii)}\Rightarrow {\rm (i)}$ We display a decomposition $\lbrace W_i \rbrace_{i=0}^d$ of $V$ that induces $\lbrace U_i \rbrace_{i=0}^d$ and whose inversion induces $\lbrace U'_i \rbrace_{i=0}^d$. Define $W_i = A^{d-i} U'_0$ for $0 \leq i \leq d$. We show that $\lbrace W_i \rbrace_{i=0}^d$ is a decomposition of $V$. Note that $U'_0$ has dimension one, so $W_i$ has dimension at most one for $0 \leq i \leq d$. Using the assumption that $A$ raises $\lbrace U'_i \rbrace_{i=0}^d$, we obtain $U'_j = W_d +W_{d-1} + \cdots + W_{d-j}$ for $0 \leq j \leq d$. Setting $j=d$ we find $V = \sum_{i=0}^d W_i$. The dimension of $V$ is $d+1$. Therefore the sum $V = \sum_{i=0}^d W_i$ is direct, and $W_i$ has dimension one for $0 \leq i \leq d$. In other words $\lbrace W_i \rbrace_{i=0}^d$ is a decomposition of $V$.
By construction, the inverted decomposition $\lbrace W_{d-i} \rbrace_{i=0}^d$ induces $\lbrace U'_i \rbrace_{i=0}^d$. By assumption $A$ lowers the flag $\lbrace U_{i} \rbrace_{i=0}^d$. By Definition \ref{def:FlagLR}, $\lbrace U_{i} \rbrace_{i=0}^d$ is the unique flag on $V$ that is lowered by $A$. By the definition of $\lbrace W_i \rbrace_{i=0}^d$ we find $AW_i = W_{i-1}$ for $1 \leq i \leq d$. Also $AW_0=A^{d+1}V=0$ since $A$ is Nil. Consequently $A$ lowers $\lbrace W_i \rbrace_{i=0}^d$. By Lemma \ref{lem:Ainduce}, $A$ lowers the flag induced by $\lbrace W_{i} \rbrace_{i=0}^d$. By these comments $\lbrace W_{i} \rbrace_{i=0}^d$ induces $\lbrace U_i \rbrace_{i=0}^d$. We have shown that the decomposition $\lbrace W_{i} \rbrace_{i=0}^d$ induces $\lbrace U_i \rbrace_{i=0}^d$ and the inverted decomposition $\lbrace W_{d-i} \rbrace_{i=0}^d$ induces $\lbrace U'_i \rbrace_{i=0}^d$. Therefore $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$ are opposite. \\ \noindent Assume that (i), (ii) hold. Recall from the proof of ${\rm (ii)}\Rightarrow {\rm (i)}$ that the decomposition $\lbrace W_{i} \rbrace_{i=0}^d$ induces $\lbrace U_i \rbrace_{i=0}^d$ and the inverted decomposition $\lbrace W_{d-i} \rbrace_{i=0}^d$ induces $\lbrace U'_i \rbrace_{i=0}^d$. Therefore $W_i = U_{i} \cap U'_{d-i} = V_i$ for $0 \leq i \leq d$. Recall also from the proof of ${\rm (ii)}\Rightarrow {\rm (i)}$ that $A$ lowers $\lbrace W_i \rbrace_{i=0}^d$. So $A$ lowers $\lbrace V_i \rbrace_{i=0}^d$. \end{proof} \begin{proposition} \label{prop:LRchar} Let $A,B \in {\rm End}(V)$. Then $A,B$ is an LR pair on $V$ if and only if the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $A$ and $B$ are Nil; \item[\rm (ii)] the flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$ is raised by $B$; \item[\rm (iii)] the flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$ is raised by $A$. \end{enumerate} \end{proposition} \begin{proof} First assume that $A,B$ is an LR pair on $V$.
Then condition (i) holds by Lemma \ref{lem:ABdecIndNil}, and conditions (ii), (iii) hold by Lemma \ref{lem:LRFlag}. Conversely, assume that the conditions (i)--(iii) hold. To show that $A,B$ is an LR pair on $V$, we display a decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$ that is lowered by $A$ and raised by $B$. By construction and since $A$ is Nil, the flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$ is lowered by $A$. By assumption the flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$ is raised by $A$. Now by Lemma \ref{lem:oppF} the flags $\lbrace A^{d-i}V\rbrace_{i=0}^d$ and $\lbrace B^{d-i}V\rbrace_{i=0}^d$ are opposite. Define $V_i = A^{d-i}V \cap B^iV$ for $0 \leq i \leq d$. By Lemma \ref{lem:oppF} the decomposition $\lbrace V_i\rbrace_{i=0}^d$ is lowered by $A$. Interchanging the roles of $A,B$ in the argument so far, we see that $\lbrace V_i\rbrace_{i=0}^d$ is raised by $B$. We have shown that $A,B$ is an LR pair on $V$. \end{proof} \noindent We define some matrices for later use. \begin{definition} \label{def:Dmat} \rm Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Define a diagonal matrix $D \in {\rm Mat}_{d+1}(\mathbb F)$ with $(i,i)$-entry $\varphi_1 \varphi_2 \cdots \varphi_i$ for $0 \leq i \leq d$. \end{definition} \begin{lemma} \label{lem:Dmeaning} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. For $M \in {\rm Mat}_{d+1}(\mathbb F)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $M$ is the transition matrix from an $(A,B)$-basis of $V$ to an inverted $(B,A)$-basis of $V$; \item[\rm (ii)] there exists $0 \not=\zeta \in \mathbb F$ such that $M=\zeta D$. \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:sometrans}(i). \end{proof} \begin{definition} \label{def:tauMat} Let $\tau$ denote the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that has $(i-1,i)$-entry 1 for $1 \leq i \leq d$, and all other entries $0$. 
Thus \begin{eqnarray*} \tau = \left( \begin{array}{ c c cc c c } 0 & 1 & & & & {\bf 0} \\ & 0 & 1 & & & \\ & & 0 & \cdot & & \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad {\bf Z}\tau {\bf Z} = \left( \begin{array}{ c c cc c c } 0 & & & & & {\bf 0} \\ 1 & 0 & & & & \\ & 1 & 0 & & & \\ & & \cdot & \cdot & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right). \end{eqnarray*} \end{definition} \noindent Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. In lines (\ref{eq:ABrep}), (\ref{eq:invABrep}), (\ref{eq:BArep}), (\ref{eq:invBArep}) we encountered some matrices that had $\lbrace \varphi_i \rbrace_{i=1}^d$ among the entries. We now express these matrices in terms of $\bf Z$, $D$, $\tau$. \begin{lemma} Referring to Definitions \ref{def:Dmat} and \ref{def:tauMat}, \begin{eqnarray*} && D^{-1}\tau D = \left( \begin{array}{ c c cc c c } 0 & \varphi_1 & & & & {\bf 0} \\ & 0 & \varphi_2 & & & \\ & & 0 & \cdot & & \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right), \quad {\bf Z}D{\bf Z} \tau {\bf Z} D^{-1} {\bf Z} = \left( \begin{array}{ c c cc c c } 0 & \varphi_d & & & & {\bf 0} \\ & 0 & \varphi_{d-1} & & & \\ & & 0 & \cdot & & \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1\\ {\bf 0} && & & & 0 \\ \end{array} \right), \\ && {\bf Z}D^{-1}\tau D {\bf Z} = \left( \begin{array}{ c c cc c c } 0 & & & & & {\bf 0} \\ \varphi_d & 0 & & & & \\ & \varphi_{d-1} & 0 & & & \\ & & \cdot & \cdot & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1 & 0 \\ \end{array} \right), \quad D{\bf Z}\tau {\bf Z} D^{-1} = \left( \begin{array}{ c c cc c c } 0 & & & & & {\bf 0} \\ \varphi_1 & 0 & & & & \\ & \varphi_{2} & 0 & & & \\ & & \cdot & \cdot & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right). \end{eqnarray*} \end{lemma} \begin{proof} Matrix multiplication. 
\end{proof} \noindent In Section 12 we will consider some powers of the matrix $\tau$ from Definition \ref{def:tauMat}. We now compute the entries of these powers. \begin{lemma} \label{lem:tauPower} Referring to Definition \ref{def:tauMat}, for $0 \leq r \leq d$ the matrix $\tau^r$ has $(i,j)$-entry \begin{eqnarray*} (\tau^r)_{i,j} = \begin{cases} 1 & {\mbox{\rm if $j-i=r$}}; \\ 0 & {\mbox{\rm if $j-i\not=r$}} \end{cases} \qquad \qquad (0 \leq i,j \leq d). \end{eqnarray*} Moreover $\tau^{d+1}=0$. \end{lemma} \begin{proof} Matrix multiplication. \end{proof} \noindent Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. As we proceed, we will encounter the case in which the $\lbrace \varphi_i \rbrace_{i=1}^d$ satisfy a linear recurrence. We now consider this case. \begin{lemma} \label{lem:ABrelations} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Pick an integer $r$ $(1 \leq r \leq d+1)$. Let $x$ and $\lbrace y_i \rbrace_{i=0}^{r}$ denote scalars in $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $ x A^{r-1} =\sum_{i=0}^{r} y_i A^i B A^{r-i}$; \item[\rm (ii)] $ x B^{r-1} = \sum_{i=0}^{r} y_i B^{r-i} A B^i$; \item[\rm (iii)] $x = y_0 \varphi_{i} + y_1 \varphi_{i+1} + \cdots + y_{r}\varphi_{i+r}$ for $0 \leq i \leq d-r+1$. \end{enumerate} \end{lemma} \begin{proof} Represent $A$ and $B$ by matrices as in (\ref{eq:ABrep}). \end{proof} \section{LR pairs of Weyl and $q$-Weyl type} In this section we investigate two families of LR pairs, said to have Weyl type and $q$-Weyl type. We begin with an example that illustrates Lemma \ref{lem:ABrelations}. Throughout this section $V$ denotes a vector space over $\mathbb F$ with dimension $d+1$. \begin{example} \label{ex:Weyl} \rm Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. 
By Lemma \ref{lem:ABrelations}, \begin{eqnarray} AB-BA = I \label{eq:Weyl1} \end{eqnarray} if and only if \begin{eqnarray} \varphi_{i+1}-\varphi_i = 1 \qquad \qquad (0 \leq i \leq d). \label{eq:Weyl2} \end{eqnarray} \end{example} \begin{note}\rm The equation (\ref{eq:Weyl1}) is called the {\it Weyl relation}. \end{note} \begin{definition} \label{def:Weyl} \rm The LR pair $A,B$ in Example \ref{ex:Weyl} is said to have {\it Weyl type} whenever it satisfies the equivalent conditions {\rm (\ref{eq:Weyl1})}, {\rm (\ref{eq:Weyl2})}. \end{definition} \begin{note}\rm Referring to Example \ref{ex:Weyl}, assume that $A,B$ has Weyl type. Then the LR pair $B,-A$ has Weyl type. \end{note} \begin{lemma} \label{lem:pPrime} Referring to Example \ref{ex:Weyl}, assume that $A,B$ has Weyl type. Then {\rm (i), (ii)} hold below. \begin{enumerate} \item[\rm (i)] $\varphi_i = i$ for $1 \leq i \leq d$. \item[\rm (ii)] The integer $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$. \end{enumerate} \end{lemma} \begin{proof} By (\ref{eq:Weyl2}) and $\varphi_0=0$ we obtain $\varphi_i = i$ for $1\leq i\leq d+1$. We have $\varphi_{d+1}=0$, so $d+1=0$ in the field $\mathbb F$. For $1 \leq i \leq d$ we have $\varphi_i\not=0$, so $i \not=0$ in $\mathbb F$. The results follow. \end{proof} \begin{lemma} \label{ex:Weylback} Assume that $p=d+1$ is prime and ${\rm Char}(\mathbb F) = p$. Define $\varphi_i = i$ for $1 \leq i \leq d$. Then $\lbrace \varphi_i \rbrace_{i=1}^d$ are nonzero; let $A,B$ denote the LR pair over $\mathbb F$ that has parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Then $A,B$ has Weyl type. \end{lemma} \begin{proof} One checks that condition (\ref{eq:Weyl2}) is satisfied. \end{proof} \begin{lemma} \label{lem:WeylChar} Assume that $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$. Then for $A,B \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] neither of $A,B$ is invertible and $AB-BA=I$; \item[\rm (ii)] $A,B$ is an LR pair on $V$ that has Weyl type. 
\end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Since $A$ is not invertible, there exists $0 \not=\eta \in V$ such that $A\eta=0$. Define $v_i = B^i\eta$ for $0 \leq i \leq d$. By construction, $Av_0=0$ and $Bv_{i-1} = v_i$ for $1 \leq i \leq d$. Using $AB-BA=I$ and induction on $i$, we obtain $Av_i=i v_{i-1}$ for $1 \leq i \leq d$. Note that $i\not=0$ in $\mathbb F$ for $1 \leq i \leq d$. By these comments, for $0 \leq i \leq d$ the vector $v_i$ is in the kernel of $A^{i+1}$ and not in the kernel of $A^i$. Therefore $\lbrace v_i \rbrace_{i=0}^d$ are linearly independent and hence form a basis for $V$. By construction the induced decomposition $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$. Therefore $A$ is Nil. Replacing $A,B$ by $B,-A$ in the above argument, we see that $B$ is Nil. Now $Bv_d=B^{d+1}\eta=0$. Now by construction $B$ raises the decomposition $\lbrace V_i \rbrace_{i=0}^d$. We have shown that the decomposition $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. Therefore $A,B$ is an LR pair on $V$. This LR pair has Weyl type by Definition \ref{def:Weyl} and since $AB-BA=I$. \\ ${\rm (ii)}\Rightarrow {\rm (i)}$ The elements $A,B$ are not invertible, since they are Nil by Lemma \ref{lem:ABdecIndNil}. By Definition \ref{def:Weyl} we have $AB-BA=I$. \end{proof} \noindent Later in the paper we will use the following curious fact about LR pairs of Weyl type. \begin{lemma} \label{lem:curious} Assume $d\geq 2$. Let $A,B$ denote an LR pair on $V$ that has Weyl type. Define $C=-A-B$. Then the pairs $B,C$ and $C,A$ are LR pairs on $V$ that have Weyl type. \end{lemma} \begin{proof} By Definition \ref{def:Weyl}, $AB-BA=I$. By Lemma \ref{lem:pPrime}(ii), $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$. We show that $B,C$ is an LR pair on $V$ that has Weyl type. To do this we apply Lemma \ref{lem:WeylChar} to the pair $B,C$. Using $AB-BA=I$ and the definition of $C$, we find $BC-CB=I$.
The map $B$ is not invertible since $B$ is Nil. We show that $C$ is not invertible. By Lemma \ref{lem:BAmatrixInv}, with respect to an inverted $(B,A)$-basis for $V$ the element $A+B$ is represented by \begin{equation} \label{eq:matrixEIG} \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ 1& 0 & 2&& & \\ & 1 & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & d\\ {\bf 0} && & & 1 & 0 \\ \end{array} \right). \end{equation} By our assumption $d\geq 2$, the prime $p=d+1$ is odd. Therefore $d$ is even. Define \begin{eqnarray*} z_{2i}= \frac{(-1)^i}{ 2^{i}i!} \qquad \qquad (0 \leq i \leq d/2) \end{eqnarray*} and $z_{2i+1}= 0$ for $0 \leq i < d/2$. The matrix (\ref{eq:matrixEIG}) times the column vector $(z_0, z_1, \ldots, z_d)^t$ is zero. Therefore the matrix (\ref{eq:matrixEIG}) is not invertible. Therefore $A+B$ is not invertible, so $C$ is not invertible. Applying Lemma \ref{lem:WeylChar} to the pair $B,C$ we find that $B,C$ is an LR pair on $V$ that has Weyl type. One similarly shows that $C,A$ is an LR pair on $V$ that has Weyl type. \end{proof} \noindent Here is another example of an LR pair. \begin{example} \label{ex:qWeyl} \rm Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Pick a nonzero $q \in \mathbb F$ such that $q^2 \not=1$. By Lemma \ref{lem:ABrelations}, \begin{eqnarray} \frac{qAB-q^{-1}BA}{q-q^{-1}}= I \label{eq:qWeyl1} \end{eqnarray} if and only if \begin{eqnarray} \frac{q\varphi_{i+1}-q^{-1}\varphi_i}{q-q^{-1}}= 1 \qquad \qquad (0 \leq i \leq d). \label{eq:qWeyl2} \end{eqnarray} \end{example} \begin{note}\rm The equation (\ref{eq:qWeyl1}) is called the {\it $q$-Weyl relation}. \end{note} \begin{definition} \label{def:qWeyl} \rm The LR pair $A,B$ in Example \ref{ex:qWeyl} is said to have {\it $q$-Weyl type} whenever it satisfies the equivalent conditions (\ref{eq:qWeyl1}), (\ref{eq:qWeyl2}). 
\end{definition} \begin{note} \label{note:minusq} \rm Referring to Example \ref{ex:qWeyl}, assume that $A,B$ has $q$-Weyl type. Then $A,B$ has $(-q)$-Weyl type. Moreover the LR pair $B,A$ has $(q^{-1})$-Weyl type. \end{note} \begin{lemma} \label{lem:stand} Referring to Example \ref{ex:qWeyl}, assume that $A,B$ has $q$-Weyl type. Then {\rm (i)--(v)} hold below. \begin{enumerate} \item[\rm (i)] $d\geq 1$. \item[\rm (ii)] $\varphi_i = 1-q^{-2i}$ for $1 \leq i \leq d$. \item[\rm (iii)] Assume that ${\rm Char}(\mathbb F)\not=2$ and $d$ is odd. Then $q$ is a primitive $(2d+2)$-root of unity. \item[\rm (iv)] Assume that ${\rm Char}(\mathbb F)\not=2$ and $d$ is even. Then $q$ becomes a primitive $(2d+2)$-root of unity, after replacing $q$ by $-q$ if necessary. \item[\rm (v)] Assume that ${\rm Char}(\mathbb F)=2$. Then $d$ is even. Moreover $q$ is a primitive $(d+1)$-root of unity. \end{enumerate} \end{lemma} \begin{proof} By (\ref{eq:qWeyl2}) and $\varphi_0=0$ along with induction on $i$, we obtain $\varphi_i = 1-q^{-2i}$ for $1 \leq i \leq d+1$. We have $\varphi_{d+1}=0$, so $q^{2d+2}=1$. For $1 \leq i \leq d$ we have $\varphi_i\not=0$, so $q^{2i}\not=1$. The results follow. \end{proof} \begin{definition}\rm \label{def:qstand} For $q \in \mathbb F$ the ordered pair $d,q$ will be called {\it standard} whenever the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] $d\geq 1$. \item[\rm (ii)] Assume ${\rm Char}(\mathbb F)\not=2$. Then $q$ is a primitive $(2d+2)$-root of unity. \item[\rm (iii)] Assume ${\rm Char}(\mathbb F)=2$. Then $d$ is even, and $q$ is a primitive $(d+1)$-root of unity. \end{enumerate} \end{definition} \begin{note}\rm Referring to Definition \ref{def:qstand}, assume that $d,q$ is standard. Then $q$ is nonzero and $q^2\not=1$. \end{note} \begin{lemma} \label{lem:Mq} Referring to Example \ref{ex:qWeyl}, assume that $A,B$ has $q$-Weyl type. Then $d,q$ is standard or $d,-q$ is standard. 
\end{lemma} \begin{proof} Use Lemma \ref{lem:stand} and Definition \ref{def:qstand}. \end{proof} \noindent For the rest of this section, the following assumption is in effect. \begin{assumption} \label{def:sqRoot} \rm Fix $q \in \mathbb F$ and assume that $d,q$ is standard. We fix a square root $q^{1/2}$ in the algebraic closure $\overline {\mathbb F}$. \end{assumption} \begin{lemma} \label{ex:exceptional} With reference to Assumption \ref{def:sqRoot}, define $\varphi_i = 1-q^{-2i}$ for $1 \leq i \leq d$. Then $\lbrace \varphi_i \rbrace_{i=1}^d$ are nonzero; let $A,B$ denote the LR pair over $\mathbb F$ that has parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$. Then $A,B$ has $q$-Weyl type. \end{lemma} \begin{proof} Condition (\ref{eq:qWeyl2}) is readily checked. \end{proof} \begin{lemma} \label{lem:qWeylABextend} With reference to Assumption \ref{def:sqRoot}, for $A,B \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] neither of $A,B$ is invertible and \begin{eqnarray} \frac{qAB-q^{-1}BA}{q-q^{-1}}= I; \label{eq:ABqWeyl} \end{eqnarray} \item[\rm (ii)] $A,B$ is an LR pair on $V$ that has $q$-Weyl type. \end{enumerate} \end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lem:WeylChar}. For the sake of completeness we give the details. \\ \noindent ${\rm (i)}\Rightarrow {\rm (ii)}$ For $1 \leq i \leq d$ define $\varphi_i = 1-q^{-2i}$ and note that $\varphi_i \not=0$. Since $A$ is not invertible, there exists $0 \not=\eta \in V$ such that $A\eta=0$. Define $v_i = B^i\eta$ for $0 \leq i \leq d$. By construction, $Av_0=0$ and $Bv_{i-1} = v_i$ for $1 \leq i \leq d$. Using (\ref{eq:ABqWeyl}) and induction on $i$, we obtain $Av_i=\varphi_i v_{i-1}$ for $1 \leq i \leq d$. By these comments, for $0 \leq i \leq d$ the vector $v_i$ is in the kernel of $A^{i+1}$ and not in the kernel of $A^i$. Therefore $\lbrace v_i \rbrace_{i=0}^d$ are linearly independent and hence form a basis for $V$.
By construction the induced decomposition $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$. Therefore $A$ is Nil. Replacing $A,B,q$ by $B,A,q^{-1}$ in the above argument, we see that $B$ is Nil. Now $Bv_d=B^{d+1}\eta=0$. Now by construction $B$ raises the decomposition $\lbrace V_i \rbrace_{i=0}^d$. We have shown that the decomposition $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. Therefore $A,B$ is an LR pair on $V$. This LR pair has $q$-Weyl type by Definition \ref{def:qWeyl} and (\ref{eq:ABqWeyl}). \\ ${\rm (ii)}\Rightarrow {\rm (i)}$ The elements $A,B$ are not invertible since they are Nil by Lemma \ref{lem:ABdecIndNil}. By Definition \ref{def:qWeyl} the pair $A,B$ satisfies (\ref{eq:ABqWeyl}). \end{proof} \noindent With reference to Assumption \ref{def:sqRoot}, let $A,B$ denote an LR pair on $V$ that has $q$-Weyl type. Later in the paper we will need the eigenvalues of $qA+q^{-1}B$. Our next goal is to compute these eigenvalues. \begin{lemma} \label{lem:bMatrixPre} Pick a nonzero $b \in \mathbb F$ such that $b^i \not=1$ for $1 \leq i \leq d$. For the tridiagonal matrix \begin{eqnarray} \left( \begin{array}{ c c cc c c } 0 & b^d-1 & && & \bf 0 \\ b-1 & 0 & b^d-b && & \\ & b^2-1 & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & b^d-b^{d-1} \\ {\bf 0} && & & b^d-1 & 0 \\ \end{array} \right) \label{eq:Deltamat} \end{eqnarray} the roots of the characteristic polynomial are \begin{eqnarray} \label{eq:evListPre} b^j - b^{d-j} \qquad \qquad j=0,1,\ldots, d. \end{eqnarray} For $0 \leq j \leq d$ we give a (column) eigenvector for the matrix {\rm (\ref{eq:Deltamat})} and eigenvalue $b^j-b^{d-j}$. This eigenvector has $i$th coordinate \begin{eqnarray*} {}_3\phi_2 \Biggl( \genfrac{}{}{0pt}{} {b^{-i}, \;b^{j-d},\;-b^{-j}} {0, \;\;b^{-d}} \;\Bigg\vert \; b,\;b\Biggr) \end{eqnarray*} for $0 \leq i \leq d$. We follow the standard notation for basic hypergeometric series {\rm \cite[p.~4]{GR}}.
\end{lemma} \begin{proof} See \cite[Example 5.9]{terPA}. \end{proof} \begin{definition} \label{def:thetaI} \rm With reference to Assumption \ref{def:sqRoot}, define \begin{equation} \theta_j = q^{j+1/2} + q^{-j-1/2} \qquad \qquad (0 \leq j \leq d). \label{eq:thetaCalc} \end{equation} \end{definition} \begin{lemma} With reference to Assumption \ref{def:sqRoot} and Definition \ref{def:thetaI}, the following {\rm (i)--(iv)} hold. \begin{enumerate} \item[\rm (i)] $\theta_j = -\theta_{d-j}$ for $0 \leq j \leq d$. \item[\rm (ii)] Assume that $d=2m$ is even. Then $\theta_m =0$. \item[\rm (iii)] Assume that ${\rm Char}(\mathbb F)\not=2$. Then $\lbrace \theta_j \rbrace_{j=0}^d$ are mutually distinct. \item[\rm (iv)] Assume that ${\rm Char}(\mathbb F)=2$, so that $d=2m$ is even. Then $\lbrace \theta_j \rbrace_{j=0}^m$ are mutually distinct. \end{enumerate} \end{lemma} \begin{proof} Use Definition \ref{def:thetaI} and the restrictions on $q$ given in Definition \ref{def:qstand}. \end{proof} \begin{lemma} \label{lem:qAqiB} With reference to Assumption \ref{def:sqRoot}, let $A,B$ denote an LR pair on $V$ that has $q$-Weyl type. Then for $qA+q^{-1}B$ the roots of the characteristic polynomial are $\lbrace \theta_j \rbrace_{j=0}^d$. \end{lemma} \begin{proof} Let $H$ denote the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that represents $qA+q^{-1}B$ with respect to an inverted $(B,A)$-basis for $V$. The entries of $H$ are obtained using Lemma \ref{lem:BAmatrixInv}. Let $\Delta$ denote the matrix (\ref{eq:Deltamat}), with $b=q^{-1}$. For $1 \leq i \leq d$ define $n_i = q^{1/2}(q^{-i}-1)$ and note that $n_i \not=0$. Define a diagonal matrix $N \in {\rm Mat}_{d+1}(\overline {\mathbb F})$ with $(i,i)$-entry $n_1 n_2 \cdots n_i$ for $0 \leq i \leq d$. Note that $N$ is invertible. By matrix multiplication $H = q^{-1/2} N^{-1} \Delta N$. One checks that \begin{eqnarray*} \theta_j = q^{-1/2} (b^{j}-b^{d-j}) \qquad \qquad (0 \leq j \leq d).
\end{eqnarray*} By these comments and Lemma \ref{lem:bMatrixPre}, for the matrix $H$ the roots of the characteristic polynomial are $\lbrace \theta_j \rbrace_{j=0}^d$. The result follows. \end{proof} \section{The dual space $V^*$} Recall our vector space $V$ over $\mathbb F$ with dimension $d+1$. Let $V^*$ denote the vector space over $\mathbb F$ consisting of the $\mathbb F$-linear maps from $V$ to $\mathbb F$. We call $V^*$ the {\it dual space} for $V$. The vector spaces $V$ and $V^*$ have the same dimension $d+1$. There exists a bilinear form $(\,,\,): V \times V^* \to \mathbb F$ such that $(u,f)= f(u)$ for all $u \in V$ and $f \in V^*$. This bilinear form is nondegenerate in the sense of \cite[Section~11]{3ptsl2}. We view $(V^*)^*=V$. Nonempty subsets $X \subseteq V$ and $Y \subseteq V^*$ are called {\it orthogonal} whenever $(x,y)=0$ for all $x \in X$ and $y \in Y$. For a subspace $U$ of $V$ (resp. $V^*$) let $U^\perp$ denote the set of vectors in $V^*$ (resp. $V$) that are orthogonal to everything in $U$. The subspace $U^\perp$ is called the {\it orthogonal complement of $U$}. Note that ${\rm dim}(U) + {\rm dim}(U^\perp) = d+1$. \noindent A basis $\lbrace v_i\rbrace_{i=0}^d$ of $V$ and a basis $\lbrace v'_i\rbrace_{i=0}^d$ of $V^*$ are called {\it dual} whenever $(v_i,v'_j)=\delta_{i,j}$ for $0 \leq i,j\leq d$. Each basis of $V$ (resp. $V^*$) is dual to a unique basis of $V^*$ (resp. $V$). Let $\lbrace u_i\rbrace_{i=0}^d$ (resp. $\lbrace v_i\rbrace_{i=0}^d$) denote a basis of $V$, and let $\lbrace u'_i\rbrace_{i=0}^d$ (resp. $\lbrace v'_i\rbrace_{i=0}^d$) denote the dual basis of $V^*$. Then the following matrices are transpose: (i) the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$; (ii) the transition matrix from $\lbrace v'_i\rbrace_{i=0}^d$ to $\lbrace u'_i\rbrace_{i=0}^d$. 
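\noindent The transpose relation between the two transition matrices can be checked numerically. The following is a minimal sketch (the dimension, the field $\mathbb Q$, and the randomly chosen bases are our own illustrative assumptions): dual bases are obtained via inverse-transpose, and the two transition matrices come out as transposes of one another.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Random invertible matrices whose columns are the bases {u_i}, {v_i} of V.
while True:
    U = rng.integers(-3, 4, size=(d + 1, d + 1)).astype(float)
    V = rng.integers(-3, 4, size=(d + 1, d + 1)).astype(float)
    if abs(np.linalg.det(U)) > 0.5 and abs(np.linalg.det(V)) > 0.5:
        break

# Dual bases: columns of (U^{-1})^T, so that (u_i, u'_j) = delta_{ij}
# under the pairing (u, f) = f(u).
Ud = np.linalg.inv(U).T
Vd = np.linalg.inv(V).T

# Transition matrix from {u_i} to {v_i}:  V = U @ T.
T = np.linalg.solve(U, V)
# Transition matrix from {v'_i} to {u'_i}:  Ud = Vd @ S.
S = np.linalg.solve(Vd, Ud)

# The two transition matrices are transposes of one another.
assert np.allclose(S, T.T)
```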
\noindent A decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$ and a decomposition $\lbrace V'_i\rbrace_{i=0}^d$ of $V^*$ are called {\it dual} whenever $V_i, V'_j$ are orthogonal for all $i,j$ $(0 \leq i,j\leq d)$ such that $i\not=j$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis of $V$ and let $\lbrace v'_i \rbrace_{i=0}^d$ denote the dual basis of $V^*$. Then the following are dual: (i) the decomposition of $V$ induced by $\lbrace v_i \rbrace_{i=0}^d$; (ii) the decomposition of $V^*$ induced by $\lbrace v'_i \rbrace_{i=0}^d$. Each decomposition of $V$ (resp. $V^*$) is dual to a unique decomposition of $V^*$ (resp. $V$). \noindent A flag $\lbrace U_i\rbrace_{i=0}^d$ on $V$ and a flag $\lbrace U'_i\rbrace_{i=0}^d$ on $V^*$ are called {\it dual} whenever $U_i, U'_j$ are orthogonal for all $i,j$ $(0 \leq i,j\leq d)$ such that $i+j=d-1$. In this case $U_i, U'_j$ are orthogonal complements for all $i,j$ $(0 \leq i,j\leq d)$ such that $i+j=d-1$. Each flag on $V$ (resp. $V^*$) is dual to a unique flag on $V^*$ (resp. $V$). \begin{lemma} \label{lem:flagdual} Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$ and let $\lbrace V'_i\rbrace_{i=0}^d$ denote the dual decomposition of $V^*$. Then the following are dual: \begin{enumerate} \item[\rm (i)] the flag on $V$ induced by $\lbrace V_i\rbrace_{i=0}^d$; \item[\rm (ii)] the flag on $V^*$ induced by $\lbrace V'_{d-i}\rbrace_{i=0}^d$. \end{enumerate} \end{lemma} \begin{proof} Let $\lbrace U_i \rbrace_{i=0}^d$ and $\lbrace U'_i \rbrace_{i=0}^d$ denote the flags from (i) and (ii), respectively. For $0 \leq i \leq d$ we have $U_i = V_0+\cdots + V_i$ and $U'_i = V'_{d-i} + \cdots + V'_{d}$. For $0 \leq i,j\leq d$ such that $i+j=d-1$, the subspace $U'_j = V'_{i+1} + \cdots + V'_{d}$ is orthogonal to $U_i$. The result follows. 
\end{proof} \noindent For $\mathbb F$-algebras $\mathcal A$ and $\mathcal A'$, a map $\sigma: \mathcal A \to \mathcal A'$ is called an {\it $\mathbb F$-algebra antiisomorphism} whenever $\sigma$ is an isomorphism of $\mathbb F$-vector spaces and $(ab)^\sigma= b^\sigma a^\sigma $ for all $a,b\in \mathcal A$. By an {\it antiautomorphism} of $\mathcal A$ we mean an $\mathbb F$-algebra antiisomorphism $\sigma:\mathcal A\to \mathcal A$. For $X \in {\rm End}(V)$ there exists a unique element of ${\rm End}(V^*)$, denoted $\tilde X$, such that $(Xu,v)=(u,\tilde X v)$ for all $u \in V$ and $v \in V^*$. The map $\tilde X$ is called the {\it adjoint of $X$}. The adjoint map ${\rm End}(V)\to {\rm End}(V^*)$, $X \mapsto \tilde X$ is an $\mathbb F$-algebra antiisomorphism. Let $\lbrace v_i\rbrace_{i=0}^d$ denote a basis of $V$ and let $\lbrace v'_i\rbrace_{i=0}^d$ denote the dual basis of $V^*$. Then for $X \in {\rm End}(V)$ the following matrices are transpose: (i) the matrix representing $X$ with respect to $\lbrace v_i\rbrace_{i=0}^d$; (ii) the matrix representing $\tilde X$ with respect to $\lbrace v'_i\rbrace_{i=0}^d$. \noindent Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$ and let $\lbrace V'_i\rbrace_{i=0}^d$ denote the dual decomposition of $V^*$. Let $\lbrace E_i\rbrace_{i=0}^d$ denote the idempotent sequence for $\lbrace V_i\rbrace_{i=0}^d$. Then $\lbrace \tilde E_i\rbrace_{i=0}^d$ is the idempotent sequence for $\lbrace V'_i\rbrace_{i=0}^d$. \begin{lemma} \label{lem:Adual} Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$ and let $\lbrace V'_i\rbrace_{i=0}^d$ denote the dual decomposition of $V^*$. Then for $A \in {\rm End}(V)$, \begin{enumerate} \item[\rm (i)] $A$ lowers $\lbrace V_i\rbrace_{i=0}^d$ if and only if $\tilde A$ raises $\lbrace V'_i\rbrace_{i=0}^d$; \item[\rm (ii)] $A$ raises $\lbrace V_i\rbrace_{i=0}^d$ if and only if $\tilde A$ lowers $\lbrace V'_i\rbrace_{i=0}^d$. 
\end{enumerate} \end{lemma} \begin{proof} (i) We invoke Lemmas \ref{lem:LowerRaise}, \ref{lem:RaiseLower}. Let $\lbrace E_i\rbrace_{i=0}^d$ denote the idempotent sequence for $\lbrace V_i\rbrace_{i=0}^d$. Then $\lbrace \tilde E_i\rbrace_{i=0}^d$ is the idempotent sequence for $\lbrace V'_i\rbrace_{i=0}^d$. Recall that the adjoint map is an antiisomorphism. So for $0 \leq i,j\leq d$, $E_iAE_j=0$ if and only if $\tilde E_j \tilde A \tilde E_i=0$. Consequently Lemma \ref{lem:LowerRaise}(ii) holds for $\lbrace E_i\rbrace_{i=0}^d$ and $A$, if and only if Lemma \ref{lem:RaiseLower}(ii) holds for $\lbrace \tilde E_i\rbrace_{i=0}^d$ and $\tilde A$. The result now follows in view of Lemmas \ref{lem:LowerRaise}, \ref{lem:RaiseLower}. \\ \noindent (ii) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:Nildual} For $A \in {\rm End}(V)$, $A$ is Nil if and only if $\tilde A$ is Nil. In this case the following flags are dual: \begin{eqnarray} \lbrace A^{d-i}V\rbrace_{i=0}^d, \qquad \qquad \lbrace {\tilde A}^{d-i}V^*\rbrace_{i=0}^d. \label{eq:twoflags} \end{eqnarray} \end{lemma} \begin{proof} The adjoint map is an antiisomorphism. So for $0 \leq i \leq d+1$, $A^i = 0$ if and only if $\tilde A^i= 0$. Therefore, $A$ is Nil if and only if $\tilde A$ is Nil. In this case, the flags (\ref{eq:twoflags}) are dual since for $0 \leq i,j\leq d$ such that $i+j=d-1$, \begin{eqnarray*} (A^{d-i}V,\tilde A^{d-j}V^*) = (A^{d-j} A^{d-i}V,V^*) = (A^{d+1}V,V^*) = (0,V^*) = 0. \end{eqnarray*} \end{proof} \begin{lemma} \label{lem:LRDdual} Let $A,B$ denote an LR pair on $V$. Then the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] The pair $\tilde A, \tilde B$ is an LR pair on $V^*$. \item[\rm (ii)] The $(A,B)$-decomposition of $V$ is dual to the $(\tilde B,\tilde A)$-decomposition of $V^*$. \item[\rm (iii)] The $(B,A)$-decomposition of $V$ is dual to the $(\tilde A,\tilde B)$-decomposition of $V^*$. 
\end{enumerate} \end{lemma} \begin{proof} Let $\lbrace V_i \rbrace_{i=0}^d$ denote the $(A,B)$-decomposition of $V$. Let $\lbrace V'_i \rbrace_{i=0}^d$ denote the dual decomposition of $V^*$. By construction $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. By Lemma \ref{lem:Adual}, $\lbrace V'_i \rbrace_{i=0}^d$ is raised by $\tilde A$ and lowered by $\tilde B$. Therefore, the decomposition $\lbrace V'_{d-i} \rbrace_{i=0}^d$ is lowered by $\tilde A$ and raised by $\tilde B$. The results follow. \end{proof} \begin{lemma} \label{lem:LRdual} Let $A,B$ denote an LR pair on $V$, with idempotent sequence $\lbrace E_i\rbrace_{i=0}^d$. Then the LR pair $\tilde A, \tilde B$ has idempotent sequence $\lbrace \tilde E_{d-i}\rbrace_{i=0}^d$. \end{lemma} \begin{proof} By the assertion above Lemma \ref{lem:Adual}, along with Lemma \ref{lem:LRDdual}(iii). \end{proof} \begin{lemma} \label{lem:LRdualPar} Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$. Then the LR pair $\tilde A, \tilde B$ has parameter sequence $\lbrace \varphi_{d-i+1}\rbrace_{i=1}^d$. \end{lemma} \begin{proof} An element in ${\rm End}(V)$ has the same trace as its adjoint. Let $\lbrace \tilde \varphi_i \rbrace_{i=1}^d$ denote the parameter sequence for $\tilde A, \tilde B$. For $1 \leq i \leq d$ we show that $\tilde \varphi_i = \varphi_{d-i+1}$. By Lemmas \ref{lem:varphiTrace}, \ref{lem:LRdual} and since ${\rm tr}(XY)= {\rm tr}(YX)$, \begin{eqnarray*} \tilde \varphi_i = {\rm tr}(\tilde B \tilde A \tilde E_{d-i}) = {\rm tr}(E_{d-i} A B) = {\rm tr}(AB E_{d-i}) = \varphi_{d-i+1}. \end{eqnarray*} The result follows. \end{proof} \begin{lemma} \label{lem:LRBdual} Let $A,B$ denote an LR pair on $V$. Then the following {\rm (i)--(iv)} hold. \begin{enumerate} \item[\rm (i)] For an $(A,B)$-basis of $V$, its dual is an inverted $(\tilde A,\tilde B)$-basis of $V^*$. 
\item[\rm (ii)] For an inverted $(A,B)$-basis of $V$, its dual is a $(\tilde A,\tilde B)$-basis of $V^*$. \item[\rm (iii)] For a $(B,A)$-basis of $V$, its dual is an inverted $(\tilde B,\tilde A)$-basis of $V^*$. \item[\rm (iv)] For an inverted $(B,A)$-basis of $V$, its dual is a $(\tilde B,\tilde A)$-basis of $V^*$. \end{enumerate} \end{lemma} \begin{proof} (i) Let $\lbrace v_i \rbrace_{i=0}^d$ denote an $(A,B)$-basis of $V$. Let $\lbrace v'_i \rbrace_{i=0}^d$ denote the dual basis of $V^*$. With respect to $\lbrace v_i \rbrace_{i=0}^d$ the matrices representing $A,B$ are given in (\ref{eq:ABrep}). For these matrices the transpose represents $\tilde A, \tilde B$ with respect to $\lbrace v'_i \rbrace_{i=0}^d$. Applying Lemma \ref{lem:invABmatrix} to the LR pair $\tilde A,\tilde B$ and using Lemma \ref{lem:LRdualPar}, we see that $\lbrace v'_i \rbrace_{i=0}^d$ is an inverted $(\tilde A, \tilde B)$-basis of $V^*$. \\ \noindent (ii)--(iv) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:LRDiso} A given LR pair $A,B$ on $V$ is isomorphic to the LR pair $\tilde B,\tilde A$ on $V^*$. \end{lemma} \begin{proof} Let $\lbrace \varphi_i\rbrace_{i=1}^d$ denote the parameter sequence of $A,B$. By Lemma \ref{lem:BAvAB} the LR pair $B,A$ has parameter sequence $\lbrace \varphi_{d-i+1}\rbrace_{i=1}^d$. Now by Lemma \ref{lem:LRdualPar} the LR pair $\tilde B,\tilde A$ has parameter sequence $\lbrace \varphi_{i}\rbrace_{i=1}^d$. The LR pairs $A,B$ and $\tilde B,\tilde A$ have the same parameter sequence, so they are isomorphic by Proposition \ref{prop:LRpairClass}. \end{proof} \section{The reflector $\dagger$} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B$ denote an LR pair on $V$. We discuss a certain antiautomorphism $\dagger$ of ${\rm End}(V)$ called the reflector for $A,B$. 
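\noindent As a concrete illustration (a numeric sketch under our own assumptions, not part of the formal development), take the Weyl-type LR pair over ${\rm GF}(5)$ with $d=4$, written in the basis $v_i = B^i\eta$ from the proof of Lemma \ref{lem:WeylChar}, so that $A$ has superdiagonal entries $\varphi_i = i$ and $B$ has subdiagonal entries $1$. One antiautomorphism swapping $A$ and $B$ is $X \mapsto S^{-1}X^{\mathsf T}S$ for a suitable diagonal matrix $S$ (the matrix $S$ below is our own construction for this particular pair).

```python
import numpy as np

p = 5                        # d + 1 = 5 is prime; work over GF(5)
d = p - 1
phi = list(range(1, d + 1))  # Weyl-type parameters: phi_i = i

# LR pair in the basis v_i = B^i eta:
# A lowers with A v_i = phi_i v_{i-1}, B raises with B v_{i-1} = v_i.
A = np.zeros((d + 1, d + 1), dtype=int)
B = np.zeros((d + 1, d + 1), dtype=int)
for i in range(1, d + 1):
    A[i - 1, i] = phi[i - 1]
    B[i, i - 1] = 1
# Weyl relation AB - BA = I holds mod p.
assert np.array_equal((A @ B - B @ A) % p, np.eye(d + 1, dtype=int))

# Candidate reflector: X |-> S^{-1} X^T S, with S = diag(phi_1 ... phi_i).
s = [1]
for i in range(1, d + 1):
    s.append(s[-1] * phi[i - 1] % p)
S = np.diag(s)
S_inv = np.diag([pow(x, p - 2, p) for x in s])  # inverses mod p (Fermat)

def dagger(X):
    return S_inv @ X.T @ S % p

# dagger swaps A and B ...
assert np.array_equal(dagger(A), B)
assert np.array_equal(dagger(B), A)
# ... and is an antiautomorphism and an involution on random elements:
rng = np.random.default_rng(1)
X, Y = (rng.integers(0, p, size=(d + 1, d + 1)) for _ in range(2))
assert np.array_equal(dagger(X @ Y % p), dagger(Y) @ dagger(X) % p)
assert np.array_equal(dagger(dagger(X)), X % p)
```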
\begin{proposition} \label{prop:antiaut} There exists a unique antiautomorphism $\dagger$ of ${\rm End}(V)$ such that $A^\dagger = B$ and $B^\dagger=A$. Moreover $(X^{\dagger})^{\dagger} = X$ for all $X \in {\rm End}(V)$. \end{proposition} \begin{proof} We first show that $\dagger$ exists. By Lemma \ref{lem:LRDiso} there exists an isomorphism $\sigma$ of LR pairs from $A,B$ to $\tilde B,\tilde A$. Thus $\sigma:V\to V^*$ is an $\mathbb F$-linear bijection such that $\sigma A = \tilde B \sigma$ and $\sigma B = \tilde A \sigma$. By construction the map ${\rm End}(V)\to {\rm End}(V^*)$, $X\mapsto \sigma X\sigma^{-1}$ is an $\mathbb F$-algebra isomorphism that sends $A\mapsto \tilde B$ and $B\mapsto \tilde A$. Recall that the adjoint map ${\rm End}(V)\to {\rm End}(V^*)$, $X\mapsto \tilde X$ is an $\mathbb F$-algebra antiisomorphism. By these comments the composition \begin{equation*} \dagger:\quad \begin{CD} {\rm End}(V) @>> {\rm adj} > {\rm End}(V^*) @>> X\mapsto \sigma^{-1}X\sigma > {\rm End}(V) \end{CD} \end{equation*} is an antiautomorphism of ${\rm End}(V)$ such that $A^\dagger=B$ and $B^\dagger=A$. We have shown that $\dagger$ exists. We now show that $\dagger$ is unique. Let $\mu$ denote an antiautomorphism of ${\rm End}(V)$ such that $A^\mu=B$ and $B^\mu=A$. We show that $\dagger = \mu$. The composition \begin{equation} \label{eq:compdt} \begin{CD} {\rm End}(V) @>> \dagger > {\rm End}(V) @>> \mu^{-1} > {\rm End}(V) \end{CD} \end{equation} is an $\mathbb F$-algebra isomorphism that fixes each of $A,B$. By this and Corollary \ref{cor:ABgen}, the map (\ref{eq:compdt}) fixes everything in ${\rm End}(V)$ and is therefore the identity map. Consequently $\dagger = \mu$. We have shown that $\dagger$ is unique. To obtain the last assertion of the proposition, note that $\dagger^{-1}$ is an antiautomorphism of ${\rm End}(V)$ that sends $A\leftrightarrow B$. Therefore $\dagger = \dagger^{-1}$ by the uniqueness of $\dagger$.
Consequently $(X^\dagger)^{\dagger}=X$ for all $X \in {\rm End}(V)$. \end{proof} \begin{definition}\rm \label{def:REFdag} By the {\it reflector} for $A,B$ (or the {\it $(A,B)$-reflector}) we mean the antiautomorphism $\dagger$ from Proposition \ref{prop:antiaut}. \end{definition} \begin{lemma} \label{lem:trivDag} Assume that $A,B$ is trivial. Then the $(A,B)$-reflector fixes everything in ${\rm End}(V)$. \end{lemma} \begin{proof} By assumption $d=0$, so the identity $I$ is a basis for the $\mathbb F$-vector space ${\rm End}(V)$. The $(A,B)$-reflector is $\mathbb F$-linear and fixes $I$. The result follows. \end{proof} \begin{lemma} \label{lem:daggerE} Let $\lbrace E_i \rbrace_{i=0}^d$ denote the idempotent sequence for $A,B$. Then the $(A,B)$-reflector fixes $E_i$ for $0 \leq i \leq d$. \end{lemma} \begin{proof} Referring to (\ref{eq:TwoE}), for the equation on the left apply $\dagger$ to each side and evaluate the result using the equation on the right. \end{proof} \begin{lemma} The $(A,B)$-reflector is the same as the $(B,A)$-reflector. \end{lemma} \begin{proof} By Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. \end{proof} \begin{lemma} Let $\tilde \dagger$ denote the reflector for the LR pair $\tilde A, \tilde B$. Then the following diagram commutes: \begin{equation*} \begin{CD} {\rm End}(V) @>{\rm adj}> > {\rm End}(V^*) \\ @V\dagger VV @VV\tilde \dagger V \\ {\rm End}(V) @>>{\rm adj}> {\rm End}(V^*) \end{CD} \end{equation*} \end{lemma} \begin{proof} Recall from Corollary \ref{cor:ABgen} that ${\rm End}(V)$ is generated by $A,B$. Chase $A$ and $B$ around the diagram, using Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. The result follows. \end{proof} \section{The inverter $\Psi$} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. 
Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$ and idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. We discuss a map $\Psi \in {\rm End}(V)$ called the inverter for $A,B$. The name is motivated by Proposition \ref{prop:invmap} below. \begin{definition} \label{def:REF} \rm Define \begin{eqnarray} \Psi = \sum_{i=0}^d \frac{\varphi_1 \varphi_2 \cdots \varphi_i}{\varphi_d \varphi_{d-1}\cdots \varphi_{d-i+1}} E_i. \label{eq:REF} \end{eqnarray} We call $\Psi$ the {\it inverter} for $A,B$ or the {\it $(A,B)$-inverter}. \end{definition} \begin{lemma} \label{lem:InvTriv} Assume that $A,B$ is trivial. Then $\Psi = I$. \end{lemma} \begin{proof} In Definition \ref{def:REF} set $d=0$ and note that $E_0=I$. \end{proof} \begin{lemma} \label{lem:refInv} The map $\Psi$ is invertible, and \begin{eqnarray} \Psi^{-1} = \sum_{i=0}^d \frac{\varphi_d \varphi_{d-1} \cdots \varphi_{d-i+1}}{\varphi_1 \varphi_{2}\cdots \varphi_{i}} E_i. \label{eq:REFinv} \end{eqnarray} \end{lemma} \begin{proof} Use the fact that $E_i E_j = \delta_{i,j}E_i$ for $0 \leq i,j\leq d$ and $I = \sum_{i=0}^d E_i$. \end{proof} \begin{lemma} \label{lem:coefDup} Referring to the sum {\rm (\ref{eq:REF})}, for $0 \leq i \leq d$ the coefficients of $E_i$ and $E_{d-i}$ are the same; in other words \begin{eqnarray} \label{eq:CoefSame} \frac{\varphi_1 \varphi_2 \cdots \varphi_i} {\varphi_d\varphi_{d-1}\cdots \varphi_{d-i+1}} = \frac{\varphi_1 \varphi_2 \cdots \varphi_{d-i}} {\varphi_d\varphi_{d-1}\cdots \varphi_{i+1}}. \end{eqnarray} \end{lemma} \begin{proof} Line (\ref{eq:CoefSame}) is readily checked. \end{proof} \begin{lemma} \label{lem:refCom} For $0 \leq i \leq d$, \begin{eqnarray} \Psi E_i = E_i \Psi = \frac{\varphi_1 \varphi_2 \cdots \varphi_i} {\varphi_d\varphi_{d-1}\cdots \varphi_{d-i+1}} E_i. \label{eq:Psicom} \end{eqnarray} \end{lemma} \begin{proof} Use Definition \ref{def:REF}. 
\end{proof} \begin{corollary} \label{cor:PsiAct} For $0 \leq i \leq d$ the following hold on $E_iV$: \begin{eqnarray} \label{eq:PsionEiV} \Psi = \frac{\varphi_1 \varphi_2 \cdots \varphi_i} {\varphi_d\varphi_{d-1}\cdots \varphi_{d-i+1}} I, \qquad \qquad \Psi^{-1} = \frac{\varphi_d \varphi_{d-1} \cdots \varphi_{d-i+1}} {\varphi_1\varphi_2\cdots \varphi_{i}} I. \end{eqnarray} \end{corollary} \begin{proof} Use Lemma \ref{lem:refCom}. \end{proof} \begin{lemma} \label{lem:decompstab} The map $\Psi$ fixes the $(A,B)$-decomposition of $V$ and the $(B,A)$-decomposition of $V$. \end{lemma} \begin{proof} The sequence $\lbrace E_iV\rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$. The sequence $\lbrace E_{d-i}V\rbrace_{i=0}^d$ is the $(B,A)$-decomposition of $V$. By Corollary \ref{cor:PsiAct}, $\Psi E_iV=E_iV$ for $0 \leq i \leq d$. The result follows. \end{proof} \begin{lemma} \label{lem:flagstab} The map $\Psi$ fixes each of the following flags: \begin{eqnarray*} \lbrace A^{d-i}V\rbrace_{i=0}^d, \qquad \qquad \lbrace B^{d-i}V\rbrace_{i=0}^d. \end{eqnarray*} \end{lemma} \begin{proof} The flag on the left (resp. right) is induced by the $(A,B)$-decomposition (resp. $(B,A)$-decomposition) of $V$. The result follows in view of Lemma \ref{lem:decompstab}. \end{proof} \begin{lemma} The map $\Psi$ commutes with $AB$ and $BA$. \end{lemma} \begin{proof} By Lemma \ref{lem:AiBione}, $E_i$ commutes with $AB$ and $BA$ for $0 \leq i \leq d$. The result follows in view of Definition \ref{def:REF}. \end{proof} \begin{lemma} \label{lem:psiFix} The map $\Psi$ is fixed by the $(A,B)$-reflector $\dagger$ from Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. \end{lemma} \begin{proof} By Lemma \ref{lem:daggerE} and Definition \ref{def:REF}. \end{proof} \begin{lemma} \label{lem:ABBA} The following maps are inverse: \begin{enumerate} \item[\rm (i)] the inverter for the LR pair $A,B$; \item[\rm (ii)] the inverter for the LR pair $B,A$. 
\end{enumerate} \end{lemma} \begin{proof} Use Lemmas \ref{lem:Ebackward}, \ref{lem:BAvAB} along with Lemma \ref{lem:coefDup}. \end{proof} \begin{lemma} For nonzero $\alpha, \beta$ in $\mathbb F$ the following maps are the same: \begin{enumerate} \item[\rm (i)] the inverter for the LR pair $A,B$; \item[\rm (ii)] the inverter for the LR pair $\alpha A, \beta B$. \end{enumerate} \end{lemma} \begin{proof} Use Lemmas \ref{lem:alphaBeta}, \ref{lem:alphaBetacom}. \end{proof} \begin{lemma} \label{lem:reflectAdj} The following maps are inverse: \begin{enumerate} \item[\rm (i)] the inverter for the LR pair $\tilde A, \tilde B$; \item[\rm (ii)] the adjoint of the inverter for the LR pair $A,B$. \end{enumerate} \end{lemma} \begin{proof} Use Lemmas \ref{lem:LRdual}, \ref{lem:LRdualPar} along with Lemmas \ref{lem:refInv}, \ref{lem:coefDup}. \end{proof} \noindent We turn our attention to the maps $\Psi A \Psi^{-1}$ and $\Psi^{-1}B\Psi$. We first consider how these maps act on $E_iV$ for $0 \leq i \leq d$. \begin{lemma} \label{lem:PsiABact} The following {\rm (i), (ii)} hold. \begin{enumerate} \item[\rm (i)] $\Psi A \Psi^{-1}$ is zero on $E_0V$. Moreover for $1 \leq i \leq d$ and on $E_iV$, \begin{eqnarray} \Psi A \Psi^{-1} = \frac{\varphi_{d-i+1}}{\varphi_i} A. \label{eq:A1} \end{eqnarray} \item[\rm (ii)] $\Psi^{-1} B \Psi$ is zero on $E_dV$. Moreover for $0 \leq i \leq d-1$ and on $E_iV$, \begin{eqnarray} \Psi^{-1} B \Psi = \frac{\varphi_{d-i}}{\varphi_{i+1}} B. \label{eq:B1} \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} The decomposition $\lbrace E_iV\rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. The results follow from this and Corollary \ref{cor:PsiAct}. \end{proof} \begin{corollary} The $(A,B)$-decomposition of $V$ is lowered by $\Psi A \Psi^{-1}$ and raised by $\Psi^{-1}B\Psi$. \end{corollary} \begin{proof} By Lemma \ref{lem:PsiABact}, and since the $(A,B)$-decomposition of $V$ is equal to $\lbrace E_iV\rbrace_{i=0}^d$. 
\end{proof} \begin{lemma} \label{lem:PsiTable} In the table below, we give the matrices that represent $\Psi A \Psi^{-1} $ and $\Psi^{-1}B \Psi$ with respect to various bases for $V$. {\small \centerline{ \begin{tabular}[t]{c|cc} {\rm type of basis} & {\rm matrix rep. of $\Psi A \Psi^{-1}$} & {\rm matrix rep. of $\Psi^{-1} B \Psi$} \\ \hline $(A,B)$ & $ \left( \begin{array}{ c c cc c c } 0 & \varphi_d/\varphi_1 & && & \bf 0 \\ & 0 & \varphi_{d-1}/\varphi_2 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1/\varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right) $ & $ \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_d & 0 & && & \\ & \varphi_{d-1} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1 & 0 \\ \end{array} \right) $ \\ \\ {\rm inv. $(A,B)$} & $ \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1/\varphi_d & 0 & && & \\ & \varphi_2/\varphi_{d-1} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d/ \varphi_1 & 0 \\ \end{array} \right) $ & $ \left( \begin{array}{ c c cc c c } 0 & \varphi_1 & && & \bf 0 \\ & 0 & \varphi_{2} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right) $ \\ \\ {\rm $(B,A)$} & $ \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1 & 0 & && & \\ & \varphi_{2} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right) $&$ \left( \begin{array}{ c c cc c c } 0 & \varphi_1/\varphi_d & && & \bf 0 \\ & 0 & \varphi_2/\varphi_{d-1} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d/\varphi_1 \\ {\bf 0} && & & & 0 \\ \end{array} \right) $ \\ \\ {\rm inv. 
$(B,A)$} & $ \left( \begin{array}{ c c cc c c } 0 & \varphi_d & && & \bf 0 \\ & 0 & \varphi_{d-1} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1 \\ {\bf 0} && & & & 0 \\ \end{array} \right) $ & $ \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_d/\varphi_1 & 0 & && & \\ & \varphi_{d-1}/\varphi_2 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1/\varphi_d & 0 \\ \end{array} \right) $ \end{tabular}} } \end{lemma} \begin{proof} Use Lemmas \ref{lem:ABmatrix}, \ref{lem:invABmatrix}, \ref{lem:BAbasisMat}, \ref{lem:BAmatrixInv} along with Lemma \ref{lem:PsiABact}. \end{proof} \noindent Recall the reflector antiautomorphism $\dagger$ from Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. \begin{lemma} \label{lem:newLRP} The following {\rm (i)--(vii)} hold: \begin{enumerate} \item[\rm (i)] the ordered pair $A,\Psi^{-1}B\Psi$ is an LR pair on $V$; \item[\rm (ii)] the $(A,\Psi^{-1}B\Psi)$-decomposition of $V$ is equal to the $(A,B)$-decomposition of $V$; \item[\rm (iii)] the idempotent sequence of $A,\Psi^{-1}B\Psi$ is equal to the idempotent sequence of $A,B$; \item[\rm (iv)] an $(A,\Psi^{-1}B\Psi)$-basis of $V$ is the same thing as an $(A,B)$-basis of $V$; \item[\rm (v)] the LR pair $A,\Psi^{-1}B\Psi$ has parameter sequence $\lbrace\varphi_{d-i+1} \rbrace_{i=1}^d$; \item[\rm (vi)] for the LR pair $A,\Psi^{-1}B\Psi $ the reflector is the composition \begin{equation*} \begin{CD} {\rm End}(V) @>> {\dagger} > {\rm End}(V) @>> X\mapsto \Psi^{-1}X\Psi > {\rm End}(V); \end{CD} \end{equation*} \item[\rm (vii)] for the LR pair $A,\Psi^{-1}B\Psi$ the inverter is $\Psi^{-1}$. \end{enumerate} \end{lemma} \begin{proof} (i), (ii) The $(A,B)$-decomposition of $V$ is lowered by $A$ and raised by $\Psi^{-1}B\Psi$. \\ \noindent (iii) By (ii) above and Definition \ref{def:ABE}. \\ \noindent (iv) By (ii) above and Definition \ref{def:ABbasis}. 
\\ \noindent (v) Consider the matrices that represent $A$ and $\Psi^{-1}B \Psi$ with respect to an $(A,B)$-basis of $V$. For $A$ this matrix is given in Lemma \ref{lem:ABmatrix}. For $\Psi^{-1} B\Psi $ this matrix is given in Lemma \ref{lem:PsiTable}. \\ \noindent (vi) Use Lemma \ref{lem:psiFix}. \\ \noindent (vii) By (iii), (v) above and line (\ref{eq:REFinv}). \end{proof} \begin{lemma} \label{lem:five2} The following {\rm (i)--(vii)} hold: \begin{enumerate} \item[\rm (i)] the ordered pair $\Psi A\Psi^{-1},B$ is an LR pair on $V$; \item[\rm (ii)] the $(\Psi A\Psi^{-1},B)$-decomposition of $V$ is equal to the $(A,B)$-decomposition of $V$; \item[\rm (iii)] the idempotent sequence of $\Psi A\Psi^{-1}, B$ is equal to the idempotent sequence of $A,B$; \item[\rm (iv)] a $(B,\Psi A\Psi^{-1})$-basis of $V$ is the same thing as a $(B,A)$-basis of $V$; \item[\rm (v)] the LR pair $\Psi A\Psi^{-1},B$ has parameter sequence $\lbrace\varphi_{d-i+1} \rbrace_{i=1}^d$; \item[\rm (vi)] for the LR pair $\Psi A\Psi^{-1},B$ the reflector is the composition \begin{equation*} \begin{CD} {\rm End}(V) @>> {\dagger} > {\rm End}(V) @>> X\mapsto \Psi X \Psi^{-1} > {\rm End}(V); \end{CD} \end{equation*} \item[\rm (vii)] for the LR pair $\Psi A\Psi^{-1},B$ the inverter is $\Psi^{-1}$. \end{enumerate} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:newLRP}. \end{proof} \begin{proposition} \label{prop:invmap} The following three LR pairs are mutually isomorphic: \begin{eqnarray} A, \Psi^{-1}B\Psi \qquad \qquad B,A \qquad \qquad \Psi A \Psi^{-1}, B. \label{eq:threeLRP} \end{eqnarray} \end{proposition} \begin{proof} The LR pairs (\ref{eq:threeLRP}) have the same parameter sequence by Lemmas \ref{lem:BAvAB}, \ref{lem:newLRP}(v), \ref{lem:five2}(v). The result follows in view of Proposition \ref{prop:LRpairClass}. 
\end{proof} \begin{lemma} For $\sigma \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $A,\Psi^{-1}B\Psi$ to $B,A$; \item[\rm (ii)] $\sigma$ is an isomorphism of LR pairs from $B,A$ to $\Psi A \Psi^{-1},B$; \item[\rm (iii)] $\sigma$ sends each $(A,B)$-basis of $V$ to a $(B,A)$-basis of $V$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (iii)}$ The map $\sigma$ sends each $(A,\Psi^{-1}B\Psi)$-basis of $V$ to a $(B,A)$-basis of $V$. Also by Lemma \ref{lem:newLRP}(iv), an $(A,\Psi^{-1}B\Psi)$-basis of $V$ is the same thing as an $(A,B)$-basis of $V$. \\ \noindent ${\rm (iii)}\Rightarrow {\rm (i)}$ The matrix that represents $A$ (resp. $\Psi^{-1}B\Psi$) with respect to an $(A,B)$-basis of $V$ is equal to the matrix that represents $B$ (resp. $A$) with respect to a $(B,A)$-basis of $V$. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (iii)}$ The map $\sigma$ is an isomorphism of LR pairs from $A,B$ to $B,\Psi A \Psi^{-1}$. So $\sigma$ sends each $(A,B)$-basis of $V$ to a $(B,\Psi A \Psi^{-1})$-basis of $V$. Also by Lemma \ref{lem:five2}(iv), a $(B,\Psi A \Psi^{-1})$-basis of $V$ is the same thing as a $(B,A)$-basis of $V$. \\ \noindent ${\rm (iii)}\Rightarrow {\rm (ii)}$ The matrix that represents $B$ (resp. $A$) with respect to an $(A,B)$-basis of $V$ is equal to the matrix that represents $\Psi A\Psi^{-1}$ (resp. $B$) with respect to a $(B,A)$-basis of $V$. \end{proof} \begin{lemma} For $\sigma \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $A,\Psi^{-1}B\Psi$ to $\Psi A \Psi^{-1},B$; \item[\rm (ii)] there exists $0 \not=\zeta \in \mathbb F$ such that $\sigma=\zeta \Psi$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Using Definition \ref{def:isoLRP}, we find that $\Psi^{-1}\sigma$ commutes with each of $A,\Psi^{-1}B\Psi$.
Now by Lemma \ref{lem:isoFix} (applied to the LR pair $A,\Psi^{-1}B\Psi$) there exists $0 \not=\zeta \in \mathbb F$ such that $\Psi^{-1}\sigma = \zeta I$. Therefore $\sigma=\zeta \Psi$. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ It suffices to show that $\Psi$ is an isomorphism of LR pairs from $A,\Psi^{-1}B\Psi$ to $\Psi A \Psi^{-1},B$. Since $\Psi$ is invertible the map $\Psi:V\to V$ is an $\mathbb F$-linear bijection. Observe that $\Psi A = (\Psi A \Psi^{-1})\Psi$ and $\Psi (\Psi^{-1} B \Psi) = B \Psi$. Now by Definition \ref{def:isoLRP}, $\Psi$ is an isomorphism of LR pairs from $A,\Psi^{-1}B\Psi$ to $\Psi A \Psi^{-1},B$. \end{proof} \begin{lemma} The LR pairs {\rm (\ref{eq:threeLRP})} all have the same inverter. \end{lemma} \begin{proof} By Lemmas \ref{lem:ABBA}, \ref{lem:newLRP}(vii), \ref{lem:five2}(vii). \end{proof} \section{The outer and inner part} \noindent Throughout this section the following assumptions are in effect. We assume that $d=2m$ is even. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$, idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$, and inverter $\Psi$. Note that $\lbrace E_iV\rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$, which is lowered by $A$ and raised by $B$. \begin{definition} \label{def:VoutVin} \rm Define \begin{eqnarray*} V_{\rm out} = \sum_{j=0}^{m} E_{2j}V, \qquad \qquad V_{\rm in} = \sum_{j=0}^{m-1} E_{2j+1}V. \end{eqnarray*} \end{definition} \begin{lemma} \label{lem:LRPV0V1} We have \begin{eqnarray} \label{eq:LRPV0V1} V= V_{\rm out}+V_{\rm in} \qquad \quad {\mbox{\rm (direct sum).}} \end{eqnarray} Moreover \begin{eqnarray*} {\rm dim}(V_{\rm out}) = m+1, \qquad \qquad {\rm dim}(V_{\rm in}) = m. \end{eqnarray*} \end{lemma} \begin{proof} Since $\lbrace E_iV\rbrace_{i=0}^d$ is a decomposition of $V$. \end{proof} \begin{lemma} \label{lem:LRPtrivialT} We have $V_{\rm out}\not=0$ and $V_{\rm in}\not=V$.
Moreover the following are equivalent: {\rm (i)} $A,B$ is trivial; {\rm (ii)} $V_{\rm out}=V$; {\rm (iii)} $V_{\rm in}=0$. \end{lemma} \begin{proof} Use Example \ref{def:triv} and Lemma \ref{lem:LRPV0V1}. \end{proof} \begin{lemma} \label{lem:EoutEin} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] for even $i$ $(0 \leq i \leq d)$, the map $E_i$ leaves $V_{\rm out}$ invariant, and is zero on $V_{\rm in}$; \item[\rm (ii)] for odd $i$ $(0 \leq i \leq d)$, the map $E_i$ leaves $V_{\rm in}$ invariant, and is zero on $V_{\rm out}$; \item[\rm (iii)] each of $V_{\rm out}$, $V_{\rm in}$ is invariant under $\Psi$. \end{enumerate} \end{lemma} \begin{proof} (i), (ii) Use Definition \ref{def:VoutVin}. \\ \noindent (iii) By (i), (ii) above and Definition \ref{def:REF}. \end{proof} \begin{lemma} \label{lem:ABaction} Referring to Definition \ref{def:VoutVin}, \begin{eqnarray*} AV_{\rm out} = V_{\rm in}, \qquad AV_{\rm in} \subseteq V_{\rm out}, \qquad BV_{\rm out} =V_{\rm in}, \qquad BV_{\rm in} \subseteq V_{\rm out}. \end{eqnarray*} Moreover \begin{eqnarray*} A^2V_{\rm out} \subseteq V_{\rm out}, \qquad A^2V_{\rm in} \subseteq V_{\rm in}, \qquad B^2V_{\rm out} \subseteq V_{\rm out}, \qquad B^2V_{\rm in} \subseteq V_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:VoutVin} and the construction. \end{proof} \begin{definition} \label{def:OUTERINNER} \rm Referring to Definition \ref{def:VoutVin}, the subspace $V_{\rm out}$ (resp. $V_{\rm in}$) will be called the {\it outer part} (resp. {\it inner part}) of $V$ with respect to $A,B$. \end{definition} \begin{lemma} \label{lem:InOutswap} The outer part of $V$ with respect to $A,B$ coincides with the outer part of $V$ with respect to $B,A$. Moreover, the inner part of $V$ with respect to $A,B$ coincides with the inner part of $V$ with respect to $B,A$. \end{lemma} \begin{proof} By Lemma \ref{lem:Ebackward} and Definition \ref{def:VoutVin}, along with the assumption that $d$ is even. 
\end{proof} \begin{lemma} \label{lem:ANZ} Assume that $A,B$ is nontrivial. Then $A$ and $B$ are nonzero on both $V_{\rm out}$ and $V_{\rm in}$. \end{lemma} \begin{proof} By Definition \ref{def:VoutVin} and the construction. \end{proof} \begin{definition} \label{def:Ainout} \rm Using the LR pair $A,B$ we define \begin{eqnarray} A_{\rm out}, \qquad A_{\rm in}, \qquad B_{\rm out}, \qquad B_{\rm in} \label{eq:LRP6list} \end{eqnarray} in ${\rm End}(V)$ as follows. The map $A_{\rm out}$ (resp. $B_{\rm out}$) acts on $V_{\rm out}$ as $A$ (resp. $B$), and on $V_{\rm in}$ as zero. The map $A_{\rm in}$ (resp. $B_{\rm in}$) acts on $V_{\rm in}$ as $A$ (resp. $B$), and on $V_{\rm out}$ as zero. By construction \begin{equation*} A = A_{\rm out}+ A_{\rm in}, \qquad \qquad B = B_{\rm out}+ B_{\rm in}. \end{equation*} \end{definition} \begin{lemma} \label{lem:AiAoLI} Assume that $A,B$ is nontrivial. Then \begin{enumerate} \item[\rm (i)] the maps $A_{\rm out}, A_{\rm in}$ are linearly independent over $\mathbb F$; \item[\rm (ii)] the maps $B_{\rm out}, B_{\rm in}$ are linearly independent over $\mathbb F$. \end{enumerate} \end{lemma} \begin{proof} (i) Suppose we are given $r,s \in \mathbb F$ such that $rA_{\rm out}+ sA_{\rm in}=0$. In this equation apply each side to $V_{\rm out}$, to find $rA=0$ on $V_{\rm out}$. By Lemma \ref{lem:ANZ} $A\not=0$ on $V_{\rm out}$. Therefore $r=0$. One similarly shows that $s=0$. \\ \noindent (ii) Similar to the proof of (i) above. \end{proof} \begin{definition} \label{def:PoutPin} Define \begin{eqnarray*} \Psi_{\rm out} = \sum_{j=0}^{m} \frac{\varphi_1 \varphi_2 \cdots \varphi_{2j}} {\varphi_d\varphi_{d-1}\cdots \varphi_{d-2j+1}} E_{2j}, \qquad \qquad \Psi_{\rm in} = \sum_{j=0}^{m-1} \frac{\varphi_2 \varphi_3 \cdots \varphi_{2j+1}} {\varphi_{d-1}\varphi_{d-2}\cdots \varphi_{d-2j}} E_{2j+1}. 
\end{eqnarray*} \end{definition} \begin{lemma} The following {\rm (i)--(iv)} hold: \begin{enumerate} \item[\rm (i)] the subspace $V_{\rm out}$ is invariant under $\Psi_{\rm out}$; \item[\rm (ii)] $\Psi_{\rm out}$ is zero on $V_{\rm in}$; \item[\rm (iii)] the subspace $V_{\rm in}$ is invariant under $\Psi_{\rm in}$; \item[\rm (iv)] $\Psi_{\rm in}$ is zero on $V_{\rm out}$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:EoutEin}(i),(ii) and Definition \ref{def:PoutPin}. \end{proof} \noindent The following two propositions are obtained by routine computation. \begin{proposition} \label{lem:LRPA2B2C2Out} The elements $A^2, B^2$ act on $V_{\rm out}$ as an LR pair. For this LR pair, \begin{enumerate} \item[\rm (i)] the diameter is $m$; \item[\rm (ii)] the parameter sequence is $\lbrace \varphi_{2j-1}\varphi_{2j}\rbrace_{j=1}^m$; \item[\rm (iii)] the idempotent sequence is given by the actions of $ \lbrace E_{2j}\rbrace_{j=0}^m$ on $V_{\rm out}$; \item[\rm (iv)] the inverter is equal to the action of $\Psi_{\rm out}$ on $V_{\rm out}$. \end{enumerate} \end{proposition} \begin{proposition} \label{lem:LRPA2B2C2In} Assume that $A,B$ is nontrivial. Then $A^2, B^2$ act on $V_{\rm in}$ as an LR pair. For this LR pair, \begin{enumerate} \item[\rm (i)] the diameter is $m-1$; \item[\rm (ii)] the parameter sequence is $\lbrace \varphi_{2j}\varphi_{2j+1}\rbrace_{j=1}^{m-1}$; \item[\rm (iii)] the idempotent sequence is given by the actions of $\lbrace E_{2j+1}\rbrace_{j=0}^{m-1}$ on $V_{\rm in}$; \item[\rm (iv)] the inverter is equal to the action of $\Psi_{\rm in}$ on $V_{\rm in}$. \end{enumerate} \end{proposition} \begin{lemma} \label{lem:psiFixOutIn} The maps $\Psi_{\rm out}$, $\Psi_{\rm in}$ are fixed by the $(A,B)$-reflector $\dagger$ from Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. \end{lemma} \begin{proof} By Lemma \ref{lem:daggerE} and Definition \ref{def:PoutPin}. \end{proof} \begin{lemma} Assume that $A,B$ is nontrivial. 
Then \begin{eqnarray*} \Psi = \Psi_{\rm out} + \frac{\varphi_1}{\varphi_{d}} \Psi_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} Compare Definitions \ref{def:REF}, \ref{def:PoutPin}. \end{proof} \begin{definition}\rm We call $\Psi_{\rm out}$ (resp. $\Psi_{\rm in}$) the {\it outer inverter} (resp. {\it inner inverter}) for the LR pair $A,B$. \end{definition} \section{The projector $J$} \noindent Throughout this section the following assumptions are in effect. We assume that $d=2m$ is even. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$, idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$, and inverter $\Psi$. Recall the subspaces $V_{\rm out}$ and $V_{\rm in}$ from Definition \ref{def:VoutVin}. \begin{definition} \label{def:J} \rm Define $J \in {\rm End}(V)$ such that $(J-I)V_{\rm out} = 0$ and $J V_{\rm in} = 0$. Referring to (\ref{eq:LRPV0V1}), the map $J$ (resp. $I-J$) acts as the projection from $V$ onto $V_{\rm out}$ (resp. $V_{\rm in}$). We call $J$ (resp. $I-J$) the {\it outer projector} (resp. {\it inner projector}) for the LR pair $A,B$. By the {\it projector} for $A,B$ we mean the outer projector. \end{definition} \begin{lemma} \label{lem:Jtriv} The map $J\not=0$. If $A,B$ is trivial then $J=I$. If $A,B$ is nontrivial then $J, I$ are linearly independent over $\mathbb F$. \end{lemma} \begin{proof} Use (\ref{eq:LRPV0V1}) and Lemma \ref{lem:LRPtrivialT}. \end{proof} \begin{lemma} \label{lem:Jfacts} The following {\rm (i)--(v)} hold: \begin{enumerate} \item[\rm (i)] $J= \sum_{j=0}^{d/2} E_{2j}$; \item[\rm (ii)] $J^2=J$; \item[\rm (iii)] for even $i$ $(0 \leq i \leq d)$, $E_iJ=JE_i=E_i$; \item[\rm (iv)] for odd $i$ $(0 \leq i \leq d)$, $E_iJ=JE_i=0$; \item[\rm (v)] $V_{\rm out} = JV$ and $V_{\rm in} = (I-J)V$. \end{enumerate} \end{lemma} \begin{proof} (i) For the given equation the two sides agree on $E_iV$ for $0 \leq i \leq d$. 
\\ \noindent (ii)--(iv) Use (i) above and $E_rE_s = \delta_{r,s}E_r $ for $0 \leq r,s\leq d$. \\ \noindent (v) By Definition \ref{def:J}. \end{proof} \begin{lemma} \label{lem:rankandtrace} For the map $J$ (resp. $I-J$) the rank and trace are equal to $m+1$ (resp. $m$). \end{lemma} \begin{proof} By Lemma \ref{lem:LRPV0V1}, Definition \ref{def:J}, and linear algebra. \end{proof} \begin{lemma} \label{lem:JRef} The map $J$ is fixed by the $(A,B)$-reflector $\dagger$ from Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. \end{lemma} \begin{proof} By Lemmas \ref{lem:daggerE}, \ref{lem:Jfacts}(i). \end{proof} \begin{lemma} The following maps are the same: \begin{enumerate} \item[\rm (i)] the projector for the LR pair $A,B$; \item[\rm (ii)] the projector for the LR pair $B,A$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:InOutswap} and Definition \ref{def:J}. \end{proof} \begin{lemma} For nonzero $\alpha, \beta \in \mathbb F$ the following maps are the same: \begin{enumerate} \item[\rm (i)] the projector for the LR pair $A,B$; \item[\rm (ii)] the projector for the LR pair $\alpha A,\beta B$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:alphaBeta} and Lemma \ref{lem:Jfacts}(i). \end{proof} \begin{lemma} The following maps are the same: \begin{enumerate} \item[\rm (i)] the projector for the LR pair $\tilde A,\tilde B$; \item[\rm (ii)] the adjoint of the projector for $A,B$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:LRdual} and Lemma \ref{lem:Jfacts}(i). \end{proof} \begin{lemma} \label{lem:INOutFacts} Referring to Definition \ref{def:Ainout} the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $A_{\rm out} = AJ=(I-J)A$ and $B_{\rm out} = BJ=(I-J)B$; \item[\rm (ii)] $A_{\rm in} = JA = A(I-J)$ and $B_{\rm in} = JB = B(I-J)$; \item[\rm (iii)] $A = AJ+JA$ and $B = BJ+JB$. \end{enumerate} \end{lemma} \begin{proof} (i), (ii) For each given equation the two sides agree on $V_{\rm out}$ and $V_{\rm in}$. 
\\ \noindent (iii) By (i) above. \end{proof} \begin{lemma} $J$ commutes with each of $A^2, B^2, AB, BA$. \end{lemma} \begin{proof} Use Lemma \ref{lem:INOutFacts}(iii). \end{proof} \begin{lemma} Referring to Definition \ref{def:PoutPin} the following {\rm (i), (ii)} hold. \begin{enumerate} \item[\rm (i)] $ \Psi_{\rm out} = J \Psi = \Psi J$. \item[\rm (ii)] For $A,B$ nontrivial, \begin{eqnarray*} \frac{\varphi_1}{\varphi_d}\, \Psi_{\rm in} = (I-J) \Psi = \Psi (I-J). \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} Use Definitions \ref{def:REF}, \ref{def:PoutPin}, \ref{def:J}. \end{proof} \begin{lemma} \label{lem:sigmaJ} Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B'$ denote an LR pair on $V'$. Let $J'$ denote the projector for $A',B'$. Let $\sigma $ denote an isomorphism of LR pairs from $A,B$ to $A',B'$. Then $\sigma J = J' \sigma$. \end{lemma} \begin{proof} By Lemma \ref{lem:isoMove} and Lemma \ref{lem:Jfacts}(i). \end{proof} \section{Similarity and bisimilarity} In this section we describe two equivalence relations for LR pairs, called similarity and bisimilarity. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. \begin{definition}\rm Let $A,B$ and $A',B'$ denote LR pairs on $V$. These LR pairs will be called {\it associates} whenever there exist nonzero $\alpha, \beta $ in $ \mathbb F$ such that $A'=\alpha A$ and $B'=\beta B$. The associate relation is an equivalence relation. \end{definition} \begin{lemma} \label{lem:twoview} Let $A,B$ denote an LR pair on $V$. Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B'$ denote an LR pair on $V'$. Let $\sigma :V\to V'$ denote an $\mathbb F$-linear bijection.
Then for nonzero $\alpha, \beta $ in $\mathbb F$ the following {\rm (i)--(iii)} are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $\alpha A,\beta B$ to $A',B'$; \item[\rm (ii)] $\sigma$ is an isomorphism of LR pairs from $A,B$ to $A'/\alpha,B'/\beta$; \item[\rm (iii)] $\alpha \sigma A = A' \sigma$ and $\beta \sigma B = B' \sigma$. \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:isoLRP}, assertions (i), (iii) are equivalent and assertions (ii), (iii) are equivalent. \end{proof} \begin{lemma} \label{lem:assocIso} Let $A,B$ and $A',B'$ denote LR pairs over $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] there exists an LR pair over $\mathbb F$ that is associate to $A,B$ and isomorphic to $A',B'$; \item[\rm (ii)] there exists an LR pair over $\mathbb F$ that is isomorphic to $A,B$ and associate to $A',B'$. \end{enumerate} \end{lemma} \begin{proof} Pick nonzero $\alpha, \beta $ in $\mathbb F$. By Lemma \ref{lem:twoview}(i),(ii) the LR pair $\alpha A, \beta B$ satisfies condition (i) in the present lemma if and only if the LR pair $A'/\alpha, B'/\beta$ satisfies condition (ii) in the present lemma. The result follows. \end{proof} \begin{definition} \label{def:SIM} \rm Let $A,B$ and $A',B'$ denote LR pairs over $\mathbb F$. These LR pairs will be called {\it similar} whenever they satisfy the equivalent conditions {\rm (i), (ii)} in Lemma \ref{lem:assocIso}. Similarity is an equivalence relation. \end{definition} \begin{lemma} Let $A,B$ (resp. $A',B'$) denote an LR pair over $\mathbb F$, with parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$ (resp. $\lbrace \varphi'_i \rbrace_{i=1}^d$). Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR pairs $A,B$ and $A',B'$ are similar; \item[\rm (ii)] the ratio $\varphi'_i /\varphi_i$ is independent of $i$ for $1 \leq i \leq d$. 
\end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By Lemma \ref{lem:assocIso} and Definition \ref{def:SIM}, there exist nonzero $\alpha, \beta$ in $\mathbb F$ such that $\alpha A, \beta B$ is isomorphic to $A',B'$. Recall from Lemma \ref{lem:alphaBetacom} that $\alpha A, \beta B$ has parameter sequence $\lbrace \alpha \beta \varphi_i \rbrace_{i=1}^d$. Now by Proposition \ref{prop:LRpairClass}, $\varphi'_i = \alpha \beta \varphi_i$ for $1 \leq i \leq d$. Therefore $\varphi'_i /\varphi_i$ is independent of $i$ for $1 \leq i \leq d$. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ Let $\alpha$ denote the common value of $\varphi'_i /\varphi_i$ for $1 \leq i \leq d$. By Lemma \ref{lem:alphaBetacom} and the construction, the LR pairs $\alpha A,B$ and $A',B'$ have the same parameter sequence $\lbrace \varphi'_i\rbrace_{i=1}^d$. Therefore they are isomorphic by Proposition \ref{prop:LRpairClass}. Now the LR pairs $A,B$ and $A',B'$ are similar by Lemma \ref{lem:assocIso} and Definition \ref{def:SIM}. \end{proof} \noindent We now describe the bisimilarity relation. For the rest of this section, assume that $d=2m$ is even. Until further notice let $A,B$ denote an LR pair on $V$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$ and idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. Recall that $\lbrace E_iV\rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$, which is lowered by $A$ and raised by $B$. \begin{lemma} \label{lem:iso2view} Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B'$ denote an LR pair on $V'$. Let $\sigma :V\to V'$ denote an $\mathbb F$-linear bijection.
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $A,B$ to $A',B'$; \item[\rm (ii)] all of \begin{eqnarray} && \sigma A_{\rm out} = A'_{\rm out} \sigma, \qquad \qquad \sigma A_{\rm in} = A'_{\rm in} \sigma, \label{eq:Aform} \\ && \sigma B_{\rm out} = B'_{\rm out} \sigma, \qquad \qquad \sigma B_{\rm in} = B'_{\rm in} \sigma. \label{eq:Bform} \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Use Lemma \ref{lem:INOutFacts}(i),(ii) and Lemma \ref{lem:sigmaJ}. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ Add the two equations in (\ref{eq:Aform}) and use $A=A_{\rm out} +A_{\rm in}$, $A'=A'_{\rm out} +A'_{\rm in}$ to obtain $\sigma A = A' \sigma$. Similarly we obtain $\sigma B = B' \sigma$. Now by Definition \ref{def:isoLRP} the map $\sigma$ is an isomorphism of LR pairs from $A,B$ to $A',B'$. \end{proof} \begin{lemma} \label{lem:LRPInOut} Let $\alpha_{\rm out}, \alpha_{\rm in}, \beta_{\rm out}, \beta_{\rm in} $ denote nonzero scalars in $\mathbb F$. Then the ordered pair \begin{eqnarray} \label{eq:LRPNewLRT} \alpha_{\rm out}A_{\rm out} +\alpha_{\rm in}A_{\rm in}, \qquad \beta_{\rm out} B_{\rm out}+ \beta_{\rm in}B_{\rm in} \end{eqnarray} is an LR pair on $V$, with idempotent sequence $\lbrace E_i \rbrace_{i=0}^d$. The outer part of $V$ with respect to {\rm (\ref{eq:LRPNewLRT})} coincides with the outer part of $V$ with respect to $A,B$. The inner part of $V$ with respect to {\rm (\ref{eq:LRPNewLRT})} coincides with the inner part of $V$ with respect to $A,B$. The projector for the LR pair {\rm (\ref{eq:LRPNewLRT})} coincides with the projector for $A,B$.
The LR pair {\rm (\ref{eq:LRPNewLRT})} has parameter sequence $\lbrace f_i \varphi_i\rbrace_{i=1}^d$, where \begin{eqnarray*} && f_i = \begin{cases} \alpha_{\rm out}\beta_{\rm in} & {\mbox{\rm if $i$ is even}}; \\ \alpha_{\rm in}\beta_{\rm out} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{lemma} \begin{proof} By construction. \end{proof} \begin{lemma} \label{lem:ApBp} Let $A',B'$ denote an LR pair on $V$. Let $\alpha_{\rm out}, \alpha_{\rm in}, \beta_{\rm out}, \beta_{\rm in} $ denote nonzero scalars in $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] both \begin{eqnarray*} A' = \alpha_{\rm out}A_{\rm out} +\alpha_{\rm in}A_{\rm in}, \qquad \qquad B' = \beta_{\rm out} B_{\rm out}+ \beta_{\rm in}B_{\rm in}; \end{eqnarray*} \item[\rm (ii)] all of \begin{eqnarray*} && A'_{\rm out} = \alpha_{\rm out}A_{\rm out}, \qquad \qquad A'_{\rm in} = \alpha_{\rm in}A_{\rm in}, \\ && B'_{\rm out} = \beta_{\rm out}B_{\rm out}, \qquad \qquad B'_{\rm in} = \beta_{\rm in}B_{\rm in}. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ Use Definition \ref{def:Ainout} and Lemma \ref{lem:LRPInOut}. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ By Definition \ref{def:Ainout} we have $A' = A'_{\rm out} + A'_{\rm in}$ and $B' = B'_{\rm out} + B'_{\rm in}$. \end{proof} \noindent Referring to Lemmas \ref{lem:LRPInOut}, \ref{lem:ApBp}, we now consider the case in which $\alpha_{\rm in}=1$ and $\beta_{\rm in}=1$. \begin{definition}\rm Let $A',B'$ denote an LR pair on $V$. The LR pairs $A,B$ and $A',B'$ will be called {\it biassociates} whenever there exist nonzero $\alpha, \beta$ in $\mathbb F$ such that \begin{eqnarray*} A' = \alpha A_{\rm out} +A_{\rm in}, \qquad \qquad B' = \beta B_{\rm out}+ B_{\rm in}. \end{eqnarray*} The biassociate relation is an equivalence relation.
\end{definition} \begin{lemma} \label{lem:twoviewBisim} Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B'$ denote an LR pair on $V'$. Let $\sigma :V\to V'$ denote an $\mathbb F$-linear bijection. Then for nonzero $\alpha, \beta $ in $\mathbb F$ the following {\rm (i)--(iii)} are equivalent: \begin{enumerate} \item[\rm (i)] $\sigma$ is an isomorphism of LR pairs from $\alpha A_{\rm out}+A_{\rm in}, \beta B_{\rm out}+B_{\rm in}$ to $A',B'$; \item[\rm (ii)] $\sigma$ is an isomorphism of LR pairs from $A,B$ to $\alpha^{-1} A'_{\rm out} + A'_{\rm in}, \beta^{-1} B'_{\rm out} + B'_{\rm in}$; \item[\rm (iii)] all of \begin{eqnarray*} &&\alpha \sigma A_{\rm out} = A'_{\rm out} \sigma, \qquad \qquad \sigma A_{\rm in} = A'_{\rm in} \sigma, \\ && \beta \sigma B_{\rm out} = B'_{\rm out} \sigma, \qquad \qquad \sigma B_{\rm in} = B'_{\rm in} \sigma. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} By Lemmas \ref{lem:iso2view}, \ref{lem:ApBp} the assertions (i), (iii) are equivalent, and the assertions (ii), (iii) are equivalent. \end{proof} \begin{lemma} \label{lem:LRPBiassocIso} Let $A,B$ and $A',B'$ denote LR pairs over $\mathbb F$ that have diameter $d$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] there exists an LR pair over $\mathbb F$ that is biassociate to $A,B$ and isomorphic to $A',B'$; \item[\rm (ii)] there exists an LR pair over $\mathbb F$ that is isomorphic to $A,B$ and biassociate to $A',B'$. \end{enumerate} \end{lemma} \begin{proof} Pick nonzero $\alpha, \beta $ in $\mathbb F$. By Lemma \ref{lem:twoviewBisim}(i),(ii) the LR pair $\alpha A_{\rm out} + A_{\rm in}, \beta B_{\rm out} + B_{\rm in}$ satisfies condition (i) in the present lemma if and only if the LR pair $\alpha^{-1} A'_{\rm out} + A'_{\rm in}, \beta^{-1} B'_{\rm out} + B'_{\rm in}$ satisfies condition (ii) in the present lemma. The result follows. 
\end{proof} \begin{definition} \label{def:BIAS} \rm Let $A,B$ and $A',B'$ denote LR pairs over $\mathbb F$ that have diameter $d$. Then $A,B$ and $A',B'$ will be called {\it bisimilar} whenever the equivalent conditions {\rm (i), (ii)} hold in Lemma \ref{lem:LRPBiassocIso}. \end{definition} \begin{lemma} Let $A,B$ (resp. $A',B'$) denote an LR pair over $\mathbb F$, with parameter sequence $\lbrace \varphi_i\rbrace_{i=1}^d$ (resp. $\lbrace \varphi'_i\rbrace_{i=1}^d$). Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR pairs $A,B$ and $A',B'$ are bisimilar; \item[\rm (ii)] the ratio $\varphi'_i / \varphi_i$ is independent of $i$ for $i$ even $(1 \leq i \leq d)$, and independent of $i$ for $i$ odd $(1 \leq i \leq d)$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By Lemma \ref{lem:LRPBiassocIso} and Definition \ref{def:BIAS}, there exist nonzero $\alpha, \beta $ in $\mathbb F$ such that $\alpha A_{\rm out}+A_{\rm in}, \beta B_{\rm out}+B_{\rm in}$ is isomorphic to $A',B'$. By Lemma \ref{lem:LRPInOut} the LR pair $\alpha A_{\rm out}+A_{\rm in}, \beta B_{\rm out}+B_{\rm in}$ has parameter sequence $\lbrace f_i \varphi_i\rbrace_{i=1}^d$, where \begin{eqnarray*} f_i = \begin{cases} \alpha & {\mbox{\rm if $i$ is even}}; \\ \beta & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} By Proposition \ref{prop:LRpairClass} $\varphi'_i = f_i \varphi_i$ for $1 \leq i \leq d$. By these comments $\varphi'_i / \varphi_i$ is independent of $i$ for $i$ even $(1 \leq i \leq d)$, and independent of $i$ for $i$ odd $(1 \leq i \leq d)$. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ By assumption there exist nonzero $\alpha, \beta$ in $\mathbb F$ such that \begin{eqnarray*} \varphi'_i/\varphi_i = \begin{cases} \alpha & {\mbox{\rm if $i$ is even}}; \\ \beta & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d).
\end{eqnarray*} By Lemma \ref{lem:LRPInOut} and the construction, the LR pairs $\alpha A_{\rm out} + A_{\rm in}, \beta B_{\rm out} + B_{\rm in} $ and $A',B'$ have the same parameter sequence $\lbrace \varphi'_i\rbrace_{i=1}^d$. Therefore they are isomorphic by Proposition \ref{prop:LRpairClass}. Now $A,B$ and $A',B'$ are bisimilar in view of Lemma \ref{lem:LRPBiassocIso} and Definition \ref{def:BIAS}. \end{proof} \section{Constrained sequences} In this section we consider a type of finite sequence, said to be constrained. We classify the constrained sequences. This classification will be used later in the paper. \noindent Throughout this section, $n$ denotes a nonnegative integer and $\lbrace \rho_i \rbrace_{i=0}^n$ denotes a sequence of scalars taken from $\mathbb F$. \begin{definition} \label{def:constrain} \rm The sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is said to be {\it constrained} whenever \begin{enumerate} \item[\rm (i)] $\rho_i \rho_{n-i}=1$ for $0 \leq i \leq n$; \item[\rm (ii)] there exist $a,b,c \in \mathbb F$, not all zero, such that $a\rho_{i-1} + b\rho_i + c \rho_{i+1} = 0 $ for $1 \leq i \leq n-1$. \end{enumerate} \end{definition} \noindent Shortly we will classify the constrained sequences. We will use an inductive argument based on the following observation. \begin{lemma} Assume that $n\geq 2$ and the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained. Then the sequence $\rho_1, \rho_2, \ldots, \rho_{n-1}$ is constrained. \end{lemma} \noindent The sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is called {\it geometric} whenever $\rho_i \not=0$ for $0 \leq i \leq n$ and $\rho_i/\rho_{i-1}$ is independent of $i$ for $1 \leq i \leq n$. The following are equivalent: (i) $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric; (ii) there exist nonzero $r,\xi \in \mathbb F$ such that $\rho_i = \xi r^i$ for $0 \leq i \leq n$. In this case $\xi = \rho_0$ and $r= \rho_i /\rho_{i-1}$ for $1 \leq i \leq n$. \noindent We now classify the constrained sequences.
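Before turning to the classification, we note that the two conditions in Definition \ref{def:constrain} are easy to test directly for a given sequence and a given candidate vector $(a,b,c)$. The following is a minimal illustrative sketch in Python over the rationals (the helper name \texttt{is\_constrained} is ours, not from the text); it only certifies one particular constraint vector, whereas the classification determines them all.

```python
from fractions import Fraction as F

def is_constrained(rho, abc):
    # rho = (rho_0, ..., rho_n); abc = a candidate (a, b, c), not all zero.
    # Checks condition (i): rho_i * rho_{n-i} = 1 for 0 <= i <= n,
    # and condition (ii): a*rho_{i-1} + b*rho_i + c*rho_{i+1} = 0
    # for 1 <= i <= n-1.
    n = len(rho) - 1
    a, b, c = abc
    cond_i = all(rho[i] * rho[n - i] == 1 for i in range(n + 1))
    cond_ii = any(x != 0 for x in abc) and all(
        a * rho[i - 1] + b * rho[i] + c * rho[i + 1] == 0
        for i in range(1, n)
    )
    return cond_i and cond_ii

# Example with n = 4: the geometric sequence 1/4, 1/2, 1, 2, 4,
# together with the candidate constraint (a, b, c) = (2, -1, 0).
rho = [F(2) ** (i - 2) for i in range(5)]
print(is_constrained(rho, (F(2), F(-1), F(0))))  # prints True
```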
The cases of $n$ even and $n$ odd will be treated separately. \begin{proposition} \label{prop:nevenPre} Assume that $n$ is even. Then for the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ the following {\rm (i)--(iii)} are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained; \item[\rm (ii)] $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric and $\rho_{n/2} \in \lbrace 1,-1\rbrace$; \item[\rm (iii)] there exist $0 \not=r \in \mathbb F$ and $\varepsilon \in \lbrace 1,-1\rbrace$ such that $\rho_i = \varepsilon r^{i-n/2}$ for $0 \leq i \leq n$. \end{enumerate} Assume that {\rm (i)--(iii)} hold. Then $r=\rho_i /\rho_{i-1}$ for $1 \leq i \leq n$, and $\varepsilon = \rho_{n/2}$. \end{proposition} \begin{proof} ${\rm (i)}\Rightarrow {\rm (iii)}$ Our proof is by induction on $n$. First assume that $n=0$. Then $\rho^2_0=1$. Condition (iii) holds with $\varepsilon = \rho_0$ and arbitrary $0 \not=r \in \mathbb F$. Next assume that $n=2$. Then $\rho_0\rho_2=1$ and $\rho^2_1=1$. Condition (iii) holds with $\varepsilon=\rho_1$ and $r=\rho_1/\rho_0$. Next assume that $n\geq 4$. By Definition \ref{def:constrain}(ii), there exist $a,b,c\in \mathbb F$, not all zero, such that $a\rho_{i-1}+b\rho_i+c\rho_{i+1}=0$ for $1 \leq i \leq n-1$. Define $m=n-2$ and $\rho'_i=\rho_{i+1}$ for $0 \leq i \leq m$. By construction $\rho'_i \rho'_{m-i}=1$ for $0 \leq i \leq m$. Moreover $a \rho'_{i-1} + b\rho'_i + c \rho'_{i+1} = 0$ for $1 \leq i \leq m-1$. By induction there exist $0 \not=r \in \mathbb F$ and $\varepsilon \in \lbrace 1,-1\rbrace$ such that $\rho'_i = \varepsilon r^{i-m/2}$ for $0 \leq i \leq m$. Now \begin{eqnarray} \label{eq:part} \rho_i = \varepsilon r^{i-n/2} \qquad \qquad (1 \leq i \leq n-1). \end{eqnarray} We show that $\rho_0 = \varepsilon r^{-n/2}$ and $\rho_n = \varepsilon r^{n/2}$. Since $\rho_0 \rho_n = 1$, it suffices to show that $\rho_0 = \varepsilon r^{-n/2}$.
We claim that \begin{equation} \label{eq:matcheck} {\rm det} \left( \begin{array}{ccc} \rho_0 & \rho_1 & \rho_{n-2} \\ \rho_1& \rho_2 & \rho_{n-1} \\ \rho_2 & \rho_3 & \rho_n \end{array} \right) = \frac{-r^2(\rho_0 -\varepsilon r^{-n/2})^2}{\rho_0}. \end{equation} To verify (\ref{eq:matcheck}), evaluate the determinant using (\ref{eq:part}) and $\rho_0 \rho_n = 1$, and simplify the result. The claim is proven. For the matrix in (\ref{eq:matcheck}), $a({\rm top} \;{\rm row})+ b({\rm middle}\; {\rm row})+c({\rm bottom}\; {\rm row})=0$. The matrix is singular, so its determinant is zero. Therefore $\rho_0 = \varepsilon r^{-n/2}$ as desired. \\ ${\rm (iii)}\Rightarrow {\rm (i)}$ By construction $\rho_i \rho_{n-i}=1$ for $0 \leq i \leq n$. Define $a=r$, $b=-1$, $c=0$. Then $a\rho_{i-1} + b\rho_i + c \rho_{i+1}=0$ for $1 \leq i \leq n-1$. \\ ${\rm (ii)}\Leftrightarrow {\rm (iii)}$ Routine. \end{proof} \begin{proposition} \label{prop:noddPre} Assume that $n$ is odd. Then for the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ the following {\rm (i)--(iii)} are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained; \item[\rm (ii)] the sequences $\rho_0, \rho_2, \ldots, \rho_{n-1}$ and $\rho^{-1}_n,\rho^{-1}_{n-2}, \ldots, \rho^{-1}_1$ are equal and geometric; \item[\rm (iii)] there exist nonzero $s,\xi \in \mathbb F$ such that \begin{eqnarray} \label{eq:gammaSol} \rho_i = \begin{cases} \xi s^{i/2} & {\mbox{\rm if $i$ is even}}; \\ \xi^{-1}s^{(i-n)/2} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (0 \leq i \leq n). \end{eqnarray} \end{enumerate} Assume that {\rm (i)--(iii)} hold. Then $s=\rho_i /\rho_{i-2}$ for $2 \leq i \leq n$ and $\xi = \rho_0$. \end{proposition} \begin{proof} ${\rm (i)}\Rightarrow {\rm (iii)}$ Our proof is by induction on $n$. First assume that $n=1$. Then (iii) holds with $\xi = \rho_0$ and arbitrary $0 \not=s \in \mathbb F$. Next assume that $n=3$.
Then (iii) holds with $\xi=\rho_0$ and $s=\rho_2/\rho_0$. Next assume that $n\geq 5$. By Definition \ref{def:constrain}(ii), there exist $a,b,c \in \mathbb F$, not all zero, such that $a\rho_{i-1} + b\rho_i + c \rho_{i+1} = 0 $ for $1 \leq i \leq n-1$. Define $m=n-2$ and $\rho'_i=\rho_{i+1}$ for $0 \leq i \leq m$. By construction $\rho'_i \rho'_{m-i}=1$ for $0 \leq i \leq m$. Moreover $a \rho'_{i-1} + b\rho'_i + c \rho'_{i+1} = 0$ for $1 \leq i \leq m-1$. By induction there exist nonzero $s,x \in \mathbb F$ such that \begin{eqnarray*} \rho'_i = \begin{cases} x s^{i/2} & {\mbox{\rm if $i$ is even}}; \\ x^{-1}s^{(i-m)/2} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (0 \leq i \leq m). \end{eqnarray*} \noindent Define $\xi = x^{-1}s^{-(1+m)/2}$. Then \begin{eqnarray} \label{eq:middle} \rho_i = \begin{cases} \xi s^{i/2} & {\mbox{\rm if $i$ is even}}; \\ \xi^{-1}s^{(i-n)/2} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq n-1). \end{eqnarray} We show that $\rho_0= \xi$ and $\rho_n = \xi^{-1}$. Since $\rho_0 \rho_n = 1$, it suffices to show that $\rho_0 = \xi$. We claim that \begin{equation} \label{eq:mat2} {\rm det} \left( \begin{array}{ccc} \rho_0 & \rho_1 & \rho_2 \\ \rho_1& \rho_2 & \rho_3 \\ \rho_2 & \rho_3 & \rho_4 \end{array} \right) + \xi^2 s^2 {\rm det} \left( \begin{array}{ccc} \rho_0 & \rho_1 & \rho_{n-2} \\ \rho_1& \rho_2 & \rho_{n-1} \\ \rho_2 & \rho_3 & \rho_n \end{array} \right) = \frac{-(\rho_0 -\xi)^2}{ s^{n-3}\xi^{2}\rho_0}. \end{equation} \noindent To verify (\ref{eq:mat2}), evaluate the two determinants using (\ref{eq:middle}) and $\rho_0 \rho_n = 1$, and simplify the result. The claim is proven. For each matrix in (\ref{eq:mat2}), $a({\rm top} \;{\rm row})+ b({\rm middle}\; {\rm row})+c({\rm bottom}\; {\rm row})=0$. Each matrix is singular, so its determinant is zero. Therefore $\rho_0 = \xi$. \\ ${\rm (iii)}\Rightarrow {\rm (i)}$ By construction $\rho_i \rho_{n-i}=1$ for $0 \leq i \leq n$.
Define $a=s$, $b=0$, $c=-1$. Then $a\rho_{i-1} + b\rho_i + c \rho_{i+1}=0$ for $1 \leq i \leq n-1$. \\ ${\rm (ii)}\Leftrightarrow {\rm (iii)}$ Routine. \end{proof} \noindent Assume for the moment that $n$ is odd, and refer to Proposition \ref{prop:noddPre}. It could happen that $\lbrace \rho_i\rbrace_{i=0}^n$ is geometric, and the equivalent conditions (i)--(iii) hold. We now investigate this case. \begin{lemma} \label{lem:ConGeo} Assume that $n$ is odd. Then for the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ the following {\rm (i)--(iv)} are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric and constrained; \item[\rm (ii)] $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric and $\rho_{(n-1)/2}$, $\rho_{(n+1)/2}$ are inverses; \item[\rm (iii)] there exists $0 \not= t\in \mathbb F$ such that $\rho_i = t^{2i-n}$ for $0 \leq i \leq n$; \item[\rm (iv)] there exist $s,\xi \in \mathbb F$ that satisfy $\xi^4 s^n=1$ and Proposition \ref{prop:noddPre}{\rm (iii)}. \end{enumerate} Assume that {\rm (i)--(iv)} hold. Then $\xi=\rho_0 = t^{-n}$ and \begin{eqnarray} \label{eq:4eq} s=t^4, \qquad \rho_{(n-1)/2}=t^{-1}, \qquad \rho_{(n+1)/2}=t, \qquad t^2 = \rho_i/\rho_{i-1} \quad (1 \leq i \leq n). \end{eqnarray} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By Definition \ref{def:constrain}(i). \\ \noindent ${\rm (ii)}\Rightarrow {\rm (iii)}$ By assumption there exists $0 \not=t \in \mathbb F$ such that $\rho_{(n-1)/2}=t^{-1}$ and $\rho_{(n+1)/2}=t$. Since $\lbrace \rho_i\rbrace_{i=0}^n$ is geometric, the ratio $\rho_i/\rho_{i-1}$ is independent of $i$ for $1 \leq i \leq n$. Taking $i=(n+1)/2$ we find that this ratio is $t^2$. By these comments $\rho_i = t^{2i-n}$ for $0 \leq i \leq n$. \\ \noindent ${\rm (iii)}\Rightarrow {\rm (iv)}$ The values $s=t^4$, $\xi =t^{-n}$ meet the requirement.
\\ \noindent ${\rm (iv)}\Rightarrow {\rm (i)}$ Evaluating (\ref{eq:gammaSol}) using $\xi^4 s^n=1$ we find that $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric. The sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained by Proposition \ref{prop:noddPre}(i),(iii). \\ \noindent Assume that (i)--(iv) hold. Then the equations $\xi=\rho_0 = t^{-n}$ and (\ref{eq:4eq}) are readily checked. \end{proof} \begin{lemma} \label{lem:NGd4} Assume that $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained but not geometric. Then $n$ is odd and at least 3. \end{lemma} \begin{proof} The integer $n$ is odd by Proposition \ref{prop:nevenPre}(i),(ii). If $n=1$ then $\rho_0\rho_1=1$, and the sequence $\rho_0, \rho_1$ is geometric, for a contradiction. Therefore $n\geq 3$. \end{proof} \noindent We have some comments about the scalars $a,b,c$ from Definition \ref{def:constrain}(ii). \begin{definition} \label{def:lincon} \rm Assume that the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained. By a {\it linear constraint} for this sequence we mean a vector $(a,b,c) \in \mathbb F^3$ such that $a\rho_{i-1}+b\rho_i + c\rho_{i+1}=0$ for $1 \leq i \leq n-1$. \end{definition} \noindent Let $\lambda $ denote an indeterminate, and let $\mathbb F \lbrack \lambda \rbrack$ denote the $\mathbb F$-algebra consisting of the polynomials in $\lambda$ that have all coefficients in $\mathbb F$. Let $(a,b,c)$ denote a vector in $\mathbb F^3$. Define a polynomial $\psi \in \mathbb F\lbrack \lambda \rbrack$ by \begin{eqnarray*} \psi = a + b\lambda + c\lambda^2. \end{eqnarray*} \begin{proposition} \label{prop:nodd} Assume that $n\geq 2$ and $\lbrace \rho_i\rbrace_{i=0}^n$ is constrained. Then for the above vector $(a,b,c)$ the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] Assume that $n$ is even. Then $(a,b,c)$ is a linear constraint for $\lbrace \rho_i\rbrace_{i=0}^n$ if and only if $\psi(r)=0$, where $r$ is from Proposition \ref{prop:nevenPre}.
\item[\rm (ii)] Assume that $n$ is odd and $\lbrace \rho_i\rbrace_{i=0}^n$ is not geometric. Then $(a,b,c)$ is a linear constraint for $\lbrace \rho_i\rbrace_{i=0}^n$ if and only if $\psi = c(\lambda^2-s)$, where $s$ is from Proposition \ref{prop:noddPre}. \item[\rm (iii)] Assume that $n$ is odd and $\lbrace \rho_i\rbrace_{i=0}^n$ is geometric. Then $(a,b,c)$ is a linear constraint for $\lbrace \rho_i\rbrace_{i=0}^n$ if and only if $\psi(t^2)=0$, where $t$ is from Lemma \ref{lem:ConGeo}. \end{enumerate} \end{proposition} \begin{proof} (i) By Proposition \ref{prop:nevenPre} we have $\rho_i = \varepsilon r^{i-n/2}$ for $0 \leq i \leq n$, where $r = \rho_i /\rho_{i-1}$ for $1 \leq i \leq n$ and $\varepsilon = \rho_{n/2}$. For $1 \leq i \leq n-1$, $a \rho_{i-1} + b\rho_i + c\rho_{i+1} = 0$ if and only if $a+br+cr^2=0$ if and only if $\psi(r)=0$. So $(a,b,c)$ is a linear constraint for $\lbrace \rho_i \rbrace_{i=0}^n$ if and only if $a \rho_{i-1} + b\rho_i + c\rho_{i+1} = 0$ for $1 \leq i \leq n-1$ if and only if $\psi(r)=0$. \\ \noindent (ii) The sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained, so by Proposition \ref{prop:noddPre} it has the form (\ref{eq:gammaSol}), where $s=\rho_i/\rho_{i-2}$ for $2 \leq i \leq n$ and $\xi = \rho_0$. By assumption $\lbrace \rho_i \rbrace_{i=0}^n$ is not geometric, so $\xi^4 s^n \not=1$ in view of Lemma \ref{lem:ConGeo}(i),(iv). Define $k = \xi^2 s^{(n+1)/2}$ and note that $k^2=\xi^4 s^{n+1}$. Therefore $k^2 \not=s$ so $k \not=k^{-1}s$. Pick an integer $i$ $(1 \leq i \leq n-1)$. First assume that $i$ is even. By (\ref{eq:gammaSol}), $a\rho_{i-1}+b\rho_i + c\rho_{i+1}=0$ if and only if $a+bk+ c s=0$. Next assume that $i$ is odd. By (\ref{eq:gammaSol}), $a\rho_{i-1}+b\rho_i + c\rho_{i+1}=0$ if and only if $a+bk^{-1}s+ c s=0$. 
We now argue that $(a,b,c)$ is a linear constraint for $\lbrace \rho_i \rbrace_{i=0}^n$ if and only if $a\rho_{i-1}+b\rho_i + c\rho_{i+1}=0$ for $1\leq i \leq n-1$ if and only if both $a+bk+ c s=0$, $a+b k^{-1}s+ c s=0$ if and only if $b=0=a+cs$ if and only if $\psi = c(\lambda^2-s)$. \\ \noindent (iii) Similar to the proof of (i) above. \end{proof} \begin{definition} \label{def:LC} \rm Assume that the sequence $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained. Let LC denote the set of linear constraints for $\lbrace \rho_i \rbrace_{i=0}^n$. By Definition \ref{def:lincon}, LC is a subspace of the $\mathbb F$-vector space $\mathbb F^3$. By Definition \ref{def:constrain}(ii), LC is nonzero. We call LC the {\it linear constraint space} for $\lbrace \rho_i \rbrace_{i=0}^n$. \end{definition} \begin{proposition} \label{prop:LCbasis} Assume that $n\geq 2$ and $\lbrace \rho_i \rbrace_{i=0}^n$ is constrained. Let LC denote the corresponding linear constraint space. \begin{enumerate} \item[\rm (i)] Assume that $n$ is even. Then LC has dimension 2. The vectors $(r,-1,0)$ and $(r^2,0,-1)$ form a basis of LC, where $r$ is from Proposition \ref{prop:nevenPre}. \item[\rm (ii)] Assume that $n$ is odd and $\lbrace \rho_i \rbrace_{i=0}^n$ is not geometric. Then LC has dimension 1. The vector $(s,0,-1)$ forms a basis of LC, where $s$ is from Proposition \ref{prop:noddPre}. \item[\rm (iii)] Assume that $n$ is odd and $\lbrace \rho_i \rbrace_{i=0}^n$ is geometric. Then LC has dimension 2. The vectors $(t^2,-1,0)$ and $(t^4,0,-1)$ form a basis of LC, where $t$ is from Lemma \ref{lem:ConGeo}. \end{enumerate} \end{proposition} \begin{proof} This is a reformulation of Proposition \ref{prop:nodd}. \end{proof} \section{Toeplitz matrices and nilpotent linear transformations} \noindent We will be discussing an upper triangular matrix of a certain type, said to be Toeplitz. \begin{definition} \label{def:top} \rm (See \cite[Section~8.12]{dym}.) 
Let $\lbrace \alpha_i \rbrace_{i=0}^d$ denote scalars in $\mathbb F$. Let $T$ denote an upper triangular matrix in ${\rm Mat}_{d+1}(\mathbb F)$. Then $T$ is said to be {\it Toeplitz, with parameters $\lbrace \alpha_i \rbrace_{i=0}^d$} whenever $T$ has $(i,j)$-entry $\alpha_{j-i}$ for $0 \leq i\leq j\leq d$. In this case \begin{eqnarray*} T = \left( \begin{array}{ c c cc c c } \alpha_0 & \alpha_1 & \cdot & \cdot& \cdot & \alpha_d \\ & \alpha_0 & \alpha_1 & \cdot& \cdot & \cdot \\ & & \alpha_0 & \cdot & \cdot& \cdot \\ & & & \cdot & \cdot & \cdot \\ & & & & \cdot & \alpha_1 \\ {\bf 0} && & & & \alpha_0 \\ \end{array} \right). \end{eqnarray*} \end{definition} \noindent We have some comments. \begin{note} \rm The matrix $\tau$ from Definition \ref{def:tauMat} is Toeplitz, with parameters \begin{eqnarray*} \alpha_i = \begin{cases} 1 & {\mbox{\rm if $i=1$}}; \\ 0 & {\mbox{\rm if $i\not=1$}} \end{cases} \qquad \qquad (0 \leq i \leq d). \end{eqnarray*} \end{note} \begin{lemma} \label{lem:ToeChar} Let $T$ denote an upper triangular matrix in ${\rm Mat}_{d+1}(\mathbb F)$. Then $T$ is Toeplitz if and only if $T$ commutes with $\tau$. In this case $T= \sum_{i=0}^d \alpha_i \tau^i$, where $\lbrace \alpha_i \rbrace_{i=0}^d$ are the parameters for $T$. \end{lemma} \begin{proof} Use Lemma \ref{lem:tauPower}. \end{proof} \begin{lemma} \label{lem:ToepTrans} Let $T$ denote an upper triangular Toeplitz matrix in ${\rm Mat}_{d+1}(\mathbb F)$. Then $T^t={\bf Z}T{\bf Z}$. \end{lemma} \begin{proof} Expand ${\bf Z}T{\bf Z}$ by matrix multiplication. \end{proof} \noindent Referring to Definition \ref{def:top}, assume that $T$ is Toeplitz with parameters $\lbrace \alpha_i \rbrace_{i=0}^d$. Note that $T$ is invertible if and only if $\alpha_0 \not=0$. Assume this is the case.
Then $T^{-1}$ is upper triangular and Toeplitz: \begin{eqnarray*} T^{-1} = \left( \begin{array}{ c c cc c c } \beta_0 & \beta_1 & \cdot & \cdot& \cdot & \beta_d \\ & \beta_0 & \beta_1 & \cdot& \cdot & \cdot \\ & & \beta_0 & \cdot & \cdot& \cdot \\ & & & \cdot & \cdot & \cdot \\ & & & & \cdot & \beta_1 \\ {\bf 0} && & & & \beta_0 \\ \end{array} \right), \end{eqnarray*} with parameters $\lbrace \beta_i \rbrace_{i=0}^d$ that are obtained from $\lbrace \alpha_i \rbrace_{i=0}^d$ by recursively solving $\alpha_0 \beta_0 = 1$ and \begin{equation} \label{eq:recursion} \alpha_0 \beta_j + \alpha_1 \beta_{j-1} + \cdots + \alpha_j \beta_0 = 0 \qquad \qquad (1 \leq j \leq d). \end{equation} We have \begin{eqnarray*} \beta_0 &=& \alpha^{-1}_0, \\ \beta_1 &=& -\alpha_1 \alpha^{-2}_0\;\quad \qquad \qquad \qquad \mbox{\rm (if $d\geq 1$),} \\ \beta_2 &=& \frac{ \alpha^2_1-\alpha_0 \alpha_2}{\alpha^3_0} \;\;\quad \qquad \qquad \mbox{\rm (if $d\geq 2$),} \\ \beta_3 &=& \frac{2\alpha_0 \alpha_1 \alpha_2 -\alpha^3_1 - \alpha^2_0\alpha_3}{\alpha^4_0} \qquad \mbox{\rm (if $d\geq 3$),} \\ \beta_4 &=& \frac{\alpha^4_1 +2\alpha^2_0 \alpha_1 \alpha_3 + \alpha^2_0 \alpha^2_2-3\alpha_0 \alpha^2_1 \alpha_2 - \alpha^3_0 \alpha_4 }{\alpha^5_0} \qquad \mbox{\rm (if $d\geq 4$).} \end{eqnarray*} For $\alpha_0=1$ this becomes \begin{eqnarray*} \beta_0 &=& 1, \\ \beta_1 &=& -\alpha_1, \;\qquad \qquad \quad \qquad \qquad \qquad \mbox{\rm (if $d\geq 1$),} \\ \beta_2 &=& \alpha^2_1- \alpha_2 \;\;\qquad \qquad \quad \qquad \qquad \mbox{\rm (if $d\geq 2$),} \\ \beta_3 &=& 2 \alpha_1 \alpha_2 -\alpha^3_1 - \alpha_3 \qquad \qquad \qquad \mbox{\rm (if $d\geq 3$),} \\ \beta_4 &=& \alpha^4_1 +2\alpha_1 \alpha_3 + \alpha^2_2-3\alpha^2_1 \alpha_2 - \alpha_4 \qquad \mbox{\rm (if $d\geq 4$).} \end{eqnarray*} \noindent Recall our vector space $V$ over $\mathbb F$ with dimension $d+1$. Recall the Nil elements in ${\rm End}(V)$ from Definition \ref{def:Nil}. 
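Returning briefly to the Toeplitz inverse: the recursion (\ref{eq:recursion}) and the closed forms for $\beta_1,\ldots,\beta_4$ displayed above are easy to cross-check numerically. The following is a minimal sketch in Python over the rationals (the helper name \texttt{toeplitz\_inverse\_params} and the sample parameters are ours):

```python
from fractions import Fraction as F

def toeplitz_inverse_params(alpha):
    # Given parameters (alpha_0, ..., alpha_d) with alpha_0 != 0, solve
    # alpha_0*beta_j + alpha_1*beta_{j-1} + ... + alpha_j*beta_0 = 0
    # recursively for the parameters (beta_0, ..., beta_d) of the inverse.
    beta = [1 / alpha[0]]
    for j in range(1, len(alpha)):
        beta.append(-sum(alpha[j - i] * beta[i] for i in range(j)) / alpha[0])
    return beta

a0, a1, a2, a3 = a = [F(2), F(3), F(5), F(7)]   # sample parameters, d = 3
b = toeplitz_inverse_params(a)

# Agrees with the closed forms displayed above:
assert b[1] == -a1 / a0**2
assert b[2] == (a1**2 - a0 * a2) / a0**3
assert b[3] == (2 * a0 * a1 * a2 - a1**3 - a0**2 * a3) / a0**4

# Entrywise check that T * T^{-1} = I for the upper triangular
# Toeplitz matrices with (i,j)-entries alpha_{j-i} and beta_{j-i}:
for i in range(4):
    for j in range(i, 4):
        s = sum(a[k - i] * b[j - k] for k in range(i, j + 1))
        assert s == (1 if i == j else 0)
print("inverse parameters verified")
```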
We now give a variation on Lemma \ref{lem:NilRec}(i),(ii) in terms of vectors. \begin{lemma} \label{lem:reform1} For $A \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $A$ is Nil; \item[\rm (ii)] there exists a basis $\lbrace v_i \rbrace_{i=0}^d$ of $V$ such that $Av_0=0$ and $Av_i = v_{i-1}$ for $1 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:NilRec}(i),(ii). \end{proof} \begin{lemma} \label{lem:whatDec} Assume $A \in {\rm End}(V)$ is Nil. For subspaces $\lbrace V_i \rbrace_{i=0}^d$ of $V$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace V_i \rbrace_{i=0}^d$ is a decomposition of $V$ that is lowered by $A$; \item[\rm (ii)] the sum $V=AV+V_d$ is direct, and $V_i = A^{d-i}V_d$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ The sum $V=AV+V_d$ is direct since $AV=V_0+ \cdots + V_{d-1}$. The remaining assertion is clear. \\ ${\rm (ii)}\Rightarrow {\rm (i)}$ Define $U_i = A^{d-i}V$ for $0 \leq i \leq d$. The sequence $\lbrace U_i\rbrace_{i=0}^d$ is a flag on $V$. By construction, the sum $U_i = U_{i-1}+V_i$ is direct for $1 \leq i \leq d$. Therefore $\lbrace V_i \rbrace_{i=0}^d$ is a decomposition of $V$. By construction $AV_i= V_{i-1}$ for $1 \leq i \leq d$. Also $AV_0=0$ since $A$ is Nil. By these comments $\lbrace V_i \rbrace_{i=0}^d$ is lowered by $A$. \end{proof} \begin{lemma} \label{lem:twoView} Assume $A \in {\rm End}(V)$ is Nil. For vectors $\lbrace v_i\rbrace_{i=0}^d$ in $V$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace v_i\rbrace_{i=0}^d$ is a basis of $V$ such that $Av_0=0$ and $Av_i=v_{i-1}$ for $1 \leq i \leq d$; \item[\rm (ii)] $v_d \not\in AV$ and $v_i = A^{d-i}v_d$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} This is a reformulation of Lemma \ref{lem:whatDec}. \end{proof} \noindent Assume $A \in {\rm End}(V)$ is Nil.
By Lemma \ref{lem:twoView}, for $v \in V \backslash AV$ the sequences $\lbrace A^iv\rbrace_{i=0}^d$ and $\lbrace A^{d-i}v\rbrace_{i=0}^d$ are bases for $V$. Relative to these bases the matrices representing $A$ are, respectively, \begin{eqnarray*} A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right), \qquad \qquad A:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right). \end{eqnarray*} \noindent For $u,v \in V \backslash AV$, we now compute the transition matrix from the basis $\lbrace A^{d-i}u\rbrace_{i=0}^d$ to the basis $\lbrace A^{d-i} v\rbrace_{i=0}^d$. There exist scalars $\lbrace \alpha_i \rbrace_{i=0}^d$ in $\mathbb F$ such that $\alpha_0 \not=0$ and $v = \sum_{i=0}^d \alpha_i A^i u$. In this equation, for $0 \leq j \leq d$ apply $A^j$ to each side and adjust the result to obtain $A^jv = \sum_{i=j}^d \alpha_{i-j} A^{i} u$. This yields \begin{eqnarray} A^{d-j}v = \sum_{i=0}^j \alpha_{j-i} A^{d-i} u \qquad \qquad (0 \leq j \leq d). \label{eq:Toep} \end{eqnarray} By (\ref{eq:Toep}), the transition matrix from the basis $\lbrace A^{d-i}u\rbrace_{i=0}^d$ to the basis $\lbrace A^{d-i}v\rbrace_{i=0}^d$ is upper triangular, and Toeplitz with parameters ${\lbrace \alpha_i \rbrace}_{i=0}^d$. Define $\Phi = \sum_{i=0}^d \alpha_i A^i$. By construction $A\Phi=\Phi A$ and $ \Phi u=v$. Therefore $\Phi $ sends $A^{d-i}u \mapsto A^{d-i}v$ for $0 \leq i \leq d$. \begin{proposition} \label{prop:Toe} Let $\lbrace u_i \rbrace_{i=0}^d$ and $\lbrace v_i \rbrace_{i=0}^d$ denote bases for $V$.
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] there exists $A \in {\rm End}(V)$ such that $Au_i = u_{i-1}$ $(1 \leq i \leq d)$, $Au_0 = 0$, $Av_i= v_{i-1}$ $(1 \leq i \leq d)$, $Av_0 = 0$; \item[\rm (ii)] the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$ is upper triangular and Toeplitz. \end{enumerate} \end{proposition} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ The map $A$ is Nil by Lemma \ref{lem:reform1}. By Lemma \ref{lem:twoView}, there exist $u,v \in V\backslash AV$ such that $u_i= A^{d-i}u$ and $v_i= A^{d-i}v$ for $0 \leq i \leq d$. Now (ii) follows from the sentence below (\ref{eq:Toep}). \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ Define $A \in {\rm End}(V)$ such that $Au_0 = 0$ and $Au_i = u_{i-1}$ for $1 \leq i \leq d$. For the transition matrix in question let $\lbrace \alpha_i \rbrace_{i=0}^d$ denote the corresponding parameters, and define $\Phi = \sum_{i=0}^d \alpha_i A^i$. Then $A\Phi=\Phi A$. By construction $\Phi u_i = v_i $ for $0 \leq i \leq d$. Therefore $Av_0 = 0 $ and $Av_i = v_{i-1}$ for $1 \leq i \leq d$. \end{proof} \begin{lemma} \label{lem:a0a1} Assume that the two equivalent conditions in Proposition \ref{prop:Toe} hold, and let $\lbrace \alpha_i \rbrace_{i=0}^d$ denote the parameters for the Toeplitz matrix mentioned in the second condition. Fix $0 \not=r \in \mathbb F$. \begin{enumerate} \item[\rm (i)] If we replace $u_i$ by $u'_i = ru_i$ for $0 \leq i \leq d$, then the equivalent conditions in Proposition \ref{prop:Toe} still hold, with $A'=A$ and $\alpha'_i = r^{-1}\alpha_i$ for $0 \leq i \leq d$. \item[\rm (ii)] If we replace $u_i$ and $v_i$ by $u'_i = r^iu_i$ and $v'_i = r^i v_i$ for $0 \leq i \leq d$, then the equivalent conditions in Proposition \ref{prop:Toe} still hold, with $A'=r^{-1}A$ and $\alpha'_i = r^{i}\alpha_i$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} By linear algebra.
\end{proof} \section{LR triples} \noindent We now turn our attention to LR triples. Throughout this section, $V$ denotes a vector space over $\mathbb F$ with dimension $d+1$. \begin{definition} \label{def:LRT} \rm An {\it LR triple on $V$} is a sequence $A,B,C$ of elements in ${\rm End}(V)$ such that any two of $A,B,C$ form an LR pair on $V$. This LR triple is said to be {\it over $\mathbb F$}. We call $V$ the {\it underlying vector space}. We call $d$ the {\it diameter}. \end{definition} \begin{definition} \label{def:isoLRT} \rm Let $A,B,C$ denote an LR triple on $V$. Let $V'$ denote a vector space over $\mathbb F$ with dimension $d+1$, and let $A',B',C'$ denote an LR triple on $V'$. By an {\it isomorphism of LR triples from $A,B,C$ to $A',B',C'$} we mean an $\mathbb F$-linear bijection $\sigma :V \to V'$ such that \begin{eqnarray*} \sigma A = A' \sigma, \qquad \qquad \sigma B = B' \sigma, \qquad \qquad \sigma C = C' \sigma. \end{eqnarray*} The LR triples $A,B,C$ and $A',B',C'$ are called {\it isomorphic} whenever there exists an isomorphism of LR triples from $A,B,C$ to $A',B',C'$. \end{definition} \begin{example} \label{ex:trivial} \rm Assume $d=0$. A sequence of elements $A,B,C$ in ${\rm End}(V)$ forms an LR triple if and only if each of $A,B,C$ is zero. This LR triple will be called {\it trivial}. \end{example} \noindent We will use the following notational convention. \begin{definition} \label{def:primeConv} \rm Let $A,B,C$ denote an LR triple. For any object $f$ that we associate with this LR triple, $f'$ (resp. $f''$) will denote the corresponding object for the LR triple $B,C,A$ (resp. $C,A,B$). \end{definition} \begin{definition} \label{def:LRTpar} \rm Let $A,B,C$ denote an LR triple on $V$. By Definition \ref{def:LRT}, the pair $A,B$ (resp. $B,C$) (resp. $C,A$) is an LR pair on $V$.
Following the notational convention in Definition \ref{def:primeConv}, for these LR pairs the parameter sequence is denoted as follows: \centerline{ \begin{tabular}[t]{c|c} {\rm LR pair} & {\rm parameter sequence} \\ \hline $A,B$ & $\lbrace \varphi_i \rbrace_{i=1}^d$ \\ $B,C$ & $\lbrace \varphi'_i \rbrace_{i=1}^d$ \\ $C,A$ & $\lbrace \varphi''_i \rbrace_{i=1}^d$ \end{tabular}} \noindent We call the sequence \begin{eqnarray} \label{eq:paLRT} (\lbrace \varphi_i \rbrace_{i=1}^d; \lbrace \varphi'_i \rbrace_{i=1}^d; \lbrace \varphi''_i \rbrace_{i=1}^d) \end{eqnarray} the {\it parameter array} of the LR triple $A,B,C$. \end{definition} \begin{note}\rm As we will see, not every LR triple is determined up to isomorphism by its parameter array. \end{note} \begin{lemma} \label{lem:albega} Let $A,B,C$ denote an LR triple on $V$, with parameter array {\rm (\ref{eq:paLRT})}. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the triple $\alpha A,\beta B,\gamma C$ is an LR triple on $V$, with parameter array \begin{eqnarray*} (\lbrace \alpha \beta\varphi_i \rbrace_{i=1}^d; \lbrace \beta\gamma \varphi'_i \rbrace_{i=1}^d; \lbrace \gamma \alpha \varphi''_i \rbrace_{i=1}^d). \end{eqnarray*} \end{lemma} \begin{proof} Use Lemma \ref{lem:alphaBetacom} and Definition \ref{def:LRTpar}. \end{proof} \begin{lemma} \label{lem:albegaCor} Let $A,B,C$ denote a nontrivial LR triple over $\mathbb F$. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR triples $A,B,C$ and $\alpha A, \beta B, \gamma C$ have the same parameter array; \item[\rm (ii)] $\alpha\beta = \beta \gamma = \gamma \alpha = 1$; \item[\rm (iii)] $\alpha = \beta = \gamma \in \lbrace 1,-1\rbrace$. \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:albega}. \end{proof} \begin{lemma} \label{lem:ABCvar} Let $A,B,C$ denote an LR triple on $V$, with parameter array {\rm (\ref{eq:paLRT})}. 
Then each permutation of $A,B,C$ is an LR triple on $V$. The corresponding parameter array is given in the table below: \centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm parameter array} \\ \hline \hline $A,B,C$ & $(\lbrace \varphi_i \rbrace_{i=1}^d; \lbrace \varphi'_i \rbrace_{i=1}^d; \lbrace \varphi''_i \rbrace_{i=1}^d)$ \\ $B,C,A$ & $(\lbrace \varphi'_i \rbrace_{i=1}^d; \lbrace \varphi''_i \rbrace_{i=1}^d; \lbrace \varphi_i \rbrace_{i=1}^d)$ \\ $C,A,B$ & $(\lbrace \varphi''_i \rbrace_{i=1}^d; \lbrace \varphi_i \rbrace_{i=1}^d; \lbrace \varphi'_i \rbrace_{i=1}^d)$ \\ \hline $C,B,A$ & $(\lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d)$ \\ $A,C,B$ & $(\lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi_{d-i+1} \rbrace_{i=1}^d)$ \\ $B,A,C$ & $(\lbrace \varphi_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d)$ \end{tabular}} \end{lemma} \begin{proof} By Lemma \ref{lem:BAvAB} and Definition \ref{def:LRTpar}. \end{proof} \begin{definition} \label{def:ASSOC} \rm Let $A,B,C$ and $A',B',C'$ denote LR triples on $V$. These LR triples will be called {\it associates} whenever there exist nonzero $\alpha, \beta,\gamma $ in $ \mathbb F$ such that \begin{eqnarray*} A'=\alpha A, \qquad B'= \beta B, \qquad C'= \gamma C. \end{eqnarray*} Associativity is an equivalence relation. \end{definition} \begin{lemma} \label{lem:assocIsoT} Let $A,B,C$ and $A',B',C'$ denote LR triples over $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] there exists an LR triple over $\mathbb F$ that is associate to $A,B,C$ and isomorphic to $A',B',C'$; \item[\rm (ii)] there exists an LR triple over $\mathbb F$ that is isomorphic to $A,B,C$ and associate to $A',B',C'$. \end{enumerate} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:assocIso}. 
\end{proof} \begin{definition} \label{def:simiLar} \rm Let $A,B,C$ and $A',B',C'$ denote LR triples over $\mathbb F$. These LR triples will be called {\it similar} whenever they satisfy the equivalent conditions {\rm (i), (ii)} in Lemma \ref{lem:assocIsoT}. Similarity is an equivalence relation. \end{definition} \begin{lemma} \label{lem:newPA} Let $A,B,C$ denote an LR triple on $V$, with parameter array {\rm (\ref{eq:paLRT})}. In each row of the table below, we display an LR triple on $V^*$ along with its parameter array. \centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm parameter array} \\ \hline \hline $\tilde A, \tilde B, \tilde C$ & $(\lbrace \varphi_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d)$ \\ $\tilde B, \tilde C, \tilde A$ & $(\lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi_{d-i+1} \rbrace_{i=1}^d)$ \\ $\tilde C, \tilde A, \tilde B$ & $(\lbrace \varphi''_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi_{d-i+1} \rbrace_{i=1}^d; \lbrace \varphi'_{d-i+1} \rbrace_{i=1}^d)$ \\ \hline $\tilde C, \tilde B, \tilde A$ & $(\lbrace \varphi'_{i} \rbrace_{i=1}^d; \lbrace \varphi_{i} \rbrace_{i=1}^d; \lbrace \varphi''_{i} \rbrace_{i=1}^d)$ \\ $\tilde A, \tilde C, \tilde B$ & $(\lbrace \varphi''_{i} \rbrace_{i=1}^d; \lbrace \varphi'_{i} \rbrace_{i=1}^d; \lbrace \varphi_{i} \rbrace_{i=1}^d)$ \\ $\tilde B, \tilde A, \tilde C$ & $(\lbrace \varphi_{i} \rbrace_{i=1}^d; \lbrace \varphi''_{i} \rbrace_{i=1}^d; \lbrace \varphi'_{i} \rbrace_{i=1}^d)$ \end{tabular}} \end{lemma} \begin{proof} The given triples are LR triples by Lemma \ref{lem:LRDdual}(i) and Definition \ref{def:LRT}. To compute their parameter array use Lemmas \ref{lem:LRdualPar}, \ref{lem:ABCvar}. \end{proof} \begin{definition} \label{def:prel} \rm Let $A,B,C$ denote an LR triple on $V$. 
By a {\it relative} of $A,B,C$ we mean an LR triple from the table in Lemma \ref{lem:ABCvar} or Lemma \ref{lem:newPA}. A relative of $A,B,C$ is said to have {\it positive orientation} (resp. {\it negative orientation}) with respect to $A,B,C$ whenever it is in the top half (resp. bottom half) of the table of Lemma \ref{lem:ABCvar} or the bottom half (resp. top half) of the table in Lemma \ref{lem:newPA}. We call such a relative a {\it p-relative} (resp. {\it n-relative}) of $A,B,C$. Note that an n-relative of $A,B,C$ is the same thing as a p-relative of $C,B,A$. \end{definition} \noindent Let $A,B,C$ denote an LR triple on $V$. By Lemma \ref{lem:ABdecIndNil}, each of $A,B,C$ is Nil. By Lemma \ref{lem:ABdecInd} the following are mutually opposite flags on $V$: \begin{equation} \lbrace A^{d-i}V \rbrace_{i=0}^d, \qquad \lbrace B^{d-i}V \rbrace_{i=0}^d, \qquad \lbrace C^{d-i}V \rbrace_{i=0}^d. \label{eq:LRTMOF} \end{equation} \begin{lemma} \label{lem:LRTinducedFlag} Let $A,B,C$ denote an LR triple on $V$. In each row of the table below, we display a decomposition of $V$ along with its induced flag on $V$: \centerline{ \begin{tabular}[t]{c|c} {\rm decomp. of $V$} & {\rm induced flag on $V$} \\ \hline \hline $(A,B)$ & $\lbrace A^{d-i}V \rbrace_{i=0}^d$ \\ $(B,C)$ & $\lbrace B^{d-i}V \rbrace_{i=0}^d$ \\ $(C,A)$ & $\lbrace C^{d-i}V \rbrace_{i=0}^d$ \\ \hline $(B,A)$ & $\lbrace B^{d-i}V \rbrace_{i=0}^d$ \\ $(C,B)$ & $\lbrace C^{d-i}V \rbrace_{i=0}^d$ \\ $(A,C)$ & $\lbrace A^{d-i}V \rbrace_{i=0}^d$ \\ \end{tabular}} \end{lemma} \begin{proof} By Lemma \ref{lem:ABdecInd}. \end{proof} \begin{lemma} \label{lem:LRTdecDual} Let $A,B,C$ denote an LR triple on $V$. In each row of the table below, we display a decomposition of $V$ along with its dual decomposition of $V^*$. \centerline{ \begin{tabular}[t]{c|c} {\rm decomp. of $V$} & {\rm dual decomp. 
of $V^*$} \\ \hline \hline $(A,B)$ & $(\tilde B, \tilde A)$ \\ $(B,C)$ & $(\tilde C, \tilde B)$ \\ $(C,A)$ & $(\tilde A, \tilde C)$ \\ \hline $(B,A)$ & $(\tilde A, \tilde B)$ \\ $(C,B)$ & $(\tilde B, \tilde C)$ \\ $(A,C)$ & $(\tilde C, \tilde A)$ \\ \end{tabular}} \end{lemma} \begin{proof} By Lemma \ref{lem:LRDdual}. \end{proof} \begin{lemma} \label{lem:LRTflagDual} Let $A,B,C$ denote an LR triple on $V$. In each row of the table below, we display a flag on $V$ along with its dual flag on $V^*$. \centerline{ \begin{tabular}[t]{c|c} {\rm flag on $V$} & {\rm dual flag on $V^*$} \\ \hline \hline $\lbrace A^{d-i}V\rbrace_{i=0}^d$ & $\lbrace \tilde A^{d-i}V^*\rbrace_{i=0}^d$ \\ $\lbrace B^{d-i}V\rbrace_{i=0}^d$ & $\lbrace \tilde B^{d-i}V^*\rbrace_{i=0}^d$ \\ $\lbrace C^{d-i}V\rbrace_{i=0}^d$ & $\lbrace \tilde C^{d-i}V^*\rbrace_{i=0}^d$ \\ \end{tabular}} \end{lemma} \begin{proof} By Lemma \ref{lem:Nildual}. \end{proof} \begin{lemma} \label{lem:flagaction} Let $A,B,C$ denote an LR triple on $V$. In the table below we describe the action of $A,B,C$ on the flags {\rm (\ref{eq:LRTMOF})}. \centerline{ \begin{tabular}[t]{c|c|c} {\rm flag on $V$} & {\rm lowered by} & {\rm raised by} \\ \hline \hline $\lbrace A^{d-i}V\rbrace_{i=0}^d$ & $A$ & $B,C$ \\ $\lbrace B^{d-i}V\rbrace_{i=0}^d$ & $B$ & $C,A$ \\ $\lbrace C^{d-i}V\rbrace_{i=0}^d$ & $C$ & $A,B$ \\ \end{tabular}} \end{lemma} \begin{proof} Any two of $A,B,C$ form an LR pair on $V$. Apply Lemma \ref{lem:LRFlag} to these pairs. \end{proof} \begin{lemma} \label{lem:LRTaction} Let $A,B,C$ denote an LR triple on $V$. In each row of the table below, we display a decomposition $\lbrace V_i\rbrace_{i=0}^d$ of $V$. For $0 \leq i \leq d$ we give the action of $A,B,C$ on $V_i$. \centerline{ \begin{tabular}[t]{c|ccc} {\rm dec. 
$\lbrace V_i\rbrace_{i=0}^d$} & {\rm action of $A$ on $V_i$} & {\rm action of $B$ on $V_i$} & {\rm action of $C$ on $V_i$} \\ \hline \hline $(A,B)$ & $AV_i = V_{i-1}$ & $BV_i = V_{i+1}$ & $CV_i \subseteq V_{i-1}+V_i + V_{i+1}$ \\ $(B,C)$ & $AV_i \subseteq V_{i-1}+V_i + V_{i+1}$ & $BV_i = V_{i-1}$ & $CV_i = V_{i+1}$ \\ $(C,A)$ & $AV_i = V_{i+1}$ & $BV_i \subseteq V_{i-1}+V_i + V_{i+1}$ & $CV_i = V_{i-1}$ \\ \hline $(B,A)$ & $AV_i = V_{i+1}$ & $BV_i = V_{i-1}$ & $CV_i \subseteq V_{i-1}+V_i + V_{i+1}$ \\ $(C,B)$ & $AV_i \subseteq V_{i-1}+V_i + V_{i+1}$ & $BV_i = V_{i+1}$ & $CV_i = V_{i-1}$ \\ $(A,C)$ & $AV_i = V_{i-1}$ & $BV_i \subseteq V_{i-1}+V_i + V_{i+1}$ & $CV_i = V_{i+1}$ \\ \end{tabular}} \end{lemma} \begin{proof} We verify the first row; the other rows are similarly verified. Let $i$ be given. By construction $AV_i = V_{i-1}$ and $BV_i = V_{i+1}$. We now compute $CV_i$. By Lemma \ref{lem:LRTinducedFlag} the flag $\lbrace A^{d-j}V\rbrace_{j=0}^d$ is induced by $\lbrace V_j \rbrace_{j=0}^d$. By Lemma \ref{lem:flagaction} the flag $\lbrace A^{d-j}V\rbrace_{j=0}^d$ is raised by $C$. By these comments and Definition \ref{def:FlagLR}, \begin{equation} \label{eq:C1} C V_i \subseteq C(V_0 + \cdots + V_i) \subseteq V_0 + \cdots + V_{i+1}. \end{equation} Similarly, the flag $\lbrace B^{d-j}V\rbrace_{j=0}^d$ is induced by $\lbrace V_{d-j} \rbrace_{j=0}^d$ and raised by $C$. Therefore \begin{equation} \label{eq:C2} C V_i \subseteq C (V_i+\cdots + V_d) \subseteq V_{i-1} + \cdots + V_d. \end{equation} Combining (\ref{eq:C1}), (\ref{eq:C2}) we obtain $C V_i \subseteq V_{i-1} + V_{i} + V_{i+1}$. \end{proof} \begin{lemma} \label{lem:ABCGEN} Let $A,B,C$ denote an LR triple on $V$. Then the $\mathbb F$-algebra ${\rm End}(V)$ is generated by any two of $A,B,C$. \end{lemma} \begin{proof} By Corollary \ref{cor:ABgen}. \end{proof} \begin{definition} \label{def:LRTIdem} \rm Let $A,B,C$ denote an LR triple on $V$. Recall that the pair $A,B$ (resp. $B,C$) (resp. 
$C,A$) is an LR pair on $V$. For these LR pairs the idempotent sequence from Definition \ref{def:ABE} is denoted as follows: \centerline{ \begin{tabular}[t]{c|c} {\rm LR pair} & {\rm idempotent sequence} \\ \hline $A,B$ & $\lbrace E_i \rbrace_{i=0}^d$ \\ $B,C$ & $\lbrace E'_i \rbrace_{i=0}^d$ \\ $C,A$ & $\lbrace E''_i \rbrace_{i=0}^d$ \end{tabular}} \noindent We call the sequence \begin{eqnarray} \label{eq:idseq} ( \lbrace E_i\rbrace_{i=0}^d; \lbrace E'_i\rbrace_{i=0}^d; \lbrace E''_i\rbrace_{i=0}^d) \end{eqnarray} the {\it idempotent data} of $A,B,C$. \end{definition} \begin{lemma} \label{lem:isequal} Let $A,B,C$ denote an LR triple on $V$. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the idempotent data of $\alpha A,\beta B, \gamma C$ is equal to the idempotent data of $A,B,C$. \end{lemma} \begin{proof} By the last assertion of Lemma \ref{lem:alphaBeta}. \end{proof} \noindent Let $A,B,C$ denote an LR triple on $V$. Our next goal is to compute the idempotent data for the relatives of $A,B,C$. \begin{lemma} \label{lem:ABCEvar} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. In each row of the table below, we display an LR triple on $V$ along with its idempotent data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm idempotent data} \\ \hline \hline $A,B,C$ & $(\lbrace E_i \rbrace_{i=0}^d; \lbrace E'_i \rbrace_{i=0}^d; \lbrace E''_i \rbrace_{i=0}^d)$ \\ $B,C,A$ & $(\lbrace E'_i \rbrace_{i=0}^d; \lbrace E''_i \rbrace_{i=0}^d; \lbrace E_i \rbrace_{i=0}^d)$ \\ $C,A,B$ & $(\lbrace E''_i \rbrace_{i=0}^d; \lbrace E_i \rbrace_{i=0}^d; \lbrace E'_i \rbrace_{i=0}^d)$ \\ \hline $C,B,A$ & $(\lbrace E'_{d-i} \rbrace_{i=0}^d; \lbrace E_{d-i} \rbrace_{i=0}^d; \lbrace E''_{d-i} \rbrace_{i=0}^d)$ \\ $A,C,B$ & $(\lbrace E''_{d-i} \rbrace_{i=0}^d; \lbrace E'_{d-i} \rbrace_{i=0}^d; \lbrace E_{d-i} \rbrace_{i=0}^d)$ \\ $B,A,C$ & $(\lbrace E_{d-i} \rbrace_{i=0}^d; \lbrace E''_{d-i} \rbrace_{i=0}^d; \lbrace E'_{d-i} \rbrace_{i=0}^d)$ \end{tabular}} \end{lemma} \begin{proof} Use Lemma \ref{lem:Ebackward}. \end{proof} \begin{lemma} \label{lem:tildeABCEvar} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. In each row of the table below, we display an LR triple on $V^*$ along with its idempotent data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm idempotent data} \\ \hline \hline $\tilde A, \tilde B, \tilde C$ & $(\lbrace \tilde E_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E'_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E''_{d-i} \rbrace_{i=0}^d)$ \\ $\tilde B, \tilde C, \tilde A$ & $(\lbrace \tilde E'_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E''_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E_{d-i} \rbrace_{i=0}^d)$ \\ $\tilde C, \tilde A, \tilde B$ & $(\lbrace \tilde E''_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E_{d-i} \rbrace_{i=0}^d; \lbrace \tilde E'_{d-i} \rbrace_{i=0}^d)$ \\ \hline $\tilde C, \tilde B, \tilde A$ & $(\lbrace \tilde E'_{i} \rbrace_{i=0}^d; \lbrace \tilde E_{i} \rbrace_{i=0}^d; \lbrace \tilde E''_{i} \rbrace_{i=0}^d)$ \\ $\tilde A, \tilde C, \tilde B$ & $(\lbrace \tilde E''_{i} \rbrace_{i=0}^d; \lbrace \tilde E'_{i} \rbrace_{i=0}^d; \lbrace \tilde E_{i} \rbrace_{i=0}^d)$ \\ $\tilde B, \tilde A, \tilde C$ & $(\lbrace \tilde E_{i} \rbrace_{i=0}^d; \lbrace \tilde E''_{i} \rbrace_{i=0}^d; \lbrace \tilde E'_{i} \rbrace_{i=0}^d)$ \end{tabular}} \end{lemma} \begin{proof} By Lemmas \ref{lem:LRdual}, \ref{lem:ABCEvar}. \end{proof} \begin{lemma} \label{lem:Eform2} Let $A,B,C$ denote an LR triple on $V$, with parameter array {\rm (\ref{eq:paLRT})} and idempotent data {\rm (\ref{eq:idseq})}. Then for $0 \leq i \leq d$, \begin{eqnarray*} && E_i = \frac{A^{d-i}B^d A^i}{\varphi_1 \cdots \varphi_d}, \qquad \qquad E'_i = \frac{B^{d-i}C^d B^i}{\varphi'_1 \cdots \varphi'_d}, \qquad \qquad E''_i = \frac{C^{d-i}A^d C^i}{\varphi''_1 \cdots \varphi''_d}, \\ && E_i = \frac{B^{i}A^d B^{d-i}}{\varphi_1 \cdots \varphi_d}, \qquad \qquad E'_i = \frac{C^{i}B^d C^{d-i}}{\varphi'_1 \cdots \varphi'_d}, \qquad \qquad E''_i = \frac{A^{i}C^d A^{d-i}}{\varphi''_1 \cdots \varphi''_d}. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:Eform}. \end{proof} \begin{lemma} \label{lem:zeroprodABC} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. 
Then for $0 \leq i<j\leq d$ the following are zero: \begin{eqnarray*} &&A^j E_i, \qquad \qquad \; E_j A^{d-i}, \qquad \qquad \; E_i B^j, \qquad \qquad \; B^{d-i} E_j, \\ &&B^j E'_i, \qquad \qquad \, E'_j B^{d-i}, \qquad \qquad \, E'_i C^j, \qquad \qquad \, C^{d-i} E'_j, \\ &&C^j E''_i, \qquad \qquad E''_j C^{d-i}, \qquad \qquad E''_i A^j, \qquad \qquad A^{d-i} E''_j. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:zeroprod}. \end{proof} \begin{lemma} \label{lem:EiEjp} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. Then the following {\rm (i), (ii)} hold for $0 \leq i,j\leq d$. \begin{enumerate} \item[\rm (i)] Suppose $i+j<d$. Then \begin{eqnarray*} E_i E'_j = 0, \qquad \qquad E'_i E''_j = 0, \qquad \qquad E''_i E_j = 0. \end{eqnarray*} \item[\rm (ii)] Suppose $i+j>d$. Then \begin{eqnarray*} E'_j E_i = 0, \qquad \qquad E''_j E'_i = 0,\qquad \qquad E_j E''_i = 0. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} (i) We show $E_i E'_j = 0$. The sequence $\lbrace E'_rV\rbrace_{r=0}^d$ is the $(B,C)$ decomposition of $V$, which induces the flag $\lbrace B^{d-r}V\rbrace_{r=0}^d$ on $V$. By this and Lemma \ref{lem:zeroprodABC}, \begin{eqnarray*} E_i E'_j V \subseteq E_i (E'_0V+ \cdots + E'_jV) = E_i B^{d-j}V = 0. \end{eqnarray*} This shows that $E_iE'_j=0$. The remaining assertions are similarly shown. \\ \noindent (ii) We show $E'_j E_i = 0$. The sequence $\lbrace E_{d-r}V\rbrace_{r=0}^d$ is the $(B,A)$ decomposition of $V$, which induces the flag $\lbrace B^{d-r}V\rbrace_{r=0}^d$ on $V$. Now by Lemma \ref{lem:zeroprodABC}, \begin{eqnarray*} E'_j E_i V \subseteq E'_j (E_iV+ \cdots + E_dV) = E'_j B^iV = 0. \end{eqnarray*} This shows that $E'_jE_i=0$. The remaining assertions are similarly shown. \end{proof} \begin{lemma} \label{lem:tripleProd} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. 
Then for $0 \leq i,j\leq d$, \begin{eqnarray*} && E_i E'_j E_i = \delta_{i+j,d}E_i, \qquad \quad E'_i E''_j E'_i = \delta_{i+j,d}E'_i, \qquad \quad E''_i E_j E''_i = \delta_{i+j,d}E''_i, \\ && E_i E''_j E_i = \delta_{i+j,d}E_i, \qquad \quad E'_i E_j E'_i = \delta_{i+j,d}E'_i, \qquad \quad E''_i E'_j E''_i = \delta_{i+j,d}E''_i. \end{eqnarray*} \end{lemma} \begin{proof} Consider the product $E_i E'_j E_i$. First assume that $i+j < d$. Then $E_i E'_j=0$ by Lemma \ref{lem:EiEjp}(i), so $E_i E'_jE_i=0$. Next assume that $i+j>d$. Then $E'_j E_i = 0 $ by Lemma \ref{lem:EiEjp}(ii), so $E_i E'_jE_i=0$. Next assume that $i+j=d$. By our results so far, \begin{eqnarray*} E_i E'_jE_i = \sum_{r=0}^d E_i E'_rE_i = E_i I E_i = E_i. \end{eqnarray*} We have verified our assertion for the product $E_i E'_jE_i$; the remaining assertions are similarly verified. \end{proof} \begin{lemma} \label{lem:doubleTrace} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. Then for $0\leq i,j\leq d$ the products \begin{eqnarray*} && E_i E'_j, \qquad \qquad E'_i E''_j, \qquad \qquad E''_i E_j, \\ && E_i E''_j, \qquad \qquad E'_i E_j, \qquad \qquad E''_i E'_j \end{eqnarray*} have trace $0$ if $i+j \not=d$ and trace $1$ if $i+j=d$. \end{lemma} \begin{proof} For each displayed equation in Lemma \ref{lem:tripleProd}, take the trace of each side and simplify the result using ${\rm tr}(KL)= {\rm tr}(LK)$. \end{proof} \begin{proposition} \label{eq:varphiTRACE} Let $A,B,C$ denote an LR triple on $V$, with parameter array {\rm (\ref{eq:paLRT})} and idempotent data {\rm (\ref{eq:idseq})}. In the table below, for each map $F$ in the header row, we display the trace of $FE_i$, $FE'_i$, $FE''_i$ for $0 \leq i \leq d$.
\centerline{ \begin{tabular}[t]{c|ccc ccc} $F$ & $AB$ & $BA$ & $BC$ & $CB$ & $CA$ & $AC$ \\ \hline \hline ${\rm tr}(FE_i)$ & $\varphi_{i+1}$ & $\varphi_{i}$ & $\varphi'_{d-i+1}$ & $\varphi'_{d-i}$ & $\varphi''_{d-i+1}$ & $\varphi''_{d-i}$ \\ ${\rm tr}(FE'_i)$ & $\varphi_{d-i+1}$ & $\varphi_{d-i}$ & $\varphi'_{i+1}$ & $\varphi'_{i}$ & $\varphi''_{d-i+1}$ & $\varphi''_{d-i}$ \\ ${\rm tr}(FE''_i)$ & $\varphi_{d-i+1}$ & $\varphi_{d-i}$ & $\varphi'_{d-i+1}$ & $\varphi'_{d-i}$ & $\varphi''_{i+1}$ & $\varphi''_{i}$ \end{tabular}} \end{proposition} \begin{proof} We verify the first column of the table. We have ${\rm tr}(ABE_i) = \varphi_{i+1}$ by Lemma \ref{lem:varphiTrace}. To verify ${\rm tr}(ABE'_i)=\varphi_{d-i+1}$ and ${\rm tr}(ABE''_i)=\varphi_{d-i+1}$, eliminate $AB$ using the equation on the left in (\ref{eq:BiAione}), and evaluate the result using Lemma \ref{lem:doubleTrace}. We have verified the first column of the table; the remaining columns are similarly verified. \end{proof} \noindent In Proposition \ref{eq:varphiTRACE} we used the trace function to describe the parameter array of an LR triple. We now use the trace function to define some more parameters for an LR triple. \begin{definition} \label{def:traceData} \rm Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})}. For $0 \leq i \leq d$ define \begin{eqnarray} a_i = {\rm tr}(CE_i), \qquad \qquad a'_i = {\rm tr}(AE'_i), \qquad \qquad a''_i = {\rm tr}(BE''_i). \label{eq:aaa} \end{eqnarray} We call the sequence \begin{eqnarray} \label{eq:tracedata} (\lbrace a_i \rbrace_{i=0}^d; \lbrace a'_i \rbrace_{i=0}^d; \lbrace a''_i \rbrace_{i=0}^d ) \end{eqnarray} the {\it trace data} of $A,B,C$. \end{definition} \noindent Our next goal is to describe the meaning of the trace data from several points of view. \begin{lemma} \label{lem:tracedataI} Let $A,B,C$ denote an LR triple on $V$, with trace data {\rm (\ref{eq:tracedata})}. 
Consider a basis for $V$ that induces the $(A,B)$-decomposition (resp. $(B,C)$-decomposition) (resp. $(C,A)$-decomposition) of $V$. Then for $0 \leq i \leq d$, $a_i$ (resp. $a'_i$) (resp. $a''_i$) is the $(i,i)$-entry of the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that represents $C$ (resp. $A$) (resp. $B$) with respect to this basis. \end{lemma} \begin{proof} To obtain the assertion about $a_i$, in the equation on the left in (\ref{eq:aaa}) represent $C$ and $E_i$ by matrices with respect to the given basis. The other assertions are similarly obtained. \end{proof} \begin{lemma} \label{lem:traceDataII} Let $A,B,C$ denote an LR triple on $V$, with idempotent data {\rm (\ref{eq:idseq})} and trace data {\rm (\ref{eq:tracedata})}. Then for $0 \leq i \leq d$, \begin{eqnarray*} E_i C E_i = a_i E_i, \qquad \qquad E'_i A E'_i = a'_i E'_i, \qquad \qquad E''_i B E''_i = a''_i E''_i. \end{eqnarray*} \end{lemma} \begin{proof} We verify the equation on the left. Since $E_i$ is idempotent of rank $1$, there exists $a \in \mathbb F$ such that $E_iCE_i=a E_i$. In this equation, take the trace of each side and use Definition \ref{def:traceData} to get $a=a_i$. \end{proof} \begin{lemma} \label{lem:aisum} Let $A,B,C$ denote an LR triple on $V$, with trace data {\rm (\ref{eq:tracedata})}. Then \begin{eqnarray*} 0 = \sum_{i=0}^d a_i, \qquad \quad 0 = \sum_{i=0}^d a'_i, \qquad \quad 0 = \sum_{i=0}^d a''_i. \end{eqnarray*} \end{lemma} \begin{proof} The sum $\sum_{i=0}^d a_i$ is the trace of $C$, which is zero since $C$ is nilpotent. The remaining assertions are similarly shown. \end{proof} \begin{lemma} \label{lem:traceAdj} Let $A,B,C$ denote an LR triple on $V$, with trace data {\rm (\ref{eq:tracedata})}. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triple $\alpha A, \beta B, \gamma C$ has trace data \begin{eqnarray*} (\lbrace \gamma a_i \rbrace_{i=0}^d; \lbrace \alpha a'_i \rbrace_{i=0}^d; \lbrace \beta a''_i \rbrace_{i=0}^d).
\end{eqnarray*} \end{lemma} \begin{proof} Use Lemma \ref{lem:isequal} and Definition \ref{def:traceData}. \end{proof} \noindent Let $A,B,C$ denote an LR triple on $V$. Our next goal is to compute the trace data for the relatives of $A,B,C$. \begin{lemma} \label{lem:tracedataAlt} Let $A,B,C$ denote an LR triple on $V$, with trace data {\rm (\ref{eq:tracedata})}. In each row of the table below, we display an LR triple on $V$ along with its trace data. \centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm trace data} \\ \hline \hline $A,B,C$ & $(\lbrace a_i \rbrace_{i=0}^d; \lbrace a'_i \rbrace_{i=0}^d; \lbrace a''_i \rbrace_{i=0}^d)$ \\ $B,C,A$ & $(\lbrace a'_i \rbrace_{i=0}^d; \lbrace a''_i \rbrace_{i=0}^d; \lbrace a_i \rbrace_{i=0}^d)$ \\ $C,A,B$ & $(\lbrace a''_i \rbrace_{i=0}^d; \lbrace a_i \rbrace_{i=0}^d; \lbrace a'_i \rbrace_{i=0}^d)$ \\ \hline $C,B,A$ & $(\lbrace a'_{d-i} \rbrace_{i=0}^d; \lbrace a_{d-i} \rbrace_{i=0}^d; \lbrace a''_{d-i} \rbrace_{i=0}^d)$ \\ $A,C,B$ & $(\lbrace a''_{d-i} \rbrace_{i=0}^d; \lbrace a'_{d-i} \rbrace_{i=0}^d; \lbrace a_{d-i} \rbrace_{i=0}^d)$ \\ $B,A,C$ & $(\lbrace a_{d-i} \rbrace_{i=0}^d; \lbrace a''_{d-i} \rbrace_{i=0}^d; \lbrace a'_{d-i} \rbrace_{i=0}^d)$ \end{tabular}} \end{lemma} \begin{proof} By Lemma \ref{lem:ABCEvar} and Definition \ref{def:traceData}. \end{proof} \begin{lemma} \label{lem:tracedataDual} Let $A,B,C$ denote an LR triple on $V$, with trace data {\rm (\ref{eq:tracedata})}. In each row of the table below, we display an LR triple on $V^*$ along with its trace data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm trace data} \\ \hline \hline $\tilde A, \tilde B, \tilde C$ & $(\lbrace a_{d-i} \rbrace_{i=0}^d; \lbrace a'_{d-i} \rbrace_{i=0}^d; \lbrace a''_{d-i} \rbrace_{i=0}^d)$ \\ $\tilde B, \tilde C, \tilde A$ & $(\lbrace a'_{d-i} \rbrace_{i=0}^d; \lbrace a''_{d-i} \rbrace_{i=0}^d; \lbrace a_{d-i} \rbrace_{i=0}^d)$ \\ $\tilde C, \tilde A, \tilde B$ & $(\lbrace a''_{d-i} \rbrace_{i=0}^d; \lbrace a_{d-i} \rbrace_{i=0}^d; \lbrace a'_{d-i} \rbrace_{i=0}^d)$ \\ \hline $\tilde C, \tilde B, \tilde A$ & $(\lbrace a'_{i} \rbrace_{i=0}^d; \lbrace a_{i} \rbrace_{i=0}^d; \lbrace a''_{i} \rbrace_{i=0}^d)$ \\ $\tilde A, \tilde C, \tilde B$ & $(\lbrace a''_{i} \rbrace_{i=0}^d; \lbrace a'_{i} \rbrace_{i=0}^d; \lbrace a_{i} \rbrace_{i=0}^d)$ \\ $\tilde B, \tilde A, \tilde C$ & $(\lbrace a_{i} \rbrace_{i=0}^d; \lbrace a''_{i} \rbrace_{i=0}^d; \lbrace a'_{i} \rbrace_{i=0}^d)$ \end{tabular}} \end{lemma} \begin{proof} An element in ${\rm End}(V)$ has the same trace as its adjoint. The result follows from this along with Lemma \ref{lem:tildeABCEvar} and Definition \ref{def:traceData}. \end{proof} \noindent Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data {\rm (\ref{eq:idseq})}, and trace data (\ref{eq:tracedata}). Associated with $A,B,C$ are 12 types of bases for $V$: \begin{eqnarray} && (A,B), \quad\qquad \mbox{\rm inverted}\;(A,B), \quad \qquad (B,A), \quad \qquad \mbox{\rm inverted}\; (B,A), \label{eq:typeAB} \\ && (B,C), \qquad \quad \mbox{\rm inverted}\; (B,C), \quad \qquad (C,B), \qquad\quad \mbox{\rm inverted}\; (C,B), \label{eq:typeBC} \\ && (C,A), \qquad \quad \,\mbox{\rm inverted}\; (C,A), \quad \qquad (A,C), \qquad \quad \,\mbox{\rm inverted}\; (A,C). \label{eq:typeCA} \end{eqnarray} \noindent We now consider the actions of $A,B,C$ on these bases. We will use the following notation. 
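\noindent By way of illustration, consider the special case $d=2$. With respect to an $(A,B)$-basis for $V$, the maps $A$, $B$, $C$ are represented by the matrices
\begin{eqnarray*}
\left(\begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right),
\qquad \qquad
\left(\begin{array}{ccc} 0 & 0 & 0 \\ \varphi_1 & 0 & 0 \\ 0 & \varphi_2 & 0 \end{array}\right),
\qquad \qquad
\left(\begin{array}{ccc} a_0 & \varphi'_2/\varphi_1 & 0 \\ \varphi''_2 & a_1 & \varphi'_1/\varphi_2 \\ 0 & \varphi''_1 & a_2 \end{array}\right),
\end{eqnarray*}
respectively, where the entries come from the parameter array (\ref{eq:paLRT}) and the trace data (\ref{eq:tracedata}); this is the $d=2$ case of the first row of the table in Proposition \ref{prop:matrixRep} below. Note that $C$ is tridiagonal with diagonal entries $a_0, a_1, a_2$, in agreement with Lemmas \ref{lem:LRTaction}, \ref{lem:tracedataI}.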
\begin{definition} \label{def:natural} \rm For the above LR triple $A,B,C$ consider the 12 types of bases for $V$ from {\rm (\ref{eq:typeAB})--(\ref{eq:typeCA})}. For each type $\natural$ in the list and $F \in {\rm End}(V)$ let $F^\natural$ denote the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that represents $F$ with respect to a basis for $V$ of type $\natural$. Note that the map $\natural : {\rm End}(V) \to {\rm Mat}_{d+1}(\mathbb F)$, $F \mapsto F^\natural$ is an $\mathbb F$-algebra isomorphism. \end{definition} \begin{proposition} \label{prop:matrixRep} For the above LR triple $A,B,C$ consider the 12 types of bases for $V$ from {\rm (\ref{eq:typeAB})--(\ref{eq:typeCA})}. For each type $\natural $ in the list, the entries of $A^\natural$, $B^\natural$, $C^\natural$ are given in the table below. All entries not shown are zero. \centerline{ \begin{tabular}[t]{c|ccc|ccc|ccc} $\natural$ & $A^\natural_{i,i-1}$ & $A^\natural_{i,i}$ & $A^\natural_{i-1,i}$ & $B^\natural_{i,i-1}$ & $B^\natural_{i,i}$ & $B^\natural_{i-1,i}$ & $C^\natural_{i,i-1}$ & $C^\natural_{i,i}$ & $C^\natural_{i-1,i}$ \\ \hline \hline $(A,B)$ & $0$ & $0$ & $1$ & $\varphi_i$ & $0$ & $0$ & $\varphi''_{d-i+1}$ & $a_i$ & $\frac{\varphi'_{d-i+1}}{\varphi_i}$ \\ {\rm inv. $(A,B)$} & $1$ & $0$ & $0$ & $0$ & $0$ & $\varphi_{d-i+1}$ & $\frac{\varphi'_{i}}{\varphi_{d-i+1}}$ & $a_{d-i}$ & $\varphi''_{i}$ \\ $(B,A)$ & $\varphi_{d-i+1}$ & $0$ & $0$ & $0$ & $0$ & $1$ & $\varphi'_{i}$ & $a_{d-i}$ & $\frac{\varphi''_{i}}{\varphi_{d-i+1}}$ \\ {\rm inv. $(B,A)$} & $0$ & $0$ & $\varphi_i $ & $1$ & $0$ & $0$ & $\frac{\varphi''_{d-i+1}}{\varphi_i}$ & $a_i$ & $\varphi'_{d-i+1}$ \\ \hline $(B,C)$ & $\varphi_{d-i+1}$ & $a'_{i}$ & $\frac{\varphi''_{d-i+1}}{\varphi'_i}$ & $0$ & $0$ & $1$ & $\varphi'_i$ & $0$ & $0$ \\ {\rm inv. 
$(B,C)$} & $\frac{\varphi''_{i}}{\varphi'_{d-i+1}}$ & $a'_{d-i}$ & $\varphi_{i}$ & $1$ & $0$ & $0$ & $0$ & $0$ & $\varphi'_{d-i+1}$ \\ $(C,B)$ & $\varphi''_{i}$ & $a'_{d-i}$ & $\frac{\varphi_{i}}{\varphi'_{d-i+1}}$ & $\varphi'_{d-i+1}$ & $0$ & $0$ & $0$ & $0$ & $1$ \\ {\rm inv. $(C,B)$} & $\frac{\varphi_{d-i+1}}{\varphi'_i}$ & $a'_{i}$ & $\varphi''_{d-i+1}$ & $0$ & $0$ & $\varphi'_i $ & $1$ & $0$ & $0$ \\ \hline $(C,A)$ & $\varphi''_i$ & $0$ & $0$ & $\varphi'_{d-i+1}$ & $a''_{i}$ & $\frac{\varphi_{d-i+1}}{\varphi''_i}$ & $0$ & $0$ & $1$ \\ {\rm inv. $(C,A)$} & $0$ & $0$ & $\varphi''_{d-i+1}$ & $\frac{\varphi_{i}}{\varphi''_{d-i+1}}$ & $a''_{d-i}$ & $\varphi'_{i}$ & $1$ & $0$ & $0$ \\ $(A,C)$ & $0$ & $0$ & $1$ & $\varphi_{i}$ & $a''_{d-i}$ & $\frac{\varphi'_{i}}{\varphi''_{d-i+1}}$ & $\varphi''_{d-i+1}$ & $0$ & $0$ \\ {\rm inv. $(A,C)$} & $1$ & $0$ & $0$ & $\frac{\varphi'_{d-i+1}}{\varphi''_i}$ & $a''_{i}$ & $\varphi_{d-i+1}$ & $0$ & $0$ & $\varphi''_i $ \\ \end{tabular}} \end{proposition} \begin{proof} We verify the first row of the table. Consider a basis for $V$ of type $\natural=(A,B)$. The entries of $A^\natural$ and $B^\natural$ are given in Lemma \ref{lem:ABmatrix}. We now compute the entries of $C^\natural$. This matrix is tridiagonal by Lemma \ref{lem:LRTaction}. The diagonal entries of $C^\natural$ are given in Lemma \ref{lem:tracedataI}. As we compute additional entries of $C^\natural$, we will use the fact that for $0 \leq j \leq d$ the matrix $E^\natural_j$ has $(j,j)$-entry 1 and all other entries $0$. For $1 \leq i \leq d$ we now compute the $(i,i-1)$-entry of $C^\natural$. We evaluate ${\rm tr}(CAE_i)$ in two ways. On one hand, by Proposition \ref{eq:varphiTRACE} this trace is equal to $\varphi''_{d-i+1}$. On the other hand, by linear algebra this trace is equal to ${\rm tr}(C^\natural A^\natural E^\natural_i)$, which is equal to the $(i,i)$-entry of $C^\natural A^\natural $ by the form of $E^\natural_i$. 
By the form of $A^\natural $, the $(i,i)$-entry of $C^\natural A^\natural $ is equal to $C^\natural_{i,i-1}$. By these comments $C^\natural_{i,i-1}=\varphi''_{d-i+1}$. Next we compute the $(i-1,i)$-entry of $C^\natural$. We evaluate ${\rm tr}(BCE_i)$ in two ways. On one hand, by Proposition \ref{eq:varphiTRACE} this trace is equal to $\varphi'_{d-i+1}$. On the other hand, by linear algebra this trace is equal to ${\rm tr}(B^\natural C^\natural E^\natural_i)$, which is equal to the $(i,i)$-entry of $B^\natural C^\natural $ by the form of $E^\natural_i$. By the form of $B^\natural$, the $(i,i)$-entry of $B^\natural C^\natural $ is equal to $\varphi_i C^\natural_{i-1,i}$. By these comments $C^\natural_{i-1,i}=\varphi'_{d-i+1}/\varphi_i$. We have verified the first row of the table, and the remaining rows are similarly verified. \end{proof} \begin{proposition} \label{prop:IsoParTrace} An LR triple is uniquely determined up to isomorphism by its parameter array and trace data. \end{proposition} \begin{proof} In Proposition \ref{prop:matrixRep}, the matrix entries are determined by the parameter array and trace data. \end{proof} \noindent Let $A,B,C$ denote an LR triple on $V$. Recall the 12 types of bases for $V$ from {\rm (\ref{eq:typeAB})--(\ref{eq:typeCA})}. We now consider how these bases are related. As we proceed, keep in mind that any permutation of $A,B,C$ is an LR triple on $V$. \begin{lemma} \label{lem:xyz} Let $A,B,C$ denote an LR triple on $V$. Let $\lbrace u_i \rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$, and let $\lbrace v_i\rbrace_{i=0}^d$ denote an $(A,B)$-basis of $V$. Then the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$ is upper triangular and Toeplitz. \end{lemma} \begin{proof} Let $S \in {\rm Mat}_{d+1}(\mathbb F)$ denote the transition matrix in question. By Definition \ref{def:ABbasis} and the construction, $Au_i = u_{i-1}$ $(1 \leq i \leq d)$, $Au_0=0$, $Av_i = v_{i-1}$ $(1 \leq i \leq d)$, $Av_0=0$. 
Now by Proposition \ref{prop:Toe}, $S$ is upper triangular and Toeplitz. \end{proof} \begin{definition}\rm \label{def:compat} Two bases of $V$ will be called {\it compatible} whenever the transition matrix from one basis to the other is upper triangular and Toeplitz, with all diagonal entries 1. \end{definition} \begin{lemma} \label{lem:exCompat} Let $A,B,C$ denote an LR triple on $V$. Given an $(A,C)$-basis of $V$, there exists a compatible $(A,B)$-basis of $V$. \end{lemma} \begin{proof} Let $\lbrace u_i\rbrace_{i=0}^d$ denote the $(A,C)$-basis in question. Let $\lbrace v_i\rbrace_{i=0}^d$ denote an $(A,B)$-basis of $V$. Let $S \in {\rm Mat}_{d+1}(\mathbb F)$ denote the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$. By construction $S$ is invertible. By Lemma \ref{lem:xyz}, $S$ is upper triangular and Toeplitz. Let $\lbrace \alpha_i \rbrace_{i=0}^d$ denote the corresponding parameters, and note that $\alpha_0\not=0$. Define $v'_i = v_i /\alpha_0$ for $0 \leq i \leq d$. Then $\lbrace v'_i\rbrace_{i=0}^d$ is an $(A,B)$-basis of $V$. The transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v'_i\rbrace_{i=0}^d$ is $S/\alpha_0$. This matrix is upper triangular and Toeplitz, with all diagonal entries 1. Now by Definition \ref{def:compat} the basis $\lbrace v'_i\rbrace_{i=0}^d$ is compatible with $\lbrace u_i\rbrace_{i=0}^d$. \end{proof} \begin{definition}\rm \label{def:TTT} Let $A,B,C$ denote an LR triple on $V$. We define matrices $T, T', T''$ in ${\rm Mat}_{d+1}(\mathbb F)$ as follows: \begin{enumerate} \item[\rm (i)] $T$ is the transition matrix from a $(C,B)$-basis of $V$ to a compatible $(C,A)$-basis of $V$; \item[\rm (ii)] $T'$ is the transition matrix from an $(A,C)$-basis of $V$ to a compatible $(A,B)$-basis of $V$; \item[\rm (iii)] $T''$ is the transition matrix from a $(B,A)$-basis of $V$ to a compatible $(B,C)$-basis of $V$. 
\end{enumerate} \end{definition} \begin{definition} \label{def:TTT1} \rm Let $A,B,C$ denote an LR triple on $V$. By Definition \ref{def:compat} the associated matrix $T$ (resp. $T'$) (resp. $T''$) is upper triangular and Toeplitz; let $\lbrace \alpha_i \rbrace_{i=0}^d$ (resp. $\lbrace \alpha'_i \rbrace_{i=0}^d$) (resp. $\lbrace \alpha''_i \rbrace_{i=0}^d$) denote the corresponding parameters. Let $\lbrace \beta_i \rbrace_{i=0}^d$, $\lbrace \beta'_i \rbrace_{i=0}^d$, $\lbrace \beta''_i \rbrace_{i=0}^d$ denote the parameters for $T^{-1}$, $(T')^{-1}$, $(T'')^{-1}$ respectively. We call the 6-tuple \begin{eqnarray} ( \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d ) \label{eq:ToeplitzData} \end{eqnarray} the {\it Toeplitz data} for $A,B,C$. For notational convenience define each of the following to be zero: \begin{eqnarray*} \alpha_{d+1}, \qquad \alpha'_{d+1}, \qquad \alpha''_{d+1}, \qquad \beta_{d+1}, \qquad \beta'_{d+1}, \qquad \beta''_{d+1}. \end{eqnarray*} \end{definition} \begin{lemma} \label{lem:alpha0} Referring to Definition \ref{def:TTT1}, \begin{eqnarray} &&\alpha_0 = 1, \qquad \qquad \alpha'_0=1, \qquad \qquad \alpha''_0=1, \label{eq:list1} \\ &&\beta_0 = 1, \qquad \qquad \beta'_0=1, \qquad \qquad \beta''_0=1. \label{eq:list2} \end{eqnarray} Moreover \begin{eqnarray} \label{eq:list3} \beta_1 = - \alpha_1, \qquad \qquad \beta'_1 = - \alpha'_1, \qquad \qquad \beta''_1 = - \alpha''_1. \end{eqnarray} \end{lemma} \begin{proof} Concerning (\ref{eq:list1}), (\ref{eq:list2}), the matrices $T,T',T''$ and their inverses are all transition matrices between a pair of compatible bases. So their diagonal entries are all 1 by Definition \ref{def:compat}. Line (\ref{eq:list3}) follows from the discussion above Lemma \ref{lem:reform1}.
\end{proof} \begin{lemma} Referring to Definitions \ref{def:tauMat}, \ref{def:TTT}, \begin{eqnarray*} &&T = \sum_{i=0}^d \alpha_i \tau^i \qquad \qquad T' = \sum_{i=0}^d \alpha'_i \tau^i \qquad \qquad T'' = \sum_{i=0}^d \alpha''_i \tau^i, \\ && T^{-1} = \sum_{i=0}^d \beta_i \tau^i, \qquad \quad (T')^{-1} = \sum_{i=0}^d \beta'_i \tau^i, \qquad \quad (T'')^{-1} = \sum_{i=0}^d \beta''_i \tau^i. \end{eqnarray*} Moreover $T,T', T'', \tau$ mutually commute. \end{lemma} \begin{proof} By Lemma \ref{lem:ToeChar}. \end{proof} \begin{lemma} \label{lem:ToeplitzAdjust} Let $A,B,C$ denote an LR triple on $V$, with Toeplitz data {\rm (\ref{eq:ToeplitzData})}. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triple $\alpha A, \beta B, \gamma C$ has Toeplitz data \begin{eqnarray} ( \lbrace \gamma^{-i}\alpha_i \rbrace_{i=0}^d, \lbrace \gamma^{-i}\beta_i \rbrace_{i=0}^d; \lbrace \alpha^{-i} \alpha'_i \rbrace_{i=0}^d, \lbrace \alpha^{-i} \beta'_i \rbrace_{i=0}^d; \lbrace \beta^{-i} \alpha''_i \rbrace_{i=0}^d, \lbrace \beta^{-i} \beta''_i \rbrace_{i=0}^d ). \label{eq:NewToep} \end{eqnarray} \end{lemma} \begin{proof} Use Lemma \ref{lem:a0a1}(ii). \end{proof} \noindent Let $A,B,C$ denote an LR triple on $V$. Our next goal is to compute the Toeplitz data for the relatives of $A,B,C$. \begin{lemma} \label{lem:ToeplitzData} Let $A,B,C$ denote an LR triple on $V$, with Toeplitz data {\rm (\ref{eq:ToeplitzData})}. In each row of the table below, we display an LR triple on $V$ along with its Toeplitz data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm Toeplitz data} \\ \hline \hline $A,B,C$ & $( \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d )$ \\ $B,C,A$ & $( \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d; \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d )$ \\ $C,A,B$ & $( \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d; \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d )$ \\ \hline $C,B,A$ & $( \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d; \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d; \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d )$ \\ $A,C,B$ & $( \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d; \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d; \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d )$ \\ $B,A,C$ & $( \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d; \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d; \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d )$ \\ \end{tabular}} \end{lemma} \begin{proof} By Definition \ref{def:TTT} and the construction. \end{proof} \begin{lemma} \label{lem:compatFlip} Let $\lbrace u_i \rbrace_{i=0}^d$ (resp. $\lbrace v_i \rbrace_{i=0}^d$) denote a basis of $V$, and let $\lbrace u'_i \rbrace_{i=0}^d$ (resp. $\lbrace v'_i \rbrace_{i=0}^d$) denote the dual basis of $V^*$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $\lbrace u_i \rbrace_{i=0}^d$ and $\lbrace v_i \rbrace_{i=0}^d$ are compatible; \item[\rm (ii)] $\lbrace u'_{d-i} \rbrace_{i=0}^d$ and $\lbrace v'_{d-i} \rbrace_{i=0}^d$ are compatible. \end{enumerate} Moreover, suppose {\rm (i), (ii)} hold. Then the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$ is the inverse of the transition matrix from $\lbrace u'_{d-i} \rbrace_{i=0}^d$ to $\lbrace v'_{d-i} \rbrace_{i=0}^d$. \end{lemma} \begin{proof} Let $S \in {\rm Mat}_{d+1}(\mathbb F)$ denote the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$. Then $S^t$ is the transition matrix from $\lbrace v'_i \rbrace_{i=0}^d$ to $\lbrace u'_i \rbrace_{i=0}^d$. Then $(S^t)^{-1}$ is the transition matrix from $\lbrace u'_i \rbrace_{i=0}^d$ to $\lbrace v'_i \rbrace_{i=0}^d$. Then ${\bf Z} (S^t)^{-1}{\bf Z}$ is the transition matrix from $\lbrace u'_{d-i} \rbrace_{i=0}^d$ to $\lbrace v'_{d-i} \rbrace_{i=0}^d$. Note that $S$ is upper triangular and Toeplitz with all diagonal entries 1, if and only if $S^{-1}$ is upper triangular and Toeplitz with all diagonal entries 1. By this and Lemma \ref{lem:ToepTrans}, we see that $S$ is upper triangular and Toeplitz with all diagonal entries 1, if and only if ${\bf Z} (S^t)^{-1}{\bf Z}$ is upper triangular and Toeplitz with all diagonal entries 1, and in this case ${\bf Z} (S^t)^{-1}{\bf Z}=S^{-1}$. The result follows. \end{proof} \begin{lemma} \label{lem:ToeplitzDataD} Let $A,B,C$ denote an LR triple on $V$, with Toeplitz data {\rm (\ref{eq:ToeplitzData})}. In each row of the table below, we display an LR triple on $V^*$ along with its Toeplitz data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm Toeplitz data} \\ \hline \hline $\tilde A, \tilde B, \tilde C$ & $( \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d; \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d; \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d )$ \\ $\tilde B, \tilde C, \tilde A$ & $( \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d; \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d; \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d )$ \\ $\tilde C, \tilde A, \tilde B$ & $( \lbrace \beta''_i \rbrace_{i=0}^d, \lbrace \alpha''_i \rbrace_{i=0}^d; \lbrace \beta_i \rbrace_{i=0}^d, \lbrace \alpha_i \rbrace_{i=0}^d; \lbrace \beta'_i \rbrace_{i=0}^d, \lbrace \alpha'_i \rbrace_{i=0}^d )$ \\ \hline $\tilde C, \tilde B, \tilde A$ & $( \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d )$ \\ $\tilde A, \tilde C, \tilde B$ & $( \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d; \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d )$ \\ $\tilde B, \tilde A, \tilde C$ & $( \lbrace \alpha_i \rbrace_{i=0}^d, \lbrace \beta_i \rbrace_{i=0}^d; \lbrace \alpha''_i \rbrace_{i=0}^d, \lbrace \beta''_i \rbrace_{i=0}^d; \lbrace \alpha'_i \rbrace_{i=0}^d, \lbrace \beta'_i \rbrace_{i=0}^d )$ \\ \end{tabular}} \end{lemma} \begin{proof} Use Lemma \ref{lem:LRBdual}, Definition \ref{def:TTT}, and Lemma \ref{lem:compatFlip}. \end{proof} \noindent Until further notice fix an LR triple $A,B,C$ on $V$, with parameter array (\ref{eq:paLRT}), idempotent data {\rm (\ref{eq:idseq})}, trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). 
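The Toeplitz data can be explored concretely. The sketch below (with a hypothetical choice of $d=5$ and parameters $\alpha_1,\ldots,\alpha_d$ picked arbitrarily, not derived from an actual LR triple) checks numerically that a unit upper triangular Toeplitz matrix $T=\sum_{i} \alpha_i \tau^i$ has an inverse of the same form, with $\beta_0=1$ and $\beta_1=-\alpha_1$ as in Lemma \ref{lem:alpha0}, and that $T$ commutes with $\tau$:

```python
import numpy as np

d = 5
# tau: the (d+1) x (d+1) shift matrix with superdiagonal entries 1
tau = np.diag(np.ones(d), k=1)

# hypothetical Toeplitz parameters: alpha_0 = 1, the rest arbitrary
alpha = np.array([1.0, 2.0, -1.0, 0.5, 3.0, -2.0])
T = sum(alpha[i] * np.linalg.matrix_power(tau, i) for i in range(d + 1))

Tinv = np.linalg.inv(T)
# the first row of an upper triangular Toeplitz matrix carries its
# parameters beta_0, ..., beta_d
beta = Tinv[0, :]

# T^{-1} is again upper triangular and Toeplitz ...
assert np.allclose(Tinv, sum(beta[i] * np.linalg.matrix_power(tau, i)
                             for i in range(d + 1)))
# ... with beta_0 = 1 and beta_1 = -alpha_1, as in Lemma lem:alpha0
assert np.isclose(beta[0], 1.0) and np.isclose(beta[1], -alpha[1])
# T and tau commute, since T is a polynomial in tau
assert np.allclose(T @ tau, tau @ T)
```

The same computation, applied to the transition matrices $T, T', T''$, recovers the parameters $\beta_i, \beta'_i, \beta''_i$ from the $\alpha_i, \alpha'_i, \alpha''_i$ by power-series inversion.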
Recall the 12 types of bases for $V$ from {\rm (\ref{eq:typeAB})--(\ref{eq:typeCA})}. As we consider how these bases are related, it is convenient to work with specific bases of each type. Fix nonzero vectors \begin{eqnarray} && \eta \in A^dV, \qquad \qquad \;\; \eta' \in B^dV, \qquad \qquad \;\; \eta'' \in C^dV, \label{eq:eta} \\ && \tilde \eta \in \tilde A^dV^*, \qquad \qquad \tilde \eta' \in \tilde B^dV^*, \qquad \qquad \tilde \eta'' \in \tilde C^dV^*. \label{eq:etaDual} \end{eqnarray} \noindent By construction, \begin{eqnarray} && A \eta = 0, \qquad \qquad B \eta' = 0, \qquad \qquad C \eta'' = 0, \label{eq:ABCzero} \\ && \tilde A \tilde \eta = 0, \qquad \qquad \tilde B \tilde \eta' = 0, \qquad \qquad \tilde C \tilde \eta'' = 0. \label{eq:TABCzero} \end{eqnarray} \noindent We mention a result for later use. \begin{lemma} \label{lem:lateruse} The following scalars are nonzero: \begin{eqnarray} && \label{eq:IP1} (\eta, \tilde \eta'), \qquad \qquad (\eta', \tilde \eta''), \qquad \qquad (\eta'', \tilde \eta), \\ && \label{eq:IP2} (\eta, \tilde \eta''), \qquad \qquad (\eta', \tilde \eta), \qquad \qquad (\eta'', \tilde \eta'). \end{eqnarray} For $d\geq 1$ the following scalars are zero: \begin{eqnarray} (\eta, \tilde \eta), \qquad \qquad (\eta', \tilde \eta'), \qquad \qquad (\eta'', \tilde \eta''). \label{eq:IP3} \end{eqnarray} \end{lemma} \begin{proof} We show that $(\eta, \tilde \eta')\not=0$. By assumption $0 \not= \eta \in A^dV$ and $0 \not= \tilde \eta' \in \tilde B^dV^*$. The flags $\lbrace B^{d-i}V\rbrace_{i=0}^d$ and $\lbrace \tilde B^{d-i}V^*\rbrace_{i=0}^d$ are dual by Lemma \ref{lem:LRTflagDual}; therefore $BV$ is the orthogonal complement of $ \tilde B^dV^*$. The flags $\lbrace A^{d-i}V\rbrace_{i=0}^d$ and $\lbrace B^{d-i}V\rbrace_{i=0}^d$ are opposite; therefore $A^dV \cap BV=0$. By these comments $(\eta, \tilde \eta')\not=0$. The other five inner products in (\ref{eq:IP1}), (\ref{eq:IP2}) are similarly shown to be nonzero. Next assume that $d\geq 1$. 
We show that $(\eta, \tilde \eta)=0$. By construction $\eta \in A^dV$ and $\tilde \eta \in \tilde A^dV^*$. The flags $\lbrace A^{d-i}V\rbrace_{i=0}^d$ and $\lbrace \tilde A^{d-i}V^*\rbrace_{i=0}^d$ are dual by Lemma \ref{lem:LRTflagDual}; therefore $AV$ is the orthogonal complement of $\tilde A^{d}V^*$. The subspace $AV$ contains $A^dV$ since $d\geq 1$; therefore $A^dV$ is orthogonal to $\tilde A^d V^*$. By these comments $(\eta, \tilde \eta)=0$. The other two inner products in (\ref{eq:IP3}) are similarly shown to be zero. \end{proof} \noindent We now display some bases for $V$ of the types (\ref{eq:typeAB})--(\ref{eq:typeCA}). \begin{lemma} \label{lem:Allbases} In each row of the tables below, for $0 \leq i \leq d$ we display a vector $v_i \in V$. The vectors $\lbrace v_i \rbrace_{i=0}^d$ form a basis for $V$; we give the type and the induced decomposition of $V$. \centerline{ \begin{tabular}[t]{c|c|c} $v_i$ & {\rm type of basis} & {\rm induced dec. of $V$} \\ \hline \hline $B^i \eta$ & {\rm inverted $(B,A)$} & $(A,B)$ \\ $B^{d-i} \eta$ & {\rm $(B,A)$} & $(B,A)$ \\ $(\varphi_1 \cdots \varphi_i)^{-1}B^i \eta$ & $(A,B)$ & $(A,B)$ \\ $(\varphi_1 \cdots \varphi_{d-i})^{-1}B^{d-i} \eta$ & {\rm inverted $(A,B)$} & $(B,A)$ \\ \hline $C^i \eta$ & {\rm inverted $(C,A)$} & $(A,C)$ \\ $C^{d-i} \eta$ & {\rm $(C,A)$} & $(C,A)$ \\ $(\varphi''_d \cdots \varphi''_{d-i+1})^{-1}C^i \eta$ & $(A,C)$ & $(A,C)$ \\ $(\varphi''_d \cdots \varphi''_{i+1})^{-1}C^{d-i} \eta$ & {\rm inverted $(A,C)$} & $(C,A)$ \\ \end{tabular}} \centerline{ \begin{tabular}[t]{c|c|c} $v_i$ & {\rm type of basis} & {\rm induced dec. 
of $V$} \\ \hline \hline $C^i \eta'$ & {\rm inverted $(C,B)$} & $(B,C)$ \\ $C^{d-i} \eta'$ & {\rm $(C,B)$} & $(C,B)$ \\ $(\varphi'_1 \cdots \varphi'_i)^{-1}C^i \eta'$ & $(B,C)$ & $(B,C)$ \\ $(\varphi'_1 \cdots \varphi'_{d-i})^{-1}C^{d-i} \eta'$ & {\rm inverted $(B,C)$} & $(C,B)$ \\ \hline $A^i \eta'$ & {\rm inverted $(A,B)$} & $(B,A)$ \\ $A^{d-i} \eta'$ & {\rm $(A,B)$} & $(A,B)$ \\ $(\varphi_d \cdots \varphi_{d-i+1})^{-1}A^i \eta'$ & $(B,A)$ & $(B,A)$ \\ $(\varphi_d \cdots \varphi_{i+1})^{-1}A^{d-i} \eta'$ & {\rm inverted $(B,A)$} & $(A,B)$ \\ \end{tabular}} \centerline{ \begin{tabular}[t]{c|c|c} $v_i$ & {\rm type of basis} & {\rm induced dec. of $V$} \\ \hline \hline $A^i \eta''$ & {\rm inverted $(A,C)$} & $(C,A)$ \\ $A^{d-i} \eta''$ & {\rm $(A,C)$} & $(A,C)$ \\ $(\varphi''_1 \cdots \varphi''_i)^{-1}A^i \eta''$ & $(C,A)$ & $(C,A)$ \\ $(\varphi''_1 \cdots \varphi''_{d-i})^{-1}A^{d-i} \eta''$ & {\rm inverted $(C,A)$} & $(A,C)$ \\ \hline $B^i \eta''$ & {\rm inverted $(B,C)$} & $(C,B)$ \\ $B^{d-i} \eta''$ & {\rm $(B,C)$} & $(B,C)$ \\ $(\varphi'_d \cdots \varphi'_{d-i+1})^{-1}B^i \eta''$ & $(C,B)$ & $(C,B)$ \\ $(\varphi'_d \cdots \varphi'_{i+1})^{-1}B^{d-i} \eta''$ & {\rm inverted $(C,B)$} & $(B,C)$ \\ \end{tabular}} \end{lemma} \begin{proof} Use Lemmas \ref{lem:5Char}, \ref{lem:5CharInv}, \ref{lem:5CharBA}, \ref{lem:5CharBAinv}. \end{proof} \begin{lemma} \label{lem:moreTTTp} In each of {\rm (i)--(iii)} below we describe two bases from the tables in Lemma \ref{lem:Allbases}. These two bases are compatible. \begin{enumerate} \item[\rm (i)] the $(A,B)$-basis in the first table, and the $(A,C)$-basis in the first table; \item[\rm (ii)] the $(B,C)$-basis in the second table, and the $(B,A)$-basis in the second table; \item[\rm (iii)] the $(C,A)$-basis in the third table, and the $(C,B)$-basis in the third table. 
\end{enumerate} \end{lemma} \begin{proof} (i) Let $\lbrace u_i \rbrace_{i=0}^d$ denote the $(A,C)$-basis in the first table, and let $\lbrace v_i \rbrace_{i=0}^d$ denote the $(A,B)$-basis in the first table. Let $S \in {\rm Mat}_{d+1}(\mathbb F)$ denote the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$. By Lemma \ref{lem:xyz}, $S$ is upper triangular and Toeplitz; let $\lbrace s_i \rbrace_{i=0}^d$ denote the corresponding parameters. Note that $u_0$ and $v_0$ are both equal to $\eta$; therefore $s_0=1$. Consequently the diagonal entries of $S$ are all 1. The bases $\lbrace u_i \rbrace_{i=0}^d$ and $\lbrace v_i \rbrace_{i=0}^d$ are compatible by Definition \ref{def:compat}. \\ \noindent (ii), (iii) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:moreTTT} Referring to Lemma \ref{lem:Allbases}, \begin{enumerate} \item[\rm (i)] $T$ is the transition matrix from the $(C,B)$-basis in the third table, to the $(C,A)$-basis in the third table; \item[\rm (ii)] $T'$ is the transition matrix from the $(A,C)$-basis in the first table, to the $(A,B)$-basis in the first table; \item[\rm (iii)] $T''$ is the transition matrix from the $(B,A)$-basis in the second table, to the $(B,C)$-basis in the second table. \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:TTT} and Lemma \ref{lem:moreTTTp}. 
\end{proof} \begin{lemma} \label{lem:TransI} For $0 \leq j \leq d$, \begin{eqnarray*} && B^j \eta = \sum_{i=0}^j \alpha'_{j-i} \frac{\varphi_1 \cdots \varphi_j} {\varphi''_d \cdots \varphi''_{d-i+1}} \,C^i \eta, \qquad \qquad \;\;\;\;C^j \eta = \sum_{i=0}^j \beta'_{j-i} \frac{\varphi''_d \cdots \varphi''_{d-j+1}} {\varphi_1 \cdots \varphi_{i}} \,B^i \eta, \\ && C^j \eta' = \sum_{i=0}^j \alpha''_{j-i} \frac{\varphi'_1 \cdots \varphi'_j} {\varphi_d \cdots \varphi_{d-i+1}} \,A^i \eta', \qquad \qquad \;\; A^j \eta' = \sum_{i=0}^j \beta''_{j-i} \frac{\varphi_d \cdots \varphi_{d-j+1}} {\varphi'_1 \cdots \varphi'_{i}} \,C^i \eta', \\ && A^j \eta'' = \sum_{i=0}^j \alpha_{j-i} \frac{\varphi''_1 \cdots \varphi''_j} {\varphi'_d \cdots \varphi'_{d-i+1}} \,B^i \eta'', \qquad \qquad B^j \eta'' = \sum_{i=0}^j \beta_{j-i} \frac{\varphi'_d \cdots \varphi'_{d-j+1}} {\varphi''_1 \cdots \varphi''_{i}} \,A^i \eta''. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemmas \ref{lem:Allbases}, \ref{lem:moreTTT}. \end{proof} \begin{lemma} \label{lem:alphaSum} We have \begin{eqnarray} && \label{eq:one} \eta = \frac{(\eta,\tilde \eta'')}{(\eta',\tilde \eta'')} \sum_{i=0}^d \alpha_i C^i \eta', \qquad \qquad \eta = \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')} \sum_{i=0}^d \beta''_i B^i \eta'', \\ && \label{eq:two} \eta' = \frac{(\eta',\tilde \eta)}{(\eta'',\tilde \eta)} \sum_{i=0}^d \alpha'_i A^i \eta'', \qquad \qquad \eta' = \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')} \sum_{i=0}^d \beta_i C^i \eta, \\ && \label{eq:three} \eta'' = \frac{(\eta'',\tilde \eta')}{(\eta,\tilde \eta')} \sum_{i=0}^d \alpha''_i B^i \eta, \qquad \qquad \eta'' = \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)} \sum_{i=0}^d \beta'_i A^i \eta'. \end{eqnarray} \end{lemma} \begin{proof} We verify the equation on the left in (\ref{eq:one}). By Lemma \ref{lem:Allbases}, $\lbrace C^{d-i}\eta'\rbrace_{i=0}^d$ is a $(C,B)$-basis of $V$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a compatible $(C,A)$-basis of $V$. 
By Definition \ref{def:TTT}(i) $T$ is the transition matrix from $\lbrace C^{d-i}\eta'\rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$. By the discussion in Definition \ref{def:TTT1}, $T$ is upper triangular and Toeplitz with parameters $\lbrace \alpha_i \rbrace_{i=0}^d$. By these comments $v_d = \sum_{i=0}^d \alpha_i C^i\eta'$. By construction $v_d$ is a basis of $A^dV$. Since $\eta$ is also a basis of $A^dV$, there exists $0 \not=\zeta \in \mathbb F$ such that $\eta = \zeta v_d$. Therefore \begin{equation} \label{eq:vdsum} \eta = \zeta \sum_{i=0}^d \alpha_i C^i\eta'. \end{equation} We now compute $\zeta$. In the equation (\ref{eq:vdsum}), take the inner product of each side with $\tilde \eta''$. By Lemma \ref{lem:LRTdecDual} (row 2 of the table) we have $(C^i \eta', \tilde \eta'') = 0 $ for $1 \leq i \leq d$. By this and $ \alpha_0=1$ we obtain $(\eta,\tilde \eta'') = \zeta (\eta',\tilde \eta'')$. Therefore \begin{eqnarray} \label{lem:zetaVal} \zeta = \frac{ (\eta,\tilde \eta'')} {(\eta',\tilde \eta'')}. \end{eqnarray} Combining (\ref{eq:vdsum}), (\ref{lem:zetaVal}) we obtain the equation on the left in (\ref{eq:one}). The remaining equations in (\ref{eq:one})--(\ref{eq:three}) are similarly verified. \end{proof} \noindent In the next two lemmas we give some results about $\alpha_1$. Similar results hold for $\alpha'_1, \alpha''_1$. \begin{lemma} \label{lem:etapp} Assume $d\geq 1$. Then the vector $\eta''$ is an eigenvector for $A/\varphi''_1- B/\varphi'_d$ with eigenvalue $\alpha_1$. \end{lemma} \begin{proof} In Lemma \ref{lem:TransI}, set $j=1$ in either equation from the third row. \end{proof} \begin{lemma} \label{lem:matrixA} Assume $d\geq 1$. 
Then the column vector $(\alpha''_0,\alpha''_1,\ldots, \alpha''_d)^t$ is an eigenvector for the matrix \begin{equation} \label{eq:matrixEig} \left( \begin{array}{ c c cc c c } 0 & \varphi_1/\varphi''_1 & && & \bf 0 \\ -1/\varphi'_d & 0 & \varphi_2/\varphi''_1 && & \\ & -1/\varphi'_d & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi_d/\varphi''_1 \\ {\bf 0} && & & -1/\varphi'_d & 0 \\ \end{array} \right). \end{equation} The corresponding eigenvalue is $\alpha_1$. \end{lemma} \begin{proof} In Lemma \ref{lem:etapp}, represent everything with respect to $\lbrace B^i \eta\rbrace_{i=0}^d$, which is an inverted $(B,A)$-basis of $V$. In this calculation use Proposition \ref{prop:matrixRep} and the equation on the left in (\ref{eq:three}). \end{proof} \begin{lemma} The column vector $(\alpha''_0,\alpha''_1,\ldots, \alpha''_d)^t$ is an eigenvector for the matrix \begin{equation} \label{eq:matrixEig2} \left( \begin{array}{ c c cc c c } a_0 & \varphi'_d & && & \bf 0 \\ \varphi''_d/\varphi_1 & a_1 & \varphi'_{d-1} && & \\ & \varphi''_{d-1}/\varphi_2 & a_2 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi'_1 \\ {\bf 0} && & & \varphi''_1/\varphi_d & a_d \\ \end{array} \right). \end{equation} The corresponding eigenvalue is $0$. \end{lemma} \begin{proof} In the equation $C\eta''=0$, represent everything with respect to $\lbrace B^i \eta\rbrace_{i=0}^d$, which is an inverted $(B,A)$-basis of $V$. In this calculation use Proposition \ref{prop:matrixRep} and the equation on the left in (\ref{eq:three}). 
\end{proof} \begin{lemma} \label{lem:prodFormula} For $0 \leq i \leq d$, \begin{eqnarray*} && \varphi_1 \varphi_2 \cdots \varphi_i \alpha''_i \beta'_d = \beta'_{d-i}, \qquad \quad \varphi'_1 \varphi'_2 \cdots \varphi'_i \alpha_i \beta''_d = \beta''_{d-i}, \qquad \quad \varphi''_1 \varphi''_2 \cdots \varphi''_i \alpha'_i \beta_d = \beta_{d-i}, \\ && \varphi_1 \varphi_2 \cdots \varphi_i \alpha'_i \beta''_d = \beta''_{d-i}, \qquad \quad \varphi'_1 \varphi'_2 \cdots \varphi'_i \alpha''_i \beta_d = \beta_{d-i}, \qquad \quad \varphi''_1 \varphi''_2 \cdots \varphi''_i \alpha_i \beta'_d = \beta'_{d-i}. \end{eqnarray*} \end{lemma} \begin{proof} We verify the first equation. By Lemma \ref{lem:TransI} with $j=d$, \begin{eqnarray} \label{eq:jd} C^d \eta = \sum_{i=0}^d \beta'_{d-i} \frac{\varphi''_1 \cdots \varphi''_{d}} {\varphi_1 \cdots \varphi_{i}} \,B^i \eta. \end{eqnarray} The vectors $C^d \eta $ and $\eta''$ are both bases for $C^d V$, so there exists $0 \not= \vartheta \in \mathbb F$ such that $ C^d \eta = \vartheta \eta''$. Use this to compare (\ref{eq:jd}) with the equation on left in (\ref{eq:three}). We find that for $0 \leq i \leq d$, \begin{eqnarray} \label{eq:geni} \beta'_{d-i} \frac{\varphi''_1 \cdots \varphi''_d}{\varphi_1 \cdots \varphi_i} = \frac{(\eta'',\tilde \eta')}{ (\eta,\tilde \eta')} \vartheta \alpha''_i. \end{eqnarray} Setting $i=0$ in (\ref{eq:geni}), \begin{eqnarray} \label{eq:i0} \beta'_d \varphi''_1 \cdots \varphi''_d = \frac{(\eta'',\tilde \eta')}{ (\eta,\tilde \eta')} \vartheta. \end{eqnarray} Eliminating $\vartheta$ in (\ref{eq:geni}) using (\ref{eq:i0}), we obtain the first equation in the lemma statement. Apply this equation to the p-relatives of $A,B,C$ to get the remaining equations in the lemma statement. \end{proof} \begin{lemma} \label{lem:NZ} \label{lem:identity} The following scalars are nonzero: \begin{eqnarray*} \alpha_d, \qquad \alpha'_d, \qquad \alpha''_d, \qquad \beta_d, \qquad \beta'_d, \qquad \beta''_d. 
\end{eqnarray*} Moreover \begin{eqnarray*} && \varphi_1 \varphi_2 \cdots \varphi_d = \frac{1}{ \alpha'_d \beta''_d} = \frac{1}{ \alpha''_d \beta'_d}, \qquad \qquad \varphi'_1 \varphi'_2 \cdots \varphi'_d = \frac{1}{ \alpha''_d \beta_d} = \frac{1}{ \alpha_d \beta''_d}, \\ &&\varphi''_1 \varphi''_2 \cdots \varphi''_d = \frac{1}{ \alpha_d \beta'_d} = \frac{1}{ \alpha'_d \beta_d}. \end{eqnarray*} \end{lemma} \begin{proof} Set $i=d$ in Lemma \ref{lem:prodFormula} and use Lemma \ref{lem:alpha0}. \end{proof} \begin{lemma} \label{lem:alphaInv} For $0 \leq i \leq d$, \begin{eqnarray} \label{eq:alphaInv} && \frac{\alpha_i}{\varphi_1 \varphi_2 \cdots \varphi_i} = \frac{\alpha'_i}{\varphi'_1 \varphi'_2 \cdots \varphi'_i} = \frac{\alpha''_i}{\varphi''_1 \varphi''_2 \cdots \varphi''_i} \end{eqnarray} and also \begin{eqnarray} \frac{\beta_i}{\varphi_d \varphi_{d-1} \cdots \varphi_{d-i+1}} = \frac{\beta'_i}{\varphi'_d \varphi'_{d-1} \cdots \varphi'_{d-i+1}} = \frac{\beta''_i}{\varphi''_d \varphi''_{d-1} \cdots \varphi''_{d-i+1}}. \label{eq:betaInv} \end{eqnarray} \end{lemma} \begin{proof} To obtain (\ref{eq:alphaInv}), use Lemma \ref{lem:prodFormula} and the first assertion of Lemma \ref{lem:NZ}. To obtain (\ref{eq:betaInv}), apply (\ref{eq:alphaInv}) to the LR triple $\tilde A, \tilde B, \tilde C$ and use Lemmas \ref{lem:newPA}, \ref{lem:ToeplitzDataD}. 
\end{proof} \begin{lemma} \label{lem:COB} For $0 \leq j \leq d$, \begin{eqnarray*} &&B^j \eta = \frac{1}{\beta''_d} \, \frac{(\eta,\tilde \eta'')}{(\eta',\tilde \eta'')} \, \frac{A^{d-j}\eta'}{\varphi_d \cdots \varphi_{j+1}}, \qquad \quad C^j \eta = \frac{1}{\alpha_d} \, \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')} \, \frac{A^{d-j}\eta''}{\varphi''_1 \cdots \varphi''_{d-j}}, \\ &&C^j \eta' = \frac{1}{\beta_d} \, \frac{(\eta',\tilde \eta)}{(\eta'',\tilde \eta)} \, \frac{B^{d-j}\eta''}{\varphi'_d \cdots \varphi'_{j+1}}, \qquad \quad A^j \eta' = \frac{1}{\alpha'_d} \, \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')} \, \frac{B^{d-j}\eta}{\varphi_1 \cdots \varphi_{d-j}}, \\ &&A^j \eta'' = \frac{1}{\beta'_d} \, \frac{(\eta'',\tilde \eta')}{(\eta,\tilde \eta')} \, \frac{C^{d-j}\eta}{\varphi''_d \cdots \varphi''_{j+1}}, \qquad \quad B^j \eta'' = \frac{1}{\alpha''_d} \, \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)} \, \frac{C^{d-j}\eta'}{\varphi'_1 \cdots \varphi'_{d-j}}. \end{eqnarray*} \end{lemma} \begin{proof} We verify the first equation. In Lemma \ref{lem:Allbases}, compare the $(A,B)$-basis of $V$ from the first table, with the $(A,B)$-basis of $V$ from the second table. By Lemma \ref{lem:BAbasisU} there exists $0 \not=\zeta \in \mathbb F$ such that \begin{eqnarray} \frac{B^j \eta } {\varphi_1 \cdots \varphi_j} = \zeta A^{d-j} \eta' \qquad \qquad (0 \leq j \leq d). \label{eq:compareA} \end{eqnarray} We now find $\zeta$. Setting $j=0$ in (\ref{eq:compareA}) we find $\eta = \zeta A^d \eta'$. Use this to compare the equation in Lemma \ref{lem:TransI} (row 2, column 2, $j=d$), with the equation on the left in (\ref{eq:one}). In this comparison consider the summands for $i=0$ to obtain \begin{eqnarray} \label{eq:compareA5} \zeta = \frac{1}{\beta''_d}\, \frac{1}{\varphi_1 \cdots \varphi_d}\, \frac{ (\eta, \tilde \eta'') }{ (\eta', \tilde \eta'') }. 
\end{eqnarray} Evaluating (\ref{eq:compareA}) using (\ref{eq:compareA5}) we get the first displayed equation in the lemma statement. Applying our results so far to the LR triples in Lemma \ref{lem:ABCvar}, we obtain the remaining equations in the lemma statement. \end{proof} \noindent We emphasize a special case of Lemma \ref{lem:COB}. \begin{lemma} \label{lem:COBSC} We have \begin{eqnarray*} &&B^d \eta = \frac{(\eta,\tilde \eta'')}{(\eta',\tilde \eta'')} \, \frac{\eta'}{\beta''_d}, \qquad \quad C^d \eta = \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')} \, \frac{\eta''}{\alpha_d}, \\ &&C^d \eta' = \frac{(\eta',\tilde \eta)}{(\eta'',\tilde \eta)} \, \frac{\eta''}{\beta_d }, \qquad \quad A^d \eta' = \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')} \, \frac{\eta}{\alpha'_d }, \\ &&A^d \eta'' = \frac{(\eta'',\tilde \eta')}{(\eta,\tilde \eta')} \, \frac{\eta}{\beta'_d }, \qquad \quad B^d \eta'' = \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)} \, \frac{\eta'}{\alpha''_d}. \end{eqnarray*} \end{lemma} \begin{proof} Set $j=d$ in Lemma \ref{lem:COB}. \end{proof} \begin{proposition} \label{prop:sixBil} Each of $\alpha_d/\beta_d$, $\alpha'_d/\beta'_d$, $\alpha''_d/\beta''_d$ is equal to \begin{eqnarray} \frac{ (\eta,\tilde \eta') (\eta',\tilde \eta'') (\eta'',\tilde \eta) } { (\eta,\tilde \eta'') (\eta',\tilde \eta) (\eta'',\tilde \eta') }. \label{eq:TripleProd} \end{eqnarray} \end{proposition} \begin{proof} The scalars $\alpha_d/\beta_d$, $\alpha'_d/\beta'_d$, $\alpha''_d/\beta''_d$ are equal by Lemma \ref{lem:NZ}. To see that $\alpha_d /\beta_d$ is equal to (\ref{eq:TripleProd}), compare the two equations in (\ref{eq:one}) using Lemma \ref{lem:COB} (row 2, column 1). The result follows after a brief computation. 
\end{proof} \begin{note}\rm By Proposition \ref{prop:sixBil} the scalars in (\ref{eq:IP1}), (\ref{eq:IP2}) are determined by the Toeplitz data (\ref{eq:ToeplitzData}) and the sequence \begin{eqnarray} \label{eq:5list} (\eta, \tilde \eta'), \qquad (\eta', \tilde \eta''), \qquad (\eta'', \tilde \eta), \qquad (\eta, \tilde \eta''), \qquad (\eta', \tilde \eta). \end{eqnarray} The scalars (\ref{eq:5list}) are ``free'' in the following sense. Given a sequence $\Psi$ of five nonzero scalars in $\mathbb F$, there exist nonzero vectors $\eta, \eta',\eta''$ and $\tilde \eta, \tilde \eta', \tilde \eta''$ as in (\ref{eq:eta}), (\ref{eq:etaDual}) such that the sequence (\ref{eq:5list}) is equal to $\Psi$. \end{note} \noindent We display some transition matrices for later use. \begin{lemma} \label{lem:TI} Referring to Lemma \ref{lem:Allbases}, the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] The transition matrix from the inverted $(B,A)$-basis in the first table to the inverted $(B,A)$-basis in the second table is \begin{eqnarray*} \frac{(\eta', \tilde \eta'')}{(\eta, \tilde \eta'')} \,\beta''_d I. \end{eqnarray*} \item[\rm (ii)] The transition matrix from the inverted $(C,B)$-basis in the second table to the inverted $(C,B)$-basis in the third table is \begin{eqnarray*} \frac{(\eta'', \tilde \eta)}{(\eta', \tilde \eta)} \,\beta_d I. \end{eqnarray*} \item[\rm (iii)] The transition matrix from the inverted $(A,C)$-basis in the third table to the inverted $(A,C)$-basis in the first table is \begin{eqnarray*} \frac{(\eta, \tilde \eta')}{(\eta'', \tilde \eta')} \,\beta'_d I. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:COB}. \end{proof} \noindent The following definition is motivated by Definition \ref{def:Dmat}. \begin{definition} \label{def:DDD} \rm Let $D$ (resp. $D'$) (resp. $D''$) denote the diagonal matrix in ${\rm Mat}_{d+1}(\mathbb F)$ with $(i,i)$-entry $\varphi_1 \varphi_2 \cdots \varphi_i$ (resp. 
$\varphi'_1 \varphi'_2 \cdots \varphi'_i$) (resp. $\varphi''_1 \varphi''_2 \cdots \varphi''_i$) for $0 \leq i \leq d$. \end{definition} \noindent The following result is reminiscent of Lemma \ref{lem:Dmeaning}. \begin{lemma} \label{lem:DDD} Referring to Lemma \ref{lem:Allbases}, \begin{enumerate} \item[\rm (i)] $D$ is the transition matrix from the $(A,B)$-basis in the first table to the inverted $(B,A)$-basis in the first table; \item[\rm (ii)] $D'$ is the transition matrix from the $(B,C)$-basis in the second table to the inverted $(C,B)$-basis in the second table; \item[\rm (iii)] $D''$ is the transition matrix from the $(C,A)$-basis in the third table to the inverted $(A,C)$-basis in the third table. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:Allbases} and Definition \ref{def:DDD}. \end{proof} \begin{definition} \label{def:theta} \rm Let $\theta$ denote the scalar (\ref{eq:TripleProd}). By Proposition \ref{prop:sixBil}, \begin{equation} \label{eq:thetaEx} \theta= \frac{\alpha_d}{\beta_d} = \frac{\alpha'_d}{\beta'_d} = \frac{\alpha''_d}{\beta''_d}. \end{equation} Note that $\theta \not=0$. \end{definition} \begin{proposition} \label{prop:12cycle} We have \begin{equation} \label{eq:9cycle} T' D {\bf Z} T''D'{\bf Z} T D'' {\bf Z} =\frac{I}{\theta \beta_d \beta'_d \beta''_d}, \end{equation} where $\theta$ is from Definition \ref{def:theta}. \end{proposition} \begin{proof} Consider the following 12 bases from Lemma \ref{lem:Allbases}. In each row of the table below, for $0 \leq i \leq d$ we display a vector $v_i \in V$. The vectors $\lbrace v_i \rbrace_{i=0}^d$ form a basis for $V$; we give the type and the induced decomposition of $V$. \centerline{ \begin{tabular}[t]{c|c|c} $v_i$ & {\rm type of basis} & {\rm induced dec. 
of $V$} \\ \hline \hline $(\varphi_1 \cdots \varphi_i)^{-1}B^i \eta$ & $(A,B)$ & $(A,B)$ \\ $B^i \eta$ & {\rm inverted $(B,A)$} & $(A,B)$ \\ $(\varphi_d \cdots \varphi_{i+1})^{-1}A^{d-i} \eta'$ & {\rm inverted $(B,A)$} & $(A,B)$ \\ $(\varphi_d \cdots \varphi_{d-i+1})^{-1}A^{i} \eta'$ & $(B,A)$ & $(B,A)$ \\ \hline $(\varphi'_1 \cdots \varphi'_{i})^{-1}C^{i} \eta'$ & $(B,C)$ & $(B,C)$ \\ $C^{i} \eta'$ & {\rm inverted $(C,B)$} & $(B,C)$ \\ $(\varphi'_d \cdots \varphi'_{i+1})^{-1}B^{d-i} \eta''$ & {\rm inverted $(C,B)$} & $(B,C)$ \\ $(\varphi'_d \cdots \varphi'_{d-i+1})^{-1}B^{i} \eta''$ & $(C,B)$ & $(C,B)$ \\ \hline $(\varphi''_1 \cdots \varphi''_{i})^{-1}A^{i} \eta''$ & $(C,A)$ & $(C,A)$ \\ $A^{i} \eta''$ & {\rm inverted $(A,C)$} & $(C,A)$ \\ $(\varphi''_d \cdots \varphi''_{i+1})^{-1}C^{d-i} \eta$ & {\rm inverted $(A,C)$} & $(C,A)$ \\ $(\varphi''_d \cdots \varphi''_{d-i+1})^{-1}C^{i} \eta$ & $(A,C)$ & $(A,C)$ \\ \end{tabular}} \noindent We cycle through the bases in the above table, starting with the basis in the bottom row, jumping to the basis in the top row, and then going down through the rows until we return to the basis in the bottom row. For each basis in the sequence, consider the transition matrix to the next basis in the sequence. This gives a sequence of transition matrices. Compute the product of these transition matrices in the given order. This product is evaluated in two ways. On one hand, the product is equal to the identity matrix. On the other hand, each factor in the product is computed using Lemmas \ref{lem:moreTTT}, \ref{lem:TI}, \ref{lem:DDD} and the definition of $\bf Z$ in Section 2. Evaluate the resulting equation using Proposition \ref{prop:sixBil}. The result follows. \end{proof} \noindent In Section 34 we use the equation (\ref{eq:9cycle}) to characterize the LR triples. \noindent We mention some other results involving the scalar $\theta$ from Definition \ref{def:theta}. 
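The bookkeeping in the proof of Proposition \ref{prop:12cycle} rests on a generic fact: the transition matrices around any closed cycle of bases multiply to the identity. A minimal numerical sketch of this fact, with random bases standing in for the twelve bases in the table (not an actual LR triple):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# three random bases of F^{d+1}; the columns of each matrix are the
# basis vectors
U = rng.standard_normal((d + 1, d + 1))
V = rng.standard_normal((d + 1, d + 1))
W = rng.standard_normal((d + 1, d + 1))

# transition matrix from basis X to basis Y: the matrix S with Y = X S
def transition(X, Y):
    return np.linalg.solve(X, Y)

# around the cycle U -> V -> W -> U the transition matrices
# telescope: (U^{-1} V)(V^{-1} W)(W^{-1} U) = I
P = transition(U, V) @ transition(V, W) @ transition(W, U)
assert np.allclose(P, np.eye(d + 1))
```

In the proof above, each factor of the analogous product is identified via Lemmas \ref{lem:moreTTT}, \ref{lem:TI}, \ref{lem:DDD}, and equating the product with the identity yields (\ref{eq:9cycle}).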
\begin{lemma} \label{lem:ABCtrace} We have \begin{eqnarray} {\rm tr}(A^dB^dC^d) = \frac{\theta}{\alpha_d \alpha'_d \alpha''_d}, \qquad \qquad {\rm tr}(C^d B^d A^d) = \frac{1}{\theta \beta_d \beta'_d \beta''_d}. \label{eq:abctrace} \end{eqnarray} \end{lemma} \begin{proof} We verify the equation on the left in (\ref{eq:abctrace}). Let $\lbrace u_i\rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$, such that $u_0 = \eta$. Let $S$ denote the matrix in ${\rm Mat}_{d+1}(\mathbb F)$ that represents $A^dB^dC^d$ with respect to $\lbrace u_i\rbrace_{i=0}^d$. The map $C$ raises the $(A,C)$-decomposition of $V$. Therefore $C^d u_i = 0$ for $1 \leq i \leq d$. By Lemma \ref{lem:COBSC} and Definition \ref{def:theta}, \begin{eqnarray*} A^d B^d C^d \eta = \frac{\theta \eta}{\alpha_d \alpha'_d \alpha''_d}. \end{eqnarray*} By these comments $S$ has $(0,0)$-entry $\theta/(\alpha_d \alpha'_d \alpha''_d)$ and all other entries zero. We have verified the equation on the left in (\ref{eq:abctrace}). The other equation is similarly verified. \end{proof} \begin{lemma} \label{lem:thetaTrace} For $0 \leq i \leq d$, the trace of $E'_{d-i}E_i E''_{d-i} E'_i E_{d-i} E''_i$ is \begin{eqnarray*} \theta \,\frac{\varphi_d \cdots \varphi_{d-i+1}}{\varphi_1 \cdots \varphi_i} \, \frac{\varphi'_d \cdots \varphi'_{d-i+1}}{\varphi'_1 \cdots \varphi'_i} \, \frac{\varphi''_d \cdots \varphi''_{d-i+1}}{\varphi''_1 \cdots \varphi''_i}, \end{eqnarray*} and the trace of $E_{d-i}E'_i E''_{d-i} E_i E'_{d-i} E''_i$ is \begin{eqnarray*} \frac{1}{\theta} \, \frac{\varphi_1 \cdots \varphi_{i}}{\varphi_d \cdots \varphi_{d-i+1}} \, \frac{\varphi'_1 \cdots \varphi'_{i}}{\varphi'_d \cdots \varphi'_{d-i+1}} \, \frac{\varphi''_1 \cdots \varphi''_{i}}{\varphi''_d \cdots \varphi''_{d-i+1}}. \end{eqnarray*} \end{lemma} \begin{proof} We verify the first assertion.
In the product $E'_{d-i}E_i E''_{d-i} E'_i E_{d-i} E''_i$, evaluate each factor using Lemma \ref{lem:Eform2}, and simplify the result using Lemma \ref{lem:ABCtrace} along with the meaning of the parameter array. The first assertion follows after a brief computation. The second assertion is similarly verified. \end{proof} \begin{corollary} \label{cor:thetaint} The trace of $E'_d E_0 E''_d E'_0 E_d E''_0$ is $\theta$. The trace of $E_d E'_0 E''_d E_0 E'_d E''_0$ is $\theta^{-1}$. \end{corollary} \begin{proof} Set $i=0$ in Lemma \ref{lem:thetaTrace}. \end{proof} \section{ How the parameter array, trace data, and Toeplitz data are related, I} \noindent Throughout this section and the next, let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Fix an LR triple $A,B,C$ on $V$. We consider how its parameter array (\ref{eq:paLRT}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}) are related. \noindent Recall Definition \ref{def:natural}. Let $\lbrace u_i \rbrace_{i=0}^d $ denote a basis for $V$ of type $\natural =(A,C)$. Let $C^\natural \in {\rm Mat}_{d+1}(\mathbb F)$ represent $C$ with respect to $\lbrace u_i \rbrace_{i=0}^d $. The entries of $C^\natural $ are given in Proposition \ref{prop:matrixRep}, row $(A,C)$ of the table. Let $\lbrace v_i \rbrace_{i=0}^d $ denote a compatible basis for $V$ of type $\sharp = (A,B)$. Let $C^\sharp \in {\rm Mat}_{d+1}(\mathbb F)$ represent $C$ with respect to $\lbrace v_i \rbrace_{i=0}^d $. The entries of $C^\sharp$ are given in Proposition \ref{prop:matrixRep}, row $(A,B)$ of the table. By Definition \ref{def:TTT}(ii), $T'$ is the transition matrix from $\lbrace u_i \rbrace_{i=0}^d$ to $\lbrace v_i \rbrace_{i=0}^d$. By linear algebra, \begin{equation} \label{eq:NT=TS} C^\natural T' = T' C^\sharp. \end{equation} Consequently \begin{equation} \label{eq:TiNT=S} (T')^{-1}C^\natural T' = C^\sharp. 
\end{equation} \begin{proposition} \label{lem:ai} For $0 \leq i \leq d$, \begin{eqnarray} && a_{d-i}=\alpha'_0 \beta'_1 \varphi''_i + \alpha'_1 \beta'_0 \varphi''_{i+1} = \alpha''_0 \beta''_1 \varphi'_i + \alpha''_1 \beta''_0 \varphi'_{i+1}, \label{eq:aip} \\ && a'_{d-i}= \alpha''_0 \beta''_1 \varphi_i + \alpha''_1 \beta''_0 \varphi_{i+1} = \alpha_0 \beta_1 \varphi''_i + \alpha_1 \beta_0 \varphi''_{i+1}, \label{eq:aipp} \\ && a''_{d-i}= \alpha_0 \beta_1 \varphi'_i + \alpha_1 \beta_0 \varphi'_{i+1} = \alpha'_0 \beta'_1 \varphi_i + \alpha'_1 \beta'_0 \varphi_{i+1}. \label{eq:ai} \end{eqnarray} \end{proposition} \begin{proof} We verify the equation on the left in (\ref{eq:aip}). In the equation (\ref{eq:TiNT=S}), compute the $(d-i,d-i)$-entry of each side, and evaluate the result using Proposition \ref{prop:matrixRep} and Definition \ref{def:TTT}. This yields the equation on the left in (\ref{eq:aip}). To finish the proof, apply this equation to the relatives of $A,B,C$. \end{proof} \noindent We mention some variations on Proposition \ref{lem:ai}. \begin{corollary} \label{cor:aiVar} For $0 \leq i \leq d$, \begin{eqnarray*} && a_0 + a_{1} + \cdots + a_{d-i} = \beta'_1 \varphi''_i = \beta''_1 \varphi'_i, \\ && a'_0 + a'_{1} + \cdots + a'_{d-i} = \beta''_1 \varphi_i = \beta_1 \varphi''_i, \\ && a''_0 + a''_{1} + \cdots + a''_{d-i} = \beta_1 \varphi'_i = \beta'_1 \varphi_i. \end{eqnarray*} \end{corollary} \begin{proof} To verify each equation, evaluate the sum on the left using Proposition \ref{lem:ai}, and simplify the result using Lemma \ref{lem:alpha0}. \end{proof} \begin{corollary} \label{cor:aiVar2} For $0 \leq i \leq d$, \begin{eqnarray*} && a_d + a_{d-1} + \cdots + a_{d-i} = \alpha'_1 \varphi''_{i+1} = \alpha''_1 \varphi'_{i+1}, \\ && a'_d + a'_{d-1} + \cdots + a'_{d-i} = \alpha''_1 \varphi_{i+1} = \alpha_1 \varphi''_{i+1}, \\ && a''_d + a''_{d-1} + \cdots + a''_{d-i} = \alpha_1 \varphi'_{i+1} = \alpha'_1 \varphi_{i+1}. 
\end{eqnarray*} \end{corollary} \begin{proof} To verify each equation, evaluate the sum on the left using Proposition \ref{lem:ai}, and simplify the result using Lemma \ref{lem:alpha0}. \end{proof} \begin{corollary} \label{lem:a0ad} We have \begin{eqnarray*} && a_0 = \beta'_1 \varphi''_d = \beta''_1 \varphi'_d, \qquad \qquad a'_0 = \beta''_1 \varphi_d = \beta_1 \varphi''_d, \qquad \qquad a''_0 = \beta_1 \varphi'_d = \beta'_1 \varphi_d, \\ && a_d = \alpha'_1 \varphi''_1 = \alpha''_1 \varphi'_1, \qquad \qquad a'_d = \alpha''_1 \varphi_1 = \alpha_1 \varphi''_1, \qquad \qquad a''_d = \alpha_1 \varphi'_1 = \alpha'_1 \varphi_1. \end{eqnarray*} \end{corollary} \begin{proof} Set $i=d$ in Corollary \ref{cor:aiVar}, and $i=0$ in Corollary \ref{cor:aiVar2}. \end{proof} \begin{corollary} \label{cor:alphaOneInv} For $1 \leq i \leq d$, \begin{eqnarray} \label{eq:alphaOneInv} \frac{\alpha_1}{\varphi_i} = \frac{\alpha'_1}{\varphi'_i} = \frac{\alpha''_1}{\varphi''_i}, \qquad \qquad \qquad \frac{\beta_1}{\varphi_i} = \frac{\beta'_1}{\varphi'_i} = \frac{\beta''_1}{\varphi''_i}. \end{eqnarray} \end{corollary} \begin{proof} Use Corollaries \ref{cor:aiVar}, \ref{cor:aiVar2}. 
\end{proof} \begin{proposition} \label{prop:goodrec} For $1 \leq i \leq d$, \begin{eqnarray*} &&\frac{\varphi'_i}{\varphi''_{d-i+1}} = \alpha'_0 \beta'_2 \varphi_{i-1}+ \alpha'_1 \beta'_1 \varphi_{i}+ \alpha'_2 \beta'_0 \varphi_{i+1}, \\ &&\frac{\varphi''_i}{\varphi_{d-i+1}} = \alpha''_0 \beta''_2 \varphi'_{i-1}+ \alpha''_1 \beta''_1 \varphi'_{i}+ \alpha''_2 \beta''_0 \varphi'_{i+1}, \\ &&\frac{\varphi_i}{\varphi'_{d-i+1}} = \alpha_0 \beta_2 \varphi''_{i-1}+ \alpha_1 \beta_1 \varphi''_{i}+ \alpha_2 \beta_0 \varphi''_{i+1} \end{eqnarray*} and also \begin{eqnarray*} &&\frac{\varphi''_i}{\varphi'_{d-i+1}} = \alpha''_0 \beta''_2 \varphi_{i-1}+ \alpha''_1 \beta''_1 \varphi_{i}+ \alpha''_2 \beta''_0 \varphi_{i+1}, \\ &&\frac{\varphi_i}{\varphi''_{d-i+1}} = \alpha_0 \beta_2 \varphi'_{i-1}+ \alpha_1 \beta_1 \varphi'_{i}+ \alpha_2 \beta_0 \varphi'_{i+1}, \\ &&\frac{\varphi'_i}{\varphi_{d-i+1}} = \alpha'_0 \beta'_2 \varphi''_{i-1}+ \alpha'_1 \beta'_1 \varphi''_{i}+ \alpha'_2 \beta'_0 \varphi''_{i+1}. \end{eqnarray*} \end{proposition} \begin{proof} We verify the last equation in the proposition statement. In the equation (\ref{eq:TiNT=S}), compute the $(d-i,d-i+1)$-entry of each side, and evaluate the result using Proposition \ref{prop:matrixRep} and Definition \ref{def:TTT}. This yields the last equation in the proposition statement. To finish the proof, apply this equation to the relatives of $A,B,C$. 
\end{proof} \begin{proposition} \label{prop:longrec} For $3 \leq r \leq d+1$ and $0 \leq i \leq d-r+1$, \begin{eqnarray*} && 0 = \alpha'_0 \beta'_r \varphi_i + \alpha'_1 \beta'_{r-1} \varphi_{i+1} + \cdots + \alpha'_r \beta'_0 \varphi_{i+r}, \\ && 0 = \alpha''_0 \beta''_r \varphi'_i + \alpha''_1 \beta''_{r-1} \varphi'_{i+1} + \cdots + \alpha''_r \beta''_0 \varphi'_{i+r}, \\ && 0 = \alpha_0 \beta_r \varphi''_i + \alpha_1 \beta_{r-1} \varphi''_{i+1} + \cdots + \alpha_r \beta_0 \varphi''_{i+r} \end{eqnarray*} and also \begin{eqnarray*} && 0 = \alpha''_0 \beta''_r \varphi_i + \alpha''_1 \beta''_{r-1} \varphi_{i+1} + \cdots + \alpha''_r \beta''_0 \varphi_{i+r}, \\ && 0 = \alpha_0 \beta_r \varphi'_i + \alpha_1 \beta_{r-1} \varphi'_{i+1} + \cdots + \alpha_r \beta_0 \varphi'_{i+r}, \\ && 0 = \alpha'_0 \beta'_r \varphi''_i + \alpha'_1 \beta'_{r-1} \varphi''_{i+1} + \cdots + \alpha'_r \beta'_0 \varphi''_{i+r}. \end{eqnarray*} \end{proposition} \begin{proof} We verify the last equation in the proposition statement. In the equation (\ref{eq:TiNT=S}), compute the $(d-i-r+1,d-i)$-entry of each side, and evaluate the result using Proposition \ref{prop:matrixRep} and Definition \ref{def:TTT}. This yields the last equation in the proposition statement. To finish the proof, apply this equation to the relatives of $A,B,C$. \end{proof} \section{ How the parameter array, trace data, and Toeplitz data are related, II} We continue to discuss our LR triple $A,B,C$ on $V$, with parameter array (\ref{eq:paLRT}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). In the previous section we found a relationship among these scalars, using the equation (\ref{eq:TiNT=S}). In the present section we describe this relationship from the point of view of (\ref{eq:NT=TS}). 
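\noindent Before proceeding, we illustrate the equations of the previous section with a quick consistency check; here we invoke the convention, used implicitly above, that $\varphi_0=0$ and $\varphi_{d+1}=0$, and similarly for the primed sequences. Setting $i=0$ in Corollary \ref{cor:aiVar} and $i=d$ in Corollary \ref{cor:aiVar2}, we obtain \begin{eqnarray*} a_0 + a_1 + \cdots + a_{d} = \beta'_1 \varphi''_0 = 0, \qquad \qquad a_d + a_{d-1} + \cdots + a_0 = \alpha'_1 \varphi''_{d+1} = 0. \end{eqnarray*} By Proposition \ref{prop:matrixRep} the scalars $\lbrace a_i \rbrace_{i=0}^d$ are the diagonal entries of a matrix representing $C$, so each equation asserts that ${\rm tr}(C)=0$; this is consistent with the fact that $C$ is Nil. Similar comments apply to $\lbrace a'_i \rbrace_{i=0}^d$ and $\lbrace a''_i \rbrace_{i=0}^d$.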
\begin{proposition} \label{prop:abI} For $1\leq i\leq d$ and $0 \leq j \leq d-i$, \begin{eqnarray*} &&\alpha'_{i-1} \frac{\varphi'_{j+1}}{\varphi''_{d-j}} + \alpha'_i a''_{d-j} + \alpha'_{i+1} \varphi_j = \alpha'_{i+1} \varphi_{i+j+1}, \\ &&\alpha''_{i-1} \frac{\varphi''_{j+1}}{\varphi_{d-j}} + \alpha''_i a_{d-j} + \alpha''_{i+1} \varphi'_j = \alpha''_{i+1} \varphi'_{i+j+1}, \\ &&\alpha_{i-1} \frac{\varphi_{j+1}}{\varphi'_{d-j}} + \alpha_i a'_{d-j} + \alpha_{i+1} \varphi''_j = \alpha_{i+1} \varphi''_{i+j+1} \end{eqnarray*} and also \begin{eqnarray*} &&\alpha''_{i-1} \frac{\varphi''_{j+1}}{\varphi'_{d-j}} + \alpha''_i a'_{d-j} + \alpha''_{i+1} \varphi_j = \alpha''_{i+1} \varphi_{i+j+1}, \\ &&\alpha_{i-1} \frac{\varphi_{j+1}}{\varphi''_{d-j}} + \alpha_i a''_{d-j} + \alpha_{i+1} \varphi'_j = \alpha_{i+1} \varphi'_{i+j+1}, \\ &&\alpha'_{i-1} \frac{\varphi'_{j+1}}{\varphi_{d-j}} + \alpha'_i a_{d-j} + \alpha'_{i+1} \varphi''_j = \alpha'_{i+1} \varphi''_{i+j+1}. \end{eqnarray*} \end{proposition} \begin{proof} We verify the last equation in the proposition statement. In the equation (\ref{eq:NT=TS}), compute the $(d-i-j,d-j)$-entry of each side, and evaluate the result using Proposition \ref{prop:matrixRep} and Definition \ref{def:TTT}. This yields the last equation of the proposition statement. To finish the proof, apply this equation to the p-relatives of $A,B,C$. \end{proof} \noindent We point out some special cases of Proposition \ref{prop:abI}. 
\begin{corollary} \label{prop:abII} For $1\leq i\leq d-1$, \begin{eqnarray*} && \alpha'_{i-1}\frac{\varphi'_{d-i+1}}{\varphi''_i} + \alpha'_i a''_{i} + \alpha'_{i+1} \varphi_{d-i} = 0, \\ && \alpha''_{i-1}\frac{\varphi''_{d-i+1}}{\varphi_i} + \alpha''_i a_{i} + \alpha''_{i+1} \varphi'_{d-i} = 0, \\ && \alpha_{i-1}\frac{\varphi_{d-i+1}}{\varphi'_i} + \alpha_i a'_{i} + \alpha_{i+1} \varphi''_{d-i} = 0 \end{eqnarray*} and also \begin{eqnarray*} && \alpha''_{i-1}\frac{\varphi''_{d-i+1}}{\varphi'_i} + \alpha''_i a'_{i} + \alpha''_{i+1} \varphi_{d-i} = 0, \\ && \alpha_{i-1}\frac{\varphi_{d-i+1}}{\varphi''_i} + \alpha_i a''_{i} + \alpha_{i+1} \varphi'_{d-i} = 0, \\ && \alpha'_{i-1}\frac{\varphi'_{d-i+1}}{\varphi_i} + \alpha'_i a_{i} + \alpha'_{i+1} \varphi''_{d-i} = 0. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:abI}, assume $i\leq d-1$ and $j=d-i$. \end{proof} \begin{corollary} \label{prop:abIII} For $1\leq i\leq d-1$, \begin{eqnarray*} && \alpha'_{i-1}\frac{\varphi'_1}{\varphi''_d} + \alpha'_i a''_d = \alpha'_{i+1} \varphi_{i+1}, \qquad \quad \alpha''_{i-1}\frac{\varphi''_1}{\varphi'_d} + \alpha''_i a'_d = \alpha''_{i+1} \varphi_{i+1}, \\ && \alpha''_{i-1}\frac{\varphi''_1}{\varphi_d} + \alpha''_i a_d = \alpha''_{i+1} \varphi'_{i+1}, \qquad \quad \alpha_{i-1}\frac{\varphi_1}{\varphi''_d} + \alpha_i a''_d = \alpha_{i+1} \varphi'_{i+1}, \\ && \alpha_{i-1}\frac{\varphi_1}{\varphi'_d} + \alpha_i a'_d = \alpha_{i+1} \varphi''_{i+1}, \qquad \quad \alpha'_{i-1}\frac{\varphi'_1}{\varphi_d} + \alpha'_i a_d = \alpha'_{i+1} \varphi''_{i+1}. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:abI}, assume $i \leq d-1$ and $j=0$. 
\end{proof} \begin{corollary} \label{prop:abIV} For $d\geq 1$, \begin{eqnarray*} && \alpha'_{d-1} \frac{\varphi'_1}{\varphi''_d}+ \alpha'_d a''_d = 0, \quad \qquad \alpha''_{d-1} \frac{\varphi''_1}{\varphi_d}+ \alpha''_d a_d = 0, \quad \qquad \alpha_{d-1} \frac{\varphi_1}{\varphi'_d}+ \alpha_d a'_d = 0, \\ && \alpha''_{d-1} \frac{\varphi''_1}{\varphi'_d}+ \alpha''_d a'_d = 0, \qquad \quad \alpha_{d-1} \frac{\varphi_1}{\varphi''_d}+ \alpha_d a''_d = 0, \qquad \quad \alpha'_{d-1} \frac{\varphi'_1}{\varphi_d}+ \alpha'_d a_d= 0. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:abI}, assume $i=d$ and $j=0$. \end{proof} \begin{proposition} \label{prop:baI} For $1 \leq i \leq d$ and $0 \leq j\leq d-i$, \begin{eqnarray*} &&\beta'_{i-1} \frac{\varphi'_{d-j}}{\varphi''_{j+1}} + \beta'_i a''_{j} + \beta'_{i+1} \varphi_{d-j+1} = \beta'_{i+1} \varphi_{d-i-j}, \\ &&\beta''_{i-1} \frac{\varphi''_{d-j}}{\varphi_{j+1}} + \beta''_i a_{j} + \beta''_{i+1} \varphi'_{d-j+1} = \beta''_{i+1} \varphi'_{d-i-j}, \\ &&\beta_{i-1} \frac{\varphi_{d-j}}{\varphi'_{j+1}} + \beta_i a'_{j} + \beta_{i+1} \varphi''_{d-j+1} = \beta_{i+1} \varphi''_{d-i-j} \end{eqnarray*} and also \begin{eqnarray*} &&\beta''_{i-1} \frac{\varphi''_{d-j}}{\varphi'_{j+1}} + \beta''_i a'_{j} + \beta''_{i+1} \varphi_{d-j+1} = \beta''_{i+1} \varphi_{d-i-j}, \\ &&\beta_{i-1} \frac{\varphi_{d-j}}{\varphi''_{j+1}} + \beta_i a''_{j} + \beta_{i+1} \varphi'_{d-j+1} = \beta_{i+1} \varphi'_{d-i-j}, \\ &&\beta'_{i-1} \frac{\varphi'_{d-j}}{\varphi_{j+1}} + \beta'_i a_{j} + \beta'_{i+1} \varphi''_{d-j+1} = \beta'_{i+1} \varphi''_{d-i-j}. \end{eqnarray*} \end{proposition} \begin{proof} Apply Proposition \ref{prop:abI} to the LR triple $\tilde A, \tilde B, \tilde C$. 
\end{proof} \begin{corollary} For $1\leq i\leq d-1$, \begin{eqnarray*} && \beta'_{i-1}\frac{\varphi'_{i}}{\varphi''_{d-i+1}} + \beta'_i a''_{d-i} + \beta'_{i+1} \varphi_{i+1} = 0, \\ && \beta''_{i-1}\frac{\varphi''_{i}}{\varphi_{d-i+1}} + \beta''_i a_{d-i} + \beta''_{i+1} \varphi'_{i+1} = 0, \\ && \beta_{i-1}\frac{\varphi_{i}}{\varphi'_{d-i+1}} + \beta_i a'_{d-i} + \beta_{i+1} \varphi''_{i+1} = 0 \end{eqnarray*} and also \begin{eqnarray*} && \beta''_{i-1}\frac{\varphi''_{i}}{\varphi'_{d-i+1}} + \beta''_i a'_{d-i} + \beta''_{i+1} \varphi_{i+1} = 0, \\ && \beta_{i-1}\frac{\varphi_{i}}{\varphi''_{d-i+1}} + \beta_i a''_{d-i} + \beta_{i+1} \varphi'_{i+1} = 0, \\ && \beta'_{i-1}\frac{\varphi'_{i}}{\varphi_{d-i+1}} + \beta'_i a_{d-i} + \beta'_{i+1} \varphi''_{i+1} = 0. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:baI}, assume $i \leq d-1$ and $j=d-i$. \end{proof} \begin{corollary} For $1\leq i\leq d-1$, \begin{eqnarray*} && \beta'_{i-1}\frac{\varphi'_d}{\varphi''_1} + \beta'_i a''_0 = \beta'_{i+1} \varphi_{d-i}, \qquad \quad \beta''_{i-1}\frac{\varphi''_d}{\varphi'_1} + \beta''_i a'_0 = \beta''_{i+1} \varphi_{d-i}, \\ && \beta''_{i-1}\frac{\varphi''_d}{\varphi_1} + \beta''_i a_0 = \beta''_{i+1} \varphi'_{d-i}, \qquad \quad \beta_{i-1}\frac{\varphi_d}{\varphi''_1} + \beta_i a''_0 = \beta_{i+1} \varphi'_{d-i}, \\ && \beta_{i-1}\frac{\varphi_d}{\varphi'_1} + \beta_i a'_0 = \beta_{i+1} \varphi''_{d-i}, \qquad \quad \beta'_{i-1}\frac{\varphi'_d}{\varphi_1} + \beta'_i a_0 = \beta'_{i+1} \varphi''_{d-i}. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:baI}, assume $i \leq d-1$ and $j=0$. 
\end{proof} \begin{corollary} For $d\geq 1$, \begin{eqnarray*} && \beta'_{d-1} \frac{\varphi'_d}{\varphi''_1}+ \beta'_d a''_0 = 0, \quad \qquad \beta''_{d-1} \frac{\varphi''_d}{\varphi_1}+ \beta''_d a_0 = 0, \quad \qquad \beta_{d-1} \frac{\varphi_d}{\varphi'_1}+ \beta_d a'_0 = 0, \\ && \beta''_{d-1} \frac{\varphi''_d}{\varphi'_1}+ \beta''_d a'_0 = 0, \qquad \quad \beta_{d-1} \frac{\varphi_d}{\varphi''_1}+ \beta_d a''_0 = 0, \qquad \quad \beta'_{d-1} \frac{\varphi'_d}{\varphi_1}+ \beta'_d a_0 = 0. \end{eqnarray*} \end{corollary} \begin{proof} In Proposition \ref{prop:baI}, assume $i =d$ and $j=0$. \end{proof} \noindent We have displayed many equations relating the parameter array (\ref{eq:paLRT}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). From these equations it is apparent that we can improve on Proposition \ref{prop:IsoParTrace}. We now give some results in this direction. To avoid trivialities we assume $d\geq 1$. \begin{proposition} \label{prop:extra} Assume $d\geq 1$. Then the LR triple $A,B,C$ is uniquely determined up to isomorphism by its parameter array along with any one of the following 12 scalars: \begin{eqnarray} \label{eq:12values} a_0, a'_0, a''_0; \quad \qquad a_d, a'_d, a''_d; \quad \qquad \alpha_1, \alpha'_1, \alpha''_1; \quad \qquad \beta_1, \beta'_1, \beta''_1. \end{eqnarray} \end{proposition} \begin{proof} Use Proposition \ref{prop:IsoParTrace} along with (\ref{eq:list3}), Proposition \ref{lem:ai}, and Corollary \ref{lem:a0ad}. \end{proof} \noindent In our discussion going forward, among the scalars (\ref{eq:12values}) we will put the emphasis on $\alpha_1$. We call $\alpha_1$ the {\it first Toeplitz number} of the LR triple $A,B,C$. \begin{lemma} \label{lem:SpittingField} Assume $d\geq 1$. For the LR triple $A,B,C$ let $\mathbb K$ denote a subfield of $\mathbb F$ that contains the scalars {\rm (\ref{eq:paLRT})} and the first Toeplitz number $\alpha_1$. 
Then there exists an LR triple over $\mathbb K$ that has parameter array {\rm (\ref{eq:paLRT})} and first Toeplitz number $\alpha_1$. \end{lemma} \begin{proof} Represent $A,B,C$ by matrices, using the first row in the table of Proposition \ref{prop:matrixRep}. For the resulting three matrices each entry is in $\mathbb K$. So each matrix represents a $\mathbb K$-linear transformation of a vector space over $\mathbb K$. The resulting three $\mathbb K$-linear transformations form an LR triple over $\mathbb K$ that has parameter array (\ref{eq:paLRT}) and first Toeplitz number $\alpha_1$. \end{proof} \begin{lemma} \label{lem:whatisIso} For the LR triple $A,B,C$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $\varphi_i = \varphi'_i = \varphi''_i $ for $1 \leq i \leq d$; \item[\rm (ii)] the p-relatives of $A,B,C$ are mutually isomorphic; \item[\rm (iii)] the n-relatives of $A,B,C$ are mutually isomorphic. \end{enumerate} \noindent Assume that {\rm (i)--(iii)} hold. Then for $0 \leq i \leq d$, \begin{eqnarray} \label{eq:aaaCom} a_i = a'_i = a''_i, \qquad \qquad \alpha_i = \alpha'_i = \alpha''_i, \qquad \qquad \beta_i = \beta'_i = \beta''_i. \end{eqnarray} \end{lemma} \begin{proof} Assume $d\geq 1$; otherwise (i)--(iii) and (\ref{eq:aaaCom}) all hold. \\ \noindent ${\rm (i)}\Rightarrow {\rm (ii)}$ We have $\alpha_1 = \alpha'_1=\alpha''_1$ by Corollary \ref{cor:alphaOneInv}. The result follows by Proposition \ref{prop:extra}, along with Lemmas \ref{lem:ABCvar}, \ref{lem:newPA} and Definition \ref{def:prel}. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ By Lemmas \ref{lem:ABCvar}, \ref{lem:newPA} and Definition \ref{def:prel}. \\ \noindent ${\rm (i)}\Leftrightarrow {\rm (iii)}$ Similar to the proof of ${\rm (i)}\Leftrightarrow {\rm (ii)}$ above. \\ Assume that (i)--(iii) hold. Then (\ref{eq:aaaCom}) holds by Lemmas \ref{lem:tracedataAlt}, \ref{lem:tracedataDual}, \ref{lem:ToeplitzData}, \ref{lem:ToeplitzDataD}.
\end{proof} \noindent We now compute the Toeplitz data (\ref{eq:ToeplitzData}) in terms of the parameter array (\ref{eq:paLRT}) and any scalar from (\ref{eq:12values}). We will focus on $\lbrace \alpha_i\rbrace_{i=0}^d$ and $\lbrace \beta_i\rbrace_{i=0}^d$. \begin{proposition} \label{prop:AlphaRecursion} For $d\geq 1$ the following {\rm (i), (ii)} hold. \begin{enumerate} \item[\rm (i)] The sequence $\lbrace \alpha_i\rbrace_{i=0}^d$ is computed as follows: $\alpha_0=1$ and $\alpha_1$ is from Corollary \ref{lem:a0ad}. Moreover \begin{eqnarray} &&\alpha_{i+1} = \frac{ \alpha_1 \alpha_{i}\varphi''_1 + \alpha_{i-1} \varphi_1 (\varphi'_d)^{-1}}{\varphi''_{i+1}} \qquad \qquad (1 \leq i \leq d-1). \label{eq:alphaRec} \end{eqnarray} \item[\rm (ii)] The sequence $\lbrace \beta_i\rbrace_{i=0}^d$ is computed as follows: $\beta_0=1$ and $\beta_1$ is from Corollary \ref{lem:a0ad}. Moreover \begin{eqnarray} &&\beta_{i+1} = \frac{ \beta_1 \beta_{i}\varphi''_d + \beta_{i-1} \varphi_d (\varphi'_1)^{-1}}{\varphi''_{d-i}} \qquad \qquad (1 \leq i \leq d-1). \label{eq:betaRec} \end{eqnarray} \end{enumerate} \end{proposition} \begin{proof} (i) We verify (\ref{eq:alphaRec}). Consider the displayed equation in Corollary \ref{prop:abIII} (row 3, column 1). In this equation solve for $\alpha_{i+1}$, and eliminate $a'_d$ using the equation $a'_d = \alpha_1 \varphi''_1$ from Corollary \ref{lem:a0ad}. \\ \noindent (ii) Similar to the proof of (i) above. \end{proof} \noindent We now give some more ways to compute $\lbrace \alpha_i\rbrace_{i=0}^d$ and $\lbrace \beta_i\rbrace_{i=0}^d$. \begin{proposition} \label{prop:AlphaRecursion2} For $d\geq 1$ the following {\rm (i), (ii)} hold. \begin{enumerate} \item[\rm (i)] The sequence $\lbrace \alpha_i\rbrace_{i=0}^d$ is computed as follows: $\alpha_0=1$ and $\alpha_1$ is from Corollary \ref{lem:a0ad}. 
Moreover \begin{eqnarray} &&\alpha_{i+1} = \frac{ \alpha_1 \alpha_{i}(\varphi''_{d-i}-\varphi''_{d-i+1}) - \alpha_{i-1} \varphi_{d-i+1}(\varphi'_{i})^{-1}}{\varphi''_{d-i}} \qquad (1 \leq i \leq d-1). \label{eq:alphaRec2} \end{eqnarray} \item[\rm (ii)] The sequence $\lbrace \beta_i\rbrace_{i=0}^d$ is computed as follows: $\beta_0=1$ and $\beta_1$ is from Corollary \ref{lem:a0ad}. Moreover \begin{eqnarray} &&\beta_{i+1} = \frac{ \beta_1 \beta_{i}(\varphi''_{i+1}-\varphi''_{i}) - \beta_{i-1} \varphi_{i}(\varphi'_{d-i+1})^{-1}}{\varphi''_{i+1}} \qquad (1 \leq i \leq d-1). \label{eq:betaRec2} \end{eqnarray} \end{enumerate} \end{proposition} \begin{proof} (i) We verify (\ref{eq:alphaRec2}). Consider the displayed equation in Corollary \ref{prop:abII} (row 3). In this equation solve for $\alpha_{i+1}$, and eliminate $a'_i$ using the equation $a'_i = \alpha_1 (\varphi''_{d-i+1}-\varphi''_{d-i})$ from Proposition \ref{lem:ai}. \\ \noindent (ii) Similar to the proof of (i) above. \end{proof} \begin{proposition} \label{prop:AlphaRecursion3} For $d\geq 1$ the following {\rm (i)--(iv)} hold: \begin{enumerate} \item[\rm (i)] for $1 \leq i \leq d-1$, \begin{eqnarray*} &&\alpha_1 \alpha_{i} \biggl( 1-\frac{\varphi''_1}{\varphi''_{i+1}} - \frac{\varphi''_{d-i+1}}{\varphi''_{d-i}} \biggr) = \alpha_{i-1} \biggl( \frac{\varphi_1}{\varphi'_d \varphi''_{i+1}} + \frac{\varphi_{d-i+1}}{\varphi'_i\varphi''_{d-i}} \biggr); \end{eqnarray*} \item[\rm (ii)] $\alpha_1 \alpha_d \varphi''_1 = - \alpha_{d-1} \varphi_1 /\varphi'_d$; \item[\rm (iii)] for $1 \leq i \leq d-1$, \begin{eqnarray*} &&\beta_1 \beta_{i} \biggl( 1-\frac{\varphi''_d}{\varphi''_{d-i}} - \frac{\varphi''_{i}}{\varphi''_{i+1}} \biggr) = \beta_{i-1} \biggl( \frac{\varphi_d}{\varphi'_1 \varphi''_{d-i}} + \frac{\varphi_{i}}{\varphi'_{d-i+1}\varphi''_{i+1}} \biggr); \end{eqnarray*} \item[\rm (iv)] $\beta_1 \beta_d \varphi''_d = - \beta_{d-1} \varphi_d /\varphi'_1$. 
\end{enumerate} \end{proposition} \begin{proof} (i) Subtract (\ref{eq:alphaRec2}) from (\ref{eq:alphaRec}) and simplify the result. \\ \noindent (ii) In the displayed equation of Corollary \ref{prop:abIV} (row 1, column 3) eliminate $a'_d$ using the equation $a'_d = \alpha_1 \varphi''_1$ from Corollary \ref{lem:a0ad}. \\ \noindent (iii), (iv) Similar to the proof of (i), (ii) above. \end{proof} \begin{note} \label{note:alphaOne} \rm Referring to Proposition \ref{prop:extra}, if we replace the LR triple $A,B,C$ by the LR triple $-A,-B,-C$ then the parameter array is unchanged, and each scalar in (\ref{eq:12values}) is replaced by its opposite. So in general, the LR triple $A,B,C$ is not determined up to isomorphism by its parameter array. \end{note} \noindent Referring to Proposition \ref{prop:extra} and in light of Note \ref{note:alphaOne}, we now consider the extent to which $\alpha^2_1$ is determined by the parameter array (\ref{eq:paLRT}). \begin{lemma} \label{lem:alphaOneOK} For $d\geq 1$ the scalar $\alpha^2_1$ is related to the parameter array {\rm (\ref{eq:paLRT})} in the following way. \begin{enumerate} \item[\rm (i)] Assume $d=1$. Then \begin{eqnarray} \label{eq:alphaD1OK} \alpha^2_1 = - \frac{\varphi_1}{\varphi'_1 \varphi''_1}. \end{eqnarray} \item[\rm (ii)] Assume $d=2$. Then $\alpha_1=0$ or \begin{eqnarray} \label{eq:alphaD2OK} \alpha^2_1 = - \frac{\varphi_1+\varphi_2}{\varphi'_2 \varphi''_1}. \end{eqnarray} \item[\rm (iii)] Assume $d\geq 2$. Then \begin{eqnarray} \label{eq:DG2OK} \alpha^2_1\biggl( 1- \frac{\varphi''_1}{\varphi''_{2}} - \frac{\varphi''_d}{\varphi''_{d-1}} \biggr) = \frac{\varphi_d}{\varphi''_{d-1}} \frac{1}{\varphi'_1} + \frac{\varphi_1}{\varphi''_{2}} \frac{1}{\varphi'_d}. 
\end{eqnarray} Moreover for $1 \leq i \leq d$, \begin{eqnarray} \alpha^2_1\biggl( \frac{\varphi''_d}{\varphi''_{d-1}} \varphi''_{i-1} - \varphi''_i + \frac{\varphi''_1}{\varphi''_{2}} \varphi''_{i+1} \biggr) = \frac{\varphi_i}{\varphi'_{d-i+1}} - \frac{\varphi_d}{\varphi''_{d-1}} \frac{\varphi''_{i-1}}{\varphi'_1} - \frac{\varphi_1}{\varphi''_{2}} \frac{\varphi''_{i+1}}{\varphi'_d}. \label{eq:dLarge} \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} (i), (ii) Compute the eigenvalues of the matrix (\ref{eq:matrixEig}). \\ \noindent (iii) Using Proposition \ref{prop:AlphaRecursion}, solve for $\alpha_2$, $\beta_2$ in terms of $\alpha_1$ and the parameter array (\ref{eq:paLRT}). To obtain (\ref{eq:DG2OK}), use the above solutions and $\beta_2 = \alpha^2_1 - \alpha_2$. To obtain (\ref{eq:dLarge}), use the above solutions and the third displayed equation in Proposition \ref{prop:goodrec}. \end{proof} \noindent Referring to Lemma \ref{lem:alphaOneOK}(iii), it sometimes happens that in each equation (\ref{eq:DG2OK}), (\ref{eq:dLarge}) the coefficient of $\alpha^2_1$ is zero. We illustrate with two examples. \begin{definition}\rm \label{def:WEYL} The LR triple $A,B,C$ is said to have {\it Weyl type} whenever the LR pairs $A,B$ and $B,C$ and $C,A$ all have Weyl type, in the sense of Definition \ref{def:Weyl}. In this case, $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$. Moreover \begin{eqnarray} &&AB-BA=I, \qquad \qquad BC-CB=I, \qquad\qquad CA-AC=I, \label{eq:tripleWeyl} \\ && \label{eq:eqeq} \qquad \qquad \qquad \varphi_i = \varphi'_i= \varphi''_i = i \qquad \qquad (1 \leq i \leq d). \end{eqnarray} \end{definition} \begin{lemma} Assume that the LR triple $A,B,C$ has Weyl type. Then each p-relative of $A,B,C$ has Weyl type. \end{lemma} \begin{proof} By Lemma \ref{lem:whatisIso} and (\ref{eq:eqeq}). \end{proof} \begin{lemma} \label{lem:except2WEYL} Assume that $d\geq 2$ and $A,B,C$ has Weyl type. 
Then in each of {\rm (\ref{eq:DG2OK})}, {\rm (\ref{eq:dLarge})} the coefficient of $\alpha^2_1$ is zero. Moreover the right-hand side is zero. \end{lemma} \begin{proof} This is readily checked using (\ref{eq:eqeq}). \end{proof} \noindent Assume that $A,B,C$ has Weyl type. Then equations (\ref{eq:DG2OK}), (\ref{eq:dLarge}) give no information about $\alpha_1$. To compute $\alpha_1$ we use the following result. \begin{lemma} \label{lem:ABCsum} Assume that $A,B,C$ has Weyl type. Then \begin{eqnarray} A + B + C = \alpha_1 I. \label{eq:ABCsum} \end{eqnarray} \end{lemma} \begin{proof} Represent $A,B,C$ by matrices, using for example the first row in the table of Proposition \ref{prop:matrixRep}. By Proposition \ref{lem:ai} we have $a_i = \alpha_1 $ for $0 \leq i \leq d$. \end{proof} \begin{lemma} \label{lem:WEYLalphaZero} Assume that $A,B,C$ has Weyl type. Then $\alpha_1=1$ if $d=1$, and $\alpha_1=0$ if $d\geq 2$. \end{lemma} \begin{proof} Recall from Definition \ref{def:WEYL} that $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$. First assume that $d=1$. Then by Lemma \ref{lem:alphaOneOK}(i) and since ${\rm Char}(\mathbb F)=2$, we obtain $\alpha^2_1=1$. Again using ${\rm Char}(\mathbb F)=2$ we find $\alpha_1=1 $. Next assume that $d\geq 2$. By Lemma \ref{lem:ABCsum}, $C-\alpha_1 I = -A-B$. On one hand, the pair $B,C$ is an LR pair on $V$, so $C$ is Nil by Lemma \ref{lem:ABdecIndNil}. On the other hand, by Lemma \ref{lem:curious} the pair $B,-A-B$ is an LR pair on $V$, so $-A-B$ is Nil by Lemma \ref{lem:ABdecIndNil}. By these comments, both $C$ and $C-\alpha_1 I$ are Nil. Considering their eigenvalues we obtain $\alpha_1=0$. \end{proof} \begin{lemma} \label{lem:alphabetaWeyl} Assume that $d\geq 2$ and $A,B,C$ has Weyl type. Then \begin{eqnarray*} \alpha_{2i}= \frac{(-1)^i}{ 2^{i}i!}, \qquad \qquad \beta_{2i}= \frac{1}{ 2^{i}i!} \qquad \qquad (0 \leq i \leq d/2). \end{eqnarray*} Also $\alpha_{2i+1}=0$ and $\beta_{2i+1}= 0$ for $0 \leq i<d/2$. 
\end{lemma} \begin{proof} Use Proposition \ref{prop:AlphaRecursion} and Lemma \ref{lem:WEYLalphaZero}. \end{proof} \begin{proposition} \label{prop:Weyclass} The following are equivalent: \begin{enumerate} \item[\rm (i)] $p=d+1$ is prime and ${\rm Char}(\mathbb F)=p$; \item[\rm (ii)] there exists an LR triple $A,B,C$ over $\mathbb F$ that has diameter $d$ and Weyl type. \end{enumerate} Assume that {\rm (i)}, {\rm (ii)} hold. Then $A,B,C$ is unique up to isomorphism. \end{proposition} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ By Lemma \ref{ex:Weylback}, there exists an LR pair $A,B$ over $\mathbb F$ that has diameter $d$ and Weyl type. Define $C=I-A-B$ if $d=1$, and $C=-A-B$ if $d\geq 2$. For $d=1$ one routinely verifies that $A,B,C$ is an LR triple of Weyl type. Assume that $d\geq 2$. By Lemma \ref{lem:curious} and Definition \ref{def:WEYL} the sequence $A,B,C$ is an LR triple of Weyl type. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ By Definition \ref{def:WEYL}. \\ \noindent Assume that (i), (ii) hold. The LR triple $A,B,C$ is unique up to isomorphism by Proposition \ref{prop:extra}, line (\ref{eq:eqeq}), and Lemma \ref{lem:WEYLalphaZero}. \end{proof} \noindent We continue to discuss our LR triple $A,B,C$ on $V$, with parameter array (\ref{eq:paLRT}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). \begin{definition}\rm \label{def:qExceptional} Pick a nonzero $q \in \mathbb F$ such that $q^2\not=1$. The LR triple $A,B,C$ is said to have {\it $q$-Weyl type} whenever the LR pairs $A,B$ and $B,C$ and $C,A$ all have $q$-Weyl type, in the sense of Definition \ref{def:qWeyl}. In this case $d,q$ or $d,-q$ is standard. Moreover \begin{eqnarray} \label{eq:WWW} &&\frac{qAB-q^{-1}BA}{q-q^{-1}}=I, \qquad \quad \frac{qBC-q^{-1}CB}{q-q^{-1}}=I, \qquad \quad \frac{qCA-q^{-1}AC}{q-q^{-1}}=I, \\ && \qquad \qquad \qquad \varphi_i = \varphi'_i= \varphi''_i = 1-q^{-2i} \qquad \qquad (1 \leq i \leq d).
\label{eq:vvv} \end{eqnarray} \end{definition} \begin{lemma} With reference to Definition \ref{def:qExceptional}, assume that $A,B,C$ has $q$-Weyl type. Then $A,B,C$ has $(-q)$-Weyl type. \end{lemma} \begin{proof} Use Note \ref{note:minusq}. \end{proof} \begin{lemma} \label{lem:qiWeyl} With reference to Definition \ref{def:qExceptional}, assume that $A,B,C$ has $q$-Weyl type. Then each p-relative of $A,B,C$ has $q$-Weyl type. Moreover each n-relative of $A,B,C$ has $(q^{-1})$-Weyl type. \end{lemma} \begin{proof} By Note \ref{note:minusq} and Definition \ref{def:prel}. \end{proof} \begin{lemma} \label{lem:except2} With reference to Definition \ref{def:qExceptional}, assume that $d\geq 2$ and $A,B,C$ has $q$-Weyl type. Then in each of {\rm (\ref{eq:DG2OK})}, {\rm (\ref{eq:dLarge})} the coefficient of $\alpha^2_1$ is zero. Moreover the right-hand side is zero. \end{lemma} \begin{proof} This is readily checked. \end{proof} \noindent With reference to Definition \ref{def:qExceptional}, assume that $d\geq 2$ and $A,B,C$ has $q$-Weyl type. Then (\ref{eq:DG2OK}), (\ref{eq:dLarge}) give no information about $\alpha_1$. To get some information about $\alpha_1$ we turn to Lemma \ref{lem:etapp}. \begin{lemma} With reference to Definition \ref{def:qExceptional}, assume that $A,B,C$ has $q$-Weyl type. Then \begin{eqnarray} A/\varphi_1-B/\varphi_d = \frac{qA+q^{-1}B}{q-q^{-1}}. \label{eq:ABcheck} \end{eqnarray} \end{lemma} \begin{proof} Use (\ref{eq:vvv}). \end{proof} \noindent Recall Assumption \ref{def:sqRoot}. \begin{lemma} \label{lem:alphaOneList} With reference to Assumption \ref{def:sqRoot} and Definition \ref{def:qExceptional}, assume that $A,B,C$ has $q$-Weyl type. Then for the element {\rm (\ref{eq:ABcheck})} the roots of the characteristic polynomial are \begin{eqnarray} \frac{q^{j+1/2}+q^{-j-1/2}}{q-q^{-1}} \qquad \qquad (0 \leq j \leq d). \label{eq:thetaCalc2} \end{eqnarray} Moreover $\alpha_1$ is contained in the list {\rm (\ref{eq:thetaCalc2})}. 
\end{lemma} \begin{proof} The first assertion follows from Lemma \ref{lem:qAqiB}. The second assertion follows from Lemma \ref{lem:etapp} and (\ref{eq:vvv}). \end{proof} \noindent We now consider which values of (\ref{eq:thetaCalc2}) could equal $\alpha_1$. \begin{lemma} \label{lem:sixeqs} With reference to Definition \ref{def:qExceptional}, assume that $A,B,C$ has $q$-Weyl type. Then $\alpha_1 (q-q^{-1})I$ is equal to each of \begin{eqnarray*} && q A + q^{-1}B + qC - qABC, \qquad \qquad q^{-1}A+qB+q^{-1}C- q^{-1}CBA, \\ && q B + q^{-1}C + qA - qBCA, \qquad \qquad q^{-1}B+qC+q^{-1}A- q^{-1}ACB, \\ && q C + q^{-1}A + q B - qCAB, \qquad \qquad q^{-1}C+qA+q^{-1}B- q^{-1}BAC. \end{eqnarray*} \end{lemma} \begin{proof} Represent $A,B,C$ by matrices, using for example the first row in the table of Proposition \ref{prop:matrixRep}. By Proposition \ref{lem:ai} we have $a_i = \alpha_1 q^{2i+1}(q-q^{-1})$ for $0 \leq i \leq d$. \end{proof} \begin{proposition} \label{prop:AllqWeyl} Assume that $\mathbb F$ is algebraically closed. With reference to Assumption \ref{def:sqRoot}, pick an integer $j$ $(0 \leq j \leq d)$ and define \begin{eqnarray} \alpha_1= \frac{q^{j+1/2} + q^{-j-1/2}}{q-q^{-1}}. \label{eq:alphafix} \end{eqnarray} Then there exists an LR triple over $\mathbb F$ that has diameter $d$ and $q$-Weyl type, with first Toeplitz number $\alpha_1$. This LR triple is uniquely determined up to isomorphism by $d,q,j$. For this LR triple, \begin{eqnarray} \label{eq:alphaiBasic} && \alpha_i = \frac{(-1)^i q^{-i/2}}{(q^{-1};q^{-1})_i}\; {}_3\phi_2 \Biggl( \genfrac{}{}{0pt}{} {q^{i}, \;-q^{-j-1},\;-q^{j}} {0, \;\;-q^{-1}} \;\Bigg\vert \; q^{-1},\;q^{-1}\Biggr) \qquad \qquad (0 \leq i \leq d), \\ && \beta_i = \frac{(-1)^i q^{i/2}}{(q;q)_i}\; {}_3\phi_2 \Biggl( \genfrac{}{}{0pt}{} {q^{-i}, \;-q^{j+1},\;-q^{-j}} {0, \;\;-q} \;\Bigg\vert \; q,\;q\Biggr) \label{eq:betaiBasic} \qquad \qquad (0 \leq i \leq d). 
\end{eqnarray} \end{proposition} \begin{proof} By Lemma \ref{ex:exceptional} there exists an LR pair $A,B$ on $V$ that has $q$-Weyl type. Its parameter sequence $\lbrace \varphi_i \rbrace_{i=1}^d$ satisfies $\varphi_i = 1-q^{-2i}$ for $1 \leq i \leq d$. With respect to an $(A,B)$-basis of $V$ the matrices representing $A,B$ are given as shown in the first row of the table in Proposition \ref{prop:matrixRep}. Define $C\in {\rm End}(V)$ such that with respect to the $(A,B)$-basis, the matrix representing $C$ is given as shown in the first row of the table, using $a_i = \alpha_1 q^{2i+1}(q-q^{-1})$ for $0 \leq i \leq d$ and $\varphi'_i = \varphi''_i = \varphi_i$ for $1 \leq i \leq d$. We show that $A,B,C$ is an LR triple on $V$ that has $q$-Weyl type. To do this, it suffices to show that $B,C$ and $C,A$ are LR pairs on $V$ that have $q$-Weyl type. We now show that $B,C$ is an LR pair on $V$ that has $q$-Weyl type. To this end we apply Lemma \ref{lem:qWeylABextend} to the pair $B,C$. From the matrix representations we see that $B,C$ satisfy the middle equation in (\ref{eq:WWW}). The map $B$ is not invertible, since $B$ is Nil by Lemma \ref{lem:ABdecIndNil}. We show that $C$ is not invertible. From the matrix representations we obtain \begin{eqnarray} \label{eq:tocheck} \alpha_1 (q-q^{-1})I = qA+q^{-1}B + qC -qABC. \end{eqnarray} Rearranging (\ref{eq:tocheck}), \begin{eqnarray} \alpha_1 (q-q^{-1})I - qA - q^{-1}B = q(I-AB)C. \label{eq:BAC} \end{eqnarray} By assumption (\ref{eq:alphafix}) along with Definition \ref{def:thetaI} and Lemma \ref{lem:qAqiB}, $\alpha_1(q-q^{-1})$ is an eigenvalue of $qA+q^{-1}B$. So in (\ref{eq:BAC}) the expression on the left is not invertible. Therefore $(I-AB)C$ is not invertible. Note that $I-AB$ is diagonalizable with eigenvalues $\lbrace 1-\varphi_{i+1}\rbrace_{i=0}^d$. Moreover $1-\varphi_{i+1} = q^{-2i-2}\not=0$ for $0 \leq i \leq d$. Therefore $I-AB$ is invertible. By these comments $C$ is not invertible.
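\noindent The relations used so far can be checked numerically in small diameter. The sketch below is illustrative only and is not part of the proof: the matrix convention for the $(A,B)$-basis ($A$ acting as a lowering shift, $B$ as a raising map with parameters $\varphi_i = 1-q^{-2i}$) is an assumption standing in for the first row of the table in Proposition \ref{prop:matrixRep}, and $q^{1/2}$ is taken via the principal branch.

```python
import numpy as np

# Illustrative check for diameter d = 2.  Assumed (hypothetical) conventions:
# A v_i = v_{i-1} (lowering shift), B v_i = phi_{i+1} v_{i+1} (raising map),
# phi_i = 1 - q^(-2i), and q a primitive (2d+2)-th root of unity, so that
# the pair d, q is standard.
d = 2
q = np.exp(1j * np.pi / (d + 1))                  # q^(2d+2) = 1
phi = [1 - q ** (-2 * i) for i in range(d + 2)]   # phi_0 = 0 and phi_{d+1} = 0

A = np.zeros((d + 1, d + 1), dtype=complex)
B = np.zeros((d + 1, d + 1), dtype=complex)
for i in range(1, d + 1):
    A[i - 1, i] = 1          # lowering shift
    B[i, i - 1] = phi[i]     # raising map with parameters phi_i

# The q-Weyl relation (q AB - q^{-1} BA)/(q - q^{-1}) = I.
weyl = (q * A @ B - B @ A / q) / (q - 1 / q)

# I - AB is diagonal with entries q^(-2i-2), hence invertible.
iab_diag = np.diag(np.eye(d + 1) - A @ B)

# Eigenvalues of (q A + q^{-1} B)/(q - q^{-1}) versus the claimed list
# (q^(j+1/2) + q^(-j-1/2))/(q - q^{-1}) for 0 <= j <= d.
theta = np.linalg.eigvals((q * A + B / q) / (q - 1 / q))
theta_expected = np.array([(q ** (j + 0.5) + q ** (-j - 0.5)) / (q - 1 / q)
                           for j in range(d + 1)])
```

With $d=2$ and $q$ a primitive sixth root of unity, this confirms the first relation in (\ref{eq:WWW}), the invertibility of $I-AB$, and that the eigenvalues of $(qA+q^{-1}B)/(q-q^{-1})$ form the list (\ref{eq:thetaCalc2}).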
Applying Lemma \ref{lem:qWeylABextend} to the pair $B,C$ we find that $B,C$ is an LR pair on $V$ that has $q$-Weyl type. One similarly shows that $C,A$ is an LR pair on $V$ that has $q$-Weyl type. Now by Definition \ref{def:qExceptional} the triple $A,B,C$ is an LR triple on $V$ that has $q$-Weyl type. Comparing (\ref{eq:tocheck}) with the first expression in the display of Lemma \ref{lem:sixeqs}, we see that $A,B,C$ has first Toeplitz number $\alpha_1$. We have displayed an LR triple over $\mathbb F$ that has diameter $d$ and $q$-Weyl type, with first Toeplitz number $\alpha_1$. This LR triple is unique up to isomorphism by Proposition \ref{prop:extra} and line (\ref{eq:vvv}). To obtain (\ref{eq:alphaiBasic}) use the eigenvector assertion in Lemma \ref{lem:bMatrixPre} along with Lemma \ref{lem:matrixA}. To obtain (\ref{eq:betaiBasic}), apply (\ref{eq:alphaiBasic}) to any n-relative of $A,B,C$ and use Lemma \ref{lem:qiWeyl}. \end{proof} \section{Bipartite LR triples} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We describe a condition on $A,B,C$ called bipartite. \begin{definition} \label{def:LRTbip} \rm The LR triple $A,B,C$ is called {\it bipartite} whenever each of $a_i, a'_i, a''_i$ is zero for $0 \leq i \leq d$. \end{definition} \begin{lemma} \label{lem:TrivBip} If $A,B,C$ is trivial then it is bipartite. \end{lemma} \begin{proof} Set $d=0$ in Lemma \ref{lem:aisum} to see that each of $a_0, a'_0, a''_0$ is zero. \end{proof} \begin{lemma} \label{lem:bipRel} Assume that $A,B,C$ is bipartite (resp. nonbipartite). Then each relative of $A,B,C$ is bipartite (resp. nonbipartite). \end{lemma} \begin{proof} Use Lemmas \ref{lem:tracedataAlt}, \ref{lem:tracedataDual}. 
\end{proof} \begin{lemma} \label{lem:BPabc} Assume that $A,B,C$ is bipartite (resp. nonbipartite). Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triple $\alpha A, \beta B, \gamma C$ is bipartite (resp. nonbipartite). \end{lemma} \begin{proof} Use Lemma \ref{lem:traceAdj}. \end{proof} \begin{lemma} \label{lem:NotB} Assume that $A,B,C$ is nonbipartite. Then $d\geq 1$. Moreover each of \begin{eqnarray} \label{eq:NotBip} \alpha_1, \qquad \alpha'_1, \qquad \alpha''_1, \qquad \beta_1, \qquad \beta'_1, \qquad \beta''_1 \end{eqnarray} is nonzero. \end{lemma} \begin{proof} We have $d\geq 1 $ by Lemma \ref{lem:TrivBip}. We show $\alpha_1 \not= 0 $. Suppose $\alpha_1 = 0$. By Corollary \ref{cor:alphaOneInv} we obtain $\alpha'_1= 0$ and $\alpha''_1= 0$. Observe that $\beta_1 = -\alpha_1= 0$. Similarly $\beta'_1= 0$ and $\beta''_1= 0$. Now by Proposition \ref{lem:ai} each of $a_i, a'_i, a''_i$ is zero for $0 \leq i \leq d$. Now $A,B,C$ is bipartite, for a contradiction. We have shown that $\alpha_1 \not=0$; by Lemma \ref{lem:bipRel} the other scalars in (\ref{eq:NotBip}) are nonzero. \end{proof} \begin{lemma} \label{lem:case} Assume that $A,B,C$ is bipartite. Then $d$ is even. Moreover for $0 \leq i \leq d$, each of \begin{eqnarray} \label{eq:BipZero} \alpha_i, \qquad \alpha'_i, \qquad \alpha''_i, \qquad \beta_i, \qquad \beta'_i, \qquad \beta''_i \end{eqnarray} is zero if $i$ is odd and nonzero if $i$ is even. \end{lemma} \begin{proof} We claim that $\alpha_i$ is zero if $i$ is odd and nonzero if $i$ is even. We prove the claim by induction on $i$. The claim holds for $i=0$, since $\alpha_0=1$. The claim holds for $i=1$, since $\alpha_1=0$ by Corollary \ref{lem:a0ad} and our assumption that $A,B,C$ is bipartite. The claim holds for $2 \leq i \leq d$ by Proposition \ref{prop:AlphaRecursion}(i) and induction. The claim is proven. By Lemma \ref{lem:bipRel} the other scalars in (\ref{eq:BipZero}) are zero if $i$ is odd and nonzero if $i$ is even. 
The diameter $d$ must be even by the first assertion of Lemma \ref{lem:NZ}. \end{proof} \noindent As we continue to discuss LR triples, we will often treat the bipartite and nonbipartite cases separately. For the next few results, we consider the nonbipartite case. \begin{lemma} \label{lem:alphaiAlpha1} Assume that $A,B,C$ is nonbipartite. Then for $0 \leq i \leq d$, \begin{eqnarray} \label{ex:alphaiAlpha1} \frac{\alpha_i}{\alpha^i_1} = \frac{\alpha'_i}{(\alpha'_1)^i} = \frac{\alpha''_i}{(\alpha''_1)^i}, \qquad \qquad \qquad \frac{\beta_i}{\beta^i_1} = \frac{\beta'_i}{(\beta'_1)^i} = \frac{\beta''_i}{(\beta''_1)^i}. \end{eqnarray} \end{lemma} \begin{proof} By Lemma \ref{lem:alphaInv} and Corollary \ref{cor:alphaOneInv}. \end{proof} \begin{lemma} Assume that $A,B,C$ is nonbipartite. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR triples $A,B,C$ and $\alpha A,\beta B, \gamma C$ are isomorphic; \item[\rm (ii)] $\alpha = \beta = \gamma =1$. \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:ToeplitzAdjust}. \end{proof} \noindent We turn our attention to bipartite LR triples. \begin{lemma} \label{lem:BiPbasic} Assume that $A,B,C$ is bipartite and nontrivial. Then \begin{enumerate} \item[\rm (i)] $\alpha_1, \alpha'_1, \alpha''_1 $ and $\beta_1, \beta'_1, \beta''_1 $ are all zero; \item[\rm (ii)] $\alpha_2, \alpha'_2, \alpha''_2 $ and $\beta_2, \beta'_2, \beta''_2 $ are all nonzero; \item[\rm (iii)] We have \begin{eqnarray} \beta_2 = - \alpha_2, \qquad\qquad \beta'_2 = - \alpha'_2, \qquad\qquad \beta''_2 = - \alpha''_2. \label{eq:Al2Be2} \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} (i), (ii) By Lemma \ref{lem:case}. \\ \noindent (iii) From the discussion above Lemma \ref{lem:reform1}. \end{proof} \begin{lemma} \label{lem:UNIQUE} A bipartite LR triple is uniquely determined up to isomorphism by its parameter array.
\end{lemma} \begin{proof} By Proposition \ref{prop:IsoParTrace} and Definition \ref{def:LRTbip}. \end{proof} \begin{lemma} Assume that $A,B,C$ is bipartite. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR triples $A,B,C$ and $\alpha A,\beta B, \gamma C$ are isomorphic; \item[\rm (ii)] $\alpha = \beta = \gamma \in \lbrace 1,-1\rbrace$. \end{enumerate} \end{lemma} \begin{proof} Use Lemmas \ref{lem:albegaCor}, \ref{lem:UNIQUE}. \end{proof} \begin{lemma} \label{lem:V0V1} Assume that $A,B,C$ is bipartite, so that $d=2 m$ is even. \begin{enumerate} \item[\rm (i)] The following subspaces are equal: \begin{equation} \label{eq:3verOut} \sum_{j=0}^m E_{2j} V, \qquad \quad \sum_{j=0}^m E'_{2j} V, \qquad \quad \sum_{j=0}^m E''_{2j} V. \end{equation} \item[\rm (ii)] The following subspaces are equal: \begin{equation} \label{eq:3verIn} \sum_{j=0}^{m-1} E_{2j+1} V, \qquad \quad \sum_{j=0}^{m-1} E'_{2j+1} V, \qquad \quad \sum_{j=0}^{m-1} E''_{2j+1} V. \end{equation} \item[\rm (iii)] Let $V_{\rm out}$ and $V_{\rm in}$ denote the common values of {\rm (\ref{eq:3verOut})} and {\rm (\ref{eq:3verIn})}, respectively. Then \begin{equation} \label{eq:V0V1} V= V_{\rm out}+V_{\rm in} \qquad \quad {\mbox{\rm (direct sum).}} \end{equation} \item[\rm (iv)] We have \begin{equation} {\rm dim}(V_{\rm out}) = m+1, \qquad \qquad {\rm dim}(V_{\rm in}) = m. \label{eq:V0V1dim} \end{equation} \end{enumerate} \end{lemma} \begin{proof} (i) Denote the sequence in (\ref{eq:3verOut}) by $U$, $U'$, $U''$. We show $U' = U$. The sequence $\lbrace E_iV\rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$. Therefore $\lbrace E_{d-i}V\rbrace_{i=0}^d$ is the $(B,A)$-decomposition of $V$. The sequence $\lbrace E'_{i}V\rbrace_{i=0}^d$ is the $(B,C)$-decomposition of $V$. Let $\lbrace u_i\rbrace_{i=0}^d$ denote a $(B,A)$-basis for $V$. Let $\lbrace v_i\rbrace_{i=0}^d$ denote a compatible $(B,C)$-basis for $V$. 
Thus for $0 \leq i \leq d$, $u_i$ (resp. $v_i$) is a basis for $E_{d-i}V$ (resp. $E'_iV$). Consequently $\lbrace u_{2j}\rbrace_{j=0}^m$ and $\lbrace v_{2j}\rbrace_{j=0}^m$ are bases for $U$ and $U'$, respectively. The matrix $T''$ from Definition \ref{def:TTT}(iii) is the transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$. By construction $T''$ is upper triangular with $(i,r)$-entry $\alpha''_{r-i}$ for $0 \leq i\leq r\leq d$. By Lemma \ref{lem:case} the scalars $\alpha''_1, \alpha''_3,\ldots, \alpha''_{d-1}$ are zero. So the $(i,r)$-entry of $T''$ is zero if $r-i$ is odd $(0 \leq i \leq r\leq d)$. By these comments $v_{2j} \in U$ for $0 \leq j \leq m$. Therefore $U' \subseteq U$. In this inclusion each side has dimension $m+1$, so $U'=U$. One similarly shows that $U''=U'$. \\ \noindent (ii) Similar to the proof of (i) above. \\ \noindent (iii), (iv) The sequence $\lbrace E_iV\rbrace_{i=0}^d$ is a decomposition of $V$. \end{proof} \begin{definition}\rm Referring to Lemma \ref{lem:V0V1} and following Definition \ref{def:OUTERINNER}, we call $V_{\rm out}$ (resp. $V_{\rm in}$) the {\it outer part} (resp. {\it inner part}) of $V$ with respect to $A,B,C$. \end{definition} \begin{lemma} \label{lem:trivialT} Assume that $A,B,C$ is bipartite. Then $V_{\rm out}\not=0$ and $V_{\rm in}\not=V$. Moreover the following are equivalent: {\rm (i)} $A,B$ is trivial; {\rm (ii)} $V_{\rm out}=V$; {\rm (iii)} $V_{\rm in}=0$. \end{lemma} \begin{proof} By Lemma \ref{lem:LRPtrivialT}. \end{proof} \begin{lemma} \label{lem:bipABCact} Assume that $A,B,C$ is bipartite. Then \begin{eqnarray*} && AV_{\rm out} = V_{\rm in}, \qquad \qquad BV_{\rm out} = V_{\rm in}, \qquad \qquad CV_{\rm out} = V_{\rm in}, \\ && AV_{\rm in} \subseteq V_{\rm out}, \qquad \qquad BV_{\rm in} \subseteq V_{\rm out}, \qquad \qquad CV_{\rm in} \subseteq V_{\rm out}. 
\end{eqnarray*} Moreover \begin{eqnarray*} && A^2V_{\rm out} \subseteq V_{\rm out}, \qquad \qquad B^2V_{\rm out} \subseteq V_{\rm out}, \qquad \qquad C^2V_{\rm out} \subseteq V_{\rm out}, \\ && A^2V_{\rm in} \subseteq V_{\rm in}, \qquad \qquad B^2V_{\rm in} \subseteq V_{\rm in}, \qquad \qquad C^2V_{\rm in} \subseteq V_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemma \ref{lem:ABaction}. \end{proof} \begin{definition}\rm \label{def:BipNotation} For notational convenience define \begin{eqnarray*} t_i = \frac{\varphi'_{d-i+1} \varphi''_{d-i+1}}{\varphi_i}, \qquad \quad t'_i = \frac{\varphi''_{d-i+1} \varphi_{d-i+1}}{\varphi'_i}, \qquad \quad t''_i = \frac{\varphi_{d-i+1} \varphi'_{d-i+1}}{\varphi''_i} \end{eqnarray*} for $1 \leq i \leq d$, and \begin{eqnarray*} t_0=0, \qquad t'_0 =0, \qquad t''_0=0, \qquad t_{d+1}=0, \qquad t'_{d+1} = 0, \qquad t''_{d+1}=0. \end{eqnarray*} \end{definition} \noindent The following two lemmas are obtained by routine computation. \begin{lemma} \label{lem:A2B2C2Out} Assume that $A,B,C$ is bipartite. Then the action of $A^2, B^2, C^2$ on $V_{\rm out}$ is an LR triple with diameter $m=d/2$. 
For this LR triple, \begin{enumerate} \item[\rm (i)] the parameter array is \begin{eqnarray*} ( \lbrace \varphi_{2j-1}\varphi_{2j}\rbrace_{j=1}^m; \lbrace \varphi'_{2j-1}\varphi'_{2j}\rbrace_{j=1}^m; \lbrace \varphi''_{2j-1}\varphi''_{2j}\rbrace_{j=1}^m); \end{eqnarray*} \item[\rm (ii)] the idempotent data is \begin{eqnarray*} ( \lbrace E_{2j}\rbrace_{j=0}^m; \lbrace E'_{2j}\rbrace_{j=0}^m; \lbrace E''_{2j}\rbrace_{j=0}^m ); \end{eqnarray*} \item[\rm (iii)] the trace data is (using the notation of Definition \ref{def:BipNotation}) \begin{eqnarray*} ( \lbrace t_{2j} + t_{2j+1} \rbrace_{j=0}^m; \lbrace t'_{2j} + t'_{2j+1} \rbrace_{j=0}^m; \lbrace t''_{2j} + t''_{2j+1} \rbrace_{j=0}^m); \end{eqnarray*} \item[\rm (iv)] the Toeplitz data is \begin{eqnarray*} ( \lbrace \alpha_{2j}\rbrace_{j=0}^m, \lbrace \beta_{2j}\rbrace_{j=0}^m; \lbrace \alpha'_{2j}\rbrace_{j=0}^m, \lbrace \beta'_{2j}\rbrace_{j=0}^m; \lbrace \alpha''_{2j}\rbrace_{j=0}^m, \lbrace \beta''_{2j}\rbrace_{j=0}^m ). \end{eqnarray*} \end{enumerate} \end{lemma} \begin{lemma} \label{lem:A2B2C2In} Assume that $A,B,C$ is bipartite and nontrivial. Then the action of $A^2, B^2, C^2$ on $V_{\rm in}$ is an LR triple with diameter $m-1$, where $m=d/2$. 
For this LR triple, \begin{enumerate} \item[\rm (i)] the parameter array is \begin{eqnarray*} ( \lbrace \varphi_{2j}\varphi_{2j+1}\rbrace_{j=1}^{m-1}; \lbrace \varphi'_{2j}\varphi'_{2j+1}\rbrace_{j=1}^{m-1}; \lbrace \varphi''_{2j}\varphi''_{2j+1}\rbrace_{j=1}^{m-1}); \end{eqnarray*} \item[\rm (ii)] the idempotent data is \begin{eqnarray*} ( \lbrace E_{2j+1}\rbrace_{j=0}^{m-1}; \lbrace E'_{2j+1}\rbrace_{j=0}^{m-1}; \lbrace E''_{2j+1}\rbrace_{j=0}^{m-1} ); \end{eqnarray*} \item[\rm (iii)] the trace data is (using the notation of Definition \ref{def:BipNotation}) \begin{eqnarray*} ( \lbrace t_{2j+1} + t_{2j+2} \rbrace_{j=0}^{m-1}; \lbrace t'_{2j+1} + t'_{2j+2} \rbrace_{j=0}^{m-1}; \lbrace t''_{2j+1} + t''_{2j+2} \rbrace_{j=0}^{m-1}); \end{eqnarray*} \item[\rm (iv)] the Toeplitz data is \begin{eqnarray*} ( \lbrace \alpha_{2j}\rbrace_{j=0}^{m-1}, \lbrace \beta_{2j}\rbrace_{j=0}^{m-1}; \lbrace \alpha'_{2j}\rbrace_{j=0}^{m-1}, \lbrace \beta'_{2j}\rbrace_{j=0}^{m-1}; \lbrace \alpha''_{2j}\rbrace_{j=0}^{m-1}, \lbrace \beta''_{2j}\rbrace_{j=0}^{m-1} ). \end{eqnarray*} \end{enumerate} \end{lemma} \begin{lemma} \label{lem:NormHalf} Assume that $A,B,C$ is bipartite, and consider the action of $A^2,B^2,C^2$ on $V_{\rm out}$. \begin{enumerate} \item[\rm (i)] Assume that $A,B,C$ is trivial. Then so is the action of $A^2,B^2,C^2$ on $V_{\rm out}$. \item[\rm (ii)] Assume that $A,B,C$ is nontrivial. Then the action of $A^2,B^2,C^2$ on $V_{\rm out}$ is nonbipartite. \end{enumerate} \end{lemma} \begin{proof} (i) By Example \ref{ex:trivial} and Lemma \ref{lem:trivialT}. \\ \noindent (ii) Evaluate the Toeplitz data in Lemma \ref{lem:A2B2C2Out}(iv) using Lemmas \ref{lem:NotB}, \ref{lem:case}. \end{proof} \begin{lemma} \label{lem:NormHalfIn} Assume that $A,B,C$ is bipartite and nontrivial. Consider the action of $A^2,B^2,C^2$ on $V_{\rm in}$. \begin{enumerate} \item[\rm (i)] Assume that $d=2$. Then the action of $A^2,B^2,C^2$ on $V_{\rm in}$ is trivial. 
\item[\rm (ii)] Assume that $d\geq 4$. Then the action of $A^2,B^2,C^2$ on $V_{\rm in}$ is nonbipartite. \end{enumerate} \end{lemma} \begin{proof} (i) By Example \ref{ex:trivial} and Lemma \ref{lem:trivialT}. \\ \noindent (ii) Evaluate the Toeplitz data in Lemma \ref{lem:A2B2C2In}(iv) using Lemmas \ref{lem:NotB}, \ref{lem:case}. \end{proof} \begin{lemma} \label{lem:Balpha2Inv} Assume that $A,B,C$ is bipartite and nontrivial. Then for $2 \leq i \leq d$, \begin{eqnarray} \label{eq:Balpha2Inv} \frac{\alpha_2}{\varphi_{i-1}\varphi_i} = \frac{\alpha'_2}{\varphi'_{i-1}\varphi'_i} = \frac{\alpha''_2}{\varphi''_{i-1}\varphi''_i}, \qquad \qquad \frac{\beta_2}{\varphi_{i-1}\varphi_i} = \frac{\beta'_2}{\varphi'_{i-1}\varphi'_i} = \frac{\beta''_2}{\varphi''_{i-1}\varphi''_i}. \end{eqnarray} \end{lemma} \begin{proof} Apply Corollary \ref{cor:alphaOneInv} to the LR triples in Lemmas \ref{lem:A2B2C2Out}, \ref{lem:A2B2C2In}. \end{proof} \begin{lemma} \label{lem:Bippipd} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i), (ii)} hold for $1 \leq i,j\leq d$. \begin{enumerate} \item[\rm (i)] Assume that $i,j$ have opposite parity. Then \begin{eqnarray} \label{eq:Bippipd} \frac{\alpha_2}{\varphi_{i}\varphi_{j}} = \frac{\alpha'_2}{\varphi'_{i}\varphi'_{j}} = \frac{\alpha''_2}{\varphi''_{i}\varphi''_{j}}, \quad \qquad \frac{\beta_2}{\varphi_{i}\varphi_{j}} = \frac{\beta'_2}{\varphi'_{i}\varphi'_{j}} = \frac{\beta''_2}{\varphi''_{i}\varphi''_{j}}. \end{eqnarray} \item[\rm (ii)] Assume that $i,j$ have the same parity. Then \begin{eqnarray} \label{eq:Bippipd2} \frac{\varphi_{i}}{\varphi_{j}} = \frac{\varphi'_{i}}{\varphi'_{j}} = \frac{\varphi''_{i}}{\varphi''_{j}}. \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof} We have a preliminary remark. For $2\leq k \leq d$ define $x_k = \alpha_2(\varphi_{k-1}\varphi_k)^{-1}$, and note that $x_k=x'_k=x''_k$ by Lemma \ref{lem:Balpha2Inv}. \\ \noindent (i) We may assume without loss that $i<j$. 
Observe that \begin{eqnarray*} \frac{\alpha_2}{\varphi_{i}\varphi_{j}} = \frac{x_{i+1}x_{i+3} \cdots x_{j}}{x_{i+2}x_{i+4}\cdots x_{j-1}}. \end{eqnarray*} By this and the preliminary remark, we obtain the equations on the left in (\ref{eq:Bippipd}). The equations on the right in (\ref{eq:Bippipd}) are similarly obtained. \\ \noindent (ii) We may assume without loss that $i<j$. Observe that \begin{eqnarray*} \frac{\varphi_{i}}{\varphi_{j}} = \frac{x_{i+2}x_{i+4} \cdots x_{j}}{x_{i+1}x_{i+3}\cdots x_{j-1}}. \end{eqnarray*} By this and the preliminary remark, we obtain the equations (\ref{eq:Bippipd2}). \end{proof} \begin{lemma} \label{lem:BipAlphaInvar} Assume that $A,B,C$ is bipartite and nontrivial. Then for $0 \leq j \leq d/2$, \begin{eqnarray} \label{ex:BipAlphaInvar} \frac{\alpha_{2j}}{\alpha^j_2} = \frac{\alpha'_{2j}}{(\alpha'_2)^j} = \frac{\alpha''_{2j}}{(\alpha''_2)^j}, \qquad \qquad \qquad \frac{\beta_{2j}}{\beta^j_2} = \frac{\beta'_{2j}}{(\beta'_2)^j} = \frac{\beta''_{2j}}{(\beta''_2)^j}. \end{eqnarray} \end{lemma} \begin{proof} Apply Lemma \ref{lem:alphaiAlpha1} to the LR triple in Lemma \ref{lem:A2B2C2Out}. This LR triple is nonbipartite by Lemma \ref{lem:NormHalf}(ii). \end{proof} \begin{definition}\rm \label{def:projector} Assume that $A,B,C$ is bipartite. Any ordered pair of distinct elements chosen from $A,B,C$ forms an LR pair; consider the corresponding projector map from Definition \ref{def:J}. By Lemma \ref{lem:V0V1} this projector is independent of the choice; denote this common projector by $J$. We call $J$ the {\it projector} for $A,B,C$. \end{definition} \noindent In Section 9 we discussed in detail the projector map for LR pairs. We now adapt a few points to LR triples. \begin{lemma} \label{lem:ABCJtriv} Assume that $A,B,C$ is bipartite. Then its projector map $J$ is nonzero. If $A,B,C$ is trivial then $J=I$. If $A,B,C$ is nontrivial then $J, I$ are linearly independent over $\mathbb F$.
\end{lemma} \begin{proof} By Lemma \ref{lem:Jtriv} and Definition \ref{def:projector}. \end{proof} \begin{lemma} \label{lem:EEEJ} Assume that $A,B,C$ is bipartite. Then its projector map $J$ satisfies \begin{eqnarray*} J = \sum_{j=0}^{d/2} E_{2j} = \sum_{j=0}^{d/2} E'_{2j} = \sum_{j=0}^{d/2} E''_{2j}. \end{eqnarray*} Moreover $J^2=J$. Also, $J$ commutes with each of $E_i,E'_i,E''_i$ for $0 \leq i \leq d$. \end{lemma} \begin{proof} By Lemma \ref{lem:Jfacts} and Definition \ref{def:projector}. \end{proof} \begin{lemma} \label{lem:ABCJ} Assume that $A,B,C$ is bipartite. Then its projector map $J$ satisfies \begin{eqnarray*} A = A J + J A, \qquad \qquad B = B J + J B, \qquad \qquad C = C J + J C. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:INOutFacts}(iii) and Definition \ref{def:projector}. \end{proof} \begin{lemma} \label{lem:JMat} Assume that $A,B,C$ is bipartite. With respect to any of the 12 bases {\rm (\ref{eq:typeAB})--(\ref{eq:typeCA})}, the matrix representing $J$ is ${\rm diag}(1,0,1,0,\ldots,0,1)$. \end{lemma} \begin{proof} By construction and linear algebra. \end{proof} \noindent The following definition is motivated by Definition \ref{def:Ainout}. \begin{definition} \label{def:ABCoutIn} \rm Assume that $A,B,C$ is bipartite. Define \begin{eqnarray} A_{\rm out}, \qquad A_{\rm in}, \qquad B_{\rm out}, \qquad B_{\rm in}, \qquad C_{\rm out}, \qquad C_{\rm in} \label{eq:6list} \end{eqnarray} in ${\rm End}(V)$ as follows. The map $A_{\rm out}$ acts on $V_{\rm out}$ as $A$, and on $V_{\rm in}$ as zero. The map $A_{\rm in}$ acts on $V_{\rm in}$ as $A$, and on $V_{\rm out}$ as zero. The other maps in (\ref{eq:6list}) are similarly defined. By construction \begin{equation*} A = A_{\rm out}+ A_{\rm in}, \qquad \qquad B = B_{\rm out}+ B_{\rm in}, \qquad \qquad C = C_{\rm out}+ C_{\rm in}. \end{equation*} \end{definition} \begin{lemma} \label{lem:ABCINOUTJ} Assume that $A,B,C$ is bipartite. 
Then \begin{eqnarray*} && A_{\rm out} = AJ=(I-J)A, \qquad \qquad A_{\rm in} = JA = A(I-J), \\ && B_{\rm out} = BJ=(I-J)B, \qquad \qquad B_{\rm in} = JB = B(I-J), \\ && C_{\rm out} = CJ=(I-J)C, \qquad \qquad C_{\rm in} = JC = C(I-J). \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:INOutFacts}(i),(ii) and Definition \ref{def:projector}. \end{proof} \begin{lemma} \label{lem:InOut} Assume that $A,B,C$ is bipartite. Let \begin{eqnarray} \label{eq:AlphaInOut} \alpha_{\rm out}, \quad \alpha_{\rm in}, \quad \beta_{\rm out}, \quad \beta_{\rm in}, \quad \gamma_{\rm out}, \quad \gamma_{\rm in} \end{eqnarray} denote nonzero scalars in $\mathbb F$. Then the sequence \begin{eqnarray} \label{eq:NewLRT} \alpha_{\rm out}A_{\rm out} +\alpha_{\rm in}A_{\rm in}, \qquad \beta_{\rm out} B_{\rm out}+ \beta_{\rm in}B_{\rm in}, \qquad \gamma_{\rm out}C_{\rm out}+ \gamma_{\rm in} C_{\rm in} \end{eqnarray} is a bipartite LR triple on $V$. \end{lemma} \begin{proof} By construction. \end{proof} \noindent Our next goal is to obtain the parameter array, idempotent data, and Toeplitz data for the LR triple in (\ref{eq:NewLRT}). The following definition is for notational convenience. \begin{definition} \label{def:NewLRTnot} \rm Adopt the assumptions and notation of Lemma \ref{lem:InOut}. For $1 \leq i \leq d$ define \begin{eqnarray*} && f_i = \alpha_{\rm out}\beta_{\rm in}, \qquad f'_i = \beta_{\rm out}\gamma_{\rm in}, \qquad f''_i = \gamma_{\rm out}\alpha_{\rm in} \qquad \qquad {\mbox {\rm (if $i$ is even)}}, \\ && f_i = \alpha_{\rm in}\beta_{\rm out}, \qquad f'_i = \beta_{\rm in}\gamma_{\rm out}, \qquad f''_i = \gamma_{\rm in}\alpha_{\rm out} \qquad \qquad {\mbox {\rm (if $i$ is odd)}}. 
\end{eqnarray*} \noindent Also define \begin{eqnarray*} && g_i = (\alpha_{\rm out}\alpha_{\rm in})^{-i/2}, \qquad g'_i = (\beta_{\rm out}\beta_{\rm in})^{-i/2}, \qquad g''_i = (\gamma_{\rm out}\gamma_{\rm in})^{-i/2} \qquad \quad {\mbox {\rm (if $i$ is even)}}, \\ && g_i = 0, \qquad g'_i = 0, \qquad g''_i = 0 \qquad \qquad {\mbox {\rm (if $i$ is odd)}}. \end{eqnarray*} \end{definition} \begin{lemma} \label{lem:PAnewLRT} Referring to Lemma \ref{lem:InOut}, let the nonzero scalars {\rm (\ref{eq:AlphaInOut})} be given, and consider the LR triple in {\rm (\ref{eq:NewLRT})}. For this LR triple, \begin{enumerate} \item[\rm (i)] the parameter array is (using the notation of Definition \ref{def:NewLRTnot}) \begin{eqnarray*} ( \lbrace \varphi_i f_i\rbrace_{i=1}^d; \lbrace \varphi'_i f'_i\rbrace_{i=1}^d; \lbrace \varphi''_i f''_i\rbrace_{i=1}^d ); \end{eqnarray*} \item[\rm (ii)] the idempotent data is equal to the idempotent data for $A,B,C$; \item[\rm (iii)] the Toeplitz data is (using the notation of Definition \ref{def:NewLRTnot}) \begin{eqnarray*} ( \lbrace \alpha_i g''_i\rbrace_{i=0}^d, \lbrace \beta_i g''_i\rbrace_{i=0}^d; \lbrace \alpha'_i g_i\rbrace_{i=0}^d, \lbrace \beta'_i g_i\rbrace_{i=0}^d; \lbrace \alpha''_i g'_i\rbrace_{i=0}^d, \lbrace \beta''_i g'_i \rbrace_{i=0}^d ). \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} (i) Use Lemma \ref{lem:LRPInOut}. \\ \noindent (ii) Similar to the proof of Lemma \ref{lem:isequal}. \\ \noindent (iii) Similar to the proof of Lemma \ref{lem:ToeplitzAdjust}. \end{proof} \begin{definition} \label{def:BIASSOC} \rm Assume that $A,B,C$ is bipartite. Let $A',B',C'$ denote a bipartite LR triple on $V$. Then $A,B,C$ and $A',B',C'$ will be called {\it biassociate} whenever there exist nonzero scalars $\alpha, \beta, \gamma$ in $\mathbb F$ such that \begin{eqnarray*} A' = \alpha A_{\rm out} +A_{\rm in}, \qquad B' = \beta B_{\rm out}+ B_{\rm in}, \qquad C' = \gamma C_{\rm out}+ C_{\rm in}. 
\end{eqnarray*} Biassociativity is an equivalence relation. \end{definition} \begin{lemma} \label{lem:BiassocIso} Assume that $A,B,C$ is bipartite. Let $A',B',C'$ denote a bipartite LR triple over $\mathbb F$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] there exists a bipartite LR triple over $\mathbb F$ that is biassociate to $A,B,C$ and isomorphic to $A',B',C'$; \item[\rm (ii)] there exists a bipartite LR triple over $\mathbb F$ that is isomorphic to $A,B,C$ and biassociate to $A',B',C'$. \end{enumerate} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:assocIso}. \end{proof} \begin{definition} \label{def:biSim} \rm Assume that $A,B,C$ is bipartite. Let $A',B',C'$ denote a bipartite LR triple over $\mathbb F$. Then $A,B,C$ and $A',B',C'$ will be called {\it bisimilar} whenever the equivalent conditions {\rm (i), (ii)} hold in Lemma \ref{lem:BiassocIso}. Bisimilarity is an equivalence relation. \end{definition} \section{Equitable LR triples} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We describe a condition on $A,B,C$ called equitable. \begin{definition} \label{def:equitNorm} \rm The LR triple $A,B,C$ is called {\it equitable} whenever $\alpha_i = \alpha'_i = \alpha''_i$ for $0 \leq i \leq d$. \end{definition} \begin{lemma} \label{lem:TrivEquit} If $A,B,C$ is trivial, then it is equitable. \end{lemma} \begin{proof} Recall that $\alpha_0=\alpha'_0 = \alpha''_0=1$. \end{proof} \begin{lemma} \label{lem:equitBasicBeta} Assume that $A,B,C$ is equitable. Then $\beta_i= \beta'_i = \beta''_i$ for $0 \leq i \leq d$. \end{lemma} \begin{proof} Refer to Definitions \ref{def:TTT}, \ref{def:TTT1}. The matrices $T,T',T''$ coincide, so their inverses coincide. 
The result follows. \end{proof} \begin{lemma} If $A,B,C$ is equitable, then so are its relatives. \end{lemma} \begin{proof} By Lemmas \ref{lem:ToeplitzData}, \ref{lem:ToeplitzDataD}, \ref{lem:equitBasicBeta} and Definition \ref{def:equitNorm}. \end{proof} \noindent As we investigate the equitable property, we will treat the bipartite and nonbipartite cases separately. We begin with the nonbipartite case. \begin{lemma} \label{lem:equitNBmin} Assume that $A,B,C$ is nonbipartite. Then $A,B,C$ is equitable if and only if $\alpha_1 = \alpha'_1 = \alpha''_1$. \end{lemma} \begin{proof} By Lemma \ref{lem:alphaiAlpha1} and Definition \ref{def:equitNorm}. \end{proof} \begin{lemma} \label{lem:equitBasic} Assume that $A,B,C$ is nonbipartite and equitable. Then the following hold: \begin{enumerate} \item[\rm (i)] $\varphi_i = \varphi'_i = \varphi''_i$ for $1 \leq i \leq d$; \item[\rm (ii)] $a_i = a'_i = a''_i =\alpha_1(\varphi_{d-i+1}-\varphi_{d-i})$ for $0 \leq i \leq d$. \end{enumerate} \end{lemma} \begin{proof} (i) By Corollary \ref{cor:alphaOneInv} and Lemma \ref{lem:NotB}. \\ \noindent (ii) Use (\ref{eq:list3}) and Proposition \ref{lem:ai}. \end{proof} \noindent The following definition is for later use. \begin{definition} \label{def:Rhoi} \rm Assume that $A,B,C$ is nonbipartite and equitable. Define \begin{eqnarray} \rho_i = \frac{\varphi_{i+1}}{\varphi_{d-i}} \qquad \qquad (0 \leq i \leq d-1). \label{eq:mainrec2} \end{eqnarray} Note by Lemma \ref{lem:equitBasic}(i) that $\rho_i = \rho'_i = \rho''_i$ for $0 \leq i\leq d-1$. \end{definition} \begin{lemma} \label{def:RhoiCom} Assume that $A,B,C$ is nonbipartite and equitable. Then $\rho_i \rho_{d-i-1} = 1$ for $0 \leq i \leq d-1$. \end{lemma} \begin{proof} By Definition \ref{def:Rhoi}. \end{proof} \begin{lemma} \label{lem:NBipEquitAdj} Assume that $A,B,C$ is nonbipartite. Let $\alpha,\beta,\gamma$ denote nonzero scalars in $\mathbb F$. 
Then the following are equivalent: \begin{enumerate} \item[\rm (i)] the LR triple $\alpha A,\beta B,\gamma C$ is equitable; \item[\rm (ii)] $\alpha/\alpha'_1 = \beta/\alpha''_1 = \gamma/\alpha_1 $. \end{enumerate} \end{lemma} \begin{proof} By Lemmas \ref{lem:ToeplitzAdjust}, \ref{lem:equitNBmin}. \end{proof} \begin{lemma} \label{lem:EqAdjust} Assume that $A,B,C$ is nonbipartite. Then there exists an equitable LR triple on $V$ that is associate to $A,B,C$. \end{lemma} \begin{proof} By Definition \ref{def:ASSOC} and Lemma \ref{lem:NBipEquitAdj}. \end{proof} \noindent We turn our attention to bipartite LR triples. \begin{lemma} \label{lem:BPEquitMin} Assume that $A,B,C$ is bipartite and nontrivial. Then $A,B,C$ is equitable if and only if $\alpha_2 = \alpha'_2 = \alpha''_2$. \end{lemma} \begin{proof} By Lemmas \ref{lem:case}, \ref{lem:BipAlphaInvar} and Definition \ref{def:equitNorm}. \end{proof} \begin{lemma} \label{lem:basic2} Assume that $A,B,C$ is bipartite, nontrivial, and equitable. Then $\varphi_{i-1}\varphi_i = \varphi'_{i-1} \varphi'_i = \varphi''_{i-1} \varphi''_i$ for $2 \leq i \leq d$. \end{lemma} \begin{proof} By Lemma \ref{lem:BiPbasic}(ii) and Lemma \ref{lem:Balpha2Inv}. \end{proof} \begin{lemma} \label{lem:basic2A} Assume that $A,B,C$ is bipartite and equitable. Then the following {\rm (i), (ii)} hold for $0 \leq i \leq d$. \begin{enumerate} \item[\rm (i)] For $i$ even, \begin{eqnarray*} && \varphi_1 \varphi_2 \cdots \varphi_i = \varphi'_1 \varphi'_2 \cdots \varphi'_i = \varphi''_1 \varphi''_2 \cdots \varphi''_i, \\ && \varphi_d \varphi_{d-1} \cdots \varphi_{d-i+1} = \varphi'_d \varphi'_{d-1} \cdots \varphi'_{d-i+1} = \varphi''_d \varphi''_{d-1} \cdots \varphi''_{d-i+1}. 
\end{eqnarray*} \item[\rm (ii)] For $i$ odd, \begin{eqnarray*} && \varphi_2 \varphi_3 \cdots \varphi_i = \varphi'_2 \varphi'_3 \cdots \varphi'_i = \varphi''_2 \varphi''_3 \cdots \varphi''_i, \\ && \varphi_{d-1} \varphi_{d-2} \cdots \varphi_{d-i+1} = \varphi'_{d-1} \varphi'_{d-2} \cdots \varphi'_{d-i+1} = \varphi''_{d-1} \varphi''_{d-2} \cdots \varphi''_{d-i+1}. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lem:basic2} and since $d$ is even. \end{proof} \begin{lemma} \label{lem:basic2new} Assume that $A,B,C$ is bipartite and equitable. Then for $0 \leq i \leq d-1$, \begin{eqnarray*} \frac{\varphi'_{i+1}}{\varphi''_{d-i}} = \frac{\varphi''_{i+1}}{\varphi'_{d-i}}, \qquad \qquad \frac{\varphi''_{i+1}}{\varphi_{d-i}} = \frac{\varphi_{i+1}}{\varphi''_{d-i}}, \qquad \qquad \frac{\varphi_{i+1}}{\varphi'_{d-i}} = \frac{\varphi'_{i+1}}{\varphi_{d-i}}. \end{eqnarray*} \end{lemma} \begin{proof} Assume that $A,B,C$ is nontrivial; otherwise there is nothing to prove. Now use Lemma \ref{lem:Bippipd}(i) with $j=d-i+1$. The integers $i,j$ have opposite parity since $d$ is even. \end{proof} \begin{definition} \label{def:RHOD} Assume that $A,B,C$ is bipartite and equitable. Then for $0 \leq i \leq d-1$ define \begin{eqnarray*} && \rho_i = \frac{\varphi'_{i+1}}{\varphi''_{d-i}} = \frac{\varphi''_{i+1}}{\varphi'_{d-i}}, \\ && \rho'_i= \frac{\varphi''_{i+1}}{\varphi_{d-i}} = \frac{\varphi_{i+1}}{\varphi''_{d-i}}, \\ && \rho''_i = \frac{\varphi_{i+1}}{\varphi'_{d-i}} = \frac{\varphi'_{i+1}}{\varphi_{d-i}}. \end{eqnarray*} We emphasize that for $d\geq 1$, \begin{eqnarray} \rho_0 = \frac{\varphi'_1}{\varphi''_d} = \frac{\varphi''_1}{\varphi'_d}, \qquad \qquad \rho'_0 = \frac{\varphi''_1}{\varphi_d} = \frac{\varphi_1}{\varphi''_d}, \qquad \qquad \rho''_0 = \frac{\varphi_1}{\varphi'_d} = \frac{\varphi'_1}{\varphi_d}. \label{def:rho0def} \end{eqnarray} \end{definition} \begin{lemma} \label{def:BipRhoiCom} Assume that $A,B,C$ is bipartite and equitable. 
Then for $0 \leq i \leq d-1$, \begin{eqnarray} \rho_i \rho_{d-i-1}=1, \qquad \rho'_i \rho'_{d-i-1}=1, \qquad \rho''_i \rho''_{d-i-1}=1. \label{eq:ThreeRho} \end{eqnarray} \end{lemma} \begin{proof} By Definition \ref{def:RHOD}. \end{proof} \begin{lemma} \label{lem:RhoRelBip} Assume that $A,B,C$ is bipartite, nontrivial, and equitable. Then the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] for $1 \leq i \leq d$, \begin{eqnarray*} && \frac{\varphi_i}{\rho_0} = \frac{\varphi'_i}{\rho'_0} = \frac{\varphi''_i}{\rho''_0} \qquad \qquad \qquad {\mbox{\rm if $i$ is even}}, \\ && \varphi_i\rho_0 = \varphi'_i \rho'_0 = \varphi''_i\rho''_0 \qquad \qquad {\mbox{\rm if $i$ is odd}}; \end{eqnarray*} \item[\rm (ii)] for $0 \leq i \leq d$, \begin{eqnarray*} && \rho_0 \rho_1 \cdots \rho_{i-1} = \rho'_0 \rho'_1 \cdots \rho'_{i-1} = \rho''_0 \rho''_1 \cdots \rho''_{i-1} \qquad \qquad {\mbox{\rm if $i$ is even}}, \\ && \rho_1 \rho_2 \cdots \rho_{i-1} = \rho'_1 \rho'_2 \cdots \rho'_{i-1} = \rho''_1 \rho''_2 \cdots \rho''_{i-1} \qquad \qquad {\mbox{\rm if $i$ is odd}}; \end{eqnarray*} \item[\rm (iii)] for $0 \leq i \leq d-1$, \begin{eqnarray*} && \frac{\rho_i}{\rho_0} = \frac{\rho'_i}{\rho'_0} = \frac{\rho''_i}{\rho''_0} \qquad \qquad \qquad {\mbox{\rm if $i$ is even}}, \\ && \rho_i\rho_0 = \rho'_i\rho'_0 = \rho''_i\rho''_0 \qquad \qquad {\mbox{\rm if $i$ is odd}}. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} Use Lemma \ref{lem:Bippipd} and Definition \ref{def:RHOD}. \end{proof} \begin{lemma} \label{lem:A2B2C2Equit} Assume that $A,B,C$ is bipartite and equitable. Then: \begin{enumerate} \item[\rm (i)] the action of $A^2,B^2,C^2$ on $V_{\rm out}$ is equitable; \item[\rm (ii)] for $A,B,C$ nontrivial the action of $A^2,B^2,C^2$ on $V_{\rm in}$ is equitable. \end{enumerate} \end{lemma} \begin{proof} (i) By Lemma \ref{lem:A2B2C2Out}(iv) and Definition \ref{def:equitNorm}. \\ \noindent (ii) By Lemma \ref{lem:A2B2C2In}(iv) and Definition \ref{def:equitNorm}. 
\end{proof} \begin{lemma} \label{lem:BipEquitAdj} Referring to Lemma \ref{lem:InOut}, assume that $A,B,C$ is nontrivial, and let the nonzero scalars {\rm (\ref{eq:AlphaInOut})} be given. Then the LR triple {\rm (\ref{eq:NewLRT})} is equitable if and only if \begin{eqnarray} \alpha_{\rm out}\alpha_{\rm in}/\alpha'_2= \beta_{\rm out}\beta_{\rm in}/\alpha''_2= \gamma_{\rm out}\gamma_{\rm in} /\alpha_2. \label{eq:neededForEquit} \end{eqnarray} \end{lemma} \begin{proof} By Lemma \ref{lem:PAnewLRT}(iii) and Definition \ref{def:equitNorm}. \end{proof} \begin{lemma} \label{lem:EqAdjustBip} Assume that $A,B,C$ is bipartite and nontrivial. Then there exists an equitable LR triple on $V$ that is biassociate to $A,B,C$. \end{lemma} \begin{proof} By Definition \ref{def:BIASSOC} and Lemma \ref{lem:BipEquitAdj}. \end{proof} \noindent We have a comment about general LR triples, bipartite or not. \begin{lemma} \label{lem:combine} Assume that $A,B,C$ is equitable. Then \begin{eqnarray} \label{eq:combine} \varphi_1 \varphi_2 \cdots \varphi_d = \varphi'_1 \varphi'_2 \cdots \varphi'_d = \varphi''_1 \varphi''_2 \cdots \varphi''_d. \end{eqnarray} \end{lemma} \begin{proof} For $A,B,C$ nonbipartite, the result follows from Lemma \ref{lem:equitBasic}(i). For $A,B,C$ bipartite, the result follows from Lemma \ref{lem:basic2A}(i) and since $d$ is even. \end{proof} \section{Normalized LR triples} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We describe a condition on $A,B,C$ called normalized. The condition is defined a bit differently in the trivial, nonbipartite, and bipartite nontrivial cases. We first dispense with the trivial case. \begin{definition} \rm \label{def:BNormTriv} Assume that $A,B,C$ is trivial. 
Then we declare $A,B,C$ to be {\it normalized}. \end{definition} \begin{definition} \label{def:NBNorm} \rm Assume that $A,B,C$ is nonbipartite. Then $A,B,C$ is called {\it normalized} whenever \begin{eqnarray*} \alpha_1 = 1, \qquad \quad \alpha'_1 = 1, \qquad \quad \alpha''_1 = 1. \end{eqnarray*} \end{definition} \begin{lemma} Assume that $A,B,C$ is nonbipartite and normalized. Then $A,B,C$ is equitable. \end{lemma} \begin{proof} By Lemma \ref{lem:equitNBmin} and since $\alpha_1 = \alpha'_1 = \alpha''_1$. \end{proof} \begin{lemma} Assume that $A,B,C$ is nonbipartite and normalized. Then so are its p-relatives. \end{lemma} \begin{proof} By Definition \ref{def:prel} and Lemmas \ref{lem:ToeplitzData}, \ref{lem:ToeplitzDataD}. \end{proof} \noindent A nonbipartite LR triple can be normalized as follows. \begin{lemma} \label{lem:NBnormalize} Assume that $A,B,C$ is nonbipartite. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triple $\alpha A, \beta B, \gamma C$ is normalized if and only if \begin{eqnarray*} \alpha = \alpha'_1, \qquad \qquad \beta = \alpha''_1, \qquad \qquad \gamma = \alpha_1. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemmas \ref{lem:ToeplitzAdjust}, \ref{lem:BPabc} and Definition \ref{def:NBNorm}. \end{proof} \begin{corollary} \label{prop:NormalNBunique} Assume that $A,B,C$ is nonbipartite. Then there exists a unique sequence $\alpha, \beta,\gamma$ of nonzero scalars in $\mathbb F$ such that $\alpha A, \beta B, \gamma C$ is normalized. \end{corollary} \begin{proof} By Lemma \ref{lem:NBnormalize}. \end{proof} \begin{corollary} \label{prop:normalNBunique} Assume that $A,B,C$ is nonbipartite. Then $A,B,C$ is associate to a unique normalized nonbipartite LR triple over $\mathbb F$. \end{corollary} \begin{proof} By Definition \ref{def:ASSOC}, Lemma \ref{lem:BPabc}, and Corollary \ref{prop:NormalNBunique}. \end{proof} \begin{lemma} \label{lem:Nbeta} Assume that $A,B,C$ is nonbipartite and normalized. 
Then \begin{eqnarray*} \beta_1 = -1, \qquad \quad \beta'_1 = -1, \qquad \quad \beta''_1 = -1. \end{eqnarray*} \end{lemma} \begin{proof} By (\ref{eq:list3}) and Definition \ref{def:NBNorm}. \end{proof} \begin{lemma} Assume that $A,B,C$ is nonbipartite and normalized. Then so is the LR triple $-C,-B,-A$. \end{lemma} \begin{proof} By Lemma \ref{lem:ToeplitzData} (row 4 of the table) along with Lemmas \ref{lem:NBnormalize}, \ref{lem:Nbeta}. \end{proof} \begin{lemma} \label{lem:NBnormIso} Assume that $A,B,C$ is nonbipartite and normalized. Then $A,B,C$ is uniquely determined up to isomorphism by its parameter array. \end{lemma} \begin{proof} By Proposition \ref{prop:IsoParTrace} the LR triple $A,B,C$ is uniquely determined up to isomorphism by its parameter array and trace data. The trace data is determined by the parameter array using Lemma \ref{lem:equitBasic}(ii) and $\alpha_1=1$. The result follows. \end{proof} \noindent We turn our attention to bipartite nontrivial LR triples. \begin{definition} \label{def:BNorm} \rm Assume that $A,B,C$ is bipartite and nontrivial. Then $A,B,C$ is called {\it normalized} whenever \begin{eqnarray*} \alpha_2 = 1, \qquad \quad \alpha'_2 = 1, \qquad \quad \alpha''_2 = 1. \end{eqnarray*} \end{definition} \begin{lemma} Assume that $A,B,C$ is bipartite, nontrivial, and normalized. Then $A,B,C$ is equitable. \end{lemma} \begin{proof} By Lemma \ref{lem:BPEquitMin} and since $\alpha_2 = \alpha'_2 = \alpha''_2$. \end{proof} \begin{lemma} Assume that $A,B,C$ is bipartite, nontrivial, and normalized. Then so are its p-relatives. \end{lemma} \begin{proof} By Definition \ref{def:prel} and Lemmas \ref{lem:ToeplitzData}, \ref{lem:ToeplitzDataD}. \end{proof} \noindent A bipartite nontrivial LR triple can be normalized as follows. \begin{lemma} \label{lem:howtoNorm} Referring to Lemma \ref{lem:InOut}, assume that $A,B,C$ is nontrivial, and let the nonzero scalars {\rm (\ref{eq:AlphaInOut})} be given. 
Then the LR triple {\rm (\ref{eq:NewLRT})} is normalized if and only if \begin{eqnarray*} \alpha_{\rm out} \alpha_{\rm in} = \alpha'_2, \qquad \qquad \beta_{\rm out} \beta_{\rm in} = \alpha''_2, \qquad \qquad \gamma_{\rm out} \gamma_{\rm in} = \alpha_2. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemma \ref{lem:PAnewLRT}(iii) and Definition \ref{def:BNorm}. \end{proof} \begin{corollary} \label{lem:NormUnique} Assume that $A,B,C$ is bipartite and nontrivial. Then there exists a unique sequence $\alpha, \beta, \gamma$ of nonzero scalars in $\mathbb F$ such that \begin{eqnarray*} \alpha A_{\rm out} + A_{\rm in}, \qquad \quad \beta B_{\rm out} + B_{\rm in}, \qquad \quad \gamma C_{\rm out} + C_{\rm in} \end{eqnarray*} is normalized. \end{corollary} \begin{proof} In Lemma \ref{lem:howtoNorm} set $ \alpha_{\rm in} =1$, $ \beta_{\rm in} =1$, $\gamma_{\rm in} =1$ to see that $\alpha= \alpha'_2$, $\beta= \alpha''_2$, $\gamma= \alpha_2$ is the unique solution. \end{proof} \begin{corollary} \label{lem:biassocUnique} Assume that $A,B,C$ is bipartite and nontrivial. Then $A,B,C$ is biassociate to a unique bipartite normalized LR triple over $\mathbb F$. \end{corollary} \begin{proof} By Lemma \ref{lem:InOut}, Definition \ref{def:BIASSOC}, and Corollary \ref{lem:NormUnique}. \end{proof} \begin{lemma} \label{lem:BPBeta2} Assume that $A,B,C$ is bipartite, nontrivial, and normalized. Then \begin{eqnarray*} \beta_2 = -1, \qquad \quad \beta'_2 = -1, \qquad \quad \beta''_2 = -1. \end{eqnarray*} \end{lemma} \begin{proof} By (\ref{eq:Al2Be2}) and Definition \ref{def:BNorm}. \end{proof} \begin{lemma} Assume that $A,B,C$ is bipartite, nontrivial, and normalized. Then so is the LR triple \begin{eqnarray*} C_{\rm out}- C_{\rm in}, \quad \qquad B_{\rm out}- B_{\rm in}, \quad \qquad A_{\rm out}- A_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:ToeplitzData} (row 4 of the table) and Lemmas \ref{lem:howtoNorm}, \ref{lem:BPBeta2}.
\end{proof} \begin{lemma} \label{lem:NormHalf2} Assume that $A,B,C$ is bipartite, nontrivial, and normalized. Then: \begin{enumerate} \item[\rm (i)] the action of $A^2,B^2,C^2$ on $V_{\rm out}$ is normalized; \item[\rm (ii)] the action of $A^2,B^2,C^2$ on $V_{\rm in}$ is normalized. \end{enumerate} \end{lemma} \begin{proof} (i) Evaluate the Toeplitz data in Lemma \ref{lem:A2B2C2Out}(iv) using Lemma \ref{lem:NormHalf}(ii) and Definition \ref{def:NBNorm}. \\ \noindent (ii) For $d=2$, the action of $A^2,B^2,C^2$ on $V_{\rm in}$ is trivial and hence normalized. For $d\geq 4$, evaluate the Toeplitz data in Lemma \ref{lem:A2B2C2In}(iv) using Lemma \ref{lem:NormHalfIn}(ii) and Definition \ref{def:NBNorm}. \end{proof} \section{The idempotent centralizers for an LR triple} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We discuss a type of element in ${\rm End}(V)$ called an idempotent centralizer. \begin{definition} \label{def:IC} \rm By an {\it idempotent centralizer} for $A,B,C$ we mean an element in ${\rm End}(V)$ that commutes with each of $E_i, E'_i, E''_i$ for $0 \leq i \leq d$. \end{definition} \begin{lemma} \label{lem:ICmeaning} For $X \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $X$ is an idempotent centralizer for $A,B,C$; \item[\rm (ii)] for $0 \leq i \leq d$, \begin{eqnarray*} X E_iV \subseteq E_iV,\qquad \qquad X E'_iV \subseteq E'_iV,\qquad \qquad X E''_iV \subseteq E''_iV. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:IC} and linear algebra. \end{proof} \begin{example} \label{ex:trivIC} \rm The identity $I \in {\rm End}(V)$ is an idempotent centralizer for $A,B,C$. 
\end{example} \begin{definition} \label{def:ICspace} \rm Let $\mathcal I$ denote the set of idempotent centralizers for $A,B,C$. Note that $\mathcal I$ is a subalgebra of the $\mathbb F$-algebra ${\rm End}(V)$. We call $\mathcal I$ the {\it idempotent centralizer algebra} for $A,B,C$. \end{definition} \noindent Referring to Definition \ref{def:ICspace}, our next goal is to display a basis for the $\mathbb F$-vector space $\mathcal I$. Recall the projector $J$ from Definition \ref{def:projector}. \begin{proposition} \label{prop:centBasis} The following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] Assume that $A,B,C$ is trivial. Then $I$ is a basis for $\mathcal I$. \item[\rm (ii)] Assume that $A,B,C$ is nonbipartite. Then $I$ is a basis for $\mathcal I$. \item[\rm (iii)] Assume that $A,B,C$ is bipartite and nontrivial. Then $I,J$ is a basis for $\mathcal I$. \end{enumerate} \end{proposition} \begin{proof} (i) Routine. \\ \noindent (ii), (iii) Assume that $A,B,C$ is nontrivial. Let the set $S$ consist of $I$ (if $A,B,C$ is nonbipartite) and $I,J$ (if $A,B,C$ is bipartite). We show that $S$ is a basis for $\mathcal I$. By Lemma \ref{lem:ABCJtriv} and Lemma \ref{lem:EEEJ}, $S$ is a linearly independent subset of $\mathcal I$. We show that $S$ spans $\mathcal I$. Let $\lbrace u_i\rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$, and let $\lbrace v_i\rbrace_{i=0}^d$ denote a compatible $(A,B)$-basis of $V$. The transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$ is the matrix $T'$ from Definition \ref{def:TTT}. The matrix $T'$ is upper triangular and Toeplitz, with parameters $\lbrace \alpha'_i\rbrace_{i=0}^d$. Let $X \in \mathcal I$. By Lemma \ref{lem:ICmeaning} there exist scalars $\lbrace r_i\rbrace_{i=0}^d$ in $\mathbb F$ such that $Xu_i = r_i u_{i}$ for $0 \leq i \leq d$. Also by Lemma \ref{lem:ICmeaning}, there exist scalars $\lbrace s_i\rbrace_{i=0}^d$ in $\mathbb F$ such that $Xv_i = s_i v_{i}$ for $0 \leq i \leq d$. 
Let the matrix $M \in {\rm Mat}_{d+1}(\mathbb F)$ represent $X$ with respect to $\lbrace u_i \rbrace_{i=0}^d$. Then $M$ is diagonal with $(i,i)$-entry $r_i$ for $0 \leq i \leq d$. Let the matrix $N \in {\rm Mat}_{d+1}(\mathbb F)$ represent $X$ with respect to $\lbrace v_i \rbrace_{i=0}^d$. Then $N$ is diagonal with $(i,i)$-entry $s_i$ for $0 \leq i \leq d$. By linear algebra $M T' = T' N$. In this equation, for $0 \leq i \leq d$ compare the $(i,i)$-entry of each side, to obtain $r_i=s_i$. Until further notice assume that $A,B,C$ is nonbipartite. Then $\alpha'_1 \not=0$. In the equation $M T' = T' N$, for $1 \leq i \leq d$ compare the $(i-1,i)$-entry of each side, to obtain $r_{i-1} = r_{i}$. So $r_i=r_0$ for $0 \leq i \leq d$. Consequently $X-r_0I$ vanishes on $u_i$ for $0 \leq i \leq d$. Therefore $X=r_0I$. Next assume that $A,B,C$ is bipartite. Then $\alpha'_1 =0$ and $\alpha'_2 \not=0$. In the equation $M T' = T' N$, for $2 \leq i \leq d$ compare the $(i-2,i)$-entry of each side, to obtain $r_{i-2} = r_{i}$. For $0 \leq i \leq d$ we have $r_i = r_0$ (if $i$ is even) and $r_i = r_1$ (if $i$ is odd). Consequently $X-r_0 J - r_1 (I-J)$ vanishes on $u_i$ for $0 \leq i \leq d$. Therefore $X=r_0 J + r_1 (I-J)$. We have shown that the set $S$ spans $\mathcal I$. The result follows. \end{proof} \noindent We have some comments about Proposition \ref{prop:centBasis}(iii). \begin{lemma} \label{def:bipIC} Assume that $A,B,C$ is bipartite and nontrivial. Let $X$ denote an idempotent centralizer for $A,B,C$. Then \begin{enumerate} \item[\rm (i)] $XV_{\rm out} \subseteq V_{\rm out}$ and $XV_{\rm in} \subseteq V_{\rm in}$; \item[\rm (ii)] $XJ=JX$. \end{enumerate} \end{lemma} \begin{proof} (i) By Lemmas \ref{lem:V0V1}, \ref{lem:ICmeaning}. \\ \noindent (ii) The map $J$ acts on $V_{\rm out}$ as the identity, and on $V_{\rm in}$ as zero. The result follows from this and (i) above. \end{proof} \begin{definition} \label{def:ABCJ} \rm Assume that $A,B,C$ is bipartite and nontrivial. 
Let $X$ denote an idempotent centralizer for $A,B,C$. Then $X$ is called {\it outer} (resp. {\it inner}) whenever $X$ is zero on $V_{\rm in}$ (resp. $V_{\rm out}$). Let ${\mathcal I}_{\rm out}$ (resp. ${\mathcal I}_{\rm in}$) denote the set of outer (resp. inner) idempotent centralizers for $A,B,C$. Note that ${\mathcal I}_{\rm out}$ and ${\mathcal I}_{\rm in}$ are ideals in the algebra $\mathcal I$. \end{definition} \begin{proposition} \label{prop:ICbasis} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] the sum $\mathcal I = {\mathcal I}_{\rm out} + {\mathcal I}_{\rm in}$ is direct; \item[\rm (ii)] $J$ is a basis for ${\mathcal I}_{\rm out}$; \item[\rm (iii)] $I-J$ is a basis for ${\mathcal I}_{\rm in}$. \end{enumerate} \end{proposition} \begin{proof} By Definitions \ref{def:J}, \ref{def:ABCJ} we find that $J \in {\mathcal I}_{\rm out}$ and $I-J \in {\mathcal I}_{\rm in}$. Also ${\mathcal I}_{\rm out} \cap {\mathcal I}_{\rm in} = 0$ by Definition \ref{def:ABCJ} and since the sum $V= V_{\rm out}+V_{\rm in}$ is direct. The result follows in view of Proposition \ref{prop:centBasis}(iii). \end{proof} \section{The double lowering spaces for an LR triple} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We discuss some subspaces of ${\rm End}(V)$ called the double lowering spaces. \begin{definition} \rm Let $\lbrace V_i\rbrace_{i=0}^d$ denote a decomposition of $V$. For $X \in {\rm End}(V)$, we say that $X$ {\it weakly lowers $\lbrace V_i\rbrace_{i=0}^d$} whenever $XV_i \subseteq V_{i-1}$ for $1 \leq i \leq d$ and $X V_0 = 0$.
\end{definition} \begin{definition} \label{def:DL} \rm Let $\overline A$ denote the set of elements in ${\rm End}(V)$ that weakly lower both the $(A,B)$-decomposition of $V$ and the $(A,C)$-decomposition of $V$. The sets $\overline B$, $\overline C$ are similarly defined. Note that $\overline A$, $\overline B$, $\overline C$ are subspaces of the $\mathbb F$-vector space ${\rm End}(V)$. We call $\overline A, \overline B, \overline C$ the {\it double lowering spaces} for the LR triple $A,B,C$. \end{definition} \noindent We now describe the $\mathbb F$-vector space $\overline A$; similar results hold for $\overline B$ and $\overline C$. \begin{theorem} \label{thm:DL} The following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] Assume that $A,B,C$ is trivial. Then $\overline A=0$. \item[\rm (ii)] Assume that $A,B,C$ is nonbipartite. Then $A$ is a basis for $\overline A$. Moreover $\overline A$ has dimension 1. \item[\rm (iii)] Assume that $A,B,C$ is bipartite and nontrivial. Then $A_{\rm out}, A_{\rm in}$ form a basis for $\overline A$. Moreover $\overline A$ has dimension 2. \end{enumerate} \end{theorem} \begin{proof} (i) $\overline A$ is zero on $E_0V$, and $E_0V=V$. \\ \noindent (ii), (iii) Assume that $A,B,C$ is nontrivial. Let the set $S$ consist of $A$ (if $A,B,C$ is nonbipartite) and $A_{\rm out}, A_{\rm in}$ (if $A,B,C$ is bipartite). We show that $S$ is a basis for $\overline A$. By Lemma \ref{lem:AiAoLI} and the construction, $S$ is a linearly independent subset of $\overline A$. We show that $S$ spans $\overline A$. Let $\lbrace u_i\rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$, and let $\lbrace v_i\rbrace_{i=0}^d$ denote a compatible $(A,B)$-basis of $V$. The transition matrix from $\lbrace u_i\rbrace_{i=0}^d$ to $\lbrace v_i\rbrace_{i=0}^d$ is the matrix $T'$ from Definition \ref{def:TTT}. The matrix $T'$ is upper triangular and Toeplitz, with parameters $\lbrace \alpha'_i\rbrace_{i=0}^d$. Let $X \in \overline A$.
The map $X$ weakly lowers the $(A,C)$-decomposition of $V$, so there exist scalars $\lbrace r_i\rbrace_{i=1}^d$ in $\mathbb F$ such that $Xu_i = r_i u_{i-1}$ for $1 \leq i \leq d$ and $Xu_0=0$. The map $X$ weakly lowers the $(A,B)$-decomposition of $V$, so there exist scalars $\lbrace s_i\rbrace_{i=1}^d$ in $\mathbb F$ such that $Xv_i = s_i v_{i-1}$ for $1 \leq i \leq d$ and $Xv_0=0$. Let the matrix $M \in {\rm Mat}_{d+1}(\mathbb F)$ represent $X$ with respect to $\lbrace u_i \rbrace_{i=0}^d$. Then $M$ has $(i-1,i)$-entry $r_i$ for $1 \leq i \leq d$, and all other entries 0. Let the matrix $N \in {\rm Mat}_{d+1}(\mathbb F)$ represent $X$ with respect to $\lbrace v_i \rbrace_{i=0}^d$. Then $N$ has $(i-1,i)$-entry $s_i$ for $1 \leq i \leq d$, and all other entries 0. By linear algebra $M T' = T' N$. In this equation, for $1 \leq i \leq d$ compare the $(i-1,i)$-entry of each side, to obtain $r_i=s_i$. Until further notice assume that $A,B,C$ is nonbipartite. Then $\alpha'_1 \not=0$. In the equation $M T' = T' N$, for $1 \leq i \leq d-1$ compare the $(i-1,i+1)$-entry of each side, to obtain $r_i = r_{i+1}$. So $r_i=r_1$ for $1 \leq i \leq d$. By construction $A u_i = u_{i-1}$ for $1 \leq i \leq d$ and $Au_0=0$. By these comments $X-r_1A$ vanishes on $u_i$ for $0 \leq i \leq d$. Therefore $X=r_1A$. Next assume that $A,B,C$ is bipartite. Then $\alpha'_1 =0$ and $\alpha'_2 \not=0$. In the equation $M T' = T' N$, for $2 \leq i \leq d-1$ compare the $(i-2,i+1)$-entry of each side, to obtain $r_{i-1} = r_{i+1}$. For $1 \leq i \leq d$ we have $r_i = r_2$ (if $i$ is even) and $r_i = r_1$ (if $i$ is odd). For $0 \leq i \leq d$ define $\varepsilon_i$ to be $0$ (if $i$ is even) and $1$ (if $i$ is odd). By construction $A_{\rm out} u_i = (1-\varepsilon_i) u_{i-1}$ for $1 \leq i \leq d$ and $A_{\rm out} u_0 = 0$. Also by construction $A_{\rm in} u_i = \varepsilon_i u_{i-1}$ for $1 \leq i \leq d$ and $A_{\rm in} u_0 = 0$. 
By the above comments $X-r_1 A_{\rm in} - r_2 A_{\rm out}$ vanishes on $u_i$ for $0 \leq i \leq d$. Therefore $X=r_1 A_{\rm in} + r_2 A_{\rm out}$. We have shown that the set $S$ spans $\overline A$. The result follows. \end{proof} \section{The unipotent maps for an LR triple} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). Using $A,B,C$ we define three elements $\mathbb A$, $\mathbb B$, $\mathbb C$ in ${\rm End}(V)$ called the unipotent maps. This name is motivated by Lemma \ref{lem:unip} below. \begin{definition} \label{def:Del} \rm Define \begin{eqnarray*} {\mathbb A} = \sum_{i=0}^d E_{d-i} E''_{i}, \qquad \qquad {\mathbb B} = \sum_{i=0}^d E'_{d-i} E_{i}, \qquad \qquad {\mathbb C} = \sum_{i=0}^d E''_{d-i} E'_i. \end{eqnarray*} We call $\mathbb A$, $\mathbb B$, $\mathbb C$ the {\it unipotent maps} for $A,B,C$. \end{definition} \begin{lemma} \label{lem:UniTriv} Assume that $A,B,C$ is trivial. Then $\mathbb A= \mathbb B= \mathbb C = I$. \end{lemma} \begin{proof} For $d=0$ we have $E_0=E'_0 = E''_0=I$. \end{proof} \begin{lemma} \label{lem:DeltaInv} The maps $\mathbb A, \mathbb B, \mathbb C$ are invertible. Their inverses are \begin{eqnarray*} \mathbb {A}^{-1} = \sum_{i=0}^d E''_{d-i} E_{i}, \qquad \qquad \mathbb B^{-1} = \sum_{i=0}^d E_{d-i} E'_{i}, \qquad \qquad \mathbb C^{-1} = \sum_{i=0}^d E'_{d-i} E''_i. \end{eqnarray*} \end{lemma} \begin{proof} Concerning $\mathbb A$ and using Lemma \ref{lem:tripleProd}, \begin{eqnarray*} \mathbb A \sum_{j=0}^d E''_{d-j} E_j= \sum_{j=0}^d E_jE''_{d-j}E_j = \sum_{j=0}^d E_j = I. 
\end{eqnarray*} \end{proof} \begin{lemma} \label{lem:delA} For $0 \leq i \leq d$, \begin{eqnarray*} \mathbb A E''_i = E_{d-i} \mathbb A, \qquad \qquad \mathbb B E_i = E'_{d-i} \mathbb B, \qquad \qquad \mathbb C E'_i = E''_{d-i} \mathbb C. \end{eqnarray*} \end{lemma} \begin{proof} These equations are verified by evaluating each side using Definition \ref{def:Del}. \end{proof} \begin{lemma} \label{lem:moveDec} For $0 \leq i \leq d$, \begin{eqnarray*} \mathbb A E''_iV = E_{d-i}V, \qquad \qquad \mathbb B E_iV = E'_{d-i}V, \qquad \qquad \mathbb C E'_iV = E''_{d-i}V. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemmas \ref{lem:DeltaInv}, \ref{lem:delA}. \end{proof} \begin{lemma} \label{lem:AAdecSend} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $\mathbb A$ sends the $(A,C)$-decomposition of $V$ to the $(A,B)$-decomposition of $V$; \item[\rm (ii)] $\mathbb B$ sends the $(B,A)$-decomposition of $V$ to the $(B,C)$-decomposition of $V$; \item[\rm (iii)] $\mathbb C$ sends the $(C,B)$-decomposition of $V$ to the $(C,A)$-decomposition of $V$. \end{enumerate} \end{lemma} \begin{proof} This is a reformulation of Lemma \ref{lem:moveDec}. \end{proof} \noindent We now consider how the maps $\mathbb A,\mathbb B, \mathbb C$ act on the three flags (\ref{eq:LRTMOF}). \begin{lemma} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $\mathbb A$ fixes $\lbrace A^{d-i}V\rbrace_{i=0}^d$ and sends $\lbrace C^{d-i}V\rbrace_{i=0}^d$ to $\lbrace B^{d-i}V\rbrace_{i=0}^d$; \item[\rm (ii)] $\mathbb B$ fixes $\lbrace B^{d-i}V\rbrace_{i=0}^d$ and sends $\lbrace A^{d-i}V\rbrace_{i=0}^d$ to $\lbrace C^{d-i}V\rbrace_{i=0}^d$; \item[\rm (iii)] $\mathbb C$ fixes $\lbrace C^{d-i}V\rbrace_{i=0}^d$ and sends $\lbrace B^{d-i}V\rbrace_{i=0}^d$ to $\lbrace A^{d-i}V\rbrace_{i=0}^d$. \end{enumerate} \end{lemma} \begin{proof} By Lemmas \ref{lem:LRTinducedFlag}, \ref{lem:AAdecSend}. 
\end{proof} \noindent We now consider how the maps $\mathbb A,\mathbb B, \mathbb C$ act on some bases for $V$. \begin{lemma} \label{lem:bbAaction} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $\mathbb A$ sends each $(A,C)$-basis of $V$ to a compatible $(A,B)$-basis of $V$; \item[\rm (ii)] $\mathbb B$ sends each $(B,A)$-basis of $V$ to a compatible $(B,C)$-basis of $V$; \item[\rm (iii)] $\mathbb C$ sends each $(C,B)$-basis of $V$ to a compatible $(C,A)$-basis of $V$. \end{enumerate} \end{lemma} \begin{proof}(i) Let $\lbrace u_i\rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$, and let $\lbrace v_i \rbrace_{i=0}^d$ denote a compatible $(A,B)$-basis of $V$. We have $Au_i=u_{i-1}$ for $1 \leq i \leq d$ and $Au_0=0$. Also $v_i=B^iv_0/(\varphi_1\cdots \varphi_i) $ for $0 \leq i \leq d$. Moreover $u_0=v_0$. For $0 \leq i \leq d$, \begin{eqnarray*} \mathbb A u_i = E_iE''_{d-i}u_i = E_i u_i = \frac{A^{d-i}B^dA^i}{\varphi_1 \cdots \varphi_d} u_i = \frac{A^{d-i}B^du_0}{\varphi_1\cdots \varphi_d} = \frac{B^iu_0}{\varphi_1\cdots \varphi_i}=v_i. \end{eqnarray*} \\ \noindent (ii), (iii) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:TandA} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] the matrix $T'$ represents $\mathbb A$ with respect to each $(A,C)$-basis of $V$; \item[\rm (ii)] the matrix $T''$ represents $\mathbb B$ with respect to each $(B,A)$-basis of $V$; \item[\rm (iii)] the matrix $T$ represents $\mathbb C$ with respect to each $(C,B)$-basis of $V$. \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:TTT} and Lemma \ref{lem:bbAaction}. \end{proof} \noindent Recall the vectors $\eta,\eta',\eta''$ and $\tilde \eta, \tilde \eta', \tilde \eta''$ from (\ref{eq:eta}), (\ref{eq:etaDual}). 
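\noindent By way of illustration, suppose $d=2$. Since $T'$ is upper triangular and Toeplitz with parameters $\lbrace \alpha'_i\rbrace_{i=0}^d$ (so that the $(i,j)$-entry of $T'$ is $\alpha'_{j-i}$), Lemma \ref{lem:TandA}(i) asserts that with respect to any $(A,C)$-basis of $V$ the map $\mathbb A$ is represented by
\begin{eqnarray*}
T' = \left(\begin{array}{ccc}
\alpha'_0 & \alpha'_1 & \alpha'_2 \\
0 & \alpha'_0 & \alpha'_1 \\
0 & 0 & \alpha'_0
\end{array}\right).
\end{eqnarray*}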
\begin{lemma} \label{lem:AiBiCi} For $0 \leq i \leq d$, \begin{eqnarray*} && \mathbb A C^i \eta = \frac{\varphi''_d \cdots \varphi''_{d-i+1}}{\varphi_1 \cdots \varphi_i}\,B^i \eta, \qquad \qquad \mathbb A^{-1} B^i \eta = \frac{\varphi_1 \cdots \varphi_{i}}{\varphi''_d \cdots \varphi''_{d-i+1}}\,C^i \eta, \\ && \mathbb B A^i \eta' = \frac{\varphi_d \cdots \varphi_{d-i+1}}{\varphi'_1 \cdots \varphi'_i}\,C^i \eta', \qquad \qquad \mathbb B^{-1} C^i \eta' = \frac{\varphi'_1 \cdots \varphi'_{i}}{\varphi_d \cdots \varphi_{d-i+1}}\,A^i \eta', \\ && \mathbb C B^i \eta'' = \frac{\varphi'_d \cdots \varphi'_{d-i+1}}{\varphi''_1 \cdots \varphi''_i}\,A^i \eta'', \qquad \qquad \mathbb C^{-1} A^i \eta'' = \frac{\varphi''_1 \cdots \varphi''_{i}}{\varphi'_d \cdots \varphi'_{d-i+1}}\,B^i \eta''. \end{eqnarray*} \end{lemma} \begin{proof} By Lemmas \ref{lem:moreTTTp}, \ref{lem:bbAaction}. \end{proof} \begin{lemma} \label{lem:AAi} For $0 \leq i \leq d$, \begin{eqnarray*} && \mathbb A A^i \eta'' = \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)} A^i \eta', \qquad \qquad \mathbb A^{-1} A^i \eta' = \frac{(\eta',\tilde \eta)}{(\eta'',\tilde \eta)} A^i \eta'', \\ && \mathbb B B^i \eta = \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')} B^i \eta'', \qquad \qquad \mathbb B^{-1} B^i \eta'' = \frac{(\eta'',\tilde \eta')}{(\eta,\tilde \eta')} B^i \eta, \\ && \mathbb C C^i \eta' = \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')} C^i \eta, \qquad \qquad \mathbb C^{-1} C^i \eta = \frac{(\eta,\tilde \eta'')}{(\eta',\tilde \eta'')} C^i \eta'. \end{eqnarray*} \end{lemma} \begin{proof} Evaluate the displayed equations in Lemma \ref{lem:AiBiCi} using Lemma \ref{lem:COB}, and simplify the results using Lemma \ref{lem:NZ} and Proposition \ref{prop:sixBil}. 
\end{proof} \begin{lemma} \label{lem:iIszero} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $\mathbb A$ fixes $\eta$ and sends $\eta''$ to $(\eta'',\tilde \eta)/(\eta',\tilde \eta)\eta'$; \item[\rm (ii)] $\mathbb B$ fixes $\eta'$ and sends $\eta$ to $(\eta,\tilde \eta')/(\eta'',\tilde \eta')\eta''$; \item[\rm (iii)] $\mathbb C$ fixes $\eta''$ and sends $\eta'$ to $(\eta',\tilde \eta'')/(\eta,\tilde \eta'')\eta$. \end{enumerate} \end{lemma} \begin{proof} Set $i=0$ in Lemmas \ref{lem:AiBiCi}, \ref{lem:AAi}. \end{proof} \begin{proposition} \label{prop:Apoly} We have \begin{eqnarray} \label{eq:rotator1} \mathbb A = \sum_{i=0}^d \alpha'_i A^i, \qquad \qquad \mathbb B = \sum_{i=0}^d \alpha''_i B^i, \qquad \qquad \mathbb C = \sum_{i=0}^d \alpha_i C^i. \end{eqnarray} Moreover \begin{eqnarray} \label{eq:rotator2} \mathbb A^{-1} = \sum_{i=0}^d \beta'_i A^i, \qquad \qquad \mathbb B^{-1} = \sum_{i=0}^d \beta''_i B^i, \qquad \qquad \mathbb C^{-1} = \sum_{i=0}^d \beta_i C^i. \end{eqnarray} \end{proposition} \begin{proof} We verify $\mathbb A = \sum_{i=0}^d \alpha'_i A^i$. Let $\lbrace u_i\rbrace_{i=0}^d$ denote an $(A,C)$-basis of $V$. Recall the matrix $\tau$ from Definition \ref{def:tauMat}. By Proposition \ref{prop:matrixRep}, $\tau$ represents $A$ with respect to $\lbrace u_i\rbrace_{i=0}^d$. By Lemma \ref{lem:ToeChar}, $T' = \sum_{i=0}^d \alpha'_i \tau^i$. So $T'$ represents $\sum_{i=0}^d \alpha'_i A^i$ with respect to $\lbrace u_i\rbrace_{i=0}^d$. By Lemma \ref{lem:TandA}(i), $T'$ represents $\mathbb A$ with respect to $\lbrace u_i\rbrace_{i=0}^d$. Therefore $\mathbb A = \sum_{i=0}^d \alpha'_i A^i$. The remaining assertions of the proposition are similarly verified. \end{proof} \noindent We emphasize one aspect of Proposition \ref{prop:Apoly}. \begin{corollary} \label{cor:AAcom} The element $A$ (resp. $B$) (resp. $C$) commutes with $\mathbb A$ (resp. $\mathbb B$) (resp. $\mathbb C$).
\end{corollary} \noindent An element $X \in {\rm End}(V)$ is called {\it unipotent} whenever $X-I$ is nilpotent. \begin{lemma} \label{lem:unip} Each of $\mathbb A$, $\mathbb B$, $\mathbb C$ is unipotent. \end{lemma} \begin{proof} The element $\mathbb A-I$ is nilpotent, since it is a linear combination of $\lbrace A^i\rbrace_{i=1}^d$ and $A$ is nilpotent. Therefore $\mathbb A$ is unipotent. The maps $\mathbb B$, $\mathbb C$ are similarly shown to be unipotent. \end{proof} \begin{definition} \label{def:unidata} \rm Call the sequence $\mathbb A, \mathbb B, \mathbb C$ the {\it unipotent data} for $A,B,C$. \end{definition} \begin{lemma} \label{lem:uniNew} Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triples $A,B,C$ and $\alpha A, \beta B, \gamma C$ have the same unipotent data. \end{lemma} \begin{proof} By Lemma \ref{lem:isequal} and Definition \ref{def:Del}. \end{proof} \begin{lemma} \label{lem:newUniData} In the table below, we display some LR triples on $V$ along with their unipotent data. \centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm unipotent data} \\ \hline \hline $ A, B, C$ & $\mathbb A, \mathbb B, \mathbb C$ \\ $ B, C, A$ & $\mathbb B, \mathbb C, \mathbb A$ \\ $C, A, B$ & $\mathbb C, \mathbb A, \mathbb B$ \\ \hline $ C, B, A$ & $\mathbb C^{-1}, \mathbb B^{-1}, \mathbb A^{-1}$ \\ $ A, C, B$ & $\mathbb A^{-1}, \mathbb C^{-1}, \mathbb B^{-1}$ \\ $ B, A, C$ & $\mathbb B^{-1}, \mathbb A^{-1}, \mathbb C^{-1}$ \end{tabular}} \end{lemma} \begin{proof} Use Definition \ref{def:Del} and Lemmas \ref{lem:ABCEvar}, \ref{lem:DeltaInv}. \end{proof} \begin{lemma} \label{lem:newUniDataDual} In the table below, we display some LR triples on $V^*$ along with their unipotent data. 
\centerline{ \begin{tabular}[t]{c|c} {\rm LR triple} & {\rm unipotent data} \\ \hline \hline $ \tilde A, \tilde B,\tilde C$ & $\tilde {\mathbb A}^{-1}, \tilde{ \mathbb B}^{-1}, \tilde {\mathbb C}^{-1}$ \\ $ \tilde B, \tilde C, \tilde A$ & $\tilde {\mathbb B}^{-1}, \tilde {\mathbb C}^{-1}, \tilde {\mathbb A}^{-1}$ \\ $\tilde C, \tilde A, \tilde B$ & $\tilde {\mathbb C}^{-1}, \tilde {\mathbb A}^{-1}, \tilde {\mathbb B}^{-1}$ \\ \hline $ \tilde C, \tilde B, \tilde A$ & $\tilde {\mathbb C}, \tilde {\mathbb B}, \tilde {\mathbb A}$ \\ $ \tilde A, \tilde C, \tilde B$ & $\tilde{ \mathbb A}, \tilde {\mathbb C}, \tilde {\mathbb B}$ \\ $ \tilde B, \tilde A, \tilde C$ & $\tilde {\mathbb B}, \tilde {\mathbb A}, \tilde {\mathbb C}$ \end{tabular}} \end{lemma} \begin{proof} Use Definition \ref{def:Del} and Lemmas \ref{lem:tildeABCEvar}, \ref{lem:DeltaInv}. Keep in mind that the adjoint map is an antiisomorphism. \end{proof} \begin{lemma} Assume that $A,B,C$ is bipartite. Then the projector $J$ commutes with each of $\mathbb A$, $\mathbb B$, $\mathbb C$. \end{lemma} \begin{proof} By Definition \ref{def:Del}, and since $J$ commutes with each of $E_i, E'_i, E''_i$ for $0 \leq i \leq d$. \end{proof} \begin{lemma} Assume that $A,B,C$ is bipartite. Then \begin{eqnarray*} && \mathbb A V_{\rm out} = V_{\rm out}, \qquad \qquad \mathbb B V_{\rm out} = V_{\rm out}, \qquad \qquad \mathbb C V_{\rm out} = V_{\rm out}, \\ && \mathbb A V_{\rm in} = V_{\rm in}, \qquad \qquad \;\; \mathbb B V_{\rm in} = V_{\rm in}, \qquad \qquad \quad \mathbb C V_{\rm in} = V_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:V0V1}, Definition \ref{def:Del}, and since $d$ is even, we find that $V_{\rm out}$ and $V_{\rm in}$ are invariant under each of $\mathbb A, \mathbb B, \mathbb C$. By Lemma \ref{lem:DeltaInv} the maps $\mathbb A, \mathbb B, \mathbb C$ are invertible. The result follows. \end{proof} \noindent The next two lemmas follow from the construction. 
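\noindent Before stating them, we record an elementary linear-algebra fact behind the invertibility of $\mathbb A$, $\mathbb B$, $\mathbb C$. Suppose that $X \in {\rm End}(V)$ is unipotent. Then $X-I$ is nilpotent, so $(X-I)^{d+1}=0$ since $V$ has dimension $d+1$. Consequently $X$ is invertible, with
\begin{eqnarray*}
X^{-1} = \sum_{i=0}^{d} (I-X)^i.
\end{eqnarray*}
In particular each of $\mathbb A$, $\mathbb B$, $\mathbb C$ (unipotent by Lemma \ref{lem:unip}) is invertible, and its inverse is a polynomial in the map itself; this is consistent with Proposition \ref{prop:Apoly} and Corollary \ref{cor:AAcom}.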
\begin{lemma} \label{lem:unipA2} Assume that $A,B,C$ is bipartite, so that $A^2,B^2,C^2$ act on $V_{\rm out}$ as an LR triple. The unipotent data for this triple is given by the actions of $\mathbb A, \mathbb B, \mathbb C$ on $V_{\rm out}$. \end{lemma} \begin{lemma} \label{lem:unipA2In} Assume that $A,B,C$ is bipartite and nontrivial, so that $A^2,B^2,C^2$ act on $V_{\rm in}$ as an LR triple. The unipotent data for this triple is given by the actions of $\mathbb A, \mathbb B, \mathbb C$ on $V_{\rm in}$. \end{lemma} \begin{lemma} Assume that $A,B,C$ is bipartite. Let \begin{eqnarray*} \alpha_{\rm out}, \quad \alpha_{\rm in}, \quad \beta_{\rm out}, \quad \beta_{\rm in}, \quad \gamma_{\rm out}, \quad \gamma_{\rm in} \end{eqnarray*} denote nonzero scalars in $\mathbb F$, so that the sequence \begin{eqnarray*} \alpha_{\rm out}A_{\rm out} +\alpha_{\rm in}A_{\rm in}, \qquad \beta_{\rm out} B_{\rm out}+ \beta_{\rm in}B_{\rm in}, \qquad \gamma_{\rm out}C_{\rm out}+ \gamma_{\rm in} C_{\rm in} \end{eqnarray*} is a bipartite LR triple on $V$. This LR triple has the same unipotent data as $A,B,C$. \end{lemma} \begin{proof} By Lemma \ref{lem:PAnewLRT}(ii) and Definition \ref{def:Del}. \end{proof} \section{The rotators for an LR triple} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). We discuss a type of element in ${\rm End}(V)$ called a rotator. \begin{definition} \label{def:ABCROT} \rm By a {\it rotator} for $A,B,C$ we mean an element $R \in {\rm End}(V)$ such that for $0 \leq i \leq d$, \begin{eqnarray} \label{eq:ROT} E_i R = R E'_i, \qquad \qquad E'_i R = R E''_i, \qquad \qquad E''_i R = R E_i. 
\end{eqnarray} \end{definition} \begin{lemma} \label{lem:ROTmeaning} For $R \in {\rm End}(V)$ the following are equivalent: \begin{enumerate} \item[\rm (i)] $R$ is a rotator for $A,B,C$; \item[\rm (ii)] for $0 \leq i \leq d$, \begin{eqnarray*} && R E'_iV \subseteq E_iV, \;\qquad \qquad R E''_iV \subseteq E'_iV, \;\qquad \qquad R E_iV \subseteq E''_iV. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} By Definition \ref{def:ABCROT} and linear algebra. \end{proof} \begin{lemma} \label{ex:trivRot} \rm Assume that $A,B,C$ is trivial. Then each element of ${\rm End}(V)$ is a rotator for $A,B,C$. \end{lemma} \begin{proof} For $d=0$ we have $E_0=E'_0=E''_0=I$. \end{proof} \begin{lemma} Let $R$ denote a rotator for $A,B,C$. Then \begin{eqnarray} \mathbb A R = R \mathbb B, \qquad \qquad \mathbb B R = R \mathbb C, \qquad \qquad \mathbb C R = R \mathbb A. \label{eq:RotM2} \end{eqnarray} \end{lemma} \begin{proof} Use Definitions \ref{def:Del}, \ref{def:ABCROT}. \end{proof} \begin{definition} \label{def:ABCROTspace} \rm Let $\mathcal R$ denote the set of rotators for $A,B,C$. Note that $\mathcal R$ is a subspace of the $\mathbb F$-vector space ${\rm End}(V)$. We call $\mathcal R$ the {\it rotator space} for $A,B,C$. \end{definition} \begin{definition} \label{lem:Rbasis} \rm Assume that $A,B,C$ is trivial. Then the identity $I$ of ${\rm End}(V)=\mathcal R$ is a basis for $\mathcal R$. We call $I$ the {\it standard rotator} for $A,B,C$. \end{definition} \noindent Assume for the moment that $A,B,C$ is nontrivial. We are going to show that $\mathcal R$ has dimension 1 (if $A,B,C$ is nonbipartite) and 2 (if $A,B,C$ is bipartite). In each case, we will display an explicit basis for $\mathcal R$. We now obtain some results that will be used to construct these bases. \begin{lemma} \label{lem:3rot1} The following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] $\mathbb B^{-1} C \mathbb B$ is zero on $E_0V$. 
Moreover for $1 \leq i \leq d$ and on $E_iV$, \begin{eqnarray*} \mathbb B^{-1} C \mathbb B = \frac{\varphi'_{d-i+1}}{\varphi_{i}} A. \end{eqnarray*} \item[\rm (ii)] $\mathbb C^{-1} A \mathbb C$ is zero on $E'_0V$. Moreover for $1 \leq i \leq d$ and on $E'_iV$, \begin{eqnarray*} \mathbb C^{-1} A \mathbb C = \frac{\varphi''_{d-i+1}}{\varphi'_{i}} B. \end{eqnarray*} \item[\rm (iii)] $\mathbb A^{-1} B \mathbb A$ is zero on $E''_0V$. Moreover for $1 \leq i \leq d$ and on $E''_iV$, \begin{eqnarray*} \mathbb A^{-1} B \mathbb A = \frac{\varphi_{d-i+1}}{\varphi''_{i}} C. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} (i) The vector $A^{d-i}\eta'$ is a basis for $E_iV$. Apply $\mathbb B^{-1}C\mathbb B$ to this vector and evaluate the result using Lemma \ref{lem:AiBiCi} (middle row). \\ \noindent (ii), (iii) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:3rot2} The following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] $\mathbb A C \mathbb A^{-1}$ is zero on $E_dV$. Moreover for $0 \leq i \leq d-1$ and on $E_iV$, \begin{eqnarray*} \mathbb A C \mathbb A^{-1} = \frac{\varphi''_{d-i}}{\varphi_{i+1}} B. \end{eqnarray*} \item[\rm (ii)] $\mathbb B A \mathbb B^{-1}$ is zero on $E'_dV$. Moreover for $0 \leq i \leq d-1$ and on $E'_iV$, \begin{eqnarray*} \mathbb B A \mathbb B^{-1} = \frac{\varphi_{d-i}}{\varphi'_{i+1}} C. \end{eqnarray*} \item[\rm (iii)] $\mathbb C B \mathbb C^{-1}$ is zero on $E''_dV$. Moreover for $0 \leq i \leq d-1$ and on $E''_iV$, \begin{eqnarray*} \mathbb C B \mathbb C^{-1} = \frac{\varphi'_{d-i}}{\varphi''_{i+1}} A. \end{eqnarray*} \end{enumerate} \end{lemma} \begin{proof} (i) The vector $B^{i}\eta$ is a basis for $E_iV$. Apply $\mathbb A C\mathbb A^{-1}$ to this vector and evaluate the result using Lemma \ref{lem:AiBiCi} (top row). \\ \noindent (ii), (iii) Similar to the proof of (i) above. 
\end{proof} \begin{lemma} \label{lem:newLR} The following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] the $(A,B)$-decomposition of $V$ is lowered by $\mathbb B^{-1}C\mathbb B$ and raised by $\mathbb A C \mathbb A^{-1}$; \item[\rm (ii)] the $(B,C)$-decomposition of $V$ is lowered by $\mathbb C^{-1}A\mathbb C$ and raised by $\mathbb B A \mathbb B^{-1}$; \item[\rm (iii)] the $(C,A)$-decomposition of $V$ is lowered by $\mathbb A^{-1}B\mathbb A$ and raised by $\mathbb C B \mathbb C^{-1}$. \end{enumerate} \end{lemma} \begin{proof} (i) The sequence $\lbrace E_iV\rbrace_{i=0}^d$ is the $(A,B)$-decomposition of $V$. This decomposition is lowered by $A$ and raised by $B$. The result follows in view of Lemmas \ref{lem:3rot1}(i), \ref{lem:3rot2}(i). \\ \noindent (ii), (iii) Similar to the proof of (i) above. \end{proof} \begin{lemma} \label{lem:3and3} We have \begin{eqnarray*} && \mathbb B^{-1} C \mathbb B \Biggl( \sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E_i \Biggr) = \Biggl( \sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E_i \Biggr) A, \\ && \mathbb C^{-1} A \mathbb C \Biggl( \sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E'_i \Biggr) = \Biggl( \sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E'_i \Biggr) B, \\ && \mathbb A^{-1} B \mathbb A \Biggl( \sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i \Biggr) = \Biggl( \sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i \Biggr) C \end{eqnarray*} and also \begin{eqnarray*} && \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E_i\Biggr) \mathbb A C \mathbb A^{-1} = B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E_i\Biggr), \\ && \Biggl(\sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi_d\cdots \varphi_{d-i+1}} E'_i\Biggr) \mathbb B A \mathbb
B^{-1} = C \Biggl(\sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi_d \cdots \varphi_{d-i+1}} E'_i\Biggr), \\ && \Biggl(\sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E''_i\Biggr) \mathbb C B \mathbb C^{-1} = A \Biggl(\sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E''_i\Biggr). \end{eqnarray*} \end{lemma} \begin{proof} To verify the first (resp. fourth) displayed equation in the lemma statement, for $0 \leq i \leq d$ apply each side to $E_iV$, and evaluate the result using Lemma \ref{lem:3rot1}(i) (resp. Lemma \ref{lem:3rot2}(i)). The remaining equations are similarly verified. \end{proof} \noindent For the next few results, it is convenient to assume that $A,B,C$ is equitable. Shortly we will return to the general case. \begin{proposition} \label{prop:EquitWOW} Assume that $A,B,C$ is equitable. Then \begin{eqnarray} && \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E'_i\Biggr) \mathbb B = \mathbb A \Biggl(\sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E''_i\Biggr) \mathbb C, \label{eq:Om2} \\ && \mathbb A \Biggl(\sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i\Biggr) \mathbb C = \mathbb B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E_i\Biggr) \mathbb A, \label{eq:Om3} \\ && \mathbb B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E_i\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi_d \cdots \varphi_{d-i+1}} E'_i\Biggr) \mathbb B. \label{eq:Om1} \end{eqnarray} \end{proposition} \begin{proof} We prove (\ref{eq:Om2}). Define \begin{eqnarray} \label{eq:XX} X = \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi'_1 \cdots \varphi'_i} {\varphi''_d \cdots \varphi''_{d-i+1}} E'_i\Biggr). 
\end{eqnarray} We claim that \begin{eqnarray} \label{eq:XY} X = \Biggl(\sum_{i=0}^d \frac{\varphi''_1 \cdots \varphi''_i} {\varphi'_d \cdots \varphi'_{d-i+1}} E''_i\Biggr) \mathbb C. \end{eqnarray} To verify (\ref{eq:XY}), evaluate the right-hand side of (\ref{eq:XX}) using Lemma \ref{lem:delA}, and simplify the result using Lemma \ref{lem:combine}. The claim is proven. By (\ref{eq:XY}) and the last displayed equation in Lemma \ref{lem:3and3}, $XB=AX$. So $XB^i=A^iX$ for $0 \leq i \leq d$. By Definition \ref{def:equitNorm} and line (\ref{eq:rotator1}), $\mathbb A = \sum_{i=0}^d \alpha_i A^i$ and $\mathbb B = \sum_{i=0}^d \alpha_i B^i$. By these comments $X \mathbb B = \mathbb A X$. In this equation evaluate the $X$ on the left and right using (\ref{eq:XX}) and (\ref{eq:XY}), respectively. This yields (\ref{eq:Om2}). The equations (\ref{eq:Om3}), (\ref{eq:Om1}) are similarly obtained. \end{proof} \begin{definition} \label{def:Om123} \rm Assume that $A,B,C$ is equitable. Let $\Omega, \Omega', \Omega''$ denote the common values of {\rm (\ref{eq:Om2})}, {\rm (\ref{eq:Om3})}, {\rm (\ref{eq:Om1})} respectively. \end{definition} \begin{lemma} \label{lem:OmegaTriv} Assume that $A,B,C$ is trivial. Then $\Omega = \Omega' = \Omega''=I$. \end{lemma} \begin{proof} By Lemma \ref{lem:UniTriv}, Definition \ref{def:Om123}, and since $E_0 = E'_0 = E''_0=I$. \end{proof} \begin{lemma} \label{lem:OmegaCom2} Assume that $A,B,C$ is equitable. Then for $0 \leq i \leq d$, \begin{eqnarray*} E_i \Omega = \Omega E'_i, \qquad \qquad E'_i \Omega' = \Omega' E''_i, \qquad \qquad E''_i \Omega'' = \Omega'' E_i. \end{eqnarray*} \end{lemma} \begin{proof} To verify $E_i \Omega = \Omega E'_i$, eliminate $\Omega$ using the formula on the right in (\ref{eq:Om2}), and evaluate the result using Lemma \ref{lem:delA}. The remaining equations are similarly verified. \end{proof} \begin{lemma} \label{lem:OmegaCom} Assume that $A,B,C$ is equitable.
Then \begin{eqnarray} \label{eq:AOmB} A \Omega = \Omega B, \qquad \qquad B \Omega' = \Omega' C, \qquad \qquad C \Omega'' = \Omega'' A. \end{eqnarray} \end{lemma} \begin{proof} To verify $A \Omega = \Omega B$, eliminate $\Omega$ using the formula on the right in (\ref{eq:Om2}), and evaluate the result using Corollary \ref{cor:AAcom} and the last displayed equation in Lemma \ref{lem:3and3}. The remaining equations in (\ref{eq:AOmB}) are similarly verified. \end{proof} \begin{lemma} \label{lem:OmInv} Assume that $A,B,C$ is equitable. Then the elements $\Omega, \Omega', \Omega''$ are invertible. Moreover \begin{eqnarray*} \Omega^{-1} = \mathbb B^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi''_d \cdots \varphi''_{d-i+1}} {\varphi'_1 \cdots \varphi'_i} E'_i\Biggr) \mathbb C^{-1} = \mathbb C^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi'_d\cdots \varphi'_{d-i+1}} {\varphi''_1 \cdots \varphi''_i} E''_i\Biggr) \mathbb A^{-1}, && \\ (\Omega')^{-1}= \mathbb C^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi_d \cdots \varphi_{d-i+1}} {\varphi''_1 \cdots \varphi''_i} E''_i\Biggr) \mathbb A^{-1} = \mathbb A^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi''_d\cdots \varphi''_{d-i+1}} {\varphi_1 \cdots \varphi_i} E_i\Biggr) \mathbb B^{-1}, && \\ (\Omega'')^{-1}= \mathbb A^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi'_d \cdots \varphi'_{d-i+1}} {\varphi_1 \cdots \varphi_i} E_i\Biggr) \mathbb B^{-1} = \mathbb B^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi_d\cdots \varphi_{d-i+1}} {\varphi'_1 \cdots \varphi'_i} E'_i\Biggr) \mathbb C^{-1}. && \end{eqnarray*} \end{lemma} \begin{proof} Use Proposition \ref{prop:EquitWOW} and Definition \ref{def:Om123}. \end{proof} \begin{lemma} \label{prop:NBequit} Assume that $A,B,C$ is equitable and nonbipartite. 
Then $\Omega = \Omega'=\Omega''$, and this common value is equal to \begin{eqnarray*} &&\mathbb B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E_i\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E'_i\Biggr) \mathbb B \nonumber \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i\Biggr) \mathbb C. \end{eqnarray*} \end{lemma} \begin{proof} By Lemma \ref{lem:equitBasic}(i) along with (\ref{eq:Om2})--(\ref{eq:Om1}) and Definition \ref{def:Om123}. \end{proof} \noindent For the past few results we assumed that $A,B,C$ is equitable. We now drop the equitable assumption and return to the general case. \begin{theorem} \label{cor:everythingGen} \label{cor:everything} Assume that $A,B,C$ is nonbipartite. Then the following {\rm (i)--(v)} hold. \begin{enumerate} \item[\rm (i)] We have \begin{eqnarray*} &&\mathbb B \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E_i\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E'_i\Biggr) \mathbb B \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{i=0}^d \frac{\varphi_1 \cdots \varphi_i} {\varphi_d \cdots \varphi_{d-i+1}} E''_i\Biggr) \mathbb C. \end{eqnarray*} Denote this common value by $\Omega$. \item[\rm (ii)] $\Omega$ is invertible, and $\Omega^{-1}$ is equal to \begin{eqnarray*} && \mathbb A^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi_d \cdots \varphi_{d-i+1}} {\varphi_1 \cdots \varphi_{i}} E_i\Biggr) \mathbb B^{-1} = \mathbb B^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi_d \cdots \varphi_{d-i+1}} {\varphi_1 \cdots \varphi_{i}} E'_i\Biggr) \mathbb C^{-1} \nonumber \\ && \qquad \qquad \qquad = \mathbb C^{-1} \Biggl(\sum_{i=0}^d \frac{\varphi_d \cdots \varphi_{d-i+1}} {\varphi_1 \cdots \varphi_{i}} E''_i\Biggr) \mathbb A^{-1}. 
\end{eqnarray*} \item[\rm (iii)] For $0 \leq i \leq d$, \begin{eqnarray*} E_i \Omega = \Omega E'_i, \qquad \qquad E'_i \Omega = \Omega E''_i, \qquad \qquad E''_i \Omega = \Omega E_i. \end{eqnarray*} \item[\rm (iv)] We have \begin{eqnarray*} \alpha'_1 A \Omega = \alpha''_1 \Omega B, \qquad \qquad \alpha''_1 B \Omega = \alpha_1 \Omega C, \qquad \qquad \alpha_1 C \Omega = \alpha'_1 \Omega A. \end{eqnarray*} \item[\rm (v)] For $A,B,C$ equitable, \begin{eqnarray*} A \Omega = \Omega B, \qquad \qquad B \Omega = \Omega C, \qquad \qquad C \Omega = \Omega A. \end{eqnarray*} \end{enumerate} \end{theorem} \begin{proof} Apply Lemmas \ref{lem:OmegaCom2}--\ref{prop:NBequit} to the equitable LR triple $\alpha'_1 A, \alpha''_1 B, \alpha_1 C$ and use Lemmas \ref{lem:albega}, \ref{lem:isequal}, \ref{lem:uniNew}. \end{proof} \begin{proposition} \label{lem:OmEi} Assume that $A,B,C$ is nonbipartite. Then $\Omega$ is a rotator for $A,B,C$. \end{proposition} \begin{proof} By Definition \ref{def:ABCROT} and Theorem \ref{cor:everythingGen}(iii). \end{proof} \begin{lemma} \label{lem:OmegaSendEV} Assume that $A,B,C$ is nonbipartite. Then for $0 \leq i \leq d$, \begin{eqnarray*} \Omega E'_iV= E_iV, \qquad \qquad \Omega E''_iV= E'_iV, \qquad \qquad \Omega E_iV= E''_iV. \end{eqnarray*} \end{lemma} \begin{proof} By Proposition \ref{lem:OmEi} the map $\Omega$ is a rotator for $A,B,C$. The result follows by Lemma \ref{lem:ROTmeaning}, and since $\Omega$ is invertible by Theorem \ref{cor:everythingGen}(ii). \end{proof} \noindent Recall the rotator space $\mathcal R$ from Definition \ref{def:ABCROTspace}. \begin{proposition} \label{prop:OmegaBasis} Assume that $A,B,C$ is nonbipartite. Then $\Omega$ is a basis for the $\mathbb F$-vector space $\mathcal R$. \end{proposition} \begin{proof} We have $\Omega \in \mathcal R$ by Proposition \ref{lem:OmEi}. The map $\Omega$ is invertible by Theorem \ref{cor:everythingGen}(ii), so of course $\Omega \not=0$. We show that $\Omega $ spans $\mathcal R$. 
Let $R \in \mathcal R$. By assumption $R$ and $\Omega$ are rotators for $A,B,C$. So by Definition \ref{def:ABCROT}, $\Omega^{-1}R$ commutes with each of $E_i$, $E'_i$, $E''_i$ for $0 \leq i \leq d$. Now by Definition \ref{def:IC}, $\Omega^{-1}R$ is an idempotent centralizer for $A,B,C$. By Definition \ref{def:ICspace} and Proposition \ref{prop:centBasis}(ii), there exists $\zeta \in \mathbb F$ such that $\Omega^{-1}R = \zeta I$. Therefore $R = \zeta \Omega$. We have shown that $\Omega $ spans $\mathcal R$. The result follows. \end{proof} \begin{definition} \label{def:Rotator} \rm Assume that $A,B,C$ is nonbipartite. By the {\it standard rotator} for $A,B,C$ we mean the map $\Omega$ from Theorem \ref{cor:everythingGen} and Proposition \ref{prop:OmegaBasis}. \end{definition} \begin{lemma} \label{lem:OmegaSendsEta} Assume that $A,B,C$ is nonbipartite. Then $\Omega$ sends \begin{eqnarray*} \eta \to \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')}\eta'', \qquad \qquad \eta' \to \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')}\eta, \qquad \qquad \eta'' \to \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)}\eta'. \end{eqnarray*} \end{lemma} \begin{proof} Use Lemma \ref{lem:iIszero} and the formulae for $\Omega$ given in Theorem \ref{cor:everythingGen}(i). \end{proof} \begin{proposition} \label{thm:OmegaCubed} Assume that $A,B,C$ is nonbipartite. Then $\Omega^3= \theta I$, where $\theta$ is from Definition \ref{def:theta}. \end{proposition} \begin{proof} By Theorem \ref{cor:everythingGen}(iii), $\Omega^3$ commutes with each of $E_i, E'_i, E''_i$ for $0 \leq i \leq d$. By Definition \ref{def:IC}, $\Omega^{3}$ is an idempotent centralizer for $A,B,C$. By Definition \ref{def:ICspace} and Proposition \ref{prop:centBasis}(ii), there exists $\zeta \in \mathbb F$ such that $\Omega^{3}= \zeta I$. Considering the action of $\Omega^3$ on $\eta$ and using Lemma \ref{lem:OmegaSendsEta}, we obtain $\zeta = \theta$. 
\end{proof} \begin{lemma} \label{lem:SRadj} Assume that $A,B,C$ is nonbipartite. Let $\alpha, \beta, \gamma$ denote nonzero scalars in $\mathbb F$. Then the LR triples $A,B,C$ and $\alpha A, \beta B, \gamma C$ have the same standard rotator. \end{lemma} \begin{proof} Use Lemmas \ref{lem:albega}, \ref{lem:isequal}, \ref{lem:uniNew} and any formula for $\Omega$ given in Theorem \ref{cor:everythingGen}(i). \end{proof} \begin{lemma} \label{lem:sameROT} Assume that $A,B,C$ is nonbipartite. \begin{enumerate} \item[\rm (i)] The following LR triples have the same standard rotator: \begin{eqnarray*} A,B,C \qquad \qquad B,C,A\qquad \qquad C,A,B. \end{eqnarray*} \item[\rm (ii)] The following LR triples have the same standard rotator: \begin{eqnarray*} C,B,A \qquad \qquad A,C,B \qquad \qquad B,A,C. \end{eqnarray*} \item[\rm (iii)] The standard rotators in {\rm (i)}, {\rm (ii)} above are inverses. \end{enumerate} \end{lemma} \begin{proof} By Lemmas \ref{lem:EqAdjust}, \ref{lem:SRadj} we may assume without loss that $A,B,C$ is equitable. Now use Lemmas \ref{lem:ABCvar}, \ref{lem:ABCEvar}, \ref{lem:equitBasic}(i), \ref{lem:newUniData} and the formulae for $\Omega, \Omega^{-1}$ given in Theorem \ref{cor:everythingGen}(i),(ii). \end{proof} \noindent Recall from Lemma \ref{lem:newPA} the LR triple $\tilde A,\tilde B,\tilde C$ on the dual space $V^*$. \begin{lemma} Assume that $A,B,C$ is nonbipartite. Then the following are inverses: \begin{enumerate} \item[\rm (i)] the adjoint of the standard rotator for $A,B,C$; \item[\rm (ii)] the standard rotator for $\tilde A, \tilde B, \tilde C$. \end{enumerate} \end{lemma} \begin{proof} Use Lemmas \ref{lem:newPA}, \ref{lem:tildeABCEvar}, \ref{lem:newUniDataDual} and any formula for $\Omega$ given in Theorem \ref{cor:everythingGen}(i). \end{proof} \begin{proposition} \label{cor:RinvSend} Assume that $A,B,C$ is nonbipartite. Let $R$ denote a nonzero rotator for $A,B,C$. Then the following {\rm (i)--(iv)} hold.
\begin{enumerate} \item[\rm (i)] $R$ is invertible. \item[\rm (ii)] We have \begin{eqnarray*} \alpha'_1 A R = \alpha''_1 R B, \qquad \qquad \alpha''_1 B R = \alpha_1 RC, \qquad \qquad \alpha_1 C R= \alpha'_1 R A. \end{eqnarray*} \item[\rm (iii)] We have \begin{eqnarray*} \overline A R = R \overline B , \qquad \qquad \overline B R = R\overline C, \qquad \qquad \overline C R = R\overline A. \end{eqnarray*} \item[\rm (iv)] For $A,B,C$ equitable, \begin{eqnarray*} A R = R B, \qquad \qquad B R = R C, \qquad \qquad C R = R A. \end{eqnarray*} \end{enumerate} \end{proposition} \begin{proof} By Proposition \ref{prop:OmegaBasis} there exists $0 \not=\zeta \in \mathbb F$ such that $R = \zeta \Omega$. The results follow in view of Theorems \ref{thm:DL}(ii), \ref{cor:everythingGen}. \end{proof} \noindent We now turn our attention to the case in which $A,B,C$ is bipartite and nontrivial. \begin{lemma} \label{lem:RoutInPre} Assume that $A,B,C$ is bipartite and nontrivial. Let $R$ denote a rotator for $A,B,C$. Then the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] $R V_{\rm out} \subseteq V_{\rm out}$ and $R V_{\rm in} \subseteq V_{\rm in}$; \item[\rm (ii)] $R J=JR$; \item[\rm (iii)] $R J$ is a rotator for $A,B,C$. \end{enumerate} \end{lemma} \begin{proof}(i) By Lemmas \ref{lem:V0V1}, \ref{lem:ROTmeaning}. \\ \noindent (ii) The map $J$ acts on $V_{\rm out}$ as the identity, and on $V_{\rm in}$ as zero. The result follows from this and (i) above. \\ \noindent (iii) By Definition \ref{def:ABCROT}, and since $J$ is an idempotent centralizer for $A,B,C$ by Proposition \ref{prop:centBasis}(iii). \end{proof} \begin{definition} \label{def:RoutRin} \rm Assume that $A,B,C$ is bipartite and nontrivial. Let $R$ denote a rotator for $A,B,C$. Then $R$ is called {\it outer} (resp. {\it inner}) whenever $R$ is zero on $V_{\rm in}$ (resp. $V_{\rm out}$). Let ${\mathcal R}_{\rm out}$ (resp. ${\mathcal R}_{\rm in}$) denote the set of outer (resp. inner) rotators for $A,B,C$. 
Note that ${\mathcal R}_{\rm out}$ and ${\mathcal R}_{\rm in}$ are subspaces of the $\mathbb F$-vector space $\mathcal R$. \end{definition} \begin{definition} \label{def:OmegaOutIn} \rm Assume that $A,B,C$ is bipartite and nontrivial. Define elements $\Omega_{\rm out}$, $\Omega_{\rm in}$ in ${\rm End}(V)$ as follows. Recall by Lemmas \ref{lem:A2B2C2Out}, \ref{lem:NormHalf}(ii) that $A^2,B^2,C^2$ acts on $V_{\rm out}$ as a nonbipartite LR triple. The map $\Omega_{\rm out}$ acts on $V_{\rm out}$ as the standard rotator for this LR triple. The map $\Omega_{\rm out}$ acts on $V_{\rm in}$ as zero. The map $\Omega_{\rm in}$ acts on $V_{\rm out}$ as zero. Recall by Lemmas \ref{lem:A2B2C2In}, \ref{lem:NormHalfIn}(i),(ii) that $A^2,B^2,C^2$ acts on $V_{\rm in}$ as an LR triple that is nonbipartite or trivial. The map $\Omega_{\rm in}$ acts on $V_{\rm in}$ as the standard rotator for this LR triple. \end{definition} \begin{lemma} \label{lem:OmegaBasic} With reference to Definition \ref{def:OmegaOutIn}, \begin{eqnarray*} \Omega_{\rm out} V_{\rm out} = V_{\rm out}, \quad \qquad \Omega_{\rm out} V_{\rm in} = 0, \quad \qquad \Omega_{\rm in} V_{\rm out} = 0, \quad \qquad \Omega_{\rm in} V_{\rm in} = V_{\rm in}. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:OmegaOutIn} and the construction. \end{proof} \begin{proposition} \label{prop:bipRoutIN} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] the sum ${\mathcal R} = {\mathcal R}_{\rm out} + {\mathcal R}_{\rm in}$ is direct; \item[\rm (ii)] $\Omega_{\rm out}$ is a basis for ${\mathcal R}_{\rm out}$; \item[\rm (iii)] $\Omega_{\rm in}$ is a basis for ${\mathcal R}_{\rm in}$. \end{enumerate} \end{proposition} \begin{proof} By Definitions \ref{def:RoutRin}, \ref{def:OmegaOutIn} we find $\Omega_{\rm out} \in \mathcal R_{\rm out}$ and $\Omega_{\rm in} \in \mathcal R_{\rm in}$. 
We mentioned in Definition \ref{def:OmegaOutIn} that $A^2,B^2,C^2$ acts on $V_{\rm out}$ as a nonbipartite LR triple. Denote the corresponding rotator subspace and standard rotator by $\mathcal R^{\rm out}$ and $\Omega^{\rm out}$, respectively. By construction $\mathcal R^{\rm out}$ is a subspace of ${\rm End}(V_{\rm out})$. By Proposition \ref{prop:OmegaBasis}, $\Omega^{\rm out}$ is a basis for $\mathcal R^{\rm out}$. For $R \in \mathcal R$ the restriction $R \vert_{V_{\rm out}}$ is contained in ${\mathcal R}^{\rm out}$, and the map $\mathcal R \to {\mathcal R}^{\rm out}$, $R \mapsto R \vert_{V_{\rm out}}$ is $\mathbb F$-linear. This map has kernel $\mathcal R_{\rm in}$. This map sends $\Omega_{\rm out} \mapsto \Omega^{\rm out}$ and is therefore surjective. By these comments $\Omega_{\rm out}$ forms a basis for a complement of $\mathcal R_{\rm in}$ in $\mathcal R$. Similarly $\Omega_{\rm in}$ forms a basis for a complement of $\mathcal R_{\rm out}$ in $\mathcal R$. Note that ${\mathcal R}_{\rm out} \cap {\mathcal R}_{\rm in} = 0$ by Definition \ref{def:RoutRin} and since the sum $V = {V}_{\rm out} + {V}_{\rm in}$ is direct. The result follows. \end{proof} \begin{definition} \label{def:StandardOmega} \rm With reference to Definition \ref{def:OmegaOutIn} and Proposition \ref{prop:bipRoutIN}, we call $\Omega_{\rm out}$ (resp. $\Omega_{\rm in}$) the {\it standard outer rotator} (resp. {\it standard inner rotator}) for $A,B,C$. \end{definition} \noindent We now describe $\Omega_{\rm out}$ and $\Omega_{\rm in}$ in more detail. \begin{theorem} \label{lem:OmegaOutD} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i)--(v)} hold. 
\begin{enumerate} \item[\rm (i)] We have \begin{eqnarray*} &&\Omega_{\rm out} = \mathbb B \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2\cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E_{2j}\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2 \cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E'_{2j}\Biggr) \mathbb B \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{j=0}^{d/2} \frac{\varphi_1 \varphi_2 \cdots \varphi_{2j}} {\varphi_d \varphi_{d-1}\cdots \varphi_{d-2j+1}} E''_{2j}\Biggr) \mathbb C. \end{eqnarray*} \item[\rm (ii)] For $0 \leq i \leq d$, \begin{eqnarray} \label{eq:OutCom} E_i \Omega_{\rm out} = \Omega_{\rm out} E'_i, \qquad \qquad E'_i \Omega_{\rm out} = \Omega_{\rm out} E''_i, \qquad \qquad E''_i \Omega_{\rm out} = \Omega_{\rm out} E_i. \end{eqnarray} \item[\rm (iii)] Referring to {\rm (\ref{eq:OutCom})}, if $i$ is odd then for each equation both sides are zero. \item[\rm (iv)] We have \begin{eqnarray*} \alpha'_2 A^2 \Omega_{\rm out} = \alpha''_2 \Omega_{\rm out} B^2, \qquad \alpha''_2 B^2 \Omega_{\rm out} = \alpha_2 \Omega_{\rm out} C^2, \qquad \alpha_2 C^2 \Omega_{\rm out} = \alpha'_2 \Omega_{\rm out} A^2. \end{eqnarray*} \item[\rm (v)] For $A,B,C$ equitable, \begin{eqnarray*} A^2 \Omega_{\rm out} = \Omega_{\rm out} B^2, \qquad \qquad B^2 \Omega_{\rm out} = \Omega_{\rm out} C^2, \qquad \qquad C^2 \Omega_{\rm out} = \Omega_{\rm out} A^2. \end{eqnarray*} \end{enumerate} \end{theorem} \begin{proof} Apply Theorem \ref{cor:everything} to the LR triple in Lemma \ref{lem:A2B2C2Out}, and evaluate the result using Lemmas \ref{lem:unipA2}, \ref{lem:OmegaBasic}. \end{proof} \begin{theorem} \label{lem:OmegaOutInD} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i)--(v)} hold. 
\begin{enumerate} \item[\rm (i)] We have \begin{eqnarray*} &&\Omega_{\rm in} = \mathbb B \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3\cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2}\cdots \varphi_{d-2j}} E_{2j+1}\Biggr) \mathbb A = \mathbb C \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3 \cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2}\cdots \varphi_{d-2j}} E'_{2j+1}\Biggr) \mathbb B \nonumber \\ && \qquad \qquad \qquad = \mathbb A \Biggl(\sum_{j=0}^{d/2-1} \frac{\varphi_2 \varphi_3\cdots \varphi_{2j+1}} {\varphi_{d-1} \varphi_{d-2} \cdots \varphi_{d-2j}} E''_{2j+1}\Biggr) \mathbb C. \end{eqnarray*} \item[\rm (ii)] For $0 \leq i \leq d$, \begin{eqnarray} \label{eq:InCom} E_i \Omega_{\rm in} = \Omega_{\rm in} E'_i, \qquad \qquad E'_i \Omega_{\rm in} = \Omega_{\rm in} E''_i, \qquad \qquad E''_i \Omega_{\rm in} = \Omega_{\rm in} E_i. \end{eqnarray} \item[\rm (iii)] Referring to {\rm (\ref{eq:InCom})}, if $i$ is even then for each equation both sides are zero. \item[\rm (iv)] We have \begin{eqnarray*} \alpha'_2 A^2 \Omega_{\rm in} = \alpha''_2 \Omega_{\rm in} B^2, \qquad \alpha''_2 B^2 \Omega_{\rm in} = \alpha_2 \Omega_{\rm in} C^2, \qquad \alpha_2 C^2 \Omega_{\rm in} = \alpha'_2 \Omega_{\rm in} A^2. \end{eqnarray*} \item[\rm (v)] For $A,B,C$ equitable, \begin{eqnarray*} A^2 \Omega_{\rm in} = \Omega_{\rm in} B^2, \qquad \qquad B^2 \Omega_{\rm in} = \Omega_{\rm in} C^2, \qquad \qquad C^2 \Omega_{\rm in} = \Omega_{\rm in} A^2. \end{eqnarray*} \end{enumerate} \end{theorem} \begin{proof} Apply Theorem \ref{cor:everything} to the LR triple in Lemma \ref{lem:A2B2C2In}, and evaluate the result using Lemmas \ref{lem:unipA2In}, \ref{lem:OmegaBasic}. \end{proof} \noindent Recall the maps $\Omega, \Omega', \Omega''$ from Definition \ref{def:Om123}. \begin{proposition} \label{lem:OmegaOmega} Assume that $A,B,C$ is equitable, bipartite, and nontrivial. 
Then \begin{eqnarray*} \Omega = \Omega_{\rm out} + \rho_0 \Omega_{\rm in}, \qquad \qquad \Omega' = \Omega_{\rm out} + \rho'_0 \Omega_{\rm in}, \qquad \qquad \Omega'' = \Omega_{\rm out} + \rho''_0 \Omega_{\rm in}. \end{eqnarray*} \end{proposition} \begin{proof} Compare the formulae for $\Omega$, $\Omega'$, $\Omega''$ given in Proposition \ref{prop:EquitWOW}, with the formulae for $\Omega_{\rm out}$, $\Omega_{\rm in}$ given in Theorems \ref{lem:OmegaOutD}(i), \ref{lem:OmegaOutInD}(i). The result follows in view of Lemma \ref{lem:basic2A} and (\ref{def:rho0def}). \end{proof} \begin{lemma} \label{lem:OmegaMC} Assume that $A,B,C$ is equitable, bipartite, and nontrivial. Then $\Omega, \Omega', \Omega''$ are rotators for $A,B,C$. \end{lemma} \begin{proof} By Propositions \ref{prop:bipRoutIN}, \ref{lem:OmegaOmega}. \end{proof} \begin{proposition} \label{prop:rhoShow} Assume that $A,B,C$ is bipartite and nontrivial. Then \begin{eqnarray*} && \varphi'_d A \Omega_{\rm out} = \varphi''_1 \Omega_{\rm in} B, \qquad \qquad \varphi''_d B \Omega_{\rm out} = \varphi_1 \Omega_{\rm in} C, \qquad \qquad \varphi_d C \Omega_{\rm out} = \varphi'_1 \Omega_{\rm in} A, \\ && \varphi'_1 A \Omega_{\rm in} = \varphi''_d\Omega_{\rm out} B, \qquad \qquad \varphi''_1 B \Omega_{\rm in} = \varphi_d \Omega_{\rm out} C, \qquad \qquad \varphi_1 C \Omega_{\rm in} = \varphi'_d \Omega_{\rm out} A. \end{eqnarray*} Moreover for $A,B,C$ equitable, \begin{eqnarray*} && A \Omega_{\rm out} = \rho_0 \Omega_{\rm in} B, \qquad \qquad B \Omega_{\rm out} = \rho'_0 \Omega_{\rm in} C, \qquad \qquad C \Omega_{\rm out} = \rho''_0 \Omega_{\rm in} A, \\ && \rho_0 A \Omega_{\rm in} = \Omega_{\rm out} B, \qquad \qquad \rho'_0 B \Omega_{\rm in} = \Omega_{\rm out} C, \qquad \qquad \rho''_0 C \Omega_{\rm in} = \Omega_{\rm out} A. \end{eqnarray*} \end{proposition} \begin{proof} First assume that $A,B,C$ is equitable. 
To obtain the result under this assumption, evaluate (\ref{eq:AOmB}) using Proposition \ref{lem:OmegaOmega}, Lemmas \ref{lem:bipABCact}, \ref{lem:OmegaBasic}, and line (\ref{def:rho0def}). We have verified the result under the assumption that $A,B,C$ is equitable. To remove the assumption, apply the result so far to the LR triple (\ref{eq:NewLRT}) in Lemma \ref{lem:InOut}, made equitable by choosing the parameters (\ref{eq:AlphaInOut}) to satisfy (\ref{eq:neededForEquit}). \end{proof} \begin{proposition} \label{prop:rotatorRaction} Assume that $A,B,C$ is bipartite and nontrivial. Let $R$ denote a rotator for $A,B,C$ and write $R=r\Omega_{\rm out} + s \Omega_{\rm in}$ with $r,s \in \mathbb F$. Then \begin{eqnarray*} && s \varphi'_d A_{\rm out} R = r \varphi''_1 R B_{\rm out}, \qquad s \varphi''_d B_{\rm out} R = r \varphi_1 R C_{\rm out}, \qquad s \varphi_d C_{\rm out} R = r \varphi'_1 R A_{\rm out}, \\ && r\varphi'_1 A_{\rm in} R = s \varphi''_d R B_{\rm in}, \quad \qquad r\varphi''_1 B_{\rm in} R = s \varphi_d R C_{\rm in}, \quad \qquad r\varphi_1 C_{\rm in} R = s \varphi'_d R A_{\rm in}. \end{eqnarray*} Moreover for $A,B,C$ equitable, \begin{eqnarray*} && s A_{\rm out} R = r \rho_0 R B_{\rm out}, \qquad s B_{\rm out} R = r \rho'_0 R C_{\rm out}, \qquad s C_{\rm out} R = r \rho''_0 R A_{\rm out}, \\ && r\rho_0 A_{\rm in} R = s R B_{\rm in}, \quad \qquad r\rho'_0 B_{\rm in} R = s R C_{\rm in}, \quad \qquad r\rho''_0 C_{\rm in} R = s R A_{\rm in}. \end{eqnarray*} \end{proposition} \begin{proof} To verify these equations, eliminate $R$ using $R= r \Omega_{\rm out}+s \Omega_{\rm in}$ and evaluate the result using Definition \ref{def:ABCoutIn} together with Proposition \ref{prop:rhoShow}. \end{proof} \begin{lemma} \label{lem:OmegaOutSend} Assume that $A,B,C$ is bipartite and nontrivial.
Then $\Omega_{\rm out}$ sends \begin{eqnarray*} \eta \to \frac{(\eta,\tilde \eta')}{(\eta'',\tilde \eta')}\eta'', \qquad \qquad \eta' \to \frac{(\eta',\tilde \eta'')}{(\eta,\tilde \eta'')}\eta, \qquad \qquad \eta'' \to \frac{(\eta'',\tilde \eta)}{(\eta',\tilde \eta)}\eta'. \end{eqnarray*} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:OmegaSendsEta}. \end{proof} \noindent Recall the scalar $\theta$ from Definition \ref{def:theta}. \begin{proposition} \label{lem:OmegaOutcubed} Assume that $A,B,C$ is bipartite and nontrivial. Then the following {\rm (i), (ii)} hold. \begin{enumerate} \item[\rm (i)] $\Omega^3_{\rm out} = \theta I$ on $V_{\rm out}$. \item[\rm (ii)] $\Omega^3_{\rm in} = \rho^{-1} \theta I$ on $V_{\rm in}$, where $\rho= \varphi_1 \varphi'_1 \varphi''_1/ (\varphi_d \varphi'_d \varphi''_d)$. \end{enumerate} \end{proposition} \begin{proof} (i) Similar to the proof of Proposition \ref{thm:OmegaCubed}. \\ \noindent (ii) By Proposition \ref{prop:rhoShow} we obtain $A \Omega^3_{\rm out} = \rho \Omega^3_{\rm in} A $. For this equation apply each side to $V_{\rm out}$ and use the fact that $AV_{\rm out} = V_{\rm in}$. The result follows in view of (i) above. \end{proof} \begin{lemma} \label{lem:whenRinv} Assume that $A,B,C$ is bipartite and nontrivial. Let $R$ denote a rotator for $A,B,C$ and write $R = r \Omega_{\rm out} + s \Omega_{\rm in}$ with $r,s\in \mathbb F$. Then $R$ is invertible if and only if $r,s$ are nonzero. \end{lemma} \begin{proof} By Lemma \ref{lem:OmegaBasic}. \end{proof} \begin{proposition} Assume that $A,B,C$ is bipartite and nontrivial. Let $R$ denote an invertible rotator for $A,B,C$. Then \begin{eqnarray*} \overline A R = R\overline B, \qquad \qquad \overline B R = R\overline C, \qquad \qquad \overline C R = R \overline A. \end{eqnarray*} \end{proposition} \begin{proof} Use Theorem \ref{thm:DL}(iii) and the comment above that theorem, along with Proposition \ref{prop:rotatorRaction} and Lemma \ref{lem:whenRinv}. 
\end{proof} \section{The reflectors for an LR triple} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). Recall the reflector antiautomorphism concept discussed in Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. There are three reflectors associated with $A,B,C$: the $(A,B)$-reflector, the $(B,C)$-reflector, and the $(C,A)$-reflector. We now consider how these reflectors behave. In order to keep things simple, throughout this section we assume that $A,B,C$ is equitable. \begin{proposition} Assume that $A,B,C$ is equitable and nonbipartite, with standard rotator $\Omega$. Then the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] The $(A,B)$-reflector swaps $A,B$ and fixes $C$. It swaps $\mathbb A, \mathbb B$ and fixes $\mathbb C$. It fixes $\Omega$. For $0 \leq i \leq d$ it fixes $E_i$ and swaps $E'_i,E''_{i}$. \item[\rm (ii)] The $(B,C)$-reflector swaps $B,C$ and fixes $A$. It swaps $\mathbb B, \mathbb C$ and fixes $\mathbb A$. It fixes $\Omega$. For $0 \leq i \leq d$ it fixes $E'_i$ and swaps $E''_i,E_{i}$. \item[\rm (iii)] The $(C,A)$-reflector swaps $C,A$ and fixes $B$. It swaps $\mathbb C, \mathbb A$ and fixes $\mathbb B$. It fixes $\Omega$. For $0 \leq i \leq d$ it fixes $E''_i$ and swaps $E_i,E'_{i}$. \end{enumerate} \end{proposition} \begin{proof} (i) Denote the $(A,B)$-reflector by $\dagger$. The map $\dagger$ swaps $A,B$ by Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. We have $\mathbb A = \sum_{i=0}^d \alpha_i A^i$ and $\mathbb B = \sum_{i=0}^d \alpha_i B^i$ by Proposition \ref{prop:Apoly}, so $\dagger$ swaps $\mathbb A,\mathbb B$.
To see that $\dagger$ fixes $\Omega$, use the first formula for $\Omega$ given in Theorem \ref{cor:everything}(i), along with Definition \ref{def:REF} and Lemma \ref{lem:psiFix}. From Theorem \ref{cor:everythingGen}(v) we obtain $C=\Omega A \Omega^{-1}$ and $C=\Omega^{-1}B\Omega$. By these and since $\dagger$ fixes $\Omega$, we see that $\dagger$ fixes $C$. Consequently $\dagger$ fixes $\mathbb C = \sum_{i=0}^d \alpha_i C^i$. For $0 \leq i\leq d$ the map $\dagger$ fixes $E_i$ by Lemma \ref{lem:daggerE}. Also by Lemma \ref{lem:Eform2} and Lemma \ref{lem:equitBasic}(i) the map $\dagger$ swaps $E'_i, E''_{i}$. \\ \noindent (ii), (iii) Use (i) above and Lemma \ref{lem:sameROT}(i). \end{proof} \begin{proposition} \label{eq:EBRef} Assume that $A,B,C$ is equitable, bipartite, and nontrivial. Then the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] The $(A,B)$-reflector sends \begin{eqnarray*} && A_{\rm out}\rightarrow B_{\rm in}, \qquad \qquad B_{\rm out}\rightarrow A_{\rm in}, \qquad \qquad C_{\rm out}\rightarrow (\rho''_0/\rho'_0) C_{\rm in}, \\ && A_{\rm in}\rightarrow B_{\rm out}, \qquad \qquad B_{\rm in}\rightarrow A_{\rm out}, \qquad \qquad C_{\rm in}\rightarrow (\rho'_0/\rho''_0) C_{\rm out}. \end{eqnarray*} It swaps $\mathbb A, \mathbb B$ and fixes $\mathbb C$. It fixes $J$ and everything in $\mathcal R$. For $0 \leq i \leq d$ it fixes $E_i$ and swaps $E'_i, E''_i$. \item[\rm (ii)] The $(B,C)$-reflector sends \begin{eqnarray*} && B_{\rm out}\rightarrow C_{\rm in}, \qquad \qquad C_{\rm out}\rightarrow B_{\rm in}, \qquad \qquad A_{\rm out}\rightarrow (\rho_0/\rho''_0) A_{\rm in}, \\ && B_{\rm in}\rightarrow C_{\rm out}, \qquad \qquad C_{\rm in}\rightarrow B_{\rm out}, \qquad \qquad A_{\rm in}\rightarrow (\rho''_0/\rho_0) A_{\rm out}. \end{eqnarray*} It swaps $\mathbb B, \mathbb C$ and fixes $\mathbb A$. It fixes $J$ and everything in $\mathcal R$. For $0 \leq i \leq d$ it fixes $E'_i$ and swaps $E''_i, E_i$. 
\item[\rm (iii)] The $(C,A)$-reflector sends \begin{eqnarray*} && C_{\rm out}\rightarrow A_{\rm in}, \qquad \qquad A_{\rm out}\rightarrow C_{\rm in}, \qquad \qquad B_{\rm out}\rightarrow (\rho'_0/\rho_0) B_{\rm in}, \\ && C_{\rm in}\rightarrow A_{\rm out}, \qquad \qquad A_{\rm in}\rightarrow C_{\rm out}, \qquad \qquad B_{\rm in}\rightarrow (\rho_0/\rho'_0) B_{\rm out}. \end{eqnarray*} It swaps $\mathbb C, \mathbb A$ and fixes $\mathbb B$. It fixes $J$ and everything in $\mathcal R$. For $0 \leq i \leq d$ it fixes $E''_i$ and swaps $E_i, E'_i$. \end{enumerate} \end{proposition} \begin{proof} (i) Denote the $(A,B)$-reflector by $\dagger$. The map $\dagger$ swaps $A,B$ by Proposition \ref{prop:antiaut} and Definition \ref{def:REFdag}. For $0 \leq i \leq d$ the map $\dagger$ fixes $E_i$ by Lemma \ref{lem:daggerE}. By this and Lemma \ref{lem:Jfacts}(i), the map $\dagger$ fixes $J$. By this and Lemma \ref{lem:INOutFacts}(i),(ii) the map $\dagger$ sends $A_{\rm out} \leftrightarrow B_{\rm in}$ and $A_{\rm in} \leftrightarrow B_{\rm out}$. We have $\mathbb A = \sum_{i=0}^d \alpha_i A^i$ and $\mathbb B = \sum_{i=0}^d \alpha_i B^i$ by Proposition \ref{prop:Apoly}, so $\dagger $ swaps $\mathbb A, \mathbb B$. We show that $\dagger$ fixes everything in $\mathcal R$. By Proposition \ref{prop:bipRoutIN} it suffices to show that $\dagger$ fixes $\Omega_{\rm out}$ and $\Omega_{\rm in}$. To see that $\dagger$ fixes $\Omega_{\rm out}$ (resp. $\Omega_{\rm in}$), use the first formula for $\Omega_{\rm out}$ (resp. $\Omega_{\rm in}$) given in Theorem \ref{lem:OmegaOutD}(i) (resp. Theorem \ref{lem:OmegaOutInD}(i)), along with Definition \ref{def:PoutPin} and Lemma \ref{lem:psiFixOutIn}. For $0 \leq i \leq d$ we show that $\dagger$ swaps $E'_i, E''_i$. Pick an invertible $R \in \mathcal R$. By Definition \ref{def:ABCROT}, $E_iR = RE'_i$ and $E''_iR = RE_i$. In either equation, apply $\dagger$ to each side and compare the results with the other equation. 
This shows that $\dagger$ swaps $E'_i, E''_i$. Using this and $\mathbb C = \sum_{i=0}^d E''_{d-i} E'_{i}$ we find that $\dagger$ fixes $\mathbb C$. To obtain the action of $\dagger$ on $C_{\rm out}$, $C_{\rm in}$, we invoke Proposition \ref{prop:rotatorRaction}. Referring to that proposition, assume that $r,s$ are nonzero, so that $R$ is invertible, and consider the equations $s C_{\rm out} R = r \rho''_0 R A_{\rm out}$ and $r \rho'_0 B_{\rm in} R = s R C_{\rm in}$. In either equation, apply $\dagger $ to each side and compare the results with the other equation. This shows that $\dagger$ sends $C_{\rm out} \mapsto (\rho''_0 /\rho'_0)C_{\rm in}$ and $C_{\rm in} \mapsto (\rho'_0 /\rho''_0)C_{\rm out}$. \\ \noindent (ii), (iii) Similar to the proof of (i) above. \end{proof} \begin{corollary} Assume that $A,B,C$ is equitable, bipartite, and nontrivial. Then the following {\rm (i)--(iii)} hold: \begin{enumerate} \item[\rm (i)] the $(A,B)$-reflector swaps $\overline A, \overline B$ and fixes $\overline C$; \item[\rm (ii)] the $(B,C)$-reflector swaps $\overline B, \overline C$ and fixes $\overline A$; \item[\rm (iii)] the $(C,A)$-reflector swaps $\overline C, \overline A$ and fixes $\overline B$. \end{enumerate} \end{corollary} \begin{proof} Use Theorem \ref{thm:DL}(iii) and the comment above that theorem, along with Proposition \ref{eq:EBRef}. \end{proof} \section{Normalized LR triples with diameter at most 2} \noindent Our next general goal is to classify up to isomorphism the normalized LR triples. As a warmup, we consider the normalized LR triples with diameter at most 2. For the results in this section the proofs are routine, and left as an exercise. \begin{lemma} Up to isomorphism, there exists a unique normalized LR triple over $\mathbb F$ that has diameter $0$. This LR triple is trivial. \end{lemma} \begin{lemma} \label{lem:1or2} Up to isomorphism, there exists a unique normalized LR triple $A,B,C$ over $\mathbb F$ that has diameter 1. 
This LR triple is nonbipartite and $\varphi_1 = -1$. Moreover $a_0 =1$ and $a_1 = -1$. With respect to an $(A,B)$-basis the matrices representing $A,B,C$ and the standard rotator $\Omega$ are \begin{eqnarray*} A: \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array} \right), \qquad B: \left( \begin{array}{cc} 0 & 0 \\ -1 & 0 \\ \end{array} \right), \qquad C: \left( \begin{array}{cc} 1 & 1 \\ -1 & -1 \\ \end{array} \right), \qquad \Omega: \left( \begin{array}{cc} 1 & 1 \\ -1 & 0 \\ \end{array} \right). \end{eqnarray*} \end{lemma} \begin{lemma} \label{lem:dtwoClassNB} We give a bijection from the set $\mathbb F \backslash \lbrace 0,-1\rbrace$ to the set of isomorphism classes of normalized nonbipartite LR triples over $\mathbb F$ that have diameter $2$. For $q \in \mathbb F \backslash \lbrace 0,-1\rbrace$ the corresponding LR triple $A,B,C$ has parameters \begin{eqnarray*} &&\varphi_1 = -1-q^{-1}, \qquad \varphi_2 = -1-q, \\ &&\alpha_2 = \frac{1}{1+q}, \qquad \beta_2 = \frac{q}{1+q}, \\ && a_0 = 1+q, \qquad a_1 = q^{-1}-q, \qquad a_2 = -1-q^{-1}. \end{eqnarray*} With respect to an $(A,B)$-basis the matrices representing $A,B,C$ and the standard rotator $\Omega$ are \begin{eqnarray*} && A: \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{array} \right), \qquad \qquad B: \left( \begin{array}{ccc} 0 & 0 &0 \\ -1-q^{-1} & 0 & 0 \\ 0 & -1-q & 0 \end{array} \right), \\ && C: \left( \begin{array}{ccc} 1+q & q & 0 \\ -1-q & q^{-1}-q & q^{-1} \\ 0 & -1-q^{-1} & -1-q^{-1} \end{array} \right), \qquad \Omega: \left( \begin{array}{ccc} 1 & 1 & (1+q)^{-1} \\ -1-q^{-1} & -1 & 0 \\ 1+q^{-1} & 0 & 0 \end{array} \right). 
\end{eqnarray*} \end{lemma} \begin{lemma} \label{lem:dtwoClassB} We give a bijection from the set of 3-tuples \begin{eqnarray} \label{eq:3tupleset} (\rho_0, \rho'_0, \rho''_0) \in \mathbb F^3, \qquad \qquad \rho_0 \rho'_0 \rho''_0 = -1 \end{eqnarray} to the set of isomorphism classes of normalized bipartite LR triples over $\mathbb F$ that have diameter $2$. For a 3-tuple $(\rho_0, \rho'_0, \rho''_0)$ in the set {\rm (\ref{eq:3tupleset})}, the corresponding LR triple $A,B,C$ has parameters \begin{eqnarray*} && \varphi_1 = -1/\rho_0, \qquad \varphi'_1 = -1/\rho'_0, \qquad \varphi''_1 = -1/\rho''_0, \\ && \varphi_2 = \rho_0, \qquad \varphi'_2 = \rho'_0, \qquad \varphi''_2 = \rho''_0. \end{eqnarray*} With respect to an $(A,B)$-basis the matrices representing $A,B,C$, the projector $J$, and the standard outer/inner rotators $\Omega_{\rm out}$, $\Omega_{\rm in}$ are \begin{eqnarray*} && A: \left( \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{array} \right), \qquad B: \left( \begin{array}{ccc} 0 & 0 & 0 \\ -1/\rho_0 & 0 & 0 \\ 0 & \rho_0 & 0 \end{array} \right), \qquad C: \left( \begin{array}{ccc} 0 & 1/\rho''_0 & 0 \\ \rho''_0 & 0 & \rho''_0 \\ 0 & -1/\rho''_0 & 0 \end{array} \right), \\ && J: \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right), \qquad \Omega_{\rm out}: \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{array} \right), \qquad \Omega_{\rm in}: \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right). \end{eqnarray*} \end{lemma} \section{The sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is constrained} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote a nontrivial LR triple on $V$, with parameter array (\ref{eq:paLRT}), idempotent data (\ref{eq:idseq}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}).
We assume that $A,B,C$ is equitable, so that $\alpha_i = \alpha'_i = \alpha''_i$ and $\beta_i = \beta'_i = \beta''_i$ for $0 \leq i \leq d$. For $A,B,C$ nonbipartite we have the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi}, and for $A,B,C$ bipartite we have the sequences $\lbrace \rho_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho'_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho''_i \rbrace_{i=0}^{d-1}$ from Definition \ref{def:RHOD}. Our next goal is to show that these sequences are constrained, in the sense of Definition \ref{def:constrain}. \begin{lemma} \label{lem:Con1} Assume that $A,B,C$ is equitable. Then the following {\rm (i)--(iii)} hold. \begin{enumerate} \item[\rm (i)] For $d\geq 2$, \begin{eqnarray} \label{eq:L1} &&\rho_i = \alpha_0 \beta_2 \varphi_i + \alpha_1 \beta_1 \varphi_{i+1} + \alpha_2 \beta_0 \varphi_{i+2} \qquad \qquad (0 \leq i \leq d-1), \\ &&0 = \alpha_0 \beta_2 + \alpha_1 \beta_1 + \alpha_2 \beta_0. \label{eq:L2} \end{eqnarray} \item[\rm (ii)] For $d \geq 3$, \begin{eqnarray} \label{eq:L3} &&0 = \alpha_0 \beta_3 \varphi_{i-2} + \alpha_1 \beta_2 \varphi_{i-1} + \alpha_2 \beta_1 \varphi_{i} + \alpha_3 \beta_0 \varphi_{i+1} \qquad (2 \leq i \leq d), \\ && 0 = \alpha_0 \beta_3 +\alpha_1 \beta_2 + \alpha_2 \beta_1 + \alpha_3 \beta_0. \label{eq:L4} \end{eqnarray} \item[\rm (iii)] For $A,B,C$ bipartite and $d\geq 4$, \begin{eqnarray} \label{eq:L5} && 0 = \alpha_0 \beta_4 \varphi_{i-2} + \alpha_2 \beta_2 \varphi_{i} + \alpha_4 \beta_0 \varphi_{i+2} \qquad \qquad (2 \leq i \leq d-1), \\ && \label{eq:L6} 0 = \alpha_0 \beta_4 + \alpha_2 \beta_2 + \alpha_4 \beta_0. \end{eqnarray} \end{enumerate} \end{lemma} \begin{proof}(i) Line (\ref{eq:L1}) is from the first displayed equation in Proposition \ref{prop:goodrec}, along with Lemma \ref{lem:equitBasic}(i). Line (\ref{eq:L2}) is from (\ref{eq:recursion}). \\ \noindent (ii) Line (\ref{eq:L3}) is from the first displayed equation in Proposition \ref{prop:longrec} (with $r=3$). 
Line (\ref{eq:L4}) is from (\ref{eq:recursion}). \\ \noindent (iii) Similar to (ii) above, but also use Lemma \ref{lem:case}. \end{proof} \noindent As we proceed, we will consider the bipartite and nonbipartite cases separately. We begin with the nonbipartite case. \begin{lemma} \label{lem:NBCon1} Assume that $A,B,C$ is nonbipartite, equitable, and $d\geq 2$. Then for $0 \leq i \leq d-1$, \begin{eqnarray} \label{eq:middleElim} \rho_i = \alpha_0 \beta_2( \varphi_i -\varphi_{i+1}) - \alpha_2 \beta_0 (\varphi_{i+1}-\varphi_{i+2}). \end{eqnarray} \end{lemma} \begin{proof} Subtract $\varphi_{i+1}$ times (\ref{eq:L2}) from (\ref{eq:L1}). \end{proof} \begin{definition} \label{def:NBCon2} Assume that $A,B,C$ is nonbipartite, equitable, and $d\geq 3$. Define \begin{eqnarray} \label{eq:abc} a = \alpha_0 \beta_3, \quad \qquad b = \alpha_0 \beta_3 + \alpha_1 \beta_2 = - \alpha_2 \beta_1 - \alpha_3 \beta_0, \qquad \quad c = -\alpha_3 \beta_0. \end{eqnarray} \end{definition} \begin{lemma} \label{lem:NBCon4} Assume that $A,B,C$ is nonbipartite, equitable, and $d\geq 3$. Then for $2 \leq i \leq d$, \begin{eqnarray} 0 = a(\varphi_{i-2}-\varphi_{i-1})+ b(\varphi_{i-1}-\varphi_{i})+ c(\varphi_{i}-\varphi_{i+1}), \label{eq:abcRec} \end{eqnarray} where $a,b,c$ are from {\rm (\ref{eq:abc})}. \end{lemma} \begin{proof} To verify (\ref{eq:abcRec}), eliminate $a,b,c$ using (\ref{eq:abc}), and compare the result with (\ref{eq:L3}). \end{proof} \begin{lemma} \label{lem:NBCon5} Assume that $A,B,C$ is nonbipartite, equitable, and $d\geq 3$. Then for $1 \leq i \leq d-2$, \begin{eqnarray} 0 = a \rho_{i-1} + b\rho_i + c \rho_{i+1}, \label{eq:abcRho} \end{eqnarray} where $a,b,c$ are from {\rm (\ref{eq:abc})}. \end{lemma} \begin{proof} To verify (\ref{eq:abcRho}), eliminate $\rho_{i-1}, \rho_i,\rho_{i+1}$ using Lemma \ref{lem:NBCon1}, and evaluate the result using Lemma \ref{lem:NBCon4}. \end{proof} \begin{lemma} \label{lem:NBCon3} Assume that $A,B,C$ is nonbipartite, equitable, and $d\geq 3$. 
Then the scalars $a,b,c$ from Definition \ref{def:NBCon2} are not all zero. \end{lemma} \begin{proof} Recall from Lemmas \ref{lem:alpha0}, \ref{lem:NotB} that $\alpha_0 = 1 = \beta_0$ and $\alpha_1 = -\beta_1$ is nonzero. Suppose that each of $a,b,c$ is zero. Using (\ref{eq:abc}) we obtain $\alpha_3=0$, $\beta_3=0$, $\alpha_2=0$, $\beta_2=0$. Now in (\ref{eq:middleElim}) the right-hand side is zero and the left-hand side is nonzero, for a contradiction. The result follows. \end{proof} \begin{proposition} \label{lem:NBCon6} Assume that $A,B,C$ is nonbipartite and equitable. Then the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is constrained. \end{proposition} \begin{proof} We verify that $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ satisfies the conditions (i), (ii) of Definition \ref{def:constrain}. Definition \ref{def:constrain}(i) holds by Lemma \ref{def:RhoiCom}. If $d\leq 2$ then Definition \ref{def:constrain}(ii) holds vacuously, and if $d\geq 3$ then Definition \ref{def:constrain}(ii) holds by Lemmas \ref{lem:NBCon5}, \ref{lem:NBCon3}. \end{proof} \begin{lemma} \label{cor:NotGeomD4} Assume that $A,B,C$ is nonbipartite and equitable, but the sequence $\lbrace \rho_{i}\rbrace_{i=0}^{d-1}$ is not geometric. Then $d$ is even and at least 4. \end{lemma} \begin{proof} By Lemma \ref{lem:NGd4} and Proposition \ref{lem:NBCon6}. \end{proof} \noindent We turn our attention to bipartite LR triples. \begin{lemma} \label{lem:BipRRR2} Assume that $A,B,C$ is bipartite, equitable, and $d\geq 2$. Then for $0 \leq i \leq d-1$, \begin{eqnarray} \rho_i = \alpha_0 \beta_2 (\varphi_i - \varphi_{i+2}), \qquad \rho'_i = \alpha_0 \beta_2 (\varphi'_i - \varphi'_{i+2}), \qquad \rho''_i = \alpha_0 \beta_2 (\varphi''_i - \varphi''_{i+2}). \label{eq:RRR} \end{eqnarray} \end{lemma} \begin{proof} To verify the equation on the left in (\ref{eq:RRR}), set $\alpha_1=0$, $\beta_1=0$ in Lemma \ref{lem:Con1}(i). The other two equations in (\ref{eq:RRR}) are similarly verified.
\end{proof} \begin{lemma} \label{lem:BipRRR3} Assume that $A,B,C$ is bipartite, equitable, and $d\geq 4$. Then for $2 \leq i \leq d-1$, \begin{eqnarray} && \alpha_0 \beta_4 (\varphi_{i-2}-\varphi_i) = \alpha_4 \beta_0 (\varphi_i-\varphi_{i+2}), \label{eq:biprec1} \\ && \alpha_0 \beta_4 (\varphi'_{i-2}-\varphi'_i) = \alpha_4 \beta_0 (\varphi'_i-\varphi'_{i+2}), \label{eq:biprec2} \\ && \alpha_0 \beta_4 (\varphi''_{i-2}-\varphi''_i) = \alpha_4 \beta_0 (\varphi''_i-\varphi''_{i+2}). \label{eq:biprec3} \end{eqnarray} \end{lemma} \begin{proof} To obtain (\ref{eq:biprec1}), subtract $\varphi_i$ times (\ref{eq:L6}) from (\ref{eq:L5}). Equations (\ref{eq:biprec2}), (\ref{eq:biprec3}) are similarly obtained. \end{proof} \begin{lemma} \label{lem:BipRRR4} Assume that $A,B,C$ is bipartite, equitable, and $d\geq 4$. Then for $1 \leq i \leq d-2$, \begin{eqnarray} \alpha_0 \beta_4 \rho_{i-1} = \alpha_4 \beta_0 \rho_{i+1}, \qquad \alpha_0 \beta_4 \rho'_{i-1} = \alpha_4 \beta_0 \rho'_{i+1}, \qquad \alpha_0 \beta_4 \rho''_{i-1} = \alpha_4 \beta_0 \rho''_{i+1}. \label{eq:BipRRR5} \end{eqnarray} \end{lemma} \begin{proof} Use Lemmas \ref{lem:BipRRR2}, \ref{lem:BipRRR3}. \end{proof} \begin{proposition} \label{lem:3seqCon} Assume that $A,B,C$ is bipartite, equitable, and nontrivial. Then the sequences $\lbrace \rho_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho'_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho''_i \rbrace_{i=0}^{d-1}$ are constrained. \end{proposition} \begin{proof} We verify that $\lbrace \rho_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho'_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho''_i \rbrace_{i=0}^{d-1}$ satisfy the conditions (i), (ii) of Definition \ref{def:constrain}. Definition \ref{def:constrain}(i) holds by Lemma \ref{def:BipRhoiCom}. Recall that $d$ is even. If $d=2$ then Definition \ref{def:constrain}(ii) holds vacuously, and if $d\geq 4$ then Definition \ref{def:constrain}(ii) holds by Lemma \ref{lem:BipRRR4} and since $\alpha_4\not=0$, $\beta_4\not=0$ by Lemma \ref{lem:case}.
\end{proof} \section{The classification of normalized LR triples; an overview} Throughout this section assume $d\geq 2$. Our next goal is to classify up to isomorphism the normalized LR triples over $\mathbb F$ that have diameter $d$. We now describe our strategy. Consider a normalized LR triple $A,B,C$ over $\mathbb F$ that has parameter array (\ref{eq:paLRT}). Recall the sequence $\lbrace \rho_i\rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi}. We place $A,B,C$ into one of four families as follows: \centerline{ \begin{tabular}[t]{c|c|c} {\rm family name} & {\rm family definition} &{\rm $d$ restriction} \\ \hline ${\rm NBWeyl}_d(\mathbb F)$ & over $\mathbb F$; diameter $d$; nonbipartite; normalized; there exist & \\ &scalars $a,b,c$ in $\mathbb F$ that are not all zero such that & \\ & $a+b+c=0$ and $a \varphi_{i-1} + b\varphi_i + c \varphi_{i+1} = 0$ for $1 \leq i \leq d$ & \\ && \\ ${\rm NBG}_d(\mathbb F)$ & over $\mathbb F$; diameter $d$; nonbipartite; normalized; not in \\& ${\rm NBWeyl}_d(\mathbb F)$; the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is geometric & \\ && \\ ${\rm NBNG}_d(\mathbb F)$ & over $\mathbb F$; diameter $d$; nonbipartite; normalized; & $d$ even; \\& the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is not geometric & $d \geq 4$ \\ && \\ ${\rm B}_d(\mathbb F)$ & over $\mathbb F$; diameter $d$; bipartite; normalized & $d$ even \end{tabular}} \noindent As we will show in Lemma \ref{lem:NBWisGeom}, if $A,B,C$ is contained in ${\rm NBWeyl}_d(\mathbb F)$ then $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is geometric. By this and Lemmas \ref{lem:case}, \ref{cor:NotGeomD4} the LR triple $A,B,C$ falls into exactly one of the four families. \noindent Over the next four sections, we classify up to isomorphism the LR triples in each family. \section{The classification of LR triples in ${\rm NBWeyl}_d(\mathbb F)$ } \noindent In this section we classify up to isomorphism the LR triples in ${\rm NBWeyl}_d(\mathbb F)$, for $d\geq 2$. 
We first describe some examples. \begin{example} \label{ex:nbw1} \rm The LR triple ${\rm NBWeyl}^{+}_d(\mathbb F;j,q)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} &&d\geq 2; \qquad \qquad {\mbox{\rm $d$ is even}}; \qquad \qquad j \in \mathbb Z, \quad 0 \leq j < d/2; \qquad \qquad 0 \not=q \in \mathbb F; \\ && {\mbox{\rm if ${\rm Char}(\mathbb F)\not=2$ then $q$ is a primitive $(2d+2)$-root of unity;}} \\ && {\mbox{\rm if ${\rm Char}(\mathbb F)=2$ then $q$ is a primitive $(d+1)$-root of unity;}} \\ && \varphi_i = \frac{(1+q^{2j+1})^2(1-q^{-2i})}{q^{2j+1}(q-q^{-1})^2} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{example} \label{ex:nbw2} \rm The LR triple ${\rm NBWeyl}^{-}_d(\mathbb F;j,q)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} && {\mbox{\rm ${\rm Char}(\mathbb F)\not=2$; }} \qquad \qquad d\geq 3; \qquad \qquad {\mbox{\rm $d$ is odd}}; \\ && j \in \mathbb Z, \quad 0 \leq j < (d-1)/4; \qquad \qquad 0 \not=q \in \mathbb F; \\ && {\mbox{\rm $q$ is a primitive $(2d+2)$-root of unity;}} \\ && \varphi_i = \frac{(1+q^{2j+1})^2(1-q^{-2i})}{q^{2j+1}(q-q^{-1})^2} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{example} \label{ex:nbw3} \rm The LR triple ${\rm NBWeyl}^{-}_d(\mathbb F; t)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} && {\mbox{\rm ${\rm Char}(\mathbb F)\not=2$; }} \qquad \qquad d\geq 5; \qquad \qquad d \equiv 1 \pmod{4}; \\ && 0 \not=t \in \mathbb F; \qquad \qquad {\mbox{\rm $t$ is a primitive $(d+1)$-root of unity;}} \\ && \varphi_i = \frac{2t(1-t^{i})}{(1-t)^2} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{lemma} \label{lem:PreThm} For the LR triples in Examples \ref{ex:nbw1}--\ref{ex:nbw3}, {\rm (i)} they exist; {\rm (ii)} they are contained in ${\rm NBWeyl}_d(\mathbb F)$; {\rm (iii)} they are mutually nonisomorphic. 
\end{lemma} \begin{proof} (i) In Examples \ref{ex:nbw1}, \ref{ex:nbw2} we see an integer $j$. For Example \ref{ex:nbw3} define an integer $j=(d-1)/4$. In Examples \ref{ex:nbw1}, \ref{ex:nbw2} we see a parameter $q \in \mathbb F$. For Example \ref{ex:nbw3}, define $q \in \overline {\mathbb F}$ such that $t=q^{-2}$. In each of Examples \ref{ex:nbw1}--\ref{ex:nbw3} the pair $d,q$ is standard. For each example we use the data $d,j,q$ and Proposition \ref{prop:AllqWeyl} to get an LR triple over $\overline {\mathbb F}$ that has $q$-Weyl type. This LR triple is nonbipartite, since its first Toeplitz number is nonzero. Normalize this LR triple and apply Lemma \ref{lem:SpittingField} to get the desired LR triple over $\mathbb F$. \\ \noindent (ii) Let $A,B,C$ denote an LR triple listed in Examples \ref{ex:nbw1}--\ref{ex:nbw3}. By assumption $A,B,C$ is over $\mathbb F$, diameter $d$, nonbipartite, and normalized. Define $a=1$, $b=-1-q^2$, $c=q^2$, where $t=q^{-2}$ in Example \ref{ex:nbw3}. Then $a+b+c=0$, and $a\varphi_{i-1}+b\varphi_i + c\varphi_{i+1}=0$ for $1 \leq i \leq d$. Therefore $A,B,C$ is contained in ${\rm NBWeyl}_d(\mathbb F)$. \\ \noindent (iii) Among the LR triples listed in Examples \ref{ex:nbw1}--\ref{ex:nbw3}, no two have the same parameter array. Therefore no two are isomorphic. \end{proof} \begin{theorem} \label{thm:ClassW} For $d\geq 2$, each LR triple in ${\rm NBWeyl}_d(\mathbb F)$ is isomorphic to a unique LR triple listed in Examples \ref{ex:nbw1}--\ref{ex:nbw3}. \end{theorem} \begin{proof} Let $A,B,C$ denote an LR triple in ${\rm NBWeyl}_d(\mathbb F)$, with parameter array (\ref{eq:paLRT}) and Toeplitz data (\ref{eq:ToeplitzData}). By assumption there exist scalars $a,b,c$ in $\mathbb F$ that are not all zero, such that $a+b+c=0$ and $a\varphi_{i-1}+b \varphi_i + c \varphi_{i+1} = 0$ for $1 \leq i \leq d$. Setting $i=1$ and $\varphi_0=0$ we obtain $b\varphi_1 + c\varphi_2=0$. 
Setting $i=d$ and $\varphi_{d+1}=0$ we obtain $a\varphi_{d-1} + b\varphi_d=0$. By these comments each of $a,b,c$ is nonzero. Define a polynomial $g \in \mathbb F\lbrack \lambda \rbrack$ by $g(\lambda) = a+ b\lambda + c\lambda^2$. Observe that $g(1)=0$, so there exists $t \in \mathbb F$ such that $g(\lambda) = c(\lambda-1)(\lambda-t)$. We have $ct=a$, so $t\not=0$. Assume for the moment that $t=1$. By construction there exist $u,v \in \mathbb F$ such that $\varphi_i = u + vi $ for $0 \leq i \leq d+1$. Setting $i=0$ and $\varphi_0=0$ we obtain $u=0$. Consequently $\varphi_i = vi$ for $0 \leq i \leq d+1$, and $v\not=0$. Fix a square root $v^{1/2}\in \overline {\mathbb F}$. Define an LR triple $A^{\vee},B^\vee, C^\vee$ over $\overline {\mathbb F}$ by \begin{eqnarray*} A^\vee = A v^{-1/2}, \qquad B^\vee = B v^{-1/2}, \qquad C^\vee = C v^{-1/2}. \end{eqnarray*} By Lemma \ref{lem:albega} this LR triple has parameter array \begin{eqnarray*} \varphi^\vee_i = (\varphi'_i)^\vee = (\varphi''_i)^\vee = \varphi_i /v= i \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} Now by Definition \ref{def:WEYL} the LR triple $A^\vee, B^\vee, C^\vee $ has Weyl type. Consider its first Toeplitz number $\alpha_1^\vee$. On one hand, by Lemma \ref{lem:ToeplitzAdjust} and the construction, $\alpha_1^\vee = \alpha_1 v^{1/2}=v^{1/2}$. On the other hand, by Lemma \ref{lem:WEYLalphaZero}, $\alpha_1^\vee = 0 $. This is a contradiction, so $t \not=1$. The polynomial $g(\lambda)=c(\lambda-1)(\lambda-t)$ has distinct roots. Therefore there exist $u,v\in \mathbb F$ such that $\varphi_i = u + vt^i$ for $0 \leq i \leq d+1$. Setting $i=0$ and $\varphi_0=0$ we obtain $0 = u+v$. Consequently $\varphi_i = u(1-t^i)$ for $0 \leq i \leq d+1$, and $u \not=0$. Fix square roots $u^{1/2}, t^{1/2} \in \overline {\mathbb F}$. Define $q=t^{-1/2}$. By construction $q \not=0$, $t=q^{-2}$, and $q^2 \not=1$.
Define an LR triple $A^{\vee},B^\vee, C^\vee$ over $\overline {\mathbb F}$ by \begin{eqnarray*} A^\vee = A u^{-1/2}, \qquad B^\vee = B u^{-1/2}, \qquad C^\vee = C u^{-1/2}. \end{eqnarray*} By Lemma \ref{lem:albega} this LR triple has parameter array \begin{eqnarray*} \varphi^\vee_i = (\varphi'_i)^\vee = (\varphi''_i)^\vee = \varphi_i/u= 1-q^{-2i} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} Now by Definition \ref{def:qExceptional}, the LR triple $A^\vee, B^\vee, C^\vee $ has $q$-Weyl type. Replacing $q$ by $-q$ if necessary, we may assume by Lemma \ref{lem:Mq} that the pair $d,q$ is standard in the sense of Definition \ref{def:qstand}. By that definition, if ${\rm Char}(\mathbb F)\not=2$ then $q$ is a primitive $(2d+2)$-root of unity. Moreover if ${\rm Char}(\mathbb F)=2$, then $d$ is even and $q$ is a primitive $(d+1)$-root of unity. Consider the first Toeplitz number $\alpha_1^\vee$. By Lemma \ref{lem:alphaOneList} there exists an integer $j$ $(0 \leq j \leq d)$ such that \begin{eqnarray*} \alpha_1^\vee = \frac{q^{j+1/2}+ q^{-j-1/2}}{q-q^{-1}}. \end{eqnarray*} By Lemma \ref{lem:ToeplitzAdjust} and the construction, $\alpha_1^\vee = u^{1/2}$. Therefore $u = (\alpha_1^\vee)^2$. By these comments \begin{eqnarray} \label{eq:UVal} u = \frac{(1+q^{2j+1})^2}{q^{2j+1}(q-q^{-1})^2}. \end{eqnarray} Replacing $j$ by $d-j$ corresponds to replacing $u^{1/2}$ by $-u^{1/2}$, and this move leaves $u$ invariant. Replacing $j$ by $d-j$ if necessary, we may assume without loss that $j\leq d/2$. Note that $ j \not=d/2$; otherwise $1+q^{2j+1}=0$ which contradicts (\ref{eq:UVal}). Therefore $j < d/2$. Assume for the moment that $d=1+4j$. Then $q^{2j+1} + q^{-2j-1} = 0$. In this case (\ref{eq:UVal}) reduces to $u=2/(q-q^{-1})^2$, or in other words $u=2t/(1-t)^2$. Now $d,t$ satisfy the requirements of Example \ref{ex:nbw3}, so $A,B,C$ is isomorphic to ${\rm NBWeyl}^{-}_d(\mathbb F; t)$. For the rest of this proof, assume that $d \not= 1 + 4j$. We show $q \in \mathbb F$. 
Define $f=q^{2j+1}+ q^{-2j-1}$ and note by (\ref{eq:UVal}) that $f+2=uq^2(1-q^{-2})^2$. By this and $u, q^2 \in \mathbb F$ we find $f \in \mathbb F$. Also, using $q^2 \in \mathbb F$ we obtain $f q = q^{2j+2} + q^{-2j} \in \mathbb F$. We have $f \not=0$ since $d \not=1+4j$. By these comments $q = fq/f \in \mathbb F$. For the moment assume that $d$ is even. Then $d, j,q$ meet the requirements of Example \ref{ex:nbw1}, so $A,B,C$ is isomorphic to ${\rm NBWeyl}^{+}_d(\mathbb F;j,q)$. Next assume that $d$ is odd. We mentioned earlier that the pair $d,q$ is standard. The pair $d,q$ remains standard if we replace $q$ by $-q$. Consider what happens if we replace $q$ by $-q$ and $j$ by $(d-1)/2-j$. By (\ref{eq:UVal}) this replacement has no effect on $u$. Making this adjustment if necessary, we may assume without loss that $j< (d-1)/4$. Now $d,j,q$ meet the requirements of Example \ref{ex:nbw2}, so $A,B,C$ is isomorphic to ${\rm NBWeyl}^{-}_d(\mathbb F;j,q)$. We have shown that $A,B,C$ is isomorphic to at least one of the LR triples listed in Examples \ref{ex:nbw1}--\ref{ex:nbw3}. The result follows from this and Lemma \ref{lem:PreThm}(iii). \end{proof} \begin{lemma} \label{lem:NBWisGeom} Assume $d\geq 2$. Let $A,B,C$ denote an LR triple in ${\rm NBWeyl}_d(\mathbb F)$. Then for $0 \leq i \leq d-1$ the scalar $\rho_i$ from Definition \ref{def:Rhoi} satisfies \centerline{ \begin{tabular}[t]{c|ccc} {\rm case} & ${\rm NBWeyl}^{+}_d(\mathbb F;j,q)$ & ${\rm NBWeyl}^{-}_d(\mathbb F;j,q)$ & ${\rm NBWeyl}^{-}_d(\mathbb F;t)$ \\ \hline $\rho_i$ & $-q^{-2i-2}$ & $-q^{-2i-2}$ & $-t^{i+1}$ \\ \end{tabular}} \noindent Moreover, the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is geometric. \end{lemma} \begin{proof} Compute $\rho_i=\varphi_{i+1}/\varphi_{d-i}$ using the data in Examples \ref{ex:nbw1}--\ref{ex:nbw3}. 
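As an independent sanity check (not part of the proof), the stated values of $\rho_i$ and the geometric property can be verified numerically over $\mathbb C$; the parameters $d,j,q$ below are illustrative choices for the ${\rm NBWeyl}^{+}$ case, not drawn from the text.

```python
import cmath

# Illustrative parameters: d even, 0 <= j < d/2,
# q a primitive (2d+2)-th root of unity in C.
d, j = 4, 1
q = cmath.exp(2 * cmath.pi * 1j / (2 * d + 2))

def phi(i):
    # varphi_i of Example NBWeyl^+_d(F; j, q); extends to phi(0) = phi(d+1) = 0.
    return (1 + q**(2*j + 1))**2 * (1 - q**(-2*i)) / (q**(2*j + 1) * (q - q**(-1))**2)

# Three-term constraint with a = 1, b = -1 - q^2, c = q^2 (so a + b + c = 0),
# as in the containment argument for NBWeyl_d(F).
a, b, c = 1, -1 - q**2, q**2
assert all(abs(a*phi(i-1) + b*phi(i) + c*phi(i+1)) < 1e-12 for i in range(1, d + 1))

# rho_i = phi(i+1)/phi(d-i) equals -q^(-2i-2), a geometric sequence.
rho = [phi(i + 1) / phi(d - i) for i in range(d)]
assert all(abs(rho[i] - (-q**(-2*i - 2))) < 1e-12 for i in range(d))
assert all(abs(rho[i+1]/rho[i] - rho[1]/rho[0]) < 1e-12 for i in range(d - 1))
```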
\end{proof} \section{The classification of LR triples in ${\rm NBG}_d(\mathbb F)$ } \noindent In this section we classify up to isomorphism the LR triples in ${\rm NBG}_d(\mathbb F)$, for $d\geq 2$. We first describe some examples. \begin{example} \label{ex:NBGq} \rm The LR triple ${\rm NBG}_d(\mathbb F;q)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} && d\geq 2; \qquad \qquad 0 \not=q \in \mathbb F; \\ && q^i \not=1 \quad (1 \leq i \leq d); \qquad \qquad \quad q^{d+1}\not=-1; \\ &&\varphi_i = \frac{q(q^i-1)(q^{i-d-1}-1)}{(q-1)^2} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{example} \label{ex:NBGqEQ1} \rm The LR triple ${\rm NBG}_d(\mathbb F;1)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} && d\geq 2; \qquad \qquad {\mbox {\rm ${\rm Char}(\mathbb F)$ is 0 or greater than $d$}}; \\ && \varphi_i = i (i-d-1) \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{lemma} \label{lem:NBGexist} For the LR triples in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1}, {\rm (i)} they exist; {\rm (ii)} they are contained in ${\rm NBG}_d(\mathbb F)$; {\rm (iii)} they are mutually nonisomorphic. \end{lemma} \begin{proof} (i) In Example \ref{ex:NBGq} we see a parameter $q \in \mathbb F$. For Example \ref{ex:NBGqEQ1} define $q=1$. Using $q$ we construct an LR triple $A,B,C$ as follows. For notational convenience define $a_i = \varphi_{d-i+1}-\varphi_{d-i}$ for $0 \leq i \leq d$, where $\varphi_0=0$ and $\varphi_{d+1}=0$. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $\lbrace v_i \rbrace_{i=0}^d$ denote a basis for $V$. Define $A,B,C$ in ${\rm End}(V)$ such that the matrices representing $A,B,C$ with respect to $\lbrace v_i \rbrace_{i=0}^d$ are given by the first row of the table in Proposition \ref{prop:matrixRep}. Here $\varphi'_i=\varphi_i$ and $\varphi''_i = \varphi_i $ for $1 \leq i \leq d$. 
We show that $A,B,C$ is an LR triple on $V$. We first show that $A,B$ is an LR pair on $V$. Let $\lbrace V_i\rbrace_{i=0}^d$ denote the decomposition of $V$ induced by the basis $\lbrace v_i\rbrace_{i=0}^d$. Using the matrices defining $A,B$ we find that $\lbrace V_i\rbrace_{i=0}^d$ is lowered by $A$ and raised by $B$. Therefore $A,B$ is an LR pair on $V$. Next we show that $B,C$ is an LR pair on $V$. Define the scalars $\lbrace \alpha_i \rbrace_{i=0}^d$ by $\alpha_0=1$ and $\alpha_{i-1}/\alpha_{i} = \sum_{k=0}^{i-1} q^k$ for $1 \leq i \leq d$. Define $\mathbb B = \sum_{i=0}^d \alpha_i B^i$. Note that $\mathbb B B = B \mathbb B$. With respect to the basis $\lbrace v_i\rbrace_{i=0}^d$ the matrix representing $\mathbb B$ is lower triangular, with each diagonal entry 1. Therefore $\mathbb B$ is invertible. Observe that $\lbrace \mathbb B V_{d-i}\rbrace_{i=0}^d$ is a decomposition of $V$ that is lowered by $B$. Using the matrices defining $B,C$ one checks that $\lbrace \mathbb B V_{d-i}\rbrace_{i=0}^d$ is raised by $C$. By these comments $B,C$ is an LR pair on $V$. Next we show that $C,A$ is an LR pair on $V$. Define ${\mathbb A}^{\S} = \sum_{i=0}^d \beta_i A^i$, where $\beta_0=1$ and $\beta_{i-1}/\beta_{i} = - \sum_{k=0}^{i-1} q^{-k}$ for $1 \leq i \leq d$. Note that $\mathbb A^{\S} A = A \mathbb A^{\S}$. With respect to the basis $\lbrace v_i\rbrace_{i=0}^d$ the matrix representing $\mathbb A^{\S}$ is upper triangular and Toeplitz, with parameters $\lbrace \beta_i \rbrace_{i=0}^d$. Therefore $\mathbb A^{\S}$ is invertible. (In fact $\mathbb A^{\S}$ is the inverse of $\mathbb A = \sum_{i=0}^d \alpha_i A^i$, although we do not need this result). Observe that $\lbrace \mathbb A^{\S}V_{d-i}\rbrace_{i=0}^d$ is a decomposition of $V$ that is raised by $A$. Using the matrices defining $A,C$ one checks that $\lbrace \mathbb A^{\S} V_{d-i}\rbrace_{i=0}^d$ is lowered by $C$. By these comments $C,A$ is an LR pair on $V$. We have shown that $A,B,C$ is an LR triple on $V$. 
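The parenthetical claim that $\mathbb A^{\S}$ inverts $\mathbb A$ can be sanity-checked numerically; the sketch below uses an illustrative choice of $d$ and $q$ over $\mathbb Q$ and is not needed for the proof.

```python
from fractions import Fraction

# Illustrative check (d = 4, q = 2, F = Q): the upper-triangular Toeplitz
# matrices with parameters {alpha_i} and {beta_i} defined above are
# mutually inverse, supporting the parenthetical remark about A^{\S}.
d = 4
q = Fraction(2)

alpha = [Fraction(1)]
beta = [Fraction(1)]
for i in range(1, d + 1):
    # alpha_{i-1}/alpha_i = sum_{k=0}^{i-1} q^k;  beta_{i-1}/beta_i = -sum_{k=0}^{i-1} q^{-k}
    alpha.append(alpha[i - 1] / sum(q**k for k in range(i)))
    beta.append(beta[i - 1] / (-sum(q**(-k) for k in range(i))))

# Entry (r, c) of the product T(alpha) T(beta) is sum_k alpha_k beta_{c-r-k};
# the product should be the identity matrix.
for r in range(d + 1):
    for c in range(r, d + 1):
        entry = sum(alpha[k] * beta[c - r - k] for k in range(c - r + 1))
        assert entry == (1 if r == c else 0)
```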
Using the matrices defining $A,B,C$ we find that this LR triple is the desired one. \\ \noindent (ii) Let $A,B,C$ denote an LR triple listed in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1}. By assumption $A,B,C$ is over $\mathbb F$, diameter $d$, nonbipartite, and normalized. We check that $A,B,C$ is not in ${\rm NBWeyl}_d(\mathbb F)$. The matrix \begin{eqnarray*} \left( \begin{array}{ccc} 1 & 1 & 1 \\ \varphi_0 & \varphi_1 &\varphi_2 \\ \varphi_1 & \varphi_2 &\varphi_3 \end{array} \right) \end{eqnarray*} has determinant $-(q+1)(q^{d+1}+1)q^{2-2d}$ for ${\rm NBG}_d(\mathbb F;q)$, and $-4$ for ${\rm NBG}_d(\mathbb F;1)$. In each case the determinant is nonzero. Consequently, there do not exist $a,b,c$ in $\mathbb F$, not all zero, such that $a+b+c=0$ and $ a\varphi_{i-1} + b\varphi_i + c \varphi_{i+1} = 0$ for $1 \leq i \leq d$. Therefore $A,B,C$ is not in ${\rm NBWeyl}_d(\mathbb F)$. We check that the sequence $\lbrace \rho_i\rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi} is geometric. For $0 \leq i \leq d-1$ the scalar $\rho_i = \varphi_{i+1}/\varphi_{d-i}$ is equal to $q^{2i-d+1}$ for ${\rm NBG}_d(\mathbb F;q)$, and 1 for ${\rm NBG}_d(\mathbb F;1)$. Therefore $\lbrace \rho_i\rbrace_{i=0}^{d-1}$ is geometric. We have shown that $A,B,C$ is contained in ${\rm NBG}_d(\mathbb F)$. \\ \noindent (iii) Among the LR triples listed in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1} no two have the same parameter array. Therefore no two are isomorphic. \end{proof} \begin{theorem} \label{thm:NBG} For $d\geq 2$, each LR triple in ${\rm NBG}_d(\mathbb F)$ is isomorphic to a unique LR triple listed in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1}. \end{theorem} \begin{proof} Let $A,B,C$ denote an LR triple in ${\rm NBG}_d(\mathbb F)$, with parameter array (\ref{eq:paLRT}) and Toeplitz data (\ref{eq:ToeplitzData}). Recall the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi}.
This sequence is constrained by Proposition \ref{lem:NBCon6}, and geometric by the definition of ${\rm NBG}_d(\mathbb F)$. Therefore there exists $0 \not= r \in \mathbb F$ such that \begin{eqnarray} \label{eq:gammaForm} && \rho_i = \rho_0 r^i \qquad \qquad (0 \leq i \leq d-1), \\ && \rho^2_0 = r^{1-d}. \label{eq:rhoSquared} \end{eqnarray} By assumption $A,B,C$ is nonbipartite and normalized, so $\alpha_1=1$ and $\beta_1 = -1$. Also $\alpha_2+\beta_2=1$ from above Lemma \ref{lem:reform1}. Define \begin{eqnarray} q = \begin{cases} \beta_2/\alpha_2 & {\mbox{\rm if $\alpha_2\not=0 $}}; \\ \infty & {\mbox{\rm if $\alpha_2 =0$.}} \end{cases} \label{eq:qDEF} \end{eqnarray} Conceivably $q=0$ or $q=1$. Using $\beta_2 = q \alpha_2$ and $\alpha_2+\beta_2=1$, we obtain $q \not=-1$ and \begin{eqnarray} \alpha_2 = \frac{1}{1+q}, \qquad \qquad \beta_2 = \frac{q}{1+q}. \label{lem:qbasics} \end{eqnarray} If $q=\infty$ then $\beta_2=1$. Evaluating (\ref{eq:middleElim}) using (\ref{lem:qbasics}) we obtain \begin{eqnarray} \label{eq:mainrec} \rho_i = \frac{q \varphi_i-(q+1)\varphi_{i+1} +\varphi_{i+2}}{q+1} \qquad \qquad (0 \leq i \leq d-1). \end{eqnarray} If $q=\infty$ then (\ref{eq:mainrec}) becomes $\rho_i = \varphi_i - \varphi_{i+1}$ for $0 \leq i \leq d-1$. Until further notice, assume that $q\not=0$, $q\not=\infty$, and $1,q,r$ are mutually distinct. Define \begin{eqnarray} L = \frac{(q+1) \rho_0}{(q-r)(1-r)}. \label{eq:Tform} \end{eqnarray} Note that $L\not=0$. Since $q \not=1$, there exist $H,K \in \mathbb F$ such that $\varphi_i = H + Kq^i + L r^i$ for $i=0$ and $i=1$. Using (\ref{eq:gammaForm}), (\ref{eq:mainrec}) and induction on $i$, we obtain \begin{eqnarray} \varphi_i = H + K q^i + L r^i \qquad \qquad (0 \leq i \leq d+1). \label{eq:vpForm} \end{eqnarray} We have $K \not=0$; otherwise $r\varphi_{i-1} -(r+1)\varphi_i + \varphi_{i+1}=0$ $(1 \leq i \leq d$), putting $A,B,C$ in ${\rm NBWeyl}_d(\mathbb F)$ for a contradiction. 
Since $\varphi_0=0$ and $\varphi_{d+1}=0$, \begin{eqnarray} 0 = H+K+L, \qquad \qquad 0 = H+ Kq^{d+1} + Lr^{d+1}. \label{eq:RST} \end{eqnarray} For $0 \leq i \leq d+1$ define \begin{eqnarray} \Delta_i = \varphi_i - \rho_0 r^{i-1} \varphi_{d-i+1}. \label{eq:del} \end{eqnarray} We claim $\Delta_i=0$. This is the case for $i=0$ and $i=d+1$, since $\varphi_0=0$ and $\varphi_{d+1}=0$. For $1 \leq i \leq d$ we have $\Delta_i = \varphi_i-\rho_{i-1}\varphi_{d-i+1}$ by (\ref{eq:gammaForm}), and this is zero by (\ref{eq:mainrec2}). The claim is proven. For $0 \leq i \leq d+1$, in the equation $\Delta_i=0$ we evaluate the left-hand side using (\ref{eq:vpForm}), (\ref{eq:del}) to find that the following linear combination is zero: \centerline{ \begin{tabular}[t]{c|cccc} {\rm term} & $1$ & $q^i$ & $r^i$ & $(r/q)^i$ \\ \hline {\rm coefficient} & $H-L\rho_0 r^d$ & $K$ & $L-H\rho_0 r^{-1}$ & $-K \rho_0 q^{d+1}r^{-1}$ \\ \end{tabular}} \noindent By assumption $1,q,r$ are mutually distinct. Also $K\not=0$ and $d\geq 2$. We show $r=q^2$. Assume $r\not=q^2$. Then $1,q,r,r/q$ are mutually distinct. Setting $i=0,1,2,3$ in the above table, we obtain a $4 \times 4$ homogeneous linear system with coefficient matrix \begin{eqnarray*} \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & q & r & r/q \\ 1 & q^2 & r^2 & (r/q)^2 \\ 1 & q^3 & r^3 & (r/q)^3 \end{array} \right). \end{eqnarray*} This matrix is Vandermonde and hence invertible. Therefore each coefficient in the table is zero. The coefficient $K$ is nonzero, for a contradiction. Consequently $r=q^2$. From further examination of the coefficients in the table, \begin{eqnarray} \label{eq:threeCoef} H= L \rho_0 r^d,\qquad \qquad K = K \rho_0 q^{d+1}r^{-1}, \qquad \qquad L = H\rho_0 r^{-1}. \end{eqnarray} By (\ref{eq:threeCoef}) and $r=q^2$ we find $\rho_0 = q^{1-d}$ and $H = L q^{d+1}$. By these comments and (\ref{eq:Tform}) we obtain \begin{eqnarray} H = q(q-1)^{-2},\qquad \qquad L = q^{-d}(q-1)^{-2}.
\label{eq:RSTsol} \end{eqnarray} Evaluating (\ref{eq:vpForm}) using (\ref{eq:RSTsol}) and $K=-H-L$, we obtain \begin{eqnarray} \varphi_i = \frac{q(q^i-1)(q^{i-d-1}-1)}{(q-1)^2} \qquad \qquad (1 \leq i \leq d). \label{eq:vpsol} \end{eqnarray} From (\ref{eq:vpsol}) and since the $\lbrace \varphi_i \rbrace_{i=1}^d$ are nonzero, we obtain $q^i \not=1$ $(1 \leq i \leq d)$. Note that $q^{d+1}\not=-1$; otherwise (\ref{eq:vpsol}) becomes $\varphi_i = q(1-q^{2i})(q-1)^{-2}$ $(1 \leq i \leq d)$, forcing $q\varphi_{i-1}-(q+q^{-1})\varphi_i + q^{-1} \varphi_{i+1}=0$ $(1 \leq i \leq d)$, putting $A,B,C$ in ${\rm NBWeyl}_d(\mathbb F)$ for a contradiction. We have met the requirements of Example \ref{ex:NBGq}, so $A,B,C$ is isomorphic to ${\rm NBG}_d(\mathbb F;q)$. We are done with the case in which $q\not=0$, $q\not=\infty$, and $1,q,r$ are mutually distinct. Until further notice, assume that $1=q=r$. We have $1=q\not=-1$, so ${\rm Char}(\mathbb F) \not=2$. By (\ref{eq:gammaForm}), (\ref{eq:rhoSquared}) we obtain $\rho_i = \rho_0$ for $0 \leq i \leq d-1$, and $\rho^2_0=1$. By (\ref{eq:mainrec}), \begin{eqnarray} \label{eq:mainrecOne} 2 \rho_0 = \varphi_i-2\varphi_{i+1}+ \varphi_{i+2} \qquad \qquad (0 \leq i \leq d-1). \end{eqnarray} Define $Q=\varphi_1 - \rho_0$, and note that $\varphi_i = i (Q+\rho_0 i)$ for $i=0$ and $i=1$. By (\ref{eq:mainrecOne}) and induction on $i$, \begin{eqnarray} \label{eq:vpFormOne} \varphi_i = i (Q+\rho_0 i) \qquad \qquad (0 \leq i \leq d+1). \end{eqnarray} Mimicking the argument below (\ref{eq:del}), we find that for $0 \leq i \leq d+1$, \begin{eqnarray} 0 = \varphi_i - \rho_0 \varphi_{d-i+1}. 
\label{eq:VarRho} \end{eqnarray} Evaluate the right-hand side of (\ref{eq:VarRho}) using (\ref{eq:vpFormOne}) and $\rho^2_0=1$, to find that the following linear combination is zero: \centerline{ \begin{tabular}[t]{c|ccc} {\rm term} & $1$ & $i$ & $i^2$ \\ \hline {\rm coefficient} & $-(d+1) (d+1+\rho_0 Q)$ & $2(d+1)+Q(1+\rho_0)$ & $\rho_0-1$ \\ \end{tabular}} \noindent Since $d\geq 2$, each coefficient in the table is zero. Therefore $\rho_0=1$ and $Q=-d-1$. By this and (\ref{eq:vpFormOne}), \begin{eqnarray} \label{eq:RQone} \varphi_i = i(i-d-1) \qquad \qquad (1 \leq i \leq d). \end{eqnarray} By (\ref{eq:RQone}) and since $\lbrace \varphi_i \rbrace_{i=1}^d$ are nonzero, ${\rm Char}(\mathbb F)$ is $0$ or greater than $d$. We have met the requirements of Example \ref{ex:NBGqEQ1}, so $A,B,C$ is isomorphic to ${\rm NBG}_d(\mathbb F;1)$. We are done with the case $1 = q = r$. The remaining cases are (a) $q=0 $ and $r \not=1$; (b) $q=0$ and $r =1$; (c) $q=\infty$ and $r\not=1$; (d) $q=\infty$ and $r=1$; (e) $1=q\not=r$; (f) $1\not=q=r$; (g) $1=r\not=q$ and $q\not=0, q\not=\infty$. Each case (a)--(g) is handled in a manner similar to the first two. In each case we obtain a contradiction; the details are routine and omitted. We have shown that $A,B,C$ is isomorphic to at least one LR triple in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1}. The result follows in view of Lemma \ref{lem:NBGexist}(iii). \end{proof} \begin{lemma} \label{lem:NBGrho} Assume $d\geq 2$. Let $A,B,C$ denote an LR triple in ${\rm NBG}_d(\mathbb F)$. Then for $0 \leq i \leq d-1$ the scalar $\rho_i$ from Definition \ref{def:Rhoi} satisfies \centerline{ \begin{tabular}[t]{c|cc} {\rm case} & ${\rm NBG}_d(\mathbb F;q)$ & ${\rm NBG}_d(\mathbb F;1)$ \\ \hline $\rho_i$ & $q^{2i-d+1}$ & $1$ \\ \end{tabular}} \end{lemma} \begin{proof} Compute $\rho_i=\varphi_{i+1}/\varphi_{d-i}$ using the data in Examples \ref{ex:NBGq}, \ref{ex:NBGqEQ1}. 
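As a sanity check (not part of the proof), this computation can be carried out numerically; the sketch below uses illustrative parameters over $\mathbb Q$, not values drawn from the text.

```python
from fractions import Fraction

# Illustrative check of the rho_i values for NBG_d(F; q) and NBG_d(F; 1),
# with d = 4 and q = 2 (so q^i != 1 for 1 <= i <= d, and q^(d+1) != -1).
d = 4
q = Fraction(2)

def phi(i):
    # varphi_i of Example NBG_d(F; q).
    return q * (q**i - 1) * (q**(i - d - 1) - 1) / (q - 1)**2

# rho_i = phi(i+1)/phi(d-i) = q^(2i-d+1).
assert all(phi(i + 1) / phi(d - i) == q**(2*i - d + 1) for i in range(d))

# For NBG_d(F; 1): varphi_i = i(i-d-1) gives rho_i = 1.
phi1 = lambda i: i * (i - d - 1)
assert all(Fraction(phi1(i + 1), phi1(d - i)) == 1 for i in range(d))
```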
\end{proof} \section{The classification of LR triples in ${\rm NBNG}_d(\mathbb F)$ } \noindent In this section we classify up to isomorphism the LR triples in ${\rm NBNG}_d(\mathbb F)$, for even $d\geq 4$. We first describe some examples. \begin{example} \label{ex:NBNGdt} \rm The LR triple ${\rm NBNG}_d(\mathbb F; t)$ is over $\mathbb F$, diameter $d$, nonbipartite, normalized, and satisfies \begin{eqnarray*} && d\geq 4; \qquad \qquad {\mbox {\rm $d$ is even}}; \qquad \qquad 0 \not=t \in \mathbb F; \\ && t^i \not= 1 \quad (1 \leq i \leq d/2); \qquad \qquad \quad t^{d+1}\not=1; \\ && \varphi_i = \begin{cases} t^{i/2}-1 & {\mbox{\rm if $i$ is even}}; \\ t^{(i-d-1)/2}-1 & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{lemma} \label{lem:NBNGpre} For the LR triples in Example \ref{ex:NBNGdt}, {\rm (i)} they exist; {\rm (ii)} they are contained in ${\rm NBNG}_d(\mathbb F)$; {\rm (iii)} they are mutually nonisomorphic. \end{lemma} \begin{proof} (i) Similar to the proof of Lemma \ref{lem:NBGexist}(i), except that the sequences $\lbrace \alpha_i \rbrace_{i=0}^d$, $\lbrace \beta_i \rbrace_{i=0}^d$ are now defined as follows: $\alpha_0 =1$, $\beta_0=1$ and for $1 \leq i \leq d$, \begin{eqnarray*} && \alpha_{i-2}/\alpha_i = 1-t^{i/2}, \qquad \qquad \beta_{i-2}/\beta_i = 1-t^{-i/2} \qquad \qquad {\mbox {\rm (if $i$ is even),}} \\ && \alpha_i = \alpha_{i-1}, \qquad \qquad \beta_i = -\beta_{i-1} \qquad \qquad {\mbox {\rm (if $i$ is odd).}} \end{eqnarray*} \noindent (ii) Let $A,B,C$ denote an LR triple listed in Example \ref{ex:NBNGdt}. By assumption $A,B,C$ is over $\mathbb F$, diameter $d$, nonbipartite, and normalized. We check that the sequence $\lbrace \rho_i\rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi} is not geometric. For $0 \leq i \leq d-1$ the scalar $\rho_i = \varphi_{i+1}/\varphi_{d-i}$ is equal to $-t^{(i-d)/2}$ (if $i$ is even) and $-t^{(i+1)/2}$ (if $i$ is odd). 
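These values, and the failure of the geometric property, can be double-checked numerically; a minimal sketch over $\mathbb Q$ with illustrative parameters (not drawn from the text):

```python
from fractions import Fraction

# Illustrative check for NBNG_d(F; t), with d = 4 and t = 2
# (so t^i != 1 for 1 <= i <= d/2, and t^(d+1) != 1).
d = 4
t = Fraction(2)

def phi(i):
    # varphi_i of Example NBNG_d(F; t); d is even, so the halves are integers.
    if i % 2 == 0:
        return t**(i // 2) - 1
    return t**((i - d - 1) // 2) - 1

# rho_i = phi(i+1)/phi(d-i) is -t^((i-d)/2) for even i, -t^((i+1)/2) for odd i.
rho = [phi(i + 1) / phi(d - i) for i in range(d)]
expected = [-t**((i - d) // 2) if i % 2 == 0 else -t**((i + 1) // 2) for i in range(d)]
assert rho == expected

# The sequence is not geometric: consecutive ratios are not all equal.
ratios = [rho[i + 1] / rho[i] for i in range(d - 1)]
assert len(set(ratios)) > 1
```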
By this and since $t^{d+1}\not=1$, the sequence $\lbrace \rho_i\rbrace_{i=0}^{d-1}$ is not geometric. By these comments $A,B,C$ is contained in ${\rm NBNG}_d(\mathbb F)$. \\ (iii) Similar to the proof of Lemma \ref{lem:NBGexist}(iii). \end{proof} \begin{theorem} \label{thm:NBNG} Assume that $d$ is even and at least 4. Then each LR triple in ${\rm NBNG}_d(\mathbb F)$ is isomorphic to a unique LR triple listed in Example \ref{ex:NBNGdt}. \end{theorem} \begin{proof} Let $A,B,C$ denote an LR triple in ${\rm NBNG}_d(\mathbb F)$, with parameter array (\ref{eq:paLRT}) and Toeplitz data (\ref{eq:ToeplitzData}). Recall the sequence $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ from Definition \ref{def:Rhoi}. This sequence is constrained by Proposition \ref{lem:NBCon6}, so by Proposition \ref{prop:noddPre} there exists $0 \not=t \in \mathbb F$ such that \begin{eqnarray} && \label{eq:gammaNew} \rho_i = \begin{cases} \rho_0 t^{i/2} & {\mbox{\rm if $i$ is even}}; \\ \rho_0^{-1}t^{(i-d+1)/2} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (0 \leq i \leq d-1). \end{eqnarray} By assumption $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ is not geometric, so by Lemma \ref{lem:ConGeo}(iv), \begin{eqnarray} \rho_0^4 \not=t^{1-d}. \label{eq:gamma4} \label{eq:x4} \end{eqnarray} We claim that \begin{equation} t (\varphi_{i-2}-\varphi_{i-1}) = \varphi_i - \varphi_{i+1} \qquad \qquad (2 \leq i \leq d). \label{eq:extrarecNG} \end{equation} To prove the claim, consider the scalars $a,b,c$ from Definition \ref{def:NBCon2}. By Lemma \ref{lem:NBCon5} the 3-tuple $(a,b,c)$ is a linear constraint for $\lbrace \rho_i \rbrace_{i=0}^{d-1}$ in the sense of Definition \ref{def:lincon}. Now using Definition \ref{def:LC} and Proposition \ref{prop:LCbasis}(ii) we obtain $a=-tc$ and $b=0$. The claim follows from this and Lemma \ref{lem:NBCon4}. We show that $t\not=1$. Suppose $t=1$. By (\ref{eq:extrarecNG}), for $2 \leq i \leq d$ we have \begin{eqnarray} \varphi_{i-2}-\varphi_{i-1} = \varphi_i - \varphi_{i+1}. 
\label{eq:tOne} \end{eqnarray} Sum (\ref{eq:tOne}) over $i=2,3,\ldots, d$ and use $\varphi_0= 0= \varphi_{d+1}$ to obtain $-\varphi_{d-1} = \varphi_{2}$. Set $i=d-2$ in (\ref{eq:mainrec2}) and use (\ref{eq:gammaNew}) with $t=1$ to find $\rho_0 = \varphi_{d-1}/\varphi_{2}=-1$, which contradicts (\ref{eq:gamma4}). We have shown $t \not=1$. There exist $H,K,L \in \mathbb F$ such that for $i=0,1,2$, \begin{eqnarray} \varphi_i = \begin{cases} H+Kt^{i/2} & {\mbox{\rm if $i$ is even}}; \\ H+Lt^{(i-d-1)/2} & {\mbox{\rm if $i$ is odd}}. \end{cases} \label{eq:gentNG012} \end{eqnarray} By (\ref{eq:extrarecNG}) and induction on $i$, (\ref{eq:gentNG012}) holds for $0 \leq i \leq d+1$. Using $\varphi_0 = 0$ and $\varphi_{d+1}=0$, we obtain $H+K=0$ and $H+L=0$. Now (\ref{eq:gentNG012}) becomes \begin{eqnarray} \varphi_i = \begin{cases} H(1-t^{i/2}) & {\mbox{\rm if $i$ is even}}; \\ H(1-t^{(i-d-1)/2}) & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \label{eq:gentNGall} \end{eqnarray} The scalars $\lbrace \varphi_i\rbrace_{i=1}^d$ are nonzero. Consequently $H\not=0$, and $t^i \not=1$ for $1 \leq i \leq d/2$. Evaluating $\rho_0 = \varphi_1/\varphi_d$ using (\ref{eq:gentNGall}) we obtain $\rho_0 = -t^{-d/2}$. Now (\ref{eq:gammaNew}) becomes \begin{eqnarray} \label{eq:needthisRHO} \rho_i = \begin{cases} -t^{(i-d)/2} & {\mbox{\rm if $i$ is even}}; \\ -t^{(i+1)/2} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (0 \leq i \leq d-1), \end{eqnarray} and (\ref{eq:gamma4}) becomes $t^{d+1}\not=1$. We show $H=-1$. As in the proof of Theorem \ref{thm:NBG}, there exists $q \in \mathbb F \cup \lbrace \infty \rbrace $ such that $q \not=-1$ and \begin{eqnarray} \label{eq:mainrecREP} \rho_i = \frac{q \varphi_i-(q+1)\varphi_{i+1} +\varphi_{i+2}}{q+1} \qquad \qquad (0 \leq i \leq d-1). \end{eqnarray} Evaluate the recursion (\ref{eq:mainrecREP}) using (\ref{eq:gentNGall}), (\ref{eq:needthisRHO}). 
For $i$ even this gives \begin{eqnarray} 1+H^{-1} = \frac{q+t}{q+1}t^{d/2}, \label{eq:qtRel1} \end{eqnarray} and for $i$ odd this gives \begin{eqnarray} 1+H^{-1} = \frac{q+t}{q+1}t^{-d/2-1}. \label{eq:qtRel2} \end{eqnarray} Combining (\ref{eq:qtRel1}), (\ref{eq:qtRel2}) we obtain \begin{eqnarray*} \frac{(q+t)(1-t^{d+1})}{q+1} = 0. \end{eqnarray*} But $t^{d+1}\not=1$, so $q=-t$, and therefore $H=-1$ in view of (\ref{eq:qtRel1}). Setting $H=-1$ in (\ref{eq:gentNGall}) we obtain \begin{eqnarray*} \varphi_i = \begin{cases} t^{i/2}-1 & {\mbox{\rm if $i$ is even}}; \\ t^{(i-d-1)/2}-1 & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} We have met the requirements of Example \ref{ex:NBNGdt}, so $A,B,C$ is isomorphic to ${\rm NBNG}_d(\mathbb F; t)$. We have shown that $A,B,C$ is isomorphic to at least one LR triple in Example \ref{ex:NBNGdt}. The result follows in view of Lemma \ref{lem:NBNGpre}(iii). \end{proof} \noindent We record a fact from the proof of Theorem \ref{thm:NBNG}. \begin{lemma} Assume that $d$ is even and at least $4$. Let $A,B,C$ denote an LR triple in ${\rm NBNG}_d(\mathbb F)$. Then for $0 \leq i \leq d-1$ the scalar $\rho_i $ from Definition \ref{def:Rhoi} satisfies \begin{eqnarray*} \rho_i = \begin{cases} -t^{(i-d)/2} & {\mbox{\rm if $i$ is even}}; \\ -t^{(i+1)/2} & {\mbox{\rm if $i$ is odd}}. \end{cases} \end{eqnarray*} \end{lemma} \section{The classification of LR triples in ${\rm B}_d(\mathbb F)$} \noindent In this section we classify up to isomorphism the LR triples in ${\rm B}_d(\mathbb F)$, for even $d\geq 2$. We first describe some examples. 
\begin{example} \label{ex:BIP} \rm The LR triple ${\rm B}_d(\mathbb F;t,\rho_0, \rho'_0, \rho''_0)$ is over $\mathbb F$, diameter $d$, bipartite, normalized, and satisfies \begin{eqnarray*} && d\geq 4; \qquad \quad {\mbox {\rm $d$ is even}}; \qquad \quad 0 \not=t \in\mathbb F; \qquad \quad t^i \not= 1 \quad (1 \leq i \leq d/2); \\ && \rho_0, \rho'_0, \rho''_0 \in \mathbb F; \qquad \qquad \quad \rho_0 \rho'_0 \rho''_0 = -t^{1-d/2}; \\ &&\varphi_i = \begin{cases} \rho_0 \frac{1-t^{i/2}}{1-t} & {\mbox{\rm if $i$ is even}}; \\ \frac{t}{\rho_0} \frac{1-t^{(i-d-1)/2}}{1-t} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d); \\ &&\varphi'_i = \begin{cases} \rho'_0 \frac{1-t^{i/2}}{1-t} & {\mbox{\rm if $i$ is even}}; \\ \frac{t}{\rho'_0} \frac{1-t^{(i-d-1)/2}}{1-t} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d); \\ &&\varphi''_i = \begin{cases} \rho''_0 \frac{1-t^{i/2}}{1-t} & {\mbox{\rm if $i$ is even}}; \\ \frac{t}{\rho''_0} \frac{1-t^{(i-d-1)/2}}{1-t} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). 
\end{eqnarray*} \end{example} \begin{example} \label{ex:BipONE} \rm The LR triple ${\rm B}_d(\mathbb F;1, \rho_0, \rho'_0, \rho''_0)$ is over $\mathbb F$, diameter $d$, bipartite, normalized, and satisfies \begin{eqnarray*} && d\geq 4; \quad \qquad {\mbox {\rm $d$ is even}}; \qquad \quad {\mbox {\rm ${\rm Char}(\mathbb F)$ is 0 or greater than $d/2$}}; \\ && \rho_0, \rho'_0, \rho''_0 \in \mathbb F; \qquad \qquad \qquad \rho_0 \rho'_0 \rho''_0 = -1; \\&& \varphi_i = \begin{cases} \frac{i \rho_0 }{2} & {\mbox{\rm if $i$ is even}}; \\ \frac{i-d-1}{2\rho_0} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d); \\ &&\varphi'_i = \begin{cases} \frac{i \rho'_0 }{2} & {\mbox{\rm if $i$ is even}}; \\ \frac{i-d-1}{2 \rho'_0} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d); \\ &&\varphi''_i = \begin{cases} \frac{i \rho''_0 }{2} & {\mbox{\rm if $i$ is even}}; \\ \frac{i-d-1}{2\rho''_0} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \end{eqnarray*} \end{example} \begin{example} \label{ex:BipONEd2} \rm The LR triple ${\rm B}_2(\mathbb F;\rho_0, \rho'_0, \rho''_0)$ is over $\mathbb F$, diameter $2$, bipartite, normalized, and satisfies \begin{eqnarray*} && \rho_0, \rho'_0, \rho''_0 \in \mathbb F; \qquad \qquad \qquad \rho_0 \rho'_0 \rho''_0 = -1; \\&& \varphi_1=-1/ \rho_0, \qquad \quad \varphi'_1=-1/ \rho'_0, \qquad \quad \varphi''_1=-1/ \rho''_0, \\&& \varphi_2= \rho_0, \qquad \qquad \quad \varphi'_2=\rho'_0, \qquad \qquad \quad \varphi''_2=\rho''_0. \end{eqnarray*} \end{example} \begin{lemma} \label{lem:BIPpre} For the LR triples in Examples \ref{ex:BIP}--\ref{ex:BipONEd2}, {\rm (i)} they exist; {\rm (ii)} they are contained in ${\rm B}_d(\mathbb F)$; {\rm (iii)} they are mutually nonisomorphic. \end{lemma} \begin{proof} Without loss we may assume $d\geq 4$, since for $d=2$ the result follows from Lemma \ref{lem:dtwoClassB}. \\ \noindent (i) In Example \ref{ex:BIP} we see a parameter $t \in \mathbb F$. 
For Example \ref{ex:BipONE} define $t=1$. We proceed as in the proof of Lemma \ref{lem:NBGexist}(i), except that now $a_i = 0 $ for $0 \leq i \leq d$ and the sequences $\lbrace \alpha_i \rbrace_{i=0}^d$, $\lbrace \beta_i \rbrace_{i=0}^d$ are defined as follows: $\alpha_0 =1$, $\beta_0=1$ and for $1 \leq i \leq d$, \begin{eqnarray*} && \alpha_{i-2}/\alpha_i = \sum_{k=0}^{i/2-1} t^k \qquad \qquad \beta_{i-2}/\beta_i = -\sum_{k=0}^{i/2-1} t^{-k} \qquad \qquad {\mbox {\rm (if $i$ is even),}} \\ && \alpha_i = 0, \qquad \qquad \beta_i = 0 \qquad \qquad {\mbox {\rm (if $i$ is odd).}} \end{eqnarray*} \noindent (ii) By construction. \\ (iii) Similar to the proof of Lemma \ref{lem:NBGexist}(iii). \end{proof} \begin{theorem} \label{thm:BIPclass} Assume that $d$ is even and at least 2. Then each LR triple in ${\rm B}_d(\mathbb F)$ is isomorphic to a unique LR triple listed in Examples \ref{ex:BIP}--\ref{ex:BipONEd2}. \end{theorem} \begin{proof} Without loss we may assume $d\geq 4$, since for $d=2$ the result follows from Lemma \ref{lem:dtwoClassB}. Let $A,B,C$ denote an LR triple in ${\rm B}_d(\mathbb F)$, with parameter array (\ref{eq:paLRT}) and Toeplitz data (\ref{eq:ToeplitzData}). Recall the sequences $\lbrace \rho_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho'_i \rbrace_{i=0}^{d-1}$, $\lbrace \rho''_i \rbrace_{i=0}^{d-1}$ from Definition \ref{def:RHOD}. These sequences are constrained by Proposition \ref{lem:3seqCon}, and related to each other by Lemma \ref{lem:RhoRelBip}(iii). So by Proposition \ref{prop:noddPre}, there exists $0 \not=t \in \mathbb F$ such that for $0 \leq i \leq d-1$, \begin{eqnarray} \label{ex:rho3times} &&\frac{\rho_i}{\rho_0} = \frac{\rho'_i}{\rho'_0} = \frac{\rho''_i}{\rho''_0} = t^{i/2} \; \qquad \qquad \quad \qquad {\mbox{\rm if $i$ is even}}; \\ && {\rho_i}{\rho_0} = {\rho'_i}{\rho'_0} = {\rho''_i}{\rho''_0} = t^{(i-d+1)/2} \qquad \quad {\mbox{\rm if $i$ is odd}}. \label{ex:rho3timesB} \end{eqnarray} We now compute $\lbrace \varphi_i \rbrace_{i=1}^d$. 
By Lemma \ref{lem:BipRRR2} and since $A,B,C$ is normalized, \begin{eqnarray*} \rho_i = \varphi_{i+2}-\varphi_i \qquad \qquad (0 \leq i \leq d-1). \end{eqnarray*} By this and since $\varphi_0 = 0 = \varphi_{d+1}$, \begin{eqnarray} \varphi_i = \begin{cases} \rho_0 + \rho_2 + \rho_4 + \cdots + \rho_{i-2} & {\mbox{\rm if $i$ is even}}; \\ -\rho_i-\rho_{i+2}-\rho_{i+4}- \cdots - \rho_{d-1} & {\mbox{\rm if $i$ is odd}} \end{cases} \qquad \qquad (1 \leq i \leq d). \label{eq:vpFormBIP} \end{eqnarray} Evaluate (\ref{eq:vpFormBIP}) using (\ref{ex:rho3times}), (\ref{ex:rho3timesB}) to obtain the formula for $\lbrace \varphi_i\rbrace_{i=1}^d $ given in Example \ref{ex:BIP} (if $t \not=1$) or Example \ref{ex:BipONE} (if $t=1$). We similarly obtain the formulae for $\lbrace \varphi'_i\rbrace_{i=1}^d $, $\lbrace \varphi''_i\rbrace_{i=1}^d $ given in Examples \ref{ex:BIP}, \ref{ex:BipONE}. Using these formulae and $\rho_0 = \varphi'_1/\varphi''_d$ we obtain the formula for $\rho_0 \rho'_0 \rho''_0$ given in Examples \ref{ex:BIP}, \ref{ex:BipONE}. For the moment assume that $t\not= 1$. Then for $1 \leq i \leq d/2$, $t^i\not=1$ since $\varphi_{2i}\not=0$. We have met the requirements of Example \ref{ex:BIP}, so $A,B,C$ is isomorphic to ${\rm B}_d(\mathbb F;t, \rho_0, \rho'_0, \rho''_0)$. Next assume that $t= 1$. Then for $1 \leq i \leq d/2$, $i\not=0$ in $\mathbb F$ since $\varphi_{2i}\not=0$. Therefore ${\rm Char}(\mathbb F)$ is 0 or greater than $d/2$. We have met the requirements of Example \ref{ex:BipONE}, so $A,B,C$ is isomorphic to ${\rm B}_d(\mathbb F;1, \rho_0, \rho'_0, \rho''_0)$. In summary, we have shown that $A,B,C$ is isomorphic to at least one LR triple in Examples \ref{ex:BIP}, \ref{ex:BipONE}. The result follows in view of Lemma \ref{lem:BIPpre}(iii). \end{proof} \noindent We record a result from the proof of Theorem \ref{thm:BIPclass}. \begin{lemma} Assume that $d$ is even and at least 2. Let $A,B,C$ denote an LR triple in ${\rm B}_d(\mathbb F)$. 
Then for $0 \leq i \leq d-1$ the scalars $\rho_i, \rho'_i, \rho''_i$ from Definition \ref{def:RHOD} are described as follows. For ${\rm B}_d(\mathbb F;t, \rho_0, \rho'_0, \rho''_0)$, \begin{eqnarray*} &&\rho_i = \begin{cases} \rho_0 t^{i/2} & {\mbox{\rm if $i$ is even}}; \\ \rho_0^{-1}t^{(i-d+1)/2} & {\mbox{\rm if $i$ is odd}}; \end{cases} \\ &&\rho'_i = \begin{cases} \rho'_0 t^{i/2} & {\mbox{\rm if $i$ is even}}; \\ (\rho'_0)^{-1}t^{(i-d+1)/2} & {\mbox{\rm if $i$ is odd}}; \end{cases} \\ &&\rho''_i = \begin{cases} \rho''_0 t^{i/2} & {\mbox{\rm if $i$ is even}}; \\ (\rho''_0)^{-1}t^{(i-d+1)/2} & {\mbox{\rm if $i$ is odd}}. \end{cases} \end{eqnarray*} For ${\rm B}_d(\mathbb F;1, \rho_0, \rho'_0, \rho''_0)$ and ${\rm B}_2(\mathbb F; \rho_0, \rho'_0, \rho''_0)$, \begin{eqnarray*} &&\rho_i = \begin{cases} \rho_0 & {\mbox{\rm if $i$ is even}}; \\ \rho_0^{-1} & {\mbox{\rm if $i$ is odd}}; \end{cases} \\ &&\rho'_i = \begin{cases} \rho'_0 & {\mbox{\rm if $i$ is even}}; \\ (\rho'_0)^{-1}& {\mbox{\rm if $i$ is odd}}; \end{cases} \\ &&\rho''_i = \begin{cases} \rho''_0 & {\mbox{\rm if $i$ is even}}; \\ (\rho''_0)^{-1} & {\mbox{\rm if $i$ is odd}}. \end{cases} \end{eqnarray*} \end{lemma} \begin{proof} For $d\geq 4$ use (\ref{ex:rho3times}), (\ref{ex:rho3timesB}). For $d=2$ use Definition \ref{def:RHOD} and Example \ref{ex:BipONEd2}. \end{proof} \noindent We have now classified up to isomorphism the normalized LR triples over $\mathbb F$ that have diameter $d\geq 2$. \noindent Recall the similarity relation for LR triples, from Definition \ref{def:simiLar}. \begin{corollary} Consider the set of LR triples consisting of the LR triple in Lemma \ref{lem:1or2} and the LR triples in Examples \ref{ex:nbw1}--\ref{ex:nbw3}, \ref{ex:NBGq}, \ref{ex:NBGqEQ1}, \ref{ex:NBNGdt}. Each nonbipartite LR triple over $\mathbb F$ is similar to a unique LR triple in this set. 
\end{corollary} \begin{proof} By Definition \ref{def:simiLar}, Corollary \ref{prop:normalNBunique}, Lemma \ref{lem:1or2}, and Theorems \ref{thm:ClassW}, \ref{thm:NBG}, \ref{thm:NBNG}. \end{proof} \noindent Recall the bisimilarity relation for bipartite LR triples, from Definition \ref{def:biSim}. \begin{corollary} Consider the set of LR triples consisting of Examples \ref{ex:BIP}--\ref{ex:BipONEd2}. Each nontrivial bipartite LR triple over $\mathbb F$ is bisimilar to a unique LR triple in this set. \end{corollary} \begin{proof} By Definition \ref{def:biSim}, Corollary \ref{lem:biassocUnique}, and Theorem \ref{thm:BIPclass}. \end{proof} \section{The Toeplitz data and unipotent maps} \noindent Throughout this section the following notation is in effect. Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote an equitable LR triple on $V$, with parameter array (\ref{eq:paLRT}) and Toeplitz data (\ref{eq:ToeplitzData}). By Definition \ref{def:equitNorm} we have $\alpha_i = \alpha'_i = \alpha''_i $ for $0 \leq i \leq d$, and by Lemma \ref{lem:equitBasicBeta} we have $\beta_i = \beta'_i = \beta''_i $ for $0 \leq i \leq d$. Recall the unipotent maps $\mathbb A$, $\mathbb B$, $\mathbb C$ from Definition \ref{def:Del}. By Proposition \ref{prop:Apoly}, \begin{eqnarray} \label{eq:rotator1A} && \mathbb A = \sum_{i=0}^d \alpha_i A^i, \qquad \qquad \mathbb B = \sum_{i=0}^d \alpha_i B^i, \qquad \qquad \mathbb C = \sum_{i=0}^d \alpha_i C^i, \\ && \label{eq:rotator2A} \mathbb A^{-1} = \sum_{i=0}^d \beta_i A^i, \qquad \qquad \mathbb B^{-1} = \sum_{i=0}^d \beta_i B^i, \qquad \qquad \mathbb C^{-1} = \sum_{i=0}^d \beta_i C^i. \end{eqnarray} Since each of $A,B,C$ is Nil, \begin{eqnarray} \label{eq:NilDF} A^{d+1}=0, \qquad \qquad B^{d+1}=0, \qquad \qquad C^{d+1}=0. \end{eqnarray} In this section we compute $\lbrace \alpha_i\rbrace_{i=0}^d$, $\lbrace \beta_i\rbrace_{i=0}^d$ for the cases ${\rm NBG}_d(\mathbb F)$, ${\rm NBNG}_d(\mathbb F)$, ${\rm B}_d(\mathbb F)$.
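The following minimal sketch may help orient the reader; it is not part of the formal development. It illustrates, for $d=1$ and the nonbipartite initial values $\alpha_0=\alpha_1=\beta_0=1$, $\beta_1=-1$ (these values appear in the proof of Proposition \ref{prop:alphaBeta}), how the truncation (\ref{eq:NilDF}) forces the sum in (\ref{eq:rotator2A}) to invert the sum in (\ref{eq:rotator1A}).

```latex
% Sketch for d = 1, where (eq:NilDF) gives A^{d+1} = A^2 = 0:
\mathbb{A}\,\mathbb{A}^{-1}
   = (\alpha_0 I + \alpha_1 A)(\beta_0 I + \beta_1 A)
   = (I + A)(I - A)
   = I - A^2
   = I.
```

For general $d$ the same mechanism is at work: modulo $A^{d+1}$, the polynomial $\sum_{i=0}^d \beta_i \lambda^i$ is the multiplicative inverse of $\sum_{i=0}^d \alpha_i \lambda^i$, which is one way to check the formulas for $\lbrace \beta_i \rbrace_{i=0}^d$ below against those for $\lbrace \alpha_i \rbrace_{i=0}^d$.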
In each case, we relate $\mathbb A$, $\mathbb B$, $\mathbb C$ to the exponential function or quantum exponential function. We now recall these functions. In what follows, $\lambda $ denotes an indeterminate. The infinite series that we will encounter should be viewed as formal sums; their convergence is not an issue. \begin{definition} \label{def:exp} \rm Define \begin{eqnarray*} {\rm exp}(\lambda) = \sum_{i=0}^N \frac{\lambda^i}{i!}, \end{eqnarray*} where $N=\infty$ if ${\rm Char}(\mathbb F)=0$, and $N+1= {\rm Char}(\mathbb F)$ if ${\rm Char}(\mathbb F)>0$. \end{definition} \begin{definition} \label{def:qexp} \rm For a nonzero $q \in \mathbb F$ such that $q \not=1$, define \begin{eqnarray*} {\rm exp}_q(\lambda) = \sum_{i=0}^N \frac{\lambda^i}{(1)(1+q)(1+q+q^2)\cdots (1+q+q^2+\cdots + q^{i-1})}, \end{eqnarray*} where $N=\infty$ if $q$ is not a root of unity, and otherwise $q$ is a primitive $(N+1)$st root of unity. \end{definition} \begin{proposition} \label{prop:alphaBeta} Assume that $A,B,C$ is in ${\rm NBG}_d(\mathbb F)$ or ${\rm NBNG}_d(\mathbb F)$ or ${\rm B}_d(\mathbb F)$. Then the scalars $\lbrace \alpha_i \rbrace_{i=0}^d$, $\lbrace \beta_i \rbrace_{i=0}^d$ and maps $\mathbb A$, $\mathbb B$, $\mathbb C$ are described as follows. \noindent {\bf Case ${\rm NBG}_d(\mathbb F;q)$}. For $0 \leq i \leq d$, \begin{eqnarray*} \alpha_i &=& \frac{1}{ (1) (1+q) (1+q+q^2) \cdots (1+q+q^2+\cdots + q^{i-1}) }, \\ \beta_i &=& \frac{(-1)^i q^{\binom{i}{2}}}{(1) (1+q)(1+q+q^2) \cdots (1+q+q^2+\cdots + q^{i-1})} \\ &=& \frac{(-1)^i}{(1) (1+q^{-1})(1+q^{-1}+q^{-2}) \cdots (1+q^{-1}+q^{-2}+\cdots + q^{1-i})}. \end{eqnarray*} Moreover, \begin{eqnarray*} && \mathbb A = {\rm exp}_q(A), \qquad \qquad \mathbb B = {\rm exp}_q(B), \qquad \qquad \mathbb C = {\rm exp}_q(C), \\ && \mathbb A^{-1} = {\rm exp}_{q^{-1}}(-A), \qquad \mathbb B^{-1} = {\rm exp}_{q^{-1}}(-B), \qquad \mathbb C^{-1} = {\rm exp}_{q^{-1}}(-C). \end{eqnarray*} \noindent {\bf Case ${\rm NBG}_d(\mathbb F;1)$}.
For $0 \leq i \leq d$, \begin{eqnarray*} &&\alpha_i = \frac{1}{i!}, \qquad \qquad \beta_i = \frac{(-1)^i}{i!}. \end{eqnarray*} Moreover, \begin{eqnarray*} && \mathbb A = {\rm exp}(A), \qquad \qquad \mathbb B = {\rm exp}(B), \qquad \qquad \mathbb C = {\rm exp}(C), \\ && \mathbb A^{-1} = {\rm exp}(-A), \qquad \quad \mathbb B^{-1} = {\rm exp}(-B), \qquad \quad \mathbb C^{-1} = {\rm exp}(-C). \end{eqnarray*} \noindent {\bf Case ${\rm NBNG}_d(\mathbb F;t)$}. For $0 \leq i \leq d/2$, \begin{eqnarray*} \alpha_{2i} &=& \frac{1}{(1-t)(1-t^2)\cdots (1-t^{i})}, \\ \beta_{2i} &=& \frac{(-1)^i t^{\binom{i+1}{2}} }{(1-t)(1-t^2)\cdots (1-t^i)} \\ &=& \frac{1}{(1-t^{-1})(1-t^{-2})\cdots (1-t^{-i})}. \end{eqnarray*} For $0 \leq i \leq d/2-1$, \begin{eqnarray*} \alpha_{2i+1} = \alpha_{2i}, \qquad \qquad \beta_{2i+1} = -\beta_{2i}. \end{eqnarray*} \noindent Moreover, \begin{eqnarray*} && \mathbb A = (I+A){\rm exp}_t \Bigl(\frac{A^2}{1-t}\Bigr), \qquad \qquad \mathbb A^{-1} = (I-A){\rm exp}_{t^{-1}} \Bigl(\frac{A^2}{1-t^{-1}}\Bigr), \\ && \mathbb B = (I+B){\rm exp}_t \Bigl(\frac{B^2}{1-t}\Bigr), \qquad \qquad \mathbb B^{-1} = (I-B){\rm exp}_{t^{-1}} \Bigl(\frac{B^2}{1-t^{-1}}\Bigr), \\ && \mathbb C = (I+C){\rm exp}_t \Bigl(\frac{C^2}{1-t}\Bigr), \qquad \qquad \mathbb C^{-1} = (I-C){\rm exp}_{t^{-1}} \Bigl(\frac{C^2}{1-t^{-1}}\Bigr). \end{eqnarray*} \noindent {\bf Case ${\rm B}_d(\mathbb F;t,\rho_0, \rho'_0,\rho''_0)$}. For $0 \leq i \leq d/2$, \begin{eqnarray*} \alpha_{2i} &=& \frac{1}{(1)(1+t)(1+t+t^2)\cdots (1+t+t^2 + \cdots + t^{i-1})}, \\ \beta_{2i} &=& \frac{(-1)^i t^{\binom{i}{2}}}{(1) (1+t)(1+t+t^2)\cdots (1+t+t^2 + \cdots + t^{i-1})} \\ &=& \frac{(-1)^i} {(1) (1+t^{-1})(1+t^{-1}+t^{-2})\cdots (1+t^{-1}+t^{-2} + \cdots + t^{1-i})}. \end{eqnarray*} For $0 \leq i \leq d/2-1$, \begin{eqnarray*} \alpha_{2i+1} = 0,\qquad \qquad \beta_{2i+1}=0. 
\end{eqnarray*} Moreover, \begin{eqnarray*} && \mathbb A= {\rm exp}_t(A^2), \qquad \qquad \mathbb B= {\rm exp}_t(B^2), \qquad \qquad \mathbb C= {\rm exp}_t(C^2), \\ && \mathbb A^{-1} = {\rm exp}_{t^{-1}}(-A^2), \qquad \mathbb B^{-1} = {\rm exp}_{t^{-1}}(-B^2), \qquad \mathbb C^{-1} = {\rm exp}_{t^{-1}}(-C^2). \end{eqnarray*} \noindent {\bf Case ${\rm B}_d(\mathbb F; 1,\rho_0,\rho'_0,\rho''_0)$}. For $0 \leq i \leq d/2$, \begin{eqnarray*} \alpha_{2i} = \frac{1}{i!}, \qquad \qquad \beta_{2i} = \frac{(-1)^i}{i!}. \end{eqnarray*} For $0 \leq i \leq d/2-1$, \begin{eqnarray*} \alpha_{2i+1} = 0,\qquad \qquad \beta_{2i+1}=0. \end{eqnarray*} \noindent Moreover, \begin{eqnarray*} && \mathbb A = {\rm exp}(A^2), \qquad \qquad \mathbb B = {\rm exp}(B^2), \qquad \qquad \mathbb C = {\rm exp}(C^2), \\ && \mathbb A^{-1} = {\rm exp}(-A^2), \qquad \mathbb B^{-1} = {\rm exp}(-B^2), \qquad \mathbb C^{-1} = {\rm exp}(-C^2). \end{eqnarray*} \noindent {\bf Case ${\rm B}_2(\mathbb F; \rho_0,\rho'_0,\rho''_0)$}. Same as ${\rm B}_d(\mathbb F; 1,\rho_0,\rho'_0,\rho''_0)$ with $d=2$. \end{proposition} \begin{proof} Compute $\lbrace \alpha_i\rbrace_{i=0}^d$, $\lbrace \beta_i\rbrace_{i=0}^d$ as follows. In each nonbipartite case, use Proposition \ref{prop:AlphaRecursion3} and induction, together with \begin{eqnarray*} \alpha_0=1, \qquad \alpha_1 = 1,\qquad \beta_0=1, \qquad \beta_1 = -1. \end{eqnarray*} In each bipartite case, use Proposition \ref{prop:AlphaRecursion} and induction, together with \begin{eqnarray*} \alpha_0=1, \qquad \alpha_1 = 0, \qquad \alpha_2=1, \qquad \beta_0=1, \qquad \beta_1 = 0, \qquad \beta_2 = -1. \end{eqnarray*} Our assertions about $\mathbb A$, $\mathbb B$, $\mathbb C$ follow from (\ref{eq:rotator1A})--(\ref{eq:NilDF}) and Definitions \ref{def:exp}, \ref{def:qexp}. \end{proof} \section{Relations for LR triples} In this section we consider the relations satisfied by an LR triple $A,B,C$.
In order to motivate our results, assume for the moment that $A,B,C$ has $q$-Weyl type, in the sense of Definition \ref{def:qExceptional}. Then $A,B,C$ satisfy the relations in (\ref{eq:WWW}) and Lemma \ref{lem:sixeqs}. Next assume that $A,B,C$ is contained in ${\rm NBG}_d(\mathbb F)$ or ${\rm NBNG}_d(\mathbb F)$ or ${\rm B}_d(\mathbb F)$. We show that $A,B,C$ satisfy some analogous relations. We treat the nonbipartite cases ${\rm NBG}_d(\mathbb F)$, ${\rm NBNG}_d(\mathbb F)$ and the bipartite case ${\rm B}_d(\mathbb F)$ separately. \begin{proposition} \label{prop:ABCRel} Let $A,B,C$ denote an LR triple in ${\rm NBG}_d(\mathbb F)$ or ${\rm NBNG}_d(\mathbb F)$. Then $A,B,C$ satisfy the following relations. \noindent {\bf Case ${\rm NBG}_d(\mathbb F;q)$}. \noindent We have \begin{eqnarray*} && A^2B-q(1+q)ABA+q^3BA^2=q(1+q)A, \\ && B^2C-q(1+q)BCB+q^3CB^2=q(1+q)B, \\ && C^2A-q(1+q)CAC+q^3AC^2=q(1+q)C \end{eqnarray*} and also \begin{eqnarray*} && AB^2-q(1+q)BAB+q^3B^2A=q(1+q)B, \\ && BC^2-q(1+q)CBC+q^3C^2B=q(1+q)C, \\ && CA^2-q(1+q)ACA+q^3A^2C=q(1+q)A. \end{eqnarray*} \noindent We have \begin{eqnarray*} && A \bigl(I+(BC-qCB)(1-q^{-1})\bigr) = qB+q^{-1}C+qCB-q^{-1}BC, \\ && B \bigl(I+(CA-qAC)(1-q^{-1})\bigr) = qC+q^{-1}A+qAC-q^{-1}CA, \\ && C \bigl(I+(AB-qBA)(1-q^{-1})\bigr) = qA+q^{-1}B+qBA-q^{-1}AB \end{eqnarray*} and also \begin{eqnarray*} && \bigl(I+(BC-qCB)(1-q^{-1})\bigr) A = q^{-1}B+qC+qCB-q^{-1}BC, \\ && \bigl(I+(CA-qAC)(1-q^{-1})\bigr) B = q^{-1}C+qA+qAC-q^{-1}CA, \\ && \bigl(I+(AB-qBA)(1-q^{-1})\bigr) C = q^{-1}A+qB+qBA-q^{-1}AB. \end{eqnarray*} \noindent We have \begin{eqnarray*} ABC - BCA + q (CBA-ACB) &=& (1+q)(B-C), \\ BCA - CAB + q (ACB-BAC) &=& (1+q)(C-A), \\ CAB - ABC + q (BAC-CBA) &=& (1+q)(A-B) \end{eqnarray*} and also \begin{eqnarray*} &&(1+2 q^{-1})(ABC+BCA+CAB)-(1+2q)(CBA+ACB+BAC) \\ &&=(q-q^{-1})(A+B+C)- \frac{3(q^d-1)(q^{d+2}-1)}{q^d(q-1)^2} I. \end{eqnarray*} \noindent {\bf Case ${\rm NBG}_d(\mathbb F;1)$}. 
We have \begin{eqnarray*} && \lbrack A, \lbrack A,B\rbrack \rbrack = 2A, \qquad \qquad \lbrack B, \lbrack B,A\rbrack \rbrack = 2B, \\ && \lbrack B, \lbrack B,C\rbrack \rbrack = 2B, \qquad \qquad \lbrack C, \lbrack C,B\rbrack \rbrack = 2C, \\ && \lbrack C, \lbrack C,A\rbrack \rbrack = 2C, \qquad \qquad \lbrack A, \lbrack A,C\rbrack \rbrack = 2A \end{eqnarray*} and also \begin{eqnarray*} A = B+C-\lbrack B,C\rbrack, \qquad B = C+A-\lbrack C,A\rbrack, \qquad C = A+B-\lbrack A,B\rbrack. \end{eqnarray*} We have \begin{eqnarray*} && \lbrack A, \lbrack B, C\rbrack \rbrack = 2(B-C), \\ && \lbrack B, \lbrack C, A\rbrack \rbrack = 2(C-A), \\ && \lbrack C, \lbrack A, B\rbrack \rbrack = 2(A-B) \end{eqnarray*} and also \begin{eqnarray*} ABC+BCA+CAB-CBA-ACB-BAC = -d(d+2)I. \end{eqnarray*} \noindent {\bf Case ${\rm NBNG}_d(\mathbb F;t)$}. We have \begin{eqnarray*} && \frac{A^2B-tBA^2}{1-t} = -A, \qquad \frac{B^2C-tCB^2}{1-t} = -B, \qquad \frac{C^2A-tAC^2}{1-t} = -C, \\ && \frac{AB^2-tB^2A}{1-t} = -B, \qquad \frac{BC^2-tC^2B}{1-t} = -C, \qquad \frac{CA^2-tA^2C}{1-t} = -A \end{eqnarray*} and also \begin{eqnarray*} && \frac{ABC-tCBA}{1-t}+ A+C = -\frac{(1-t^{-d/2})(1-t^{1+d/2})}{1-t}\,I, \\ && \frac{BCA-tACB}{1-t}+ B+A = -\frac{(1-t^{-d/2})(1-t^{1+d/2})}{1-t}\,I, \\ && \frac{CAB-tBAC}{1-t}+ C+B = -\frac{(1-t^{-d/2})(1-t^{1+d/2})}{1-t}\,I. \end{eqnarray*} \end{proposition} \begin{proof} To verify these relations, represent $A,B,C$ by matrices, using for example the first row of the table in Proposition \ref{prop:matrixRep}. \end{proof} \begin{remark}\rm In \cite[p.~308]{benkRoby} G. Benkart and T. Roby introduce the concept of a down-up algebra. Consider an LR triple $A,B,C$ from Proposition \ref{prop:ABCRel}. By that proposition, any two of $A,B,C$ satisfy the defining relations for a down-up algebra. \end{remark} \noindent Let $A,B,C$ denote an LR triple in ${\rm B}_d(\mathbb F)$, and consider its projector $J$ from Definition \ref{def:projector}. 
By Lemma \ref{lem:EEEJ} we have $J^2=J$, and by Lemma \ref{lem:ABCJ}, \begin{eqnarray*} A = JA+AJ, \qquad \qquad B = JB+BJ, \qquad \qquad C = JC+CJ. \end{eqnarray*} \begin{proposition} \label{prop:ABCRelBIP} Let $A,B,C$ denote an LR triple in ${\rm B}_d(\mathbb F)$. Then $A,B,C$ and its projector $J$ satisfy the following relations. \noindent {\bf Case ${\rm B}_d(\mathbb F;t,\rho_0, \rho'_0,\rho''_0)$}. We have \begin{eqnarray*} && \Bigl(\rho_0 AB+\rho_0' \rho_0 '' B A -\frac{1-t^{-d/2}}{1-t} tI \Bigr)J=0, \\ && \Bigl(\rho_0' BC+\rho_0'' \rho_0 CB -\frac{1-t^{-d/2}}{1-t} tI \Bigr)J=0, \\ && \Bigl(\rho_0'' CA+\rho_0 \rho_0' AC -\frac{1-t^{-d/2}}{1-t} tI \Bigr)J=0 \end{eqnarray*} and also \begin{eqnarray*} && \Bigl(\rho_0'\rho_0'' AB+\rho_0 t B A -\frac{1-t^{-1-d/2}}{1-t} t^2I \Bigr)(I-J)=0, \\ && \Bigl(\rho_0''\rho_0 BC+\rho_0' t CB -\frac{1-t^{-1-d/2}}{1-t} t^2I \Bigr)(I-J)=0, \\ && \Bigl(\rho_0\rho_0' CA+\rho_0'' t AC -\frac{1-t^{-1-d/2}}{1-t} t^2I \Bigr)(I-J)=0. \end{eqnarray*} \noindent We have \begin{eqnarray*} && (A^2B-tBA^2-(t/\rho_0) A)J=0, \qquad \qquad J(A^2B-tBA^2-\rho_0 A)=0, \\ && (B^2C-tCB^2-(t/\rho'_0) B)J=0, \qquad \qquad J(B^2C-tCB^2-\rho'_0 B)=0, \\ && (C^2A-tAC^2-(t/\rho''_0) C)J=0, \qquad \qquad J(C^2A-tAC^2-\rho''_0 C)=0 \end{eqnarray*} and also \begin{eqnarray*} && J(AB^2-tB^2A-(t/\rho_0) B)=0, \qquad \qquad (AB^2-tB^2A-\rho_0 B)J=0, \\ && J(BC^2-tC^2B-(t/\rho'_0) C)=0, \qquad \qquad (BC^2-tC^2B-\rho'_0 C)J=0, \\ && J(CA^2-tA^2C-(t/\rho''_0) A)=0, \qquad \qquad (CA^2-tA^2C-\rho''_0 A)J=0. \end{eqnarray*} \noindent We have \begin{eqnarray*} &&A^3 B + A^2BA-tABA^2 -tBA^3 = (\rho_0 + t/\rho_0)A^2, \\ && B^3 C + B^2CB-tBCB^2 -tCB^3 = (\rho'_0 + t/\rho'_0)B^2, \\ && C^3 A + C^2AC-tCAC^2 -tAC^3 = (\rho''_0 + t/\rho''_0)C^2 \end{eqnarray*} and also \begin{eqnarray*} && A B^3 + BAB^2-tB^2AB -tB^3A = (\rho_0 + t/\rho_0)B^2, \\ && B C^3 + CBC^2-tC^2BC -tC^3B = (\rho'_0 + t/\rho'_0)C^2, \\ && C A^3 + ACA^2-tA^2CA -tA^3C = (\rho''_0 + t/\rho''_0)A^2. 
\end{eqnarray*} We have \begin{eqnarray*} && \Bigl(ABC-\frac{tA-\rho_0 t B + \rho_0 \rho_0' C}{\rho_0'(1-t)}\Bigr)J=0, \qquad \quad \Bigl(CBA-\frac{tA-\rho_0 B + \rho_0 \rho_0' C}{\rho_0'(1-t)}\Bigr)J=0, \\ && \Bigl(BCA-\frac{tB-\rho_0' t C + \rho_0' \rho_0'' A}{\rho_0''(1-t)}\Bigr)J=0, \qquad \quad \Bigl(ACB-\frac{tB-\rho_0' C + \rho_0' \rho_0'' A}{\rho_0''(1-t)}\Bigr)J=0, \\ && \Bigl(CAB-\frac{tC-\rho_0'' t A + \rho_0'' \rho_0 B}{\rho_0(1-t)}\Bigr)J=0, \qquad \quad \Bigl(BAC-\frac{tC-\rho_0'' A + \rho_0'' \rho_0 B}{\rho_0(1-t)}\Bigr)J=0 \end{eqnarray*} and also \begin{eqnarray*} && J\Bigl(ABC-\frac{\rho_0 \rho'_0 A-\rho_0' t B + t C}{\rho_0(1-t)}\Bigr)=0, \qquad \quad J\Bigl(CBA-\frac{\rho_0 \rho'_0 A-\rho_0' B + t C}{\rho_0(1-t)}\Bigr)=0, \\ && J\Bigl(BCA-\frac{\rho_0' \rho''_0 B-\rho_0'' t C + t A}{\rho'_0(1-t)}\Bigr)=0, \qquad \quad J\Bigl(ACB-\frac{\rho_0' \rho''_0 B-\rho_0'' C + t A}{\rho'_0(1-t)}\Bigr)=0, \\ && J\Bigl(CAB-\frac{\rho_0'' \rho_0 C-\rho_0 t A + t B}{\rho''_0(1-t)}\Bigr)=0, \qquad \quad J\Bigl(BAC-\frac{\rho_0'' \rho_0 C-\rho_0 A + t B}{\rho''_0(1-t)}\Bigr)=0. \end{eqnarray*} \noindent We have \begin{eqnarray*} && (ABC-CBA-(\rho_0/\rho_0')B)J=0, \qquad \qquad J(ABC-CBA-(\rho_0'/\rho_0)B)=0, \\ && (BCA-ACB-(\rho_0'/\rho_0'')C)J=0, \qquad \qquad J(BCA-ACB-(\rho_0''/\rho_0')C)=0, \\ && (CAB-BAC-(\rho_0''/\rho_0)A)J=0, \qquad \qquad J(CAB-BAC-(\rho_0/\rho_0'')A)=0. \end{eqnarray*} \noindent {\bf Case ${\rm B}_d(\mathbb F;1,\rho_0, \rho'_0,\rho''_0)$}. We have \begin{eqnarray*} && (\rho_0 AB + \rho_0' \rho_0'' BA + (d/2)I )J=0, \\ && (\rho_0' BC + \rho_0'' \rho_0 CB + (d/2)I )J=0, \\ && (\rho_0'' CA + \rho_0 \rho_0' AC + (d/2)I )J=0 \end{eqnarray*} and also \begin{eqnarray*} && \Bigl(\rho_0' \rho_0'' AB + \rho_0 BA + \frac{d+2}{2} I\Bigr)(I-J)=0, \\ && \Bigl(\rho_0'' \rho_0 BC + \rho_0' CB + \frac{d+2}{2} I \Bigr)(I-J)=0, \\ && \Bigl(\rho_0 \rho_0' CA + \rho_0'' AC + \frac{d+2}{2} I \Bigr) (I-J)=0. 
\end{eqnarray*} \noindent We have \begin{eqnarray*} && (A^2 B - B A^2 -A/\rho_0)J=0, \qquad \qquad J(A^2 B - B A^2 -\rho_0 A)=0, \\ && (B^2 C - C B^2 -B/\rho_0')J=0, \qquad \qquad J(B^2 C - C B^2 -\rho_0' B)=0, \\ && (C^2 A - A C^2 -C/\rho_0'')J=0, \qquad \qquad J(C^2 A - A C^2 -\rho_0'' C)=0 \end{eqnarray*} and also \begin{eqnarray*} && J(A B^2 - B^2 A -B/\rho_0)=0, \qquad \qquad (A B^2 - B^2 A -\rho_0 B)J=0, \\ && J(B C^2 - C^2 B -C/\rho_0')=0, \qquad \qquad (B C^2 - C^2 B -\rho_0' C)J=0, \\ && J(C A^2 - A^2 C -A/\rho_0'')=0, \qquad \qquad (C A^2 - A^2 C -\rho_0'' A)J=0. \end{eqnarray*} We have \begin{eqnarray*} &&A^3 B + A^2BA-ABA^2 -BA^3 = (\rho_0 + 1/\rho_0)A^2, \\ && B^3 C + B^2CB-BCB^2 -CB^3 = (\rho'_0 + 1/\rho'_0)B^2, \\ && C^3 A + C^2AC-CAC^2 -AC^3 = (\rho''_0 + 1/\rho''_0)C^2 \end{eqnarray*} and also \begin{eqnarray*} && A B^3 + BAB^2-B^2AB -B^3A = (\rho_0 + 1/\rho_0)B^2, \\ && B C^3 + CBC^2-C^2BC -C^3B = (\rho'_0 + 1/\rho'_0)C^2, \\ && C A^3 + ACA^2-A^2CA -A^3C = (\rho''_0 + 1/\rho''_0)A^2. \end{eqnarray*} \noindent We have \begin{eqnarray*} && (A - B\rho_0 - C/\rho_0'')J=0, \qquad \qquad J(A - B/\rho_0 - C\rho_0'' )=0, \\ && (B - C\rho_0' - A/\rho_0)J=0, \qquad \qquad J(B - C/\rho_0' - A\rho_0 )=0, \\ && (C - A\rho_0 '' - B/\rho_0')J=0, \qquad \qquad J(C - A/\rho_0 '' - B\rho_0')=0. \end{eqnarray*} \noindent {\bf Case ${\rm B}_2(\mathbb F;\rho_0, \rho'_0,\rho''_0)$}. Same as ${\rm B}_d(\mathbb F;1,\rho_0, \rho'_0,\rho''_0)$ with $d=2$, where we interpret $d/2=1$ and $(d+2)/2=0$ if ${\rm Char}(\mathbb F)=2$. \end{proposition} \begin{proof} To verify these relations, represent $A,B,C,J$ by matrices, using for example the first row of the table in Proposition \ref{prop:matrixRep}, along with Lemma \ref{lem:JMat}. 
\end{proof} \section{The quantum algebra $U_q(\mathfrak{sl}_2)$ and the Lie algebra $\mathfrak{sl}_2$} \noindent In this section, we discuss how LR triples are related to the quantum algebra $U_q(\mathfrak{sl}_2)$ and the Lie algebra $\mathfrak{sl}_2$. \noindent Until further notice, assume that the field $\mathbb F$ is arbitrary, and fix a nonzero $q \in \mathbb F$ such that $q^2 \not=1$. We recall the algebra $U_q(\mathfrak{sl}_2)$. We will use the equitable presentation, which was introduced in \cite{equit}. \begin{definition} \label{def:uqA} \rm (See \cite[Theorem~2.1]{equit}.) Let $U_q(\mathfrak{sl}_2)$ denote the $\mathbb F$-algebra with generators $x,y^{\pm 1},z$ and relations $yy^{-1}=1$, $y^{-1}y=1$, \begin{equation} \frac{qxy-q^{-1}yx}{q-q^{-1}} = 1, \quad \qquad \frac{qyz-q^{-1}zy}{q-q^{-1}} = 1, \quad \qquad \frac{qzx-q^{-1}xz}{q-q^{-1}} = 1. \label{eq:uqrels} \end{equation} \end{definition} \noindent The following subalgebra of $U_q(\mathfrak{sl}_2)$ is of interest. \begin{definition} \rm Let $U^R_q(\mathfrak{sl}_2)$ denote the subalgebra of $U_q(\mathfrak{sl}_2)$ generated by $x,y,z$. We call $U^R_q(\mathfrak{sl}_2)$ the {\it reduced $U_q(\mathfrak{sl}_2)$} algebra. \end{definition} \begin{lemma} \label{def:uqARED} \rm (See \cite[Definition~10.6, Lemma~10.9]{uawe}.) The algebra $U^R_q(\mathfrak{sl}_2)$ has a presentation by generators $x,y,z$ and relations \begin{equation} \frac{qxy-q^{-1}yx}{q-q^{-1}} = 1, \quad \qquad \frac{qyz-q^{-1}zy}{q-q^{-1}} = 1, \quad \qquad \frac{qzx-q^{-1}xz}{q-q^{-1}} = 1. \label{eq:uqrelsRED} \end{equation} \end{lemma} \noindent There is a central element $\Lambda $ in $U_q(\mathfrak{sl}_2)$ that is often called the normalized Casimir element \cite[Definition~2.11]{uawe}. The element $\Lambda$ is equal to $(q-q^{-1})^2$ times the Casimir element given in \cite[Section~2.7]{jantzen}. 
By \cite[Lemma~2.15]{uawe}, $\Lambda$ is equal to each of the following: \begin{eqnarray} && qx+q^{-1}y + qz-qxyz, \qquad \qquad q^{-1}x+qy+q^{-1}z-q^{-1}zyx, \label{eq:Lam1} \\ && qy+q^{-1}z + qx-qyzx, \qquad \qquad q^{-1}y+qz+q^{-1}x-q^{-1}xzy, \label{eq:Lam2} \\ && qz+q^{-1}x + qy-qzxy, \qquad \qquad q^{-1}z+qx+q^{-1}y-q^{-1}yxz. \label{eq:Lam3} \end{eqnarray} Note that $\Lambda $ is contained in $U^R_q(\mathfrak{sl}_2)$. \noindent Recall from Definition \ref{def:qExceptional} the LR triples of $q$-Weyl type. \begin{proposition} \label{thm:WeylUq} Let $A,B,C$ denote an LR triple over $\mathbb F$ that has $q$-Weyl type. Then the underlying vector space $V$ becomes a $U^R_q(\mathfrak{sl}_2)$-module on which \begin{eqnarray} A = x, \qquad \qquad B = y, \qquad \qquad C = z. \label{lem:xyzABC} \end{eqnarray} The $U^R_q(\mathfrak{sl}_2)$-module $V$ is irreducible. On the $U^R_q(\mathfrak{sl}_2)$-module $V$, \begin{eqnarray} \Lambda = \alpha_1 (q-q^{-1}) I, \label{eq:LambdaVal} \end{eqnarray} where $\alpha_1$ is the first Toeplitz number for $A,B,C$. \end{proposition} \begin{proof} Compare (\ref{eq:WWW}), (\ref{eq:uqrelsRED}) to obtain the first assertion. The $U^R_q(\mathfrak{sl}_2)$-module $V$ is irreducible by (\ref{lem:xyzABC}) and Lemma \ref{lem:ABCGEN}. To get (\ref{eq:LambdaVal}), compare Lemma \ref{lem:sixeqs} and (\ref{eq:Lam1})--(\ref{eq:Lam3}) using (\ref{lem:xyzABC}). \end{proof} \noindent We are done discussing the LR triples of $q$-Weyl type. \noindent We return our attention to $U_q(\mathfrak{sl}_2)$. By \cite[Lemma~5.1]{equit} we find that in $U_q(\mathfrak{sl}_2)$, \begin{eqnarray*} q(1-xy)= q^{-1}(1-yx), \qquad q(1-yz)= q^{-1}(1-zy), \qquad q(1-zx)= q^{-1}(1-xz). \end{eqnarray*} \begin{definition} \label{def:nxnynz} \rm (See \cite[Definition~5.2]{equit}.) 
Let $n_x, n_y, n_z$ denote the following elements in $U_q(\mathfrak{sl}_2)$: \begin{eqnarray*} && n_x = \frac{q(1-yz)}{q-q^{-1}} = \frac{q^{-1}(1-zy)}{q-q^{-1}}, \\ && n_y = \frac{q(1-zx)}{q-q^{-1}} = \frac{q^{-1}(1-xz)}{q-q^{-1}}, \\ && n_z = \frac{q(1-xy)}{q-q^{-1}} = \frac{q^{-1}(1-yx)}{q-q^{-1}}. \end{eqnarray*} \end{definition} \begin{lemma} {\rm (See \cite[Lemma~5.4]{equit}.)} The following relations hold in $U_q(\mathfrak{sl}_2)$: \begin{eqnarray*} && x n_y =q^2 n_y x, \qquad \qquad x n_z = q^{-2} n_z x, \\ && y n_z =q^2 n_z y, \qquad \qquad y n_x = q^{-2} n_x y, \\ && z n_x =q^2 n_x z, \qquad \qquad z n_y = q^{-2} n_y z. \end{eqnarray*} \end{lemma} \noindent Until further notice, let $A,B,C$ denote an LR triple that is contained in ${\rm NBG}_d(\mathbb F;q^{-2})$. Let $V$ denote the underlying vector space. \begin{definition} \label{def:xyzUQ} \rm Define $X,Y,Z$ in ${\rm End}(V)$ such that for $0 \leq i \leq d$, $X-q^{d-2i}I$ (resp. $Y-q^{d-2i}I$) (resp. $Z-q^{d-2i}I$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. Note that each of $X,Y,Z$ is invertible. \end{definition} \noindent The next result is meant to clarify Definition \ref{def:xyzUQ}. Recall the idempotent data (\ref{eq:idseq}) for $A,B,C$. \begin{lemma} The elements $X,Y,Z$ from Definition \ref{def:xyzUQ} satisfy \begin{eqnarray*} X = \sum_{i=0}^d q^{d-2i} E'_i, \qquad \qquad Y = \sum_{i=0}^d q^{d-2i} E''_i, \qquad \qquad Z = \sum_{i=0}^d q^{d-2i} E_i. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:xyzUQ} and the meaning of the idempotent data. \end{proof} \begin{lemma} \label{eq:XYZform} The elements $X,Y,Z$ from Definition \ref{def:xyzUQ} satisfy \begin{eqnarray*} && X = \frac{(q+q^{-1})I - (q-q^{-1})(q^2BC-q^{-2}CB)}{q^{d+1}+q^{-d-1}}, \\ && Y = \frac{(q+q^{-1})I - (q-q^{-1})(q^2CA-q^{-2}AC)}{q^{d+1}+q^{-d-1}}, \\ && Z = \frac{(q+q^{-1})I - (q-q^{-1})(q^2AB-q^{-2}BA)}{q^{d+1}+q^{-d-1}}. 
\end{eqnarray*} \end{lemma} \begin{proof} To verify the first equation, work with the matrices in ${\rm Mat}_{d+1}(\mathbb F)$ that represent $B,C,X$ with respect to a $(B,C)$-basis for $V$. For $B,C$ these matrices are given in Proposition \ref{prop:matrixRep}. For $X$ this matrix is diagonal, with $(i,i)$-entry $q^{d-2i}$ for $0 \leq i \leq d$. The other two equations are similarly verified. \end{proof} \begin{lemma} \label{lem:XYZUQ} The elements $X,Y,Z$ from Definition \ref{def:xyzUQ} satisfy \begin{eqnarray*} \frac{qXY-q^{-1}YX}{q-q^{-1}} = I, \quad \qquad \frac{qYZ-q^{-1}ZY}{q-q^{-1}} = I, \quad \qquad \frac{qZX-q^{-1}XZ}{q-q^{-1}} = I. \end{eqnarray*} \end{lemma} \begin{proof} To verify these equations, eliminate $X,Y,Z$ using Lemma \ref{eq:XYZform}, and evaluate the result using the relations for ${\rm NBG}_d(\mathbb F;q^{-2})$ given in Proposition \ref{prop:ABCRel}. \end{proof} \begin{proposition} \label{prop:NBGUq} Let $A,B,C$ denote an LR triple contained in ${\rm NBG}_d(\mathbb F;q^{-2})$. Then there exists a unique $U_q(\mathfrak{sl}_2)$-module structure on the underlying vector space $V$, such that for $0 \leq i \leq d$, $x-q^{d-2i}1$ (resp. $y-q^{d-2i}1$) (resp. $z-q^{d-2i}1$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. The $U_q(\mathfrak{sl}_2)$-module $V$ is irreducible. On the $U_q(\mathfrak{sl}_2)$-module $V$, \begin{eqnarray} \label{eq:ABCnuxyz} A = n_x, \qquad \qquad B = n_y, \qquad \qquad C = n_z. \end{eqnarray} \end{proposition} \begin{proof} The $U_q(\mathfrak{sl}_2)$-module structure exists, by Lemma \ref{lem:XYZUQ} and since $Y$ is invertible. The $U_q(\mathfrak{sl}_2)$-module structure is unique by construction. On the $U_q(\mathfrak{sl}_2)$-module $V$ we have $x=X$, $y=Y$, $z=Z$. 
To verify (\ref{eq:ABCnuxyz}), eliminate $n_x$, $n_y$, $n_z$ using Definition \ref{def:nxnynz}, and evaluate the result using Lemma \ref{eq:XYZform} along with the relations for ${\rm NBG}_d(\mathbb F;q^{-2})$ given in Proposition \ref{prop:ABCRel}. The $U_q(\mathfrak{sl}_2)$-module $V$ is irreducible by (\ref{eq:ABCnuxyz}) and Lemma \ref{lem:ABCGEN}. \end{proof} \noindent We are done discussing the LR triples contained in ${\rm NBG}_d(\mathbb F;q^{-2})$. \noindent In a moment we will discuss the LR triples contained in ${\rm NBNG}_d(\mathbb F;q^{-2})$. To prepare for this, we have some comments about $U_q(\mathfrak{sl}_2)$. \begin{lemma} \label{lem:x2y} Assume that $q^4 \not=1$. Then in $U_q({\mathfrak{sl}_2})$, \begin{equation} \label{eq:x2y} \frac{q^2x^2y-q^{-2}yx^2}{q^2-q^{-2}}=x, \qquad \frac{q^2y^2z-q^{-2}zy^2}{q^2-q^{-2}}=y, \qquad \frac{q^2z^2x-q^{-2}xz^2}{q^2-q^{-2}}=z. \end{equation} \end{lemma} \begin{proof} We verify the equation on the left in (\ref{eq:x2y}). In the equation on the left in (\ref{eq:uqrels}), multiply each term on the left by $x$ to get \begin{equation} \label{eq:xxy} \frac{qx^2y-q^{-1}xyx}{q-q^{-1}}=x. \end{equation} Also, in the equation on the left in (\ref{eq:uqrels}), multiply each term on the right by $x$ to get \begin{equation} \label{eq:yxx} \frac{qxyx-q^{-1}yx^2}{q-q^{-1}}=x. \end{equation} Now in (\ref{eq:xxy}), eliminate $xyx$ using (\ref{eq:yxx}) to obtain the equation on the left in (\ref{eq:x2y}). The remaining equations in (\ref{eq:x2y}) are similarly verified. \end{proof} \begin{lemma} \label{lem:xy2} Assume that $q^4 \not=1$. Then in $U_q({\mathfrak{sl}_2})$, \begin{equation} \label{eq:xy2} \frac{q^2xy^2-q^{-2}y^2x}{q^2-q^{-2}}=y, \qquad \frac{q^2yz^2-q^{-2}z^2y}{q^2-q^{-2}}=z, \qquad \frac{q^2zx^2-q^{-2}x^2z}{q^2-q^{-2}}=x. \end{equation} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:x2y}. \end{proof} \begin{lemma} \label{lem:xyzUQ} Assume that $q^4\not=1$. 
Then in $U_q({\mathfrak{sl}_2})$, \begin{eqnarray} -\frac{\Lambda}{q+q^{-1}} &=& \frac{q^2xyz-q^{-2}zyx}{q^2-q^{-2}}-x-z \label{eq:casxyz} \\ &=& \frac{q^2yzx-q^{-2}xzy}{q^2-q^{-2}}-y-x \label{eq:casyzx} \\ &=& \frac{q^2zxy-q^{-2}yxz}{q^2-q^{-2}}-z-y. \label{eq:caszxy} \end{eqnarray} \end{lemma} \begin{proof} We verify (\ref{eq:casxyz}). We have \begin{eqnarray*} \Lambda = qx+q^{-1}y+qz-qxyz, \end{eqnarray*} so \begin{eqnarray} q\Lambda = q^2x+y+q^2z-q^2xyz. \label{eq:cas1} \end{eqnarray} We have \begin{eqnarray*} \Lambda= q^{-1}x+qy+q^{-1}z-q^{-1}zyx, \end{eqnarray*} so \begin{eqnarray} \label{eq:cas2} q^{-1}\Lambda= q^{-2}x+y+q^{-2}z-q^{-2}zyx. \end{eqnarray} Subtract (\ref{eq:cas2}) from (\ref{eq:cas1}) and simplify to get (\ref{eq:casxyz}). The equations (\ref{eq:casyzx}), (\ref{eq:caszxy}) are similarly verified. \end{proof} \begin{definition} \label{def:UqMock} \rm Let $U^E_q(\mathfrak{sl}_2)$ denote the $\mathbb F$-algebra with generators $x,y,z$ and relations \begin{eqnarray} &&\frac{qx^2y-q^{-1}yx^2}{q-q^{-1}}=-x, \qquad \frac{qy^2z-q^{-1}zy^2}{q-q^{-1}}=-y, \qquad \frac{qz^2x-q^{-1}xz^2}{q-q^{-1}}=-z, \nonumber \\ &&\frac{qxy^2-q^{-1}y^2x}{q-q^{-1}}=-y, \qquad \frac{qyz^2-q^{-1}z^2y}{q-q^{-1}}=-z, \qquad \frac{qzx^2-q^{-1}x^2z}{q-q^{-1}}=-x, \nonumber \\ && \frac{qxyz-q^{-1}zyx}{q-q^{-1}}+x+z = \frac{qyzx-q^{-1}xzy}{q-q^{-1}}+y+x = \frac{qzxy-q^{-1}yxz}{q-q^{-1}}+z+y. \qquad \quad \label{eq:omeg} \end{eqnarray} We call $U^E_q(\mathfrak{sl}_2)$ the {\it extended $U_q(\mathfrak{sl}_2)$} algebra. Let $\Omega$ denote the common value of (\ref{eq:omeg}). \end{definition} \begin{lemma} \label{lem:omeg} The element $\Omega$ from Definition \ref{def:UqMock} is central in $U^E_q(\mathfrak{sl}_2)$. \end{lemma} \begin{proof} Using the relations from Definition \ref{def:UqMock}, one checks that $\Omega$ commutes with each generator $x,y,z$ of $U^E_q(\mathfrak{sl}_2)$. 
\end{proof} \begin{lemma} Assume that $q^4 \not=1$ and there exists $i \in \mathbb F$ such that $i^2=-1$. Then there exists an $\mathbb F$-algebra homomorphism $U^E_{q^2}(\mathfrak{sl}_2) \to U_q(\mathfrak{sl}_2)$ that sends \begin{eqnarray*} x \mapsto ix, \qquad \quad y \mapsto iy, \qquad \quad z \mapsto iz, \qquad \quad \Omega \mapsto \frac{i \Lambda}{q+q^{-1}}. \end{eqnarray*} \end{lemma} \begin{proof} Compare the relations in Lemmas \ref{lem:x2y}--\ref{lem:xyzUQ} with the relations in Definition \ref{def:UqMock}. \end{proof} \begin{proposition} Let $A,B,C$ denote an LR triple contained in ${\rm NBNG}_d(\mathbb F; q^{-2})$. Then the underlying vector space $V$ becomes a $U^E_{q}(\mathfrak{sl}_2)$-module on which \begin{eqnarray} A = x, \qquad \qquad B = y, \qquad \qquad C = z. \label{eq:NBNGxyz} \end{eqnarray} The $U^E_{q}(\mathfrak{sl}_2)$-module $V$ is irreducible. On the $U^E_{q}(\mathfrak{sl}_2)$-module $V$, \begin{eqnarray} \label{eq:Omegaval} \Omega = \frac{(q^{d/2}-q^{-d/2})(q^{1+d/2}-q^{-1-d/2})}{q-q^{-1}} I. \end{eqnarray} \end{proposition} \begin{proof} To get the first assertion and (\ref{eq:Omegaval}), compare the relations in Definition \ref{def:UqMock} with the relations for ${\rm NBNG}_d(\mathbb F; q^{-2})$ given in Proposition \ref{prop:ABCRel}. The $U^E_{q}(\mathfrak{sl}_2)$-module $V$ is irreducible by (\ref{eq:NBNGxyz}) and Lemma \ref{lem:ABCGEN}. \end{proof} \noindent We are done discussing the LR triples contained in ${\rm NBNG}_d(\mathbb F; q^{-2})$. \noindent Until further notice, let $A,B,C$ denote an LR triple that is contained in ${\rm B}_d(\mathbb F;q^{-2}, \rho_0, \rho'_0, \rho''_0)$. Let $V$ denote the underlying vector space, and let $J$ denote the projector. \begin{definition} \label{def:xyzBq} \rm Define $X,Y,Z$ in ${\rm End}(V)$ such that for $0 \leq i \leq d$, $X-q^{d/2-i}I$ (resp. $Y-q^{d/2-i}I$) (resp. $Z-q^{d/2-i}I$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp.
$(A,B)$-decomposition) of $V$. Note that each of $X,Y,Z$ is invertible. \end{definition} \noindent Recall the idempotent data (\ref{eq:idseq}) for $A,B,C$. \begin{lemma} The elements $X,Y,Z$ from Definition \ref{def:xyzBq} satisfy \begin{eqnarray*} X = \sum_{i=0}^d q^{d/2-i} E'_i, \qquad \qquad Y = \sum_{i=0}^d q^{d/2-i} E''_i, \qquad \qquad Z = \sum_{i=0}^d q^{d/2-i} E_i. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:xyzBq} and the meaning of the idempotent data. \end{proof} \begin{lemma} \label{eq:XYZformBq} The elements $X,Y,Z$ from Definition \ref{def:xyzBq} satisfy \begin{eqnarray*} && X=(q^{-d/2}I-B C q^{1-d/2}(q-q^{-1}) \rho_0')J + (q^{1+d/2}I-B C q^{d/2}(q-q^{-1})/\rho_0')(I-J), \\ && Y=(q^{-d/2}I-C A q^{1-d/2}(q-q^{-1}) \rho_0'')J + (q^{1+d/2}I-C A q^{d/2}(q-q^{-1})/\rho_0'')(I-J), \\ && Z=(q^{-d/2}I-A B q^{1-d/2}(q-q^{-1}) \rho_0)J + (q^{1+d/2}I-A B q^{d/2}(q-q^{-1})/\rho_0)(I-J). \end{eqnarray*} \end{lemma} \begin{proof} To verify the first equation, work with the matrices in ${\rm Mat}_{d+1}(\mathbb F)$ that represent $B,C,J,X$ with respect to a $(B,C)$-basis for $V$. For $B,C$ these matrices are given in Proposition \ref{prop:matrixRep}. For $J$ this matrix is given in Lemma \ref{lem:JMat}. For $X$ this matrix is diagonal, with $(i,i)$-entry $q^{d/2-i}$ for $0 \leq i \leq d$. The other two equations are similarly verified. \end{proof} \begin{lemma} \label{lem:XYZBq} The elements $X,Y,Z$ from Definition \ref{def:xyzBq} satisfy \begin{eqnarray*} \frac{qXY-q^{-1}YX}{q-q^{-1}} = I, \quad \qquad \frac{qYZ-q^{-1}ZY}{q-q^{-1}} = I, \quad \qquad \frac{qZX-q^{-1}XZ}{q-q^{-1}} = I. \end{eqnarray*} \end{lemma} \begin{proof} To verify these equations, eliminate $X,Y,Z$ using Lemma \ref{eq:XYZformBq}, and evaluate the result using the relations for ${\rm B}_d(\mathbb F;q^{-2},\rho_0, \rho'_0, \rho''_0)$ given in Proposition \ref{prop:ABCRelBIP}. 
\end{proof} \begin{proposition} \label{prop:Bq} Let $A,B,C$ denote an LR triple contained in ${\rm B}_d(\mathbb F;q^{-2},\rho_0,\rho'_0,\rho''_0)$. Then there exists a unique $U_q(\mathfrak{sl}_2)$-module structure on the underlying vector space $V$, such that for $0 \leq i \leq d$, $x-q^{d/2-i}1$ (resp. $y-q^{d/2-i}1$) (resp. $z-q^{d/2-i}1$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. For the $U_q(\mathfrak{sl}_2)$-module $V$ the subspaces $V_{\rm out}$ and $V_{\rm in}$ are irreducible $U_q(\mathfrak{sl}_2)$-submodules. On the $U_q(\mathfrak{sl}_2)$-module $V$, \begin{eqnarray} \label{eq:ABCBqnuxyz} A^2 = n_x, \qquad \qquad B^2 = n_y, \qquad \qquad C^2 = n_z. \end{eqnarray} \end{proposition} \begin{proof} The $U_q(\mathfrak{sl}_2)$-module structure exists, by Lemma \ref{lem:XYZBq} and since $Y$ is invertible. The $U_q(\mathfrak{sl}_2)$-module structure is unique by construction. On the $U_q(\mathfrak{sl}_2)$-module $V$ we have $x=X$, $y=Y$, $z=Z$. To verify (\ref{eq:ABCBqnuxyz}), eliminate $n_x$, $n_y$, $n_z$ using Definition \ref{def:nxnynz}, and evaluate the result using Lemma \ref{eq:XYZformBq} along with the relations for ${\rm B}_d(\mathbb F;q^{-2},\rho_0,\rho'_0,\rho''_0)$ given in Proposition \ref{prop:ABCRelBIP}. By construction $V_{\rm out}$ and $V_{\rm in}$ are $U_q(\mathfrak{sl}_2)$-submodules of $V$. By Lemmas \ref{lem:A2B2C2Out}, \ref{lem:A2B2C2In} the 3-tuple $A^2,B^2,C^2$ acts on $V_{\rm out}$ and $V_{\rm in}$ as an LR triple. By these comments along with (\ref{eq:ABCBqnuxyz}) and Lemma \ref{lem:ABCGEN}, we find that the $U_q(\mathfrak{sl}_2)$-submodules $V_{\rm out}$ and $V_{\rm in}$ are irreducible. \end{proof} \noindent We mention some additional relations that hold on the $U_q(\mathfrak{sl}_2)$-module $V$ from Proposition \ref{prop:Bq}. These relations may be of independent interest. 
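\noindent As a computational aside, identities such as (\ref{eq:casxyz}) can be verified mechanically. The following sketch (in Python with SymPy; an illustration of ours, not part of the formal development) encodes the two expressions for $\Lambda$ used in the proof of (\ref{eq:casxyz}) as noncommutative polynomials, and checks that $q\Lambda-q^{-1}\Lambda$ agrees with (\ref{eq:casxyz}) after clearing the denominator $q^2-q^{-2}$.

```python
import sympy as sp

q = sp.symbols('q', positive=True)
x, y, z = sp.symbols('x y z', commutative=False)

# The two expressions for Lambda used in the proof of (eq:casxyz)
Lam1 = q*x + y/q + q*z - q*x*y*z   # Lambda = q x + q^{-1} y + q z - q xyz
Lam2 = x/q + q*y + z/q - z*y*x/q   # Lambda = q^{-1} x + q y + q^{-1} z - q^{-1} zyx

# (eq:casxyz), multiplied through by -(q^2 - q^{-2}) to clear the denominator:
# -(q - q^{-1}) Lambda = q^2 xyz - q^{-2} zyx - (q^2 - q^{-2})(x + z)
rhs_cleared = q**2*x*y*z - z*y*x/q**2 - (q**2 - q**-2)*(x + z)

# q*Lam1 - Lam2/q equals (q - q^{-1}) Lambda, so the sum below must vanish
diff = sp.expand(q*Lam1 - Lam2/q + rhs_cleared)
assert diff == 0
```

Since $x,y,z$ are declared noncommutative, SymPy preserves the order of the monomials $xyz$ and $zyx$, so the cancellation above reproduces exactly the subtraction step in the proof.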
\begin{lemma} \label{lem:MoreBp} For the $U_q(\mathfrak{sl}_2)$-module $V$ from Proposition \ref{prop:Bq}, we have \begin{eqnarray*} && xB = qBx, \qquad \qquad yC=qCy, \qquad \qquad zA = qAz, \\ && yA = q^{-1} A y, \qquad \quad zB = q^{-1}Bz, \qquad \quad xC=q^{-1}Cx \end{eqnarray*} and also \begin{eqnarray*} Jx=xJ, \qquad \qquad Jy=yJ, \qquad \qquad Jz = zJ. \end{eqnarray*} We have \begin{eqnarray*} && \Bigl( AB-\frac{I - q^{d/2}z}{\rho_0 q(q-q^{-1})}\Bigr)J=0, \qquad \qquad \Bigl( BA-\frac{\rho_0 q I - \rho_0 q^{1-d/2}z}{q-q^{-1}}\Bigr)J=0, \\ && \Bigl( BC-\frac{I - q^{d/2}x}{\rho'_0 q(q-q^{-1})}\Bigr)J=0, \qquad \qquad \Bigl( CB-\frac{\rho'_0 q I - \rho'_0 q^{1-d/2}x}{q-q^{-1}}\Bigr)J=0, \\ && \Bigl( CA-\frac{I - q^{d/2}y}{\rho''_0 q(q-q^{-1})}\Bigr)J=0, \qquad \qquad \Bigl( AC-\frac{\rho''_0 q I - \rho''_0 q^{1-d/2}y}{q-q^{-1}}\Bigr)J=0 \end{eqnarray*} and also \begin{eqnarray*} && \Bigl( AB-\frac{\rho_0 q I - \rho_0 q^{-d/2}z}{q-q^{-1}}\Bigr)(I-J)=0, \qquad \qquad \Bigl( BA-\frac{ I - q^{1+d/2}z}{\rho_0 q (q-q^{-1})}\Bigr)(I-J)=0, \\ && \Bigl( BC-\frac{\rho'_0 q I - \rho'_0 q^{-d/2}x}{q-q^{-1}}\Bigr)(I-J)=0, \qquad \qquad \Bigl( CB-\frac{ I - q^{1+d/2}x}{\rho'_0 q (q-q^{-1})}\Bigr)(I-J)=0, \\ && \Bigl( CA-\frac{\rho''_0 q I - \rho''_0 q^{-d/2}y}{q-q^{-1}}\Bigr)(I-J)=0, \qquad \qquad \Bigl( AC-\frac{ I - q^{1+d/2}y}{\rho''_0 q (q-q^{-1})}\Bigr)(I-J)=0. 
\end{eqnarray*} We have \begin{eqnarray*} && (Ax -Bq^{-d/2}\rho_0 - C q^{d/2}/\rho''_0)J=0, \qquad \quad (xA -Bq^{1-d/2}\rho_0 - C q^{d/2-1}/\rho''_0)J=0, \\ && (By -Cq^{-d/2}\rho'_0 - A q^{d/2}/\rho_0)J=0, \qquad \quad (yB -Cq^{1-d/2}\rho'_0 - A q^{d/2-1}/\rho_0)J=0, \\ && (Cz -Aq^{-d/2}\rho''_0 - B q^{d/2}/\rho'_0)J=0, \qquad \quad (zC -Aq^{1-d/2}\rho''_0 - B q^{d/2-1}/\rho'_0)J=0 \end{eqnarray*} and also \begin{eqnarray*} && J(Ax -Bq^{d/2-1}/\rho_0 - C q^{1-d/2}\rho''_0)=0, \qquad \quad J(xA -Bq^{d/2}/\rho_0 - C q^{-d/2}\rho''_0)=0, \\ && J(By -Cq^{d/2-1}/\rho'_0 - A q^{1-d/2}\rho_0)=0, \qquad \quad J(yB -Cq^{d/2}/\rho'_0 - A q^{-d/2}\rho_0)=0, \\ && J(Cz -Aq^{d/2-1}/\rho''_0 - B q^{1-d/2}\rho'_0)=0, \qquad \quad J(zC -Aq^{d/2}/\rho''_0 - B q^{-d/2}\rho'_0)=0. \end{eqnarray*} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:XYZBq}. \end{proof} \noindent We are done discussing the LR triples contained in ${\rm B}_d(\mathbb F;q^{-2},\rho_0,\rho'_0,\rho''_0)$. \noindent For the rest of this section, assume that ${\rm Char}(\mathbb F)\not=2$. We now recall the Lie algebra $\mathfrak{sl}_2$ and its equitable basis. \begin{definition}\rm \label{def:sl2A} {\rm (See \cite[Lemma~3.2]{ht}.)} Let $\mathfrak{sl}_2$ denote the Lie algebra over $\mathbb F$ with basis $x,y,z$ and Lie bracket \begin{equation} \label{eq:uqrelsq1} \lbrack{ x,y}\rbrack = 2 {x}+2{ y}, \qquad \lbrack{ y,z}\rbrack = 2{ y}+2 { z}, \qquad \lbrack{ z,x}\rbrack = 2{ z}+2{ x}. \end{equation} \end{definition} \noindent Until further notice let $A,B,C$ denote an LR triple that is contained in ${\rm NBG}_d(\mathbb F;1)$. Let $V$ denote the underlying vector space. \begin{definition} \label{def:XYZsl2} \rm Define $X,Y,Z$ in ${\rm End}(V)$ such that for $0 \leq i \leq d$, $X-(2i-d)I$ (resp. $Y-(2i-d)I$) (resp. $Z-(2i-d)I$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. 
\end{definition} \noindent Recall the idempotent data (\ref{eq:idseq}) for $A,B,C$. \begin{lemma} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2} satisfy \begin{eqnarray*} X = \sum_{i=0}^d (2i-d) E'_i, \qquad \qquad Y = \sum_{i=0}^d (2i-d) E''_i, \qquad \qquad Z = \sum_{i=0}^d (2i-d) E_i. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:XYZsl2} and the meaning of the idempotent data. \end{proof} \begin{lemma} \label{lem:XYZsl2A} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2} satisfy \begin{eqnarray*} X = B+C -A, \qquad \quad Y = C+A-B, \qquad \quad Z = A+B-C. \end{eqnarray*} \end{lemma} \begin{proof} To verify the first equation, work with the matrices in ${\rm Mat}_{d+1}(\mathbb F)$ that represent $A,B,C,X$ with respect to a $(B,C)$-basis for $V$. For $A,B,C$ these matrices are given in Proposition \ref{prop:matrixRep}. For $X$ this matrix is diagonal, with $(i,i)$-entry $2i-d$ for $0 \leq i \leq d$. The other two equations are similarly verified. \end{proof} \begin{lemma} \label{lem:sl2moduleExists} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2} satisfy \begin{eqnarray*} \lbrack{ X,Y}\rbrack = 2 {X}+2{Y}, \qquad \quad \lbrack{ Y,Z}\rbrack = 2{ Y}+2 {Z}, \qquad \quad \lbrack{ Z,X}\rbrack = 2{ Z}+2{ X}. \end{eqnarray*} \end{lemma} \begin{proof} To verify these equations, eliminate $X,Y,Z$ using Lemma \ref{lem:XYZsl2A}, and evaluate the result using the relations for ${\rm NBG}_d(\mathbb F;1)$ given in Proposition \ref{prop:ABCRel}. \end{proof} \begin{proposition} \label{prop:mainRESsl2} Let $A,B,C$ denote an LR triple contained in ${\rm NBG}_d(\mathbb F;1)$. Then there exists a unique $\mathfrak{sl}_2$-module structure on the underlying vector space $V$, such that for $0 \leq i \leq d$, $x-(2i-d)1$ (resp. $y-(2i-d)1$) (resp. $z-(2i-d)1$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. The $\mathfrak{sl}_2$-module $V$ is irreducible. 
On the $\mathfrak{sl}_2$-module $V$, \begin{eqnarray} \label{eq:ABCnuxyzsl2} A = (y+z)/2, \qquad \qquad B = (z+x)/2, \qquad \qquad C = (x+y)/2. \end{eqnarray} \end{proposition} \begin{proof} The $\mathfrak{sl}_2$-module structure exists by Lemma \ref{lem:sl2moduleExists}. The $\mathfrak{sl}_2$-module structure is unique by construction. On the $\mathfrak{sl}_2$-module $V$ we have $x=X$, $y=Y$, $z=Z$. To verify (\ref{eq:ABCnuxyzsl2}), eliminate $x,y,z $ using Lemma \ref{lem:XYZsl2A}, and evaluate the result using the relations for ${\rm NBG}_d(\mathbb F;1)$ given in Proposition \ref{prop:ABCRel}. The $\mathfrak{sl}_2$-module $V$ is irreducible by (\ref{eq:ABCnuxyzsl2}) and Lemma \ref{lem:ABCGEN}. \end{proof} \noindent We are done discussing the LR triples contained in ${\rm NBG}_d(\mathbb F;1)$. \noindent For the rest of this section let $A,B,C$ denote an LR triple that is contained in ${\rm B}_d(\mathbb F;1,\rho_0,\rho'_0,\rho''_0)$. Let $V$ denote the underlying vector space, and let $J$ denote the projector. \begin{definition} \label{def:XYZsl2Bip} \rm Define $X,Y,Z$ in ${\rm End}(V)$ such that for $0 \leq i \leq d$, $X-(i-d/2)I$ (resp. $Y-(i-d/2)I$) (resp. $Z-(i-d/2)I$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. \end{definition} \noindent Recall the idempotent data (\ref{eq:idseq}) for $A,B,C$. \begin{lemma} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2Bip} satisfy \begin{eqnarray*} X = \sum_{i=0}^d (i-d/2) E'_i, \qquad \qquad Y = \sum_{i=0}^d (i-d/2) E''_i, \qquad \qquad Z = \sum_{i=0}^d (i-d/2) E_i. \end{eqnarray*} \end{lemma} \begin{proof} By Definition \ref{def:XYZsl2Bip} and the meaning of the idempotent data. 
\end{proof} \begin{lemma} \label{lem:XYZsl2ABip} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2Bip} satisfy \begin{eqnarray*} && X = \Bigl(2 \rho_0' B C + \frac{d}{2}I \Bigr)J + \Bigl( \frac{2}{\rho_0'} B C - \frac{d+2}{2} I \Bigr) (I-J), \\ && Y = \Bigl(2 \rho_0'' C A + \frac{d}{2}I\Bigr)J + \Bigl( \frac{2}{\rho_0''} C A - \frac{d+2}{2} I \Bigr) (I-J), \\ && Z = \Bigl(2 \rho_0 A B + \frac{d}{2}I \Bigr)J + \Bigl( \frac{2}{\rho_0} A B- \frac{d+2}{2} I \Bigr) (I-J). \end{eqnarray*} \end{lemma} \begin{proof} To verify the first equation, work with the matrices in ${\rm Mat}_{d+1}(\mathbb F)$ that represent $B,C,J,X$ with respect to a $(B,C)$-basis for $V$. For $B,C$ these matrices are given in Proposition \ref{prop:matrixRep}. For $J$ this matrix is given in Lemma \ref{lem:JMat}. For $X$ this matrix is diagonal, with $(i,i)$-entry $i-d/2$ for $0 \leq i \leq d$. The other two equations are similarly verified. \end{proof} \begin{lemma} \label{lem:sl2moduleExistsBip} The elements $X,Y,Z$ from Definition \ref{def:XYZsl2Bip} satisfy \begin{eqnarray*} \lbrack{ X,Y}\rbrack = 2 {X}+2{Y}, \qquad \quad \lbrack{ Y,Z}\rbrack = 2{ Y}+2 {Z}, \qquad \quad \lbrack{ Z,X}\rbrack = 2{ Z}+2{ X}. \end{eqnarray*} \end{lemma} \begin{proof} To verify these equations, eliminate $X,Y,Z$ using Lemma \ref{lem:XYZsl2ABip}, and evaluate the result using the relations for ${\rm B}_d(\mathbb F;1,\rho_0,\rho'_0,\rho''_0)$ given in Proposition \ref{prop:ABCRelBIP}. \end{proof} \begin{proposition} \label{prop:mainRESsl2Bip} Let $A,B,C$ denote an LR triple contained in ${\rm B}_d(\mathbb F;1,\rho_0,\rho'_0,\rho''_0)$. Then there exists a unique $\mathfrak{sl}_2$-module structure on the underlying vector space $V$, such that for $0 \leq i \leq d$, $x-(i-d/2)1$ (resp. $y-(i-d/2)1$) (resp. $z-(i-d/2)1$) vanishes on component $i$ of the $(B,C)$-decomposition (resp. $(C,A)$-decomposition) (resp. $(A,B)$-decomposition) of $V$. 
For the $\mathfrak{sl}_2$-module $V$ the subspaces $V_{\rm out}$ and $V_{\rm in}$ are irreducible $\mathfrak{sl}_2$-submodules. On the $\mathfrak{sl}_2$-module $V$, \begin{eqnarray} \label{eq:ABCnuxyzsl2Bip} A^2 = (y+z)/2, \qquad \qquad B^2 = (z+x)/2, \qquad \qquad C^2 = (x+y)/2. \end{eqnarray} \end{proposition} \begin{proof} Similar to the proof of Proposition \ref{prop:Bq}. \end{proof} \noindent We mention some additional relations that hold on the $\mathfrak{sl}_2$-module $V$ from Proposition \ref{prop:mainRESsl2Bip}. These relations may be of independent interest. \begin{lemma} \label{lem:BipIndep} For the $\mathfrak{sl}_2$-module $V$ from Proposition \ref{prop:mainRESsl2Bip}, we have \begin{eqnarray*} \lbrack A,z\rbrack = A, \qquad \qquad \lbrack B,x\rbrack = B, \qquad \qquad \lbrack C,y\rbrack = C \end{eqnarray*} and also \begin{eqnarray*} \lbrack J,x\rbrack=0, \qquad \qquad \lbrack J,y\rbrack=0, \qquad \qquad \lbrack J,z\rbrack=0. \end{eqnarray*} \noindent We have \begin{eqnarray*} && \Bigl( AB - \frac{2 z - d }{4\rho_0} \Bigr) J=0, \qquad \qquad \Bigl( B A - \frac{\rho_0(2 z + d )}{4}\Bigr)J=0, \\ && \Bigl( BC - \frac{2 x - d }{4\rho'_0}\Bigr) J=0, \qquad \qquad \Bigl( C B - \frac{\rho'_0(2 x + d )}{4} \Bigr)J=0, \\ && \Bigl( CA - \frac{2 y - d }{4\rho''_0} \Bigr) J=0, \qquad \qquad \Bigl( A C - \frac{\rho''_0(2 y + d )}{4} \Bigr)J=0 \end{eqnarray*} and also \begin{eqnarray*} && \Bigl(A B - \frac{\rho_0(2 z + d+2)}{4} \Bigr)(I-J)=0, \qquad \qquad \Bigl( B A - \frac{2 z - d-2}{4 \rho_0} \Bigr)(I-J)=0, \\ && \Bigl(B C - \frac{\rho'_0(2 x + d+2)}{4} \Bigr)(I-J)=0, \qquad \qquad \Bigl( C B - \frac{2 x - d-2}{4 \rho'_0} \Bigr)(I-J)=0, \\ && \Bigl(C A - \frac{\rho''_0(2 y + d+2)}{4} \Bigr)(I-J)=0, \qquad \qquad \Bigl( A C - \frac{2 y - d-2}{4 \rho''_0} \Bigr)(I-J)=0. \end{eqnarray*} \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{lem:sl2moduleExistsBip}. 
\end{proof} \section{Three characterizations of an LR triple} \noindent Throughout this section let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. We characterize the LR triples on $V$ in three ways. \noindent A matrix $M \in {\rm Mat}_{d+1}(\mathbb F)$ will be called {\it antidiagonal} whenever $M_{i,j}=0$ for all $i,j$ $(0 \leq i,j\leq d)$ with $i+j \not=d$. Note that the following are equivalent: (i) $M$ is antidiagonal; (ii) ${\bf Z}M$ is diagonal; (iii) $M{\bf Z}$ is diagonal. \begin{theorem} \label{thm:6bases} Suppose we are given six bases for $V$, denoted \begin{eqnarray} \label{eq:6Bases} \mbox{\rm basis 1, $\quad$ basis 2,$\quad$ basis 3, $\quad$ basis 4, $\quad$ basis 5, $\quad$ basis 6}. \end{eqnarray} \noindent Then the following are equivalent: \begin{enumerate} \item[\rm (i)] The transition matrix \centerline{ \begin{tabular}[t]{c|c|c} {\rm from basis} & {\rm to basis} & {\rm is } \\ \hline {\rm 1} & {\rm 2} & {\rm upper triangular Toeplitz} \\ {\rm 2} & {\rm 3} & {\rm antidiagonal} \\ {\rm 3} & {\rm 4} & {\rm upper triangular Toeplitz} \\ {\rm 4} & {\rm 5} & {\rm antidiagonal} \\ {\rm 5} & {\rm 6} & {\rm upper triangular Toeplitz} \\ {\rm 6} & {\rm 1} & {\rm antidiagonal} \\ \end{tabular}} \item[\rm (ii)] There exists an LR triple $A,B,C$ on $V$ for which \centerline{ \begin{tabular}[t]{c|c} {\rm basis} & {\rm has type} \\ \hline {\rm 1} & $(A,C)$ \\ {\rm 2} & $(A,B)$ \\ {\rm 3} & $(B,A)$ \\ {\rm 4} & $(B,C)$ \\ {\rm 5} & $(C,B)$ \\ {\rm 6} & $(C,A)$ \\ \end{tabular}} \end{enumerate} \noindent Suppose {\rm (i), (ii)} hold. Then the LR triple $A,B,C$ is uniquely determined by the sequence {\rm (\ref{eq:6Bases})}. \end{theorem} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ For $1 \leq k \leq 6$ let $\mathcal D_k$ denote the decomposition of $V$ induced by basis $k$. By assumption, the transition matrix from basis 1 to basis 2 is upper triangular and Toeplitz.
For notational convenience denote basis 1 by $\lbrace u_i \rbrace_{i=0}^d$ and basis 2 by $\lbrace v_i \rbrace_{i=0}^d$. By Proposition \ref{prop:Toe} there exists $A \in {\rm End}(V)$ such that \begin{eqnarray} \nonumber &&Au_i = u_{i-1} \qquad (1 \leq i \leq d), \qquad Au_0 =0, \\ \label{eq:whatNeed2} && Av_i = v_{i-1} \qquad (1 \leq i \leq d), \qquad Av_0 =0. \end{eqnarray} Consequently $A$ lowers $\mathcal D_1$ and $\mathcal D_2$. Similarly there exists $B \in {\rm End}(V)$ that lowers $\mathcal D_3$ and $\mathcal D_4$. Also there exists $C \in {\rm End}(V)$ that lowers $\mathcal D_5$ and $\mathcal D_6$. By our assumption concerning the three antidiagonal transition matrices, the decompositions $\mathcal D_2$, $\mathcal D_4$, $\mathcal D_6$ are the inversions of $\mathcal D_3$, $\mathcal D_5$, $\mathcal D_1$, respectively. By these comments $A$, $B$, $C$ raise $\mathcal D_6$, $\mathcal D_2$, $\mathcal D_4$ respectively. Observe that $\mathcal D_2$ is lowered by $A$ and raised by $B$; therefore $A,B$ form an LR pair on $V$. Similarly $\mathcal D_4$ is lowered by $B$ and raised by $C$; therefore $B,C$ form an LR pair on $V$. Also $\mathcal D_6$ is lowered by $C$ and raised by $A$; therefore $C,A$ form an LR pair on $V$. By these comments $A,B,C$ form an LR triple on $V$. We now show that basis 2 is an $(A,B)$-basis of $V$. We just mentioned that $\mathcal D_2$ is lowered by $A$ and raised by $B$. Therefore $\mathcal D_2$ is the $(A,B)$-decomposition of $V$. Now using (\ref{eq:whatNeed2}) and Definition \ref{def:ABbasis}, we see that basis 2 is an $(A,B)$-basis of $V$. We have shown that basis 2 meets the requirements of the table in (ii). One similarly shows that bases 1, 3, 4, 5, 6 meet these requirements. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ By assumption basis 1 is an $(A,C)$-basis of $V$, and basis 2 is an $(A,B)$-basis of $V$. By Lemma \ref{lem:xyz}, the transition matrix from basis 1 to basis 2 is upper triangular and Toeplitz.
By assumption basis 3 is a $(B,A)$-basis of $V$. So the inversion of basis 3 is an inverted $(B,A)$-basis of $V$. By Lemma \ref{lem:Dmeaning}, the transition matrix from basis 2 to the inversion of basis 3 is diagonal. Recall that $\bf Z$ is the transition matrix between basis 3 and its inversion. By these comments the transition matrix from basis 2 to basis 3 is antidiagonal. The remaining assertions of part (i) are similarly obtained. \\ \noindent Assume that (i), (ii) hold. From the construction of $A$ in the proof of ${\rm (i)}\Rightarrow {\rm (ii)}$ above, we find that $A$ is uniquely determined by the sequence (\ref{eq:6Bases}). Similarly $B$ and $C$ are uniquely determined by the sequence (\ref{eq:6Bases}). \end{proof} \begin{theorem} Suppose we are given six invertible matrices in ${\rm Mat}_{d+1}(\mathbb F)$: \begin{eqnarray} &&D_1,\quad D_2, \quad D_3 \qquad \qquad \mbox{\rm (diagonal)}, \label{eq:D123} \\ &&T_1,\quad T_2, \quad T_3 \qquad \qquad \mbox{\rm (upper triangular Toeplitz).} \label{eq:T123} \end{eqnarray} Then the following {\rm (i), (ii)} are equivalent. \begin{enumerate} \item[\rm (i)] $T_1 D_1 {\bf Z} T_2 D_2 {\bf Z} T_3 D_3 {\bf Z} \in \mathbb F I$. \item[\rm (ii)] There exists an LR triple $A,B,C$ over $\mathbb F$ for which the matrices {\rm (\ref{eq:D123}), (\ref{eq:T123})} are transition matrices of the following kind: \centerline{ \begin{tabular}[t]{c|c|c} {\rm transition matrix} & {\rm from a basis of type} & {\rm to a basis of type} \\ \hline $T_1$ & $(A,C)$ & $(A,B)$ \\ $D_1$ & $(A,B)$ & {\rm inv. $(B,A)$} \\ $T_2$ & $(B,A)$ & $(B,C)$ \\ $D_2$ & $(B,C)$& {\rm inv. $(C,B)$} \\ $T_3$ & $(C,B)$ & $(C,A)$ \\ $D_3$ & $(C,A)$ & {\rm inv. $(A,C)$} \\ \end{tabular}} \end{enumerate} \noindent Suppose {\rm (i), (ii)} hold. Then the LR triple $A,B,C$ is uniquely determined up to isomorphism by the sequence $D_1,D_2,D_3, T_1,T_2,T_3$.
\end{theorem} \begin{proof} ${\rm (i)}\Rightarrow {\rm (ii)}$ We invoke Theorem \ref{thm:6bases}. Multiplying $D_1$ by a nonzero scalar in $\mathbb F$ if necessary, we may assume without loss of generality that $T_1 D_1 {\bf Z} T_2 D_2 {\bf Z} T_3 D_3 {\bf Z} =I$. By linear algebra there exist six bases of $V$ as in (\ref{eq:6Bases}) such that the transition matrix \centerline{ \begin{tabular}[t]{c|c|c} {\rm from basis} & {\rm to basis} & {\rm is } \\ \hline {\rm 1} & {\rm 2} & $T_1$ \\ {\rm 2} & {\rm 3} & $D_1{\bf Z}$ \\ {\rm 3} & {\rm 4} & $T_2$ \\ {\rm 4} & {\rm 5} & $D_2{\bf Z}$ \\ {\rm 5} & {\rm 6} & $T_3$ \\ {\rm 6} & {\rm 1} & $D_3{\bf Z}$ \\ \end{tabular}} \noindent By construction, these six bases satisfy Theorem \ref{thm:6bases}(i). Therefore they satisfy Theorem \ref{thm:6bases}(ii). The LR triple $A,B,C$ mentioned in Theorem \ref{thm:6bases}(ii) satisfies condition (ii) of the present theorem. \\ \noindent ${\rm (ii)}\Rightarrow {\rm (i)}$ Associated with the LR triple $A,B,C$ are the upper triangular Toeplitz matrices $T$, $T'$, $T''$ from Definition \ref{def:TTT}, and the diagonal matrices $D$, $D'$, $D''$ from Definition \ref{def:DDD}. Using Lemma \ref{lem:BAbasisU} and Definition \ref{def:TTT}(ii) we find $T_1 \in \mathbb F T'$. Similarly $T_2 \in \mathbb F T''$ and $T_3 \in \mathbb F T$. Using Lemma \ref{lem:Dmeaning} we find $D_1 \in \mathbb F D$. Similarly $D_2 \in \mathbb F D'$ and $D_3 \in \mathbb F D''$. By these comments and Proposition \ref{prop:12cycle} we obtain $T_1 D_1 {\bf Z} T_2D_2{\bf Z}T_3D_3{\bf Z}\in \mathbb F I$. \\ \noindent Assume that (i), (ii) hold. Consider the matrices $D,D',D''$ and $T,T',T''$ from the proof of ${\rm (ii)}\Rightarrow {\rm (i)}$ above. By Proposition \ref{prop:extra} the LR triple $A,B,C$ is uniquely determined up to isomorphism by the sequence $D,D',D'',T,T',T''$. The matrix $D$ is obtained from $D_1$ by dividing $D_1$ by its $(0,0)$-entry. So $D$ is determined by $D_1$.
The matrices $D',D'',T,T',T''$ are similarly determined by $D_2, D_3, T_3,T_1,T_2$, respectively. By these comments the LR triple $A,B,C$ is uniquely determined up to isomorphism by the sequence $D_1,D_2,D_3, T_1,T_2,T_3$. \end{proof} \begin{theorem} Let $A,B,C \in {\rm End}(V)$. Then $A,B,C$ form an LR triple on $V$ if and only if the following {\rm (i)--(iv)} hold: \begin{enumerate} \item[\rm (i)] each of $A$, $B$, $C$ is Nil; \item[\rm (ii)] the flag $\lbrace A^{d-i}V\rbrace_{i=0}^d$ is raised by $B,C$; \item[\rm (iii)] the flag $\lbrace B^{d-i}V\rbrace_{i=0}^d$ is raised by $C,A$; \item[\rm (iv)] the flag $\lbrace C^{d-i}V\rbrace_{i=0}^d$ is raised by $A,B$. \end{enumerate} \end{theorem} \begin{proof} By Proposition \ref{prop:LRchar} and Definition \ref{def:LRT}. \end{proof} \section{Appendix I: The nonbipartite LR triples in matrix form} \noindent In this section we display the nonbipartite equitable LR triples in matrix form. \noindent Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. Let $A,B,C$ denote a nonbipartite equitable LR triple on $V$, with parameter array (\ref{eq:paLRT}), trace data (\ref{eq:tracedata}), and Toeplitz data (\ref{eq:ToeplitzData}). By Definition \ref{def:equitNorm} we have $\alpha_i = \alpha'_i = \alpha''_i $ for $0 \leq i \leq d$, and by Lemma \ref{lem:equitBasicBeta} we have $\beta_i = \beta'_i = \beta''_i $ for $0 \leq i \leq d$. By (\ref{eq:list3}) we have $\beta_1=-\alpha_1$. By Lemma \ref{lem:equitBasic} we have $\varphi_i = \varphi'_i = \varphi''_i$ for $1 \leq i \leq d$, and $a_i = a'_i = a''_i = \alpha_1 (\varphi_{d-i+1}-\varphi_{d-i})$ for $0 \leq i \leq d$. Recall from Definition \ref{def:NBNorm} that $A,B,C$ is normalized if and only if $\alpha_1=1$. By Proposition \ref{prop:matrixRep} we have the following. 
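\noindent As a quick numerical sanity check on the displays that follow (a sketch of ours, not part of the paper; the size $d=4$ and the values of $\varphi_1,\ldots,\varphi_d$ are arbitrary nonzero choices), one can verify the generic LR-pair features of the $A,B$ matrices in the $(A,B)$-basis display: each is Nil with $(d+1)$st power zero, and both $AB$ and $BA$ are diagonal.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)
phi = rng.uniform(1.0, 2.0, size=d)  # stand-ins for phi_1, ..., phi_d

# A: ones on the superdiagonal; B: phi_i on the subdiagonal
A = np.diag(np.ones(d), k=1)
B = np.diag(phi, k=-1)

# Each of A, B is Nil: the (d+1)st power vanishes but the dth does not
assert np.allclose(np.linalg.matrix_power(A, d + 1), 0)
assert not np.allclose(np.linalg.matrix_power(A, d), 0)
assert np.allclose(np.linalg.matrix_power(B, d + 1), 0)

# AB = diag(phi_1, ..., phi_d, 0) and BA = diag(0, phi_1, ..., phi_d)
assert np.allclose(A @ B, np.diag(np.append(phi, 0.0)))
assert np.allclose(B @ A, np.diag(np.append(0.0, phi)))
```

Only the generic structure of $A$ and $B$ is checked here; the tridiagonal matrix representing $C$ additionally involves the trace data $a_i$ and the ratios $\varphi_{d-i+1}/\varphi_i$ displayed below.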
\noindent With respect to an $(A,B)$-basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1 & 0 & && & \\ & \varphi_2 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } a_0 & \varphi_d/\varphi_1 & && & \bf 0 \\ \varphi_d & a_1 & \varphi_{d-1}/\varphi_2 && & \\ & \varphi_{d-1} & a_2 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi_1/\varphi_d \\ {\bf 0} && & & \varphi_1 & a_d \\ \end{array} \right). \end{eqnarray*} With respect to an inverted $(A,B)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1& 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_d & && & \bf 0 \\ & 0 & \varphi_{d-1} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } a_d & \varphi_1 & && & \bf 0 \\ \varphi_1/\varphi_d & a_{d-1} & \varphi_{2} && & \\ & \varphi_{2}/\varphi_{d-1} & a_{d-2} & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi_d \\ {\bf 0} && & & \varphi_d/\varphi_1 & a_0 \\ \end{array} \right). 
\end{eqnarray*} With respect to a $(B,A)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} && A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_d & 0 & && & \\ & \varphi_{d-1} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } a_d & \varphi_1/\varphi_d & && & \bf 0 \\ \varphi_1 & a_{d-1} & \varphi_{2}/\varphi_{d-1} && & \\ & \varphi_{2} & a_{d-2} & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi_d/\varphi_1 \\ {\bf 0} && & & \varphi_d & a_0 \\ \end{array} \right). \end{eqnarray*} With respect to an inverted $(B,A)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_1 & && & \bf 0 \\ & 0 & \varphi_2 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1& 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } a_0 & \varphi_d & && & \bf 0 \\ \varphi_d/\varphi_1 & a_1 & \varphi_{d-1} && & \\ & \varphi_{d-1}/\varphi_2 & a_2 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi_1\\ {\bf 0} && & & \varphi_1/\varphi_d & a_d \\ \end{array} \right). \end{eqnarray*} \section{Appendix II: The bipartite LR triples in matrix form} \noindent In this section we display the bipartite LR triples in matrix form. \noindent Let $V$ denote a vector space over $\mathbb F$ with dimension $d+1$. 
Let $A,B,C$ denote a bipartite LR triple on $V$, with parameter array (\ref{eq:paLRT}). By Proposition \ref{prop:matrixRep} we have the following. \noindent With respect to an $(A,B)$-basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_1 & 0 & && & \\ & \varphi_2 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_d & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } 0 & \varphi'_d/\varphi_1 & && & \bf 0 \\ \varphi''_d & 0 & \varphi'_{d-1}/\varphi_2 && & \\ & \varphi''_{d-1} & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi'_1/\varphi_d \\ {\bf 0} && & & \varphi''_1 & 0 \\ \end{array} \right). \end{eqnarray*} With respect to an inverted $(A,B)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1& 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_d & && & \bf 0 \\ & 0 & \varphi_{d-1} && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } 0 & \varphi''_1 & && & \bf 0 \\ \varphi'_1/\varphi_d & 0 & \varphi''_{2} && & \\ & \varphi'_{2}/\varphi_{d-1} & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi''_d \\ {\bf 0} && & & \varphi'_d/\varphi_1 & 0\\ \end{array} \right). 
\end{eqnarray*} With respect to a $(B,A)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} && A:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ \varphi_d & 0 & && & \\ & \varphi_{d-1} & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & \varphi_1 & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & 1 & && & \bf 0 \\ & 0 & 1 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & 1 \\ {\bf 0} && & & & 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } 0 & \varphi''_1/\varphi_d & && & \bf 0 \\ \varphi'_1 & 0 & \varphi''_{2}/\varphi_{d-1} && & \\ & \varphi'_{2} & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi''_d/\varphi_1 \\ {\bf 0} && & & \varphi'_d & 0 \\ \end{array} \right). \end{eqnarray*} With respect to an inverted $(B,A)$ basis for $V$ the matrices representing $A,B,C$ are \begin{eqnarray*} &&A:\; \left( \begin{array}{ c c cc c c } 0 & \varphi_1 & && & \bf 0 \\ & 0 & \varphi_2 && & \\ & & 0 & \cdot && \\ & & & \cdot & \cdot & \\ & & & & \cdot & \varphi_d \\ {\bf 0} && & & & 0 \\ \end{array} \right), \qquad \qquad B:\; \left( \begin{array}{ c c cc c c } 0 & & && & \bf 0 \\ 1 & 0 & && & \\ & 1 & 0 & && \\ & & \cdot & \cdot & & \\ & & & \cdot & \cdot & \\ {\bf 0} && & & 1& 0 \\ \end{array} \right), \\ &&C:\; \left( \begin{array}{ c c c c c c } 0 & \varphi'_d & && & \bf 0 \\ \varphi''_d/\varphi_1 & 0 & \varphi'_{d-1} && & \\ & \varphi''_{d-1}/\varphi_2 & 0 & \cdot && \\ & & \cdot & \cdot & \cdot & \\ & & & \cdot & \cdot & \varphi'_1\\ {\bf 0} && & & \varphi''_1/\varphi_d & 0 \\ \end{array} \right). \end{eqnarray*} \section{Acknowledgments} The author thanks Kazumasa Nomura for giving this paper a close reading and offering valuable suggestions. Also, part of this paper was written during the author's visit to the Graduate School of Information Sciences, at Tohoku U. in Japan. 
The visit was from December 20, 2014 to January 15, 2015. The author thanks his hosts Hajime Tanaka and Jae-ho Lee for their kind hospitality and insightful conversations. \noindent Paul Terwilliger \hfil\break \noindent Department of Mathematics \hfil\break \noindent University of Wisconsin \hfil\break \noindent 480 Lincoln Drive \hfil\break \noindent Madison, WI 53706 USA \hfil\break \noindent email: {\tt [email protected] }\hfil\break \end{document}
\begin{document} \twocolumn[ \icmltitle{Replica Conditional Sequential Monte Carlo} \icmlsetsymbol{equal}{*} \begin{icmlauthorlist} \icmlauthor{Alexander Y. Shestopaloff}{ed,turing} \icmlauthor{Arnaud Doucet}{ox,turing} \end{icmlauthorlist} \icmlaffiliation{turing}{The Alan Turing Institute, London, UK} \icmlaffiliation{ox}{Department of Statistics, University of Oxford, Oxford, UK} \icmlaffiliation{ed}{School of Mathematics, University of Edinburgh, Edinburgh, UK} \icmlcorrespondingauthor{Alexander Y. Shestopaloff}{[email protected]} \icmlkeywords{Markov Chain Monte Carlo, State Space Models} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} We propose a Markov chain Monte Carlo (MCMC) scheme to perform state inference in non-linear non-Gaussian state-space models. Current state-of-the-art methods to address this problem rely on particle MCMC techniques and their variants, such as the iterated conditional Sequential Monte Carlo (cSMC) scheme, which uses a Sequential Monte Carlo (SMC) type proposal within MCMC. A deficiency of standard SMC proposals is that they only use observations up to time $t$ to propose states at time $t$, even when the entire observation sequence is available. More sophisticated SMC proposals based on lookahead techniques could be used, but they can be difficult to put into practice. We propose here replica cSMC, where we build SMC proposals for one replica using information from the entire observation sequence by conditioning on the states of the other replicas. This approach is easily parallelizable and we demonstrate its excellent empirical performance when compared to the standard iterated cSMC scheme at fixed computational complexity. \end{abstract} \section{Introduction} We consider discrete-time state-space models.
They can be described by a latent Markov process $(X_{t})_{t\ge1}$ and an observation process $(Y_{t})_{t\ge1}$, $(X_{t},Y_{t})$ being $\mathcal{X}\times\mathcal{Y}$-valued, which satisfy $X_{1}\sim\mu(\cdot)$ and \begin{equation} X_{t+1}|\{X_{t}=x\}\sim f(\cdot|x)\qquad Y_{t}|\{X_{t}=x\}\sim g(\cdot|x) \end{equation} for $t\ge1$. Our goal is to sample from the posterior distribution of the latent states $X_{1:T}:=\left(X_{1},...,X_{T}\right)$ given a realization of the observations $Y_{1:T}=y_{1:T}$. This distribution admits a density given by \begin{equation} p(x_{1:T}|y_{1:T})\propto\mu(x_{1})g(y_{1}|x_{1})\prod_{t=2}^{T}f(x_{t}|x_{t-1})g(y_{t}|x_{t}). \end{equation} This sampling problem is now commonly addressed using an MCMC scheme known as the iterated cSMC sampler \cite{Andrieu_Doucet_Holenstein_2010} and extensions of it; see, e.g., \cite{ShestopaloffNeal2018}. This algorithm relies on an SMC-type proposal mechanism. A limitation of these algorithms is that they typically use data only up to time $t$ to propose candidate states at time $t$, whereas the entire sequence $y_{1:T}$ is observed in the context we are interested in. To address these issues, various lookahead techniques have been proposed in the SMC literature; see \cite{Chen2013} for a review. Alternative approaches relying on a parametric approximation of the backward information filter used for smoothing in state-space models \cite{Briers2010} have also been recently proposed in \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017}. When applicable, these iterative methods have demonstrated good performance. However, it is unclear how these ideas could be adapted to the MCMC framework investigated here. Additionally, these methods are difficult to put into practice for multimodal posterior distributions.
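To make the setup concrete, here is a minimal Python sketch of a toy instance of this model class and of the unnormalized log posterior above; the Gaussian choices of $\mu$, $f$, $g$ and all parameter values are illustrative assumptions, not the models used later in the paper.

```python
import numpy as np

def simulate(T, rng, phi=0.9, sd_x=1.0, sd_y=0.5):
    """Toy instance of the state-space model: X_1 ~ N(0, 1),
    X_t | X_{t-1}=x ~ N(phi*x, sd_x^2), Y_t | X_t=x ~ N(x, sd_y^2)."""
    x = np.empty(T)
    x[0] = rng.normal()
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sd_x * rng.normal()
    y = x + sd_y * rng.normal(size=T)
    return x, y

def log_norm(z, mean, sd):
    return -0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((z - mean) / sd) ** 2

def log_posterior(x, y, phi=0.9, sd_x=1.0, sd_y=0.5):
    """Unnormalized log p(x_{1:T} | y_{1:T}):
    log mu(x_1) + sum_t log f(x_t | x_{t-1}) + sum_t log g(y_t | x_t)."""
    lp = log_norm(x[0], 0.0, 1.0)
    lp += np.sum(log_norm(x[1:], phi * x[:-1], sd_x))
    lp += np.sum(log_norm(y, x, sd_y))
    return lp

rng = np.random.default_rng(0)
x, y = simulate(100, rng)
```

The true latent path should score higher than an arbitrary path under this density, which is what MCMC schemes targeting it exploit.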
In this paper, we propose a novel approach to building cSMC proposals that take all of the observed data into account, based on conditioning on replicas of the state variables. Our approach is based purely on Monte Carlo sampling, bypassing any need for approximating functions in the estimate of the backward information filter. The rest of this paper is organized as follows. In Section \ref{sec:Iterated-Conditional-Sequential}, we review the iterated cSMC algorithm and outline its limitations. Section \ref{sec:Replica-Iterated-Conditional} introduces the replica iterated cSMC methodology. In Section \ref{sec:Examples}, we demonstrate the methodology on a linear Gaussian model, two non-Gaussian state space models from \cite{ShestopaloffNeal2018}, as well as the Lorenz-96 model from \cite{Heng2017}. \section{Iterated cSMC\label{sec:Iterated-Conditional-Sequential}} The iterated cSMC sampler is an MCMC method for sampling from a target distribution of density $\pi\left(x_{1:T}\right):=\pi_T\left(x_{1:T}\right)$. It relies on a modified SMC scheme targeting a sequence of auxiliary target probability densities $\{\pi_t\left(x_{1:t}\right)\}_{t=1,...,T-1}$ and a sequence of proposal densities $q_{1}\left(x_{1}\right)$ and $q_{t}(x_{t}|x_{t-1})$ for $t\in\{2,...,T\}$. These target densities are such that $\pi_t(x_{1:t})/\pi_{t-1}(x_{1:t-1})\propto \beta_t(x_{t-1},x_t)$. \subsection{Algorithm} We define the `incremental importance weights' for $t\geq2$ as \begin{align} w_{t}(x_{t-1},x_{t})&:=\frac{\pi_t\left(x_{1:t}\right)}{\pi_{t-1}\left(x_{1:t-1}\right)q_{t}(x_{t}|x_{t-1})}\propto\frac{\beta_t(x_{t-1},x_t)}{q_{t}(x_{t}|x_{t-1})} \label{eq:incrementalweight} \end{align} and for $t=1$ as \begin{equation} w_{1}(x_{0},x_{1}):=\frac{\pi_1(x_{1})}{q_{1}(x_{1})}. \end{equation} \begin{algorithm}[t] \protect\caption{Iterated cSMC kernel $K\left(x_{1:T},x'_{1:T}\right)$~\label{alg:CSMC}} cSMC step.
\begin{enumerate} \item \textsf{At time} $t=1$ \begin{enumerate} \item \textsf{Sample $b_{1}$ uniformly on $[N]$ and set} $x_{1}^{b_{1}}=x_{1}.$ \item \textsf{For }$i\in\left[N\right]\backslash\{b_{1}\}$, \textsf{sample} $x_{1}^{i}\sim q_{1}\left(\cdot\right)$. \item \textsf{Compute} $w_{1}(x_{0}, x_{1}^{i})$ for $i\in\left[N\right]$. \end{enumerate} \item \textsf{At times} $t=2,\ldots,T$ \begin{enumerate} \item \textsf{Sample $b_{t}$ uniformly on $[N]$ and set} $x_{t}^{b_{t}}=x_{t}$. \item \textsf{For }$i\in\left[N\right]\backslash\{b_{t}\}$, \textsf{sample }\\$a_{t-1}^{i}\sim$ Cat$\{ w_{t-1}(x_{t-2}^{a_{t-2}^{j}},x_{t-1}^{j});j\in[N]\}$. \item \textsf{For }$i\in\left[N\right]\backslash\{b_{t}\}$, \textsf{sample }$x_{t}^{i}\sim q_{t}(\left.\cdot\right\vert x_{t-1}^{a_{t-1}^{i}})$\textsf{.} \item \textsf{Compute} $w_{t}(x_{t-1}^{a_{t-1}^{i}}, x_{t}^{i})$ for $i\in\left[N\right]$. \end{enumerate} \end{enumerate} Backward sampling step. \begin{enumerate} \item \textsf{At time} $t=T$ \begin{enumerate} \item \textsf{Sample }$b_{T}\sim$ Cat$\{w_{T}(x_{T-1}^{a_{T-1}^{j}},x_{T}^{j});j\in[N]\}$. \end{enumerate} \item \textsf{At times} $t=T-1,...,1$ \begin{enumerate} \item \textsf{Sample }$b_{t}\sim$ \\ Cat$\{\beta_{t+1}(x_t^j, x_{t+1}^{b_{t+1}})w_{t}(x_{t-1}^{a_{t-1}^{j}},x_{t}^{j});j\in[N]\}$. \end{enumerate} \end{enumerate} Output $x'_{1:T}=x_{1:T}^{b_{1:T}}:=\left(x_{1}^{b_{1}},\ldots,x_{T}^{b_{T}}\right)$. \end{algorithm} We introduce a dummy variable $x_{0}$ to simplify notation. We let $N\geq2$ be the number of particles used by the algorithm and $[N]:=\{1,...,N\}$.
We introduce the notation $\mathbf{x}_{t}=\left(x_{t}^{1},\ldots,x_{t}^{N}\right)\in\mathcal{X}^{N},$ $\mathbf{a}_{t}=\left(a_{t}^{1},\ldots,a_{t}^{N}\right)\in\left\{ 1,\ldots,N\right\} ^{N}$, $\mathbf{x}_{1:T}=(\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{T}),$ $\mathbf{a}_{1:T-1}=(\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{T-1})$ and $\mathbf{x}_{t}^{-b_{t}}=\mathbf{x}_{t}\backslash x_{t}^{b_{t}}$, $\mathbf{x}_{1:T}^{-b_{1:T}}=\left\{ \mathbf{x}_{1}^{-b_{1}},\ldots,\mathbf{x}_{T}^{-b_{T}}\right\} $, $\mathbf{a}_{t-1}^{-b_{t}}=\mathbf{a}_{t-1}\backslash a_{t-1}^{b_{t}}$, $\mathbf{a}_{1:T-1}^{-b_{2:T}}=\left\{ \mathbf{a}_{1}^{-b_{2}},\ldots,\mathbf{a}_{T-1}^{-b_{T}}\right\} $ and set $b_{t}=a_{t}^{b_{t+1}}$ for $t=1,...,T-1.$ \\ It can be shown that the iterated cSMC kernel, described in Algorithm \ref{alg:CSMC}, leaves $\pi(x_{1:T})$ invariant. Given the current state $x_{1:T}$, the cSMC step introduced in \cite{Andrieu_Doucet_Holenstein_2010} samples from the following distribution \begin{align} \Phi(\left.\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\right\vert x_{1:T}^{b_{1:T}},b_{1:T})=\delta_{x_{1:T}}\left(x_{1:T}^{b_{1:T}}\right)\notag \\ \times {\displaystyle \prod\limits _{i=1,i\neq b_{1}}^{N}}q_1\left(x_{1}^{i}\right)\,{\displaystyle \prod\limits _{t=2}^{T}}\thinspace{\displaystyle \prod\limits _{i=1,i\neq b_{t}}^{N}}\lambda(\left.a_{t-1}^{i},x_{t}^{i}\right\vert \mathbf{x}_{t-1}),\label{eq:CPF} \end{align} where \begin{align} \lambda\left(\left.a_{t-1}^{i}=k,x_{t}^{i}\right\vert \mathbf{x}_{t-1}\right)&=\frac{w_{t-1}(x_{t-2}^{a_{t-2}^{k}},x_{t-1}^{k})}{\sum_{j=1}^{N}w_{t-1}(x_{t-2}^{a_{t-2}^{j}},x_{t-1}^{j})}~ \notag \\ &\times q_{t}(\left.x_{t}^{i}\right\vert x_{t-1}^{k}). \end{align} This can be combined with a backward sampling step introduced in \cite{Whiteley2010}; see \cite{Finke2016,ShestopaloffNeal2018} for a detailed derivation.
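As a concrete illustration of the cSMC sweep just described, the following Python sketch implements the forward pass for a toy scalar Gaussian model with the bootstrap proposal $q_t = f$, so the incremental weight reduces to $g(y_t|x_t)$. The reference path is pinned in the last particle slot (a common simplification; the algorithm above samples the slots $b_t$ uniformly) and ancestral tracing is used in place of the backward sampling step, purely for brevity.

```python
import numpy as np

def csmc_sweep(y, x_ref, N, rng, phi=0.9, sd_x=1.0, sd_y=0.5):
    """One conditional-SMC sweep for a toy model X_t = phi*X_{t-1} + N(0, sd_x^2),
    Y_t = X_t + N(0, sd_y^2), with bootstrap proposal q_t = f, so the
    incremental weight reduces to g(y_t | x_t). The retained reference path
    occupies slot N-1; the output path is chosen by ancestral tracing rather
    than backward sampling, for brevity."""
    T = len(y)
    X = np.empty((T, N))
    A = np.zeros((T, N), dtype=int)            # ancestor indices a_{t-1}^i
    logw = np.empty((T, N))
    X[0] = rng.normal(0.0, 1.0, size=N)
    X[0, N - 1] = x_ref[0]                     # condition on the reference path
    logw[0] = -0.5 * ((y[0] - X[0]) / sd_y) ** 2
    for t in range(1, T):
        w = np.exp(logw[t - 1] - logw[t - 1].max())
        A[t] = rng.choice(N, size=N, p=w / w.sum())      # multinomial resampling
        X[t] = phi * X[t - 1, A[t]] + sd_x * rng.normal(size=N)
        X[t, N - 1] = x_ref[t]                 # keep the reference particle
        A[t, N - 1] = N - 1
        logw[t] = -0.5 * ((y[t] - X[t]) / sd_y) ** 2
    w = np.exp(logw[-1] - logw[-1].max())
    b = rng.choice(N, p=w / w.sum())           # sample b_T, then trace ancestry
    path = np.empty(T)
    for t in range(T - 1, -1, -1):
        path[t] = X[t, b]
        b = A[t, b]
    return path

rng = np.random.default_rng(1)
y = rng.normal(size=50)
path = csmc_sweep(y, x_ref=np.zeros(50), N=20, rng=rng)
```

Iterating this kernel, feeding each output path back in as the next reference path, gives the iterated cSMC sampler.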
It can be shown that the combination of these two steps defines a Markov kernel that preserves the following extended target distribution \begin{align} \gamma(\mathbf{x}_{1:T},\mathbf{a}_{1:T-1}^{-b_{2:T}},b_{1:T}):=\frac{\pi(x_{1:T}^{b_{1:T}})}{N^{T}} \notag \\ \times \ \Phi(\left.\mathbf{x}_{1:T}^{-b_{1:T}},\mathbf{a}_{1:T-1}^{-b_{2:T}}\right\vert x_{1:T}^{b_{1:T}},b_{1:T})\label{eq:extended} \end{align} as its invariant distribution. In particular, it follows that if $x_{1:T}\sim \pi$ then $x'_{1:T}\sim \pi$. The algorithm is described in Algorithm 1, where we use the notation Cat$\{c_i;i\in[N]\}$ to denote the categorical distribution with probabilities $p_i\propto c_i$. Iterated cSMC has been widely adopted for state space models, i.e. when the target is $\pi(x_{1:T})=p(x_{1:T}|y_{1:T})$. The default sequence of auxiliary targets one uses is $\pi_t(x_{1:t})=p(x_{1:t}|y_{1:t})$ for $t=1,...,T-1$, resulting in the incremental importance weights \begin{equation} w_{t}(x_{t-1},x_{t})\propto\frac{f(x_{t}|x_{t-1})g(y_{t}|x_{t})}{q_{t}(x_{t}|x_{t-1})}\label{eq:incrementalweight-2} \end{equation} for $t \geq 2$ and \begin{equation} w_{1}(x_{0},x_{1})\propto\frac{\mu(x_{1})g(y_{1}|x_{1})}{q_{1}(x_{1})} \end{equation} for $t=1$. Typically, we attempt to select a proposal which minimizes the variance of the incremental weight, which at time $t \geq 2$ is $q_{t}^{\mathrm{opt}}(x_{t}|x_{t-1})=p(x_{t}|x_{t-1},y_{t})\propto g(y_{t}|x_{t})f(x_{t}|x_{t-1})$ or an approximation of it. \subsection{Limitations of Iterated cSMC} When using the default sequence of auxiliary targets for state space models, iterated cSMC does not exploit a key feature of the problem at hand. The cSMC step typically uses a proposal at time $t$ that only relies on the observation $y_{t}$, i.e. $q_{t}(x_{t}|x_{t-1})=p\left(x_{t}|x_{t-1},y_{t}\right)$, as it targets at time $t$ the posterior density $p\left(x_{1:t}|y_{1:t}\right)$.
In high dimensions and/or in the presence of highly informative observations, the discrepancy between successive posterior densities $\{p\left(x_{1:t}|y_{1:t}\right)\}_{t\geq1}$ will be high. Consequently, the resulting importance weights $\{w_{t}(x_{t-1}^{a_{t-1}^{i}},x_{t}^{i});i\in[N]\}$ will have high variance and the resulting procedure will be inefficient. Ideally, one would like to use the sequence of marginal smoothing densities as auxiliary densities, that is $\pi_t(x_{1:t})=p\left(x_{1:t}|y_{1:T}\right)$ for $t=1,...,T-1$. Unfortunately, this is not possible as $p\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t-1}\right)p\left(y_{t:T}|x_{t}\right)$ cannot be evaluated pointwise up to a normalizing constant. To address this problem in a standard SMC framework, recent contributions \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017} construct an analytical approximation $\hat{p}\left(y_{t:T}|x_{t}\right)$ of the backward information filter $p\left(y_{t:T}|x_{t}\right)$ based on an iterative particle mechanism and target instead $\{\hat{p}\left(x_{1:t}|y_{1:T}\right)\}_{t\geq1}$, where $\hat{p}\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t-1}\right)\hat{p}\left(y_{t:T}|x_{t}\right)$, using proposals of the form $q_{t}\left(x_{t}|x_{t-1}\right)\propto f\left(x_{t}|x_{t-1}\right)\hat{p}\left(y_{t:T}|x_{t}\right)$. These methods can perform well, but they require a careful design of the analytical approximation and are difficult to put into practice for multimodal posteriors. Additionally, it is unclear how these could be adapted to an iterated cSMC framework without introducing any bias.
Versions of iterated cSMC using an independent approximation to the backward information filter based on Particle Efficient Importance Sampling \cite{ScharthKohn2016} have been proposed \cite{GrotheKleppeLiesenfeld}, though they still require a choice of analytical approximation and use a global approximation to the backward information filter, which can become inefficient in high-dimensional state scenarios. \section{Replica Iterated cSMC\label{sec:Replica-Iterated-Conditional}} We introduce a way to directly use the iterated cSMC algorithm to target a sequence of approximations $\{\hat{p}\left(x_{1:t}|y_{1:T}\right)\}_{t\geq1}$ to the marginal smoothing densities of a state space model. Our proposed method is based on sampling from a target over multiple copies of the space, as done in, for instance, the Parallel Tempering or Ensemble MCMC \cite{Neal2011} approaches. However, unlike in these techniques, we use copies of the space to define a sequence of intermediate distributions in the cSMC step informed by the whole dataset. This enables us to draw samples of $X_{1:T}$ that incorporate information about all of the observed data. Related recent work includes \cite{Leimkuhler2018}, where information sharing amongst an ensemble of replicas is used to improve MCMC proposals. \subsection{Algorithm} We start by defining the replica target for some $K\geq2$ by \begin{align} \bar{\pi}(x_{1:T}^{(1:K)})=\prod_{k=1}^{K}p(x_{1:T}^{(k)}|y_{1:T}). \end{align} Each of the replicas $x_{1:T}^{(k)}$ is updated in turn by running Algorithm \ref{alg:CSMC} with a different sequence of intermediate targets, which we describe here. Consider updating replica $k$ and let $\hat{p}^{(k)}(y_{t+1:T}|x_{t})$ be an estimator of the backward information filter, built using the replicas other than the $k$-th one, $x_{t+1}^{(-k)}=(x_{t+1}^{(1)},\ldots,x_{t+1}^{(k-1)},x_{t+1}^{(k+1)},\ldots,x_{t+1}^{(K)}).$ For convenience of notation, we take $\hat{p}^{(k)}(y_{T+1:T}|x_{T}):=1$.
At time $t$, the cSMC targets an approximation of the marginal smoothing distribution $p\left(x_{1:t}|y_{1:T}\right)$, as in \cite{ScharthKohn2016,Guarniero_Lee_Johansen_2017,Ruiz_Kappen_2017,Heng2017}. This approximation is of the form $\hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)\propto p\left(x_{1:t}|y_{1:t}\right)\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$. This means that the cSMC for replica $k$ uses the novel incremental weights at time $t\geq2$ \begin{align} w_{t}^{\left(k\right)}(x_{t-1},x_{t}) &:= \frac{\hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)}{\hat{p}^{(k)}\left(x_{1:t-1}|y_{1:T}\right)q_{t}(x_{t}|x_{t-1})} \\&\propto\frac{g(y_{t}|x_{t})f(x_{t}|x_{t-1})\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)}{\hat{p}^{(k)}\left(y_{t:T}|x_{t-1}\right)q_{t}(x_{t}|x_{t-1})} \notag \end{align} and $w_{1}^{\left(k\right)}(x_{0},x_{1})\propto g(y_{1}|x_{1})\mu(x_{1})\hat{p}^{(k)}(y_{2:T}|x_{1})/q_{1}(x_{1})$. We would like to use the proposal minimizing the variance of the incremental weight, which at time $t\geq2$ is $q_{t}^{\mathrm{opt}}(x_{t}|x_{t-1})\propto g(y_{t}|x_{t})f(x_{t}|x_{t-1})\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$, or an approximation of it. The full replica cSMC update for $\bar{\pi}$ is described in Algorithm \ref{alg:replica-CSMC} and is simply an application of Algorithm \ref{alg:CSMC} to a sequence of target densities for each replica. A proof of the validity of the algorithm is provided in the Supplementary Material. \begin{algorithm} \protect\caption{Replica cSMC update~\label{alg:replica-CSMC}} For $k=1,\ldots,K$ \begin{enumerate} \item \textsf{Build an approximation $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ of $p\left(y_{t+1:T}|x_{t}\right)$ using the replicas $(x_{t+1}^{(1)'},\ldots,x_{t+1}^{(k-1)'},x_{t+1}^{(k+1)},\ldots,x_{t+1}^{(K)})$ \hspace{-0.2cm}for $t=1,...,T-1$}.
\item \textsf{Run Algorithm \ref{alg:CSMC} with target $\pi(x_{1:T}) = p(x_{1:T}|y_{1:T})$ and auxiliary targets $\pi_{t}(x_{1:t}) = \hat{p}^{(k)}\left(x_{1:t}|y_{1:T}\right)$ for $t = 1,\ldots, T-1$ with initial state $x_{1:T}^{(k)}$ to return $x_{1:T}^{(k')}$}. \end{enumerate} Output $x_{1:T}^{(1:K)'}$. \end{algorithm} One sensible way to initialize the replicas is to set them to sequences sampled from standard independent SMC passes. This will start the Markov chain not too far from equilibrium. For multimodal distributions, initialization is particularly crucial, since we need to ensure that different replicas are well-distributed amongst the various modes at the start of the run. \subsection{Setup and Tuning\label{subsec:Setup-and-Tuning}} The replica cSMC sampler requires an estimator $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ of the backward information filter based on $x_{t+1}^{(-k)}$. For our algorithm, we propose an estimator $\hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)$ that is not based on any analytical approximation of $p\left(y_{t+1:T}|x_{t}\right)$ but simply on a Monte Carlo approximation built using the other replicas, \begin{equation} \hat{p}^{(k)}\left(y_{t+1:T}|x_{t}\right)\propto\sum_{j\neq k}\frac{f(x_{t+1}^{\left(j\right)}|x_{t})}{p(x_{t+1}^{\left(j\right)}|y_{1:t})},\label{eq:approxMonteCarlobackward} \end{equation} where $p\left(x_{t+1}|y_{1:t}\right)$ denotes the predictive density of $x_{t+1}$. The rationale for this approach is that at equilibrium the components of $x_{t+1}^{(-k)}$ are an iid sample from a product of $K-1$ copies of the smoothing density, $p\left(x_{t+1}|y_{1:T}\right)$. Therefore, as $K$ increases, (\ref{eq:approxMonteCarlobackward}) converges to \begin{align} &\int\frac{f\left(x_{t+1}|x_{t}\right)}{p\left(x_{t+1}|y_{1:t}\right)}p\left(x_{t+1}|y_{1:T}\right)dx_{t+1} \notag \\ & \propto\int f\left(x_{t+1}|x_{t}\right)p\left(y_{t+1:T}|x_{t+1}\right)dx_{t+1} \notag \\ & =p\left(y_{t+1:T}|x_{t}\right). 
\label{eq:backwardFilter} \end{align} In practice, the predictive density is also unknown and we need to use an approximation of it. Whichever approximation $\hat{p}\left(x_{t+1}|y_{1:t}\right)$ of $p\left(x_{t+1}|y_{1:t}\right)$ we use, the algorithm remains valid. We note that for $K = 2$, any approximation of the predictive density results in the same incremental importance weights. We propose to approximate the predictive density in (\ref{eq:backwardFilter}) by a constant over the entire latent space, i.e. $\hat{p}(x_{t+1}|y_{1:t}) = 1$. We justify this choice as follows. If we assume that we have informative observations, which is typical in many state space modelling scenarios, then $p(x_{t+1}|y_{1:T})$ will tend to be much more concentrated than $p(x_{t+1}|y_{1:t})$. Thus, over the region where the posterior has high density, the predictive density will be approximately constant relative to the posterior density. This suggests approximating the predictive density in (\ref{eq:backwardFilter}) by its mean with respect to the posterior density, \begin{align} &\int\frac{f\left(x_{t+1}|x_{t}\right)}{p\left(x_{t+1}|y_{1:t}\right)}p\left(x_{t+1}|y_{1:T}\right)dx_{t+1} \notag \\ &\approx \frac{\int f\left(x_{t+1}|x_{t}\right)p\left(x_{t+1}|y_{1:T}\right)dx_{t+1}}{\int p\left(x_{t+1}|y_{1:t}\right)p\left(x_{t+1}|y_{1:T}\right)dx_{t+1}} \notag \\ &\approx \frac{\frac{1}{K}\sum_{k=1}^{K} f(x_{t+1}^{(k)}|x_{t})}{\frac{1}{K}\sum_{k=1}^{K} p(x_{t+1}^{(k)}|y_{1:t})}. \label{eq:ConstPred} \end{align} Since the importance weights in cSMC at each time are defined up to a constant, sampling is not affected by the specific value of $\frac{1}{K}\sum_{k=1}^{K} p(x_{t+1}^{(k)}|y_{1:t})$. Therefore, in computations it can simply be set to an arbitrary value, which is what we do.
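A sketch of the resulting replica lookahead weight, i.e. the estimator (\ref{eq:approxMonteCarlobackward}) with the constant predictive approximation; the Gaussian AR(1) transition $f$ and all parameter values are illustrative assumptions.

```python
import numpy as np

def log_backward_est(x_cand, x_next_replicas, k, phi=0.9, sd_x=1.0):
    """log of hat{p}^{(k)}(y_{t+1:T} | x_t) up to an additive constant, with the
    predictive density approximated by a constant, so that
    hat{p}^{(k)} is proportional to sum_{j != k} f(x_{t+1}^{(j)} | x_t).
    x_cand: candidate states x_t, shape (N,);
    x_next_replicas: replica states x_{t+1}^{(1:K)}, shape (K,).
    The Gaussian AR(1) transition f is an illustrative assumption."""
    others = np.delete(x_next_replicas, k)                   # x_{t+1}^{(-k)}
    # (K-1, N): log f from each candidate to each other replica's next state
    logf = -0.5 * ((others[:, None] - phi * x_cand[None, :]) / sd_x) ** 2
    m = logf.max(axis=0)
    return m + np.log(np.exp(logf - m).sum(axis=0))          # log-sum-exp over j

x_cand = np.linspace(-2.0, 2.0, 5)
lw = log_backward_est(x_cand, np.array([0.5, -0.3, 1.0, 0.2]), k=0)
```

Candidates whose transition density covers the other replicas' time-$(t+1)$ states receive higher lookahead weight, which is exactly how information from future observations enters the proposal.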
We note that while the asymptotic argument does not hold for the estimator in (\ref{eq:ConstPred}), when the variance of the predictive density is greater than the variance of the posterior density, we expect the estimators in (\ref{eq:approxMonteCarlobackward}) and (\ref{eq:ConstPred}) to be close for any finite $K$. An additional benefit of approximating the predictive density by a constant is a reduction in the variance of the mixture weights in (\ref{eq:approxMonteCarlobackward}). To see why this can be the case, consider the following example. Suppose the predictive density of $x_{t+1}$ is $\mathcal{N}(\mu,\sigma_{0}^{2})$ and the posterior density is $\mathcal{N}(0,\sigma_{1}^{2})$, where $\sigma_{1}^{2} < \sigma_{0}^{2}$. Computing the variance of the mixture weight, we get \begin{align} &\textnormal{Var}\biggl(\frac{1}{p(x_{t+1}|y_{1:t})}\biggr) \notag \\ & = \frac{2\pi\sigma_{0}^{2}}{\sqrt{2\sigma_{1}^{2}\nu_{1}}}\exp\biggl[\mu^{2}\biggl(\frac{1}{\sigma_{0}^{2}}+\frac{1}{(\sigma_{0}^{2})^{2}\nu_{1}}\biggr)\biggr] \notag \\ & - \frac{2\pi\sigma_{0}^{2}}{\sigma_{1}^{2}\nu_{2}}\exp\biggl[\mu^{2}\biggl(\frac{1}{\sigma_{0}^{2}}+\frac{1}{(\sigma_{0}^{2})^{2}\nu_{2}}\biggr)\biggr], \end{align} where \begin{equation} \nu_{1}=\biggl(\frac{1}{2\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\biggr) \qquad \nu_{2}=\biggl(\frac{1}{\sigma_{1}^{2}}-\frac{1}{\sigma_{0}^{2}}\biggr), \end{equation} and the variance is finite provided $\sigma_{0}^{2}>2\sigma_{1}^{2}$, so that $\nu_{1}>0$. From this we can see that the variance increases exponentially with the squared difference of the predictive and posterior means, $\mu^{2}$. As a result, we can get outliers in the mixture weight distribution. If this happens, many of the replicas will end up having low weights in the mixture. This will reduce the effective number of replicas used. Using a constant approximation will weight all of the replicas uniformly, and allow us to construct better proposals, as illustrated in Section \ref{subsec:A-Linear-Gaussian}.
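This variance blow-up is easy to check numerically. The sketch below uses assumed toy values $\sigma_{0}=2$, $\sigma_{1}=1$ (so that $\sigma_{0}^{2}>2\sigma_{1}^{2}$ and the variance is finite) and estimates $\textnormal{Var}(1/p(x_{t+1}|y_{1:t}))$ by Monte Carlo.

```python
import numpy as np

def weight_variance(mu, sigma0=2.0, sigma1=1.0, n=200_000, seed=0):
    """Empirical Var(1/p(x)) with x drawn from the posterior N(0, sigma1^2)
    and p the predictive N(mu, sigma0^2) density; finite here since
    sigma0^2 > 2*sigma1^2 (toy values, assumed for illustration)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma1, size=n)
    pred = np.exp(-0.5 * ((x - mu) / sigma0) ** 2) / (sigma0 * np.sqrt(2 * np.pi))
    return float(np.var(1.0 / pred))

v0, v3 = weight_variance(0.0), weight_variance(3.0)
print(v0, v3)  # the variance grows sharply with the mean mismatch mu
```

Even a moderate mean mismatch inflates the weight variance by orders of magnitude, which is why the constant approximation tends to make better use of the available replicas.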
A natural extension of the proposed method is to update some of the replicas with updates other than replica cSMC. Samples from these replicas can then be used in estimates of the backward information filter when doing a replica cSMC update. This makes it possible to parallelize the method, at least to some extent. For instance, one possibility is to do parallel independent cSMC updates on some of the replicas. Performing updates other than replica cSMC on some of the replicas can be useful in multimodal scenarios. If all replicas are located in an isolated mode, and the replica cSMC updates use an estimate of the backward information filter based on replicas in that mode, then the overall Markov chain will tend not to transition well to other modes. Using samples from other types of updates in the estimate of the backward information filter can help counteract this effect by making transitions to other high-density regions possible. \section{Examples\label{sec:Examples}} We consider four models to illustrate the performance of our method. In all examples, we assume that the model parameters are known. The first is a simple linear Gaussian model. We use this model to demonstrate that it is sensible to use a constant approximation to the predictive density in our estimator of the backward information filter. We also use the linear Gaussian model to better understand the accuracy and performance of replica cSMC. The second model, from \cite{ShestopaloffNeal2018}, demonstrates that our proposed replica cSMC method is competitive with existing state-of-the-art methods at drawing latent state sequences in a unimodal context. The third model, also from \cite{ShestopaloffNeal2018}, demonstrates that by updating some replica coordinates with a standard iterated cSMC kernel, our method is able to efficiently handle multimodal sampling without the use of specialized ``flip'' updates.
The fourth model is the Lorenz-96 model from \cite{Heng2017}, which has very low observation noise, making it a challenging case for standard iterated cSMC. To do our computations, we used MATLAB on an OS X system, running on an Intel Core i5 1.3 GHz CPU. As a performance metric for the sampler, we used autocorrelation time, which is a measure of approximately how many steps of an MCMC chain are required to obtain the equivalent of one independent sample. The autocorrelation time is estimated based on a set of runs as follows. First, we estimate the overall mean using all of the runs. Then, we use this overall mean to estimate autocovariances for each of the runs. The autocovariance estimates are then averaged and used to estimate the autocorrelations $\hat{\rho}_{k}$. The autocorrelation time is then estimated as $1+2\sum_{m=1}^{M}\hat{\rho}_{m}$ where $M$ is chosen such that for $m>M$ the autocorrelations are approximately $0$. Code to reproduce the experiments is provided \href{https://github.com/ayshestopaloff/replicacsmc}{here}. \subsection{A Linear Gaussian Model\label{subsec:A-Linear-Gaussian}} Let $X_{t}=(X_{1,t}, \ldots,X_{d,t})'$ for $t=1, \ldots, T$. 
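The autocorrelation-time estimate described above can be sketched as follows; the $0.05$ threshold used to pick the cutoff $M$ is an assumed rule, not necessarily the authors' exact choice.

```python
import numpy as np

def act(runs):
    """Autocorrelation time from several runs of the same chain, as described
    above: overall mean across all runs, per-run autocovariances about that
    mean, averaged and normalized into rho_m, then 1 + 2*sum_{m=1}^{M} rho_m
    with M the first lag where rho drops below 0.05 (assumed cutoff rule).
    runs: array of shape (n_runs, n_iter)."""
    runs = np.asarray(runs, dtype=float)
    n_runs, n = runs.shape
    mean = runs.mean()                         # overall mean from all runs
    acov = np.zeros(n)
    for r in runs:
        d = r - mean
        for m in range(n):
            acov[m] += np.dot(d[: n - m], d[m:]) / n
    acov /= n_runs                             # average autocovariances
    rho = acov / acov[0]                       # estimated autocorrelations
    below = np.where(rho < 0.05)[0]
    M = int(below[0]) if below.size else n - 1
    return 1.0 + 2.0 * rho[1 : M + 1].sum()

rng = np.random.default_rng(0)
iid = rng.normal(size=(5, 2000))               # white noise: ACT near 1
ar = np.empty((5, 2000))                       # AR(1), phi = 0.9: ACT near 19
for i in range(5):
    z = rng.normal(size=2000)
    ar[i, 0] = z[0]
    for t in range(1, 2000):
        ar[i, t] = 0.9 * ar[i, t - 1] + z[t]
```

A strongly autocorrelated chain needs many more iterations per effectively independent sample, which is what the adjusted comparisons below account for.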
The latent process for this model is defined as $X_{1} \sim \mathcal{N}(0,\Sigma_{1})$, $X_{t}|\{X_{t-1}=x_{t-1}\} \sim \mathcal{N}(\Phi x_{t-1},\Sigma)$ for $t=2,\ldots,T$, where \begin{eqnarray*} \Phi & = & \setlength{\arraycolsep}{1pt} \begin{pmatrix}\phi_{1} & 0 & \cdots & 0\\ 0 & \phi_{2} & \ddots & \vdots\\ \vdots & \ddots & \phi_{d-1} & 0\\ 0 & \cdots & 0 & \phi_{d} \end{pmatrix}, \quad \Sigma = \setlength{\arraycolsep}{1pt} \begin{pmatrix}1 & \rho & \cdots & \rho\\ \rho & 1 & \ddots & \vdots\\ \vdots & \ddots & 1 & \rho\\ \rho & \cdots & \rho & 1 \end{pmatrix},\\ \Sigma_{1} & = & \setlength{\arraycolsep}{1pt} \begin{pmatrix}\sigma^{2}_{1,1} & \rho \sigma_{1,1} \sigma_{1,2}& \cdots & \rho \sigma_{1,1}\sigma_{1,d}\\ \rho \sigma_{1,2} \sigma_{1,1} & \sigma^{2}_{1,2} & \ddots & \vdots\\ \vdots & \ddots & \sigma^{2}_{1,d-1} & \rho \sigma_{1,d-1} \sigma_{1,d}\\ \rho \sigma_{1,d} \sigma_{1,1} & \cdots & \rho \sigma_{1,d} \sigma_{1,d-1} & \sigma^{2}_{1,d} \end{pmatrix}, \end{eqnarray*} with $\sigma^{2}_{1,i} = 1/(1-\phi_{i}^{2})$ for $i=1,\ldots,d$. The observations are $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \mathcal{N}(x_{i,t},1)$ for $i=1,\ldots,d$ and $t=1,\ldots,T$. We set $T=250,d=5$ and the model's parameters to $\rho=0.7$ and $\phi_{i}=0.9$ for $i=1,\ldots,d$. We generate a sequence from this model to use for our experiments. Since this is a linear Gaussian model, we are able to compute the predictive density in (\ref{eq:approxMonteCarlobackward}) exactly using a Kalman filter. So for replica $k$, we can use the following importance densities, \begin{align} q_{1}(x_{1}) & \propto\mu(x_{1})\sum_{j\neq k}\frac{f(x_{2}^{(j)}\vert x_{1})}{p(x_{2}^{(j)}|y_{1})},\nonumber \\ q_{t}(x_{t}\vert x_{t-1}) & \propto f(x_{t}\vert x_{t-1})\sum_{j\neq k}\frac{f(x_{t+1}^{(j)}\vert x_{t})}{p(x_{t+1}^{(j)}|y_{1:t})},\nonumber \\ q_{T}(x_{T}|x_{T-1}) & \propto f(x_{T}\vert x_{T-1}),\label{eq:ImportanceDensities} \end{align} where $t=2,\ldots,T-1$. 
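This model is straightforward to simulate. The sketch below builds $\Sigma$ and $\Sigma_{1}$ as specified, using that all $\phi_{i}$ are equal here so that $\Sigma_{1}=\Sigma/(1-\phi^{2})$; it is a toy reimplementation, not the authors' code.

```python
import numpy as np

def simulate_lg(T=250, d=5, rho=0.7, phi=0.9, seed=0):
    """Simulate the linear Gaussian model of this section:
    X_1 ~ N(0, Sigma_1), X_t = Phi X_{t-1} + N(0, Sigma), Y_t = X_t + N(0, I),
    with equicorrelated Sigma and stationary initial covariance Sigma_1."""
    rng = np.random.default_rng(seed)
    Sigma = np.full((d, d), rho)
    np.fill_diagonal(Sigma, 1.0)
    Sigma1 = Sigma / (1.0 - phi**2)     # valid since all phi_i are equal
    L, L1 = np.linalg.cholesky(Sigma), np.linalg.cholesky(Sigma1)
    x = np.empty((T, d))
    x[0] = L1 @ rng.normal(size=d)      # X_1 from the stationary distribution
    for t in range(1, T):
        x[t] = phi * x[t - 1] + L @ rng.normal(size=d)  # Phi = phi * I here
    y = x + rng.normal(size=(T, d))
    return x, y

x, y = simulate_lg()
```

Because the model is linear Gaussian, exact smoothing via a Kalman smoother is available, which is what makes it a convenient benchmark for the checks below.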
Since these densities are Gaussian mixtures, they can be sampled from exactly. However, as pointed out in the previous section, this approach can be inefficient. We will show experimentally that using a constant approximation to the predictive density in (\ref{eq:approxMonteCarlobackward}) actually improves performance. In all experiments, we initialize all replicas to a sample from an independent SMC pass with the same number of particles as used for cSMC updates. Also, the different runs in our experiments use different random number generator seeds. We first check that our replica method produces answers that agree with the posterior mean computed by a Kalman smoother. To do this, we do $10$ replica cSMC runs with $100$ particles and $2$ replicas for $25,000$ iterations, updating each replica conditional on the other. We then look at whether the posterior mean of $x_{i,t}$ computed using a Kalman smoother lies within two standard errors of the overall mean of the $10$ replica cSMC runs. We find this happens for about $91.4\%$ of the $x_{i,t}$. This indicates good agreement between the answers obtained by replica cSMC and the Kalman smoother. Next, we investigate the effect of using more replicas. To do this, we compare replica cSMC using $2$ versus $75$ replicas. We do $5$ runs of each sampler. Both samplers use $100$ particles and we do a total of $5,000$ iterations per run. For the sampler using $75$ replicas, we update replica $1$ at every iteration and replicas $2$ to $75$ in sequence at every $20$-th iteration. For the sampler using $2$ replicas, we update both replicas at every iteration. In both samplers, we update replica $1$ with replica cSMC and the remaining replica(s) with iterated cSMC. After discarding 10\% of each run as burn-in, we use all runs for a sampler to compute autocorrelation time. We can clearly see in Figures \ref{fig:Replica-2} and \ref{fig:Replica-75} that using more replicas improves performance, before adjusting for computation time.
We note that for this simple example, there is no benefit from using replica cSMC with a large number of replicas if we take into account computation time. To check the performance of using the constant approximation versus the exact predictive density, we run replica cSMC with $75$ replicas and the same settings as earlier, except using a constant approximation to the predictive density. Figure \ref{fig:Replica-approx} shows that using a constant approximation to the predictive density results in better performance than using the true predictive density. This is consistent with our discussion in Section \ref{subsec:Setup-and-Tuning}. \begin{figure} \caption{Estimated autocorrelation times for each latent variable. Different coloured lines correspond to different latent state components. The $x$-axis corresponds to different times.}\label{fig:Replica-2}\label{fig:Replica-75}\label{fig:Replica-approx} \end{figure} The linear Gaussian model can also be used to demonstrate that, due to looking ahead, a fixed level of precision can be achieved with far fewer particles with replica cSMC than with standard iterated cSMC. In scenarios where the state is high dimensional and the observations are informative, it is difficult to efficiently sample the variables $x_{i,1}$ with standard iterated cSMC using the initial density as the proposal. We do $20$ runs of $2,500$ iterations of both iterated cSMC with $700$ particles and of replica cSMC with $35$ particles and $2$ replicas, with each replica updated given the other. We then use the runs to estimate the standard error of the overall mean over the $20$ runs. For the variable $x_{1,1}$ sampled with iterated cSMC we estimate the standard error to be approximately $0.0111$, whereas for replica cSMC the estimated standard error is a comparable $0.0081$, achieved using only 5\% of the particles.
Finally, we verify that the proposed method works well on longer time series by running it on the linear Gaussian model but with the length of the observed sequence set to $T=1,500$. We use $2$ replicas, each updated given the other, and do $5$ runs of $5,000$ iterations of the sampler to estimate the autocorrelation time for sampling the latent variables. In Figure \ref{fig:Estimated-autocorrelation-times-1} we can see that the replica cSMC method does not suffer from a decrease in performance when used on longer time series. \begin{figure} \caption{Estimated autocorrelation times for each latent variable. Different coloured lines correspond to different latent state components. The $x$-axis corresponds to different times.}\label{fig:Estimated-autocorrelation-times-1} \end{figure} \subsection{Two Poisson-Gaussian Models} In this example, we consider the two models from \cite{ShestopaloffNeal2018}. Model $1$ uses the same latent process as Section \ref{subsec:A-Linear-Gaussian} with $T=250$, $d=10$ and $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \textnormal{Poisson}(\exp(c+\sigma x_{i,t}))$ for $i=1,\ldots,d$ and $t=1,\ldots,T$, where $c=-0.4$ and $\sigma=0.6$. For Model $2$, we again use the latent process in Section \ref{subsec:A-Linear-Gaussian}, with $T=500,d=15$ and $Y_{i,t}|\{X_{i,t}=x_{i,t}\} \sim \textnormal{Poisson}(\sigma|x_{i,t}|)$ for $i=1,\ldots,d$ and $t=1,\ldots,T$, where $\sigma=0.8$. We assume the observations are independent given the latent states. We generate one sequence of observations from each model. A plot of the simulated data along dimension $i=1$ is shown in Figure \ref{fig:Simulated-data-from}. We set the importance densities $q_{t}$ for the replica cSMC sampler to the same ones as in Section \ref{subsec:A-Linear-Gaussian}, with a constant approximation to the predictive density.
\begin{figure} \caption{Simulated data from the Poisson-Gaussian models.\label{fig:Simulated-data-from}} \end{figure} \subsubsection*{Model 1} We use replica cSMC with $5$ replicas, updating one replica at a time conditional on the others. We start with all sequences initialized to $\mathbf{0}$. We set the number of particles to $200$. We do a total of $5$ runs of the sampler with $5,000$ iterations, each run with a different random number generator seed. Each iteration of replica cSMC takes approximately $0.80$ seconds. We discard 10\% of each run as burn-in. Plots of autocorrelation time comparing replica cSMC to the best method in \cite{ShestopaloffNeal2018} for sampling each of the latent variables are shown in Figure \ref{fig:Model1acf}. The benchmark method takes approximately $0.21$ seconds per iteration. We can see that the proposed replica cSMC method performs relatively well when compared to their best method after adjusting for computation time. The figure for iterated cSMC+Metropolis was reproduced using code available with \cite{ShestopaloffNeal2018}. \begin{figure} \caption{Model 1. Estimated autocorrelation times for each latent variable, adjusted for computation time. Different coloured lines correspond to different latent state components. The $x$-axis corresponds to different times.\label{fig:Model1acf}} \end{figure} \subsubsection*{Model 2} For this model, the challenge is to move between the many different modes of the latent state due to conditioning on $|x_{i,t}|$ in the observation density. The marginal posterior of $x_{i,t}$ has two modes and is symmetric around $0$. Additional modes appear due to uncertainty in the signs of state components. We use a total of $50$ replicas and update $49$ of the $50$ replicas with iterated cSMC and one replica with replica cSMC.
This is done to prevent the Markov chain from getting stuck in a single mode while at the same time enabling the replica cSMC update to use an estimate of the backward information filter based on replicas that are distributed across the state space. We initialize all replicas using sequences drawn from independent SMC passes with $1,000$ particles, and run the sampler for a total of $2,000$ iterations. Both replica cSMC and iterated cSMC updates use $100$ particles. In Figure \ref{fig:Model2trace} we plot every other sample of the same functions of state as in \cite{ShestopaloffNeal2018} for the replica updated with replica cSMC. These are the coordinate $x_{1,300}$, with true value $-1.99$, and the product $x_{3,208}x_{4,208}$, with true value $-4.45$. The first has two well-separated modes and the second is ambiguous with respect to sign. We see that the sampler is able to explore different modes, without requiring any specialized ``flip'' updates or having to use a much larger number of particles, as is the case in \cite{ShestopaloffNeal2018}. We note that the replicas doing iterated cSMC updates tend to get stuck in separate modes for long periods of time, as expected. However, as long as these replicas are well-distributed across the state space and eventually explore it, the bias in the estimate of the backward information filter will be low and will vanish asymptotically. The samples from the replica cSMC update will consequently be a good approximation to samples from the target density. Further improvement of the estimate of the backward information filter based on replicas in multimodal scenarios remains an open problem. \begin{figure} \caption{Trace plots for Model 2.\label{fig:Model2trace}} \end{figure} \subsection{Lorenz-96 Model} Finally, we look at the Lorenz-96 model in a low-noise regime from \cite{Heng2017}.
The state process for this model is the It\^{o} process $\xi(s)=(\xi_{1}(s),\ldots,\xi_{d}(s))$ defined as the weak solution of the stochastic differential equation (SDE) \begin{equation} \textnormal{d}\xi_{i}=(-\xi_{i-1}\xi_{i-2}+\xi_{i-1}\xi_{i+1}-\xi_{i}+\alpha)\textnormal{d}t+\sigma_{f}\textnormal{d}B_{i}\label{eq:Lorenz} \end{equation} for $i=1,\ldots,d$, where indices are computed modulo $d$, $\alpha$ is a forcing parameter, $\sigma_{f}^{2}$ is a noise parameter and $B(s)=(B_{1}(s),\ldots,B_{d}(s))$ is $d$-dimensional standard Brownian motion. The initial condition for the SDE is $\xi(0)\sim\mathcal{N}(\mathbf{0},\sigma_{f}^{2}\mathcal{I}_{d})$. We observe the process on a regular grid of size $h>0$ as $Y_{t}\sim\mathcal{N}(H\xi(th),R)$, where $t=0,\ldots,T$. We assume that the process is only partially observed, with $H_{ii}=1$ for $i=1,\ldots,p$ and all other entries of $H$ equal to $0$, where $p=d-2$. We discretize the SDE (\ref{eq:Lorenz}) by numerically integrating the drift using a fourth-order Runge-Kutta scheme and adding Brownian increments. Let $u$ be the mapping obtained by numerically integrating the drift of (\ref{eq:Lorenz}) on $[0,h]$. This discretization produces a state space model with $X_{1}\sim\mathcal{N}(\mathbf{0},\sigma_{f}^{2}\mathcal{I})$, $X_{t}|\{X_{t-1}=x_{t-1}\}\sim\mathcal{N}(u(x_{t-1}),\sigma_{f}^{2}h\mathcal{I})$ for $t=2,\ldots,T+1$ and $Y_{t}|\{X_{t}=x_{t}\}\sim\mathcal{N}(Hx_{t},R)$ for $t=1,\ldots,T+1$. We set $d=16$, $\sigma_{f}^{2}=10^{-2}$, $R=10^{-3}\mathcal{I}_{p}$ and $\alpha=4.8801$. The process is observed for $10$ time units, which corresponds to $h=0.1$, $T=100$, and a step size of $10^{-2}$ for the Runge-Kutta scheme. A plot of data generated from the Lorenz-96 model along one of the coordinates is shown in Figure \ref{fig:Simulated-data-from-Lorenz-96}.
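The discretization just described can be sketched as follows. This is an illustrative reimplementation under the stated parameter values, not the authors' code; the Runge-Kutta substep count comes from the step size $10^{-2}$ over intervals of length $h=0.1$.

```python
import numpy as np

def drift(x, alpha):
    # Lorenz-96 drift: (x_{i+1} - x_{i-2}) x_{i-1} - x_i + alpha, indices mod d.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + alpha

def rk4_step(x, dt, alpha):
    k1 = drift(x, alpha)
    k2 = drift(x + 0.5 * dt * k1, alpha)
    k3 = drift(x + 0.5 * dt * k2, alpha)
    k4 = drift(x + dt * k3, alpha)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def u(x, h, dt, alpha):
    # Numerically integrate the drift over [0, h] with step size dt.
    for _ in range(int(round(h / dt))):
        x = rk4_step(x, dt, alpha)
    return x

rng = np.random.default_rng(2)
d, T, h, dt = 16, 100, 0.1, 1e-2
sigma_f2, alpha = 1e-2, 4.8801

x = np.sqrt(sigma_f2) * rng.standard_normal(d)  # X_1 ~ N(0, sigma_f^2 I)
traj = [x]
for _ in range(T):
    # X_t | x_{t-1} ~ N(u(x_{t-1}), sigma_f^2 h I): integrate, then add noise.
    x = u(x, h, dt, alpha) + np.sqrt(sigma_f2 * h) * rng.standard_normal(d)
    traj.append(x)
traj = np.array(traj)
print(traj.shape)  # (101, 16)
```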
\begin{figure} \caption{Simulated data from Lorenz-96 model along coordinate $i=1$.\label{fig:Simulated-data-from-Lorenz-96}} \end{figure} We compare the performance of replica cSMC with two replicas, updating each replica conditional on the other, to an iterated cSMC scheme. For iterated cSMC, we use the model's initial density as $q_{1}$ and the model's transition density as $q_{t}$ for $t\geq2$. For replica cSMC, we use the following importance densities for replica $k$, \begin{align} q_{1}(x_{1}) & \propto f(x_{1})\sum_{j\neq k}\phi(x_{1}|x_{2}^{(j)}),\nonumber \\ q_{t}(x_{t}|x_{t-1}) & \propto f(x_{t}|x_{t-1})\sum_{j\neq k}\phi(x_{t}|x_{t+1}^{(j)}),\nonumber \\ q_{T}(x_{T}|x_{T-1}) & \propto f(x_{T}|x_{T-1}),\label{eq:Case1} \end{align} where $t=2,\ldots,T-1$ and $\phi$ is the $p$-dimensional Gaussian density with mean $Hu^{-1}(x_{t+1}^{(j)})$ and variance $\sigma_{f}^{2}h\mathcal{I}_{p}$; that is, the mean is computed by running the Runge-Kutta scheme backward in time starting at the replica state $x_{t+1}^{(j)}$. We initialize the iterated cSMC sampler and each replica in the replica cSMC sampler with a sequence drawn from an independent SMC pass with $3,000$ particles. We run replica cSMC with $200$ particles for $30,000$ iterations ($0.7$ seconds per iteration) and compare to standard iterated cSMC with $600$ particles, which we also run for $30,000$ iterations ($0.7$ seconds per iteration), thus making the computational time equal. Figure \ref{fig:Lorenz-iterated-replica} shows the difference in performance of the two samplers via trace plots of $x_{1,45}$ (true value $-0.23$) from one of the runs, plotting the samples at every $30$th iteration. We can see that replica cSMC performs noticeably better than standard iterated cSMC. \begin{figure} \caption{Lorenz-96 model.
Comparison of standard cSMC and replica cSMC.\label{fig:Lorenz-iterated-replica}} \end{figure} \section{Conclusion} We presented a novel sampler for latent sequences of a non-linear state space model. Our proposed method leads to several questions. The first is whether there are other ways to estimate the predictive density that do not result in mixture weights with high variance. Another is how to develop better guidelines for choosing the number of replicas to use in a given scenario. It would also be interesting to look at applications of replica cSMC in non-time-series examples. Finally, while the proposed method offers an approach for sampling in models with multimodal state distributions, further improvement is needed. \end{document}
\begin{document} \title{On walk-regular graphs and graphs with symmetric hitting times} \begin{abstract} Aldous \cite{AldHit} asked whether every graph in which the distribution of the return time of random walk\ is independent of the starting vertex must be vertex-transitive. We remark that this question can be reduced to a purely graph-theoretic one that had already been answered by Godsil \& McKay \cite{GoMaFea}, and we ask some questions motivated by this. \end{abstract} \section{Walk-regular graphs} Aldous \cite{AldHit} posed the following problem. \begin{problem}[\cite{AldHit}] \label{prAl} If a graph \ensuremath{G\ } satisfies \labtequ{cond}{$Pr_x(Z(n)=x) = Pr_y(Z(n)=y) \text{ for every\ } x,y\in V(G),$} is \ensuremath{G\ } necessarily vertex-transitive? \end{problem} Here, $Pr_x(Z(n)=x)$ denotes the probability that simple random walk\ on \ensuremath{G\ } started at $x$ will be at its starting point $x$ after $n$ steps. \comment{ Similarly, we can ask \begin{problem} \label{prSymH} If $H_{xy} = H_{yx}$ for every\ $x,y\in V(G)$, is \ensuremath{G\ } necessarily vertex-transitive? \end{problem} } A graph satisfying condition \eqref{cond} is necessarily regular, for that condition implies that the expected return time to a vertex $x$ is independent of $x$, and it is known that this time equals $2m/d(x)$ \cite{BW2}. As observed by Aldous \cite{AldHit}, condition \eqref{cond} also implies that for any two vertices $x,y$, if $X$ is the (random) time it takes for random walk\ from $x$ to visit $y$ and, conversely, $Y$ is the (random) time it takes for random walk\ from $y$ to visit $x$, then $X$ and $Y$ have the same distribution.
To see this, note that if $d(x)=d(y)$ and $P$ is an $x$-$y$~walk whose interior does not visit $x$ or $y$, then the probability that random walk\ from $x$ will traverse $P$ in its first $|P|$ steps equals the probability that random walk\ from $y$ will traverse $P$ in the converse direction in its first $|P|$ steps; see \cite{LyonsBook}. This yields another proof of the fact that condition \eqref{cond} implies regularity: note that for any two neighbours $x,y$, the probability that random walk\ from one visits the other in just one step is the reciprocal of the degree. The above remark also implies in particular that if a graph satisfies condition \eqref{cond}, then hitting times are symmetric ---the \defi{hitting time} $H_{xy}$ from $x$ to $y$ is the expected value of the variable $X$ defined above--- that is, $H_{xy}=H_{yx}$ for every\ $x,y\in V(G)$. In this note we show that Problem~\ref{prAl} can be reduced to a purely graph-theoretic question which has long been known to have a negative answer. This yields a negative answer to Problem~\ref{prAl}. A graph \ensuremath{G\ } is called \defi{walk-regular} if for every\ $n\in \ensuremath{\mathbb N}$, the number of closed walks in \ensuremath{G\ } of length $n$ starting at a vertex $x$ is independent of the choice of $x$. \begin{observation}\label{ob} A graph is walk-regular if and only if\ it satisfies \eqref{cond}. \end{observation} \begin{proof} Applying the definition of walk-regular for $n=2$ we see that every walk-regular graph is regular, since the number of closed walks of length $2$ starting at $x$ equals $d(x)$. Recall that any graph that satisfies \eqref{cond} must be regular too. Now note that in a $k$-regular graph, given a closed walk $W$ of length $n$, the probability that the first $n$ steps of a random walk\ coincide with $W$ is $k^{-n}$, and the probability to return to the starting vertex $x$ after $n$ steps is that number multiplied by the number of closed walks of length $n$ starting at $x$.
\end{proof} This reduces Problem~\ref{prAl} to the question of whether every walk-regular graph is vertex-transitive. This is however not the case, as already observed by Godsil \& McKay \cite{GoMaFea}: any distance-regular graph \cite{BrCoNe} is walk-regular, but not necessarily vertex-transitive; in fact, there are many distance-regular graphs that have a trivial automorphism group \cite{CamRan}; see also\\ {\it \small http://mathoverflow.net/questions/106589/is-every-distance-regular-graph-vertex-transitive}. Godsil \& McKay \cite{GoMaFea}\footnote{There is an error in the printed version; see the authors' website.} have constructed a walk-regular graph that is neither vertex-transitive nor distance-regular. \section{Symmetry of hitting times} Let us call a graph \ensuremath{G\ } \defi{reversible} if hitting times are symmetric, that is, if $H_{xy}=H_{yx}$ holds for every\ $x,y\in V(G)$. The discussion above implies that every walk-regular graph is reversible \cite{CTree}. This motivates the following \begin{problem} Is every reversible graph regular? If yes, is it even walk-regular? \end{problem} I suspect that the answer is no. It is shown in \cite{CTree} that a graph is reversible if and only if\ the sum $R_d(v):= \sum_{w\in V(G)} d(w) r(v,w)$ is independent of the choice of the vertex $v$, where $r(v,w)$ is the effective resistance between $v$ and $w$ when \ensuremath{G\ } is considered as an electrical network (with unit resistances). In this case, we can think of $R_d(G):=R_d(v)$ as an invariant of the graph.
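As an illustrative aside (not part of the original note), this reversibility criterion can be checked numerically: effective resistances follow from the Moore-Penrose pseudoinverse of the graph Laplacian, and $R_d(v)$ should be constant on any vertex-transitive (hence reversible) graph, such as the $4$-cycle.

```python
import numpy as np

def effective_resistances(A):
    # r(v, w) = L+[v, v] + L+[w, w] - 2 L+[v, w], where L+ is the
    # Moore-Penrose pseudoinverse of the graph Laplacian L = D - A.
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    dgl = np.diag(Lp)
    return dgl[:, None] + dgl[None, :] - 2 * Lp

# The 4-cycle C_4 is vertex-transitive, hence reversible, so
# R_d(v) = sum_w d(w) r(v, w) must be independent of v.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
r = effective_resistances(A)
Rd = r @ A.sum(axis=1)
print(Rd)  # constant vector: [5. 5. 5. 5.]
```

Here $r=3/4$ between adjacent vertices (a unit edge in parallel with a path of three unit edges) and $r=1$ between opposite vertices, so $R_d(v)=2(3/4+3/4+1)=5$ for every $v$.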
It is natural to consider the normalised version $R_\pi(G):= R_d(G)/2m$, where $m$ is the number of edges of $G$; note that $R_\pi(G)$ is the expected effective resistance between an arbitrary fixed vertex and a random vertex chosen by picking an edge uniformly at random and then one of its endvertices with probability $1/2$. These numbers are always rational because $r(v,w)$ is the solution of a linear system with integer coefficients. This motivates the following problem. \begin{problem} Which (rational) numbers appear as $R_\pi(G)$ for some reversible graph \ensuremath{G}? Are they dense in the positive reals? \end{problem} For the complete graphs $K_n$ an easy calculation yields $R_\pi(K_n)=2(n-1)/n^2$. For the cycles $C_n$ we have $R_\pi(C_n)=\Theta(n)$. This shows that $R_\pi(G)$ can be arbitrarily small or large. It would be interesting to have constructions that combine simple reversible graphs into more complicated ones. The fact that random walk\ on expander graphs has desirable properties (e.g.\ rapid mixing) \cite{LovRan} motivates the following \begin{problem} Construct reversible graphs that are good expanders. \end{problem} \end{document}
\begin{document} \title{The disorder number of a graph} \author{Sela Fried\thanks{A postdoctoral fellow in the Department of Computer Science at the Ben-Gurion University of the Negev, Israel.} \\ \href{mailto:[email protected]}{[email protected]}} \date{} \maketitle \begin{abstract} We define the disorder number of a graph as the maximal length of a walk along the edges of the graph, according to any ordering of its vertices. We then reformulate, from this graph-theoretic point of view, the known results regarding the few special cases that were previously studied, thereby putting them in a unifying context. We study, seemingly for the first time, the disorder number of the cycle graph and of the grid graph of arbitrary size. \end{abstract} \section{Introduction} Suppose we wish to enumerate $n$ places in a row such that a walk starting at $1$, then proceeding to $2$ and so on until $n$ is reached, has the longest length. For example, if $n=5$, then the walk corresponding to $31524$ has length $2+3+4+2=11$, which is the longest possible. Indeed, it has recently been shown in \cite[Theorem 8]{bulteau2021disorders} that the length of the longest possible walk is given by $\left\lfloor\frac{n^2}{2}\right\rfloor - 1$, which, for $n=5$, amounts to $11$. The motivation for this work is the insight that the above question may be reformulated in terms of graphs, in a way that covers the following two questions as special cases: \begin{enumerate} \item What is the length of the longest possible walk if we enumerate the $n^2$ squares of an $n\times n$ array, when the length of a walk between two consecutive squares is measured by the rectilinear distance between them? This question seems to go back to 2010, when the corresponding sequence was registered as \seqnum{A179094} in the OEIS by T.~Young.
\item In 2013, M.~J.~Dominus asked (and subsequently answered) the following question (\cite{315544}): Let $S$ be any ordering of the set $\{0,1\}^n$ of all binary sequences of length $n$. What is the maximum sum of the Hamming distances (i.e.\ the number of bits that change) between two consecutive elements of $S$? J.~O'Rourke asked almost the same question in 2018 (\cite{297861}). \end{enumerate} Our contribution in this work may be summarized as follows: \begin{enumerate} \item We define the disorder number of a graph as the maximal length of a walk along the edges of the graph, according to any ordering of its vertices. Under this definition, the three questions described above may be reformulated, respectively, as follows: What is the disorder number of the path graph, the square grid graph and the hypercube graph? \item We reformulate the known results regarding the path graph and the hypercube graph and obtain new results regarding the cycle graph and the grid graph. \item A natural question that arises is the following: What is the expected length of a walk on a graph $G$ if the ordering of its vertices is random? It turns out that, if $G$ is of order $n$, the expected length coincides, up to a multiplicative factor of $\frac{2}{n}$, with the Wiener index of $G$, a graph quantity that has been studied extensively under different names (cf.\ \cite[pp.\ 1-2]{yeh1994sum}). We give several examples in this respect. \end{enumerate} \section{Preliminaries} Let $\mathbb{N}$ denote the set of natural numbers and fix $m,n\in\mathbb{N}$, to be used throughout this work. We denote by $[n]$ the set $\{1,2,\ldots,n\}$. All graphs are tacitly assumed to be connected, undirected and without loops or multiple edges. If $G$ is a graph with vertex set $V$ and edge set $E$, we write $G=(V, E)$.
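As an illustrative aside (not part of the paper), the value $11$ for $n=5$ in the introduction, together with the closed forms for the path and the cycle discussed in this work, can be checked by brute force over all orderings of a small vertex set:

```python
from itertools import permutations

def longest_walk(n, dist):
    # Disorder number by brute force: maximize the walk length
    # over all orderings (permutations) of the n vertices.
    return max(sum(dist(p[i], p[i + 1]) for i in range(n - 1))
               for p in permutations(range(n)))

n = 5
path = longest_walk(n, lambda x, y: abs(x - y))                      # P_5
cycle = longest_walk(n, lambda x, y: min(abs(x - y), n - abs(x - y)))  # C_5
print(path, n * n // 2 - 1)            # both 11: floor(n^2/2) - 1
print(cycle, ((n - 1) ** 2 + 1) // 2)  # both 8: ceil((n-1)^2/2)
```

Brute force is only feasible for small $n$ (the search is over $n!$ orderings), but it is a useful sanity check for the formulas.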
The distance between two vertices $x$ and $y$ of a graph $G$ is defined to be the length of the shortest path connecting $x$ and $y$ in $G$ and is denoted by $d(x,y)$ (e.g.\ \cite[p. 13]{harary1971graph}). As usual, $P_n$ and $C_n$ denote the path graph and the cycle graph of order $n$, respectively. The product of two graphs $G_1$ and $G_2$ is denoted by $G_1\times G_2$ (e.g., \cite[p.\ 22]{harary1971graph}). The grid graph $P_m\times P_n$, the torus graph $C_m\times C_n$ and the hypercube graph $Q_n = \underbrace{P_2\times\cdots \times P_2}_{n\textnormal{ times}}$ all result from products. The following definition lays the ground for this work. \begin{definition} Let $G=(V,E)$ be a graph of order $|V| = n$. An \emph{ordering of $G$} is a bijection $\sigma\colon V\to [n]$. The set of all orderings of $G$ is denoted by $\Sigma(G)$. The \emph{disorder number of $G$ with respect to the ordering $\sigma\in\Sigma(G)$} is denoted by $O(G,\sigma)$ and is defined to be $$O(G,\sigma)=\sum_{i=1}^{n-1}d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right).$$ The \emph{disorder number of $G$} is denoted by $M(G)$ and is defined to be $$M(G)=\max_{\sigma \in\Sigma(G)}O(G,\sigma).$$ The \emph{average disorder of $G$} is denoted by $A(G)$ and is defined to be $$A(G)=\frac{1}{n!}\sum_{\sigma \in\Sigma(G)}O(G,\sigma).$$ \end{definition} \section{Main results} This section is divided into two parts: In the first, we address the disorder number and in the second, the average disorder. \subsection{The disorder number} We begin with the cycle graph, whose disorder number seems not to have been studied before. \subsubsection{\texorpdfstring{The cycle graph $C_n$}{Lg}} Consider the cycle graph $C_n$ and assume that its set of vertices is given by $V=\{0,1,\ldots,n-1\}$. The distance between $x,y\in V$ in $C_n$ is given by $$d(x,y)=\min\{|x-y|, n-|x-y|\}.$$ \begin{lemma} We have $M(C_n)=\left\lceil\frac{(n-1)^2}{2}\right\rceil$ (registered as \seqnum{A000982}). 
Furthermore, $$\left|\left\{\sigma\in \Sigma(C_n)\;|\;O(C_n,\sigma)=M(C_n)\right\}\right| = 2n.$$ \end{lemma} \begin{proof} It follows from the definition of the distance in $C_n$ that $d(x,y)\leq \left\lfloor\frac{n}{2}\right\rfloor$, for every $x,y\in V$. First, assume that $n$ is odd and let $\sigma\in\Sigma(C_n)$. We have $$O(C_n, \sigma)=\sum_{i=1}^{n-1}\overbrace{d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right)}^{\leq\frac{n-1}{2}}\leq \frac{(n-1)^2}{2}=\left\lceil\frac{(n-1)^2}{2}\right\rceil$$ with equality if and only if $d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right)=\left\lfloor\frac{n}{2}\right\rfloor$, for every $i\in[n-1]$. We shall now construct two versions of $\sigma\in\Sigma(C_n)$ such that $O(C_n,\sigma)=\frac{(n-1)^2}{2}$: Let $x_0\in V$. Either \begin{enumerate} \item $\sigma\left(x_0+(i-1)\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=i$, for every $i\in[n]$, or \item $\sigma\left(x_0-(i-1)\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=i$, for every $i\in[n]$. \end{enumerate} It is easy to see that $\sigma$ is well defined in each of the two versions and that $O(C_n,\sigma)=\frac{(n-1)^2}{2}$. To count the orderings $\sigma\in\Sigma(C_n)$ such that $O(C_n,\sigma)=\frac{(n-1)^2}{2}$, notice that, for every $x\in V$, the equation $$d\left(x, y\right)= \left\lfloor\frac{n}{2}\right\rfloor$$ has exactly two solutions in $V$, namely $$y=x\pm\left\lfloor\frac{n}{2}\right\rfloor.$$ Thus, if $$ \sigma(x_0)=1 \;\textnormal{ and }\; \sigma\left(x_0+\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=2, $$ then, necessarily, $$\sigma\left(x_0+(i-1)\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=i,$$ for every $3\leq i\leq n$.
Similarly, if $$ \sigma(x_0)=1 \;\textnormal{ and }\; \sigma\left(x_0-\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=2, $$ then, necessarily, $$\sigma\left(x_0-(i-1)\left\lfloor\frac{n}{2}\right\rfloor\;(\textnormal{mod } n)\right)=i,$$ for every $3\leq i\leq n$. Thus, there are exactly $2n$ orderings $\sigma\in\Sigma(C_n)$ such that $O(C_n,\sigma)=\frac{(n-1)^2}{2}$, one for each $x_0\in V$ in each of the two versions. Assume now that $n$ is even and let $\sigma\in\Sigma(C_n)$. Since for every $x\in V$ there is a unique $y\in V$ such that $d(x,y)=\frac{n}{2}$, we must have $$d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right)+d\left(\sigma^{-1}(i+1), \sigma^{-1}(i+2)\right)\leq n-1,$$ for every $1\leq i\leq n-2$ and equality holds if and only if $$\left\{d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right),d\left(\sigma^{-1}(i+1), \sigma^{-1}(i+2)\right)\right\}=\left\{\frac{n}{2},\frac{n}{2} - 1\right\}.$$ It follows that \begin{align} O(C_n,\sigma)&=\sum_{i=1}^{n-1}d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right)\nonumber\\&\leq \overbrace{\frac{n}{2}+ \left(\frac{n}{2}-1\right)+ \frac{n}{2}+\left(\frac{n}{2}-1\right)+\cdots+ \frac{n}{2}}^{n-1\textnormal{ terms}}\nonumber\\&=\left(\frac{n}{2}\right)^2+\left(\frac{n}{2}-1\right)^2\nonumber\\&=\frac{(n-1)^{2}+1}{2}\nonumber\\&=\left\lceil\frac{(n-1)^2}{2}\right\rceil.\nonumber \end{align} We shall now construct two versions of $\sigma\in\Sigma(C_n)$ such that $O(C_n,\sigma)=\frac{(n-1)^2+1}{2}$: Let $x_0\in V$. Either \begin{enumerate} \item $\sigma\left(x_0+(i-1)\left(\frac{n}{2}-1\right)+\left\lceil\frac{i-1}{2}\right\rceil\;(\textnormal{mod } n)\right)=i$ for every $i\in[n]$, or \item $\sigma\left(x_0-(i-1)\left(\frac{n}{2}-1\right)-\left\lceil\frac{i-1}{2}\right\rceil\;(\textnormal{mod } n)\right)=i$ for every $i\in[n]$. \end{enumerate} It is easy to see that $\sigma$ is well defined in each of the two versions and that $O(C_n,\sigma)=\frac{(n-1)^2+1}{2}$. 
To count the orderings $\sigma\in\Sigma(C_n)$ such that $O(C_n,\sigma)=\frac{(n-1)^2+1}{2}$, notice that, for every $x\in V$, the equation $$d\left(x, y\right)= \frac{n}{2}-1$$ has exactly two solutions in $V$, namely $$y=x\pm\left(\frac{n}{2}-1\right).$$ Setting $\sigma(x_0)=1$, then, necessarily, $$\sigma\left(x_0+\frac{n}{2}\;(\textnormal{mod } n)\right)=\sigma\left(x_0-\frac{n}{2}\;(\textnormal{mod } n)\right)=2.$$ Now, we may either set \begin{equation}\label{eq;222}\sigma\left(x_0+\frac{n}{2}+\left(\frac{n}{2}-1\right)\;(\textnormal{mod } n)\right)=3\end{equation} or \begin{equation}\label{eq;223} \sigma\left(x_0-\frac{n}{2}-\left(\frac{n}{2}-1\right)\;(\textnormal{mod } n)\right)=3.\end{equation} Suppose we define $\sigma^{-1}(3)$ as in (\ref{eq;222}) and suppose we have already defined $\sigma^{-1}(j)$, for every $1\leq j\leq i$, where $3\leq i\leq n-3$ is odd. By induction, it is easily verified that $\sigma^{-1}(i)=x_0 -\left\lfloor\frac{i}{2}\right\rfloor\;(\textnormal{mod } n)$ and $\sigma^{-1}(i-1)=x_0-\left\lfloor\frac{i}{2}\right\rfloor-\left(\frac{n}{2} -1\right)\;(\textnormal{mod } n)$. Now, we have to set $\sigma^{-1}(i+1)=x_0 -\left\lfloor\frac{i}{2}\right\rfloor+\frac{n}{2}\;(\textnormal{mod } n)$ and since $$x_0 -\left\lfloor\frac{i}{2}\right\rfloor+\frac{n}{2}-\left(\frac{n}{2}-1\right)\;(\textnormal{mod } n)=x_0 -\left\lfloor\frac{i-2}{2}\right\rfloor\;(\textnormal{mod } n)=\sigma^{-1}(i-2),$$ we are forced to define $$\sigma^{-1}(i+2)=x_0-\left\lfloor\frac{i}{2}\right\rfloor+\frac{n}{2} +\left(\frac{n}{2}-1\right)\;(\textnormal{mod } n)=x_0-\left\lfloor\frac{i+2}{2}\right\rfloor \;(\textnormal{mod } n).$$ This shows that there are exactly $n$ orderings resulting from the choice of (\ref{eq;222}), one for each $x_0\in V$. Similar arguments show that there are $n$ additional orderings resulting from the choice of (\ref{eq;223}). \end{proof} \iffalse \begin{lemma} We have $A(G)=\left\lfloor\frac{n^2}{4}\right\rfloor$. 
\end{lemma} \begin{proof} We begin by noticing that if $1\leq j\leq n$, then $j\leq n-j$ if and only if $j\leq \frac{n}{2}$. Then {\tiny \begin{align} &\sum_{\sigma\in S_{n}}\sum_{i=1}^{n-1}|\sigma^{-1}(i)-\sigma^{-1}(i+1)|\nonumber\\&=(n-1)(n-2)!2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\min\left\{ j-i,n-(j-i)\right\} \nonumber\\ &=2(n-1)!\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}\min\left\{ j,n-j\right\} \nonumber\\ &=2(n-1)!\left(\sum_{i=\left\lfloor \frac{n}{2}\right\rfloor +1}^{n-1}\sum_{j=1}^{n-i}j+\sum_{i=1}^{\left\lfloor \frac{n}{2}\right\rfloor }\left(\sum_{j=1}^{\left\lfloor \frac{n}{2}\right\rfloor }j+\sum_{j=\left\lfloor \frac{n}{2}\right\rfloor +1}^{n-i}n-j\right)\right)\nonumber\\ &=(n-1)!\left(\sum_{i=\left\lfloor \frac{n}{2}\right\rfloor +1}^{n-1}(n-i)(n-i+1)+\sum_{i=1}^{\left\lfloor \frac{n}{2}\right\rfloor }\left\lfloor \frac{n}{2}\right\rfloor \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)+2n\left(n-i-\left\lfloor \frac{n}{2}\right\rfloor\right)-\left(n-i-\left\lfloor \frac{n}{2}\right\rfloor\right)\left(n-i+\left\lfloor \frac{n}{2}\right\rfloor+1\right)\right)\nonumber\\ &=(n-1)!\left(\sum_{i=1}^{\left\lceil \frac{n}{2}\right\rceil -1}\left(i^{2}+i\right)+\sum_{i=1}^{\left\lfloor \frac{n}{2}\right\rfloor }\left(\left\lfloor \frac{n}{2}\right\rfloor \left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)+2n\left(\left\lceil \frac{n}{2}\right\rceil -i\right)-\left(\left\lceil \frac{n}{2}\right\rceil -i\right)\left(n-i+\left\lfloor \frac{n}{2}\right\rfloor +1\right)\right)\right)\nonumber\\ &=(n-1)!\left(\left\lfloor \frac{n}{2}\right\rfloor ^{2}\left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)+\sum_{i=1}^{\left\lceil \frac{n}{2}\right\rceil -1}\left(i^{2}+i\right)+\sum_{i=\left\lceil \frac{n}{2}\right\rceil -\left\lfloor \frac{n}{2}\right\rfloor }^{\left\lceil \frac{n}{2}\right\rceil -1}\left(2ni-i^{2}-2i\left\lfloor \frac{n}{2}\right\rfloor -i\right)\right)\nonumber\\ &=(n-1)!\left(\left\lfloor \frac{n}{2}\right\rfloor ^{2}\left(\left\lfloor 
\frac{n}{2}\right\rfloor +1\right)+2\left\lceil \frac{n}{2}\right\rceil \sum_{i=\left\lceil \frac{n}{2}\right\rceil -\left\lfloor \frac{n}{2}\right\rfloor }^{\left\lceil \frac{n}{2}\right\rceil -1}i\right)\nonumber\\ &=(n-1)!\left(\left\lfloor \frac{n}{2}\right\rfloor ^{2}\left(\left\lfloor \frac{n}{2}\right\rfloor +1\right)+\left\lceil \frac{n}{2}\right\rceil \left\lfloor \frac{n}{2}\right\rfloor \left(2\left\lceil \frac{n}{2}\right\rceil -1-\left\lfloor \frac{n}{2}\right\rfloor \right)\right)\nonumber\\ &=(n-1)!\begin{cases} \frac{n^3}{4}, & \textnormal{ if } 2\;|\;n;\\ \frac{(n-1)^{2}}{4}\frac{n+1}{2}+\frac{(n+1)^2}{4}\frac{n-1}{2}, & \textnormal{ if } 2\nmid n \end{cases}\nonumber\\ &=n!\begin{cases} \frac{n^{2}}{4} & \textnormal{ if }2\;|\;n;\\ \frac{n^{2}-1}{4} & \textnormal{ if }2\nmid n \end{cases}\nonumber\\&=n!\left\lfloor\frac{n^2}{4}\right\rfloor.\nonumber \end{align}} \iffalse {\footnotesize \begin{align} &\sum_{\sigma\in S_{n}}\sum_{i=1}^{n-1}|\sigma^{-1}(i)-\sigma^{-1}(i+1)|\nonumber\\&= (n-1)(n-2)!2\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\min\left\{ j-i,n-(j-i)\right\} \nonumber\\ &=2(n-1)!\sum_{i=1}^{n-1}\sum_{j=1}^{n-i}\min\left\{ j,n-j\right\} \nonumber\\ &=2(n-1)!\left(\sum_{i=\frac{n}{2}+1}^{n-1}\sum_{j=1}^{n-i}j+\sum_{i=1}^{\frac{n}{2}}\left(\sum_{j=1}^{\frac{n}{2}}j+\sum_{j=\frac{n}{2}+1}^{n-i}n-j\right)\right)\nonumber\\ &=2(n-1)!\left(\sum_{i=\frac{n}{2}+1}^{n-1}\frac{(n-i)(n-i+1)}{2}+\sum_{i=1}^{\frac{n}{2}}\left(\frac{\frac{n}{2}(\frac{n}{2}+1)}{2}+n(n-i-\frac{n}{2})-\frac{(n-i-\frac{n}{2})(n-i+\frac{n}{2}+1)}{2}\right)\right)\nonumber\\ &=(n-1)!\left(\sum_{i=1}^{n/2-1}i^{2}+i+\sum_{i=1}^{\frac{n}{2}}\left(\frac{n}{2}(\frac{n}{2}+1)+(\frac{n^{2}}{4}-i^{2})-(\frac{n}{2}-i)\right)\right)\nonumber\\ &=(n-1)!\left(\sum_{i=1}^{n/2-1}i^{2}+i+\sum_{i=1}^{\frac{n}{2}}\left(\frac{n^{2}}{2}-i^{2}+i\right)\right)\nonumber\\ &=n!\left(\frac{n^{2}}{4}\right).\nonumber \end{align}} \fi \end{proof} \fi \subsubsection{The hypercube graph 
\texorpdfstring{$Q_n$}{Lg}} Consider the hypercube graph $Q_n$ and assume that its set of vertices is given by $V=\{0,1\}^n$. The distance in $Q_n$ is the Hamming distance, i.e., for $x=(x_1,\ldots,x_n),y=(y_1,\ldots,y_n)\in V$ we have $d(x,y)=\sum_{i=1}^n |x_i-y_i|$. The following result is due to M.~J.~Dominus (\cite{315544}). The proof we provide here is merely a detailed reproduction of his idea. The very same construction was given in response to a question of J.~O'Rourke (\cite{297861}). \begin{theorem} We have $M(Q_n)=(2^{n-1}-1)(2n-1)+n$ (registered as \seqnum{A271771}). \end{theorem} \begin{proof} We begin with the observation that for every $x=(x_1,\ldots,x_n)\in V$, there is a unique vertex in $V$ that is farthest from $x$, namely $x'= (1-x_1,\ldots,1-x_n)$ and we have $d(x,x')=n$. Thus, if $\sigma\in\Sigma(Q_n)$ is any ordering of $V$, then $$d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right)+d\left(\sigma^{-1}(i+1), \sigma^{-1}(i+2)\right)\leq 2n-1,$$ for every $1\leq i\leq 2^n-2$. Thus, \begin{align} O(Q_n,\sigma)&\leq\overbrace{n+(n-1)+\cdots+(n-1)+n}^{2^n-1\textnormal{ terms}}\nonumber\\&=(2^{n-1}-1)(2n-1)+n.\nonumber \end{align} It follows that $M(Q_n)\leq (2^{n-1}-1)(2n-1)+n$. It remains to show that there exists $\sigma\in\Sigma(Q_n)$ such that $O(Q_n,\sigma)=(2^{n-1}-1)(2n-1)+n$. To this end, let $Q_{n-1} = \{0,1\}^{n-1}\times\{0\}\subseteq Q_n$ and let $p=(p_i)_{i=1}^{2^{n-1}}$ be any Hamiltonian path in $Q_{n-1}$ (see, for example, \cite[p.\ 1]{dvovrak2007hamiltonian}). We construct $\sigma\in\Sigma(Q_n)$ as follows: Let $\sigma(p_1)=1$ and suppose we have already defined $\sigma(p_1),\ldots,\sigma(p_i)$ for some $1\leq i\leq 2^{n-1}-1$ and suppose $\sigma(p_i)=k$ where $k=2i-1$. Denote $p_i=(x_1,\ldots,x_{n-1},0)$. The vertices $p_i$ and $p_{i+1}$ differ in a unique coordinate $1\leq j\leq n-1$. Thus, $p_{i+1}=(x_1,\ldots,x_{j-1},1-x_j, x_{j+1},\ldots,x_{n-1},0)$. We define $\sigma(1-x_1,\ldots,1-x_{n-1},1) = k+1$ and $\sigma(p_{i+1})=k+2$. 
Clearly, $$d\left(\sigma^{-1}(k), \sigma^{-1}(k+1)\right)=n\;\textnormal{ and }\;d\left(\sigma^{-1}(k+1), \sigma^{-1}(k+2)\right)=n-1.$$ Finally, assume we have defined $\sigma(p_{2^{n-1}})=2^n-1$ and assume that $p_{2^{n-1}}=(x_1,\ldots,x_{n-1},0)$. Then, setting $\sigma(1-x_1,\ldots,1-x_{n-1},1)=2^n$ finishes the definition of $\sigma$. \end{proof} \begin{question} What is $$\left|\left\{\sigma\in \Sigma(Q_n)\;|\;O(Q_n,\sigma)=M(Q_n)\right\}\right|?$$ \end{question} \subsubsection{The path graph \texorpdfstring{$P_n$}{Lg}} Consider the path graph $P_n$ and assume that its set of vertices is given by $V=[n]$. Then $\Sigma(P_n)$ coincides with the symmetric group $S_n$. The distance between vertices in $P_n$ is given by $d(x,y) = |x-y|$ for every $x,y\in V$. Let $\sigma\in S_n$. The quantity that we denote by $O(P_n, \sigma)$ is referred to by \cite{bulteau2021disorders} as the \emph{additive $y$-disorder of $\sigma$}. The work \cite{bulteau2021disorders} is actually concerned with a related quantity, called the \emph{additive $x$-disorder of $\sigma$}, and it is shown there that the additive $y$-disorder of $\sigma$ is equal to the additive $x$-disorder of the (group-theoretic) inverse of $\sigma$. Thus, the results that we present here regarding $P_n$ are immediate consequences of \cite{bulteau2021disorders}. \begin{definition}{\cite[p.\ 3]{bulteau2021disorders}} Let $\sigma\in \Sigma(P_n)$. The \emph{additive $x$-disorder} of $\sigma$ is denoted by $\delta_x^+(\sigma)$ and is defined by $$\delta_x^+(\sigma) = \sum_{i=1}^{n-1}d\left(\sigma(i), \sigma(i+1)\right).$$ The \emph{additive $y$-disorder} of $\sigma$ is denoted by $\delta_y^+(\sigma)$ and is defined by $$\delta_y^+(\sigma) = \sum_{i=1}^{n-1}d\left(\sigma^{-1}(i), \sigma^{-1}(i+1)\right).$$ \end{definition} \begin{theorem} $M(P_n)=\left\lfloor\frac{n^2}{2}\right\rfloor - 1$ (registered as \seqnum{A047838}).
\end{theorem} \begin{proof} By \cite[Theorem 8]{bulteau2021disorders}, $\left\lfloor\frac{n^2}{2}\right\rfloor - 1=\max_{\sigma\in \Sigma(P_n)}\left\{\delta_x^+(\sigma)\right\}$. Clearly, $$\max_{\sigma\in \Sigma(P_n)}\left\{\delta_x^+(\sigma)\right\}=\max_{\sigma^{-1}\in \Sigma(P_n)}\left\{\delta_x^+(\sigma^{-1})\right\}.$$ By \cite[3.\ of Lemma 2]{bulteau2021disorders}, $\delta_x^+(\sigma^{-1})= \delta_y^+(\sigma)$. Since $\delta_y^+(\sigma)=O(P_n,\sigma)$, we have \begin{align} M(P_n) &= \max_{\sigma\in \Sigma(P_n)}\left\{O(P_n, \sigma)\right\}\nonumber\\&=\max_{\sigma^{-1}\in \Sigma(P_n)}\left\{O(P_n, \sigma)\right\}\nonumber\\&=\max_{\sigma^{-1}\in \Sigma(P_n)}\left\{\delta_y^+(\sigma)\right\}\nonumber\\&=\left\lfloor\frac{n^2}{2}\right\rfloor - 1.\nonumber \end{align} \end{proof} \cite{bulteau2021disorders} characterized the orderings that maximize the additive $x$-disorder as \emph{bipartite with centered endpoints}. Thus, we wish to characterize orderings whose inverses are bipartite with centered endpoints. \begin{definition}{\cite[p.\ 5]{bulteau2021disorders}}\label{def;561} An ordering $\sigma\in \Sigma(P_n)$ is called \emph{bipartite with threshold $k \in\left\{\left\lfloor \frac{n}{2}\right\rfloor, \left\lceil \frac{n}{2}\right\rceil\right\}$} if either \begin{enumerate} \item [(a)] $\begin{cases}\sigma(i) \leq k,&i \textnormal{ is even};\\ \sigma(i) > k,&i \textnormal{ is odd}, \end{cases}\;\;$ for every $1\leq i\leq n$, or \item [(b)] $\begin{cases}\sigma(i) > k,&i \textnormal{ is even};\\ \sigma(i) \leq k,&i \textnormal{ is odd}, \end{cases}\;\;$ for every $1\leq i\leq n$. 
\end{enumerate} A bipartite ordering $\sigma\in \Sigma(P_n)$ with threshold $k$ has \emph{centered endpoints} if \begin{equation}\label{eq;822}\{\sigma(1), \sigma(n)\}=\begin{cases}\left\{ \frac{n}{2}, \frac{n}{2}+ 1\right\},&n \textnormal{ is even};\\ \left\{ \frac{n-1}{2}, \frac{n-1}{2} + 1\right\}\textnormal{ or } \left\{ \frac{n-1}{2}+1, \frac{n+1}{2} + 1\right\},&n \textnormal{ is odd}. \end{cases}\end{equation} \end{definition} \begin{lemma} Let $\sigma\in\Sigma(P_n)$. Then $O(P_n,\sigma)=\left\lfloor\frac{n^2}{2}\right\rfloor - 1$ if and only if $\sigma$ satisfies the following two conditions: \begin{enumerate} \item {\small\begin{equation}\label{eq;910}\{1,n\}=\begin{cases}\left\{\sigma\left(\frac{n}{2}\right),\sigma\left(\frac{n}{2}+1\right)\right\},&n \textnormal{ is even};\\ \left\{\sigma\left(\frac{n-1}{2}\right),\sigma\left(\frac{n-1}{2}+1\right)\right\}\textnormal{ or }\left\{\sigma\left(\frac{n-1}{2}+1\right),\sigma\left(\frac{n+1}{2}+1\right)\right\},&n \textnormal{ is odd}. \end{cases}\end{equation}} \item There exists $k\in\left\{\left\lfloor\frac{n}{2}\right\rfloor, \left\lceil\frac{n}{2}\right\rceil \right\}$ such that either \begin{enumerate} \item [(a)] $\sigma(i)$ is even for every $1\leq i\leq k$ and $\sigma(i)$ is odd for every $k+1\leq i\leq n$, or \item [(b)] $\sigma(i)$ is odd for every $1\leq i\leq k$ and $\sigma(i)$ is even for every $k+1\leq i\leq n$. \end{enumerate} \end{enumerate} In particular, $$\left|\left\{\sigma\in S_n\;|\;O(P_n,\sigma)=M(P_n)\right\}\right| = \begin{cases} 2\left(\left(\frac{n-2}{2}\right)!\right)^2, & n \textnormal{ is even};\\4\left(\frac{n-1}{2}\right)!\left(\frac{n-1}{2}-1\right)!,& n \textnormal{ is odd}, \end{cases}$$ (registered as \seqnum{A328378}). \end{lemma} \begin{proof} Let $\sigma\in \Sigma(P_n)$. By \cite[Theorem 8]{bulteau2021disorders}, $\delta_x^+(\sigma) = M(P_n)$ if and only if $\sigma$ is bipartite with centered endpoints.
Since $O(P_n,\sigma)=\delta_x^+(\sigma^{-1})$, we have $O(P_n,\sigma)=M(P_n)$ if and only if $\sigma^{-1}$ is bipartite with centered endpoints. Thus, assuming that (\ref{eq;822}) holds for $\sigma^{-1}$ and applying $\sigma$ on both sides of (\ref{eq;822}), we obtain (\ref{eq;910}). Suppose that $n$ is even and that condition (a) of Definition \ref{def;561} holds for $\sigma^{-1}$. Then $k=\frac{n}{2}$ and \begin{align} \left\{\sigma^{-1}(2),\sigma^{-1}(4),\ldots,\sigma^{-1}(n)\right\}&=\left\{1,2,\ldots,\frac{n}{2}\right\}\textnormal{ and }\nonumber\\ \left\{\sigma^{-1}(1),\sigma^{-1}(3),\ldots,\sigma^{-1}(n-1)\right\}&=\left\{\frac{n}{2}+1, \frac{n}{2}+2,\ldots,n\right\}.\nonumber\end{align} Equivalently, \begin{align} \{2,4,\ldots,n\}&=\left\{\sigma(1),\sigma(2),\ldots,\sigma\left(\frac{n}{2}\right)\right\}\textnormal{ and }\nonumber\\ \{1,3,\ldots,n-1\}&=\left\{\sigma\left(\frac{n}{2}+1\right), \sigma\left(\frac{n}{2}+2\right),\ldots,\sigma(n)\right\}.\nonumber\end{align} If, instead, condition (b) of Definition \ref{def;561} holds for $\sigma^{-1}$, we have \begin{align} \{1,3,\ldots,n-1\}&=\left\{\sigma(1),\sigma(2),\ldots,\sigma\left(\frac{n}{2}\right)\right\}\textnormal{ and }\nonumber\\ \{2,4,\ldots,n\}&=\left\{\sigma\left(\frac{n}{2}+1\right), \sigma\left(\frac{n}{2}+2\right),\ldots,\sigma(n)\right\}.\nonumber\end{align} Suppose now that $n$ is odd and that $k=\left\lfloor\frac{n}{2}\right\rfloor$. Then, necessarily, condition (a) of Definition \ref{def;561} holds for $\sigma^{-1}$. Thus, we have \begin{align} \{2,4,\ldots,n-1\}&=\left\{\sigma(1),\sigma(2),\ldots,\sigma\left(\left\lfloor\frac{n}{2}\right\rfloor\right)\right\}\textnormal{ and }\nonumber\\ \{1,3,\ldots,n\}&=\left\{\sigma\left(\left\lfloor\frac{n}{2}\right\rfloor+1\right), \sigma\left(\left\lfloor\frac{n}{2}\right\rfloor+2\right),\ldots,\sigma(n)\right\}.\nonumber\end{align} Similarly, suppose that $k=\left\lceil\frac{n}{2}\right\rceil$.
Then, necessarily, condition (b) of Definition \ref{def;561} holds for $\sigma^{-1}$. Thus, \begin{align} \{1,3,\ldots,n\}&=\left\{\sigma(1),\sigma(2),\ldots,\sigma\left(\left\lceil\frac{n}{2}\right\rceil\right)\right\}\textnormal{ and }\nonumber\\ \{2,4,\ldots,n-1\}&=\left\{\sigma\left(\left\lceil\frac{n}{2}\right\rceil+1\right), \sigma\left(\left\lceil\frac{n}{2}\right\rceil+2\right),\ldots,\sigma(n)\right\}.\nonumber\end{align} \end{proof} \subsubsection{\texorpdfstring{The grid graph $P_m\times P_n$}{Lg}} Consider the grid graph $P_m\times P_n$ and assume that its set of vertices is given by $V=\{(x,y)\;|\;1\leq x\leq m, 1\leq y\leq n\}$. Thus, we may think of $\sigma\in\Sigma(P_m\times P_n)$ as an $m\times n$ matrix whose $xy$th entry stands for $\sigma(x,y)$. For $i\in[mn]$, we denote $(x_i, y_i) = \sigma^{-1}(i)$. The distance in $P_m\times P_n$ is the rectilinear distance, i.e., for $(x,y),(x',y')\in V$, we have $d\left((x,y),(x',y')\right)=|x-x'|+|y-y'|$. To the best of our knowledge, the disorder number of the grid graph has been studied only in two special cases: the path graph $P_n\simeq P_1 \times P_n$ and the square grid graph $P_n^2$. The few results that are known regarding $M(P_n^2)$ were obtained by D.~S.~McNeil and are described in \seqnum{A179094}. In his study of the sequence, McNeil exploited the fact that the problem is a variation of the traveling salesman problem and calculated $M(P_n^2)$ for every $3\leq n\leq 17$ and $n=19,21,23$ by using Concorde (\cite{applegate1998solution}) coupled with glpk (\cite{glpk}). Based on his findings, he made the following conjecture: \begin{conjecture}{[McNeil]} Suppose that $n\geq 2$. Then $$M(P_n^2)=\begin{cases}n^3-3, &n \textnormal{ is even} ;\\n^3-n-1, &n \textnormal{ is odd}.\end{cases}$$ \end{conjecture} McNeil did not provide the orderings of $P_n^2$ that achieve $M(P_n^2)$. Such orderings seem necessary in order to recognize a pattern from which a general construction becomes possible. Thus, we began by reproducing McNeil's results. Our implementation was done by adjusting the traveling salesman problem example of the package Python-MIP. In this way we were able to find orderings $\sigma\in\Sigma(P_n^2)$ such that $O(P_n^2,\sigma)=\seqnum{A179094}(n)$, for every $2\leq n\leq 20$ (cf.\ Tables \ref{table:222}, \ref{table:223} and \ref{table:224}).
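For the smallest cases, the values of \seqnum{A179094} can also be reproduced without integer programming, by exhaustively searching over all orderings. A minimal Python sketch (the function name is ours, and the approach is only feasible for tiny $n$):

```python
from itertools import permutations

def max_disorder_square_grid(n):
    """Compute M(P_n x P_n) by brute force: the maximum, over all orderings
    of the n x n grid, of the total rectilinear length of the walk that
    visits the vertices in the order 1, 2, ..., n^2.  The search space has
    (n^2)! orderings, so this is only practical for n <= 3."""
    vertices = [(x, y) for x in range(n) for y in range(n)]
    best = 0
    for walk in permutations(vertices):
        # Sum of rectilinear distances between consecutive vertices of the walk.
        length = sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
                     for a, b in zip(walk, walk[1:]))
        best = max(best, length)
    return best
```

For $n=2$ and $n=3$ the search returns $5$ and $23$, in agreement with \seqnum{A179094}; beyond that, the factorial growth of $|\Sigma(P_n^2)|$ makes exhaustive search impractical, which is why the MIP formulation is used instead.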
These orderings allowed us to recognize a pattern and to construct orderings not only for $P_n^2$, but also for $P_m\times P_n$, for arbitrary $m,n\geq 2$. Our findings are summarized in the following theorem (see also Table \ref{table:229}): \begin{theorem}\label{thm;ag1} Suppose $m,n\geq 3$. Then $$M(P_m\times P_n)\geq \begin{cases}\frac{mn(m+n)}{2} - 3,& m \textnormal{ and } n \textnormal{ are both even};\\\frac{mn(m+n)}{2}-\frac{m+n}{2}-1,& m \textnormal{ and } n \textnormal{ are both odd};\\\frac{mn(m+n)}{2}-\frac{n}{2}-1,& m \textnormal{ is odd and } n \textnormal{ is even}. \end{cases}$$ Furthermore, for every $n\geq 2$, we have $M(P_2\times P_n)\geq (n+1)^2-4$. \end{theorem} \begin{table}[H] \centering \resizebox{0.76\textwidth}{!}{ $ \begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline m\text{\textbackslash} n&1 & 2 & 3 & 4 & 5 & 6 & 7 &8&9&10&11&12&13&14&15&16&17&18&19&20\\ \hline 1& 0 & 1 & 3 & 7 & 11 & 17 & 23 & 31 & 39 & 49 & 59 & 71 & 83 & 97 & 111 & 127 & 143 & 161 & 179& 199 \\ 2& & 5 & 12 & 21 & 32 & 45 & 60 & 77 & 96 & 117 & 140 & 165 & 192 & 221 & 252 & 285 & 320 & 357 & 396& 437 \\ 3& & & 23 & 39 & 55 & 77 & 99 & 127 & 155 & 189 & 223 & 263 & 303 & 349 & 395 & 447 & 499 & 557 & 615&679 \\ 4& & & & 61 & 87 & 117 & 151 & 189 & 231 & 277 & 327 & 381 & 439 & 501 & 567 & 637 & 711 & 789 & 871 &957 \\ 5& & & & & 119 & 161 & 203 & 255 & 307 & 369 & 431 & 503 & 575 & 657 & 739 & 831 & 923 & 1025 & 1127& 1239 \\ 6& & & & & & 213 & 269 & 333 & 401 & 477 & 557 & 645 & 737 & 837 & 941 & 1053 & 1169 & 1293 & 1421 & 1557\\ 7& & & & & & & 335 & 415 & 495 & 589 & 683 & 791 & 899 & 1021 & 1143 & 1279 & 1415 & 1565 & 1715 & 1879\\ 8& & & & & & & & 509 & 607 & 717 & 831 & 957 & 1087 & 1229 & 1375 & 1533 & 1695 & 1869 & 2047 & 2237 \\ 9& & & & & & & & & 719 & 849 & 979 & 1127 & 1275 & 1441 & 1607 & 1791 & 1975 & 2177 & 2379 & 2599 \\ 10& & & & & & & & & & 997 & 1149 & 1317 & 1489 & 1677 & 1869 & 2077 & 2289 & 2517 & 2749 & 2997 \\ 11& & & & & & & & & & &
1319 & 1511 & 1703 & 1917 & 2131 & 2367 & 2603 & 2861 & 3119 & 3399\\ 12& & & & & & & & & & & & 1725 & 1943 & 2181 & 2423 & 2685 & 2951 & 3237 & 3527 & 3837\\ 13& & & & & & & & & & & & & 2183 & 2449 & 2715 & 3007 & 3299 & 3617 & 3935&4279\\ 14& & & & & & & & & & & & & & 2741 & 3037 & 3357 & 3681 & 4029 & 4381 & 4757 \\ 15& & & & & & & & & & & & & & & 3359 & 3711 & 4063 & 4445 & 4827& 5239 \\ 16& & & & & & & & & & & & & & & & 4093 & 4479 & 4893 & 5311&5757 \\ 17& & & & & & & & & & & & & & & & & 4895 & 5345 & 5795& 6279 \\ 18& & & & & & & & & & & & & & & & & & 5829 & 6317 & 6837 \\ 19& & & & & & & & & & & & & & & & & & & 6839& 7399\\ 20& & & & & & & & & & & & & & & & & & & & 7997\\ \hline \end{array}$}\caption{Lower bounds on $M(P_m\times P_n)$.} \label{table:229}\end{table} The proof of Theorem \ref{thm;ag1} is divided into several lemmas. We begin with the case $m=2$ and assume that $n$ is odd. The case of even $n$ is covered by Lemma \ref{lem;888}. \begin{lemma}\label{lem;222} Suppose $n$ is odd and let $\sigma\in\Sigma(P_2\times P_n)$ be the ordering whose corresponding matrix is given by $$\left[\begin{array}{ccccccccc} n-1 & \cdots & 4 & 2 & n+1 & 2n & n+3 & \cdots & 2n-2\\ 2n-1 & \cdots & n+4 & n+2 & 1 & 3 & 5 & \cdots & n \end{array}\right].$$ Then $O(P_2\times P_n, \sigma)=(n+1)^2-4$. \end{lemma} \begin{proof} Starting at $1$, the walk that reaches $n$ is of length $\sum_{i=1}^{n-1}(i+1)$. Then $\frac{n-1}{2}+1$ steps bring us to $n+1$ and additional $2$ steps to $n+2$. Then, a walk of length $\sum_{i=3}^{n-1}(i+1)$ brings us to $2n-1$. Finally, $\frac{n+1}{2}+1$ steps bring us to $2n$ and the walk is finished with a total length of $(n+1)^2 -4$. \end{proof} In our treatment of the case of even $m$ and $n$, we shall make use of bipartite orderings of $P_m\times P_n$. \begin{definition} Suppose that $m$ and $n$ are both even and let $A,B,C,D$ be four matrices of size $\frac{m}{2}\times\frac{n}{2}$. 
A block matrix $\begin{bmatrix}A&B\\C&D\end{bmatrix}$ is called \emph{bipartite} if $A,B,C$ and $D$ have the following structure: \begin{enumerate} \item $A$ consists of the numbers $1,3,\ldots, \frac{mn}{2}-1$. \item $B$ consists of the numbers $\frac{mn}{2}+1,\frac{mn}{2}+3,\ldots, mn-1$. \item $C$ consists of the numbers $\frac{mn}{2}+2,\frac{mn}{2}+4,\ldots, mn$. \item $D$ consists of the numbers $2,4,\ldots,\frac{mn}{2}$. \end{enumerate} The order of the numbers within each of the submatrices is arbitrary. An ordering $\sigma\in\Sigma(P_m\times P_n)$ is called \emph{bipartite} if its corresponding matrix is bipartite. \end{definition} \begin{lemma}\label{lem;888} Suppose that $m$ and $n$ are both even and let $\sigma\in\Sigma(P_m\times P_n)$ be bipartite. Then \begin{equation}\label{eq;500} O(P_m\times P_n,\sigma)=\frac{mn(m+n)}{2} - 3 - f_1(\sigma) -f_2(\sigma)\end{equation} where \begin{align}f_1(\sigma)&=d\left((x_1,y_1), \left(\frac{m}{2}+1,\frac{n}{2}\right)\right) + d\left((x_{mn},y_{mn}), \left(\frac{m}{2},\frac{n}{2}\right)\right)\textnormal{ and}\nonumber\\f_2(\sigma)&=2\left(\min\left\{y_{\frac{mn}{2}},y_{\frac{mn}{2}+1}\right\}-\frac{n}{2}-1\right).\nonumber\end{align} \end{lemma} \begin{proof} First, assume that the matrix $\begin{bmatrix}A&B\\C&D\end{bmatrix}$ corresponding to $\sigma$ is given by \resizebox{0.9\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align} \left[ \begin{array}{cccc|cccc}\frac{mn}{2}-1 & \cdots & \left(\frac{m}{2}-1\right)n+3 & \left(\frac{m}{2}-1\right)n+1 & (m-1)n+1 & (m-1)n+3 & \cdots & mn-1\\ \vdots & & \vdots & \vdots & \vdots & \vdots & & \vdots\\ 2n-1 & \cdots & n+3 & n+1 & \left(\frac{m}{2}+1\right)n+1 & \left(\frac{m}{2}+1\right)n+3 & \cdots & \left(\frac{m}{2}+2\right)n-1\\ n-1 & \cdots & 3 & 1 & \frac{mn}{2}+1 & \frac{mn}{2}+3 & \cdots & \left(\frac{m}{2}+1\right)n-1\\\hline (m-1)n+2 & (m-1)n+4 & \cdots & mn & \frac{mn}{2} & \cdots & \left(\frac{m}{2}-1\right)n+4 & \left(\frac{m}{2}-1\right)n+2\\ \vdots & & &
\vdots & \vdots & & \vdots & \vdots\\ \left(\frac{m}{2}+1\right)n+2 & \left(\frac{m}{2}+1\right)n+4 & \cdots & \left(\frac{m}{2}+2\right)n & 2n & \cdots & n+4 & n+2\\ \frac{mn}{2}+2 & \frac{mn}{2}+4 & \cdots & \left(\frac{m}{2}+1\right)n & n & \cdots & 4 & 2 \end{array}\right]\nonumber. \end{align}\end{minipage} } We shall refer to this particular matrix as canonical. We also make the convention that the row index is counted from the bottom up and the column index from left to right. Now, starting at $1$, we proceed to $2$ with a distance of $\frac{m}{2}+\frac{n}{2}$. Moving to $3$, we add a distance of $\frac{m}{2}+\frac{n}{2} + 1$. This is repeated $\frac{n}{2}-1$ times, once for each of the entries $1,3,\ldots,n-3$, until we reach $n-1$. Proceeding to $n$, we add $\frac{m}{2}+\frac{n}{2}$ and from there to $n+1$ with a distance of $\frac{m}{2}+2$. We repeat this for each of the $\frac{m}{2}$ rows of $A$, except that we do not return to $A$ from $\frac{mn}{2}$. Altogether, we have made a walk that started at $1$, covered $A$ and $D$ and arrived at $\frac{mn}{2}$. This walk has a total length of {\footnotesize\begin{equation}\label{eq;442}\left(\left(\left(\frac{m}{2}+\frac{n}{2}\right)+\left(\frac{m}{2}+\frac{n}{2}+1\right)\right)\left(\frac{n}{2}-1\right)+\left(\frac{m}{2}+\frac{n}{2}\right)+\left(\frac{m}{2}+2\right)\right)\frac{m}{2}-\left(\frac{m}{2}+2\right).\end{equation}} Continuing, one step brings us from $\frac{mn}{2}$ to $\frac{mn}{2}+1$. Now, notice that the remaining walk is identical to the previous one, only mirrored. It follows that the total length of the walk is twice (\ref{eq;442}), plus the single step from $\frac{mn}{2}$ to $\frac{mn}{2}+1$.
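The simplification of this expression can be checked numerically: twice the quantity (\ref{eq;442}), plus the single unit step, collapses to the closed form $\frac{mn(m+n)}{2}-3$. A small Python sanity check (the helper name is ours):

```python
def half_walk_length(m, n):
    """The length (the quantity (eq;442)) of the walk through blocks A and D
    of the canonical matrix; m and n are assumed even."""
    a, b = m // 2, n // 2
    return (((a + b) + (a + b + 1)) * (b - 1) + (a + b) + (a + 2)) * a - (a + 2)

# Twice the half-walk, plus the single step between mn/2 and mn/2 + 1,
# agrees with the closed form mn(m+n)/2 - 3 for all small even m, n.
for m in range(2, 31, 2):
    for n in range(2, 31, 2):
        assert 2 * half_walk_length(m, n) + 1 == m * n * (m + n) // 2 - 3
```

Since both sides are polynomials in $m$ and $n$ of degree $3$, agreement on this range of values is ample evidence for the identity (and the algebraic simplification is routine).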
We conclude that {\footnotesize \begin{align} &O(P_m\times P_n, \sigma)\nonumber\\&=2 \left(\left(\left(\left(\frac{m}{2}+\frac{n}{2}\right)+\left(\frac{m}{2}+\frac{n}{2}+1\right)\right)\left(\frac{n}{2}-1\right)+\left(\frac{m}{2}+\frac{n}{2}\right)+\left(\frac{m}{2}+2\right)\right)\frac{m}{2}-\left(\frac{m}{2}+2\right)\right)+1\nonumber\\&=\frac{mn\left(m+n\right)}{2}-3.\nonumber\end{align}} Since every bipartite matrix is reachable from the canonical matrix by applying transpositions within each submatrix, it suffices to establish the change in the disorder under transpositions. To this end, suppose $\pi\in\Sigma(P_m\times P_n)$ is bipartite with a corresponding matrix $\begin{bmatrix}A&B\\C&D\end{bmatrix}$ and let $i,j$ be two elements of $A$ such that $i,j\neq 1$. Denote by $\pi'$ the ordering of $P_m\times P_n$ resulting from interchanging $i$ and $j$ in $A$. We shall show that $O(P_m\times P_n,\pi')=O(P_m\times P_n,\pi)$. Clearly, it suffices to show that {\tiny \begin{align} &|x_{i-1}-x_{i}|+|y_{i-1}-y_{i}| +|x_{i}-x_{i+1}|+|y_{i}-y_{i+1}| +|x_{j-1}-x_{j}|+|y_{j-1}-y_{j}| + |x_{j}-x_{j+1}|+|y_{j}-y_{j+1}|=\nonumber\\&|x_{i-1}-x_{j}|+|y_{i-1}-y_{j}| +|x_{j}-x_{i+1}|+|y_{j}-y_{i+1}| +|x_{j-1}-x_{i}|+|y_{j-1}-y_{i}| + |x_{i}-x_{j+1}|+|y_{i}-y_{j+1}|.\nonumber \end{align}} Since $i$ and $j$ belong to $A$ and $i-1,i+1,j-1,j+1$ belong to $D$, we have {\tiny \begin{align} &|x_{i-1}-x_{i}|+|y_{i-1}-y_{i}| +|x_{i}-x_{i+1}|+|y_{i}-y_{i+1}| +|x_{j-1}-x_{j}|+|y_{j-1}-y_{j}| + |x_{j}-x_{j+1}|+|y_{j}-y_{j+1}|=\nonumber\\&(x_{i}-x_{i-1})+(y_{i-1}-y_{i}) +(x_{i}-x_{i+1})+(y_{i+1}-y_{i}) +(x_{j}-x_{j-1})+(y_{j-1}-y_{j}) + (x_{j}-x_{j+1})+(y_{j+1}-y_{j})=\nonumber\\&(x_{j}-x_{i-1})+(y_{i-1}-y_{j}) +(x_{j}-x_{i+1})+(y_{i+1}-y_{j}) +(x_{i}-x_{j-1})+(y_{j-1}-y_{i}) + (x_{i}-x_{j+1})+(y_{j+1}-y_{i})=\nonumber\\&|x_{j}-x_{i-1}|+|y_{i-1}-y_{j}| +|x_{j}-x_{i+1}|+|y_{i+1}-y_{j}| +|x_{i}-x_{j-1}|+|y_{j-1}-y_{i}| + |x_{i}-x_{j+1}|+|y_{j+1}-y_{i}|.\nonumber \end{align}} The same 
argument applies to transpositions within $B$, if $i,j\neq \frac{mn}{2}+1$, within $C$, if $i,j\neq mn$ and within $D$, if $i,j\neq \frac{mn}{2}$. Thus, we may assume that we have transformed the canonical matrix towards $\sigma$ and that it remains to apply at most four transpositions involving $1, \frac{mn}{2}, \frac{mn}{2}+1$ and $mn$. Let us begin with $A$ and assume that $i=1$. Then $(x_1,y_1) = \left(\frac{m}{2}+1,\frac{n}{2}\right)$ and {\tiny \begin{align} &|x_{i}-x_{i+1}|+|y_{i}-y_{i+1}|+|x_{j-1}-x_{j}|+|y_{j-1}-y_{j}|+|x_{j}-x_{j+1}|+|y_{j}-y_{j+1}|=\nonumber\\&(x_{i}-x_{i+1})+(y_{i+1}-y_{i})+(x_{j}-x_{j-1})+(y_{j-1}-y_{j})+(x_{j}-x_{j+1})+(y_{j+1}-y_{j})=\nonumber\\&|x_{j}-x_{i+1}|+|y_{i+1}-y_{j}|+|x_{i}-x_{j-1}|+|y_{j-1}-y_{i}|+|x_{i}-x_{j+1}|+|y_{j+1}-y_{i}|+x_{j}-x_{i}+y_{i}-y_{j}.\nonumber \end{align}} This shows that transposing $1$ away from its original position results in a walk that is shorter by $x_{j}-x_{i}+y_{i}-y_{j}$, that is, by $d\left((x_1,y_1), \left(\frac{m}{2}+1,\frac{n}{2}\right)\right)$, if we denote by $(x_1,y_1)$ the new position of $1$ (i.e., the position of $1$ according to $\sigma$). A similar argument applies to $mn$ within the block matrix $C$, resulting in a walk that is shorter by $d\left((x_{mn},y_{mn}), \left(\frac{m}{2},\frac{n}{2}\right)\right)$. This settles the $f_1$ term in (\ref{eq;500}). We now wish to interchange $i=\frac{mn}{2}$ with some $j$ within $D$. Since $(x_i,y_i) = \left(\frac{m}{2},\frac{n}{2}+1\right)$ and $(x_{i+1},y_{i+1}) = \left(\frac{m}{2}+1,\frac{n}{2}+1\right)$ and $i-1, j-1, j+1$ belong to $A$, this transposition does not change the length of the walk.
Indeed, {\tiny \begin{align} &|x_{i-1}-x_{i}|+|y_{i-1}-y_{i}|+|x_{i}-x_{i+1}|+|y_{i}-y_{i+1}|+|x_{j-1}-x_{j}|+|y_{j-1}-y_{j}|+|x_{j}-x_{j+1}|+|y_{j}-y_{j+1}|=\nonumber\\&(x_{i-1}-x_{i})+(y_{i}-y_{i-1})+(x_{i+1}-x_{i})+(y_{i}-y_{i+1})+(x_{j-1}-x_{j})+(y_{j}-y_{j-1})+(x_{j+1}-x_{j})+(y_{j}-y_{j+1})=\nonumber\\&|x_{i+1}-x_{j}|+|y_{j}-y_{i+1}|+|x_{j+1}-x_{i}|+|y_{i}-y_{j+1}|+|x_{i-1}-x_{j}|+|y_{j}-y_{i-1}|+|x_{j-1}-x_{i}|+|y_{i}-y_{j-1}|.\nonumber \end{align}} Finally, we transpose $i=\frac{mn}{2}+1$ with some $j$ within $B$. As before, we have {\tiny \begin{align} &|x_{i-1}-x_{i}|+|y_{i-1}-y_{i}|+|x_{i}-x_{i+1}|+|y_{i}-y_{i+1}|+|x_{j-1}-x_{j}|+|y_{j-1}-y_{j}|+|x_{j}-x_{j+1}|+|y_{j}-y_{j+1}|=\nonumber\\&(x_{i}-x_{i-1})+(y_{i-1}-y_{i})+(x_{i}-x_{i+1})+(y_{i}-y_{i+1})+(x_{j}-x_{j-1})+(y_{j}-y_{j-1})+(x_{j}-x_{j+1})+(y_{j}-y_{j+1})=\nonumber\\&\boxed{(y_{i-1}-y_{i})+(y_{j}-y_{j+1})}+|x_{j}-x_{i-1}|+|x_{i}-x_{j+1}|+|x_{j}-x_{i+1}|+|y_{j}-y_{i+1}|+|x_{i}-x_{j-1}|+|y_{i}-y_{j-1}|.\nonumber \end{align}} Here, we distinguish between two cases regarding the boxed terms: \begin{enumerate} \item Suppose $y_{i-1}\leq y_{j}$. Then $$(y_{i-1}-y_{i})+(y_{j}-y_{j+1})=|y_{j}-y_{i-1}|+|y_{i}-y_{j+1}|+2(y_{i-1}-y_{i}).$$ \item Suppose $y_{j}<y_{i-1}$. Then $$(y_{i-1}-y_{i})+(y_{j}-y_{j+1})=|y_{i-1}-y_{j}|+|y_{i}-y_{j+1}|+2(y_{j}-y_{i}).$$ \end{enumerate} Since $y_i = \frac{n}{2}+1$, we have $$(y_{i-1}-y_{i})+(y_{j}-y_{j+1})=2\left(\min \{y_j, y_{i-1}\}-\frac{n}{2}-1\right).$$ This settles the $f_2$ term in (\ref{eq;500}) and the proof is completed. \end{proof} Combining Lemma \ref{lem;888} with Lemma \ref{lem;222}, we conclude: \begin{corollary} We have $M(P_2\times P_n)\geq (n+1)^2-4$ (registered as \seqnum{A028347}). \end{corollary} We shall now consider the case of odd $m$ and $n$. The proof is very similar to the proof of Lemma \ref{lem;888} and is omitted.
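Before moving on, we note that the construction of Lemma \ref{lem;222} and the corollary above can be checked directly for small odd $n$. The following Python sketch (helper names are ours) also confirms, by brute force over all $720$ orderings of $P_2\times P_3$, that the bound $(n+1)^2-4$ is attained with equality for $n=3$:

```python
from itertools import permutations

def lemma_222_matrix(n):
    """The explicit 2 x n ordering matrix of Lemma [lem;222], for odd n."""
    top = list(range(n - 1, 0, -2)) + [n + 1, 2 * n] + list(range(n + 3, 2 * n - 1, 2))
    bottom = list(range(2 * n - 1, n + 1, -2)) + list(range(1, n + 1, 2))
    return [top, bottom]

def disorder_of_matrix(mat):
    """O(P_2 x P_n, sigma) for the ordering sigma encoded by the matrix."""
    pos = {}
    for x, row in enumerate(mat):
        for y, k in enumerate(row):
            pos[k] = (x, y)
    walk = [pos[k] for k in sorted(pos)]
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(walk, walk[1:]))

# The construction attains (n+1)^2 - 4 for small odd n ...
for n in (3, 5, 7, 9):
    assert disorder_of_matrix(lemma_222_matrix(n)) == (n + 1) ** 2 - 4

# ... and for n = 3 exhaustive search shows that 12 = (3+1)^2 - 4 is in
# fact the exact value of M(P_2 x P_3).
vertices = [(x, y) for x in range(2) for y in range(3)]
assert max(sum(abs(a[0] - b[0]) + abs(a[1] - b[1]) for a, b in zip(w, w[1:]))
           for w in permutations(vertices)) == 12
```

That the lower bound is exact for $n=3$ can also be seen by hand: a $2\times 3$ grid has only two pairs of vertices at distance $3$, so at most two of the five steps have length $3$ and the remaining steps have length at most $2$, giving $O\leq 3+3+2+2+2=12$.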
\begin{lemma}\label{lem;889} Suppose that $m$ and $n$ are both odd and let $\sigma\in\Sigma(P_m\times P_n)$ be the ordering whose corresponding matrix is given by \resizebox{0.9\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align} \left[ \begin{array}{cccc|c|cccc} \frac{(m+1)(n-1)}{2}-1&\cdots&&\cdots&\frac{(m+1)(n-1)}{2}+1&mn-2&\cdots&&\cdots\\ &&&&\vdots&&&&\\ &&&&\frac{(m+1)(n-1)}{2}m-2&\cdots&&\cdots&\frac{(m+1)(n-1)}{2}m\\\cline{6-9} \cdots&\cdots&3&1&mn&\frac{(m+1)(n-1)}{2}&\cdots&&\cdots\\\cline{1-4} mn-1&&&\cdots&\frac{(m+1)(n-1)}{2}+2&&&&\\ &&&&\vdots&&&&\\ \cdots&&\cdots&\frac{(m+1)(n-1)}{2}m+1&\frac{(m+1)(n-1)}{2}m-1&\cdots&&4&2 \end{array}\right].\nonumber \end{align}\end{minipage} } \noindent Then $$ O(P_m\times P_n,\sigma)=\frac{(mn-1)(m+n)}{2}-1.$$ \end{lemma} Combining Lemma \ref{lem;888} with Lemma \ref{lem;889}, we conclude that $M(P_n^2)$ is bounded below by the values in McNeil's conjecture: \begin{corollary} We have $$M(P_n^2)\geq\begin{cases}n^3-3, &n \textnormal{ is even} ;\\n^3-n-1, &n \textnormal{ is odd}.\end{cases}$$ \end{corollary} Finally, we address the case of odd $m$ and even $n$. Again, the proof is omitted. \begin{lemma} Suppose that $m$ is odd, $n$ is even and $m,n\geq 3$. Let $\sigma\in\Sigma(P_m\times P_n)$ be the ordering whose corresponding matrix is given by \resizebox{0.9\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align} \left[ \begin{array}{ccccc|cccc} \frac{mn}{2}&\cdots & & &\cdots & \cdots& & \cdots &\frac{mn}{2}+3\\ \vdots& & & & \vdots & \vdots & & & \vdots\\ \cdots& & & & n-1 & mn-2& & & \cdots \\\cline{7-9}\cline{0-0} \multicolumn{1}{c|}{mn-1}&n-3 & \cdots & 3 & 1 & \multicolumn{1}{c|}{mn} & \frac{mn}{2}+1 &\cdots & \cdots\\\cline{2-6} \cdots& & \cdots& \frac{mn}{2}+4 & \frac{mn}{2}+2 & \vdots & & &\vdots \\ \vdots& & & & \vdots & \vdots & & & \vdots\\ mn-3& & & &\cdots & n & \cdots & 4 &2 \end{array}\right]\nonumber.
\end{align}\end{minipage} } \noindent Then $$ O(P_m\times P_n,\sigma)=\frac{mn(m+n)}{2}-\frac{n}{2}-1.$$ \end{lemma} \begin{example} Suppose $n\geq 3$. Then $$M(P_3\times P_n)\geq \begin{cases}\frac{3n^{2}+8n-2}{2},&n \textnormal{ is even};\\\frac{3n^{2}+8n-5}{2},&n \textnormal{ is odd}.\end{cases}$$ \end{example} \subsection{The average disorder number} The previous section studied the disorder number of orderings from an extremal point of view. In this section, we study the average of the disorder number over all orderings. As we shall immediately see, the average disorder number of a graph of order $n$ coincides, up to a factor of $\frac{2}{n}$, with the Wiener index of the graph, a notion that goes back to \cite{wiener1947structural} and is defined as follows (e.g., \cite[p.\ 359]{yeh1994sum}): \begin{definition} The \emph{Wiener index} of a graph $G=(V,E)$ is denoted by $W(G)$ and is defined to be $$W(G)=\frac{1}{2}\sum_{x,y\in V}d(x,y).$$ \end{definition} \begin{lemma}\label{lem;421} Let $G=(V,E)$ be a graph of order $n$.
Then $$A(G)=\frac{2W(G)}{n}.$$ \end{lemma} \begin{proof} We have \begin{align} A(G)&=\frac{1}{n!}\sum_{\sigma\in \Sigma(G)}O(G,\sigma) \nonumber\\&=\frac{1}{n!}\sum_{\sigma\in \Sigma(G)}\sum_{i=1}^{n-1}d\left(\sigma^{-1}(i),\sigma^{-1}(i+1)\right) \nonumber\\ &=\frac{n-1}{n!}\sum_{\sigma\in \Sigma(G)}d\left(\sigma^{-1}(1),\sigma^{-1}(2)\right) \nonumber\\ &=\frac{(n-1)!}{n!}\sum_{x,y\in V}d(x,y) \nonumber\\ &=\frac{2W(G)}{n}. \nonumber \end{align} \end{proof} In the examples that we now give, we shall make use of the fact that the Wiener index of the product $G_1\times G_2$ of two graphs $G_1$ and $G_2$ of orders $|G_1|$ and $|G_2|$, respectively, is given by $$W(G_1\times G_2)=|G_1|^2W(G_2)+|G_2|^2W(G_1)$$ (cf.\ \cite[Theorem 1]{yeh1994sum}). \begin{example} \begin{enumerate} \item We have $W(P_n)=\frac{n^3-n}{6}$ (e.g., \cite[p. 284]{entringer1976distance}) and therefore $A(P_n) = \frac{n^2-1}{3}$. Sequences in context: \seqnum{A090672} and \seqnum{A005990}. \item We have $$W(C_n) = \begin{cases} \frac{n^3}{8},& n \textnormal{ is even};\\ \frac{n(n^2-1)}{8},& n \textnormal{ is odd}. \end{cases}$$ (e.g., \cite[p. 284]{entringer1976distance}). Thus, $A(C_n)=\left\lfloor\frac{n^2}{4}\right\rfloor$. Sequences in context: \seqnum{A306258} and \seqnum{A002620}. \item We have $W(P_n^2)=\frac{n^2(n-1)n(n+1)}{3}$. Thus, $A(P_n^2)=\frac{2(n-1)n(n+1)}{3}$. Sequences in context: \seqnum{A210440} and \seqnum{A143944}. \item We have $$W(C_n^2) = \begin{cases} \frac{n^5}{4},&n \textnormal{ is even};\\ \frac{n^3(n^2-1)}{4},& n \textnormal{ is odd}. \end{cases}$$ Thus, $$A(C_n^2)=\begin{cases} \frac{n^3}{2},&n \textnormal{ is even} ;\\ \frac{n^3-n}{2},& n \textnormal{ is odd}. \end{cases}$$ Notice that $$A(C_n^2)=\frac{1}{4}\sum_{k=1}^{n}6k(k-1)+(2k-1)(-1)^{k}+1,$$ which establishes a hitherto unknown connection between (the partial sums of) \seqnum{A062717} and \seqnum{A034828}. Sequences in context: \seqnum{A122657}, \seqnum{A062717} and \seqnum{A034828}.
\item We have $W(Q_n) = n4^{n-1}$. Thus, $A(Q_n) = n2^{n-1}$. Sequences in context: \seqnum{A002697} and \seqnum{A001787}. \item Let $L_{m,n}$ denote the lollipop graph, obtained by joining the complete graph $K_m$ to the path $P_n$ by a bridge (so that $L_{m,n}$ has $m+n$ vertices). Then \begin{align}W(L_{m,n})&=W(P_n)+W(K_m)+\sum_{k=1}^{n}k+(m-1)\sum_{k=1}^{n}(k+1)\nonumber\\&=\frac{n(n+1)(n+2)}{6}+(m-1)\frac{n(n+3)+m}{2}\nonumber.\end{align} Thus, $$A(L_{m,n}) = \frac{n(n+1)(n+2)}{3(m+n)}+(m-1)\frac{n(n+3)+m}{m+n}.$$ Sequences in context: \seqnum{A180854}. \end{enumerate} \end{example} \begin{table} \centering \resizebox{0.6\textwidth}{!}{ \begin{tabular}{ |c|c|c| } \hline $n$ & $\sigma$ & $O(P_n^2, \sigma)$\\ \hline $2$&$\begin{bmatrix} 1& 3\\ 4& 2 \end{bmatrix}$&$5$\\\hline $3$&$\begin{bmatrix} 4& 9& 2\\ 6& 1& 7\\ 8& 3& 5 \end{bmatrix}$&$23$\\\hline $4$&$\begin{bmatrix} 2& 6& 10& 14\\ 4& 8& 16& 12\\ 13& 15& 1& 7\\ 9& 11& 3& 5 \end{bmatrix}$&$61$\\\hline $5$&$\begin{bmatrix} 3& 15& 8& 11& 24\\ 5& 17& 20& 22& 13\\ 7& 1& 25& 9& 19\\ 21& 10& 14& 2& 16\\ 12& 23& 18& 4& 6 \end{bmatrix}$&$119$\\\hline $6$&$\begin{bmatrix} 9& 3& 5& 25& 23& 31\\ 15& 17& 13& 35& 33& 29\\ 11& 7& 1& 21& 27& 19\\ 34& 26& 36& 18& 14& 10\\ 24& 30& 20& 16& 12& 8\\ 22& 28& 32& 6& 4& 2 \end{bmatrix}$&$213$\\\hline $7$&$\begin{bmatrix} 40& 44& 42& 38& 28& 20& 18\\ 5& 12& 3& 8& 30& 34& 47\\ 14& 36& 16& 1& 22& 32& 24\\ 10& 25& 46& 49& 6& 37& 26\\ 27& 48& 19& 35& 41& 39& 9\\ 7& 29& 31& 15& 11& 2& 45\\ 33& 21& 23& 17& 4& 43& 13 \end{bmatrix}$&$335$\\\hline $8$&$\begin{bmatrix} 7& 15& 23& 11& 45& 41& 53& 39\\ 19& 17& 29& 5& 47& 33& 63& 57\\ 3& 27& 9& 31& 59& 35& 51& 43\\ 21& 13& 25& 1& 49& 37& 61& 55\\ 34& 48& 54& 64& 32& 8& 28& 6\\ 60& 42& 44& 62& 26& 12& 24& 10\\ 56& 52& 40& 46& 22& 14& 30& 2\\ 58& 38& 36& 50& 4& 20& 16& 18 \end{bmatrix}$&$509$\\\hline $9$&$\begin{bmatrix} 11& 33& 37& 35& 78& 52& 60& 80& 70\\ 7& 15& 29& 9& 3& 74& 72& 50& 66\\ 39& 13& 21& 23& 46& 56& 76& 62& 54\\ 19& 27& 17& 31& 25& 48& 58& 64& 68\\ 55& 57& 5& 1& 81& 30& 42& 44& 40\\ 51& 71& 53& 73& 24& 18& 36& 32& 34\\
69& 61& 63& 79& 45& 28& 16& 8& 4\\ 41& 77& 65& 59& 43& 14& 6& 12& 2\\ 49& 47& 67& 75& 38& 26& 22& 20& 10 \end{bmatrix}$&$719$\\\hline $10$&$\begin{bmatrix} 91& 73& 65& 79& 97& 34& 26& 48& 10& 36\\ 81& 59& 57& 67& 89& 24& 12& 38& 2& 4\\ 83& 71& 87& 93& 95& 46& 44& 40& 16& 8\\ 61& 63& 85& 55& 53& 28& 6& 22& 20& 18\\ 99& 77& 75& 69& 51& 42& 50& 32& 14& 30\\ 19& 43& 25& 47& 1& 100& 88& 78& 60& 68\\ 21& 15& 41& 29& 11& 64& 72& 66& 92& 52\\ 3& 27& 37& 9& 5& 80& 86& 56& 70& 90\\ 39& 31& 23& 33& 45& 84& 98& 82& 54& 74\\ 49& 35& 13& 17& 7& 76& 58& 94& 96& 62 \end{bmatrix}$&$997$\\\hline $11$&$\begin{bmatrix} 102& 5& 37& 84& 94& 64& 68& 10& 18& 72& 118\\ 86& 104& 112& 110& 92& 114& 20& 52& 116& 22& 24\\ 60& 74& 100& 3& 90& 82& 28& 8& 46& 42& 12\\ 58& 80& 78& 108& 106& 56& 44& 14& 70& 50& 26\\ 76& 88& 98& 31& 96& 120& 40& 16& 48& 34& 54\\ 39& 7& 53& 62& 1& 121& 4& 32& 66& 36& 30\\ 41& 55& 43& 27& 71& 23& 81& 38& 77& 101& 89\\ 9& 45& 117& 21& 15& 73& 107& 87& 111& 99& 113\\ 47& 115& 13& 19& 33& 2& 103& 109& 105& 59& 57\\ 35& 69& 51& 25& 49& 67& 75& 79& 93& 83& 6\\ 11& 29& 119& 65& 17& 91& 61& 63& 97& 85& 95 \end{bmatrix}$&$1319$\\\hline \end{tabular}} \caption{Orderings of $P_n^2$ such that $O(P_n^2,\sigma)=\seqnum{A179094}(n)$} \label{table:222} \end{table} \begin{table} \centering \resizebox{0.76\textwidth}{!}{ \begin{tabular}{ |c|c|c| } \hline $n$ & $\sigma$ & $O(P_n^2, \sigma)$\\ \hline $12$&$\begin{bmatrix} 93& 115& 85& 135& 125& 117& 28& 42& 40& 50& 56& 38\\ 133& 95& 111& 97& 91& 141& 54& 4& 14& 22& 46& 6\\ 87& 121& 83& 89& 131& 77& 62& 68& 24& 60& 44& 2\\ 123& 137& 139& 129& 79& 127& 32& 30& 16& 70& 66& 48\\ 119& 101& 103& 107& 113& 105& 58& 64& 10& 20& 12& 8\\ 73& 75& 99& 81& 109& 143& 72& 26& 18& 34& 52& 36\\ 27& 3& 43& 47& 45& 1& 144& 118& 136& 138& 108& 110\\ 49& 35& 13& 11& 33& 39& 82& 132& 124& 74& 78& 102\\ 37& 23& 15& 71& 19& 55& 86& 76& 142& 90& 140& 96\\ 29& 65& 31& 53& 61& 7& 126& 98& 80& 120& 94& 92\\ 59& 17& 67& 25& 5& 57& 130& 104& 84& 122& 128& 88\\ 41& 
63& 21& 69& 9& 51& 114& 116& 112& 134& 100& 106 \end{bmatrix}$&$1725$\\\hline $13$&$\begin{bmatrix} 144& 124& 152& 128& 105& 25& 52& 60& 76& 168& 58& 160& 66 \\ 113& 23& 136& 33& 27& 31& 21& 80& 90& 50& 92& 158& 68 \\ 117& 9& 150& 3& 154& 115& 78& 74& 15& 86& 56& 64& 54 \\ 148& 140& 142& 5& 11& 119& 166& 164& 100& 70& 48& 44& 94 \\ 126& 130& 111& 146& 29& 13& 121& 42& 72& 40& 156& 38& 88 \\ 103& 35& 134& 19& 17& 138& 107& 82& 84& 46& 109& 162& 98 \\ 132& 165& 37& 89& 7& 1& 169& 62& 102& 143& 137& 123& 96 \\ 39& 159& 85& 43& 63& 91& 110& 139& 18& 153& 22& 10& 118 \\ 61& 45& 73& 157& 41& 49& 155& 127& 28& 125& 8& 141& 4 \\ 71& 122& 83& 67& 51& 95& 16& 20& 135& 106& 149& 145& 112 \\ 65& 47& 79& 93& 99& 108& 14& 26& 151& 120& 2& 104& 131 \\ 59& 81& 75& 167& 57& 77& 55& 6& 133& 34& 114& 24& 32 \\ 163& 87& 69& 97& 161& 53& 101& 129& 36& 116& 30& 147& 12 \end{bmatrix}$&$2183$\\\hline $14$&$\begin{bmatrix} 165& 161& 141& 159& 151& 131& 103& 4& 88& 76& 64& 16& 74& 66\\ 183& 153& 171& 129& 187& 113& 133& 78& 58& 46& 38& 20& 92& 86\\ 147& 115& 135& 167& 121& 137& 145& 60& 26& 52& 98& 40& 96& 18\\ 155& 163& 107& 189& 125& 105& 127& 62& 30& 22& 48& 68& 90& 80\\ 179& 111& 123& 149& 175& 195& 177& 36& 32& 50& 12& 44& 54& 24\\ 109& 143& 191& 157& 101& 193& 139& 2& 42& 6& 10& 14& 70& 34\\ 173& 99& 117& 169& 185& 181& 119& 94& 28& 56& 72& 82& 8& 84\\ 23& 41& 19& 95& 83& 35& 1& 196& 142& 112& 164& 116& 144& 118\\ 79& 5& 77& 7& 11& 3& 9& 150& 138& 186& 100& 148& 154& 160\\ 53& 29& 33& 87& 63& 57& 47& 152& 106& 182& 190& 122& 156& 188\\ 49& 97& 73& 59& 39& 75& 67& 140& 136& 178& 120& 176& 172& 162\\ 15& 69& 91& 61& 21& 43& 17& 146& 170& 158& 108& 192& 180& 166\\ 31& 55& 25& 51& 89& 37& 85& 174& 110& 104& 132& 102& 130& 194\\ 45& 65& 71& 81& 13& 27& 93& 134& 114& 168& 124& 128& 184& 126 \end{bmatrix}$&$2741$\\\hline $15$&$\begin{bmatrix} 71& 142& 11& 44& 140& 13& 22& 117& 81& 152& 154& 193& 50& 203&156\\ 132& 122& 26& 185& 17& 124& 213& 77& 8& 52& 201& 174& 4& 34&199\\ 100& 32& 209& 
136& 15& 19& 126& 79& 176& 36& 85& 191& 158& 166&197\\ 128& 221& 215& 28& 219& 115& 73& 38& 178& 91& 83& 170& 162& 95&104\\ 187& 217& 42& 134& 113& 130& 69& 144& 182& 146& 54& 60& 48& 56&46\\ 109& 30& 65& 102& 75& 67& 107& 189& 180& 89& 58& 119& 150& 2&148\\ 223& 63& 207& 24& 211& 111& 138& 160& 93& 164& 6& 87& 172& 97&205\\ 120& 105& 9& 40& 196& 183& 1& 225& 21& 99& 64& 62& 195& 10&168\\ 59& 165& 167& 200& 86& 92& 96& 206& 208& 70& 108& 66& 25& 125&74\\ 157& 35& 37& 149& 147& 175& 171& 163& 131& 216& 14& 137& 224& 31&68\\ 181& 55& 90& 192& 84& 49& 47& 103& 106& 123& 184& 222& 214& 112&27\\ 57& 194& 177& 159& 78& 3& 88& 45& 43& 218& 129& 186& 127& 72&41\\ 190& 61& 94& 118& 155& 80& 82& 33& 121& 12& 29& 143& 114& 18&210\\ 51& 98& 5& 161& 53& 7& 145& 20& 23& 16& 39& 212& 101& 135& 110\\ 151& 204& 173& 153& 169& 202& 198& 179& 76& 188& 133& 139& 220& 141&116 \end{bmatrix}$&$3359$\\\hline $16$&$\begin{bmatrix} 189& 131& 179& 219& 195& 139& 133& 237& 102& 124& 54& 78& 48& 14& 64& 122\\ 163& 149& 209& 155& 191& 175& 233& 231& 2& 50& 46& 110& 90& 82& 58& 56\\ 135& 151& 215& 199& 243& 207& 187& 221& 84& 34& 98& 24& 32& 62& 70& 80\\ 193& 253& 165& 249& 247& 217& 143& 235& 52& 26& 100& 40& 96& 116& 36& 28\\ 201& 245& 183& 251& 223& 213& 145& 241& 112& 76& 66& 104& 8& 120& 94& 4\\ 161& 185& 229& 147& 239& 171& 197& 203& 114& 126& 42& 86& 44& 60& 106& 68\\ 255& 173& 211& 157& 177& 181& 159& 153& 6& 16& 92& 128& 38& 118& 20& 72\\ 137& 225& 227& 141& 129& 205& 167& 169& 88& 74& 12& 18& 22& 30& 10& 108\\ 67& 51& 69& 127& 23& 47& 79& 1& 256& 144& 236& 240& 142& 228& 224& 158\\ 87& 117& 53& 121& 33& 31& 111& 73& 166& 222& 226& 214& 200& 230& 234& 250\\ 65& 101& 123& 29& 5& 125& 19& 11& 198& 164& 206& 184& 156& 246& 170& 130\\ 39& 3& 105& 13& 21& 91& 15& 103& 204& 162& 140& 238& 192& 252& 150& 136\\ 81& 85& 95& 45& 59& 35& 61& 109& 190& 182& 212& 244& 254& 194& 202& 220\\ 25& 41& 83& 17& 75& 7& 97& 27& 174& 216& 138& 248& 178& 208& 132& 188\\ 57& 9& 77& 63& 99& 71& 89& 43& 180& 172& 
160& 168& 218& 154& 148& 134\\ 113& 115& 107& 55& 119& 49& 93& 37& 196& 242& 186& 232& 210& 176& 152& 146 \end{bmatrix}$&$4093$\\\hline \end{tabular}} \caption{Orderings of $P_n^2$ such that $O(P_n^2,\sigma)=\seqnum{A179094}(n)$} \label{table:223} \end{table} \begin{table} \centering \resizebox{0.86\textwidth}{!}{ \begin{tabular}{ |c|c|c| } \hline $n$ & $\sigma$ & $O(P_n^2, \sigma)$\\ \hline $17$&$\begin{bmatrix} 230& 226& 146& 142& 202& 144& 62& 206& 178& 12& 44& 34& 101& 24& 42& 26& 28\\ 111& 56& 208& 228& 58& 121& 194& 188& 50& 14& 6& 18& 287& 85& 95& 83& 4\\ 184& 113& 168& 107& 214& 105& 170& 196& 166& 279& 67& 8& 69& 281& 242& 93& 162\\ 176& 238& 264& 218& 180& 140& 119& 198& 220& 103& 244& 46& 65& 40& 16& 248& 75\\ 256& 266& 60& 174& 136& 222& 115& 172& 275& 36& 246& 128& 71& 32& 164& 30& 97\\ 216& 260& 212& 154& 148& 52& 182& 258& 240& 79& 273& 22& 73& 254& 285& 283& 250\\ 109& 200& 236& 150& 117& 262& 210& 123& 156& 126& 81& 134& 99& 77& 160& 38& 87\\ 192& 224& 54& 190& 268& 232& 152& 204& 270& 252& 20& 48& 91& 158& 10& 2& 132\\ 186& 82& 234& 138& 274& 31& 64& 1& 289& 130& 89& 141& 193& 271& 124& 199& 277\\ 84& 247& 33& 276& 253& 127& 7& 131& 104& 259& 239& 221& 51& 223& 237& 263& 139\\ 35& 3& 15& 47& 39& 102& 11& 23& 181& 191& 233& 118& 229& 179& 213& 112& 231\\ 272& 78& 92& 5& 13& 68& 27& 21& 286& 195& 108& 155& 116& 209& 177& 197& 110\\ 90& 41& 72& 251& 45& 163& 37& 288& 133& 205& 211& 137& 147& 267& 265& 187& 143\\ 94& 80& 249& 165& 159& 245& 129& 49& 135& 120& 61& 217& 185& 122& 53& 171& 114\\ 278& 243& 86& 9& 70& 66& 161& 17& 255& 55& 175& 167& 189& 215& 63& 203& 145\\ 280& 282& 19& 100& 25& 29& 74& 125& 57& 183& 269& 153& 235& 227& 106& 257& 225\\ 284& 76& 96& 98& 241& 43& 157& 88& 151& 207& 169& 149& 219& 173& 201& 59& 261 \end{bmatrix}$&$4895$\\\hline $18$&$\begin{bmatrix} 181& 299& 183& 179& 203& 221& 271& 205& 193& 54& 130& 96& 32& 8& 30& 64& 76& 156\\ 265& 217& 169& 171& 191& 219& 225& 317& 215& 150& 66& 68& 102& 108& 16& 18& 152& 22\\ 285& 273& 
165& 229& 243& 197& 255& 235& 307& 50& 138& 48& 134& 126& 154& 98& 84& 26\\ 315& 297& 257& 245& 185& 237& 321& 305& 269& 158& 34& 12& 88& 142& 70& 122& 52& 110\\ 249& 177& 283& 239& 199& 303& 289& 173& 267& 36& 124& 4& 82& 140& 100& 2& 10& 114\\ 323& 233& 189& 209& 277& 207& 281& 163& 313& 38& 74& 86& 92& 160& 90& 58& 80& 60\\ 291& 167& 311& 213& 211& 247& 301& 223& 251& 14& 144& 40& 128& 118& 94& 136& 106& 20\\ 279& 227& 293& 187& 201& 259& 319& 231& 263& 132& 146& 72& 78& 28& 56& 44& 24& 116\\ 295& 241& 309& 253& 275& 287& 195& 175& 261& 148& 162& 62& 42& 6& 120& 46& 104& 112\\ 9& 137& 153& 161& 33& 13& 119& 111& 1& 324& 188& 240& 194& 284& 206& 218& 226& 210\\ 39& 61& 73& 113& 47& 69& 89& 7& 41& 224& 192& 270& 248& 200& 264& 260& 322& 164\\ 93& 31& 105& 63& 159& 3& 129& 147& 127& 290& 258& 190& 254& 180& 266& 252& 278& 234\\ 29& 83& 121& 79& 95& 5& 149& 17& 43& 220& 272& 316& 304& 294& 320& 208& 182& 170\\ 123& 15& 143& 115& 155& 49& 101& 145& 99& 292& 244& 276& 232& 178& 228& 230& 196& 166\\ 97& 141& 109& 91& 27& 65& 157& 11& 45& 268& 186& 246& 176& 318& 216& 306& 296& 242\\ 103& 87& 117& 81& 55& 151& 53& 37& 51& 280& 198& 288& 238& 262& 168& 202& 282& 308\\ 107& 35& 57& 125& 75& 133& 71& 131& 139& 214& 312& 256& 286& 212& 298& 172& 314& 204\\ 67& 19& 77& 85& 23& 25& 135& 59& 21& 302& 250& 174& 184& 300& 236& 222& 274& 310 \end{bmatrix}$&$5829$\\\hline $19$&$\begin{bmatrix} 360& 221& 219& 285& 310& 213& 356& 337& 308& 50& 39& 291& 279& 68& 104& 312& 287& 176& 21\\ 251& 243& 352& 92& 239& 30& 42& 269& 241& 190& 283& 297& 84& 116& 316& 349& 194& 163& 322\\ 333& 215& 138& 140& 28& 275& 46& 26& 146& 237& 301& 56& 19& 64& 108& 70& 210& 314& 88\\ 132& 94& 261& 233& 44& 267& 263& 341& 152& 320& 345& 62& 192& 112& 76& 180& 303& 202& 206\\ 148& 160& 144& 130& 249& 126& 229& 257& 134& 277& 15& 110& 204& 37& 102& 347& 198& 326& 178\\ 34& 136& 265& 158& 227& 339& 273& 271& 245& 98& 196& 90& 80& 13& 118& 106& 305& 23& 324\\ 235& 335& 156& 358& 354& 150& 32& 122& 331& 6& 52& 
11& 100& 54& 78& 295& 82& 299& 66\\ 171& 253& 124& 154& 167& 329& 128& 217& 231& 142& 289& 184& 2& 186& 4& 114& 182& 120& 281\\ 247& 165& 48& 8& 255& 173& 169& 223& 96& 343& 58& 208& 86& 74& 200& 318& 188& 293& 72\\ 306& 24& 40& 350& 259& 77& 175& 10& 1& 361& 60& 35& 161& 224& 248& 17& 212& 226& 328\\ 59& 191& 187& 117& 63& 105& 296& 65& 111& 133& 127& 141& 274& 172& 270& 307& 240& 236& 256\\ 179& 5& 282& 61& 321& 71& 107& 12& 22& 286& 47& 139& 97& 232& 123& 33& 135& 266& 125\\ 99& 57& 209& 20& 298& 211& 302& 323& 75& 53& 157& 262& 41& 260& 168& 27& 9& 220& 330\\ 69& 109& 225& 203& 327& 294& 113& 348& 89& 164& 45& 166& 159& 155& 131& 95& 258& 338& 222\\ 85& 119& 288& 325& 344& 38& 278& 16& 83& 311& 214& 234& 170& 129& 268& 334& 244& 151& 143\\ 346& 201& 319& 18& 199& 185& 197& 81& 181& 284& 147& 137& 230& 342& 246& 242& 309& 31& 357\\ 177& 73& 313& 292& 87& 36& 67& 205& 51& 121& 149& 272& 332& 359& 43& 145& 340& 25& 153\\ 115& 315& 14& 79& 3& 290& 101& 300& 193& 91& 254& 353& 49& 355& 238& 93& 174& 250& 351\\ 317& 280& 195& 183& 162& 103& 189& 55& 304& 207& 218& 276& 7& 228& 29& 264& 216& 336& 252 \end{bmatrix}$&$6839$\\\hline $20$&$\begin{bmatrix} 237& 305& 325& 377& 303& 327& 255& 295& 235& 209& 158& 44& 52& 136& 162& 86& 64& 104& 148& 56\\ 229& 339& 387& 257& 347& 353& 399& 239& 207& 285& 34& 160& 58& 60& 134& 36& 146& 180& 112& 78\\ 357& 385& 251& 369& 263& 269& 287& 297& 201& 243& 82& 22& 18& 144& 194& 88& 62& 38& 6& 26\\ 309& 313& 225& 361& 271& 307& 365& 397& 281& 215& 108& 138& 102& 40& 72& 50& 54& 198& 98& 20\\ 389& 315& 323& 245& 383& 373& 381& 291& 275& 317& 68& 42& 170& 184& 196& 30& 94& 120& 150& 48\\ 233& 341& 351& 301& 299& 219& 273& 395& 249& 359& 192& 10& 124& 100& 4& 66& 164& 32& 154& 92\\ 293& 363& 335& 393& 283& 343& 217& 349& 267& 311& 176& 24& 80& 110& 12& 172& 16& 188& 142& 2\\ 391& 289& 211& 247& 337& 371& 345& 231& 265& 329& 156& 132& 182& 178& 46& 90& 70& 122& 190& 74\\ 355& 223& 367& 331& 221& 205& 259& 319& 253& 333& 186& 116& 84& 114& 
126& 128& 152& 14& 28& 118\\ 213& 279& 379& 277& 375& 321& 203& 241& 261& 227& 140& 166& 200& 96& 76& 130& 174& 106& 168& 8\\ 95& 131& 107& 47& 183& 19& 187& 161& 97& 1& 400& 348& 284& 248& 294& 276& 344& 260& 334& 390\\ 167& 159& 17& 15& 127& 105& 139& 29& 133& 109& 396& 336& 308& 374& 202& 376& 364& 302& 206& 288\\ 119& 49& 181& 153& 87& 85& 185& 67& 11& 39& 272& 380& 256& 246& 316& 314& 326& 214& 378& 320\\ 149& 125& 3& 111& 93& 169& 177& 113& 197& 101& 222& 306& 264& 282& 240& 230& 346& 210& 250& 244\\ 83& 171& 73& 59& 115& 13& 145& 27& 79& 99& 332& 392& 208& 234& 388& 386& 218& 370& 290& 352\\ 65& 35& 63& 193& 37& 55& 89& 199& 191& 123& 280& 328& 236& 304& 330& 300& 362& 342& 312& 310\\ 45& 173& 165& 57& 147& 121& 61& 155& 77& 163& 226& 298& 356& 262& 292& 254& 242& 360& 322& 394\\ 179& 7& 69& 91& 31& 43& 33& 117& 81& 71& 224& 384& 366& 296& 252& 286& 354& 238& 318& 220\\ 21& 9& 75& 41& 137& 151& 103& 129& 23& 51& 358& 216& 278& 324& 382& 274& 258& 350& 372& 266\\ 5& 157& 135& 141& 25& 195& 175& 53& 189& 143& 204& 270& 212& 228& 338& 232& 368& 268& 398& 340 \end{bmatrix}$&$7997$\\\hline \end{tabular}} \caption{Orderings of $P_n^2$ such that $O(P_n^2,\sigma)=\seqnum{A179094}(n)$} \label{table:224} \end{table} \iffalse Lollipop graph Tadpole graph \fi \end{document}
\begin{document} \begin{titlepage} \newgeometry{width=175mm, height=247mm} \title{\textbf{\Large Entanglement of a two--qutrit Heisenberg XXZ model with Herring–Flicker coupling and Dzyaloshinskii-Moriya interaction in homogeneous magnetic field} \thispagestyle{empty} \begin{abstract} In this study, we use the concept of negativity to characterize the entanglement of a two--qutrit Heisenberg XXZ model for subject to a uniform magnetic field and z--axis Dzyaloshinskii--Moriya (DM) interaction with Herring-Flicker (HF) coupling. By varying the temperature, magnetic field, DM interaction, and distance of HF coupling. We find that the state system becomes less entangled at high temperatures or in strong magnetic fields, and vice versa. Our findings also suggest that entanglement rises when the z--axis DM interaction increases. Finally, HF coupling affects the degree of entanglement. For example, when HF coupling and temperature are at small values, the degree of entanglement is at its highest, but when HF coupling is at substantial values, the degree of intricacy tends to stabilize. \end{abstract} \noindent PACS numbers: 03.65.Ud, 03.67-a, 75.10Jm \noindent Keywords: Two--qutrit, quantum entanglement, negativity, Heisenberg model, Herring–Flicker coupling, Dzyaloshinskii-Moriya interaction, density matrix. \end{titlepage} \section{Introduction} Entanglement is one of the most intriguing aspects of quantum physics \cite{Einstein}. It happens when particles interact in a manner that makes it impossible to characterize each particle’s quantum state separately. With implementations in quantum communication and teleportation \cite{Houca1,Bennett,Ekert}, quantum information processing \cite{Divincenzo}, quantum dense coding, and the application of various quantum protocols \cite{Grover,Shor,Khan}, quantum entanglement uses are a precious resource for research that cannot be done using conventional resources. 
Entanglement has also been achieved successfully in different fields of physics. For instance, quantum logic operations and communication schemes have been demonstrated with trapped ions \cite{Kielpinski}, nuclear spins of organic molecules \cite{Bogani}, semiconductor devices \cite{Chtchelkatchev}, and atom chips \cite{Folman}. Determining whether a given quantum state is entangled or not, as well as selecting the optimal way to quantify the degree of entanglement, are significant challenges. In light of this, one of the most critical issues in the realm of quantum information is the quantification and characterization of the degree of entanglement. When the quantum system is in a pure state, the notion of entanglement is simpler and easier to comprehend. In contrast, the characterization of the complete entanglement properties of mixed states is a challenging and unanswered mathematical problem. Several measures, such as negativity \cite{Peres,Horodecki,Zyczkowski,Vidal}, have been suggested to quantify entanglement. Quantum entanglement in condensed matter systems is, as is well known, a significant field of research. Various studies on quantum entanglement have been carried out on the thermal equilibrium states of spin chains subjected to an external magnetic field at a fixed temperature \cite{Amesen,WangX.G,ZhouL,ZhangG.F,Guo1}. Moreover, two-qubit quantum correlations with the DM interaction receive much attention from researchers \cite{Tufarelli,YaoY1,Song23,Yao2,Liu23}. In addition, the Heisenberg model has been used to study entanglement, and a number of important works have been produced, including the isotropic Heisenberg XX model \cite{Wang X G33,Wang X G34}, the XXX model \cite{Houca2,Nielsen,Arnesen}, the anisotropic Heisenberg XY model \cite{Kamta G L,Wang X G35}, the fully anisotropic Heisenberg XYZ model \cite{Zhou L36}, as well as recent work on spin--$1/2$ systems \cite{William K}.
Unfortunately, since entanglement measures for higher-spin systems are scarce, entanglement in spin--1 systems has received less attention. Vidal and Werner established a measure of entanglement called negativity, which may be applied to higher-spin systems \cite{Vidal G39,Schliemann40}. Using the notion of negativity, Wang et al. arrived at analytical conclusions about entanglement in a spin--1 chain \cite{Sun44,Wang45}. We now describe a few studies by various authors in which the coupling strength $J$ is a function of position in various spin-chain configurations. Indeed, due to quantum fluctuations and noise, the coupling strength in quantum systems may, in practical scenarios, be regarded as a function of the distance separating the quantum particles. This approach could be utilized to improve quantum implementations. In $1988$, Haldane and Shastry \cite{hal,sha} presented the first effort to explore entanglement in spin chains with long-range interactions, where the coupling strength decays as the inverse square of the distance. In a similar manner, B. Lin et al. \cite{lin} investigated the XXZ Heisenberg chain with long-range interactions, whereas M. XiaoSan et al. \cite{xia} investigated the XX Heisenberg model with Calogero--Moser interactions. Motivated by these works and by the significance of HF coupling in quantum information physics for determining the degree of entanglement between particles, we will investigate the thermal entanglement of a two-spin--1 Heisenberg XXZ system with z-axis DM interaction and HF coupling, in the presence of a uniform magnetic field. To the best of our knowledge, this study offers the first result on thermal quantum correlations, as measured by negativity, for two qutrits as a function of the HF coupling distance.
This paper is organized as follows: Section {\color{blue}2} presents in depth the theoretical model of our system, exposed to the DM interaction and a magnetic field with HF coupling; in addition, we establish the spectrum of the system. Section {\color{blue}3} addresses thermal quantum entanglement as measured by negativity. Section {\color{blue}4} contains numerical studies that highlight the behavior of the system. Lastly, the investigation is concluded with a summary of the results. \section{Hamiltonian of the system} In this study, we consider a two--qutrit (spin--1) Heisenberg XXZ model with Herring--Flicker (HF) coupling and z--axis DM interaction, exposed to a uniform magnetic field. The Hamiltonian of the system is expressed by \begin{equation} \mathcal{H}=\mathcal{H}_H+\mathcal{H}_{DM}+\mathcal{H}_{Z} \end{equation} where $\mathcal{H}_H$ corresponds to the XXZ Heisenberg chain, $\mathcal{H}_{DM}$ indicates the DM interaction, and $\mathcal{H}_{Z}$ refers to the Zeeman energy. Usually, the DM Hamiltonian $\mathcal{H}_{DM}$ \cite{Moriya1,Moriya2} can be represented as follows \begin{eqnarray} \mathcal{H}_{DM} &=& \overrightarrow{D}\cdot \left(\overrightarrow{\sigma}_1\times\overrightarrow{\sigma}_{2}\right) \\ \nonumber &=& D_x\left(\sigma_1^y\sigma_{2}^z-\sigma_1^z\sigma_{2}^y\right)+D_y\left(\sigma_1^z\sigma_{2}^x-\sigma_1^x\sigma_{2}^z\right)+ D_z\left(\sigma_1^x\sigma_{2}^y-\sigma_1^y\sigma_{2}^x\right) \end{eqnarray} where $\sigma^{x, y, z}$ denote the spin--1 matrices and $D_{x, y, z}$ the components of the DM interaction. In the present study, we limit ourselves to the case where the DM interaction points along the z--axis.
Then, our Hamiltonian has the form \begin{equation}\label{fr} \mathcal{H}=J \left(\sigma_1^x\sigma_{2}^x+\sigma_1^y\sigma_{2}^y+\gamma\sigma_1^z\sigma_{2}^z\right)+D_z \left(\sigma_1^x\sigma_{2}^y-\sigma_1^y\sigma_{2}^x\right)+B \left(\sigma_{1}^z+\sigma_2^z\right) \end{equation} where $\sigma^m\ (m=x,y,z)$ denotes the spin--1 matrices given by \begin{equation} \sigma^x={1\over\sqrt{2}}\left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \\ \end{array} \right) ,\quad \sigma^y={1\over\sqrt{2}}\left( \begin{array}{ccc} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \\ \end{array} \right),\quad \sigma^z=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \\ \end{array} \right) \end{equation} and $D_z$ denotes the DM interaction along the z-axis. It should be noted that $J$ represents the coupling between the two spins: when $J > 0$ the chain is antiferromagnetic, whereas when $J< 0$ it is ferromagnetic; $\gamma$ is the anisotropy parameter. Furthermore, we assume that the coupling $J$ is an HF coupling, i.e., $J=J(R)$, defined by \begin{equation} J(R)=1.642\exp(-2R)R^{5\over2}+O(R^2\exp(-2R)) \end{equation} \begin{figure} \caption{(Color online) The Herring--Flicker coupling $J$ in terms of the distance $R$ between the two spins.} \label{figure1} \end{figure} where $R$ is the HF coupling distance. The graph of the function $J$ in terms of $R$ is shown in Figure \ref{figure1}. From Fig. \ref{figure1}, it is evident that the HF coupling is negligible when the spins are far apart ($R> 6$) and has a maximum when they are closer ($R\simeq1.3$), implying that the spins become free at greater distances. As a result, in this study we restrict ourselves to the range where $J$ is non-negligible, $0< R<6$.
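The stated behavior of $J(R)$ can be illustrated with a short numerical sketch (ours, not the paper's; only the leading term of $J(R)$ is kept). The maximum sits at $R=5/4$, where $\mathrm{d}J/\mathrm{d}R=0$ for the leading term, and the coupling is negligible beyond $R\simeq 6$:

```python
import math

def herring_flicker(R):
    """Leading term of the Herring-Flicker exchange coupling J(R)."""
    return 1.642 * math.exp(-2.0 * R) * R ** 2.5

# Scan the physically relevant range 0 < R < 6 on a fine grid.
grid = [0.01 * k for k in range(1, 600)]
R_max = max(grid, key=herring_flicker)

print(round(R_max, 2))              # 1.25, i.e. the maximum near R = 1.3
print(herring_flicker(6.0) < 1e-3)  # True: the coupling is negligible for R > 6
```

Differentiating $\ln J = \ln 1.642 - 2R + \tfrac{5}{2}\ln R$ gives $-2 + \tfrac{5}{2R} = 0$, i.e. $R = 5/4$, in agreement with the grid search.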
To evaluate the spectrum of the Hamiltonian \eqref{fr}, it is convenient to give the matrix form of $\mathcal{H}$ in the basis $|-1,-1\rangle$, $|-1,0\rangle$, $|-1,1\rangle$, $|0,-1\rangle$, $|0,0\rangle$, $|0,1\rangle$, $|1,-1\rangle$, $|1,0\rangle$, $|1,1\rangle$ as \begin{equation}\label{1} \mathcal{H}=\left( \begin{array}{ccccccccc} \gamma J(R)+2 B & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & B & 0 & re^{i \theta } & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -\gamma J(R) & 0 & re^{i \theta } & 0 & 0 & 0 & 0 \\ 0 & re^{-i \theta } & 0 & B & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & re^{-i \theta } & 0 & 0 & 0 & re^{i \theta } & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -B & 0 & re^{i \theta } & 0 \\ 0 & 0 & 0 & 0 & re^{-i \theta } & 0 & -\gamma J(R) & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & re^{-i \theta } & 0 & -B & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \gamma J(R)-2 B \\ \end{array} \right) \end{equation} where the quantities $r$ and $\theta$ are defined by \begin{eqnarray} r &=& \sqrt{D_z^2+J(R)^2} \\ \theta &=& \arctan\left(\frac{D_z}{J(R)}\right) \end{eqnarray} The eigenvalues and eigenvectors of the Hamiltonian \eqref{1} are given by \begin{eqnarray} \epsilon_{1,2} &=&B \pm r \label{22}\\ \epsilon_{3,4} &=& \gamma J(R) \pm2 B\label{222}\\ \epsilon_{5} &=& -\gamma J(R) \\ \epsilon_{6,7}&=&-B\pm r\\ \epsilon_{8,9}&=&\pm\frac{r }{2}\chi_{2,1} \end{eqnarray} where $\chi_{1,2}=\frac{\sqrt{\gamma ^2 J(R)^2+8 r^2}\pm\gamma J(R)}{r}$, and the related eigenvectors are \begin{eqnarray}\label{psi} |\varphi_{1,2}\rangle &=& \pm\frac{e^{i \theta }}{\sqrt{2}}|-1,0\rangle +\frac{1}{\sqrt{2}}|0,-1\rangle \\ \nonumber |\varphi_{3,4}\rangle &=& |\mp1,\mp1\rangle \\ \nonumber |\varphi_5\rangle &=& -\frac{e^{2 i \theta }}{\sqrt{2}}|-1,1\rangle+\frac{1}{\sqrt{2}}|1,-1\rangle \\ \nonumber |\varphi_{6,7}\rangle &=&\pm\frac{e^{i \theta }}{\sqrt{2}}|0,1\rangle+\frac{1}{\sqrt{2}}|1,0\rangle\\ \nonumber |\varphi_{8,9}\rangle &=&\frac{2 e^{2 i \theta }}{\sqrt{\chi _{1,2}^2+8}}|-1,1\rangle\pm\frac{e^{i \theta } \chi _{1,2}}{\sqrt{\chi _{1,2}^2+8}}|0,0\rangle+\frac{2}{\sqrt{\chi
_{1,2}^2+8}}|1,-1\rangle \end{eqnarray} Having obtained the spectrum of our system, it is straightforward to calculate the thermal density matrix, which is essential for quantifying the entanglement of the system under examination. To this end, the following section is devoted to calculating the density matrix in terms of the system's parameters. \section{Thermal density matrix and negativity} After determining the system's spectrum, we derive the expression of the density matrix $\varrho(T)$, which enables us to quantify the negativity at thermodynamic equilibrium at a given temperature $T$. According to the above-described framework, the system's state at thermal equilibrium may be expressed as follows \begin{equation} \varrho(T)={1\over\mathbb{Z}}e^{-\beta \mathcal{H}} \end{equation} where the canonical partition function $\mathbb{Z}$ is given by \begin{equation} \mathbb{Z}=\sum_{i=1}^9 e^{-\beta \epsilon_i} \end{equation} Here $\beta=1/(k_B T)$, where the Boltzmann constant $k_B$ is set to unity for convenience in what follows, so that $\beta=1/T$.
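Before proceeding, the closed-form spectrum can be cross-checked numerically. The sketch below is our own (not from the paper), with the representative choice $J=\gamma=D_z=B=1$; note that the lower level of the pair $\epsilon_{8,9}$ carries the minus sign, $\epsilon_9=-\tfrac{r}{2}\chi_1$, as required for consistency with the thermal density matrix elements below. The Hamiltonian is built from Kronecker products of the spin--1 matrices, so the comparison is basis-independent:

```python
import numpy as np

# Spin-1 matrices of Eq. (4), basis |1>, |0>, |-1>.
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0])
I3 = np.eye(3)

J, gamma, Dz, B = 1.0, 1.0, 1.0, 1.0  # representative values (our choice)

H = (J * (np.kron(sx, sx) + np.kron(sy, sy) + gamma * np.kron(sz, sz))
     + Dz * (np.kron(sx, sy) - np.kron(sy, sx))
     + B * (np.kron(sz, I3) + np.kron(I3, sz)))

r = np.hypot(J, Dz)
root = np.sqrt((gamma * J) ** 2 + 8 * r ** 2)
analytic = sorted([B + r, B - r, gamma * J + 2 * B, gamma * J - 2 * B,
                   -gamma * J, -B + r, -B - r,
                   (root - gamma * J) / 2,    # eps_8 = +(r/2) chi_2
                   -(root + gamma * J) / 2])  # eps_9 = -(r/2) chi_1
numeric = sorted(np.linalg.eigvalsh(H))

print(np.allclose(analytic, numeric))  # True
```

Only the set of eigenvalues is compared, so the ordering convention of the basis states is immaterial.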
The spectrum of the Hamiltonian \eqref{1} allows expressing the thermal density matrix $\varrho(T)$ as \begin{equation}\label{3} \varrho(T)={1\over\mathbb{Z}}\sum_{k=1}^{9}e^{-\beta \epsilon_k}|\varphi_k\rangle\langle\varphi_k| \end{equation} Substituting the eigenvalues \eqref{22}--\eqref{222} and the eigenvectors \eqref{psi} into equation \eqref{3} yields the density matrix of the system in thermal equilibrium, which may be written in the above basis as \begin{equation}\label{43} \varrho(T)={1\over\mathbb{Z}}\left( \begin{array}{ccccccccc} \varrho _{11} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \varrho _{22} & 0 & e^{i \theta } \varrho _{24} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \varrho _{33} & 0 & e^{i \theta } \varrho _{35} & 0 & e^{2 i \theta } \varrho _{37} & 0 & 0 \\ 0 & e^{-i \theta } \varrho _{42} & 0 & \varrho _{44} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & e^{-i \theta } \varrho _{53} & 0 & \varrho _{55} & 0 & e^{i \theta } \varrho _{57} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \varrho _{66} & 0 & e^{i \theta } \varrho _{68} & 0 \\ 0 & 0 & e^{-2 i \theta } \varrho _{73} & 0 & e^{-i \theta } \varrho _{75} & 0 & \varrho _{77} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & e^{-i \theta } \varrho _{86} & 0 & \varrho _{88} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \varrho _{99} \\ \end{array} \right) \end{equation} where the density matrix elements are given by \begin{eqnarray}\label{23} \varrho _{11}&=&e^{-\beta (2 B+\gamma J)}\\ \nonumber \varrho _{22}&=&\varrho _{44}=e^{-\beta B} \cosh (\beta r) \\ \nonumber \varrho _{33}&=&\varrho _{77}=\frac{1}{2} e^{\beta \gamma J}+\frac{4 e^{-\frac{\beta r \chi_2}{2} }}{\chi_1^2+8}+\frac{4 e^{\frac{\beta r \chi_1}{2}}}{\chi_2^2+8} \\ \nonumber \varrho _{55}&=&\frac{\chi _1^2 e^{-\frac{1}{2} \beta r \chi _2}}{\chi _1^2+8}+\frac{\chi _2^2 e^{\frac{1}{2} \beta r \chi _1}}{\chi _2^2+8}\\ \nonumber \varrho _{37}&=&\varrho _{73}=\frac{1}{2} \left(-e^{\beta \gamma J}+\frac{8 e^{-\frac{\beta r \chi_2}{2} }}{\chi_1^2+8}+\frac{8 e^{\frac{\beta r \chi_1}{2}}}{\chi_2^2+8}\right) \\ \nonumber \varrho _{35}&=&\varrho _{53}=\varrho _{57}=\varrho _{75}=-\frac{4 \left(e^{\frac{\beta \gamma
J}{2}} \sinh \left(\frac{1}{4} \beta r (\chi_1+\chi_2)\right)\right)}{\chi_1+\chi_2}\\ \nonumber \varrho _{66}&=&\varrho _{88}=e^{\beta B} \cosh (\beta r)\\ \nonumber \varrho _{24}&=&\varrho _{42}=-e^{-\beta B} \sinh (\beta r) \\ \nonumber \varrho _{68}&=&\varrho _{86}=-e^{\beta B} \sinh (\beta r)\\ \nonumber \varrho _{99}&=&e^{\beta(2B-\gamma J)} \nonumber \end{eqnarray} The negativity, based on the partial-transposition criterion \cite{Peres}, is a useful entanglement measure introduced by Vidal et al. \cite{Vidal G39}, and it can be computed efficiently for a bipartite system of any dimension. We shall use the method of \cite{Miranowicz}, defined by \begin{equation} \mathcal{N}(\varrho)=\sum_j|\lambda_j| \end{equation} where $\lambda_j$ denotes the negative eigenvalues of the partial transpose $\varrho^{T_1}$ ($\varrho^{T_2}$) of the given state $\varrho$ with respect to the first (second) subsystem. In our case, $\varrho^{T_1}$ is given by \begin{equation} \varrho^{T_1}={1\over\mathbb{Z}}\left( \begin{array}{ccccccccc} \varrho _{11} & 0 & 0 & 0 & e^{i \theta } \varrho _{24} & 0 & 0 & 0 & e^{2 i \theta } \varrho _{37} \\ 0 & \varrho _{22} & 0 & 0 & 0 & e^{i \theta } \varrho _{35} & 0 & 0 & 0 \\ 0 & 0 & \varrho _{33} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \varrho _{44} & 0 & 0 & 0 & e^{i \theta } \varrho _{57} & 0 \\ e^{-i \theta } \varrho _{42} & 0 & 0 & 0 & \varrho _{55} & 0 & 0 & 0 & e^{i \theta } \varrho _{68} \\ 0 & e^{-i \theta } \varrho _{53} & 0 & 0 & 0 & \varrho _{66} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \varrho _{77} & 0 & 0 \\ 0 & 0 & 0 & e^{-i \theta } \varrho _{75} & 0 & 0 & 0 & \varrho _{88} & 0 \\ e^{-2 i \theta } \varrho _{73} & 0 & 0 & 0 & e^{-i \theta } \varrho _{86} & 0 & 0 & 0 & \varrho _{99} \\ \end{array} \right). \end{equation} We now have everything we need to investigate the behavior of our system.
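As a numerical companion to the definition above (our own sketch with the representative choice $J=\gamma=D_z=B=1$, not code from the paper), the negativity of the thermal state can be computed directly by exponentiating the Hamiltonian and partially transposing the first qutrit:

```python
import numpy as np

def thermal_negativity(H, T):
    """N(rho) for rho = exp(-H/T)/Z, partial transpose on the first qutrit."""
    w, V = np.linalg.eigh(H)
    rho = (V * np.exp(-w / T)) @ V.conj().T
    rho /= np.trace(rho).real
    # Partial transpose on qutrit 1: swap its row/column indices in the
    # (3, 3, 3, 3)-reshaped density matrix.
    pt = rho.reshape(3, 3, 3, 3).transpose(2, 1, 0, 3).reshape(9, 9)
    eigs = np.linalg.eigvalsh(pt)
    return float(-eigs[eigs < 0].sum())

# Spin-1 matrices and model Hamiltonian with J = gamma = Dz = B = 1 (our choice).
sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
sz = np.diag([1.0, 0.0, -1.0])
I3 = np.eye(3)
H = (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)
     + np.kron(sx, sy) - np.kron(sy, sx)
     + np.kron(sz, I3) + np.kron(I3, sz))

print(thermal_negativity(H, 0.05) > thermal_negativity(H, 1.0))  # True
print(thermal_negativity(H, 100.0) < 1e-3)                       # True
```

This routine reproduces the qualitative trends discussed below: the negativity decreases with rising temperature and vanishes in the high-temperature limit, where the state approaches the maximally mixed (separable) state.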
To do this, we devote the following section to a numerical analysis of the negativity $\mathcal{N}$, presenting graphs in terms of the parameters of the system under consideration: the temperature $T$, the HF coupling distance $R$, the DM interaction $D_z$, and the uniform magnetic field $B$. \section{Numerical results} This section quantitatively investigates several aspects of entanglement in a two--qutrit Heisenberg XXZ chain with z--axis DM interaction in terms of the HF coupling distance $R$ for various temperatures and magnetic field values. Firstly, we explore the negativity $\mathcal{N}$ as a function of the temperature $T$ for different values of $R$ and of the magnetic field, fixing the z--axis DM interaction at $D_z=1$. Secondly, for a given temperature $T$, coupling distance $R$, and magnetic field $B$, we display the negativity $\mathcal{N}$ as a function of the z--axis DM interaction. Finally, we again fix the z--axis DM interaction at $D_z=1$ and plot the negativity $\mathcal{N}$ as a function of the HF coupling distance $R$ and of the magnetic field $B$ for different temperatures $T$. \begin{figure} \caption{(Color online) (a) The negativity versus $T$ for different $R$, with $B=D_z=1$. (b) The negativity versus $T$ for different $B$, with $D_z=1$ and $R=0.5$.} \label{f1} \end{figure} In Figure \ref{f1}, we plot the negativity $\mathcal{N}$ as a function of the temperature for different HF coupling distances $R$ (Fig. \ref{f1}(a)) and magnetic fields $B$ (Fig. \ref{f1}(b)). In Fig. \ref{f1}(a), the negativity is plotted as a function of the temperature $T$ for $R = 0.3, 0.6, 0.9$, the other parameters being set to $D_z=B = 1$. At high temperatures, the negativity vanishes, which means that the system's state becomes separable.
Moreover, for a fixed value of $T$, the negativity increases with increasing HF coupling distance $R$. From Fig. \ref{f1}(b), when $B = 0$ the entangled state $|\varphi_9\rangle$ is the ground state, with eigenvalue $-\frac{r}{2}\chi_1$ and maximal entanglement, i.e., $\mathcal{N} = 0.9616$. Furthermore, as $T$ rises, the negativity decreases owing to the mixing of other states with this maximally entangled ground state. For strong magnetic fields ($B =1.2$), the state $|1,1\rangle$ becomes the ground state, meaning that there is no entanglement at $T = 0$. As a result, as $T$ grows, the maximally entangled states mix with the unentangled state $|1,1\rangle$, hence increasing the entanglement. For a given value of $T$, the negativity decreases with rising magnetic field $B$, reaching zero at high temperatures. \begin{figure} \caption{(Color online) The negativity versus z-axis DM interaction. (a) for different $T$ ($R=B=1$), (b) for different $R$ ($T=0.08$ and $B=0.5$), (c) for different $B$ ($T=0.08$ and $R=0.5$).} \label{f2} \end{figure} Figure \ref{f2} shows how the entanglement changes with $D_z$ in the two--particle antiferromagnetic XXZ model $(J>0)$ for different parameters ($T$, $R$ and $B$). The first remark is that the negativity is symmetric with respect to $D_z=0$; for this reason, we discuss only positive values of $D_z$. The negativity remains zero for $D_z$ below a critical value $D_z^c$, which varies with the parameters $T$, $R$ and $B$. Increasing $T$ or $R$ decreases the critical value $D_z^c$ (Figs. \ref{f2}(a) and (b)). On the other hand, when the magnetic field increases, the critical value $D_{z}^{c}$ increases (Fig. \ref{f2}(c)). In addition, we observe a peak of negativity whose position and height are related to the temperature $T$, the HF coupling distance $R$, and the magnetic field $B$.
Indeed, for $D_z>D_z^c$ and in the vicinity of $D_z=1$, the negativity presents a peak that vanishes as the temperature increases. Physically, the disappearance of the peak can be explained by the dominance of the temperature over the DM interaction, as shown in Fig. \ref{f2}(a). In Fig. \ref{f2}(b), for the interval $D_z^c<D_z<1$, we find the same behavior as in Fig. \ref{f2}(a), except that the HF coupling is significant in this interval. Moreover, in Fig. \ref{f2}(c), the peaks always appear, with increasing amplitudes as the magnetic field increases. Finally, when the DM interaction parameter $D_z$ is very large, the negativity approaches the same fixed maximal value in all three figures, which means that the system's state is maximally entangled. \begin{figure} \caption{(Color online) (a): The negativity in terms of $R$ ($B=D_z=1$), and (b,c): versus $B$ ($R=D_z=1$) for different values of $T$.} \label{f3} \end{figure} In Fig. \ref{f3}, we display the negativity $\mathcal{N}$ as a function of the HF coupling distance $R$ (Fig. \ref{f3}(a)), of the uniform magnetic field $B$ at different temperature values $(0.04, 0.08, 0.12$ and $0.5)$ (Fig. \ref{f3}(b)), and of the uniform magnetic field $B$ at $T = 0$ (Fig. \ref{f3}(c)). In Fig. \ref{f3}(a), we see that the negativity $\mathcal{N}$ increases when the parameter $R$ increases in the XXZ system with DM interaction ($D_z = 1$) and in the presence of a magnetic field ($B = 1$). When the HF coupling distance $R$ is large, the negativity tends to a constant even as the temperature rises. Furthermore, it is clear that as the temperature rises, the entanglement vanishes even as $R$ varies, which is explained by the dominance of the temperature (black dashed curve). From Figures \ref{f3}(b) and \ref{f3}(c), we see evidence of a phase transition at low temperature as the magnetic field $B$ increases. When $B$ is small, the entanglement is initially at its maximum.
When the uniform magnetic field $B$ is larger, the entanglement decreases until it vanishes. At $T = 0$, we can also see that the entanglement vanishes when $B$ crosses a critical point $B_c$. The role of the magnetic field is thus mainly to reduce entanglement. \section{Summary} In this paper, we have explored thermal quantum entanglement by using the negativity to discuss the entanglement of a two-qutrit Heisenberg XXZ chain subjected to a uniform magnetic field and a z-axis DM interaction, as a function of the HF coupling distance. The Hamiltonian model has been described, its spectrum has been determined through explicit calculations, and the thermal state at finite temperature has been written down explicitly. The numerical behavior of the negativity-measured entanglement has been examined in terms of the temperature, the z-axis DM interaction, the HF coupling distance and the uniform magnetic field. The ground-state entanglement at zero temperature has also been analyzed. We found that the negativity declines monotonically with increasing temperature, and that the magnetic field suppresses entanglement. Furthermore, we have looked into the z-axis DM interaction parameter $D_z$, which can be raised to enhance entanglement. Finally, the effect of the HF coupling distance $R$ on quantum entanglement in spin systems has been explored: increasing $R$, like decreasing the temperature, can improve the entanglement of the system states. \end{document}
\begin{document} \begin{titlepage} \maketitle \begin{abstract} For every given $p\in [1,+\infty)$ and $n\in\mathbb{N}$ with $n\ge 1$, the authors identify the strong $L^p$-closure $L_{\mathbb{Z}}^p(D)$ of the class of vector fields having finitely many integer topological singularities on a domain $D$ which is either bi-Lipschitz equivalent to the open unit $n$-dimensional cube or to the boundary of the unit $(n+1)$-dimensional cube. Moreover, for any $n\in \mathbb{N}$ with $n\geq 2$ the authors prove that $L_{\mathbb{Z}}^p(D)$ is weakly sequentially closed for every $p\in (1,+\infty)$ whenever $D$ is an open domain in $\mathbb{R}^n$ which is bi-Lipschitz equivalent to the open unit cube. As a byproduct of the previous analysis, a useful characterisation of this class of objects is obtained in terms of the existence of a (minimal) connection for their singular set.\\ \hspace{-3.25mm}\textbf{MSC (2020)}: 46N20 (primary); 58E15 (secondary). \end{abstract} \tableofcontents \end{titlepage} \section{Introduction} Throughout the paper we will always use the notation listed in Subsection 1.4. \subsection{Statement of the results and motivation} Given any smooth map $u:X\to Y$ between closed, oriented and connected $(n-1)$-dimensional manifolds, the \textit{degree} of $u$ is a measure of how many times $X$ wraps around $Y$ under the action of $u$. It can be defined as follows\footnote{An alternative definition can be given in terms of the orientations of the preimages of regular points of $u$; see for instance \cite[Chapter 7]{berger-gostiaux}.}: \begin{align*} \text{deg}(u)=\int_{X}u^*\omega, \end{align*} with $\omega\in\Omega^{n-1}(Y)$ being a renormalized volume form on $Y$, i.e. $\omega$ is nowhere vanishing and \begin{align*} \int_Y\omega=1.
\end{align*} Now let $D\subset\mathbb{R}^{n}$ be an open and bounded Lipschitz domain and consider a map $u\in W^{1,p}(D,Y)$ for some $p\ge 1$ which is \textit{smooth up to finitely many point singularities}, meaning simply that $u\in C^{\infty}(D\smallsetminus S_u,Y)$ for some finite set $S_u\subset D$. In this case we write $u\in R^{1,p}(D,Y)$. We define the \textit{degree of $u$ at a singular point $x\in S_u$} as \begin{align}\label{definition of degree at some point} \text{deg}(u,x):=\deg(u|_{\partial D'})=\int_{\partial D'}u^*\omega\in\mathbb{Z}, \end{align} where $D'\subset\subset D$ is any open, piecewise smooth domain in $D$ such that $\overline{D'}\cap S_u=\{x\}$. Notice that definition \eqref{definition of degree at some point} is independent of the choice of the set $D'$.\\ If $\text{deg}(u,x)\neq 0$ for some $x\in S_u$, then we say that $x$ is a \textit{topological singularity} of $u$, and we refer to the subset of $S_u$ made of the topological singularities of $u$ as the \textit{topological singular set} of $u$, which we denote by $S_u^{top}$.\\ Notice that if $u\in R^{1,n-1}(D)$ then $u^\ast\omega\in \Omega_1^{n-1}(D)$; moreover, by \eqref{definition of degree at some point} we see that $u^*\omega$ ``detects'' the topological singularities of $u$, in the sense that \begin{align} \label{Equation: degree detects topological singularities} \int_{\partial D'}u^*\omega=\sum_{x\in S_u^{top}\cap D'}\deg(u,x) \end{align} for every open, piecewise smooth domain $D'\subset\subset D$ such that $\partial D'\cap S_u=\emptyset$. From \eqref{Equation: degree detects topological singularities} one can deduce that \begin{align*} *d(u^*\omega)=\sum_{x\in S_u^{top}}\text{deg}(u,x)\delta_x\text{ in }\mathcal{D}'(D).
\end{align*} \begin{rem} Sobolev maps smooth up to a finite set of topological singularities arise frequently as solutions of variational problems in critical or supercritical dimension. For example, this is the best regularity that can be guaranteed for energy minimizing harmonic maps in $W^{1,2}(B^3,\mathbb{S}^2)$ (see \cite[Theorem II]{schoen-uhlenbeck}). Again, quite recently the second author and T. Rivi\`ere considered the following ``weak $\mathbb{S}^1$-harmonic map equation'' \begin{align*} \text{div}(u\wedge \nabla u)=0 \qquad\, \mbox{ in } \mathcal{D}'(B^2) \end{align*} and gave a completely variational characterization of the solutions in $R^{1,p}(B^2,\mathbb{S}^1)$ with finite ``renormalized Dirichlet energy'' for $p>1$ (see \cite[Theorem I.3]{gaia-riviere} for further details). We also remark that the presence of topological singularities is deeply linked to fundamental questions concerning the strong $W^{1,p}$-approximability by smooth maps of elements in $W^{1,p}(D,Y)$ (see \cite{bethuel-coron-demengel-helein}, \cite{hardt-riviere-approximation}). \end{rem} The previous discussion motivates the following general definition. \begin{dfn} Let $p\in [1,\infty]$ and let $F\in\Omega_p^{n-1}(D)$. We say that $F$ has \textit{finitely many integer singularities} if there exists a finite set of points $S\subset D$ such that $F\in\Omega^{n-1}(D\smallsetminus S)$ and \begin{align*} *dF=\sum_{x\in S}a_x\delta_{x}, \end{align*} where $a_x\in\mathbb{Z}$ for every $x\in S$. The class of $L^p$-integrable $(n-1)$-forms on $D$ having finitely many integer singularities will be denoted by $\Omega_{p,R}^{n-1}(D)$. \end{dfn} As we have seen above, $u^\ast\omega\in \Omega_{1,R}^{n-1}(D)$ for every $u\in R^{1,n-1}(D,Y)$, for any closed, oriented and connected $(n-1)$-manifold $Y$.
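The definition of the degree above can be made concrete in the simplest case $X=Y=\mathbb{S}^1$, where $\int_{X}u^*\omega$ reduces to the winding number $\frac{1}{2\pi}\oint d(\arg u)$. The following sketch computes it by summing discrete angle increments; the sampled maps are our own illustrative choices.

```python
import numpy as np

def winding_number(u, n_samples=2000):
    # degree of u: S^1 -> S^1 (unit complex values): (1/2pi) * sum of angle increments
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    vals = u(theta)
    increments = np.angle(np.roll(vals, -1) / vals)  # each increment lies in (-pi, pi]
    return int(round(increments.sum() / (2.0 * np.pi)))

print(winding_number(lambda t: np.exp(3j * t)))   # degree 3
print(winding_number(lambda t: np.exp(-2j * t)))  # degree -2
```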
Other simple examples of elements of $\Omega_{p,R}^{n-1}(D)$ can be constructed as follows. Let $\Sigma: D\to \mathbb{R}$ denote the fundamental solution of the Laplace equation, i.e. \begin{align*} \Sigma(x)=\begin{cases} -\lvert x\rvert\quad & \text{if } n=1,\\ -\frac{1}{2\pi}\log\lvert x\rvert\quad & \text{if } n=2,\\ \frac{1}{n(n-2)\alpha(n)}\frac{1}{\lvert x\rvert^{n-2}}\quad & \text{if } n\geq 3, \end{cases} \end{align*} where $\alpha(n)$ denotes the volume of the unit ball in $\mathbb{R}^n$. Then $\ast d\Sigma\in \Omega_{p,R}^{n-1}(D)$ for any $p\in [1,\frac{n}{n-1})$. In fact $\ast d(\ast d\Sigma)=\Delta \Sigma=\delta_0$.\\ Clearly any finite linear combination with integer coefficients of translates of $\ast d\Sigma$ also belongs to $\Omega_{p,R}^{n-1}(D)$. In fact, one can show that any element $F$ of $\Omega_{p,R}^{n-1}(D)$ can be decomposed as such a linear combination plus some $\tilde{F}\in \Omega_p^{n-1}(D)$ with $\ast d \tilde F=0$. In particular, if $p\geq \frac{n}{n-1}$ then $\ast d F=0$. Thus the class $\Omega_{p,R}^{n-1}(D)$ is relatively simple from an analytical point of view, and it is natural to ask which forms in $\Omega_p^{n-1}(D)$ can be approximated by elements of $\Omega_{p,R}^{n-1}(D)$. The main purpose of the present paper is to describe the strong and weak closure of the class $\Omega_{p,R}^{n-1}(D)$ for any open domain in $\mathbb{R}^n$ which is bi-Lipschitz equivalent to the open unit $n$-cube $Q_1(0)\subset\mathbb{R}^n$.\\ First we will address the strong closure in the case of the open unit $n$-dimensional cube $Q_1(0)\subset\mathbb{R}^n$.
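The integer-flux property of $\ast d\Sigma$ can be checked numerically in the plane: up to the sign convention chosen for $\Sigma$, the associated vector field for $n=2$ is the radial field $x/(2\pi\lvert x\rvert^2)$, whose flux through a square boundary equals $1$ or $0$ according to whether the square encloses the singularity. The discretization below is our own sketch.

```python
import numpy as np

def flux_through_square(V, center, rho, n_pts=4000):
    # numerical flux of V through the boundary of the axis-aligned square
    # of edge length rho centered at `center` (2D)
    cx, cy = center
    h = rho / 2.0
    t = np.linspace(-h, h, n_pts)
    dt = t[1] - t[0]
    total = 0.0
    edges = [  # (x-coords, y-coords, outward normal)
        (np.full_like(t, cx + h), cy + t, (1.0, 0.0)),
        (np.full_like(t, cx - h), cy + t, (-1.0, 0.0)),
        (cx + t, np.full_like(t, cy + h), (0.0, 1.0)),
        (cx + t, np.full_like(t, cy - h), (0.0, -1.0)),
    ]
    for X, Y, nu in edges:
        vx, vy = V(X, Y)
        total += (vx * nu[0] + vy * nu[1]).sum() * dt
    return total

def V(x, y):  # planar vortex field with unit charge at the origin
    r2 = x**2 + y**2
    return x / (2.0 * np.pi * r2), y / (2.0 * np.pi * r2)

print(flux_through_square(V, (0.0, 0.0), 0.5))  # approx 1: the square encloses 0
print(flux_through_square(V, (0.7, 0.7), 0.5))  # approx 0: the square avoids 0
```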
To this end we introduce the class of $(n-1)$-forms with integer-valued fluxes.\\ For any $F\in \Omega_{L^p_{loc}}^{n-1}(Q_1(0))$ and any $x_0\in Q_1(0)$, let $\tilde{R}_{F,x_0}\subset (0,r_{x_0})$, where $r_{x_0}:=2\dist_{\infty}(x_0,\partial Q_1(0))$, be the set of radii $\rho\in (0,r_{x_0})$ such that \begin{enumerate} \item the hypersurface $\partial Q_{\rho}(x_0)$ consists $\mathscr{H}^{n-1}$-a.e. of Lebesgue points of $F$, \item there holds $|F|\in L^p\big(\partial Q_{\rho}(x_0),\mathscr{H}^{n-1}\big)$. \end{enumerate} One can check that $\mathcal{L}^1\big((0,r_{x_0})\smallsetminus \tilde{R}_{F,x_0}\big)=0$. \begin{dfn}\label{Definition: integer valued fluxes on unit cube} Let $p\in [1,\infty]$, let $F\in \Omega_{L^p_{loc}}^{n-1}\cap \Omega_p^{n-1}(Q_1(0),\mu)$ for some Radon measure $\mu$, and for any $x_0\in Q_1(0)$ let $\tilde{R}_{F, x_0}$ be defined as above. We say that $F$ has \textit{integer-valued fluxes} if for any $x_0\in Q_1(0)$ and for $\mathcal{L}^1$-a.e. $\rho\in \tilde{R}_{F, x_0}$ there holds\footnote{Notice that for the associated vector field $V=(*F)^{\sharp}$ condition \eqref{equation: integer fluxes condition, definition} reads \begin{align*} \int_{\partial Q_{\rho}(x_0)}V\cdot \nu_{\partial Q_{\rho}(x_0)}\,d\mathscr{H}^{n-1}\in \mathbb{Z}. \end{align*}} \begin{align}\label{equation: integer fluxes condition, definition} \int_{\partial Q_{\rho}(x_0)}i_{\partial Q_{\rho}(x_0)}^*F\in\mathbb{Z}. \end{align} The space of $L^p(\mu)$-integrable $(n-1)$-forms with integer-valued fluxes will be denoted by $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0),\mu)$.
The set of radii $\rho\in \tilde{R}_{F, x_0}$ for which \eqref{equation: integer fluxes condition, definition} holds will be denoted by $R_{F, x_0}$. \end{dfn} We will always write $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ for $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0),\mathcal{L}^{n})$, where $\mathcal{L}^n$ denotes the $n$-dimensional Lebesgue measure. First of all, we observe that \eqref{Equation: degree detects topological singularities} implies that $\Omega_{p, R}^{n-1}(Q_1(0))\subset \Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$. More general examples of forms in $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ can be constructed as follows. Let again $Y$ be a smooth, closed, oriented and connected $(n-1)$-dimensional manifold and let $u\in W^{1, n-1}(Q_1(0), Y)$. Then for any $x_0\in Q_1(0)$ and a.e. $\rho\in (0, 2\dist_\infty(x_0, \partial Q_1(0)))$, $u\big\vert_{\partial Q_\rho(x_0)}\in W^{1,n-1}(\partial Q_\rho(x_0),Y)$. Therefore for any such $\rho$ \begin{align} \label{Equation: u*omega satisfy Def I.2} \int_{\partial Q_\rho (x_0)}i^\ast_{\partial Q_\rho(x_0)}(u^\ast \omega)=\deg \left(u\big\vert_{\partial Q_\rho(x_0)}\right)\in \mathbb{Z}. \end{align} Notice that $\deg \left(u\big\vert_{\partial Q_\rho(x_0)}\right)$ is well defined (by means of approximation by functions in $W^{1,\infty}(\partial Q_\rho(x_0), Y)$; see \cite{brezis-nirenberg}, Section I.3).\\ We will show that the closure of $\Omega_{p,R}^{n-1}(Q_1(0))$ in $\Omega_{p}^{n-1}(Q_1(0))$ is exactly $\Omega_{p, \mathbb{Z}}^{n-1}(Q_1(0))$. More precisely, we have \begin{thm} \label{Theorem: strong approximation for vector fields} Let $n\in \mathbb{N}$ and let $p\in [1,\infty)$. Let $F\in \Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0),\mu)$, where $\mu:=\big(\frac{1}{2}-\lVert\,\cdot\,\rVert_{\infty}\big)^q\,\mathcal{L}^n$ for some $q\in(-\infty,1]$.
Then we have \begin{enumerate} \item if $q\in[0,1]$ and $\displaystyle{p\in\Big[1,\frac{n}{n-1}\Big)}$, then there exists a sequence $\{F_k\}_{k\in \mathbb{N}}$ in $\Omega_{p,R}^{n-1}(Q_1(0))$ such that $F_k\to F$ in $\Omega_{p}^{n-1}(Q_1(0))$ as $k\to\infty$; \item if $q\in(-\infty,0]$ and $\displaystyle{p\in\Big[\frac{n}{n-1},+\infty\Big)}$, then $\ast dF=0$. \end{enumerate} \end{thm} The reason why we have introduced the weighted measures $\mu=f\,\mathcal{L}^n$ for $q\neq 0$ is that forms belonging to $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0),\mu)$ appear naturally in the proof of Corollary \ref{Corollary: strong approximation on boundary of cube}. Nevertheless, we advise the reader to assume $q=0$ (i.e. $\mu=\mathcal{L}^n$) throughout Section 2 at a first reading of the present paper. This allows one to skip many technicalities without any loss, since all the results of this paper are independent of Corollary \ref{Corollary: strong approximation on boundary of cube}. With the help of Theorem \ref{Theorem: strong approximation for vector fields} we will obtain another characterization of the $L^p$-closure of $\Omega_{p,R}^{n-1}(Q_1(0))$. For this we recall the following definition (compare with \cite[Section II]{brezis-coron-lieb}): \begin{dfn}[Connection and minimal connection] Let $M\subset\mathbb{R}^n$ be any embedded Lipschitz $m$-dimensional submanifold of $\mathbb{R}^n$ (with or without boundary) such that $\overline{M}$ is compact as a subset of $\mathbb{R}^n$. A $1$-dimensional current $I\in\mathcal{R}_1(M)$ is said to be a \textit{connection} for (the singular set of) $F$ if $\mathbb{M}(I)<+\infty$ and $\partial I=*dF$ in $(W^{1,\infty}_0(M))^\ast$.
A $1$-dimensional current $L\in\mathcal{R}_1(M)$ is said to be a \textit{minimal connection} for (the singular set of) $F$ if it is a connection for $F$ and \begin{align*} \mathbb{M}(L)=\inf_{\substack{T\in\mathcal{D}_1(M)\\\partial T=*dF}}\mathbb{M}(T). \end{align*} \end{dfn} We will see in Corollary \ref{Corollary: existence of connections implies existence of minimal connection} that $F$ admits a connection if and only if it admits a minimal connection.\\ Here is the characterization of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ in terms of minimal connections: \begin{thm}\label{Theorem: characterization of the integer valued fluxes class, introduction} Let $n\in \mathbb{N}_{>0}$ and let $p\in [1,+\infty)$. Let $F\in\Omega_p^{n-1}(Q^n_1(0))$. Then the following are equivalent: \begin{enumerate} \item there exists $L\in\mathcal{R}_1(Q_1(0))$ such that $\partial L=*dF$ in $(W^{1,\infty}_0(Q_1(0)))^\ast$; \item for every Lipschitz function $f:\overline{Q_1(0)}\rightarrow [a,b]\subset\mathbb{R}$ such that $f\vert_{\partial Q_1(0)}\equiv b$, we have \begin{align*} \int_{f^{-1}(t)}i_{f^{-1}(t)}^*F\in\mathbb{Z}, \qquad\mbox{ for $\mathcal{L}^1$-a.e. } t\in [a,b]; \end{align*} \item $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$. \end{enumerate} \end{thm} In other words, $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ if and only if $F$ admits a (minimal) connection. This characterization allows one to generalize the definition of the class $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ to general Lipschitz domains: \begin{dfn}\label{Definition: integer valued fluxes on generic domain} Let $M\subset\mathbb{R}^n$ be any embedded Lipschitz $m$-dimensional submanifold of $\mathbb{R}^n$ (with or without boundary). We define \begin{align*} \Omega_{p,\mathbb{Z}}^{n-1}(M):=\{F\in\Omega_p^{n-1}(M) \mbox{ s.t.
} \exists\, L\in\mathcal{R}_1(M) \mbox{ connection for } F\}. \end{align*} \end{dfn} Notice that if $M=Q_1(0)$, Definition \ref{Definition: integer valued fluxes on unit cube} and Definition \ref{Definition: integer valued fluxes on generic domain} coincide by Theorem \ref{Theorem: characterization of the integer valued fluxes class, introduction}.\\ We will deduce from the previous results that the approximation result can be extended to any open domain which is bi-Lipschitz equivalent to $Q_1(0)$ or to $\partial Q_1(0)$ (see Theorem \ref{Theorem: strong approximation for general domains}).\\ We mention here two other corollaries of Theorem \ref{Theorem: strong approximation for vector fields}. \begin{cor} \label{Corollary: approximation of partial I, introduction} Let $n\in \mathbb{N}$. Let $I\in \mathcal{R}_1(Q_1^n(0))$ be an integer rectifiable one-current. Then there exists an $(n-1)$-form $\omega\in \Omega_{1,\mathbb{Z}}^{n-1}(Q_1^n(0))$ such that $\ast d\omega=\partial I$, and $\partial I$ can be approximated in $(W^{1,\infty}_0(Q_1^n(0)))^\ast$ by finite sums of Dirac deltas with integer coefficients. More precisely, there exist sequences $(P_i)_{i\in \mathbb{N}}$ and $(N_i)_{i\in \mathbb{N}}$ of points in $Q_1^n(0)$ such that \begin{align*} \partial I= \sum_{i\in \mathbb{N}}(\delta_{P_i}-\delta_{N_i})\,\text{ in }(W^{1,\infty}_0(Q_1^n(0)))^\ast\text{ and }\sum_{i\in \mathbb{N}}\lvert P_i-N_i\rvert<\infty. \end{align*} Moreover, if $I$ is supported on a Lipschitz submanifold $M$ of $\mathbb{R}^n$ compactly contained in $Q_1(0)$, the points in the sequences $(P_i)_{i\in \mathbb{N}}$ and $(N_i)_{i\in \mathbb{N}}$ can be chosen to belong to $M$. \end{cor} The next corollary was obtained first by R. Schoen and K. Uhlenbeck (\cite{schoen-uhlenbeck-2}, Section 4) and F. Bethuel and X.
Zheng (\cite[Theorem 4]{bethuel-zheng}). \begin{cor} \label{Corollary: Bethuel Zheng, introduction} Let $Q_1(0)\subset \mathbb{R}^2$ be the unit cube in $\mathbb{R}^2$. Let $u\in W^{1,p}(Q_1(0), \mathbb{S}^1)$ for some $p\in (1,\infty)$.\\ If $p\geq 2$, then $u$ can be approximated in $W^{1,p}$ by a sequence of functions in $C^\infty(Q_1(0), \mathbb{S}^1)$.\\ If $p<2$, then $u$ can be approximated in $W^{1,p}$ by a sequence of functions in \begin{align*} \mathcal{R}:=\left\{v\in W^{1,p}(Q_1(0),\mathbb{S}^1);\ v\in C^\infty(Q_1(0)\smallsetminus A, \mathbb{S}^1),\text{ where }A\text{ is some finite set}\right\}. \end{align*} \end{cor} In the second part of the paper we turn our attention to the weak closure of the space $\Omega_{p,R}^{n-1}(D)$ (or equivalently of $\Omega_{p,\mathbb{Z}}^{n-1}(D)$) for a domain $D\subset\mathbb{R}^n$ which is bi-Lipschitz equivalent to $Q_1(0)$. We will show the following. \begin{thm}[Weak closure]\label{Theorem: weak closure} Let $n\in \mathbb{N}_{\geq 2}$, $p\in(1,+\infty)$ and let $D\subset\mathbb{R}^n$ be any open and bounded domain in $\mathbb{R}^n$ which is bi-Lipschitz equivalent to the $n$-dimensional unit cube $Q^n_1(0)$. Then the space $\Omega_{p,\mathbb{Z}}^{n-1}(D)$ is weakly sequentially closed in $\Omega_p^{n-1}(D)$. \end{thm} Notice that, by Theorem \ref{Theorem: strong approximation for vector fields} (or more generally by Theorem \ref{Theorem: strong approximation for general domains}), the statement of Theorem \ref{Theorem: weak closure} is trivial for $p\in[n/(n-1),+\infty)$. Thus, we just need to provide a proof in the case $p\in(1,n/(n-1))$.
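The dichotomy at $p=2$ in Corollary \ref{Corollary: Bethuel Zheng, introduction} is illustrated by the standard vortex map $u(x):=x/\lvert x\rvert$, for which $\lvert\nabla u(x)\rvert=1/\lvert x\rvert$; a classical computation, recalled here for the reader's convenience, gives

```latex
\int_{B_1}\lvert\nabla u\rvert^p\,dx
=\int_0^{2\pi}\!\int_0^1 r^{-p}\,r\,dr\,d\theta
=\frac{2\pi}{2-p}<+\infty
\qquad\text{if and only if}\qquad p<2,
```

so $u$ belongs to $W^{1,p}(Q_1(0),\mathbb{S}^1)$ exactly for $p<2$, where it is an element of $\mathcal{R}$ with the single singularity $A=\{0\}$.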
We first treat the case of the open unit $n$-cube $Q_1(0)\subset\mathbb{R}^n$ by exploiting the characterization of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ given by Theorem \ref{Theorem: characterization of the integer valued fluxes class, introduction} and a suitable slice distance \`a la Ambrosio-Kirchheim (see \cite{ambrosio-kirchheim} and \cite{hardt-riviere}). We then address the general case by standard arguments (see Remark \ref{Remark: weak closure for bounded domains}).\\ We remark that the case $n=1$ is different. In fact, for any interval $I\subset \mathbb{R}$ there holds $\overline{\Omega_{p,R}^0(I)}^{\Omega_p^0(I)}=\Omega_p^0(I)$ (see Lemma \ref{Lemma: weak closure for n=1}).\\ Our main motivation for looking at forms (instead of maps) with finitely many integer topological singularities is the need to develop a geometric measure theory for principal bundles in order to face the still deeply open questions arising in the study of $p$-Yang-Mills Lagrangians. Let $p\in[1,+\infty)$ and let $G$ be any compact matrix Lie group. Consider the trivial $G$-principal bundle on $B^n$ given by $\text{pr}_1:P:=B^n\times G\to B^n$, where $\text{pr}_1$ is the canonical projection onto the first factor. The $p$-\textit{Yang-Mills Lagrangian} on $P$ is given by \begin{align*} \text{YM}_p(A):=\int_{B^n}\lvert F_A\rvert^p\, d\mathcal{L}^n=\int_{B^n}\lvert dA+A\wedge A\rvert^p\, d\mathcal{L}^n,\qquad\forall\,A\in\Omega^1(B^n,\mathfrak{g}), \end{align*} where $\mathfrak{g}$ denotes the Lie algebra of $G$. As described in \cite{kessel-riviere}, the reason why we aim to extend the set of the by now classical Sobolev connections is purely analytic, and it is justified by issues arising in the application of the direct method of the calculus of variations to $p$-Yang-Mills Lagrangians.
On the other hand, the need to extend the notion of bundle in order to allow more and more singularities to appear has already been faced in many geometric applications, which led to the introduction of coherent and reflexive sheaves in gauge theory (see \cite{kobayashi-nomizu-1}, \cite{kobayashi-nomizu-2}).\\ Notice that all the results mentioned above can be formulated in terms of vector fields: for any $F\in \Omega_{p}^{n-1}(D)$ we can consider the associated vector field $V_F:=(\ast F)^{\sharp}$. In fact, for the proof of some of the results we preferred to work with vector fields instead of $(n-1)$-forms.\\ \subsection{Related literature and open problems} Theorem \ref{Theorem: strong approximation for vector fields} was first announced to hold for a $3$-dimensional domain in \cite{kessel-riviere}, and a full proof of the $3$-dimensional case was eventually given by the first author in \cite{caniato}. Some form of the $2$-dimensional case was treated in \cite{petrache-2d}, where M. Petrache proved that a strong approximability result holds for $1$-forms admitting a connection both on the $2$-dimensional disk and on the $2$-sphere. In both cases, the proof that we give here is more general and simpler. The $3$-dimensional version of Theorem \ref{Theorem: weak closure} was already treated by M. Petrache and T. Rivi\`ere in \cite{petrache-riviere-abelian}. Nevertheless, here we take the opportunity to present the arguments in a more detailed and complete way. Both Theorem \ref{Theorem: strong approximation for vector fields} and Theorem \ref{Theorem: weak closure} in dimension $n\neq 2,3$ are instead completely new. The first open problems that relate directly to our results are linked to the celebrated Yang-Mills Plateau problem.
Indeed, the weak sequential closedness of the class $\Omega_{p,\mathbb{Z}}^2(B^3)$ implies that such forms behave well enough to be considered as suitable ``very weak'' curvatures for the resolution of the $p$-Yang-Mills Plateau problem for $U(1)$-bundles on $B^3$ (see the introduction of \cite{petrache-riviere-abelian} for further details). The question to address would be if and how we can exploit the same kind of techniques in order to face the existence and regularity issues linked to the so-called ``non-abelian case'' (i.e. the case of bundles having a non-abelian structure group) in supercritical dimension. An interesting proposal in this sense is due to M. Petrache and T. Rivi\`ere and can be found in \cite{petrache-riviere-non-abelian}, where a suitable class of weak connections in the supercritical dimension $5$ is introduced and studied. One could also hope that the technique presented in this paper could be adapted to show the $L^p$-closedness (weak and strong) of classes of differential forms exhibiting ``integer fluxes'' properties similar to the one described in Definition \ref{Definition: integer valued fluxes on unit cube}. As an example, we define here the class $\Omega_{p,H}^n(Q_1^{2n}(0))$ of differential forms with ``Hopf singularities''.\\ Recall that for any $n\in \mathbb{N}_{\geq 1}$ and any smooth map $f:\mathbb{S}^{2n-1}\to\mathbb{S}^n$ the \textit{Hopf invariant} of $f$ is defined as follows: let $\omega$ be the standard volume form on $\mathbb{S}^n$, renormalized so that $\int_{\mathbb{S}^n}\omega=1$, and let $\alpha\in \Omega^{n-1}(\mathbb{S}^{2n-1})$ be such that $f^\ast \omega=d\alpha$. Then the Hopf invariant of $f$ is given by \begin{align*} H(f):=\int_{\mathbb{S}^{2n-1}}\alpha\wedge d\alpha. \end{align*} One can show that $H(f)\in \mathbb{Z}$ and that it is independent of the choice of $\alpha$ (see \cite{bott-tu}, Proposition 17.22).
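The basic example behind this invariant (a standard fact, recalled without proof; see again \cite{bott-tu}) is the Hopf fibration, corresponding to the case $n=2$:

```latex
h:\mathbb{S}^3\subset\mathbb{C}^2\to\mathbb{S}^2\simeq\mathbb{CP}^1,
\qquad h(z_1,z_2):=[z_1:z_2],
\qquad H(h)=1,
```

so the Hopf invariant is a nontrivial $\mathbb{Z}$-valued invariant already in the lowest-dimensional case covered by the definition below.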
In the spirit of Definition \ref{Definition: integer valued fluxes on unit cube}, we say that a form $F\in \Omega_{p}^{n}(Q_1^{2n}(0))$ belongs to $\Omega_{p,H}^n(Q_1^{2n}(0))$ for some $\displaystyle{p\geq 2-\frac{1}{n}}$ if there exists $A\in\Omega_{W^{1,p}}^{n-1}(Q_1^{2n}(0))$ such that $dA=F$ and if for every $x_0\in Q_1^{2n}(0)$ there exists a set $R_{F,x_0}\subset (0,r_{x_0})$, with $r_{x_0}:=2\dist_{\infty}(x_0,\partial Q_1^{2n}(0))$, such that: \begin{enumerate} \item $\mathcal{L}^1\big((0,r_{x_0})\smallsetminus R_{F,x_0}\big)=0$; \item for every $\rho\in R_{F,x_0}$, the hypersurface $\partial Q_{\rho}^{2n}(x_0)$ consists $\mathscr{H}^{2n-1}$-a.e. of Lebesgue points of $F$, $A$ and $\nabla A$ (the matrix of all the partial derivatives of the components of $A$); \item for every $\rho\in R_{F,x_0}$ we have $|F|,\lvert A\rvert,\lvert\nabla A\rvert\in L^p\big(\partial Q_{\rho}^{2n}(x_0),\mathscr{H}^{2n-1}\big)$; \item for every $\rho\in R_{F,x_0}$ it holds that \begin{align*} \int_{\partial Q_{\rho}^{2n}(x_0)}i_{\partial Q_{\rho}^{2n}(x_0)}^*(A\wedge F)\in\mathbb{Z}. \end{align*} \end{enumerate} Notice that if $u\in W^{1,2n-1}(Q_1^{2n}(0),\mathbb{S}^n)$, then $u^\ast\omega\in \Omega_{p,H}^n(Q_1^{2n}(0))$.\\ \subsection{Organization of the paper} The paper is organized as follows. Section 2 is dedicated to the strong $L^p$-closure of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0),\mu)$. First we present some preliminary and rather technical lemmata (Sections 2.1-2.3), then we give a proof of Theorem \ref{Theorem: strong approximation for vector fields} (Section 2.4). In Section 2.5 we show the characterization of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ in terms of (minimal) connections (Theorem \ref{Theorem: characterization of the integer valued fluxes class, introduction}).
In Section 2.6 we exploit this result to extend the approximation result to other Lipschitz manifolds, and in particular to $\partial Q^{n}_1(0)$. Finally, in Section 2.7 we prove Corollary \ref{Corollary: approximation of partial I, introduction} and Corollary \ref{Corollary: Bethuel Zheng, introduction}.\\ In Section 3 we discuss the weak $L^p$-closure of $\Omega_{p,R}^{n-1}(Q_1(0))$. First we introduce a slice distance \`a la Ambrosio-Kirchheim, on spheres (Section 3.1) and then on cubes (Section 3.2). In Section 3.3 we discuss some of the properties of the slice distance, and in Section 3.4 we use it to obtain a proof of Theorem \ref{Theorem: weak closure}. We also briefly discuss the special case $n=1$. \subsection{Acknowledgements} We are grateful to Prof. Tristan Rivi\`ere for encouraging us to work on this subject, for his insights and for the helpful discussions. We would also like to thank Federico Franceschini for the interesting discussions.\\ This work has been supported by the Swiss National Science Foundation (SNF 200020\_192062). \subsection{Notation} Let $M^m\subset\mathbb{R}^n$ be any $m$-dimensional, embedded Lipschitz submanifold of $\mathbb{R}^n$ (with or without boundary) such that $\overline{M}$ is compact as a subset of $\mathbb{R}^n$. \begin{itemize} \item We denote by $i_M:M\hookrightarrow\mathbb{R}^n$ the usual inclusion map. \item We always assume that $M$ is endowed with the $L^\infty$-Riemannian metric given by $g_M:=i_M^*g_e$, where $g_e$ denotes the standard euclidean metric on $\mathbb{R}^n$. \item For every $k=1,...,m$, we define the following spaces of \textit{smooth $k$-forms} on $M$: \begin{align*} \Omega^k(M)&:=\{i_M^*\omega \mbox{ s.t. } \omega \mbox{ is a smooth } k \mbox{-form on } \mathbb{R}^n\},\\ \mathcal{D}^k(M)&:=\{\omega\in\Omega^k(M) \mbox{ s.t.
} \spt(\omega)\subset\subset M\}. \end{align*} For every $p\in[1,+\infty]$, we denote by $\Omega_p^k(M)$ and $\Omega_{W^{1,p}}^k(M)$ the completions of $\Omega^k(M)$ with respect to the usual $L^p$-norm and $W^{1,p}$-norm respectively. We call $\Omega_p^k(M)$ the space of \textit{$L^p$ $k$-forms} on $M$ and $\Omega_{W^{1,p}}^k(M)$ the space of \textit{$W^{1,p}$ $k$-forms} on $M$. \item By the symbol ``$\ast$'', we denote the Hodge star operator associated with the metric $g_M$ on $M$. By ``$\flat$'' and ``$\sharp$'' we denote the usual musical isomorphisms associated with the metric $g_M$. Recall that, under this notation, the map \begin{align}\label{musical isomorphism} \Omega_p^{n-1}(M)\ni\omega\mapsto(\ast\omega)^{\sharp}\in L^p(M,\mathbb{R}^n) \end{align} gives an isomorphism onto its image. Exploiting this fact, we frequently identify $(n-1)$-forms with vector fields on $M$. \end{itemize} Let $x_0\in\mathbb{R}^n$ and $\rho>0$. \begin{itemize} \item We denote by $\lVert\,\cdot\,\rVert_{\infty}:\mathbb{R}^n\rightarrow[0,+\infty)$ the following norm on $\mathbb{R}^n$: \begin{align*} \lVert x\rVert_{\infty}:=\max_{j=1,...,n}\lvert x_j\rvert. \end{align*} We denote by $d_{\infty}:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ the distance associated to $\lVert\,\cdot\,\rVert_{\infty}$. \item We let \begin{align*} Q_{\rho}^n(x_0):=\bigg\{x\in\mathbb{R}^n \mbox{ s.t. } \lVert x-x_0\rVert_{\infty}<\frac{\rho}{2}\bigg\} \end{align*} be the open cube in $\mathbb{R}^n$ centered at $x_0$ and having edge length $\rho$. We will sometimes omit the $n$ when the dimension is clear from the context. We denote by $B^n_{\rho}(x_0)$ the open ball in $\mathbb{R}^n$ centered at $x_0$ with radius $\rho$ (here again we will sometimes omit the $n$).
\item We define \begin{align*} \mathcal{D}_k(M)&:=\{k\mbox{-currents on } M\},\\ \mathcal{M}_k(M)&:=\{T\in\mathcal{D}_k(M) \mbox{ s.t. } \mathbb{M}(T)<+\infty\},\\ \mathcal{N}_k(M)&:=\{T\in\mathcal{D}_k(M) \mbox{ s.t. } \mathbb{M}(T),\mathbb{M}(\partial T)<+\infty\},\\ \mathcal{R}_k(M)&:=\{T\in\mathcal{D}_k(M) \mbox{ s.t. } T \mbox{ is integer-multiplicity rectifiable}\}, \end{align*} where $\mathbb{M}(\,\cdot\,)$ denotes the mass of a current (see \cite{krantz_parks-geometric_integration_theory} for further explanations). Moreover, given any $F\in\Omega_1^k(M)$ we define the $(m-k)$-\textit{current associated to} $F$ by \begin{align*} \langle T_F,\omega\rangle:=\int_{M}F\wedge\omega, \qquad\forall\,\omega\in\mathcal{D}^{m-k}(M). \end{align*} \end{itemize} \section{The strong $L^p$-approximation theorem} In this section we provide a proof of Theorem \ref{Theorem: strong approximation for vector fields}. In an attempt to make the proof more accessible, we reformulate the theorem in terms of vector fields. For any Radon measure $\mu:=f\mathcal{L}^n$ with $f=\left(\frac{1}{2}-\lVert\,\cdot\,\rVert_\infty\right)^q$ for some $q\in(-\infty,1]$, let \begin{align*} L_{R}^p(Q_1(0), \mu):=\{V\in L^p(Q_1(0),\mu) \mbox{ vector field s.t. }\ast V^{\flat}\in\Omega_{p,R}^{n-1}(Q_1(0))\} \end{align*} and \begin{align*} L_{\mathbb{Z}}^p(Q_1(0),\mu):=\{V\in L^p(Q_1(0),\mu) \mbox{ vector field s.t. }\ast V^\flat\in \Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))\}. \end{align*} We will sometimes write $L^p_{\mathbb{Z}}(Q_1(0))$ for $L^p_{\mathbb{Z}}(Q_1(0),\mathcal{L}^{n})$, where $\mathcal{L}^n$ denotes the $n$-dimensional Lebesgue measure.
\begin{thm}\label{Theorem: strong approximation for vector fields, now really for vector fields} Let $V\in L_{\mathbb{Z}}^p(Q_1^n(0),\mu)$. The following facts hold: \begin{enumerate} \item if $q\in[0,1]$ and $p\in\big[1,n/(n-1)\big)$, then there exists a sequence $\{V_k\}_{k\in\mathbb{N}}\subset L_{R}^p(Q_1(0),\mu)$ such that $V_k\rightarrow V$ strongly in $L^p(Q_1^n(0),\mu)$; \item if $q\in(-\infty,0]$ and $p\in\big[n/(n-1),+\infty\big)$, then $\operatorname{div}(V)=0$ distributionally on $Q_1^n(0)$. \end{enumerate} \end{thm} The case $n=1$ is particularly easy and is treated in Lemma \ref{Lemma: caso n=1}. For the proof in the case $n\ge 2$ we follow the ideas of \cite{petrache-2d} and \cite{caniato}. We present here a plan of the proof, reducing to the case $q=0$ for simplicity and without loss of generality. First we show that for any $\varepsilon>0$ it is possible to decompose $Q_1(0)$ into cubes $Q$ of size $\varepsilon$ (plus a negligible remainder) so that $$\int_{\partial Q}V\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}\in \mathbb{Z}$$ and so that the number of cubes where this integral is different from zero is controlled (Section 2.1). We then show that $V$ can be approximated on the boundaries of the small cubes $Q$ by smooth vector fields $(V_\varepsilon)_{\varepsilon>0}$ with similar properties (Section 2.2). In Section 2.3 we show that the vector fields $V_\varepsilon$ can be extended inside the cubes $Q$ in such a way that the extension $\tilde{V}_\varepsilon$ has a finite number of singularities in $Q$ (more precisely, $\tilde{V}_\varepsilon\vert_{Q}\in L^p_R(Q)$) and is close to $V$ in $L^p(Q)$. In Section 2.4 we combine the previous elements to show that the approximating fields constructed above (up to some shifting and smoothing) satisfy the claim of the Theorem.
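As a purely illustrative example of the first step (not part of the argument), the following Python sketch decomposes $Q_1(0)$ into cubes of size $\varepsilon=1/4$ and classifies them by their boundary flux. The field is a hypothetical planar ``point charge'', whose flux through the boundary of a cube is $1$ or $0$ according to whether the cube contains its singularity; the small shift $a$ of the grid plays the role of the offset $a_{\varepsilon}$ chosen in Section 2.1.

```python
import numpy as np

def flux(V, c, eps, m=400):
    # Midpoint-rule approximation of the flux of V through the
    # boundary of the square of side eps centered at c = (cx, cy).
    cx, cy = c
    t = cx - eps/2 + (np.arange(m) + 0.5) * eps / m   # midpoints, x-direction
    s = cy - eps/2 + (np.arange(m) + 0.5) * eps / m   # midpoints, y-direction
    h = eps / m
    Vx = lambda x, y: V(x, y)[0]
    Vy = lambda x, y: V(x, y)[1]
    return h * (Vx(np.full(m, cx + eps/2), s).sum()    # right face,  nu = +e1
                - Vx(np.full(m, cx - eps/2), s).sum()  # left face,   nu = -e1
                + Vy(t, np.full(m, cy + eps/2)).sum()  # top face,    nu = +e2
                - Vy(t, np.full(m, cy - eps/2)).sum()) # bottom face, nu = -e2

# Hypothetical "point charge": flux 1 through the boundary of any cube
# containing the origin, flux 0 otherwise.
V = lambda x, y: (x / (2*np.pi*(x**2 + y**2)), y / (2*np.pi*(x**2 + y**2)))

eps, a = 0.25, 0.03   # mesh size and grid offset (the role of a_eps)
fluxes = {}
for i in range(4):
    for j in range(4):
        c = (-0.5 + (i + 0.5)*eps + a, -0.5 + (j + 0.5)*eps + a)
        fluxes[c] = flux(V, c, eps)

# Cubes whose boundary flux is a nonzero integer.
bad = [c for c, F in fluxes.items() if abs(F) > 0.5]
print(len(bad))  # exactly one bad cube: the one containing the origin
```

Every computed flux is (numerically) an integer, and only the single cube containing the singularity is ``bad''; compare with the notion of good and bad cubes introduced after Lemma \ref{Lemma: cubic decomposition}.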
\subsection{Choice of a suitable cubic decomposition} Fix any $\varepsilon\in(0,1/4)$ and $a\in Q_{\varepsilon}(0)$. Let \begin{align*} q_{\varepsilon}&:=\max\,\{q\in\mathbb{N} \mbox{ s.t. } \varepsilon q\le 1-\varepsilon\},\\ C_{\varepsilon}&:=\bigg\{\bigg(j+\frac{1}{2}\bigg)\varepsilon-\frac{1}{2}, \mbox{ with } j=1,\dots,q_{\varepsilon}-1\bigg\}^n,\\ \mathscr{C}_{\varepsilon,a}&:=\big\{Q_{\varepsilon}(x)+a,\, \mbox{ with } x\in C_{\varepsilon}\big\}. \end{align*} We say that $\mathscr{C}_{\varepsilon,a}$ is the \textit{cubic decomposition} of $Q_1(0)$ with origin in $a$ and mesh thickness $\varepsilon$. Let \begin{align*} \mathscr{F}_{\varepsilon,a}&:=\big\{F \,\vert\, F \mbox{ is an } (n-1)\mbox{-dimensional face of } \partial Q \mbox{, for some open cube } Q\in\mathscr{C}_{\varepsilon,a}\big\},\\ S_{\varepsilon,a}&:=\bigcup_{F\in\mathscr{F}_{\varepsilon,a}}F. \end{align*} We say that $S_{\varepsilon,a}$ is the $(n-1)$\textit{-skeleton} of the cubic decomposition $\mathscr{C}_{\varepsilon,a}$. \allowdisplaybreaks \begin{lem}[Choice of the cubic decomposition]\label{Lemma: cubic decomposition} Let $n\in \mathbb{N}_{>0}$. Let $V\in L_{\mathbb{Z}}^p(Q_1(0),\mu)$, where $\mu:=f\mathcal{L}^n$ with \begin{align*} f:=\bigg(\frac{1}{2}-\lVert\,\cdot\,\rVert_{\infty}\bigg)^q \end{align*} for some $q\in(-\infty,1]$.
Then, there exists a subset $E_V\subset (0,1)$ satisfying the following properties: \begin{enumerate} \item $\mathcal{L}^1\big((0,1)\smallsetminus E_V\big)=0$; \item for every $\varepsilon\in E_V$, there exists $a_{\varepsilon}\in Q_{\varepsilon}(0)$ such that $\varepsilon\in R_{V,c_Q}$ for every $Q\in\mathscr{C}_{\varepsilon,a_{\varepsilon}}$ and \begin{align}\label{estimate cubic decomposition} \lim_{\substack{\varepsilon\in E_V\\ \varepsilon\rightarrow 0^+}}\varepsilon\bigg(\sum_{Q\in\mathscr{C}_{\varepsilon,a_{\varepsilon}}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\bigg)=0, \end{align} where $(V)_Q=\displaystyle{\fint_Q V\,d\mathcal{L}^n}$. \end{enumerate} \begin{proof} For any $x\in Q_1(0)$ let $R_{V,x}:=R_{V^\flat,x}$ (see Definition \ref{Definition: integer valued fluxes on unit cube}). By assumption \begin{align}\label{equation: bad set has zero mass} \int_0^1 \int_{Q_{1-\rho}(0)}\mathbbm{1}_{\rho\in R_{V,x}}\,d\mathcal{L}^n\,d\rho=&\int_{Q_1(0)}\mathcal{L}^1(R_{V,x})\,d\mathcal{L}^n=\int_{Q_1(0)}2\dist(x,\partial Q_1(0))\,d\mathcal{L}^n\\\nonumber=&\int_0^1\mathcal{L}^n(Q_{1-\rho}(0))\,d\rho. \end{align} For any $\rho\in (0,1)$ let \begin{align*} X_\rho:=\{x\in Q_{1-\rho}(0) \mbox{ : }\rho\notin R_{V,x}\}; \end{align*} then, by \eqref{equation: bad set has zero mass}, $\mathcal{L}^n(X_\rho)=0$ for a.e. $\rho\in (0,1)$. Now notice that for any $\rho\in (0,1)$ \begin{align*} \mathcal{L}^n(X_\rho)\geq \sum_{c\in C_\rho}\int_{Q_\rho(0)}\mathbbm{1}_{X_\rho}(x+c)\,d\mathcal{L}^n=\int_{Q_\rho(0)}\sum_{c\in C_\rho}\mathbbm{1}_{X_\rho}(x+c)\,d\mathcal{L}^n, \end{align*} thus for a.e.
$\rho\in (0,1)$ we have that for a.e. $a_\rho\in Q_\rho(0)$ there holds $\rho\in R_{V,c_Q}$ for any $Q\in \mathscr{C}_{\rho, a_\rho}$. Let $E_V$ be the set of all such $\rho\in (0,1)$.\\ Now let $\varepsilon\in E_V$. We claim that \begin{align}\label{equation: claim n-1 decay} I_{\varepsilon}:=\int_{Q_{\varepsilon}(0)}\sum_{Q\in\mathscr{C}_{\varepsilon,a}}\int_{\partial Q}\lvert V(x)-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}(x)\, d\mathcal{L}^n(a)=o\big(\varepsilon^{n-1}\big), \qquad\mbox{ as } \varepsilon\rightarrow 0^+\text{ in }E_V. \end{align} Indeed, let $\mathscr{F}$ be the set of the faces of the cube $Q_1(0)\subset\mathbb{R}^n$ and notice that \begin{align*} I_{\varepsilon}&=\int_{Q_{\varepsilon}(0)}\sum_{F_0\in\mathscr{F}}\sum_{c\in C_{\varepsilon}}\int_{\varepsilon F_0}\bigg\lvert V(x+c+a)-\fint_{Q_{\varepsilon}(c+a)}V\bigg\rvert^pf(c+a)\, d\mathscr{H}^{n-1}(x)\, d\mathcal{L}^n(a)\\ &=\sum_{F_0\in\mathscr{F}}\sum_{c\in C_{\varepsilon}}\int_{\varepsilon F_0}\int_{Q_{\varepsilon}(0)}\bigg\lvert V(x+c+a)-\fint_{Q_{\varepsilon}(c+a)}V\bigg\rvert^pf(c+a)\, d\mathcal{L}^n(a)\, d\mathscr{H}^{n-1}(x)\\ &=\sum_{F_0\in\mathscr{F}}\sum_{c\in C_{\varepsilon}}\int_{\varepsilon F_0}\int_{Q_{\varepsilon}(c)}\bigg\lvert V(x+y)-\fint_{Q_{\varepsilon}(y)}V\bigg\rvert^pf(y)\, d\mathcal{L}^n(y)\, d\mathscr{H}^{n-1}(x). \end{align*} Observe that for any $c\in C_\varepsilon$ and $x\in \partial Q_\varepsilon (0)$, \begin{align*} & \int_{Q_\varepsilon(c)}\left\lvert V(x+y)-\fint_{Q_\varepsilon(y)}V\right\rvert^pf(y)\,d\mathcal{L}^n(y)\\ \leq & \fint_{Q_\varepsilon(0)}\int_{Q_\varepsilon(c)}\lvert V(x+y)-V(z+y)\rvert^p f(y)\, d\mathcal{L}^n(y)\,d\mathcal{L}^n(z).
\end{align*} Thus for any $F_0\in \mathscr{F}$ \begin{align*} &\sum_{c\in C_{\varepsilon}}\int_{\varepsilon F_0}\int_{Q_{\varepsilon}(c)}\bigg\lvert V(x+y)-\fint_{Q_{\varepsilon}(y)}V\bigg\rvert^pf(y)\, d\mathcal{L}^n(y)\, d\mathscr{H}^{n-1}(x)\\ \leq &\int_{\varepsilon F_0}\fint_{Q_\varepsilon(0)}\int_{Q_{1-2\varepsilon}(0)}\lvert V(x+y)-V(z+y)\rvert^p f(y)\,d\mathcal{L}^n(y)\,d\mathcal{L}^n(z)\,d\mathscr{H}^{n-1}(x)\\ \leq &\,2^{p-1}\int_{\varepsilon F_0}\int_{Q_{1-2\varepsilon}(0)}\lvert V(x+y)-V(y)\rvert^p f(y)\, d\mathcal{L}^n(y)\,d\mathscr{H}^{n-1}(x)\\&+2^{p-1}\varepsilon^{n-1}\fint_{Q_\varepsilon (0)}\int_{Q_{1-2\varepsilon}(0)}\lvert V(z+y)-V(y)\rvert^p f(y)\, d\mathcal{L}^n(y)\,d\mathcal{L}^n(z)\\ \leq &\,2^p\varepsilon^{n-1}\sup_{\alpha\in Q_\varepsilon(0)}\lVert V-V(\cdot-\alpha)\rVert^p_{L^p(Q_{1-2\varepsilon}, \mu)}. \end{align*} Since $C_c^0(Q_1(0))$ is dense in $L^p(Q_1(0),\mu)$ (see \cite[Theorem 4.3]{maggi}), given any $\delta>0$ we can find $\tilde V\in C_c^0(Q_1(0))$ such that \begin{align*} \int_{Q_1(0)}\lvert V-\tilde V\rvert^p\, d\mu\le\delta. \end{align*} Notice that by Taylor's theorem \begin{align}\label{estimate on the difference measure} \left\lvert\frac{f(x+\alpha)-f(x)}{f(x)}\right\rvert=&\left\lvert\frac{(\frac{1}{2}-\lVert x+\alpha\rVert_{\infty})^q-(\frac{1}{2}-\lVert x\rVert_{\infty})^q}{(\frac{1}{2}-\lVert x\rVert_{\infty})^q}\right\rvert\leq \lvert q\rvert\,\lVert \alpha\rVert_\infty \frac{(\frac{1}{2}-\lVert x\rVert_\infty-\frac{\varepsilon}{2})^{q-1}}{(\frac{1}{2}-\lVert x\rVert_\infty)^q}\\\nonumber\leq& \lvert q\rvert\,\varepsilon\, 2^{1-q}\left(\frac{1}{2}-\lVert x\rVert_\infty\right)^{-1}\leq C \end{align} for every $x\in Q_{1-\varepsilon}(0)$ and $\alpha\in Q_\varepsilon(0)$, where $C>0$ is a constant depending only on $q$.
Thus \begin{align*} \lVert V-V(\cdot-\alpha)\rVert_{L^p(Q_{1-\varepsilon}(0), \mu)}^p&\le 4^{p-1}\bigg(\int_{Q_{1-\varepsilon}(0)}\lvert V-\tilde V\rvert^p\, d\mu+\int_{Q_{1-\varepsilon}(0)}\lvert\tilde V-\tilde V(\cdot-\alpha)\rvert^p\, d\mu\\ &+\int_{Q_{1-\varepsilon}(0)}\lvert\tilde V(\cdot-\alpha)-V(\cdot-\alpha)\rvert^p\, d\mu\bigg)\\ &\le 4^{p-1}\bigg(2\int_{Q_{1}(0)}\lvert V-\tilde V\rvert^p\, d\mu+\int_{Q_{1-\varepsilon}(0)}\lvert\tilde V-\tilde V(\cdot-\alpha)\rvert^p\, d\mu\\ &+\int_{Q_{1-\varepsilon}(-\alpha)}\lvert\tilde V-V\rvert^p\,\frac{f(\cdot+\alpha)-f}{f}\, d\mu\bigg)\\ &\le 4^{p-1}(2+C)\delta+4^{p-1}\int_{Q_{1-\varepsilon}(0)}\lvert\tilde V-\tilde V(\cdot-\alpha)\rvert^p\, d\mu. \end{align*} As $\tilde V\in C^0_c(Q_1(0))$, \begin{align*} \sup_{\alpha\in Q_\varepsilon(0)}\lVert\tilde V-\tilde V(\cdot-\alpha)\rVert_{L^p(Q_{1-\varepsilon}(0), \mu)}^p\to 0 \quad\text{ as }\varepsilon\to 0^+, \end{align*} and therefore \begin{align*} \limsup_{\varepsilon\to 0^+}\sup_{\alpha\in Q_\varepsilon(0)}\lVert V-V(\cdot-\alpha)\rVert_{L^p(Q_{1-\varepsilon}(0), \mu)}^p\le4^{p-1}(2+C)\delta. \end{align*} By letting $\delta\to 0^+$ in the previous inequality we get \begin{align}\label{Equation: Continuity wrt argument for weighted measure} \sup_{\alpha\in Q_\varepsilon(0)}\lVert V- V(\cdot-\alpha)\rVert_{L^p(Q_{1-\varepsilon}(0), \mu)}^p\to 0 \quad\text{ as }\varepsilon\to 0^+, \end{align} and claim \eqref{equation: claim n-1 decay} follows.
By Fubini's theorem, for every fixed $\varepsilon\in E_V$ there exists a non-negligible subset $T_{\varepsilon}\subset Q_{\varepsilon}(0)$ such that, for any $a\in T_\varepsilon$, we have $\varepsilon\in R_{V, c_Q}$ for every $Q\in \mathscr{C}_{\varepsilon,a}$ and \begin{align*} \sum_{Q\in\mathscr{C}_{\varepsilon,a}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}&\le\frac{1}{\varepsilon^n}\int_{Q_{\varepsilon}(0)}\sum_{Q\in\mathscr{C}_{\varepsilon,a}}\int_{\partial Q}\lvert V-(V)_Q\rvert^p f(c_Q)\, d\mathscr{H}^{n-1}\, d\mathcal{L}^n(a)\\ &=\frac{1}{\varepsilon^n}I_\varepsilon. \end{align*} By \eqref{equation: claim n-1 decay}, for every $a\in T_{\varepsilon}$ we have \begin{align*} \sum_{Q\in\mathscr{C}_{\varepsilon,a}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}=o(\varepsilon^{-1}) \end{align*} as $\varepsilon\rightarrow 0^+$ in $E_V$. The statement follows. \end{proof} \end{lem} Fix any $V\in L_{\mathbb{Z}}^p(Q_1(0))$ and $\varepsilon\in E_V$. From now on, we will denote simply by $\mathscr{C}_{\varepsilon}$ the cubic decomposition $\mathscr{C}_{\varepsilon,a_{\varepsilon}}$ provided by Lemma \ref{Lemma: cubic decomposition}. Accordingly, the subscript ``$a_{\varepsilon}$'' will be omitted in any notation referring to this cubic decomposition. Given any $Q\in\mathscr{C}_{\varepsilon}$, we say that $Q$ is a \textit{good cube} if \begin{align*} \int_{\partial Q}V\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}=0 \end{align*} and that $Q$ is a \textit{bad cube} otherwise. We denote by $\mathscr{C}_{\varepsilon}^g$ the subfamily of $\mathscr{C}_{\varepsilon}$ consisting of all the good cubes and by $\mathscr{C}_{\varepsilon}^b$ the one consisting of all the bad cubes.
Moreover, we let \begin{align*} \Omega_{\varepsilon}&:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}}Q,\quad\Omega_{\varepsilon}^g:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}^g}Q,\quad\Omega_{\varepsilon}^b:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}^b}Q,\\ S_{\varepsilon}&:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}}\partial Q,\quad S_{\varepsilon}^g:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}^g}\partial Q,\quad S_{\varepsilon}^b:=\bigcup_{Q\in\mathscr{C}_{\varepsilon}^b}\partial Q. \end{align*} \begin{lem}\label{Lemma: volume of the bad cubes vanishes at the limit} Assume that $n\geq 2$. Then, we have \begin{align*} \lim_{\substack{\varepsilon\in E_V\\\varepsilon\rightarrow 0^+}}\varepsilon^n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)=0. \end{align*} In particular, if $q=0$ (and thus $\mu=\mathcal{L}^n$) we have \begin{align*} \lim_{\substack{\varepsilon\in E_V\\\varepsilon\rightarrow 0^+}}\mathcal{L}^n(\Omega_\varepsilon^b)=0. \end{align*} \begin{proof} Notice that by estimate \eqref{estimate on the difference measure} \begin{align}\label{estimate on f} \frac{\lvert f-f(c_Q)\rvert}{f}\le C \qquad\mbox{ on } Q, \mbox{ for every } Q\in\mathscr{C}_{\varepsilon}, \end{align} for some constant $C>0$ depending only on $q$. For every bad cube $Q\in\mathscr{C}_{\varepsilon}^b$, it holds that \begin{align*} 1\le\bigg\lvert\int_{\partial Q}V\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}\bigg\rvert\le\int_{\partial Q}\lvert V\rvert\, d\mathscr{H}^{n-1}.
\end{align*} By multiplying the previous inequality by $f(c_Q)$, summing over all the bad cubes and applying H\"older's inequality, we get \begin{align*} \sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)&\le\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V\rvert f(c_Q)\, d\mathscr{H}^{n-1}\\ &\le \bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\bigg)^{\frac{1}{p}} \bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}f(c_Q)\, d\mathscr{H}^{n-1}\bigg)^{\frac{1}{p'}}\\ &= (2n)^{\frac{1}{p'}}\varepsilon^{\frac{n-1}{p'}}\bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V\rvert^p f(c_Q)\, d\mathscr{H}^{n-1}\bigg)^{\frac{1}{p}}\bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)\bigg)^{\frac{1}{p'}}, \end{align*} which is equivalent to \begin{align*} \sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)&\le (2n)^{p-1}\varepsilon^{(p-1)(n-1)}\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V\rvert^p f(c_Q)\, d\mathscr{H}^{n-1}.
\end{align*} Hence, by the triangle inequality, we get \begin{align}\label{estimate number of bad cubes} \nonumber \sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)&\le (4n)^{p-1}\varepsilon^{(p-1)(n-1)-1}\bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V-(V)_Q\rvert^p f(c_Q)\, d\mathscr{H}^{n-1}\\ \nonumber &\quad+2n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\bigg)\\ \nonumber &\le (4n)^{p-1}\varepsilon^{(p-1)(n-1)-1}\Bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\ \nonumber &\quad+2n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert V\rvert^p f\, d\mathcal{L}^n+2n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert V\rvert^p \frac{f(c_Q)-f}{f}f\, d\mathcal{L}^n\Bigg)\\ &\le (4n)^{p-1}\varepsilon^{(p-1)(n-1)-1}\Bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\ \nonumber &\quad+2n(1+C)\int_{\Omega_{\varepsilon}^b}\lvert V\rvert^p f\, d\mathcal{L}^n\Bigg). \end{align} Therefore \begin{align*} \varepsilon^n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)&\le (4n)^{p-1}\varepsilon^{p(n-1)}\Bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\ &\quad+2n(1+C)\int_{Q_1(0)}\lvert V\rvert^p f\, d\mathcal{L}^n\Bigg), \end{align*} and the statement follows from \eqref{estimate cubic decomposition} (here we use the assumption $n\ge 2$). \end{proof} \end{lem} \begin{rem}\label{Remark: no bad cubes if p is big enough} Assume that $p\in\big[n/(n-1),+\infty\big)$ and $q\le 0$.
In this case $\varepsilon^{(p-1)(n-1)-1}$ remains bounded as $\varepsilon\to 0^+$. Now, by Lemma \ref{Lemma: cubic decomposition}, \begin{align*} \varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}\lvert V-(V)_Q\rvert^p f(c_Q)\, d\mathscr{H}^{n-1}\to 0\quad\text{ as }\varepsilon\to 0^+\text{ in }E_V. \end{align*} Moreover, by Lemma \ref{Lemma: volume of the bad cubes vanishes at the limit}, we have \begin{align*} \int_{\Omega_{\varepsilon}^b}f\, d\mathcal{L}^n\le(1+C)\varepsilon^n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)\to 0\quad\text{ as }\varepsilon\to 0^+\text{ in }E_V. \end{align*} This implies $\mathcal{L}^n(\Omega_\varepsilon^b)\to 0$ as $\varepsilon\to 0^+$ in $E_V$, and therefore \begin{align*} \int_{\Omega_{\varepsilon}^b}\lvert V\rvert^pf\, d\mathcal{L}^n\to 0\quad\text{ as }\varepsilon\to 0^+\text{ in }E_V \end{align*} by absolute continuity of the integral. Thus it follows from \eqref{estimate number of bad cubes} that \begin{align*} \sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)\to 0\quad\text{ as }\varepsilon\to 0^+\text{ in }E_V. \end{align*} Let $N_{\varepsilon}^b$ be the number of bad cubes in $\mathscr{C}_{\varepsilon}$. Notice that, for $q\le 0$, we have $f\ge 2^{-q}$ on $Q_1(0)$. This implies \begin{align*} N_{\varepsilon}^b\le 2^{q}\sum_{Q\in\mathscr{C}_{\varepsilon}^b}f(c_Q)\to 0\quad\text{ as }\varepsilon\to 0^+\text{ in }E_V. \end{align*} Since $N_\varepsilon^b$ is an integer for every $\varepsilon\in E_V$, we get $N_{\varepsilon}^b=0$ for every $\varepsilon\in E_V$ small enough. Hence, whenever $p\in\big[n/(n-1),+\infty\big)$ and $q\le 0$, we will assume, without loss of generality, that there are no bad cubes in our chosen decomposition.
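Purely as a sanity check (not needed for the argument), the sign of the exponent $(p-1)(n-1)-1$ appearing in \eqref{estimate number of bad cubes}, which governs this remark, can be verified with exact rational arithmetic:

```python
from fractions import Fraction

def exponent(p, n):
    # Exponent of epsilon in the bad-cube estimate: (p-1)(n-1) - 1.
    return (p - 1) * (n - 1) - 1

for n in range(2, 10):
    p_crit = Fraction(n, n - 1)
    assert exponent(p_crit, n) == 0                   # borderline exponent
    assert exponent(p_crit + Fraction(1, 10), n) > 0  # p > n/(n-1): bounded
    assert exponent(p_crit - Fraction(1, 10), n) < 0  # p < n/(n-1): blows up
```

The exponent vanishes exactly at $p=n/(n-1)$ and is increasing in $p$, which is why $\varepsilon^{(p-1)(n-1)-1}$ stays bounded precisely in the range considered here.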
\end{rem} \subsection{Smoothing on the $(n-1)$-skeleton of the cubic decomposition} \begin{lem}\label{Lemma: smoothing in the (n-1)-skeleton} Let $Q\subset\mathbb{R}^n$ be an $n$-dimensional cube with side length $R$. Let $V\in L^p(Q,\mathbb{R}^m)$ and let $\varepsilon>0$. Then there exists $V_\varepsilon\in C^\infty_c(Q, \mathbb{R}^m)$ such that \begin{align*} \int_Q V_\varepsilon\, d\mathcal{L}^n=\int_Q V\, d\mathcal{L}^n \end{align*} and \begin{align*} \lVert V_\varepsilon-V\rVert_{L^p(Q)}<\varepsilon. \end{align*} \begin{proof} Without loss of generality, we assume that $Q$ is centered at the origin of $\mathbb{R}^n$. Let $\psi\in C^\infty_c(\frac{1}{2}Q)$ and $r_0\in (1/2,1)$ be such that \begin{align*} \int_Q \psi\, d\mathcal{L}^n=1\quad\text{ and }\quad R^n(1-r_0^n)<\frac{\varepsilon}{\lVert \psi\rVert_{L^p(Q)}}. \end{align*} Let $r\in (r_0,1)$ be such that \begin{align*} \lVert V\rVert_{L^p(Q\smallsetminus rQ)}\leq\min\left\{\left(\frac{\varepsilon}{\lVert \psi\rVert_{L^p(Q)}}\right)^\frac{1}{p},\varepsilon\right\}. \end{align*} Set \begin{align*} s:=\int_{Q\smallsetminus rQ}V\, d\mathcal{L}^n, \qquad \tilde{V}:=\chi_{rQ}V+s\psi\in L^p(Q,\mathbb{R}^m). \end{align*} Then \begin{align*} \int_Q \tilde{V}\, d\mathcal{L}^n=\int_{rQ}V\, d\mathcal{L}^n+\left(\int_{Q\smallsetminus rQ}V\, d\mathcal{L}^n\right)\int_Q \psi\, d\mathcal{L}^n=\int_Q V\, d\mathcal{L}^n.
\end{align*} Moreover, \begin{align*} \lvert s\rvert&=\left\lvert \int_{Q\smallsetminus rQ}V\, d\mathcal{L}^n\right\rvert\leq\lvert Q\smallsetminus rQ\rvert^\frac{1}{p'}\lVert V\rVert_{L^p(Q\smallsetminus rQ)}\\ &\leq (R^n(1-r^n))^\frac{1}{p'}\left(\frac{\varepsilon}{\lVert\psi\rVert_{L^p(Q)}}\right)^\frac{1}{p}\le \frac{\varepsilon}{\lVert\psi\rVert_{L^p(Q)}}. \end{align*} Therefore \begin{align*} \lVert s\psi\rVert_{L^p(Q)}=\lVert \psi\rVert_{L^p(Q)}\lvert s\rvert\leq \varepsilon \end{align*} and, by the choice of $\tilde{V}$, \begin{align*} \lVert V-\tilde{V}\rVert_{L^p(Q)}\leq \lVert s\psi\rVert_{L^p(Q)}+\lVert V\rVert_{L^p(Q\smallsetminus rQ)}\leq 2\varepsilon. \end{align*} Notice that $\tilde V\big\vert_{Q\smallsetminus rQ}\equiv 0$.\\ Let $\eta\in C_c^\infty(B_1(0))$ with $\int_{B_1(0)}\eta\, d\mathcal{L}^n=1$. For any $\delta>0$ let \begin{align*} \eta_\delta(x):=\frac{1}{\delta^n}\eta\left(\frac{x}{\delta}\right)\qquad\forall\, x\in \mathbb{R}^n. \end{align*} Choose $\delta_0>0$ such that \begin{align*} 2\delta_0<\operatorname{dist}\big(\partial Q, \partial (rQ)\big)\quad\text{ and }\quad \lVert\tilde{V}-\tilde{V}\ast\eta_{\delta_0}\rVert_{L^p(Q)}\leq\varepsilon. \end{align*} Set $V_\varepsilon:=\tilde{V}\ast\eta_{\delta_0}$. Then $V_\varepsilon\in C^\infty_c(Q)$, \begin{align*} \int_QV_\varepsilon\, d\mathcal{L}^n=\int_{\mathbb{R}^n}\eta_{\delta_0}\, d\mathcal{L}^n\int_Q\tilde{V}\, d\mathcal{L}^n=\int_Q \tilde{V}\, d\mathcal{L}^n=\int_QV\, d\mathcal{L}^n \end{align*} and \begin{align*} \lVert V-V_\varepsilon\rVert_{L^p(Q)}\leq \lVert V-\tilde{V}\rVert_{L^p(Q)}+\lVert \tilde{V}-V_\varepsilon\rVert_{L^p(Q)}\leq 3\varepsilon.
\end{align*} The claim follows after replacing $\varepsilon$ with $\varepsilon/3$ throughout. \end{proof} \end{lem} \subsection{Extensions on good and bad cubes} \begin{lem}[Extension on the good cubes]\label{appendix: extension on the good cubes} Let $\Omega\subset\mathbb{R}^n$ be a bounded, connected Lipschitz domain and let $p\in [1,\infty)$. Let $f\in L^p(\partial\Omega)$ with \begin{align}\label{equation: Assumption for extension on good cubes} \int_{\partial\Omega} f\, d\mathscr{H}^{n-1}=0. \end{align} Then there exists a vector field $V\in L^p(\Omega)$ such that \begin{align}\label{Equation: Integration by parts for new VF V} \int_\Omega V\cdot \nabla\varphi\, d\mathcal{L}^n=\int_{\partial \Omega}f\varphi\, d\mathscr{H}^{n-1}\qquad\forall\,\varphi\in C^\infty(\mathbb{R}^n) \end{align} and \begin{align}\label{appendix estimate extension good cubes} \int_\Omega \lvert V\rvert^p\, d\mathcal{L}^n\leq C(p,\Omega)\int_{\partial \Omega}\lvert f\rvert^p\, d\mathscr{H}^{n-1} \end{align} for some constant $C(p,\Omega)$ depending only on $p$ and $\Omega$.\\ Moreover, if $p=1$, then $V\in L^q(\Omega)$ for any $q\in\big[1,\frac{n}{n-1}\big)$. \end{lem} \begin{rem} Observe that \eqref{Equation: Integration by parts for new VF V} implies that $V$ is a distributional solution of the following Neumann problem: \begin{align*} \begin{cases} \operatorname{div}(V)=0 &\text{ in }\Omega,\\ V\cdot \nu_{\partial \Omega}=f&\text{ on }\partial\Omega. \end{cases} \end{align*} \end{rem} \begin{proof} \,\\ \textbf{Step 1:} First we consider the case $p\in (1,\infty)$.\\ Let $p':=\displaystyle{\frac{p}{p-1}}$. For any $u\in W^{1,p'}(\Omega)$ let \begin{align*} E_p(u):=\frac{1}{p'}\int_\Omega \lvert \nabla u\rvert^{p'}\,d\mathcal{L}^n-\int_{\partial \Omega} fu\,d\mathscr{H}^{n-1}. \end{align*} Recall that any function $u\in W^{1, p'}(\Omega)$ has a trace in $L^{p'}(\partial \Omega)$ and that the trace operator is continuous.
Thus, for any $v\in W^{1,p'}(\Omega)$ with $\displaystyle{\int_\Omega v\,d\mathcal{L}^n=0}$, the Poincar\'e inequality yields \begin{align*} \left\lvert\int_{\partial\Omega} fv\,d\mathscr{H}^{n-1}\right\rvert\leq \lVert f\rVert_{L^p(\partial \Omega)}\lVert v\rVert_{L^{p'}(\partial \Omega)}\leq C(p,\Omega)\lVert f\rVert_{L^p(\partial \Omega)}\lVert \nabla v\rVert_{L^{p'}(\Omega)} \end{align*} for some constant $C(p,\Omega)$ depending only on $p$ and $\Omega$. In particular, the energy $E_p$ is well defined on $W^{1,p'}(\Omega)$.\\ Let \begin{align*} \dot W^{1,p'}(\Omega):=\left\{v\in W^{1,p'}(\Omega) \mbox{ s.t. } \int_\Omega v\,d\mathcal{L}^n=0\right\} \end{align*} and observe that $E_p$ is strictly convex on $\dot W^{1,p'}(\Omega)$. Let $u$ be the unique minimizer of $E_p$ in $\dot W^{1,p'}(\Omega)$. Then\footnote{The argument above shows that \eqref{Equation: V satisfies equation weakly} holds for any $\varphi\in C^\infty(\mathbb{R}^n)$ with $\int_\Omega \varphi\, d\mathcal{L}^n=0$, but assumption \eqref{equation: Assumption for extension on good cubes} implies that \eqref{Equation: V satisfies equation weakly} remains valid for any $\varphi\in C^\infty(\mathbb{R}^n)$.} \begin{align} \label{Equation: V satisfies equation weakly} \int_\Omega \lvert \nabla u\rvert^{p'-2}\nabla u\cdot \nabla\varphi\,d\mathcal{L}^n=\int_{\partial \Omega}f\varphi\,d\mathscr{H}^{n-1}\qquad\forall\, \varphi\in C^\infty(\mathbb{R}^n). \end{align} Moreover, as $u$ is a minimizer of $E_p$, we have $E_p(u)\leq E_p(0)=0$. It follows that \begin{align*} \frac{1}{p'}\int_\Omega\lvert\nabla u\rvert^{p'}\,d\mathcal{L}^n\leq \int_{\partial \Omega}fu\,d\mathscr{H}^{n-1}\leq\lVert f\rVert_{L^p(\partial \Omega)}\lVert u\rVert_{L^{p'}(\partial \Omega)}\leq C(p,\Omega)\lVert f\rVert_{L^p(\partial \Omega)}\lVert \nabla u\rVert_{L^{p'}(\Omega)}.
\end{align*} Thus \begin{align*} \int_\Omega \lvert \nabla u\rvert^{p'}\,d\mathcal{L}^n\leq \left(p'\, C(p,\Omega)\right)^p\int_{\partial \Omega}\lvert f\rvert^p\,d\mathscr{H}^{n-1}. \end{align*} Set $V:=\lvert\nabla u\rvert^{p'-2}\nabla u$ in $\Omega$. Then, by \eqref{Equation: V satisfies equation weakly}, \begin{align*} \int_\Omega V\cdot\nabla \varphi\,d\mathcal{L}^n=\int_{\partial\Omega} f\varphi\,d\mathscr{H}^{n-1}\qquad\forall\,\varphi\in C^\infty(\mathbb{R}^n) \end{align*} and \begin{align*} \int_{\Omega}\lvert V\rvert^p\,d\mathcal{L}^n=\int_{\Omega}\lvert \nabla u\rvert^{p'}\,d\mathcal{L}^n\leq \left(p'\, C(p,\Omega)\right)^p\int_{\partial \Omega}\lvert f\rvert^p\,d\mathscr{H}^{n-1}. \end{align*} \textbf{Step 2:} Next we consider the case $p=1$.\\ Let $s>n$. For any $u\in W^{1,s}(\Omega)$ let \begin{align*} E_s(u):=\frac{1}{s}\int_\Omega\lvert \nabla u\rvert^s\,d\mathcal{L}^n-\int_{\partial \Omega}f u\,d\mathscr{H}^{n-1}. \end{align*} Notice that $E_s$ is well defined and strictly convex on $\dot{W}^{1,s}(\Omega)$.\\ Recall the Sobolev embedding \begin{align*} W^{1,s}(\Omega)\hookrightarrow C^{0,\alpha}(\overline{\Omega}) \end{align*} with $\alpha=1-\frac{n}{s}$. Then, for any $u\in W^{1,s}(\Omega)$, the trace of $u$ on $\partial \Omega$ lies in $C^{0,\alpha}(\partial \Omega)$ and, if $\int_\Omega u\,d\mathcal{L}^n=0$, the Poincar\'e inequality implies \begin{align*} \lVert u\rVert_{L^{\infty}(\partial \Omega)}\leq C(s,\Omega)\lVert \nabla u\rVert_{L^s(\Omega)} \end{align*} for some constant $C(s,\Omega)$ depending only on $s$ and $\Omega$.\\ Let $u$ be the unique minimizer of $E_s$ in $\dot{W}^{1,s}(\Omega)$.
Then, since $E_s(u)\leq E_s(0)=0$, there holds \begin{align*} \frac{1}{s}\int_\Omega\lvert \nabla u\rvert^s\,d\mathcal{L}^n&\leq \int_{\partial\Omega}fu\,d\mathscr{H}^{n-1}\\ &\leq\lVert f\rVert_{L^1(\partial \Omega)}\lVert u\rVert_{L^\infty(\partial\Omega)}\leq C(s,\Omega)\lVert f\rVert_{L^1(\partial\Omega)}\lVert \nabla u\rVert_{L^s(\Omega)}. \end{align*} Therefore \begin{align*} \left(\int_\Omega \lvert\nabla u\rvert^s\,d\mathcal{L}^n\right)^{\frac{s-1}{s}}\leq s\, C(s,\Omega) \int_{\partial \Omega}\lvert f\rvert\,d\mathscr{H}^{n-1}. \end{align*} Moreover, since $u$ is a minimizer of $E_s$, \begin{align*} \int_\Omega\lvert\nabla u\rvert^{s-2}\nabla u\cdot\nabla\varphi\,d\mathcal{L}^n=\int_{\partial \Omega}f\varphi\,d\mathscr{H}^{n-1}\qquad\forall\, \varphi\in C^\infty(\mathbb{R}^n). \end{align*} As before, set $V:=\lvert \nabla u\rvert^{s-2}\nabla u$ in $\Omega$. Then \begin{align*} \int_{\Omega}\lvert V\rvert\,d\mathcal{L}^n=\int_\Omega \lvert\nabla u\rvert^{s-1}\,d\mathcal{L}^n\leq \mathcal{L}^{n}(\Omega)^\frac{1}{s}\left(\int_\Omega\lvert\nabla u\rvert^s\,d\mathcal{L}^n\right)^\frac{s-1}{s}\leq s\,C(s,\Omega)\,\mathcal{L}^{n}(\Omega)^\frac{1}{s}\int_{\partial\Omega}\lvert f\rvert\,d\mathscr{H}^{n-1}. \end{align*} Moreover, \begin{align*} \int_\Omega \lvert V\rvert^{\frac{s}{s-1}}\,d\mathcal{L}^n=\int_\Omega \lvert \nabla u\rvert^s\,d\mathcal{L}^n<\infty. \end{align*} Since $s>n$ was arbitrary and $s/(s-1)$ ranges over $\big(1,n/(n-1)\big)$ as $s$ ranges over $(n,+\infty)$, the last claim of the statement follows. \end{proof} \begin{rem}\label{appendix scaling argument} Let $Q\subset \mathbb{R}^n$ be the unit cube and let $C(p,Q)$ be the corresponding constant in \eqref{appendix estimate extension good cubes}. By an easy scaling argument one sees that, for any $\varepsilon>0$, one can choose $C(p,\varepsilon Q)=\varepsilon\, C(p,Q)$.
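Indeed (a sketch of the computation): given $f\in L^p(\partial(\varepsilon Q))$ with zero average, let $\tilde V$ be the extension on $Q$ of the datum $\tilde f(y):=f(\varepsilon y)$ given by Lemma \ref{appendix: extension on the good cubes}, and set $V(x):=\tilde V(x/\varepsilon)$. Testing \eqref{Equation: Integration by parts for new VF V} with $\psi(y):=\varphi(\varepsilon y)$ shows that both sides scale by $\varepsilon^{n-1}$, so $V$ satisfies \eqref{Equation: Integration by parts for new VF V} on $\varepsilon Q$ with datum $f$, and
\begin{align*}
\int_{\varepsilon Q}\lvert V\rvert^p\, d\mathcal{L}^n=\varepsilon^n\int_{Q}\lvert \tilde V\rvert^p\, d\mathcal{L}^n\le\varepsilon^n\, C(p,Q)\int_{\partial Q}\lvert \tilde f\rvert^p\, d\mathscr{H}^{n-1}=\varepsilon\, C(p,Q)\int_{\partial (\varepsilon Q)}\lvert f\rvert^p\, d\mathscr{H}^{n-1}.
\end{align*}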
\end{rem} \begin{lem}[Extension on the bad cubes]\label{Lemma: extension on the bad cubes} Let $Q:=Q_{\varepsilon}(c_Q)\subset\mathbb{R}^n$ and let $r(x):=\lVert x-c_Q\rVert_{\infty}$ for every $x\in\mathbb{R}^n$. Consider any vector field $V:Q\rightarrow\mathbb{R}^n$ of the form \begin{align*} V(x):=\frac{1}{2^{n-1}}f\bigg(\frac{\varepsilon}{2}\frac{x-c_Q}{r(x)}+c_Q\bigg)\frac{x-c_Q}{r(x)^n} \qquad\forall\,x\in Q, \end{align*} for some $f\in L^{\infty}(\partial Q)$. Then, the following facts hold: \begin{enumerate} \item $V\in L^{p}(Q)$ for every $p\in\big[1,n/(n-1)\big)$; \item for some constant $C(n,p)>0$ depending only on $n$ and $p$ we have \begin{align}\label{estimate bad cubes} \int_Q\lvert V\rvert^p\, d\mathcal{L}^n\le\varepsilon\, C(n,p)\int_{\partial Q}\lvert f\rvert^p\, d\mathscr{H}^{n-1}; \end{align} \item for every $\varphi\in C^{\infty}(\mathbb{R}^n)$ we have \begin{align}\label{divergence bad cubes} \int_{Q}V\cdot\nabla\varphi\, d\mathcal{L}^n=\int_{\partial Q}f\varphi\, d\mathscr{H}^{n-1}-\bigg(\int_{\partial Q}f\, d\mathscr{H}^{n-1}\bigg)\varphi(c_Q). \end{align} \end{enumerate} \begin{proof} Without loss of generality, we assume that $\varepsilon=1$ and $c_Q=0$. First, notice that $r:\mathbb{R}^n\rightarrow\mathbb{R}$ is a Lipschitz map such that $\lvert\nabla r(x)\rvert=1$ for a.e. $x\in\mathbb{R}^n$. Moreover, since all norms on $\mathbb{R}^n$ are equivalent, there exists a constant $\tilde C(n)>0$ depending only on $n$ such that $\lvert x\rvert\le\tilde C\,r(x)$ for every $x\in\mathbb{R}^n$. Now choose any $p\in\big[1,n/(n-1)\big)$.
By the coarea formula we have
\begin{align*}
\int_Q\lvert V\rvert^p\, d\mathcal{L}^n&\le\frac{\tilde C^p}{2^{(n-1)p}}\int_{0}^\frac{1}{2}\frac{1}{\rho^{(n-1)p}}\int_{\partial Q_{2\rho}(0)}\bigg|f\bigg(\frac{x}{2\rho}\bigg)\bigg|^p\, d\mathscr{H}^{n-1}(x)\,d\rho\\%change of variables x=2\rho y
&=\frac{\tilde C^p}{2^{(n-1)(p-1)}}\bigg(\int_{0}^\frac{1}{2}\frac{1}{\rho^{(n-1)(p-1)}}\,d\rho\bigg)\bigg(\int_{\partial Q}\lvert f(y)\rvert^p\, d\mathscr{H}^{n-1}(y)\bigg)\\
&=C\int_{\partial Q}\lvert f\rvert^p\, d\mathscr{H}^{n-1},
\end{align*}
with
\begin{align*}
C=C(n,p):=\frac{\tilde C^p}{2^{(n-1)(p-1)}} \int_{0}^\frac{1}{2}\frac{1}{\rho^{(n-1)(p-1)}}\,d\rho<+\infty.
\end{align*}
Hence, $1.$ and $2.$ follow at once. We remark that the condition $p\in\big[1,n/(n-1)\big)$ is needed in order to guarantee the convergence of the integral in $\rho$.

It remains to prove $3.$ Pick any $\varphi\in C^{\infty}(\mathbb{R}^n)$. By the coarea formula we have
\begin{align*}
\int_{Q}V\cdot\nabla\varphi\, d\mathcal{L}^n&=\frac{1}{2^{n-1}}\int_{0}^\frac{1}{2}\frac{1}{\rho^{n}}\int_{\partial Q_{2\rho}(0)}f\bigg(\frac{x}{2\rho}\bigg)\big(x\cdot\nabla\varphi(x)\big)\, d\mathscr{H}^{n-1}(x)\,d\rho\\
&=2\int_{0}^\frac{1}{2}\int_{\partial Q}f(y)\big(y\cdot\nabla\varphi(2\rho y)\big)\, d\mathscr{H}^{n-1}(y)\,d\rho\\
&=\int_{\partial Q}f(y)\int_{0}^\frac{1}{2}\frac{d}{d\rho}\big(\varphi(2\rho y)\big)\,d\rho\, d\mathscr{H}^{n-1}(y)\\
&=\int_{\partial Q}f\varphi\, d\mathscr{H}^{n-1}-\bigg(\int_{\partial Q}f\, d\mathscr{H}^{n-1}\bigg)\varphi(0)
\end{align*}
and $3.$ follows.
\end{proof}
\end{lem}
\subsection{Proof of Theorem 1.1}
We are finally ready to prove Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields}.
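Both quantitative conclusions of Lemma \ref{Lemma: extension on the bad cubes} can be checked numerically in the model case $n=2$, $p=1$, $\varepsilon=1$, $c_Q=0$, $f\equiv 1$: the coarea computation above gives the exact value $\sqrt{2}+\operatorname{arcsinh}(1)$ for $\int_Q\lvert V\rvert\,d\mathcal{L}^2$, and for the test function $\varphi(x,y)=x+y^2$ a direct boundary computation makes both sides of \eqref{divergence bad cubes} equal to $2/3$. The following sketch (for illustration only; the grid size $N$ and the test function are arbitrary choices, not part of the lemma) verifies this with a midpoint rule:

```python
import math

N = 400                       # midpoint grid on Q = (-1/2, 1/2)^2
h = 1.0 / N

def V(x, y):
    """Bad-cube field for n = 2, eps = 1, c_Q = 0, f == 1: V(x) = x / (2 r(x)^2)."""
    rr = max(abs(x), abs(y))  # sup-norm distance to the center
    return x / (2 * rr**2), y / (2 * rr**2)

l1_norm = 0.0                 # approximates  int_Q |V| dL^2
lhs = 0.0                     # approximates  int_Q V . grad(phi) dL^2,  phi(x, y) = x + y^2
for i in range(N):
    for j in range(N):
        x = -0.5 + (i + 0.5) * h
        y = -0.5 + (j + 0.5) * h
        vx, vy = V(x, y)
        l1_norm += math.hypot(vx, vy) * h * h
        lhs += (vx * 1.0 + vy * 2.0 * y) * h * h   # grad(phi) = (1, 2y)

exact_l1 = math.sqrt(2) + math.asinh(1)   # exact value from the coarea computation
upper_bound = math.sqrt(2) / 2 * 4        # eps * C(2,1) * H^1(bd Q), with C(2,1) = sqrt(2)/2
rhs = 2.0 / 3.0                           # int_{bd Q} f phi dH^1 - (int_{bd Q} f) phi(0)
```

Up to quadrature error, `l1_norm` matches `exact_l1` and stays below `upper_bound` $=2\sqrt{2}$, while `lhs` matches `rhs`, which is the content of \eqref{estimate bad cubes} and \eqref{divergence bad cubes} in this special case.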
\begin{proof}
Let $V\in L_{\mathbb{Z}}^p(Q_1(0))$ and let $\varepsilon\in E_V$ (constructed in Lemma \ref{Lemma: cubic decomposition}). First, we notice that by using Lemma \ref{Lemma: smoothing in the (n-1)-skeleton} separately on every face $F\in\mathscr{F}_{\varepsilon}$ we can build a vector field $V_{\varepsilon}\in C^{\infty}(S_{\varepsilon})$ such that
\begin{align*}
\int_{\partial Q}V_{\varepsilon}\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}=\int_{\partial Q}V\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}\qquad\forall\,Q\in\mathscr{C}_{\varepsilon}
\end{align*}
and
\begin{align*}
\sum_{Q\in \mathscr{C}_\varepsilon}\int_{\partial Q}\lvert V_\varepsilon-V\rvert^p f(c_Q)\,d\mathscr{H}^{n-1}<\varepsilon.
\end{align*}
Let $\tilde V_{\varepsilon}$ be the vector field defined $\mathcal{L}^n$-a.e. on $\Omega_{\varepsilon}$ as follows:
\begin{enumerate}
\item if $Q\in \mathscr{C}_{\varepsilon}$ is a good cube, then we let $\tilde V_{\varepsilon}:=W_{\varepsilon}+(V)_Q$ on $Q$, where $W_{\varepsilon}$ is the extension of the datum $f:=\big(V_{\varepsilon}-(V)_Q\big)\cdot\nu_{\partial Q}$ given by Lemma \ref{appendix: extension on the good cubes} (notice that for any good cube condition \eqref{equation: Assumption for extension on good cubes} is satisfied by our choice of $f$);
\item if $Q\in \mathscr{C}_{\varepsilon}$ is a bad cube, then we let
\begin{align*}
\tilde V_{\varepsilon}(x):=\frac{1}{2^{n-1}}f\bigg(\frac{\varepsilon}{2}\frac{x-c_Q}{\lVert x-c_Q\rVert_{\infty}}+c_Q\bigg)\frac{x-c_Q}{\lVert x-c_Q\rVert_{\infty}^n} \qquad\forall\,x\in Q,
\end{align*}
with $f:=V_{\varepsilon}\big|_{\partial Q}\cdot\nu_{\partial Q}\in L^{\infty}(\partial Q)$.
\end{enumerate}
We recall that no bad cubes will appear in the cubic decomposition in case $p\in\big[n/(n-1),+\infty\big)$ (see Remark \ref{Remark: no bad cubes if p is big enough}).

\textbf{Claim 1}. We claim that
\begin{align*}
\operatorname{div}(\tilde V_{\varepsilon})=\sum_{Q\in\mathscr{C}_{\varepsilon}^b}d_Q\delta_{c_Q} \qquad\mbox{ distributionally on } \Omega_{\varepsilon},
\end{align*}
where
\begin{align*}
d_{Q}:=\int_{\partial Q}V_{\varepsilon}\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}=\int_{\partial Q}V\cdot\nu_{\partial Q}\, d\mathscr{H}^{n-1}\in\mathbb{Z}\smallsetminus\{0\},\qquad\forall\,Q\in\mathscr{C}_{\varepsilon}^b.
\end{align*}
Indeed, pick any $\varphi\in C_c^{\infty}(\Omega_{\varepsilon})$. Let $Q\in\mathscr{C}_{\varepsilon}$ be a good cube. By the properties of the extension given by Lemma \ref{appendix: extension on the good cubes} and the divergence theorem we have
\begin{align*}
\int_{Q}\tilde V_{\varepsilon}\cdot\nabla\varphi\, d\mathcal{L}^n&=\int_{\partial Q}\big(V_{\varepsilon}-(V)_Q\big)\cdot\nu_{\partial Q}\varphi\, d\mathscr{H}^{n-1}+\int_{\partial Q}\big((V)_Q\cdot \nu_{\partial Q}\big)\varphi\, d\mathscr{H}^{n-1}\\
&=\int_{\partial Q}(V_{\varepsilon}\cdot\nu_{\partial Q})\varphi\, d\mathscr{H}^{n-1}.
\end{align*}
On the other hand, let $Q\in \mathscr{C}_{\varepsilon}$ be a bad cube. By \eqref{divergence bad cubes}, we have
\begin{align*}
\int_{Q}\tilde V_{\varepsilon}\cdot\nabla\varphi\, d\mathcal{L}^n=\int_{\partial Q}(V_{\varepsilon}\cdot\nu_{\partial Q})\varphi\, d\mathscr{H}^{n-1}-d_Q\langle\delta_{c_Q},\varphi\rangle.
\end{align*}
Hence we conclude that
\begin{align*}
\int_{\Omega_{\varepsilon}}\tilde V_{\varepsilon}\cdot\nabla\varphi\, d\mathcal{L}^n&=\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}(V_{\varepsilon}\cdot\nu_{\partial Q})\varphi\, d\mathscr{H}^{n-1}-\sum_{Q\in\mathscr{C}_{\varepsilon}^b}d_Q\langle\delta_{c_Q},\varphi\rangle\\
&=-\sum_{Q\in\mathscr{C}_{\varepsilon}^b}d_Q\langle\delta_{c_Q},\varphi\rangle.
\end{align*}
The claim follows.

\textbf{Claim 2}. We claim that $\lVert\tilde V_{\varepsilon}-V\rVert_{L^p(\Omega_{\varepsilon},\mu)}\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ in $E_V$.\\
Recall estimate \eqref{estimate on f} and notice that
\begin{align*}
\lVert \tilde V_{\varepsilon}-V\rVert_{L^p(\Omega_{\varepsilon},\mu)}^p\le(1+C)(A_{\varepsilon}+B_{\varepsilon}),
\end{align*}
with
\begin{align*}
A_{\varepsilon}&:=\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert\tilde V_{\varepsilon}-V\rvert^p f(c_Q)\, d\mathcal{L}^n,\\
B_{\varepsilon}&:=\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert\tilde V_{\varepsilon}-V\rvert^p f(c_Q)\, d\mathcal{L}^n.
\end{align*}
By the triangle inequality and the estimate in Lemma \ref{appendix: extension on the good cubes}, we have
\begin{align*}
A_{\varepsilon}&\le 2^{p-1}\Bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert\tilde V_{\varepsilon}-(V)_Q\rvert^p f(c_Q)\, d\mathcal{L}^n+\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert V-(V)_Q\rvert^p f(c_Q)\, d\mathcal{L}^n\Bigg)\\
&\le2^{p-1}\Bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert W_{\varepsilon}\rvert^p f(c_Q)\, d\mathcal{L}^n+\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert V-(V)_Q\rvert^p f(c_Q)\, d\mathcal{L}^n\Bigg)\\
&\le 2^{p-1}\Bigg(\varepsilon C_p\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{\partial Q}\lvert V_{\varepsilon}-(V)_Q\rvert^p f(c_Q)\,d\mathscr{H}^{n-1}+\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathcal{L}^n\Bigg),
\end{align*}
where $C_p:=C(p,Q)$ (see Remark \ref{appendix scaling argument}). Again by the triangle inequality and because of our choice of $V_{\varepsilon}$, we have
\begin{align*}
\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{\partial Q}\lvert V_{\varepsilon}-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}&\le 2^{p-1}\Bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{\partial Q}\lvert V_{\varepsilon}-V\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\
&\quad+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\Bigg)\\
&\le 2^{p-1}\Bigg(2n\varepsilon^2+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\Bigg).
\end{align*}
Thus, by Lemma \ref{Lemma: cubic decomposition} it follows that
\begin{align*}
\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}}\int_{\partial Q}\lvert V_{\varepsilon}-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\rightarrow 0\qquad\mbox{ as } \varepsilon\rightarrow 0^+ \mbox{ in } E_V.
\end{align*}
Moreover, by \eqref{estimate on f} we have
\begin{align*}
\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\int_{Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathcal{L}^n &\leq 2^p(1+C)\sum_{Q\in\mathscr{C}_{\varepsilon}^g}\fint_{Q_\varepsilon(0)}\int_Q\lvert V(x+y)-V(y)\rvert^pf(y)\, d\mathcal{L}^n(y)\,d\mathcal{L}^n(x)\\
&\leq 2^p(1+C)\fint_{Q_\varepsilon(0)}\lVert V(x+\cdot\,)-V\rVert_{L^p(\Omega_\varepsilon,\mu)}^p\,d\mathcal{L}^n(x)\to 0
\end{align*}
as $\varepsilon\rightarrow 0^+$ in $E_V$. Hence, $A_{\varepsilon}\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ in $E_V$. On the other hand, by \eqref{estimate bad cubes} we have
\begin{align*}
B_{\varepsilon}&\le2^{p-1}\Bigg(\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert\tilde V_{\varepsilon}\rvert^pf(c_Q)\, d\mathcal{L}^n+\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\Bigg)\\
&\le2^{p-1}\Bigg(C\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V_{\varepsilon}\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}+\int_{\Omega_{\varepsilon}^b}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\Bigg).
\end{align*}
We notice that
\begin{align*}
\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V_{\varepsilon}\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}&\le 2^{p-1}\Bigg(\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V_{\varepsilon}-V\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\
&\quad+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\Bigg)\\
&\le 4^{p-1}\Bigg(2n\varepsilon^2+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\\
&\quad+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert (V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}\Bigg).
\end{align*}
Moreover, by \eqref{estimate on f} we have
\begin{align*}
\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert (V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}&\le \varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\bigg(\fint_{Q}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\bigg)\, d\mathscr{H}^{n-1}\\
&\le\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\varepsilon\bigg(\fint_{Q}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\bigg)\int_{\partial Q}\, d\mathscr{H}^{n-1}\\
&\le 2n\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{Q}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n=2n\int_{\Omega_{\varepsilon}^b}\lvert V\rvert^pf(c_Q)\, d\mathcal{L}^n\\
&\le (1+C)2n\int_{\Omega_{\varepsilon}^b}\lvert V\rvert^pf\, d\mathcal{L}^n.
\end{align*}
Thus, we have obtained
\begin{align*}
B_\varepsilon&\le C\Bigg(\varepsilon^2+\varepsilon\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\int_{\partial Q}\lvert V-(V)_Q\rvert^pf(c_Q)\, d\mathscr{H}^{n-1}+\int_{\Omega_{\varepsilon}^b}\lvert V\rvert^p f\, d\mathcal{L}^n\Bigg),
\end{align*}
for some constant $C>0$ which does not depend on $\varepsilon\in E_V$. By Lemma \ref{Lemma: cubic decomposition} and Remark \ref{Remark: no bad cubes if p is big enough} we get that $B_{\varepsilon}\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ in $E_V$. Hence, the claim follows.

Next we show that by rescaling $\tilde V_\varepsilon$ we obtain a vector field with similar properties defined on the whole $Q_1(0)$. Let
\begin{align*}
\alpha_{\varepsilon}:=\sup\{\alpha\in[1/2,1) \mbox{ s.t. } Q_1(0)\subset\alpha^{-1}\Omega_{\varepsilon}\}, \qquad\forall\,\varepsilon\in E_V.
\end{align*}
Notice that $\alpha_{\varepsilon}\rightarrow 1^-$ as $\varepsilon\rightarrow 0^+$ in $E_V$. Define the vector field $\overline{V}_{\varepsilon}:=\alpha_{\varepsilon}^{n-1}\tilde V_{\varepsilon}(\alpha_{\varepsilon}\,\cdot):Q_1(0)\rightarrow\mathbb{R}^n$. It is straightforward that $\overline{V}_{\varepsilon}\in L^p(Q_1(0),\mu)$ in case $p>1$ and $\overline{V}_{\varepsilon}\in L^s(Q_1(0),\mu)$ for some $s>1$ in case $p=1$, for every given $\varepsilon\in E_V$.
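The effect of this rescaling on the singular part of the divergence can be seen on the model bad-cube field: for $n=2$, $f\equiv 1$, $c_Q=0$, the flux of $\tilde V$ through every concentric square $\partial Q_{2\rho}(0)$ equals $\int_{\partial Q}f\,d\mathscr{H}^1=4$, and the flux of $\alpha^{n-1}\tilde V(\alpha\,\cdot)$ through $\partial Q_{2\rho}(0)$ equals the flux of $\tilde V$ through $\partial Q_{2\alpha\rho}(0)$, so the weight of the delta is unchanged while the singularity is moved by the dilation. A numerical sketch (the values $\alpha=0.8$, $\rho=0.3$ and the quadrature size are arbitrary choices):

```python
def V(x, y):
    # bad-cube field for n = 2, f == 1, c_Q = 0:  V(x) = x / (2 ||x||_inf^2)
    rr = max(abs(x), abs(y))
    return x / (2 * rr**2), y / (2 * rr**2)

def flux(field, rho, N=1000):
    """Midpoint quadrature of the flux of `field` through the square of half-side rho."""
    h = 2 * rho / N
    total = 0.0
    for k in range(N):
        t = -rho + (k + 0.5) * h
        # the four sides with their outer unit normals
        for x, y, nx, ny in ((rho, t, 1, 0), (-rho, t, -1, 0),
                             (t, rho, 0, 1), (t, -rho, 0, -1)):
            vx, vy = field(x, y)
            total += (vx * nx + vy * ny) * h
    return total

alpha = 0.8
def V_resc(x, y):
    # alpha^{n-1} V(alpha .) with n = 2
    vx, vy = V(alpha * x, alpha * y)
    return alpha * vx, alpha * vy

flux_V = flux(V, 0.3)                      # flux of V through a concentric square
flux_resc = flux(V_resc, 0.3)              # flux of the rescaled field through the same square
flux_scaled_square = flux(V, alpha * 0.3)  # flux of V through the alpha-scaled square
```

All three fluxes equal $4=\int_{\partial Q}f\,d\mathscr{H}^1$, consistently with the fact that the rescaling preserves the integer weights of the divergence.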
A direct computation also shows that the distributional divergence of $\overline{V}_{\varepsilon}$ on $Q_1(0)$ is given by
\begin{align*}
\operatorname{div}(\overline{V}_{\varepsilon})=\sum_{Q\in\mathscr{C}_{\varepsilon}^b}\overline{d}_Q\delta_{\alpha_{\varepsilon}^{-1}c_Q},
\end{align*}
with
\begin{align*}
\overline{d}_{Q'}=\begin{cases}d_{Q'} &\mbox{ if } \alpha_{\varepsilon}^{-1}c_{Q'}\in Q_1(0),\\0 &\mbox{ otherwise.}\end{cases}
\end{align*}
We claim that $\overline{V}_{\varepsilon}\rightarrow V$ in $L^p(Q_1(0),\mu)$. Indeed, we have
\begin{align*}
\int_{Q_1(0)}\lvert\overline{V}_{\varepsilon}-V\rvert^pf\, d\mathcal{L}^n&=\int_{Q_1(0)}\lvert\alpha_{\varepsilon}^{n-1}\tilde V_{\varepsilon}(\alpha_{\varepsilon}x)-V(x)\rvert^p f(x)\,d\mathcal{L}^n(x)\\
&=\alpha_{\varepsilon}^{p(n-1)-n}\int_{\alpha_{\varepsilon}Q_1(0)}\lvert\tilde V_{\varepsilon}(y)-\alpha_{\varepsilon}^{-(n-1)}V(\alpha_{\varepsilon}^{-1}y)\rvert^p f(\alpha_\varepsilon^{-1}y)\, d\mathcal{L}^n(y)\\
&=\alpha_{\varepsilon}^{p(n-1)-n}\bigg(\int_{\Omega_{\varepsilon}}\lvert\tilde V_{\varepsilon}(y)-\alpha_{\varepsilon}^{-(n-1)}V(\alpha_{\varepsilon}^{-1}y)\rvert^p f(y)\, d\mathcal{L}^n(y)\\
&\quad+\int_{\Omega_{\varepsilon}}\lvert\tilde V_{\varepsilon}(y)-\alpha_{\varepsilon}^{-(n-1)}V(\alpha_{\varepsilon}^{-1}y)\rvert^p \frac{f(\alpha_\varepsilon^{-1}y)-f(y)}{f(y)}f(y)\, d\mathcal{L}^n(y)\bigg)\\
&\le C_{n,p}\bigg(\int_{\Omega_{\varepsilon}}\lvert\tilde V_{\varepsilon}-V\rvert^pf\, d\mathcal{L}^n+\int_{\Omega_{\varepsilon}}\lvert V-P_{\alpha_{\varepsilon}^{-1}}V\rvert^pf\, d\mathcal{L}^n\bigg)
\end{align*}
(see Lemma \ref{lemma: continuity of the dilation operator} for the definition of $P_{\alpha_{\varepsilon}^{-1}}V$).
By Lemma \ref{lemma: continuity of the dilation operator} and since $\tilde V_{\varepsilon}\rightarrow V$ in $L^p(\Omega_{\varepsilon},\mu)$, our claim follows.

Thus, we have built a vector field $\overline{V}_{\varepsilon}$ such that:
\begin{enumerate}
\item $\overline{V}_{\varepsilon}\in L^p(Q_1(0),\mu)$ and $\overline{V}_{\varepsilon}\in L^s(Q_1(0),\mathcal{L}^n)$ for $s=p$ if $p>1$ and $s>1$ if $p=1$\footnote{In fact, even when $\mu$ is different from $\mathcal{L}^n$, $\tilde{V}_\varepsilon$ is constructed through Lemmata \ref{appendix: extension on the good cubes} and \ref{Lemma: extension on the bad cubes} as an extension of a smooth boundary datum, thus $\tilde{V}_\varepsilon$ lies in $L^{r}(\Omega_{\varepsilon})$ for any $r\in [1,\frac{n}{n-1})$ if $p<\frac{n}{n-1}$ and in $L^r(\Omega_\varepsilon)$ for any $r\in [1,\infty)$ if $p\geq \frac{n}{n-1}$. It follows that $\overline V_\varepsilon \in L^{r}(\Omega_{\varepsilon})$ for any $r\in [1,\frac{n}{n-1})$ if $p<\frac{n}{n-1}$ and $\overline V_\varepsilon \in L^r(\Omega_\varepsilon)$ for any $r\in [1,\infty)$ if $p\geq \frac{n}{n-1}$.};
\item the distributional divergence of $\overline{V}_{\varepsilon}$ on $Q_1(0)$ is given by a finite sum of delta distributions supported on a finite set $X_{\varepsilon}\subset Q_1(0)$ with integer weights $\{d_x \mbox{ s.t. } x\in X_{\varepsilon}\}$;
\item $\lVert\overline{V}_{\varepsilon}-V\rVert_{L^p(Q_1(0),\mu)}\rightarrow 0$ as $\varepsilon\rightarrow 0^+$ in $E_V$.
\end{enumerate}
Now we are ready to reach conclusions 1 and 2 of Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields}.
\begin{enumerate}
\item If $q\in [0,1]$ and $p\in\big[1,\frac{n}{n-1}\big)$ we possibly have $X_{\varepsilon}\neq\emptyset$, since bad cubes can appear in the cubic decompositions.
Since $\overline{V}_{\varepsilon}$ always belongs to $L^s(Q_1(0))$ for some $s>1$ (with $s=p$ if $p$ itself is already greater than $1$), we can Hodge-decompose $\overline{V}_{\varepsilon}^\flat$ as $\overline{V}_{\varepsilon}^\flat=d\varphi+d^\ast A$ for some $A\in \Omega_{W^{1,s}}^2(Q_1(0))$ and some $\varphi\in W^{1,s}(Q_1(0))$. Applying $d^\ast$ to the previous decomposition we obtain
\begin{align*}
\Delta\varphi=d^\ast (\overline{V}_\varepsilon^\flat)=\operatorname{div}(\overline{V}_{\varepsilon})=\sum_{x\in X_{\varepsilon}}d_x\delta_x.
\end{align*}
By standard elliptic regularity, $\varphi\in C^{\infty}(Q_1(0)\smallsetminus X_{\varepsilon})$. Choose $A_{\varepsilon}\in \Omega^2(Q_1(0))$ such that $\lVert A_{\varepsilon}-A\rVert_{\Omega^2_{W^{1,s}}(Q_1(0))}\le\varepsilon$. Then $\lVert d^\ast A_\varepsilon-d^\ast A\rVert_{\Omega^1_{L^p(\mu)}(Q_1(0))}\leq \varepsilon$. Let $\upsilon_{\varepsilon}:=d\varphi+d^\ast(A_{\varepsilon})$ and let $U_\varepsilon:=\upsilon_\varepsilon^\#$. Then $U_{\varepsilon}\in L_R^p(Q_1(0),\mu)$ for every $\varepsilon\in E_V$ and $U_{\varepsilon}\rightarrow V$ in $L^p(Q_1(0),\mu)$.
\item If $q\in (-\infty,0]$ and $p\in\big[\frac{n}{n-1},+\infty\big)$, no bad cubes are allowed in the cubic decomposition, thus $(\overline{V}_{\varepsilon})_{\varepsilon\in E_V}$ is a sequence of divergence-free vector fields converging to $V$ in $L^p(Q_1(0),\mu)$ as $\varepsilon\to 0^+$ in $E_V$. Hence $V$ itself is divergence-free.
\end{enumerate}
\end{proof}
\begin{rem} \label{Remark: F_k in Lq}
Notice that if $p=1$, then $V_k\in L^s(Q^n_1(0))$ for any $k\in \mathbb{N}$ and for any $s\in\big[1,\frac{n}{n-1}\big)$.
\end{rem}
\begin{rem}
Observe that this proof can be used to show that the analogous approximation result holds if we assume that $V$ satisfies the first three conditions of Definition \ref{Definition: integer valued fluxes on unit cube} and in addition we require that for every $\rho\in R_{F, x_0}$ we have
\begin{align*}
\int_{\partial Q_\rho(x_0)}i^\ast_{\partial Q_\rho(x_0)}F\in S
\end{align*}
for a set $S\subset\mathbb{R}$ such that $0\in S$ and $0$ is an isolated point of $S$. In this case the vector field $V$ can be approximated in $L^p$ by a sequence of vector fields $(V_n)_{n\in \mathbb{N}}$ which are smooth outside a finite set of points and such that for any $n\in \mathbb{N}$, $\operatorname{div}(V_n)$ is a finite sum of deltas with coefficients in $S$.
\end{rem}
\begin{lem}\label{Lemma: caso n=1}
Let $I\subset\mathbb{R}$ be a connected interval and $p\in[1,+\infty)$. Then
\begin{align*}
\overline{L^p_R(I)}^{L^p}= L^p(I,\mathbb{Z})+ \mathbb{R}=L^p_{\mathbb{Z}}(I).
\end{align*}
\begin{proof}
We start by showing the first equality. Notice that
\begin{align*}
L^p_{R}(I)=\left\{V=c+\sum_{j\in J}a_j\chi_{I_j} \mbox{ : }(I_j)_{j\in J} \text{ is a finite partition of }I,\ a_j\in \mathbb{Z}\ \forall\, j\in J, \, c\in [0,1)\right\}.
\end{align*}
In other words, $L^p_R(I)$ consists of all integer-valued step functions and their translations by a constant.\\
First we show the inclusion $"\supset"$: let $f=g+a$ with $g\in L^p(I,\mathbb{Z})$ and $a\in [0,1)$, and let $\varepsilon>0$. For any $k\in \mathbb{N}$ set $g_k:=\mathbbm{1}_{\lvert g\rvert\leq k}\,g$.
Then there exists $K\in \mathbb{N}$ such that $\lVert g_K-g\rVert_{L^p(I)}<\frac{\varepsilon}{2}$.\\
Now for any $j\in\{-K,\dots, K\}$, $g_K^{-1}(j)=g^{-1}(j)$ is a measurable set, therefore there exists a finite collection of disjoint intervals $(I_i^j)_{i\in J^j}$ such that $\mathcal{L}\left(\bigcup_{i\in J^j}I_i^j\,\triangle\, g^{-1}(j)\right)\leq \frac{\varepsilon}{2(2K+1)^2}$.\\
For any $j\in\{-K,\dots, K\}$ set $A_j:=\bigcup_{i\in J^j}I_i^j\smallsetminus\left(\bigcup_{j'\neq j}\bigcup_{i\in J^{j'}}I_i^{j'}\right)$. Then $A_j$ is a finite union of intervals and
\begin{align*}
\mathcal{L}(A_j\,\triangle\, g^{-1}(j))\leq \mathcal{L}(A_j\smallsetminus g^{-1}(j))+\sum_{j'\neq j}\mathcal{L}\left(\bigcup_{i\in J^{j'}}I_i^{j'}\cap g^{-1}(j)\right)\leq \frac{\varepsilon}{2(2K+1)}.
\end{align*}
Set
\begin{align*}
\tilde{g}_K(x):=\begin{cases} j& \text{if }x\in A_j,\\ 0&\text{otherwise.} \end{cases}
\end{align*}
Then by construction $\tilde{g}_K\in L^p_R(I)$ and $\lVert \tilde{g}_K-g\rVert_{L^p(I)}\leq \varepsilon$. We conclude that any $g\in L^p(I, \mathbb{Z})$ lies in the closure of $L^p_R(I)$. This shows $"\supset"$. As $L^p(I,\mathbb{Z})+ \mathbb{R}=L^p(I,\mathbb{Z})+ [0,1)$ is closed in $L^p(I)$, the inclusion "$\subset$" holds as well.\\
Next we show the second equality. Let's start with "$\subset$". Let $g\in L^p(I,\mathbb{Z})$, $a\in \mathbb{R}$ and $f=g+a$. Let $x_0\in I$; then for a.e. $r\in (0, \operatorname{dist}(x_0, \partial I))$ we have $f(x_0+r)-f(x_0-r)=g(x_0+r)-g(x_0-r)\in \mathbb{Z}$. Let $\tilde{R}_{F,x_0}$ denote the set of all such $r$. Set $R_{F,x_0}$ to be the intersection of $\tilde{R}_{F,x_0}$ with the set of Lebesgue points of $f$.
Then $f$ satisfies Definition \ref{Definition: integer valued fluxes on unit cube} and thus $f\in L^p_{\mathbb{Z}}(I)$.\\
To show "$\supset$" let $f\in L^p_{\mathbb{Z}}(I)$. Set $F: I\to\mathbb{S}^1$, $x\mapsto e^{i2\pi f(x)}$. Then $F$ is a bounded measurable function. Notice that for any $x_0\in I$, for a.e. $r\in (0,\operatorname{dist}(x_0,\partial I))$ there holds $F(x_0-r)=F(x_0+r)$. This implies that $F$ is constant: this can be seen, for instance, by approximating $F$ by smooth functions with the same symmetry properties away from $\partial I$ (convolving with a symmetric mollifier with small support), which then have to be constant. Choose $a\in \mathbb{R}$ such that $F\equiv e^{i2\pi a}$; then $f-a\in L^p(I,\mathbb{Z})$. This completes the proof.
\end{proof}
\end{lem}
\begin{rem} \label{Remark: Closure of L_R and closure in case n=1}
From Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields} it follows immediately that
\begin{align*}
\overline{L^p_R(Q_1(0))}^{L^p}=L^p_{\mathbb{Z}}(Q_1(0)).
\end{align*}
To see this it is enough to check that $L^p_{\mathbb{Z}}(Q_1(0))$ is closed in $L^p$, which can be shown by a simple application of the coarea formula. Notice that in case $p\in[n/(n-1),+\infty)$ we can approximate $V$ strongly in $L^p$ with smooth and divergence-free vector fields. This is a straightforward consequence of the Hodge decomposition.
\end{rem}
\subsection{A characterization of $\Omega_{p,\mathbb{Z}}^{n-1}$}
First of all we apply the Strong Approximation Theorem to obtain a useful characterization of the class $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$.
\begin{thm}\label{Theorem: characterization of the integer valued fluxes class}
Let $n\in \mathbb{N}_{>0}$, $p\in [1,+\infty)$. Let $F\in\Omega_p^{n-1}(Q^n_1(0))$.
Then, the following are equivalent:
\begin{enumerate}
\item there exists $L\in\mathcal{R}_1(Q_1(0))$ such that $\partial L=*dF$ in $(W^{1,\infty}_0(Q_1(0)))^\ast$ and
\begin{align*}
\mathbb{M}(L)=\sup_{\substack{\varphi\in\mathcal{D}(Q_1(0)), \\ \lVert d\varphi\rVert_{L^{\infty}}\le 1}}\int_{Q_1(0)}F\wedge d\varphi.
\end{align*}
\item for every Lipschitz function $f:\overline{Q_1(0)}\rightarrow [a,b]\subset\mathbb{R}$ such that $f\vert_{\partial Q_1(0)}\equiv b$, we have
\begin{align*}
\int_{f^{-1}(t)}i_{f^{-1}(t)}^*F\in\mathbb{Z}, \qquad\mbox{ for $\mathcal{L}^1$-a.e. } t\in [a,b];
\end{align*}
\item $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$.
\end{enumerate}
\begin{proof}
We just need to show that $1\Rightarrow 2$, $2\Rightarrow 3$ and $3\Rightarrow 1$. We prove these implications separately.

$1\Rightarrow 2$. Assume 1. Let $L\in\mathcal{R}_1(Q_1(0))$ be given by
\begin{align*}
\langle L,\omega\rangle=\int_{\Gamma}\theta\langle\omega,\vec L\rangle\, d\mathscr{H}^1, \qquad\forall\,\omega\in\mathcal{D}^1(Q_1(0)),
\end{align*}
where $\Gamma\subset Q_1(0)$ is a locally $1$-rectifiable set, $\vec L$ is a Borel measurable unitary vector field on $\Gamma$ and $\theta\in L^1(\Gamma,\mathscr{H}^1)$ is a $\mathbb{Z}$-valued function. Pick any Lipschitz function $f:\overline{Q_1(0)}\rightarrow[a,b]$ as in $2$ and let $\varphi\in C_c^{\infty}((-\infty,b))$ be such that $\int_{\mathbb{R}}\varphi\,d\mathcal{L}^1=0$. By the coarea formula we have
\begin{align*}
\int_{Q_1(0)}F\wedge f^*(\varphi\vol_{\mathbb{R}})=\int_{-\infty}^{+\infty}\varphi(t)\bigg(\int_{f^{-1}(t)}i_{f^{-1}(t)}^*F\bigg)\, dt.
\end{align*}
At the same time, by the coarea formula for countably $1$-rectifiable sets, we have
\begin{align*}
\langle L, f^*(\varphi\vol_{\mathbb{R}})\rangle=\int_{\Gamma}\theta\langle f^*(\varphi\vol_{\mathbb{R}}),\vec L\rangle\, d\mathscr{H}^1=\int_{-\infty}^{+\infty}\varphi(t)\bigg(\int_{\Gamma\cap f^{-1}(t)}\tilde\theta\bigg)\, dt,
\end{align*}
where $\tilde\theta:\Gamma\rightarrow\mathbb{Z}$ is given by $\tilde\theta:=\text{sgn}(\langle f^*\vol_{\mathbb{R}},\vec L\rangle)\theta$. Let $\Phi\in C_c^{\infty}((-\infty,b))$ satisfy $d\Phi=\varphi\vol_{\mathbb{R}}$. Notice that, since $f\equiv b$ on $\partial Q_1(0)$ and $\Phi$ vanishes in a neighbourhood of $b$, we have $f^\ast\Phi\in W_0^{1,\infty}(Q_1(0))$. Then, by hypothesis, we have
\begin{align*}
\int_{Q_1(0)}F\wedge f^*(\varphi\vol_{\mathbb{R}})&=\int_{Q_1(0)}F\wedge d f^\ast\Phi=\langle*dF,f^*\Phi\rangle=\langle\partial L,f^*\Phi\rangle\\
&=\langle L,d(f^*\Phi)\rangle=\langle L,f^*(d\Phi)\rangle=\langle L,f^*(\varphi\vol_{\mathbb{R}})\rangle.
\end{align*}
Therefore
\begin{align*}
\int_{-\infty}^{\infty}\varphi(t)\bigg(\int_{f^{-1}(t)}i_{f^{-1}(t)}^*F-\int_{\Gamma\cap f^{-1}(t)}\tilde\theta\bigg)\, dt=0, \qquad\forall\,\varphi\in C_c^{\infty}((-\infty,b)) \mbox{ s.t. } \int_{\mathbb{R}}\varphi=0.
\end{align*}
We conclude that
\begin{align*}
\int_{f^{-1}(t)}i_{f^{-1}(t)}^*F-\int_{\Gamma\cap f^{-1}(t)}\tilde\theta=c, \qquad\mbox{ for $\mathcal{L}^1$-a.e. } t\in[a,b],
\end{align*}
for some constant $c\in\mathbb{R}$. We claim that $c=0$. In fact, let $m\in\mathbb{N}\smallsetminus\{0\}$. Integrating both sides on $(-m,m)$ we get
\begin{align}
\label{equation normalization}
\int_{\{\lvert f\rvert< m\}}F\wedge f^*\vol_{\mathbb{R}}-\int_{\Gamma\cap\{\lvert f\rvert< m\}}\theta\langle f^*\vol_{\mathbb{R}},\vec L\rangle=2mc.
\end{align}
Since $f^*\vol_{\mathbb{R}}=df$, we have
\begin{align*}
\int_{Q_1(0)}F\wedge f^*\vol_{\mathbb{R}}-\int_{\Gamma}\theta\langle f^*\vol_{\mathbb{R}},\vec L\rangle=\int_{Q_1(0)}F\wedge df-\langle L,df\rangle=\langle *dF-\partial L,f\rangle=0.
\end{align*}
Thus, by letting $m\rightarrow+\infty$ in \eqref{equation normalization}, we see that the left-hand side converges to $0$ whilst the right-hand side diverges, unless $c=0$. Hence we conclude that $c=0$, i.e.
\begin{align*}
\int_{f^{-1}(t)}i_{f^{-1}(t)}^*F=\int_{\Gamma\cap f^{-1}(t)}\tilde\theta\in\mathbb{Z}, \qquad\mbox{ for $\mathcal{L}^1$-a.e. } t\in [a,b],
\end{align*}
since $\Gamma\cap f^{-1}(t)$ consists of finitely many points for $\mathcal{L}^1$-a.e. $t\in \mathbb{R}$.\\
$2\Rightarrow 3$. Assume 2. Given $x_0\in Q_1(0)$, let $r_{x_0}:=2\dist_\infty(x_0,\partial Q_1(0))$ and $f_{x_0}:=\min\big\{\lVert\cdot-x_0\rVert_{\infty},\frac{r_{x_0}}{2}\big\}$. We claim that we can find $R_{F,x_0}$ as in Definition \ref{Definition: integer valued fluxes on unit cube}. Indeed, let $L\subset Q_1(0)$ be the set of the Lebesgue points of $F$. Then, by the coarea formula, we have
\begin{align*}
0=\mathcal{L}^n\big(Q_{r_{x_0}}(x_0)\smallsetminus L\big)=\frac{1}{2^n}\int_{0}^{r_{x_0}}\mathscr{H}^{n-1}\big((Q_1(0)\smallsetminus L)\cap\partial Q_{\rho}(x_0)\big)\,d\rho,
\end{align*}
which implies that there exists a set $E_{x_0}\subset(0,r_{x_0})$ such that
\begin{enumerate}
\item $\mathcal{L}^1\big((0,r_{x_0})\smallsetminus E_{x_0}\big)=0$;
\item for every $\rho\in E_{x_0}$, $\mathscr{H}^{n-1}$-a.e. $x\in\partial Q_{\rho}(x_0)$ is a Lebesgue point for $F$.
\end{enumerate}
Hence, for every $\rho\in E_{x_0}$ it makes sense to consider the pointwise restriction of $F$ to $\partial Q_\rho(x_0)$. Notice that, by the coarea formula, we have
\begin{align*}
\int_{E_{x_0}}\bigg(\int_{\partial Q_{\rho}(x_0)}\big|F\big|^p\, d\mathscr{H}^{n-1}\bigg)\,d\rho=2^n\int_{Q_{r_{x_0}}(x_0)}|F|^p\,d\mathcal{L}^n<+\infty,
\end{align*}
which implies
\begin{align*}
\int_{\partial Q_{\rho}(x_0)}\big|F\big|^p\, d\mathscr{H}^{n-1}<+\infty, \qquad\mbox{ for $\mathcal{L}^1$-a.e. }\rho\in E_{x_0}.
\end{align*}
Moreover, by Statement $2.$, we have
\begin{align*}
\int_{f_{x_0}^{-1}(\rho)}i_{f_{x_0}^{-1}(\rho)}^*F=\int_{\partial Q_{\rho}(x_0)}i_{\partial Q_{\rho}(x_0)}^*F\in\mathbb{Z}, \qquad\mbox{ for $\mathcal{L}^1$-a.e. }\rho\in E_{x_0}.
\end{align*}
Our claim follows immediately.

$3\Rightarrow 1$. Assume 3. By Theorem \ref{Theorem: strong approximation for vector fields}, we can find a sequence $\{F_k\}_{k\in\mathbb{N}}\subset\Omega_{p,R}^{n-1}(Q_1(0))$ such that $F_k\rightarrow F$ strongly in $L^p$. Since the estimate
\begin{align*}
\lvert\langle T_{F_k}-T_{F},\omega\rangle\rvert\le C\lVert F_k-F\rVert_{L^p}
\end{align*}
holds for every $\omega\in\mathcal{D}^1(Q_1(0))$ such that $\lVert\omega\rVert_{L^{\infty}}\le 1$ and for every $k\in\mathbb{N}$, we conclude that
\begin{align}\label{useful equation bla}
\sup_{\substack{\varphi\in W^{1,\infty}_0(Q_1(0)),\\\lVert d\varphi\rVert_{L^\infty}\leq 1}}\langle*dF_k-*dF,\varphi\rangle\le C\lVert F_k-F\rVert_{L^p}\rightarrow 0\quad \text{ as }k\to\infty.
\end{align}
Fix any $\varepsilon\in(0,1)$.
By \eqref{useful equation bla} we can find a subsequence $\big\{F_{k_j(\varepsilon)}\big\}_{j\in\mathbb{N}}\subset\Omega_{p,R}^{n-1}(Q_1(0))$ such that \begin{align*} \lVert *dF_{k_j(\varepsilon)}-*dF_{k_{j+1}(\varepsilon)}\rVert_{(W_0^{1,\infty}(Q_1(0)))^\ast}\le\frac{\varepsilon}{2^j}, \qquad \mbox{ for every } j\in\mathbb{N}. \end{align*} For every $h\in\mathbb{N}$, let $L_h^{\varepsilon}$ be a minimal connection for the singular set of $F_{k_h(\varepsilon)}$ (the existence of such a minimal connection is proved in Proposition \ref{appendix proposition existence minimal connection finitely many singularities}). Analogously, for every $j\in\mathbb{N}$, let $L_{j,j+1}^{\varepsilon}$ be a minimal connection for the singular set of $F_{k_j(\varepsilon)}-F_{k_{j+1}(\varepsilon)}$. Define the following sequence of integer $1$-currents on $Q_1(0)$: \begin{align*} \tilde L_h^{\varepsilon}:=\begin{cases} L_0^{\varepsilon} & \mbox{ if } h=0,\\ \displaystyle{L_0^{\varepsilon}-\sum_{j=0}^{h-1}L_{j,j+1}^{\varepsilon}}& \mbox{ if } h>0, \end{cases} \qquad \mbox{ for every } h\in\mathbb{N}. \end{align*} Clearly \begin{align*} \partial\tilde L_h^{\varepsilon}=\partial L_0^{\varepsilon}-\sum_{j=0}^{h-1}\partial L_{j,j+1}^{\varepsilon}=\partial L_0^{\varepsilon}-\sum_{j=0}^{h-1}(\partial L_j^{\varepsilon}-\partial L_{j+1}^{\varepsilon})=\partial L_h^{\varepsilon}=*dF_{k_h(\varepsilon)}. \end{align*} Moreover, since $L_{j,j+1}^{\varepsilon}$ is a minimal connection, it holds that \begin{align*} \mathbb{M}(L_{j, j+1}^\varepsilon)=\lVert *dF_{k_j(\varepsilon)}-*dF_{k_{j+1}(\varepsilon)}\rVert_{(W_0^{1,\infty}(Q_1(0)))^\ast}\le\frac{\varepsilon}{2^j}, \qquad \mbox{ for every } j\in\mathbb{N}.
\end{align*} Thus, \begin{align*} \mathbb{M}(\tilde L_{h+1}^{\varepsilon}-\tilde L_h^{\varepsilon})=\mathbb{M}(L_{h,h+1}^{\varepsilon})\le\frac{\varepsilon}{2^h}, \qquad \mbox{ for every } h\in\mathbb{N}, \end{align*} which amounts to saying that the sequence $\{\tilde L_h^{\varepsilon}\}_{h\in\mathbb{N}}$ is a Cauchy sequence in mass. Hence, by the closure of integer currents under mass convergence (see Lemma \ref{appendix Lemma closure integer currents}), there exists an integer $1$-current $\tilde L^{\varepsilon}\in\mathcal{R}_1(Q_1(0))$ such that \begin{align*} \mathbb{M}(\tilde L_h^{\varepsilon}-\tilde L^{\varepsilon})\rightarrow 0\quad\text{ as }h\to\infty. \end{align*} Notice that \begin{align*} \partial \tilde L^{\varepsilon}=\lim_{h\rightarrow\infty}\partial \tilde L_h^{\varepsilon}=\lim_{h\rightarrow\infty}*dF_{k_h(\varepsilon)}=*dF\quad \text{ in }(W_0^{1,\infty}(Q_1(0)))^\ast. \end{align*} The family of integer $1$-cycles $\{\tilde L^{\varepsilon}-\tilde L^{1/2}\}_{0<\varepsilon<1}\subset\mathcal{R}_1(Q_1(0))$ is uniformly bounded in mass. Indeed, first we notice that by (2.10) it holds that \begin{align*} \mathbb{M}(L_{h}^{\varepsilon})&=\lVert *dF_{k_h(\varepsilon)}\rVert_{(W_0^{1,\infty}(Q_1(0)))^\ast}\le C, \quad\forall\,h\in\mathbb{N},\,\forall\,\varepsilon\in(0,1), \end{align*} where $C>0$ is a constant independent of $h$ and $\varepsilon$. Thus, we have \begin{align*} \mathbb{M}(\tilde L_h^{\varepsilon})&\le\mathbb{M}(L_0^{\varepsilon})+\sum_{j=0}^{h-1}\mathbb{M}(L_{j,j+1}^{\varepsilon})\le C+\sum_{j=0}^{+\infty}\frac{\varepsilon}{2^j}= C+2\varepsilon\le C+2\quad\forall\,h\in\mathbb{N},\,\forall\,\varepsilon\in(0,1).
\end{align*} Since $\tilde L_{h}^{\varepsilon}\to\tilde L^{\varepsilon}$ in mass as $h\to+\infty$ for every $\varepsilon\in(0,1)$, we have \begin{align*} \mathbb{M}(\tilde L^{\varepsilon}-\tilde L^{1/2})\le \mathbb{M}(\tilde L^{\varepsilon})+\mathbb{M}(\tilde L^{1/2})\le 2(C+2). \end{align*} Hence, by standard compactness arguments for currents (see for instance \cite[Theorem 7.5.2]{krantz_parks-geometric_integration_theory}), we can find a sequence $\varepsilon_k\rightarrow 0$ and an integer $1$-cycle $\tilde L\in\mathcal{R}_1(Q_1(0))$ with finite mass such that $\tilde L^{\varepsilon_k}-\tilde L^{1/2}\rightarrow \tilde L$ weakly in $\mathcal{D}_1(Q_1(0))$ as $k\rightarrow+\infty$. If we let $L:=\tilde L^{1/2}+\tilde L$, then we get $\tilde L^{\varepsilon_k}\rightarrow L$ weakly in $\mathcal{D}_1(Q_1(0))$. By construction, $L$ is again an integer $1$-current with finite mass such that $\partial L=\partial\tilde L^{1/2}=*dF$ in $(W_0^{1,\infty}(Q_1(0)))^\ast$. We claim that \begin{align*} \mathbb{M}(L)=\inf_{\substack{T\in\mathcal{M}_1(Q_1(0)), \\ \partial T=*dF}}\mathbb{M}(T). \end{align*} By contradiction, assume that we can find $T\in\mathcal{M}_1(Q_1(0))$ such that $\partial T=*dF$ and \begin{align*} \mathbb{M}(T)<\mathbb{M}(L)\le\liminf_{k\rightarrow\infty}\mathbb{M}(\tilde L^{\varepsilon_k}), \end{align*} where the last inequality follows by weak convergence and lower semicontinuity of the mass. Then, we can find some $h\in\mathbb{N}$ such that \begin{align*} \mathbb{M}(T)<\mathbb{M}(\tilde L^{\varepsilon_{h}})-2\varepsilon_{h}. \end{align*} Moreover, since $\mathbb{M}(L_0^{\varepsilon}-\tilde L^{\varepsilon})\le 2\varepsilon$ for every $0<\varepsilon<1$, it holds that \begin{align*} \mathbb{M}(L_0^{\varepsilon_h}-\tilde L^{\varepsilon_h})\le 2\varepsilon_{h}.
\end{align*} We define $\tilde T:=T+L_0^{\varepsilon_h}-\tilde L^{\varepsilon_h}$ and we notice that $\partial\tilde T=*dF_{k_0(\varepsilon_h)}$. Moreover, by the minimality of $L_0^{\varepsilon_h}$, we conclude that \begin{align*} \mathbb{M}(L_0^{\varepsilon_h})\le\mathbb{M}(\tilde T)\le\mathbb{M}(T)+\mathbb{M}(L_0^{\varepsilon_h}-\tilde L^{\varepsilon_h})<\mathbb{M}(L_0^{\varepsilon_h}), \end{align*} which is a contradiction. Thus, our claim follows. Since $L\in\mathcal{R}_1(Q_1(0))$, we get that \begin{align*} \mathbb{M}(L)=\inf_{\substack{T\in\mathcal{R}_1(Q_1(0)), \\ \partial T=*dF}}\mathbb{M}(T)=\inf_{\substack{T\in\mathcal{M}_1(Q_1(0)), \\ \partial T=*dF}}\mathbb{M}(T) \end{align*} and, by Lemma \ref{appendix lemma duality}, we have \begin{align*} \inf_{\substack{T\in\mathcal{M}_1(Q_1(0)), \\ \partial T=*dF}}\mathbb{M}(T)=\sup_{\substack{\varphi\in\mathcal{D}(Q_1(0)), \\ \lVert d\varphi\rVert_{L^{\infty}}\le 1}}\int_{Q_1(0)}F\wedge d\varphi. \end{align*} Hence, 1. follows. \end{proof} \end{thm} \begin{rem}\label{Remark: existence of connection is enough for strong approximability on cube} We notice that in the proof of Theorem \ref{Theorem: characterization of the integer valued fluxes class} we never used the minimality property of $L$ while showing that $1\Rightarrow 2$. Hence, whenever $F\in\Omega_p^{n-1}(Q^n_1(0))$ admits a connection, we have $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q^n_1(0))$ in the sense of Definition \ref{Definition: integer valued fluxes on unit cube} and the conclusions of Theorem \ref{Theorem: strong approximation for vector fields} hold for $F$. \end{rem} By the previous remark we can deduce the following result from the proof of Theorem \ref{Theorem: characterization of the integer valued fluxes class}.
\begin{cor} \label{Corollary: existence of connections implies existence of minimal connection} Let $F\in \Omega_p^{n-1}(Q_1^n(0))$ and assume that there exists an integer $1$-current of finite mass $I\in \mathcal{R}_1(Q_1(0))$ such that $\partial I=\ast dF$. Then there exists an integer $1$-current $L\in \mathcal{R}_1(Q_1(0))$ of finite mass such that $\partial L=\ast dF$ and \begin{align*} \mathbb{M}(L)=\inf_{\substack{T\in\mathcal{R}_1(Q_1(0))\\\partial T=\ast d F}}\mathbb{M}(T). \end{align*} \end{cor} In other words, whenever there exists a connection for $F$, then there exists a minimal connection for $F$. \subsection{The case of $\partial Q_1^{n+1}(0)$} In order to extend the previous result to more general manifolds, we introduce the following definition. \begin{dfn}\label{definition: Lipschitz forms} Let $M$ be a Lipschitz $m$-manifold embedded in $\mathbb{R}^n$. Let $p\in [1,\infty)$. Set \begin{align*} \Omega_{p,R,\infty}^1(M):=\left\{\alpha\in \Omega_{p}^1(M)\cap\Omega_{L^\infty_{loc}}^1(M\smallsetminus S) \mbox{ : }\hspace{-1mm}\ast\hspace{-0.5mm}d\alpha=\sum_{i\in I}d_i\delta_{p_i}\right\}, \end{align*} where $I$ is a finite index set, $d_i\in \mathbb{Z}$, $p_i\in S$ for any $i\in I$ and $S:=\{p_i\}_{i\in I}$. \end{dfn} The previous definition is motivated by the following observation: let $M$ be a Lipschitz $m$-manifold, $N$ a smooth $m$-manifold and $\varphi: M\to N$ a bi-Lipschitz map. Let $F\in \Omega_{p, R}^1(N)$. Then $\varphi^\ast F\in \Omega_{p,R,\infty}^1(M)$ (see Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties}). \begin{cor}\label{Corollary: strong approximation on boundary of cube} Let $n\in \mathbb{N}_{\geq 2}$. Assume that $F\in\Omega_{p,\mathbb{Z}}^{n-2}(\partial Q^n_1(0))$ admits a connection.
Then, the following facts hold: \begin{enumerate} \item if $p\in\big[1,(n-1)/(n-2)\big)$, then there exists a sequence $\{ F_k\}_{k\in\mathbb{N}}\subset \Omega_{p,R,\infty}^{n-2}(\partial Q^n_1(0))$ such that $F_k\rightarrow F$ strongly in $L^p$; \item if $p\in\big[(n-1)/(n-2),+\infty\big)$, then $*dF=0$ distributionally on $\partial Q^n_1(0)$. \end{enumerate} \begin{proof} Let $N=(0,...,0,\frac{1}{2})$ be the north pole in $\partial Q^n_1(0)\subset\mathbb{R}^n$ and let \begin{align*} U:=\big\{(x_1,...,x_{n-1},x_n)\in\partial Q^n_1(0) \mbox{ s.t. } x_n=\frac{1}{2}\big\} \end{align*} be the upper face of $\partial Q^n_1(0)$. Let $q:=(n-1)-(n-2)p$. For every $x=(x_1,...,x_n)\in\mathbb{R}^n$, we let $x':=(x_1,...,x_{n-1})\in\mathbb{R}^{n-1}$. Define $\Phi:\partial Q^n_1(0)\smallsetminus N\subset\mathbb{R}^n\to Q^{n-1}_1(0)$ by \begin{align*} \Phi(x):=\begin{cases}\displaystyle{\bigg(\frac{1}{2}-\frac{\sqrt{2}}{4}\lVert x'\rVert_{\infty}^\frac{1}{2}\bigg)\frac{x'}{\lVert x'\rVert_{\infty}}} & \mbox{ on }\,U\smallsetminus N, \\\,g(x) & \mbox{ on }\,\partial Q^n_1(0)\smallsetminus U,\end{cases} \end{align*} where the map $g:\overline{\partial Q_1^n(0)\smallsetminus U}\to \overline{Q_{1/2}^{n-1}(0)}$ is any bi-Lipschitz homeomorphism such that $g\equiv\bigg(\frac{1}{2}-\frac{\sqrt{2}}{4}\lVert x'\rVert_{\infty}^\frac{1}{2}\bigg)\frac{x'}{\lVert x'\rVert_{\infty}}$ on $\partial U$. Notice that $\Phi$ is a homeomorphism from $\partial Q^n_1(0)\smallsetminus N$ to $Q^{n-1}_1(0)$.
We denote its inverse map by $\Psi$.\\ We have that $\Psi$ is Lipschitz on $Q^{n-1}_1(0)$ and $\Phi$ is Lipschitz on every compact set $K\subset\partial Q^n_1(0)\smallsetminus N$, since there exists $C>0$ such that \begin{align*} \lvert d\Phi(x)\rvert\le\frac{C}{\lVert x-N\rVert_{\infty}^\frac{1}{2}}, \qquad\forall\, x\in\partial Q^n_1(0)\smallsetminus N,\\ \lvert d\Psi(y)\rvert\le C\bigg(\frac{1}{2}-\lVert y\rVert_{\infty}\bigg), \qquad\forall\, y\in Q_1^{n-1}(0). \end{align*} Define $\tilde F:=\Psi^*F$ and fix any $\varepsilon\in(0,1/4)$. Notice that \begin{align*} \int_{Q^{n-1}_1(0)}\bigg(\frac{1}{2}-\lVert\,\cdot\,\rVert_{\infty}\bigg)^{q}\lvert\tilde F\rvert^p\, d\mathscr{H}^{n-1}&\le C\int_{Q^{n-1}_1(0)}\bigg(\frac{1}{2}-\lVert\,\cdot\,\rVert_{\infty}\bigg)^{n-1}\lvert F\circ\Psi\rvert^p\, d\mathscr{H}^{n-1}\\ &\le C\int_{\partial Q^n_1(0)}\lvert F\rvert^p\, d\mathscr{H}^{n-1}<+\infty. \end{align*} Moreover, if $I\in \mathcal{R}_1(\partial Q_1^n(0))$ and $\ast dF =\partial I$, then $\Phi_\ast I\in \mathcal{R}_1 (Q_1^{n-1}(0))$ and $\ast d \tilde F=\partial \Phi_\ast I$ (see Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties}). This implies that $\tilde F\in\Omega_{p,\mathbb{Z}}^{n-2}(Q^{n-1}_1(0),\mu)$ with $\mu:=(\frac{1}{2}-\lVert\cdot\rVert_{\infty})^q\, \mathcal{L}^{n-1}$ in the sense of Definition \ref{Definition: integer valued fluxes on unit cube}.\\ Let's consider first the case $p\in\big[1,(n-1)/(n-2)\big)$. Notice that in this case $q>0$.
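For the reader's convenience, the exponent in the weight can be tracked explicitly (a pointwise sketch, using only the gradient bound for $\Psi$ stated above): since $\tilde F=\Psi^\ast F$ is an $(n-2)$-form, we have \begin{align*} \Big(\tfrac{1}{2}-\lVert y\rVert_{\infty}\Big)^{q}\lvert\tilde F(y)\rvert^p\le C\Big(\tfrac{1}{2}-\lVert y\rVert_{\infty}\Big)^{q+(n-2)p}\lvert F\circ\Psi(y)\rvert^p=C\Big(\tfrac{1}{2}-\lVert y\rVert_{\infty}\Big)^{n-1}\lvert F\circ\Psi(y)\rvert^p, \end{align*} where we used $\lvert\tilde F\rvert\le\lvert d\Psi\rvert^{n-2}\lvert F\circ\Psi\rvert$ and the identity $q+(n-2)p=n-1$, which follows directly from the definition of $q$.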
By Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields} with $f:=(\frac{1}{2}-\lVert\cdot\rVert_{\infty})^q$, there exists an $(n-2)$-form $\tilde F_{\varepsilon}\in\Omega_{p,R}^{n-2}(\Omega_{\varepsilon,a_{\varepsilon}})$ such that $\lVert \tilde F_\varepsilon-\tilde F\rVert_{L^p(S_{\varepsilon, a_\varepsilon})}\leq \varepsilon$ and \begin{align}\label{Equation: convergence of approximation of partial Q} \lVert \tilde F_{\varepsilon}-\tilde F\rVert^p_{L^p(\Omega_{\varepsilon,a_{\varepsilon}},\mu)}\rightarrow 0 \end{align} as $\varepsilon\rightarrow 0^+$ in $E_{\tilde F}$. Define $F_{\varepsilon}:=\Phi_{a_{\varepsilon}}^*\tilde F_{\varepsilon}$ on $\partial Q_1^n(0)\smallsetminus U_{\varepsilon}$, with $\Phi_{a_{\varepsilon}}:=\Phi+a_{\varepsilon}$ and $U_{\varepsilon}:=\Phi_{a_{\varepsilon}}^{-1}(Q_1^{n-1}(0)\smallsetminus\Omega_{\varepsilon,a_{\varepsilon}})$.
Notice that \begin{align*} \lVert F_{\varepsilon}-F\rVert_{L^p(\partial Q^n_1(0)\smallsetminus U_{\varepsilon})}^p&\le C\int_{\partial Q^n_1(0)\smallsetminus U_{\varepsilon}}\frac{1}{\lVert\cdot - N\rVert_{\infty}^{(n-2)p}}\lvert\tilde F_{\varepsilon}\circ\Phi_{a_{\varepsilon}}-\tilde F\circ\Phi_{a_{\varepsilon}}\rvert^p\, d\mathscr{H}^{n-1}\\ &\quad+C\int_{\partial Q^n_1(0)\smallsetminus U_{\varepsilon}}\frac{1}{\lVert\cdot - N\rVert_{\infty}^{(n-2)p}}\lvert\tilde F\circ\Phi_{a_{\varepsilon}}-\tilde F\circ\Phi\rvert^p\, d\mathscr{H}^{n-1}\\ &\le C\bigg(\int_{\Omega_{\varepsilon,a_{\varepsilon}}}\lvert\tilde F_{\varepsilon}-\tilde F\rvert^p f\, d\mathscr{H}^{n-1}+\int_{Q_{1-\varepsilon}^{n-1}(0)}\lvert\tilde F(\cdot -a_{\varepsilon})-\tilde F\rvert^p f\, d\mathscr{H}^{n-1}\bigg). \end{align*} The first term tends to zero as $\varepsilon\to 0^+$ in $E_{\tilde F}$ by \eqref{Equation: convergence of approximation of partial Q}, while the second tends to zero as $\varepsilon\to 0^+$ by \eqref{Equation: Continuity wrt argument for weighted measure}. Therefore we have \begin{align*} \lVert F_{\varepsilon}-F\rVert_{L^p(\partial Q_1^n(0)\smallsetminus U_{\varepsilon})}^p\rightarrow 0 \end{align*} as $\varepsilon\rightarrow 0^+$ in $E_{\tilde F}$.
Now we notice that \begin{align*} \int_{\partial U_{\varepsilon}}i_{\partial U_{\varepsilon}}^*F_{\varepsilon}&=-\int_{\partial(\partial Q_1^n(0)\smallsetminus U_{\varepsilon})}i_{\partial(\partial Q_1^n(0)\smallsetminus U_{\varepsilon})}^*F_{\varepsilon}\\ &= \int_{\partial\Omega_{\varepsilon,a_{\varepsilon}}}i_{\partial\Omega_{\varepsilon,a_{\varepsilon}}}^*\tilde F_{\varepsilon}=\int_{\partial\Omega_{\varepsilon,a_{\varepsilon}}}i_{\partial\Omega_{\varepsilon,a_{\varepsilon}}}^*\tilde F=:b_{\varepsilon}\in\mathbb{Z}. \end{align*} If $b_{\varepsilon}=0$, then we use Lemma \ref{appendix: extension on the good cubes} to extend $F_{\varepsilon}$ inside $U_{\varepsilon}$. If $b_{\varepsilon}\neq 0$, then we use Lemma \ref{Lemma: extension on the bad cubes} to extend $F_{\varepsilon}$ inside $U_{\varepsilon}$ (notice that $U_{\varepsilon}$ is an $(n-1)$-cube of side-length $8\varepsilon^2$ contained in $U$ and centered at $N$, for $\varepsilon$ sufficiently small). In both cases, the following estimate holds: \begin{align*} \int_{U_{\varepsilon}}\lvert F_{\varepsilon}\rvert^p\, d\mathscr{H}^{n-1}\le C\varepsilon^2\int_{\partial U_{\varepsilon}}\lvert F_\varepsilon\rvert^p\, d\mathscr{H}^{n-2}\leq C\varepsilon^2\left(\int_{\partial U_\varepsilon}\lvert F-F_\varepsilon\rvert^p d\mathscr{H}^{n-2}+\int_{\partial U_\varepsilon}\lvert F\rvert^p d\mathscr{H}^{n-2}\right). \end{align*} Notice that the first term on the right-hand side tends to zero as $\varepsilon\to 0^+$ in $E_{\tilde F}$, since $\lVert \tilde F_\varepsilon-\tilde F\rVert_{L^p(S_{\varepsilon, a_\varepsilon})}\leq \varepsilon$ for any $\varepsilon\in E_{\tilde{F}}$.
In order to control the second term, pick any $\delta>0$ sufficiently small and notice that, by the coarea formula, we have \begin{align*} \fint_0^{\delta}\varepsilon^2\int_{\partial U_{\varepsilon}}\lvert F\rvert^p\, d\mathscr{H}^{n-2}\, d\mathcal{L}^1(\varepsilon)&\le\int_0^{\delta}\varepsilon\int_{\partial Q^{n-1}_{8\varepsilon^2}(N)}\lvert F\rvert^p\, d\mathscr{H}^{n-2}\, d\mathcal{L}^1(\varepsilon)\\ &\leq \frac{1}{16}\int_0^{8\delta^2}\int_{\partial Q^{n-1}_{\zeta}(N)}\lvert F\rvert^p\, d\mathscr{H}^{n-2}\, d\mathcal{L}^1(\zeta)\\ &\le \frac{2^{n-1}}{16}\int_{Q^{n-1}_{8\delta^2}(N)}\lvert F\rvert^p\, d\mathscr{H}^{n-1}\rightarrow 0 \end{align*} as $\delta\rightarrow 0^+$. This implies that we can pick a sequence $\{\varepsilon_j\}_{j\in\mathbb{N}}\subset E_{\tilde F}$ such that $\varepsilon_j\rightarrow 0^+$ and \begin{align*} \varepsilon_j^2\int_{\partial U_{\varepsilon_j}}\lvert F\rvert^p\,d\mathscr{H}^{n-2}\rightarrow 0 \end{align*} as $j\rightarrow+\infty$. Thus \begin{align*} \lVert F_{\varepsilon_j}-F\rVert_{L^p(U_{\varepsilon_j})}^p\le 2^{p-1}\Big(\lVert F_{\varepsilon_j}\rVert_{L^p(U_{\varepsilon_j})}^p+\lVert F\rVert_{L^p(U_{\varepsilon_j})}^p\Big)\rightarrow 0 \end{align*} as $j\rightarrow+\infty$, since $\mathscr{H}^{n-1}\big(U_{\varepsilon_j}\big)\rightarrow 0$ as $\varepsilon_j\rightarrow 0^+$. Hence, we conclude that \begin{align*} \lVert F_{\varepsilon_j}-F\rVert_{L^p(\partial Q^n_1(0))}^p\rightarrow 0 \end{align*} as $j\rightarrow+\infty$. Moreover, by construction we have that $\ast d F_{\varepsilon_j}$ is a finite sum of Dirac deltas with integer coefficients, for any $j\in \mathbb{N}$.
Thus, arguing as in the final step of the proof of Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields} (i.e. by Hodge decomposition), for any $j\in \mathbb{N}$ we can find $\hat F_{\varepsilon_j}\in \Omega^{n-2}_{p,R,\infty}(\partial Q^n_1(0))$ such that $\lVert F_{\varepsilon_j}-\hat F_{\varepsilon_j}\rVert_{L^p(\partial Q_1^n(0))}<\varepsilon_j$. The sequence $\{\hat F_{\varepsilon_j}\}_{j\in \mathbb{N}}$ then has the desired properties. This concludes the proof in the case $p\in\big[1,(n-1)/(n-2)\big)$.\\ If $p\in\big[(n-1)/(n-2),+\infty\big)$, notice that $q\leq 0$. Therefore, by Remark \ref{Remark: no bad cubes if p is big enough} we may assume, up to passing to a subsequence, that no bad cube appears in the construction of $\tilde{F}_\varepsilon$. Hence, repeating the first part of the proof as in the previous case, we get $b_{\varepsilon_j}=0$ and $\ast d F_{\varepsilon_j} =0$ on $\partial Q^n_1(0)$ for any $j\in\mathbb{N}$. Thus we obtain that $F$ can be approximated in $\Omega_p^{n-2}(\partial Q^n_1(0))$ by a sequence $(F_{\varepsilon_j})_{j\in \mathbb{N}}$ such that for any $j\in \mathbb{N}$ there holds $\ast d F_{\varepsilon_j}=0$, and this property passes to the limit. \end{proof} \end{cor} \begin{rem} Notice that if $p=1$, for any $k\in \mathbb{N}$ we have that $F_k\in \Omega_{q}^{n-2}(\partial Q^n_1(0))$ for some $q>1$ which does not depend on $k$ (compare with Remark \ref{Remark: F_k in Lq}). \end{rem} \begin{lem} \label{Lemma: bi-Lipschitz maps preserve approximability properties} Let $M,N\subset \mathbb{R}^n$ be Lipschitz $m$-manifolds in $\mathbb{R}^n$. Let $\varphi: M\to N$ be a bi-Lipschitz map.\\ Let $F\in \Omega_p^{m-1}(M)$ and assume that there exists a $1$-rectifiable current $I\in \mathcal{R}_1(M)$ of finite mass such that $\ast d F=\partial I$ in $(W^{1,\infty}_0(M))^\ast$.
Then $\varphi_\ast F\in \Omega_p^{m-1}(N)$ and $\ast d(\varphi_\ast F)=\partial \varphi_\ast I$.\\ If $(F_k)_{k\in \mathbb{N}}$ is a sequence in $\Omega_{p,R,\infty}^{m-1}(M)$ and $F_k\to F$ in $\Omega_p^{m-1}(M)$ as $k\to\infty$, then $(\varphi_\ast F_k)_{k\in \mathbb{N}}$ is a sequence in $\Omega_{p,R,\infty}^{m-1}(N)$ and $\varphi_\ast F_k\to\varphi_\ast F$ in $\Omega_p^{m-1}(N)$ as $k\to\infty$.\\ If in addition we assume that $N$ is smooth and closed or a bounded simply connected Lipschitz domain and that for any $k\in \mathbb{N}$ we have $F_k\in \Omega_q^{m-1}(N)$ for some $q>1$ (possibly depending on $k$), then $\varphi_\ast F$ can be approximated in $\Omega_{p}^{m-1}(N)$ by $(m-1)$-forms in $\Omega_{p,R}^{m-1}(N)$. \end{lem} \begin{proof} Assume that $\ast d F=\partial I$ holds in $(W^{1,\infty}_0(M))^\ast$. Then for any $f\in W^{1,\infty}_0(N)$ we have $\varphi^\ast f\in W^{1,\infty}_0(M)$ and thus \begin{align*} \langle \ast d (\varphi_\ast F), f\rangle=\int_N\varphi_\ast F\wedge df=\int_M F\wedge d\varphi^\ast f=\langle \partial I, \varphi^\ast f\rangle=\langle I, \varphi^\ast df\rangle=\langle \partial \varphi_\ast I, f\rangle, \end{align*} therefore $\ast d(\varphi_\ast F)=\partial \varphi_\ast I$ in $(W^{1,\infty}_0(N))^\ast$.\\ Now assume that $(F_k)_{k\in \mathbb{N}}$ is a sequence in $\Omega_{p,R,\infty}^{m-1}(M)$ such that $F_k\to F$ in $\Omega_p^{m-1}(M)$ as $k\to\infty$. Then for any $k\in \mathbb{N}$ there exists $I_k\in \mathcal{R}_1(M)$ of finite mass, with $\partial I_k$ supported in a finite subset of $M$, such that $\ast d F_k=\partial I_k$. As we saw above, $\ast d(\varphi_\ast F_k)=\partial \varphi_\ast I_k$, therefore $\varphi_\ast F_k\in \Omega_{p, R,\infty}^{m-1}(N)$.
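The $L^p$ convergence of the pushforwards can be checked by a change of variables (a sketch; the constant depends only on the bi-Lipschitz bounds of $\varphi$): \begin{align*} \lVert\varphi_\ast F_k-\varphi_\ast F\rVert_{L^p(N)}^p\le \operatorname{Lip}(\varphi^{-1})^{(m-1)p}\int_N\lvert (F_k-F)\circ\varphi^{-1}\rvert^p\,d\mathscr{H}^m\le C(\varphi)\,\lVert F_k-F\rVert_{L^p(M)}^p, \end{align*} with $C(\varphi):=\operatorname{Lip}(\varphi^{-1})^{(m-1)p}\operatorname{Lip}(\varphi)^m$, where the first inequality uses the pointwise bound on the pushforward of an $(m-1)$-form and the second follows from the area formula.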
Moreover, we have $\varphi_\ast F_k\to \varphi_\ast F$ in $\Omega_p^{m-1}(N)$ as $k\to\infty$.\\ Finally, if $N$ is smooth and closed or a bounded Lipschitz domain and for any $k\in \mathbb{N}$ we have that $F_k\in \Omega_q^{m-1}(N)$ for some $q>1$, we can improve the approximating sequence $(\varphi_\ast F_k)_{k\in \mathbb{N}}$ as follows: for any $k\in \mathbb{N}$, let $\alpha_k\in \Omega_{W^{1,q}}^{m-2}(N)$, $\beta_k\in \Omega_{W^{1,q}}^{m}(N)$ and $h_k\in \Omega_h^{m-1}(N)$ (the space of harmonic $(m-1)$-forms on $N$) be such that $\varphi_\ast F_k=d\alpha_k+d^\ast \beta_k+h_k$. Then $\ast\Delta \beta_k=\ast d \varphi_\ast F_k$. Since $\ast d\varphi_\ast F_k=\partial \varphi_\ast I_k$, $\ast d\varphi_\ast F_k$ is supported in a finite set of points, thus $\beta_k$ is smooth in $N$ outside of a finite number of points. Now let $\tilde{\alpha}_k\in \Omega^{m-2}(N)$ be such that $\lVert \alpha_k-\tilde{\alpha}_k\rVert_{W^{1,p}}\leq \frac{1}{2^k}$ and set $\tilde{F}_k:= d\tilde{\alpha}_k+d^\ast\beta_k+h_k$. Then by construction $\tilde{F}_k\in \Omega_{p,R}^{m-1}(N)$ and $\tilde{F}_k\to\varphi_\ast F$ in $\Omega_p^{m-1}(N)$ as $k\to\infty$. \end{proof} Theorem \ref{Theorem: strong approximation for vector fields}, Corollary \ref{Corollary: strong approximation on boundary of cube}, Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties} and Remark \ref{Remark: existence of connection is enough for strong approximability on cube} can be combined to obtain the following general statement. \begin{thm}\label{Theorem: strong approximation for general domains} Let $M\subset\mathbb{R}^n$ be any embedded $m$-dimensional Lipschitz submanifold of $\mathbb{R}^n$ which is bi-Lipschitz equivalent either to $Q^m_1(0)$ or to $\partial Q^{m+1}_1(0)$.
Then: \begin{align*} \overline{\Omega_{p,R,\infty}^{m-1}(M)}^{L^p}=\begin{cases} \Omega_{p,\mathbb{Z}}^{m-1}(M) & \mbox{ if } p\in[1,m/(m-1)),\\ \big\{F\in\Omega_{p}^{m-1}(M) \mbox{ s.t. } *\hspace{-0.5mm}dF=0\big\} &\mbox{ if } p\in[m/(m-1),+\infty). \end{cases} \end{align*} Moreover, if $M$ is smooth we have $\overline{\Omega_{p,R}^{m-1}(M)}^{L^p}=\overline{\Omega_{p,R,\infty}^{m-1}(M)}^{L^p}$. \end{thm} \subsection{Corollaries of Theorem \ref{Theorem: strong approximation for vector fields}} Finally, we present a couple of corollaries of Theorem \ref{Theorem: strong approximation for vector fields}.\\ First we show that the boundary of any $I\in\mathcal{R}_1(D)$ having finite mass can be approximated strongly in $(W^{1,\infty}_0(D))^\ast$ by finite sums of deltas with integer coefficients. \begin{cor}\label{Corollary: Approximation of boundaries of 1 rect currents} Let $D\subset\mathbb{R}^n$ be any open and bounded domain in $\mathbb{R}^n$ which is bi-Lipschitz equivalent to $Q^n_1(0)$. Let $I\in \mathcal{R}_1(D)$ with finite mass. Then, there exists a vector field $\,V\in L^1(D)$ such that \begin{align*} \operatorname{div}(V)=\partial I \quad \text{ in }\,(W^{1,\infty}_0(D))^\ast. \end{align*} Thus $\partial I$ can be approximated strongly in $(W^{1,\infty}_0(D))^\ast$ by finite sums of deltas with integer coefficients. More precisely, there exist sequences of points $(P_i)_{i\in \mathbb{N}}$ and $(N_i)_{i\in \mathbb{N}}$ in $D$ such that \begin{align}\label{Equation: result of cor 1.1, approx by finite sing} \partial I= \sum_{i\in \mathbb{N}}(\delta_{P_i}-\delta_{N_i})\,\text{ in }(W^{1,\infty}_0(D))^\ast\text{ and }\sum_{i\in \mathbb{N}}\lvert P_i-N_i\rvert<\infty.
\end{align} \end{cor} \begin{proof} By Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties}, it is enough to consider the case $D=Q_1(0)$. Let $I$ be as above. By \cite[Theorem 5.6]{alberti-baldo-orlandi} there exists a map $u\in W^{1,n-1}(Q_1(0),\mathbb{S}^{n-1})$ such that \begin{align*} \ast d\left(\frac{1}{n}\sum_{i=1}^n(-1)^{i-1}u_i\bigwedge_{j\neq i}d u_j\right)=\alpha_{n-1}\partial I, \end{align*} where $\alpha_{n-1}$ denotes the volume of the $(n-1)$-dimensional ball.\\ Set \begin{align*} \omega:=\frac{1}{n\alpha_{n-1}}\sum_{i=1}^n(-1)^{i-1}u_i\bigwedge_{j\neq i}d u_j. \end{align*} Notice that $\omega\in \Omega^{n-1}_{1}(Q_1(0))$ and $\ast d \omega=\partial I$. Now let $V:=(\ast \omega)^{\sharp}$. Then $V\in L^1(Q_1(0))$ and \begin{align*} \operatorname{div}(V)=\partial I. \end{align*} By Theorem \ref{Theorem: strong approximation for vector fields}, there exists a sequence $(V_k)_{k\in \mathbb{N}}$ in $L^1_R(Q_1(0))$ such that $V_k\to V$ in $L^1$. Then \begin{align*} \operatorname{div}(V_k)\to\operatorname{div}(V)=\partial I \quad \text{ in }\,(W^{1,\infty}_0(Q_1(0)))^\ast. \end{align*} As for any $k\in \mathbb{N}$ we have that $\operatorname{div}(V_k)$ is a finite sum of deltas with integer coefficients, by \cite[Proposition A.1]{Ponce2004}\footnote{Here Proposition A.1 in \cite{Ponce2004} is applied to the following metric space: for any $x,y\in Q_1(0)$ let $\overline{d}(x,y)=\min\{d(x,y),\operatorname{dist}(x,\partial Q_1(0))+\operatorname{dist}(y,\partial Q_1(0))\}$, where $d$ denotes the Euclidean distance in $Q_1(0)$. Let $(\widetilde Q_1(0), \overline{d})$ denote the completion of $Q_1(0)$ with respect to the distance $\overline{d}$.
Then Lipschitz functions on $(\widetilde Q_1(0), \overline{d})$ correspond to functions in $W^{1,\infty}_0(Q_1(0))$ (with the same Lipschitz constant) modulo additive constants.} \eqref{Equation: result of cor 1.1, approx by finite sing} holds. \end{proof} \begin{cor}\label{Corollary: representation of partial I} Let $M$ be a complete Lipschitz $m$-manifold, with or without boundary, compactly contained in the open cube $Q_2(0)$. Let $I\in \mathcal{R}_1(Q_2(0))$ be a rectifiable current of finite mass supported on $M$. Then there exist two sequences of points $(p_i)_{i\in \mathbb{N}}$ and $(n_i)_{i\in \mathbb{N}}$ in $M$ such that \begin{align*} \partial I= \sum_{i\in \mathbb{N}}(\delta_{p_i}-\delta_{n_i})\,\text{ in }(W^{1,\infty}(Q_2(0)))^\ast\text{ and }\sum_{i\in \mathbb{N}}\lvert p_i-n_i\rvert<\infty. \end{align*} \end{cor} \begin{proof} Let $I\in\mathcal{R}_1(M)$ be a rectifiable $1$-current of finite mass supported in $M\subset Q_2(0)$. By Corollary \ref{Corollary: Approximation of boundaries of 1 rect currents} there exists a vector field $V\in L^1(Q_2(0))$ such that $\operatorname{div}(V)=\partial I$. Thus we can apply the arguments of the proof of Theorem \ref{Theorem: strong approximation for vector fields, now really for vector fields} to $V$. For any $\varepsilon\in E_V$ this yields a vector field $\tilde V_\varepsilon\in L^1(\Omega_{a_\varepsilon, \varepsilon})$ with the following properties: all bad cubes $Q\in \mathscr{C}_{a_\varepsilon,\varepsilon}$ are such that $\overline{Q}\cap M\neq \emptyset$, therefore the topological singularities of $\tilde V_\varepsilon$ lie at a distance of at most $\sqrt{n}\varepsilon$ from $M$.
Moreover, notice that if $\varepsilon$ is sufficiently small, $\int_{Q_2(0)}\operatorname{div}(\tilde V_\varepsilon)\,d\mathcal{L}^n=0$ (one can see this by testing $\operatorname{div}(\tilde V_\varepsilon)$ against a function $\varphi\in C^\infty_c(Q_2(0))$ such that $\varphi\equiv 1$ in a neighbourhood of $M$). Thus $\operatorname{div}(\tilde V_\varepsilon)$ can be represented as \begin{align*} \operatorname{div}(\tilde V_\varepsilon)=\sum_{i=1}^{Q^\varepsilon}(\delta_{p_i^\varepsilon}-\delta_{n_i^\varepsilon}) \end{align*} for some $Q^\varepsilon\in\mathbb{N}$ and points $p_i^\varepsilon$ and $n_i^\varepsilon$ (possibly repeated) in a $\sqrt{n}\varepsilon$-neighbourhood of $M$. By the argument of Lemma \ref{Lemma: volume of the bad cubes vanishes at the limit} (with $Q_2(0)$ in place of $Q_1(0)$) we have $\varepsilon Q^\varepsilon \to 0$ as $\varepsilon\to 0^+$ in $E_V$. Now for any $i\in \{1,...,Q^\varepsilon\}$ let $\tilde p_i^\varepsilon$ and $\tilde n_i^\varepsilon$ be points in $M$ such that $\lvert p_i^\varepsilon-\tilde p_i^\varepsilon\rvert<2\sqrt{n}\varepsilon$ and $\lvert n_i^\varepsilon-\tilde n_i^\varepsilon\rvert<2\sqrt{n}\varepsilon$. Let $I_{p_i^\varepsilon}\in \mathcal{R}_1(Q_2(0))$ be the rectifiable current given by integration on the segment joining $p_i^\varepsilon$ and $\tilde p_i^\varepsilon$, oriented from $p_i^\varepsilon$ to $\tilde p_i^\varepsilon$, and let $I_{n_i^\varepsilon}\in \mathcal{R}_1(Q_2(0))$ be the rectifiable current given by integration on the segment joining $n_i^\varepsilon$ and $\tilde n_i^\varepsilon$, oriented from $\tilde n_i^\varepsilon$ to $n_i^\varepsilon$. Let $I_\varepsilon\in \mathcal{R}_1(Q_2(0))$ be a rectifiable $1$-current of finite mass such that $\operatorname{div}(\tilde V_\varepsilon)=\partial I_\varepsilon$.
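Since each of the connecting segments above has length at most $2\sqrt{n}\varepsilon$, the total mass of the correction currents is controlled by \begin{align*} \mathbb{M}\bigg(\sum_{i=1}^{Q^\varepsilon}\big(I_{p_i^\varepsilon}+I_{n_i^\varepsilon}\big)\bigg)\le\sum_{i=1}^{Q^\varepsilon}\big(\lvert p_i^\varepsilon-\tilde p_i^\varepsilon\rvert+\lvert n_i^\varepsilon-\tilde n_i^\varepsilon\rvert\big)\le 4\sqrt{n}\,\varepsilon Q^\varepsilon, \end{align*} which tends to $0$ as $\varepsilon\to 0^+$ in $E_V$, since $\varepsilon Q^\varepsilon\to 0$.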
Set $\tilde I_\varepsilon= I_\varepsilon+\sum_{i=1}^{Q^\varepsilon}(I_{p_i^\varepsilon}+I_{n_i^\varepsilon})$. Then \begin{align*} \partial \tilde I_\varepsilon=\sum_{i=1}^{Q^\varepsilon}(\delta_{\tilde p_i^\varepsilon}-\delta_{\tilde n_i^\varepsilon}) \end{align*} is supported in $M$. Moreover we have \begin{align*} \lVert \partial I-\partial \tilde I_\varepsilon \rVert_{(W^{1,\infty}(M))^\ast}\leq \lVert \partial I-\partial I_\varepsilon\rVert_{(W^{1,\infty}(Q_2(0)))^\ast}+\lVert \partial I_\varepsilon-\partial \tilde I_\varepsilon \rVert_{(W^{1,\infty}(Q_2(0)))^\ast} \end{align*} (here $M$ is endowed with the Euclidean distance in $Q_2(0)$; notice that we are making use of the fact that any Lipschitz function on $M$ can be extended to a Lipschitz function on $Q_2(0)$ with the same Lipschitz constant). Now since $\lVert \tilde V_\varepsilon-V\rVert_{L^1(\Omega_{a_\varepsilon,\varepsilon})}\to 0$ as $\varepsilon\to 0$ in $E_V$ and $\partial I$, $\partial I_\varepsilon$ are supported in a compact subset of $Q_2(0)$, the first term on the right-hand side tends to zero as $\varepsilon\to 0^+$ in $E_V$. Moreover the second term is bounded by $4\sqrt{n}\varepsilon Q^\varepsilon$ (see for instance Lemma 2 in \cite{brezis}) and thus tends to $0$ as $\varepsilon\to 0^+$ in $E_V$. This shows that $\partial I$ belongs to the (strong) $(W^{1,\infty}(M))^\ast$-closure of the class of $0$-currents $T$ on $M$ such that \begin{align}\label{Equation: explicit form for T, sum of deltas in M} T=\sum_{j\in J}(\delta_{p_j}-\delta_{n_j})\text{ in }(W^{1,\infty}(M))^\ast\text{ and }\sum_{j\in J}\lvert p_j- n_j\rvert<\infty \end{align} for a countable set $J$ and points $p_j$, $n_j$ in $M$.
By \cite[Proposition A.1]{Ponce2004} applied to the complete metric space $(M, d)$ (where $d$ denotes the Euclidean distance in $Q_2(0)$) this space is closed in $(W^{1,\infty}(M))^\ast$, therefore $\partial I$ is also of this form. Since any Lipschitz function $\varphi\in W^{1,\infty}(Q_2(0))$ has a Lipschitz trace $\varphi\big\vert_{M}$ on $M$ and $\langle \partial I, \varphi\rangle_{Q_2(0)}=\langle \partial I, \varphi\big\vert_M\rangle_M$, we conclude that $\partial I$ can be represented as in \eqref{Equation: explicit form for T, sum of deltas in M} also as an element of $(W^{1,\infty}(Q_2(0)))^\ast$. \end{proof} Theorem \ref{Theorem: strong approximation for vector fields} could also be useful to obtain approximation results for Sobolev maps with values into manifolds. For instance, we can use it to recover the following result, due to R. Schoen and K. Uhlenbeck (for $p=2$, see \cite{schoen-uhlenbeck-2}, Section 4) and F. Bethuel and X. Zheng (for $p>2$, see \cite[Theorem 4]{bethuel-zheng}). \begin{cor} Let $u\in W^{1,p}(Q_1(0), \mathbb{S}^1)$ for some $p\in (1,\infty)$.\\ If $p\geq 2$, then \begin{align*} \operatorname{div}(u\wedge \nabla^\perp u)=0\quad\text{ in }\mathcal{D}'(Q_1(0)) \end{align*} and $u$ can be approximated in $W^{1,p}$ by a sequence of functions in $C^\infty(Q_1(0), \mathbb{S}^1).$\\ If $p<2$, then \begin{align} \label{Equation: form of weak Jacobian of maps to S^1} \frac{1}{2\pi}\operatorname{div}(u\wedge \nabla^\perp u)=\partial I, \end{align} where $I\in \mathcal{R}_1(Q_1(0))$ is a 1-rectifiable current of finite mass, and $u$ can be approximated in $W^{1,p}$ by a sequence of functions in \begin{align*} \mathcal{R}:=\left\{v\in W^{1,p}(Q_1(0),\mathbb{S}^1); v\in C^\infty(Q_1(0)\smallsetminus A, \mathbb{S}^1),\text{ where }A\text{ is some finite set}\right\}.
\end{align*} \begin{proof} First we claim that the vector field $\frac{1}{2\pi}u\wedge \nabla^\perp u$ belongs to $L^p_\mathbb{Z}(Q_1(0))$. In fact, notice that for any $x_0\in Q_1(0)$ and a.e. $\rho\in (0, 2\dist_\infty(x_0,\partial Q_1(0)))$, $\mathscr{H}^{n-1}$-a.e. point of $\partial Q_\rho(x_0)$ is a Lebesgue point of $u\wedge \nabla^\perp u$. Moreover for almost every such $\rho$ we have \begin{align*} \frac{1}{2\pi}\int_{\partial Q_\rho(x_0)}(u\wedge\nabla^\perp u)\cdot \nu_{\partial Q_\rho(x_0)}=\deg \left(u\big\vert_{\partial Q_\rho(x_0)}\right)\in \mathbb{Z}. \end{align*} Hence the vector field $\frac{1}{2\pi}u\wedge\nabla^\perp u$ belongs to $L^p_\mathbb{Z}(Q_1(0))$.\\ Thus if $p\geq 2$, by Theorem \ref{Theorem: strong approximation for vector fields} there holds $\operatorname{div}(u\wedge\nabla^\perp u)=0$ in $\mathcal{D}'(Q_1(0))$, while if $p<2$ there exists a sequence of vector fields $(V_n)_{n\in \mathbb{N}}$ in $L^p_R(Q_1(0))$ such that \begin{align*} V_n\to \frac{1}{2\pi}u\wedge \nabla^\perp u\quad\text{ in }L^p(Q_1(0))\text{ as }n\to\infty. \end{align*} For any $n\in \mathbb{N}$, by Hodge decomposition there exist $a_n\in W^{1,p}(Q_1(0))$ and $b_n\in W^{1,p}_0(Q_1(0))$ such that \begin{align*} 2\pi V_n=\nabla^\perp a_n+\nabla b_n. \end{align*} For any $n\in \mathbb{N}$ let $\tilde a_n\in C^\infty(Q_1(0))$ be such that $\lVert \tilde a_n-a_n\rVert_{L^p}\leq \frac{1}{n}$. Moreover, notice that there exists $d_n\in W^{1,p}(Q_1(0),\mathbb{S}^1)\cap C^\infty(Q_1(0)\smallsetminus A)$, where $A$ is a finite set, such that \begin{align*} \nabla b_n=d_n\wedge \nabla^\perp d_n.
\end{align*} In fact \begin{align*} \Delta b_n=2\pi\operatorname{div}(V_n)=2\pi\sum_{i=1}^{Q^n}d_i^n\delta_{p_i^n} \end{align*} for some $Q^n\in \mathbb{N}$, $p_i^n\in Q_1(0)$ and $d_i^n\in \mathbb{Z}$, thus $b_n=-\sum_{i=1}^{Q^n}\log\lvert x-p_i^n\rvert^{d_i^n}+h_n$ for a harmonic function $h_n$. Then $d_n$ can be chosen to be \begin{align*} d_n(x)= e^{-i\tilde{h}_n}\prod_{i=1}^{Q^n} \left(\frac{x-p_i^n}{\lvert x-p_i^n\rvert}\right)^{d_i^n}, \end{align*} where $\tilde{h}_n$ is the harmonic conjugate of $h_n$ (the product has to be understood as complex multiplication in $\mathbb{C}\simeq\mathbb{R}^2$).\\ For any $n\in \mathbb{N}$ set $u_n:=e^{i\tilde a_n}d_n$. Then by construction $u_n\in \mathcal{R}$ and \begin{align*} u_n\wedge \nabla^\perp u_n\to u\wedge \nabla^\perp u \quad\text{ in }L^p(Q_1(0))\text{ as }n\to \infty. \end{align*} Therefore there is $c\in [0,2\pi)$ such that, up to a subsequence, \begin{align*} e^{ic}u_n\to u\quad\text{ in }L^p(Q_1(0))\text{ as }n\to\infty. \end{align*} \end{proof} \end{cor} \begin{rem} Equation \eqref{Equation: form of weak Jacobian of maps to S^1} was obtained in \cite[Theorem 3']{brezis-mironescu-ponce} with the help of the approximation result of F. Bethuel and X. Zheng. \end{rem} \section{The weak $L^p$-closure of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1^n(0))$} In the present section we follow the ideas presented in \cite{petrache-riviere-abelian} in order to prove that the space $\Omega_{p,\mathbb{Z}}^{n-1}(Q^n_1(0))$ is weakly sequentially closed for every $n\ge 2$ and $p\in(1,+\infty)$.
The main reason why such techniques could not be used before in this context for $n\neq 3$ was the lack of a strong approximation theorem like Theorem \ref{Theorem: strong approximation for vector fields} in general dimension $n$. Such a result is needed in order to define a suitable notion of distance between the cubical slices of a form $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1^n(0))$, given by $(x\mapsto x_0+\rho x)^*i_{\partial Q_{\rho}(x_0)}^*F$ for $\mathcal{L}^1$-a.e. $\rho$ (see Subsections 3.1 and 3.2 for the precise definition). Once we have turned the space of the cubical slices of $F$ into a metric space, we will show that the ``slice function'' associated to $F$, given by $\rho\mapsto(x\mapsto x_0+\rho x)^*i_{\partial Q_{\rho}(x_0)}^*F$, is locally $\frac{1}{p'}$-H\"older continuous (see Subsection 3.3). Moreover, we will see that if $\{F_k\}_{k\in\mathbb{N}}\subset\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ converges weakly in $L^p$, then the sequence of the slice functions associated to each $F_k$ is locally uniformly $\frac{1}{p'}$-H\"older continuous. Finally, we will use the previous facts together with some technical lemmata to conclude the proof of Theorem \ref{Theorem: weak closure} for $D=Q^n_1(0)$. Notice that by Theorem \ref{Theorem: strong approximation for vector fields} the result is clear if $p\in \big[n/(n-1),\infty\big)$; here we will focus on the case $p\in\big(1,n/(n-1)\big)$. \subsection{Slice distance on $\mathbb{S}^{n-1}$} Throughout the following section, we will assume that $p\in\big(1,n/(n-1)\big)$. Moreover, we will denote by ``$\ast$'' the Hodge star operator associated with the standard round metric on $\mathbb{S}^{n-1}$.
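Before introducing the slice distance, the following standard model example (recorded here only for illustration; the notation $F_0$, $X$ is not used elsewhere) shows why the range $p<n/(n-1)$ is the relevant one: the radial ``vortex'' form carries a point singularity of unit integer flux and belongs to $L^p$ exactly in this range.

\begin{rem}
Let $\sigma_{n-1}:=\mathscr{H}^{n-1}(\mathbb{S}^{n-1})$ and consider the $(n-1)$-form on $Q_1(0)$ given by
\begin{align*}
F_0:=\frac{1}{\sigma_{n-1}}\,\iota_{X}\big(dx^1\wedge\dots\wedge dx^n\big),\qquad X(x):=\frac{x}{\lvert x\rvert^n}.
\end{align*}
Since $\operatorname{div}(X)=0$ on $Q_1(0)\smallsetminus\{0\}$, by the divergence theorem the flux of $F_0$ through $\partial Q_\rho(0)$ coincides, for every $\rho\in(0,1)$, with the flux through a small sphere $\partial B_r(0)$, namely
\begin{align*}
\int_{\partial Q_\rho(0)}i_{\partial Q_\rho(0)}^*F_0=\frac{1}{\sigma_{n-1}}\int_{\partial B_r(0)}\frac{x}{\lvert x\rvert^n}\cdot\frac{x}{\lvert x\rvert}\,d\mathscr{H}^{n-1}=\frac{r^{1-n}\,\sigma_{n-1}r^{n-1}}{\sigma_{n-1}}=1\in\mathbb{Z}.
\end{align*}
On the other hand $\lvert F_0(x)\rvert=\sigma_{n-1}^{-1}\lvert x\rvert^{1-n}$, so that $F_0\in L^p(Q_1(0))$ if and only if $(n-1)p<n$, that is, if and only if $p<n/(n-1)$. Moreover, up to a sign depending on the choice of orientations, $\ast\, dF_0=\delta_0$ distributionally, which is the boundary in $Q_1(0)$ of the current of integration over a segment joining $0$ to $\partial Q_1(0)$.
\end{rem}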
We will denote by $Z$ the linear subspace of $\Omega_{p}^{n-1}(\mathbb{S}^{n-1})$ given by \begin{align*} Z:=\bigg\{h\in\Omega_p^{n-1}(\mathbb{S}^{n-1}) \mbox{ s.t. } \int_{\mathbb{S}^{n-1}}h\in\mathbb{Z}\bigg\}. \end{align*} \begin{rem} It is clear that $Z$ is weakly (and thus strongly) $L^p$-closed in $\Omega_{p}^{n-1}(\mathbb{S}^{n-1})$. Indeed, let $\{h_k\}_{k\in\mathbb{N}}\subset Z$ be any sequence such that $h_k\rightharpoonup h$ weakly in $\Omega_p^{n-1}(\mathbb{S}^{n-1})$, i.e. \begin{align*} \int_{\mathbb{S}^{n-1}}\varphi h_k\rightarrow\int_{\mathbb{S}^{n-1}}\varphi h, \qquad\forall\,\varphi\in L^{p'}(\mathbb{S}^{n-1}). \end{align*} Then, the statement follows by picking $\varphi\equiv 1$ and noticing that a convergent sequence of integers is eventually constant. \end{rem} Fix an arbitrary point $q\in\mathbb{S}^{n-1}$. We define the functions $d,\tilde d:Z\times Z\rightarrow [0,+\infty]$ by \begin{align*} d(h_1,h_2):=\inf\bigg\{||\alpha||_{L^p}\, \mbox{ s.t. } *\hspace{-0.5mm}(h_1-h_2)=d^*\alpha+\partial I+\bigg(\int_{\mathbb{S}^{n-1}}h_1-h_2\bigg)\delta_q\bigg\}, \end{align*} with $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$, $I\in\mathcal{R}_1(\mathbb{S}^{n-1})$, and \begin{align*} \tilde d(h_1,h_2):=\inf\bigg\{||\alpha||_{L^p}\, \mbox{ s.t. } *\hspace{-0.5mm}(h_1-h_2)=d^*\alpha+\partial I+\bigg(\int_{\mathbb{S}^{n-1}}h_1-h_2\bigg)\delta_q\bigg\}, \end{align*} with $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$, $I\in\mathcal{R}_1(\mathbb{S}^{n-1})\cap\mathcal{N}_1(\mathbb{S}^{n-1})$. \begin{rem}[$d$ and $\tilde d$ are always finite on $Z$] \label{remark the distances are finite} We claim that $d,\tilde d<+\infty$.
Since obviously $d(h_1,h_2)\le\tilde d(h_1,h_2)$, it is enough to show that $\tilde d(h_1,h_2)<+\infty$ for every $h_1,h_2\in Z$. This just amounts to saying that given any $h_1,h_2\in Z$ we can always find $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$ and $I\in\mathcal{R}_1(\mathbb{S}^{n-1})\cap\mathcal{N}_1(\mathbb{S}^{n-1})$ satisfying \begin{align*} *(h_1-h_2)=d^*\alpha+\partial I+\bigg(\int_{\mathbb{S}^{n-1}}h_1-h_2\bigg)\delta_q. \end{align*} Indeed, let \begin{align*} a:=\int_{\mathbb{S}^{n-1}}h_1-h_2\in\mathbb{Z}. \end{align*} Consider the following first order differential system on $\mathbb{S}^{n-1}$: \begin{align*} \begin{cases} d^*\omega=*(h_1-h_2)-a\delta_q=:F,\\ d\omega=0. \end{cases} \end{align*} Since $p\in\big(1,n/(n-1)\big)$, $F\in\mathcal{L}(W^{1,p'}(\mathbb{S}^{n-1}))$. Moreover, $\langle F,1\rangle=0$. Hence, by Lemma \ref{appendix lemma d* e d}, we know that the previous differential system has a solution $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$ and the statement follows. \end{rem} \begin{rem} \label{remark equivalence distances} As observed above, it is clear that $d(h_1,h_2)\le\tilde d(h_1,h_2)$ for every $h_1,h_2\in Z$. We claim that actually $d(h_1,h_2)=\tilde d(h_1,h_2)$ for every $h_1,h_2\in Z$. In order to prove the remaining inequality, fix any $h_1,h_2\in Z$ and let $\{\alpha_k\}_{k\in\mathbb{N}}\subset\Omega_p^1(\mathbb{S}^{n-1})$, $\{I_k\}_{k\in\mathbb{N}}\subset\mathcal{R}_1(\mathbb{S}^{n-1})$ be such that \begin{align*} \begin{cases} *(h_1-h_2)=d^*\alpha_k+\partial I_k+a\delta_q,\quad\forall\,k\in\mathbb{N},\\ \lvert\lvert\alpha_k\rvert\rvert_{L^p}\rightarrow d(h_1,h_2) \mbox{ as } k\rightarrow\infty, \end{cases} \end{align*} with \begin{align*} a:=\int_{\mathbb{S}^{n-1}}h_1-h_2.
\end{align*} By Corollary \ref{appendix corollary laplace}, the linear differential equation \begin{align*} \Delta u=*(h_1-h_2)-a\delta_q \end{align*} has a weak solution $\psi\in\dot W^{1,p}(\mathbb{S}^{n-1})$. Let $\omega_k:=d\psi-\alpha_k$, for every $k\in\mathbb{N}$. Notice that \begin{align*} d^*\omega_k=\partial I_k, \qquad\forall\, k\in\mathbb{N}. \end{align*} By Theorem \ref{Theorem: strong approximation for general domains}, for every $k\in\mathbb{N}$ there exists a sequence $\{\omega_{k}^j\}_{j\in\mathbb{N}}\subset\Omega_{p,R}^1(\mathbb{S}^{n-1})$ such that $r_k^j:=\omega_k-\omega_k^j\rightarrow 0$ strongly in $L^p$ as $j\rightarrow\infty$. By construction, it follows that \begin{align*} *(h_1-h_2)=d^*(\alpha_k+r_k^j)+d^*\omega_k^j+a\delta_q,\qquad\forall\,k,j\in\mathbb{N}. \end{align*} We observe that by Proposition \ref{appendix proposition existence minimal connection finitely many singularities} for every $k,j\in\mathbb{N}$ there exists $I_k^j\in\mathcal{R}_1(\mathbb{S}^{n-1})\cap\mathcal{N}_1(\mathbb{S}^{n-1})$ such that $d^*\omega_k^j=\partial I_k^j$. This implies that \begin{align*} \tilde d(h_1,h_2)\le\lvert\lvert\alpha_k+r_k^j\rvert\rvert_{L^p}\le\lvert\lvert\alpha_k\rvert\rvert_{L^p}+\lvert\lvert r_k^j\rvert\rvert_{L^p},\qquad\forall\,k,j\in\mathbb{N}. \end{align*} By letting first $j\rightarrow\infty$ and then $k\rightarrow\infty$ in the previous inequality, our claim follows. \end{rem} \begin{prop} \label{d is a metric} $(Z,d)$ is a metric space. \begin{proof} We need to check symmetry, the triangle inequality and non-degeneracy. \textit{Symmetry}. This is clear since both the $L^p$-norm and the space $\mathcal{R}_1(\mathbb{S}^{n-1})$ are invariant under sign change. \textit{Triangle inequality}. Let $h_1,h_2,h_3\in Z$.
By definition of infimum, for every $\varepsilon>0$ we can write \begin{align*} \begin{cases} \displaystyle{*(h_1-h_2)=d^*\alpha_{\varepsilon}+\partial I_{\varepsilon}+\bigg(\int_{\mathbb{S}^{n-1}}h_1-h_2\bigg)\delta_{q}},\\ \displaystyle{*(h_2-h_3)=d^*\alpha_{\varepsilon}'+\partial I_{\varepsilon}'+\bigg(\int_{\mathbb{S}^{n-1}}h_2-h_3\bigg)\delta_{q}}, \end{cases} \end{align*} with $\alpha_{\varepsilon},\alpha_{\varepsilon}'\in\Omega_p^1(\mathbb{S}^{n-1})$ and $I_{\varepsilon},I_{\varepsilon}'\in\mathcal{R}_1(\mathbb{S}^{n-1})$ satisfying \begin{align*} \begin{cases} \lvert\lvert\alpha_{\varepsilon}\rvert\rvert_{L^p}\le d(h_1,h_2)+\varepsilon,\\ \lvert\lvert\alpha_{\varepsilon}'\rvert\rvert_{L^p}\le d(h_2,h_3)+\varepsilon. \end{cases} \end{align*} We notice that \begin{align*} *(h_1-h_3)=d^*(\alpha_{\varepsilon}+\alpha_{\varepsilon}')+\partial(I_{\varepsilon}+I_{\varepsilon}')+\bigg(\int_{\mathbb{S}^{n-1}}h_1-h_3\bigg)\delta_{q}, \qquad\forall\,\varepsilon>0. \end{align*} Then, by definition of $d$, we have \begin{align*} d(h_1,h_3)\le\lvert\lvert\alpha_{\varepsilon}+\alpha_{\varepsilon}'\rvert\rvert_{L^p}\le\lvert\lvert\alpha_{\varepsilon}\rvert\rvert_{L^p}+\lvert\lvert\alpha_{\varepsilon}'\rvert\rvert_{L^p}\le d(h_1,h_2)+d(h_2,h_3)+2\varepsilon, \qquad\forall\,\varepsilon>0. \end{align*} By letting $\varepsilon\rightarrow 0^+$ in the previous inequality, we get our claim. \textit{Non-degeneracy}. Assume that $d(h_1,h_2)=0$ for some $h_1,h_2\in Z$. Let \begin{align*} a:=\int_{\mathbb{S}^{n-1}}h_1-h_2\in\mathbb{Z}.
\end{align*} Then, since $d=\tilde d$ (see Remark \ref{remark equivalence distances}) and by definition of $\tilde d$, there exist $\{\alpha_k\}_{k\in\mathbb{N}}\subset\Omega_p^1(\mathbb{S}^{n-1})$ and $\{I_k\}_{k\in\mathbb{N}}\subset\mathcal{R}_1(\mathbb{S}^{n-1})\cap\mathcal{N}_1(\mathbb{S}^{n-1})$ such that \begin{align*} *(h_1-h_2)=d^*\alpha_k+\partial I_k+a\delta_q \end{align*} and $\alpha_k\rightarrow 0$ strongly in $L^p$ as $k\to\infty$. Observe that \begin{align*} \partial I_k\to \ast(h_1-h_2)-a\delta_q\quad\text{ in }(W^{1,\infty}(\mathbb{S}^{n-1}))^\ast. \end{align*} Now for any $k\in \mathbb{N}$, $\partial I_k$ can be represented as \begin{align*} \partial I_k=\sum_{j=1}^{J_k}(\delta_{p_j^k}-\delta_{n_j^k}) \end{align*} for some $J_k\in \mathbb{N}$ and points $p_j^k$, $n_j^k$ in $\mathbb{S}^{n-1}$. But the space of distributions of the form \begin{align}\label{Equation: Distribution as sum of dipoles} \sum_{j\in J}(\delta_{p_j}-\delta_{n_j})\,\text{ such that }\,\sum_{j\in J}\lvert p_j- n_j\rvert<\infty \end{align} (for a countable set $J$ and points $p_j$, $n_j$ in $\mathbb{S}^{n-1}$) is closed with respect to the (strong) topology of $({W}^{1,\infty}(\mathbb{S}^{n-1}))^\ast$ (see Proposition A.1 in \cite{Ponce2004}), thus there exists a distribution $T$ as in \eqref{Equation: Distribution as sum of dipoles} such that \begin{align}\label{Equation: Contradiction, Lp equal sum of deltas} \ast\,(h_1-h_2)=T+a\delta_q. \end{align} But this implies that $\ast\,(h_1-h_2)=0$, and hence $h_1=h_2$, since the left-hand side in \eqref{Equation: Contradiction, Lp equal sum of deltas} is in $L^p$ whilst the right-hand side is in $L^p$ if and only if it is equal to zero.
\end{proof} \end{prop} \begin{rem} Notice that the proof above relies on the fact that $d=\tilde{d}$, which was proved using Corollary \ref{Corollary: strong approximation on boundary of cube}. As the proof of Corollary \ref{Corollary: strong approximation on boundary of cube} was rather cumbersome, we remark here that there is a way to skip that passage. Indeed, in the proof of the non-degeneracy of $d$ it is not necessary to assume that $\{I_k\}_{k\in \mathbb{N}}$ lies in $\mathcal{N}_1(\mathbb{S}^{n-1})$. In fact, it follows from Corollary \ref{Corollary: representation of partial I} that $\partial I_k$ is of the form \eqref{Equation: Distribution as sum of dipoles}, and thus the limit of $\{\partial I_k\}_{k\in \mathbb{N}}$ in $(W^{1,\infty}(\mathbb{S}^{n-1}))^\ast$ will also be of that form. \end{rem} \begin{prop} \label{d metrizes the weak topology on bounded subsets} Let $\{h_k\}_{k\in\mathbb{N}}\subset Z$ and $h\in Z$. Then the following are equivalent: \begin{enumerate} \item $\{h_k\}_{k\in\mathbb{N}}\subset Z$ is uniformly bounded w.r.t. the $L^p$-norm and $d(h_k,h)\rightarrow 0$ as $k\rightarrow\infty$; \item $h_k\rightharpoonup h$ weakly in $L^p$ as $k\rightarrow\infty$. \end{enumerate} \begin{proof} We prove the two implications separately. \textit{$2\Rightarrow 1$}. Pick any subsequence of $\{h_k\}_{k\in\mathbb{N}}$ (not relabelled). For every $k\in\mathbb{N}$, let \begin{align*} a_k:=\langle h_k-h,1\rangle=\int_{\mathbb{S}^{n-1}}h_k-h. \end{align*} Since $h_k\rightharpoonup h$ weakly in $L^p$ as $k\rightarrow\infty$, it follows that $a_k\rightarrow 0$ as $k\rightarrow\infty$. Since $\{a_k\}_{k\in\mathbb{N}}\subset\mathbb{Z}$, there exists $K\in\mathbb{N}$ such that $a_k=0$ for every $k\ge K$. Fix any $k\ge K$.
By Lemma \ref{appendix lemma d* e d}, the linear differential system \begin{align*} \begin{cases} d^*\omega=*(h_k-h),\\ d\omega=0, \end{cases} \end{align*} respectively (if $n=2$) \begin{align*} \begin{cases} d^*\omega=*(h_k-h),\\ d\omega=0,\\ \displaystyle{\int_{\mathbb{S}^1}\omega=0}, \end{cases} \end{align*} has a unique weak solution $\alpha_k\in\Omega_p^1(\mathbb{S}^{n-1})$. By Remark \ref{appendix remark sobolev forms} we have \begin{align*} \lvert\lvert\alpha_k\rvert\rvert_{W^{1,p}}\le C\big(\lvert\lvert d\alpha_k\rvert\rvert_{L^p}+\lvert\lvert d^*\alpha_k\rvert\rvert_{L^p}\big)=C\lvert\lvert h_k-h\rvert\rvert_{L^p}. \end{align*} Since $\{h_k\}_{k\in\mathbb{N}}$ is weakly convergent, we know that it is also uniformly bounded w.r.t. the $L^p$-norm. Then $\{\alpha_k\}_{k\ge K}$ is uniformly bounded w.r.t. the $W^{1,p}$-norm. Hence, by weak compactness in $W^{1,p}$, there exist a subsequence $\{\alpha_{k_l}\}_{l\in\mathbb{N}}\subset\{\alpha_k\}_{k\ge K}$ and a one-form $\alpha\in \Omega^1_{W^{1,p}}(\mathbb{S}^{n-1})$ such that $\alpha_{k_l}\rightharpoonup\alpha$ weakly in $W^{1,p}$. By the Rellich--Kondrachov theorem, it follows that $\alpha_{k_l}\rightarrow \alpha$ strongly in $L^p$. We claim that $\alpha=0$.
Indeed, \begin{align*} \langle\alpha,\omega\rangle_{L^p-L^{p'}}&=\lim_{l\rightarrow\infty}\langle\alpha_{k_l},d\varphi+d^*\beta\rangle_{L^p-L^{p'}}\\ &=\lim_{l\rightarrow\infty}-\langle d^\ast \alpha_{k_l}, \varphi\rangle_{L^p-L^{p'}}\\ &=\lim_{l\rightarrow\infty}-\int_{\mathbb{S}^{n-1}} (h_{k_l}-h)\wedge \varphi=0, \qquad\forall\, \omega=d\varphi+d^*\beta\in\Omega^1(\mathbb{S}^{n-1}), \end{align*} respectively (if $n=2$) \begin{align*} \langle\alpha,\omega\rangle_{L^p-L^{p'}}&=\lim_{l\rightarrow\infty}\langle\alpha_{k_l},d\varphi+d^*\beta+\eta\rangle_{L^p-L^{p'}}\\ &=\lim_{l\rightarrow\infty}\langle\alpha_{k_l},d\varphi+d^*\beta\rangle_{L^p-L^{p'}}\\ &=\lim_{l\rightarrow\infty}-\langle d^\ast \alpha_{k_l}, \varphi\rangle_{L^p-L^{p'}}\\ &=\lim_{l\rightarrow\infty}-\int_{\mathbb{S}^{n-1}} (h_{k_l}-h)\wedge \varphi=0, \qquad\forall\, \omega=d\varphi+d^*\beta+\eta\in\Omega^1(\mathbb{S}^1), \end{align*} where $\eta\in\Omega^1(\mathbb{S}^1)$ is a harmonic $1$-form on $\mathbb{S}^1$ (hence a constant $1$-form) and the second equality follows because $\alpha_{k_l}$ is distributionally closed. Hence, we have shown that $\alpha_{k_l}\rightarrow 0$ strongly in $L^p$ as $l\to\infty$. As $\ast(h_{k_l}-h)=d^*\alpha_{k_l}$ for every $l\in\mathbb{N}$, we have \begin{align*} d(h_{k_l},h)\le\lvert\lvert\alpha_{k_l}\rvert\rvert_{L^p}\rightarrow 0 \qquad\mbox{ as } l\rightarrow\infty. \end{align*} We have just proved that any subsequence of $\{h_k\}_{k\in\mathbb{N}}$ has a further subsequence converging to $h$ with respect to $d$, therefore $1.$ follows. \textit{$1\Rightarrow 2$}. Pick any subsequence of $\{h_k\}_{k\in\mathbb{N}}$ (not relabelled). Since $\{h_k\}_{k\in\mathbb{N}}\subset Z$ is uniformly bounded w.r.t.
the $L^p$-norm, by weak $L^p$-compactness there exist a subsequence $\{h_{k_l}\}_{l\in\mathbb{N}}$ of $\{h_k\}_{k\in\mathbb{N}}$ and an $h_w\in Z$ such that $h_{k_l}\rightharpoonup h_w$ weakly in $L^p$. Since we have just shown that $2\Rightarrow 1$, we know that $d(h_{k_l},h_w)\rightarrow 0$ as $l\rightarrow\infty$. By uniqueness of the limit, we get $h_w=h$. We have just proved that any subsequence of $\{h_k\}_{k\in\mathbb{N}}\subset Z$ has a further subsequence converging to $h$ weakly in $L^p$; hence, $2.$ follows. \end{proof} \end{prop} \subsection{Slice distance on $\partial Q^n_1(0)$} Let $Q_1(0)\subset\mathbb{R}^n$ be the unit cube in $\mathbb{R}^n$ centered at the origin and let $\Psi:\mathbb{S}^{n-1}\rightarrow\partial Q_1(0)$ be a bi-Lipschitz homeomorphism. We let $Y$ be the linear subspace of $\Omega_{p}^{n-1}(\partial Q_1(0))$ given by \begin{align*} Y:=\bigg\{h\in\Omega_p^{n-1}(\partial Q_1(0)) \mbox{ s.t. } \int_{\partial Q_1(0)}h\in\mathbb{Z}\bigg\}. \end{align*} \begin{rem} Notice that $h\in Y$ if and only if $\Psi^*h\in Z$. Indeed, given any $h\in Y$ we have \begin{align*} C_{\Psi}^{-1}\int_{\partial Q_1(0)}\lvert h\rvert^p\, d\mathscr{H}^{n-1}\le\int_{\mathbb{S}^{n-1}}\lvert\Psi^*h\rvert^p\, d\mathscr{H}^{n-1}\le C_{\Psi}\int_{\partial Q_1(0)}\lvert h\rvert^p\, d\mathscr{H}^{n-1}, \end{align*} with $C_{\Psi}:=\left(\max\{\lVert d\Psi\rVert_{L^{\infty}},\lVert d\Psi^{-1}\rVert_{L^{\infty}}\}\right)^{(n-1)(p-1)}$, and \begin{align*} \int_{\mathbb{S}^{n-1}}\Psi^*h=\int_{\partial Q_1(0)}h.
\end{align*} \end{rem} Thus, the functions $d_{\Psi},\tilde d_{\Psi}:Y\times Y\rightarrow[0,+\infty)$ given by \begin{align*} d_{\Psi}(h_1,h_2)&:=d(\Psi^*h_1,\Psi^*h_2) \qquad\forall\,h_1,h_2\in Y,\\ \tilde d_{\Psi}(h_1,h_2)&:=\tilde d(\Psi^*h_1,\Psi^*h_2) \qquad\forall\,h_1,h_2\in Y, \end{align*} are well-defined and coincide on $Y\times Y$ by Remarks \ref{remark the distances are finite} and \ref{remark equivalence distances}. Moreover, $(Y,d_{\Psi})$ is a metric space as a direct consequence of Proposition \ref{d is a metric}, and the following statement is a corollary of Proposition \ref{d metrizes the weak topology on bounded subsets}. \begin{cor} \label{d_Psi metrizes the weak topology on bounded subsets} Let $\{h_k\}_{k\in\mathbb{N}}\subset Y$ and $h\in Y$. Then, the following are equivalent: \begin{enumerate} \item $\{h_k\}_{k\in\mathbb{N}}\subset Y$ is uniformly bounded w.r.t. the $L^p$-norm and $d_{\Psi}(h_k,h)\rightarrow 0$ as $k\rightarrow\infty$; \item $h_k\rightharpoonup h$ weakly in $L^p$ as $k\rightarrow\infty$. \end{enumerate} \end{cor} \begin{rem} Let $\Psi_1,\Psi_2:\mathbb{S}^{n-1}\rightarrow\partial Q_1(0)$ be bi-Lipschitz homeomorphisms. We claim that the distances $d_{\Psi_1}$ and $d_{\Psi_2}$ induced on $Y$ by $\Psi_1$ and $\Psi_2$ respectively are equivalent. Indeed, notice that given any bi-Lipschitz map $\Lambda:\mathbb{S}^{n-1}\rightarrow\mathbb{S}^{n-1}$ we have \begin{align} \label{estimate Psi} d(\Lambda^*h_1,\Lambda^*h_2)\le\lVert d\Lambda\rVert_{L^{\infty}}^{n-2}\lVert d\Lambda^{-1}\rVert_{L^{\infty}}^{\frac{n-1}{p}}d(h_1,h_2), \qquad\forall\, h_1,h_2\in Z.
\end{align} To see this, notice that if $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$ is a competitor in the definition of $d(h_1,h_2)$, then the form given by $(-1)^{n-2}\ast\Lambda^*(\ast\alpha)\in\Omega_p^1(\mathbb{S}^{n-1})$ is a competitor in the definition of $d(\Lambda^*h_1,\Lambda^*h_2)$. Hence \begin{align*} d(\Lambda^*h_1,\Lambda^* h_2)\le\lVert\ast\Lambda^*(\ast\alpha)\rVert_{L^p}\le\lVert d\Lambda\rVert_{L^{\infty}}^{n-2}\lVert d\Lambda^{-1}\rVert_{L^{\infty}}^{\frac{n-1}{p}}\lVert\alpha\rVert_{L^p}, \end{align*} for every competitor $\alpha$ in the definition of $d(h_1,h_2)$. By taking the infimum over all competitors in the previous inequality, \eqref{estimate Psi} follows. By applying \eqref{estimate Psi} we obtain \begin{align*} d_{\Psi_2}(h_1,h_2)&=d(\Psi_2^*h_1,\Psi_2^*h_2)=d\big((\Psi_1^{-1}\circ\Psi_2)^*\Psi_1^*h_1,(\Psi_1^{-1}\circ\Psi_2)^*\Psi_1^*h_2\big)\\ &\le C_{\Psi_1\Psi_2}d(\Psi_1^*h_1,\Psi_1^*h_2)=C_{\Psi_1\Psi_2}d_{\Psi_1}(h_1,h_2) \qquad\forall\,h_1,h_2\in Y. \end{align*} Analogously, we get \begin{align*} d_{\Psi_1}(h_1,h_2)&\le C_{\Psi_1\Psi_2}d_{\Psi_2}(h_1,h_2) \qquad\forall\,h_1,h_2\in Y, \end{align*} with $C_{\Psi_1\Psi_2}:=\max\left\{\lVert d(\Psi_2^{-1}\circ\Psi_1)\rVert_{L^{\infty}}, \lVert d(\Psi_1^{-1}\circ\Psi_2)\rVert_{L^{\infty}}\right\}^{n-2+\frac{n-1}{p}}$. \end{rem} \subsection{Slice functions and their properties} \begin{dfn}[Slice functions] Let $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$. Given any arbitrary $x_0\in Q_1(0)$, we let $\rho_0:=2\dist_{\infty}(x_0,\partial Q_1(0))$.
We call the \textit{slice function of $F$ at $x_0$} the map $s:\dom(s)\subset (0,\rho_0)\rightarrow Y$ given by \begin{align*} s(\rho):=(x\mapsto\rho x+x_0)^*i_{\partial Q_{\rho}(x_0)}^*F, \qquad\forall\, \rho\in\dom(s), \end{align*} where $\dom(s)$ is the subset of $(0,\rho_0)$ defined as follows: $\rho\in\dom(s)$ if and only if the following conditions hold: \begin{enumerate} \item $\mathscr{H}^{n-1}$-a.e. point in $\partial Q_{\rho}(x_0)$ is a Lebesgue point for $F$, \item $\lvert F\rvert\in L^p(\partial Q_{\rho}(x_0),\mathscr{H}^{n-1})$, \item $\rho$ is a Lebesgue point for the $L^p$-function \begin{align*} (0,\rho_0)\ni\rho\mapsto\int_{\partial Q_{\rho}(x_0)}i_{\partial Q_{\rho}(x_0)}^*F, \end{align*} \item $(x\mapsto\rho x+x_0)^*i_{\partial Q_{\rho}(x_0)}^*F\in Y$. \end{enumerate} \end{dfn} \begin{rem}\label{Remark: Lp norm of slice function} Notice that $\dom(s)$ has full $\mathcal{L}^1$-measure in $(0,\rho_0)$.
Moreover $s\in L^p\big((0,\rho_0);Y\big)$, in the following sense: letting $j_\rho: x\mapsto \rho x+x_0$, we have \begin{align*} \lVert s(\rho)\rVert_{L^p}^p=\int_{\partial Q_1(0)}\lvert j_\rho^\ast F\rvert^p \,d\mathscr{H}^{n-1}=\int_{\partial Q_\rho(x_0)}\lvert F\rvert^p\rho^{(n-1)(p-1)}\,d\mathscr{H}^{n-1} \end{align*} and thus \begin{align*} \int_0^{\rho_0}\lvert\lvert s(\rho)\rvert\rvert_{L^p}^p\, d\rho\leq \int_0^{\rho_0}\int_{\partial Q_\rho(x_0)}\lvert F\rvert^p \rho^{(n-1)(p-1)}\, d \mathscr{H}^{n-1}d\rho\leq 2^n\int_{Q_{\rho_0}(x_0)}\lvert F\rvert^p \,d\mathscr{H}^n. \end{align*} \end{rem} \begin{prop} \label{the slice functions are (1-1/p)-holder continuous} Let $x_0\in Q_1(0)$ and set $\rho_0:=2\dist_{\infty}(x_0,\partial Q_1(0))$. Fix any $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ and let $s\in L^p\big((0,\rho_0),Y\big)$ be the slice function of $F$ at $x_0$. Let $K\subset (0,\rho_0)$ be compact.
Then there exist a subset $E\subset K$ with $\mathcal{L}^1(K\smallsetminus E)=0$ and a representative $\tilde s$ of $s$, defined pointwise on $E$, such that \begin{align} \label{holderianity estimate} d_{\Psi}\big(\tilde s(\rho_1),\tilde s(\rho_2)\big)\le C_{p,K,\Psi}\lVert F\rVert_{L^p}\lvert\rho_1-\rho_2\rvert^{\frac{1}{p'}}, \qquad\forall\,\rho_1,\rho_2\in E, \end{align} with \begin{align*} C_{p,K,\Psi}:=C_{p,\Psi}\max_{\rho\in K}\rho^{1-n}. \end{align*} \begin{proof} Denote by $T_F\in\mathcal{D}_1(Q_1(0))$ the $1$-current on $Q_1(0)$ given by \begin{align*} \langle T_F,\omega\rangle=\int_{Q_1(0)}F\wedge\omega, \qquad\forall\,\omega\in\mathcal{D}^1(Q_1(0)). \end{align*} Since $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$, by Theorem \ref{Theorem: characterization of the integer valued fluxes class} there exists $I\in\mathcal{R}_1(Q_1(0))$ such that $\mathbb{M}(I)<+\infty$ and $\ast d F=\partial I$. By definition of integral $1$-current, there exist a locally $1$-rectifiable set $\Gamma\subset Q_1(0)$, a Borel measurable unit vector field $\vec I$ on $\Gamma$ and a positive, $\mathbb{Z}$-valued, $\mathscr{H}^1\res\Gamma$-integrable function $\theta\in L^1(\Gamma,\mathscr{H}^1)$ such that \begin{align*} \langle I,\omega\rangle=\int_{\Gamma}\theta\langle\omega,\vec I\rangle\, d\mathscr{H}^1, \qquad\forall\,\omega\in\mathcal{D}^1(Q_1(0)). \end{align*} By the coarea formula, there exists $G\subset K$ such that $\mathcal{L}^1(K\smallsetminus G)=0$ and such that $\Gamma\cap\partial Q_{\rho}(x_0)$ is a finite set for every $\rho\in G$.
Let $\Psi:\mathbb{S}^{n-1}\to\partial Q_1(0)$ be the bi-Lipschitz map given by \begin{align*} \Psi(x):=\frac{x}{2\lVert x\rVert_{\infty}}, \qquad\forall\, x\in\mathbb{S}^{n-1}. \end{align*} Consider the map $\Phi:\mathbb{S}^{n-1}\times[0,\rho_0]\rightarrow\im(\Phi)=\overline{Q_{\rho_0}(x_0)}\subset \overline{Q_1(0)}$ given by \begin{align*} \Phi(y,t):=x_0+t\Psi(y), \qquad\forall\,(y,t)\in\mathbb{S}^{n-1}\times[0,\rho_0]. \end{align*} Notice that $\Phi\big\vert_{\mathbb{S}^{n-1}\times [\rho_1, \rho_2]}$ is a bi-Lipschitz homeomorphism onto its image for every $\rho_1,\rho_2\in (0,1)$ with $\rho_1\leq\rho_2$. We claim that estimate \eqref{holderianity estimate} holds on a full-measure subset of $G$. Indeed, fix any $\rho_1,\rho_2\in G$. Without loss of generality, assume that $\rho_2>\rho_1$. Let $\hat\Phi:=\Phi\big\vert_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}$. Define $\pi:=\text{pr}_1\circ\Phi^{-1}:\overline{Q_{\rho_0}(x_0)}\rightarrow\mathbb{S}^{n-1}$, where $\text{pr}_1:\mathbb{S}^{n-1}\times [0,\rho_0]\rightarrow\mathbb{S}^{n-1}$ is the canonical projection on the first factor, and notice that $\pi$ is a Lipschitz and proper map.
Then, $\pi_*\big(T_F\res\im(\hat\Phi)\big)\in\mathcal{D}_1(\mathbb{S}^{n-1})$ can be expressed as follows: for any $\omega\in\Omega^1(\mathbb{S}^{n-1})$, \begin{align*} \langle\pi_*\big(T_F\res\im(\hat\Phi)\big),\omega\rangle&=\langle T_F\res\im(\hat\Phi),\pi^*\omega\rangle=\int_{\im(\hat\Phi)}F\wedge\pi^*\omega\\ &=\int_{\im(\hat\Phi)}(\Phi^{-1})^*(\Phi^*F\wedge \Phi^*\pi^*\omega)\\ &=\int_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}\Phi^*F\wedge\text{pr}_1^*\omega\\ &=\int_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}\text{pr}_1^*\omega\wedge *(*\Phi^*F)\\ &=\int_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}\langle\text{pr}_1^*\omega,*\Phi^*F\rangle\, d\vol_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}(y,t)\\ &=\int_{\rho_1}^{\rho_2}\bigg(\int_{\mathbb{S}^{n-1}}\langle\text{pr}_1^*\omega,*\Phi^*F\rangle\, d\vol_{\mathbb{S}^{n-1}}(y)\bigg)\, dt\\ &=\int_{\rho_1}^{\rho_2}\bigg(\int_{\mathbb{S}^{n-1}}\langle\omega,i_{\mathbb{S}^{n-1}\times\{t\}}^**\Phi^*F\rangle\, d\vol_{\mathbb{S}^{n-1}}(y)\bigg)\, dt\\ &=\int_{\rho_1}^{\rho_2}\bigg(\int_{\mathbb{S}^{n-1}}\omega\wedge\big(*i_{\mathbb{S}^{n-1}\times\{t\}}^**\Phi^*F\big)\bigg)\, dt\\ &=\int_{\mathbb{S}^{n-1}}\omega\wedge\bigg(\int_{\rho_1}^{\rho_2}\big(*i_{\mathbb{S}^{n-1}\times\{t\}}^**\Phi^*F\big)\, dt\bigg)\\ &=(-1)^{n-1}\int_{\mathbb{S}^{n-1}}\omega\wedge\alpha, \end{align*} where \begin{align*} \alpha:=(-1)^{n-1}\int_{\rho_1}^{\rho_2}\big(*i_{\mathbb{S}^{n-1}\times\{t\}}^**\Phi^*F\big)\, dt\in\Omega_p^{n-2}(\mathbb{S}^{n-1}).
\end{align*} In particular, \begin{align*} \langle\partial\pi_*\big(T_F\res\im(\hat\Phi)\big),\varphi\rangle&=\langle\pi_*\big(T_F\res\im(\hat\Phi)\big),d\varphi\rangle\\ &=(-1)^{n-1}\int_{\mathbb{S}^{n-1}}d\varphi\wedge\alpha=\langle d^*(*\alpha),\varphi\rangle, \qquad\forall\,\varphi\in C^{\infty}(\mathbb{S}^{n-1}). \end{align*} Recall that the restriction of an integral current to a measurable set is still an integral current. Moreover, the push-forward of an integral current through a Lipschitz and proper map remains an integral current (see \cite[Chapter 7, \S 7.5]{krantz_parks-geometric_integration_theory}). Then, $\tilde I:=-\pi_*\big(I\res\im(\hat\Phi)\big)\in\mathcal{R}_1(\mathbb{S}^{n-1})$.\\ So far, we have shown that \begin{align*} \partial\pi_*\big((T_F-I)\res\im(\hat\Phi)\big)=\partial\pi_*\big(T_F\res\im(\hat\Phi)\big)-\partial\pi_*\big(I\res\im(\hat\Phi)\big)=d^*(*\alpha)+\partial\tilde I. \end{align*} Let $\zeta\in C^\infty_c((-1,1))$ be such that $\displaystyle{\int_\mathbb{R}\zeta=1}$. For any $\varepsilon\in (0,\min\{\rho_1,\rho_0-\rho_2\})$ set $\displaystyle{\zeta_\varepsilon=\frac{1}{\varepsilon}\zeta\left(\frac{\cdot}{\varepsilon}\right)}$ and let $\chi_\varepsilon$ be the unique solution of \begin{align*} \begin{cases} \chi_\varepsilon'(x)=\zeta_\varepsilon(x-\rho_1)-\zeta_\varepsilon(x-\rho_2)\\ \chi_\varepsilon(0)=0. \end{cases} \end{align*} Let $\psi\in C^\infty(\mathbb{S}^{n-1}\times[0,\rho_0])$ and let $\text{pr}_2:\mathbb{S}^{n-1}\times[0,\rho_0]\to[0,\rho_0]$ be the projection on the second factor.
We compute \begin{align*} \langle(\Phi^{-1})_\ast I, \psi\,d(\chi_\varepsilon\circ \text{pr}_2)\rangle&=\int_{\Phi^{-1}(\Gamma)}\theta_{(\Phi^{-1})_\ast I}\psi(\chi_\varepsilon'\circ \text{pr}_2)\langle d \text{pr}_2, \vec I_{(\Phi^{-1})_\ast I}\rangle\,d\mathscr{H}^1\\ &=\int_{\rho_1-\varepsilon}^{\rho_1+\varepsilon}\zeta_\varepsilon(t-\rho_1)\bigg(\int_{\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{t\})}\psi \tilde \theta\,d\mathscr{H}^0\bigg)\,d\mathcal{L}^1(t)\\ &\quad-\int_{\rho_2-\varepsilon}^{\rho_2+\varepsilon}\zeta_\varepsilon(t-\rho_2)\bigg(\int_{\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{t\})}\psi \tilde \theta\,d\mathscr{H}^0\bigg)\,d\mathcal{L}^1(t), \end{align*} with $\tilde{\theta}=\theta_{(\Phi^{-1})_\ast I}\,\text{sgn}\big(\langle d \text{pr}_2, \vec I_{(\Phi^{-1})_\ast I}\rangle\big)\in L^1(\Phi^{-1}(\Gamma),\mathbb{Z})$. Moreover, \begin{align*} \langle (\Phi^{-1})_\ast T_F, \psi\,d(\chi_\varepsilon\circ \text{pr}_2)\rangle&=\int_{\mathbb{S}^{n-1}\times [0,\rho_0]}\psi(\chi_\varepsilon'\circ\text{pr}_2){\Phi}^\ast F\wedge\,d\text{pr}_2\\ &=\int_{\rho_1-\varepsilon}^{\rho_1+\varepsilon}\zeta_\varepsilon(t-\rho_1)\left(\int_{\mathbb{S}^{n-1}\times\{t\}}\psi {(\Phi\big\vert_{\mathbb{S}^{n-1}\times\{t\}})}^\ast F\right)\,d\mathcal{L}^1(t)\\ &\quad-\int_{\rho_2-\varepsilon}^{\rho_2+\varepsilon}\zeta_\varepsilon(t-\rho_2)\left(\int_{\mathbb{S}^{n-1}\times\{t\}}\psi {(\Phi\big\vert_{\mathbb{S}^{n-1}\times\{t\}})}^\ast F\right)\,d\mathcal{L}^1(t).
\end{align*} Now observe that \begin{align*} \langle (\Phi^{-1})_\ast (T_F-I), (\chi_\varepsilon\circ \text{pr}_2)\,d\psi\rangle\to\langle \big((\Phi^{-1})_\ast (T_F-I)\big)\res (\mathbb{S}^{n-1}\times[\rho_1,\rho_2]), d\psi\rangle \end{align*} as $\varepsilon\to 0^+$, by dominated convergence. On the other hand, since $\partial (T_F-I)=0$, we have \begin{align*} \langle(\Phi^{-1})_\ast(T_F-I),(\chi_\varepsilon\circ\text{pr}_2)\,d\psi\rangle=\langle (\Phi^{-1})_\ast I,\psi\,d(\chi_\varepsilon\circ\text{pr}_2)\rangle-\langle (\Phi^{-1})_\ast T_F,\psi\,d(\chi_\varepsilon\circ\text{pr}_2)\rangle. \end{align*} Therefore, for almost every $\rho_1,\rho_2\in (0,\rho_0)$ (depending on $\psi$) we have \begin{align}\label{Equation: boundary of pushforward} \nonumber \langle \partial \big(({\Phi^{-1}})_\ast (T_F-I)\res (\mathbb{S}^{n-1}\times[\rho_1,\rho_2])\big),\psi\rangle&=\int_{\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{\rho_1\})}\psi \tilde \theta\,d\mathscr{H}^0\\ &\quad-\int_{\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{\rho_2\})}\psi \tilde \theta\,d\mathscr{H}^0\\ \nonumber &\quad-\int_{\mathbb{S}^{n-1}\times\{\rho_1\}}\psi {(\Phi\big\vert_{\mathbb{S}^{n-1}\times\{\rho_1\}})}^\ast F\\ \nonumber &\quad+\int_{\mathbb{S}^{n-1}\times\{\rho_2\}}\psi {(\Phi\big\vert_{\mathbb{S}^{n-1}\times\{\rho_2\}})}^\ast F. \end{align} Now let $\{\psi_k\}_{k\in \mathbb{N}}\subset C^\infty(\mathbb{S}^{n-1}\times[0,\rho_0])$ be a countable sequence which is dense in $C^1(\mathbb{S}^{n-1}\times [0,\rho_0])$.
For every $k\in\mathbb{N}$, let $E_k\subset G$ be the set such that \eqref{Equation: boundary of pushforward} holds with $\psi=\psi_k$ (i.e.\ the set of those $\rho\in G$ which are ``$\zeta_\varepsilon$-Lebesgue points'' of the integrands in \eqref{Equation: boundary of pushforward} with $\psi=\psi_k$), and define \begin{align*} E:=\bigcap_{k\in\mathbb{N}} E_k. \end{align*} Then $\mathcal{L}^1(E)=\mathcal{L}^1(K)$ and for every $\rho_1,\rho_2\in E$ identity \eqref{Equation: boundary of pushforward} holds with $\psi=\psi_k$ for every $k\in\mathbb{N}$. By density of $\{\psi_k\}_{k\in \mathbb{N}}$ in $C^1(\mathbb{S}^{n-1}\times [0,\rho_0])$, we can pass to the limit in \eqref{Equation: boundary of pushforward} and conclude that for any given pair of parameters $\rho_1,\rho_2\in E$ the identity holds for every $\psi\in C^{\infty}(\mathbb{S}^{n-1}\times [0,\rho_0])$. In particular, for every $\rho_1, \rho_2\in E$ and every $\varphi\in C^\infty(\mathbb{S}^{n-1})$ we have \begin{align*} \langle \partial\pi_\ast \big((T_F-I)\res{\im(\hat{\Phi})}\big),\varphi\rangle&=\sum_{x\in \Gamma_{\rho_1}}\tilde \theta(x,\rho_1)\varphi(x)-\sum_{x\in \Gamma_{\rho_2}}\tilde \theta(x,\rho_2)\varphi(x)\\ &\quad-\int_{\mathbb{S}^{n-1}} \varphi\,\Psi^\ast s(\rho_1)+\int_{\mathbb{S}^{n-1}}\varphi\,\Psi^\ast s(\rho_2), \end{align*} where \begin{align*} \Gamma_{\rho_1}&:=\text{pr}_1(\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{\rho_1\}))\subset\mathbb{S}^{n-1},\\ \Gamma_{\rho_2}&:=\text{pr}_1(\Phi^{-1}(\Gamma)\cap(\mathbb{S}^{n-1}\times\{\rho_2\}))\subset \mathbb{S}^{n-1} \end{align*} are finite sets for every
$\rho_1,\rho_2\in G$. Gathering together what we have proved so far, we obtain \begin{align*} \ast\big(\Psi^*s(\rho_2)-\Psi^*s(\rho_1)\big)=d^*(*\alpha)+\partial I'+\bigg(\int_{\mathbb{S}^{n-1}}\Psi^*s(\rho_2)-\Psi^*s(\rho_1)\bigg)\delta_q, \end{align*} where $I'\in\mathcal{R}_1(\mathbb{S}^{n-1})$ is any rectifiable $1$-current of finite mass such that \begin{align*} \partial I'= \sum_{x\in \Gamma_{\rho_2}}\tilde \theta(x,\rho_2)\delta_x-\sum_{x\in \Gamma_{\rho_1}}\tilde \theta(x,\rho_1)\delta_x+\partial \tilde I+ \left(\sum_{x\in\Gamma_{\rho_1}}\tilde \theta(x,\rho_1)-\sum_{x\in\Gamma_{\rho_2}}\tilde \theta(x,\rho_2)\right)\delta_q, \end{align*} i.e.\ $*\alpha$ is a competitor in the definition of $d\big(\Psi^*s(\rho_2),\Psi^*s(\rho_1)\big)$. Hence, in order to estimate $d\big(\Psi^*s(\rho_2),\Psi^*s(\rho_1)\big)$ we just need to find an upper bound for $\lVert\ast\alpha\rVert_{L^p}$.\\ Notice that $\lvert d\Phi\rvert\leq t\lvert d\Psi\rvert+\frac{\sqrt{n}}{2}$. Moreover, since \begin{align*} \Phi^{-1}(x)=\left(\Psi^{-1}\left(\frac{x-x_0}{2\lVert x-x_0\rVert_\infty}\right), 2\lVert x-x_0\rVert_\infty\right), \end{align*} we have $\lvert J\Phi^{-1}(x)\rvert\leq 2^{-(n-2)}\left(\frac{\lVert d\Psi^{-1}\rVert_{L^\infty}}{\lVert x-x_0\rVert_\infty}\right)^{n-1}$.
Therefore \begin{align*} \lVert \ast \alpha\rVert_{L^p}^p&\leq\int_{\mathbb{S}^{n-1}}\left\lvert\int_{\rho_1}^{\rho_2}\lvert \Phi^\ast F \rvert\, dt\right\rvert^p d\mathscr{H}^{n-1}\leq \lvert \rho_1-\rho_2\rvert^\frac{p}{p'}\int_{\mathbb{S}^{n-1}\times [\rho_1,\rho_2]}\lvert\Phi^\ast F\rvert^p\,d\mathscr{H}^{n-1}dt\\ &\leq \left(2^{-(n-2)}\left(\lVert d\Psi\rVert_{L^\infty}+\frac{\sqrt{n}}{2}\right)^{p(n-1)}\frac{\lVert d\Psi^{-1}\rVert_{L^\infty}^{n-1}}{\rho_1^{n-1}}\right)\lvert \rho_1-\rho_2\rvert^\frac{p}{p'}\lVert F\rVert^p_{L^p(Q_1(0))} \end{align*} and our claim follows. \end{proof} \end{prop} \subsection{Proof of Theorem \ref{Theorem: weak closure} for $Q^n_1(0)$} For the proof of Theorem \ref{Theorem: weak closure} we need two technical lemmata. \begin{lem} \label{useful lemma 1} Let $\{f_k\}_{k\in\mathbb{N}}\subset L^1(0,1)$ be such that $\lVert f_k\rVert_{L^1}\le C$ for every $k\in \mathbb{N}$. Then there exists a sequence $\{W_h\}_{h\in\mathbb{N}_{\geq 2}}$ of compact subsets of $(0,1)$ such that for every $h\in\mathbb{N}_{\geq 2}$ the following properties hold: \begin{enumerate} \item $\displaystyle{\mathcal{L}^1(W_h)= 1-\frac{C+2}{h}}$; \item $W_h\subset (1/h,1)$; \item for almost every $\rho\in W_h$ and every $k\in\mathbb{N}$ there exists $k'>k$ such that $|f_{k'}(\rho)|\le h$. \end{enumerate} \begin{proof} Let $h\in\mathbb{N}_{\geq 2}$. For any $l\in \mathbb{N}$ let \begin{align*} A^h_l:=\bigcap_{k=l}^\infty f_k^{-1}([-h,h]^c). \end{align*} Notice that $A^h_l\subset A_{l+1}^h$ for any $l\in \mathbb{N}$, and set \begin{align*} I_h=\bigcup_{l=1}^\infty A_l^h.
\end{align*} Let $m\in\mathbb{N}$ and let $k\ge m$. Notice that \begin{align*} C\ge\int_0^1|f_k(\rho)|\, d\rho\ge\int_{A_m^h}|f_k(\rho)|\, d\rho>h\mathcal{L}^1(A_m^h). \end{align*} By letting $m\rightarrow\infty$ in the previous inequality, we obtain \begin{align*} \mathcal{L}^1(I_h)\leq\frac{C}{h}. \end{align*} Then, defining $E_h:=I_h^c\cap(1/h,1)$, we clearly get \begin{align*} \mathcal{L}^1(E_h)=\mathcal{L}^1((I_h\cup (0,1/h])^c)=1-\mathcal{L}^1(I_h\cup (0,1/h])\geq 1-\frac{C+1}{h}. \end{align*} Moreover, $E_h$ and any of its subsets satisfy properties 2 and 3. Finally, since $E_h$ is measurable, we can find a compact set $W_h\subset E_h$ such that \begin{align*} \mathcal{L}^1(W_h)=1-\frac{C+2}{h}. \end{align*} By construction, $W_h$ satisfies properties 1, 2 and 3. \end{proof} \end{lem} Since we do not know whether the space $(Y, d)$ is complete, we will also need the following lemma. \begin{lem} \label{useful lemma 2} Let $K\subset [0,1]$ be compact and let $S\subset K$ be dense and countable. Let $\{f_k\}_{k\in\mathbb{N}}\subset C^0(K,Y)$ be such that \begin{enumerate} \item $\{f_k\}_{k\in\mathbb{N}}$ is uniformly Cauchy from $K$ to $(Y,d)$ (i.e.\ it is a Cauchy sequence with respect to uniform convergence); \item for some $C>0$, \begin{align*} \sup_{\rho\in S}\sup_{k\in\mathbb{N}}\lVert f_k(\rho)\rVert_{L^p}\le C; \end{align*} \item for some $A>0$ and some $\alpha\in(0,1]$, we have \begin{align*} d\big(f_k(\rho),f_k(\rho')\big)\le A|\rho-\rho'|^{\alpha}, \qquad\forall\,\rho,\rho'\in S. \end{align*} \end{enumerate} Then there exists $f\in C^0(K,Y)$ such that $f_k\rightarrow f$ uniformly. \begin{proof} Fix any $\rho\in S$.
By hypothesis 2, $\{f_k(\rho)\}_{k\in\mathbb{N}}$ is bounded in $L^p$ and therefore has a subsequence converging weakly in $L^p$ to a limit $f(\rho)\in Y$ (recall that $Y$ is closed with respect to weak $L^p$-convergence). By Corollary \ref{d_Psi metrizes the weak topology on bounded subsets}, such a subsequence converges in $(Y,d)$ to the same limit $f(\rho)$. Since $\{f_k\}_{k\in\mathbb{N}}$ is uniformly Cauchy from $K$ to $(Y,d)$, the sequence $\{f_k(\rho)\}_{k\in\mathbb{N}}$ is Cauchy in $(Y,d)$; therefore $f_k(\rho)\xrightarrow{d}f(\rho)$ and, by Corollary \ref{d_Psi metrizes the weak topology on bounded subsets}, $f_k(\rho)\rightharpoonup f(\rho)$ weakly in $L^p$. Fix any $\rho\in K$ and let $\{\rho_i\}_{i\in\mathbb{N}}\subset S$ be such that $\rho_i\rightarrow\rho$. We claim that there exists $f_\rho\in Y$ such that $f(\rho_i)\to f_\rho$ with respect to $d$, and that $f_\rho$ does not depend on the choice of the sequence $\{\rho_i\}_{i\in\mathbb{N}}$. Indeed, by lower semicontinuity of the $L^p$-norm with respect to weak $L^p$-convergence, we have \begin{align*} \lVert f(\rho_i)\rVert_{L^p}&\le\liminf_{k\rightarrow\infty}\lVert f_k(\rho_i)\rVert_{L^p}\le C, \end{align*} for every $i\in\mathbb{N}$. Then there exist a subsequence $\{\rho_{i_j}\}_{j\in \mathbb{N}}$ and an $f_\rho\in L^p$ such that $f(\rho_{i_j})\rightharpoonup f_\rho$ weakly in $L^p$. Since $Y$ is weakly closed in $L^p$, we have $f_\rho\in Y$. By Corollary \ref{d_Psi metrizes the weak topology on bounded subsets} we also have $f(\rho_{i_j}) \to f_\rho$ with respect to $d$.\\ Relabel the subsequence as $\{\rho_i\}_{i\in \mathbb{N}}$.
To see that $f_\rho$ does not depend on the subsequence (and thus does not depend on $\{\rho_i\}_{i\in \mathbb{N}}$), assume that $\{\tilde \rho_i\}_{i\in \mathbb{N}}$ is another sequence in $S$ with $\tilde \rho_i\to \rho$ and $f(\tilde \rho_i)\to \tilde f_{\rho}$ with respect to $d$. To see that $f_\rho=\tilde f_{\rho}$, first notice that by hypothesis 3 and the triangle inequality we have \begin{align*} d\big(f(\rho_i),f(\tilde \rho_i)\big)&\le d\big(f(\rho_i),f_k(\rho_i)\big)+d\big(f_k(\rho_i),f_k(\tilde \rho_i)\big)+d\big(f_k(\tilde \rho_i),f(\tilde \rho_i)\big)\\ &\le d\big(f(\rho_i),f_k(\rho_i)\big)+A|\rho_i-\tilde \rho_i|^{\alpha}+d\big(f_k(\tilde \rho_i),f(\tilde \rho_i)\big) \end{align*} for every $i\in\mathbb{N}$. Hence, passing to the limit as $k\rightarrow\infty$ in the previous inequality, we get \begin{align*} d\big(f(\rho_i),f(\tilde \rho_i)\big)\le A|\rho_i-\tilde \rho_i|^{\alpha}. \end{align*} Thus we finally obtain \begin{align*} d(f_\rho,\tilde f_{\rho})&\le d\big(f_\rho,f(\rho_i)\big)+d\big(f(\rho_i),f(\tilde \rho_i)\big)+d\big(f(\tilde \rho_i),\tilde f_{\rho}\big)\\ &\le d\big(f_\rho,f(\rho_i)\big)+A|\rho_i-\tilde \rho_i|^{\alpha}+d\big(f(\tilde \rho_i),\tilde f_{\rho}\big) \end{align*} and, passing to the limit as $i\rightarrow\infty$, we get $d(f_\rho,\tilde f_{\rho})=0$, i.e.
$f_\rho=\tilde f_{\rho}$.\\ For any $\rho\in K$ let \begin{align*} f(\rho):=\lim_{i\rightarrow\infty}f(\rho_i), \end{align*} where $\{\rho_i\}_{i\in\mathbb{N}}\subset S$ is any sequence such that $\rho_i\rightarrow\rho$ and the limit is understood with respect to $d$. Then $f$ is a well-defined function on $K$.\\ To see that $f_k\to f$ uniformly, let $\varepsilon>0$ and let $N\in \mathbb{N}$ be such that for any $m,l\in \mathbb{N}_{\geq N}$ \begin{align*} \sup_{\rho\in S} d(f_m(\rho), f_l(\rho))<\varepsilon. \end{align*} Let $\rho\in K$ and let $\{\rho_i\}_{i\in \mathbb{N}}$ be a sequence in $S$ such that $\rho_i\to \rho$. Then for any $k\in \mathbb{N}_{\geq N}$ we have \begin{align*} d(f_k(\rho), f(\rho))=\lim_{i\to\infty} d(f_k(\rho_i), f(\rho_i))=\lim_{i\to\infty} \lim_{m\to\infty} d(f_k(\rho_i), f_m(\rho_i))\leq \varepsilon. \end{align*} This shows that $f$ is the uniform limit of $\{f_k\}_{k\in \mathbb{N}}$ on $K$ with respect to $d$. As $f_k\in C^0(K,Y)$ for every $k\in \mathbb{N}$, we conclude that $f\in C^0(K,Y)$ (this also follows directly from the construction of $f$). \end{proof} \end{lem} \begin{thm}[Weak closure for $Q_1^n(0)$]\label{weak closure for Q^n} Fix any $n\in\mathbb{N}_{\geq 2}$ and assume that $p\in\big(1,n/(n-1)\big)$. Then $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1^n(0))$ is weakly sequentially closed. \begin{proof} Assume that $F\in\Omega_p^{n-1}(Q_1(0))$ belongs to the weak $L^p$-closure of $\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$, i.e.\ there exists $\{F_k\}_{k\in\mathbb{N}}\subset\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ such that $F_k\xrightharpoonup{L^p} F$.
What we need to show is that $F\in\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$, which amounts to saying that \begin{align} \label{integer fluxes condition} \int_{\partial Q_{\rho}(x_0)}i_{\partial Q_{\rho}(x_0)}^*F\in\mathbb{Z}, \end{align} for every $x_0\in Q_1(0)$ and for a.e. $\rho\in\big(0,2\dist_{\infty}(x_0,\partial Q_1(0))\big)$. Without loss of generality, we will just show \eqref{integer fluxes condition} for $x_0=0$. \textbf{Step 1}. For any $k\in \mathbb{N}$ let $s_k$ be the slice function of $F_k$ at $0$. Fix any $h\in\mathbb{N}_{\geq 2}$ and let $W_h\subset (1/h,1)$ be the compact set given by applying Lemma \ref{useful lemma 1} with $f_k=\lVert s_k\rVert_{L^p}$ and $C=2^\frac{n}{p}\sup_{k\in \mathbb{N}}\lVert F_k\rVert_{L^p}$ (see Remark \ref{Remark: Lp norm of slice function}). Let $E_h^k\subset W_h$ denote the subset associated to $W_h$ and $s_k$ by Proposition \ref{the slice functions are (1-1/p)-holder continuous}, let $\tilde E_h:=\bigcap_{k\in\mathbb{N}}E_h^k$ and, for any $k\in \mathbb{N}$, let $s_k$ also denote its $\frac{1}{p'}$-H\"older continuous representative on $\tilde{E}_h$. By property 3 in Lemma \ref{useful lemma 1}, for almost every $\rho\in \tilde E_h$ we can find a subsequence $\{s_{k_{\rho}}(\rho)\}_{k_{\rho}\in\mathbb{N}}\subset\{s_k(\rho)\}_{k\in\mathbb{N}}$ which is uniformly bounded in $L^p$ by $h$. Denote by $E_h$ the set of all such $\rho$ and observe that $\mathcal{L}^1({E_h})=\mathcal{L}^1(W_h)=1-\frac{C+2}{h}$.
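The choice of $C$ is legitimate: by H\"older's inequality on $(0,1)$ together with Remark \ref{Remark: Lp norm of slice function} (applied with $x_0=0$, so that $\rho_0=1$), we have
\begin{align*}
\lVert f_k\rVert_{L^1(0,1)}=\int_0^1\lVert s_k(\rho)\rVert_{L^p}\, d\rho\leq\bigg(\int_0^1\lVert s_k(\rho)\rVert_{L^p}^p\, d\rho\bigg)^{\frac{1}{p}}\leq 2^{\frac{n}{p}}\lVert F_k\rVert_{L^p(Q_1(0))},
\end{align*}
so the sequence $\{f_k\}_{k\in\mathbb{N}}$ indeed satisfies the hypothesis of Lemma \ref{useful lemma 1}.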
Then for any $\rho\in E_h$, $\{s_{k_{\rho}}(\rho)\}_{k_{\rho}\in\mathbb{N}}$ has a subsequence that converges weakly in $L^p$ or, equivalently, with respect to $d_\Psi$ (see Corollary \ref{d_Psi metrizes the weak topology on bounded subsets}).\\ Let $S_h\subset E_h$ be a countable dense subset. By a diagonal extraction argument, we find a subsequence $\{s_{k_l}\}_{l\in\mathbb{N}}$ such that, for every $\rho\in S_h$, $\{s_{k_l}(\rho)\}_{l\in\mathbb{N}}$ converges with respect to $d_\Psi$ and weakly in $L^p$, and is uniformly bounded in $L^p$ by $h$. \textbf{Step 2}. Next we claim that for any $l\in \mathbb{N}$, $s_{k_l}$ can be extended to a $\frac{1}{p'}$-H\"older continuous function on $\overline{E_h}$ (with the same H\"older constant, which is bounded uniformly in $l$).\\ Indeed, let $f\in \{s_{k_l}\}_{l\in \mathbb{N}}$, let $\rho\in \overline{E_h}\smallsetminus E_h$ and let $\{\rho_i\}_{i\in \mathbb{N}}$ be a sequence in $S_h$ such that $\rho_i\to \rho$ as $i\to \infty$ (such a sequence exists, since $S_h$ is dense in $E_h$, which in turn is dense in $\overline{E_h}$). Since $\{f(\rho_i)\}_{i\in\mathbb{N}}\subset Y$ is uniformly bounded in $L^p$, by weak $L^p$-compactness there exists a subsequence $\{f(\rho_{i_j})\}_{j\in\mathbb{N}}$ of $\{f(\rho_i)\}_{i\in\mathbb{N}}$ such that $f(\rho_{i_j})\rightharpoonup f_{\rho}$ weakly in $L^p$ for some $f_{\rho}\in Y$. By Corollary \ref{d_Psi metrizes the weak topology on bounded subsets}, we know that $d_{\Psi}\big(f(\rho_{i_j}),f_{\rho}\big)\rightarrow 0$. Since $f$ is $\frac{1}{p'}$-H\"older continuous on $E_h$, $f_\rho$ does not depend on the sequence $\{\rho_i\}_{i\in \mathbb{N}}$.
Hence, the function \begin{align*} \tilde f(\rho):=\begin{cases}f(\rho) & \mbox{ if } \rho\in E_h\\ f_{\rho} & \mbox{ if }\rho\in \overline{E_h}\smallsetminus E_h\end{cases} \end{align*} is well-defined on $\overline{E_h}$ and satisfies \eqref{holderianity estimate} on $\overline{E_h}$.\\ In what follows, in order to simplify the notation, we will again denote by $s_{k_l}$ the $\frac{1}{p'}$-H\"older continuous extension of $s_{k_l}$ to $\overline{E_h}$, for any $l\in \mathbb{N}$. \textbf{Step 3}. We show that $\{s_{k_l}\}_{l\in\mathbb{N}}$ converges uniformly on $\overline{E_h}$ to some $s\in C^0(\overline{E_h},Y)$.\\ Fix any $\varepsilon>0$. By Step 2 we know that the sequence $\{s_{k_l}\}_{l\in\mathbb{N}}$ is equicontinuous from $\overline{E_h}$ to $(Y,d_{\Psi})$. Therefore we can choose $\delta>0$ such that \begin{align*} d_{\Psi}\big(s_{k_l}(\rho),s_{k_l}(\rho')\big)<\varepsilon, \qquad\forall\,\rho,\rho'\in \overline{E_h} \mbox{ s.t. } \lvert\rho-\rho'\rvert<\delta \mbox{ and } \forall\,l\in\mathbb{N}. \end{align*} Notice that $\{(\rho-\delta,\rho+\delta)\}_{\rho\in S_h}$ is an open cover of $\overline{E_h}$. Since $\overline{E_h}$ is compact, we can find a finite set $\{\rho_1,...,\rho_m\}\subset S_h$ such that $\{(\rho_j-\delta,\rho_j+\delta)\}_{j=1,...,m}$ is a finite open cover of $\overline{E_h}$. Now let $\rho\in \overline{E_h}$. There exists a point $\rho_j\in \{\rho_1,...,\rho_m\}$ such that $\rho\in (\rho_j-\delta,\rho_j+\delta)$, i.e.\ $|\rho-\rho_j|<\delta$. By our choice of $\delta$, this implies $d_{\Psi}\big(s_{k_l}(\rho),s_{k_l}(\rho_j)\big)<\varepsilon$ for every $l\in\mathbb{N}$.
By the triangle inequality, we have \begin{align*} d_{\Psi}\big(s_{k_l}(\rho),s_{k_m}(\rho)\big)&\le d_{\Psi}\big(s_{k_l}(\rho),s_{k_l}(\rho_j)\big)+d_{\Psi}\big(s_{k_l}(\rho_j),s_{k_m}(\rho_j)\big)+d_{\Psi}\big(s_{k_m}(\rho_j),s_{k_m}(\rho)\big)\\ &<2\varepsilon+d_{\Psi}\big(s_{k_l}(\rho_j),s_{k_m}(\rho_j)\big). \end{align*} But since $\rho_j\in S_h$, we know that there exists $L_j>0$ such that \begin{align*} d_{\Psi}\big(s_{k_l}(\rho_j),s_{k_m}(\rho_j)\big)<\varepsilon, \qquad\forall\, l,m\ge L_j. \end{align*} Hence, letting $\displaystyle{L:=\max_{j=1,...,m}{L_j}}$, we have \begin{align*} d_{\Psi}\big(s_{k_l}(\rho),s_{k_m}(\rho)\big)<3\varepsilon, \qquad\forall\, l,m\ge L,\ \forall\,\rho\in \overline{E_h}. \end{align*} We have thus proved that the sequence $\{s_{k_l}\}_{l\in\mathbb{N}}$ is uniformly Cauchy on $\overline{E_h}$. Since $\{s_{k_l}\}_{l\in\mathbb{N}}$ satisfies all the hypotheses of Lemma \ref{useful lemma 2}, there exists $s\in C^0(\overline{E_h},Y)$ such that $s_{k_l}\rightarrow s$ uniformly on $\overline{E_h}$ with respect to $d_\Psi$.\\ Notice that since $\lVert s_{k_l}(\rho)\rVert_{L^p}\leq h$ for any $l\in \mathbb{N}$ and any $\rho\in E_h$, and since $s_{k_l}(\rho)\to s(\rho)$ with respect to $d_\Psi$ for any $\rho\in \overline{E_h}$, by Corollary \ref{d_Psi metrizes the weak topology on bounded subsets} we have $s_{k_l}(\rho)\rightharpoonup s(\rho)$ in $L^p$ for any $\rho\in E_h$. \textbf{Step 4}. Let $s_0:E_h\rightarrow Y$ be the restriction to $E_h$ of the slice function of $F$ at $0$. We claim that $s=s_0$ a.e. in $E_h$.
To show this we will prove that \begin{align*} \int_{E_h}\varphi(\rho)\int_{\partial Q_1(0)}\psi\big(s_{k_l}(\rho)-s_0(\rho)\big)\, d\rho\rightarrow 0, \qquad\mbox{ as } l\rightarrow\infty, \end{align*} for every $\varphi\in C^{\infty}_c((0,1))$ and every $\psi\in\operatorname{Lip}(\partial Q_1(0))$. Indeed, an explicit computation gives \begin{align*} \int_{E_h}\varphi(\rho)&\int_{\partial Q_1(0)}\psi\big(s_{k_l}(\rho)-s_0(\rho)\big)\, d\rho\\ &=\int_{E_h}\varphi(\rho)\bigg(\int_{\partial Q_{\rho}(0)}\psi\bigg(\frac{\cdot}{\rho}\bigg)i_{\partial Q_{\rho}(0)}^*F_{k_l}-\int_{\partial Q_{\rho}(0)}\psi\bigg(\frac{\cdot}{\rho}\bigg)i_{\partial Q_{\rho}(0)}^*F\bigg)\, d\rho\\ &=2^n\int_{Q_1(0)}\mathbbm{1}_{E_h}(2r)\varphi(2|\cdot|)\psi\bigg(\frac{\cdot}{\lvert\,\cdot\,\rvert}\bigg)dr\wedge (F_{k_l}-F)\rightarrow 0\text{ as } l\to\infty, \end{align*} since \begin{align*} \mathbbm{1}_{E_h}(2r)\varphi(2|\cdot|)\psi\bigg(\frac{\cdot}{\lvert\,\cdot\,\rvert}\bigg)dr\in\Omega_{p'}^1(Q_1(0)) \end{align*} and $F_{k_l}\xrightharpoonup{L^p} F$. On the other hand, since $s_{k_l}(\rho)\rightharpoonup s(\rho)$ in $L^p$ for any $\rho\in E_h$, we have \begin{align*} \int_{E_h}\varphi(\rho)\int_{\partial Q_1(0)}\psi\big(s_{k_l}(\rho)-s(\rho)\big)\, d\rho\rightarrow 0 \qquad\mbox{ as } l\rightarrow\infty \end{align*} for every $\varphi\in C^{\infty}_c((0,1))$ and every $\psi\in\operatorname{Lip}(\partial Q_1(0))$. Therefore we obtain \begin{align*} \int_{E_h}\varphi(\rho)\int_{\partial Q_1(0)}\psi\big(s(\rho)-s_0(\rho)\big)\, d\rho=0, \qquad\forall\,\varphi\in C^{\infty}_c((0,1)),\forall\,\psi\in\operatorname{Lip}(\partial Q_1(0)).
\end{align*}
This means that $s_0(\rho)=s(\rho)\in Y$ for a.e. $\rho\in E_h$.

\textbf{Step 5}. Finally, we show that \eqref{integer fluxes condition} holds for almost every $\rho\in (0,1)$.\\
In fact, for any $\rho\in E_h$ such that $s_0(\rho)=s(\rho)$ we have
\begin{align*}
\int_{\partial Q_\rho(0)}i^\ast_{\partial Q_\rho(0)}F=\int_{\partial Q_1(0)}s_0(\rho)=\int_{\partial Q_1(0)}s(\rho)=\lim_{l\to\infty}\int_{\partial Q_1(0)}s_{k_l}(\rho)\in \mathbb{Z},
\end{align*}
since $s_{k_l}(\rho)\rightharpoonup s(\rho)$ in $L^p$. Thus \eqref{integer fluxes condition} holds for $\mathcal{L}^1$-a.e. $\rho\in E_h$. Since the previous step can be repeated for any $h\in \mathbb{N}_{\geq 2}$, and since
\begin{align*}
\lim_{h\to +\infty}\mathcal{L}^1(E_h)=\lim_{h\to +\infty}\bigg(1-\frac{C+1}{h}\bigg)=1,
\end{align*}
we conclude that \eqref{integer fluxes condition} holds for $\mathcal{L}^1$-a.e. $\rho\in (0,1)$.
\end{proof}
\end{thm}

\begin{rem}\label{Remark: weak closure for bounded domains}
Let $D\subset\mathbb{R}^n$ be any open and bounded domain which is bi-Lipschitz equivalent to $Q_1(0)$. From Theorem \ref{weak closure for Q^n} it follows that $\Omega_{p,\mathbb{Z}}^{n-1}(D)$ (see Definition \ref{Definition: integer valued fluxes on generic domain}) is a weakly sequentially closed subspace of $\Omega_p^{n-1}(D)$. Indeed, let $\varphi:Q_1(0)\rightarrow D$ be any bi-Lipschitz homeomorphism and let $\{F_k\}_{k\in\mathbb{N}}\subset\Omega_{p,\mathbb{Z}}^{n-1}(D)$ be such that $F_k\xrightharpoonup{L^p}F$ on $D$.
Then, by Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties} we have $\{\varphi^*F_k\}_{k\in\mathbb{N}}\subset\Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$ and, as $\varphi$ is bi-Lipschitz, $\varphi^*F_k\xrightharpoonup{L^p}\varphi^*F$ on $Q_1(0)$. By the Weak Closure Theorem (Theorem \ref{weak closure for Q^n}), $\varphi^*F\in \Omega_{p,\mathbb{Z}}^{n-1}(Q_1(0))$. Thus $F\in\Omega_{p,\mathbb{Z}}^{n-1}(D)$ (again by Lemma \ref{Lemma: bi-Lipschitz maps preserve approximability properties}). This shows that Theorem \ref{Theorem: weak closure} holds true.
\end{rem}

Observe that Theorem \ref{Theorem: weak closure} does not hold if $n=1$. In fact, in this case the following holds.

\begin{lem} \label{Lemma: weak closure for n=1}
Let $I$ be a bounded connected interval in $\mathbb{R}$ and let $p\in [1,\infty)$.\\
The weak closure of $L^p_\mathbb{Z}(I)$ in $L^p(I)$ is $L^p(I)$.
\end{lem}
\begin{proof}
Since $C^0(\overline{I})$ is dense in $L^p(I)$, it is enough to show that any function $f\in C^0(\overline{I})$ can be approximated weakly in $L^p(I)$ by functions in $L^p_\mathbb{Z}(I)$. Without loss of generality we can assume that $I=[0,1)$. For any $n\in \mathbb{N}$ we define $f_n: I\to \mathbb{R}$ as follows: for any $k\in \{1,...,2^n\}$ let $I_k^n:=[\frac{k-1}{2^n}, \frac{k}{2^n})$ and $c_k:=\fint_{I_k^n}f(x)\,dx$, and for any $x\in I_k^n$ set
\begin{align*}
f_n(x):=\begin{cases} \lceil c_k\rceil&\text{ if }x-\frac{k-1}{2^n}\leq \frac{c_k}{2^n\lceil c_k\rceil},\\ 0&\text{ otherwise}. \end{cases}
\end{align*}
Then $f_n\in L^p_\mathbb{Z}(I)$ and $\int_{I_k^n} f_n(x)\,dx=\int_{I_k^n}f(x)\,dx$ for any $k\in \{1,...,2^n\}$ and any $n\in \mathbb{N}$.\\
Moreover, notice that since $f$ is bounded, the sequence $(f_n)_{n\in \mathbb{N}}$ is bounded in $L^p(I)$.
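To fix ideas, consider the model case of the constant function $f\equiv\frac{1}{2}$ on $I=[0,1)$ (a worked instance of the construction above, not needed for the proof). Then $c_k=\frac{1}{2}$ and $\lceil c_k\rceil=1$ for every $k$, so that
\begin{align*}
f_n=\mathbbm{1}_{A_n}, \qquad A_n:=\bigcup_{k=1}^{2^n}\bigg[\frac{k-1}{2^n},\frac{k-1}{2^n}+\frac{1}{2^{n+1}}\bigg),
\end{align*}
i.e. $f_n$ equals $1$ on the left half of each dyadic interval $I_k^n$ and $0$ elsewhere. A direct computation shows that $f_n\rightharpoonup \frac{1}{2}$ in $L^p(I)$, while $\lvert f_n-\frac{1}{2}\rvert=\frac{1}{2}$ everywhere on $I$, so the convergence is weak but not strong.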
Therefore, if $p>1$, $(f_n)_{n\in \mathbb{N}}$ converges weakly in $L^p(I)$, up to a subsequence, to a function $\tilde{f}\in L^p(I)$. Testing against continuous functions on $\overline{I}$, it is easy to check that $f=\tilde{f}$.\\
If $p=1$ we have to check that, up to a subsequence,
\begin{align}
\label{Equation: weak convergence for L1 explicit}
\lim_{n\to\infty}\int_{I} f_n g=\int_{I}fg\quad \forall\, g\in L^\infty(I).
\end{align}
Since $L^{\infty}(I)\subset L^q(I)$ for any $q>1$, (\ref{Equation: weak convergence for L1 explicit}) follows from the case $p>1$ (with $p=q'$).
\end{proof}

\begin{rem}
Let $n\ge 2$ and let $D\subset\mathbb{R}^n$ be any open, bounded and Lipschitz domain in $\mathbb{R}^n$. It is still unknown whether the space $L_{\mathbb{Z}}^1(D)$ is weakly sequentially closed. It is surely not weakly-$\ast$ closed: a proof of this fact can easily be obtained by generalising the arguments in \cite[Section 8]{petrache-riviere-abelian}.
\end{rem}

\newpage
\appendix
\section{Minimal connections for forms with finitely many integer singularities}

Throughout Appendix A, we will denote by $M^m\subset\mathbb{R}^n$ an arbitrary embedded, Lipschitz and connected $m$-dimensional submanifold of $\mathbb{R}^n$. Let $p\in [1,\infty]$. We will denote by $\Omega^{m-1}_{p,R,\infty}(M)$ the space introduced in Definition \ref{definition: Lipschitz forms}.

\begin{lem} \label{appendix lemma existence same boundary finitely many singularities}
Let $F\in\Omega_{p,R,\infty}^{m-1}(M)$. Then, there exists a connection for $F$.
\begin{proof}
Throughout the following proof, given any couple of points $x,y\in M$ we will denote by $(x,y)$ an arbitrarily chosen oriented Lipschitz curve with finite length joining $x$ and $y$.
By assumption, it holds that
\begin{align*}
*dF=\sum_{j=1}^Nd_j\delta_{x_j}, \qquad \mbox{ for some } d_1,...,d_N\in\mathbb{Z}\smallsetminus\{0\} \mbox{ and } x_1,...,x_N\in M.
\end{align*}
We define
\begin{align*}
\{i_1,...,i_p\}&:=\big\{j\in\{1,...,N\} \mbox{ s.t. } d_j>0\big\},\\
\{j_1,...,j_q\}&:=\big\{j\in\{1,...,N\} \mbox{ s.t. } d_j<0\big\},\\
d&:=\sum_{j=1}^Nd_j\in\mathbb{Z}.
\end{align*}
We build a family $\mathscr{F}=\{I_{\alpha}\}_{\alpha\in A}$ of oriented Lipschitz curves in $M$ as follows. If there is no point $x_j$ such that $d_j<0$, then we set $\mathscr{F}=\emptyset$. Else, we start from $x_{i_1}$ and we add to the family $\mathscr{F}$ the curves $\big(x_{j_1},x_{i_1}\big),...,\big(x_{j_{k_1}},x_{i_1}\big)$, until we reach the condition $k_1=q$ or the condition
\begin{align*}
r_1:=d_{i_1}+\sum_{l=1}^{k_1}d_{j_l}\le 0.
\end{align*}
If $k_1=q$, then we stop. Else, we move to the point $x_{i_2}$. If $r_1=0$, then we add to $\mathscr{F}$ the segments $\big(x_{j_{k_1+1}},x_{i_2}\big),...,\big(x_{j_{k_2}},x_{i_2}\big)$, where $k_2\in\{1,...,q\}$ is the smallest value such that
\begin{align*}
r_2:=d_{i_2}+\sum_{l=k_1+1}^{k_2}d_{j_l}\le 0.
\end{align*}
If there is no $k\in\{1,...,q\}$ such that
\begin{align*}
d_{i_2}+\sum_{l=k_1+1}^{k}d_{j_l}\le 0,
\end{align*}
then we add to $\mathscr{F}$ the segments $\big(x_{j_{k_1+1}},x_{i_2}\big),...,\big(x_{j_{q}},x_{i_2}\big)$ and we set $k_2=q$. If $r_1<0$, then we add to $\mathscr{F}$ the segments $\big(x_{j_{k_1}},x_{i_2}\big)$ and $\big(x_{j_{k_1+1}},x_{i_2}\big),...,\big(x_{j_{k_2}},x_{i_2}\big)$, where $k_2\in\{1,...,q\}$ is the smallest value such that
\begin{align*}
r_2:=d_{i_2}+r_1+\sum_{l=k_1+1}^{k_2}d_{j_l}\le 0.
\end{align*}
If there is no $k\in\{1,...,q\}$ such that
\begin{align*}
d_{i_2}+r_1+\sum_{l=k_1+1}^{k}d_{j_l}\le 0,
\end{align*}
then we add to $\mathscr{F}$ the segments $\big(x_{j_{k_1}},x_{i_2}\big)$ and $\big(x_{j_{k_1+1}},x_{i_2}\big),...,\big(x_{j_{q}},x_{i_2}\big)$ and we set $k_2=q$. We proceed iteratively in this way, moving on to the subsequent points $x_{i_s}$ until $k_s=q$ or $s=p$. Then, the construction of the family $\mathscr{F}$ is complete. We let $x_{i_h}$ be the last node that is visited before the iteration stops and, for every $I_{\alpha}=(x_j,x_i)\in\mathscr{F}$, we define its multiplicity $m_{\alpha}$ as
\begin{align*}
m_{\alpha}:=\begin{cases}
\lvert d_j\rvert-\lvert r_l\rvert & \mbox{ if } i=i_l \mbox{ and } j=j_{k_l},\\
\min\{\lvert d_i\rvert,\lvert r_{l-1}\rvert\} & \mbox{ if } i=i_l \mbox{ and } j=j_{k_{l-1}},\\
\min\{|d_j|,|d_i|\} & \mbox{ else.}
\end{cases}
\end{align*}
Finally, we distinguish three cases:
\begin{enumerate}
\item \textit{Case} $d=0$. Notice that this is always the case if $M$ has no boundary. We define the integer $1$-current $I\in\mathcal{R}_1(M)$ given by
\begin{align*}
\langle I,\omega\rangle:=\sum_{\alpha\in A}m_{\alpha}\int_{I_{\alpha}}\omega, \qquad \mbox{ for every } \omega\in\mathcal{D}^1(M).
\end{align*}
\item \textit{Case} $d>0$. We fix a point $x_0\in\partial M$ and we let $I_s^b:=(x_0,x_{i_s})$, for every $s=h,...,p$. We define the integer $1$-current $I\in\mathcal{R}_1(M)$ given by
\begin{align*}
\langle I,\omega\rangle:=\sum_{\alpha\in A}m_{\alpha}\int_{I_{\alpha}}\omega+r_h\int_{I_h^b}\omega+\sum_{s=h+1}^pd_{i_s}\int_{I_s^b}\omega,\qquad \mbox{ for every } \omega\in\mathcal{D}^1(M).
\end{align*}
\item \textit{Case} $d<0$.
We fix a point $x_0\in\partial M$ and we let $I_s^b:=(x_{j_s},x_0)$, for every $s=k_h,...,q$. We define the integer $1$-current $I \in\mathcal{R}_1(M)$ given by
\begin{align*}
\langle I,\omega\rangle:=\sum_{\alpha\in A}m_{\alpha}\int_{I_{\alpha}}\omega+|r_h|\int_{I_{k_h}^b}\omega+\sum_{s=k_h+1}^q|d_{j_s}|\int_{I_s^b}\omega,\qquad \mbox{ for every } \omega\in\mathcal{D}^1(M).
\end{align*}
\end{enumerate}
By direct computation, we verify that $I$ has the desired properties and the statement follows.
\end{proof}
\end{lem}

\begin{lem} \label{appendix lemma duality}
Let $F\in\Omega_{p,\mathbb{Z}}^{m-1}(M)$ (see Definition \ref{Definition: integer valued fluxes on generic domain}). Then,
\begin{align}\label{equation: estimates for connections}
\inf_{\substack{T\in\mathcal{D}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\inf_{\substack{T\in\mathcal{M}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi<+\infty,
\end{align}
where $\mathcal{M}_1(M)$ denotes the set of all the $1$-currents with finite mass on $M$. Moreover, the infimum on the left-hand side of the previous chain of equalities is achieved.
\begin{proof}
By definition, there exists an integer $1$-current $I\in\mathcal{R}_1(M)$ with finite mass such that $\partial I=*dF$. Hence,
\begin{align*}
\inf_{\substack{T\in\mathcal{D}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)\le\mathbb{M}(I)<+\infty.
\end{align*}
The first equality in \eqref{equation: estimates for connections} is clear.\\
Notice that for every $T\in\mathcal{M}_1(M)$ such that $\partial T=*dF$ it holds that
\begin{align*}
\int_{M}F\wedge d\varphi=\langle*dF,\varphi\rangle=\langle\partial T,\varphi\rangle=\langle T,d\varphi\rangle\le\mathbb{M}(T)\lvert\lvert d\varphi\rvert\rvert_{L^{\infty}(M)}, \qquad\forall\,\varphi\in W_0^{1,\infty}(M).
\end{align*}
Hence,
\begin{align}
\label{appendix equation lemma duality}
\inf_{\substack{T\in\mathcal{M}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)\ge\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi.
\end{align}
To prove that this inequality is actually an equality, it suffices to exhibit a $1$-current $T$ with finite mass on $M$ such that $\partial T=\ast dF$ and whose mass is bounded above by the supremum on the right-hand side. Define the vector subspace $X\subset\Omega_{\infty}^1(M)$ given by
\begin{align*}
X:=\{\omega\in\Omega_{\infty}^1(M) \mbox{ s.t. } \omega=d\varphi, \mbox{ for some } \varphi\in W_0^{1,\infty}(M)\}.
\end{align*}
Consider the linear functional $\phi:X\subset(\Omega^1(M),\lVert\cdot\rVert_{L^\infty})\rightarrow\mathbb{R}$ given by
\begin{align*}
\langle\phi,\omega\rangle=\int_{M}F\wedge\omega, \qquad\forall\,\omega\in X.
\end{align*}
By \eqref{appendix equation lemma duality} we get that $\phi$ is continuous on $X$, i.e.
\begin{align*}
\lvert\lvert\phi\rvert\rvert_{\mathcal{L}(X)}=\sup_{\substack{\omega\in X, \\ \lvert\lvert \omega\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge\omega=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi\le \inf_{\substack{T\in\mathcal{M}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)<+\infty.
\end{align*}
By the Hahn-Banach theorem, we can extend $\phi$ to a linear functional $T:\Omega^1(M)\rightarrow\mathbb{R}$ such that
\begin{align*}
\lvert\lvert T \rvert\rvert_{\mathcal{L}(\Omega_{\infty}^1(M))}=\lvert\lvert\phi\rvert\rvert_{\mathcal{L}(X)}=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi.
\end{align*}
But then, $T$ is a $1$-current on $M$ having finite mass and such that
\begin{align*}
\mathbb{M}(T)\le\lvert\lvert T\rvert\rvert_{\mathcal{L}(\Omega_{\infty}^1(M))}=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi.
\end{align*}
Moreover,
\begin{align*}
\langle\partial T,\varphi\rangle=\langle T,d\varphi\rangle=\langle\phi,d\varphi\rangle=\int_{M}F\wedge d\varphi=\langle\ast dF,\varphi\rangle, \qquad\forall\,\varphi\in W_0^{1,\infty}(M).
\end{align*}
Hence,
\begin{align*}
\mathbb{M}(T)=\inf_{\substack{\tilde T\in\mathcal{D}_1(M), \\ \partial \tilde T=*dF}}\mathbb{M}(\tilde T)=\inf_{\substack{\tilde T\in\mathcal{M}_1(M), \\ \partial \tilde T=*dF}}\mathbb{M}(\tilde T)=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi<+\infty
\end{align*}
and the statement follows.
\end{proof}
\end{lem}

\begin{prop} \label{appendix proposition existence minimal connection finitely many singularities}
Let $F\in\Omega_{p,R}^{m-1}(M)$. Then, there exists an integer $1$-current $L\in\mathcal{R}_1(M)$ such that $\partial L=*dF$ and
\begin{align*}
\mathbb{M}(L)=\inf_{\substack{T\in\mathcal{D}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ \lvert\lvert d\varphi\rvert\rvert_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi.
\end{align*}
In particular,
\begin{align*}
\mathbb{M}(L)\le C\lvert\lvert F\rvert\rvert_{L^p}.
\end{align*}
\begin{proof}
Notice that by \cite[Chapter 1, Section 3.4, Theorem 8]{giaquinta-modice-soucek_cartesan_currents_2}, we have
\begin{align*}
\inf_{\substack{T\in\mathcal{R}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\inf_{\substack{T\in\mathcal{D}_1(M), \\ \partial T=*dF}}\mathbb{M}(T).
\end{align*}
Since the mass $\mathbb{M}(\,\cdot\,)$ is lower semicontinuous with respect to the weak convergence in $\mathcal{D}_1(M)$ and since $\mathbb{M}$-bounded subsets of the competition class $\mathcal{R}_1(M)\cap\{T\in\mathcal{D}_1(M) \mbox{ s.t. } \partial T=*dF\}$ are weakly sequentially compact (for a reference, see e.g.
\cite[Equation (7.5), Theorem 7.5.2]{krantz_parks-geometric_integration_theory}), by the direct method of the calculus of variations we conclude that there exists an integer $1$-current $L\in\mathcal{R}_1(M)$ such that $\partial L=*dF$ and
\begin{align*}
\mathbb{M}(L)=\inf_{\substack{T\in\mathcal{R}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\inf_{\substack{T\in\mathcal{D}_1(M), \\ \partial T=*dF}}\mathbb{M}(T)=\sup_{\substack{\varphi\in W_0^{1,\infty}(M), \\ ||d\varphi||_{L^{\infty}}\le 1}}\int_{M}F\wedge d\varphi,
\end{align*}
where the last equality follows from Lemma \ref{appendix lemma duality}. The statement follows.
\end{proof}
\end{prop}

\section{Laplace equation on spheres}

Let $n\in\mathbb{N}$ be such that $n\ge 2$ and fix any $p\in(1,+\infty)$. We let
\begin{align*}
\dot W^{1,p}(\mathbb{S}^{n-1}):=\bigg\{u\in W^{1,p}(\mathbb{S}^{n-1}) \mbox{ s.t. } \bar u:=\int_{\mathbb{S}^{n-1}}u\, d\vol_{\mathbb{S}^{n-1}}=0\bigg\}.
\end{align*}
We endow the space $\dot W^{1,p}(\mathbb{S}^{n-1})$ with the usual $W^{1,p}$-norm induced by $W^{1,p}(\mathbb{S}^{n-1})$, given by
\begin{align*}
\lvert\lvert u\rvert\rvert_{W^{1,p}}:=\lvert\lvert u\rvert\rvert_{L^p}+\lvert\lvert du\rvert\rvert_{L^p}, \qquad\forall\, u\in \dot W^{1,p}(\mathbb{S}^{n-1}).
\end{align*}
\begin{lem}[Poincar\'e inequality on $\dot W^{1,p}$] \label{appendix poincare inequality}
There exists a constant $C>0$ such that
\begin{align*}
\int_{\mathbb{S}^{n-1}}|u|^p\,d\vol_{\mathbb{S}^{n-1}}\le C\int_{\mathbb{S}^{n-1}}|du|^p\,d\vol_{\mathbb{S}^{n-1}}, \qquad\forall\, u\in \dot W^{1,p}(\mathbb{S}^{n-1}).
\end{align*}
\begin{proof}
By contradiction, assume that for every $k>0$ there exists $u_k\in \dot W^{1,p}(\mathbb{S}^{n-1})$ such that $\lvert\lvert u_k\rvert\rvert_{L^p}=1$ and
\begin{align*}
1>k\int_{\mathbb{S}^{n-1}}|du_k|^p\,d\vol_{\mathbb{S}^{n-1}}.
\end{align*}
This immediately implies that $\lvert\lvert du_k\rvert\rvert_{L^p}\rightarrow 0$ as $k\rightarrow\infty$. In particular, the sequence $\{u_k\}_{k\in\mathbb{N}}$ is bounded with respect to the $W^{1,p}$-norm. Hence, by weak compactness of bounded sets in $W^{1,p}(\mathbb{S}^{n-1})$, there exists a subsequence $\{u_{k_j}\}_{j\in\mathbb{N}}$ of $\{u_k\}_{k\in\mathbb{N}}$ such that $u_{k_j}\rightharpoonup u$ in $W^{1,p}(\mathbb{S}^{n-1})$. Moreover, by the Rellich-Kondrachov theorem, we have $u_{k_j}\rightarrow u$ strongly in $L^p(\mathbb{S}^{n-1})$. Since $du_{k_j}\rightarrow 0$ strongly in $L^p$, we get $du=0$. Then, $u$ is constant on $\mathbb{S}^{n-1}$. Since $u_{k_j}\rightharpoonup u$ in $L^p(\mathbb{S}^{n-1})$, it follows that
\begin{align*}
0=\lim_{j\rightarrow\infty}\int_{\mathbb{S}^{n-1}}u_{k_j}\, d\vol_{\mathbb{S}^{n-1}}=\int_{\mathbb{S}^{n-1}}u\, d\vol_{\mathbb{S}^{n-1}},
\end{align*}
which leads to $u=0$. But this is absurd, since by strong $L^p$-convergence of $\{u_{k_j}\}_{j\in\mathbb{N}}$ to $u$ we obtain $\lvert\lvert u\rvert\rvert_{L^p}=1$.
\end{proof}
\end{lem}

\begin{rem}
By Lemma \ref{appendix poincare inequality}, we can endow $\dot W^{1,p}(\mathbb{S}^{n-1})$ with the following more convenient norm:
\begin{align*}
\lvert\lvert u\rvert\rvert_{\dot W^{1,p}}:=||du||_{L^p}, \qquad\forall\, u\in \dot W^{1,p}(\mathbb{S}^{n-1}).
\end{align*}
Moreover, this norm is equivalent to the $W^{1,p}$-norm.
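For $p=2$, the constant in Lemma \ref{appendix poincare inequality} can be made explicit; we record this only as an illustration, since it is not needed in what follows. Expanding $u\in\dot W^{1,2}(\mathbb{S}^{n-1})$ in spherical harmonics, $u=\sum_{k\ge 1}u_k$ (the zero-mean condition removes the constant mode $k=0$), and using that the Laplacian of $\mathbb{S}^{n-1}$ acts on spherical harmonics of degree $k$ as multiplication by $k(k+n-2)$, we get
\begin{align*}
\int_{\mathbb{S}^{n-1}}|du|^2\,d\vol_{\mathbb{S}^{n-1}}=\sum_{k\ge 1}k(k+n-2)\lvert\lvert u_k\rvert\rvert_{L^2}^2\ge (n-1)\int_{\mathbb{S}^{n-1}}|u|^2\,d\vol_{\mathbb{S}^{n-1}},
\end{align*}
so that one can take $C=\frac{1}{n-1}$ when $p=2$, with equality attained by the restrictions to $\mathbb{S}^{n-1}$ of the linear functions $x\mapsto\langle x,v\rangle$, $v\in\mathbb{R}^n$.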
\end{rem}

\begin{rem} \label{appendix remark continuity}
Notice that a linear functional $F$ on $W^{1,p'}(\mathbb{S}^{n-1})$ restricts to an element of $(\dot W^{1,p'}(\mathbb{S}^{n-1}))^*$ if and only if it is $W^{1,p'}$-continuous and $\langle F,1\rangle=0$.
\end{rem}

\begin{lem} \label{appendix lemma d* e d}
Let $F\in(W^{1,p'}(\mathbb{S}^{n-1}))^*$ be such that $\langle F,1\rangle=0$. Then, the following facts hold.
\begin{enumerate}
\item If $n\ge 3$, the linear differential system
\begin{align*}
\begin{cases} d^*\omega=F\\ d\omega=0 \end{cases}
\end{align*}
has a unique weak solution $\alpha\in\Omega_p^1(\mathbb{S}^{n-1})$.
\item If $n=2$, the linear differential system
\begin{align*}
\begin{cases} d^*\omega=F\\ d\omega=0\\ \displaystyle{\int_{\mathbb{S}^1}}\omega=0 \end{cases}
\end{align*}
has a unique weak solution $\alpha\in\Omega_p^1(\mathbb{S}^1)$.
\end{enumerate}
In both cases, $\alpha$ satisfies the following estimate:
\begin{align*}
||\alpha||_{L^p}\le C\lvert\lvert F\rvert\rvert_{(\dot W^{1,p'}(\mathbb{S}^{n-1}))^*},
\end{align*}
for some constant $C>0$ depending only on $n$.
\begin{proof}
Observe that, by Remark \ref{appendix remark continuity}, $F$ restricts to an element of $(\dot W^{1,p'}(\mathbb{S}^{n-1}))^*$. Consider the linear functional $\phi:\Omega^1(\mathbb{S}^{n-1})\rightarrow\mathbb{R}$ given by
\begin{align*}
\langle\phi,\omega\rangle=\langle F,u\rangle, \qquad\forall\,\omega=du+d^*\beta+\eta\in\Omega^1(\mathbb{S}^{n-1}),
\end{align*}
where $\eta$ is a harmonic $1$-form on $\mathbb{S}^{n-1}$. Since $\phi$ is continuous and linear on $\Omega^1(\mathbb{S}^{n-1})$ w.r.t.
the $L^{p'}$-norm, by the Hahn-Banach theorem there exists a unique (recall that $L^p$-spaces are strictly convex) extension $\Phi\in(\Omega_{p'}^1(\mathbb{S}^{n-1}))^*$ of $\phi$ such that
\begin{align*}
||\Phi||_{(\Omega_{p'}^1(\mathbb{S}^{n-1}))^*}=||\phi||_{(\Omega^1(\mathbb{S}^{n-1}))^*}\le C\lvert\lvert F\rvert\rvert_{(\dot W^{1,p'}(\mathbb{S}^{n-1}))^*}.
\end{align*}
By the Riesz representation theorem, there exists a unique $\alpha\in\Omega_{p}^1(\mathbb{S}^{n-1})$ such that
\begin{align}
\label{appendix riesz representation}
\langle\alpha,\omega\rangle_{L^p-L^{p'}}:=\int_{\mathbb{S}^{n-1}}\alpha\wedge*\omega=\langle\Phi,\omega\rangle, \qquad\forall\,\omega\in\Omega_{p'}^1(\mathbb{S}^{n-1})
\end{align}
and
\begin{align*}
||\alpha||_{L^p}=||\Phi||_{(\Omega_{p'}^1(\mathbb{S}^{n-1}))^*}\le C\lvert\lvert F\rvert\rvert_{(\dot W^{1,p'}(\mathbb{S}^{n-1}))^*}.
\end{align*}
Finally, by applying equation \eqref{appendix riesz representation} we get
\begin{align*}
\langle\alpha,du\rangle_{L^p-L^{p'}}&=\langle\Phi,du\rangle=\langle\phi,du\rangle=\langle F,u\rangle, \qquad\forall\,u\in C^{\infty}(\mathbb{S}^{n-1}),
\end{align*}
and
\begin{align*}
\langle\alpha,d^*\beta\rangle_{L^p-L^{p'}}&=\langle\Phi,d^*\beta\rangle=\langle\phi,d^*\beta\rangle=0, \qquad\forall\,\beta\in\Omega^2(\mathbb{S}^{n-1}).
\end{align*}
The two previous equations are exactly the weak forms of the equations $d^*\alpha=F$ and $d\alpha=0$, respectively. Moreover, in case $n=2$ we have
\begin{align*}
\int_{\mathbb{S}^1}\alpha=\int_{\mathbb{S}^1}\alpha\wedge\ast 1=\langle\alpha,\ast 1\rangle_{L^p-L^{p'}}=\langle\Phi,\ast 1\rangle=\langle F,1\rangle =0.
\end{align*}
This establishes the existence of a solution to the differential systems given in points 1 and 2. As for uniqueness, assume that $\alpha$ and $\alpha'$ are two solutions of the differential system given in point 1 (resp. 2) and define $\beta=\alpha-\alpha'$. Then, we distinguish two cases.

\textit{Case $n\ge 3$}. In this case, $\beta$ satisfies
\begin{align*}
\begin{cases} d^*\beta=0\\ d\beta=0. \end{cases}
\end{align*}
Hence, $\beta$ is a harmonic $1$-form on $\mathbb{S}^{n-1}$ with $n\ge 3$, which implies $\beta=0$.

\textit{Case $n=2$}. In this case, $\beta$ satisfies
\begin{align*}
\begin{cases} d^*\beta=0\\ d\beta=0\\ \displaystyle{\int_{\mathbb{S}^1}}\beta=0. \end{cases}
\end{align*}
Hence, $\beta$ is a harmonic $1$-form on $\mathbb{S}^1$, which implies $\beta=c\vol_{\mathbb{S}^1}$ for some $c\in\mathbb{R}$. But since $\beta$ has vanishing integral on $\mathbb{S}^1$, we get $\beta=0$.
\end{proof}
\end{lem}

\begin{dfn}[Sobolev spaces of differential forms] \label{appendix definition sobolev forms}
Fix any $k\in\mathbb{N}\smallsetminus\{0\}$. We define the \textit{Sobolev space of $W^{1,p}$-regular differential $k$-forms on $\mathbb{S}^{n-1}$} by
\begin{align*}
\Omega_{W^{1,p}}^k(\mathbb{S}^{n-1}):=\big\{\omega\in\Omega_{p}^k(\mathbb{S}^{n-1}) \mbox{ s.t. } d\omega,d^*\omega\in L^p\big\}.
\end{align*}
We endow such a space with the norm
\begin{align*}
\lvert\lvert\omega\rvert\rvert_{W^{1,p}}:=\lvert\lvert\omega\rvert\rvert_{L^p}+\lvert\lvert d\omega\rvert\rvert_{L^p}+\lvert\lvert d^*\omega\rvert\rvert_{L^p}, \qquad\forall\,\omega\in\Omega_{W^{1,p}}^k(\mathbb{S}^{n-1}).
\end{align*}
\end{dfn}

\begin{rem} \label{appendix remark sobolev forms}
It can be shown (see \cite[\S 3 and \S 4]{scott-L^p_theory_of_differential_forms_on_manifolds}) that such Sobolev spaces are completely equivalent to the usual ones, namely the spaces of $k$-forms having local coefficients in $W^{1,p}$. Moreover, in case $n\ge 3$ there exists $C>0$ such that
\begin{align}\label{poincare on forms}
\lvert\lvert\omega\rvert\rvert_{W^{1,p}}\le C\big(\lvert\lvert d\omega\rvert\rvert_{L^p}+\lvert\lvert d^*\omega\rvert\rvert_{L^p}\big), \qquad\forall\,\omega\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1}).
\end{align}
Indeed, let
\begin{align*}
X&:=\{d\alpha \mbox{ s.t. } \alpha\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})\},\\
Y&:=\{d^*\beta \mbox{ s.t. } \beta\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})\}.
\end{align*}
By \cite[Proposition 7.1]{scott-L^p_theory_of_differential_forms_on_manifolds}, $X$ and $Y$ are closed linear subspaces of $\Omega_p^2(\mathbb{S}^{n-1})$ and $\dot W^{1,p}(\mathbb{S}^{n-1})$, respectively. Then, $X\oplus Y$ is a Banach space with respect to the standard norm on the direct sum of two Banach spaces. We claim that the linear operator $T:\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})\rightarrow X\oplus Y$ given by $T\omega=(d\omega,d^*\omega)$, for every $\omega\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})$, is a continuous linear bijection between Banach spaces. Indeed, the fact that $T$ is injective follows from the fact that there are no non-zero harmonic $1$-forms on $\mathbb{S}^{n-1}$ for $n\ge 3$. Hence, we just need to show that $T$ is surjective. Pick any $(d\alpha,d^*\beta)\in X\oplus Y$.
By Lemma \ref{appendix lemma d* e d}, the linear differential system
\begin{align*}
\begin{cases} d^*\omega=d^*(\beta-\alpha)\\ d\omega=0 \end{cases}
\end{align*}
has a unique weak solution $\tilde\omega\in\Omega_p^1(\mathbb{S}^{n-1})$. Since by construction we have $d\tilde\omega, d^*\tilde\omega\in L^p$, we conclude that $\tilde\omega\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})$. Then, by letting $\omega:=\tilde\omega+\alpha\in\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1})$ we see that
\begin{align*}
T\omega=(d\omega,d^*\omega)=(d\tilde\omega+d\alpha,d^*\tilde\omega+d^*\alpha)=(d\alpha,d^*\beta),
\end{align*}
which proves our claim. Hence, by the open mapping theorem, $T$ has a continuous inverse and the statement follows with $C=\lvert\lvert T^{-1}\rvert\rvert_{\mathcal{L}(X\oplus Y,\Omega_{W^{1,p}}^1(\mathbb{S}^{n-1}))}$. In case $n=2$, the estimate \eqref{poincare on forms} still holds for every $\omega\in\Omega_{W^{1,p}}^1(\mathbb{S}^1)$ such that
\begin{align*}
\int_{\mathbb{S}^1}\omega=0.
\end{align*}
The proof is completely analogous.
\end{rem}

\begin{rem}[$L^p$-Hodge decomposition] \label{appendix remark hodge}
Let
\begin{align*}
X&:=\{d\varphi \mbox{ s.t. } \varphi\in\dot W^{1,p}(\mathbb{S}^{n-1})\},\\
Y&:=\{d^*\beta \mbox{ s.t. } \beta\in\Omega_{W^{1,p}}^2(\mathbb{S}^{n-1})\},\\
Z&:=\{\eta\in\Omega^1(\mathbb{S}^{n-1}) \mbox{ s.t. } \Delta\eta=(dd^*+d^*d)\eta=0\}.
\end{align*}
Then, as a particular consequence of the $L^p$-Hodge decomposition theorem (see e.g. \cite[Proposition 6.5]{scott-L^p_theory_of_differential_forms_on_manifolds}), the operator $T:X\oplus Y\oplus Z\rightarrow\Omega_p^1(\mathbb{S}^{n-1})$ given by
\begin{align*}
T(d\varphi,d^*\beta,\eta):=d\varphi+d^*\beta+\eta
\end{align*}
is a continuous linear isomorphism between Banach spaces. Hence, $T$ has a continuous inverse.
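The harmonic summand $Z$ can be described concretely (an illustrative aside, not needed for the estimates below): for $n\ge 3$ the sphere $\mathbb{S}^{n-1}$ carries no non-zero harmonic $1$-forms, so $Z=\{0\}$ and the decomposition reduces to $\omega=d\varphi+d^*\beta$; for $n=2$ instead
\begin{align*}
Z=\mathbb{R}\,\vol_{\mathbb{S}^1},
\end{align*}
the one-dimensional span of the angular form, which is why the case $n=2$ requires the extra normalization $\int_{\mathbb{S}^1}\omega=0$ throughout this section.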
We let
\begin{align*}
C_H:=\lvert\lvert T^{-1}\rvert\rvert_{\mathcal{L}(\Omega_p^1(\mathbb{S}^{n-1}),X\oplus Y\oplus Z)}.
\end{align*}
We conclude that for every $\omega\in\Omega_p^1(\mathbb{S}^{n-1})$ there exist $\varphi\in\dot W^{1,p}(\mathbb{S}^{n-1})$, $\beta\in\Omega_{W^{1,p}}^2(\mathbb{S}^{n-1})$ and $\eta\in Z$ such that $\omega=d\varphi+d^*\beta+\eta$ and
\begin{align}
\label{appendix equation useful estimate}
\lvert\lvert d\varphi\rvert\rvert_{L^p}+\lvert\lvert d^*\beta\rvert\rvert_{L^p}+\lvert\lvert\eta\rvert\rvert_{L^p}\le C_H\lvert\lvert\omega\rvert\rvert_{L^p}.
\end{align}
\end{rem}

\begin{lem}[A weak version of the Poincar\'e lemma] \label{appendix lemma sobolev poincare}
Let $n\ge 3$ and let $\alpha\in\Omega_{p}^1(\mathbb{S}^{n-1})$ be such that $d\alpha=0$ weakly on $\mathbb{S}^{n-1}$. Then, there exists a Sobolev function $\varphi\in \dot W^{1,p}(\mathbb{S}^{n-1})$ such that $d\varphi=\alpha$ weakly on $\mathbb{S}^{n-1}$.
\begin{proof}
We follow the notation of Remark \ref{appendix remark hodge} and we notice that, since $n\ge 3$, we have $Z=\{0\}$. Hence, we write $\alpha=d\varphi+d^*\beta$, for some $\varphi\in\dot W^{1,p}(\mathbb{S}^{n-1})$ and $\beta\in\Omega_{W^{1,p}}^2(\mathbb{S}^{n-1})$. We observe that
\begin{align*}
\langle d^*\beta,\omega\rangle_{L^p-L^{p'}}&=\langle d^*\beta,d\psi+d^*\gamma\rangle_{L^p-L^{p'}}\\
&=\langle d^*\beta,d^*\gamma\rangle_{L^p-L^{p'}}\\
&=\langle\alpha-d\varphi,d^*\gamma\rangle_{L^p-L^{p'}}\\
&=\langle \alpha,d^*\gamma\rangle_{L^p-L^{p'}}=0, \qquad\forall\,\omega=d\psi+d^*\gamma\in\Omega^1(\mathbb{S}^{n-1}).
\end{align*}
This implies $d^*\beta=0$ and the statement follows.
\end{proof} \end{lem} \begin{cor}[Laplace equation on spheres] \label{appendix corollary laplace} Let $F\in\mathbb{M}athcal{L}(W^{1,p'}(\mathbb{M}athbb{S}^{n-1}))$ such that $\langle F,1\mathbb{R}angle=0$. Then, the linear differential equation \begin{align*} \mathbb{M}athcal{D}elta u = F \end{align*} has a unique weak solution $\varphi\in\dot W^{1,p}(\mathbb{M}athbb{S}^{n-1})$ satisfying \begin{align*} ||\varphi||_{W^{1,p}}\le C\lvert\lvert F\mathbb{R}vert\mathbb{R}vert_{(\dot W^{1,p'}(\mathbb{M}athbb{S}^{n-1}))^*}. \end{align*} \begin{proof} First, we face the case $n\ge 3$. By Lemma \mathbb{R}ef{appendix lemma d* e d} we can find $\alpha\in\Omega_p^1(\mathbb{M}athbb{S}^{n-1})$ satisfying \begin{align*} \begin{cases} d^*\alpha=F\\ d\alpha=0 \end{cases} \end{align*} and \begin{align*} ||\alpha||_{L^p}\le C\lvert\lvert F\mathbb{R}vert\mathbb{R}vert_{(\dot W^{1,p'}(\mathbb{M}athbb{S}^{n-1}))^*}. \end{align*} Since $d\alpha=0$, by Lemma \mathbb{R}ef{appendix lemma sobolev poincare} there exists $\varphi\in\dot W^{1,p}(\mathbb{M}athbb{S}^{n-1})$ such that $\alpha=d\varphi$. Hence, we get \begin{align*} \mathbb{M}athcal{D}elta\varphi=d^*d\varphi=d^*\alpha=F. \end{align*} Moreover, by Lemma \mathbb{R}ef{appendix poincare inequality}, we have \begin{align*} ||\varphi||_{W^{1,p}}\le C\lvert\lvert d\varphi\mathbb{R}vert\mathbb{R}vert_{L^p}=C||\alpha||_{L^p}\le C\lvert\lvert F\mathbb{R}vert\mathbb{R}vert_{(\dot W^{1,p'}(\mathbb{M}athbb{S}^{n-1}))^*}. \end{align*} This concludes the proof in case $n\ge 3$. If $n=2$, then by Lemma \mathbb{R}ef{appendix lemma d* e d} we can find $\alpha\in\Omega_p^1(\mathbb{M}athbb{S}^{n-1})$ satisfying \begin{align*} \begin{cases} d^*\alpha=F\\ d\alpha=0\\ \displaystyle{\int_{\mathbb{M}athbb{S}^1}\alpha=0} \end{cases} \end{align*} and \begin{align*} ||\alpha||_{L^p}\le C\lvert\lvert F\mathbb{R}vert\mathbb{R}vert_{(\dot W^{1,p'}(\mathbb{M}athbb{S}^{n-1}))^*}. \end{align*} By setting $\varphi:=\ast\alpha$, the statement follows. 
\end{proof}
\end{cor}

\section{Some technical lemmata}

In this section we will make use of the following notation: let $T$ be an $m$-rectifiable current in $\mathbb{R}^n$; then $T$ can be represented as follows:
\begin{align*}
\langle T,\omega\rangle=\int_{\mathbb{R}^n}\theta\langle \omega, \xi\rangle\, d\mathscr{H}^m\big|_\Sigma\quad\forall\,\omega\in \Omega^m(\mathbb{R}^n),
\end{align*}
where $\Sigma$ is a locally $m$-rectifiable set,
\begin{align*}
\theta: \Sigma\to \mathbb{Z}
\end{align*}
is a locally $\mathscr{H}^m$-integrable, non-negative function and
\begin{align*}
\xi:\Sigma\to\Lambda_m\mathbb{R}^n
\end{align*}
is an $\mathscr{H}^m$-measurable function such that for $\mathscr{H}^m$-almost every point $x\in \Sigma$, $\xi(x)$ is a simple unit $m$-vector in $T_x\Sigma$. In this case we write
\begin{align*}
T=\tau(\Sigma, \theta, \xi).
\end{align*}

\begin{lem}
\label{appendix Lemma closure integer currents}
For any $k\in \mathbb{N}$ let
\begin{align*}
T_k=\tau( \Sigma_k, \theta_k, \xi_k)
\end{align*}
be an $m$-rectifiable current on $\mathbb{R}^n$ of finite mass. Assume that $(T_k)_{k\in \mathbb{N}}$ is a Cauchy sequence with respect to the convergence in mass. Then there exists an $m$-rectifiable current
\begin{align*}
T=\tau( \Sigma, \theta, \xi)
\end{align*}
such that
\begin{align*}
T_k\to T\quad (k\to\infty)\quad\text{in mass.}
\end{align*}
\end{lem}

\begin{proof}
Replacing the original sequence by a subsequence if necessary, we may assume that for any $k\in \mathbb{N}$
\begin{align*}
\mathbb{M}(T_k- T_{k+1})\leq 2^{-k}.
\end{align*}
Now for any $k\in \mathbb{N}$ let
\begin{align*}
\tilde{T}_k:=\begin{cases}
T_1&\text{ if }k=1\\
T_k-T_{k-1}&\text{ if }k>1.
\end{cases}
\end{align*}
Then for any $k\in \mathbb{N}$ we have
\begin{align*}
T_k=\sum_{i=1}^k\tilde{T}_i.
\end{align*}
For any $k\in \mathbb{N}$ write
\begin{align*}
\tilde{T}_k=\tau(\tilde{\Sigma}_k, \tilde{\theta}_k, \tilde{\xi}_k).
\end{align*}
Notice that for every $k\in\mathbb{N}$
\begin{align*}
\sum_{i=1}^k\tilde\theta_i\tilde\xi_i=\theta_k\xi_k \qquad\mathscr{H}^m\mbox{-a.e.,}
\end{align*}
where each $\tilde\theta_i\tilde\xi_i$ is extended by zero outside $\tilde\Sigma_i$. Set
\begin{align*}
\Sigma:=\bigcup_{k\in \mathbb{N}}\big(\tilde{\Sigma}_k\smallsetminus\tilde\theta_k^{-1}(0)\big).
\end{align*}
Then $\Sigma$ is $m$-rectifiable, being a countable union of $m$-rectifiable sets. Moreover $\mathscr{H}^m(\Sigma)<\infty$; in fact, since the $\tilde\theta_k$ are integer-valued,
\begin{align*}
\mathscr{H}^m(\Sigma)\leq\sum_{k\in\mathbb{N}}\mathscr{H}^m\big(\tilde{\Sigma}_k\smallsetminus\tilde\theta_k^{-1}(0)\big)\leq\sum_{k\in \mathbb{N}}\int_{\mathbb{R}^n}\lvert\tilde{\theta}_k\rvert\, d\mathscr{H}^m\res{\tilde{\Sigma}_k}=\sum_{k\in \mathbb{N}}\mathbb{M}(\tilde{T}_k)<\infty.
\end{align*}
Next let
\begin{align*}
\theta=\sum_{k\in \mathbb{N}}\tilde{\theta}_k,
\end{align*}
where $\tilde\theta_k$ is extended by zero on $\Sigma\smallsetminus\tilde\Sigma_k$ for any $k\in \mathbb{N}$. By the Beppo Levi theorem,
\begin{align}\label{Equation: theta is integrable}
\nonumber \int_{\mathbb{R}^n}\lvert\theta\rvert\, d\mathscr{H}^m\res\Sigma&\leq\sum_{k\in \mathbb{N}}\int_{\mathbb{R}^n}\lvert\tilde{\theta}_k\rvert\, d\mathscr{H}^m\res\Sigma\\
&=\sum_{k\in \mathbb{N}}\int_{\mathbb{R}^n}\lvert\tilde\theta_k\rvert\, d\mathscr{H}^m\res \tilde{\Sigma}_k=\sum_{k\in \mathbb{N}}\mathbb{M}(\tilde{T}_k)<\infty.
\end{align}
Therefore the series $\sum_{k\in\mathbb{N}}\lvert\tilde\theta_k\rvert$ converges $\mathscr{H}^m$-a.e.
in $\Sigma$; since the $\tilde\theta_k$ are integer-valued, for $\mathscr{H}^m$-a.e. $x\in \Sigma$ there are only finitely many $k\in \mathbb{N}$ such that $\tilde\theta_k(x)\neq 0$. In particular the sum
\begin{align*}
\sum_{k\in \mathbb{N}}\tilde\theta_k(x)\tilde\xi_k(x)
\end{align*}
is well defined and finite for $\mathscr{H}^m$-a.e. $x\in \Sigma$ (again $\tilde\xi_k$ is extended by zero on $\Sigma\smallsetminus\tilde\Sigma_k$ for any $k\in \mathbb{N}$) and we can write
\begin{align*}
\theta(x)\xi(x)=\sum_{k\in \mathbb{N}}\tilde\theta_k(x)\tilde\xi_k(x)
\end{align*}
for some $\theta(x)\in\mathbb{Z}_{\geq 0}$ and some simple unit $m$-vector $\xi(x)$ in $T_x\Sigma$, for $\mathscr{H}^m$-a.e. $x\in\Sigma$. Observe that $\theta$ is an $\mathscr{H}^m$-measurable function on $\Sigma$, being the absolute value of the a.e.-limit of $\mathscr{H}^m$-measurable functions. Analogously, $\xi$ is an $\mathscr{H}^m$-measurable map on $\Sigma$, being the a.e.-limit of $\mathscr{H}^m$-measurable maps. We set
\begin{align*}
T:=\tau(\Sigma, \theta, \xi)
\end{align*}
and we claim that
\begin{align*}
T_k\to T\quad(k\to\infty)\quad\text{in mass.}
\end{align*}
Indeed, since the space of $m$-currents with finite mass is complete with respect to the convergence in mass (as a dual space), there exists an $m$-current $T'$ such that
\begin{align*}
T_k\to T'\quad(k\to\infty)\quad\text{in mass.}
\end{align*}
To see that $T=T'$, observe that for any $\omega\in\mathcal{D}^m(\mathbb{R}^n)$
\begin{align*}
\theta_k\langle \omega, \xi_k\rangle=\sum_{i=1}^k\tilde\theta_i\langle\omega,\tilde\xi_i\rangle\to\theta\langle \omega, \xi\rangle\quad \mathscr{H}^m\text{-a.e.
in }\Sigma,
\end{align*}
thus by (\ref{Equation: theta is integrable}) and the Dominated Convergence Theorem we conclude that
\begin{align*}
\langle T_k,\omega\rangle\to \langle T,\omega\rangle\quad(k\to\infty).
\end{align*}
In particular $T=T'$.
\end{proof}

\begin{lem}\label{lemma: continuity of the dilation operator}
Let $q\in(-\infty,1]$, $\varepsilon\in(0,1)$ and let $\Omega\subset Q_{1-\varepsilon}^n(0)$ be open, Lipschitz and bounded. For any $p\in[1,+\infty)$ and $\mu:=f\mathcal{L}^n$ with $f=(\frac{1}{2}-\lVert\,\cdot\,\rVert_{\infty})^q$, consider for $\alpha\in(1,+\infty)$ the continuous linear operator $P_{\alpha}:L^p(\Omega,\mu;\mathbb{R}^n)\rightarrow L^p(\Omega,\mu;\mathbb{R}^n)$ given by
\begin{align*}
(P_{\alpha}V)(x):=\begin{cases}
\alpha^{n-1}V(\alpha x)& \mbox{ if }\,x\in\alpha^{-1}\Omega,\\
0 &\mbox{ on } \Omega\smallsetminus\alpha^{-1}\Omega.
\end{cases}
\end{align*}
Then:
\begin{enumerate}
\item For every $\alpha\in (1,+\infty)$ with $\lvert 1-\alpha^{-1}\rvert\le\varepsilon$ and every $V\in L^p(\Omega,\mu;\mathbb{R}^n)$ it holds that
\begin{align*}
\lVert P_{\alpha}V\rVert_{L^p(\mu)}\le C\alpha^{n-1-\frac{n}{p}}\lVert V\rVert_{L^p(\mu)},
\end{align*}
for some constant $C>0$ depending only on $q$ and $p$.
\item For every $V\in L^p(\Omega,\mu;\mathbb{R}^n)$ we have that $P_{\alpha}V\rightarrow V$ in $L^p(\Omega,\mu;\mathbb{R}^n)$ as $\alpha\rightarrow 1^+$.
\end{enumerate}
\begin{proof}
First we prove 1.
Fix any $V\in L^p(\Omega,\mu;\mathbb{R}^n)$ and compute
\begin{align*}
\int_{\Omega}\lvert P_{\alpha}V\rvert^p\, d\mu&=\alpha^{p(n-1)}\int_{\alpha^{-1}\Omega}\lvert V(\alpha x)\rvert^p\, d\mu(x)\\
&\le\alpha^{p(n-1)-n}\bigg(\int_{\Omega}\lvert V(y)\rvert^p\, d\mu(y)+\int_{\Omega}\lvert V(y)\rvert^p\frac{f(\alpha^{-1}y)-f(y)}{f(y)}\, d\mu(y)\bigg).
\end{align*}
As in \eqref{estimate on the difference measure} we can estimate
\begin{align*}
\left\lvert\frac{f(\alpha^{-1}y)-f(y)}{f(y)}\right\rvert\leq q\left(\frac{1}{2}-\lVert y\rVert_\infty\right)^{-1}\lVert y\rVert_\infty(1-\alpha^{-1})\leq C
\end{align*}
for any $y\in Q_{1-\varepsilon}(0)$ and any $\alpha\geq 1$ such that $\lvert 1-\alpha^{-1}\rvert\leq \varepsilon$, for some constant $C$ depending only on $q$. Therefore
\begin{align*}
\int_{\Omega}\lvert P_{\alpha}V\rvert^p\, d\mu\le (C+1)\alpha^{p(n-1)-n}\int_{\Omega}\lvert V(y)\rvert^p\, d\mu(y),
\end{align*}
and 1 follows. We are left to prove 2. Fix any $\delta>0$ and let $V_{\delta}\in C_c^{0}(\Omega;\mathbb{R}^n)$ be such that
\begin{align*}
\lVert V_{\delta}-V\rVert_{L^p(\mu)}\le\delta.
\end{align*}
By 1 we have
\begin{align*}
\lVert P_{\alpha}V-V\rVert_{L^p(\mu)}&\le\lVert P_{\alpha}(V-V_{\delta})\rVert_{L^p(\mu)}+\lVert P_{\alpha}V_{\delta}-V_{\delta}\rVert_{L^p(\mu)}+\lVert V_{\delta}-V\rVert_{L^p(\mu)}\\
&\le (C\alpha^{n-1-\frac{n}{p}}+1)\delta+\lVert P_{\alpha}V_{\delta}-V_{\delta}\rVert_{L^p(\mu)},
\end{align*}
for every $\alpha\in(1,+\infty)$ such that $\lvert 1-\alpha^{-1}\rvert\le\varepsilon$.
Since $V_{\delta}$ is continuous and compactly supported, it follows from the Dominated Convergence Theorem that $\lVert P_{\alpha}V_{\delta}-V_{\delta}\rVert_{L^p(\mu)}\rightarrow 0$ as $\alpha\rightarrow 1^+$. Hence, letting $\alpha\rightarrow 1^+$ in the previous inequality, we get
\begin{align*}
\limsup_{\alpha\rightarrow 1^+}\lVert P_{\alpha}V-V\rVert_{L^p(\mu)}\le (C+1)\delta.
\end{align*}
As $\delta>0$ was arbitrary, 2 follows.
\end{proof}
\end{lem}

\printbibliography
\Addresses
\end{document}
\begin{document} \title{On definite strongly quasipositive links and L-space branched covers\footnotetext{2000 Mathematics Subject Classification. Primary 57M25, 57M50, 57M99}} \author{Michel Boileau} \thanks{Michel Boileau was partially supported by ANR projects 12-BS01-0003-01 and 12-BS01-0004-01.} \address{Aix Marseille Univ, CNRS, Centrale Marseille, I2M, Marseille, France, 39, rue F. Joliot Curie, 13453 Marseille Cedex 13} \email{[email protected] } \author[Steven Boyer]{Steven Boyer} \thanks{Steven Boyer was partially supported by NSERC grant RGPIN 9446-2013} \address{D\'epartement de Math\'ematiques, Universit\'e du Qu\'ebec \`a Montr\'eal, 201 avenue du Pr\'esident-Kennedy, Montr\'eal, QC H2X 3Y7.} \email{[email protected]} \urladdr{http://www.cirget.uqam.ca/boyer/boyer.html} \author[Cameron McA. Gordon]{Cameron McA. Gordon} \thanks{Cameron Gordon was partially supported by NSF grant DMS-130902.} \address{Department of Mathematics, University of Texas at Austin, 1 University Station, Austin, TX 78712, USA.} \email{[email protected]} \urladdr{http://www.ma.utexas.edu/text/webpages/gordon.html} \begin{abstract} We investigate the problem of characterising the family of strongly quasipositive links which have definite symmetrised Seifert forms and apply our results to the problem of determining when such a link can have an L-space cyclic branched cover. In particular, we show that if $\delta_n = \sigma_1 \sigma_2 \ldots \sigma_{n-1}$ is the dual Garside element and $b = \delta_n^k P \in B_n$ is a strongly quasipositive braid whose braid closure $\widehat b$ is definite, then $k \geq 2$ implies that $\widehat b$ is one of the torus links $T(2, q), T(3,4), T(3,5)$ or pretzel links $P(-2, 2, m), P(-2,3,4)$. Applying \cite[Theorem 1.1]{BBG} we deduce that if one of the standard cyclic branched covers of $\widehat b$ is an L-space, then $\widehat b$ is one of these links. 
We show by example that there are strongly quasipositive braids $\delta_n P$ whose closures are definite but not one of these torus or pretzel links. We also determine the family of definite strongly quasipositive $3$-braids and show that their closures coincide with the family of strongly quasipositive $3$-braids with an L-space branched cover. \\ \noindent {\it Keywords}: Strongly quasipositive; L-space; Cyclic branched cover. \end{abstract} \maketitle \begin{center} \today \end{center} \section{Introduction} \label{sec: introduction} We assume throughout that links are oriented and contained in the $3$-sphere. To each link $L$ and integer $n \geq 2$, we associate the canonical $n$-fold cyclic cover $\Sigma_n(L) \to S^3$ branched over $L$. We are interested in links that are fibred and strongly quasipositive. By theorems of Giroux and Rudolph (see \cite[102.1]{Ru2}) these are precisely the links that can be obtained from the unknot by plumbing and deplumbing positive Hopf bands. Recall that an {\it L-space} is a rational homology $3$-sphere $M$ such that $\dim \widehat{HF}(M; \mathbb Z/2) = |H_1(M; \mathbb Z)|$ \cite{OS1}, and an {\it L-space knot} is a knot with a non-trivial L-space Dehn surgery. L-space knots are fibred \cite{Ni1}, strongly quasipositive \cite[Theorem 1.2]{He}, and prime \cite{Krc}. We ask: \begin{que}\label{que:lspaces} {\rm For which fibred strongly quasipositive links $L$ is some $\Sigma_n(L)$ an L-space?} \end{que} Examples are provided by the following. Call a link $L$ {\it simply laced arborescent} if it is the boundary of an oriented surface obtained by plumbing positive Hopf bands according to one of the trees $\Gamma$ determined by the simply laced Dynkin diagrams $\Gamma = A_m (m \geq 1), D_m (m \geq 4), E_6, E_7, E_8$. We denote $L$ by $L(\Gamma)$.
It is known that
\begin{align*}
L(A_m) &= T(2, m+1)\\
L(D_m) &= P(-2, 2, m-2)\\
L(E_6) &= P(-2,3,3) = T(3,4)\\
L(E_7) &= P(-2,3,4)\\
L(E_8) &= P(-2,3,5) = T(3,5)
\end{align*}
where $T(p, q)$ is the $(p,q)$ torus link and $P(p, q, r)$ the $(p,q,r)$ pretzel link. For such a link $L$, $\Sigma_2(L)$ has finite fundamental group and is therefore an L-space \cite{OS1}.

\begin{conj} \label{conj: branched lspace implies simply laced arborescent}
If $L$ is a prime, fibred, strongly quasipositive link for which some $\Sigma_n(L)$ is an L-space, then $L$ is simply laced arborescent.
\end{conj}

We say that a link $L$ of $m$ components is {\it definite} if $\vert \sigma(L) \vert = 2g(L) + (m-1)$, where $\sigma(L)$ is the signature of $L$ and $g(L)$ is its genus, and {\it indefinite} otherwise. One of the main results in our earlier paper \cite{BBG} is

\begin{thm} {\rm (\cite[Theorem 1.1(1)]{BBG})} \label{thm: bbg definite}
A strongly quasipositive link $L$ for which some $\Sigma_n(L)$ is an L-space is definite.
\end{thm}

For some subclasses of prime, fibred, strongly quasipositive links this immediately leads to a proof of Conjecture \ref{conj: branched lspace implies simply laced arborescent}. For example, the simply laced arborescent links are all definite positive braid links, i.e. closures of positive braids, and Baader has shown:

\begin{thm} \label{thm: baader} {\rm (\cite[Theorem 2]{Baa})}
Let $L$ be a prime positive braid link. Then $L$ is simply laced arborescent if and only if it is definite.
\end{thm}

Thus Conjecture \ref{conj: branched lspace implies simply laced arborescent} holds for positive braid links.
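To illustrate the definition of definiteness in the simplest family above, take $L = T(2,q) = L(A_{q-1})$ with $q \geq 3$ odd, so that $m = 1$; the standard computations for $(2,q)$ torus knots (recalled here purely as an example) give
$$g(T(2,q)) = \frac{q-1}{2}, \qquad \vert \sigma(T(2,q)) \vert = q-1,$$
so $\vert \sigma(L) \vert = 2g(L) + (m-1)$ and $T(2,q)$ is definite, consistent with Theorem \ref{thm: baader}.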
In the same way, it was shown in \cite{BBG} that the conjecture holds if $L$ is prime and either a divide knot, a fibred strongly quasipositive knot which is either alternating or Montesinos, or an arborescent knot which bounds a surface obtained by plumbing positive Hopf bands along a tree. See \cite[Corollary 1.5 and Proposition 9.3]{BBG} and the remarks which follow the proof of the former. However, Theorem \ref{thm: bbg definite} is not sufficient to prove Conjecture \ref{conj: branched lspace implies simply laced arborescent} in general; there are prime, definite, fibred strongly quasipositive links that are not simply laced arborescent (see \cite{Mis} and Theorem \ref{thm: p odd}). The examples in Theorem \ref{thm: p odd} are basket links; see \cite{Ru1}. A useful point of view concerning strongly quasipositive links is obtained through the consideration of Birman-Ko-Lee (BKL) positive braids \cite{BKL}. Here, the authors introduce a presentation for the braid group $B_n$ with generators the strongly quasipositive braids $a_{rs}$, $1 \leq r < s \leq n$, given by \begin{equation} \label{exprn for ars} a_{rs} = (\sigma_r \sigma_{r+1} \ldots \sigma_{s-2}) \sigma_{s-1} (\sigma_r \sigma_{r+1} \ldots \sigma_{s-2})^{-1} \end{equation} Figure \ref{fig: steve 1} depicts the associated geometric braid. An element of $B_n$ is called {\it BKL-positive} if it can be expressed as a word in positive powers of the generators $a_{rs}$. The family of BKL-positive elements in $B_n$ coincides with the family of strongly quasipositive $n$-braids. The BKL-positive element \begin{equation} \label{def: deltan} \delta_n = \sigma_1 \sigma_2 \ldots \sigma_{n-1} \in B_n, \end{equation} called the {\it dual Garside element}, plays a particularly important role below. 
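For example, in $B_3$ the Birman-Ko-Lee generators are $a_{12} = \sigma_1$, $a_{23} = \sigma_2$ and
$$a_{13} = \sigma_1 \sigma_2 \sigma_1^{-1} = \sigma_2^{-1} \sigma_1 \sigma_2,$$
the two expressions agreeing by the braid relation $\sigma_1\sigma_2\sigma_1 = \sigma_2\sigma_1\sigma_2$. Thus a strongly quasipositive $3$-braid is precisely a positive word in $\sigma_1$, $\sigma_2$ and $a_{13}$.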
Let $L$ be a strongly quasipositive link, so $L$ is the closure of a BKL-positive braid, and define the {\it{BKL-exponent}} of $L$ to be \begin{equation} \label{def: kl} k(L) = \max \{k : L \hbox{ is the closure of } \delta_n^k P \hbox{ where $n \geq 2, k \geq 0$, $P \in B_n$ is BKL-positive}\} \end{equation} It is clear that $k(L) \geq 0$ and we show in Lemma \ref{lemma: k(L) finite} that $ k(L) < \infty$. Consideration of the table in \S \ref{subsec: symm seifert forms of baskets} shows that $k(L) \geq 2$ when $L$ is simply laced arborescent. The BKL-exponent $k(L)$ can be used to characterise the simply laced arborescent links amongst prime strongly quasipositive links. \begin{thm} \label{thm: def baskets} Let $L$ be a prime strongly quasipositive link. Then $L$ is simply laced arborescent if and only if it is definite and $k(L) \geq 2$. \end{thm} The condition $k(L) \geq 2$ on the BKL-exponent cannot be relaxed as there are prime strongly quasipositive definite links with $k(L) = 1$. See Theorem \ref{thm: p odd}. Theorems \ref{thm: bbg definite} and \ref{thm: def baskets} give a complete answer to Question \ref{que:lspaces} for prime strongly quasipositive links with BKL-exponent $k(L) \geq 2$. \begin{cor}\label{cor:lspace cover} Let $L$ be a prime strongly quasipositive link with BKL-exponent $k(L) \geq 2$. Then $\Sigma_n(L)$ is an L-space for some $n \geq 2$ if and only if $L$ is simply laced arborescent. \end{cor} It is worth noting that by \cite[Theorem 4.2]{Ban}, a strongly quasipositive link $L$ with $k(L) \geq 1$ is fibred. (Banfield also showed in \cite[Theorem 5.2]{Ban} that the family of strongly quasipositive links $L$ with $k(L) \geq 1$ coincides with the family of basket links; see \cite{Ru1} and the discussion below.) In particular, the strongly quasipositive links arising in Theorem \ref{thm: def baskets} and Corollary \ref{cor:lspace cover} are fibred. 
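To make the BKL-exponent concrete: $T(2, m+1) = L(A_m)$ is the closure of $\delta_2^{m+1} = \sigma_1^{m+1} \in B_2$ and $T(3,4) = L(E_6)$ is the closure of $\delta_3^{4} = (\sigma_1\sigma_2)^4 \in B_3$, so $k(T(2, m+1)) \geq m+1$ and $k(T(3,4)) \geq 4$; in particular these links visibly satisfy $k(L) \geq 2$.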
This reduces Conjecture \ref{conj: branched lspace implies simply laced arborescent} to the following statement. \begin{conj} \label{conj: branched lspace implies simply laced arborescent 2} If $L$ is a prime, fibred, strongly quasipositive link with BKL-exponent $k(L) \leq 1$, then no $\Sigma_n(L)$ is an L-space. \end{conj} We completely determine the definite strongly quasipositive links of braid index $3$ or less, a family which includes the simply laced arborescent links (cf. the table in \S \ref{subsec: symm seifert forms of baskets}). \begin{thm} \label{thm: definite 3-braids} Suppose that $L$ is a non-split, non-trivial, strongly quasipositive link of braid index $2$ or $3$. $(1)$ If $L$ is prime, then the following statements are equivalent. \begin{itemize} \item[{\rm (a)}] $L$ is definite; \item[{\rm (b)}] $\Sigma_2(L)$ is an L-space; \item[{\rm (c)}] $L$ is simply laced arborescent or a Montesinos link $M(1; 1/p,1/q,1/r)$ for some positive integers $p, q, r$. \end{itemize} $(2)$ If $L$ is definite, then it is fibred if and only if it is conjugate to a positive braid. \end{thm} Part (2) of Theorem \ref{thm: definite 3-braids} combines with Theorem \ref{thm: baader} and \cite[Theorem 4.2] {Ban} to prove Theorem \ref{thm: def baskets} in the case of strongly quasipositive links of braid index $2$ or $3$. Part (1) of Theorem \ref{thm: definite 3-braids} resolves Question \ref{que:lspaces} for non-split strongly quasipositive links of braid index $2$ or $3$, even in the non-fibred case. Further, since the Montesinos links $M(1; 1/p,1/q,1/r)$ are not fibred (\cite[Theorem 1.1]{Ni2}, \cite[Theorem 3.3]{Stoi}), Theorem \ref{thm: bbg definite} combines with part (1) of the theorem to imply the following corollary. \begin{cor} Conjecture \ref{conj: branched lspace implies simply laced arborescent} holds for fibred strongly quasipositive links of braid index $2$ or $3$. 
\qed \end{cor} We noted above that L-space knots are prime, fibred, and strongly quasipositive, so Theorem \ref{thm: bbg definite} and Theorem \ref{thm: definite 3-braids} also imply \begin{cor} If $K$ is a non-trivial L-space knot of braid index $2$ or $3$ and some $\Sigma_n(K)$ is an L-space, then $K$ is one of the torus knots $T(2,m)$ where $m \geq 3$ is odd, $T(3,4)$, or $T(3,5)$. \qed \end{cor} Our next corollary follows immediately from Theorem \ref{thm: bbg definite} and the equivalence of (a) and (b) in part (1) of Theorem \ref{thm: definite 3-braids}. \begin{cor} \label{cor: n implies 2} Suppose that $L$ is a strongly quasipositive link of braid index $2$ or $3$. If $\Sigma_n(L)$ is an L-space for some $n \geq 2$, then $\Sigma_2(L)$ is an L-space. \qed \end{cor} Experimental evidence suggests that if $L$ is a link for which $\Sigma_n(L)$ is an L-space for some $n \geq 2$, then $\Sigma_r(L)$ is an L-space for each $2 \leq r \leq n$. In \S \ref{sec: lspace branched covers 3-braids} we verify this in all but three cases of strongly quasipositive links of braid index $2$ or $3$. \begin{prop} \label{prop: 1 < r < n} Suppose that $L$ is a strongly quasipositive link of braid index $2$ or $3$ for which $\Sigma_n(L)$ is an L-space for some $n \geq 2$. If $L$ is not an appropriately oriented version of one of the links $6_2^2, 6_3^2$ or $7_1^3$, then $\Sigma_r(L)$ is an L-space for each $2 \leq r \leq n$. \end{prop} See \S \ref{subsubsec: proof of proposition} for a discussion of the exceptional cases $L = 6_2^2, 6_3^2, 7_1^3$ and in particular Remark \ref{rmk: to do} for a description of what remains to be done to deal with these open cases. By a {\it basket} we mean a positive Hopf plumbed basket in the sense of \cite{Ru1}. These are the surfaces $F$ in $S^3$ constructed by successively plumbing some number of positive Hopf bands onto a disk. The boundary of $F$ is a {\it basket link} $L$, which is fibred with fiber $F$. 
Also, $F$ is a quasipositive surface, and so $L$ is strongly quasipositive (\cite{Ru1}). By Banfield (\cite[Theorem 5.2]{Ban}), basket links coincide with the strongly quasipositive links $L$ with BKL-exponent $k(L) \geq 1$. Our next result produces examples of prime definite basket links $L$ which are not simply laced arborescent. We do this by considering basket links from the point of view of plumbing diagrams and studying the family of {\it cyclic basket links} $L(C_m, p)$ associated to an $m$-cycle $C_m$ incidence graph.

\begin{thm} \label{thm: p odd}
Let $m \geq 3$ and $p$ be integers with $p$ odd and $0 < p < m$.

$(1)$ $L(C_m, p)$ is prime, fibred and definite.

$(2)$ $L(C_m, p)$ is simply laced arborescent if and only if $p = 1$.
\end{thm}

It follows from Theorem \ref{thm: def baskets} and \cite[Theorem 5.2]{Ban} that for $p > 1$ odd, the BKL-exponent of $L(C_m, p)$ is $1$. Our next result shows that even though these $L(C_m, p)$ are fibred, definite and not simply laced arborescent, they do not provide counterexamples to Conjecture \ref{conj: branched lspace implies simply laced arborescent 2}.

\begin{thm} \label{thm: intro not a counterexample}
Suppose that $p$ is odd.

$(1)$ If $p > 1$ and $n \geq 3$, then $\Sigma_n(L(C_m, p))$ is not an L-space.

$(2)$ If $p > 1$, then $\Sigma_2(L(C_m, p))$ admits a co-oriented taut foliation and hence is not an L-space.

$(3)$ $\Sigma_n(L(C_m, 1))$ is an L-space if and only if either $n = 2$ or $n = m = 3$.
\end{thm}

By Lemma \ref{lemma: definite iff p odd} the links $L(C_m,p)$ with $p$ odd are the only definite basket links whose incidence graph is an $m$-cycle, $m \ge 3$. Hence Theorems \ref{thm: bbg definite}, \ref{thm: p odd} and \ref{thm: intro not a counterexample} give the following corollary.

\begin{cor} \label{cor: no contradiction}
If $L$ is a basket link whose incidence graph is an $m$-cycle, $m \geq 3$, then $\Sigma_n(L)$ is an L-space for some $n \geq 2$ if and only if $L$ is simply laced arborescent.
\qed \end{cor} This leads us to the following problem. \begin{problem} {\rm Determine the definite basket links and which of those have L-space branched cyclic covers.} \end{problem} In \S \ref{sec: BKL braids and baskets} we discuss background material on basket links, their BKL braid representations, and their symmetrised Seifert forms. In \S \ref{sec: 3-braids} we classify definite strongly quasipositive $3$-braids leading to a proof of Theorem \ref{thm: definite 3-braids} and that of Theorem \ref{thm: def baskets} for strongly quasipositive links of braid index $3$ or less. Section \ref{sec: seifert forms of baskets} describes the Seifert form of a basket link in terms of a BKL-positive braid representation $b = \delta_n P$. This is then used to determine conditions guaranteeing that its symmetrised Seifert form is indefinite in \S \ref{sec: indefinite bkl pos words} and conditions guaranteeing it is congruent to $E_6, E_7$, or $E_8$ in \S \ref{sec: e6, e7, e8}. Sections \ref{sec: n = 4} and \ref{sec: n = 5} prove Theorem \ref{thm: def baskets} for strongly quasipositive links of braid index $4$ and $n \geq 5$ respectively. In \S \ref{sec: cycle baskets} we introduce cyclic basket links and prove Theorem \ref{thm: p odd} and Theorem \ref{thm: intro not a counterexample}. Finally, in \S \ref{sec: lspace branched covers 3-braids} we determine the pairs $(b, n)$ where $b$ is a strongly quasipositive $3$-braid for which $\Sigma_n(\widehat b)$ is an L-space except for a finite number of cases. The third table in \S \ref{subsec: calculating alex polys} lists what is known to us and what remains open. Proposition \ref{prop: 1 < r < n} follows from this analysis. {\bf Acknowledgements}. This paper originated during the authors' visits to the winter-spring 2017 thematic semester {\it Homology theories in low-dimensional topology} held at the Isaac Newton Institute for the Mathematical Sciences Cambridge (funded by EPSRC grant no. EP/K032208/1). 
They gratefully acknowledge their debt to this institute. They also thank Idrissa Ba for preparing the paper's figures.

\section{BKL-positive braids and basket links} \label{sec: BKL braids and baskets}

Figure \ref{fig: braid conventions} illustrates our convention for relating topological and algebraic braids, and for the orientations on the components of braid closures. These differ from those of Murasugi \cite{Mu1} by replacing $\sigma_i$ by $\sigma_i^{-1}$ and those of Birman-Ko-Lee \cite{BKL} and Baldwin \cite{Bld} by replacing $\sigma_i$ by $\sigma_{n-i}$.

\begin{figure}
\caption{$\delta_n = \sigma_1 \sigma_2 \cdots \sigma_{n-1}$}
\label{fig: braid conventions}
\end{figure}

\subsection{BKL-positive braids}

The geometric braid corresponding to the Birman-Ko-Lee generator $a_{rs}$ of $B_n$ (cf. (\ref{exprn for ars})) is depicted in Figure \ref{fig: steve 1}.

\begin{figure}
\caption{$a_{rs}$}
\label{fig: steve 1}
\end{figure}

It is clear that $a_{rs} = \sigma_r$ when $s = r+1$. The classic braid relations show that $a_{rs} = (\sigma_r \sigma_{r+1} \ldots \sigma_{s-2}) \sigma_{s-1} (\sigma_r \sigma_{r+1} \ldots \sigma_{s-2})^{-1}$ has an alternate expression:
\begin{equation} \label{alt exprn for ars}
a_{rs} = (\sigma_{r+1} \sigma_{r+2} \ldots \sigma_{s-1})^{-1} \sigma_{r} (\sigma_{r+1} \sigma_{r+2} \ldots \sigma_{s-1})
\end{equation}
Recall the dual Garside element $\delta_n = \sigma_1 \sigma_2 \ldots \sigma_{n-1} \in B_n$ from the introduction. The $n^{th}$ power of $\delta_n$ generates the centre of $B_n$ and if we define $a_{r+1, n+1}$\footnote{For clarity, we will introduce a comma between the two parameters of the BKL generators from time to time.} to be $a_{1, r+1}$, then the reader will verify that for all $1 \leq r < s \leq n$ we have
\begin{equation} \label{conj by delta}
\delta_n a_{rs} \delta_n^{-1} = a_{r+1, s+1}
\end{equation}

\begin{remark} \label{rem: delta rotn}
{\rm Identity (\ref{conj by delta}) has a geometric interpretation.
Think of the $n$ strands of the trivial braid in $B_n$ as $I$-factors of $S^1 \times I$ based at the $n^{th}$ roots of unity. A general element of $B_n$ is obtained by adding half twists to these numbered strands in the usual way. If $s \leq n-1$, the algebraic identity $\delta_n a_{rs} \delta_n^{-1} = a_{r+1, s+1}$ states that the braid element obtained by conjugating $a_{rs}$ by $\delta_n$ is the geometric braid obtained by rotating the geometric braid $a_{rs}$ around the $S^1$-factor of $S^1 \times I$ by $\frac{2 \pi}{n}$. This remains true even when $s = n$, for in this case if we follow the rotation by an isotopy of the half-twisted band corresponding to $a_{rs}$ through the point at $\infty$ of a plane transverse to $S^1 \times I$, we obtain $a_{1, r+1}$.} \end{remark} The relations in the Birman-Ko-Lee presentation are \begin{eqnarray} \label{eqn: commuting generators} a_{rs} a_{tu} = a_{tu} a_{rs} \;\; \mbox{ if } \;\; (t-r)(t-s)(u-r)(u-s) > 0 \end{eqnarray} and \begin{eqnarray} \label{eqn: non-commuting generators} a_{rs} a_{st} = a_{rt} a_{rs} = a_{st} a_{rt} \;\; \mbox{ if } \;\; 1 \leq r < s < t \leq n \end{eqnarray} We say that two BKL generators $a_{rs}$ and $a_{tu}$ are {\it linked} if either $r < t < s < u$ or $t < r < u < s$. Relation (\ref{eqn: commuting generators}) states that $a_{rs}$ commutes with $a_{tu}$ if $\{r, s\} \cap \{t, u\} = \emptyset$ and they are not linked. The converse holds; if $a_{rs}$ and $a_{tu}$ commute, then $\{r, s\} \cap \{t, u\} = \emptyset$ and they are not linked (\cite{BKL}). We say that a BKL-positive word $P$ {\it covers} $\sigma_i$ if there is a letter $a_{rs}$ of $P$ for which $r \leq i < s$. The reader will verify that if a BKL-positive word $P$ covers $\sigma_i$ and $P' \in B_n$ is a BKL-positive rewriting of $P$, then $P'$ covers $\sigma_i$. The {\it span} of a BKL-positive letter $a_{rs}$ is $s - r$. 
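The identities (\ref{conj by delta}) and (\ref{eqn: non-commuting generators}) can be sanity-checked on the images of braids under the natural homomorphism $B_n \to S_n$, which sends $a_{rs}$ to the transposition $(r\ s)$ and $\delta_n$ to an $n$-cycle. The short script below (with our own, purely illustrative naming) tests only these permutation shadows, not the braid relations themselves, and is of course not part of the paper's argument.

```python
def compose(p, q):
    # Composition p∘q of permutations given as 1-based tuples of images,
    # with q applied first: (p∘q)(i) = p(q(i)).
    return tuple(p[q[i] - 1] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p, start=1):
        inv[v - 1] = i
    return tuple(inv)

def a(n, r, s):
    # Image of the BKL generator a_{rs} in S_n: the transposition (r s).
    p = list(range(1, n + 1))
    p[r - 1], p[s - 1] = s, r
    return tuple(p)

def word(n, letters):
    # Image of a braid word; letters (r, s) are multiplied left to right.
    p = tuple(range(1, n + 1))
    for r, s in letters:
        p = compose(p, a(n, r, s))
    return p

n = 5
delta = word(n, [(i, i + 1) for i in range(1, n)])  # image of delta_n

# delta_n a_{rs} delta_n^{-1} = a_{r+1, s+1}, with a_{r+1, n+1} := a_{1, r+1}
for r in range(1, n):
    for s in range(r + 1, n + 1):
        lhs = compose(compose(delta, a(n, r, s)), inverse(delta))
        rhs = a(n, r + 1, s + 1) if s < n else a(n, 1, r + 1)
        assert lhs == rhs

# a_{rs} a_{st} = a_{rt} a_{rs} = a_{st} a_{rt} for r < s < t
for r in range(1, n - 1):
    for s in range(r + 1, n):
        for t in range(s + 1, n + 1):
            assert (word(n, [(r, s), (s, t)])
                    == word(n, [(r, t), (r, s)])
                    == word(n, [(s, t), (r, t)]))
```

The assertions pass silently for $n = 5$; equality of images in $S_n$ does not by itself prove equality in $B_n$, but a failure here would disprove the corresponding braid identity.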
\subsection{The quasipositive Seifert surface of a basket link}

Let $b \in B_n$ be a braid of the form $$b = \delta_n P$$ where $P$ is a BKL-positive braid. Since $b$ is a strongly quasipositive braid it determines a quasipositive surface $F(b)$ with oriented boundary $\widehat b$, obtained by attaching negatively twisted bands, one for each letter of $b$, to the disjoint union of $n$ disks in the usual way. See Figure \ref{fig: steve 2}.

\begin{figure}
\caption{Quasipositive surface}
\label{fig: steve 2}
\end{figure}

The assumption that $b = \delta_n P$ implies that $F(b)$ is connected. Hence it minimizes the first Betti number of the Seifert surfaces for $\widehat b$. It is readily calculated that $b_1(F(b)) = l_{BKL}(b) - (n-1)$, where $l_{BKL}(b)$ denotes the number of generators $a_{rs}$, counted with multiplicity, which occur in $b$. Recall the BKL-exponent $k(L)$ we defined for strongly quasipositive links $L$ (cf. (\ref{def: kl})).

\begin{lemma} \label{lemma: k(L) finite}
For $L$ a strongly quasipositive link, $0 \leq k(L) < \infty$.
\end{lemma}

\begin{proof}
Let $L$ be a strongly quasipositive link, and let $b \in B_n$ be a braid of the form $\delta_n^k P$ where $k \geq 0$, $n \geq 2$, $P$ is BKL-positive, and $\widehat b = L$. If every such representation of $L$ has $k = 0$, then $k(L) = 0$ and we are done. So suppose that $L = \widehat b$ is as above with $k \geq 1$. Then $F = F(b)$ is a fibre surface for $L$ (\cite{Ban}) and hence independent of the choice of such a representation. From above, the first Betti number $b_1(F)$ of $F$ is given by $l_{BKL}(b) - (n-1) = (k-1)(n-1) + l_{BKL}(P)$, from which we see that $k$ is bounded above by $b_1(F) + 1$.
\end{proof} \subsection{Reducing braid indices} \label{subsec: reducing indices} Let $L$ be a basket link for which $k(L) \geq 2$ and set $$n(L) = \min\{n : L = \widehat b \hbox{ where } b = \delta_{n}^k P, n \geq 1, k \geq 2, \hbox{ and } P \in B_{n} \hbox{ is BKL-positive}\}$$ \begin{lemma} \label{lemma: not minimal} If $b = \delta_n^2 P$ where $P$ fails to cover $\sigma_1$ or fails to cover $\sigma_{n-1}$, then $n(\widehat b) < n$. \end{lemma} \begin{proof} Suppose that $P$ does not cover $\sigma_1$. Since \begin{eqnarray} \delta_n^2 & = & (\sigma_1 \sigma_2 \ldots \sigma_{n-1})(\sigma_1 \sigma_2 \ldots \sigma_{n-1}) \nonumber \\ & = & (\sigma_1 \sigma_2 \sigma_1)(\sigma_3 \ldots \sigma_{n-1})(\sigma_2 \ldots \sigma_{n-1}) \nonumber \\ & = & (\sigma_2 \sigma_1 \sigma_2)(\sigma_3 \ldots \sigma_{n-1})(\sigma_2 \ldots \sigma_{n-1}) \nonumber \\ & = & (\sigma_2 \sigma_1) (\sigma_2 \ldots \sigma_{n-1})(\sigma_2 \ldots \sigma_{n-1}), \nonumber \end{eqnarray} we have $b = (\sigma_2 \sigma_1) (\sigma_2 \ldots \sigma_{n-1})(\sigma_2 \ldots \sigma_{n-1}) P$, in which $\sigma_1$ occurs as a letter exactly once. Conjugating and then destabilising along this single $\sigma_1$-band shows that $\widehat b = \widehat{b'}$ where $b' = (\sigma_2 \ldots \sigma_{n-1})^2 P \sigma_2$. Since $b'$ lies in the subgroup $\langle \sigma_2, \ldots, \sigma_{n-1} \rangle \cong B_{n-1}$ and, after shifting indices down by one, has the form $\delta_{n-1}^2 P'$ with $P'$ BKL-positive, this completes the proof. A similar argument deals with the case that $P$ does not cover $\sigma_{n-1}$. \end{proof} \begin{lemma} \label{lemma: innermost commute} Suppose that $b = \delta_n^2 P$ where $P$ is BKL-positive and contains a letter $a_{rs}$ which commutes with all other letters of $P$ and that there are no letters $a_{tu}$ of $P$ such that $r < t < u < s$. Then $n(\widehat b) < n$. \end{lemma} \begin{proof} We use ``$\sim$" to denote ``conjugate to" in what follows. Write $P = P_1 a_{rs} P_2$ where $P_1, P_2$ are BKL-positive.
By (\ref{conj by delta}), conjugating $b$ by $\delta_n^{1-r}$ yields $$b' = \delta_n^2 P_1' a_{1s'} P_2' = P_1'' \delta_n^2 a_{1s'} P_2' \sim \delta_n^2 a_{1s'} P_2' P_1''$$ where $P_1', P_2', P_1''$ are BKL-positive words and there are no letters $a_{tu}$ of $P_2' P_1''$ such that $1 < t < u < s$. Hence, without loss of generality $r = 1$. Then $P = a_{1s}^m P'$ where for each letter $a_{tu}$ of $P'$ we have $t, u \in \{s+1, s+2, \ldots, n\}$. If $s = 2$, then \begin{eqnarray} b = \delta_n^2 \sigma_1^m P' = \delta_n^2 P' \sigma_1^m& \sim & (\sigma_1^{m+1} \sigma_2 \sigma_3 \ldots \sigma_{n-1}) (\sigma_1 \sigma_2 \ldots \sigma_{n-1}) P' \nonumber \\ & = & (\sigma_1^{m+1} \sigma_2 \sigma_1)(\sigma_3 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) P' \nonumber \\ & = & (\sigma_2 \sigma_1 \sigma_2^{m+1})(\sigma_3 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) P', \nonumber \end{eqnarray} which has the same closure as $$(\sigma_2^{m+2})(\sigma_3 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) P' \sim (\sigma_2\sigma_3 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) P' \sigma_2^{m+1}$$ This implies the conclusion of the lemma. 
If $s \geq 3$, then $\sigma_1$ commutes with $P'$ so, as $a_{1s} = \sigma_1 a_{2s} \sigma_1^{-1}$, we have \begin{eqnarray} b = \delta_n^2 \sigma_1 a_{2s}^m \sigma_1^{-1}P' = \delta_n^2 \sigma_1 a_{2s}^m P' \sigma_1^{-1} & \sim & (\sigma_2 \sigma_3 \ldots \sigma_{n-1}) (\sigma_1 \sigma_2 \ldots \sigma_{n-1}) \sigma_1 a_{2s}^m P' \nonumber \\ & = & (\sigma_2 \sigma_3 \ldots \sigma_{n-1}) ((\sigma_1 \sigma_2 \sigma_1) \sigma_3 \ldots \sigma_{n-1}) a_{2s}^m P' \nonumber \\ & = & (\sigma_2 \sigma_3 \ldots \sigma_{n-1}) ((\sigma_2 \sigma_1 \sigma_2) \sigma_3 \ldots \sigma_{n-1}) a_{2s}^m P' \nonumber \end{eqnarray} which has the same braid closure as \begin{eqnarray} (\sigma_2 \sigma_3 \sigma_4 \ldots \sigma_{n-1}) (\sigma_2^2 \sigma_3 \ldots \sigma_{n-1}) a_{2s}^m P' & = & ((\sigma_2 \sigma_3 \sigma_2) \sigma_4 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) a_{2s}^m P' \nonumber \\ &=& (\sigma_3 \sigma_2 \sigma_3 \sigma_4 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) a_{2s}^m P' \nonumber \\ &\sim& (\sigma_2 \sigma_3 \ldots \sigma_{n-1}) (\sigma_2 \ldots \sigma_{n-1}) a_{2s}^m P' \sigma_3, \nonumber \end{eqnarray} which implies the conclusion of the lemma. \end{proof} \subsection{The symmetrised Seifert forms of basket links} \label{subsec: symm seifert forms of baskets} Symmetrised Seifert forms are even symmetric bilinear forms. In particular, if $b = \delta_n P$ (as above) is a basket link, the symmetrised Seifert form of the quasipositive surface $F(b)$ is an even symmetric bilinear form which we denote by $$\mathcal{F}(b): H_1(F(b)) \times H_1(F(b)) \to \mathbb Z$$ We say that $b$ is {\it definite} if $\mathcal{F}(b)$ is definite. Note that with respect to standard orientation conventions, if $b$ is definite then $\mathcal{F}(b)$ is negative definite\footnote{This holds more generally for the symmetrised Seifert form of the closure of any BKL positive braid.} and in this case we will see in \S \ref{sec: seifert forms of baskets} that $\mathcal{F}(b)$ is a root lattice.
See Remark \ref{rem: root lattice}. In particular, it is congruent to an orthogonal sum of the simply laced arborescent forms $A_m, D_m, E_6, E_7$, and $E_8$\footnote{We consider the negative definite versions of these forms in this paper.}. Here are some simple examples of such braids whose associated forms are simply laced arborescent (cf. \cite{Baa}). Braids in the same row are equal modulo rewriting, conjugation, and (de)stabilisation. {\small \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Form & $B_2$ & $B_3$ & $B_3$ & $B_4$ & $B_5$ & Braid Closure \\ \hline \hline $A_m$ ($m \geq 1$) & $\delta_2^{m+1}$ & $\delta_3 \sigma_1^m$ & $\sigma_1^{m+1} \sigma_2$ & & & $T(2,m+1)$ \\ \hline $D_m$ ($m \geq 4$) & & $\delta_3^3 \sigma_1^{m-4}$ & $\sigma_1^{m-2} \sigma_2 \sigma_1^2 \sigma_2$ & & & $P(-2,2,m-2)$ ($= T(3,3)$ when $m = 4$) \\ \hline $E_6$ & & $\delta_3^4$ & $\sigma_1^3 \sigma_2 \sigma_1^3 \sigma_2$ & $\delta_4^3$ & & $P(-2,3,3) = T(3,4)$ \\ \hline $E_7$ & & $\delta_3^4 \sigma_1$ & $\sigma_1^4 \sigma_2 \sigma_1^3 \sigma_2$ & & & $P(-2,3,4)$ \\ \hline $E_8$ & & $\delta_3^5$ & $\sigma_1^5 \sigma_2 \sigma_1^3 \sigma_2$ & & $\delta_5^3$ & $P(-2,3,5) = T(3,5)$ \\ \hline \end{tabular} \end{center}} Given two BKL-positive words $b, b' \in B_n$, we say that {\it $b$ contains $b'$ as a subword} if there are BKL-positive words $a, c \in B_n$ such that $b = a b' c$. More generally, we say that {\it $b$ contains $b'$}, written $b \supseteq b'$, if $b'$ can be obtained from $b$ by deleting some BKL-positive letters. If $b' \subseteq b$, then $F(b') \subseteq F(b)$ and the inclusion-induced homomorphism $H_1(F(b')) \to H_1(F(b))$ is injective with image a direct summand of $H_1(F(b))$. Consequently, $\mathcal{F}(b')$ is a primitive sublattice of $\mathcal{F}(b)$.
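As a cross-check on the table, the rank of each listed form must agree with $b_1(F(b)) = l_{BKL}(b) - (n-1)$. A minimal sketch (Python; the encoding of braids as lists of BKL letters $(r,s)$ is an assumption of this sketch):

```python
# Cross-check: the rank of each form in the table equals
# b_1(F(b)) = l_BKL(b) - (n - 1).  A braid is encoded as a list of BKL
# letters (r, s); delta_n contributes the n - 1 letters
# a_{12}, a_{23}, ..., a_{n-1,n}.

def delta(n):
    return [(i, i + 1) for i in range(1, n)]

def first_betti(n, word):
    # F(b) is homotopy equivalent to a connected graph with n vertices
    # (disks) and one edge (band) per BKL letter, so
    # b_1 = (#bands) - (#disks) + 1 = l_BKL(b) - (n - 1).
    return len(word) - (n - 1)
```

For example, `first_betti(3, delta(3) * 4)` is $6$, the rank of $E_6$, matching the row of $\delta_3^4$.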
For $P$ a BKL-positive word, set $$r(P) = \max\{s : \delta_n^s \subseteq P\}$$ Then if $b = \delta_n^k P$, $b \supseteq \delta_n^{k + r(P)}$ and therefore $\mathcal{F}(b)$ contains $\mathcal{F}(\delta_n^{k + r(P)})$ as a sublattice. The braid closure of $\delta_n^{k + r(P)}$ is the $(k+r(P), n)$ torus link $T(k+r(P),n)$ and from the table above we see that $\mathcal{F}(T(k+r(P), n))$ is definite in certain cases: $\mathcal{F}(T(2,n)) \cong A_{n-1}$, $\mathcal{F}(T(3,3)) \cong D_4$, $\mathcal{F}(T(3,4)) \cong E_6$, and $\mathcal{F}(T(3,5)) \cong E_8$. The converse is also true. \begin{lemma} \label{restrictions on krn} {\rm (\cite[Theorem 1]{Baa})} $\;$ $\delta_n^{k}$ is definite if and only if $\{k, n\}$ is $\{2, n\}, \{3,3\}, \{3,4\}$, or $\{3, 5\}$. Consequently, if $n \geq 3$ and $b = \delta_n^k P$ is BKL-positive and definite, then $k + r(P) \leq 5$. \qed \end{lemma} \section{Definite strongly quasipositive $3$-braids} \label{sec: 3-braids} The goal of this section is to prove Theorem \ref{thm: definite 3-braids}. \subsection{Minimal representatives of strongly quasipositive $3$-braids} \label{subsec: minimal rep} Recall that strongly quasipositive $3$-braids $P$ are reduced products of words of the form $\sigma_1^p, a_{13}^q, \sigma_2^r$ where $p, q, r > 0$. We define the {\it syllable length} of a particular strongly quasipositive expression for $P$ to be the number of such words in its expression. Given a strongly quasipositive $3$-braid $b$, choose a braid $\delta_3^k P$ from among its strongly quasipositive conjugates and their strongly quasipositive rewritings, for which $k$ is maximal and, among those, for which $P$ has minimal syllable length. Then $P$ does not contain the subwords $\sigma_1\sigma_2, \sigma_2 a_{13}, a_{13} \sigma_1$, since each of these equals $\delta_3$, which can be moved to the left through $P$ to yield a strongly quasipositive rewriting $b = \delta_3^{k+1} P'$, contrary to our choices.
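The three subword identities $\sigma_1\sigma_2 = \sigma_2 a_{13} = a_{13}\sigma_1 = \delta_3$ invoked above can be certified mechanically. A sketch (Python; the word encoding is an assumption of the sketch) uses the standard surjection $B_3 \to \mathrm{SL}(2,\mathbb Z)$, whose kernel is generated by the full twist $(\sigma_1\sigma_2)^6$; since that element has exponent sum $12$, matrix equality together with equal exponent sums certifies equality in $B_3$:

```python
# Certify the subword identities sigma_1 sigma_2 = sigma_2 a_{13} =
# a_{13} sigma_1 (= delta_3) in B_3, via the standard surjection
# B_3 -> SL(2, Z) with kernel generated by the full twist
# (sigma_1 sigma_2)^6 (exponent sum 12): matrix equality plus equal
# exponent sums certifies equality in B_3.  Words are lists of
# +/-1, +/-2 standing for sigma_1^{+/-1}, sigma_2^{+/-1}.

S1 = ((1, 1), (0, 1))    # image of sigma_1
S2 = ((1, 0), (-1, 1))   # image of sigma_2

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(a):
    (p, q), (r, s) = a  # braid images have determinant 1
    return ((s, -q), (-r, p))

def rho(word):
    m = ((1, 0), (0, 1))
    for g in word:
        x = S1 if abs(g) == 1 else S2
        m = mul(m, x if g > 0 else inv(x))
    return m

def esum(word):
    return sum(1 if g > 0 else -1 for g in word)

def equal_in_B3(w1, w2):
    return rho(w1) == rho(w2) and esum(w1) == esum(w2)

a13 = [1, 2, -1]  # a_{13} = sigma_1 sigma_2 sigma_1^{-1} (assumed convention)
```

Each of `equal_in_B3([1, 2], [2] + a13)` and `equal_in_B3([1, 2], a13 + [1])` returns `True`.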
If $P \ne 1$, we can assume that its first letter is $\sigma_1$ after conjugating by an appropriate power of $\delta_3$. Then there are words $w_1, w_2, \ldots, w_s$ ($s \geq 0$) of the form $$w_i = \sigma_1^{p_i} a_{13}^{q_i} \sigma_2^{r_i}$$ with $p_i, q_i, r_i > 0$ for each $i$, such that $P = w_1w_2 \ldots w_s P'$ where $P' \in \{1, \sigma_1^{p}, \sigma_1^{p} a_{13}^{q}\}$ for some $p, q \geq 1$. We call an expression for $b$ of the sort just described a {\it minimal} representative of its conjugacy class. \begin{lemma} \label{lemma: minimal form} Let $\delta_3^k P$ be a minimal representative for a strongly quasipositive braid $b$ where $P = w_1w_2 \ldots w_s P'$ with $P' \in \{1, \sigma_1^{p}, \sigma_1^{p} a_{13}^{q}\}$ as above. $(1)$ Suppose that $s = 0$. Then $b$ is either $\delta_3^k$, or $\delta_3^k \sigma_1^p$ for some $p \geq 1$, or $\delta_3^k \sigma_1^p a_{13}^q$ where $k \equiv 1$ {\rm (mod $3$)} and $p, q \geq 1$. In all cases, $b$ is conjugate to a positive braid. $(2)$ Suppose that $s > 0$. Then $$P = \left\{ \begin{array}{ll} w_1w_2 \ldots w_s & \hbox{if } k \equiv 0 \hbox{ {\rm (mod $3$)}} \\ w_1w_2 \ldots w_s \sigma_1^{p_{s+1}} a_{13}^{q_{s+1}} & \hbox{if } k \equiv 1 \hbox{ {\rm (mod $3$)}} \\ w_1w_2 \ldots w_s\sigma_1^{p_{s+1}} & \hbox{if } k \equiv 2 \hbox{ {\rm (mod $3$)}} \end{array} \right.$$ \end{lemma} \begin{proof} First suppose that $s = 0$. Then $b = \delta_3^k P$ where $P \in \{1, \sigma_1^{p}, \sigma_1^{p} a_{13}^{q}\}$. If $P$ is either $1$ or $\sigma_1^p$, we are done. Assume then that $P = \sigma_1^{p} a_{13}^{q}$. If $k \equiv 0$ (mod $3$), then $b \sim (a_{13}^q \delta_3^k) \sigma_1^p = (\delta_3^k a_{13}^q) \sigma_1^{p} = \delta_3^k \sigma_1 \sigma_2^q \sigma_1^{p -1} = \delta_3^{k+1} \sigma_2^{q-1} \sigma_1^{p -1}$, which contradicts the maximality of $k$. 
If $k \equiv 2$ (mod $3$), then $b \sim (a_{13}^q \delta_3^k) \sigma_1^p = (\delta_3^k \sigma_1^{q}) \sigma_1^{p} = \delta_3^k \sigma_1^{p + q}$, which contradicts the minimality of the syllable length of $P$. Thus, $k \equiv 1$ (mod $3$). Finally observe that $\delta_3 (\delta_3^k \sigma_1^{p} a_{13}^{q}) \delta_3^{-1}$ is the positive braid $\delta_3^k \sigma_2^{p} \sigma_1^{q}$. Next suppose that $s> 0$. If $k \equiv 0$ (mod $3$) and $P' = \sigma_1^{p} a_{13}^{q}$ for some $p, q \geq 1$, then $$\delta_3^k P \sim \delta_3^k \sigma_1^{p} a_{13}^{q} w_1w_2 \ldots w_s = \delta_3^k \sigma_1^{p} \delta_3 \sigma_2^{q-1} w_1' w_2 \ldots w_s = \delta_3^{k+1} a_{13}^p \sigma_2^{q-1}w_1' w_2 \ldots w_s,$$ where $w_1' = \sigma_1^{p_1 - 1} a_{13}^{q_1} \sigma_2^{r_1}$, which contradicts the maximality of $k$. On the other hand, if $k \equiv 0$ (mod $3$) and $P' = \sigma_1^{p}$ then $$\delta_3^k P \sim \sigma_1^{p} \delta_3^k w_1w_2 \ldots w_s = \delta_3^k \sigma_1^{p} w_1w_2 \ldots w_s = \delta_3^k w_1' w_2 \ldots w_s $$ where $w_1' = \sigma_1^{p_1 + p} a_{13}^{q_1} \sigma_2^{r_1}$, which contradicts the minimality of the syllable length of $P$. Thus $P' = 1$. If $k \equiv 1$ (mod $3$) and $P' = \sigma_1^{p}$ for some $p \geq 1$ then \begin{eqnarray} \delta_3^k P \sim \sigma_1^{p} \delta_3^k w_1w_2 \ldots w_s = \delta_3^k a_{13}^{p} w_1w_2 \ldots w_s & = & \delta_3^{k+1} \sigma_2^{p-1} \sigma_1^{p_1-1} a_{13}^{q_1} \sigma_2^{r_1} w_2 \ldots w_s, \nonumber \end{eqnarray} which contradicts the maximality of $k$. On the other hand, if $k \equiv 1$ (mod $3$) and $P' = 1$ then $$\delta_3^k P \sim \sigma_2^{r_s} \delta_3^k w_1 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s} = \delta_3^k \sigma_1^{r_s} w_1w_2 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s} = \delta_3^k w_1' w_2 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s}$$ where $w_1' = \sigma_1^{r_s + p_1} a_{13}^{q_1} \sigma_2^{r_1}$, which contradicts the minimality of the syllable length of $P$. 
Thus $P' = \sigma_1^{p} a_{13}^{q}$ for some $p, q \geq 1$. Finally, if $k \equiv 2$ (mod $3$) and $P' = 1$ then \begin{eqnarray} \delta_3^k P \sim \sigma_2^{r_s} \delta_3^k w_1w_2 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s} & = & \delta_3^k a_{13}^{r_s} w_1 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s} \nonumber \\ & = & \delta_3^{k+1} \sigma_2^{r_s-1} \sigma_1^{p_1-1} a_{13}^{q_1} \sigma_2^{r_1} w_2 \ldots w_{s-1} \sigma_1^{p_s} a_{13}^{q_s}, \nonumber \end{eqnarray} which contradicts the maximality of $k$. On the other hand, if $k \equiv 2$ (mod $3$) and $P' = \sigma_1^{p} a_{13}^{q}$ for some $p, q \geq 1$ then $$\delta_3^k P \sim a_{13}^q \delta_3^k w_1 \ldots w_{s} \sigma_1^{p} = \delta_3^k \sigma_1^q w_1w_2 \ldots w_{s}\sigma_1^{p} = \delta_3^k w_1' w_2 \ldots w_{s} \sigma_1^{p}$$ where $w_1' = \sigma_1^{q + p_1} a_{13}^{q_1} \sigma_2^{r_1}$, which contradicts the minimality of the syllable length of $P$. Thus $P' = \sigma_1^p$ for some $p \geq 1$. \end{proof} \subsection{The Murasugi normal form of strongly quasipositive $3$-braids} In order to calculate the signature of the closure of $\delta_3^k P$, we determine the Murasugi normal form for its conjugacy class. \begin{lemma} \label{lemma: simple rewrite} Suppose that $p,q, r$ are three positive integers. $(1)$ $\sigma_1^p = \sigma_2^{-1} (\sigma_1^{-1} \sigma_2^{p-1} \delta_3) \sigma_1$. $(2)$ $\sigma_1^p a_{13}^q = \sigma_2^{-1} (\sigma_1^{-1} \sigma_2^{p-1} \sigma_1^{-1} \sigma_2^{q-1}\delta_3^2) a_{13}$ $(3)$ $\sigma_1^p a_{13}^q \sigma_2^r = \sigma_2^{-1} (\delta_3^3 \sigma_1^{-1} \sigma_2^{p-1} \sigma_1^{-1} \sigma_2^{q-1} \sigma_1^{-1} \sigma_2^{r-1})\sigma_2$. 
\end{lemma} \begin{proof} We have, $$\sigma_1^p = \sigma_2^{-1} \sigma_2 \sigma_1^p = \sigma_2^{-1} (\sigma_2 \delta_3^{-1} \sigma_2^{p} \delta_3 ) = \sigma_2^{-1} (\sigma_1^{-1} \sigma_2^{p-1} \delta_3 \sigma_1) = \sigma_2^{-1} (\sigma_1^{-1} \sigma_2^{p-1} \delta_3) \sigma_1,$$ and $$\sigma_1^p a_{13}^q = \sigma_2^{-1} (\sigma_2 \sigma_1^p a_{13}^q) = \sigma_2^{-1} (\sigma_2 \delta_3^{-1} \sigma_2^{p} \delta_3^{-1} \sigma_2^q\delta_3^2) = \sigma_2^{-1} (\sigma_1^{-1} \sigma_2^{p-1} \sigma_1^{-1} \sigma_2^{q-1}\delta_3^2) a_{13},$$ and finally \begin{eqnarray} \sigma_1^p a_{13}^q \sigma_2^r = \sigma_2^{-1} (\sigma_2 \sigma_1^p a_{13}^q \sigma_2^{r-1})\sigma_2 & = & \sigma_2^{-1} (\delta_3 \sigma_1 a_{13}^p \sigma_2^q \delta_3^{-1} \sigma_2^{r-1})\sigma_2 \nonumber \\ & = & \sigma_2^{-1} (\delta_3^2 a_{13} \sigma_2^p \delta_3 ^{-1} \sigma_2^q \delta_3^{-1} \sigma_2^{r-1})\sigma_2 \nonumber \\ & = & \sigma_2^{-1} (\delta_3^3 \sigma_2 \delta_3^{-1} \sigma_2^p \delta_3^{-1} \sigma_2^q \delta_3^{-1} \sigma_2^{r-1})\sigma_2 \nonumber \\ & = & \sigma_2^{-1} (\delta_3^3 \sigma_1^{-1} \sigma_2^{p-1} \sigma_1^{-1} \sigma_2^{q-1} \sigma_1^{-1} \sigma_2^{r-1})\sigma_2 \nonumber \end{eqnarray} \end{proof} \begin{prop} \label{prop: rewrite} Let $\delta_3^k P$ be a minimal representative for a strongly quasipositive $3$-braid $b$ where $P = w_1w_2 \ldots w_s P'$ with $P' \in \{1, \sigma_1^{p}, \sigma_1^{p} a_{13}^{q}\}$ as above. Suppose that $s > 0$. 
$(1)$ If $k = 3r$, then $\displaystyle b \sim \delta_3^{3(r+s)} \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1}\sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}$; $(2)$ If $k = 3r + 1$, then $\displaystyle b \sim \delta_3^{3(r +s+1)} \big(\prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1}\sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_1^{-1} \sigma_2^{p_{s+1}-1} \sigma_1^{-1} \sigma_2^{q_{s+1}-1}$; $(3)$ If $k = 3r + 2$, then $\displaystyle b \sim \delta_3^{3(r + s+1)} \big(\prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1}\sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_1^{-1} \sigma_2^{p_{s+1} - 1}$. \end{prop} \begin{proof} Suppose that $k = 3r$. Then Lemma \ref{lemma: minimal form}(2) and Lemma \ref{lemma: simple rewrite}(3) show that \begin{eqnarray} b = \delta_3^k \prod_{i=1}^s \sigma_1^{p_i} a_{13}^{q_i} \sigma_2^{r_i} & = & \delta_3^{3r} \prod_{i=1}^s \sigma_2^{-1} \big(\delta_3^3 \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_2 \nonumber \\ & = & \sigma_2^{-1} \big( \delta_3^{3(r+s)}\prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_2, \nonumber \end{eqnarray} which is Assertion (1). Suppose that $k = 3r + 1$. 
Lemma \ref{lemma: minimal form}(2) and Lemma \ref{lemma: simple rewrite}(2) show that \begin{eqnarray} b & = & \delta_3^{3r+1} \big(\prod_{i=1}^s \sigma_1^{p_i} a_{13}^{q_i} \sigma_2^{r_i}\big)\sigma_1^{p_{s+1}} a_{13}^{q_{s+1}} \nonumber \\ & = & \delta_3^{3r+1} \big(\sigma_2^{-1} \big( \delta_3^{3s} \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_2\big) \big(\sigma_2^{-1} \big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \sigma_1^{-1} \sigma_2^{q_{s+1}-1} \delta_3^2 \big) a_{13} \big)\nonumber \\ & = & a_{13}^{-1} \big(\delta_3^{3r + 3s+1} \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big)\big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \sigma_1^{-1} \sigma_2^{q_{s+1}-1} \delta_3^2 \big) a_{13} \nonumber \\ & = & (\delta_3^2 a_{13})^{-1} \delta_3^{3(r + s+1)} \big( \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big)\big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \sigma_1^{-1} \sigma_2^{q_{s+1}-1} \big) (\delta_3^2 a_{13})\nonumber \end{eqnarray} This is Assertion (2). Finally, suppose that $k = 3r + 2$. 
Lemma \ref{lemma: minimal form}(2) and Lemma \ref{lemma: simple rewrite}(1) show that \begin{eqnarray} b & = & \delta_3^{3r + 2} \big(\prod_{i=1}^s \sigma_1^{p_i} a_{13}^{q_i} \sigma_2^{r_i}\big)\sigma_1^{p_{s+1}} \nonumber \\ & = & \delta_3^{3r+2} \big(\sigma_2^{-1} \big( \delta_3^{3s} \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big) \sigma_2\big) \big(\sigma_2^{-1} \big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \delta_3 \big) \sigma_1 \big)\nonumber \\ & = & \sigma_1^{-1} \big(\delta_3^{3r + 3s+2} \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big)\big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \delta_3 \big) \sigma_1\nonumber \\ & = & (\delta_3 \sigma_1)^{-1} \delta_3^{3(r + s+1)} \big( \prod_{i=1}^s \sigma_1^{-1} \sigma_2^{p_i-1} \sigma_1^{-1} \sigma_2^{q_i-1} \sigma_1^{-1} \sigma_2^{r_i-1}\big)\big(\sigma_1^{-1} \sigma_2^{p_{s+1}-1} \big) (\delta_3 \sigma_1) \nonumber \end{eqnarray} This completes the proof. \end{proof} \subsection{Definite strongly quasipositive $3$-braids} Murasugi \cite{Mu1} and Erle \cite{Erle} have determined the signatures of the closures of $3$-braids. \begin{prop} \label{prop: signatures} Suppose that $b = \delta_3^{3d} \sigma_1^{-1} \sigma_2^{a_1} \sigma_1^{-1} \sigma_2^{a_2} \cdots \sigma_1^{-1} \sigma_2^{a_n}$ where each $a_j \geq 0$. Then there is an integer $\epsilon(d) \in \{-1, 0, 1\}$ congruent to $d$ (mod $2$) such that $$\sigma(\widehat b) = \left\{ \begin{array}{ll} n - 4d - 1 + \epsilon(d) & \mbox{ if each $a_j = 0$} \\ n - 4d - \sum_{i=1}^n a_i & \mbox{ if some $a_j \ne 0$} \end{array} \right.$$ \qed \end{prop} \begin{prop} \label{prop: when definite} Let $\delta_3^k P$ be a minimal representative for a strongly quasipositive $3$-braid $b$ where $P = w_1w_2 \ldots w_s P'$ with $P' \in \{1, \sigma_1^{p}, \sigma_1^{p} a_{13}^{q}\}$ as above. $(1)$ If $P = 1$, then $\widehat b = T(3, k)$. 
In particular, $\widehat b$ is definite if and only if $b = \delta_3^k$ where $k \leq 5$. $(2)$ If $P \ne 1$ and $s = 0$, then $b$ is definite if and only if either $(a)$ $b = \delta_3^k \sigma_1^p$ for some $p \geq 1$, where either $k \leq 3$, or $k = 4$ and $p \in \{1, 2\}$, or $(b)$ $b = \delta_3 \sigma_1^p a_{13}^q$ where $p, q \geq 1$. In this case, $\widehat b = T(2, p+1) \# T(2, q+1)$. $(3)$ If $P \ne 1, s > 0$, and $p_i = q_i = r_i = 1$ for all values of $i$, then $b$ is definite if and only if $b = \sigma_1 a_{13} \sigma_2$. $(4)$ If $k = 3r, s> 0$, and $\max\{p_i, q_i, r_i\} \geq 2$, then $b$ is definite if and only if $b = \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1}$. $(5)$ If $k = 3r + 1, s >0$, and $\max\{p_i, q_i, r_i\} \geq 2$, then $b$ is indefinite. $(6)$ If $k = 3r + 2, s> 0$, and $\max\{p_i, q_i, r_i\} \geq 2$, then $b$ is indefinite. \end{prop} \begin{proof} (1) This is well known. (2) Lemma \ref{lemma: minimal form}(1) implies that $b$ is either $\delta_3^k \sigma_1^p$ for some $p \geq 1$ or $\delta_3^k \sigma_1^p a_{13}^q$ where $k \equiv 1$ {\rm (mod $3$)} and $p, q \geq 1$. Suppose that $b$ is definite. If $b = \delta_3^k \sigma_1^p$ for some $p \geq 1$, then $k \leq 5$ as $b \supset \delta_3^k$. One easily verifies that $b$ is always definite if $k = 0$. Further, $$b \sim \left\{ \begin{array}{ll} \sigma_1^{p+1} \sigma_2 & \mbox{ if } k = 1 \\ \sigma_1^{p+3} \sigma_2 & \mbox{ if } k = 2 \\ \sigma_1^{p+2} \sigma_2 \sigma_1^2 \sigma_2 & \mbox{ if } k = 3 \\ \sigma_1^{p+3} \sigma_2 \sigma_1^3 \sigma_2 & \mbox{ if } k = 4 \\ \sigma_1^{p+2} \sigma_2 \sigma_1^3 \sigma_2\sigma_1^{2} \sigma_2 & \mbox{ if } k = 5 \\ \end{array} \right.$$ The symmetrised Seifert form of $\widehat b$ is congruent to $A_{p}, A_{p+2}, D_{p+4}$ when $k = 1,2,3$, so is always definite. When $k = 4$, it is $E_7, E_8$ when $p = 1,2$ (cf. \cite[page 351]{Baa}), and therefore definite, and is indefinite for $p \geq 3$ (cf. fifth paragraph of \cite[page 356]{Baa}).
The form is always indefinite when $k = 5$, as is noted in the second paragraph of \cite[page 356]{Baa}. Thus $b = \delta_3^k \sigma_1^p$ is definite if and only if $k \leq 3$ or $k = 4$ and $p \in \{1, 2\}$. Next suppose that $b = \delta_3^k \sigma_1^p a_{13}^q$ where $k \equiv 1$ {\rm (mod $3$)} and $p, q \geq 1$. Write $k = 3r +1$ where $r$ is a non-negative integer. If $r \geq 2$, then $b \supset \delta_3^6$, so is indefinite. If $r = 0$, then $b = \delta_3 \sigma_1^p a_{13}^q \sim \delta_3 \sigma_2^p \sigma_1^q \sim \sigma_1^{q+1} \sigma_2^{p+1}$. Then $\widehat b = T(2, q+1) \# T(2, p+1)$, so is definite. If $r =1$, then \begin{eqnarray} b = \delta_3^4 \sigma_1^p a_{13}^q \sim \delta_3^3 \sigma_1^p \delta_3 \sigma_2^q = \delta_3^3 \sigma_1^{p+1} \sigma_2^{q+1} & \sim & \delta_3^2 \sigma_1^{p+1} (\sigma_2^{q+1} \sigma_1 \sigma_2) \nonumber \\ & = & \delta_3^2 \sigma_1^{p+1} (\sigma_1 \sigma_2 \sigma_1^{q+1}) \nonumber \\ & = & \delta_3^2 \sigma_1^{p+2} \sigma_2 \sigma_1^{q+1} \nonumber \\ & \sim & \sigma_1^{p+2} \sigma_2 \sigma_1^{q+2} \sigma_2 \sigma_1 \sigma_2 \nonumber \\ & =& \sigma_1^{p+2} \sigma_2 \sigma_1^{q+3} \sigma_2 \sigma_1 \nonumber \\ & \sim & \sigma_1^{p+3} \sigma_2 \sigma_1^{q+3} \sigma_2 \nonumber \end{eqnarray} Since $p+3, q+3 \geq 4$, $b$ is indefinite by the fifth paragraph of \cite[page 356]{Baa}. Thus when $b = \delta_3^k \sigma_1^p a_{13}^q$ where $k \equiv 1$ {\rm (mod $3$)} and $p, q \geq 1$, it is definite if and only if $k = 1$. (3) Suppose that $P \ne 1, s > 0$, and $p_i = q_i = r_i = 1$ for all values of $i$. Then Lemma \ref{lemma: minimal form}(2) implies that \begin{eqnarray} b_1(F(b)) & = & \left\{ \begin{array}{ll} 6r + 3s - 2 & \mbox{ if } k = 3r \\ 6r + 3s + 2 & \mbox{ if } k = 3r + 1 \\ 6r + 3s + 3 & \mbox{ if } k = 3r + 2 \end{array} \right. 
\nonumber \end{eqnarray} On the other hand, it follows from Proposition \ref{prop: rewrite} that $b \sim \delta_3^{3d(b)} \sigma_1^{-e(b)}$ where $$d(b) = \left\{ \begin{array}{ll} s + r & \mbox{ if } k = 3r \\ s + r + 1 & \mbox{ otherwise} \end{array} \right. \; \; \mbox{ and } \;\; e(b) = \left\{ \begin{array}{ll} 3s & \mbox{ if } k = 3r \\ 3s + 2 & \mbox{ if } k = 3r + 1 \\ 3s + 1 & \mbox{ if } k = 3r + 2 \end{array} \right.$$ Then Proposition \ref{prop: signatures} shows that \begin{eqnarray} \sigma(\widehat b) = e(b) - 4d(b) - 1 + \epsilon(d(b)) & = & \left\{ \begin{array}{ll} 3s - 4(s+r) - 1 + \epsilon(s+r) & \mbox{ if } k = 3r \\ (3s + 2) - 4(s+r+1) - 1 + \epsilon(s+r+ 1) & \mbox{ if } k = 3r + 1 \\ (3s + 1) - 4(s+r+1) - 1 + \epsilon(s+r+ 1) & \mbox{ if } k = 3r + 2 \end{array} \right. \nonumber \\ & = & \left\{ \begin{array}{ll} -(s + 4r) - 1 + \epsilon(s+r) & \mbox{ if } k = 3r \\ -(s + 4r) - 3 + \epsilon(s+r+ 1) & \mbox{ if } k = 3r + 1 \\ -(s + 4r) - 4 + \epsilon(s+r+ 1) & \mbox{ if } k = 3r + 2 \end{array} \right. \nonumber \end{eqnarray} Then $\widehat b$ is definite if and only if \begin{eqnarray} \label{eqn: r, s} 0 = b_1(F(b)) + \sigma(\widehat b) = \left\{ \begin{array}{ll} 2(r+s) - 3 + \epsilon(s+r) & \mbox{ if } k = 3r \\ 2(r+s) - 1 + \epsilon(s+r+1) & \mbox{ if } k = 3r + 1 \\ 2(r+s) - 1 + \epsilon(s+r+ 1)& \mbox{ if } k = 3r + 2 \end{array} \right. \end{eqnarray} Now if $b \supset \delta_3^k$ is definite, we have $k \leq 5$. We assume this below. Then writing $k = 3r + t$ where $t = 0, 1, 2$, we have $r = 0, 1$. First suppose that $r \not \equiv s$ (mod $2$). Since $\epsilon(d) \equiv d$ (mod $2$), (\ref{eqn: r, s}) implies that $k = 3r \in \{0, 3\}$ and $2(r+s) = 3 \pm 1 \in \{2, 4\}$. But $r + s$ is odd by assumption, so $r + s = 1$. Since $s \geq 1$ it follows that $r = 0, s = 1$. Thus under our assumptions, when $r \not \equiv s$ (mod $2$), $\widehat b$ is definite if and only if $b = \sigma_1 a_{13} \sigma_2$. 
Next suppose that $r \equiv s$ (mod $2$). In this case (\ref{eqn: r, s}) implies that $k \in \{1, 2, 4, 5\}$ and $$2(r+s) = 1 - \epsilon(s+r+1)$$ Since $r + s$ is even, it follows that $r+s = 0$. But this is impossible as $r \geq 0$ and $s \geq 1$. Thus this case does not arise. \begin{remark} \label{rmk: only s = 1} {\rm It follows from (3) that the braid $(\sigma_1 a_{13} \sigma_2)^2$ is indefinite. Thus so are the braids considered in (4), (5) and (6) whenever $s > 1$.} \end{remark} (4) Suppose that $k = 3r, s> 0$, and $\max\{p_i, q_i, r_i\} \geq 2$. We know that $b$ is indefinite if $s > 1$ by Remark \ref{rmk: only s = 1}, so by Lemma \ref{lemma: minimal form}(2) suppose that $b = \delta_3^{3r} \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1}$ where $\max\{p_1, q_1, r_1\} \geq 2$. Then $$b_1(F(b)) = -2 + 6r + (p_1 + q_1 + r_1)$$ On the other hand, $b \sim \delta_3^{3(1 + r)} \sigma_1^{-1} \sigma_2^{p_1-1} \sigma_1^{-1}\sigma_2^{q_1-1} \sigma_1^{-1} \sigma_2^{r_1-1}$ by Proposition \ref{prop: rewrite} so it follows from Proposition \ref{prop: signatures} that $$\sigma(\widehat b) = 2 -4r - (p_1 + q_1 + r_1)$$ Then $\widehat b$ is definite if and only if $0 = b_1(F(b)) + \sigma(\widehat b) = 2(r+1) -2 = 2r$. This occurs if and only if $r = 0$. Equivalently, $b = \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1}$. (5) Suppose that $k = 3r + 1, s >0$, and $\max\{p_i, q_i, r_i\} \geq 2$. Since $b$ is indefinite for $s > 1$, one can suppose by Lemma \ref{lemma: minimal form}(2) that $b = \delta_3^{3r+1} \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1} \sigma_1^{p_2} a_{13}^{q_2}$. As $b \supset \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1} \sigma_1^{p_2}$, part (5) is a consequence of part (6). (6) Suppose that $k = 3r + 2, s >0$, and $\max\{p_i, q_i, r_i\} \geq 2$. As in the previous two cases, by Lemma \ref{lemma: minimal form}(2) we can suppose that $b = \delta_3^{3r+2} \sigma_1^{p_1} a_{13}^{q_1} \sigma_2^{r_1} \sigma_1^{p_2}$.
In this case $$b_1(F(b)) = 6r + 2 + (p_1 + q_1 + r_1 + p_{2})$$ On the other hand, $b \sim \delta_3^{3(r + 2)} \sigma_1^{-1} \sigma_2^{p_1-1} \sigma_1^{-1}\sigma_2^{q_1-1} \sigma_1^{-1} \sigma_2^{r_1-1} \sigma_1^{-1} \sigma_2^{p_{2}-1}$ by Proposition \ref{prop: rewrite} so it follows from Proposition \ref{prop: signatures} that $$\sigma(\widehat b) = - 4r - (p_1 + q_1 + r_1 + p_{2})$$ Then $\widehat b$ is definite if and only if $0 = b_1(F(b)) + \sigma(\widehat b) = 2(r+1)$. Since $r \geq 0$, this is impossible. Thus $\widehat b$ is indefinite. \end{proof} \begin{cor} \label{cor: definiteness, fibredness, and forms} Let $b = \delta_3^k P$ be as above represent a non-trivial, non-split, prime link. Then $b$ is definite if and only if $b$ is conjugate to one of $(1)$ $\sigma_1^p a_{13}^q \sigma_2^r$ where $p, q, r \geq 1$. In this case $\widehat b$ is the Montesinos link $M(1; 1/p,1/q,1/r)$ and is not a fibred link. $(2)$ $\delta_3 \sigma_1^m$ where $m \geq 1$. In this case $\widehat b$ is the torus link $T(2,m+1)$ and is fibred. Further, $\mathcal{F}(b) \cong A_m$. $(3)$ $\delta_3^3 \sigma_1^{m-4}$ where $m \geq 4$. In this case $\widehat b$ is the pretzel link $P(-2,2,m-2)$ and is fibred. Further, $\mathcal{F}(b) \cong D_m$. $(4)$ $\delta_3^4$. In this case $\widehat b$ is the torus knot $T(3,4)$ and is fibred. Further, $\mathcal{F}(b) \cong E_6$. $(5)$ $\delta_3^4 \sigma_1$. In this case $\widehat b$ is the pretzel link $P(-2,3,4)$ and is fibred. Further, $\mathcal{F}(b) \cong E_7$. $(6)$ $\delta_3^5$. In this case $\widehat b$ is the torus knot $T(3,5)$ and is fibred. Further, $\mathcal{F}(b) \cong E_8$. \end{cor} \begin{proof} It is easy to verify that the closures of the braids listed in the corollary are as stated. Further $\widehat b$ is not fibred in case (1) and fibred otherwise (\cite[Theorem 1.1]{Ni2}, \cite[Theorem 3.3]{Stoi}).
The symmetrised Seifert forms of the associated quasipositive surfaces are listed in \S \ref{subsec: symm seifert forms of baskets}. To complete the proof of the corollary, we need to show that the corollary gives a complete list of conjugacy class representatives of definite strongly quasipositive $3$-braids whose closures are non-trivial, non-split, and prime. Proposition \ref{prop: when definite} provides a complete list of conjugacy class representatives of definite strongly quasipositive $3$-braids: \begin{itemize} \item $\sigma_1^p a_{13}^q \sigma_2^r$ where $p, q, r \geq 1$; \item $\delta_3^k$ where $k \leq 5$; \item $\delta_3^k \sigma_1^p$ where $k \leq 3$ and $p \geq 1$ or $k = 4$ and $p \in \{1, 2\}$; \item $\delta_3 \sigma_1^p a_{13}^q$ where $p, q \geq 1$. \end{itemize} Thus each of the braids listed in the corollary is definite. Suppose that $b$ is one of the definite braids listed in the proposition whose closure is non-trivial, non-split, and prime. Since $\widehat b$ is prime, the case that $b \sim \delta_3 \sigma_1^p a_{13}^q$ is excluded: $$b \sim \sigma_1^{p+1} \sigma_2^{q+1} \Rightarrow \widehat b = T(p+1, 2) \# T(q+1, 2)$$ We also exclude the braids $1$ and $\delta_3$ by the non-split and non-triviality conditions, as well as the braids $\sigma_1^p$ (the case $k = 0$), whose closures are split. It remains to examine the braids $\delta_3^2 \sigma_1^p$ where $p \geq 0$ and $\delta_3^4 \sigma_1^2$. But if $b = \delta_3^2 \sigma_1^p$ where $p \geq 0$, then $b = \sigma_1 \sigma_2 \sigma_1 \sigma_2 \sigma_1^p \sim \sigma_1^{p+1} (\sigma_2 \sigma_1 \sigma_2) = \sigma_1^{p+1} (\sigma_1 \sigma_2 \sigma_1) = \sigma_1^{p+2} \sigma_2 \sigma_1 \sim \sigma_1^{p+3} \sigma_2 = \sigma_1^{p+2} \delta_3 \sim \delta_3 \sigma_1^{p+2}$, which is listed in (2). And if $b = \delta_3^4 \sigma_1^2$, then $b \sim \sigma_1 \delta_3^4 \sigma_1 = \delta_3^4 a_{13} \sigma_1 = \delta_3^4 \delta_3 = \delta_3^5$, which is (6).
\end{proof} \subsection{The proof of Theorem \ref{thm: definite 3-braids}} \label{subsec: proof of thm on 3-braids} Suppose that $L$ is a non-split, non-trivial, strongly quasipositive link of braid index $3$. Stoimenow has shown that strongly quasipositive links of braid index $3$ are the closures of strongly quasipositive $3$-braids (\cite[Theorem 1.1]{Stoi}), so without loss of generality we can suppose that $L = \widehat b$ where $b = \delta_3^k P$ is a minimal representative of a conjugacy class of strongly quasipositive $3$-braids. We know that $\widehat b$ is definite if and only if it is one of the links listed in (1) through (6) of Corollary \ref{cor: definiteness, fibredness, and forms}, which shows the equivalence of statements (a) and (c) of part (1) of the theorem. In each of the six cases $\Sigma_2(\widehat b)$ is an L-space: the links in (1) are alternating (\cite[Proposition 3.3]{OS2}) while those in (2), (3), (4), (5), and (6) have $2$-fold branched covers with finite fundamental groups. Conversely, Theorem \ref{thm: bbg definite} shows that $b$ is definite if $\Sigma_2(\widehat b)$ is an L-space. Thus (a) and (b) of part (1) of the theorem are equivalent, which completes the proof of (1). Closures of positive braids are fibred \cite{Sta}, which gives one direction of (2). For the other direction, suppose that $\widehat b$ is fibred. Then the minimal representative of $b$ has $k \geq 1$ (\cite[Theorem 1.1]{Ni2}, \cite[Theorem 3.3]{Stoi}). If it is also definite, Proposition \ref{prop: when definite} implies that it is conjugate to a braid of the form $\delta_3^k \sigma_1^{m}$, for some $k \geq 1$ and $m \geq 0$, or $\delta_3 \sigma_1^{p_1} a_{13}^{q_1} = \sigma_2^{p_1} \sigma_1^{q_1} \delta_3$, each of which is a positive braid. This proves (2). \section{Seifert forms of basket links} \label{sec: seifert forms of baskets} Suppose that $b = \delta_n P$ is a BKL-positive word.
Each letter $a_{rs}$ of $P$ determines a primitive element $\alpha$ of $H_1(F(b))$ as depicted in Figure \ref{fig: alphars}. \begin{figure} \caption{The class associated to $a_{rs}$} \label{fig: alphars} \end{figure} Here, a representative cycle for $\alpha$ consists of an arc which passes over the band in $F(b)$ corresponding to $a_{rs}$ and then descends to pass over the bands corresponding to the letters $\sigma_{r}, \sigma_{r+1}, \ldots, \sigma_{s-1}$ of the initial $\delta_n$ factor of $b$. The set of all such $\alpha$ forms a basis for $H_1(F(b))$. \begin{remark} \label{rem: alpha to alpha'} {\rm Recall that $\delta_n b \delta_n^{-1} = \delta_n P'$ where a letter $a_{rs}$ of $P$ is converted to $a_{r+1, s+1}$ if $s \leq n-1$ and $a_{1, r+1}$ if $s = n$ (cf. Identity (\ref{conj by delta})). We noted in Remark \ref{rem: delta rotn} that the geometric braid $b' = \delta_n P'$ can be obtained by rotating the geometric braid of $\delta_n P$ through an angle of $\frac{2 \pi}{n}$. It's easy to see that $F(b')$ can also be obtained from $F(b)$ by a rotation of $\frac{2 \pi}{n}$. This rotation takes the class $\alpha$ associated to $a_{rs}$ described above to the class associated to $a_{r+1, s+1}$ if $s \leq n-1$ and the {\it negative} of the class associated to $a_{1, r+1}$ if $s = n$. } \end{remark} We are interested in the Seifert form on $H_1(F(b))$ and as such we need to calculate the linking numbers $lk(\alpha^+, \beta)$ where $\alpha, \beta \in H_1(F(b))$ are classes corresponding to letters of $P$. Here, $\alpha^+$ is represented by the cycle obtained by pushing a representative cycle for $\alpha$ to the positive side of $F(b)$. Given our orientation convention for braid closures (cf. Figure \ref{fig: braid conventions}), the side of $F(b)$ which is shaded in Figure \ref{fig: alphars} is its negative side. 
\begin{lemma} \label{lemma: linking info} Suppose that $b = \delta_n P$ is a BKL-positive word and that $P = a_1 a_2 \ldots a_m$ where each $a_i$ is one of the BKL generators. Let $\alpha_1, \alpha_2, \ldots, \alpha_m$ be the associated basis elements of $H_1(F(b))$. Fix $1 \leq i, j \leq m$ and suppose that $\alpha_i$ corresponds to the letter $a_{rs}$ while $\alpha_j$ corresponds to $a_{tu}$. Then $$lk(\alpha_i^+, \alpha_j) = \left\{ \begin{array}{rl} -1 & \hbox{if either } i = j \mbox{ or } i > j \mbox{ and } r \leq t < s \leq u \\ 1 & \hbox{if } i > j \mbox{ and } t < r \leq u < s \\ 0 & \hbox{otherwise} \end{array} \right.$$ \end{lemma} \begin{proof} If $r < s < t < u$, consideration of the cycles representing $\alpha_i$ and $\alpha_j$ shows that they are separated by a $2$-sphere in $S^3$. Thus $lk(\alpha_i^+, \alpha_j) = 0$. It then follows that the same holds whenever $t < r < s < u$, or $t < u < r < s$, or $r < t < u < s$. Indeed, in each of these cases we can simultaneously conjugate $a_{rs}$ and $a_{tu}$ by a power of $\delta_n$ to obtain new letters $a_{r's'}$ and $a_{t'u'}$ where $r' < s' < t' < u'$. Remark \ref{rem: alpha to alpha'} shows that this conjugation changes linking numbers at most up to sign, so $lk(\alpha_i^+, \alpha_j) = 0$. Next, if $r < s = t < u$ the cycles representing $\alpha_i^+$ and $\alpha_j$ are separated by a $2$-sphere; this is obvious if $i < j$ and is easy to see by considering the cycles representing $\alpha_i$ and $\alpha_j$ if $j < i$. Thus $lk(\alpha_i^+, \alpha_j) = 0$. Remark \ref{rem: alpha to alpha'} then implies, as in the previous paragraph, that $lk(\alpha_i^+, \alpha_j) = 0$ whenever $t < r < s = u$ or $r = t < u < s$. 
In the case that $t < u = r < s$, it is easy to see that $$lk(\alpha_i^+, \alpha_j) = \left\{ \begin{array}{ll} 0 & \mbox{ if } i < j \\ 1 & \mbox{ if } j < i \end{array} \right.$$ Remark \ref{rem: alpha to alpha'} then implies that $$lk(\alpha_i^+, \alpha_j) = \left\{ \begin{array}{rl} 0 & \mbox{ if } i < j \\ -1 & \mbox{ if } j < i \end{array} \right.$$ when $r < t < s = u$ or $r = t < s < u$. The final three cases to consider are when $i = j$, or $i \ne j$ and $r < t < s < u$, or $i \ne j$ and $t < r < u < s$. Before dealing with these cases we need to make an observation. Let $D_i$ be the $2$-disk in $F(b)$ with boundary the $i^{th}$ component of the trivial $n$-braid and suppose that $b$ contains a word $a_{r_0 r_1} a_{r_1 r_2} \cdots a_{r_{k-1} r_k}$. Let $A$ be a smooth arc in the interior of $F(b)$ obtained by concatenating a core of the half-twisted band in $F(b)$ corresponding to $a_{r_0 r_1}$, an arc properly embedded in $D_{r_1}$, a core of the band corresponding to $a_{r_1 r_2}$, an arc properly embedded in $D_{r_2}$, etc., ending with a core of the $1$-handle corresponding to $a_{r_{k-1} r_k}$. Thinking of $A$ as a properly embedded arc in the union $X$ of $D_{r_0}, D_{r_1}, \ldots, D_{r_k}$ and the bands corresponding to $a_{r_0 r_1}, a_{r_1 r_2}, \ldots a_{r_{k-1} r_k}$, the reader will verify that $A$ has a tubular neighbourhood in $X$ which is isotopic (rel $X \cap (D_{r_0} \cup D_{r_k})$) to a half-twisted band corresponding to $a_{r_0 r_k}$. It follows that the boundary of a tubular neighbourhood of the cycle corresponding to $a_{rs}$ is a Hopf band whose components have linking number $-1$ when they are like-oriented. Hence, $$lk(\alpha_i^+, \alpha_i) = -1,$$ which is the case $i = j$. Suppose that $i \ne j$ and $r < t < s < u$. 
From the previous paragraph we see that if we replace $a_{tu}$ by $a_{ts} a_{su}$ and let $\alpha_j(1), \alpha_j(2)$ correspond to $a_{ts}$ and $a_{su}$ respectively, then $$lk(\alpha_i^+, \alpha_j) = lk(\alpha_i^+, \alpha_j(1)) + lk(\alpha_i^+, \alpha_j(2))$$ Consideration of the cases $r < t < s = u$ and $r < s = t < u$, which were handled above, implies that $lk(\alpha_i^+, \alpha_j(2)) = 0$ while $lk(\alpha_i^+, \alpha_j(1)) = \left\{ \begin{array}{rl} 0 & \mbox{ if } i < j \\ -1 & \mbox{ if } j < i \end{array} \right.$. Thus $$lk(\alpha_i^+, \alpha_j) = \left\{ \begin{array}{rl} 0 & \mbox{ if } i < j \\ -1 & \mbox{ if } j < i \end{array} \right.$$ Finally, an application of Remark \ref{rem: alpha to alpha'} shows that if $i \ne j$ and $t < r < u < s$, then $$lk(\alpha_i^+, \alpha_j) = \left\{ \begin{array}{ll} 0 & \mbox{ if } i < j \\ 1 & \mbox{ if } j < i \end{array} \right.$$ \end{proof} \begin{remark} \label{rem: root lattice} {\rm The lemma shows that $H_1(F(b))$ is generated by elements $\alpha$ for which $\mathcal{F}(b)(\alpha, \alpha) = -2$. Hence, when $\mathcal{F}(b)$ is definite, it is a root lattice.} \end{remark} \section{Some indefinite BKL-positive words} \label{sec: indefinite bkl pos words} \begin{lemma} \label{lemma: bound on spans} Suppose that $b = \delta_n^k P$ is a BKL-positive $n$-braid. $(1)$ If $P$ contains a letter of span $l$ where $4 \leq l \leq n-4$, then $b$ is indefinite. $(2)$ If $P$ contains a letter of span $3$ or $n-3$, then \begin{itemize} \item[{\rm (a)}] $b$ is indefinite if $n \geq 9$. \item[{\rm (b)}] $\mathcal{F}(b)$ contains a primitive $E_n$ sublattice for $n = 6, 7, 8$. \end{itemize} $(3)$ If $P$ contains the square of a letter of span $2$ or $n-2$, then \begin{itemize} \item[{\rm (a)}] $b$ is indefinite if $n \geq 8$. \item[{\rm (b)}] $\mathcal{F}(b)$ contains a primitive $E_{n+1}$ sublattice for $n = 5, 6, 7$. 
\end{itemize} \end{lemma} \begin{proof} After conjugating by an appropriate power of $\delta_n$ we can suppose that $b \supseteq \delta_n^2 a_{1, l+1}$. Figure \ref{fig: span3} depicts a schematic of the braid $\delta_n^2 a_{1, l+1}$. \begin{figure} \caption{A schematic of the braid $\delta_n^2 a_{1, l+1}$} \label{fig: span3} \end{figure} Lemma \ref{lemma: linking info} shows that if $\eta, \zeta$ are any two of the classes shown in Figure \ref{fig: span3} then $\mathcal{F}(b)(\eta, \zeta) = -2$ if $\eta = \zeta$ and is $0$ or $\pm 1$ otherwise. In fact, the classes define a sublattice of $\mathcal{F}(b)$ corresponding to the tree in Figure \ref{fig: e6 tree}. \begin{figure}\label{fig: e6 tree} \end{figure} The only trees with vertices of weight $-2$ and edges of weight $\pm 1$ whose associated forms are definite correspond to the Dynkin diagrams associated to $A_m, D_m, E_6, E_7$, or $E_8$ (see \cite[pages 61-62]{HNK}). It follows that $\mathcal{F}(b)$ is indefinite if $4 \leq l \leq n-4$ or $l \in \{3, n-3\}$ and $n \geq 9$. If $l \in \{3, n-3\}$ where $n = 6,7$, or $8$, the tree corresponds to a primitive $E_n$ sublattice of $\mathcal{F}(b)$. This completes the proof of parts (1) and (2) of the lemma. A similar argument deals with part (3) where we can suppose that $P$ contains $a_{13}^2$. The graph associated to the classes depicted in Figure \ref{fig: square} \begin{figure} \caption{A schematic of the braid $\delta_n^2 a_{13}^2$} \label{fig: square} \end{figure} determines a sublattice of $\mathcal{F}(b)$ corresponding to the tree in Figure \ref{fig: square tree}. \begin{figure}\label{fig: square tree} \end{figure} It follows that $b$ is indefinite if $n \geq 8$ and that $\mathcal{F}(b)$ contains a primitive $E_{n+1}$ sublattice for $n = 5, 6, 7$. \end{proof} Recall that two BKL generators $a_{rs}$ and $a_{tu}$ are {\it linked} if either $r < t < s < u$ or $t < r < u < s$. 
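To illustrate Lemma \ref{lemma: linking info} in use, the following sketch (ours; the encoding of a letter $a_{rs}$ as an integer pair is an assumption of the sketch, not the paper's notation) assembles the Seifert matrix of $b = \delta_4 P$ with $P = \delta_4 \delta_4$, so that $b = \delta_4^3$ with closure $T(3,4)$, and confirms that the symmetrised form is an even negative definite lattice of determinant $3$ -- that is, $E_6$ up to sign, in line with the identification of $\mathcal{F}(\delta_4^3)$ used in later sections.

```python
# Build the Seifert matrix of the basket b = delta_4 P, P = delta_4 delta_4,
# from the linking-number formula of the lemma, and verify that the
# symmetrised form is an even negative definite lattice of determinant 3.
from fractions import Fraction

def lk(i, j, rs, tu):
    """lk(alpha_i^+, alpha_j) per the lemma; rs, tu encode a_{rs}, a_{tu}."""
    r, s = rs
    t, u = tu
    if i == j:
        return -1
    if i > j:
        if r <= t < s <= u:
            return -1
        if t < r <= u < s:
            return 1
    return 0

def det(M):
    """Determinant by exact fraction Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(A), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return d

# letters of P = delta_4 delta_4, in order
P = [(1, 2), (2, 3), (3, 4)] * 2
V = [[lk(i, j, P[i], P[j]) for j in range(6)] for i in range(6)]
F = [[V[i][j] + V[j][i] for j in range(6)] for i in range(6)]

assert all(F[i][i] == -2 for i in range(6))   # the form is even
assert det(F) == 3                            # |discriminant| of E_6
# negative definite: leading principal minors alternate in sign
assert all((-1) ** k * det([row[:k] for row in F[:k]]) > 0
           for k in range(1, 7))
```

The same routine applied to other BKL-positive words gives a quick mechanical test of the definiteness claims in this section.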
\begin{lemma} \label{lemma: no linking} Suppose that $b = \delta_n^2 P$ is a BKL-positive $n$-braid where $P$ contains a pair of linked letters. Then $b$ is indefinite. \end{lemma} \begin{proof} Suppose that $P$ contains a product $a_{rs} a_{tu}$ of linked letters. We can assume that $r < t < s < u$. Otherwise we conjugate $b$ by $\delta_n^{n-u+1}$. Let $\alpha, \beta_1, \beta_2, \ldots, \beta_{u-s}, \gamma, \theta$ be the elements of $H_1(F(b))$ corresponding respectively to the letters $\sigma_{s-1}, \sigma_s, \sigma_{s+1}, \ldots, \sigma_{u-1}$ in the second $\delta_n$, and $a_{rs}$ and $a_{tu}$. (See Figure \ref{fig: alphars}.) Set $\beta = \beta_1 + \cdots + \beta_{u-s}$. From Lemma \ref{lemma: linking info} the restriction of the Seifert form of $F(b)$ to the span of $\alpha, \beta, \gamma, \theta$ has matrix $$\left(\begin{matrix} -1 & \;\;\; 0 & \;\;\; 0 & \;\;\; 0 \\ \;\;\; 1 & -1 & \;\;\; 0 & \;\;\; 0 \\ -1 & \;\;\; 0 & -1 & \;\;\; 0 \\ \;\;\; 0 & -1 & \;\;\; 1 & -1 \end{matrix} \right) $$ Hence $\mathcal{F}(b)|_{\langle \alpha, \beta, \gamma, \theta \rangle}$ is represented by the matrix $$\left(\begin{matrix} -2 & \;\;\; 1 & -1 & \;\;\; 0 \\ \;\;\; 1 & -2 & \;\;\; 0 & -1 \\ -1 & \;\;\; 0 & -2 & \;\;\; 1 \\ \;\;\; 0 & -1 & \;\;\; 1 & -2 \end{matrix} \right)$$ If $c_i$ is the $i^{th}$ column of this matrix, then $c_1 + c_2 - c_3 - c_4 = 0$, so this restriction is degenerate and $\mathcal{F}(b)$ is indefinite. \end{proof} \begin{lemma} \label{lemma: when commuting implies indefinite} Suppose that $b = \delta_n^k P$ is a BKL-positive braid where $P$ contains a pair of commuting letters $a_{rs}, a_{tu}$ each of whose spans is between $2$ and $n-2$. Then $b$ is indefinite. \end{lemma} \begin{proof} Up to conjugation by a power of $\delta_n$, we can suppose that $1 \leq r < s < t < u \leq n$. Consider Figure \ref{fig: commute} which represents a braid contained in $\delta_n^2P$. 
\begin{figure}\label{fig: commute} \end{figure} The associated graph is a tree with two vertices of valency $3$, as depicted in Figure \ref{fig: classes for 2 vertices valency 3 tree}, \begin{figure}\label{fig: classes for 2 vertices valency 3 tree} \end{figure} which implies that $\mathcal{F}(b)$ is indefinite. \end{proof} A {\it $2$-step descending staircase} is a product $$a_{st} a_{rs}$$ where $1 \leq r < s < t \leq n$. The letters $a_{rs}$ and $a_{st}$ are the {\it steps} of the staircase. \begin{lemma} \label{lemma: 2-step descending stairs of length two or more indefinite} Suppose that $b = \delta_n^k P$ is a BKL-positive braid where $P$ contains a $2$-step descending staircase $a_{st} a_{rs}$ whose steps are of span $2$ or more. Then $b$ is indefinite. \end{lemma} \begin{proof} Define classes in $H_1(F(b))$ as follows: \begin{itemize} \item $\theta_1$ corresponds to the class associated to $\sigma_{s-2}$; \item $\theta_0$ corresponds to the class associated to $\sigma_{s-1}$; \item $\theta_2$ corresponds to the class associated to $\sigma_{s}$; \item $\theta_3$ corresponds to the negative of the class associated to $a_{rs}$; \item $\theta_4$ corresponds to the class associated to $a_{st}$. \end{itemize} It follows from Lemma \ref{lemma: linking info} that $\mathcal{F}(b)(\theta_i, \theta_i) = -2$ for each $i$, $\mathcal{F}(b)(\theta_i, \theta_j) \in \{0, 1\}$ for $i \ne j$, and the associated graph is a tree with a vertex of valency $4$. Hence, the restriction of $\mathcal{F}(b)$ to the span of the $\theta_i$ is indefinite. \end{proof} \begin{lemma} \label{lemma: contains sigma1} If $b = \delta_n^2 P$ is a BKL-positive definite $n$-braid and $n(\widehat b) = n$, then $\sigma_1$ is contained in some conjugate of $P$ by a power of $\delta_n$. \end{lemma} \begin{proof} Lemma \ref{lemma: not minimal} implies that each conjugate of $P$ by some $\delta_n^k$ contains a letter of the form $a_{1s'}$. Let $s$ be the largest such $s'$. 
Without loss of generality we can suppose that $P \supseteq a_{1s}$. If $s < n$, Lemma \ref{lemma: not minimal} implies that $P$ also contains a letter of the form $a_{rn}$. Then Lemma \ref{lemma: no linking} shows that $s \leq r$. But then $\delta_n P \delta_n^{-1} \supseteq \delta_n a_{rn} \delta_n^{-1} = a_{1, r+1}$, which contradicts the maximality of $s$ since $s \leq r < r+1 \leq n$. Thus $s = n$ so that $P \supseteq a_{1n}$. Then $\delta_n P \delta_n^{-1} \supseteq \sigma_1$, which is what we set out to prove. \end{proof} \section{BKL-positive braids with forms $E_6, E_7$, and $E_8$} \label{sec: e6, e7, e8} In this section we identify certain situations under which the symmetrised Seifert form of a basket is either $E_6, E_7$, or $E_8$. In this case, \begin{equation} \label{eqn: l(P) bound} (n - 1) + l_{BKL}(P) = \mbox{{\rm rank}} \; H_1(F(b)) = \left\{ \begin{array}{ll} 6 & \mbox{if } \mathcal{F}(b) \cong E_6 \\ 7 & \mbox{if } \mathcal{F}(b) \cong E_7 \\ 8 & \mbox{if } \mathcal{F}(b) \cong E_8 \end{array} \right. \end{equation} \subsection{Definite extensions of $E_6, E_7$, and $E_8$} For convenience we use the positive definite versions of $E_6, E_7$, and $E_8$ in the next lemma. \begin{lemma} \label{lemma: e* case} Suppose that $A^* = \left(\begin{matrix} A & R^T \\ R & 2 \end{matrix}\right)$ is a symmetric bilinear positive definite form on $\mathbb Z^{d}$ where $R = (0, 0, \ldots, 0, 1)$. Then, $(1)$ $A \cong E_6$ implies that $A^* \cong E_7$. $(2)$ $A \cong E_7$ implies that $A^* \cong E_8$. $(3)$ $A \not \cong E_8$. \end{lemma} \begin{proof} We remark that $E_8$ is the unique even symmetric bilinear positive definite unimodular lattice of rank $8$ or less; $E_7$ is the unique non-diagonalisable symmetric bilinear positive definite lattice of determinant $2$ and rank $7$ or less; and $E_6$ is the unique non-diagonalisable symmetric bilinear positive definite lattice of determinant $3$ and rank $6$ or less. 
See the last paragraph of the introduction of \cite{Gri} for a discussion of these statements and further references. Write $A = \left(\begin{matrix} B & * \\ * & * \end{matrix}\right)$ where $B$ is a symmetric bilinear form on $\mathbb Z^{d-2}$ and observe that as $A^*$ is positive definite, $0 < \det(B)$ and $0 < \det(A^*)$. On the other hand, $$\det(A^*)= 2 \det(A) - \det(B)$$ and therefore $$0 < \det(B) < 2\det(A)$$ If $A \cong E_6$, then $\det(A) = 3$, so that $0 < \det(B) < 6$. On the other hand, $B$ is an even form so it can be written $B = C + C^T$ where $C$ is a $5 \times 5$ matrix with integer coefficients. Then as $C - C^T$ is skew-symmetric, $0 = \det(C - C^T) \equiv \det(C + C^T) \hbox{ {\rm(mod $2$)}} = \det(B)$. Thus $\det(B) \in \{2, 4\}$. If $\det(B) = 2$, then $B$ is a definite diagonal form (see the first paragraph of the proof) and hence has determinant at least $32$, a contradiction. Thus $\det(B) = 4$ and therefore the determinant of the even form $A^*$ is $2 \det(A) - \det(B) = 2$. Another application of the remarks of the first paragraph implies that $A^* \cong E_7$. If $A \cong E_7$ then $\det(A) = 2$, so that $0 < \det(B) < 4$. On the other hand, $B$ is an even form of rank $6$ and as there are no even unimodular lattices of rank less than $8$, $\det(B) \in \{2, 3\}$. If $B$ were diagonalisable, its determinant would be at least $64$; thus by the remarks in the first paragraph it must be isomorphic to $E_6$ and therefore have determinant $3$. It follows that $\det(A^*) = 1$ and therefore $A^*$ is an even symmetric unimodular form on $\mathbb Z^8$. Consequently, $A^* \cong E_8$. Finally suppose that $A \cong E_8$. Then $\det(A) = 1$ so that $0 < \det(B) < 2$. Thus $\det(B) = 1$, implying that $B$ is an even symmetric unimodular form of rank $7$, a contradiction. This completes the proof. 
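The determinant bookkeeping in the proof above is easy to verify by machine. The sketch below (our own node numbering, chosen so that the last node of $E_6$ ends a long arm and deleting it leaves $D_5$, as in the $\det(B) = 4$ step) checks the identity $\det(A^*) = 2\det(A) - \det(B)$ against the positive definite Cartan matrices of $D_5, E_6, E_7, E_8$, together with the degenerate bordering of $E_8$ underlying part $(3)$.

```python
# Check det(A*) = 2 det(A) - det(B) on Cartan matrices of D_5, E_6, E_7, E_8.
from fractions import Fraction

def cartan(n, edges):
    """Cartan-like matrix of a tree: 2 on the diagonal, -1 for each edge."""
    M = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    for i, j in edges:
        M[i][j] = M[j][i] = -1
    return M

def det(M):
    """Determinant by exact fraction Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(A), Fraction(1)
    for c in range(n):
        p = next((r for r in range(c, n) if A[r][c]), None)
        if p is None:
            return Fraction(0)
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return d

# Nodes 0..8 along one growing arm: truncations give D_5, E_6, E_7, E_8
# and the (degenerate) bordering of E_8.  Each bordering appends a node
# joined only to the previous last node, i.e. R = (0, ..., 0, 1).
chain = [(0, 1), (1, 2), (2, 3), (2, 4), (4, 5), (5, 6), (6, 7), (7, 8)]
D5, E6, E7, E8, E9 = (cartan(n, chain[:n - 1]) for n in (5, 6, 7, 8, 9))

assert [det(D5), det(E6), det(E7), det(E8)] == [4, 3, 2, 1]
assert det(E7) == 2 * det(E6) - det(D5)       # the E_6 -> E_7 step
assert det(E8) == 2 * det(E7) - det(E6)       # the E_7 -> E_8 step
assert det(E9) == 2 * det(E8) - det(E7) == 0  # E_8 admits no definite extension
```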
\end{proof} \begin{lemma} \label{lemma: e_n} Let $b_0 \subset b = \delta_n P$ be BKL-positive $n$-braids where $l_{BKL}(b) - l_{BKL}(b_0) = 1$ and $F(b_0)$ is connected. Let $a_{rs}$ be the letter of $b$ not contained in $b_0$ and suppose that $\alpha \in H_1(F(b))$ corresponds to $a_{rs}$. Suppose as well that there is a class $\beta \in H_1(F(b_0))$ such that $\mathcal{F}(b)(\alpha, \beta) = \pm1$. $(1)$ If $\mathcal{F}(b_0) \cong E_6$ and $b$ is definite, then $\mathcal{F}(b) \cong E_7$. $(2)$ If $\mathcal{F}(b_0) \cong E_7$ and $b$ is definite, then $\mathcal{F}(b) \cong E_8$. $(3)$ If $\mathcal{F}(b_0) \cong E_8$, then $b$ is indefinite. \end{lemma} \begin{proof} Up to replacing $\alpha$ by its negative, we can suppose that $\mathcal{F}(b)(\alpha, \beta) = -1$. Since $\beta$ is primitive, there is a basis $\{\gamma_1, \gamma_2, \ldots, \gamma_r\}$ of $H_1(F(b_0))$ where $\gamma_r = \beta$. After replacing $\gamma_i$ by $\gamma_i + \mathcal{F}(b)(\gamma_i, \alpha) \gamma_r$ for $1 \leq i \leq r-1$ we can suppose that $$\mathcal{F}(b)(\gamma_i, \alpha) = \left\{ \begin{array}{rl} 0 & \mbox{ if } 1 \leq i \leq r -1 \\ -1 & \mbox{ if } i = r \end{array} \right.$$ The lemma now follows from Lemma \ref{lemma: e* case} after replacing $\mathcal{F}(b)$ by $-\mathcal{F}(b)$. \end{proof} \begin{cor} \label{cor: contains deltan2} Suppose that $b = \delta_n^2 P \in B_n$ is a definite BKL-positive braid and $P \supset P_0$ such that $\mathcal{F}(\delta_n^2 P_0) \cong E_6, E_7$ or $E_8$. Then $\mathcal{F}(b) \cong E_6, E_7$ or $E_8$. \end{cor} \begin{proof} There is nothing to prove if $P_0 = P$ so suppose otherwise and let $P_1 \subseteq P$ be obtained from $P_0$ by reinserting a letter $a_{rs}$ of $P \setminus P_0$. Denote by $\alpha$ the associated class in $H_1(F(\delta_n^2 P_1))$. 
For each $1 \leq i \leq n-1$, $H_1(F(\delta_n^2 P_0))$ contains the class associated to the letter $\sigma_i$ of the second $\delta_n$ factor of $b = \delta_n^2 P$ and some such class $\beta$ satisfies $\mathcal{F}(b)(\alpha, \beta) = \pm1$ by Lemma \ref{lemma: linking info}. Lemma \ref{lemma: e_n} then implies that $\mathcal{F}(\delta_n^2 P_1) \cong E_7$ or $E_8$. The conclusion of the corollary now follows by induction on the number of elements in $P \setminus P_0$. \end{proof} \subsection{Some BKL-positive braids with form $E_6, E_7$, or $E_8$ } \label{subsec: other e configs} \begin{lemma} \label{lemma: small span} Suppose that $n \geq 6$ and that $b = \delta_n^2 P$ is a BKL-positive definite $n$-braid. If $P$ contains a letter of span $3$ or $n-3$, then $n \leq 8$ and $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$. \end{lemma} \begin{proof} Conjugating by a suitable power of $\delta_n$ we may assume that $P$ contains $a_{14}$. According to Lemma \ref{lemma: bound on spans}(2), $n \leq 8$ and $\mathcal{F}(b)$ contains an $E_m$ sublattice for $m = 6, 7$, or $8$. Inspection of the proof of this lemma shows that $P$ contains a subword $P_0$ such that $\mathcal{F}(\delta_n^2 P_0) \cong E_{m}$. The conclusion of the lemma then follows from Corollary \ref{cor: contains deltan2}. \end{proof} \begin{lemma} \label{lemma: no repeated span 2 letters} Suppose that $n \geq 5$ and $b = \delta_n^2 P$ is a definite BKL-positive $n$-braid. If $P$ contains the square of a letter of span $2$ or $n-2$, then $n \leq 7$ and $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$. \end{lemma} \begin{proof} Conjugating by a suitable power of $\delta_n$ we may assume that $P$ contains $a_{13}^2$. Then by Lemma \ref{lemma: bound on spans}(3), $n \leq 7$ and $\mathcal{F}(b)$ contains an $E_m$ sublattice for $m = 6, 7$, or $8$. Inspection of the proof of this lemma shows that $P$ contains a subword $P_0$ such that $\mathcal{F}(\delta_n^2 P_0) \cong E_{m}$. 
The conclusion of the lemma now follows from Corollary \ref{cor: contains deltan2}. \end{proof} An {\it ascending} $(2, 2)$ pair is a BKL-positive word of the form $a_{r, r+2} a_{r+2, r+4}$. \begin{lemma} \label{lemma: no ascending stairs of total length 3 or 4} Suppose that $n \geq 5$ and $b = \delta_n^2 P$ is a definite BKL-positive $n$-braid. If $P$ contains an ascending $(2,2)$ pair, then $n \leq 7$ and $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$. \end{lemma} \begin{proof} Without loss of generality, we can suppose that $P$ contains $a_{13} a_{35}$. Since $$\delta_n^2 a_{13} a_{35} \sim a_{35} \delta_n^2 a_{13} = \delta_n^2 a_{13}^2,$$ we see that up to conjugation, $b = \delta_n^2 P$ where $P$ contains the square of a letter of span $2$. An appeal to Lemma \ref{lemma: no repeated span 2 letters} then completes the proof. \end{proof} We call a product $$a_{r_0 r_1} a_{r_1 r_2} \ldots a_{r_{k-1} r_k}$$ where $1 \leq r_0 < r_1 < r_2 < \ldots < r_k \leq n$ a {\it $k$-step ascending staircase} and the letters $a_{r_{i-1} r_i}$ are the {\it steps} of the staircase. It follows from (\ref{eqn: non-commuting generators}) that for each $1 \leq i < j \leq k$, $$a_{r_{i-1} r_i} a_{r_i r_{i+1}} \ldots a_{r_{j-1} r_j} = a_{r_{i-1} r_j} a_{r_{i-1} r_{j-1}} \ldots a_{r_{i-1} r_i}$$ so that up to rewriting, $P$ contains a letter of span $r_j - r_{i-1} \geq j - i$ for each $1 \leq i < j \leq k$. \begin{lemma} \label{lemma: $3$-step ascending staircase e6 definite} Suppose that $b = \delta_n^2 P$ is a BKL-positive definite $n$-braid. If $P$ contains a $3$-step ascending staircase $a_{r_0 r_1} a_{r_1 r_2} a_{r_2 r_3}$, then $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$. \end{lemma} \begin{proof} The existence of a $3$-step ascending staircase implies that $n \geq 4$ and if $n = 4$, the staircase must be $\sigma_1 \sigma_2 \sigma_3 = \delta_4$. In this case $b = \delta_4^2 P \supseteq \delta_4^2 \delta_4 = \delta_4^3$ whose closure is $T(3,4)$. 
Thus $\mathcal{F}(b)$ contains $\mathcal{F}(T(3,4)) \cong E_6$. The proof now follows from Corollary \ref{cor: contains deltan2}. A similar argument shows that the lemma holds for $n \geq 5$ if each of the steps in the staircase has span $1$: Up to conjugation by a power of $\delta_n$, $P$ contains $P_0 = \sigma_1 \sigma_2 \sigma_3 = \delta_4$ so that $b = \delta_n^2 P \supseteq \delta_4^3$. Then $\mathcal{F}(b) \supset \mathcal{F}(\delta_4^3) \cong E_6$. Let $F_0$ be the quasipositive surface of the $4$-braid $\delta_4^3$. The reader will verify that $F_1 = F(\delta_n \delta_4^2) \cong F_0$ and the inclusion $F_0 \to F_1$ is a homotopy equivalence. Then $\mathcal{F}(\delta_n \delta_4^2) \cong E_6$. Adding the twisted band associated to the letter $\sigma_4$ in the second $\delta_n$ factor of $b$ to $F_1$ yields the surface $F(\delta_n \delta_5 \delta_4)$. Let $\alpha \in H_1(F(\delta_n \delta_5 \delta_4))$ be the class associated to this band and $\beta$ the class associated to the $\sigma_3$ in the second $\delta_n$ factor of $b$. Then $\mathcal{F}(b)(\alpha, \beta) = \pm 1$ by Lemma \ref{lemma: linking info}. Lemma \ref{lemma: e_n} then implies that $\mathcal{F}(\delta_n \delta_5 \delta_4) \cong E_7$ or $E_8$. We can obtain the quasipositive surface $F(\delta_n^2 \delta_4)$ by successively adding the twisted bands of the letters $\sigma_5, \ldots , \sigma_{n-1}$ to $F(\delta_n \delta_5 \delta_4)$ and inductively applying Lemma \ref{lemma: e_n} shows that $\mathcal{F}(\delta_n^2 \delta_4)$ is $E_7$ or $E_8$. Corollary \ref{cor: contains deltan2} then shows that $\mathcal{F}(b) \cong E_7$ or $E_8$. Assume next that $r_i - r_{i-1} \geq 2$ for some $i$. Then $n \geq 5$. We saw above that up to rewriting, $P \supseteq P_0$ where $P_0$ has a letter of span $r_j - r_{i-1}$ for each $1 \leq i < j \leq 3$. 
Lemma \ref{lemma: bound on spans}(1), Lemma \ref{lemma: small span}, and Corollary \ref{cor: contains deltan2} imply that we are done if there are $i$ and $j$ such that $3 \leq r_j - r_{i-1} \leq n-3$. Suppose then that $r_j - r_{i-1} \in \{1, 2, n-2, n-1\}$ for each $i < j$. In this case $r_i - r_{i-1} \leq 2$ for each $i$ as otherwise $r_3 - r_0 = (r_3 - r_2) + (r_2 - r_1) + (r_1 - r_0) \geq n$. We can assume that $r_i - r_{i-1} = 2$ does not occur for successive values of $i$ as then $b$ contains an ascending $(2, 2)$ pair, so we are done by Lemma \ref{lemma: no ascending stairs of total length 3 or 4}. Thus $P$ contains a product $\sigma_k a_{k+1, k+3} = a_{k, k+3} \sigma_k$ or $a_{k, k+2} \sigma_{k+2} = a_{k, k+3} a_{k, k+2}$. In either case, after rewriting $P$ contains a letter of span $3$ so if $n \geq 6$, Lemma \ref{lemma: small span} and Corollary \ref{cor: contains deltan2} imply that $\mathcal{F}(b) \cong E_6, E_7$ or $E_8$. On the other hand, if $n = 5$ then $a_{r_0 r_1} a_{r_1 r_2} a_{r_2 r_3}$ is one of $a_{13} \sigma_3 \sigma_4, \sigma_1 a_{24} \sigma_4$, or $\sigma_1 \sigma_2 a_{35}$. In each case we are done by Lemma \ref{lemma: no ascending stairs of total length 3 or 4} and Corollary \ref{cor: contains deltan2}: \begin{itemize} \item $\delta_5^2 a_{13} \sigma_3 \sigma_4 = \delta_5^2 a_{13} a_{35} \sigma_3 \supset \delta_5^2 a_{13} a_{35}$ \item $\delta_5^2 \sigma_1 a_{24} \sigma_4 = \delta_5^2 \sigma_1 a_{25} a_{24} \sim \delta_5^2 \sigma_2 a_{13} a_{35} \supset \delta_5^2 a_{13} a_{35}$ \item $\delta_5^2 \sigma_1 \sigma_2 a_{35} = \delta_5^2 a_{13} \sigma_1 a_{35} \supset \delta_5^2 a_{13} a_{35}$ \end{itemize} This completes the proof. \end{proof} \section{The proof of Theorem \ref{thm: def baskets} when $n = 4$} \label{sec: n = 4} In this section we prove Theorem \ref{thm: def baskets} for strongly quasipositive $4$-braids. Given Baader's theorem (Theorem \ref{thm: baader}), it suffices to prove the following proposition. 
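The arguments in this section repeatedly conjugate by $\delta_4$; the cycle structure of this action on the six BKL letters, recalled just below, can be verified mechanically (the tuple encoding of a letter $a_{rs}$ as a pair is ours):

```python
# The conjugation delta_n a_{rs} delta_n^{-1} described in Identity
# (conj by delta), specialised to n = 4: it permutes the six BKL letters
# of B_4 in two cycles.  Encoding (ours): a_{rs} is the pair (r, s).
def conj_by_delta(letter, n=4):
    r, s = letter
    return (r + 1, s + 1) if s <= n - 1 else (1, r + 1)

def orbit(letter):
    out, x = [letter], conj_by_delta(letter)
    while x != letter:
        out.append(x)
        x = conj_by_delta(x)
    return out

# sigma_1 -> sigma_2 -> sigma_3 -> a_{14} -> sigma_1
assert orbit((1, 2)) == [(1, 2), (2, 3), (3, 4), (1, 4)]
# a_{13} <-> a_{24}
assert orbit((1, 3)) == [(1, 3), (2, 4)]
```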
\begin{prop} \label{prop: definite strongly quasipositive 4-braids} If $L$ is a definite basket link which is the closure of a BKL-positive word of the form $\delta_4^2 P \in B_4$, then $\delta_4^2 P$ is conjugate to a positive braid. \end{prop} Recall the BKL-positive $4$-braid letters \begin{equation} \label{eqn: script L} \mathcal{L} = \{\sigma_1, \sigma_2, \sigma_3, a_{14}, a_{13}, a_{24}\} \end{equation} Conjugation by $\delta_4$ determines a permutation of $\mathcal{L}$ with two orbits: $$\sigma_1 \mapsto \sigma_2 \mapsto \sigma_3 \mapsto a_{14} \mapsto \sigma_1$$ and $$a_{13} \mapsto a_{24} \mapsto a_{13}$$ Throughout we consider braids of the form \begin{equation} \label{basic form} \delta_4^k P \end{equation} where $k \geq 0$ and $P$ is BKL-positive. Given such a braid $b$, choose from among all of its conjugates and their BKL-positive rewritings which have the form (\ref{basic form}), a braid $\delta_4^k P$ for which $k$ is maximal. Then $P$ does not contain a subword equalling $\delta_4$. Recall the integer $r(P) \geq 0$ defined by $P \supseteq \delta_4^{r(P)}$ but not $\delta_4^{r(P)+1}$ (cf. \S \ref{subsec: symm seifert forms of baskets}). \begin{lemma} If $b = \delta_4^2 P \in B_4$ is strongly quasipositive and definite, then $r(P) \leq 1$. \end{lemma} \begin{proof} If $r(P) \geq 2$ then $b = \delta_4^2 P \supseteq \delta_4^4$ so that $\mathcal{F}(b) \supseteq \mathcal{F}(T(4,4))$, which is indefinite (Lemma \ref{restrictions on krn}), contrary to our hypotheses. \end{proof} \begin{lemma} If $r(P) = 0$, then $\delta_4^2 P$ definite implies that it is conjugate to a positive braid. \end{lemma} \begin{proof} We divide the proof into several cases. \setcounter{case}{0} \begin{case} {\it $P$ contains neither $a_{13}$ nor $a_{24}$. 
} \end{case} We can suppose that $P \ne 1$ is not a positive braid so that up to conjugation, $$b = \delta_4^2 a_{14}^{d_1} p_1 a_{14}^{d_2} p_2 \ldots a_{14}^{d_t} p_t$$ where $t \geq 1$, $p_1, \ldots, p_t$ are positive braids, and $d_1, \ldots, d_t > 0$. If $p_1 \cdots p_t \not \supseteq \sigma_3$, conjugating by $\delta_4$ changes $b$ into a positive braid, so we are done. Assume then that $p_1 \cdots p_t \supseteq \sigma_3$ and let $i_0$ be the smallest index $i$ for which $p_i \supseteq \sigma_3$. Since $\sigma_3 a_{14} \sigma_1 = \delta_4$, $p_i \not \supseteq \sigma_1$ for $i > i_0$. Consequently, $$p_i = \left\{ \begin{array}{ll} p_i(\sigma_1, \sigma_2) & \mbox{ if } i < i_0 \\ p_i(\sigma_2, \sigma_3) & \mbox{ if } i > i_0 \end{array} \right.$$ Then pushing one $\delta_4$ forward through $P$ till just before $p_{i_0}$ and conjugating the other to the end of $P$ and then pushing it backward till just after $p_{i_0}$ we obtain \begin{eqnarray} b & = & \delta_4^2 a_{14}^{d_1} p_1(\sigma_1, \sigma_2) \cdots a_{14}^{d_{i_0 - 1}} p_{i_0 - 1}(\sigma_1, \sigma_2) a_{14}^{d_{i_0}} p_{i_0} a_{14}^{d_{i_0 + 1}} p_{i_0 + 1}(\sigma_2, \sigma_3) \cdots a_{14}^{d_t} p_t (\sigma_2, \sigma_3) \nonumber \\ & \sim & \sigma_1^{d_1} p_1(\sigma_2, \sigma_3) \cdots \sigma_1^{d_{i_0 - 1}} p_{i_0 - 1}(\sigma_2, \sigma_3) \sigma_1^{d_{i_0}} \delta_4 p_{i_0} \delta_4 \sigma_3^{d_{i_0 + 1}} p_{i_0 + 1}(\sigma_1, \sigma_2) \cdots \sigma_3^{d_t} p_t (\sigma_1, \sigma_2), \nonumber \end{eqnarray} which is a positive braid. \begin{case} {\it $P$ contains at least one of $a_{13}$ or $a_{24}$.} \end{case} Since $\delta_4^2 a_{13} a_{24}$ and $\delta_4^2 a_{24} a_{13}$ are indefinite (Lemma \ref{lemma: no linking}), $P$ contains either no $a_{13}$ or no $a_{24}$. 
Then up to conjugation we can write $$b = \delta_4^2 a_{13}^{d_1} w_1 a_{13}^{d_2} w_2 \cdots a_{13}^{d_t} w_t$$ where $w_i = w_i(\sigma_1, \sigma_2, \sigma_3, a_{14})$ and $t$ is minimal among all such BKL-positive expressions for conjugates of $b$. It follows that $d_i > 0$ and $w_i \ne 1$ for each $i$. Since $a_{13} \sigma_1 \sigma_3 = \delta_4$, the assumption that $r(P) = 0$ implies that $w_1 w_2 \cdots w_t \not \supseteq \sigma_1 \sigma_3$. Up to conjugating by $\delta_4^2$ we can suppose that $w_1 w_2 \cdots w_t \not \supseteq \sigma_3$. After a further conjugation by $\delta_4$ we have $$b \sim \delta_4^2 a_{24}^{d_1} p_1 a_{24}^{d_2} p_2 \cdots a_{24}^{d_t} p_t$$ where $p_i = p_i(\sigma_1, \sigma_2, \sigma_3) \ne 1$ for each $i$. If $t = 0$, $b \sim \delta_4^2$, a positive braid, and if $t = 1$, $b \sim \delta_4^2 a_{24}^{d_1} p_1(\sigma_1, \sigma_2, \sigma_3) = \delta_4 (\sigma_1 \sigma_2 \sigma_3) (\sigma_3^{-1} \sigma_2^{d_1} \sigma_3) p_1(\sigma_1, \sigma_2, \sigma_3) = \delta_4 \sigma_1 \sigma_2^{d_1 + 1} \sigma_3 p_1(\sigma_1, \sigma_2, \sigma_3)$, again a positive braid. We assume below that $t \geq 2$. Since $\sigma_1 \sigma_3 a_{24} = \delta_4$, $p_1 \cdots p_{t-1} \not \supseteq \sigma_1 \sigma_3$. There are two cases to consider. \begin{subcase} $p_1 \cdots p_{t-1} \not \supseteq \sigma_3$. \end{subcase} If $p_i \not \supseteq \sigma_1$ for some $i < t$, then $p_i$ is a non-zero power of $\sigma_2$, say $p_i = \sigma_2^m$ where $m > 0$. Then $a_{24}^{d_i} p_i= \sigma_2 \sigma_3^{d_i} \sigma_2^{m -1}$, contrary to the minimality of $t$. (When $i = 1$, the condition implies that $b \sim \delta_4^2 a_{13}^{d_2} v_1 a_{13}^{d_3} v_2 \cdots a_{13}^{d_t} v_{t-1}$ where each $v_i = v_i(\sigma_1, \sigma_2, \sigma_3, a_{14})$ is a BKL-positive word.) Thus $p_i \supseteq \sigma_1$ for each $i < t$. If $p_2 p_3 \cdots p_t \supseteq \sigma_2$, then $P \supseteq \sigma_1 a_{24} \sigma_2 = \delta_4$, a contradiction. 
Thus $p_2 p_3 \cdots p_t \not \supseteq \sigma_2$. It follows that $$p_i = \left\{ \begin{array}{ll} p_1(\sigma_1, \sigma_2) & \mbox{ if } i = 1 \\ \sigma_1^{m_i} & \mbox{ if } 1 < i < t \\ p_t(\sigma_1, \sigma_3) & \mbox{ if } i = t \end{array} \right.$$ As above, the minimality of $t$ can be used to see that $p_1(\sigma_1, \sigma_2) = \sigma_1 q_1(\sigma_1, \sigma_2)$ where $q_1(\sigma_1, \sigma_2)$ is a positive braid. Also, $p_t(\sigma_1, \sigma_3) = \sigma_1^{m_t} \sigma_3^{m_0}$. Then \begin{eqnarray} b & \sim & \delta_4^2 a_{24}^{d_1} p_1(\sigma_1, \sigma_2) a_{24}^{d_2} \sigma_1^{m_2} \cdots a_{24}^{d_{t-1}} \sigma_1^{m_{t-1}} a_{24}^{d_t} \sigma_1^{m_t} \sigma_3^{m_0} \nonumber \\ & \sim & \delta_4^2 \sigma_1^{m_0} \sigma_3^{m_t} a_{24}^{d_1} p_1(\sigma_1, \sigma_2) a_{24}^{d_2} \sigma_1^{m_2} \cdots a_{24}^{d_{t-1}} \sigma_1^{m_{t-1}} a_{24}^{d_t} \nonumber \end{eqnarray} If $m_0, m_t > 0$, then $P \supseteq \sigma_1^{m_0} \sigma_3^{m_t} a_{24}^{d_1} \supseteq \sigma_1 \sigma_3 a_{24} = \delta_4$, contradicting our assumption that $r(P) = 0$. Thus one of $m_0, m_t$ is zero. If $m_t > 0$, the fact that $t \geq 2$ implies that $P \supseteq \sigma_3^{m_t} a_{24}^{d_1} p_1(\sigma_1, \sigma_2) a_{24}^{d_2} = \sigma_3^{m_t} a_{24}^{d_1} \sigma_1 q_1(\sigma_1, \sigma_2) a_{24}^{d_2} \supseteq \sigma_3 \sigma_1 a_{24} = \delta_4$, again a contradiction. Hence $m_t = 0$. Then $m_0 > 0$ and \begin{eqnarray} b & \sim & \delta_4^2 a_{24}^{d_1} p_1(\sigma_1, \sigma_2) a_{24}^{d_2} \sigma_1^{m_2} \cdots a_{24}^{d_{t-1}} \sigma_1^{m_{t-1}} a_{24}^{d_t} \sigma_3^{m_0} \nonumber \\ & \sim & \delta_4^2 \sigma_3^{m_{t-1}} a_{24}^{d_t} \sigma_1^{m_0} a_{24}^{d_1} p_1(\sigma_1, \sigma_2) a_{24}^{d_2} \sigma_1^{m_2} \cdots a_{24}^{d_{t-1}} \nonumber \\ & \supseteq & \delta_4^2 \sigma_3 \sigma_1 a_{24} \nonumber \\ & = & \delta_4^3 \nonumber \end{eqnarray} a contradiction, which completes the proof when $p_1 \cdots p_{t-1} \not \supseteq \sigma_3$. 
\begin{subcase} $p_1 \cdots p_{t-1} \not \supseteq \sigma_1$. \end{subcase} Then $b \sim \delta_4^2 a_{24}^{d_1} p_1(\sigma_2, \sigma_3) a_{24}^{d_2} p_2(\sigma_2, \sigma_3) \cdots a_{24}^{d_{t-1}} p_{t-1}(\sigma_2, \sigma_3) a_{24}^{d_t} p_t$. By the minimality of $t$, $p_i = \sigma_3 q_i \sigma_2$ where each $q_i$ is a positive braid for $i < t$. Thus $$b \sim \delta_4^2 a_{24}^{d_1} \sigma_3 q_1 \sigma_2 a_{24}^{d_2} \sigma_3 q_2 \sigma_2 \cdots a_{24}^{d_{t-1}} \sigma_3 q_{t-1} \sigma_2 a_{24}^{d_t} p_t$$ As $t \geq 2$, $b \supseteq (\sigma_2 \sigma_3)^2 \sigma_2 a_{24} \sigma_3 \sigma_2$, which is an indefinite form by Proposition \ref{prop: when definite}(6), so this case does not occur. This completes the proof of the lemma. \end{proof} \begin{lemma} If $r(P) = 1$ and $b = \delta_4^2 P$ is definite, then $b$ is conjugate to a positive braid. \end{lemma} \begin{proof} In this case $P \supseteq \delta_4$ and so $b \supseteq b_0 = \delta_4^3$. Then $\mathcal{F}(b) \supseteq \mathcal{F}(b_0) = \mathcal{F}(\delta_4^3) \cong E_6$. Hence $\mathcal{F}(b)$ is either $E_6, E_7$, or $E_8$ by Corollary \ref{cor: contains deltan2}. Further, by (\ref{eqn: l(P) bound}) we see that $$3 + l_{BKL}(P) = \mbox{{\rm rank}} \; H_1(F(b)) = \left\{ \begin{array}{ll} 6 & \mbox{if } \mathcal{F}(b) \cong E_6 \\ 7 & \mbox{if } \mathcal{F}(b) \cong E_7 \\ 8 & \mbox{if } \mathcal{F}(b) \cong E_8 \end{array} \right.$$ Since $r(P) = 1$, $l_{BKL}(P) \geq 3$. Thus $l_{BKL}(P)$ is $3$ if $\mathcal{F}(b) \cong E_6$, $4$ if $\mathcal{F}(b) \cong E_7$, and $5$ if $\mathcal{F}(b) \cong E_8$. It follows that $P = \delta_4$ if $\mathcal{F}(b) \cong E_6$ and is obtained from $\delta_4$ by adding one, respectively two, BKL-positive letters if $\mathcal{F}(b) \cong E_7$, respectively $\mathcal{F}(b) \cong E_8$. \setcounter{case}{0} \begin{case} {\it $\mathcal{F}(b) \cong E_6$.} \end{case} Then $b = \delta_4^3$ and therefore $\widehat b = T(3,4)$, so we are done. 
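Many steps in these manipulations rest on short identities among band generators, such as $a_{13}\sigma_1\sigma_3 = \delta_4$, $\sigma_1\sigma_3 a_{24} = \delta_4$ and $\delta_5 a_{15} = \sigma_1\delta_5$, and such identities can be checked mechanically. The following Python sketch (an illustration, not part of the paper's apparatus) encodes $\sigma_i$ as $\pm i$ and realizes braid words through the Artin action on the free group; since that action is faithful, two braid words are equal in $B_n$ exactly when all images agree. It assumes the band-generator convention $a_{st} = \sigma_s \cdots \sigma_{t-2}\sigma_{t-1}\sigma_{t-2}^{-1}\cdots \sigma_s^{-1}$, which is consistent with the relations used above.

```python
# Verify braid-word identities via the Artin action on the free group F_n.
# A braid word is a list of signed generator indices (+i = sigma_i, -i its
# inverse); a free-group word is likewise a list of signed indices.

def reduce_word(w):
    """Freely reduce a word in F_n (a single stack pass fully reduces)."""
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return out

def gen_image(g, x):
    """Image of free generator x under the Artin action of sigma_|g|^{sign g}."""
    i = abs(g)
    if g > 0:
        if x == i:     return [i, i + 1, -i]       # x_i -> x_i x_{i+1} x_i^{-1}
        if x == i + 1: return [i]                  # x_{i+1} -> x_i
    else:
        if x == i:     return [i + 1]
        if x == i + 1: return [-(i + 1), i, i + 1]
    return [x]

def apply_gen(g, w):
    """Apply one braid generator to a free-group word, letter by letter."""
    out = []
    for x in w:
        img = gen_image(g, abs(x))
        out.extend(img if x > 0 else [-t for t in reversed(img)])
    return reduce_word(out)

def braid_act(braid, n):
    """Images of x_1, ..., x_n; braid letters are applied left to right.
    (This gives an anti-homomorphism into Aut(F_n), which is still faithful,
    so comparing images remains a valid equality test in B_n.)"""
    imgs = []
    for j in range(1, n + 1):
        w = [j]
        for g in braid:
            w = apply_gen(g, w)
        imgs.append(tuple(w))
    return tuple(imgs)

def braid_eq(b1, b2, n):
    return braid_act(b1, n) == braid_act(b2, n)

def band(s, t):
    """a_{st} = sigma_s ... sigma_{t-1} ... sigma_s^{-1} (assumed convention)."""
    return list(range(s, t)) + [-i for i in range(t - 2, s - 1, -1)]

delta4, delta5 = [1, 2, 3], [1, 2, 3, 4]
a13, a24, a15 = band(1, 3), band(2, 4), band(1, 5)

assert braid_eq(a13 + [1, 3], delta4, 4)   # a_{13} sigma_1 sigma_3 = delta_4
assert braid_eq([1, 3] + a24, delta4, 4)   # sigma_1 sigma_3 a_{24} = delta_4
```

The other identities invoked in the text can be tested the same way, e.g. `braid_eq(delta4 + a13, a24 + delta4, 4)` for $\delta_4 a_{13} = a_{24}\delta_4$, and `braid_eq(delta5 + a15, [1] + delta5, 5)` for $\delta_5 a_{15} = \sigma_1\delta_5$.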
\begin{case} {\it $\mathcal{F}(b) \cong E_7$.} \end{case} Then $P$ is obtained from $\delta_4$ by adding one BKL-positive letter in $\mathcal{L}$ (cf. (\ref{eqn: script L})). The letter must be added between two letters of $\delta_4$ as otherwise $b \sim \delta_4^3 x$, contrary to our assumption that $k = 2$. Every expression for $\delta_4$ as a product of BKL-positive letters has the form $uvw$ where $u, v, w \in \mathcal{L}$. Hence for such $u, v, w$, $P$ is either $uxvw$ or $uvxw$ for some $x \in \mathcal{L}$. The reader will verify that $$\sigma_3 x \sigma_1 \sigma_2 \;\; \mbox{ and } \;\; \sigma_2 \sigma_3 x \sigma_1$$ are positive braids for all $x \in \mathcal{L}$, as are $$\sigma_3 \delta_4 x \;\; \mbox{ and } \;\; x \delta_4 \sigma_1$$ \begin{subcase} {\it $P = uxvw$ for some $x \in \mathcal{L}$.} \end{subcase} Then $b = \delta_4^2 u x u^{-1} \delta_4$. Up to conjugation by a power of $\delta_4$ and changing $x$, $b$ is either $\delta_4^2 \sigma_1 x \sigma_1^{-1} \delta_4$ or $\delta_4^2 a_{13} x a_{13}^{-1} \delta_4 = \delta_4^2 a_{13} x \sigma_1 \sigma_3$. In the first case, $b = \sigma_3 (\delta_4^2 x) (\sigma_2 \sigma_3)$ and as $\delta_4^2 x$ is a positive braid for each $x \in \mathcal{L}$, we are done. In the second, $b = \delta_4^2 a_{13} x \sigma_1 \sigma_3 = \delta_4 a_{24} \delta_4 x\sigma_1 \sigma_3 = \sigma_1 \sigma_2^2 \sigma_3 (\delta_4 x \sigma_1) \sigma_3$. Since $\delta_4 x \sigma_1 = x' \delta_4 \sigma_1$ is always a positive braid, we are done. \begin{subcase} {\it $P = uvxw$ for some $x \in \mathcal{L}$.} \end{subcase} Then $b = \delta_4^3 (w^{-1} x w)$. Up to conjugation by a power of $\delta_4$, $b$ is either $\delta_4^3 \sigma_3^{-1} x \sigma_3$ or $\delta_4^3 a_{24}^{-1} x a_{24}$. If the former occurs, then $b \sim \delta_4^2 \sigma_1 \sigma_2 x \sigma_3 = \delta_4 \sigma_2 (\sigma_3 x' \delta_4) \sigma_3$ where $x' = \delta_4 x \delta_4^{-1} \in \mathcal{L}$. 
Since $\sigma_3 x' \delta_4 = \sigma_3 \delta_4 x''$, where $x'' = \delta_4^{-1} x' \delta_4 \in \mathcal{L}$, is a positive braid for each $x' \in \mathcal{L}$, we are done. Otherwise, $b = \delta_4^3 a_{24}^{-1} x a_{24} = \delta_4^2 \sigma_1 \sigma_3 x a_{24} \sim \sigma_1 (\sigma_3 x \delta_4) (a_{13} \delta_4) = \sigma_1 (\sigma_3 x \delta_4) \sigma_1 \sigma_2^2 \sigma_3$, so we are done as $\sigma_3 x \delta_4 = \sigma_3 \delta_4 x'$ is a positive braid. This completes the proof when $\mathcal{F}(b) \cong E_7$. \begin{case} {\it $\mathcal{F}(b) \cong E_8$.} \end{case} Then $P$ is obtained from $\delta_4$ by adding two BKL-positive letters. Since $P$ does not contain $\delta_4$ as a subword, for any expression $\delta_4 = uvw$ where $u, v, w \in \mathcal{L}$, there are seven possibilities for $P$ : $$P = \left\{ \begin{array}{l} (1) \; x u y vw \\ (2) \; x uvy w \\ (3) \; u x y vw \\ (4) \; u x v y w \\ (5) \; u x vw y \\ (6) \; u v xy w \\ (7) \; u v x w y \end{array} \right. $$ Since $\delta_4^2 u x vw y \sim \delta_4^2 y' u x vw$ and $\delta_4^2 uv x w y \sim \delta_4^2 y' uv x w$ where $y' = \delta_4^{-2} y \delta_4^2 \in \mathcal{L}$, we need only deal with cases (1), (2), (3), (4) and (6). \begin{subcase} {\it $P = x u y vw$ for some $x, y \in \mathcal{L}$.} \end{subcase} Then $b \sim \delta_4^2 x u y vw = \delta_4^2 x (u y u^{-1}) \delta_4$. Up to conjugation, we can take $u$ to be either $\sigma_1$ or $a_{13}$. Further, by Lemma \ref{lemma: no linking}, definiteness implies that if $x = a_{13}$, respectively $a_{24}$, then $y \ne a_{24}$, respectively $a_{13}$. If $u = \sigma_1$, then $b \sim \delta_4^2 x \sigma_1 y \sigma_2 \sigma_3 \sim (\delta_4 x \sigma_1) (\delta_4 y' \sigma_1) \sigma_2$ where $y' = \delta_4^{-1} y \delta_4$. Since $\delta_4 x \sigma_1$ and $\delta_4 y' \sigma_1$ are positive braids for all $x, y \in \mathcal{L}$, we are done. 
If $u = a_{13}$, then $b \sim \delta_4^2 x a_{13} y a_{13}^{-1} \delta_4 \sim \delta_4^2 x a_{13} y \sigma_1 \sigma_3 \sim \sigma_3 x' \delta_4 a_{24} \delta_4 y \sigma_1 = (\sigma_3 x' \sigma_1 \sigma_2) \sigma_2 \sigma_3 (\delta_4 y \sigma_1)$, which is a positive braid. \begin{subcase} {\it $P = x uv y w $ for some $x, y \in \mathcal{L}$.} \end{subcase} Then $b \sim \delta_4^2 x uv y w = \delta_4^2 x \delta_4 (w^{-1} y w)$. Up to conjugation, we can take $w$ to be either $\sigma_3$ or $a_{24}$. If $w = \sigma_3$, then $b \sim \delta_4^2 x \delta_4 \sigma_3^{-1} y \sigma_3 \sim (\delta_4 x \sigma_1) \sigma_2 (y \delta_4) \sigma_2$. This is a positive braid unless $y = a_{24}$. But in this case $x \ne a_{13}$ and so $\sigma_3 x \sigma_1$ is a positive braid. Hence $b \sim \sigma_1 \sigma_2 (\sigma_3 x \sigma_1 \sigma_2) a_{24} \delta_4 \sigma_2 \sim (\sigma_3 x \sigma_1 \sigma_2) a_{24} \delta_4 (\sigma_2 \sigma_1 \sigma_2) = (\sigma_3 x \sigma_1 \sigma_2) \delta_4 a_{13} (\sigma_1 \sigma_2 \sigma_1) = (\sigma_3 x \sigma_1 \sigma_2) \delta_4 \sigma_1 \sigma_2^2 \sigma_1$, which is a positive braid. If $w = a_{24}$, then $b \sim \delta_4^2 x \delta_4 a_{24}^{-1} y a_{24} \sim \delta_4^2 x \sigma_1 \sigma_3 y a_{24} \sim (\sigma_2 \sigma_3 x \sigma_1) \sigma_3 y a_{24} \delta_4 \sigma_1 = (\sigma_2 \sigma_3 x \sigma_1) (\sigma_3 y \delta_4) a_{13}\sigma_1 = (\sigma_2 \sigma_3 x \sigma_1) (\sigma_3 y \delta_4) \sigma_1 \sigma_2$, which is a positive braid. \begin{subcase} {\it $P = u x y vw $ for some $x, y \in \mathcal{L}$.} \end{subcase} Then $b \sim \delta_4^2 (u x y u^{-1}) \delta_4$. Up to conjugation, we can take $u$ to be either $\sigma_1$ or $a_{13}$. If $u = \sigma_1$, then $b \sim \delta_4^2 \sigma_1 xy \sigma_2 \sigma_3 \sim \sigma_2 (\delta_4 x) (\delta_4 y') \sigma_1 \sigma_2 = \sigma_2 (\delta_4 x \sigma_1) (\sigma_2 \sigma_3 y' \sigma_1) \sigma_2$ where $y' = \delta_4^{-1} y \delta_4$, which is a positive braid. 
If $u = a_{13}$, then $b \sim \delta_4^2 a_{13} x y a_{13}^{-1} \delta_4 = \delta_4 a_{24} \delta_4 x y \sigma_1 \sigma_3 = \sigma_1 \sigma_2^2 \sigma_3 \delta_4 x y \sigma_1 \sigma_3 = \sigma_1 \sigma_2 (\sigma_2 \sigma_3 x' \delta_4) y \sigma_1 \sigma_3$ where $x' = \delta_4 x \delta_4^{-1}$. Hence $b \sim \sigma_1 \sigma_2 (\sigma_2 \sigma_3 x' \sigma_1) (\sigma_2 \sigma_3 y \sigma_1) \sigma_3$, which is always a positive braid. \begin{subcase} {\it $P =u x v y w $ for some $x, y \in \mathcal{L}$.} \end{subcase} There are four BKL-positive expressions for $\delta_4$ up to conjugacy by a power of $\delta_4$: $\sigma_1 \sigma_2 \sigma_3, \sigma_1 \sigma_3 a_{24}$, $\sigma_1 a_{24} \sigma_2$, and $a_{24} \sigma_1 \sigma_3$. We consider each of these separately. First suppose that $b \sim \delta_4^2 \sigma_1 x \sigma_2 y \sigma_3$. Then $b \sim \sigma_3 x' \delta_4 \sigma_3 y' \delta_4 \sigma_3$ where $x' = \delta_4^2 x \delta_4^{-2}$ and $y' = \delta_4 y \delta_4^{-1}$. Hence $b \sim (\sigma_3 x' \delta_4) (\sigma_3 y' \delta_4) \sigma_3$, which is a positive braid. Next suppose that $b \sim \delta_4^2 \sigma_1 x \sigma_3 y a_{24}$. Then $b \sim \sigma_3 x' \delta_4 a_{14} y' \delta_4 a_{24} = (\sigma_3 x' \sigma_1 \sigma_2) \sigma_3 a_{14} y' \sigma_1 \sigma_2^2 \sigma_3$ where $x' = \delta_4^2 x \delta_4^{-2}$ and $y' = \delta_4 y \delta_4^{-1}$. Hence $b \sim (\sigma_3 x' \sigma_1) \sigma_1 (\sigma_2 \sigma_3 y' \sigma_1) \sigma_2^2 \sigma_3$. Now $\sigma_2 \sigma_3 y' \sigma_1$ is a positive braid as is $\sigma_3 x' \sigma_1$ as long as $x' \ne a_{14}$. But if $x' = a_{14}$, then $x = \sigma_2$ so that $b \sim \delta_4^2 \sigma_1 \sigma_2 \sigma_3 y a_{24} = \delta_4^3 y a_{24} = \delta_4^2 y' \delta_4 a_{24} = \delta_4 \sigma_1 \sigma_2 (\sigma_3 y' \sigma_1 \sigma_2) (\sigma_3 a_{24}) = \delta_4 \sigma_1 \sigma_2 (\sigma_3 y' \sigma_1 \sigma_2) (\sigma_2 \sigma_3)$, which is a positive braid. 
If $b \sim \delta_4^2 \sigma_1 x a_{24} y \sigma_2$, then $b \sim (\sigma_3 x' \sigma_1 \sigma_2) (\sigma_3 \delta_4 a_{24}) y \sigma_2 = (\sigma_3 x' \sigma_1 \sigma_2) (\sigma_3 \sigma_1 \sigma_2)(\sigma_2 \sigma_3 y) \sigma_2$. Now $\sigma_3 x' \sigma_1 \sigma_2$ is a positive braid while $\sigma_2 \sigma_3 y$ is as long as $y \ne a_{13}$. But if $y = a_{13}$, then $b \sim \delta_4^2 \sigma_1 x a_{24} a_{13} \sigma_2$, which is indefinite (Lemma \ref{lemma: no linking}); hence $y \ne a_{13}$. Finally suppose that $b \sim \delta_4^2 a_{24} x \sigma_1 y \sigma_3$. Then $b \sim \delta_4 \sigma_1 \sigma_2^2 \sigma_3 x \sigma_1 y \sigma_3 \sim (\sigma_2 \sigma_3 x \sigma_1) y \sigma_3 \delta_4 \sigma_1 \sigma_2 = (\sigma_2 \sigma_3 x \sigma_1) (\delta_4 y' \sigma_2 \sigma_1 \sigma_2) = (\sigma_2 \sigma_3 x \sigma_1) (\delta_4 y' \sigma_1) \sigma_2 \sigma_1$, which is a positive braid. \begin{subcase} {\it $P = uv xy w$ for some $x, y \in \mathcal{L}$.} \end{subcase} Then $b \sim \delta_4^2 uv xy w = \delta_4^3 w^{-1} xy w$. Up to conjugation, we can take $w$ to be either $\sigma_3$ or $a_{24}$. If $w = \sigma_3$, then $b \sim \delta_4^3 \sigma_3^{-1} xy \sigma_3 = \delta_4^2 \sigma_1 \sigma_2 xy \sigma_3 = (\sigma_2 \sigma_3 x' \delta_4) (y' \delta_4) \sigma_3 = (\sigma_2 \sigma_3 x' \sigma_1) (\sigma_2 \sigma_3 y' \sigma_1) \sigma_2 \sigma_3^2$ where $x' = \delta_4^2 x \delta_4^{-2}$ and $y' = \delta_4 y \delta_4^{-1}$. As this is a positive braid for each $x, y \in \mathcal{L}$, we are done. Finally suppose that $w = a_{24}$. Then $b \sim \delta_4^3 a_{24}^{-1} xy a_{24} = \delta_4^3 (\sigma_3^{-1} \sigma_2^{-1} \sigma_3) xy a_{24} = \delta_4^2 \sigma_1 \sigma_3 xy a_{24} \sim \sigma_1 \sigma_3 xy a_{24} \delta_4^2 = \sigma_1 \sigma_3 x \delta_4 y' a_{13} \delta_4 = \sigma_1 (\sigma_3 x \sigma_1 \sigma_2) (\sigma_3 y' \sigma_1 \sigma_2) \sigma_2 \sigma_3$ where $y' = \delta_4 y \delta_4^{-1}$. Thus $b$ is conjugate to a positive braid. 
\end{proof} \section{The proof of Theorem \ref{thm: def baskets} when $n \geq 5$} \label{sec: n = 5} In this section we prove \begin{prop} \label{prop: definite strongly quasipositive n-braids} If $L$ is a definite basket link which is the closure of a BKL-positive braid $b = \delta_n^2 P \in B_n$ where $n \geq 5$, then $b$ is conjugate to a positive braid. \end{prop} We assume throughout this section that $n \geq 5$ and that \begin{equation} b = \delta_n^k P \end{equation} is a definite braid where $P \in B_n$ is a BKL-positive word and $k \geq 2$. Given Theorem \ref{thm: definite 3-braids} and Proposition \ref{prop: definite strongly quasipositive 4-braids}, we can suppose that $n = n(\widehat b)$ (cf. \S \ref{subsec: reducing indices}) without loss of generality. \begin{lemma} \label{lemma: wlog r = 0} Proposition \ref{prop: definite strongly quasipositive n-braids} holds if $r(P) > 0$. \end{lemma} \begin{proof} If $r(P) > 0$, then $\mathcal{F}(b) \supseteq \mathcal{F}(\delta_n^{2 + r(P)}) = \mathcal{F}(T(n, 2 + r(P)))$. Since $b$ is definite, we have $n = 5$ and $r(P) = 1$ (Lemma \ref{restrictions on krn}). Then $\mathcal{F}(b) \supseteq \mathcal{F}(\delta_5^3) = \mathcal{F}(T(3,5)) \cong E_8$. Corollary \ref{cor: contains deltan2} now implies that $b$ is the positive braid $\delta_5^3$. \end{proof} We assume below that $r(P) = 0$. \begin{lemma} \label{lemma: the case e6} Suppose that $b = \delta_n^2 P$ is a definite BKL-positive braid where $n \geq 5$. If $\mathcal{F}(b)$ is isomorphic to $E_6, E_7$, or $E_8$, then $b$ is conjugate to a positive braid. \end{lemma} \begin{proof} By (\ref{eqn: l(P) bound}), $$(n - 1) + l_{BKL}(P) = \mbox{{\rm rank}} \; H_1(F(b)) = \left\{ \begin{array}{ll} 6 & \mbox{if } \mathcal{F}(b) \cong E_6 \\ 7 & \mbox{if } \mathcal{F}(b) \cong E_7 \\ 8 & \mbox{if } \mathcal{F}(b) \cong E_8 \end{array} \right.$$ In particular, $l_{BKL}(P) \leq 4$ and equals $4$ if and only if $n = 5$ and $\mathcal{F}(b) \cong E_8$. 
If $l_{BKL}(P) = 0$, then $b$ is the positive braid $\delta_n^k$. If $l_{BKL}(P) = 1$, we can conjugate $b$ by a power of $\delta_n$ to see that $b \sim \delta_n^2 a_{1s} \sim \delta_n a_{1s} \delta_n = \delta_n \delta_{s} \delta_{s-1}^{-1} \delta_n = \delta_n \delta_{s} \sigma_{s-1} \sigma_{s} \ldots \sigma_{n-1}$, which is a positive braid. If $l_{BKL}(P) = 2$, then Lemmas \ref{lemma: not minimal} and \ref{lemma: contains sigma1} imply that up to conjugation by a power of $\delta_n$, the two letters of $P$ are $\sigma_1$ and $a_{rn}$. Now $r$ is either $1$ or $2$ as otherwise Lemma \ref{lemma: innermost commute} implies that $n(\widehat b) < n$. But $r = 1$ implies the two letters of $\delta_n P \delta_n^{-1}$ are $\sigma_1$ and $\sigma_2$, so $\sigma_{n-1}$ is not covered, contrary to Lemma \ref{lemma: not minimal}. And if $r = 2$, $P$ is either $\sigma_1 a_{2n} = a_{1n} \sigma_1$, which is impossible by the case $r = 1$, or $a_{2n} \sigma_1$. In the latter case, $b = \delta_n^2 a_{2n} \sigma_1 = \delta_n \sigma_1 \sigma_2^2 \sigma_3 \sigma_4 \cdots \sigma_{n-1} \sigma_1$, a positive braid. Next suppose that $l_{BKL}(P) = 3$. By Lemma \ref{lemma: not minimal} we can assume that the letters of $P$ are $\sigma_1, a_{rs}$, and $a_{tn}$. Lemma \ref{lemma: innermost commute} implies that $\min \{r, t\} \in \{1, 2\}$. If $t = 1$, then conjugation by $\delta_n$ yields the letters $\sigma_1, \sigma_2, a_{r+1, s+1}$. Hence either $r = n-1$, so $a_{rs} = \sigma_{n-1}$, or $s = n-1$. The former is impossible as it implies that the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_1, \sigma_2$, and $\sigma_3$, which do not cover $\sigma_{n-1}$. Thus $s = n-1$. But then $r = 1$ or $2$ by Lemma \ref{lemma: innermost commute} applied to $\delta_n P \delta_n^{-1}$. In either case, the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_1, \sigma_3$, and $a_{1,r+2}$ and as $r+2 \leq 4 < n$, $\sigma_{n-1}$ is not covered, a contradiction. 
Similarly if $t = 2$, conjugation by $\delta_n$ yields the letters $\sigma_1, a_{13}, a_{r+1, s+1}$. Hence either $r = n-1$, so $a_{rs} = \sigma_{n-1}$, or $s = n-1$. The former is impossible as it implies that the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_1, a_{24}$, and $\sigma_3$. Thus $s = n-1$. But then $r = 2$ (Lemma \ref{lemma: innermost commute} and Lemma \ref{lemma: no linking}), and in this case the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_3, a_{14}$, and $a_{24}$, a contradiction. Assume that $t > 2$. Then $r \in \{1, 2\}$ and $s \in \{t, n\}$ (Lemma \ref{lemma: innermost commute} and Lemma \ref{lemma: no linking}). If $s = n$, the letters of $\delta_n P \delta_n^{-1}$ are $\sigma_2, a_{1, r+1}$, and $a_{1, t+1}$. Then $t = n-1$ so that the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_3, a_{2, r+2},$ and $\sigma_2$, a contradiction. If $s = t$, the letters of $\delta_n P \delta_n^{-1}$ are $\sigma_2, a_{1, t+1}$, and $a_{r+1, t+1}$. Then $t = n-1$ so that the letters of $\delta_n^2 P \delta_n^{-2}$ are $\sigma_3, a_{1, r+2},$ and $\sigma_1$, a contradiction. This completes the proof in the case that $l_{BKL}(P) = 3$. In fact it proves more. The same arguments show that the lemma holds if the number of distinct letters in $P$ is at most three. We will use this below. Finally suppose that $l_{BKL}(P) = 4$, so, as noted above, $n = 5$ and $\mathcal{F}(b) \cong E_8$. From the last paragraph we can suppose that $P$ is made up of four distinct BKL-positive letters. There are two conjugacy classes of BKL-positive letters: $$A = \{\sigma_1, \sigma_2, \sigma_3, \sigma_4, a_{15}\}$$ and $$B = \{a_{13}, a_{24}, a_{35}, a_{14}, a_{25}\}$$ By Lemma \ref{lemma: no linking}, there are at most two letters of $P$ which come from $B$. Hence at least two come from $A$. 
\setcounter{case}{0} \begin{case} {\rm {\it Each letter of $P$ comes from $A$.}} \end{case} In this case $P$ is a product of four of the letters $\sigma_1, \sigma_2, \sigma_3, \sigma_4, a_{15}$. Hence after conjugation by an appropriate power of $\delta_5$ we have that $P$, and therefore $b$, becomes a positive braid. \begin{case} {\rm {\it Three letters of $P$ come from $A$ and one from $B$.}} \end{case} Then up to conjugation, $b = \delta_5^2 a_1 a_2 a_3 a_{1r}$ where $a_1, a_2, a_3 \in A$ and $a_{1r} \in B$ so that $r \in \{3, 4\}$. Then $$b \sim \delta_5 a_1 a_2 a_3 a_{1r} \delta_5 = \left\{ \begin{array}{ll} \delta_5 a_1 a_2 a_3 \sigma_1 \sigma_2^2 \sigma_3 \sigma_4 & \mbox{ if } r = 3 \\ \delta_5 a_1 a_2 a_3 \sigma_1 \sigma_2 \sigma_{3}^2 \sigma_4 & \mbox{ if } r = 4 \end{array} \right.$$ By hypothesis, the $a_i$ are distinct. If no $a_i$ is $a_{15}$, $b$ is a positive braid. If $a_1 = a_{15}$, then as $\delta_5 a_{15} = \sigma_1 \delta_5$ and $a_2, a_3 \in A \setminus \{a_{15}\}$, we are done. If $a_3 = a_{15}$ then note that either $$b \sim \delta_5 a_1 a_2 a_{15} \sigma_1 \sigma_2 \sigma_3^2 \sigma_4 = \delta_5 a_1 a_2 \delta_5 \sigma_3 \sigma_4,$$ so we are done, or \begin{eqnarray} b \sim \delta_5 a_1 a_2 a_{15} \sigma_1 \sigma_2^2 \sigma_3 \sigma_4 = \delta_5 a_1 a_2 \delta_5 \sigma_3^{-1} \sigma_2 \sigma_3 \sigma_4 = \delta_5 a_1 a_2 \sigma_4^{-1} \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \end{eqnarray} We're done if $a_2 = \sigma_4$. If $a_2 = \sigma_3$ and $a_1 \ne \sigma_4$, then $\delta_5 a_1 a_2 \sigma_4^{-1} = \delta_5 a_1 \sigma_3 \sigma_4^{-1} = a_1' \sigma_4 \delta_5 \sigma_4^{-1} = a_1' \sigma_4 \sigma_1 \sigma_2 \sigma_3$ where $a_1' \in \{\sigma_2, \sigma_3, \sigma_4\}$, so we're done. 
If $a_2 = \sigma_3$ and $a_1 = \sigma_4$, then \begin{eqnarray} b \sim \delta_5 \sigma_4 \sigma_3 \sigma_4^{-1} \delta_5 \sigma_2 \sigma_3 \sigma_4 & = & \delta_5 \sigma_3^{-1} \sigma_4 \sigma_3 \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & = & \sigma_1 \sigma_2 \sigma_3 \sigma_4 \sigma_3^{-1} \sigma_4 \sigma_3 \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & = & \sigma_1 \sigma_2 \sigma_4^{-1} \sigma_3 \sigma_4^2 \sigma_3 \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & = & \sigma_4^{-1} \sigma_1 \sigma_2 \sigma_3 \sigma_4^2 \sigma_3 \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & \sim & \sigma_1 \sigma_2 \sigma_3 \sigma_4^2 \sigma_3 \delta_5 \sigma_2 \sigma_3, \nonumber \end{eqnarray} a positive braid. Suppose $a_2 = \sigma_i \in \{\sigma_1, \sigma_2\}$. Then $b \sim \delta_5 a_1 \sigma_i \sigma_4^{-1} \delta_5 \sigma_2 \sigma_3 \sigma_4 = \delta_5 a_1 \sigma_4^{-1} \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4$. We're done if $a_1 \in \{\sigma_1, \sigma_2, \sigma_4\}$ as then $b \sim \delta_5 \sigma_4^{-1} a_1 \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4 = \sigma_1 \sigma_2 \sigma_3 a_1 \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4$. If $a_1 = \sigma_3$, then \begin{eqnarray} b \sim \delta_5 \sigma_3 \sigma_4^{-1} \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4 & = & \sigma_1 \sigma_2 \sigma_3 (\sigma_4 \sigma_3 \sigma_4^{-1}) \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & = & \sigma_1 \sigma_2 \sigma_3 (\sigma_3^{-1} \sigma_4 \sigma_3) \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4 \nonumber \\ & = & \sigma_1 \sigma_2 \sigma_4 \sigma_3 \sigma_i \delta_5 \sigma_2 \sigma_3 \sigma_4, \nonumber \end{eqnarray} a positive braid. This completes the argument when $a_3 = a_{15}$. Finally, if $a_2 = a_{15}$ we are done if $a_1 \ne \sigma_4$ as $\delta_5 a_1 a_2 = (\delta_5 a_1 \delta_5^{-1}) (\delta_5 a_{15})$. 
If $a_2 = a_{15}$ and $a_1 =\sigma_4$, then $\delta_5 a_1 a_2 = \delta_5 \sigma_4 (\sigma_4^{-1} \sigma_3^{-1} \sigma_2^{-1}) \delta_5 = \delta_5 \sigma_3^{-1} \sigma_2^{-1} \delta_5 = \sigma_4^{-1} \sigma_3^{-1} \delta_5^{2}$. It follows that $b \sim \delta_5 a_1 a_2 \sigma_4^{-1} \delta_5 \sigma_2 \sigma_3 \sigma_4 = \sigma_4^{-1} \sigma_3^{-1} \delta_5^{2} \sigma_4^{-1} \delta_5 \sigma_2 \sigma_3 \sigma_4 = (\sigma_3 \sigma_4)^{-1} (\delta_5 \sigma_1 \sigma_2 \sigma_3 \delta_5 \sigma_2) (\sigma_3 \sigma_4)$ is conjugate to a positive braid. \begin{case} {\rm {\it Two letters of $P$ come from $A$ and two from $B$.}} \end{case} Lemma \ref{lemma: no linking} implies that up to conjugation by a power of $\delta_5$, the two letters from $B$ are $a_{13}$ and $a_{35}$. Lemma \ref{lemma: 2-step descending stairs of length two or more indefinite} implies that $a_{13}$ occurs before $a_{35}$ in $P$. Note as well that we are reduced to the previous case if $P$ contains $a_{13} a_{35} = a_{15} a_{13}$. Thus there are distinct letters $a_1, a_2 \in A$ such that \begin{equation} \label{eqn: possibilities} P = \left\{ \begin{array}{l} a_{13} a_1 a_{35} a_2, \mbox{ or} \\ a_{13} a_1 a_2 a_{35}, \mbox{ or} \\ a_1 a_{13} a_2 a_{35} \end{array} \right. \end{equation} In particular, $a_{13}$ is one of the first two letters of $P$ and $a_{35}$ one of the last two letters. Since $\delta_5 \{a_{13}, a_{35}\} \delta_5^{-1} = \{a_{24}, a_{14}\}$, Lemma \ref{lemma: not minimal} implies that either $\sigma_4$ or $a_{15}$ must be contained in the letters of $\delta_5 P \delta_5^{-1}$. Thus $\sigma_3$ or $\sigma_4$ is a letter of $P$. Next note that as $\delta_5^3 \{a_{13}, a_{35}, \sigma_3\} \delta_5^{-3} = \{a_{13}, a_{14}, \sigma_1\}$ and $\delta_5^3 \{a_{13}, a_{35}, \sigma_4\} \delta_5^{-3} = \{a_{13}, a_{14}, \sigma_2\}$, either $\sigma_4$ or $a_{15}$ must be contained in the letters of $\delta_5^3 P \delta_5^{-3}$. 
Thus the letters of $P$ are either $a_{13}, a_{35}, \sigma_1, \sigma_3$, or $a_{13}, a_{35}, \sigma_1, \sigma_4$, or $a_{13}, a_{35}, \sigma_2, \sigma_3$, or $a_{13}, a_{35}, \sigma_2, \sigma_4$. We consider these cases separately. \begin{subcase} {\rm {\it The letters of $P$ are $a_{13}, a_{35}, \sigma_1, \sigma_3$.}} \end{subcase} Since $\sigma_1$ commutes with $a_{35}$ and $\sigma_3$, if it occurs after $a_{13}$, $P$ can be rewritten to contain $a_{13} \sigma_1 = \sigma_1 \sigma_2$ and we can then appeal to the previous case. So without loss of generality we can suppose that $\sigma_1$ occurs before $a_{13}$. Then (\ref{eqn: possibilities}) implies that $P = \sigma_1 a_{13} \sigma_3 a_{35} = \sigma_1 a_{14} a_{13} a_{35}$. But this is impossible since the presence of $a_{14}$ and $a_{35}$ in (a rewritten) $P$ forces $\mathcal{F}(b)$ to be indefinite. \begin{subcase} {\rm {\it The letters of $P$ are $a_{13}, a_{35}, \sigma_1, \sigma_4$.}} \end{subcase} A similar argument to that used in the previous subcase shows that $P = \sigma_1 a_{13} \sigma_4 a_{35} = \sigma_1 a_{13} \sigma_3 \sigma_4$, so again we are done by the previous case. \begin{subcase} {\rm {\it The letters of $P$ are $a_{13}, a_{35}, \sigma_2, \sigma_3$.}} \end{subcase} Since $\sigma_2 a_{13} = \sigma_1 \sigma_2$ and $a_{35} \sigma_3 = \sigma_3 \sigma_4$, we are reduced to the previous case if $\sigma_2$ occurs before $a_{13}$ or $\sigma_3$ occurs after $a_{35}$ (cf. (\ref{eqn: possibilities})). Assume that this isn't the case. Lemma \ref{lemma: no linking} rules out the possibility that $P$ contains the subwords $\sigma_2 a_{35} = a_{25} \sigma_2$ or $a_{13} \sigma_3 = a_{14} a_{13}$. From (\ref{eqn: possibilities}), the only possibility is for $P = a_{13} \sigma_2 \sigma_3 a_{35} = a_{13} a_{24} \sigma_2 a_{35}$, which is ruled out by Lemma \ref{lemma: no linking}. 
\begin{subcase} {\rm {\it The letters of $P$ are $a_{13}, a_{35}, \sigma_2, \sigma_4$.}} \end{subcase} Since $\sigma_2$ commutes with $\sigma_4$, if it occurs before $a_{13}$, $P$ can be rewritten to contain $\sigma_2 a_{13} = \sigma_1 \sigma_2$ and we can then appeal to the previous case. So without loss of generality we can suppose that $\sigma_2$ occurs after $a_{13}$. Similarly $\sigma_4$ must occur after $a_{35}$, so from (\ref{eqn: possibilities}), $P = a_{13} \sigma_2 a_{35} \sigma_4 = a_{13} a_{25} \sigma_2 \sigma_4$. This is ruled out by Lemma \ref{lemma: no linking}. \end{proof} \begin{lemma} \label{lemma: done if all spans 1 or n-1} If $b = \delta_n^2 P$ where $P$ is a product of letters of span $1$ and $n-1$, then $b$ is conjugate to a positive braid. \end{lemma} \begin{proof} Write $b = \delta_n^2 p_1 a_{1n}^{d_1} p_2 a_{1n}^{d_2}\ldots p_r a_{1n}^{d_r}p_{r+1}$ where each $p_i$ is a positive braid. If $P$ contains $\sigma_{n-1} a_{1n} \sigma_1$, then $\delta_n^2 P \delta_n^{-2}$ contains the $3$-step ascending staircase $\sigma_1 \sigma_2 \sigma_3$ and therefore $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$ by Lemma \ref{lemma: $3$-step ascending staircase e6 definite}. In this case Lemma \ref{lemma: the case e6} implies that $b$ is conjugate to a positive braid. Similarly if $P$ contains $(a_{1n} \sigma_1)^2$, then $\delta_n P \delta_n^{-1}$ contains $(\sigma_1 \sigma_2)^2 = (a_{13} \sigma_1)^2 \supset a_{13}^2$. Lemma \ref{lemma: no repeated span 2 letters} then implies that $\mathcal{F}(b) \cong E_6, E_7$, or $E_8$ and so $b$ is conjugate to a positive braid by Lemma \ref{lemma: the case e6}. Assume below that $P$ contains neither $\sigma_{n-1} a_{1n} \sigma_1$ nor $(a_{1n} \sigma_1)^2$. 
Consequently, if there is a $1 \leq k \leq r$ such that $p_{k}$ contains $\sigma_1$, then \begin{itemize} \item if $i < k, p_i$ does not contain $\sigma_{n-1}$; otherwise $P$ contains $\sigma_{n-1} a_{1n} \sigma_1$; \item if $k < i, p_i$ does not contain $\sigma_1$; otherwise $P$ contains $a_{1n} \sigma_1 a_{1n} \sigma_1 = (a_{1n} \sigma_1)^2$. \end{itemize} Now rewrite $P$ by conjugating one copy of $\delta_n$ forward through $P$ till just after $a_{1n}^{d_{k-1}}$. Since there is no $\sigma_{n-1}$ in $p_1 p_2 \ldots p_{k-1}$ and since $\delta_n a_{1n} \delta_n^{-1} = \sigma_1$, we see that \begin{equation} \label{eqn: rewrite 1} b = \delta_n q_1 \sigma_1^{d_1} q_2 \sigma_1^{d_2}\ldots \sigma_1^{d_{k-1}} \delta_n p_k a_{1n}^{d_k}p_{k+1} \ldots p_r a_{1n}^{d_r}p_{r+1} \end{equation} where each $q_i$ is a positive braid. Next conjugate the lead $\delta_n$ in (\ref{eqn: rewrite 1}) to the ``back'' of $b$ and then conjugate it backward through the rewritten $P$ till just before $a_{1n}^{d_k}$. Since $p_{k+1} p_{k+2} \ldots p_{r+1}$ does not contain $\sigma_1$ and since $\delta_n^{-1} a_{1n} \delta_n = \sigma_{n-1}$, we see that $$b \sim q_1 \sigma_1^{d_1} q_2 \sigma_1^{d_2}\ldots \sigma_1^{d_{k-1}} \delta_n p_k \delta_n \sigma_{n-1}^{d_k} q_{k+1} \ldots q_r \sigma_{n-1}^{d_r} q_{r+1}$$ where $q_{k+1}, q_{k+2}, \ldots$ are positive braids. This completes the proof when there is a $1 \leq k \leq r$ such that $p_{k}$ contains $\sigma_1$. Suppose then that $p_2 p_3 \ldots p_{r}$ does not contain $\sigma_1$. If $\sigma_1$ is also not contained in $p_{r+1}$, conjugate one copy of $\delta_n$ to the back of $b$ and then backward through $P$ till just before $a_{1n}^{d_1}$. As above, this operation yields a positive braid. On the other hand, if $p_{r+1}$ does contain $\sigma_1$, then $p_1 p_2 \ldots p_{r}$ cannot contain $\sigma_{n-1}$ as otherwise $P$ contains $\sigma_{n-1} a_{1n} \sigma_1$. 
Then conjugating one copy of $\delta_n$ forward through $P$ till just after $a_{1n}^{d_r}$ rewrites $b$ in the form $b = \delta_n q_1 \sigma_1^{d_1} q_2 \sigma_1^{d_2}\ldots q_r \sigma_1^{d_{r}} \delta_n p_{r+1}$ where each $q_i$ is a positive braid, thus completing the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop: definite strongly quasipositive n-braids}] We can assume that $P$ contains a letter of span $l \not \in \{1, n-1\}$ by Lemma \ref{lemma: done if all spans 1 or n-1}. We can further suppose that the spans of the letters of $P$ lie in $\{1, 2, n-2, n-1\}$. This is obvious if $n = 5$. If $n \geq 6$ and $P$ contains a letter of span $3 \leq l \leq n-3$, Lemmas \ref{lemma: small span} and \ref{lemma: the case e6} imply that Proposition \ref{prop: definite strongly quasipositive n-braids} holds. We can also assume that $P$ contains at most one letter of span $2$, and that if there is one, it occurs exactly once, by Lemmas \ref{lemma: no linking}, \ref{lemma: when commuting implies indefinite}, \ref{lemma: 2-step descending stairs of length two or more indefinite}, \ref{lemma: no repeated span 2 letters}, \ref{lemma: no ascending stairs of total length 3 or 4}, and \ref{lemma: the case e6}. If $P$ contains two letters (possibly equal) of span $n - 2$, they cannot be distinct as otherwise the assumption that $n \geq 5$ implies that they are linked, contrary to Lemma \ref{lemma: no linking}. Thus they coincide. But then we could conjugate by an appropriate power of $\delta_n$ to obtain $b' = \delta_n^2 P'$ where $P'$ contains the square of a span $2$ letter. Then by Lemmas \ref{lemma: no repeated span 2 letters} and \ref{lemma: the case e6} we are done. Assume then that $P$ does not contain two letters (possibly equal) of span $n - 2$. If $P$ contains a letter of span $2$ and one of span $n-2$, conjugate $b$ by an appropriate power of $\delta_n$ so that the span $n-2$ letter is $a_{1, n-1}$. 
The span $2$ letter has now been converted to a letter $a_{rs}$ of span either $2$ or $n-2$. We have just seen that the latter is impossible, so $a_{rs}$ has span $2$. Lemma \ref{lemma: no linking} now implies that $s \leq n-1$. The reader will verify that conjugating by either $\delta_n$ or $\delta_n^{-1}$ yields $P'$ with distinct letters of span $2$, so we are done as in the second paragraph of this proof. Assume then that $P$ does not contain both a letter of span $2$ and one of span $n-2$. By assumption, it contains at least one such letter, so after conjugating by a suitable power of $\delta_n$ we can suppose that $P$ contains $a_{13}$. All other letters of $P$ have span $1$ or $n-1$. Thus $b = \delta_n^2 P_1 a_{13} P_2$ where the letters of $P_1$ and $P_2$ have span $1$ or $n-1$. Then $$b = \delta_n^2 P_1 a_{13} P_2 \sim P_1 a_{13} P_2 \delta_n^2 = P_1 a_{13} \delta_n^2 P_2' \sim \delta_n^2 P_2' P_1 a_{13} = \delta_n^2 P' a_{13}$$ where the letters of $P'$ are of span $1$ or $n-1$. If $P'$ contains $a_{1n}$, then it contains $a_{1n} a_{13} = a_{13} a_{3n}$. Since the span of $a_{3n}$ is $n-3$, Lemmas \ref{lemma: small span} and \ref{lemma: the case e6} imply that we are done when $n \geq 6$. On the other hand, if $n = 5$, $a_{13} a_{3n}$ is an ascending $(2,2)$ pair. Hence we are done by Lemmas \ref{lemma: no ascending stairs of total length 3 or 4} and \ref{lemma: the case e6}. Thus without loss of generality, $P'$ is a positive braid. But then $$b \sim \delta_n^2 P' a_{13} \sim \delta_n P' a_{13} \delta_n = \delta_n P' \sigma_1 \sigma_2^2 \sigma_3 \cdots \sigma_{n-1},$$ a positive braid. This completes the proof. \end{proof} \section{Cyclic basket links} \label{sec: cycle baskets} A basket surface $F$ is determined by a finite set $\mathcal{A}$ of arcs properly embedded in the disk $D^2$, together with an ordering $\omega$ of $\mathcal{A}$. Let the arcs be ordered $\alpha(1), \alpha(2),...,\alpha(m)$. 
Then $F = F(\mathcal{A}, \omega)$ is obtained by successively plumbing positive Hopf bands $b(1), b(2),...,b(m)$, where $b(i)$ is plumbed along a neighborhood of the arc $\alpha(i), 1 \le i \le m$. We adopt the convention that each plumbing is a {\it bottom plumbing} \cite{Ru1}, so $b(i)$ lies underneath $D^2 \cup b(1) \cup ... \cup b(i-1)$. Baskets are also considered in \cite{Hir}; we will adopt the terminology of \cite{Hir} and call the collection of arcs $\mathcal{A}$ in the disk a {\it chord diagram}. These are considered up to the equivalence relation generated by isotopy of an arc $\alpha$, keeping $\partial \alpha$ in $S^1$ and disjoint from the endpoints of the other arcs. In particular we may assume that any two arcs are either disjoint or intersect transversely in a single point. As in \cite{Hir}, $\mathcal{A}$ determines a graph $\Gamma = \Gamma(\mathcal{A})$, the {\it incidence graph} of $\mathcal{A}$: the vertices of $\Gamma$ correspond bijectively to the arcs in $\mathcal{A}$, and two vertices are joined by an edge if and only if the corresponding arcs intersect. If $\Gamma$ is a tree then a planar embedding of $\Gamma$ determines a unique chord diagram $\mathcal{A}$, and $F(\mathcal{A})$ is the corresponding {\it arborescent plumbing}; this is clearly independent of the ordering $\omega$. Recall the simply laced arborescent links $L(A_m), L(D_m), L(E_6), L(E_7)$, and $L(E_8)$ from the introduction. We note that $$L(D_4) = P(-2,2,2) = T(3,3)$$ Also, the expression for $L(D_m)$ continues to hold for $m = 3$: $D_3 = A_3$ and $$L(D_3) = P(-2,2,1) = T(2,4) = L(A_3)$$ \begin{problem} {\rm Determine the definite baskets.} \end{problem} We will see that there are prime definite basket links that are not simply laced arborescent. We discuss the case where $\Gamma = C_m$ is an $m$-cycle, $m \ge 3$. As observed in \cite{Hir}, $\Gamma$ has a unique realization as a chord diagram $\mathcal{A}$. Let $\omega$ be an ordering of $\mathcal{A}$. 
We encode $\omega$ as follows. Number the arcs $\alpha_1,\alpha_2,...,\alpha_m$, and the corresponding bands $b_1,b_2,...,b_m$, in clockwise order on $D^2$, and define $\boldsymbol{\epsilon} = (\epsilon_1,...,\epsilon_m) \in \{\pm 1\}^m$ as illustrated in Figure \ref{fig: cam 1}. Thus $\boldsymbol{\epsilon}$ determines the order $\omega$ in which the bands are plumbed. Denote the corresponding basket by $F(C_m, \boldsymbol{\epsilon})$, the corresponding link by $L(C_m, \boldsymbol{\epsilon})$, and the symmetrised Seifert form of $F(C_m, \boldsymbol{\epsilon})$ by $\mathcal{F}(C_m, \boldsymbol{\epsilon})$. \begin{figure}\label{fig: cam 1} \end{figure} The link $L(C_m, \boldsymbol{\epsilon})$ has two components if $m$ is odd and three components if $m$ is even. If $L$ is an oriented link with components $K_1, K_2,..., K_n, n \ge 2$, define $$lk(L) = \sum_{1 \le i < j \le n} lk(K_i,K_j)$$ In particular, $$lk(L(D_m)) = lk(P(-2,2,m-2)) = \left\{ \begin{array}{ll} 2 & \hbox{if $m$ is odd,} \\ \frac{m}{2} + 1 & \hbox{if $m$ is even.} \end{array} \right.$$ Let $p(\boldsymbol{\epsilon})$ be the number of $i$ such that $\epsilon_i = +1$. Note that if $F(C_m, \boldsymbol{\epsilon})$ is a basket then $1 \le p(\boldsymbol{\epsilon}) \le m-1$. \begin{lemma} \label{lemma: linking} $lk(L(C_m,\boldsymbol{\epsilon})) = \left\{ \begin{array}{ll} 2p(\boldsymbol{\epsilon}) & \hbox{if $m$ is odd,} \\ \frac{m}{2} + p(\boldsymbol{\epsilon}) & \hbox{if $m$ is even.} \end{array} \right.$ \end{lemma} \begin{proof} Let $L = L(C_m, \boldsymbol{\epsilon})$. Each band contributes $+1$ to $lk(L)$. Consider a crossing of a pair of adjacent bands $b_i$ and $b_{i+1}$. First assume that $m$ is odd. Then $L$ has two components, $K_1, K_2$ say, and the crossing contributes $\epsilon_i$ to $lk(L)$; see Figure \ref{fig: cam 2}.
Hence \begin{eqnarray} lk(L) & = & m + \hbox{(number of $i$ such that $\epsilon_i = +1$)} - \hbox{(number of $i$ such that $\epsilon_i = -1$)} \nonumber \\ & = & m + p(\boldsymbol{\epsilon}) - (m - p(\boldsymbol{\epsilon})) \nonumber \\ & = & 2p(\boldsymbol{\epsilon}) \nonumber \end{eqnarray} \begin{figure}\label{fig: cam 2} \end{figure} If $m$ is even then all three components, $K_1, K_2, K_3$ say, appear at the band crossing and the contribution to $lk(L)$ is $\epsilon_{i}/2$; see Figure \ref{fig: cam 3}. Hence $$lk(L) = m + \big(p(\boldsymbol{\epsilon}) - (m - p(\boldsymbol{\epsilon}))\big)/2 = m/2 + p(\boldsymbol{\epsilon}).$$ \begin{figure}\label{fig: cam 3} \end{figure} \end{proof} \begin{prop} \label{prop: only depends on p} The baskets $F(C_m, \boldsymbol{\epsilon})$ and $F(C_m, \boldsymbol{\epsilon}')$ are isotopic if and only if $p(\boldsymbol{\epsilon}) = p(\boldsymbol{\epsilon}')$. \end{prop} \begin{proof} The ``only if" direction follows from Lemma \ref{lemma: linking}. For the ``if" direction, note that replacing a top plumbing by a bottom plumbing does not change the isotopy class of the surface \cite[Lemma 3.2.1]{Ru1}. If $b_i$ is top plumbed and is replaced by a bottom plumbing then the effect on $\boldsymbol{\epsilon}$ is that $(\epsilon_{i-1}, \epsilon_i)$ changes from $(+1,-1)$ to $(-1,+1)$. After a sequence of such moves we can bring $\boldsymbol{\epsilon}$ to the form $(-1,...,-1,+1,...,+1)$. \end{proof} In view of Proposition \ref{prop: only depends on p} we write $F(C_m,p)$ for $F(C_m, \boldsymbol{\epsilon})$ where $p = p(\boldsymbol{\epsilon})$, and similarly for $L(C_m,p)$ and $\mathcal{F}(C_m, p)$. We can think of the basket $F(\mathcal{A}, \omega)$ as being obtained from $D^2$ by attaching 1-handles $h_1,...,h_m$, where $h_i = \overline{b_i \setminus D^2}$. We then have the following {\it handle-sliding} move, which does not change the isotopy class of the surface.
Let $h$ and $h'$ be 1-handles such that no 1-handle meets the interior of the interval $I$ shown in Figure \ref{fig: cam 4}. \begin{figure}\label{fig: cam 4} \end{figure} Then $h$ can be slid over $h'$ as indicated in Figure \ref{fig: cam 4}, resulting in Figure \ref{fig: cam 5}. \begin{figure}\label{fig: cam 5} \end{figure} Note that the core of the band corresponding to $h''$ still has framing $-1$. \begin{thm} \label{thm: p = 1} If $p = 1$ then $F(C_m,p)$ is isotopic to $F(D_m)$. \end{thm} \begin{proof} Without loss of generality $\epsilon_m = +1, \epsilon_i = -1$ for $1 \le i \le m-1$. The surface $F(C_m, \boldsymbol{\epsilon})$ is illustrated in Figure \ref{fig: cam 6}, which shows the case $m = 7$. \begin{figure}\label{fig: cam 6} \end{figure} Slide $h_1$ over $h_2$, then over $h_3$,..., then over $h_{m-1}$. The resulting surface is shown in Figure \ref{fig: cam 7}. \begin{figure}\label{fig: cam 7} \end{figure} The corresponding chord diagram has incidence graph $D_m$. \end{proof} We now determine which of the links $L(C_m, p)$ are definite (cf. Lemma \ref{lemma: definite iff p odd}). Let $x_i$ be the core of $b_i, 1 \le i \le m$, with the anticlockwise orientation, and let $x_i^{+}$ be a copy of $x_i$ pushed slightly off $F = F(C_m, \boldsymbol{\epsilon})$ in the positive normal direction, i.e. the direction from the disk towards the reader. Let $S = (s_{ij})$ be the Seifert matrix of $F$ with respect to the basis $[x_1],...,[x_m]$ for $H_1(F)$. Note that $$s_{i, i+1} = lk(x_{i}^+, x_{i+1}) = \left\{ \begin{array}{l} -1 \;\; \hbox{ if } \epsilon_i = +1 \;\;\; \\ 0 \;\; \hbox{ if } \epsilon_i = -1 \end{array} \right.$$ and $$s_{i+1, i} = lk(x_{i+1}^+, x_i) = \left\{ \begin{array}{l} 0 \;\; \hbox{ if } \epsilon_i = +1 \\ \;\;\; 1 \;\; \hbox{ if } \epsilon_i = -1 \end{array} \right.$$ Also, $s_{ii} = -1$, and $s_{ij} = 0$ if $|i-j| > 1$ (where subscripts are interpreted modulo $m$). Let $p = p(\boldsymbol{\epsilon})$, so $0 < p < m$. 
By Proposition \ref{prop: only depends on p} we may assume that $$\epsilon_i = \left\{ \begin{array}{l} -1 \;\; \hbox{ if } 1 \leq i \leq m-p \\ +1 \;\; \hbox{ if } m-p < i \leq m \end{array} \right.$$ Thus the matrix $S$ is $$S = \left( \begin{array}{rrrrrrrr|rrrrrrr} -1 & 0 & 0 & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot &\cdots & \cdot & 0 \\ 1 & -1 & 0 & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & 0 \\ 0 & 1 & -1& 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & 0 \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & \cdot & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & 0 \\ 0 & 0 & 0 & 0 &\cdots & 1 & -1 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & 0 \\ \hline 0 & 0 & 0 & 0 &\cdots & 0 & 1 & & -1 & -1 & 0 & \cdot & \cdots &\cdot & 0 \\ 0 & 0 & 0 & 0 &\cdots & 0 & 0 & & 0 & -1 & -1 & 0 & \cdots &\cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & \cdot& \cdot & \cdot & \cdot & \cdots &\cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & \cdot & \cdot & \cdot & \cdot & \cdots &\cdot & 0 \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & \cdot & & \cdot & \cdot & \cdot & \cdot & \cdots &-1 & -1 \\ -1 & 0 & 0 & 0 &\cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots & 0 & -1 \\ \end{array}\right)$$ Then $$-(S + S^T) = \left( \begin{array}{rrrrrrrr|rrrrrrrr} 2 & -1 & 0 & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot &\cdots & \cdot & \cdot & 1 \\ -1 & 2 & -1 & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ 0 & -1 & 2& -1 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & -1 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ 0 & 0 & 0 & 0 &\cdots & -1 & 2 & & -1 & 0 & \cdot & \cdot & \cdots &\cdot & 
\cdot & 0 \\ \hline 0 & 0 & 0 & 0 &\cdots & 0 & -1 & & 2 & 1 & 0 & \cdot & \cdots &\cdot & \cdot &0 \\ 0 & 0 & 0 & 0 &\cdots & 0 & 0 & & 1 & 2 & 1 & 0 & \cdots &\cdot & \cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & \cdot& \cdot & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & \cdot & \cdot & \cdot & \cdot & \cdots &\cdot & \cdot & 0 \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & \cdot & & \cdot & \cdot & \cdot &\cdot & \cdots & 1 &2 & 1 \\ 1 & 0 & 0 & 0 &\cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots & 0 &1 & 2 \\ \end{array}\right)$$ For $\varepsilon = \pm 1, m \ge 3$, let $Q_m(\varepsilon)$ be the $m \times m$ symmetric matrix $$Q_m(\varepsilon) = \left(\begin{matrix} 2 & 1 & 0 & 0 & \cdots & 0 & \varepsilon \\ 1 &2 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 2& 1 & \cdots & 0 & 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots& \cdot & \cdot \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & \cdot \\ 0 & 0 & 0 & 0 &\cdots & 2 & 1 \\ \varepsilon & 0 & 0 & 0 &\cdots & 1 & 2 \end{matrix}\right)$$ Successively multiplying the $i^{th}$ row and column of $-(S+S^T)$ by $-1$ for $$i = \left\{ \begin{array}{l} m-p, m-p-2, \ldots, 1 \;\; \hbox{ if $m-p$ is odd} \\ m-p, m-p-2, \ldots, 2 \;\; \hbox{ if $m-p$ is even} \end{array} \right.$$ we see that $-(S+S^T)$ is congruent to $Q_m((-1)^{m-p})$. One easily shows by induction on $r$ that the $r \times r$ leading minor of $Q_m(\varepsilon)$ is $r+1$ for $1 \leq r \leq m-1$. A straightforward calculation also gives the following. \begin{lemma} \label{lemma: det qme} $$\det Q_m(\varepsilon) = \left\{ \begin{array}{l} 2 + 2\varepsilon\;\; \hbox{ if $m$ is odd} \\ 2 - 2\varepsilon \;\; \hbox{ if $m$ is even} \end{array} \right. \qed$$ \end{lemma} It follows that $\mathcal{F}(C_m, p)$ depends only on the parity of $p$ and is definite if and only if $p$ is odd. Recalling Theorem \ref{thm: p = 1} we therefore have the following.
\begin{lemma} \label {lemma: definite iff p odd} $\;$ $(1)$ If $p$ is odd then $\mathcal{F}(C_m, p) \cong D_m$. $(2)$ If $p$ is even then $\mathcal{F}(C_m, p)$ is indefinite. \qed \end{lemma} We now prove Theorem \ref{thm: p odd} which we restate here for the reader's convenience. {\bf Theorem \ref{thm: p odd}}. {\it Let $m \geq 3$ and $p$ be integers with $p$ odd and $0 < p < m$. $(1)$ $L(C_m, p)$ is prime and definite. $(2)$ $L(C_m, p)$ is simply laced arborescent if and only if $p = 1$. } \begin{proof} The definiteness of $L(C_m,p)$ in part (1) follows from Lemma \ref{lemma: definite iff p odd} (1), and the primeness of $L(C_m,p)$ follows from Lemma \ref{lemma: definite iff p odd}(1) and the fact that $D_m$ is indecomposable. To prove part (2), we have that $L(C_m,1) = L(D_m)$ by Theorem \ref{thm: p = 1}. Conversely, if $L(C_m,p)$ is simply laced arborescent then it is definite, so $p$ is odd by Lemma \ref{lemma: definite iff p odd}, and therefore $F(C_m,p)$ is isotopic to $F(D_m)$. By Lemma \ref{lemma: linking} this implies that $p = 1$. \end{proof} Next we prove Theorem \ref{thm: intro not a counterexample}, which shows that the links in Theorem \ref{thm: p odd} do not provide counterexamples to Conjecture \ref{conj: branched lspace implies simply laced arborescent}. \begin{lemma} \label{lemma: alex poly} The Alexander polynomial of $L(C_m, p)$ is $$\Delta_{L(C_m, p)} = \left\{ \begin{array}{l} (t^p - 1)(t^{(m-p)} + 1) \;\; \hbox{ if $m$ is odd} \\ (t^p - 1)(t^{(m-p)} - 1) \;\; \hbox{ if $m$ is even} \end{array} \right.$$ \end{lemma} \begin{proof} The matrix $S - tS^T$ (immediately below) is a Seifert matrix for $L(C_m,p)$ whose determinant is $\Delta_{L(C_m, p)}$. 
$$\left( \begin{array}{cccccccc|cccccccc} t-1 & -t & 0 & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot &\cdots & \cdot& \cdot & t \\ 1 & t-1& -t & 0 & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ 0 & 1 & t-1& -t & \cdots & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ \cdot & \cdot & \cdot &\cdot & \cdots & \cdot & \cdot & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ \cdot & \cdot & \cdot& \cdot& \cdots&\cdot & -t & & 0 & 0 & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ 0 & 0 & 0 & 0 &\cdots & 1 & t-1 & & -t & 0 & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ \hline 0 & 0 & 0 & 0 &\cdot & 0 & 1 & & t-1 & -1 & 0 & \cdot & \cdots &\cdot & \cdot& 0 \\ 0 & 0 & 0 & 0 &\cdot & 0 & 0 & & t & t-1 & -1 & 0 & \cdots &\cdot & \cdot& 0 \\ \cdot & \cdot & \cdot &\cdot & \cdot & \cdot & \cdot & & \cdot& \cdot & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ \cdot & \cdot & \cdot &\cdot & \cdot & \cdot & \cdot & & \cdot & \cdot & \cdot & \cdot & \cdots &\cdot & \cdot& 0 \\ \cdot & \cdot & \cdot& \cdot& \cdot&\cdot & \cdot & & \cdot & \cdot & \cdot & \cdot & \cdots & t &t-1 & -1 \\ -1 & 0 & 0 & 0 &\cdot & 0 & 0 & & 0 & 0 & \cdot & \cdot & \cdots & 0 & t & t-1 \\ \end{array}\right)$$ Let $M_1, M_2$, and $M_3$ be the matrices obtained by removing from $S - tS^T$, respectively, the first row and column, the first two rows and columns, and the first and last rows and columns. Correspondingly, let $F_1, F_2$, and $F_3$ be the surfaces obtained by removing, respectively, the first band $b_1$, the first and second bands $b_1$ and $b_2$, and the first and last bands $b_1$ and $b_m$. Then for $i = 1, 2,3$, $M_i = R_i - tR_i^T$ where $R_i$ is a Seifert matrix for $F_i$. Also, $F_1$ is the arborescent basket $F(A_{m-1})$, and $F_2$ and $F_3$ are the arborescent basket $F(A_{m-2})$, the corresponding links being $T(2,m)$ and $T(2,m-1)$ respectively. 
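Before carrying out the cofactor expansion, the conventions can be cross-checked symbolically: building $S$ directly from $\boldsymbol{\epsilon}$ for small $m$ and $p$, the determinant of $S - tS^T$ agrees with the closed form asserted in Lemma \ref{lemma: alex poly}. The following is only a verification sketch using sympy; the helper names are ours.

```python
import sympy as sp

t = sp.symbols('t')

def seifert_matrix(m, p):
    """Seifert matrix S of F(C_m, p), with epsilon_i = -1 for i <= m-p and +1 after,
    following the conventions s_{ii} = -1, s_{i,i+1}, s_{i+1,i} stated above."""
    eps = [-1] * (m - p) + [1] * p
    S = sp.zeros(m, m)
    for i in range(m):
        S[i, i] = -1
        j = (i + 1) % m               # subscripts interpreted mod m
        if eps[i] == 1:
            S[i, j] = -1              # s_{i,i+1} = -1 when epsilon_i = +1
        else:
            S[j, i] = 1               # s_{i+1,i} = +1 when epsilon_i = -1
    return S

def alexander(m, p):
    """det(S - t S^T), the Alexander polynomial of L(C_m, p)."""
    S = seifert_matrix(m, p)
    return sp.expand((S - t * S.T).det())

def closed_form(m, p):
    """The formula of Lemma (alex poly)."""
    sign = 1 if m % 2 == 1 else -1    # t^{m-p} + 1 for m odd, t^{m-p} - 1 for m even
    return sp.expand((t**p - 1) * (t**(m - p) + sign))

for m, p in [(3, 1), (4, 1), (5, 2), (6, 3)]:
    assert alexander(m, p) == closed_form(m, p)
```

In these small cases the determinant matches the claimed formula on the nose, with no ambiguity up to units.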
Expanding by the first column we obtain $\Delta_{L(C_m,p)} = \det(S - tS^T) = (t-1) \det M_1 - \det U + (-1)^m \det V$. Here $U = \left(\begin{matrix} -t & C_1 \\ C_2^T & M_2 \end{matrix}\right)$ where $C_1 = (0, 0, \ldots, 0, t)$ and $C_2 = (1, 0, \ldots , 0)$, and $V = \left(\begin{matrix} D_1 & t \\ M_3 & D_2^T \end{matrix}\right)$ where $D_1 = (-t, 0, \ldots, 0, 0)$ and $D_2 = (0, 0, \ldots , 0, -1)$. Expanding $\det U$ by the first row we get $$\det U = (-t) \det M_2 + (-1)^{m} t^p$$ Similarly, expanding $\det V$ by the last column we get $$\det V = (-1)^m t \det M_3 + (-1)^{m-1} t^{m-p}$$ Since $\det M_1= \Delta_{T(2,m)}$ and $\det M_2 = \det M_3 = \Delta_{T(2,m-1)}$, we obtain $$\Delta_{L(C_m, p)} = (t-1) \Delta_{T(2,m)} + 2t \Delta_{T(2,m-1)} - (-1)^m t^p - t^{m-p}$$ Now $$\Delta_{T(2,r)} = \left\{ \begin{array}{l} (t^r + 1)/(t + 1) \;\; \hbox{ if $r$ is odd} \\ (t^r - 1)/(t + 1) \;\; \hbox{ if $r$ is even} \end{array} \right.$$ See for example \cite[Exercise 7.3.1]{Mu2}. First suppose that $m$ is even. Then $(t-1)\Delta_{T(2,m)} + 2t\Delta_{T(2,m-1)} = \big((t-1)(t^m - 1) + 2t(t^{m-1}+ 1)\big)/(t+1) = (t^{m+1} + t^m + t + 1)/(t+1) = t^m + 1$. Hence $\Delta_{L(C_m,p)} = t^m + 1 - t^p - t^{m-p} = (t^p - 1)(t^{m-p} - 1)$. The case that $m$ is odd is similar; we leave the details to the reader. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: intro not a counterexample}] For each $\zeta \in S^1$ we define $I_+(\zeta)$ to be the open subarc of the circle with endpoints $\zeta, \bar \zeta$ which contains $+1$. If $\Sigma_n(L(C_m, p))$ is an L-space, then \begin{equation} \label{eqn: roots} \hbox{{\it All the roots of $\Delta_{L(C_m, p)}$ lie in $I_+(\zeta_n)$}} \end{equation} by Theorem \ref{thm: bbg definite}. If $p > 1$ and $n \geq 3$, the fact that $t^p - 1$ divides $\Delta_{L(C_m, p)}$ (cf. Lemma \ref{lemma: alex poly}) shows that (\ref{eqn: roots}) is violated.
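Condition (\ref{eqn: roots}) is straightforward to test numerically: by Lemma \ref{lemma: alex poly} the roots of $\Delta_{L(C_m, p)}$ are explicit roots of unity, and membership in the open arc $I_+(\zeta_n)$ is just a phase comparison. The following sketch illustrates this; the function names are ours.

```python
import cmath
import math

def alexander_roots(m, p):
    """Roots of Delta_{L(C_m,p)} = (t^p - 1)(t^{m-p} + 1) (m odd) or
    (t^p - 1)(t^{m-p} - 1) (m even), as points on the unit circle."""
    roots = [cmath.exp(2j * math.pi * k / p) for k in range(p)]                 # t^p = 1
    q = m - p
    if m % 2 == 1:
        roots += [cmath.exp(1j * math.pi * (2 * k + 1) / q) for k in range(q)]  # t^q = -1
    else:
        roots += [cmath.exp(2j * math.pi * k / q) for k in range(q)]            # t^q = 1
    return roots

def roots_in_arc(m, p, n):
    """True iff every root lies in the open arc I_+(zeta_n) = {e^{ia} : |a| < 2*pi/n}."""
    bound = 2 * math.pi / n - 1e-9      # small tolerance: the arc is open
    return all(abs(cmath.phase(z)) < bound for z in alexander_roots(m, p))

# p > 1 forces a p-th root of unity with phase at least 2*pi/3, so the
# condition fails whenever n >= 3, as in the text:
assert not roots_in_arc(5, 3, 3)
# For p = 1 and m = 3 the condition holds at n = 3 but fails at n = 4:
assert roots_in_arc(3, 1, 3) and not roots_in_arc(3, 1, 4)
```

This reproduces the case analysis in the surrounding argument for small values of $m$, $p$, and $n$.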
Next suppose that $\Sigma_n(L(C_m, 1))$ is an L-space and recall that by Theorem \ref{thm: p = 1}, $L(C_m, 1) = P(-2, 2, m-2)$. Hence $\Sigma_2(L(C_m, 1))$ is an L-space. Also, if $m = 3$ then $L(C_3, 1) = P(-2, 2, 1) = T(2,4)$, and so $\Sigma_3(L(C_3, 1))$ is also an L-space. Conversely, first suppose that $m$ is even; then the roots of $\Delta_{L(C_m,1)}$ are the $(m-1)^{st}$ roots of unity. Since $m \geq 4$, (\ref{eqn: roots}) fails for $n \geq 3$. If $m$ is odd then the roots of $\Delta_{L(C_m, 1)}$ are the complex numbers $\exp(\frac{\pi i r}{m-1})$, $r$ odd, together with $1$. Therefore, if $m \geq 5$ then (\ref{eqn: roots}) fails for $n \geq 3$, and if $m = 3$, it fails for $n \geq 4$. Finally suppose that $p > 1$ and $n = 2$. \begin{lemma} \label{lem: lcm braid} As an unoriented link, $-L(C_m, p)$ is the closure of the $3$-braid $\gamma^m \delta^{3p}$, where $\delta = \sigma_2 \sigma_1, \gamma = \sigma_2 \delta^{-1}$. \end{lemma} \begin{proof} We can express $L(C_m, p)$ as a circular concatenation of tangles corresponding to pairs of adjacent bands, as in Figure \ref{fig: cam 14}. These tangles are the braids $\beta_{\pm}$ shown in Figure \ref{fig: cam 15}. \begin{figure}\label{fig: cam 14} \end{figure} \begin{figure}\label{fig: cam 15} \end{figure} Reading from right to left in Figure \ref{fig: cam 15}, we see that $$\beta_- = \sigma_2 \sigma_1 \sigma_2^{-1}, \;\;\;\;\; \beta_+ = \sigma_2^{-1} \sigma_1^{-1} \sigma_2^{-3}$$ Taking inverses for convenience we have $$\beta_-^{-1} = \sigma_2 \sigma_1^{-1} \sigma_2^{-1} = \sigma_2 \delta^{-1} = \gamma$$ $$\beta_+^{-1} = \sigma_2^3 \sigma_1 \sigma_2 = \sigma_2^2 \sigma_1 \sigma_2 \sigma_1 = \sigma_2 \delta^2 = \gamma \delta^{3}$$ Since $\delta^3$ is central in $B_3$, the result follows. \end{proof} First note that for any link $L$, $\Sigma_2(L)$ is independent of the orientations of the components of $L$.
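The braid-word identities in the proof of Lemma \ref{lem: lcm braid} can be double-checked with the reduced Burau representation of $B_3$, which is faithful, so equal matrices certify equal braid words. The matrix convention below is one standard choice, and the code is only a verification sketch.

```python
import sympy as sp

t = sp.symbols('t')
# One standard convention for the reduced Burau representation of B_3;
# it satisfies the braid relation and is faithful for B_3.
s1 = sp.Matrix([[-t, 1], [0, 1]])
s2 = sp.Matrix([[1, 0], [t, -t]])

def prod(*ms):
    """Multiply matrices left to right, expanding entries at each step."""
    out = sp.eye(2)
    for M in ms:
        out = (out * M).applyfunc(sp.expand)
    return out

# braid relation sanity check: s1 s2 s1 = s2 s1 s2
assert prod(s1, s2, s1) == prod(s2, s1, s2)

delta = prod(s2, s1)                               # delta = sigma_2 sigma_1
gamma = (s2 * delta.inv()).applyfunc(sp.simplify)  # gamma = sigma_2 delta^{-1}

# beta_-^{-1} = sigma_2 sigma_1^{-1} sigma_2^{-1} = gamma
assert (prod(s2, s1.inv(), s2.inv()) - gamma).applyfunc(sp.simplify) == sp.zeros(2, 2)

# beta_+^{-1} = sigma_2^3 sigma_1 sigma_2 = gamma delta^3
lhs = prod(s2, s2, s2, s1, s2)
rhs = prod(gamma, delta, delta, delta)
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(2, 2)

# delta^3 is central: its Burau image is the scalar matrix t^3 I
assert prod(delta, delta, delta) == t**3 * sp.eye(2)
```

In particular the last assertion makes the centrality of $\delta^3$ visible at the level of matrices.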
By Lemma \ref{lem: lcm braid}, as an unoriented link, $-L(C_m, p)$ is the closure of the $3$-braid $\beta_p = \gamma^m \delta^{3p}$. Let $p = 2r+1, r>0$; then $\beta_p = \beta_1 \delta^{6r}$. The inverse image in $\Sigma_2(-L(C_m, p))$ of the braid axis of $\widehat \beta_p$ in $S^3$ is a simple closed curve whose exterior is fibred over $S^1$ with fibre a once-punctured torus $T$, the double branched cover of $(D^2, 3 \hbox{ points})$. Recall the natural identification of $B_3$ with a subgroup of the group of homeomorphisms of $(D^2, 3 \hbox{ points})$ which restrict to the identity on $\partial D^2$. Then there are simple closed curves $\alpha_1$ and $\alpha_2$ in Int($T$), meeting transversely in a single point, such that $\sigma_i$ lifts to the positive Dehn twist $\tau_i$ along $\alpha_i$, $i = 1, 2$. Thus $\beta_p$ lifts to $(\tau_2 \tau_1^{-1} \tau_2^{-1})^m (\tau_2 \tau_1)^{3p} = \varphi_p$, say, the monodromy of the $T$-bundle in $\Sigma_2(-L(C_m, p))$. Note that $(\tau_2 \tau_1)^6$ is isotopic (rel $\partial T$) to $\tau_\partial$, the positive Dehn twist along $\partial T$. For a homeomorphism $\psi: T \to T$ such that $\psi|\partial T$ is the identity, let $c(\psi)$ denote the fractional Dehn twist coefficient of $\psi$; see \cite[\S 3]{HKM1}. \begin{claim} $c(\varphi_1) = \frac12$. \end{claim} \begin{proof} Since $\delta^3$ is central in $B_3$, $\varphi_1^2$ is the lift of $\gamma^{2m} \delta^6$, which is $(\tau_2 \tau_1^{-1} \tau_2^{-1})^{2m} \tau_\partial$. There is an essential properly embedded arc $a$ in $T$ such that $\tau_1(a) = a$. It follows that $c(\tau_1^{-2m}) = 0$ (see \cite{HKM1}). Hence $c((\tau_2 \tau_1^{-1} \tau_2^{-1})^{2m}) = c(\tau_1^{-2m}) = 0$. Therefore $c(\varphi_1^2) = 1$, implying that $c(\varphi_1) = \frac12$ (see \cite{HKM1}). \end{proof} Since $\varphi_p = \varphi_1 \tau_\partial^r$, we have that $c(\varphi_p) = \frac12 + r > 1$.
Therefore by \cite{Ro} (see \cite[Theorem 4.1]{HKM2}), $\Sigma_2(L(C_m, p))$ admits a co-oriented taut foliation. \end{proof} Having given a detailed analysis of the cyclic case we will not attempt a complete classification of definite baskets, but will content ourselves with a few remarks. Recall that a subgraph $\Gamma_0$ of a graph $\Gamma$ is {\it full} if $\Gamma_0$ contains every edge of $\Gamma$ whose endpoints lie in $\Gamma_0$. Let $\mathcal{A}$ be a chord diagram, with ordering $\omega$, and $\mathcal{A}_0 \subset \mathcal{A}$ a subchord diagram, with ordering $\omega_0$ induced by $\omega$. If $\Gamma(\mathcal{A}_0)$ is a full subgraph of $\Gamma(\mathcal{A})$ then $F(\mathcal{A}_0, \omega_0)$ is a homologically injective subsurface of $F(\mathcal{A}, \omega)$. Hence, if $F(\mathcal{A}, \omega)$ is definite then so is $F(\mathcal{A}_0, \omega_0)$. This can be used to show that many baskets are indefinite. For example, if $F(\mathcal{A}, \omega)$ is definite then any full subtree of $\Gamma(\mathcal{A})$ must be simply laced arborescent. As a simple application of this, let $C_{m,l}$ be an $m$-cycle with a leg of length $l$, $l \ge 1, m \ge 3$. See Figure \ref{fig: cam 8}, which shows $C_{5,2}$. \begin{figure}\label{fig: cam 8} \end{figure} It is easy to see that any chord diagram realising $C_{m,l}$ is of the form shown in Figure \ref{fig: cam 20}. \begin{figure}\label{fig: cam 20} \end{figure} Thus if $l > 1$, $C_{m,l}$ does not have a unique realisation as a chord diagram. However, given an ordering of the arcs of the $m$-cycle $C_m$, the corresponding basket $F(C_{m,l})$ is clearly unique up to isotopy. \begin{thm} Assume that $m \geq 3$ and $l \geq 1$. There is a definite basket with incidence graph $C_{m,l}$ if and only if $(m,l)$ is one of the following: $(3,l)$ or $(4,l)$ for $l \geq 1$; $(5,l)$ for $l = 1, 2$, or $3$; $(6, 1)$ or $(7, 1)$. 
\end{thm} \begin{proof} For the ``only if" direction, note that if $(m,l)$ is not one of the pairs listed then either $(m,l) = (5,4)$ or $C_{m,l}$ contains a full subtree that is not simply laced arborescent. In the case $(m,l) = (5,4)$, the corresponding basket $F$ is obtained by plumbing a positive Hopf band to $F(E_8)$ along a non-separating arc. Hence $F$ is indefinite by Lemma \ref{lemma: e* case}(3). For the ``if" direction, order the arcs of the $m$-cycle $C_m$ so that $p = 1$. By Theorem \ref{thm: p = 1}, $F(C_m, 1)$ is isotopic to $F(D_m)$. This isotopy is effected by sliding $h_1$ successively over $h_2, \ldots, h_{m-1}$. After these handle-slides, the handles $h_1$ and $h_m$ correspond to the vertices $v_1$ and $v_m$ of $D_m$ as shown in Figure \ref{fig: cam 21}. \begin{figure}\label{fig: cam 21} \end{figure} Also, the transverse arcs (co-cores) of $h_1$ and $h_m$ are unaffected by the handle-slides. It follows that if we plumb a row of positive Hopf bands to $h_1$ (say), then the resulting surface is $F(\Gamma)$, where $\Gamma$ is the tree obtained by adjoining a line of $l$ edges to $D_m$ at $v_1$. If $m = 3$ or $4$ then $\Gamma \cong D_{m+l}$, so $F(\Gamma)$ is definite. If $m = 5$ and $l = 1, 2,$ or $3$ then $\Gamma \cong E_6, E_7$, or $E_8$ respectively. Similarly if $m = 6$ or $7$ and $l = 1$ then $\Gamma \cong E_7$ or $E_8$. In all cases, $F(\Gamma)$ is definite. \end{proof} We conclude with one more example. Let $K_m$ be the complete graph on $m$ vertices. It is easy to see that $K_m$ has a unique realization as a chord diagram $\mathcal{A}$, namely $m$ diameters of $D^2$. \begin{thm} If $F(K_m, \omega)$ is definite then it is isotopic to $F(A_m)$. \end{thm} \begin{proof} If $m \leq 2$ then $K_m = A_m$ and there is nothing to prove. Suppose $m \ge 3$. Number the arcs in $\mathcal{A}, \alpha_1, \alpha_2,...,\alpha_m,$ in anticlockwise order around the disk. 
Let $\omega$ be an ordering of $\mathcal{A}$ such that $F(\mathcal{A}, \omega)$ is definite. Any 3-cycle in $K_m$ is full, and so by Lemma \ref{lemma: definite iff p odd}, up to cyclic renumbering of the $\alpha_i$'s, we must have $\alpha_1 > \alpha_2 > ... > \alpha_m$. The corresponding basket $F(K_m, \omega)$ is illustrated in Figure \ref{fig: cam 9}, which shows the case $m = 6$. \begin{figure}\label{fig: cam 9} \end{figure} Slide $h_{m-1}$ over $h_m$. The resulting chord diagram has incidence graph $K_{m-1}$ with a leg of length 1; see Figure \ref{fig: cam 10}. \begin{figure}\label{fig: cam 10} \end{figure} Now slide $h_{m-2}$ over $h_m$, and then slide $h_m$ over $h'_{m-2}$; see Figures \ref{fig: cam 11} and \ref{fig: cam 12}. \begin{figure}\label{fig: cam 11} \end{figure} \begin{figure}\label{fig: cam 12} \end{figure} The incidence graph is now $K_{m-2}$ with a leg of length 2. Continue this procedure: slide $h_{m-3}$ over $h'_m$ and then slide $h'_m$ over $h'_{m-3}$; the incidence graph is now $K_{m-3}$ with a leg of length 3; see Figure \ref{fig: cam 13}. \begin{figure}\label{fig: cam 13} \end{figure} Eventually we obtain the incidence graph $A_m$. \end{proof} \section{L-space branched covers of strongly quasipositive links of braid index $3$} \label{sec: lspace branched covers 3-braids} In this section we investigate the pairs $(L, n)$ where $L$ is a non-split strongly quasipositive link of braid index $3$ and $\Sigma_n(L)$ is an L-space. This will lead to a proof of Proposition \ref{prop: 1 < r < n}. We assume that $(L, n)$ is such a pair throughout. By Theorem \ref{thm: bbg definite}, $L$ is definite and therefore Theorem \ref{thm: definite 3-braids} implies that $\Sigma_2(L)$ is an L-space. 
Stoimenow has shown that strongly quasipositive links of braid index $3$ are the closures of strongly quasipositive $3$-braids (\cite[Theorem 1.1]{Stoi}), so we can write $$L = \widehat b$$ where $$b = \delta_3^k P$$ is a minimal representative of a conjugacy class of strongly quasipositive $3$-braids (cf.~\S \ref{subsec: minimal rep}). We analyse the cases $k > 0$, $k = 0$ separately. \subsection{The case $k > 0$} \label{subsec: k > 0} Proposition \ref{prop: when definite} implies that the minimal form of $b$ is one of the following braids: \begin{enumerate} \item $\delta_3^k \sigma_1^p \mbox{ where either } \left\{ \begin{array}{l} p = 0 \mbox{ and } 1 \leq k \leq 5 \\ p > 0 \mbox{ and } 1 \leq k \leq 3 \\ p \in \{1, 2\} \mbox{ and } k = 4 \end{array} \right.$ \item $\delta_3 \sigma_1^p a_{13}^q$ where $p, q \geq 1$. \end{enumerate} The closures of these braids are non-split and the only one whose closure is trivial is $\delta_3$. In the latter case, $\Sigma_n(\widehat b)$ is an L-space for each $n$, and so Proposition \ref{prop: 1 < r < n} holds. Without loss of generality we assume $\widehat b$ is non-trivial below. First consider the braids listed in (2) and note that the closure of $b = \delta_3 \sigma_1^p a_{13}^q$ is the composite link $\widehat b = T(2, p+1) \# T(2, q+1)$. Since $\Sigma_n(T(2, p+1) \# T(2, q+1)) \cong \Sigma_n(T(2, p+1)) \# \Sigma_n(T(2, q+1))$ is an L-space if and only if both $\Sigma_n(T(2, p+1))$ and $\Sigma_n(T(2, q+1))$ are L-spaces, it follows from \cite{GLid} and \cite[Example 1.11]{Nem} that Proposition \ref{prop: 1 < r < n} holds for the closures of the braids in family (2). More precisely, if $p = q = 1$, $\widehat b = T(2, 2) \# T(2, 2)$ where $T(2, 2)$ is the Hopf link. Since each $\Sigma_n(T(2,2))$ is a lens space, $\Sigma_n(T(2, 2) \# T(2, 2))$ is an L-space for each $n \geq 2$.
Otherwise, $\Sigma_n(\widehat b)$ is an L-space if and only if $$n \leq \left\{ \begin{array}{l} 2 \mbox{ when } \max\{p, q\} \geq 5 \\ 3 \mbox{ when } \max\{p, q\} = 3, 4 \\ 5 \mbox{ when } \max\{p, q\} = 2 \end{array} \right.$$ Next consider the braids listed in (1). These are clearly positive and their closures are prime. They are also definite, so by Theorem \ref{thm: baader}, $\widehat b$ is simply laced arborescent and therefore conjugate to one of the following positive braids: {\small \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline $b$ & $\sigma_1^m \sigma_2$ ($m \geq 2$)& $\sigma_1^m \sigma_2 \sigma_1^2 \sigma_2$ ($m \geq 2$)& $\sigma_1^3 \sigma_2 \sigma_1^3 \sigma_2 $ & $\sigma_1^4 \sigma_2 \sigma_1^3 \sigma_2$ & $\sigma_1^5 \sigma_2 \sigma_1^3 \sigma_2$ \\ \hline \footnotesize{$\widehat b$} & $T(2,m)$ & $P(-2,2,m) $ & $P(-2,3,3) = T(3,4)$ & $P(-2,3,4)$ & $P(-2,3,5) = T(3,5)$ \\ \hline \footnotesize{$\mathcal{F}(\widehat b)$} & $A_{m-1}$ & $D_{m+2}$ & $E_6$ & $E_7$ & $E_8$ \\ \hline \end{tabular} \end{center}} To see that Proposition \ref{prop: 1 < r < n} holds for these braids first note that $\Sigma_n(T(2,2))$ is a lens space for each $n \geq 2$, and therefore an L-space. Also, it follows from \cite{GLid} and \cite[Example 1.11]{Nem} that $$\Sigma_n(\widehat b) \mbox{ is an L-space if and only if } n \leq \left\{ \begin{array}{ll} 2 & \mbox{ if } \widehat b = T(2,m) \mbox{ where } m \geq 6 \\ 2 & \mbox{ if } \widehat b = T(3,3), T(3,4) \mbox{ or } T(3,5) \\ 3 & \mbox{ if } \widehat b = T(2,4) \mbox{ or } T(2,5) \\ 5 & \mbox{ if } \widehat b = T(2,3) \end{array} \right.$$ Thus we are left with considering the cases that $b$ is either $\sigma_1^m \sigma_2 \sigma_1^2 \sigma_2$ ($m \geq 2$) or $\sigma_1^4 \sigma_2 \sigma_1^3 \sigma_2$. We begin with some remarks. For each $\zeta \in S^1 \setminus \{-1\}$ we define $\overline{I}_-(\zeta)$ to be the closed subarc of the circle with endpoints $\zeta, \bar \zeta$ which contains $-1$.
Let $\mathcal{S}_{F(b)}: H_1(F(b)) \times H_1(F(b)) \to \mathbb Z$ be the Seifert form of the surface $F(b)$ and recall the Hermitian form $\mathcal{S}_{F(b)}(\zeta) = (1 - \zeta)\mathcal{S}_{F(b)} + (1 - \bar \zeta) \mathcal{S}_{F(b)}^T$ defined for $\zeta \in S^1$. It follows from \cite[Theorem 1.1]{BBG} that if $\Sigma_n(\widehat b)$ is an L-space, then $\mathcal{S}_{F(b)}(\zeta)$ is definite for $\zeta \in \overline{I}_-(\zeta_n)$. If $b_0$ is contained in $b$, $F(b_0) \subseteq F(b)$ and therefore $\mathcal{S}_{F(b_0)}(\zeta)$ is definite for $\zeta \in \overline{I}_-(\zeta_n)$ as well. In particular, $\Delta_{\widehat b_0}(t)$ has no zero in this interval. Suppose that $b = \sigma_1^4 \sigma_2 \sigma_1^3 \sigma_2$ and that $\Sigma_n(\widehat b)$ is an L-space for some $n \geq 2$. Then $\mathcal{S}_{F(b)}(\zeta)$ is definite for $\zeta \in \overline{I}_-(\zeta_n)$. As $b$ contains $b_0 = \sigma_1^7 \sigma_2$, the same is true for $\mathcal{S}_{F(b_0)}(\zeta)$. In particular, the Alexander polynomial of $\widehat{b_0} = T(2,7)$ has no root in $\overline{I}_-(\zeta_n)$. Since $$\Delta_{\widehat b_0}(t) = \frac{t^7 + 1}{t+1},$$ $\Delta_{\widehat b_0}^{-1}(0)$ is the set of primitive $14^{th}$ roots of $1$. It follows that $\zeta_{14}^{5}$ is not contained in $\overline{I}_-(\zeta_n)$ and therefore $\frac{5}{14} < \frac{1}{n}$. Equivalently, $n < \frac{14}{5} < 3$. Thus $n = 2$ so that Proposition \ref{prop: 1 < r < n} holds for these braids. Finally suppose that $b = \sigma_1^m \sigma_2 \sigma_1^2 \sigma_2$ where $m \geq 2$. Then $b$ contains the sub-braid $b_0 = \sigma_1^2 \sigma_2 \sigma_1^2 \sigma_2 = \sigma_1 (\sigma_2 \sigma_1 \sigma_2) \sigma_1 \sigma_2 = \delta_3^3$. It is simple to verify that $\widehat{b_0} = T(3,3)$ has Conway polynomial $z^2(z^2 + 3)$, and as $\Delta_{\widehat{b_0}}(t^2) = \nabla_{\widehat{b_0}}(t - t^{-1})$, the Alexander polynomial of $\widehat{b_0}$ is $(t-1)^2(t^2 + t + 1)$. Hence $\Delta_{\widehat{b_0}}(\zeta_3) = 0$. 
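This last computation is easy to verify symbolically: substituting $z = t - t^{-1}$ into $\nabla_{\widehat{b_0}}$ and clearing the unit $t^{-4}$ recovers $\Delta_{\widehat{b_0}}(t^2)$, and the factor $s^2 + s + 1$ of $\Delta_{\widehat{b_0}}(s)$ accounts for the vanishing at $\zeta_3$. The following is a verification sketch using sympy.

```python
import sympy as sp

t, s, z = sp.symbols('t s z')

nabla = z**2 * (z**2 + 3)              # Conway polynomial of T(3,3)
Delta = (s - 1)**2 * (s**2 + s + 1)    # claimed Alexander polynomial

# Delta(t^2) agrees with nabla(t - t^{-1}) up to the unit t^4:
assert sp.expand(nabla.subs(z, t - 1/t) * t**4) == sp.expand(Delta.subs(s, t**2))

# s^2 + s + 1 divides Delta, so Delta vanishes at the primitive cube roots of unity:
assert sp.rem(Delta, s**2 + s + 1, s) == 0
```

Both checks pass, confirming that $\zeta_3$ is a root of $\Delta_{\widehat{b_0}}$.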
It follows that $\mathcal{S}_{F(b_0)}(\zeta_3)$, and therefore $\mathcal{S}_{F(b)}(\zeta_3)$, is indefinite. In particular, $\Sigma_n(\widehat b)$ cannot be an L-space for $n \geq 3$. Summarising, if $k > 0$ and $b = \delta_3^k P$ is a BKL-positive $3$-braid for which $\widehat b$ is a non-trivial prime link and some $\Sigma_n(\widehat b)$ is an L-space, then $b$ is conjugate to one of the positive braids listed in the table above. If $b$ is $\sigma_1^2 \sigma_2$, then $\Sigma_n(\widehat b)$ is an L-space for each $n \geq 2$. Otherwise, $\Sigma_n(\widehat b)$ is an L-space if and only if $$n \leq \left\{ \begin{array}{l} 2 \mbox{ when } b = \sigma_1^m \sigma_2 \mbox{ ($m \geq 6$)}, \sigma_1^m \sigma_2 \sigma_1^2 \sigma_2 \mbox{ ($m \geq 2$)}, \mbox{or } \sigma_1^m \sigma_2 \sigma_1^3 \sigma_2 \mbox{ ($m = 3,4,5$)} \\ 3 \mbox{ when } b = \sigma_1^4 \sigma_2 \mbox{ or } \sigma_1^5 \sigma_2 \\ 5 \mbox{ when } b = \sigma_1^3 \sigma_2 \end{array} \right.$$ Thus Proposition \ref{prop: 1 < r < n} holds in the case that $k > 0$. \subsection{The case $k = 0$} \label{subsec: k = 0} In this case $b$ is conjugate to $b(p,q,r) = \sigma_1^p a_{13}^q \sigma_2^r$ where $p, q, r \geq 1$ (Corollary \ref{cor: definiteness, fibredness, and forms}). As we observed above, $\Sigma_2(\widehat{b})$ is an L-space. \subsubsection{Restricting the values of $p, q, r$} \label{subsubsec: bounding pqr} The identity $$\delta_3 b(p,q,r) \delta_3^{-1} = \sigma_2^p \sigma_1^q a_{13}^r \sim b(q,r,p)$$ implies that $\widehat{b(p,q,r)} = \widehat{b(q,r,p)} = \widehat{b(r, p,q)}$. Further, $\widehat{b(p,q,r)}$ and $\widehat{b(q,p,r)}$ differ by a flype (\cite[The Classification Theorem, page 27]{BiM}), so they coincide. Thus \begin{lemma} \label{lemma: invariant under permutation} $\widehat{b(p,q,r)}$ is invariant under arbitrary permutations of $(p,q,r)$.
\qed \end{lemma} As such, we assume below that $$p \leq q \leq r$$ We noted in \S \ref{subsec: k > 0} that if $b_0$ is contained in $b$, then $F(b_0) \subseteq F(b)$ and therefore $\mathcal{S}_{F(b_0)}(\zeta)$ is definite for $\zeta \in \overline{I}_-(\zeta_n)$. In particular, $\Delta_{\widehat b_0}(t)$ has no zero in this interval. For instance, taking $b_0 = \sigma_1 \sigma_2^r$ we have $\widehat b_0 = T(2, r)$ and therefore $$\Delta_{\widehat b_0}(t) = \frac{t^r + (-1)^{r+1}}{t+1}$$ It follows that $$\Delta_{\widehat b_0}^{-1}(0) = \left\{ \begin{array}{l} \mbox{ the set of primitive $2r^{th}$ roots of $1$ other than $-1$ if $r$ is odd} \\ \mbox{ the set of $r^{th}$ roots of $1$ other than $-1$ if $r$ is even} \end{array} \right.$$ If $r \geq 3$ is odd, then $\zeta_{2r}^{r - 2} \ne -1$ and is a primitive $2r^{th}$ root of unity, so it is not contained in $\overline{I}_-(\zeta_n)$. Therefore $\frac{1}{n} > \frac{r-2}{2r}$. Equivalently, $r < 2 + \frac{4}{n-2}$. Similarly if $r \geq 2$ is even, then $\zeta_r^{\frac{r}{2} - 1} \not \in \overline{I}_-(\zeta_n)$ so that $\frac{\frac{r}{2}-1}{r} = \frac{r-2}{2r} < \frac{1}{n}$. Again we deduce that $r < 2 + \frac{4}{n-2}$. Thus $$\left\{ \begin{array}{lll} n = 3 & \Rightarrow & 1 \leq p \leq q \leq r \leq 5 \\ n = 4, 5 & \Rightarrow & 1 \leq p \leq q \leq r \leq 3 \\ n \geq 6 & \Rightarrow & 1 \leq p \leq q \leq r \leq 2 \end{array} \right.$$ In what follows, we study whether or not $\Sigma_n(\widehat{b(p,q,r)})$ is an L-space by analysing the zeros of the Alexander polynomials of the links $\widehat{b(p,q,r)}$ for each of the thirty-five values of $(p, q, r)$ with $1 \leq p \leq q \leq r \leq 5$. \subsubsection{The Alexander polynomials of $\widehat{b(p,q,r)}$ where $1 \leq p \leq q \leq r \leq 5$} \label{subsec: calculating alex polys} Let $\nabla(p,q, r)(z)$ denote the Conway polynomial of $\widehat b(p,q,r)$.
The skein relation (\cite[\S 2]{Kf}) implies that \begin{equation} \label{eqn: skein} \nabla(p,q, r)(z) = \left\{ \begin{array}{l} z \nabla(p-1,q, r)(z) + \nabla(p-2,q, r)(z) \\ z \nabla(p,q-1, r)(z) + \nabla(p,q-2, r)(z)\\ z \nabla(p,q, r-1)(z) + \nabla(p,q, r-2)(z) \end{array} \right. \end{equation} Since $T(2, p) = \widehat{\sigma_1^p\sigma_2}$, the reader will verify using (\ref{eqn: skein}) that $\nabla_{T(2, p)}(z) = f_p(z)$ where $f_n(z)$ is defined recursively by $$f_0(z) = 0, f_1(z) = 1, f_n(z) = zf_{n-1}(z) + f_{n-2}(z) \mbox{ when } n \geq 2$$ For instance, {\small \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline $n$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\ \hline $f_n(z)$ & $0$ & $1$ & $z$ & $z^2 + 1$ & $z(z^2 + 2)$ & $z^4 + 3z^2 + 1$ & $z(z^2 + 1)(z^2 + 3)$ & $z^6 + 5z^4 + 6z^2 + 1$ \\ \hline \end{tabular} \end{center}} Since $\widehat{b(p, q, 0)} = T(2, p) \# T(2, q)$, we have $$\nabla(p, q, 0) = f_p(z) f_q(z)$$ for $p, q \geq 0$. An inductive argument then shows that \begin{eqnarray} \label{eqn: r} \nabla(1,1,r) = f_r(z) \nabla(1,1,1) + f_{r-1}(z) \nabla(1,0,1) = f_r(z) \nabla(1,1,1) + f_{r-1}(z) \nonumber \end{eqnarray} Similarly we have \begin{eqnarray} \label{eqn: qr} \nabla(1,q,r) & = & f_q(z) \nabla(1,1,r) + f_{q-1}(z) \nabla(1,0,r) \nonumber \\ & = & f_q(z) f_r(z) \nabla(1,1,1) + f_q(z) f_{r-1}(z) + f_{q-1}(z) f_{r}(z) \nonumber \end{eqnarray} Finally, \begin{eqnarray} \label{eqn: pqr} \nabla(p,q,r) & = & f_p(z) \nabla(1,q,r) + f_{p-1}(z) \nabla(0,q,r) \nonumber \\ & = & f_p(z) f_q(z) f_r(z) \nabla(1,1,1) + f_p(z) f_q(z) f_{r-1}(z) + f_p(z) f_{q-1}(z) f_{r}(z) \nonumber \\ & & \hspace{.5cm} + f_{p-1}(z) f_{q}(z) f_{r}(z) \nonumber \end{eqnarray} Since $\nabla(1,1,1) = 2z$, we have \begin{eqnarray} \nabla(p,q,r) = 2z f_p(z) f_q(z) f_r(z) + f_p(z)f_q(z)f_{r-1}(z) + f_p(z)f_{q-1}(z)f_r(z) + f_{p-1}(z)f_q(z)f_r(z) \nonumber \end{eqnarray} This identity combined with the skein relation (\ref{eqn: skein}) yields the following table.
{\footnotesize \begin{center} \begin{tabular}{|c|c||c|c|} \hline $(p,q,r)$ & $\nabla(p,q,r)$ & $(p,q,r)$ & $\nabla(p,q,r)$ \\ \hline \hline $(1,1,1)$ & $2z$ & $(2,2,5)$ & $z(2z^6 + 9 z^4 +10z^2 + 2)$ \\ \hline $(1,1,2)$ & $2z^2 + 1$ & $(2,3,3)$ & $(z^2 + 1)(2z^4 + 5z^2 + 1)$ \\ \hline $(1,1,3)$ & $z(2z^2 + 3)$ & $(2,3,4)$ & $z(2z^6 + 9z^4 + 11z^2 + 3)$ \\ &&& $= z(2 z^2 + 3) (z^4 + 3 z^2 + 1)$\\ \hline $(1,1,4)$ & $2z^4 + 5z^2 + 1$ & $(2,3,5)$ & $2z^8 + 11z^6 + 18z^4 + 9z^2 + 1$ \\ \hline $(1,1,5)$ & $z(2z^4 + 7z^2 + 4)$ & $(2,4,4)$ & $z^2(z^2+2)(2z^4 + 7z^2+4)$ \\ \hline $(1,2,2)$ & $2z(z^2 + 1)$ & $(2,4,5)$ & $z(2z^8 + 13z^6 + 27z^4 + 19z^2 + 3)$ \\ &&& $ = z(z^2 + 1) (z^2 + 3) (2 z^4 + 5 z^2 + 1)$ \\ \hline $(1,2,3)$ & $2z^4 + 4z^2 + 1$ & $(2,5,5)$ & $(z^4 + 3z^2 + 1)(2z^6 + 9z^4 + 9z^2 + 1)$ \\ \hline $(1,2,4)$ & $z(2z^4 + 6z^2 + 3)$ & $(3,3,3)$ & $z(z^2 + 1)^2 (2z^2 + 5)$ \\ \hline $(1,2,5)$ & $(z^2 + 1) (2 z^4 + 6 z^2 + 1)$ & $(3,3,4)$ & $(z^2+1)(2z^6 + 9z^4+10z^2+1)$ \\ \hline $(1,3,3)$ & $2z(z^2 + 1)(z^2 + 2)$ & $(3,3,5)$ & $z(z^2 + 1) (z^2 + 2)(2z^4 + 7z^2 + 3)$ \\ &&&$ = z(z^2 + 1) (z^2 + 2)(2 z^2 + 1) (z^2 + 3)$ \\ \hline $(1,3,4)$ & $2z^6 + 8z^4 + 8z^2 + 1$ & $(3,4,4)$ & $z(z^2 + 2)(2z^6 + 9z^4 + 10z^2 + 2)$ \\ \hline $(1,3,5)$ & $z(2z^6 + 10z^4 + 14z^2 + 5)$ & $(3,4,5)$ & $2z^{10} + 15z^8 + 39z^6 + 41z^4 + 15z^2 + 1$ \\ \hline $(1,4,4)$ & $2z(z^2 + 2)(z^4 + 3z^2 + 1)$ & $(3,5,5)$ & $z(z^4 + 3z^2 + 1)(2z^6 + 11z^4 + 17z^2 + 7)$ \\ \hline $(1,4,5)$ & $2z^8 + 12 z^6 + 22 z^4 + 12 z^2 + 1$ & $(4,4,4)$ & $z^2(z^2+2)^2(2z^4 + 7z^2 + 3)$ \\ &&& $ = z^2(z^2+2)^2(2z^2 + 1)(z^2 + 3)$ \\ \hline $(1,5,5)$ & $2z(z^2+1)(z^2+3)(z^4 + 3z^2 + 1)$ & $(4,4,5)$ & $z(z^2+2)(2z^8 + 13z^6 + 26z^4 + 16 z^2 + 2)$ \\ \hline $(2,2,2)$ & $z^2(2z^2 + 3)$ & $(4,5,5)$ & $(z^2+1)(z^4+3z^2+1)(2z^6 + 11z^4 + 15z^2 + 1)$ \\ \hline $(2,2,3)$ & $z(2z^4 + 5z^2 + 2) = z(2 z^2 + 1) (z^2 + 2)$ & $(5,5,5)$ & $z(z^4 + 3z^2 + 1)^2(2z^4 + 9z^2 + 8)$ \\ \hline $(2,2,4)$ & $z^2(2z^4 + 7z^2 + 5)$ & 
& \\ \hline \end{tabular} \end{center}} The Alexander polynomials of these links are obtained using the identity $\nabla(p,q,r)(t - t^{-1}) = \Delta_{\widehat{b(p,q,r)}}(t^2)$. The result is listed in the second column of the following table. The third column lists the values of $n$ for which the position of the roots of the Alexander polynomial obstructs $\Sigma_n(\widehat{b(p,q,r)})$ from being an L-space (cf. \S \ref{subsubsec: bounding pqr}). The last column lists what is known about whether or not $\Sigma_n(\widehat{b(p,q,r)})$ is an L-space for the remaining values of $n \geq 3$ as well as those values for which the answer is unknown to us. {\footnotesize \begin{center} \begin{tabular}{|c|c|c|c|} \hline & & $\Delta_{\widehat b}(t)$ implies & $\Sigma_n(\widehat b)$ is an L-space ($\checkmark$) \\ $(p,q,r)$ & $\Delta_{\widehat b}(t)$ & $\Sigma_n(\widehat b)$ not & $\Sigma_n(\widehat b)$ is not an L-space (x) \\ && an L-space& Unknown (?) \\ \hline \hline $(1,1,1)$ & 2(t-1) & & all $n$ $\checkmark$ \\ \hline $(1,1,2)$ & $2t^2 - 3t + 2$ &$n \geq 9$ & $n = 3,4,5 \checkmark; 6,7,8$ (x)\\ \hline $(1,1,3)$ &$(t - 1)(2t^2 - t + 2)$ & $n \geq 5$ & $n = 3, 4$ (?)\\ \hline $(1,1,4)$ & $2t^4 - 3t^3 + 3t^2 - 3t + 2$& $n \geq 4$ & $n = 3$ $\checkmark$ \\ \hline $(1,1,5)$ & $(t-1)(2t^4 - t^3 + 2t^2 - t + 2)$ & $n \geq 4$ & $n = 3$ (?) \\ \hline $(1,2,2)$ &$2(t - 1)(t^2 - t + 1)$ & $n \geq 6$ & $n = 3, 4, 5$ (?) \\ \hline $(1,2,3)$ & $2t^4 - 4t^3 + 5t^2 - 4t + 2$ & $n \geq 5$ & $n = 3$ $\checkmark$; $4$ (?) \\ \hline $(1,2,4)$ & $(t -1)(2t^4 - 2t^3 + 3t^2 - 3t + 2)$ & $n \geq 4$ & $n = 3$ (?) \\ \hline $(1,2,5)$ & $(t^2 - t + 1)(2t^4 - 2t^3 + t^2 - 3t + 2)$ & $n \geq 4$& $n = 3$ (?)\\ \hline $(1,3,3)$ & $2(t-1)(t^2 - t + 1)(t^2 + 1)$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(1,3,4)$ & $2t^6 - 4t^5 + 6t^4 - 7t^3 + 6t^2 - 4t + 2$ & $n \geq 4$& $n = 3$ (?)\\ \hline $(1,3,5)$ & $(t - 1)(2t^6 - 2t^5 + 4t^4 - 3t^3 + 4t^2 - 2t + 2)$ & $n \geq 4$& $n = 3$ (?) 
\\ \hline $(1,4,4)$ & $2(t-1)(t^2 + 1)(t^4 - t^3 + 5t^2 - t + 1)$ &$n \geq 4$ & $n = 3$ (?)\\ \hline $(1,4,5)$ &$2t^8 - 4t^7 + 6t^6 - 8t^5 + 9t^4 - 8t^3 + 6t^2 - 4t + 2$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(1,5,5)$ &$2(t-1)(t^2 - t + 1)(t^2+t+1)(t^4 - t^3 + t^2 - t + 1) $ & $n \geq 3$ & $*$ \\ \hline $(2,2,2)$ & $(t-1)^2(2t^2 - t + 2)$ & $n \geq 5$ & $n = 3, 4$ (?) \\ \hline $(2,2,3)$ & $(t-1)(t^2 + 1)(2t^2 - 3t + 2)$& $n \geq 4$ & $n = 3$ (?)\\ \hline $(2,2,4)$ & $(t-1)^2(2t^4 - t^3 + 3t^2 - t + 2)$ & $n \geq 4$ & $n = 3$ (?) \\ \hline $(2,2,5)$ & $(t-1)(2t^6 - 3t^5 + 4t^4 - 4t^3 + 4t^2 - 3t +2)$ & $n \geq 4$ & $n = 3$ (?) \\ \hline $(2,3,3)$ & $(t^2 - t +1)(2t^4 - 3t^3 + 3t^2 - 3t + 2)$ & $n \geq 4$ & $n = 3$ (?) \\ \hline $(2,3,4)$ & $(t-1)(2t^2 - t + 2)(t^4 - t^3 + 5t^2 - t + 1)$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(2,3,5)$ & $2t^8 - 5t^7 + 8t^6 - 19t^5 + 29t^4 - 19t^3 + 8t^2 - 5t + 2$ & $n \geq 3$ & $*$\\ \hline $(2,4,4)$ & $(t-1)^2(t^2 + 1)(2t^4 - t^3 + 2t^2 - t + 2)$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(2,4,5)$ & $(t-1)(t^2 - t +1)(t^2 + t +1)(2t^4 - 3t^3 + 3t^2 - 3t + 2)$ & $n \geq 3$ & $*$\\ \hline $(2,5,5)$ & $(t^4 - t^3 + 5t^2 - t + 1)(2t^6 - 3t^5 + 3t^4 - 3t^3 + 3t^2 - 3t +2)$ & $n \geq 3$ & $*$\\ \hline $(3,3,3)$ & $(t-1)(t^2 - t + 1)^2(2t^2 + t + 2)$ & $n \geq 4$ & $n = 3$ (?) 
\\ \hline $(3,3,4)$ & $(t^2 - t + 1)(2t^6 - 3t^5 + 4t^4 - 4t^3 + 4t^2 - 3t + 2)$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(3,3,5)$ & $(t-1)(t^2 - t + 1)(t^2 + 1)(t^2 + t + 1)(2t^2 - 3t + 2)$ & $n \geq 3$ &$*$\\ \hline $(3,4,4)$ & $(t-1)(t^2 + 1)(2t^6 - 3t^5 + 4t^4 - 4t^3 + 4t^2 - 3t +2)$ & $n \geq 4$ & $n = 3$ (?)\\ \hline $(3,4,5)$ & $2t^{10} - 5t^9 + 9t^8 - 13t^7 + 16t^6 -17 t^5 + 16t^4 -13t^3 + 9t^2 - 5t + 2$ & $n \geq 3$ &$*$ \\ \hline $(3,5,5)$ & $(t-1)(t^4 - t^3 + 5t^2 - t + 1)(2t^6 - t^5 + 3t^4 - t^3 + 3t^2 - t + 2)$ & $n \geq 3$ &$*$\\ \hline $(4,4,4)$ & $(t-1)^2(t^2 + 1)^2(t^2 + t + 1)(2t^2 - 3t + 2)$ & $n \geq 3$ & $*$\\ \hline $(4,4,5)$ & $(t-1)(t^2 + 1)(2t^8 - 3t^7 + 4t^6 - 5t^5 + 6t^4 - 5t^3 + 4t^2 - 3t + 2)$ & $n \geq 3$ & $*$\\ \hline $(4,5,5)$ & $(t^2 - t + 1)(t^4 - t^3 + 5t^2 - t + 1)(2t^6 - t^5 + t^4 - 3t^3 + t^2 - t + 2)$ &$n \geq 3$ &$*$\\ \hline $(5,5,5)$ & $(t-1)(t^4 - t^3 + 5t^2 - t + 1)^2(2t^4 + t^3 + 2t^2 + t + 2)$ & $n \geq 3$ &$*$\\ \hline \end{tabular} \end{center}} \subsubsection{The proof of Proposition \ref{prop: 1 < r < n} when $k = 0$} \label{subsubsec: proof of proposition} Assume that $\Sigma_n(\widehat{b(p,q,r)})$ is an L-space for some $ 1 \leq p \leq q \leq r \leq 5$ and $n \geq 2$. Since $\Sigma_2(\widehat{b(p,q,r)})$ is an L-space, Proposition \ref{prop: 1 < r < n} holds if $n = 2$ or $3$. We suppose that $n \geq 4$ below. \begin{case} $n \geq 6$ \end{case} We saw in \S \ref{subsubsec: bounding pqr} that $\max \{p, q, r\} \leq 2$ when $n \geq 6$, so $(p,q,r)$ is either $(1,1,1), (1,1,2), (1,2,2)$, or $(2,2,2)$. We also saw that there are no roots of $\Delta_{\widehat b}(t)$ in $\overline{I}_-(\zeta_n)$. Equivalently, if $\zeta \in S^1$ is a root of $\Delta_{\widehat b}(t)$, then $\mathfrak{Re}(\zeta) > \cos(2\pi/n) \geq \cos(2\pi/6) = \frac12$. 
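This real-part condition is straightforward to test numerically. The following sketch (the helper names are our own; the quadratic factors are the ones recorded in the tables of \S \ref{subsec: calculating alex polys}) recovers, for each of the relevant links, the largest $n$ not obstructed by the criterion:

```python
import cmath
import math

def quad_roots(a, b, c):
    # roots of a*t^2 + b*t + c via the quadratic formula
    d = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + d) / (2 * a), (-b - d) / (2 * a)]

# unit-circle roots of the quadratic factors of the Alexander polynomials;
# the root t = 1 never lies in the closed arc I_-(zeta_n), so it is omitted
roots = {
    (1, 1, 2): quad_roots(2, -3, 2),   # 2t^2 - 3t + 2
    (1, 2, 2): quad_roots(1, -1, 1),   # t^2 - t + 1
    (2, 2, 2): quad_roots(2, -1, 2),   # 2t^2 - t + 2
}

def largest_unobstructed_n(rs):
    # largest n with Re(zeta) > cos(2*pi/n) for every root zeta; the arc
    # I_-(zeta_n) is closed, so a boundary root obstructs (hence the tolerance)
    for n in range(2, 100):
        if not all(r.real > math.cos(2 * math.pi / n) + 1e-9 for r in rs):
            return n - 1
    return None

assert largest_unobstructed_n(roots[(1, 1, 2)]) == 8   # obstruction only for n >= 9
assert largest_unobstructed_n(roots[(1, 2, 2)]) == 5   # obstruction for n >= 6
assert largest_unobstructed_n(roots[(2, 2, 2)]) == 4   # obstruction for n >= 5
```

These bounds agree with the third column of the table above; of course, for the unobstructed values of $n$ the criterion alone decides nothing.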
The Alexander polynomials of these links are listed in the third table in \S \ref{subsec: calculating alex polys} and their roots are $$\left\{ \begin{array}{cl} \{1\} & \mbox{ if } (p,q,r) = (1,1,1)\\ \{\frac{3 + \sqrt{7} i}{4}, \frac{3 - \sqrt{7} i}{4}\} & \mbox{ if } (p,q,r) = (1,1,2) \\ \{1, \frac{1 + \sqrt{3} i}{2}, \frac{1 - \sqrt{3} i}{2}\} & \mbox{ if } (p,q,r) = (1,2,2) \\ \{1, \frac{1 + \sqrt{15} i}{4}, \frac{1 - \sqrt{15} i}{4}\} & \mbox{ if } (p,q,r) = (2,2,2) \end{array} \right.$$ Since these roots $\zeta$ must satisfy $\mathfrak{Re}(\zeta) > \frac12$, the only possibilities are $b(1,1,1)$ (any $n$) or $b(1,1,2)$ ($n = 6, 7, 8$). \begin{subcase} $(p,q,r) = (1,1,1)$ \end{subcase} \begin{lemma} The link $\widehat{b(1,1,1)}$ is $T(2, -4)$ oriented so that its components have linking number $2$. Further, $\Sigma_n(\widehat{b(1,1,1)})$ is an L-space for each $n$. In particular, Proposition \ref{prop: 1 < r < n} holds when $(p,q,r) = (1,1,1)$. \end{lemma} \begin{proof} The first claim is easily verified. For the second, note that the exterior $X$ of $\widehat{b(1,1,1)}$ is Seifert fibred with base orbifold $A(2)$ where $A$ is an annulus. The homomorphism $\pi_1(X) \to \mathbb Z/n$ associated to the $n$-fold cyclic cover $X_n$ of $X$ determined by $\widehat{b(1,1,1)}$ factors through $H_1(A(2)) \cong \mathbb Z \oplus \mathbb Z/2$ (and therefore $\pi_1(A(2))$) in such a way that the $\mathbb Z/2$ factor is killed. Thus the base orbifold of $X_n$ is an annulus with cone points of order $2$. Further, the Seifert fibre of $X_n$ has distance $1$ from the lifts of the meridians, so $\Sigma_n(\widehat{b(1,1,1)})$ has base orbifold a $2$-sphere with $n$ cone points of order $2$.
It is also a rational homology $3$-sphere since its Alexander polynomial is $\Delta_{\widehat b}(t) = 2(t - 1)$ and therefore $|H_1(\Sigma_n(\widehat{b(1,1,1)}))| = 2^{n-1} \prod_{j=1}^{n-1} |\zeta_n^j - 1| = 2^{n-1} |\prod_{j=1}^{n-1} (\zeta_n^j - 1)| = 2^{n-1}|t^{n-1} + t^{n-2} + \cdots + t + 1|_{t=1} = 2^{n-1} n$ (\cite[Theorem 1]{HK}). The normalised Seifert invariants of $\Sigma_n(\widehat{b(1,1,1)})$ (\cite[\S 3]{EHN}) are of the form $$(0; b_0, \frac12, \frac12, \ldots, \frac12)$$ where there are $n$ ``$\frac12$''s. Therefore $e(\Sigma_n(\widehat{b(1,1,1)})) = -b_0 - \frac{n}{2}$. On the other hand, it follows from \cite[Corollary 6.2]{JN} that $|e(\Sigma_n(\widehat{b(1,1,1)}))| = \frac{1}{2^n}|H_1(\Sigma_n(\widehat{b(1,1,1)}))| = n/2$. Thus $$-b_0 - \frac{n}{2} = e(\Sigma_n(\widehat{b(1,1,1)})) = \epsilon n/2$$ for some $\epsilon = \pm 1$. It follows that $b_0 + \frac{n}{2} \ne 0$ and further, $$b_0 = -(\epsilon + 1)n/2$$ Hence, $|b_0|$ is either $0$ or $n$. If $\Sigma_n(\widehat{b(1,1,1)})$ supports a co-oriented taut foliation, then by \cite[Theorem 3.3]{EHN}, either $b_0 + \frac{n}{2} = 0$ (which has already been ruled out), or $b_0 \leq -1$ and $b_0 + \frac{n}{2} \geq 1$. The condition that $b_0 \leq -1$ implies that $b_0 = -n$. But then $b_0 + \frac{n}{2} = -\frac{n}{2} < 0$, which contradicts the requirement that $b_0 + \frac{n}{2} \geq 1$. Hence no $\Sigma_n(\widehat{b(1,1,1)})$ supports a co-oriented taut foliation. Consequently, each is an L-space (\cite{BGW}). \end{proof} \begin{subcase} $(p,q,r) = (1,1,2)$ \end{subcase} \begin{lemma} $\widehat{b(1,1,2)}$ is the knot $5_2$. Consequently, $\Sigma_n(\widehat{b(1,1,2)})$ is an L-space if and only if $n = 2, 3, 4, 5$. In particular, Proposition \ref{prop: 1 < r < n} holds when $(p,q,r) = (1,1,2)$. \end{lemma} \begin{proof} The first claim is easily verified.
For the second, note that it follows from the third table in \S \ref{subsec: calculating alex polys} that $\Sigma_n(\widehat{b(1,1,2)})$ is not an L-space for $n \geq 9$. Robert Lipshitz has shown by computer calculation that $\Sigma_n(K)$, where $K = \widehat{b(1,1,2)} = 5_2$, is an L-space for $n = 5$, and is not an L-space for $n = 6, 7$, and $8$ (private communication). The fact that $\Sigma_5(K)$ is an L-space was also proved by Mitsunori Hori. See \cite{Te2}. We already know that $\Sigma_2(K)$ is an L-space. This is also true for $\Sigma_3(\widehat{b(1,1,2)})$ by \cite{Pe} and $\Sigma_4(\widehat{b(1,1,2)})$ by \cite{Te1}. \end{proof} \begin{case} $n \in \{4, 5\}$ \end{case} In this case, $\max \{p, q, r\} \leq 3$, so $(p,q,r)$ is one of $(1,1,1), (1,1,2), (1,1,3), (1,2,2), (1,2,3)$, $(1,3,3), (2,2,2), (2,2,3), (2,3,3)$, $(3,3,3)$. Similar to the last case, the condition that $\Sigma_n(\widehat{b(p,q,r)})$ is an L-space implies that for each root $\zeta \in S^1$ of $\Delta_{\widehat b}(t)$ we have $$\mathfrak{Re}(\zeta) > \left\{ \begin{array}{ll} \cos(2\pi/4) = 0 & \mbox{ if } n = 4 \\ \cos(2\pi/5) > 0.3 & \mbox{ if } n = 5 \end{array} \right.$$ We saw above that both $\Sigma_4(\widehat{b(p,q,r)})$ and $\Sigma_5(\widehat{b(p,q,r)})$ are L-spaces when $(p,q,r)$ is $(1,1,1)$ or $(1,1,2)$, which verifies Proposition \ref{prop: 1 < r < n} in these cases. \begin{subcase} $(p,q,r) = (1,2,2)$ \end{subcase} The reader will verify that $\widehat{b(1,2,2)}$ is the link $6_3^2$. See Figure \ref{fig: b(1,2,2)}. The root condition implies (cf. the third table in \S \ref{subsec: calculating alex polys}): \begin{figure} \caption{$\widehat{b(1,2,2)}$} \label{fig: b(1,2,2)} \end{figure} \begin{lemma} $\Sigma_n(\widehat{b(1,2,2)})$ is not an L-space for $n \geq 6$. \qed \end{lemma} We do not know whether or not $\Sigma_n(\widehat{b(1,2,2)})$ is an L-space for $n = 3, 4, 5$. \begin{subcase} $(p,q,r) = (2,2,2)$ \end{subcase} The closure of $b(2,2,2)$ is the link $7_1^3$ (cf. Figure \ref{fig: b(2,2,2)}).
From the third table in \S \ref{subsec: calculating alex polys} we see that: \begin{figure} \caption{$\widehat{b(2,2,2)}$} \label{fig: b(2,2,2)} \end{figure} \begin{lemma} $\Sigma_n(\widehat{b(2,2,2)})$ is not an L-space for $n \geq 5$. \qed \end{lemma} We do not know whether or not $\Sigma_n(\widehat{b(2,2,2)})$ is an L-space for $n = 3, 4$. Next suppose that $(p,q,r)$ is one of $(1,1,3), (1,2,3), (1,3,3), (2,2,3), (2,3,3), (3,3,3)$ and consider $b_0 = b(1,3,3)$. From above, $\nabla(1,3,3) = 2z(z^2 + 1)(z^2 + 2)$ and so $\Delta_{\widehat b_0}(t) = 2(t-1)(t^2 - t +1)(t^2 + 1)$. Hence $i$ is a root of $\Delta_{\widehat b_0}(t)$ so that $\mathcal{S}_{F(b_0)}(i)$ is indefinite. It follows that $\mathcal{S}_{F(b(p,q,r))}(i)$ is indefinite if $b(p,q,r)$ contains $b_0$. But then neither $\Sigma_4(\widehat{b(p,q,r)})$ nor $\Sigma_5(\widehat{b(p,q,r)})$ is an L-space (\cite[Theorem 1.1]{BBG}), contrary to our assumptions. We are left with considering $(p,q,r) = (1,1,3), (1,2,3)$, or $(2,2,3)$. From the third table in \S \ref{subsec: calculating alex polys} we see that the roots of the associated Alexander polynomials are $$\left\{ \begin{array}{cl} \{1, \frac14 \pm \frac{\sqrt{15}}{4}i\} & \mbox{ if } (p,q,r) = (1,1,3)\\ \{0.14645 \pm 0.98922i, 0.85355 \pm 0.52101i\} & \mbox{ if } (p,q,r) = (1,2,3) \\ \{1, \pm i, \frac34 \pm \frac14 \sqrt{7} i\} & \mbox{ if } (p,q,r) = (2,2,3) \end{array} \right.$$ In each case there is a root $\zeta$ of $\Delta_{\widehat b}(t)$ for which $\mathfrak{Re}(\zeta) < \frac{3}{10} < \cos(2\pi/5)$. Thus $n \ne 5$. Similarly $n = 4$ is ruled out for $(p,q,r) = (2,2,3)$, contrary to our assumptions. On the other hand, $n = 4$ remains a possibility for $(p,q,r) = (1,1,3)$ or $(1,2,3)$. \begin{subcase} $(p,q,r) = (1,1,3)$ \end{subcase} The closure of $b(1,1,3)$ is the link $6_2^2$. See Figure \ref{fig: b(1,3,3)}.
\begin{figure} \caption{$\widehat{b(1,1,3)}$} \label{fig: b(1,3,3)} \end{figure} From the third table in \S \ref{subsec: calculating alex polys} we see that: \begin{lemma} $\Sigma_n(\widehat{b(1,1,3)})$ is not an L-space for $n \geq 5$. \qed \end{lemma} We do not know whether or not $\Sigma_n(\widehat{b(1,1,3)})$ is an L-space for $n = 3, 4$. \begin{subcase} $(p,q,r) = (1,2,3)$ \end{subcase} The closure of $b(1,2,3)$ is the knot $7_5$ as depicted in \cite[Appendix C, page 392]{Rlf}. \begin{lemma} $\Sigma_n(\widehat{b(1,2,3)})$ is an L-space if $n = 3, 4$ and not an L-space for $n \geq 5$. Thus Proposition \ref{prop: 1 < r < n} holds when $(p,q,r) = (1,2,3)$. \end{lemma} \begin{proof} The third table in \S \ref{subsec: calculating alex polys} shows that $\Sigma_n(\widehat{b(1,2,3)})$ is not an L-space for $n \geq 5$. Since $7_5$ is a genus $2$ two-bridge knot, \cite[Theorem 1.2]{Ba} shows that $\Sigma_n(\widehat{b(1,2,3)})$ is an L-space for $n = 2$ and $3$. \end{proof} \begin{remark} \label{rmk: to do} {\rm We summarise here what needs to be done to extend Proposition \ref{prop: 1 < r < n} to include the three exceptional links $6_2^2, 6_3^2$ and $7_1^3$. \begin{itemize} \item If $L$ is the link $6_2^2 = \widehat{b(1,1,3)}$ (cf. Figure \ref{fig: b(1,3,3)}), it must be shown that either $\Sigma_3(L)$ is an L-space or $\Sigma_4(L)$ is not an L-space. \item If $L$ is the link $6_3^2 = \widehat{b(1,2,2)}$ (cf. Figure \ref{fig: b(1,2,2)}), it must be shown that either $\Sigma_3(L)$ and $\Sigma_4(L)$ are L-spaces or $\Sigma_4(L)$ and $\Sigma_5(L)$ are not L-spaces. \item If $L$ is the link $7_1^3 = \widehat{b(2,2,2)}$ (cf. Figure \ref{fig: b(2,2,2)}), it must be shown that either $\Sigma_3(L)$ is an L-space or $\Sigma_4(L)$ is not an L-space. \end{itemize} } \end{remark} \end{document}
\begin{document} \title[$\SL (2, \R)$-invariant measures] { Ratner's theorem on $\SL (2, \R)$-invariant measures} \begin{abstract}We give a relatively short and self-contained proof of Ratner's theorem in the special case of $\SL (2, \R)$-invariant measures. \end{abstract} \author {Manfred Einsiedler} \address{Mathematics Department, The Ohio State University, 231 W. 18th Avenue, Columbus, Ohio 43210} \thanks{The author acknowledges support of NSF Grant 0509350. This research was partially conducted while the author was employed by the Clay Mathematics Institute as a Research Scholar.} \date{\today} \maketitle \section {Introduction} M.~Ratner proved in a series of papers \cite{Ratner-invent-solvable,Ratner-acta-measure,Ratner-measure-rigidity,Ratner91t,Ratner-bull-distribution} very strong results on invariant measures and orbit closures for certain subgroups $H$ of a Lie group $G$ --- where $H$ acts on the right of $X = \Gamma \backslash G$ and $\Gamma< G$ is a lattice. More concretely, $H$ needs to be generated by one-parameter unipotent subgroups, and the statements are all of the form that invariant measures and orbit closures are always algebraic, as conjectured earlier by Raghunathan. Today these theorems are applied in many different areas of mathematics. While there are some very special cases for which the proof simplifies \cite{Ratner-sl2,Witte}, the general proof requires a deep understanding of the structure of Lie groups and ergodic theory. The aim of this paper is to give a self-contained, more accessible proof of the classification of invariant and ergodic measures for subgroups $H$ isomorphic to $\SL (2, \R)$. While this is also a special case of Ratner's theorem, it is a rich class since $G$ can be much larger than $H$. Moreover, the proof for this class is more accessible in terms of its requirements.
The methods used are not new, in particular most appear also in earlier work \cite{Ratner-joinings, Ratner-acta-measure} of Ratner, but it does not seem to be known that the following theorem also allows a relatively simple and short proof. \begin{theorem} \label{Main theorem} Let $G$ be a Lie group, $\Gamma< G$ a discrete subgroup, and $H< G$ a subgroup isomorphic to $\SL (2, \R)$. Then any $H$-invariant and ergodic probability measure $\mu$ on $X = \Gamma \backslash G$ is homogeneous, i.e.\ there exists a closed connected subgroup $L < G$ containing $H$ such that $\mu$ is $L$-invariant and some $x _ 0 \in X$ such that the $L$-orbit $x_0 L$ is closed and supports $\mu$. In other words $\mu$ is an $L$-invariant volume measure on $x_0L$. \end{theorem} As we will see, a graduate student who has started to learn, or is willing to learn, the very basics of Lie groups and ergodic theory should be able to follow the argument (no knowledge of radicals or structure theory of Lie groups and no knowledge of entropy is necessary). We will discuss the requirements in Section~\ref{ingredients}. The initiated reader will notice that this approach generalizes without too much work to other semisimple groups $H<G$ without compact factors. However, to keep the idea simple and to avoid unnecessary technicalities we only treat the above case. In the next section we give some motivation for the above and related questions. In particular, we will discuss an application of Ratner's theorems where the above special case is sufficient. This paper is best described as an introduction to Ratner's theorem on invariant measures, and is not a comprehensive survey of this area of research. The reader seeking such a survey is referred to \cite{survey-Kleinbock} and for some more recent developments to \cite{Einsiedler-Lindenstrauss-survey}. The author would like to thank M.~Ratner for comments on an earlier draft of this paper.
\section {Motivation} Let $G$ be a closed linear group, i.e.\ a closed subgroup of $\GL(n,\R)$ (with that assumption some definitions are easier to make). Let $\Gamma<G$ be a discrete subgroup so that $X=\Gamma\backslash G$ is a locally compact space with a natural $G$-action: \[ \mbox{for }g\in G,x\in X\mbox{ let }g.x=xg^{-1}. \] Now let $H<G$ be a closed subgroup and restrict the above action to the subgroup $H$. The question about the properties of the resulting $H$-action has many connections to various a priori non-dynamical mathematical problems, and is highly interesting both from that point of view and in its own right. The most basic question, vaguely formulated, is what the $H$-orbits $H.x_0$ look like for various points $x_0\in X$. Here one can make restrictions on $x_0$ or not, and ask, more precisely, either about the distribution properties of the orbit or about the nature of the closure of the orbit. If $X$ carries a $G$-invariant probability measure $m_X$, which in many situations is the case, then $m_X$ is called the {\em Haar measure} of $X$ and $\Gamma$ is by definition a {\em lattice}. In that situation one can restrict these questions to $m_X$-typical points and by doing so one has entered the realm of ergodic theory. Rephrased, the basic question is now whether $m_X$ is $H$-ergodic. Recall that by definition $m_X$ is $H$-ergodic if every measurable $f$ that is $H$-invariant is constant a.e.\ with respect to $m_X$, and note that while $H$-invariance of $m_X$ is inherited from $G$-invariance, the same is not true for $H$-ergodicity. The characterization of $H$-ergodicity in this context has been given in varying degrees of generality by many authors, mostly before 1980; see \cite[Chpt.\ 2]{survey-Kleinbock} for a detailed account. The power of this characterization is that often --- unless there are obvious reasons for failure of ergodicity --- the Haar measure turns out to be $H$-ergodic.
Moreover, assuming ergodicity one of the most basic theorems in abstract ergodic theory, the ergodic theorem, states that a.e.\ point equidistributes in $X$. (For the notion of equidistribution we need that $H$ is an amenable subgroup. The fact that $ \SL (2, \R)$ is not amenable actually makes it harder to apply Theorem \ref{Main theorem} --- and is the reason why the proof simplifies.) In particular, a.e.\ $H$-orbit is dense. This can be seen as the first answer to our original question. Let us assume from now on that $\Gamma$ is a lattice (some of what follows holds more generally for any discrete $\Gamma$ but not all). The above discussion around $H$-ergodicity is only the first step in understanding the structure of $H$-orbits. In general, there is no reason to believe that a similarly simple answer is possible for all points of $X$. This is especially true for general dynamical systems but, as we will discuss, also in the algebraic setting we consider here. More surprisingly, there are cases where we can understand {\bf all} $H$-orbits, or at least believe that it is possible to understand {\bf all} $H$-orbits. To be able to describe this we need to recall a few notions: $u \in G$ is {\em unipotent} if $1$ is the only eigenvalue of the matrix $u$, $a \in G$ is $\R$-{\em diagonalizable} if it is diagonalizable as a matrix over $\R$. A subgroup $U < G$ is a {\em one-parameter unipotent subgroup} if $U$ is the image of a homomorphism $t \in \R \mapsto u _ t \in U$ with $u _ t$ unipotent for all $t \in \R$. E.g.\ in $G = \SL (n, \R)$ the subgroup \begin{equation*} U = \begin{pmatrix}1 & * &\cdots & *\\ 0&1&\ddots&\vdots \\ \vdots&\ddots&\ddots&* \\ 0 &\cdots & 0 &1 \end{pmatrix} \end{equation*} contains only unipotent elements and is generated by one-parameter unipotent subgroups. A subgroup $A < G$ is $\R$-{\em diagonalizable} if for some $g \in \GL (n, \R)$ the conjugate $g A g ^{- 1}$ is a subgroup of the diagonal subgroup.
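To illustrate the notion of unipotence concretely (the particular matrix below is our own example), note that if $u$ is upper triangular with $1$'s on the diagonal then $u - \mathrm{id}$ is nilpotent, so $1$ is the only eigenvalue of $u$. A minimal numerical check:

```python
# a minimal sketch: for upper triangular u with unit diagonal, (u - I)^3 = 0
# in SL(3), which forces every eigenvalue of u to equal 1 (i.e. u is unipotent)
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

u = [[1, 2, 3],
     [0, 1, 4],
     [0, 0, 1]]
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

N = mat_sub(u, I)       # strictly upper triangular part of u
N2 = mat_mul(N, N)
N3 = mat_mul(N2, N)     # (u - I)^3
assert all(x == 0 for row in N3 for x in row)
```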
\subsection{Unipotently generated subgroups and Oppenheim's conjecture} One case of subgroups we understand quite well today is when $H$ is generated by one-parameter unipotent subgroups (as is the case for $\SL (2, \R)$). As mentioned before, this is thanks to the theorem of M.~Ratner \cite{Ratner91t} which says that all $H$-orbits are well-behaved, as conjectured by Raghunathan earlier. Every $H$-orbit $H.x_0$ is dense in the closed orbit $L.x_0$ of a closed connected group $L>H$, and the latter orbit $L.x_0$ supports a finite $L$-invariant volume measure $m_{L.x_0}$. If $H$ is itself a unipotent one-parameter subgroup, the orbit $H.x_0$ is equidistributed with respect to this measure $m_{L.x_0}$. These theorems were later extended by Ratner \cite{Ratner-padic} herself, as well as by Margulis and Tomanov \cite{Margulis-Tomanov}, to the more general setting of products of linear algebraic groups over various local fields ($S$-algebraic groups). A bit more technical is the following question: What are the $H$-invariant probability measures? Here it suffices to restrict to $H$-invariant and ergodic measures --- the general theorem of the ergodic decomposition states that any $H$-invariant probability measure can be obtained by averaging $H$-invariant and ergodic measures. Therefore, if we understand the latter measures we understand all of them. Ratner showed that all $H$-invariant and ergodic probability measures are of the form $m_{L.x_0}$ as discussed above --- Theorem \ref{Main theorem} is a special case of this. As it turns out, this question is crucial for the proof of the topological theorem regarding orbit closures mentioned above. Namely, using her theorem on measure classification, Ratner first proves that the orbit of a unipotent group always equidistributes with respect to an ergodic measure, and finally uses this to prove her topological theorem --- this is similar to the earlier discussion of $m_X$-typical points.
However, these steps are quite involved: First of all it is not clear that a limit distribution coming from the orbit of a one-parameter unipotent subgroup is a probability measure since the space might not be compact. However, earlier work of Dani \cite{Dani-inventiones, Dani-I, Dani-II} (which extends work by Margulis \cite{Margulis-nondivergence}) shows precisely this. Then it is not clear why such a limit is ergodic and why it is independent of the times used in the converging subsequence --- without going into details let us just say that the proof relies heavily on the structure of the ergodic measures and the properties of unipotent subgroups. Before Ratner classified all orbit closures, Margulis used a special case of this to prove Oppenheim's conjecture, which by that time was a long-standing open conjecture. This conjecture concerns the values of an indefinite irrational quadratic form $Q$ in $n$ variables at the integer lattice $\Z ^n$, and Margulis' theorem \cite{Margulis-Oppenheim-proof} states that under these assumptions $Q(\Z^n)$ is dense in $\R$ if $n\geq 3$. (It is not hard to see that all of these assumptions including $n\geq 3$ are necessary for the density conclusion.) The proof consists of analyzing all possible orbits of $\operatorname{SO}(2,1)$ on $X_3=\SL(3,\Z) \backslash \SL(3,\R)$. Even though $\operatorname{SO}(2,1)$ is essentially a quotient of $\SL (2, \R)$, Theorem \ref{Main theorem} does not immediately imply Oppenheim's conjecture since for the non-amenable group $\SL (2, \R)$ it is not obvious how to find an invariant measure on the closure of an orbit. \subsection{Diagonalizable subgroups} The opposite extreme to unipotent elements are $\R$-diagonalizable elements, so it is natural to ask next about the case of an $\R$-diagonalizable subgroup $A<G$: What do the closures of $A$-orbits look like? What are the $A$-invariant and ergodic probability measures?
As we will see this case is more difficult in various ways, in particular we do not have complete answers to these questions. {\bf Rank one:} Let $G = \SL (2, \R)$, $\Gamma = \SL (2, \Z)$, and set $X _ 2 = \Gamma \backslash G$. Let $A< \SL (2, \R) $ be the diagonal subgroup. Then the action of $A$ on $X _ 2$ can also be described as the geodesic flow on the unit tangent bundle of the modular surface. Up to the fact that the underlying space $X _ 2 $ is not compact this is a very good example of an Anosov flow. The corresponding theory can be used to show that there is a huge variety of orbit closures and $A$-invariant ergodic probability measures. So the answer in that case is that there is no simple answer to our questions --- but at least we know that. {\bf Higher rank:} We replace `2' by `3' and encounter very different behaviour. Let $G = \SL (3, \R)$, $\Gamma = \SL (3, \Z)$, and set $X _ 3 = \Gamma \backslash G$. Let $A< \SL (3, \R) $ be the diagonal subgroup, which this time up to finite index is isomorphic to $\R ^2$. Margulis, Furstenberg, Katok, and Spatzier conjectured that for the action of $A$ on $X_3$ there are very few $A$-invariant and ergodic probability measures, in particular that they again are all of the type $m_{L.x_0}$ for some $L>A$ and $x_0 \in X_3$ with closed orbit $L.x_0$. One motivation for that conjecture is Furstenberg's theorem \cite{Furstenberg-67} on $ \times 2$, $\times 3$-invariant closed subsets of $\R / \Z$ which states that all such sets are either finite unions of rational points or the whole space. This can be seen as an abelian analogue of the above problem. For orbit closures the situation is a bit more complicated: $G$ contains an isomorphic copy $L$ of the group $\operatorname{GL}(2,\R)$ embedded into the upper left 2-by-2 block (where the lower right entry is used to fix the determinant).
Now $\Gamma L$ is a closed orbit of $L$ and the $A$-action inside this orbit consists of the rank one action discussed above and one extra direction which moves everything towards infinity. (This behaviour of the $L$-orbit and the $A$-orbits of course needs justification, which in this case can be given algebraically.) Therefore, in terms of orbit closures the situation for $A$-orbits inside this $L$-orbit is as bad as for the corresponding action on $X_2$. However, it is possible to avoid this issue and to formulate a meaningful topological conjecture: Margulis conjectured that all bounded $A$-orbits on $X_3$ are compact, i.e.\ the only bounded orbits are periodic orbits for the $A$-action (which in fact all arise from a number theoretic construction). Margulis \cite{Margulis-Oppenheim-conjecture} also noted that the question regarding orbits in this setting is related to a long-standing conjecture by Littlewood. Littlewood conjectured around 1930 that for any two real numbers $\alpha, \beta \in \R$ the vector $(\alpha, \beta)$ is well approximable by rational vectors in the following multiplicative manner: \begin{equation*} \varliminf _{n \rightarrow \infty} n \|n \alpha\|\|n \beta\|=0, \end{equation*} where $\|u\|$ denotes the distance of a real number $u \in \R$ to $\Z$. Here $n$ is the common denominator of the components of the rational vector that approximates $(\alpha, \beta)$, and instead of taking the maximum of the differences along the $x$-axis and the $y$-axis we measure the quality of approximation by taking the product of the differences. The corresponding dynamical conjecture states that certain points (defined in terms of the vectors $(\alpha, \beta)$) all have unbounded orbit (where actually only a quarter of the acting group $A$ is used).
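The quantity appearing in Littlewood's conjecture is easy to experiment with numerically. The following small sketch (not part of the survey; the test pair $\alpha=\sqrt2$, $\beta=\sqrt3$ and the cutoffs are arbitrary choices) computes $\min_{1\leq n\leq N} n \|n\alpha\|\|n\beta\|$ for increasing $N$:

```python
import math

def dist_to_int(u):
    # ||u||: distance from the real number u to the nearest integer
    return abs(u - round(u))

def littlewood_min(alpha, beta, N):
    # smallest value of n * ||n*alpha|| * ||n*beta|| over 1 <= n <= N
    return min(n * dist_to_int(n * alpha) * dist_to_int(n * beta)
               for n in range(1, N + 1))

alpha, beta = math.sqrt(2), math.sqrt(3)
for N in (10, 100, 10000):
    # the minima slowly decrease as N grows
    print(N, littlewood_min(alpha, beta, N))
```

Such a finite computation of course proves nothing about the $\varliminf$; it merely illustrates how slowly the multiplicative approximation quality improves for a given pair.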
Building on earlier work of E.~Lindenstrauss \cite{Lindenstrauss-Quantum} and joint work of the author with A.~Katok \cite{Einsiedler-Katok-02}, we together obtained \cite{Einsiedler-Katok-Lindenstrauss} a partial answer to the conjecture on $A$-invariant and ergodic probability measures: If the measure has in addition positive entropy for some element of the action, then it must be the Haar measure $m_{X_3}$ --- this generalizes earlier work of Katok, Spatzier, and Kalinin \cite{Katok-Spatzier-96, Katok-Spatzier-98, Kalinin-Spatzier-02}, and related work by Lyons \cite{Lyons-88}, Rudolph \cite{Rudolph-90}, and Johnson \cite{Johnson-2-3} in the abelian setting of $\times 2, \times 3$. For Littlewood's conjecture we also show in \cite{Einsiedler-Katok-Lindenstrauss} that the exceptions form at most a set of Hausdorff dimension zero. Roughly speaking, the classification of all $A$-invariant probability measures with positive entropy can be used to show that very few $A$-orbits can stay within a compact subset of $X_3$, which by the mentioned dynamical formulation of Littlewood's conjecture is what is needed. Most of the proof of this theorem consists in showing that positive entropy implies invariance of the measure $\mu$ under some subgroup $H<\SL (3, \R)$ that is generated by one-parameter unipotent subgroups. Then one can apply Ratner's classification of invariant measures to the $H$-ergodic components of $\mu$. However, in this case (unless we are in the easy case of $H=\SL (3, \R)$) the subgroup $H$ is actually isomorphic to $\SL (2, \R)$. Therefore, Theorem \ref{Main theorem} is sufficient to analyze the $H$-ergodic components. The more general case of the maximal diagonal subgroup $A$ acting on $X_n$ for $n\geq 3$ is also treated in \cite{Einsiedler-Katok-Lindenstrauss} (always assuming positive entropy). Even more generally one can ask about any $\R$-diagonalizable subgroup of an (algebraic) linear group.
However, here there are unsolved technical difficulties that so far prevent a complete generalization. Joint ongoing work \cite{Einsiedler-Lindenstrauss-maximally-split} of the author with E.~Lindenstrauss solves these problems for maximally $\R$-diagonalizable subgroups (more technically speaking, for maximal $\R$-split tori $A$ in algebraic groups $G$ over $\R$ and similarly also for $S$-algebraic groups). This approach uses results from \cite{Einsiedler-Katok-nonsplit} and \cite{Einsiedler-Lindenstrauss-low-entropy}. For a more complete overview of these results and related applications see the survey \cite{Einsiedler-Lindenstrauss-survey}. \section{Ingredients of the proof} \label{ingredients} We list the facts and notions needed for the proof of Theorem \ref{Main theorem}, all of which, except for the last one, can be found in any introduction to Lie groups or to ergodic theory, respectively. \subsection{ The Lie group and its Lie algebra} The Lie algebra $\mathfrak g$ of $G$ is the tangent space to $G$ at the identity element $e \in G$. The exponential map $\exp: \mathfrak g \rightarrow G$ and its locally defined inverse, the logarithm map, give local isomorphisms between $\mathfrak g$ and $G$. For any $g \in G$ the derivative at $e$ of the conjugation map is the adjoint transformation $\operatorname{Ad} _ g: \mathfrak g \rightarrow \mathfrak g$, which satisfies $\exp \operatorname{Ad} _ g (v) = g \exp (v) g ^ {- 1}$ for $g \in G$ and $v \in \mathfrak g$. For linear groups this could not be easier: the Lie algebra is a linear subspace of the space of matrices, $\exp(\cdot)$ and $\log(\cdot)$ are defined as usual by power series, and the adjoint transformation $\operatorname{Ad} _ g$ is still conjugation by $g$. Closed subgroups $L < G$ are almost completely described by their respective Lie algebras $\mathfrak l$ inside $\mathfrak g$ as follows. Let $L ^\circ$ be the connected component of $L$ (that contains the identity $e$).
Then the Lie algebra $\mathfrak l$ of $L$ (and $L ^\circ$) uniquely determines $L ^\circ$ --- $L ^\circ$ is the subgroup generated by $\exp (\mathfrak l)$. (Moreover, any element $\ell \in L$ sufficiently close to $e$ is actually in $L^\circ$ and equals $\ell= \exp (v)$ for some small $v \in \mathfrak l$.) Using an inner product on $\mathfrak g$ we can define a left invariant Riemannian metric $d(\cdot,\cdot)$ on $G$. We will be using the restriction of $d(\cdot,\cdot)$ to subgroups $L < G$ and denote by $B_r^L$ the $r$-ball in $L$ around $e \in L$. If $\Gamma <G$ is a discrete subgroup, then $X=\Gamma \backslash G$ has a natural topology and in fact a metric defined by $d(\Gamma g, \Gamma h)=\min_ {\gamma \in \Gamma}d(g, \gamma h)$ for any $g,h \in G$ (which uses left invariance of $d(\cdot,\cdot)$). With this metric and topology $X$ can locally be described by $G$ as follows. For any $x \in X$ there is an $r>0$ such that the map $\imath: g\mapsto xg$ is a homeomorphism between $B_r^G$ and a neighborhood of $x$. Moreover, if $r$ is small enough $\imath:B_r^G \rightarrow X$ is in fact an isometric embedding. For a given $x$, a number $r>0$ with these properties is called an {\em injectivity radius at $x$}. \subsection{Complete reducibility and the irreducible representations of $ \SL (2, \R)$} \label{Representations} The first property of $\SL (2, \R)$ we will need is the following standard fact. Let $V$ be a finite dimensional real vector space and suppose $\SL (2, \R)$ acts linearly on $V$. Then any $\SL (2, \R)$-invariant subspace $W < V$ has an $\SL (2, \R)$-invariant complement $W ' < V$ with $V = W \oplus W '$. The above implies that all finite dimensional representations of $\SL (2, \R)$ can be written as a direct sum of irreducible representations. The second fact we need is the description of these irreducible representations.
Let $A= \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $B = \begin{pmatrix}0 \\ 1 \end{pmatrix}$ denote the standard basis of $\R ^ 2 $ so that $\begin{pmatrix}1 & t \\ 0 & 1 \end{pmatrix} A = A$ and $\begin{pmatrix}1 & t \\ 0 & 1 \end{pmatrix} B = B+t A$. Any irreducible representation is obtained as a symmetric tensor product $\operatorname{Sym}^n(\R ^ 2)$ of the standard representation on $\R ^ 2$ for some $n$. $\operatorname{Sym}^n(\R ^ 2)$ has $A^n,A^{n-1}B,\ldots,B^n$ as a basis, and every element can be viewed as a homogeneous polynomial $p(A,B)$ of degree $n$. The action of $\begin{pmatrix}1 & t \\ 0 & 1 \end{pmatrix}$ can now be described by substitution: $p(A,B)$ is mapped to $p(A,B+tA)$. More concretely, $p(A,B)=c_0A^n+c_{1}A^{n-1}B+\cdots+c_nB^n$ is mapped to \begin{align*} p(A,B+tA)=&(c_0+c_{1}t+\cdots+c_nt^n) A^n+\\&(c_{1}+\cdots+c_nnt^{n-1})A^{n-1}B+\\&\cdots+c_nB^n, \end{align*} where the coefficients in front of the various powers of $t$ are the original components of the vector $p(A,B)$ multiplied by binomial coefficients. Notice that all components of $p(A,B)$ appear in the image vector in the component corresponding to $A^n$. Moreover, for any component of $p(A,B)$ the highest power of $t$ it gets multiplied by appears in the resulting component corresponding to $A^n$. For that reason, if $p (A,B)$ is not just a multiple of $A ^ n$, the image of $p(A,B)$ under $\begin{pmatrix}1 & t \\ 0 & 1 \end{pmatrix}$ will always grow fastest in the direction of $A ^ n$ as $t\rightarrow\infty$. \subsection{Recurrence and the ergodic theorem} Let $ (X, \mu)$ be a probability space, and let $T:X \rightarrow X$ be measure preserving. Then by Poincar\'e recurrence, for any set $B \subset X$ of positive measure and a.e.\ $x \in B$ there are infinitely many $n$ with $T^nx \in B$. Now suppose $u _ t: X \rightarrow X$ for $t \in \R$ is a one parameter flow acting on $X$ such that $\mu$ is $u _ \R$-invariant and ergodic.
Then for any $f \in L^1 (X, \mu)$ and $\mu$-a.e.\ $x \in X$ we have \begin{equation*} \frac{1}{T} \int_ 0 ^T f(u_t(x))\operatorname{d}\! t \rightarrow \int_ X f \operatorname{d}\mu\mbox{ for }T \rightarrow \infty. \end{equation*} This is Birkhoff's pointwise ergodic theorem for flows. \subsection{Mautner's phenomenon for $\SL(2,\R)$} To be able to apply the ergodic theorem as stated in the last section in the proof of Theorem~\ref{Main theorem} we will need to know that the $\SL(2, \R)$-invariant and ergodic probability measure is also ergodic under a one-parameter flow. The corresponding fact is best formulated in terms of unitary representations, is due to Moore \cite{Moore}, and is known as the {\em Mautner phenomenon}. For completeness we prove the special case needed. \begin{proposition} \label{Mautner} Let $\mathfrak H$ be a Hilbert space, and suppose $\phi:\SL(2,\R) \rightarrow \mathbb{U}(\mathfrak H)$ is a continuous unitary representation on $\mathfrak H$. In other words, $\phi$ is a homomorphism into the group of unitary automorphisms $\mathbb{U}(\mathfrak H)$ of $\mathfrak H$ such that for every $v \in \mathfrak H$ the vector $\phi(g)(v) \in \mathfrak H$ depends continuously on $g \in \SL(2,\R)$. Then any vector $v \in \mathfrak H$ that is invariant under the upper unipotent matrix group $U=\left\{\begin{pmatrix} 1 & * \\ 0 & 1\end{pmatrix}\right\}$ is in fact invariant under $\SL (2,\R)$. \end{proposition} Since any measure preserving action on $(X,\mu)$ gives rise to a continuous unitary representation on $\mathfrak H=L^2(X,\mu)$ the above immediately gives what we need (see also \cite[Prop.~5.2]{Ratner-acta-measure} for another elementary treatment): \begin{corollary} \label{Mautner ergodicity} Let $\mu$ be an $H$-invariant and ergodic probability measure on $X=\Gamma\backslash G$ with $\Gamma<G$ discrete, and $H<G$ isomorphic to $\SL(2,\R)$.
Then $\mu$ is also ergodic with respect to the one-parameter unipotent subgroup $U$ of $H$ corresponding to the upper unipotent subgroup in $\SL(2,\R)$. \end{corollary} In fact, a function $f \in L^2(X,\mu)$ that is invariant under $U$ must be invariant under $\SL(2,\R)$ by Proposition \ref{Mautner}. Since the action of the latter group is assumed to be ergodic, the function must be constant as required. (We leave it to the reader to check the continuity requirement.) \begin{proof}[Proof of Proposition \ref{Mautner}] Following Margulis \cite{Margulis-Kyoto} we define the auxiliary function $p (g) = (\phi (g) v, v)$. Notice first that the function $p(\cdot)$ characterizes invariance in the sense that $p (g) = (v, v)$ implies $\phi (g) v = v$. By continuity of the representation $p(\cdot)$ is also continuous. Moreover, by our assumption on $v$ the map $p(\cdot)$ is bi-$U$-invariant since \begin{equation*} p(ugu')= (\phi (u) \phi (g) \phi (u')v,v)=(\phi (g) v, \phi (u ^{- 1}) v) = p (g). \end{equation*} Let $\epsilon, r,s \in \R$ and calculate \begin{equation*} \begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ \epsilon & 1 \end{pmatrix} \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}= \begin{pmatrix} 1+r \epsilon & r+s+rs \epsilon \\ \epsilon & 1+s \epsilon \end{pmatrix}. \end{equation*} Now fix some $t \in \R$, let $\epsilon$ be close to zero but nonzero, and choose $r=\frac{e^t-1}{ \epsilon}$ and $s=\frac{-r}{1+r \epsilon}$. Then the above matrix simplifies to \begin{equation*} \begin{pmatrix} e^t & 0 \\ \epsilon & e ^{-t} \end{pmatrix}. \end{equation*} In particular, this shows that \begin{equation*} p\left(\begin{pmatrix} 1 & 0 \\ \epsilon & 1 \end{pmatrix}\right)=p\left(\begin{pmatrix} e^t & 0 \\ \epsilon & e ^{-t} \end{pmatrix} \right)\end{equation*} is both close to $p(e)$ and to \begin{equation*} p\left(\begin{pmatrix} e^t & 0 \\ 0& e ^{-t} \end{pmatrix}\right). 
\end{equation*} Therefore, the latter equals $(v,v)$, which implies that $v$ is invariant under $\begin{pmatrix} e^t & 0 \\ 0 & e ^{-t} \end{pmatrix}$ as mentioned before. The above now implies that $p(\cdot)$ is bi-invariant under the diagonal subgroup. Using this and the above argument once more, it follows that $v$ is also invariant under $\begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix}$ for all $s \in \R$. Since $U$, the diagonal subgroup, and the lower unipotent subgroup together generate $\SL(2,\R)$, the proposition follows. \end{proof} \section{The proof of Theorem \ref{Main theorem}} In this section we prove Theorem \ref{Main theorem} using the prerequisites discussed in the last section. Let us mention again that the general outline of the proof is very similar to the strategy M.~Ratner \cite{Ratner-acta-measure} used to prove her theorems. From now on let $\mu$ be an $H$-invariant and ergodic probability measure on $X = \Gamma \backslash G$. \subsection{ The goal and the first steps} It is easy to check that \begin{equation*} \operatorname{Stab}(\mu) =\{ g \in G:\mbox{ right multiplication by $ g$ on $X$ preserves }\mu\} \end{equation*} is a closed subgroup of $G$. Let $L=\operatorname{Stab} (\mu) ^\circ$ be its connected component. Then as discussed any element of $\operatorname{Stab} (\mu)$ sufficiently close to $e$ belongs to $L$. Also since $\SL (2, \R)$ is connected we have $H <L$. We will show that $\mu$ is concentrated on {\bf a single orbit} of $L$, i.e.\ that there is some $L$-orbit $L.x_0$ of full measure, $\mu (L.x_0) = 1$. Then by $L$-invariance of $\mu$ and uniqueness of Haar measure, $\mu$ would have to be the $L$-invariant volume measure on this orbit $L.x_0$. However, since $\mu$ is assumed to be a probability measure this also implies that the orbit $L.x_0$ is closed, as seen in the next lemma. \begin{lemma}\label{lemma to come} If $\mu$ is concentrated on a single $L$-orbit $L.x_0$ and is $L$-invariant, then $L.x_0$ is closed and $\mu$ is supported on $L.x_0$. 
\end{lemma} In the course of the proof we will recall a few facts about Haar measures and also prove that a Lie group which admits a lattice is unimodular, i.e.\ that its Haar measure is both left and right invariant. For a more comprehensive treatment of the relationship between lattices and Haar measures see \cite{Raghunathan}. \begin{proof} Suppose $x _ i = \ell _ i.x _ 0 \in L.x _ 0$ converges to $y$. We have to show that $y \in L.x _ 0$. Now either $x _ i \in \overline{B_1^L}. y$ for some $i$ --- i.e.\ the convergence is along $L$ and the lemma follows --- or $x _ i \not \in \overline{B_1^L}.y $ for all $i$. In the latter case we may choose a subsequence so that $x _ i \not \in \overline{B_1^L}.x _ j$ for $i \ne j$. Now let $r < 1$ be an injectivity radius of $X$ at $y$. Then $x _ i \in B _{r/2}^G.y$ for large enough $i$, say for $i \geq i _ 0$. It follows that the sets $B_{r/2}^L.x_i \subset X$ are disjoint for $i \geq i _ 0$. Since $x _ i = \ell _ i.x_0$ it follows that these sets are all of the form $B_{r/2}^L (\ell_i).x_0$. We claim that the existence of the finite volume orbit implies that $L$ is unimodular. If this is so, then we see that the sets $B_{r/2}^L.x_i$ all have the same measure, which contradicts the finite volume assumption. It remains to show that $L$ is unimodular if the $L$-orbit of $x _ 0 = \Gamma g _ 0$ has finite $L $-invariant volume, or equivalently if $\Gamma _ L = g _ 0 ^{- 1} \Gamma g _ 0 \cap L < L$ is a lattice. So suppose $\mu$ is an $L$-invariant probability measure on $\Gamma _ L \backslash L$, where $L$ is acting on the right. Then the following gives the relationship between $\mu$ and a right Haar measure $m_L$ on $L$. Let $f$ be a compactly supported continuous function on $L$, then \begin{equation*} \int f \operatorname{d} \! m_L = \int \sum_{\gamma \in \Gamma _ L}f(\gamma \ell) \operatorname{d} \! \mu. 
\end{equation*} Here note first that the sum $F (\ell)$ inside the integral on the right is finite for every $\ell$, satisfies $F(\gamma \ell) = F(\ell)$, and so defines a function on $\Gamma _ L \backslash L$ which is also compactly supported and continuous. Therefore, the right integral is well-defined. Using invariance of $\mu$ and the uniqueness of the right Haar measure on $L$, the equation follows (after possibly rescaling $m_L$). The above formula immediately implies that $m_L$ is left-invariant under $\Gamma _ L$. We use Poincar\'e recurrence to extend this to all of $L$. Let $K \subset L$ be a compact subset of positive measure. Let $\ell \in L$ be arbitrary and consider the map $T: \Gamma _ L \backslash L \rightarrow \Gamma _ L \backslash L$ defined by right multiplication by $\ell ^{- 1}$. By assumption this preserves the probability measure $\mu$, therefore there exists some fixed $k \in K$, infinitely many $n _ i$, and $\gamma _{i} \in \Gamma _ L$ such that $\gamma _{ i} k \ell ^{- n _ i} \in K$. In other words there exist infinitely many $k _ i \in K$ with $\ell ^{n _ i} = k _ i ^{- 1} \gamma _ i k$. The measure obtained by left multiplication by $\ell$ from a right Haar measure $m_L$ is again a right Haar measure and so must be a multiple $c (\ell)m_L$. The constant $c (\ell) \in \R ^ \times$ defines a character, i.e.\ a continuous homomorphism $c: L \rightarrow \R ^ \times$. It follows that $c (\ell)^{n _ i} = c(k _ i)^{-1}c(\gamma _ i) c (k) = c (k _ i)^{-1} c (k)$ remains bounded and bounded away from zero as $n_i \rightarrow \infty$, since $k_i$ ranges in the compact set $K$. Therefore, $c (\ell)=1$. Since $\ell$ was arbitrary this proves the claim and the lemma. \end{proof} The main argument will be to show that if $\mu$ is not concentrated on a single orbit of $L$, then there are other elements of $\operatorname{Stab} (\mu)$ close to $e$. This shows that we should have started with a bigger subgroup $L'$. 
If we repeat the argument with this bigger $L'$, we will either achieve our goal or make $L'$ even bigger. We start by giving a local condition for a measure $\mu$ to be concentrated on a single orbit. \begin{lemma} \label{Local condition} Suppose $x _ 0 \in X$ has the property that $\mu (B_\delta^L.x_0)>0$ for some $\delta>0$. Then $\mu$ is concentrated on $L.x_0$. So either the conclusion of Theorem \ref{Main theorem} holds for $L$ and $x_0$, or for every $x_0$ we have $\mu(B_\delta ^L.x_0)=0$. \end{lemma} \begin{proof} This follows immediately from the definition of ergodicity of $\mu$ and the fact that $L.x_0$ is an $H$-invariant measurable set. \end{proof} We will achieve the assumption of the last lemma by studying large sets $X' \subset X$ of points with good properties. Let $x _ 0 \in X '$ be such that all balls around $x _ 0$ have positive measure. Suppose $X'$ has the property that points $x '$ close to $x _ 0$ that also belong to $X '$ give `rise to additional invariance' of $\mu$ unless $x_0$ and $x '$ are locally on the same $L$-orbit (i.e.\ $x ' = \ell.x_0$ for some $\ell \in L$ close to $e$). Then either $L$ can be made bigger or $B_\delta ^G.x_0\cap X'\subset B_\delta^L.x_0$ for some $\delta>0$, and therefore the latter has positive measure. However, carrying that argument through requires a lot more work. We start with a less ambitious statement where two nearby points in a special position relative to each other give `rise to invariance' of $\mu$. Recall that $U=\left\{\begin{pmatrix} 1 & * \\ 0 & 1\end{pmatrix}\right\}$. \begin{proposition} \label{Centralizer proposition} There is a set $X ' \subset X$ of $\mu$-measure one such that if $x, x ' \in X '$ and $x ' = c. x$ with \begin{equation*} c \in C_G(U) =\{ g \in G: g u = u g\text{\ for all }u \in U\}, \end{equation*} then $c$ preserves $\mu$. \end{proposition} We define the set $X'$ in the above proposition to be the set of $\mu$-generic points (for the one-parameter subgroup defined by $U$). 
A point $x \in X$ is {\em $\mu$-generic} if \begin{equation*} \frac{1}{T}\int_0^Tf(u_t.x)\operatorname{d}\!t \rightarrow \int f\operatorname{d}\!\mu\mbox{ for }T \rightarrow \infty \end{equation*} for all compactly supported, continuous functions $f:X \rightarrow \R$. Recall that by the Mautner phenomenon $\mu$ is $U$-ergodic. Now the ergodic theorem implies that the set $X '$ of all $\mu$-generic points has measure one. (Here one first applies the ergodic theorem for a countable dense set of compactly supported, continuous functions and then extends the statement to all such functions by approximation.) \begin{proof} For $c \in C _ G (U)$ and a compactly supported, continuous function $f: X \rightarrow \R$ define the function $f_c(x)=f(c.x)$ of the same type. Now assume $x,x'=c.x \in X'$ are $ \mu$-generic. Since $u _ t c =cu _ t$ we have $f(u _ t.x') = f (cu _ t.x) = f_c (u _ t.x)$ and so the limits \begin{align*} \frac{1}{T}\int_0^Tf(u_t.x')\operatorname{d}\!t & \rightarrow \int f\operatorname{d}\!\mu\mbox{ and}\\ \frac{1}{T}\int_0^Tf_c(u_t.x)\operatorname{d}\!t & \rightarrow \int f_c\operatorname{d}\!\mu \end{align*} are equal. However, the last integral equals $\int f_c\operatorname{d}\!\mu=\int f\operatorname{d}\!c_*\mu$ where $c_*\mu$ is the push forward of $\mu$ under $c$. Since $f$ was any compactly supported, continuous function, $\mu = c_*\mu$ as claimed. \end{proof} \subsection{Outline of the H-principle} In Proposition \ref{Centralizer proposition} we derived invariance of $\mu$ but only if we have two points $x, x ' \in X '$ that are in a very special relationship to each other. On the other hand if $\mu$ is not supported on a single $L$-orbit, then we know by Lemma \ref{Local condition} that we can find many $y, y ' \in X'$ that are close together but are not locally on the same $L$-leaf. Without too much work we will see that we can assume \begin{equation*} y ' = \exp (v). 
y \mbox{ with }v \in \mathfrak l' \end{equation*} where $ \mathfrak l'$ is an $\SL (2, \R)$-invariant complement in $\mathfrak g$ of the Lie algebra $\mathfrak l$ of $L$, see Lemma \ref{pigeon hole}. What we are going to describe is a version of the so-called H-principle as introduced by Ratner \cite{Ratner-factor,Ratner-joinings} and generalized by Witte \cite{Witte-rigidity}, see also \cite{Witte}. By applying the same unipotent matrix $u \in U$ to $y$ and $y '$ we get \begin{equation*} u. y ' = (u \exp (v) u ^{- 1}).(u.y) = \exp (\operatorname{Ad} _ u (v)).(u.y). \end{equation*} In other words, the divergence of the orbits through $y$ and $y '$ can be described by conjugation in $G$ --- or even by the adjoint representation on $\mathfrak g$. Since $H$ is assumed to be isomorphic to $\SL (2, \R)$ we will be able to use the theory of representations as in Section \ref{Representations}. In particular, recall that the fastest divergence happens along a direction which is stabilized by $U$. Since all points on the $U$-orbit of a $\mu$-generic point are also $\mu$-generic, one could hope to flow along $U$ until the two points $x=u.y$, $x '=u.y'$ differ significantly but not yet too much. Then $x'=u.y'= h.(u.y)=h.x$ with $h$ almost in $C_G(U)$. To fix the `almost' in this statement we will consider points that are even closer to each other, flow along $U$ for a longer time, and get a sequence of pairs of $\mu$-generic points whose relative displacement is closer and closer to an element of $C_G(U)$. In the limit we hope to get two points that differ precisely by some element of $C_G(U)$ which is not in $L$. The main problem is that limits of $\mu$-generic points need not be $\mu$-generic (even for actions of unipotent groups). Therefore, we need to introduce quite early in the argument a compact subset $K \subset X '$ of almost full measure that consists entirely of $\mu$-generic points. 
When constructing the points $u.y,u.y'$ we will make sure that they belong to $K$ --- this way we will be able to go to the limit and get $\mu$-generic points that differ by some element of $C_G(U)$. We are now ready to proceed more rigorously. \subsection{Formal preparations, the sets $K$, $X _ 1$, and $X _ 2$} Let $X '$ be the set of $\mu$-generic points as above, and let $K \subset X '$ be compact with $\mu (K) > 0.9$. By the ergodic theorem \begin{equation*} \frac{1}{T} \int_ 0 ^ T 1 _ K (u _ t.y) \operatorname{d}\!t \rightarrow \mu (K) \end{equation*} for $\mu$-a.e.\ $y \in X$. In particular, we must have for a.e.\ $y \in X$ \begin{equation*} \frac{1}{T} \int_ 0 ^ T 1 _ K (u _ t.y) \operatorname{d}\!t > 0.8\mbox{ for large enough } T. \end{equation*} Here $T$ may depend on $y$ but by choosing $T _ 0$ large enough we may assume that the set \begin{equation*} X_1= \left \{ y \in X:\frac{1}{T} \int_ 0 ^ T 1 _ K (u _ t.y) \operatorname{d}\!t > 0.8 \text{ for all } T\geq T_0 \right\} \end{equation*} has measure $\mu (X _ 1) >0.99$. By definition points in $X _ 1$ visit $K$ often enough so that we will be able to find for any $y, y ' \in X_1$ many common values of $t$ with $u _ t.y, u _ t.y'\in K$. The last preparation we need will allow us to find $y, y ' \in X _ 1$ that differ by some $\exp (v)$ with $v \in \mathfrak l '$. For this we define \begin{equation*} X _ 2 = \left \{ z \in X:\frac{1}{m_L(B_1^L)} \int_ {B_1^L} 1 _ {X _ 1} (\ell. z) \operatorname{d} \! m_L (\ell) > 0.9\right \} \end{equation*} where $ m _ L$ is a Haar measure on $L$. (Any other smooth measure would do here as well.) \begin{lemma} $\mu (X _ 2) > 0.9$. \end{lemma} \begin{proof} Define $Y = X \times B_1^L$ and consider the product measure $\nu = \frac{1}{m_L(B_1^L)} \mu \times m_L$ on $Y$. The function $f(z,\ell)=1_{X_1}(\ell.z)$ integrated over $z$ always gives $\mu (X _ 1)$, independently of $\ell$, since $\mu$ is $L$-invariant. 
Integrating over $\ell$ first, we therefore get the function \begin{equation*} g(z)=\frac{1}{m_L(B_1^L)} \int_ {B_1^L} 1 _ {X _ 1} (\ell. z) \operatorname{d} \! m_L (\ell) \end{equation*} whose integral satisfies $\int g \operatorname{d} \! \mu = \mu (X _ 1) > 0.99$. Since $g (z) \in [0, 1]$ for all $z$, \begin{equation*} 0.99 < \int_{X _ 2} g \operatorname{d} \! \mu+\int_{X\setminus X_2} g \operatorname{d} \! \mu \leq \mu (X _ 2) + 0.9 \mu (X \setminus X_2) = 0.1 \mu (X _ 2) + 0.9, \end{equation*} which implies the lemma. \end{proof} Let as before $\mathfrak l \subset \mathfrak g$ be the Lie algebra of $L < G$ and let $\mathfrak l ' \subset \mathfrak g$ be an $\SL (2, \R)$-invariant complement of $\mathfrak l $ in $\mathfrak g$. Then the map $\phi: \mathfrak l ' \times \mathfrak l \rightarrow G$ defined by $\phi ( v,w) = \exp (v) \exp (w)$ is $C^ \infty$ and its derivative at $(0,0)$ is the embedding of $\mathfrak l ' \times \mathfrak l $ into $\mathfrak g$. Therefore, $\phi$ is locally invertible so that every $g \in G$ close to $e$ is a unique product $g = \exp (v)\ell$ for some $\ell \in L$ close to $e$ and some small $v \in \mathfrak l '$. We define $\pi _ L (g) = \ell$. For simplicity of notation we assume that this map is defined on an open set containing $\overline{B_1 ^ L}$ (if necessary we rescale the metric). \begin{lemma}\label{pigeon hole} For any $\epsilon > 0$ there exists $\delta > 0$ such that for $g \in B_\delta ^ G$, and $z, z ' = g.z \in X _ 2 $ there are $\ell_2 \in B_1^L$ and $\ell_1 \in B_{ \epsilon} ^ L(\ell_2)$ with $\ell_1 .z, \ell_2.z'\in X _ 1$ and $\ell_2g\ell_1^{-1}= \exp (v)$ for some $v \in B_\epsilon ^{ \mathfrak l '} (0)$. \end{lemma} \begin{proof} Let $g \in B_\delta ^ G$ and consider the $C ^ \infty$-function $\psi (\ell) = \pi _ L (\ell g)$. If $g = e$ then $\psi$ is the identity; therefore if $\delta$ is small enough the derivative of $\psi$ is close to the identity and its Jacobian is close to one. 
In particular, we can ensure that $m_L(\psi (E ' )) > 0.9 m_L(E ')$ for any measurable subset $E ' \subset B_1^L$. Moreover, for $\delta$ small enough we have $\psi (\ell) \in B_{\epsilon} ^ L(\ell)$ for any $\ell \in B_1 ^ L$ and so $\psi (B_1^L) \subset B_{ 1 + \epsilon} ^ L$. Now define the sets $E = \{\ell \in B _ 1 ^ L: \ell.z \in X _ 1 \}$ and $E ' = \{\ell \in B _ 1 ^ L: \ell.z ' \in X _ 1 \}$ which satisfy $m_L(E)>0.9m_L(B_1^L)$ and $m_L(E ')>0.9m_L(B_1^L)$ by definition of $X _ 2$. We may assume that $\epsilon$ is small enough so that $m_L(B_{ 1 + \epsilon}^L)<1.1 m_L(B_1^L)$. Together with the above estimate we now have $m_L(\psi (E')) > 0.5 m _ L (B_{1 + \epsilon} ^L)$ and $m_L(E) > 0.5 m _ L (B_{1 + \epsilon} ^L)$. Therefore, there exists some $\ell_2 \in E'$ with $\ell_1= \psi (\ell_2) \in E$. By definition of $E,E'$ we have $\ell_1.z,\ell_2.z ' \in X _ 1$. Finally, by definition of $\pi _ L$ \begin{equation*} \ell_2 g \ell_1 ^ { - 1} =\ell_2 g \pi _ L (\ell_2 g) ^{- 1} = \exp (v) \end{equation*} for some $v \in \mathfrak l '$. Again for sufficiently small $\delta$ we will have $v \in B_\epsilon ^{ \mathfrak l '} (0)$. \end{proof} \subsection{H-principle for $\SL(2,\R)$} Let $x_0 \in X_2 \cap \supp \mu|_{X_2}$ so that \begin{equation*} \mu\bigl((B_\delta^G.x_0) \cap X_2\bigr)>0 \text { for all } \delta>0. \end{equation*} Now one of the following two statements must hold: \begin{enumerate} \item there exists some $\delta>0$ such that $B_\delta^G.x_0 \cap X_2 \subset B_\delta^L.x_0$, or \item for all $\delta>0$ we have $B_\delta^G.x_0 \cap X_2 \not \subset B_\delta ^L.x_0$. \end{enumerate} We claim that actually only (1) above is possible if $L$ is really the connected component of $\operatorname{Stab}(\mu)$. Once this has been shown, we have $\mu(B_\delta ^L.x_0)>0$, which was the assumption of Lemma \ref{Local condition}. 
Therefore, $\mu(L.x_0)=1$, and by Lemma \ref{lemma to come} $L.x_0 \subset X$ is closed --- Theorem~\ref{Main theorem} follows. So what we really have to show is that (2) implies that $\mu$ is invariant under a one-parameter subgroup that is not contained in $L$. \begin{lemma} \label{Transverse existence} Assuming (2), for every $\epsilon >0$ there are two points $y, y' \in X_1$ with $d(y, y') < \epsilon$ and $y' = \exp(v).y$ for some nonzero $v \in B_\epsilon^{\mathfrak l'}(0)$. \end{lemma} \begin{proof} Let $z=x_0$. By (2) there exists a point $z' \in X_2$ with $d(z,z')<\delta$ and $z' \not\in B_{\delta}^L.z$. Let $g \in B_{\delta}^G$ be such that $z'=g.z$ and $g \not\in L$. Applying Lemma~\ref{pigeon hole}, the statement follows, since $g \not\in L$ and so $v \ne 0$ by our choice of $z'$. \end{proof} Using the points $y,y' \in X_1$ and vectors $v \in \mathfrak l'$ provided by the above lemma for every $\epsilon>0$, we will show that $\mu$ is invariant under a one-parameter subgroup that is not contained in $L$. For this it is enough to show the following: {\bf Claim:} For any $\eta>0$ there exists a nonzero $w \in B_\eta^{\mathfrak l'}(0)$ such that $\mu$ is invariant under $\exp(w)$. To see that this is the remaining assertion, notice that we then also have invariance of $\mu$ under the subgroup $\exp(\Z w)$. While this subgroup could still be discrete, letting $\eta \rightarrow 0$ we find, by compactness of the unit ball in $\mathfrak l'$, a limiting one-parameter subgroup $\exp(\R w)$ that leaves $\mu$ invariant. We now prove the claim. Let $\eta>0$ be fixed, and let $\epsilon>0$, $y,y' \in X_1$, and $v \in B_\epsilon^{\mathfrak l'}(0)$ be as above. We will think of $\epsilon$ as much smaller than $\eta$, since below we will let $\epsilon$ shrink to zero while keeping $\eta$ fixed. Let $\operatorname{Sym}^n(\R^2)$ be an irreducible representation as in Section \ref{Representations}, and let $p=p(A,B) \in \operatorname{Sym}^n(\R^2)$.
Recall that $u_t=\begin{pmatrix} 1 & t \\ 0 & 1\end{pmatrix}$ applied to $p(A,B)$ gives $p(A,B+tA)$. We define \[ T_p=\frac{\eta}{\max(|c_1|,|c_2|^{1/2},\ldots,|c_n|^{1/n})} \] and set $T_p=\infty$ if the expression on the right is not defined. The significance of $T_p$ is that for $t=T_p$ at least one term in the sum $(c_0+c_1t+\cdots+c_nt^n)$ is of absolute value one while all others are less than that --- recall that this sum is the coefficient of $A^n$ in $p(A,B+tA)$. To extend this definition to $\mathfrak l'$, which is not necessarily irreducible, we split $\mathfrak l'$ into irreducible representations $\mathfrak l'=\bigoplus_{j=1}^k V_j$ and define for $v=(p_j)_{j=1,\ldots,k}$ \[ T_v=\min_j T_{p_j}. \] \begin{lemma} \label{Next to last} There exist constants $n>0$ and $C>0$ that only depend on $\mathfrak l'$ such that for $v \in B_\epsilon^{\mathfrak l'}(0)$ and $t \in [0,T_v]$ we have \begin{equation*} \operatorname{Ad}_{u_t}(v) = w+O(\epsilon^{1/n}) \end{equation*} where $w \in B_{C\eta}^{\mathfrak l'}(0)$ is fixed under the subgroup $U=u_{\R}$. Here we write $O(\epsilon^{1/n})$ to indicate a vector in $\mathfrak l'$ of norm less than $C\epsilon^{1/n}$. \end{lemma} \begin{proof} We first show the statement for irreducible representations $\operatorname{Sym}^n(\R^2)$ inside $\mathfrak l'$. There any multiple of $A^n$ is fixed under $U$. As in the earlier discussion, the coefficient $(c_0 + c_1 t + \cdots + c_n t^n)$ is bounded by $n\eta$ for $t \in [0, T_p]$. For the other coefficients note first that these are sums of terms of the form $c_i t^j$ for $j < i$. Since $t < \eta |c_i|^{-1/i}$, each such term is bounded by $|c_i t^j| \leq |c_i||c_i|^{-j/i} \leq |c_i|^{1/i} \ll \epsilon^{1/n}$, where the implied constant only depends on the norm on $\mathfrak l'$ and the way we split $\mathfrak l'$ into irreducibles (and we assumed $\eta<1$). This proves the lemma for irreducible representations.
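As an aside, the term bound just used can be sanity-checked numerically. The sketch below (hypothetical coefficients; the degree $6$ and the value $\eta=0.1$ are arbitrary choices of this illustration) verifies that with $T_p=\eta/\max_i|c_i|^{1/i}$ every term $|c_i|\,t^i$ with $t\in[0,T_p]$ is at most $\eta^i\le\eta$:

```python
import numpy as np

def T_p(c, eta):
    # T_p = eta / max_i |c_i|^(1/i) for coefficients c = (c_1, ..., c_n)
    return eta / max(abs(ci) ** (1.0 / i) for i, ci in enumerate(c, start=1))

rng = np.random.default_rng(0)
eta = 0.1
for _ in range(100):
    c = rng.normal(size=6)                 # hypothetical coefficients c_1, ..., c_6
    T = T_p(c, eta)
    for t in np.linspace(0.0, T, 50):
        # each term |c_i| t^i is bounded by eta^i <= eta (since eta < 1)
        assert all(abs(c[i - 1]) * t ** i <= eta + 1e-12 for i in range(1, 7))
```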
The general case is now straightforward. If $t < T_v = \min_j T_{p_j}$, then since $\operatorname{Ad}_{u_t}(p_j)$ is of the required form for $j = 1,\ldots,k$, the lemma follows by taking sums. \end{proof} If $v$ is already fixed by $U$, then $T_v = \infty$ (and conversely), and the above statement is rather trivial since $w = v$. Moreover, by definition of $X_1$ we have $\frac{1}{T} \int_0^T 1_K(u_t.y) \operatorname{d}\!t >0.8$, and similarly for $y'$. From this it follows that there is some $t \in [0, T_0]$ with $u_t.y, u_t.y' \in K$. Since $K \subset X'$, Lemma \ref{Centralizer proposition} proves the claim in that case (assuming $\epsilon < \eta$), and we may from now on assume that $v$ is not fixed under the action of $U$, so that $T_v < \infty$. \begin{lemma} \label{Last one} There exists a constant $c > 0$ that only depends on $\mathfrak l'$ such that the decomposition $\operatorname{Ad}_{u_t}(v) = w + O(\epsilon^{1/n})$ as above satisfies $\|w\| > c\eta$ for $t \in E_v$, where $E_v \subset [0,T_v]$ has Lebesgue measure at least $0.9 T_v$. \end{lemma} \begin{proof} We only have to look at the irreducible representations $V_j= \operatorname{Sym}^n(\R^2)$ in $\mathfrak l'$ with $T_v = T_{p_j}$. The size of the corresponding component of $w$ is determined by the value of the polynomial $c_0 + c_1 t +\cdots+ c_n t^n$. We change variables by setting $s=\frac{t}{T_v}$ and get the polynomial $q(s) = c_0 + c_1 T_v s +\cdots + c_n T_v^n s^n$. By definition of $T_v$ the polynomial $q(s)$ has at least one coefficient of absolute value one while the others have absolute value at most one. Therefore, we have reduced the problem to finding a constant $c$ such that for every such polynomial the set $E_1=\{s \in [0,1]:|q(s)|\geq c\}$ has measure bigger than $0.9$.
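This reduced statement can be probed numerically. The Monte-Carlo sketch below (grid size, degree $5$, and the cutoff $c=10^{-6}$ are arbitrary choices of this illustration, not part of the argument) checks that for random polynomials on $[0,1]$, normalized so that the largest coefficient has absolute value one, the set $\{s:|q(s)|\ge c\}$ has measure well above $0.9$:

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0.0, 1.0, 100_001)
c = 1e-6
for _ in range(200):
    coef = rng.normal(size=6)          # hypothetical degree-5 polynomial
    coef /= np.abs(coef).max()         # normalize: largest |coefficient| = 1
    q = np.polynomial.polynomial.polyval(s, coef)
    measure = np.mean(np.abs(q) >= c)  # grid estimate of Leb{s : |q(s)| >= c}
    assert measure > 0.9
```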
This can be done in various ways --- we will use the following property of polynomials for the proof. Every polynomial of degree $n$ is determined by $n+1$ values (by the standard interpolation procedure). Moreover, we can give an upper bound on the coefficients in terms of these values, unless the values of $s$ used for the data points are very close together. (The determinant of the Vandermonde matrix is the product of the differences of the values of $s$ used in the interpolation.) Suppose $s_0 \in [0,1]\setminus E_1$; then, unsurprisingly, all points close to $s_0$ also give small values. So we now look for $s_1 \in [0,1] \setminus (E_1 \cup [s_0-\frac{1}{20n},s_0+\frac{1}{20n}])$ --- if there is no such point then $E_1$ is as big as required. We repeat this search until we have found $n+1$ points $s_0,\ldots,s_n \in [0,1]\setminus E_1$ that are all separated from each other by at least $\frac{1}{20n}$. Again, if we are not able to find these points, then $E_1$ is sufficiently big. However, as discussed, since $|q(s_0)|,\ldots,|q(s_n)|<c$ the coefficients of $q$ are bounded by a multiple of $c$ (which also involves some power of the degree $n$). If $c$ is small enough, we get a contradiction to the assumption that $q$ has at least one coefficient of absolute value one. It follows that for that choice of $c$ we can find at most $n$ points in the above search, and so $E_1$ has Lebesgue measure at least $1-n\frac{2}{20n}=0.9$. \end{proof} Recall that case (1) from the beginning of this section implies Theorem \ref{Main theorem} and that we are assuming case (2). Moreover, recall that this implies for all $\epsilon > 0$ the existence of $y, y' \in X_1$ with $y' = \exp(v).y$ for some nonzero $v \in B_\epsilon^{\mathfrak l'}(0)$ by Lemma \ref{Transverse existence}. By definition of $X_1$ the sets \begin{align*} E_T & = \{ t \in [0, T]: u_t.y \in K \} \text{ and} \\ E_T' & = \{ t \in [0, T]: u_t.
y' \in K \} \end{align*} have Lebesgue measure bigger than $0.8 T$ whenever $T \geq T_0$. From the definition it is easy to see that $T_v \geq T_0$ once $\epsilon$, and therefore $v$, are sufficiently small, so we can take $T = T_v$. Moreover, let $E_v$ be as in Lemma \ref{Last one}. Then the union of the complements of these three sets in $[0, T_v]$ has Lebesgue measure less than $0.5 T_v$. Therefore, there exists some $t \in E_{T_v} \cap E_{T_v}' \cap E_v$. We set $x = u_t.y$ and $x' = u_t.y'$, which both belong to $K$ by definition of $E_{T_v}$ and $E_{T_v}'$. Moreover, $x' = \exp(w + O(\epsilon^{1/n})).x$, where $w \in \mathfrak l'$ is stabilized by $U$ and satisfies $c\eta \leq \|w\| \leq C\eta$ by Lemmas~\ref{Next to last} and~\ref{Last one}. We let $\epsilon \rightarrow 0$ and choose converging subsequences for $x$, $x'$, and $w$. This shows the existence of $x, x' = \exp(w).x \in K$ and $w \in \mathfrak l'$ with $c\eta \leq \|w\| \leq C\eta$ which is stabilized by $U$. This is in effect our earlier claim, which as we have shown implies that $\mu$ is invariant under a one-parameter subgroup not contained in $L$. This concludes the proof of Theorem \ref{Main theorem}. \end{document}
\begin{document} \title{Quantum process tomography of linear and quadratically nonlinear optical systems} \author{Kevin Valson Jacob\textsuperscript{1}} \email{[email protected]} \author{Anthony E. Mirasola\textsuperscript{1,2}} \author{Sushovit Adhikari\textsuperscript{1}} \author{Jonathan P. Dowling\textsuperscript{1,3,4,5}} \affiliation{\textsuperscript{1}Hearne Institute for Theoretical Physics and Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA } \affiliation{\textsuperscript{2}Department of Physics and Astronomy, Rice University, Houston, Texas 77005, USA} \affiliation{\textsuperscript{3} NYU-ECNU Institute of Physics at NYU Shanghai, Shanghai 200062, China} \affiliation{\textsuperscript{4} CAS-Alibaba Quantum Computing Laboratory, USTC Shanghai, Shanghai 201315, China } \affiliation{\textsuperscript{5} National Institute of Information and Communications Technology, Tokyo 184-8795, Japan} \begin{abstract} A central task in quantum information processing is to characterize quantum processes. In the realm of optical quantum information processing, this amounts to characterizing the transformations of the mode creation and annihilation operators. This transformation is unitary for linear optical systems, whereas Hamiltonians that are quadratic in the mode operators yield the well-known Bogoliubov transformations. In this paper, we propose a shot-noise-limited scheme for characterizing both these kinds of evolutions by employing a modified Mach-Zehnder interferometer. In order to characterize an $N$-mode device, we require $O(N^2)$ measurements. While it suffices to use coherent states for the characterization of linear optical systems, we additionally require single photons to characterize quadratically nonlinear optical systems.
\end{abstract} \pacs{03.65.Wj,03.67.-a,03.67.Mn} \maketitle \section{Introduction} Quantum process tomography is an indispensable tool in the characterization of the evolution of quantum systems. In general, the evolution of an $N$-dimensional quantum system is a completely positive trace-preserving map, which is characterized by $O(N^4)$ real parameters \cite{nielsen}. In addition to standard quantum process tomography, various schemes such as ancilla-assisted process tomography \cite{altepeter}, direct characterization of quantum dynamics \cite{lidar}, and compressed sensing \cite{shabani} have been developed to characterize such maps. Characterizing evolutions in optical systems requires a different scheme, as the Hilbert space corresponding to such systems is infinite dimensional. Several schemes have been proposed for characterizing optical systems. In Ref. \cite{lobino}, optical systems were probed with coherent states, and the results were used to predict the action of the system on an arbitrary state of light using the Glauber-Sudarshan $P$-representation. Simpler schemes are possible when we restrict our attention to linear optics. Such systems have a variety of applications, including interferometry, quantum metrology \cite{giovanetti}, linear optical quantum computing \cite{dowling}, and boson sampling \cite{aaronson}. In such systems, the mode operators evolve unitarily, and characterizing the corresponding finite dimensional unitary matrix completely specifies the evolution. Several schemes for characterizing linear optical devices were developed in Refs. \cite{obrien,ish,melbourne}. In all these schemes, probe states are input into the device, which is then characterized from the probabilities of specific outcomes of measurements at the output of the device. In Ref.
\cite{obrien}, single-photon probes were used to find the moduli of all matrix elements, and two-photon coincidence probabilities were used to find all the phases of the matrix elements of a $d$-mode unitary transformation. A similar scheme was analyzed in detail in Ref. \cite{ish}. Another approach using coherent state probes instead of single photons was developed in Ref. \cite{melbourne}. However, all these schemes assumed that the unitary matrix is real-bordered, i.e., that the elements in the first row and first column of the matrix are real. This restricts the class of devices that we can characterize. For instance, we would not be able to characterize a single-mode phase shifter by these schemes. In general, the phases in the first row and column are relevant when either the input state is superposed across input modes or when there is further interferometry after the device. The restriction on the class of unitaries which could be characterized in these schemes stems from the fact that in quantum mechanics, only phase differences and not phases themselves can be measured. Thus, in order to find all the phases in the transformation matrix, at least one auxiliary mode must be introduced relative to which all phases can be measured. We will show that a modified Mach-Zehnder interferometer serves this purpose. Although the characterization of linear optical devices has been explored, not much attention has been given to characterizing nonlinear devices. Such devices have been shown to be useful in producing squeezed light \cite{yuen} and entangled photons \cite{shih}. Systems where the Hamiltonian is quadratic in the mode operators produce the well-known Bogoliubov transformations of the mode operators \cite{lvovsky}. We will show that the modified Mach-Zehnder interferometer can characterize such transformations as well. \section{Setup} \begin{figure} \caption{Modified Mach-Zehnder interferometer for characterizing an unknown device $D$.
The $p^{th}$ input mode and the $q^{th}$ output mode of $D$ are coupled to the lower arm of the interferometer.} \end{figure} A schematic diagram of the proposed modified Mach-Zehnder interferometer is shown in FIG. 1. It consists of two 50:50 beamsplitters, a phase shifter, the unknown device to be characterized, and photodetectors. One input mode and one output mode of the unknown device are placed in the lower arm of the interferometer, and the phase shifter is placed in the upper arm of the interferometer. The input modes of the first beamsplitter and the output modes of the second beamsplitter are labelled as $\tilde{a}_i$ and $\tilde{b}_i$ respectively, where $i = 0,1$. The first beamsplitter implements the transformation \begin{equation} \left( \begin{array}{c} \hat{\tilde{a}}_0^\dagger \\ \hat{\tilde{a}}_1^\dagger \end{array} \right) = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \left( \begin{array}{c} \hat{a}_0^\dagger \\ \hat{a}_p^\dagger \end{array} \right). \end{equation} Here $a_p$ is the $p^{th}$ input mode of the unknown device, which is coupled to the lower arm of the interferometer. The upper arm contains a phase shifter which implements the transformation \begin{equation} \hat{a}_0^\dagger = e^{i\phi}\hat{b}_0^\dagger \end{equation} where $\phi \in \{0,\frac{\pi}{2}\}$. The output mode $b_q$ of the device and mode $b_0$ transform in the second beamsplitter as \begin{equation} \left( \begin{array}{c} \hat{b}_0^\dagger \\ \hat{b}_q^\dagger \end{array} \right) = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix} \left( \begin{array}{c} \hat{\tilde{b}}_0^\dagger \\ \hat{\tilde{b}}_1^\dagger \end{array} \right). \end{equation} \section{Tomography of unitary transformations} Consider an $N$-mode passive linear optical device where the input and output modes are labelled as $a_i$ and $b_i$ respectively, where $i \in \left\lbrace1,2,\ldots,N\right\rbrace$.
The input mode and output mode creation operators are related by a unitary transformation as \begin{equation} \label{unitary transformation} \hat{a}_i^\dagger = U_{ij} \hat{b}_j^\dagger \end{equation} where it is implicit that the repeated index is summed over. Our aim is to fully characterize this unitary matrix. For this, we probe it with coherent states. Consider a coherent state input in mode $\tilde{a}_0$. The input state is \begin{equation} |\Psi\rangle={\cal{D}}_{\tilde{a}_0}(\alpha)|0\rangle= e^{\alpha \hat{\tilde{a}}_0^{\dagger}-\alpha^*\hat{\tilde{a}}_0}|0\rangle \end{equation} where $\alpha$ is arbitrarily chosen, and ${\cal{D}}_{\tilde{a}_0}$ is the displacement operator acting on mode $\tilde{a}_0$. After the first beamsplitter, this state is \begin{equation} |\Psi\rangle={\cal{D}}_{a_0}\left(\frac{\alpha}{\sqrt{2}}\right)\otimes{\cal{D}}_{a_p}\left(\frac{i\alpha}{\sqrt{2}}\right) |0\rangle \end{equation} After the unitary device and the phase shifter, this state is transformed as \begin{equation} |\Psi\rangle={\cal{D}}_{b_0}\left(\frac{e^{i\phi}\alpha}{\sqrt{2}}\right)\otimes_{j=1}^N \exp\left[{\frac{i\alpha U_{pj}\hat{b}_j^\dagger}{\sqrt{2}}+\frac{i\alpha^* U_{pj}^*\hat{b}_j}{\sqrt{2}}}\right]|0\rangle \end{equation} This can be rewritten as \begin{equation} |\Psi\rangle={\cal{D}}_{b_0}\left(e^{i\phi}\frac{\alpha}{\sqrt{2}}\right)\otimes{\cal{D}}_{b_q}\left(\frac{i\alpha U_{pq}}{\sqrt{2}}\right) \otimes_{j \neq q}{\cal{D}}_{b_j}\left(\frac{i\alpha U_{pj}}{\sqrt{2}}\right)|0\rangle \end{equation} After the final beamsplitter, the reduced state in modes $\tilde{b}_0$ and $\tilde{b}_1$ is \begin{equation} |\tilde{\Psi}\rangle={\cal{D}}_{\tilde{b}_0}\left(\frac{\alpha}{2}(e^{i\phi}-U_{pq})\right)\otimes{\cal{D}}_{\tilde{b}_1}\left(\frac{i\alpha}{2}(e^{i\phi}+U_{pq})\right)|0\rangle \end{equation} We then measure the intensity difference between the modes.
This is \begin{equation} I_{\tilde{b}_1}-I_{\tilde{b}_0} = \left|\alpha\right|^2 \mathrm{Re}[e^{-i\phi}U_{pq}] \end{equation} Thus by choosing $\phi$ as 0 or $\frac{\pi}{2}$, we are able to find the real part and the imaginary part of the matrix element $U_{pq}$ respectively. By choosing $p,q\in\{1,2,\ldots,N\}$ we can find all the matrix elements in $O(N^2)$ measurements. This completes the characterization of the unitary matrix. \section{Tomography of Bogoliubov transformations} Having seen how unitary evolutions of the mode operators can be characterized, we now move on to characterizing Bogoliubov transformations. In such devices the mode operators evolve as \begin{equation} \hat{a}_i^\dagger=U_{ij}\hat{b}_j^\dagger + V_{ij}\hat{b}_j \end{equation} where $UU^\dagger-VV^\dagger = \mathds{1}$. Note that $U$ here is unitary iff $V = 0$. Hence in general our aim is to find both $U$ and $V$. We will first find $U$, and then use that information to find $V$. As earlier, consider an $N$-mode device where the input and output modes are labelled as $a_i$ and $b_i$ respectively, where $i \in \left\lbrace1,2,\ldots,N\right\rbrace$. For finding $U$, we use a scheme similar to the unitary case but with single photon probes. We first input a single photon in mode $\tilde{a}_0$. The state after the first beamsplitter is \begin{equation} |\Psi\rangle=\left(\frac{\hat{a}_0^\dagger+i\hat{a}_p^\dagger}{\sqrt{2}}\right)|0\rangle \end{equation} This state is transformed to \begin{equation} |\Psi\rangle=\left(\frac{e^{i\phi} \hat{b}_0^\dagger}{\sqrt{2}}+\frac{iU_{pj} \hat{b}_j^\dagger}{\sqrt{2}}\right)|0\rangle \end{equation} where we have noted that $\hat{b}_i|0\rangle=0 \,\forall \, i$.
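The chain of transformations in the unitary case above can be verified numerically by propagating coherent-state amplitudes through the interferometer. The sketch below (random unitary, arbitrary $\alpha$, $p$, $q$; all values hypothetical) reproduces $I_{\tilde b_1}-I_{\tilde b_0}=|\alpha|^2\,\mathrm{Re}[e^{-i\phi}U_{pq}]$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
# random N x N unitary (hypothetical device, for illustration only)
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

def intensity_difference(U, p, q, alpha, phi):
    beta_a0 = alpha / np.sqrt(2)            # first 50:50 beamsplitter
    beta_ap = 1j * alpha / np.sqrt(2)
    beta_b = beta_ap * U[p, :]              # device: amplitudes follow U
    beta_b0 = np.exp(1j * phi) * beta_a0    # phase shifter in the upper arm
    out0 = (beta_b0 + 1j * beta_b[q]) / np.sqrt(2)   # second beamsplitter
    out1 = (1j * beta_b0 + beta_b[q]) / np.sqrt(2)
    return abs(out1) ** 2 - abs(out0) ** 2

alpha, p, q = 0.7 - 0.3j, 1, 2
for phi in (0.0, np.pi / 2):
    expected = abs(alpha) ** 2 * np.real(np.exp(-1j * phi) * U[p, q])
    assert np.isclose(intensity_difference(U, p, q, alpha, phi), expected)
```

Choosing $\phi=0$ then returns $|\alpha|^2\,\mathrm{Re}\,U_{pq}$ and $\phi=\pi/2$ returns $|\alpha|^2\,\mathrm{Im}\,U_{pq}$, as in the text.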
The modes $b_0$ and $b_q$ transform in the beamsplitter so as to yield the final state \begin{equation} |\Psi\rangle=\left[\frac{(e^{i\phi}-U_{pq})}{2}\hat{\tilde{b}}_0^\dagger + \frac{(ie^{i\phi}+iU_{pq})}{2}\hat{\tilde{b}}_1^\dagger+\sum_{j\neq q}\frac{iU_{pj}\hat{b}_j^\dagger}{\sqrt{2}}\right]|0\rangle \end{equation} The difference in the probabilities of measuring the photons at the output of the final beamsplitter is \begin{equation} P_{\tilde{b}_1}-P_{\tilde{b}_0}=\mathrm{Re}[e^{-i\phi}U_{pq}] \end{equation} As earlier, by choosing $\phi$ and $p,q$, we can fully characterize the matrix $U$. We now need to characterize $V$. For this, we send in a coherent state probe as in the unitary case. Proceeding as earlier, we find the reduced state of modes $b_0$ and $b_q$ as \begin{equation} |\tilde{\Psi}\rangle={\cal{D}}_{b_0}\left(\frac{e^{i\phi}\alpha}{\sqrt{2}}\right)\otimes {\cal{D}}_{b_q} \left(\frac{i\alpha U_{pq}}{\sqrt{2}}+\frac{i\alpha^* V_{pq}^*}{\sqrt{2}}\right)|0\rangle \end{equation} For simplicity, define $\beta_{pq}=\alpha U_{pq}+\alpha^*V_{pq}^*$. After the final beamsplitter, this state is \begin{equation} |\tilde{\Psi}\rangle={\cal{D}}_{\tilde{b}_0}\left(\frac{e^{i\phi}\alpha-\beta_{pq}}{2}\right)\otimes {\cal{D}}_{\tilde{b}_1}\left(\frac{i(e^{i\phi}\alpha+\beta_{pq})}{2}\right) |0\rangle \end{equation} In this case, the intensity difference between the outputs is \begin{equation} I_{\tilde{b}_1}-I_{\tilde{b}_0}=\mathrm{Re}[\beta_{pq}\alpha^*e^{-i\phi}] \end{equation} This allows us to find $\beta_{pq}$ $\forall \, p,q$, which can be used to find $V$ completely. Note that the above expression reduces to Eq. (10) if $V=0$. This completes the characterization of Bogoliubov transformations. \section{Lossy devices} Having discussed how to characterize both unitary and Bogoliubov transformations in lossless devices, we now turn our attention to lossy devices. As shown in Ref.
\cite{melbourne}, if the loss is independent of the path taken by the photon in the device, then the loss can be modeled by fictitious beamsplitters. This embeds the transformation matrices of the device in a larger matrix. Akin to the model in Ref. \cite{melbourne}, we attach a fictitious beamsplitter of transmissivity $\eta_i \in [0,1]$ to the $i^{th}$ input mode of the device. This is represented in FIG. 2. \begin{figure} \caption{The loss of the device is modeled by fictitious beamsplitters. The input modes of the beamsplitters are labeled $a_i$ and the input modes of the unknown device are labeled $a_i'$. The auxiliary modes of the $i^{th}$ fictitious beamsplitter are labeled $a_{N+i}$ and $a_{N+i}'$.} \end{figure} The fictitious beamsplitters transform the modes as \begin{equation} \left( \begin{array}{c} \hat{a}_i^\dagger \\ \hat{a}_{N+i}^\dagger \end{array} \right) = \begin{pmatrix} \eta_i & -\sqrt{1-\eta_i^2} \\ \sqrt{1-\eta_i^2} & \eta_i \end{pmatrix} \left( \begin{array}{c} \hat{a}_i^{'\dagger} \\ \hat{a}_{N+i}^{'\dagger} \end{array} \right) \end{equation} where $\eta_i$ is the transmissivity of the $i^{th}$ beamsplitter. For convenience, define the diagonal matrices $\eta$ and $\tilde{\eta}$ as \begin{align} \eta_{ij}&= \eta_i\delta_{ij}\nonumber \\ \tilde{\eta}_{ij}&=\sqrt{1-\eta_i^2}\delta_{ij} \end{align} \subsection{Unitary transformations} We now focus on the case of a lossy unitary device. In this case, we have \begin{equation} \hat{a}_i^{'\dagger}=U_{ij}\hat{b}_j^\dagger \end{equation} Combining Eqs.
(20) and (21), we obtain \begin{equation} \left(\begin{array}{c} \hat{a}_1^\dagger \\ \vdots \\ \hat{a}_N^\dagger \\ \hat{a}_{N+1}^\dagger \\ \vdots \\ \hat{a}_{2N}^\dagger\end{array}\right)=\begin{pmatrix} (\eta U)_{N\times N} & (-\tilde{\eta}\mathds{1})_{N\times N} \\ (\tilde{\eta} U)_{N\times N} & (\eta\mathds{1})_{N\times N} \end{pmatrix} \left(\begin{array}{c} \hat{b}_1^\dagger \\ \vdots \\ \hat{b}_N^\dagger \\ \hat{a}_{N+1}^{'\dagger} \\ \vdots \\ \hat{a}_{2N}^{'\dagger}\end{array}\right) \end{equation} It is this $2N\times2N$ matrix that now characterizes the device. Thus, in addition to $U$, we need to find $\eta$ and $\tilde{\eta}$ also. In order to find the losses, we send a coherent state into mode $a_i$. The state evolves as \begin{align} |\Psi\rangle &= {\cal{D}}_{a_i}(\alpha)|0\rangle = {\cal{D}}_{a'_i}(\eta_i \alpha)\otimes {\cal{D}}_{a'_{N+i}}(-\sqrt{1-\eta_i^2}\alpha)|0\rangle \nonumber \\ &=\otimes_{j=1}^N{\cal{D}}_{b_j}(\eta_iU_{ij}\alpha)\otimes {\cal{D}}_{a'_{N+i}}(-\sqrt{1-\eta_i^2}\alpha)|0\rangle \end{align} Then the sum of the intensities in the accessible output modes is \begin{equation} I=\eta_i^2\sum_j |U_{ij}|^2|\alpha|^2=\eta_i^2|\alpha|^2 \end{equation} From this, all $\eta_i$ and hence $\eta$ and $\tilde{\eta}$ can be determined. In order to find $U$, we proceed exactly as in the lossless unitary case. We will see that Eq. (10) will be modified to read \begin{equation} I_{\tilde{b}_1}-I_{\tilde{b}_0} = \left|\alpha\right|^2 \mathrm{Re}[e^{-i\phi}\eta_p U_{pq}] \end{equation} from which we can now find $U$. Thus the lossy unitary device can be characterized. \subsection{Bogoliubov transformations} We now move on to lossy devices that implement Bogoliubov transformations.
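Before continuing, note that the embedded $2N\times2N$ matrix of the lossy unitary case is itself unitary when $U$ is, since $\eta^2+\tilde{\eta}^2=\mathds{1}$; a quick numerical check (random $U$ and transmissivities, all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
t = rng.uniform(0.1, 1.0, size=N)             # amplitude transmissivities eta_i
eta, eta_t = np.diag(t), np.diag(np.sqrt(1.0 - t ** 2))

# block matrix [[eta U, -eta~ 1], [eta~ U, eta 1]] from the text
M = np.block([[eta @ U, -eta_t],
              [eta_t @ U, eta]])
assert np.allclose(M @ M.conj().T, np.eye(2 * N))
```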
In such devices, the mode operators evolve as \begin{equation} \hat{a}_i^{'\dagger}=U_{ij}\hat{b}_j^\dagger + V_{ij}\hat{b}_j \end{equation} Modeling the loss as earlier, the full transformation becomes \begin{align} \left(\begin{array}{c} \hat{a}_1^\dagger \\ \vdots \\ \hat{a}_N^\dagger \\ \hat{a}_{N+1}^\dagger \\ \vdots \\ \hat{a}_{2N}^\dagger\end{array}\right)&=\begin{pmatrix} (\eta U)_{N\times N} & (-\tilde{\eta}\mathds{1})_{N\times N} \\ (\tilde{\eta} U)_{N\times N} & (\eta\mathds{1})_{N\times N} \end{pmatrix} \left(\begin{array}{c} \hat{b}_1^\dagger \\ \vdots \\ \hat{b}_N^\dagger \\ \hat{a}_{N+1}^{'\dagger} \\ \vdots \\ \hat{a}_{2N}^{'\dagger}\end{array}\right) \notag \\ &\quad + \begin{pmatrix} (\eta V)_{N\times N} & \mathbb{0}_{N\times N} \\ (\tilde{\eta} V)_{N\times N} & \mathbb{0}_{N\times N} \end{pmatrix} \left(\begin{array}{c} \hat{b}_1 \\ \vdots \\ \hat{b}_N \\ \hat{a}'_{N+1} \\ \vdots \\ \hat{a}'_{2N}\end{array}\right) \end{align} Thus we have to find $U$, $V$, $\eta$, and $\tilde{\eta}$ in order to fully characterize this device. In order to find $\eta$ we send in a single photon in mode $a_i$. The probability that the photon will be detected in any of the accessible output modes is $\eta_i^2$. Thus $\eta$ and $\tilde{\eta}$ can be found out. To find $U$, we proceed exactly as in the case of lossless Bogoliubov transformations, so that Eq. (15) will be modified to \begin{equation} P_{\tilde{b}_1}-P_{\tilde{b}_0}=\mathrm{Re}[e^{-i\phi}\eta_pU_{pq}] \end{equation} from which $U$ can be found out. Similarly, Eq. (18) will be modified to \begin{equation} I_{\tilde{b}_1}-I_{\tilde{b}_0}=\mathrm{Re}[e^{-i\phi}\alpha^*\eta_p\beta_{pq}] \end{equation} which enables us to find $V$.
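The inversion from the measured $\beta_{pq}=\alpha U_{pq}+\alpha^*V_{pq}^*$ back to $V_{pq}$, once $U_{pq}$ is known from the single-photon measurements, is elementary; a sketch with hypothetical values (in the lossy case the measured quantity carries the extra known factor $\eta_p$, which divides out):

```python
import numpy as np

rng = np.random.default_rng(3)
U_pq = rng.normal() + 1j * rng.normal()   # hypothetical, assumed already known
V_pq = rng.normal() + 1j * rng.normal()   # hypothetical, to be recovered

def beta(alpha):
    # beta_pq = alpha U_pq + conj(alpha) conj(V_pq), as in the text
    return alpha * U_pq + np.conj(alpha) * np.conj(V_pq)

alpha = 0.7 + 0.2j
V_rec = np.conj((beta(alpha) - alpha * U_pq) / np.conj(alpha))
assert np.isclose(V_rec, V_pq)
```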
This completes the characterization of lossy Bogoliubov transformations. \section{Conclusion} We have shown that a modified Mach-Zehnder interferometer can characterize both unitary and Bogoliubov transformations. As we have used coherent states and single photons in our scheme, the sensitivity of our scheme is limited by shot noise. \begin{acknowledgments} The authors would like to acknowledge support from the Air Force Office of Scientific Research, the Army Research Office, the Defense Advanced Research Projects Agency, the National Science Foundation, and the Northrop Grumman Corporation. The authors thank Mark M. Wilde, Xiaoting Wang, and Chenglong You for helpful discussions and comments on this work. \end{acknowledgments} \end{document}
\begin{document} \mathclass{Primary 53B30; Secondary 53C05.} \newtheorem{th}{Theorem} \newtheorem{lemma}[th]{Lemma} \abbrevauthors{P. Gilkey and R. Ivanova} \abbrevtitle{Complex IP curvature tensors} \title{Complex IP pseudo-Riemannian\\ algebraic curvature tensors} \author{Peter\ B\ Gilkey} \address{Mathematics Department, University of Oregon, Eugene Or 97403 USA\\ E-mail: [email protected]} \author{Raina\ Ivanova} \address{Dept. of Descriptive Geometry, University of Architecture,\\ Civil Engineering \& Geodesy, 1, Christo Smirnenski Blvd., 1421 Sofia, Bulgaria\\email: [email protected]} \maketitlebcp \def\operatorname#1{{\rm#1\,}} \def\text#1{{\hbox{#1}}} \section{Introduction}\label{Sect1} The Riemann curvature tensor contains a great deal of information about the geometry of the underlying pseudo-Riemannian manifold; pseudo-Riemannian geometry is to a large extent the study of this tensor and its covariant derivatives. It is often convenient to work in a purely algebraic setting. We shall say that a tensor is an algebraic curvature tensor if it satisfies the symmetries of the Riemann curvature tensor. The Riemann curvature tensor defines an algebraic curvature tensor at each point of the manifold; conversely every algebraic curvature tensor is locally geometrically realizable. Thus algebraic curvature tensors are an integral part of certain questions in differential geometry. The skew-symmetric curvature operator is a natural object of study; there are other natural operators and we refer to \cite{PI5} for a survey of this area. In the present paper, we examine when the (complex) Jordan normal form of the skew-symmetric curvature operator is constant either in the real or complex settings. In Section \ref{Sect2}, we give the basic definitions and notational conventions we shall need. We also review the basic results in the real setting. In Section \ref{Sect3}, we discuss a natural generalization to the complex setting. 
\section{IP algebraic curvature tensors}\label{Sect2} Let $(M,g)$ be a connected pseudo-Riemannian manifold of signature $(p,q)$ and dimension $m=p+q$. Let $\nabla$ be the Levi-Civita connection. The curvature operator and associated curvature tensor are defined by: \begin{eqnarray}\label{2.1} &&{}^gR(X,Y):=\nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{[X,Y]},\\ &&{}^gR(X,Y,Z,W):=g({}^gR(X,Y)Z,W).\nonumber\end{eqnarray} This tensor has the following symmetries: \begin{eqnarray} \label{eqna} &&{}^gR(X,Y,Z,W)=-{}^gR(Y,X,Z,W),\\ &&{}^gR(X,Y,Z,W)={}^gR(Z,W,X,Y),\text{ and}\label{eqnb} \\ &&{}^gR(X,Y,Z,W)+{}^gR(Y,Z,X,W)+{}^gR(Z,X,Y,W)=0.\label{eqnc} \end{eqnarray} Let $V$ be a finite dimensional real vector space which is equipped with a non-degenerate inner product $(\cdot,\cdot)$ of signature $(p,q)$. We say that a $4$ tensor $R\in\otimes^4V$ is an {\it algebraic curvature tensor} if $R$ satisfies the symmetries given in equations (\ref{eqna}-\ref{eqnc}). Note that we do not impose the second Bianchi identity; algebraic curvature tensors measure second order phenomena. We say that $(M,g)$ is a {\it geometric realization} of an algebraic curvature tensor at a point $P$ if there is an isometry $\Psi:T_PM\rightarrow V$ so that ${}^gR_P=\Psi^*R$. Let $\{e_1,e_2\}$ be an oriented basis for a $2$ plane $\pi\subset V$ and let $h_{ij}:=(e_i,e_j)$ describe the restriction of the inner product $(\cdot,\cdot)$ to $\pi$. We shall say that $\pi$ is {\it non-degenerate} if $h_{ij}$ is non-degenerate, i.e. $\det(h):=h_{11}h_{22}-h_{12}^2\ne0$. We say that $\pi$ is {\it timelike}, {\it mixed}, or {\it spacelike} according as the quadratic form $h_{ij}$ is negative definite, indefinite, or positive definite, respectively. We may decompose the Grassmannian of all oriented non-degenerate $2$ planes as the disjoint union of the oriented timelike, mixed, and spacelike $2$ planes. Let $R$ be an algebraic curvature tensor on $V$.
Let $\{e_1,e_2\}$ be an oriented basis for a non-degenerate spacelike $2$ plane $\pi\subset V$. The {\it skew-symmetric curvature operator} $$R(\pi):=\det(h)^{-1/2}R(e_i,e_j)$$ is independent of the particular oriented basis for $\pi$ which was chosen. We say that $R$ is {\it spacelike Jordan IP} if the complex Jordan normal form of the operator $R(\pi)$ is constant on the Grassmannian $\operatorname{Gr}_{0,2}^+(V)$ of oriented spacelike $2$ planes. The notions of {\it timelike Jordan IP} and {\it mixed Jordan IP} are defined similarly using the Grassmannians $\operatorname{Gr}_{2,0}^+(V)$ and $\operatorname{Gr}_{1,1}^+(V)$ respectively. In the Riemannian setting $(p=0)$, this is equivalent to assuming that the eigenvalues of $R(\pi)$ are constant on $\operatorname{Gr}_{(0,2)}^+(V)$. In the pseudo-Riemannian setting ($p>0$) a bit more care must be taken as there are examples where $R(\pi)$ has only the zero eigenvalue but where the rank of $R(\pi)$ varies with $\pi$; we refer to \cite{GZ6} for details. We note that there are examples of algebraic curvature tensors which are spacelike Jordan IP but not timelike Jordan IP; again see \cite{GZ6}. We say that a pseudo-Riemannian manifold $(M,g)$ is {\it spacelike Jordan IP} if the associated curvature tensor ${}^gR$ is spacelike Jordan IP at every point of $M$; the eigenvalues and Jordan normal form are allowed to vary with the point. Let $p=0$. The study of the skew-symmetric curvature tensor was initiated in this context by Stanilov and Ivanova \cite{IS8}; we also refer to related work by Ivanova \cite{I9}-\cite{I13}. Subsequently, Ivanov and Petrova \cite{IP7} classified the spacelike Jordan IP metrics in the Riemannian setting for $m=4$; for this reason the notation `IP' has been used by later authors. The classification in \cite{IP7} was later extended by P. Gilkey, J. Leahy, and H. Sadofsky \cite{GLS3} and by Gilkey \cite{G1} to the cases $m\ge5$ and $m\ne7$. 
We refer to Gilkey and Semmelmann \cite{GS4} for some partial results if $m=7$. Zhang \cite{Z12} has extended these results to the Lorentzian setting ($p=1$); see also Gilkey and Zhang \cite{GZ7} for related work. We say that $R$ has {\it spacelike rank $r$} if $\operatorname{Rank}(R(\pi))=r$ for every spacelike $2$ plane $\pi$. The following theorem, which shows that $r=2$ in many cases, was proved using topological methods \cite{GLS3,Z12}. \begin{th}\label{Th1} Let $R$ be an algebraic curvature tensor of spacelike rank $r$ on a vector space of signature $(p,q)$. \smallbreak{\rm 1)} Let $p\le1$. Let $q=5$, $q=6$, or $q\ge9$. Then $r=2$. \smallbreak{\rm 2)} Let $p=2$. Let $q\ge10$. Assume that neither $q$ nor $q+2$ is a power of $2$. Then $r=2$. \end{th} In light of Theorem \ref{Th1}, we shall focus our attention on the algebraic curvature tensors of rank $2$. Let $V$ be a vector space of signature $(p,q)$. We say that a linear map $\phi$ of $V$ is {\it admissible} if $\phi$ satisfies the following two conditions: \begin{enumerate} \item $\phi$ is self-adjoint and $\phi^2=\operatorname{Id}$, or $\phi^2=-\operatorname{Id}$, or $\phi^2=0$. \item If $\phi(x)=0$, then $(x,x)\le0$, i.e. $\ker\phi$ contains no spacelike vectors. \end{enumerate} Let $\phi$ be admissible. If $\phi^2=\operatorname{Id}$, then $\phi$ is an isometry, i.e. $(\phi x,\phi y)=(x,y)$ for all $x,y\in V$. If $\phi^2=-\operatorname{Id}$, then $\phi$ is a {\it para-isometry}, i.e. $(\phi x,\phi y)=-(x,y)$ for all $x,y\in V$; necessarily $p=q$ in this setting. If $\phi^2=0$, then the range of $\phi$ is {\it totally isotropic}, i.e. $(\phi x,\phi y)=0$ for all $x,y\in V$. We define: \begin{eqnarray*} &&R_\phi(x,y)z:=(\phi y,z)\phi x -(\phi x,z)\phi y,\text{ and}\label{RP}\\ &&R_\phi(x,y,z,w):=(\phi y,z)(\phi x,w) -(\phi x,z)(\phi y,w).\nonumber\end{eqnarray*} We showed in \cite{GZ6} that $R_\phi$ is an algebraic curvature tensor with $\operatorname{Range}(R_\phi(\pi))=\phi\pi$ if $\pi$ is a spacelike $2$ plane.
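These algebraic properties of $R_\phi$ are easy to check numerically. The following sketch is our own illustration (not drawn from the references): it takes the Riemannian inner product on $\mbox{${\rm I\!R }$}^6$ and an arbitrarily chosen self-adjoint $\phi$ with $\phi^2=\operatorname{Id}$, and verifies the curvature symmetries together with the rank and range of $R_\phi(\pi)$ on a spacelike $2$ plane.

```python
import numpy as np

# Riemannian inner product (p = 0): the standard dot product on R^q.
q = 6
phi = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # self-adjoint, phi^2 = Id, trivial kernel

# R_phi(x, y, z, w) := (phi y, z)(phi x, w) - (phi x, z)(phi y, w)
def R4(x, y, z, w):
    return (phi @ y) @ z * ((phi @ x) @ w) - (phi @ x) @ z * ((phi @ y) @ w)

rng = np.random.default_rng(0)
x, y, z, w = rng.normal(size=(4, q))

# the three symmetries of an algebraic curvature tensor:
assert np.isclose(R4(x, y, z, w), -R4(y, x, z, w))
assert np.isclose(R4(x, y, z, w), R4(z, w, x, y))
assert np.isclose(R4(x, y, z, w) + R4(y, z, x, w) + R4(z, x, y, w), 0.0)

# skew-symmetric curvature operator on the spacelike 2-plane pi = span{e_1, e_2}:
# R_phi(e_1, e_2) z = (phi e_2, z) phi e_1 - (phi e_1, z) phi e_2
e1, e2 = np.eye(q)[0], np.eye(q)[1]
Rop = np.outer(phi @ e1, phi @ e2) - np.outer(phi @ e2, phi @ e1)

# rank 2, with range equal to phi(pi) = span{phi e_1, phi e_2}
assert np.linalg.matrix_rank(Rop) == 2
basis = np.column_stack([phi @ e1, phi @ e2])
v = rng.normal(size=q)
_, res, _, _ = np.linalg.lstsq(basis, Rop @ v, rcond=None)
assert np.allclose(res, 0.0)
```

The same check can be repeated with other admissible $\phi$; in higher signature the dot products must of course be replaced by the indefinite inner product.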
The eigenvalue structure of $R_\phi(\pi)$ is given by: \begin{enumerate} \item Suppose that $\phi^2=\pm\operatorname{Id}$. Then $R_\phi(\pi)$ is a rotation through an angle of 90 degrees on the spacelike ($\phi^2=\operatorname{Id}$) or timelike ($\phi^2=-\operatorname{Id}$) $2$ plane $\phi\pi$, $R_\phi(\pi)$ vanishes on $\phi\pi^\perp$, and $R_\phi(\pi)$ has two non-trivial complex eigenvalues $\pm\sqrt{-1}$. \item Suppose that $\phi^2=0$. Since $\ker\phi$ contains no spacelike vectors, $R_\phi(\pi)$ has rank $2$. The $2$ plane $\phi\pi$ is totally isotropic. We have $\{R_\phi(\pi)\}^2=0$. \end{enumerate} Thus $CR_\phi$ is a spacelike rank $2$ Jordan IP algebraic curvature tensor for any $C\ne0$. Conversely, we have the following classification result \cite{GZ6}: \begin{th}\label{Thm2} Let $V$ be a vector space of signature $(p,q)$, where $q\ge5$. A tensor $R$ is a spacelike rank 2 Jordan IP algebraic curvature tensor on $V$ if and only if there exists a non-zero constant $C$ and an admissible $\phi$ so that $R=CR_\phi$. \end{th} \section{ Almost complex Jordan IP algebraic curvature tensors}\label{Sect3} Algebraic curvature tensors have been studied by many authors in the complex setting; we refer to Falcitelli, Farinola, and Salamon \cite{FFS} and Gray \cite{Gr} for further details concerning almost Hermitian geometry. Let $J:V\rightarrow V$ be a real linear map with $J^2=-\operatorname{Id}$. We use $J$ to provide $V$ with a complex structure: $(a+\sqrt{-1}b)v=av+bJv$. Thus a real linear map $S$ of $V$ is {\it complex} if and only if $SJ=JS$. We shall assume that $J$ is {\it pseudo-Hermitian}, i.e. $(Jv_1,Jv_2)=(v_1,v_2)$; necessarily both $p$ and $q$ are even. A $2$ plane $\pi$ is called a {\it complex line} if and only if $J\pi\subset\pi$. If $\pi$ is non-degenerate, then $\pi$ is either spacelike or timelike; there are no mixed complex lines. An algebraic curvature tensor $R$ is said to be {\it almost complex} if $JR(x,Jx)=R(x,Jx)J$ for all $x$ in $V$, i.e.
$R(x,Jx)$ is complex linear. Such an $R$ is said to be {\it almost complex spacelike Jordan IP} if $R(\pi)$ (regarded as a complex linear map) has constant Jordan normal form for every spacelike complex line; the notion of {\it almost complex timelike Jordan IP} is defined similarly. Theorem \ref{Thm2} controls the eigenvalue structure of a spacelike Jordan IP algebraic curvature tensor in the real setting. There is a similar result in the complex setting which we describe as follows. Let $p=0$ and let $R$ be an almost complex spacelike Jordan IP algebraic curvature tensor. The operator $JR(\pi)$ is a self-adjoint complex linear map and is therefore diagonalizable. Let $\{\lambda_i,\mu_i\}$ be the eigenvalues and multiplicities of $JR(\cdot)$, where $\mu_0\ge\mu_1\ge...\ge\mu_\ell$; $\lambda_i\in\mbox{${\rm I\!R }$}$ and $q=2(\mu_0+...+\mu_\ell)$. We refer to \cite{G2} for the proof of the following result which controls the eigenvalue structure in the Riemannian setting; it is not known if a similar result holds in the higher signature setting. \begin{th}\label{thm3} Let $R$ be an almost complex spacelike Jordan IP algebraic curvature tensor on a Riemannian vector space of signature $(0,q)$. Let $\{\lambda_i,\mu_i\}$ be the eigenvalues and multiplicities of $JR(\cdot)$, where $\mu_0\ge...\ge\mu_\ell>0$. Suppose $\ell\ge1$. If $m\equiv2$ mod $4$, then $\ell=1$ and $\mu_1=1$. If $m\equiv0$ mod $4$, then either $\ell=1$ and $\mu_1\le2$ or $\ell=2$ and $\mu_1=\mu_2=1$. \end{th} We say that $(\phi,J)$ is an {\it admissible pair} if $\phi$ is admissible, if $\phi J=\pm J\phi$, and if $J$ is a pseudo-Hermitian almost complex structure on $V$. \begin{th}\label{Thm4} Let $V$ be a vector space of signature $(p,q)$. If $(\phi,J)$ is an admissible pair, then $R_\phi$ is an almost complex spacelike Jordan IP algebraic curvature tensor. \end{th} \Proof By Theorem \ref{Thm2}, $R_\phi$ is a spacelike rank $2$ Jordan IP algebraic curvature tensor. 
Since $J\phi=\varepsilon\phi J$ for $\varepsilon=\pm1$, we may compute: \begin{eqnarray*} &&JR_\phi(x,Jx)z=(\phi Jx,z)J\phi x-(\phi x,z)J\phi J x,\\ &&R_\phi(x,Jx)Jz=(\phi Jx,Jz)\phi x-(\phi x,Jz)\phi Jx\\ &&\quad=-(J\phi Jx,z)\phi x+(J\phi x,z)\phi Jx\\ &&\quad=\varepsilon^2(\phi Jx,z)J\phi x-\varepsilon^2(\phi x,z)J\phi Jx =JR_\phi(x,Jx)z. \end{eqnarray*} This shows that $R_\phi$ is almost complex. \endproof Let $V$ be a vector space of signature $(p,q)$. We say that $(\phi_1,\phi_2,J)$ is an {\it admissible triple} if the following conditions are satisfied: \begin{enumerate} \item $\phi_1$ and $\phi_2$ are admissible and either $\phi_1^2\ne0$ or $\phi_2^2\ne0$; \item $J$ is a pseudo-Hermitian almost complex structure on $V$; \item $J\phi_1=\phi_1J$, $J\phi_2=-\phi_2J$, and $\phi_2\phi_1+\phi_1\phi_2=0$. \end{enumerate} \begin{lemma}\label{Lem3} Let $V$ be a vector space of signature $(p,q)$. Let $\{\phi_1,\phi_2,J\}$ be an admissible triple on $V$.\begin{enumerate} \item If $x$ is spacelike, then the set $\{\phi_1x,\phi_1Jx,\phi_2x,\phi_2Jx\}$ is orthogonal and linearly independent. \item For any $x\in V$, we have that: $$R_{\phi_1}(x,Jx)R_{\phi_2}(x,Jx)=R_{\phi_2}(x,Jx)R_{\phi_1}(x,Jx)=0.$$ \end{enumerate}\end{lemma} \Proof To show that $\{\phi_1x,\phi_1Jx,\phi_2x,\phi_2Jx\}$ is an orthogonal set, we write $J\phi_i=\varepsilon_i\phi_iJ$, where $\varepsilon_1=1$ and $\varepsilon_2=-1$, and compute: \begin{eqnarray*} &&(\phi_1x,\phi_1Jx)=\varepsilon_1(\phi_1x,J\phi_1x) =-\varepsilon_1(J\phi_1x,\phi_1x)=-(\phi_1Jx,\phi_1x),\\ &&(\phi_1x,\phi_2x)=(\phi_2\phi_1x,x)=-(\phi_1\phi_2x,x)=-(\phi_2x,\phi_1x),\\ &&(\phi_1x,\phi_2Jx)=-(J\phi_2\phi_1x,x) =\varepsilon_1\varepsilon_2(\phi_1\phi_2Jx,x)=-(\phi_2Jx,\phi_1x),\\ &&(\phi_1Jx,\phi_2x)=-(x,J\phi_1\phi_2x) =\varepsilon_1\varepsilon_2(x,\phi_2\phi_1Jx)=-(\phi_2x,\phi_1Jx),\\ &&(\phi_1Jx,\phi_2Jx)=(\phi_2\phi_1Jx,Jx)=-(\phi_1\phi_2Jx,Jx) =-(\phi_2Jx,\phi_1Jx),\\ &&(\phi_2x,\phi_2Jx)=\varepsilon_2(\phi_2x,J\phi_2x) =-\varepsilon_2(J\phi_2x,\phi_2x)=-(\phi_2Jx,\phi_2x).
\end{eqnarray*} Each of these inner products equals its own negative and therefore vanishes. Let $\pi:=\operatorname{Span}\{x,Jx\}$. Then $\phi_1\pi$ and $\phi_2\pi$ are orthogonal $2$ planes. If $\phi_1^2=\pm\operatorname{Id}$, then $\phi_1\pi$ is non-degenerate so $\phi_2\pi\subset\phi_1\pi^\perp$ implies $\phi_1\pi\cap\phi_2\pi=\{0\}$ and assertion (1) follows; the argument is the same if $\phi_2^2=\pm\operatorname{Id}$. To prove assertion (2), we compute: \begin{eqnarray*} &&R_{\phi_1}(\pi)R_{\phi_2}(\pi)z =R_{\phi_1}(\pi)\{(\phi_2Jx,z)\phi_2x-(\phi_2x,z)\phi_2Jx\}\\ &&\quad=(\phi_2Jx,z)\{(\phi_1Jx,\phi_2x)\phi_1x-(\phi_1x,\phi_2x)\phi_1Jx\}\\ &&\quad-(\phi_2x,z)\{(\phi_1Jx,\phi_2Jx)\phi_1x-(\phi_1x,\phi_2Jx)\phi_1Jx\} =0. \end{eqnarray*} We argue similarly to show that $R_{\phi_2}(\pi)R_{\phi_1}(\pi)=0$. \endproof The following is the main result of this paper. It shows that the estimates of Theorem \ref{thm3} are sharp and provides a large family of non-trivial new examples. \begin{th}\label{Thm5} Let $V$ be a vector space of signature $(p,q)$. Let $(\phi_1,\phi_2,J)$ be an admissible triple on $V$. Let $\lambda_i$ be real constants. Then $R:=\lambda_1R_{\phi_1}+\lambda_2R_{\phi_2}$ is an almost complex spacelike Jordan IP algebraic curvature tensor. \end{th} \Proof We assume $\lambda_1\ne0$ and $\lambda_2\ne0$; otherwise the proof follows directly from Theorem \ref{Thm4}. As the set of almost complex algebraic curvature tensors is a linear subspace of the set of all $4$ tensors, Theorem \ref{Thm4} shows that $R$ is an almost complex algebraic curvature tensor. We complete the proof by discussing the complex Jordan form. Let $\{x,Jx\}$ be an orthonormal basis for a spacelike complex line $\pi$. We use Lemma \ref{Lem3} to see that $\phi_1\pi$ and $\phi_2\pi$ are orthogonal complex lines and that $\operatorname{Rank}(R(\pi))=4$. Also by Lemma \ref{Lem3} we have: $$R(\pi)^2=\lambda_1^2R_{\phi_1}(\pi)^2+\lambda_2^2R_{\phi_2}(\pi)^2.$$ Suppose that $\phi_1^2=\pm\operatorname{Id}$ and that $\phi_2^2=\pm\operatorname{Id}$.
The metric restricted to $\phi_1\pi\oplus\phi_2\pi$ is non-degenerate. Let $$V_0:=(\phi_1\pi\oplus\phi_2\pi)^\perp.$$We have an orthogonal direct sum decomposition $$V=\phi_1\pi\oplus\phi_2\pi\oplus V_0$$ which is preserved by $R(\pi)$. The map $R(\pi)$ has 4 non-trivial purely imaginary eigenvalues $\pm\lambda_1\sqrt{-1}$ and $\pm\lambda_2\sqrt{-1}$; $V_0=\ker(R(\pi))$. The map $R(\pi)^2$ is diagonalizable. Thus $R$ is almost complex spacelike Jordan IP because: \begin{eqnarray*} &&R(\pi)^2=0\text{ on }V_0,\\ &&R(\pi)^2=-\lambda_1^2\text{ on }\phi_1\pi\text{, and}\\ &&R(\pi)^2=-\lambda_2^2\text{ on } \phi_2\pi.\end{eqnarray*} Suppose that $\phi_1^2=\pm\operatorname{Id}$ and $\phi_2^2=0$; the argument is similar if $\phi_1^2=0$ and $\phi_2^2=\pm\operatorname{Id}$. By assumption $\ker\phi_2$ contains no spacelike vectors. Let $V_0:=(\phi_1\pi)^\perp$. Then we have an orthogonal direct sum decomposition $V=\phi_1\pi\oplus V_0$ which is preserved by $R(\pi)$. The map $R(\pi)$ has two non-zero eigenvalues $\pm\lambda_1\sqrt{-1}$. The map $R(\pi)^2$ is diagonalizable; $$R(\pi)^2=0\text{ on }V_0\text{ and }R(\pi)^2=-\lambda_1^2\text{ on }\phi_1\pi.$$ Thus $R$ is almost complex spacelike Jordan IP.\endproof We construct examples to show that all 8 cases of Theorem \ref{Thm5} can occur.
Let \smallbreak\noindent\begin{eqnarray*} &&e_1:=\left(\begin{array}{rrrr}0&1&0&0\\ 1&0&0&0\\0&0&0&1\\0&0&1&0 \end{array}\right)\text{ and } e_2:=\left(\begin{array}{rrrr}0&0&1&0\\ 0&0&0&-1\\1&0&0&0\\0&-1&0&0\end{array}\right)\text{ on }\mbox{${\rm I\!R }$}^{(0,4)},\\ \\ &&J_0:=\left(\begin{array}{rr}0&1\\-1&0\end{array}\right) \qquad\phantom{..}\text{ and } \alpha:=\left(\begin{array}{rr}0&1\\1&0\end{array}\right) \qquad\qquad\phantom{....}\text{ on } \mbox{${\rm I\!R }$}^{(0,2)},\\ \\ &&\beta:=\left(\begin{array}{rr}0&1\\-1&0\end{array}\right) \qquad\phantom{..} \text{ and }\gamma:=\left(\begin{array}{rr}1&-1\\1&-1\end{array}\right) \qquad\qquad\phantom{.} \text{ on }\mbox{${\rm I\!R }$}^{(1,1)}.\end{eqnarray*} be matrices satisfying the relations\medbreak \centerline{\begin{tabular}{rrrrr} $e_1^*=e_1,$&$e_2^*=e_2,$&$e_1^2=\operatorname{Id},$&$e_2^2=\operatorname{Id},$&$e_1e_2+e_2e_1=0$,\\ \\ $J_0^*=-J_0,$&$J_0^2=-\operatorname{Id},$&$\alpha^*=\alpha,$&$\alpha^2=\operatorname{Id},$&$J_0\alpha=-\alpha J_0,$\\ \\ $\beta^*=\beta,$&$\beta^2=-\operatorname{Id},$&$\gamma^*=\gamma,$&$\gamma^2=0$,&$\operatorname{Range}\gamma=\ker\gamma.$ \end{tabular}}\medbreak\noindent For $\delta_i\in\{+1,-1,0\}$, define the matrix $\tau_i$ by: $$\tau_i=\left\{\begin{array}{ll} \operatorname{Id}\otimes\operatorname{Id}&\quad\text{ if }\delta_i=+1,\\ \beta\otimes\operatorname{Id}&\quad\text{ if }\delta_i=-1,\\ \operatorname{Id}\otimes\gamma&\quad\text{ if }\delta_i=0.\end{array}\right.$$ We construct an admissible triple $(\phi_1,\phi_2,J)$ with $$\phi_1^2=\delta_1\operatorname{Id}\text{ and } \phi_2^2=\delta_2\operatorname{Id}$$ by setting: \begin{eqnarray*} &&\phi_1:=e_1\otimes\operatorname{Id}\otimes\tau_1,\\ &&\phi_2:=e_2\otimes\alpha\otimes\tau_2,\text{ and }\\ &&J:=\operatorname{Id}\otimes J_0\otimes\operatorname{Id}.\end{eqnarray*} The tensor $$R=\lambda_1R_{\phi_1}+\lambda_2R_{\phi_2}$$ is then both almost complex spacelike Jordan IP and almost complex timelike Jordan IP. \end{document}
\begin{document} \title{Bell inequality tests using asymmetric entangled coherent states in asymmetric lossy environments } \author{Chae-Yeun Park} \author{Hyunseok Jeong}\email{[email protected]} \affiliation{Center for Macroscopic Quantum Control, Department of Physics and Astronomy,\\ Seoul National University, Seoul, 151-742, Korea } \date{\today} \begin{abstract} We study an asymmetric form of two-mode entangled coherent state (ECS), where the two local amplitudes have different values, for testing the Bell-Clauser-Horne-Shimony-Holt (Bell-CHSH) inequality. We find that the asymmetric ECSs have obvious advantages over the symmetric form of ECSs in testing the Bell-CHSH inequality. We further study an asymmetric strategy in distributing an ECS over a lossy environment and find that such a scheme can significantly increase violation of the inequality. \end{abstract} \pacs{03.65.Ud, 03.67.Mn, 42.50.Dv} \maketitle \section{Introduction} Entangled coherent states (ECSs) in free-traveling fields \cite{yurke86,mecozzi87,sanders92} have been found to be useful for various applications such as Bell inequality tests \cite{mann95,filip01,wilson02,jeong03,wenger03,JR2006,stobinska07,jeong08,lee09,gerry09,jeong09,lim12,birby13,birby14}, tests for non-local realism \cite{lg1,lg2}, quantum teleportation \cite{jeong02,wang01,enk01,jeong01,an03}, quantum computation \cite{conchrane99,jeong02_comp,ralph03,lund08,marek10,myers11,kim10}, precision measurements \cite{gerry01,gerry02,campos03,munro02,joo11,hirota11,joo12,zhang13}, quantum repeaters \cite{repeater} and quantum key distribution \cite{sergienko14}. The ECSs can be realized in various systems that can be described as harmonic oscillators, and numerous schemes for their implementation have been suggested \cite{yurke86,yurke87,sanders92,mecozzi87,tombesi87,sanders99,sanders00,gerry97}.
An ECS in a free-traveling field was experimentally generated using the photon subtraction technique on two approximate superpositions of coherent states (SCSs) \cite{ourjoumtsev09}. A proof-of-principle demonstration of quantum teleportation using an ECS as a quantum channel was performed \cite{nielsen13}. So far, most of the studies on ECSs have considered symmetric types of two-mode states where the local amplitudes have the same value. Since ECSs are generally sensitive to decoherence due to photon losses in testing Bell-type inequalities \cite{wilson02}, it would be worth investigating the possibility of using an asymmetric type of ECS to reduce decoherence effects. In fact, asymmetric ECSs can be used to efficiently teleport an SCS \cite{ralph03,nielsen13} and to remotely generate symmetric ECSs \cite{lund13} in a lossy environment. In addition to the asymmetry of the ECSs, as a closely related issue, it would be beneficial to study strategies for distributing the ECSs over asymmetrically lossy environments. In this paper, we investigate asymmetric ECSs as well as asymmetric entanglement distribution strategies, and find their evident advantages over symmetric ones for testing the Bell-Clauser-Horne-Shimony-Holt (Bell-CHSH) inequality \cite{clauser69} under various conditions. The remainder of the paper is organized as follows. In Sec.~\ref{sec:bell_test_prefect}, we discuss the Bell-CHSH inequality tests using asymmetric ECSs with ideal detectors. We consider both photon on-off detection and photon number parity detection for the Bell-CHSH inequality tests. Section~\ref{sec:bell_test_deco} is devoted to the study of decoherence effects on Bell inequality violations with an asymmetric strategy to share ECSs. We then investigate effects of inefficient detectors in Sec.~\ref{sec:ineff}, and conclude with final remarks in Sec.~\ref{sec:remarks}.
\section{\label{sec:bell_test_prefect}Bell inequality tests with asymmetric entangled coherent states} In this paper, we are interested in a particular form of two-mode ECSs \begin{align} \ket{{\rm ECS}^\pm} = \mathcal{N}_\pm \left( \ket{\alpha_1}\otimes\ket{\alpha_2} \pm \ket{-\alpha_1}\otimes\ket{-\alpha_2} \right) \label{ECS} \end{align} where $\ket{\pm\alpha_i}$ are coherent states of amplitudes $\pm\alpha_i$ for field mode $i$, and $\mathcal{N}_\pm = [2 \pm 2 \exp ( -2\alpha_1^2-2\alpha_2^2 )]^{-1/2}$ are the normalization factors. We note that amplitudes $\alpha_1$ and $\alpha_2$ are assumed to be real without loss of generality throughout the paper. The ECSs exhibit properties of macroscopic entanglement when the amplitudes are sufficiently large, and these properties have been extensively explored \cite{JR2006,jeong14,lim12,jeong-review}. It is worth noting that there are studies on other types of ECSs such as multi-dimensional ECSs \cite{EnkInfinite}, cluster-type ECSs \cite{Semiao1,AnCluster,Semiao2}, multi-mode ECSs \cite{W-ECS} and generalizations of ECSs with thermal-state components \cite{JR2006,Pater2006}, while we focus on two-mode ECSs in free-traveling fields in this paper. We call $\ket{{\rm ECS}^+}$ ($\ket{{\rm ECS}^-}$) the even (odd) ECS because it contains only even (odd) numbers of photons. When $\alpha_1=\alpha_2$, we call the states in Eq.~(\ref{ECS}) symmetric ECSs, and otherwise they shall be called asymmetric ECSs. It is straightforward to show that an ECS can be generated by passing an SCS in the form of $|\alpha\rangle\pm|-\alpha\rangle$ \cite{Ourj2007}, where $\alpha^2=\alpha_1^2+\alpha_2^2$, through a beam splitter. A beam splitter with an appropriate ratio should be used to generate an asymmetric ECS with desired values of $\alpha_1$ and $\alpha_2$. In this paper, asymmetric ECSs with various values of $\alpha_1$ and $\alpha_2$ are compared for given values of $\alpha$.
The average photon numbers of the ECSs are solely dependent on the values of $\alpha$ and are simply obtained as \begin{align} \bar{n}_\pm=\braket{{\rm ECS}^\pm|\hat{n}|{\rm ECS}^\pm} = \frac{\alpha^2 (1\mp \,\mathrm{e}^{ -2 \alpha^2})} {1\pm\, \mathrm{e}^{-2\alpha^2}} \label{number} \end{align} where $\hat{n}=\hat{n}_1+\hat{n}_2$ and $\hat{n}_i$ is the number operator for field mode $i$. \subsection{Bell-CHSH tests with photon on-off measurements} We first investigate a Bell-CHSH inequality test using photon on-off measurements with the displacement operations. A Bell-CHSH inequality test requires parametrized measurement settings of which the outcomes are dichotomized to be either $+1$ or $-1$ \cite{clauser69}. In the simplest example of a two-qubit system, a parametrized rotation about the $x$ axis followed by a dichotomized measurement in the $z$ direction is used. In our study, the displacement operator that is known to well approximate the qubit rotation for a coherent-state qubit \cite{jeong02_comp,jeong03} is used for parametrization. The displacement operator can be implemented using a strong coherent field and a beam splitter with a high transmissivity. Together with the displacement operator, the photon on-off measurement is experimentally available using current technology with an avalanche photodetector \cite{takeuchi99} in spite of the issue of the detection efficiency. We shall analyze the effects of the limited detection efficiency in Sec. IV. The photon on-off measurement operator for mode $i$ is defined as \begin{align} \hat{O}_i(\xi) = \hat{D}_i(\xi)\left( \sum \limits_{n=1}^{\infty} \ket{n}\bra{n}-\ket{0}\bra{0}\right)\hat{D}_i^\dagger(\xi) \label{eq:O} \end{align} where $\hat{D}_i(\xi)=\exp \left[ \xi\hat{a}_i^\dagger - \xi^*\hat{a}_i\right]$ is the displacement operator and $|n\rangle$ denotes the Fock state. 
The correlation function is defined as the expectation value of the joint measurement \begin{align} E_O(\xi_1,\xi_2) = \langle \hat{O}_1(\xi_1)\otimes\hat{O}_2(\xi_2) \rangle \label{E_onoff} \end{align} and the Bell-CHSH function is \begin{align} \mathcal{B}_O = E_O(\xi_1,\xi_2) + E_O(\xi_1',\xi_2)+E_O(\xi_1,\xi_2')-E_O(\xi_1',\xi_2'). \label{bell_onoff} \end{align} In any local realistic theory, the absolute value of the Bell-CHSH function is bounded by 2 \cite{clauser69}. We calculate an explicit form of the correlation function $E_O(\xi_1,\xi_2)$ using Eqs.~(\ref{eq:O}) and (\ref{E_onoff}), of which the details are presented in Appendix \ref{sec:corr_func}. We then find the Bell-CHSH function $\mathcal{B}_O$ using Eq.~(\ref{bell_onoff}) and its maximum absolute value $\abs{\mathcal{B}_O}_{\rm max}$ over the displacement variables $\xi_1$, $\xi_1'$, $\xi_2$, and $\xi_2'$ and the amplitudes $\alpha_1$ and $\alpha_2$ for a given $\bar{n}_\pm$. This requires a numerical multivariable maximization method, and we use the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm \cite{fletcher87} throughout this paper. \begin{figure} \caption{\label{onoff_n}} \end{figure} The optimized Bell-CHSH functions against the average photon numbers $\bar{n}_\pm$ for $|{\rm ECS}^\pm\rangle$ are presented in Fig.~\ref{onoff_n}, while Fig.~\ref{onoff_minus_ab} shows how asymmetric the ECSs become in order to maximize the Bell violations. The solid curves show the results for symmetric ECSs while the dotted curves correspond to general cases (asymmetric ones). The Bell-CHSH functions are numerically optimized over all displacement variables and amplitudes under the condition of Eq.~\eqref{number}. The Bell violations occur both for the even and odd ECSs regardless of the values of $\bar{n}_\pm$, which is consistent with the results in Ref.~\cite{jeong03}. The odd ECS, $|{\rm ECS}^-\rangle$, violates the inequality more than the even one, $|{\rm ECS}^+\rangle$, for a given average photon number.
The Bell-CHSH function for state $|{\rm ECS}^-\rangle$ reaches up to $\abs{\mathcal{B}_O}_{\rm max}\approx2.743$ while the maximum Bell-CHSH function for $|{\rm ECS}^+\rangle$ is $\abs{\mathcal{B}_O}_{\rm max}\approx2.131$. This is due to the fact that the odd ECS is maximally entangled regardless of the value of $\bar{n}_{-}$ \cite{non-ortho,jeong01} and the on-off measurement can effectively reveal nonlocality of the ECSs when the amplitudes are small \cite{jeong03}. \begin{figure} \caption{\label{onoff_minus_ab}} \end{figure} It is obvious from Fig.~\ref{onoff_n} that one can increase the amount of violations by using the asymmetric ECSs for certain regions of $\bar{n}_{\pm}$. In the case of state $|{\rm ECS}^-\rangle$, this difference appears for $\bar{n}_{-}\gtrsim1.43$. This difference in the optimized Bell-CHSH function reaches its maximum value $\approx0.053$ for $\bar{n}_{-}\approx2.24$. For this value of $\bar{n}_{-}$, the symmetric ECS gives $\abs{\mathcal{B}_O}_{\rm max}\approx 2.135$ for $\alpha_1=\alpha_2\approx1.04$ while the asymmetric ECS yields $\abs{\mathcal{B}_O}_{\rm max}\approx 2.189$ for $\alpha_1\approx1.26$ and $\alpha_2\approx0.77$. The even ECS also shows a small increase of the violation when an asymmetric ECS is used in place of the symmetric ECS for $\bar{n}_{+}\gtrsim2.83$. The maximum difference is $\approx0.007$ when $\bar{n}_{+}\approx 3.93$. In order to further investigate the advantages of the asymmetric ECS, we plot the optimized Bell-CHSH function $\abs{\mathcal{B}_O}_{\rm max}$ as a function of $\alpha_1$ and $\alpha_2$ for $\ket{\rm ECS^-}$ in Fig.~\ref{onoff_minus_ab}. The blue line indicates the point of the maximum Bell-CHSH function for each value of $\bar{n}_{-}$.
The unsmooth change in the blue line at $\alpha_1=\alpha_2\approx 0.77$ ($\bar{n}_-\approx1.43$) results from the numerical optimization process, where {\it local} maximum values are compared as the related parameters, i.e., the displacement variables and amplitudes, are varied. We note that such comparisons among local maxima at a number of different parameter regions lead to unsmooth changes in several plots throughout this paper \cite{note-num}. In fact, the blue line splits into two symmetric curves at the point $\bar{n}_-\approx1.43$, according to our numerical calculation (which is obvious because $\alpha_1$ and $\alpha_2$ are simply interchangeable), while we plot only one of the curves for convenience. The results in Figs.~\ref{onoff_n} and \ref{onoff_minus_ab} show that when the average photon number is relatively large, the asymmetric ECS outperforms the symmetric one in testing the Bell-CHSH inequality with photon on-off measurements and displacement operations. On the other hand, when the average photon number is small, the symmetric ECS gives larger Bell violations. \begin{figure} \caption{\label{nvsb_parity}} \end{figure} \subsection{Bell-CHSH tests with photon number parity measurements} We now consider the photon number parity measurements with the displacement operators for both modes. Here we use the displacement operator again in order to approximate a parametrized rotation for a coherent-state qubit in the Bell-CHSH test. The displaced parity measurement is given by \begin{align} \hat{\Pi}_i(\xi)=\hat{D}_i(\xi) \sum \limits_{n=0}^{\infty}\big( \ket{2n}\bra{2n}-\ket{2n+1}\bra{2n+1} \big) \hat{D}^\dagger_i(\xi) \label{eq:Pi} \end{align} where $i\in\{1,2\}$ denotes each mode and the correlation function is \begin{align} E_\Pi(\xi_1,\xi_2) = \braket{\hat{\Pi}_1(\xi_1)\otimes\hat{\Pi}_2(\xi_2)} \label{E_parity} \end{align} with which the Bell-CHSH function $\mathcal{B}_\Pi$ can be constructed using Eq.~\eqref{bell_onoff}.
We have obtained and presented an explicit form of the correlation function in Appendix \ref{sec:corr_func}. \begin{figure} \caption{\label{parity_plus_ab}} \end{figure} The optimized Bell-CHSH functions are plotted for varying $\bar{n}_{\pm}$ using parity measurements in Fig.~\ref{nvsb_parity}. In contrast to the on-off measurement case, the optimized Bell-CHSH functions increase monotonically toward Cirel'son's bound \cite{cirelson80}, $2\sqrt{2}$, for both even and odd ECSs as shown in the figure. These results are consistent with previous studies \cite{wilson02,jeong03}. As explained in Ref.~\cite{jeong03}, the Bell-CHSH violation increases monotonically in the case of the parity measurement but not in the case of the on-off measurement for the following reason. When the average photon number of the symmetric ECS is sufficiently large, it can be represented as a maximally entangled two-qubit state in a $2 \otimes 2$ Hilbert space spanned by the even and odd SCS basis ($|\alpha\rangle\pm|-\alpha\rangle$) and the displacement operator well approximates the qubit rotation. Therefore, in this limit, the violation approaches the maximum value, $2\sqrt{2}$, with the parity measurement that perfectly discriminates between the even and odd SCSs. In contrast, the photon number on-off measurement cannot cause Bell violations for large amplitudes because the vacuum weight in the state almost vanishes in the limit of $\alpha\gg1$~\cite{jeong03}. As implied by the apparent overlaps between the cases of symmetric ECSs (solid curve) and those of the asymmetric ECSs (dotted), our numerical investigations show that unlike the case with on-off measurements, the asymmetric ECSs do not show any larger Bell violations. Figure~\ref{parity_plus_ab} shows the optimized Bell-CHSH function of $\ket{\rm ECS^+}$ for amplitudes $\alpha_1$ and $\alpha_2$.
The blue line in the figure indicates the states with which the maximum violations are obtained for the same average photon numbers. The figure shows that the asymmetry of the amplitudes does not improve the amount of violations in the case of the parity measurement. We conjecture that the symmetric structure of the parity measurement is closely related to this result. \section{\label{sec:bell_test_deco}Bell-CHSH inequality tests under decoherence effects} In this section, we consider decoherence effects on Bell nonlocality tests using an ECS. We will study the Bell-CHSH inequality violation using a general form of ECS and compare it to the result with a symmetric ECS for both on-off and parity measurements. \subsection{Symmetric and asymmetric strategies for entanglement distribution} In the presence of photon loss, the time evolution of a density operator $\rho$ is described by the master equation as \cite{phoenix90} \begin{align*} \frac{\partial\rho}{\partial\tau} = \hat{J}\rho + \hat{L}\rho, \label{master}\numberthis \end{align*} where $\tau$ is the interaction time, and the Lindblad superoperators $\hat{J}$ and $\hat{L}$ are defined as $\hat{J}\rho = \gamma \sum_i \hat{a}_i \rho \hat{a}_i^\dagger$ and $\hat{L}\rho = - \gamma/2 \sum_i (\hat{a}_i^\dagger \hat{a}_i \rho + \rho \hat{a}_i^\dagger \hat{a}_i)$ with a decay rate $\gamma$. We consider two different strategies, i.e. symmetric (A) and asymmetric (B) ones, when distributing an ECS over a distance to Alice and Bob as illustrated in Fig.~\ref{photon_loss}. In the case of strategy A, photon loss occurs symmetrically for both modes of the ECS during the decoherence time $\tau$. On the other hand, in strategy B, an ECS is first generated in the location of Alice and one mode of the ECS is sent to Bob at a distance. In the latter case, photon losses occur only in one of the field modes but the decoherence time becomes $2\tau$. 
Assuming a zero temperature bath and a decay rate $\gamma$ for both cases, a direct calculation of the master equation leads to the states \begin{align*} \rho^{\pm}_A(\alpha_1,\alpha_2,t) &= \mathcal{N}_\pm^2 \Bigl\{ \ket{\sqrt{t}\alpha_1}\bra{\sqrt{t}\alpha_1}\otimes\ket{\sqrt{t}\alpha_2}\bra{\sqrt{t}\alpha_2} \\ &\pm e^{-2(1-t)(\alpha_1^2+\alpha_2^2)} \big[\ket{\sqrt{t}\alpha_1}\bra{-\sqrt{t}\alpha_1}\otimes\ket{\sqrt{t}\alpha_2}\bra{-\sqrt{t}\alpha_2} \\ &+ \ket{-\sqrt{t}\alpha_1}\bra{\sqrt{t}\alpha_1}\otimes\ket{-\sqrt{t}\alpha_2}\bra{\sqrt{t}\alpha_2} \big] \\ &+ \ket{-\sqrt{t}\alpha_1}\bra{-\sqrt{t}\alpha_1}\otimes\ket{-\sqrt{t}\alpha_2}\bra{-\sqrt{t}\alpha_2} \Bigr\}, \numberthis \label{rhoA}\\ \rho^\pm_B(\alpha_1,\alpha_2,t) &= \mathcal{N}_\pm^2 \Bigl\{ \ket{\alpha_1}\bra{\alpha_1}\otimes\ket{t\alpha_2}\bra{t\alpha_2} \\ & \pm e^{-2(1-t^2) \alpha_2^2} \bigl[ \ket{\alpha_1}\bra{-\alpha_1}\otimes\ket{t\alpha_2}\bra{-t\alpha_2} \\ &+ \ket{-\alpha_1}\bra{\alpha_1}\otimes\ket{-t\alpha_2}\bra{t\alpha_2} \bigr]\\ &+ \ket{-\alpha_1}\bra{-\alpha_1}\otimes\ket{-t\alpha_2}\bra{-t\alpha_2} \Bigr\} \numberthis \label{rhoB} \end{align*} for strategies A and B, respectively, where $t = e^{-\gamma \tau}$. For convenience, we define the normalized time $r=1-t$, which is zero when $\tau=0$ and increases to $1$ as $\tau$ increases to infinity. If the cross terms in Eqs.~\eqref{rhoA} and \eqref{rhoB} vanish, the states become classical mixtures of two distinct states and quantum effects generally disappear. We observe that the cross term in $\rho^{\pm}_A$ is proportional to $e^{-2(1-t)(\alpha_1^2+\alpha_2^2)}$ and it is proportional to $e^{-2(1-t^2) \alpha_2^2}$ in $\rho^{\pm}_B$. This implies that one may reduce decoherence effects using strategy B by making the amplitude smaller for the mode in which loss occurs (i.e., by making the field mode sent to Bob in Fig.~\ref{photon_loss} have the smaller amplitude).
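Comparing the two exponents directly shows that strategy B preserves more coherence exactly when $\alpha_2^2 < (\alpha_1^2+\alpha_2^2)/(1+t)$, i.e., when the lossy mode carries a small enough share of the total amplitude. A quick numerical illustration (the amplitudes below are our own illustrative values, not results from the paper):

```python
import numpy as np

# cross-term (coherence) factors for the two distribution strategies:
#   A: exp[-2 (1 - t)   (a1^2 + a2^2)]   (symmetric loss, time tau on each mode)
#   B: exp[-2 (1 - t^2)  a2^2]           (loss on one mode only, time 2*tau)
def coh_A(a1, a2, t):
    return np.exp(-2 * (1 - t) * (a1**2 + a2**2))

def coh_B(a1, a2, t):
    return np.exp(-2 * (1 - t**2) * a2**2)

a1, a2, t = 0.76, 0.50, 0.8        # illustrative values; r = 1 - t = 0.2
assert coh_B(a1, a2, t) > coh_A(a1, a2, t)

# equivalent condition: a2^2 < (a1^2 + a2^2) / (1 + t)
assert a2**2 < (a1**2 + a2**2) / (1 + t)
```

The larger the coherence factor, the more of the interference (cross) term survives, which is what ultimately feeds the Bell-CHSH violation.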
Such a strategy was also applied to the tele-amplification protocol \cite{nielsen13} and the distributed generation scheme for ECSs \cite{lund13}. \begin{figure} \caption{\label{photon_loss}} \end{figure} \begin{figure} \caption{\label{onoff_t}} \end{figure} \begin{figure} \caption{\label{onoff_ab}} \end{figure} \subsection{Bell-CHSH tests with on-off measurements under photon loss effects} We first consider Bell-CHSH tests with on-off measurements to compare strategies A and B in terms of robustness against decoherence. Explicit forms of the correlation functions calculated using Eqs.~\eqref{rhoA} and \eqref{rhoB} can be found in Appendix~\ref{sec:corr_func}. We numerically optimize the corresponding Bell-CHSH function, $|{\cal B}_O|_{\rm max}$, over all displacement variables and amplitudes $\alpha_1$ and $\alpha_2$ for given $r$. The numerically optimized Bell-CHSH functions against the normalized time are presented for both $\rho_A^\pm$ and $\rho_B^\pm$ in Fig.~\ref{onoff_t}, where the decrease of the violations due to decoherence is apparent. The optimizing values of the amplitudes $\alpha_1$ and $\alpha_2$ lie between $0.4$ and $1.4$, where relatively smaller values correspond to large values of $r$. We observe that strategy B leads to larger violations than strategy A for $r\gtrsim0.07$ when we use the even ECS $\rho^+$. For smaller values of $r$, strategy A gives slightly larger violations, with a difference of up to $\lesssim 0.001$. On the other hand, the odd ECS $\rho^-$ shows larger violations for strategy B than for strategy A regardless of the value of $r$. As explained earlier in Sec.~IIA, the discontinuities of the first derivative at $r\approx0.09$ for $\rho_A^+$ and $r\approx0.07$ for $\rho_B^+$ in Fig.~\ref{onoff_t}(a) emerge from the numerical optimization process, where local maxima for different parameter regions are compared.
\begin{figure} \caption{\label{parity_t}} \end{figure} The Bell violations of the odd ECSs for varying $\alpha_1$ and $\alpha_2$ at $r=0.2$, numerically optimized over the displacement variables, are shown in Fig.~\ref{onoff_ab}. Figure~\ref{onoff_ab}(a) clearly shows the asymmetric behavior of the optimized Bell-CHSH function under strategy A. Figure~\ref{onoff_ab}(b) is for the case of asymmetric decoherence (strategy B). For a given average photon number, the optimizing value of $\alpha_2$ in strategy B is smaller than that in strategy A. This shows that an asymmetric ECS can reduce the asymmetric decoherence effects by lessening the amplitude of the mode in which decoherence occurs and adjusting the other mode, which is free from decoherence. These results are consistent with previous studies of Bell inequality tests using hybrid entanglement \cite{jpark12,kwon13} and quantum teleportation \cite{kpark12,nielsen13}, where the amplitude of the mode that suffers decoherence should be kept small in order to optimize Bell violations or teleportation fidelities. It is important to note from Fig.~\ref{onoff_ab} that strategy B shows a significantly larger violation than strategy A. In strategy A (Fig.~\ref{onoff_ab}(a)), the maximum value of the Bell-CHSH function is $\approx2.054$ for $\alpha_1\approx 0.81$ and $\alpha_2 \approx 0.74$. On the other hand, the maximum value for strategy B (Fig.~\ref{onoff_ab}(b)) is $\approx2.145$ for $\alpha_1 \approx 0.76$ and $\alpha_2 \approx 0.50$. Noting that $2.0$ is the maximum value of the Bell function in a local realistic theory, the value obtained with strategy B, $2.145$, is significantly larger than that of strategy A, $2.054$. This is a remarkable advantage of using asymmetric ECSs with the asymmetric distribution scheme in testing the Bell-CHSH inequality.
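The local realistic bound of $2$ invoked above can be recovered by enumerating all deterministic local strategies; the following generic sketch (ours, not from the paper) confirms the bound and compares it with the quantum values quoted in the text:

```python
from itertools import product

# A deterministic local strategy assigns a fixed outcome +/-1 to each of the
# four settings (a, a' for Alice; b, b' for Bob); E(x, y) is then just x*y.
best = 0
for a, ap, b, bp in product((-1, 1), repeat=4):
    chsh = a * b + a * bp + ap * b - ap * bp
    best = max(best, abs(chsh))

assert best == 2  # the local realistic bound

# The optimized quantum values reported above exceed this bound.
for quantum_value in (2.054, 2.145):
    assert quantum_value > best
```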
\subsection{\label{subsec:parity_loss}Bell-CHSH tests with photon number parity measurement under photon loss effects} \begin{figure} \caption{\label{parity_ab}} \end{figure} Figure~\ref{parity_t} presents the numerically optimized Bell-CHSH function $\abs{\mathcal{B}_\Pi}$ with parity measurements against the normalized time $r$ (see Appendix \ref{sec:corr_func} for details). For both the even and odd ECSs, strategy B gives significantly larger violations than strategy A. The optimizing values of $\alpha_1$ and $\alpha_2$ approach infinity as $r\rightarrow0$. This is because, when we use photon number parity measurements and photon loss is absent in one mode, the optimized Bell-CHSH function increases monotonically with respect to the amplitude of that mode. However, we are interested in values that can be obtained within a realistic range (e.g., $\bar{n}_\pm<5$). We thus plot in Fig.~\ref{parity_ab} the optimization results for the even ECSs at a normalized decoherence time $r=0.1$ for varying $\alpha_1$ and $\alpha_2$. In contrast to the case of on-off measurements, as shown in Fig.~\ref{parity_ab}(a), the asymmetry of the amplitudes of the ECS does not improve the amount of Bell violation for strategy A, where decoherence occurs symmetrically. Similar to the results for the on-off measurements, the asymmetry of the amplitudes in ECSs increases the amount of Bell violation for strategy B (Fig.~\ref{parity_ab}(b)). In this case, the maximum value of the optimized Bell-CHSH function is $|{\cal B}_\Pi|_{\rm max}\approx 2.146$ for $\alpha_1\rightarrow \infty$ and $\alpha_2\approx 0.46$. A similar value, $|{\cal B}_\Pi|_{\rm max}\approx 2.131$, can be obtained for $\alpha_1=2$ and $\alpha_2\approx 0.44$, where the average photon number is $n_+\approx 4.19$. This value is much larger than that of strategy A, which only reaches $|{\cal B}_\Pi|_{\rm max}\approx2.014$ for $\alpha_1=\alpha_2 \approx 0.44$.
\section{\label{sec:ineff}Effects of detection inefficiency} Here we investigate the effects of detection inefficiency on the Bell-CHSH inequality tests. An imperfect photodetector with efficiency $\eta$ can be modeled by a perfect photodetector preceded by a beam splitter of transmissivity $\sqrt{\eta}$ \cite{yuen80}. Meanwhile, decoherence by photon loss in the entangled state (Eqs.~\eqref{rhoA} and \eqref{rhoB}) can also be modeled using a beam splitter. The only difference between the effect of detection inefficiency and that of photon loss in the entangled state is the order of the photon loss and the displacement, as shown in Fig.~\ref{scheme1}. We compare these two cases in Appendix \ref{sec:equiv}, where the results show that the two cases lead to the same Bell-CHSH function except for coefficients of the displacement variables, which disappear during the optimization process. We have numerically investigated both on-off and parity measurements with limited efficiencies, $\eta_1$ for mode 1 and $\eta_2$ for mode 2, for both even and odd ECSs. As we find that the odd ECS is better for on-off measurements and the even ECS is better for parity measurements, as already implied in Figs. 6 and 8, we present the two corresponding cases in Fig.~\ref{inefficient}. It shows that the on-off measurement scheme generally gives much larger violations than the parity measurement scheme. This is consistent with the results for the case of photon loss effects studied in the previous section (see Figs. 6(b) and 8(a)). \begin{figure} \caption{\label{scheme1}} \end{figure} \begin{figure} \caption{\label{inefficient}} \end{figure} In detail, for the case of the displaced on-off measurements with the odd ECS (Figs.~\ref{inefficient}(a) and \ref{inefficient}(b)), the optimizing values of $\alpha_1$ and $\alpha_2$ lie between $0.4$ and $1.4$, which is experimentally feasible \cite{ourjoumtsev09,Ourj2007}.
We find that the asymmetric ECS (Fig.~\ref{inefficient}(b)) shows larger violations than the symmetric ECS (Fig.~\ref{inefficient}(a)). When $\eta=\eta_1=\eta_2$, a detection efficiency of $\eta \ge 0.771$ is required for a violation of $\abs{\mathcal{B}_O}\geq 2.001$ with the symmetric ECS, while a smaller efficiency $\eta \ge 0.745$ is sufficient for the same amount of violation with the asymmetric ECS. When the efficiencies for modes 1 and 2 are $\eta_1=0.75$ and $\eta_2=1$, the asymmetric ECS shows an optimized Bell quantity of $2.305$, which is larger than the value $2.269$ for the symmetric ECS. In the case of the parity measurements using the even ECSs, the improvement by the asymmetric ECS is even larger. For example, comparing the $\eta_1=0.98$ lines in Figs.~\ref{inefficient}(c) and (d), we find that there is a violation $\abs{\mathcal{B}_\Pi}\geq2.01$ for $\eta_2 \ge 0.760$ if we use the asymmetric form of ECS. However, a much larger efficiency $\eta_2\ge 0.805$ is needed when we use the symmetric ECS. The optimizing values of $\alpha_1$ and $\alpha_2$ lie between $0.02$ and $1.64$ if $\eta_1<0.99$ and $\eta_2<0.99$. We also note that if one of the detectors is perfect, the optimizing value of the amplitude of that mode goes to infinity. This behavior can be inferred from the results in Sec.~\ref{subsec:parity_loss} with Fig.~9(b) for the case of decoherence, noting that decoherence and detection inefficiency have qualitatively the same effects (Appendix B). When both detectors are perfect, of course, a larger amplitude gives a larger violation for each mode, as implied in Fig.~4; the maximum violation appears when both $\alpha_1$ and $\alpha_2$ become infinite. \section{\label{sec:remarks}Remarks} In this paper, we have studied asymmetric ECSs as well as asymmetric lossy environments for Bell-CHSH inequality tests. We first investigated the violations of the Bell-CHSH inequality using perfect on-off detectors and ideal ECSs.
We have shown that the asymmetric form of ECS can give larger violations of the Bell-CHSH inequality in some region of the average photon numbers of the ECSs. On the other hand, in the case of photon number parity measurements, we could not find an apparent improvement of the Bell violations using the asymmetric form of ECS with perfect detectors. We then studied Bell-CHSH violations under the effects of decoherence on the ECSs. We considered two different schemes for distributing entanglement under photon loss, i.e., the symmetric and asymmetric schemes. In the symmetric scheme, photon losses occur in both modes of the ECSs. On the other hand, photon loss occurs in only one of the two modes in the asymmetric scheme. We showed that the asymmetric form of ECS can increase the amount of violation significantly under the asymmetric scheme compared to the symmetric scheme. For example, when the normalized time is $r=0.2$, the Bell-CHSH function using photon on-off detectors for the asymmetric loss scheme reaches a maximum value of $2.145$, which is much larger than that for the symmetric case, $2.054$. A similar improvement can be made in the case of photon number parity measurements; when $r=0.1$ with the ECS of average photon number $\approx 4.19$, the optimized Bell-CHSH function under the symmetric loss scheme is about $2.014$, but it is $2.131$ under the asymmetric scheme. We have also investigated the effects of inefficient detectors. We showed that the asymmetric form of ECSs lowers the detection efficiency required for a violation of the Bell-CHSH inequality. For example, a detection efficiency of $\eta \ge 0.771$ is required for a Bell-CHSH violation of $\abs{\mathcal{B}_O}\geq 2.001$ with the symmetric ECS, but a smaller efficiency $\eta \ge 0.745$ is sufficient for the same amount of violation with the asymmetric ECS.
This improvement is even more significant for the case of parity measurements, particularly when the detection efficiencies of the two detectors differ greatly. When the detection efficiency for mode 1 is $\eta_1=0.98$, the required detection efficiency for mode 2 to show a violation of $|\mathcal{B}_\Pi| \geq 2.01$ is $\eta_2 \gtrsim 0.81$ when the symmetric ECS is used, but it is $\eta_2 \gtrsim 0.76$ when the asymmetric ECS is used. The optimizing amplitudes are found between $0.4$ and $1.4$ in the case of the on-off measurements and between $0.02$ and $1.64$ for the parity measurements. It is worth noting that these values of amplitudes for ECSs are within reach of current technology \cite{ourjoumtsev09,Ourj2007,sasaki-cat,nam-cat}. In summary, our extensive study reveals that the asymmetric form of ECSs and the asymmetric scheme for distributing entanglement enable one to test the Bell-CHSH inequality more effectively with the same resources. \begin{acknowledgments} This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2010-0018295) and by the Center for Theoretical Physics at Seoul National University. The numerical calculations in this work were performed using the Chundoong cluster system in the Center for Manycore Programming at Seoul National University. \end{acknowledgments} \appendix \section{\label{sec:corr_func}Correlation functions for on-off and parity measurements} Here, we present the explicit forms of the correlation functions defined in Eqs.~\eqref{E_onoff} and \eqref{E_parity} under lossy effects. Instead of computing the correlation function for every case, we calculate the results for an ECS passing through beam splitters with transmissivities $\eta_1$ and $\eta_2$ for modes 1 and 2, respectively, and show that these results are applicable to all the cases discussed in this paper.
The beam splitter operator with transmissivity $\sqrt{\eta}$ between modes $a$ and $b$ can be represented by $\hat{B}_{ab} = \exp[(\cos^{-1}\sqrt{\eta})(\hat{a}_a^\dagger\hat{a}_b-\hat{a}_a\hat{a}_b^\dagger)]$ \cite{campos89}. If a coherent-state dyadic $\ket{\alpha}\bra{\beta}$ in mode $C$ is mixed with the vacuum in mode $v$, the coherent-state dyadic becomes \begin{align*} &\Tr_v[\hat{B}_{Cv}(\ket{\alpha}\bra{\beta})_C\otimes(\ket{0}\bra{0})_v\hat{B}_{Cv}^\dagger] \\ &= \exp\bigl[-\frac{1}{2}(1-\eta)(\abs{\alpha}^2+\abs{\beta}^2-2\alpha\beta^*)\bigr]\ket{\sqrt{\eta}\alpha}\bra{\sqrt{\eta}\beta}.\label{dkb}\numberthis \end{align*} We now apply this result to a pure ECS $\ket{\rm ECS^\pm}$ mixed with vacuum modes through two beam splitters with transmissivities $\eta_1$ and $\eta_2$, which gives \begin{align*} \rho^{\pm}&[\alpha_1,\alpha_2,\eta_1,\eta_2]\\ &= \mathcal{N}_\pm^2 \bigl\{ \ket{\sqrt{\eta_1}\alpha_1}\bra{\sqrt{\eta_1}\alpha_1}\otimes\ket{\sqrt{\eta_2}\alpha_2}\bra{\sqrt{\eta_2}\alpha_2} \\ &\pm e^{-2[(1-\eta_1)\alpha_1^2+(1-\eta_2)\alpha_2^2]}\\ &\bigl[\ket{\sqrt{\eta_1}\alpha_1}\bra{-\sqrt{\eta_1}\alpha_1}\otimes\ket{\sqrt{\eta_2}\alpha_2}\bra{-\sqrt{\eta_2}\alpha_2} \\ &+ \ket{-\sqrt{\eta_1}\alpha_1}\bra{\sqrt{\eta_1}\alpha_1}\otimes\ket{-\sqrt{\eta_2}\alpha_2}\bra{\sqrt{\eta_2}\alpha_2} \bigr] \\ &+ \ket{-\sqrt{\eta_1}\alpha_1}\bra{-\sqrt{\eta_1}\alpha_1}\otimes\ket{-\sqrt{\eta_2}\alpha_2}\bra{-\sqrt{\eta_2}\alpha_2} \bigr\}. \end{align*} It is straightforward to check that setting $\eta_1=\eta_2=t$ makes the state exactly the same as Eq.~\eqref{rhoA}, while $\eta_1=1$ with $\eta_2=t^2$ gives Eq.~\eqref{rhoB}.
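These two reductions, together with the composition property of the loss map (two successive beam splitters with transmissivities $\eta_a$ and $\eta_b$ act like a single one with $\eta_a\eta_b$), can be checked numerically. The following sketch is our own illustration, not code from the paper:

```python
import cmath, math

def loss_dyadic(alpha, beta, eta):
    """Beam-splitter loss map of Eq. (dkb) on the dyadic |alpha><beta|.
    Returns (scalar factor, new ket amplitude, new bra amplitude)."""
    s = math.sqrt(eta)
    factor = cmath.exp(-0.5 * (1 - eta)
                       * (abs(alpha)**2 + abs(beta)**2 - 2 * alpha * beta.conjugate()))
    return factor, s * alpha, s * beta

a, b = 1.1 + 0.3j, -0.7 + 0.2j

# Composition: losses eta_a then eta_b equal a single loss eta_a * eta_b.
f1, a1, b1 = loss_dyadic(a, b, 0.9)
f2, a2, b2 = loss_dyadic(a1, b1, 0.8)
f12, a12, b12 = loss_dyadic(a, b, 0.9 * 0.8)
assert abs(f1 * f2 - f12) < 1e-12 and abs(a2 - a12) < 1e-12

# Cross-term factor of the lossy ECS: eta1 = eta2 = t reproduces rho_A,
# while eta1 = 1, eta2 = t^2 reproduces rho_B.
a1r, a2r, t = 0.8, 0.5, 0.9
fac = lambda e1, e2: math.exp(-2 * ((1 - e1) * a1r**2 + (1 - e2) * a2r**2))
assert abs(fac(t, t) - math.exp(-2 * (1 - t) * (a1r**2 + a2r**2))) < 1e-12
assert abs(fac(1, t**2) - math.exp(-2 * (1 - t**2) * a2r**2)) < 1e-12
```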
Using the result, the correlation functions for on-off measurement and parity measurement are obtained as \begin{widetext} \begin{align*} E_O(\xi,\chi) &= \Tr[\rho^\pm \hat{O}_1(\xi)\otimes\hat{O}_2(\chi)]\\ &=\frac{2}{2 \pm 2 e^{-2 \alpha_1^2-2 \alpha_2^2}} \bigl[1\mp2 e^{-2 \alpha_1^2-2 \alpha_2^2-\chi_i^2-\chi_r^2-\xi_i^2-\xi_r^2} \bigl(e^{\alpha_1^2 \eta_1 + \chi_i^2 + \chi_r^2} \cos \bigl(2 \alpha_1 \sqrt{\eta_1} \xi_i\bigr) \\ &+e^{\alpha_2^2 \eta_2+\xi_i^2+\xi_r^2} \cos \bigl(2 \alpha_2 \chi_i \sqrt{\eta_2}\bigr) \bigr) -2 e^{\alpha_1^2 \eta_1 + \alpha_2^2 \eta_2} \cos \bigl(2 \bigl(\alpha_1 \sqrt{\eta_1} \xi_i+ \alpha_2 \sqrt{\eta_2}\chi_i \bigr)\bigr)\\ & \pm e^{-2 \bigl(\alpha_1^2+\alpha_2^2\bigr)} +2 e^{-\bigl(\xi_r-\alpha_1 \sqrt{\eta_1}\bigr)^2-\bigl(\chi_r-\alpha_2 \sqrt{\eta_2}\bigr)^2-\chi_i^2-\xi_i^2} +2 e^{-\bigl(\alpha_1 \sqrt{\eta_1}+\xi_r\bigr)^2-\bigl(\alpha_2 \sqrt{\eta_2}+\chi_r\bigr)^2-\chi_i^2-\xi_i^2}\\ &-e^{-\bigl(\xi_r - \alpha_1 \sqrt{\eta_1}\bigr)^2-\xi_i^2}-e^{-\bigl(\alpha_1 \sqrt{\eta_1}+\xi_r\bigr)^2-\xi_i^2} -e^{-\bigl(\alpha_2 \sqrt{\eta_2}+\chi_r\bigr)^2-\chi_i^2}-e^{-\bigl(\chi_r-\alpha_2 \sqrt{\eta_2}\bigr)^2-\chi_i^2}\bigr], \\ E_\Pi(\xi,\chi) &= \Tr[\rho^\pm \hat{\Pi}_1(\xi)\otimes\hat{\Pi}_2(\chi)]\\ &=\frac{1}{2\pm 2e^{-2 \alpha_1^2-2 \alpha_2^2}}\exp \left(-2 \left(\alpha_1^2 (\eta_1+1) + \alpha_2^2 (\eta_2+1)+\chi_i^2+\chi_r^2+\xi_i^2+\xi_r^2\right)\right)\\ &\left(\pm2e^{4 \left(\alpha_1^2 \eta_1 + \alpha_2^2 \eta_2\right)} \cos \left(4 \alpha_1 \sqrt{\eta_1} \xi_i+4 \alpha_2 \chi_i \sqrt{\eta_2}\right)+e^{2 \left(\alpha_1^2 -2\alpha_1\xi_r\sqrt{\eta_1} + \alpha_2^2 -2 \alpha_2 \chi_r \sqrt{\eta_2}\right)} \left(e^{8 \alpha_1 \sqrt{\eta_1} \xi_r+8 \alpha_2 \chi_r \sqrt{\eta_2}}+1\right)\right), \end{align*} \end{widetext} where $\xi=\xi_r+i\xi_i$ and $\chi=\chi_r+i\chi_i$, and $O_i(\xi)$ and $\Pi_i(\xi)$ were defined in Eqs.~\eqref{eq:O} and \eqref{eq:Pi}. 
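As a sanity check on the parity correlation function (an illustrative script of ours, transcribed from the formula above with real amplitudes), one can verify that with perfect detectors and no displacement it reduces to $E_\Pi=\pm1$ for the even/odd ECS, as expected since $\hat{\Pi}_1\otimes\hat{\Pi}_2\ket{\rm ECS^\pm}=\pm\ket{\rm ECS^\pm}$:

```python
import math

def E_parity(a1, a2, eta1, eta2, xi, chi, sign):
    """Parity correlation E_Pi from the appendix formula.
    sign = +1 (even ECS) or -1 (odd ECS); xi, chi complex displacements."""
    xr, xii = xi.real, xi.imag
    cr, ci = chi.real, chi.imag
    norm = 1.0 / (2 + sign * 2 * math.exp(-2 * a1**2 - 2 * a2**2))
    pre = math.exp(-2 * (a1**2 * (eta1 + 1) + a2**2 * (eta2 + 1)
                         + ci**2 + cr**2 + xii**2 + xr**2))
    term1 = sign * 2 * math.exp(4 * (a1**2 * eta1 + a2**2 * eta2)) \
        * math.cos(4 * a1 * math.sqrt(eta1) * xii + 4 * a2 * ci * math.sqrt(eta2))
    term2 = math.exp(2 * (a1**2 - 2 * a1 * xr * math.sqrt(eta1)
                          + a2**2 - 2 * a2 * cr * math.sqrt(eta2))) \
        * (math.exp(8 * a1 * math.sqrt(eta1) * xr + 8 * a2 * cr * math.sqrt(eta2)) + 1)
    return norm * pre * (term1 + term2)

# Perfect detectors, no displacement: the total parity of ECS^± is ±1.
for a1, a2 in ((0.7, 0.4), (1.2, 0.9)):
    assert abs(E_parity(a1, a2, 1.0, 1.0, 0j, 0j, +1) - 1.0) < 1e-12
    assert abs(E_parity(a1, a2, 1.0, 1.0, 0j, 0j, -1) + 1.0) < 1e-12
```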
By substituting $\eta_1=\eta_2=1$, we obtain the correlation functions for the case of perfect detectors. Likewise, $\eta_1=\eta_2=t$ gives the correlation functions for distribution strategy A, and $\eta_1=1$, $\eta_2=t^2$ gives the correlation functions for strategy B. As we discuss below, we can also use these results for the cases of imperfect detectors, even though they differ in the order of the displacement operators and the beam splitters. \section{\label{sec:equiv}The order of photon loss and displacement operators in the correlation functions} In this section, we show that ECSs give the same optimized value of the Bell-CHSH function independent of the order of the displacement and the beam-splitter photon loss. Note that it is sufficient to consider only coherent-state dyadics of the forms $\ket{\gamma}\bra{\gamma}$ and $\ket{\gamma}\bra{-\gamma}$: ECSs contain only dyadics of these forms, the photon-loss channel is linear on the space of density matrices, and the displacement operator also acts linearly. First, suppose that the state passes through the beam splitter first and is displaced second. From the above, the dyadics after transmission through the beam splitter with transmissivity $\sqrt{\eta}$ become \begin{align*} \ket{\gamma}\bra{\gamma} &\longrightarrow \ket{\sqrt{\eta}\gamma}\bra{\sqrt{\eta}\gamma} \\ \ket{\gamma}\bra{-\gamma} &\longrightarrow \exp[-2(1-\eta)\abs{\gamma}^2]\ket{\sqrt{\eta}\gamma}\bra{-\sqrt{\eta}\gamma}. \end{align*} Now we apply the displacement operator $\hat{D}(\xi) = \exp[\xi\hat{a}^\dagger-\xi^*\hat{a}]$ to the photon-lost dyadics.
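The two lossy-dyadic maps just stated follow from Eq.~\eqref{dkb} with $\beta=\pm\gamma$, which can be confirmed numerically (our own sketch, valid also for complex $\gamma$):

```python
import cmath

def dkb_factor(alpha, beta, eta):
    """Scalar factor of Eq. (dkb): |alpha><beta| -> factor |sqrt(eta)a><sqrt(eta)b|."""
    return cmath.exp(-0.5 * (1 - eta)
                     * (abs(alpha)**2 + abs(beta)**2 - 2 * alpha * beta.conjugate()))

eta = 0.8
for gamma in (0.9 + 0.4j, -1.3 + 0.1j):
    # |gamma><gamma| keeps unit weight ...
    assert abs(dkb_factor(gamma, gamma, eta) - 1.0) < 1e-12
    # ... while |gamma><-gamma| is damped by exp[-2(1-eta)|gamma|^2].
    expected = cmath.exp(-2 * (1 - eta) * abs(gamma)**2)
    assert abs(dkb_factor(gamma, -gamma, eta) - expected) < 1e-12
```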
Then the dyadics become \begin{align*} \ket{\gamma}\bra{\gamma} \longrightarrow& \ket{\sqrt{\eta}\gamma-\xi}\bra{\sqrt{\eta}\gamma-\xi} \\ \ket{\gamma}\bra{-\gamma} \longrightarrow& \exp[-2(1-\eta)\abs{\gamma}^2+\sqrt{\eta}(\xi^*\gamma-\xi\gamma^*)]\\ &\quad\ket{\sqrt{\eta}\gamma-\xi}\bra{-\sqrt{\eta}\gamma-\xi}.\numberthis \label{disloss} \end{align*} Next, let us consider the case where we apply the operators in the opposite order, displacement first and photon loss second. After the displacement, the dyadics become \begin{align*} \ket{\gamma}\bra{\gamma} \longrightarrow& \ket{\gamma-\xi}\bra{\gamma-\xi} \\ \ket{\gamma}\bra{-\gamma} \longrightarrow& \exp[\xi^*\gamma-\xi\gamma^*]\ket{\gamma-\xi}\bra{-\gamma-\xi}. \end{align*} Applying photon loss to these dyadics using Eq.~\eqref{dkb}, we finally obtain \begin{align*} \ket{\gamma}\bra{\gamma} \longrightarrow& \ket{\sqrt{\eta}(\gamma-\xi)}\bra{\sqrt{\eta}(\gamma-\xi)} \\ \ket{\gamma}\bra{-\gamma} \longrightarrow& \exp[-2(1-\eta)\abs{\gamma}^2+\eta(\xi^*\gamma-\xi\gamma^*)]\\ &\quad\ket{\sqrt{\eta}(\gamma-\xi)}\bra{\sqrt{\eta}(-\gamma-\xi)}.\numberthis \label{lossdis} \end{align*} These results show that replacing $\xi$ with $\sqrt{\eta}\xi$ in Eq.~\eqref{disloss} makes it exactly the same as Eq.~\eqref{lossdis}. This means that the optimized values of the Bell-CHSH functions over all displacement variables do not depend on the order of loss and displacement. \end{document}
\begin{document} \renewcommand{\theequation}{\thesection.\arabic{equation}} \title{Unique continuation property for an anomalous slow diffusion equation} \begin{abstract} A Carleman estimate and the unique continuation of solutions for an anomalous diffusion equation with fractional time derivative of order $0<\alpha<1$ are given. The estimate is derived via a subelliptic estimate for an operator associated to the anomalous diffusion equation, using the calculus of pseudo-differential operators. \end{abstract} \section{Introduction}\label{sec1} In this paper we are concerned with the UCP (unique continuation property) of solutions of the anomalous diffusion equation. Even when discussing the UCP of a solution in $H^{\alpha, 2}(\Omega\times (0,T))$ (see below in this section for its definition) with zero Cauchy data on a small part $\Gamma$ of the $C^2$ boundary $\partial\Omega$ of a domain $\Omega\subset{\Bbb R}^n$ over some time interval, we can always consider the extension of the solution by $0$ outside $\Omega$ in a neighborhood of $\Gamma$. Hence, assuming $0\in\partial\Omega$ without loss of generality, we only need to consider the following setting for the UCP of solutions of the anomalous diffusion equation.
That is, let $\hat{y}=(\hat{y}_1,\cdots,\hat{y}_{n-1},0)\in \mathbf{R}^n$ and $\omega=\{(y_1,\cdots,y_n):\hat{y}_j-l<y_j<\hat{y}_j+l,\,-l<y_n\leq0,\ 1\leq j\leq n-1\}\subset\mathbf{R}^n$ with $l>0$, and consider the following equation for $0<\alpha<1$: \begin{equation}\label{1.1} \begin{cases} \begin{array}{l} \partial^{\alpha}_{t}u(t,y)-\Delta_y u(t,y) =l_1(t,y;\nabla_y)u(t,y),\\ u(t,y)=0\quad (t\leq0),\\ u(t,y)=0 \quad (y\in \omega, 0<t<T), \end{array} \end{cases} \end{equation} where $l_1$ is a linear differential operator of order $1$ and $\partial^{\alpha}_{t}u$ is the fractional derivative of $u$ in the Caputo sense, defined by $$\partial^{\alpha}_{t}u(t,y)=\frac{1}{\Gamma(1-\alpha)}((t^{-\alpha}H(t)\otimes\delta_y)\ast\partial_tu)(t,y),$$ where $H(t)$ is the Heaviside function and $\delta_y$ is the Dirac delta function with singularity at $y$. The anomalous diffusion equation was first studied in materials science, and the exponent $\alpha$ of $\partial^{\alpha}_{t}u$ in this equation is an index describing the long-time behavior of the mean square displacement $\langle x^2(t)\rangle\sim\mbox{\it positive const.}\,t^\alpha$ of a diffusive particle $x(t)$ undergoing anomalous diffusion on fractals such as some amorphous semiconductors or strongly porous materials (see \cite{Anh}, \cite{Metzler} and references therein). Recently, a strong impetus for the study of the anomalous diffusion equation has come from environmental science. Experiments showed that the spread of pollution in soils cannot be modeled correctly by the usual diffusion equation, but it can be modeled by an anomalous diffusion equation (see \cite{hat1}, \cite{hat2}). The Cauchy problem and initial boundary value problem for the anomalous diffusion equation have been studied by many people (see \cite{A}, \cite{E} and the references therein).
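To make the Caputo derivative concrete, the following illustrative script (ours, with hypothetical parameter choices) evaluates $\partial_t^\alpha u$ for $u(t)=t^2$ from the integral form $\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-s)^{-\alpha}u'(s)\,ds$ by quadrature, after the substitution $v=(t-s)^{1-\alpha}$ that removes the singularity, and compares the result with the fractional power rule $\partial_t^\alpha t^2=\frac{2}{\Gamma(3-\alpha)}t^{2-\alpha}$:

```python
import math

def caputo_t_squared(t, alpha, n=20001):
    """Caputo derivative of u(s) = s^2 at time t, for 0 < alpha < 1.
    After v = (t - s)^(1 - alpha), the integral becomes
    (1/(1-alpha)) * int_0^{t^(1-alpha)} u'(t - v^(1/(1-alpha))) dv,
    whose integrand is smooth; we use the composite trapezoidal rule."""
    b = t**(1 - alpha)
    h = b / (n - 1)

    def integrand(v):
        s = t - v**(1 / (1 - alpha))
        return 2 * s  # u'(s) = 2s

    total = 0.5 * (integrand(0.0) + integrand(b))
    total += sum(integrand(i * h) for i in range(1, n - 1))
    return h * total / ((1 - alpha) * math.gamma(1 - alpha))

alpha, t = 0.5, 1.3  # hypothetical sample values
exact = 2 * t**(2 - alpha) / math.gamma(3 - alpha)  # fractional power rule
assert abs(caputo_t_squared(t, alpha) - exact) < 1e-5
```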
The aim of this paper is to give a Carleman estimate for solutions of the anomalous diffusion equation which enables us to obtain the UCP of its solutions for any $\alpha\,(0<\alpha<1)$ and $n\in{\Bbb N}$. The UCP is a key to the study of control problems and inverse problems for this equation. It can give the approximate boundary controllability for the control problem, and it is very important for the inverse problem if one wants to develop, for instance, a linear sampling type reconstruction scheme to identify unknown objects such as cracks, cavities and inclusions inside an anomalous diffusive medium. Some Carleman estimates have been given for special cases. That is, for $\alpha=1/2$, a Carleman estimate was given in \cite{XCY}, \cite{ZX}, \cite{ZY} for $n=1$ and in \cite{CLN} for $n=2$, via that for the operator $\partial_t+\Delta^2$ with some lower order terms. Looking at the symbol of the anomalous diffusion equation, we can say that it is basically semi-elliptic. Based on this, we adapt Treves' argument (\cite{Treve}) to derive a Carleman estimate via a subelliptic estimate for the equation conjugated by $e^{\tau_0 t}$ with $\tau_0$ and transformed by a Holmgren transform. Our method has the potential to be applied to space-time fractional diffusion equations (\cite{Hanyga}) and some fractional derivative visco-elastic equations (\cite{Lu}). \par In order to state our results, let $\mathcal{S}(\mathbf{R}_t^1\times\mathbf{R}_x^n)$ and $\mathcal{S}'(\mathbf{R}_t^1\times\mathbf{R}_x^n)$ be the set of rapidly decreasing functions in $\mathbf{R}_t^1\times\mathbf{R}_x^n$ and its dual space, respectively.
Then, for $m,s\in \mathbf{R}$, a distribution $v=v(t,x)\in \mathcal{S}'(\mathbf{R}_t^1\times\mathbf{R}_x^n)$ belongs to the function space $H^{m,s}(\mathbf{R}_t\times\mathbf{R}_x^n)$ if $$\|v\|^2_{H^{m,s}}:=\iint(1+|\xi|^s+|\tau|^m)^2|\hat{v}|^2d\tau d\xi$$ is finite, and $\|v\|_{H^{m,s}}$ denotes the norm of $v$ in this function space, where $\hat{v}$ is the Fourier transform defined by $$\mathcal{F}(v)(\tau,\xi)=\hat{v}(\tau,\xi)=\iint e^{-it\tau-ix\cdot\xi}v(t,x)dxdt.$$ Further, we define $H^{m,s}(\Omega\times(0,T))$ as the restriction of $H^{m,s}(\mathbf{R}_t\times\mathbf{R}_x^n)$ to $\Omega\times(0,T)$. In this paper, we will give a Carleman estimate and the following UCP of solutions of \eqref{1.1}. \begin{theorem}\label{thm1.1} Let $u\in H^{\alpha,2}(\mathbf{R}^{1+n})$ satisfy \eqref{1.1}. Then $u$ vanishes across $y_n=0$. \end{theorem} The rest of this paper is organized as follows. In Section 2, we compute the principal symbol of the anomalous diffusion operator after a Holmgren-type transformation and multiplication by an exponential function of the time variable, and we analyze its properties. In Section 3, we derive a subelliptic estimate for a pseudo-differential operator associated to this operator. Using this subelliptic estimate, the Carleman estimate is derived in Section 4. Finally, in the last section, we give the UCP of solutions of the anomalous diffusion equation. \section{Change of variables and principal symbol}\label{sec2} The UCP is a local property of solutions of \eqref{1.1}.
Hence we consider $\theta(t)\kappa(y_n)u(t,y)$ with $\kappa(y_n)\in C^\infty(\mathbf{R})$ and $\theta(t)\in C^\infty(\mathbf{R})$ defined by \begin{equation*} \begin{array}{l} \kappa(y_n)= \begin{cases} 0,\quad y_n\leq -2l/3,\\ 1, \quad y_n\geq -l/3 \end{cases} \end{array} \end{equation*} and \begin{equation*} \begin{array}{l} \theta(t)= \begin{cases} 1, \quad t\leq T-\epsilon,\\ 0, \quad t\geq T-\epsilon/2 \end{cases} \end{array} \end{equation*} for small positive $\epsilon$. By abusing the notation $u(t,y)$ to denote $\theta(t)\kappa(y_n)u(t,y)$, this $u(t,y)$ satisfies \begin{equation}\label{2.1} \begin{cases} \begin{array}{l} \partial^{\alpha}_{t}u(t,y)-\Delta_y u(t,y) =l_1(t,y;\nabla_y)u(t,y),\\ u(t,y)=0\quad (t\leq0),\\ u(t,y)=0 \quad (y_n\leq 0\quad {\rm or}\quad t\geq T). \end{array} \end{cases} \end{equation} Then our aim is to show that $u$ considered on $\{y_n\geq 0\}\cap \{t\leq T-\epsilon\}$ where we have $\theta(t)\kappa(y_n)=1$ will be zero across $y_n=0$. To have UCP of solutions of \eqref{2.1} in the $y_n$-direction by a Carleman estimate, we use the change of variables $x'=y'-\hat{y}', x_n=y_n+|y'-\hat{y}'|^2+\frac{X}{T}(t-T), t=t$, where $x'=(x_1,\cdots,x_{n-1})$, $y'=(y_1,\cdots,y_{n-1})$, $\hat{y}'=(\hat{y}_1,\cdots,\hat{y}_{n-1})$ and $X$ is a small positive constant which will be determined later. This is a Holmgren type transformation. 
Since under this change of variables $\partial_t$ transforms into $\frac{X}{T}\partial_{x_n}+\partial_t$ and $\partial_{y_j}=\partial_{x_j}+2x_j\partial_{x_n}$ for $j=1,\cdots,n-1$, we have \begin{equation}\label{2.2} \begin{array}{l} \partial_{t}^\alpha u(t,y)=\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\eta)^{-\alpha}\partial_\eta u(\eta,y)d\eta\\ =\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\eta)^{-\alpha}\partial_\eta u(\eta,x)d\eta +\frac{X}{T\Gamma(1-\alpha)}\int_0^t(t-\eta)^{-\alpha}\partial_{x_n}u(\eta,x)d\eta, \end{array} \end{equation} where $\Gamma(\cdot)$ denotes the Gamma function and $u(t,x)$ with $t=\eta$ on the right-hand side of \eqref{2.2} is the push-forward of $u(t,y)$ under the change of variables. We further abuse notation by writing $u(t,x)$ for this $u(t,x)$ multiplied by $e^{\tau_0 t}$ with $\tau_0<0$. Then this new $u(t,x)$ also satisfies $${\rm supp}\, u\subset\{x_n\geq -X\}.$$ For further arguments, we define a function space $\dot{H}^m_{\alpha}(\overline{\mathbf{R}_+^{1+n}})$ for $m\in \mathbf{R}$ as follows. First, let $(t,x)\in\mathbf{R}^{1+m}$, and denote by $\mathbf{R}_+^{1+m}$ the open half-space of $\mathbf{R}^{1+m}$ defined by $t>0$, and by $\mathbf{R}_-^{1+m}$ the complement of its closure $\bar{\mathbf{R}}_+^{1+m}$. If $E$ is a space of distributions in $\mathbf{R}^{1+m}$, we use the notation $\bar{E}(\mathbf{R}_+^{1+m})$ for the space of restrictions to $\mathbf{R}_+^{1+m}$ of elements of $E$, and we write $\dot{E}(\bar{\mathbf{R}}_+^{1+m})$ for the set of distributions in $E$ supported in $\bar{\mathbf{R}}_+^{1+m}$. Next, set $$\Lambda_{\alpha}^m(\tau,\xi)=((1+|\xi|^2)^{1/\alpha}+i\tau)^{m\alpha/2} \quad \mbox{for}\,\,m\in \mathbf{R}$$ and define a pseudo-differential operator $\Lambda_{\alpha}^m(D_t,D_x)$ by $\Lambda_{\alpha}^m(D_t,D_x)\phi:=\mathcal{F}^{-1}(\Lambda_{\alpha}^m(\tau,\xi)\hat{\phi})$, $\phi\in \dot{\mathcal{S}}(\overline{\mathbf{R}_+^{1+n}})$, where $\mathcal{F}^{-1}$ is the inverse Fourier transform.
Then $\Lambda_{\alpha}^m(D_t,D_x)$ is a continuous map on $\dot{\mathcal{S}}(\overline{\mathbf{R}_+^{1+n}})$ with inverse $\Lambda_{\alpha}^{-m}(D_t,D_x)$ (see Theorem B.2.4 in \cite{Hormander}). Based on this, we define $\dot{H}^m_{\alpha}(\overline{\mathbf{R}_+^{1+n}})$ by $\dot{H}^m_{\alpha}(\overline{\mathbf{R}_+^{1+n}}):=\Lambda_{\alpha}^{-m}L^2(\mathbf{R}_+^{1+n})$, where we consider $L^2(\mathbf{R}_+^{1+n})$ as a subset of $\dot{\mathcal{S}}'(\overline{\mathbf{R}_+^{1+n}})$. Now denote by $P(t,x;D_t,D_x)$ the operator $e^{\tau_0 t}(\partial_t^\alpha-\Delta_y)$ in terms of the coordinates $(t,x)$, and by $p(t,x;\tau,\xi)$ its total symbol, where $D_t=-\sqrt{-1}\partial_t$, $D_x=-\sqrt{-1}\partial_x$. Then it is easy to see that \begin{equation*} \begin{array}{rl} p(t,x;\tau,\xi)&=(i(\tau+i\tau_0))^\alpha+\Sigma_{j=1}^{n-1}(\xi_j+2x_j\xi_n)^2+\xi_n^2+\frac{X}{T}i^\alpha(\tau+i\tau_0)^{\alpha-1}\xi_n\\ &=(i(\tau+i\tau_0))^\alpha+|\xi'|^2+4g\xi_n+f\xi_n^2+\frac{X}{T}i^\alpha(\tau+i\tau_0)^{\alpha-1}\xi_n, \end{array} \end{equation*} where $g=g(x',\xi')=\Sigma_{j=1}^{n-1}x_j\xi_j$ and $f=f(x')=1+4|x'|^2$. Also, by a Paley-Wiener type theorem, $P(t,x;D_t,D_x)$ maps $\dot{H}^m_{\alpha}(\overline{\mathbf{R}_+^{1+n}})$ into $\dot{H}^{m-2}_{\alpha}(\overline{\mathbf{R}_+^{1+n}})$ for any $m\in \mathbf{R}$. We want to derive a Carleman estimate for the operator $P(t,x;D_t,D_x)$ using a subelliptic estimate, based on an idea of F. Treves (\cite{Treve}).
To this end, let $\psi=\frac{1}{2}(x_n-X)^2$ and consider the symbol $p(x;\tau,\xi+i|\sigma|\nabla\psi)$ over $\mathbf{R}^{n+1}\times\mathbf{R}_z$ to define a pseudo-differential operator $P_\psi(x,D_t,D_{x},D_z)$ by $$P_\psi=P_\psi(x,D_t,D_{x},D_z)=p(x,D_{t},D_x+i|D_z|\nabla \psi),$$ which is given for any compactly supported distribution $v$ in $\mathbf{R}^{n+1}\times\mathbf{R}_z$ by \begin{equation}\label{2.3} \begin{array}{rl} P_\psi v(z,t,x)=\int e^{i(x\cdot\xi+t\tau+z\sigma)}p(x;\tau,\xi+i|\sigma|\nabla\psi)\hat{v}(\sigma,\xi,\tau)d\sigma d\tau d\xi. \end{array} \end{equation} We denote the principal symbol of $P_\psi$ by $\tilde{p}_\psi$, which is given by \begin{equation*} \begin{array}{rl} \tilde{p}_\psi=&(i(\tau+i\tau_0))^\alpha+|\xi'|^2+4g\xi_n+f\xi_n^2-f|\sigma|^2(x_n-X)^2\\ &+i4g(x_n-X)|\sigma|+i2f\xi_n(x_n-X)|\sigma|. \end{array} \end{equation*} It is important to note here that $\tilde{p}_\psi$ is independent of $t$ and $z$. This is an advantage when computing the Poisson bracket of $\tilde{p}_\psi$. By the definition of the Poisson bracket, \begin{equation}\label{Poisson bracket} \{\Re \tilde{p}_\psi, \Im \tilde{p}_\psi\}=\Sigma_{j=1}^{n}(\partial_{\xi_j}\Re \tilde{p}\cdot\partial_{x_j}\Im \tilde{p} -\partial_{x_j}\Re \tilde{p}\cdot\partial_{\xi_j}\Im \tilde{p}) \end{equation} with the real part $\Re \tilde{p}_\psi$ and imaginary part $\Im \tilde{p}_\psi$ of $\tilde{p}_\psi$ given by \begin{equation*} \begin{array}{ll} \Re \tilde{p}_\psi=\Re(i(\tau+i\tau_0))^\alpha+|\xi'|^2+4g\xi_n+f\xi_n^2-f(x_n-X)^2|\sigma|^2,\\ \Im \tilde{p}_\psi=\Im(i(\tau+i\tau_0))^\alpha+4g(x_n-X)|\sigma|+2f\xi_n(x_n-X)|\sigma|. \end{array} \end{equation*} Note that no terms involving $t$ or $z$ derivatives appear in \eqref{Poisson bracket}.
A direct computation gives \begin{equation*} \begin{array}{l} \nabla_\xi\Re \tilde{p}_\psi=2(\xi',0)+4\xi_n(x',0)+4g(0',1)+2f\xi_n(0',1)\\ \nabla_x\Im \tilde{p}_\psi=4(x_n-X)|\sigma|(\xi',0)+4g|\sigma|(0',1)+16\xi_n(x_n-X)|\sigma|(x',0)+2f\xi_n|\sigma|(0',1)\\ \nabla_x\Re \tilde{p}_\psi=4\xi_n(\xi',0)+8\xi_n^2(x',0)-8(x_n-X)^2|\sigma|^2(x',0)-2f(x_n-X)|\sigma|^2(0',1)\\ \nabla_\xi\Im \tilde{p}_\psi=4(x_n-X)|\sigma|(x',0)+2f(x_n-X)|\sigma|(0',1), \end{array} \end{equation*} where $0'$ denotes the zero vector in $\mathbf{R}^{n-1}$. Thus \begin{equation*} \begin{array}{rl} &\Sigma_{j=1}^{n}(\partial_{\xi_j}\Re \tilde{p}\cdot\partial_{x_j}\Im \tilde{p})\\ =&8(x_n-X)|\sigma||\xi'|^2+48g\xi_n(x_n-X)|\sigma|+64|x'|^2\xi_n^2(x_n-X)|\sigma|\\ &+16g^2|\sigma|+16fg\xi_n|\sigma|+4f^2\xi_n^2|\sigma| \end{array} \end{equation*} and \begin{equation*} \begin{array}{rl} &\Sigma_{j=1}^{n}(\partial_{x_j}\Re \tilde{p}\cdot\partial_{\xi_j}\Im \tilde{p})\\ =&16g\xi_n(x_n-X)|\sigma|+32|x'|^2\xi_n^2(x_n-X)|\sigma|-32|x'|^2(x_n-X)^3|\sigma|^3-4f^2(x_n-X)^2|\sigma|^3. \end{array} \end{equation*} Observing that $$ (i(\tau+i\tau_0))^\alpha=|\tau+i\tau_0|^\alpha[\cos(\alpha \arg(i(\tau+i\tau_0)))+i\sin(\alpha \arg(i(\tau+i\tau_0)))], $$ we have for some $\epsilon_0>0$ \begin{equation}\label{3.4} \begin{array}{l} \Re (i(\tau+i\tau_0))^\alpha\geq\epsilon_0|\tau|^\alpha. \end{array} \end{equation} From $\Re \tilde{p}_\psi=0$, $|x'|^2\leq X/4$ with $X\ll 1$ and \eqref{3.4}, we have \begin{equation}\label{2.4} \begin{array}{l} (x_n-X)^2|\sigma|^2\gtrsim |\tau|^\alpha+|\xi|^2+\sigma^2. \end{array} \end{equation} Since $(x_n-X)^2$ remains bounded on the region under consideration, \eqref{2.4} also forces $|\sigma|\gtrsim(|\tau|^\alpha+|\xi|^2+\sigma^2)^{1/2}$; the dominant term $4f^2(x_n-X)^2|\sigma|^3$ in the bracket computed above then gives \begin{equation}\label{2.5} \begin{array}{l} \{\Re \tilde{p}_\psi, \Im \tilde{p}_\psi\}\gtrsim (|\tau|^\alpha+|\xi|^2+\sigma^2)^{3/2}. \end{array} \end{equation} \section{Subelliptic estimates}\label{sec3} In this section we will show the following subelliptic estimate for the operator $P_\psi$. 
\begin{lemma}\label{lem3.2} There exists a sufficiently small constant $z_0$ such that for all $u(t,x,z)\in C_0^{\infty}(U\times[-z_0,z_0])\cap\dot{\mathcal{S}}(\bar{\mathbf{R}}_+^{1+n+1})$, we have that \begin{equation}\label{3.1} \begin{array}{l} \Sigma_{k+s<2}||h(D_z)^{2-k-s}\Lambda_\alpha^{s}D_z^k u|| \lesssim||P_\psi u||, \end{array} \end{equation} where $h(D_z)=(1+D_z^2)^{1/4}$ and $U$ is a small open neighborhood of the origin in $\mathbf{R}^{1+n}$. \end{lemma} \noindent{\bf Proof}. Let us suppose there is a finite open covering of $$|\xi|^2+\sigma^2+|\tau|^\alpha=1$$ and a subordinate smooth partition of unity $\chi_\nu(\xi,\tau,\sigma)$ homogeneous of degree $0$ in terms of the scaling $$ (\xi,\tau,\sigma)\mapsto (\eta\xi,\eta^{2/\alpha}\tau,\eta\sigma) $$ for $\eta>0$ such that $\Sigma\chi_\nu^2=1$ and \begin{equation}\label{3.2} \begin{array}{l} ||\chi_\nu(D_x,D_t,D_z)h(D_z) u||^2_{H^{\alpha/2,1}} \lesssim||P_\psi\chi_\nu u||_{L^2}^2+||u||^2_{H^{\alpha/2,1}}. \end{array} \end{equation} Here we have abused the notation $\Vert u\Vert_{H^{m,s}}$ with $m,\,s\in\mathbf{R}$ to denote the norm for $u(t,x,z)\in\dot{\mathcal{S}}(\bar{\mathbf{R}}_+^{1+n+1})$ given by $$ \Vert u\Vert_{H^{m,s}}^2:=\int\int\int(1+|\tau|^m+|\xi|^s+|\sigma|^s)^2|\hat u|^2\,d\tau\,d\xi\,d\sigma. $$ Summing over all $\nu$ in \eqref{3.2}, we obtain from elementary estimates that \begin{equation}\label{3.3} \begin{array}{l} ||h(D_z) u||^2_{H^{\alpha/2,1}} \lesssim||P_\psi u||_{L^2}^2 \end{array} \end{equation} which implies \eqref{3.1} for a small enough $z_0$. So, it suffices to work microlocally, that is, to establish \eqref{3.2}. We divide the proof of \eqref{3.2} into two cases. If $\chi_\nu$ is supported in a small neighborhood of $\sigma=0$, then $|\xi|^2+|\tau|^\alpha\geq \delta_0>0$. 
We recall that the principal symbol $\tilde{p}_\psi$ of $P_\psi$ is \begin{equation*} \begin{array}{rl} \tilde{p}_\psi=&(i(\tau+i\tau_0))^\alpha+|\xi'|^2+4g\xi_n+f\xi_n^2-f|\sigma|^2(x_n-X)^2\\ &+i4g(x_n-X)|\sigma|+i2f\xi_n(x_n-X)|\sigma|. \end{array} \end{equation*} A direct computation gives for small enough $X$ that \begin{equation}\label{3.5} \begin{array}{rl} |\tilde{p}_\psi|\geq|\Re\tilde{p}_\psi|&=\Re (i(\tau+i\tau_0))^\alpha+|\xi'|^2+4g\xi_n+f\xi_n^2-f|\sigma|^2(x_n-X)^2\\ &\geq \epsilon_0(|\xi|^2+|\tau|^\alpha)-f|\sigma|^2(x_n-X)^2\\ &\gtrsim \delta_0\gtrsim |\xi|^2+\sigma^2+|\tau|^\alpha, \end{array} \end{equation} where $\epsilon_0$ is the constant in \eqref{3.4} and $\delta_0$ is a small positive constant. So, we obtain an even stronger estimate than \eqref{3.2} in this case. On the other hand, if the support of $\chi_\nu$ is bounded away from $\sigma=0$, then $\sigma^2\geq \delta_1(|\xi|^2+|\tau|^\alpha)$ with a positive constant $\delta_1$. We write $||P_\psi\chi_\nu u||_{L^2}^2=(P_\psi\chi_\nu u,P_\psi\chi_\nu u)$ as follows: \begin{equation}\label{3.6} \begin{array}{rl} ||P_\psi\chi_\nu u||_{L^2}^2&=(P_\psi\chi_\nu u,P_\psi\chi_\nu u)\\ &=(P^*_\psi P_\psi\chi_\nu u,\chi_\nu u)\\ &=(P_\psi P^*_\psi\chi_\nu u,\chi_\nu u)+([P^*_\psi, P_\psi]\chi_\nu u,\chi_\nu u)\\ &=\big((I-\eta B)P_\psi P^*_\psi\chi_\nu u,\chi_\nu u\big)+\big(([P^*_\psi, P_\psi]+\eta BP_\psi P^*_\psi)\chi_\nu u,\chi_\nu u\big), \end{array} \end{equation} where the principal symbol of the commutator $[P^*_\psi, P_\psi]$ is $[\overline{\tilde{p}_\psi}, \tilde{p}_\psi]=2\{\Re \tilde{p}_\psi, \Im \tilde{p}_\psi\}$ which has already been studied in Section \ref{sec2}, $\eta$ is a large positive constant, and $B=\Lambda^{-1/2}(\Lambda^{-1/2})^*$ with an elliptic pseudo-differential operator $\Lambda$ whose principal symbol is $(|\tau|^\alpha+|\xi|^2+\sigma^2)^{1/2}$. 
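For orientation (our reading of the construction, modulo lower-order terms), the principal symbol of the added term $\eta BP_\psi P^*_\psi$ in \eqref{3.6} is

```latex
\[
\eta\,\big(|\tau|^\alpha+|\xi|^2+\sigma^2\big)^{-1/2}\,|\tilde{p}_\psi|^2,
\]
```

since $B=\Lambda^{-1/2}(\Lambda^{-1/2})^*$ has principal symbol $(|\tau|^\alpha+|\xi|^2+\sigma^2)^{-1/2}$ and $P_\psi P^*_\psi$ has principal symbol $\tilde{p}_\psi\overline{\tilde{p}_\psi}=|\tilde{p}_\psi|^2$; this accounts for the first term on the left-hand side of \eqref{3.7}.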
From \eqref{2.5}, we have that the principal symbol of $[P^*_\psi, P_\psi]+\eta BP_\psi P^*_\psi$ satisfies \begin{equation}\label{3.7} \begin{array}{l} \eta(|\xi|^2+\sigma^2+|\tau|^\alpha)^{-1/2}|\tilde{p}_\psi|^2+2\{\Re \tilde{p}_\psi, \Im \tilde{p}_\psi\}\gtrsim (|\xi|^2+\sigma^2+|\tau|^\alpha)^{3/2}. \end{array} \end{equation} By G{\aa}rding's inequality and \eqref{3.6}, we obtain for large enough $\eta$ that \begin{equation}\label{3.8} \begin{array}{rl} ||\chi_\nu u||^2_{H^{3\alpha/4,3/2}} \lesssim||P_\psi\chi_\nu u||_{L^2}^2+||u||^2_{H^{\alpha/2,1}} \end{array} \end{equation} which proves \eqref{3.2}. \section{Carleman estimates}\label{sec4} In this section, we will derive a Carleman estimate from \eqref{3.1} by conjugating $u$ in \eqref{3.1} by $e^{i\beta z}$ with the large parameter $\beta$. \begin{lemma}\label{lem4.1} There exists a sufficiently large constant $\beta_1$ depending on $n$ such that for all $v(t,x)\in C_0^{\infty}(U)\cap\dot{\mathcal{S}}(\bar{\mathbf{R}}_+^{1+n})$ and $\beta\geq \beta_1$, we have that \begin{equation}\label{4.1} \begin{array}{l} \sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int e^{2\beta\psi(x)}|D_x^\gamma v|^2dtdx \lesssim \int e^{2\beta\psi(x)}|P(t,x;D_t,D_x) v|^2dtdx. \end{array} \end{equation} \end{lemma} \noindent{\bf Proof}. Let $u\in C^\infty_0(U\times(-z_0,z_0))$ and $\beta\in \mathbf{R}$. We denote $$u\hat{}(\sigma,t,x)=\int u(z,t,x)e^{-i\sigma z}dz.$$ Then \begin{equation}\label{4.2} \begin{array}{rl} e^{-i\beta z}P_\psi (e^{i\beta z}u) =\int e^{i(\sigma-\beta)z}p(t,x,D_t,D_x+i|\sigma|\nabla\psi)u\hat{}(\sigma-\beta,t,x)d\sigma. 
\end{array} \end{equation} By the Leibniz formula, we obtain that for any smooth function $w=w(t,x)$, \begin{equation}\label{4.3} \begin{array}{rl} &p(t,x,D_t,D_x+i|\sigma|\nabla\psi)w\\ =&e^{|\sigma|\psi}p(t,x,D_t,D_x) e^{-|\sigma|\psi}w\\ =&e^{(|\sigma|-|\beta|)\psi}e^{|\beta|\psi}p(t,x,D_t,D_x) e^{-(|\sigma|-|\beta|)\psi}e^{-|\beta|\psi}w\\ =&p(t,x,D_t,D_x+i|\beta|\nabla\psi)w+\Sigma_{j+k+|\gamma|\le 2,\,j>0}C_{j,k,\gamma}(x)(|\sigma|-|\beta|)^j\beta^kD_x^\gamma w\\ &+C_0(|\sigma|-|\beta|)D_t^{\alpha-1} w, \end{array} \end{equation} where the last term $C_0(|\sigma|-|\beta|)D_t^{\alpha-1} w$ comes from the last term of \eqref{2.2} whose symbol is $\frac{X}{T}i^\alpha(\tau+i\tau_0)^{\alpha-1}\xi_n$. Now, letting $u(z,t,x)=f(t,x)g(z)$, we have $u\hat{}(\sigma-\beta,t,x)=f(t,x)\hat{g}(\sigma-\beta)$. Applying \eqref{4.3} to $w=f(t,x)\hat{g}(\sigma-\beta)$, we have from \eqref{4.2} and \eqref{4.3} \begin{equation}\label{4.4} \begin{array}{rl} &e^{-i\beta z}P_\psi (e^{i\beta z}f(t,x)g(z))\\ =&\int e^{i(\sigma-\beta)z}p(t,x,D_t,D_x+i|\sigma|\nabla\psi)f(t,x)\hat{g}(\sigma-\beta)d\sigma\\ =&\int e^{i(\sigma-\beta)z}p(t,x,D_t,D_x+i|\beta|\nabla\psi)f(t,x)\hat{g}(\sigma-\beta)d\sigma\\ &+\int e^{i(\sigma-\beta)z}\Sigma_{j+k+|\gamma|\le 2,\,j>0}C_{j,k,\gamma}(x)(|\sigma|-|\beta|)^j\beta^kD_x^\gamma f(t,x)\hat{g}(\sigma-\beta)d\sigma\\ &+\int e^{i(\sigma-\beta)z}C_0(|\sigma|-|\beta|)D_t^{\alpha-1} f(t,x)\hat{g}(\sigma-\beta)d\sigma\\ =&g(z)p(t,x,D_t,D_x+i|\beta|\nabla\psi)f(t,x)\\ &+\Sigma_{j+k+|\gamma|\le 2,\,j>0}C_{j,k,\gamma}(x)G_j(\beta)g(z)\beta^kD_x^\gamma f(t,x)\\ &+C_0D_t^{\alpha-1} f(t,x)G_1(\beta)g(z), \end{array} \end{equation} where $G_j(\beta)g(z)=\int e^{i(\sigma-\beta)z}(|\sigma|-|\beta|)^j\hat{g}(\sigma-\beta)d\sigma =\int e^{i\sigma z}(|\sigma+\beta|-|\beta|)^j\hat{g}(\sigma)d\sigma$. 
By the Plancherel theorem and the elementary inequality $\big||\sigma+\beta|-|\beta|\big|\leq|\sigma|$, we have \begin{equation}\label{4.5} \begin{array}{l} ||G_j(\beta)g(z)||^2\lesssim \int |\sigma|^{2j}|\hat{g}(\sigma)|^2d\sigma\lesssim||g||^2_{H^j(\mathbf{R})}. \end{array} \end{equation} Let $g\in C_0^\infty((-z_0,z_0))$ be any non-zero function. Combining \eqref{4.4} and \eqref{4.5}, we obtain that \begin{equation}\label{4.6} \begin{array}{rl} &||P_\psi (e^{i\beta z}f(t,x)g(z))||\\ \lesssim&||g||\cdot||p(t,x,D_t,D_x+i|\beta|\nabla\psi)f||+\Sigma_{j+k+|\gamma|\le 2,j>0}\beta^k\cdot||g||_{H^j(\mathbf{R})}\cdot||D_x^\gamma f||\\ &+||D_t^{\alpha-1} f||\cdot||g||_{H^1(\mathbf{R})}\\ \lesssim&||p(t,x,D_t,D_x+i|\beta|\nabla\psi)f||+\Sigma_{k+|\gamma|<2}\beta^k\cdot||D_x^\gamma f||+||D_t^{\alpha-1} f||. \end{array} \end{equation} On the other hand, we need to bound the left hand side of \eqref{3.1} from below. We let $u(t,x,z)=e^{i\beta z}f(t,x)g(z)$ in \eqref{3.1}. A direct computation gives that \begin{equation}\label{4.7} \begin{array}{rl} e^{-i\beta z}h(D_z)^jD_z^k(e^{i\beta z}g(z))&=(2\pi)^{-1}\int e^{i(\sigma-\beta)z}h(\sigma)^j\sigma^k\hat{g}(\sigma-\beta)d\sigma\\ &=(2\pi)^{-1}\int e^{i\sigma z}h(\sigma+\beta)^j(\sigma+\beta)^k\hat{g}(\sigma)d\sigma\\ &=(2\pi)^{-1}\int e^{i\sigma z}[h(\sigma+\beta)-h(\beta)+h(\beta)]^j(\sigma+\beta)^k\hat{g}(\sigma)d\sigma\\ &=h(\beta)^j\beta^kg(z)+\Sigma_{j',k'\in J}C_{j',k'}h(\beta)^{j-j'}\beta^{k-k'}H_{j',k'}(\beta)g(z), \end{array} \end{equation} where $J=\{(j',k'):0\leq j'\leq j,0\leq k'\leq k\}\setminus\{(0,0)\}$ and $H_{j',k'}(\beta)g(z)=\int e^{i\sigma z}[h(\sigma+\beta)-h(\beta)]^{j'}\sigma^{k'}\hat{g}(\sigma)d\sigma$. By \eqref{4.7}, we have for large $\beta$ that \begin{equation}\label{4.8} \begin{array}{l} \begin{cases} ||e^{-i\beta z}h(D_z)^jD_z^k(e^{i\beta z}g(z))-h(\beta)^j\beta^kg(z)||\lesssim h(\beta)^j\beta^{k}(h(\beta)^{-1}+\beta^{-1})\\ ||h(D_z)^jD_z^k(e^{i\beta z}g(z))||\gtrsim h(\beta)^{j}\beta^{k}. 
\end{cases} \end{array} \end{equation} By \eqref{4.8}, we have for large $\beta$ \begin{equation}\label{4.9} \begin{array}{rl} &||h(D_z)^j\Lambda_\alpha^{s} D_z^k(u)||\\ =&||\Lambda_{\alpha}^{s} f(t,x)e^{-i\beta z}h(D_z)^jD_z^k(e^{i\beta z}g(z))||\\ =&||\Lambda_{\alpha}^{s} f(t,x)[e^{-i\beta z}h(D_z)^jD_z^k(e^{i\beta z}g(z))-h(\beta)^j\beta^kg(z)+h(\beta)^j\beta^kg(z)]||\\ \gtrsim&h(\beta)^{j}\beta^{k}||\Lambda_{\alpha}^{s} f||. \end{array} \end{equation} Recalling that $h(\beta)\simeq\beta^{1/2}$, we obtain by \eqref{4.9} and \eqref{3.1} \begin{equation}\label{4.10} \begin{array}{rl} \sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int |D_x^\gamma f|^2dtdx&\lesssim\Sigma_{k+s<2}h(\beta)^{2(2-k-s)}\beta^{2k}||\Lambda_{\alpha}^{s} f||^2\\ &\lesssim\Sigma_{k+s<2}||h(D_z)^{2-k-s}\Lambda_{\alpha}^{s} D_z^k(u)||^2\\ &\lesssim||P_\psi u||^2\\ &=||P_\psi(e^{i\beta z}f(t,x)g(z))||^2. \end{array} \end{equation} Combining \eqref{4.10} and \eqref{4.6}, we have for large enough $\beta$ that \begin{equation}\label{4.11} \begin{array}{rl} \sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int |D_x^\gamma f|^2dtdx &\lesssim||p(t,x,D_t,D_x+i|\beta|\nabla\psi)f||^2. \end{array} \end{equation} Letting $f=e^{\beta \psi}v$ in \eqref{4.11} and noting from \eqref{4.3} that $p(t,x,D_t,D_x+i|\beta|\nabla\psi)(e^{\beta\psi}v)=e^{\beta\psi}P(t,x;D_t,D_x)v$, we immediately have \eqref{4.1}. \eproof \section{Proof of Theorem \ref{thm1.1}}\label{sec5} This section is devoted to the proof of the main theorem, Theorem \ref{thm1.1}. Since $u\in H^{\alpha,2}(\mathbf{R}^{1+n})$ and $u(t,y)=0\quad (t\leq0)$, we can find a sequence $\{u_m\}$ in $\mathcal{S}(\bar{\mathbf{R}}_+^{1+n})$ which converges to $u$ in $H^{\alpha,2}(\mathbf{R}^{1+n})$. A limiting argument in \eqref{4.1} implies that we can assume $u\in \mathcal{S}(\bar{\mathbf{R}}_+^{1+n})$. To apply Lemma \ref{lem4.1}, we first recall $${\rm supp}\, u\subset\{x_n\geq -X\}$$ and then define a smooth function $\chi$ by \begin{equation}\label{5.1} \chi (x_n)= \begin{cases} \begin{array}{l} 1,\quad x_n\leq X/2,\\ 0,\quad x_n\geq X. 
\end{array} \end{cases} \end{equation} From \eqref{2.1}, it is not hard to get $\chi u\in C_0^{\infty}(U)\cap\dot{\mathcal{S}}(\bar{\mathbf{R}}_+^{1+n})$. Thus, we can apply the Carleman estimate \eqref{4.1} to $\chi u$ and get that \begin{equation}\label{5.2} \begin{array}{rl} &\sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int_{x_n\leq X/2} e^{2\beta\psi(x)}|D^\gamma u|^2dtdx\\ \leq &\sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int e^{2\beta\psi(x)}|D^\gamma (\chi u)|^2dtdx\\ \lesssim &\int e^{2\beta\psi(x)}|P(t,x;D_t,D_x) (\chi u)|^2dtdx\\ \lesssim &\sum_{|\gamma|\leq1}\int_{x_n\leq X/2} e^{2\beta\psi(x)}|D^\gamma u|^2dtdx+\int_{X/2<x_n\leq X} e^{2\beta\psi(x)}|[P,\chi] u|^2dtdx, \end{array} \end{equation} where $[\cdot,\cdot]$ denotes the commutator. Taking $\beta$ large enough to absorb the first term on the right hand side of \eqref{5.2}, we get from \eqref{5.2} that \begin{equation}\label{5.3} \begin{array}{rl} &\beta^{3}\int_{x_n\leq X/4} e^{9\beta X^2/16}|u|^2dtdx\\ \leq &\sum_{|\gamma|\leq1}\beta^{3-2|\gamma|}\int_{x_n\leq X/2} e^{2\beta\psi(x)}|D^\gamma u|^2dtdx\\ \lesssim &C(u)e^{\beta X^2/4}. \end{array} \end{equation} Letting $\beta$ tend to $\infty$, we obtain that $u=0$ on $x_n\leq X/4$. Recalling that $x_n=y_n+|y'-\hat{y}'|^2+\frac{X}{T}(t-T)$, the proof is complete. \end{document}
\begin{document} \title{Pro-species of algebras I: Basic properties} \author{Julian K\"ulshammer} \date{\today} \address{ Institute of Algebra and Number Theory, University of Stuttgart \\ Pfaffenwaldring 57 \\ 70569 Stuttgart, Germany} \email{[email protected]} \thanks{The author would like to thank Chrysostomos Psaroudakis, Sondre Kvamme, and the anonymous referee for helpful comments on previous versions of the paper.} \keywords{species, valued quiver, preprojective algebra, reflection functors, separated quiver} \begin{abstract} In this paper, we generalise part of the theory of hereditary algebras to the context of pro-species of algebras. Here, a pro-species is a generalisation of Gabriel's concept of species gluing algebras via projective bimodules along a quiver to obtain a new algebra. This provides a categorical perspective on a recent paper by Gei\ss, Leclerc, and Schr\"oer \cite{GLS16}. In particular, we construct a corresponding preprojective algebra, and establish a theory of a separated pro-species yielding a stable equivalence between certain functorially finite subcategories. \end{abstract} \maketitle \section{Introduction} The representation theory of finite dimensional hereditary algebras is among the best understood theories to date. Over algebraically closed fields, hereditary algebras are given by path algebras of finite acyclic quivers. Over more general fields, species, introduced by Gabriel \cite{Gab73}, form another class of hereditary algebras, which in the case that the ground field is perfect exhaust all finite dimensional hereditary algebras. Species can be regarded as skew fields glued via bimodules along a quiver to obtain an algebra. Their representation theory was studied intensively by Dlab and Ringel in a series of papers \cite{DR74a, DR74b, DR75, DR76, DR80}. 
Recently, Gei\ss, Leclerc, and Schr\"oer \cite{GLS16} defined algebras by quivers and relations, which turn out to have a species-like behaviour -- although they can as well be defined over algebraically closed fields. These can be viewed as various $\mathbbm{k}[x_i]/(x_i^{c_i})$ glued via bimodules which are free from both sides along a quiver. In \cite{GLS16} part of the representation theory of species has been generalised to these algebras, resulting in an analogue of Gabriel's theorem. Their theory has been partially generalised by Fang Li and Chang Ye \cite{LY15} to Frobenius algebras glued via bimodules which are free from both sides. They do not use the language of species but instead work with upper triangular matrix rings. In this paper, we generalise part of the theory of species to what we call pro-species of algebras, that is, we generalise species by gluing arbitrary algebras (not necessarily skew fields) via bimodules which are projective from both sides along a quiver. Our goal is to give a conceptual approach to the papers \cite{GLS16} and \cite{LY15}, and provide some additional results for this theory. The philosophy is that the representation theory of a pro-species $\Lambda$ consisting of algebras $\Lambda_{\mathtt{i}}$ glued via bimodules $\Lambda_\alpha$ which are projective from both sides along a quiver $Q$ is in some parts governed by the individual representation theories of the $\Lambda_{\mathtt{i}}$. As a first example we restate results of Wang \cite{Wan16} and Luo and Zhang \cite{LZ13} on how to construct Iwanaga-Gorenstein algebras and describe their categories of Gorenstein projective modules as well as modules of finite projective dimension. A second part of the paper concerns the theory of reflection functors. In 1973, Bernstein, Gel'fand, and Ponomarev \cite{BGP73} introduced reflection functors for quivers in order to give a more conceptual proof of Gabriel's theorem characterising representation-finite path algebras of quivers. 
These functors were generalised to species by Dlab and Ringel in \cite{DR76}. In 1979, Gel'fand and Ponomarev \cite{GP79} introduced the preprojective algebra of the path algebra of an acyclic quiver by a certain doubling procedure of the original quiver. This algebra, regarded as a module over the original algebra, decomposes as the direct sum of all preprojective modules. Surprisingly recently, reflection functors have also been defined for preprojective algebras, independently by Baumann and Kamnitzer \cite{BK12}, Bolten \cite{Bol10}, and Buan, Iyama, Reiten, and Scott \cite{BIRS09}. The first two papers describe them in linear algebra terms similar to \cite{GP79} while \cite{BIRS09} gives (in the non-Dynkin case) a tilting module which provides this equivalence. See \cite{BKT14} for a comparison of the two approaches, which was observed by Amiot. We extend this theory to the setting of pro-species. \begin{theorem}[Section \ref{sec:reflectionfunctors}]\label{mainthm1} Let $\Lambda$ be a pro-species of algebras. Then, there are reflection functors $(\Sigma_\mathtt{i}^+,\Sigma_\mathtt{i}^-)$ on the module category of the associated preprojective algebra $\Pi(\Lambda)$ of $\Lambda$. They can be described in terms of linear algebra as well as by $(\Hom_\Lambda(I_\mathtt{i},-), I_\mathtt{i}\otimes_\Lambda -)$ for a two-sided ideal $I_\mathtt{i}\subseteq \Pi(\Lambda)$. \end{theorem} A third section of this article generalises the theory of the separated quiver of a radical square zero algebra and the resulting stable equivalence, as obtained by Auslander and Reiten \cite{AR73, AR75}. \begin{theorem}[Theorem \ref{stableequivalence}]\label{mainthm2} Let $\Lambda$ be a pro-species such that all the $\Lambda_\mathtt{i}$ are selfinjective. Let $\Gamma$ be the quotient of the tensor algebra $T(\Lambda)$ by the part of degree greater than or equal to two. Let $\Lambda^s$ be the separated pro-species of $\Lambda$. 
Then there is a stable equivalence of subcategories \[\underline{\modu}_{l.p.}\Gamma\to \underline{\rep}_{l.p.}\Lambda^s\] where the subscript $l.p.$ denotes the subcategory of modules which are projective when regarded as modules for the $\Lambda_\mathtt{i}$. \end{theorem} Furthermore, extending results of Ib\'a\~nez Cobos, Navarro, and L\'opez Pe\~na in the context of ``generalised path algebras'', we explain the quiver and relations given by Gei\ss, Leclerc, and Schr\"oer by the following \begin{proposition}[Propositions \ref{quivertensoralgebra} and \ref{quiverpreprojectivealgebra}]\label{mainthm3} Let $\Lambda$ be a pro-species of algebras such that the algebras $\Lambda_\mathtt{i}$ associated to the vertices are given by path algebras of quivers with relations. Then, the tensor algebra $T(\Lambda)$ as well as the preprojective algebra $\Pi(\Lambda)$ have a description in terms of a quiver with relations using the descriptions of the $\Lambda_\mathtt{i}$ in terms of quivers with relations and descriptions of the $\Lambda_\alpha$ as quotients of their respective projective cover as a bimodule for each arrow $\alpha$. \end{proposition} The article is structured as follows. In Section \ref{sec:species} we introduce the concept of a pro-species of algebras and the corresponding category of representations, generalising the original notions due to Gabriel. Furthermore we define the associated tensor algebra $T(\Lambda)$ which has the same representation theory as $\Lambda$. Section \ref{sec:gorenstein} shows that $T(\Lambda)$ behaves similarly to a hereditary algebra and provides conditions under which it is Iwanaga-Gorenstein. In Section \ref{sec:preprojectivealgebra} we introduce the preprojective algebra $\Pi(\Lambda)$ associated to a pro-species of algebras $\Lambda$. Section \ref{sec:reflectionfunctors} contains the definition of the reflection functors and a proof of Theorem \ref{mainthm1}. 
Section \ref{sec:separated} describes the separated species for a pro-species of algebras and proves a stable equivalence between subcategories of locally projective representations, i.e. Theorem \ref{mainthm2}. The final section, Section \ref{sec:quivers}, which is independent of Sections \ref{sec:gorenstein}, \ref{sec:reflectionfunctors}, and \ref{sec:separated}, gives the quiver and relations of $T(\Lambda)$ and $\Pi(\Lambda)$ as stated in Proposition \ref{mainthm3}. Throughout let $\mathbbm{k}$ be a field. Unless specified otherwise, modules are left modules. Unless stated otherwise, algebras and modules are assumed to be finite dimensional over $\mathbbm{k}$. For a quiver $Q$ we denote its set of vertices by $Q_0$, the set of arrows by $Q_1$, the set of paths in $Q$ including the length $0$ paths by $Q_p$, the set of paths excluding the length $0$ paths by $Q_p^+$, and by $s,t\colon Q_1\to Q_0$ the functions mapping an arrow to its starting, respectively terminating vertex. Furthermore, throughout we write $M\otimes_A g$ to mean $\id_M\otimes_A g$ for a right $A$-module $M$ and an $A$-linear map $g$. \section{Pro-species of algebras} \label{sec:species} In this section, we generalise the notion of a species as defined by Gabriel \cite{Gab73}. The generalisation is similar to \cite{Li12} with the difference, that we do not start with a valued quiver which we want to ``modulate'' by algebras and bimodules. Instead we start with a gadget consisting of algebras and bimodules and only define (under certain conditions) the corresponding valued quiver. A further difference is the language of bicategories, which is not essential in the setting of this article, but the author hopes it will make it possible in the future to generalise some of the notions to other categories than free categories (i.e. path algebras of a quiver regarded as categories). 
\begin{defn} \begin{enumerate}[(i)] \item The \emphbf{bicategory of algebras with bimodules} $\Alg$ (or $\mathbbm{k}{\hbox{-}}\Alg$ if we want to emphasise the ground field) is defined as follows: \begin{description} \item[objects] are $\mathbbm{k}$-algebras \item[1-morphisms] the category of 1-morphisms between two objects $A$, $B$ is defined to be the class of finitely generated $B$-$A$-bimodules \item[2-morphisms] bimodule homomorphisms \item[1-composition] $[{}_CN_B]\circ [{}_BM_A]:=[{}_CN_B\otimes {}_BM_A]$ \item[2-composition] composition of bimodule homomorphisms \item[identity] $\id_A={}_AA_A$ \end{description} \item The \emphbf{bicategory of algebras with bimodules, projective from both sides} $\Algpro$ is the subcategory of $\Alg$ with the same objects, but for morphisms only taking $B$-$A$-bimodules which are finitely generated projective from either side. Similarly we define $\Algfree$, the \emphbf{bicategory of algebras with bimodules which are free of finite rank from each side}. \end{enumerate} \end{defn} It is easy to see that $\Algpro$ really is a sub-bicategory, i.e. that the tensor product of two bimodules which are projective from either side is again projective from either side and the identity is a module which is projective from either side. \begin{defn} Let $Q$ be a (finite) free $\mathbbm{k}$-category (i.e. the path algebra of a quiver, regarded as a category). \begin{enumerate}[(i)] \item A \emphbf{pro-species of algebras} is a $\mathbbm{k}$-linear strict $2$-functor\footnote{It might seem unnatural to consider a strict $2$-functor (instead of a pseudofunctor) to a bicategory, but since $Q$ is a free $\mathbbm{k}$-category, this works. Many things in the sequel will carry over to the setting where $Q$ is not assumed to be free and $\Lambda$ is only assumed to be a pseudofunctor. On some occasions one has to replace a strict notion by the corresponding weak notion.}: $\Lambda\colon Q\to \Algpro$. 
We write $\Lambda_\mathtt{i}$ for $\Lambda(\mathtt{i})$ when $\mathtt{i}\in Q_0$ and $\Lambda_\alpha$ for $\Lambda(\alpha)$ when $\alpha\in Q_p$. \item A pro-species of algebras is called a \emphbf{species of algebras} if $\Lambda\colon Q\to \Algfree$. \end{enumerate} \end{defn} \begin{rmk} Stated in more basic terms, a pro-species of algebras over $Q$ is a $\mathbbm{k}$-algebra $\Lambda_\mathtt{i}$ for each vertex $\mathtt{i}\in Q_0$ and a $\Lambda_\mathtt{j}$-$\Lambda_\mathtt{i}$-bimodule $\Lambda_\alpha$ for each arrow $\alpha\colon \mathtt{i}\to \mathtt{j}$. \end{rmk} \begin{ex} \begin{enumerate}[(a)] \item If $Q$ is the category with only one object and only scalar multiples of the identity, then a (pro)species of algebras is a $\mathbbm{k}$-algebra. \item If $\Lambda_\mathtt{i}$ is a $\mathbbm{k}$-division ring for all $\mathtt{i}$, then a pro-species of algebras is a species in the sense of Gabriel, see \cite{Gab73}. A special case occurs when all $\Lambda_\mathtt{i}$ are in fact the ground field $\mathbbm{k}$; then such a $\Lambda$ can be regarded as a $\mathbbm{k}$-quiver. \end{enumerate} \end{ex} To a species of algebras one can associate a valued quiver which provides the link to \cite{Li12}. \begin{defn} Let $Q$ be a quiver. \begin{enumerate}[(i)] \item A \emphbf{valuation} on $Q$ consists of two functions $c_0\colon Q_0\to \mathbb{Z}_{\geq 0}, \mathtt{i}\mapsto c_\mathtt{i}$ and $c_1\colon Q_1\to \mathbb{Z}^2_{\geq 0}, \alpha \mapsto (c_\alpha,c_{\alpha^*})$ such that $c_\mathtt{i}>0$ for all $\mathtt{i}\in Q_0$ and $c_{\alpha}c_{s(\alpha)}=c_{\alpha^*}c_{t(\alpha)}$ for all $\alpha\in Q_1$. \item Let $\Lambda$ be a species of algebras. Then, the \emphbf{associated valuation} is given by $c_\mathtt{i}:=\dim_\mathbbm{k} \Lambda_\mathtt{i}$, $c_\alpha:=\rank_{\Lambda_{s(\alpha)}} \Lambda_\alpha$, and $c_{\alpha^*}:=\rank_{\Lambda_{t(\alpha)}} \Lambda_\alpha$. (For instance, for $\mathbbm{k}=\mathbb{R}$ the species with $\Lambda_\mathtt{1}=\mathbb{R}$, $\Lambda_\mathtt{2}=\mathbb{C}$ and $\Lambda_\alpha=\mathbb{C}$ over the quiver $\mathtt{1}\stackrel{\alpha}{\to}\mathtt{2}$ has $c_\mathtt{1}=1$, $c_\mathtt{2}=2$, $c_\alpha=2$, $c_{\alpha^*}=1$, so that indeed $c_\alpha c_{s(\alpha)}=2=c_{\alpha^*}c_{t(\alpha)}$.) \end{enumerate} \end{defn} The next step is to introduce a notion of representation of a pro-species. For this we need another bicategory. 
This bicategory is in fact the lax coslice bicategory in $\Alg$ over $\mathbbm{k}$. \footnote{The author would like to thank Pavel Safronov for this observation \href{http://mathoverflow.net/questions/201038}{http://mathoverflow.net/questions/201038}.} \begin{defn} \begin{enumerate}[(i)] \item The \emphbf{bicategory of algebra-module pairs} $\AlgMod$ (or $\mathbbm{k}{\hbox{-}}\AlgMod$ if we want to emphasise the commutative ring we are working over) is defined as follows: \begin{description} \item[objects] are pairs $(A,N)$ where $A$ is a $\mathbbm{k}$-algebra and $N$ is an $A$-module, \item[1-morphisms] the category of $1$-morphisms between two objects $(A,N)$, $(A',N')$ is defined to be the class of pairs $(M,g)$ where $M$ is an $A'$-$A$-bimodule and $g\colon M\otimes_A N\to N'$ is an $A'$-module homomorphism, \item[2-morphisms] bimodule homomorphisms $\psi\colon (M,g)\to (\tilde{M},\tilde{g})$ such that $\tilde{g}\circ (\psi\otimes N)=g$, \item[1-composition] $(M',g')\circ (M,g)=(M'\otimes_{A'} M, g'\circ (M'\otimes g))$, \item[2-composition] composition of bimodule homomorphisms, \item[identity] $\id_{(A,N)}=(A,\psi)$, where $\psi\colon A\otimes_A N\to N$ is the canonical identification. \end{description} \item The \emphbf{bicategory of algebra-module pairs which are projective from both sides} $\AlgproMod$ is the subcategory of $\AlgMod$ with the same objects, but for morphisms only taking $B$-$A$-bimodules which are projective from either side. Similarly we define $\AlgfreeMod$. \item There is a forgetful functor $\Theta\colon \AlgMod\to \Alg$ forgetting about the module. It restricts to functors $\Theta\colon \AlgproMod\to \Algpro$ and $\Theta\colon \AlgfreeMod\to \Algfree$. \end{enumerate} \end{defn} \begin{defn} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. \begin{enumerate}[(i)] \item A \emphbf{representation of $\Lambda$} is a $\mathbbm{k}$-linear strict $2$-functor $M\colon Q\to \AlgproMod$ such that $\Theta M=\Lambda$. 
\item A \emphbf{morphism of $\Lambda$-representations $M\to N$} is a natural transformation $\alpha\colon M\Rightarrow N$ such that $\Theta(\alpha)=\id$. \end{enumerate} All $\Lambda$-representations form a category $\Rep(\Lambda)$ with the composition of natural transformations and the identity natural transformation. \end{defn} \begin{rmk} Again in more basic terms, a $\Lambda$-representation is a $\Lambda_\mathtt{i}$-representation $M_\mathtt{i}$ for each $\mathtt{i}\in Q_0$ and a $\Lambda_{t(\alpha)}$-linear map $M_\alpha\colon \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} M_{s(\alpha)}\to M_{t(\alpha)}$ for each $\alpha\in Q_1$. \end{rmk} As pro-species of algebras can be regarded as a generalisation of the notion of a $\mathbbm{k}$-quiver, one can, as usual, associate an algebra whose category of modules is equivalent to the category of representations of the pro-species. \begin{defn} Let $\Lambda\colon Q\to \mathbbm{k}{\hbox{-}}\Algpro$ be a pro-species of algebras. Then the \emphbf{tensor algebra} $T(\Lambda)$ of $\Lambda$ is defined as follows. As a $\mathbbm{k}$-vector space it is: \[T(\Lambda):=\prod_{\mathtt{i}\in Q_0} \Lambda_\mathtt{i}\oplus \bigoplus_{\alpha\in Q_p^+} \Lambda_\alpha.\] The multiplication is given as follows: \begin{itemize} \item $\prod \Lambda_\mathtt{i}$ has the usual multiplication of a product of algebras. \item By definition each $\Lambda_\alpha$ is a $\Lambda_{t(\alpha)}$-$\Lambda_{s(\alpha)}$-bimodule. Thus, it is also a $\prod \Lambda_\mathtt{i}$-$\prod \Lambda_\mathtt{i}$-bimodule via the projection maps. 
\item For $\lambda_\alpha\in \Lambda_\alpha$ and $\lambda_\beta\in \Lambda_\beta$ we define \[\lambda_\alpha\cdot \lambda_\beta:=\begin{cases}\lambda_\alpha\otimes \lambda_\beta\in \Lambda_{\alpha\beta}=\Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}}\Lambda_\beta&\text{if } t(\beta)=s(\alpha)\\0&\text{else.}\end{cases}\] \end{itemize} By $\varepsilon_\mathtt{i}$ we denote the identity element of $\Lambda_\mathtt{i}$, considered as an element of $T(\Lambda)$. \end{defn} Note that $T(\Lambda)$ is unital if and only if $Q$ is finite. In this case $1_{T(\Lambda)}=\sum_{\mathtt{i}\in Q_0} \varepsilon_\mathtt{i}$ is a decomposition into orthogonal idempotents (which is primitive if and only if all the $\Lambda_\mathtt{i}$ are local). Also note that $T(\Lambda)$ is finite dimensional if and only if $Q$ has no oriented cycles. Unless stated otherwise, we will assume these two properties of $T(\Lambda)$ from now on. \begin{ex}\label{examples} \begin{enumerate}[(a)] \item Let $A$ be a $\mathbbm{k}$-algebra. If $\Lambda\colon Q\to \mathbbm{k}{\hbox{-}}\Algpro$ is defined by $\Lambda_\mathtt{i}=A$ and $\Lambda_\alpha=A$ for all $\mathtt{i}\in Q_0$ and $\alpha\in Q_1$, then $T(\Lambda)=A\otimes_\mathbbm{k} \mathbbm{k}Q$ is the path algebra of $Q$ over $A$. \item Let $A,B$ be $\mathbbm{k}$-algebras and $M$ an $A$-$B$-bimodule. Let $Q=(\mathtt{1}\stackrel{\alpha}{\to} \mathtt{2})$ be the quiver of Dynkin type $\mathbb{A}_2$. Then, for $\Lambda$ defined by $\Lambda_\mathtt{1}=B$, $\Lambda_\mathtt{2}=A$, $\Lambda_\alpha=M$ we have that $T(\Lambda)\cong\begin{pmatrix}A&M\\0&B\end{pmatrix}$, with multiplication being the usual matrix multiplication. \item An extension of case (b) is considered in \cite{LY15} where the authors study $n\times n$-upper triangular matrices (with Frobenius algebras on the diagonal). 
We explain the relationship: For $\mathtt{i}=\mathtt{1},\dots, \mathtt{n}$ let $A_{\mathtt{i}}$ be algebras and for $\mathtt{i}<\mathtt{j}$ let $B_{\mathtt{i}\mathtt{j}}$ be $A_{\mathtt{i}}$-$A_{\mathtt{j}}$-bimodules which are projective from either side. Define a quiver $Q$ with vertices $\mathtt{1},\dots,\mathtt{n}$ and exactly one arrow $\mathtt{j}\to \mathtt{i}$ whenever $B_{\mathtt{i}\mathtt{j}}\neq 0$. Let $\Lambda\colon Q\to \Algpro$ be the pro-species of algebras defined by $\Lambda_{\mathtt{i}}=A_{\mathtt{i}}$ and $\Lambda_{\alpha}:=B_{\mathtt{i}\mathtt{j}}$ for the unique arrow $\alpha\colon \mathtt{j}\to \mathtt{i}$. Let \[A_{\mathtt{i}\mathtt{j}}:=\bigoplus_{l=0}^{\mathtt{j-i-1}}\bigoplus_{\mathtt{i}<\mathtt{k}_1<\mathtt{k}_2<\dots<\mathtt{k}_l<\mathtt{j}}B_{\mathtt{i}\mathtt{k}_1}\otimes_{A_{\mathtt{k}_1}} B_{\mathtt{k}_1\mathtt{k}_2}\otimes_{A_{\mathtt{k}_2}}\dots\otimes_{A_{\mathtt{k}_l}} B_{\mathtt{k}_l\mathtt{j}}.\] Then, $T(\Lambda)\cong \begin{pmatrix}A_\mathtt{1}&A_{\mathtt{12}}&\dots&A_{\mathtt{1n}}\\0&A_{\mathtt{2}}&\dots&A_{\mathtt{2n}}\\\vdots&\vdots&\ddots&\vdots\\0&0&\dots&A_{\mathtt{n}}\end{pmatrix}$. Conversely, let $Q$ be an acyclic quiver with $\mathtt{n}$ vertices and $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. Number the vertices of $Q$ such that every arrow is of the form $\mathtt{j}\to \mathtt{i}$ with $\mathtt{i}<\mathtt{j}$; this is possible since $Q$ is acyclic. Define $B_{\mathtt{ij}}:=\bigoplus_{\alpha\colon \mathtt{j}\to \mathtt{i}} \Lambda_\alpha$ (which should be taken as $0$ if there are no arrows $\mathtt{j}\to \mathtt{i}$). Define $A_{\mathtt{ij}}$ as above. Then $T(\Lambda)$ is isomorphic to the triangular matrix ring defined as above. \end{enumerate} \end{ex} \begin{prop} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. The categories of representations of $\Lambda$ and modules over $T(\Lambda)$ are equivalent. In particular, $\operatorname{Rep}(\Lambda)$ is an abelian category.
\end{prop} \begin{proof} Let $\Theta'\colon \AlgMod\to \Mod \mathbbm{k}$ be the projection on the second component. One easily checks that $\Phi\colon \operatorname{Rep}(\Lambda)\to \Mod(T(\Lambda))$ defined via $M\mapsto \bigoplus \Theta' M(x)$ with the $\prod \Lambda_\mathtt{i}$-action given by the projections $\prod \Lambda_\mathtt{i}\to \Lambda_\mathtt{j}$ and the action of $\Lambda_\alpha$ given by $\Theta'(M(\alpha))$, is an equivalence with inverse functor given by $V\mapsto M$ with $M(\mathtt{i})=(\Lambda_\mathtt{i},\varepsilon_\mathtt{i}V)$ for all $\mathtt{i}\in Q_0$ and $M(\alpha)$ given by the restriction of the action $T(\Lambda)\otimes V\to V$ to $\Lambda_\alpha\otimes \varepsilon_\mathtt{i}V\to \varepsilon_\mathtt{j}V$ for an arrow $\alpha\colon \mathtt{i}\to \mathtt{j}$. \end{proof} In some of the later sections it will turn out that the representation theory of a pro-species $\Lambda$ is in some way glued together from the representation theories of the individual algebras $\Lambda_\mathtt{i}$ sitting on the vertices. This observation leads to the following notions. \begin{defn}\label{locally} \begin{enumerate}[(i)] \item Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. We say that a property \emphbf{holds locally} if it holds for all $\Lambda_\mathtt{i}$. \item Let $M\colon Q\to \AlgproMod$ be a $\Lambda$-representation. We say that a property \emphbf{holds locally} if it holds for all $\Lambda_\mathtt{i}$-modules $M_\mathtt{i}$. \end{enumerate} \end{defn} The general philosophy should be that if a property holds locally for a pro-species of algebras $\Lambda$, then (a slightly weaker version of) this property holds for the algebra $T(\Lambda)$.\\ Furthermore, it should be of interest to understand the category of all $\Lambda$-representations for which a certain local property holds.
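To illustrate these notions, consider Example \ref{examples}(b). Under the equivalence above, a $T(\Lambda)$-module $V$ corresponds to the standard description of modules over a triangular matrix ring as triples:
\[
V\;\longleftrightarrow\;(V_\mathtt{2},V_\mathtt{1},f),\qquad V_\mathtt{2}=\varepsilon_\mathtt{2}V\in \modu A,\quad V_\mathtt{1}=\varepsilon_\mathtt{1}V\in \modu B,\quad f\colon M\otimes_B V_\mathtt{1}\to V_\mathtt{2}\ \text{$A$-linear}.
\]
In these terms, $V$ is locally projective in the sense of Definition \ref{locally} precisely when $V_\mathtt{1}$ is projective over $B$ and $V_\mathtt{2}$ is projective over $A$; no condition is imposed on the structure map $f$.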
For example, the results of \cite{GLS16, LY15} show that under certain conditions the representations which are locally free behave like the (ordinary) representations of $Q$. \section{Iwanaga-Gorenstein algebras} \label{sec:gorenstein} In this section, we provide cases in which $T(\Lambda)$ behaves like a hereditary algebra. Furthermore, we give instances of the general philosophy claimed at the end of the foregoing section. Namely, the algebras $\Lambda_\mathtt{i}$ all being Iwanaga-Gorenstein results in $T(\Lambda)$ being Iwanaga-Gorenstein. In these cases, the category of Gorenstein projective modules as well as the category of modules of finite projective dimension can be described locally. The following is a generalisation of \cite[Proposition 3.1]{GLS16} (see also {\cite[Proposition 2.6]{LY15}}). \begin{prop}\label{tensoralgebralocallyprojective} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. Then, the following statements hold. \begin{enumerate}[(i)] \item\label{tensor:i} The algebra $T(\Lambda)$ regarded as a $\Lambda$-representation is locally projective. \item\label{tensor:ii} There is the following short exact sequence of $T(\Lambda)$-modules: \[0\to \bigoplus_{\substack{p\in Q_p^+\\s(p)=\mathtt{i}}}\Lambda_p \to T(\Lambda)\varepsilon_\mathtt{i}\to \Lambda_\mathtt{i}\to 0.\] In particular, $\projdim_{T(\Lambda)}\Lambda_\mathtt{i}\leq 1$ for all $\mathtt{i}\in Q_0$. \end{enumerate} \end{prop} \begin{proof} For \eqref{tensor:i} note that $\varepsilon_\mathtt{i}T(\Lambda)=\Lambda_\mathtt{i}\oplus \bigoplus_{t(p)=\mathtt{i}}\Lambda_p$. Hence $\varepsilon_\mathtt{i}T(\Lambda)$ is a projective $\Lambda_\mathtt{i}$-module. For \eqref{tensor:ii} note that if we set $\deg \Lambda_p=|p|$, the length of the path $p$ in $Q$, and $\deg \Lambda_\mathtt{i}=0$, then $T(\Lambda)$ is a graded algebra.
With respect to this grading, the projection $T(\Lambda)\varepsilon_\mathtt{i}\to \Lambda_\mathtt{i}$ is the projection on the degree $0$ component and is therefore $T(\Lambda)$-linear. Its kernel is obviously $\bigoplus_{\substack{p\in Q_p^+\\s(p)=\mathtt{i}}} \Lambda_p$. It remains to prove that the kernel is a projective $T(\Lambda)$-module. This follows from the decomposition $\bigoplus_{\substack{p\in Q_p^+\\s(p)=\mathtt{i}}} \Lambda_p=\bigoplus_{\beta\in Q_1(\mathtt{i},\mathtt{j})}\bigoplus_{\substack{p=q\beta\\ q\in Q_p}}\Lambda_q\otimes_{\Lambda_\mathtt{j}} \Lambda_\beta\cong \bigoplus_{\beta\colon \mathtt{i}\to\mathtt{j}}T(\Lambda)\varepsilon_\mathtt{j}\otimes_{\Lambda_\mathtt{j}}\Lambda_\beta$, together with the fact that, since $\Lambda_\beta$ is a finitely generated projective left $\Lambda_\mathtt{j}$-module, this is a direct summand of $\bigoplus_{\beta\colon \mathtt{i}\to \mathtt{j}}(T(\Lambda)\varepsilon_\mathtt{j})^{m_\mathtt{j}}$ for some $m_\mathtt{j}$. \end{proof} The following lemma describing a bimodule resolution of $T(\Lambda)$ generalises \cite[Proposition 7.1]{GLS16} and \cite[Lemma 3.3]{LY15}. All statements are special cases of \cite[Theorems 10.1 and 10.5]{Sch85}. \begin{prop} There is a short exact sequence of $T(\Lambda)$-$T(\Lambda)$-bimodules \[ \begin{tikzcd} P_\bullet\colon 0\arrow{r}&\bigoplus_{\alpha\in Q_1} T(\Lambda)\varepsilon_{t(\alpha)}\otimes_{\Lambda_{t(\alpha)}} \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}}\varepsilon_{s(\alpha)}T(\Lambda)\\ {\phantom{P_\bullet\colon 0}}\arrow{r}{d}&\bigoplus_{\mathtt{i}\in Q_0} T(\Lambda)\varepsilon_\mathtt{i}\otimes_{\Lambda_\mathtt{i}}\varepsilon_\mathtt{i} T(\Lambda)\arrow{r}{\mult}&T(\Lambda)\arrow{r} &0 \end{tikzcd} \] where $\mult$ denotes the natural multiplication and $d(p\otimes h\otimes q):=ph\otimes q-p\otimes hq$. \end{prop} As an application we obtain projective resolutions not only of $\Lambda_\mathtt{i}$, but of all locally projective $\Lambda$-modules. We denote the category of such modules by $\rep_{l.p.}(\Lambda)$. It is easy to see that (for $Q$ acyclic) this category is the extension closure of $\add\{\Lambda_\mathtt{i}\, |\, \mathtt{i}\in Q_0\}$.
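In the setting of Example \ref{examples}(b) the extension-closure statement is transparent: writing a $T(\Lambda)$-module as a triple $(V_\mathtt{2},V_\mathtt{1},f)$ with $V_\mathtt{2}\in\modu A$, $V_\mathtt{1}\in\modu B$ and $f\colon M\otimes_B V_\mathtt{1}\to V_\mathtt{2}$ $A$-linear, every $T(\Lambda)$-module sits in a short exact sequence
\[
0\longrightarrow (V_\mathtt{2},0,0)\longrightarrow (V_\mathtt{2},V_\mathtt{1},f)\longrightarrow (0,V_\mathtt{1},0)\longrightarrow 0,
\]
and whenever $V_\mathtt{2}$ and $V_\mathtt{1}$ are projective, i.e. the triple is locally projective, the outer terms lie in $\add \Lambda_\mathtt{2}$ and $\add \Lambda_\mathtt{1}$, respectively.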
Further, let $\rep_{l.p.}^{np}(\Lambda)$ be the full subcategory of $\Lambda$-representations without projective direct summands. \begin{cor} Let $\Lambda$ be a pro-species of algebras. Let $M\in \rep_{l.p.}(\Lambda)$. Then $P_\bullet\otimes_{T(\Lambda)} M$ is a projective resolution of $M$ which explicitly looks as follows: \[ \begin{tikzcd} 0\arrow{r}&\bigoplus_{\alpha\in Q_1} T(\Lambda)\varepsilon_{t(\alpha)}\otimes_{\Lambda_{t(\alpha)}}\Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} M_{s(\alpha)}\\ {}\arrow{r}{d\otimes M}&\bigoplus_{\mathtt{i}\in Q_0} T(\Lambda)\varepsilon_\mathtt{i}\otimes_{\Lambda_\mathtt{i}} M_\mathtt{i}\arrow{r}{\mult} &M\arrow{r}&0 \end{tikzcd} \] with $(d\otimes M)(p\otimes h\otimes m)=ph\otimes m-p\otimes M_\alpha(h\otimes m)$. \end{cor} \begin{proof} As $T(\Lambda)$ is projective as a right $T(\Lambda)$-module, the short exact sequence $P_\bullet$ splits as right $T(\Lambda)$-modules. Thus, the sequence remains exact after tensoring with $M$ and can be identified with the claimed sequence. If $M$ is locally projective, then the two leftmost terms are indeed projective $T(\Lambda)$-modules by an argument similar to the one in the proof of Proposition \ref{tensoralgebralocallyprojective}. The claim follows. \end{proof} We recall the definition of an Iwanaga-Gorenstein algebra, sometimes also called Gorenstein algebra. This is a generalisation of the class of selfinjective algebras (which is the case $n=0$). For an introduction to the topic, see e.g. \cite{Che10}. \begin{defn} An algebra $A$ is called \emphbf{$n$-Iwanaga-Gorenstein} if $\injdim_A A\leq n$ and $\projdim_AD(A)\leq n$. \end{defn} The following proposition generalises \cite[Theorem 1.1]{GLS16} where the case $n=0$ is proven for a specific example (see also \cite[Corollary 2.8]{LY15}). Using the language of triangular matrix rings, it is proven in \cite[Corollary 3.6, Proposition 3.8]{Wan16}. For a slightly different proof, it is possible to generalise the proof of \cite[Theorem 1.1]{GLS16}.
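For orientation, we note how the following proposition specialises in the motivating example of \cite{GLS16}: there the algebras on the vertices are the selfinjective, i.e. $0$-Iwanaga-Gorenstein, truncated polynomial rings
\[
\Lambda_\mathtt{i}=\mathbbm{k}[x_\mathtt{i}]/(x_\mathtt{i}^{c_\mathtt{i}}),\qquad c_\mathtt{i}\geq 1,
\]
so the case $n=0$ below recovers the statement of \cite[Theorem 1.1]{GLS16} that $T(\Lambda)$ is $1$-Iwanaga-Gorenstein and that a $T(\Lambda)$-module has finite projective dimension if and only if it is locally free.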
\begin{prop}\label{locallyGorenstein} Let $\Lambda$ be a locally $n$-Iwanaga-Gorenstein pro-species in the sense of Definition \ref{locally}. Then the following are equivalent for a $T(\Lambda)$-module $U$: \begin{enumerate}[(1)] \item\label{locallyGorenstein:1} $\projdim U\leq n+1$, \item\label{locallyGorenstein:2} $\projdim U<\infty$, \item\label{locallyGorenstein:3} $\injdim U\leq n+1$, \item\label{locallyGorenstein:4} $\injdim U<\infty$, \item\label{locallyGorenstein:5} $U$ locally has projective dimension $\leq n$, \item\label{locallyGorenstein:6} $U$ locally has injective dimension $\leq n$. \end{enumerate} In particular, $T(\Lambda)$ is $(n+1)$-Iwanaga-Gorenstein. \end{prop} The following corollary was stated as \cite[Theorem 3.9]{GLS16} for the special case $n=0$. There, it was proved directly, generalising the methods in \cite{AR91}. \begin{cor} Let $\Lambda$ be a locally $n$-Iwanaga-Gorenstein pro-species. Then the category of modules which are locally of projective dimension $\leq n$ is functorially finite, resolving and coresolving and has Auslander--Reiten sequences. \end{cor} \begin{proof} This follows from the previous proposition, since for a general $(n+1)$-Iwanaga-Gorenstein algebra, the category of modules of finite projective dimension has the stated properties, see \cite[Corollary 2.3.6]{Che10}. \end{proof} Another important subclass of the category of modules for a Gorenstein algebra is the category of Gorenstein projective modules. \begin{defn} Let $A$ be an algebra and $M$ be an $A$-module. A \emphbf{projective biresolution} \[P^\bullet\colon \cdots \to P^{n-1}\stackrel{d^{n-1}}{\to} P^n\stackrel{d^n}{\to} P^{n+1}\to \cdots\] of $M$ is an exact sequence of projectives with $\ker d^0\cong M$. A projective biresolution is called \emphbf{complete} if $\Hom_A(P^\bullet,A)$ is exact, or equivalently $\Hom_A(P^\bullet,Q)$ is exact for every projective $A$-module $Q$.
An $A$-module $M$ is called \emphbf{Gorenstein projective} if there exists a complete projective biresolution of $M$. \end{defn} The following result generalises \cite[Theorem 4.1]{LZ13} from the case $n=2$ and \cite[Theorem 2.4]{LY15b} from the context of generalised path algebras (under stronger assumptions), see also \cite[Corollary 1.5]{XZ12} for the case that $n=2$ and $T(\Lambda)$ is Gorenstein. \begin{prop} Let $\Lambda\colon Q\to \Algpro$ be a pro-species. Then, $X\in \modu T(\Lambda)$ is Gorenstein projective if and only if $X_{\mathtt{i},\arrowin}$ is injective and $\Coker(X_{\mathtt{i},\arrowin})$ is Gorenstein projective over $\Lambda_\mathtt{i}$ for every $\mathtt{i}\in Q_0$. \end{prop} \begin{proof} The proof follows the strategy given in \cite{LZ13} for $n=2$. Let $P_{\mathtt{i}}^{\bullet}$ be a complete projective biresolution with $\ker d^0_\mathtt{i}=\Coker(X_{\mathtt{i},\arrowin})$. We inductively define projective biresolutions \[Q_{\mathtt{i}}^\bullet=\bigoplus_{\substack{p\in Q_p\\t(p)=\mathtt{i}}}\Lambda_p\otimes_{\Lambda_{s(p)}} P_{s(p)}^\bullet\] of $X_{\mathtt{i}}$. For the start of the induction let $\mathtt{i}$ be a source in the quiver. Then, $X_\mathtt{i}=\Coker X_{\mathtt{i},\arrowin}$ and the statement holds by assumption. Next assume that $\mathtt{i}$ is a vertex such that such a projective biresolution has already been constructed for $X_{\mathtt{j}}$ for all $\mathtt{j}$ such that there is a path $\mathtt{j}\to \mathtt{i}$.
Since $X_{\mathtt{i},\arrowin}$ is a monomorphism, there is an exact sequence \[0\to \bigoplus_{\substack{\alpha\in Q_1\\t(\alpha)=\mathtt{i}}} \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} X_{s(\alpha)}\to X_\mathtt{i}\to \Coker(X_{\mathtt{i},\arrowin})\to 0.\] Using that $X_{s(\alpha)}$ and $\Coker(X_{\mathtt{i},\arrowin})$ have biresolutions $Q_{s(\alpha)}^\bullet$ and $P_{\mathtt{i}}^\bullet$, respectively, a version of the horseshoe lemma and the fact that projectives are injective in the category of Gorenstein projective modules show that the claimed biresolution $Q_{\mathtt{i}}^\bullet$ also exists for $\mathtt{i}$, cf. \cite[Proof of Corollary 1.5]{LZ13}. Furthermore, such a biresolution can be chosen to have upper triangular differential with the differentials induced by those of the $P_{s(p)}^\bullet$ on the diagonal. Putting these biresolutions $Q_\mathtt{i}^\bullet$ together one obtains a biresolution $Q^\bullet$ of $X$. Here, for $\alpha\colon \mathtt{i}\to \mathtt{j}$, the corresponding map $\Lambda_\alpha\otimes_{\Lambda_\mathtt{i}} Q_\mathtt{i}^\bullet\to Q_\mathtt{j}^\bullet$ is just the direct sum of the identities on $\Lambda_\alpha\otimes_{\Lambda_{\mathtt{i}}} \Lambda_p\otimes P_{s(p)}^{\bullet}$. We need to check that $\Hom_\Lambda(Q^\bullet,T(\Lambda)\varepsilon_\mathtt{i})$ is again exact for every $\mathtt{i}\in Q_0$.
Considering the diagram \[ \begin{tikzcd} \Lambda_\alpha \otimes \Lambda_p\otimes P_{s(p)}\arrow{r}{\Lambda_\alpha\otimes \varphi_{p,p'}}\arrow{d}{\id} &\Lambda_\alpha\otimes \Lambda_{p'}\otimes \Lambda_{s(p')}\arrow{d}\\ \Lambda_{\alpha p}\otimes P_{s(p)}\arrow{r}{\varphi_{\alpha p, q'}} &\Lambda_{q'} \otimes \Lambda_{s(q')} \end{tikzcd} \] where $\varphi\in \Hom_\Lambda(Q^\bullet, T(\Lambda)\varepsilon_\mathtt{i})$, we see that $\varphi_{\alpha p,q'}=\begin{cases}\Lambda_\alpha\otimes \varphi_{p,p'}&\text{if $q'=\alpha p'$}\\0&\text{else.}\end{cases}$ Thus \[\Hom_\Lambda(Q^\bullet,T(\Lambda)\varepsilon_{\mathtt{i}})=\bigoplus_j \bigoplus_{\substack{p\in Q_p\\p\colon \mathtt{i}\to \mathtt{j}}} \Hom_{\Lambda_{\mathtt{j}}}(P^\bullet_{\mathtt{j}},\Lambda_p\otimes \Lambda_{\mathtt{i}})\] as $\mathbb{Z}$-graded vector spaces with upper triangular differential (coming from the upper triangular differential of $Q^\bullet_{\mathtt{i}}$). As the $P_{\mathtt{j}}^\bullet$ are complete projective biresolutions and $\Lambda_p\otimes \Lambda_{\mathtt{i}}$ is projective, it follows that $\Hom_{\Lambda_{\mathtt{j}}}(P_{\mathtt{j}}^\bullet,\Lambda_p\otimes \Lambda_{\mathtt{i}})$ is exact for every $\mathtt{i}$. Using induction on $|Q_0|$ and applying homology, it follows that $\Hom_{\Lambda}(Q^\bullet, T(\Lambda)\varepsilon_{\mathtt{i}})$ is also exact for every $\mathtt{i}\in Q_0$. Thus, $Q^\bullet$ is a complete projective biresolution. The claim follows. For the other direction, let $Q^\bullet$ be a complete projective biresolution of $X$. By the form of the projective $T(\Lambda)$-modules, this gives projective biresolutions $Q_{\mathtt{i}}^\bullet$ of $X_\mathtt{i}$ and furthermore $Q_\mathtt{i}^\bullet\cong \bigoplus_{\substack{p\in Q_p\\t(p)=\mathtt{i}}} \Lambda_p\otimes P_{s(p)}^\bullet$. Consider the differential $d$ on the complex $Q^\bullet$.
From the fact that $d$ is a $T(\Lambda)$-module homomorphism we get that the following diagram commutes (for every arrow $\alpha$, every $j\in \mathbb{Z}$, and all paths $p,p',q'$ in the quiver with $t(p)=t(p')$ and $t(\alpha)=t(q')$): \[ \begin{tikzcd} \Lambda_\alpha\otimes \Lambda_p\otimes P_{s(p)}^j\arrow{d}{\id}\arrow{r}{\Lambda_\alpha\otimes d_{p,p'}^j} &\Lambda_\alpha \otimes \Lambda_{p'}\otimes P_{s(p')}^{j-1}\arrow{d}\\ \Lambda_{\alpha p}\otimes P_{s(p)}^{j}\arrow{r}{d_{\alpha p,q'}^j} &\Lambda_{q'}\otimes P_{s(q')}^{j-1} \end{tikzcd} \] where the right vertical arrow is $\id$ if $q'=\alpha p'$ and $0$ otherwise. Thus, the differential $d$ can be assumed to be upper triangular and by induction $P^\bullet_{\mathtt{i}}$ is a complex for all $\mathtt{i}$. From applying homology to the sequence \[0\to \bigoplus_{\substack{p\in Q_p^+\\t(p)=\mathtt{i}}} \Lambda_p\otimes P_{s(p)}^\bullet\to Q_{\mathtt{i}}^\bullet\to P_{\mathtt{i}}^\bullet\to 0\] it follows by induction that each of the complexes $P_\mathtt{i}^\bullet$ is in fact exact. The claim follows by applying the snake lemma to the following diagram: \[ \begin{tikzcd} 0\arrow{r}&\bigoplus_{\substack{p\in Q_p^+\\t(p)=\mathtt{i}}}\Lambda_p\otimes X_{s(p)}\arrow{r}\arrow{d} &\bigoplus_{\substack{p\in Q_p^+\\t(p)=\mathtt{i}}} \Lambda_p\otimes P_{s(p)}^0\arrow{d}\arrow{r}&\dots\\ 0\arrow{r}&X_\mathtt{i}\arrow{r}&P_\mathtt{i}\oplus \bigoplus_{\substack{p\in Q_p^+\\t(p)=\mathtt{i}}} \Lambda_p\otimes P_{s(p)}^0\arrow{r}&\dots \end{tikzcd} \] as $\Coker(X_{\mathtt{i},\arrowin})=\Coker(\bigoplus_{\substack{p\in Q_p^+\\t(p)=\mathtt{i}}} \Lambda_p\otimes X_{s(p)}\to X_{\mathtt{i}})$, since all other components factor through some $X_\alpha$ for $\alpha\in Q_1$. \end{proof} The following lemma shows that under appropriate conditions $T(\Lambda)$ deserves to be called ``hereditary''.
\begin{lem}\label{tensoralgebrahereditary} Let $\Lambda\colon Q\to \Algpro$ be a locally selfinjective pro-species with $Q$ not necessarily acyclic. Then, the following properties hold: \begin{enumerate}[(i)] \item Every locally projective submodule of a projective module is projective. \item Let $\rep_{l.p.}^{np}(\Lambda)$ be the subcategory of representations of $\Lambda$ without projective direct summands. Then the natural functor $\rep_{l.p.}^{np}(\Lambda)\to \underline{\rep}_{l.p.}(\Lambda)$ is an equivalence of categories. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate}[(i)] \item Let $P$ be projective and let $E\subseteq P$ be a locally projective submodule. Since $E$ is locally projective, hence locally injective ($\Lambda$ being locally selfinjective), each inclusion $E_\mathtt{i}\subseteq P_{\mathtt{i}}$ splits and $P/E$ is locally projective as well. Consider the short exact sequence $0\to E\to P\to P/E\to 0$. As $P/E$ is locally projective, it follows from Proposition \ref{locallyGorenstein} that $\projdim P/E\leq 1$. Thus, \cite[Proposition A.4.7]{ASS06} implies that $E$ is projective. \item Let $g\colon M\to N$ factor through a projective object $P$, i.e. $g=ts$ with $s\colon M\to P$ and $t\colon P\to N$. By the first part, it follows that $\image s\subseteq P$ is projective. It follows that $\image s$ is a direct summand of $M$. Since $M$ has no projective direct summands, $\image s=0$ and hence, $g=0$. The claim follows.\qedhere \end{enumerate} \end{proof} \section{The preprojective algebra} \label{sec:preprojectivealgebra} In this section, we introduce the notion of the preprojective algebra of a pro-species generalising \cite{DR80, GLS16} and establish some of its basic properties. For this level of generality we need the classical existence result of a dual basis of a projective module, cf. \cite[Lemma 2.4]{LY15}. The following lemma is well known; it can, e.g., be found in lecture notes on Advanced Algebra by Pareigis. Since these are not published, we include a proof for the convenience of the reader.
\begin{lem}[Dual basis lemma]\label{dualbasislemma} Let $P_R$ be a right $R$-module. Then, the following statements are equivalent: \begin{enumerate}[(1)] \item\label{dualbasis:i} $P$ is finitely generated and projective. \item\label{dualbasis:ii} There are $x_1,\dots, x_m\in P$, $f_1,\dots,f_m\in \Hom_{R}(P,R)$ such that for each $x\in P$ we have $x=\sum_{i=1}^m x_if_i(x)$. \item\label{dualbasis:iii} The dual basis homomorphism $\mathbbm{d}\colon P\otimes_R \Hom_R(P,R)\to \End_{R}(P), p\otimes f\mapsto (q\mapsto pf(q))$ is an isomorphism. \end{enumerate} \end{lem} \begin{proof} The equivalence of \eqref{dualbasis:i} and \eqref{dualbasis:ii} is proved e.g. in \cite[I.3.23]{Fai81}. For \eqref{dualbasis:ii}$\Rightarrow$\eqref{dualbasis:iii} let $\mathbbm{b}\colon \End_R(P)\to P\otimes_R \Hom_R(P,R)$ be defined by $\mathbbm{b}(e):=\sum_i e(x_i)\otimes_R f_i$. We claim that $\mathbbm{b}$ is inverse to $\mathbbm{d}$. Indeed, $\mathbbm{d}\mathbbm{b}(e)(p)=\sum_i\mathbbm{d}(e(x_i)\otimes_R f_i)(p)=\sum_{i}e(x_i)f_i(p)=e(\sum_i x_if_i(p))=e(p)$ and $\mathbbm{b}\mathbbm{d}(p\otimes f)=\sum_i \mathbbm{d}(p\otimes f)(x_i)\otimes_R f_i=\sum_i pf(x_i)\otimes_R f_i=p\otimes_R \sum_i f(x_i)f_i=p\otimes f$ since $\sum_i f(x_i)f_i(q)=f(\sum_i x_if_i(q))=f(q)$. On the other hand, \eqref{dualbasis:iii}$\Rightarrow$\eqref{dualbasis:ii} follows since writing $\mathbbm{d}^{-1}(\id_P)=\sum_i x_i\otimes f_i$ the elements $x_i, f_i$ satisfy the stated condition as $\sum_i x_if_i(x)=\mathbbm{d}(\sum_i x_i\otimes f_i)(x)=\mathbbm{d}\mathbbm{d}^{-1}(\id_P)(x)=x$. \end{proof} \begin{cor}\label{uniquenesscasimirelement} Let $P_R$ be a finitely generated projective right $R$-module. The element $\sum_{i=1}^m x_i\otimes f_i\in P\otimes_R \Hom_R(P,R)$ does not depend on the choice of $x_i, f_i$ satisfying \eqref{dualbasis:ii} in Lemma \ref{dualbasislemma}.
\end{cor} \begin{proof} Any elements $x_i, f_i$ satisfying \eqref{dualbasis:ii} in Lemma \ref{dualbasislemma} give $\mathbbm{d}(\sum_{i=1}^m x_i\otimes f_i)=\id_P$. Hence, since $\mathbbm{d}$ is an isomorphism by \eqref{dualbasis:iii} in Lemma \ref{dualbasislemma}, the elements $\sum_{i=1}^m x_i\otimes f_i$ have to coincide. \end{proof} \begin{defn} Let $P$ be a finitely generated projective right $R$-module and $x_i, f_i$ as in \eqref{dualbasis:ii} of Lemma \ref{dualbasislemma}. Then, the element $c:=\sum_{i=1}^m x_i\otimes f_i\in P\otimes \Hom_R(P,R)$ is called the \emphbf{Casimir element}. \end{defn} Although not strictly necessary for defining the preprojective algebra of a pro-species, the following additional property is needed for it to enjoy some of the usual properties, e.g. of the preprojective algebra being independent of the orientation. It appeared already in \cite[Section 5.1]{GLS16} and \cite[Definition 1.1]{LY15} without being given a name. \begin{defn} A pro-species of algebras $\Lambda\colon Q\to \Algpro$ is called \emphbf{dualisable} if \[\Hom_{\Lambda_\mathtt{i}^{op}}(\Lambda_\alpha,\Lambda_\mathtt{i})\cong \Hom_{\Lambda_\mathtt{j}}(\Lambda_{\alpha},\Lambda_\mathtt{j})\] as $\Lambda_\mathtt{i}$-$\Lambda_\mathtt{j}$-bimodules for each $\alpha\colon \mathtt{i}\to \mathtt{j}\in Q_1$. \end{defn} \begin{defn}\label{preprojectivealgebra} Let $\Lambda\colon Q\to \Algpro$ be a dualisable pro-species of algebras. \begin{enumerate}[(i)] \item The \emphbf{double quiver} $\overline{Q}$ of $Q$ is the quiver with $\overline{Q}_0=Q_0$ and $\overline{Q}_1=Q_1\cup Q_1^*$ where $Q_1^*:=\{\alpha^*\colon \mathtt{j}\to \mathtt{i}\,|\, \alpha\colon \mathtt{i}\to \mathtt{j} \in Q_1\}$. \item Let $\overline{\Lambda}\colon \overline{Q}\to \Algpro$ be the pro-species of algebras with $\overline{\Lambda}_\mathtt{i}:=\Lambda_\mathtt{i}$ for $\mathtt{i}\in Q_0$ and $\overline{\Lambda}_\alpha=\Lambda_\alpha$ for $\alpha\in Q_1$ and $\overline{\Lambda}_{\alpha^*}:=\Hom_{\Lambda_{s(\alpha)}^{op}}(\Lambda_\alpha, \Lambda_{s(\alpha)})$.
\item For each $\alpha\in \overline{Q}_1$ let $c_\alpha$ be the Casimir element of $\Lambda_\alpha\otimes \Lambda_{\alpha^*}$. Define $c=\sum_{\alpha\in \overline{Q}_1}\sgn(\alpha)c_\alpha$, where $\sgn(\alpha)=\begin{cases}1&\text{if $\alpha\in Q_1$}\\-1 &\text{else.}\end{cases}$ The algebra $\Pi(\Lambda):=T(\overline{\Lambda})/\langle c\rangle$ is called ``the'' \emphbf{preprojective algebra} of $\Lambda$. \end{enumerate} \end{defn} \begin{ex} Let $Q=\mathbb{A}_2$ and $\Lambda\colon Q\to \Algpro$ be a dualisable pro-species of algebras. Then, $\Pi(\Lambda)$ is isomorphic to the Morita ring $\Lambda_{(0,0)}$ as studied by Green and Psaroudakis in \cite{GP14}. \end{ex} \begin{rmk}Note that for some purposes the signs $\operatorname{sgn}(\alpha)$ could be omitted. In particular, if the underlying unoriented graph is a tree, then $\Pi(\Lambda)$ does not depend on the signs up to isomorphism. In applications it seems that these signs are more ``natural'' than other choices, see \cite[Section 6]{Ri98}. \end{rmk} The following lemma generalises \cite[Lemma 1.1]{DR80}. \begin{lem} Let $\Lambda$ be a dualisable pro-species of algebras. Then, $\Pi(\Lambda)$ does not depend on the choice of the dual bases of the $\Lambda_\alpha$ and $\Lambda_{\alpha^*}$. \end{lem} \begin{proof} This follows immediately from Corollary \ref{uniquenesscasimirelement}. \end{proof} As noted before, the preprojective algebra $\Pi(\Lambda)$ does not depend on the orientation of $Q$ in the following sense. \begin{lem} Let $\Lambda\colon Q\to \Algpro$ be a dualisable pro-species of algebras. Choose any orientation $Q'$ of $Q$.
Define a new pro-species $\Lambda'\colon Q'\to \Algpro$ by $\Lambda_\mathtt{i}'=\Lambda_\mathtt{i}$ for $\mathtt{i}\in Q'_0=Q_0$ and \[\Lambda_\alpha' =\begin{cases}\Lambda_\alpha&\text{if $\alpha\in Q_1$}\\\Hom(\Lambda_\beta,\Lambda_{s(\beta)})&\text{if $\alpha=\beta^*$ for some $\beta\in Q_1$.}\end{cases}\] Then $\Lambda'$ is dualisable and $\Pi(\Lambda')\cong \Pi(\Lambda)$. \end{lem} \begin{proof} Since $\Lambda$ is dualisable and projective modules are reflexive, it is immediate that $\Hom_{\Lambda_{t(\alpha)}}(\Lambda_{\alpha^*},\Lambda_{t(\alpha)})\cong \Lambda_\alpha$. Furthermore, if $\Lambda_\alpha$ is identified with its double dual, then $f_1,\dots, f_m\in \Lambda_{\alpha^*}$, $x_1,\dots,x_m\in \Lambda_\alpha$ give a dual basis of $\Lambda_{\alpha^*}\otimes \Lambda_\alpha$ (as noted in the proof of Lemma \ref{dualbasislemma}, $f=\sum_i f(x_i)f_i$). As in the classical case, interchanging the roles of $\alpha$ and $\alpha^*$ and changing the sign of one of them then yields an isomorphism between $\Pi(\Lambda)$ and $\Pi(\Lambda')$. \end{proof} In the remainder of this section, we introduce some prerequisites for the definition of reflection functors in the following section. \begin{prop}[cf. {\cite[Exercise 2.20]{Lam99}}] Let $R$ and $S$ be rings. Let $M$, $N$ be left $R$-modules. Then, there is a linear map $f\colon \Hom_R(M,R)\otimes_R N\to \Hom_R(M,N)$ sending $\varphi\otimes n$ to the map $m\mapsto \varphi(m)n$. If $M$ is also an $S^{op}$-module, then this map is a homomorphism of $S$-modules. If $M$ is finitely generated and $R$-projective, then $f$ is bijective with inverse given by $\psi\mapsto \sum_{i}f_i\otimes \psi(x_i)$ where $x_i,f_i$ are chosen as in Lemma \ref{dualbasislemma}. \end{prop} \begin{proof} Define $f\colon \Hom_R(M,R)\times N\to \Hom_R(M,N)$ by $(\varphi,n)\mapsto (m\mapsto \varphi(m)n)$. This is $R$-balanced as $(\varphi r)(m):=\varphi(m)r$.
If $M$ is also an $S^{op}$-module, then $\Hom_R(M,R)$ and $\Hom_R(M,N)$ have an $S$-module structure defined by $(s\varphi)(m)=\varphi(ms)$. It is obvious that $f$ is $S$-linear. A direct calculation shows that the claimed map is indeed inverse. \end{proof} \begin{defn} Let $\Lambda$ be a dualisable pro-species of algebras, $M_{s(\alpha)}$ a left $\Lambda_{s(\alpha)}$-module and $N_{t(\alpha)}$ a left $\Lambda_{t(\alpha)}$-module. Then, combining the tensor-hom adjunction with the previous proposition, we obtain an isomorphism \begin{align*} \Hom_{\Lambda_{t(\alpha)}}(\Lambda_\alpha\otimes M_{s(\alpha)},N_{t(\alpha)})&\to \Hom_{\Lambda_{s(\alpha)}}(M_{s(\alpha)},\Hom_{\Lambda_{t(\alpha)}}(\Lambda_{\alpha},N_{t(\alpha)}))\\ &\to \Hom_{\Lambda_{s(\alpha)}}(M_{s(\alpha)}, \Lambda_{\alpha^*}\otimes N_{t(\alpha)}). \end{align*} We denote it by $f\mapsto f^\vee$ and its inverse by $f\mapsto f^\wedge$. \end{defn} This isomorphism is used in Section \ref{sec:reflectionfunctors} to construct reflection functors in the context of pro-species of algebras. Let $\Lambda$ be a dualisable pro-species of algebras. Let $M\in \modu T(\overline{\Lambda})$. Recall that each arrow $\alpha\in \overline{Q}_1$ defines a map $M_\alpha\colon \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} M_{s(\alpha)}\to M_{t(\alpha)}$ and a map $M_\alpha^\vee\colon M_{s(\alpha)}\to \Lambda_{\alpha^*}\otimes_{\Lambda_{t(\alpha)}} M_{t(\alpha)}$. Define \[M_{\mathtt{i},\arrowin}:=\bigoplus_{\substack{\alpha\in \overline{Q}_1\\t(\alpha)=\mathtt{i}}}\sgn(\alpha)M_{\alpha} \text{\quad and $M_{\mathtt{i},\arrowout}:=\bigoplus_{\substack{\alpha\in \overline{Q}_1\\s(\alpha)=\mathtt{i}}}M_\alpha^{\vee}$}.\] The following proposition generalises \cite[Proposition 5.2]{GLS16}. \begin{prop}\label{inout} Let $\Lambda$ be a dualisable pro-species of algebras.
The category $\modu \Pi(\Lambda)$ is equivalent to the full subcategory of $\modu T(\overline{\Lambda})$ whose modules satisfy $M_{\mathtt{i},\arrowin}\circ M_{\mathtt{i},\arrowout}=0$ for all $\mathtt{i}\in Q_0$. \end{prop} \begin{proof} For an object $M=(M_\mathtt{i},M_\alpha,M_{\alpha^*})_{\mathtt{i},\alpha}$ in $\modu T(\overline{\Lambda})$ note that \[M_{\mathtt{i},\arrowin}\circ M_{\mathtt{i},\arrowout}=\sum_{\substack{\alpha\in \overline{Q}_1\\s(\alpha)=\mathtt{i}}} \sgn(\alpha) M_{\alpha^*}\circ M_\alpha^\vee.\] Recall that $M_\alpha^\vee$ is given explicitly by $M_\alpha^\vee(m)=\sum_{j}f_j\otimes M_\alpha(x_j\otimes m)$ for all $m\in M_{s(\alpha)}$. Hence, \[M_{\mathtt{i},\arrowin}\circ M_{\mathtt{i},\arrowout}(m)=\sum_{\substack{\alpha\in \overline{Q}_1\\s(\alpha)=\mathtt{i}}} \sgn(\alpha)\sum_j M_{\alpha^*}(f_j \otimes M_\alpha(x_j\otimes m)),\] which vanishes for all $\mathtt{i}\in Q_0$ if and only if the relation $c$ acts by zero on $M$, i.e. if and only if $M$ is a $\Pi(\Lambda)$-module. \end{proof} \section{BGP reflection functors} \label{sec:reflectionfunctors} In this section, we generalise BGP reflection functors to our setup. We start by giving a partial description of a projective bimodule resolution of the preprojective algebra analogous to \cite[Lemma 8.1.1]{GLS07}. \begin{lem}\label{bimoduleresolutionpreprojective} Let $\Lambda\colon Q\to \Algpro$ be a dualisable pro-species of algebras. Then, the following is the start of a projective bimodule resolution of $\Pi(\Lambda)$.
\[ \begin{tikzcd} Q_\bullet\colon \bigoplus_{\mathtt{i}\in Q_0} \Pi(\Lambda)\otimes_{\Lambda_\mathtt{i}}\Pi(\Lambda) \arrow{r}{d^1}&\bigoplus_{\alpha\in \overline{Q}_1}\Pi(\Lambda)\otimes_{\Lambda_{t(\alpha)}} \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} \Pi(\Lambda)\\ {\phantom{Q_\bullet\colon \bigoplus_{\mathtt{i}\in Q_0} \Pi(\Lambda)\otimes_{\Lambda_\mathtt{i}}\Pi(\Lambda)}}\arrow{r}{d^0}&\bigoplus_{\mathtt{i}\in Q_0}\Pi(\Lambda)\otimes_{\Lambda_\mathtt{i}} \Pi(\Lambda) \end{tikzcd} \] where $d^0(1\otimes x\otimes 1)=x\otimes 1-1\otimes x$ for $x\in \Lambda_\alpha$ and $d^1(1\otimes 1)=\sum_{\substack{\alpha\in \overline{Q}_1\\s(\alpha)=\mathtt{i}}}\sgn(\alpha)(c_\alpha\otimes 1+1\otimes c_\alpha)$. \end{lem} \begin{proof} It is obvious that $\Coker(d^0)=\Pi(\Lambda)$. Noting that $Q_\bullet$ is the total complex of a double complex, the statement follows from the acyclic assembly lemma \cite[(2.7.3)]{Wei94} in a similar way to \cite[Theorem 3.15]{BBK02}. \end{proof} For $\mathtt{i}\in Q_0$ define $I_\mathtt{i}:=\Pi(\Lambda)(1-\varepsilon_\mathtt{i})\Pi(\Lambda)$ to be the annihilator of the $\Pi(\Lambda)$-module $\Lambda_\mathtt{i}$. Define functors $\Sigma^+_\mathtt{i}:=\Hom_{\Pi(\Lambda)}(I_\mathtt{i},-)\colon \modu \Pi(\Lambda)\to \modu \Pi(\Lambda)$ and $\Sigma^-_\mathtt{i}:=I_\mathtt{i}\otimes_{\Pi(\Lambda)} -\colon \modu \Pi(\Lambda)\to \modu\Pi(\Lambda)$. It follows that $(\Sigma_\mathtt{i}^-,\Sigma_\mathtt{i}^+)$ is an adjoint pair of functors, $\Sigma_\mathtt{i}^+$ being left exact and $\Sigma_\mathtt{i}^-$ being right exact. We will give an explicit description of the functors $\Sigma_\mathtt{i}^+$ and $\Sigma_\mathtt{i}^-$ generalising results by Baumann and Kamnitzer in \cite{BK12, BKT14}, and independently by Bolten in \cite{Bol10}.
This in turn also generalises results by Geiss, Leclerc, and Schr\"oer in \cite{GLS16} for the case that $\Lambda_\mathtt{i}=k[x_\mathtt{i}]/(x_\mathtt{i}^{c_\mathtt{i}})$ and $\Lambda_\alpha$ is a bimodule which is free on either side. Let $M\in \modu \Pi(\Lambda)$. Recall from Proposition \ref{inout} that $M_{\mathtt{i},\arrowout}$ induces a map $M_\mathtt{i}\to \ker M_{\mathtt{i},\arrowin}$ which will be denoted by $\overline{M}_{\mathtt{i},\arrowout}$. Define $N_\mathtt{i}:=\ker(M_{\mathtt{i},\arrowin})$ and consider the exact sequence \[ \begin{tikzcd}0\arrow{r} &N_\mathtt{i}\arrow{r}{(N_\alpha)^T_\alpha} &\bigoplus_{\substack{\alpha\in \overline{Q}_1\\t(\alpha)=\mathtt{i}}}\Lambda_\alpha\otimes M_{s(\alpha)}\arrow{r}{(M_\alpha)_{\alpha}} &M_\mathtt{i}\arrow{r} &0. \end{tikzcd} \] Let $\Sigma^+_\mathtt{i}(M)\in \modu \Pi(\Lambda)$ be defined by \begin{align*} \Sigma^+_\mathtt{i}(M)_\mathtt{j}&:=\begin{cases}M_\mathtt{j}&\text{if $\mathtt{j}\neq \mathtt{i}$,}\\N_\mathtt{i}&\text{if $\mathtt{j}=\mathtt{i}$,}\end{cases}\\ \Sigma^+_\mathtt{i}(M)_\alpha&:=\begin{cases}M_\alpha &\text{if $t(\alpha)\neq \mathtt{i}$ and $s(\alpha)\neq \mathtt{i}$,}\\ N_{\alpha^*}^\wedge&\text{if $s(\alpha)=\mathtt{i}$,}\\ (\overline{M}_{\mathtt{i},\arrowout})_\alpha\circ M_\alpha &\text{if $t(\alpha)=\mathtt{i}$.}\end{cases} \end{align*} Dually, denote $N_\mathtt{i}:=\Coker(M_{\mathtt{i},\arrowout})$, write $\overline{M}_{\mathtt{i},\arrowin}$ for the map $\Coker(M_{\mathtt{i},\arrowout})\to M_\mathtt{i}$ induced by $M_{\mathtt{i},\arrowin}$, and consider the exact sequence \[ \begin{tikzcd} 0\arrow{r} &M_\mathtt{i}\arrow{r}{(M_\alpha^\vee)^T_\alpha} &\bigoplus_{\substack{\alpha\in \overline{Q}_1\\s(\alpha)=\mathtt{i}}}\Lambda_{\alpha^*}\otimes M_{t(\alpha)}\arrow{r}{(N_{\alpha})_{\alpha}} &N_\mathtt{i}\arrow{r} &0. \end{tikzcd} \] Then $\Sigma_\mathtt{i}^-(M)\in \modu \Pi(\Lambda)$ can be defined via: \begin{align*} \Sigma^-_\mathtt{i}(M)_\mathtt{j}&:=\begin{cases}M_\mathtt{j}&\text{if
$\mathtt{j}\neq \mathtt{i}$,}\\N_\mathtt{i}&\text{if $\mathtt{j}=\mathtt{i}$,}\end{cases}\\ \Sigma^-_\mathtt{i}(M)_\alpha&:=\begin{cases}M_\alpha &\text{if $s(\alpha)\neq \mathtt{i}$ and $t(\alpha)\neq \mathtt{i}$,}\\ N_{\alpha^*}&\text{if $t(\alpha)=\mathtt{i}$,}\\ (M_\alpha^{\vee}\circ (\overline{M}_{\mathtt{i},\arrowin})_{\alpha^*})^\wedge &\text{if $s(\alpha)=\mathtt{i}$}.\end{cases} \end{align*} The constructions $\Sigma_\mathtt{i}^{\pm}$ can be extended to functors by the universal property of the kernel and the cokernel. The following lemma generalises \cite[Proposition 5.1]{BKT14}. \begin{lem} The two definitions of $\Sigma_\mathtt{i}^+$ and $\Sigma_\mathtt{i}^-$ coincide. \end{lem} \begin{proof} We only show the statement for $\Sigma_\mathtt{i}^-$; the proof for $\Sigma_\mathtt{i}^+$ is similar, taking into account the remarks from \cite[Proof of Proposition 5.1]{BKT14}. Applying $\Lambda_\mathtt{i}\otimes_{\Pi(\Lambda)}-$ to the resolution given in Lemma \ref{bimoduleresolutionpreprojective} (changing the sign of $\Lambda_\mathtt{i}\otimes d^0$) and replacing the rightmost term by the kernel of $\Lambda_\mathtt{i}\otimes_{\Pi(\Lambda)} \mult$, we obtain an exact sequence (again because the former sequence is split as a sequence of right $\Pi(\Lambda)$-modules) of $\Lambda_\mathtt{i}$-$\Pi(\Lambda)$-bimodules: \[ \begin{tikzcd} \Lambda_\mathtt{i}\otimes_{\Lambda_\mathtt{i}} \Pi(\Lambda)\arrow{r}{\partial_1}&\bigoplus_{\substack{\alpha\in \overline{Q}_1\\t(\alpha)=\mathtt{i}}}\Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} \Pi(\Lambda)\arrow{r}{\partial_0}&\varepsilon_\mathtt{i} I_\mathtt{i}\arrow{r}&0 \end{tikzcd} \] where $\partial_1(1\otimes 1)=\sum_{\substack{\alpha\in \overline{Q}_1\\t(\alpha)=\mathtt{i}}} c_\alpha$ and $\partial_0(x\otimes 1)=x$. Let $M\in \modu \Pi(\Lambda)$.
Applying $-\otimes_{\Pi(\Lambda)} M$ to this sequence, the resulting exact sequence of left $\Lambda_\mathtt{i}$-modules can be identified with \[ \begin{tikzcd} M_\mathtt{i}\arrow{r}{M_{\mathtt{i},\arrowout}} &\bigoplus_{\substack{\alpha\in \overline{Q}_1\\t(\alpha)=\mathtt{i}}} \Lambda_\alpha\otimes_{\Lambda_{s(\alpha)}} M_{s(\alpha)}\arrow{r}&\varepsilon_\mathtt{i} I_\mathtt{i}\otimes_{\Pi(\Lambda)} M\arrow{r}&0. \end{tikzcd} \] As $I_\mathtt{i}=(1-\varepsilon_\mathtt{i})\Pi(\Lambda)\oplus \varepsilon_\mathtt{i} I_\mathtt{i}$, the module $I_\mathtt{i}\otimes_{\Pi(\Lambda)} M$ differs from $M$ only in the $\mathtt{i}$-th component, so the foregoing sequence shows that the two definitions coincide as $\Pi(\Lambda)$-modules. \end{proof} For $M\in \modu \Pi(\Lambda)$, let $\sub_\mathtt{i}(M)$ be the largest submodule $U$ of $M$ such that $\varepsilon_\mathtt{i} U=U$. Dually, let $\fac_\mathtt{i}(M)$ be the largest factor module $M/U$ of $M$ such that $\varepsilon_\mathtt{i} (M/U)=M/U$. These two constructions are functorial. The following generalises \cite[Proposition 9.1 (ii), Corollary 9.2]{GLS16}. \begin{cor} \begin{enumerate}[(i)] \item There are functorial short exact sequences \[ 0\to \sub_\mathtt{i}\to \id\to \Sigma_\mathtt{i}^+\Sigma_\mathtt{i}^-\to 0 \] and \[0\to \Sigma_\mathtt{i}^-\Sigma_\mathtt{i}^+\to \id\to \fac_\mathtt{i}\to 0. \] \item The functors $\Sigma_\mathtt{i}^+$ and $\Sigma_\mathtt{i}^-$ restrict to inverse equivalences $\Sigma_\mathtt{i}^+\colon \mathcal{T}_\mathtt{i}\to \mathcal{S}_\mathtt{i}$ and $\Sigma_\mathtt{i}^-\colon \mathcal{S}_\mathtt{i}\to \mathcal{T}_\mathtt{i}$ where \[\mathcal{T}_\mathtt{i}:=\{M\in \modu \Pi(\Lambda) | \fac_\mathtt{i}(M)=0\}\] and \[\mathcal{S}_\mathtt{i}:=\{M\in \modu \Pi(\Lambda)| \sub_\mathtt{i}(M)=0\}.\] \end{enumerate} \end{cor} \begin{proof} The proof given for \cite[Proposition 9.1]{GLS16} applies verbatim. \end{proof} As in the classical case, we can restrict $\Sigma_\mathtt{i}^+$ (resp.
$\Sigma_\mathtt{i}^-$) to $\modu T(\Lambda)$ provided $\mathtt{i}$ is a sink (respectively a source) in $Q$. Let $s_\mathtt{i}(Q)$ be the quiver with vertex set $Q_0$ and arrows $\{\alpha \in Q_1| t(\alpha)\neq \mathtt{i}\} \cup \{\beta^*\colon \mathtt{i}\to \mathtt{j} | (\beta\colon \mathtt{j}\to \mathtt{i})\in Q_1\}$ (respectively $\{\alpha\in Q_1| s(\alpha)\neq \mathtt{i}\}\cup \{\beta^*\colon \mathtt{j}\to \mathtt{i} | (\beta\colon \mathtt{i}\to \mathtt{j})\in Q_1\}$). Define the dualisable pro-species of algebras $s_\mathtt{i}(\Lambda)\colon s_\mathtt{i}(Q)\to \Algpro$ as follows: \begin{align*} (s_\mathtt{i}(\Lambda))_\mathtt{j}&:=\Lambda_\mathtt{j}&\text{ for all $\mathtt{j}\in Q_0$,}\\ (s_\mathtt{i}(\Lambda))_\alpha&:=\Lambda_\alpha&\text{if $t(\alpha)\neq \mathtt{i}$ (respectively $s(\alpha)\neq \mathtt{i}$),}\\ (s_\mathtt{i}(\Lambda))_{\beta^*}&:=\Hom_{\Lambda_\mathtt{i}}(\Lambda_\beta,\Lambda_\mathtt{i})&\text{if $t(\beta)=\mathtt{i}$ (respectively $s(\beta)=\mathtt{i}$).} \end{align*} Define the \emphbf{reflection functor} $F_\mathtt{i}^+\colon \rep(\Lambda)\to \rep(s_\mathtt{i}(\Lambda))$ (respectively $F_\mathtt{i}^-\colon \rep(\Lambda)\to \rep(s_\mathtt{i}(\Lambda))$) as follows. Let $M\in \rep(\Lambda)$. Define $N_\mathtt{i}:=\ker(M_{\mathtt{i},\arrowin})$ (respectively $N_\mathtt{i}:=\coker(M_{\mathtt{i},\arrowout})$). Then, there is an exact sequence \[ \begin{tikzcd}0\arrow{r} &N_\mathtt{i}\arrow{r}{(N_\alpha)^T_\alpha} &\bigoplus_{\substack{\alpha\in Q_1\\t(\alpha)=\mathtt{i}}}\Lambda_\alpha\otimes M_{s(\alpha)}\arrow{r}{(M_\alpha)_{\alpha}} &M_\mathtt{i}\arrow{r} &0 \end{tikzcd} \] (respectively an exact sequence \[ \begin{tikzcd} 0\arrow{r} &M_\mathtt{i}\arrow{r}{(M_\alpha^\vee)^T_\alpha} &\bigoplus_{\substack{\alpha\in Q_1\\s(\alpha)=\mathtt{i}}}\Lambda_{\alpha^*}\otimes M_{t(\alpha)}\arrow{r}{(N_{\alpha})_{\alpha}} &N_\mathtt{i}\arrow{r} &0).
\end{tikzcd} \] Then, define the representation $F_\mathtt{i}^+(M)$ by \begin{align*} F_\mathtt{i}^+(M)_\mathtt{j}&:=\begin{cases}M_\mathtt{j}&\text{if $\mathtt{j}\neq \mathtt{i}$,}\\N_\mathtt{i}&\text{if $\mathtt{j}=\mathtt{i}$,}\end{cases}\\ F_\mathtt{i}^+(M)_\alpha&:=M_\alpha&\text{if $t(\alpha)\neq \mathtt{i}$,}\\ F_\mathtt{i}^+(M)_{\beta^*}&:=N_\beta^{\wedge}&\text{if $t(\beta)=\mathtt{i}$.} \end{align*} (respectively the representation $F_\mathtt{i}^-(M)$ by \begin{align*} F_\mathtt{i}^-(M)_\mathtt{j}&:=\begin{cases}M_\mathtt{j}&\text{if $\mathtt{j}\neq \mathtt{i}$,}\\N_\mathtt{i}&\text{if $\mathtt{j}=\mathtt{i}$,}\end{cases}\\ F_\mathtt{i}^-(M)_\alpha&:=M_\alpha&\text{if $s(\alpha)\neq \mathtt{i}$,}\\ F_\mathtt{i}^-(M)_{\beta^*}&:=N_\beta&\text{if $s(\beta)=\mathtt{i}$.}) \end{align*} For a morphism $f\colon M\to \tilde{M}$, $F_\mathtt{i}^+(f)$ is defined by the universal property of the kernel (respectively $F_\mathtt{i}^-(f)$ is defined by the universal property of the cokernel). It is proven in \cite[Section 5]{LY15} that if $\Lambda$ is a locally Frobenius dualisable species of algebras, then $F_\mathtt{i}$ is given by tilting at an analogue of an APR tilting module. In a forthcoming paper, joint with Chrysostomos Psaroudakis, we study conditions under which the module $I_\mathtt{i}$, giving rise to the reflection functors $(\Sigma_\mathtt{i}^+,\Sigma_\mathtt{i}^-)$ on the preprojective algebra, is a tilting module. Furthermore, in joint work with Chrysostomos Psaroudakis and Nan Gao, we use reflection functors in the study of a generalisation of the submodule category as studied by Ringel and Schmidmeier, see e.g. \cite{RS08}. \section{The separated pro-species} \label{sec:separated} In this section, we show that the well-known results on the separated quiver of a radical square zero algebra (see e.g. \cite[Section 10]{DR75}, \cite[Section X.2]{ARS95}) extend to our setting. \begin{defn} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras.
Define $\rep_{l.p.}^{epi}(\Lambda)$ to be the full subcategory of $\rep_{l.p.}(\Lambda)$ consisting of those $M$ such that $M_{\mathtt{i},\arrowin}$ is an epimorphism for all $\mathtt{i}\in Q_0$. \end{defn} \begin{lem}\label{separateddecomposes} Let $\Lambda\colon Q\to \Algpro$ be a locally selfinjective pro-species such that $\Lambda_\alpha\neq 0$ implies that $\Lambda_\beta\neq 0$ for each $\beta$ with $t(\beta)=s(\alpha)$. Then every object in $\rep_{l.p.}(\Lambda)$ decomposes into a direct sum of objects in $\rep_{l.p.}^{epi}(\Lambda)$ and objects isomorphic to projective $\Lambda_\mathtt{i}$-modules regarded as $T(\Lambda)$-modules for some $\mathtt{i}\in Q_0$. \end{lem} \begin{proof} Without loss of generality, we may assume that $Q$ is a bipartite quiver. Otherwise we replace $Q$ by the quiver $\tilde{Q}$ in which all arrows $\alpha$ with $\Lambda_\alpha=0$ are removed. Defining $\tilde{\Lambda}$ by $\tilde{\Lambda}_\mathtt{i}=\Lambda_{\mathtt{i}}$ for all $\mathtt{i}$ and $\tilde{\Lambda}_\alpha:=\Lambda_\alpha$ for $\alpha$ with $\Lambda_\alpha\neq 0$, one obtains an equivalence between $\rep_{l.p.}(\Lambda)$ and $\rep_{l.p.}(\tilde{\Lambda})$. For such a quiver, let $I_0$ be the set of vertices $\mathtt{i}$ for which there is an arrow $\alpha$ with $t(\alpha)=\mathtt{i}$, and set $I_1:=Q_0\setminus I_0$. Let $M\in \rep_{l.p.}(\Lambda)$ be an arbitrary representation. Let $X$ be the $\Lambda$-representation with $X_\mathtt{i}:=\image M_{\mathtt{i},\arrowin}$ for $\mathtt{i}\in I_0$ and $X_\mathtt{i}:=M_{\mathtt{i}}$ otherwise. Define $X_\alpha$ to be the map induced by $M_\alpha$ for all $\alpha$. Since $\Lambda$ is locally selfinjective and $M$ is locally projective, the inclusion $X_\mathtt{i}\to M_\mathtt{i}$ splits. Thus, $X\in \rep_{l.p.}^{epi}(\Lambda)$. Furthermore, $X$ is a subrepresentation of $M$ and we can form the quotient $N:=M/X$, which is easily seen to satisfy $N_\alpha=0$ for all $\alpha$.
Since $N$ is also locally projective, it follows that $N$ is isomorphic to a direct summand of a direct sum of the $\Lambda_\mathtt{i}$, $\mathtt{i}\in Q_0$. \end{proof} \begin{rmk} Without the assumption of $\Lambda$ being locally selfinjective, the result is not true, as one can see from the example of $Q=\mathbb{A}_2=1\stackrel{\alpha}{\to} 2$, $\Lambda_\mathtt{i}:=k\mathbb{A}_2$ for $\mathtt{i}\in \{1,2\}$ and $\Lambda_\alpha:=\Lambda_\mathtt{1}$. Then the representation $M$ with $M_\mathtt{1}=P_2=S_2$ and $M_{\mathtt{2}}=P_1$ and $M_\alpha$ being the natural inclusion is indecomposable, but it is neither in $\rep_{l.p.}^{epi}(\Lambda)$ nor isomorphic to $\Lambda_{\mathtt{i}}$ for some $\mathtt{i}$. \end{rmk} The assumptions of Lemma \ref{separateddecomposes} are in particular satisfied for the separated pro-species: \begin{defn} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras with $Q$ not necessarily acyclic. The \emphbf{separated pro-species} $\Lambda^s$ associated to $\Lambda$ is defined as follows. Let $Q^s$ be the quiver with vertex set being the disjoint union $Q_0 \coprod Q_0$ where we denote a vertex in the latter set by $\overline{\mathtt{i}}$ for $\mathtt{i}\in Q_0$, and arrows $\overline{\alpha}\colon \mathtt{i}\to \overline{\mathtt{j}}$ for each arrow $\alpha\colon \mathtt{i}\to \mathtt{j}$ in $Q$. Let $\Lambda^s\colon Q^s\to \Algpro$ be the pro-species with $\Lambda^s_\mathtt{i}:=\Lambda^s_{\overline{\mathtt{i}}}:=\Lambda_\mathtt{i}$ and $\Lambda^s_{\overline{\alpha}}:=\Lambda_\alpha$. \end{defn} The following proposition is essentially proven as in the classical case (see e.g. \cite[Lemma X.2.1]{ARS95}), replacing $\rad{T(\Lambda)}$ by $T(\Lambda)_{\geq 1}$. We provide it for the convenience of the reader. \begin{prop} Let $\Lambda\colon Q\to \Algpro$ be a locally selfinjective pro-species of algebras with $Q$ not necessarily acyclic. Let $\Lambda^s$ be the separated pro-species associated to $\Lambda$.
Let $T(\Lambda)$ be equipped with the tensor grading, i.e. $\Lambda_{\mathtt{i}}$ is in degree $0$ and $\Lambda_\alpha$ is in degree $1$ for $\alpha\in Q_1$. Let $T(\Lambda)_{\geq 2}$ be the part of degree greater than or equal to $2$. Let $\Gamma:=T(\Lambda)/T(\Lambda)_{\geq 2}$. Then the functor \[F\colon \modu_{l.p.}\Gamma:=\{X\in \modu \Gamma\, |\, X\in \rep_{l.p.} \Lambda\}\to \rep_{l.p.}^{epi}\Lambda^s,\] given on objects by $F(M)_{\overline{\mathtt{i}}}=\image M_{\mathtt{i},\arrowin}$ and $F(M)_\mathtt{i}=M_\mathtt{i}/\image M_{\mathtt{i},\arrowin}$ and $F(M)_\alpha$ being induced by $M_\alpha$, is full, dense and a representation embedding, i.e. it preserves indecomposability and reflects isomorphisms. \end{prop} \begin{proof} Let $\mathfrak{r}$ be the $\Gamma$-module $T(\Lambda)_{\geq 1}/T(\Lambda)_{\geq 2}$. Then, there is a short exact sequence of $\Gamma$-modules $0\to \mathfrak{r}\to \Gamma\to \Gamma/\mathfrak{r}\to 0$. Tensoring this with a projective $\Gamma$-module, one obtains a short exact sequence $0\to \mathfrak{r}\otimes_\Gamma P\to P\to P/\mathfrak{r}P\to 0$. Let $E$ be a $\Gamma$-module which is locally projective when considered as a $\Lambda$-module. Note that each such module is in fact gradable. Thus its $\Gamma$-projective cover comes from a graded map $f\colon P\to E$. Hence, there is the following commutative diagram of gradable maps with exact rows and columns \[ \begin{tikzcd} 0\arrow{r} &\mathfrak{r}P\arrow{d}{s}\arrow{r}&P\arrow{d}\arrow{r} &P/\mathfrak{r}P\arrow{r}\arrow{d}{f_1}&0\\ 0\arrow{r}&\mathfrak{r}E\arrow{d}\arrow{r}&E\arrow{r}\arrow{d}&E/\mathfrak{r}E\arrow{r}&0\\ &0&0 \end{tikzcd} \] Note that $f_1$ is an isomorphism since $E$ is locally projective, which results in the left square being a pullback square as well as a pushout square.
Note that $T(\Lambda^s)$ is isomorphic to the triangular matrix ring \[\begin{pmatrix}T(\Lambda)/T(\Lambda)_{\geq 1}&0\\\mathfrak{r}&T(\Lambda)/T(\Lambda)_{\geq 1}\end{pmatrix}.\] As such, $T(\Lambda^s)$-modules are given by triples $(M,N,f)$ where $M$ and $N$ are $T(\Lambda)/T(\Lambda)_{\geq 1}$-modules and $f\colon \mathfrak{r}\otimes_{T(\Lambda)/T(\Lambda)_{\geq 1}} M\to N$ is a $T(\Lambda)/T(\Lambda)_{\geq 1}$-linear map. Identifying $\mathfrak{r}P$ with $\mathfrak{r}\otimes_{T(\Lambda)/T(\Lambda)_{\geq 1}} P/\mathfrak{r}P$, we obtain a commutative diagram showing that $(P/\mathfrak{r}P, \mathfrak{r}E,s)$ is isomorphic to $(E/\mathfrak{r}E,\mathfrak{r}E,h)$, where $h$ is the map induced by the multiplication map. \[ \begin{tikzcd} \mathfrak{r}\otimes_{T(\Lambda)/T(\Lambda)_{\geq 1}} P/\mathfrak{r}P\arrow{d}{1\otimes f_1}\arrow{r}{s}& \mathfrak{r}E\arrow{d}{\id_{\mathfrak{r}E}}\\ \mathfrak{r}\otimes_{T(\Lambda)/T(\Lambda)_{\geq 1}} E/\mathfrak{r}E\arrow{r}{h}&\mathfrak{r}E \end{tikzcd} \] We first prove that the functor $F$ is full. For this, let $E$ and $E'$ be in $\modu_{l.p.} \Gamma$ and write $F(E)=(M,N,t)$ and $F(E')=(M',N',t')$. Consider a morphism $(u,v)\colon (M,N,t)\to (M',N',t')$. Let $f\colon P\to E$ and $f'\colon P'\to E'$ be projective covers. According to the foregoing remarks, there are $T(\Lambda^s)$-isomorphisms $(M,N,t)\cong (P/\mathfrak{r}P,N,s)$ and $(M',N',t')\cong (P'/\mathfrak{r}P',N',s')$. It thus suffices to prove that each morphism $(u',v)\colon (P/\mathfrak{r}P,N,s)\to (P'/\mathfrak{r}P',N',s')$ comes from a morphism $w\colon E\to E'$. For such a morphism $(u',v)$, as noted before, there is a pushout square \[ \begin{tikzcd} \mathfrak{r}\otimes_{T(\Lambda)/T(\Lambda)_{\geq 1}} P/\mathfrak{r}P\arrow{d}{s}\arrow{r} &P\arrow{d}{f}\\ N\arrow{r}&E \end{tikzcd} \] Let $g\colon P\to P'$ be a lift of $u'\colon P/\mathfrak{r}P\to P'/\mathfrak{r}P'$.
There is a commutative diagram \[ \begin{tikzcd} \mathfrak{r}P\arrow{rd}{\mathfrak{r}u'}\arrow{dd}{s}\arrow{r}&P\arrow{rd}{g}\\ &\mathfrak{r}P'\arrow{dd}{s'}\arrow{r}&P'\arrow{dd}{f'}\\ N\arrow{rd}{v}\\ &N'\arrow{r}&E' \end{tikzcd} \] where the left square comes from the fact that $(u',v)$ is a homomorphism of $T(\Lambda^s)$-modules, the lower right corner is the pushout square for $(P'/\mathfrak{r}P',N',s')$, and the upper square commutes as $g$ is a lift of $u'$. Combining the latter two diagrams one gets a map $w\colon E\to E'$ by the universal property of the pushout. That $F(w)=(u',v)$ now follows from the commutativity of the following two cubes: \[ \begin{tikzcd} \mathfrak{r}P\arrow{rd}{g'}\arrow{dd}{s}\arrow{rr}{i}&&P\arrow{dd}[near end]{f}\arrow{rd}{g}\\ &\mathfrak{r}P'\arrow[crossing over]{rr}[near start]{i'}&&P'\arrow{dd}{f'}\\ \mathfrak{r}E\arrow{rd}{v}\arrow[]{rr}{h}&&E\arrow{rd}{w}\\ &\mathfrak{r}E'\arrow[from=uu, crossing over]{}[near end]{s'}\arrow{rr}{h'}&&E' \end{tikzcd} \] where the lower square commutes as $whs=wfi=f'gi=f'i'g'=h's'g'=h'vs$ and $s$ is an epimorphism, and \[ \begin{tikzcd} P\arrow{rd}{g}\arrow{dd}{f}\arrow{rr}{\pi}&&P/\mathfrak{r}P\arrow{dd}[near end]{f_1}\arrow{rd}{u'}\\ &P'\arrow[crossing over]{rr}[near start]{\pi'}&&P'/\mathfrak{r}P'\arrow{dd}{f_1'}\\ E\arrow{rd}{w}\arrow{rr}[near end]{p}&&E/\mathfrak{r}E\arrow{rd}{u}\\ &E'\arrow[from=uu, crossing over]\arrow{rr}{p'}&&E'/\mathfrak{r}E' \end{tikzcd} \] where the lower square commutes as $p'wf=p'f'g=f_1'\pi'g=f_1'u'\pi=uf_1\pi=upf$ and $f$ is an epimorphism. We claim that $F$ preserves indecomposability. For this, first note that $F(f)=0$ if and only if $f(E)\subseteq \mathfrak{r}E'$. It follows that $\End_{T(\Lambda^s)}(F(E))\cong \End_{\Gamma}(E)/\Hom_{\Gamma}(E,\mathfrak{r}E)$. Since $\mathfrak{r}^2=0$, it follows that $\Hom_{\Gamma}(E,\mathfrak{r}E)\subseteq \rad{\End_\Gamma(E)}$. In particular, $\End_{T(\Lambda^s)}(F(E))$ is local if and only if $\End_{\Gamma}(E)$ is local. 
Hence, $F$ preserves indecomposability. To show that $F$ reflects isomorphisms, let $E$ and $E'$ be such that $F(E)\cong F(E')$. Since $F$ is full, there exist $f\colon E\to E'$ and $g\colon E'\to E$ with $F(fg)=\id$ and $F(gf)=\id$. But then there exist $r\in \rad{\End_\Gamma(E)}$ and $r'\in \rad{\End_\Gamma(E')}$ with $gf+r=1$ and $fg+r'=1$. Since $r,r'$ are nilpotent, it follows that $gf$ and $fg$, and thus $f$ and $g$, are isomorphisms. Finally, to prove that $F$ is dense, let $(M,N,t)\in \rep_{l.p.}^{epi}\Lambda^s$. Let $f\colon P\to M$ be a projective cover. Taking the pushout of $\mathfrak{r}P\to P$ along the surjective map $t\colon \mathfrak{r}P\to N$ gives a $\Gamma$-module $E$. We claim that $F(E)\cong (M,N,t)$. This follows from the pushout diagram: since $P\to E$ is an epimorphism, the induced morphism $N\to E$ has image $\mathfrak{r}E$. \end{proof} \begin{thm}\label{stableequivalence} Let $\Lambda$, $\Gamma$, $\Lambda^s$, and $F$ be as above. Then $F$ induces an equivalence of the corresponding stable categories $\underline{\modu}_{l.p.}\Gamma\to \underline{\rep}_{l.p.}\Lambda^s$. \end{thm} \begin{proof} As $T(\Lambda^s)$ can be written as a triangular matrix ring \[\begin{pmatrix}T(\Lambda)/T(\Lambda)_{\geq 1}&0\\T(\Lambda)_{\geq 1}/T(\Lambda)_{\geq 2}&T(\Lambda)/T(\Lambda)_{\geq 1}\end{pmatrix},\] it follows that $F$ preserves and reflects projectivity, see e.g. \cite[Lemma X.2.2]{ARS95}. It follows that $F$ restricts to a functor $F\colon \modu_{l.p.}^{np} \Gamma\to \rep_{l.p.}^{np} \Lambda^s$. Since the $\Lambda^s$-representations not in $\rep^{epi}_{l.p.}\Lambda^s$ are projective, see the proof of Lemma \ref{separateddecomposes}, it follows that this restriction is dense. Since $F$ sends projectives to projectives, it follows that $F$ induces a dense functor $F'\colon \underline{\modu}_{l.p.} \Gamma\to \underline{\rep}_{l.p.} \Lambda^s$. By the foregoing lemma, it is full as well.
According to Lemma \ref{tensoralgebrahereditary}, $\rep_{l.p.}^{np}\Lambda^s\cong \underline{\rep}_{l.p.}\Lambda^s$. Suppose that $F'(f)=0$ for some morphism $f\colon M\to N$. Then, according to the proof of the previous proposition, $f$ lies in $\Hom_\Gamma(M,T(\Lambda)_{\geq 1}N)$. Thus there is a factorisation $M\to M/T(\Lambda)_{\geq 1} M\to T(\Lambda)_{\geq 1} N\to N$ of $f$. Let $g\colon P\to N$ be a $\Gamma$-projective cover of $N$. Then, $g$ induces an epimorphism $T(\Lambda)_{\geq 1} P\to T(\Lambda)_{\geq 1} N$. Noting that $\Lambda$ is locally selfinjective and that the locally projective modules $T(\Lambda)_{\geq 1} P$, $T(\Lambda)_{\geq 1} N$, and $M/T(\Lambda)_{\geq 1} M$ are all modules for the selfinjective algebra $\prod \Lambda_{\mathtt{i}}$, it follows that the map $M/T(\Lambda)_{\geq 1} M\to T(\Lambda)_{\geq 1}N$ factors through $T(\Lambda)_{\geq 1}P$. It follows that $f$ also factors through $P$. \end{proof} \section{Quivers and relations for pro-species} \label{sec:quivers} This final section of the article provides the bridge to classical representation theory of algebras by determining the Gabriel quiver and relations for the tensor algebra as well as the preprojective algebra of a pro-species. This generalises a result by Ib{\'a}{\~n}ez Cobos, Navarro and L{\'o}pez Pe{\~n}a, in which the case of pro-species where each $\Lambda_\alpha$ is a free bimodule is considered under the name of a ``generalised path algebra''. A similar result holds if the $\Lambda_\mathtt{i}$ on the vertices are given by classical species. We leave the obvious generalisation to the reader. \begin{prop}[cf. {\cite[Theorem 3.3]{INL08}}]\label{quivertensoralgebra} Let $\Lambda\colon Q\to \Algpro$ be a pro-species of algebras. Suppose $\Lambda_\mathtt{i}\cong \mathbbm{k}\tilde{Q}_\mathtt{i}/R_\mathtt{i}$ is given by a quiver $\tilde{Q}_\mathtt{i}$ and relations $R_\mathtt{i}$.
Let $\pi_\alpha\colon \tilde{P}_\alpha\to \Lambda_\alpha$ be a projective cover of $\Lambda_\alpha$ as a $\Lambda_{t(\alpha)}$-$\Lambda_{s(\alpha)}$-module. Denote its kernel by $R_\alpha$. Then, $T(\Lambda)\cong \mathbbm{k}\tilde{Q}/R$ is a description by a quiver with relations, where \[\tilde{Q}=\bigcup_{\mathtt{i}\in Q_0} \tilde{Q}_\mathtt{i}\cup \{j\to i\, |\, \Lambda_{t(\alpha)} e_j\otimes_\mathbbm{k} e_i\Lambda_{s(\alpha)} \text{ is a direct summand of } \tilde{P}_\alpha \text{ for some }\alpha\}\] and $R=\langle R_\mathtt{i}, R_\alpha\rangle$. \end{prop} \begin{proof} Firstly, we have to determine the Jacobson radical of $T(\Lambda)$. It is easy to see that the ideal $J$ spanned by the $\tilde{Q}_{\mathtt{i},1}$, the arrows of the quiver $\tilde{Q}_\mathtt{i}$ of $\Lambda_\mathtt{i}$, and the $\Lambda_\alpha$ is nilpotent, as the quiver $Q$ is acyclic. Noting that $T(\Lambda)/J\cong \prod \Lambda_\mathtt{i}/J_\mathtt{i}$, where $J_\mathtt{i}$ is the Jacobson radical of $\Lambda_\mathtt{i}$, the first claim follows. Secondly, to determine $J/J^2$, note that the arrows in $\tilde{Q}_\mathtt{i}$ correspond to a basis of $J_\mathtt{i}/J_\mathtt{i}^2$. Furthermore, for the $\Lambda_\alpha$ one notes that elements of $\Kopf(\Lambda_\alpha)$ do not belong to $J^2$. As the elements of $\Kopf(\Lambda_\alpha)$ correspond to the direct summands of the form $\Lambda_{t(\alpha)}e_j\otimes e_i\Lambda_{s(\alpha)}$, the description of the quiver follows. Thirdly, it is clear that $T(\Lambda)\cong \mathbbm{k}\tilde{Q}/R$. What is left to prove is that $R$ is admissible. It is obvious that $(\mathbbm{k}\tilde{Q}^+)^m\subseteq R$.
For the inclusion $R\subseteq (\mathbbm{k}\tilde{Q}^+)^2$, note that it follows from the corresponding fact for the $R_\mathtt{i}$ and from the fact that we chose a projective cover of $\Lambda_\alpha$, so that $R_\alpha\subseteq \rad{\tilde{P}_\alpha}$, whose summands are of the form $\rad{\Lambda_{t(\alpha)}}e_j\otimes_\mathbbm{k} e_i \Lambda_{s(\alpha)}+\Lambda_{t(\alpha)}e_j\otimes_\mathbbm{k} e_i\rad{\Lambda_{s(\alpha)}}$. Since $e_j\otimes_\mathbbm{k} e_i\in \rad{T(\Lambda)}$, the claim follows. \end{proof} The proof does not use the fact that the $\Lambda_\alpha$ are projective on both sides. We included this assumption because we use it everywhere else in the paper. It is not known to the author whether this property was already established for triangular matrix rings. \begin{ex}\label{glsexample} \begin{enumerate}[(a)] \item\label{gls:i} Let $Q=1\stackrel{\alpha}{\to} 2$ and let $\Lambda$ be given by $\Lambda_1=\Lambda_2=\mathbbm{k}(1\stackrel{\beta}{\to} 2)$ and let $\Lambda_\alpha=\Lambda_1$. Then a projective cover of $\Lambda_\alpha$ is given by $\Lambda_2e_1\otimes e_1\Lambda_1\oplus \Lambda_2e_2\otimes e_2\Lambda_1$, and the kernel of $\pi_\alpha$ is generated by $\beta\otimes e_1-e_2\otimes \beta$. We recover the well-known fact that this triangular matrix ring is given by the commutative square. \item\label{gls:ii} Let $Q$ be a quiver. Let $c_\mathtt{i}, f_{\mathtt{i}\mathtt{j}}, f_{\mathtt{j}\mathtt{i}}, g_{\mathtt{i}\mathtt{j}}\in \mathbb{N}$ for $\mathtt{i},\mathtt{j}\in Q_0$.
Let $\Lambda\colon Q\to \Algpro$ be the pro-species of algebras given by \[\Lambda_\mathtt{i}:=\mathbbm{k}[x_\mathtt{i}]/(x_\mathtt{i}^{c_\mathtt{i}})\] for $\mathtt{i}\in Q_0$ and \[\Lambda_{\alpha}:=\bigoplus_{g_{\mathtt{i}\mathtt{j}}} \left(\mathbbm{k}[x_\mathtt{i}]/(x_\mathtt{i}^{c_\mathtt{i}})\otimes_\mathbbm{k} \mathbbm{k}[x_\mathtt{j}]/(x_\mathtt{j}^{c_\mathtt{j}})\right)/\left(x_\mathtt{i}^{f_{\mathtt{j}\mathtt{i}}}\otimes_\mathbbm{k} 1-1\otimes_\mathbbm{k} x_\mathtt{j}^{f_{\mathtt{i}\mathtt{j}}}\right)\] for $\alpha\in Q_1$. Then $T(\Lambda)$ is isomorphic to the algebra $H(C,D,\Omega)$ as defined in \cite{GLS16}, where the relations (H1) correspond to the $R_\mathtt{i}$ and the relations (H2) correspond to the $R_\alpha$. This explains the relations (H1) and (H2) of \cite{GLS16}, which at first sight might seem unnatural. \end{enumerate} \end{ex} Similarly, one obtains the following statement for the preprojective algebra. Note that this algebra is not necessarily finite-dimensional; thus we do not use the expression ``quiver with relations'' here. \begin{prop}\label{quiverpreprojectivealgebra} Let $\Lambda\colon Q\to \Algpro$ be a dualisable pro-species of algebras. Suppose $\Lambda_\mathtt{i}\cong \mathbbm{k}\tilde{Q}_\mathtt{i}/R_\mathtt{i}$ is given by a quiver $\tilde{Q}_\mathtt{i}$ and relations $R_\mathtt{i}$. For each $\alpha\in \overline{Q}$ let $\pi_\alpha\colon \tilde{P}_\alpha\to \Lambda_\alpha$ be a projective cover of $\Lambda_\alpha$ as a $\Lambda_{t(\alpha)}$-$\Lambda_{s(\alpha)}$-module. Denote its kernel by $R_\alpha$. Then, $\Pi(\Lambda)\cong \mathbbm{k}\tilde{Q}/R$, where \[\tilde{Q}=\bigcup_{\mathtt{i}\in Q_0} \tilde{Q}_\mathtt{i}\cup \{j\to i\, |\, \Lambda_{t(\alpha)} e_j\otimes_\mathbbm{k} e_i\Lambda_{s(\alpha)} \text{ is a direct summand of } \tilde{P}_\alpha \text{ for some }\alpha\}\] and $R=\langle R_\mathtt{i}, R_\alpha, c\rangle$ where $c$ is as in Definition \ref{preprojectivealgebra}. \end{prop} \begin{ex} Let $Q$ be a quiver.
If $\Lambda\colon Q\to \Algpro$ is defined as in Example \ref{glsexample}~\eqref{gls:ii}, then $\Pi(\Lambda)$ is isomorphic to the algebra $\Pi(C,D,\Omega)$ from \cite{GLS16}, where (P1) corresponds to the $R_\mathtt{i}$, (P2) corresponds to the $R_\alpha$, and (P3) corresponds to the relations in $c$. \end{ex} \end{document}
\begin{document} \title[Affine Representability and decision procedures]{Affine representability and decision procedures for commutativity theorems for rings and algebras} \author[J. P. Bell]{Jason P. Bell} \address{Department of Pure Mathematics, University of Waterloo, Waterloo, ON Canada N2L 3G1} \email{[email protected]} \author[P. V. Danchev]{Peter V. Danchev} \address{Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, ``Acad. G. Bonchev'' str., bl. 8, 1113 Sofia, Bulgaria} \email{[email protected]; [email protected]} \keywords{Specht's problem, Kemer's solution, PI-rings, commutativity, Jacobson theorem, Herstein theorem} \subjclass[2010]{Primary 16R10, 16R30, 16R60} \begin{abstract} We consider applications of a finitary version of the Affine Representability theorem, which follows from recent work of Belov-Kanel, Rowen, and Vishne. Using this result, we show that, given a finite set of polynomial identities, there is an algorithm that terminates after a finite number of steps and decides whether these identities force a ring to be commutative. We then revisit old commutativity theorems of Jacobson and Herstein in light of this algorithm and obtain general results in this vein. In addition, we completely characterize the homogeneous multilinear identities that imply the commutativity of a ring. \end{abstract} \maketitle \section{Introduction and Background} One of the crowning achievements in the theory of rings satisfying a polynomial identity is Kemer's solution in characteristic zero \cite{K1}--\cite{K4} to the Specht problem \cite{S}, which asks whether every set of polynomial identities of an algebra is a consequence of a finite number of identities.
A key component of Kemer's work is the Affine Representability theorem (see \cite{Alja}), which shows that for a finitely generated algebra $A$ satisfying an identity over an infinite field, there is a finite-dimensional algebra $B$ (possibly over a larger field) that satisfies the exact same set of identities as $A$. The original groundbreaking work of Kemer has since been expanded by others, including notably Belov-Kanel, Rowen, and Vishne \cite{B1,B2,B3,BRV1,BRV2,BRV3}, who extended much of Kemer's theory to the setting of algebras over commutative noetherian base rings. It is clear that the Affine Representability theorem does not in general hold for algebras over a finite field; for example, if $A=\mathbb{F}_p[t]$ then $A$ cannot satisfy the exact same set of identities as a finite-dimensional $\mathbb{F}_p$-algebra $B$, because a finite-dimensional $\mathbb{F}_p$-algebra satisfies an identity of the form $X^m-X^n=0$ for some $m$ and $n$ with $m>n$, while $A$ satisfies no such identity. In practice, however, one is often only concerned with finite sets of non-identities. When one restricts one's attention to this setting, it becomes natural to ask whether this finitary version of the Affine Representability theorem holds. \begin{question} Given sets $\mathcal{S}$ and $\mathcal{T}$ of polynomial identities with $\mathcal{T}$ finite, if there is an algebra which satisfies all of the identities from $\mathcal{S}$ and none of the identities from $\mathcal{T}$, is there a finite ring with this property? \label{A} \end{question} Although this question is not explicitly answered in the literature, recent work of Belov-Kanel, Rowen, and Vishne \cite{BRV1, BRV2, BRV3} can be used to quickly show that this question has an affirmative answer (see Theorem \ref{lem:p} for details).
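The finiteness phenomenon used above, that a finite-dimensional $\mathbb{F}_p$-algebra satisfies an identity $X^m-X^n=0$ with $m>n$, can be observed concretely. The following sketch (our own illustration, not part of the paper's arguments) brute-forces the smallest such pair of exponents for the $16$-element ring $\mathbb{F}_2\{U,V\}/(U^2,V^2,UV)$, which reappears in Theorem \ref{thm:multi} below; elements are encoded as coordinate vectors in the basis $1,U,V,VU$.

```python
from itertools import product

# Elements of R = F_2<U,V>/(U^2, V^2, UV) as coefficient tuples
# (a0, a1, a2, a3) = a0*1 + a1*U + a2*V + a3*VU over F_2.
def mul(a, b):
    # Among products of the basis elements U, V, VU, only V*U = VU survives.
    return ((a[0] * b[0]) % 2,
            (a[0] * b[1] + a[1] * b[0]) % 2,
            (a[0] * b[2] + a[2] * b[0]) % 2,
            (a[0] * b[3] + a[3] * b[0] + a[2] * b[1]) % 2)

def power(x, n):
    y = (1, 0, 0, 0)  # the multiplicative identity of R
    for _ in range(n):
        y = mul(y, x)
    return y

R = list(product(range(2), repeat=4))  # all 16 elements

# R is noncommutative: UV = 0 but VU != 0.
U, V = (0, 1, 0, 0), (0, 0, 1, 0)
assert mul(U, V) != mul(V, U)

# Smallest pair m > n >= 1 with x^m = x^n for all x in R.
found = next((m, n) for m in range(2, 20) for n in range(1, m)
             if all(power(x, m) == power(x, n) for x in R))
print(found)
```

The search returns $(m,n)=(7,3)$: elements with zero constant term cube to zero, while the units of this ring all satisfy $x^4=1$.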
Historically, the most important results involving both polynomial identities and polynomial non-identities have largely focused on the case when a collection of identities forces $[X,Y]=0$ to also be an identity; that is, to show that an identity or family of identities forces a ring to be commutative (see Herstein \cite[Chapt. 6]{Her} and the references given at the end of the chapter, along with the paper of Pinter-Lucke \cite{PL}). This, for example, is the content of special cases of famous results of Jacobson \cite{J} and Herstein \cite{H}, which respectively assert that rings for which $X^n=X$ and $[X^n-X,Y]=0$ are identities for some fixed $n\ge 2$ are necessarily commutative.\footnote{The general version of Jacobson's result says that if for each $x$ in a ring $R$ there is some $n=n(x)\ge 2$ such that $x^n=x$, then $R$ is commutative; Herstein's result is similar, but now requires only that $x^n-x$ be central in $R$.} Results of this type are typically called \emph{commutativity theorems}. An affirmative answer to Question \ref{A} gives a general approach to attacking such problems, as it reduces the analysis to looking at finite rings, although we are unaware of this approach being used previously. In fact, in the special case where one is looking at $[X,Y]=0$ being a non-identity for a ring, one can give a more direct answer to Question \ref{A} that does not require the deep machinery coming out of the recent work of Belov-Kanel, Rowen, and Vishne on Kemer's theorem (see Theorem \ref{thm:Specht}). One of the consequences of the fact that Question \ref{A} has an affirmative answer is that if there is a noncommutative ring that satisfies some set of identities, then there is a finite noncommutative ring satisfying all identities in the set. We use this observation to give a decision procedure to determine whether a finite set of polynomial identities forces a ring to be commutative.
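To make the reduction to finite rings concrete, here is a small sketch (our own illustration; the matrix encoding is an arbitrary choice) that tests Herstein's identity $[X^2-X,Y]=0$ on the eight-element ring of upper triangular $2\times 2$ matrices over $\mathbb{F}_2$. Since this ring is noncommutative, Herstein's theorem guarantees that the identity must fail there, and an exhaustive search over all $64$ pairs finds witnesses.

```python
from itertools import product

MOD = 2

def mat_mul(a, b):
    # 2x2 matrix product over F_2; matrices are tuples ((a,b),(c,d)).
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) % MOD
                       for j in range(2)) for i in range(2))

def mat_sub(a, b):
    return tuple(tuple((a[i][j] - b[i][j]) % MOD for j in range(2))
                 for i in range(2))

# The eight upper triangular 2x2 matrices over F_2.
ring = [((p, q), (0, r)) for p, q, r in product(range(2), repeat=3)]

def herstein_holds(x, y):
    # Does [x^2 - x, y] = 0 hold for this pair?
    z = mat_sub(mat_mul(x, x), x)
    return mat_mul(z, y) == mat_mul(y, z)

violations = [(x, y) for x in ring for y in ring if not herstein_holds(x, y)]
print(len(violations) > 0)  # the identity fails, as Herstein's theorem predicts
```

A pair such as $x=\left(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\right)$, $y=\left(\begin{smallmatrix}1&0\\0&0\end{smallmatrix}\right)$ is among the witnesses found, since here $x^2-x$ is the strictly upper triangular matrix unit, which does not commute with $y$.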
In light of this result, we are able to prove the following somewhat unexpected theorem, which can be viewed as a sort of machine for producing commutativity theorems. \begin{thm} There is a decision procedure that takes as input a finite number of polynomial identities $P_1=\cdots =P_m=0$ and gives as output either a finite noncommutative ring for which these identities all hold or a proof that every ring for which these identities all simultaneously hold is commutative. \label{thm:algorithm} \end{thm} By a \emph{decision procedure} we simply mean there is an algorithm that terminates after a finite number of steps. The algorithm itself is rather lengthy to describe in general, but in practice---for specific sets of identities---it can be carried out reasonably quickly, and we give applications of our algorithm in Section \ref{Applications}. To produce an algorithm, we require a coarse classification of finite noncommutative rings with the property that all non-trivial homomorphic images and all proper subrings are commutative. This is the content of Theorem \ref{thm:trichotomy} and Remark \ref{rem:trichotomy}; Theorem \ref{thm:trichotomy} is somewhat technical, but it shows that all such algebras lie in one of three infinite classes of algebras that are indexed by the prime numbers. As a quick application of our Theorem \ref{thm:trichotomy}, we are able to completely characterize the homogeneous multilinear polynomial identities that force a ring to be commutative. Arguably the most important class of polynomial identities is that of the homogeneous multilinear identities, which arise in the theory of polynomial identities via a natural linearization process.
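To illustrate the linearization process just mentioned, the following sketch (the representation of a noncommutative monomial as a tuple of variable indices is ours) extracts the multilinear part of $(X_1+\cdots+X_m)^m$, which is how one linearizes the power $X^m$; for $m=2$ this recovers $X_1X_2+X_2X_1$.

```python
from itertools import product

def multilinear_part_of_power(m):
    # expand (X_1 + ... + X_m)^m in the free algebra and keep the
    # monomials in which each variable occurs exactly once
    coeffs = {}
    for word in product(range(1, m + 1), repeat=m):
        if sorted(word) == list(range(1, m + 1)):  # multilinear monomial
            coeffs[word] = coeffs.get(word, 0) + 1
    return coeffs

# linearizing X^2 yields X_1 X_2 + X_2 X_1
assert multilinear_part_of_power(2) == {(1, 2): 1, (2, 1): 1}
# for general m one obtains the symmetric sum over all permutations
assert multilinear_part_of_power(3) == {w: 1 for w in
    [(1,2,3),(1,3,2),(2,1,3),(2,3,1),(3,1,2),(3,2,1)]}
```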
To give this characterization, we fix a homogeneous multilinear polynomial $$P(X_1,\ldots ,X_m)=\sum_{\sigma \in S_m} c_{\sigma}X_{\sigma(1)}\cdots X_{\sigma(m)} \in \mathbb{Z}\{X_1,\ldots ,X_m\}.$$ For each $i,j\in \{1,\ldots ,m\}$ with $i<j$ we define \begin{equation} \Theta_{i,j}(P) = \sum_{\{\sigma\in S_m\colon \sigma^{-1}(i)<\sigma^{-1}(j)\}} c_{\sigma}\in \mathbb{Z}. \label{eq:Theta} \end{equation} Then we have the following result. \begin{thm} Let $P(X_1,\ldots ,X_m)=\sum_{\sigma \in S_m} c_{\sigma}X_{\sigma(1)}\cdots X_{\sigma(m)} \in \mathbb{Z}\{X_1,\ldots ,X_m\}$ be a homogeneous multilinear polynomial. Then there is a noncommutative ring for which $P=0$ is an identity if and only if there is a prime number $p$ such that the following hold: \begin{enumerate} \item $p\mid P(1,1,\ldots ,1)$; \item $p\mid \Theta_{i,j}(P)$ for $1\le i<j\le m$. \end{enumerate} Moreover, if there is such a prime $p$ for which these conditions hold, then $P=0$ is an identity for the noncommutative ring $\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$. \label{thm:multi} \end{thm} We have chosen to state the conditions in Theorem \ref{thm:multi} in terms of $P(1,\ldots ,1)$ and the $\Theta_{i,j}$, as these are integers that one can explicitly compute, and so one can determine whether there is a prime $p$ for which (1) and (2) hold. Nevertheless, for a homogeneous multilinear polynomial $P(X_1,\ldots, X_m)$, it is more natural to let $P_{i,j}$ denote the specialization $P(1,\ldots ,1, X_i, 1,\ldots ,1,X_j,1,\ldots ,1)$ for $i<j$, where we have an $X_i$ in the $i$-th coordinate and $X_j$ in the $j$-th coordinate. Then $P_{i,j} = \Theta_{i,j}(P)X_i X_j + (P(1,1,\ldots ,1)-\Theta_{i,j}(P))X_jX_i$, and so conditions (1) and (2) are equivalent to the condition that both coefficients of $P_{i,j}$ be divisible by $p$ for $1\le i<j\le m$. The outline of this paper is as follows.
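As an illustration of how the conditions of Theorem \ref{thm:multi} can be checked in practice, the following sketch (the encoding of $P$ as a dictionary sending the word $(\sigma(1),\ldots,\sigma(m))$ to $c_\sigma$, and the function names, are ours) computes $P(1,\ldots,1)$ and the $\Theta_{i,j}(P)$; a prime satisfying (1) and (2) exists precisely when the gcd of these integers is different from $1$.

```python
from math import gcd
from functools import reduce

def theta(P, i, j):
    # Theta_{i,j}(P): the sum of c_sigma over the words in which i precedes j
    return sum(c for word, c in P.items() if word.index(i) < word.index(j))

def obstruction_gcd(P, m):
    # gcd of P(1,...,1) and all Theta_{i,j}(P); a prime satisfying
    # conditions (1) and (2) exists iff this gcd is not 1
    vals = [sum(P.values())]
    vals += [theta(P, i, j) for i in range(1, m + 1) for j in range(i + 1, m + 1)]
    return reduce(gcd, vals)

# the commutator X_1 X_2 - X_2 X_1: no prime works, as expected
assert obstruction_gcd({(1, 2): 1, (2, 1): -1}, 2) == 1
# 2 X_1 X_2 - 2 X_2 X_1: p = 2 works, and indeed 2[X, Y] = 0 holds in
# any noncommutative ring of characteristic 2
assert obstruction_gcd({(1, 2): 2, (2, 1): -2}, 2) == 2
```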
In \S\ref{Rowen} we show how one can give an affirmative answer to Question \ref{A} using the powerful work of Belov-Kanel, Rowen, and Vishne. In addition, we give a direct argument in the case where one is studying rings for which $[X,Y]=0$ is a non-identity and give a coarse classification of finite noncommutative rings for which every proper homomorphic image and every proper subring is commutative. In \S\ref{Algorithm}, we use results from \S\ref{Rowen} to give an algorithm, which proves Theorem \ref{thm:algorithm}. In \S\ref{Applications} we revisit the fixed-degree versions of old commutativity theorems of Jacobson \cite{J} and Herstein \cite{H} in light of these results and prove general commutativity theorems; in \S\ref{sec:multilinear} we characterize the homogeneous multilinear polynomial identities with the property that whenever a ring satisfies such an identity it is necessarily commutative, and prove a more general version of Theorem \ref{thm:multi} (see Theorem \ref{thm:multi2}). Throughout this paper, we will take a \emph{noncommutative ring} to be a ring that is \emph{not} commutative, and all rings considered are assumed to be associative and to possess an identity element. We refer the reader to \cite{BR}, \cite{D}, and \cite{R1} for background on polynomial identities. Finally, we will often say that a polynomial $P\in \mathbb{Z}\{X_1,\ldots ,X_s\}$ is either an \emph{identity} or a \emph{non-identity} for a ring $R$. By this, we simply mean that $P(r_1,\ldots ,r_s)=0$ for all $r_1,\ldots, r_s\in R$ when speaking of $P$ being an identity for $R$; and when speaking of $P$ being a non-identity for $R$ we mean that there exist $r_1,\ldots ,r_s\in R$ such that $P(r_1,\ldots ,r_s)\neq 0$. \section*{Acknowledgments} We are grateful to Lance W. Small and to Louis H. Rowen for many useful comments. The first-named author expresses his thanks to Luna Xin, who asked a question that led to Theorem 4.3.
In addition, we are grateful to the anonymous referee, who made numerous helpful comments, including suggesting the addition of Proposition \ref{prop:gen}. \section{Finite rings and finite sets of identities} \label{Rowen} In this section, we show that Question \ref{A} has an affirmative answer. We once again point out that our work relies heavily on the aforementioned work of Belov-Kanel, Rowen, and Vishne \cite{BRV3}. We begin with a classical fact from commutative algebra, which, if one borrows terminology from group theory, says that finitely generated $\mathbb{Z}$-algebras are \emph{residually finite} (see \cite{Vara}). (We note that although the paper \cite{CL} predates the reference \cite{Vara}, the paper of Chew and Lawn \cite{CL} deals with what are now called \emph{just infinite rings} and not residually finite rings in the sense given here.) \begin{lemma} Let $C$ be a finitely generated commutative $\mathbb{Z}$-algebra. If $x\in C$ is nonzero, then there is some ideal $L$ such that $x\not\in L$ and such that $C/L$ is finite. \label{lem:Krull} \end{lemma} \begin{proof} Let $I$ be the annihilator of $x$. Then since $x$ is nonzero, there is some maximal ideal $Q$ such that $I$ is contained in $Q$. In particular, $x$ has nonzero image in the localization $C_Q$. By the Krull intersection theorem (see, for instance, \cite[Corollary 5.4]{Eisenbud}), we deduce that $$\bigcap_{n\in \mathbb{N}} Q^n C_Q = (0)$$ and so there is some $n$ such that $x\not\in Q^n C_Q$ and thus $x\not\in Q^n$. By the Nullstellensatz \cite[Theorem 4.19, p. 132]{Eisenbud}, $C/Q$ is a finite field, and so every ideal in the chain $$Q\supseteq Q^2\supseteq Q^3 \supseteq \cdots $$ is cofinite and so letting $L=Q^n$ gives the result. \end{proof} \begin{thm} Let $\mathcal{S}$ and $\mathcal{T}$ be sets of polynomial identities with $\mathcal{T}$ finite.
If there exists a ring $R$ such that all elements of $\mathcal{S}$ are identities for $R$ and all elements of $\mathcal{T}$ are non-identities for $R$, then there exists a finite ring $S$ such that all elements of $\mathcal{S}$ are identities for $S$ and all elements of $\mathcal{T}$ are non-identities for $S$. \label{lem:p} \end{thm} \begin{proof} We observe that it suffices to prove the case when $|\mathcal{T}|=1$, since if this holds, then for each non-identity $G=0$ in $\mathcal{T}$ there is a finite ring $A$ for which each element of $\mathcal{S}$ is an identity and for which $G=0$ is a non-identity. The direct product of these finite rings is then a finite ring with the desired properties. Thus we assume henceforth that $\mathcal{T}$ consists of a single identity $G(X_1,\ldots ,X_s)=0$. By assumption there is some $\mathbb{Z}$-algebra $R$ for which all elements of $\mathcal{S}$ are identities and such that $G=0$ is a non-identity. Then there is some $s$-generated subalgebra $R_0$ of $R$ that witnesses the fact that $G(X_1,\ldots ,X_s)=0$ is a non-identity. We let $I$ be the $T$-ideal in $S:=\mathbb{Z}\{X_1,\ldots, X_s\}$ generated by the identities from $\mathcal{S}$. Then since $\mathbb{Z}$ is noetherian, by a result of Belov-Kanel, Rowen, and Vishne \cite[\S7.2]{BRV3} the algebra $A:=\mathbb{Z}\{X_1,\ldots, X_s\}/I$ is representable, and hence there is a commutative $\mathbb{Z}$-algebra $C$ such that $A$ embeds as a subalgebra of the full $n\times n$ matrix ring $M_n(C)$ for some $n\ge 1$. We may replace $A$ by its image in $M_n(C)$, and since this image is isomorphic to $A$, it satisfies all identities in $\mathcal{S}$. In addition, we may replace $C$ by the finitely generated subalgebra generated by the entries of a finite set of generators for $A$, since all we require is that the map $A\to M_n(C)$ be an embedding. We therefore assume that $C$ is finitely generated and hence noetherian.
Since $I$ is a $T$-ideal and since $G=0$ is a non-identity for $A$, the image of $G(X_1,\ldots ,X_s)$ in $M_n(C)$ is nonzero, and thus there is some nonzero $x\in C$ such that $x$ is an entry of the image of $G(X_1,\ldots ,X_s)$ in $M_n(C)$. By Lemma \ref{lem:Krull} there is a cofinite ideal $L$ of $C$ such that the image of $x$ is nonzero in $C/L$. In particular, the image of $G(X_1,\ldots ,X_s)$ in $M_n(C/L)$ under the composition of maps $$S\to S/I \to M_n(C)\to M_n(C/L)$$ is nonzero, and so the image of $S/I$ in $M_n(C/L)$ is a finite ring which by construction satisfies every identity from the set $\mathcal{S}$ and does not satisfy the identity $G=0$. \end{proof} \begin{remark} It is interesting to compare Theorem \ref{lem:p} with the situation for other algebraic structures. We note that work by Kle\u{\i}man \cite{Kleiman} on groups and Murski\u{\i}~\cite{Murskii} on semigroups shows that these situations are very different and that one does not expect analogues of our results to hold in these settings. \end{remark} As mentioned earlier, the most important special case of Question \ref{A} has historically been the case of studying polynomial identity rings for which $[X,Y]=0$ is a non-identity (see, for example, \cite[Chapt. 6]{Her} and references therein). In this case, we can give a more direct proof. \begin{thm} Let $\mathcal{S}$ be a set of polynomial identities. If there is a noncommutative ring for which the elements of $\mathcal{S}$ are identities, then there exists a finite noncommutative ring with this property. \label{thm:Specht} \end{thm} \begin{proof} If $\mathcal{S}$ is empty, we can take $R$ to be the ring of $2\times 2$ matrices over a finite field. Thus we may assume $\mathcal{S}$ is non-empty. Let $I$ denote the $T$-ideal of $\mathbb{Z}\{x,y\}$ generated by the identities in $\mathcal{S}$. Then by assumption, the ring $R:=\mathbb{Z}\{x,y\}/I$ is not commutative. In particular, the image of $a:=[x,y]$ is nonzero in $R$.
Now let $\mathcal{X}$ denote the collection of ideals $J$ of $R$ such that $R/J$ is not a commutative ring (i.e., the ideals that do not contain the image of $a$ in $R$). By Zorn's lemma, there exists a maximal element $J_0$ of $\mathcal{X}$. We may then replace $R$ by $R/J_0$ and assume that $R/L$ is commutative whenever $L$ is a nonzero ideal of $R$. By construction, $R$ is a $\mathbb{Z}$-algebra that is generated by two elements $u$ and $v$ that do not commute such that every nonzero ideal of $R$ contains $[u,v]$; moreover, $R$ satisfies the identities in $\mathcal{S}$. If $R$ is finite, then there is nothing to do. Thus we may assume that $R$ is infinite. Let $L=R[u,v]R$. Then by construction $L$ is a minimal nonzero two-sided ideal of $R$ and hence it is a simple $T:=R\otimes_{\mathbb{Z}} R^{\rm op}$-module. Since $\mathcal{S}$ is non-empty, $R$ satisfies a polynomial identity and so by a theorem of Regev \cite[Theorem 1, p. 152]{Regev}, $T$ does too. By minimality of $L$ as a nonzero two-sided ideal of $R$, the annihilator of $L$ as a left $T$-module is a primitive ideal $Q$ of $T$. Then by Kaplansky's theorem \cite[Theorem 6.1.25]{R2}, $T/Q\cong M_n(D)$, where $D$ is a division ring that is finite-dimensional over its center. Thus $T/Q$ is a finite module over its center and since $T$ is a finitely generated $\mathbb{Z}$-algebra, the center of $T/Q$ is a finitely generated $\mathbb{Z}$-algebra by the Artin-Tate lemma (cf. \cite[\S6.2]{R2}). In addition, the center of $T/Q$ is a field and thus by the Nullstellensatz \cite[Theorem 4.19, p. 132]{Eisenbud}, the center of $T/Q$ is a finite field and so $T/Q$ is finite. Notice that $L=T\cdot [u,v]$ and since $T/Q$ is finite and $Q$ annihilates $L$, we then see that $L$ is necessarily finite. Since $L$ is finite, there are cofinite left and right ideals $I_1$ and $I_2$ of $R$ such that $I_1\cdot L=L\cdot I_2=(0)$.
These ideals then contain cofinite two-sided ideals and by taking the intersection of these ideals, we see that there is a two-sided ideal $I$ of $R$ such that $R/I$ is a finite ring and such that $IL=LI=(0)$. Since $R/L$ is a homomorphic image of $\mathbb{Z}[u,v]$, we see that $R/L$ is noetherian and since $L$ is finite, $R$ is both left and right noetherian as well. Since $R$ is a countable ring we can take an enumeration $r_1,r_2,r_3,\ldots $ of the elements of $R$. Notice that for each $r\in R$, the elements $[u,r]$ and $[v,r]$ lie in $L$. Thus each element $r\in R$ gives us a map $f_r: \{u,v\}\to L$, given by $f_r(u)=[u,r]$ and $f_r(v)=[v,r]$. Then since $L$ is finite, there are only finitely many maps from $\{u,v\}$ to $L$, so we see that there is some natural number $N$ such that whenever $n> N$, there is some $i\le N$, depending on $n$, such that $f_{r_n}=f_{r_i}$. In particular, one sees that $r_n-r_i$ is central. It follows that $R$ is spanned by $1,r_1,\ldots ,r_N$ as a module over its center, $Z(R)$. By the Artin-Tate lemma (cf. \cite[\S6.2]{R2}), $Z(R)$ is finitely generated as a $\mathbb{Z}$-algebra and hence it is noetherian. Notice that if $z\in Z(R)$, then multiplication by $z$ induces a self-map of the finite ideal $L$ and hence there is some monic integer polynomial $P(z)$ that annihilates $L$, since $L$ is finite. We claim that $P(z)^n = 0$ in $R$ for some $n$. To see this, suppose that this is not the case. Then, for each $n$, $P(z)^nR$ is a nonzero ideal of $R$ and by minimality of $L$ as a nonzero ideal, for each $n\ge 1$ there is some $x_n\in R$ such that $P(z)^n x_n = [u,v]$. Now let $I_n = \{x\in R\colon P(z)^n x\in L\}$. Then $I_1\subseteq I_2\subseteq I_3\subseteq \cdots $ is an ascending chain of ideals in $R$ and since $R$ is noetherian, there is some $\ell$ such that $I_{\ell}=I_{\ell+1}$. In particular, $P(z)^{\ell} x_{\ell+1}\in L$.
But since $P(z)$ annihilates $L$, this gives $$[u,v]=P(z)^{\ell+1} x_{\ell+1} =P(z)(P(z)^{\ell} x_{\ell+1})\in P(z)L = (0),$$ a contradiction. Thus there is some $n$ such that $P(z)^n=0$. Since $P(X)^n$ is a monic polynomial with integer coefficients, every $z\in Z(R)$ is integral over the image of $\mathbb{Z}$ in $Z(R)$ and since $Z(R)$ is a finitely generated $\mathbb{Z}$-algebra, we see that $Z(R)$ is a finitely generated $\mathbb{Z}$-module. Since $L$ is finite, there is some positive integer $b$ such that $bL=(0)$. Therefore, the same argument as before shows that there is some $n$ such that the integer $k:=b^n$ is zero in $R$. Thus $Z(R)$ is a finitely generated $\mathbb{Z}/k\mathbb{Z}$-module and hence it is finite. Since $R$ is a finitely generated $Z(R)$-module, $R$ must be finite too. The result follows. \end{proof} We now use Theorem \ref{lem:p} to give a coarse classification of the minimal finite noncommutative rings $R$ for which a collection $\mathcal{S}$ of polynomial identities must hold if there exists at least one noncommutative ring for which all identities in $\mathcal{S}$ hold. To do this, we introduce three classes of rings. For each prime $p$, we let $U_p$ denote the ring of upper-triangular $2\times 2$ matrices with entries in $\mathbb{F}_p$; that is, \begin{equation} U_p = \left\{ \left( \begin{array}{cc} a & b \\ 0 & c \end{array}\right) \colon a,b,c\in \mathbb{F}_p\right\}. \end{equation} Given a prime $p$, an integer $n\ge 2$, and $i\in \{1,\ldots ,n-1\}$, we let $B_{p,n,i}$ denote the ring \begin{equation} B_{p,n,i}:= \left\{ \left(\begin{array}{cc} x^{p^i}& y \\ 0 & x \end{array} \right) \colon x,y\in \mathbb{F}_{p^n}\right\}.
\end{equation} Finally, given a prime $p$, we let $\mathcal{A}_p$ denote the collection of noncommutative rings that are a homomorphic image of a ring of the form \begin{equation} \mathbb{Z}\{x,y\}/(I+J_n)\end{equation} for some $n\ge 3$, where $I=(p,x,y)[x,y]\mathbb{Z}\{x,y\} + \mathbb{Z}\{x,y\}[x,y] (p,x,y)$ and $J_n$ is the ideal $(x,y,p)^n$. \begin{thm} Let $\mathcal{S}$ be a set of polynomial identities, and suppose there exists a noncommutative ring $R$ that satisfies every identity in $\mathcal{S}$. Then one of the following must hold: \begin{enumerate} \item[(a)] there is a prime $p$ such that $U_p$ satisfies the identities in $\mathcal{S}$; \item[(b)] there is a prime $p$ and $n\ge 2$ and $i\in \{1,\ldots ,n-1\}$ such that $B_{p,n,i}$ satisfies the identities in $\mathcal{S}$; \item[(c)] there is a prime $p$ and a ring in $\mathcal{A}_p$ that satisfies the identities in $\mathcal{S}$. \end{enumerate} In particular, if there is a noncommutative ring that satisfies the identities of $\mathcal{S}$ then there is a finite noncommutative ring with nonzero nilpotent commutator ideal that satisfies the identities of $\mathcal{S}$. \label{thm:trichotomy} \end{thm} \begin{proof} Pick $u$ and $v$ in $R$ that do not commute. Then we may replace $R$ by the $\mathbb{Z}$-subalgebra generated by $u$ and $v$ and assume $R$ is two-generated. By Theorem~\ref{thm:Specht} we may also assume that $R$ is finite. By replacing $R$ by $R/I$, where $I$ is an ideal that is maximal with respect to the property that $R/I$ is not commutative, we may further assume that $R$ has a minimal nonzero two-sided ideal $L$ generated by $[u,v]$. Moreover, since every proper homomorphic image of $R$ is commutative and since $R$ is finite, there are a prime $p$ and a positive integer $m$ such that $p^m R=(0)$. We now argue via cases. \vskip 2mm \emph{Case 1.} $R$ is semiprimitive, that is, $J(R)=(0)$. \vskip 2mm In this case, by the Artin-Wedderburn theorem \cite[Chapter 14]{R3}, $R$ is isomorphic to a finite product of matrix rings over finite fields.
Since $R$ is not commutative, there is some factor $M_n(F)$ with $n>1$ and $F$ a field of characteristic $p$, which will satisfy every identity that $R$ does. Since $U_p$ is isomorphic to a subring of $M_n(F)$, we see that $U_p$ satisfies all the identities that $R$ does. \vskip 2mm \emph{Case 2.} $J(R)\neq (0)$. \vskip 2mm Since $L$ is minimal among nonzero ideals of $R$, $L$ is contained in $J(R)$ and hence $(0)=J(R)L=L^2$, because the Jacobson radical of a finite ring is nilpotent. Since $R$ has finitely many primitive ideals, which are pairwise comaximal, and since their product is contained in $J(R)$, we then see that there is a unique primitive ideal $P$ of $R$ such that $PL=(0)$; similarly, there is a unique primitive ideal $Q$ of $R$ such that $LQ=(0)$. \vskip 2mm \emph{Subcase 2.a.} $P\neq Q$. \vskip 2mm In this case, we can pick $y\in P$ such that $1-y\in Q$. Then $[[u,v],y]=-y[u,v]+[u,v](1-(1-y)) = [u,v]$. Let $z=[u,v]$. Then since $z\in L$ and since $p\in P\cap Q$, we have $pz=z^2=yz=0$ and $zy=z$. Therefore, the subring $S$ generated by $z$ and $y$ is not commutative and satisfies all the identities that $R$ does. Notice that $S=S_0\oplus S_1$, where $S_0$ is the image of $\mathbb{Z}[y]$ in $S$ and $S_1=\{0,z,2z,\ldots, (p-1)z\}$. Let $h(X)$ denote the minimal polynomial of $y$ in $S_0$. Then since $z(y-1)=0$ and $yz=0$, we see that $h(X)\in (p,(X-1)X)\mathbb{Z}[X]$. In particular, there is an ideal $I$ of $S_0$ such that $S_0/I\cong \mathbb{F}_p[X]/(X^2-X)$ with an isomorphism that sends $y+I$ to $X+(X^2-X)$, and since $IS_1= S_1 I=(0)$, we see that $J:=I\oplus (0)$ is an ideal of $S$ and so $S/J$ is isomorphic to the three-dimensional $\mathbb{F}_p$-algebra $B$ with generators $s,t$ and with relations $$s^2=ts=t^2-t=s(1-t)=0.$$ We then have a map $\phi: U_p\to B$ given by $e_{1,2}\mapsto s$ and $e_{2,2}\mapsto t$ (so that $e_{1,1}\mapsto 1-t$), and this map is easily checked to be an isomorphism. \vskip 2mm \emph{Subcase 2.b.} $P=Q$ and $L$ is not contained in the center of $R$.
\vskip 2mm In this case, we claim that $pR=(0)$. If not, then since $L$ is minimal among nonzero ideals, we have $pR\supseteq L$ and so $[u,v]=pr$ for some $r\in R$. But since $L$ is not central, there is some $z$ such that $[z,[u,v]]\neq 0$. However, $[z,[u,v]]=p[z,r]$, and this is zero because $[z,r]\in L$ (as $R/L$ is commutative) and $pL=(0)$, a contradiction. Thus $pR=(0)$ and so $R$ is an $\mathbb{F}_p$-algebra. Now we pick $y\in L$ that is not central. Then $R/P$ is a finite field and hence isomorphic to $\mathbb{F}_{p^n}$ for some $n\ge 1$ and we let $q=p^n$. We let $x\in R$ be such that $x+P$ is a generator for the multiplicative group of $R/P$. Since the elements of $P$ annihilate $y$, since $y$ is non-central, and since $R$ is generated as an algebra by $x$ and $P$ by construction, we see that $[x,y]\neq 0$. Therefore $x^q-x\in P$ and so $$(0)=(x^q-x)Ry=yR(x^q-x).$$ We may replace $R$ by the noncommutative subalgebra generated by $x$ and $y$ if necessary and then $L$ is a simple $R/P$-$R/P$-bimodule generated by $y$ as a bimodule. Thus $L$ is isomorphic to a simple quotient of $R/P\otimes_{\mathbb{F}_p} R/P \cong \mathbb{F}_q^n$ and moreover we have $(x+P)\cdot y \neq y\cdot (x+P)$. A simple quotient $M$ of $R/P\otimes_{\mathbb{F}_p} R/P$ is isomorphic to $\mathbb{F}_q$ and satisfies $(x+P)\cdot v =v\cdot (x^{p^i}+P)$ for every $v\in M$, for some fixed $i\in \{0,1,2,\ldots ,n-1\}$. Since $L$ is not central, we then see that $L\cong \mathbb{F}_q$ and $(x+P)\cdot y =y\cdot (x^{p^i}+P)$ for some fixed $i\in \{1,2,\ldots ,n-1\}$. Now fix an isomorphism $f:R/P\to \mathbb{F}_q$.
Then we claim there is a homomorphism $\Phi : R\to B_{p,n,i}$ defined on the generators $x$ and $y$ by $$x\in R\mapsto \left( \begin{array}{cc} f(x^{p^i}+P) & 0\\ 0 & f(x+P)\end{array}\right)$$ and $$y\in R\mapsto \left( \begin{array}{cc} 0 & 1\\ 0 & 0\end{array}\right).$$ To show that this is a homomorphism, we must show that if $P(X,Y)$ is an element of the free algebra $\mathbb{F}_p\{X,Y\}$ such that $P(x,y)=0$ in $R$ then $P(\Phi(x),\Phi(y))=0$. One can check that $$\Phi(x)^q-\Phi(x)=\Phi(y)^2=\Phi(x)\Phi(y)-\Phi(y)\Phi(x^{p^i})=0.$$ Thus we may reduce $P(X,Y)$ modulo the ideal $(X^q-X,Y^2, XY-YX^{p^i})$ and we may assume without loss of generality that $P(X,Y)$ is of the form $A(X) + Y B(X)$, where $A(X),B(X)\in \mathbb{F}_p[X]$ have degree at most $q-1$. Now suppose that $A(x)+y B(x)=0$ in $R$ with $A(X), B(X)$ of degree $\le q-1$. Then left-multiplying by $y$ gives that $yA(x)=0$. In particular, since the right annihilator of $y$ is a proper right ideal that contains $P$ and since $R/P$ is a field, we then see that $A(x)\in P$ and so $f(A(x)+P)=f(A(x)^{p^i}+P)=0$. Thus $\Phi(A(x))=0$. Thus we may assume that $P(X,Y)$ is of the form $YB(X)$ with $B(X)\in \mathbb{F}_p[X]$. Then as before, since $yB(x)=0$, we see that $B(x)\in P$ and so $\Phi(y)B(\Phi(x))=0$. Since $\Phi$ is surjective, $B_{p,n,i}$ satisfies all the identities that $R$ does. \vskip 2mm \emph{Subcase 2.c.} $P=Q$ and $L$ is central. \vskip 2mm In this case, for $z,x\in R$ we have $[z^p,x]={\rm ad}_z^p(x) = {\rm ad}_z^{p-1}([z,x]) = 0$, since $[z,x]\in L$ and elements of $L$ are central. Then since $R/J(R)$ is a finite product of fields of characteristic $p$, there exists some $m$ such that $u^{p^m}-u$ and $v^{p^m}-v\in J(R)$. Since $p$-th powers are central, we derive that $[u^{p^m}-u,v^{p^m}-v]=[u,v]\neq 0$, and so by considering the subring $S$ of $R$ generated by $a:=u^{p^m}-u$ and $b:=v^{p^m}-v$, we see that $S/J(S)\cong \mathbb{F}_p$ and $S$ is noncommutative.
In particular, since $S$ is a finite ring, $J(S)^n=(0)$ for some $n\ge 1$ and so we have $(p,a,b)^n=(0)$ in $S$. Since $S$ is noncommutative and $[a,b]\in J(S)^2$ is nonzero, we see that $n\ge 3$. By replacing $S$ by a suitable homomorphic image, we may assume that $S$ is noncommutative but that $S/I$ is commutative for all nonzero ideals $I$ of $S$. We next claim that $J(S)[a,b]S=S[a,b]J(S)=(0)$. We only prove $J(S)[a,b]S=(0)$, with the other equality handled in a similar manner. To see this, observe that if $I:=J(S)[a,b]S$ is nonzero then since $S/I$ is commutative, we have $[a,b]\in J(S)[a,b]S$ and so $$[a,b] = \sum_{i=1}^m x_i [a,b] y_i$$ for some $m\ge 1$, $x_1,\ldots ,x_m\in J(S)$, and $y_1,\ldots ,y_m\in S$. Now we let $j\ge 1$ denote the smallest positive integer such that $J(S)^j [a,b]=(0)$. Then there is some $\theta\in J(S)^{j-1}$ such that $\theta[a,b]\neq 0$. But now $$\theta [a,b] = \sum_{i=1}^m (\theta x_i) [a,b] y_i\in J(S)^j [a,b]S=(0),$$ a contradiction. It follows that $S$ is in the class $\mathcal{A}_p$, which completes the proof. \end{proof} \begin{remark}\label{rem:trichotomy} We point out that the proof of Theorem \ref{thm:trichotomy} in fact shows the following: if $R$ is a finite ring that is not commutative then after a finite number of steps, in each of which we replace $R$ by either a subring or a homomorphic image, we will arrive at one of the finite rings appearing in the statement of Theorem \ref{thm:trichotomy}. In particular, if $R$ is a \emph{minimal} finite noncommutative ring (that is, a ring with the property that every proper subring and every proper homomorphic image is commutative), then it must belong to one of the three classes of rings appearing in the statement of Theorem \ref{thm:trichotomy}. In this sense, we consider this result as giving a coarse classification of minimal finite noncommutative rings.
We point out, however, that not all the rings that appear in the statement of Theorem \ref{thm:trichotomy} are minimal and it is an interesting problem to give a precise classification of such rings, particularly for the class $\mathcal{A}_p$ with $p$ a prime. In general, the question of whether a minimal noncommutative ring is necessarily finite appears to be difficult. For example, one would need to rule out the existence of infinite simple rings with the property that each pair of noncommuting elements generates the entire ring. \end{remark} Notice that Theorem \ref{thm:trichotomy} gives a quick proof of the fixed-degree version of Jacobson's $X^n=X$ theorem \cite{J}: if for some $n\ge 2$ there is a noncommutative ring for which the identity $X^n=X$ holds, then the above result shows there is a finite noncommutative ring with a nonzero nilpotent commutator ideal for which the identity $X^n=X$ holds; but if $a$ is a nonzero element in this nilpotent ideal then $0=a-a^n=a(1-a^{n-1})$ and since $a$ is nilpotent, $1-a^{n-1}$ is a unit, so $a=0$, a contradiction. The same argument applies to show that if $n\ge 2$ then the identity $[X,Y]^n=[X,Y]$ forces a ring to be commutative, which is a special case of a result due to Herstein \cite{HerX}. \section{Decision procedures}\label{sec:dec} \label{Algorithm} In this section we prove Theorem \ref{thm:algorithm} by showing that the question of whether a finite set of identities forces a ring to be commutative is, in fact, decidable, and we give an algorithm that always terminates after finitely many steps and makes such a decision. By Theorem \ref{thm:trichotomy} it suffices to check whether there exists a ring from one of the three classes of algebras given in the statement of the theorem for which the identities all simultaneously hold. We now describe a decision procedure to deal with each of these three classes. Before giving this procedure, we first give some notation.
We let \begin{equation} \mathcal{C}_s\subseteq \mathbb{Z}\{X_1,\ldots ,X_s\} \label{eq:C} \end{equation} denote the $\mathbb{Z}$-submodule generated by all monomials $X_1^{i_1}\cdots X_s^{i_s}$ with $i_1,\ldots ,i_s\ge 0$. Notice that the canonical homomorphism \begin{equation} \label{eq:Phi} \Phi: \mathbb{Z}\{X_1,\ldots ,X_s\}\to \mathbb{Z}[X_1,\ldots ,X_s] \end{equation} with $\Phi(X_i)=X_i$ has the property that the restriction of $\Phi$ to $\mathcal{C}_s$ gives a set bijection between $\mathcal{C}_s$ and the polynomial ring, and given an element $P\in \mathbb{Z}\{X_1,\ldots ,X_s\}$ there is a unique element $\bar{P}\in \mathcal{C}_s$ such that $\Phi(P-\bar{P})=0$. The map $P\mapsto \bar{P}$ is a set-theoretic section of the projection map $\mathbb{Z}\{X_1,\ldots ,X_s\}\to \mathbb{Z}[X_1,\ldots ,X_s]$. In practice, given $P\in \mathbb{Z}\{X_1,\ldots ,X_s\}$, one can compute $\bar{P}$ by simply replacing each monomial that occurs in $P$ by the rearrangement of the letters that puts it in the form $X_1^{i_1}\cdots X_s^{i_s}$. A key component of our algorithm involves dividing our analysis into two cases: the case when $\Phi(P_i)=0$ for all $i$; and the case when there is some $i$ such that $\Phi(P_i)\neq 0$. In the latter case, we let $D$ denote the total degree of the nonzero polynomial $\Phi(P_i)$. By a result of Alon \cite{Alon} there are integers $(n_1,\ldots ,n_s)\in \{0,1,\ldots ,D\}^s$ such that $N:=\Phi({P}_i)(n_1,\ldots ,n_s)$ is a nonzero integer. Then if $P_1,\ldots ,P_m$ are simultaneously identities for a ring $R$, we necessarily have $N=0$ in $R$. It follows that if $R$ is a ring in one of the three classes given in the statement of Theorem \ref{thm:trichotomy} then $p\mid N$.
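The evaluation step just described can be sketched as follows (in Python; the dictionary encoding of a noncommutative polynomial by words of variable indices, and the helper names, are ours): we scan the box $\{0,\ldots,D\}^s$ for a point where the commutative image $\Phi(P_i)$ takes a nonzero value $N$, and then factor $N$ to obtain the finitely many primes that can occur.

```python
from itertools import product

def commutative_value(P, point):
    # evaluate the commutative image Phi(P) at an integer point; P maps
    # words (tuples of variable indices in 1..s, repeats allowed) to coefficients
    total = 0
    for word, c in P.items():
        v = c
        for i in word:
            v *= point[i - 1]
        total += v
    return total

def candidate_primes(P, s, D):
    # search {0,...,D}^s for a nonzero value N of Phi(P); any prime p
    # attached to a ring as in the trichotomy theorem must divide N
    for point in product(range(D + 1), repeat=s):
        N = commutative_value(P, point)
        if N != 0:
            n, primes, d = abs(N), [], 2
            while d * d <= n:  # trial division
                if n % d == 0:
                    primes.append(d)
                    while n % d == 0:
                        n //= d
                d += 1
            if n > 1:
                primes.append(n)
            return N, primes
    return None  # Phi(P) vanishes on the whole box

# Phi(3 X_1 X_1 X_2) = 3 x^2 y: the only possible prime is 3
assert candidate_primes({(1, 1, 2): 3}, 2, 3) == (3, [3])
# the commutator has zero commutative image, so the search finds nothing
assert candidate_primes({(1, 2): 1, (2, 1): -1}, 2, 2) is None
```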
An analysis of the above argument shows that if $\bar{P}_1,\ldots ,\bar{P}_m$ are not all identically zero then we have an algorithm for determining a finite (possibly empty) set of prime numbers $\{p_1,\ldots ,p_t\}$ such that if $P_1,\ldots ,P_m$ are identities for a ring $R$ in one of the three classes given in the statement of Theorem \ref{thm:trichotomy} then the associated prime number $p$ must lie in $\{p_1,\ldots ,p_t\}$. In fact, if the integer $|N|$ has prime factorization $p_1^{a_1}\cdots p_t^{a_t}$ then the characteristic of $R$ must be a divisor of one of the elements from $\{p_1^{a_1},\ldots ,p_t^{a_t}\}$. For the remainder of this section, we assume that we are given $P_1,\ldots ,P_m \in \mathbb{Z}\{X_1,\ldots ,X_s\}$ and we will answer the question: ``\emph{Is there a noncommutative ring for which $P_1,\ldots ,P_m$ are all identities?}'' We do so by considering each of the three classes of rings in the statement of Theorem \ref{thm:trichotomy} separately. \subsection{Decision procedures for $U_p$} We begin with the simplest case, which is to decide whether there is a prime $p$ such that $P_1,\ldots ,P_m$ are all identities for the algebra $U_p$. \begin{lemma} Let $P_1,\ldots, P_m\in \mathbb{Z}\{X_1,\ldots ,X_s\}$. Then it is decidable whether or not there is some prime $p$ such that $P_1,\ldots ,P_m$ are identities for $U_p$. \end{lemma} \begin{proof} We let $\mathcal{X}$ denote the subset of $M_2(\mathbb{Z})$ consisting of the eight upper-triangular matrices with $\{0,1\}$-entries. Then for each $s$-tuple $(A_1,\ldots ,A_s)\in \mathcal{X}^s$ we compute the matrices $P_i(A_1,\ldots ,A_s)$ for $i=1,\ldots ,m$. If $P_i(A_1,\ldots ,A_s)=0$ for every $i\in \{1,\ldots ,m\}$ and every $(A_1,\ldots ,A_s)\in \mathcal{X}^s$ then since the $P_i$ are integer polynomials and since the image of $\mathcal{X}$ in $U_2$ is all of $U_2$, we have that $P_1,\ldots ,P_m$ are identities for $U_2$.
Otherwise, there is some $i$ and some $(A_1,\ldots ,A_s)\in \mathcal{X}^s$ such that $P_i(A_1,\ldots ,A_s)$ is a nonzero matrix with integer entries. We compute the gcd, $d$, of the entries of this matrix. If $d$ is equal to one then there cannot exist a prime $p$ such that $P_1,\ldots ,P_m$ are identities for $U_p$, and so we may assume that the gcd is strictly greater than one. We then compute the primes $p_1,\ldots ,p_t$ that divide $d$. If $P_1,\ldots ,P_m$ are identities for $U_p$ for some prime $p$, then $p$ is necessarily in $\{p_1,\ldots ,p_t\}$. For each $p\in \{p_1,\ldots ,p_t\}$, it can be checked whether or not $P_1,\ldots ,P_m$ are identities for $U_p$ by simply taking each $s$-tuple of elements from $U_p$ and verifying whether the evaluations of $P_1,\ldots ,P_m$ at this $s$-tuple are zero in $U_p$; if all possible evaluations are zero, then $P_1,\ldots ,P_m$ are identities for $U_p$. Since $U_p$ is a finite ring and there are finitely many primes $p$ to check, this process terminates and so we have a decision procedure to determine whether or not $P_1,\ldots ,P_m$ are identities for $U_p$ for some prime $p$. \end{proof} \subsection{Decision procedures for the algebras $B_{p,n,i}$} We now show how one can check whether $P_1,\ldots ,P_m$ are identities for some algebra of the form $B_{p,n,i}$. To proceed, we require a lemma. \begin{lemma} Let $Q(X_1,\ldots ,X_s)\in \mathcal{C}_s$ be nonzero and let $q$ be a power of a prime $p$. If $Q(X_1,\ldots ,X_s)$ is an identity for the finite field $\mathbb{F}_q$, then $\Phi(Q)\in (p,X_1^q-X_1,\ldots ,X_s^q-X_s)\mathbb{Z}[X_1,\ldots ,X_s]$ and if the reduction of $\Phi(Q)$ mod $p$ is not identically zero then the reduction has total degree at least $q$. \label{lem:Alon} \end{lemma} \begin{proof} Since $\mathbb{F}_q$ is commutative, we may replace $Q$ by $\Phi(Q)$, its image in the polynomial ring $\mathbb{Z}[X_1,\ldots ,X_s]$, and assume it is a nonzero (commutative) polynomial.
If $Q$ is identically zero mod $p$ there is nothing to prove, so we assume that this is not the case and let $d$ denote the total degree of the reduction of $Q$ mod $p$. Then there exist $t_1,\ldots ,t_s$ with $$\sum t_i=d$$ such that the coefficient of $X_1^{t_1}\cdots X_s^{t_s}$ is nonzero. If $t_i<q$ for $i=1,\ldots ,s$ then by Alon's combinatorial Nullstellensatz \cite[Theorem 1.2]{Alon} there exist $\alpha_1,\ldots ,\alpha_s\in \mathbb{F}_q$ such that $Q(\alpha_1,\ldots ,\alpha_s)\neq 0$, contradicting the assumption that $Q$ is an identity for $\mathbb{F}_q$; hence some $t_i\ge q$, and so the total degree of $Q$ must be at least $q$. To complete the proof, we must show that $Q\in (p,X_1^q- X_1,\ldots ,X_s^q-X_s)$. Since each $X_i^q-X_i$ is an identity for $\mathbb{F}_q$, we may reduce $Q$ modulo the ideal $$ (p,X_1^q-X_1,\ldots ,X_s^q-X_s)$$ and assume that it has degree at most $q-1$ in each variable $X_i$. If $Q$ is nonzero then there exist $t_1,\ldots ,t_s$ with $t_i<q$ for $i=1,\ldots ,s$ such that the coefficient of $X_1^{t_1}\cdots X_s^{t_s}$ is nonzero. But a second application of \cite[Theorem 1.2]{Alon} shows there exist $ \alpha_1,\ldots ,\alpha_s\in \mathbb{F}_q$ such that $Q(\alpha_1,\ldots ,\alpha_s)\neq 0$, a contradiction. \end{proof} The following lemma requires the use of Cartier operators. We let $p$ be a prime number. Then $\mathbb{F}_p[X_1,\ldots ,X_s]$ is a free $\mathbb{F}_p[X_1^p,\ldots ,X_s^p]$-module with basis $X_1^{j_1}\cdots X_s^{j_s}$ with $(j_1,\ldots ,j_s)\in \{0,\ldots ,p-1\}^s$, and every element of $\mathbb{F}_p[X_1^p,\ldots ,X_s^p]$ is the $p$-th power of some element of $\mathbb{F}_p[X_1,\ldots ,X_s]$.
In particular, for each $s$-tuple of integers $(j_1,\ldots ,j_s)\in \{0,\ldots ,p-1\}^s$, we define maps \begin{equation} \Lambda_{j_1,\ldots ,j_s}:{{\mathbb F}}_p[X_1,\ldots ,X_s]\to {{\mathbb F}}_p[X_1,\ldots ,X_s], \end{equation} which are the operators uniquely defined by \begin{equation} P(X_1,\ldots ,X_s) = \sum_{j_1=0}^{p-1}\cdots \sum_{j_s=0}^{p-1} X_1^{j_1}\cdots X_s^{j_s} \Lambda_{j_1,\ldots ,j_s}(P(X_1,\ldots ,X_s))^{p}, \end{equation} for $P(X_1,\ldots ,X_s)$ in ${{\mathbb F}}_p[X_1,\ldots ,X_s]$. Observe that \begin{equation} \label{eq:Cart}\Lambda_{j_1,\ldots ,j_s}(A+B^{p} C) = \Lambda_{j_1,\ldots ,j_s}(A) + B\Lambda_{j_1,\ldots ,j_s}(C) \end{equation} for $A,B,C$ in ${{\mathbb F}}_p[X_1,\ldots ,X_s]$. In addition, if $P$ is a polynomial in $\mathbb{F}_p[X_1,\ldots ,X_s]$ then $P$ is the zero polynomial if and only if $ \Lambda_{j_1,\ldots ,j_s}(P)=0$ for all $(j_1,\ldots ,j_s)\in \{0,\ldots ,p-1\}^s$. In what follows, we shall be considering linear combinations of polynomials of the form $A^{p^k} B$. In this case, if $B$ has degree strictly less than $p^k$ and $\Omega$ is a $k$-fold composition of Cartier operators, then $\Omega(A^{p^k} B) = A\Omega(B)$ and moreover $\Omega(B)$ is a coefficient (possibly zero) of some monomial occurring in $B$. \begin{lemma} Let $A_0,\ldots ,A_t$ and $B_0,\ldots ,B_t$ be polynomials in $\mathbb{Z}[X_1,\ldots ,X_s]$ with $A_0,\ldots ,A_t$ linearly independent over $\mathbb{Z}$ and let $\alpha_0,\ldots ,\alpha_t\in \{1,\ldots, s\}$. Then we can decide whether or not there exists a triple $(p,n,k)$, with $p$ a prime, $n\ge 2$ an integer, and $k\in \{1,\ldots, \lfloor n/2\rfloor\}$, such that $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in the ideal $(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$. \label{lem:At} \end{lemma} \begin{proof} Let $\kappa$ be the maximum of the degrees of $A_0,\ldots ,A_t, B_0,\ldots ,B_t$.
Suppose that $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in $(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$. We first consider the case when $p^k > \kappa+2$. In this case, after reducing mod $p$, we need to determine whether $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i\in (X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{F}_p[X_1,\ldots, X_s]$. Then since the total degree is at most $\kappa p^k + \kappa + p^k < p^{2k} \le p^n$, we see from Lemma \ref{lem:Alon} that this is the case if and only if $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is identically zero, when regarded as a polynomial with coefficients in $\mathbb{F}_p$. Then since $B_i X_{\alpha_i}$ has total degree strictly less than $p^k$, if $\Omega$ is a $k$-fold composition of Cartier operators then $\Omega(B_i)$ and $\Omega(B_i X_{\alpha_i})$ are the coefficients of some fixed monomials in $B_i$ and $B_i X_{\alpha_i}$ respectively. Thus we see that $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is identically zero mod $p$ if and only if $$\sum_{i=0}^t \left(X_{\alpha_i}A_i \Omega(B_i) - A_i \Omega(X_{\alpha_i}B_i)\right)$$ is identically zero for every $k$-fold composition $\Omega$ of Cartier operators when we work over $\mathbb{F}_p$. Since the total degrees of $B_i$ and $B_iX_{\alpha_i}$ are less than $p^k$, we can obtain each coefficient by applying $k$-fold compositions of Cartier operators, and so if we let $\lambda_{i;j_1,\ldots ,j_s}$ denote the coefficient of $X_1^{j_1}\cdots X_s^{j_s}$ in $B_i$ then we see that this is equivalent to $$\sum_{i=0}^t \left(X_{\alpha_i}A_i \lambda_{i;j_1,\ldots ,j_s} - A_i \lambda_{i;j_1-\delta_{1,\alpha_i},\ldots ,j_s-\delta_{s,\alpha_i}}\right)$$ being identically zero mod $p$ for each $s$-tuple $(j_1,\ldots ,j_s)$ with $\sum j_i\le \kappa$, where $\delta_{i,j}$ is the Kronecker delta function.
Now for each such $s$-tuple we can compute the gcd of the coefficients of $$\sum_{i=0}^t \left(X_{\alpha_i}A_i \lambda_{i;j_1,\ldots ,j_s} - A_i \lambda_{i;j_1-\delta_{1,\alpha_i},\ldots ,j_s-\delta_{s,\alpha_i}}\right)$$ and by taking the gcd over each of the gcds produced for each $s$-tuple we can compute a natural number $N$ with the property that, for a prime $p$ and integers $k\ge 1$ and $n\ge 2k$ with $p^k> \kappa+2$, the element $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in $$(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$$ if and only if $p\mid N$. In particular, if there is some prime $p$ that divides $N$ then there exists a triple $(p,n,k)$ with $k\le n/2$ and $k\ge 1$ such that $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in $(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$. If, on the other hand, $N=1$ then we know that if $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in $(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$ then $p^k \le \kappa+2$. Since there are only finitely many pairs $(p,k)$ with $p$ prime and $k\ge 1$ such that $p^k \le \kappa+2$, we have reduced our analysis to considering triples $(p,n,k)$ with $p^k \le \kappa+2$. Notice that the total degree of $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is then at most $\kappa^2+ 4\kappa + 2$ and so if $p^n >\kappa^2+ 4\kappa + 2$ then by Lemma \ref{lem:Alon}, $\sum_{i=0}^t (X_{\alpha_i}^{p^k}-X_{\alpha_i}) A_i^{p^k} B_i$ is in $(p, X_1^{p^n}-X_1,\ldots , X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots, X_s]$ if and only if it is identically zero mod $p$; in particular, this condition is independent of $n$ in this case and we can check this for the finite set of primes $p$ with $p\le \kappa+2$. Finally, if $p^k\le \kappa+2$ and $p^n \le \kappa^2+4\kappa+2$ then $(p,n,k)$ lies in a finite set and we can check these cases on a case-by-case basis via computation.
\end{proof} We can now describe the algorithm for deciding whether $$P_1=\cdots =P_m=0$$ are identities for a noncommutative ring of the form $B_{p,n,\ell}$. We let $J$ denote the commutator ideal of $\mathbb{Z}\{X_1,\ldots ,X_s\}$ generated by $[X_i,X_j]$ with $i\neq j$. We argue via cases. \vskip 2mm \emph{Case I.} $\bar{P}_1=\cdots =\bar{P}_m=0$. Then in this case we can compute $P_1,\ldots ,P_m$ mod $J^2$ and each $Q\in \{P_1,\ldots ,P_m\}$ mod $J^2$ is of the form $$Q:=\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} A_{i,j,k} [X_i,X_j] C_{i,j,k},$$ where $A_{i,j,k},C_{i,j,k}\in \mathcal{C}_s$, and for each pair $(i,j)$ we have $\{A_{i,j,k}\colon k\le m_{i,j}\}$ is linearly independent over $\mathbb{Z}$ and $\{C_{i,j,k}\colon k\le m_{i,j}\}$ is linearly independent over $\mathbb{Z}$; moreover, it is not difficult to compute these expressions mod $J^2$. Since each element of $J^2$ is an identity for $B_{p,n,\ell}$, we see that each such $Q$ is an identity for $B_{p,n,\ell}$ if and only if each of $P_1,\ldots ,P_m$ is an identity for $B_{p,n,\ell}$. Now if $Q$ is an identity for a ring $B_{p,n,\ell}$ then, since $X [Y,Z] = [Y,Z] X^{p^{\ell}}$ is also an identity for $B_{p,n,\ell}$, we see that $$\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} [X_i,X_j] A_{i,j,k}^{p^{\ell}}C_{i,j,k}=0$$ is an identity for $B_{p,n,\ell}$ and since the square of the commutator ideal is zero, we may replace each $A_{i,j,k}^{p^{\ell}}C_{i,j,k}$ by their images in the commutative polynomial ring over $\mathbb{F}_p$. Then we consider the commutative polynomial ring $\mathbb{F}_p[U_1,\ldots ,U_s,V_1,\ldots ,V_s]$.
Then specializing $X_i$ at the element \[\left(\begin{array}{cc} U_i^{p^{\ell}} & V_i \\ 0 & U_i\end{array} \right), \] we see that an element of the form $$\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} [X_i,X_j] A_{i,j,k}^{p^{\ell}}C_{i,j,k}=0$$ is an identity for $B_{p,n,\ell}$ if and only if $$H:=\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} (U_i^{p^{\ell}} V_j +U_j V_i -U_j^{p^{\ell}} V_i - U_i V_j) A_{i,j,k}^{p^{\ell}}(U_1,\ldots ,U_s)C_{i,j,k}(U_1,\ldots ,U_s) $$ is an identity for $\mathbb{F}_{p^n}$. We let $$\kappa:=\max\{{\rm deg}(A_{i,j,k},C_{i,j,k})\colon i<j, 1\le k\le m_{i,j}\}.$$ We first consider the subcase when $\ell \le n/2$. In this case, if $p^{n/2}/4> \kappa$ then the total degree of $H$ is at most $$(p^{n/2}+1)\kappa + p^{n/2}\le p^n/2 + p^{n/2} < p^n,$$ since $n\ge 2$. It then follows from Lemma \ref{lem:Alon} that the polynomial $H$ must be identically zero after we identify it with its image in $\mathbb{F}_p[U_1,\ldots ,U_s,V_1,\ldots ,V_s]$. In particular, taking the coefficient of $V_j$, we see that $$H_j:=\sum_{i\neq j} \sum_{k=1}^{m_{i,j}} (-1)^{\chi(i,j)}(U_i^{p^{\ell}}-U_i) A_{i,j,k}^{p^{\ell}}(U_1,\ldots ,U_s)C_{i,j,k}(U_1,\ldots ,U_s)$$ must be identically zero, when viewed as a commutative polynomial with coefficients in $\mathbb{F}_p$, where $\chi(i,j)$ is $1$ if $i>j$ and is $0$ otherwise. Thus we have reduced the problem to deciding whether $H_1,\ldots ,H_s$ are zero mod $p$, and by Lemma \ref{lem:At} we can decide whether there exists a prime $p$ for which this occurs. On the other hand, there are only finitely many triples $(p,n,\ell)$ with $p$ prime, $\ell \le n/2$ and $p^{n/2} < 4\kappa$ and we can check on a case-by-case basis whether $P_1,\ldots ,P_m$ are identities for the algebras $B_{p,n,\ell}$ by evaluating them at all $s$-tuples of elements in these algebras and checking whether the results are always zero. The second subcase is when we have a triple $(p,n,\ell)$ with $\ell>n/2$. 
Then applying the $\mathbb{F}_{p^n}$ field automorphism given by $x\mapsto x^{p^{n-\ell}}$ to our expression for $H$ and using the fact that $a^{p^n}=a$ in $\mathbb{F}_{p^n}$, we see this is the case if and only if $$H':=\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} (U_i V_j^{p^{n-\ell}} +U_j^{p^{n-\ell}} V_i^{p^{n-\ell}} -U_jV_i^{p^{n-\ell}} - U_i^{p^{n-\ell}} V_j^{p^{n-\ell}}) A_{i,j,k}C_{i,j,k}^{p^{n-\ell} }$$ is an identity for $\mathbb{F}_{p^n}$, where the $A_{i,j,k}$ and $C_{i,j,k}$ are polynomials in the variables $U_1,\ldots ,U_s$. Moreover, since the $V_i$'s are indeterminates and since the map $x\mapsto x^{p^{n-\ell}}$ is bijective on $\mathbb{F}_{p^n}$, this is the case if and only if $$H'':=\sum_{1\le i< j \le s} \sum_{k=1}^{m_{i,j}} (U_i V_j +U_j^{p^{n-\ell}} V_i -U_jV_i - U_i^{p^{n-\ell}} V_j )A_{i,j,k}C_{i,j,k}^{p^{n-\ell} }$$ is an identity for $\mathbb{F}_{p^n}$. Since $n-\ell\le n/2$, we can now handle this case in a manner completely symmetric to the first subcase, when $\ell\le n/2$. Thus this case is decidable. \emph{Case II.} $\bar{P}_i$ is nonzero for some $i$. In this case, we compute the gcd $d_i$ of the coefficients of the monomials occurring in each $\bar{P}_i$ for $i=1,\ldots ,m$. By assumption at least one $d_i$ is nonzero and so if we let $\mathcal{S}$ denote the set of primes $p$ which divide $\gcd(d_1,\ldots ,d_m)$ then $\mathcal{S}$ is a finite set. To check whether there is some prime $p\in \mathcal{S}$ and some integers $n$ and $\ell$ such that $P_1,\ldots ,P_m$ are identities for $B_{p,n,\ell}$, we simply use the procedure given in Case I above for these particular primes, since modulo $p$ we have $\bar{P}_1=\cdots =\bar{P}_m=0$. Now for $p\not\in \mathcal{S}$ we have that there is some $Q\in\{ \bar{P}_1,\ldots ,\bar{P}_m\}$ that is not identically zero mod $p$. Then if $P_1,\ldots ,P_m$ are identities for $B_{p,n,\ell}$ then $Q$ must be an identity for $B_{p,n,\ell}/([B_{p,n,\ell},B_{p,n,\ell}])\cong \mathbb{F}_{p^n}$.
In particular, $Q\in (p, X_1^{p^n}-X_1,\ldots ,X_s^{p^n}-X_s)\mathbb{Z}[X_1,\ldots ,X_s]$ and, by Lemma \ref{lem:Alon}, if the total degree of the reduction of $Q$ mod $p$ is strictly less than $p^n$ then this reduction is identically zero. Thus if we let $D_{p,Q}$ be the total degree of $Q$ mod $p$, then $D_{p,Q}$ is equal to the total degree of $Q$ for all but a finite computable set of primes. If $D_{p,Q}< p^n$ then, since $Q$ is not identically zero mod $p$, Lemma \ref{lem:Alon} shows that $Q$ is not an identity for $\mathbb{F}_{p^n}$. Thus we may consider the pairs $(p,n)$ with $D_{p,Q}\ge p^n$. But there are only finitely many $n$ and $p$ not in $\mathcal{S}$ such that $D_{p,Q}\ge p^n$ and moreover it is easy to compute all such pairs $(p,n)$. Once we have computed all eligible pairs $(p,n)$, they give rise to a finite number of eligible triples $(p,n,\ell)$ and we can again then test whether $P_1,\ldots ,P_m$ are identities for this finite set of allowable algebras $B_{p,n,\ell}$ via finitely many computations. \subsection{Decision procedures for algebras in the class $\mathcal{A}_p$} We let $J$ denote the commutator ideal of $\mathbb{Z}\{X_1,\ldots ,X_s\}$ generated by all commutators $[X_i,X_j]$ and we let $I$ denote the sum of $J^2$ and the ideal generated by the elements $[[X_i,X_j],X_k]$. Then all elements of $I$ are identities for algebras in $\mathcal{A}_p$ and so we first reduce $P_1,\ldots ,P_m$ mod $I$ and we may assume that we have \begin{equation}\label{eq:Hk} P_k=H_k + \sum_{i<j} A_{i,j,k} [X_i,X_j],\end{equation} where $H_k, A_{i,j,k}\in \mathcal{C}_s$, and where $\mathcal{C}_s$ is defined as in Equation (\ref{eq:C}). We now give an overview of the procedure we use to test whether there is a ring in the class $\mathcal{A}_p$ for which $P_1,\dots ,P_m$ are all simultaneously identities. \vskip 2mm \begin{enumerate} \item[Step 1.] We first compute $\bar{P}_1,\ldots ,\bar{P}_m$. If these are all zero, we go to Step 3; otherwise, we go to Step 2. \item[Step 2.]
Use the theory of Gr\"obner-Shirshov bases to decide whether there is an algebra in $\mathcal{A}_p$ for which $P_1,\ldots ,P_m$ are all identities and stop. \item[Step 3.] Use Lemma \ref{lem:comm} and the procedure described afterwards to decide whether there is an algebra in $\mathcal{A}_p$ for which $P_1,\ldots ,P_m$ are all identities and stop. \end{enumerate} The easier case is the third step in the procedure above, which we describe now. For the following result, we let $[S,S]$ denote the commutator ideal of a ring $S$ and we let $[[S,S],S]$ denote the ideal generated by all elements $[a,b]$ with $a\in [S,S]$ and $b\in S$. \begin{lemma} Let $p$ be a prime and suppose that $P_1,\ldots ,P_m\in S:= \mathbb{Z}\{X_1,\ldots ,X_s\}$ are polynomials with $\bar{P}_1=\cdots =\bar{P}_m=0$. Then there exists $R\in \mathcal{A}_p$ for which $P_1,\ldots ,P_m$ are identities for $R$ if and only if $$P_1,\ldots ,P_m \in pS + [S,S]^2+ [[S,S],S] + (X_1^p-X_1,\ldots ,X_s^p-X_s)[S,S].$$ \label{lem:comm} \end{lemma} \begin{proof} Observe that every element of the ideal $$pS + [S,S]^2+ [[S,S],S] + (X_1^p-X_1,\ldots ,X_s^p-X_s)[S,S]$$ is an identity for the noncommutative ring $$\mathbb{F}_p\{X,Y\}/(X,Y)^3\in \mathcal{A}_p$$ and so it suffices to prove that the converse holds. Since each $\bar{P}_i=0$ we have that $P_i$ is in the commutator ideal for $i=1,\ldots , m$. Then since every element of $\mathbb{Z}\{X_1,\ldots ,X_s\}$ is congruent to an element of $\mathcal{C}_s$ modulo the commutator ideal, we see that, modulo $J:=[[S,S],S]+[S,S]^2$, we can write $$P_k \equiv \sum_{1\le i<j\le s} Q_{i,j,k} [X_i,X_j]~(\bmod ~J)$$ with each $Q_{i,j,k}\in \mathcal{C}_s$. Since every element of $J$ is an identity for every ring in $\mathcal{A}_p$, we may assume that $P_k$ is in fact equal to $$ \sum_{1\le i<j\le s} Q_{i,j,k} [X_i,X_j]$$ for $k=1,\ldots ,m$. 
Now if there exist $i,j,k$ and integers $n_1,\ldots ,n_s$ such that $c:=Q_{i,j,k}(n_1,\ldots ,n_s)$ is not a multiple of $p$, then for $R\in \mathcal{A}_p$ there exist $x,y\in J(R)$ with $[x,y]\neq 0$ and $J(R)[x,y]=[x,y]J(R)=(0)$. Then if we evaluate $P_k$ at $$(X_1,\ldots ,X_s)=(n_1,n_2,\ldots ,n_i+x,\ldots ,n_j+y,\ldots ,n_s),$$ we obtain $c[x,y]$, since $x[x,y]=y[x,y]=0$ in $R$. But this is a contradiction since $[x,y]\neq 0$ and $P_k$ is assumed to be an identity for $R$. It follows that $Q_{i,j,k}(n_1,\ldots, n_s)$ is a multiple of $p$ for all $i,j,k$ and integers $n_1,\ldots ,n_s$. In particular, $\Phi(Q_{i,j,k})$ is an identity for $\mathbb{F}_p$ and so by Lemma \ref{lem:Alon} it is in the ideal $(p,X_1^p-X_1,\ldots ,X_s^p-X_s)$. Since $Q_{i,j,k}\equiv \bar{Q}_{i,j,k}~(\bmod ~[S,S])$, we see that $$P_1,\ldots ,P_m\in pS + [S,S]^2+ [[S,S],S] + (X_1^p-X_1,\ldots ,X_s^p-X_s)[S,S],$$ as required. \end{proof} We now see how we can determine whether there is a prime number $p$ and an algebra $R$ in the class $\mathcal{A}_p$ for which $P_1,\ldots ,P_m$ are all identities when $\bar{P}_1=\cdots =\bar{P}_m=0$. We claim that this is the case if and only if there is a prime $p$ for which each $A_{i,j,k}$ is an identity for $\mathbb{F}_p$, where the $A_{i,j,k}$ are as in Equation (\ref{eq:Hk}). To see this, observe that using Equation (\ref{eq:Hk}) and the assumption that $\bar{P}_k=0$ for all $k$, we have $P_k= \sum_{i<j} A_{i,j,k} [X_i,X_j]$ where $A_{i,j,k}\in \mathcal{C}_s$. Now suppose that there is some $A_{i,j,k}$ that is not an identity for $\mathbb{F}_p$. Then there are integers $n_1,\ldots ,n_s$ such that $p\nmid A_{i,j,k}(n_1,\ldots ,n_s)$. Recall that an algebra $R\in \mathcal{A}_p$ is generated by elements $x$ and $y$ in the Jacobson radical of $R$; we set $a_{\ell}=n_{\ell}$ for $\ell\neq i,j$ and we set $a_i=n_i+x$ and $a_j=n_j+y$.
Then $P_k(a_1,\ldots ,a_s)=A_{i,j,k}(n_1,\ldots ,n_s)[x,y]\neq 0$ in $R$, since $R$ is noncommutative and of characteristic a power of $p$. Thus $P_k$ cannot be an identity for an algebra in $\mathcal{A}_p$. Conversely, if each $A_{i,j,k}$ is an identity for $\mathbb{F}_p$ then by Lemma \ref{lem:Alon}, each $\Phi(A_{i,j,k})\in (p,X_1^p-X_1,\ldots ,X_s^p-X_s)\mathbb{Z}[X_1,\ldots ,X_s]$. In particular, by Lemma \ref{lem:comm} we see that there exists $R\in \mathcal{A}_p$ for which $P_1,\ldots ,P_m$ are identities for $R$. Thus we have reduced the analysis in the case that $\bar{P}_1,\ldots ,\bar{P}_m$ are all identically zero to the question of whether the polynomials $A_{i,j,k}$ computed above are all simultaneously identities for some $\mathbb{F}_p$. If the $A_{i,j,k}$ are identically zero, it is immediate that all primes $p$ work; on the other hand, if some $A_{i,j,k}$ is nonzero, we let $D$ denote its total degree and $D_p$ the total degree of its reduction modulo a prime $p$; then $D_p=D$ for all but a finite computable set of primes. Then by Lemma \ref{lem:Alon}, if this $A_{i,j,k}$ is an identity for $\mathbb{F}_p$ then $p\le D_p$. Consequently, we can now handle the case when $\bar{P}_1,\ldots ,\bar{P}_m$ are all identically zero: we compute the finite set of primes $p$ with $p\le D_p$ and by evaluating each polynomial $A_{i,j,k}$ at each of the $p^s$ $s$-tuples of elements of $\mathbb{F}_p$, we can determine whether there is some prime $p$ such that the $A_{i,j,k}$ are all simultaneously identities for $\mathbb{F}_p$. Thus it now suffices to show what to do in the case when $\bar{P}_1,\ldots ,\bar{P}_m$ are not all identically zero. For the remaining case, we make use of Gr\"obner-Shirshov bases. The main power of Gr\"obner-Shirshov bases is that they allow one to test ideal membership.
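Throughout the discussion above, the recurring computational primitive is a finite check: decide whether finitely many integer polynomials vanish at every point of $\mathbb{F}_p^s$ for each of the finitely many eligible primes $p$. The following Python sketch illustrates this check; the dictionary encoding of polynomials and the function names are our own illustrative choices, not notation from the text.

```python
from itertools import product

def is_identity_mod_p(poly, s, p):
    """Return True if an integer polynomial in s variables vanishes at
    every point of F_p^s, i.e. is an identity for F_p.
    `poly` maps exponent tuples (e_1, ..., e_s) to integer coefficients."""
    for point in product(range(p), repeat=s):
        total = 0
        for exps, coef in poly.items():
            term = coef
            for x, e in zip(point, exps):
                term *= pow(x, e, p)
            total = (total + term) % p
        if total != 0:
            return False
    return True

def simultaneous_identity_primes(polys, s, degree_bound):
    """Return the primes p <= degree_bound for which all the given
    polynomials are simultaneously identities for F_p; by the degree
    bound in the lemma above, an identity that is nonzero mod p has
    total degree at least p, so larger primes need not be tested."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return [p for p in range(2, degree_bound + 1)
            if is_prime(p)
            and all(is_identity_mod_p(q, s, p) for q in polys)]

# X^2 - X is an identity for F_2 but for no other prime field:
fermat = {(2,): 1, (1,): -1}
print(simultaneous_identity_primes([fermat], 1, 7))  # [2]
```

This brute-force evaluation settles the commutative questions; the genuinely noncommutative membership questions that remain are what Gr\"obner-Shirshov bases address.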
In particular, if one has a finite Gr\"obner-Shirshov basis for an ideal $I$ of a finitely generated free algebra then one can decide whether the quotient ring is commutative by testing whether the commutators of the generators have zero image in the quotient. There are, however, two potential pitfalls that arise when working in the context of rings satisfying a certain set of polynomial identities. The first problem is that one requires all specializations of the identities to be zero and so one cannot guarantee that one is working with a finitely generated ideal; the second issue is that for noncommutative algebras there are no guarantees that the Gr\"obner-Shirshov algorithm terminates. Thankfully, we are able to get around both of these problems in this setting. We first give a brief overview of how to use Gr\"obner-Shirshov bases in general. We let $C$ be a finitely generated commutative ring and we let $I$ be a finitely generated two-sided ideal of the free $C$-algebra $R:=C\{x_1,\ldots ,x_s\}$. For our purposes, we will assume in what follows that every ideal of $C$ is principal, which will hold in the setting we use. We put a degree lexicographic order $\preceq$ on the words in $x_1,\ldots ,x_s$ by fixing some order on the elements of $\{x_1,\ldots ,x_s\}$. Given an ideal $I$ we then have a procedure (which need not terminate) to produce a Gr\"obner-Shirshov basis for $I$ in the algebra $C\{x_1,\ldots ,x_s\}$. Given a nonzero element $f\in R$ we have an \emph{initial term}, which is $c\cdot w$ with $c\in C\setminus \{0\}$ and $w\in \{x_1,\ldots ,x_s\}^*$ such that $f-c\cdot w$ is a $C$-linear combination of words in $\{x_1,\ldots ,x_s\}^*$ that are strictly less than $w$ with respect to the order $\prec$. Given a nonzero element $f\in C\{x_1,\ldots ,x_s\}$, we let \begin{equation} \label{eq:inf} {\rm in}(f)\in (C\setminus \{0\}) \{x_1,\ldots ,x_s\}^*\end{equation} denote the initial term of $f$.
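To make the initial-term map of Equation (\ref{eq:inf}) concrete, here is a small Python sketch for the degree lexicographic order; the encoding of words as tuples of generator indices is a hypothetical representation chosen purely for illustration.

```python
def deglex_key(word):
    """Comparison key for degree lexicographic order: words are compared
    first by length, then lexicographically via the fixed order on the
    generators (here, the order on their integer indices)."""
    return (len(word), word)

def initial_term(f):
    """Return the pair (coefficient, word) of the deglex-largest word of a
    nonzero noncommutative polynomial f, mimicking the map in(f).
    f is a dict from words (tuples of generator indices) to coefficients."""
    if not f:
        raise ValueError("the zero polynomial has no initial term")
    w = max(f, key=deglex_key)
    return f[w], w

# f = 3*x1*x2 - x2*x1 + 5*x1, encoding x1 as index 0 and x2 as index 1;
# since x2 > x1, the word x2*x1 is the deglex-largest word of f:
f = {(0, 1): 3, (1, 0): -1, (0,): 5}
print(initial_term(f))  # (-1, (1, 0))
```

Under this order longer words always dominate shorter ones, which is exactly why the reduction sequences used in the membership test below decrease along a well-ordered set and hence terminate.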
Given a finite set of generators $f_1,\ldots ,f_d$ for the ideal $I$ there is a procedure (see Bokut and Chen \cite{BC} and Mikhalev, Zolotykh \cite{MZ1,MZ2}) that produces a possibly infinite set of generators $h_1,h_2, \ldots $ for $I$ with the following properties: \begin{enumerate} \item ${\rm in}(h_i) \not\in (C\setminus \{0\}) \{x_1,\ldots ,x_s\}^* {\rm in}(h_j)\{x_1,\ldots ,x_s\}^*$ for $i\neq j$; \item if $h\in I$ is nonzero then there is some $j$ such that $${\rm in}(h)\in (C\setminus \{0\}) \{x_1,\ldots ,x_s\}^* {\rm in}(h_j)\{x_1,\ldots ,x_s\}^*.$$ \end{enumerate} In particular, this gives a way of testing ideal membership: to test whether $f_0\in I$, we simply check whether there is some $h_i$ such that $${\rm in}(f_0)\in (C\setminus \{0\}) \{x_1,\ldots ,x_s\}^* {\rm in}(h_i)\{x_1,\ldots ,x_s\}^*;$$ if not, then $f_0\not\in I$ and we may stop; otherwise, we can find $c\in C\setminus \{0\}$ and $a,b\in \{x_1,\ldots ,x_s\}^*$ such that the monomial occurring in ${\rm in}(f_0-cah_ib)$ is degree lexicographically less than the monomial in ${\rm in}(f_0)$. It then suffices to check that $f_1:=f_0-cah_ib \in I$, and by applying the procedure above we obtain a sequence $f_0,f_1,\ldots $ which must terminate since the collection of monomials is well-ordered with respect to $\prec$; in particular, we either obtain that $f_n\not\in I$ for some $n$, in which case $f_0\not\in I$; or we get $f_n=0$ for some $n$, in which case $f_0$ is in $I$. We are now able to complete the analysis for the class $\mathcal{A}_p$. We may assume that $\bar{P}_1,\ldots ,\bar{P}_m$ are not all zero and we let $D$ denote the largest degree of a nonzero element of $\{\bar{P}_1,\ldots ,\bar{P}_m\}$ and we let $d$ denote the gcd of the coefficients occurring in the elements $\{\bar{P}_1,\ldots ,\bar{P}_m\}$.
Then each monomial that occurs with nonzero coefficient in an element from $\{\bar{P}_1,\ldots ,\bar{P}_m\}$ is of the form $X_1^{i_1}\cdots X_s^{i_s}$ with $0\le i_1,\ldots ,i_s<D+1$. Then since the elements $$i_1+i_2(D+1)+\cdots +i_s(D+1)^{s-1}$$ with $0\le i_1,\ldots ,i_s<D+1$ are pairwise distinct, we see that the univariate polynomials $$G_i(X):=P_i(X,X^{D+1},\ldots ,X^{(D+1)^{s-1}})=\Phi(P_i)(X,X^{D+1},\ldots ,X^{(D+1)^{s-1}})$$ have the property that the gcd of the coefficients occurring in the elements $$\{G_1(X),\ldots ,G_m(X)\}$$ is again $d$; moreover, each $G_i$ is of degree at most $(D+1)^s$. Then there is some $i\in \{1,\ldots ,m\}$ and some $j\in \{0,\ldots , (D+1)^s\}$ such that $G_i(j)\neq 0$. We then let $N$ denote the gcd of all elements of the form $G_i(j)$ with $i\in \{1,\ldots ,m\}$ and $j\in \{0,\ldots , (D+1)^s\}$, which we can compute. It follows that if $P_1,\ldots ,P_m$ are identities for $R$ then $N=0$ in $R$. In particular, if $|N|$ has prime factorization $p_1^{a_1}\cdots p_t^{a_t}$ and $R\in \mathcal{A}_p$ is such that $P_1,\ldots ,P_m$ are identities for $R$ then $p\in \{p_1,\dots ,p_t\}$ and if $p=p_i$ then $p_i^{a_i}=0$ in $R$. So we will now give the procedure for dealing with this finite set of primes. For $p\in \{p_1,\ldots ,p_t\}$ we then have that if $R\in \mathcal{A}_p$ satisfies the identities $$P_1=\cdots =P_m=0$$ then we have some fixed computable $a$ such that $p^a=0$ (here $a=a_i$, where $i$ is such that $p=p_i$). Now if $p^a\mid d$ then $\bar{P}_1,\ldots ,\bar{P}_m$ all vanish on $R$ and so in the expression given in Equation (\ref{eq:Hk}), we have that each $H_k$ vanishes on $R$; in particular, in this situation, we may assume without loss of generality that each $H_k$ is identically zero and so we appeal to the earlier case we considered where this occurs. Thus we may assume that $p^a\nmid d$, and we let $p^b=\gcd(p^a,d)$ with $b<a$.
Then there is some $k$ such that the gcd of $p^a$ and the coefficients occurring in $G_k$ is exactly $p^b$. Then there are integer polynomials $F$ and $F'$ such that $G_k(X) = p^b F(X) + p^{b+1} F'(X)$ and such that $F(X)$ is nonzero and no nonzero coefficient in $F(X)$ is a multiple of $p$. Then since $G_k$ is an identity for $R$, we see that $$p^b (F(X) + p F'(X))\left(\sum_{j=0}^{a-b-1} F(X)^{a-b-1-j} (pF'(X))^{j}(-1)^j\right),$$ which is $$p^b F(X)^{a-b} +(-1)^{a-b-1} p^a F'(X)^{a-b},$$ is also an identity for $R$. Since $p^a=0$ in $R$, we then see that $p^b F(X)^{a-b}$, and hence $p^b F(X)^a$, is zero on $R$. Now let $t$ denote the smallest power of $X$ that occurs with a nonzero coefficient in $F(X)$. By multiplying $G_k$ by a unit in $\mathbb{Z}/p^a\mathbb{Z}$, we may assume that the coefficient of $X^t$ is $1$ in $F(X)$. Then $t\ge 1$: otherwise we would have $p\nmid F(0)$, whereas by construction $p^a$ must divide $G_k(0)=p^b(F(0)+pF'(0))$, which cannot occur since $b<a$. It follows that for $u\in J(R)$ we have $p^b u^{at} =0$ in $R$ since $p^b F(u)^a \in p^b u^{at}(1+J(R))$. We now show how one can use Gr\"obner-Shirshov bases to complete the decision procedure in this remaining case. We claim that there is a noncommutative ring $R$ in the class $\mathcal{A}_p$ for which $P_1,\ldots, P_m$ are all identities for $R$ if and only if $$S:=\mathbb{Z}\{X,Y\}/((p^a, p^b (X,Y)^{at})+L)$$ is noncommutative, where $L$ is the finitely generated ideal generated by all specializations of $P_1,\ldots ,P_m$ at elements of the form $X_i=\sum a_w w$, where $w$ runs over words in $X$ and $Y$ of length less than $at$ and $a_w\in \{0,\ldots ,p^a-1\}$. To see this, notice that since $P_1,\ldots ,P_m$ are identities for $R$ all elements of $L$ must be zero in $R$ and since $p^a=0$ in $R$, we see that $R$ is a homomorphic image of $S$. Now it suffices to show that $P_1,\ldots ,P_m$ are identities for $S$.
Notice that if $z_1,\ldots ,z_s\in S$, then we may write $z_i =b_i + c_i$ with $b_i$ of the form $\sum a_w w$, where $w$ runs over words in $X$ and $Y$ of length less than $at$ and $a_w\in \{0,\ldots ,p^a-1\}$, and $c_i\in (p^a, (X,Y)^{at})$. Then by construction $$P_i(b_1,\ldots ,b_s) \in L.$$ Then using Equation (\ref{eq:Hk}), $$P_i(z_1,\ldots ,z_s)\equiv \bar{P}_i(z_1,\ldots ,z_s) + \sum_{j<k} A_{j,k,i}(z_1,\ldots ,z_s)[z_j,z_k] ~(\bmod ~I).$$ Since every coefficient of $\bar{P}_i$ is divisible by $p^b$ and since the image of $p^b(X,Y)^{at}$ is zero in $S$, we see that the images of $\bar{P}_i(z_1,\ldots ,z_s)$ and $ \bar{P}_i(b_1,\ldots ,b_s)$ are equal in $S$. Similarly, since the images of $(p,X,Y)[X,Y]$ and $[X,Y](p,X,Y)$ are zero in $S$ we see that the images of $\sum_{j<k} A_{j,k,i}(z_1,\ldots ,z_s) [z_j,z_k] $ and $\sum_{j<k} A_{j,k,i}(b_1,\ldots ,b_s) [b_j,b_k] $ in $S$ are equal. It follows that $P_i(z_1,\ldots ,z_s)=P_i(b_1,\ldots ,b_s)=0$ in $S$, and so we have proved the claim. Now we put a degree lexicographic order on the monomials in $X$ and $Y$ with $Y\succ X$ and we consider the finitely generated ideal $I$ generated by $$p^a, p^b (X,Y)^{at}, (p,X,Y)[X,Y], [X,Y](p,X,Y)$$ along with the finitely many generators for $L$. Then by construction $pYX$, $XYX$, $YYX$, $YXX$, and $YXY$ are all initial terms of elements of $I$. It follows that every element of our Gr\"obner-Shirshov basis whose initial term is not on the list $$\{pYX, XYX, YYX, YXX, YXY\}$$ cannot be a multiple of one of these terms and hence must be (after multiplying by a suitable unit in $\mathbb{Z}/p^a\mathbb{Z}$) $YX$ or of the form $\alpha X^i Y^j$ with $\alpha\in \{1,p,\ldots ,p^{a-1}\}$. We now claim that $I$ must have a finite Gr\"obner-Shirshov basis. To see this, suppose that this is not the case. Then there must be an infinite collection of elements $f_1,f_2,\ldots $ in our Gr\"obner-Shirshov basis whose initial terms are of the form $p^k X^i Y^j$ with $k$ a fixed element in $\{0,1,\ldots, {a-1}\}$.
Thus we have ${\rm in}(f_i)=p^k X^{a_i}Y^{b_i}$ with the pairs $(a_i,b_i)$ pairwise distinct, since the initial terms of the $f_i$ are pairwise distinct. Moreover, for $i\neq j$, ${\rm in}(f_j)$ cannot be in the monomial ideal generated by ${\rm in}(f_i)$. Thus we see that for $i\neq j$ we must have either $a_i< a_j$ and $b_i > b_j$ or $a_i>a_j$ and $b_i <b_j$. Now let $\alpha=\min\{a_i \colon i\ge 1\}$ and pick $i_0$ such that $a_{i_0}=\alpha$. We let $\beta=b_{i_0}$. Then ${\rm in}(f_{i_0}) = p^k X^{\alpha} Y^{\beta}$ and since $X^{\alpha} Y^{\beta}$ is not in the monomial ideal generated by $X^{a_j} Y^{b_j}$ for $j\neq i_{0}$ and since each $a_j\ge \alpha$, we see that $b_j \le \beta$ for all $j$. A symmetric argument shows that there is some $\gamma$ such that $a_j\le \gamma$ for all $j$. But there are only finitely many monomials of the form $X^e Y^f$ with $e\le \gamma$ and $f\le \beta$, which contradicts the fact that the initial terms of the $f_i$ are pairwise distinct. Thus $I$ has a finite Gr\"obner-Shirshov basis, which means, in particular, that the Gr\"obner-Shirshov basis algorithm, applied to the ideal $I$, necessarily terminates. Then since ideal membership is testable when one has a finite Gr\"obner-Shirshov basis, we can test whether $[X,Y]\in I$ and in particular, we can determine whether our ring is commutative. \subsection{Examples of application of the algorithm} The algorithm above makes use of the three types of rings that occur in the statement of Theorem \ref{thm:trichotomy}. We give a few examples to show how this algorithm can be applied in practice. \begin{example} The identity $P(X,Y)=X^2 Y^2 + X^4 Y^2 + XYXY$ forces a ring to be commutative. \end{example} \begin{proof} If we follow the steps, we first compute $P$ mod the commutator ideal in terms of the basis $\{X^i Y^j\}$ and we get $P(X,Y)=2X^2 Y^2 + X^4 Y^2 - X[X,Y]Y$. Since $\bar{P}$ is nonzero, we see that taking $X=1, Y=1$ gives that $3=0$ in $R$.
In particular, if there is a noncommutative ring for which $P=0$ is an identity then $P$ must be an identity for $U_3$, a ring of the form $B_{3,n,i}$ with $n\ge 2$, or a ring in $\mathcal{A}_3$ with $3=0$. For $U_3$, we compute and find that $X=e_{1,1}$, $Y=e_{1,2}+e_{2,2}$ gives $P(X,Y) = -e_{1,2}\neq 0$ and so $P$ is not an identity for $U_3$. For $B_{3,n,i}$ we have the additional identity $Z[X,Y]=[X,Y]Z^{3^i}$ and so if $P=0$ is an identity for $B_{3,n,i}$ then so is $2 X^2 Y^2 + X^4 Y^2 - [X,Y]X^{3^i}Y=0$. Since $B_{3,n,i}$ mod its commutator ideal is $\mathbb{F}_{3^n}$, we have that $ 2X^2 Y^2 + X^4 Y^2=0$ must be an identity for $\mathbb{F}_{3^n}$ and so by Lemma \ref{lem:Alon}, we must have $n\le 1$, a contradiction. Thus $P=0$ cannot be an identity for $B_{3,n,i}$ with $n\ge 2$. Finally, if $P=0$ is an identity for a ring $R$ in $\mathcal{A}_3$ with $3=0$, then following the algorithm we find that $P(X,X)=2X^4+X^6=-X^4(1-X^2)$ is an identity for $R$. Then we have $x^4=0$ for all $x\in J(R)$ since $1-x^2$ is a unit whenever $x\in J(R)$. We now claim that $J(R)^7=(0)$. To see this, note that since $R\in \mathcal{A}_3$, $R$ is generated by two elements $u,v\in J(R)$, so it suffices to show that all monomials in $u$ and $v$ of length seven are equal to zero in $R$. Since $R\in \mathcal{A}_3$, we have $(0)=[R,R]J(R)=J(R)[R,R]$. Now we suppose that some monomial $w$ of length seven in $u$ and $v$ is nonzero. Then since $u^4=v^4=0$, after possibly switching the labels of $u$ and $v$, $w$ must be of the form $u^a v^b w'$ with $0<a,b\le 3$ and $w'$ a monomial of length $7-a-b\ge 1$ in $u$ and $v$ that starts with $u$. Among all such nonzero monomials, we pick one with $a$ maximal. Then since $u^a\in J(R)$, $u^a [v^b, w'] =0$ in $R$ and so $u^a w' v^b = u^a v^b w'$. In particular, $u^a w' v^b$ is nonzero. But $u^a w'$ is a monomial with $u^{a+1}$ as a prefix, which contradicts the maximality of $a$. Thus we obtain the claim.
Since $[[X,Y],Z]=0$ is an identity for every ring in $\mathcal{A}_3$ we see that $ 2X^2 Y^2 + X^4 Y^2 - [X,Y]XY=0$ is an identity in $R$. At this point in the algorithm we would normally use Gr\"obner-Shirshov bases, which are computationally non-trivial, but in this case one can use an ad hoc argument to simplify things. Note that $R$ is an $\mathbb{F}_3$-algebra generated by elements $u,v$ with $(u,v)^7=(0)$. Then we first compute all evaluations of $P$ at $\mathbb{F}_3$-linear combinations of words of length at most $6$ in $u$ and $v$. Doing this, we find that $0=P(1+u,1+v)\in 2u+ J(R)^2$, so that $u\in J(R)^2$. Since all elements in $J(R)^2$ are central, we then get $[v,u]=0$ and so $R$ is commutative. \end{proof} We give a second example, illustrating the other key case: when $P$ is in the commutator ideal. \begin{example} Let $P(X,Y)=X^2YXY -X^2Y^2X -XYX^2Y +XY^2X^2 +YX^2YX - YXYX^2$. Then there is a noncommutative ring $R$ for which $P=0$ is an identity. \end{example} \begin{proof} To implement the algorithm, we compute $P$ modulo the commutator ideal, using the basis $\{X^i Y^j\}$. Doing so, we find $$P(X,Y)= X[X,Y]XY - X[X,Y]YX -[X,Y]XYX +[X,Y]YX^2.$$ Then for any ring $R$ in the statement of Theorem \ref{thm:trichotomy} we have $[R,R]R[R,R]=(0)$ and so $P=0$ holds for $R$ if and only if the identity $X[X,Y]XY -X[X,Y]XY -[X,Y]X^2Y +[X,Y]X^2Y = 0$ holds for $R$. But since this is the identity $0=0$, we see that in fact $P=0$ is an identity for every ring whose commutator ideal has square zero. In particular, $P=0$ is an identity for the ring $U_2$. \end{proof} The next example is a special case of the main result from \cite{AD}, which is more general in that it allows $m$ and $n$ to depend on the elements of the ring and also allows there to be a sign that depends on the elements of the ring. \begin{example} If $m,n\ge 2$ have opposite parity and $P(X)=X^m-X^n$, then the identity $P(X)=0$ forces a ring to be commutative.
\end{example} \begin{proof} To apply the algorithm in practice one must have fixed $m$ and $n$, but we shall use ad hoc arguments to get around this restriction. First, evaluating the identity $P(X)=0$ at $X=-1$ gives $\pm 2=0$, since $m$ and $n$ have opposite parity, so $2=0$ in every ring for which $P(X)=0$ is an identity. Thus if there is a noncommutative ring for which $P(X)=0$ is an identity then it must hold for either $U_2$, a ring of the form $B_{2,n,i}$ with $n\ge 2$, or a ring from $\mathcal{A}_2$ with $2=0$. The case of $U_2$ is straightforward: take $X= e_{1,1}+e_{1,2}+e_{2,2}$; then $X^k=1$ when $k$ is even and $X^k = X$ when $k$ is odd, so $X^m\neq X^n$ in $U_2$. To actually apply the algorithm for $B_{2,n,i}$ with the exponents $m$ and $n$ fixed, we would bound the parameter $n$ appearing in $B_{2,n,i}$ in terms of the degree of $P$ and then check whether any of the resulting finite set of rings has $P=0$ as an identity. But we observe that in this case we can again take $X= e_{1,1}+e_{1,2}+e_{2,2}$, and we get that $X^m- X^n=0$ is a non-identity for $B_{2,n,i}$. Finally, if $R$ is in $\mathcal{A}_2$ and we take $X=1+u$ with $u\in J(R)\setminus J(R)^2$ then $X^k \in 1+J(R)^2$ if $k$ is even and $X^k \in 1+u+J(R)^2$ if $k$ is odd. In particular, if $X^m= X^n$ we see that $u\in J(R)^2$, which is a contradiction. Thus the identity $P(X)=0$ forces a ring to be commutative. \end{proof} \section{Jacobson's and Herstein's theorems revisited} \label{Applications} A famous theorem of Jacobson \cite[Theorem 11]{J} asserts that if a ring $R$ has the property that, for each $x\in R$, there exists an integer $n(x)>1$ depending on $x$ with $x^{n(x)}=x$ (such rings are called {\it potent}), then $R$ is commutative. On the other hand, Herstein \cite{H} generalized this important assertion by proving that if $R$ is a ring with center $Z(R)$ such that $x^{n(x)}-x\in Z(R)$ for every $x\in R$, then $R$ is necessarily commutative.
A recent generalization of Jacobson's result is given in \cite{AD}: If $R$ is a ring such that, for any $x\in R$, there are two integers $n(x)>m(x)>1$ of opposite parity with $x^{n(x)}=x^{m(x)}$, then $R$ is commutative. So, taking into account the statements alluded to above, it is quite logical to consider those rings $R$ for which $x^{n(x)}-x^{m(x)}\in Z(R)$. However, the next construction illustrates that the situation is more complicated than one might anticipate. In fact, if $p\geq 3$ and we take $R=\mathbb{F}_p\{x,y\}/J$, where $J$ is the ideal $(x,y)^3$ (the cube of the homogeneous maximal ideal), then every element $a$ in $R$ can be written as $c+u$ with $c$ in $\mathbb{F}_p$ and $u$ in the homogeneous maximal ideal. Since $u^3=0$ and $p\geq 3$, we see that $(c+u)^p = c^p + u^p = c$ and so $a^{2p} - a^p$ is a central element for all $a$ in $R$. Notice $R$ is not commutative since $xy-yx$ is not in $J$ by construction. Thus additional conditions are required to obtain a commutativity theorem. The following result clarifies the situation when the center $Z(R)$ is involved. \begin{thm} Let $P(X)\in \mathbb{Z}[X]$ be a polynomial. Then the following statements hold: \begin{enumerate} \item there is a noncommutative ring $R$ for which $P(X)=0$ is an identity if and only if there is a prime $p$ such that $P(X)\in (p,(X^p-X)^2)\mathbb{Z}[X]$; \item there is a noncommutative ring $R$ for which $P(X)Y=YP(X)$ is an identity if and only if there is a prime $p$ such that the first derivative, $P'(X)$, of $P(X)$ is in the ideal $$(p,X^p-X)\mathbb{Z}[X].$$ \end{enumerate} \label{thm:Herstein} \end{thm} \begin{proof} If $P(X)\in (p,(X^p-X)^2)$, then $P(X)=0$ is an identity for the noncommutative ring $U_p$ and if $P'(X)\in (p,X^p-X)$, then $[P(X),Y]=0$ is an identity for the (noncommutative) ring $\mathbb{F}_p\{u,v\}/(u,v)^3$. Thus one direction is immediate. We now consider the more difficult direction.
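Both the construction above and this easy half for the ring $\mathbb{F}_p\{u,v\}/(u,v)^3$ lend themselves to machine verification. A sketch for $p=3$, modelling the truncated ring by its monomial basis (an illustration only, not part of the proof):

```python
from itertools import product

p = 3
# basis of F_3{x,y}/(x,y)^3: the empty word together with words of length 1, 2
WORDS = [''] + [''.join(t) for L in (1, 2) for t in product('xy', repeat=L)]

def mul(a, b):
    c = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            w = w1 + w2
            if len(w) <= 2:              # the ideal (x,y)^3 kills longer words
                c[w] = (c.get(w, 0) + c1 * c2) % p
    return {w: v for w, v in c.items() if v}

def sub(a, b):
    c = dict(a)
    for w, v in b.items():
        c[w] = (c.get(w, 0) - v) % p
    return {w: v for w, v in c.items() if v}

def power(a, k):
    r = {'': 1}
    for _ in range(k):
        r = mul(r, a)
    return r

x, y = {'x': 1}, {'y': 1}
assert mul(x, y) != mul(y, x)            # the ring is not commutative

# a^{2p} - a^p commutes with both generators, hence is central
for coeffs in product(range(p), repeat=len(WORDS)):
    a = {w: c for w, c in zip(WORDS, coeffs) if c}
    z = sub(power(a, 2 * p), power(a, p))
    assert mul(z, x) == mul(x, z) and mul(z, y) == mul(y, z)
```

The loop runs over all $3^7$ elements of the ring, so the centrality claim is checked exhaustively rather than on samples.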
Suppose that there is a ring $R$ that is not commutative for which $P(X)=0$ is an identity. Then we may assume that $R$ lands in one of the cases (a)--(c) given in the statement of Theorem~\ref{thm:trichotomy}. In particular, there is some prime $p$ and some $m\ge 1$ such that $R$ has characteristic $p^m$. In all cases, $R/J(R)$ contains a subring isomorphic to $\mathbb{F}_p$ and so $X^p-X$ must divide $P(X)$ mod $p$, since $P(X)$ must be an identity for $\mathbb{F}_p$. Thus we may write $P(X)=(X^p-X)Q(X)$ modulo $p$. Now let $u=\alpha+[s,t]$ with $\alpha\in \mathbb{Z}$ and $[s,t]$ a nonzero commutator in $R$. We now consider two cases. The first case is when $R$ is an $\mathbb{F}_p$-algebra. Then since $[R,R]^2=p[R,R]=0$, $0=P(u) = ((\alpha^p-\alpha) - [s,t])Q(u)$. Moreover, since $\alpha^p-\alpha=0$ in $R$, we have $$P(u)= ((\alpha^p-\alpha) - [s,t])Q(u)= - [s,t]Q(\alpha).$$ So we must have $Q(\alpha)=0$ in $R$ for every $\alpha\in \mathbb{Z}$, which implies that $$X^p-X\mid Q(X)~(\bmod~p),$$ and so we are done in this case. In the remaining case, $R$ has characteristic $p^m$ with $m$ strictly larger than $1$ and so for $v\in R$ we have $$0=P(v+p^{m-1}) = P(v) + p^{m-1} P'(v)=p^{m-1}P'(v).$$ It follows that $p^{m-1}P'(X)$ is also an identity for $R$ and since $R$ has characteristic $p^m$, we see that $p\mid P'(\alpha)$ for every $\alpha\in \mathbb{Z}$. Since $P(X)\equiv (X^p-X)Q(X)~(\bmod~p)$, we have $P'(\alpha)\equiv -Q(\alpha)~(\bmod~p)$ for every $\alpha\in \mathbb{Z}$, so $X^p-X$ divides $Q(X)$ mod $p$ and $P(X)\in (p,(X^p-X)^2)$ as required. Next suppose that $P(X)\in \mathbb{Z}[X]$ is a polynomial and that there exists a ring $R$ which is not commutative such that $P(X)Y-YP(X)=0$ is an identity for $R$. Then again we may assume that $R$ is covered by one of the cases (a)--(c) given in the statement of Theorem \ref{thm:trichotomy}, and so there is a prime $p$ and $m\ge 1$ such that $p^mR=(0)$. We first consider the case when not all commutators are central. Then $R$ is not in the class $\mathcal{A}_p$ and so it is an $\mathbb{F}_p$-algebra. Then there exist $u,v$ and $z$ in $R$ such that $[[u,v],z]\neq 0$.
But since $[u,v]^2=0$, one has for $\alpha\in \mathbb{F}_p$ that $P(\alpha +[u,v]) = P(\alpha)+P'(\alpha)[u,v]$ and so since $[P(\alpha+[u,v]),z]=0$, we see that $P'(\alpha)=0$ for all $\alpha\in \mathbb{F}_p$. In particular, $X^p-X$ divides $P'(X)$ mod $p$, and so we obtain the result in this case. We now consider the case when all commutators are central in $R$ and so $R$ is in the class $\mathcal{A}_p$. Given ideals $I$ and $J$ of $R$, we let $[I,J]$ denote the two-sided ideal of $R$ generated by commutators $[u,v]$ with $u\in I$ and $v\in J$. Then, in this case, $[[R,R],R]=(0)$ and $[R,R]^2=(0)$. For $x,y\in R$, $$P(x)y-yP(x)\equiv P'(x)[x,y]~(\bmod ~L),$$ where $L=[[R,R],R]$. Since the annihilator of nonzero commutators contains the Jacobson radical and since $R/J(R)\cong \mathbb{F}_p$, we therefore see that $P'(X)$ is an identity for a finite field of characteristic $p$. In particular, $P'(X)\in (p,X^p-X)\mathbb{Z}[X]$, and so we get the result in this case too. \end{proof} We point out that Theorem \ref{thm:algorithm} shows that one can decide whether there is some prime $p$ for which either (1) or (2) holds in the statement of Theorem \ref{thm:Herstein}. In this case, however, the decision procedure can be performed much more quickly. For example, for (1), if $P(X)$ has degree $d$ and is nonzero, one can find some $i\in \{0,\ldots ,d\}$ such that $N:=P(i)$ is nonzero. If $P(X)$ is an identity for a ring $R$ then $N=0$ in $R$ and so if there exists a prime $p$ for which $P(X)\in (p,(X^p-X)^2)$ then $p\mid N$. For the finite set of primes $p$ for which $p\mid N$ one can then check whether $P(X)$ is in this ideal. The condition (2) can be checked similarly. As a consequence, we obtain a general commutativity theorem. \begin{cor} Let $a$ and $b$ be positive integers with $a>b$.
A ring satisfying the identity $$[X^a-X^b,Y]=0$$ is necessarily commutative if and only if one of the following conditions holds: \begin{enumerate} \item $b=1$; \item $\gcd(a,b)=1$ and $a$ and $b$ have opposite parity. \end{enumerate} \label{cor:XXX} \end{cor} \begin{proof} Let $P(X)=X^a-X^b$. It suffices to show that there is a prime $p$ such that $P'(X)\in (p, X^p-X)$ if and only if $b>1$ and either $\gcd(a,b)>1$ or $a$ and $b$ have the same parity. Notice $P'(X)= X^{b-1}(aX^{a-b}-b)$ and so if $b=1$, then $P'(X)=aX^{a-b}-1$, which is nonzero mod $p$ when $X=0$, and hence there is no prime $p$ such that $P'(X)\in (p,X^p-X)$ in this case. Next suppose that $\gcd(a,b)=1$ and $a$ and $b$ have opposite parity. In particular, there is no prime $p$ such that $p^2-p$ divides $a-b$, since $p^2-p$ is always even and $a-b$ is odd. If there is some prime $p$ such that $P'(X)\in (p,X^p-X)$, then since $P'(1)=a-b$, $p|a-b$ and so $$P'(X)\equiv a X^{b-1}(X^{a-b}-1)~(\bmod~p).$$ Since $\gcd(a,b)=1$, we then see $p\nmid a$ and thus if $P'(X)\in (p,X^p-X)$ then $X^{p-1}-1$ must divide $X^{a-b}-1$ mod $p$, and so $(p-1)\mid a-b$. Thus $p(p-1)\mid a-b$, a contradiction, since $p(p-1)=p^2-p$ is even and $a-b$ is odd. Then, by Theorem \ref{thm:Herstein}, a ring satisfying the identity $[X^a-X^b,Y]=0$ with $a$ and $b$ satisfying the above conditions is necessarily commutative. To see the other direction, suppose that $b>1$ and either $\gcd(a,b)>1$ or $2$ divides $a-b$. If there is some prime $q$ such that $q|\gcd(a,b)$, then $P'(X)\equiv 0~(\bmod~q)$, and so Theorem \ref{thm:Herstein} (2) then shows that the condition $\gcd(a,b)=1$ is necessary. If $b>1$ and $2$ divides $a-b$, then $P'(X)\equiv a X^{b-1}(X^{a-b}-1)~(\bmod~2)$. Since $b>1$, it follows that $P'(0)\equiv 0~(\bmod ~2)$; and since $X-1$ divides $P'(X)$ mod $2$, it must be that $P'(X)\in (2,X^2-X)$, and so we see the necessity of the condition that $a$ and $b$ have opposite parity from Theorem \ref{thm:Herstein} (2).
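For identities of this shape the test in condition (2) of Theorem \ref{thm:Herstein} is easy to mechanize: membership of $P'(X)$ in $(p,X^p-X)$ amounts to $P'$ vanishing on all of $\mathbb{F}_p$, and any relevant $p$ divides $P'(1)=a-b$. A sketch (the function and variable names are ours, chosen for illustration):

```python
def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def forces_commutativity(a, b):
    # Decide whether [X^a - X^b, Y] = 0 forces commutativity (a > b >= 1).
    # P'(X) = a X^{a-1} - b X^{b-1}; the identity fails to force commutativity
    # iff P'(X) lies in (p, X^p - X) for some prime p, i.e. iff P' vanishes
    # on all of F_p; any such p must divide P'(1) = a - b.
    for p in prime_divisors(a - b):
        if all((a * pow(t, a - 1, p) - b * pow(t, b - 1, p)) % p == 0
               for t in range(p)):
            return False
    return True

checks = {(5, 2): True, (4, 1): True, (6, 2): False, (5, 3): False}
assert all(forces_commutativity(a, b) == want for (a, b), want in checks.items())
```

The four test pairs match the corollary: $(5,2)$ and $(4,1)$ satisfy its conditions, while $(6,2)$ fails coprimality and $(5,3)$ fails the parity condition.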
\end{proof} Notice one can rephrase Corollary \ref{cor:XXX} as follows: \emph{Let $R$ be a ring and let $n$ and $m$ be fixed integers greater than $1$ of opposite parity such that $\gcd(m,n)=1$. If, for all $x\in R$, $x^n-x^m\in Z(R)$, then $R$ is necessarily commutative.} We point out that this somewhat extends the above-mentioned results from \cite{H} and \cite{AD} for fixed degrees. The general form of the results from \cite{H} and \cite{AD} suggests the following should hold. \begin{conj*} Suppose that $R$ is a ring such that for every $x\in R$ there exist positive integers $a=a(x)$ and $b=b(x)$, depending on $x$, such that: \begin{enumerate} \item $x^{a}-x^{b}$ is central; \item either $b=1$ or $\gcd(a,b)=1$ and $a$ and $b$ have opposite parity. \end{enumerate} Then $R$ is commutative. \end{conj*} We give one last application of the algorithm described in \S\ref{Algorithm}. Herstein \cite{Her2} considered rings $R$ for which the identity $(XY)^n=X^n Y^n$ holds for some $n\ge 2$. In this case, he showed that the commutator ideal is necessarily nilpotent. \begin{thm} Let $S\subseteq \mathbb{N}$. Then there is a noncommutative ring satisfying the identities $(XY)^n=X^n Y^n$ for every $n\in S$ if and only if there exists a prime $p$ such that $p\mid {n\choose 2}$ for every $n\in S$. \end{thm} \begin{proof} Suppose first that there is a prime $p$ such that ${n\choose 2}$ is a multiple of $p$ for every $n\in S$. Consider the noncommutative ring $R:=\mathbb{F}_p\{X,Y\}/(X,Y)^3$. Then there is a homomorphism $\alpha :R \to \mathbb{F}_p$ such that for each $a\in R$ we have $a=\alpha(a)+j(a)$, where $j(a)$ is in the Jacobson radical of $R$. In particular, since $J(R)^3=(0)$, we have $a^p =\alpha(a)^p=\alpha(a)$ when $p\ge 3$ and we have $a^4=\alpha(a)$ when $p=2$. If $n\in S$ then by assumption ${n\choose 2}$ is a multiple of $p$. Thus $p|n$ or $p|(n-1)$ when $p$ is odd and $n\equiv 0,1~(\bmod~ 4)$ when $p=2$.
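The divisibility bookkeeping here is easy to confirm exhaustively: for odd $p$ we have $p\mid \binom{n}{2}$ exactly when $n\equiv 0,1~(\bmod~p)$, and for $p=2$ exactly when $n\equiv 0,1~(\bmod~4)$. A quick check:

```python
from math import comb

# p | C(n,2) iff n = 0,1 (mod p) for odd p, and iff n = 0,1 (mod 4) for p = 2,
# since C(n,2) = n(n-1)/2
for p in (2, 3, 5, 7, 11):
    m = 4 if p == 2 else p
    for n in range(2, 500):
        assert (comb(n, 2) % p == 0) == (n % m in (0, 1))
```

Of course this is just a finite confirmation of an elementary congruence fact, not a substitute for the one-line argument from $\binom{n}{2}=n(n-1)/2$.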
In either case, we see that $a^n b^n =(ab)^n$: when $p\ge 3$, both sides equal either $\alpha(ab)^{n/p}$ or $\alpha(ab)^{(n-1)/p}\, ab$, depending on whether $n$ is a multiple of $p$ or is $1$ mod $p$ (when $p=2$ one argues in the same way using $a^4=\alpha(a)$, with $p$ replaced by $4$ in the exponents). Next suppose that for every prime $p$ there is some $n\in S$ such that ${n\choose 2}$ is not a multiple of $p$ and suppose to the contrary that there is a noncommutative ring $R$ with the property that $(ab)^n =a^n b^n$ for all $a,b\in R$ and all $n\in S$. Then there is some prime $p$ such that the identities $(XY)^n =X^n Y^n$, for $n\in S$, hold in one of the rings given in the statement of Theorem \ref{thm:trichotomy}. Notice that such an identity cannot hold in $U_p$, since the elements $a=e_{1,1}$ and $b=e_{1,2}+e_{2,2}$ are both idempotent and so $a^n b^n = ab = e_{1,2}$ while $(ab)^n = 0$ for $n\ge 2$. We next consider the rings $B_{p,n,i}$. Let $m=(p^n-1)/\gcd(p^n-1,p^i-1)$. Then $m\ge (p^n-1)/(p^{n/2}-1)\ge p+1$ and so $m$ is either a multiple of an odd prime or a multiple of $4$. Then by assumption there is some $n\in S$ such that $n\not\equiv 0,1~(\bmod ~m)$. We pick such an $n\in S$ and write $n=m n_0+b$ with $b\in \{2,\ldots ,m-1\}$. For $\lambda\in \mathbb{F}_q$, where $q=p^n$, we let \[a_{\lambda}=\left( \begin{array}{cc} \lambda^{p^i} & 0 \\ 0 & \lambda\end{array}\right) \] and \[b_{\lambda}=\left( \begin{array}{cc} \lambda^{p^i} & 1 \\ 0 & \lambda\end{array}\right). \] Then by assumption \begin{equation} \label{eq:ablam} a_{\lambda}^n b_{\lambda}^n =(a_{\lambda}b_{\lambda})^n. \end{equation} Then for $\lambda$ such that $\lambda^{p^i}\neq \lambda$, computing the $(1,2)$-entry of both sides of Equation (\ref{eq:ablam}) gives $$\lambda^{p^i n}\cdot \left(\frac{\lambda^{p^i n} - \lambda^n}{\lambda^{p^i}-\lambda}\right) = \lambda^{p^i}\cdot \left(\frac{\lambda^{2p^i n} - \lambda^{2n}}{\lambda^{2p^i}-\lambda^2}\right).$$ Then a simple computation shows that this holds only when either $\lambda^{(p^i-1) n}=1$ or $1 = \lambda^{(p^i-1) (n-1)}$.
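As a concrete instance of this matrix computation, with the smallest parameters $p=2$, $i=1$, $q=4$, and exponent $2$ (which satisfies $2\not\equiv 0,1~(\bmod~3)$), one can check directly over $\mathbb{F}_4$ that the identity $(XY)^2=X^2Y^2$ fails for the matrices $a_\lambda$, $b_\lambda$; a sketch, with $\mathbb{F}_4=\mathbb{F}_2[t]/(t^2+t+1)$ modelled as coefficient pairs (the encoding is our choice for illustration):

```python
# F_4 = F_2[t]/(t^2 + t + 1); an element a0 + a1*t is stored as the pair (a0, a1)
def fadd(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def fmul(a, b):
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 + a1 * b1) % 2,            # using t^2 = t + 1
            (a0 * b1 + a1 * b0 + a1 * b1) % 2)

ZERO, ONE = (0, 0), (1, 0)

def mmul(A, B):
    return [[fadd(fmul(A[i][0], B[0][j]), fmul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

lam = (0, 1)                         # lambda = t, which satisfies lambda^2 != lambda
lam_tw = fmul(lam, lam)              # lambda^{p^i} with p^i = 2
A = [[lam_tw, ZERO], [ZERO, lam]]    # a_lambda
B = [[lam_tw, ONE], [ZERO, lam]]     # b_lambda
lhs = mmul(mmul(A, A), mmul(B, B))   # a_lambda^2 b_lambda^2
AB = mmul(A, B)
rhs = mmul(AB, AB)                   # (a_lambda b_lambda)^2
assert lhs != rhs                    # so (XY)^2 = X^2 Y^2 fails here
```

The two sides differ precisely in the $(1,2)$-entry, in agreement with the displayed formula.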
Since $n\not\equiv 0,1~(\bmod ~m)$, we see there is some $\lambda\in \mathbb{F}_q$ with $\lambda^{p^i}\neq \lambda$ such that $\lambda^{(p^i-1) n}\neq 1$ and $\lambda^{(p^i-1) (n-1)}\neq 1$, contradicting the fact that the identity $(XY)^n =X^n Y^n$ holds in $B_{p,n,i}$. Finally, we consider rings in the class $\mathcal{A}_p$. So suppose that $(XY)^n=X^n Y^n$, for each $n\in S$, is an identity for a ring $R$ in $\mathcal{A}_p$ for some prime $p$. Then by assumption there is some $n\in S$ such that $p\nmid {n\choose 2}$. Then there is some smallest $k\ge 1$ such that $p^k =0$ in $R$. Let $\alpha: R\to \mathbb{Z}/p^k \mathbb{Z}$ be the surjection obtained by reducing modulo the nilpotent radical. Then if $a\in R$ we have $a=\alpha(a)+x(a)$ for some $x(a)\in J(R)$. Moreover, there is some fixed $m$ such that $x^{p^m}=0$ for every $x\in J(R)$ and $m\ge 2$ if $p=2$. Since $p\nmid {n\choose 2}$, we have $n\not\equiv 0,1~(\bmod ~p^m)$, and so we can write $n=p^m n_0+b$ with $b\in \{2,\ldots ,p^m-1\}$. Then for $u\in R$ we have $u^n = u^{p^m n_0+b} = \alpha(u)^{p^m n_0} u^b$. Thus if $G$ is the subgroup of $R^*$ consisting of elements of the form $1+x$ with $x\in J(R)$, we have $x^b y^b = (xy)^b$ for all $x,y\in G$ (indeed, for $u\in G$ we have $\alpha(u)=1$, so $u^n=u^b$). Notice, however, that $G$ is a finite $p$-group and hence is nilpotent. Moreover, $G$ generates $R$ as a $\mathbb{Z}$-algebra and since $R$ is noncommutative, $G$ must be a nonabelian nilpotent group. In particular, $G$ has a normal subgroup $N$ such that $H:=G/N$ has the property that $H/Z(H)$ is abelian, where $Z(H)$ is the centre of $H$. It follows that there exist $x,y\in H$ such that $yx=zxy$ with $z\in Z(H)$ nontrivial. Since $x^b y^b=(xy)^b$ in $G$ this holds in $H$ and so $x^b y^b = (xy)^b = z^{b\choose 2} x^b y^b$. Since $H$ is a $p$-group, $z$ has order a power of $p$, and so ${b\choose 2}$ must be a multiple of $p$. Since ${n\choose 2}\equiv {b\choose 2}~(\bmod ~p)$, we see that $p\mid {n\choose 2}$, a contradiction.
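The group-theoretic identity used in this last step, namely $(xy)^b=z^{\binom{b}{2}}x^by^b$ whenever $yx=zxy$ with $z$ central, can be checked in the mod-$p$ Heisenberg group, the prototypical class-two $p$-group; a sketch (the choice $p=5$ is arbitrary):

```python
from math import comb

p = 5  # any prime works here

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) % p
             for j in range(3)] for i in range(3)]

def mpow(A, k):
    R = [[int(i == j) for j in range(3)] for i in range(3)]
    for _ in range(k):
        R = mmul(R, A)
    return R

# mod-p Heisenberg group: unitriangular 3x3 matrices; yx = z xy with z central
x = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
y = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
z = [[1, 0, p - 1], [0, 1, 0], [0, 0, 1]]
assert mmul(y, x) == mmul(z, mmul(x, y))
for b in range(1, 12):
    assert mpow(mmul(x, y), b) == mmul(mpow(z, comb(b, 2)),
                                       mmul(mpow(x, b), mpow(y, b)))
```

The identity itself follows by induction on $b$, pushing each $y$ past each $x$ at the cost of one central factor $z$.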
\end{proof} \begin{remark} Herstein \cite{Her2} also considered the case of identities of the form $$(X+Y)^n=X^n+Y^n.$$ Here it is simpler to describe the sets $S$ of natural numbers for which there is a noncommutative ring such that $(X+Y)^n=X^n+Y^n$ for every $n\in S$. This can only be the case if either there is an odd prime $p$ such that $S\subseteq T_p:=\{p,p^2,p^3,\ldots \}$, or if $S\subseteq T_2:=\{4,8,16,\ldots \}$. To see this, observe that these identities hold for all $n\in T_p$ for the ring $\mathbb{F}_p\{X,Y\}/(X,Y)^3$. On the other hand, if $(X+Y)^n=X^n+Y^n$ holds for a ring occurring in the statement of Theorem \ref{thm:trichotomy}, it must also hold on $\mathbb{F}_p$ for some prime $p$, since this is always a subring of a homomorphic image of these rings. But then it is easily checked that this forces $n$ to be a power of this fixed prime (with $n\ge 4$ when $p=2$). In the special case $p=2$, the identity $(X+Y)^2=X^2+Y^2$ forces a ring to be commutative. \end{remark} \section{Multilinear identities that force commutativity} \label{sec:multilinear} In this brief section we give a proof of a general result that in particular implies Theorem \ref{thm:multi} when we take $\mathcal{S}$ below to comprise a single identity. \begin{thm} Let $\mathcal{S}$ be a set of homogeneous multilinear polynomials with integer coefficients. Then there is a noncommutative ring $R$ for which every element of $\mathcal{S}$ is an identity if and only if there is some fixed prime $p$ such that whenever $P(X_1,\ldots ,X_m)=\sum_{\sigma \in S_m} c_{\sigma}X_{\sigma(1)}\cdots X_{\sigma(m)} \in \mathbb{Z}\{X_1,\ldots ,X_m\}$ is an element of $\mathcal{S}$ the following hold: \begin{enumerate} \item $p\mid P(1,1,\ldots ,1)$; \item $p\mid \Theta_{i,j}(P)$ for $1\le i<j\le m$, \end{enumerate} where the $\Theta_{i,j}(P)$ are as defined in Equation (\ref{eq:Theta}).
Moreover, if there is such a prime $p$ for which these conditions hold, then every element of $\mathcal{S}$ is an identity for the noncommutative ring $\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$. \label{thm:multi2} \end{thm} \vskip 2mm \begin{proof} If the elements of $\mathcal{S}$ are identities for a noncommutative ring $R$ then by Theorem \ref{thm:trichotomy} there is a prime $p$ such that there is a ring $R$ from one of the three classes of rings associated to the prime $p$ from the statement of the theorem for which each element of $\mathcal{S}$ is an identity. Then if $P(X_1,\ldots ,X_m)\in \mathcal{S}$, $P(1,1,\ldots ,1)=0$ in $R$ and so $p\mid P(1,1,\ldots ,1)$. Notice that if $i<j$ and we specialize $P(X_1,\ldots, X_m)$ taking $X_k=1$ for $k\notin \{i,j\}$ and $X_i=r$ and $X_j=s$ with $r,s\in R$ such that $[r,s]\neq 0$ then $P(X_1,\ldots ,X_m)$ becomes $\Theta_{i,j}(P)rs + (P(1,1,\ldots ,1)-\Theta_{i,j}(P))sr$. Then since $P(1,1,\ldots ,1)=0$ in $R$ and $rs-sr\neq 0$ we see that $\Theta_{i,j}(P)$ annihilates $rs-sr$ and hence it must have non-trivial gcd with the characteristic of $R$, which is a power of $p$. It follows that $p\mid \Theta_{i,j}(P)$ whenever $i<j$. Thus we see the necessity of these conditions. Now suppose that there exists a prime $p$ such that whenever $P(X_1,\ldots ,X_m)\in \mathcal{S}$, we have $p\mid P(1,1,\ldots ,1)$ and $p\mid \Theta_{i,j}(P)$ whenever $i<j$. Consider the ring $S:=\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$ and let $u$ and $v$ denote the images of $U$ and $V$ in $S$ respectively. Then $S$ is a noncommutative $4$-dimensional $\mathbb{F}_p$-algebra with basis $\mathcal{T}=\{1,u,v,vu\}$. We claim that $P$ is an identity for $S$. To see this, since $P$ is multilinear, it suffices to show that $P$ vanishes whenever it is evaluated at $m$-tuples in $\mathcal{T}^m$.
Moreover, if $z_1,\ldots ,z_m$ in $S$ commute then $P(z_1,\ldots ,z_m)=P(1,1,\ldots ,1)z_1\cdots z_m =0$ and since $vu$ and $1$ are central in $S$, we then see that it suffices to consider $m$-tuples in $\mathcal{T}^m$ with at least one copy of $u$ and at least one copy of $v$. Moreover, since $(u,v)^3=(0)$, we now see it suffices to consider $m$-tuples with exactly one copy of $u$, exactly one copy of $v$, and all other elements equal to $1$. If we take $i<j$ and $X_i=u$, $X_j=v$, and $X_k=1$ for $k\neq i,j$ then when we specialize $P$ at these values of $X_1,\ldots ,X_m$ we obtain $\Theta_{i,j}(P)[u,v] =0$, since $p\mid \Theta_{i,j}(P)$. Thus $P$ vanishes at all $m$-tuples in $\mathcal{T}^m$ and so it is a polynomial identity for $S$. The result follows. \end{proof} To give an example of how to apply Theorem \ref{thm:multi2}, observe that if $m=3$ and $$P(X_1,X_2,X_3) = \sum_{\sigma\in S_3} X_{\sigma(1)}X_{\sigma(2)}X_{\sigma(3)},$$ then $P(1,1,1)=6$ and $\Theta_{i,j}(P) =3$ for $1\le i<j\le 3$. Then we see that the conditions in the statement of the theorem are satisfied with the prime $p=3$ and $P=0$ is an identity for the ring $\mathbb{F}_3\{U,V\}/(U^2,V^2,UV)$, which has $81$ elements. \vskip 2mm In light of Theorem \ref{thm:multi2}, it is natural to ask for a finite set of identities that generate the identities for the ring $\mathbb{F}_p\{U,V\}/(U^2,V^2, UV)$, since these are the identities that do not help in the context of proving commutativity for multilinear identities. Notice that \begin{equation}\label{eq:idents} \begin{array}{ll} (Z_1^p-Z_1)Z_2(Z_3^p-Z_3)Z_4(Z_5^p-Z_5)=0 & [[Z_1,Z_2],Z_3]=0 \\ (Z_1^p-Z_1)Z_2 [Z_3,Z_4]=0 & [Z_1,Z_2]Z_3[Z_4,Z_5]=0 \\ {[}Z_1,Z_2] Z_3(Z_4^p-Z_4)=0 & p=0. \end{array} \end{equation} are polynomial identities for $\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$.
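The $m=3$ example above can be confirmed mechanically: by multilinearity it is enough to evaluate $P$ on basis triples of $\mathbb{F}_3\{U,V\}/(U^2,V^2,UV)$. A sketch using the structure constants read off from the relations (the coefficient-tuple encoding is our choice):

```python
from itertools import permutations, product

p = 3
# S = F_3{U,V}/(U^2, V^2, UV): 4-dimensional with basis 1, u, v, vu;
# an element is stored as the coefficient tuple (c_1, c_u, c_v, c_vu)
def mul(a, b):
    return ((a[0] * b[0]) % p,
            (a[0] * b[1] + a[1] * b[0]) % p,
            (a[0] * b[2] + a[2] * b[0]) % p,
            (a[0] * b[3] + a[3] * b[0] + a[2] * b[1]) % p)  # only v*u = vu survives

def add(a, b):
    return tuple((s + t) % p for s, t in zip(a, b))

BASIS = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
ZERO = (0, 0, 0, 0)

# P(X1, X2, X3) = sum over S_3 of X_{s(1)} X_{s(2)} X_{s(3)};
# by multilinearity it suffices to evaluate on basis triples
for triple in product(BASIS, repeat=3):
    total = ZERO
    for s in permutations(range(3)):
        total = add(total, mul(mul(triple[s[0]], triple[s[1]]), triple[s[2]]))
    assert total == ZERO
```

For instance, at the triple $(u,v,1)$ the six terms contribute $3vu\equiv 0$, while any triple containing repeated copies of $u$ or $v$ dies because $u^2=v^2=uv=0$.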
We point out this is not a minimal set: for example, one can deduce the identity $[Z_1,Z_2]Z_3 (Z_4^p -Z_4)=0$ from the identities $[[Z_1,Z_2],Z_3]=0$ and $(Z_1^p-Z_1)Z_2 [Z_3,Z_4]=0$. We choose, however, to work with this set of identities, as it is convenient. The following result shows that these identities generate the identities for $\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$. \begin{prop} Let $p$ be a prime number. Then every polynomial identity for $\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$ is generated by the identities given in Equation (\ref{eq:idents}). \label{prop:gen} \end{prop} \begin{proof} We make use of the notation from Section \ref{sec:dec} in this proof. Let $$P(X_1,\ldots ,X_s)\in \mathbb{Z}\{X_1,\ldots ,X_s\}$$ be an identity for $R:=\mathbb{F}_p\{U,V\}/(U^2,V^2,UV)$ and let $u$ and $v$ denote the images of $U$ and $V$ respectively in $R$. Since we have the identity $p=0$ at our disposal, we may work over $\mathbb{F}_p\{X_1,\ldots ,X_s\}$ instead, so we assume now that $P\in \mathbb{F}_p\{X_1,\ldots ,X_s\}$ and we adjust $\mathcal{C}_s$ and the maps $\Phi$ from items (\ref{eq:C}) and (\ref{eq:Phi}) to reflect that our base is now $\mathbb{F}_p$. Given two identities $P_1$ and $P_2$, we'll write $P_1\equiv P_2$ if the identity $P_1-P_2$ is implied by the identities in Equation (\ref{eq:idents}). Our goal is to show that the identity $P=0$ is implied by the identities in Equation (\ref{eq:idents}). Then since $\mathbb{F}_p$ is a commutative subring of $R$, $\Phi(P)$ is an identity for $\mathbb{F}_p$. Hence by Lemma \ref{lem:Alon}, $\Phi(P)\in (X_1^p-X_1,\ldots ,X_s^p-X_s)\mathbb{F}_p[X_1,\ldots ,X_s]$. So we have $$\Phi(P) = (X_1^p-X_1)A_1+\cdots + (X_s^p-X_s)A_s$$ for some $A_1,\ldots, A_s\in \mathbb{F}_p[X_1,\ldots ,X_s]$. Then notice that since $P$ is an identity for $R$, $\Phi(P)$ must be an identity for every commutative subring of $R$. Thus $\Phi(P)$ is an identity for $\mathbb{F}_p[vu]=\mathbb{F}_p\oplus \mathbb{F}_p\cdot vu$.
Then if we take $(\lambda_1,\ldots ,\lambda_s)\in \mathbb{F}_p^s$ and we specialize $X_j = \lambda_j$ for $j\neq i$ and $X_i=\lambda_i+vu$, then $X_j^p-X_j$ becomes zero for $j\neq i$ and $X_i^p-X_i$ becomes $-vu$. Then the fact that $\Phi(P)$ is an identity for $\mathbb{F}_p[vu]$ gives $$-vu A_i(\lambda_1,\ldots ,\lambda_s)=0$$ for every $(\lambda_1,\ldots ,\lambda_s)\in \mathbb{F}_p^s$. Hence $A_1,\ldots ,A_s$ are also identities for $\mathbb{F}_p$. Thus we can in fact write $\Phi(P)$ as $$\sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)Q_{i,j}$$ with the $Q_{i,j}\in \mathbb{F}_p[X_1,\ldots ,X_s]$. Since $\Phi(P-\bar{P})=0$, we then have that there are $\hat{Q}_{i,j}\in \mathcal{C}_s$ for $1\le i\le j\le s$ with $\Phi(\hat{Q}_{i,j})=Q_{i,j}$ and so $$P-\sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j} $$ is in the commutator ideal of $\mathbb{F}_p\{X_1,\ldots ,X_s\}$. Hence \begin{equation} P=\sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j} + \sum_{1\le i<j\le s} \sum_{k=1}^{m_{i,j}} B_{i,j,k} [X_i,X_j] C_{i,j,k}, \end{equation} for some integers $m_{i,j}$ and polynomials $B_{i,j,k},C_{i,j,k}\in \mathbb{Z}\{X_1,\ldots, X_s\}$ for $i$ and $j$ with $1\le i<j\le s$ and $k=1,\ldots ,m_{i,j}$. Since $[[Z_1,Z_2], Z_3]=0$ is an identity in Equation (\ref{eq:idents}), we can reduce our expression for $P$ modulo these identities and we have $$\sum_{k=1}^{m_{i,j}} B_{i,j,k} [X_i,X_j] C_{i,j,k} \equiv [X_i,X_j] \left(\sum_{k=1}^{m_{i,j}} B_{i,j,k} C_{i,j,k}\right).$$ Thus if we let $$D_{i,j}=\sum_{k=1}^{m_{i,j}} B_{i,j,k} C_{i,j,k}$$ for $1\le i<j\le s$, we see we may assume that our identity $P$ is of the form \begin{equation} \label{eq:newP} \sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j} + \sum_{1\le i<j\le s} [X_i,X_j] D_{i,j}.
\end{equation} Moreover, since $[Z_1,Z_2]Z_3[Z_4,Z_5]=0$ is an identity in Equation (\ref{eq:idents}), we may again work modulo the equivalence above and assume without loss of generality that each $D_{i,j}\in \mathcal{C}_s$. We now fix $i$ and $j$ with $i<j$. We let $(\lambda_1,\ldots ,\lambda_s)\in \mathbb{F}_p^s$ and we specialize our variables with $X_k = \lambda_k$ for $k\neq i,j$, $X_i=\lambda_i+u$, $X_j=\lambda_j+v$. Then for $k\le \ell$, $(X_k^p-X_k)(X_{\ell}^p-X_{\ell})$ becomes zero unless $(k,\ell)=(i,j)$, but in this case it becomes $uv$, which is also zero. On the other hand, $[X_k,X_{\ell}]$ becomes zero under this specialization unless $(k,\ell)=(i,j)$ and $[X_i,X_j]$ becomes $-vu\neq 0$. Then since $(u,v)^3=(0)$, we see that under this specialization Equation (\ref{eq:newP}) becomes $-vu D_{i,j}(\lambda_1,\ldots ,\lambda_s)$, and so $D_{i,j}$ is an identity for $\mathbb{F}_p$. Then Lemma \ref{lem:Alon} gives that $D_{i,j}$ is in the ideal generated by $X_k^p-X_k$ for $1\le k\le s$ along with the commutators $[X_k,X_{\ell}]$ for $1\le k<\ell\le s$. But this means that $[X_i,X_j]D_{i,j}$ is an identity for $R$ and that it is implied by the identities $[Z_1,Z_2]Z_3[Z_4,Z_5]=0$ and $[Z_1,Z_2]Z_3 (Z_4^p-Z_4)=0$ given in Equation (\ref{eq:idents}) for $1\le i<j\le s$. Thus we can further reduce modulo our equivalence and assume that $P$ is of the form $$\sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j}.$$ Now we fix $i$ and $j$ with $1\le i<j\le s$. For $(\lambda_1,\ldots ,\lambda_s)\in \mathbb{F}_p^s$, we specialize $X_k=\lambda_k$ for $k\neq i,j$ and $X_i=\lambda_i+v$, $X_j=\lambda_j+u$. Then under this specialization $(X_i^p-X_i)(X_j^p-X_j)$ becomes $vu\neq 0$ but $(X_k^p-X_k)(X_{\ell}^p-X_{\ell})$ becomes zero for $1\le k\le \ell\le s$ and $(k,\ell)\neq (i,j)$ (the cases $k=\ell=i$ and $k=\ell=j$ follow from the fact that both $u^2$ and $v^2$ are zero in $R$).
Thus $$\sum_{1\le i\le j\le s} (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j}$$ specializes to $vu\hat{Q}_{i,j}(\lambda_1,\ldots ,\lambda_s)$, and so $\hat{Q}_{i,j}$ is an identity for $\mathbb{F}_p$ for $i<j$. Thus Lemma \ref{lem:Alon} gives that $\hat{Q}_{i,j}$ is in the ideal generated by $X_k^p-X_k$ for $1\le k\le s$ along with the commutators $[X_k,X_{\ell}]$ for $1\le k<\ell\le s$. Again, since $$(Z_1^p-Z_1)Z_2(Z_3^p-Z_3)Z_4(Z_5^p-Z_5)=0 ~~{\rm and}~~(Z_1^p-Z_1)Z_2[Z_3,Z_4]=0$$ are identities in Equation (\ref{eq:idents}), we see that $ (X_i^p-X_i)(X_j^p-X_j)\hat{Q}_{i,j}$ is an identity for $R$ and that it is implied by the identities in Equation (\ref{eq:idents}) for $i<j$. Thus we may further reduce our identity modulo the equivalence above and assume that our identity is of the form $$\sum_{i=1}^s (X_i^p-X_i)(X_i^p-X_i)\hat{Q}_{i,i}.$$ Now we fix $i\in \{1,\ldots ,s\}$ and for $(\lambda_1,\ldots ,\lambda_s)\in \mathbb{F}_p^s$, we specialize $X_k=\lambda_k$ for $k\neq i$ and $X_i=\lambda_i +u+v$. Then $X_k^p-X_k$ becomes zero under this specialization for $k\neq i$ and $(X_i^p-X_i)^2$ becomes $vu$. Thus the same argument as above shows that $\hat{Q}_{i,i}$ is an identity for $\mathbb{F}_p$ and that $ (X_i^p-X_i)(X_i^p-X_i)\hat{Q}_{i,i}$ is implied by the identities in Equation (\ref{eq:idents}) for $i=1,\ldots ,s$. Thus $P\equiv 0$ and the result now follows. \end{proof} We point out that, beyond traditional polynomial identities, there is a large body of work dealing with \emph{functional identities}, which are more general and have been developed by Bre\v{s}ar and others (see, for example, \cite{Bres1, Bres2}). Many natural classes of functional identities yield commutativity theorems---for example the fixed-degree case of Herstein's result on multiplicative commutators in division rings \cite{Her3} can be cast in this framework---and it is natural to ask to what extent the results given here can be extended to this more general framework. We conclude this paper by raising a question.
Theorem \ref{thm:algorithm} is a theorem for ring identities, but one can instead fix a finitely generated commutative $\mathbb{Z}$-algebra $C$ (e.g., the ring of integers in a number field or a finite field) and work in the category of $C$-algebras and consider polynomial identities with coefficients in $C$. It is possible that the approach we use could be used to deal with certain interesting classes of commutative base rings $C$, but we do not know of an algorithm that works for a general finitely generated commutative base ring $C$. We note, however, that our approach applies to the case when $C$ is a homomorphic image of $\mathbb{Z}$: in this case one can lift the identities to identities over the integers; the condition that our rings be $C$-algebras then puts an additional constraint on the characteristic of the ring when $C\neq\mathbb{Z}$. In particular, we can use the algorithm provided in \S3, but where we restrict our focus to rings in the various classes whose characteristic divides the characteristic of $C$. \begin{question} Let $C$ be a finitely generated commutative ring. Given a finite presentation of $C$ and a finite set of polynomial identities $P_1=\cdots =P_m=0$, with $P_1,\ldots ,P_m\in C\{X_1,\ldots ,X_s\}$ for some $s\ge 1$, is there a decision procedure that takes the data from the presentation of $C$ and the polynomials $P_1,\ldots ,P_m$ as input and decides after a finite number of steps whether or not every $C$-algebra for which these identities all simultaneously hold is commutative? \end{question} This question is especially interesting in the cases when $C$ is either a number ring (i.e., the ring of algebraic integers in a finite extension of $\mathbb{Q}$) or when $C$ is a finite field. In these cases, one might be able to extend the approach given in this paper to this setting, although it would require an extension of Theorem \ref{thm:trichotomy} to such $C$-algebras. \vskip 2mm \noindent{\bf Funding:} The work of Jason P.
Bell was supported by NSERC Discovery Grant RGPIN-2016-03632. The work of Peter V. Danchev was partially supported by the Bulgarian National Science Fund under Grant KP-06 No 32/1 of December 07, 2019. \vskip2pc \end{document}
\begin{document} \title[Metrizable isotropic SODE and Hilbert's fourth problem] {Metrizable isotropic second-order differential equations and Hilbert's fourth problem} \author[Bucataru]{Ioan Bucataru} \address{Ioan Bucataru, Faculty of Mathematics, Alexandru Ioan Cuza University \\ Ia\c si, Romania} \urladdr{http://www.math.uaic.ro/\textasciitilde{}bucataru/} \author[Muzsnay]{Zolt\'an Muzsnay} \address{Zolt\'an Muzsnay, Institute of Mathematics, University of Debrecen \\Debrecen, Hungary} \urladdr{http://www.math.klte.hu/\textasciitilde{}muzsnay/} \date{\today} \begin{abstract} It is well known that a system of homogeneous second-order ordinary differential equations (spray) must be isotropic in order to be metrizable by a Finsler function of scalar flag curvature. In Theorem \ref{scalar_flag} we show that the isotropy condition, together with three other conditions on the Jacobi endomorphism, characterizes sprays that are metrizable by Finsler functions of scalar flag curvature. The proof of Theorem \ref{scalar_flag} provides an algorithm to construct the Finsler function of scalar flag curvature in the case when a given spray is metrizable. One condition of Theorem \ref{scalar_flag}, regarding the regularity of the sought-after Finsler function, can be relaxed. By relaxing this condition, we provide examples of sprays that are metrizable by conic pseudo-Finsler functions as well as by degenerate Finsler functions. Hilbert's fourth problem asks us to determine the Finsler functions with rectilinear geodesics. A Finsler function that is a solution to Hilbert's fourth problem is necessarily of constant or scalar flag curvature. Therefore, we can use the conditions of \cite[Theorem 4.1]{BM13} and Theorem \ref{scalar_flag} to test when the projective deformations of a flat spray, which are isotropic, are metrizable by Finsler functions of constant or scalar flag curvature.
We show how to use the algorithms provided by the proofs of \cite[Theorem 4.1]{BM13} and Theorem \ref{scalar_flag} to construct solutions to Hilbert's fourth problem. \end{abstract} \subjclass[2000]{53C60, 58B20, 49N45, 58E30} \keywords{isotropic sprays, Finsler metrizability, flag curvature} \maketitle \section{Introduction} Second-order ordinary differential equations (SODEs) are important mathematical objects because they have a large variety of applications in different domains of mathematics, science and engineering, \cite{AIM93}. A particularly interesting class of SODEs is the one that can be derived from a variational principle. The inverse problem of the calculus of variations (IP) consists of characterizing variational SODEs, that is, of determining whether or not a given SODE can be described as the critical point of a functional. The most significant contribution to this problem is the famous paper of Douglas \cite{Douglas41} in which, using Riquier's theory, he classifies variational differential equations with two degrees of freedom. Generalizing his results to higher-dimensional cases is a hard problem because the Euler-Lagrange system is an extremely over-determined system of partial differential equations (PDEs), so in general it has no solution. The integrability conditions of the Euler-Lagrange PDE can be very complex and can change case by case, \cite{AT92, BM11, GM00, Krupkova97, MFLMR90, STP02}. Therefore, it seems to be impossible to obtain a complete classification of variational SODEs in the $n$-dimensional case, unless we restrict the problem to particular classes of sprays with special curvature properties, \cite{Berwald41, BR04, Bryant02, BM13, Crampin07, Shen03}. A special and very interesting problem, within the IP, is known as the Finsler metrizability problem. Here the Lagrangian to search for is the energy function of a Finslerian or a Riemannian metric, \cite{KS85, Muzsnay06, SV02}.
Of course, in this problem the given system of SODEs and the associated spray must be homogeneous or quadratic. If the corresponding metric exists, then the integral curves of the given SODE are the geodesic curves of the corresponding Finslerian or Riemannian metric. Since the obstructions to the existence of a metric for a given SODE are essentially related to curvature properties of the associated canonical nonlinear connection, it seems reasonable to consider SODEs with special curvature properties. Obvious candidates to investigate are Finsler structures with constant or scalar flag curvature. It is therefore natural to formulate the following problem. Provide the necessary and sufficient conditions that can be used to decide whether or not a given homogeneous system of second-order ordinary differential equations represents the Euler-Lagrange equations of a Finsler function of constant flag curvature or scalar flag curvature, respectively. In \cite{BM13} we solved the first part of the problem by giving a characterization of sprays that are metrizable by Finsler functions of constant flag curvature. In the present paper we consider the second part of the problem and solve it completely by giving a coordinate-free characterization of sprays metrizable by Finsler functions of scalar flag curvature. Our main result can be found in Section \ref{sec:scalar_metrizable}, where we provide the necessary and sufficient conditions, as tensorial equations on the Jacobi endomorphism, which can be used to decide whether or not a given homogeneous SODE represents the geodesic equations of a Finsler function of scalar curvature. It is known that a spray metrizable by a Finsler function of scalar flag curvature is necessarily isotropic. In Theorem \ref{scalar_flag} we provide three other conditions which, together with the isotropy condition, characterize the class of sprays that are metrizable by Finsler functions of scalar flag curvature.
The proof offers, in the case when the test is affirmative, an algorithm to construct the Finsler function of scalar flag curvature that metrizes the given spray. In all the examples we provide, we show how to use the proposed algorithm to construct such Finsler functions. The importance of characterizing sprays metrizable by Finsler functions of scalar flag curvature for constructing all systems of ODEs with vanishing Wilczynski invariants has been discussed recently in \cite{CDT12}. In Section \ref{sec:Hilbert} we show that our results for characterizing metrizable sprays lead to a new approach to Hilbert's fourth problem. This problem asks to construct and study the geometries in which the straight line segment is the shortest connection between two points, \cite{Alvarez05}. Alternatively, one can reformulate the problem as follows: ``given a domain $\Omega \subset \mathbb{R}^n$, determine all (Finsler) metrics on $\Omega$ whose geodesics are straight lines'', \cite[p.191]{Shen01}. Yet another reformulation of the problem asks for the determination of projectively flat Finsler metrics, \cite{Crampin11}. Projectively flat Finsler functions have isotropic geodesic sprays and therefore have constant or scalar flag curvature. Such Finsler metrics of constant flag curvature were studied in \cite{Shen03}. We use the conditions of \cite[Theorem 4.1]{BM13} and Theorem \ref{scalar_flag} to study when the projective deformations of a flat spray are metrizable. Using these conditions, we show how to construct examples of solutions to Hilbert's fourth problem given by Finsler functions of constant and of scalar flag curvature, respectively. In Section \ref{sec:examples} we give working examples to show how to use Theorem \ref{scalar_flag} to test whether or not some other sprays are Finsler metrizable, and in the affirmative case how to construct the corresponding Finsler function.
By relaxing a regularity condition of Theorem \ref{scalar_flag}, we show that we can also characterize sprays that are metrizable by conic pseudo-, or degenerate Finsler functions. \section{The geometric framework for Finsler metrizability} In this section we present the geometric setting for addressing the Finsler metrizability problem, \cite{BM12a, KS85, Muzsnay06, Shen01, Szilasi03}. This geometric setting, which includes connections and curvature, can be derived directly from a given homogeneous SODE using the Fr\"olicher-Nijenhuis formalism, \cite[\S 30]{KMS93}, \cite[Chapter 2]{GM00}. \subsection{Spray, connections and curvature} We consider a smooth, real, $n$-dimensional manifold $M$. In this work, all geometric structures are assumed to be smooth. We denote by $C^{\infty}(M)$ the set of smooth functions on $M$, by $\mathfrak{X}(M)$ the set of vector fields on $M$, and by $\Lambda^k(M)$ the set of $k$-forms on $M$. For the manifold $M$, we consider the tangent bundle $(TM, \pi, M)$ and $(T_0M=TM\setminus\{0\}, \pi, M)$, the tangent bundle with the zero section removed. If $(x^i)$ are local coordinates on the base manifold $M$, the induced coordinates on the total space $TM$ will be denoted by $(x^i, y^i)$. The tangent bundle carries some canonical structures that are very useful for formulating our geometric framework. One structure is the \emph{vertical subbundle} $VTM=\{\xi \in TTM : (D\pi)\xi=0\}$, which induces an integrable, $n$-dimensional distribution $V: u\in TM \to V_u=VTM\cap T_uTM$. Locally, this distribution, which we will refer to as the \emph{vertical distribution}, is spanned by $\{\partial /\partial y^i\}$. Two other structures, defined on $TM$, are the tangent structure, $J$, and the Liouville vector field, $\mathbb{C}$, locally given by \begin{eqnarray*} J=\frac{\partial}{\partial y^i}\otimes dx^i, \quad \mathbb{C}=y^i\frac{\partial}{\partial y^i}.
\end{eqnarray*} The main object of this work is a system of $n$ homogeneous second-order ordinary differential equations, whose coefficients do not depend explicitly on time, \begin{eqnarray} \frac{d^2x^i}{dt^2} + 2G^i\left(x, \frac{dx}{dt}\right)=0. \label{sode} \end{eqnarray} We assume that the functions $G^i(x,y)$ are positive $2$-homogeneous, which means that $G^i(x,\lambda y)=\lambda^2 G^i(x,y)$, for all $\lambda>0$. By Euler's Theorem, the homogeneity condition on the functions $G^i$ is equivalent to $\mathbb{C}(G^i)=2G^i$. The system \eqref{sode} can be identified with a special vector field $S\in \mathfrak{X}(T_0M)$ that satisfies the conditions $JS=\mathbb{C}$ and $[\mathbb{C}, S]=S$. Such a vector field is called a \emph{spray} and it is locally given by \begin{eqnarray} S=y^i\frac{\partial}{\partial x^i} - 2G^i(x,y)\frac{\partial}{\partial y^i}. \label{slocal} \end{eqnarray} If we reparameterize the second-order system \eqref{sode}, preserving the orientation of the parameter, we obtain a new system and hence a new spray $\tilde{S}=S-2P\mathbb{C}$, \cite{AIM93, Shen01}. The function $P\in C^{\infty}(T_0M)$ is $1$-homogeneous, which means that it satisfies $\mathbb{C}(P)=P$. The two sprays $S$ and $\tilde{S}$ are called \emph{projectively related}, and the function $P$ is called a \emph{projective deformation} of the spray $S$. An important geometric structure that can be associated to a spray is that of a \emph{nonlinear connection} (horizontal distribution, Ehresmann connection). A nonlinear connection is defined by an $n$-dimensional distribution $H: u\in TM \to H_u\subset T_uTM$ that is supplementary to the vertical distribution: $T_uTM=H_u\oplus V_u$. It is well known that a spray $S$ induces a nonlinear connection with the corresponding horizontal and vertical projectors given by \begin{eqnarray*} h=\frac{1}{2}\left( \operatorname{Id} - [S, J]\right), \quad v=\frac{1}{2}\left( \operatorname{Id} + [S, J]\right).
\end{eqnarray*} Locally, the above two projectors can be expressed as follows \begin{eqnarray*} h & =& \frac{\delta}{\delta x^i} \otimes dx^i, \quad v=\frac{\partial}{\partial y^i} \otimes \delta y^i, \quad \textrm{ where } \\ \frac{\delta}{\delta x^i} & = & \frac{\partial}{\partial x^i} - N^j_i(x,y) \frac{\partial}{\partial y^j}, \quad \delta y^i=dy^i + N^i_j(x,y) dx^j, \quad N^i_j(x,y)=\frac{\partial G^i}{\partial y^j}(x,y). \end{eqnarray*} Alternatively, the nonlinear connection induced by a spray $S$ can be characterized in terms of an \emph{almost complex structure}, \begin{eqnarray*} \mathbb{F}=h\circ \mathcal{L}_Sh - J= \frac{\delta}{\delta x^i}\otimes \delta y^i - \frac{\partial}{\partial y^i} \otimes dx^i. \label{complexstr} \end{eqnarray*} It is straightforward to check that $\mathbb{F}\circ J=h$ and $J\circ \mathbb{F}=v$. The horizontal distribution $H$ is, in general, non-integrable. The obstruction to its integrability is given by the \emph{curvature tensor} \begin{eqnarray} R=\frac{1}{2}[h,h]=\frac{1}{2}R^i_{jk}\frac{\partial}{\partial y^i}\otimes dx^j \wedge dx^k = \frac{1}{2}\left(\frac{\delta N^i_j}{\delta x^k} - \frac{\delta N^i_k}{\delta x^j} \right)\frac{\partial}{\partial y^i}\otimes dx^j \wedge dx^k. \label{curvature} \end{eqnarray} Due to the homogeneity condition of a spray $S$, curvature information can be obtained also from the \emph{Jacobi endomorphism} \begin{eqnarray} \Phi=v\circ [S,h] = R^i_j \frac{\partial}{\partial y^i}\otimes dx^j = \left(2\frac{\partial G^i}{\partial x^j} - S(N^i_j) - N^i_kN^k_j\right) \frac{\partial}{\partial y^i}\otimes dx^j. \label{localphi} \end{eqnarray} The two curvature tensors are related by \begin{eqnarray} 3R=[J, \Phi], \quad \Phi = i_SR. \label{rphi} \end{eqnarray} As we will see in this work, important geometric information about the given spray $S$ is encoded in the \emph{Ricci scalar}, $\rho\in C^{\infty}(T_0M)$, \cite{BR04}, \cite[Def.
8.1.7]{Shen01}, which is given by \begin{eqnarray} (n-1)\rho = R^i_i=\operatorname{Tr}(\Phi). \label{ricr} \end{eqnarray} \begin{defn} \label{defn:iso} A spray $S$ is said to be \emph{isotropic} if there exists a semi-basic $1$-form $\alpha \in \Lambda^1(T_0M)$ such that the Jacobi endomorphism can be written as follows \begin{eqnarray} \Phi = \rho J-\alpha \otimes \mathbb{C}. \label{isophi} \end{eqnarray} \end{defn} Due to the homogeneity condition, for isotropic sprays, the Ricci scalar is given by $\rho=i_S\alpha$. Using formulae \eqref{rphi} and \eqref{isophi}, it can be shown that the class of isotropic sprays can be characterized also in terms of the curvature $R$ of the nonlinear connection, \cite[Prop. 3.4]{BM11}, \begin{eqnarray} 3R= \left(d_J\rho +\alpha\right)\wedge J -d_J\alpha \otimes \mathbb{C}. \label{isor}\end{eqnarray} To complete the geometric setting for studying the Finsler metrizability problem of a spray, we will use also the \emph{Berwald connection}. It is a linear connection on $T_0M$, given by \begin{eqnarray*} D_XY=h[vX, hY] + v[hX, vY] + (\mathbb{F} +J)[hX, JY] + J[vX, (\mathbb{F} +J)Y], \quad \forall X,Y \in \mathfrak{X}(T_0M). \end{eqnarray*} Locally, the Berwald connection is given by \begin{eqnarray*} D_{\frac{\delta}{\delta x^i}} \frac{\delta}{\delta x^j} = \frac{\partial N^k_i}{\partial y^j} \frac{\delta}{\delta x^k}, \quad D_{\frac{\delta}{\delta x^i}} \frac{\partial }{\partial y^j} = \frac{\partial N^k_i}{\partial y^j} \frac{\partial}{\partial y^k}, \\ D_{\frac{\partial}{\partial y^i}} \frac{\delta}{\delta x^j} = 0, \quad D_{\frac{\partial}{\partial y^i}} \frac{\partial }{\partial y^j} = 0. \end{eqnarray*} The Berwald connection has two curvature components. One is the Riemann curvature tensor, which is directly related to the curvature tensor $R$ and the Jacobi endomorphism $\Phi$. The other one is the Berwald curvature, \cite[\S 7.1, \S 8.1]{Shen01}.
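As a concrete illustration of the local formula \eqref{localphi} and of the trace relation \eqref{ricr}, the following sympy sketch (our example, not from the paper) computes the Jacobi endomorphism of the geodesic spray of the round sphere metric $F^2=y_1^2+\sin^2(x^1)\,y_2^2$ in spherical coordinates; this metric has constant flag curvature $1$, so $\operatorname{Tr}(\Phi)=(n-1)\rho=F^2$:

```python
import sympy as sp

# Verify the local formula for the Jacobi endomorphism,
#   R^i_j = 2 dG^i/dx^j - S(N^i_j) - N^i_k N^k_j,   N^i_j = dG^i/dy^j,
# for the geodesic spray of the round sphere metric
#   F^2 = y1^2 + sin^2(x1) y2^2   (x1 = theta, x2 = phi).
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
X, Y = [x1, x2], [y1, y2]
G = [-sp.Rational(1, 2) * sp.sin(x1) * sp.cos(x1) * y2**2,
     sp.cot(x1) * y1 * y2]

# Euler's theorem: the spray coefficients are 2-homogeneous in y.
for Gi in G:
    assert sp.simplify(y1*sp.diff(Gi, y1) + y2*sp.diff(Gi, y2) - 2*Gi) == 0

N = [[sp.diff(G[i], Y[j]) for j in range(2)] for i in range(2)]

def S(f):  # the spray S = y^i d/dx^i - 2 G^i d/dy^i acting on a function
    return sum(Y[i]*sp.diff(f, X[i]) - 2*G[i]*sp.diff(f, Y[i]) for i in range(2))

Phi = [[sp.simplify(2*sp.diff(G[i], X[j]) - S(N[i][j])
                    - sum(N[i][k]*N[k][j] for k in range(2)))
        for j in range(2)] for i in range(2)]

# The trace relation (n-1) rho = Tr(Phi) recovers the energy F^2.
F2 = y1**2 + sp.sin(x1)**2 * y2**2
assert sp.simplify(Phi[0][0] + Phi[1][1] - F2) == 0
```

The same routine can be pointed at any concrete spray coefficients $G^i$ to obtain the Ricci scalar from the trace of $\Phi$.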
\subsection{Finsler spaces} In this section, we briefly recall the notion of a Finsler function, as well as some generalizations: conic pseudo-Finsler functions and degenerate Finsler functions. The variational problem for the energy of a Finsler function determines a spray, which is called the geodesic spray. The Finsler metrizability problem requires us to decide if a given spray represents the geodesic spray of a Finsler function. \begin{defn} \label{Finsler} A continuous function $F:TM \to \mathbb{R}$ is called a \emph{Finsler function} if it satisfies the following conditions: \begin{itemize} \item[i)] $F$ is smooth and strictly positive on $T_0M$, with $F(x,0)=0$; \item[ii)] $F$ is positively homogeneous of order $1$ in the fibre coordinates, which means that $F(x,\lambda y)=\lambda F(x,y)$ for all $\lambda > 0$ and $(x,y)\in T_0M$; \item[iii)] the $2$-form $dd_JF^2$ is a symplectic form on $T_0M$. \end{itemize} \end{defn} In this work we will allow for some relaxations of the above conditions, regarding the domain of the function as well as the regularity condition iii). See \cite[\S 1.1.2, \S 1.2.1]{AIM93}, \cite{Bryant02}, \cite{JS12} for more discussion of the regularity conditions and their relaxations for a Finsler function. If the function $F$ is defined on some positive conical region $A\subset TM$ and the three conditions of Definition \ref{Finsler} are satisfied on $A\cap T_0M$, then we call $F$ a \emph{conic pseudo-Finsler metric}. Moreover, if we replace the regularity condition iii) by the weaker condition $\operatorname{rank}(dd_JF^2)\in \{1,\ldots, 2n-1\}$ on $A\cap T_0M$, we call $F$ a \emph{degenerate Finsler metric}, \cite{JS12}. \begin{defn} \label{fmetr} A spray $S$ is called Finsler metrizable if there exists a Finsler function $F$ such that \begin{eqnarray} i_Sdd_JF^2=-dF^2.
\label{isddj} \end{eqnarray} \end{defn} We will also use the metrizability property in a broader sense, by calling a spray $S$ conic pseudo-, or degenerate Finsler metrizable if there exists a conic pseudo-, or degenerate Finsler function $F$ such that the equation \eqref{isddj} is satisfied. If a spray $S$ satisfies the equation \eqref{isddj}, we call it the \emph{geodesic spray} of the (conic pseudo-, or degenerate) Finsler function $F$. It is well known that $S$ is the geodesic spray of such a function if and only if it satisfies the following equation: \begin{eqnarray} d_hF^2=0. \label{gsf2} \end{eqnarray} Let $S$ be the geodesic spray of some (conic pseudo-, or degenerate) Finsler function $F$ and let $\Phi$ be the Jacobi endomorphism. \begin{defn} The function $F$ is said to be of \emph{scalar flag curvature} if there exists a function $\kappa \in C^{\infty}(T_0M)$ such that \begin{eqnarray} \Phi = \kappa \left( F^2 J - Fd_J F\otimes \mathbb{C}\right). \label{fscalar} \end{eqnarray} \end{defn} Using formulae \eqref{isophi} and \eqref{fscalar}, it follows that for a Finsler function $F$ of scalar flag curvature $\kappa$, its geodesic spray $S$ is isotropic, with Ricci scalar $\rho=\kappa F^2$ and semi-basic $1$-form $\alpha = \kappa F d_J F$. Conversely, it can be shown that if an isotropic spray $S$ is metrizable by a Finsler function $F$, then $F$ is necessarily of scalar flag curvature. See \cite[Lemma 8.3.2]{Shen01}, or the first implication in \cite[Thm. 4.2]{BM13} for an alternative proof. We can summarize the above considerations as follows.
\begin{rem} For a Finsler function, its geodesic spray is isotropic if and only if the Finsler function is of scalar flag curvature.\end{rem} \section{Sprays metrizable by Finsler functions of scalar curvature} \label{sec:scalar_metrizable} The problem we want to address in this paper is the following: provide necessary and sufficient conditions for a spray $S$ to be metrizable by a Finsler function of scalar flag curvature. The above discussion restricts the class of sprays we start with to the class of isotropic sprays. Alternative formulations of the conditions we use in the next theorem were first proposed in \cite[Thm 7.2]{GM00}, in the analytic case, to decide when a non-flat isotropic spray is variational, by discussing the formal integrability of an associated partial differential operator. However, the next theorem will provide an algorithm to construct the Finsler function that metrizes a given spray, in the case that it is metrizable. Moreover, the differentiability assumption we use in the next theorem is weaker: all geometric structures we use are smooth, not necessarily analytic. The next theorem extends the results of Theorem 4.1 in \cite{BM13}, where we characterize sprays metrizable by Finsler functions of constant flag curvature. \begin{thm} \label{scalar_flag} Let $S$ be a spray with non-vanishing Ricci scalar. The spray $S$ is metrizable by a Finsler function $F$ of non-vanishing scalar flag curvature if and only if \begin{itemize} \item[i)] $S$ is isotropic; \item[ii)] $d_J(\alpha/\rho) =0$; \item[iii)] $D_{hX}(\alpha/\rho) =0$, for all $X\in \mathfrak{X}(T_0M)$; \item[iv)] $d(\alpha/\rho) + 2i_{\mathbb F}\alpha/\rho \wedge \alpha/\rho$ is a symplectic form on $T_0M$. \end{itemize} \end{thm} \begin{proof} We assume that the spray $S$ is metrizable by a Finsler function $F$ of scalar flag curvature $\kappa$ and we will prove that the four conditions i)-iv) are necessary.
Since the Jacobi endomorphism $\Phi$ is given by formula \eqref{fscalar}, as we discussed already, it follows that $S$ is isotropic, and hence condition i) is satisfied. The semi-basic $1$-form $\alpha$ and the Ricci scalar $\rho$ are given by \begin{eqnarray} \alpha=\kappa F d_JF, \quad \rho=\kappa F^2. \label{ark} \end{eqnarray} It follows that $\alpha/\rho=d_JF/F$ and therefore $d_J(\alpha/\rho)=0$, which means that condition ii) is satisfied. Since $S$ is the geodesic spray of the Finsler function $F$, it follows from formula \eqref{gsf2} that $d_hF^2=0$ and hence $d_hF=0$. Therefore, $D_{hX}F=(hX)(F)=(d_h F)(X)=0$ and $D_{hX}d_JF=0$. It follows that $D_{hX}(\alpha/\rho) =0$ and hence condition iii) is also satisfied. We check now the regularity condition iv). Using $d_hF=0$ and $J\circ \mathbb{F}=v$, we obtain \begin{eqnarray*} i_{\mathbb{F}}\frac{\alpha}{\rho}=i_{\mathbb{F}}\frac{1}{F}d_JF=\frac{1}{F}d_vF = \frac{1}{F}dF. \label{ifar} \end{eqnarray*} Therefore, using the regularity of the Finsler function, it follows that \begin{eqnarray*} d\left(\frac{\alpha}{\rho}\right)+ 2i_{\mathbb F}\frac{\alpha}{\rho} \wedge \frac{\alpha}{\rho} = d\left(\frac{d_J F}{F}\right) + \frac{2}{F^2} dF \wedge d_JF = \frac{1}{2F^2} dd_JF^2 \end{eqnarray*} is a symplectic form on $T_0M$. Let us prove now the sufficiency of the four conditions i)-iv). Let $S$ be a spray that satisfies all four conditions i)-iv). Condition i) says that the spray $S$ is isotropic, which means that its Jacobi endomorphism $\Phi$ is given by formula \eqref{isophi}. The next three conditions ii)-iv) refer to the semi-basic $1$-form $\alpha$ and the Ricci scalar $\rho$, which enter into the expression \eqref{isophi} of the Jacobi endomorphism $\Phi$. From condition ii) we have that the semi-basic $1$-form $\alpha/\rho$ is a $d_J$-closed $1$-form. Since the tangent structure $J$ is integrable, it follows that $[J,J]=0$ and hence $d_J^2=0$.
Therefore, using a Poincar\'e-type Lemma for the differential operator $d_J$, it follows that, locally, $\alpha/\rho$ is a $d_J$-exact $1$-form. It follows that there exists a function $f$, locally defined on $T_0M$, such that \begin{eqnarray} \frac{1}{\rho}\alpha = d_Jf=\frac{\partial f}{\partial y^i} dx^i. \label{radjf} \end{eqnarray} Note that this function $f$ is not unique; it is determined only up to an arbitrary basic function $a\in C^{\infty}(M)$. We will prove that using this function $f$ and a corresponding basic function $a$, we can construct a Finsler function $F=\exp(f-a)$, of scalar flag curvature, which metrizes the given spray $S$. Using the commutation rule for $i_S$ and $d_J$, see \cite[Appendix A]{GM00}, we have \begin{eqnarray} {\mathbb C}(f)= i_Sd_Jf = i_S\frac{\alpha}{\rho}=1. \label{eq3} \end{eqnarray} Using the condition ii) of the theorem, and the form \eqref{isor} of the curvature tensor $R$, we obtain \begin{eqnarray} 3d_Rf= (d_J\rho+\alpha )\wedge d_Jf - \mathbb{C}(f) d_J\alpha = (d_J\rho +\alpha )\wedge \frac{\alpha}{\rho} - d_J\alpha = - \rho d_J\left(\frac{\alpha}{\rho}\right)=0. \label{eq5} \end{eqnarray} The condition iii) of the theorem can be written locally as follows \begin{eqnarray} D_{\delta/\delta x^j}\left(\frac{\partial f}{\partial y^i}\right) = \frac{\partial}{\partial y^i} \left(\frac{\delta f}{\delta x^j}\right) = 0, \label{dpf} \end{eqnarray} which means that the components $\omega_i= \delta f/\delta x^i$ are independent of the fibre coordinates. In other words, \begin{eqnarray} \omega=d_hf=\frac{\delta f}{\delta x^i}dx^i \label{eq9} \end{eqnarray} is a basic $1$-form on $T_0M$. Using formula \eqref{eq5}, we obtain \begin{eqnarray} 0=d_Rf=d^2_hf=d_h(d_h f) = \frac{1}{2}\left(\frac{\partial \omega_i}{\partial x^j} - \frac{\partial \omega_j}{\partial x^i} \right) dx^i\wedge dx^j=d(d_hf). \label{eq10} \end{eqnarray} It follows that the basic $1$-form $d_hf \in \Lambda^1(M)$ is closed and hence it is locally exact.
Therefore, there exists a function $a$, which is locally defined on $M$, such that \begin{eqnarray} d_hf=da = d_ha. \label{dhaf} \end{eqnarray} We will prove now that the function \begin{eqnarray} F=\exp(f-a),\label{expf} \end{eqnarray} locally defined on $T_0M$, is a Finsler function of scalar flag curvature, whose geodesic spray is the given spray $S$. Depending on the domain of the two functions $f$ and $a$, the function $F$ might be a conic pseudo-Finsler function. From formula \eqref{eq3}, we have that $ \mathbb{C}(F)= \exp(f-a) \mathbb{C}(f)=F$, which means that $F$ is $1$-homogeneous. Using formula \eqref{dhaf}, we obtain that \begin{eqnarray} d_hF= \exp(f-a) d_h(f-a)=0. \label{dhfinsler} \end{eqnarray} The semi-basic $1$-form $\alpha/\rho$, which is given by formula \eqref{radjf}, can be expressed in terms of the function $F$, given by formula \eqref{expf}, as follows \begin{eqnarray*} \frac{\alpha}{\rho}=\frac{d_JF}{F}. \label{far} \end{eqnarray*} We use now formula \eqref{dhfinsler} and obtain \begin{eqnarray} d\left(\frac{\alpha}{\rho}\right)+ 2i_{\mathbb F}\frac{\alpha}{\rho} \wedge \frac{\alpha}{\rho} = \frac{1}{2F^2} dd_JF^2. \label{symplectic} \end{eqnarray} The last condition of the theorem assures that $dd_JF^2$ is a symplectic form and hence $F$ is a Finsler function. Due to formula \eqref{dhfinsler}, we obtain that $S$ is the geodesic spray of the Finsler function $F$. To complete the proof, we have to show now that $F$ has non-vanishing scalar flag curvature. Since the Finsler function $F$ is given by formula \eqref{expf}, we have that $F>0$ on $T_0M$ and we may consider the function \begin{eqnarray} \kappa=\frac{\rho}{F^2}. \label{krf} \end{eqnarray} It follows that the semi-basic $1$-form $\alpha$ is given by \begin{eqnarray} \alpha=\frac{\rho}{F}d_JF = \kappa F d_JF. \label{akf}\end{eqnarray} Since the Ricci scalar does not vanish, it follows that the function $\kappa$ has the same property.
The last two formulae \eqref{krf} and \eqref{akf} show that the Jacobi endomorphism $\Phi$, of the geodesic spray $S$ of the Finsler function $F$, is given by formula \eqref{fscalar}. Therefore, the Finsler function $F$ has non-vanishing scalar flag curvature $\kappa$. \end{proof} We can replace the regularity condition iv) of Theorem \ref{scalar_flag} by a weaker condition and require that $\operatorname{rank}\left( d(\alpha/\rho) + 2i_{\mathbb F}\alpha/\rho \wedge \alpha/\rho \right) \neq 0$ on some conical region in $T_0M$. In this case the theorem provides a characterization of sprays metrizable by conic pseudo-, or degenerate Finsler functions. We consider two examples of such sprays in Section \ref{sec:examples}. For dimensions greater than two, Theorem \ref{scalar_flag} does not address the Finsler metrizability problem in its most general context. The cases that are not covered by this theorem refer to sprays that are metrizable by Finsler functions which do not have scalar flag curvature. However, in the $2$-dimensional case, Theorem \ref{scalar_flag} covers the Finsler metrizability problem in the most general case. This is due to the fact that any $2$-dimensional spray is isotropic and therefore the Finsler metrizability problem is equivalent to the metrizability by a Finsler function of scalar flag curvature. For the two-dimensional case, in \cite{Berwald41}, Berwald provides necessary and sufficient conditions, in terms of the curvature scalars, such that the extremals of a Finsler space are rectilinear. The importance of characterizing sprays that are metrizable by Finsler functions of scalar flag curvature was discussed recently in \cite{CDT12}, since it allows one to ``construct all systems of ODEs with vanishing Wilczynski invariants''.
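For completeness, the computation behind the identification of condition iv) with the symplectic condition on $dd_JF^2$, used in both parts of the proof of Theorem \ref{scalar_flag}, can be expanded as follows (using $\alpha/\rho=d_JF/F$ and $i_{\mathbb F}(\alpha/\rho)=dF/F$, which holds since $d_hF=0$):

```latex
\begin{eqnarray*}
dd_JF^2 &=& d\left(2F\, d_JF\right) = 2\, dF\wedge d_JF + 2F\, dd_JF,\\
d\left(\frac{\alpha}{\rho}\right) &=& d\left(\frac{d_JF}{F}\right)
 = \frac{1}{F}\, dd_JF - \frac{1}{F^2}\, dF\wedge d_JF,\\
d\left(\frac{\alpha}{\rho}\right) + 2\, i_{\mathbb F}\frac{\alpha}{\rho}\wedge \frac{\alpha}{\rho}
 &=& \frac{1}{F}\, dd_JF + \frac{1}{F^2}\, dF\wedge d_JF
 = \frac{1}{2F^2}\, dd_JF^2.
\end{eqnarray*}
```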
\section{Hilbert's fourth problem} \label{sec:Hilbert} ``Hilbert's fourth problem asks to construct and study the geometries in which the straight line segment is the shortest connection between two points'', \cite{Alvarez05}. Alternatively, the problem can be reformulated as follows: ``given a domain $\Omega \subset \mathbb{R}^n$, determine all (Finsler) metrics on $\Omega$ whose geodesics are straight lines'', \cite[p.191]{Shen01}. These Finsler metrics are projectively flat and can be studied using different techniques, \cite{Crampin11, CMS13, Shen03}. All such Finsler functions have constant or scalar flag curvature. Therefore, we can use the conditions of \cite[Thm. 4.1]{BM13} and Theorem \ref{scalar_flag} to test when a projectively flat spray is Finsler metrizable. For such sprays we use the algorithms provided in the proofs of \cite[Thm. 4.1]{BM13} and Theorem \ref{scalar_flag} to construct solutions to Hilbert's fourth problem. We start with $S_0$, the flat spray on some domain $\Omega \subset \mathbb{R}^n$. A projective deformation $S=S_0-2P\mathbb{C}$ leads to a new spray that is isotropic. In the case that the spray $S$ satisfies the metrizability tests of either \cite[Thm. 4.1]{BM13} or Theorem \ref{scalar_flag}, then $S$ is the geodesic spray of a Finsler function of constant or scalar flag curvature. In this way we provide a method to construct Finsler functions of constant or scalar flag curvature with rectilinear geodesics. Consider a domain $\Omega\subset \mathbb{R}^n$ and let $S_0\in \mathfrak{X}(\Omega \times \mathbb{R}^n)$ be the flat spray. We will study now when a projective deformation \begin{eqnarray} S=S_0 - 2P\mathbb{C}=y^i\frac{\partial}{\partial x^i} - 2Py^i\frac{\partial}{\partial y^i}, \label{pfspray} \end{eqnarray} for a $1$-homogeneous function $P \in C^{\infty}(\Omega \times \mathbb{R}^n\setminus\{0\})$, leads to a spray $S$ that is metrizable by a Finsler function $F$ of constant flag curvature.
Such a Finsler function $F$ will then be a solution to Hilbert's fourth problem. Using formula \cite[(4.8)]{BM12a}, the Jacobi endomorphism of the new spray $S$ is given by \begin{eqnarray} \Phi = (P^2 - S_0P) J - (Pd_JP + d_J(S_0P) - 3d_{h_0}P)\otimes \mathbb{C}. \label{pp0} \end{eqnarray} It follows that the spray $S$ is isotropic, with Ricci scalar $\rho$ and semi-basic $1$-form $\alpha$ given by: \begin{eqnarray} \rho=P^2 - S_0P, \ \alpha = Pd_JP + d_J(S_0P) - 3d_{h_0}P. \label{raflat} \end{eqnarray} From the above formula it follows that \begin{eqnarray} d_J\alpha =-3d_Jd_{h_0}P= 3d_{h_0}d_JP. \label{dja0}\end{eqnarray} Using formula \cite[(4.8)]{BM12a}, the corresponding horizontal projectors of the two sprays $S$ and $S_0$ are related by \begin{eqnarray} h=h_0-PJ - d_JP\otimes \mathbb{C}. \label{hh0} \end{eqnarray} We use the fact that $\mathbb{C}(P^2-S_0P)=2 (P^2-S_0P)$, as well as the formulae \eqref{raflat} and \eqref{hh0}, to obtain \begin{eqnarray} d_h\rho=d_{h_0}(P^2-S_0P) - Pd_J\rho - 2\rho d_JP. \label{dhr0} \end{eqnarray} In Subsection \ref{subsec:h4cc} we will use the conditions of \cite[Thm. 4.1]{BM13} to test if the spray $S$, given by formula \eqref{pfspray}, is metrizable by a Finsler function of constant flag curvature. In Subsection \ref{subsec:h4sc} we will use the conditions of Theorem \ref{scalar_flag} to test if the spray $S$ is metrizable by a Finsler function of scalar flag curvature. In each subsection, we show how to construct examples of sprays that are metrizable by such Finsler functions. \subsection{Solutions to Hilbert's fourth problem by Finsler functions of constant flag curvature} \label{subsec:h4cc} The projectively flat spray $S$, given by formula \eqref{pfspray}, is isotropic; the Ricci scalar $\rho$ and the semi-basic $1$-form $\alpha$ are given by formulae \eqref{raflat}. According to \cite[Thm.
4.1]{BM13}, the spray $S$ is metrizable by a Finsler function of constant flag curvature if and only if the following three conditions are satisfied: \begin{itemize} \item[C1)] $d_J\alpha=0$; \item[C2)] $d_h\rho=0$; \item[C3)] $\operatorname{rank}(dd_J\rho)=2n$. \end{itemize} We now study the first condition $C1)$. Since the spray $S_0$ is flat, it follows that $R_0=[h_0,h_0]/2=0$ and therefore $d^2_{h_0}=0$. Using a Poincar\'e-type lemma for the differential operator $d_{h_0}$, together with formula \eqref{dja0}, it follows that the condition $C1)$ is satisfied if and only if there exists a locally defined, $0$-homogeneous, smooth function $g$ on $\Omega \times \mathbb{R}^n\setminus\{0\}$ such that \begin{eqnarray} d_JP=d_{h_0}g. \label{c1}\end{eqnarray} From the above formula, by applying the interior product $i_{S_0}$ to both sides, we obtain \begin{eqnarray} P=\mathbb{C}(P)=i_{S_0}d_JP=i_{S_0}d_{h_0}g=S_0(g). \label{psog} \end{eqnarray} In view of this formula, we obtain that the Ricci scalar, $\rho$, in formula \eqref{raflat}, can be expressed as follows: \begin{eqnarray} \rho = \left(S_0(g)\right)^2 - S^2_0(g). \label{rhosog} \end{eqnarray} Using formula \eqref{dhr0}, as well as the above formulae, we obtain that the second condition $C2)$ is satisfied if and only if \begin{eqnarray*} d_{h_0}\rho - S_0(g) d_J\rho - 2\rho d_{h_0}g=0. \end{eqnarray*} We can rewrite the above formula, which is equivalent to the condition $C2)$, as follows: \begin{eqnarray} d_{h_0}(\exp(-2g)\rho)+\frac{1}{2} S_0(\exp(-2g))d_J\rho=0. \label{c2} \end{eqnarray} \begin{rem} \label{h4eqcc} Each solution $g$ of the above equation \eqref{c2} determines a projectively flat Finsler metric $F^2=\left|\left(S_0(g)\right)^2 - S^2_0(g)\right|$, of constant flag curvature, if and only if the regularity condition $C3)$ is satisfied. \end{rem} Next, we provide some examples of such functions $g$.
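The passage from the displayed form of condition $C2)$ to equation \eqref{c2} is a Leibniz-rule computation: one multiplies by the nonvanishing factor $\exp(-2g)$ and regroups. The following SymPy sketch (an illustration only, for $n=2$ and an arbitrary smooth function $g$) confirms the identity componentwise; here $d_{h_0}$ and $d_J$ act on functions as $(\partial f/\partial x^i)\,dx^i$ and $(\partial f/\partial y^i)\,dx^i$, which is all the computation uses.

```python
# SymPy check (n = 2) of the rewriting of condition C2) as equation (c2):
#   d_{h_0}(exp(-2g) rho) + (1/2) S_0(exp(-2g)) d_J rho
#     = exp(-2g) [ d_{h_0} rho - S_0(g) d_J rho - 2 rho d_{h_0} g ],
# componentwise in dx^i, for an arbitrary function g(x, y).
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
xs, ys = (x1, x2), (y1, y2)
g = sp.Function('g')(x1, x2, y1, y2)

def S0(f):                            # flat spray as a derivation: y^i d/dx^i
    return sum(yi*sp.diff(f, xi) for xi, yi in zip(xs, ys))

rho = S0(g)**2 - S0(S0(g))            # Ricci scalar, formula (rhosog)

for i in range(2):
    dh0 = lambda f: sp.diff(f, xs[i])     # i-th component of d_{h_0}
    dJ = lambda f: sp.diff(f, ys[i])      # i-th component of d_J
    lhs = dh0(sp.exp(-2*g)*rho) + sp.Rational(1, 2)*S0(sp.exp(-2*g))*dJ(rho)
    rhs = sp.exp(-2*g)*(dh0(rho) - S0(g)*dJ(rho) - 2*rho*dh0(g))
    assert sp.expand(lhs - rhs) == 0
```

The identity holds for any $g$; the hypotheses on $g$ (homogeneity, equation \eqref{c2} itself) only enter when selecting particular solutions.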
\subsubsection{Example} \label{ssc:ex1} Consider the open disk $\Omega=\{x\in \mathbb{R}^n, |x|<1\}$, the function $g(x)=-\ln \sqrt{1-|x|^2}$, and the projectively flat spray $S=S_0-2g^c\mathbb{C}\in \mathfrak{X}(\Omega \times \mathbb{R}^n)$. The particular form of the projective factor $P(x,y)=g^c(x,y)=S_0(g)=y^i \partial g/\partial x^i$ ensures that the function $g$ is a solution of the equation \eqref{c1}, which means that the condition $C1)$ is satisfied. For the spray $S$, the Ricci scalar given by formula \eqref{rhosog} has the following expression \begin{eqnarray} \rho(x,y)=-\frac{|y|^2(1-|x|^2)+<x,y>^2}{(1-|x|^2)^2}. \label{rhoklein} \end{eqnarray} Since the function $g$ is a solution of the equation \eqref{c2} it follows that the condition $C2)$ is satisfied. It remains to check the regularity condition $C3)$. By a direct computation we have $dd_J\rho = 2g_{ij}\delta y^i \wedge dx^j$, where \begin{eqnarray} g_{ij} = \frac{\partial^2 g}{\partial x^i\partial x^j} - \frac{\partial g}{\partial x^i} \frac{\partial g}{\partial x^j} = \frac{1}{1-|x|^2}\left(\delta_{ij} + \frac{x_ix_j}{1-|x|^2}\right), \end{eqnarray} is the Klein metric on the unit ball, see \cite[Example 11.3.1]{Shen01}. Therefore, the projectively flat spray $S$ is the geodesic spray of the Klein metric, \begin{eqnarray} F^2(x,y)=-\rho(x,y)= \frac{|y|^2(1-|x|^2)+<x,y>^2}{(1-|x|^2)^2}, \label{fklein} \end{eqnarray} which has constant flag curvature $\kappa=\rho/F^2=-1$. \subsubsection{Example} \label{ssc:ex2} If we consider the function $g(x)=-\ln\sqrt{1+|x|^2}$, a solution of the equation \eqref{c2}, we obtain that the spray $S=S_0-2g^c\mathbb{C}\in \mathfrak{X}(\mathbb{R}^n\times \mathbb{R}^n)$ is metrizable by the following metric on $\mathbb{R}^n$, \begin{eqnarray} F^2=(g^c)^2-S_0(g^c)=\frac{|y|^2(1+|x|^2) - <x,y>^2}{(1+|x|^2)^2}, \label{fpos} \end{eqnarray} of constant flag curvature $\kappa=1$, \cite[Example 11.3.2]{Shen01}.
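Both examples can be checked symbolically. The SymPy sketch below (an illustration, for $n=2$) recomputes $\rho=(S_0(g))^2-S_0^2(g)$ and the tensor $g_{ij}$ from the stated functions $g$, and compares them with the closed-form expressions above; in the second example $\rho>0$ and $F^2=\rho$.

```python
# SymPy sketch (n = 2): Ricci scalars and metric tensor of Examples 1 and 2.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
xs, ys = (x1, x2), (y1, y2)
r2 = x1**2 + x2**2                    # |x|^2
ny2 = y1**2 + y2**2                   # |y|^2
xy = x1*y1 + x2*y2                    # <x, y>

def S0(h):                            # flat spray as a derivation: y^i d/dx^i
    return sum(yi*sp.diff(h, xi) for xi, yi in zip(xs, ys))

# Example 1: g = -ln sqrt(1 - |x|^2); rho as in (rhoklein)
g = -sp.log(sp.sqrt(1 - r2))
rho = S0(g)**2 - S0(S0(g))
assert sp.simplify(rho + (ny2*(1 - r2) + xy**2)/(1 - r2)**2) == 0

# g_ij = d2g/dxi dxj - (dg/dxi)(dg/dxj) is the Klein metric on the unit disk
for i in range(2):
    for j in range(2):
        gij = sp.diff(g, xs[i], xs[j]) - sp.diff(g, xs[i])*sp.diff(g, xs[j])
        klein = (sp.KroneckerDelta(i, j) + xs[i]*xs[j]/(1 - r2))/(1 - r2)
        assert sp.simplify(gij - klein) == 0

# Example 2: g = -ln sqrt(1 + |x|^2); here rho > 0 and F^2 = rho, formula (fpos)
g2 = -sp.log(sp.sqrt(1 + r2))
rho2 = S0(g2)**2 - S0(S0(g2))
assert sp.simplify(rho2 - (ny2*(1 + r2) - xy**2)/(1 + r2)**2) == 0
```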
\subsection{Solutions to Hilbert's fourth problem by Finsler functions of scalar flag curvature} \label{subsec:h4sc} In this subsection, we extend the question addressed in the previous subsection from constant flag curvature to scalar flag curvature. We therefore consider a domain $\Omega\subset \mathbb{R}^n$ and let $S_0\in \mathfrak{X}(\Omega \times \mathbb{R}^n)$ be the flat spray. We will provide an example of a projective deformation $S=S_0 - 2P\mathbb{C}$, for a $1$-homogeneous function $P \in C^{\infty}(\Omega \times \mathbb{R}^n\setminus\{0\})$, which leads to a spray metrizable by a Finsler function of scalar flag curvature. Such a projectively flat Finsler function will therefore be a solution to Hilbert's fourth problem. As we have seen already, the spray $S=S_0 - 2P\mathbb{C}$ is isotropic; the Ricci scalar, $\rho$, and the semi-basic $1$-form $\alpha$ are given by formulae \eqref{raflat}. Since the spray $S$ is isotropic, according to Theorem \ref{scalar_flag}, it follows that $S$ is Finsler metrizable, which is equivalent to being metrizable by a Finsler function of scalar flag curvature, if and only if the following three conditions are satisfied: \begin{itemize} \item[S1)] $d_J(\alpha/\rho)=0$; \item[S2)] $\mathcal{D}_{hX}(\alpha/\rho)=0$; \item[S3)] the regularity condition iv) of Theorem \ref{scalar_flag}. \end{itemize} Next we provide an example of a projective factor $P$, which is very similar in form to those considered in the previous two examples. However, for the function $P$ in the next example, the projectively flat spray $S$ satisfies the conditions $S1)$, $S2)$, and $S3)$ and hence is metrizable by a Finsler function of scalar flag curvature.
\subsubsection{Example} \label{ssc:ex3} For the open disk $\Omega=\{x\in \mathbb{R}^n, |x|<1\}$ in $\mathbb{R}^n$, we consider the function $g \in C^{\infty}(\Omega \times (\mathbb{R}^n\setminus\{0\}))$, and the projectively flat spray $S\in \mathfrak{X}(\Omega \times (\mathbb{R}^n\setminus\{0\}))$, given by \begin{eqnarray} g(x,y)=\ln \sqrt{|y|+<x,y>}, \quad S=S_0-2S_0(g)\mathbb{C}=y^i\frac{\partial}{\partial x^i} - \frac{|y|^2y^i}{|y|+<x,y>} \frac{\partial}{\partial y^i}. \label{ex3g} \end{eqnarray} The projective factor $P=S_0(g)$ is given by \begin{eqnarray*} P(x,y)=\frac{1}{2}\frac{|y|^2}{|y|+<x,y>}. \end{eqnarray*} Using the first formula in \eqref{raflat} we obtain that the Ricci scalar is given by \begin{eqnarray} \rho(x,y)=3P^2(x,y) = \frac{3}{4}\frac{|y|^4}{(|y|+<x,y>)^2}. \label{ex3rho} \end{eqnarray} Using the above formula for $\rho$ and the second formula in \eqref{raflat} we obtain that the semi-basic $1$-form $\alpha$ is given by \begin{eqnarray} \alpha = -3(d_{h_0}P + Pd_JP)= \frac{3|y|^2}{4(|y|+<x,y>)^3}(y_i|y| + x_i|y|^2)dx^i. \label{ex3alpha} \end{eqnarray} Using formulae \eqref{ex3rho} and \eqref{ex3alpha} it follows that the semi-basic $1$-form $\alpha/\rho$ is $d_J$-closed, since \begin{eqnarray} \frac{\alpha}{\rho} = \frac{1}{|y|+<x,y>}\left(\frac{y_i}{|y|} + x_i\right)dx^i = d_J f, \quad f(x,y)=\ln(|y|+<x,y>). \label{ex3ar} \end{eqnarray} From the above formula we have that the first condition $S1)$ is satisfied. As we have shown in the proof of Theorem \ref{scalar_flag}, the second condition $S2)$ is equivalent to the fact that $d_hf$ is a basic $1$-form on $\Omega$. Using formula \eqref{hh0} for the horizontal projector $h$ and expression \eqref{ex3ar} for the function $f$, we have that $d_hf=0$. The regularity condition $S3)$ is also satisfied and hence, by formula \eqref{expf}, we obtain that \begin{eqnarray} F(x,y)=\exp f(x,y)=|y|+<x,y>, \label{ex3finsler} \end{eqnarray} is a Finsler function.
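As a symbolic sanity check on this example (SymPy sketch, $n=2$), the projective factor $P$, the Ricci scalar $\rho=P^2-S_0P$, and the components of $\alpha/\rho=d_Jf$ can all be recomputed directly from $g$.

```python
# SymPy sketch (n = 2): symbolic verification of the Numata-type example.
import sympy as sp

# positive symbols keep Q = |y| + <x,y> and |y| well behaved symbolically;
# the verified identities are algebraic and independent of this restriction
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', positive=True)
xs, ys = (x1, x2), (y1, y2)
ny = sp.sqrt(y1**2 + y2**2)          # |y|
Q = ny + x1*y1 + x2*y2               # |y| + <x, y>

def S0(h):                           # flat spray as a derivation: y^i d/dx^i
    return sum(yi*sp.diff(h, xi) for xi, yi in zip(xs, ys))

g = sp.log(sp.sqrt(Q))
P = S0(g)                            # projective factor
assert sp.simplify(P - ny**2/(2*Q)) == 0

rho = P**2 - S0(P)                   # first formula in (raflat)
assert sp.simplify(rho - 3*P**2) == 0      # rho = 3 P^2 = (3/4)|y|^4/Q^2

# alpha/rho = d_J f with f = ln(|y| + <x,y>); d_J f has components df/dy^i
f = sp.log(Q)
for i in range(2):
    assert sp.simplify(sp.diff(f, ys[i]) - (ys[i]/ny + xs[i])/Q) == 0
```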
The function $F$ is a Finsler function of Numata type, see \cite[3.9. B]{BCS00}. The Finsler function $F$ has scalar flag curvature, which is given by formula \eqref{krf}, \begin{eqnarray} \kappa(x,y)=\frac{\rho}{F^2} = \frac{3}{4}\frac{|y|^4}{(|y|+<x,y>)^4}. \label{ex3sf}\end{eqnarray} The geodesics of the Finsler function $F$, given by formula \eqref{ex3finsler}, are segments of straight lines in the open disk $\Omega$. As expected from the recent result of \'Alvarez-Paiva in \cite{Alvarez13}, the non-reversible Finsler function $F$ is the sum of a reversible projective metric and an exact $1$-form. \section{Examples} \label{sec:examples} In the previous subsection, we have already seen an example, in Subsubsection \ref{ssc:ex3}, of a spray that is metrizable by a Finsler function of scalar flag curvature. We have tested the Finsler metrizability of this spray using the conditions of Theorem \ref{scalar_flag}. In this section we will use again the conditions of Theorem \ref{scalar_flag} to test whether or not some other examples of sprays are Finsler metrizable. We will also see that the regularity condition iv) of Theorem \ref{scalar_flag} can be relaxed, so that we can search for sprays metrizable by conic pseudo-Finsler or degenerate Finsler functions. \subsection{A spray metrizable by a conic pseudo-Finsler function} Consider the following affine spray on a domain $M\subset \mathbb{R}^2$ on which two smooth functions $\phi$ and $\psi$ are defined, \begin{eqnarray*} S=y^1\frac{\partial}{\partial x^1} + y^2\frac{\partial}{\partial x^2} - \phi(x^1, x^2) (y^1)^2 \frac{\partial}{\partial y^1} - \psi(x^1, x^2) (y^2)^2 \frac{\partial}{\partial y^2}. \end{eqnarray*} Using formulae \eqref{localphi}, the local components of the corresponding Jacobi endomorphism are given by \begin{eqnarray*} R^1_1= -\phi_{x^2}y^1y^2, \quad R^1_2 = \phi_{x^2}(y^1)^2, \quad R^2_1 = \psi_{x^1} (y^2)^2, \quad R^2_2= -\psi_{x^1}y^1y^2 .
\end{eqnarray*} According to formula \eqref{ricr}, the Ricci scalar is given by \begin{eqnarray*} \rho= R^1_1+R^2_2=-y^1y^2(\phi_{x^2} + \psi_{x^1}). \end{eqnarray*} The case when $\phi_{x^2} =- \psi_{x^1} \neq 0$ has been studied in Example 8.2.4 from \cite{Shen01}. In this case, the Ricci scalar is $\rho=0$ while $\Phi\neq 0$ and hence $S$ is not Finsler metrizable. We now turn to the case $\rho\neq 0$. In this case, using the four conditions of Theorem \ref{scalar_flag}, we will prove that $S$ is Finsler metrizable if and only if there exists a constant $c\in \mathbb{R}\setminus\{0,1\}$, such that \begin{eqnarray} c\phi_{x^2}=(1-c)\psi_{x^1}. \label{exfpc} \end{eqnarray} Since $S$ is a spray on a $2$-dimensional manifold, it follows that it is isotropic and hence the first condition of Theorem \ref{scalar_flag} is satisfied. The two components of the semi-basic $1$-form $\alpha=\alpha_1dx^1+\alpha_2 dx^2$, which appear in the expression \eqref{isophi} of the Jacobi endomorphism, are given by \cite[(4.4)]{BM13}: \begin{eqnarray*} \alpha_1=\frac{R^2_2}{y^1}=-\psi_{x^1} y^2, \quad \alpha_2=\frac{R^1_1}{y^2}=-\phi_{x^2} y^1. \end{eqnarray*} The last three conditions of Theorem \ref{scalar_flag} refer to the semi-basic $1$-form $\alpha/\rho$, which is given by \begin{eqnarray*} \frac{\alpha}{\rho} = \frac{\psi_{x^1}}{(\phi_{x^2} + \psi_{x^1})y^1}dx^1 + \frac{\phi_{x^2}}{(\phi_{x^2} + \psi_{x^1})y^2}dx^2. \label{exfpar} \end{eqnarray*} For the second condition of Theorem \ref{scalar_flag}, one can immediately check that $d_J(\alpha/\rho)=0$ and therefore there exists a function $f$ defined on the conic region $A=\{(x^1,x^2, y^1, y^2)\in TM, y^1>0, y^2>0\}$ of $T_0M$, such that $\alpha/\rho=d_Jf$. The function $f$ is given by \begin{eqnarray} f(x,y)=\frac{1}{\phi_{x^2} + \psi_{x^1}}\left( \psi_{x^1}\ln y^1 + \phi_{x^2}\ln y^2\right).
\label{exfpf} \end{eqnarray} For the third condition of Theorem \ref{scalar_flag}, we have to test whether $d_hf$ is a basic $1$-form. For the spray $S$, the local coefficients $N^i_j$ of the nonlinear connection are given by \begin{eqnarray*} N^1_1=\phi y^1, \quad N^1_2=N^2_1=0, \quad N^2_2=\psi y^2. \end{eqnarray*} It follows that \begin{eqnarray*} d_hf = \frac{\delta f}{\delta x^1}dx^1 + \frac{\delta f}{\delta x^2}dx^2, \quad \frac{\delta f}{\delta x^1} = \frac{\partial f}{\partial x^1} - \frac{\phi \psi_{x^1}}{\phi_{x^2} + \psi_{x^1}}, \quad \frac{\delta f}{\delta x^2} = \frac{\partial f}{\partial x^2} - \frac{\psi \phi_{x^2}}{\phi_{x^2} + \psi_{x^1}}. \end{eqnarray*} Therefore $d_hf$ is a basic $1$-form if and only if there exist two real constants $c_1$ and $c_2$ such that \begin{eqnarray} \frac{\psi_{x^1}}{\phi_{x^2} + \psi_{x^1}} = c_1, \quad \frac{\phi_{x^2}}{\phi_{x^2} + \psi_{x^1}}=c_2. \label{exfpc12} \end{eqnarray} Expression \eqref{exfpf} and the condition $\mathbb{C}(f)=1$ imply $c_1+c_2=1$. Formula \eqref{exfpc12} is equivalent to formula \eqref{exfpc}, for $c=c_1$ and $c_2=1-c$. We will show that, under the assumption \eqref{exfpc}, the last condition of Theorem \ref{scalar_flag} is satisfied. We have that \begin{eqnarray*} \frac{\alpha}{\rho} = \frac{c}{y^1}dx^1 + \frac{1-c}{y^2}dx^2 \end{eqnarray*} and therefore \begin{eqnarray*} d\left(\frac{\alpha}{\rho}\right) + 2i_{\mathbb F} \frac{\alpha}{\rho}\wedge \frac{\alpha}{\rho} = \frac{c(2c-1)}{(y^1)^2} \delta y^1 \wedge dx^1 + \frac{c(2-2c)}{y^1y^2}(\delta y^1 \wedge dx^2 + \delta y^2 \wedge dx^1) + \frac{(1-c)(1-2c)}{(y^2)^2} \delta y^2\wedge dx^2, \end{eqnarray*} which is non-degenerate and hence a symplectic form on $A\subset T_0M$. We have shown that the spray $S$ is Finsler metrizable if and only if the condition \eqref{exfpc} is satisfied. We now show how to construct the Finsler function that metrizes the spray.
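Formula \eqref{localphi} is not reproduced in this section; the SymPy sketch below checks the components $R^i_j$ stated above using the standard local expression for the Jacobi endomorphism of a spray $S=y^i\partial/\partial x^i - 2G^i\partial/\partial y^i$ (which we assume agrees with our conventions), namely $R^i_j = 2\partial G^i/\partial x^j - y^k\partial^2 G^i/\partial x^k\partial y^j + 2G^k\partial^2 G^i/\partial y^k\partial y^j - (\partial G^i/\partial y^k)(\partial G^k/\partial y^j)$.

```python
# SymPy check of the Jacobi endomorphism components R^i_j of the affine spray,
# using the standard local formula (assumed to match formula (localphi)):
#   R^i_j = 2 dG^i/dx^j - y^k d2G^i/(dx^k dy^j) + 2 G^k d2G^i/(dy^k dy^j)
#           - (dG^i/dy^k)(dG^k/dy^j)
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
xs, ys = (x1, x2), (y1, y2)
phi = sp.Function('phi')(x1, x2)
psi = sp.Function('psi')(x1, x2)

# spray coefficients: S = y^i d/dx^i - 2 G^i d/dy^i
G = (phi*y1**2/2, psi*y2**2/2)

def R(i, j):
    val = 2*sp.diff(G[i], xs[j])
    val -= sum(ys[k]*sp.diff(G[i], xs[k], ys[j]) for k in range(2))
    val += sum(2*G[k]*sp.diff(G[i], ys[k], ys[j]) for k in range(2))
    val -= sum(sp.diff(G[i], ys[k])*sp.diff(G[k], ys[j]) for k in range(2))
    return sp.expand(val)

assert sp.simplify(R(0, 0) + sp.diff(phi, x2)*y1*y2) == 0   # R^1_1
assert sp.simplify(R(0, 1) - sp.diff(phi, x2)*y1**2) == 0   # R^1_2
assert sp.simplify(R(1, 0) - sp.diff(psi, x1)*y2**2) == 0   # R^2_1
assert sp.simplify(R(1, 1) + sp.diff(psi, x1)*y1*y2) == 0   # R^2_2
```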
To simplify the calculations, we choose the constant $c=1/2$ and the functions $\phi(x^1, x^2)=\psi(x^1, x^2)=-2g'(x^1+x^2)/g(x^1+x^2)$, where $g(t)$ is a non-vanishing smooth function. In this case, one can see that the condition \eqref{exfpc} is satisfied. For this choice the basic $1$-form $d_hf$ is given by \begin{eqnarray*} d_hf= \frac{g'}{g} dx^1 + \frac{g'}{g} dx^2 = da, \quad a(x^1, x^2) = \ln g(x^1+x^2). \end{eqnarray*} According to formula \eqref{expf}, it follows that \begin{eqnarray*} F(x,y)=\exp(f-a)=\frac{\sqrt{y^1y^2}}{g(x^1+x^2)}, \end{eqnarray*} metrizes the spray $S$ for the given choice of the functions $\phi$ and $\psi$. The scalar flag curvature is given by formula \eqref{krf}, and for the above Finsler function it is \begin{eqnarray*} \kappa = \frac{\rho}{F^2}= 4(g''g-(g')^2). \end{eqnarray*} For the particular case $g(t)=t/2$ we obtain the case of constant sectional curvature $\kappa=-1$ studied in \cite[\S 5.4]{BM13}. \subsection{A spray metrizable by a degenerate Finsler function} We present now an example of a spray that is metrizable by a degenerate Finsler function of scalar flag curvature. This means that the first three conditions of Theorem \ref{scalar_flag} are satisfied, while the last one is not. On $M=\mathbb{R}^2$, consider the following system of second order ordinary differential equations: \begin{eqnarray} \frac{d^2x^1}{dt^2} + 2\frac{dx^1}{dt}\frac{dx^2}{dt}=0, \quad \frac{d^2x^2}{dt^2} - \left(\frac{dx^2}{dt}\right)^2=0. \label{systemdf} \end{eqnarray} The corresponding spray $S\in \mathfrak{X}(TM)$ is given by \begin{eqnarray*} S=y^1\frac{\partial}{\partial x^1} + y^2\frac{\partial}{\partial x^2} - 2y^1y^2\frac{\partial}{\partial y^1} + (y^2)^2\frac{\partial}{\partial y^2}.
\label{spraydf} \end{eqnarray*} Using the formulae \eqref{localphi} and \eqref{ricr}, the local components of the corresponding Jacobi endomorphism and the Ricci scalar are given by \begin{eqnarray*} R^1_1= -2(y^2)^2, \quad R^2_2= 0, \quad \rho = -2(y^2)^2. \end{eqnarray*} Since $S$ is a two-dimensional spray, it follows that it is isotropic and hence the first condition of Theorem \ref{scalar_flag} is satisfied. The semi-basic $1$-form $\alpha/\rho=(\alpha_1/\rho)\, dx^1 + (\alpha_2/\rho)\, dx^2$ has the components: \begin{eqnarray*} \frac{\alpha_1}{\rho}=\frac{R^2_2}{y^1\rho}=0, \quad \frac{\alpha_2}{\rho}=\frac{R^1_1}{y^2\rho}=\frac{1}{y^2}. \label{ardf} \end{eqnarray*} From the above formulae, one can immediately check that $d_J(\alpha/\rho)=0$ and hence the second condition of Theorem \ref{scalar_flag} is satisfied. Moreover, there exists a function $f\in C^{\infty}(T_0M)$ such that \begin{eqnarray*} \frac{\alpha}{\rho}=d_Jf, \quad \textrm{for} \ f(x,y)=\ln |y^2|. \end{eqnarray*} The third condition of Theorem \ref{scalar_flag} is satisfied if and only if $d_hf$ is a basic $1$-form. By direct calculation we have that this is true, since $d_hf=dx^2$. In fact, for $a(x^1, x^2)=x^2$, we have that $d_hf=da$. Therefore the function \begin{eqnarray*} F(x,y)=\exp(f(x,y)-a(x))=\exp(-x^2)y^2 \end{eqnarray*} is a degenerate Finsler function that metrizes the given system \eqref{systemdf}. This degenerate Finsler function has scalar flag curvature, given by formula \eqref{krf}, which in our case is \begin{eqnarray*} \kappa = \frac{\rho}{F^2}= \frac{-2}{\exp(-2x^2)}. \end{eqnarray*} It can be directly checked that any solution of the system \eqref{systemdf} is also a solution of the Euler-Lagrange equations for $F^2$. Some other non-homogeneous Lagrangian functions that metrize the system \eqref{systemdf} were determined in \cite[Ex. 7.10]{AT92}.
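The last claim can be confirmed directly: the SymPy sketch below checks that, along any solution of \eqref{systemdf}, the Euler-Lagrange expression of $L=F^2=\exp(-2x^2)(y^2)^2$ vanishes, while the $x^1$-equation is vacuous, reflecting the degeneracy of $F$.

```python
# SymPy check: solutions of (systemdf) satisfy the Euler-Lagrange equations
# of L = F^2 = exp(-2 x^2) (y^2)^2, with x^2 = X2(t) and y^2 = X2'(t).
import sympy as sp

t = sp.symbols('t')
X1, X2 = sp.Function('X1')(t), sp.Function('X2')(t)
L = sp.exp(-2*X2)*sp.diff(X2, t)**2

# Euler-Lagrange expression for the x^2 equation
EL2 = sp.diff(sp.diff(L, sp.diff(X2, t)), t) - sp.diff(L, X2)

# impose the second equation of (systemdf): (x^2)'' = ((x^2)')^2
EL2_on_sol = EL2.subs(sp.diff(X2, t, 2), sp.diff(X2, t)**2)
assert sp.simplify(EL2_on_sol) == 0

# the x^1 equation is vacuous: L involves neither X1 nor X1' (degeneracy of F)
assert sp.diff(L, sp.diff(X1, t)) == 0 and sp.diff(L, X1) == 0
```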
\subsection{A spray that is not Finsler metrizable} We now consider an example of a spray that is not Finsler metrizable, due to the fact that the third condition of Theorem \ref{scalar_flag} is not satisfied. On $M=\mathbb{R}^2$, we consider the following system of second order ordinary differential equations: \begin{eqnarray} \frac{d^2x^1}{dt^2} + \left(\frac{dx^1}{dt}\right)^2 + \left(\frac{dx^2}{dt}\right)^2=0, \quad \frac{d^2x^2}{dt^2} +4 \frac{dx^1}{dt}\frac{dx^2}{dt}=0. \label{systemnf} \end{eqnarray} The above system can be identified with a spray $S\in \mathfrak{X}(TM)$, which is given by \begin{eqnarray*} S=y^1\frac{\partial}{\partial x^1} + y^2\frac{\partial}{\partial x^2} - \left((y^1)^2+(y^2)^2\right)\frac{\partial}{\partial y^1} - 4y^1y^2\frac{\partial}{\partial y^2}. \label{spraynf} \end{eqnarray*} We make use of formulae \eqref{localphi} and \eqref{ricr} to compute the local components of the corresponding Jacobi endomorphism and the Ricci scalar, which are given by \begin{eqnarray*} R^1_1= -(y^2)^2, \quad R^2_2= -2(y^1)^2, \quad \rho = -2(y^1)^2 - (y^2)^2. \end{eqnarray*} Again, the spray $S$ is two-dimensional and hence isotropic, which means that the first condition of Theorem \ref{scalar_flag} is satisfied. The other conditions refer to the semi-basic $1$-form $\alpha/\rho=(\alpha_1/\rho)\, dx^1 + (\alpha_2/\rho)\, dx^2$, whose components are given by: \begin{eqnarray*} \frac{\alpha_1}{\rho}=\frac{R^2_2}{y^1\rho}=\frac{2y^1}{2(y^1)^2 + (y^2)^2}, \quad \frac{\alpha_2}{\rho}=\frac{R^1_1}{y^2\rho}=\frac{y^2}{2(y^1)^2 + (y^2)^2}. \label{arnf} \end{eqnarray*} From the above formulae, it follows that $d_J(\alpha/\rho)=0$, which means that the second condition of Theorem \ref{scalar_flag} is satisfied. Therefore, there exists a function $f\in C^{\infty}(T_0M)$ such that \begin{eqnarray*} \frac{\alpha}{\rho}=d_Jf, \quad \textrm{for} \ f(x,y)=\frac{1}{2}\ln(2(y^1)^2 + (y^2)^2).
\end{eqnarray*} For the function $f$ above, one can check that $d_hf$ is not a basic $1$-form. It then follows that the third condition of Theorem \ref{scalar_flag} is not satisfied and consequently the spray is not Finsler metrizable. The system \eqref{systemnf} has been considered in \cite[Ex. 7.2]{AT92}, where it has been shown, using different techniques, that it is not metrizable. \subsection*{Acknowledgments} The work of I.B. has been supported by the Romanian National Authority for Scientific Research, CNCS UEFISCDI, project number PN-II-ID-PCE-2012-4-0131. The work of Z.M. has been supported by the Hungarian Scientific Research Fund (OTKA) Grant K67617. \end{document}
\begin{document} \begin{abstract} We show that the blowup of an extremal K\"ahler manifold at a relatively stable point in the sense of GIT admits an extremal metric in K\"ahler classes that make the exceptional divisor sufficiently small, extending a result of Arezzo-Pacard-Singer. We also study the K-polystability of these blowups, sharpening a result of Stoppa in this case. As an application we show that the blowup of a K\"ahler-Einstein manifold at a point admits a constant scalar curvature K\"ahler metric in classes that make the exceptional divisor small, if it is K-polystable with respect to these classes. \end{abstract} \title{On blowing up extremal K\"ahler manifolds} \section{Introduction} Let $(M,\omega)$ be a compact K\"ahler manifold of dimension $m$, such that $\omega$ is an extremal metric in the sense of Calabi~\cite{Cal82}. This means that the gradient of the scalar curvature of $\omega$ is a holomorphic vector field, so important special cases are constant scalar curvature (cscK) metrics and K\"ahler-Einstein metrics. Following Arezzo-Pacard~\cite{AP06,AP09} and Arezzo-Pacard-Singer~\cite{APS06} we study the problem of constructing extremal metrics on the blowup of $M$ in one or more points, in K\"ahler classes which make the exceptional divisors sufficiently small. To state the result precisely, we make a few definitions. The condition that $\omega$ is extremal implies that the Hamiltonian vector field $X_\mathbf{s}$ corresponding to the scalar curvature $\mathbf{s}(\omega)$ is a Killing field. Let $G$ be the group of Hamiltonian isometries of $(M,\omega)$, and write $\mathfrak{g}$ for its Lie algebra. We fix a moment map \[ \mu : M \to \mathfrak{g}^*\] for the action of $G$ on $M$, such that for any vector field $X\in \mathfrak{g}$ the function $\langle \mu, X\rangle$ has zero mean on $M$.
We will also identify $\mathfrak{g}$ with its dual $\mathfrak{g}^*$ using the inner product \[ \langle X,Y\rangle = \int_M \langle \mu, X\rangle\langle \mu, Y\rangle \omega^m,\] for $X,Y\in\mathfrak{g}$, so we will think of elements in $\mathfrak{g}^*$ as vector fields. Our first main result is then as follows. \begin{thm}\label{thm:main} Choose distinct points $p_1,\ldots, p_n\in M$ and numbers $a_1,\ldots, a_n>0$ such that the vector fields $X_\mathbf{s}$ and $\sum\limits_i a_i^{m-1}\mu(p_i)$ vanish at the $p_i$. Then there exists $\varepsilon_0 > 0$ such that for $\varepsilon\in(0,\varepsilon_0)$ the blowup $Bl_{p_1,\ldots,p_n}M$ admits an extremal metric in the K\"ahler class \[ \pi^*[\omega] - \varepsilon^2\left(a_1[E_1] + \ldots + a_n[E_n]\right),\] where $E_i$ are the exceptional divisors and $\pi$ is the blowdown map to $M$. \end{thm} To compare with the earlier results, we now describe the theorem proved by Arezzo-Pacard-Singer in \cite{APS06}. As above $(M,\omega)$ is an extremal K\"ahler manifold. We choose $K$ to be any group of Hamiltonian isometries of $(M,\omega)$ such that its Lie algebra $\mathfrak{k}$ contains the vector field $X_\mathbf{s}$. Now $G$ is the group of Hamiltonian isometries commuting with $K$, and $\mathfrak{g}$ is its Lie algebra. We define $\mathfrak{g}' = \mathfrak{g}\cap \mathfrak{k}$, and define $\mathfrak{g}''$ to be the orthogonal complement, so \[ \mathfrak{g} = \mathfrak{g}' \oplus \mathfrak{g}''.\] Then the most general result in \cite{APS06} is the following. \begin{thm}[Arezzo-Pacard-Singer] \label{thm:APS} With notation as above, let $p_1,\ldots, p_n\in M$ be points where each vector field in $\mathfrak{k}$ vanishes.
Suppose that \begin{enumerate} \item[(i)](Balancing condition) we choose $a_1,\ldots,a_n>0$ such that \[ \sum_{j=1}^n a_j^{m-1}\mu(p_j) \in \mathfrak{g}'^*,\] \item[(ii)](Genericity condition) the projections of $\mu(p_1),\ldots,\mu(p_n)$ onto $\mathfrak{g}''^*$ span $\mathfrak{g}''^*$, \item[(iii)](General position condition) there is no nontrivial element of $\mathfrak{g}''$ that vanishes at $p_1,\ldots,p_n$. \end{enumerate} Then there exists $\varepsilon_0>0$ such that for all $\varepsilon\in (0,\varepsilon_0)$ there is a $K$-invariant extremal K\"ahler metric on the blowup $Bl_{p_1,\ldots,p_n}M$ whose K\"ahler class is \[ \pi^*[\omega] - \varepsilon^2\left(a_1[E_1]+ \ldots+a_n[E_n]\right), \] where the $E_i$ are the exceptional divisors. \end{thm} In addition, condition (iii) can be removed if we allow losing some control of the K\"ahler class (see \cite{APS06} for more details). Note that since the vector field $X_\mathbf{s}$, and also any vector field in $\mathfrak{g}'$, is contained in $\mathfrak{k}$, the assumptions of Theorem~\ref{thm:APS} imply those of Theorem~\ref{thm:main}. In particular we do not need conditions (ii) and (iii). Although once the number of points blown up is large enough the conditions (ii) and (iii) are satisfied generically, it is clearly of interest to obtain results that work for fewer points. We also see that condition (i) appears to be weakened, but in fact if we choose $K$ to be the largest possible group fixing the points $p_1,\ldots,p_n$, then condition (i) is equivalent to the vanishing of $\sum a_i^{m-1}\mu(p_i)$ at the points $p_i$. The new ingredient in the proof of Theorem~\ref{thm:main} is fairly simple, so we describe it here briefly, focusing on the case of blowing up just one point.
Starting with an extremal metric on $M$, in \cite{APS06} the authors try to directly construct an extremal metric on the blowup $Bl_pM$ in suitable K\"ahler classes, whereas we try to solve a slightly more general equation instead. More precisely for suitably small $\varepsilon$ we find a metric $\omega_{p,\varepsilon}$ on $Bl_pM$ in the K\"ahler class $[\omega]-\varepsilon^2[E]$ together with a vector field $h_{p,\varepsilon}\in \mathfrak{g}$ such that if the vector field $h_{p,\varepsilon}$ vanishes at the point $p$, then $\omega_{p,\varepsilon}$ is an extremal metric. So the problem becomes to analyse when $h_{p,\varepsilon}$ vanishes at $p$, but this is a finite dimensional problem. Varying $p$ we obtain a map \[ h_\varepsilon : M \to \mathfrak{g} \] for small $\varepsilon$, and the crucial point is that $h_\varepsilon$ is a perturbation of the moment map $\mu$. Then a perturbation argument shows that if $\mu(p)$ vanishes at $p$ then there is a point $q$ in the same orbit of the complexified group $G^c$ as $p$, such that $h_\varepsilon(q)$ vanishes at $q$. This means that we have an extremal metric on the blowup $Bl_qM$, but this is biholomorphic to the blowup $Bl_pM$ and this concludes the proof. The actual proof will be slightly different, since for technical reasons we will work on a suitable submanifold of $M$ instead of all of $M$. The idea of separating the problem in this way into an infinite dimensional problem that is easier to solve than the original, together with a finite dimensional ``obstruction'' problem is well known (see for example Hong~\cite{Hong02, Hong08} for a similar technique used to construct constant scalar curvature metrics on ruled manifolds). Recently there has been much work on relating the existence of extremal metrics to algebro-geometric conditions on the underlying complex manifold. This work is centered around the following conjecture. 
\begin{conj}[Yau-Tian-Donaldson]\label{conj:YTD} Let $L$ be an ample line bundle over a compact complex manifold $M$. Then there exists an extremal metric in $c_1(L)$ if and only if the pair $(M,L)$ is relatively K-polystable. \end{conj} This conjecture goes back to Yau~\cite{Yau93} in the case of K\"ahler-Einstein metrics, and the first results are due to Tian~\cite{Tian97}. Donaldson~\cite{Don02} extended the question to the case of constant scalar curvature metrics and proved the conjecture in the case of toric surfaces~\cite{Don08_1}. Two recent surveys on this topic are Thomas~\cite{Thomas06} and Phong-Sturm~\cite{PS08}. Note that it is essentially known that if an extremal metric exists then the manifold is relatively K-polystable (see Donaldson~\cite{Don05}, Stoppa~\cite{Sto08,SSz09}, Mabuchi~\cite{Mab08}), although this depends on the precise definition of stability being used. A natural problem is to verify the conjecture for the type of blowups we are considering, building on existence results like Theorems~\ref{thm:main} and \ref{thm:APS}. Let us suppose that $\omega$ is an extremal metric on $M$, and $\omega\in c_1(L)$ for some ample line bundle $L\to M$. For simplicity let us focus on the case of blowing up just one point. We want to characterize the points $p\in M$ for which the blowup $Bl_pM$ admits an extremal metric in the class $c_1(\pi^*L-\varepsilon E)$ for small rational $\varepsilon$. Note that some partial results on this question for cscK metrics on multiple blowups were obtained by Arezzo-Pacard~\cite{AP07}. The choice of moment map gives a lifting of the infinitesimal action of $G$ to the line bundle $L$. Replacing $L$ by a large power if necessary we can assume that we obtain a global action of $G$, and moreover we can extend this to an action of the complexified group $G^c$. In this case Theorem~\ref{thm:main} can be reformulated as follows.
\begin{cor} Suppose $p\in M$ is such that $X_\mathbf{s}$ vanishes at $p$, and $p$ is relatively stable for the action of $G^c$ on $M$ with respect to the polarization $L$. Then $Bl_pM$ admits an extremal metric in the class $c_1(\pi^*L-\varepsilon E)$ for sufficiently small $\varepsilon$. \end{cor} In order to verify Conjecture~\ref{conj:YTD} we need to relate the stability of $p\in M$ for the action of $G^c$ to the relative K-polystability of $Bl_pM$ with respect to the class $\pi^*L-\varepsilon E$ for small $\varepsilon$. In this direction, Stoppa~\cite{Sto10} and Della Vedova~\cite{DV08} showed that if $p$ is relatively \emph{strictly} unstable for the action of $G^c$, then $Bl_pM$ is not relatively K-stable with the polarization $\pi^*L-\varepsilon E$ for small $\varepsilon$, and so it does not admit an extremal metric in these classes. The remaining question is what happens when $p$ is strictly semistable. We now focus on the cscK case, for which we have the following. \begin{thm}\label{thm:stable} Let $(M,L)$ be a polarized manifold and fix $p\in M$. Suppose that for all sufficiently small rational $\varepsilon>0$, the blowup $Bl_pM$ is K-polystable with respect to the polarization $\pi^*L-\varepsilon E$. Then $p\in M$ is stable for the action of $G^c$ with respect to the linearization $L_\delta$ for all sufficiently small rational $\delta > 0$, where $L_\delta$ is the following, depending on the dimension: \begin{itemize} \item If $m > 2$ then $L_\delta = L+\delta K_M$, where $K_M$ is the canonical bundle. \item If $m=2$ and $K_M\cdot L \geqslant 0$ then $L_\delta = L + \delta K_M$. \item If $m=2$ and $K_M\cdot L < 0$ then $L_\delta = L - \delta K_M$. \end{itemize} \end{thm} In comparison, Stoppa's result~\cite{Sto10} implies that if $Bl_pM$ is K-polystable for the polarizations $\pi^*L-\varepsilon E$ for sufficiently small $\varepsilon$, then $p\in M$ is semistable with respect to the linearization $L$.
But if $p\in M$ is stable with respect to the linearization $L\pm\delta K_M$ for all sufficiently small $\delta > 0$, then by letting $\delta\to 0$ it follows easily that $p\in M$ is semistable with respect to $L$, so our result is slightly stronger. In fact we expect Theorem~\ref{thm:stable} to be sharp at least when $m>2$. In order to show this, we would need the following strengthening of Theorem~\ref{thm:main} in the cscK case, stated for the blowup in just one point. \begin{conj}\label{conj:stronger} Let $\dim M=m > 2$. Suppose that $M$ admits a cscK metric in $c_1(L)$, and let $p\in M$. There exist $\delta_0, \varepsilon_0>0$ such that if $\mu(p) + \delta\Delta\mu(p)=0$ for some $\delta\in(0,\delta_0)$ then for all $\varepsilon\in(0,\varepsilon_0)$ the manifold $Bl_pM$ admits a cscK metric in the K\"ahler class $c_1(\pi^*L - \varepsilon E)$. If $m=2$ then we can ask for an analogous result to hold, just using $\mu(p) \pm \delta\Delta\mu(p)$ with the sign in accordance with the signs in Theorem~\ref{thm:stable}. \end{conj} Here $\Delta\mu$ is the Laplacian of $\mu$ taken componentwise after identifying $\mathfrak{g}^*$ with $\mathbf{R}^l$ for some $l$. Note that $\mu + \delta\Delta\mu$ is a moment map for the action of $G$ on $M$ with respect to the K\"ahler form $\omega-\delta\rho$, where $\rho$ is the Ricci form of $\omega$, so by the Kempf-Ness theorem we can find a zero of $\mu+\delta\Delta\mu$ in the $G^c$-orbit of $p$ if and only if $p$ is stable with respect to the linearization $L+\delta K_M$ (see Lemma~\ref{lem:LK} in Section~\ref{sec:stability} for this). So Theorem~\ref{thm:stable} and Conjecture~\ref{conj:stronger} together imply that if $M$ admits a cscK metric in $c_1(L)$ and $Bl_pM$ is K-polystable for the polarization $\pi^*L-\varepsilon E$ for some sufficiently small $\varepsilon$, then $Bl_pM$ admits a cscK metric in $c_1(\pi^*L-\varepsilon E)$.
At the end of Section~\ref{sec:gluing} we indicate why we expect this conjecture to hold at least when $m>2$. There is one case when Conjecture~\ref{conj:stronger} follows directly from Theorem~\ref{thm:main}, namely when $(M,\omega)$ is K\"ahler-Einstein. The only interesting case is when $M$ is Fano, since otherwise $M$ does not admit Hamiltonian holomorphic vector fields, so the blowup at any point admits a cscK metric by the theorem in \cite{AP06}. In the Fano case we can scale the metric so that $\rho=\omega$, in which case $\Delta\mu=-\mu$. The statement of the conjecture then reduces to that of Theorem~\ref{thm:main}, so we obtain the following. \begin{cor}\label{cor:KE} Suppose that $(M,\omega)$ is Fano and K\"ahler-Einstein, and let $p\in M$. For sufficiently small rational $\varepsilon>0$ the following are equivalent: \begin{itemize} \item[(i)] The blowup $Bl_pM$ admits a cscK metric in the class $\pi^*[\omega]-\varepsilon[E]$. \item[(ii)] $Bl_pM$ is K-polystable with respect to the polarization $\pi^*K_M^{-1} - \varepsilon E$, for test-configurations that are invariant under a maximal torus. \item[(iii)] The point $p\in M$ is GIT polystable for the action of the automorphism group of $M$, with respect to the polarization $K_M^{-1}$. \end{itemize} \end{cor} In the second statement we need to use test-configurations invariant under a maximal torus for the implication (i)$\Rightarrow$(ii) because in~\cite{SSz09} we were not yet able to prove full K-polystability assuming the existence of a cscK metric. The outline of the paper is as follows. We first give the proof of the finite-dimensional perturbation result in Section~\ref{sec:deform} together with a related result in geometric invariant theory. In Section~\ref{sec:prelim} we discuss some background material on extremal metrics. Then in Section~\ref{sec:mainargument} we set up the equation that we want to solve and give the proof of Theorem~\ref{thm:main} assuming that we can solve this equation.
In Section~\ref{sec:gluing} we solve the equation using a gluing method similar to \cite{APS06}, completing the proof of Theorem~\ref{thm:main}. Our approach is slightly different from that of Arezzo-Pacard-Singer, but the technical ingredients are more or less the same. We could also have adapted the proof in \cite{APS06} more directly to our slightly more general setting. Finally in Section~\ref{sec:stability} we discuss the algebro-geometric side of the problem, and we prove Theorem~\ref{thm:stable} and Corollary~\ref{cor:KE}. \section{Deforming relatively stable points in GIT}\label{sec:deform} Let $U$ be a K\"ahler manifold (open, or compact without boundary) with K\"ahler form $\omega$, and suppose that a compact group $H$ acts on $U$, preserving $\omega$. Let $H^c$ be the complexification of $H$, and $\mathfrak{h}$ be the Lie algebra of $H$. The action of $H$ extends to a partial action of $H^c$, i.e. if $x\in U$ and $\xi\in\mathfrak{h}$ is sufficiently small then $e^{i\xi}x\in U$. For any $x\in U$ write $\mathfrak{h}_x$ for the stabilizer of $x$. Also, let \[ \mu: U\to \mathfrak{h} \] be a moment map for the $H$ action on $U$, where we have identified $\mathfrak{h}$ with its dual using an invariant inner product. \begin{prop}\label{prop:deform} Suppose that $x\in U$ satisfies $\mu(x)\in \mathfrak{h}_x$. Let $\mu_\varepsilon: U\to\mathfrak{h}$ be a family of maps such that $\mu_\varepsilon\to\mu$ in $C^0$ as $\varepsilon\to 0$, and for each $y\in U$ and $\varepsilon >0$ the element $\mu_\varepsilon(y)$ commutes with the stabilizer $\mathfrak{h}_y$. Then for sufficiently small $\varepsilon$ there exists $\xi\in\mathfrak{h}$ such that $y=e^{i\xi}x$ satisfies $\mu_\varepsilon(y)\in\mathfrak{h}_y$. \end{prop} \begin{proof} This is essentially an application of the implicit function theorem. Let us first treat the case when $\mu(x)=0$.
For every $y\in U$ write $\mathfrak{h}_y^\perp$ for the orthogonal complement of $\mathfrak{h}_y$, and let $\Pi_y$ be the orthogonal projection onto $\mathfrak{h}_y^\perp$. For $\xi$ in a small ball $B\subset\mathfrak{h}_x^\perp$ we define the projection $P_\xi:\mathfrak{h}\to\mathfrak{h}_x^\perp$ by \[ P_\xi(\eta) = \Pi_x\Pi_{e^{i\xi}x}(\eta).\] Then $P_\xi(\mu(e^{i\xi}x))=0$ means that $\mu(e^{i\xi}x)\in \mathfrak{h}_{e^{i\xi}x}$. For small $\varepsilon$ define the map $F_\varepsilon$ by \[ \begin{aligned} F_\varepsilon : B\subset\mathfrak{h}_x^\perp &\to \mathfrak{h}_x^\perp \\ \xi &\mapsto P_\xi(\mu_\varepsilon(e^{i\xi}x)). \end{aligned}\] Then $F_0(0)=0$ and $DF_0$ at $0$ is just the derivative of $\mu$ at $x$ (we use here that $\mu(x)=0$). Since this is an isomorphism $\mathfrak{h}_x^\perp\to\mathfrak{h}_x^\perp$, we know that $F_0(B)$ contains a small ball $B_\delta$ around the origin. It then follows from degree considerations that for sufficiently small $\varepsilon$, the image $F_\varepsilon(B)$ contains a small ball $B_{\delta'}$ around the origin; in particular $F_\varepsilon(\xi)=0$ for some $\xi$. But this means that $y=e^{i\xi}x$ satisfies $\mu_\varepsilon(y)\in\mathfrak{h}_y$. If $\mu(x)=\xi\not=0$, then we reduce to the previous case as follows. Let $T\subset H$ be the closure of the subgroup of $H$ generated by $\xi$, so $T$ is a torus. Write $H_T$ for the centraliser of $T$ in $H$, and $\mathfrak{h}_T$ for its Lie algebra. Inside this Lie algebra let $\mathfrak{h}_{T^\perp}$ be the orthogonal complement of the Lie algebra of $T$, and let $H_{T^\perp}$ be the corresponding subgroup of $H$. We then look at the action of $H_{T^\perp}$ on $U$, for which the moment map $\mu_T$ is simply the projection of $\mu$ onto $\mathfrak{h}_{T^\perp}$. But then $\mu_T(x)=0$, so we can apply the previous argument to find $\xi\in\mathfrak{h}_{T^\perp}$ such that $y=e^{i\xi}x$ satisfies $\mu_{\varepsilon,T}(y)\in\mathfrak{h}_y$.
Here $\mu_{\varepsilon,T}$ is the projection of $\mu_\varepsilon$ onto $\mathfrak{h}_{T^\perp}$. Since $\mu_\varepsilon(y)$ commutes with elements in the stabiliser $\mathfrak{h}_y$, in particular it commutes with $\mathfrak{t}$. Hence $\mu_\varepsilon(y)$ differs from the projection $\mu_{\varepsilon,T}(y)$ by an element of $\mathfrak{t}$, so $\mu_\varepsilon(y)\in\mathfrak{h}_y$. \end{proof} It is helpful to put this result into the context of relative stability. In this setting $U$ is a compact K\"ahler manifold and the symplectic form $\omega$ represents the first Chern class of a $\mathbf{Q}$-line bundle $L$ over $U$. Moreover the choice of moment map $\mu$ corresponds to a choice of lifting of the action to some power of $L$, called a choice of linearization. \begin{defn} A point $x\in U$ is \emph{relatively stable} if there exists a point $y$ in the $H^c$-orbit of $x$ for which $\mu(y)\in\mathfrak{h}_y$. \end{defn} The relationship of this definition using moment maps to geometric invariant theory is a version of the Kempf-Ness theorem~\cite{KN79} and is worked out in~\cite{GSz04} (see also Kirwan~\cite{Kir84}). Using this terminology, Proposition~\ref{prop:deform} says that if $x$ is relatively stable for a certain choice of line bundle and linearization, then it is still relatively stable for small perturbations. It is more general however, because we do not need to know that the $\mu_\varepsilon$ are also moment maps. In the rest of this section we study what more we can say about the stability of points in the sense of GIT as we deform the polarization. We will only consider a very simple kind of deformation, namely we have two line bundles $L$ and $K$ on $M$, such that $L$ is ample. We suppose that a complex reductive group $G$ acts on $M$ and we choose linearizations of the action on $L$ and $K$. Fix a point $p\in M$, and let $\lambda$ be a one-parameter subgroup in $G$.
Let \[ q = \lim_{t\to 0}\lambda(t)p.\] Then $\lambda$ fixes the point $q$ and we write $w_L(p,\lambda)$ for the weight of the action of $\lambda$ on the fiber $L_q$, and $w_K(p,\lambda)$ for the weight on $K_q$. The following is well-known, see Mumford-Fogarty-Kirwan~\cite{MFK94}. \begin{prop}[Hilbert-Mumford criterion] The point $p\in M$ is semistable with respect to the polarization $L$ if and only if $w_L(p,\lambda) \geqslant 0$ for all one-parameter subgroups $\lambda$. If $w_L(p,\lambda) > 0$ for all $\lambda$ which do not fix $p$, then $p$ is polystable. \end{prop} The result we will need is the following version of this for the deformed polarization $L+\varepsilon K$. \begin{prop}\label{prop:HM} There exists $\varepsilon_0>0$ such that for all $\varepsilon\in(0,\varepsilon_0)$ the point $p$ is polystable with respect to the polarization $L+\varepsilon K$ if and only if $p$ is semistable with respect to $L$ and for every one-parameter subgroup $\lambda$ for which $w_L(p,\lambda)=0$ and $\lambda$ does not fix $p$, we have $w_K(p,\lambda) > 0$. In particular whether $p$ is polystable with respect to $L+\varepsilon K$ or not is independent of the choice of $\varepsilon\in(0,\varepsilon_0)$. \end{prop} \begin{proof} First let us assume that $K$ is ample. Choose a maximal torus $T\subset G$ and let $\mathfrak{t}$ be its Lie algebra. We will use Proposition 2.14 in \cite{MFK94}. This says that there are a finite number of rational linear functionals $l_i, m_j\in \mathfrak{t}^*$, such that for every point $x\in M$ and one-parameter subgroup $\lambda\subset T$ we have \[ \begin{aligned} w_L(x,\lambda) &= \max\{ l_i(\lambda)\,|\, i\in I(x)\}, \\ w_K(x, \lambda) &= \max\{ m_j(\lambda)\, |\, j\in J(x)\}.
\end{aligned}\] Here we identified $\lambda$ with its generator in $\mathfrak{t}$, and $I(x)$, $J(x)$ are finite index sets depending on $x$. If we have a one-parameter subgroup $\lambda\subset G$ which is not in $T$, then we can always find a conjugate $\gamma\lambda\gamma^{-1}\subset T$, and we have \[ \begin{aligned} w_L(x,\lambda) &= w_L(\gamma x, \gamma\lambda\gamma^{-1})= \max\{ l_i(\gamma\lambda\gamma^{-1})\,|\, i\in I(\gamma x)\}, \\ w_K(x, \lambda) &= w_K(\gamma x, \gamma\lambda\gamma^{-1}) = \max\{ m_j(\gamma\lambda\gamma^{-1})\, |\, j\in J(\gamma x)\}. \end{aligned}\] We want to show that for sufficiently small $\varepsilon$, the weight $w_L(p,\lambda) + \varepsilon w_K(p,\lambda)$ is positive for all $\lambda$ which do not fix $p$. It is enough to check this for $\lambda\subset T$, because allowing conjugate one-parameter subgroups $\gamma\lambda\gamma^{-1}$ amounts to replacing $p$ by $\gamma p$, and the weight computation depends only on the index sets $I(\gamma p)$ and $J(\gamma p)$. Since there are only finitely many of these, if we find an $\varepsilon$ that works for each case separately, then we can take the minimum of these. Restricting attention to one-parameter subgroups $\lambda\subset T$, we can extend the definition of $w_L(x,\lambda)$ continuously to $\lambda\in\mathfrak{t}_\mathbf{R}$. By passing to a smaller torus we can also assume that no element in $T$ fixes $p$. The main point is that then the set of $\lambda$ for which $w_L(p,\lambda)=0$ is a convex cone $\mathcal{C}\subset\mathfrak{t}_\mathbf{R}$, whose extremal rays are rational. By our assumption $w_K(p,\lambda) > 0$ for $\lambda\in \mathcal{C}\cap \mathfrak{t}_\mathbf{Q}$, but then this is true for all $\lambda\in \mathcal{C}$ because of the rationality of the $m_j$.
Let us write $\partial B\subset\mathfrak{t}$ for the unit sphere with respect to some fixed norm. We then have $w_K(p,\lambda) > 0$ on $\mathcal{C}\cap \partial B$, but the latter is compact, so there exists an open neighbourhood $U\subset \partial B$ of $\mathcal{C}\cap \partial B$ for which \[ w_K(p,\lambda) > 0\quad\text{ for }\lambda\in U.\] At the same time $|w_K(p,\lambda)|<C$ for some constant $C$ and all $\lambda\in \partial B$. In addition there exists $\delta>0$ such that $w_L(p,\lambda) > \delta$ for $\lambda\not\in U$. Finally it follows that if $\varepsilon < \delta/C$, then \[ w_L(p,\lambda) + \varepsilon w_K(p,\lambda) > 0\] for all $\lambda$. For the converse direction, we note that $|w_K(p,\lambda)| < C$ for some constant $C$ and all $\lambda\in \partial B$, with $C$ independent of $p$. If $p$ is not semistable with respect to $L$, then there is a one-parameter subgroup $\lambda$ for which $w_L(p,\lambda) = -\delta < 0$. But then for all $\varepsilon < \delta / (C|\lambda|)$ we have \[ w_L(p,\lambda) + \varepsilon w_K(p,\lambda) < -\delta + C\varepsilon|\lambda| < 0,\] so $p$ is not stable with respect to $L+\varepsilon K$ for any sufficiently small $\varepsilon$. The fact that we need $w_K(p,\lambda) > 0$ for all $\lambda$ such that $w_L(p,\lambda)=0$ is immediate. Now suppose that $K$ is not ample. We can choose a large constant $c$ such that $cL+K$ is ample. Then note that for small $\varepsilon$ \[ L + \varepsilon K = (1-\varepsilon c)\left( L + \frac{\varepsilon}{1-\varepsilon c}(cL+K)\right),\] so if $p$ is polystable with respect to $L+\varepsilon K$ then it is also polystable with respect to $L+\frac{\varepsilon}{1-\varepsilon c}(cL + K)$, where $cL+K$ is ample. If $\varepsilon$ is sufficiently small, then we can apply what we just proved.
So $p$ is polystable with respect to $L+\varepsilon K$ if and only if $p$ is semistable with respect to $L$ and for every one-parameter subgroup $\lambda$ for which $w_L(p,\lambda)=0$ and $\lambda$ does not fix $p$, we have $w_K(p,\lambda) = w_{cL+K}(p,\lambda)>0$. \end{proof} \section{Background on extremal metrics} \label{sec:prelim} In this section we collect some material which we will need later on. \subsection{The extremal metric equation} As we said before, the basic strategy to obtain an extremal metric on a blowup $Bl_pM$ is to first use the extremal metric $\omega$ on $M$ and a simple gluing argument to obtain an approximately extremal metric $\omega_\varepsilon$ on $Bl_pM$, and then to try perturbing this to an extremal metric. When we set up the problem more precisely later we will have a maximal torus of automorphisms $T$ acting on $Bl_pM$, preserving the approximate solution $\omega_\varepsilon$, and we will seek an extremal metric of the form \[ \omega_\varepsilon + i\partial\overline{\partial}\varphi,\] where $\varphi$ is $T$-invariant. Let us write $\overline{\mathfrak{t}} \subset C^\infty(Bl_pM)$ for the space of Hamiltonian functions generating elements of $T$, which includes the constants. Note that $\dim\overline{\mathfrak{t}}=\dim\mathfrak{t}+1$ where $\mathfrak{t}$ is the Lie algebra of $T$, and $\overline{\mathfrak{t}}$ consists of the smooth $T$-invariant functions in the kernel of the Lichnerowicz operator on $Bl_pM$, defined in Section~\ref{sec:Lichnerowicz}. We need the following lemma, which can also be found in \cite{APS06}.
\begin{lem}\label{lem:basiceqn} Suppose that $\varphi\in C^\infty(Bl_pM)^T$ and $f\in\overline{\mathfrak{t}}$ are such that \begin{equation} \label{eq:basiceqn} \mathbf{s} (\omega_\varepsilon+i\partial\overline{\partial}\varphi) - \frac{1}{2}\nabla f\cdot\nabla\varphi = f, \end{equation} where the gradient and inner product are computed with respect to the metric $\omega_\varepsilon$. Then $\omega_\varepsilon + i\partial\overline{\partial}\varphi$ is an extremal metric. \end{lem} \begin{proof} Let $X$ be the holomorphic vector field on $Bl_pM$ with Hamiltonian function $f$, i.e. \[ df = \iota_X\omega_\varepsilon.\] At the same time we can compute \[ \iota_X(i\partial\overline{\partial} \varphi) = \frac{1}{2} d(JX(\varphi)). \] Since $JX=\nabla f$, by combining the previous two formulae we get \[ \iota_X(\omega_\varepsilon + i\partial\overline{\partial}\varphi) = d\left(f + \frac{1}{2} \nabla f\cdot\nabla\varphi\right) = d\mathbf{s}(\omega_\varepsilon+i\partial\overline{\partial}\varphi). \] This means that $\omega_\varepsilon + i\partial\overline{\partial}\varphi$ is an extremal metric. \end{proof} In order to solve Equation~(\ref{eq:basiceqn}) as a perturbation problem, we will write it in the form \begin{equation}\label{eq:2} \mathbf{s}(\omega_\varepsilon+i\partial\overline{\partial}\varphi) - \frac{1}{2}\nabla(\tilde{s}+\tilde{f})\cdot\nabla\varphi = \tilde{s}+\tilde{f}, \end{equation} where $\tilde{s},\tilde{f}\in \overline{\mathfrak{t}}$, and $\tilde{s}$ is chosen so that the holomorphic vector field $\nabla\tilde{s}$ is the natural holomorphic lift of the vector field $\nabla \mathbf{s}(\omega)$ on $M$. In addition we can normalise $\tilde{s}$ so that it agrees with $\mathbf{s}(\omega)$ outside a small ball around $p$, where the metrics $\omega$ and $\omega_\varepsilon$ coincide.
The advantage of this is that we now seek $\varphi$ and $\tilde{f}$ which are small; in other words, setting $\varphi=0$ and $\tilde{f}=0$ we get an approximate solution to the equation. For any metric $\tilde{\omega}$ let us define the operators $L_{\tilde{\omega}}$ and $Q_{\tilde{\omega}}$ by \begin{equation}\label{eq:scal} \mathbf{s}(\tilde{\omega} + i\partial\overline{\partial}\varphi) = \mathbf{s} (\tilde{\omega}) + L_{\tilde{\omega}}(\varphi) + Q_{\tilde{\omega}}(\varphi), \end{equation} where $L_{\tilde{\omega}}$ is the linearized operator. A simple computation shows that \[ L_{\tilde{\omega}}(\varphi) = \Delta^2_{\tilde{\omega}}\varphi + \mathrm{Ric}(\tilde{\omega})^{i\bar j}\varphi_{i\bar j},\] and analysing this operator will be crucial later on. We are using the complex Laplacian here, which is half of the usual Riemannian one. At the same time note that the linear operator appearing in the linearization of Equation~(\ref{eq:2}) is \begin{equation} \label{eq:linop} (\varphi,\tilde{f}) \mapsto L_{\omega_\varepsilon}(\varphi) - \frac{1}{2}\nabla\tilde{s}\cdot \nabla \varphi - \tilde{f}, \end{equation} which is closely related to the Lichnerowicz operator. \subsection{The Lichnerowicz operator}\label{sec:Lichnerowicz} For any K\"ahler metric $\tilde{\omega}$ on a manifold $X$ we have the operator \[ \mathcal{D}_{\tilde{\omega}}:C^\infty(X) \to \Omega^{0,1}(T^{1,0}X),\] given by $\mathcal{D}_{\tilde{\omega}}(\varphi) = \overline{\partial}\nabla^{1,0}\varphi$, where $\overline{\partial}$ is the natural $\overline{\partial}$-operator on the holomorphic tangent bundle. The Lichnerowicz operator is then the fourth order operator \[ \mathcal{D}^*_{\tilde{\omega}}\mathcal{D}_{\tilde{\omega}} : C^\infty(X)\to C^\infty(X),\] whose significance is that its kernel consists of precisely those functions whose gradients are holomorphic vector fields. The relation to the operator in Equation~(\ref{eq:linop}) is that a computation (see e.g.
LeBrun-Simanca~\cite{LS94}) shows that \begin{equation}\label{eq:Lichn} \mathcal{D}^*_{\tilde{\omega}}\mathcal{D}_{\tilde{\omega}}(\varphi) = L_{\tilde{\omega}}(\varphi) - \frac{1}{2}\nabla \mathbf{s}(\tilde{\omega})\cdot\nabla\varphi, \end{equation} but note that in general $\mathbf{s}(\tilde{\omega})$ is not equal to $\tilde{s}$. \subsection{The Burns-Simanca metric}\label{sec:BurnsSimanca} The approximate metric $\omega_\varepsilon$ on $Bl_pM$ is constructed by gluing the extremal metric $\omega$ on $M$ to a rescaling of a suitable model metric on $Bl_0\mathbf{C}^m$, i.e. on the blowup of $\mathbf{C}^m$ at the origin. This model metric is a scalar flat metric found by Burns (see LeBrun~\cite{LeBrun88}) for $m=2$ and by Simanca~\cite{Sim91} for $m\geqslant 3$. Away from the exceptional divisor it can be written in the form \[ \eta = i\partial\overline{\partial}\left( \frac{1}{2}|z|^2 +\psi(z) \right),\] where $z=(z_1,\ldots,z_m)$ are standard coordinates on $\mathbf{C}^m$. For $m=2$ we have $\psi(z) = \log|z|$ while for $m>2$ we have \[ \psi(z) = -|z|^{4-2m} + O(|z|^{3-2m}) \] for large $|z|$. The quantity $O(|z|^{3-2m})$ is a function in the space $C^{k,\alpha}_{3-2m}(Bl_0\mathbf{C}^m)$ in the notation of Section~\ref{sec:gluing}, for any $k$ and $\alpha\in(0,1)$. See Lemma~\ref{lem:BSmetric} for a sharper asymptotic expansion. \section{The main argument}\label{sec:mainargument} Suppose as before that $\omega$ is an extremal K\"ahler metric on $M$. Let $X_\mathbf{s}$ be the Hamiltonian vector field corresponding to the scalar curvature $\mathbf{s}(\omega)$. Write $G$ for the Hamiltonian isometry group of $(M,\omega)$, so the Lie algebra $\mathfrak{g}$ of $G$ consists of holomorphic Killing fields with zeros. Choose a point $p\in M$ where the vector field $X_\mathbf{s}$ vanishes, and let $T\subset G$ be a maximal torus of the subgroup fixing $p$.
Let $H\subset G$ consist of the elements commuting with $T$ and let us write $\overline{\mathfrak{h}} \subset C^{\infty}(M)$ for the space of Hamiltonian functions of vector fields in the Lie algebra of $H$. Note that $\overline{\mathfrak{h}}$ contains the constants as well. Let us also write $\overline{\mathfrak{t}}\subset\overline{\mathfrak{h}}$ for the Hamiltonian functions corresponding to the subgroup $T\subset H$. Given a small parameter $\varepsilon > 0$, we will construct an approximate solution to our problem on $Bl_pM$ in the K\"ahler class $[\omega] - \varepsilon^2d_m[E]$, for some constant $d_m$ depending on the dimension, so that $d_m^{m-1}$ is the volume of the exceptional divisor of $Bl_0\mathbf{C}^m$ with the Burns-Simanca metric $\eta$ from Section~\ref{sec:BurnsSimanca}. Of course we could make $d_m=1$ by rescaling $\eta$. For simplicity assume that the exponential map is defined on the unit ball in the tangent space $T_pM$ (if not, we can scale up the metric $\omega$). Choose local normal coordinates $z$ near $p$ such that the group $T$ acts by unitary transformations on the unit ball $B_1$ around $p$ (this is possible by linearizing the action, see Bochner-Martin~\cite{BM48} Theorem 8). In these coordinates we can write \[ \omega = i\partial\overline{\partial}\big(|z|^2/2 + \varphi(z)\big),\] where $\varphi=O(|z|^4)$. At the same time recall the Burns-Simanca metric \[ \eta = i\partial\overline{\partial}\big(|z|^2/2 + \psi(z)\big).\] We glue $\varepsilon^2\eta$ to $\omega$ using a cutoff function in the annulus $B_{2r_\varepsilon}\setminus B_{r_\varepsilon}$ in $M$, where the dependence of $r_\varepsilon$ on $\varepsilon$ will be chosen later. To do this, let $\gamma:\mathbf{R}\to[0,1]$ be smooth such that $\gamma(x)=0$ for $x < 1$ and $\gamma(x)=1$ for $x > 2$ and then define \[ \gamma_1(r) = \gamma(r/r_\varepsilon), \] and write $\gamma_2=1-\gamma_1$. 
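As a sanity check on this gluing, the following Python sketch (our illustration, not from the paper; the quartic profile standing in for $\varphi$ and the leading term of $\psi$ for $m=3$ are illustrative choices) assembles the cutoff functions $\gamma_1,\gamma_2$ and the glued radial potential, and confirms that it agrees with the potential of $\omega$ for $r\geqslant 2r_\varepsilon$ and with that of $\varepsilon^2\eta$ for $r\leqslant r_\varepsilon$.

```python
import math

m = 3
eps = 1e-3
r_eps = eps ** 0.5  # a gluing radius with eps << r_eps << 1

def bump(t):
    """Standard smooth function: 0 for t <= 0, positive for t > 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def gamma(x):
    """Smooth cutoff: gamma = 0 for x <= 1, gamma = 1 for x >= 2."""
    s = bump(x - 1.0)
    return s / (s + bump(2.0 - x))

def gamma1(r):
    return gamma(r / r_eps)

def gamma2(r):
    return 1.0 - gamma1(r)

def phi(r):
    """Toy stand-in for the O(|z|^4) term of the potential of omega."""
    return 0.1 * r ** 4

def psi(r):
    """Leading term of the Burns-Simanca potential for m = 3."""
    return -r ** (4 - 2 * m)

def glued_potential(r):
    """Kaehler potential of omega_eps on the annulus between B_eps and B_1."""
    return r * r / 2 + gamma1(r) * phi(r) + gamma2(r) * eps ** 2 * psi(r / eps)
```

For $r\leqslant r_\varepsilon$ the value equals $\varepsilon^2\big((r/\varepsilon)^2/2+\psi(r/\varepsilon)\big)$, i.e. the rescaled model potential, since $\varepsilon^2(r/\varepsilon)^2/2=r^2/2$.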
Then we can define a K\"ahler metric $\omega_\varepsilon$ on $Bl_pM$ which on the annulus $B_1\setminus B_\varepsilon$ is given by \[ \omega_\varepsilon = i\partial\overline{\partial}\left( \frac{|z|^2}{2} + \gamma_1(|z|)\varphi(z) + \gamma_2(|z|) \varepsilon^2\psi(\varepsilon^{-1} z)\right).\] Moreover outside $B_{2r_\varepsilon}$ the metric $\omega_\varepsilon=\omega$, while inside the ball $B_{r_\varepsilon}$ we have $\omega_\varepsilon=\varepsilon^2\eta$. Note that the action of $T$ lifts to $Bl_pM$ giving biholomorphisms, and that $\omega_\varepsilon$ is $T$-invariant. It will be important to lift functions in $\overline{\mathfrak{h}}$ to $Bl_pM$. Only elements in $\overline{\mathfrak{t}}$ have a natural lifting, since they correspond to holomorphic vector fields vanishing at $p$, so we give the following definition. \begin{defn}\label{def:lifting} We define a linear map \[ \mathbf{l} : \overline{\mathfrak{h}} \to C^\infty(Bl_pM) \] as follows. First let us decompose $\overline{\mathfrak{h}}$ into a direct sum $\overline{\mathfrak{h}}=\overline{\mathfrak{t}}\oplus\mathfrak{h}'$, where we can assume that each function in $\mathfrak{h}'$ vanishes at $p$. Any $f\in\overline{\mathfrak{t}}$ corresponds to a holomorphic vector field $X_f$ on $M$ vanishing at $p$. For such $f$ we define $\mathbf{l}(f)$ to be the Hamiltonian function of the holomorphic lift of the vector field $X_f$ to $Bl_pM$, with respect to the symplectic form $\omega_\varepsilon$, normalized so that $f=\mathbf{l}(f)$ outside $B_1$. For $f\in\mathfrak{h}'$ we simply let $\mathbf{l}(f)=\gamma_1 f$ near $p$ using the cutoff function $\gamma_1$ from before, and we think of this $\mathbf{l}(f)$ as a function on $Bl_pM$. Finally we define the lift of general elements in $\overline{\mathfrak{h}}$ by linearity. \end{defn} We can now state the main technical result we need, whose proof will be given in Section~\ref{sec:gluing}.
\begin{prop}\label{prop:gluing} Suppose that the point $p\in M$ is chosen so that the vector field $X_\mathbf{s}$ vanishes at $p$. Then there are constants $\varepsilon_0, c > 0$ such that for all $\varepsilon\in (0,\varepsilon_0)$ we can find $u\in C^\infty(Bl_pM)^T$ and $f\in\overline{\mathfrak{h}}$ satisfying the equation \begin{equation}\label{eq:glued} \mathbf{s} (\omega_\varepsilon+i\partial\overline{\partial} u) - \frac{1}{2}\nabla \mathbf{l}(f)\cdot\nabla u = \mathbf{l}(f). \end{equation} In addition the element $f\in\overline{\mathfrak{h}}$ has an expansion \begin{equation} \label{eq:expansion} f = \mathbf{s} + \varepsilon^{2m-2}(\lambda+c_m \mu(p)) + f_\varepsilon, \end{equation} where $c_m$ is a constant depending only on the dimension, $\lambda = \mathrm{Vol}(M)^{-1}c_m$ is another constant, and $|f_\varepsilon| \leqslant c\varepsilon^\kappa$ for some $\kappa > 2m-2$. \end{prop} Note that in this proposition, $\mathbf{l}(f)$ corresponds to a Hamiltonian vector field $X_{\mathbf{l}(f)}$ on $Bl_pM$, and if this vector field is holomorphic, then the metric $\omega_\varepsilon + i\partial\overline{\partial} u$ above is extremal by Lemma~\ref{lem:basiceqn}. Moreover $X_{\mathbf{l}(f)}$ is holomorphic if and only if $f\in\overline{\mathfrak{t}}$, i.e. if the vector field $X_f$ on $M$ vanishes at $p$. Given this proposition we can now prove Theorem~\ref{thm:main}. \begin{proof}[Proof of Theorem~\ref{thm:main}] We will give the proof for the blowup of one point to simplify the notation, since blowing up several points does not give rise to essential new difficulties. Let us use the notation from before, so that $G$ is the Hamiltonian isometry group of $(M,\omega)$, $p\in M$ and $T$ is a maximal torus in the stabilizer of $p$. The subgroup $H\subset G$ consists of the elements commuting with each element of $T$, and let $\mathfrak{h}, \mathfrak{t}$ be the Lie algebras of $H,T$.
Note that in this case $\mathfrak{h}_p=\mathfrak{t}$, where $\mathfrak{h}_p$ is the stabilizer of $p$ in $\mathfrak{h}$. We will work on the $H^c$-orbit of $p$, so let us write $U=H^c\cdot p$. Then $U\subset M$ is an $H$-invariant complex submanifold. If $\mu(p)\in\mathfrak{h}_p$, then the stabilizer of $p$ in $H^c$ is $T^c$. This can be seen using the structure of the stabilizer group of relatively stable points (analogous to Calabi's structure theorem for the automorphism groups of extremal metrics~\cite{Cal85}). Since every element in $H^c$ commutes with $T$, it follows that for every $q\in U$ the stabilizer of $q$ in $H$ is $T$. We can therefore apply Proposition~\ref{prop:gluing} to each point $q\in U$ with $T$ as the maximal torus. We first replace $U$ by a relatively compact complex submanifold $U'\subset\subset U$ which is still $H$-invariant and contains $p$, to ensure that we can choose $\varepsilon,c$ in the proposition uniformly over $U'$. Note that the solution of the equation in Proposition~\ref{prop:gluing} is obtained using the contraction mapping theorem, so although the solution is not unique, the various choices can be made so that it depends smoothly on the data. For a suitably small $\varepsilon$ we therefore have a smooth map \[\begin{aligned} \mu_\varepsilon : U' &\to \mathfrak{h} \\ \mu_\varepsilon(q)&= \mu(q) + c_m^{-1}\varepsilon^{2-2m}f_\varepsilon, \end{aligned}\] where $f_\varepsilon\in \mathfrak{h}$ is given by Proposition~\ref{prop:gluing} applied at the point $q$ (so $f_\varepsilon$ depends on $q$). Since $|f_\varepsilon|\leqslant c\varepsilon^\kappa$ for some $\kappa > 2m-2$, it follows that \[ \lim_{\varepsilon\to 0}\mu_\varepsilon = \mu.\] Applying Proposition~\ref{prop:deform} we see that if the vector field $\mu(p)$ vanishes at the point $p$, then for sufficiently small $\varepsilon > 0$ we can find a point $q$ in the $H^c$-orbit of $p$ such that $\mu_\varepsilon(q)$ vanishes at $q$.
Note that $X_\mathbf{s}$ is in the center of $\mathfrak{g}$, so if the vector field $X_\mathbf{s}$ vanishes at $p$ then it also vanishes at $q$. This means that when applying Proposition~\ref{prop:gluing} at the point $q$, the element $f\in\overline{\mathfrak{h}}$ is actually in $\overline{\mathfrak{t}}$, i.e. $X_f$ vanishes at $q$. By Lemma~\ref{lem:basiceqn} we therefore obtain an extremal metric on the blowup $Bl_qM$ in the K\"ahler class \[ [\omega] - \varepsilon^2d_m[E].\] Since $q$ is in the $H^c$-orbit of $p$, the manifolds $Bl_qM$ and $Bl_pM$ are biholomorphic, so we have constructed an extremal metric on the blowup $Bl_pM$. \end{proof} \section{The gluing argument} \label{sec:gluing} In this section we prove Proposition~\ref{prop:gluing}. As before we will only blow up one point, but the proof of the general case is identical apart from more complicated notation. We will mainly focus on the case $m\geqslant 3$, since the case $m=2$ needs special care, but we will make brief comments on how to adapt the arguments when $m=2$. We first need some analytic preliminaries. \subsection{The Lichnerowicz operator on weighted spaces} The key to solving our equation using a perturbation method is to construct an inverse to the linear operator (\ref{eq:linop}) and to control this inverse acting between suitable Banach spaces. It turns out that weighted H\"older spaces are suitable spaces to work in, and in order to understand the mapping properties of the operator (\ref{eq:linop}) between these spaces on the blowup $Bl_pM$ we first need to understand the behaviour of the Lichnerowicz operator on weighted spaces on the manifolds $M\setminus\{p\}$ and $Bl_0\mathbf{C}^m$. This is the fundamental tool in Arezzo-Pacard~\cite{AP06,AP09} and Arezzo-Pacard-Singer~\cite{APS06} and we follow their treatment here. See also Lockhart-McOwen~\cite{LM85}, Mazzeo~\cite{Maz91}, Melrose~\cite{Mel93} or Pacard-Rivi\`ere~\cite{PR00} for more details on weighted spaces.
First we look at $M_p=M\setminus\{p\}$ with the metric $\omega$. For functions $f:M_p \to\mathbf{R}$ we define the weighted norm \[ \Vert f\Vert_{C^{k,\alpha}_\delta(M_p)} = \Vert f\Vert_{C^{k,\alpha}_\omega(M\setminus B_{1/2})} + \sup_{r < 1/2} r^{-\delta} \Vert f\Vert_{C^{k,\alpha}_{r^{-2}\omega}(B_{2r} \setminus B_r)}.\] Here the subscripts $\omega$ and $r^{-2}\omega$ indicate the metrics used for computing the corresponding norm. The weighted space $C^{k,\alpha}_\delta(M_p)$ consists of functions on $M\setminus\{p\}$ which are locally in $C^{k,\alpha}$ and whose $\Vert\cdot\Vert_{C^{k,\alpha}_\delta}$ norm is finite. The main result we need is the following. \begin{prop}\label{prop:Mp} If $\delta < 0$ and $\alpha\in(0,1)$ then the operator \[ \begin{aligned} C^{4,\alpha}_{\delta}(M_p)^T \times\overline{\mathfrak{h}} &\to C^{0,\alpha}_{\delta-4}(M_p)^T \\ (\varphi, f) &\mapsto \mathcal{D}_\omega^*\mathcal{D}_\omega (\varphi) - f \end{aligned}\] has a bounded right-inverse. Here $T$ is a torus of isometries of $(M,\omega)$ and $\overline{\mathfrak{h}}$ is the space of $T$-invariant Hamiltonian functions of holomorphic Killing fields. \end{prop} \begin{proof} This follows from the duality theory in weighted spaces. The image of \[ \mathcal{D}^*\mathcal{D} : C^{4,\alpha}_\delta(M_p) \to C^{0,\alpha}_{\delta-4}(M_p)\] is the orthogonal complement of the kernel of \[ \mathcal{D}^*\mathcal{D} : C^{4,\alpha}_{4-2m-\delta}(M_p) \to C^{0,\alpha}_{-2m-\delta}(M_p).\] If $\delta < 0$ then $4-2m-\delta > 4-2m$, so we need to see that if $h\in\mathrm{Ker}\,\mathcal{D}^*\mathcal{D}$ is such that $h\in C^{4,\alpha}_\gamma(M_p)^T$ for some $\gamma > 4-2m$, then $h$ is smooth. This follows from the regularity theory in weighted spaces, since there are no indicial roots in $(4-2m,0)$. \end{proof} Let us turn now to the manifold $Bl_0\mathbf{C}^m$ with the Burns-Simanca metric $\eta$.
The relevant weighted H\"older norm is now given by \[ \Vert f\Vert_{C^{k,\alpha}_\delta(Bl_0\mathbf{C}^m)} = \Vert f\Vert_{C^{k,\alpha}_\eta(B_2)} + \sup_{r > 1} r^{-\delta} \Vert f\Vert_{C^{k,\alpha}_{r^{-2}\eta}(B_{2r}\setminus B_r)}.\] Here we abuse notation slightly by writing $B_r\subset Bl_0 \mathbf{C}^m$ for the set where $|z|<r$ (i.e. the pullback of the $r$-ball in $\mathbf{C}^m$ under the blowdown map). The key result here is the following. \begin{prop}\label{prop:BlC} For $\delta > 4-2m$ and $\alpha\in(0,1)$ the operator \[ \begin{aligned} C^{4,\alpha}_\delta(Bl_0\mathbf{C}^m)^T &\to C^{0,\alpha}_{\delta-4}(Bl_0\mathbf{C}^m)^T \\ \varphi &\mapsto \mathcal{D}^*_\eta\mathcal{D}_\eta(\varphi) \end{aligned}\] has a bounded inverse. If $m=2$ then we should instead choose $\delta\in(3-2m,4-2m)$. In that case if we let $\chi$ be a compactly supported function on $Bl_0\mathbf{C}^m$ with non-zero integral, then the operator \begin{equation}\label{eq:op2} \begin{aligned} C^{4,\alpha}_\delta(Bl_0\mathbf{C}^m)^T\times\mathbf{R} &\to C^{0,\alpha}_{\delta-4}(Bl_0\mathbf{C}^m)^T \\ (\varphi, t) &\mapsto \mathcal{D}^*_\eta\mathcal{D}_\eta(\varphi) + t\chi \end{aligned}\end{equation} has a bounded inverse. \end{prop} \begin{proof} This is also a consequence of duality theory in weighted spaces. Once again the image of \[ \mathcal{D}^*\mathcal{D} : C^{4,\alpha}_\delta(Bl_0\mathbf{C}^m) \to C^{0,\alpha}_{\delta-4}(Bl_0\mathbf{C}^m)\] is the orthogonal complement of the kernel of \[ \mathcal{D}^*\mathcal{D} : C^{4,\alpha}_{4-2m-\delta}(Bl_0 \mathbf{C}^m) \to C^{0,\alpha}_{-2m-\delta}(Bl_0\mathbf{C}^m).\] If $\delta > 4-2m$, then $4-2m-\delta < 0$. If $h\in \mathrm{Ker}\,\mathcal{D}^*\mathcal{D}$ and $h\in C^{4,\alpha}_\gamma(Bl_0\mathbf{C}^m)$ for some $\gamma < 0$ then we must have $h=0$ (for the proof see~\cite{AP06}). This implies that our operator is surjective.
When $m=2$, the same argument shows that the image of $\mathcal{D}^*_\eta\mathcal{D}_\eta$ when $\delta\in(3-2m,4-2m)$ has codimension 1, and more precisely the image is the subspace of functions with integral zero. It follows that the operator (\ref{eq:op2}) is surjective. \end{proof} \subsection{Weighted spaces on $Bl_pM$} We will need to do analysis on the blown-up manifold $Bl_pM$ endowed with the approximately extremal metric $\omega_\varepsilon$ we constructed in Section~\ref{sec:mainargument}. For this we define the following weighted spaces, which are simply glued versions of the above weighted spaces on $M\setminus\{p\}$ and $Bl_0\mathbf{C}^m$. We define the weighted H\"older norms $C^{k,\alpha}_\delta$ by \[ \Vert f\Vert_{C^{k,\alpha}_\delta} = \Vert f\Vert_{C^{k,\alpha}_{\omega}(M\setminus B_1)} + \sup_{\varepsilon\leqslant r\leqslant 1/2} r^{-\delta}\Vert f\Vert_{C^{k,\alpha}_{r^{-2}\omega_\varepsilon}(B_{2r}\setminus B_r)} + \varepsilon^{-\delta}\Vert f\Vert_{C^{k,\alpha}_{\eta}(B_\varepsilon)}. \] The subscripts indicate the metrics used to compute the relevant norm. This is a glued version of the two spaces defined in the previous section in the following sense. If $f\in C^{k,\alpha}(Bl_pM)$ and we think of $Bl_pM$ as a gluing of $M\setminus\{p\}$ and $Bl_0\mathbf{C}^m$ then $\gamma_1f$ and $\gamma_2f$ can naturally be thought of as functions on $M\setminus\{p\}$ and $Bl_0\mathbf{C}^m$ respectively.
Then the norm $\Vert f\Vert_{C^{k,\alpha}_\delta(Bl_pM)}$ is comparable to \[ \Vert \gamma_1f\Vert_{C^{k,\alpha}_\delta(M_p)} + \varepsilon^{-\delta}\Vert\gamma_2f\Vert_{C^{k,\alpha}_\delta (Bl_0\mathbf{C}^m,\eta)}.\] Another way to think about the norm is that if $\Vert f\Vert_{C^{k,\alpha}_\delta}\leqslant c$ then $f$ is in $C^{k,\alpha}(Bl_pM)$ and also for $i\leqslant k$ we have \[ \begin{gathered} |\nabla^i f| \leqslant c\,\,\text{ for } r\geqslant 1\\ |\nabla^i f| \leqslant cr^{\delta-i}\,\,\text{ for } \varepsilon \leqslant r \leqslant 1\\ |\nabla^i f| \leqslant c\varepsilon^{\delta-i} \,\,\text{ for } r\leqslant\varepsilon. \end{gathered}\] The norms here are computed with respect to the metric $\omega_\varepsilon$, and note that on $B_\varepsilon$ we have $\omega_\varepsilon=\varepsilon^2\eta$. Sometimes we will restrict this norm to subsets such as $C^{k,\alpha}_\delta(M\setminus B_{r_\varepsilon})$ and $C^{k,\alpha}_\delta(B_{2r_\varepsilon})$. A crucial property of these weighted norms is that \begin{equation}\label{eq:cutoff} \Vert \gamma_i\Vert_{C^{4,\alpha}_0} \leqslant c \end{equation} for some constant $c$ independent of $\varepsilon$. In addition we need the following lemma about lifting elements of $\overline{\mathfrak{h}}\subset C^\infty(M)$ to $C^\infty(Bl_pM)$ according to Definition~\ref{def:lifting}. \begin{lem}\label{lem:lb} For any $f\in\overline{\mathfrak{h}}$ its lifting satisfies \[ \Vert \mathbf{l}(f) \Vert_{C^{0,\alpha}_0} \leqslant c|f|\] and also $|X_{\mathbf{l}(f)}|_{\omega_\varepsilon} \leqslant c|f|$ for some constant $c$ independent of $\varepsilon$. Here $|\cdot|$ is any fixed norm on $\overline{\mathfrak{h}}$.
\end{lem} \begin{proof} Recall that we defined the lifting in Definition~\ref{def:lifting} using a decomposition $\overline{\mathfrak{h}}=\overline{\mathfrak{t}} \oplus\mathfrak{h}'$, where the functions in $\mathfrak{h}'$ vanish at $p$. Suppose first that $f\in\mathfrak{h}'$. Since $f$ vanishes at $p$, we have \[ \Vert f\Vert_{C^{1,\alpha}_1(M_p)}\leqslant c|f|,\] where $c$ is independent of $f$. It follows from the multiplication properties of weighted spaces and (\ref{eq:cutoff}) that \[ \Vert \mathbf{l}(f)\Vert_{C^{1,\alpha}_1} \leqslant c|f|,\] from which the required inequalities follow. Now suppose that $f\in\overline{\mathfrak{t}}$, and write $X_f$ for the holomorphic vector field on $M$ corresponding to $f$. On the ball $B_{r_\varepsilon}\subset M$, the action of $X_f$ is given by unitary transformations, and the size of the lifting to $B_{r_\varepsilon}\subset Bl_pM$ is determined by the size of $X_f$ on $\partial B_{r_\varepsilon}$. Outside $B_{r_\varepsilon}$ the vector field is unchanged and the metrics $\omega$ and $\omega_\varepsilon$ are uniformly equivalent. From these observations we can check that $|X_{\mathbf{l}(f)}|_{\omega_\varepsilon}\leqslant c|f|$ for some constant $c$. This in turn bounds $\nabla\mathbf{l}(f)$, from which the bound on $\Vert\mathbf{l}(f)\Vert_{C^{0,\alpha}_0}$ follows. \end{proof} \subsection{The linearized operator} We now want to start studying the linearized operator (\ref{eq:linop}). The constants that appear below will be independent of $\varepsilon$ unless the dependence is made explicit. Recall that for any metric $\tilde\omega$ we defined \[ L_{\tilde{\omega}}(\varphi) = \Delta_{\tilde{\omega}}^2 \varphi + \mathrm{Ric}(\tilde{\omega})^{i\bar j} \varphi_{i\bar j}.\] We want to first study how this varies as we change the metric. For this we have \begin{prop}\label{prop:linear} Suppose $\delta < 0$.
There exist constants $c_0, C>0$ such that if $\Vert\varphi\Vert_{C^{4,\alpha}_2} < c_0$ then \[ \Vert L_{\omega_\varphi}(f) - L_{\omega_\varepsilon}(f)\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant C\Vert\varphi\Vert_{C^{4,\alpha}_2}\Vert f\Vert_{C^{4,\alpha}_\delta},\] where $\omega_\varphi=\omega_\varepsilon + i\partial\overline{\partial}\varphi$. \end{prop} \begin{proof} In the proof $c$ will denote a constant that may change from line to line, but is always independent of $\varepsilon$. Let us write $g$ and $g_\varphi$ for the Riemannian metrics corresponding to $\omega_\varepsilon$ and $\omega_\varphi$. We can first choose $c_0$ small enough so that $\frac{1}{2}g < g_\varphi < 2g$, so the metrics are uniformly equivalent. We also have \[ \Vert g_\varphi\Vert_{C^{2,\alpha}_0} \leqslant c(1 + \Vert \varphi\Vert_{C^{4,\alpha}_2}) \leqslant c, \] where the norms are always computed with respect to $\omega_\varepsilon$ and a set of coordinate charts obtained from a fixed set of charts on $M$ and on $Bl_0\mathbf{C}^m$. Schematically we have \[ \begin{gathered} \partial(g_{\varphi}^{-1}) = g_\varphi^{-2}\partial g_\varphi \\ \partial^2(g_\varphi^{-1}) = g_\varphi^{-3}\partial g_\varphi\partial g_\varphi + g_\varphi^{-2}\partial^2 g_\varphi \end{gathered} \] which implies, using our previous statements, that $\Vert g_\varphi^{-1}\Vert_{C^{2,\alpha}_0}\leqslant c$.
Since \[ g_\varphi^{-1}-g^{-1} = g_\varphi^{-1}(g - g_\varphi)g^{-1}, \] we get \[ \Vert g_\varphi^{-1}-g^{-1}\Vert_{C^{2,\alpha}_0} \leqslant c \Vert\varphi\Vert_{C^{4,\alpha}_2} .\] From this we can control $\Delta^2_{g_\varphi}- \Delta^2_g$, since schematically \[ \begin{gathered} \Delta^2_{g_\varphi}f - \Delta^2_g f = g_\varphi^{-1}\partial^2(g_{\varphi}^{-1}\partial^2f) - g^{-1}\partial^2(g^{-1}\partial^2f) \\ = (g_\varphi^{-1} - g^{-1})\partial^2(g_{\varphi}^{-1} \partial^2f) + g^{-1}\partial^2\big[ (g_\varphi^{-1} - g^{-1})\partial^2f\big], \end{gathered}\] from which we get \[ \begin{aligned} \Vert \Delta^2_{g_\varphi}f - \Delta^2_{g}f\Vert_{ C^{0,\alpha}_{\delta-4}} \leqslant &\, \Vert g_\varphi^{-1} - g^{-1}\Vert_{C^{0,\alpha}_0} \Vert g_{\varphi}^{-1}\Vert_{C^{2,\alpha}_0} \Vert\partial^2 f\Vert_{C^{2,\alpha}_{\delta-2}} \\ & + \Vert g^{-1}\Vert_{C^{0,\alpha}_0} \Vert g_\varphi^{-1} - g^{-1}\Vert_{C^{2,\alpha}_0} \Vert \partial^2 f\Vert_{C^{2,\alpha}_{\delta-2}} \\ \leqslant &\, c\Vert\varphi\Vert_{ C^{4,\alpha}_2} \Vert f\Vert_{C^{4,\alpha}_\delta}. \end{aligned}\] For the terms involving the curvature, we first note that $\Vert \mathrm{Riem}(g)\Vert_{C^{k,\alpha}_{-2}}\leqslant c$ for some constant independent of $\varepsilon$. In addition \[ \Vert\mathrm{Riem}(g_\varphi)-\mathrm{Riem}(g)\Vert_{C^{0,\alpha}_{-2}} \leqslant c\Vert\varphi\Vert_{C^{4,\alpha}_2}.\] This follows from the schematic \[ \mathrm{Riem}(g) = \partial\Gamma + \Gamma\star\Gamma,\] where $\Gamma = g^{-1}\partial g$. In addition \[ \Gamma_\varphi - \Gamma = (g_\varphi^{-1} - g^{-1})\partial g_\varphi + g^{-1}(\partial g_\varphi - \partial g),\] which implies that \[ \Vert \Gamma_\varphi - \Gamma\Vert_{C^{1,\alpha}_{-1}} \leqslant c\Vert\varphi\Vert_{C^{4,\alpha}_2},\] and so with similar calculations we get the required result for the curvature. \end{proof} This result will have several useful consequences.
First it allows us to estimate the nonlinear operator $Q_{\omega_\varepsilon}$ in the formula \begin{equation}\label{eq:Q} \mathbf{s}(\omega_\varepsilon + i\partial\overline{\partial}\varphi) = \mathbf{s}(\omega_\varepsilon) + L_{\omega_\varepsilon}(\varphi) + Q_{\omega_\varepsilon}(\varphi). \end{equation} \begin{lem}\label{lem:contract} Suppose that $\delta < 0$. There exist $c_0,C > 0$ such that if \[ \Vert \varphi\Vert_{C^{4,\alpha}_2}, \Vert \psi \Vert_{C^{4,\alpha}_2} \leqslant c_0, \] then \[ \Vert Q_{\omega_\varepsilon}(\varphi) - Q_{\omega_\varepsilon}(\psi)\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant C\big\{\Vert\varphi\Vert_{C^{4,\alpha}_2} + \Vert\psi\Vert_{C^{ 4,\alpha}_2}\big\}\Vert \varphi-\psi\Vert_{C^{4,\alpha}_\delta}. \] \end{lem} \begin{proof} By the mean value theorem there exists some $\chi$, which is a convex combination of $\varphi$ and $\psi$, such that \[ Q_{\omega_\varepsilon}(\varphi) - Q_{\omega_\varepsilon}(\psi) = DQ_{\omega_\varepsilon,\chi}( \varphi-\psi).\] From Equation~(\ref{eq:Q}) we see that $DQ_{\omega_\varepsilon, \chi} = L_{\omega_{\chi}}-L_{\omega_\varepsilon}$, so if $c_0$ is sufficiently small, then from the previous proposition we get \[ \Vert DQ_{\omega_\varepsilon,\chi}(\varphi-\psi)\Vert_{C^{0,\alpha}_{\delta -4}} \leqslant C\Vert\chi\Vert_{C^{4,\alpha}_2} \Vert\varphi-\psi\Vert_{C^{4,\alpha}_\delta}.\] But $\Vert\chi\Vert_{C^{4,\alpha}_2}\leqslant \Vert\varphi\Vert_{C^{4,\alpha}_2} + \Vert\psi\Vert_{C^{4,\alpha}_2}$ so the required inequality holds. \end{proof} Next we want to study the invertibility of the linearized operator (\ref{eq:linop}) of our problem on $Bl_pM$. Let us write $X=\nabla\mathbf{l}(s)$, where $\mathbf{l}(s)$ is the lift to $Bl_pM$ of the scalar curvature $\mathbf{s}(\omega)$.
\begin{prop}\label{prop:inv} For sufficiently small $\varepsilon$ and $\delta\in(4-2m,0)$ the operator \[ \begin{gathered} G : (C^{4,\alpha}_\delta)^T \times \overline{\mathfrak{h}} \to (C^{0,\alpha}_{\delta-4})^T \\ (\varphi, f) \mapsto L_\omega(\varphi) - \frac{1}{2}X(\varphi) - \mathbf{l}(f) \end{gathered}\] has a right inverse $P$, with bounded operator norm $\Vert P\Vert < C$ for some constant $C$ independent of $\varepsilon$. When $m=2$ and we choose $\delta=4-2m-\theta$ for $\theta > 0$ small, then we obtain a right inverse $P$ with $\Vert P\Vert < C\varepsilon^{-\theta}$. \end{prop} \begin{proof} This follows a standard argument for gluing solutions of linear problems, by first constructing an approximate inverse. See for example Chapter 7 in Donaldson-Kronheimer~\cite{DK90}. We will use the cutoff functions $\gamma_1,\gamma_2$ from before, where $\gamma_1+\gamma_2=1$, the function $\gamma_1$ is supported on $M\setminus B_{r_\varepsilon}$, $\nabla\gamma_1$ is supported on $B_{2r_\varepsilon}\setminus B_{r_\varepsilon}$ and \[ \Vert \gamma_i\Vert_{C^{4,\alpha}_0}\leqslant c.\] We will also need a cutoff function $\beta_1$ which is equal to 1 on the support of $\gamma_1$, such that $\nabla\beta_1$ is supported on a set slightly smaller than $B_{r_\varepsilon}\setminus B_\varepsilon$ and $\beta_1=0$ on $B_\varepsilon$. We will later choose $a < 1$ such that $r_\varepsilon=\varepsilon^a$, and for now let $\overline{a}$ be such that $a < \overline{a} < 1$. Then we can define \[ \beta_1(z) = \beta\left(\frac{\log |z|}{\log\varepsilon}\right), \] where $\beta : \mathbf{R}\to\mathbf{R}$ is a fixed cutoff function such that $\beta(r)=1$ for $r<a$ and $\beta(r)=0$ for $r > \overline{a}$.
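The usefulness of cutting off at logarithmic scale can be seen from the first derivative of $\beta_1$: differentiating the definition gives \[ \nabla\beta_1(z) = \beta'\left(\frac{\log|z|}{\log\varepsilon}\right) \frac{\nabla\log|z|}{\log\varepsilon}, \qquad |\nabla\log|z|\,| = \frac{1}{|z|}, \] so $|z|\,|\nabla\beta_1(z)|\leqslant c/|\log\varepsilon|$, and higher derivatives behave similarly, each one producing a further factor of $|z|^{-1}$.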
The key point is that with this definition \[ \Vert \nabla\beta_1\Vert_{C^{3,\alpha}_{-1}} \leqslant \frac{c}{|\log\varepsilon|},\] and also the support of $\nabla\beta_1$ is in $B_{r_\varepsilon} \setminus B_{\varepsilon^{\overline{a}}}$. Similarly we define $\beta_2$ so that $\beta_2=1$ on the support of $\gamma_2$, but we want the support of $\nabla\beta_2$ to be slightly smaller than $B_1\setminus B_{2r_\varepsilon}$. Namely we want $\nabla\beta_2$ to be supported on $B_{2\varepsilon^{\underline{a}}} \setminus B_{2r_\varepsilon}$, where $0<\underline{a}<a$. Again we can define \[ \beta_2(z) = \tilde{\beta}\left(\frac{\log(|z|/2)}{\log r_\varepsilon}\right),\] where $\tilde{\beta}:\mathbf{R}\to\mathbf{R}$ is a cutoff function such that $\tilde{\beta}(r)=0$ for $r < \underline{a}/a$ and $\tilde{\beta}(r)=1$ for $r > 1$. Once again, we obtain \[ \Vert \nabla \beta_2\Vert_{C^{3,\alpha}_{-1}}\leqslant \frac{c}{|\log r_\varepsilon|}\leqslant \frac{c'}{|\log \varepsilon|},\] for sufficiently small $\varepsilon$. Let $\varphi\in (C^{0,\alpha}_{\delta-4})^T$. The function $\gamma_1\varphi$ can be thought of as being defined on $M_p$. Since $\Vert\gamma_1\Vert_{C^{0,\alpha}_0}\leqslant c$ and the metrics $\omega_\varepsilon$ and $\omega$ are uniformly equivalent, we have \[ \Vert \gamma_1\varphi\Vert_{C^{0,\alpha}_{\delta-4}(M_p)} \leqslant c\Vert \varphi\Vert_{C^{0,\alpha}_{\delta-4}}.\] It follows from Proposition~\ref{prop:Mp} that there exist some $f\in \overline{\mathfrak{h}}$ and $P_1(\gamma_1\varphi)$ with \begin{equation}\label{eq:b1} \Vert P_1(\gamma_1\varphi)\Vert_{C^{4,\alpha}_{\delta}(M_p)} +|f| \leqslant c\Vert\varphi\Vert_{C^{0,\alpha}_{\delta -4}} \end{equation} for which \begin{equation}\label{eq:Mp} L_{\omega}P_1(\gamma_1\varphi) -\frac{1}{2}\nabla\mathbf{s}(\omega)\cdot\nabla P_1(\gamma_1\varphi)- f = \gamma_1\varphi.
\end{equation} Similarly the function $\gamma_2\varphi$ can be thought of as a function on $Bl_0\mathbf{C}^m$, and from the definition of our norm we have \[ \Vert \gamma_2\varphi\Vert_{C^{0,\alpha}_{\delta-4}(Bl_0\mathbf{C}^m)} \leqslant c\varepsilon^{\delta-4}\Vert\varphi\Vert_{C^{0,\alpha}_{ \delta-4}}.\] From Proposition~\ref{prop:BlC} we have some $P_2(\gamma_2\varphi)$ with \begin{equation}\label{eq:b2} \Vert P_2(\gamma_2\varphi)\Vert_{ C^{4,\alpha}_{\delta}(Bl_0\mathbf{C}^m,\eta)}\leqslant c\Vert\varepsilon^4\gamma_2\varphi\Vert_{ C^{0,\alpha}_{\delta-4}(Bl_0\mathbf{C}^m,\eta)} \leqslant c\varepsilon^\delta\Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}, \end{equation} for which \[ L_{\eta}P_2(\gamma_2\varphi) = \varepsilon^4\gamma_2\varphi,\] so, since $L_{\varepsilon^2\eta}=\varepsilon^{-4}L_\eta$, we also have \[ L_{\varepsilon^2\eta}(P_2(\gamma_2\varphi)) = \gamma_2\varphi.\] We can think of $\beta_2P_2(\gamma_2\varphi)$ as a function on $Bl_pM$ or on $Bl_0\mathbf{C}^m$. We then define \[ P(\varphi) = \beta_1P_1(\gamma_1\varphi) + \beta_2P_2(\gamma_2\varphi),\] where we are thinking of the annulus $B_1\setminus B_\varepsilon$ as a subset of $M_p$, $Bl_0\mathbf{C}^m$ and $Bl_pM$ at the same time. The bounds (\ref{eq:b1}) and (\ref{eq:b2}) imply that \begin{equation} \label{eq:b3} \Vert P(\varphi)\Vert_{C^{4,\alpha}_\delta} \leqslant c\Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}. \end{equation} We want to show that the operator $\varphi\mapsto (P(\varphi),f)$ gives an approximate inverse to the operator $G$. \begin{claim} For sufficiently small $\varepsilon$ we have \[ \Big\Vert L_{\omega_\varepsilon} (P\varphi) - \frac{1}{2}X(P\varphi) - \mathbf{l}(f) - \varphi\Big\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant \frac{1}{2} \Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}.
\] \end{claim} To prove this note that we can write the expression we want to estimate as \[ \begin{gathered} L_{\omega_\varepsilon}(\beta_1P_1(\gamma_1\varphi)) - \frac{1}{2}X(\beta_1P_1(\gamma_1\varphi)) - \gamma_1\mathbf{l}(f) - \gamma_1\varphi \\ + L_{\omega_\varepsilon}(\beta_2P_2(\gamma_2\varphi)) - \frac{1}{2}X(\beta_2P_2(\gamma_2\varphi)) - \gamma_2\mathbf{l}(f) - \gamma_2\varphi, \end{gathered}\] where the terms on the top row are all supported in $M\setminus B_{\varepsilon^{\overline{a}}}$ and the terms on the bottom row are supported in $B_{2\varepsilon^{\underline{a}}}$. We first deal with $M\setminus B_{\varepsilon^{\overline{a}}}$. On this set we have $\omega_\varepsilon=\omega+i\partial\overline{\partial}\rho$, where \[ \rho(z) = \gamma_2(|z|)(-\varphi(z) + \varepsilon^2\psi(\varepsilon^{-1}z)).\] It follows that on the complement of $B_{\varepsilon^{\overline{a}}}$ we have \[ \Vert \rho\Vert_{C^{4,\alpha}_2(M\setminus B_{\varepsilon^{ \overline{a}}})} \leqslant c(r_\varepsilon^2 + \varepsilon^{(2m-2)(1- \overline{a})})=o(1),\] where by $o(1)$ we mean a constant going to zero as $\varepsilon\to 0$. By the argument in Proposition~\ref{prop:linear} this implies that on the complement of $B_{\varepsilon^{\overline{a}}}$ we have \[ \Vert L_{\omega_\varepsilon}-L_{\omega}\Vert = o(1).\] At the same time $\mathbf{s}(\omega)-\mathbf{l}(s)$ is supported on $B_{2r_\varepsilon}$ and is bounded in $C^{1,\alpha}_0$ by Lemma~\ref{lem:lb}. It follows from this that \begin{equation}\label{eq:Xgrads} \Vert (X - \nabla\mathbf{s}(\omega))\varphi \Vert_{C^{0,\alpha}_{\delta-4}} \leqslant cr_\varepsilon^2\Vert \varphi\Vert_{C^{4,\alpha}_\delta}.
\end{equation} Similarly, inside $B_{2\varepsilon^{\underline{a}}}$ we have \[ \Big\Vert L_{\omega} + \frac{1}{2}X - L_{\varepsilon^2\eta} \Big\Vert = o(1).\] Therefore it remains to show that for sufficiently small $\varepsilon$ we have \begin{equation}\label{ineq:1} \Vert L_{\omega}(\beta_1P_1(\gamma_1\varphi)) - \frac{1}{2} \nabla\mathbf{s}(\omega)\cdot\nabla (\beta_1P_1(\gamma_1\varphi)) - \gamma_1\mathbf{l}(f) - \gamma_1\varphi \Vert_{C^{0,\alpha}_{\delta-4}(M\setminus B_\varepsilon)} < \frac{1}{4}\Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}, \end{equation} and \[ \Vert L_{\varepsilon^2\eta}(\beta_2P_2(\gamma_2\varphi)) - \gamma_2\mathbf{l}(f) - \gamma_2\varphi\Vert_{C^{0,\alpha}_{\delta-4}(B_1)} < \frac{1}{4} \Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}.\] For the first inequality note, using Equation~(\ref{eq:Mp}), that \[ \begin{aligned} &L_{\omega}(\beta_1P_1(\gamma_1\varphi)) - \frac{1}{2} \nabla\mathbf{s}(\omega)\cdot\nabla (\beta_1P_1(\gamma_1\varphi)) - \gamma_1\mathbf{l}(f) - \gamma_1\varphi \\ = &\beta_1\gamma_1\varphi + \beta_1f - \gamma_1\mathbf{l}(f) - \gamma_1\varphi + D^3(\nabla\beta_1\star P_1(\gamma_1\varphi)) \\ = &\beta_1f - \gamma_1\mathbf{l}(f) + D^3(\nabla\beta_1\star P_1(\gamma_1\varphi)), \end{aligned}\] where $D^3$ denotes a 3rd order differential operator with the coefficient of $\nabla^i$ bounded in $C^{4,\alpha}_{i-3}(M_p)$ and $\star$ is a bilinear algebraic operator.
Since $\beta_1f - \gamma_1\mathbf{l}(f)$ is supported in $B_{2r_\varepsilon}$ and is bounded in $C^{0,\alpha}_0$ by $c|f|$, we get \[ \Vert \beta_1f - \gamma_1\mathbf{l}(f)\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant cr_\varepsilon^{4-\delta}\Vert \varphi\Vert_{C^{0,\alpha}_{\delta-4}}.\] Finally we have \[ \Vert D^3(\nabla\beta_1\star P_1(\gamma_1\varphi))\Vert_{C^{0,\alpha}_{ \delta-4}} \leqslant c\Vert \nabla\beta_1\Vert_{C^{3,\alpha}_{-1}} \Vert P_1(\gamma_1 \varphi)\Vert_{C^{3,\alpha}_{\delta}} = o(1) \Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}},\] so for small enough $\varepsilon$ Inequality~(\ref{ineq:1}) holds. The proof of the second inequality is similar; just note that $\gamma_2\mathbf{l}(f)$ is supported in $B_{2r_\varepsilon}$, so \[ \Vert\gamma_2\mathbf{l}(f)\Vert_{C^{0,\alpha}_{\delta-4}(B_1)} \leqslant c r_\varepsilon^{4-\delta}\Vert\varphi\Vert_{C^{0,\alpha}_{\delta-4}}. \] This proves the claim, so if we write $\tilde{P}(\varphi) = (P\varphi,f)$, this means that the operator norm satisfies $\Vert G\circ\tilde{P}-I\Vert \leqslant \frac{1}{2}$, which implies that we have a uniformly bounded inverse $(G\circ\tilde{P})^{-1}$. This in turn shows that $G$ has a right inverse $\tilde{P}\circ(G\circ\tilde{P})^{-1}$ whose norm is bounded independently of $\varepsilon$, which is what we wanted. When $m=2$ we first work with the operator \[\begin{gathered} G_0 : (C^{4,\alpha}_\delta)^T\times\overline{\mathfrak{h}}_0 \times\mathbf{R}\to (C^{0,\alpha}_{\delta-4})^T \\ (\varphi,f,t)\mapsto \mathcal{D}^*_\omega\mathcal{D}_\omega(\varphi) - \mathbf{l}(f) + t\chi, \end{gathered}\] where $\chi$ is the function from Proposition~\ref{prop:BlC}. We are thinking of $\chi$ as a function on $Bl_pM$ using the identification of $B_\varepsilon\subset Bl_pM$ with $B_1\subset Bl_0\mathbf{C}^m$, and in addition $\overline{ \mathfrak{h}}_0$ denotes the functions $f\in\overline{\mathfrak{h}}$ for which the lifting $\mathbf{l}(f)$ has zero mean.
The same argument as above can be used to show that if $\varepsilon$ is small then $G_0$ is invertible, with the inverse bounded independently of $\varepsilon$. Since $\mathcal{D}^*\mathcal{D}(\varphi)$ and $\mathbf{l}(f)$ have zero mean, this implies that the operator \[\begin{gathered} G_1 : (C^{4,\alpha}_\delta)^T\times\overline{\mathfrak{h}}_0 \to (C^{0,\alpha}_{\delta-4})^T_0 \\ (\varphi,f)\mapsto \mathcal{D}^*_\omega\mathcal{D}_\omega(\varphi) - \mathbf{l}(f) \end{gathered}\] also has a bounded inverse, where $(C^{0,\alpha}_{\delta-4})^T_0$ consists of functions with zero mean. Now note that if $\delta < 4-2m$ then \[ \Vert \varphi\Vert_{L^1} \leqslant c\varepsilon^{\delta-(4-2m)}\Vert \varphi\Vert_{C^{0,\alpha}_{\delta-4}}.\] It follows, by applying $G_1^{-1}$ to $\varphi-\overline{\varphi}$ and then absorbing the mean value $\overline{\varphi}$ into $f$, that the operator \[\begin{gathered} G_2 : (C^{4,\alpha}_\delta)^T\times\overline{\mathfrak{h}} \to (C^{0,\alpha}_{\delta-4})^T \\ (\varphi,f)\mapsto \mathcal{D}^*_\omega\mathcal{D}_\omega(\varphi) - \mathbf{l}(f) \end{gathered}\] also has a bounded inverse, but we only get $\Vert G_2^{-1}\Vert < C\varepsilon^{\delta-(4-2m)}=C\varepsilon^{-\theta}$. Finally we can compare $G_2$ with the operator $G$ in the statement of the proposition using (\ref{eq:Xgrads}), and we find that $G$ is also invertible with the same bound, if $\theta$ is small enough so that $r_\varepsilon^2\ll \varepsilon^\theta$. \end{proof} \subsection{The nonlinear equation} We are now ready to solve the equation of Proposition~\ref{prop:gluing} (see also Equation (\ref{eq:2})), i.e.
we want to find a $T$-invariant function $u\in C^\infty(Bl_pM)$ and $f\in\overline{\mathfrak{h}}$ satisfying \[ \mathbf{s}(\omega_\varepsilon + i\partial\overline{\partial} u) - \frac{1}{2}\nabla\mathbf{l}(f + s) \cdot\nabla u = \mathbf{l}(f+s).\] Recall that here $\mathbf{l}(f)$ and $\mathbf{l}(s)$ are our lifts of the functions $f,\mathbf{s}(\omega)\in \overline{\mathfrak{h}}$ to the blowup $Bl_pM$, defined in Definition~\ref{def:lifting}. As before we will write $X=\nabla\mathbf{l}(s)$. Let us write the equation as \begin{equation}\label{eq:nonlinear} L_{\omega_\varepsilon}(u) - \frac{1}{2}X(u) - \mathbf{l}(f) = \mathbf{l}(s) - \mathbf{s}(\omega_\varepsilon) - Q_{\omega_\varepsilon}(u) + \frac{1}{2}\nabla\mathbf{l}(f)\cdot\nabla u. \end{equation} Following \cite{APS06} we first modify $\omega$ on $M\setminus\{p\}$ so that it matches up with the Burns-Simanca metric to higher order. For this let $\Gamma$ be a $T$-invariant solution of the linear equation \begin{equation}\label{eq:Gamma} \mathcal{D}^*_\omega \mathcal{D}_\omega\Gamma = h\qquad \text{on } M\setminus\{p\} \end{equation} for some $h\in\overline{\mathfrak{h}}$, such that $\Gamma$ has an expansion \[ \Gamma(z) = -|z|^{4-2m} + \tilde{\Gamma}, \] where $\tilde{\Gamma} = O(|z|^{5-2m})$ for $m>2$ and $\Gamma$ has leading term $\log|z|$ when $m=2$. It follows from this expansion that $\Gamma$ is a distributional solution of \[ \mathcal{D}^*_\omega\mathcal{D}_\omega \Gamma = h - c_m\delta_p,\] where $c_m>0$ is a constant depending on the dimension and $\delta_p$ is the delta function at $p$. We then find that for all $g\in\overline{\mathfrak{h}}$ we have \[ \int_M gh\omega^m = c_m g(p),\] so $h=c_m\mu(p)+\lambda$, where $\lambda=\mathrm{Vol}(M)^{-1}c_m$ is a constant.
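The integral identity above can be seen, at least formally, by pairing the distributional equation with $g$ and using that $\mathcal{D}^*_\omega\mathcal{D}_\omega$ is self-adjoint with $\mathcal{D}^*_\omega\mathcal{D}_\omega g = 0$ for $g\in\overline{\mathfrak{h}}$: \[ \int_M gh\,\omega^m - c_m g(p) = \langle \mathcal{D}^*_\omega\mathcal{D}_\omega\Gamma,\, g\rangle = \langle \Gamma,\, \mathcal{D}^*_\omega\mathcal{D}_\omega g\rangle = 0. \]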
Define the metric $\tilde{\omega}=\omega+i\partial\overline{\partial}\Gamma$ on $M\setminus\{p\}$, so \[ \tilde{\omega} = i\partial\overline{\partial}\left(\frac{|z|^2}{2} + \varepsilon^{2m-2}\Gamma(z) + \varphi(z)\right),\] where recall that $\omega = i\partial\overline{\partial}(|z|^2/2 + \varphi(z))$ near $p$. We can then write \[ \tilde{\omega} = i\partial\overline{\partial}\left( \frac{|z|^2}{2} - \varepsilon^{2m-2} |z|^{4-2m} + \varepsilon^{2m-2}\tilde{\Gamma}(z) + \varphi(z)\right)\] when $m>2$. At the same time recall that the Burns-Simanca metric $\eta$ has an expansion \[ \eta = i\partial\overline{\partial}\left(\frac{|z|^2}{2} - |z|^{4-2m} + \tilde{\psi}(z)\right),\] where $\tilde{\psi} = O(|z|^{3-2m})$ for large $z$. We define the metric $\tilde{\omega}_\varepsilon$ by gluing $\tilde{\omega}$ to $\varepsilon^2\eta$ as before, to get on the annulus $B_{2r_\varepsilon}\setminus B_{r_\varepsilon}$ \begin{equation}\label{eq:annulus} \begin{gathered} \tilde{\omega}_\varepsilon = i\partial\overline{\partial}\left( \frac{|z|^2}{2} - \varepsilon^{2m-2}|z|^{4-2m} + \gamma_1(|z|)\left[\varepsilon^{2m-2}\tilde{\Gamma}(z) + \varphi(z)\right]\right. \\ + \gamma_2(|z|) \varepsilon^2\tilde{\psi}(\varepsilon^{-1} z)\Big). \end{gathered} \end{equation} Moreover outside $B_{2r_\varepsilon}$ we have $\tilde{\omega}_\varepsilon=\tilde{\omega}$ while inside $B_{r_\varepsilon}$ we have $\tilde{\omega}_\varepsilon=\varepsilon^2\eta$. Note that in terms of our previous approximate metric $\omega_\varepsilon$ \begin{equation} \label{eq:newmetric} \tilde{\omega}_\varepsilon = \omega_\varepsilon + i\partial\overline{\partial}\left[\varepsilon^{2m-2} \gamma_1(|z|) \Gamma(z)\right]. \end{equation} If $m=2$ then we can glue $\tilde{\omega}$ to $\varepsilon^2\eta$ in the same way.
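The reason this gluing no longer has an error at order $|z|^{4-2m}$ is the elementary rescaling identity \[ \varepsilon^2\left(\frac{|\varepsilon^{-1}z|^2}{2} - |\varepsilon^{-1}z|^{4-2m}\right) = \frac{|z|^2}{2} - \varepsilon^{2m-2}|z|^{4-2m}, \] so on the annulus the potentials of $\tilde{\omega}$ and $\varepsilon^2\eta$ share the first two terms in (\ref{eq:annulus}), and only the lower order tails $\varepsilon^{2m-2}\tilde{\Gamma}+\varphi$ and $\varepsilon^2\tilde{\psi}(\varepsilon^{-1}z)$ are interpolated.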
We want to find a solution to Equation~(\ref{eq:nonlinear}) as a perturbation of $\tilde{\omega}_\varepsilon$ so we write \begin{equation}\label{eq:expand} \begin{gathered} u = \varepsilon^{2m-2}\gamma_1\Gamma + v \\ f = \varepsilon^{2m-2}h + g. \end{gathered} \end{equation} Substituting this into Equation~(\ref{eq:nonlinear}) and rearranging we get \[ \begin{aligned} L_{\omega_\varepsilon}(v) - \frac{1}{2}X(v)- \mathbf{l}(g) =& \mathbf{l}(s) - \mathbf{s}(\omega_\varepsilon) - Q_{\omega_\varepsilon}(u) + \frac{1}{2}\nabla\mathbf{l}(f)\cdot \nabla u \\ & -L_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma) + \frac{1}{2} X(\varepsilon^{2m-2} \gamma_1\Gamma) + \varepsilon^{2m-2}\mathbf{l}(h). \end{aligned}\] We can write this as a fixed point problem \[ (v, g) = \mathcal{N}(v,g) \] where we use the inverse $P$ constructed in Proposition~\ref{prop:inv} and \[\begin{aligned} \mathcal{N}(v,g) = & P\bigg\{ \mathbf{l}(s) - \mathbf{s} (\omega_\varepsilon) - Q_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma + v)\\ &\quad + \frac{1}{2}\nabla(\varepsilon^{2m-2}\mathbf{l}(h)+\mathbf{l}(g))\cdot \nabla\left(\varepsilon^{2m-2}\gamma_1\Gamma + v\right) \\ &\quad -L_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma) + \frac{1}{2} X(\varepsilon^{2m-2} \gamma_1\Gamma) + \varepsilon^{2m-2}\mathbf{l}(h)\bigg\}. \end{aligned}\] We first show that \[ \mathcal{N} : (C^{4,\alpha}_\delta)^T\times\overline{\mathfrak{h}} \to (C^{4,\alpha}_\delta)^T\times\overline{\mathfrak{h}} \] is a contraction on a small ball. \begin{lem}\label{lem:contract2} There exist constants $c_0, \varepsilon_0 > 0$ such that for $\varepsilon < \varepsilon_0$ the operator $\mathcal{N}$ is a contraction on the set \[ \{(v,g)\quad :\quad \Vert v\Vert_{C^{4,\alpha}_2}, |g| < c_0 \}\] with constant $1/2$. \end{lem} \begin{proof} Suppose $m>2$.
Since $P$ is bounded independently of $\varepsilon$, we need to control \[ Q_{\omega_\varepsilon}(u_1) - \frac{1}{2}\nabla\mathbf{l}(f_1)\cdot\nabla u_1 - Q_{\omega_\varepsilon}(u_2) + \frac{1}{2}\nabla\mathbf{l}(f_2)\cdot\nabla u_2,\] where \[ \begin{gathered} u_i = \varepsilon^{2m-2}\gamma_1\Gamma + v_i \\ f_i = \varepsilon^{2m-2}h + g_i. \end{gathered}\] First note that \[ \Vert\varepsilon^{2m-2}\Gamma\Vert_{C^{k,\alpha}_2} \leqslant c(\varepsilon r_\varepsilon^{-1})^{2m-2} = o(1) \] as $\varepsilon\to 0$. Hence for any $\lambda > 0$ we can choose sufficiently small $c_0$ and $\varepsilon$ for which Lemma~\ref{lem:contract} implies that \[ \Vert Q_{\omega_\varepsilon}(u_1) - Q_{\omega_\varepsilon}(u_2) \Vert_{C^{0,\alpha}_{\delta -4}} \leqslant \lambda \Vert u_1-u_2\Vert_{C^{4,\alpha}_\delta} = \lambda \Vert v_1-v_2 \Vert_{C^{4,\alpha}_\delta}.\] On the other hand we have \[ \begin{aligned} \Vert\nabla \mathbf{l}(f_1)\cdot \nabla u_1 &- \nabla \mathbf{l}(f_2) \cdot\nabla u_2\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant \\ &\leqslant \Vert\nabla \mathbf{l}(f_1)\cdot (\nabla u_1 - \nabla u_2)\Vert_{C^{0,\alpha}_{\delta-4}} + \Vert(\nabla \mathbf{l}(f_1) - \nabla \mathbf{l}(f_2))\cdot\nabla u_2\Vert_{C^{0,\alpha}_{\delta-4}} \\ &\leqslant \Vert\nabla \mathbf{l}(f_1)\Vert_{C^{0,\alpha}_{-3}} \Vert u_1-u_2\Vert_{C^{4,\alpha}_{\delta}} + \Vert u_2\Vert_{C^{4,\alpha}_{\delta}}\Vert \nabla \mathbf{l}(f_1-f_2)\Vert_{C^{0, \alpha}_{-3}} \\ &\leqslant c(|f_1|\cdot\Vert u_1-u_2\Vert_{C^{4,\alpha}_\delta} + \Vert u_2\Vert_{C^{4,\alpha}_{\delta}} |f_1-f_2|), \end{aligned}\] where we used Lemma~\ref{lem:lb}. From this it is clear that by choosing $c_0$ and $\varepsilon$ sufficiently small, $\mathcal{N}$ is a contraction with constant $1/2$.
If $m=2$ then $P$ is not bounded independently of $\varepsilon$, but if we choose $\delta < 4-2m$ very close to $4-2m$ then the bound only blows up slowly as $\varepsilon\to0$ and the same argument works. \end{proof} Next we need to bound $\mathcal{N}(0,0)$, which is the same as estimating $\Vert F\Vert_{C^{0,\alpha}_{\delta-4}}$ where $F$ is the function \begin{equation}\label{eq:F} \begin{gathered} F= \mathbf{l}(s) - \mathbf{s}(\omega_\varepsilon) - Q_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma) + \frac{1}{2}\nabla\big(\varepsilon^{2m-2}\mathbf{l}(h)\big)\cdot \nabla\big(\varepsilon^{2m-2} \gamma_1\Gamma\big) \\ -L_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma) + \frac{1}{2} X(\varepsilon^{2m-2} \gamma_1\Gamma) + \varepsilon^{2m-2}\mathbf{l}(h). \end{gathered} \end{equation} \begin{lem}\label{lem:F} Choose $\delta$ very close to $4-2m$ with $\delta > 4-2m$ for $m>2$ and $\delta < 4-2m$ for $m=2$. Let $r_\varepsilon = \varepsilon^{\frac{2m-1}{2m+1}}.$ Then we have the estimate \[ \Vert F\Vert_{C^{0,\alpha}_{\delta-4}} \leqslant cr_\varepsilon^{4-\delta},\] where $F$ is defined by Equation~(\ref{eq:F}). \end{lem} \begin{proof} To prove this we look at three different pieces of $Bl_pM$ separately, namely $M\setminus B_{2r_\varepsilon}$, $B_{2r_\varepsilon}\setminus B_{r_\varepsilon}$ and $B_{r_\varepsilon}$. First of all in $B_{r_\varepsilon}$ we have $F = \mathbf{l}(s) + \varepsilon^{2m-2}\mathbf{l}(h)$, but note that by Lemma~\ref{lem:lb} \[ \Vert \mathbf{l}(s) \Vert_{C^{0,\alpha}_0}, \quad \Vert \mathbf{l}(h)\Vert_{C^{0,\alpha}_0 } \leqslant c, \] which implies that $\Vert F\Vert_{C^{0,\alpha}_{\delta-4} (B_{r_\varepsilon})} \leqslant cr_\varepsilon^{4-\delta}$. On the set $M\setminus B_{2r_\varepsilon}$ note that $\omega_\varepsilon=\omega$, so $\mathbf{l}(s)= \mathbf{s}(\omega_\varepsilon)$ and $\mathbf{l}(h) = h$.
In addition \[ \mathcal{D}^*_\omega\mathcal{D}_\omega \Gamma = L_\omega\Gamma - \frac{1}{2}X(\Gamma) = h \] using Equations~(\ref{eq:Gamma}) and (\ref{eq:Lichn}). This means that on the set $M\setminus B_{2r_\varepsilon}$ \begin{equation}\label{eq:F1} F = -Q_\omega(\varepsilon^{2m-2}\Gamma) + \frac{1}{2} \varepsilon^{4m-4}\nabla h\cdot\nabla \Gamma. \end{equation} It is useful to note that $\Vert\gamma_1\Gamma\Vert_{C^{4,\alpha}_w}$ is bounded by $cr_\varepsilon^{4-2m-w}$ for $w > 4-2m$ and by $c$ for $w < 4-2m$. For the second term in (\ref{eq:F1}) we have \[\begin{aligned} \varepsilon^{4m-4}\Vert \nabla h\cdot \nabla\Gamma \Vert_{C^{0,\alpha}_{\delta-4}(M\setminus B_{2r_\varepsilon})} &\leqslant c\varepsilon^{4m-4} \Vert \nabla\Gamma\Vert_{C^{0,\alpha}_{\delta-4}} \\ &\leqslant c\varepsilon^{4m-4} \Vert \Gamma\Vert_{ C^{1,\alpha}_{\delta-3}} \\ &\leqslant c\varepsilon^{4m-4} \ll r_\varepsilon^{4-\delta}, \end{aligned}\] as long as $\delta$ is close to $4-2m$. For the term involving $Q_\omega$ we use Proposition~\ref{prop:linear2} below. Indeed \[ \Vert Q_\omega(\varepsilon^{2m-2}\Gamma)\Vert_{C^{0,\alpha}_{ \delta-4}(M\setminus B_{2r_\varepsilon})} \leqslant c\varepsilon^{4m-4} r_\varepsilon^{6-4m-\delta}\leqslant cr_\varepsilon^{4-\delta},\] as long as $m\geqslant 2$. 
Finally on the annulus $A_\varepsilon = B_{2r_\varepsilon}\setminus B_{r_\varepsilon}$ we first note that by Equation~(\ref{eq:newmetric}) we have \[ \mathbf{s}(\omega_\varepsilon) + L_{\omega_\varepsilon}(\varepsilon^{2m-2} \gamma_1\Gamma) + Q_{\omega_\varepsilon}(\varepsilon^{2m-2}\gamma_1\Gamma) = \mathbf{s}(\tilde{\omega}_\varepsilon).\] The other terms in the expression for $F$ can be dealt with as before, so we only need to show that on the annulus $A_\varepsilon$ \[ \Vert \mathbf{s}(\tilde{\omega}_\varepsilon) \Vert_{C^{0,\alpha}_{\delta-4}} \leqslant cr_\varepsilon^{4-\delta}.\] This is where we use that $\tilde{\omega}$ matches up with the Burns-Simanca metric to leading order at $p$. We use the formula given in Equation~(\ref{eq:annulus}), which we write as \[ \tilde{\omega}_\varepsilon = i\partial\overline{\partial}\left( \frac{|z|^2}{2} + g(z)\right). \] On $A_\varepsilon$ we then have \[ \mathbf{s}(\tilde{\omega}) = L_\eta(g),\] where $\eta = i\partial\overline{\partial}\left(|z|^2/2 + tg(z)\right)$ for some $t\in[0,1]$. We have \[ \mathbf{s}(\tilde{\omega}) = \Delta^2_0 g + (L_\eta - L_0)g,\] where $L_0 = \Delta^2_0$ is the linearized operator at the flat metric. At the same time \[ \Vert g \Vert_{C^{4,\alpha}_{\delta}(A_\varepsilon)} \leqslant c\varepsilon^{2m-2}r_\varepsilon^{4-2m-\delta} \ll r_\varepsilon^{2-\delta}, \] so Proposition~\ref{prop:linear2} implies that \[ \Vert L_\eta(g) - L_0(g)\Vert_{C^{0,\alpha}_{\delta -4}(A_\varepsilon)} \leqslant cr_\varepsilon^{\delta-2}(\varepsilon^{2m-2} r_\varepsilon^{4-2m-\delta})^2 \leqslant cr_\varepsilon^{4-\delta}. \] Finally for $\Delta_0^2g$ note that $\Delta_0^2(|z|^{4-2m})=0$, so writing \[ g(z) = -\varepsilon^{2m-2}|z|^{4-2m} + \tilde{g}(z)\] we have $\Delta_0^2g = \Delta_0^2\tilde{g}$. 
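The fact that $\Delta_0^2(|z|^{4-2m})=0$ says that $r^{4-2m}$ is biharmonic on $\mathbf{C}^m\setminus\{0\}\cong\mathbf{R}^{2m}\setminus\{0\}$. This is easy to check symbolically with the radial form of the Euclidean Laplacian; the sketch below (ours, for the sample value $m=3$) does so, and the constant factor between the K\"ahler and Riemannian Laplacians does not affect the vanishing:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
m = 3            # sample complex dimension; real dimension n = 2m
n = 2 * m

def radial_laplacian(f):
    # Euclidean Laplacian of a radial function f(r) on R^n
    return sp.diff(f, r, 2) + (n - 1) / r * sp.diff(f, r)

f = r**(4 - 2*m)                     # the leading term of g
res = sp.simplify(radial_laplacian(radial_laplacian(f)))
print(res)  # 0, so |z|^{4-2m} is biharmonic away from the origin
```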
At the same time \[ \Vert \tilde{g} \Vert_{C^{4,\alpha}_\delta (A_\varepsilon)} \leqslant c\varepsilon^{2m-1}r_\varepsilon^{3-2m-\delta} = cr_\varepsilon^{4-\delta},\] so \[ \Vert \Delta_0^2\tilde{g}\Vert_{C^{0,\alpha}_{ \delta-4}(A_\varepsilon)} \leqslant cr_\varepsilon^{4-\delta},\] which gives the result we wanted. \end{proof} We used the following result, whose proof is identical to that of Proposition~\ref{prop:linear}. \begin{prop}\label{prop:linear2} There exist constants $c_0, C>0$ such that if $U\subset Bl_pM$ and $\Vert u\Vert_{C^{4,\alpha}_2(U)} \leqslant c_0$ then for any $v$ we have \[ \Vert L_{\omega_u}(v) - L_{\omega_\varepsilon}(v)\Vert_{C^{0,\alpha}_{\delta-4}(U)} \leqslant C\Vert u\Vert_{C^{4,\alpha}_2(U)}\Vert v\Vert_{C^{4,\alpha}_\delta(U)},\] where $\omega_u=\omega_\varepsilon + i\partial\overline{\partial} u$. It follows that \[ \Vert Q_{\omega_\varepsilon}(u)\Vert_{C^{0,\alpha}_{\delta-4} (U)} \leqslant C\Vert u\Vert_{C^{4,\alpha}_2(U)} \Vert u\Vert_{C^{4,\alpha}_\delta(U)}. \] \end{prop} We can now complete the proof of Proposition~\ref{prop:gluing}. Let us choose $\delta$ close to $4-2m$ with $\delta > 4-2m$ for $m>2$ and $\delta < 4-2m$ for $m=2$, and let $r_\varepsilon=\varepsilon^{\frac{2m-1}{2m+1}}$ as above. First by Lemma~\ref{lem:F} and our bound on the inverse $P$ from Proposition~\ref{prop:inv}, we have \[\Vert\mathcal{N}(0,0)\Vert_{C^{4,\alpha}_\delta} \leqslant c_1r_\varepsilon^{4-\delta}\varepsilon^{-\theta}\] for some constant $c_1$ independent of $\varepsilon$, as long as $\varepsilon$ is sufficiently small. Here $\theta=0$ if $m>2$ and $\theta=4-2m-\delta$ if $m=2$. 
Define \[ S = \{ (v,g) \quad :\quad \Vert v\Vert_{C^{4,\alpha}_\delta}, |g| \leqslant 2c_1r_\varepsilon^{4-\delta}\varepsilon^{-\theta}\}.\] For $(v,g)\in S$ we have \[ \Vert v\Vert_{C^{4,\alpha}_2}\leqslant Cr_\varepsilon^{4-\delta}\varepsilon^{\delta-2-\theta},\] so if $\varepsilon$ and $\theta$ are small enough then Lemma~\ref{lem:contract2} implies that $\mathcal{N}$ is a contraction with constant $1/2$ on $S$. In particular $\mathcal{N}$ then maps $S$ to itself, since if $(v,g)\in S$ then \[ \Vert\mathcal{N}(v,g)\Vert \leqslant \Vert\mathcal{N}(v,g) - \mathcal{N}(0,0)\Vert + \Vert\mathcal{N}(0,0) \Vert \leqslant \frac{1}{2}\Vert(v,g)\Vert + c_1r_\varepsilon^{4-\delta}\varepsilon^{-\theta} \leqslant 2c_1 r_\varepsilon^{4-\delta}\varepsilon^{-\theta}.\] It follows that for small enough $\varepsilon$ there is a fixed point of $\mathcal{N}$ in the set $S$. This gives a solution $(v,g)$ to our equation, with $|g|\leqslant 2c_1r_\varepsilon^{4-\delta}\varepsilon^{-\theta}$. Finally if $\delta$ is sufficiently close to $4-2m$, we find that $r_\varepsilon^{4-\delta} < \varepsilon^\kappa$ for some $\kappa > 2m-2$. Hence from the expansion (\ref{eq:expand}) and the fact that $h=c_m\mu(p)$, we obtain the required expansion in Equation (\ref{eq:expansion}). \subsection{A remark on Conjecture~\ref{conj:stronger}} A natural problem is to compute more terms in the expansion (\ref{eq:expand}) of the element $f$ above. Examining the argument we see that the key point was to first perturb the extremal metric $\omega$ away from $p$, so that it matches up with the Burns-Simanca metric to higher order. To see what the next term should be we need the following. 
\begin{lem}\label{lem:BSmetric} If $m\geqslant 3$ then the K\"ahler potential for a suitable scaling of the Burns-Simanca metric \[\eta=i\partial\overline{\partial}(|z|^2/2 + \psi(z))\] satisfies \begin{equation}\label{eq:metric} \psi(z) = -|z|^{4-2m} + a|z|^{2-2m} + O(|z|^{6-4m}), \end{equation} where $a > 0$. \end{lem} \begin{proof} This can be seen by finding the first few terms in the power series expansion of the solution of the ODE for scalar-flat $U(n)$-invariant metrics on $Bl_0\mathbf{C}^m$, written down in~\cite{AP06}, Section 7 (see also~\cite{Sim91}). Following \cite{AP06} let us write $\eta=i\partial\overline{\partial} A(|z|^2)$ and let $s=|z|^2$. Let us also introduce the variable $t=s^{-1}$ and define the function $\xi(t)=\partial_s A(s)$. From the equations given in \cite{AP06} one can check that $\xi$ satisfies the equation \[ \xi^{m-1}(t)\xi'(t) - (m-1)t^{m-2}\xi(t) + (m-2)t^{m-1} = 0.\] Moreover we want $\xi(0)=1/2$. It is then straightforward to check that the first few terms in the expansion of $\xi$ around $t=0$ are \[ \xi(t) = \frac{1}{2} + 2^{m-2}t^{m-1} - \frac{m-2}{m}2^{m-1}t^m + O(t^{m+1}).\] From this we can recover $A(s)$, and finally by scaling the variable $z$ and the metric, we obtain the first two terms in (\ref{eq:metric}), with $a>0$. To show that the next term is $O(|z|^{6-4m})$ we can either compute more terms in the expansion of $\xi$, or instead we can follow the argument in \cite{AP06}, Lemma 7.2. The scalar curvature of $\eta=i\partial\overline{\partial}(|z|^2/2 + \psi(z))$ is given by \[ \mathbf{s}(\eta) = \Delta^2\psi + Q(\psi),\] where $\Delta$ is the Laplacian for the flat metric (we use the K\"ahler Laplacian and half the Riemannian scalar curvature, so the coefficient of $\Delta^2$ differs from that in \cite{AP06}). 
It is shown in~\cite{AP06} that if $\psi\in C^{4,\alpha}_\delta(Bl_0\mathbf{C}^m)$ for some $\delta < 2$ then \[ Q(\psi)\in C^{0,\alpha}_{2\delta - 6}(Bl_0\mathbf{C}^m).\] If $\mathbf{s}(\eta)=0$ then also $\Delta^2\psi\in C^{0,\alpha}_{2\delta-6}$, so from the regularity theory for the Laplacian acting between weighted spaces we get that \begin{equation}\label{eq:exp} \psi \in C^{4,\alpha}_{2\delta-2} \oplus \mathrm{span}\{1, |z|^{4-2m}, |z|^{2-2m}\}. \end{equation} The reason why we only get these powers of $|z|$ is that these (together with $|z|^2$) are the only $U(n)$ invariant elements in the kernel of $\Delta^2$. Or in other words if we work with $U(n)$ invariant spaces then the indicial roots are $2,0,4-2m$ and $2-2m$. Subtracting a constant we can therefore suppose that $\psi\in C^{4,\alpha}_{4-2m}$ so we can apply the above with $\delta=4-2m$. Then $2\delta-2=6-4m$ so the result follows from (\ref{eq:exp}). \end{proof} In order to match with the metric $\varepsilon^2\eta$ we therefore need to perturb $\omega$ to $\omega+i\partial\overline{\partial}\Gamma$, where \[ \Gamma = -\varepsilon^{2m-2}|z|^{4-2m} + \varepsilon^{2m}a|z|^{2-2m} + \text{lower order terms}\] and $\mathcal{D}_\omega^*\mathcal{D}_\omega\Gamma = h$ on $M\setminus\{p\}$ for some $h\in\overline{\mathfrak{h}}$. By changing the lower order terms if necessary (we can use a term of the order of $\varepsilon^{2m}|z|^{4-2m}$ to cancel the contribution of $\mathrm{Ric}^{i\bar j}\Gamma_{i\bar j}$) we can assume that $\Gamma$ is a distributional solution of \[ \mathcal{D}_\omega^*\mathcal{D}_\omega\Gamma = h - \varepsilon^{2m-2}c_m\delta_p - \varepsilon^{2m}ac_m'\Delta\delta_p,\] where $c_m,c_m'>0$ are constants depending on the dimension. Taking the $L^2$ product of both sides with all $g\in\overline{\mathfrak{h}}$ as before, we find that \[ h = \varepsilon^{2m-2}(\lambda+c_m\mu(p)) + \varepsilon^{2m}ac_m'\Delta\mu(p),\] where $\lambda=\mathrm{Vol}(M)^{-1}c_m$. 
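The expansion of $\xi$ claimed in the proof of Lemma~\ref{lem:BSmetric} can be verified by substituting a power series into the ODE and matching coefficients order by order. The following sympy sketch (ours) does this for the sample value $m=3$, where the claim reads $\xi(t)=\frac12+2t^2-\frac43 t^3+O(t^4)$:

```python
import sympy as sp

t = sp.symbols('t')
m, N = 3, 4                          # sample dimension and truncation order
coeffs = sp.symbols(f'a1:{N + 1}')   # undetermined coefficients a1, ..., aN
xi = sp.Rational(1, 2) + sum(c * t**k for k, c in enumerate(coeffs, start=1))

# the ODE from the lemma: xi^{m-1} xi' - (m-1) t^{m-2} xi + (m-2) t^{m-1} = 0
ode = sp.expand(xi**(m - 1) * sp.diff(xi, t) - (m - 1) * t**(m - 2) * xi
                + (m - 2) * t**(m - 1))

sol = {}
for k in range(N):                   # match the coefficients of t^0, ..., t^{N-1}
    eq = ode.coeff(t, k).subs(sol)
    sol[coeffs[k]] = sp.solve(eq, coeffs[k])[0]

# expected: a_{m-1} = 2^{m-2} and a_m = -((m-2)/m) * 2^{m-1}
print(sol)
```

For $m=3$ this recovers $a_2=2=2^{m-2}$ and $a_3=-4/3=-\frac{m-2}{m}2^{m-1}$, in agreement with the lemma.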
Under the assumptions of Conjecture~\ref{conj:stronger}, if $\varepsilon$ is sufficiently small we can assume that $h=\varepsilon^{2m-2}\lambda$. It seems reasonable to expect that one can deform this metric $\omega+i\partial\overline{\partial}\Gamma$ to a cscK metric, but we have not been successful with this so far. Note also that when $m=2$, then the potential for the Burns-Simanca metric is given by $\psi=\log|z|$ with no lower order terms, so it is not clear where the expression $\mu(p) \pm \varepsilon\Delta\mu(p)$, which we see in the algebro-geometric calculations in the next section, comes from in this case. \section{K-stable blowups}\label{sec:stability} In this section we give the proof of Theorem~\ref{thm:stable}, which is an extension of a result in Stoppa~\cite{Sto10}. First we need to review the notion of K-stability introduced by Donaldson~\cite{Don02}. Let $L\to M$ be an ample line bundle. A \emph{test-configuration} for the pair $(M,L)$ is a flat $\mathbf{C}^*$-equivariant family $\pi:\mathcal{M}\to \mathbf{C}$ together with a $\mathbf{C}^*$-equivariant relatively ample line bundle $\mathcal{L}\to\mathcal{M}$, such that the fiber $(\pi^{-1}(1), \mathcal{L}|_{\pi^{-1}(1)})$ is isomorphic to $(M,L^r)$ for some $r > 0$. Let us denote by $\alpha$ the induced $\mathbf{C}^*$-action on the central fiber $(M_0,L_0)$. This gives rise to a $\mathbf{C}^*$-action on the space of sections $H^0(M_0,L_0^k)$ for each $k$. Let us write $d_k = \dim H^0(M_0,L_0^k)$ and $w_k$ for the total weight of the action on $H^0(M_0,L_0^k)$. Define the numbers $a_0,a_1,b_0,b_1$ to be the coefficients in the expansions \[ \begin{gathered} d_k = a_0k^m + a_1k^{m-1} + \ldots \\ w_k = b_0k^{m+1} + b_1k^m + \ldots, \end{gathered}\] valid for large $k$. 
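As a toy illustration of these expansions (our example, not from the text): for the standard $\mathbf{C}^*$-action $t\cdot[x:y]=[tx:y]$ on $M=\mathbf{P}^1$ with $L=\mathcal{O}(1)$, the space $H^0(\mathbf{P}^1,\mathcal{O}(k))$ has basis $x^iy^{k-i}$ with weights $0,1,\ldots,k$, and the combination $\frac{a_1}{a_0}b_0-b_1$ (the Futaki invariant defined below) vanishes, as one expects for a K\"ahler-Einstein manifold:

```python
import sympy as sp

k = sp.symbols('k', positive=True)
# d_k = dim H^0(P^1, O(k)) = k + 1; the weights 0..k give total weight k(k+1)/2
d_k = k + 1
w_k = sp.expand(k * (k + 1) / 2)

a0, a1 = sp.Poly(d_k, k).all_coeffs()[:2]   # d_k = a0*k + a1        (here m = 1)
b0, b1 = sp.Poly(w_k, k).all_coeffs()[:2]   # w_k = b0*k^2 + b1*k
fut = a1 / a0 * b0 - b1
print(fut)  # 0
```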
We define the \emph{Futaki invariant} of the $\mathbf{C}^*$-action $\alpha$ on $(M_0,L_0)$ to be \[ \mathrm{Fut}(\alpha, M_0, L_0) = \frac{a_1}{a_0}b_0 - b_1.\] Since the $\mathbf{C}^*$-action is induced by the test-configuration $(\mathcal{M},\mathcal{L})$, we also write \[ \mathrm{Fut}(\mathcal{M},\mathcal{L}) = \mathrm{Fut}(\alpha,M_0,L_0). \] \begin{defn} The polarized manifold $(M,L)$ is \emph{K-polystable} if $\mathrm{Fut}(\mathcal{M},\mathcal{L})\geqslant 0$ for all test-con\-fi\-gu\-rations, with equality only if the central fiber of the test-configuration is isomorphic to $M$. \end{defn} We want to study the K-stability of the blowup $Bl_pM$ with the polarization $\pi^*L-\varepsilon E$ for sufficiently small $\varepsilon$. For this the key calculation is to compute the Futaki invariants of $\mathbf{C}^*$-actions on $Bl_pM$. Let us fix a K\"ahler metric $\omega\in c_1(L)$, and suppose that the vector field $v = J\nabla h$ generates a holomorphic $S^1$-action for some $h\in C^\infty(M)$. Replacing $L$ by a large power if necessary, a choice of $h$ gives rise to a lifting of the $S^1$-action to $L$ which can be extended to a holomorphic $\mathbf{C}^*$-action. We define the Futaki invariant of $v$ with the same formula as above. If the vector field $v$ vanishes at $p$ then it has a holomorphic lift $\tilde{v}$ to $Bl_pM$. Consider the $\mathbf{Q}$-line bundle $L_\varepsilon = \pi^*L-\varepsilon E$ on $Bl_pM$ for small rational $\varepsilon$, where $E$ is the exceptional divisor. We have a $\mathbf{C}^*$-action on the space of sections $H^0(Bl_pM, L_\varepsilon^k)$, and so we can define constants $\tilde{a}_0,\tilde{a}_1, \tilde{b}_0, \tilde{b}_1$ corresponding to this action as above, which depend on $\varepsilon$ (if we take $k$ for which $k\varepsilon$ is an integer, then $L_\varepsilon^k$ is a line bundle). The following lemma is an extension of the calculation in Stoppa~\cite{Sto10}. 
\begin{lem}\label{lem:weights} For the action on the blowup we have \[\begin{aligned} \tilde{a}_0 &= a_0 - \frac{\varepsilon^m}{m!} \\ \tilde{a}_1 &= a_1 - \frac{\varepsilon^{m-1}}{2(m-2)!} \\ \tilde{b}_0 &= b_0 + \frac{\varepsilon^m}{m!}h(p) + \frac{\varepsilon^{m+1}}{(m+1)!}\Delta h(p) \\ \tilde{b}_1 &= b_1 + \frac{\varepsilon^{m-1}}{2(m-2)!}h(p) +\frac{(m-2)\varepsilon^m}{2m!}\Delta h(p), \end{aligned}\] where $\Delta h$ is the Laplacian of $h$ with respect to the metric $\omega$. \end{lem} \begin{proof} Let us write $\mathcal{I}_p$ for the ideal sheaf of $p\in M$. For large $k$ we have an isomorphism \[ H^0(Bl_pM, L_\varepsilon^k) = H^0(M, \mathcal{I}_p^{k\varepsilon}L^k).\] To study this space, we use the exact sequence \[ 0\longrightarrow \mathcal{I}_p^{k\varepsilon}L^k \longrightarrow L^k \longrightarrow \mathcal{O}_{k\varepsilon p}\otimes L^k|_p \longrightarrow 0.\] As before, let us write $d_k$ and $w_k$ for the dimension of $H^0(M,L^k)$ and the weight of the action on this space. Similarly write $\tilde{d}_k$ and $\tilde{w}_k$ for the dimension of, and weight of the action on, $H^0(Bl_pM, L_\varepsilon^k)$. From the exact sequence we have \begin{equation}\label{eq:dkwk} \begin{aligned} \tilde{d}_k &= d_k - \dim \mathcal{O}_{k\varepsilon p} \\ \tilde{w}_k &= w_k - w(\mathcal{O}_{k\varepsilon p}\otimes L^k|_p). \end{aligned}\end{equation} Here the weight $w(\mathcal{O}_{k\varepsilon p}\otimes L^k|_p)$ is given by \[ w(\mathcal{O}_{k\varepsilon p}\otimes L^k|_p) = w(\mathcal{O}_{k\varepsilon p}) - kh(p)\dim(\mathcal{O}_{ k\varepsilon p}),\] since the weight of the action on the fiber $L_p$ is $-h(p)$. The sign here depends on our convention that the real part of the $\mathbf{C}^*$-action corresponding to $h$ is generated by $\nabla h$. We can think of $\mathcal{O}_{lp}$ for an integer $l>0$ as being the space of $(l-1)$-jets of functions at $p$, i.e. 
\[ \mathcal{O}_{lp} = \mathbf{C}\oplus T_p^*\oplus\ldots\oplus S^{l-1}T_p^*,\] where $S^i$ is the $i^\mathrm{th}$ symmetric product. The dimension of $\mathcal{O}_{lp}$ is therefore given by \[ \dim(\mathcal{O}_{lp}) = \binom{m+l-1}{m} = \frac{1}{m!}\left( l^m + \frac{m(m-1)}{2} l^{m-1} + O(l^{m-2})\right).\] Similarly if we write $w$ for the weight of the action on $T_p^*$, then we can compute that \[ w(\mathcal{O}_{lp}) = \binom{m+l-1}{m+1}w = \frac{w}{(m+1)!} \left( l^{m+1} + \frac{(m-2)(m+1)}{2}l^m + O(l^{m-1})\right).\] Substituting $k\varepsilon$ for $l$ and using the formulas (\ref{eq:dkwk}) we get \[ \begin{aligned} \tilde{d}_k &= d_k - \frac{\varepsilon^m}{m!} k^m - \frac{\varepsilon^{m-1}}{2(m-2)!} k^{m-1} + O(k^{m-2}) \\ \tilde{w}_k &= w_k - \left(\frac{w\varepsilon^{m+1}}{(m+1)!} - \frac{h(p)\varepsilon^m}{m!}\right)k^{m+1} \\ &\quad - \left(\frac{(m-2)w\varepsilon^m}{2m!} - \frac{h(p)\varepsilon^{m-1}}{2(m-2)!}\right)k^m + O(k^{m-1}) \end{aligned}\] The only thing that remains is to see that the weight of the action on $T_p^*$ is given by $w=-\Delta h(p)$. This follows from the fact that by our convention the induced action on the tangent space $T_p$ is given by the Hessian of $h$ at $p$. 
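The two binomial expansions above are elementary but easy to get wrong, so here is a small symbolic sanity check (ours, for the sample value $m=3$; any $m\geqslant 3$ works the same way):

```python
import sympy as sp

l = sp.symbols('l')
m = 3                                # sample dimension

# dim O_{lp} = C(m+l-1, m)  and  w(O_{lp}) = C(m+l-1, m+1) * w
dim = sp.expand(sp.expand_func(sp.binomial(m + l - 1, m)))
wgt = sp.expand(sp.expand_func(sp.binomial(m + l - 1, m + 1)))

# compare with the leading coefficients claimed in the proof
ok = (dim.coeff(l, m) == sp.Rational(1, sp.factorial(m))
      and dim.coeff(l, m - 1) == sp.Rational(m * (m - 1), 2) / sp.factorial(m)
      and wgt.coeff(l, m + 1) == sp.Rational(1, sp.factorial(m + 1))
      and wgt.coeff(l, m) == sp.Rational((m - 2) * (m + 1), 2) / sp.factorial(m + 1))
print(ok)  # True
```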
\end{proof} A simple calculation then gives \begin{cor}\label{cor:Fblowup} If the Futaki invariant $\mathrm{Fut}(v,M,L)=0$ on $M$ then on the blowup we have \begin{equation}\label{eq:Fut} \begin{aligned} \mathrm{Fut}(\tilde{v}, &Bl_pM, L_\varepsilon) = \frac{\tilde{a}_1}{\tilde{a}_0}\tilde{b}_0-\tilde{b}_1 \\ &= -\frac{\varepsilon^{m-1}}{2(m-2)!}h(p) - \frac{\varepsilon^m}{m!} \left( \frac{m-2}{2}\Delta h(p) - \frac{a_1}{a_0}h(p)\right) + O(\varepsilon^{m+1}), \end{aligned}\end{equation} if $m\geqslant 3$, \[ \begin{aligned} \mathrm{Fut}(\tilde{v}, Bl_pM, L_\varepsilon) &= -\frac{\varepsilon}{2}h(p) + \frac{\varepsilon^2a_1}{2a_0} h(p)+\frac{\varepsilon^3}{2a_0}\left( \frac{a_1}{3}\Delta h(p)- \frac{h(p)}{2}\right) + O(\varepsilon^4), \end{aligned}\] if $m=2$ and $a_1\not=0$ and finally \[ \begin{aligned} \mathrm{Fut}(\tilde{v}, Bl_pM, L_\varepsilon) &= -\frac{\varepsilon}{2}h(p) +\frac{\varepsilon^3}{4a_0}h(p) - \frac{\varepsilon^4}{12a_0}\Delta h(p) + O(\varepsilon^5), \end{aligned}\] if $m=2$ and $a_1=0$. In each case if $h(p)=\Delta h(p)=0$, then $\mathrm{Fut}(\tilde{v}, Bl_pM, L_\varepsilon)=0$ for all $\varepsilon$. \end{cor} Combining this with Proposition~\ref{prop:HM} from Section~\ref{sec:deform} we can prove Theorem~\ref{thm:stable}. \begin{proof}[Proof of Theorem~\ref{thm:stable}] Let us assume that $m>2$, since the argument in the $m=2$ case is essentially identical. We argue by contradiction. We can also assume that $p$ is semistable with respect to the polarization $L$ since if it were strictly unstable, then Stoppa's result~\cite{Sto10} implies that the pair $(Bl_pM, L_\varepsilon)$ is K-unstable for all sufficiently small $\varepsilon$. So suppose that $p$ is semistable with respect to $L$, but it is not polystable with respect to $L+\varepsilon K_M$ for sufficiently small $\varepsilon$. 
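As an aside, the $m\geqslant 3$ expansion in Corollary~\ref{cor:Fblowup} can be reproduced symbolically from Lemma~\ref{lem:weights}. The sketch below (ours, for the sample value $m=3$) assumes the Hamiltonian is normalised so that $b_0=0$; together with $\mathrm{Fut}(v,M,L)=0$ this gives $b_1=0$ as well:

```python
import sympy as sp

eps, a0, a1, h, lap_h = sp.symbols('varepsilon a0 a1 h Delta_h')
m = 3                                  # sample dimension
fact = sp.factorial

# tilde quantities from the lemma, with b0 = b1 = 0 (normalised Hamiltonian)
ta0 = a0 - eps**m / fact(m)
ta1 = a1 - eps**(m - 1) / (2 * fact(m - 2))
tb0 = eps**m / fact(m) * h + eps**(m + 1) / fact(m + 1) * lap_h
tb1 = eps**(m - 1) / (2 * fact(m - 2)) * h + (m - 2) * eps**m / (2 * fact(m)) * lap_h

fut = sp.series(ta1 / ta0 * tb0 - tb1, eps, 0, m + 1).removeO()
expected = (-eps**(m - 1) / (2 * fact(m - 2)) * h
            - eps**m / fact(m) * (sp.Rational(m - 2, 2) * lap_h - a1 / a0 * h))
print(sp.simplify(fut - expected))  # 0
```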
Then by Proposition~\ref{prop:HM} there exists a one-parameter subgroup $\lambda$ such that $\lim\limits_{t\to 0}\lambda(t)\cdot p=q$ and $w_L(q,\lambda)=0$, but $w_{K_M}(q,\lambda)\leqslant 0$. By blowing up the trivial family $M\times\mathbf{C}$ in the closure of the $\mathbf{C}^*$-orbit of $(p,1)$ under the action $t(p,1)=(\lambda(t)p,t)$ we obtain a test-configuration for $(Bl_pM, L_\varepsilon)$ with central fiber $(Bl_qM, L_\varepsilon)$, and the action on $Bl_qM$ is simply the lifting of the action $\lambda$. Suppose that this $\lambda$ is generated by a holomorphic vector field with Hamiltonian function $h$. Then $w_L(q,\lambda)=-h(q)=0$ and $w_{K_M}(q,\lambda)=-\Delta h(q)\leqslant 0$. If $-\Delta h(q)< 0$ then from the formula (\ref{eq:Fut}) we see that for small enough $\varepsilon$ the Futaki invariant of this test-configuration is negative. Whereas if $\Delta h(q)=0$ then we get that the Futaki invariant is zero. In both cases this means that $(Bl_pM,L_\varepsilon)$ is not K-polystable. To deal with the different situations when $m=2$, note that by the Hirzebruch-Riemann-Roch formula $a_1 = -\frac{1}{2}K_M\cdot L$. \end{proof} To relate GIT stability with respect to $L+\delta K_M$ (or $L-\delta K_M$) to moment maps, we need the following. \begin{lem}\label{lem:LK} Let $X$ be a holomorphic Killing field on $(M,\omega)$. If $\iota_X\omega = dh$, then $\iota_X\rho = -d\Delta h$, where $\rho$ is the Ricci form. It follows that if $p$ is polystable with respect to the polarization $L+\delta K_M$ for the action of the Hamiltonian isometry group $H$, then there is a point $q$ in the complex orbit $H^c\cdot p$ such that $\mu(q)+\delta\Delta\mu(q)=0$. 
\end{lem} \begin{proof} We can compute \[ \begin{aligned} 2\iota_X\rho &= \iota_X(dJd\log\det(\omega)) =-d\iota_X(Jd\log\det(\omega)) \\ &=d(\mathcal{L}_{JX}\log\det(\omega)) =d\Lambda \mathcal{L}_{JX}\omega, \end{aligned}\] where $\mathcal{L}$ is the Lie derivative and $\Lambda$ is taking the trace with respect to $\omega$. Since $\mathcal{L}_{JX}\omega = -2i\partial\overline{\partial} h$, we get $\iota_X\rho = -d\Delta h$, which is what we wanted. It follows that if $\mu$ is a moment map for the action of $H$ with respect to the symplectic form $\omega$ then $\mu+\delta\Delta\mu$ is a moment map with respect to $\omega-\delta\rho$. Since $\omega-\delta\rho\in c_1(L+\delta K_M)$, the second statement in the lemma follows from the Kempf-Ness theorem. \end{proof} Finally we give the proof of Corollary~\ref{cor:KE}. \begin{proof}[Proof of Corollary~\ref{cor:KE}] We will prove the following three implications. \begin{itemize} \item[(i)$\Rightarrow$(ii)] This follows from the main theorem in~\cite{SSz09}. \item[(ii)$\Rightarrow$(iii)] This follows from Theorem~\ref{thm:stable}, even though we are using a more restrictive version of K-polystability. The reason is that in finite dimensions when using the Hilbert-Mumford criterion for testing stability of a point, it is enough to look at one-parameter subgroups which commute with a torus fixing the point. This follows for example from the theory of optimal destabilizing one-parameter subgroups (see Kempf~\cite{Kem78}), or also from the Kempf-Ness theorem and the observation that $\mu(p)$ is always in the center of the stabilizer of $p$ since $\mu$ is equivariant. \item[(iii)$\Rightarrow$(i)] This follows from Theorem~\ref{thm:main}. Namely if $p\in M$ is GIT polystable then by replacing $p$ by a different point in its $H^c$-orbit, we can assume that $\mu(p)=0$, so $Bl_pM$ admits an extremal metric in the class $\pi^*[\omega]-\varepsilon[E]$ for small $\varepsilon$. 
Since $(M,\omega)$ is K\"ahler-Einstein, the Futaki invariant of any vector field on $M$ is zero with respect to the class $[\omega]$. Since $\mu(p)=0$ and $\rho=\omega$, Lemma~\ref{lem:LK} implies that $\Delta\mu(p)=0$, so from Corollary~\ref{cor:Fblowup} the Futaki invariant of every vector field on $Bl_pM$ vanishes in the classes $\pi^*[\omega]-\varepsilon[E]$. This means that the extremal metric we obtain actually has constant scalar curvature. \end{itemize} \end{proof} \providecommand{\bysame}{\mathbf{l}eavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\href}[2]{#2} \noindent{\sc Columbia University\\ New York} \\ \noindent{\tt [email protected]} \end{document}
\begin{document} \title{Uniform bounds for ruin probability in Multidimensional Risk Model} \author{Nikolai Kriukov } \address{Nikolai Kriukov, Department of Actuarial Science, University of Lausanne,\\ UNIL-Dorigny, 1015 Lausanne, Switzerland } \email{[email protected]} \date{\today} \maketitle {\bf Abstract:} In this paper we consider some generalizations of the classical $d$-dimensional Brownian risk model. This contribution derives some non-asymptotic bounds for simultaneous ruin probabilities of interest. In addition, we obtain non-asymptotic bounds also for the case of general trend functions and convolutions of our original risk model. {\bf Key Words:} Brownian risk model; Brownian motion; simultaneous ruin probability; uniform bounds. {\bf AMS Classification:} 60G15 \section{Introduction and first Result} Let $\vk B(t),t\ge 0$ be a $d$-dimensional Brownian motion with independent standard Brownian motion components and set $\vk Z(t)= A \vk B(t), t\ge 0$ with $A$ a $d\times d$ real non-singular matrix. The recent contribution \cite{KWW} derived the following remarkable inequality \begin{eqnarray} 1 \le \frac{\pk{ \exists t\in [0,T]:\vk{Z}(t) \ge \vk{b}}}{\pk{\vk{Z}(T)\ge \vk{b}} } &\le& K(T), \quad K(T)= \frac{1} {\pk{\vk{Z}(T)\ge \vk 0} } \label{Korsh0} \end{eqnarray} valid for all $\vk b \in \R^d, T>0$. In our notation bold symbols are column vectors with $d$ rows and all operations are meant component-wise, for instance $\vk x \ge \vk 0$ means $x_i \ge 0$ for all $i\le d$ with $\vk 0= (0 ,\ldots, 0)\in \R^d$. \\ The special and crucial feature of \eqref{Korsh0} is that the bounds are uniform with respect to $\vk b$. Moreover, if at least one component of $\vk b$ tends to infinity, then $\pk{ \exists t\in [0,T]:\vk{Z}(t) \ge \vk{b}}$ can be accurately approximated (up to some constant) by the survival probability $\pk{\vk{Z}(T)\ge \vk{b}}$.\\ Inequality \eqref{Korsh0} has been crucial in the context of Shepp-statistics investigated in \cite{KWW}. 
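A quick Monte Carlo sanity check of \eqref{Korsh0} is easy to run (our illustration; the dimension, threshold and discretisation are arbitrary demo choices, and the discrete-time supremum slightly underestimates the continuous one):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n_steps, n_paths = 2, 1.0, 200, 5000    # demo choices
b = np.array([0.5, 0.5])                      # threshold vector b

dt = T / n_steps
incr = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps, d))
Z = np.cumsum(incr, axis=1)                   # A = I, so Z = B

hit_sup = (Z >= b).all(axis=2).any(axis=1)    # exists t_k <= T with Z(t_k) >= b
hit_T = (Z[:, -1, :] >= b).all(axis=1)        # Z(T) >= b
p_sup, p_T = hit_sup.mean(), hit_T.mean()

K = 1.0 / 2**(-d)                             # K(T) = 1 / P(Z(T) >= 0) = 2^d here
print(p_T, p_sup, K * p_T)                    # observe p_T <= p_sup <= K * p_T
```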
It is also of great importance in the investigation of simultaneous ruin probabilities in vector-valued risk models (see \cite{ZKE,parisian,doi:10.1080/03461238.2021.1902853}). Specifically, consider the multidimensional risk model $$ \vk R(t,u)=\vk a u-\vk X(t), \ \vk X(t)= \vk Z(t)- \vk c t$$ for some vectors $\vk a,\vk c\in\mathbb{R}^d$ and $\vk Z(t),t\ge 0$ defined above. Typically, $\vk R$ models the surplus of all $d$ portfolios of an insurance company, where $a_i u, u>0$ plays the role of the initial capital. Here the component $Z_i$ models the accumulated claim amount up to time $t$ and $c_i t$ is the premium income for the $i$th portfolio. Given a positive integer $k\le d$, of interest is the calculation of the $k$-th simultaneous ruin probability, i.e., the probability that at least $k$ out of $d$ portfolios are ruined on a given time interval $[0,T]$ with $T$ possibly also infinite. That ruin probability can be written as $$\pk{\exists_{t\in[0,T]}:\vk Z(t)-\vk c t\in u \vk S}, \qquad u>0,$$ where $$\vk S:=\bigcup_{\substack{I\subset\{1,\ldots,d\}\\\abs{I}=k}}\vk S_{I},\qquad \vk S_{I}=\{\vk x\in\mathbb{R}^d:~\forall i\in I,~x_i>a_i\}.$$ The particular case $\vk Z(t)= A \vk B(t), t\ge 0$ with $A$ a $d\times d$ non-singular matrix is of special importance for insurance risk models, see e.g., \cite{delsing2018asymptotics}. Clearly, this instance is also of great importance in statistics and probability given the central role of the $\mathbb{R}^d$-valued Brownian motion as a natural limiting process.\\ In \cite{pandemic} it has been shown that \eqref{Korsh0} can be extended for this risk model, i.e., for all $u,T>0$ \begin{eqnarray} 1 \le \frac{\pk{\exists_{t\in [0,T]}: \vk X(t) \in u \vk S} }{ \pk{ \vk X(T)\in u \vk S} } &\le& K_{ \vk S}(T), \ \vk X(t)=\vk Z(t)-\vk c(t), \label{Korsh1} \end{eqnarray} with $\vk c(t)= \vk c t, t\ge 0$ and some known constant $K_{\vk S}(T)>0$. 
Again the bounds are uniform with respect to $u$.\\ It is clear that the inequality \eqref{Korsh1} does not hold for an arbitrary set $\vk S\subset\mathbb{R}^d$. Since Brownian motion has almost surely continuous sample paths, if it hits some closed set $\vk S$, it definitely hits its boundary. Hence, for a one-dimensional Brownian motion $Z$ and all $u>0$ we have \bqny{ \{\exists t\in[0,T]:Z(t)\geq u\}=\{\exists t\in[0,T]:Z(t)=u\}. } Hence, taking $\vk S=\{\vk x\in\mathbb{R}^d: x_1=1\}$ and $\vk c(t)=\vk 0$ we have that \bqny{ \pk{\exists_{t\in [0,T]}: \vk X(t) \in u \vk S}&=&\pk{\exists_{t\in [0,T]}: X_1(t) > u}\geq\pk{X_1(T) > u}>0, \\ \pk{ \vk X(T)\in u \vk S}&=&0, } and \eqref{Korsh1} does not hold. \\ Therefore hereafter we shall consider only closed sets $\vk S$ described as follows: \begin{de}\label{cone_def} Let $\vk X$ and $\vk Z$ be as defined above. The closed Borel set $\vk{S}\subset \mathbb{R}^d$ satisfies the cone condition with respect to the vector-valued process $\vk X$ if there exists a strictly positive function $\varepsilon_{\vk{S}}(t), t> 0$ such that for any point $\vk x\in \vk{S}$ and any $t>0$ there exists a Borel set $\vk V_{\vk x}\subset \vk{S}$ that contains $\vk{x}$, does not depend on $t$, and satisfies $\vk V_{\vk x}-\vk x \subset C \left(\vk V_{\vk x}-\vk x\right)$ for all $C>1$ and $\pk{ \vk Z(t) \in \vk V_{\vk x}-\vk x} \geq\varepsilon_{\vk S}(t)$. \end{de} It is of interest to consider a general trend function in \eqref{Korsh1}. We consider below a large class of trend functions which is tractable if $\vk Z$ has self-similar coordinates with index $\alpha>0$. This is in particular the case when $\vk Z= A \vk B $. \begin{de} A continuous measurable vector-valued function $\vk c:[0,+\infty)\to\mathbb{R}^d$ belongs to $RV_{t_0}(\alpha)$ for some $\alpha>0$, $t_0\in[0,T]$ if for some $M>0$, all $i\in\{1,\ldots, d\}$, $t\in[0,T]$ $$|c_i(t)-c_i(t_0)|\le M|t-t_0|^{\alpha}.$$ \end{de} We state next our first result. 
Below, we call $\vk F:\mathbb{R}^d\to\mathbb{R}^d$ growing if for any $\vk x,\vk y\in\mathbb{R}^d$ with $x_i\geq y_i$ for all $i\in\{1,\ldots, d\}$ we have $F_i(\vk x)\geq F_i(\vk y)$ for all $i\in\{1,\ldots, d\}$. \begin{theo}\label{ConeSet} If $\vk S \subset \mathbb{R}^d$ satisfies the cone condition with respect to the process $ \vk Z=A \vk B$ such that $\vk 0\not\in\vk S$ and $\vk c \in RV_T(1/2),$ then for all constants $T>0, u>1$ the inequality \eqref{Korsh1} holds with \bqny{ K(T)= \frac{2^{d/2}}{\mathfrak{C}(T)\varepsilon_{\vk S}(T)}, \quad \mathfrak{C}(T)=\inf_{t\in [0,T)}e^{-T\left(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}}\right)^\top\Sigma^{-1}\left(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}}\right)}>0, } where $\Sigma$ is the covariance matrix of $\vk Z(T)$. In particular, for any growing function $\vk F:\mathbb{R}^d\to\mathbb{R}^d$ the inequality \bqny{ \pk{\exists_{t\in [0,T] }: \vk F(\vk Z(t)-\vk c(t))>u\vk a}\leq C_T\pk{\vk F(\vk Z(T)-\vk c(T))>u\vk a} } holds for all $\vk a \in \mathbb{R}^d\setminus (-\infty,0]^d$, $u>1$ and some constant $C_T$ which does not depend on $u$. \end{theo} If $\vk Z$ is a given separable random field, it is of interest to determine conditions such that \eqref{Korsh1} can be extended to \begin{eqnarray} 1 \le \frac{\pk{\exists_{ \vk t \in \mathbb{T} }:\vk Z(\vk t)-\vk c(\vk t)\in u \vk S} }{ \pk{ \vk Z(\vk T)-\vk c(\vk T)\in u \vk S} } &\le& K_{ \vk S}(\vk T), \label{Korsh2} \end{eqnarray} where $\mathbb{T} = [0,T_1]\times\cdots\times[0,T_n]$ and $\vk T=(T_1 ,\ldots, T_n)$ has positive components. For the case $\vk Z(\vk t)= \sum_{i=1}^n \vk Z_i(t_i)$, where the $\vk Z_i$'s are independent copies of $\vk Z$, and $\vk c(\vk t)=0$ the result \eqref{Korsh2} was shown in \cite[Thm 1.1]{KWW} for some special set $\vk S$. 
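For concreteness, the constant $K(T)$ of Theorem~\ref{ConeSet} can be evaluated explicitly in a simple case (our example, not from the text): with $A=I$, linear trend $\vk c(t)=\vk c t$ and the upper orthant $\vk S=\{\vk x:\vk x\ge\vk a\}$, $\vk a>\vk 0$, one may take $\vk V_{\vk x}=\vk x+[0,\infty)^d$ and $\varepsilon_{\vk S}(t)=\pk{\vk Z(t)\ge\vk 0}=2^{-d}$, and the quadratic form in $\mathfrak{C}(T)$ reduces to $|\vk c|^2(T-t)$, minimised at $t=0$:

```python
import numpy as np

d, T = 2, 1.0
c = np.array([0.3, -0.1])                  # demo trend slope, c(t) = c * t

# inf_{t in [0,T)} exp(-T * |c|^2 * (T - t)) is attained at t = 0
frakC = np.exp(-T**2 * c @ c)
eps_S = 2.0**(-d)                          # P(Z(t) >= 0) for A = I
K = 2**(d / 2) / (frakC * eps_S)
print(K)                                   # the uniform constant K(T)
```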
For more general sets $\vk S$ we have the following result: \begin{theo}\label{convshift} If $\vk S\subset \mathbb{R}^d$ satisfies the cone condition with respect to $\vk Z$, $\vk 0\not\in\vk S$ and all $\vk c_k \in RV_{T_k}(1/2),$ then for all constants $T_1,\ldots, T_n>0, u>1$ the inequality \eqref{Korsh2} holds with $\vk Z(\vk t)= \sum_{k=1}^{n}\vk Z_k(t_k)$ and $\vk c(\vk t)=\sum_{k=1}^{n}\vk c_k(t_k)$ with \bqny{ K_{\vk S}(\vk T)=\prod_{k=1}^{n}\frac{2^{d/2}}{\mathfrak{C}_k(T_k)\varepsilon_{\vk S}(T_k)}, \quad \mathfrak{C}_k(T_k)=\inf_{t\in[0,T_k)}e^{-T_k\left(\frac{\vk c_k(T_k)-\vk c_k(t)}{\sqrt{T_k-t}}\right)^\top\Sigma^{-1}(T_k)\left(\frac{\vk c_k(T_k)-\vk c_k(t)}{\sqrt{T_k-t}}\right)}>0, } where $\varepsilon_{\vk S}$ is any function satisfying the conditions of Definition~\ref{cone_def}. \end{theo} \section{Discussion} In this section, as in the Introduction, we first consider $$ \vk Z(t)=A\vk B(t),\qquad t\geq 0$$ with $A$ non-singular and $\vk B$ a $d$-dimensional Brownian motion with independent components. We shall discuss next the generalisation of the upper bound \eqref{Korsh1} for various special cases. \subsection{Order statistics} The classical multidimensional Brownian motion risk model (see \cite{delsing2018asymptotics}) is formulated in terms of some risk process $\vk R$ specified by $$ \vk R(t,u)=\vk a u-\vk Z(t)+\vk c t$$ for some vectors $\vk a,\vk c\in\mathbb{R}^d$. We are interested in the finite-time simultaneous ruin probability for $k$ out of $d$ portfolios, i.e. the probability that at least $k$ portfolios are ruined. 
In other words, we are investigating the probability $$\pk{\exists_{t\in[0,T]},\exists_{\mathcal{I}\subset\{1,\ldots,d\}}:\abs{\mathcal{I}}=k,~\forall i\in\mathcal{I}~Z_i(t)-c_it\geq a_i u},$$ which in view of our previous notation reads $$\pk{\exists_{t\in[0,T]}:\vk Z(t)-\vk c t\in \vk S_u}, \qquad u>0,$$ where $$\vk S_u:=\bigcup_{\substack{I\subset\{1,\ldots,d\}\\\abs{I}=k}}\vk S_{I,u},\qquad \vk S_{I,u}=\{\vk x\in\mathbb{R}^d:~\forall i\in I~x_i\geq a_iu\}.$$ Asymptotic approximations of such probabilities were already obtained in \cite{pandemic}. Here we derive a uniform non-asymptotic bound based on our previous findings. It is clear that all sets $\vk S_{I,u}$ satisfy the cone condition with respect to the process $\vk Z$. Thus, $\vk S_u$ also satisfies the cone condition with respect to the process $\vk Z$, hence we can use \netheo{ConeSet} and write for some positive constant $C$ $$ \pk{\vk Z(T)-\vk cT\in\vk S_u}\leq \pk{\exists_{t\in[0,T]}:\vk Z(t)-\vk c t\in\vk S_u}\leq C\pk{\vk Z(T)-\vk cT\in\vk S_u}.$$ \subsection{Fractional Brownian motion} Consider next the 1-dimensional risk model $$R(u,t)=u-B_H(t)+ct, \, t >0,$$ where $B_H(t)$ is a standard fractional Brownian motion with zero mean and variance function $\abs{t}^{2H}$, $H\in (0,1]$. We are interested in the calculation of the finite-time ruin probability for given $T>0$. The inequalities below have already been shown in \cite{ZKE}; we retrieve them using our findings. Namely, by the Slepian inequality, we can write for $H>\frac{1}{2}$ and $W$ a standard Brownian motion \bqny{ \pk{\exists_{t\in[0,T]}R(u,t)\leq 0}&\leq&\pk{\exists_{t\in[0,T]}W\left(t^{2H}\right)-ct\geq u}\\ &=&\pk{\exists_{t\in[0,T^{2H}]}W\left(t\right)-ct^{1/2H}\geq u}\\ &=&\pk{\exists_{t\in[0,1]}W\left(T^{2H}t\right)-cTt^{1/2H}\geq u}\\ &=&\pk{\exists_{t\in[0,1]}W\left(t\right)-cT^{1-H}t^{1/2H}\geq u/T^H}.
} Since $cT^{1-H}t^{1/2H}\in RV_{1}(1/2)$, using \netheo{ConeSet}, for some positive constant $C$ we can write \bqny{ \pk{\exists_{t\in[0,1]}W\left(t\right)-cT^{1-H}t^{1/2H}\geq u/T^H}&\leq&C\pk{W\left(1\right)-cT^{1-H}\geq u/T^H}\\ &=&C\pk{W\left(T^{2H}\right)-cT\geq u}\\ &=&C\pk{R(u,T)\leq 0}. } The above can be extended by considering the convolution of $n$ independent one-dimensional fractional Brownian motions $B_{H_i}(t), t>0, i\le n$. Let $H_i>1/2$ and define the risk processes $$R_i(u,t)=u/n-B_{H_i}(t)+c_it,\, i\le n. $$ Consider the convolution of the processes $R_i(u,t)$. Using the Slepian inequality, for all $H_i>\frac{1}{2}$ we can write \bqny{ \pk{\exists_{\vk t\in\prod\limits_{i=1}^{n}[0,T_i]}\sum_{i=1}^{n}R_i(u,t_i)\leq 0}&\leq&\pk{\exists_{\vk t\in\prod\limits_{i=1}^{n}[0,T_i]}\sum_{i=1}^{n}W_{i}\left(t_i^{2H_i}\right)-c_it_i\geq u}\\ &=&\pk{\exists_{\vk t\in\prod\limits_{i=1}^{n}[0,T_i^{2H_i}]}\sum_{i=1}^nW_{i}\left(t_i\right)-c_it_i^{1/2H_i}\geq u}. } Here the $W_i$'s stand for independent copies of a standard Brownian motion. As $c_it^{1/2H_i}\in RV_{T_i}(1/2)$, using \netheo{convshift}, for some positive constant $C$ we can write \bqny{ \pk{\exists_{\vk t\in\prod\limits_{i=1}^{n}[0,T_i^{2H_i}]}\sum_{i=1}^nW_{i}\left(t_i\right)-c_it_i^{1/2H_i}\geq u}&\leq&C\pk{\sum_{i=1}^{n}W_{i}\left(T_i^{2H_i}\right)-c_iT_i\geq u}\\ &=&C\pk{\sum_{i=1}^{n}B_{H_i}\left(T_i\right)-c_iT_i\geq u}\\ &=&C\pk{\sum_{i=1}^{n}R_i(u,T_i)\leq 0}. } \section{Vector-valued time-transform} Finally, we discuss some extensions of \eqref{Korsh1} under different time transformations. We use the notation from Section 2 and define the following time transform. Let $\vk f(t):[0,+\infty)\to\mathbb{R}^d$ be a growing vector-valued function and define \bqny{ \vk Z(\vk f(t))=(Z_1(f_1(t)),\ldots, Z_d(f_d(t)))^\top. } Hence $\vk f(t)$ can be considered as a generalised transformation of time. \begin{theo}\label{TimeTransform} Let $\vk c(t),\vk f(t):[0,T]\to\mathbb{R}^d$ be given.
Suppose that all the $f_i(t)$'s are continuous and strictly growing, and that for all $i\in\{1,\ldots, d\}$ we have $f_i(0)=0$ and the function $\delta_i(t)=\frac{f_i(T)-f_i(t)}{f_1(T)-f_1(t)}$ has a positive finite limit as $t\to T$. Suppose further that $\abs{c_i(T)-c_i(t)}< M\sqrt{f_1(T)-f_1(t)}$ for all $t\in[0,T]$, all $i\in\{1,\ldots, d\}$ and some $M>0$, and that $\vk S$ satisfies the cone condition with respect to the process $\vk Z$. If $\vk 0\not\in\vk S$, then for all constants $T>0, u>1$ the inequality \eqref{Korsh1} holds with $\vk X(t)= \vk Z(\vk f(t))$ and \bqny{ K^*(T)= \frac{(2f_1(T))^{d/2}}{\mathfrak{C}(T)\bar{\varepsilon}_{\vk S}}, \quad \mathfrak{C}(T)=\inf_{t\in [0,T)}e^{-\left(\frac{\vk c(T)-\vk c(t)}{\sqrt{f_1(T)-f_1(t)}}\right)^\top\Sigma^{-1}(\vk \delta(t))\left(\frac{\vk c(T)-\vk c(t)}{\sqrt{f_1(T)-f_1(t)}}\right)}>0, } where \bqny{ \bar\varepsilon_{\vk S}=\left(\frac{\inf\limits_{\substack{i\in\{1,\ldots, d\}\\t\in [0,T]}}\delta_i(t)}{\sup\limits_{\substack{i\in\{1,\ldots, d\}\\t\in [0,T]}}\delta_i(t)}\right)^{d/2}\varepsilon_{\vk S}\left(\inf\limits_{\substack{i\in\{1,\ldots, d\}\\t\in [0,T]}}\delta_i(t)\right)>0. } \end{theo} \begin{remark} The function $\vk f$ in \netheo{TimeTransform} may also be an almost surely growing stochastic process, independent of $\vk Z$, satisfying \bqny{ & &\max_{i\in\{1,\ldots,d\}}f_i(T)<F,\qquad \max_{i\in\{1,\ldots,d\}}\sup_{t\in [0,T)}\abs{\frac{c_i(T)-c_i(t)}{\sqrt{f_1(T)-f_1(t)}}}<M,\\ & &\delta<\inf\limits_{\substack{i\in\{1,\ldots, d\}\\t\in [0,T]}}\delta_i(t)\leq \sup\limits_{\substack{i\in\{1,\ldots, d\}\\t\in [0,T]}}\delta_i(t)<\Delta, } almost surely with some positive constants $F,M,\delta,\Delta$.
In this case the inequality \eqref{Korsh1} holds with \bqny{ K^*(T)= \frac{(2F)^{d/2}}{\mathfrak{C}(T)\bar{\varepsilon}_{\vk S}}, \quad \mathfrak{C}(T)=\min_{\substack{\vk x\in [-M,M]^d\\ \vk t\in[\delta,\Delta]^d}}e^{-\vk x^\top\Sigma^{-1}(\vk t)\vk x}>0, } and \bqny{ \bar\varepsilon_{\vk S}=\left(\frac{\delta}{\Delta}\right)^{d/2}\varepsilon_{\vk S}\left(\delta\right)>0. } \end{remark} We illustrate the above findings by considering again $d$ independent one-dimensional fractional Brownian motions $B_{H_i}(t), t>0$, with Hurst parameters $H_i>\frac{1}{2}, i \le d$. Define $d$ portfolios $$R_i(u,t)=u-B_{H_i}(t)+c_it,$$ where we are interested in the probability that all of them are simultaneously ruined in $[0,T]$.\\ Using the Gordon inequality (see\COM{e.g. \cite{MR800188} or} \cite[page~55]{MR1088478}), we obtain \bqny{ \pk{\exists_{t\in[0,T]}\forall_{i\in\{1 ,\ldots, d\}} R_i(u,t)<0}&\leq&\pk{\exists_{t\in[0,T]} \forall_{i\in\{1 ,\ldots, d\}}W_{i}\left(t^{2H_i}\right)-c_it>u}, } where the $W_i(t)$'s are independent standard Brownian motions. Since \bqny{ \lim_{t\to T}\frac{T^{2H_i}-t^{2H_i}}{T^{2H_1}-t^{2H_1}}=\frac{2H_i}{2H_1}\frac{T^{2H_i-1}}{T^{2H_1-1}}>0, } using \netheo{TimeTransform}, for some positive constant $C$ which does not depend on $u$ we can write \bqny{ \pk{\exists_{t\in[0,T]} \forall_{i\in\{1 ,\ldots, d\}}W_{i}\left(t^{2H_i}\right)-c_it>u}&\leq&C\pk{\forall_{i\in\{1 ,\ldots, d\}}W_{i}\left(T^{2H_i}\right)-c_iT>u}\\ &=&C\pk{\forall_{i\in\{1 ,\ldots, d\}}B_{H_i}\left(T\right)-c_iT>u}\\ &=&C\pk{\forall_{i\in\{1 ,\ldots, d\}}R_i(u,T)<0}. } \section{Proofs} Let us note the following property of the function $\varepsilon_{\vk S}(t)$.
\begin{lem}\label{1} If a set $\vk S$ satisfies the cone condition with respect to the process $\vk Z(t)$ with some function $\varepsilon_{\vk S}(t)$, then for any constant $u>1$ the set $u\vk S$ also satisfies the cone condition with respect to the process $\vk Z(t)$, and for any such function $\varepsilon_{\vk S}(t)$ there exists a function $\varepsilon_{u\vk S}(t)$ such that $$\varepsilon_{u\vk S}(t)\geq \varepsilon_{\vk S}(t).$$ \end{lem} \prooflem{1} Fix some $\vk x\in u\vk S$. Then $\vk y=\vk x/u\in\vk S$. As $\vk S$ satisfies the cone condition with respect to the process $\vk Z(t)$, there exists some cone $V_{\vk y}\subset \vk S$ with vertex $\vk y$ such that $\pk{\vk Z(t)\in V_{\vk y}-\vk y}\geq \varepsilon_{\vk S}(t)$. Hence, $uV_{\vk y}\subset u\vk S$ for all $u>1$. Note that using the properties of a cone \bqny{ uV_{\vk y}=u(\vk y+(V_{\vk y}-\vk y))=\vk x+u(V_{\vk y}-\vk y)\supset \vk x+(V_{\vk y}-\vk y). } Hence, $\vk x+(V_{\vk y}-\vk y)\subset u\vk S$ is a cone with vertex $\vk x$, and \bqny{ \pk{\vk Z(t)\in uV_{\vk y}-\vk x}\ge\pk{\vk Z(t)\in V_{\vk y}-\vk y}\ge\varepsilon_{\vk S}(t). } \qed \prooftheo{ConeSet} Consider the first inequality. Define the stopping time \bqny{ \tau=\inf\{t\in [0,T]:\vk Z(t)-\vk c(t)\in u\vk S \}. } According to the strong Markov property \bqny{ & &\pk{\vk Z(T) -\vk c (T)\in u\vk S}\\ & &\qquad\quad=\int_{0}^{T}\int_{u \partial \vk S}\pk{\vk Z(\tau)-\vk c (\tau)\in\text{\rm d}\vk x,\tau\in \text{\rm d} t}\pk{\vk Z(T)-\vk c(T)\in u\vk S|\vk Z(t)-\vk c(t)=\vk x}. } Using \nelem{1}, $u\vk S$ satisfies the cone condition with respect to the process $\vk Z(t)$.
Hence for all $\vk x\in u\vk S$, $t\in [0,T]$ \bqny{ \pk{\vk Z(T)-\vk c(T)\in u\vk S|\vk Z(t)-\vk c(t)=\vk x}&\geq&\pk{\vk Z(T)-\vk c(T)\in \vk V_{\vk x}|\vk Z(t)-\vk c(t)=\vk x}\\ &=&\pk{\vk Z(T-t)-(\vk c(T)-\vk c(t))\in \vk V_{\vk x}-\vk x}\\ &=&\pk{\vk Z(1)-(\vk c(T)-\vk c(t))/\sqrt{T-t}\in (\vk V_{\vk x}-\vk x)/\sqrt{T-t}}\\ &\geq&\pk{\sqrt{T}\vk Z(1)\in \vk V_{\vk x}-\vk x+\sqrt{T}(\vk c(T)-\vk c(t))/\sqrt{T-t}}\\ &=&\pk{\vk Z(T)\in \vk V_{\vk x}-\vk x+\sqrt{T}(\vk c(T)-\vk c(t))/\sqrt{T-t}}\\ &=&\int_{\vk V_{\vk x}-\vk x}\frac{1}{(2\pi)^\frac{d}{2}\sqrt{\abs{\Sigma}}}e^{-\frac{1}{2}(\tilde{\vk x}+\sqrt{T}\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})^\top\Sigma^{-1}(\tilde{\vk x}+\sqrt{T}\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})}\text{\rm d}\tilde{\vk x}\\ &\geq&\int_{\vk V_{\vk x}-\vk x}\frac{1}{(2\pi)^\frac{d}{2}\sqrt{\abs{\Sigma}}}e^{-T(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})^\top\Sigma^{-1}(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})}e^{-\frac{1}{2}(\sqrt{2}\tilde{\vk x})^\top\Sigma^{-1}(\sqrt{2}\tilde{\vk x})}\text{\rm d}\tilde{\vk x}\\ &=&e^{-T(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})^\top\Sigma^{-1}(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})}\frac{\pk{\vk Z(T)\in\sqrt{2}(\vk V_{\vk x}-\vk x)}}{2^{d/2}}\\ &\geq&\frac{1}{2^{d/2}}e^{-T(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})^\top\Sigma^{-1}(\frac{\vk c(T)-\vk c(t)}{\sqrt{T-t}})}\pk{\vk Z(T)\in\vk V_{\vk x}-\vk x}\\ &\geq&\frac{\mathfrak{C}(T)\varepsilon_{u \vk S}(T)}{2^{d/2}}\geq\frac{\mathfrak{C}(T)\varepsilon_{\vk S}(T)}{2^{d/2}}, } where $\vk V_{\vk x}$ is the cone from \nedef{cone_def}. As the right part does not depend on $\vk x$ and $t$, we can write \bqny{ \pk{\vk Z(T) -\vk c (T)\in u\vk S}&\geq&\frac{\mathfrak{C}(T)\varepsilon_{\vk S}(T)}{2^{d/2}}\int_{0}^{T}\int_{u\partial \vk S}\pk{\vk Z(\tau)-\vk c (\tau)\in\text{\rm d}\vk x,\tau\in \text{\rm d} t}\\ &=&\frac{\mathfrak{C}(T)\varepsilon_{\vk S}(T)}{2^{d/2}}\pk{\exists_{t\in [0,T]}\vk Z(t)-\vk c(t)\in u\vk S}. } Hence, the first inequality holds. Consider the second one. 
Define the set \bqny{ \vk a^+=\{\vk x\in\mathbb{R}^d:\vk x\geq \vk a\} } and \bqny{ \vk S_u=\frac{1}{u}\{\vk x\in\mathbb{R}^d: \vk F(\vk x)\in u\vk a^+\}. } The set $\vk S_u$ satisfies the cone condition with respect to the process $\vk Z(t)$ with $V_{\vk x}=\vk x^+$, since for any $\vk y\geq \vk x$ with $\vk x\in\vk S_u$ \bqny{ \vk F(u\vk y)\geq\vk F(u\vk x)\geq u\vk a. } Consequently, $\vk y\in\vk S_u$, and \bqny{ \varepsilon_{\vk S_u}(t)=\pk{\vk Z(t)\in\vk x^+-\vk x}=\pk{\vk Z(t)\in[0,+\infty)^d} } does not depend on $u$. Applying the result above for the set $\vk S_u$ and $\vk X(t)=\vk Z(t)-\vk c(t)$ we obtain \bqny{ \pk{\exists_{t\in [0,T]}:\vk X(t)\in u\vk S_u}\leq\frac{2^{d/2}\pk{\vk X(T)\in u\vk S_u}}{\mathfrak{C}(T)\varepsilon_{\vk S_u}(T)}=\frac{2^{d/2}\pk{\vk X(T)\in u\vk S_u}}{\mathfrak{C}(T)\pk{\vk Z(T)\in[0,+\infty)^d}}. } As the event $\{\vk X(t)\in u\vk S_u\}$ is equal to the event $\{\vk F(\vk Z(t)-\vk c(t))>u\vk a\}$, this completes the proof. $\Box$ \prooftheo{convshift} Define \bqny{ \psi_k(\vk S):=\pk{\exists_{\vk t\in\mathbb{T}_k}:\sum_{i=1}^{k}(\vk Z_i(t_i)-\vk c_i(t_i))+\sum_{i=k+1}^{n}(\vk Z_i(T_i)-\vk c_i(T_i))\in\vk S}, } where $\mathbb{T}_k=[0,T_1]\times\cdots\times[0,T_k]$. As in the previous proof, we show that the inequality \bqny{ \psi_{k}(u\vk S)\leq\frac{2^{d/2}\psi_{k-1}(u \vk S)}{\varepsilon_{\vk S}(T_k)\mathfrak{C}_k(T_k)} } holds for any $k\in\{1,\ldots, n\}$. We fix the trajectories of the processes $\vk Z_i(t)$, denoted by $\vk x_i(t)$, fix the values of the random vectors $\vk Z_i(T_i)$, denoted by $\vk x_i$, and define the process \bqny{ \vk Z^{*k}(t,\vk t^k)=\vk Z_k(t)-\vk c_k(t)+\sum_{i=1}^{k-1}(\vk x_i(t_i)-\vk c_i(t_i))+\sum_{i=k+1}^{n}(\vk x_i-\vk c_i(T_i)), } where $\vk t^k=(t_1,\ldots, t_{k-1})\in\mathbb{T}_{k-1}$.
\\ Since the $\vk Z_i$ are independent, it is enough to show that for every set of trajectories $\vk x_i(t)$ and points $\vk x_i$, the inequality \bqny{ \psi^*(u\vk S)\leq\frac{2^{d/2}\nu(u\vk S)}{\varepsilon_{\vk S}(T_k)\mathfrak{C}_k(T_k)} } holds, where \bqny{ \psi^*(\vk S)&=&\pk{\exists_{t\in[0,T_k]}:\vk Z^{*k}(t,\vk t^k)\in\vk S~for~some~\vk t^k\in\mathbb{T}_{k-1}},\\ \nu(\vk S)&=&\pk{\vk Z^{*k}(T_k,\vk t^k)\in\vk S~for~some~\vk t^k\in\mathbb{T}_{k-1}}. } Define the following stopping time: \bqny{ \tau_k=\inf\left\{t: \vk Z^{*k}(t,\vk t^k)\in u\vk S~for~some~\vk t^k\in\mathbb{T}_{k-1}\right\}, } and the random vector \bqny{ \tilde{\vk x}_k=\begin{cases} \vk x^*, \qquad &\tau_k\leq T_k,\\ \vk 0,\qquad &otherwise, \end{cases} } where $\vk x^*$ is any point from the set $$ \bigcup\limits_{\vk t^k\in\mathbb{T}_{k-1}}\left\{\vk Z^{*k}(\tau_k,\vk t^k)\right\}\bigcap u\vk S. $$ Using the total probability formula we obtain \bqny{ \nu(u\vk S)=\int_0^{T_k}\int_{u\partial \vk S}\pk{\tilde{\vk x}_k\in\text{\rm d}\vk x_0,\tau_k\in\text{\rm d} t}\pk{\vk Z^{*k}(T_k,\vk t^k)\in u\vk S~for~some~\vk t^k\in\mathbb{T}_{k-1}\left|\tau_{k}=t,\tilde{\vk x}_k=\vk x_0\right.}. } For any $\vk t^k\in\mathbb{T}_{k-1}$ we have \bqny{ \vk Z^{*k}(T_k,\vk t^k)-\vk Z^{*k}(t,\vk t^k)=\vk Z_k(T_k)-\vk Z_k(t)-(\vk c_k(T_k)-\vk c_k(t)). } Thus, using the same chain of inequalities as in \netheo{ConeSet} we obtain \bqny{ & &\pk{\vk Z^{*k}(T_k,\vk t^k)\in u\vk S~for~some~\vk t^k\in\mathbb{T}_{k-1}\left|\tau_{k}=t,\tilde{\vk x}_k=\vk x_0\right.}\\ & &\qquad\geq\pk{\vk Z_k(T_k)-\vk Z_k(t)-(\vk c_k(T_k)-\vk c_k(t))\in u\vk S-\vk x_0}\\ & &\qquad \geq \frac{\mathfrak{C}_k(T_k)\varepsilon_{\vk S}(T_k)}{2^{d/2}}, } which completes the proof.
$\Box$ \begin{remark} The random variable $\tau_k$ is measurable, because it can be represented as \bqny{ \tau_{k}=\inf\left\{t:~\vk Z_k(t)-\vk c_k(t)\in \vk S^*_k \right\}, } where \bqny{ \vk S^*_k=\bigcup\limits_{\vk t^k\in \mathbb{T}_{k-1}} \left(u\vk S-\sum_{i=1}^{k-1}\left(\vk x_i(t_i)-\vk c_i(t_i)\right)-\sum_{i=k+1}^{n}\left(\vk x_i-\vk c_i(T_i)\right)\right). } As all the functions $\vk x_i(t)$ and $\vk c_i(t)$ are continuous, the set $\vk S_k^*$ is closed, hence $\tau_k$ is measurable. \end{remark} \prooftheo{TimeTransform} Define the stopping time \bqny{ \tau=\inf\{t\in [0,T]:\vk Z(\vk f(t))-\vk c(t) \in u\vk S \}. } According to the strong Markov property \bqny{ \pk{\vk Z(\vk f(T)) -\vk c (T)\in u\vk S}&=&\int_{0}^{T}\int_{u\partial \vk S}\pk{\vk Z(\vk f(\tau))-\vk c(\tau)\in\text{\rm d} \vk x, \tau\in \text{\rm d} t}\\ & &\qquad\qquad\times\pk{\vk Z(\vk f(T))-\vk c(T)\in u\vk S|\vk Z(\vk f(t))-\vk c(t)=\vk x,\tau=t}. } In view of \nelem{1}, $u\vk S$ satisfies the cone condition with respect to the process $\vk Z(t)$.
Consequently, we have \bqny{ \lefteqn{\pk{\vk Z(\vk f(T))-\vk c(T)\in u\vk S|\vk Z(\vk f(t))-\vk c(t)=\vk x,\tau=t}}\\ & &=\pk{\vk Z(\vk f(T))-\vk c(T)\in u\vk S|\vk Z(\vk f(t))-\vk c(t)=\vk x}\\ & &\geq\pk{\vk Z(\vk f(T))-\vk c(T)\in V_{\vk x}|\vk Z(\vk f(t))-\vk c(t)=\vk x}\\ & &=\pk{\vk Z(\vk f(T))-\vk Z(\vk f(t))-\vk c(T)+\vk c(t)\in V_{\vk x}-\vk x}\\ & &=\pk{\vk Z(\vk f(T)-\vk f(t))-(\vk c(T)-\vk c(t))\in V_{\vk x}-\vk x}\\ & &=\pk{\vk Z(\vk\delta(t))-\frac{\vk c(T)-\vk c(t)}{\sqrt{f_1(T)-f_1(t)}}\in \frac{V_{\vk x}-\vk x}{\sqrt{f_1(T)-f_1(t)}}}\\ & &\geq\pk{\vk Z(\vk\delta(t))-\frac{\vk c(T)-\vk c(t)}{\sqrt{f_1(T)-f_1(t)}}\in \frac{V_{\vk x}-\vk x}{\sqrt{f_1(T)}}}\\ & &=\int_{\vk y\in\frac{V_{\vk x}-\vk x}{\sqrt{f_1(T)}}}\varphi_{\vk\delta(t)}\left(\vk y+\frac{\vk c(T)-\vk c(t)}{\sqrt{f_1(T)-f_1(t)}}\right)\text{\rm d} \vk y\\ & &\geq\int_{\vk y\in\frac{V_{\vk x}-\vk x}{\sqrt{f_1(T)}}}\mathfrak{C}(T)\varphi_{\vk\delta(t)}(\sqrt{2}\vk y)\text{\rm d} \vk y\\ & &\geq\frac{\mathfrak{C}(T)}{2^{d/2}}\pk{\vk Z(\vk \delta(t))\in\frac{V_{\vk x}-\vk x}{\sqrt{f_1(T)}}}\\ & &\geq\frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\pk{\vk Z(\vk \delta(t))\in V_{\vk x}-\vk x}\\ & &=\frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\pk{\vk B(\vk \delta(t))\in A^{-1}(V_{\vk x}-\vk x)}\\ & &=\frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\frac{1}{(2\pi)^{d/2}\sqrt{\prod_{i=1}^d\delta_i(t)}}\int_{\vk y\in A^{-1}(V_{\vk x}-\vk x)} e^{-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{y_i^2}{\delta_i(t)}}\text{\rm d}\vk y, } where $\varphi_{\vk\delta(t)}$ is the pdf of $\vk Z(\vk \delta(t))$. Since all the functions $\delta_i(t)$ are bounded and separated from zero for $t\in [0,T]$, there exist constants $\delta,\Delta>0$ such that for all $i\in\{1,\ldots, d\}$ and all $t\in [0,T]$ $$ \delta\leq\delta_i(t)\leq \Delta.
$$ Hence we obtain \bqny{ \frac{1}{(2\pi)^{d/2}\sqrt{\prod_{i=1}^d\delta_i(t)}}\geq \frac{1}{(2\pi)^{d/2}\sqrt{\prod_{i=1}^d\Delta}},\qquad e^{-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{x_i^2}{\delta_i(t)}}\geq e^{-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{x_i^2}{\delta}}, } and finally \bqny{ \lefteqn{\pk{\vk Z(\vk f(T))-\vk c(T)\in u\vk S|\vk Z(\vk f(t))-\vk c(t)=\vk x,\tau=t}}\\ & &\geq \frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\frac{1}{(2\pi)^{d/2}\sqrt{\prod_{i=1}^d\Delta}}\int_{\vk y\in A^{-1}(V_{\vk x}-\vk x)} e^{-\frac{1}{2}\sum\limits_{i=1}^{d}\frac{y_i^2}{\delta}}\text{\rm d}\vk y\\ & &= \frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\frac{\sqrt{\prod_{i=1}^d\delta}}{\sqrt{\prod_{i=1}^d\Delta}}\pk{\vk B(\delta)\in A^{-1}(V_{\vk x}-\vk x)}\\ & &\geq \frac{\mathfrak{C}(T)}{(2f_1(T))^{d/2}}\frac{\sqrt{\prod_{i=1}^d\delta}}{\sqrt{\prod_{i=1}^d\Delta}}\varepsilon_{\vk S}(\delta). } Hence the claim follows. $\Box$ \begin{center} {\bf Acknowledgements} \end{center} I am grateful to four reviewers for numerous comments and suggestions that led to a significant improvement of the manuscript. Partial financial support from the SNSF Grant 200021-191984 is kindly acknowledged. \end{document}
\begin{document} \newcommand{\ignore}[1]{} \sloppy \date{} \title{Distributed Algorithms for Matching in Hypergraphs} \thispagestyle{empty} \begin{abstract} We study the $d$-Uniform Hypergraph Matching ($d$-UHM) problem: given an $n$-vertex hypergraph $G$ where every hyperedge is of size $d$, find a maximum cardinality set of disjoint hyperedges. For $d\geq3$, the problem of finding a maximum matching is $\mathcal{NP}$-complete, and was one of Karp's 21 $\mathcal{NP}$-complete problems. In this paper we are interested in the problem of finding matchings in hypergraphs in the massively parallel computation (MPC) model, a common abstraction of MapReduce-style computation. In this model, we present the first three parallel algorithms for $d$-Uniform Hypergraph Matching, and we analyse them in terms of resources such as memory usage, rounds of communication needed, and approximation ratio. The highlights include:\begin{itemize} \item An $O(\log n)$-round $d$-approximation algorithm that uses $O(nd)$ space per machine. \item A $3$-round, $O(d^2)$-approximation algorithm that uses $\tilde{O}(\sqrt{nm})$ space per machine. \item A $3$-round algorithm that computes a subgraph containing a $(d-1+\frac{1}{d})^2$-approximation, using $\tilde{O}(\sqrt{nm})$ space per machine for linear hypergraphs, and $\tilde{O}(n\sqrt{nm})$ in general. \end{itemize} For the third algorithm, we introduce the concept of a HyperEdge Degree Constrained Subgraph (HEDCS), which may be of independent interest. We show that an HEDCS contains a fractional matching with total value at least $|M^*|/(d-1+\frac{1}{d})$, where $|M^*|$ is the size of the maximum matching in the hypergraph. Moreover, we investigate the experimental performance of these algorithms both on random input and real instances. Our results support the theoretical bounds and confirm the trade-offs between the quality of approximation and the speed of the algorithms.
\end{abstract} \section{Introduction}\label{introduction} As massive graphs become more ubiquitous, the need for scalable parallel and distributed algorithms that solve graph problems grows as well. In recent years, we have seen progress in many graph problems (e.g. spanning trees, connectivity, shortest paths \cite{andoni2014parallel,andoni2018parallel}) and, most relevant to this work, matchings \cite{ghaffari2018improved, czumaj2019round}. A natural generalization of matchings in graphs is to matchings in {\em hypergraphs}. Hypergraph Matching is an important problem with many applications such as capital budgeting, crew scheduling, facility location, scheduling airline flights \cite{skiena1998algorithm}, forming a coalition structure in multi-agent systems \cite{sandholm1999coalition} and determining the winners in combinatorial auctions \cite{sandholm2002algorithm} (see \cite{vemuganti1998applications} for a partial survey). Although matching problems in graphs are one of the most well-studied problems in algorithms and optimization, the NP-hard problem of finding a maximum matching in a hypergraph is not as well understood. In this work, we are interested in the problem of finding matchings in very large hypergraphs, large enough that we cannot solve the problem on one computer. We develop, analyze and experimentally evaluate three parallel algorithms for hypergraph matchings in the MPC model. Two of the algorithms are generalizations of parallel algorithms for matchings in graphs. The third algorithm develops new machinery which we call a hyper-edge degree constrained subgraph (HEDCS), generalizing the notion of an edge-degree constrained subgraph (EDCS). The EDCS has been recently used in parallel and dynamic algorithms for graph matching problems \cite{bernstein2015fully, bernstein2016faster, assadi2019coresets}. 
We will show a range of algorithm tradeoffs between approximation ratio, rounds, memory and computation, evaluated both as worst case bounds, and via computational experiments. More formally, a {\em hypergraph} $G$ is a pair $G = (V,E)$ where $V$ is the set of vertices and $E$ is the set of hyperedges. A {\em hyperedge} $e \in E$ is a nonempty subset of the vertices. The cardinality of a hyperedge is the number of vertices it contains. When every hyperedge has the same cardinality $d$, the hypergraph is said to be $d$-\textit{uniform}. A hypergraph is \textit{linear} if the intersection of any two hyperedges has at most one vertex. A {\em hypergraph matching} is a subset of the hyperedges $M \subseteq E$ such that every vertex is covered at most once, i.e. the hyperedges are mutually disjoint. This notion generalizes matchings in graphs. The cardinality of a matching is the number of hyperedges it contains. A matching is called maximum if it has the largest cardinality of all possible matchings, and maximal if it is not contained in any other matching. In the $d$-Uniform Hypergraph Matching Problem (also referred to as Set Packing or $d$-Set Packing), a $d$-uniform hypergraph is given and one needs to find the maximum cardinality matching. We adopt the most restrictive MapReduce-like model of modern parallel computation among \cite{karloff2010model, goodrich2011sorting, beame2013communication, andoni2014parallel}, the Massively Parallel Computation (MPC) model of \cite{beame2013communication}. This model is widely used to solve different graph problems such as matching, vertex cover \cite{lattanzi2011filtering, ahn2018access, assadi2017randomized, assadi2019coresets, ghaffari2018improved}, independent set \cite{ghaffari2018improved, harvey2018greedy}, as well as many other algorithmic problems. In this model, we have $k$ machines (processors) each with space $s$. 
$N$ is the size of the input and our algorithms will satisfy $k\cdot s = \tilde{O}(N)$, which means that the total space in the system is only a polylogarithmic factor more than the input size. The computation proceeds in rounds. At the beginning of each round, the data (e.g. vertices and edges) is distributed across the machines. In each round, a machine performs local computation on its data (of size $s$), and then sends messages to other machines for the next round. Crucially, the total amount of communication sent or received by a machine is bounded by $s$, its space. For example, a machine can send one message of size $s$, or $s$ messages of size 1. It cannot, however, broadcast a size $s$ message to every machine. Each machine treats the received messages as the input for the next round. Our model limits the number of machines and the memory per machine to be substantially sublinear in the size of the input. On the other hand, no restrictions are placed on the computational power of any individual machine. The main complexity measure is therefore the memory per machine and the number of rounds $R$ required to solve a problem, which we consider to be the ``parallel time'' of the algorithm. For the rest of the paper, $G(V, E)$ is a $d$-uniform hypergraph with $n$ vertices and $m$ hyperedges, and when the context is clear, we will simply refer to $G$ as a graph and to its hyperedges as edges. $MM(G)$ denotes the maximum matching in $G$, and $\mu(G) = |MM(G)|$ is the size of that matching. We define $d_G(v)$ to be the degree of a vertex $v$ (the number of hyperedges that $v$ belongs to) in $G$. When the context is clear, we will omit indexing by $G$ and simply denote it $d(v)$. \section{Our contribution and results.} We design and implement algorithms for the $d$-UHM in the MPC model. We will give three different algorithms, demonstrating different trade-offs between the model's parameters. 
Our algorithms are inspired by methods to find maximum matchings in graphs, but require developing significant new tools to address hypergraphs. We are not aware of previous algorithms for hypergraph matching in the MPC model. First, we generalize the randomized coreset algorithm of \cite{assadi2017randomized}, which finds a 3-round $O(1)$-approximation for matching in graphs. Our algorithm partitions the graph into random pieces across the machines, and then simply picks a maximum matching of each machine's subgraph. We show that this natural approach results in a $O(d^2)$-approximation. While the algorithmic generalization is straightforward, the analysis requires several new ideas. \begin{namedtheorem}[Theorem \ref{result1} (restated)] There exists an $MPC$ algorithm that with high probability computes a $(3d(d-1)+3+\epsilon)$-approximation for the $d$-UHM problem in 3 $MPC$ rounds on machines of memory $s = \Tilde{O}(\sqrt{nm})$. \end{namedtheorem} Our second result concerns the MPC model with per-machine memory $O(d\cdot n)$. We adapt the sampling technique and post-processing strategy of \cite{lattanzi2011filtering} to construct maximal matchings in hypergraphs, and are able to show that in $d$-uniform hypergraphs, this technique yields a maximal matching, and thus a $d$-approximation to the $d$-UHM problem, in $O(\log{n})$ rounds. \begin{namedtheorem}[Theorem \ref{mpcmax} (restated)] There exists an $MPC$ algorithm that given a $d$-uniform hypergraph $G(V, E)$ with high probability computes a maximal matching in $G$ in $O(\log{n})$ $MPC$ rounds on machines of memory $s =\Theta(d\cdot n)$. \end{namedtheorem} Our third result generalizes the edge degree constrained subgraphs (EDCS), originally introduced by Bernstein and Stein \cite{bernstein2015fully} for maintaining large matchings in dynamic graphs, and later used for several other problems including matching and vertex cover in the streaming and MPC model \cite{assadi2019coresets, bernstein2016faster}.
We call these generalized subgraphs hyper-edge degree constrained subgraphs (HEDCS). We show that they exist for specific parameters, and that they contain a good approximation for the $d$-UHM problem. We prove that an HEDCS of a hypergraph $G$, with well chosen parameters, contains a fractional matching with value at least $\frac{d}{d^2-d+1}\mu(G)$, and that the underlying fractional matching is special in the sense that for each hyperedge, it either assigns a value of $1$ or a value less than some chosen $\epsilon$. We call such a fractional matching an $\epsilon$-restricted fractional matching. For Theorem \ref{result4}, we compute an HEDCS of the hypergraph in a distributed fashion. This procedure relies on the robustness properties that we prove for HEDCSs under sampling. \begin{namedtheorem}[Theorem \ref{mainth} (restated informally)] Let $G$ be a $d$-uniform hypergraph and $0<\epsilon <1$. There exists an HEDCS that contains an $\epsilon$-restricted fractional matching $M^{H}_f$ with total value at least $\mu(G)\left(\frac{d}{d^2-d+1} - \epsilon\right)$. \end{namedtheorem} \begin{namedtheorem}[Theorem \ref{result4} (restated)] There exists an $MPC$ algorithm that given a $d$-uniform hypergraph $G(V, E)$, where $|V|= n$ and $|E| = m$, can construct an HEDCS of $G$ in 2 $MPC$ rounds on machines of memory $s = \tilde{O}(n\sqrt{nm})$ in general and $s = \tilde{O}(\sqrt{nm})$ for linear hypergraphs. \end{namedtheorem} \begin{namedtheorem}[Corollary \ref{result5} (restated)] There exists an $MPC$ algorithm that with high probability achieves a $d(d-1+1/d)^2$-approximation to the $d$-Uniform Hypergraph Matching in 3 rounds. \end{namedtheorem} Table \ref{table:results} summarizes our results.
{\small \begin{center} \begin{tabular}{|c|c|c|c|} \hline Approximation ratio & Rounds & Memory per machine & Computation per round \\ \hline $3d(d-1) + 3$ & 3 & $\Tilde{O}(\sqrt{nm})$ & Exponential \\ \hline $d$ & $O(\log{n})$ & $O(dn)$ & Polynomial \\ \hline $d(d-1+1/d)^2$ & 3 & $\Tilde{O}(n\sqrt{nm})$ in general & Polynomial \\ & & $\Tilde{O}(\sqrt{nm})$ for linear hypergraphs & \\ \hline \end{tabular} $ $ \captionof{table}{Our parallel algorithms for the $d$-uniform hypergraph matching problem.} \label{table:results} \end{center} } \noindent\textbf{Experimental results.} We implement our algorithms in a simulated MPC model environment and test them both on random and real-world instances. Our experimental results are consistent with the theoretical bounds on most instances, and show that there is a trade-off between the extent to which the algorithms use the power of parallelism and the quality of the approximations. This trade-off is illustrated by comparing the number of rounds and the performance of the algorithms on machines with the same memory size. See Section~\ref{sec:experiments} for more details.\\ \noindent\textbf{Our techniques.} For Theorem \ref{result1} and Theorem \ref{result4}, we use the concept of composable coresets, which has been employed in several distributed optimization models such as the streaming and MapReduce models \cite{abbar2013diverse,mirzasoleiman2013distributed,badanidiyuru2014streaming,balcan2013distributed,bateni2014mapping,indyk2014composable}. Roughly speaking, the main idea behind this technique is as follows: first partition the data into smaller parts. Then compute a representative solution, referred to as a coreset, from each part. Finally, obtain a solution by solving the optimization problem over the union of coresets for all parts. We use a randomized variant of composable coresets, first introduced in \cite{mirrokni2015randomized}, where the above idea is applied on a random clustering of the data. 
This randomized variant has since been used for Graph Matching and Vertex Cover \cite{assadi2017randomized, assadi2019coresets}, as well as Column Subset Selection \cite{bhaskara2016greedy}. Our algorithm for Theorem \ref{result1} is similar to previous works, but the analysis requires new techniques for handling hypergraphs. Theorem \ref{mpcmax} is a relatively straightforward generalization of the corresponding result for matching in graphs. The majority of our technical innovation is contained in Theorems \ref{mainth} and \ref{result4}. Our general approach is to construct, in parallel, an HEDCS that will contain a good approximation of the maximum matching in the original graph and that will fit on one machine. Then we can run an approximation algorithm on the resulting HEDCS to come up with a good approximation to the maximum matching in this HEDCS, and hence in the original graph. In order to make this approach work, we need to generalize much of the known EDCS machinery \cite{bernstein2015fully,bernstein2016faster} to hypergraphs. This endeavor is quite involved, as almost all the proofs do not generalize easily and, as the results show, the resulting bounds are weaker than those for graphs. We first show that HEDCSs exist, and that they contain large fractional matchings. We then show that they are useful as a coreset, which amounts to showing that even though there can be many different HEDCSs of some fixed hypergraph $G(V, E)$, the degree distributions of every HEDCS (for the same parameters) are almost identical. In other words, the degree of any vertex $v$ is almost the same in every HEDCS of $G$. We also show that HEDCSs are robust under edge sampling, in the sense that sampling the edges of an HEDCS yields another HEDCS. These properties allow us to use HEDCSs in our coresets and parallel algorithms in the rest of the paper.
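As a concrete illustration of the degree conditions discussed above, the following small checker is a sketch only: it assumes the natural hypergraph generalization of the Bernstein-Stein EDCS conditions (every hyperedge of $H$ has vertex-degree sum in $H$ at most $\beta$, and every hyperedge of $G\setminus H$ has that sum at least $\beta^-$), and the parameter names `beta`, `beta_minus` are illustrative rather than taken from the formal definition:

```python
from collections import Counter

# Sketch of a checker for the (assumed) HEDCS degree conditions on a subgraph
# H of a hypergraph G, both given as sets of frozensets of vertices.
def is_hedcs(G_edges, H_edges, beta, beta_minus):
    deg_H = Counter(v for e in H_edges for v in e)   # vertex degrees in H
    for e in G_edges:
        s = sum(deg_H[v] for v in e)                 # degree sum over e's vertices
        if e in H_edges:
            if s > beta:                             # H-edges must have small sum
                return False
        elif s < beta_minus:                         # omitted edges must have large sum
            return False
    return True

G = {frozenset(e) for e in ({1, 2, 3}, {3, 4, 5}, {1, 4, 6})}
H = {frozenset({1, 2, 3}), frozenset({3, 4, 5})}
# Degrees in H: vertices 1,2,4,5 have degree 1, vertex 3 has degree 2, vertex 6
# has degree 0; both H-edges have degree sum 4, the omitted edge {1,4,6} has sum 2.
assert is_hedcs(G, H, beta=4, beta_minus=2)
assert not is_hedcs(G, H, beta=3, beta_minus=2)
```

Such a predicate is what the robustness statements above quantify over: sampling the edges of a subgraph satisfying these conditions should, with the right parameter slack, again satisfy them.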
\section{Related Work} \textbf{Hypergraph Matching.} The problem of finding a maximum matching in $d$-uniform hypergraphs is NP-hard for any $d \geq 3$ \cite{karp1972reducibility, papadimitriou2003computational}, and APX-hard for $d=3$ \cite{kann1991maximum}. The most natural approach to approximating the hypergraph matching problem is the greedy algorithm: repeatedly add an edge that does not intersect any edges already in the matching. This solution is clearly within a factor of $d$ from optimal: among the edges removed in each iteration, the optimal solution can contain at most $d$ edges (at most one for each vertex of the chosen edge). It is also easy to construct examples showing that this analysis is tight for the greedy approach. All the best known approximation algorithms for the hypergraph matching problem in $d$-uniform hypergraphs are based on local search methods \cite{berman2000d, berman2003optimizing, chandra2001greedy, halldorsson1995approximating, hurkens1989size}. The first such result, by Hurkens and Schrijver \cite{hurkens1989size}, gave a $(\frac{d}{2}+\epsilon)$-approximation algorithm. Halldorsson \cite{halldorsson1995approximating} presented a quasi-polynomial $(\frac{d+2}{3})$-approximation algorithm for the unweighted $d$-UHM. Sviridenko and Ward \cite{sviridenko2013large} established a polynomial time $(\frac{d+2}{3})$-approximation algorithm that is the first polynomial time improvement over the $(\frac{d}{2}+\epsilon)$ result from \cite{hurkens1989size}. Cygan \cite{cygan2013improved} and Furer and Yu \cite{furer2013approximate} both provide a polynomial time $(\frac{d+1}{3} + \epsilon)$-approximation algorithm, which is the best approximation guarantee known so far.
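To make the greedy $d$-approximation and its tightness concrete, consider the following small Python sketch (the instance is our own toy example, not drawn from the cited works): for $d = 3$, a single ``hub'' edge considered first blocks three pairwise disjoint edges, one per vertex of the hub, so greedy returns one edge while the optimum contains three.

```python
def greedy_matching(edges):
    """Repeatedly add an edge that does not intersect any edge already chosen."""
    matched, matching = set(), []
    for e in edges:
        if matched.isdisjoint(e):
            matching.append(e)
            matched.update(e)
    return matching

# Tight instance for d = 3: the hub edge meets each of the three disjoint
# edges in exactly one vertex, so greedy finds 1 edge while the optimum
# has 3, matching the factor-d analysis.
hub = ("a", "b", "c")
disjoint = [("a", 1, 2), ("b", 3, 4), ("c", 5, 6)]
```

Running greedy with the hub edge first yields a matching of size $1$, whereas the three disjoint edges alone form a matching of size $3 = d$.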
On the other hand, Hazan, Safra and Schwartz \cite{hazan2006complexity} proved that it is hard to approximate the $d$-UHM problem within a factor of $\Omega(d/\log{d})$.\\ \noindent\textbf{Matching in parallel computation models.} The study of the graph maximum matching problem in parallel computation models can be traced back to PRAM algorithms of the 1980s \cite{israeli1986fast,alon1986fast, luby1986simple}. Since then it has been studied in the LOCAL and MPC models, where a $(1 + \epsilon)$-approximation can be achieved in $O(\log{\log{n}})$ rounds using $O(n)$ memory per machine \cite{assadi2019coresets, czumaj2019round, ghaffari2018improved}. The question of finding a maximal matching in a small number of rounds has also been considered in \cite{ghaffari2019sparsifying,lattanzi2011filtering}. Recently, Behnezhad \textit{et al}. \cite{behnezhad2019exponentially} presented a $O(\log{\log{\Delta}})$-round algorithm for maximal matching with $O(n)$ memory per machine. While we are not aware of previous work on hypergraph matching in the MPC model, finding a maximal hypergraph matching has been considered in the LOCAL model. Both Fischer \textit{et al}. \cite{fischer2017deterministic} and Harris \cite{harris2018distributed} provide a deterministic distributed algorithm that computes an $O(d)$-approximation to the $d$-UHM.\\ \noindent\textbf{Maximum Independent Set.} The $d$-UHM is strongly related to different variants of the Independent Set problem as well as other combinatorial problems. The maximum independent set (MIS) problem on degree-bounded graphs can be mapped to $d$-UHM when the degree bound is $d$ \cite{halldorsson2004approximations, berman1994approximating,halldorsson1997greed, trevisan2001non}. The $d$-UHM problem can also be studied under the more general problem of maximum independent set on $(d + 1)$-claw-free graphs \cite{berman2000d, chandra2001greedy}. (See \cite{chan2012linear} for connections between $d$-UHM and other combinatorial optimization problems.)
\section{A 3-round $O(d^2)$-approximation} In this section, we generalize the randomized composable coreset algorithm of Assadi and Khanna \cite{assadi2017randomized}. They used a maximum matching as a coreset and obtained a $O(1)$-approximation; we use a hypergraph maximum matching and obtain a $O(d^2)$-approximation. We first define a random $k$-partitioning and then present our greedy approach. \begin{definition}[Random $k$-partitioning] Let $E$ be the edge-set of a hypergraph $G(V, E)$. We say that a collection of edges $E^{(1)}, \ldots , E^{(k)}$ is a random $k$-partition of $E$ if the sets are constructed by assigning each edge $e \in E$ to some $E^{(i)}$ chosen uniformly at random. A random $k$-partition of $E$ naturally results in partitioning the graph $G$ into $k$ subgraphs $G^{(1)}, \ldots , G^{(k)}$ where $G^{(i)} := G(V, E^{(i)})$ for all $i \in [k]$. \end{definition} Let $G(V, E)$ be any $d$-uniform hypergraph and $G^{(1)}, \ldots , G^{(k)}$ be a random $k$-partitioning of $G$. We describe a simple greedy process for combining the maximum matchings of the $G^{(i)}$, and prove that this process results in a $O(d^2)$-approximation of the maximum matching of $G$. \begin{algorithm} \SetAlgoLined \DontPrintSemicolon Construct a random $k$-partitioning of $G$ across the $k$ machines. Let $M^{(0)}:=\emptyset.$\; \For{$i = 1$ to $k$}{ Set $MM(G^{(i)})$ to be an arbitrary hypergraph maximum matching of $G^{(i)}$.\; Let $M^{(i)}$ be a maximal matching obtained by adding to $M^{(i-1)}$ the edges of $MM(G^{(i)})$ that do not violate the matching property.\; } \Return $M:=M^{(k)}$.\; \caption{\textsf{Greedy}} \end{algorithm} \begin{restatable}{theorem}{result1} \label{result1} \textsf{Greedy} computes, with high probability, a $\big(3d(d-1)+3+o(1)\big)$-approximation for the $d$-UHM problem in 3 $MPC$ rounds on machines of memory $s = \Tilde{O}(\sqrt{nm})$. \end{restatable} In the rest of the section, we present the proof of Theorem \ref{result1}.
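Before the analysis, a sequential Python simulation of \textsf{Greedy} may help fix ideas (names are ours, purely illustrative). A brute-force routine stands in for the per-machine maximum matching $MM(G^{(i)})$, which is only viable on the tiny parts of this sketch; the fold over parts mirrors the loop in the algorithm above.

```python
import random
from itertools import combinations

def max_matching(edges):
    """Brute-force maximum matching; only viable for tiny edge sets."""
    for r in range(len(edges), 0, -1):
        for cand in combinations(edges, r):
            flat = [v for e in cand for v in e]
            if len(flat) == len(set(flat)):        # edges pairwise disjoint
                return list(cand)
    return []

def greedy_combine(edges, k, seed=0):
    """Sequential simulation of Greedy: random k-partition, then fold each
    machine's maximum matching into the growing matching M."""
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:
        parts[rng.randrange(k)].append(e)          # random k-partitioning
    matched, M = set(), []
    for part in parts:                             # machine i contributes MM(G^(i))
        for e in max_matching(part):
            if matched.isdisjoint(e):              # keep edges respecting M^(i-1)
                M.append(e)
                matched.update(e)
    return M
```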
Let $c = \frac{1}{3d(d-1)+3}$; we show that $|M^{(k)}| \geq c \cdot \mu(G)$ w.h.p., where $M^{(k)}$ is the output of \textsf{Greedy}. The randomness stems from the fact that the matchings $M^{(i)}$ (for $i \in \{1,\ldots,k\}$) constructed by \textsf{Greedy} are random variables depending on the random $k$-partitioning. We adapt the general approach in \cite{assadi2017randomized} to $d$-uniform hypergraphs, with $d \geq 3$. Suppose at the beginning of the $i$-th step of \textsf{Greedy}, the matching $M^{(i-1)}$ is of size $o(\mu(G)/d^2)$. One can see that in this case, there is a matching of size $\Omega(\mu(G))$ in $G$ that is entirely incident on vertices of $G$ that are not matched by $M^{(i-1)}$. We can show that in fact $\Omega(\mu(G)/(d^2 k))$ edges of this matching appear in $G^{(i)}$, even when we condition on the assignment of the edges in the first $(i-1)$ graphs. Next we argue that the existence of these edges forces any maximum matching of $G^{(i)}$ to match $\Omega(\mu(G)/(d^2 k))$ edges in $G^{(i)}$ between the vertices that are not matched by $M^{(i-1)}$. These edges can always be added to the matching $M^{(i-1)}$ to form $M^{(i)}$. Therefore, while the maximal matching in \textsf{Greedy} is of size $o(\mu(G))$, we can increase its size by $\Omega(\mu(G)/(d^2 k))$ edges in each of the first $k/3$ steps, hence obtaining a matching of size $\Omega(\mu(G)/d^2)$ at the end. The following Lemma \ref{lone} formalizes this argument. \begin{restatable}{lemma}{lone} \label{lone} For any $i \in [k/3]$, if $|M^{(i-1)}| \leq c \cdot \mu(G)$, then, w.p. $1 - O(1/n)$, $$ |M^{(i)}| \geq |M^{(i-1)}|+ \frac{ 1-3d(d-1)c-o(1)}{k} \cdot \mu(G) \ .$$ \end{restatable} Before proving the lemma, we define some notation. Let $M^*$ be an arbitrary maximum matching of $G$. For any $i \in [k]$, we define $M^{*<i}$ as the part of $M^*$ assigned to the first $i - 1$ graphs in the random $k$-partitioning, i.e., the graphs $G^{(1)}, \ldots, G^{(i-1)}$.
We have the following concentration result: \begin{claim}\label{c1} W.p. $1 - O(1/n)$, for any $i \in [k]$: $$ |M^{*<i}| \leq \Big( \frac{i-1+o(i)}{k} \Big) \cdot \mu(G) $$ \end{claim} \begin{proof}[Proof of Claim \ref{c1}] Fix an $i \in [k]$; each edge in $M^*$ is assigned to $G^{(1)}, \ldots , G^{(i-1)}$ w.p. $(i - 1)/k$, hence in expectation, the size of $M^{*<i}$ is $\frac{i-1}{k}\cdot \mu(G)$. For large $n$, the ratio $\frac{\mu(G)}{k}$ is large and the claim follows from a standard application of the Chernoff bound. \end{proof} \begin{proof}[Proof of Lemma \ref{lone}] Fix an $i \in [k/3]$ and the set of edges for $E^{(1)}, \ldots, E^{(i-1)}$. This also fixes the matching $M^{(i-1)}$, while the set of edges in $E^{(i)}, \ldots , E^{(k)}$ together with the matching $M^{(i)}$ are still random variables. We further condition on the event that, after fixing the edges in $E^{(1)}, \ldots , E^{(i-1)}$, we have $|M^{*<i}| \leq \frac{i-1+o(i)}{k} \cdot \mu(G)$, which happens w.p. $1-O(1/n)$ by Claim \ref{c1}. Let $V_{old}$ be the set of vertices incident on $M^{(i-1)}$ and $V_{new}$ be the remaining vertices. Let $E^{\geq i}$ be the set of edges in $E \setminus (E^{(1)} \cup \ldots \cup E^{(i-1)})$. We partition $E^{\geq i}$ into two parts: (i) $E_{old}$: the set of edges with \textit{at least one} vertex in $V_{old}$, and (ii) $E_{new}$: the set of edges incident entirely on $V_{new}$. Our goal is to show that w.h.p. any maximum matching of $G^{(i)}$ matches $\Omega(\mu(G)/k)$ vertices in $V_{new}$ to each other by using the edges in $E_{new}$. The lemma then follows easily from this. The edges in the graph $G^{(i)}$ are chosen by independently assigning each edge in $E^{\geq i}$ to $G^{(i)}$ w.p. $1/(k - i + 1)$. This independence makes it possible to treat the edges in $E_{old}$ and $E_{new}$ separately. We can fix the set of sampled edges of $G^{(i)}$ in $E_{old}$, denoted by $E^{i}_{old}$, without changing the distribution of edges in $G^{(i)}$ chosen from $E_{new}$.
Let $\mu_{old} := MM(G(V, E^{i}_{old}))$, i.e., the maximum number of edges that can be matched in $G^{(i)}$ using only the edges in $E^{i}_{old}$. In the following, we show that w.h.p., there exists a matching of size $\mu_{old}+\Omega(\mu(G)/k)$ in $G^{(i)}$. By the definition of $\mu_{old}$, this implies that any maximum matching of $G^{(i)}$ has to use at least $\Omega(\mu(G)/k)$ edges in $E_{new}$. Let $M_{old}$ be an arbitrary maximum matching of size $\mu_{old}$ in $G(V, E^{i}_{old})$. Let $V_{new}(M_{old})$ be the set of vertices in $V_{new}$ that are incident on $M_{old}$. We show that there is a large matching in $G(V, E_{new})$ that avoids $V_{new}(M_{old})$.\\ \begin{claim}\label{c12} $|V_{new}(M_{old})| < c \cdot d(d-1)\cdot \mu(G)$. \end{claim} \begin{proof}[Proof of Claim \ref{c12}] Since any edge in $M_{old}$ has at least one vertex in $V_{old}$, we have $|V_{new}(M_{old})| \leq (d-1)|M_{old}| \leq (d-1) |V_{old}|$. By the assertion of the lemma, $|M^{(i-1)}| < c \cdot \mu(G)$, and hence $|V_{new}(M_{old})| \leq (d-1)\cdot |V_{old}| \leq d(d-1)\cdot |M^{(i-1)}| < c \cdot d(d-1) \cdot \mu(G)$. \end{proof} \begin{claim}\label{c2} There exists a matching in $G(V,E_{new})$ of size $\Big(\frac{k-i+1-o(i)}{k} - 2d(d-1)c\Big)\cdot \mu(G)$ that avoids the vertices of $V_{new}(M_{old})$. \end{claim} \begin{proof}[Proof of Claim \ref{c2}] By the assumption that $|M^{*<i}| \leq \frac{i-1+o(i)}{k} \cdot \mu(G)$, there is a matching $M$ of size $\tfrac{k-i+1-o(i)}{k}\cdot \mu(G)$ in the graph $G(V,E^{\geq i})$. By removing the edges in $M$ that are incident on either $V_{old}$ or $V_{new}(M_{old})$, at most $2d(d-1)c \cdot \mu(G)$ edges are removed from $M$. Now the remaining matching is entirely contained in $E_{new}$ and also avoids $V_{new}(M_{old})$, hence proving the claim. \end{proof} We are now ready to finalize the proof of Lemma \ref{lone}. Let $M_{new}$ be the matching guaranteed by Claim \ref{c2}. Each edge in this matching is chosen in $G^{(i)}$ w.p.
$1/(k - i + 1)$, independent of the other edges. Hence, by the Chernoff bound, there is a matching of size $$ (1-o(1))\cdot \Big(\frac{1}{k} - \frac{o(i)}{k(k-i+1)} - \frac{2d(d-1)c}{k-i+1}\Big) \cdot \mu(G) \geq \frac{1 - o(1) - 3d(d-1)c}{k}\cdot \mu(G)$$ in the edges of $M_{new}$ that appear in $G^{(i)}$ (for $i \leq k/3$). This matching can be directly added to the matching $M_{old}$, implying the existence of a matching of size $\mu_{old} +\frac{1 - o(1) - 3d(d-1)c}{k} \cdot \mu(G)$ in $G^{(i)}$. As we argued before, this ensures that any maximum matching of $G^{(i)}$ contains at least $\frac{1 - o(1) - 3d(d-1)c}{k} \cdot \mu(G)$ edges in $E_{new}$. These edges can always be added to $M^{(i-1)}$ to form $M^{(i)}$, hence proving the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{result1}] Recall that $M := M^{(k)}$ is the output matching of \textsf{Greedy}. For the first $k/3$ steps of \textsf{Greedy}, if at any step we get a matching of size at least $c \cdot \mu(G)$, then we are already done. Otherwise, at each step, by Lemma \ref{lone}, w.p. $1 - O(1/n)$, we increase the size of the maximal matching by $\frac{1 - 3d(d-1)c - o(1)}{k}\cdot \mu(G)$ edges; consequently, by taking a union bound over the $k/3$ steps, w.p. $1 - o(1)$, the size of the maximal matching would be at least $\frac{1 - 3d(d-1)c - o(1)}{3}\cdot \mu(G)$. Since $c = 1/(3d(d-1)+3)$, we have $\frac{1 - 3d(d-1)c}{3} = c$, and in either case, the matching computed by \textsf{Greedy} is of size at least $\mu(G)/(3d(d-1) + 3) - o(\mu(G))$, which proves that \textsf{Greedy} is a $O(d^2)$-approximation. All that is left is to prove that \textsf{Greedy} can be implemented in 3 rounds with a memory of $\Tilde{O}(\sqrt{nm})$ per machine. Let $k = \sqrt{\frac{m}{n}}$ be the number of machines, each with a memory of $\Tilde{O}(\sqrt{nm})$. We claim that \textsf{Greedy} can run in three rounds. In the first round, each machine randomly partitions the edges assigned to it across the $k$ machines.
This results in a random $k$-partitioning of the graph across the machines. In the second round, each machine sends a maximum matching of its input to a designated central machine $M$; as there are $k = \sqrt{\frac{m}{n}}$ machines and each machine sends a coreset of size $\tilde{O}(n)$, the input received by $M$ is of size $\Tilde{O}(\sqrt{nm})$ and hence can be stored entirely on that machine. Finally, $M$ computes the answer by combining the matchings. \end{proof} We conclude this section by noting that computing a maximum matching on every machine is only required for the analysis, i.e., to show that there exists a large matching in the union of coresets. In practice, we can use an approximation algorithm to obtain a large matching from the coresets. In our experiments, we use a maximal matching instead of a maximum one. \section{A $O(\log{n})$-rounds $d$-approximation algorithm} In this section, we show how to compute a maximal matching in $O(\log{n})$ MPC rounds by generalizing the algorithm in \cite{lattanzi2011filtering} to $d$-uniform hypergraphs. The algorithm provided by Lattanzi \textit{et al.} \cite{lattanzi2011filtering} computes a maximal matching in graphs in $O(\log n)$ rounds if $s = \Theta(n)$, and in at most $\lfloor c/\epsilon\rfloor$ iterations when $s = \Theta(n^{1+\epsilon})$, where $0< \epsilon < c$ is a fixed constant. Harvey \textit{et al.} \cite{harvey2018greedy} show a similar result. The algorithm first samples $O(s)$ edges and finds a maximal matching $M_1$ on the resulting subgraph (we will specify a bound on the memory $s$ later). Given this matching, we can safely remove edges that are in conflict (i.e., those incident on nodes in $M_1$) from the original graph $G$. If the resulting filtered graph $H$ is small enough to fit onto a single machine, the algorithm augments $M_1$ with a matching found on $H$. Otherwise, we augment $M_1$ with the matching found by recursing on $H$.
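This sample-and-filter loop can be sketched sequentially in Python as follows (names are ours and illustrative; the sketch keeps the sampling probability $p = s/(5|\mathcal{S}|d)$ of \textsf{Iterated-Sampling} below but runs everything on one machine, so the failure check on $|E'|$ is omitted).

```python
import random

def maximal_matching(edges):
    """Greedy maximal matching on a list of hyperedges."""
    matched, M = set(), []
    for e in edges:
        if matched.isdisjoint(e):
            M.append(e)
            matched.update(e)
    return M

def iterated_sampling(edges, d, s, seed=0):
    """Sequential sketch: sample edges with probability s/(5|S|d), match the
    sample, filter out edges touching matched vertices, and repeat until the
    residual graph fits in memory s; then finish on the residual graph."""
    rng = random.Random(seed)
    matched, M = set(), []
    remaining = list(edges)
    while len(remaining) > s:
        p = min(1.0, s / (5 * d * len(remaining)))
        sample = [e for e in remaining if rng.random() < p]
        for e in maximal_matching(sample):          # M' computed on one machine
            M.append(e)
            matched.update(e)
        # induced residual graph G[I] on unmatched vertices
        remaining = [e for e in remaining if matched.isdisjoint(e)]
    M.extend(maximal_matching(remaining))           # final maximal matching M''
    return M
```

The returned matching is maximal: every input edge either touches a matched vertex or survives into the final residual graph, where a maximal matching is computed directly.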
Note that since the size of the graph reduces from round to round, the effective sampling probability increases, resulting in a larger sample of the remaining graph. \begin{restatable}{theorem}{mpcmax} \label{mpcmax} Given a $d$-uniform hypergraph $G(V, E)$, \textsf{Iterated-Sampling} computes a maximal matching in $G$ with high probability in $O(\log{n})$ $MPC$ rounds on machines of memory $s =\Theta(d\cdot n)$. \end{restatable} \begin{algorithm} \SetAlgoLined \DontPrintSemicolon Set $M:=\emptyset$ and $\mathcal{S}= E$.\; Sample every edge $e \in \mathcal{S}$ independently with probability $p =\frac{s}{5|\mathcal{S}|d}$ to form $E'$.\; If $|E'| > s$ the algorithm fails. Otherwise give the graph $G(V,E')$ as input to a single machine and compute a maximal matching $M'$ on it. Set $M= M\cup M'$.\; Let $I$ be the set of unmatched vertices in $G$ and $G[I]$ the induced subgraph with edges $E[I]$. If $|E[I]| > s$, set $\mathcal{S}:= E[I]$ and return to step 2.\; Compute a maximal matching $M''$ on $G[I]$ and output $M = M \cup M''$.\; \caption{\textsf{Iterated-Sampling}} \label{maximalalg} \end{algorithm} Next we present the proof of Theorem \ref{mpcmax}. We first show that after sampling edges, the size of the graph $G[I]$ induced by unmatched vertices decreases exponentially with high probability, and therefore the algorithm terminates in $O(\log{n})$ rounds. Next we argue that $M$ is indeed a maximal matching, by showing that if an edge were still unmatched after termination, this would yield a contradiction. This is formalized in the following lemmas. \begin{lemma}\label{smpl} Let $E' \subset E$ be a set of edges chosen independently with probability $p$. Then with probability at least $1 - e^{-n}$, for all $I \subset V$ either $|E[I]| < 2n/p$ or $E[I] \cap E' \neq \emptyset$. \end{lemma} \begin{proof}[Proof of Lemma \ref{smpl}] Fix one such subgraph $G[I]=(I,E[I])$ with $|E[I]| \geq 2n/p$.
The probability that none of the edges in $E[I]$ were chosen to be in $E'$ is $(1 - p)^{|E[I]|} \leq (1 - p)^{2n/p} \leq e^{-2n}$. Since there are at most $2^n$ total possible induced subgraphs $G[I]$ (because each vertex is either matched or unmatched), the probability that there exists one that does not have an edge in $E'$ is at most $2^n e^{-2n} \leq e^{-n}$. \end{proof} \begin{lemma}\label{cmplx} If $s \geq 20d \cdot n$ then \textsf{Iterated-Sampling} runs for at most $O(\log{n})$ iterations with high probability. \end{lemma} \begin{proof}[Proof of Lemma \ref{cmplx}] Fix an iteration $i$ of \textsf{Iterated-Sampling} and let $p$ be the sampling probability for this iteration. Let $E_i$ be the set of edges at the beginning of this iteration, and denote by $I$ the set of unmatched vertices after this iteration. From Lemma \ref{smpl}, if $|E[I]| \geq 2n/p$ then an edge of $E[I]$ will be sampled with high probability. Note that no edge in $E[I]$ is incident on any edge in $M'$. Thus, if an edge from $E[I]$ were sampled then \textsf{Iterated-Sampling} would have chosen this edge to be in the matching. This contradicts the fact that no vertex in $I$ is matched. Hence, $|E[I]| \leq 2n/p$ with high probability. Now consider the first iteration of the algorithm, and let $G_1(V_1, E_1)$ be the induced graph on the unmatched nodes after this iteration. The above argument implies that $|E_1| \leq \frac{10d\cdot n |E|}{s} \leq \frac{|E|}{2}$. Similarly $|E_2| \leq \frac{10d\cdot n |E_1|}{s} \leq \frac{(10d\cdot n)^2 |E|}{s^2} \leq \frac{|E|}{2^2}$. After $i$ iterations, $|E_i| \leq \frac{|E|}{2^i}$, and the algorithm will terminate after $O(\log{n})$ iterations. \end{proof} \begin{lemma}\label{correctness} \textsf{Iterated-Sampling} finds a maximal matching of $G$ with high probability. \end{lemma} \begin{proof}[Proof of Lemma \ref{correctness}] First consider the case that the algorithm does not fail.
Suppose there is an edge $e=\{v_1,\ldots,v_d\} \in E$ such that none of the $v_i$'s are matched in the final matching $M$ that the algorithm outputs. In the last iteration of the algorithm, since $e\in E$ and its vertices are not matched, we have $e \in E[I]$. Since this is the last run of the algorithm, a maximal matching $M''$ of $G[I]$ is computed on one machine. Since $M''$ is maximal, at least one of the $v_i$'s must be matched in it. All of the edges of $M''$ get added to $M$ in the last step. This yields a contradiction. Next, consider the case that the algorithm fails. This occurs when the set of edges $E'$ has size larger than the memory in some iteration of the algorithm. Note that $\mathbb{E}[|E'|] = |\mathcal{S}| \cdot p = s/(5d) \leq s/10$ in a given iteration. By the Chernoff bound it follows that $|E'| \geq s$ with probability smaller than $2^{-s} \leq 2^{-20d\cdot n}$ (since $s \geq 20d\cdot n$). By Lemma \ref{cmplx} the algorithm completes in at most $O(\log{n})$ rounds, thus the total failure probability is bounded by $O(\log{n} \cdot 2^{-20d\cdot n})$ using the union bound. \end{proof} We are now ready to show that \textsf{Iterated-Sampling} can be implemented in $O(\log n)$ MPC rounds. \begin{proof}[Proof of Theorem \ref{mpcmax}] We show that \textsf{Iterated-Sampling} can be implemented in the $MPC$ model with machines of memory $\Theta(dn)$ in $O(\log{n})$ MPC rounds. Combining this with Lemma \ref{correctness} on the correctness of \textsf{Iterated-Sampling}, we immediately obtain Theorem \ref{mpcmax} for the case of $s = \Theta(dn)$. Every iteration of \textsf{Iterated-Sampling} can be implemented in $O(1)$ MPC rounds. Suppose the edges are initially distributed over all machines. We can sample every edge with probability $p$ (in each machine) and then send the sampled edges $E'$ to the first machine. With high probability we will have $|E'| = O(n)$, so we can fit the sampled edges in the first machine.
Therefore, the sampling can be done in one $MPC$ round. Computing the subgraph of $G$ induced by $I$ can be done in 2 rounds: one round to send the list of unmatched vertices $I$ from the first machine to all the other machines, and another round to compute the subgraph $G[I]$ and send it to a single machine if it fits, or start the sampling again. \end{proof} \section{A 3-round $O(d^3)$-approximation using HEDCSs} In graphs, edge degree constrained subgraphs (EDCS) \cite{bernstein2015fully,bernstein2016faster} have been used as a local condition for identifying large matchings, leading to good dynamic and parallel algorithms \cite{bernstein2015fully, bernstein2016faster, assadi2019coresets}. These papers showed that an EDCS exists, that it contains a large matching, that it is robust to sampling and composition, and that it can be used as a coreset. In this section, we present a generalization of EDCS for hypergraphs. We prove that an HEDCS exists, that it contains a large matching, that it is robust to sampling and composition, and that it can be used as a coreset. The proofs and algorithms, however, require significant developments beyond the graph case. We first present the definitions of a fractional matching and of an HEDCS. \begin{definition}[Fractional matching] In a hypergraph $G(V,E)$, a fractional matching is a mapping $y: E \to [0,1]$ such that for every edge $e$ we have $0 \leq y_e \leq 1$ and for every vertex $v$: $\sum_{e \ni v} y_e \leq 1$. The value of such a fractional matching is equal to $\sum_{e \in E} y_e$.
For $\epsilon > 0$, a fractional matching is an $\epsilon$-restricted fractional matching if for every edge $e$: $y_e = 1 \mbox{ or } y_e \in [0,\epsilon].$ \end{definition} \begin{definition} For any hypergraph $G(V, E)$ and integers $\beta \geq \beta^- \geq 0$, a hyperedge degree constrained subgraph HEDCS$(G, \beta, \beta^-)$ is a subgraph $H := (V, E_{H})$ of $G$ satisfying: \begin{itemize} \item \textit{(P1)}: For any hyperedge $e \in E_{H}$: $ \sum_{v \in e} d_{H}(v) \leq \beta.$ \item \textit{(P2)}: For any hyperedge $e \not\in E_{H}$: $ \sum_{v \in e} d_{H}(v) \geq \beta^-.$ \end{itemize} \end{definition} We show via a constructive proof that a hypergraph contains an HEDCS whenever the parameters satisfy the inequality $\beta - \beta^- \geq d - 1$. \begin{restatable}{lemma}{existence} \label{existence} Any hypergraph $G$ contains an HEDCS$(G, \beta, \beta^-)$ for any parameters with $\beta - \beta^- \geq d - 1$. \end{restatable} \begin{proof} Consider the following simple procedure for creating an HEDCS $H$ of a given hypergraph $G$: start by initializing $H$ to be equal to $G$. While $H$ is not an HEDCS$(G, \beta, \beta^-)$, find an edge $e$ which violates one of the properties of an HEDCS and fix it. Fixing the edge $e$ means removing it from $H$ if it was violating Property \textit{(P1)} and adding it to $H$ if it was violating Property \textit{(P2)}. The output of the above procedure is clearly an HEDCS of the graph $G$. However, a priori it is not clear that this procedure ever terminates, as fixing one edge $e$ can result in many edges violating the HEDCS properties, potentially undoing the previous changes.
In the following, we use a potential function argument to show that this procedure always terminates after a finite number of steps, hence implying that an HEDCS$(G, \beta, \beta^{-})$ always exists.\\ We define the following potential function $\Phi$: $$ \Phi := (\frac{2}{d}\beta - \frac{d-1}{d}) \cdot \sum\limits_{v \in V} d_{H}(v) - \sum\limits_{e \in H} \sum\limits_{u \in e} d_{H}(u)$$ We argue that in any step of the procedure above, the value of $\Phi$ increases by at least 1. Since the maximum value of $\Phi$ is at most $O(n\beta^2/d)$, this immediately implies that the procedure terminates in $O(n\beta^2/d)$ iterations.\\ Define $\Phi_1 = (\frac{2}{d}\beta - \frac{d-1}{d}) \cdot \sum\limits_{v \in V} d_{H}(v)$ and $\Phi_2 = \sum\limits_{e \in H} \sum\limits_{u \in e} d_{H}(u)$. Let $e$ be the edge we choose to fix at this step, $H_b$ be the subgraph before fixing the edge $e$, and $H_a$ be the resulting subgraph. Suppose first that the edge $e$ was violating Property \textit{(P1)} of HEDCS. As the only change is in the degrees of the vertices $v \in e$, $\Phi_1$ decreases by $(2\beta - (d-1))$. On the other hand, $ \sum\limits_{v \in e} d_{H_{b}}(v) \geq \beta + 1$ originally (as $e$ was violating Property (P1)), and hence removing the term for $e$ from $\Phi_2$ decreases $\Phi_2$ by at least $\beta + 1$. Additionally, for each edge $e_u$ incident upon $u \in e$ in $H_a$, after removing the edge $e$, $\sum\limits_{v \in e_u}d_{H_a}(v)$ decreases by one. As there are at least $\sum\limits_{u \in e}d_{H_a}(u) = \sum\limits_{u \in e}d_{H_b}(u) - d \geq \beta - (d-1)$ choices for $e_u$, this means that in total, $\Phi_2$ decreases by at least $2\beta + 1 - (d-1)$, and since $\Phi_2$ is subtracted in $\Phi$, in this case $\Phi$ increases by at least 1 after fixing the edge $e$. Now suppose that the edge $e$ was violating Property \textit{(P2)} of HEDCS instead. In this case, the degrees of the vertices $u \in e$ all increase by one, hence $\Phi_1$ increases by $2\beta - (d-1)$.
Additionally, note that since edge $e$ was violating Property (P2) we have $ \sum\limits_{v \in e} d_{H_b}(v) \leq \beta^- - 1$, so the addition of edge $e$ contributes a new term $\sum\limits_{v \in e} d_{H_a}(v) = \sum\limits_{v \in e} d_{H_b}(v) + d \leq \beta^- - 1 + d $ to $\Phi_2$. Moreover, for each edge $e_u$ incident upon $u \in e$, after adding the edge $e$, $\sum\limits_{v \in e_u} d_{H_a}(v)$ increases by one, and since there are at most $ \sum\limits_{v \in e} d_{H_b}(v) \leq \beta^- - 1$ choices for $e_u$, $\Phi_2$ increases in total by at most $2\beta^- - 2 +d$. The total change in $\Phi$ is therefore at least $2\beta - (d-1) - (2\beta^- - 2 +d) = 3 + 2(\beta - \beta^- - d).$ So if $\beta - \beta^- \geq d - 1$, we have that $\Phi$ increases by at least 1 after fixing edge $e$. \end{proof} The main result in this section shows that an HEDCS of a graph contains a large $\epsilon$-restricted fractional matching that approximates the maximum matching by a factor less than $d$. \begin{restatable}{theorem}{mainth} \label{mainth} Let $G$ be a $d$-uniform hypergraph and $0 \leq \epsilon < 1$. Let $H$:= HEDCS$(G, \beta, \beta(1 - \lambda))$ with $\lambda =\frac{\epsilon}{6}$ and $\beta \geq \frac{8 d^2}{d-1} \cdot \lambda^{-3}$. Then $H$ contains an $\epsilon$-restricted fractional matching $M^{H}_f$ with total value at least $\mu(G)(\frac{d}{d^2 - d + 1 } - \frac{\epsilon}{d-1}).$ \end{restatable} In order to prove Theorem \ref{mainth}, we will need the following two lemmas. The first lemma is an algebraic result that will help us bound the contribution of vertices in the $\epsilon$-restricted fractional matching. The second lemma identifies additional structure on the HEDCS that we will use to construct the fractional matching of the theorem. For proofs of both lemmas, see Appendix~\ref{appmainth}. \begin{restatable}{lemma}{phix} \label{phi} Let $ \phi(x) = \min \{ 1 , \frac{(d-1)x}{d (\beta-x)} \} $.
If $a_1, \ldots, a_d \geq 0$ and $a_1 + \ldots + a_d \geq \beta(1 - \lambda)$ for some $\lambda \geq 0$, then $\sum\limits_{i=1}^d \phi(a_i) \geq 1 - 5\cdot \lambda$. \end{restatable} \begin{restatable}{lemma}{fourprop} \label{4prop} Given any $HEDCS(G,\beta, \beta(1-\lambda))$ $H$, we can find two disjoint sets of vertices $X$ and $Y$ that satisfy the following properties: \begin{enumerate} \item $|X| + |Y| = d \cdot \mu(G)$. \item There is a perfect matching on $Y$ using edges in $H$. \item Letting $\sigma = \frac{|Y|}{d} + \sum\limits_{x \in X} \phi(d_H(x)) $, we have that $\sigma \geq \mu(G) (1-5\lambda)$. \item All edges in $H$ with vertices in $X$ have at least one other vertex in $Y$, and have vertices only in $X$ and $Y$. \end{enumerate} \end{restatable} We are now ready to prove Theorem \ref{mainth}. \begin{proof}[Proof of Theorem \ref{mainth}] Suppose we have two sets $X$ and $Y$ satisfying the properties of Lemma \ref{4prop}. We construct an $\epsilon$-restricted fractional matching $M^H_f$ using the edges in $H$ such that \[val(M^H_f) \geq (\frac{d}{d^2 - d + 1 } - \frac{\epsilon}{d-1}) \mu(G),\] where $val(M^H_f)$ is the value of the fractional matching $M^H_f$. Now, by Property 2 of Lemma \ref{4prop}, $Y$ contains a perfect matching $M^H_Y$ using edges in $H$. Let $Y^-$ be a subset of $Y$ obtained by randomly sampling exactly a $1/d$ fraction of the edges of $M^H_Y$ and adding their vertices to $Y^-$. Let $Y^* = Y \setminus Y^-$ and observe that $|Y^-| = |Y|/d$ and $ |Y^*| = \frac{d-1}{d} |Y|$. Let $H^*$ be the subgraph of $H$ induced by $X \cup Y^*$ (each edge in $H^*$ has vertices only in $X$ and $Y^*$). We define a fractional matching $M^{H^*}_f$ on the edges of $H^*$ in which all edges have value at most $\epsilon$. We will then let our final fractional matching $M^H_f$ be the fractional matching $M^{H^*}_f$ joined with the perfect matching in $H$ of $Y^-$ (so $M^H_f$ assigns value $1$ to the edges in this perfect matching).
$M^H_f$ is, by definition, an $\epsilon$-restricted fractional matching. We now give the details of the construction of $M^{H^*}_f$. Let $V^* = X \cup Y^*$ be the vertices of $H^*$, and let $E^*$ be its edges. For any vertex $v \in V^*$, define $d^*_{H}(v)$ to be the degree of $v$ in $H^*$. Recall that by Property 4 of Lemma \ref{4prop}, if $x \in X$ then all the edges of $H$ incident to $x$ go to $Y$ (but some might go to $Y^-$). Thus, for $x \in X$, we have $E[d^*_{H}(x)] \geq \frac{d_{H}(x)(d-1)}{d}$. We now define $M^{H^*}_f$ as follows. For every $x \in X$, we arbitrarily order the edges of $H$ incident to $x$, and then we add a value of $\min\Big\{\frac{\epsilon}{|X \cap e|}, \frac{1}{|X \cap e|}\frac{1}{\beta -d_H(x)}\Big\}$ to these edges one by one, stopping when either $val(x)$ (the sum of values assigned to the edges incident to $x$) reaches 1 or there are no more edges in $H$ incident to $x$, whichever comes first. In the case that $val(x)$ reaches $1$, the last edge might receive a value less than $\min\Big\{\frac{\epsilon}{|X \cap e|}, \frac{1}{|X \cap e|} \frac{1}{\beta -d_H(x)}\Big\}$, where $e$ is the last edge to be considered. We now verify that $M^{H^*}_f$ is a valid fractional matching, i.e., that all vertices have value at most $1$. This is clearly true of vertices $x \in X$ by construction. For a vertex $y \in Y^*$, it suffices to show that each edge incident to $y$ receives a value of at most $1/d_H(y) \leq 1/d^*_H(y) $. To see this, first note that the only edges to which $M^{H^*}_f$ assigns non-zero values have at least one vertex in $X$ and at least one in $Y^*$.
Any such edge $e$ receives value at most $\min{\big\{ \epsilon, \sum\limits_{x \in X \cap e} \frac{1}{|X\cap e|}\frac{1}{\beta - d_H(x)}\big\} }$, but since $e$ is in $H^*$ and hence in $H$, we have by Property \textit{(P1)} of an HEDCS that $d_H(y) \leq \beta - d_H(x)$, and so $\sum\limits_{x \in X \cap e} \frac{1}{|X\cap e|} \frac{1}{\beta -d_H(x)} \leq \frac{1}{|X\cap e|} \frac{|X \cap e|}{d_H(y)} \leq \frac{1}{d_H(y)}$. By construction, for any $x \in X$, the value $val(x)$ of $x$ in $M_f^{H^*}$ satisfies: \begin{eqnarray*} val(x) & = & \min \left\{ 1 , \sum\limits_{e \ni x} \min\left\{ \frac{\epsilon}{|X\cap e|}, \frac{1}{|X \cap e|} \frac{1}{\beta - d_H(x)} \right\} \right\} \\ & \geq &\min \left\{ 1 , d_H^*(x) \cdot \min\left\{ \frac{\epsilon}{d-1}, \frac{1}{d-1}\frac{1}{\beta - d_H(x)} \right\} \right\}\ . \end{eqnarray*} Furthermore, we can bound the value of the fractional matching $M^{H^*}_f$ as $val(M^{H^*}_f) \geq \sum\limits_{x \in X} val(x)$. For convenience, we use $val'(x) = \min \left\{ 1 , d_H^*(x) \cdot \min\left\{ \epsilon, \frac{1}{\beta - d_H(x)} \right\} \right\}$, so that \begin{eqnarray} val(x) & \geq & \frac{val'(x)}{d-1} \, \mbox{ and}\\ val(M^{H^*}_f) & \geq & \frac{1}{d-1}\sum\limits_{x \in X} val'(x) \ . \label{val} \end{eqnarray} Next we present a lemma, proved in Appendix \ref{appmainth}, that bounds $val'(x)$ for each vertex. \begin{restatable}{lemma}{lemmaval} \label{lemmaval} For any $x \in X$, $E[val'(x)] \geq (1-\lambda)\phi(d_H(x))$. \end{restatable} This last lemma, combined with (\ref{val}), allows us to lower bound the value of $M_f^{H^*}$: $$ val(M_f^{H^*}) \geq \frac{1}{d-1}\sum\limits_{x \in X} val'(x)\geq \frac{1-\lambda}{d-1} \sum\limits_{x \in X} \phi(d_H(x)). $$ Note that we have constructed $M_f^H$ by taking the fractional values in $M_f^{H^*}$ and adding the perfect matching on edges from $Y^-$.
The latter matching has size $\frac{|Y^-|}{d} = \frac{|Y|}{d^2}$, and the value of $M_f^{H}$ is bounded by: \begin{eqnarray*} val(M^H_f) & \geq & \frac{1}{d-1} (1-\lambda) \sum\limits_{x \in X} \phi(d_H(x)) + \frac{|Y|}{d^2} \\ & = & \frac{1}{d-1} \big( (1-\lambda) \sum\limits_{x \in X} \phi(d_H(x)) + \frac{|Y|}{d}\big) - \frac{ |Y|}{d^2(d-1)} \\ & \geq & \frac{1}{d-1}(1 - \lambda)(1 - 5\lambda)\mu(G)- \frac{|Y|}{d^2(d-1)} \\ & \geq & \frac{1}{d-1}(1 - 6\lambda)\mu(G) - \frac{|Y|}{d^2(d-1)} \ . \end{eqnarray*} To complete the proof, recall that $Y$ contains a perfect matching in $H$ of $|Y|/d$ edges, so if $\frac{|Y|}{d} \geq \frac{d}{d(d-1)+1} \mu(G)$ then there already exists a matching in $H$ of size at least $\frac{d}{d(d-1)+1} \mu(G)$, and the theorem holds. We can thus assume that $|Y|/d <\big(\frac{d}{d(d-1)+1}\big) \mu(G)$, in which case the previous bound yields: \begin{eqnarray*} val(M^H_f) & \geq & \frac{1}{d-1}(1 - 6\lambda)\mu(G) - \frac{|Y|}{d^2(d-1)} \\ & \geq & \big(\frac{1-6 \lambda}{d-1}\big)\mu(G) - \frac{\mu(G)}{(d-1)(d(d-1)+1)} \\ & = & \big(\frac{d}{d(d-1)+1} - \frac{6\lambda}{d-1}\big) \mu(G) \ . \end{eqnarray*} In both cases we get that $$ val(M^H_f) \geq \big(\frac{d}{d^2 - d + 1 } - \frac{6\lambda}{d-1}\big)\mu(G).$$ \end{proof} \subsection{Sampling and constructing an HEDCS in the MPC model} Our results in this section are general and applicable to any computational model. We prove structural properties about HEDCSs that will help us construct them in the MPC model. We show that the degree distributions of every HEDCS (for the same parameters $\beta$ and $\lambda$) are almost identical. In other words, the degree of any vertex $v$ is almost the same in every HEDCS of the same hypergraph $G$. We also show that HEDCSs are robust under edge sampling, i.e.
that edge sampling an HEDCS yields another HEDCS, and that the degree distributions of any two HEDCSs for two different edge sampled subgraphs of $G$ are almost the same, no matter how the two HEDCSs are selected. In the following lemma, we argue that any two HEDCSs of a graph $G$ (for the same parameters $\beta$, $\beta^-$) are ``essentially identical'' in that their degree distributions are close to each other. In the rest of this section, we fix the parameters $\beta$, $\beta^-$ and two subgraphs $A$ and $B$ that are both HEDCS$(G,\beta,\beta^-)$. \begin{restatable}{lemma}{degreedist} \label{degree_dist} (Degree Distribution Lemma). Fix a $d$-uniform hypergraph $G(V, E)$ and parameters $\beta$, $\beta^- = (1-\lambda)\cdot \beta $ (for $\lambda$ small enough). For any two subgraphs $A$ and $B$ that are HEDCS$(G,\beta, \beta^-)$, and any vertex $v \in V$, we have $ |d_A(v) - d_B(v)| = O(\sqrt{n})\lambda^{1/2} \beta$. \end{restatable} \begin{proof} Suppose that we have $d_A(v) = k \lambda \beta$ for some $k$ and that $d_B(v) = 0$. We will show that if $k = \Omega(\tfrac{\sqrt{n}}{\lambda^{1/2}})$, then this leads to a contradiction. Let $e$ be one of the $k \lambda \beta$ edges that are incident to $v$ in $A$. Since $e \not\in B$, we have $\sum\limits_{u \in e,\, u \neq v} d_B(u) \geq (1-\lambda)\beta$. Of these $(1-\lambda)\beta$ edges, at most $(1-k\lambda)\beta$ can be in $A$ in order to respect \textit{(P1)}, so we will have at least $(k-1)\lambda \beta$ edges in $B \setminus A$; thus we have now covered $k\lambda\beta + (k-1)\lambda \beta $ edges in $A$ and $B$ combined. Let us keep focusing on the edge $e$, and in particular on one of its $(k-1)\lambda\beta$ incident edges in $B \setminus A$. Let $e_1$ be such an edge. Since $e_1 \in B \setminus A$, we have $\sum\limits_{v' \in e_1} d_A(v') \geq (1-\lambda)\beta$.
The edges incident to $e_1$ in $A$ that we have covered so far number at most $(1-k\lambda)\beta$, therefore we still need at least $(k-1)\lambda\beta$ new edges in $A$ to respect \textit{(P2)}. Out of these $(k-1)\lambda\beta$ edges, at most $\lambda \beta$ can be in $B$ (because $e_1$ already has $(1-\lambda)\beta$ covered edges incident to it in $B$). Therefore at least $(k-2)\lambda \beta$ are in $A\setminus B$. Thus, we have so far covered at least $k\lambda \beta + (k-1) \lambda \beta + (k-2)\lambda \beta$ edges. One can see that we can keep doing this until we cover at least $\tfrac{k(k+1)}{2} \lambda \beta$ edges in $A$ and $B$ combined. The number of edges in each of $A$ and $B$ cannot exceed $n\cdot \beta$ (each vertex has degree $\leq \beta$), therefore we get a contradiction if $\tfrac{k(k+1)}{2} \lambda \beta > 2n \beta$, which holds if $k > \tfrac{2\sqrt{n}}{\lambda^{1/2}}$. Therefore \[k \leq \tfrac{2\sqrt{n}}{\lambda^{1/2}}\mbox{, and } d_A(v) = k \lambda \beta \leq 2 \sqrt{n} \lambda^{1/2} \beta.\] \end{proof} The next corollary shows that if the hypergraph is linear (every two hyperedges intersect in at most one vertex), then the degree distributions are even closer. The proof is in Appendix \ref{section53}. \begin{restatable}{corollary}{degreedistlinear} \label{degree_dist_linear} For $d$-uniform linear hypergraphs, the degree distribution is tighter, and $ |d_A(v) - d_B(v)| = O(\log{n})\lambda \beta.$ \end{restatable} Next we prove two lemmas regarding the structure of different HEDCSs across sampled subgraphs. The first lemma shows that edge sampling an HEDCS results in another HEDCS for the sampled subgraph. The second lemma shows that the degree distributions of any two HEDCSs for two different edge sampled subgraphs of $G$ are almost the same.
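For concreteness, the two properties \textit{(P1)} and \textit{(P2)} used throughout these proofs can be checked mechanically. The following minimal Python sketch (Python being the language of our experiments) verifies whether a candidate subgraph is an HEDCS; the function name and the representation of hyperedges as vertex tuples are illustrative assumptions, not part of our implementation.

```python
from itertools import chain
from collections import Counter

def is_hedcs(edges_G, edges_H, beta, beta_minus):
    """Check properties (P1)/(P2) of an HEDCS(G, beta, beta_minus).

    edges_G, edges_H: iterables of hyperedges, each a tuple of vertices,
    with edges_H a subset of edges_G.
    """
    H = set(edges_H)
    deg = Counter(chain.from_iterable(H))  # d_H(v) for every vertex
    for e in edges_G:
        s = sum(deg[v] for v in e)
        if e in H and s > beta:            # (P1): degree sum at most beta
            return False
        if e not in H and s < beta_minus:  # (P2): degree sum at least beta^-
            return False
    return True
```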
\begin{restatable}{lemma}{sampledhedcs} \label{sampledhedcs} Let $H$ be a HEDCS($G, \beta_H , \beta^-_H)$ for parameters $\beta_H := (1 - \frac{\lambda}{\alpha})\cdot \frac{\beta}{p}$, $\beta_H^- := \beta_H - (d-1)$ and $\beta \geq 15d(\alpha d)^2 \cdot \lambda^{-2} \cdot \log{n}$ such that $p < 1 - \frac{2}{\alpha}$. Suppose $G_p := G^E_p(V, E_p)$ is an edge sampled subgraph of $G$ and $H_p := H \cap G_p$; then, with high probability: \begin{enumerate} \item For any vertex $v \in V$ : $|d_{H_p}(v)- p \cdot d_H(v)| \leq \frac{\lambda}{\alpha d}\beta$ \item $H_p$ is a HEDCS of $G_p$ with parameters $(\beta,(1 - \lambda) \cdot \beta)$ . \end{enumerate} \end{restatable} \begin{proof} Let $\alpha':= \alpha d$. For any vertex $v \in V$, $E[d_{H_p}(v)] = p \cdot d_H(v)$ and $d_H(v) \leq \beta_H$ by Property \textit{(P1)} of HEDCS $H$. Moreover, since each edge incident upon $v$ in $H$ is sampled in $H_p$ independently, by the Chernoff bound: $$ P\Big(|d_{H_p}(v)- p \cdot d_H(v)| \geq \frac{\lambda}{\alpha'}\beta\Big) \leq 2\cdot \exp(-\frac{\lambda^2 \beta}{3 \cdot \alpha'^2}) \leq \frac{2}{n^{5d}}\ .$$ In the following, we condition on the event that: $$ |d_{H_p}(v)- p \cdot d_H(v)| \leq \frac{\lambda}{\alpha'}\beta\ .$$ This event happens with probability at least $1 - \frac{2}{n^{5d-1}}$ by the above equation and a union bound over the $|V| = n$ vertices. This finalizes the proof of the first part of the claim. We now prove that $H_p$ is indeed an HEDCS$(G_p, \beta,(1 - \lambda) \cdot \beta)$ conditioned on this event. Consider any edge $e \in H_p$. Since $H_p \subset H$, $e \in H$ as well. Hence, we have $$ \sum\limits_{v \in e} d_{H_p}(v) \leq p \cdot \beta_H + \frac{d \lambda}{\alpha'} \beta = (1 - \frac{\lambda}{\alpha} + \frac{d\lambda}{\alpha'})\beta = \beta \ ,$$ because $\frac{\alpha}{\alpha'} = \frac{1}{d}$, where the inequality is by Property \textit{(P1)} of HEDCS $H$ and the equality is by the choice of $\beta_H$.
As a result, $H_p$ satisfies Property \textit{(P1)} of HEDCS for parameter $\beta$. Now consider an edge $e \in G_p \setminus H_p$. Since $H_p = G_p \cap H$, we have $e \not\in H$ as well. Hence, \begin{eqnarray*} \sum\limits_{v \in e} d_{H_p}(v) \geq p \cdot \beta_H^- - \frac{d \lambda}{\alpha'} \beta & = & (1 - \frac{\lambda}{\alpha} - \frac{d \lambda}{\alpha'}) \beta - p\cdot (d-1) \\ & = & (1 - \frac{2\lambda}{\alpha}) \beta - p\cdot (d-1) \\ & > & (1-\lambda) \cdot \beta \ , \end{eqnarray*} where the last inequality holds because $\lambda \beta (1 - \frac{2}{\alpha}) > p \cdot (d-1)$, which follows from the lower bound on $\beta$ and the assumption $p < 1 - \frac{2}{\alpha}$. Hence $H_p$ also satisfies Property \textit{(P2)} for parameter $(1-\lambda)\cdot\beta$. \end{proof} \begin{restatable}{lemma}{edgesample} \label{lemma25} (HEDCS in Edge Sampled Subgraph). Fix any hypergraph $G(V, E)$ and $p \in (0, 1)$. Let $G_1$ and $G_2$ be two edge sampled subgraphs of $G$ with probability $p$ (chosen not necessarily independently). Let $H_1$ and $H_2$ be arbitrary HEDCSs of $G_1$ and $G_2$ with parameters $(\beta,(1 - \lambda) \cdot \beta)$. Suppose $\beta \geq 15d(\alpha d)^2 \cdot \lambda^{-2} \cdot \log{n}$; then, with probability $1 - \frac{4}{n^{5d-1}}$, simultaneously for all $v \in V$ : $|d_{H_1}(v) - d_{H_2}(v)| = O\big(n^{1/2}\big) \lambda^{1/2} \beta $. \end{restatable} \begin{proof} Let $H$ be an HEDCS(G, $\beta_H , \beta_H^-)$ for the parameters $\beta_H$ and $\beta_H^-$ defined in the previous lemma. The existence of $H$ follows since $\beta_H - (d-1) \geq \beta_H^-$. Define $\hat{H}_1 := H \cap G_1$ and $\hat{H}_2 := H \cap G_2$. By Lemma \ref{sampledhedcs}, $\hat{H}_1$ (resp. $\hat{H}_2$) is an HEDCS of $G_1$ (resp. $G_2$) with parameters $(\beta,(1 -\lambda)\beta)$ with probability $1 - \frac{4}{n^{5d-1}}$. In the following, we condition on this event. By Lemma \ref{degree_dist} (Degree Distribution Lemma), since both $H_1$ (resp. $H_2$) and $\hat{H}_1$ (resp. $\hat{H}_2$) are HEDCSs for $G_1$ (resp. $G_2$), the degrees of vertices in both of them should be ``close'' to each other.
Moreover, since by Lemma \ref{sampledhedcs} the degree of each vertex in $\hat{H}_1$ and $\hat{H}_2$ is close to $p$ times its degree in $H$, we can argue that the vertex degrees in $H_1$ and $H_2$ are close. Formally, for any $v \in V$, we have \begin{eqnarray*} |d_{H_1}(v) - d_{H_2}(v)| & \leq & |d_{H_1}(v) - d_{\hat{H}_1}(v)| + |d_{\hat{H}_1}(v) - d_{\hat{H}_2}(v)| + |d_{\hat{H}_2}(v) - d_{H_2}(v)| \\ & \leq & O\big(n^{1/2}\big) \lambda^{1/2} \beta + | d_{\hat{H}_1}(v) - p \cdot d_H(v)| + | d_{\hat{H}_2}(v) - p \cdot d_H(v)|\\ & \leq & O\big(n^{1/2}\big) \lambda^{1/2} \beta + O(1) \cdot \lambda \cdot \beta \ . \end{eqnarray*} \end{proof} \begin{restatable}{corollary}{cor16}\label{cor16} If $G$ is linear, then $|d_{H_1}(v) - d_{H_2}(v)| = O\big(\log{n}\big) \lambda \beta$. \end{restatable} We are now ready to present a parallel algorithm that uses the HEDCS subgraph. We first compute an HEDCS in parallel via edge sampling. Let $G^{(1)}, \ldots , G^{(k)}$ be a random $k$-partition of a graph $G$. We show that if we compute an arbitrary HEDCS of each graph $G^{(i)}$ (with no coordination across different graphs) and combine them together, we obtain an HEDCS for the original graph $G$. We then store this HEDCS on one machine and compute a maximal matching on it. We present our algorithm for the full range of memory $s = n^{\Omega(1)}$. Lemma \ref{cishedcs} and Corollary \ref{linearmemory} serve as a proof of Theorem \ref{result4}.
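In the Python used for our experiments, the partition-and-combine scheme just described can be sketched as follows. The local-search subroutine \texttt{build\_hedcs} is only a simplified stand-in for the construction of Lemma \ref{existence} (add an edge violating \textit{(P2)}, remove an edge violating \textit{(P1)}); its termination rests on the potential argument in that lemma's proof, and all names here are illustrative assumptions.

```python
import random
from collections import Counter
from itertools import chain

def build_hedcs(edges, beta, beta_minus):
    """Local search: repeatedly fix one violated constraint.

    Simplified stand-in for the construction in Lemma `existence`:
    drop an edge violating (P1), add an edge violating (P2).
    """
    H = set()
    while True:
        deg = Counter(chain.from_iterable(H))
        for e in edges:
            s = sum(deg[v] for v in e)
            if e in H and s > beta:            # (P1) violated: drop e
                H.remove(e)
                break
            if e not in H and s < beta_minus:  # (P2) violated: add e
                H.add(e)
                break
        else:
            return H  # no violated constraint: H is an HEDCS

def hedcs_matching(edges, k, beta, beta_minus, seed=0):
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:                            # random k-partition of G
        parts[rng.randrange(k)].append(e)
    C = set().union(*(build_hedcs(p, beta, beta_minus) for p in parts))
    matching, used = [], set()                 # greedy maximal matching on C
    for e in sorted(C):
        if used.isdisjoint(e):
            matching.append(e)
            used.update(e)
    return matching
```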
\begin{algorithm} \SetAlgoLined \DontPrintSemicolon Define $k:= \frac{m}{s\log{n}}$, \ $\lambda := \frac{1}{2n\log{n}}$ and $\beta := 500\cdot d^3 \cdot n^2 \cdot \log^3{n}$.\; $G^{(1)}, \ldots, G^{(k)}$:= random $k$-partition of $G$.\; \For{$i = 1$ to $k$, in parallel}{ Compute $C^{(i)} = HEDCS(G^{(i)}, \beta,(1 - \lambda) \cdot \beta)$ on machine $i$.\; } Define the multi-graph $C(V, E_C)$ with $E_C := \cup_{i=1}^k C^{(i)}$.\; Compute and output a maximal matching on $C$.\; \caption{\textsf{HEDCS-Matching}($G, s)$: a parallel algorithm to compute an $O(d^3)$-approximate matching on a $d$-uniform hypergraph $G$ with $m$ edges on machines of memory $O(s)$} \end{algorithm} \begin{restatable}{lemma}{cishedcs} \label{cishedcs} Suppose $k \leq \sqrt{m}$. Then, with high probability: \begin{enumerate} \item The subgraph $C$ is an HEDCS$(G, \beta_C , \beta_C^-)$ for parameters: $ \lambda_C = O\big(n^{1/2}\big) \lambda^{1/2} \mbox{ , } \beta_C = (1 + d \cdot \lambda_C) \cdot k \cdot \beta \mbox{ and } \beta_C^- = (1- \lambda- d \cdot \lambda_C) \cdot k \cdot \beta $. \item The total number of edges in each subgraph $G^{(i)}$ of $G$ is $\tilde{O}(s)$. \item If $s = \tilde{O}(n\sqrt{nm})$, then the graph $C$ can fit in the memory of one machine. \end{enumerate} \end{restatable} \begin{proof}[Proof of Lemma \ref{cishedcs}] $ $ \begin{enumerate} \item Recall that each graph $G^{(i)}$ is an edge sampled subgraph of $G$ with sampling probability $p = \frac{1}{k}$. By Lemma \ref{lemma25}, applied to the graphs $G^{(i)}$ and $G^{(j)}$ (for $i \neq j \in [k])$ and their HEDCSs $C^{(i)}$ and $C^{(j)}$, with probability $1 - \frac{4}{n^{5d-1}}$, for all vertices $v \in V$: $$|d_{C^{(i)}}(v) - d_{C^{(j)}}(v)| \leq O\big(n^{1/2}\big) \lambda^{1/2} \beta \ .$$ By taking a union bound over all $\binom{k}{2} \leq n^d$ pairs of subgraphs $G^{(i)}$ and $G^{(j)}$ for $i \neq j \in [k]$, the above property holds for all $i, j \in [k]$, with probability at least $1 - \frac{4}{n^{4d-1}}$.
In the following, we condition on this event. We now prove that $C$ is indeed a $HEDCS(G, \beta_C , \beta_C^-)$. First, consider an edge $e \in C$ and let $j \in [k]$ be such that $e \in C^{(j)}$ as well. We have \begin{eqnarray*} \sum\limits_{v \in e} d_C(v) & = & \sum\limits_{v \in e} \sum\limits_{i=1}^k d_{C^{(i)}}(v) \\ & \leq & k \cdot \sum\limits_{v \in e} d_{C^{(j)}}(v) + d \cdot k \cdot \lambda_C \cdot \beta \\ & \leq & k \cdot \beta + d \cdot k \cdot \lambda_C \cdot \beta \\ & = & \beta_C \ . \end{eqnarray*} Hence, $C$ satisfies Property \textit{(P1)} of HEDCS for parameter $\beta_C$. Now consider an edge $e \in G \setminus C$ and let $j \in [k]$ be such that $e \in G^{(j)} \setminus C^{(j)}$ (recall that each edge in $G$ is sent to exactly one graph $G^{(j)}$ in the random $k$-partition). We have \begin{eqnarray*} \sum\limits_{v \in e} d_C(v) & = & \sum\limits_{v \in e} \sum\limits_{i=1}^k d_{C^{(i)}}(v) \\ & \geq & k \cdot \sum\limits_{v \in e} d_{C^{(j)}}(v) - d \cdot k \lambda_C \beta \\ & \geq & k \cdot (1- \lambda) \cdot \beta - d \cdot k \lambda_C \beta \ = \ \beta_C^- \ , \end{eqnarray*} so $C$ satisfies Property \textit{(P2)} for parameter $\beta_C^-$. \item Let $E^{(i)}$ be the edges of $G^{(i)}$. By the independent sampling of edges in an edge sampled subgraph, we have that $E \big[|E^{(i)}|\big] = \tfrac{m}{k} = \tilde{O}(s)$. By the Chernoff bound, with probability $1 - \tfrac{1}{k\cdot n^{20}}$, the size of $E^{(i)}$ is $\tilde{O}(s)$. We can then take a union bound over all $k$ subgraphs $G^{(i)}$ and conclude that with probability $1 - 1/ n^{20}$, each graph $G^{(i)}$ is of size $\tilde{O}(s)$. \item The number of edges in $C$ is bounded by $n \cdot \beta_C = O(n \cdot k\cdot \beta) = \tilde{O}(\frac{n^3m}{s}) = \tilde{O}(s)$.
\end{enumerate} \end{proof} \begin{restatable}{corollary}{linearmemory}\label{linearmemory} If $G$ is linear, then by choosing $\lambda:= \frac{1}{2\log^2{n}}$ and $\beta:= 500 \cdot d^3 \cdot \log^4{n} \ $ in \textsf{HEDCS-Matching} we have: \begin{enumerate} \item With high probability, the subgraph $C$ is a HEDCS$(G, \beta_C , \beta_C^-)$ for parameters: $ \lambda_C = O(\log{n}) \lambda \mbox{ , } \beta_C = (1 + d \cdot \lambda_C) \cdot k \cdot \beta \mbox{ and } \beta_C^- = (1- \lambda- d \cdot \lambda_C) \cdot k \cdot \beta $. \item If $s= \tilde{O}(\sqrt{nm})$ then $C$ can fit in the memory of one machine. \end{enumerate} \end{restatable} \begin{proof}[Proof of Corollary \ref{linearmemory}] $ $ \begin{enumerate} \item Similarly to Lemma \ref{cishedcs}, and by using Corollary \ref{cor16}, we know that for graphs $G^{(i)}$ and $G^{(j)}$ (for $i \neq j \in [k])$ and their HEDCSs $C^{(i)}$ and $C^{(j)}$, with high probability, for all vertices $v \in V$: $$|d_{C^{(i)}}(v) - d_{C^{(j)}}(v)| \leq O\big(\log{n}\big) \lambda \beta. $$ By taking a union bound over all $\binom{k}{2} \leq n^d$ pairs of subgraphs $G^{(i)}$ and $G^{(j)}$ for $i \neq j \in [k]$, the above property holds for all $i, j \in [k]$, with probability at least $1 - \frac{4}{n^{4d-1}}$. In the following, we condition on this event. Showing that $C$ is indeed a $HEDCS(G, \beta_C , \beta_C^-)$ follows by the same analysis as in the proof of Lemma \ref{cishedcs}. \item The number of edges in $C$ is bounded by $n \cdot \beta_C = O(n \cdot k\cdot \beta) = \tilde{O}(n \cdot k) = \tilde{O}(\frac{nm}{s}) = \tilde{O}(s)$. \end{enumerate} \end{proof} The previous lemmas allow us to formulate the following theorem. \begin{restatable}{theorem}{result4} \label{result4} \textsf{HEDCS-Matching} constructs an HEDCS of $G$ in 3 MPC rounds on machines of memory $s = \tilde{O}(n\sqrt{nm})$ in general and $s = \tilde{O}(\sqrt{nm})$ for linear hypergraphs.
\end{restatable} \begin{corollary}\label{result5} \textsf{HEDCS-Matching} achieves a $d(d-1+1/d)^2$-approximation to the $d$-Uniform Hypergraph Matching problem in 3 rounds with high probability. \end{corollary} \begin{proof}[Proof of Corollary \ref{result5}] We show that with high probability, $C$ satisfies the assumptions of Theorem \ref{mainth}. From Lemma \ref{cishedcs}, we get that with high probability, the subgraph $C$ is a $HEDCS(G, \beta_C , \beta_C^-)$ for parameters: $ \lambda_C = O\big(n^{1/2}\big) \lambda^{1/2} \mbox{ , } \beta_C = (1 + d \cdot \lambda_C) \cdot k \cdot \beta \mbox{ and } \beta_C^- = (1- \lambda- d \cdot \lambda_C) \cdot k \cdot \beta.$ We can see that $\beta_C \geq \frac{8 d^2}{d-1} \cdot \lambda_C^{-3}$. Therefore, by Theorem \ref{mainth}, $C$ contains a $(d-1+\frac{1}{d})$-approximate $\epsilon$-restricted matching. Since the integrality gap of the $d$-UHM is at most $d-1+\frac{1}{d}$ (see \cite{chan2012linear} for details), $C$ contains a $(d-1+\frac{1}{d})^2$-approximate matching. Taking a maximal matching in $C$ multiplies the approximation factor by at most $d$. Therefore, any maximal matching in $C$ is a $d(d-1+\frac{1}{d})^2$-approximation. \end{proof} \section{Computational Experiments} \label{sec:experiments} To understand the relative performance of the proposed algorithms, we conduct a wide variety of experiments on both random and real-life data \cite{sen2008collective,yang2015defining, kunegis2013konect}. We implement the three algorithms \textsf{Greedy}, \textsf{Iterated-Sampling} and \textsf{HEDCS-Matching} in Python, relying on the module \textit{pygraph} and its class \textit{pygraph.hypergraph} to construct and perform operations on hypergraphs. We simulate the MPC model by computing a random $k$-partition of the hypergraph and splitting it into $k$ different inputs. Parallel computations on different parts of the $k$-partition are handled through the \textit{multiprocessing} library in Python.
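A minimal sketch of this simulation harness follows; the function names and the use of the builtin \texttt{random} module are our assumptions for illustration (the actual experiments operate on \textit{pygraph} hypergraph objects).

```python
import random
from multiprocessing import Pool

def k_partition(edges, k, seed=0):
    """Randomly assign each hyperedge to one of k simulated machines."""
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:
        parts[rng.randrange(k)].append(e)
    return parts

def run_round(worker, parts):
    """One simulated MPC round: run `worker` on every machine's edge list
    in parallel, and collect the per-machine outputs."""
    with Pool(processes=len(parts)) as pool:
        return pool.map(worker, parts)
```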
We compute the optimal matching through an Integer Program for small instances of random uniform hypergraphs ($d\leq 10$) as well as for geometric hypergraphs. The experiments were conducted on a 2.6 GHz Intel Core i7 processor and 16 GB RAM workstation. The datasets differ in their number of vertices, number of hyperedges, vertex degree and hyperedge cardinality. In the following tables, $n$ and $m$ denote the number of vertices and the number of hyperedges respectively, and $d$ is the size of the hyperedges. For Table 3, the hypergraphs might have different numbers of edges, and $\Bar{m}$ denotes the average number of edges. $k$ is the number of machines used to distribute the hypergraph initially. We limit the number of edges that a machine can store to $\frac{2m}{k}$. In the columns Gr, IS and HEDCS, we report the average ratio between the size of the matching computed by the algorithms \textsf{Greedy}, \textsf{Iterated-Sampling} and \textsf{HEDCS-Matching} respectively, and the size of a benchmark. This ratio is computed as the percentage $\frac{ALG}{BENCHMARK}$, where $ALG$ is the output of the algorithm and $BENCHMARK$ denotes the size of the benchmark. The benchmark is either the size of the optimal solution, when it is possible to compute it, or the size of a maximal matching computed via a sequential algorithm. $\#I$ denotes the number of instances of random hypergraphs that we generated for fixed $n$, $m$ and $d$. $\beta$ and $\beta^-$ are the parameters used to construct the HEDCS subgraphs in \textsf{HEDCS-Matching}. These subgraphs are constructed using the procedure in the proof of Lemma \ref{existence}. \subsection{Experiments with random hypergraphs} We perform experiments on two classes of $d$-uniform random hypergraphs.
The first contains random uniform hypergraphs, and the second contains random geometric hypergraphs.\\ \noindent \textbf{Random Uniform Hypergraphs.} For fixed $n$, $m$ and $d$, each potential hyperedge is sampled independently and uniformly at random from the set of vertices. In Table 2, we use the size of a perfect matching $\frac{n}{d}$ as a benchmark, because a perfect matching in random hypergraphs exists with probability $1-o(1)$ under some conditions on $m$, $n$ and $d$. If $d(n,m) = \frac{m\cdot d}{n}$ is the expected degree of a random uniform hypergraph, Frieze and Janson \cite{frieze1995perfect} showed that $\frac{d(n,m)}{n^{1/3}} \rightarrow \infty$ is a sufficient condition for the hypergraph to have a perfect matching with high probability. Kim \cite{kim2003perfect} further weakened this condition to $\frac{d(n,m)}{n^{1/(5 + 2/(d-1))}}\rightarrow \infty$. We empirically verify, by solving the IP formulation, that for $d=3,5$ and for small instances of $d=10$, our random hypergraphs contain a perfect matching. In Table 2, the benchmark is the size of a perfect matching, while in Table 5, it is the size of a greedy maximal matching. In terms of solution quality (Tables 2 and 5), \textsf{HEDCS-Matching} performs consistently better than \textsf{Greedy}, and \textsf{Iterated-Sampling} performs significantly better than the other two. None of the three algorithms is capable of finding perfect matchings for a significant number of the runs. When compared to the size of a maximal matching, \textsf{Iterated-Sampling} still performs better, followed by \textsf{HEDCS-Matching}. However, the ratio is smaller when compared to a maximal matching, which is explained by the deterioration of the quality of the greedy maximal matching as $n$ and $d$ grow. Dufoss\'e \textit{et al.} \cite{dufosse2019effective} confirm that the approximation quality of a greedy maximal matching on random graphs that contain a perfect matching degrades as a function of $n$ and $d$.
The performance of the algorithms decreases as $d$ grows, which is theoretically expected since their approximation ratios are proportional to $d$. The number of rounds for \textsf{Iterated-Sampling} grows slowly with $n$, which is consistent with the $O(\log{n})$ bound. Recall that the number of rounds for the other two algorithms is constant and equal to 3.\\ \noindent\textbf{Geometric Random Hypergraphs.} The second class we experimented on is random geometric hypergraphs. The vertices of a random geometric hypergraph (RGH) are randomly sampled from the uniform distribution on the space $[0,1)^2$. A set of $d$ different vertices $v_1, \ldots, v_d \in V$ forms a hyperedge if, and only if, the distance between any $v_i$ and $v_j$ is less than a previously specified parameter $r \in (0,1)$. The parameters $r$ and $n$ fully characterize an RGH. We fix $d=3$ and generate different geometric hypergraphs by varying $n$ and $r$. We compare the output of the algorithms to the optimal solution that we compute through the IP formulation. Table 3 shows that the performance of our three algorithms is similar, with \textsf{Iterated-Sampling} outperforming \textsf{Greedy} and \textsf{HEDCS-Matching} as the size of the hypergraphs grows. We also observe that random geometric hypergraphs do not contain perfect matchings, mainly because of the existence of some outlier vertices that do not belong to any hyperedge. The number of rounds of \textsf{Iterated-Sampling} still grows with $n$, confirming the theoretical bound and the results on random uniform hypergraphs.
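The generator just described admits a short sketch (a brute-force enumeration over $d$-subsets, which is fine for the instance sizes of Table 3; the function name is our own):

```python
import random
from itertools import combinations

def random_geometric_hypergraph(n, r, d=3, seed=0):
    """Sample n points uniformly in [0,1)^2; every d-subset whose
    pairwise distances are all below r becomes a hyperedge."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]

    def close(p, q):
        # compare squared distances to avoid a sqrt
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < r * r

    edges = [c for c in combinations(range(n), d)
             if all(close(pts[i], pts[j]) for i, j in combinations(c, 2))]
    return pts, edges
```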
{\small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $m$ & $d$ & $k$ & \#I & Gr & IS & HEDCS & $\beta$ & $\beta^-$ & Rounds IS \\ \hline \hline 15 & 200 & \multirow{4}{*}{3} & 5 & \multirow{4}{*}{500} & 77.6\% & 86.6\% & 82.8\% & 5 & 3 & 3.8 \\ 30 & 400 & & 5 & & 78.9\% & 88.1\% & 80.3\% & 7 & 4 & 4.56\\ 100 & 3200 & & 10 & & 81.7\% & 93.4\% & 83.1\% & 5 & 2 & 5.08 \\ 300 & 4000 & & 10 & & 78.8\% & 88.7\% & 80.3\% & 8 & 6 & 7.05 \\ \hline 50 & 800 & \multirow{4}{*}{5} & 6 & \multirow{4}{*}{500} & 66.0\% & 76.2\% & 67.0\% & 16 & 11 & 4.89 \\ 100 & 2,800 & & 10 & & 68.0\% & 79.6\% & 69.8\% & 16 & 11 & 4.74 \\ 300 & 4,000 & & 10 & & 62.2\% & 75.1\% & 65.5\% & 10 & 5 & 6.62 \\ 500 & 8,000 & & 16 & & 63.3\% & 76.4\% & 65.6\% & 10 & 5 & 7.62\\ \hline 500 & 15,000 & \multirow{3}{*}{10} & 16 & \multirow{4}{*}{500} & 44.9\% & 58.3\% & 53.9\% & 20 & 10 & 6.69 \\ 1,000 & 50,000 & & 20 & & 47.3\% & 61.3\% & 50.5\% & 20 & 10 & 8.25 \\ 2,500 & 100,000 & & 20 & & 45.6\% & 59.9\% & 48.2\% & 20 & 10 & 8.11 \\ 5,000 & 200,000 & & 20 & & 45.0\% & 59.7\% & 47.8\% & 20 & 10 & 7.89 \\ \hline 1,000 & 50,000 & \multirow{4}{*}{25} & 25 & \multirow{4}{*}{100} & 27.5\% & 34.9\% & 30.8\% & 75 & 50 & 8.10 \\ 2,500 & 100,000 & & 25 & & 26.9\% & 34.0\% & 27.0\% & 75 & 50 & 8.26 \\ 5,000 & 250,000 & & 30 & & 26.7\% & 33.8\% & 28.8\% & 75 & 50 & 8.23 \\ 10,000 & 500,000 & & 30 & & 26.6\% & 34.1\% & 28.2\% & 75 & 50 & 8.46\\ \hline 5,000 & 250,000 & \multirow{4}{*}{50} & 30 & \multirow{4}{*}{100} & 22.4\% & 30.9\% & 27.9\% & 100 & 50 & 10.22\\ 10,000 & 500,000 & & 30 & & 22.2\% & 31.0\% & 26.5\% & 100 & 50 & 10.15 \\ 15,000 & 750,000 & & 30 & & 20.9\% & 30.8\% & 26.4\% & 100 & 50 & 10.26 \\ 25,000 & 1,000,000 & & 30 & & 20.9\% & 30.8\% & 26.4\% & 100 & 50 & 10.29 \\ \hline \end{tabular} $ $ \captionof{table}{Comparison on random instances with perfect matching benchmark, of size $\frac{n}{d}$.} \end{center}} {\small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} 
\hline $n$ & $r$ & $\Bar{m}$ & $d$ & $k$ & \#I & Gr & IS & HEDCS & $\beta$ & $\beta^-$ & Rounds IS \\ \hline \hline 100 & 0.2 & 930.1 $\pm$ 323 & \multirow{4}{*}{3} & 5 & \multirow{4}{*}{100} & 88.3\% & 89.0\% & 89.6\% & 3 & 5 & 4.1\\ 100 & 0.25 & 1329.5 $\pm$ 445 & & 10 & & 88.0\% & 89.0\% & 89.5\% & 3 & 5 & 5.2 \\ 250 & 0.15 & 13222$\pm$ 3541 & & 5 & & 85.0\% & 88.6\% & 85.5\% & 4 & 7 & 8.1 \\ 300 & 0.15 & 14838 $\pm$ 4813 & & 10 & & 85.5\% & 88.0\%& 85.2\% & 4 & 7 & 11.1 \\ 300 & 0.2 & 27281 $\pm$ 8234 & & 10 & & 85.0\% & 89.0\%& 86.3\% & 4 & 7 & 13.5 \\ \hline \end{tabular} $ $ \captionof{table}{Comparison on random geometric hypergraphs with optimal matching benchmark.} \end{center} } \subsection{Experiments with real data} \textbf{PubMed and Cora Datasets.} We employ two citation network datasets, namely the Cora and Pubmed datasets \cite{sen2008collective,yang2015defining}. These datasets are represented by a graph, with vertices being publications and edges being citation links from one article to another. We construct the hypergraphs in two steps: 1) each article is a vertex; 2) each article is taken as a centroid and forms a hyperedge connecting the articles that have citation links to it (either citing it or being cited by it). The Cora hypergraph has an average edge size of $3.0 \pm 1.1$, while the average in the Pubmed hypergraph is $4.3 \pm 5.7$. The number of edges in both is significantly smaller than the number of vertices, therefore we allow each machine to store only $\frac{m}{k} + \frac{1}{4}\frac{m}{k}$ edges. We randomly split the edges on each machine, and because the subgraphs are small, we are able to compute the optimal matchings on each machine, as well as on the whole hypergraphs. We perform ten runs of each algorithm with different random $k$-partitions and take the maximum cardinality obtained. Table 4 shows that none of the algorithms is able to retrieve the optimal matching.
This behaviour can be explained by the loss of information inherent in splitting the edges across parallel machines. We see that \textsf{Iterated-Sampling}, as in previous experiments, outperforms the other algorithms due to its highly sequential design. \textsf{HEDCS-Matching} performs notably worse than the other algorithms, mainly because it fails to construct sufficiently large HEDCSs.\\ \noindent\textbf{Social Network Communities.} We include two larger real-world datasets, Orkut groups and LiveJournal, from the Koblenz Network Collection \cite{kunegis2013konect}. We use two hypergraphs that were constructed from these datasets by Shun \cite{shun2020practical}. Vertices represent individual users, and hyperedges represent communities in the network. Because membership in these communities does not require the same commitment as collaborating on academic research, these hypergraphs have different characteristics from co-citation hypergraphs, in terms of size, vertex degree and hyperedge cardinality. We use the size of a maximal matching as a benchmark. Table 4 shows that \textsf{Iterated-Sampling} still provides the best approximation. \textsf{HEDCS-Matching} performs worse than \textsf{Greedy} on LiveJournal, mainly because the ratio $\frac{m}{n}$ is not large enough to construct an HEDCS with a large matching.
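The two-step co-citation construction described above can be sketched as follows; the helper name and the edge-list input format are our assumptions, and duplicate hyperedges may arise and are kept in this sketch.

```python
from collections import defaultdict

def cocitation_hypergraph(citations):
    """Step 1: every article is a vertex. Step 2: each article is a
    centroid, forming a hyperedge with all articles linked to it
    (citing it or cited by it)."""
    nbrs = defaultdict(set)
    for a, b in citations:  # directed pair: article a cites article b
        nbrs[a].add(b)
        nbrs[b].add(a)
    # one hyperedge per centroid: the centroid plus its citation neighbourhood
    return [tuple(sorted({c} | nbrs[c])) for c in sorted(nbrs)]
```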
{\small \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Name & $n$ & $m$ & $k$ & Gr & IS & HEDCS & Rounds IS\\ \hline \hline Cora & 2,708 & 1,579 & 2 & 75.0\% & 83.2\% & 63.9\% & 6 \\ \hline PubMed & 19,717 & 7,963 & 3 & 72.0\% & 86.6\% & 62.4\% & 9 \\ \hline Orkut & $2.32 \times 10^6$ & $1.53 \times 10^7$ & 10 & 55.6\% & 66.1\% & 58.1\% & 11 \\ \hline LiveJournal & $3.20 \times 10^6$ & $7.49 \times 10^6$ & 3 & 44.0\% & 55.2\% & 43.3\% & 10 \\ \hline \end{tabular} \captionof{table}{Comparison on co-citation and social network hypergraphs.} \end{center}} \subsection{Experimental conclusions} In the majority of our experiments, \textsf{Iterated-Sampling} provides the best approximation to the $d$-UHM problem, which is consistent with its theoretical superiority. On random hypergraphs, \textsf{HEDCS-Matching} performs consistently better than \textsf{Greedy}, even though \textsf{Greedy} has a better theoretical approximation ratio. We suspect that this is because the $O(d^3)$-approximation bound on \textsf{HEDCS-Matching} is loose. We conjecture that rounding an $\epsilon$-restricted matching can be done efficiently, which would improve the approximation ratio. The performance of the three algorithms decreases as $d$ grows. The results on the number of rounds of \textsf{Iterated-Sampling} are also consistent with the theoretical bound. However, due to its sequential design, and because it centralizes the computation on one single machine while using the other machines simply to coordinate, \textsf{Iterated-Sampling} not only takes more rounds than the other two algorithms, but is also slower in terms of absolute runtime as well as runtime per round. We compared the runtimes of our three algorithms on a set of random uniform hypergraphs. Figures \ref{fig:runtime} and \ref{fig:runtime_per_round} show that the absolute and per-round runtimes of \textsf{Iterated-Sampling} grow considerably faster with the size of the hypergraphs.
We can also see that \textsf{HEDCS-Matching} is slower than \textsf{Greedy}, since the former performs heavier computation on each machine. This confirms the trade-off between the extent to which the algorithms use the power of parallelism and the quality of the approximations. \section{Conclusion} We have presented the first algorithms for the $d$-UHM problem in the MPC model. Our theoretical and experimental results highlight the trade-off between the approximation ratio, the necessary memory per machine, and the number of rounds the algorithm takes. We have also introduced the notion of an HEDCS subgraph, and have shown that an HEDCS contains a good approximation of the maximum matching and can be constructed in a few rounds in the MPC model. We believe better approximation algorithms should be possible, especially if we can give better rounding algorithms for an $\epsilon$-restricted fractional hypergraph matching. For future work, it would be interesting to explore whether we can achieve a better-than-$d$ approximation in the MPC model in a polylogarithmic number of rounds. Exploring algorithms relying on vertex sampling instead of edge sampling might be a good candidate. In addition, our analysis in this paper is specific to unweighted hypergraphs, and we would like to extend it to weighted hypergraphs. \begin{figure} \caption{Runtime of the three algorithms when $d=3$ and $m = 20\cdot d \cdot n$} \label{fig:runtime} \end{figure} \begin{figure} \caption{Total runtime per round (maximum over all machines in every round) of the three algorithms when $d=3$ and $m = 20\cdot d\cdot n$} \label{fig:runtime_per_round} \end{figure} \appendix \section{Omitted proofs} \subsection{Proof of Lemma \ref{lemmaval}} \lemmaval* \begin{proof} We distinguish three cases: \begin{itemize} \item $d_H(x) \leq \frac{\beta}{d}$: in this case $\frac{1}{\beta - d_H(x)} \leq \frac{d}{(d - 1) \beta} < \epsilon$ and $d_{H^*}(x) \leq d_H(x) \leq \beta - d_H(x) $. 
This implies that $val'(x) = \frac{d_{H^*}(x)}{\beta - d_H(x)}$, so that: $$ E[val'(x)] \geq \frac{d-1}{d}\cdot \frac{d_{H}(x)}{\beta - d_H(x)} = \phi(d_H(x))\ .$$ In the two remaining cases we have $d_H(x) > \frac{\beta}{d}$. Then $$ E[d_{H^*}(x)] \geq \frac{(d-1)d_H(x)}{d} > \frac{(d-1)}{d^2} \cdot \beta \geq 8 \cdot \lambda^{-3} \gg \epsilon^{-1} \ ,$$ because $\beta \geq \frac{8 d^2}{d-1} \cdot \lambda^{-3}$, and by a Chernoff bound: \begin{equation} \label{chernoff} \begin{split} P\Big[ d_{H^*}(x) < (1 - \frac{\lambda}{2}) \frac{(d-1)d_H(x)}{d} \Big] & \leq \exp{(-E[d_{H^*}(x)] (\frac{\lambda}{2})^2 \frac{1}{2})}\\ & \leq \exp{(-\lambda^{-1})}\\ &\leq \lambda /2 \ . \end{split} \end{equation} \item Consider now the case $d_H(x)> \frac{\beta}{d}$ and $\min\Big\{ \epsilon, \frac{1}{\beta - d_H(x)} \Big\} = \epsilon$. With probability at least $1 - \frac{\lambda}{2}$, we have that: $$ d_{H^*}(x) \geq (\beta - \frac{1}{\epsilon}) (1 - \frac{\lambda}{2}) \frac{d-1}{d} \gg \epsilon^{-1}.$$ Thus with probability at least $1 - \frac{\lambda}{2}$, we have that $d_{H^*}(x) \epsilon > 1$ and: $$E[val'(x)] \geq (1 - \frac{\lambda}{2}) \geq (1 - \frac{\lambda}{2}) \phi(d_H(x)).$$ \item The only remaining case is $d_H(x) > \frac{\beta}{d}$ and $\min\Big\{ \epsilon, \frac{1}{\beta - d_H(x)} \Big\} =\frac{1}{\beta - d_H(x)}$, so that $val'(x) = \min\Big\{ 1, \frac{d_{H^*}(x)}{\beta - d_H(x)} \Big\}$. Again, with probability at least $1 - \frac{\lambda}{2}$: $$ \frac{d_{H^*}(x)}{\beta - d_H(x)} \geq \frac{d-1}{d} \frac{d_{H}(x)}{\beta - d_H(x)} (1 - \frac{\lambda}{2}) \geq (1 - \frac{\lambda}{2}) \phi(d_H(x)).$$ \end{itemize} In other words, with probability at least $1 - \frac{\lambda}{2}$ we have $val'(x) \geq (1 - \frac{\lambda}{2}) \phi(d_H(x))$, so that $E[val'(x)] \geq (1 - \frac{\lambda}{2})^2 \phi(d_H(x)) > (1 -\lambda) \phi( d_H(x))$. 
This shows that in all cases $E[val'(x)] \geq (1 -\lambda) \phi(d_H(x))$. \end{proof} \subsection{Proof of Lemma \ref{phi} and Lemma \ref{4prop}}\label{appmainth} \phix* \begin{proof} We provide a proof for $d=3$ that generalizes easily. We first show that if $a + b + c \geq \beta$, then $\phi(a) + \phi(b) + \phi (c) \geq 1$. The claim is immediate if $\phi(a) \geq 1$, $\phi(b) \geq 1$ or $\phi(c) \geq 1$. Suppose then that $\phi(a) < 1$, $\phi(b) < 1$ and $\phi(c) < 1$. Then: $$ \phi(a) + \phi(b) + \phi (c) = \frac{d-1}{d} \Big( \frac{a}{\beta - a} + \frac{b}{\beta - b} + \frac{c}{\beta - c}\Big) \geq \frac{d-1}{d} \Big( \frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b}\Big) \ . $$ By Nesbitt's inequality we know that $$ \frac{a}{b+c} + \frac{b}{a+c} + \frac{c}{a+b} \geq \frac{d}{d-1} = \frac{3}{2},$$ and therefore $\phi(a) + \phi(b) + \phi (c) \geq 1$.\\ By the generalized Nesbitt inequality (see Appendix \ref{nesbitt}), we know that for $d>3$ $$ \sum\limits_{i = 1}^d \frac{a_i}{\sum\limits_{j\neq i} a_j} \geq \frac{d}{d-1}\ .$$ So if $\sum\limits_{i=1}^d a_i \geq \beta$, then $\sum\limits_{i=1}^d \phi(a_i) \geq 1$. Now, let $\phi'(x) = \frac{d}{dx}\phi(x)$. To complete the proof, it suffices to show that we always have $\phi'(x) \leq \frac{5}{\beta}$: then lowering the total $\sum_{i=1}^d a_i$ from $\beta$ to $\beta(1-\lambda)$ decreases $\sum_{i=1}^d \phi(a_i)$ by at most $5\lambda$. To prove this inequality, note that if $x \geq \frac{d}{2d-1}\beta$ then $\phi(x) = 1$ and thus $\phi'(x) = 0.$ Now, if $x \leq \frac{d}{2d-1}\beta$ then: $$ \phi'(x) = \frac{d-1}{d}\frac{d}{dx} \frac{x}{\beta - x} = \frac{d-1}{d} \frac{\beta}{(\beta - x)^2}\ ,$$ which is increasing in $x$ and maximized at $x = \frac{d}{2d-1}\beta$, where $\phi'(x) = \frac{(2d-1)^2}{d(d-1)} \frac{1}{\beta} \leq \frac{5}{\beta}$. In the end, whenever $\sum_{i=1}^d a_i \geq \beta(1-\lambda)$, we get: $$\sum\limits_{i=1}^d \phi(a_i) \geq 1 - 5\cdot \lambda\ .$$ \end{proof} \fourprop* \begin{proof} Let $M^G$ be some maximum integral matching in $G$. Some of the edges in $M^G$ are in $H$, while others are in $G \setminus H$. 
Let $X_0$ contain all vertices incident to edges in $M^G \cap (G\setminus H)$, and let $Y_0$ contain all vertices incident to edges in $M^G \cap H$. We now show that $X_0$ and $Y_0$ satisfy the first three properties of the lemma. Property 1 is satisfied because $X_0 \cup Y_0$ consists of all matched vertices in $M^G$. Property 2 is satisfied by definition of $Y_0$. To see that Property 3 is satisfied, note that the vertices of $Y_0$ each contribute exactly $\frac{1}{d}$ to $\sigma$. Now, $X_0$ consists of $|X_0|/d$ disjoint edges in $G \setminus H$, and by Property P2 of an HEDCS, for each such edge $e$: $\sum\limits_{x \in e} d_H(x) \geq \beta (1-\lambda)$, and by Lemma \ref{phi} we have $\sum\limits_{x \in e} \phi(d_H(x)) \geq (1-5\lambda)$, so each of these vertices contributes on average at least $\frac{1-5\lambda}{d}$ to $\sigma$, as desired. Loosely speaking, $\phi(d_H(x))$ will end up corresponding to the profit gained by vertex $x$ in the fractional matching $M^H_f$.\\ Consider $Y_0$ and $X_0$ from above. These sets might not satisfy Property 4 (that all edges in $H$ with an endpoint in $X$ have at least one other endpoint in $Y$). We can, however, transform them into sets $X_1$ and $Y_1$ such that the first three properties still hold and no hyperedge has endpoints in both $X_1$ and $V\setminus (X_1 \cup Y_1)$; at this stage there may still be edges in $H$ whose endpoints all lie in $X_1$. To construct $X_1$, $Y_1$, we start with $X = X_0$ and $Y = Y_0$, and present a transformation that terminates with $X = X_1$ and $Y = Y_1$. Recall that $X_0$ has a perfect matching using edges in $G\setminus H$. The set $X$ will maintain this property throughout the transformation, and each vertex $x \in X$ always has a unique \textit{mate} $e'$, its matching edge in $G \setminus H$. 
The construction does the following: as long as there exists an edge $e$ in $H$ containing some $x \in X$ and otherwise only endpoints in $X$ and $V \setminus (X \cup Y)$, we take the mate $e'$ of $x$, remove the endpoints of $e'$ from $X$, and add the endpoints of $e$ to $Y$. Property 1 is maintained because we have removed $d$ vertices from $X$ and added $d$ to $Y$. Property 2 is maintained because the vertices we added to $Y$ were connected by an edge in $H$. Property 3 is maintained because $X$ clearly still has a perfect matching in $G \setminus H$, and for the vertices $\{x_1, x_2, \ldots, x_d\} = e'$, the average contribution is still at least $\frac{1 - 5\lambda}{d}$, as above. We continue this process while there is an edge with endpoints in $X$ and $V \setminus (X \cup Y)$. The process terminates because each time we remove $d$ vertices from $X$ and add $d$ vertices to $Y$. We end up with two sets $X_1$ and $Y_1$ such that the first three properties of the lemma are satisfied and there are no edges with endpoints in both $X_1$ and $V \setminus (X_1 \cup Y_1)$. This means that any edge in $H$ incident to $X_1$ is either also incident to $Y_1$ or has all its endpoints in $X_1$.\\ We now set $X = X_1$ and $Y = Y_1$ and show how to transform $X$ and $Y$ into two sets that satisfy all four properties of the lemma. Recall that $X_1$ still contains a perfect matching using edges in $G\setminus H$; denote this matching $M^G_X$. Our final set, however, will not guarantee such a perfect matching. Let $M^H_X$ be a maximal matching in $X$ using edges in $H$ (using only edges not incident to $Y$, since edges incident to $Y$ already satisfy Property 4). Consider the edge set $E^*_X = M^G_X \cup M^H_X$. \\ \begin{figure} \caption{Example with $d = 4$. In blue the edges of $M^G_X$ and in yellow the edges of $M^H_X$} \label{fig:my_label} \end{figure} We now perform the following simple transformation: we remove the endpoints of the edges in $M^H_X$ from $X$ and add them directly to $Y$. 
Property 1 is preserved because we are deleting and adding the same number of vertices from $X$ and to $Y$, respectively. Property 2 is preserved because the endpoints we add to $Y$ are matched in $M^H_X$ and thus in $H$. Property 3 will be checked last.\\ Let us check that Property 4 is preserved. To see this, note that we took $M^H_X$ to be maximal among edges of $H$ that have all their endpoints in $X$, and moved all the vertices matched in $M^H_X$ to $Y$. Thus all vertices that remain in $X$ are free in $M^H_X$, so no edge of $H$ has all its endpoints in $X$ after the transformation. There are also no edges in $H$ containing endpoints from both $X$ and $V \setminus (X \cup Y)$, because no such edges existed when $X=X_1$.\\ Let us check that Property 3 is preserved. As before, this involves showing that after the transformation, the average contribution of a vertex in $X \cup Y$ to $\sigma$ is at least $\frac{1- 5\lambda}{d}$. (Because every vertex in $X$ is incident to an edge in $E^*_X$, each vertex is accounted for in the transformation.) Now, all vertices that were in $Y_1$ remain in $Y$, so their average contribution remains at $1/d$. We thus need to show that the average contribution to $\sigma$ among vertices in $X_1$ remains at least $\frac{1- 5\lambda}{d}$ after the transformation.\\ Let $n = |M^G_X|$ and $k = |M^H_X|$. Since $M^G_X$ is a perfect matching on $X_1$, we always have $k \leq n$. If $n = k$, then all vertices of $X_1$ are transferred to $Y$ and clearly their average contribution to $\sigma$ is $\frac{1}{d} \geq \frac{1- 5\lambda}{d}$. Now consider the case $k \leq n-1$. Let the edges of $M^H_X$ be $\{e_1, \ldots, e_k\}$ and those of $M^G_X$ be $\{e'_1, \ldots, e'_n\}$. Let $X'$ be the set of vertices that remain in $X_1$. 
Because the edges of $M^G_X$ are not in $H$, the two properties of an HEDCS give: \begin{eqnarray*} \sum\limits_{1 \leq i\leq n, \ x \in e'_i} d_H(x) & \geq & n\beta(1-\lambda) \ , \mbox{ and } \\ \sum\limits_{1 \leq i\leq k, \ x \in e_i} d_H(x) & \leq & k\beta \ . \end{eqnarray*} The sum of the degrees of vertices in $X'$ can be written as the following difference: \begin{eqnarray} \sum\limits_{ x \in X'} d_H(x) & = &\sum\limits_{1 \leq i\leq n, \ x \in e'_i} d_H(x) - \sum\limits_{1 \leq i\leq k, \ x \in e_i} d_H(x) \label{xprime} \\ \nonumber &\geq & n\beta(1-\lambda) - k\beta\\ \nonumber & = & (n-k)\cdot \beta - n\beta \lambda \ . \end{eqnarray} Now we prove that (\ref{xprime}) implies that the contribution of vertices from $X'$ is on average at least $\frac{1-5\lambda}{d}$. \begin{claim}\label{claim22} $\sum\limits_{x \in X'} \phi(d_H(x)) \geq (n-k) - 5n\cdot\lambda$. \end{claim} By Claim \ref{claim22}, the average contribution among the vertices considered is $$ \frac{(n-k) -5n\cdot\lambda + k }{nd} = \frac{1 - 5\lambda}{d} \ , $$ which proves Property 3 and completes the proof of the lemma.\end{proof} \begin{proof}[Proof of Claim \ref{claim22}] Denote $m := n-k$. Recall that the number of vertices in $X'$ is equal to $md$ and that $\phi(x) = \frac{d-1}{d} \frac{x}{\beta - x }$. We prove the claim here for $\lambda = 0$. That is, we prove that $$ \sum\limits_{ x \in X'} d_H(x) \geq m\cdot \beta \Rightarrow \sum\limits_{x \in X'} \phi(d_H(x)) \geq m\ .$$ If $ m = n-k = 1$, then the result clearly holds by Lemma \ref{phi}. Suppose now that $k < n-1$, so that $m \geq 2$. By Lemma \ref{phi}, we know that: \begin{equation} \label{applemma4} \frac{md-1}{md} \sum\limits_{x \in X'} \frac{d_H(x)}{m \cdot \beta - d_H(x)} \geq 1 \ . 
\end{equation} We also know that: \begin{eqnarray} \frac{m\beta - a}{\beta - a} & = & m + \frac{(m-1)a}{\beta - a} \ , \mbox{ and } \\ \frac{a}{\beta - a} & = & m \cdot \frac{a}{m\beta - a} + \frac{(m-1)a^2}{(m\beta -a)(\beta - a) } \ .\label{comb2} \end{eqnarray} Combining (\ref{applemma4}) and (\ref{comb2}), we get: \begin{equation*} \sum\limits_{x \in X'} \frac{d_H(x)}{\beta -d_H(x)} \geq \frac{m^2d}{md-1} + \sum\limits_{x \in X'}\frac{(m-1)d_H(x)^2}{(m\beta -d_H(x))(\beta - d_H(x))}\ , \end{equation*} which leads to: \begin{equation} \label{sumphi} \begin{split} \sum\limits_{x \in X'} \phi(d_H(x)) & \geq \frac{d-1}{d}\sum\limits_{x \in X'} \frac{d_H(x)}{\beta -d_H(x)}\\ & \geq m \cdot\frac{m(d-1)}{md-1} + \frac{d-1}{d}\sum\limits_{x \in X'}\frac{(m-1)d_H(x)^2}{(m\beta -d_H(x))(\beta - d_H(x))}\\ & \geq m + \frac{d-1}{d}\sum\limits_{x \in X'}\frac{(m-1)d_H(x)^2}{(m\beta -d_H(x))(\beta - d_H(x))} - \frac{m-1}{md-1} \ . \end{split} \end{equation} By convexity of the function $x \mapsto \frac{x^2}{(m\beta -x)(\beta-x)}$, writing $S := \sum\limits_{x \in X'} d_H(x)$: \begin{equation} \begin{split} \sum\limits_{x \in X'}\frac{d_H(x)^2}{(m\beta -d_H(x))(\beta - d_H(x))} & \geq md \cdot \frac{\big(\frac{S}{md}\big)^2}{\big(m\beta - \frac{S}{md}\big)\big(\beta - \frac{S}{md}\big)} \\ & \geq \frac{m\beta}{(md-1)(d-1)} \ , \end{split} \end{equation} where the last inequality uses $S \geq m\cdot \beta$. Therefore, when $ \beta \geq \frac{d}{2}$, the right-hand side of (\ref{sumphi}) becomes: \begin{eqnarray*} m + \frac{d-1}{d}\sum\limits_{x \in X'}\frac{d_H(x)^2}{(m\beta -d_H(x))(\beta - d_H(x))} - \frac{1}{md-1} & \geq & m + \frac{1}{md-1}\left(\frac{m\beta}{d} - 1\right) \\ & \geq & m. \end{eqnarray*} \end{proof} \subsection{Proof of Corollary \ref{degree_dist_linear}}\label{section53} \degreedistlinear* \begin{proof} For linear hypergraphs, assume that $d_A(v) = k\lambda\beta$ with $k = \Omega(\log{n})$ and $d_B(v)=0$. 
The difference in the analysis is that, for every edge $e$ belonging to the $k \lambda \beta$ edges that are incident to $v$ in $A$, we can find a new set of at least $(k-(d+1))\lambda \beta$ edges in $B \setminus A$. In fact, for such an edge $e$, each of the at least $(1-\lambda)\beta$ edges counted in $\sum\limits_{u \neq v} d_B(u) \geq (1-\lambda)\beta$ intersects $e$ in exactly one vertex other than $v$. The same goes for the subset of at least $(k-1)\lambda\beta$ of these edges that are in $B \setminus A$; these edges already intersect $e$, and can have at most one intersection among themselves. At most $d(d-1)$ of these edges can be considered simultaneously for different $e_1, \ldots, e_d$ from the $k\lambda \beta$ edges incident to $v$. Therefore, for every edge $e$, we can find a set of at least $(k-1)\lambda \beta - d(d-1) \geq (k-(d+1))\lambda \beta $ new edges in $B \setminus A$. This means that at this point we have already covered $k\lambda \beta \cdot (k-(d+1))\lambda \beta$ edges in both $A$ and $B$. One can see that we can continue covering new edges just as in the previous lemma, such that at iteration $l$, the number of covered edges is at least $$ k(k-(d+1))(k-2(d+1))\ldots (k- l(d+1)) (\lambda \beta)^l,$$ for $l \leq \frac{k-1}{d+1}$. It is easy to see that for $l = \frac{k-1}{d+1}$, we will have $k(k-(d+1))(k-2(d+1))\ldots (k- l(d+1)) (\lambda \beta)^l > 2n\beta$ if $k = \Omega(\log{n})$, which is a contradiction. 
\end{proof} \section{Figures and Tables}\label{appendixfigures} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & $m$ & $d$ & $k$ & \#I & Gr & IS & HEDCS & $\beta$ & $\lambda$ \\ \hline \hline 15 & 200 & \multirow{4}{*}{3} & 5 & \multirow{4}{*}{500} & 79.1\% & 87.5\%& 82.3\% & 5 & 3 \\ 30 & 400 & & 5 & & 82.6\% & 91.3\%& 84.4\% & 7 & 4 \\ 100 & 3,200 & & 10 & & 83.9\% & 96.2\% & 88.2\% & 5 & 2 \\ 300 & 4,000 & & 10 & & 81.1\% & 92.0\% & 86.3\% & 4 & 2 \\ \hline 50 & 800 & \multirow{4}{*}{5} & 6 & \multirow{4}{*}{500} & 76.0\% & 89.1\%& 78.5\% & 16 & 11 \\ 100 & 2,800 & & 10 & & 77.9\% & 92.1\% & 81.9\% & 16 & 11 \\ 300 & 4,000 & & 10 & & 77.8\% & 93.9\% & 87.1\% & 10 & 5 \\ 500 & 8,000 & & 16 & & 79.2\% & 94.3\%& 85.9\% & 10 & 5 \\ \hline 500 & 15,000 & \multirow{4}{*}{10} & 16 & \multirow{4}{*}{500} & 71.8\% & 90.6\% & 79.2\% & 20 & 10 \\ 1,000 & 50,000 & & 20 & & 73.8\% & 92.3\% & 81.5\% & 20 & 10 \\ 2,500 & 100,000 & & 20 & & 72.2\% & 91.5\% & 80.7\% & 20 & 10 \\ 5,000 & 200,000 & & 20 & & 72.5\% & 90.7\% & 79.8\% & 20 & 10 \\ \hline 1,000 & 50,000 & \multirow{4}{*}{25} & 20 & \multirow{4}{*}{100} & 68.2\% & 87.5\% & 75.6\% & 75 & 50 \\ 2,500 & 100,000 & & 25 & & 69.0\% & 87.9\% & 74.3\% & 75 & 50 \\ 5,000 & 250,000 & & 30 & & 67.8\% & 87.3\% & 75.1\% & 75 & 50 \\ 10,000 & 500,000 & & 30 & & 67.2\% & 86.9\% & 73.7\% & 75 & 50\\ \hline 5,000 & 250,000 & \multirow{4}{*}{50} & 30 & \multirow{4}{*}{100} & 67.4\% & 86.6\% & 74.0\% & 100 & 50 \\ 10,000 & 500,000 & & 30 & & 68.1\% & 87.1\%& 73.4\% & 100 & 50 \\ 15,000 & 750,000 & & 30 & & 66.9\% & 86.2\% & 73.2\% & 100 & 50 \\ 25,000 & 1,000,000 & & 30 & & 67.3\% & 86.0\% & 72.8\% & 100 & 50 \\ \hline \end{tabular} \captionof{table}{Comparison on random uniform instances with maximal matching benchmark.} \end{center} \section{Chernoff Bound} Let $X_1, \ldots, X_n$ be independent random variables taking values in $[0, 1]$ and $X:= \sum\limits_{i=1}^n X_i$. 
Then, for any $\delta \in (0, 1)$, $$ Pr\Big(|X-\mathbb{E}[X]| \geq \delta \mathbb{E}[X]\Big) \leq 2\cdot \exp{\Big(-\frac{\delta^2\mathbb{E}[X]}{3}\Big)}. $$ \section{Nesbitt's Inequality}\label{nesbitt} Nesbitt's inequality states that for positive real numbers $a$, $b$ and $c$, $$ \frac{a}{b+c} + \frac{b}{a+c}+ \frac{c}{a+b} \geq \frac{3}{2},$$ with equality when all the variables are equal. More generally, if $a_1, \ldots, a_n$ are positive real numbers and $s = \sum\limits_{i=1}^n a_i$, then: $$ \sum\limits_{i=1}^n \frac{a_i}{s-a_i} \geq \frac{n}{n-1},$$ with equality when all the $a_i$ are equal. \end{document}
\begin{document} \title[CMC Surfaces in Minkowski Space via Loop Groups]{Holomorphic Representation of Constant Mean Curvature Surfaces in Minkowski Space: Consequences of Non-Compactness in Loop Group Methods} \begin{abstract} We give an infinite dimensional generalized Weierstrass representation for spacelike constant mean curvature (CMC) surfaces in Minkowski 3-space ${\mathbb R}^{2,1}$. The formulation is analogous to that given by Dorfmeister, Pedit and Wu for CMC surfaces in Euclidean space, replacing the group $SU_2$ with $SU_{1,1}$. The non-compactness of the latter group, however, means that the Iwasawa decomposition of the loop group, used to construct the surfaces, is not global. We prove that it is defined on an open dense subset, after doubling the size of the real form $SU_{1,1}$, and prove several results concerning the behavior of the surface as the boundary of this open set is encountered. We then use the generalized Weierstrass representation to create and classify new examples of spacelike CMC surfaces in ${\mathbb R}^{2,1}$. In particular, we classify surfaces of revolution and surfaces with screw motion symmetry, as well as studying another class of surfaces for which the metric is rotationally invariant. \end{abstract} \author{David Brander} \address{Department of Mathematics, Matematiktorvet, Technical University of Denmark, DK-2800, Kgs. 
Lyngby, Denmark} \email{[email protected]} \author{Wayne Rossman} \address{Department of Mathematics, Faculty of Science, Kobe University, Japan} \email{[email protected]} \author{Nicholas Schmitt} \address{GeometrieWerkstatt, Mathematisches Institut, Universit\"at T\"ubingen, Germany} \email{[email protected]} \keywords{differential geometry, surface theory, loop groups, integrable systems} \subjclass[2000]{Primary 53C42, 14E20; Secondary 53A10, 53A35} \maketitle \section*{Introduction} \subsection{Motivation} It is well known that minimal surfaces in Euclidean $3$-space have a Weierstrass representation in terms of holomorphic functions, and that the Gauss map of such a surface is holomorphic. For \emph{non}-minimal constant mean curvature (CMC) surfaces, Kenmotsu \cite{kenmotsu} showed that the Gauss map is harmonic, and gave a formula for obtaining CMC surfaces from any such harmonic maps. On the other hand, as a result of work by Pohlmeyer \cite{pohlmeyer}, Uhlenbeck \cite{Uhl} and others, it became known that harmonic maps from a Riemann surface into a symmetric space $G/H$ can be lifted to holomorphic maps into the based loop group $\Omega G$, satisfying a horizontality condition - see \cite{guest} for the history. Subsequently, Dorfmeister, Pedit and Wu \cite{DorPW} gave a method, the so-called DPW method, for obtaining such harmonic maps directly from a certain holomorphic map into the complexified loop group $\Lambda G^\mathbb{C}$, via the Iwasawa splitting of this group, $\Lambda G^\mathbb{C} = \Omega G \cdot \Lambda ^+ G^\mathbb{C}$. This method has the advantage that the holomorphic loop group map itself is obtained from a collection of arbitrary complex-valued holomorphic functions. Combined with the Sym-Bobenko formula, discussed below, for obtaining a surface from its loop group extended frame, this gives an infinite dimensional ``generalized Weierstrass representation'' for CMC surfaces in terms of holomorphic functions. 
Integrable systems methods have been shown to have many applications in submanifold theory. Concerning CMC surfaces, notable early results were the classification of CMC tori in $\mathbb{R}^3$ by Pinkall and Sterling \cite{pinkallsterling}, and the rendering of all CMC tori in space forms in terms of theta functions by Bobenko \cite{bobenko}. The DPW method has led to new examples of non-simply-connected CMC surfaces in $\mathbb{R}^3$ - and other space forms - that have not yet been proven to exist by any other approach \cite{KKRS}, \cite{KMS}, \cite{SKKR}. Unsurprisingly, an analogous construction is obtained for spacelike, which is to say Riemannian, CMC surfaces in Minkowski space $\mathbb{R}^{2,1}$, by replacing the group $SU_2$, used in the Euclidean case, with the non-compact real form $SU_{1,1}$. However, there is a major difference, in that the Iwasawa decomposition is not global when the underlying group is non-compact, which has consequences for the global properties of the surfaces constructed. There is already an extensive collection of work about spacelike non-maximal CMC surfaces in $\mathbb{R}^{2,1}$ and their harmonic (\cite{milnor}) Gauss maps. Works of Treibergs \cite{Trei1}, Wan \cite{wan}, and Wan-Au \cite{wan2} show existence of a large class of entire examples, which are then necessarily complete (Cheng and Yau \cite{cheng-yau}). Other studies, also without the loop group point of view, include \cite{treibergschoi} and \cite{akutagawanishikawa}. Inoguchi \cite{inoguchi} gave a loop group formulation and discussed finite type solutions and solutions obtained via dressing, which are two further methods, distinct from the DPW method employed here, that can also be used for loop group type problems. 
Studying the generalized Weierstrass representation for CMC surfaces in ${\mathbb R}^{2,1}$ is interesting for various reasons: from the viewpoint of surface theory, there is naturally a richer variety of such surfaces, compared to the Euclidean case, due to the fact that not all directions are the same in Minkowski space. CMC surfaces in Minkowski space are important in the study of classical relativity - see for example, the work of Bartnik and Simon \cite{bartniksimon, bartnikacta}. The main issue addressed in those works was to give conditions which would guarantee that surfaces obtained from a variational problem are everywhere spacelike. The holomorphic representation studied here is a completely different approach: all surfaces are, in principle, obtained from this method and the surface is guaranteed to be spacelike as long as the holomorphic loop group map takes its values in an open dense subset of the loop group (the ``big cell''). The surface fails to be spacelike or immersed only when the corresponding holomorphic data encounters the boundary of this dense set. Since all CMC surfaces have such a representation, understanding the behavior at this boundary potentially gives a means to characterize the singularities. More generally in the context of integrable systems in geometry, this example can be thought of as a test case regarding the significance of the absence of a global Iwasawa decomposition, or, more broadly, of the non-compactness of the group. \subsection{Results} In Sections \ref{section1-wayne} and \ref{twistediwasawasection} we present the Iwasawa decomposition associated to the group of loops in $SU_{1,1}$. The general case for non-compact groups had been earlier treated by Kellersch \cite{Kell}; we provide a rather explicit proof for our case. 
The main new result here, which is important for our applications, is that, after doubling the size of the group, by setting $G= SU_{1,1} \, \sqcup \, i\sigma_1 \cdot SU_{1,1} $, where $\sigma_1$ is a Pauli matrix, we are able to prove that the Iwasawa splitting we need is almost global. That is, if $\Lambda G^\mathbb{C}$ is the group of loops in a complexification $G^\mathbb{C}$ of $G$, $\Lambda ^+ G^\mathbb{C}$ is the subgroup of loops which extend holomorphically to the unit disc, and $\Omega G$ is the subgroup of based loops mapping 1 to the identity, then \begin{equation} \label{Iwasawa1} \Omega G \cdot \Lambda ^+ G^\mathbb{C} \end{equation} is an open dense subset, called the \emph{(Iwasawa) big cell}, of $\Lambda G^\mathbb{C}$. We are primarily interested in this result in the twisted setting, described in Section \ref{twistediwasawasection}. We also prove, in Section \ref{nickssection}, that, for a loop which extends meromorphically to the unit disk with exactly one pole, the Iwasawa decomposition can be computed explicitly using finite linear algebra. This result is used for the analysis of singularities arising in CMC surfaces. In Section \ref{section2} we give the loop group formulation and the DPW method for CMC surfaces in Minkowski space. This uses the first factor $F$ of the decomposition $\phi = F B$, corresponding to (\ref{Iwasawa1}), to obtain a CMC surface from a certain holomorphic map $\phi: \Sigma \to \Lambda G^\mathbb{C}$, where $\Sigma$ is a Riemann surface. In Section \ref{brander-section} we examine the behavior of the surfaces at the boundary of the big cell. In Theorem \ref{summarythm1}, we prove that the DPW construction maps an open dense set $\Sigma^\circ \subset \Sigma$ to a smooth CMC surface, and that the singular set, $\Sigma \setminus \Sigma^\circ$ is locally given as the zero set of a non-constant real analytic function. 
The boundary of the big cell is a countable disjoint union of ``small cells'', the first two of which are of lowest codimension in the loop group, and therefore the most significant. We examine the behavior of the surface as points on the set $\Sigma \setminus \Sigma^\circ$ which correspond to the first two small cells are approached. In Theorem \ref{summarythm}, we prove that the surface always has finite singularities at points which are mapped by $\phi$ to the first small cell (and this also occurs along the zero set of a non-constant real analytic function). On the other hand, we prove that, as points mapping to the second small cell are approached, the surface is always unbounded and the metric blows up. The next two sections are devoted to applications. There are a variety of CMC rotational surfaces in ${\mathbb R}^{2,1}$, because the rotation axes can be either timelike or spacelike or lightlike. Classifications of such rotational surfaces were considered by Hano and Nomizu \cite{HanoNom} and Ishihara and Hara \cite{HI}, with the aim of studying rolling curve constructions for the profile curves, but the moduli space was not considered. Here we find the moduli spaces for both surfaces of revolution and the more general class of equivariant surfaces. In Section \ref{section4-wayne}, we explicitly construct and classify all spacelike CMC surfaces of revolution in $\mathbb{R}^{2,1}$. In particular, this results in a new family of loops for which we know the explicit $SU_{1,1}$-Iwasawa splitting. We also study the surfaces in the associate families of the CMC surfaces of revolution, which we prove give all spacelike CMC surfaces with screw motion symmetry (equivariant surfaces). In both those cases, the explicit nature of the construction can be used to study the singularities and the end behaviors of the surfaces. 
In Section \ref{section5-wayne} we use the Weierstrass representation to construct $\mathbb{R}^{2,1}$ analogues of Smyth surfaces \cite{smyth} (surfaces whose metrics have a rotational symmetry), and study their properties. \section{The Iwasawa decomposition for the untwisted loop group} \label{section1-wayne} If $G$ is a compact semisimple Lie group, then the Iwasawa decomposition of $\Lambda G$, proved in \cite{PreS}, is \begin{equation} \label{psiwasawa} \Lambda G^\mathbb{C} = \Omega G \cdot \Lambda^+ G, \end{equation} where $\Omega G$ is the set of based loops $\gamma \in \Lambda G$ such that $\gamma(1) = 1$. For non-compact groups, this problem was investigated by Kellersch \cite{Kell}. An English presentation of those results can be found in the appendix of \cite{BD}. Here we restrict to $SU_{1,1}$, as it is a representative example, and as it has applications to CMC surface theory. \subsection{Notation and definitions} \label{notation1} Throughout this article we will make extensive use of the Pauli matrices \[ \sigma_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \; , \;\; \sigma_2 := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \; , \;\; \sigma_3:= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \; . \] Let $\mathbb{S}^1$ be the unit circle in the complex $\lambda$-plane, $D_+$ the open unit disk, and $D_- = \{\lambda \in \mathbb{C} \,| \, |\lambda | > 1 \} \cup \{ \infty \}$ the exterior disk in $\mathbb{C}\mathbb{P}^1$. If $G^\mathbb{C}$ is any complex semisimple Lie group then $\Lambda G^\mathbb{C}$ denotes the Banach Lie group of maps from $\mathbb{S}^1$ into $G^\mathbb{C}$ with some $H^s$-topology, $s> 1/2$. All subgroups are given the induced topology. For any subgroup $\mathcal{H}$ of $\Lambda G^\mathbb{C}$ we denote the subgroup of constant loops, which is to say $\mathcal{H} \cap G^\mathbb{C}$, by $\mathcal{H}^0$. For us, $G^\mathbb{C}$ will be the special linear group $SL_2\mathbb{C}$. 
Now the real form $SU_{1,1}$ is the fixed point subgroup with respect to the involution \begin{equation} \label{gtaudef} \tau(x) = \textup{Ad}_{\sigma_3} (\overline{x^t})^{-1}. \end{equation} For our application, however, it will become clear that it is convenient to set \begin{displaymath} G := \{ x \in SL_2\mathbb{C} ~|~ \tau (x) = \pm x \} . \end{displaymath} As a manifold, $G$ is a disjoint union $SU_{1,1} \, \sqcup \, i\sigma_1 \cdot SU_{1,1} $, and has a complexification $G^\mathbb{C} = SL_2 \mathbb{C}$. It turns out that $G$ works just as well as $SU_{1,1}$ for our application, and this choice will mean that the Iwasawa decomposition is almost global. We remark that an alternative way to achieve this would have been to set $G^\mathbb{C}$ to be the group $\{ x \in GL(2,\mathbb{C}) ~|~ \det x = \pm 1 \}$, and in this case the appropriate real form $G$ would be just the fixed point subgroup with respect to $\tau$. Let $\Lambda G$ denote the subgroup of $\Lambda G^\mathbb{C}$ consisting of loops with values in the subgroup $G$. We extend $\tau$ to an involution of the loop group by the formula \begin{eqnarray} \label{extendtaudef} (\tau(x))(\lambda) &:=& \tau(x(\bar{\lambda}^{-1})) \\ &=& \sigma_3 (\overline{x(\bar \lambda^{-1})}^t)^{-1} \sigma_3. \nonumber \end{eqnarray} Then it is easy to verify that the definition of $\Lambda G \subset \Lambda G^\mathbb{C}$ is the analogue of $G \subset G^\mathbb{C}$: \begin{eqnarray*} \Lambda G &=& \{x \in \Lambda G^\mathbb{C} ~|~ \tau(x) = \pm x \} \\ &=& (\Lambda G^\mathbb{C})_\tau \, \sqcup \, i \sigma_1 \cdot (\Lambda G^\mathbb{C})_\tau , \end{eqnarray*} where $(\Lambda G^\mathbb{C})_\tau = \Lambda SU_{1,1}$ is the fixed point subgroup with respect to $\tau$. We want a decomposition similar to the Iwasawa decomposition (\ref{psiwasawa}), but our group $G$ is non-compact. 
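The defining properties of $\tau$ and of the two components of $G$ are easy to sanity-check numerically. The following snippet is our own illustration (the sample $SU_{1,1}$ element and all names are ours); it verifies that $\tau$ fixes $SU_{1,1}$, acts as $-1$ on $i\sigma_1 \cdot SU_{1,1}$, and is an involution:

```python
import numpy as np

# Pauli matrices sigma_1 and sigma_3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def tau(x):
    # tau(x) = Ad_{sigma_3} (conjugate transpose of x)^{-1}
    return s3 @ np.linalg.inv(x.conj().T) @ s3

t = 0.7
# A sample element of SU_{1,1}: real, symmetric, det = cosh^2 - sinh^2 = 1
x = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]], dtype=complex)
y = 1j * s1 @ x            # an element of the second component of G

assert np.allclose(tau(x), x)       # tau fixes SU_{1,1}
assert np.allclose(tau(y), -y)      # tau = -1 on i*sigma_1*SU_{1,1}
assert np.allclose(tau(tau(x)), x)  # tau is an involution
```

Both components together are exactly the set $\{x \in SL_2\mathbb{C} \mid \tau(x) = \pm x\}$ defining $G$.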
\subsubsection{Normalizations for the untwisted setting} Let $\triangle^+$ and $\triangle^-$ denote the sets of $2 \times 2$ upper triangular and lower triangular matrices, respectively, and $\triangle^\pm_{\mathbb R}$ denote the subsets with the further restriction that the diagonal components are positive and real. For any Lie group $X$, let $\Lambda ^\pm X$ denote the subgroup consisting of loops which extend holomorphically to $D_\pm$. We start by defining some further subgroups of the untwisted loop group $\Lambda G^\mathbb{C} := \Lambda SL_2\mathbb{C}$. Denote the centers of the interior and exterior disks, $D_\pm$, by $\lambda_+ := 0$ and $\lambda_- := \infty$. Set \[ \Lambda_{\triangle}^\pm G^\mathbb{C} := \{ B \in \Lambda^\pm G^ \mathbb{C} \, | \, B(\lambda_\pm) \in \triangle^\pm \} , \] \[ \Lambda_{{\mathbb R}}^+G^\mathbb{C} := \{ B \in \Lambda^+ G^ \mathbb{C} \, | \, B(0) \in \triangle^+_{\mathbb R} \} , \] \[ \Lambda_{I}^\pm G^ \mathbb{C} := \{ B \in \Lambda^\pm G^ \mathbb{C} \, | \, B(\lambda_\pm) = I \} \; . \] \subsection{The Birkhoff decomposition} To obtain the corresponding results for the twisted loop group later, we normalize the factors in the Birkhoff factorization theorem of \cite{PreS} in a certain way: \begin{theorem}\label{thm:birkhoff} (Birkhoff decomposition \cite{PreS}) Any $\phi \in \Lambda G^ \mathbb{C}$ has a decomposition: \[ \phi = B_- M B_+ \, , \hspace{.5cm} B_\pm \in \Lambda^{\pm}_\triangle G^ \mathbb{C} \, , \] where either \begin{displaymath} M = \begin{pmatrix} \lambda^\ell &0 \\ 0 & \lambda^{-\ell} \end{pmatrix}, \hspace{.5cm} \textup{or} \hspace{.5cm} M= \begin{pmatrix} 0 & \lambda^\ell \\ -\lambda^{-\ell} &0 \end{pmatrix}, \hspace{.5cm} \ell \in \mathbb{Z} \; . \end{displaymath} The middle term, $M$, is uniquely determined by $\phi$.
The big cell $\mathcal{B}^U$, where $\ell=0$, is open and dense in $\Lambda G^ \mathbb{C}$, and in this case there is a unique factorization $\phi = \hat{B}_- M_0 \hat{B}_+$, with $\hat{B}_\pm \in \Lambda_I^\pm G^\mathbb{C}$ and $M_0 \in G^\mathbb{C}$. Moreover, the map $\mathcal{B}^U \to \Lambda^{-}_I G^ \mathbb{C} \times G^ \mathbb{C} \times \Lambda^{+}_I G^ \mathbb{C}$, given by $[ \phi \mapsto (\hat{B}_-,\, M_0 , \, \hat{B}_+)]$, is a real analytic diffeomorphism. \end{theorem} \begin{proof} The result is stated and proved in an alternative form as Theorem 8.1.2 and Theorem 8.7.2 of \cite{PreS}, without the upper and lower triangular normalization of the constant terms, and where the middle term, $M$, is a homomorphism from $\mathbb{S}^1$ into a maximal torus, which is to say the first type of middle term here. That is, \begin{displaymath} \phi = \phi_- \begin{pmatrix} \lambda^\ell &0\\ 0 & \lambda^{-\ell} \end{pmatrix} \phi_+, \hspace{1.5cm} \phi_\pm \in \Lambda^\pm G^\mathbb{C}. \end{displaymath} Such a product can be manipulated so that the constant terms of $\phi_\pm$ are appropriately triangular if one allows the middle term to become off-diagonal. \end{proof} \subsection{The untwisted Iwasawa decomposition for $G$} Define the untwisted Iwasawa big cell \begin{displaymath} \mathcal{B}_{1,1}^U := \{ \phi \in \Lambda G^\mathbb{C} ~|~ (\tau(\phi))^{-1}\phi \in \mathcal{B}^U \}. \end{displaymath} \begin{theorem}\label{thm:SU11Iwa} (Untwisted $SU_{1,1}$ Iwasawa decomposition) \begin{enumerate} \item The group $\Lambda G^\mathbb{C}$ is a disjoint union, \begin{displaymath} \mathcal{B}_{1,1}^U \sqcup \bigsqcup_{m \in \mathbb{Z} } \widehat{\mathcal{P}}_m, \end{displaymath} where $\widehat{\mathcal{P}}_m$ are defined below at item (3). \item Any element $\phi \in \mathcal{B}_{1,1}^U$ has a decomposition \begin{displaymath} \phi = F B, \hspace{1.5cm} F \in \Lambda G, \hspace{.5cm} B \in \Lambda^+_\triangle G^ \mathbb{C}.
\end{displaymath} We can choose $B \in \Lambda^+_{\mathbb R} G^\mathbb{C}$, and then $F$ and $B$ are uniquely determined, and the product map $\Lambda G \times \Lambda^+_{\mathbb R} G^\mathbb{C} \to \mathcal{B}_{1,1}^U$ is a real analytic diffeomorphism. We call this unique decomposition \emph{normalized}. \item Any element $\phi \in \widehat{\mathcal{P}}_m$ can be expressed as \begin{displaymath} \phi = F \hat \omega_m B, \hspace{1.5cm} F \in (\Lambda G^\mathbb{C}) _\tau, \hspace{.5cm} B \in \Lambda^+_\triangle G^ \mathbb{C}, \end{displaymath} where \begin{displaymath} \hat \omega_m := \begin{pmatrix} \frac{1}{2} & \lambda^m \\ -\frac{1}{2} \lambda^{-m} & 1 \end{pmatrix}. \end{displaymath} \item The Iwasawa big cell $\mathcal{B}^U_{1,1}$ is an open dense set of $\Lambda G^\mathbb{C}$. The complement of the big cell is locally given as the zero set of a non-constant real analytic function $g: \Lambda G^\mathbb{C} \to \mathbb{C}$. \end{enumerate} \end{theorem} The proof of Theorem \ref{thm:SU11Iwa} is a consequence of the following lemma: \begin{lemma}\label{lem:preSU11Iwa} If $\psi \in \Lambda G^ \mathbb{C}$ satisfies $(\tau(\psi))^{-1} = \psi$, then \[ \psi = (\tau(B_+))^{-1} (\pm I) B_+ \;\; \text{or} \;\;\; \psi = (\tau(B_+))^{-1} \begin{pmatrix} 0 & \lambda^m \\ -\lambda^{-m} & 0 \end{pmatrix} B_+ \] for some uniquely determined integer $m$, and for some $B_+ \in \Lambda_\triangle ^+ G^\mathbb{C}$. \end{lemma} \begin{proof} Consider the two cases for the Birkhoff splitting of $\psi$ given in Theorem \ref{thm:birkhoff}. 
First, if $\psi = B_- \text{diag}(\lambda^k,\lambda^{-k}) B_+$, then \begin{equation}\label{B-definition} B = B_+ \tau(B_-) =\begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation} is an element of $\Lambda_\triangle^+ G^ \mathbb{C}$, and the assumption that $(\tau(\psi))^{-1} = \psi$ is equivalent to the equation \[ \begin{pmatrix} a^* \lambda^{-k} & -c^* \lambda^k \\ -b^* \lambda^{-k} & d^* \lambda^k \end{pmatrix} = \begin{pmatrix} a \lambda^k & b \lambda^k \\ c \lambda^{-k} & d \lambda^{-k} \end{pmatrix} \; . \] It follows that $b$ and $c$ are both identically zero, that $a, d$ are constant and real, and that $k=0$. So $B = \text{diag}(\alpha,\alpha^{-1}) (\pm I) \text{diag}(\alpha,\alpha^{-1})$ for some constant $\alpha > 0$. Then $\psi = (\tau (\psi))^{-1}= (\tau (B_+))^{-1}(\tau(B_-))^{-1} = \tau(\tilde B_+)^{-1} (\pm I) \tilde B_+$, where $\tilde B_+ = \text{diag}(\alpha^{-1},\alpha) B_+$. Now consider the case $\psi = B_- {\tiny{\begin{pmatrix} 0 & \lambda^k\\ -\lambda^{-k} & 0 \end{pmatrix}}} B_+$. Proceeding as before, we have \[ \begin{pmatrix} a^* & -c^* \\ -b^* & d^* \end{pmatrix} \begin{pmatrix} 0 & \lambda^k \\ -\lambda^{-k} & 0 \end{pmatrix} = \begin{pmatrix} 0 & \lambda^k \\ -\lambda^{-k} & 0 \end{pmatrix} \begin{pmatrix} a& b \\ c & d \end{pmatrix} \; , \] where $B$ is as in \eqref{B-definition}. It follows that $\bar a = d$ is constant and $|a|=1$, and $b \cdot c$ is identically zero. Further, when $k<0$, then $b=0$ and $c = c^* \lambda^{-2k}$ with a finite expansion in $\lambda$ of the form $c = c_1 \lambda^1 + ... + c_{-2k-1} \lambda^{-2k-1}$, while, on the other hand, if $k \geq 0$, we have that $c=0$ and $b=b^* \lambda^{2k}$, with $b = b_0 \lambda^0 + ... + b_{2k} \lambda^{2k}$. 
Setting $\tilde{B}_+ = y B_+$ and $\tilde{B}_- = B_- x^{-1}$, the requirements that $\tilde{B}_+ \in \Lambda_\triangle^+ G^\mathbb{C}$ and that $\psi = (\tau(\tilde{B}_+))^{-1} \begin{pmatrix} 0 & \lambda^m \\ -\lambda^{-m} & 0 \end{pmatrix} \tilde{B}_+$ will be satisfied if we can choose $y \in \Lambda^+_\triangle G^\mathbb{C}$ and $x \in \Lambda^-_\triangle G^\mathbb{C}$ with the properties: \[ x^{-1} \begin{pmatrix} 0 & \lambda^k \\ -\lambda^{-k} & 0 \end{pmatrix} y = \begin{pmatrix} 0 & \lambda^k \\ -\lambda^{-k} & 0 \end{pmatrix} \; , \hspace{1cm} B = y^{-1} \tau(x). \] Set \[ y = \begin{pmatrix} \sqrt{a}^{-1} & y_1 \\ y_2 & \sqrt{a} \end{pmatrix} \; , \hspace{1cm} x^{-1} = \begin{pmatrix} \sqrt{a}^{-1} & x_1 \\ x_2 & \sqrt{a} \end{pmatrix} \; , \] then when $k \geq 0$, we can take $(y_1,y_2,x_1,x_2)=(-\sqrt{a} b/2,0,0,-\sqrt{a} b \lambda^{-2k}/2)$. When $k < 0$, we take $(y_1,y_2,x_1,x_2)=(0,-c/(2 \sqrt{a}),-c \lambda^{2k}/(2 \sqrt{a}),0)$. \end{proof} \noindent \textbf{Proof of Theorem \ref{thm:SU11Iwa}} \begin{proof} Take any $\phi \in \Lambda G^ \mathbb{C}$. Set $\psi := \tau(\phi)^{-1} \phi$. Then $(\tau(\psi))^{-1} = \psi$ and so we can apply Lemma \ref{lem:preSU11Iwa}, which implies that \[ \psi = (\tau(B_+))^{-1} \tau(\hat \omega)^{-1} \hat \omega B_+ \; , \] where $\hat \omega$ is (uniquely) one of the following: \begin{displaymath} \hat \omega_+ = I, \hspace{1cm} \hat \omega_- = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \hspace{1cm} \hat \omega_m = \begin{pmatrix} \tfrac{1}{2} & \lambda^m \\ -\tfrac{1}{2} \lambda^{-m} & 1 \end{pmatrix}, \end{displaymath} $m \in \mathbb{Z}$, and $B_+ \in \Lambda^+_\triangle G^\mathbb{C}$. To see this, compute that $(\tau(\hat \omega_+))^{-1} \hat \omega_+ = I$, $\tau(\hat \omega_-)^{-1} \hat \omega_- = -I$ and $\tau(\hat \omega_m)^{-1} \hat \omega_m = {\tiny{\begin{pmatrix} 0 & \lambda^m \\ -\lambda^{-m} & 0 \end{pmatrix}}}$.
Hence \begin{displaymath} \phi = \hat{F} \hat \omega B_+, \end{displaymath} where $\hat F = \tau(\phi) \tau(\hat \omega B_+)^{-1}$. Now $\psi = \tau(\hat \omega B_+)^{-1} \cdot \hat \omega B_+$ is equivalent to the equation $\tau(\hat F)= \hat F$, and so $\hat F \in (\Lambda G^\mathbb{C})_\tau$. To prove item (2) of the theorem, note that $\phi \in \mathcal{B}^U_{1,1}$ if and only if $(\tau(\phi))^{-1} \phi \in \mathcal{B}^U$, and this corresponds to $\hat \omega = \hat \omega_\pm$, by the construction in Lemma \ref{lem:preSU11Iwa}. Since $\tau (\hat \omega _\pm) = \pm \hat \omega _\pm$, $\phi = F B_+$, with $F := \hat F \hat \omega_\pm$, is the required decomposition. The uniqueness and the diffeomorphism property follow from the corresponding properties on the big cell in Theorem \ref{thm:birkhoff}. Item (3) has already been proved, and the disjointness property of item (1) follows from the uniqueness of the middle term in the Birkhoff Theorem. To prove item (4) note that, by definition, $\mathcal{B}^U_{1,1} = h^{-1} (\mathcal{B}^U)$, where $h: \Lambda G^\mathbb{C} \to \Lambda G^\mathbb{C}$ takes $\phi \mapsto (\tau(\phi))^{-1} \phi$. It is shown in \cite{DorPW} that the Birkhoff big cell $\mathcal{B}^U$ is given as the complement of the zero set of a non-trivial holomorphic section $\mu$ (called $\tau$ in \cite{DorPW}) of the holomorphic line bundle $\psi^* Det^* \to \Lambda G^\mathbb{C}$, where $\psi$ is a composition of holomorphic maps $\Lambda G^\mathbb{C} \to GL_{res}(H) \to Gr(H)$, and $Det^* \to Gr(H)$ is the dual of the determinant line bundle. Hence the Iwasawa big cell $\mathcal{B}^U_{1,1}$ is given as the complement of the zero set of the section $h^* \mu$, locally represented by a real analytic function $g: \Lambda G^\mathbb{C} \to \mathbb{C}$. The complement of such a zero set is either open and dense or empty, and the big cell is not empty, as it contains the identity. 
\end{proof} \begin{remark} A similar procedure can be used to prove the $SU_2$ Iwasawa splitting. In that case, as a consequence of the compactness of the group, everything is much simpler and the small cells $\hat{\mathcal{P}}_m$ do not appear. \end{remark} \subsection{Explicit Iwasawa factorization of Laurent loops} \label{nickssection} Computing the Iwasawa factorization explicitly is not possible in general. However, if $X \in \mathcal{B}^U_{1,1}$ extends meromorphically to the unit disk, with just one pole at $\lambda = 0$, then the Iwasawa decomposition can be computed by finite linear algebra. To show this we will define a linear operator on a finite dimensional vector space whose kernel corresponds to the $G$ factor of $X$. For $-\infty < p \le q \le \infty$, denote the vector space of formal Laurent series by \[ \Lambda_{p,q} = \left\{ \left. \textstyle\sum_{j=p}^q a_j\lambda^j \right| a_j\in M_{2 \times 2}\mathbb{C} \right\}, \] and let $P_{p,q}:\Lambda_{-\infty,\infty}\to\Lambda_{p,q}$ be the projection \begin{equation*} P_{p,q}\left(\textstyle\sum_{j=-\infty}^\infty a_j\lambda^j\right) = \textstyle\sum_{j=p}^{q}a_j\lambda^j. \end{equation*} Define the anti-involution $\rho$ on $\Lambda_{-\infty,\infty}$ by, for $W \in \Lambda_{-\infty,\infty}$, \[ (\rho{W})(\lambda) = \sigma_3 \overline{W\left(1/\overline{\lambda}\right)}^t \sigma_3 . \] Note that if $W(\lambda)$ is an invertible matrix, then $\rho$ is the composition of $\tau$ with the matrix inverse operation. For any $X \in \Lambda G^\mathbb{C}$, define a linear map $\mathcal{L}_X: \Lambda_{-\infty,\infty}\to \Lambda_{-\infty,-1}\oplus \overline{\Lambda_{-\infty,-1}}\oplus \mathbb{C}^4$ by \begin{equation} \begin{split} & \mathcal{L}_X(W) = \\ &\quad\ \ \left( P_{-\infty,-1}(W X),\ \overline{P_{-\infty,-1}(\textup{adj}(\rho W) X)},\, \left.
\left( P_{0,0}(WX) - \overline{P_{0,0}(\textup{adj}(\rho W) X)}\right)\right|_{11},\, \right.\\ &\quad\ \ \left. \left. \left( P_{0,0}(WX) - \overline{P_{0,0}(\textup{adj}(\rho W) X)}\right) \right|_{22},\, P_{0,0}(WX)\Big|_{21}, \left. \overline{P_{0,0}(\textup{adj}(\rho W) X)}\right|_{21} \right),\quad \end{split} \end{equation} where $\text{adj}$ gives the adjugate matrix and the subscripts $ij$ refer to matrix entries. The map $\mathcal{L}_X$ is clearly complex linear. \begin{lemma} \label{lem:laurent-iwasawa-kerL} Let $n\in\mathbb{Z}_{\ge 0}$ and $X\in {\Lambda G^\mathbb{C}} \cap\Lambda_{-n,\infty}$. Suppose $X$ lies in the big cell, and let $X = FB$ be its normalized $\operatorname{SU}_{1,1}$-Iwasawa factorization. Then \begin{enumerate} \item \label{lem:laurent-iwasawa-kerL-1} $\operatorname{Ker}\mathcal{L}_I = \mathbb{C} \cdot I$ and $\operatorname{Ker}\mathcal{L}_{i\sigma_2} = \mathbb{C}\cdot\sigma_1$. \item \label{lem:laurent-iwasawa-kerL-2} If $F\in (\Lambda G^\mathbb{C})_\tau$, then $\operatorname{Ker}\mathcal{L}_X = \mathbb{C} \cdot F^{-1}$. \item \label{lem:laurent-iwasawa-kerL-3} If $F\in i \sigma_1 \cdot (\Lambda G^\mathbb{C})_\tau$, then $\operatorname{Ker}\mathcal{L}_X = \mathbb{C} \cdot (F\sigma_3)^{-1}$. \end{enumerate} \end{lemma} \begin{proof} Let $X\in {\Lambda G^\mathbb{C}} \cap\Lambda_{-n,\infty}$. By the definition of $\mathcal{L}_X$, $W\in\operatorname{Ker}\mathcal{L}_X$ if and only if for some $p,\,q\in\mathbb{C}$, \begin{displaymath} P_{-\infty,0}(W X) = \begin{pmatrix}p & c_1 \\ 0 & q\end{pmatrix} \;\;\; \text{and} \;\;\; P_{-\infty,0}(\textup{adj}(\rho W) X) = \begin{pmatrix}\overline{p} & c_2 \\ 0 & \overline{q}\end{pmatrix}, \end{displaymath} where $c_i \in \mathbb{C}$.
It follows that \begin{displaymath} \begin{split} &\operatorname{Ker}\mathcal{L}_I = \mathbb{C} \cdot I\text{ and } \operatorname{Ker}\mathcal{L}_{i\sigma_2} = \mathbb{C}\cdot\sigma_1 , \\ &\operatorname{Ker}\mathcal{L}_{XB} = \operatorname{Ker}\mathcal{L}_{X} \text{ for all $B\in \Lambda_{\mathbb{R}}^{+}G^{\mathbb{C}}$,}\\ &\operatorname{Ker}\mathcal{L}_{FX} = \operatorname{Ker}\mathcal{L}_{X}\cdot F^{-1} \text{ for all $F\in (\Lambda G^\mathbb{C})_\tau$.} \end{split} \end{displaymath} Statement \ref{lem:laurent-iwasawa-kerL-2} follows from \[ \operatorname{Ker}\mathcal{L}_{FB} = \operatorname{Ker}\mathcal{L}_{I} \cdot F^{-1} = \mathbb{C}\cdot F^{-1}. \] If $F\in i \sigma_1 \cdot (\Lambda G^\mathbb{C})_\tau$, then $F = G i \sigma_2$ for some $G\in (\Lambda G^\mathbb{C})_\tau$, and statement \ref{lem:laurent-iwasawa-kerL-3} follows from \[ \operatorname{Ker}\mathcal{L}_{G i \sigma_2 B} = \operatorname{Ker}\mathcal{L}_{i \sigma_2} \cdot G^{-1} = \mathbb{C}\cdot i \sigma_1 G^{-1} =\mathbb{C}\cdot \sigma_3 F^{-1} =\mathbb{C}\cdot (F\sigma_3)^{-1}. \] \end{proof} Let $X$ be a Laurent loop in the big cell, of pole order $n \in \mathbb{Z}_{\geq 0}$ at $\lambda=0$ and with no other singularities on the unit disk. Let $X=FB$ be the $\operatorname{SU}_{1,1}$-Iwasawa decomposition. Then $F$ is a Laurent loop in $\Lambda_{-n,n}$, because $XB^{-1}=F$ has a pole of order $n$ at $\lambda=0$ and $\tau(F)= \pm F$. In fact, we have the following theorem: \begin{theorem}\label{thm:laurent-iwasawa-kerL} Lemma \ref{lem:laurent-iwasawa-kerL} provides an explicit construction of the normalized $\operatorname{SU}_{1,1}$-Iwasawa decomposition of any $X\in {\mathcal{B}_{1,1}^U} \cap\Lambda_{-n,\infty}$ by finite linear methods. In particular, let $X = F B$ be the $\operatorname{SU}_{1,1}$-Iwasawa decomposition. 
Then \begin{enumerate} \item $F$ is a Laurent loop in $\Lambda_{-n,n}$ if and only if $X$ extends meromorphically to the unit disk, with pole of order $n$ at $\lambda = 0$, and no other poles. \item In this case, the two conditions that $F \in \Lambda G$ and that $B \in \Lambda^+_{\mathbb R} G^\mathbb{C}$ form an algebraic system on the coefficients of $F^{-1}$ with a unique solution. \end{enumerate} \end{theorem} \begin{proof} Compute $W\in\operatorname{Ker}\mathcal{L}_X\setminus\{0\}$. This involves solving a complex linear system with $16n+4$ equations and $8n+4$ variables. That $\det W$ is $\lambda$-independent can be seen as follows: Since $W$ solves the linear system, $WX$ and $\textup{adj}(\rho W) X$ are in $\Lambda _{0,\infty}$, and so $\det W(\lambda)$ and $\det \overline{W(1/\bar \lambda)}$ are holomorphic in the unit disk. In particular, $\det W(\lambda)$ is holomorphic on $\mathbb{C} \cup \{ \infty \}$, and so is constant. Thus, multiplying by a constant scalar if necessary, we may, and do, assume $\det W \equiv 1$. By Lemma \ref{lem:laurent-iwasawa-kerL}, $((i \sigma_3)^kW)^{-1}$ is the $\Lambda G$ factor of the normalized $\operatorname{SU}_{1,1}$-Iwasawa decomposition of $X$ for some $k\in\{0,\dots,3\}$. \end{proof} For the simplest case, when $X$ is a constant loop, the linear system in the proof of Theorem \ref{thm:laurent-iwasawa-kerL} gives the following corollary: \begin{corollary} \label{factcor} For $X \in \operatorname{SL}_2 \mathbb{C}$, the $\operatorname{SU}_{1,1}$-Iwasawa decomposition has three cases: \begin{enumerate} \item When $|X_{11}|>|X_{21}|$, there exist $u,v,\beta \in \mathbb{C}$ and $r \in \mathbb{R}^+$ such that $u \bar u - v \bar v = 1$ and \[ X = \begin{pmatrix} u & v \\ \bar v & \bar u \end{pmatrix} \begin{pmatrix} r & \beta \\ 0 & r^{-1} \end{pmatrix} \; . 
\] \item When $|X_{11}|<|X_{21}|$, there exist $u,v,\beta \in \mathbb{C}$ and $r \in \mathbb{R}^+$ such that $u \bar u - v \bar v = -1$ and \[ X = \begin{pmatrix} u & v \\ -\bar v & -\bar u \end{pmatrix} \begin{pmatrix} r & \beta \\ 0 & r^{-1} \end{pmatrix} \; . \] \item When $|X_{11}|=|X_{21}|$, there exist $\theta,\gamma \in {\mathbb R}$, $r \in \mathbb{R}^+$ and $\beta \in \mathbb{C}$ such that \[ X = \begin{pmatrix} e^{i \theta} & 0 \\ e^{i \gamma} & e^{-i \theta} \end{pmatrix} \begin{pmatrix} r & \beta \\ 0 & r^{-1} \end{pmatrix} \; . \] \end{enumerate} \end{corollary} \section{Iwasawa factorization in the twisted loop group} \label{twistediwasawasection} \subsection{Notation and definitions for the twisted loop group} \label{notation} As before, we set $G = SU_{1,1} \, \sqcup \, i\sigma_1 \cdot SU_{1,1} $, but from now on we work in the twisted loop group \[ \mathcal{U}^\mathbb{C} := \Lambda G_\sigma^\mathbb{C} := \{ x \in \Lambda G^\mathbb{C} \, | \, \sigma (x) = x \} \; , \] where the involution $\sigma$ is defined, for a loop $x$, by \begin{displaymath} (\sigma(x))(\lambda) := \textup{Ad}_{\sigma_3} \, x(-\lambda). \end{displaymath} We will also refer to three further subgroups of $\mathcal{U}^\mathbb{C}$, \begin{displaymath} \begin{aligned} & \mathcal{U}^\mathbb{C}_\pm := \{ B \in \mathcal{U}^\mathbb{C} \, | \, B \text{ extends holomorphically to } D_\pm \}, \\ & \widehat{\mathcal{U}}^\C_+ := \{ B \in \mathcal{U}^\mathbb{C}_+ ~|~ B(0) = \tiny{\begin{pmatrix} \rho & 0 \\ 0 & \rho^{-1} \end{pmatrix}}, ~\rho \in {\mathbb R}, ~\rho>0 \}. \end{aligned} \end{displaymath} We extend $\tau$ to an involution of the loop group by the formula \begin{displaymath} (\tau(x))(\lambda) := \tau(x(\bar{\lambda}^{-1})).
\end{displaymath} The ``real form'' is \begin{eqnarray*} \mathcal{U} &:=& \Lambda G_\sigma = \{F \in \Lambda G^\mathbb{C}_\sigma ~|~ \tau(F) = \pm F \} \; , \\ &=& \mathcal{U}_\tau \, \sqcup \, \Psi(i \sigma_1) \cdot \mathcal{U}_\tau, \end{eqnarray*} where $\mathcal{U}_\tau$ is the fixed point subgroup of $\tau$, and $\Psi(i \sigma_1) = \tiny{\begin{pmatrix} 0 & i \lambda \\ i \lambda^{-1} & 0 \end{pmatrix}}$ (see (\ref{psiisom}) below). For any Lie group $A$, let $Lie(A)$ denote its Lie algebra. We use the same notations $\sigma$ and $\tau$ for the infinitesimal versions of the involutions, which are given on $Lie(\Lambda G^\mathbb{C})$ by \begin{displaymath} (\sigma (X))(\lambda) := \textup{Ad}_{\sigma_3} \, X(-\lambda), \hspace{1cm} (\tau (X))(\lambda) := -\textup{Ad}_{\sigma_3} \overline{X^t(\bar{\lambda}^{-1})}. \end{displaymath} We have $Lie(\mathcal{U}^\mathbb{C}) = \{X = \sum X_i \lambda^i ~|~ X_i \in \mathfrak{sl}_2\mathbb{C},~ \sigma(X) = X \}$, and $Lie(\mathcal{U})$ is the subalgebra consisting of elements fixed by $\tau$. The convergence condition of these series depends on the topology used. For practical purposes, we should note that $\mathcal{U}^\mathbb{C}$ and $Lie(\mathcal{U}^\mathbb{C})$ consist of loops {\tiny{$\begin{pmatrix} a&b\\c&d \end{pmatrix}$}} which take values in $SL_2\mathbb{C}$ and $\mathfrak{sl}_2 \mathbb{C}$ respectively, and such that the coefficients $a$ and $d$ are even functions of the loop parameter $\lambda$, whilst $b$ and $c$ are odd functions of $\lambda$. $\mathcal{U}_{\pm}^\mathbb{C}$ and $Lie(\mathcal{U}_{\pm}^\mathbb{C})$ are the elements which have the further condition that only non-negative or non-positive exponents of $\lambda$ appear in their Fourier expansions. For a scalar-valued function $x(\lambda)$, we use the notation \begin{displaymath} x^*(\lambda) := \overline{x(\bar{\lambda}^{-1})}.
\end{displaymath} Then for the real form $\mathcal{U}$ we have \begin{equation} \label{matrixforms} \mathcal{U}_\tau = \Big\{ \begin{pmatrix} a&b\\ b^* & a^* \end{pmatrix} \in \mathcal{U}^\mathbb{C} \Big\}, \hspace{0.8cm} \Psi(i \sigma_1) \cdot \mathcal{U}_{\tau} = \Big\{ \begin{pmatrix} a&b\\ -b^* & -a^* \end{pmatrix} \in \mathcal{U}^\mathbb{C} \Big\}, \end{equation} and the analogue for the Lie algebras. \subsection{The Iwasawa decomposition for $SU_{1,1}$} \label{mainiwasawa} To convert Theorem \ref{thm:SU11Iwa} to the twisted setting, we use the isomorphism from the untwisted to the twisted loop group, defined by \begin{equation} \label{psiisom} \Psi: \Lambda G^\mathbb{C} \to \Lambda G^\mathbb{C}_\sigma, \hspace{1cm} \begin{pmatrix} a(\lambda) & b(\lambda) \\ c(\lambda) & d(\lambda) \end{pmatrix} \mapsto \begin{pmatrix} a(\lambda^2) & \lambda b(\lambda^2) \\ \lambda^{-1} c(\lambda^2) & d(\lambda^2) \end{pmatrix}. \end{equation} We define the Birkhoff big cell in $\Lambda G^\mathbb{C}_\sigma$ by $\mathcal{B}:= \Psi (\mathcal{B}^U)$. The Birkhoff factorization theorem, Theorem \ref{thm:birkhoff}, then translates to the assertion that $\mathcal{B} = \mathcal{U}^\mathbb{C}_- \cdot \mathcal{U}^\mathbb{C}_+$, and that this is an open dense subset of $\mathcal{U}^\mathbb{C}$. Define the \emph{$G$-Iwasawa big cell} for $\mathcal{U}^\mathbb{C}$ to be the set \begin{displaymath} \mathcal{B}_{1,1} := \{ \phi \in \mathcal{U}^\mathbb{C} ~|~ \tau(\phi)^{-1} \phi \in \mathcal{B}\}. \end{displaymath} It is easy to verify that $\tau = \Psi^{-1} \circ \tau \circ \Psi$, and this implies that $\Psi$ maps $\mathcal{B}_{1,1}^U$ to $\mathcal{B}_{1,1}$.
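The two properties of $\Psi$ used repeatedly here are that it is a group homomorphism and that its image satisfies the twisting condition $\sigma(x) = x$. Both can be checked numerically; the following sketch in Python with NumPy uses two arbitrary sample $SL_2\mathbb{C}$-valued Laurent loops (not taken from the text):

```python
import numpy as np

sigma3 = np.diag([1.0, -1.0]).astype(complex)

def Psi(x):
    """Untwisted-to-twisted isomorphism (psiisom), acting pointwise in lam."""
    def xt(lam):
        a, b = x(lam**2)[0]
        c, d = x(lam**2)[1]
        return np.array([[a, lam * b], [c / lam, d]])
    return xt

# Two sample SL_2(C)-valued Laurent loops (det = 1 for every lam).
x = lambda lam: np.array([[1.0, 1.0 / lam], [lam, 2.0]], dtype=complex)
y = lambda lam: np.array([[1.0, 0.0], [lam, 1.0]], dtype=complex)
xy = lambda lam: x(lam) @ y(lam)

lam = np.exp(0.4j)  # a sample point on the unit circle

# Psi is a group homomorphism: Psi(xy) = Psi(x) Psi(y) pointwise.
assert np.allclose(Psi(xy)(lam), Psi(x)(lam) @ Psi(y)(lam))

# Psi(x) lands in the twisted loop group: Ad_{sigma3} Psi(x)(-lam) = Psi(x)(lam).
assert np.allclose(sigma3 @ Psi(x)(-lam) @ sigma3, Psi(x)(lam))
```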
To define the small cells, we first set, for $m \in \mathbb{Z}^+$, \[ \omega_{m} = \begin{pmatrix} 1 & 0 \\ \lambda^{-m} & 1 \end{pmatrix} \; , \;\; \text{$m$ odd} \;\; ; \;\;\; \omega_{m} = \begin{pmatrix} 1 & \lambda^{1-m} \\ 0 & 1 \end{pmatrix} \; , \;\; \text{$m$ even.} \] The \emph{$n$-th small cell} is defined to be \begin{equation} \label{pndecomp} \mathcal{P}_n := \mathcal{U}_\tau \cdot \omega_n \cdot \mathcal{U}^\mathbb{C}_+. \end{equation} Note that elements of $\Omega G$, in the Iwasawa decomposition (\ref{psiwasawa}), correspond naturally to elements of the left coset space $\Lambda G /G$. For the twisted loop group, $\mathcal{U}$, the role of $\Omega G$ is effectively played by $\mathcal{U}/\mathcal{U}^0$. \pagebreak \begin{theorem}\label{section1-thm:SU11Iwa} ($SU_{1,1}$ Iwasawa decomposition) \begin{enumerate} \item\label{mainthm-(1)} The group $\mathcal{U}^{\mathbb{C}}$ is a disjoint union \begin{equation} \label{globaldecomp} \mathcal{U}^{\mathbb{C}} = \mathcal{B}_{1,1} \sqcup \bigsqcup_{m \in \mathbb{Z}^+ } \mathcal{P}_m. \end{equation} \item \label{mainthm-(2)} Any loop $\phi \in \mathcal{B}_{1,1}$ can be expressed as \begin{equation} \label{bciwasawa} \phi = F B, \end{equation} for $F \in \mathcal{U}$ and $B \in \mathcal{U}^\mathbb{C}_+$. The factor $F$ is unique up to right multiplication by an element of $\mathcal{U}^0$. The factors are unique if we require that $B \in \widehat{\mathcal{U}}^\C_+$, and then the product map $\mathcal{U} \times \widehat{\mathcal{U}}^\C_+ \to \mathcal{B}_{1,1}$ is a real analytic diffeomorphism. \item\label{mainthm-(3)} The Iwasawa big cell, $\mathcal{B}_{1,1}$, is an open dense subset of $\mathcal{U}^{\mathbb{C}}$. The complement of $\mathcal{B}_{1,1}$ in $\mathcal{U}^{\mathbb{C}}$ is locally given as the zero set of a non-constant real analytic function $g: \mathcal{U}^{\mathbb{C}} \to \mathbb{C}$.
\end{enumerate} \end{theorem} \begin{proof} The theorem follows from the untwisted statement, Theorem \ref{thm:SU11Iwa}. Under the isomorphism $\Psi$, given by (\ref{psiisom}), $\hat \omega_+$ stays the same, $\hat \omega_-$ becomes ${\tiny{\begin{pmatrix} 0 & \lambda \\ -\lambda^{-1} & 0\end{pmatrix}}}$, and the $\hat \omega_m$ appear only for odd $m$. Then, noting that, for $m>0$, \[ (i \sigma_3) \hat \omega_m (-i \sigma_3) = \begin{pmatrix} 1 & 0 \\ \lambda^{-m} & 1 \end{pmatrix} B_+ \; , \hspace{.7cm} B_+ = \begin{pmatrix} 1/2 & -\lambda^m \\ 0 & 2 \end{pmatrix} \in \Lambda^+ G^ \mathbb{C}, \] and, for $m<0$, \[ \hat \omega_m = \begin{pmatrix} 1 & \lambda^m \\ 0 & 1 \end{pmatrix} B_+ \; , \hspace{.7cm} B_+ = \begin{pmatrix} 1 & 0 \\ -\lambda^{-m}/2 & 1 \end{pmatrix} \in \Lambda^+ G^ \mathbb{C}, \] and that $B_+$ can be absorbed into the right-hand $\mathcal{U}^\mathbb{C}_+$ factor of any splitting, we can replace, in Theorem \ref{thm:SU11Iwa}, the above $\hat \omega_\pm$ and $\hat \omega_m$ respectively with the matrices $I$, $\tiny{\begin{pmatrix} 0 & \lambda\\-\lambda^{-1} & 0 \end{pmatrix}}$ and the $\omega_m$ defined above. This gives the small cell factorizations of (\ref{pndecomp}) of Theorem \ref{section1-thm:SU11Iwa}. The big cell factorization of item \ref{mainthm-(2)} follows from the observation that \begin{displaymath} \tau \begin{pmatrix} 0 & \lambda \\ -\lambda^{-1} & 0\end{pmatrix} = - \begin{pmatrix} 0 & \lambda \\ -\lambda^{-1} & 0\end{pmatrix}, \end{displaymath} so that elements with this middle term can be represented as $\phi = F B$, with $\tau(F) = -F$, that is, $F \in \Psi(i \sigma_1) \mathcal{U}_\tau \subset \mathcal{U}$. The diffeomorphism property on the big cell, the disjoint union property, item \ref{mainthm-(1)}, and item \ref{mainthm-(3)} follow from the corresponding statements in Theorem \ref{thm:SU11Iwa}.
\end{proof} \begin{corollary} \label{cor1} The map $\pi: \mathcal{B}_{1,1} \to \mathcal{U}/\mathcal{U}^0$ given by $\phi \mapsto [F]$, derived from (\ref{bciwasawa}), is a real analytic projection. \end{corollary} \begin{remark} \label{psidefremark} The density of the big cell can also be seen explicitly as follows: consider the continuous family of loops \[ \psi^m_z := \begin{pmatrix} 1 & 0 \\ z \lambda^{-m} & 1 \end{pmatrix}, ~~\textup{$m$ odd}; \hspace{.7cm} \psi^m_z := \begin{pmatrix} 1 & z \lambda^{-m+1} \\ 0 & 1 \end{pmatrix}, ~~\textup{$m$ even}. \] Now $\psi^m_1 = \omega_m$, but for $|z| \neq 1$, $\psi^m_z$ is in the big cell and has the Iwasawa decomposition: $\psi^m_z = F^m_z \cdot B^m_z$, where, for odd values of $m$, \begin{displaymath} F^m_z = \frac{1}{\sqrt{1-z\bar{z}}} \begin{pmatrix} 1 & \bar{z} \lambda ^m\\ z \lambda^{-m} & 1 \end{pmatrix}, \hspace{.7cm} B^m_z = \frac{1}{\sqrt{1-z\bar{z}}} \begin{pmatrix} 1-z \bar{z} & - \bar{z}\lambda ^m\\ 0 & 1 \end{pmatrix}, \end{displaymath} and, for even values of $m$: \begin{displaymath} \psi^m_z = \textup{Ad}_{\sigma_1} \psi_z^{m-1} = \textup{Ad}_{\sigma_1} F^{m-1}_z \cdot \textup{Ad}_{\sigma_1} B^{m-1}_z. \end{displaymath} If ${\phi}_0$ is any element of $\mathcal{P}_m$, then it has a decomposition $\phi_0 = F_0 \omega_m B_0$, in accordance with (\ref{pndecomp}). Now define the continuous path, for $t \in {\mathbb R}$, $\hat{\phi}_t = F_0 \psi^m_t B_0$. Then $\hat{\phi}_1 = \phi_0$, but for $t \neq 1$, $\hat{\phi}_t = F_0 F^m_t B^m_t B_0$, which is in the big cell. So $\hat{\phi}_t$ gives a family of elements in the big cell which are arbitrarily close to $\phi_0$ as $t \to 1$. \end{remark} \subsection{A factorization lemma} Later, in Section \ref{brander-section}, we will use the following explicit factorization for an element of the form $B\omega_m^{-1}$, for $B \in \mathcal{U}^\mathbb{C}_+$, and $m = 1$ or $2$.
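Before stating the lemma, we note that the explicit decompositions $\psi^m_z = F^m_z \cdot B^m_z$ of Remark \ref{psidefremark} can be verified directly. A numerical sketch in Python with NumPy, for odd $m$ and a sample value of $z$ with $|z|<1$ (the particular numbers below are arbitrary illustrative choices):

```python
import numpy as np

sigma3 = np.diag([1.0, -1.0]).astype(complex)

def tau(F, lam):
    """(tau F)(lam) = Ad_{sigma3} (conj(F(1/conj(lam)))^t)^{-1}."""
    M = F(1.0 / lam.conjugate())
    return sigma3 @ np.linalg.inv(M.conj().T) @ sigma3

m, z = 1, 0.3 + 0.2j          # odd m, |z| < 1: psi^m_z lies in the big cell
r = 1.0 / np.sqrt(1.0 - abs(z) ** 2)

psi = lambda lam: np.array([[1.0, 0.0], [z * lam**(-m), 1.0]], dtype=complex)
F = lambda lam: r * np.array([[1.0, z.conjugate() * lam**m],
                              [z * lam**(-m), 1.0]], dtype=complex)
B = lambda lam: r * np.array([[1.0 - abs(z) ** 2, -z.conjugate() * lam**m],
                              [0.0, 1.0]], dtype=complex)

lam = 0.7 * np.exp(0.25j)     # a sample point off the unit circle

assert np.allclose(F(lam) @ B(lam), psi(lam))   # psi^m_z = F^m_z B^m_z
assert np.isclose(np.linalg.det(F(lam)), 1.0)
assert np.allclose(tau(F, lam), F(lam))          # tau(F) = F, so F lies in U
```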
\begin{lemma} \label{switchlemma} Let $B = \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \sum_{i = 0}^\infty a_i \lambda^i & \sum_{i = 1}^\infty b_i \lambda^i \\ \sum_{i = 1}^\infty c_i \lambda^i & \sum_{i = 0}^\infty d_i \lambda^i \end{pmatrix}$ be any element of $\mathcal{U}^\mathbb{C}_+$. Then there exists a factorization \begin{equation} \label{switchfact} B \omega_1^{-1} = X \widehat{B}, \end{equation} where $\widehat{B} \in \mathcal{U}^\mathbb{C}_+$ and $X$ is of one of the following three forms: \begin{displaymath} k_1 = \begin{pmatrix} u & v \lambda \\ \bar{v} \lambda^{-1} & \bar{u} \end{pmatrix}, \hspace{.5cm} k_2 =\begin{pmatrix} u & v \lambda \\ -\bar{v} \lambda^{-1} & -\bar{u} \end{pmatrix}, \hspace{.5cm} \omega_1^\theta = \begin{pmatrix} 1 & 0 \\ e^{i\theta} \lambda^{-1} & 1 \end{pmatrix}, \end{displaymath} where $u$ and $v$ are constant in $\lambda$ and can be chosen so that the matrix has determinant one, and $\theta \in {\mathbb R}$. The matrices $k_1$ and $k_2$ are in $\mathcal{U}$, and their components satisfy the equation \begin{equation} \label{uveqn} \frac{|u|}{|v|} = |b_1-a_0||a_0| \; . \end{equation} The third form occurs if and only if $B \omega_1^{-1}$ is in the first small cell, $\mathcal{P}_1$, and the three cases correspond to the cases $|(b_1-a_0)a_0|$ greater than, less than or equal to 1, respectively. The analogue holds replacing $\omega_1$ with $\omega_2$, the matrices $k_i$ and $\omega_1^\theta$ with $\textup{Ad}_{\sigma_1} k_i$ and $\textup{Ad}_{\sigma_1} \omega_1^\theta$, and replacing $\mathcal{P}_1$ with $\mathcal{P}_2$, and Equation \eqref{uveqn} with \begin{equation} \label{uveqn2} \frac{|u|}{|v|} = |c_1-d_0||d_0| \; .
\end{equation} \end{lemma} \begin{proof} The second statement, concerning $\omega_2$, is obtained trivially from the first, because $\omega_2 = \textup{Ad}_{\sigma_1} \omega_1$, so we can get the factorization by applying the homomorphism $\textup{Ad}_{\sigma_1}$ to both sides of \eqref{switchfact}. To obtain the factorization \eqref{switchfact}, note that under the isomorphism given by (\ref{psiisom}), $\omega_1^{-1}$ becomes $\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix}$, so the untwisted form of $B \omega_1^{-1}$ has no pole on the unit disc, and the factorization can be obtained by factoring the constant term, using Corollary \ref{factcor}. Alternatively, one can write down explicit expressions as follows: for the cases $|(b_1-a_0)a_0|^{\varepsilon} >1$, where $\varepsilon = \pm 1$, the factorization is given by \begin{equation} \label{explicitfact} \begin{split} &X = \begin{pmatrix} u & v \lambda \\ \varepsilon \bar{v} \lambda^{-1} & \varepsilon \bar{u} \end{pmatrix} , \\ &\widehat{B} = \begin{pmatrix} -\varepsilon \bar{u}b \lambda^{-1} + dv + \varepsilon \bar{u}a -vc \lambda ~&~ b \varepsilon \bar{u} - v d \lambda \\ \varepsilon \bar{v} b \lambda^{-2} -(\varepsilon \bar{v} a + u d)\lambda^{-1} + u c ~&~ -b \varepsilon \bar{v}\lambda^{-1} + u d \end{pmatrix}. \end{split} \end{equation} One can choose $u$ and $v$ so that $\varepsilon (u \bar{u} - v \bar{v}) = 1$ and such that $\widehat{B} \in \mathcal{U}^\mathbb{C}_+$, the latter condition being assured by the requirement that $\frac{u}{\bar{v}} = \varepsilon (b_1 -a_0)a_0$. It is straightforward to verify that $X \widehat{B} = B \omega_1^{-1}$. For the case $|(b_1-a_0)a_0| =1$, substitute $\bar u$ for $\varepsilon \bar u$ and $-\bar{v}$ for $\varepsilon \bar{v}$ in the above expression, and choose $\frac{u}{\bar{v}} = (a_0-b_1)a_0$. 
One can choose $u= \frac{1}{\sqrt{2}}$ and $\bar v = \frac{-e^{i \theta}}{\sqrt{2}}$ and \begin{displaymath} \begin{pmatrix} u & v \lambda \\ -\bar{v} \lambda^{-1} & \bar{u} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ e^{i \theta} \lambda^{-1} & 1 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} e^{- i \theta} \lambda \\ 0 & \sqrt{2} \end{pmatrix}. \end{displaymath} Pushing the last factor into $\widehat{B}$ then gives the required factorization. In this case, $B\omega_1^{-1}$ is in $\mathcal{P}_1$, because it can be expressed as \begin{displaymath} \small\begin{pmatrix} e^{-i\theta/2}& 0 \\ 0 & e^{i\theta/2} \end{pmatrix} \cdot \omega_1 \cdot \begin{pmatrix} e^{i\theta/2} & 0 \\ 0 & e^{-i\theta/2} \end{pmatrix} \widehat{B}. \end{displaymath} \end{proof} \section{The loop group formulation and DPW method for spacelike CMC surfaces in $\mathbb{R}^{2,1}$} \label{section2} The loop group formulation for CMC surfaces in ${\mathbb E}^3$, $\mathbb{S}^3$ and ${\mathbb H}^3$ evolved from the work of Sym \cite{sym1985}, Pinkall and Sterling \cite{pinkallsterling}, and Bobenko \cite{bobenko, bobenko1994}. The Sym-Bobenko formula for CMC surfaces was given by Bobenko \cite{Bob:cmc, bobenko1994}, generalizing the formula for pseudo-spherical surfaces of Sym \cite{sym1985}. The case that the ambient space is non-Riemannian is analogous, replacing the compact Lie group $SU_2$ with the non-compact real form $SU_{1,1}$, as we show in this section. \subsection{The $SU_{1,1}$-frame}\label{section2-wayne} The matrices $\{ e_1, ~ e_2, ~ e_3 \} := \{\sigma_1, ~ - \sigma_2, ~i \sigma_3 \}$ form a basis for the Lie algebra $\mathfrak{g} = \mathfrak{su}_{1,1}$.
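As a quick numerical sanity check (a sketch of ours, not part of the construction, and assuming the standard Pauli-matrix conventions for $\sigma_1, \sigma_2, \sigma_3$), one can verify that this basis is orthogonal with Lorentzian signature $(+,+,-)$ for the trace form $\langle X,Y \rangle = \tfrac{1}{2}\operatorname{trace}(XY)$ used below:

```python
import numpy as np

# Standard Pauli matrices (this convention is our assumption here).
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The basis e1 = sigma1, e2 = -sigma2, e3 = i*sigma3 of su(1,1) from the text.
basis = [sigma1, -sigma2, 1j * sigma3]

def ip(X, Y):
    # Trace form <X, Y> = (1/2) trace(XY).
    return 0.5 * np.trace(X @ Y)

gram = np.array([[ip(X, Y) for Y in basis] for X in basis])
# Gram matrix should be diag(1, 1, -1): an orthogonal Lorentzian basis.
print(np.real_if_close(np.round(gram, 12)))
```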
Identifying the Lorentzian 3-space $\mathbb{R}^{2,1}$ with $\mathfrak{g}$, with inner product given by $\langle X,Y \rangle = \tfrac{1}{2} \text{trace} (XY)$, we have \begin{displaymath} \langle e_1,e_1 \rangle = \langle e_2,e_2 \rangle = - \langle e_3, e_3 \rangle = 1 \end{displaymath} and $\langle \sigma_i,\sigma_j \rangle = 0$ for $i \neq j$. Let $\Sigma$ be a Riemann surface, and suppose $f: \Sigma \to \mathbb{R}^{2,1}$ is a spacelike immersion with mean curvature $H \neq 0$. Choose conformal coordinates $z = x + iy$ and define a function $u: \Sigma \to {\mathbb R}$ such that the metric is given by \begin{equation}\label{eqn:dssquared} \textup{d} s^2 = 4e^{2u}(\textup{d} x^2 + \textup{d} y^2). \end{equation} We can define a frame $F: \Sigma \to SU_{1,1}$ by demanding that \begin{displaymath} F e_1 F^{-1} = \frac{f_x}{|f_x|}, \hspace{1cm} F e_2 F^{-1} = \frac{f_y}{|f_y|}. \end{displaymath} Assume coordinates for the target and domain are chosen such that $f_x(0) = |f_x(0)|e_1$ and $f_y(0) = |f_y(0)|e_2$, so that $F(0) = I$. Then the frame $F$ is unique. A choice of unit normal vector is given by $N = F e_3 F^{-1}$. The Hopf differential is defined to be $Q \textup{d} z^2$, where \begin{displaymath} Q:= \langle N,f_{zz} \rangle = -\langle N_z,f_z \rangle. \end{displaymath} The Maurer-Cartan form, $\alpha$, for the frame $F$ is defined by $\alpha := F^{-1} \textup{d} F = U \textup{d} z + V \textup{d} \bar{z}$. \begin{lemma} The connection coefficients $U := F^{-1}F_z$ and $V := F^{-1}F_{\bar{z}}$ are given by \begin{equation}\label{UhatandVhat} U = \frac{1}{2} \begin{pmatrix} u_z & -2 i H e^u \\ i e^{-u} Q & -u_z \end{pmatrix} , \hspace{1cm} V = \frac{1}{2} \begin{pmatrix} -u_{\bar z} & -i e^{-u} \bar Q \\ 2 i H e^u & u_{\bar z} \end{pmatrix} \; . 
\end{equation} The compatibility condition $\textup{d} \alpha + \alpha \wedge \alpha = 0$ is equivalent to the pair of equations \begin{eqnarray} \, && u_{z \bar z} - H^2 e^{2u}+\tfrac{1}{4} |Q|^2 e^{-2u} = 0, \label{compatibility1} \\ \, && Q_{\bar z} = 2 e^{2 u} H_z \; . \nonumber \end{eqnarray} \end{lemma} \begin{proof} This is a straightforward computation, using $H = \frac{1}{8}e^{-2u}\langle f_{xx} + f_{yy}, N \rangle$, and the consequent $f_{zz} = 2u_z f_z - QN$, $f_{\bar{z} \bar{z}} = 2u_{\bar{z}}f_{\bar{z}} - \bar{Q}N$, $f_{z \bar{z}} = -2 H e^{2u}N$, in addition to \begin{equation} f_z = 2 e^u F \cdot \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \cdot F^{-1} \; , \hspace{1cm} f_{\bar z} = 2 e^u F \cdot \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \cdot F^{-1} \; . \end{equation} \end{proof} \subsection{The loop group formulation and the Sym-Bobenko formula} Now let us insert a parameter $\lambda$ into the $1$-form $\alpha$, defining the family $\alpha^\lambda := U^\lambda \textup{d} z + V^\lambda \textup{d} \bar{z}$, where \begin{equation}\label{withlambda} U^\lambda = \frac{1}{2} \begin{pmatrix} u_z & -2 i H e^u \lambda^{-1} \\ i e^{-u} Q \lambda^{-1} & -u_z \end{pmatrix} , \hspace{.5cm} V^\lambda = \frac{1}{2} \begin{pmatrix} -u_{\bar z} & -i e^{-u} \bar Q \lambda \\ 2 i H e^u \lambda & u_{\bar z} \end{pmatrix} \; . \end{equation} It is simple to check the following fundamental fact: \begin{proposition} The $1$-form $\alpha^\lambda$ satisfies the Maurer-Cartan equation \begin{displaymath} \textup{d} \alpha^\lambda + \alpha^\lambda \wedge \alpha^\lambda = 0 \end{displaymath} for all $\lambda \in \mathbb{C} \setminus \{ 0 \}$ if and only if the following two conditions both hold: \begin{enumerate} \item $\textup{d} \alpha^{1} + \alpha^{1} \wedge \alpha^{1} = 0$, \item the mean curvature $H$ is constant.
\end{enumerate} \end{proposition} Note that, comparing with (\ref{matrixforms}), $\alpha^\lambda$ is a 1-form with values in $Lie(\mathcal{U}_\tau)$, and is integrable for all $\lambda$. Hence it can be integrated to obtain a map $F: \Sigma \to \mathcal{U}_\tau$. \begin{definition} \label{extendedframedef} The map $F: \Sigma \to \mathcal{U}_\tau$ obtained by integrating the above 1-form $\alpha^\lambda$, with the initial condition $F(0) = I$, is called an \emph{extended frame} for the CMC surface $f$. \end{definition} \begin{remark} Such a frame $F$ is also an extended frame for a harmonic map, as, for each $\lambda \in \mathbb{S}^1$, $F^\lambda$ projects to a harmonic map into $SU_{1,1}/K$, where $K$ is the diagonal subgroup. We will not be emphasizing that aspect in this article, however. \end{remark} When $H$ is a nonzero constant, the Sym-Bobenko formula, at $\lambda_0 \in \mathbb{S}^1$, is given by: \begin{eqnarray} \label{symformula} &&\hat{f}^{\lambda_0} = -\frac{1}{2H} \mathcal{S}(F) \Big|_{\lambda = \lambda_0}, \\ && \mathcal{S}(F) := F i\sigma_3 F^{-1} + 2 i \lambda \partial_\lambda F \cdot F^{-1} \; . \end{eqnarray} \begin{theorem} \label{symthm} $\,$ \begin{enumerate} \item \label{symthm1} Given a CMC $H$ surface, $f$, with extended frame $F: \Sigma \to \mathcal{U}_\tau$, described above, the original surface $f$ is recovered, up to a translation, from the Sym-Bobenko formula as $\hat{f}^1$. For other values of $\lambda \in \mathbb{S}^1$, $\hat{f}^\lambda$ is also a CMC $H$ surface in ${\mathbb R}^{2,1}$, with Hopf differential given by $\lambda^{-2} Q$. \item \label{symthm2} Conversely, given a map $F: \Sigma \to \mathcal{U}_\tau$ whose Maurer-Cartan form has coefficients of the form given by (\ref{withlambda}), the map $\hat{f}^\lambda$ obtained by the Sym-Bobenko formula is a CMC $H$ immersion into $\mathbb{R}^{2,1}$. \item \label{symthm3} If $D$ is any diagonal matrix, constant in $\lambda$, then $\mathcal{S}(FD) = \mathcal{S}(F)$.
\end{enumerate} \end{theorem} \begin{proof} For \ref{symthm1}, one computes that $\hat f^1_z=f_z$ and $\hat f^1_{\bar z}=f_{\bar z}$, so $f$ and $\hat f^1$ are the same surface up to translation. For other values of $\lambda$, see item \ref{symthm2}. To prove \ref{symthm2}, one computes $\hat{f}_z$ and $\hat{f}_{\bar{z}}$, and then the metric, the Hopf differential and the mean curvature. Item \ref{symthm3} of the theorem is obvious. \end{proof} The family of CMC surfaces $\hat{f}^\lambda$ is called the \emph{associate family} for $f$. The invariance of the Sym-Bobenko formula with respect to right multiplication by a diagonal matrix is due to the fact that the surface is determined by its Gauss map, given by the equivalence class of the frame in $SU_{1,1}/K$. By direct computation using the first and second fundamental forms, we have: \begin{lemma} The surfaces \begin{eqnarray*} \hat f^{1}_{||} &=& -\frac{1}{2H} \textup{Ad}_{\sigma_1} \mathcal{S}(\textup{Ad}_{\sigma_1} F) \Big|_{\lambda = 1} \\ &=& - \tfrac{1}{2 H} \left[ - F i \sigma_3 F^{-1} + 2 i \lambda \partial_\lambda F \cdot F^{-1} \right]_{\lambda=1} \; , \\ \hat f^{1}_{K} &=& - \tfrac{1}{2 H} \left[ 0 + 2 i \lambda \partial_\lambda F \cdot F^{-1} \right]_{\lambda=1} \end{eqnarray*} are, respectively, the parallel CMC $-H$ surface and the parallel constant Gaussian curvature $-4H^2$ surface to $\hat f^1$. \end{lemma} \subsection{Extending the construction to $G$} In the formulation above we used the group $SU_{1,1}$, but we can use the bigger group $G$ instead, and allow the extended frame to take values in $\mathcal{U} = \mathcal{U}_\tau \sqcup \Psi(i \sigma_1) \cdot \mathcal{U}_\tau$. If we integrate the 1-form $\alpha^\lambda$ above, with the initial condition $\hat{F}(0) = \Psi(i\sigma_1)$ instead of the identity, we obtain a frame, $\hat{F} = \Psi(i\sigma_1) F$, with values in $\Psi(i\sigma_1) \cdot \mathcal{U}_\tau$.
But $\mathcal{S}(\Psi(i \sigma_1) F) = - \textup{Ad}_{\sigma_1} \mathcal{S}(F) + \textup{translation}$, and the effect of $- \textup{Ad}_{\sigma_1}$ on the surface is just an isometry of $\mathbb{R}^{2,1}$, and so a CMC surface is obtained. Similarly, it is clear that we can replace $\mathcal{U}_\tau$ with $\mathcal{U}$ in the converse part of Theorem \ref{symthm}. \subsection{The DPW method for $\mathbb{R}^{2,1}$}\label{section3-wayne} Here we give the holomorphic representation of the extended frames constructed above. To see how it works in practice, consult the examples below, in Section \ref{examplessect}. On a simply-connected Riemann surface $\Sigma$ with local coordinate $z=x+i y$, we define a {\em holomorphic potential} as an $\mathfrak{sl}_2 \mathbb{C}$-valued $\lambda$-dependent $1$-form \[ \xi = A(z,\lambda) dz = \begin{pmatrix} \sum_{j=0}^\infty c_{2j} \lambda^{2j} & \sum_{j=0}^\infty a_{2j-1} \lambda^{2j-1} \\ \sum_{j=0}^\infty b_{2j-1} \lambda^{2j-1} & -\sum_{j=0}^\infty c_{2j} \lambda^{2j} \end{pmatrix} dz \; , \] where the $a_j dz, b_j dz, c_j dz$ are all holomorphic $1$-forms defined on $\Sigma$, and $a_{-1}$ is never zero. Choose a solution $\phi : \Sigma \to \mathcal{U}^\mathbb{C}$ of $d\phi = \phi \xi$, and $G$-Iwasawa split $\phi=F B$ with $F: \Sigma \to \mathcal{U}$ and $B :\Sigma \to \widehat{\mathcal{U}}^\C_+$ whenever $\phi \in \mathcal{B}_{1,1}$.
Expanding \begin{displaymath} B = \begin{pmatrix} \rho & 0 \\ 0 & \rho^{-1} \end{pmatrix} + \mathcal{O}(\lambda), \hspace{1cm} \rho(z,\bar{z}) \in \mathbb{R}^+, \end{displaymath} and, noting that \begin{displaymath} F^{-1} dF = BAB^{-1} dz - dB \cdot B^{-1} \end{displaymath} and $\tau({F^{-1} dF}) = F^{-1} dF$, one deduces that \begin{equation*} \begin{aligned} &F^{-1} dF = \mathcal{A}_1 dz + \mathcal{A}_2 dz + \tau(\mathcal{A}_2) d\bar z + \tau(\mathcal{A}_1) d\bar z \; ,\\ &\mathcal{A}_1={\begin{pmatrix} 0 & \lambda^{-1} \rho^2 a_{-1}\\ \lambda^{-1} \rho^{-2} b_{-1} & 0 \end{pmatrix}} , \hspace{1cm} \mathcal{A}_2= \begin{pmatrix} \tfrac{\rho_z}{\rho} & 0 \\ 0 & -\tfrac{\rho_z}{\rho} \end{pmatrix} . \end{aligned} \end{equation*} Take any nonzero real constant $H$. Substituting $w = \tfrac{i}{H} \int a_{-1} dz$, $Q=-2 H \tfrac{b_{-1}}{a_{-1}}$ and $\rho = e^{u/2}$, we have $F^{-1} dF = U^\lambda dw+V^\lambda d\bar w$ for $U^\lambda(w)$, $V^\lambda(w)$ as in Section \ref{section2}. By Theorem \ref{symthm}, $F$ is an extended frame for a family of spacelike CMC $H$ immersions. \begin{remark} The invariance of the Sym-Bobenko formula, pointed out in Theorem \ref{symthm}, shows that we did not need to choose the unique $F \in \mathcal{U}$ given by the normalization $B \in \widehat{\mathcal{U}}^\C_+$ in our splitting of $\phi$ above, because the freedom for $F$ (Theorem \ref{section1-thm:SU11Iwa}) is postmultiplication by $\mathcal{U}^0$, which consists of diagonal matrices. The normalized choice of $B$, however, will be used sometimes, as it captures some information about the metric of the surface in terms of $\rho$. We also point out that allowing $a_{-1}$ to have zeros will result in a surface with branch points at these zeros. \end{remark} We have proved one direction of the following theorem, which gives a holomorphic representation for all non-maximal CMC spacelike surfaces in $\mathbb{R}^{2,1}$.
In the converse statement, the main issue is that we do not assume $\Sigma$ is simply-connected, which can be important for applications: see, for example, \cite{DorH:cyl}, \cite{DH98}. \begin{theorem} \label{dpwthm} (Holomorphic representation for spacelike CMC surfaces in $\mathbb{R}^{2,1}$) Let \begin{displaymath} \xi = \sum_{i=-1}^\infty A_i \lambda^i \textup{d} z ~~ \in ~ Lie(\mathcal{U}^\mathbb{C}) \otimes \Omega ^{1,0} (\Sigma) \end{displaymath} be a holomorphic 1-form over a simply-connected Riemann surface $\Sigma$, with \begin{displaymath} a_{-1} \neq 0 \end{displaymath} on $\Sigma$, where $A_{-1} = {\small{\begin{pmatrix} 0 & a_{-1} \\ b_{-1} & 0 \end{pmatrix}}}$. Let $\phi :\Sigma \to \mathcal{U}^\mathbb{C}$ be a solution of \begin{displaymath} \phi^{-1} d\phi=\xi. \end{displaymath} Define the open set $\Sigma^\circ := \phi ^{-1} (\mathcal{B}_{1,1})$, and take any $G$-Iwasawa splitting on $\Sigma^\circ$: \begin{equation} \label{thmsplit} \phi = F B, \hspace{1.5cm} F \in \mathcal{U}, \hspace{.2cm} B \in \mathcal{U}^\mathbb{C}_+. \end{equation} Then for any $\lambda_0 \in \mathbb{S}^1$, the map $f^{\lambda_0} := \hat f^{\lambda_0}: \Sigma^\circ \to \mathbb{R}^{2,1}$, given by the Sym-Bobenko formula \eqref{symformula}, is a conformal CMC $H$ immersion, and is independent of the choice of $F$ in (\ref{thmsplit}). Conversely, let $\Sigma$ be a non-compact Riemann surface. Then any non-maximal conformal CMC spacelike immersion from $\Sigma $ into $\mathbb{R}^{2,1}$ can be constructed in this manner, using a holomorphic potential $\xi$ that is well defined on $\Sigma$. \end{theorem} \begin{proof} The only point remaining to prove is the converse statement. This follows from our construction of the extended frame associated to any such surface, together with the argument in \cite{DorPW} (Lemma 4.11 and the Appendix) given for the case that $\Sigma$ is contractible.
However, the latter argument is also valid if $\Sigma$ is any non-compact Riemann surface: the global statement only depends on the generalization of Grauert's Theorem given in \cite{bungart}, that any holomorphic vector bundle over a Stein manifold (such as a non-compact Riemann surface, see \cite{grauertremmert} Section 5.1.5) with fibers in a Banach space, is trivial. \end{proof} \begin{remark} \label{metricremark} We also showed above that if we normalize the factors in (\ref{thmsplit}) so that $B \in \widehat{\mathcal{U}}^\C_+$, and define the function $\rho: \Sigma^\circ \to {\mathbb R}$ by $B|_{\lambda=0} = \textup{diag}(\rho, \rho^{-1})$, then there exist conformal coordinates $\tilde z= \tilde x+i \tilde y$ on $\Sigma$ such that the induced metric for $f^1$ is given by \begin{displaymath} \textup{d} s^2 = 4 \rho^4 (\textup{d} \tilde x^2 + \textup{d} \tilde y^2), \end{displaymath} and the Hopf differential is given by $Q \textup{d} \tilde z^2$, where $Q = -2H\frac{b_{-1}}{a_{-1}}$. \end{remark} \operatorname{su}bsection{Preliminary examples} \label{examplessect} We conclude this section with three examples: \begin{example}\label{exa:cylinders} A cylinder over a hyperbola in $\mathbb{R}^{2,1}$. Let \begin{displaymath} \xi = {\small{\begin{pmatrix} 0 & \lambda^{-1} dz \\ \lambda^{-1} dz & 0\end{pmatrix}}}, \end{displaymath} on $\Sigma = \mathbb{C}$. Then one solution $\phi$ of $d\phi = \phi \xi$ is \begin{displaymath} \phi = \exp \Big\{ \begin{pmatrix} 0 & z \lambda^{-1} \\ z \lambda^{-1} &0 \end{pmatrix} \Big\}, \end{displaymath} which has the Iwasawa splitting $\phi = F \cdot B$, where \begin{displaymath} F = \exp \Big\{ \begin{pmatrix} 0 & z \lambda^{-1} + \bar z \lambda \\ z \lambda^{-1} + \bar z \lambda & 0\end{pmatrix} \Big\}, \hspace{1cm} B = \exp \Big\{ \begin{pmatrix} 0 & -\bar z\lambda \\ -\bar z\lambda & 0 \end{pmatrix} \Big\} , \end{displaymath} take values in $\mathcal{U}$ and $\widehat{\mathcal{U}}^\C_+$ respectively. 
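Because all three exponents are multiples of $\sigma_1$, and hence commute, this splitting can be confirmed directly; the following numerical sketch (the sample values of $z$ and $\lambda$ are our own choice) uses the closed form $\exp(w \sigma_1) = \cosh(w) I + \sinh(w) \sigma_1$:

```python
import numpy as np

def exp_w_sigma1(w):
    # exp(w * sigma_1) = cosh(w) I + sinh(w) sigma_1, valid since sigma_1^2 = I.
    return np.array([[np.cosh(w), np.sinh(w)],
                     [np.sinh(w), np.cosh(w)]])

z, lam = 0.3 + 0.2j, np.exp(0.7j)       # arbitrary sample point, |lambda| = 1
phi = exp_w_sigma1(z / lam)             # the solution of d(phi) = phi * xi above
F = exp_w_sigma1(z / lam + np.conj(z) * lam)
B = exp_w_sigma1(-np.conj(z) * lam)

assert np.allclose(F @ B, phi)          # the Iwasawa splitting phi = F B
```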
The Sym-Bobenko formula $\hat f^1$ gives the surface \begin{displaymath} \frac{-1}{2H} \cdot [ 4 y, \, -\sinh(4 x), \,\cosh(4 x) ], \end{displaymath} in $\mathbb{R}^{2,1} = \{ [x_1,x_2,x_0] := x_1e_1+x_2e_2+x_0e_3 \}$. The image is the set \begin{displaymath} \{ [x_1,x_2,x_0] ~|~ x_0^2-x_2^2 = \tfrac{1}{4H^2} \}, \end{displaymath} which is a cylinder over a hyperbola. \end{example} \begin{example}\label{exa:spheres} The hyperboloid of two sheets. Let \begin{displaymath} \xi = \begin{pmatrix} 0 & \lambda^{-1}\\ 0&0 \end{pmatrix} dz, \end{displaymath} on $\Sigma=\mathbb{C}$. Then one solution of $d \phi = \phi \xi$ is \begin{displaymath} \phi = \begin{pmatrix} 1 & z \lambda^{-1} \\ 0 &1 \end{pmatrix}, \end{displaymath} which takes values in $\mathcal{B}_{1,1}$ for $|z| \neq 1$. For these values of $z$, the $G$-Iwasawa splitting is $\phi = F \cdot B$ with $F : \Sigma \setminus \mathbb{S}^1 \to \mathcal{U}$ and $B: \Sigma \setminus \mathbb{S}^1 \to \widehat{\mathcal{U}}^\C_+$, where \begin{eqnarray*} F &=& \frac{1}{\sqrt{\varepsilon (1-|z|^2)}} \begin{pmatrix} \varepsilon & z \lambda^{-1} \\ \varepsilon \bar z \lambda & 1 \end{pmatrix} ,\\ B &=& \frac{1}{\sqrt{\varepsilon (1-|z|^2)}} \begin{pmatrix} 1 & 0 \\ -\varepsilon \bar{z} \lambda & \varepsilon(1-z \bar{z}) \end{pmatrix}, \hspace{1cm} \varepsilon = \text{sign}(1-|z|^2) \; . \end{eqnarray*} Then the Sym-Bobenko formula gives \begin{displaymath} \hat{f}^1(z) = \frac{1}{H (x^2+y^2-1)} \cdot [ 2y, \, -2x, \, (1+3x^2+3y^2)/2], \end{displaymath} whose image is the two-sheeted hyperboloid $\{ x_1^2 + x_2^2 -(x_0-\frac{1}{2H})^2 = -\frac{1}{H^2} \}$, that is, two copies of a hyperbolic plane of constant curvature $-H^2$. For this example, we are in a small cell precisely when $|z|=1$.
In this case, we can write $\phi$ as a product of a loop in $\mathcal{U}_\tau$ times $\omega_2$ times a loop in $\mathcal{U}^\mathbb{C}_+$, as follows: \[ \begin{pmatrix} 1 & z \lambda^{-1} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} p \sqrt{z} & \lambda^{-1} q \sqrt{z} \\ \lambda q \sqrt{z}^{-1} & p \sqrt{z}^{-1} \end{pmatrix} \cdot \omega_2 \cdot \begin{pmatrix} (p+q) \sqrt{z}^{-1} & 0 \\ - \lambda q \sqrt{z}^{-1} & (p-q) \sqrt{z} \end{pmatrix} \; , \] where $p^2-q^2=1$ and $p$, $q \in {\mathbb R}$. Hence $\phi \in \mathcal{P}_{2}$ for $|z|=1$. \end{example} \begin{example}\label{beaksexample} The first two examples were especially simple, so that we were able to perform the Iwasawa splitting explicitly. This is not possible, in general. However, it can always be approximated numerically, using, for example, the program XLab \cite{xlab}, and images of the surface corresponding to an arbitrary potential $\xi$ can be produced. For example, taking the potential $\xi = \lambda^{-1} \cdot {\small{\begin{pmatrix} 0& 1 \\ \, 100 z & 0 \end{pmatrix}}} dz$, and integrating with the initial condition $\phi(0)=\omega_1$, we obtain, numerically, a surface with a singularity that appears to have the topology of a Shcherbak surface \cite{shcherbak} singularity at $z=0$. The Shcherbak surface singularity is of the form $(u, ~v^3 +uv^2, ~12 v^5 + 10u v^4)$. The singularity from our construction is displayed in Figure \ref{fg:7}. Since $\phi(0) = \omega_1$, this singularity is arising when $\phi$ takes values in $\mathcal{P}_1$. \end{example} \begin{figure} \caption{The singularity appearing in Example \ref{beaksexample}} \label{fg:7} \end{figure} \section{Behavior of the Sym-Bobenko formula on the boundary of the big cell} \label{brander-section} We saw in Example \ref{exa:spheres} an instance of a surface which blows up as the boundary of the big cell is approached. On the other hand, in Example \ref{beaksexample}, we have a case where finite singularities occur.
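In fact the blow-up in Example \ref{exa:spheres} is easy to confirm numerically from the explicit formula for $\hat f^1$ given there. The following sketch (our own, with the choice $H=1$; the sample points are arbitrary) checks both the hyperboloid equation and the divergence as $|z| \to 1$:

```python
import numpy as np

H = 1.0  # any nonzero constant mean curvature

def f_hat(z):
    # Explicit Sym-Bobenko surface of the two-sheeted hyperboloid example, |z| != 1.
    x, y = z.real, z.imag
    r2 = x * x + y * y
    return np.array([2 * y, -2 * x, (1 + 3 * r2) / 2]) / (H * (r2 - 1))

# The image lies on x1^2 + x2^2 - (x0 - 1/(2H))^2 = -1/H^2 ...
p = f_hat(0.3 + 0.4j)
assert np.isclose(p[0]**2 + p[1]**2 - (p[2] - 1 / (2 * H))**2, -1 / H**2)

# ... and the surface diverges as z approaches the unit circle (the small cells):
norms = [np.linalg.norm(f_hat(r + 0j)) for r in (0.9, 0.99, 0.999)]
assert norms[0] < norms[1] < norms[2]
```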
We now want to examine what behavior can be expected in general. Let $\phi: \Sigma \to \mathcal{U}^\mathbb{C}$ be a holomorphic map in accordance with the construction of Theorem \ref{dpwthm}, and $\Sigma^\circ := \phi^{-1}(\mathcal{B}_{1,1})$. We also assume that $\phi$ maps at least one point into $\mathcal{B}_{1,1}$, so that $\Sigma^\circ$ is not empty. Set \begin{displaymath} C:= \Sigma \setminus \Sigma^\circ = \bigcup_{j=1}^\infty \phi^{-1} (\mathcal{P}_j). \end{displaymath} \begin{theorem} \label{summarythm1} Let $\phi$ be as above, and assume that $\Sigma$ is simply connected. Then $\Sigma^\circ$ is open and dense in $\Sigma$. More precisely, its complement, the set $C$, is locally given as the zero set of a non-constant real analytic function from some open set $W \subset \Sigma$ to $\mathbb{C}$. \end{theorem} \begin{proof} This follows from item \ref{mainthm-(3)} of Theorem \ref{section1-thm:SU11Iwa}: the union of the small cells is given as the zero set of a real analytic section $s$ of a real analytic line bundle on $\mathcal{U}^\mathbb{C}$ (see the proof of Theorem \ref{thm:SU11Iwa}). Thus $C$ is given as the zero set of $\phi^* s$, which is also a real analytic section of a real analytic line bundle. Since we assume that the complement of $C$ contains at least one point, it follows that this set is open and dense. \end{proof} For the first two small cells, for which the analysis is the least complicated, we will prove more specific information: set \begin{displaymath} C_1 := \phi^{-1} (\mathcal{P}_1), \hspace{1cm} C_2 := \phi^{-1} (\mathcal{P}_2). \end{displaymath} \begin{theorem} \label{summarythm} Let $\phi$ be as given in Theorem \ref{summarythm1}. Then: \begin{enumerate} \item \label{summary1} The sets $\Sigma^\circ \cup C_1$ and $\Sigma^\circ \cup C_2$ are both {open} subsets of $\Sigma$. The sets $C_i$ are each locally given as the zero set of a non-constant real analytic function $\mathbb{R}^2 \to \mathbb{R}$.
\item \label{summary2} All components of the matrix $F$ obtained by Theorem \ref{dpwthm} on $\Sigma^\circ$, and evaluated at $\lambda_0 \in \mathbb{S}^1$, blow up as $z$ approaches a point $z_0$ in either $C_1$ or $C_2$. In the limit, the unit normal vector $N$, to the corresponding surface, becomes asymptotically lightlike, i.e. its length in the Euclidean ${\mathbb R}^3$ metric approaches infinity. \item \label{summary3} The surface $f^{\lambda_0}$ obtained from Theorem \ref{dpwthm} extends to a real analytic map $\Sigma^\circ \cup C_1 \to \mathbb{R}^{2,1}$, but is not immersed at points $z_0 \in C_1$. \item \label{summary4} The surface $f^{\lambda_0}$ diverges to $\infty$ as $z \to z_0 \in C_2$. Moreover, the induced metric on the surface blows up as such a point in the coordinate domain is approached. \end{enumerate} \end{theorem} \begin{proof} Item \ref{summary1}: For the open condition, it is enough to show that if $z_0 \in \Sigma^\circ \cup C_i$, then there is a neighborhood of $z_0$ also contained in this set. Let $z_0 \in \Sigma^\circ \cup C_1$. Now $\Sigma^\circ$ is open, so take $z_0 \in C_1$. It easy to see that, in the following argument, no generality is lost by assuming that $\phi(z_0) = \omega_1^{-1}$. We can express $\phi$ as \begin{displaymath} \phi = \hat{\phi} \omega_1^{-1}, \end{displaymath} where $\hat{\phi} := \phi \omega_1$. Since $\hat{\phi}(z_0) = I$, the identity, $\hat{\phi}(z)$ is in the big cell in a neighborhood of $z_0$, and therefore can locally be expressed as \begin{displaymath} \hat{\phi} = F B, \hspace{1cm} F: \Sigma \to \mathcal{U}, \hspace{.5cm} B: \Sigma \to \mathcal{U}^\mathbb{C}_+. \end{displaymath} So $\phi = F B \omega_1^{-1}$, and, denoting the components of $B$ as in Lemma \ref{switchlemma}, we have that $\phi(z)$ is in $\mathcal{P}_1$ precisely when \begin{displaymath} g(z) := |b_1(z) - a_0(z)| |a_0(z)| -1 = 0, \end{displaymath} and is in the big cell for other values of this function. 
Note that $g$ cannot be constant, because, by Theorem \ref{summarythm1}, $z_0$ is a boundary point of $\Sigma^\circ$. The case $z_0 \in \Sigma^\circ \cup C_2$ is analogous, and the claim follows. Items \ref{summary2}-\ref{summary4} are proved below as Corollaries \ref{sumcor2}, \ref{sumcor3} and \ref{sumcor4} respectively. \end{proof} \begin{remark} Noting that $\textup{Ad}_{\sigma_1} \omega_{2k-1} = \omega_{2k}$, and that the parallel surface is obtained by applying the Sym-Bobenko formula to $\textup{Ad}_{\sigma_1} F$, the analogue of Theorem \ref{summarythm} applies to the parallel surface, switching $\mathcal{P}_1$ and $\mathcal{P}_2$. \end{remark} \subsection{Behavior of the $\mathcal{U}$ and $\mathcal{U}^\mathbb{C}_+$ factors approaching the first two small cells} We can use Lemma \ref{switchlemma} to show that the matrix $F$, in an $SU_{1,1}$ Iwasawa factorization $\phi = F B$, blows up as $\phi$ approaches either of the first two small cells. Note that all such discussions take place for $\lambda \in \mathbb{S}^1$, so that, for example, if $a$ is a function of $\lambda$, then $a^* = \bar{a}$. \begin{proposition} \label{blowupprop} Let $\phi_n$ be a sequence in $\mathcal{B}_{1,1}$, with $\lim_{n \to \infty} \phi_n = \phi_0 \in \mathcal{P}_m$, for $m = 1$ or $2$. Let $\phi_n = F_n B_n$ be an $SU_{1,1}$ Iwasawa decomposition of $\phi_n$, with $F_n \in \mathcal{U}$, $B_n \in \mathcal{U}^\mathbb{C}_+$. Then: \begin{enumerate} \item Writing $F_n$ as \begin{displaymath} F_n = \begin{pmatrix} x_n & y_n \\ \pm y^*_n & \pm x^*_n \end{pmatrix} \; , \end{displaymath} we have $\lim_{n \to \infty} |x_n| = \lim_{n \to \infty} |y_n| = \infty$, for all $\lambda \in \mathbb{S}^1$.\\ \item Writing the constant term of $B_n$ as \begin{displaymath} B_n \big |_{\lambda =0} = \begin{pmatrix} \rho_n & 0 \\ 0 & \rho_n^{-1} \end{pmatrix}, \end{displaymath} if $m=1$ then $\lim_{n \to \infty} |\rho_n| = 0$, and if $m=2$ then $\lim_{n \to \infty} |\rho_n| = \infty$.
\end{enumerate} \end{proposition} \begin{proof} \textbf{Item (1):} We give the proof for $m=1$. The case $m=2$ can be proved in the same way, or simply obtained from the first case by applying $\textup{Ad}_{\sigma_1}$. According to Theorem \ref{section1-thm:SU11Iwa}, we can write \begin{displaymath} \phi_0 = F_0 \omega_1 B_0, \end{displaymath} with $F_0 \in \mathcal{U}_\tau$ and $B_0 \in \mathcal{U}^\mathbb{C}_+$. Expressing $\phi_n$ as \[ \phi_n = \hat{\phi}_n \omega_1 B_0 \; , \hspace{1cm} \hat{\phi}_n := \phi_n B_0^{-1} \omega_1^{-1} \; , \] we have $\lim_{n \to \infty}\hat{\phi}_n = F_0$, so $\hat{\phi}_n \in \mathcal{B}_{1,1}$ for sufficiently large $n$, because $\mathcal{B}_{1,1}$ is open. Thus, for large $n$, we have the factorization $\hat{\phi}_n = \hat{F}_n \hat{B}_n$, and the factors can be chosen to satisfy $\hat{F}_n \to F_0$ and $\hat{B}_n \to I$, as $n \to \infty$. Using Lemma \ref{switchlemma}, with $\lambda$ replaced by $-\lambda$, we have the expression \begin{displaymath} \phi_n = \hat{F}_n \hat{B}_n \omega_1 B_0 = \hat{F}_n X_n \tilde{B}_n B_0 \; , \hspace{.7cm} \tilde{B}_n \in \mathcal{U}^\mathbb{C}_+ \; . \end{displaymath} Since by assumption $\phi_n \in \mathcal{B}_{1,1}$ for all $n$, the factor $\hat{B}_n \omega_1$ is also, and $X_n$ is always a matrix of the form $k_1$ or $k_2$, that is \begin{displaymath} X_n = \begin{pmatrix} u_n & v_n \lambda \\ \pm \bar{v}_n \lambda^{-1} & \pm \bar{u}_n \end{pmatrix}, \end{displaymath} with $u_n$ and $v_n$ constant in $\lambda$. We also have from Lemma \ref{switchlemma}, that $|u_n|/|v_n| = |\hat{b}_{1,n} - \hat{a}_{0,n}||\hat{a}_{0,n}|$, where $\hat{b}_{1,n} \to 0$ and $\hat{a}_{0,n} \to 1$, as $n \to \infty$, because $\hat{B}_n \to I$. Hence $\lim_{n \to \infty} \frac{|u_n|}{|v_n|} = 1$. 
Combined with the condition $|u_n|^2 - |v_n|^2 = \pm 1$, this implies that $\lim_{n \to \infty} |u_n| = \lim_{n \to \infty} |v_n| = \infty$, and \begin{displaymath} \lim_{n \to \infty} ||X_n|| = \infty, \end{displaymath} where $|| \cdot ||$ is some suitable matrix norm. Now the uniqueness of the Iwasawa splitting $\phi_n = F_n B_n$ says that \begin{displaymath} F_n = \hat{F}_n X_n D_n, \end{displaymath} where $D_n = \textup{diag}(e^{i \theta_n}, e^{-i \theta_n})$ for some $\theta_n \in \mathbb{R}$. Then we have \begin{displaymath} ||X_n|| = ||\hat{F}^{-1}_n F_n || \leq ||\hat{F}^{-1}_n || \, || F_n||, \end{displaymath} and so $\lim_{n \to \infty} ||\hat{F}^{-1}_n || \, || F_n|| = \infty$ also. But $||\hat{F}^{-1}_n|| \to ||F_0||$, which is finite, and so we have $|| F_n || \to \infty$. Because the components of $F_n$ satisfy $|x_n|^2 - |y_n|^2 = \pm 1$, the result follows. \\ \noindent \textbf{Item (2):} For the case $m=1$, proceeding as above, we have $\phi_n = \hat{F}_n X_n \tilde{B}_n B_0$, where $X_n \tilde{B}_n = \hat{B}_n \omega_1$, and $\hat{B}_n \to I$. Up to some constant factor coming from $B_0$, the quantity $\rho_n^{-1}$ is given by the constant term of the matrix component $[\tilde{B}_n]_{22}$, for which we have an explicit expression in (\ref{explicitfact}), that is: \begin{displaymath} \rho_n^{-1} = -\varepsilon \hat b_{n,1} \, \bar{v}_n + u_n \, \hat{d}_{n,0}, \hspace{.5cm} \textup{where } \hspace{.2cm} \hat{B}_n = \begin{pmatrix} \sum_{i=0}^\infty \hat a_{n,i} \lambda ^i & \sum_{i=1}^\infty \hat b_{n,i} \lambda ^i \\ \sum_{i=1}^\infty \hat c_{n,i} \lambda ^i & \sum_{i=0}^\infty \hat d_{n,i} \lambda ^i \end{pmatrix}.
\end{displaymath} Now the facts that $\hat{B}_n \to I$ and $u_n \bar u_n - v_n \bar v_n = \varepsilon$, so that \begin{eqnarray*} \hat b_{n,1} \to 0, \hspace{1cm} \hat{d}_{n,0} \to 1,\\ \frac{|u_n|}{|v_n|} = |\hat{b}_{n,1} - \hat{a}_{n,0}||\hat{a}_{n,0}| \to 1, \hspace{1cm} |u_n| \to \infty, \end{eqnarray*} imply that $|\rho_n^{-1}| \to \infty$, which is what we needed to show. The case $m=2$ is obtained by applying $\textup{Ad}_{\sigma_1}$, which switches $\rho$ and $\rho^{-1}$. \end{proof} \begin{corollary} \label{sumcor2} Proof of item \ref{summary2} of Theorem \ref{summarythm}. \end{corollary} \begin{proof} We just saw that all components of $F$ blow up as $\phi$ approaches $\mathcal{P}_1$ or $\mathcal{P}_2$. Taking $F = {\small{\begin{pmatrix} a & b \\ \pm b^* & \pm a^* \end{pmatrix}}}$, Proposition \ref{blowupprop} says $|a| \to \infty$ and $|b| \to \infty$. The unit normal vector is given by \begin{displaymath} F i\sigma_3 F^{-1} = i \cdot \begin{pmatrix} \pm(aa^* + bb^*) & -2ab\\ 2a^*b^* & \mp(bb^* +aa^*) \end{pmatrix}. \end{displaymath} The $e_3$ component, $\pm(aa^* + bb^*)$, approaches $\infty$. Since $N$ is a unit vector, the only way this can happen is for the vector to become asymptotically lightlike. \end{proof} \subsection{Extending the Sym-Bobenko formula to the first small cell} To show that the surface extends analytically to $C_1 = \phi^{-1}(\mathcal{P}_1)$, we think of the Sym-Bobenko formula as a map from $\mathcal{U}^\mathbb{C}$, instead of $\mathcal{U}$, by composing it with the projection onto $\mathcal{U}$. This is necessary because we showed that the $\mathcal{U}$ factor blows up as we approach $\mathcal{P}_1$. Recall the function $\mathcal{S}$ in \eqref{symformula} used for the Sym-Bobenko formula. Note that if $F \in \mathcal{U}$ then either $F$ or $iF$ is an element of $\Lambda \hat{G}_\sigma \subset \mathcal{U}^\mathbb{C}$, where $\hat{G} = U_{1,1}$.
The Lie algebra of $\hat{G}$ is just $\mathfrak{g} = \mathfrak{su}_{1,1}$ and we can conclude that $F i\sigma_3 F^{-1}$ and $i \lambda \partial_\lambda F \cdot F^{-1}$ are loops in $Lie(\mathcal{U})$. Thus $\mathcal{S}$ is a real analytic map from $\mathcal{U}$ to $Lie(\mathcal{U})$. Define \begin{displaymath} \mathcal{K} := \{ k \in \mathcal{U} ~|~ \mathcal{S}(k) = i \sigma_3 \}. \end{displaymath} \begin{lemma} \label{symlemma} $\mathcal{K}$ is a subgroup of $\mathcal{U}$. Moreover, $\mathcal{K}$ consists precisely of the elements $k \in \mathcal{U}$ such that \begin{equation} \label{keqn} \mathcal{S}(Fk) = \mathcal{S}(F), \end{equation} for any $F \in \mathcal{U}$. \end{lemma} \begin{proof} Both statements follow from the easily verified formula \begin{displaymath} \mathcal{S}(xy) = x \mathcal{S}(y) x^{-1} + 2 i\lambda \partial_\lambda x \cdot x^{-1}. \end{displaymath} Hence it is straightforward to show that $\mathcal{K}$ is a group, and any element $k \in \mathcal{K}$ satisfies (\ref{keqn}) for any $F$. Conversely, if $k$ is an element such that (\ref{keqn}) holds for all $F$, in particular for $F = I$, then $\mathcal{S}(k) = \mathcal{S}(I) = i \sigma_3$, so $k \in \mathcal{K}$. \end{proof} Now $\mathcal{U}^0$ consists of constant diagonal matrices, which are in $\mathcal{K}$, so an immediate corollary of this lemma (see also Theorem \ref{symthm}) is \begin{lemma} \label{extendtoquotient} The function $\mathcal{S}$ is a well-defined real analytic map $\mathcal{U}/\mathcal{U}^0 \to Lie(\mathcal{U})$. \end{lemma} On the big cell, $\mathcal{B}_{1,1}$, we can define an extended Sym-formula $\widetilde{\mathcal{S}} : \mathcal{B}_{1,1} \to Lie(\mathcal{U})$, by the composition \begin{equation} \label{sym3} \widetilde{\mathcal{S}}(\phi) := \mathcal{S} (\pi (\phi)), \end{equation} where $\pi$ is the projection to $\mathcal{U}/\mathcal{U}^0$ given by the $SU_{1,1}$ Iwasawa splitting, described in Corollary \ref{cor1}. 
The map $\widetilde{\mathcal{S}}$ is real analytic on $\mathcal{B}_{1,1}$, being a composition of two real analytic functions. In spite of the conclusion of Proposition \ref{blowupprop}, we now show that this function extends to the first small cell $\mathcal{P}_1$. The critical point in the following argument is the easily verified fact that the matrices $k_i$ given in Lemma \ref{switchlemma} are elements of $\mathcal{K}$. The argument does not apply to the second small cell, because the corresponding matrices $\textup{Ad}_{\sigma_1} k_i$ are \emph{not} elements of $\mathcal{K}$. \begin{theorem} \label{extensionthm} The function $\widetilde{\mathcal{S}}$ extends to a real analytic function $\mathcal{B}_{1,1} \sqcup \mathcal{P}_1 \to Lie(\mathcal{U})$. \end{theorem} \begin{proof} Let $\phi_0$ be an element of $\mathcal{B}_{1,1} \sqcup \mathcal{P}_1$. If $\phi_0 \in \mathcal{B}_{1,1}$ define $\widetilde{\mathcal{S}} (\phi_0)$ by (\ref{sym3}), and this is well defined and analytic in a neighborhood of $\phi_0$. If $\phi_0 \in \mathcal{P}_1$, we have a factorization \begin{equation} \label{ssdecomp} \phi_0 = F_0 \omega_1 B_0, \end{equation} given by (\ref{globaldecomp}). Then $\phi_0 B_0^{-1} \omega_1^{-1}$ is in $\mathcal{B}_{1,1}$, which is an open set. Hence we can define, for $\phi$ in some neighborhood $\mathcal{W}_0$ of $\phi_0$, a new element \begin{displaymath} \hat{\phi} := \phi B_0^{-1} \omega_1^{-1}, \end{displaymath} and $\hat{\phi}$ is in $\mathcal{B}_{1,1}$ for all $\phi \in \mathcal{W}_0$. Now we define, for $\phi \in \mathcal{W}_0$, \begin{equation} \label{sym4} \hat{\mathcal{S}} (\phi) := \widetilde{\mathcal{S}}(\hat{\phi}). \end{equation} We need to check that this is well defined (because $B_0$ is not unique in (\ref{ssdecomp})) and also that \eqref{sym3} and \eqref{sym4} coincide on $\mathcal{W}_0 \cap \mathcal{B}_{1,1}$.
To prove both of these points it is enough to show just the second one, because $\mathcal{W}_0 \cap \mathcal{B}_{1,1}$ is dense in $\mathcal{W}_0$ and because \eqref{sym4} is defined and continuous on the whole of $\mathcal{W}_0$. Now on $\mathcal{W}_0 \cap \mathcal{B}_{1,1}$, we have the Iwasawa factorization $\phi = F B$, so \begin{displaymath} \hat{\phi} = F B B_0^{-1} \omega_1^{-1}. \end{displaymath} Since we know this is in the big cell, we can express this, by Lemma \ref{switchlemma}, as \begin{displaymath} \hat{\phi} = F k B^\prime, \end{displaymath} where $k$ is of the form $\begin{pmatrix} u & v \lambda \\ \pm \bar{v} \lambda^{-1} & \pm \bar{u} \end{pmatrix}$, and $B^\prime$ is in $\mathcal{U}^{\mathbb{C}}_+$. Now $Fk \in \mathcal{U}$, so, by definition, $\widetilde{\mathcal{S}}(\hat{\phi}) = \mathcal{S}(Fk)$. But $k \in \mathcal{K}$, so, in fact, $\hat{\mathcal{S}}(\phi):= \widetilde{\mathcal{S}}(\hat{\phi}) = \mathcal{S}(F) = \widetilde{\mathcal{S}}(\phi)$. \end{proof} \pagebreak \begin{corollary} \label{sumcor3} Proof of item \ref{summary3} of Theorem \ref{summarythm}. \end{corollary} \begin{proof} We just showed that the surface obtained by the Sym-Bobenko formula extends to a real analytic map from $\Sigma^\circ \cup C_1$. To prove that the surface is not immersed at $z_0 \in C_1$, suppose the contrary: that is, there is an open set $W$ containing $z_0$ such that ${f}^{\lambda_0}: W \to \mathbb{R}^{2,1}$ is an immersion. Let $\textup{d} \hat s ^2$ denote the induced metric. From Remark \ref{metricremark}, this metric is given on the open dense set $\Sigma^\circ$ by the expression $4 \rho^4 (\textup{d} x^2 + \textup{d} y^2)$. The quadratic form $\textup{d} x^2 + \textup{d} y^2$ is well defined on $\Sigma$, but, by item (2) of Proposition \ref{blowupprop}, the function $\rho^4$ approaches $0$ as $z$ approaches $z_0$. Therefore the induced metric is zero at this point.
This is a contradiction, because a conformally immersed surface in $\mathbb{R}^{2,1}$ cannot be null at a point. \end{proof} \subsection{The behavior of $\widetilde{\mathcal{S}}$ when approaching other small cells} The function $\widetilde{\mathcal{S}}$ does not extend continuously to any of the other small cells. To see this, consider the functions $\psi^m_z$ and $F_z^m$ given in Remark \ref{psidefremark}. On the big cell, we have \begin{eqnarray*} \widetilde{\mathcal{S}}(\psi^m_z) &=& \mathcal{S}(F^m_z) = i\sigma_3 + \frac{2i(m-1)}{1-z\bar{z}} \begin{pmatrix} - z \bar{z} & \bar{z}\lambda^m \\ -z \lambda^{-m} & z \bar{z} \end{pmatrix}, \hspace{.2cm} \textup{$m$ odd}; \\ \widetilde{\mathcal{S}}(\psi^m_z) &=& i \sigma_3 + \frac{2i\, m}{1-z\bar{z}} \begin{pmatrix} z \bar{z} &-z \lambda^{-m+1} \\ \bar{z}\lambda^{m-1} & - z \bar{z} \end{pmatrix}, \hspace{.2cm} \textup{$m$ even}. \end{eqnarray*} We know $\psi^m_z = \omega_m \in \mathcal{P}_m$ at $z = 1$, and that $\psi^m_z \in \mathcal{B}_{1,1}$ for $|z| \neq 1$; but, other than the case $m=1$, we see that $\widetilde{\mathcal{S}}(\psi^m_z)$ does not have a finite limit as $z \to 1$. We next show that for $\mathcal{P}_2$ this behavior is typical. An example corresponding to the following result is the two-sheeted hyperboloid of Example \ref{exa:spheres}. \begin{theorem} \label{blowupprop2} Let $\phi_n$ be a sequence in $\mathcal{B}_{1,1}$ with $\lim_{n \to \infty} \phi_n = \phi_0 \in \mathcal{P}_2$. Denote the components of $\widetilde{\mathcal{S}}(\phi_n)$ by \begin{small}$\widetilde{\mathcal{S}}(\phi_n) = \begin{pmatrix} a_n & b_n \\ b^*_n & -a_n \end{pmatrix}$. \end{small} Then $\lim_{n\to \infty} |a_n| = \lim_{n\to \infty} |b_n| = \infty$, for all $\lambda \in \mathbb{S}^1$. \end{theorem} \begin{proof} Let $\phi_n = F_n B_n$ be the $SU_{1,1}$ Iwasawa splitting for $\phi_n$, and $\phi_0 = F_0 \omega_2 B_0$.
Because $\textup{Ad}_{\sigma_1} \omega_2 = \omega_1$, $\textup{Ad}_{\sigma_1} \phi_0 = \textup{Ad}_{\sigma_1}F_0 \, \textup{Ad}_{\sigma_1}\omega_2 \, \textup{Ad}_{\sigma_1} B_0$ is in $\mathcal{P}_1$. So $\textup{Ad}_{\sigma_1} \phi_n $ is a sequence in $\mathcal{B}_{1,1}$ which approaches $\mathcal{P}_1$. Therefore, by Theorem \ref{extensionthm}, there exists a finite limit: \begin{displaymath} \lim_{n\to\infty} \mathcal{S}(\textup{Ad}_{\sigma_1} F_n) = L. \end{displaymath} Now \begin{equation} \label{parallelsym} \mathcal{S}(\textup{Ad}_{\sigma_1} F_n) = \sigma_1[-F_n i \sigma_3 F_n^{-1} + 2 i \lambda (\partial_\lambda F_n) F_n^{-1}] \sigma_1, \end{equation} and, from Proposition \ref{blowupprop}, we can write \begin{displaymath} F_n \sigma_3 F_n^{-1} = \begin{pmatrix} \pm (|x_n|^2 + |y_n|^2) & -2x_n y_n \\ 2 x^*_n y^*_n & \mp (|x_n|^2 + |y_n|^2) \end{pmatrix}, \end{displaymath} where $|x_n| \to \infty$, $|y_n| \to \infty$. Thus, all components of the matrix $F_n i \sigma_3 F_n^{-1}$ blow up as $n \to \infty$, and, for the limit $L$ to exist it is necessary that all components of the matrix $\lambda (\partial_\lambda F_n) F_n^{-1}$ also blow up. Now we compute \begin{eqnarray*} \mathcal{S} (F_n) &=& F_n i\sigma_3 F_n^{-1} + 2i \lambda(\partial_\lambda F_n)F_n^{-1} \\ &=& -[-F_n i\sigma_3 F_n^{-1} + 2i \lambda(\partial_\lambda F_n)F_n^{-1}] + 4i \lambda(\partial_\lambda F_n)F_n^{-1} \\ &=& -\sigma_1 \mathcal{S}(\textup{Ad}_{\sigma_1} F_n) \sigma_1 + 4i \lambda(\partial_\lambda F_n)F_n^{-1}. \end{eqnarray*} Since the first term on the right-hand side has the finite limit $-\sigma_1 L \sigma_1$, and all components of the second term diverge, it follows that all components of $\mathcal{S} (F_n)$ diverge. \end{proof} \pagebreak \begin{corollary} \label{sumcor4} Proof of item \ref{summary4} of Theorem \ref{summarythm}. \end{corollary} \begin{proof} We just showed that $f^{\lambda_0}$ diverges to $\infty$ as $z \to z_0 \in C_2$.
The metric is given on $\Sigma^\circ$ by the expression $4 \rho^4 (\textup{d} x^2 + \textup{d} y^2)$ (see Remark \ref{metricremark}). By Proposition \ref{blowupprop}, we have $\rho^4 \to \infty$ as $z \to z_0$. \end{proof} \subsubsection{The higher small cells} Numerical experimentation suggests that the behavior of the surface as $\mathcal{P}_j$ is approached, for $j \geq 3$, may be less straightforward, and an analytic treatment becomes correspondingly more involved. In principle, one can obtain explicit factorizations such as in Lemma \ref{switchlemma} by finite linear algebra, but we do not attempt an exhaustive account here. Observe, however, that relating the Iwasawa decomposition given here to Theorem (8.7.2) of \cite{PreS} shows that the higher small cells occur in higher codimension in the loop group. \section{Spacelike CMC surfaces of revolution and equivariant surfaces in $\mathbb{R}^{2,1}$}\label{section4-wayne} \subsection{Surfaces with rotational symmetry} To make general spacelike rotational CMC surfaces in $\mathbb{R}^{2,1}$, we convert a result in \cite{SKKR} to the $SU_{1,1}$ case. This theorem provides us with a frame $F$ that gives rotationally invariant surfaces when inserted into the Sym-Bobenko formula. \begin{theorem}\label{SU11DelaunayIwasawa} For $a,b \in \mathbb{R}^*$ and $c \in \mathbb{R}$, let $\Sigma = \{ z=x+i y \in \mathbb{C} \, | \, -\kappa_1^2 < x < \kappa_2^2 \}$ and choose $\kappa_1,\kappa_2$ so that $x \in (-\kappa_1^2,\kappa_2^2)$ is the largest interval for which a solution $v=v(x)$ of \begin{equation} \label{vprimeeqn} \begin{split} & (v^\prime)^2 = (v^2-4a^2)(v^2-4b^2)+4c^2v^2 \; , \\ & v^{\prime \prime} = 2 v (v^2-2a^2-2b^2+2c^2), \\ & v(0) = 2 b, \end{split} \end{equation} is finite and never zero ($\prime$ denotes $\tfrac{d}{dx}$). When $c \neq 0$, we require $v^\prime(0)$ and $-b c$ to have the same sign.
Let $\phi$ solve $d\phi = \phi \xi$ on $\Sigma$ for $\xi = A dz$ with \begin{equation}\label{eqn:formforA} A = \begin{pmatrix} c & a \lambda^{-1} + b \lambda \\ -a \lambda - b \lambda^{-1} & -c \end{pmatrix} \end{equation} and $\phi(z=0) = I$. Then we have the $SU_{1,1}$ Iwasawa splitting $\phi = F B$, with \[ \phi = \exp((x+iy)A)\; , \;\;\; F = \phi \cdot \exp (-f A) \cdot B_1^{-1} \; , \;\;\; B = B_1 \cdot \exp (f A) \; , \] where, taking $\sqrt{\det B_0}$ so that $\sqrt{\det B_0}|_{\lambda=0}>0$, \begin{equation*} \begin{aligned} & f = \int_0^x \frac{2 dt}{1 +(4 a b \lambda^2)^{-1} v^2(t)} \; , \\ &B_1 = \frac{1}{\sqrt{\det B_0}} B_0 \; , \hspace{0.7cm} B_0 = \begin{pmatrix} 2 v (b+a \lambda^2) & (2 c v+v^\prime) \lambda \\ 0 & 4 a b \lambda^2 + v^2 \end{pmatrix} \; . \end{aligned} \end{equation*} \end{theorem} The second, overdetermining, equation in (\ref{vprimeeqn}) excludes certain enveloping solutions. In particular it removes constant solutions for $v$, except precisely in the case where we want them (when $a=\pm b$ and $c=0$). \begin{figure} \caption{A surface of revolution in $T_2$ with timelike axis, a surface in its associate family, and the parallel constant Gaussian curvature surface (left to right). The second and third surfaces appear to have cuspidal edge singularities.} \label{fg:4} \end{figure} \begin{proof} Because $B_0|_{z=0} = (4 a b \lambda^2 + 4 b^2) \cdot I$, we have $B|_{z=0} = F|_{z=0} = I$. We set $\Theta=\Theta_1 dx + \Theta_2 dy$, where $z=x+iy$, with \[ \Theta_1 = \begin{pmatrix} 0 & \tfrac{2 a b}{\lambda v} - \tfrac{v \lambda}{2} \\ \tfrac{2 a b \lambda}{v} - \tfrac{v}{2 \lambda} & 0 \end{pmatrix} \; , \hspace{.7cm} \Theta_2 = i \begin{pmatrix} -\tfrac{v^\prime}{2 v} & \tfrac{2 a b}{\lambda v} + \tfrac{v \lambda}{2} \\ - \tfrac{2 a b \lambda}{v} - \tfrac{v}{2 \lambda} & \tfrac{v^\prime}{2 v} \end{pmatrix} \; .
\] A computation gives $B_x+(\Theta_1 + i \Theta_2) B = 0$ and $\Theta_2 B - i B A = 0$, implying $dB+\Theta B - B A (dx+idy) = 0$, and so $F^{-1} dF = \Theta$. Noting that $\Theta_1 + i \Theta_2$ has no singularity at $\lambda = 0$, we have that $B$ is holomorphic in $\lambda$ for all $\lambda \in \mathbb{C}$. Also, $\text{trace}(\Theta_1 + i \Theta_2) = 0$ implies $\det B$ is constant, so $\det B = 1$. Hence $B$ takes values in $\widehat{\mathcal{U}}^\C_+$. We have $\tau(\Theta)= \Theta$, so $\tau(F^{-1} dF)=F^{-1} dF$. It follows from $F|_{z=0}=I$ that $\tau(F)=F$, so $F$ takes values in $\mathcal{U}_\tau$. \end{proof} \begin{remark} Note that we must restrict $\kappa_1,\kappa_2$ so that $v$ is never zero on $\Sigma$. When $v$ reaches zero, this is precisely the moment when $\phi$ leaves $\mathcal{B}_{1,1}$. Also, note that $v$ can be non-constant even when $c = 0$. A solution to the equation for $v$, for example when $0<b<a$ and $c \leq 0$, is given in terms of the Jacobi sn function as: $ v(x) = 2 b \ell^{-1} \text{sn}_{b/(\ell^2 a)}(2 \ell a (x+x_0))$, where $\ell$ is the largest (in absolute value) of the real solutions to the equation $a^2\ell^4 + (c^2-a^2-b^2)\ell ^2 + b^2 =0$, and $x_0$ is chosen so that $v(0)=2b$ and $v^\prime(0) \geq 0$. \end{remark} Inserting the above $F$ into the Sym-Bobenko formula, we get explicit parametrizations of CMC rotational surfaces in $\mathbb{R}^{2,1}$. Because the mean curvature $H$ and the Hopf differential term $Q$ are constant reals, and because the metric $ds^2$ is invariant under translation of the $z$-plane in the direction of the imaginary axis, we conclude that these surfaces are rotationally symmetric, by the fundamental theorem for surface theory, and we have the following corollary. 
\begin{corollary}\label{lastresult} Inserting $F$ as in Theorem \ref{SU11DelaunayIwasawa} into \eqref{symformula} with $\lambda_0=1$, we have a surface of revolution $\hat f^1$ with axis parallel to the line through $0$ and $i A$ in $\mathbb{R}^{2,1} \approx \mathfrak{su}_{1,1}$. In particular, the axis is timelike, null or spacelike when $(a+b)^2-c^2$ is negative, zero or positive, respectively. \end{corollary} \begin{proof} The rotational symmetry of the surface is represented by $F \to \exp(i y_0 A) F$ at $\lambda_0=1$ for each $y_0 \in \mathbb{R}$, and the Sym-Bobenko formula changes from $\hat f^1$ to \[ \exp(i y_0 A) \hat f^1 \exp(-i y_0 A) - i H^{-1} \partial_\lambda (\exp(i y_0 A))|_{\lambda=1} \cdot \exp(-i y_0 A) \; . \] The axis is then a line parallel to the line invariant under conjugation by $\exp(i y_0 A)$. \end{proof} \begin{remark} Using conjugation by $\text{diag}(\sqrt{i},1/\sqrt{i})$ on all of $A$, $\phi$, $F$, $B$, one can see that if we choose $H=-2 a b$, Equation \eqref{withlambda} gives $v=e^{-u}$ and $Q=1$, for the surfaces in Corollary \ref{lastresult}. \end{remark} \subsection{Equivariant surfaces} By inserting the $F$ in Theorem \ref{SU11DelaunayIwasawa} into \eqref{symformula} and evaluating at various values of $\lambda_0 \in \mathbb{S}^1$, we get surfaces in the associate families of the surfaces of revolution in Corollary \ref{lastresult}. These are the equivariant surfaces, which we now describe. \begin{definition} An immersion $f: U \subset \mathbb{R}^2\to{\R^{2,1}}$ is equivariant with respect to $y$ if there exists a continuous homomorphism $R_t:\mathbb{R}\to\mathcal{E}$ into the group $\mathcal{E}$ of isometries of ${\R^{2,1}}$ such that \begin{equation*} \label{eq:equivariant-def} f(x,\,y+t) = R_t f(x,\,y). \end{equation*} \end{definition} In the following we write $z=x+iy$.
\begin{proposition} \label{prop:equivariant1} Let $f: U \subset \mathbb{R}^2\to{\R^{2,1}}$ be a conformal immersion with metric $4 v^{-2} \abs{dz}^2$, mean curvature $H$, and Hopf differential $Q\,dz^2$. Then $f$ is equivariant with respect to $y$ if and only if $v$, $H$ and $Q$ are $y$-independent. \end{proposition} \begin{proof} The proposition is shown by the following sequence of equivalent statements:

1. The immersion $f$ is equivariant with respect to $y$.

2. For any $t\in\mathbb{R}$, the maps $f(x,\,y)$ and $f_t(x,\,y) = f(x,\,y+t)$ differ by an isometry $R_t$ of ${\R^{2,1}}$. To show statement 1 from 2, note that $R_{s+t}f(x,\,y) = f(x,\,y+s+t) = R_t f(x,\,y+s) = R_t R_s f(x,\,y)$, so under suitable non-degeneracy conditions on $f$, the map $t\mapsto R_t$ is a continuous homomorphism.

3. The immersions $f$ and $f_t$ have equal first and second fundamental forms. Statements 2 and 3 are equivalent by the fundamental theorem of surface theory.

4. The geometric data for $f$ satisfy $v(x,\,y+t) = v(x,\,y)$, $H(x,\,y+t)=H(x,\,y)$ and $Q(x,\,y+t) = Q(x,\,y)$.

5. The functions $v$, $H$ and $Q$ are $y$-independent. \end{proof} \begin{proposition} \label{prop:equivariant2} Let $f: \Sigma \subset \mathbb{C} \to{\R^{2,1}}$ be a conformal CMC $H$ immersion with metric $4 v^{-2} \abs{dz}^2$ and Hopf differential $Q\,dz^2$. Take $q \in {\mathbb R}^* := {\mathbb R} \setminus \{ 0 \}$ so that $4H^2 = q^2$, and suppose $|Q|$ is $1$ at some point in $\mathbb{R}^2$. Then $f$ is equivariant with respect to $y$ if and only if $Q$ is constant, $v$ depends only on $x$, and for some $p\in\mathbb{R}$, $v$ satisfies \begin{equation} \label{eq:equivariant-v} {v'}^2 = v^4 -2 p v^2 + q^2 , \quad v'' = 2v(v^2-p). \end{equation} \end{proposition} \begin{proof} If $f$ is equivariant, then $v$ and $Q$ are $y$-independent by Proposition~\ref{prop:equivariant1}. Since $f$ is CMC, $Q$ is holomorphic in $z$, and is hence constant. So $|Q| \equiv 1$.
Since $v$ is $y$-independent, the Gauss equation~\eqref{compatibility1} with $v=e^{-u}$ is a second order ODE in $x$. Multiplying the Gauss equation by $u'$ and integrating yields~\eqref{eq:equivariant-v}, where $p$ is a constant of integration. Conversely, if $v$ and $Q$ satisfy the conditions of the proposition, then $f$ is equivariant, by Proposition~\ref{prop:equivariant1}. \end{proof} \begin{corollary} \label{cor:equivariant3} Any immersion $\hat f^{\lambda_0}$ into ${\R^{2,1}}$ as in \eqref{symformula}, obtained from a DPW potential of the form $Adz$, where $A$ is given by~\eqref{eqn:formforA}, is a conformal CMC immersion equivariant with respect to $y$. Conversely, up to an isometry of ${\R^{2,1}}$, every non-totally-umbilic conformal spacelike CMC $H \neq 0$ immersion equivariant with respect to $y$ is obtained from some DPW potential $Adz$, where $A$ is of the form~\eqref{eqn:formforA}. \end{corollary} \begin{proof} By Theorem~\ref{SU11DelaunayIwasawa}, the extended frame of the immersion obtained from $A$ is of the form $F(x,\,y) = \exp(iyA)\mathcal{G}(x)$ for some map $\mathcal{G}: J \to SU_{1,1}$, where $J = (-\kappa_1^2, \kappa_2^2) \subset {\mathbb R}$. The Sym-Bobenko formula $\hat f^{\lambda_0}$ applied to $F$ yields an immersion which is equivariant with respect to $y$. Conversely, given a CMC immersion $f: (-\tilde{\kappa}_1^2, \tilde{\kappa}_2^2) \times {\mathbb R} \subset \mathbb{R}^2\to{\R^{2,1}}$, which is equivariant with respect to the second coordinate $y$, let $4 v^{-2} \abs{dz}^2$ and $Qdz^2$ be the metric and Hopf differential for $f$, respectively. By a dilation of coordinates $z \to r z$ for a constant $r \in \mathbb{R}$, we may assume $|Q|=1$. Let $q$ be as in Proposition~\ref{prop:equivariant2}. By that proposition, $v$ satisfies~\eqref{eq:equivariant-v} for some $p\in\mathbb{R}$. Let $b = v(0)/2$, and define $a \in {\mathbb R}^*$ by the equation $H=-2 a b$, and so $q=\pm 4ab$.
Then it follows that $p \leq 2 (a^2+b^2)$, and there exist $c\in\mathbb{R}$ and $\lambda_0 \in \mathbb{S}^1$ such that $p = 2(a^2+b^2-c^2)$ and $Q = \lambda_0^{-2}$. Let $\hat f^{\lambda_0}$ be the immersion induced from the DPW potential $\xi =A\textup{d} z$, with $A$ as in Theorem \ref{SU11DelaunayIwasawa}, initial condition $\Phi(0)=I$, and $\lambda_0$ and $H=-2 a b$. Then $\hat f^{\lambda_0}$ has metric $4 v^{-2} \abs{dz}^2$, by Theorems \ref{SU11DelaunayIwasawa} and \ref{dpwthm}, and has mean curvature $-2 a b$ and Hopf differential $\lambda_0^{-2} dz^2$. By the fundamental theorem of surface theory, $f$ and $\hat f^{\lambda_0}$ differ by an isometry of ${\R^{2,1}}$. \end{proof} We now describe the two spaces $R/\!\!\sim_R$ and $E/\!\!\sim_E$ of immersions into ${\R^{2,1}}$ which are rotationally invariant and equivariant, respectively. Both constructions are based on the family of solutions to the integrated Gauss equation~\eqref{eq:equivariant-v}, in which solutions that differ by a coordinate change, and hence yield ambiently isometric immersions, are identified. Bifurcations in the space of solutions to Equation~\eqref{eq:equivariant-v} lead to non-Hausdorff quotient spaces. \begin{figure} \caption{\small A blowup of the moduli space of surfaces with rotational symmetry in $\mathbb{R}^{2,1}$.} \label{fig:moduli} \end{figure} The space $R/\!\!\sim_R$ of immersions with rotational symmetry is a quotient of the space \begin{equation*} \label{eq:equivariantR} R = \{(p,\,q,\,v_0)\in\mathbb{R}^3 \suchthat v_0^4-2pv_0^2+q^2 \ge 0\} \end{equation*} parametrizing solutions to~\eqref{eq:equivariant-v}.
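As a quick numerical sanity check on a point of $R$ (the sample values of $p$, $q$, $v_0$ below are our own, chosen only to satisfy $v_0^4-2pv_0^2+q^2 \ge 0$, and are not from the text), one can integrate the second equation of~\eqref{eq:equivariant-v} and confirm that the first equation is its conserved first integral:

```python
import numpy as np
from scipy.integrate import solve_ivp

# sample point (p, q, v0) in R with v0**4 - 2*p*v0**2 + q**2 >= 0
p, q, v0 = 1.25, 1.0, 0.5
vp0 = np.sqrt(v0**4 - 2*p*v0**2 + q**2)  # take the positive root for v'(0)

# second-order ODE v'' = 2 v (v^2 - p), written as a first-order system
def rhs(x, y):
    v, vp = y
    return [vp, 2*v*(v**2 - p)]

sol = solve_ivp(rhs, (0.0, 5.0), [v0, vp0],
                rtol=1e-10, atol=1e-12, max_step=0.01)

# first integral: v'^2 - (v^4 - 2 p v^2 + q^2) should stay near zero
v, vp = sol.y
drift = np.max(np.abs(vp**2 - (v**4 - 2*p*v**2 + q**2)))
assert drift < 1e-6
```

For this sample point $v$ oscillates between the roots of $v^4 - 2pv^2 + q^2$; the integration merely checks the mutual consistency of the two equations in~\eqref{eq:equivariant-v}.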
The equivalence relation $\sim_R$ on $R$ is defined as follows: $(p_1,\,q_1,\,v_1)\!\!\sim_R\!\!(p_2,\,q_2,\,v_2)$ if, for $k=1$ and $2$, the respective solutions to \begin{equation} \label{vprimesquaredeqn} \begin{split} &{v'}^2 = v^4 - 2 p_k v^2 + q_k^2, \\ & v'' = 2v(v^2-p_k),\\ & v(0) = v_k, \end{split} \end{equation} are equivalent in the following sense: there exist $r\in\mathbb{R}_+$ and $c\in\mathbb{R}$ such that $v_2(x) = rv_1(rx+c)$ or $v_2(x) = -rv_1(rx+c)$. The space $R/\!\!\sim_R$ is a $1$-dimensional non-Hausdorff manifold. For a point in $R$ with $q\neq 0$, the corresponding surface is constructed by relating (\ref{vprimesquaredeqn}) to (\ref{vprimeeqn}). This determines $a$, $b$ and $c$ in Theorem \ref{SU11DelaunayIwasawa}, and the surface is given by Corollary \ref{lastresult}. If $q=0$, the surface is totally umbilic. To describe the space of equivariant immersions, let \begin{equation*} \label{eq:equivariantE} E = \{(p,\,P,\,v_0)\in\mathbb{R}\times\mathbb{C}\times\mathbb{R} \suchthat v_0^4-2pv_0^2+\abs{P}^2 \ge 0\}. \end{equation*} The equivalence relation $\sim_E$ on $E$ is defined as follows: $(p,\,P,\,v_0)\!\!\sim_E\!\!(p',\,P',\,v_0')$ if there exist $q,\,q'\in\mathbb{R}$ and $\lambda\in\mathbb{S}^1$ such that $P = q\lambda^{-2}$ and $P' = q'\lambda^{-2}$, and $(p,\,q,\,v_0)\!\!\sim_R\!\!(p',\,q',\,v_0')$. The space $E/\!\!\sim_E$ is a $2$-dimensional non-Hausdorff manifold. The surface corresponding to a point in $E$, when $P \neq 0$, is as in the case of the space $R$, except that the Sym-Bobenko formula now uses general $\lambda \in \mathbb{S}^1$ (not necessarily $\lambda = 1$). When $P=0$, the surface is totally umbilic. The above constructions are summarized as: \begin{theorem} \label{thm:equivariant-moduli} Up to coordinate change and ambient isometry, the spaces $E/\!\!\sim_E$ and $R/\!\!\sim_R$ are the moduli spaces of CMC immersions into ${\R^{2,1}}$ which are respectively equivariant and rotational.
\end{theorem} \subsection{The moduli space of surfaces with rotational symmetry} Figure~\ref{fig:moduli} shows a blowup of the moduli space of surfaces with rotational symmetry in ${\R^{2,1}}$. The underlying space is the closed $(b,c)$-half-plane obtained by the normalization $\lambda=1$ and $a=1$. The blowdown to the 1-dimensional moduli space of surfaces is the quotient modulo identification of points on segments of hyperbolas $1+b^2-c^2 = (\mathrm{constant})\cdot b$ foliating each region. The examples with spacelike, timelike and lightlike axes are represented respectively by the shaded regions, the unshaded regions, and the left heavily-drawn v-shaped line. Subscripted letters $S$, $L$ and $T$ denote one-parameter families with spacelike, lightlike and timelike axes, respectively; likewise, $s$, $\ell$ and $t$ designate single examples, and the example $m$ has no axis. The moduli space is a connected non-Hausdorff space, and is the disjoint union of eight one-parameter families $S_{1a}$, $S_{1b}$, $S_{2a}$, $S_{2b}$, $S_3$, $T_1$, $T_2$, $T_3$, eight individual examples $s_{1a}$, $s_{1b}$, $s_{1c}$, $\ell_{1a}$, $\ell_{1b}$, $m$, $\ell$, $t$, and the hyperboloids corresponding to $s_0$, $\ell_0$, $t_0$ considered with spacelike, lightlike and timelike axes respectively. The non-Hausdorffness of the moduli space arises from the fact that the limit surface of a sequence of surfaces in any of the one-parameter families (designated by capital letters) to a point not in that family is not uniquely determined: the sequence will have different limit surfaces depending on how the sequence is chosen to be positioned in ${\R^{2,1}}$. The blowup of the moduli space shown in Figure~\ref{fig:moduli} maps this topology.
For example, the same sequence of surfaces in $S_3$ can converge to either $s_{1a}$, $s_{1b}$ or $s_{1c}$; likewise a sequence in $T_3$ can converge to either $\ell_{1a}$, $\ell_{1b}$ or $m$.\\ \begin{figure} \caption{Examples from each of the eight families of surfaces with rotational symmetry in ${\R^{2,1}}$.} \end{figure} \begin{figure} \caption{The eight surfaces with rotational symmetry in ${\R^{2,1}}$.} \label{fig:delaunay2} \end{figure} \section{Analogues of Smyth surfaces in $\mathbb{R}^{2,1}$} \label{section5-wayne} A generalization of Delaunay surfaces in $\mathbb{R}^3$ was studied by B. Smyth in \cite{smyth}. These are constant mean curvature surfaces whose metrics are invariant under rotations. They were also studied by Timmreck et al. in \cite{FerPT}, where they were shown to be properly immersed (a property which we will see does not hold for the analogue in ${\mathbb R}^{2,1}$). The DPW approach was applied in \cite{DorPW} and \cite{bi}. Here we use the DPW method to construct the analogue of Smyth surfaces in $\mathbb{R}^{2,1}$, and describe some of their properties. Define \begin{equation}\label{smyth-potential} \xi = \lambda^{-1} \begin{pmatrix} 0 & 1\\ c z^k & 0 \end{pmatrix} dz \; , \;\;\; c \in \mathbb{C} \; , \;\;\; z \in \Sigma = \mathbb{C} \; , \end{equation} and take the solution $\phi$ of $d\phi = \phi \xi$ with $\phi |_{z=0}=I$. If $k=0$ and $c \in \mathbb{S}^1$, then one can explicitly split $\phi$ as in Example \ref{exa:cylinders}, and the resulting CMC surface is a cylinder over a hyperbola whose axis depends on the choice of $c$. When $c=0$, one produces a two-sheeted hyperboloid. However, when $c \not\in \mathbb{S}^1 \cup \{ 0 \}$ or when $k>0$, Iwasawa splitting of $\phi$ is not so simple. Changing $c$ to $c e^{i \theta_0}$ for any $\theta_0 \in \mathbb{R}$ only changes the resulting surface by a rigid motion and a reparametrization $z\to ze^{-\frac{i\theta_0}{k+2}}$.
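To make the last claim concrete: writing $\alpha = \theta_0/(k+2)$ and $M = \mathrm{diag}(e^{i\alpha/2}, e^{-i\alpha/2})$ (a matrix we introduce here for illustration; it is unitary, diagonal and $\lambda$-independent, so conjugation by it descends through the Iwasawa splitting to a rotation of the surface about the $x_0$-axis), a short computation with $d\phi = \phi\,\xi$ gives $\phi_{c e^{i\theta_0}}(z e^{-i\alpha}) = M^{-1}\phi_c(z)\,M$. The following numerical sketch (sample values of $k$, $c$, $\theta_0$, $z$, with $\lambda = 1$, all assumed) verifies this by integrating the ODE along straight segments from the origin:

```python
import numpy as np
from scipy.integrate import solve_ivp

# sample parameters (assumed for illustration)
k, c, lam, theta0 = 1, 0.8, 1.0, 0.9
alpha = theta0 / (k + 2)
M = np.diag([np.exp(1j * alpha / 2), np.exp(-1j * alpha / 2)])

def phi(z_end, cc):
    # integrate d(phi) = phi * xi along the straight segment z(t) = t * z_end
    def ode(t, y):
        z = t * z_end
        A = (1 / lam) * np.array([[0, 1], [cc * z**k, 0]], dtype=complex)
        return (y.reshape(2, 2) @ A * z_end).ravel()
    sol = solve_ivp(ode, (0.0, 1.0), np.eye(2, dtype=complex).ravel(),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(2, 2)

z = 0.5 + 0.4j
lhs = phi(z * np.exp(-1j * alpha), c * np.exp(1j * theta0))
rhs_ = np.linalg.inv(M) @ phi(z, c) @ M
assert np.max(np.abs(lhs - rhs_)) < 1e-7
```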
So without loss of generality we assume that $c \in \mathbb{R}^+ := {\mathbb R} \cap (0,\infty)$. \begin{lemma}\label{lm:smyth} The surfaces $f: \Sigma^\circ = \phi^{-1}(\mathcal{B}_{1,1}) \to {\mathbb R}^{2,1}$, produced via the DPW method, from $\xi$ in \eqref{smyth-potential}, with $\phi|_{z=0}=I$ and $\lambda_0=1$, have reflective symmetry with respect to $k+2$ geodesic planes that meet equiangularly along a geodesic line. \end{lemma} \begin{proof} Consider the reflections \begin{displaymath} R_\ell(z)=e^{\frac{2 \pi i \ell}{k+2}} \bar z, \end{displaymath} of the domain $\Sigma = \mathbb{C}$, for $\ell \in \{0,1,...,k+1\}$. In the coordinate $w := R_\ell (z)$, we have \begin{displaymath} \xi = A_\ell \Big ( \lambda^{-1} \begin{pmatrix} 0 & 1 \\ c \bar{w}^k & 0 \end{pmatrix} \textup{d} \bar{w} \Big ) A_\ell^{-1}, \hspace{1cm} A_\ell = \text{diag}(e^{\frac{\pi i \ell}{k+2}}, e^{\frac{-\pi i \ell}{k+2}}). \end{displaymath} Comparing this with (\ref{smyth-potential}), it follows that $\phi (z) = A_\ell \phi(\bar{w}) A_\ell^{-1}$, and hence \begin{displaymath} \phi (R_\ell (z)) = A_\ell \phi(\bar{z}) A_\ell^{-1}. \end{displaymath} It is easy to see that this relation extends to the factors $F$ and $B$ in the Iwasawa splitting $\phi = FB$, and so we have a frame $F$ which satisfies \begin{displaymath} F(R_\ell(z)) = A_\ell F(\bar{z}) A_\ell^{-1}. \end{displaymath} Since we have assumed $c \in {\mathbb R}^+$, it follows from the form of $\xi$ and the initial condition for $\phi$ that $\phi(\bar{z}, \lambda) = \overline{\phi(z, \bar{\lambda})}$. This symmetry also extends to the factors $F$ and $B$, and combines with the first symmetry as: $F(R_\ell(z), \lambda) = A_\ell \overline{F(z, \bar{\lambda})}A_\ell^{-1}$. Inserting this into \eqref{symformula}, we have \[ \hat f^\lambda(R_\ell(z)) = -A_\ell \overline{\hat f^{\bar \lambda}(z)} A_\ell^{-1} \; . 
\] Then for $\hat f^1$, the transformation $\hat f^1 \to -\overline{\hat f^1}$ represents reflection across the plane $\{x_2=0\}$ of $\mathbb{R}^{2,1} = \{ x_1e_1+x_2e_2+x_0e_3 \}$, and conjugation by $A_\ell$ represents a rotation by angle $2\pi \ell /(k+2)$ about the $x_0$-axis. \end{proof} \begin{figure} \caption{Details of Smyth surface analogs in $\mathbb{R}^{2,1}$.} \label{fig:smyth} \label{fg:1} \end{figure} We now show that $u: \Sigma^\circ \to \mathbb{R}$ in the metric \eqref{eqn:dssquared} of the surface resulting from the frame $F$ is constant on each circle of radius $r$ centered at the origin in $\Sigma$, that is, $u=u(r)$ is independent of $\theta$ in $z=r e^{i \theta}$. Having this internal rotational symmetry of the metric (without actually having a surface of revolution) is what defines the surface as an analogue of a Smyth surface. \begin{proposition} \label{prop:Smyth rotational symmetry} The solution $u$ of the Gauss equation \eqref{compatibility1} for a surface generated by $\xi$ in \eqref{smyth-potential}, with $\phi |_{z=0}=I$, is rotationally symmetric. That is, $u$ depends only on $|z|$. \end{proposition} \begin{proof} Define \begin{displaymath} \tilde z =e^{i\theta}z, \hspace{1cm} \tilde{\lambda}= e^{i\theta}e^{\frac{i\theta k}{2}}\lambda, \end{displaymath} for any fixed $\theta \in{\mathbb{R}}$. Then \begin{displaymath} \xi(z,\lambda) = L^{-1} \, \Big ( \tilde{\lambda}^{-1} \begin{pmatrix} 0 & 1 \\ c \tilde{z}^k & 0 \end{pmatrix} \textup{d} \tilde{z} \Big ) \, L, \hspace{1cm} L = \begin{pmatrix} e^{\frac{-i k \theta}{4}} & 0 \\ 0 & e^{\frac{i k \theta}{4}} \end{pmatrix}. \end{displaymath} It follows that \begin{displaymath} \phi (\tilde z,\tilde{\lambda})=L\,\phi(z,\lambda)\,L^{-1}. \end{displaymath} Let $\phi = F B$ be the normalized Iwasawa splitting of $\phi$, with $B: \Sigma^\circ \to \widehat{\mathcal{U}}^\C_+$.
Then \begin{eqnarray*} \phi (\tilde z,\tilde{\lambda})&=&(L F(z,\lambda) L^{-1})\cdot(L B(z,\lambda) L^{-1})\\ &=& F(\tilde z, \tilde \lambda) \cdot B(\tilde z, \tilde \lambda). \end{eqnarray*} Since $L B(z,\lambda) L^{-1}$ and $B(\tilde z, \tilde \lambda)$ are both loops in $\widehat{\mathcal{U}}^\C_+$, and the left factors are both loops in $\mathcal{U}$, it follows by uniqueness that the corresponding factors are equal. Recall from Section \ref{section3-wayne} that $u = 2 \log \rho$ is determined by the function $\rho(z)$, which is the first component of the diagonal matrix $B(z) \big|_{\lambda = 0}$. Since this matrix is diagonal and independent of $\lambda$, we have just shown that $B(z) \big|_{\lambda = 0} = B(\tilde z) \big|_{\lambda = 0}$, and hence $u(\tilde z) = u(z)$. \end{proof} We now show that the Gauss equation for these surfaces in $\mathbb{R}^{2,1}$ is a special case of the Painlev\'e III equation. This was proven for Smyth surfaces in $\mathbb{R}^3$ in \cite{bi}. \begin{proposition} \label{prop:Painleve III} The Gauss equation \eqref{compatibility1} for a surface generated by $\xi$ in \eqref{smyth-potential}, with $\phi |_{z=0}=I$, is a special case of the Painlev\'e III equation. \end{proposition} \begin{proof} The Painlev\'e III equation, for constants $\alpha, \beta, \gamma, \delta$, is \begin{displaymath} y^{\prime\prime}=y^{-1}(y^{\prime})^2-x^{-1}y^\prime + x^{-1}(\alpha y^2+\beta)+\gamma y^3 + \delta y^{-1}, \end{displaymath} where $\prime$ denotes the derivative with respect to $x$. Setting $y=e^v$, $\alpha=\beta=0$, $\gamma=-\delta=1$, we have $(v^{\prime}e^v)^\prime=e^{-v} {(v^{\prime}e^v)}^2 - x^{-1} v^{\prime}e^v +e^{3v}-e^{-v}$. Therefore \begin{equation} v^{\prime \prime}+x^{-1} v^{\prime} - 2\sinh (2v)=0 \label{eq:Painleve III} \end{equation} is a particular case of the Painlev\'e III equation. By a homothety and/or reflection, we may assume the surface has $H = 1/2$, and then we have $Q=-cz^k$.
(By Section \ref{section3-wayne}, $Q= - 2 H b_{-1}/a_{-1}$.) Setting $r:= |z|$, the Gauss equation becomes \begin{equation} \label{eq:Gauss of Smyth} 4u_{z\bar z}+c^2 \,r^{2k} e^{-2u}-e^{2u}=0\; . \end{equation} To prove this proposition, we show that Equation \eqref{eq:Gauss of Smyth} can be written in the form \eqref{eq:Painleve III}. Set \[ v:=u-\tfrac{1}{2}\log |Q|=u-\tfrac{1}{2}\log c- \tfrac{k}{2}\log r \; , \] so $4v_{z\bar z}+2k(\log r)_{z\bar z}+ c^2\,r^{2k}\, e^{-2(v+\frac{1}{2}\log c+\frac{k}{2}\log {r})}- e^{2(v+\frac{1}{2}\log c+ \frac{k}{2}\log {r})}=0$, and this simplifies to \begin{displaymath} 4v_{z\bar z}-2c\,r^k\,\sinh (2v)=0. \end{displaymath} Now $v$ is a function of $r$ only, which means that $v_{z \bar{z}} = \frac{1}{4}(v^{\prime \prime}(r) + \frac{1}{r} v^\prime(r))$, and the equation becomes \begin{displaymath} v^{\prime \prime}(r) +r^{-1} v^\prime(r) - 2c\,r^k\,\sinh (2v)=0. \end{displaymath} Now set \begin{displaymath} \mu:=(1+\frac{k}{2})^{-1} r^{1+\frac{k}{2}} \,\sqrt{c}. \end{displaymath} Then $\partial_{r}\mu=r^{\frac{k}{2}}\,\sqrt{c}$. So we have \[ \partial_{r}(\partial_\mu v \,r^{\frac{k}{2}}\sqrt{c})+ r^{-1}\partial_\mu v \,r^\frac{k}{2}\sqrt{c}-2c\,r^k\,\sinh (2v) =0\; , \] which simplifies to $v_{\mu\mu}+\mu^{-1} \,v_\mu - 2\sinh (2v)=0$, coinciding with \eqref{eq:Painleve III}. \end{proof} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Just-in-Time Batch Scheduling Problem with Two-dimensional Bin Packing Constraints} \author[1]{S. Polyakovskiy} \author[1]{A. Makarowsky} \author[2]{R. M'Hallah} {\footnotesize \affil[1]{Optimisation and Logistics Group, School of Computer Science, University of Adelaide, Australia.} \affil[2]{Department of Statistics and Operations Research, \newline College of Science, Kuwait University, Kuwait.} } \maketitle \begin{abstract} This paper introduces and approximately solves a multi-component problem where small rectangular items are produced from large rectangular bins via guillotine cuts. An item is characterized by its width, height, due date, and earliness and tardiness penalties per unit time. Each item induces a cost that is proportional to its earliness and tardiness. Items cut from the same bin form a batch, whose processing and completion times depend on its assigned items. The items of a batch have the completion time of their bin. The objective is to find a cutting plan that minimizes the weighted sum of earliness and tardiness penalties. We address this problem via a constraint programming (CP) based heuristic ({CPH}) and an agent-based modelling heuristic ({ABH}). {CPH} adopts an impact-based search strategy, implemented in the general-purpose solver IBM CP Optimizer. {ABH} is constructive. It builds a solution through repeated negotiations between the set of agents representing the items and the set representing the bins. The agents cooperate to minimize the weighted earliness-tardiness penalties. The computational investigation shows that {CPH} outperforms {ABH} on small-sized instances, while the opposite prevails for larger instances. \end{abstract} \section{Introduction} Multi-component problems are increasingly attracting the interest of the Evolutionary Computation (EC) and Operations Research (OR) communities \cite{Polyakovskiy2014TTP, Bonyadi2016}.
Not only do they combine several combinatorial optimisation aspects into a single problem, but they also emanate from the compounded complexity of conflicting issues in areas like logistics and supply chain management. Solving them requires a thorough understanding of both their compounded and their individual natures. In practice, solving a multi-component problem is harder than separately tackling its components. This paper focuses on a relevant multi-component problem occurring in make-to-order industries that adopt a pull production strategy. A production station requests some parts from preceding stages, and specifies when it needs each part. The preceding stages react by producing the parts on time in order to avoid the handling, transfer, and temporary storage incurred when parts are completed early, and the starving of subsequent stations and delays incurred when parts are finished late. Synchronizing the production requires that every stage complete its workload on time. The problem we consider involves the cutting of raw material at one of its production stages. It occurs in the furniture, wood, and plastic industries. The cutting stage produces a set $N = \left\{1,\ldots,n\right\}$ of $n$ small rectangular items from a set $B=\left\{1,\ldots,m\right\}$ of large identical rectangular sheets of raw material, referred to hereafter as bins. It uses a single guillotine cutting machine whose cuts are parallel to the edges of the sheets. That is, every item is obtained by a series of edge-to-edge parallel straight cuts. Item $i \in N$ is characterized by its width $w_i,$ height $h_i,$ due date $d_i,$ which defines when $i$ should ideally be produced, a per unit time earliness cost $\epsilon_i,$ and a per unit time tardiness cost $\tau_i.$ Depending on the items' dimensions, it is possible to cut more than one item from a bin. A bin $k \!\in\! B$ is characterized by its width $\overbar{W}$ and height $\overbar{H}$. The number of bins $m \!\leq\!
n.$ That is, at worst, each item is packed in a single bin. A subset $N_k \subseteq N$ of items assigned to a bin $k \in B$ cannot overlap and should be completely contained in the bin. The subset forms a batch whose processing time is a function $f\left(N_k\right)$ of $N_k$ and whose completion time is $c_k.$ The completion time $c_i$ of item $i \in N_k$ equals $c_k.$ When produced earlier than $d_i$, item $i$ generates earliness $E_i = \max\left\{0,d_i - c_i\right\},$ yielding an earliness penalty $\epsilon_i E_i.$ On the other hand, it has a tardiness $T_i = \max\left\{0,c_i - d_i\right\}$ and a tardiness cost $\tau_i T_i$ when produced later than $d_i.$ This problem, denoted {JIT with BP}, searches for (i) a feasible guillotine packing of $N$ into bins of $B$, and (ii) its bin cutting schedule that minimizes the total weighted earliness-tardiness $\textstyle\sum_{i \in N} \left(\epsilon_i E_i + \tau_i T_i\right)$. Applications combining bin packing and scheduling range from steel manufacturing~\cite{Reinertsen10,Arbib14} to ship lock scheduling~\cite{Verstichel15} to wood cutting~\cite{Polyakovskiy11}. However, to the best of our knowledge, no research deals with the {JIT with BP} problem introduced in this paper. The problem is motivated by the specificity of make-to-order industries, which are characterised by low-volume high-mix orders with fixed due dates~\cite{plataine}. To be flexible, such industries do not produce large batches of identical items. Their production plans are a function of the demands' due dates and the items' earliness and tardiness penalties. {JIT with BP} is very challenging. It combines and extends two $\mathcal{NP}$-hard combinatorial optimization problems. Its first component is a two-dimensional packing problem that has been extensively addressed via a panoply of techniques varying from OR to Artificial Intelligence (AI) to EC~\cite{Lodi2014107,Burke2012,Polyakovsky2009767,Sim201537}.
Its second component is a just-in-time single-machine batch scheduling problem~\cite{Hazır2012} that has been tackled via different EC approaches with the most prominent ones enumerated in~\cite{Hallah2015}. It incorporates non-overlap, containment, assignment, disjunctive, and sequencing constraints whose simultaneous satisfaction is difficult. In addition, its search space, as for most packing-related problems, contains a large number of infeasible and symmetrical solutions. Optimizing {JIT with BP} is further complicated by the non-traditional interdependence of its two components. Relaxing the packing density (by assigning a single item per bin) does not reduce earliness-tardiness penalties, while a dense production plan that cuts multiple items from a bin yields not only a tighter packing but also shorter cutting times. In fact, a faster production of the items supports the minimization of the earliness-tardiness. In summary, determining the optimal number of bins is not trivial. We address {JIT with BP} via two approximate approaches: {CPH} and {ABH}. {CPH} is a constraint programming (CP) search that exploits constraint propagation. CP generates feasible solutions efficiently thanks to its flexible modelling framework, which exploits the structure of a model to direct and accelerate the search. Here, {CPH} adopts the impact-based search strategy, implemented in the general-purpose solver IBM CP Optimizer \cite{Refalo2004}. {ABH}, on the other hand, is a hybrid constructive heuristic that searches the solution space via agent-based modelling, identifies a feasible packing via CP techniques, and optimally schedules the bins via linear programming. It builds a solution through repeated negotiations between the set of agents representing the items and the set representing the bins. The agents cooperate to minimize the weighted earliness-tardiness penalties.
The choice of an agent-based modelling technique is motivated by (i) its success in application to bin packing \cite{Polyakovsky2009767} and just-in-time scheduling \cite{Polyakovskiy2014115} problems, (ii) the numerous infeasible and symmetric solutions encountered during the search, and (iii) the reported computational results, which confirm its efficiency in comparison to several enumerative techniques. Sections \ref{sec:CPBA} and \ref{sec:AB} detail {CPH} and {ABH}. Section \ref{sec:Results} discusses the results. Section \ref{sec:Conclusion} is a summary. \section{Constraint Programming Search}\label{sec:CPBA} Because CP search techniques are prominent for bin packing and scheduling problems, we formulate {JIT with BP} as a CP model that combines both components and that we approximately solve with a general-purpose commercial solver. In CP, a problem is defined via a set of variables $\mathcal{X}$ and a set of constraints $\mathcal{C}$ given on $\mathcal{X}$ \cite{Bockmayr05,H02}. A variable $x_i \!\in\! \mathcal{X}$ can assume any value of its domain $D(x_i)$. To find an optimum, CP enumerates solutions subject to the constraint store $\mathcal{C}$ using a search tree where every variable $x_i \in \mathcal{X}$ is examined within some node. When $x_i$ is instantiated, the search inspects those constraints that share $x_i$. In CP, each constraint is viewed as a special-purpose procedure that operates on a solution space. In fact, each procedure is a filtering algorithm that excludes, from the domains of the variables, those values that lead to infeasible solutions. CP relies on constraint propagation. Fixing the value of $x_i$ may eliminate some values from the domains of other variables that are connected to $x_i$ via one or more constraints of $\mathcal{C}$. That is, constraints sharing some common variables are linked to each other. Subsequently, the results of one filtering procedure are propagated to the others. 
CP calls the filtering algorithms repeatedly in order to achieve a certain level of consistency. When filtering yields $D(x_i) \!=\! \emptyset$ at some stage, it signals an infeasible solution (i.e., an inconsistent set of constraints). When $\left|D(x_i)\right|\!>\!1$, CP branches on $x_i$ by partitioning $D(x_i)$ into unique values, each corresponding to a branch. As the search descends into the tree, constraint propagation reduces the size of the domains of the variables. CP obtains a feasible solution when $|D(x_i)|=1$ for all $x_i$. When emphasis is on optimality, the search continues until either the optimum is found, or the exploration of the whole search tree is unsuccessful. The performance of a CP model depends on its solver; specifically, on the filtering algorithms and on the search strategies it applies. Here, we resort to IBM ILOG \textsc{CP Optimizer} 12.6.2 with its search algorithm set to the \textit{restart mode}. This mode adopts a general-purpose search strategy~\cite{Refalo2004} inspired by integer programming techniques and based on the concept of the impact of a variable. The impact measures the importance of a variable in reducing the search space. The impacts, which are learned from the observation of the domains' reduction during the search, help the restart mode dramatically improve the performance of the search. \subsection{Decision Variables}\label{sec:DV} The CP model for {JIT with BP} exploits the features of the problem. First, any item $i \!\in\! N$ can be obtained from a bin (or from any region of a bin) via a sequence of horizontal and vertical cuts, as Figure \ref{fig1} illustrates. When the first applied cut is horizontal, as depicted in Figure \ref{fig1}.a, the bin $\textstyle\left(\overbar{W},\overbar{H}\right)$ is split into two regions: $R_i^t$ of size $\left(\overbar{W}, \overbar{H} \!-\! h_i\right)$ at the top of $i,$ and $R_i^r$ of size $\left(\overbar{W} \!-\! w_i, h_i \right)$ at the right of $i$.
Hereafter, we refer to this case as a cutting pattern $a$. When the first cut is vertical, as shown in Figure \ref{fig1}.b, the cutting of the bin produces two regions: $R_i^t$ of size $\left(w_i, \overbar{H} \!-\! h_i\right)$ at the top of $i$ and $R_i^r$ of size $\left(\overbar{W} \!-\! w_i, \overbar{H} \right)$ at the right of $i$. Herein, we designate this pattern as $b$. Therefore, the extraction of any item always generates two regions: one to the top and one to the right of the item. Clearly, some of the regions may get a zero area if $\overbar{W} = w_i$ or $\overbar{H} = h_i.$ Suppose that the number of bins $m\!=\!n.$ The set $R$ of possible regions where the $n$ items may be positioned has $3n$ regions: the first $n$ regions emanate from the $n$ initial empty bins, the second $n$ regions correspond to $R_1^t,\ldots,R_n^t$, and the last $n$ regions to $R_1^r,\ldots,R_n^r$. In fact, the last $2n$ regions result from extracting the $n$ items. Thus, $R=\{R_1,\ldots,R_n,R_1^t,\ldots,R_n^t,R_1^r,\ldots,R_n^r\}$, where $R_1=\ldots=R_n=\left(\overbar{W},\overbar{H}\right)$. Evidently, this assumes that at most one item can be assigned to a region. \setlength{\belowcaptionskip}{0.2\baselineskip plus 0.2\baselineskip minus 0.5\baselineskip} \begin{figure} \caption{Illustrating the free regions resulting from a horizontal (when variable $v_i=0$) and a vertical (when $v_i=1$) cut} \label{fig1} \end{figure} Let $r=\left(r_1,\ldots,r_{3n}\right)$ be an integer-valued vector variable whose $j$th entry $r_j,\ j=1,\ldots,3n,$ is the item packed in the $j$th region; i.e., $r_j=i$ if region $j$ contains item $i$ and 0 if $j$ contains no item. Its domain is therefore \formula{D\!\left(r_j\right) \!=\! 
\left\{0,\ldots,n\right\}, \, \fa{j}{R}}{cp:a1} \noindent When $j \in \left\{1,\ldots,n\right\}$, item $r_j$ is the first item in bin $j.$ When $j\in\left\{n+1,\ldots,2n\right\},\ r_j$ is placed into $R_{j-n}^t$ on top of item $j-n.\ r_j \neq \left(j-n\right)$ since $R_{j-n}^t$ is the result of cutting off item $\left(j-n\right).$ Thus, the domain $D(r_j)$ excludes $j-n:$ \formula{r_j \!\neq\! j\!-\!n, \,\fa{j}{\left\{n\!+\!1,\ldots,2n\right\}}}{cp:a2} \noindent Finally, when $j\in\left\{2n+1,\ldots,3n\right\},\ r_j$ is placed in $R_{j-2n}^r;$ that is, at the right of item $\left(j-2n\right)$ and $D(r_j)$ is free of element $j-2n:$ \formula{r_j \!\neq\! j\!-\!2n, \, \fa{j}{\left\{2n\!+\!1,\ldots,3n\right\}}}{cp:a3} We compute $lb_{f\!s},$ the lower bound of Fekete and Schepers~\cite{Fekete04} on the number of bins required to cut the $n$ items. We force the first $lb_{f\!s}$ regions to have exactly one item. Therefore the domain $D(r_j)$ of $r_j,\ j=1,\ldots,lb_{fs},$ excludes $\{0\}$: \formula{r_j \!\neq\! 0, \, \fa{j}{\left\{1,\ldots,lb_{f\!s}\right\}}}{cp:a4} This rule partially excludes symmetrical solutions arising when a filled bin $k$ empties all its items into an empty bin $k'$ that precedes or succeeds bin $k$ on the cutting machine. This exchange of items produces a different solution but the same objective function value. An empty bin has a zero processing time. Similarly, let $e\!=\!\left(e_1,\ldots,e_n\right)$ be an integer-valued vector of variables reflecting the assignment of items to regions such that $e_i\!=\!j$ if item $i$ is positioned in region $j$. When $e_i \!\in\! \left\{1,\ldots,n\right\},\ i$ is the first item packed into its bin. On the other hand, when $e_i \!\in\! \left\{n+1,\ldots,2n\right\}$ or $e_i \in \left\{2n+1,\ldots,3n\right\},\ i$ is located in region $R_{e_i-n}^t$ at the top of item $\left(e_i-n\right)$ or in region $R_{e_i-2n}^r$ at the right of item $\left(e_i-2n\right)$, respectively.
The domain of variable $e_i$ is \formula{D\!\left(e_i\right) \!=\! \left\{1,\ldots,3n\right\}, \, \fa{i}{N}}{cp:a5} \noindent Because item $i$ cannot be packed into $R_{n+i}$ and $R_{2n+i}$, which are respectively $R_{i}^t$ and $R_{i}^r$, the following holds \formula{\left(e_i \!\neq\! n\!+\!i \right) \!\wedge\! \left(e_i \!\neq\! 2n\!+\!i \right), \, \fa{i}{N}}{cp:a6} Along with the aforementioned variables and constraints, the CP model uses an additional \textbf{five} sets of variables. The \textbf{first} set has two integer-valued vectors $W \!\in\! \left\{\mathbb{N}_{\geq0}\right\}^{3n}$ and $H \!\in\! \left\{\mathbb{N}_{\geq0}\right\}^{3n}$ representing the widths and heights of the regions, where $\left(W_j,H_j\right),\ j\!=\!1,\ldots,3n,$ is the size of $R_j \!\in\! R$. Because $R_i\!=\!(\overbar{W},\overbar{H}),\ R_{n+i}\!=\!R_{i}^t,$ and $R_{2n+i}\!=\!R_{i}^r,$ for $i\!=\!1,\ldots,n,$ {\small \begin{flalign} & D\!\left(W_j\right) \!=\! \overbar{W},\, D\!\left(H_j\right) \!=\! \overbar{H} \!, \; \fa{j}{\left\{1,\ldots,n\right\}} \label{cp:a7} \\ \nonumber &D\!\left(W_j\right) \!=\! \left\{w_{j-n},\dots,\overbar{W}\right\}\!,\; D\!\left(H_j\right) \!=\! \left\{0,\dots,\overbar{H}\!-\!h_{j-n}\right\}\!, \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \fa{j}{\left\{n\!+\!1,\ldots,2n\right\}} \label{cp:a8} \\ \nonumber& D\!\left(W_j\right) \!=\! \left\{0,\dots,\overbar{W}\!-\!w_{j-2n}\right\}\!,\; D\!\left(H_j\right) \!=\! \left\{h_{j-2n},\dots,\overbar{H}\right\}\!, \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \fa{j}{\left\{2n\!+\!1,\ldots,3n\right\}} \label{cp:a9} \end{flalign} } The \textbf{second} set has $x \!\in\! \left\{\mathbb{N}_{\geq0}\right\}^{3n}$ and $y \!\in\! \left\{\mathbb{N}_{\geq0}\right\}^{3n},$ two integer-valued vectors representing the bottom-left corner coordinates of the regions, where $\left(x_j,y_j\right),\ j=1,\ldots,3n,$ refers to the coordinates of $R_j \!\in\! R$.
These variables abide by the conditions: {\small \begin{flalign} & D\!\left(x_j\right) \!=\! 0,\; D\!\left( y_j\right) \!=\! 0, \; \fa{j}{\left\{1,\ldots,n\right\}} \label{cp:a10} \\ \nonumber &D\!\left(x_j\right) \!=\! \left\{0,\dots,\overbar{W}-w_{j-n}\right\}\!,\; D\!\left(y_j\right) \!=\! \left\{h_{j-n},\dots,\overbar{H}\right\}\!, \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \fa{j}{\left\{n\!+\!1,\ldots,2n\right\}} \label{cp:a11} \\ \nonumber& D\!\left(x_j\right) \!=\! \left\{w_{j-2n},\dots,\overbar{W}\right\}\!,\; D\!\left(y_j\right) \!=\! \left\{0,\dots,\overbar{H} \!-\! h_{j-2n}\right\}\!, \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \fa{j}{\left\{2n\!+\!1,\ldots,3n\right\}} \label{cp:a12} \end{flalign} } The \textbf{third} set has an integer-valued vector $s \!\in\! \left\{1,\ldots,n\right\}^{3n}$ where $s_j \!=\! k$ when region $R_j \!\in\! R$ belongs to bin $k \!\in\! B$. The first $n$ regions correspond to the empty bins: {\small \begin{flalign} &D\!\left(s_j\right) \!=\! \left\{1,\ldots,n\right\}, \, \fa{j}{\left\{1,\ldots,3n\right\}} \label{cp:a13} \\[-0.1cm] &s_j\!=\!j, \, \fa{j}{\left\{1,\ldots,n\right\}} \label{cp:a14} \end{flalign} } The \textbf{fourth} set has $v \!\in\! \left\{0,1\right\}^{n}$, a binary vector defining the cutting pattern used to extract each item: $v_i=0$ if the first cut that yields item $i$ is horizontal (cf. Figure \ref{fig1}.a), and 1 otherwise. Thus, \formula{D\!\left(v_i\right) \!=\! \left\{0,1\right\}, \, \fa{i}{N}}{cp:a15} The \textbf{fifth} set has $t\!\in\! \left\{\mathbb{N}_{\geq 0}\right\}^{n}$ and $c \!\in\! \left\{\mathbb{N}_{\geq 0}\right\}^{n},$ two integer-valued vectors that define the processing times and the completion times of the bins. Let $t_{max}$ and $c_{max}$ be upper bounds on the maximal processing and completion time of a bin, respectively. Then {\small \begin{flalign} &D\!\left(t_k\right) \!=\!
\left\{0,\ldots,t_{max}\right\}, \, \fa{k}{B} \label{cp:a16} \\[-0.1cm] &D\!\left(c_k\right) \!=\! \left\{0,\ldots,c_{max}\right\}, \, \fa{k}{B} \label{cp:a17} \end{flalign} } \subsection{CP Model}\label{sec:MODEL} The CP model uses constraint $\ad{g_1}{g_n},$ which ensures that the variables of $\left[g_1,\ldots,g_n\right]$ take distinct values. Additionally, the model uses three expressions: $\eq{g}{h},$ which returns $1$ or $true$ if $g=h$ and $0$ or $false$ otherwise; $\el{g}{h},$ which returns the $h$th variable in the list of variables $g$; and $\countn{g}{h},$ which counts the number of variables of $g$ taking the value $h$. The CP model follows. {\footnotesize \begin{flalign} \nonumber \mbox{min} \,\,& \textstyle\sum_{i \in N} \!\textstyle\sum_{k \in B} \eq{{\el{s}{e_i}}}{k} \cdot \\[-0.05cm] & \left(\epsilon_i \!\!\cdot\! \max\!\left\{0,\!d_i \!-\! c_k\right\} \!+\! \tau_i \!\!\cdot\! \max\!\left\{0,\!c_k \!-\! d_i\right\} \right) \label{cpC:0} \\[-0.05cm] \nonumber \mbox{s.t.} \,\,& \text{(\ref{cp:a1})-(\ref{cp:a17})} \\[-0.05cm] & \el{r}{e_i} \!=\! i & \fa{i}{N} \label{cpC:1} \\[-0.05cm] & \ad{e_1}{e_n} \label{cpC:2} \\[-0.05cm] & \countn{r}{i}\!=\!1 & \fa{i}{N} \label{cpC:15} \\[-0.05cm] & w_i \!\le\!\el{W}{e_i} & \fa{i}{N} \label{cpC:3} \\[-0.05cm] & h_i \!\le\!\el{H}{e_i} & \fa{i}{N} \label{cpC:4} \\[-0.05cm] & W_{n+i} \!\le\!\el{W}{e_i} \cdot \left(1 \!-\! v_i\right) \!+\! w_iv_i & \fa{i}{N} \label{cpC:5} \\[-0.05cm] & H_{n+i} \!\le\!\el{H}{e_i} \!-\! h_i & \fa{i}{N} \label{cpC:6} \\[-0.05cm] & W_{2n+i} \!\le\!\el{W}{e_i} \!-\! w_i & \fa{i}{N} \label{cpC:7} \\[-0.05cm] & H_{2n+i} \!\le\!h_i\left(1 \!-\! v_i\right) \!+\! \el{H}{e_i} \cdot v_i & \fa{i}{N} \label{cpC:8} \\[-0.05cm] & x_{n+i} \!=\! \el{x}{e_i} & \fa{i}{N} \label{cpC:9} \\[-0.05cm] & y_{n+i} \!=\! \el{y}{e_i}\!+\!h_i & \fa{i}{N} \label{cpC:10} \\[-0.05cm] & x_{2n+i} \!=\! \el{x}{e_i}\!+\!w_i & \fa{i}{N} \label{cpC:11} \\[-0.05cm] & y_{2n+i} \!=\! 
\el{y}{e_i} & \fa{i}{N} \label{cpC:12} \\[-0.05cm] & s_{n+i} \!=\! \el{s}{e_i} & \fa{i}{N} \label{cpC:13} \\[-0.05cm] & s_{2n+i} \!=\! \el{s}{e_i} & \fa{i}{N} \label{cpC:14} \\[-0.05cm] & \eq{r_k}{0} \rightarrow \!\!\textstyle\sum_{l=k+1}^n {r_l} \!=\! 0 & k \!\!\in\!\! \left\{lb_{f\!s}\!\!+\!\!1,\!...,\!n\right\} \label{cpC:16} \\ & \scalebox{.98}[1.0]{$t_k \!=\! f \!\left( \cup_{i=1}^n { \left\{e_i \!\!:\! \eq{{\el{s}{e_i}}}{k} \right\}}\right)\!$} & \fa{k}{B} \label{cpC:17} \\[-0.05cm] & c_k \leq c_{k+1} \!-\! t_{k+1} & k \!\!\in\! B \! \setminus \!\! \left\{m \right\} \label{cpC:18} \\[-0.05cm] & c_1 \geq t_1 & \label{cpC:19} \end{flalign} } Expression (\ref{cpC:0}) calculates the objective value as the weighted sum of earliness-tardiness penalties. Because $e_i$ is the region to which item $i$ is assigned, $\el{s}{e_i}$ in Eq. (\ref{cpC:0}) gives the index of the bin where $i$ is packed. When $\eq{{\el{s}{e_i}}}{k}$ confirms that $i$ is assigned to bin $k,$ the expression computes the weighted earliness-tardiness of $i$ based on $c_k$. Constraint (\ref{cpC:1}) establishes a dual notation by forcing region $r_{e_i}$ to contain item $i$ when $i$ is assigned to $e_i$. Constraint (\ref{cpC:2}) verifies that all the items are assigned to different regions. Constraint (\ref{cpC:15}) ensures that exactly one region contains item $i$. Constraints (\ref{cpC:3}) and (\ref{cpC:4}) guarantee that $e_i,$ which holds item $i,$ is large enough to fit $i$. Constraints (\ref{cpC:5}) and (\ref{cpC:6}) bound the size of the top residual area $R_i^t$ obtained when cutting item $i$. When the first cut generating $i$ is horizontal (i.e., $v_i\!=\!0$), Eq. (\ref{cpC:5}) limits the width of $R_i^t$ to the width of region $e_i$ where $i$ is positioned. On the other hand, when the first cut generating $i$ is vertical (i.e., $v_i\!=\!1$), Eq. (\ref{cpC:5}) limits the width of $R_i^t$ to the width $w_i$.
Constraint (\ref{cpC:6}) bounds the height of $R_i^t$ by the difference between the height of $e_i$ and $h_i$. This bound does not depend on the cutting pattern. Similarly, constraints (\ref{cpC:7}) and (\ref{cpC:8}) bound the size of $R_i^r$, with Constraint (\ref{cpC:7}) delimiting the width of $R_i^r$ by the difference between the width of $e_i$ and $w_i$, and Constraint (\ref{cpC:8}) limiting the height of $R_i^r$ to $h_i$ when $i$ is extracted with respect to pattern $a$ (i.e., $v_i=0$) and to the height of $e_i$ when $i$ is positioned and cut according to pattern $b$. Constraint (\ref{cpC:9}) sets the $x$-coordinate of region $R_i^t$ to $e_i$'s $x$-coordinate. Constraint (\ref{cpC:10}) computes the $y$-coordinate of region $R_i^t$ as the sum of the $y$-coordinate of region $e_i$ and $h_i$. Constraint (\ref{cpC:11}) computes the $x$-coordinate of region $R_i^r$ as the sum of the $x$-coordinate of region $e_i$ and $w_i$. Constraint (\ref{cpC:12}) sets the $y$-coordinate of region $R_i^r$ to the $y$-coordinate of region $e_i$. Constraints (\ref{cpC:13}) and (\ref{cpC:14}) define, respectively, the bin to which regions $R_i^t$ and $R_i^r$ belong. Constraint (\ref{cpC:16}) implies that no bin succeeding bin $k$ can hold a region if $k$ is empty. This is a symmetry-breaking constraint that prohibits filling a bin $\left(k\!+\!1\right)$ if bin $k$ is not filled. Constraint (\ref{cpC:17}) calculates the processing time of bin $k$. Constraint (\ref{cpC:18}) guarantees that the processing windows of two consecutive bins do not overlap. Finally, constraint (\ref{cpC:19}) ensures that the schedule starts after time zero. This CP model can solve {JIT with BP} exactly, even with the restart mode. However, {CPH} uses it to find an approximate solution, as it presets its runtime. Within the search, items are assigned to regions in the ascending order of their due dates. The solver instantiates $e_1,\ldots,e_n,$ and then applies its default strategy to the other variables.
Our extensive experiments show that this ordering yields the best results quickly. \section{Agent-Based Heuristic}\label{sec:AB} {ABH} is a constructive heuristic that packs items into bins through negotiation, and schedules the packed bins optimally using a linear program. It uses a packing procedure {\texttt{PACK}} that searches for a feasible guillotine packing of a set of items into a single bin via a reduced version of {CPH}. Because it solves a feasibility problem, {\texttt{PACK}} omits Eq. ({\ref{cpC:0}}) and drops the $s$, $t$ and $c$ variables along with their related constraints, i.e., Eqs. (\ref{cp:a13}-\ref{cp:a14},\ref{cp:a16}-\ref{cp:a17},\ref{cpC:13}-\ref{cpC:19}). {\texttt{PACK}} is solved via IBM ILOG's CP Optimizer, which is allocated a preset threshold runtime. It either returns a feasible packing or signals the infeasibility of the grouping of the items in a bin. Suppose that $m$ is fixed and that an integer-valued vector $x\!\in\!\left\{\mathbb{N}_{\geq0}\right\}^n$ represents an assignment of items to bins such that $x_i=k$ if item $i \!\in\! N_k$. For a given $x,$ the processing time of bin $k \!\in\! B$ is known. Thus, the completion times that minimize the weighted earliness-tardiness of the items are given by the optimal solution to the linear program $\OBJ{N}$ whose decision variables are: $c_k\geq0,\ T_i\geq 0$ and $E_i\geq 0$ for $k \!\in\! B$ and $i \!\in\! N$. $\OBJ{N}$ follows.
{\small \begin{flalign} \mbox{min} \,\,& \textstyle\sum_{i \in N} \left(\epsilon_i E_i + \tau_i T_i\right) & \label{eq:1} \\[-0.1cm] \nonumber \mbox{s.t.} \,\,& \text{(\ref{cpC:18})-(\ref{cpC:19})} &\\[-0.1cm] & T_i-E_i = c_k - d_i & \fa{i}{N},\; \fa{k}{B\,:\, x_i=k} \label{eq:2} \\[-0.1cm] & T_i,\;E_i\in \mathbb{R}_{\geq0} & \fa{i}{N} \label{eq:3} \\[-0.1cm] & c_k\in \mathbb{R}_{\geq0} & \fa{k}{B} \label{eq:4} \end{flalign} } $\OBJ{N}$ schedules the bins on the single cutter and inserts idle time between successive bins if this decreases the total weighted earliness-tardiness, defined by Eq. (\ref{eq:1}). Eq. (\ref{cpC:18}) determines the completion time of each bin ensuring that the processing periods of two successive bins do not overlap in time. Eq. (\ref{cpC:19}) guarantees that the schedule starts after time zero. Eq. (\ref{eq:2}) calculates the tardiness $T_i$ and the earliness $E_i$ of item $i$ when $i$ is assigned to bin $k$. Finally, Eqs. (\ref{eq:3}-\ref{eq:4}) declare the variables nonnegative. $\OBJ{N}$ is a linear program that can be solved via IBM ILOG's CPLEX. \subsection{Initial Partial Solution}\label{sec:ps} {ABH} starts by constructing a partial solution. \textbf{First}, {ABH} computes the lower bound $lb_{f\!s}$ of Fekete and Schepers~\cite{Fekete04} on the number of bins required to pack the $n$ items. Then {ABH} constructs a partial solution with $m\!=\!lb_{f\!s}$ empty bins. \textbf{Second}, {ABH} uses a peak clustering algorithm \cite{Yager} that identifies items with close due dates. These items constitute a bottleneck on the machine. The idea is that a bin groups a set of items such that its completion time lies at the centre of the due dates of the items. Assigning items causing a bottleneck to the same bin reduces the lateness of the items, in particular when the items' per unit earliness-tardiness penalties are symmetric. The clustering algorithm finds the $k$th cluster by evaluating, for $i \!\in\!
N,$ a clustering function $$\phi_i^{k} = \begin{cases} \textstyle\sum_{i' \in N} \exp \left(- \frac{{\lvert d_i - d_{i'} \rvert}^2}{(.5r)^2 }\right), & \mbox{if } k=1 \\ \phi_i^{k-1} - \phi_{i_{k-1}^*}^{k-1} \exp \left(- \frac{{\lvert d_i - d_{i_{k-1}^*} \rvert}^2}{(.5r)^2} \right), & \mbox{if } k>1 \end{cases}$$ \noindent where $r$ is a parameter that governs the width of the peak function. It then selects $i_k^*,$ the item with the largest peak function value: $\phi_{i_k^*}^k = \max_{i \in N} \{{\phi_i^k}\}$. It removes $i_k^*$ from $N$ and places it in a list $\overbar{U}$ of bottleneck items. It updates the modified peak function and iterates the process until it has removed $m$ bottleneck items. The $n-m$ unpacked items constitute the set $U$. \subsection{Completing a Partial Solution}\label{sec:CS} Let $\WETbin{i^*}{k}=$ \begin{center} $\displaystyle \sum_{i\in\overbar{U}\cup\left\{i^*\right\}} \sum_{k' \in B:x_i=k'} \left(\epsilon_i \!\cdot\! \max\left\{0,d_i \!-\! c_{k'}\right\} \!+\! \tau_i \!\cdot\! \max\left\{0,c_{k'} \!-\! d_i\right\}\right)$ \end{center} denote the objective value of the current partial solution augmented by $\{i^*\}$ were $i^*$ to leave $U$ and be packed in $k$. $\WETbin{i^*}{k}$ is the value of the optimum of $\OBJ{\overbar{U}\!\cup\! \{i^*\}}$. The items in $U$ and the filled bins act as individual greedy agents that undertake negotiation actions. A bin $k$ seeks to attract the best item $i^*\!\in\! U$, defined as the item which minimises $\WETbin{i^*}{k}$. Likewise, item $i \!\in\! U$ seeks the best bin $k^*$ where $k^*$ minimises $\WETbin{i}{k^*}$ should $i$ leave $U$ and join $k^*$. The item-attraction and bin-seeking negotiation actions, named {\texttt{GroupFormation}} and {\texttt{GroupJoin}}, respectively, continue until all items are assigned to bins $\left(U \!=\! \emptyset \right)$ or none of the remaining items of $U$ are packable into existing bins. At this point, a repacking procedure is triggered.
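To make the peak-clustering rule of Section~\ref{sec:ps} concrete, the following sketch implements it in Python (the function name, data layout, and tie-breaking are our own illustrative choices, not part of {ABH}):

```python
import math

def peak_cluster_seeds(due_dates, m, r):
    """Select m bottleneck items by the peak-clustering rule.

    due_dates: dict mapping item id -> due date d_i (hypothetical layout);
    r: parameter governing the width of the peak function.
    """
    def peak(a, b):
        return math.exp(-abs(a - b) ** 2 / (0.5 * r) ** 2)

    # phi[i] is the k = 1 peak-function value of item i
    phi = {i: sum(peak(due_dates[i], due_dates[j]) for j in due_dates)
           for i in due_dates}
    seeds = []
    for _ in range(m):
        i_star = max(phi, key=phi.get)   # item with the largest peak value
        seeds.append(i_star)
        p_star = phi.pop(i_star)         # remove i* from N
        for i in phi:                    # deflate the peak function around i*
            phi[i] -= p_star * peak(due_dates[i], due_dates[i_star])
    return seeds
```

For instance, with due dates clustered around 10 and 30 and $m=2$, the sketch selects one seed item from each due-date cluster; the remaining items would form the set $U$.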
Every time a bin's {\texttt{GroupFormation}} action completes successfully, {ABH} runs a local search that obtains an optimised partial solution before {ABH} undertakes its next assignment decision. Stated differently, the local search mitigates the myopia of the agents' greedy actions and keeps the partial solution balanced during the construction phase. It employs two operators: \textsl{insertion}, which moves an item from bin $k$ to its neighbouring bin $k\!+\!1$ or vice versa, and \textsl{swap}, which exchanges two items, $i \!\in\! N_k$ and $i'\!\in\! N_{k+1},$ from two neighbouring bins. A move or exchange is adopted when it improves the current solution and maintains its feasibility. The local search applies the operators sequentially until it can no longer improve the solution. It operates on neighbouring bins only in order to have a short run time. However, it may consider a larger neighbourhood when the earliness and tardiness costs are highly asymmetric. \subsubsection{\textbf{GroupFormation}}\label{sec:GF} The {\texttt{GroupFormation}} action, detailed in Algorithm \ref{Alg:GF}, considers the items in $U$ in ascending order of their $\textstyle \epsilon_i E_i + \tau_i T_i,$ where $E_i$ and $T_i$ are computed with respect to the current $c_{k}$ of bin $k$. Evidently, the current $c_{k}$ is an estimate of the true $c_{k},$ which changes as items of $U$ are assigned to $k$. For each of the first $\rho$ items of $U$, the {\texttt{GroupFormation}} action of bin $k$ computes, using $\OBJ{N}$, the true $c_{k}$ were this item to join $k$. It considers the $\rho$ most prominent items first to reduce its run time. Next, the {\texttt{GroupFormation}} action of a bin $k$ chooses the candidate $i^*$ which induces the smallest $\WETbin{i^*}{k}$ were $i^*$ to join $k$. It checks whether $k$ can append $i^*$.
If the $lb_{f\!s}$ lower bound for $N_{k}\!\cup\!\left\{i^*\right\}$, denoted as $lb_{f\!s}\left(N_{k}\!\cup\!\left\{i^*\right\} \right),$ is less than or equal to 1, $i^*$ may be packed in $k$. Therefore, the {\texttt{GroupFormation}} action triggers {\texttt{PACK}}, which searches for a feasible guillotine packing of $i^*$ and the items already assigned to $k$. When {\texttt{PACK}} returns a valid packing, $k$ sends a join request to $i^*$, which has independently undertaken its own {\texttt{GroupJoin}} action. When $i^*$ accepts the request, the {\texttt{GroupFormation}} action assigns $i^*$ to $k$ and terminates successfully. {\texttt{PACK}} may fail to find a feasible packing of $i^*$ into $k$, either because it runs out of time or because it has exhausted all packing possibilities. In this case, the {\texttt{GroupFormation}} action of $k$ removes $i^*$ from the list of candidate items and selects the next item, say $i^{**},$ that minimises $\WETbin{i^{**}}{k}$ were $i^{**}$ to join $k$. When $k$ fails to append any of the $\rho$ candidate items, it iteratively considers the next item of $U$, computes the true $c_{k}$ were this item to join $k$, and checks the feasibility of packing this item into $k$. If it still fails to append any of the items of $U$, the {\texttt{GroupFormation}} action terminates unsuccessfully. When all available bins fail their {\texttt{GroupFormation}} actions while $U \!\neq\!\emptyset,$ the repacking procedure of Section~\ref{sec:Repacking} is applied. \begin{algorithm}[t] {\small \caption{{\texttt{GroupFormation}} action of bin $k$}\label{Alg:GF} \begin{adjustwidth}{-0.3cm}{} \begin{algorithmic} \State Sort $U$ in ascending order of $\epsilon_i E_i \!+\!
\tau_i T_i$, $i\!\!\in\!\!U$, with respect to $c_{k}$; \State Initialise the list of candidates $\Gamma=\emptyset$ and the list of examined items $\Delta=\emptyset$; \State Select the first $\rho$ items in $U$ and add them to $\Gamma$; \While{($true$)} \For{($\forall i \in \Gamma$)} \If {($\alpha_{ki}=-1$)} $\alpha_{ki}$ = $\WETbin{i}{k}$; \EndIf \EndFor \State Select $i^* = \arg\min_{i \in \Gamma}\{\alpha_{ki}\}$; \If{($lb_{f\!s}\left(N_{k}\!\cup\!\left\{i^*\right\}\right)\leq 1$)} \If{($\lambda_{ki^*} = -1$)} $\lambda_{ki^*} = \text{\texttt{PACK}}\left(N_k\!\cup\!\left\{i^*\right\}\right)$; \EndIf \If{($\lambda_{ki^*} = 1$)} \If{(the {\texttt{GroupJoin}} action of $i^*$ returns $true$)} \State Append $i^*$ to $N_k$ and remove it from $U$; \For{($\forall\alpha_{k'i} \in A:\,k'\in B,\,i\in N$)} Set $\alpha_{k'i}=-1$; \EndFor \For{($\forall\lambda_{ki} \in \Lambda:\,i\in N$)} Set $\lambda_{ki}=-1$; \EndFor \State Return $true;$ \EndIf \EndIf \EndIf \State Add $i^*$ to $\Delta$ and remove it from $\Gamma$; \If{($\Gamma = \emptyset$)} \If{($\Delta = U$)} Return $false$; \EndIf \State Set $\Gamma = U \setminus \Delta$; \EndIf \EndWhile \end{algorithmic} \end{adjustwidth} } \end{algorithm} \subsubsection{\textbf{GroupJoin}}\label{sec:GJ} The bin and item agents act individually and greedily. This can result in low-penalty items being attached to bins whose completion times are far from the items' due dates, and can also be problematic when the per unit penalties of the items assigned to a bin are highly asymmetric. It can quickly lead to a ripple effect: low-penalty items may force items with higher penalties out of their most proximal bins due to packing constraints, which in turn forces more items to be packed into less proximal bins, yielding a large increase in the penalties and an overall degeneration in solution quality.
To avoid such a scenario, an item $i$ declines an attachment offer from bin $k$ if it can be packed into a bin $k'$ for which the overlap of $[d_{i}-p_{i},d_{i}]$ with $[c_{k'}-p_{k'},c_{k'}]$ is larger than the overlap of $[d_{i}-p_{i},d_{i}]$ with $[c_{k}-p_{k},c_{k}],$ where $[d_{i}-p_{i},d_{i}]$ is the ideal processing window of $i$ and $p_{i}$ is the processing time of a bin containing only item $i$. Each item $i \!\in\! U$ is interested in joining a bin whose processing time window overlaps its own ideal processing window. Let $\mathcal{O}_i$ denote the set of such bins. The {\texttt{GroupJoin}} action sorts the bins of $\mathcal{O}_i$ in descending order of the overlap, and chooses the bin $k' \!\in\! \mathcal{O}_i$ with the largest overlap. If $k'$ is the bin that sent $i$ the attachment offer, $i$ accepts the joining request. Otherwise, the {\texttt{GroupJoin}} action checks whether $i$ can be packed in $k'$. If $lb_{f\!s}\left(N_{k'}\!\cup\!\left\{i\right\} \right)\! \leq\! 1$, it calls {\texttt{PACK}}. If the packing of $i$ in $k'$ is feasible, $i$ declines the invitation of $k$; otherwise, the {\texttt{GroupJoin}} action considers the next bin of $\mathcal{O}_{i}.$ When the {\texttt{GroupJoin}} action exhausts all bins of $\mathcal{O}_{i}$ without packing $i$, it proceeds to the following alternative plan. For each bin $k' \!\in\! B \!\setminus\! \mathcal{O}_i,$ it computes the penalty $\WETbin{i}{k'}$ were $i$ to join $k'$. It forms the set $B' \!\subseteq\! B \!\setminus\! \mathcal{O}_i$ of bins with $\WETbin{i}{k'}\leq\WETbin{i}{k}$, sorts $B'$ in ascending order of $\WETbin{i}{k'},$ and scans it iteratively. If the scan reaches $k$ itself (i.e., $k'\!=\!k$), $i$ accepts the invitation of $k$ and the {\texttt{GroupJoin}} action ends successfully. Otherwise, it tests whether $i$ is packable in $k'$ with a lower earliness-tardiness penalty. If $lb_{f\!s}\left(N_{k'}\!\cup\!\left\{i\right\} \right)\! \leq\!
1$ and {\texttt{PACK}} returns a feasible packing, the {\texttt{GroupJoin}} action declines the invitation of $k$; when the packing is impossible, it checks the next bin of $B'$. The {\texttt{GroupJoin}} action ends successfully when it has checked all the bins of $B'$ without finding a bin offering a placement with smaller earliness-tardiness penalties. Algorithm \ref{ALG:JR} details the {\texttt{GroupJoin}} action. \begin{algorithm}[t] {\small \caption{{\texttt{GroupJoin}} action of item $i$ when a join request from bin $k$ is received}\label{ALG:JR} \begin{adjustwidth}{-0.3cm}{} \begin{algorithmic} \State Initialise the list of bins $\mathcal{O}_i\!=\!\cup_{k'\in B} \left\{k': [d_i\!-\!p_i,d_i] \!\cap\! [c_{k'}\!-\!p_{k'},c_{k'}] \!\neq \!\varnothing\! \right\}$; \State Sort $\mathcal{O}_i$ in the descending order of the overlap $[c_{k'}-p_{k'},c_{k'}] \cap [d_i-p_i,d_i]$; \For {($\forall k' \in \mathcal{O}_i$)} \If{($k'=k$)} Return $true$; \EndIf \If{($lb_{f\!s}\left(N_{k'}\!\cup\!\left\{i\right\}\right)\leq 1$)} \If{($\lambda_{k'i} = -1$)} Set $\lambda_{k'i} = \text{\texttt{PACK}}\left(N_{k'}\!\cup\!\left\{i\right\}\right)$; \EndIf \If{($\lambda_{k'i} = 1$)} Return $false$; \EndIf \EndIf \EndFor \For{($\forall k' \in B \setminus \mathcal{O}_i$)} \If {($\alpha_{k'i}=-1$)} $\alpha_{k'i}=\WETbin{i}{k'}$; \EndIf \EndFor \State Form the list of bins $B'\!=\cup_{k'\in B \setminus \mathcal{O}_i} \left\{k': \WETbin{i}{k'}\leq\WETbin{i}{k}\right\}$; \State Sort $B'$ in ascending order of $\WETbin{i}{k'}$, $k' \in B'$; \For{($\forall k' \in B'$)} \If {($k'=k$)} Return $true$; \EndIf \If{($lb_{f\!s}\left(N_{k'}\!\cup\!\left\{i\right\}\right)\leq 1$)} \If {($\lambda_{k'i} = -1$)} $\lambda_{k'i}= \text{\texttt{PACK}}\left(N_{k'}\!\cup\!\left\{i\right\}\right)$; \EndIf \If {($\lambda_{k'i} = 1$)} Return $false$; \EndIf \EndIf \EndFor \State Return $true$; \end{algorithmic} \end{adjustwidth} } \end{algorithm} \subsubsection
{\textbf{Repacking}}\label{sec:Repacking} When there remain unpacked items (i.e., $U\neq \emptyset$) while the {\texttt{GroupFormation}} actions of all the bins fail, {ABH} triggers a repacking procedure. It identifies the bin $k^*$ whose completion time $c_{k^*}$ is the closest to the due date of the first unpacked item $i\!\in\!U$. It empties bin $k^*$ along with bins $(k^*-1)$ and $(k^*+1)$ (if they exist), and inserts their items into set $U$. It inserts a new bin immediately following bin $k^*$, increments the number of bins by one, and resumes the {\texttt{GroupFormation}} actions. \subsubsection {\textbf{Caching}}\label{sec:Caching} {ABH} reduces its runtime by nearly an order of magnitude by caching the results of previous objective function calls and packing feasibility checks. Consider bin $k \in B$ sending a join request to item $i \in U$. {ABH} does not recompute the total scheduling penalty resulting from $i$ joining $k$ if this penalty was computed previously and the schedule has since been unchanged. Similarly, {ABH} does not rerun {\texttt{PACK}} for $N_k \cup \left\{i\right\}$ if the result was obtained at a prior stage and the content of bin $k$ has since been unchanged. {ABH} proceeds similarly for the {\texttt{GroupJoin}} actions when an item requests to join a bin. Matrix $\Lambda \!\in\! \left\{-1,0,1\right\}^{m \times n}$ stores packing feasibility results: its element $\lambda_{ki},\ k\in B,\ i \in N,$ equals 0 if the packing of $i$ into $k$ is infeasible, 1 if it is feasible, and $-1$ if the result is unknown. Every time an item is added to a bin $k \in B$, all the elements of the row associated with $k$ are reset to the default unknown state; i.e., to $-1$. Penalty computation results are stored in matrix $A \in \mathbb{R}_{\geq-1}^{m \times n}$, whose element $\alpha_{ki},\ k\in B,\ i \in N,$ equals the total penalty generated by attaching $i$ to $k$.
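The two caches and their reset rules can be sketched as follows (a hypothetical Python illustration; the matrices $\Lambda$ and $A$ are represented as nested lists, and the class and method names are invented for this sketch):

```python
UNKNOWN = -1  # result not yet computed

class NegotiationCache:
    """Sketch of ABH's caches: Lambda (packing feasibility, values
    in {-1, 0, 1}) and A (attachment penalties, -1 when unknown)."""

    def __init__(self, m, n):
        self.feasible = [[UNKNOWN] * n for _ in range(m)]         # Lambda
        self.penalty = [[float(UNKNOWN)] * n for _ in range(m)]   # A

    def record(self, k, i, is_feasible, wet):
        """Memoise the outcome of PACK and the WET penalty for (k, i)."""
        self.feasible[k][i] = 1 if is_feasible else 0
        self.penalty[k][i] = wet

    def on_item_added(self, k):
        # bin k changed, so only row k of Lambda goes stale ...
        self.feasible[k] = [UNKNOWN] * len(self.feasible[k])
        # ... but any assignment may shift completion times,
        # so every entry of A goes stale
        for row in self.penalty:
            row[:] = [float(UNKNOWN)] * len(row)
```

The asymmetry of the two reset rules mirrors the description above: packing feasibility depends only on the contents of one bin, whereas the penalties depend on the whole schedule.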
Unlike the feasibility checks, every entry of $A$ is reset to the unknown state $-1$ whenever any bin adds a new item, as the addition may alter the total scheduling penalty. \section{Computational Results}\label{sec:Results} {CPH} and {ABH} are implemented in C\#, with the CP models solved via IBM's ILOG Optimisation Studio CP solver, version 12.6.2. They are run on a personal computer with 4GB of RAM and a 3.06GHz dual-core processor. Their performance is compared on instances that are randomly generated as follows: \begin{itemize}[noitemsep, nosep, label=$\bullet$, leftmargin=*] \item $n=$ 20, 40, 60, 80, and 100; \item $\epsilon_i$ and $\tau_i$ drawn from the uniform $[1,5]$ distribution; \item the processing time function of a non-empty bin $k$ given by $f\left(N_k\right) = t^l + t^h\cdot\left|N_k\right| + t^c \cdot \textstyle\sum_{i\in N_k}\left(w_i\!+\!h_i\right)$, where the bin loading time $t^l=100$, the item handling time $t^h=30$, and the cutting time $t^c=0.02$; \item items' due dates distributed according to either a normal distribution with mean $\lambda$ and standard deviation $0.1\lambda,$ or a uniform $[0, 2\lambda]$ distribution, where $\lambda=\frac{t^l \sum_{i \in N} h_i w_i}{2 \overbar{W} \overbar{H}}$; and \item instance classes as given in Table \ref{tab:type}; they are those of the best-known benchmark set for two-dimensional bin packing \cite{Lodi2002379}.
Classes 7-10 involve four types of items whose dimensions $(w_i,h_i)$ follow uniform distributions over the following respective ranges: {\small \begin{itemize}[noitemsep, nosep, label=$-$, leftmargin=*] \item type 1: $\left(\left[\frac{2}{3}\overbar{W},\!\overbar{W}\right]\!,\!\left[1,\!\frac{1}{2}\overbar{H}\right]\right)$; \item type 2: $\left(\left[1,\!\frac{1}{2}\overbar{W}\right]\!,\!\left[\frac{2}{3}\overbar{H},\!\overbar{H}\right]\right)$; \item type 3: $\left(\left[\frac{1}{2}\overbar{W},\!\overbar{W}\right]\!,\!\left[\frac{1}{2}\overbar{H},\!\overbar{H}\right]\right)$; and \item type 4: $\left(\left[1,\!\frac{1}{2}\overbar{W}\right]\!,\!\left[1,\!\frac{1}{2}\overbar{H}\right]\right)$. \end{itemize} } \end{itemize} The instances are grouped according to their bin to item size ratios, which reflect the average number of items that can be packed into a bin: a low bin density set $\mathcal{L}$ with relatively large items, and a high bin density set $\mathcal{S}$ with small items. As in real-life furniture manufacturing, a bin's processing time accounts for its size and that of its items. For each class, problem size, and type of due date distribution, 10 instances are generated. The resulting 1000 problems will serve as benchmark instances for future research\footnote{The full set of test instances is available at http://cs.adelaide.edu.au/$\sim$optlog/}.
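For concreteness, the generation scheme can be sketched as below (a minimal Python illustration with hypothetical names, covering Classes 1-6 only; Classes 7-10, which mix the four item types, are omitted for brevity):

```python
import random

# (bin side W = H, maximum item side) for Classes 1-6, per Table 2
CLASS_SIDES = {1: (10, 10), 2: (30, 10), 3: (40, 35),
               4: (100, 35), 5: (100, 100), 6: (300, 100)}
T_LOAD = 100  # bin loading time t^l

def generate_instance(n, cls, due="normal", seed=0):
    """Generate items (w_i, h_i), penalties (eps_i, tau_i), and due dates d_i."""
    rng = random.Random(seed)
    bin_side, max_side = CLASS_SIDES[cls]
    items = [(rng.randint(1, max_side), rng.randint(1, max_side))
             for _ in range(n)]
    eps = [rng.uniform(1, 5) for _ in range(n)]  # earliness penalties
    tau = [rng.uniform(1, 5) for _ in range(n)]  # tardiness penalties
    # lambda = t^l * sum(w_i * h_i) / (2 * W * H): the due-date scale
    lam = T_LOAD * sum(w * h for w, h in items) / (2 * bin_side * bin_side)
    if due == "normal":
        dates = [rng.gauss(lam, 0.1 * lam) for _ in range(n)]
    else:  # uniform[0, 2*lambda]
        dates = [rng.uniform(0, 2 * lam) for _ in range(n)]
    return items, eps, tau, dates
```

The two due-date distributions share the same mean $\lambda$, so instances differ in how strongly the due dates cluster rather than in their overall scale.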
\begin{table}[tb] \centering \caption{Generation of the widths and heights of items}\label{tab:type} {\scriptsize \begin{tabularx}{\columnwidth} {lXlp{5.5cm}} \hline Density&Class&$\overbar{W}=\overbar{H}$& $\left(w_i,h_i\right)$\\ \hline $\mathcal{L}$&1&10&uniform$\left[1,10\right]$\\ $\mathcal{S}$&2&30&uniform$\left[1,10\right]$\\ $\mathcal{L}$&3&40&uniform$\left[1,35\right]$\\ $\mathcal{S}$&4&100&uniform$\left[1,35\right]$\\ $\mathcal{L}$&5&100&uniform$\left[1,100\right]$\\ $\mathcal{S}$&6&300&uniform$\left[1,100\right]$\\ $\mathcal{L}$&7&100&type 1 with probability 70\%; type 2, 3, 4 with probability 10\% each\\ $\mathcal{L}$&8&100&type 2 with probability 70\%; type 1, 3, 4 with probability 10\% each\\ $\mathcal{L}$&9&100&type 3 with probability 70\%; type 1, 2, 4 with probability 10\% each\\ $\mathcal{S}$&10&100&type 4 with probability 70\%; type 1, 2, 3 with probability 10\% each\\ \hline \end{tabularx} } \end{table} {ABH} is run with $r\!=\!40$ and $\rho\!=\!3$; both values were set as a result of extensive experiments. $r$ is not very influential but is instance-dependent; both very large and very small values should be avoided. When $r$ is too large, the peak function accounts for too many due dates; consequently, the selection of bottlenecks becomes problematic, as the clustering algorithm tends to detect a single cluster rather than distinct bottlenecks. Similarly, when $r$ is too small, the clustering algorithm identifies too many bottlenecks; consequently, finding the bottleneck with the largest impact is hard. The value of $\rho$, on the other hand, is not critical; it merely speeds up the {\texttt{GroupFormation}} computations. {ABH} allocates a 0.5 s runtime to the constraint program {\texttt{PACK}} that checks the feasibility of a packing. All statistical inferences are valid at a 5\% significance level.
\begin{figure} \caption{Heat map of the performance ratio}\label{fig:results} \end{figure} Figure \ref{fig:results} displays the heat map of the performance ratio $\frac{z_H}{z_{best}},\ H \!\in\! \mathcal{H}\!=\!\{${CPH}$_{10},$ {CPH}$_{30},$ {ABH}$\},$ where {CPH}$_{10}$ and {CPH}$_{30}$ denote {CPH} when allocated 10 and 30 minutes of runtime respectively, $z_{best}=\min_{H \in \mathcal{H}}\{z_H\},$ and a dark blue colour signals a better objective function value. It suggests that allocating a longer runtime to {CPH} enhances its mean performance ratio, in particular when $n>20$; this enhancement occurs in about 80\% of instances. The percent of times {CPH}$_{10}$ equals {CPH}$_{30}$ decreases as $n$ increases, as illustrated by Figure \ref{fig:barchart}. This is further supported by a paired statistical test and by Figure \ref{fig:Figure2}.a-b, which displays the mean performance ratios as a function of size and class. Because of the large size of the search space, increasing the runtime of {CPH} beyond 30 minutes is unlikely to yield further improvement on the larger instances. \begin{figure} \caption{Percent of times {CPH}$_{10}$ equals {CPH}$_{30}$ as a function of $n$}\label{fig:barchart} \end{figure} {ABH} outperforms {CPH} for instances with more than 60 items, whereas {CPH} performs better than {ABH} when the number of items is 20 or 40. For $n=60$, {ABH} outperforms {CPH} on set $\mathcal{L}$ while being outperformed by {CPH} on set $\mathcal{S}.$ {CPH} investigates a substantially higher portion of the solution space when the search space is small than when it is large. As $n$ increases, {CPH} struggles to reduce the search space (and thus to converge to low-penalty solutions), whereas {ABH} progressively reaches such solutions for the larger instances thanks to its general rules of thumb, which aim at minimising the incremental weighted earliness-tardiness as {ABH} assigns items to bins.
Analysis of variance tests further show that the mean performance ratio is not sensitive to the distribution of the due dates but is sensitive to both the class and the problem size. For single machine scheduling, problems with normally distributed due dates are harder to solve; for the problem at hand, however, clusters of items with very close due dates can be scheduled in the same bin. On the other hand, processing items with uniformly distributed due dates in a single batch increases the objective function value and constitutes an additional difficulty. This is reflected in slightly lower solution quality for {CPH} and {ABH} and slightly larger runtimes. A hypothesis test infers that there is not enough statistical evidence to claim that the mean runtime of {ABH} is larger when the due dates are normally distributed than when they are uniformly generated. A one-way analysis of variance further confirms that the runtime of {ABH} is not sensitive to the distribution of due dates. That is, processing the items in batches desensitises {ABH} to the clustering of the due dates. The mean runtime of {ABH} is, on the other hand, dependent on the problem size and class, as illustrated by Figure \ref{fig:Figure2}.c-d, which shows the interaction of size, class, and distribution of due dates on the mean runtime of {ABH}. The maximal mean runtime of {ABH}, 203 s, is far less than the 10 minutes allocated to {CPH}$_{10}$. \begin{figure*} \caption{Mean performance ratio as a function of size (a) and class (b); mean runtime (s) of {ABH} as a function of size (c) and class (d)}\label{fig:Figure2} \end{figure*} The larger the bin to item size ratio, the longer the runtime of {ABH}. This is particularly apparent for larger instances of classes 2, 4, 6 and 10, which all have high bin to item size ratios.
As the number of items per bin increases, the number of potential arrangements of items increases too, making it harder to determine whether the packing of a given item set into the bin is feasible. Unfortunately, increasing the runtime threshold of the constraint program {\texttt{PACK}} did not enhance the performance of {ABH} for those classes of instances; the feasibility problem remains too hard to solve when solutions lie ``on the edge of feasibility''. \begin{figure} \caption{95\% confidence intervals for the mean runtime (s) of {ABH} as a function of size for sets $\mathcal{L}$ and $\mathcal{S}$}\label{fig:RTABsets} \end{figure} Figure \ref{fig:RTABsets}, which displays the 95\% confidence intervals of the mean runtime (s) of {ABH} as a function of size for sets $\mathcal{L}$ and $\mathcal{S},$ provides further computational evidence that {ABH} runs faster on instances with relatively large items. {ABH} also performs consistently better for instances with low bin to item size ratios than for instances with high ratios. This is most likely due to the ``ripple effect'', which arises when low-penalty items force items with higher penalties away from their most proximal bins. While the search for ideal processing windows favours the assignment of items to their most proximal bins, the asymmetric nature of the per unit penalties $\epsilon$ and $\tau$ does not guarantee low earliness-tardiness to all items assigned to a bin. As filled bins near their full capacity, the greedy {\texttt{GroupJoin}} and {\texttt{GroupFormation}} actions may result in locally optimal scenarios where small, low-penalty items prohibit large, high-penalty items from being packed into their proximal bins, thus forcing them into bins that are far from their due dates. This occurs more often in instances with high bin to item size ratios because their bins can pack more items (which are not necessarily homogeneous).
\section{Conclusions}\label{sec:Conclusion} This paper introduces a pertinent industrial problem that combines two very hard combinatorial optimisation problems: two-dimensional guillotine bin packing / cutting, and single machine weighted earliness-tardiness batch scheduling. It defines the problem and models it using two different approaches: one based on constraint programming and one based on agent-based modelling, where items and bins act as cooperating, negotiating agents. The computational results on randomly generated instances, which will serve as benchmark sets for future research, show that the constraint programming approach is more promising when the problem size is small, while the agent-based model gives better results for larger instances thanks to the common-sense rules that govern the negotiations of the agents. The first heuristic can be enhanced with additional selection / dominance rules that prune large parts of the search space, while the second can be improved with a neighbourhood search. In addition to improving these two heuristics, future research can focus on applying and adapting metaheuristic approaches, such as genetic algorithms, to the specificities of this problem. In fact, it would be interesting to design or adapt a high-performing evolutionary technique that can cope with the large number of infeasible solutions it would encounter during its search. Moreover, the problem can be extended to more complex manufacturing setups such as flow shops and job shops, to cutting problems with more complex shapes and constraints as in apparel manufacturing, to three-dimensional shapes as in transportation, or to the variable-sized, variable-cost bin packing problem. \section*{Acknowledgments} This work has been supported by the ARC Discovery Project DP130104395. \end{document}